A minimal, production-capable ChatGPT app starter that proves the full loop:
- local MCP server
- local widget development
- tunnel-based ChatGPT testing
- Render deployment
- production connector setup
- `backend/`: Node.js + TypeScript MCP server over HTTP.
- `frontend/`: React + Vite widget frontend with fixture harness.
- `render.yaml`: split Render Blueprint (backend web service + frontend static site).
- `scripts/dev.sh`: optional helper to run backend and frontend together.
- `scripts/validate-env.sh`: optional environment validation script.
- Node.js 20+
- npm 10+
- ngrok account and CLI
- ChatGPT account with developer mode enabled
- Render account
- Copy environment defaults.

```bash
cp .env.example backend/.env
```

- Install dependencies.

```bash
cd backend && npm install
cd ../frontend && npm install
```

Backend:

```bash
cd backend
npm run dev
```

Frontend:

```bash
cd frontend
npm run dev
```

Optional helper:

```bash
./scripts/dev.sh
```

Build checks:

```bash
cd backend && npm run build && npm run start
cd ../frontend && npm run build
```

- Start backend locally on port `3000`.
- Expose it with ngrok:

```bash
ngrok http 3000
```

- Copy the public HTTPS URL from ngrok.
- Connector URL to paste into ChatGPT:

```
https://<your-ngrok-domain>/mcp
```

This starter uses the `/mcp` path.
- In ChatGPT, enable developer mode.
- Add a custom connector with the MCP URL set to one of:
  - local tunnel: `https://<ngrok-domain>/mcp`
  - deployed backend: `https://<backend-service>.onrender.com/mcp`
- Auth behavior:
  - default: no auth required
  - optional: set `APP_AUTH_TOKEN` and send `x-app-token: <token>` in connector headers
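The optional auth check described above can be sketched as a small predicate. This is a minimal illustration, assuming an Express-style lowercase header bag; the function name is hypothetical and not taken from `backend/src`.

```typescript
type HeaderBag = Record<string, string | undefined>;

// Returns true when the request may proceed. With APP_AUTH_TOKEN unset the
// starter's default applies: no auth required. When it is set, the connector
// must send a matching x-app-token header.
function isAuthorized(headers: HeaderBag, appAuthToken?: string): boolean {
  if (!appAuthToken) return true;
  return headers["x-app-token"] === appAuthToken;
}
```

Keeping the check in a pure function like this makes it easy to unit-test before wiring it into whatever HTTP middleware the server uses.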
- Push this repo to GitHub.
- In Render, create a new Blueprint from the repo.
- Render reads `render.yaml` and provisions:
  - `starter-chatgpt-app-backend` (Node web service)
  - `starter-chatgpt-app-frontend` (static site)
- Set required secret values when prompted:
  - `APP_AUTH_TOKEN` (optional)
  - `OPENAI_API_KEY` (optional in this starter, required if you extend tools)
  - `BASE_URL` (optional)
- Deploy and wait for healthy status on `/health`.
Connector URL format:
```
https://starter-chatgpt-app-backend.onrender.com/mcp
```
Replace host with your actual backend Render URL.
If you enabled token auth, configure the `x-app-token` header in the connector.
| Variable | Local | Render | Required | Notes |
|---|---|---|---|---|
| `NODE_ENV` | yes | yes | yes | `development` locally, `production` on Render |
| `HOST` | yes | yes | yes | Use `0.0.0.0` |
| `PORT` | yes | yes | yes | Render provides `$PORT` |
| `WIDGET_BASE_URL` | yes | optional | yes* | Use local Vite URL locally |
| `WIDGET_HOST` | optional | yes | yes* | Set from frontend service via `fromService` |
| `APP_AUTH_TOKEN` | optional | optional | no | Enables simple header-based auth |
| `OPENAI_API_KEY` | optional | optional | no | Add when tools call OpenAI APIs |
| `BASE_URL` | optional | optional | no | Public backend URL if needed by custom logic |

\* Set at least one of `WIDGET_BASE_URL` or `WIDGET_HOST`.
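The "at least one of" rule from the table can be made explicit in code. This is a sketch only: the function name and the `WIDGET_BASE_URL`-first precedence are assumptions for illustration, not necessarily what `backend/src` implements.

```typescript
// Resolve the widget origin from the two variables marked yes* in the table.
// WIDGET_BASE_URL (a full URL, e.g. the local Vite dev server) wins; otherwise
// WIDGET_HOST (a bare hostname wired by Render's fromService) is used.
function resolveWidgetOrigin(env: Record<string, string | undefined>): string {
  if (env.WIDGET_BASE_URL) return env.WIDGET_BASE_URL;
  if (env.WIDGET_HOST) return `https://${env.WIDGET_HOST}`;
  throw new Error("Set at least one of WIDGET_BASE_URL or WIDGET_HOST");
}
```

Failing fast at startup like this surfaces misconfiguration before the first tool call, which is easier to debug than a widget asset 404.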
Edit `backend/src/tools/search_items.ts`:
- update tool names, descriptions, and schemas
- replace sample fixture filtering logic with your own data source
- keep `structuredContent`, `content`, and `_meta` in sync
- keep `_meta["openai/outputTemplate"]` aligned with the widget URI
Edit `frontend/src/App.tsx` and `frontend/src/components/ItemCard.tsx`:
- start in fixture mode for fast iteration
- keep rendering logic compatible with the tool's `structuredContent`
- build the frontend and verify the `frontend/dist` output
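The fixture-mode pattern can be sketched as a data-selection function the components call before rendering. The bridge shape (`toolOutput`) and the `Item` type here are assumptions; align them with your actual tool's `structuredContent`.

```typescript
interface Item { id: string; title: string; }
interface ToolOutput { items: Item[]; }

// Local fixture so the widget renders standalone, without a connector.
const fixture: ToolOutput = { items: [{ id: "item-001", title: "Sample item" }] };

// Prefer the host-injected tool output when the widget runs inside ChatGPT;
// fall back to the fixture during local iteration.
function getToolOutput(bridge?: { toolOutput?: ToolOutput }): ToolOutput {
  return bridge?.toolOutput ?? fixture;
}
```

Because both paths return the same `ToolOutput` type, rendering logic exercised against the fixture stays compatible with the live tool output.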
- `401 Unauthorized` on `/mcp`: connector is missing `x-app-token` while `APP_AUTH_TOKEN` is set.
- Widget not updating: rebuild the frontend and redeploy the static site.
- Widget asset 404: verify `WIDGET_BASE_URL` (local) or `WIDGET_HOST` (Render wiring).
- Connector adds but the tool fails: confirm the MCP path is exactly `/mcp`.
- Render health check failing: open `/health` and confirm the backend process is listening on `0.0.0.0:$PORT`.
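The last troubleshooting item can be sketched as a minimal health endpoint bound the way Render expects. This is an illustration using Node's built-in `http` module, not the starter's actual server wiring; the routing is factored into a pure function so it is testable without opening a socket.

```typescript
import http from "node:http";

// Pure routing: /health answers 200 with a small JSON body, all else 404.
function route(url: string | undefined): { status: number; body: string } {
  if (url === "/health") {
    return { status: 200, body: JSON.stringify({ status: "ok" }) };
  }
  return { status: 404, body: "" };
}

// Wiring sketch: bind HOST (0.0.0.0 on Render) and the platform-provided $PORT.
function start(): http.Server {
  const port = Number(process.env.PORT ?? 3000);
  const host = process.env.HOST ?? "0.0.0.0";
  return http
    .createServer((req, res) => {
      const { status, body } = route(req.url);
      res.writeHead(status, { "content-type": "application/json" }).end(body);
    })
    .listen(port, host);
}
```

Binding `0.0.0.0` rather than `localhost` is what lets Render's health checker reach the process from outside the container.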
Use these prompts after connector setup (local tunnel and deployed app):
- Search for sample items
- Show me details for item-001
- Open the example widget
MIT. See LICENSE.