Clone this repo, install the dependencies (all locally), and run the development server (which auto-watches the files for changes):
```bash
npm install
npm run dev
```
The development app will be running on http://localhost:3000. Development builds have the advantage of not requiring a build step, but they can be slower than production builds. Also, development builds do not enforce a timeout on edge functions.
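If port 3000 is already in use, the dev server's port can usually be overridden by forwarding a flag through npm (a minimal sketch, assuming the `dev` script runs `next dev`, which accepts `--port`):

```bash
# Forward --port through npm to the underlying dev script (assumes `next dev`)
npm run dev -- --port 3001
```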
The production build of the application is optimized for performance; create it with the `npm run build` command after installing the required dependencies.
```bash
# .. repeat the steps above up to `npm install`, then:
npm run build
npm run start -- --port 3000
```
The app will be running on the specified port, e.g. http://localhost:3000.
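If you want the production server to keep running after the terminal session closes, a process manager such as pm2 can wrap the start script (an optional sketch; pm2 is not part of this repo's tooling):

```bash
# Optional: supervise the production server with pm2 (not part of this repo)
npm install -g pm2
pm2 start npm --name "com-chat" -- start
pm2 save   # remember the process list so it can be resurrected later
```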
Want to deploy with username/password? See the Authentication guide.
For more detailed information on deploying with Docker, please refer to the Docker deployment documentation.
Build and run:
```bash
docker build -t com-chat .
docker run -d -p 3000:3000 com-chat
```
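In practice you may also want the container to restart automatically and to receive server-side configuration at run time (a sketch; `OPENAI_API_KEY` is only an illustrative variable name, check the project's environment documentation for the ones it actually reads):

```bash
# Example: auto-restart the container and pass a server-side key at run time
# (OPENAI_API_KEY is illustrative; see the project's env docs for real variable names)
docker run -d \
  --name com-chat \
  --restart unless-stopped \
  -p 3000:3000 \
  -e OPENAI_API_KEY="sk-..." \
  com-chat
```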
Please refer to the Cloudflare deployment documentation.
- Local models: Ollama, Oobabooga, LocalAI, etc.
- ElevenLabs Voice Synthesis (bring your own voice too) - Settings > Text To Speech
- Helicone LLM Observability Platform - Models > OpenAI > Advanced > API Host: 'oai.hconeai.com'
- Paste.gg Paste Sharing - Chat Menu > Share via paste.gg
- Prodia Image Generation - Settings > Image Generation > API Key & Model