This repository contains the backend for an AI-powered English-speaking assistant. The backend is built with Node.js and integrates an open-source LLM (Large Language Model) for real-time speech evaluation, pronunciation correction, and fluency feedback.
- Node.js - Backend runtime
- Express.js - Server framework
- Ollama (LLM API) - AI model for speech and text analysis
- Docker - Containerized deployment
Ensure you have the following installed:
- Node.js (v16+ recommended) → Download Node.js
- Docker → Download Docker
- Ollama (LLM API) running on `localhost:11434`
```bash
git clone https://github.com/vinodnextcoder/Speak-english-ai-agent.git
cd Speak-english-ai-agent
npm install
```

Create a `.env` file in the root directory and add:
```
PORT=5000
OLLAMA_API_URL=http://localhost:11434/api/generate
```

Then start the server:

```bash
npm start
```

The server should now be running at http://localhost:5000.
This repository provides an easy-to-use Docker setup for running Mistral 7B using Ollama.
✔️ Run Mistral 7B locally with a single command.
✔️ Pull Ollama and the model directly from the registry.
✔️ Lightweight & efficient setup.
If you haven't installed Docker, get it here:
🔗 Download Docker
Verify installation:
```bash
docker --version
```

Pull the Ollama image and start a container:

```bash
docker pull ollama/ollama
docker run -d --name ollama-container -p 11434:11434 ollama/ollama
```

- `-d` → Runs the container in the background.
- `--name ollama-container` → Assigns a name.
- `-p 11434:11434` → Exposes the Ollama API.
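Because the port is mapped to the host, the API can be reached from any local process. A quick reachability check from Node (assuming Node 18+ with global `fetch`, and that Ollama's root endpoint answers with a short status string):

```javascript
// Probes the exposed Ollama port and returns a human-readable status.
// Never throws: connection errors are folded into the returned string.
async function checkOllama(baseUrl = 'http://localhost:11434') {
  try {
    const res = await fetch(baseUrl);
    return res.ok ? await res.text() : `HTTP ${res.status}`;
  } catch (err) {
    return `Unreachable: ${err.message}`;
  }
}

// Example: checkOllama().then(console.log);
```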
Check if the container is running:
```bash
docker ps
```

Pull the Mistral model inside the container:

```bash
docker exec -it ollama-container ollama pull mistral
```

- Endpoint: `POST /api/analyze`
- Description: Processes user speech or text and returns AI feedback.
- Request Body:

```json
{ "text": "I goes to school." }
```

- Response:

```json
{ "correctedText": "I go to school.", "feedback": "Verb conjugation corrected." }
```
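One way the endpoint could produce this response shape is to prompt the model for JSON and parse its reply. The sketch below is an assumption about the implementation, not the repository's actual code: the prompt wording, the `mistral` model name, and both helper functions are hypothetical (Node 18+ global `fetch` assumed).

```javascript
// Hypothetical prompt asking the model to answer in the documented JSON shape.
function buildAnalyzePrompt(text) {
  return 'Correct the grammar of the sentence below and reply only with JSON ' +
    'of the form {"correctedText": "...", "feedback": "..."}.\n' +
    `Sentence: ${text}`;
}

// Extracts the first JSON object from the model's raw reply,
// tolerating any extra prose around it.
function parseAnalyzeReply(raw) {
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) throw new Error('No JSON object found in model reply');
  return JSON.parse(match[0]);
}

// End-to-end sketch: forward the user's text to Ollama and return
// the parsed feedback object.
async function analyze(text) {
  const url = process.env.OLLAMA_API_URL || 'http://localhost:11434/api/generate';
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'mistral',
      prompt: buildAnalyzePrompt(text),
      stream: false, // single JSON response instead of a chunk stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = await res.json();
  return parseAnalyzeReply(data.response);
}
```

LLM replies often wrap JSON in extra prose, which is why `parseAnalyzeReply` scans for the first object instead of parsing the raw reply directly.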
We welcome contributions! Feel free to open issues and submit pull requests.
MIT License
This backend is designed to be lightweight, efficient, and scalable for real-time AI-powered language learning. 🚀