A basic example of interacting with Ollama and the Llama 3 model from a Node Express app. Very easy to get running by following the steps below.
- Run the services defined in the `docker-compose.yml` file (a sketch of a possible compose file follows this list):

  ```sh
  docker compose up -d
  ```
- Pull the Llama 3 model inside the Ollama container:

  ```sh
  docker exec -it ollama-example ollama pull llama3
  ```
- Once the model has been pulled, you can start using the `POST http://127.0.0.1:3000/chat` endpoint locally (an example request and a sketch of the handler follow below).
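
For reference, here is a minimal sketch of what the `docker-compose.yml` might look like. The `ollama-example` container name and the app's port 3000 come from the steps above; the service names, image, volume, and build setup are assumptions, so check the actual file in this repo:

```yaml
# Minimal sketch -- the real docker-compose.yml in this repo may differ.
services:
  ollama:
    image: ollama/ollama          # official Ollama image
    container_name: ollama-example
    ports:
      - "11434:11434"             # Ollama's default API port
    volumes:
      - ollama-data:/root/.ollama # persist pulled models across restarts

  app:
    build: .                      # hypothetical: the Node Express app
    ports:
      - "3000:3000"               # exposes the /chat endpoint
    depends_on:
      - ollama

volumes:
  ollama-data:
```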
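
To confirm the pull succeeded, you can list the models available inside the container with Ollama's standard `list` command:

```sh
docker exec -it ollama-example ollama list
```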
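
Here is an example request against the endpoint. The shape of the JSON body is an assumption (a single `message` field), so adjust it to whatever the app actually expects:

```sh
curl -X POST http://127.0.0.1:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Why is the sky blue?"}'
```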
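
And for context, a minimal sketch of how the Express `/chat` endpoint could forward a prompt to Ollama. It assumes Node 18+ (for the global `fetch`), that Ollama is reachable at `http://ollama:11434` from inside the compose network, and a `message` request field; the repo's actual implementation may differ:

```js
// server.js -- a minimal sketch, not necessarily this repo's implementation.
const express = require('express');

const app = express();
app.use(express.json());

app.post('/chat', async (req, res) => {
  try {
    // Forward the prompt to Ollama's /api/generate endpoint.
    // "ollama" is the hypothetical compose service name from the sketch above.
    const response = await fetch('http://ollama:11434/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'llama3',
        prompt: req.body.message, // assumed request field
        stream: false,            // return one JSON object instead of a stream
      }),
    });
    const data = await response.json();
    res.json({ reply: data.response });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, () => console.log('Listening on http://127.0.0.1:3000'));
```

Setting `stream: false` makes Ollama return the full completion in a single JSON object (its `response` field), which keeps the example simple; a production app might stream tokens back instead.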