This project sets up Ollama with HTTPS support using Docker and Nginx.

Prerequisites:
- Docker
- Docker Compose
- OpenSSL (version 1.1.1 or higher)
- curl
- OrbStack (optional, for macOS)
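You can confirm the core tools are present with:

```bash
docker --version
docker-compose --version
openssl version              # should report 1.1.1 or newer
curl --version | head -n 1
```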
The OrbStack method described first requires:

- macOS
- OrbStack installed on your system
1. **Install OrbStack.** Ensure OrbStack is installed on your Mac; if not, download and install it from the official OrbStack website.
2. **Run the Ollama container.** Open a terminal and execute:

   ```bash
   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
   ```

   This runs the Ollama container in detached mode and maps the required volume and port. You can confirm it is up as shown below.
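   A quick check over plain HTTP (HTTPS comes in via OrbStack below):

   ```bash
   # The container should be listed with port 11434 published
   docker ps --filter name=ollama

   # Ollama's root endpoint replies with "Ollama is running"
   curl http://localhost:11434
   ```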
3. **Pull and run a model.** Replace `<modelname>` with your desired model (e.g., `llama2`, `codellama`):

   ```bash
   docker exec -it ollama ollama run <modelname>
   ```

   This pulls the specified model if needed and runs it within the Ollama container.
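   To see which models are stored locally afterwards:

   ```bash
   # Lists every model available inside the container
   docker exec -it ollama ollama list
   ```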
4. **Test the setup.** Use the following curl command to test the Ollama API:

   ```bash
   curl https://ollama.orb.local/v1/chat/completions \
     -H "Content-Type: application/json" \
     -d '{
       "model": "your_model_name",
       "messages": [
         { "role": "system", "content": "You are a helpful assistant." },
         { "role": "user", "content": "Hello!" }
       ]
     }'
   ```
   Replace `your_model_name` with the model you pulled in step 3.
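   The endpoint is OpenAI-compatible, so the assistant's reply lives at `.choices[0].message.content`; with `jq` installed you can extract it directly:

   ```bash
   # Print only the model's reply text (jq required)
   curl -s https://ollama.orb.local/v1/chat/completions \
     -H "Content-Type: application/json" \
     -d '{"model": "your_model_name", "messages": [{"role": "user", "content": "Hello!"}]}' \
     | jq -r '.choices[0].message.content'
   ```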
Notes:

- OrbStack automatically manages the HTTPS connection, so you can use `https://ollama.orb.local` without additional setup.
- Ensure you have sufficient disk space for the Ollama images and models.
- The Ollama API will be accessible at `https://ollama.orb.local:11434`.
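A quick connectivity check for the OrbStack endpoint (try whichever of the two URL forms above matches your setup):

```bash
# Either form should print "Ollama is running" when the HTTPS mapping is active
curl -s https://ollama.orb.local
curl -s https://ollama.orb.local:11434
```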
This setup provides a straightforward way to run Ollama with HTTPS support on macOS using OrbStack, simplifying the process compared to traditional Docker setups.
For the Docker + Nginx method with a self-signed certificate:

1. **Clone this repository:**

   ```bash
   git clone https://github.com/ChenYCL/docker-ollama-with-https.git
   cd docker-ollama-with-https
   ```

   Or download the archive directly and extract it.

2. **Run the setup script:**

   ```bash
   chmod +x setup_ollama_https.sh
   ./setup_ollama_https.sh
   ```
3. When prompted, enter the Ollama model names you want to use (comma-separated, e.g., `qwen:4b,llama2:7b`).
4. The script will create the necessary files, start the Docker containers, and pull the specified models. You can verify this as shown below.
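   Once the script finishes, you can check that everything came up (the `ollama_https_setup` directory is created by the script; the service names are an assumption about its compose file):

   ```bash
   cd ollama_https_setup
   docker-compose ps                      # both services should be "Up"
   docker-compose logs --tail=20 nginx    # assumed service name; look for certificate errors
   ```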
5. To trust the self-signed certificate on your system, run:

   ```bash
   chmod +x install_cert.sh
   sudo ./install_cert.sh
   ```
6. Access Ollama at `https://localhost:11434`.
You can test the setup using curl:

```bash
curl https://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your_model_name",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```
Replace `your_model_name` with one of the models you specified during setup.
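If the request fails because the certificate is not trusted yet (e.g., before running `install_cert.sh`), you can bypass verification for a quick local check only; see the notes below on why `-k` is unsuitable beyond development:

```bash
# -k skips certificate verification; development use only
curl -k https://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "your_model_name", "messages": [{"role": "user", "content": "Hello!"}]}'
```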
To stop and remove the containers:

```bash
cd ollama_https_setup && docker-compose down
```
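If you also want to discard the downloaded models, `docker-compose down -v` additionally removes the named volumes declared in the compose file (assuming the setup stores models in such a volume):

```bash
# Containers plus volumes; pulled models are deleted with the volume
docker-compose down -v
```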
To remove all generated files:

```bash
cd .. && rm -rf ollama_https_setup
```
- This setup uses a self-signed certificate, suitable for development and testing only.
- You may need to restart your browser or system after running `install_cert.sh`.
- Some applications may require additional steps to trust the certificate.
- The `-k` option in curl bypasses certificate verification (not recommended for production).
- For production, always use valid SSL certificates and proper verification.
- Regularly update your Ollama images and models.
- This setup is designed for local use. Additional security measures are needed for internet exposure.
- The default port is 11434. Modify `nginx.conf` and `docker-compose.yml` to change it.
- If certificate trust issues occur, ensure you've run `install_cert.sh` and restarted your browser.
- For Docker-related issues, check that Docker and Docker Compose are properly installed and running.
- If models fail to pull, check your internet connection and ensure sufficient disk space.
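A few quick checks that cover these common failure modes:

```bash
docker info > /dev/null && echo "Docker daemon reachable"   # Docker running?
docker-compose version                                      # Compose installed?
df -h .                                                     # enough disk space for models?
# Inspect the certificate Nginx is actually serving
openssl s_client -connect localhost:11434 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```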
Contributions are welcome! Please submit issues and pull requests on the GitHub repository.
This project is licensed under the MIT License.