invokeai-docker

Docker image for InvokeAI: Professional Creative AI Tools for Visual Media

Available on RunPod

This image is designed to run on RunPod. You can use my custom RunPod template to launch it.

Building the Docker image

Note

You will need to edit the docker-bake.hcl file and update REGISTRY_USER and RELEASE. You can obviously edit the other values too, but these are the most important ones.

Important

In order to cache the models, you will need at least 32GB of CPU/system memory (not VRAM) due to the large size of the models. If you have less than 32GB of system memory, you can comment out or remove the code in the Dockerfile that caches the models.

# Clone the repo and change into its directory
git clone https://github.com/ashleykleynhans/invokeai-docker.git
cd invokeai-docker

# Log in to Docker Hub
docker login

# Build the image, tag the image, and push the image to Docker Hub
docker buildx bake -f docker-bake.hcl --push

# Same as above but customize registry/user/release:
REGISTRY=ghcr.io REGISTRY_USER=myuser RELEASE=my-release docker buildx \
    bake -f docker-bake.hcl --push
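If you just want to test the build without pushing to a registry, buildx bake also accepts --load (a shorthand for outputting the image to the local Docker daemon) in place of --push:

# Build the image and load it into the local Docker daemon instead of pushing
docker buildx bake -f docker-bake.hcl --load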

Running Locally

Install the NVIDIA CUDA driver
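Before starting the container, you can confirm that the driver is installed and that Docker can pass the GPU through (the CUDA image tag below is only an example; pick one that matches your driver):

# Check that the NVIDIA driver is installed on the host
nvidia-smi

# Check that Docker can access the GPU (requires the NVIDIA Container Toolkit)
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi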

Start the Docker container

docker run -d \
  --gpus all \
  -v /workspace \
  -p 2999:2999 \
  -p 3000:3001 \
  -p 7777:7777 \
  -p 8000:8000 \
  -p 8888:8888 \
  -e JUPYTER_LAB_PASSWORD=Jup1t3R! \
  ashleykza/invokeai:latest

You can obviously substitute the image name and tag with your own.
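Once the container is up, you can check that it started correctly and follow its startup logs (the filter below assumes you kept the default image name):

# Confirm the container is running
docker ps --filter ancestor=ashleykza/invokeai:latest

# Follow the container logs (substitute the container ID from docker ps)
docker logs -f <container-id>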

Ports

Connect Port    Internal Port    Description
3000            3001             InvokeAI
7777            7777             Code Server
8000            8000             Application Manager
8888            8888             Jupyter Lab
2999            2999             RunPod File Uploader
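With the mappings above, for example, the InvokeAI web UI is reached on host port 3000 even though it listens on port 3001 inside the container. A quick way to check that the services respond once they have finished starting:

# InvokeAI web UI (host port 3000 maps to container port 3001)
curl -I http://localhost:3000

# Jupyter Lab
curl -I http://localhost:8888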

Environment Variables

Variable                Description                                             Default
JUPYTER_LAB_PASSWORD    Set a password for Jupyter Lab                          not set (no password)
DISABLE_AUTOLAUNCH      Disable the application from launching automatically    not set
DISABLE_SYNC            Disable syncing if using a RunPod network volume        not set
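These are passed to docker run with -e, for example (the value true for the disable flags is illustrative, and the port mappings are omitted here for brevity):

docker run -d \
  --gpus all \
  -e JUPYTER_LAB_PASSWORD=Jup1t3R! \
  -e DISABLE_AUTOLAUNCH=true \
  -e DISABLE_SYNC=true \
  ashleykza/invokeai:latest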

Logs

InvokeAI writes its output to a log file, so you can view the logs by tailing the file rather than killing the service.

Application    Log file
InvokeAI       /workspace/logs/invokeai.log
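For example, from inside the container, or from the host via docker exec (substitute your container ID):

# Follow the InvokeAI log
docker exec -it <container-id> tail -f /workspace/logs/invokeai.log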

Acknowledgements

A special word of thanks to Madiator2011 for advice and suggestions on improving these images, as well as all of the code for code-server, which was borrowed from his madiator-docker-runpod GitHub repository.

Community and Contributing

Pull requests and issues on GitHub are welcome. Bug fixes and new features are encouraged.

Appreciate my work?

Buy Me A Coffee