aitools_server

Stable Diffusion WebUI server forked with extra features - designed for Morten's AIStickerMachine client


AI Tools Server (made to be used with Seth's AI Tools front-end)

This is a forked version of the AUTOMATIC1111/stable-diffusion-webui project - basically the same thing with some additional API features like background removal.

Warning: I think this server requires an NVidia card; that's all I've tested it with.

This server is designed to be used with Seth's AI Tools Client <-- its GitHub page has the download and screenshots/movies

Or, don't use my front-end client and just use its API directly:

  • Here's a Python Jupyter notebook showing examples of how to use the standard AUTOMATIC1111 API (a minimal call is also sketched below)
  • Here's a Python Jupyter notebook showing how to use the extended features available in my forked server (AI background removal, AI subject masking, etc)
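
For a rough idea of what that looks like, here's a minimal sketch of calling the stock txt2img endpoint from Python. It assumes the server was started with --api and is reachable on localhost:7860; the fork-specific endpoints are covered in the second notebook.

# Minimal txt2img call against the standard AUTOMATIC1111 API (a sketch, not the full client)
import base64
import requests

payload = {"prompt": "a photo of a corgi wearing a party hat", "steps": 20}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
r.raise_for_status()

# The response contains a list of base64-encoded PNGs
for i, img_b64 in enumerate(r.json()["images"]):
    with open(f"output_{i}.png", "wb") as f:
        f.write(base64.b64decode(img_b64))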

Note: This repository was deleted and replaced with the AUTOMATIC1111/stable-diffusion-webui fork on Sept 19th, 2022; the specific missing features I need are folded into it. Previously I had written my own custom server, but that was like, too much work man

Last update Feb 8th, 2023. Recent changes:

  • (merged with latest auto1111 stuff)
  • 0.44: Fixed issues with latest automatic1111 and the AI Client, but requires AI Client 0.59+ now

Installation and Running (modified from stable-diffusion-webui docs)

Make sure the required dependencies are met and follow the instructions available for both NVidia (recommended) and AMD GPUs.

Installation on Windows

  1. Install Python 3.10.6, checking "Add Python to PATH"
  2. Install git.
  3. Download the aitools_server repository, for example by running git clone https://github.com/SethRobinson/aitools_server.git.
  4. Place any stable diffusion checkpoint such as sd-v1-5-inpainting.ckpt in the models/Stable-diffusion directory (see dependencies for where to get one)
  5. Run webui-user.bat from Windows Explorer as normal, non-administrator, user.

Installation on Linux

  1. Install the dependencies (note: requires Python 3.9+):
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
  2. To install in /home/$(whoami)/aitools_server/, run:
bash <(wget -qO- https://raw.githubusercontent.com/SethRobinson/aitools_server/master/webui.sh)

Adding a necessary file (needed for both Windows and Linux installs)

  1. Place sd-v1-5-inpainting.ckpt or another Stable Diffusion model in models/Stable-diffusion (see dependencies for where to get it).
  2. Run the server from shell with:
python launch.py --listen --port 7860 --api

(on Linux, you can instead run sh runserver.sh, an included helper script that does the same thing)
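
Once it's up, a quick way to confirm the --api flag took effect (a sketch, assuming the default localhost:7860 address) is to list the checkpoints the server found:

# Sanity check: ask the API which models were loaded from models/Stable-diffusion
import requests

r = requests.get("http://127.0.0.1:7860/sdapi/v1/sd-models", timeout=30)
r.raise_for_status()
for model in r.json():
    print(model["title"])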

Google Colab

Don't have a strong enough GPU or want to give it a quick test run without hassle? No problem, use this Colab notebook. (Works fine on the free tier)

How to update an existing install of the server to the latest version

Go to its directory (probably aitools_server) in a shell or command prompt and type:

git pull

Merging with Automatic1111 manually

If you feel bold, you can also merge it with the latest Automatic1111 server yourself. This CAN break things, so you probably shouldn't do it unless you really need a new feature and Seth hasn't merged the latest yet (not recommended unless you know how to resolve what are probably simple merge conflicts).

sh merge_with_automatic1111.sh

Running Seth's AI Tools front-end

Verify the server works by visiting it with a browser. You should be able to generate and paint images via the default Gradio web interface. Now you're ready to use the native client.

Note: The first time you use the server, it may appear that nothing is happening - look at the server window/shell; it's probably downloading a bunch of stuff for each new feature you use. This only happens the first time!

The client should start up. If you click "Generate", images should start being made. By default it tries to find the server at localhost on port 7860. If it's somewhere else, you need to click "Configure" and edit/add the server info. You can add/remove multiple servers on the fly while using the app (all will be utilized simultaneously by the app).

Using multiple GPUs on the same computer

You can run multiple instances of the server from the same install.

Start one instance (uh, this is how on Linux, not sure about Windows):

CUDA_VISIBLE_DEVICES=0 python launch.py --listen --port 7860 --api

Then from another shell, start another one specifying a different GPU and port:

CUDA_VISIBLE_DEVICES=1 python launch.py --listen --port 7861 --api

Then on the client, click Configure and edit in an add_server command for both servers.
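
If you're scripting against the servers yourself instead of using the client, the same idea applies: just spread requests across the two ports. A rough sketch, assuming the two instances started above and the standard txt2img endpoint (the server list and prompts here are only illustrative):

# Fan txt2img requests out across both instances so each GPU stays busy
import base64
from concurrent.futures import ThreadPoolExecutor
import requests

SERVERS = ["http://127.0.0.1:7860", "http://127.0.0.1:7861"]
PROMPTS = ["a watercolor fox", "a pixel-art castle", "a neon city at night", "a paper crane"]

def generate(server, prompt):
    r = requests.post(f"{server}/sdapi/v1/txt2img", json={"prompt": prompt, "steps": 20}, timeout=600)
    r.raise_for_status()
    return r.json()["images"][0]

with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
    futures = [pool.submit(generate, SERVERS[i % len(SERVERS)], p) for i, p in enumerate(PROMPTS)]
    for i, fut in enumerate(futures):
        with open(f"multi_gpu_{i}.png", "wb") as f:
            f.write(base64.b64decode(fut.result()))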

Credits for things specific to this fork

Credits for Automatic1111's Stable Diffusion WebUI