Colab notebook for InvokeAI 3.x.x
How-to: https://www.pogs.cafe/invokeai-colab
Works with the SDXL base model and refiner on a GPU runtime. Other models can be added as long as they are in diffusers format. The free tier offers little disk space, so the notebook stores the base model on Google Drive. A 16-bit (fp16) model works without a connected Google Drive account.
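A quick way to tell whether a local model folder is already in diffusers format is to check for its model_index.json file, which every diffusers pipeline directory contains alongside subfolders like unet/ and vae/. This is a minimal sketch; the helper name is my own, not part of the notebook:

```python
from pathlib import Path

def looks_like_diffusers_model(model_dir: str) -> bool:
    # A diffusers-format pipeline is a directory containing
    # model_index.json (plus component subfolders such as
    # unet/, vae/, text_encoder/, scheduler/).
    return (Path(model_dir) / "model_index.json").is_file()
```

A single .safetensors checkpoint will fail this check and would need conversion first, which is exactly the step that exhausts RAM on the free tier (see below).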
On the free tier, Colab runs out of RAM when converting SDXL .safetensors checkpoints to diffusers format and when applying LoRAs.