jbilcke-hf/ai-comic-factory

How to run this on Windows?


Is there a way?

No, this is still in development. The creator is working on a way for you to run it on Hugging Face or possibly locally. But even then you would need to get your Python environment set up.

I hope the creator makes it possible to run locally in the future :) This is incredibly fun! :D

Hello, indeed to summarize the current situation:

  • the AI Comic Factory isn't a single all-in-one Gradio app (like the other HF Spaces): it's a production web app that uses multiple services to run
  • that's why it is a bit difficult (but not impossible) for people to clone it
  • same for the secret variables: we can't just give away the "keys to the cloud" (behind it there are expensive servers for Llama 70B, a dozen SDXL servers, etc., which cost money)
  • I fully get that people want to be able to duplicate it (that's natural, I would like that too: to try it with OpenAI, Replicate, etc. for various experiments)

So what I propose:

Hi,

I've started making modifications to the code to support alternative backend engines:

  • LLM: either HF Inference API or HF Inference Endpoint
  • Rendering: either my custom DIY server or Replicate

I plan to add more options in the future (such as a locally running llama.cpp implementation and a locally running SDXL).
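For reference, here is a rough sketch of what that choice could look like in the .env file. Only AUTH_HF_API_TOKEN, RENDERING_ENGINE, and RENDERING_HF_RENDERING_INFERENCE_API_MODEL are confirmed later in this thread; the LLM variable name and the REPLICATE value are assumptions, so check the README and .env for the exact names and accepted values:

# LLM backend: HF Inference API or a dedicated Inference Endpoint
# (LLM_ENGINE is an assumed variable name; see the repo's .env for the real one)
LLM_ENGINE="INFERENCE_API"
AUTH_HF_API_TOKEN="<YOUR HF TOKEN>"

# Rendering backend: the HF Inference API, the custom rendering server, or Replicate
RENDERING_ENGINE="INFERENCE_API"
RENDERING_HF_RENDERING_INFERENCE_API_MODEL="stabilityai/stable-diffusion-xl-base-1.0"
# RENDERING_ENGINE="REPLICATE"   # alternative backend (value assumed)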

Will this ever be able to run with SD1.5?

Hello everyone, I have updated the README.md and .env files to add more options:

https://twitter.com/flngr/status/1706340053041975480

Will this ever be able to run with SD1.5?

Well, in theory it should be possible if you set:

AUTH_HF_API_TOKEN="<YOUR TOKEN>"
RENDERING_ENGINE="INFERENCE_API"
RENDERING_HF_RENDERING_INFERENCE_API_MODEL="runwayml/stable-diffusion-v1-5"

However, in practice my code is designed to generate panels with sizes such as 1024x1024, 1024x768, 512x1024, etc.,
so I'm afraid you will have to change the code to account for those image sizes.

On the other hand, note that today PRO Hugging Face users can also use:

RENDERING_HF_RENDERING_INFERENCE_API_MODEL="stabilityai/stable-diffusion-xl-base-1.0"
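Putting the pieces together, a PRO user's .env would presumably look like this (simply combining the variables already shown above):

AUTH_HF_API_TOKEN="<YOUR TOKEN>"
RENDERING_ENGINE="INFERENCE_API"
RENDERING_HF_RENDERING_INFERENCE_API_MODEL="stabilityai/stable-diffusion-xl-base-1.0"

Since SDXL is trained at 1024x1024, the panel sizes mentioned above should not require code changes in this case.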


Thank you for the reply. I have another question though: will LoRAs ever be usable? I have many character LoRAs that I would love to use. And thank you for creating this project at no cost as well. This looks like something one could charge for, and in my opinion it is worth a subscription, especially if it becomes more versatile. I've always wanted to make my own comic but I lack the drawing skills; then AI came. But creating comic panels was still very hard with 1.5...

then you created this. =)

Sorry for the late reply @alarmgoose, it is now possible to use a LoRA (no need to fork the app; this can even be done directly in the settings panel, via the "improve quality" button).

OK, can you tell me how to run this comic AI on a local computer? And if possible, please show me the steps; I'm not great with coding.