Uminosachi/inpaint-anything

[Feature] Support API

SpenserCai opened this issue · 17 comments

Can inpaint-anything support an API?

I'm focusing on UI-based image processing and not considering an external API now.

I may be able to try adding the API part myself.

By "API" here, I mean something that can be called through the SD WebUI API.

The process of Inpaint Anything involves several steps, including segmentation, pointing by sketch, and mask generation. If even one of these steps is missing, the whole process won't function. I'm concerned about how to implement these steps as APIs.

I think it's possible to break each step down into a separate API, for example (see the sketch after this list):

  1. Generate a segmentation map: the input is an image; the output is the segmentation data/segmentation map

  2. Generate a mask: the inputs are the image and the segmentation data/image; the output is the mask image

...
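
For illustration only, a minimal sketch of what such a split could look like as a standalone web service. Everything here is hypothetical: the FastAPI endpoints and the run_segmentation/build_mask helpers are placeholders, not anything the extension provides.

import base64
import io

from fastapi import FastAPI
from PIL import Image
from pydantic import BaseModel

app = FastAPI()

class SegmentRequest(BaseModel):
    image: str  # base64-encoded PNG

class MaskRequest(BaseModel):
    image: str               # base64-encoded PNG
    points: list[list[int]]  # selected (x, y) coordinates

def decode_image(data: str) -> Image.Image:
    return Image.open(io.BytesIO(base64.b64decode(data)))

def run_segmentation(image: Image.Image) -> dict:
    raise NotImplementedError  # placeholder for the SAM segmentation step

def build_mask(image: Image.Image, points: list[list[int]]) -> Image.Image:
    raise NotImplementedError  # placeholder for the mask-generation step

@app.post("/segment")
def segment(req: SegmentRequest):
    # Step 1: image in, segmentation data out
    return {"segments": run_segmentation(decode_image(req.image))}

@app.post("/mask")
def mask(req: MaskRequest):
    # Step 2: image plus selected points in, mask image out
    mask_image = build_mask(decode_image(req.image), req.points)
    buf = io.BytesIO()
    mask_image.save(buf, format="PNG")
    return {"mask": base64.b64encode(buf.getvalue()).decode()}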

I've moved the SAM execution and mask generation code to separate library files, making it easier for other applications to utilize them.

https://github.com/Uminosachi/inpaint-anything/blob/main/README_DEV.md
https://github.com/Uminosachi/sd-webui-inpaint-anything/blob/main/README_DEV.md
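
As a rough outline of what README_DEV covers, the flow is: import inpalib from the cloned repository, run SAM on the image, mark the target segment with a point on a sketch image, and build the mask. A compressed sketch follows; the two inpalib calls are placeholders whose real names and arguments you should take from README_DEV:

import importlib

import numpy as np
from PIL import Image, ImageDraw

# Requires the repository to be cloned into the current directory
inpalib = importlib.import_module("inpaint-anything.inpalib")

input_image = np.array(Image.open("input.png").convert("RGB"))

# Run SAM segmentation (placeholder call; see README_DEV)
sam_masks = inpalib.generate_sam_masks(input_image)

# Mark the target segment: a white point on a black canvas of the same size
sketch_image = Image.fromarray(np.zeros_like(input_image))
draw = ImageDraw.Draw(sketch_image)
draw.point((input_image.shape[1] // 2, input_image.shape[0] // 2), fill=(255, 255, 255))

# Build the mask from the SAM masks and the selected point (placeholder call)
mask_image = inpalib.create_mask_image(input_image, sam_masks, np.array(sketch_image))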

Thanks for completing this!

Will DINO be supported? If so, I'd be willing to contribute API-related code.

I'm not considering using DINO at the moment.


Thank you so much for everything you've done. By the way, I have a couple of questions:

  1. Will there be an API available for inpainting?
  2. After the masking step, is it possible to manually input coordinate points (the center points of boxes generated by SAM) and perform inpainting?
  1. Will there be an API available for inpainting?

The inpainting feature in this app uses the StableDiffusionInpaintPipeline class from the Python diffusers package (you can find the code at the link below). Therefore, I haven't provided a separate API.

https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/inpaint
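
For reference, a minimal standalone use of that pipeline looks like this (the model ID, prompt, and file paths are illustrative):

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# White areas of the mask are the regions that get inpainted
init_image = Image.open("input.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("L").resize((512, 512))

output = pipe(prompt="background scenery", image=init_image, mask_image=mask_image).images[0]
output.save("output.png")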

  1. After the masking step, is it possible to manually input coordinate points (the center points of boxes generated by SAM) and perform inpainting?

In the sample code, you can set the coordinates at the line provided below. By modifying (input_image.shape[1] // 2, input_image.shape[0] // 2), you can specify any point within the image. I haven't prepared an API that displays the multiple segment candidates from SAM and lets you select among them.

https://github.com/Uminosachi/inpaint-anything/blob/main/README_DEV.md

draw.point((input_image.shape[1] // 2, input_image.shape[0] // 2), fill=(255, 255, 255))
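
To make that programmable, here is a small hypothetical helper (make_point_sketch is not part of the repository) that wraps the line above, assuming, as in the README_DEV sample, that the sketch is a black canvas with the same shape as the input array:

import numpy as np
from PIL import Image, ImageDraw

def make_point_sketch(input_image: np.ndarray, x: int, y: int) -> Image.Image:
    # Black canvas the same size as the input; a white pixel marks the selected point
    sketch_image = Image.fromarray(np.zeros_like(input_image))
    draw = ImageDraw.Draw(sketch_image)
    draw.point((x, y), fill=(255, 255, 255))
    return sketch_image

Passing the center point of a SAM-generated box as (x, y) would then select that segment.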

Thanks for your reply~
I would like to know whether Lama Cleaner's API is used directly in the cleaner step, and whether you provide a separate API for this part?
My understanding is that the mask is fed directly into Lama Cleaner.

I would like to know whether Lama Cleaner's API is used directly in the cleaner step, and whether you provide a separate API for this part?
My understanding is that the mask is fed directly into Lama Cleaner.

Lama Cleaner can be installed as an individual package using pip, similar to diffusers.

pip install lama-cleaner

While I haven't prepared official sample code, the snippet below is based on the run_cleaner function in the iasam_app.py file. In the code below, init_image and mask_image are PIL.Image objects.

import cv2
import numpy as np
import torch
from lama_cleaner.model_manager import ModelManager
from lama_cleaner.schema import Config, HDStrategy, LDMSampler, SDSampler
from PIL import Image

# Load the LaMa model on GPU if available
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = ModelManager(name="lama", device=device)

# Convert the PIL images to NumPy arrays; the mask must be single-channel
init_image = np.array(init_image)
mask_image = np.array(mask_image.convert("L"))

config = Config(
    ldm_steps=20,
    ldm_sampler=LDMSampler.ddim,
    hd_strategy=HDStrategy.ORIGINAL,
    hd_strategy_crop_margin=32,
    hd_strategy_crop_trigger_size=512,
    hd_strategy_resize_limit=512,
    prompt="",
    sd_steps=20,
    sd_sampler=SDSampler.ddim
)

# Run inpainting; the output array is in BGR order, so convert back to an RGB PIL image
output_image = model(image=init_image, mask=mask_image, config=config)
output_image = cv2.cvtColor(output_image.astype(np.uint8), cv2.COLOR_BGR2RGB)
output_image = Image.fromarray(output_image)
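
Note that init_image and mask_image should have matching dimensions, and the white areas of the mask mark the regions to be cleaned.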

Hi, I have tried the code from README_DEV, but where can I find "inpaint-anything.inpalib"? I get:

import importlib

import numpy as np
from PIL import Image, ImageDraw

inpalib = importlib.import_module("inpaint-anything.inpalib")

ModuleNotFoundError: No module named 'inpaint-anything'

Before you proceed, please make sure you've cloned this repository to your current directory using the following command:

git clone https://github.com/Uminosachi/inpaint-anything.git
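
The working directory should then look roughly like this (your_script.py stands in for whatever file holds the sample code):

.
├── inpaint-anything/   # the cloned repository, containing the inpalib library files
└── your_script.py      # run from this directory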

I have built an API around this. I am returning the segmented image, the mask based on selected points, and a merged image of the segments and the original.
Is there anything else you would suggest I add?



ModuleNotFoundError: No module named 'inpaint-anything'

Hi, I have tried the code from README_DEV, but where can I find "inpaint-anything.inpalib"?

change "inpaint-anything.inpalib" to "inpalib"