clipcrop

Implementations of zero-shot capabilities with OpenAI's CLIP and computer vision models

  • Extract sections of an image that match a text prompt, using OpenAI's CLIP and YOLOS-small implemented on Hugging Face Transformers
  • Segment images using CLIP and DETR segmentation models

Installation

pip install clipcrop
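
A quick way to confirm the install is that the import used in all the examples below succeeds:

# should import without error after installation
from clipcrop import clipcrop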

Clip Crop

Extract sections of an image that match a text prompt, using OpenAI's CLIP and YOLOS-small implemented on Hugging Face Transformers.

Extraction

from clipcrop import clipcrop

cc = clipcrop.ClipCrop("/content/sample.jpg")

# load the detection feature extractor/model and the CLIP model/processor
DFE, DM, CLIPM, CLIPP = cc.load_models()

# rank detected crops against the text query and return the top `num` matches
result = cc.extract_image(DFE, DM, CLIPM, CLIPP, "text content", num=2)
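
The return value can then be saved for inspection; a minimal sketch, assuming each entry in result is a PIL image (the exact return structure is not documented here):

# assumption: result is a list of PIL images; save each crop to disk
for i, crop in enumerate(result):
    crop.save(f"crop_{i}.png")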

Captcha

Solve captcha images using CLIP and object detection models. Ensure Tesseract is installed and executable from your PATH.

from clipcrop import clipcrop

cc = clipcrop.ClipCrop(image_path)

DFE, DM, CLIPM, CLIPP = cc.load_models()

# read the captcha instruction via Tesseract OCR, then select matching tiles with CLIP
result = cc.auto_captcha(CLIPM, CLIPP, 4)
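
If the Tesseract binary is not on your PATH, pytesseract (the common Python wrapper for Tesseract, assumed here) can be pointed at it explicitly:

import pytesseract

# assumption: clipcrop uses pytesseract for OCR; the path below is illustrative
pytesseract.pytesseract.tesseract_cmd = "/usr/bin/tesseract"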

Clip Segmentation

Segment an image with the DETR panoptic segmentation pipeline, then use CLIP to pick the segment most probable for your query.

Extraction

from clipcrop import clipcrop

clipseg = clipcrop.ClipSeg("/content/input.png", "black colored car")

# load the panoptic segmentation pipeline and the CLIP model/processor
segmentor, clipmodel, clipprocessor = clipseg.load_models()

# segment the image and return the segment that best matches the query
result = clipseg.segment_image(segmentor, clipmodel, clipprocessor)
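
Under the hood, the idea is to score each candidate segment against the text with CLIP and keep the best match. A minimal standalone sketch of that ranking step using Hugging Face's CLIP directly (the file names and checkpoint choice are illustrative, not clipcrop internals):

# score candidate segment crops against the query text and keep the best one
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

candidates = [Image.open("seg0.png"), Image.open("seg1.png")]  # illustrative crops
inputs = processor(text=["black colored car"], images=candidates,
                   return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # one similarity score per image
best = candidates[logits.argmax().item()]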

Remove Background

from clipcrop import clipcrop

clipseg = clipcrop.ClipSeg("/content/input.png", "black colored car")

# segment the queried object and strip everything else from the image
result = clipseg.remove_background()
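
To persist the output, a minimal sketch assuming remove_background returns a PIL image; saving as PNG preserves any transparency:

# assumption: result is a PIL image with an alpha channel
result.save("foreground.png")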
