openai-clip
There are 61 repositories under the openai-clip topic.
mazzzystar/Queryable
Run OpenAI's CLIP and Apple's MobileCLIP model on iOS to search photos.
jina-ai/finetuner
:dart: Task-oriented embedding tuning for BERT, CLIP, etc.
moein-shariatnia/OpenAI-CLIP
Simple implementation of OpenAI CLIP model in PyTorch.
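
The core of any such reimplementation is CLIP's symmetric contrastive objective over a batch of matched image-text pairs. A minimal PyTorch sketch (the batch size, embedding width, and temperature here are illustrative, not the repo's settings):

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
              temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of matched image-text pairs."""
    image_emb = F.normalize(image_emb, dim=-1)  # unit vectors so dot products are cosines
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch); diagonal = matched pairs
    targets = torch.arange(logits.size(0), device=logits.device)
    # cross-entropy in both directions: image-to-text and text-to-image
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# toy call with random embeddings standing in for real encoder outputs
loss = clip_loss(torch.randn(8, 512), torch.randn(8, 512))
```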
omriav/blended-diffusion
Official implementation for "Blended Diffusion for Text-driven Editing of Natural Images" [CVPR 2022]
afiaka87/clip-guided-diffusion
A CLI tool/Python module for generating images from text using guided diffusion and CLIP from OpenAI.
nerdyrodent/CLIP-Guided-Diffusion
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
roboflow/zero-shot-object-tracking
Object tracking implemented with the Roboflow Inference API, DeepSort, and OpenAI CLIP.
josephrocca/clip-image-sorter
Sort a folder of images by their similarity to a given text, in your browser (uses a browser port of OpenAI's CLIP model and the web's File System Access API).
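
The underlying idea, sketched in Python with the Hugging Face transformers port of CLIP rather than the repo's in-browser model (the checkpoint name and the `.jpg` glob are assumptions):

```python
from pathlib import Path
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def sort_by_text(folder: str, query: str):
    """Return image paths sorted by CLIP similarity to the query."""
    paths = sorted(Path(folder).glob("*.jpg"))  # assumed file extension
    images = [Image.open(p).convert("RGB") for p in paths]
    img_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))
    txt_emb = model.get_text_features(**processor(text=[query], return_tensors="pt", padding=True))
    sims = F.cosine_similarity(img_emb, txt_emb)  # (n_images,)
    return [paths[i] for i in sims.argsort(descending=True)]
```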
jaketae/koclip
KoCLIP: Korean port of OpenAI CLIP, in Flax
mehdidc/feed_forward_vqgan_clip
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
jianjieluo/OpenAI-CLIP-Feature
Easy-to-use, efficient code for extracting OpenAI CLIP (global/grid) features from images and text.
sajjjadayobi/CLIPfa
CLIPfa: Connecting Farsi Text and Images
bentoml/CLIP-API-service
CLIP as a service - image and sentence embedding, object recognition, visual reasoning, image classification, and reverse image search.
OpenSparseLLMs/CLIP-MoE
CLIP-MoE: Mixture of Experts for CLIP
sMamooler/CLIP_Explainability
Code for studying the explainability of OpenAI's CLIP.
deepmancer/clip-object-detection
Zero-shot object detection with CLIP, utilizing Faster R-CNN for region proposals.
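
A hedged sketch of that recipe: torchvision's Faster R-CNN proposes boxes, and CLIP scores each cropped region against open-vocabulary labels (the label set and proposal count are illustrative, not the repo's code):

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import pil_to_tensor
from transformers import CLIPModel, CLIPProcessor

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
labels = ["a dog", "a cat", "a bicycle"]  # illustrative open-vocabulary labels

@torch.no_grad()
def detect(image: Image.Image, top: int = 5):
    """Classify the detector's top region proposals with CLIP."""
    tensor = pil_to_tensor(image).float() / 255.0
    boxes = detector([tensor])[0]["boxes"][:top]  # proposals come sorted by score
    crops = [image.crop(tuple(b.tolist())) for b in boxes]
    inputs = processor(text=labels, images=crops, return_tensors="pt", padding=True)
    probs = clip(**inputs).logits_per_image.softmax(dim=-1)  # (n_crops, n_labels)
    return [(b.tolist(), labels[int(p.argmax())]) for b, p in zip(boxes, probs)]
```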
Snehil-Shah/Multimodal-Image-Search-Engine
A text-to-image and reverse image search engine built on vector similarity search, using the CLIP vision-language transformer for semantic embeddings and Qdrant as the vector store.
kantharajucn/CLIP-imagenet-evaluation
Run CLIP inference on the ImageNet dataset, use the predictions as labels to train other models, and evaluate the trained models on the ImageNet validation set against either the original labels or the CLIP labels.
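
A minimal sketch of the labeling step such a pipeline starts from, using the standard "a photo of a {class}" prompt template (the truncated class list and the `clip_labels` helper are placeholders, not the repo's code):

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder class names; a real run would use all 1000 ImageNet classes.
class_names = ["tench", "goldfish", "great white shark"]
prompts = [f"a photo of a {c}" for c in class_names]

@torch.no_grad()
def clip_labels(images):
    """Hypothetical helper: zero-shot CLIP pseudo-labels for a list of PIL images."""
    inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
    logits = model(**inputs).logits_per_image  # (n_images, n_classes)
    return logits.argmax(dim=-1)  # class indices used as training labels
```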
zabir-nabil/bangla-image-search
A dead-simple image search / retrieval and image-text matching system for Bangla using CLIP
kj3moraes/movieclip
An experiment with movie scenes and contrastive learning
zabir-nabil/bangla-CLIP
CLIP (Contrastive Language–Image Pre-training) for Bangla.
capjamesg/awesome-clip-projects
A list of projects that use OpenAI's CLIP model.
PoCInnovation/SpaceVector
SpaceVector is a platform for semantic search over satellite images using state-of-the-art AI, aiming to make satellite imagery easier to use.
jarvisx17/OpenAI-Clip-Image-Search
OpenAI CLIP + Faiss image semantic search.
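
A minimal sketch of that pairing, assuming the CLIP embeddings are already computed; L2-normalizing makes the Faiss inner-product index search by cosine similarity (the random arrays are stand-ins for real embeddings):

```python
import faiss
import numpy as np

d = 512  # embedding width of CLIP ViT-B/32
image_embs = np.random.rand(10_000, d).astype("float32")  # stand-in for CLIP image embeddings
faiss.normalize_L2(image_embs)   # after normalization, inner product == cosine similarity
index = faiss.IndexFlatIP(d)     # exact inner-product index
index.add(image_embs)

query = np.random.rand(1, d).astype("float32")  # stand-in for a CLIP text embedding
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)  # indices of the 5 most similar images
```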
Armaggheddon/ClipServe
🚀 ClipServe: a fast API server for embedding text and images and performing zero-shot classification using OpenAI's CLIP model. Powered by FastAPI, Redis, and CUDA for lightning-fast, scalable AI applications. Transform texts and images into embeddings or classify images with custom labels, all through easy-to-use endpoints. 🌐📊
retkowsky/visual_search_openai_clip
Visual search with OpenAI CLIP.
armaank/dbn
Generative models for architecture prose and schematics
FibonacciDude/ContrastiveProsthetics
Computation-free personalization at test time for sEMG gesture classification. Fast (GPU/CPU) NinaPro API.
monk1337/OpenAI-CLIP-Image-search
Image search built on OpenAI's CLIP neural network.
ubaidkhan08/CLIFS-Contrastive-Language-Image-Forensic-Search
CLIFS (CLIP-based Frame Selection) is a Python function that takes a video file and a text prompt and uses the CLIP (Contrastive Language-Image Pre-training) model to find the frame in the video most similar to the prompt.
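
A hedged sketch of that behavior, sampling frames with OpenCV and scoring them with a Hugging Face CLIP checkpoint (the sampling stride and model name are assumptions, not CLIFS's actual code):

```python
import cv2
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def best_frame(video_path: str, prompt: str, every_n: int = 30) -> int:
    """Return the index of the sampled frame most similar to the prompt."""
    cap = cv2.VideoCapture(video_path)
    frames, indices, i = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:  # subsample so the CLIP batch stays small
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
            indices.append(i)
        i += 1
    cap.release()
    inputs = processor(text=[prompt], images=frames, return_tensors="pt", padding=True)
    scores = model(**inputs).logits_per_image.squeeze(1)  # (n_frames,)
    return indices[int(scores.argmax())]
```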
HQarroum/piaggio
🏍️ A clustering tool providing exact and near de-duplication of images using vector embeddings.
mahadev0811/Text2ImageDescription
Text2ImageDescription retrieves relevant images from the Pascal VOC 2012 dataset using OpenAI CLIP, based on text queries, and generates descriptions with a quantized Mistral-7B model.
pulkitgoyal56/master-thesis-notebooks
Notebooks used for my Master's Thesis
pulkitgoyal56/master-thesis-report
Report for Master's Thesis on Building Visual Semantic Bias in Curious Exploration during Free Play
StephenMaaa/ChatSense
ChatSense - a chatbot based on Llama 2, Code Llama, and CLIP.