awesome-openai-vision-api-experiments

👋 hello

A set of examples showing how to use the OpenAI vision API to run inference on images, video files and webcam streams.
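
For orientation, here is a minimal sketch of a single-image request, assuming the official openai Python package (version 1.x, installed via requirements.txt), an OPENAI_API_KEY environment variable, and a placeholder image path and prompt; gpt-4-vision-preview is the preview vision model name available at the time of writing, so swap in whichever vision-capable model you have access to.

import base64
from openai import OpenAI

# the client reads the OPENAI_API_KEY environment variable automatically
client = OpenAI()

# base64-encode a local image so it can be sent inline as a data URL
# ("image.jpg" is a placeholder path)
with open("image.jpg", "rb") as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)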

💻 Install

# create and activate virtual environment
python3 -m venv venv
source venv/bin/activate

# install dependencies
pip install -r requirements.txt

🔑 Keys

Experimenting with the OpenAI API requires an API key. You can create one in your OpenAI account settings at platform.openai.com.
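
A common way to supply the key to these scripts is through an environment variable rather than hard-coding it. The sketch below assumes the conventional OPENAI_API_KEY variable name used by the official openai client.

import os
from openai import OpenAI

# read the key from the environment so it never ends up in source control
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running the examples.")

client = OpenAI(api_key=api_key)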

🧪 Experiments

| Experiment | Description | Code | HF Space |
|---|---|---|---|
| WebcamGPT | chat with video stream | GitHub | HuggingFace |
| Grounding DINO + GPT-4V | label images with Grounding DINO and GPT-4V | GitHub | |
| GPT-4V Classification | classify images with GPT-4V | GitHub | |
| GPT-4V vs. CLIP | label images with Grounding DINO and Autodistill | GitHub | |
| Hot Dog or not Hot Dog | simple image classification | GitHub | HuggingFace |
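
WebcamGPT, for example, chats over a live video stream. A rough sketch of that idea (not the experiment's actual code) is to grab frames with OpenCV, JPEG-encode them in memory, and send the base64 string as a data URL exactly as in the single-image sketch above; this assumes the opencv-python package is installed.

import base64
import cv2

# grab a single frame from the default webcam (device index 0)
capture = cv2.VideoCapture(0)
success, frame = capture.read()
capture.release()
if not success:
    raise RuntimeError("Could not read a frame from the webcam.")

# JPEG-encode the frame in memory; the resulting base64 string can be sent
# to the vision API as a data URL, just like a static image
success, buffer = cv2.imencode(".jpg", frame)
frame_b64 = base64.b64encode(buffer.tobytes()).decode("utf-8")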

🦸 Contribution

I would love your help in making this repository even better! Whether you want to correct a typo, add a new experiment, or suggest an improvement, feel free to open an issue or pull request.