
Autodistill Gemini Module

This repository contains the code supporting the Gemini base model for use with Autodistill.

Gemini, developed by Google, is a multimodal model that allows you to ask questions about images. You can use Gemini with Autodistill for image classification.

You can combine Gemini with other base models to label regions of an object. For example, you can use Grounding DINO to identify abstract objects (i.e. a vinyl record) then Gemini to classify the object (i.e. say which of five vinyl records the region represents). Read the Autodistill Combine Models guide for more information.
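
Below is a minimal sketch of that pattern, assuming Autodistill's ComposedDetectionModel API described in the Combine Models guide; the exact import path, constructor arguments, and the record names used in the ontology are illustrative, so check the guide before running it.

from autodistill.detection import CaptionOntology
from autodistill.core.composed_detection_model import ComposedDetectionModel
from autodistill_grounding_dino import GroundingDINO
from autodistill_gemini import Gemini

# Grounding DINO finds the abstract object ("vinyl record"), then Gemini
# classifies each detected region; the record names below are placeholders
model = ComposedDetectionModel(
    detection_model=GroundingDINO(
        ontology=CaptionOntology({"vinyl record": "vinyl record"})
    ),
    classification_model=Gemini(
        ontology=CaptionOntology({
            "record A album cover": "record A",
            "record B album cover": "record B"
        }),
        gcp_region="us-central1",
        gcp_project="project-name",
        model="gemini-1.5-flash"
    )
)

result = model.predict("records.jpeg")
print(result)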

Note

Using this project will incur billing charges for API calls to the Gemini API. Refer to the Google Cloud pricing page for more information and to calculate your expected pricing. This package makes one API call per image you want to label.

Read the full Autodistill documentation.

Installation

To use Gemini with Autodistill, you need to install the following dependency:

pip3 install autodistill-gemini

Quickstart

from autodistill.detection import CaptionOntology
from autodistill_gemini import Gemini

# define an ontology to map prompts to class names
# the ontology dictionary has the format {caption: class}, where caption is the
# prompt sent to the base model and class is the label that will be saved for
# that caption in the generated annotations
# then, load the model
base_model = Gemini(
    ontology=CaptionOntology(
        {
            "person": "person",
            "a forklift": "forklift"
        }
    ),
    gcp_region="us-central1",
    gcp_project="project-name",
    model="gemini-1.5-flash"
)

# run inference on an image
result = base_model.predict("image.jpg")

print(result)

# label a folder of images
base_model.label("./context_images", extension=".jpeg")

License

This project is licensed under an MIT license.

🏆 Contributing

We love your input! Please see the core Autodistill contributing guide to get started. Thank you 🙏 to all our contributors!