This demo provides a tool to classify images using the FashionCLIP model. It processes an image and classifies it against predefined vocabulary sets, returning the top classes (categories) the image most likely belongs to, ranked by the similarity scores computed by the CLIP model.
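At its core, this is standard CLIP zero-shot classification: the image and every candidate label are embedded jointly, and the labels are ranked by similarity. The snippet below is a minimal sketch of that step using `transformers`; the file name and label list are illustrative, not part of the demo.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the pretrained FashionCLIP model and its processor.
model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip")
processor = CLIPProcessor.from_pretrained("patrickjohncyh/fashion-clip")

# Example image and candidate labels (both illustrative).
image = Image.open("example.jpg")
labels = ["t-shirt", "dress", "fur coat", "sneakers"]

# Embed image and labels jointly; CLIP returns one similarity logit per label.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

# Print labels ranked by similarity score.
for label, p in sorted(zip(labels, probs.tolist()), key=lambda x: -x[1]):
    print(f"{label}: {p:.3f}")
```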
- Utilizes the pretrained CLIP model from `patrickjohncyh/fashion-clip`.
- Supports custom vocabulary sets for classification. Vocabulary is provided for Parts/Types, Colours/Patterns, Brands, and Fabrics (see the loading sketch below).
- Allows users to specify the maximum number of words per category, the path to the vocabulary directory, and the number of top classes printed per category.
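The exact layout of the vocabulary directory is not specified here; a plausible sketch, assuming one plain-text file per category (one term per line) inside `vocab/`, might look like this. The file layout and the `max_vocab` truncation are assumptions.

```python
from pathlib import Path

def load_vocab(vocab_dir: str = "vocab/", max_vocab: int = 600) -> dict:
    """Load one label list per category file (assumed layout: one term per line)."""
    vocab = {}
    for file in sorted(Path(vocab_dir).glob("*.txt")):
        terms = [line.strip() for line in file.read_text().splitlines() if line.strip()]
        vocab[file.stem] = terms[:max_vocab]  # truncate each category to max_vocab terms
    return vocab
```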
```bash
python clip_image_classifier.py -f <path_to_image> [-v <max_vocab>] [-d <vocab_directory>] [-n <number_of_top_classes>]
```
- `-f, --file IMG`: Path to the image file. (Required)
- `-v, --maxvocab`: Maximum number of words per category. Default: 600.
- `-d, --dir DIR`: Path to the vocabulary directory. Default: `vocab/`.
- `-n, --number`: Number of top classes printed per category. Default: 5.
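For reference, an `argparse` setup reproducing these flags could look like the following; this is a sketch of the documented interface, not necessarily the script's actual code.

```python
import argparse

# Sketch of an argparse interface matching the documented flags.
parser = argparse.ArgumentParser(description="Classify a fashion image with FashionCLIP.")
parser.add_argument("-f", "--file", metavar="IMG", required=True,
                    help="Path to the image file.")
parser.add_argument("-v", "--maxvocab", type=int, default=600,
                    help="Maximum number of words per category.")
parser.add_argument("-d", "--dir", metavar="DIR", default="vocab/",
                    help="Path to the vocabulary directory.")
parser.add_argument("-n", "--number", type=int, default=5,
                    help="Number of top classes printed per category.")
args = parser.parse_args()
```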
Image sources for the examples:
- Ricardo Acevedo: https://www.pexels.com/de-de/foto/frau-die-blauen-pelzmantel-und-kleid-tragt-1375736/
- Wikimedia: https://commons.wikimedia.org/wiki/File:Tennis-shirt-lacoste.jpg
Dependencies:
- `PIL` for image processing.
- `transformers` for utilizing the FashionCLIP model and processor.
Copyright (C) 2023 by Jules Kreuer - @not_a_feature
This piece of software is published under the GNU General Public License v3.0.
TLDR:
| Permissions | Conditions | Limitations |
| ---------------- | ---------------------------- | ----------- |
| ✓ Commercial use | Disclose source | ✕ Liability |
| ✓ Distribution | License and copyright notice | ✕ Warranty |
| ✓ Modification | Same license | |
| ✓ Patent use | State changes | |
| ✓ Private use | | |
Go to LICENSE.md to see the full version.