
🤗 Simple Aesthetics Predictor


CLIP-based aesthetics predictor inspired by the interface of 🤗 Hugging Face Transformers. This library provides a simple wrapper that loads a predictor via the `from_pretrained` method.

We currently provide wrappers for the aesthetics predictors; `AestheticsPredictorV1` is used in the example below.

Install

pip install simple-aesthetics-predictor

How to Use

import requests
import torch
from PIL import Image
from transformers import CLIPProcessor

from aesthetics_predictor import AestheticsPredictorV1

#
# Load the aesthetics predictor
#
model_id = "shunk031/aesthetics-predictor-v1-vit-large-patch14"

predictor = AestheticsPredictorV1.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

#
# Download sample image
#
url = "https://github.com/shunk031/simple-aesthetics-predictor/blob/master/assets/a-photo-of-an-astronaut-riding-a-horse.png?raw=true"
image = Image.open(requests.get(url, stream=True).raw)

#
# Preprocess the image
#
inputs = processor(images=image, return_tensors="pt")

#
# Move to GPU if available
#
device = "cuda" if torch.cuda.is_available() else "cpu"
predictor = predictor.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}

#
# Inference for the image
#
with torch.no_grad():  # or `torch.inference_mode()` in torch 1.9+
    outputs = predictor(**inputs)
prediction = outputs.logits

print(f"Aesthetics score: {prediction}")
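The `outputs.logits` tensor holds one aesthetics score per input image, so a common final step is converting it to plain Python floats. A minimal sketch, using a placeholder tensor in place of a real `outputs.logits` (its `(batch_size, 1)` shape here is an assumption for illustration):

```python
import torch

# Placeholder standing in for `outputs.logits` from the predictor above:
# assumed to be a (batch_size, 1) tensor with one score per image.
logits = torch.tensor([[5.2], [6.8], [4.1]])

# Drop the trailing dimension and convert to plain Python floats,
# rounding for display
scores = [round(s, 2) for s in logits.squeeze(-1).tolist()]
print(scores)  # → [5.2, 6.8, 4.1]
```

The same pattern works unchanged for a batch: passing a list of images to the processor yields batched inputs, and each row of the logits is the score for the corresponding image.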

Predictors available on the 🤗 Hugging Face Hub

Acknowledgements