Adding support for MobileViTV2 model
Closed this issue · 3 comments
laszlokiss-szelena commented
Model description
Hi,
I would love to use MobileViTV2 in my application. I am definitely not an expert, but its architecture seems pretty similar to MobileViT, so adding support for it looks fairly straightforward to me.
Laszlo
Prerequisites
- The model is supported in Transformers (i.e., listed here)
- The model can be exported to ONNX with Optimum (i.e., listed here)
Additional information
No response
Your contribution
I experimented with this model on my fork here: KLaci@e1e02b1
I can submit a PR too if needed.
xenova commented
Hi there 👋 Looks like the ONNX export isn't as simple as I originally thought (see here). Is this something you'd be able to look into? :)
xenova commented
Okay, I might have gotten it working.
xenova commented
Example code (requires #721):
import { pipeline } from '@xenova/transformers';
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
const classifier = await pipeline('image-classification', 'Xenova/mobilevitv2-1.0-imagenet1k-256', {
  quantized: false,
});
const output = await classifier(url);
// [{ label: 'tiger, Panthera tigris', score: 0.6491137742996216 }]