Issues
RuntimeError: Input is too long for context length 77. No truncation passed
#468 opened by hessaAlawwad
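A minimal sketch of the usual workaround, assuming the openai/CLIP package and a ViT-B/32 checkpoint: clip.tokenize accepts truncate=True, which clips a caption to the model's 77-token context instead of raising this RuntimeError.

```python
# Sketch: avoid "Input is too long for context length 77" by truncating.
# Assumes the openai/CLIP package; the caption text is a made-up example.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

long_caption = "a very long caption " * 50        # well over 77 BPE tokens
tokens = clip.tokenize([long_caption], truncate=True)  # clipped to 77 tokens
with torch.no_grad():
    text_features = model.encode_text(tokens.to(device))
print(text_features.shape)  # e.g. torch.Size([1, 512]) for ViT-B/32
```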
Segmentation Fault On Importing Clip
#447 opened by AbdelrahmanMohamed129
Are there public datasets in your WIT dataset?
#475 opened by soniamartinot
colab [8] error: No such file or directory: '/root/.cache/scikit-image/0.24.0/data'
#474 opened by a-persimmons
out proj weight typo fix
#473 opened by louiswang524
Welcome to my clip.cpp project: supports the OpenAI CLIP and Chinese-CLIP models, quantization types (q4, q8_0, f16, f32), and Metal GPU / CUDA acceleration
#463 opened by yysu-888
pip subprocess error
#471 opened by Uduba-robo
pip subprocess error
#470 opened by Uduba-robo
How do I open clip for other visual tasks?
#469 opened by kunge98
How to subdivide the same category? For example, how to distinguish Persian cats, coffee cats, and jingle cats, which all fall under the cat category?
#466 opened by watertianyi
My CUDA setup sees the GPU but does not use it, why?
#443 opened by Mark-ssss
What is the best practice for training on images with multiple captions/keywords?
#464 opened by pengzhao-life
On YouTube channel
#461 opened by Cascadipalone
Preprocessor - How does it work?
#459 opened by whishei
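A minimal sketch, assuming the openai/CLIP package: the preprocess callable returned by clip.load is a torchvision transform pipeline (resize, center crop, RGB conversion, tensor conversion, normalization) that turns a PIL image into the tensor the image encoder expects. The input path below is hypothetical.

```python
# Sketch: inspect and apply the preprocess transform returned by clip.load.
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")
print(preprocess)  # prints the Resize / CenterCrop / ToTensor / Normalize pipeline

image = Image.open("example.jpg")   # hypothetical input path
pixel_values = preprocess(image)    # float tensor of shape [3, 224, 224] for ViT-B/32
batch = pixel_values.unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    image_features = model.encode_image(batch)
```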
Issue with Text Encoder Output Dimensions in Fine-Tuned CLIP Model When Using with Stable Diffusion
#452 opened by QXGeraldMo
How to run in
#455 opened by Izaankaskar
About rope-vit
#454 opened by CoinCheung
Is it normal that logits are not within -1 to 1?
#444 opened by Leo-T-Zang
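A minimal sketch, assuming the openai/CLIP package: the logits returned by model(image, text) are cosine similarities multiplied by the learned temperature model.logit_scale.exp() (about 100 in the released checkpoints), so they are not expected to fall in [-1, 1]; normalizing the features and taking a dot product recovers the raw cosine similarities. The image path is hypothetical.

```python
# Sketch: scaled logits vs. raw cosine similarities in CLIP.
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")
image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical image
text = clip.tokenize(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    logits_per_image, _ = model(image, text)   # cosine similarity * logit_scale.exp()
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    cosine = image_features @ text_features.T  # raw similarities, within [-1, 1]

print(model.logit_scale.exp().item(), logits_per_image, cosine)
```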
`from pkg_resources import packaging` seems not to work with `setuptools>=70.0.0`
#446 opened by PawelPeczek-Roboflow
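A minimal sketch of one possible workaround, assuming the standalone packaging distribution is installed (pip install packaging): the PyTorch version check that clip/clip.py performs through pkg_resources can be done with packaging.version directly, without importing pkg_resources at all.

```python
# Sketch: the same version check without the deprecated pkg_resources import.
import torch
from packaging import version  # standalone "packaging" package, an assumption here

if version.parse(torch.__version__) < version.parse("1.7.1"):
    print("PyTorch version 1.7.1 or higher is recommended")
```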
Can the text embedding be recovered back to text?
#428 opened by Zhangwenyao1
Question about Checkpoint on ResNet-50.
#445 opened by MorningStarOvO
Enquiry about the FER2013 dataset used in the CLIP paper
#439 opened by hyy-2000
CLIP
#438 opened by DorsaCharkhian
On a Bag-of-Words baseline and a transformer
#436 opened by iburenko
from pkg_resources import packaging
#434 opened by qiuyangyue666
Did CLIP use COCO for training?
#433 opened by slavaheroes
Context Length Error
#432 opened by hamza13-12
Multi-thread usage of open_clip
#430 opened by mobines96
encode_text gives different CLIP features for the same text, single batch vs multiple batch
#429 opened by KevinNWalker
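A minimal sketch for checking this, assuming the openai/CLIP package: on GPU the released checkpoints run in fp16, so encoding the same caption alone versus inside a larger batch can differ by small floating-point amounts; comparing with a tolerance (or loading on CPU in fp32) shows whether the difference is only numerical.

```python
# Sketch: compare text features for the same caption, batch of 1 vs. batch of 2.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model.eval()

tokens = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)
with torch.no_grad():
    single = model.encode_text(tokens[:1])    # caption encoded alone
    batched = model.encode_text(tokens)[:1]   # same caption inside a larger batch

print(torch.allclose(single, batched, atol=1e-3))  # small fp16 differences expected on GPU
```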
some confusion about fine-tuning CLIP
#427 opened by SoulProficiency
Determine how similar the text I entered is to the text in the training set
#426 opened by yumianhuli1
R50x64.pt SHA256 does not match - fyi
#425 opened by xvdp
How to combine with ControlNet?
#424 opened by MinzChan
about position embedding scale
#423 opened by OliverHuang1220
What is the impact on the image embedding result when the alpha channel is dropped by the converter?
#422 opened by githubusersel
Mistake in tutor
#421 opened by spzhuang
Dimension Discrepancy in VisionTransformer?
#420 opened by Shiran-Yuan
Please add ViT-G support
#419 opened by ppbrown
CLIP Recognition Error
#416 opened by nhw649
The clip library can only read image data using PIL; we hope the maintainers can modify the source code
#418 opened by qsd-github
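A minimal sketch, assuming the openai/CLIP package plus OpenCV (cv2) as an example of a non-PIL loader: an array read with another library can be converted with PIL.Image.fromarray before being passed to the preprocess transform, so no change to the clip source is strictly required. The image path is hypothetical.

```python
# Sketch: feed an OpenCV-loaded image into CLIP via a PIL conversion.
import torch
import clip
import cv2
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")

bgr = cv2.imread("example.jpg")             # hypothetical path; OpenCV returns BGR uint8
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # convert to the RGB order PIL/CLIP expect
image = preprocess(Image.fromarray(rgb)).unsqueeze(0)
with torch.no_grad():
    image_features = model.encode_image(image)
```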