Issues
- Model support request for BAAI/bge-m3 (#210, 1 comment)
- remove max_token_length from load_tokenizer (#208, 4 comments)
- incorrect nomic embeddings (#204, 6 comments)
- Slower than HF transformers? (#203, 8 comments)
- Support mxbai-embed-large-v1 (#142, 0 comments)
- Support dangvantuan/sentence-camembert-base (#137, 4 comments)
- failed to retrieve a model from cache (#136, 3 comments)
- Sentence Transformers Candidate Models (#133, 0 comments)
- Quantization Investigation (#126, 2 comments)
- intfloat/multilingual-e5-small request (#123, 2 comments)
- Move deprecation message inside class (#122, 1 comment)
- Sample code gave an error (#114, 1 comment)
- Simplify imports (#110, 10 comments)
- Support BAAI/bge-m3 (#107, 1 comment)
- Supported model causes error (#104, 1 comment)
- Feature: support Mac M2/M3 GPU (#97, 1 comment)
- [Qdrant Client] Use Supported Models API (#95, 3 comments)
- Add support for Image/Multimodal Model (#92, 2 comments)
- Add support for custom models (#87, 1 comment)
- Support for thenlper/gte-large (#72, 6 comments)
- Single cache_dir determination (#69, 1 comment)
- Embedding Limit (#64, 16 comments)
- Version Tag: 1.1 (#62, 2 comments)
- Progress bar? (#59, 2 comments)
- Load fine-tuning model using fastembed (#58, 1 comment)
- Working with Frozen embeddings (#51)