Issues
Need help fine-tuning sgpt model
#18 opened by faicalbounedjar - 0
S-GPT
#48 opened by Davidtverd - 6
Usage for semantic search
#47 opened by rut00 - 2
Usage for text-2-text-generation
#45 opened by shafkat-07 - 11
Could I change the max_seq_length?
#43 opened by asenasen123 - 4
Fine-tune Muennighoff/SGPT-2.7B-weightedmean-msmarco-specb-bitfit using TSDAE approach
#42 opened by BalajiAJ - 15
Error when using sentence_transformer
#14 opened by TamHHM - 4
What if I input more than the max_seq_length?
#40 opened by runwean - 8
How to fine-tune on my datasets
#2 opened by shaileshj2803 - 3
Why use low chunksizes?
#39 opened by aksj98 - 2
Training scripts have wrong name of gpt-neo-125m
#38 opened by aksj98 - 1
Does it support Korean and Japanese?
#35 opened by sz2three - 1
Fine-tuning sgpt-bloom-7b1-msmarco goes OOM
#32 opened by wing7171 - 2
Can I use multiple GPUs?
#31 opened by magicleo - 2
accelerate + deepspeed?
#30 opened by alex-ht - 3
the example model of 'SGPT-125M-weightedmean-nli-bitfit' not compatible with ST
#29 opened by dengwx2009 - 9
When I train an encoder using BLOOM 3B, I get an error. What is the cause of this problem?
#27 opened by ScottishFold007 - 5
Model taken down?
#28 opened by extradosages - 1
Training on unlabeled data (German)
#26 opened by anastasiia-ps - 4
Chinese support?
#25 opened by Lukangkang123 - 7
Fine Tuning
#21 opened by Kartali-Mohamed - 2
Use SGPT
#20 opened by Kartali-Mohamed - 6
Different model sizes
#19 opened by KnutJaegersberg - 2
Cannot reproduce leaderboard result
#16 opened by hsl89 - 2
Evaluating cross encoders
#15 opened by loopdeloop76 - 1
Construct SGPT
#11 opened by tsamarahrana - 2
Metric scores for CQADupStack
#10 opened by rafaljanwojcik - 2
Learning rate & schedule
#7 opened by malteos - 1
Training SGPT for Custom Dataset
#6 opened by rajarajanvakil - 4
OpenAI-GPT3 search endpoint deprecated
#5 opened by rajarajanvakil - 1
Generating similar sentences
#4 opened by nikky4D