Issues
- #79: Too many features using the corrector via api.py (opened by chaosido, 1 comment)
- #78: GPU usage is 0 (opened by pijiawei, 0 comments)
- #75: Hardware conditions for training (opened by pijiawei, 1 comment)
- #74: OpenAI tokens requirement (opened by mursaleen5336, 1 comment)
- #73: Struggling to reproduce the code (opened by mursaleen5336, 3 comments)
- #72: Apple Silicon support (opened by sibeliu, 1 comment)
- #48: Support other embeddings (opened by robvanvolt, 2 comments)
- #71: Generated text length problem (opened by pijiawei, 12 comments)
- #67: RuntimeError when training the corrector (opened by aaFrostnova, 2 comments)
- #42: Examining hypothesis embeddings (opened by lbertge, 4 comments)
- #68: Problems encountered while replicating the code (opened by jyyang26, 0 comments)
- #66: Model Evaluation (opened by victoriazinkovich, 3 comments)
- #65: Need "llama_unigram.pt" to load "jxm/t5-base___llama-7b___one-million-instructions__correct" (opened by tiandong1234, 8 comments)
- #63: Use llama 7b as corrector (opened by karanjotsv, 0 comments)
- #64: Why the attention_mask needs to be revised (opened by miaodog, 2 comments)
- #60: InversionFromLogitsEmbModel (opened by cdxzyc, 1 comment)
- #62: Can the model be trained successfully if log probabilities (logprobs) or probabilities (probs) are used as inputs instead? (opened by cdxzyc, 1 comment)
- #55: Accidental issue (opened by victoriazinkovich, 1 comment)
- #59: About a training error in the inversion experiment (opened by K1sna, 5 comments)
- #58: Any limitation on the targeted embedding model? (opened by qwcai, 1 comment)
- #57: About hypothesis generation (opened by MarkDeng1, 1 comment)
- #56: Multi-GPU Support (opened by shrijayan, 3 comments)
- #40: Reproducing results from the paper (opened by carriex, 1 comment)
- #53: About the demo (opened by Hannibal046, 0 comments)
- #50: llama-2-7b inversion (opened by d-bohn, 0 comments)
- #49: Languages supported? (opened by kkckk1110, 10 comments)
- #27: Sentence Transformer training support (opened by sciencecw, 0 comments)
- #47: Requirements file has no versions pinned (opened by kristopher-smith, 7 comments)
- #46: Using Initial Inversion Model Only (opened by VMS-6511, 1 comment)
- #45: Question about storage path (opened by sun1187, 2 comments)
- #41: LLAMA-2 inversion models (opened by mihirp1998, 2 comments)
- #38: Question about embedding_transform (opened by icerooqiu, 1 comment)
- #39: Request for GTR Inverters Checkpoint (opened by Lu-Yang666, 2 comments)
- #37: Why is all text padded to max length in trainer.base._get_decoded_sequences? (opened by liyongkang123, 3 comments)
- #33: What's happening in the example? (opened by startakovsky, 2 comments)
- #30: Document gtr-base support (opened by dokterbob, 0 comments)
- #32: Sentence Transformer Embeddings (opened by mim201820, 7 comments)
- #28: Missing Pooling Layer in sentence transformers (opened by sciencecw, 4 comments)
- #26: DDP error when running the training script on Mac (opened by sciencecw, 1 comment)
- #23: Vector-to-text inversion using MPS/CPU (opened by hyungkwonko)