Issues
- AttributeError: 'LlamaModel' object has no attribute '_use_flash_attention_2' (#10, opened by raghavgarg97, 0 comments)
- Spider results higher than paper (#16, opened by jacklishufan, 0 comments)
- How to combine lookahead with cllms? (#15, opened by jianyuheng, 0 comments)
- Acceleration for batch Inference? (#14, opened by jianyuheng, 8 comments)
- Model collapse (#11, opened by TrueNobility303, 6 comments)
- Reproduce Table 1 by this repo? (#6, opened by dreaming-panda, 5 comments)
- Changes needed? (#8, opened by sdmorrey, 1 comment)
- Is the generation greedy-decoding? (#4, opened by w32zhong)