pratyushasharma/laser
The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
Python · MIT License
Issues
- Does LASER have future work? (#29, opened by pangsg, 6 comments)
- After saving the model with this code, the weight file is the same size as the pre-trained one, so no memory is saved. Why? (#28, opened by pursure-D, 4 comments)
- Reproducing LLAMA-2 metrics (#27, opened by sidhantls, 2 comments)
- How to get base model accuracy? (#21, opened by pursure-D, 2 comments)
- Problem encountered during reproduction (#22, opened by ZY123-GOOD, 2 comments)
- Method of composing reductions across layers (#19, opened by KTALS, 3 comments)
- Generic model? (#20, opened by forresti, 8 comments)
- Llama2-7B + TruthfulQA reproduction issue (#18, opened by JiwenJ, 1 comment)
- Application to three-dimensional tensors (#17, opened by xl-lei, 0 comments)
- Feature request for upcoming refactoring (#9, opened by dkmisra, 2 comments)
- Potential improvements for evaluation (#15, opened by BenjaminBossan, 16 comments)
- Mistral support (#4, opened by fakerybakery, 1 comment)
- Rank-reduced models? (#8, opened by turboderp, 5 comments)
- Where to get the dataset? (#5, opened by fakerybakery, 2 comments)
- Question (#6, opened by fakerybakery, 3 comments)
- License (#3, opened by fakerybakery, 2 comments)
- What is the ETA on the code? (#2, opened by MrigankRaman)