A LoRA (Low-Rank Adaptation) implementation using the tinygrad library. This is a work in progress. It is similar to Hugging Face's [PEFT](https://github.com/huggingface/peft) library, but built with tinygrad and tinygrad principles.
```python
# import your models as model
from lora import MakeLora

config = {'rank': 8, 'alpha': 0.9}
model = MakeLora(model, config)
```
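The `rank` and `alpha` settings control the low-rank update that LoRA adds on top of a frozen weight: the adapted weight is W + (alpha / rank) · B · A, where A and B are small trainable matrices. As a rough sketch of this general LoRA math (plain NumPy for illustration; this is not this repo's internal implementation):

```python
import numpy as np

# Illustrative sketch of the general LoRA update, not this library's
# internals: a frozen weight W gets a trainable low-rank delta B @ A,
# scaled by alpha / rank.
rank, alpha = 8, 0.9
out_features, in_features = 64, 32

W = np.random.randn(out_features, in_features)   # frozen pretrained weight
A = np.random.randn(rank, in_features) * 0.01    # trainable, small random init
B = np.zeros((out_features, rank))               # trainable, zero init

W_adapted = W + (alpha / rank) * (B @ A)

# Because B starts at zero, the adapted weight equals W before any
# training step, so wrapping a model does not change its outputs.
assert np.allclose(W_adapted, W)
```

Only A and B are updated during fine-tuning, so the number of trainable parameters is `rank * (in_features + out_features)` per layer instead of `in_features * out_features`.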
- LoRA for `Linear` layers: example in `testing_lora.py`
- LoRA for `Tensor`s: example in `testing_lora.py`
- LoRA for Transformers
- LoRA for CNNs
- Fine-tune Mistral 7B using tinygrad.
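For the Transformer and CNN items above, the same mechanism applies layer by layer: each wrapped linear layer keeps its pretrained weight frozen and learns only the low-rank factors. A minimal sketch of such a wrapped layer, in plain NumPy with hypothetical names (not this repo's API):

```python
import numpy as np

class LoRALinear:
    """Hypothetical minimal LoRA-wrapped linear layer (NumPy sketch;
    class and attribute names are illustrative only)."""
    def __init__(self, weight, rank=8, alpha=0.9):
        self.weight = weight                          # frozen, shape (out, in)
        out_f, in_f = weight.shape
        self.rank, self.alpha = rank, alpha
        self.A = np.random.randn(rank, in_f) * 0.01   # trainable
        self.B = np.zeros((out_f, rank))              # trainable, zero init

    def __call__(self, x):
        # y = x W^T + (alpha / rank) * x A^T B^T
        base = x @ self.weight.T
        delta = (self.alpha / self.rank) * (x @ self.A.T @ self.B.T)
        return base + delta

layer = LoRALinear(np.random.randn(16, 32))
x = np.random.randn(4, 32)
y = layer(x)
# With B zero-initialized, the output matches the frozen layer exactly.
assert np.allclose(y, x @ layer.weight.T)
```

Computing the delta as two skinny matmuls (`x @ A.T` then `@ B.T`) avoids ever materializing the full `(out, in)` update matrix, which is the usual LoRA forward-pass trick.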