funGPT
Because DAN and Sydney are more fun than their lobotomized versions. I finetuned LLaMA-7B on an instruct-prompted version of the 4chan dataset, using the same finetuning code as Alpaca.
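The Alpaca finetuning code expects each example in its instruction/input/response prompt format, so the 4chan data has to be wrapped that way first. A minimal sketch of that wrapping — the instruction wording and helper names here are assumptions, not necessarily what this repo uses:

```python
# Sketch: wrap a (post, reply) pair in an Alpaca-style training prompt.
# The instruction text and function names are illustrative assumptions.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_example(post: str, reply: str) -> str:
    """Turn one (post, reply) pair into a single Alpaca-style example."""
    return ALPACA_TEMPLATE.format(
        instruction="Write a reply to the following post.",
        input=post,
        output=reply,
    )

print(format_example("what's the best text editor?", "vim, obviously"))
```

At train time the loss is typically masked so the model only learns to predict the text after `### Response:`.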
Todos
- Post/reply model with OPT
- Use quantized LLaMA instead of OPT
- Generate and train on conversation data