EleutherAI/gpt-neo

Freeze Transformer Weight

ivokun opened this issue · 1 comments

Can we finetune the model with the Transformer layers' weights frozen (i.e. train only the embeddings and the softmax)?

That is not currently supported by this codebase, but it is easy enough to do either with the HuggingFace library or by writing a small wrapper around this codebase.
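
For the HuggingFace route, here is a minimal sketch of how the freezing could look. It assumes the `EleutherAI/gpt-neo-125M` checkpoint and a placeholder learning rate; adapt both to your setup. Note that GPT-Neo ties the LM head weights to the input embeddings by default, so unfreezing the embeddings also makes the softmax projection trainable.

```python
import torch
from transformers import GPTNeoForCausalLM

# Assumption: the 125M checkpoint; swap in the model size you need.
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# Freeze every parameter first...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze the token and position embeddings. Because GPT-Neo
# ties lm_head.weight to the token embeddings, the output softmax
# projection becomes trainable as well.
for param in model.transformer.wte.parameters():
    param.requires_grad = True
for param in model.transformer.wpe.parameters():
    param.requires_grad = True

# Hand only the trainable parameters to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=5e-5,  # placeholder learning rate
)
```

From there you can run an ordinary finetuning loop (or pass the model to HuggingFace's `Trainer`); the frozen Transformer blocks receive no gradient updates.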