Pinned Repositories

- arnavgarg1
- datasets
- EETQ - Easy and Efficient Quantization for Transformers
- horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet
- NEFTune - Official repository of "NEFTune: Noisy Embeddings Improve Instruction Finetuning"
- peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning
- PtitPrince - Python version of raincloud plots
- shareable_artifacts_for_talks
- ludwig - Low-code framework for building custom LLMs, neural networks, and other AI models
- ludwig-docs - Ludwig's documentation
arnavgarg1's Repositories

- arnavgarg1/arnavgarg1
- arnavgarg1/EETQ - Easy and Efficient Quantization for Transformers
- arnavgarg1/horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet
- arnavgarg1/NEFTune - Official repository of "NEFTune: Noisy Embeddings Improve Instruction Finetuning"
- arnavgarg1/peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning
- arnavgarg1/PtitPrince - Python version of raincloud plots
- arnavgarg1/shareable_artifacts_for_talks
- arnavgarg1/transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX
- arnavgarg1/trl - Train transformer language models with reinforcement learning
- arnavgarg1/unsloth - Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory