Pinned Repositories
alignment-handbook
Robust recipes to align language models with human and AI preferences
knrm
langchain
⚡ Building applications with LLMs through composability ⚡
llama_index
LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.
mlflow
Open source platform for the machine learning lifecycle
multi-trait-sgns
PyTorch implementation of skip-gram negative sampling for learning weighted item embeddings for items with side information.
optimum
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
ray
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for accelerating ML workloads.
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
nathan-az's Repositories
nathan-az/alignment-handbook
Robust recipes to align language models with human and AI preferences
nathan-az/knrm
nathan-az/langchain
⚡ Building applications with LLMs through composability ⚡
nathan-az/llama_index
LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.
nathan-az/mlflow
Open source platform for the machine learning lifecycle
nathan-az/multi-trait-sgns
PyTorch implementation of skip-gram negative sampling for learning weighted item embeddings for items with side information.
nathan-az/optimum
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools
nathan-az/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
nathan-az/ray
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for accelerating ML workloads.
nathan-az/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
nathan-az/unsloth
Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory