Sion1225's Stars
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
yagays/pretrained_doc2vec_ja
facebookresearch/fairscale
PyTorch extensions for high performance and large scale training.
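A minimal sketch of one of those extensions, FullyShardedDataParallel; it assumes the script is launched via torchrun so the process group can initialize, and the layer size is illustrative.

```python
# Hedged sketch of FairScale's FSDP wrapper; assumes torchrun sets the
# rank/world-size environment variables before this runs.
import torch
import torch.distributed as dist
from fairscale.nn import FullyShardedDataParallel as FSDP

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Linear(1024, 1024).cuda()
model = FSDP(model)  # shards parameters, gradients, and optimizer state across ranks
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```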
kodxana/OhMyRunPod
Collection of useful scripts for RunPod pods
bagustris/text-vad
VAD (valence-arousal-dominance) analysis of text using affective lexicons (ANEW, SentiWordNet, and VADER)
alexa/Topical-Chat
A dataset containing human-human knowledge-grounded open-domain conversations.
microsoft/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
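A minimal loralib sketch; the layer sizes and rank are illustrative assumptions, not tied to any particular model.

```python
# Swap nn.Linear layers for LoRA-augmented ones, train only the adapters.
import torch
import loralib as lora

model = torch.nn.Sequential(
    lora.Linear(768, 768, r=8),  # drop-in replacement with rank-8 update
    torch.nn.ReLU(),
    lora.Linear(768, 10, r=8),
)

# Freeze everything except the low-rank A/B matrices before training.
lora.mark_only_lora_as_trainable(model)

# After training, persist only the small adapter weights.
torch.save(lora.lora_state_dict(model), "lora_weights.pt")
```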
SungjoonPark/EmotionDetection
Dimensional Emotion Detection from Categorical Emotion Annotation
HIPS/autograd
Efficiently computes derivatives of NumPy code.
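For illustration, a tiny autograd example; the function being differentiated is an arbitrary choice.

```python
# Differentiate ordinary NumPy code with HIPS/autograd.
import autograd.numpy as np   # thinly wrapped NumPy
from autograd import grad

def tanh(x):
    return (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))

dtanh = grad(tanh)            # returns a function computing d(tanh)/dx
print(dtanh(1.0))             # ~0.4199743, matching 1 - tanh(1)**2
```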
google/flax
Flax is a neural network library for JAX that is designed for flexibility.
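A minimal flax.linen sketch; the layer widths and dummy batch shapes are illustrative.

```python
# Define a small MLP; in Flax, parameters live outside the module.
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    @nn.compact
    def __call__(self, x):
        x = nn.Dense(features=32)(x)
        x = nn.relu(x)
        return nn.Dense(features=1)(x)

model = MLP()
x = jnp.ones((4, 8))                           # dummy batch
params = model.init(jax.random.PRNGKey(0), x)  # build the parameter pytree
y = model.apply(params, x)                     # pure-function forward pass
```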
jax-ml/jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
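The three transformations named in that description, composed on a toy function:

```python
# grad, vmap, and jit are ordinary functions and compose freely.
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.grad(loss)                         # differentiate w.r.t. w
batched = jax.vmap(grad_loss, in_axes=(None, 0))   # vectorize over a batch of x
fast = jax.jit(batched)                            # JIT-compile via XLA

w = jnp.ones((3,))
xs = jnp.ones((5, 3))
print(fast(w, xs).shape)  # (5, 3): one gradient per batch element
```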
keirp/automatic_prompt_engineer
microsoft/varuna
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
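A hedged sketch of attaching a LoRA adapter with PEFT; the base model (gpt2) and hyperparameters are illustrative assumptions.

```python
# Wrap a pretrained model so only a small adapter is trainable.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8, lora_alpha=16,
    target_modules=["c_attn"],   # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```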
meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama with composable FSDP and PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and Q&A, plus a number of candidate inference solutions (HF TGI, vLLM) for local or cloud deployment, and demo apps showcasing Meta Llama on WhatsApp and Messenger.
horovod/horovod
Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
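A minimal Horovod + PyTorch sketch; assumes a launch such as `horovodrun -np 4 python train.py`.

```python
# Data-parallel training: Horovod averages gradients across workers.
import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())  # one GPU per process

model = torch.nn.Linear(10, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer for allreduce, and start all workers from identical state.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)
```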
NVIDIA/apex
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
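A sketch of the classic apex.amp entry points; note that mixed precision has since moved into PyTorch itself as torch.cuda.amp, which is generally preferred now. The model and data here are toy placeholders.

```python
# Classic apex.amp usage: patch model/optimizer, then scale the loss for fp16.
import torch
from apex import amp

model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

x = torch.randn(32, 128, device="cuda")
loss = model(x).pow(2).mean()
with amp.scale_loss(loss, optimizer) as scaled_loss:  # loss scaling for fp16
    scaled_loss.backward()
optimizer.step()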
chrischoy/MakePytorchPlusPlus
How and why you want to make your PyTorch CUDA/CPP extension with a Makefile
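As a point of reference, PyTorch also ships a JIT build helper for C++ extensions; the snippet below is a minimal, self-contained example of that route (the repo itself argues for a Makefile-based build instead).

```python
# Compile and import a tiny C++ extension on the fly (requires a C++ toolchain).
import torch
from torch.utils.cpp_extension import load_inline

cpp_src = """
torch::Tensor add_one(torch::Tensor x) { return x + 1; }
"""

ext = load_inline(name="add_one_ext", cpp_sources=cpp_src, functions=["add_one"])
print(ext.add_one(torch.zeros(3)))  # tensor([1., 1., 1.])
```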
bzhangGo/rmsnorm
Root Mean Square Layer Normalization
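A minimal PyTorch rendering of the technique: normalize by the root mean square of the activations (no mean subtraction, no bias), then apply a learned gain.

```python
# RMSNorm: y = g * x / sqrt(mean(x^2) + eps)
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.scale = nn.Parameter(torch.ones(dim))  # learned gain g

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).sqrt()
        return self.scale * x / rms
```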
facebookresearch/xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.
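A sketch of one of those building blocks, the memory-efficient attention op; the tensor shapes (batch, sequence, heads, head_dim) are illustrative.

```python
# Fused attention without materializing the full attention matrix.
import torch
import xformers.ops as xops

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

out = xops.memory_efficient_attention(q, k, v)  # same shape as q
```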
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
artidoro/qlora
QLoRA: Efficient Finetuning of Quantized LLMs
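A hedged sketch of QLoRA-style 4-bit NF4 loading via transformers/bitsandbytes; the checkpoint name is an illustrative assumption.

```python
# Load a base model in 4-bit NF4, as QLoRA does before attaching LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4, introduced by QLoRA
    bnb_4bit_use_double_quant=True,       # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                # illustrative checkpoint
    quantization_config=bnb_config,
)
```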
mocobeta/janome
Japanese morphological analysis engine written in pure Python
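Minimal usage, following the library's canonical example:

```python
# Tokenize Japanese text; each token carries part-of-speech features.
from janome.tokenizer import Tokenizer

t = Tokenizer()
for token in t.tokenize("すもももももももものうち"):
    print(token)  # surface form plus morphological features
```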
meta-llama/llama
Inference code for Llama models
microsoft/LightGBM
A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
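A minimal sketch with the scikit-learn-style wrapper; the toy data is illustrative.

```python
# Train a small gradient-boosted classifier on synthetic data.
import numpy as np
import lightgbm as lgb

X = np.random.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # toy binary target

clf = lgb.LGBMClassifier(n_estimators=50, learning_rate=0.1)
clf.fit(X, y)
print(clf.predict(X[:3]))
```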
catboost/catboost
A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
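A minimal sketch highlighting CatBoost's distinguishing feature, native handling of categorical columns; the toy data is illustrative.

```python
# No manual one-hot encoding needed: declare categorical columns directly.
from catboost import CatBoostClassifier

X = [["red", 1.0], ["blue", 2.0], ["red", 0.5], ["green", 3.0]]
y = [1, 0, 1, 0]

clf = CatBoostClassifier(iterations=20, verbose=False)
clf.fit(X, y, cat_features=[0])  # column 0 is categorical
print(clf.predict([["blue", 1.5]]))
```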
optuna/optuna
A hyperparameter optimization framework
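A minimal Optuna sketch; the objective below is a toy quadratic standing in for a real training run.

```python
# Define an objective over a search space; the study drives the search.
import optuna

def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2) ** 2              # minimized at x = 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)             # should approach {"x": 2.0}
```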
ShiqinHuo/Numerical-Optimization-Books
Collected study materials for Numerical Optimization (ANU MATH3514, HPC)
cs231n/cs231n.github.io
Public facing notes page
scipy/scipy
SciPy library main repository
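For example, its optimization module in a few lines; the Rosenbrock test function ships with SciPy itself.

```python
# Minimize the Rosenbrock function with BFGS.
import numpy as np
from scipy.optimize import minimize, rosen

result = minimize(rosen, x0=np.zeros(5), method="BFGS")
print(result.x)  # converges toward the global minimum at all ones
```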