mekaneeky's Stars
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
huggingface/trl
Train transformer language models with reinforcement learning.
abbodi1406/vcredist
AIO Repack for latest Microsoft Visual C++ Redistributable Runtimes
microsoft/LLMLingua
Compresses prompts and the KV cache to speed up LLM inference and sharpen the model's perception of key information, achieving up to 20x compression with minimal performance loss.
makcedward/nlpaug
Data augmentation for NLP
MahmoudAshraf97/whisper-diarization
Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper
lucidrains/lion-pytorch
🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that purportedly outperforms Adam(W), in PyTorch
learning-at-home/hivemind
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
kyegomez/BitNet
Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch
SakanaAI/evolutionary-model-merge
Official repository of Evolutionary Optimization of Model Merging Recipes
keroro824/HashingDeepLearning
Codebase for "SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems"
sail-sg/lorahub
[COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
mlabonne/llm-autoeval
Automatically evaluate your LLMs in Google Colab
SamsungLabs/zero-cost-nas
Zero-Cost Proxies for Lightweight NAS
rejunity/tiny-asic-1_58bit-matrix-mul
Tiny ASIC implementation of the matrix-multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits"
nkotak/1.58BitNet
Experimental BitNet Implementation
TensorTeacher/bittensor-mining-tutorial
AlarioAI/bitnet
Train and evaluate 1.58-bit neural networks
liyongqi67/Data-Distillation-for-Text-Classification
salahawk/bittensor-model-finetune
Model Fine-tuning for Bittensor miners using the dataset generated by validators
TensorTeacher/endpoint-center
spencrr/automl-cup-starter-kit
unconst/gradient
Gradient
unconst/PretrainSubnet
cmplx-xyttmt/mlops
nakamoto-ai/nya-compute-subnet
unconst/delta-subnet
Secret.
unconst/detrain
Testing playground for distributed federated learning
unconst/mach-subnet
Secret.
unconst/turing
Turing: distributed training