Pinned Repositories
annotated-s4
Implementation of https://srush.github.io/annotated-s4
AQLM
Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization: https://arxiv.org/pdf/2401.06118.pdf
GravityShowed
MothersFriendTextWriter
Python project for creating a text generator.
testforvklab
trlx
A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF)
petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
dl-course
Deep Learning with Catalyst
RAdam_research
Investigating the real variance of gradients and the reliability of rectification in the RAdam optimizer
hivemind
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
artek0chumak's Repositories
artek0chumak/trlx
A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF)
artek0chumak/testforvklab
artek0chumak/annotated-s4
Implementation of https://srush.github.io/annotated-s4
artek0chumak/AQLM
Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization: https://arxiv.org/pdf/2401.06118.pdf
artek0chumak/artek0chumak.github.io
artek0chumak/article_essence
artek0chumak/botmaker_get_contact_info
artek0chumak/candle
Minimalist ML framework for Rust
artek0chumak/catalyst
Accelerated deep learning R&D
artek0chumak/catgpt
GPT for the catebi project
artek0chumak/DIHT_3semester
Solutions to 3rd-semester problems at DIHT (MIPT).
artek0chumak/fast_arxiver
artek0chumak/hivemind
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
artek0chumak/hivemind-lightning
artek0chumak/kartuli-ena
artek0chumak/llama2.rs
artek0chumak/mipt-thesis
Thesis template for MIPT.
artek0chumak/petals
🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
artek0chumak/photoclo
artek0chumak/PowerEF
artek0chumak/PrivateDB_bot
Telegram bot for private use.
artek0chumak/self-instruct
Aligning pretrained language models with instruction data generated by themselves.
artek0chumak/simple-interpreter
Simple interpreter for a test task.
artek0chumak/smoothquant
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
artek0chumak/swarm
artek0chumak/Tk-Instruct
Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions.
artek0chumak/torch-int
This repository contains integer operators on GPUs for PyTorch.
artek0chumak/torchtitan
A native PyTorch library for large model training
artek0chumak/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
artek0chumak/WikiBotWordStat