chris-tng's Stars
zed-industries/zed
Code at the speed of thought – Zed is a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.
ruanyf/weekly
Technology Enthusiast Weekly, published every Friday
sharkdp/bat
A cat(1) clone with wings.
faif/python-patterns
A collection of design patterns/idioms in Python
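The repo catalogues idioms like strategy, borg, and builder; as a flavor of what it covers, here is a minimal strategy-pattern sketch in plain Python (the names are illustrative, not taken from the repo):

```python
# Strategy pattern: interchangeable behaviors passed in as plain functions.

def bulk_discount(price: float) -> float:
    return price * 0.9

def no_discount(price: float) -> float:
    return price

class Order:
    def __init__(self, price: float, discount=no_discount):
        self.price = price
        self.discount = discount  # the injected "strategy"

    def total(self) -> float:
        return self.discount(self.price)

print(Order(100.0, bulk_discount).total())  # 90.0
print(Order(100.0).total())                 # 100.0
```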
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
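A hedged sketch of DeepSpeed's usual entry point: wrap an existing torch model with `deepspeed.initialize` and a JSON-style config. The model and config values below are placeholders, not from the repo:

```python
import torch
import deepspeed

model = torch.nn.Linear(512, 512)
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},  # ZeRO stage-2 state partitioning
}

# Returns a wrapped engine that handles distributed training details.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```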
satwikkansal/wtfpython
What the f*ck Python? 😱
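One of the classic gotchas this repo collects: a mutable default argument is evaluated once, at function definition time, and then shared across calls.

```python
def append_to(item, bucket=[]):
    bucket.append(item)
    return bucket

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2]  <- same list object as the first call!
```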
stanfordnlp/dspy
DSPy: The framework for programming—not prompting—language models
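A hedged sketch of DSPy's declarative style: describe the task as a signature and let the framework compile the prompting. The model identifier is a placeholder and the exact API surface varies across DSPy versions:

```python
import dspy

lm = dspy.LM("openai/gpt-4o-mini")  # placeholder model name
dspy.configure(lm=lm)

# A signature ("question -> answer") instead of a hand-written prompt.
qa = dspy.ChainOfThought("question -> answer")
print(qa(question="What is the capital of France?").answer)
```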
unslothai/unsloth
Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
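A rough sketch of the Unsloth fine-tuning entry point in the style of its README; the checkpoint name and hyperparameters are placeholder assumptions:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed 4-bit checkpoint name
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```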
guidance-ai/guidance
A guidance language for controlling large language models.
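A loosely hedged sketch of the guidance style (post-0.1 API, from memory): programs are built by adding text and constrained `gen()` calls to a model object. The model path is a placeholder:

```python
from guidance import models, gen

lm = models.LlamaCpp("path/to/model.gguf")  # any supported local backend
# gen() constrains decoding and captures the result under a name.
lm += "Q: What is 2 + 2?\nA: " + gen("answer", stop="\n", max_tokens=16)
print(lm["answer"])
```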
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
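A minimal LoRA example with PEFT: wrap a Hugging Face model so that only low-rank adapter weights are trained. The base model here is just a small placeholder:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a tiny fraction is trainable
```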
charmbracelet/glow
Render markdown on the CLI, with pizzazz! 💅🏻
apache/arrow
Apache Arrow is the universal columnar format and multi-language toolbox for fast data interchange and in-memory analytics
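A small pyarrow example: build a columnar table in memory and round-trip it through Parquet, the kind of interchange Arrow standardizes:

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"id": [1, 2, 3], "score": [0.1, 0.5, 0.9]})
pq.write_table(table, "scores.parquet")
print(table.equals(pq.read_table("scores.parquet")))  # True
```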
google-deepmind/deepmind-research
This repository contains implementations and illustrative code to accompany DeepMind publications
zijie0/HumanSystemOptimization
Stay healthy and keep learning to 150 - an incomplete guide to tuning the human system
stas00/ml-engineering
Machine Learning Engineering Open Book
girliemac/a-picture-is-worth-a-1000-words
I am trying to describe complex matters in simple doodles!
nlpxucan/WizardLM
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
lancedb/lancedb
Developer-friendly, serverless vector database for AI applications. Easily add long-term memory to your LLM apps!
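A hedged sketch of the LanceDB Python API: create an embedded table and run a nearest-neighbor search. The data and vectors are made up for illustration:

```python
import lancedb

db = lancedb.connect("./lancedb-demo")  # embedded, file-backed database
table = db.create_table(
    "docs",
    data=[
        {"vector": [0.1, 0.2], "text": "hello"},
        {"vector": [0.9, 0.8], "text": "world"},
    ],
)
print(table.search([0.1, 0.3]).limit(1).to_list())  # nearest neighbor
```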
pytorch/torchtune
PyTorch native post-training library
pytorch/torchtitan
A PyTorch native library for large model training
silverbulletmd/silverbullet
The knowledge tinkerer's notebook
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
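A hedged sketch of Transformer Engine's FP8 usage on supported GPUs: swap in `te.Linear` and run the forward pass under `fp8_autocast`:

```python
import torch
import transformer_engine.pytorch as te

layer = te.Linear(1024, 1024, bias=True).cuda()
x = torch.randn(16, 1024, device="cuda")
with te.fp8_autocast(enabled=True):  # compute in 8-bit floating point
    y = layer(x)
```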
pytorch/ao
PyTorch native quantization and sparsity for training and inference
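A hedged sketch of torchao's post-training quantization entry point; the exact config helpers have shifted between releases:

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

model = torch.nn.Sequential(torch.nn.Linear(256, 256)).eval()
quantize_(model, int8_weight_only())  # swap weights to int8 in place
```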
charmbracelet/skate
A personal key value store 🛼
Lightning-AI/lightning-thunder
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors at once, across one or thousands of GPUs.
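A hedged sketch of Thunder's compilation flow: `thunder.jit` returns a compiled callable with the same signature as the original module:

```python
import torch
import thunder

model = torch.nn.Linear(64, 64)
jitted = thunder.jit(model)       # trace and optimize the forward pass
out = jitted(torch.randn(8, 64))  # use like the original module
```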
dpfried/incoder
Generative model for code infilling and synthesis
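A hedged sketch of loading InCoder through Hugging Face transformers; "facebook/incoder-1B" is the published checkpoint name as best I recall, so treat it as an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/incoder-1B")
model = AutoModelForCausalLM.from_pretrained("facebook/incoder-1B")
ids = tok("def hello():", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=16)[0]))
```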
ExpressAI/reStructured-Pretraining
reStructured Pre-training
AaronWatters/jp_proxy_widget
Generic Jupyter/IPython widget implementation that supports many types of JavaScript libraries and interactions.
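A rough sketch from memory of the jp_proxy_widget pattern, where `js_init` runs JavaScript against the widget's DOM element inside a notebook cell; treat the exact calls as assumptions:

```python
import jp_proxy_widget

widget = jp_proxy_widget.JSProxyWidget()
widget.js_init("element.html('Hello from JavaScript');")
widget  # displaying the widget in Jupyter renders the element
```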
PegasisForever/typora-parser
Convert Typora flavoured markdown to HTML.