Pinned Repositories
100-Days-Of-ML-Code
100 Days of ML Coding
android-ocr
Experimental app for optical character recognition on Android.
android-tesseract-ocr
Android Tesseract OCR
annotated_deep_learning_paper_implementations
🧑‍🏫 60 implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠
asr_preprocessing
Python implementation of pre-processing for end-to-end speech recognition.
AutoGPTQ
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
Automatic-Speech-Recognition
End-to-end speech recognition using TensorFlow.
awesome-compression
A beginner's tutorial on model compression.
awesome-cpp-cn
A comprehensive collection of C++ resources (Chinese edition): standard libraries, web application frameworks, artificial intelligence, databases, image processing, machine learning, logging, code analysis, and more.
baiduocr
Read Chinese and English text from JPEG/PNG images with Baidu OCR services.
liubai521's Repositories
liubai521/annotated_deep_learning_paper_implementations
🧑‍🏫 60 implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠
liubai521/AutoGPTQ
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
liubai521/Automatic-Speech-Recognition
End-to-end speech recognition using TensorFlow.
liubai521/awesome-compression
A beginner's tutorial on model compression.
liubai521/awesome-LLM-resourses
🧑‍🚀 Summary of the world's best LLM resources.
liubai521/ByteTransformer
Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052
liubai521/DB-GPT
Revolutionizing Database Interactions with Private LLM Technology
liubai521/DeepLearningExamples
State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
liubai521/exllamav2
A fast inference library for running LLMs locally on modern consumer-class GPUs
liubai521/FasterTransformer
Transformer-related optimizations, including BERT and GPT.
liubai521/flash-attention
Fast and memory-efficient exact attention
liubai521/freeCodeCamp
freeCodeCamp.org's open-source codebase and curriculum. Learn to code for free.
liubai521/Go
Algorithms implemented in Go for beginners, following best practices.
liubai521/gpt-fast
Simple and efficient PyTorch-native transformer text generation in <1000 lines of Python.
liubai521/H2O
[NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.
liubai521/lightseq
LightSeq: A High Performance Library for Sequence Processing and Generation
liubai521/llm-foundry
LLM training code for MosaicML foundation models
liubai521/LLMTrainer
A comparison of pretraining frameworks for LLMs.
liubai521/lockable-resources-plugin
Lock resources against concurrent use.
liubai521/oneflow
OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient.
liubai521/project-layout
Standard Go Project Layout
liubai521/SpecAugment
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition
liubai521/spyder
Official repository for Spyder - The Scientific Python Development Environment
liubai521/streaming-llm
Efficient Streaming Language Models with Attention Sinks
liubai521/SwiftTransformer
High performance Transformer implementation in C++.
liubai521/text-generation-inference
Large Language Model Text Generation Inference
liubai521/trt-samples-for-hackathon-cn
Simple samples for TensorRT programming
liubai521/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
liubai521/xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.
liubai521/xtuner
A toolkit for efficiently fine-tuning LLMs (InternLM, Llama, Baichuan, QWen, ChatGLM2).