bluaxe's Stars
ggerganov/llama.cpp
LLM inference in C/C++
PlexPt/awesome-chatgpt-prompts-zh
A Chinese-language guide to prompting ChatGPT, with usage guides for various scenarios. Learn how to make it do what you want.
byoungd/English-level-up-tips
An advanced guide to learning English which might benefit you a lot 🎉. An unconventional English learning guide/tutorial.
LAION-AI/Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.
ggerganov/whisper.cpp
Port of OpenAI's Whisper model in C/C++
chatchat-space/Langchain-Chatchat
Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama.
Chanzhaoyu/chatgpt-web
A ChatGPT demo web app built with Express and Vue 3.
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
camenduru/stable-diffusion-webui-colab
stable diffusion webui colab
microsoft/qlib
Qlib is an AI-oriented quantitative investment platform that aims to realize the potential of, empower research in, and create value with AI technologies in quantitative investment, from exploring ideas to implementing them in production. Qlib supports diverse machine learning modeling paradigms, including supervised learning, market dynamics modeling, and RL.
OpenTalker/SadTalker
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
ggerganov/ggml
Tensor library for machine learning
facebookresearch/xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.
cloneofsimo/lora
Using Low-rank adaptation to quickly fine-tune diffusion models.
harvardnlp/annotated-transformer
An annotated implementation of the Transformer paper.
pkuliyi2015/multidiffusion-upscaler-for-automatic1111
Tiled Diffusion and Tiled VAE optimizations, licensed under CC BY-NC-SA 4.0
alpa-projects/alpa
Training and serving large-scale neural networks with auto parallelization.
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
Linaqruf/kohya-trainer
Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
ELS-RD/kernl
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
mit-han-lab/smoothquant
[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
piglei/ai-vocabulary-builder
An AI-powered smart vocabulary builder. Key features: automatically adds new words and helps you memorize them through stories.
Victorwz/LongMem
Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".
pytorch/kineto
A CPU+GPU Profiling library that provides access to timeline traces and hardware performance counters.
SqueezeAILab/SqueezeLLM
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
huchenxucs/ChatDB
The official repository of "ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory".
usagitoneko97/klara
Automatic test case generation and static analysis library for Python
fujitsu/xbyak_aarch64
TsinghuaAI/CPM
Introduction to CPM
STHSF/alpha101
101 alpha factors calculated based on Alpha101