ruiqianheartseed's Stars
baichuan-inc/Baichuan-13B
A 13B large language model developed by Baichuan Intelligent Technology
NVIDIA/cutlass
CUDA Templates for Linear Algebra Subroutines
InternLM/InternLM
Official release of InternLM2.5 base and chat models. 1M context support
karpathy/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Lightning-AI/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
NVIDIA/FasterTransformer
Transformer-related optimizations, including BERT and GPT
ramon-victor/freegpt-webui
GPT 3.5/4 with a Chat Web UI. No API key required.
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
BradyFU/Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles: Latest Advances on Multimodal Large Language Models
THUDM/ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM
Dao-AILab/flash-attention
Fast and memory-efficient exact attention
baichuan-inc/Baichuan-7B
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
FesonX/cn-text-classifier
Chinese text clustering
ggerganov/llama.cpp
LLM inference in C/C++
openlm-research/open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
mlfoundations/open_clip
An open source implementation of CLIP.
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
hikariming/chat-dataset-baseline
A manually curated Chinese dialogue dataset, plus ChatGLM fine-tuning code
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning
yuanzhoulvpi2017/zero_nlp
Chinese NLP solutions (large models, data, models, training, inference)
THUDM/VisualGLM-6B
Chinese and English multimodal conversational language model
BlinkDL/ChatRWKV
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
BlinkDL/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
destwang/CTCResources
wdimmy/Automatic-Corpus-Generation
This repository is for the paper "A Hybrid Approach to Automatic Corpus Generation for Chinese Spelling Check"
houbb/word-checker
🇨🇳🇬🇧 Chinese and English word spelling corrector (detects common Chinese typos, checks and corrects Chinese spelling, and validates English word spelling)
houbb/nlp-hanzi-similar
A hanzi similarity tool (computes Chinese character similarity with a near-form character algorithm; useful for correcting handwritten-hanzi recognition, text obfuscation, etc.)
ganguagua/error_recognize
Weakly supervised, BERT-based Chinese typo recognition
THUDM/GLM-130B
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
openai/whisper
Robust Speech Recognition via Large-Scale Weak Supervision