Pinned Repositories
Alignment_Working_Paper
Baichuan2
A series of large language models developed by Baichuan Intelligent Technology
InstructMT
A collection of instruction data and scripts for machine translation.
LLaMA-Efficient-Tuning
Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA)
llm-hallucination-survey
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
NaCGEC
Papers
A collection of research papers; enjoy.
self-speculative-decoding
Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding"
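The draft-and-verify idea behind this repository can be illustrated with a toy greedy loop: a cheap draft model proposes a few tokens, the target model verifies them in order, and the longest matching prefix is accepted (plus one corrected token from the target). This is a minimal sketch of the general speculative-decoding pattern, not the repo's layer-skipping implementation; `draft_next`/`target_next` are hypothetical callables returning one greedy token:

```python
def speculative_decode(draft_next, target_next, prompt, n_tokens, k=4):
    """Toy greedy draft-and-verify loop.

    draft_next(ctx) / target_next(ctx): return the next greedy token id
    given a context (list of ids). The draft proposes k tokens; the target
    verifies them; the output always equals target-only greedy decoding.
    """
    seq = list(prompt)
    while len(seq) - len(prompt) < n_tokens:
        # 1) Draft k tokens autoregressively with the cheap model.
        draft, ctx = [], seq[:]
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Verify: compare against the target's greedy choice at each position.
        accepted, ctx = [], seq[:]
        for t in draft:
            tt = target_next(ctx)
            if tt == t:
                accepted.append(t)   # draft token confirmed
                ctx.append(t)
            else:
                accepted.append(tt)  # take the target's correction and stop
                break
        else:
            accepted.append(target_next(ctx))  # bonus token when all k match
        seq.extend(accepted)
    return seq[:len(prompt) + n_tokens]
```

Because rejected positions are replaced by the target's own greedy token, the output is identical to decoding with the target alone; the speedup comes from verifying several drafted tokens in one target pass.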
Tensorflow-Tutorial
Some interesting TensorFlow tutorials for beginners.
tsingcoo's Repositories
tsingcoo/Baichuan-13B
A 13B large language model developed by Baichuan Intelligent Technology
tsingcoo/baichuan-7B
A large-scale 7B pretraining language model developed by BaiChuan-Inc.
tsingcoo/Baichuan2
A series of large language models developed by Baichuan Intelligent Technology
tsingcoo/ChatGLM-Efficient-Tuning
Fine-tuning ChatGLM-6B with PEFT | Efficient PEFT-based ChatGLM fine-tuning
tsingcoo/ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM | An open-source bilingual dialogue language model
tsingcoo/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
tsingcoo/InstructMT
A collection of instruction data and scripts for machine translation.
tsingcoo/LLaMA-Efficient-Tuning
Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA)
tsingcoo/llama.cpp
Port of Facebook's LLaMA model in C/C++
tsingcoo/llm-awq
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
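The core intuition of activation-aware quantization can be sketched in a few lines: scale up the weight channels that see large activations before round-to-nearest quantization, so the salient channels lose less precision, then fold the scale back out. This is a toy illustration of the idea only, not the AWQ repo's algorithm (the function name, the per-tensor step, and the `alpha` exponent are my own simplifications):

```python
import numpy as np

def quantize_activation_aware(W, act_scale, n_bits=4, alpha=0.5):
    """Toy activation-aware quantization sketch (not the real AWQ).

    W: (d_in, d_out) weight matrix.
    act_scale: (d_in,) per-input-channel mean |activation|.
    Returns a dequantized approximation of W.
    """
    s = act_scale ** alpha                 # per-channel protection factor
    Ws = W * s[:, None]                    # amplify salient input channels
    qmax = 2 ** (n_bits - 1) - 1
    step = np.abs(Ws).max() / qmax         # symmetric per-tensor step size
    Wq = np.clip(np.round(Ws / step), -qmax, qmax)
    return (Wq * step) / s[:, None]        # dequantize, undo the scaling
```

Dividing by `s` at the end shrinks the quantization error on high-activation channels by the same factor it amplified them, which is the effect AWQ exploits (in the real method the scale is folded into the preceding layer rather than undone numerically).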
tsingcoo/llm-hallucination-survey
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
tsingcoo/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
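The low-rank update that LoRA applies can be illustrated with a minimal NumPy sketch: the pretrained weight stays frozen while two small factors A and B learn a rank-r delta, scaled by alpha/r. The class name, shapes, and init constants here are my own illustration, not loralib's API:

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: y = x @ (W + (alpha/r) * A @ B).

    W (d_in, d_out) is frozen; only A (d_in, r) and B (r, d_out) train.
    """
    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        d_in, d_out = W.shape
        self.W = W                                    # frozen pretrained weight
        self.A = rng.normal(0, 0.02, size=(d_in, r))  # small random init
        self.B = np.zeros((r, d_out))                 # zero init: delta starts at 0
        self.scale = alpha / r

    def __call__(self, x):
        return x @ self.W + self.scale * (x @ self.A @ self.B)

    def merge(self):
        # Fold the adapter into W for zero-overhead inference.
        return self.W + self.scale * (self.A @ self.B)
```

Because B starts at zero, the adapted layer initially reproduces the frozen layer exactly, and after training the adapter can be merged back into a single dense matrix.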
tsingcoo/multi-query-attention
Implementation of "Fast Transformer Decoding: One Write-Head is All You Need" (multi-query attention)
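The key point of multi-query attention is that all query heads share a single key/value head, shrinking the KV cache that decoding must read. A minimal NumPy sketch of the shared-head computation (bidirectional and unbatched for brevity; not the repo's code):

```python
import numpy as np

def multi_query_attention(q, k, v):
    """Multi-query attention: many query heads, ONE shared key/value head.

    q: (heads, seq, d); k, v: (seq, d) -- the single shared head.
    Returns (heads, seq, d).
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (heads, seq, seq)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                              # mix the shared values
```

In standard multi-head attention k and v would carry a `heads` dimension too; dropping it cuts KV-cache memory and bandwidth by the head count at decode time.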
tsingcoo/NaCGEC
tsingcoo/ParroT
The ParroT framework enhances and regulates translation abilities during chat, built on open-source LLMs (e.g., LLaMA-7b, Bloomz-7b1-mt) with human-written translation and evaluation data.
tsingcoo/self-speculative-decoding
Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding"
tsingcoo/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
tsingcoo/alibi
ALiBi mask implementation
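The ALiBi mask this repository implements is simple enough to sketch directly: instead of positional embeddings, each attention head adds a linear penalty proportional to query-key distance, with a head-specific geometric slope. A minimal NumPy version for the power-of-two head-count case (shapes and names are my own, not this repo's):

```python
import numpy as np

def alibi_slopes(n_heads: int) -> np.ndarray:
    # Geometric slopes 2^(-8/n), 2^(-16/n), ... as in the ALiBi paper
    # (assumes n_heads is a power of two, the simple case).
    return np.array([2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> np.ndarray:
    """Additive causal attention bias of shape (heads, seq, seq).

    Entry [h, i, j] is slope_h * (j - i) for j <= i (a penalty growing
    with distance) and -inf for future positions j > i.
    """
    slopes = alibi_slopes(n_heads)
    distance = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]  # j - i
    masked = np.where(distance <= 0, distance, -np.inf)  # causal mask
    return slopes[:, None, None] * masked
```

The bias is added to the attention scores before softmax; because it depends only on relative distance, the model extrapolates to sequences longer than those seen in training.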
tsingcoo/atec-nlp
ATEC Financial Brain: intelligent NLP services for finance
tsingcoo/BELLE
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese conversational LLM)
tsingcoo/bytepiece
A purer tokenizer with a higher compression rate
tsingcoo/CLG-CGEC
tsingcoo/COMET
A Neural Framework for MT Evaluation
tsingcoo/intro-llm.github.io
website
tsingcoo/llama
Inference code for LLaMA models
tsingcoo/MT-Reading-List
A machine translation reading list maintained by Tsinghua Natural Language Processing Group
tsingcoo/NJUWA
A word alignment tool by nju_wql
tsingcoo/nltk
NLTK Source
tsingcoo/nlu_sim
A variety of baseline models for sentence similarity
tsingcoo/TIM
Code for the paper "Teaching LM to Translate with Comparison"