whr94621's Stars
state-spaces/mamba
Mamba SSM architecture
facebookresearch/seamless_communication
Foundational Models for State-of-the-Art Speech and Text Translation
LouisShark/chatgpt_system_prompt
A collection of GPT system prompts and various prompt injection/leaking knowledge.
SJTU-IPADS/PowerInfer
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
pytorch-labs/gpt-fast
Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.
microsoft/promptbase
All things prompt engineering
mars-project/mars
Mars is a tensor-based unified framework for large-scale data computation which scales numpy, pandas, scikit-learn and Python functions.
openai/weak-to-strong
deepseek-ai/DeepSeek-LLM
DeepSeek LLM: Let there be answers
allenai/open-instruct
hao-ai-lab/LookaheadDecoding
[ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
zchoi/Awesome-Embodied-Agent-with-LLMs
This is a curated list of "Embodied AI or robot with Large Language Models" research. Watch this repository for the latest updates! 🔥
pjlab-sys4nlp/llama-moe
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
thu-coai/Safety-Prompts
Chinese safety prompts for evaluating and improving the safety of LLMs.
IEIT-Yuan/Yuan-2.0
Yuan 2.0 Large Language Model
XueFuzhao/InstructionWild
srush/annotated-mamba
Annotated version of the Mamba paper
InternLM/agentlego
Enhance LLM agents with rich tool APIs
OpenBMB/InfiniteBench
Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718
morecry/CharacterEval
thu-coai/SafetyBench
Official github repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety.
allenai/catwalk
This project studies the performance and robustness of language models and task-adaptation methods.
mutonix/RefGPT
wangwenju269/work_project
An archive of NLP project records
allenai/bff
blcuicall/OMGEval
OMGEval😮: An Open Multilingual Generative Evaluation Benchmark for Foundation Models
claws-lab/XLingEval
Code and Resources for the paper, "Better to Ask in English: Cross-Lingual Evaluation of Large Language Models for Healthcare Queries"
SAP/software-documentation-data-set-for-machine-translation
A parallel evaluation data set of SAP software documentation with document structure annotation
neulab/globalbench
GlobalBench: A Benchmark for Global Progress in Language Technology
sjtu-compling/MELA