WJMacro's Stars
2dust/v2rayN
A GUI client for Windows, supporting Xray core, v2fly core, and others
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
facebookresearch/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
lllyasviel/ControlNet
Let us control diffusion models!
openai/evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
OpenBMB/MiniCPM
MiniCPM3-4B: An edge-side LLM that surpasses GPT-3.5-Turbo.
wooorm/franc
Natural language detection
MLNLP-World/Paper-Writing-Tips
A repository maintained by the MLNLP community to help authors avoid common mistakes in paper submissions. Paper Writing Tips
eric-mitchell/direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
anthropics/hh-rlhf
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
allenai/natural-instructions
Expanding natural instructions
princeton-nlp/SimPO
[NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward
facebookresearch/fairseq2
FAIR Sequence Modeling Toolkit 2
JusticeFighterDance/JusticeFighter110
Evidence disclosure of the malicious cluster-attack incident involving 田柯宇 (Tian Keyu)
centerforaisafety/HarmBench
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
ictnlp/BayLing
BayLing ("百聆") is an English/Chinese LLM built on LLaMA and equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following, and multi-turn interaction, and reaching about 90% of ChatGPT's performance on multilingual and general-task evaluations.
UIC-Liu-Lab/ContinualLM
An Extensible Continual Learning Framework Focused on Language Models (LMs)
ZNLP/BigTranslate
BigTranslate: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages
AGI-Edgerunners/LLM-Continual-Learning-Papers
Must-read Papers on Large Language Model (LLM) Continual Learning
Spico197/Humback
🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation.
sail-sg/sdft
[ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".
microsoft/gpt-MT
cwang621/blsp
BLSP: Bootstrapping Language-Speech Pre-training via Behavior Alignment of Continuation Writing
NJUNLP/MAPO
The implementation of the ACL 2024 paper "MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization"
ldery/Bonsai
Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes"
SeaEval/SeaEval
NAACL 2024: SeaEval for Multilingual Foundation Models: From Cross-Lingual Alignment to Cultural Reasoning
dirkiedai/sk-mt
This is the official code for our paper "Simple and Scalable Nearest Neighbor Machine Translation" (ICLR 2023).
xydaytoy/EVA
Peter-Devine/multilingual_mt_bench
A fork of the lm-sys/FastChat repo that evaluates models on multilingual versions of MT-Bench
vyraun/literalness
Code for "Do GPTs Produce Less Literal Translations?"