MarcellusZhao
I am a master's thesis student at EPFL.
École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
MarcellusZhao's Stars
3b1b/manim
Animation engine for explanatory math videos
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
ikatyang/emoji-cheat-sheet
A markdown version of the emoji cheat sheet
nlpxucan/WizardLM
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
google-research/arxiv-latex-cleaner
arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv
jonbarron/website
EleutherAI/pythia
The hub for EleutherAI's work on interpretability and learning dynamics
elax46/custom-brand-icons
Custom brand icons for Home Assistant
jxmorris12/vec2text
utilities for decoding deep representations (like sentence embeddings) back to text
hsiehjackson/RULER
This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models?
FranxYao/Long-Context-Data-Engineering
Implementation of paper Data Engineering for Scaling Language Models to 128K Context
GPT-Fathom/GPT-Fathom
GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well as OpenAI's earlier models on 20+ curated benchmarks under aligned settings.
tianyi-lab/Reflection_Tuning
[ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
centerforaisafety/HarmBench
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
Re-Align/URIAL
3DAgentWorld/Toolkit-for-Prompt-Compression
Toolkit for Prompt Compression
JailbreakBench/jailbreakbench
JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]
vivek3141/dl-visualization
This is the source code for the animations in the series "Visualizing Deep Learning"
tml-epfl/llm-adaptive-attacks
Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]
allenai/WildBench
Benchmarking LLMs with Challenging Tasks from Real Users
Re-Align/just-eval
A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs.
epfml/llm-baselines
LINs-lab/RDED
[CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm
max-andr/adversarial-random-search-gpt4
Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023]
abertsch72/long-context-icl
Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration"
tml-epfl/icl-alignment
Is In-Context Learning Sufficient for Instruction Following in LLMs?
YuejiangLIU/csl
[Preprint] Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts
tml-epfl/long-is-more-for-alignment
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024]
fra31/rlhf-trojan-competition-submission