sori424's Stars
skywalker023/fantom
👻 Code and benchmark for our EMNLP 2023 paper - "FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions"
allenai/scruples
A corpus and code for understanding norms and subjectivity. 🤖
belindal/state-probes
Code for the paper "Implicit Representations of Meaning in Neural Language Models"
nyu-mll/jiant-v1-legacy
The jiant toolkit for general-purpose text understanding models
john-hewitt/control-tasks
Repository describing example random control tasks for designing and interpreting neural probes
nyu-mll/jiant
jiant is an NLP toolkit
hendrycks/ethics
Aligning AI With Shared Human Values (ICLR 2021)
mistralai/mistral-inference
Official inference library for Mistral models
john-hewitt/conditional-probing
Codebase for running (conditional) probing experiments
john-hewitt/structural-probes
Codebase for testing whether hidden states of neural networks encode discrete structures.
msclar/symbolictom
Nealcly/templateNER
Source code for template-based NER
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
google-research/adapter-bert
bofenghuang/vigogne
French instruction-following and chat models
hipe-eval/HIPE-2022-data
Data for the HIPE 2022 shared task.
princeton-nlp/tree-of-thought-llm
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
meta-llama/llama
Inference code for Llama models
OpenGVLab/LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
lupantech/ScienceQA
Data and code for NeurIPS 2022 Paper "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering".
Lightning-AI/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
openai/whisper
Robust Speech Recognition via Large-Scale Weak Supervision
daanelson/alpaca-lora
Instruct-tune LLaMA on consumer hardware
nicola-decao/KnowledgeEditor
Code for Editing Factual Knowledge in Language Models
AGI-Edgerunners/LLM-Adapters
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
22-hours/cabrita
Fine-tuning InstructLLaMA with Portuguese data
facebookresearch/fastMRI
A large-scale dataset of both raw MRI measurements and clinical MRI images.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.