dsj96's Stars
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
geekan/MetaGPT
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
facebookresearch/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
mli/paper-reading
Paragraph-by-paragraph close readings of classic and recent deep learning papers
facebookresearch/fastText
Library for fast text representation and classification.
karpathy/llama2.c
Inference Llama 2 in one file of pure C
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
CompVis/taming-transformers
Taming Transformers for High-Resolution Image Synthesis
camel-ai/camel
🐫 CAMEL: Finding the Scaling Law of Agents. A multi-agent framework. https://www.camel-ai.org
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
facebookresearch/ReAgent
A platform for Reasoning systems (Reinforcement Learning, Contextual Bandits, etc.)
facebookresearch/MUSE
A library for Multilingual Unsupervised or Supervised word Embeddings
ahmetbersoz/chatgpt-prompts-for-academic-writing
This list of writing prompts covers a range of topics and tasks, including brainstorming research ideas, improving language and style, conducting literature reviews, and developing research plans.
pliang279/awesome-phd-advice
Collection of advice for prospective and current PhD students
gururise/AlpacaDataCleaned
Alpaca dataset from Stanford, cleaned and curated
ConnorJL/GPT2
An implementation of GPT-2 training, with TPU support
keirp/automatic_prompt_engineer
thunlp/OpenDelta
A plug-and-play library for parameter-efficient-tuning (Delta Tuning)
WeOpenML/PandaLM
Helsinki-NLP/Tatoeba-Challenge
lucidrains/MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
Unbabel/COMET
A Neural Framework for MT Evaluation
PANXiao1994/mRASP2
microsoft/gpt-MT
twinkle0331/LGTM
[ACL 2023] Code for paper “Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation”(https://arxiv.org/abs/2305.09651)
samuki/reinforce-joey
A fork of Joey-NMT adding reinforcement learning algorithms such as Policy Gradient, Minimum Risk Training (MRT), and Advantage Actor-Critic.
rbawden/mt-bigscience
Evaluation results for Machine Translation within the BigScience project
OrangeInSouth/Pareto-Mutual-Distillation
Implementation of Pareto-Mutual-Distillation (paper: Towards Higher Pareto Frontier in Multilingual Machine Translation)
rbawden/promptsource
Toolkit for collecting and applying templates of prompting instances