keshavramji's Stars
IBM/ensemble-instruct
Codebase release for the EMNLP 2023 paper.
IBM/iter-refine-dialgen
Internship project
princeton-nlp/tree-of-thought-llm
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
Liuhong99/Sophia
The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”
madaan/self-refine
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
meta-llama/llama
Inference code for Llama models
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
SinclairCoder/Instruction-Tuning-Papers
Reading list of instruction-tuning papers. A trend starting from Natural Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
lucidrains/PaLM-rlhf-pytorch
Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT, but with PaLM.
HornHehhf/Equi-Separation
primeqa/primeqa
The prime repository for state-of-the-art Multilingual Question Answering research and development.
IBM/transition-amr-parser
SoTA Abstract Meaning Representation (AMR) parsing with word-node alignments in PyTorch. Includes checkpoints and other tools, such as statistical significance testing for Smatch.