e0397123
Research Fellow @ National University of Singapore
National University of Singapore (ECE-HLT) · Singapore
e0397123's Stars
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
THUDM/ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM | Open-source bilingual dialogue language model
nlpxucan/WizardLM
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
openlm-research/open_llama
OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B, trained on the RedPajama dataset
InternLM/InternLM
Official release of the InternLM2.5 base and chat models, with 1M-token context support
imoneoi/openchat
OpenChat: Advancing Open-source Language Models with Imperfect Data
MaartenGr/KeyBERT
Minimal keyword extraction with BERT
project-baize/baize-chatbot
Let ChatGPT teach your own chatbot in hours with a single GPU!
openai/human-eval
Code for the paper "Evaluating Large Language Models Trained on Code"
allenai/open-instruct
amazon-science/auto-cot
Official implementation of "Automatic Chain of Thought Prompting in Large Language Models" (more updates to come)
tatsu-lab/alpaca_eval
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
google-research/FLAN
davidmrau/mixture-of-experts
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
aneesha/RAKE
A Python implementation of the Rapid Automatic Keyword Extraction (RAKE) algorithm
tatsu-lab/alpaca_farm
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
salesforce/xgen
Salesforce open-source LLMs with 8k sequence length.
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
OpenLMLab/LEval
[ACL'24 Outstanding Paper] Data and code for L-Eval, a comprehensive benchmark for evaluating long-context language models
declare-lab/flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
ictnlp/BayLing
BayLing (百聆) is an English/Chinese large language model built on LLaMA with enhanced language alignment, showing superior capability in English/Chinese generation, instruction following, and multi-turn interaction, and achieving roughly 90% of ChatGPT's performance on multilingual and general-task evaluations.
declare-lab/flacuna
Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection we curated that spans a variety of tasks. Vicuna is already an excellent writing assistant; the intention behind Flacuna was to enhance its problem-solving capabilities.
neelsjain/BYOD
The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models"
leehanchung/lora-instruct
Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
KohakuBlueleaf/guanaco-lora
Instruct-tune LLaMA on consumer hardware
linhduongtuan/doctorwithbloom
We fine-tune Bloomz-7b1-mt using LoRA on the ChatDoctor-200k dataset; the resulting models are available at https://huggingface.co/LinhDuong/doctorwithbloomz-7b1-mt and https://huggingface.co/LinhDuong/doctorwithbloomz-7b1.
yanzhangnlp/BSL
Bootstrapped Unsupervised Sentence Representation Learning (ACL 2021)
PlusLabNLP/ACCENT