milliemaoo's Stars
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
meta-llama/llama
Inference code for Llama models
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
karpathy/llama2.c
Inference Llama 2 in one file of pure C
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
THUDM/ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM | Open-source bilingual dialogue language model
microsoft/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
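A minimal sketch of the idea behind LoRA (hypothetical toy code, not the loralib API): instead of updating a full frozen weight matrix W0 of shape d×k, learn a low-rank update ΔW = B @ A with B of shape d×r and A of shape r×k, where r << min(d, k). The adapter path can be computed as B @ (A @ x) without ever materializing ΔW.

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def lora_forward(x, W0, A, B, scaling=1.0):
    """y = (W0 + scaling * B @ A) @ x, computed as two cheap paths:
    the frozen path W0 @ x plus the low-rank path B @ (A @ x)."""
    base = matvec(W0, x)               # frozen pretrained weights
    low = matvec(B, matvec(A, x))      # trainable rank-r update
    return [b + scaling * l for b, l in zip(base, low)]

# Toy example: d=2, k=3, rank r=1 (all values made up for illustration).
W0 = [[1, 0, 0], [0, 1, 0]]
A = [[1, 1, 1]]        # r x k
B = [[1], [0]]         # d x r
print(lora_forward([1, 2, 3], W0, A, B))  # → [7.0, 2.0]
```

Only A and B are trained, so the number of trainable parameters drops from d·k to r·(d+k).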
EleutherAI/lm-evaluation-harness
A framework for few-shot evaluation of language models.
princeton-nlp/tree-of-thought-llm
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
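A toy sketch of the Tree-of-Thoughts control loop (a hypothetical simplification, not the princeton-nlp code): breadth-first search that expands each partial "thought" into candidates, scores them with a value function, and keeps the best `breadth` per step. In the paper the expand and score functions are LLM calls; here they are placeholders.

```python
def tree_of_thoughts(root, expand, score, steps=3, breadth=2):
    """Breadth-first search over thoughts: expand, score, prune."""
    frontier = [root]
    for _ in range(steps):
        # Generate candidate continuations of every thought in the frontier.
        candidates = [c for t in frontier for c in expand(t)]
        # Keep only the `breadth` highest-valued candidates.
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:breadth]
    return max(frontier, key=score)
```

For example, with `expand = lambda t: [t + "a", t + "b"]` and `score = lambda t: t.count("a")`, starting from the empty string the search returns `"aaa"` after three steps.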
bigscience-workshop/promptsource
Toolkit for creating, sharing and using natural language prompts.
reiinakano/scikit-plot
An intuitive library to add plotting functionality to scikit-learn objects.
spcl/graph-of-thoughts
Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models"
hkust-nlp/ceval
Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]
allenai/open-instruct
hendrycks/test
Measuring Massive Multitask Language Understanding | ICLR 2021
Tencent/TencentPretrain
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
microsoft/ToRA
ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting with tools [ICLR'24].
sail-sg/lorahub
[COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
txsun1997/LMaaS-Papers
Awesome papers on Language-Model-as-a-Service (LMaaS)
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
suzgunmirac/BIG-Bench-Hard
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
declare-lab/flan-alpaca
This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as Flan-T5.
tonyzhaozh/few-shot-learning
Few-shot Learning of GPT-3
HeddaCohenIndelman/Learning-Gumbel-Sinkhorn-Permutations-w-Pytorch
PyTorch implementation of "Learning Latent Permutations with Gumbel-Sinkhorn Networks"
arazd/ResidualPrompts
Residual Prompt Tuning: a method for faster and better prompt tuning.
OpenBMB/DecT
Source code for ACL 2023 paper Decoder Tuning: Efficient Language Understanding as Decoding
Pale-Blue-Dot-97/Minerva
The Minerva project provides the minerva package, which aids in fitting and testing neural network models and includes pre- and post-processing of land cover data. Designed for use with torchgeo datasets.
xyltt/LPT
This repo contains the code for Late Prompt Tuning.
ecs-vlc/iridis-useful-scripts
gyanendrol9/ConversationMOC