ruocwang
Founder @turningpoint-ai. PhD at UCLA and Google. Working on Multimodal & Agents.
University of California at Los Angeles
ruocwang's Stars
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
dair-ai/Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
rasbt/LLMs-from-scratch
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
karpathy/LLM101n
LLM101n: Let's build a Storyteller
Vision-CAIR/MiniGPT-4
Open-sourced code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supports a number of candidate inference solutions, such as HF TGI and vLLM, for local or cloud deployment. Demo apps to showcase Meta Llama for WhatsApp & Messenger.
Mooler0410/LLMsPracticalGuide
A curated list of practical guide resources for LLMs (LLM tree, examples, papers)
promptslab/Awesome-Prompt-Engineering
This repository contains hand-curated resources for prompt engineering, with a focus on Generative Pre-trained Transformers (GPT), ChatGPT, PaLM, etc.
Eladlev/AutoPrompt
A framework for prompt tuning using Intent-based Prompt Calibration
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
MLGroupJLU/LLM-eval-survey
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
toshas/torch-fidelity
High-fidelity performance metrics for generative models in PyTorch
SalesforceAIResearch/uni2ts
Unified Training of Universal Time Series Forecasting Transformers
andyzoujm/representation-engineering
Representation Engineering: A Top-Down Approach to AI Transparency
suzgunmirac/BIG-Bench-Hard
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
weixi-feng/Structured-Diffusion-Guidance
Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis
jxzhangjhu/Awesome-LLM-Prompt-Optimization
Awesome-LLM-Prompt-Optimization: a curated list of advanced prompt optimization and tuning methods in Large Language Models
AGI-Edgerunners/LLM-Optimizers-Papers
Must-read papers on Large Language Models (LLMs) as optimizers and on automatic prompt optimization for LLMs.
wenhuchen/Program-of-Thoughts
Data and Code for Program of Thoughts (TMLR 2023)
Lichang-Chen/InstructZero
Official implementation of InstructZero, the first framework to optimize bad prompts for ChatGPT (API LLMs) into good prompts!
xiangning-chen/DrNAS
Code for our ICLR'2021 paper "DrNAS: Dirichlet Neural Architecture Search"
Mavenoid/prompt-hyperopt
Improve prompts for models such as GPT-3 and GPT-J using templates and hyperparameter optimization.
ruocwang/dpo-diffusion
[ICML 2024] On Discrete Prompt Optimization for Diffusion Models - Google
measure-infinity/mulan-code
xirui-li/DrAttack
Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers
mayank31398/pseudo-code-instructions
Pseudo-code Instructions dataset
ruocwang/mixture-of-prompts
[ICML 2024] One Prompt is Not Enough: Automated Construction of a Mixture-of-Expert Prompts - TurningPoint AI
ruocwang/llm-symbolic-program
Official implementation: Large Language Models are Interpretable Learners - Google
johnsonkao0213/Formulate_and_Solve
Official implementation and dataset for the EMNLP'24 paper "Solving for X and Beyond: Can Large Language Models Solve Complex Math Problems with More-Than-Two Unknowns?"
ruocwang/GM-NAS
Code for our ICLR'2022 paper "Generalizing Few-Shot NAS with Gradient Matching"