AaronWhy
Ph.D. @ Rutgers University. Previously an undergraduate @ Peking University.
Rutgers University · New Jersey, USA
Pinned Repositories
ALBERT
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Awesome-LLM
Awesome-LLM: a curated list of Large Language Models
Awesome-LLM-Uncertainty-Reliability-Robustness
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
gluon-ts
Probabilistic time series modeling in Python
llama
Inference code for LLaMA models
LLM_Evaluation
llmux
llmux is an experimental operating system that helps software development with Large Language Models
MAgent
A Platform for Many-agent Reinforcement Learning
NERF
Code for our paper Non-Autoregressive Electron Redistribution Modeling for Reaction Prediction (ICML 2021)
llm-continual-learning-survey
Continual Learning of Large Language Models: A Comprehensive Survey
AaronWhy's Repositories
AaronWhy/NERF
Code for our paper Non-Autoregressive Electron Redistribution Modeling for Reaction Prediction (ICML 2021)
AaronWhy/ALBERT
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
AaronWhy/Awesome-LLM
Awesome-LLM: a curated list of Large Language Models
AaronWhy/Awesome-LLM-Uncertainty-Reliability-Robustness
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
AaronWhy/gluon-ts
Probabilistic time series modeling in Python
AaronWhy/llama
Inference code for LLaMA models
AaronWhy/LLM_Evaluation
AaronWhy/llmux
llmux is an experimental operating system that helps software development with Large Language Models
AaronWhy/MAgent
A Platform for Many-agent Reinforcement Learning
AaronWhy/pivot_analysis
The pivot analysis algorithm for lexical analysis of text attribute transfer
AaronWhy/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
AaronWhy/opro
Official code for "Large Language Models as Optimizers"
AaronWhy/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP
AaronWhy/Turiss
Google ML Camp 2020, By Turiss