Kyriection
I am a Ph.D. student at @VITA-Group, University of Texas at Austin. My research interests include trustworthy, efficient, and quantum machine learning.
The University of Texas at Austin, Austin, TX, USA
Pinned Repositories
H2O
[NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models.
GaLore
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Awesome-Distributed-Quantum-Computing
An awesome list of references for distributed quantum computing
Awesome-Quantum-Compiler
An awesome list of references for quantum compilers
DoRA
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
llama-recipes
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A, and a number of candidate inference solutions, such as HF TGI and vLLM, for local or cloud deployment. Includes demo apps showcasing Meta Llama for WhatsApp & Messenger.
BNN_NoBN
[CVPRW 21] "BNN - BN = ? Training Binary Neural Networks without Batch Normalization", Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang
PrAC-LTH
[ICML 2021] "Efficient Lottery Ticket Finding: Less Data is More" by Zhenyu Zhang*, Xuxi Chen*, Tianlong Chen*, Zhangyang Wang
Q-GaLore
Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients.
Kyriection's Repositories
Kyriection/open_lth
A repository in preparation for open-sourcing lottery ticket hypothesis code.
Kyriection/Adv-SS-Pretraining
[CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
Kyriection/SSE-Net-Update