Pinned Repositories
AutoDNNchip
BNS-GCN
[MLSys 2022] "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling" by Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, Yingyan Lin
DepthShrinker
[ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, and Yingyan (Celine) Lin.
Early-Bird-Tickets
[ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks
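The early-bird ticket idea is to detect a winning sparse subnetwork early in training by checking when the magnitude-pruning mask stops changing between epochs. A minimal NumPy sketch of that mask-distance check (function names are illustrative, not taken from the repo):

```python
import numpy as np

def magnitude_mask(weights, prune_ratio):
    # Binary mask that prunes the `prune_ratio` fraction of
    # smallest-magnitude weights and keeps the rest.
    flat = np.abs(weights).ravel()
    k = int(len(flat) * prune_ratio)
    thresh = np.partition(flat, k)[k] if k > 0 else -np.inf
    return np.abs(weights) >= thresh

def mask_distance(m1, m2):
    # Normalized Hamming distance between two binary masks; an
    # early-bird ticket is declared once this stays below a threshold
    # across consecutive epochs.
    return float(np.mean(m1 != m2))
```

In the paper's scheme, training can stop (and pruning + retraining begin) as soon as `mask_distance` between recent epochs falls under a small tolerance.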
Edge-LLM
[DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting
HW-NAS-Bench
[ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark
mg-verilog
ShiftAddLLM
ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization
ShiftAddNet
[NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network
ViTCoD
[HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design
EIC@GaTech's Repositories
GATECH-EIC/HW-NAS-Bench
[ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark
GATECH-EIC/ViTCoD
[HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design
GATECH-EIC/ShiftAddLLM
ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization
GATECH-EIC/BNS-GCN
[MLSys 2022] "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling" by Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, Yingyan Lin
GATECH-EIC/Edge-LLM
[DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting
GATECH-EIC/DepthShrinker
[ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, and Yingyan (Celine) Lin.
GATECH-EIC/mg-verilog
GATECH-EIC/ShiftAddViT
[NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer
GATECH-EIC/PipeGCN
[ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Nam Sung Kim, Yingyan Lin
GATECH-EIC/CPT
[ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, and Yingyan (Celine) Lin.
GATECH-EIC/ACT
[ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration
GATECH-EIC/Linearized-LLM
[ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models
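For context on what "linear attention" means here: softmax attention costs O(N²) in sequence length, while kernelized variants reassociate the matrix products to get O(N). Below is a generic NumPy sketch of that reassociation trick (a standard kernel feature-map formulation, not this paper's specific method; `phi` is an assumed feature map):

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1e-6):
    # Kernelized attention: softmax(QK^T)V is approximated by
    # phi(Q) (phi(K)^T V) / normalizer. Computing K^T V first costs
    # O(N d^2) instead of the O(N^2 d) of forming the N x N matrix.
    Qp, Kp = phi(Q), phi(K)       # (N, d) feature-mapped queries/keys
    KV = Kp.T @ V                 # (d, d_v) summary of keys and values
    Z = Qp @ Kp.sum(axis=0)       # (N,) per-query normalizer
    return (Qp @ KV) / Z[:, None]
```

With constant queries and keys the output reduces to the mean of `V`, matching the behavior of uniform attention weights.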
GATECH-EIC/Castling-ViT
[CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference
GATECH-EIC/DNN-Chip-Predictor
[ICASSP 2020] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures

GATECH-EIC/ViTALiTy
[HPCA 2023] ViTALiTy code repository
GATECH-EIC/SuperTickets
[ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning
GATECH-EIC/LLM4HWDesign_Starting_Toolkit
LLM4HWDesign Starting Toolkit
GATECH-EIC/S3-Router
[NeurIPS 2022] "Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing" by Yonggan Fu, Yang Zhang, Kaizhi Qian, Zhifan Ye, Zhongzhi Yu, Cheng-I Lai, Yingyan Lin
GATECH-EIC/NeRFool
[ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin
GATECH-EIC/ShiftAddNAS
[ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks
GATECH-EIC/torchshiftadd
An open-source PyTorch library for developing energy-efficient, multiplication-less models and applications.
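As a rough illustration of the multiplication-less idea behind this line of work (ShiftAddNet, ShiftAddLLM, torchshiftadd): weights can be snapped to signed powers of two, so each multiply becomes a sign flip plus a bit shift in hardware. A toy NumPy sketch of that quantization step (not the library's actual API):

```python
import numpy as np

def shift_quantize(w, eps=1e-12):
    # Replace each weight with sign(w) * 2^k for the nearest integer
    # exponent k, so w * x can be implemented as a shift of x.
    sign = np.sign(w)
    k = np.round(np.log2(np.abs(w) + eps))
    return sign * np.exp2(k)
```

For example, 0.3 snaps to 0.25 (= 2^-2) and -1.5 snaps to -2.0 (= -2^1); the "add" half of shift-add then recovers accuracy by combining several such terms.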
GATECH-EIC/HALO
[ECCV 2020] "HALO: Hardware-aware Learning to Optimize" (official code)
GATECH-EIC/TinyML-Contest-Solution
GATECH-EIC/NASA
[ICCAD 2022] NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks
GATECH-EIC/AmoebaLLM
[NeurIPS 2024] "AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment" by Yonggan Fu, Zhongzhi Yu, Junwei Li, Jiayi Qian, Yongan Zhang, Xiangchi Yuan, Dachuan Shi, Roman Yakunin, and Yingyan (Celine) Lin.
GATECH-EIC/Omni-Recon
[ECCV 2024 Oral] "Omni-Recon: Harnessing Image-based Rendering for General-Purpose Neural Radiance Fields" by Yonggan Fu, Huaizhi Qu, Zhifan Ye, Chaojian Li, Kevin Zhao, and Yingyan (Celine) Lin.
GATECH-EIC/TinyML2023EIC-Gatech-Open
GATECH-EIC/Hint-Aug
GATECH-EIC/DiffRatio-MoD
Layer- and Timestep-Adaptive Differentiable Token Compression Ratios for Efficient Diffusion Transformers
GATECH-EIC/Spline-EB
[TMLR] Max-Affine Spline Insights Into Deep Network Pruning