horseee/Awesome-Efficient-LLM

Suggest incorporating an efficient PLM finetuning/compression paper

weitianxin opened this issue · 1 comment

Thank you for maintaining this excellent GitHub repository. I'm curious about the possibility of adding our recent ICML'23 work, conducted at UIUC. The research focuses on a one-shot compression technique for Pre-trained Language Models (PLMs): it studies the neural tangent kernel (NTK) of the multilayer perceptron (MLP) modules in PLMs and proposes a more efficient PLM by fusing MLPs such that the fused model approximates the original NTK.
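To make the NTK idea concrete, below is a minimal sketch, assuming PyTorch, of computing an empirical NTK entry for a toy MLP block. It is only an illustration of what "the NTK of an MLP module" refers to, not the paper's actual fusion algorithm; the `mlp` architecture, `param_jacobian` helper, and layer sizes are made up for the example.

```python
# Illustrative sketch (not the paper's implementation): empirical NTK entry
# for a small MLP block. Two MLPs whose empirical NTKs are close are expected
# to behave similarly under (lazy-regime) training, which is the sense in
# which a fused MLP can "approximate the NTK" of the original module.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy MLP standing in for an MLP block inside a PLM layer (sizes are made up).
mlp = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))

def param_jacobian(model, x):
    # Gradient of the (summed) output w.r.t. all parameters, flattened into
    # one vector. Summing the outputs keeps the example to a single vector
    # per input instead of a full per-output Jacobian.
    out = model(x).sum()
    grads = torch.autograd.grad(out, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

x1 = torch.randn(1, 16)
x2 = torch.randn(1, 16)

# Empirical NTK entry: inner product of parameter gradients at two inputs.
ntk_12 = param_jacobian(mlp, x1) @ param_jacobian(mlp, x2)
print(f"empirical NTK entry K(x1, x2) = {ntk_12.item():.4f}")
```

In this toy setting, comparing such kernel entries between an original MLP and a smaller, fused one gives a rough measure of how well the fused module preserves the original's training behavior.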

Hi @weitianxin, I've added your paper under efficient_plm/others.