- Dataset Distillation by Matching Training Trajectories, CVPR 2022, George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
- Dataset Distillation, arXiv, Tongzhou Wang
- Flexible Dataset Distillation: Learn Labels Instead of Images, NeurIPS 2020, Ondrej Bohdal, Yongxin Yang, Timothy Hospedales
- Dataset Condensation with Gradient Matching, ICLR 2021, Bo Zhao, Konda Reddy Mopuri, Hakan Bilen
- Dataset Condensation with Contrastive Signals, ICML 2022, Saehyung Lee, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, Sungroh Yoon
- Dataset Meta-Learning from Kernel Ridge-Regression, ICLR 2021, Timothy Nguyen, Zhourong Chen, Jaehoon Lee
- Dataset Distillation with Infinitely Wide Convolutional Networks, NeurIPS 2021, Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee
- CAFE: Learning to Condense Dataset by Aligning Features, CVPR 2022, Kai Wang, Bo Zhao, Xiangyu Peng, Zheng Zhu, Shuo Yang, Shuo Wang, Guan Huang, Hakan Bilen, Xinchao Wang, Yang You
- Dataset Condensation with Differentiable Siamese Augmentation, ICML 2021, Bo Zhao, Hakan Bilen
- Dataset Condensation via Efficient Synthetic-Data Parameterization, ICML 2022, Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, Hyun Oh Song
- Soft-Label Dataset Distillation and Text Dataset Distillation, IJCNN 2021, Ilia Sucholutsky, Matthias Schonlau
- Dataset Condensation with Distribution Matching, arXiv, Bo Zhao, Hakan Bilen
- Data Distillation for Text Classification, arXiv 2021, Yongqi Li, Wenjie Li