This code reproduces some of the experiments from Lopez et al. 2016 and Collier et al. 2022, two papers that propose knowledge transfer techniques for the LUPI (learning using privileged information) paradigm. Additionally, we conduct a real-world experiment using an open-source dataset from the IJCAI-15 competition.
The notebooks `gen_dist_exp.ipynb` and `tram_exp.ipynb` contain experiments with synthetic data for Generalized distillation (Lopez et al. 2016) and TRAM (Collier et al. 2022).
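For orientation, Generalized distillation trains a teacher on the privileged features and then fits the student to a mixture of the true labels and the teacher's temperature-softened predictions. The snippet below is a minimal PyTorch sketch of that student loss for binary labels; the temperature `T` and mixing weight `lam` are illustrative hyperparameters, not values taken from the notebooks.

```python
# Minimal sketch of the generalized-distillation student loss for binary
# classification (our rendering of Lopez et al. 2016); T and lam are
# illustrative hyperparameters, not values used in this repository.
import torch
import torch.nn.functional as F

def gd_student_loss(student_logits, teacher_logits, y, T=2.0, lam=0.5):
    # Teacher's temperature-softened targets, computed from its logits.
    soft_targets = torch.sigmoid(teacher_logits / T)
    # Imitation term: match the teacher's softened beliefs.
    distill = F.binary_cross_entropy_with_logits(student_logits, soft_targets)
    # Supervised term: fit the observed hard labels.
    hard = F.binary_cross_entropy_with_logits(student_logits, y.float())
    return lam * distill + (1.0 - lam) * hard
```

Setting `lam = 1` recovers pure imitation of the teacher, while `lam = 0` ignores the privileged teacher entirely.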
This code reproduces the MNIST experiment from Lopez et al. 2016 and extends training beyond the original 50 epochs. The code has been ported to Python 3.9, which required changing some of the requirements.
For training, download the MNIST dataset and put it in `mnist/data/`, then `cd` to the `mnist/` folder and run:

`python mnist_varying_size.py`
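If you do not have a local copy of MNIST yet, one convenient way to fetch it is via `torchvision`, as sketched below. This is only a suggestion on our part: `mnist_varying_size.py` may expect a different raw file layout under `mnist/data/`, so adjust as needed.

```python
# Optional helper (our suggestion, not part of this repository): download
# MNIST into mnist/data/ with torchvision. The training script may expect a
# different raw layout, so verify before running it.
from torchvision import datasets

for train_split in (True, False):
    datasets.MNIST(root="mnist/data", train=train_split, download=True)
```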
This code reproduces the SARCOS experiment from Lopez et al. 2016 and extends it with a naive baseline of predicting flat zeros (a minimal sketch of this baseline follows the run instructions below). The code is from 2016 and requires Python 2.7.18 to run. Because the original code did not specify its requirements, we have added the highest working requirements we could find; `sarcos/requirements.txt` contains the necessary dependencies.
For training, download the SARCOS dataset and put it in `sarcos/data/`, then `cd` to the `sarcos/` folder and run:

`python sarcos.py`
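For reference, the flat-zeros baseline simply scores an all-zero prediction against the test targets. The NumPy sketch below illustrates the idea; the array `y_test` and the MSE / explained-variance metrics are our assumptions for illustration, not necessarily what `sarcos.py` reports.

```python
# Illustrative sketch of the flat-zeros baseline. y_test is assumed to hold
# the SARCOS test targets (joint torques) as an (n_samples, n_outputs) array;
# the metrics shown are our choice, not necessarily those of sarcos.py.
import numpy as np

y_test = np.random.randn(4449, 7)   # placeholder; load the real targets here
y_pred = np.zeros_like(y_test)      # the "flat zeros" baseline prediction

mse = np.mean((y_test - y_pred) ** 2, axis=0)
# Explained variance is exactly zero for this baseline by construction,
# since the residual y_test - 0 has the same variance as y_test itself.
explained_var = 1.0 - (y_test - y_pred).var(axis=0) / y_test.var(axis=0)
print(mse, explained_var)
```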
Real-world experiment with IJCAI-15 competition data (`bandit_data/`)
We compare Generalized distillation and TRAM on the Repeat Buyers Prediction dataset, a large-scale public dataset from the IJCAI-15 competition. The data provides users' activity logs from an online retail platform, including user-related features, information about items on sale, and implicit multi-behavioral feedback such as clicks, add-to-cart events, and purchases.
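As background for the comparison, our reading of TRAM (Collier et al. 2022) is that privileged information (PI) enters only the final layer during training, so its signal is transferred into the shared trunk, and at test time the PI is marginalized out. The PyTorch sketch below captures that structure; all dimensions, the Monte Carlo marginalization, and the class `TramNet` are our illustrative assumptions, not the paper's or this repository's exact implementation.

```python
# Rough TRAM-style sketch (our reading of Collier et al. 2022): privileged
# information feeds only the last layer during training, and is marginalized
# out at test time. Dimensions and marginalization scheme are placeholders.
import torch
import torch.nn as nn

class TramNet(nn.Module):
    def __init__(self, x_dim=32, pi_dim=8, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden + pi_dim, 1)  # last layer sees [trunk, PI]

    def forward(self, x, pi):
        # Training-time forward pass: concatenate trunk features with PI.
        return self.head(torch.cat([self.trunk(x), pi], dim=-1))

    @torch.no_grad()
    def predict_marginalized(self, x, pi_bank, n_samples=16):
        # Test-time prediction: approximate E_pi[f(x, pi)] by Monte Carlo
        # over PI vectors collected from the training set (pi_bank).
        idx = torch.randint(len(pi_bank), (n_samples,))
        probs = torch.stack(
            [torch.sigmoid(self(x, pi_bank[i].expand(len(x), -1))) for i in idx]
        )
        return probs.mean(dim=0)
```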
For training:
- download the data from https://tianchi.aliyun.com/dataset/42
- copy `user_info_format1.csv` and `user_log_format1.csv` to `bandit_data/data/IJCAI15/`
- `cd` to `bandit_data/`
- run `python train.py`
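After copying the files, a quick sanity check with pandas can confirm the data landed where `train.py` expects it. The snippet below is illustrative only; the column names follow the public IJCAI-15 schema, and the `action_type` coding in the comment is our assumption about the released data, so verify it against the dataset description.

```python
# Illustrative sanity check on the IJCAI-15 files; column names and the
# action_type coding are assumptions about the public release.
import pandas as pd

logs = pd.read_csv("bandit_data/data/IJCAI15/user_log_format1.csv")
users = pd.read_csv("bandit_data/data/IJCAI15/user_info_format1.csv")

print(logs.columns.tolist())
print(users.columns.tolist())
# Assumed coding: 0 = click, 1 = add-to-cart, 2 = purchase, 3 = add-to-favourite.
print(logs["action_type"].value_counts())
```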