Authors: Tianshu Shen, Zheda Mai, Ga Wu, Scott Sanner
All the bash scripts used to run the experiments can be found under `./cluster_bash/`; the experiments were run on Compute Canada.
Datasets:
- Yelp:
- ML10M:
- Hyperparameter-tuning configuration folder: `./conf_hp_search/`

For example, to run the hyperparameter-tuning script for the model using one of the configurations:

```bash
python hp_search.py --model_name VAEmultilayer_contrast --data_name yelp_SIGIR --conf VAEcontrast_tuning1.config --fold_name fold0 --top_items 10
```
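If tuning is repeated across cross-validation folds, a simple loop over the fold names works; the sketch below assumes folds `fold0` through `fold4` exist, which is an assumption rather than something documented here.

```bash
# Hypothetical sweep over cross-validation folds; fold0-fold4 are assumed,
# adjust the list to the folds actually present in the data directory.
for fold in fold0 fold1 fold2 fold3 fold4; do
  python hp_search.py --model_name VAEmultilayer_contrast \
    --data_name yelp_SIGIR --conf VAEcontrast_tuning1.config \
    --fold_name "$fold" --top_items 10
done
```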
- Model-evaluation configuration folder: `./conf/`

To run model evaluation for a given set of hyperparameters:

```bash
python model_evaluate.py --model_name VAEmultilayer_contrast --data_name yelp_SIGIR --conf VAEcontrast1.config --log_dir VAEcontrast1 --top_items 10 --rating_threshold 3
```
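To evaluate several hyperparameter settings back to back, the config files can be iterated over; this is a minimal sketch assuming the evaluation configs in `./conf/` follow the `VAEcontrast<k>.config` naming seen above (filenames beyond `VAEcontrast1.config` are assumptions).

```bash
# Hypothetical batch evaluation; only VAEcontrast1.config is confirmed to
# exist, other matching filenames are assumed. Each run logs under a
# directory named after its config file.
for conf in conf/VAEcontrast*.config; do
  name=$(basename "$conf" .config)
  python model_evaluate.py --model_name VAEmultilayer_contrast \
    --data_name yelp_SIGIR --conf "$name.config" \
    --log_dir "$name" --top_items 10 --rating_threshold 3
done
```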
- Model-saving configuration folder: `./conf/`

To save the model after the model-evaluation step:

```bash
python model_save.py --model_name VAEmultilayer_contrast --data_name yelp_SIGIR --data_dir fold0 --conf VAEmultilayer_contrast2.config --log_dir VAEmultilayer_contrast2 --top_items 10
```
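The README does not state where the checkpoint is written; since the simulation step below loads a `.pt` file from `models_DCE-VAE/`, one convenience (making no assumptions about the script's internals) is to list the repository's checkpoints by modification time to find the freshly saved model.

```bash
# List all PyTorch checkpoints in the repository, newest first, to locate
# the file written by model_save.py (GNU find syntax).
find . -name "*.pt" -printf "%T@ %p\n" | sort -rn | cut -d" " -f2-
```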
- Simulation configuration folder: `./conf_simulate/`

All the configurations for the different critiquing and clarification-based critiquing tasks are listed under this folder. For example, to run the simulation task for DCE's critiquing scenario on the Yelp dataset:

```bash
python simulate_yelp.py --saved_model models_DCE-VAE/VAEmultilayer_contrast3.pt --data_name yelp_SIGIR --data_dir fold0 --conf sim_abs_diff_neg1_noise0_expert.config --top_items 10
```
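To compare simulation settings, the configs can be swept with a glob; the sketch below assumes there are sibling configs varying the noise level alongside `sim_abs_diff_neg1_noise0_expert.config` (only the `noise0` file is confirmed above; the rest of the pattern is an assumption).

```bash
# Hypothetical sweep over simulation noise levels; the glob only picks up
# config files that actually exist under conf_simulate/.
for conf in conf_simulate/sim_abs_diff_neg1_noise*_expert.config; do
  python simulate_yelp.py --saved_model models_DCE-VAE/VAEmultilayer_contrast3.pt \
    --data_name yelp_SIGIR --data_dir fold0 \
    --conf "$(basename "$conf")" --top_items 10
done
```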
The repository is organized as follows:

- `cluster_bash` stores all the bash scripts used to run the experiments; all subdirectory names are self-explanatory. The scripts named `run_<dataset>.sh` were used for the Compute Canada executions and are not essential.
- `conf`, `conf_hp_search`, and `conf_simulate` hold the configurations for the model experiments.
- `experiments` stores the experiment results presented in the paper.
- `models` stores the models used in the paper.
- `saves` stores the models and their corresponding model performance.
- `tables` stores the experiment tables for the clarification-critiquing tasks.
Please cite:

```
@article{shen2022distributional,
  title={Distributional Contrastive Embedding for Clarification-based Conversational Critiquing},
  author={Shen, Tianshu and Mai, Zheda and Wu, Ga and Sanner, Scott},
  year={2022}
}
```