Delta Loss

This is the code repository for the paper A Simple Loss Function for Convergent Algorithm Synthesis using RNNs by Alexandre Salle and Shervin Malmasi.

It is based on the original Deep Thinking Systems repository (see the instructions reproduced below). The main modifications are in the file deepthinking/utils/training.py, in the function train_delta.
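For orientation, the sketch below shows one plausible form of a per-iteration "delta" penalty that pushes consecutive recurrent outputs to agree. It is a hypothetical illustration only; the loss actually used in the paper is the one implemented in train_delta.

```python
# Hypothetical sketch only -- the authoritative implementation is train_delta
# in deepthinking/utils/training.py; names and weighting here are assumptions.
import torch
import torch.nn.functional as F

def delta_style_loss(iterate_logits, targets):
    """iterate_logits: list of per-iteration logits; targets: ground-truth labels."""
    # Supervise the final iterate with the usual task loss.
    task_loss = F.cross_entropy(iterate_logits[-1], targets)
    # Penalize change between consecutive iterates to encourage convergence.
    delta = sum((curr - prev).pow(2).mean()
                for prev, curr in zip(iterate_logits[:-1], iterate_logits[1:]))
    return task_loss + delta / max(len(iterate_logits) - 1, 1)
```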

Installation

If you have Conda installed, run:

$ conda env create -f environment.yml
$ conda activate p38_delta_loss

Otherwise, install Python 3.8 and NVIDIA CUDA, then install the dependencies using pip:

$ pip install -r requirements.txt

Pre-trained models from paper

To download and test the pre-trained models, run:

$ ./test_release_models.sh

Models are saved in the releasemodels folder.

Test results will be placed in the folder outputs/testing_default.

Training models from paper

To train from scratch and test models from the paper, run:

$ ./train_test_models.sh

Models are saved in outputs/training_default/training-{chess,mazes,prefixsums}, and test results in outputs/testing_default.

Citing our work

Please cite our paper using:

@inproceedings{DBLP:conf/iclr/SalleM23,
  author       = {Alexandre Salle and
                  Shervin Malmasi},
  editor       = {Krystal Maughan and
                  Rosanne Liu and
                  Thomas F. Burns},
  title        = {A Simple Loss Function for Convergent Algorithm Synthesis using RNNs},
  booktitle    = {The First Tiny Papers Track at {ICLR} 2023, Tiny Papers @ {ICLR} 2023,
                  Kigali, Rwanda, May 5, 2023},
  publisher    = {OpenReview.net},
  year         = {2023},
  url          = {https://openreview.net/pdf?id=WaAJ883AqiY},
  timestamp    = {Wed, 19 Jul 2023 17:21:16 +0200},
  biburl       = {https://dblp.org/rec/conf/iclr/SalleM23.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}

Below are instructions for the original repository.


Deep Thinking Systems

A centralized repository for deep thinking projects. Developed collaboratively by Avi Schwarzschild, Eitan Borgnia, Arpit Bansal, Zeyad Emam, and Jonas Geiping, all at the University of Maryland. This repository contains the official implementation of DeepThinking Networks (DT nets), including architectures with recall and a training routine with the progressive loss term. Much of the structure of this repository is based on the code in Easy-To-Hard. In fact, this repository is capable of executing all the same experiments and should be used instead. Our work on thinking systems is available in two papers:

End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking

Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks (NeurIPS '21)

Notes:

February 21, 2022: Pretrained models added to our drive.

February 11, 2022: Code initially released with our paper on arXiv. Several features, including some trained models, will be added in the coming weeks.

Citing Our Work

To cite our work, please reference the appropriate paper.

@misc{bansal2022endtoend,
      title={End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking}, 
      author={Arpit Bansal and Avi Schwarzschild and Eitan Borgnia and Zeyad Emam and Furong Huang and Micah Goldblum and Tom Goldstein},
      year={2022},
      eprint={2202.05826},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
@article{schwarzschild2021can,
  title={Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks},
  author={Schwarzschild, Avi and Borgnia, Eitan and Gupta, Arjun and Huang, Furong and Vishkin, Uzi and Goldblum, Micah and Goldstein, Tom},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}

Getting Started

Requirements

This code was developed and tested with Python 3.8.2.

If you use Conda, you can also run:

$> conda env create -f environment.yml

Otherwise, make sure you have CUDA installed and run:

$> pip install -r requirements.txt

Training

To train models, run train_model.py with the desired command line arguments. With these arguments, you can choose a model architecture and set all the pertinent hyperparameters. The default values for all the arguments are set in the hydra configuration files and will work together to train a DT net with recall to solve prefix sums. To try this, run the following.

$ python train_model.py

This command will train and save a model. For more examples see the launch directory, where we have left several files corresponding to our main experiments.

The optional command line argument problem.train_data=<N> determines the data to be used for training. Here, N is an integer that corresponds to N-bit strings for prefix sums, N x N mazes, and indices [0, N] for chess data. Additionally, the flag problem.test_data=<N> determines the data to be used for testing. For chess puzzles, the test data flag differs slightly from the train data flag: problem.test_data=<N> instead corresponds to indices [N-100K, N]. The other problem domains use the same nomenclature for training and testing. Also, the flags problem.model.test_iterations.low and problem.model.test_iterations.high allow you to pass a range of iterations to use for testing, i.e. at which to save the accuracy. More information about the structure of other command line arguments can be found in the config files.
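For example (the values below are illustrative only), training on 32-bit prefix sums while saving test accuracy over a range of iterations could look like:

$ python train_model.py problem.train_data=32 problem.test_data=512 problem.model.test_iterations.low=30 problem.model.test_iterations.high=200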

Saving Protocol (during training)

Each time train_model.py is executed, a hash-like adjective-Name combination is created and saved as the run id for that execution. The run_id is used to save checkpoints and results without accidentally overwriting any previous runs with similar hyperparameters. The folder used for saving both checkpoints and results can be chosen using the following command line argument.

$ python train_model.py name=<path_to_exp>

During training, the best performing model (on held-out validation set) is saved in the folder outputs/<path_to_exp>/training-<run_id>/model_best.pth and the corresponding arguments for that run are saved in outputs/<path_to_exp>/training-<run_id>/.hydra/. The <path_to_exp>/training-<run_id> string is necessary to later run the test_model.py file for testing on harder/larger datasets than used during training.

The results (i.e. accuracy metrics) for the test data used in the train_model.py run are saved in outputs/<path_to_exp>/training-<run_id>/stats.json, and the tensorboard data is saved in outputs/<path_to_exp>/training-<run_id>/tensorboard.

The outputs directory should be as follows. Note that the default value of <path_to_exp> is training_default, and that happy-Melissa is the adjective-Name combination for this example.

outputs
└── training_default
    └── training-happy-Melissa
        ├── .hydra
        │   ├── config.yaml
        │   ├── hydra.yaml
        │   └── overrides.yaml
        ├── model_best.pth
        ├── stats.json
        ├── tensorboard
        │   └── events.out.tfevents.1641237856
        └── train.log

Testing

To test a saved model, run test_model.py as follows.

$ python test_model.py problem.model.model_path=<dir_with_checkpoint>

To point to the command line arguments that were used during training and to the model checkpoint file, use the flags in the example above. Other command line arguments are outlined in the code itself, and generally match the structure used for training. As with training, the outputs folder will have performance metrics in json data. (See the saving protocol below.)

Saving Protocol (during testing)

For testing, you can pass the following command line argument to specify the location of the outputs.

$ python test_model.py name=<path_to_exp>

This creates another unique run_id adjective-Name combination (different from the one created during training) and the results are saved in outputs/<path_to_exp>/testing-<run_id>/stats.json.

Download Pretrained Models

To get started without training models from scratch, you can download a checkpoint for any of the three problems. See our project drive. The folder training-roupy-Ambr contains the output (including a checkpoint) from training a DT-net with recall and progressive loss on the 32-bit Prefix Sum dataset. The folder training-rusty-Tayla contains the output (including a checkpoint) from training a DT-net with recall and progressive loss on the 9x9 Mazes dataset. Finally, the folder training-mansard-Janean contains the output (including a checkpoint) from training a DT-net with recall and progressive loss on the 0-600K Chess Puzzles dataset. Download those folders and pass their paths to test_model.py using the syntax above to see how they perform on various test sets.

Analysis

To generate a pivot table with average accuracies over several trials, make_table.py is helpful. The first command line argument (without a flag) points to an output directory. All the json results are then read in, and averages over similar runs are nicely tabulated. For example, if you run a few trials of train_model.py with the same command line arguments, including name=my_experiment, then you can run

$ python make_table.py outputs/my_experiment

to see the results in an easy-to-read format.

The file called make_schoop.py will use those pivot tables to make plots of the accuracy at various iterations. Use it the same way as make_table.py to get a visualization of deep thinking behavior. For models that perform better with added iterations, we say that these curves "schoop" upwards, and therefore name these plots "schoopy plots."

A Note on Our Metrics

We report (print and save) three quantities for accuracy: train_acc refers to the accuracy on the specific data used for training, val_acc refers to the accuracy on a held-out set from the same distribution as the data used for training, and test_acc refers to the accuracy on the test data (specified with a command line argument), which can be harder/larger problems.
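If you want to inspect these quantities programmatically, a minimal sketch is below; the exact layout of stats.json is an assumption, so adjust the keys to match what your run actually wrote.

```python
# Sketch: read accuracy metrics from a run's stats.json.
# The JSON layout is an assumption; adapt the keys to your actual file.
import json

with open("outputs/training_default/training-happy-Melissa/stats.json") as f:
    stats = json.load(f)

for key in ("train_acc", "val_acc", "test_acc"):
    print(key, stats.get(key))
```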

Development

Adding Training and Testing Modes

A new training mode with name new_mode can be added to the code base by writing a function named train_new_mode in training_utils.py. This allows the training mode to easily be implemented in train_model.py by passing the following command line argument.

$ python train_model.py problem.hyp.train_mode=<new_mode>
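As a rough guide, a new training mode might look like the hypothetical skeleton below; the argument list and return value are assumptions, so mirror the existing train_* functions in training_utils.py.

```python
# Hypothetical skeleton for a new training mode named "new_mode".
# Mirror the signatures of the existing train_* functions in training_utils.py,
# since train_model.py selects the function by the value of problem.hyp.train_mode.
import torch.nn.functional as F

def train_new_mode(net, trainloader, optimizer, device):
    net.train()
    running_loss = 0.0
    for inputs, targets in trainloader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = F.cross_entropy(net(inputs), targets)  # replace with your objective
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    return running_loss / len(trainloader)
```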

Similarly, a new testing mode with name new_mode can be added as a function named test_new_mode in testing_utils.py. The new testing mode can be implemented in test_model.py by passing the following command line argument.

$ python test_model.py problem.hyp.test_mode=<new_mode>

Adding New Datasets

The only datasets available in the easy_to_hard_data package are prefix sums, mazes, and chess. To make use of a different dataset called new_dataset, create a file named new_dataset_data.py in the utils directory containing a prepare_dataloaders_new_dataset function (see the other data files for an example). The file new_dataset_data.py should be correctly imported, and the corresponding prepare_dataloaders_new_dataset function should be added to the get_dataloaders function in common.py. Also, a new configuration file should be added to the config/problem directory.
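A minimal, hypothetical shape for that function is shown below; the argument names and the dictionary-of-loaders return value are assumptions, so copy the exact convention from the existing *_data.py files.

```python
# Hypothetical sketch of utils/new_dataset_data.py; copy the real argument list
# and return convention from the existing prepare_dataloaders_* functions.
import torch
from torch.utils.data import DataLoader, TensorDataset

def prepare_dataloaders_new_dataset(train_batch_size, test_batch_size, shuffle=True):
    # Placeholder tensors stand in for the real inputs and labels of new_dataset.
    train_set = TensorDataset(torch.zeros(64, 1, 32), torch.zeros(64, 32, dtype=torch.long))
    test_set = TensorDataset(torch.zeros(64, 1, 64), torch.zeros(64, 64, dtype=torch.long))
    return {
        "train": DataLoader(train_set, batch_size=train_batch_size, shuffle=shuffle),
        "test": DataLoader(test_set, batch_size=test_batch_size, shuffle=False),
    }
```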

To then train or test models using the new dataset, you can use the problem=<new_dataset> flag.

Contributing

We believe in open-source community driven software development. Please open issues and pull requests with any questions or improvements you have.