Ouroboros

Introverted NNs



M.C. Escher's Dragon (project logo)

Introverted NNs are inspired by meta-learning techniques and by NN architectures including NN Quines, introspective NNs, hypernetworks, and self-referential weight matrices.

Paper

We have an annotated bibliography in progress (Link to LaTeX document on Overleaf) as part of our literature review efforts.

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgements

About The Project

Built With

Several specialty libraries are used. A list of all packages and their versions can be found in the config/envs directory.

Code Style

Docstrings, type hints, and comments follow the Google Python Style Guide.
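For illustration, a hypothetical function documented in this style might look like:

```python
from typing import List

def scale_weights(weights: List[float], factor: float = 1.0) -> List[float]:
    """Scales a list of weights by a constant factor.

    Args:
        weights: The weight values to scale.
        factor: Multiplier applied to each weight. Defaults to 1.0.

    Returns:
        A new list with each weight multiplied by ``factor``.
    """
    return [w * factor for w in weights]
```

The function itself is invented for the example; only the docstring layout (Args/Returns sections, type hints in the signature) is the point.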

Directory Structure

Directory structure mimics conventional PyTorch projects. A full dated summary can be found in directory_tree.txt (use Get-ChildItem | tree /F > foo.txt in PowerShell to create your own!)

Getting Started

Prerequisites

You will need conda and Python version 3.6 or above.

Installation

Assuming you're in the base directory of this project and are using a Linux-based system, first create a new conda (or pip) environment with Python 3.7:

conda create -n env_name python=3.7 anaconda
conda activate env_name

Next, clone this repository:

git clone https://github.com/flawnson/Generic_GNN.git
OR
pip install git+https://github.com/flawnson/Generic_GNN.git

Then you can run setup.py:

python setup.py install

Environment Setup

Install the dependencies in requirements.txt:

pip install -r configs/envs/requirements_cpu.txt

Then you'll need to create an empty directory for model outputs (including saved models).

cd Generic_GNN && mkdir outputs

Finally, you can run a demo version of the pipeline (default configs are in the configs directory):

python main.py -c path/to/config/files/file.json -s path/to/schema/files/file.json

You can see the logged results using TensorBoard (to be set up soon).

tensorboard --logdir=logs/GAT_tuning/tune_model

Container Setup

Docker containers for running the project are on the roadmap!

Usage

The steps executed by each pipeline run are outlined below:

  1. Load the config dictionary from a JSON file and set up logging, device management, random seeds, etc.
  2. Data downloading and preprocessing
  3. Setup standard model (MLP, CNN, GCN, Transformer, etc.)
  4. Setup augmented model implemented as a standard model wrapper (Quine, HyperNetwork, etc.)
  5. Load model-dependent datasets (if required)
  6. Split datasets using selected strategy
  7. Select run-type pipeline
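The steps above can be sketched as follows (all names here are illustrative, not the project's actual API):

```python
import json

def run_pipeline(config_json: str) -> str:
    """Illustrative sketch of the seven pipeline steps."""
    # 1. Load the config dictionary (from a JSON string in this sketch)
    #    and set up logging, device management, and random seeds
    config = json.loads(config_json)
    # 2. Download and preprocess the data
    # 3. Build the standard model (MLP, CNN, GCN, Transformer, ...)
    # 4. Wrap it in the augmented model (Quine, HyperNetwork, ...)
    # 5. Load any model-dependent datasets
    # 6. Split the datasets with the selected strategy
    # 7. Dispatch to the run-type pipeline named in the config
    return config.get("run_type", "demo")
```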

Run:

python main.py -c "path_to_config_file.json"

Configuration

Configs are validated against a JSON schema to ensure only properly defined config files are run. A few configurations are passed directly (unpacked) into function arguments and must therefore follow the function's signature. For example, the split_kwargs config must match the signature of the corresponding scikit-learn splitter.
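A minimal sketch of this kind of validation using the jsonschema package (the schema and keys below are invented for illustration, not the project's actual schema):

```python
from jsonschema import validate, ValidationError

# Toy schema: a valid config must at least name its run type
schema = {
    "type": "object",
    "properties": {"run_type": {"type": "string"}},
    "required": ["run_type"],
}

validate(instance={"run_type": "demo"}, schema=schema)  # passes silently

try:
    validate(instance={}, schema=schema)  # missing the required key
    rejected = False
except ValidationError:
    rejected = True  # bad configs are rejected before a run starts
```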

Split strategies map to SKLearn's model selection classes and methods as follows:

"binary" == train_test_split()
"holdout" == LeavePOut()
"shuffle" == ShuffleSplit()
"kfold" == StratifiedKFold()

Naturally, splits that require shuffling data do not apply to time-series data or our sequential models. The pipeline currently supports train and test splits only (you cannot specify a validation set).
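One plausible way to express the mapping above as a dispatch table (a sketch, not the project's actual code):

```python
from sklearn.model_selection import (
    LeavePOut, ShuffleSplit, StratifiedKFold, train_test_split,
)

# Hypothetical dispatch table from config split names to SKLearn strategies
SPLITS = {
    "binary": train_test_split,
    "holdout": LeavePOut,
    "shuffle": ShuffleSplit,
    "kfold": StratifiedKFold,
}

# A "binary" split: 80/20 train/test over ten samples
X = list(range(10))
X_train, X_test = SPLITS["binary"](X, test_size=0.2, shuffle=True)
```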

Standard Models

There are 2 standard models implemented (Transformer and CNN are in development):

  • MLP
  • GCN

Augmented Models

There are 2 augmented models implemented:

  • Quine
  • HyperNetwork

Datasets

There are two datasets with loading and transformations implemented:

  • MNIST - Quine and Classical model types
  • CIFAR - HyperNetwork

Demo

Demo is a simple training run. It takes a configuration file and runs the Trainer once.

Tuning

Tuning is a pair of consecutive runs. The first run executes the Tuner (a wrapper around the Trainer pipeline that searches for and returns the best parameters it can find), and the second run executes the Trainer once.
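In spirit, the Tuner/Trainer relationship looks something like this sketch (toy objective and invented names, not the project's actual classes):

```python
def trainer(config: dict) -> float:
    """Toy stand-in for a training run: returns a validation 'score'."""
    return -(config["lr"] - 0.01) ** 2  # best score at lr == 0.01

def tuner(search_space: list) -> dict:
    """Wraps the trainer: try each candidate config, keep the best one."""
    return max(search_space, key=trainer)

# First run: tune over candidates; second run: train once with the winner
best = tuner([{"lr": 0.1}, {"lr": 0.01}, {"lr": 0.001}])
final_score = trainer(best)
```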

Parallelizing

Parallelizing allows you to execute several Demo and/or Tuning pipelines in tandem. It uses multiprocessing to use as many cores as you define in the configuration file (yet to be implemented).

Logging and Checkpointing

Logging is controlled by the config files.

  1. Console logs - Runs are logged to console with logzero (mostly consists of info and exception logs) and saved as a .txt file in the saves/logs directory.
  2. Config logs - A copy of the config is saved as a .json for each run in the saves/logs directory.
  3. Tensorboard logs - Saved in the runs directory, used to visualize training.

Model checkpointing is performed with PyTorch's save function.

  1. Model checkpoints are saved at each interval as specified in the run_config (saved in the saves/checkpoints directory).
  2. The model file itself is copied into the checkpoint directory, where it can be used with the saved .json config (saved in the saves/checkpoints directory).
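The usual PyTorch checkpointing pattern with torch.save looks roughly like this (the dictionary keys, interval, and path here are illustrative, not necessarily the project's):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Bundle everything needed to resume training into one dictionary
checkpoint = {
    "epoch": 5,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
torch.save(checkpoint, "checkpoint.pt")

# Restoring into a fresh model instance
restored = nn.Linear(4, 2)
state = torch.load("checkpoint.pt")
restored.load_state_dict(state["model_state_dict"])
```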

Roadmap

Contributing

Currently we maintain a couple of branches for changes and contributions. Create a new branch if achieving the desired outcome risks breaking the existing copy of the codebase on main.

License

Contact

The core contributors are reachable by email, Twitter, and most other means.

Acknowledgements

Thank you to Kevin for being a reliable partner, and a close friend.