Use this seed to start new deep learning / ML projects.
- Built-in setup.py
- Built-in requirements
- Examples with MNIST
- Badges
- Bibtex
The goal of this seed is to structure ML paper code the same way so that work can easily be extended and replicated.
What it does
First, install the dependencies:
```bash
# clone project
git clone https://github.com/YourGithubName/deep-learning-project-template

# install project
cd deep-learning-project-template
pip install -e .
pip install -r requirements.txt
```
Next, navigate to any file and run it:
```bash
# module folder
cd project

# run module (example: mnist as your main contribution)
python lit_classifier_main.py
```
This project is set up as a package, which means you can easily import any file into any other file, like so:
```python
from project.datasets.mnist import mnist
from project.lit_classifier_main import LitClassifier
from pytorch_lightning import Trainer

# model
model = LitClassifier()

# data
train, val, test = mnist()

# train
trainer = Trainer()
trainer.fit(model, train, val)

# test using the best model!
trainer.test(test_dataloaders=test)
```
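For orientation, here is a minimal sketch of what `lit_classifier_main.py` could look like. This is illustrative only: the layer sizes, hyperparameters, and logging choices below are assumptions, not the template's actual code; only the `LitClassifier` name matches the import above.

```python
# Hypothetical contents of project/lit_classifier_main.py (sketch, not the template's code).
import torch
from torch import nn
from torch.nn import functional as F
from pytorch_lightning import LightningModule


class LitClassifier(LightningModule):
    def __init__(self, hidden_dim: int = 128, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.l1 = nn.Linear(28 * 28, hidden_dim)  # MNIST images flattened to 784
        self.l2 = nn.Linear(hidden_dim, 10)       # 10 digit classes

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return self.l2(torch.relu(self.l1(x)))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", F.cross_entropy(self(x), y))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
```

Because the module owns its optimizer and training logic, the `Trainer` in the snippet above can fit and test it without any extra glue code.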
This project demonstrates how to make fair machine learning models.
- `fairness-in-ml.ipynb`: Keras & TensorFlow implementation of Towards fairness in ML with adversarial networks (a minimal PyTorch sketch of the idea follows this list).
- `fairness-in-torch.ipynb`: PyTorch implementation of Fairness in Machine Learning with PyTorch.
- `playground/*`: various experiments.
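The core idea in the adversarial notebooks is to train a classifier jointly with an adversary that tries to recover the protected attribute from the classifier's output; the classifier is penalised whenever the adversary succeeds. Below is a minimal PyTorch sketch of that training loop using toy random data and hypothetical shapes and names; it is not the notebooks' actual code.

```python
# Sketch of adversarial debiasing on toy data (assumed shapes/names, not the notebooks' code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy data: 256 samples, 10 features, binary label y, binary protected attribute z
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256, 1)).float()
z = torch.randint(0, 2, (256, 1)).float()

clf = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))  # predicts y
adv = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # predicts z from clf logits

clf_opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
adv_opt = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # trade-off between accuracy and fairness

for step in range(100):
    # 1) update the adversary: predict z from the (frozen) classifier's logits
    logits = clf(X).detach()
    adv_loss = bce(adv(logits), z)
    adv_opt.zero_grad()
    adv_loss.backward()
    adv_opt.step()

    # 2) update the classifier: fit y while making the adversary fail
    logits = clf(X)
    clf_loss = bce(logits, y) - lam * bce(adv(logits), z)
    clf_opt.zero_grad()
    clf_loss.backward()
    clf_opt.step()
```

In the notebooks the same scheme is applied to a real dataset, and the trade-off weight (`lam` here) controls how much predictive accuracy is traded for fairness.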
This repo uses conda's virtual environment for Python 3.
Install (mini)conda if not yet installed.
For macOS:
```bash
$ wget http://repo.continuum.io/miniconda/Miniconda-latest-MacOSX-x86_64.sh -O miniconda.sh
$ chmod +x miniconda.sh
$ ./miniconda.sh -b
```
`cd` into this directory and create the conda virtual environment for Python 3 from `environment.yml`:
```bash
$ conda env create -f environment.yml
```
Activate the virtual environment:
```bash
$ source activate fairness-in-ml
```
Install the `fairness` library:
```bash
$ python setup.py develop
```
If you have applied these models to a different dataset or implemented any other fair models, consider submitting a Pull Request!
Citation:

```bibtex
@article{YourName,
  title={Your Title},
  author={Your team},
  journal={Location},
  year={Year}
}
```