Cookiecutter template for ML projects
Inspired by:
- drivendata template
- gazprom-neft template
- The Data Science Lifecycle Process
- hitchhikers-guide by Data Science for Social Good
Requirements:
- Python 3.5+
- Cookiecutter Python package >= 1.4.0. This can be installed with pip or conda, depending on how you manage your Python packages:
$ pip install cookiecutter
or
$ conda config --add channels conda-forge
$ conda install cookiecutter
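Before installing, it can help to confirm the local interpreter satisfies the Python 3.5+ requirement. A minimal check (the message text is illustrative):

```python
import sys

# The template requires Python 3.5+; fail fast if the interpreter is older.
assert sys.version_info >= (3, 5), "Python 3.5+ is required for this template"
print("Python version OK:", sys.version.split()[0])
```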
- To start a new project, run:
cookiecutter https://github.com/YKatser/ml-project-template
- Create a GitLab Repo
Go to your GitLab account and create a new repository named after your {{cookiecutter.project_name}}.
- Activate your GitLab repo
On your computer, change into the newly created project folder (named after the project_name you entered when you ran cookiecutter), then initialize and push the repository:
git init .
git add .
git commit -m "Initial skeleton."
git remote add origin your-gitlab-repo
git push -u origin master
The directory structure of your new project looks like this:
├── README.md <- The top-level README for developers using this project.
│
├── data
│ ├── raw <- The original, immutable data dump.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── external <- Data from third party sources.
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is task name SHKPA-XX (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `SHKPA-67-mms-test-LSTM-model-on-all-electrolyses`.
│
├── docs <- Questions and some other related documentation
│
├── results <- Intermediate analysis as HTML, PDF, LaTeX, etc.
│
├── .gitignore <- Avoids committing data, credentials, outputs, system files, etc.
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
└── src <- Source code for use in this project.
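The notebook naming convention above can be checked mechanically, e.g. in a pre-commit hook. A small sketch, where the regex and helper name are illustrative assumptions, not part of the template:

```python
import re

# Task name (e.g. SHKPA-67), the creator's initials in lowercase, then a
# `-` delimited description, matching names like
# SHKPA-67-mms-test-LSTM-model-on-all-electrolyses
NOTEBOOK_NAME = re.compile(r"^[A-Z]+-\d+-[a-z]+(?:-[A-Za-z0-9]+)+$")

def is_valid_notebook_name(stem: str) -> bool:
    """Return True if a notebook file stem follows the naming convention."""
    return NOTEBOOK_NAME.match(stem) is not None

print(is_valid_notebook_name("SHKPA-67-mms-test-LSTM-model-on-all-electrolyses"))  # True
print(is_valid_notebook_name("untitled"))  # False
```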