deepbud

Yeast image segmentation using convolutional neural networks.



Instructions

  1. Clone the repo.
  2. Run `make dirs` to create the missing parts of the directory structure described below.
  3. Optional: Run `make venv` to create a Python virtual environment. Skip if using conda or some other environment manager.
    1. Run `source .venv/bin/activate` to activate the venv (or use the functions/aliases described below!).
  4. Run `make requirements` to install the required Python packages.
  5. Put the raw data in `data/raw`.
  6. To save the raw data to the DVC cache, run `dvc commit raw_data.dvc`.
  7. Edit the code files to your heart's desire.
  8. Process your data, then train and evaluate your model using `dvc repro eval.dvc` or `make reproduce`.
  9. When you're happy with the result, commit the files (including the `.dvc` files) to git.
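End to end, a first run might look like the following sketch. It assumes a POSIX shell and the Makefile targets above; the repository URL and the raw-data path are placeholders, not the project's actual locations.

# Hypothetical first run; substitute the real repository URL and data path.
git clone https://github.com/<user>/deepbud.git
cd deepbud
make dirs                             # create missing directories
make venv                             # optional: create .venv
source .venv/bin/activate             # activate the venv
make requirements                     # install required Python packages
cp -r /path/to/raw/images/. data/raw  # put the raw data in place
dvc commit raw_data.dvc               # save the raw data to the DVC cache
dvc repro eval.dvc                    # process data, train, and evaluate
git add -A && git commit -m "Reproduce pipeline"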

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make dirs` or `make clean`
├── README.md          <- The top-level README for developers using this project.
│
├── data
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── definitions.py     <- Contains useful project-specific "environment variables", such as ROOT_DIR.
│
├── eval.dvc           <- The end of the data pipeline - evaluates the trained model on the test dataset.
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── process_data.dvc   <- Process the raw data and prepare it for training.
├── raw_data.dvc       <- Keeps the raw data versioned.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports                     <- Generated analysis as HTML, PDF, LaTeX, etc.
│   ├── figures                 <- Generated graphics and figures to be used in reporting
│   ├── metrics.txt             <- Relevant metrics after evaluating the model.
│   └── training_metrics.txt    <- Relevant metrics from training the model.
│
├── requirements-core.txt    <- Project-specific requirements, without secondary dependencies.
├── requirements-dev.txt     <- Development requirements.
├── requirements.txt         <- The complete requirements file for reproducing the analysis environment,
│                               automatically generated with `pip freeze`, by `make requirements`.
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make predictions
│   │
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
├── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
└── train.dvc          <- Trains a model on the processed data.
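The four `.dvc` files define a linear pipeline: `raw_data.dvc` versions the input, `process_data.dvc` prepares it, `train.dvc` fits the model, and `eval.dvc` scores it. As a rough illustration of how such a stage is wired up (the dependencies, outputs, and metrics file below are guesses based on the tree above, not the repo's actual stage definitions), a stage like `train.dvc` could be created with the legacy, pre-2.0 `dvc run` syntax:

# Hypothetical stage definition; deps/outs are inferred from the layout above.
dvc run -f train.dvc \
    -d data/processed \
    -d src/models/train_model.py \
    -o models/model.pt \
    -M reports/training_metrics.txt \
    python src/models/train_model.py

`dvc repro eval.dvc` then walks this chain and re-executes only the stages whose dependencies have changed.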

Venv Activation Aliases

To avoid typing the whole path to the activation scripts, create a function or alias!

Bash

Add the following alias to `~/.bashrc`:

alias activate=". .venv/bin/activate"

Fish

Create `~/.config/fish/functions/activate.fish` containing:

# activate python venv from project root, in fish.
function activate
    source .venv/bin/activate.fish
end
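With either shell, `activate` then works from the project root, and venv's standard `deactivate` ends the session:

cd deepbud    # hypothetical checkout location
activate      # sources the venv via the alias/function above
deactivate    # leaves the venv (defined by the activate scripts themselves)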

References

Modified from: DAGsHub template

Project based on the cookiecutter data science project template.

Pre-commit hooks: article