# LEGO Classification Model using TensorFlow
## Project Organization

```
├── README.md              <- The top-level README for developers using this project.
├── data
│   ├── external           <- Data from third-party sources.
│   ├── final              <- The final, canonical data sets for modeling.
│   ├── interim            <- Intermediate data that has been transformed.
│   └── raw                <- The original, immutable data dump.
│
├── experiments
│   ├── <code-file-name>   <- Experiments executed with <code-file-name>.
│   │
│   └── ...                <- Experiments executed with other source code.
│
├── models
│   ├── <code-file-name>   <- Final model generated by the code located in <code-file-name>.
│   │
│   └── ...                <- Other final models.
│
├── references             <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports                <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures            <- Generated graphics and figures to be used in reporting.
│
├── src
│   ├── data               <- Source code for various data manipulations. This code reads
│   │                         from and writes to the /data/... directories.
│   │
│   ├── models             <- Source code to train models (saved in /models/<code-file-name>/)
│   │                         and then use the trained models to make predictions.
│   │
│   └── visualization      <- Source code to create visualizations.
│
└── env.yml                <- The file required for reproducing the analysis environment,
                              e.g. generated with `conda env export > env.yml`.
```
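The `models/<code-file-name>/` convention above can be sketched as a small path helper. The function name and the example script name below are hypothetical illustrations of the layout, not code from this project:

```python
from pathlib import Path

def model_dir_for(script_path: str, project_root: str = ".") -> Path:
    """Map a training script under src/models/ to its output directory
    under models/<code-file-name>/ (hypothetical helper illustrating
    the layout convention, not actual project code)."""
    stem = Path(script_path).stem  # e.g. "train_cnn" from src/models/train_cnn.py
    out_dir = Path(project_root) / "models" / stem
    out_dir.mkdir(parents=True, exist_ok=True)  # create models/<code-file-name>/
    return out_dir
```

Under this convention, a script such as `src/models/train_cnn.py` would save its final model into `models/train_cnn/`.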
## Install dependencies

Create a new environment `env-name` with all the necessary packages:

```shell
conda env create -f env.yml
```

Update the environment `env-name` when `env.yml` has changed:

```shell
conda env update --name env-name --file env.yml
```

Export the dependencies of your own environment to `env.yml` for cross-platform use. NOTE: dependencies installed with pip have to be added manually:

```shell
conda env export --from-history > env.yml
```
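For reference, a minimal `env.yml` might look like the sketch below. The environment name matches the commands above, but the packages, versions, and the pip entry are illustrative assumptions, not the project's actual pins:

```yaml
name: env-name
channels:
  - defaults
dependencies:
  - python=3.10            # assumed version, not pinned by this README
  - tensorflow             # core modeling library
  - pip
  - pip:
      - some-pip-only-package  # placeholder: pip dependencies must be added manually
```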
Project based on the cookiecutter data science project template. #cookiecutterdatascience