deep_learning

This is a copy of the ML22 repository by raoulg: https://github.com/raoulg/ML22

This project requires some dependencies. It uses Python 3.8, and if you work with Poetry you can install everything with `poetry install`.

If you prefer pip, run `pip install -r requirements.txt`.
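For a fresh clone, setup might look like the sketch below. It assumes Poetry itself is already installed; the pip route assumes nothing beyond a Python 3.8 interpreter and uses a plain virtual environment (the `.venv` name is just a convention, not something this repository prescribes):

```shell
# Option 1: Poetry creates and manages the virtual environment for you
poetry install                      # installs the versions pinned in poetry.lock

# Option 2: pip with a manually created virtual environment
python3.8 -m venv .venv             # create an isolated environment
source .venv/bin/activate           # activate it (Windows: .venv\Scripts\activate)
pip install -r requirements.txt     # install the exported dependency list
```

Both routes install the same dependency set, since `requirements.txt` is exported from `poetry.lock`.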

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make format` or `make lint`
├── README.md          <- The top-level README for developers using this project.
├── .gitignore         <- Stuff not to add to git
├── poetry.lock        <- Machine-readable lock file that pins the exact versions of all
|                         dependencies
├── pyproject.toml     <- Human-readable file that lists the libraries the code needs,
|                         and their version constraints
├── requirements.txt   <- Export from poetry.lock to requirements.txt, to be used with pip
├── setup.cfg          <- Config file for linters etc.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py

Project based on the cookiecutter data science project template.