DEEP-OC-image-classification-tf-dicom

DEEP as a Service container for medical image classification


This container runs the DEEP as a Service (DEEPaaS) API component. Through the DEEPaaS API the user can choose the model to train or to use for prediction, together with the basic input parameters.

Run the container

Directly from Docker Hub

To run the Docker container directly from Docker Hub and start using the API, run the following command:

$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-image-classification-tf-dicom

This command will pull the Docker container from the Docker Hub deephdc organization.
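If you prefer Docker Compose, the same run configuration can be written as a Compose file. This is a minimal sketch; the service name and file name (docker-compose.yml) are illustrative and not part of this repository:

    version: "3"
    services:
      deep-oc-image-classification-tf-dicom:
        image: deephdc/deep-oc-image-classification-tf-dicom
        ports:
          - "5000:5000"   # DEEPaaS API
          - "6006:6006"   # TensorBoard
          - "8888:8888"   # Jupyter Lab

With that file in the current directory, `docker compose up` starts the container with the same port mappings as the `docker run` command above.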

Building the container

If you want to build the container on your own machine (for instance, because you want to modify the Dockerfile), follow these steps:

  1. Get the DEEP-OC-image-classification-tf-dicom repository (this repo):

    $ git clone https://github.com/deephdc/DEEP-OC-image-classification-tf-dicom
  2. Build the container:

    $ cd DEEP-OC-image-classification-tf-dicom
    $ docker build -t deephdc/deep-oc-image-classification-tf-dicom .
  3. Run the container:

    $ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-image-classification-tf-dicom

    You can also run Jupyter Lab inside the container:

    $ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 deephdc/deep-oc-image-classification-tf-dicom /bin/bash
    root@47a6604ef008:/srv# jupyter lab --allow-root

These three steps will download the repository from GitHub and build the Docker container locally on your machine. You can inspect and modify the Dockerfile to check what is going on. For instance, you can pass the --debug=True flag to the deepaas-run command to enable debug mode.
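For example, debug mode can be enabled by overriding the container command at run time. This is a sketch that assumes the image allows its default command to be replaced by a direct deepaas-run invocation; --listen-ip makes the API reachable from outside the container:

    $ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 \
        deephdc/deep-oc-image-classification-tf-dicom \
        deepaas-run --listen-ip 0.0.0.0 --debug=True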

Connect to the API

Once the container is up and running, browse to http://localhost:5000/ui to get the OpenAPI (Swagger) documentation page. If you are training on your own dataset, you can monitor the training progress in TensorBoard by connecting to http://localhost:6006.
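You can also query the API from the command line. For example, assuming the standard DEEPaaS V2 endpoint layout (adjust the path if your DEEPaaS version differs), the following lists the models exposed by the running container:

    $ curl http://localhost:5000/v2/models/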