Im2txt Docker Server

This project deploys an im2txt ("Show and Tell") image-captioning model in a local Docker container.

To build an inference server into the container, we use the following stack:

  1. [nginx][nginx] is a lightweight layer that handles the incoming HTTP requests and manages the I/O in and out of the container efficiently.
  2. [gunicorn][gunicorn] is a WSGI pre-forking worker server that runs multiple copies of your application and load balances between them.
  3. [flask][flask] is a simple web framework used in the inference app that you write. It lets you respond to calls on the /ping and /invocations endpoints without having to write much code.
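A minimal Flask app along these lines might look like the sketch below. The captioning helper `generate_caption` is a hypothetical placeholder, not part of this project or any library; the real app would run the im2txt model on the posted image bytes:

```python
from flask import Flask, Response, request, jsonify

app = Flask(__name__)

@app.route("/ping", methods=["GET"])
def ping():
    # Health check: the server (or a load balancer) polls this
    # endpoint to confirm the container is up and ready.
    return Response(status=200)

@app.route("/invocations", methods=["POST"])
def invocations():
    # Read the raw request body (e.g. image bytes) and return a
    # JSON caption. generate_caption is a stub standing in for
    # the actual im2txt inference call.
    image_bytes = request.get_data()
    caption = generate_caption(image_bytes)
    return jsonify({"caption": caption})

def generate_caption(image_bytes):
    # Stub so the sketch runs standalone; replace with real
    # model loading and inference.
    return "a placeholder caption"
```

Under this stack, gunicorn would serve the app (for example `gunicorn -w 4 app:app`) while nginx proxies external requests to it.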