lensq is an application that provides an API for predicting handwritten digits using a pre-trained deep learning model. Built with Flask, Keras, and Celery.
- Clone the repository:

  ```shell
  git clone https://github.com/your-username/lensq.git
  cd lensq
  ```
- Install dependencies and drop into the virtual environment shell:

  ```shell
  poetry install
  poetry shell
  ```
- Download the pre-trained model weights file `model.keras` and place it in the project root directory. Alternatively, you can train it locally using `scripts/train.py`.
The application uses a pre-trained CNN model for handwritten digit recognition. The model is trained on the MNIST dataset and achieves an accuracy of around 99% on the test set.
The pre-trained weights `model.keras` are required to run the application. You can download the weights file from the project repository or train the model yourself using the provided `train.py` script.
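The model consumes 28×28 grayscale images in the MNIST format. As a rough illustration (the actual preprocessing code in lensq is not shown here; the shapes below are assumptions based on the MNIST convention), an uploaded image would be normalized along these lines before being fed to the CNN:

```python
import numpy as np

def preprocess(pixels: np.ndarray) -> np.ndarray:
    """Normalize a 28x28 grayscale image into the shape a Keras CNN expects:
    (batch, height, width, channels), float32 values in [0, 1]."""
    img = pixels.astype("float32") / 255.0  # scale 0-255 -> 0.0-1.0
    return img.reshape(1, 28, 28, 1)        # add batch and channel dims

# Example: a random 28x28 "image"
batch = preprocess(np.random.randint(0, 256, (28, 28)))
```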
- Spin up a `redis` instance using `docker run` on port `6379`. Alternatively, you can run `redis` locally.

  ```shell
  docker run -d -p 6379:6379 redis
  ```
- Start the `celery` worker:

  ```shell
  celery -A make_celery worker --loglevel INFO
  ```
- In a separate terminal, start the Flask application:

  ```shell
  flask -A lensq run --debug
  ```

The application should now be accessible at http://localhost:5000.
To deploy the application to a server environment, follow these steps:
- Set up an EC2 instance and install the required dependencies (Redis, Celery, Flask) on the server.
- Clone the repository and copy the project files to the server.
- Place the pre-trained model weights file `model.keras` in the project root directory.
- Configure the Redis and Celery settings in `lensq/__init__.py` as per your server environment.
- Start the Redis server.
- Start the Celery worker.
- Start the Flask application using Gunicorn.
`pm2` is used to manage the `celery` and `flask` jobs.
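A `pm2` configuration along these lines could manage both processes. This is a hypothetical sketch: the file name, the Gunicorn invocation, the worker count, and the `lensq:app` entry point are assumptions, not taken from the project.

```javascript
// ecosystem.config.js -- hypothetical sketch; adjust paths and args to your setup
module.exports = {
  apps: [
    {
      name: "lensq-celery",
      script: "celery",
      args: "-A make_celery worker --loglevel INFO",
      interpreter: "none", // run the binary directly, not via node
    },
    {
      name: "lensq-web",
      script: "gunicorn",
      args: "-w 2 -b 0.0.0.0:5000 lensq:app", // assumed WSGI entry point
      interpreter: "none",
    },
  ],
};
```

With this file in place, `pm2 start ecosystem.config.js` launches both jobs and `pm2 status` shows their health.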
Method: POST

Input Parameters:

- `images` (multipart/form-data): One or more image files to be processed for digit recognition.

Response:

```json
{
  "result_id": "string"
}
```

The `result_id` is a unique identifier for the submitted task. It can be used to check the status and retrieve the prediction results.
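As an illustration, a client could submit images with the `requests` library. This is a sketch, not part of the project; the base URL assumes the local `flask run` setup described above.

```python
import requests

def submit_images(paths, base_url="http://localhost:5000"):
    """POST image files to /tasks/predict and return the task's result_id."""
    files = [("images", open(p, "rb")) for p in paths]
    try:
        resp = requests.post(f"{base_url}/tasks/predict", files=files)
        resp.raise_for_status()
        return resp.json()["result_id"]
    finally:
        for _, fh in files:  # always close the file handles
            fh.close()
```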
Method: GET

Path Parameter:

- `id` (string): The `result_id` returned by the `/tasks/predict` endpoint.
Response:

```json
{
  "ready": bool,
  "successful": bool,
  "value": list
}
```

- `ready`: Indicates whether the prediction task has completed.
- `successful`: Indicates whether the prediction task was successful (only present when `ready` is `true`).
- `value`: A list of predicted digit labels for each submitted image (only present when `ready` and `successful` are `true`).
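Putting the two endpoints together, a client might poll for the result like this. This is a sketch: the result endpoint's route is not spelled out above, so the `/tasks/result/<id>` path used here is an assumption, as is the base URL.

```python
import time
import requests

def wait_for_result(result_id, base_url="http://localhost:5000",
                    timeout=30.0, interval=0.5):
    """Poll the result endpoint until the task is ready or the timeout expires.

    NOTE: the path /tasks/result/<id> is an assumption -- substitute the
    actual route exposed by the application.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = requests.get(f"{base_url}/tasks/result/{result_id}").json()
        if data["ready"]:
            if data["successful"]:
                return data["value"]  # list of predicted digit labels
            raise RuntimeError("prediction task failed")
        time.sleep(interval)
    raise TimeoutError("prediction task did not finish in time")
```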