# Foxcross

AsyncIO serving for data science models built on Starlette
**Requirements**: Python 3.6.1+
## Installation

Install using pip:

```
pip install foxcross
```
## Quickstart

Create some test data and a simple model in the same directory to be served:

Directory structure:

```
.
+-- data.json
+-- models.py
```

`data.json`:

```json
[1,2,3,4,5]
```
`models.py`:

```python
from foxcross.serving import ModelServing, run_model_serving


class AddOneModel(ModelServing):
    test_data_path = "data.json"

    def predict(self, data):
        return [x + 1 for x in data]


if __name__ == "__main__":
    run_model_serving()
```
Run the model locally:

```
python models.py
```
Navigate to `localhost:8000/predict-test/` in your web browser, and you should see the list incremented by 1. You can visit `localhost:8000/` to see all the available endpoints for your model.
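Beyond the browser, you can send your own data to the model's predict endpoint. The sketch below is an assumption-laden example, not part of the quickstart above: it assumes the `AddOneModel` server is running locally on port 8000, that a `/predict/` endpoint accepts POSTed JSON in the same shape as `data.json`, and that the third-party `requests` package is installed:

```python
import requests  # third-party HTTP client; install separately with pip

# Assumption: the AddOneModel server from models.py is running on
# localhost:8000 and /predict/ accepts the same JSON shape as data.json.
response = requests.post(
    "http://localhost:8000/predict/",
    json=[1, 2, 3, 4, 5],
)
print(response.json())  # expected: [2, 3, 4, 5, 6]
```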
## Why Foxcross?

Currently, some of the most popular data science model building frameworks, such as PyTorch and Scikit-Learn, do not come with a built-in serving library similar to TensorFlow Serving.

To fill this gap, people write ad hoc Flask applications to serve their models. This is error-prone, and the implementation can differ from one model to the next. Additionally, Flask is a WSGI web framework, whereas Foxcross is built on Starlette, a more performant ASGI web framework.

Foxcross aims to be the serving library for data science models built with frameworks that do not come with their own serving library. Using Foxcross enables consistent and testable serving of data science models.
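As an illustration of that goal, a scikit-learn model can be wrapped in the same way as the quickstart model above. This is a minimal sketch, not a documented Foxcross integration; the `model.joblib` file name, the joblib serialization step, and the shape of the input data are all assumptions:

```python
import joblib  # assumption: the estimator was serialized with joblib
from foxcross.serving import ModelServing, run_model_serving

# Assumption: a fitted scikit-learn estimator saved as model.joblib sits
# next to this file, alongside sample input rows in data.json.
model = joblib.load("model.joblib")


class SklearnModel(ModelServing):
    test_data_path = "data.json"

    def predict(self, data):
        # Assumes data.json holds a list of feature rows, e.g. [[1.0, 2.0], ...].
        # .tolist() converts the NumPy result into a JSON-serializable list.
        return model.predict(data).tolist()


if __name__ == "__main__":
    run_model_serving()
```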
## Security

If you believe you've found a bug with security implications, please do not disclose this issue in a public forum. Email us at support@laac.dev.