himanshurawlani/keras-and-tensorflow-serving

How do we get inference from our own model, not a pretrained one?

MuruganR96 opened this issue · 0 comments

I read the article; it was very useful for me, thank you so much. But I am stuck in
one place.

in scripts/serving_sample_request.py

Sir, how do we get inference from our own model, not a pretrained model?

import json

import numpy as np
import requests
from keras.applications import inception_v3

# 'payload' holds the preprocessed image, built earlier in the script
r = requests.post('http://localhost:9000/v1/models/ImageClassifier:predict', json=payload)
pred = json.loads(r.content.decode('utf-8'))
print(json.dumps(inception_v3.decode_predictions(np.array(pred['predictions']))[0]))

For the pretrained model, they use Keras' inception_v3.decode_predictions function to turn the raw predictions into labels.

But how do we do this with our own model?
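A minimal sketch of one possible answer: decode_predictions only works for ImageNet-trained models, because it looks up the ImageNet label set. For your own model, you keep the list of class names you trained with (in the same order as the model's output units) and map the returned scores to those names yourself. Everything below is an assumption for illustration: the class_names list is hypothetical, and the commented request mirrors the article's ImageClassifier endpoint.

```python
import numpy as np

# Hypothetical label list: replace with the class names your own model
# was trained on, in the same order as its output units.
class_names = ["cat", "dog", "bird"]

def decode_own_predictions(predictions, class_names, top=3):
    """Map raw scores from a custom model to (label, score) pairs."""
    scores = np.array(predictions)[0]        # first (and only) image in the batch
    top_idx = scores.argsort()[::-1][:top]   # indices of the highest scores
    return [(class_names[i], float(scores[i])) for i in top_idx]

# Used with the same request as in serving_sample_request.py, assuming
# 'payload' holds the preprocessed image as in the article:
# r = requests.post('http://localhost:9000/v1/models/ImageClassifier:predict', json=payload)
# pred = json.loads(r.content.decode('utf-8'))
# print(decode_own_predictions(pred['predictions'], class_names))
```

The only model-specific part is the label list; the serving request itself is unchanged.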