tensorflow/serving

Tensorflow serving Keras model

GeorgianaPetria opened this issue · 48 comments

I am trying to convert my Keras graph to a TF graph.

I managed to run the provided tensorflow_serving examples, but I'm having issues running my custom model.

Here is my code:

import tensorflow as tf
from keras import backend as K
from tensorflow.contrib.session_bundle import exporter

def export_model_to_tf(model):
    K.set_learning_phase(0)  # all new operations will be in test mode from now on
    # serialize the model and get its weights, for quick re-building

    export_path = "./tmp"  # where to save the exported graph
    export_version = "1"   # version number (integer)

    print('Exporting trained model to %s' % export_path)

    saver = tf.train.Saver(sharded=True)
    with tf.Session() as sess:
        model_exporter = exporter.Exporter(saver)

        signature = exporter.classification_signature(input_tensor=model.input,
                                                      scores_tensor=model.output)

        model_exporter.init(sess.graph.as_graph_def(),
                            default_graph_signature=signature)

        model_exporter.export(export_path, tf.constant(export_version), sess)

This is the error I am getting:

root@566d926360d6:/serving# bazel-bin/tensorflow_serving/example/main ./tmp/main_model
Using TensorFlow backend.
Exporting trained model to ./tmp
2017-02-03 22:32:37.001202: W external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-02-03 22:32:37.001260: W external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
<tensorflow.contrib.session_bundle.exporter.Exporter object at 0x7f617023db50>
Traceback (most recent call last):
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/tf_serving/tensorflow_serving/example/main.py", line 100, in
main()
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/tf_serving/tensorflow_serving/example/main.py", line 98, in main
save_model.export_model_to_tf(model)
File "/serving/tensorflow_serving/example/save_model.py", line 25, in export_model_to_tf
model_exporter.export(export_path, tf.constant(export_version), sess)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/contrib/session_bundle/exporter.py", line 275, in export
meta_graph_suffix=constants.EXPORT_SUFFIX_NAME)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 1390, in save
{self.saver_def.filename_tensor_name: checkpoint_file})
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/client/session.py", line 767, in run
run_metadata_ptr)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/client/session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/client/session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/client/session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value dense_1_W
[[Node: save/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](save/ShardedFilename, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, dense_1_W, dense_1_b, dense_2_W, dense_2_b)]]

Caused by op u'save/SaveV2', defined at:
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/tf_serving/tensorflow_serving/example/main.py", line 100, in
main()
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/tf_serving/tensorflow_serving/example/main.py", line 98, in main
save_model.export_model_to_tf(model)
File "/serving/tensorflow_serving/example/save_model.py", line 13, in export_model_to_tf
saver = tf.train.Saver(sharded=True)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 1067, in init
self.build()
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 1097, in build
restore_sequentially=self._restore_sequentially)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 685, in build
save_tensor = self._AddShardedSaveOps(filename_tensor, per_device)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 361, in _AddShardedSaveOps
return self._AddShardedSaveOpsForV2(filename_tensor, per_device)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 335, in _AddShardedSaveOpsForV2
sharded_saves.append(self._AddSaveOps(sharded_filename, saveables))
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 276, in _AddSaveOps
save = self.save_op(filename_tensor, saveables)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 219, in save_op
tensors)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/ops/gen_io_ops.py", line 780, in save_v2
tensors=tensors, name=name)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/framework/ops.py", line 2402, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/framework/ops.py", line 1264, in init
self._traceback = _extract_stack()

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value dense_1_W
[[Node: save/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](save/ShardedFilename, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, dense_1_W, dense_1_b, dense_2_W, dense_2_b)]]

Do you know what could cause the Saver to fail?

Thanks!

Have you tried deleting:
with tf.Session() as sess:
and adding:
sess = tf.Session()
K.set_session(sess)
at the beginning of your script?
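
In other words, something like this at the top of the export script (a minimal sketch of the suggested change):

import tensorflow as tf
from keras import backend as K

# create the session up front and register it with Keras, rather than
# opening a `with tf.Session()` block just for the export
sess = tf.Session()
K.set_session(sess)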

Yes, I have tried that and I'm getting a similar error. The code is still failing when calling model_exporter.export(...).

I've also tried to adapt the mnist_saved_model.py, but I'm also getting errors.
Could it be because of the way I'm creating the signature from my Keras model?
My ultimate goal is to create the protobuf graph, in order to use it in my Android app.

Code:
def export_model_to_tf(model, sess):

    classification_inputs = utils.build_tensor_info(model.input)
    classification_outputs_scores = utils.build_tensor_info(model.output)

    classification_signature = signature_def_utils.classification_signature_def(
        model.input,
        model.output,
        None)

    builder = saved_model_builder.SavedModelBuilder("./tmp/2")

    builder.add_meta_graph_and_variables(
        sess, [tag_constants.SERVING],
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: classification_signature
        })

    builder.save()

Error:

root@566d926360d6:/serving# bazel-bin/tensorflow_serving/example/main ./tmp/2
Using TensorFlow backend.
2017-02-04 02:48:56.048045: W external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-02-04 02:48:56.048099: W external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Traceback (most recent call last):
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/tf_serving/tensorflow_serving/example/main.py", line 106, in
main()
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/tf_serving/tensorflow_serving/example/main.py", line 104, in main
save_model2.export_model_to_tf(model, sess)
File "/serving/tensorflow_serving/example/save_model2.py", line 28, in export_model_to_tf
signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:classification_signature
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/saved_model/builder_impl.py", line 438, in add_meta_graph_and_variables
saver.save(sess, variables_path, write_meta_graph=False, write_state=False)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 1390, in save
{self.saver_def.filename_tensor_name: checkpoint_file})
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/client/session.py", line 767, in run
run_metadata_ptr)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/client/session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/client/session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "/serving/bazel-bin/tensorflow_serving/example/main.runfiles/org_tensorflow/tensorflow/python/client/session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value dense_1_W
[[Node: save/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](save/ShardedFilename, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, dense_1_W, dense_1_b, dense_2_W, dense_2_b)]]

I don't think so; I'm guessing you have an issue with the sessions. Here is an example I adapted to export a Keras model, maybe it can help you.

@viksit section 4 of the tutorial is broken, btw

I managed to export a Keras model for TensorFlow Serving (not sure whether it is the official way to do this). My first trial, prior to creating my custom model, was to use a trained model available in Keras, such as VGG19.

Here is how I did it (I put the code in separate boxes to help understanding, and because I use Jupyter :)):

Creating the model

import keras.backend as K
from keras.applications import VGG19
from keras.models import Model

# very important to do this as a first thing
K.set_learning_phase(0)
model = VGG19(include_top=True, weights='imagenet')

# The creation of a new model might be optional depending on the goal
config = model.get_config()
weights = model.get_weights()
new_model = Model.from_config(config)
new_model.set_weights(weights)

Exporting the model

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter
export_path = 'folder_to_export'
builder = saved_model_builder.SavedModelBuilder(export_path)

signature = predict_signature_def(inputs={'images': new_model.input},
                                  outputs={'scores': new_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()

Some side notes:

  • It can vary depending on Keras, TensorFlow, and TensorFlow Serving version. I used the latest ones.
  • Beware of the names of the signatures, since they should be used in the client as well.
  • When creating the client, all preprocessing steps that are needed for the model (preprocess_input() for example) must be executed. I didn't try to add such a step in the graph itself, as the Inception client example does.

In case you're curious about the client side, it should be similar to the below one. I added some extra things to use Keras methods for decoding predictions, but it could also be done in the serving side:

request = predict_pb2.PredictRequest()
request.model_spec.name = 'vgg19'
request.model_spec.signature_name = 'predict'
request.inputs['images'].CopyFrom(tf.contrib.util.make_tensor_proto(img))

result = stub.Predict(request, 10.0)  # 10 secs timeout
to_decode = np.expand_dims(result.outputs['outputs'].float_val, axis=0)
decoded = decode_predictions(to_decode, 5)
print(decoded)
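
(For completeness, the snippet above assumes the usual gRPC client setup, roughly like this; host and port are illustrative:)

import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from keras.applications.imagenet_utils import decode_predictions
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

# connect to the model server and create the prediction stub
channel = implementations.insecure_channel('localhost', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)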

Hopefully it will help someone :)

@tspthomas I have tried to use your guide, but I'm getting the client side error:

grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="input tensor alias not found in signature: images")

Do you have any idea how I can solve this issue?
Thanks!

Hi @azagovora ! Well, it seems that I made a mistake in the code.

Could you try to change from "images" to "inputs" in the client code?

request.inputs['images'].CopyFrom(tf.contrib.util.make_tensor_proto(img))

to

request.inputs['inputs'].CopyFrom(tf.contrib.util.make_tensor_proto(img))

I think the only problem is that you need to make the input signature match. Let me know if that solves your problem.
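
One way to double-check which input aliases an export actually declares is to load the SavedModel back and print its SignatureDef (a sketch; 'folder_to_export' is a placeholder for your export path):

import tensorflow as tf
from tensorflow.python.saved_model import loader, tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    # loader.load returns the MetaGraphDef, whose signature_def map holds
    # the declared inputs/outputs (e.g. the 'images'/'scores' aliases)
    meta_graph = loader.load(sess, [tag_constants.SERVING], 'folder_to_export')
    print(meta_graph.signature_def['predict'])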

Hi @tspthomas!

Thank you for your quick reply to my question. I have corrected my code but I'm getting the same error message:

grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="input tensor alias not found in signature: input")

Hi @azagovora! Sorry for the dumb question, but did you put 'inputs' or 'input' in the client code? It looks like you put 'input' per the error message, so you'd need to change it accordingly.

If it is correct, could you please paste your export code and your client code?

Sorry, it was my mistake, I put 'input' instead of 'inputs'.
Unfortunately, I'm getting another error:

grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INTERNAL, details="Output 0 of type string does not match declared output type float for node _recv_input_1_1_0 = _Recv[client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=92768290196530094, tensor_name="input_1_1:0", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]")

It looks like an export problem.

Here is my export code:

import os
import tensorflow as tf
import keras.backend as K

from keras.applications.inception_v3 import InceptionV3
#from keras.applications import VGG19
from keras.models import Model

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def

#from inception_v3_finetuning_v2 import load_trained

from os.path import join as join_path

tf.app.flags.DEFINE_string('output_dir', '/tmp/inception_output',
                           """Directory where to export inference model.""")
tf.app.flags.DEFINE_integer('model_version', 4,
                            """Version number of the model.""")
FLAGS = tf.app.flags.FLAGS

def export():

    K.set_learning_phase(0)

    # model, _ = load_trained()

    model = InceptionV3(include_top=True, weights='imagenet')
    #model = VGG19(include_top=True, weights='imagenet')

    # The creation of a new model might be optional depending on the goal
    config = model.get_config()
    weights = model.get_weights()
    new_model = Model.from_config(config)
    new_model.set_weights(weights)

    output_path = os.path.join(FLAGS.output_dir, str(FLAGS.model_version))
    print('Exporting trained model to', output_path)

    builder = saved_model_builder.SavedModelBuilder(output_path)

    signature = predict_signature_def(inputs={'images': new_model.input},
                                      outputs={'scores': new_model.output})

    with K.get_session() as sess:

        builder.add_meta_graph_and_variables(sess=sess,
                                             tags=[tag_constants.SERVING],
                                             signature_def_map={'predict': signature})

        builder.save()
        print('Successfully exported model to %s' % FLAGS.output_dir)

def main(unused_argv=None):
    export()

if __name__ == '__main__':
    tf.app.run()

Hello @azagovora. It seems to me that it is something in your client code, not in the export. I think you should review the way you're reading the image. In the code I put here, I'm reading the image with Keras methods and passing it as a float array to the input. It might be the case that you're reading it as a binary string, and this is why you're facing the error.

If you're following the Inception v3 sample code, you need to change the way you read the image and you can use Keras default methods for that. In my case, I created a method to read and pre-process the image:

from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.applications.imagenet_utils import preprocess_input
import numpy as np

def load_preprocess_img(img_path, target_size=(224, 224)):
    img = load_img(img_path, target_size=target_size)
    x = img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return x

# Before making the request, I read the image like this:
img = load_preprocess_img(FLAGS.image)

Please review the target size and other information and change it according to the model you're using. In case you still didn't find any issue, please paste your complete client code.

Hi @tspthomas!

It works now.

Thank you very much for your help!

@tspthomas awesome ! Yours was the only help I found for converting the keras model to tensorflow using the saved_model_builder. Thanks !

@azagovora @ashavish You're welcome. I'm glad that it helped :)

@GeorgianaPetria how do you add a new model (like you added the Keras model) to serving?
I want to do this, but I don't know what I should do. #452

@tspthomas Thank you so much good sir! This is the only good explanation of how to do this with Keras models. Thank you again!

Cheers,
Dylan

Hi @tspthomas,

Where do you put your preprocessing method? (is it in the client, or where?)

Thank you!

Dylan

@dylanrandle what kind of pre-processing are you doing? tf ops, or normal python?

Hello @dylanrandle !

Well, in this simple example I put in the client side (I used Keras default methods for this). Some of the pre-trained models in Keras have the preprocess function in the same file (e.g. InceptionV3 - keras/applications/inception_v3.py), while others use the default from keras/applications/imagenet_utils.py. You need to choose accordingly, which is related to the given dataset and network.

IMHO, this part could be handled by the servable, because the client shouldn't need to be aware of model-specific preprocessing. I noticed that there are some efforts to add this kind of code within the graph, where one of the nodes prior to the first layers of the network does the preprocessing. If you take a look at the default code for Inception V3 in the examples folder (tensorflow_serving/example/inception_export.py), they did exactly this. Unfortunately, I didn't take a look into how to do this with a Keras model.
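
For the record, a minimal sketch of what in-graph preprocessing could look like for a Keras model (the placeholder name, model path, target size, and normalization below are illustrative and must mirror whatever was used at training time):

import tensorflow as tf
import keras.backend as K
from keras.models import load_model

K.set_learning_phase(0)
model = load_model('model.h5')  # hypothetical path

# a string placeholder receiving a raw JPEG, decoded and preprocessed
# inside the graph before reaching the model's first layer
jpeg_input = tf.placeholder(tf.string, name='jpeg_input')
image = tf.image.decode_jpeg(jpeg_input, channels=3)
image = tf.cast(image, tf.float32)
image = tf.image.resize_images(tf.expand_dims(image, 0), [224, 224])
image = (image / 127.5) - 1.0   # illustrative normalization
scores = model(image)           # Keras models are callable on TF tensors

# the exported signature would then take the serialized image as input:
# signature = predict_signature_def(inputs={'jpeg': jpeg_input},
#                                   outputs={'scores': scores})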

Not sure if this helps :)

Regards,
Thomas

Hi @viksit, It is normal python preprocessing. E.g. converting characters to vectors. Thank you.

Hi @tspthomas, Yes thank you. That is very helpful. So there are essentially 3 options it sounds like: python in the client, in the same file, or in the graph itself? Thank you!

PS @viksit @tspthomas What would you recommend for best performance? Thank you.

Best,
Dylan

Hi @dylanrandle! Sorry for the delay. I thought I had answered this question... I didn't understand what you mean by "the same file" (I mean, which file you're mentioning). But the other two are right.

I'm not the best person to talk about performance (mainly without measuring anything), but I think that placing it in the graph should be a good idea.

It would be interesting to measure the difference, but I don't think it has too much impact on performance. Of course, it depends on the kind of preprocessing you're doing. I'm assuming something like subtracting channel means, which is very optimized in NumPy-like libraries. If you think about other types of preprocessing (e.g. for NLP, where you can have mappings to dictionaries, etc.), the results could be different.

Since it should be easy to evaluate, my advice would be to test and check which one is suitable for your scenario. And if you get any results, please share with us :)

It is important to note that when you are exporting a model, if you use Keras from tensorflow.contrib.keras, it is better to pass a bool flag: K.set_learning_phase(False). Otherwise, unfortunately, experience has shown that inference will not work correctly on the server.

Hello Friends,

2 Questions:

  1. @ipoletaev I have tried using both K.set_learning_phase(False) and K.set_learning_phase(0) and both times when I load my model I get model.uses_learning_phase = True:
from tensorflow.contrib.keras.python.keras import backend as K
from tensorflow.contrib.keras.python.keras.models import load_model

K.set_learning_phase(0) # "test" mode

MODEL_PATH = input('Input the model path:')

model = load_model(MODEL_PATH)
print('Loaded model successfully.')

if model.uses_learning_phase:
    raise ValueError('Model using learning phase.')
  2. @tspthomas After I run
     result = stub.Predict(request, 10.0)
     I get a PredictResponse object back, but I don't know how to get the float_vals out?
outputs {
  key: "outputs"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 20
      }
    }
    float_val: 0.000343723397236
    float_val: 0.999655127525
    float_val: 3.96821117632e-11
    float_val: 1.20521548297e-09
    float_val: 2.09611101809e-08
    float_val: 1.46216549979e-09
    float_val: 3.87274603497e-08
    float_val: 1.83520256769e-08
    float_val: 1.47733780764e-08
    float_val: 8.00914179422e-08
    float_val: 2.29388191997e-07
    float_val: 6.27798826258e-08
    float_val: 1.08802950649e-07
    float_val: 4.39628813353e-08
    float_val: 7.87182985462e-10
    float_val: 1.31638898893e-07
    float_val: 1.42612295306e-08
    float_val: 3.0768305237e-07
    float_val: 1.12661648899e-08
    float_val: 1.68554503688e-08
  }
}

I can do something like result.outputs but that just returns a protobuf MessageMap, and I still can't get out the float vals.

Any help greatly appreciated guys. Thank you!

Cheers,
Dylan

Hey Viksit,

  1. from tensorflow.contrib.keras.python.keras import backend as K
  2. I'm not sure how to parse it?

Thank you!
Dylan

@dylanrandle See keras-team/keras#2310 - sometimes, there can be python import issues. Try importing K from the layers core and retrying to see if that works. If it does, there may be something wrong in the way the imports are being processed (in order).

See how to use GRPC via examples/docs. Something like res = stub.predict(); r = res.result(); r.scores/r.values

@dylanrandle, you could also try something simple like this:

# Import the Keras function to decode predictions (if you want to)
from keras.applications.imagenet_utils import decode_predictions

Since the outputs are like a dictionary, you can access it simply by result.outputs['outputs']. Hence, your code for decoding the predictions could be similar to this:

result = stub.Predict(request, 10.0)
to_decode = np.expand_dims(result.outputs['outputs'].float_val, axis=0)
decoded = decode_predictions(to_decode, 5) # 5 here means top-5
print(decoded)

Please observe that the name of the outputs and keys may change depending of how you structured things.

@tspthomas Thank you so much! result.outputs['outputs'].float_val works!

@viksit These import shenanigans only started after I upgraded to TensorFlow 1.2, btw. I tried importing from layers core and it did not fix it.

I've tried to reproduce the steps described here to export a trained ResNet50 (from scratch) model from keras.applications, but TensorFlow Serving outputs random predictions and is very slow (4s-7s). I managed to export a SqueezeNet 1.1 model the same way, but TensorFlow Serving keeps returning wrong values (p.s. it returns the correct shape, of course) :/
Has anyone experienced the same issue? Thanks

@mauri870 I had the same issue with tensorflow serving outputs random predictions until I fixed an error in my image preprocessing functions.

Hi everyone,

I have exported my model this way and it works, so thanks a lot for the helpful posts here! One question I have now: what if I want to do some data preprocessing in export.py itself?

Thanks!

@wengchen1993 Why would you preprocess data in your export? I think you should be preprocessing either in your client.py or in the graph itself? (The latter will require you to re-export your model).

@dylanrandle Hi, yeah, I have been doing that in my client.py, but I'm just wondering whether it is possible to merge that with export.py, so that a user only has to send raw data and export.py can do a little preprocessing before passing the preprocessed data into the model. Then again, I suppose I can just let users send data right into client.py before passing it to export.py (if I intend to keep the preprocessing steps and model as a black box).

@tspthomas What is predict_pb2? I am getting the below error:


---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-135-13c7a9a11d95> in <module>()
      1 
----> 2 request = predict_pb2.PredictRequest()
      3 request.model_spec.name = 'tiramisu'
      4 request.model_spec.signature_name = 'predict'
      5 request.inputs['inputs'].CopyFrom(tf.contrib.util.make_tensor_proto(img1))

NameError: name 'predict_pb2' is not defined

Also, can this protobuf file be used for Android?
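
(For reference, predict_pb2 is part of the TensorFlow Serving Python APIs and has to be imported explicitly, e.g.:)

# predict_pb2 is generated from TF Serving's protos and ships with the
# tensorflow-serving-api pip package
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2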

Hello all, but mainly @ipoletaev, regarding K.set_learning_phase(False) vs. K.set_learning_phase(0).

I have used K.set_learning_phase(False) with TensorFlow 1.1 and indeed my accuracy numbers seem correct for the test phase, so I think it works. But I am confused about how this can work: in the backend documentation (even in the TensorFlow 1.1 project), it shows that 0 and 1 are the only legitimate values for set_learning_phase.

Thanks.
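
(A likely explanation: Python bools are a subclass of int, so False compares equal to 0 and passes the backend's value check, e.g.:)

# bool is a subclass of int in Python, so False behaves as 0 wherever
# the backend compares or stores the learning phase
assert isinstance(False, int)
assert False == 0 and True == 1
assert False in {0, 1}  # a membership check like the backend's accepts it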

@tspthomas Hi Thomas,

I followed your process. I am getting this error with the current code listed below:

export_path_base = 'serving'
model_version = 1

export_path = os.path.join(
    tf.compat.as_bytes(export_path_base),
    tf.compat.as_bytes(str(model_version)))

print 'Exporting trained model to', export_path

builder = saved_model_builder.SavedModelBuilder(export_path)

signature = predict_signature_def(inputs={'input': model.input},
                                  outputs={'output': model.output})

with K.get_session() as sess:

    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={
                                             signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
builder.save()

Error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-82-921736d7a66d> in <module>()
  8 
  9 signature = predict_signature_def(inputs = {'input': model.input},
---> 10                                   outputs = {'output':model.output})
 11 
 12 with K.get_session() as sess:

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.pyc in predict_signature_def(inputs, outputs)
146       signature_constants.PREDICT_METHOD_NAME)
147 
--> 148   return signature_def

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.pyc in <dictcomp>((key, tensor))
146       signature_constants.PREDICT_METHOD_NAME)
147 
--> 148   return signature_def

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/saved_model/utils_impl.pyc in build_tensor_info(tensor)
 35         build the TensorInfo. For SparseTensors, the names of the three
 36         constitutent Tensors are used.
---> 37 
 38   Returns:
 39     A TensorInfo protocol buffer constructed based on the supplied argument.

AttributeError: 'list' object has no attribute 'dtype'

It's weird because my inputs and outputs have the following formats and clearly have dtypes. Unless there's a problem with having two inputs.

INPUTS:

[<tf.Tensor 'lstm_1_input:0' shape=(?, 100, 6) dtype=float32>,
<tf.Tensor 'dense_1_input:0' shape=(?, 19) dtype=float32>]

OUTPUTS:

<tf.Tensor 'dense_3/BiasAdd:0' shape=(?, 4) dtype=float32>

Hope you, or anyone can help me. Thanks!

***UPDATE: RESOLVED.

I just figured it out. Change this line:

signature = predict_signature_def(inputs={'input': model.input},
                                  outputs={'output': model.output})

to:

signature = predict_signature_def(inputs={'input1': model.input[0],
                                          'input2': model.input[1]},
                                  outputs={'output': model.output})
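
As a side note, with a multi-input signature like this the client has to feed each named input separately (a sketch; x1 and x2 stand for the two input arrays, shaped like the tensors shown above):

# hypothetical client-side feeds matching the two-input signature
request.inputs['input1'].CopyFrom(tf.contrib.util.make_tensor_proto(x1))
request.inputs['input2'].CopyFrom(tf.contrib.util.make_tensor_proto(x2))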

I am now getting this error with the updated signature code. Hoping someone can help me with this.

Exporting trained model to serving/3
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.

RuntimeError Traceback (most recent call last)
in ()
14 builder.add_meta_graph_and_variables(sess = sess,
15 tags = [tag_constants.SERVING],
---> 16 signature_def_map = {'predict': signature})
17 builder.save()

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/saved_model/builder_impl.pyc in add_meta_graph_and_variables(self, sess, tags, signature_def_map, assets_collection, legacy_init_op, clear_devices, main_op)
436 tf_logging.info("No assets to save.")
437 return asset_source_filepath_list
--> 438
439 # Iterate over the supplied asset collection, build the AssetFile proto
440 # and add them to the collection with key constants.ASSETS_KEY, in the

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.pyc in save(self, sess, save_path, global_step, latest_filename, meta_graph_suffix, write_meta_graph, write_state)
1389 save_path,
1390 global_step=None,
-> 1391 latest_filename=None,
1392 meta_graph_suffix="meta",
1393 write_meta_graph=True,

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
776 N.B. Entering a with sess.as_default(): block does not affect
777 the current default graph. If you are using multiple graphs, and
--> 778 sess.graph is different from the value of @{tf.get_default_graph},
779 you must explicitly enter a with sess.graph.as_default(): block
780 to make sess.graph the default graph.

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
912 then a sequence of partial_run(). partial_run_setup specifies the
913 list of feeds and fetches that will be used in the subsequent
--> 914 partial_run calls.
915
916 The optional feed_dict argument allows the caller to override

RuntimeError: Attempted to use a closed Session.

Hi @franciscogmm, having no assets to save/write is not an error (just means it wasn't supplied as part of the calls to add_meta_graph_and_variables or add_meta_graph). The first call to the builder requires a session with the variables, etc. Are you calling the builder within the scope of the session in the updated code as well?

Hi @sukritiramesh ,

Yes. The code is currently like this:

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess = sess, 
                                         tags = [tag_constants.SERVING],
                                         signature_def_map = {'predict': signature,},
                                        legacy_init_op = legacy_init_op)
    builder.save()
    print 'Save successful to', export_path

It's giving me the error above.

However, when I put these lines at the start of the entire code:

sess2 = tf.Session()
K.set_session(sess2)

I got this error:

Exporting trained model to serving/9
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
---------------------------------------------------------------------------
FailedPreconditionError                   Traceback (most recent call last)
<ipython-input-142-159e25a7a279> in <module>()
     16                                          tags = [tag_constants.SERVING],
     17                                          signature_def_map = {'predict': signature,},
---> 18                                         legacy_init_op = legacy_init_op)
     19     builder.save()
     20     print 'Save successful to', export_path

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/saved_model/builder_impl.pyc in add_meta_graph_and_variables(self, sess, tags, signature_def_map, assets_collection, legacy_init_op, clear_devices, main_op)
    436     tf_logging.info("No assets to save.")
    437     return asset_source_filepath_list
--> 438 
    439   # Iterate over the supplied asset collection, build the `AssetFile` proto
    440   # and add them to the collection with key `constants.ASSETS_KEY`, in the

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.pyc in save(self, sess, save_path, global_step, latest_filename, meta_graph_suffix, write_meta_graph, write_state)
   1389            save_path,
   1390            global_step=None,
-> 1391            latest_filename=None,
   1392            meta_graph_suffix="meta",
   1393            write_meta_graph=True,

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
    776     *N.B.* Entering a `with sess.as_default():` block does not affect
    777     the current default graph. If you are using multiple graphs, and
--> 778     `sess.graph` is different from the value of @{tf.get_default_graph},
    779     you must explicitly enter a `with sess.graph.as_default():` block
    780     to make `sess.graph` the default graph.

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
    980                       % (feed, type(feed)))
    981 
--> 982     # Check session.
    983     if self._closed:
    984       raise RuntimeError('Attempted to use a closed Session.')

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1030       final_fetches = [t._as_tf_output() for t in fetch_handler.fetches()]
   1031       final_targets = [op._c_op for op in fetch_handler.targets()]
-> 1032       # pylint: enable=protected-access
   1033     else:
   1034       final_fetches = _name_list(fetch_handler.fetches())

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
   1050     if self._closed:
   1051       raise RuntimeError('Attempted to use a closed Session.')
-> 1052     if self.graph.version == 0:
   1053       raise RuntimeError('The Session graph is empty.  Add operations to the '
   1054                          'graph before calling run().')

FailedPreconditionError: Attempting to use uninitialized value Adadelta/decay
	 [[Node: save_12/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](save_12/ShardedFilename, save_12/SaveV2/tensor_names, save_12/SaveV2/shape_and_slices, Adadelta/decay, Adadelta/iterations, Adadelta/lr, Adadelta_1/decay, Adadelta_1/iterations, Adadelta_1/lr, Adagrad/decay, Adagrad/iterations, Adagrad/lr, Adagrad_1/decay, Adagrad_1/iterations, Adagrad_1/lr, Adam/beta_1, Adam/beta_2, Adam/decay, Adam/iterations, Adam/lr, Adam_1/beta_1, Adam_1/beta_2, Adam_1/decay, Adam_1/iterations, Adam_1/lr, Adam_2/beta_1, Adam_2/beta_2, Adam_2/decay, Adam_2/iterations, Adam_2/lr, Adam_3/beta_1, Adam_3/beta_2, Adam_3/decay, Adam_3/iterations, Adam_3/lr, Adam_4/beta_1, Adam_4/beta_2, Adam_4/decay, Adam_4/iterations, Adam_4/lr, Adam_5/beta_1, Adam_5/beta_2, Adam_5/decay, Adam_5/iterations, Adam_5/lr, Adam_6/beta_1, Adam_6/beta_2, Adam_6/decay, Adam_6/iterations, Adam_6/lr, RMSprop/decay, RMSprop/iterations, RMSprop/lr, RMSprop/rho, RMSprop_1/decay, RMSprop_1/iterations, RMSprop_1/lr, RMSprop_1/rho, dense_1/bias, dense_1/kernel, dense_1_1/bias, dense_1_1/kernel, dense_1_2/bias, dense_1_2/kernel, dense_1_3/bias, dense_1_3/kernel, dense_2/bias, dense_2/kernel, dense_2_1/bias, dense_2_1/kernel, dense_2_2/bias, dense_2_2/kernel, dense_2_3/bias, dense_2_3/kernel, dense_3/bias, dense_3/kernel, dense_3_1/bias, dense_3_1/kernel, dense_3_2/bias, dense_3_2/kernel, dense_3_3/bias, dense_3_3/kernel, dense_4/bias, dense_4/kernel, dense_4_1/bias, dense_4_1/kernel, dense_4_2/bias, dense_4_2/kernel, dense_4_3/bias, dense_4_3/kernel, dense_5/bias, 
dense_5/kernel, dense_5_1/bias, dense_5_1/kernel, dense_5_2/bias, dense_5_2/kernel, dense_5_3/bias, dense_5_3/kernel, dense_6/bias, dense_6/kernel, dense_6_1/bias, dense_6_1/kernel, dense_6_2/bias, dense_6_2/kernel, dense_6_3/bias, dense_6_3/kernel, lstm_1/bias, lstm_1/kernel, lstm_1/recurrent_kernel, lstm_1_1/bias, lstm_1_1/kernel, lstm_1_1/recurrent_kernel, lstm_1_2/bias, lstm_1_2/kernel, lstm_1_2/recurrent_kernel, lstm_1_3/bias, lstm_1_3/kernel, lstm_1_3/recurrent_kernel, lstm_2/bias, lstm_2/kernel, lstm_2/recurrent_kernel, lstm_2_1/bias, lstm_2_1/kernel, lstm_2_1/recurrent_kernel, lstm_2_2/bias, lstm_2_2/kernel, lstm_2_2/recurrent_kernel, lstm_2_3/bias, lstm_2_3/kernel, lstm_2_3/recurrent_kernel, lstm_3/bias, lstm_3/kernel, lstm_3/recurrent_kernel, lstm_3_1/bias, lstm_3_1/kernel, lstm_3_1/recurrent_kernel, lstm_3_2/bias, lstm_3_2/kernel, lstm_3_2/recurrent_kernel, lstm_3_3/bias, lstm_3_3/kernel, lstm_3_3/recurrent_kernel, lstm_4/bias, lstm_4/kernel, lstm_4/recurrent_kernel, lstm_4_1/bias, lstm_4_1/kernel, lstm_4_1/recurrent_kernel, lstm_4_2/bias, lstm_4_2/kernel, lstm_4_2/recurrent_kernel, lstm_4_3/bias, lstm_4_3/kernel, lstm_4_3/recurrent_kernel, training/Adam/Variable, training/Adam/Variable_1, training/Adam/Variable_10, training/Adam/Variable_11, training/Adam/Variable_12, training/Adam/Variable_13, training/Adam/Variable_14, training/Adam/Variable_15, training/Adam/Variable_16, training/Adam/Variable_17, training/Adam/Variable_18, training/Adam/Variable_19, training/Adam/Variable_2, training/Adam/Variable_20, training/Adam/Variable_21, training/Adam/Variable_22, training/Adam/Variable_23, training/Adam/Variable_3, training/Adam/Variable_4, training/Adam/Variable_5, training/Adam/Variable_6, training/Adam/Variable_7, training/Adam/Variable_8, training/Adam/Variable_9, training_1/Adam/Variable, training_1/Adam/Variable_1, training_1/Adam/Variable_10, training_1/Adam/Variable_11, training_1/Adam/Variable_12, training_1/Adam/Variable_13, training_1/Adam/Variable_14, training_1/Adam/Variable_15, training_1/Adam/Variable_16, training_1/Adam/Variable_17, training_1/Adam/Variable_18, training_1/Adam/Variable_19, training_1/Adam/Variable_2, training_1/Adam/Variable_20, training_1/Adam/Variable_21, training_1/Adam/Variable_22, training_1/Adam/Variable_23, training_1/Adam/Variable_3, training_1/Adam/Variable_4, training_1/Adam/Variable_5, training_1/Adam/Variable_6, training_1/Adam/Variable_7, training_1/Adam/Variable_8, training_1/Adam/Variable_9)]]

Caused by op u'save_12/SaveV2', defined at:
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipykernel/__main__.py", line 3, in <module>
    app.launch_new_instance()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipykernel/kernelapp.py", line 474, in start
    ioloop.IOLoop.instance().start()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/zmq/eventloop/ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tornado/ioloop.py", line 883, in start
    handler_func(fd_obj, events)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
    self._handle_recv()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tornado/stack_context.py", line 275, in null_wrapper
    return fn(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
    handler(stream, idents, msg)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipykernel/kernelbase.py", line 390, in execute_request
    user_expressions, allow_stdin)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ipykernel/zmqshell.py", line 501, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2821, in run_ast_nodes
    if self.run_code(code, result):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-142-159e25a7a279>", line 18, in <module>
    legacy_init_op = legacy_init_op)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/saved_model/builder_impl.py", line 432, in add_meta_graph_and_variables
    """
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1056, in __init__
    # Pass the variables as a dict:
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1086, in build
    restore_sequentially: A `Bool`, which if true, causes restore of different
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 685, in build
    else:
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 361, in _AddShardedSaveOps
    return self._AddShardedSaveOpsForV2(filename_tensor, per_device)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 335, in _AddShardedSaveOpsForV2
    sharded_saves.append(self._AddSaveOps(sharded_filename, saveables))
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 276, in _AddSaveOps
    save = self.save_op(filename_tensor, saveables)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 219, in save_op
    tensors)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 780, in save_v2
    shard: A `Tensor` of type `int32`.
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
    if output_structure:
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
    grouped_inputs, self._control_inputs)

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value Adadelta/decay
	 [[Node: save_12/SaveV2 = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](save_12/ShardedFilename, save_12/SaveV2/tensor_names, save_12/SaveV2/shape_and_slices, Adadelta/decay, Adadelta/iterations, Adadelta/lr, Adadelta_1/decay, Adadelta_1/iterations, Adadelta_1/lr, Adagrad/decay, Adagrad/iterations, Adagrad/lr, Adagrad_1/decay, Adagrad_1/iterations, Adagrad_1/lr, Adam/beta_1, Adam/beta_2, Adam/decay, Adam/iterations, Adam/lr, Adam_1/beta_1, Adam_1/beta_2, Adam_1/decay, Adam_1/iterations, Adam_1/lr, Adam_2/beta_1, Adam_2/beta_2, Adam_2/decay, Adam_2/iterations, Adam_2/lr, Adam_3/beta_1, Adam_3/beta_2, Adam_3/decay, Adam_3/iterations, Adam_3/lr, Adam_4/beta_1, Adam_4/beta_2, Adam_4/decay, Adam_4/iterations, Adam_4/lr, Adam_5/beta_1, Adam_5/beta_2, Adam_5/decay, Adam_5/iterations, Adam_5/lr, Adam_6/beta_1, Adam_6/beta_2, Adam_6/decay, Adam_6/iterations, Adam_6/lr, RMSprop/decay, RMSprop/iterations, RMSprop/lr, RMSprop/rho, RMSprop_1/decay, RMSprop_1/iterations, RMSprop_1/lr, RMSprop_1/rho, dense_1/bias, dense_1/kernel, dense_1_1/bias, dense_1_1/kernel, dense_1_2/bias, dense_1_2/kernel, dense_1_3/bias, dense_1_3/kernel, dense_2/bias, dense_2/kernel, dense_2_1/bias, dense_2_1/kernel, dense_2_2/bias, dense_2_2/kernel, dense_2_3/bias, dense_2_3/kernel, dense_3/bias, dense_3/kernel, dense_3_1/bias, dense_3_1/kernel, dense_3_2/bias, dense_3_2/kernel, dense_3_3/bias, dense_3_3/kernel, dense_4/bias, dense_4/kernel, dense_4_1/bias, dense_4_1/kernel, dense_4_2/bias, dense_4_2/kernel, dense_4_3/bias, dense_4_3/kernel, dense_5/bias, 
dense_5/kernel, dense_5_1/bias, dense_5_1/kernel, dense_5_2/bias, dense_5_2/kernel, dense_5_3/bias, dense_5_3/kernel, dense_6/bias, dense_6/kernel, dense_6_1/bias, dense_6_1/kernel, dense_6_2/bias, dense_6_2/kernel, dense_6_3/bias, dense_6_3/kernel, lstm_1/bias, lstm_1/kernel, lstm_1/recurrent_kernel, lstm_1_1/bias, lstm_1_1/kernel, lstm_1_1/recurrent_kernel, lstm_1_2/bias, lstm_1_2/kernel, lstm_1_2/recurrent_kernel, lstm_1_3/bias, lstm_1_3/kernel, lstm_1_3/recurrent_kernel, lstm_2/bias, lstm_2/kernel, lstm_2/recurrent_kernel, lstm_2_1/bias, lstm_2_1/kernel, lstm_2_1/recurrent_kernel, lstm_2_2/bias, lstm_2_2/kernel, lstm_2_2/recurrent_kernel, lstm_2_3/bias, lstm_2_3/kernel, lstm_2_3/recurrent_kernel, lstm_3/bias, lstm_3/kernel, lstm_3/recurrent_kernel, lstm_3_1/bias, lstm_3_1/kernel, lstm_3_1/recurrent_kernel, lstm_3_2/bias, lstm_3_2/kernel, lstm_3_2/recurrent_kernel, lstm_3_3/bias, lstm_3_3/kernel, lstm_3_3/recurrent_kernel, lstm_4/bias, lstm_4/kernel, lstm_4/recurrent_kernel, lstm_4_1/bias, lstm_4_1/kernel, lstm_4_1/recurrent_kernel, lstm_4_2/bias, lstm_4_2/kernel, lstm_4_2/recurrent_kernel, lstm_4_3/bias, lstm_4_3/kernel, lstm_4_3/recurrent_kernel, training/Adam/Variable, training/Adam/Variable_1, training/Adam/Variable_10, training/Adam/Variable_11, training/Adam/Variable_12, training/Adam/Variable_13, training/Adam/Variable_14, training/Adam/Variable_15, training/Adam/Variable_16, training/Adam/Variable_17, training/Adam/Variable_18, training/Adam/Variable_19, training/Adam/Variable_2, training/Adam/Variable_20, training/Adam/Variable_21, training/Adam/Variable_22, training/Adam/Variable_23, training/Adam/Variable_3, training/Adam/Variable_4, training/Adam/Variable_5, training/Adam/Variable_6, training/Adam/Variable_7, training/Adam/Variable_8, training/Adam/Variable_9, training_1/Adam/Variable, training_1/Adam/Variable_1, training_1/Adam/Variable_10, training_1/Adam/Variable_11, training_1/Adam/Variable_12, training_1/Adam/Variable_13, training_1/Adam/Variable_14, training_1/Adam/Variable_15, training_1/Adam/Variable_16, training_1/Adam/Variable_17, training_1/Adam/Variable_18, training_1/Adam/Variable_19, training_1/Adam/Variable_2, training_1/Adam/Variable_20, training_1/Adam/Variable_21, training_1/Adam/Variable_22, training_1/Adam/Variable_23, training_1/Adam/Variable_3, training_1/Adam/Variable_4, training_1/Adam/Variable_5, training_1/Adam/Variable_6, training_1/Adam/Variable_7, training_1/Adam/Variable_8, training_1/Adam/Variable_9)]]

It finally worked. I think there must've been a problem with the model that I was calling in the signature.

I followed another example, which used the more traditional way of loading up a model from keras (using H5 and json) and it worked. I think the problem before was the model I was calling wasn't compiled.

Here is the link. https://github.com/krystianity/keras-serving
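
(For reference, the json + H5 route mentioned above looks roughly like this; file names and compile arguments are illustrative:)

from keras.models import model_from_json

with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('weights.h5')
# compiling matters here: an uncompiled model can leave optimizer/metric
# variables uninitialized, which the Saver then trips over
model.compile(optimizer='adam', loss='categorical_crossentropy')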

Hi everyone,

I followed the steps here to export a model as a .pb file in SavedModel format for TensorFlow Serving, and then built the gRPC client.

I'm seeing the very weird behavior that the model always predicts exactly the same class (no matter what image I take as input).

I'm unsure if my error is on client side or if the export is somehow wrong. Did anybody have the same issue?

EDIT:
After searching for a few hours I finally found the error myself:
I didn't need this line:
#image = imagenet_utils.preprocess_input(image)

I guess the images didn't get preprocessed using imagenet_utils during training, and thus it shouldn't be used during inference, but I'm not completely sure at this time.
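
(The general rule is that inference-time preprocessing must mirror the training-time preprocessing exactly. A sketch, assuming the model was trained on images that were only rescaled by 1/255:)

import numpy as np
from keras.preprocessing.image import img_to_array

def prepare_for_inference(pil_image, target=(299, 299)):
    image = pil_image.resize(target)
    x = img_to_array(image) / 255.0   # mirror the training-time rescale
    # note: no imagenet_utils.preprocess_input() here, per the EDIT above
    return np.expand_dims(x, axis=0)  # add the batch dimension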

Here is the code I use for the export:

from keras import backend as K
from keras.models import Sequential, Model, load_model
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
import tensorflow as tf
from keras.models import model_from_config
from tensorflow.contrib.session_bundle import exporter
import keras as k
MODEL_PATH = './models/full_inception_model.h5'
export_path = './models/out3'

K.set_learning_phase(0)

new_model = load_model(MODEL_PATH)

builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(
    inputs={'input': new_model.input},
    outputs={'prob': new_model.output})

with K.get_session() as sess:

    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tag_constants.SERVING],
        clear_devices = True,
        signature_def_map={
            signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature}
    )

builder.save()

And the client:

def prepare_image(image, target):
    # if the image mode is not RGB, convert it
    if image.mode != "RGB":
        image = image.convert("RGB")

    # resize the input image and preprocess it
    image = image.resize(target)
    image = img_to_array(image)
    # important! otherwise the predictions will be '0'
    image = image / 255
    image = np.expand_dims(image, axis=0)
    # this is the line removed in the EDIT above:
    #image = imagenet_utils.preprocess_input(image)

    # return the processed image
    return image

def load_preprocess_img():
    
    target_size=(299, 299)
    image = flask.request.files["image"].read()
    image = Image.open(io.BytesIO(image))
    
    img = prepare_image(image, target=target_size)

    return img


@app.route('/', methods=['POST'])
def main2():
    credentials = implementations.ssl_channel_credentials(root_certificates=ROOT_CERT)
    channel = implementations.secure_channel(MODEL_SERVER_HOST, MODEL_SERVER_PORT, credentials)
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel, metadata_transformer=metadata_transformer)
 
    data = load_preprocess_img()
	
    with open("Output2.txt", "w") as text_file:
        text_file.write(np.array_str(data))
    
    request = predict_pb2.PredictRequest()
    request.model_spec.name = MODEL_NAME
    #request.model_spec.signature_name = 'predict_images'
    request.inputs['input'].CopyFrom(
		tf.contrib.util.make_tensor_proto(data))
    
    result = stub.Predict(request, 20.0)
    
    #print(np.expand_dims(result.outputs['prob'].float_val, axis=0))
    to_decode = np.expand_dims(result.outputs['prob'].float_val, axis=0)
    print(to_decode)
    #decoded = decode_predictions(to_decode, 10) # 5 here means top-5
    #print(decoded)
    return json.dumps(to_decode.tolist())

Hi @tspthomas,
I am currently reproducing your suggestion to serve a Keras model on TensorFlow Serving, but I am encountering an error. Can you help me with this?
Thanks in advance.

error message:
2018-06-07 15:06:36.856731: E tensorflow_serving/util/retrier.cc:38] Loading servable: {name: vgg19 version: 3} failed: Not found: Op type not registered 'ClipByValue' in binary running on 229d61c80ffd. Make sure the Op and Kernel are registered in the binary running in this process.

Hello @cchung100m !

It's been a while since I posted this suggestion, so things might have changed :)

Anyway, looking at this error, it seems that the 'ClipByValue' operation (which might be used by your model) is not available in the TensorFlow version you're using to run TensorFlow Serving.

I took a quick look at TF's code and it seems this operator was added in this commit here (and it seems this operation is only available starting on TF 1.8): tensorflow/tensorflow@083cf6b

I'd recommend trying to change the TF version you used to build TF Serving to a more recent one and testing. If that's not possible, I'd try downgrading the Keras version you're using and re-exporting the model.

I believe the problem could be either that the model implements something not available at your current TF version or some bug with the current TF Serving version you're using.

It also seems that more people are facing a similar issue here (no answer yet, but can help to confirm the versions): tensorflow/tensorflow#19822
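
One way to check which ops an export actually uses (to compare against what the serving binary supports) is to load it back and scan the GraphDef; a sketch, where 'folder_to_export' is a placeholder:

import tensorflow as tf
from tensorflow.python.saved_model import loader, tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = loader.load(sess, [tag_constants.SERVING], 'folder_to_export')
    # collect the distinct op types referenced by the exported graph
    op_types = sorted({node.op for node in meta_graph.graph_def.node})
    print(op_types)  # look for ops like 'ClipByValue' here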

Hope that helps!

Regards,

Hello @cchung100m
I had to play with the versions as well to get TensorFlow Serving to work, and finally got it running with TensorFlow 1.5 and Keras 2.1.4.

I never got this error though.

Hope this helps

Hi @tspthomas @R-Miner,

Thank you for the prompt reply.
I downgraded my TF version and it works successfully.
Thanks again for your suggestions :)

Package Version


Keras 2.1.3
tensorboard 1.7.0
tensorflow 1.7.0
tensorflow-gpu 1.4.0
tensorflow-serving-api-python3 1.7.0

How do I use a specific GPU to run TensorFlow Serving with Docker? I don't want to take up all GPUs when running TF Serving. Does anybody know? In addition, my model is written with tf.keras.