uber/neuropod

Neuropod (v3.0.0-rc1) cannot be packaged through python

davidxiaozhi opened this issue · 7 comments

In Neuropod v0.2.0, we only needed to install one dependency to package TensorFlow and PyTorch models through Python, then run them:


But the whl released with Neuropod (v3.0.0-rc1) only contains the following (see screenshot):

In addition, after compiling from source, I could not find backends matching my device, so I was unable to complete a local installation.

Also, I'd like to ask whether we can package TensorFlow models through the C++ API.

How do we package a model for TensorFlow 2.0? It doesn't have a session or a static graph.

    create_keras_neuropod(
        neuropod_path,
        model_name,
        sess,
        model,
        node_name_mapping=None,
        input_spec=None,
        output_spec=None,
        input_tensor_device=None,
        default_input_tensor_device=GPU,
        custom_ops=[],
        package_as_zip=True,
        test_input_data=None,
        test_expected_out=None,
        persist_test_data=True,
    )

    create_tensorflow_neuropod(
        neuropod_path,
        model_name,
        node_name_mapping,
        input_spec,
        output_spec,
        frozen_graph_path=None,
        graph_def=None,
        init_op_names=[],
        input_tensor_device=None,
        default_input_tensor_device=GPU,
        custom_ops=[],
        package_as_zip=True,
        test_input_data=None,
        test_expected_out=None,
        persist_test_data=True,
    )
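For reference, the spec arguments are plain lists of dicts describing each tensor (name, dtype, and a shape where string dimensions like `"batch"` are symbolic). The sketch below shows how a packaging call for a frozen TF1 graph might be assembled; the paths, node names, and the `neuropod.packagers` import path are assumptions, not verified against this release:

```python
# Sketch: the arguments you'd pass to create_tensorflow_neuropod for a
# frozen TF1 graph. Paths and node names here are hypothetical.
packaging_args = dict(
    neuropod_path="my_model.neuropod",
    model_name="my_model",
    # maps neuropod tensor names to node names in the TF graph
    node_name_mapping={"x": "some_input:0", "out": "some_output:0"},
    input_spec=[{"name": "x", "dtype": "float32", "shape": ("batch", 2)}],
    output_spec=[{"name": "out", "dtype": "float32", "shape": ("batch", 1)}],
    frozen_graph_path="frozen_graph.pb",
)

# With neuropod installed, packaging would then be one call:
#   from neuropod.packagers import create_tensorflow_neuropod
#   create_tensorflow_neuropod(**packaging_args)
print(sorted(packaging_args))
```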

See the installation docs that were linked from the release (https://neuropod.ai/docs/master/installing/). These are different than the v0.2.0 installation instructions.

There is now a single way to install backends regardless of what language they are being used from (Python, C++, C, Java, etc.).

Also, I'd like to ask whether we can package TensorFlow models through the C++ API.

Model packaging is currently only available from Python. Do you have a use case where you're building a TF model in C++?

(Inference works from all supported languages)
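From Python, inference with a packaged model looks roughly like the sketch below. The `load_neuropod` entry point and the model path are assumptions; both imports are guarded so the snippet degrades gracefully where neuropod or numpy is not installed:

```python
import os

try:
    import numpy as np
    from neuropod.loader import load_neuropod  # assumed entry point
    HAVE_DEPS = True
except ImportError:
    HAVE_DEPS = False

MODEL_PATH = "my_model.neuropod"  # hypothetical packaged model

def run_inference(path):
    # infer() maps input names to numpy arrays and returns the same
    # for outputs, regardless of which framework produced the model
    with load_neuropod(path) as model:
        return model.infer({"x": np.array([[1.0, 2.0]], dtype=np.float32)})

# Only attempt inference when the dependencies and model are present
if HAVE_DEPS and os.path.exists(MODEL_PATH):
    print(run_inference(MODEL_PATH))
```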

How do we package a model for TensorFlow 2.0? It doesn't have a session or a static graph.

@vkuzmin-uber should be able to help with that

@VivekPanyam After an algorithm engineer finishes training, the model is stored in a particular file system; that engineer is not responsible for packaging it with Neuropod. If we use Neuropod for inference, the final step is packaging the native model so it can be executed directly. If packaging could be done directly from C++, we could avoid the cost of switching to Python just to repackage the model.

@vkuzmin-uber If possible, could the specification of the Neuropod package format be published, so that users can more easily inspect and unpack packages themselves?
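On inspecting packages: since `package_as_zip=True` produces a standard zip archive, its contents can already be listed with stdlib tooling even without a published spec. A neutral sketch (using a throwaway stand-in archive, since the real internal layout is not documented here):

```python
import io
import zipfile

# Stand-in archive; a real package produced with package_as_zip=True
# is a plain zip and can be inspected the same way.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("placeholder.txt", "not the real neuropod layout")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()

print(names)
```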

@VivekPanyam @vkuzmin-uber How do you run inference with TensorFlow 2.0 through Neuropod when the model was packaged with create_keras_neuropod?

We use a workaround for now:

TF.Keras:

 from tensorflow import keras
 from neuropod.backends.keras.packager import *

 # TF2 no longer recommends using sessions,
 # but the Keras backend still needs this concept.
 try:
     import tensorflow.compat.v1 as tf
     tf.disable_v2_behavior()

     import tensorflow.python.keras.backend as K
     sess = K.get_session()
 except ImportError:
     import tensorflow as tf
     sess = keras.backend.get_session()

 create_keras_neuropod(

TensorFlow:

     if tf.__version__[0] != "1":
         import tensorflow.compat.v1 as tf
         tf.disable_v2_behavior()

     create_tensorflow_neuropod(
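The version gate in the snippet above can be factored into a small helper. This is just the string logic from the workaround, runnable without TensorFlow installed (using `split(".")` rather than indexing the first character, which would misfire on a hypothetical version 10.x):

```python
def needs_v1_compat(tf_version):
    """True when the session-based packagers need tensorflow.compat.v1
    plus disable_v2_behavior(), i.e. for any non-1.x TensorFlow."""
    return tf_version.split(".")[0] != "1"

print(needs_v1_compat("1.15.0"), needs_v1_compat("2.3.1"))
# prints: False True
```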

I investigated a bit how to make this TF2-compatible without switching to V1. I'll write up some ideas here later.

It is recommended to add a new packager to handle the TF 2.x series separately; otherwise the existing handling of 2.x models actually falls back to running in V1 compatibility mode.

In addition, the Keras-based packaging function has no parameter for the newly generated path, whereas the TorchScript packager includes a parameter for the save path.