Version required
lucamocerino opened this issue · 8 comments
Hi,
interesting work. I have some issues that I cannot fix on my own. Could you post the required versions of TensorFlow, Keras, etc.? Some features appear to be incompatible.
Thanks!
This was tested with TensorFlow 2.1 and should use the built-in tensorflow.keras. If there are still issues after updating, please let me know specifically what they are and I'll try to get them fixed ASAP!
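For reference, here's a minimal sketch of the intended flow, assuming a model built and saved with the built-in tensorflow.keras; the file name is a placeholder, and the shape/layout match the call used later in this thread:

```python
import tensorflow as tf
from tvm import relay

# Assumed: a model saved from the built-in tensorflow.keras
# (not the standalone `keras` package); the path is a placeholder.
model = tf.keras.models.load_model('alexnet_riptide.h5')

# Convert to Relay using the same shape/layout as the example in this thread.
mod, params = relay.frontend.from_keras(
    model, shape={'input_1': [1, 32, 32, 3]}, layout='NHWC')
```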
I faced this issue when trying the AlexNet example. I'm on TF 2.1, but it seems the model definition is not the one expected, since I get an assertion error when calling mod, params = relay.frontend.from_keras:
assert isinstance(model, expected_model_class)
Make sure you're using the TVM submodule included in the repo (riptide fork at https://github.com/jwfromm/tvm/tree/riptide). I'm planning on getting the required functionality added to the main TVM branch soon.
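If it's useful, a quick way to double-check which TVM is actually being imported (the expected path depends on how you built and installed the submodule, so treat this as a sketch):

```python
import tvm

# Should report the riptide fork's install (e.g. the egg built from the
# repo's tvm submodule), not a stock TVM picked up from pip or elsewhere.
print(tvm.__file__)
print(tvm.__version__)
```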
Yes, that was my mistake. Now I'm sure I have the right TVM branch, but I get this error:
```
Traceback (most recent call last):
  File "deploy_mod.py", line 37, in <module>
    mod, params = relay.frontend.from_keras(model, shape={'input_1': [1, 32, 32, 3]}, layout='NHWC')
  File "/home/anaconda3/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/relay/frontend/keras.py", line 1113, in from_keras
    if o in model.outputs:
  File "/home/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 757, in __bool__
    self._disallow_bool_casting()
  File "/home/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 526, in _disallow_bool_casting
    self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
  File "/home/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 515, in _disallow_in_graph_mode
    " this function with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
```
Hmm, are you sure you're using TF 2.1? In TF 2+ all tensors are eager by default, and this error is complaining that you're in graph mode. You could try adding tf.enable_eager_execution() to the top of your script.
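For what it's worth, a minimal sketch of that change; note that in TF 2.x the function is only exposed under tf.compat.v1, which is an assumption about the exact API path in your setup:

```python
import tensorflow as tf

# Eager execution is already the default in TF 2.x; this call (under
# tf.compat.v1 in TF 2.x) only matters if something has switched the
# script into graph mode.
tf.compat.v1.enable_eager_execution()
```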
It is not related to eager execution. I redid the whole setup process, and this is the traceback now:
```
Traceback (most recent call last):
  File "deploy_mod.py", line 34, in <module>
    mod, params = relay.frontend.from_keras(model, shape={'input_1': [1, 32, 32, 3]}, layout='NHWC')
  File "/home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/relay/frontend/keras.py", line 1123, in from_keras
    keras_op_to_relay(inexpr, keras_layer, op_name, etab)
  File "/home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/relay/frontend/keras.py", line 1003, in keras_op_to_relay
    outs = _convert_map[op_name](inexpr, keras_layer, etab)
  File "/home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/relay/frontend/keras.py", line 781, in _convert_bitserial_convolution
    out = _op.nn.bitserial_conv2d(data=inexpr, **params)
  File "/home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/relay/op/nn/nn.py", line 2140, in bitserial_conv2d
    out_dtype, unipolar)
  File "/home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (4) /home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x6$) [0x7f37a41df0b5]
  [bt] (3) /home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/libtvm.so(std::Function tvm::runtime::DataType, tvm::runtime::DataType, bool)>::AssignTypedLambda<tvm::RelayExpr ()(tvm::RelayExpr, tvm::RelayExpr, tvm::Array<tvm::PrimExpr, void>, tvm::Array<tvm::PrimExpr, void>, tvm::PrimExpr, tvm::Array<tvm::PrimExpr, void>, int, int, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, tvm::runtime::DataType, tvm::runtime::DataType, bool)>(tvm::RelayExpr ()(tvm::RelayExpr, tvm::RelayExpr, tvm::Array<tvm::PrimExpr, void>, tvm::Array<tvm::PrimExpr, void>, tvm::PrimExpr, tvm::Array<tvm::PrimExpr, void>, int, int, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, tvm::runtime::DataType, tvm::runtime::DataType, bool))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x1de) [0x7f37a3dd83ee]
  [bt] (2) /home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/libtvm.so(void tvm::runtime::detail::unpack_call_dispatcher<tvm::RelayExpr, 0, 13, tvm::RelayExpr ()(tvm::RelayExpr, tvm::RelayExpr, tvm::Array<tvm::PrimExpr, void>, tvm::Array<tvm::PrimExpr, void>, tvm::PrimExpr, tvm::Array<tvm::PrimExpr, void>, int, int, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, tvm::runtime::DataType, tvm::runtime::DataType, bool)>::run<tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue>(tvm::RelayExpr ( const&)(tvm::RelayExpr, tvm::RelayExpr, tvm::Array<tvm::PrimExpr, void>, tvm::Array<tvm::PrimExpr, void>, tvm::PrimExpr, tvm::Array<tvm::PrimExpr, void>, int, int, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, tvm::runtime::DataType, tvm::runtime::DataType, bool), tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&)+0x110) [0x7f37a3dd7f00]
  [bt] (1) /home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::runtime::TVMPODValue::operator int() const+0x140) [0x7f37a39c8540]
  [bt] (0) /home/mocerino/Desktop/riptide/pyrip/lib/python3.7/site-packages/tvm-0.7.dev0-py3.7-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x43) [0x7f37a39bd6b3]
  File "/home/mocerino/Desktop/riptide/tvm/include/tvm/runtime/packed_func.h", line 433
TVMError: Check failed: type_code == kDLInt (2 vs. 0) : expected int but get float
```
Thanks for pointing this out @lucamocerino. You're absolutely right that some bugs have started showing up, possibly due to a change in TensorFlow behavior that I missed. I just pushed changes to the Keras frontend that fix the bugs you encountered.
I'm going to close this issue for now. Please let me know if you encounter other errors.