onnx/tensorflow-onnx
Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX
Jupyter Notebook · Apache-2.0
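For orientation, a typical Python-API conversion looks roughly like the sketch below. This is a minimal example, not the project's canonical recipe: the toy model, input shape, opset number, and output file name are all placeholders, and it assumes a TF/Keras combination that tf2onnx currently supports (Keras 3 / TF 2.16 support is still tracked in #2329).

```python
import tensorflow as tf
import tf2onnx

# Hypothetical toy model; substitute your own Keras model here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# The input signature lets tf2onnx trace the model with a dynamic batch dimension.
spec = (tf.TensorSpec((None, 28, 28, 1), tf.float32, name="input"),)

# Convert and serialize; opset 17 is an arbitrary recent choice.
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=17, output_path="model.onnx")

# Sanity check: list the graph outputs of the resulting ONNX model.
print([o.name for o in onnx_model.graph.output])
```

The command-line wrapper (`python -m tf2onnx.convert`) covers SavedModel and TFLite inputs as well; many of the issues below concern specific operators or flags in that pipeline, and a TFLite-specific sketch follows the issues list.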
Issues
AttributeError: 'NoneType' object has no attribute 'decode' during TFLite to ONNX conversion
#2362 opened by kismeter - 0
--outputs-as-nchw doesn't seem to work?
#2360 opened by astalavistababe - 0
tf2onnx.tfonnx:Tensorflow op [sequential_1_1/lstm_1/CudnnRNNV3: CudnnRNNV3] is not supported
#2359 opened by nassimus26 - 0
Integrate with ONNX 1.17.0 release branch
#2358 opened by roborags - 0
NCHW conversion not supported for 3D models
#2357 opened by alkamid - 0
Converting ssd_mobilenet_v2_320x320 to ONNX
#2355 opened by YigitKaanBingol - 2
error while creating onnx_node
#2354 opened by KTibow - 4
Tensorflow 2.16 / Keras 3 support
#2329 opened by pwuertz - 4
publish a command line executable on releases
#2321 opened by vpenades - 0
Remove EinsumOptimizer
#2352 opened by LinGeLin - 1
Convert prelu to five other basic operators
#2341 opened by salanzewei - 1
Error when running tf2onnx.convert
#2346 opened by tangty11 - 0
Unsupported op: PartitionedCall
#2347 opened by geiche735 - 1
Are there plans to support bf16?
#2345 opened by LinGeLin - 0
tf2onnx failed to convert ComplexAbs with opset 15
#2340 opened by yjiangling - 0
Inconsistent output with "correct" classification
#2339 opened by neo-yuan-fit - 0
Properly Support BFloat16
#2337 opened by AndrewJBean - 0
How to fix the output batch size in tf2onnx
#2334 opened by another-tee - 4
Project requires an old version of Protobuf
#2328 opened by wingdagger - 0
Video generation returned the following error message: RuntimeError: Error in execution: Non-zero status code returned while running Transpose node.
#2331 opened by eduardosumita - 2
Support CTCBeamSearchDecoder
#2302 opened by CaptainDario - 0
multiple inputs
#2330 opened by gufett0 - 9
Cannot convert Keras model to ONNX: 'Sequential' object has no attribute 'output_names'
#2319 opened by LaurentBerger - 3
DirectML returning empty result with ObjectDetection (Mobilinet V2 FPN Keras)
#2325 opened by willianwrm - 0
Registering operator for tf.linalg.eig
#2326 opened by frytoli - 0
There are discrepancies between the outputs of the TFLite model and the converted ONNX model.
#2323 opened by SuhwanSong - 0
Is it possible to convert TFLite to ONNX with a changed input dimension?
#2322 opened by DongNaeSwellfish - 0
tf.linalg.eigh not supported in tf2onnx
#2320 opened by XXXDoraemonslayer - 1
Integrate with ONNX 1.16.0 release branch
#2310 opened by cjvolzka - 0
Converting TF with NumPy 1.26
#2317 opened by izaszopa - 2
Add support for tf.keras.layers.CuDNNLSTM
#2290 opened by hashJoe - 1
Please help validate release candidate for ONNX 1.16.0rc2
#2318 opened by liqunfu - 2
Conv3D performance degradation after ONNX conversion
#2303 opened by jm2201 - 0
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 101: invalid start byte
#2316 opened by falloLIYX - 1
Model conversion from TensorFlow fails when 2 GPUs are used and the first one is set to not be visible in TensorFlow
#2314 opened by rytisss - 1
tf.image.resize can't be converted to an FP16 model
#2305 opened by nistarlwc - 2
YOLOV8Detector with non_max_suppression is not converted
#2298 opened by ksv87 - 1
tf2onnx produces a graph that is not good for performance
#2297 opened by Rayndell - 3
Inconsistency in conv+bn fusion and addition of useless reshapes in tf2onnx==1.16.1
#2300 opened by Rikyf3 - 0
Maxpool 2D layer error `Negative dimension size caused by subtracting 2 from 1 input shape shape=(128, 128, 1, 16)`
#2304 opened by shreya-ibind - 0
add mypy type hints to reduce possible sources of error
#2296 opened by andife - 0
Support the latest TensorFlow version 2.15.0.
#2291 opened by fatcat-z - 3
Are there any upgrades to tf2onnx?
#2287 opened by hanzigs - 0
The TF model and the ONNX model converted from it have different output shapes when tf.keras.layers.Conv3DTranspose is used
#2286 opened by ktsumura - 0
Enable support for onnxruntime 1.16.3.
#2284 opened by fatcat-z
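Several of the issues above (#2362, #2323, #2322) concern the TFLite path. Below is a minimal sketch of that entry point, assuming the installed tf2onnx release exposes `tf2onnx.convert.from_tflite`; the file names are placeholders, and the command-line `--tflite` option covers the same route.

```python
import tf2onnx

# "classifier.tflite" / "classifier.onnx" are hypothetical file names;
# point them at a real .tflite flatbuffer and the desired output path.
onnx_model, _ = tf2onnx.convert.from_tflite(
    "classifier.tflite",
    opset=17,                      # target ONNX opset; adjust to your runtime
    output_path="classifier.onnx",
)

# Rough sanity check that a graph was produced.
print(f"converted graph has {len(onnx_model.graph.node)} nodes")
```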