Cannot convert tf to onnx in object detection
arthurkafer opened this issue · 2 comments
Describe the bug
In a simple object detection training, I need to convert the trained model to ONNX, but the conversion fails with AttributeError: module 'tensorflow.keras.backend' has no attribute 'get_session'.
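For reference, tf.keras.backend.get_session was removed in TF 2.x and only remains under the compat namespace, which is where this error comes from. A minimal sketch reproducing it (assuming TF 2.x is installed):

import tensorflow as tf

try:
    # Removed from tf.keras.backend in TF 2.x: raises the AttributeError reported above
    sess = tf.keras.backend.get_session()
except AttributeError:
    # The 1.x session API only survives under the compat.v1 namespace
    sess = tf.compat.v1.keras.backend.get_session()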
To Reproduce
To reproduce, I ran a simple person detector training with the converter set to 'onnx'.
Expected behavior
The model should simply be converted to ONNX so I can use it on my Jetson Nano.
Environment (please complete the following information):
- Using Google Colab right now
Additional context
I saw that there aren't any object detection conversion examples, but I assume it should work as well.
This is my config dict:
{
    "model": {
        "type": "Detector",
        "architecture": "MobileNet1_0",  # MobileNet7_5
        "input_size": 224,
        "anchors": [0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
        "labels": ["bobina"],
        "coord_scale": 1.0,
        "class_scale": 1.0,
        "object_scale": 5.0,
        "no_object_scale": 3.0  # 1.0e
    },
    "weights": {
        "full": "mobilenet_1_0_224_tf_no_top.h5",
        "backend": ""  # mobilenet_1_0_224_tf_no_top.h5
    },
    "train": {
        "actual_epoch": 15,
        "train_image_folder": "bobinas_vert/imgs",
        "train_annot_folder": "bobinas_vert/anns",
        "train_times": 10,
        "valid_image_folder": "bobinas_vert/imgs_validation",
        "valid_annot_folder": "bobinas_vert/anns_validation",
        "valid_times": 5,
        "valid_metric": "mAP",
        "batch_size": 8,
        "learning_rate": 1e-4,
        "saved_folder": "TESTE_ZERO_MEU",
        "first_trainable_layer": "",  # conv_pw_13_bn
        "augumentation": True,
        "is_only_detect": False
    },
    "converter": {
        "type": ["onnx"]
    }
}
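In case it helps, the dict is passed to training the same way as in the aXeleRate Colab notebooks; a sketch, assuming the dict above is bound to a variable named config:

from axelerate import setup_training

# Runs training and the converters listed in config["converter"]["type"];
# returns the path to the saved model, as in the aXeleRate example notebooks
model_path = setup_training(config_dict=config)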
Hi there!
Yes, the onnx converter option was broken by the move to TF 2.0; too many API changes.
I've fixed it now, can you check using the dev branch?
If running in Colab, you will need to replace the first cell with:
#we need imgaug 0.4 for image augmentations to work properly, see https://stackoverflow.com/questions/62580797/in-colab-doing-image-data-augmentation-with-imgaug-is-not-working-as-intended
!pip uninstall -y imgaug && pip uninstall -y albumentations && pip install imgaug==0.4 && pip install tf2onnx
!git clone https://github.com/AIWintermuteAI/aXeleRate.git
!cd aXeleRate && git checkout dev
import sys
sys.path.append('/content/aXeleRate')
from axelerate import setup_training, setup_inference
And then run training as usual. I tested all three types of networks (classifier, detector and segnet) on my local computer; they all worked as expected, outputting an .onnx file to the project folder. I also tested just the detector in Colab.
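For anyone curious what the TF 2.x path looks like, tf2onnx converts a Keras model directly, without the old get_session call; a rough sketch, assuming a recent tf2onnx and an illustrative model.h5 path for the trained model:

import tensorflow as tf
import tf2onnx

# Load the trained Keras model (path is illustrative; compile=False skips the training-only loss)
model = tf.keras.models.load_model("model.h5", compile=False)

# Convert straight from the Keras model, no TF1 session required
onnx_model, _ = tf2onnx.convert.from_keras(model, opset=13)

with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())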
Btw, in your config,
"weights" : {
"full": "mobilenet_1_0_224_tf_no_top.h5",
is not supposed to be used like that. For "full", you're supposed to pass the path to full model weights; normally that is done for resuming training.
You're passing the no_top model, which needs to go to "backend".
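I.e., for this test the weights section should look roughly like this (the no-top file moved to backend, full left empty unless resuming):

"weights": {
    "full": "",                                    # path to a previously trained full model, only for resuming training
    "backend": "mobilenet_1_0_224_tf_no_top.h5"    # ImageNet no-top weights go here
},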
Thanks for the fast update!
I forgot to change the weights for this test, and now it worked fine.