Perform "Full Yolo" training, fail to convert tflite.
dreamsidae opened this issue · 12 comments
Describe the bug:
Perform "Full Yolo" training, fail to convert tflite.
描述錯誤:
進行 "Full Yolo" 訓練,轉換 tflite 失敗
To Reproduce:
Run with the JSON settings below; the run ends with an error. With the same dataset, both "Tiny Yolo" and "MobileNet7_5" train normally and export a kmodel file.
{
    "model" : {
        "type": "Detector",
        "architecture": "Full Yolo",
        "input_size": [224,224],
        "anchors": [1.30,1.73, 2.50,2.80, 2.91,4.62, 4.35,5.16, 6.00,6.16],
        "labels": ["cat_face","dog_face"],
        "coord_scale" : 1.0,
        "class_scale" : 1.0,
        "object_scale" : 5.0,
        "no_object_scale" : 1.0
    },
    "weights" : {
        "full": "",
        "backend": "imagenet"
    },
    "train" : {
        "actual_epoch": 30,
        "train_image_folder": "dc_dataset/images",
        "train_annot_folder": "dc_dataset/annotations",
        "train_times": 12,
        "valid_image_folder": "dc_dataset/val_images",
        "valid_annot_folder": "dc_dataset/val_annotations",
        "valid_times": 4,
        "valid_metric": "mAP",
        "batch_size": 32,
        "learning_rate": 1e-4,
        "saved_folder": "dc_fyolo",
        "first_trainable_layer": "",
        "augumentation": true,
        "is_only_detect" : false
    },
    "converter" : {
        "type": ["k210"]
    }
}
Expected behavior / Error output:
Using TensorFlow backend.
Project folder projects/dc_fyolo already exists. Creating a folder for new training session.
K210 Converter ready
['cat_face', 'dog_face']
Imagenet for YOLO backend are not available yet, defaulting to random weights
Failed to load pre-trained weights for the whole model. It might be because you didn't specify any or the weight file cannot be found
Current training session folder is projects/dc_fyolo/2020-10-07_14-52-01
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
conv_1 (Conv2D) (None, 224, 224, 32) 864 input_1[0][0]
__________________________________________________________________________________________________
norm_1 (BatchNormalization) (None, 224, 224, 32) 128 conv_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 224, 224, 32) 0 norm_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 112, 112, 32) 0 leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
conv_2 (Conv2D) (None, 112, 112, 64) 18432 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
norm_2 (BatchNormalization) (None, 112, 112, 64) 256 conv_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 112, 112, 64) 0 norm_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 56, 56, 64) 0 leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
conv_3 (Conv2D) (None, 56, 56, 128) 73728 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
norm_3 (BatchNormalization) (None, 56, 56, 128) 512 conv_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 56, 56, 128) 0 norm_3[0][0]
__________________________________________________________________________________________________
conv_4 (Conv2D) (None, 56, 56, 64) 8192 leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
norm_4 (BatchNormalization) (None, 56, 56, 64) 256 conv_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 56, 56, 64) 0 norm_4[0][0]
__________________________________________________________________________________________________
conv_5 (Conv2D) (None, 56, 56, 128) 73728 leaky_re_lu_4[0][0]
__________________________________________________________________________________________________
norm_5 (BatchNormalization) (None, 56, 56, 128) 512 conv_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 56, 56, 128) 0 norm_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 28, 28, 128) 0 leaky_re_lu_5[0][0]
__________________________________________________________________________________________________
conv_6 (Conv2D) (None, 28, 28, 256) 294912 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
norm_6 (BatchNormalization) (None, 28, 28, 256) 1024 conv_6[0][0]
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 28, 28, 256) 0 norm_6[0][0]
__________________________________________________________________________________________________
conv_7 (Conv2D) (None, 28, 28, 128) 32768 leaky_re_lu_6[0][0]
__________________________________________________________________________________________________
norm_7 (BatchNormalization) (None, 28, 28, 128) 512 conv_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 28, 28, 128) 0 norm_7[0][0]
__________________________________________________________________________________________________
conv_8 (Conv2D) (None, 28, 28, 256) 294912 leaky_re_lu_7[0][0]
__________________________________________________________________________________________________
norm_8 (BatchNormalization) (None, 28, 28, 256) 1024 conv_8[0][0]
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 28, 28, 256) 0 norm_8[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 14, 14, 256) 0 leaky_re_lu_8[0][0]
__________________________________________________________________________________________________
conv_9 (Conv2D) (None, 14, 14, 512) 1179648 max_pooling2d_4[0][0]
__________________________________________________________________________________________________
norm_9 (BatchNormalization) (None, 14, 14, 512) 2048 conv_9[0][0]
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 14, 14, 512) 0 norm_9[0][0]
__________________________________________________________________________________________________
conv_10 (Conv2D) (None, 14, 14, 256) 131072 leaky_re_lu_9[0][0]
__________________________________________________________________________________________________
norm_10 (BatchNormalization) (None, 14, 14, 256) 1024 conv_10[0][0]
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 14, 14, 256) 0 norm_10[0][0]
__________________________________________________________________________________________________
conv_11 (Conv2D) (None, 14, 14, 512) 1179648 leaky_re_lu_10[0][0]
__________________________________________________________________________________________________
norm_11 (BatchNormalization) (None, 14, 14, 512) 2048 conv_11[0][0]
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 14, 14, 512) 0 norm_11[0][0]
__________________________________________________________________________________________________
conv_12 (Conv2D) (None, 14, 14, 256) 131072 leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
norm_12 (BatchNormalization) (None, 14, 14, 256) 1024 conv_12[0][0]
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 14, 14, 256) 0 norm_12[0][0]
__________________________________________________________________________________________________
conv_13 (Conv2D) (None, 14, 14, 512) 1179648 leaky_re_lu_12[0][0]
__________________________________________________________________________________________________
norm_13 (BatchNormalization) (None, 14, 14, 512) 2048 conv_13[0][0]
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 14, 14, 512) 0 norm_13[0][0]
__________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D) (None, 7, 7, 512) 0 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
conv_14 (Conv2D) (None, 7, 7, 1024) 4718592 max_pooling2d_5[0][0]
__________________________________________________________________________________________________
norm_14 (BatchNormalization) (None, 7, 7, 1024) 4096 conv_14[0][0]
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 7, 7, 1024) 0 norm_14[0][0]
__________________________________________________________________________________________________
conv_15 (Conv2D) (None, 7, 7, 512) 524288 leaky_re_lu_14[0][0]
__________________________________________________________________________________________________
norm_15 (BatchNormalization) (None, 7, 7, 512) 2048 conv_15[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 7, 7, 512) 0 norm_15[0][0]
__________________________________________________________________________________________________
conv_16 (Conv2D) (None, 7, 7, 1024) 4718592 leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
norm_16 (BatchNormalization) (None, 7, 7, 1024) 4096 conv_16[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 7, 7, 1024) 0 norm_16[0][0]
__________________________________________________________________________________________________
conv_17 (Conv2D) (None, 7, 7, 512) 524288 leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
norm_17 (BatchNormalization) (None, 7, 7, 512) 2048 conv_17[0][0]
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 7, 7, 512) 0 norm_17[0][0]
__________________________________________________________________________________________________
conv_18 (Conv2D) (None, 7, 7, 1024) 4718592 leaky_re_lu_17[0][0]
__________________________________________________________________________________________________
norm_18 (BatchNormalization) (None, 7, 7, 1024) 4096 conv_18[0][0]
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 7, 7, 1024) 0 norm_18[0][0]
__________________________________________________________________________________________________
conv_19 (Conv2D) (None, 7, 7, 1024) 9437184 leaky_re_lu_18[0][0]
__________________________________________________________________________________________________
norm_19 (BatchNormalization) (None, 7, 7, 1024) 4096 conv_19[0][0]
__________________________________________________________________________________________________
conv_21 (Conv2D) (None, 14, 14, 64) 32768 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 7, 7, 1024) 0 norm_19[0][0]
__________________________________________________________________________________________________
norm_21 (BatchNormalization) (None, 14, 14, 64) 256 conv_21[0][0]
__________________________________________________________________________________________________
conv_20 (Conv2D) (None, 7, 7, 1024) 9437184 leaky_re_lu_19[0][0]
__________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, 14, 14, 64) 0 norm_21[0][0]
__________________________________________________________________________________________________
norm_20 (BatchNormalization) (None, 7, 7, 1024) 4096 conv_20[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 7, 7, 256) 0 leaky_re_lu_21[0][0]
__________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 7, 7, 1024) 0 norm_20[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 7, 7, 1280) 0 lambda_1[0][0]
leaky_re_lu_20[0][0]
__________________________________________________________________________________________________
conv_22 (Conv2D) (None, 7, 7, 1024) 11796480 concatenate_1[0][0]
__________________________________________________________________________________________________
norm_22 (BatchNormalization) (None, 7, 7, 1024) 4096 conv_22[0][0]
__________________________________________________________________________________________________
leaky_re_lu_22 (LeakyReLU) (None, 7, 7, 1024) 0 norm_22[0][0]
__________________________________________________________________________________________________
detection_layer_35 (Conv2D) (None, 7, 7, 35) 35875 leaky_re_lu_22[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 7, 7, 5, 7) 0 detection_layer_35[0][0]
==================================================================================================
Total params: 50,583,811
Trainable params: 50,563,139
Non-trainable params: 20,672
__________________________________________________________________________________________________
Epoch 1/1
/home/user/miniconda3/lib/python3.7/site-packages/imgaug/imgaug.py:184: DeprecationWarning: Function `ContrastNormalization()` is deprecated. Use `imgaug.contrast.LinearContrast` instead.
warn_deprecated(msg, stacklevel=3)
1534/1534 [==============================] - 27720s 18s/step - loss: 0.5426 - val_loss: 0.6020
cat_face 0.2088
dog_face 0.1289
mAP: 0.1689
Saving model on first epoch irrespective of mAP
/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/yolo/backend/utils/map_evaluation.py:261: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
plt.show(block=False)
/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/yolo/backend/utils/map_evaluation.py:262: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
plt.pause(1)
471-mins to train
Traceback (most recent call last):
File "axelerate/train.py", line 184, in <module>
setup_training(config_file=args.config)
File "axelerate/train.py", line 169, in setup_training
return(train_from_config(config, dirname))
File "axelerate/train.py", line 149, in train_from_config
converter.convert_model(model_path)
File "/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/common_utils/convert.py", line 220, in convert_model
model = keras.models.load_model(model_path, compile=False)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 492, in load_wrapper
return load_function(*args, **kwargs)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 584, in load_model
model = _deserialize_model(h5dict, custom_objects, compile)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 274, in _deserialize_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/engine/saving.py", line 627, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/layers/__init__.py", line 168, in deserialize
printable_module_name='layer')
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/utils/generic_utils.py", line 147, in deserialize_keras_object
list(custom_objects.items())))
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/engine/network.py", line 1075, in from_config
process_node(layer, node_data)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/engine/network.py", line 1025, in process_node
layer(unpack_singleton(input_tensors), **kwargs)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/engine/base_layer.py", line 489, in __call__
output = self.call(inputs, **kwargs)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/layers/core.py", line 716, in call
return self.function(inputs, **arguments)
File "/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/common_utils/feature.py", line 78, in space_to_depth_x2
return tf.space_to_depth(x, block_size=2)
NameError: name 'tf' is not defined
Environment (please complete the following information):
Linux 18.04
tensorflow 1.15
aXeleRate 0.60 or 0.59
After I modified feature.py, I was able to convert.

Before the fix:

# the function to implement the organization layer (thanks to github.com/allanzelener/YAD2K)
def space_to_depth_x2(x):
    return tf.space_to_depth(x, block_size=2)

After the fix:

# the function to implement the organization layer (thanks to github.com/allanzelener/YAD2K)
def space_to_depth_x2(x):
    print("space_to_depth_x2 import tf ")
    import tensorflow as tf
    return tf.space_to_depth(x, block_size=2)
With this change, conversion works! But this function is called repeatedly during training, and for some reason the error only appears at the final conversion step.
feature_NEW.py.zip
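For anyone hitting the same NameError, here is why the local import helps: Keras marshals the Lambda function's bytecode into the .h5 file, and load_model rebuilds that function in a scope that does not include feature.py's module-level imports, so a module-level tf is no longer defined when the restored Lambda runs. A minimal sketch of the round trip (assuming TF 1.15 with standalone Keras; the file name s2d_demo.h5 is just for illustration):

from keras.layers import Input, Lambda
from keras.models import Model, load_model

def space_to_depth_x2(x):
    # importing inside the function keeps `tf` resolvable even after
    # the Lambda is deserialized outside feature.py's module scope
    import tensorflow as tf
    return tf.space_to_depth(x, block_size=2)

inp = Input(shape=(14, 14, 64))
out = Lambda(space_to_depth_x2)(inp)
Model(inp, out).save('s2d_demo.h5')

# with a module-level `import tensorflow as tf` instead, this line
# raises: NameError: name 'tf' is not defined
restored = load_model('s2d_demo.h5', compile=False)
print(restored.output_shape)  # (None, 7, 7, 256)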
After fixing the above problem, a new problem appeared! A tflite file can now be generated, but the kmodel conversion fails!

New error message:
Using TensorFlow backend.
Project folder projects/dc_fyolo already exists. Creating a folder for new training session.
K210 Converter ready
['cat_face', 'dog_face']
space_to_depth_x2 import tf
space_to_depth_x2 import tf
Imagenet for YOLO backend are not available yet, defaulting to random weights
Failed to load pre-trained weights for the whole model. It might be because you didn't specify any or the weight file cannot be found
Current training session folder is projects/dc_fyolo/2020-10-08_11-39-44
Model: "model_2"
(model summary identical to the first training session above)
Epoch 1/1
/home/user/miniconda3/lib/python3.7/site-packages/imgaug/imgaug.py:184: DeprecationWarning: Function `ContrastNormalization()` is deprecated. Use `imgaug.contrast.LinearContrast` instead.
warn_deprecated(msg, stacklevel=3)
1534/1534 [==============================] - 28188s 18s/step - loss: 0.5464 - val_loss: 0.6622
cat_face 0.2032
dog_face 0.2183
mAP: 0.2107
Saving model on first epoch irrespective of mAP
/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/yolo/backend/utils/map_evaluation.py:261: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
plt.show(block=False)
/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/yolo/backend/utils/map_evaluation.py:262: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure.
plt.pause(1)
479-mins to train
space_to_depth_x2 import tf
space_to_depth_x2 import tf
Converting to tflite without Reshape layer for K210 Yolo
space_to_depth_x2 import tf
space_to_depth_x2 import tf
space_to_depth_x2 import tf
Traceback (most recent call last):
File "axelerate/train.py", line 184, in <module>
setup_training(config_file=args.config)
File "axelerate/train.py", line 169, in setup_training
return(train_from_config(config, dirname))
File "axelerate/train.py", line 149, in train_from_config
converter.convert_model(model_path)
File "/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/common_utils/convert.py", line 228, in convert_model
self.convert_k210(model_path.split(".")[0] + '.tflite')
File "/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/common_utils/convert.py", line 119, in convert_k210
folder_name = self.k210_dataset_gen()
File "/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/common_utils/convert.py", line 92, in k210_dataset_gen
backend = create_feature_extractor(self._backend, [self._img_size[0], self._img_size[1]])
File "/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/common_utils/feature.py", line 35, in create_feature_extractor
feature_extractor = FullYoloFeature(input_size, weights)
File "/home/user/miniconda3/lib/python3.7/site-packages/axelerate/networks/common_utils/feature.py", line 196, in __init__
x = concatenate([skip_connection, x])
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/layers/merge.py", line 649, in concatenate
return Concatenate(axis=axis, **kwargs)(inputs)
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/engine/base_layer.py", line 463, in __call__
self.build(unpack_singleton(input_shapes))
File "/home/user/miniconda3/lib/python3.7/site-packages/keras/layers/merge.py", line 357, in build
shape_set.add(tuple(reduced_inputs_shapes[i]))
TypeError: unhashable type: 'Dimension'
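For reference on this new traceback: in the TF 1.x line used here, TensorShape entries are Dimension objects that define __eq__ without __hash__, so they are unhashable under Python 3, and Keras' Concatenate.build fails when it puts shape tuples into a set. A minimal sketch of the behavior (assuming the same TF 1.15 environment as above):

import tensorflow as tf  # 1.15

dim = tf.Dimension(7)
try:
    # Concatenate.build effectively does shape_set.add(tuple(shape)),
    # which requires every tuple entry to be hashable
    {dim}
except TypeError as e:
    print(e)  # unhashable type: 'Dimension'

# plain ints are hashable, so casting the shape entries avoids the error
print({dim.value})  # {7}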
K210 doesn't support Full YOLO - the Full YOLO model is too big to fit in the memory. Of the architectures available in aXeleRate, only MobileNet (alpha 0.25 - 0.75 is supported by the MicroPython firmware; for alpha 1.0 you might need to use C or a minimal version of the MicroPython firmware) and Tiny YOLO are guaranteed to work with K210.
TypeError: unhashable type: 'Dimension'
I've already fixed this error, but didn't commit the fix yet. I'll try my best to commit the fix during the weekend.
Thanks for your reply! I only wanted to use Full YOLO for a quick test, so I'll switch back to Tiny YOLO!
Thanks again for your selfless contribution!
I followed the same steps, but I still get the same error.
Try the development branch - I migrated it to tf 2.3 and fixed that error in the process. Just keep in mind that you can't use Full YOLO for K210.
After git cloning the repo, do
cd aXeleRate && git checkout dev
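For completeness, the full sequence might look like this (a sketch; the editable pip install at the end is my assumption - use whatever install method you normally use):

git clone https://github.com/AIWintermuteAI/aXeleRate.git
cd aXeleRate && git checkout dev
pip install -e .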
Thanks
Thanks. If my alpha == 0.75, will Full YOLO (v2) work?
The alpha parameter is only for MobileNet; Full YOLO and Tiny YOLO don't have it. If you need to train a network for K210, just use Tiny YOLO (if you have enough data to train from scratch) or MobileNet (if you want to use ImageNet weights).
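In other words, to retarget the config from this issue at K210 with ImageNet weights, only the architecture needs to change; a sketch reusing the original settings (MobileNet7_5 corresponds to alpha 0.75, inside the supported 0.25 - 0.75 range; the train and converter sections stay as posted above):

{
    "model" : {
        "type": "Detector",
        "architecture": "MobileNet7_5",
        "input_size": [224,224],
        "anchors": [1.30,1.73, 2.50,2.80, 2.91,4.62, 4.35,5.16, 6.00,6.16],
        "labels": ["cat_face","dog_face"],
        "coord_scale" : 1.0,
        "class_scale" : 1.0,
        "object_scale" : 5.0,
        "no_object_scale" : 1.0
    },
    "weights" : {
        "full": "",
        "backend": "imagenet"
    }
}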
I updated to 2.3, but:
  File "C:\ProgramData\Anaconda3\envs\ax210tf2\lib\site-packages\axelerate\networks\yolo\backend\utils\map_evaluation.py", line 4, in <module>
    import tensorflow.keras
ModuleNotFoundError: No module named 'tensorflow.keras'
Please check your tensorflow installation -
ModuleNotFoundError: No module named 'tensorflow.keras'
means that you do not have tensorflow 2.3 installed.
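A quick way to check which TensorFlow is actually in the active environment (assuming a pip-based install):

python -c "import tensorflow as tf; print(tf.__version__)"
pip install tensorflow==2.3.0   # if the version printed is not 2.3.x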
thanks