hhk7734/tensorflow-yolov4

Enable mixed precision

Closed this issue · 2 comments

Is it possible to use TensorFlow's mixed precision for a performance boost on newer nvidia graphics cards?

from tensorflow.keras.mixed_precision import experimental as mixed_precision

policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)

yolo.make_model()

Right now this throws a TypeError, because intermediate tensor dtypes no longer match in the existing model:

TypeError: Input 'y' of 'AddV2' Op has type float32 that does not match type float16 of argument 'x'.

Am I doing something wrong in enabling it this way or is it just not possible with the current implementation?
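For reference, this error comes from adding a float32 tensor to a float16 tensor during graph construction, which is what happens when float32 values in the head meet float16 activations under the mixed_float16 policy. A minimal reproduction outside this repo (the function name is made up for illustration):

import tensorflow as tf

@tf.function
def add_mismatched(x):
    # x traces as float16, but the constant stays float32, so graph construction
    # fails with the same AddV2 TypeError quoted above.
    return x + tf.constant(1.0, dtype=tf.float32)

add_mismatched(tf.zeros((1,), dtype=tf.float16))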

x = self.yolov3_head(x)
x = self.yolov3_head_tiny(x)

Remove the code above.
I plan to modify the head part (#47), but I'm busy these days...
So if it is difficult to wait for the update, you will have to write your own head.
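This is not the repo's actual head code, but a generic Keras sketch of the pattern a custom head needs under mixed precision: cast any float32 constants (grid offsets, anchors, and so on) to the activation dtype before combining them. The class and argument names below are illustrative only.

import tensorflow as tf

class CustomHead(tf.keras.layers.Layer):
    # Illustrative only: shows the dtype handling, not the YOLO decoding math.
    def __init__(self, grid_xy, **kwargs):
        super().__init__(**kwargs)
        # Constants such as grid offsets are usually created as float32.
        self.grid_xy = tf.constant(grid_xy, dtype=tf.float32)

    def call(self, x):
        # Under mixed_float16, x is float16 while self.grid_xy is float32;
        # casting the constant to x.dtype avoids the AddV2 TypeError.
        return x + tf.cast(self.grid_xy, x.dtype)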

On yolov4 v3.0.0, you can enable mixed precision.
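(Side note: on TensorFlow 2.4 and later, the experimental module used in the question was promoted to a stable API. A minimal sketch, assuming the same yolo object as in the question:)

import tensorflow as tf

# TF >= 2.4 stable API; replaces tensorflow.keras.mixed_precision.experimental.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Assumes the same yolo object and make_model() call as in the question.
yolo.make_model()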

Both yolo.predict and yolo.inference use yolo_diou_nms:

pred_bboxes = self.yolo_diou_nms(
candidates=candidates, beta_nms=self.config.yolo_0.beta_nms
)

However, it only accepts np.float32 arrays, so if you want to enable mixed precision, modify the code as shown below:

    if candidates.dtype != np.float32:
        candidates = candidates.astype(np.float32)
    pred_bboxes = self.yolo_diou_nms(
        candidates=candidates, beta_nms=self.config.yolo_0.beta_nms
    )