dme-compunet/YoloV8

Logic for resizing images to fit the size of the model input data.

oneprofittips opened this issue · 4 comments

Please add or change the parameter
`Mode = originalAspectRatio ? ResizeMode.Max : ResizeMode.Stretch`
in the image-resizing method. `originalAspectRatio` suggests that the original proportions are preserved, but in practice, if the input image is, say, 480 x 60 and the model size is 480 x 480, the smaller side gets stretched to fit the model input.
Instead of `ResizeMode.Max`, I think it would be better to use `ResizeMode.BoxPad`, or to expose this setting in the configuration.
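For reference, a pad-style resize (what `ResizeMode.BoxPad` does in ImageSharp) scales the image so the longer side fits the target, then fills the remainder with padding instead of distorting the picture. A minimal Python sketch of the geometry; the function name is illustrative, not from this library:

```python
def letterbox_size(src_w, src_h, dst=480):
    """Compute the resized dimensions and padding needed to fit an
    image into a dst x dst square without changing its aspect ratio."""
    # scale so the longer side exactly fits the target square
    scale = dst / max(src_w, src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    # the remainder is filled with padding, not stretched
    pad_x, pad_y = dst - new_w, dst - new_h
    return new_w, new_h, pad_x, pad_y

# a 480x60 frame keeps its shape and gets 420 rows of padding
print(letterbox_size(480, 60))   # (480, 60, 0, 420)
```

With a stretch-style resize the same 480x60 frame would instead be scaled to 480x480, an 8x vertical distortion that a model trained on undistorted images has never seen.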

For example, I train at 480x480 and pass a 480x80 image to the Detect model, and I get 0 matches.
If I train at 480 but feed 480x80 images during training, then Detect on 480x80 input gives a more or less good result.
It appears that the resize stretches the 80-pixel side to 480 pixels instead of padding it with empty pixels.
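The padding behavior the author expects can be sketched in a few lines of NumPy: place the undistorted image on a constant-color square canvas. This is a hedged illustration of the general letterbox technique, not code from this library; the helper name and the fill value 114 (a common YOLO default) are my own choices:

```python
import numpy as np

def box_pad(img, dst=480, fill=114):
    """Pad an already-resized image to a dst x dst square,
    centering it on a constant-color canvas."""
    h, w = img.shape[:2]
    canvas = np.full((dst, dst) + img.shape[2:], fill, dtype=img.dtype)
    top = (dst - h) // 2
    left = (dst - w) // 2
    # paste the image; the rest of the canvas stays as padding
    canvas[top:top + h, left:left + w] = img
    return canvas

# a 480x80 frame becomes 480x480 with 200 padded rows above and below
frame = np.zeros((80, 480, 3), dtype=np.uint8)
padded = box_pad(frame)
print(padded.shape)  # (480, 480, 3)
```

Any detection boxes predicted on the padded tensor then need the offsets (`top`, `left`) subtracted to map them back to the original image coordinates.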

You can close this issue; I rewrote the application in Python.
Without batch input, resources are wasted anyway.