OpenAOI/anodet

Uneven anomaly score.


Sorry, it's me again.

Is it possible for a model to be biased towards the right or left side of the image, in the
sense that it gives a bigger score to defects on the same sample when they are located on the
right side rather than the left, like this?

I have already tried different image sizes and aspect ratios, with no change.
The training dataset includes vertical and horizontal flips, rotation, shear, zoom and brightness augmentation.
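Roughly, the augmentation looks like this (a minimal torchvision sketch; the parameter values here are placeholders, not my real settings):

```python
import torchvision.transforms as T

# Rough sketch of the augmentations listed above.
# All parameter values are placeholders.
augment = T.Compose([
    T.Resize((224, 224)),
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=15),
    T.RandomAffine(degrees=0, shear=10, scale=(0.9, 1.1)),  # shear and zoom
    T.ColorJitter(brightness=0.2),
    T.ToTensor(),
])
```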

The lighting comes from an LED ring, and the camera is located at its centre.

(PaDiM with a resnet18 backbone)

https://www.youtube.com/watch?v=W4-ZtpJtE5c&ab_channel=tektronix475

Thanks a lot!

If you just want to make it more probable to detect anomalies in one part of the image, I guess you could simply apply a transfer function (maybe just linear) to one part of the score_map, i.e. not changing anything in the model or training, but just interpreting the output differently.
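A minimal sketch of what I mean, assuming score_map is a 2-D numpy array of per-pixel anomaly scores and that you want to boost the left half (the split point and the gain factor are arbitrary placeholders):

```python
import numpy as np

def boost_left_half(score_map: np.ndarray, gain: float = 1.2) -> np.ndarray:
    """Linearly rescale the left half of an (H, W) anomaly score map.

    This is purely a post-processing step on the output; the model and
    training are left untouched.
    """
    out = score_map.copy()
    width = out.shape[-1]
    out[..., : width // 2] *= gain  # linear transfer on the left half only
    return out
```

You could of course use any other transfer function instead of a constant gain, or make it a smooth ramp across the image.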

But I'm not sure I understand your problem fully. How many images are in your training set? Do you think there could be a bias there even when using flip and rotation?

Cool project btw. Are you building something or is it just for trying out algorithms?

The dataset has 800 pictures augmented from an initial group of 20 images.
It seems to me that PaDiM gives a higher anomaly score when the defect is located on the right side of the sample.

In the "bad score above 12.5 padim" folder linked below, you can see that the same sample gives a lower score if the black spot is located on the left side of the pasta pic.

Inside that folder, the inferred images are sorted by score from lowest to highest.
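(The sorting itself is nothing fancy, just something like this; the scores and file names here are made up for illustration:)

```python
import numpy as np

# Illustrative per-image anomaly scores and matching file names.
image_scores = np.array([14.1, 12.9, 13.4, 15.2, 12.7])
paths = ["img_00.png", "img_01.png", "img_02.png", "img_03.png", "img_04.png"]

order = np.argsort(image_scores)   # ascending: lowest score first
for idx in order:
    print(f"{image_scores[idx]:8.3f}  {paths[idx]}")
```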

The 5 samples with the lowest scores are the ones with the black stain on the left side.
This does not happen with the anomalous images inferred with PatchCore.

In the "bad score above 2.35 patchcore" folder, those images appear mixed or unsorted, with almost no score difference between the ones with the spot on the right and the ones with it on the left.

That is the way the system is supposed to work.

https://drive.google.com/drive/folders/1_OF6-_MNTskgPd5Sqqtm1-p4WqDu0Jhl?usp=share_link

I am trying to learn something about image anomaly detection by doing these experiments with various algorithms, yours for example.

The conveyor is there to test the performance of the model with a setup closer to a real-life application.

Okay! Yes, that's a little weird. But I think 20 initial images is quite few. My guess is that you would get better results if you used some more.

Right, I will start with a larger number of samples.

Thanks!