RodrigoGantier/Mask_R_CNN_Keypoints

training problem

nebuladream opened this issue · 8 comments

I ran your inference phase successfully, but when I try to fine-tune your model with the AIChallenger data, it doesn't seem right: after some epochs the keypoints disappear. In your main.py you train the "heads" layers, which may not include mask_class_loss_graph — could this be causing the issue? Can you explain how to train the net?

@nebuladream Actually I trained the neural network with the published code, using mini_mask. If you use utils.minimize_mask_2 to crop and resize the keypoints, they should not "disappear": when a keypoint is missing (not present in the picture), the code assigns (0, 0) = 1.
You can check — the relevant code is:

```python
image, image_meta, gt_bbox, gt_mask = modellib.load_image_gt(
    dataset_val, config, image_id, use_mini_mask=True, augment=False)
buffer_mask[i] = utils.minimize_mask_2(
    bbox[i, :5].reshape([1, 5]), mask[i], config.MINI_MASK_SHAPE)
```

```python
if m.sum() == 0:
    mini_mask[0, 0, i] = 1
```
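To make the "missing keypoint" convention concrete, here is a minimal, hypothetical sketch of what a minimize_mask_2-style helper does, based only on the behaviour described above (the function name and coordinate mapping are assumptions, not the repo's actual implementation):

```python
import numpy as np

def minimize_keypoint_mask(bbox, kp_mask, mini_shape):
    """Crop a one-hot keypoint mask (H, W, K) to its box and resize it.

    Hypothetical sketch: each channel is a one-hot keypoint map; if the
    keypoint is absent from the crop (the channel sums to zero), the
    top-left pixel (0, 0) is set to 1 as a "missing" marker.
    """
    y1, x1, y2, x2 = bbox[:4]
    n_kp = kp_mask.shape[-1]
    mini = np.zeros(tuple(mini_shape) + (n_kp,), dtype=kp_mask.dtype)
    for i in range(n_kp):
        m = kp_mask[y1:y2, x1:x2, i]
        if m.sum() == 0:
            mini[0, 0, i] = 1  # keypoint missing in this crop
        else:
            ys, xs = np.nonzero(m)
            # map the single keypoint into mini-mask coordinates
            my = int(ys[0] * mini_shape[0] / max(m.shape[0], 1))
            mx = int(xs[0] * mini_shape[1] / max(m.shape[1], 1))
            mini[my, mx, i] = 1
    return mini
```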

Actually, I don't know why the performance is not as good as in the paper.

I trained the model from COCO pretrained weights, and found the mrcnn_mask_loss branch may not converge. The loss looks like this — do you know why?

(screenshot: loss curves, 2018-01-16)

I've encountered the same problem. The keypoint mask branch is really hard to converge. Did we miss something important in the config parameters?

@minizon @nebuladream Did you finally fix this issue? My loss can't converge either. It confuses me a lot.

@minizon You can refer to my repository https://github.com/Superlee506/Mask_RCNN. I referred to the original Detectron project and modified the code with detailed comments. The loss converges quickly, but there is still much room for improvement.
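The "space encoding" credited below can be illustrated with a small, assumed sketch (not code from either repository): instead of carrying a full H×W one-hot map per keypoint, each target is collapsed to a single flat index, which is what a Detectron-style sparse softmax cross-entropy keypoint loss consumes and is far lighter on GPU memory:

```python
import numpy as np

def keypoint_mask_to_indices(kp_mask):
    """Convert a one-hot keypoint mask (H, W, K) to K flat class indices.

    Hypothetical illustration: each keypoint target becomes one integer
    in [0, H*W), suitable as the label for a sparse softmax
    cross-entropy loss over the flattened heatmap.
    """
    h, w, k = kp_mask.shape
    flat = kp_mask.reshape(h * w, k)   # (H*W, K), row-major
    return flat.argmax(axis=0)         # (K,) — one index per keypoint
```

For a 56×56 heatmap with 17 keypoints, this stores 17 integers instead of 56·56·17 floats per instance.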

@Superlee506 Thank you for sharing your code. I've realized that I made a mistake in the horizontal flip augmentation: I did not swap the keypoint left/right labels when I mirrored the persons. Since I removed this augmentation, both the training and validation losses converge, though there's still a larger margin to the theoretical value compared to the other branches' losses. By the way, your space encoding is more efficient in GPU memory use.
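The left/right label mistake described above can be avoided by swapping paired keypoint indices whenever the image is mirrored. A minimal sketch, assuming COCO-style keypoint ordering (the pair table is an assumption — adjust it to your dataset's label order):

```python
import numpy as np

# Hypothetical left/right keypoint index pairs (COCO-style ordering:
# 1/2 eyes, 3/4 ears, 5/6 shoulders, ... 15/16 ankles).
FLIP_PAIRS = [(1, 2), (3, 4), (5, 6), (7, 8),
              (9, 10), (11, 12), (13, 14), (15, 16)]

def flip_keypoints(kps, image_width):
    """Horizontally flip (N, K, 3) keypoints [x, y, visibility], swapping
    left/right labels so e.g. 'left eye' stays on the person's left."""
    flipped = kps.copy()
    flipped[..., 0] = image_width - 1 - flipped[..., 0]  # mirror x
    for a, b in FLIP_PAIRS:
        flipped[:, [a, b], :] = flipped[:, [b, a], :]    # swap labels
    return flipped
```

Without the swap, a mirrored "left shoulder" is still labeled left while sitting on the person's right, which teaches the network to confuse symmetric points.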

@nebuladream Yes, I also noticed this problem. Can your model distinguish the symmetrical left/right keypoints? My model often predicts these points on top of each other.