unsky/focal-loss

test on pascal voc dataset?

zimenglan-sysu-512 opened this issue · 7 comments

hi @unsky,

have you tested focal loss on the PASCAL VOC dataset? Also, can you share your parameters, e.g. the hyperparameters in solver.prototxt and some of the RPN and Fast R-CNN parameters?

thanks.

unsky commented

@zimenglan-sysu-512 here are my MXNet (Faster R-CNN) parameters:

    MXNET_VERSION: "mxnet"
    output_path: "./output/rcnn/kitti"
    symbol: resnet_v1_101_rcnn_dcn
    gpus: '1,2,3,4,5,6,7'
    CLASS_AGNOSTIC: false
    SCALES:
    - 740
    - 2448
    default:
      frequent: 100
      kvstore: device
    network:
      pretrained: "./model/pretrained_model/resnet_v1_101"
      pretrained_epoch: 0
      PIXEL_MEANS:
      - 103.06
      - 115.90
      - 123.15
      IMAGE_STRIDE: 0
      RCNN_FEAT_STRIDE: 16
      RPN_FEAT_STRIDE: 16
      FIXED_PARAMS:
      - conv1
      - bn_conv1
      - res2
      - bn2
      - gamma
      - beta
      FIXED_PARAMS_SHARED:
      - conv1
      - bn_conv1
      - res2
      - bn2
      - res3
      - bn3
      - res4
      - bn4
      - gamma
      - beta
      ANCHOR_RATIOS:
      - 0.25
      - 0.5
      - 0.75
      - 1
      - 1.25
      - 1.5
      - 1.75
      ANCHOR_SCALES:
      - 4
      - 8
      - 16
      - 32
      - 64
      - 128
      - 256
      NUM_ANCHORS: 49
    dataset:
      NUM_CLASSES: 10
      dataset: kitti
      dataset_path: "./data/kitti"
      image_set: train
      root_path: "./data"
      test_image_set: test
      proposal: rpn
    TRAIN:
      lr: 0.0001
      lr_step: '4.83'
      warmup: false
      warmup_lr: 0.005
      # typically we will use 4000 warmup steps for single GPU on VOC
      warmup_step: 100
      begin_epoch: 0
      end_epoch: 80
      model_prefix: 'rcnn_kitti'
      # whether to resume training
      RESUME: false
      # whether to flip images
      FLIP: false
      # whether to shuffle images
      SHUFFLE: true
      # whether to use OHEM
      ENABLE_OHEM: false
      ENABLE_FOCALLOSS: true
      # size of images for each device, 2 for rcnn, 1 for rpn and e2e
      BATCH_IMAGES: 1
      # e2e changes behavior of anchor loader and metric
      END2END: true
      # group images with similar aspect ratio
      ASPECT_GROUPING: true
      # R-CNN
      # rcnn rois batch size
      BATCH_ROIS: 128
      BATCH_ROIS_OHEM: 128
      # rcnn rois sampling params
      FG_FRACTION: 0.25
      FG_THRESH: 0.5
      BG_THRESH_HI: 0.5
      BG_THRESH_LO: 0.1
      # rcnn bounding box regression params
      BBOX_REGRESSION_THRESH: 0.5
      BBOX_WEIGHTS:
      - 1.0
      - 1.0
      - 1.0
      - 1.0
      # RPN anchor loader
      # rpn anchors batch size
      RPN_BATCH_SIZE: 256
      # rpn anchors sampling params
      RPN_FG_FRACTION: 0.5
      RPN_POSITIVE_OVERLAP: 0.7
      RPN_NEGATIVE_OVERLAP: 0.3
      RPN_CLOBBER_POSITIVES: false
      # rpn bounding box regression params
      RPN_BBOX_WEIGHTS:
      - 1.0
      - 1.0
      - 1.0
      - 1.0
      RPN_POSITIVE_WEIGHT: -1.0
      # used for end2end training
      # RPN proposal
      CXX_PROPOSAL: false
      RPN_NMS_THRESH: 0.7
      RPN_PRE_NMS_TOP_N: 6000
      RPN_POST_NMS_TOP_N: 300
      RPN_MIN_SIZE: 0
      # approximate bounding box regression
      BBOX_NORMALIZATION_PRECOMPUTED: true
      BBOX_MEANS:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      BBOX_STDS:
      - 0.1
      - 0.1
      - 0.2
      - 0.2
    TEST:
      # use rpn to generate proposal
      HAS_RPN: true
      # size of images for each device
      BATCH_IMAGES: 1
      # RPN proposal
      CXX_PROPOSAL: false
      RPN_NMS_THRESH: 0.7
      RPN_PRE_NMS_TOP_N: 6000
      RPN_POST_NMS_TOP_N: 300
      RPN_MIN_SIZE: 0
      # RPN generate proposal
      PROPOSAL_NMS_THRESH: 0.7
      PROPOSAL_PRE_NMS_TOP_N: 20000
      PROPOSAL_POST_NMS_TOP_N: 2000
      PROPOSAL_MIN_SIZE: 0
      # RCNN nms
      NMS: 0.75
      test_epoch: 73
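
One detail worth noting in the config above: NUM_ANCHORS follows directly from the anchor lists, since the RPN places one anchor per (ratio, scale) pair at each feature-map location. A quick Python sanity check (the snippet is mine, not from the repo):

```python
# NUM_ANCHORS in the config above is the number of (ratio, scale) pairs
# used at every RPN feature-map location.
ANCHOR_RATIOS = [0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75]
ANCHOR_SCALES = [4, 8, 16, 32, 64, 128, 256]

num_anchors = len(ANCHOR_RATIOS) * len(ANCHOR_SCALES)
print(num_anchors)  # 49, matching NUM_ANCHORS: 49
```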

hi @unsky,

thanks for sharing your hyper-parameters.

hi @unsky,
can you share the KITTI (10-class) dataset?

I have trained focal loss with deformable conv several times, and I found that the mAP is lower than the original.

  • lr = 0.0005, warmup_lr = 0.00005
  • AP@0.5 = 0.8002, AP@0.7 = 0.6792

This is the performance on the VOC 2007 test set, trained on the 2007+2012 trainval data.
unsky commented

@iFighting focal loss is a method for handling imbalanced examples; it is not better in a balanced setting. For your level of imbalance, you must choose a suitable alpha and gamma.
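
For context on what alpha and gamma control, here is a minimal NumPy sketch of the focal loss from Lin et al. (illustrative only, not the layer implemented in this repo; the function name and defaults are my own choices):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss for binary labels y in {0, 1} and predicted foreground
    probabilities p, following Lin et al.:
    FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)             # avoid log(0)
    p_t = np.where(y == 1, p, 1.0 - p)           # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# an easy example (p close to its label) is heavily down-weighted,
# while a hard example keeps most of its cross-entropy weight
print(focal_loss(np.array([0.95, 0.30]), np.array([1, 1])))
```

gamma sets how strongly easy, well-classified examples are down-weighted and alpha re-weights foreground against background, which is why the pair that works best depends on how imbalanced the data is; with gamma = 0 and alpha = 0.5 the loss reduces to half the ordinary cross-entropy.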

@unsky actually, I have tried several different values of alpha and gamma.