[Bug] The mAP metric is too low.
ZhenboZhao77 opened this issue · 4 comments
Prerequisite
- I have searched Issues and Discussions but cannot get the expected help.
- I have read the FAQ documentation but cannot get the expected help.
- The bug has not been fixed in the latest version (master) or latest version (1.x).
Task
I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.
Branch
master branch https://github.com/open-mmlab/mmrotate
Environment
sys.platform: win32
Python: 3.8.19 (default, Mar 20 2024, 19:55:45) [MSC v.1916 64 bit (AMD64)]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 4060 Ti
CUDA_HOME: D:\cuda10.1
NVCC: Not Available
MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.29.30154 for x64
GCC: n/a
PyTorch: 2.0.0
PyTorch compiling details: PyTorch built with:
- C++ Version: 199711
- MSVC 193431937
- Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 2019
- LAPACK is enabled (usually provided by MKL)
- CPU capability usage: AVX2
- CUDA Runtime 11.8
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.7
- Magma 2.5.4
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=C:/cb/pytorch_1000000000000/work/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj /FS -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=OFF, TORCH_VERSION=2.0.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.15.0
OpenCV: 4.10.0
MMEngine: 0.10.4
MMRotate: 1.0.0rc1+fd60bef
Reproduces the problem - code sample
_base_ = [
    '../_base_/datasets/dota.py', '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
angle_version = 'le135'
model = dict(
    type='RefineSingleStageDetector',
    data_preprocessor=dict(
        type='mmdet.DetDataPreprocessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True,
        pad_size_divisor=32,
        boxtype2tensor=False),
    backbone=dict(
        type='LSKNet',
        embed_dims=[32, 64, 160, 256],
        drop_rate=0.1,
        drop_path_rate=0.1,
        depths=[3, 3, 5, 2],
        init_cfg=dict(
            type='Pretrained',
            checkpoint='pretrained/lsk_t_backbone-2ef8a593.pth'),
        norm_cfg=dict(type='SyncBN', requires_grad=True)),
    neck=dict(
        type='mmdet.FPN',
        in_channels=[32, 64, 160, 256],
        out_channels=256,
        start_level=1,
        add_extra_convs='on_input',
        num_outs=5),
    bbox_head_init=dict(
        type='S2AHead',
        num_classes=15,
        in_channels=256,
        stacked_convs=2,
        feat_channels=256,
        anchor_generator=dict(
            type='FakeRotatedAnchorGenerator',
            angle_version=angle_version,
            scales=[4],
            ratios=[1.0],
            strides=[8, 16, 32, 64, 128]),
        bbox_coder=dict(
            type='DeltaXYWHTRBBoxCoder',
            angle_version=angle_version,
            norm_factor=1,
            edge_swap=False,
            proj_xy=True,
            target_means=(.0, .0, .0, .0, .0),
            target_stds=(1.0, 1.0, 1.0, 1.0, 1.0),
            use_box_type=False),
        loss_cls=dict(
            type='mmdet.FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='mmdet.SmoothL1Loss', beta=0.11, loss_weight=1.0)),
    bbox_head_refine=[
        dict(
            type='S2ARefineHead',
            num_classes=15,
            in_channels=256,
            stacked_convs=2,
            feat_channels=256,
            frm_cfg=dict(
                type='AlignConv',
                feat_channels=256,
                kernel_size=3,
                strides=[8, 16, 32, 64, 128]),
            anchor_generator=dict(
                type='PseudoRotatedAnchorGenerator',
                strides=[8, 16, 32, 64, 128]),
            bbox_coder=dict(
                type='DeltaXYWHTRBBoxCoder',
                angle_version=angle_version,
                norm_factor=1,
                edge_swap=False,
                proj_xy=True,
                target_means=(0.0, 0.0, 0.0, 0.0, 0.0),
                target_stds=(1.0, 1.0, 1.0, 1.0, 1.0)),
            loss_cls=dict(
                type='mmdet.FocalLoss',
                use_sigmoid=True,
                gamma=2.0,
                alpha=0.25,
                loss_weight=1.0),
            loss_bbox=dict(
                type='mmdet.SmoothL1Loss', beta=0.11, loss_weight=1.0))
    ],
    train_cfg=dict(
        init=dict(
            assigner=dict(
                type='mmdet.MaxIoUAssigner',
                pos_iou_thr=0.5,
                neg_iou_thr=0.4,
                min_pos_iou=0,
                ignore_iof_thr=-1,
                iou_calculator=dict(type='RBboxOverlaps2D')),
            allowed_border=-1,
            pos_weight=-1,
            debug=False),
        refine=[
            dict(
                assigner=dict(
                    type='mmdet.MaxIoUAssigner',
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.4,
                    min_pos_iou=0,
                    ignore_iof_thr=-1,
                    iou_calculator=dict(type='RBboxOverlaps2D')),
                allowed_border=-1,
                pos_weight=-1,
                debug=False)
        ],
        stage_loss_weights=[1.0]),
    test_cfg=dict(
        nms_pre=2000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(type='nms_rotated', iou_threshold=0.1),
        max_per_img=2000))
optim_wrapper = dict(optimizer=dict(lr=0.005))
Reproduces the problem - command or script
python tools/train.py configs/s2anet/s2anet-le135_r50_fpn_1x_dota_lsknet.py
Reproduces the problem - error message
map
Additional information
No response
Could the author please share the training configurations for RoI Transformer, S2A-Net, and R3Det? I am currently unsure of what to do.
The configurations have been updated (in the README and the config folders). Please make sure you are using 8 GPUs if you use these configs directly. Additionally, the reported mAP is achieved with multi-scale training and testing. It also appears that your mmrotate version is not aligned with ours; please use mmrotate 0.x.
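For anyone reproducing this on fewer than 8 GPUs, below is a minimal sketch of how the learning rate is usually kept consistent with the reference multi-GPU schedule. It assumes MMEngine's automatic LR scaling is available in the installed mmrotate/MMEngine versions, and the base_batch_size of 16 (8 GPUs x 2 images per GPU) is an assumption that should be checked against the config you start from.
# Hypothetical addition to the training config (not taken from the official
# configs): rescale the optimizer LR from the assumed reference total batch
# size of 16 to the batch size that is actually used.
auto_scale_lr = dict(enable=True, base_batch_size=16)

# Manual alternative (linear scaling rule): if the reference LR was tuned for
# a total batch of 16 and training runs with a total batch of N, set
#     optim_wrapper = dict(optimizer=dict(lr=reference_lr * N / 16))
# where reference_lr is the LR from the config being copied.
When 8 GPUs are available, bash tools/dist_train.sh <config> 8 reproduces the reference schedule directly; the multi-scale results additionally rely on splitting the DOTA images at several scales during data preparation, not only on the training pipeline.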
Thank you very much indeed!
👍👍👍👍👍