huanglianghua/GlobalTrack

RuntimeError: cuda runtime error (30) : unknown error at mmdet/ops/roi_align/src/roi_align_kernel.cu:145

1071189147 opened this issue · 1 comment

My GPU is a 2080 Ti, my CUDA version is 9.0, PyTorch is 1.1.0, and torchvision is 0.3.0, but I get the following error. I have tried many things without solving the problem, so I hope to get help from the author and other friends.
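For reference, this is the environment check I run before rebuilding the CUDA ops. It is only a minimal sketch using standard PyTorch calls, nothing GlobalTrack-specific; the expected values in the comments reflect the setup described above.

```python
# Minimal environment check (plain PyTorch, nothing GlobalTrack-specific).
import torch

print(torch.__version__)                    # 1.1.0 in this setup
print(torch.version.cuda)                   # CUDA version PyTorch was built against
print(torch.cuda.is_available())            # should be True
print(torch.cuda.get_device_name(0))        # should report the 2080 Ti
print(torch.cuda.get_device_capability(0))  # (7, 5) for a 2080 Ti (Turing)
print(torch.backends.cudnn.version())       # cuDNN build found at runtime
```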

Args:
-- Namespace(autoscale_lr=True, base_dataset='got10k_train', base_transforms='extra_partial', config='configs/qg_rcnn_r50_fpn.py', fp16=False, gpus=2, launcher='none', load_from=None, local_rank=0, resume_from=None, sampling_prob='0.4,0.4,0.2', seed=None, validate=False, work_dir='work_dirs/qg_rcnn_r50_fpn', workers=None)
Configs:
-- Config (path: /media/hdc/data4/wxl/GlobalTrack/configs/qg_rcnn_r50_fpn.py): {'model': {'type': 'QG_RCNN', 'pretrained': 'torchvision://resnet50', 'backbone': {'type': 'ResNet', 'depth': 50, 'num_stages': 4, 'out_indices': (0, 1, 2, 3), 'frozen_stages': 1, 'style': 'pytorch'}, 'neck': {'type': 'FPN', 'in_channels': [256, 512, 1024, 2048], 'out_channels': 256, 'num_outs': 5}, 'rpn_head': {'type': 'RPNHead', 'in_channels': 256, 'feat_channels': 256, 'anchor_scales': [8], 'anchor_ratios': [0.5, 1.0, 2.0], 'anchor_strides': [4, 8, 16, 32, 64], 'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [1.0, 1.0, 1.0, 1.0], 'loss_cls': {'type': 'CrossEntropyLoss', 'use_sigmoid': True, 'loss_weight': 1.0}, 'loss_bbox': {'type': 'SmoothL1Loss', 'beta': 0.1111111111111111, 'loss_weight': 1.0}}, 'bbox_roi_extractor': {'type': 'SingleRoIExtractor', 'roi_layer': {'type': 'RoIAlign', 'out_size': 7, 'sample_num': 2}, 'out_channels': 256, 'featmap_strides': [4, 8, 16, 32]}, 'bbox_head': {'type': 'SharedFCBBoxHead', 'num_fcs': 2, 'in_channels': 256, 'fc_out_channels': 1024, 'roi_feat_size': 7, 'num_classes': 2, 'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [0.1, 0.1, 0.2, 0.2], 'reg_class_agnostic': False, 'loss_cls': {'type': 'CrossEntropyLoss', 'use_sigmoid': False, 'loss_weight': 1.0}, 'loss_bbox': {'type': 'SmoothL1Loss', 'beta': 1.0, 'loss_weight': 1.0}}}, 'train_cfg': {'rpn': {'assigner': {'type': 'MaxIoUAssigner', 'pos_iou_thr': 0.7, 'neg_iou_thr': 0.3, 'min_pos_iou': 0.3, 'ignore_iof_thr': -1}, 'sampler': {'type': 'RandomSampler', 'num': 256, 'pos_fraction': 0.5, 'neg_pos_ub': -1, 'add_gt_as_proposals': False}, 'allowed_border': 0, 'pos_weight': -1, 'debug': False}, 'rpn_proposal': {'nms_across_levels': False, 'nms_pre': 2000, 'nms_post': 2000, 'max_num': 2000, 'nms_thr': 0.7, 'min_bbox_size': 0}, 'rcnn': {'assigner': {'type': 'MaxIoUAssigner', 'pos_iou_thr': 0.5, 'neg_iou_thr': 0.5, 'min_pos_iou': 0.5, 'ignore_iof_thr': -1}, 'sampler': {'type': 'RandomSampler', 'num': 512, 'pos_fraction': 0.25, 'neg_pos_ub': -1, 'add_gt_as_proposals': True}, 'pos_weight': -1, 'debug': False}}, 'test_cfg': {'rpn': {'nms_across_levels': False, 'nms_pre': 1000, 'nms_post': 1000, 'max_num': 1000, 'nms_thr': 0.7, 'min_bbox_size': 0}, 'rcnn': {'score_thr': 0.0, 'nms': {'type': 'nms', 'iou_thr': 0.5}, 'max_per_img': 1000}}, 'data': {'imgs_per_gpu': 1, 'workers_per_gpu': 4, 'train': {'type': 'PairWrapper', 'ann_file': None, 'base_dataset': 'got10k_train', 'base_transforms': 'extra_partial', 'sampling_prob': [0.4, 0.4, 0.2], 'max_size': 30000, 'max_instances': 8, 'with_label': True}}, 'optimizer': {'type': 'SGD', 'lr': 0.0025, 'momentum': 0.9, 'weight_decay': 0.0001}, 'optimizer_config': {'grad_clip': {'max_norm': 35, 'norm_type': 2}}, 'lr_config': {'policy': 'step', 'warmup': 'linear', 'warmup_iters': 500, 'warmup_ratio': 0.3333333333333333, 'step': [8, 11]}, 'checkpoint_config': {'interval': 1}, 'log_config': {'interval': 50, 'hooks': [{'type': 'TextLoggerHook'}]}, 'total_epochs': 12, 'cudnn_benchmark': True, 'dist_params': {'backend': 'nccl'}, 'log_level': 'INFO', 'work_dir': 'work_dirs/qg_rcnn_r50_fpn', 'load_from': 'checkpoints/qg_rcnn_r50_fpn_2x_20181010-443129e1.pth', 'resume_from': None, 'workflow': [('train', 1)], 'gpus': 2}
2021-02-10 12:04:44,722 - INFO - Distributed training: False
2021-02-10 12:04:45,171 - INFO - load model from: torchvision://resnet50
2021-02-10 12:04:45,366 - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

2021-02-10 12:04:50,101 - INFO - load checkpoint from checkpoints/qg_rcnn_r50_fpn_2x_20181010-443129e1.pth
2021-02-10 12:04:50,431 - INFO - Start running, host: root@hdc-IBM, work_dir: /media/hdc/data4/wxl/GlobalTrack/work_dirs/qg_rcnn_r50_fpn
2021-02-10 12:04:50,432 - INFO - workflow: [('train', 1)], max: 12 epochs
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=383 error=11 : invalid argument
Traceback (most recent call last):

File "/media/hdc/data4/wxl/GlobalTrack/tools/train_qg_rcnn.py", line 143, in
main()
File "/media/hdc/data4/wxl/GlobalTrack/tools/train_qg_rcnn.py", line 138, in main
logger=logger)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/apis/train.py", line 62, in train_detector
_non_dist_train(model, dataset, cfg, validate=validate)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/apis/train.py", line 229, in _non_dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/mmcv/runner/runner.py", line 358, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/mmcv/runner/runner.py", line 264, in train
self.model, data_batch, train_mode=True, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/apis/train.py", line 38, in batch_processor
losses = model(**data)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func
return old_func(*args, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/modules/qg_rcnn.py", line 58, in forward
img_z, img_x, img_meta_z, img_meta_x, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/modules/qg_rcnn.py", line 91, in forward_train
for x_ij, i, j in self.rpn_modulator(z, x, gt_bboxes_z):
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/modules/modulators.py", line 37, in forward
modulator=self.learn(feats_z, gt_bboxes_z))
File "/media/hdc/data4/wxl/GlobalTrack/modules/modulators.py", line 54, in learn
feats_z[:self.roi_extractor.num_inputs], rois)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/core/fp16/decorators.py", line 127, in new_func
return old_func(*args, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/submodules/mmdetection/mmdet/models/roi_extractors/single_level.py", line 105, in forward
roi_feats_t = self.roi_layers[i](feats[i], rois_)
File "/home/hdc/anaconda3/envs/GlobalTrack_private/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/ops/roi_align/roi_align.py", line 80, in forward
self.sample_num)
File "/media/hdc/data4/wxl/GlobalTrack/_submodules/mmdetection/mmdet/ops/roi_align/roi_align.py", line 26, in forward
sample_num, output)
RuntimeError: cuda runtime error (30) : unknown error at mmdet/ops/roi_align/src/roi_align_kernel.cu:145

After rebuilding four times, it now works, for no apparent reason.
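In case it helps anyone else hitting this, a quick way to confirm a rebuilt extension actually runs on the GPU is to call the op directly on dummy data. This is only a sketch: it assumes the bundled mmdetection exposes RoIAlign with the mmdetection v1.x-style signature (out_size, spatial_scale, sample_num), i.e. the same module that appears in the traceback above, and that rois are (batch_idx, x1, y1, x2, y2) boxes; adjust if your copy differs.

```python
# Smoke test for the compiled CUDA RoIAlign op after rebuilding.
# Assumed v1.x-style API: RoIAlign(out_size, spatial_scale, sample_num),
# rois as [batch_idx, x1, y1, x2, y2]; not part of the GlobalTrack code base.
import torch
from mmdet.ops.roi_align import RoIAlign

feats = torch.randn(1, 256, 32, 32).cuda()            # dummy feature map (one FPN level)
rois = torch.tensor([[0., 4., 4., 20., 20.]]).cuda()  # one box on image 0
roi_align = RoIAlign(7, 0.25, 2)                      # out_size, spatial_scale, sample_num
out = roi_align(feats, rois)
print(out.shape)  # expected torch.Size([1, 256, 7, 7]) if the kernel runs
```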