image input channels issue
taeyeopl opened this issue · 1 comment
taeyeopl commented
Can I ask why this input-channel error occurs?
RuntimeError: weight of size [64, 3, 7, 7], expected input[32, 256, 192, 3] to have 3 channels, but got 256 channels instead
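The shapes in the message suggest a layout mismatch rather than a wrong number of channels: the weight [64, 3, 7, 7] is the first 7x7 convolution of ResNet-50 and expects NCHW input, while the batch [32, 256, 192, 3] is NHWC (channels last), so PyTorch reads the image height (256) as the channel count. A minimal sketch reproducing the mismatch (the variable names are illustrative, not from the repo):

```python
import torch
import torch.nn as nn

# Conv2d expects NCHW input: [batch, channels, height, width].
conv = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)  # weight: [64, 3, 7, 7]

# An NHWC batch, as produced by stacking cv2/PIL images without
# converting them to CHW tensors first:
x_nhwc = torch.randn(32, 256, 192, 3)

# conv(x_nhwc) raises the same RuntimeError, because dim 1 (256, the
# image height) is interpreted as the channel count.

# Permuting to NCHW makes the shapes line up: [32, 3, 256, 192].
x_nchw = x_nhwc.permute(0, 3, 1, 2).contiguous()
out = conv(x_nchw)
print(out.shape)  # torch.Size([32, 64, 128, 96])
```

The full training log is below.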
(simple) user@user:/sdata1/workspace/simple-pose$ python pose_estimation/train.py --cfg experiments/coco/resnet50/256x192_d256x3_adam_lr1e-3.yaml
/sdata1/workspace/simple-pose/pose_estimation/../lib/core/config.py:161: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
exp_config = edict(yaml.load(f))
=> creating output/coco/pose_resnet_50/256x192_d256x3_adam_lr1e-3
=> creating log/coco/pose_resnet_50/256x192_d256x3_adam_lr1e-3_2020-05-22-22-31
Namespace(cfg='experiments/coco/resnet50/256x192_d256x3_adam_lr1e-3.yaml', frequent=100, gpus=None, workers=None)
{'CUDNN': {'BENCHMARK': True, 'DETERMINISTIC': False, 'ENABLED': True},
'DATASET': {'DATASET': 'coco',
'DATA_FORMAT': 'jpg',
'FLIP': True,
'HYBRID_JOINTS_TYPE': '',
'ROOT': './data/coco/',
'ROT_FACTOR': 40,
'SCALE_FACTOR': 0.3,
'SELECT_DATA': False,
'TEST_SET': 'val2017',
'TRAIN_SET': 'train2017'},
'DATA_DIR': '',
'DEBUG': {'DEBUG': True,
'SAVE_BATCH_IMAGES_GT': True,
'SAVE_BATCH_IMAGES_PRED': True,
'SAVE_HEATMAPS_GT': True,
'SAVE_HEATMAPS_PRED': True},
'GPUS': '0',
'LOG_DIR': 'log',
'LOSS': {'USE_TARGET_WEIGHT': True},
'MODEL': {'EXTRA': {'DECONV_WITH_BIAS': False,
'FINAL_CONV_KERNEL': 1,
'HEATMAP_SIZE': array([48, 64]),
'NUM_DECONV_FILTERS': [256, 256, 256],
'NUM_DECONV_KERNELS': [4, 4, 4],
'NUM_DECONV_LAYERS': 3,
'NUM_LAYERS': 50,
'SIGMA': 2,
'TARGET_TYPE': 'gaussian'},
'IMAGE_SIZE': array([192, 256]),
'INIT_WEIGHTS': True,
'NAME': 'pose_resnet',
'NUM_JOINTS': 17,
'PRETRAINED': 'models/pytorch/imagenet/resnet50-19c8e357.pth',
'STYLE': 'pytorch'},
'OUTPUT_DIR': 'output',
'PRINT_FREQ': 100,
'TEST': {'BATCH_SIZE': 1,
'BBOX_THRE': 1.0,
'COCO_BBOX_FILE': 'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json',
'FLIP_TEST': False,
'IMAGE_THRE': 0.0,
'IN_VIS_THRE': 0.2,
'MODEL_FILE': '',
'NMS_THRE': 1.0,
'OKS_THRE': 0.9,
'POST_PROCESS': True,
'SHIFT_HEATMAP': True,
'USE_GT_BBOX': True},
'TRAIN': {'BATCH_SIZE': 32,
'BEGIN_EPOCH': 0,
'CHECKPOINT': '',
'END_EPOCH': 140,
'GAMMA1': 0.99,
'GAMMA2': 0.0,
'LR': 0.001,
'LR_FACTOR': 0.1,
'LR_STEP': [90, 120],
'MOMENTUM': 0.9,
'NESTEROV': False,
'OPTIMIZER': 'adam',
'RESUME': False,
'SHUFFLE': True,
'WD': 0.0001},
'WORKERS': 4}
=> init deconv weights from normal distribution
=> init 0.weight as normal(0, 0.001)
=> init 0.bias as 0
=> init 1.weight as 1
=> init 1.bias as 0
=> init 3.weight as normal(0, 0.001)
=> init 3.bias as 0
=> init 4.weight as 1
=> init 4.bias as 0
=> init 6.weight as normal(0, 0.001)
=> init 6.bias as 0
=> init 7.weight as 1
=> init 7.bias as 0
=> init final conv weights from normal distribution
=> init 8.weight as normal(0, 0.001)
=> init 8.bias as 0
=> loading pretrained model models/pytorch/imagenet/resnet50-19c8e357.pth
/home/user/anaconda3/envs/simple/lib/python3.6/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='elementwise_mean' instead.
warnings.warn(warning.format(ret))
loading annotations into memory...
Done (t=7.44s)
creating index...
index created!
=> classes: ['__background__', 'person']
=> num_images: 118287
=> load 149813 samples
loading annotations into memory...
Done (t=0.22s)
creating index...
index created!
=> classes: ['__background__', 'person']
=> num_images: 5000
=> load 6352 samples
Traceback (most recent call last):
File "pose_estimation/train.py", line 206, in <module>
main()
File "pose_estimation/train.py", line 174, in main
final_output_dir, tb_log_dir, writer_dict)
File "/sdata1/workspace/simple-pose/pose_estimation/../lib/core/function.py", line 45, in train
output = model(input)
File "/home/user/anaconda3/envs/simple/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/anaconda3/envs/simple/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/user/anaconda3/envs/simple/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/sdata1/workspace/simple-pose/pose_estimation/../lib/models/pose_resnet.py", line 235, in forward
x = self.conv1(x)
File "/home/user/anaconda3/envs/simple/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/anaconda3/envs/simple/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[32, 256, 192, 3] to have 3 channels, but got 256 channels instead
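If the cause is this layout issue, the usual fix is to make sure the dataset converts the HWC image to a CHW float tensor before batching. A hedged sketch, assuming the training script builds the COCO dataset with a torchvision transform (as the original Simple Baselines code does); otherwise the DataLoader collates raw HWC arrays and conv1 sees [N, H, W, C] instead of [N, C, H, W]:

```python
import torchvision.transforms as transforms

# ImageNet normalization, matching the pretrained ResNet-50 weights.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

train_transform = transforms.Compose([
    transforms.ToTensor(),   # HWC uint8 -> CHW float in [0, 1]
    normalize,
])

# Alternatively, permute just before the forward pass
# (names below are illustrative):
# input = input.permute(0, 3, 1, 2).float()
# output = model(input)
```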
qinghuan007 commented
Have you figured out how to solve this problem?