Very stupid question on using models generated from cfgs
Closed this issue · 0 comments
MuXiaoYun commented
I am trying to do a simple classification task with 9 classes, using the xyz coordinates of points. Here's the cfg file I am using:
```yaml
model:
  NAME: BaseCls
  encoder_args:
    NAME: PointNextEncoder
    blocks: [1, 1, 1, 1, 1, 1]
    strides: [1, 2, 2, 2, 2, 1]
    width: 32
    in_channels: 3
    radius: 0.15
    radius_scaling: 1.5
    sa_layers: 2
    sa_use_res: True
    nsample: 32
    expansion: 4
    aggr_args:
      feature_type: 'dp_fj'
      reduction: 'max'
    group_args:
      NAME: 'ballquery'
      normalize_dp: True
    conv_args:
      order: conv-norm-act
    act_args:
      act: 'relu'
    norm_args:
      norm: 'bn'
  cls_args:
    NAME: ClsHead
    num_classes: 9
    mlps: [512, 256]
    norm_args:
      norm: 'bn1d'
```
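For context, this is roughly how I load the cfg and build the model (a minimal sketch; `EasyConfig` and `build_model_from_cfg` are what the PointNeXt examples use, and the cfg path here is just a placeholder):

```python
from openpoints.utils import EasyConfig
from openpoints.models import build_model_from_cfg

# placeholder path to the cfg file shown above
cfg = EasyConfig()
cfg.load('cfgs/my_cls.yaml', recursive=True)

# builds BaseCls with PointNextEncoder + ClsHead from the cfg
model = build_model_from_cfg(cfg.model)
```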
My input shape is [32, 2048, 3], so naturally I was expecting an output of shape [32, 2048, 9] (or [32, 9, 2048]). However, I got [32, 9]. It seems the model treats each [2048, 3] point cloud as a single entry, while I want a prediction for each individual [3] point. I can't figure out how to adjust my args to make that work.
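To make the mismatch concrete, here is roughly what I run (a sketch; the `{'pos', 'x'}` input dict with features shaped [B, C, N] follows the openpoints convention as far as I understand it):

```python
import torch

B, N = 32, 2048
points = torch.rand(B, N, 3)                       # my input: [32, 2048, 3]

data = {'pos': points,                             # coordinates [B, N, 3]
        'x': points.transpose(1, 2).contiguous()}  # features [B, 3, N], matching in_channels: 3
logits = model(data)

print(logits.shape)  # what I get: torch.Size([32, 9])
# what I expected: [32, 2048, 9] or [32, 9, 2048], one prediction per point
```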
I'm sorry that I'm not familiar with this cfg-to-model build process and have to ask this stupid question. Could you please help me?