WoodwindHu/DTS

beam_id of my other datasets

Closed this issue · 4 comments

Hello, thanks for your outstanding work. I have a question about generating beam id labels for other datasets, such as DAIR-V2X, which uses a 300-beam LiDAR. When I run kitti_beam_id.py on it, it only classifies four or five beams. The key code is as follows:

```python
import os

import numpy as np

def get_beam_id(pc_index, path):
    print(f'processing {pc_index}')
    # Load the KITTI point cloud: N x 4 (x, y, z, intensity).
    pc_kitti = np.fromfile(os.path.join(path, 'velodyne', "%06d.bin" % pc_index),
                           dtype=np.float32).reshape(-1, 4)
    beam_id = np.zeros(pc_kitti.shape[0], dtype=np.float32)
    selected = True
    beam_count = 0
    cloud_size = pc_kitti.shape[0]
    selectcount = 0
    index = [0] * 65
    for i in range(cloud_size):
        beam_id[i] = beam_count
        # Azimuth of the point in degrees.
        angle = np.arctan2(pc_kitti[i, 1], -pc_kitti[i, 0]) / np.pi * 180
        if i == 0:
            last_point_angle = angle
            back_point_angle = angle
        # Ignore the first few points right after a beam change.
        if selected and selectcount < 10:
            selectcount += 1
        else:
            selected = False
        # A large forward jump in azimuth marks the start of the next beam.
        if ((angle - last_point_angle) >= 90) and (not selected) \
                and ((angle - back_point_angle) >= 90):
            beam_count += 1
            index[beam_count] = i
            selected = True
            selectcount = 0

        back_point_angle = last_point_angle
        last_point_angle = angle

    if beam_count < 63:
        for i in range(beam_count, 64):
            index[i] = index[beam_count]
        print("scans less than 64")
    elif beam_count >= 64:
        print("gg, sort failed")
        return
    beam_id.tofile(os.path.join(path, "beam_labels", "%06d.bin" % pc_index))
```

I know KITTI has 64 beams, and I changed the 64 to 300. Is that right? Could you help me?

Hello, different datasets have different organization modes of LiDAR beams. If you want to get the beam id of DAIR-V2X, you'd better check its official documentation.
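As a generic illustration (not the method used in this repo), one approach that works for sensors with an unknown beam layout is to bin each point's elevation angle into `n_beams` equal-width intervals. The helper below, `beam_id_by_elevation`, is a hypothetical sketch; it assumes the beams are roughly evenly spaced in elevation, which may not hold for every sensor:

```python
import numpy as np

def beam_id_by_elevation(points: np.ndarray, n_beams: int) -> np.ndarray:
    """Assign a beam id to each point by binning its elevation angle.

    points: (N, 3+) array whose first three columns are x, y, z.
    n_beams: number of laser beams of the sensor (e.g. 64 for KITTI's HDL-64E).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Elevation angle in degrees above the horizontal plane.
    r = np.sqrt(x ** 2 + y ** 2)
    elevation = np.degrees(np.arctan2(z, r))
    # Split the observed elevation range into n_beams equal-width bins;
    # the small epsilon keeps the maximum elevation inside the last bin.
    edges = np.linspace(elevation.min(), elevation.max() + 1e-6, n_beams + 1)
    beam_id = np.digitize(elevation, edges) - 1
    return np.clip(beam_id, 0, n_beams - 1).astype(np.float32)
```

This avoids relying on the azimuth-wrap ordering that the KITTI script depends on, at the cost of assuming an even elevation spacing; the official DAIR-V2X documentation would be the authoritative source for the real beam layout.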


(screenshot: results table with "oracle", "ours", and "closed gap" entries)
Hello, thank you for your reply. As shown in the picture, I have three questions for you:

  1. Does "oracle 84.8/71.6" in the figure refer to the average of the hard, moderate, and easy modes on KITTI's test set using PointPillars? (BEV/3D)
  2. The "ours 79.5/51.8" in the figure refers to the results obtained by training on nuScenes and testing on KITTI. However, when I tested with the model you published, the actual results were much higher. May I ask if there was a mistake in my operation?
  3. What does "closed gap" mean? I didn't see an explanation in the paper.

I would greatly appreciate it if you could reply to my questions.

Does "oracle 84.8/71.6" in the figure refer to the average of the hard, moderate, and easy modes on KITTI's test set using PointPillars? (BEV/3D)

It refers to the moderate difficulty on KITTI's val set under an IoU threshold of 0.7 over 40 recall positions.

May I ask if there was a mistake in my operation?

As shown in your figure, the performance of "ours" is "80.2/55.7". The public code differs slightly from the experimental code for the sake of cleanliness; this may lead to an increase in performance.

What does "closed gap" mean?

Actually, we have explained it in the paper:
(screenshot of the relevant passage from the paper, dated 2023-07-30)

Thank you. The last one was indeed my negligence. I'm sorry, and thank you again for answering my questions.