RobustFieldAutonomyLab/lidar_super_resolution

I want to know your environment

Gironote opened this issue · 9 comments

Thank you for sharing the code.
I would like to know your development environment (Ubuntu, CUDA, cuDNN, Python version, TensorFlow, etc.).

@Jihun-Kim-kmu
The environment is
Ubuntu 16.04
CUDA 10.0
cuDNN 7.6.2
Python 2
TensorFlow 1.13
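
In case it is useful, here is a quick way to confirm a local setup matches the versions above (assuming Python 2 and TensorFlow 1.x are already installed):

```python
# Minimal version check, assuming TensorFlow 1.x is installed.
import sys
import tensorflow as tf

print(sys.version)                  # expect 2.7.x
print(tf.__version__)               # expect 1.13.x
print(tf.test.is_gpu_available())   # True if CUDA 10.0 / cuDNN 7.6.2 are visible to TF
```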

@TixiaoShan
Thank you for your answer.
In addition, is it possible to apply this to data obtained from a VLP-16 LiDAR in the real world?
If so, could I get a PCD file converted from 16 to 64 channels?

@Jihun-Kim-kmu
Yes, you can train a network for "VLP-64" and test it using VLP-16. We showed some results in the paper.
Sorry, I don't have more files to share. I lost the experimental files when I changed jobs. The bags shared here are from a very early backup.

@TixiaoShan
If I train for VLP-64, can I get the output in the form of a PCD or NPY file?

@Jihun-Kim-kmu
Yes, there is a script provided here that can convert the output to a PCL point cloud.
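
For anyone reading along, below is a rough sketch of that kind of conversion, projecting a predicted range image back to x-y-z points. The field-of-view values and file names are assumptions for illustration, not the repository's actual parameters; the provided script should be preferred.

```python
# Hedged sketch of range-image -> point-cloud conversion (not the repository's script).
# The vertical/horizontal FOV values and file names below are assumptions.
import numpy as np

def range_image_to_points(range_img, v_fov=(-15.0, 15.0), h_fov=(-180.0, 180.0)):
    h, w = range_img.shape
    v_angles = np.radians(np.linspace(v_fov[1], v_fov[0], h))                # top row = highest beam
    h_angles = np.radians(np.linspace(h_fov[0], h_fov[1], w, endpoint=False))
    vv, hh = np.meshgrid(v_angles, h_angles, indexing="ij")
    x = range_img * np.cos(vv) * np.cos(hh)
    y = range_img * np.cos(vv) * np.sin(hh)
    z = range_img * np.sin(vv)
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[range_img.reshape(-1) > 0]                                 # drop empty returns

pred = np.load("prediction.npy")                       # hypothetical output file name
img = pred[0, :, :, 0] if pred.ndim == 4 else pred     # take the first sample if batched
points = range_image_to_points(img)
np.savetxt("prediction.xyz", points)                   # ASCII export; convert to PCD with PCL tools
```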

Thank you again.

@TixiaoShan
Sorry, I have one more question. If I want to train for VLP-16 to VLP-64,
do I need both VLP-16 and VLP-64 data (for training) obtained at the same time?

@Jihun-Kim-kmu
You just need the VLP-64 data when gathering. The VLP-16 data can be extracted from the VLP-64 data by keeping 16 of the 64 channels.
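
For illustration, a minimal sketch of that extraction on the range-image arrays; the array layout and output file name are assumptions:

```python
# Hedged sketch: derive a 16-channel input from a 64-channel range image
# by keeping every 4th row (beam). An array layout of (N, 64, W, 1) is assumed.
import numpy as np

high_res = np.load("carla_ouster_range_image.npy")
low_res = high_res[:, ::4, :, :]                     # 64 beams -> 16 beams
np.save("range_image_16_channel.npy", low_res)       # hypothetical output name
print(high_res.shape, "->", low_res.shape)
```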

@TixiaoShan
I have a question about data.py.
My understanding is that running data.py extracts a 16-channel NPY file from carla_ouster_range_image.npy.
However, I can't get the extracted 16-channel NPY.
If my command is wrong, could you tell me the correct one?