lvfengkun/DS-SLAM-to-Libtorch

Configuration issue

Opened this issue · 19 comments

Hello! The improvements you made to DS-SLAM look excellent! I'd like to reproduce them; would you mind sharing a contact so I can ask you a few questions? Many thanks.

One question: after setting up rgbd_tum.cc according to your instructions, how should the program be run?

I ran ./Examples/RGB-D/rgbd_tum, and after a few seconds the program crashed with "Segmentation fault (core dumped)".

Here is the full output:
(cuda11+python36) sjm@sjm:~/ql/projects/ORB_SLAM2$ ./Examples/RGB-D/rgbd_tum
####in Viewer constructor
####in PangolinViewer contructor

Welcome !

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
loading duration: 3.52s
Vocabulary loaded!

  • size: 640x480
  • fx: 517.306
  • fy: 516.469
  • cx: 318.643
  • cy: 255.314
  • k1: 0.262383
  • k2: -0.953104
  • k3: 1.16331
  • p1: -0.005358
  • p2: 0.002628
  • bf: 40
  • fps: 30
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1000
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 3.09294
####in PangolinViewer register
Segment is running
cuda support:

Start processing sequence ...
Images in the sequence: 573

####in PangolinViewer run
zoe
ture
zoe1
Load model ...
Wait for new RGB img time =
process start
Processing time = 0.328772 sec
wait for new segment img time =2314.5
in Frame::InitializeClass
New map created with 947 points
Wait for new RGB img time =
process start
Processing time = 0.018475 sec
wait for new segment img time =29.7592
Segmentation fault (core dumped)
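Both runs die on the second frame, right after the segmentation result comes back, which points at the hand-off between the segment thread and the tracker rather than at model loading. A defensive check on the returned mask can narrow this down; this is a purely illustrative sketch, and segMask is an assumed name for the cv::Mat the segment thread produces, not an identifier from the repository:

```cpp
#include <opencv2/core.hpp>
#include <iostream>

// Hypothetical sanity check on the mask handed from the segment thread to
// the tracker; call it right before the second Frame is constructed.
bool CheckSegMask(const cv::Mat &segMask, const cv::Mat &imRGB)
{
    if (segMask.empty()) {
        std::cerr << "segmentation mask is empty" << std::endl;
        return false;
    }
    if (segMask.rows != imRGB.rows || segMask.cols != imRGB.cols) {
        std::cerr << "mask is " << segMask.cols << "x" << segMask.rows
                  << " but the image is " << imRGB.cols << "x" << imRGB.rows << std::endl;
        return false;
    }
    if (segMask.type() != CV_8UC1) {  // a label image is normally single-channel 8-bit
        std::cerr << "unexpected mask type " << segMask.type() << std::endl;
        return false;
    }
    return true;
}
```

If one of these checks trips on frame two but not on frame one, the crash is very likely a size or type mismatch in whatever consumes the mask.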

Thanks for the quick reply, it helps a lot! I'll go back and check the paths on my side.

Hello again, sorry to bother you once more. After more debugging the problem is still not solved. After getting your code, here is exactly what I did:
The same environment runs the original ORB-SLAM2 without errors; the deep-learning environment is CUDA 11.3 + Python 3.6.
1. Installed libtorch and added to the CMakeLists: set(Torch_DIR /home/sjm/ql/lib/libtorch/share/cmake/Torch)
2. Modified rgbd_tum.cc:
string strAssociationFilename = "Examples/RGB-D/associations/fr1_desk.txt";//string(argv[4])
string voc_path="Vocabulary/ORBvoc.txt";
string yaml_path="Examples/RGB-D/TUM1.yaml";
string img_path="date/rgbd_dataset_freiburg1_desk";
string model_path="model.pt";
string pascal_png="pascal.png";
3. Ran ./build.sh; compilation completes at 100%.
4. Ran the command: ./Examples/RGB-D/rgbd_tum
5. Result:
(cuda11+python36) sjm@sjm:~/ql/projects/ORB_SLAM2$ sudo ./Examples/RGB-D/rgbd_tum
####in Viewer constructor
####in PangolinViewer contructor

Welcome !

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
loading duration: 3.46s
Vocabulary loaded!

  • size: 640x480
  • fx: 517.306
  • fy: 516.469
  • cx: 318.643
  • cy: 255.314
  • k1: 0.262383
  • k2: -0.953104
  • k3: 1.16331
  • p1: -0.005358
  • p2: 0.002628
  • bf: 40
  • fps: 30
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1000
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 3.09294
####in PangolinViewer register


Start processing sequence ...
Images in the sequence: 573

Segment is running
####in PangolinViewer runcuda support:
zoe
ture
zoe1
Load model ...
Wait for new RGB img time =
process start
Processing time = 0.339867 sec
wait for new segment img time =2358.09
in Frame::InitializeClass
New map created with 946 points
Wait for new RGB img time =
process start
Processing time = 0.018857 sec
wait for new segment img time =31.0244
Segmentation fault

Replacing every path with an absolute path gives exactly the same error.
If it isn't too much trouble, could you help me sort this out? Thanks again!
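One thing that is cheap to rule out first: all of the strings above are relative paths, so the binary only resolves them when started from the ORB_SLAM2 root, and a wrong model_path makes torch::jit::load throw rather than print anything useful. A minimal, standalone sketch that only exercises the model load (torch::jit::load and c10::Error are real libtorch API; everything else here is just scaffolding for the test):

```cpp
#include <torch/script.h>
#include <iostream>
#include <string>

int main()
{
    const std::string model_path = "model.pt";  // same relative path used in rgbd_tum.cc

    try {
        torch::jit::script::Module module = torch::jit::load(model_path);
        std::cout << "Loaded " << model_path << " successfully" << std::endl;
    } catch (const c10::Error &e) {
        // A missing or incompatible .pt file ends up here instead of segfaulting.
        std::cerr << "Failed to load " << model_path << ": " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
```

Since your log shows the model loading and the first frame tracking, a bad path is unlikely to be the cause here, but the check costs little and removes one variable.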

After working on it for a long time the problem is still there. Would you mind leaving a contact? Or here is mine, QQ: 623864216.

Hello! Could you share the deep-learning (semantic segmentation) part of the code?

Thank you for answering so many times! Yes, I mean the part where the network is trained for libtorch. One more question: does the weight file you provide have the network's forward pass compiled into it as well? I noticed your code calls forward directly on the loaded weight file.
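On the forward question: a .pt exported with torch.jit.trace or torch.jit.script carries the serialized forward graph inside the file, which is why the C++ side can call forward on the loaded module without ever defining the network class. A minimal sketch of that pattern, assuming the exported model takes a single 1x3xHxW float tensor and returns per-class scores (both assumptions, since the actual export is not shown in this thread):

```cpp
#include <torch/script.h>
#include <iostream>
#include <vector>

int main()
{
    // The serialized module already contains the forward graph; no network
    // definition is required on the C++ side.
    torch::jit::script::Module module = torch::jit::load("model.pt");
    module.eval();

    // Assumed input: one 3-channel 480x640 image as normalized floats.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::rand({1, 3, 480, 640}));

    // Run the stored forward pass and reduce the scores to per-pixel labels.
    torch::Tensor scores = module.forward(inputs).toTensor(); // e.g. 1 x C x H x W
    torch::Tensor labels = scores.argmax(1);                  // 1 x H x W class indices

    std::cout << "label tensor sizes: " << labels.sizes() << std::endl;
    return 0;
}
```

If the exported module already applies the argmax internally, the extra argmax here is redundant; it depends entirely on how the weight file was produced.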

Hello, sorry to bother you again. For the segmentation thread, which dataset did you use, and did you change the segmentation classes? When I run with your weight file, only the person class gets segmented, but with weights I trained myself, chairs and monitors are segmented as well. Could you clarify? Thanks.
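For reference, in the standard PASCAL VOC labeling (which the pascal.png palette suggests) person is class 15, chair is 9 and tvmonitor is 20. If the released weights or the post-processing keep only the person index, chairs and monitors disappear even when the network detects them, while retrained weights without that filter bring them back. A hedged sketch of such a filter; the class indices are standard PASCAL VOC, but labelImg and the idea that the repository filters this way are assumptions:

```cpp
#include <opencv2/core.hpp>

// Hypothetical post-processing: keep only "person" pixels from a
// PASCAL-VOC-style label image (background = 0, chair = 9, person = 15,
// tvmonitor = 20). labelImg is assumed to be a CV_8UC1 class-index map.
cv::Mat PersonOnlyMask(const cv::Mat &labelImg)
{
    const int kPersonClass = 15;
    cv::Mat mask;
    cv::compare(labelImg, kPersonClass, mask, cv::CMP_EQ); // 255 where person, 0 elsewhere
    return mask;
}
```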

Thanks for the reply. Would it be possible to share the dataset? If they are photos you took yourself and can't be shared, no need.

Hello, I've hit the same problem; it happens at the second frame. How should I solve it? Many thanks.

####in Viewer constructor
####in PangolinViewer contructor

Welcome !

Input sensor was set to: RGB-D

Loading ORB Vocabulary. This could take a while...
loading duration: 4.67s
Vocabulary loaded!

  • size: 640x480
  • fx: 517.306
  • fy: 516.469
  • cx: 318.643
  • cy: 255.314
  • k1: 0.262383
  • k2: -0.953104
  • k3: 1.16331
  • p1: -0.005358
  • p2: 0.002628
  • bf: 40
  • fps: 30
  • color order: RGB (ignored if grayscale)

ORB Extractor Parameters:

  • Number of Features: 1000
  • Scale Levels: 8
  • Scale Factor: 1.2
  • Initial Fast Threshold: 20
  • Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 3.09294
Segment is running
####in PangolinViewer register
cuda support:

Start processing sequence ...
Images in the sequence: 228

####in PangolinViewer run
zoe
ture
zoe1
Load model ...
Wait for new RGB img time =
process start
Processing time = 1.29933 sec
wait for new segment img time =2195.78

That problem is solved; now it runs two or three frames and then gets a segmentation fault while tracking the keyframe.