gradslam/gradslam

Using realsense data

Closed this issue · 5 comments

Hi

I am trying to use gradslam on RealSense data. I have saved a depth folder, an rgb folder, and both timestamp files (rgb and depth) obtained from the RealSense.
I tried using the TUM loader, but got some errors. To figure out what I am doing wrong, I took TUM data that I had downloaded (rgbd_dataset_freiburg1_xyz), which gradslam ran successfully with, truncated it to 10 rgb and 10 depth images, and cleaned up the timestamp files and groundtruth file to reflect 10 samples only, but then got the error (shown below) when loading the data.
I would appreciate your help in figuring out what I am doing wrong, both for the truncated TUM data and the RealSense data.
Thanks

# load dataset

dataset = TUM(data_path, sequences="C:/Users/cv/Documents/gradslam_data/TUM/seq", seqlen=5)
loader = DataLoader(dataset=dataset, batch_size=1)
colors, depths, intrinsics, poses, *_ = next(iter(loader))

# create rgbdimages object

rgbdimages = RGBDImages(colors, depths, intrinsics, poses)
rgbdimages.plotly(0).update_layout(autosize=False, height=600, width=400).show()

----errors

StopIteration Traceback (most recent call last)
in
2 dataset = TUM(data_path, sequences="C:/Users/cv/Documents/gradslam_data/TUM/seq", seqlen=5)
3 loader = DataLoader(dataset=dataset, batch_size=1)
----> 4 colors, depths, intrinsics, poses, *_ = next(iter(loader))
5
6 # create rgbdimages object

~\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
361
362 def __next__(self):
--> 363 data = self._next_data()
364 self._num_yielded += 1
365 if self._dataset_kind == _DatasetKind.Iterable and \

~\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
400
401 def _next_data(self):
--> 402 index = self._next_index() # may raise StopIteration
403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
404 if self._pin_memory:

~\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py in _next_index(self)
355
356 def _next_index(self):
--> 357 return next(self._sampler_iter) # may raise StopIteration
358
359 def _next_data(self):

StopIteration:

@nawara72
TL;DR: You might be seeing this error because the number of frames in your dataset is less than seqlen. You might want to verify that the timestamps in rgb.txt, depth.txt, and groundtruth.txt are close to one another.

In the TUM loader, the rgb frames, depth frames, and poses are asynchronous, and the timestamps in rgb.txt, depth.txt, and groundtruth.txt are used to associate them (see here). If you use only the first 10 entries from rgb.txt, depth.txt, and groundtruth.txt, the timestamps may not match one another, and you can end up with fewer associated frames than the specified seqlen.
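The association step above can be sketched as follows. This is a simplified illustration (the actual TUM loader also matches groundtruth poses, and the function name, tolerance, and greedy strategy here are my own), but it shows why truncating the files can drop pairs whose timestamps are too far apart:

```python
import bisect

def associate(rgb_stamps, depth_stamps, max_dt=0.02):
    """Match each rgb timestamp to the nearest depth timestamp.

    Pairs further apart than max_dt seconds are dropped, which is why
    a truncated dataset can end up with fewer associated frames than
    the requested seqlen.
    """
    depth_sorted = sorted(depth_stamps)
    matches = []
    for t in rgb_stamps:
        # candidates: the depth stamps immediately before and after t
        i = bisect.bisect_left(depth_sorted, t)
        candidates = depth_sorted[max(0, i - 1):i + 1]
        if not candidates:
            continue
        best = min(candidates, key=lambda d: abs(d - t))
        if abs(best - t) <= max_dt:
            matches.append((t, best))
    return matches

# Only the first rgb frame has a depth frame within 20 ms of it:
print(associate([0.0, 1.0, 2.0], [0.01, 1.5]))  # [(0.0, 0.01)]
```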

Hi
I was able to get gradslam working with RealSense data, but I can't seem to process more than 30 frames.
If I try 40 frames, e.g.
dataset = RS(data_path, sequences="C:/Users/cv/Documents/gradslam_data/TUM/seq", seqlen=40)
I get
RuntimeError: CUDA error: out of memory
when running pointclouds, recovered_poses = slam(rgbdimages)

Any idea how to avoid this?
Thanks again for all your help

Are you running with autograd enabled? If so, gradslam will try to maintain a computation graph and you'll eventually run out of memory. You can work around this a little by decreasing the image resolution and skipping a few frames in between (but at some point, depending on your GPU memory, this error will recur).
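For reference, disabling autograd for an inference-only run is just a matter of wrapping the computation in `torch.no_grad()`, so no graph is recorded. The `run_inference` helper below is my own illustration (you would pass the `slam` object and `rgbdimages` from the snippets in this thread); the demonstration uses a plain tensor op:

```python
import torch

def run_inference(fn, *args):
    """Run fn without recording a computation graph (no autograd history)."""
    with torch.no_grad():
        return fn(*args)

# Demonstration: the result of an op run under no_grad does not track gradients.
x = torch.ones(3, requires_grad=True)
y = run_inference(lambda t: t * 2, x)
print(y.requires_grad)  # False

# For the SLAM case, the call would look like:
#   pointclouds, recovered_poses = run_inference(slam, rgbdimages)
```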

I tried with stride=5, but got the same error.
This is what I am running:

# load dataset

dataset = RS(data_path, sequences="C:/Users/cv/Documents/gradslam_data/TUM/seq", stride=5, seqlen=40)
#dataset = RS(data_path, sequences="C:/Users/cv/Documents/gradslam_data/TUM/seq", seqlen=40)
loader = DataLoader(dataset=dataset, batch_size=1)
colors, depths, intrinsics, poses, *_ = next(iter(loader))

# create rgbdimages object

rgbdimages = RGBDImages(colors, depths, intrinsics, poses)
rgbdimages.plotly(0).update_layout(autosize=False, height=600, width=400).show()
odometry = "gradicp" # "gt", "icp", "gradicp"

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
#device = torch.device("cpu")
slam = PointFusion(odom=odometry, dsratio=4, device=device)
pointclouds, recovered_poses = slam(rgbdimages)

My image resolution is 640 by 480. I am not sure how to enable/disable autograd. Excuse my ignorance; I am a newbie in Python.
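On the resolution suggestion: a quick way to downscale the loaded tensors before building the RGBDImages object is `torch.nn.functional.interpolate`. The sketch below assumes the channels-last `(batch, seq, H, W, C)` layout that the gradslam loaders return; the function name and `factor` parameter are mine, and remember that the intrinsics (fx, fy, cx, cy) must be divided by the same factor:

```python
import torch
import torch.nn.functional as F

def downscale_rgbd(colors, depths, factor=2):
    """Reduce the spatial resolution of RGB-D tensors by `factor`.

    colors: (batch, seq, H, W, 3) float tensor
    depths: (batch, seq, H, W, 1) float tensor
    Returns downscaled tensors in the same channels-last layout.
    """
    b, s, h, w, _ = colors.shape
    # interpolate expects (N, C, H, W), so fold batch/seq and move channels
    c = colors.permute(0, 1, 4, 2, 3).reshape(b * s, 3, h, w)
    d = depths.permute(0, 1, 4, 2, 3).reshape(b * s, 1, h, w)
    c = F.interpolate(c, scale_factor=1 / factor, mode="bilinear",
                      align_corners=False)
    # nearest-neighbor for depth avoids blending across depth discontinuities
    d = F.interpolate(d, scale_factor=1 / factor, mode="nearest")
    nh, nw = c.shape[-2:]
    colors = c.reshape(b, s, 3, nh, nw).permute(0, 1, 3, 4, 2)
    depths = d.reshape(b, s, 1, nh, nw).permute(0, 1, 3, 4, 2)
    return colors, depths

# e.g. 640x480 -> 320x240 with factor=2, quartering the per-frame memory
```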

Thanks

Closing due to inactivity