gradslam/gradslam

How does gradslam learn anything using the differentiable modules?

Closed this issue · 1 comments

I read the original gradslam paper and understood that it creates differentiable counterparts for the various components of SLAM. The paper mentions that this allows error to be backpropagated from output to input, end-to-end (as in learning-based systems).

My doubt is: what are the learnable parameters here? How does backpropagation help if we don't have any learnable parameters? Does this code have the ability to train, or does it just present differentiable counterparts?

This reply has a couple of pointers to example codebases that leverage gradslam to learn parameters (e.g., colors, depths). Closing this issue, as this other issue has already been marked as a possible enhancement (adding a set of examples in a subsequent library release). A minimal sketch of the idea is below.
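For reference, here is a minimal sketch of what "learning parameters through gradslam" can look like, following the `PointFusion` / `RGBDImages` usage shown in the project README. There are no network weights inside the SLAM modules themselves; instead, any input tensor (here, the depth maps) can be made a leaf tensor with `requires_grad=True`, and gradients of a loss on the reconstructed map flow back to it. The tensor shapes, intrinsics values, and the toy loss are placeholder assumptions, not values from gradslam.

```python
import torch
import gradslam as gs
from gradslam.slam import PointFusion

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder data: a batch of 1 sequence with 4 small RGB-D frames.
B, L, H, W = 1, 4, 60, 80
colors = torch.rand(B, L, H, W, 3, device=device)
depths = torch.rand(B, L, H, W, 1, device=device, requires_grad=True)  # "learnable" input

# Assumed pinhole intrinsics as a 4x4 matrix (gradslam convention); values are made up.
intrinsics = torch.eye(4, device=device).view(1, 1, 4, 4).repeat(B, 1, 1, 1)
intrinsics[:, :, 0, 0] = 50.0   # fx (assumed)
intrinsics[:, :, 1, 1] = 50.0   # fy (assumed)
intrinsics[:, :, 0, 2] = W / 2  # cx
intrinsics[:, :, 1, 2] = H / 2  # cy

# Identity poses for the toy example; odom="gradicp" would instead estimate
# poses with gradslam's differentiable ICP.
poses = torch.eye(4, device=device).view(1, 1, 4, 4).repeat(B, L, 1, 1)

rgbdimages = gs.RGBDImages(colors, depths, intrinsics, poses)
slam = PointFusion(odom="gt", device=device)
pointclouds, recovered_poses = slam(rgbdimages)

# Toy objective on the fused map; a real setup would compare against ground
# truth geometry or a downstream task loss.
loss = pointclouds.points_padded.abs().mean()
loss.backward()

# Gradients reach the input depths, so an optimizer could update them.
print(depths.grad.shape)  # torch.Size([1, 4, 60, 80, 1])
```

This is the same pattern the linked examples use to optimize colors or depths: the SLAM pipeline acts as a differentiable function from raw inputs to a map, so any upstream tensor (or a network producing it) can be trained against a loss on the output.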