What is the object fusion algorithm or architecture? (radar + Mobileye camera)
Closed this issue · 3 comments
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Apollo installed from (source):
- Apollo version (3.0):
You provided the code and the system architecture as open source. Thank you for that. However, I could not find sufficient information about object fusion (especially radar + camera). Since I could not understand the object fusion logic from the code, it is hard to grasp the architecture of the radar + camera fusion. Can you provide a hint, documentation, or anything else that describes the radar + camera fusion in Apollo?
Any information from anybody would be awesome.
@earcz Thank you for using Apollo! We do have documentation explaining the general fusion algorithm, but we don't have documents explaining all the details at this moment.
Thank you for the response. I am actually curious about the object fusion logic. Let's say the camera does not see an object, but the radar does see it at that moment. This object cannot be fused. So, does the radar assign an ID to this object, and does the system take action according to the radar output? Or is the object ignored because the camera does not see it? An answer to this question would satisfy me.
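For illustration of the question above only (this is not Apollo's actual code, and the names here are hypothetical): a common fusion policy is to associate camera and radar detections inside a distance gate, and to keep any unmatched radar detection as a radar-only track with its own ID rather than discarding it. A minimal sketch of that policy:

```python
from dataclasses import dataclass, field
from itertools import count
import math

@dataclass
class Detection:
    sensor: str   # "camera" or "radar"
    x: float      # position in the vehicle frame (metres)
    y: float

@dataclass
class Track:
    track_id: int
    x: float
    y: float
    sensors: set = field(default_factory=set)

_ids = count(1)  # simple global ID generator for this sketch

def fuse(camera_dets, radar_dets, gate=2.0):
    """Greedy nearest-neighbour association within a distance gate.

    Camera and radar detections closer than `gate` metres are fused into
    one track; every unmatched detection, including radar-only ones,
    still gets its own track ID, so downstream modules can react to it.
    """
    tracks = []
    unmatched_radar = list(radar_dets)
    for cam in camera_dets:
        best, best_d = None, gate
        for rad in unmatched_radar:
            d = math.hypot(cam.x - rad.x, cam.y - rad.y)
            if d < best_d:
                best, best_d = rad, d
        if best is not None:
            unmatched_radar.remove(best)
            # Fused position: plain average here; a real system would
            # weight by per-sensor covariance (e.g. a Kalman update).
            tracks.append(Track(next(_ids), (cam.x + best.x) / 2,
                                (cam.y + best.y) / 2, {"camera", "radar"}))
        else:
            tracks.append(Track(next(_ids), cam.x, cam.y, {"camera"}))
    for rad in unmatched_radar:
        # Radar-only object: it is kept and tracked, not ignored.
        tracks.append(Track(next(_ids), rad.x, rad.y, {"radar"}))
    return tracks
```

For example, if the camera reports one object at (10.0, 0.0) and the radar reports objects at (10.5, 0.2) and (40.0, 1.0), this sketch produces two tracks: one fused camera+radar track and one radar-only track. Whether the planner then acts on a radar-only track is a separate policy decision, typically based on track confidence and age.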
Closed due to inactivity. If the problem persists, please feel free to reopen it or create a new issue and refer to this one.