metadriverse/scenarionet

Extracting leading vehicle in nuScenes after conversion


Hello, thank you very much for your work.
Do you think it would be possible to extract the leading vehicle given the ego position in nuScenes dataset after conversion?
What I am thinking about is to:

  • get the lane id where the ego vehicle is (and all the other objects present in it)
  • convert to curvilinear coordinates based on that lane (I was wondering whether there is already a centre line in nuScenes, since LANES data have both 'polylines' and 'polygons')
  • compute the distance in curvilinear coordinates between the ego vehicle and all the other objects present in that lane.

I am not quite sure whether to use just the lanes or also the other objects, actually.
Anyway, I was wondering whether you have some implementation that extracts the lane from a vehicle position, or something like that, also in curvilinear coordinates (in MetaDrive as well).

Yeah, we do have those tools. If the map is loaded with ScenarioMap, you can get all lanes from map.road_network and the polygon of each lane from lane.polygon. If a lane has no polygon, we generate one from its lane center line, so this API will definitely return a polygon. With these polygons, you can easily find which polygon the point (vehicle position) is in and hence get the lane id and lane center line. You can prompt GPT to write the point-in-polygon code :).
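A minimal sketch of that point-in-polygon lookup, assuming you have already collected (lane_id, lane) pairs from map.road_network (the exact accessor may differ across MetaDrive versions) and that lane.polygon is a sequence of (x, y) vertices; here matplotlib is used for the containment test, which is just one convenient choice:

```python
from matplotlib.path import Path

def lanes_containing_point(lanes, position_xy):
    """Return the ids of all lanes whose polygon contains the 2D point.

    `lanes` is assumed to be an iterable of (lane_id, lane) pairs, and
    lane.polygon a sequence of (x, y) vertices, as suggested above.
    """
    hits = []
    for lane_id, lane in lanes:
        polygon = Path(lane.polygon)  # closed path built from the lane polygon vertices
        if polygon.contains_point(position_xy):
            hits.append(lane_id)
    return hits
```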
Actually, there is an advanced function, ray_localization(), that allows you to do this directly. Check: https://github.com/metadriverse/metadrive/blob/c326ae5f6b409ed90d2bcda8b5bc12689c8c03b5/metadrive/component/navigation_module/edge_network_navigation.py#L159
It returns the set of lanes that the point is on. I think this one might be faster because we do the point-in-polygon calculation with the physics engine API.

For getting curvilinear coordinates, check PointLane:
https://github.com/metadriverse/metadrive/blob/c326ae5f6b409ed90d2bcda8b5bc12689c8c03b5/metadrive/component/lane/point_lane.py#L17
The polyline of a lane is indeed the centerline, so just create a PointLane object from the lane center line. Then you can get the longitudinal and lateral position of an object with lane.local_coordinates(position).
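Putting the pieces together, here is a hedged sketch of how the curvilinear coordinates could be used to pick a leading vehicle. It assumes PointLane can be built from the centerline points and a nominal lane width as described above (check the linked source for the exact constructor signature); `center_line`, `ego_pos`, and `others` are placeholders you would fill from the converted scenario:

```python
import numpy as np
from metadrive.component.lane.point_lane import PointLane

def find_leading_object(center_line, lane_width, ego_pos, others, max_lateral=2.0):
    """Return (object_id, gap) of the closest object ahead of the ego on this lane.

    `others` is assumed to be a dict {object_id: (x, y)} of candidate positions,
    e.g. the objects found inside the same lane polygon.
    """
    lane = PointLane(np.asarray(center_line), lane_width)
    ego_long, _ = lane.local_coordinates(np.asarray(ego_pos))

    leading_id, leading_gap = None, float("inf")
    for obj_id, pos in others.items():
        obj_long, obj_lat = lane.local_coordinates(np.asarray(pos))
        gap = obj_long - ego_long
        # keep objects that are ahead of the ego and roughly on the lane centerline
        if gap > 0 and abs(obj_lat) < max_lateral and gap < leading_gap:
            leading_id, leading_gap = obj_id, gap
    return leading_id, leading_gap
```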

Do you also have some tool to get the leading vehicle?
Thank you very much for your help!

Screenshot 2024-02-22 151050
I am plotting here all the objects that intersect with the current lane the ego vehicle is in. However, it seems that the ego and the other vehicles are misaligned in some way (maybe I have a bug in the code). It is strange, since I am using the same function to plot the occupancy box given the state of an object.
```python
import numpy as np

def get_object_occupancy(self, state):
    """Extract the occupancy of the object considering only its rotation (not velocity).

    Args:
        state (State): The state of the object.

    Returns:
        np.ndarray: The four corners of the occupancy box, rotated by the heading.
    """
    # Extract the position, heading, length, and width from the state object
    obj_position = state.position[:2]
    # Heading assumed in radians, counter-clockwise from +x; flip the sign here
    # if your plotting frame uses the opposite convention
    obj_heading = state.heading
    obj_length = state.length
    obj_width = state.width

    # Compute the rotation matrix (cos/sin were swapped in the original snippet)
    cos_angle = np.cos(obj_heading)
    sin_angle = np.sin(obj_heading)
    rotation_matrix = np.array([[cos_angle, -sin_angle],
                                [sin_angle, cos_angle]])

    # Define the vertices of the box relative to its center, length along +x
    half_length = obj_length / 2
    half_width = obj_width / 2
    vertices_relative = np.array([[-half_length, -half_width],
                                  [half_length, -half_width],
                                  [half_length, half_width],
                                  [-half_length, half_width]])

    # Rotate the vertices and translate them to the object's position
    rotated_vertices = vertices_relative @ rotation_matrix.T + obj_position

    return rotated_vertices
```

Apparently it works for the ego but not for the other objects; why could this happen?
I have also checked the object sizes, and apparently the length and width for the ego differ from the other objects (the width is larger for the ego, while the length is larger for the others).

Where is the scenario from? Waymo or nuScenes?

If you are testing with nuScenes data, it is a bug from nuScenes... We extract the length and width for all nuScenes objects with the same nuScenes API, but the ego car's width and length turn out to be swapped. So if it is nuScenes data, the ego's length is actually the width and the width is actually the length.
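A small hedged workaround consistent with this: swap the two fields back for the ego before building its bounding box. The helper and flags below are hypothetical; the field names follow the snippet earlier in the thread.

```python
def get_object_dimensions(state, is_ego, is_nuscenes):
    """Return (length, width), undoing the swapped ego dimensions in nuScenes conversions."""
    length, width = state.length, state.width
    if is_ego and is_nuscenes:
        # the converter stored the ego's length/width inverted, so swap them back
        length, width = width, length
    return length, width
```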

Yes, I am using nuScenes. I switched width and length for the ego vehicle and it works. But apparently I have problems with the velocities as well (see attached pictures). All the velocity directions are correct, except for the ego one (it should be rotated by 180 deg).
In the first image the ego vehicle is moving in the up-left direction over time. This nuScenes scene was converted with your API a few months ago.
Screenshot 2024-02-23 144345
In this other picture all the velocities are correct (converted a few days ago).
Did you perhaps fix a bug in the velocities?
Screenshot 2024-02-23 144639
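A small hedged sanity check for this kind of issue (names here are placeholders: `positions` and `velocities` are per-frame ego arrays read from the converted scenario): if the stored velocity vectors were flipped by 180 degrees, they would point against the frame-to-frame displacement.

```python
import numpy as np

def velocity_matches_motion(positions, velocities, tol_deg=90.0):
    """Check that stored velocity vectors roughly point along the actual displacement."""
    checks = []
    for t in range(len(positions) - 1):
        motion = np.asarray(positions[t + 1]) - np.asarray(positions[t])
        vel = np.asarray(velocities[t])
        if np.linalg.norm(motion) < 1e-3 or np.linalg.norm(vel) < 1e-3:
            continue  # skip (nearly) static frames
        cos_angle = motion @ vel / (np.linalg.norm(motion) * np.linalg.norm(vel))
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        checks.append(angle < tol_deg)  # a 180 deg flip would fail this test
    return all(checks) if checks else True
```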

Yes, I guess so. The previous nuScenes converter may have been buggy. If the recent one is good, that's fine. Also, you can find some scenarios in MetaDrive/assets, which can serve as your test cases as well.

Actually, I am having problems with the current version.
Anyway, I will check with your test cases and update you.