The goal of the project is to build a ROS system that constructs a point-cloud-based model of the scene, enhanced with information about articulated objects, from a single RGB-D image. The articulated objects considered in this work are drawers, wardrobes, and cabinets.
- the location and class of the object
- the handler of the object, used to interact with it
- estimation of the rotational or transitional joint
- estimation of object parameters without the need to interact with the object, which shortens estimation time and reduces the risk of damaging the object
- estimation of many objects at the same time
- localization of the objects in the scene
- the object may be in an arbitrary state during estimation
This project estimates the parameters of these objects without the need to interact with them. Compared to classic estimation methods, this shortens estimation time and reduces the risk of damaging the object during estimation.
This module is part of my master's thesis "Point cloud-based model of the scene enhanced with information about articulated objects".
This node concatenates the results of the three nodes listed here:
and uses them to build a final model of the scene in a 3D environment.
- a topic with the Microsoft Kinect point cloud; the topic name can be set using (see the example after this list)

  `rosparam set input_point_cloud_topic "input_point_cloud_topic"`

- `front_prediction`, which contains information about detected fronts of articulated objects
- `handler_prediction_topic`, which contains information about detected handlers of articulated objects
- `joint_prediction_topic`, which contains information about detected rotational joints of articulated objects
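For example, assuming the Kinect driver publishes its cloud on `/camera/depth_registered/points` (a driver-dependent topic name, not specified by this README), the input topic can be set and the connections checked as follows:

```
# Point the node at the Kinect point cloud topic
# (the topic name below is an example; use the one your driver publishes)
rosparam set input_point_cloud_topic "/camera/depth_registered/points"

# Verify that the three detection nodes are publishing their predictions
rostopic info front_prediction
rostopic info handler_prediction_topic
rostopic info joint_prediction_topic
```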
- `image_to_process` - RGB image obtained from the input point cloud, to be processed by the rest of the nodes
- `cloud_to_process` - currently processed point cloud
- `processed_fronts_point_cloud` - processed point cloud with fronts data
- `processed_handlers_point_cloud` - processed point cloud with handlers data
- `trans_fronts_normals` - normals of transitional objects
- `rot_front_joints` - rotational joints
- `last_processed_point_cloud` - the most recently processed point cloud
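As a quick sanity check (a sketch; whether these topics sit under a namespace depends on the launch configuration), the published clouds can be inspected from the command line or visualized in RViz:

```
# Check that the processed clouds are being published
rostopic hz processed_fronts_point_cloud
rostopic hz processed_handlers_point_cloud

# Inspect them visually: start RViz and add PointCloud2 displays
# for the topics listed above
rosrun rviz rviz
```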
The dataset that was used while developing the project is available here:
The dataset consists of 38 rosbag sequences, which contain 32 unique transitional objects and 53 unique rotational objects.
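To feed a recorded sequence into the pipeline, it can be replayed with `rosbag play`; the file name below is only a placeholder for one of the 38 sequences:

```
# Replay one recorded sequence from the dataset
# (sequence_01.bag is a placeholder name)
rosbag play sequence_01.bag
```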
The results gathered on the dataset above are as follows:
- Mean time of scene building: 1.9393s
|                                          | Transitional objects | Rotational objects |
|:----------------------------------------:|:--------------------:|:------------------:|
| Found fronts/Total number of fronts | 31/32 | 50/53 |
| Found joints/Total number of joints | 31/32 | 46/53 |
| Found handlers/Total number of handlers | 39/40 | 50/53 |
- ROS Noetic
- PCL library
```
sudo apt install libpcl-dev
```
```
rosparam set input_point_cloud_topic "input_point_cloud_topic"
roslaunch model_builder model_builder.launch
```
Then run the nodes responsible for handler, front, and rotational joint detection.
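Putting the steps together, a full session might look like the sketch below; the detector package and launch-file names are hypothetical placeholders, since this README does not specify them:

```
# 1. Set the input point cloud topic (driver-dependent name)
rosparam set input_point_cloud_topic "/camera/depth_registered/points"

# 2. Start the model builder
roslaunch model_builder model_builder.launch

# 3. In separate terminals, start the detection nodes
#    (package/launch names are hypothetical placeholders)
roslaunch front_detection front_detection.launch
roslaunch handler_detection handler_detection.launch
roslaunch joint_detection joint_detection.launch
```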