- Description
- Required packages - Kinetic Version
- Run GGCNN in Gazebo and RVIZ
- Sending commands through the action server
- Connecting with real UR5
- Meetings minutes
- To do
Repository created to store my research on robot grasping. It is kept mainly as a backup and is not yet fully organized (future versions will be).
This project is under development.
The method used here is based on the GGCNN (Generative Grasping CNN) created by Doug Morrison.
NOTE: This package should be placed into your src folder
- Realsense Gazebo Plugin
- Realsense-ros Release version 2.2.11
- Librealsense Release version 2.31.0 - Install from source
- Moveit Kinetic
- Moveit Python
- Robotiq Gripper
- Universal Robot
- ur_modern_driver
In order to use the Realsense Gazebo Plugin, create a build folder inside the plugin package and run the following commands from it:
cmake ../
make
Install ros-control dependencies
sudo apt-get install ros-kinetic-gazebo-ros-pkgs ros-kinetic-gazebo-msgs ros-kinetic-gazebo-plugins ros-kinetic-gazebo-ros-control
Install any dependencies you might have missed by running this command in the catkin_ws folder:
rosdep install --from-paths src --ignore-src -r -y --rosdistro kinetic
NOTE: Remember to always update the Intel Realsense SDK to the version required by the realsense-ros package
Please check the correct CUDA version for your NVIDIA driver version (https://docs.nvidia.com/deploy/cuda-compatibility/index.html)
Please check the correct TensorFlow version for your CUDA and cuDNN versions (https://www.tensorflow.org/install/source#tested_build_configurations)
Package versions used:
- Ubuntu - 16.04
- Nvidia - 410.78
- Cuda - 10.0.130
- tensorflow-estimator - 1.14
- tensorflow-gpu - 1.14.0
- tensorflow-tensorboard - 0.4.0
- Keras - 2.1.5
- Keras-Applications - 1.0.8
- Keras-Preprocessing - 1.1.0
- CuDNN - 7.4.2
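Since mismatched TensorFlow/Keras versions are a common source of silent failures, a quick runtime check against the tested combination above can catch problems early. The sketch below is only an illustration: the expected versions mirror the list in this README, and the `installed` dictionary is a placeholder for whatever `pip list` reports on your machine.

```python
# Sketch: compare installed package versions against the tested combination above.
# EXPECTED mirrors the version list in this README; adjust it for your setup.

EXPECTED = {
    "tensorflow-gpu": "1.14.0",
    "Keras": "2.1.5",
    "Keras-Applications": "1.0.8",
    "Keras-Preprocessing": "1.1.0",
}

def parse_version(v):
    """Turn '1.14.0' into (1, 14, 0) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def check_versions(installed):
    """Return {package: (found, expected)} for every mismatched package."""
    mismatches = {}
    for pkg, expected in EXPECTED.items():
        found = installed.get(pkg)
        if found is None or parse_version(found) != parse_version(expected):
            mismatches[pkg] = (found, expected)
    return mismatches

# Example: everything matches except Keras
installed = {
    "tensorflow-gpu": "1.14.0",
    "Keras": "2.2.4",
    "Keras-Applications": "1.0.8",
    "Keras-Preprocessing": "1.1.0",
}
print(check_versions(installed))  # {'Keras': ('2.2.4', '2.1.5')}
```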
In order to install all the required packages easily, create a catkin workspace folder with a src folder inside it.
mkdir -p ~/catkin_ws_new/src
Clone this repository into the src folder
cd ~/catkin_ws_new/src
git clone https://github.com/caiobarrosv/grasp_project
Run the install.sh file
cd ~/catkin_ws_new/src/grasp_project/install
sudo chmod +x ./install.sh
./install.sh #without sudo
Launch Gazebo first. Note: the robot may not start correctly due to a hack used to set the initial joint positions in Gazebo, as mentioned in this issue. If that happens, try restarting Gazebo.
roslaunch grasp_project gazebo_ur5.launch
Launch RVIZ if you want to see the frame (object_detected) corresponding to the object detected by GGCNN, as well as the point cloud. In order to see the point cloud, add a PointCloud2 display in RVIZ and select the correct topic:
roslaunch grasp_project rviz_ur5.launch
Run the GGCNN. This node will publish a frame corresponding to the object detected by the GGCNN.
rosrun grasp_project run_ggcnn_ur5.py
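A GGCNN-style detector outputs a grasp as an image pixel, a depth value, and a grasp angle; a 3D frame like object_detected is then obtained by deprojecting that pixel through the pinhole camera model. The sketch below illustrates that math only; the intrinsics are placeholder values, not the real D435 calibration (read those from the camera_info topic), and the function names are mine, not the node's.

```python
import math

def deproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole deprojection: pixel (u, v) at 'depth' metres -> 3D camera-frame point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def yaw_to_quaternion(angle):
    """Grasp angle about the camera z-axis -> quaternion (x, y, z, w)."""
    return (0.0, 0.0, math.sin(angle / 2.0), math.cos(angle / 2.0))

# Placeholder intrinsics (use the values from your camera_info topic instead)
fx = fy = 600.0
cx, cy = 320.0, 240.0

point = deproject(320, 240, 0.5, fx, fy, cx, cy)
print(point)  # (0.0, 0.0, 0.5) -- the principal point lands on the optical axis
```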
Running this node will move the robot to the position published by the run_ggcnn_ur5.py node.
rosrun grasp_project command_GGCNN_ur5.py --gazebo
You might want to see the grasp or any other image. In order to do that, you can use rqt_image_view.
rosrun rqt_image_view rqt_image_view
If you want to test the position controller by sending commands directly to the /pos_based_pos_traj_controller/command topic, use the following:
rostopic pub -1 /pos_based_pos_traj_controller/command trajectory_msgs/JointTrajectory "header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
joint_names: ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint', 'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']
points:
- positions: [1.57, 0, 0, 0, 0, 0]
  time_from_start: {secs: 1, nsecs: 0}"
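The same message can also be built programmatically. The sketch below uses plain dictionaries purely to illustrate the trajectory_msgs/JointTrajectory layout; under ROS you would instead build a trajectory_msgs.msg.JointTrajectory and publish it with rospy, which is omitted here so the example stays self-contained.

```python
# Plain-dict sketch of the JointTrajectory message published above.
UR5_JOINTS = ['shoulder_pan_joint', 'shoulder_lift_joint', 'elbow_joint',
              'wrist_1_joint', 'wrist_2_joint', 'wrist_3_joint']

def make_trajectory(positions, seconds):
    """One-point trajectory: move all six UR5 joints to 'positions' in 'seconds'."""
    if len(positions) != len(UR5_JOINTS):
        raise ValueError("expected one position per UR5 joint")
    return {
        "joint_names": UR5_JOINTS,
        "points": [{
            "positions": list(positions),
            "time_from_start": {"secs": seconds, "nsecs": 0},
        }],
    }

# Same target as the rostopic pub example: rotate the shoulder pan joint to 1.57 rad
traj = make_trajectory([1.57, 0, 0, 0, 0, 0], seconds=1)
print(traj["points"][0]["positions"])  # [1.57, 0, 0, 0, 0, 0]
```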
Use the following command in order to connect to the real UR5. If you are using velocity control, do not use the bring_up launch file; use ur5_ros_control instead.
roslaunch grasp_project ur5_ros_control.launch robot_ip:=192.168.131.13
Launch the real Intel Realsense D435
roslaunch grasp_project rs_d435_camera.launch
Launch the gripper control node
rosrun robotiq_2f_gripper_control Robotiq2FGripperRtuNode.py /dev/ttyUSB0
Launch the ggcnn node
rosrun grasp_project run_ggcnn_ur5.py --real
Launch the main grasping node
rosrun grasp_project command_GGCNN_ur5.py
If you want to visualize the depth or point cloud, you can launch RVIZ
roslaunch grasp_project rviz_ur5.launch
Firstly, check the machine's IP. The IP configured on the robot must be on the same subnet as the machine, differing only in the last digit.
ifconfig
Disable firewall
sudo ufw disable
Set up a static IP on UR5 according to the following figure
Set up a connection on Ubuntu according to the following figure
Topics covered:
- Preferably use devices already available in the lab, such as the UR5, the Intel Realsense, and the Robotiq 2-finger gripper
- Check how to use neural networks to predict the position of objects. The proposed method would then be robust against camera limitations regarding object proximity: even if there is no depth information, the neural network would use past data to predict where the object is at the given moment.
- Search for grasping applications.
- Translate the thesis into English
- Test realsense post-processing to enhance depth images - librealsense/examples/post-processing
- Record a rosbag file of the realsense depth cam
- Set the right position for the object detected frame
- Test the goal position using UR5
- Implement Robotiq gripper and force control
- Update realsense-ros to the new version