Authors: Kelly MacDonald, Lidi Yafei, Mavis Murlock
This project uses the Kinova Gen3 Lite robot arm to implement robot manipulation and shared autonomy.
Use the command `roslaunch grab_target moveit_example.launch`.
The Kinova arm web app is available at: 192.168.1.10/monitoring
In `grab_target/scripts/plan_and_move` (note that `example` is an `ExampleMoveItTrajectories` object):
- To open the gripper: `example.reach_gripper_position(1)`
- To close the gripper 50%: `example.reach_gripper_position(0.5)`
- To reach a given pose: `example.reach_cartesian_pose(pose=pose_goal, tolerance=0.01, constraints=None)` (where `pose_goal` is a `PoseStamped()`)
- To get the current pose: `example.get_cartesian_pose()`
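As a minimal sketch of putting these together, assuming `example` has already been constructed as in the script (the frame id and coordinates here are illustrative placeholders, not taken from our code):

```python
from geometry_msgs.msg import PoseStamped

# Illustrative goal pose; frame id and coordinates are placeholders.
pose_goal = PoseStamped()
pose_goal.header.frame_id = "base_link"  # assumed planning frame
pose_goal.pose.position.x = 0.4
pose_goal.pose.position.y = 0.0
pose_goal.pose.position.z = 0.3
pose_goal.pose.orientation.w = 1.0  # identity orientation

example.reach_cartesian_pose(pose=pose_goal, tolerance=0.01, constraints=None)
example.reach_gripper_position(0.5)  # close the gripper 50%
```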
`get_xyz.py` uses forward kinematics to calculate the position of the end effector, using the data from `gen3_lite_gripper.urdf`.
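A minimal sketch of this kind of URDF-based forward kinematics, assuming the `kdl_parser_py` and `PyKDL` packages; the link names `base_link` and `end_effector_link` are assumptions, not taken from the script:

```python
import PyKDL
from kdl_parser_py import urdf as kdl_urdf

# Assumed link names; check the URDF for the actual frames.
ok, tree = kdl_urdf.treeFromFile("gen3_lite_gripper.urdf")
chain = tree.getChain("base_link", "end_effector_link")
fk_solver = PyKDL.ChainFkSolverPos_recursive(chain)

# Joint angles would normally come from the /joint_states topic.
joints = PyKDL.JntArray(chain.getNrOfJoints())
frame = PyKDL.Frame()
fk_solver.JntToCart(joints, frame)
print("End effector at:", frame.p.x(), frame.p.y(), frame.p.z())
```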
Requires the `realsense2_camera` package. For debugging, see the `realsense2_camera` documentation.
- To view the color image from the camera, use the topic `/camera/color/image_raw`
- To view the depth image from the camera, use the topic `/camera/depth/image_rect_raw`
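For example, a minimal subscriber that converts the color stream to an OpenCV image (assuming `cv_bridge` is installed; the node name is illustrative):

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_color(msg):
    # Convert the ROS image message to a BGR OpenCV array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo_throttle(1.0, "Got %dx%d color frame" % (msg.width, msg.height))

rospy.init_node("camera_viewer")
rospy.Subscriber("/camera/color/image_raw", Image, on_color)
rospy.spin()
```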
Given a set of objects arranged in a semicircle, the user controls the robotic arm with an Xbox controller. We implement shared autonomy to predict which object they have selected.
An Xbox controller is connected to the computer. We read user input using the ROS joy package, which publishes `sensor_msgs/Joy` messages.
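A minimal reader might look like the following sketch (the `/joy` topic name is the joy package default, and the axis mapping is an assumption; check `jstest` for your controller):

```python
import rospy
from sensor_msgs.msg import Joy

def on_joy(msg):
    # Left-stick axes (assumed mapping; varies by controller/driver).
    x, y = msg.axes[0], msg.axes[1]
    rospy.loginfo_throttle(1.0, "stick: x=%.2f y=%.2f" % (x, y))

rospy.init_node("joy_reader")
rospy.Subscriber("/joy", Joy, on_joy)
rospy.spin()
```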
Our code then calculates the probability that a particular target has been chosen, given the user's input and the position of the end effector relative to each cup.
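As a minimal sketch of one common way to score this (a softmax over how well the user's commanded direction aligns with the direction to each target; the function name and `beta` parameter are illustrative, not taken from our code):

```python
import numpy as np

def goal_probabilities(ee_pos, user_cmd, targets, beta=5.0):
    """Illustrative scoring: targets whose direction from the end
    effector aligns with the user's commanded velocity score higher."""
    cmd = user_cmd / (np.linalg.norm(user_cmd) + 1e-9)
    scores = []
    for target in targets:
        direction = target - ee_pos
        direction = direction / (np.linalg.norm(direction) + 1e-9)
        scores.append(beta * np.dot(cmd, direction))
    scores = np.asarray(scores)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    return weights / weights.sum()

# Example: end effector at the origin, user pushing toward +x.
probs = goal_probabilities(
    ee_pos=np.zeros(3),
    user_cmd=np.array([1.0, 0.0, 0.0]),
    targets=[np.array([0.5, 0.3, 0.0]),
             np.array([0.5, -0.3, 0.0]),
             np.array([0.0, 0.6, 0.0])],
)
print(probs)  # highest probability for the targets nearest the +x direction
```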