Kinovarobotics/kinova-ros

tool_pose action fails

Achllle opened this issue · 4 comments

Problem

MICO arm doesn't execute move sent with tool_pose

Steps to reproduce

Example 1

  1. Home robot using the joystick
  2. roslaunch kinova_bringup kinova_robot.launch kinova_robotType:=m1n6s300. This launches fine.
  3. rosrun some_package kinova_test.py. This brings the robot to a pre-programmed pose. It executes, but the bringup terminal reports `Trajectory command failed`.
  4. rostopic pub -r 100 /m1n6s300_driver/in/joint_velocity kinova_msgs/JointVelocity "{joint1: 0.0, joint2: -10.0, joint3: 0, joint4: 0.0, joint5: 0.0, joint6: 0}". This moves joint 2 and executes fine.
  5. rosrun some_package kinova_test.py. This fails: nothing executes, and the same `Trajectory command failed` is shown.

The robot is connected over USB to an Ubuntu 16.04 laptop running ROS Kinetic. The pre-programmed pose is clearly within the workspace, and any movement via the rostopic pub command makes the next execution of the script fail.

Example 2

Run steps 1-3, changing the script kinova_test.py below to go to the pose

move_to_position([0.163641229272, -0.000552169978619, 0.48992022872],
                   [-0.422789037228, 0.717485964298, -0.0742567181587, 0.548588454723])

and skip steps 4 and 5. The robot does not even get to the pose.

test script (kinova_test.py)

#! /usr/bin/env python
import rospy
import actionlib
import kinova_msgs.msg
import geometry_msgs.msg
import std_msgs.msg

def move_to_position(position, orientation):
    """Send a cartesian goal to the action server."""
    # Connect to the Cartesian pose action server exposed by the kinova-ros driver
    action_address = '/m1n6s300_driver/pose_action/tool_pose'
    position_client = actionlib.SimpleActionClient(action_address, kinova_msgs.msg.ArmPoseAction)
    position_client.wait_for_server()

    # Build the goal; the pose is expressed in the arm's base frame
    goal = kinova_msgs.msg.ArmPoseGoal()
    goal.pose.header = std_msgs.msg.Header(frame_id='m1n6s300_link_base')
    goal.pose.pose.position = geometry_msgs.msg.Point(
        x=position[0], y=position[1], z=position[2])
    goal.pose.pose.orientation = geometry_msgs.msg.Quaternion(
        x=orientation[0], y=orientation[1], z=orientation[2], w=orientation[3])

    position_client.send_goal(goal)
    print('Sent position goal: \n {}'.format(goal))

    if position_client.wait_for_result(rospy.Duration(10.0)):
        return position_client.get_result()
    else:
        position_client.cancel_all_goals()
        print('Execution failed: timed out waiting for ArmPoseGoal')
        return None

if __name__ == '__main__':
    rospy.init_node('test_kinova')
    move_to_position([0.313, 0.05, 0.014],
                     [0.24, 0.97, -0.026, -0.033])

Hi @Achllle

First of all, this package is officially supported on Ubuntu 14.04 only, not Ubuntu 16.04. Ideally, please test on an Ubuntu 14.04 machine.

Also, the robot may be unable to reach a pose, even one inside its workspace, depending on where the robot starts. Have you tried other poses as well?
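
For instance, here is a minimal sketch that tries a few candidate poses by reusing the move_to_position helper from your script (assuming kinova_test.py is importable; the poses listed are placeholders, not known-good values):

#! /usr/bin/env python
import rospy
# Reuses move_to_position() from the kinova_test.py script above;
# the candidate poses below are placeholders, not known-good values.
from kinova_test import move_to_position

if __name__ == '__main__':
    rospy.init_node('test_kinova_poses')
    candidate_poses = [
        ([0.30, 0.10, 0.30], [0.24, 0.97, -0.026, -0.033]),
        ([0.35, 0.00, 0.25], [0.24, 0.97, -0.026, -0.033]),
    ]
    for position, orientation in candidate_poses:
        result = move_to_position(position, orientation)
        print('pose {} -> {}'.format(position, 'reached' if result else 'failed'))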

Please have a look at the Q and A below that explains what happens within the robot when you send a pose request:

Q: I am providing a list of Cartesian positions to the robot, but it almost always stops right in the middle. Why?

A: When you send a Cartesian position command to the robot, it does a bunch of things:

  • first, it considers a linear interpolation between its current position and the position you sent

  • then, it finds the next achievable position and orientation on that linear interpolation, considering the velocity and acceleration limits.

  • then, it only considers this next achievable point as the command, and it checks if the command complies with a few criteria:
    - does not bring the robot too close to a singular configuration
    - does not bring the robot too close to a self-collision configuration
    - does not bring the robot too close to one of its angular limits. Note that for Cartesian control, the robot has different angular limits than for angular control. For angular control, the angle limits are listed in the Advanced Specification Guide (ASG). For Cartesian control, joint 2 and joint 3 are limited as follows:

      Mode        Joint 2 min    Joint 2 max    Joint 3 min    Joint 3 max
      Right hand  140 deg        as in ASG      as in ASG      165 deg
      Left hand   as in ASG      220 deg        195 deg        as in ASG
    - does not bring the robot too close to a protection zone

  • it modifies the next achievable point accordingly to stay away from the situations mentioned above

  • it finds the numerical inverse kinematics solution to reach the next achievable point

  • it rechecks the conditions for singularity, self-collision, etc.

  • it sends the command to the joints

  • it repeats the process until the error with the position and orientation you provided for a given point is small enough

This means the robot will never stop trying to reach a commanded Cartesian pose until it gets within an acceptable error margin. That explains why the robot can stop in the middle of a trajectory: it will not try to reach point 3 if it was not able to reach point 2 within an acceptable error margin first. One important thing to consider is that it is not necessarily the pose you provided that is inaccessible to the robot; one of the 'next achievable points' computed along the way, depending on the robot's current position, may bring it too close to a singular configuration, too close to a self-collision, and so on.
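
To make the process above concrete, here is an illustrative Python sketch of the loop just described. This is not the robot's firmware; every helper function in it is a hypothetical stand-in:

def cartesian_control_loop(current_pose, goal_pose, tolerance):
    # Illustrative only: each helper below is a hypothetical stand-in for
    # logic that runs inside the robot.
    while pose_error(current_pose, goal_pose) > tolerance:
        # 1. Linear interpolation between the current pose and the goal
        path = interpolate(current_pose, goal_pose)
        # 2. Next achievable point under velocity/acceleration limits
        next_point = next_achievable_point(path)
        # 3. Keep away from singularities, self-collisions, the Cartesian-mode
        #    joint limits above, and protection zones
        next_point = adjust_for_constraints(next_point)
        # 4. Numerical inverse kinematics for that point
        joint_command = inverse_kinematics(next_point)
        # 5. Re-check the same constraints on the IK solution
        if not constraints_ok(joint_command):
            break  # the arm stops here and never advances to the next pose
        # 6. Send the command to the joints and repeat
        send_to_joints(joint_command)
        current_pose = read_current_pose()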

Important parameters to consider are:

  • the list of poses you provide
  • the sequence in which you provide the poses to reach
  • the starting pose of the robot
  • whether admittance mode is active or not
  • the reach of the robot
  • the robot's singular configurations

A few things that help:

  • sending poses that are relatively close to each other
  • starting the robot in a configuration that is relatively close to the first commanded pose
  • choosing a combination of position and orientation that you think will keep the robot far from its singular configurations (especially the wrist-alignment one, with joint 5 at 180 degrees)
  • if you can, moving the robot with the joystick and recording robot positions as you move it along your desired pattern, then asking the robot to repeat that trajectory
  • using angular control instead of Cartesian position control (see the sketch below)
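
As a minimal sketch of that last point (this assumes the joints_action/joint_angles action server that the kinova-ros driver exposes; the angle values are placeholders only):

#! /usr/bin/env python
import rospy
import actionlib
import kinova_msgs.msg

def move_to_joint_angles(angles):
    """Send an angular goal (in degrees) to the driver's joint angles action server."""
    client = actionlib.SimpleActionClient(
        '/m1n6s300_driver/joints_action/joint_angles',
        kinova_msgs.msg.ArmJointAnglesAction)
    client.wait_for_server()
    goal = kinova_msgs.msg.ArmJointAnglesGoal()
    # Joint angles in degrees; these values are placeholders only
    goal.angles.joint1, goal.angles.joint2, goal.angles.joint3 = angles[0:3]
    goal.angles.joint4, goal.angles.joint5, goal.angles.joint6 = angles[3:6]
    client.send_goal(goal)
    if client.wait_for_result(rospy.Duration(10.0)):
        return client.get_result()
    client.cancel_all_goals()
    return None

if __name__ == '__main__':
    rospy.init_node('test_kinova_angular')
    move_to_joint_angles([180.0, 200.0, 90.0, 180.0, 180.0, 90.0])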

Let us know
Martine

A few more details after testing on a MICO here:
In this case, I think it is the position part that is problematic. In order to protect itself from singularities, the MICO arm has a virtual cylinder defined around its base. The radius of this virtual cylinder is about 20 cm, which is very close to the command you are giving. In fact, if you connect a joystick to your MICO and try to bring the end effector near point (0.163641229272, -0.000552169978619, 0.48992022872), you'll see that the MICO avoids getting its end effector near its base (from experimenting, I get the impression that I cannot get X closer than 26-27 cm if I try to keep Y near 0). What I recommend is to move the robot with the joystick and record the position when you find one that suits you. This way, you know the pose you are providing is achievable by the robot.
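
As a quick illustration, you can pre-check a target against that cylinder before sending it (the 20 cm radius is my rough estimate, not an official specification, and this helper is not part of kinova-ros):

import math

# Approximate radius of the virtual protection cylinder around the MICO's
# base; an estimate from experimenting, not an official value.
BASE_CYLINDER_RADIUS = 0.20  # meters

def outside_base_cylinder(position):
    """Rough pre-check: is the target's radial distance from the base axis
    larger than the virtual cylinder radius?"""
    radial_distance = math.hypot(position[0], position[1])
    return radial_distance > BASE_CYLINDER_RADIUS

print(outside_base_cylinder([0.163641229272, -0.000552169978619, 0.48992022872]))
# -> False: this target lies inside the virtual cylinder, which is why the
#    arm refuses to bring its end effector there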

If you do not want to be bothered by this virtual cylinder and you want the robot to go to this exact pose, you can always choose to control the robot with MoveIt.
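
For reference, here is a minimal MoveIt sketch (this assumes a move_group node is already running for the arm and that the planning group is named 'arm'; verify both against your Kinova MoveIt configuration):

import sys
import rospy
import moveit_commander
import geometry_msgs.msg

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('mico_moveit_test')
# 'arm' is an assumed planning group name; check your MoveIt config
group = moveit_commander.MoveGroupCommander('arm')

target = geometry_msgs.msg.Pose()
target.position.x, target.position.y, target.position.z = 0.1636, -0.0006, 0.4899
target.orientation.x = -0.4228
target.orientation.y = 0.7175
target.orientation.z = -0.0743
target.orientation.w = 0.5486
group.set_pose_target(target)

# MoveIt plans first and only executes if a valid plan was found
success = group.go(wait=True)
group.stop()
group.clear_pose_targets()
print('Reached pose' if success else 'No plan found')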

Thanks for your thorough reply. Since I need a planner that will always find a solution if one exists and won't even attempt to move if it can't, I'll use MoveIt! instead.

There's not a lot of documentation on how the tool pose action service works, and I assumed it would be superior because it's specific to the MICO robot. Perhaps this should be documented on the wiki.

Hi @Achllle

Ok, I'll add a few more details to the README. In short, the tool pose action service sends Cartesian position commands to the robot, and the inverse kinematics is handled within the robot itself. How the inverse kinematics algorithm programmed into the robot works is explained in the Q & A I sent earlier. Good luck with MoveIt :)