UniversalRobots/Universal_Robots_ROS_controllers_cartesian

Allow arbitrary frame for cartesian trajectory in cartesian_trajectory_controller

fmauch opened this issue · 4 comments

As far as I can see, the cartesian trajectory controller does not use the frame of the poses that are coming in.

In our example test_move script we do not set the frame_id explicitly, and in the controller implementation I cannot find any mechanism that performs a lookup.

@stefanscherzinger am I missing something or is this functionality missing?

@fmauch
You are right. This functionality is not yet implemented. We currently assume the robot's base frame for control when processing sampled trajectory points. This is particularly the case for calls to our IK solver and when forwarding targets to the UR IK solver. It's also the case here.

The cartesian_trajectory_interpolation, however, is formulated independently in its own coordinate system, which is implicitly defined by the given poses (without loss of generality). All poses must share the same implicit reference frame, though, but that is ensured by the message definition.

I assume you have two common use cases in mind:

  1. Re-use a previously taught Cartesian trajectory (e.g. some polishing motion) at a new - but static - spot in the robot's workspace.
  2. Re-use and apply a previously taught Cartesian trajectory to some moving object.

Both 1. and 2. probably only make sense if the new frame is not directly part of the robotic chain but belongs somewhere else in the setup's TF tree. I think we could achieve 1. with a static transform that we need to look up only once. That's probably okay. But since we sample control points on the fly during trajectory execution, 2. will require doing the lookups online and possibly in real time. I'm not sure at the moment how critical that is.
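For 1., a client-side sketch of that one-time lookup could look roughly like this (assuming tf2 on ROS 1; base and taught_frame are placeholder frame names, and the transformed poses would then be filled into the trajectory message before sending):

```python
#!/usr/bin/env python
# Sketch for use case 1: re-use a taught Cartesian trajectory at a new, static
# spot by transforming its poses into the robot's base frame once, client-side,
# before sending the goal. Frame names are placeholders.
import rospy
import tf2_ros
import tf2_geometry_msgs  # provides do_transform_pose for PoseStamped
from geometry_msgs.msg import PoseStamped

rospy.init_node("transform_taught_trajectory")

tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)

# One-time lookup of the (static) transform base <- taught_frame.
transform = tf_buffer.lookup_transform("base", "taught_frame",
                                       rospy.Time(0), rospy.Duration(5.0))

def to_base_frame(pose, taught_frame="taught_frame"):
    """Express a geometry_msgs/Pose given in taught_frame in the base frame."""
    stamped = PoseStamped()
    stamped.header.frame_id = taught_frame
    stamped.pose = pose
    return tf2_geometry_msgs.do_transform_pose(stamped, transform).pose

# taught_poses would hold the poses of the previously recorded trajectory; the
# transformed poses can then be filled into CartesianTrajectoryPoint messages.
# base_frame_poses = [to_base_frame(p) for p in taught_poses]
```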

What I have in mind is, for example, commanding a trajectory in a different reference frame than the one defined for the controller, as in https://forum.universal-robots.com/t/about-the-cartesian-trajectory-controllers-and-movegroup/18606.

I am aware that the message definition does provide a reference frame, but I couldn't find anything in the controller's code that actually interprets it.
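For illustration, setting that reference frame on the client side would look roughly like this (a sketch assuming the cartesian_control_msgs definitions used by the test_move script; the controller currently ignores header.frame_id, which is exactly the point of this issue):

```python
# Sketch only: the trajectory message carries a reference frame, but the
# controller does not interpret it at the moment.
import rospy
from cartesian_control_msgs.msg import (FollowCartesianTrajectoryGoal,
                                        CartesianTrajectoryPoint)

goal = FollowCartesianTrajectoryGoal()
goal.trajectory.header.frame_id = "my_reference_frame"  # currently not evaluated

point = CartesianTrajectoryPoint()
point.time_from_start = rospy.Duration(2.0)
point.pose.position.x = 0.4
point.pose.position.z = 0.3
point.pose.orientation.w = 1.0
goal.trajectory.points.append(point)
```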

Hi!

So I am attempting to control the UR10 with the forward_cartesian_traj_controller and the UR ROS driver, and I have set up my tf tree to include some additional frames that are children of tool0_controller (I have a setup with more than one "tool center point"; attached is the tf tree generated by running rosrun tf2_tools view_frames.py). I am interested in commanding each tcp ("tcp_gripper1", "tcp_gripper2") to go to the same pose in my environment (relative to my base frame), at separate instances.

I have therefore defined a desired CartesianTrajectoryPoint msg, which I intend to use in two CartesianTrajectory definitions that differ only in their controlled_frame attribute; one of them uses "tcp_gripper1" and the second uses "tcp_gripper2". Unfortunately, it seems that when I send each of these trajectories to the trajectory controller, the arm drives the flange, and not the desired grippers, to the pose of interest. Note: on my teach pendant, I have made sure that the active tcp is set to the flange.
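Roughly, the two goals I construct only differ in that one field (a sketch with placeholder pose values; field names taken from cartesian_control_msgs as far as I understand them):

```python
# Sketch: two goals sharing the same Cartesian point, differing only in
# controlled_frame. Pose values are placeholders.
from copy import deepcopy
import rospy
from cartesian_control_msgs.msg import (FollowCartesianTrajectoryGoal,
                                        CartesianTrajectoryPoint)

desired_point = CartesianTrajectoryPoint()
desired_point.time_from_start = rospy.Duration(3.0)
desired_point.pose.position.x = 0.5
desired_point.pose.position.z = 0.4
desired_point.pose.orientation.w = 1.0

goal_gripper1 = FollowCartesianTrajectoryGoal()
goal_gripper1.trajectory.controlled_frame = "tcp_gripper1"
goal_gripper1.trajectory.points.append(desired_point)

# Same point, second tool center point.
goal_gripper2 = deepcopy(goal_gripper1)
goal_gripper2.trajectory.controlled_frame = "tcp_gripper2"
```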

I am not sure if this is precisely the subject of this Issue, but I wanted to address it here before I raise another Issue.

Thank you for your time!

[Attached screenshot: TF tree generated by view_frames.py, 2022-03-25]

Hi @wafts95

Sorry for my late reply. I'm catching up after my extended vacation. Is this still relevant for you?

I am not sure if this is precisely the subject of this Issue

Yes, that's one of the consequences of controlled_frame not being taken into account at the moment.
I see two workarounds for your use cases (the first being a guess, to be honest):

  1. If you somehow manage to specify either tcp_gripper1 or tcp_gripper2 in the teach pendant as end-effector, then the controller's commands (from processing the trajectory) could be interpreted correctly. This might involve specifying both frames in advance with fixed coordinates in the teach pendant with respect to tool0 or similar. But I have no idea how to do that or how to choose one of them online during the application.

  2. You could configure, load, and switch between two controller instances of type position_controllers/CartesianTrajectoryController. This type has almost equivalent behavior but gives you more control on the ROS side. You would then set tip to tcp_gripper1 and tcp_gripper2, respectively. The controller will turn the Cartesian trajectory into joint configurations and send them to the driver. This motion should then reflect your specified trajectory. Make sure to specify your URDF correctly, with tcp_gripper1 and tcp_gripper2 being children of tool0, not tool0_controller, so that all of the active joints are contained in your kinematic chain (see the switching sketch below).
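A small sketch of switching between two such instances at runtime via the controller_manager service; the instance names are placeholders from an assumed configuration with tip set to tcp_gripper1 and tcp_gripper2, respectively:

```python
#!/usr/bin/env python
# Sketch: activate the controller instance configured for tcp_gripper1 and
# stop the one configured for tcp_gripper2. Instance names are placeholders.
import rospy
from controller_manager_msgs.srv import SwitchController, SwitchControllerRequest

rospy.init_node("switch_tcp_controller")
rospy.wait_for_service("controller_manager/switch_controller")
switch = rospy.ServiceProxy("controller_manager/switch_controller",
                            SwitchController)

req = SwitchControllerRequest()
req.start_controllers = ["gripper1_cartesian_traj_controller"]
req.stop_controllers = ["gripper2_cartesian_traj_controller"]
req.strictness = SwitchControllerRequest.BEST_EFFORT
switch(req)
```

After switching, you would send your Cartesian trajectory goal to the action interface of whichever instance is currently running.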

Also, try to use this mechanism to check if your trajectory looks good in RViz before commanding it to the real robot.