Jittering with Dual Franka + Inspire Hand Setup
Closed this issue · 3 comments
Hi OmniGibson Team,
Thanks for the impressive work! I'm using OmniGibson to build an environment for bimanual dexterous manipulation. I've written a test script that moves both robots' arm end-effectors upwards, using an IK solver to compute joint angles and apply them to the robots.
However, I've encountered a problem: When I load two Franka arms with Inspire hands, the robots exhibit severe jittering:
two_inspire-2024-10-04_13.59.39.mp4
I've thoroughly checked the code but can't pinpoint the issue. Interestingly, when I simply replace the end-effector with a gripper, everything works normally:
two_gripper-2024-10-04_13.58.44.mp4
Even when I use just a single Franka + Inspire setup, it works correctly:
one_inspire-2024-10-04_13.56.59.mp4
Could you help me understand what might be causing this issue? I'm using the og-develop branch with Isaac Sim 4.1.0 on Ubuntu 20.04. Here's my code:
Thank you very much!
Best,
Yuanpei
Hi @cypypccpy , thank you so much for the repro and the detailed description. I have tried to reproduce this and can confirm there's something wrong with the inspire hand setup. I haven't had enough time to fix it, but I'll get back to you tomorrow!!
Hi @cypypccpy , thanks for your patience! It turns out the problem is that when we spawn two robots into the environment, both default to a position of [0, 0, 0], which leads to some crazy collisions. The fix is easy though: when writing the robot configs, pass in position (and optionally orientation) arguments to specify each robot's initial pose so that they don't collide. One thing to note is that we are in the process of deprecating Lula for IK, so while debugging your script I rewrote it with our own IK controller instead of using Lula. Here's my script for your reference:
import torch as th

import omnigibson as og
import omnigibson.utils.transform_utils as T


def generate_action(robot, arm_targets):
    """
    Generate a no-op action that will keep the robot still but aim to move the arms to the saved pose targets, if possible

    Returns:
        th.tensor or None: Action array for one step for the robot to do nothing
    """
    action = th.zeros(robot.action_dim)
    for name, controller in robot._controllers.items():
        # If desired arm targets are available, generate an action that moves the arms to the saved pose targets
        if name in arm_targets:
            arm = name.replace("arm_", "")
            target_pos, target_orn_axisangle = arm_targets[name]
            current_pos = robot.get_eef_position(arm)
            delta_pos = target_pos - current_pos
            partial_action = th.cat((delta_pos, target_orn_axisangle))
        else:
            partial_action = controller.compute_no_op_action(robot.get_control_dict())
        action_idx = robot.controller_action_idx[name]
        action[action_idx] = partial_action
    return action


def main():
    scene_cfg = {"type": "Scene"}
    left_robot_cfg = {
        "type": "FrankaPanda",
        "fixed_base": True,
        "end_effector": "inspire",
        "controller_config": {
            "arm_0": {
                "name": "InverseKinematicsController",
                "mode": "pose_absolute_ori",
            },
            "gripper_0": {
                "name": "MultiFingerGripperController",
                "mode": "independent",
            },
        },
        "position": [0.0, 0.0, 0.0],
    }
    right_robot_cfg = {
        "type": "FrankaPanda",
        "fixed_base": True,
        "end_effector": "inspire",
        "controller_config": {
            "arm_0": {
                "name": "InverseKinematicsController",
                "mode": "pose_absolute_ori",
            },
            "gripper_0": {
                "name": "MultiFingerGripperController",
                "mode": "independent",
            },
        },
        "position": [0.0, 1.0, 0.0],
    }
    cfg = dict(scene=scene_cfg, robots=[left_robot_cfg, right_robot_cfg])
    env = og.Environment(configs=cfg)

    # Update the viewer camera's pose so that it points towards the robot
    og.sim.viewer_camera.set_position_orientation(
        position=th.tensor([1.7038, -0.8032, 1.2708]),
        orientation=th.tensor([0.4872, 0.2458, 0.3775, 0.7482]),
    )

    left_robot = env.robots[0]
    right_robot = env.robots[1]

    # At least one simulation step while the simulator is playing must occur for the robot (or in general, any object)
    # to be fully initialized after it is imported into the simulator
    og.sim.step()

    # Make sure none of the joints are moving
    left_robot.keep_still()
    right_robot.keep_still()

    l_pos, l_ori = left_robot.get_eef_position(), T.quat2axisangle(left_robot.get_eef_orientation())
    r_pos, r_ori = right_robot.get_eef_position(), T.quat2axisangle(right_robot.get_eef_orientation())
    l_target_pos, r_target_pos = l_pos + th.tensor([0.1, 0.0, 0.0]), r_pos + th.tensor([0.1, 0.0, 0.0])
    arm_targets = {"left": {"arm_0": (l_target_pos, l_ori)}, "right": {"arm_0": (r_target_pos, r_ori)}}

    while True:
        env.step(
            th.cat(
                [generate_action(left_robot, arm_targets["left"]), generate_action(right_robot, arm_targets["right"])]
            )
        )
        l_pos, r_pos = left_robot.get_eef_position(), right_robot.get_eef_position()
        if th.norm(l_pos - l_target_pos) < 0.001 and th.norm(r_pos - r_target_pos) < 0.001:
            break

    # Always shut the simulation down cleanly at the end
    og.clear()


if __name__ == "__main__":
    main()
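The root cause generalizes: any robots spawned without an explicit position all default to the origin and interpenetrate. As a quick standalone sanity check (a sketch only — check_spawn_clearance and the 0.5 m threshold are made up for illustration, not an OmniGibson API), you can verify that a set of robot configs keeps the base positions apart before launching the environment:

```python
import math

def check_spawn_clearance(robot_cfgs, min_clearance=0.5):
    """Return True if every pair of configured robot base positions is at
    least `min_clearance` meters apart. Configs without an explicit
    "position" fall back to the origin, mirroring the default spawn pose."""
    positions = [cfg.get("position", [0.0, 0.0, 0.0]) for cfg in robot_cfgs]
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < min_clearance:
                return False
    return True

# Both robots default to [0, 0, 0] -> overlapping bases, hence the jitter.
bad_cfgs = [{"type": "FrankaPanda"}, {"type": "FrankaPanda"}]
# Offsetting one robot along y, as in the script above, resolves it.
good_cfgs = [
    {"type": "FrankaPanda", "position": [0.0, 0.0, 0.0]},
    {"type": "FrankaPanda", "position": [0.0, 1.0, 0.0]},
]
print(check_spawn_clearance(bad_cfgs))   # False
print(check_spawn_clearance(good_cfgs))  # True
```

A check like this is cheap to run before `og.Environment(configs=cfg)` and turns a hard-to-diagnose physics symptom (jittering) into an explicit config error.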