How to decide the sampling range on se(3)?
Dear author, thank you so much for making your source code public; it has benefited me greatly. However, I still have some confusion: how did you choose the sampling ranges for the rotation vectors and translation vectors? Unlike rotation matrices and Euler angles, a range defined over rotation vectors makes it hard to intuit how far the source actually moves. If you could provide some help, I would greatly appreciate it.
import torch

# RigidTransform and convert are DiffPose utilities (e.g., from diffpose.calibration);
# the "se3_log_map"/"se3_exp_map" strings follow the PyTorch3D convention.
def get_random_offset(batch_size: int, device) -> RigidTransform:
    # Rotation components of the se(3) log map (axis-angle, in radians)
    r1 = torch.distributions.Normal(0, 0.2).sample((batch_size,))
    r2 = torch.distributions.Normal(0, 0.1).sample((batch_size,))
    r3 = torch.distributions.Normal(0, 0.25).sample((batch_size,))
    # Translation components of the se(3) log map
    t1 = torch.distributions.Normal(10, 70).sample((batch_size,))
    t2 = torch.distributions.Normal(250, 90).sample((batch_size,))
    t3 = torch.distributions.Normal(5, 50).sample((batch_size,))
    log_R_vee = torch.stack([r1, r2, r3], dim=1).to(device)
    log_t_vee = torch.stack([t1, t2, t3], dim=1).to(device)
    # Exponentiate the se(3) log vector back into a rigid transform
    return convert(
        [log_R_vee, log_t_vee],
        "se3_log_map",
        "se3_exp_map",
    )
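
For intuition about the rotation numbers: the rotation component of an se(3) log vector is an axis-angle vector, so its Euclidean norm is the rotation angle in radians. A minimal sketch (not part of DiffPose; the sample size and printout are purely illustrative) for checking how large the sampled rotations are in degrees:

import torch

# Sample rotation offsets with the same distributions as get_random_offset above
r = torch.stack(
    [
        torch.distributions.Normal(0, 0.2).sample((10_000,)),
        torch.distributions.Normal(0, 0.1).sample((10_000,)),
        torch.distributions.Normal(0, 0.25).sample((10_000,)),
    ],
    dim=1,
)

# The norm of the axis-angle vector is the rotation angle (radians -> degrees)
angles_deg = torch.rad2deg(r.norm(dim=1))
print(f"mean: {angles_deg.mean().item():.1f} deg, "
      f"95th percentile: {angles_deg.quantile(0.95).item():.1f} deg")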
Hi @elevenseventao, the ranges for se(3) were determined by converting all the camera poses in the DeepFluoroDataset to se(3) and picking sensible upper and lower bounds. In the newest versions of DiffDRR (>0.4), I changed the camera pose parameterization from [R | t] to [R | Rt] (see Hartley and Zisserman, Chapter 6). This makes it a lot easier to specify human-interpretable bounds on the pose parameters (e.g., ±45 degrees lateral). I'm working on an update to DiffPose that will use that easier-to-understand parameterization, so stay tuned for that!
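
For anyone who wants to reproduce that kind of range selection, one approach is to map the dataset's ground-truth poses to se(3) with the log map and look at per-component statistics; the mean and std of each component then suggest Normal distributions like the ones in get_random_offset. A rough sketch, assuming a (N, 4, 4) tensor of rigid transforms in PyTorch3D's row-vector layout (the placeholder poses below are illustrative, not actual DeepFluoro data):

import torch
from pytorch3d.transforms import se3_log_map

# Placeholder stand-in; in practice, collect the (N, 4, 4) camera poses from
# the dataset here (PyTorch3D expects the [[R, 0], [t, 1]] layout).
pose_matrices = torch.eye(4).repeat(100, 1, 1)

# se3_log_map returns (N, 6) vectors: first three components are the
# translation part, last three the rotation part
log_poses = se3_log_map(pose_matrices)

# Per-component statistics suggest sensible sampling distributions and bounds
print(log_poses.mean(dim=0))
print(log_poses.std(dim=0))
print(log_poses.min(dim=0).values, log_poses.max(dim=0).values)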