The new uploaded pretrained model doesn't seem to be performing very well
HospitableHost opened this issue · 15 comments
I tested on plausible poses and on noise-added poses, and found that the output distance does not distinguish the noisy poses from the plausible ones well. The noise-added poses are generated with self.sigma = [0.01, 0.05, 0.1, 0.25, 0.5] and self.sample_distribution = np.array([0.2, 0.2, 0.2, 0.2, 0.2]). Below are some predictions.
Plausible poses from AMASS trainset:
Noise-added poses:
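For reference, here is a minimal sketch of the noise model described above, assuming per-joint unit quaternions; the helper name `add_pose_noise` and the exact way the noise is applied are my assumptions, not the repo's code:

```python
import numpy as np
import torch

# Noise parameters quoted in the issue
sigmas = [0.01, 0.05, 0.1, 0.25, 0.5]
sample_distribution = np.array([0.2, 0.2, 0.2, 0.2, 0.2])

def add_pose_noise(quat_pose, rng=np.random):
    """quat_pose: (num_joints, 4) unit quaternions. Returns a noisy copy.

    Hypothetical sketch: draw one sigma per pose according to
    sample_distribution, add Gaussian noise, then re-normalize so each
    joint is a unit quaternion again.
    """
    sigma = float(rng.choice(sigmas, p=sample_distribution))
    noisy = quat_pose + sigma * torch.randn_like(quat_pose)
    return torch.nn.functional.normalize(noisy, dim=-1)

pose = torch.nn.functional.normalize(torch.randn(21, 4), dim=-1)
noisy_pose = add_pose_noise(pose)
```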
Please share the pose file, so that i can test
I suspect the following reasons:
- version 2 of PoseNDF works on positive quaternions, so you need to flip the quaternions to the positive hemisphere before calculating the distance
Please refer to https://github.com/garvita-tiwari/PoseNDF/blob/version2/experiments/sample_poses.py#L73 for calculating the distance and projecting a pose onto the manifold.
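For anyone following along, the "positive hemisphere" flip can be sketched like this (a hedged stand-in for the repo's `quat_flip`, which may differ in detail): since q and -q encode the same rotation, quaternions with a negative scalar (w) component are simply negated.

```python
import torch

def quat_flip_sketch(quats):
    """quats: (..., 4) quaternions in (w, x, y, z) order.

    Negate every quaternion whose w component is negative, so all
    quaternions lie on the positive hemisphere. Rotations are unchanged.
    """
    sign = 1.0 - 2.0 * (quats[..., :1] < 0).float()
    return quats * sign

q = torch.tensor([[-0.5,  0.5, 0.5,  0.5],
                  [ 0.5, -0.5, 0.5, -0.5]])
flipped = quat_flip_sketch(q)
# every flipped quaternion now has w >= 0
```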
I have added some examples here:
bad pose(initial, projected pose):
initial distance: tensor(0.0308, device='cuda:0').
good pose(initial, projected pose): -> from ACCAD/Female1General_c3d
initial distance: tensor(0.0005, device='cuda:0')
Thank you very much! I will compare the difference between the two versions in detail.
It still doesn't look good. I tested the pretrained model on raw poses sampled directly from the AMASS trainset, used the functions 'axis_angle_to_quaternion' and 'quat_flip' to process the poses, and then fed them to the model.
But the distances of the plausible poses are still around 0.09.
Here is my test code.
```python
# test for pretrained model
import numpy as np
import pandas as pd
import torch

# project-specific imports; module paths may differ in your checkout
from configs.config import load_config
from model.posendf import PoseNDF
from experiments.exp_utils import quat_flip
from pytorch3d.transforms import axis_angle_to_quaternion

checkpoint_path = './experiment/2-27.tar'
config_path = './configs/2-27.yaml'
opt = load_config(config_path)
checkpoint = torch.load(checkpoint_path, map_location=opt['train']['device'])
model = PoseNDF(opt)
model.load_state_dict(checkpoint['model_state_dict'])
print(f"checkpoint loaded, its epoch is {checkpoint['epoch']}.")

path = '/mnt/nas_8/datasets/Pose-NDF-raw/ACCAD/Female1General_c3d.npz'
data = np.load(path)
num_poses = 5
rand_idx = np.random.randint(0, len(data['pose_body']), num_poses)
pose = torch.from_numpy(data['pose_body'][rand_idx])[:, :63].reshape(-1, 21, 3)
pose = axis_angle_to_quaternion(pose)
pose = quat_flip(pose)

distance = model(pose, train=False)
distance = distance[:, 0].cpu().detach().clone().numpy()
gt_distance = np.zeros_like(distance)
data = {'predicted distance': distance.tolist(),
        'gt distance': gt_distance.tolist(),
        'error:': np.absolute(distance - gt_distance).tolist()}
df = pd.DataFrame(data)
print('poses from: {}'.format(path.split('/')[-3:]))
print(df)
```
The dir 'Pose-NDF-raw' contains the sampled poses processed by 'data/sample_poses.py'.
I checked with the same code and I get the following:

```
{'predicted distance': [0.00030276746838353574, 0.0004039875348098576, 0.00040361887658946216, 0.0008321860223077238, 0.00046724494313821197], 'gt distance': [0.0, 0.0, 0.0, 0.0, 0.0], 'error:': [0.00030276746838353574, 0.0004039875348098576, 0.00040361887658946216, 0.0008321860223077238, 0.00046724494313821197]}
```
I notice that you get the distance using:

```python
distance = model(pose, train=False)
```

but it should be

```python
distance = model(pose, train=False)['dist_pred']
```
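To make the difference concrete, here is a toy stand-in (`ToyPoseNDF` is hypothetical; only the dict-return behaviour matters): the forward pass returns a dict, so indexing `'dist_pred'` is required to get the distance tensor.

```python
import torch

class ToyPoseNDF(torch.nn.Module):
    """Hypothetical stand-in that mimics PoseNDF's dict-style output."""
    def forward(self, pose, train=False):
        # placeholder "distance": one scalar per pose in the batch
        dist = pose.flatten(1).norm(dim=1, keepdim=True)
        return {'dist_pred': dist}

model = ToyPoseNDF()
pose = torch.zeros(5, 21, 4)

out = model(pose, train=False)                       # a dict, not a tensor
distance = out['dist_pred'][:, 0].detach().cpu().numpy()
```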
Just to be double sure, I have also re-uploaded the checkpoint.
I can't believe it. Why is it still wrong for me with the new one?
https://drive.google.com/file/d/1PjOGLqh4P6RPkjG9Gc7YjFkDmES7YWLa/view?usp=sharing
I have uploaded an npz file; it contains raw poses taken directly from AMASS's ACCAD subset.
I use your pretrained model, your config file, and your model code.
If this data is not wrong, then the problem must be in the pretrained model.
Please help me test the pose file, I really don't know what is wrong. o(╥﹏╥)o
> I can't believe that. Why it is still wrong for me using the new one? https://drive.google.com/file/d/1PjOGLqh4P6RPkjG9Gc7YjFkDmES7YWLa/view?usp=sharing I have uploaded a npz file, it is raw pose directly from the AMASS' ACCAD.
I tested with this file and the code snippet you shared:

```
{'predicted distance': [0.00072555395308882, 0.0004589210730046034, 0.0005504807340912521, 0.0010260505368933082, 0.0011828476563096046], 'gt distance': [0.0, 0.0, 0.0, 0.0, 0.0], 'error:': [0.00072555395308882, 0.0004589210730046034, 0.0005504807340912521, 0.0010260505368933082, 0.0011828476563096046]}
```
Did you check this line? I notice that you get the distance using:

```python
distance = model(pose, train=False)
```

but it should be

```python
distance = model(pose, train=False)['dist_pred']
```
Ok, I think I understood the problem. You are using the main branch of the code, but this checkpoint is for the version2 branch.
I think in your code/posendf.py this line is uncommented:

```python
pose = torch.nn.functional.normalize(pose.to(device=self.device), dim=1)
```

(see b53d015#diff-ae0c20e62851250b817bb51457c32e268c4d57078f02689516a52b049ed27e70)
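To illustrate why that stray line corrupts the input (a small self-contained demo, not the repo's code): with a (batch, 21, 4) quaternion tensor, dim=1 normalizes across the 21 joints rather than across each quaternion's 4 components (dim=-1), so valid unit quaternions stop being unit quaternions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
pose = F.normalize(torch.randn(2, 21, 4), dim=-1)  # valid unit quaternions

wrong = F.normalize(pose, dim=1)    # normalizes across joints: breaks unit norm
right = F.normalize(pose, dim=-1)   # per-quaternion: leaves valid input unchanged
```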
Thank you very much! That was exactly the problem.
I tested on the noisy poses and rendered them; it works well now.
Hi, thank you for your work!
How do you render and draw images? Is there any script available?
> Hi, thank you for your work! How do you render and draw images? Is there any script available?
You can use this pytorch3d based renderer to create results: https://github.com/garvita-tiwari/PoseNDF/blob/version2/experiments/exp_utils.py#L34
Check the usage here: https://github.com/garvita-tiwari/PoseNDF/blob/version2/experiments/sample_poses.py#L56