Xinyu-Yi/PIP

some questions about process_amass in preprocess.py

huzijun1996 opened this issue · 12 comments

Hi,
I preprocessed ACCAD/Male2General_c3d/A6- Box lift_poses.npz from the AMASS dataset following your method. When I first load motions from the DIP-IMU dataset in Unity and then load motions from the AMASS dataset, the AMASS motion appears flipped. The first picture shows the data imported into Unity directly, the second shows the preprocessed data imported into Unity, and the third is the official render provided by AMASS. Is this what is meant by "align AMASS global frame with DIP"? Is this preprocessing result correct? (The fourth picture shows s_10/01_a.pkl from the DIP dataset imported directly into Unity.)
If the DIP and AMASS datasets are not in the same global frame, I am afraid it will affect the training and testing of the neural network.

Figure 1: before preprocessing

Figure 2: after preprocessing

Figure 3: official AMASS render

Figure 4: DIP-IMU

Yes, you need to "align AMASS global frame with DIP". If I remember correctly, in the preprocessing script I left-multiplied the AMASS global rotation/translation by a matrix whose entries are all 0 or ±1. That matrix rotates AMASS into a y-up coordinate frame.
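
For reference, here is a minimal sketch of that kind of alignment, written with NumPy/SciPy rather than the exact code from preprocess.py; the function name `align_amass_to_dip` and the array layouts are assumptions for illustration:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# A rotation with entries 0 or ±1 (a 90-degree rotation about the x-axis)
# that maps AMASS's z-up global frame onto a y-up frame like DIP's.
R_zup_to_yup = np.array([[1., 0., 0.],
                         [0., 0., 1.],
                         [0., -1., 0.]])

def align_amass_to_dip(poses, trans):
    """poses: (N, J*3) axis-angle with the root in the first 3 values,
    trans: (N, 3) root translations, both as stored in AMASS .npz files."""
    root = Rotation.from_rotvec(poses[:, :3]).as_matrix()       # (N, 3, 3)
    poses = poses.copy()
    poses[:, :3] = Rotation.from_matrix(R_zup_to_yup @ root).as_rotvec()
    trans = trans @ R_zup_to_yup.T                               # rotate translations
    return poses, trans
```

Note that left-multiplying changes only the root orientation and translation; the local joint rotations stay untouched, which is why the global frame alignment does not alter the motion itself.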

Oh, it's in Transpose/preprocess.py at line 64. You can check the "Transpose" repository.

I'm sorry, PIP also has the code, at line 65 in preprocess.py.

I have processed the data according to your method. After importing both the processed AMASS data and the original DIP-IMU data into Unity3D, the pose from the DIP-IMU dataset faces me, while the pose from the AMASS dataset has its back to me. Does this mean the preprocessing is complete? (The pose in the DIP-IMU dataset is shown in Figure 4, and the pose in the processed AMASS dataset is shown in Figure 2.)

The pose given in the official AMASS render is shown in Figure 3.

If you have processed it correctly, the y-axis will be the up direction.
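
One quick sanity check, as a heuristic sketch (the helper `up_axis` is hypothetical, not part of PIP): over a normal standing or walking sequence, the axis with the largest mean offset of the root translation should be the vertical one, since the pelvis sits roughly 0.9 m above the ground.

```python
import numpy as np

def up_axis(trans):
    """Guess the up axis from root translations (N, 3): the vertical
    axis usually has the largest mean absolute offset, because the
    pelvis stays roughly 0.9 m above the ground while standing/walking."""
    return int(np.abs(trans.mean(axis=0)).argmax())  # 0 = x, 1 = y, 2 = z

# After correct preprocessing, up_axis(trans) should return 1 (y-up).
```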

Yes, following the method you provided, the y-axis now points upward. However, the model appears rotated about the y-axis and does not face me as in the official render. Is the preprocessing successful, or do I need further processing so that the model no longer has its back to me?

After preprocessing as you described, the video rendered in Unity3D is shown below (the model always has its back to me):

wrong0.mp4

I think that doesn't matter. There are many motions in which the subject walks and turns around, so you should keep the rotation about the y-axis.

But the official render faces me directly without rotation (as shown below). Can these two situations be considered the same? Or do I need to rotate the coordinate axes on top of your preprocessing, so that the mannequin turns from having its back to me to facing me?

A6__Box_lift.mp4

I see. It will not affect the training and testing. Maybe you can place the camera on the other side of the human in Unity to render the pose.
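
If one still prefers to flip the character for rendering only, a hypothetical sketch (not part of PIP's preprocessing; the helper `yaw_for_viewing` is made up for illustration) could apply a 180-degree yaw about the global y-axis. This must not be baked into the training or testing data, since the heading carries real motion information:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# 180-degree yaw about the global y-axis, for visualization only.
R_yaw = Rotation.from_euler('y', 180, degrees=True).as_matrix()

def yaw_for_viewing(poses, trans):
    """Rotate the root orientation and translation by 180 degrees about
    the y-axis so the character faces the camera. Rendering-only helper;
    do not apply this to data used for training or testing."""
    root = Rotation.from_rotvec(poses[:, :3]).as_matrix()   # (N, 3, 3)
    poses = poses.copy()
    poses[:, :3] = Rotation.from_matrix(R_yaw @ root).as_rotvec()
    return poses, trans @ R_yaw.T
```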