eth-ait/aitviewer

How do you display the difference in real time with different color levels?

wwwpkol opened this issue · 4 comments

Thanks for your great work! I estimate the pose with five different methods and want to visualize the size of the gap to the ground truth through a color change. I found that the v.run() function generates the entire animation directly, so to color the gap I would need to pass colors for every frame. But v.run() is a loop and I don't know how to add colors inside it. Do you have any solution?
The effect I want is real-time animation output, where vertices that differ strongly from the GT are highlighted with clearly distinct colors. Something like this:
Thanks for your help and work!


https://bcv-uniandes.github.io/bodiffusion-wp/ & https://zxz267.github.io/AvatarJLM/

Hi, yes, this can be done easily in aitviewer because it supports per-frame vertex colors. I adapted the script from #44 to fit your needs:

import os

import numpy as np

from aitviewer.configuration import CONFIG as C
from aitviewer.renderables.meshes import Meshes
from aitviewer.renderables.smpl import SMPLSequence
from aitviewer.viewer import Viewer

if __name__ == "__main__":
    # Load a random AMASS sequence to get some SMPL data.
    c = (149 / 255, 85 / 255, 149 / 255, 0.5)
    seq_amass = SMPLSequence.from_amass(
        npz_data_path=os.path.join(C.datasets.amass, "ACCAD/Female1Running_c3d/C2 - Run to stand_poses.npz"),
        fps_out=60.0,
        color=c,
        name="AMASS Running",
        show_joint_angles=True,
    )

    # Extract two mesh sequences, let's pretend vertices1 is the ground-truth.
    n_frames = seq_amass.n_frames//2
    vertices1 = seq_amass.mesh_seq.vertices[:n_frames]
    vertices2 = seq_amass.mesh_seq.vertices[n_frames:]

    vertices1 = vertices1 - np.mean(vertices1, axis=1, keepdims=True)
    vertices2 = vertices2 - np.mean(vertices2, axis=1, keepdims=True)

    # Swap z and y because for AMASS z is up, but in the viewer y is up. This is usually not required if you don't use AMASS data.
    vertices1[..., [1, 2]] = vertices1[..., [2, 1]]
    vertices2[..., [1, 2]] = vertices2[..., [2, 1]]
    faces = seq_amass.mesh_seq.faces
    faces[:, [1, 2]] = faces[:, [2, 1]]

    # Pick a color map of your liking https://matplotlib.org/stable/users/explain/colors/colormaps.html
    from matplotlib.pyplot import cm

    color_map = cm.get_cmap('cool', 256)
    color_map_output = color_map(np.linalg.norm(vertices2 - vertices1, axis=-1))

    # Change ambient and diffuse coefficients because the default looks a bit too shiny with the 'cool' colormap.
    from aitviewer.scene.material import Material
    material = Material(ambient=0.3, diffuse=0.3)
    material_gt = Material(ambient=0.3, diffuse=0.3, color=color_map(0))

    # Create mesh objects and display them in the viewer.
    mesh1 = Meshes(vertices=vertices1, faces=faces, name="ground-truth", material=material_gt)
    mesh2 = Meshes(vertices=vertices2, faces=faces, vertex_colors=color_map_output, name="prediction",
                   position=np.array([-1.0, 0.0, -1.0]), material=material)

    v = Viewer()
    v.scene.add(mesh1, mesh2)
    v.run()

This produces the following result (the left sequence is treated as the ground truth here).
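One note on the colormap input: `np.linalg.norm` returns distances in meters, and a matplotlib colormap clips its input to [0, 1], so small errors may all land near the bottom of the colormap. A minimal sketch that normalizes the error before mapping, so the colormap's full range is used (the helper name `error_to_colors` is mine, not part of aitviewer):

```python
import numpy as np

def error_to_colors(gt, pred, color_map, max_err=None):
    """Map per-vertex L2 error between two (F, V, 3) vertex arrays
    to (F, V, 4) RGBA colors, normalized so the full colormap range is used."""
    err = np.linalg.norm(pred - gt, axis=-1)  # (F, V) distances in meters
    if max_err is None:
        max_err = err.max() if err.max() > 0 else 1.0
    return color_map(np.clip(err / max_err, 0.0, 1.0))

# Toy grayscale "colormap" for demonstration; in the script above you would
# pass color_map = cm.get_cmap('cool', 256) instead.
gray = lambda t: np.stack([t, t, t, np.ones_like(t)], axis=-1)
gt = np.zeros((2, 4, 3))
pred = np.full((2, 4, 3), 0.1)
colors = error_to_colors(gt, pred, gray)  # shape (2, 4, 4)
```

The resulting array can be passed as `vertex_colors` to `Meshes` exactly as in the script above.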


Thank you for your enthusiastic and detailed answer.

very cool