kijai/ComfyUI-LivePortraitKJ

Driving still image with video and source_video_smoothed

Quasimondo opened this issue · 2 comments

Not sure if I am using it wrong here, but I do not seem to be able to use source_video_smoothed mode when the source is a single still image rather than a video. I can use "single_frame", but that results in wiggling and unnatural scale motions.

I figured out that it works if I just duplicate the still image into a batch of multiple frames, but that seems a bit unnecessary. So I wonder if this is still something on the to-do list or if there is a reason behind it?
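For reference, the workaround looks roughly like the sketch below. This is only an illustration, assuming the source is a standard ComfyUI IMAGE tensor of shape [1, H, W, C]; the helper name and the frame count are my own choices, not anything the node defines.

import torch

def expand_still_to_batch(image: torch.Tensor, num_frames: int = 2) -> torch.Tensor:
    # image: ComfyUI IMAGE tensor of shape [1, H, W, C].
    # Repeating it along the batch dimension hands the node a "video"
    # of identical frames, which lets source_video_smoothed run.
    return image.repeat(num_frames, 1, 1, 1)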

Oh and BTW - I had to slightly change the smooth() method so that it does not throw an error when running on a single frame:

if x_d_stacked.shape[0] == 1:
    # If there's only one observation, return it as is
    smoothed_state_means = x_d_stacked
else:
    kf = KalmanFilter(
        initial_state_mean=x_d_stacked[0],
        n_dim_obs=x_d_stacked.shape[1],
        transition_covariance=process_variance * np.eye(x_d_stacked.shape[1]),
        observation_covariance=observation_variance * np.eye(x_d_stacked.shape[1])
    )
    smoothed_state_means, _ = kf.smooth(x_d_stacked)
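In context, a self-contained version of the guarded helper could look roughly like this. It is only a sketch: the function name, the (x_d_lst, process_variance, observation_variance) signature, and the stacking step are assumptions about the node's code rather than a copy of it; only the single-frame guard and the pykalman call reflect the change above.

import numpy as np
from pykalman import KalmanFilter

def smooth(x_d_lst, process_variance, observation_variance):
    # Stack per-frame parameters into a 2D array of shape [num_frames, dim]
    # (assumed layout; the actual node may flatten its inputs differently).
    x_d_stacked = np.vstack([np.asarray(x).ravel() for x in x_d_lst])

    if x_d_stacked.shape[0] == 1:
        # A single observation cannot be Kalman-smoothed, so return it unchanged.
        return x_d_stacked

    kf = KalmanFilter(
        initial_state_mean=x_d_stacked[0],
        n_dim_obs=x_d_stacked.shape[1],
        transition_covariance=process_variance * np.eye(x_d_stacked.shape[1]),
        observation_covariance=observation_variance * np.eye(x_d_stacked.shape[1]),
    )
    smoothed_state_means, _ = kf.smooth(x_d_stacked)
    return smoothed_state_means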

Ah well - given that it is called "source" video smoothed, I guess I should have known that it does not work with a single frame...

Nevertheless, I am trying to understand why the results for still images look cleaner (less jumpy) when using that mode.


Yeah, it does some calculations from the source frames and is intended for video2video use; I haven't really tried to use it with anything else.