ashawkey/RAD-NeRF

Testing application without colab

iboyles opened this issue · 2 comments

I finally trained the model locally on a 5-minute video with eye contact, but the live ASR GUI has too much latency for me to test accuracy. How can I display audio alongside the video on my local machine, like in the Colab? The code chunk that plays them together on Colab does not transfer directly to plain Python:

#@title Display Video

import os
import glob
from IPython.display import HTML
from base64 import b64encode

def get_latest_file(path):
    dir_list = glob.glob(path)
    dir_list.sort(key=lambda x: os.path.getmtime(x))
    return dir_list[-1]

Video = get_latest_file(os.path.join('trial_ian', 'results', '*.mp4'))
Video_aud = Video.replace('.mp4', 'aud.mp4')

# concat audio

! ffmpeg -y -i {Video} -i data/{Aud} -c:v copy -c:a aac {Video_aud}

# display

def show_video(video_path, video_width=450):
    video_file = open(video_path, "r+b").read()
    video_url = f"data:video/mp4;base64,{b64encode(video_file).decode()}"
    return HTML(f"""<video width={video_width} controls><source src="{video_url}"></video>""")

show_video(Video_aud)
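The `! ffmpeg ...` line above is IPython shell magic and will not run in a plain Python script. Locally, you can build the same command and hand it to `subprocess.run` instead. A minimal sketch, assuming ffmpeg is on your PATH (the `mux_cmd` helper name and the example paths are illustrative, not from the repo):

```python
import subprocess

def mux_cmd(video, audio, out):
    """Build the ffmpeg argument list matching the Colab cell:
    copy the video stream as-is, re-encode the audio as AAC."""
    return ["ffmpeg", "-y", "-i", video, "-i", audio,
            "-c:v", "copy", "-c:a", "aac", out]

# Example usage (requires ffmpeg on PATH; substitute your own files):
# subprocess.run(mux_cmd("trial_ian/results/ngp_ep0030.mp4",
#                        "data/aud.wav",
#                        "trial_ian/results/ngp_ep0030_aud.mp4"),
#                check=True)
```

Since `-c:v copy` skips video re-encoding, this should finish in roughly the time it takes to read and write the file.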

from moviepy.editor import VideoFileClip, AudioFileClip

def merge_audio_and_video(video_path, audio_path, output_path):
    video_clip = VideoFileClip(video_path)
    audio_clip = AudioFileClip(audio_path)

    # Make sure audio duration matches the video duration
    if video_clip.duration < audio_clip.duration:
        audio_clip = audio_clip.subclip(0, video_clip.duration)

    # Set the audio of the video clip to the merged audio
    video_clip = video_clip.set_audio(audio_clip)

    # Write the merged video with the new audio to the output path
    video_clip.write_videofile(output_path, codec="libx264")

    # Close the clips to free up resources
    video_clip.close()
    audio_clip.close()

if __name__ == "__main__":
    video_path = "trial_ian2_torso/results/ngp_ep0030.mp4"
    audio_path = "data/audio5.wav"
    output_path = "trial_ian2_torso/results/ian_aud.mp4"

    merge_audio_and_video(video_path, audio_path, output_path)
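To actually view the merged file outside a notebook, one option is to hand it to the operating system's default player. A hedged cross-platform sketch (the `opener_cmd` and `play` helpers are hypothetical, not part of RAD-NeRF):

```python
import os
import platform
import subprocess

def opener_cmd(path, system=None):
    """Return the command that opens `path` with the OS default player,
    or None on Windows, where os.startfile is used instead."""
    system = system or platform.system()
    if system == "Windows":
        return None                  # handled via os.startfile
    if system == "Darwin":
        return ["open", path]        # macOS
    return ["xdg-open", path]        # most Linux desktops

def play(path):
    cmd = opener_cmd(path)
    if cmd is None:
        os.startfile(path)
    else:
        subprocess.run(cmd, check=True)

# Example usage:
# play("trial_ian2_torso/results/ian_aud.mp4")
```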

JK, ChatGPT wrote this script for me. It merges them in Python without invoking the ffmpeg CLI directly, and may even be faster too.

I am facing errors when running the model in Colab, can you help?
Processing ./freqencoder
Preparing metadata (setup.py) ... done
Building wheels for collected packages: freqencoder
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for freqencoder (setup.py) ... error
ERROR: Failed building wheel for freqencoder
Running setup.py clean for freqencoder
Failed to build freqencoder
ERROR: Could not build wheels for freqencoder, which is required to install pyproject.toml-based projects
Processing ./shencoder
Preparing metadata (setup.py) ... done
Building wheels for collected packages: shencoder