This repository contains the official implementation of the paper "Leveraging Language Models and Audio-Driven Dynamic Facial Motion Synthesis: A New Paradigm in AI-Driven Interview Training," published at AIED 2024.

Update: Facial motion synthesis is now optimized for near-real-time inference. A demo video can be found in the Faster-Inference folder within the Demo_Videos folder.

Getting Started

Installation

Clone the SadTalker repository, which provides the core component of our Avatar Generation module:

git clone https://github.com/OpenTalker/SadTalker
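
SadTalker has its own dependencies and pretrained checkpoints; follow the setup instructions in its README after cloning. As a rough sketch only (the requirements file and download script below are the names used in recent SadTalker versions and may differ in the version you check out):

cd SadTalker
pip install -r requirements.txt
bash scripts/download_models.sh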

Running the Avatar Generation Module

After installation, navigate to the repository's root directory and start the Avatar Generation module by executing:

python api.py --port=4500
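
This starts an HTTP service on the given port that the rest of the pipeline can call for talking-head video generation. Purely as an illustrative sketch, a client might interact with it along the following lines; the endpoint path, form-field names, and response handling shown here are assumptions, not the actual interface defined in api.py:

import requests

# Hypothetical client sketch: the "/generate" route, the form-field names, and
# the assumption that the server returns the rendered video directly are all
# illustrative; consult api.py for the real endpoint and payload.
AVATAR_API = "http://localhost:4500/generate"

with open("question.wav", "rb") as audio, open("avatar.png", "rb") as image:
    response = requests.post(
        AVATAR_API,
        files={"audio": audio, "image": image},  # assumed field names
        timeout=300,  # facial motion synthesis may take a while per request
    )

response.raise_for_status()
with open("talking_avatar.mp4", "wb") as out:
    out.write(response.content)  # assumes the rendered video bytes are returned in the body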

Launching the Application

To run the main application interface, execute the following command in the Mock_Trial_Demo directory:

python main.py

Demonstration

For practical demonstrations of the system, refer to the Demo_Videos folder, which contains several videos showcasing its capabilities.

If you find our code useful, please consider citing our research.