Subject [ICASSP 2024] Text-driven Talking Face Synthesis by Reprogramming Audio-driven Models (by Jeongsoo Choi) has been accepted to ICASSP 2024
Name Administrator
Date 2023-12-20
Title: Text-driven Talking Face Synthesis by Reprogramming Audio-driven Models

Authors: Jeongsoo Choi, Minsu Kim, Se Jin Park, and Yong Man Ro

In this paper, we present a method for reprogramming pre-trained audio-driven talking face synthesis models to operate in a text-driven manner. Consequently, we can easily generate face videos that articulate the provided textual sentences, eliminating the need to record speech for each inference, as audio-driven models require. To this end, we propose to embed the input text into the learned audio latent space of the pre-trained audio-driven model while preserving the face synthesis capability of the original model. Specifically, we devise a Text-to-Audio Embedding Module (TAEM) which maps a given text input into the audio latent space by modeling pronunciation and duration characteristics. Furthermore, to account for the speaker characteristics carried by audio while using text inputs, TAEM is designed to accept a visual speaker embedding. The visual speaker embedding is derived from a single target face image and improves the mapping of input text into the learned audio latent space by incorporating the speaker characteristics inherent in the audio. The main advantages of the proposed framework are that 1) it can be applied to diverse audio-driven talking face synthesis models and 2) it can generate talking face videos from either text or audio inputs with high flexibility.
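The announcement does not include code, but the abstract describes a concrete pipeline: encode text, model pronunciation and duration, condition on a visual speaker embedding, and project into the audio latent space of a frozen audio-driven model. Below is a minimal PyTorch sketch of what a TAEM-like module could look like under those constraints. It is not the authors' implementation: the FastSpeech-style length regulator for duration modeling, the precomputed face-derived speaker embedding, and all class names, dimensions, and layer choices are illustrative assumptions.

import torch
import torch.nn as nn

class TextToAudioEmbedding(nn.Module):
    """Hypothetical TAEM-style sketch: maps text tokens into the latent
    audio space of a frozen audio-driven talking face model, conditioned
    on a visual speaker embedding. All sizes are illustrative."""

    def __init__(self, vocab_size=80, dim=256, audio_latent_dim=512, n_layers=4):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Predicts how many audio frames each text token should span
        # (duration modeling, as the abstract describes).
        self.duration_predictor = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )
        # Injects speaker characteristics derived from a single face image;
        # assumes a dim-sized embedding from some external face encoder.
        self.speaker_proj = nn.Linear(dim, dim)
        self.to_audio_latent = nn.Linear(dim, audio_latent_dim)

    def forward(self, tokens, speaker_emb):
        x = self.token_emb(tokens)                        # (B, T_text, dim)
        x = x + self.speaker_proj(speaker_emb).unsqueeze(1)
        x = self.encoder(x)
        # Round per-token durations and repeat each token that many times:
        # a simple FastSpeech-style "length regulator" (an assumption here).
        dur = self.duration_predictor(x).squeeze(-1).clamp(min=1).round().long()
        frames = [xi.repeat_interleave(di, dim=0) for xi, di in zip(x, dur)]
        frames = nn.utils.rnn.pad_sequence(frames, batch_first=True)
        return self.to_audio_latent(frames)               # (B, T_audio, audio_latent_dim)

# Dummy usage: the resulting latents would be fed to the frozen
# audio-driven model in place of its audio encoder's output.
tokens = torch.randint(0, 80, (2, 12))   # hypothetical phoneme ids
spk = torch.randn(2, 256)                # hypothetical visual speaker embedding
latents = TextToAudioEmbedding()(tokens, spk)

The key design point reflected here is that the pre-trained audio-driven model stays untouched; only the text-to-latent mapping is learned, which is what lets the same face synthesis backbone accept either text or audio inputs.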


IMAGE AND VIDEO SYSTEMS LAB (IVY LAB), KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY (KAIST)