Notice

Subject [NeurIPS 2021] Lip to Speech Synthesis with Visual Context Attentional GAN (by Minsu Kim) has been accepted to NeurIPS 2021
Name IVY Lab. KAIST
Date 2021-09-29
Title: Lip to Speech Synthesis with Visual Context Attentional GAN

Authors: Minsu Kim, Joanna Hong, and Yong Man Ro

In this paper, we propose a novel lip-to-speech generative adversarial network, Visual Context Attentional GAN (VCA-GAN), which jointly models local and global lip movements during speech synthesis. Specifically, the proposed VCA-GAN synthesizes speech from local lip visual features by learning a viseme-to-phoneme mapping, while global visual context is embedded into the intermediate speech representation to refine the coarse speech representation in detail. To achieve this, a visual context attention module is proposed, which encodes global representations from the local visual features and provides the generator with the global visual context corresponding to the given coarse speech representation. In addition to the explicit modelling of local and global visual representations, a synchronization technique based on contrastive learning is introduced that guides the generator to synthesize speech in sync with the given input lip movements. Extensive experiments demonstrate that the proposed VCA-GAN outperforms existing state-of-the-art methods and can effectively synthesize speech in the multi-speaker setting, which has barely been handled in previous works.
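To illustrate the general idea of the visual context attention module, the sketch below shows coarse speech features attending over global visual context via scaled dot-product attention, with the attended context refining the speech representation through a residual connection. This is a minimal illustrative sketch, not the authors' actual implementation: the function name, feature dimensions, and residual refinement are assumptions for demonstration.

```python
import numpy as np

def visual_context_attention(speech_feats, visual_feats):
    """Hypothetical sketch: the coarse speech representation (queries)
    attends over global visual context vectors (keys/values) derived
    from local lip features, and the attended context refines the
    speech representation via a residual connection."""
    dim = speech_feats.shape[-1]
    # scaled dot-product attention scores: (T_speech, T_visual)
    scores = speech_feats @ visual_feats.T / np.sqrt(dim)
    # softmax over the visual (key) axis, with max-subtraction for stability
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    # attended global visual context for each speech frame: (T_speech, dim)
    context = attn @ visual_feats
    # residual refinement of the coarse speech representation
    return speech_feats + context

# toy example: 50 speech frames, 25 video frames, feature dimension 128
rng = np.random.default_rng(0)
speech = rng.standard_normal((50, 128))
visual = rng.standard_normal((25, 128))
refined = visual_context_attention(speech, visual)
print(refined.shape)  # (50, 128)
```

In a trained model the queries, keys, and values would be learned linear projections rather than the raw features used here; the sketch only conveys how a global visual summary can be injected into each intermediate speech frame.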
