Notice

Subject [CVPR 2021] Video Prediction Recalling Long-term Motion Context via Memory Alignment Learning (by Sangmin Lee) has been accepted to CVPR 2021 (oral presentation)
Name IVY Lab. KAIST
Date 2021-03-05
Title: Video Prediction Recalling Long-term Motion Context via Memory Alignment Learning
Authors: Sangmin Lee, Hak Gu Kim, Dae Hwi Choi, Hyung-Il Kim, Yong Man Ro

Our work addresses the problem of capturing long-term motion context for predicting future frames. To predict the future precisely, it is necessary to capture which long-term motion context (e.g., walking or running) the input motion (e.g., a leg movement) belongs to. Dealing with long-term motion context raises two bottlenecks: (i) how to capture a long-term motion context that naturally matches input sequences with limited dynamics, and (ii) how to capture long-term motion context with high dimensionality (e.g., high motion complexity). To address these issues, we propose a novel motion context-aware video prediction framework. To solve bottleneck (i), we introduce a long-term motion context memory (LMC-Memory) with memory alignment learning. The proposed memory alignment learning enables the network to store long-term motion contexts in the memory and to match them with sequences containing only limited dynamics. As a result, the long-term context can be recalled from a limited input sequence. To resolve bottleneck (ii), we propose memory query decomposition, which stores local motion contexts (i.e., low-dimensional dynamics) and recalls a suitable local context for each local part of the input individually, thereby boosting the alignment effect of the memory. Experimental results show that the proposed method outperforms other sophisticated RNN-based methods, especially under long-term prediction conditions. Further, we validate the effectiveness of the proposed network designs through ablation studies and memory feature analysis.
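
For readers curious how such a memory recall with decomposed queries could look in practice, below is a minimal PyTorch-style sketch. The module name, slot count, feature dimension, and cosine-similarity addressing are all illustrative assumptions; they do not reproduce the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionContextMemory(nn.Module):
    # Hypothetical sketch of a long-term motion context memory;
    # names and sizes are assumptions, not the paper's code.
    def __init__(self, num_slots=100, feat_dim=128):
        super().__init__()
        # Learnable memory slots intended to hold long-term motion contexts.
        self.slots = nn.Parameter(torch.randn(num_slots, feat_dim))

    def recall(self, query):
        # query: (N, feat_dim) local motion queries.
        # Cosine-similarity addressing over the memory slots.
        attn = F.softmax(
            F.normalize(query, dim=-1) @ F.normalize(self.slots, dim=-1).T,
            dim=-1,
        )
        # Recalled long-term context as an attention-weighted sum of slots.
        return attn @ self.slots

    def forward(self, feat_map):
        # feat_map: (B, C, H, W) motion features from the input frames.
        B, C, H, W = feat_map.shape
        # Query decomposition: each spatial location issues its own
        # low-dimensional local query instead of one global query.
        local_queries = feat_map.permute(0, 2, 3, 1).reshape(B * H * W, C)
        recalled = self.recall(local_queries)
        # Reassemble the recalled local contexts into a context map.
        return recalled.reshape(B, H, W, C).permute(0, 3, 1, 2)

In a full model, the recalled context map would typically be fused (e.g., concatenated) with encoder features before an RNN-based predictor; the memory alignment learning scheme that actually trains the slots is described in the paper.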