Non-Adversarial Video Synthesis with Learned Priors
Authors
- Abhishek Aich
- Akash Gupta
- Rameswar Panda
- Rakib Hyder
- M. Salman Asif
- Amit K. Roy-Chowdhury
Published on
03/21/2020
Abstract
Most existing work on video synthesis focuses on generating videos through adversarial learning. Despite their success, these methods often require an input reference frame or fail to generate diverse videos from a given data distribution, with little consistency in the quality of the generated videos. In contrast, we focus on the problem of generating videos from latent noise vectors, without any reference input frames. To this end, we develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning. Optimizing the input latent space along with the network weights allows us to generate videos in a controlled manner: we can faithfully reproduce every video the model has seen during training, as well as synthesize new, unseen videos. Extensive experiments on three challenging and diverse datasets demonstrate that our approach generates videos of superior quality compared to existing state-of-the-art methods.
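The joint optimization described above (per-video latent codes, a recurrent network that unrolls each code over time, and a frame generator trained under a non-adversarial reconstruction loss) can be sketched as follows. This is a minimal illustration assuming PyTorch; the module names, layer sizes, and the plain L2 reconstruction loss are our own simplifications for exposition, not the paper's exact architecture or objective.

# Hypothetical sketch of non-adversarial video synthesis via joint
# latent/weight optimization. Sizes and modules are illustrative.
import torch
import torch.nn as nn

class FrameGenerator(nn.Module):
    """Decodes one per-frame latent code into an RGB frame (toy sizes)."""
    def __init__(self, latent_dim=128, frame_size=64):
        super().__init__()
        self.frame_size = frame_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * frame_size * frame_size), nn.Tanh(),
        )

    def forward(self, z):  # z: (batch, latent_dim)
        x = self.net(z)
        return x.view(-1, 3, self.frame_size, self.frame_size)

latent_dim, num_frames, num_videos = 128, 16, 100

# Learnable per-video latent vectors: the input latent space itself
# is optimized jointly with the network weights.
latents = nn.Parameter(torch.randn(num_videos, latent_dim))

# RNN unrolls each video-level latent into a sequence of per-frame latents.
rnn = nn.GRU(input_size=latent_dim, hidden_size=latent_dim, batch_first=True)
generator = FrameGenerator(latent_dim)

optimizer = torch.optim.Adam(
    [latents, *rnn.parameters(), *generator.parameters()], lr=1e-3
)

def training_step(video_batch, indices):
    """video_batch: (B, T, 3, 64, 64) real videos; indices: their latent ids."""
    z = latents[indices]                               # (B, latent_dim)
    z_seq = z.unsqueeze(1).expand(-1, num_frames, -1)  # repeat over time
    h_seq, _ = rnn(z_seq)                              # (B, T, latent_dim)
    frames = generator(h_seq.reshape(-1, latent_dim))
    frames = frames.view(-1, num_frames, 3, 64, 64)
    # Non-adversarial objective: plain reconstruction loss, no discriminator.
    loss = torch.mean((frames - video_batch) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, training videos are reproduced by their optimized latents, while new videos would be synthesized by sampling or interpolating fresh latent vectors and passing them through the same RNN and generator.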
Please cite our work using the BibTeX below.
@InProceedings{Aich_2020_CVPR,
  author    = {Aich, Abhishek and Gupta, Akash and Panda, Rameswar and Hyder, Rakib and Asif, M. Salman and Roy-Chowdhury, Amit K.},
  title     = {Non-Adversarial Video Synthesis With Learned Priors},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}