Diffusion Video Autoencoders:
Toward Temporally Consistent Face Video Editing
via Disentangled Video Encoding

CVPR 2023
1Korea Advanced Institute of Science and Technology (KAIST), South Korea
2NAVER AI Lab       3AITRICS, South Korea

Figure: Face video editing. Our editing method improves on the baseline (STIT) in terms of temporal consistency (left, “eyeglasses”) and robustness to unusual cases such as a hand-occluded face (right, “beard”).


Abstract

Inspired by the impressive performance of recent face image editing methods, several studies have naturally attempted to extend these methods to the face video editing task. One of the main challenges here is temporal consistency among edited frames, which is still unresolved. To this end, we propose a novel face video editing framework based on diffusion autoencoders that can successfully extract decomposed features of identity and motion from a given video, for the first time as a face video editing model. This modeling allows us to edit the video consistently by simply manipulating the temporally invariant feature in the desired direction. Another unique strength of our model is that, since it is based on diffusion models, it can satisfy both reconstruction and editing capabilities at the same time, and, unlike existing GAN-based methods, it is robust to corner cases in wild face videos (e.g., occluded faces).


Method Overview


Overview of our Diffusion Video Autoencoder


Our diffusion video autoencoder encodes each frame into a face-related feature $z_{\text{face}}$ and a noise map $x_T$ that contains the background information. The face-related feature $z_{\text{face}}$ consists of a representative identity feature, computed by averaging the identity features of all frames, and the corresponding frame's motion feature. Using $z_{\text{face}}$ as a condition, the forward process of the diffusion model then yields the noise map $x_T$, in which only the background information remains, since all face-related information is already encoded in the condition. To achieve a clean decomposition of identity, motion, and background information, we use pre-trained identity and landmark encoders to extract each feature without any additional training. To edit a video, we modify the representative identity feature, compute a new face-related feature $z_{\text{face}}^{\text{edit}}$, and run the reverse process conditioned on it.

We train our model with two objectives. The first is the standard DDPM loss, which learns the distribution of face video frames by predicting the noise added to each sample. The second is a regularization loss that encourages a clear decomposition between background and face information: when estimating the original image from noisy images generated with different noise samples, we minimize the difference between the estimates in the facial region.
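The following is a minimal PyTorch-style sketch of the encoding path and the two training objectives described above, not the released implementation. The encoders (id_encoder, landmark_encoder), the conditional noise predictor unet, the face-region mask face_mask, and the conditional deterministic noising call ddim_forward are hypothetical stand-ins.

import torch
import torch.nn.functional as F

@torch.no_grad()
def encode_video(frames, id_encoder, landmark_encoder, ddim_forward):
    # Representative identity: average the per-frame identity features.
    id_feats = torch.stack([id_encoder(f) for f in frames])      # (N, d_id)
    z_id_rep = id_feats.mean(dim=0)                               # (d_id,)

    motion_feats, noise_maps = [], []
    for f in frames:
        z_motion = landmark_encoder(f)                            # per-frame motion
        z_face = torch.cat([z_id_rep, z_motion], dim=-1)          # face-related condition
        # Deterministic forward (noising) run conditioned on z_face; since the
        # facial information is carried by the condition, x_T keeps the background.
        x_T = ddim_forward(f, z_face)
        motion_feats.append(z_motion)
        noise_maps.append(x_T)
    return z_id_rep, motion_feats, noise_maps

def training_losses(unet, x0, z_face, face_mask, alphas_cumprod, t):
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)

    # (1) Standard DDPM objective: predict the noise added to the clean frame.
    eps1 = torch.randn_like(x0)
    x_t1 = a_t.sqrt() * x0 + (1 - a_t).sqrt() * eps1
    loss_ddpm = F.mse_loss(unet(x_t1, t, z_face), eps1)

    # (2) Regularization: noise the same frame with a second noise sample,
    # estimate the clean frame from both, and make the face regions agree.
    eps2 = torch.randn_like(x0)
    x_t2 = a_t.sqrt() * x0 + (1 - a_t).sqrt() * eps2
    x0_hat1 = (x_t1 - (1 - a_t).sqrt() * unet(x_t1, t, z_face)) / a_t.sqrt()
    x0_hat2 = (x_t2 - (1 - a_t).sqrt() * unet(x_t2, t, z_face)) / a_t.sqrt()
    loss_reg = F.mse_loss(x0_hat1 * face_mask, x0_hat2 * face_mask)

    return loss_ddpm + loss_reg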


Comparison of Temporal Consistency for "beard"


Original

Latent Transformer

STIT

VideoEditGAN

Ours

We demonstrate that only our diffusion video autoencoder produces a temporally consistent result.



Additional Examples

+Beard

Original

STIT

Ours


+Eyeglasses

Original

STIT

Ours


+Mustache

Original

STIT

Ours

Our method shows temporally consistent results.


+Beard

Original

STIT

Ours

Our method also robustly reconstructs and edits unusual cases such as a hand-occluded face.


-Sideburns

Original

STIT

Ours

Moreover, all frames of a long video can be edited at once by modifying the single shared identity feature, as sketched below.
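A minimal sketch of this one-shot editing, assuming the per-frame motion features and background noise maps produced by the encoding step above; id_direction (the editing direction), scale, and ddim_reverse are hypothetical placeholders.

import torch

@torch.no_grad()
def edit_all_frames(z_id_rep, motion_feats, noise_maps,
                    id_direction, scale, ddim_reverse):
    # One edit of the shared identity feature propagates to the entire video,
    # because every frame is decoded from the same edited identity.
    z_id_edit = z_id_rep + scale * id_direction
    edited_frames = []
    for z_motion, x_T in zip(motion_feats, noise_maps):
        z_face_edit = torch.cat([z_id_edit, z_motion], dim=-1)
        edited_frames.append(ddim_reverse(x_T, z_face_edit))
    return edited_frames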



Inference Time Comparison

Original

STIT
12.0 sec/frame

Ours (T=1000)
62.4 sec/frame

Ours (T=100)
7.3 sec/frame

Ours (+Sampler)
2.9 sec/frame

Furthermore, our method can utilize ODE samplers to reduce inference time while maintaining comparable quality.
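As an illustration, here is a minimal deterministic DDIM-style reverse sampler on a shortened timestep schedule, which is one way such a speed-up can be realized; unet, z_face, and alphas_cumprod are assumed placeholders for the conditional noise predictor, the face-related condition, and the noise schedule, not the released code.

import torch

@torch.no_grad()
def ddim_reverse(unet, x_T, z_face, alphas_cumprod, num_steps=100):
    # Deterministic reverse process over num_steps timesteps instead of the
    # full T=1000, so decoding needs far fewer network evaluations.
    T = alphas_cumprod.shape[0]
    timesteps = torch.linspace(T - 1, 0, num_steps).long()
    x = x_T
    for i in range(num_steps - 1):
        t, t_prev = timesteps[i], timesteps[i + 1]
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
        t_batch = torch.full((x.shape[0],), int(t),
                             dtype=torch.long, device=x.device)
        eps = unet(x, t_batch, z_face)                            # predicted noise
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()        # predicted clean frame
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps    # step to t_prev
    return x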



BibTeX

If you find our work useful, please cite our paper:

@InProceedings{Kim_2023_CVPR,
      author={Kim, Gyeongman and Shim, Hajin and Kim, Hyunsu and Choi, Yunjey and Kim, Junho and Yang, Eunho},
      title={Diffusion Video Autoencoders: Toward Temporally Consistent Face Video Editing via Disentangled Video Encoding},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      month={June},
      year={2023},
      pages={6091-6100}
}