Existing face relighting methods often struggle with two problems: maintaining the local facial details of the subject and accurately removing and synthesizing shadows in the relit image, especially hard shadows. We propose a novel deep face relighting method that addresses both problems. Our method learns to predict the ratio (quotient) image between a source image and the target image with the desired lighting, allowing us to relight the image while maintaining the local facial details. During training, our model also learns to accurately modify shadows by using estimated shadow masks to emphasize the high-contrast shadow borders. Furthermore, we introduce a method that uses the shadow mask to estimate the ambient light intensity in an image, which lets us leverage multiple training datasets with different global lighting intensities. With quantitative and qualitative evaluations on the Multi-PIE and FFHQ datasets, we demonstrate that our proposed method faithfully maintains the local facial details of the subject and can accurately handle hard shadows while achieving state-of-the-art face relighting performance.
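The core idea of ratio-image relighting can be sketched in a few lines: the network predicts a per-pixel ratio in the luminance channel, and the relit image is obtained by scaling the source luminance while leaving chroma untouched, which is what preserves local facial detail. The sketch below is illustrative only (the function name, BT.601 luminance weights, and the pre-computed `ratio_luma` input are assumptions, not the paper's actual implementation):

```python
import numpy as np

def relight_with_ratio_image(source_rgb, ratio_luma):
    """Apply a ratio (quotient) image in the luminance channel.

    source_rgb: HxWx3 float array in [0, 1].
    ratio_luma: HxW per-pixel ratio of target to source luminance
                (in the paper this is predicted by the network).
    """
    # Luminance via BT.601 weights (an illustrative choice of color space).
    w = np.array([0.299, 0.587, 0.114])
    y = source_rgb @ w                         # HxW source luminance
    # Scale luminance by the ratio image and clip to the valid range.
    y_new = np.clip(y * ratio_luma, 0.0, 1.0)
    # Re-apply the per-pixel luminance scale to all three channels,
    # leaving chroma (and hence local texture) unchanged.
    scale = np.where(y > 1e-6, y_new / np.maximum(y, 1e-6), 1.0)
    return np.clip(source_rgb * scale[..., None], 0.0, 1.0)
```

A ratio image of all ones leaves the input unchanged, while values above one brighten and values below one darken each pixel multiplicatively.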

Overview

Figure 1: Given an input image and a target lighting, our model can produce a relit image with realistic hard cast shadows, such as those around the nose and eyes, which prior work struggles with. Images of SIPR [31] are provided by the authors.

Architecture

Figure 2: Overview of our proposed method. Given a source image Is and a target lighting t as input, our method estimates the source lighting p and the ratio image Rp in the luminance channel. During training, we place high emphasis on learning the correct ratio image at pixels near shadow borders in both the source and target images, helping the model add and remove shadows more accurately. We also estimate the ambient light intensity of the source image (as) and of the target image (at), and add them to the ground-truth source lighting s and target lighting t, respectively, to help our model adapt to images across different datasets with varying ambient light.
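The shadow-border emphasis described above amounts to a spatially weighted reconstruction loss: pixels where the estimated shadow mask transitions between lit and shadowed regions receive a larger weight. A minimal NumPy sketch, assuming a binary shadow mask and an L1 loss on the ratio image (the border detector, `w_border` weight, and function name are illustrative hyperparameters, not the paper's exact formulation):

```python
import numpy as np

def shadow_border_weighted_l1(pred_ratio, gt_ratio, shadow_mask, w_border=4.0):
    """L1 loss on the ratio image with extra weight near shadow borders.

    pred_ratio, gt_ratio: HxW predicted and ground-truth ratio images.
    shadow_mask: HxW binary mask (1 = shadowed, 0 = lit).
    w_border: weight applied at shadow-border pixels (illustrative value).
    """
    # A pixel is on the border if the mask changes relative to its
    # neighbor along either axis (forward finite differences).
    gy = np.abs(np.diff(shadow_mask, axis=0, prepend=shadow_mask[:1]))
    gx = np.abs(np.diff(shadow_mask, axis=1, prepend=shadow_mask[:, :1]))
    border = (gx + gy) > 0
    # Up-weight border pixels so high-contrast shadow edges dominate.
    weights = np.where(border, w_border, 1.0)
    return float(np.mean(weights * np.abs(pred_ratio - gt_ratio)))
```

In practice this would be one term in the training objective, computed on both the source and target shadow masks so the model learns to remove and synthesize hard shadows.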

FFHQ Qualitative

Figure 3: Relighting on the FFHQ Dataset Using Target Lighting. Images for SIPR [31] are provided by the authors. Across all lighting conditions, our model produces better cast shadows than prior work, especially around the nose and eyes.

FFHQ Video

Face Relighting Source Code

The source code can be downloaded here.

Publications

  • Towards High Fidelity Face Relighting with Realistic Shadows
    Andrew Hou, Ze Zhang, Michel Sarkis, Ning Bi, Yiying Tong, Xiaoming Liu
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), Nashville, TN, Jun. 2021
    Bibtex | PDF | arXiv | Supplemental | Code | Video
  • @inproceedings{ towards-high-fidelity-face-relighting-with-realistic-shadows,
      author = { Andrew Hou and Ze Zhang and Michel Sarkis and Ning Bi and Yiying Tong and Xiaoming Liu },
      title = { Towards High Fidelity Face Relighting with Realistic Shadows },
      booktitle = { Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition },
      address = { Nashville, TN },
      month = { June },
      year = { 2021 },
    }