Existing face relighting methods often struggle with two problems: maintaining the local facial details of the subject and accurately removing and synthesizing shadows in the relit image, especially hard shadows. We propose a novel deep face relighting method that addresses both problems. Our method learns to predict the ratio (quotient) image between a source image and a target image with the desired lighting, allowing us to relight the image while maintaining the local facial details. During training, our model also learns to accurately modify shadows by using estimated shadow masks to emphasize the high-contrast shadow borders. Furthermore, we introduce a method that uses the shadow mask to estimate the ambient light intensity in an image, which allows us to leverage multiple datasets with different global lighting intensities during training. With quantitative and qualitative evaluations on the Multi-PIE and FFHQ datasets, we demonstrate that our proposed method faithfully maintains the local facial details of the subject and accurately handles hard shadows while achieving state-of-the-art face relighting performance.
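For reference, the quotient-image relation underlying this formulation can be summarized as a minimal sketch (the symbols L_s and L_t for the source and target luminance are our own notation here; the ratio image and the luminance-channel formulation are as described in Figure 2):

\[
R \;=\; \frac{L_t}{L_s} \quad \text{(element-wise)}, \qquad \hat{L}_t \;=\; R \odot L_s .
\]

Because relighting is expressed as a multiplicative correction of the source luminance, the high-frequency local detail in L_s is carried through the element-wise product rather than re-synthesized from scratch.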

Figure 1: Given an input image and a target lighting, our model can produce a relit image with realistic hard cast shadows, such as those around the nose and eyes, which prior work struggles with. Images for SIPR [31] are provided by the authors.

Figure 2: Overview of our proposed method. Given a source image Is and a target lighting ℓt as input, our method estimates the source lighting ℓp and the ratio image Rp in the luminance channel. During training, we place strong emphasis on learning the correct ratio image at pixels near shadow borders in both the source and target images, which helps the model add and remove shadows more accurately. We also estimate the ambient light intensity of the source image (as) and of the target image (at) and add them to the ground-truth source lighting ℓs and target lighting ℓt, respectively, to help our model adapt to images from different datasets with varying ambient light.
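To make the shadow-border emphasis concrete, the following is a minimal sketch of one way such a weighting could be realized as a per-pixel weighted L1 loss on the ratio image. The function names (shadow_border_weight, weighted_ratio_loss), the morphological border extraction, the border_weight and kernel_size values, and the element-wise maximum used to combine the source and target weight maps are all illustrative assumptions rather than the exact formulation used by our method.

import torch
import torch.nn.functional as F

def shadow_border_weight(shadow_mask, border_weight=10.0, kernel_size=5):
    # Per-pixel weight map that is large near shadow-mask borders.
    # shadow_mask: (B, 1, H, W) binary mask in [0, 1].
    # The border is approximated as dilation minus erosion, with the
    # morphology implemented via max-pooling (an illustrative choice).
    pad = kernel_size // 2
    dilated = F.max_pool2d(shadow_mask, kernel_size, stride=1, padding=pad)
    eroded = -F.max_pool2d(-shadow_mask, kernel_size, stride=1, padding=pad)
    border = (dilated - eroded).clamp(0, 1)        # ~1 near shadow borders, 0 elsewhere
    return 1.0 + (border_weight - 1.0) * border    # >= 1 everywhere, large at borders

def weighted_ratio_loss(ratio_pred, ratio_gt, shadow_mask_src, shadow_mask_tgt):
    # L1 loss on the luminance ratio image, emphasizing pixels near the
    # shadow borders of both the source and target images.
    w = torch.maximum(shadow_border_weight(shadow_mask_src),
                      shadow_border_weight(shadow_mask_tgt))
    return (w * (ratio_pred - ratio_gt).abs()).mean()

In this sketch, pixels far from any shadow border contribute to the loss with unit weight, while pixels near a border in either the source or target shadow mask contribute border_weight times as much, biasing the network toward reproducing hard shadow boundaries correctly.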

Figure 3: Relighting on the FFHQ Dataset Using Target Lighting. Images for SIPR [31] are provided by the authors. Across all lighting conditions, our model produces better cast shadows than prior work, especially around the nose and eyes.