Google AI Releases Portrait Light: Enhancing Portrait Lighting With Machine Learning

Google recently released Portrait Light, a new post-capture feature for the Pixel Camera and Google Photos apps that emulates professional-looking portrait lighting. It adds a simulated directional light source to portraits, with the direction and intensity chosen to complement the original photograph’s lighting.

Fig-1: Example image with and without Portrait Light applied. Note how Portrait Light contours the face, adding dimensionality, volume, and visual interest. Source:

In the Pixel Camera, Portrait Light is automatically applied post-capture to default-mode images that include one person or a small group. In Portrait Mode photographs, it adds more dramatic lighting to accompany the shallow depth-of-field effect already applied, producing a studio-quality look. Pixel users who shoot in Portrait Mode can also manually reposition the light and adjust its brightness in Google Photos to suit their preference. For those running Google Photos on a Pixel 2 or newer, this relighting capability is also available for many pre-existing portrait photographs.

Fig-2: Pixel users can adjust a portrait’s lighting as they like in Google Photos, after capture. Source:

The technology behind Portrait Light

Portrait Light is inspired by the off-camera lights portrait photographers use. It models a repositionable light source that can be added into the scene, with its initial direction and intensity set to complement the existing lighting in the photo. It relies on novel machine learning models, each trained on a diverse dataset of photographs captured in the Light Stage computational illumination system. Together, these models provide two new capabilities:

  • Automatic directional light placement: The algorithm places a synthetic directional light in the portrait’s scene, mimicking where a photographer would have placed an off-camera light source in the real world.
  • Synthetic post-capture relighting: Given a lighting direction and a portrait, synthetic light is added in a way that looks realistic and natural.

Automatic Light Placement

First, a model is trained to estimate a high-dynamic-range, omnidirectional illumination profile for an input portrait scene. This lighting estimation model infers the direction, color, and relative intensity of all light sources in the scene from all directions, using the face itself as a light probe. It uses MediaPipe Face Mesh to estimate the head pose of the portrait’s subject.
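As a rough illustration of what "light placement" from an illumination profile involves (this is not Google's actual model, which learns the mapping end-to-end), one can extract the dominant light direction from an estimated equirectangular environment map. The map shape and the toy environment below are assumptions for the sketch:

```python
import numpy as np

def dominant_light_direction(env_map):
    """Return the unit direction of the brightest region of an
    equirectangular HDR environment map (H x W luminance values).

    Simplified stand-in for the learned lighting estimation described
    above, which infers a full illumination profile from the face.
    """
    h, w = env_map.shape
    # Weight rows by sin(theta) to correct for equirectangular
    # area distortion near the poles.
    theta = (np.arange(h) + 0.5) / h * np.pi           # polar angle, 0..pi
    weighted = env_map * np.sin(theta)[:, None]
    iy, ix = np.unravel_index(np.argmax(weighted), weighted.shape)
    phi = (ix + 0.5) / w * 2.0 * np.pi                 # azimuth, 0..2pi
    t = theta[iy]
    # Spherical -> Cartesian (y points up).
    return np.array([np.sin(t) * np.cos(phi),
                     np.cos(t),
                     np.sin(t) * np.sin(phi)])

# Toy environment: a single bright patch near the zenith.
env = np.zeros((32, 64))
env[1, 10] = 100.0
d = dominant_light_direction(env)
```

A real system would then place the synthetic key light relative to this estimate (e.g., offset from the dominant source) rather than simply reusing it.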

Data-Driven Portrait Relighting

Given the desired lighting direction and the portrait, a second model is trained to add illumination from a directional light source to the original photograph. Training data was generated by photographing seventy different people in the Light Stage computational illumination system, a spherical lighting rig with 64 cameras at different viewpoints and 331 individually programmable LED light sources. Each individual was photographed one-light-at-a-time (OLAT), yielding their reflectance field: their appearance as illuminated by each discrete section of the spherical environment. The reflectance field encodes the unique color and light-reflecting properties of the subject’s skin, hair, and clothing. Building such a dataset helped the model perform well across diverse lighting environments and individuals.

Left: Example images from an individual’s photographed reflectance field, their appearance in the Light Stage as illuminated one-light-at-a-time. Right: The images can be added together to form the appearance of the subject in any novel lighting environment. Source:
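Light transport is linear, so the caption’s claim that OLAT images "can be added together" is literal: a subject’s appearance under any environment is a weighted sum of their reflectance-field images. A minimal numpy sketch (the array shapes and toy values are illustrative assumptions):

```python
import numpy as np

def relight(olat_images, env_weights):
    """Image-based relighting from a reflectance field.

    olat_images: (N, H, W, 3) array, one image per Light Stage LED
                 (N = 331 in the rig described above).
    env_weights: (N, 3) per-light RGB intensities sampled from the
                 target lighting environment.

    Sums weight[n, c] * olat[n, h, w, c] over the N lights.
    """
    return np.einsum('nc,nhwc->hwc', env_weights, olat_images)

# Toy example: two lights, flat grey OLAT images.
olat = np.ones((2, 4, 4, 3))
weights = np.array([[0.5, 0.5, 0.5],
                    [0.25, 0.25, 0.25]])
relit = relight(olat, weights)
```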

The procedure of training the model further includes:

  • Learning detail-preserving relighting using the low-resolution quotient image.
  • Supervising relighting with “Geometry Estimation.”

The pipeline:

  • Given an input portrait, per-pixel surface normals are estimated.
  • These normals are then used to compute a light visibility map. 
  • The model then produces a low-resolution quotient image. 
  • The quotient image is upsampled and applied as a multiplier to the original image.
  • It thus creates the original portrait with an additional light source added synthetically into the scene.
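The five pipeline steps above can be sketched end-to-end. The quotient model below is a placeholder (a trained network in the real pipeline), and the Lambertian visibility term, 4x resolution factor, and nearest-neighbour upsampling are simplifying assumptions for illustration:

```python
import numpy as np

def add_synthetic_light(image, normals, light_dir, quotient_model):
    """Sketch of the quotient-image relighting pipeline.

    image:     (H, W, 3) original portrait, linear RGB in [0, 1].
    normals:   (H, W, 3) per-pixel unit surface normals (step 1;
               estimated by a learned model in the real pipeline).
    light_dir: (3,) unit direction of the synthetic light.
    quotient_model: callable (low-res image, low-res visibility)
               -> low-res quotient image; learned in practice.
    """
    # Step 2: light visibility map (Lambertian n.l, clamped at zero
    # -- an assumption; the real map may be computed differently).
    visibility = np.clip(normals @ light_dir, 0.0, None)
    # Step 3: predict a low-resolution quotient image.
    lo_img = image[::4, ::4]
    lo_vis = visibility[::4, ::4]
    quotient = quotient_model(lo_img, lo_vis)
    # Step 4: upsample (nearest-neighbour here for simplicity).
    quotient_full = np.repeat(np.repeat(quotient, 4, axis=0), 4, axis=1)
    # Step 5: apply as a multiplier to the original image.
    return np.clip(image * quotient_full, 0.0, 1.0)

# Toy run: flat grey portrait, normals facing the camera,
# a stand-in quotient model that brightens lit regions by 50%.
image = np.full((8, 8, 3), 0.5)
normals = np.zeros((8, 8, 3))
normals[..., 2] = 1.0
toy_quotient = lambda lo_img, lo_vis: 1.0 + 0.5 * lo_vis[..., None]
relit = add_synthetic_light(image, normals,
                            np.array([0.0, 0.0, 1.0]), toy_quotient)
```

Working with a quotient (multiplier) rather than predicting pixels directly is what lets the network run at low resolution while the full-resolution detail of the original photo is preserved.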

This is how the Portrait Light effect is produced, and it is seen as a first step toward ML-powered creative post-capture lighting controls for mobile cameras.


Shilpi is a Contributor to She is currently pursuing the third year of a B.Tech in computer science and engineering at IIT Bhubaneswar. She has a keen interest in exploring the latest technologies, and she likes to write about different domains and learn about their real-life applications.
