
Google Researchers Introduce LightLab: A Diffusion-Based AI Method for Physically Plausible, Fine-Grained Light Control in Single Images


Manipulating lighting conditions in an image after capture is a difficult problem. Traditional methods rely on 3D graphics techniques, reconstructing scene geometry and material properties from multiple images and then simulating new lighting with physical illumination models. While these techniques offer precise control over light sources, recovering an accurate 3D model from a single image is ill-posed, and the resulting relighting is frequently unsatisfactory.

Modern diffusion-based image editing methods have emerged as an alternative, using strong statistical priors learned from data to bypass explicit physical modeling. However, these approaches typically lack precise parametric control because of their inherent stochasticity and reliance on textual conditioning.

Generative image editing methods have been employed for various relighting tasks, with varying degrees of success. For instance, portrait relighting typically leverages light stage data to supervise generative models, while object relighting may involve fine-tuning diffusion models on synthetic datasets built from environment maps. Some methods assume a single dominant light source, such as the sun in outdoor scenes, but indoor environments present more complex multi-illumination challenges.

Researchers from Google, Tel Aviv University, Reichman University, and the Hebrew University of Jerusalem have introduced LightLab, a diffusion-based method that provides explicit parametric control over light sources in images. The approach targets two fundamental properties of a light source: its intensity and its color.
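To make “explicit parametric control” concrete: an edit in this setting is specified by a few numbers rather than a text prompt. The sketch below is hypothetical (the function and its packing are our assumptions, not the paper’s interface):

```python
import numpy as np

def light_edit_condition(intensity: float, color_rgb: tuple) -> np.ndarray:
    """Pack a target light edit into a conditioning vector:
    one scalar for relative intensity plus an RGB color.
    (Hypothetical packing; not the authors' actual interface.)"""
    return np.array([intensity, *color_rgb], dtype=np.float32)

# Dim the target lamp to 40% power and shift it toward a warm tone.
cond = light_edit_condition(0.4, (1.0, 0.8, 0.6))
```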

LightLab also enables users to manipulate ambient illumination and tone mapping effects, adding up to a suite of editing tools for transforming an image’s overall appearance through its illumination. Its effectiveness has been demonstrated on indoor images with visible light sources, and preliminary results show promise for outdoor scenes as well.
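Tone mapping here means compressing high-dynamic-range, linear-space pixel values into a displayable range; intensity edits are most naturally done before this step. Below is a minimal sketch using the classic Reinhard operator, chosen purely for illustration (the paper does not prescribe this particular curve):

```python
import numpy as np

def reinhard_tonemap(linear_rgb: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Compress linear HDR values into [0, 1] for display:
    Reinhard curve x / (1 + x), then gamma encoding."""
    compressed = linear_rgb / (1.0 + linear_rgb)
    return np.clip(compressed, 0.0, 1.0) ** (1.0 / gamma)

# Brighten a light in linear space, then re-tone-map: the overall look
# changes without hard-clipping the highlights.
hdr = np.random.rand(4, 4, 3) * 3.0   # toy linear-space image
ldr = reinhard_tonemap(hdr)
```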

LightLab trains a specialized diffusion model on pairs of images that implicitly capture a controlled change of a light source in image space. The dataset comprises 600 raw image pairs captured with mobile devices, with consistent exposure ensured through auto-exposure settings and post-capture calibration. A larger set of synthetic images was additionally rendered from 20 artist-created indoor 3D scenes using physically-based rendering in Blender, augmenting the dataset with varied camera views and light source parameters.
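The value of such pairs comes from light linearity, the principle the conclusion below refers to: in linear (raw) space, an image with a light switched on equals the same image with it off plus that light’s own contribution. Subtracting the pair isolates that contribution, which can then be rescaled or recolored to synthesize new target conditions. A minimal sketch, with array names that are our own:

```python
import numpy as np

def relight(img_on: np.ndarray, img_off: np.ndarray,
            intensity: float, color: np.ndarray) -> np.ndarray:
    """Synthesize a new lighting condition from a calibrated raw pair.
    Works in linear space, where light contributions add."""
    light_only = img_on - img_off          # isolate the target light's contribution
    return img_off + intensity * color * light_only  # rescale and recolor it

# img_on / img_off: HxWx3 linear-space captures of the same scene.
img_off = np.random.rand(4, 4, 3)
img_on = img_off + np.random.rand(4, 4, 3)
# Halve the light's power and tint it warm.
edited = relight(img_on, img_off, 0.5, np.array([1.0, 0.9, 0.7]))
```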

Comparative analyses indicate that mixing synthetic data with real captures yields the best results across settings; adding the synthetic data to the real captures brings a modest 2.2% gain in PSNR. Qualitative comparisons show LightLab outperforming existing methods such as OmniGen, RGB↔X, ScribbleLight, and IC-Light, which often introduce unwanted illumination changes, color distortion, or geometric inconsistencies. LightLab, in contrast, maintains faithful control over the target light source while producing physically plausible lighting effects throughout the scene.
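For context, PSNR (peak signal-to-noise ratio) is the log-scale reconstruction metric behind that 2.2% figure; a standard implementation for images normalized to [0, 1] looks like this:

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the target."""
    mse = float(np.mean((pred - target) ** 2))
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```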

In conclusion, LightLab represents a significant advance in diffusion-based light source manipulation for images. By leveraging light linearity and synthetic 3D data, the researchers created high-quality paired images that effectively model complex illumination changes. Limitations remain, however, particularly dataset bias toward certain light source types. Future work could improve performance by incorporating unpaired fine-tuning methods.

For further details, check out the Paper and Project Page. All credit for this research goes to the researchers involved in this project.
