Controlling light sources in an image is a fundamental aspect of photography that affects the subject, depth separation, colors, and mood of the image. Existing relighting methods either rely on multiple input views to perform inverse rendering at inference time, or fail to provide explicit control.
We present a diffusion-based method for fine-grained, parametric control over light sources from a single image. Our method can change the intensity and color of visible light sources, adjust the intensity of ambient lighting, and insert virtual light sources into the scene. We propose using the diffusion model's photorealistic prior to implicitly simulate complex light effects such as indirect illumination, shadows, and reflections, directly in image space, using paired examples depicting controlled illumination changes.
We generate such examples by combining a small set of raw photograph pairs with a large set of synthetically rendered images. By leveraging the linearity of light, we disentangle a target light from the scene's ambient lighting, and then generate a parametric sequence of images with varying light intensities and colors.
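Because light transport is linear, the target light's contribution can be isolated as the difference between a photograph with that light on and one with it off, then rescaled and recolored to synthesize the parametric sequence. A minimal sketch of this idea (function and variable names are illustrative, not the paper's code; all images are assumed to be in linear color space):

```python
import numpy as np

def isolate_target_light(img_on, img_off):
    """By linearity of light, the target light's isolated contribution is
    the difference between a photo with the light on and one with it off.
    Clamp at zero to suppress negative values from noise."""
    return np.maximum(img_on - img_off, 0.0)

def relight(ambient, target, intensity, color):
    """Recombine the ambient image with the isolated target contribution,
    scaled by an intensity scalar and an RGB color gain."""
    return ambient + intensity * np.asarray(color) * target
```

A parametric sequence then follows by sweeping the parameters, e.g. `[relight(ambient, target, a, (1.0, 1.0, 1.0)) for a in np.linspace(0.0, 2.0, 5)]`.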
Our method generates physically plausible lighting edits across diverse settings, and outperforms existing methods both quantitatively and in user preference studies.
LightLab enables a rich set of lighting controls that can be used sequentially to create complex lighting effects. You can adjust the intensity and color of each light source by moving the sliders.
Top row. Given a pair of real raw photographs, we first isolate the target light change. Bottom row. For synthetic data, we render each light component separately. These disentangled components are then scaled and combined to create parameterized sequences of images in linear color space. We tone-map each sequence to SDR using both a sequence-consistent strategy and a per-image strategy.
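The two tone-mapping strategies differ in how the exposure is chosen. A minimal sketch of the distinction, using a simple exposure-plus-gamma operator as a hypothetical stand-in for the actual tone-mapper (which is not specified here):

```python
import numpy as np

def tonemap(hdr, exposure, gamma=2.2):
    """Illustrative linear-HDR-to-SDR operator: scale, clip, gamma-encode."""
    return np.clip(hdr * exposure, 0.0, 1.0) ** (1.0 / gamma)

def tonemap_sequence_consistent(seq):
    """One shared exposure for the whole sequence (set by the brightest
    frame), so relative brightness across frames is preserved."""
    exposure = 1.0 / max(frame.max() for frame in seq)
    return [tonemap(frame, exposure) for frame in seq]

def tonemap_individually(seq):
    """Each frame is normalized on its own: per-frame appearance improves,
    but brightness relations between frames are lost."""
    return [tonemap(frame, 1.0 / frame.max()) for frame in seq]
```

Under the consistent strategy, dimming the target light visibly darkens the frame; under per-image tone-mapping, every frame is stretched to full range.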
We use different conditioning schemes for localized spatial signals and for global controls. Spatial conditions include the input image, a depth map of the input image, and two spatial segmentation masks for the target light source's intensity change and color, respectively. Global controls (ambient light intensity and tone-mapping strategy) are projected to the text embedding dimension and injected through cross-attention.
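The two conditioning routes can be sketched with plain arrays: spatial signals are stacked along the channel axis, while global scalars are projected to the text-embedding dimension. All shapes and names below are illustrative assumptions, not the model's actual tensors:

```python
import numpy as np

H, W = 64, 64
rng = np.random.default_rng(0)

# Hypothetical spatial conditions, concatenated channel-wise so the
# diffusion backbone receives them alongside the noisy latent:
input_image = rng.random((H, W, 3))     # input photograph
depth = rng.random((H, W, 1))           # depth map of the input image
intensity_mask = rng.random((H, W, 1))  # mask for the intensity change
color_mask = rng.random((H, W, 3))      # mask for the target light color
spatial_cond = np.concatenate(
    [input_image, depth, intensity_mask, color_mask], axis=-1)

# Hypothetical global controls, projected to the text-embedding dimension
# (a learned matrix in practice; a random stand-in here) so they can be
# injected as extra tokens through cross-attention:
embed_dim = 768
global_controls = np.array([0.5, 1.0])  # ambient intensity, tone-map flag
W_proj = rng.standard_normal((global_controls.shape[0], embed_dim))
global_token = global_controls @ W_proj  # shape: (embed_dim,)
```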
LightLab gives parametric control over the intensity of light sources. Note how light phenomena are consistent across different intensities, allowing for interactive editing.
Our method can create colored illumination according to user input. Use the colored slider to adjust the color of the light source.
By transferring knowledge from synthetic 3D renders, LightLab can insert virtual point lights (with no geometry) into the scene. Click the circle to light a point.
Disentangling the target light from the ambient light enables control over light that enters through windows, which is not easily controlled physically.
Left. The input sequence was created by photographing a turned-off lamp rotated around the polygonal dog. Middle, Right. Inference results of our method and a zoom-in on the dog. Note how the self-occlusions on the dog's different faces and its shadow match the lamp's position and angle.
We thank Kiran Murthy for his valuable contribution and guidance in tone-mapping linear color images. We are also very grateful to Alberto García García, Erroll Wood, Jesús Pérez and Iker J. de los Mozos for their help and contributions to the synthetic rendering effort. Finally, we thank Dani Lischinski, Andrey Voynov, Bar Cavia, Tal Reiss, Daniel Winter, Yarden Frenkel, Asaf Shul, Matan Cohen, Julian Iseringhausen, Francois Bleibel, Chloe LeGendre, Dani Cohen-Or and Or Patashnik for discussions and support that aided and improved our work.
@article{Magar2025LightLab,
Author = {Nadav Magar and Amir Hertz and Eric Tabellion and Yael Pritch and Alex Rav-Acha and Ariel Shamir and Yedid Hoshen},
Title = {LightLab: Controlling Light Sources in Images with Diffusion Models},
Year = {2025},
Eprint = {arXiv:2505.09608},
Doi = {10.1145/3721238.3730696},
}