Abstract: Accurate depth estimation enhances endoscopy navigation and diagnostics, but obtaining ground-truth depth in clinical settings is challenging. Synthetic datasets are often used for training, yet the domain gap limits generalization to real data. We propose a novel image-to-image translation framework that preserves structure while generating realistic textures from clinical data. Our key innovation integrates Stable Diffusion with ControlNet, conditioned on a latent representation extracted from a Per-Pixel Shading (PPS) map. PPS captures surface lighting effects, providing a stronger structural constraint than depth maps. Experiments show that our approach produces more realistic translations and improves depth estimation over the GAN-based MI-CycleGAN. Our code is publicly accessible at https://github.com/anaxqx/PPS-Ctrl.
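To illustrate the structural signal involved, the sketch below shows one common way a Per-Pixel Shading map can be derived from a depth map and camera intrinsics, assuming a point light co-located with the camera center (the usual endoscope geometry). The function name, tensor layout, and inverse-square attenuation model are illustrative assumptions, not the PPS-Ctrl implementation.

```python
# Hypothetical sketch: Per-Pixel Shading (PPS) from depth, assuming a point
# light co-located with the camera; not the paper's code.
import torch
import torch.nn.functional as F

def pps_from_depth(depth, fx, fy, cx, cy, eps=1e-6):
    """depth: (B, 1, H, W) metric depth map; returns a (B, 1, H, W) PPS map."""
    B, _, H, W = depth.shape
    v, u = torch.meshgrid(
        torch.arange(H, device=depth.device, dtype=depth.dtype),
        torch.arange(W, device=depth.device, dtype=depth.dtype),
        indexing="ij",
    )
    # Back-project every pixel to a camera-space 3D point.
    z = depth.squeeze(1)                                      # (B, H, W)
    pts = torch.stack([(u - cx) / fx * z, (v - cy) / fy * z, z], dim=1)

    # Surface normals from finite differences of the point cloud,
    # oriented toward the camera at the origin.
    du = F.pad(pts[..., :, 1:] - pts[..., :, :-1], (0, 1, 0, 0))
    dv = F.pad(pts[..., 1:, :] - pts[..., :-1, :], (0, 0, 0, 1))
    normals = F.normalize(torch.cross(dv, du, dim=1), dim=1, eps=eps)

    # Light at the camera origin: direction to the light is -p / ||p||,
    # with inverse-square falloff 1 / ||p||^2.
    dist2 = (pts ** 2).sum(dim=1, keepdim=True).clamp_min(eps)
    to_light = -pts / dist2.sqrt()
    n_dot_l = (normals * to_light).sum(dim=1, keepdim=True).clamp_min(0.0)
    return n_dot_l / dist2
```

A map like this, typically normalized to [0, 1], could then be encoded and passed to the ControlNet branch in place of a raw depth map.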
Abstract: Image-based relighting of indoor rooms creates an immersive virtual understanding of the space, which is useful for interior design, virtual staging, and real estate. Relighting indoor rooms from a single image is especially challenging due to complex illumination interactions between multiple lights and cluttered objects with widely varying geometric and material complexity. Recently, generative models have been successfully applied to image-based relighting conditioned on a target image or a latent code, albeit without detailed local lighting control. In this paper, we introduce ScribbleLight, a generative model that supports local fine-grained control of lighting effects through scribbles that describe changes in lighting. Our key technical novelties are an Albedo-conditioned Stable Image Diffusion model that preserves the intrinsic color and texture of the original image after relighting, and an encoder-decoder-based ControlNet architecture that enables geometry-preserving lighting effects with normal map and scribble annotations. We demonstrate ScribbleLight's ability to create different lighting effects (e.g., turning lights on/off, adding highlights, casting shadows, or adding indirect lighting from unseen lights) from sparse scribble annotations.
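As a rough illustration of this conditioning scheme, the sketch below wires a toy denoiser whose input is the albedo latent concatenated with the noisy latent, plus a small encoder-decoder control branch over the stacked normal map and lighting scribble. All module names, channel sizes, and the assumption that the conditioning maps are resized to the latent resolution are illustrative choices, not the ScribbleLight architecture.

```python
# Hypothetical wiring sketch (not the authors' code): albedo conditioning via
# channel concatenation, plus an encoder-decoder control branch over the
# stacked normal map and scribble that injects geometry-aware features.
import torch
import torch.nn as nn

class ControlBranch(nn.Module):
    """Encoder-decoder control network over the stacked normal map and scribble."""
    def __init__(self, cond_ch=4, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(cond_ch, feat_ch, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.SiLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1),
        )

    def forward(self, normals, scribble):
        cond = torch.cat([normals, scribble], dim=1)      # (B, 3 + 1, h, w)
        return self.decoder(self.encoder(cond))           # control features, (B, feat_ch, h, w)

class AlbedoConditionedDenoiser(nn.Module):
    """Stand-in for the diffusion UNet; consumes noisy latent ++ albedo latent."""
    def __init__(self, latent_ch=4, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * latent_ch + feat_ch, feat_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(feat_ch, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent, albedo_latent, control_feats):
        x = torch.cat([noisy_latent, albedo_latent, control_feats], dim=1)
        return self.net(x)                                 # predicted noise residual

# Toy shapes: 64x64 latents (a 512x512 image); conditioning resized to match.
noisy, albedo = torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)
normals, scribble = torch.randn(1, 3, 64, 64), torch.rand(1, 1, 64, 64)
eps_hat = AlbedoConditionedDenoiser()(noisy, albedo, ControlBranch()(normals, scribble))
```

In the full model, the image and albedo latents would come from the Stable Diffusion VAE encoder and the denoiser would be the pretrained UNet; the toy modules above only show how the three conditioning signals meet.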
Abstract: In this paper, we develop a personalized video relighting algorithm that produces high-quality and temporally consistent relit video under any pose, expression, and lighting condition in real time. Existing relighting algorithms typically rely either on publicly available synthetic data, which yields poor relighting results, or on Light Stage data, which is not publicly accessible. We show that, by casually capturing video of a user watching YouTube videos on a monitor, we can train a personalized algorithm capable of producing high-quality relighting under any condition. Our key contribution is a novel neural relighting architecture that effectively separates the intrinsic appearance features (geometry and reflectance) from the source lighting and then combines them with the target lighting to generate a relit image. This architecture enables smoothing of the intrinsic appearance features, leading to temporally stable video relighting. Both qualitative and quantitative evaluations show that our relighting architecture improves portrait image relighting quality and temporal consistency over state-of-the-art approaches on both casually captured Light Stage at Your Desk (LSYD) data and Light Stage-captured One Light At a Time (OLAT) datasets.
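To make the feature-separation idea concrete, the sketch below shows one simplified way such an architecture could be organized: an encoder splits each frame into intrinsic-appearance features and a source-lighting code, the decoder recombines the intrinsics with a target-lighting code, and an exponential moving average over the intrinsics stands in for the temporal smoothing. The module layout, dimensions, and the EMA rule are assumptions for illustration, not the paper's network.

```python
# Hypothetical relight-by-feature-swap sketch (not the authors' architecture).
import torch
import torch.nn as nn

class Relighter(nn.Module):
    def __init__(self, feat_ch=64, light_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.SiLU(),
        )
        self.to_intrinsics = nn.Conv2d(feat_ch, feat_ch, 1)   # geometry + reflectance features
        self.to_src_light = nn.Sequential(                    # source-lighting code
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, light_dim),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch + light_dim, feat_ch, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(feat_ch, 3, 4, stride=2, padding=1),
        )

    def forward(self, frame, target_light, prev_intrinsics=None, momentum=0.8):
        feats = self.encoder(frame)
        intrinsics = self.to_intrinsics(feats)
        src_light = self.to_src_light(feats)                  # separated out; used by training losses
        if prev_intrinsics is not None:                        # temporal smoothing across frames
            intrinsics = momentum * prev_intrinsics + (1 - momentum) * intrinsics
        light = target_light[:, :, None, None].expand(-1, -1, *intrinsics.shape[-2:])
        relit = self.decoder(torch.cat([intrinsics, light], dim=1))
        return relit, intrinsics, src_light

# Relight a short clip frame by frame, carrying the smoothed intrinsics forward.
model, target = Relighter(), torch.randn(1, 32)                # hypothetical target-lighting code
frames, prev = torch.randn(8, 3, 256, 256), None
with torch.inference_mode():
    for t in range(frames.shape[0]):
        relit, prev, _ = model(frames[t : t + 1], target, prev)
```

In this sketch, smoothing is applied to the intrinsic features rather than to the output frames, mirroring the stabilization mechanism described in the abstract.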