We are pleased to announce that our paper “Unsupervised single-image intrinsic image decomposition with LiDAR intensity enhanced training” has been accepted to the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2025.
Shogo Sato, Takuhiro Kaneko, Taiga Yoshida, Akisato Kimura, Ryuichi Tanida, “Unsupervised single-image intrinsic image decomposition with LiDAR intensity enhanced training,” IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2025.
Unsupervised intrinsic image decomposition (IID) is the task of separating a natural image into albedo and shade without ground-truth supervision for either component. A recent model employing light detection and ranging (LiDAR) intensity demonstrated impressive performance, but its need for LiDAR intensity during inference limits its practicality. Thus, an IID model that requires only a single image during inference, while matching the IID quality of a model given both an image and LiDAR intensity, is highly desirable.
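As background, IID is commonly formalized as a per-pixel product: the observed image is the albedo (reflectance) multiplied by the shade (shading). The toy snippet below only illustrates this standard relation; it is not the paper's code, and the tensor shapes are our assumptions.

```python
import torch

albedo = torch.rand(3, 4, 4)              # reflectance (color), assumed RGB
shade = torch.rand(1, 4, 4) * 0.9 + 0.1   # grayscale shading, kept away from zero
image = albedo * shade                    # the standard IID forward model
# Recovering albedo and shade from `image` alone is the (ill-posed) IID task.
assert torch.allclose(image / shade, albedo)
```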
To address this challenge, we propose a novel approach, LiDAR intensity enhanced training (LIET), which uses only an image during inference while exploiting both an image and LiDAR intensity during training.
Specifically, we introduce a partially-shared model that accepts the image and the LiDAR intensity through separate modality-specific encoders but processes them together in shared components to learn shared representations.
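The following is a minimal PyTorch sketch of this partially-shared idea: two modality-specific encoders feed shared prediction heads, so both modalities are mapped into a common representation. All module names, channel sizes, and layer choices here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class PartiallySharedIID(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        # Separate, modality-specific encoders.
        self.image_encoder = conv_block(3, feat)   # RGB image
        self.lidar_encoder = conv_block(1, feat)   # LiDAR intensity map
        # Shared components process features from either modality.
        self.shared_albedo_head = nn.Conv2d(feat, 3, kernel_size=3, padding=1)
        self.shared_shade_head = nn.Conv2d(feat, 1, kernel_size=3, padding=1)

    def forward(self, x, modality="image"):
        enc = self.image_encoder if modality == "image" else self.lidar_encoder
        h = enc(x)
        return self.shared_albedo_head(h), self.shared_shade_head(h)

model = PartiallySharedIID()
image = torch.rand(1, 3, 128, 128)
lidar = torch.rand(1, 1, 128, 128)
albedo_img, shade_img = model(image, modality="image")   # used at inference
albedo_lid, shade_lid = model(lidar, modality="lidar")   # used only in training
```

At inference, only the image path is exercised, which is what allows LiDAR intensity to be dropped after training.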
In addition, to enhance IID quality, we propose an albedo-alignment loss and image-LiDAR conversion (ILC) paths. The albedo-alignment loss aligns the grayscale albedo inferred from the image with that inferred from the LiDAR intensity; since LiDAR intensity contains no cast shadows, this alignment suppresses cast shadows in the image-derived albedo.
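A hedged sketch of such a loss is shown below: the image-derived albedo is pulled toward the LiDAR-derived albedo in grayscale. The luminance weights, the L1 distance, the detached LiDAR target, and the assumption that both albedos are 3-channel are all our illustrative choices, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def to_grayscale(rgb):
    # Standard luminance weights; an assumption, not taken from the paper.
    w = torch.tensor([0.299, 0.587, 0.114], device=rgb.device)
    return (rgb * w.view(1, 3, 1, 1)).sum(dim=1, keepdim=True)

def albedo_alignment_loss(albedo_from_image, albedo_from_lidar):
    # Detach the LiDAR branch so it serves as the (cast-shadow-free) target.
    return F.l1_loss(to_grayscale(albedo_from_image),
                     to_grayscale(albedo_from_lidar).detach())

a_img = torch.rand(1, 3, 128, 128)   # albedo predicted from the image
a_lid = torch.rand(1, 3, 128, 128)   # albedo predicted from LiDAR intensity
loss = albedo_alignment_loss(a_img, a_lid)
```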
Furthermore, to translate the input image into albedo and shade styles while preserving its content, encoders separate the input into a content code and a style code. The ILC path mutually translates the image and LiDAR intensity, which share content but differ in style, thereby encouraging a clean separation of style from content.
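The sketch below illustrates the ILC idea in the spirit of content-style translation: each modality is split into a spatial content code and a global style code, and swapping styles across modalities translates an image into a LiDAR-intensity-like output and vice versa, with reconstruction losses encouraging the split. The modules and losses are simplified assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Translator(nn.Module):
    def __init__(self, in_ch, feat=32):
        super().__init__()
        self.content_enc = nn.Conv2d(in_ch, feat, 3, padding=1)   # spatial content
        self.style_enc = nn.Sequential(                            # global style
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, feat, 1)
        )
        self.decoder = nn.Conv2d(feat, in_ch, 3, padding=1)

    def encode(self, x):
        return self.content_enc(x), self.style_enc(x)

    def decode(self, content, style):
        # Combine spatial content with a broadcast global style vector.
        return self.decoder(content + style)

img_net = Translator(in_ch=3)   # image modality
lid_net = Translator(in_ch=1)   # LiDAR intensity modality

image = torch.rand(1, 3, 128, 128)
lidar = torch.rand(1, 1, 128, 128)

c_img, s_img = img_net.encode(image)
c_lid, s_lid = lid_net.encode(lidar)

# ILC path: swap styles across modalities; shared content should survive.
fake_lidar = lid_net.decode(c_img, s_lid)
fake_image = img_net.decode(c_lid, s_img)
loss_ilc = F.l1_loss(fake_lidar, lidar) + F.l1_loss(fake_image, image)
```

Because the two modalities depict the same scene, forcing successful cross-translation pushes scene structure into the content code and modality appearance into the style code.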
Consequently, LIET achieves IID quality comparable to the existing model that relies on LiDAR intensity, while using only an image, with no LiDAR intensity, during inference.
The paper has been published in the CVF Open Access archive at https://openaccess.thecvf.com/content/WACV2025/html/Sato_Unsupervised_Single-Image_Intrinsic_Image_Decomposition_with_LiDAR_Intensity_Enhanced_Training_WACV_2025_paper.html , and the dataset used for the empirical evaluations is available on GitHub at https://github.com/ntthilab-cv/NTT-intrinsic-dataset.