A paper presented at ICASSP2025

Created: June 10, 2025
Tags: Paper, Computer Vision, Cross-modal
Updated: June 10, 2025

We are pleased to announce that our paper “Multi-task learning for ultrasonic echo-based depth estimation with audible frequency recovery” has been accepted to the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2025.

Junpei Honma, Akisato Kimura, Go Irie, “Multi-task learning for ultrasonic echo-based depth estimation with audible frequency recovery,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2025.

While depth maps of indoor scenes are essential for a variety of applications, measuring them usually requires dedicated depth sensors, which are not always available. Echo-based depth estimation has been explored as a promising alternative. However, most existing methods assume audible echoes, which prevents their use in quiet spaces or in situations where emitting audible sound is prohibited.


In this paper, we explore depth estimation based on ultrasonic echoes, which has scarcely been explored so far. The key idea of our method is to learn a depth estimation model that can exploit useful but missing information in the audible frequency band. To this end, we perform multi-task learning that estimates depth maps from ultrasonic echoes while simultaneously recovering the audible frequency band.
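To illustrate the multi-task idea, here is a minimal PyTorch-style sketch: a shared encoder over an ultrasonic echo spectrogram feeds two heads, one predicting a depth map and one reconstructing the audible frequency band, trained with a weighted joint loss. The layer sizes, input shapes, and the trade-off weight `alpha` are hypothetical placeholders for illustration, not the architecture or hyperparameters used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskEchoNet(nn.Module):
    """Sketch of a shared encoder with depth and audible-band recovery heads."""

    def __init__(self, in_ch: int = 2, feat_ch: int = 64):
        super().__init__()
        # Shared encoder over the ultrasonic echo spectrogram
        # (e.g. 2 microphone channels); downsamples by a factor of 4.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task head 1: depth map estimation (1-channel output).
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, 1, 4, stride=2, padding=1),
        )
        # Task head 2: recovery of the audible frequency band of the echo.
        self.audio_head = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, in_ch, 4, stride=2, padding=1),
        )

    def forward(self, ultrasonic_spec: torch.Tensor):
        z = self.encoder(ultrasonic_spec)
        return self.depth_head(z), self.audio_head(z)


def multitask_loss(pred_depth, gt_depth, pred_audible, gt_audible, alpha=0.5):
    # Joint objective: depth error plus audible-band reconstruction error,
    # weighted by a hypothetical trade-off parameter alpha.
    return F.l1_loss(pred_depth, gt_depth) + alpha * F.l1_loss(pred_audible, gt_audible)


# Example usage with dummy tensors (batch of 4, spatial size divisible by 4).
model = MultiTaskEchoNet()
spec = torch.randn(4, 2, 64, 64)          # ultrasonic echo spectrogram
gt_depth = torch.randn(4, 1, 64, 64)       # ground-truth depth map
gt_audible = torch.randn(4, 2, 64, 64)     # ground-truth audible-band spectrogram
pred_depth, pred_audible = model(spec)
loss = multitask_loss(pred_depth, gt_depth, pred_audible, gt_audible)
loss.backward()
```

The auxiliary recovery head only serves as a training signal that encourages the shared encoder to capture information correlated with the audible band; at inference time, only the depth head output would be needed.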


Furthermore, to evaluate the performance with real echo data, we develop a data collection device and collect a real echo dataset. Experimental results on this real echo dataset and a public simulation benchmark dataset demonstrate that our method outperforms existing methods.

More details can be found in the official publication on IEEE Xplore or in the preprint on arXiv.