A paper accepted to BMVC2024

Created
August 14, 2024
Tags
Paper, Computer Vision, Cross-modal
Updated
October 14, 2024

We are pleased to announce that our paper “Acoustic-based 3D human pose estimation robust to human position” has been accepted to the British Machine Vision Conference (BMVC 2024).

Yusuke Oumi, Yuto Shibata, Go Irie, Akisato Kimura, Yoshimitsu Aoki, Mariko Isogawa, “Acoustic-based 3D human pose estimation robust to human position,” British Machine Vision Conference (BMVC), 2024.

This paper explores the problem of 3D human pose estimation from low-level acoustic signals alone. The existing active acoustic sensing-based approach to 3D human pose estimation implicitly assumes that the target user stands along the line between the loudspeakers and the microphone. Because reflection and diffraction of sound by the human body produce only subtle changes in the acoustic signal compared to direct obstruction, the accuracy of the existing model degrades significantly when subjects deviate from this line, limiting its practicality in real-world scenarios.

To overcome this limitation, we propose a novel method composed of a position discriminator and a reverberation-resistant model. The former predicts the standing position of each subject and applies adversarial learning to extract position-invariant features. The latter uses acoustic signals recorded before the estimation target time as references, improving robustness to variations in sound arrival time caused by diffraction and reflection.
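As a rough illustration of the adversarial, position-invariant feature learning described above: a gradient reversal layer is one common way to implement this kind of adversarial training. The sketch below is a minimal, hypothetical toy example (the variable names, linear discriminator, and logistic loss are illustrative assumptions, not the paper's actual architecture): the encoder receives the sign-flipped gradient of the discriminator loss, so it is pushed to produce features from which the standing position cannot be predicted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_reverse(grad, lam=1.0):
    # Gradient reversal layer: identity in the forward pass,
    # sign-flipped (and scaled by lam) gradient in the backward pass.
    return -lam * grad

# Toy setup (hypothetical): an encoder feature vector z and a
# linear position discriminator with weights w.
z = np.array([1.0, 2.0])   # features from the encoder
w = np.array([0.5, -0.5])  # discriminator weights
y = 1.0                    # binary position label, for illustration

# Discriminator forward pass and logistic-loss gradient w.r.t. z.
p = sigmoid(z @ w)
grad_z = (p - y) * w       # dL/dz for binary cross-entropy

# The encoder is updated with the REVERSED gradient, making the
# discriminator's task harder -> position-invariant features.
encoder_grad = grad_reverse(grad_z)
```

In a full training loop, the discriminator itself would still be updated with the ordinary (non-reversed) gradient, so the two components play the usual adversarial game.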


We construct an acoustic pose estimation dataset that covers diverse human locations and demonstrate through experiments that our proposed method outperforms existing approaches.

More details will be released later.