A paper presented at CVPR2023

Created: February 28, 2023
Tags: Paper, Computer Vision, Cross-modal
Updated: February 28, 2023

We are excited to announce that our paper “Listening human behavior: 3D human pose estimation with acoustic signals” has been accepted to CVPR2023.

Given only acoustic signals, without any high-level information such as voices or the sounds of scenes and actions, how much can we infer about human behavior?

Existing methods suffer from privacy issues because they rely on signals that contain human speech or the sounds of specific actions. In contrast, we explore whether low-level acoustic signals alone can provide enough clues to estimate 3D human poses, using active acoustic sensing with a single pair of microphones and loudspeakers. This is a challenging task because sound diffracts far more than other sensing signals and therefore blurs the shapes of objects in a scene.
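
To give a flavor of what active acoustic sensing involves: the loudspeaker emits a known probe signal while the microphones record how the body reflects and diffracts it. Below is a minimal sketch of generating an exponential sine sweep, a common probe signal for such measurements; the choice of probe signal and all parameters here are illustrative assumptions, not necessarily what our system uses.

```python
import numpy as np

def exponential_sine_sweep(f0=20.0, f1=20_000.0, duration=1.0, sr=48_000):
    """Exponential (log) sine sweep; all parameters are illustrative."""
    t = np.arange(int(duration * sr)) / sr
    k = duration / np.log(f1 / f0)
    phase = 2.0 * np.pi * f0 * k * (np.exp(t / k) - 1.0)
    return np.sin(phase).astype(np.float32)

# In a real setup, the sweep would be played from the loudspeaker while the
# microphones record the response reflected and diffracted by the body.
sweep = exponential_sine_sweep()
```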

Accordingly, we introduce a framework that encodes multichannel audio features into 3D human poses. To capture the subtle sound changes that reveal detailed pose information, we explicitly extract phase features from the acoustic signals, together with the typical spectrum features, and feed both into our pose estimation network. We also show that reflected and diffracted sounds are easily influenced by differences in subjects' physiques (e.g., height and muscularity), which degrades prediction accuracy. We reduce these subject-dependent gaps with a subject discriminator, as sketched below, improving accuracy.
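
As a rough illustration of the two feature types, the sketch below computes log-magnitude (spectrum) and phase features from a multichannel recording via the STFT. The window sizes and the (cos, sin) phase encoding are illustrative assumptions; the paper's exact feature pipeline may differ.

```python
import numpy as np
from scipy.signal import stft

def audio_features(x, sr=48_000, n_fft=1024, hop=256):
    """Log-magnitude and phase features from a (channels, samples) recording.

    Returns:
        log_mag:    (channels, freq_bins, frames)
        phase_feat: (channels, 2, freq_bins, frames), a (cos, sin) encoding
    """
    _, _, Z = stft(x, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    log_mag = np.log1p(np.abs(Z))   # spectrum feature
    phase = np.angle(Z)             # raw phase in [-pi, pi]
    # Encode phase as (cos, sin) so the 2*pi wraparound stays continuous.
    phase_feat = np.stack([np.cos(phase), np.sin(phase)], axis=1)
    return log_mag, phase_feat

# Example on a dummy two-channel recording of one second.
log_mag, phase_feat = audio_features(np.random.randn(2, 48_000))
```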

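One common way to realize such a subject discriminator is adversarial training with a gradient reversal layer, which pushes the shared encoder toward subject-invariant features. The sketch below assumes this DANN-style setup; the class names and layer sizes are hypothetical placeholders, and the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class AcousticPoseNet(nn.Module):
    """Shared encoder with a pose head and an adversarial subject head."""
    def __init__(self, in_dim, n_joints, n_subjects, hidden=512, lam=0.1):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.pose_head = nn.Linear(hidden, n_joints * 3)   # 3D joint coords
        self.subject_head = nn.Linear(hidden, n_subjects)  # discriminator

    def forward(self, x):
        h = self.encoder(x)
        pose = self.pose_head(h).view(x.size(0), -1, 3)
        # Reversed gradients make the encoder hide subject identity.
        subj_logits = self.subject_head(GradReverse.apply(h, self.lam))
        return pose, subj_logits
```

During training, the total loss would combine a pose regression term with a cross-entropy term on the subject logits; thanks to the gradient reversal, minimizing the latter drives the encoder's features away from subject-specific cues.
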
Our experiments suggest that, using only this low-dimensional acoustic information, our method outperforms baseline methods.

Details are available on the project page, which hosts the dataset and the source code.