A paper published at CVPR2024

Created
March 15, 2024
Tags
Paper, Computer Vision, Machine Learning
Updated
June 10, 2024

We are excited to announce that our paper “Understanding and improving source-free domain adaptation from a theoretical perspective” has been accepted to the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR2024).

Yu Mitsuzumi, Akisato Kimura, Hisashi Kashima, “Understanding and improving source-free domain adaptation from a theoretical perspective,” IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

Source-free Domain Adaptation (SFDA) is an emerging and challenging research area that addresses the problem of unsupervised domain adaptation (UDA) without source data.

Though numerous successful methods have been proposed for SFDA, a theoretical understanding of why these methods work well is still absent.

In this paper, we shed light on the theoretical perspective of existing SFDA methods.

Specifically, we find that SFDA loss functions combining discriminability and diversity losses work in the same way as the training objective in the theory of self-training under the expansion assumption, which establishes a bound on the target error.
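To make the discriminability-plus-diversity structure concrete, here is a minimal sketch of the kind of objective commonly used in SFDA methods, written in PyTorch. The function name, the weighting, and the exact loss form are illustrative assumptions, not the precise objective analyzed in the paper: the discriminability term encourages confident per-sample predictions, while the diversity term keeps the predictions from collapsing onto a single class.

```python
import torch
import torch.nn.functional as F

def sfda_objective(logits: torch.Tensor, div_weight: float = 1.0) -> torch.Tensor:
    """Illustrative discriminability + diversity objective (not the paper's exact loss)."""
    probs = F.softmax(logits, dim=1)                                  # (batch, num_classes)
    # Discriminability: minimize per-sample prediction entropy (confident outputs).
    discriminability = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    # Diversity: maximize the entropy of the marginal prediction (avoid class collapse),
    # implemented here as minimizing its negative.
    marginal = probs.mean(dim=0)
    diversity = (marginal * torch.log(marginal + 1e-8)).sum()
    return discriminability + div_weight * diversity
```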

This finding brings two novel insights that enable us to build an improved SFDA method comprising

  1. model training with an auto-adjusting diversity constraint, and
  2. augmentation training with a teacher-student framework,

yielding better recognition performance (a rough illustrative sketch of the second component follows below).
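As a rough illustration of the second component, the snippet below sketches one generic teacher-student augmentation-training step in PyTorch: an EMA teacher produces pseudo-targets from weakly augmented inputs, and the student is trained to match them on strongly augmented views. The function names, the EMA momentum, and the KL-based consistency loss are assumptions for illustration; they do not reproduce the paper's exact training recipe or its auto-adjusting diversity constraint.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999):
    # The teacher tracks an exponential moving average of the student's weights.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

def teacher_student_step(student, teacher, weak_x, strong_x, optimizer):
    """One augmentation-training step: the student matches teacher pseudo-targets."""
    with torch.no_grad():
        targets = F.softmax(teacher(weak_x), dim=1)       # pseudo-targets from weak views
    log_probs = F.log_softmax(student(strong_x), dim=1)   # student predictions on strong views
    loss = F.kl_div(log_probs, targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                          # refresh the teacher after each step
    return loss.item()
```

In such a setup the teacher would typically be initialized as a copy of the source-pretrained student (e.g., via copy.deepcopy) before adaptation begins.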

The paper has been published on the Computer Vision Foundation open access site.

Also, a short video describing the content of this paper is available on YouTube.

Please check the CVPR2024 virtual site for details.