We are pleased to announce that our paper “Towards defensive letter design” has been accepted to ACPR2023.
Rentaro Kataoka, Akisato Kimura, Seiichi Uchida, “Towards defensive letter design,” Asian Conference on Pattern Recognition (ACPR), 2023.
A major approach to defending against adversarial attacks focuses only on making image classifiers more resilient; it pays no attention to the visual objects in the images, such as pandas and cars. This means that the visual objects themselves cannot take any defensive action and remain vulnerable to adversarial attacks.
In contrast, letters are artificial symbols, so we can freely control their appearance as long as they remain readable. In other words, we can make letters more defensive against attacks.
This paper poses three research questions related to the adversarial vulnerability of letter images:
- How defensive are the letters against adversarial attacks?
- Can we estimate how defensive a given letter image is before attacks?
- Can we control the letter images to be more defensive against adversarial attacks?
To answer the first and second questions, we measure the defensibility of letters by attacking them with the Iterative Fast Gradient Sign Method (I-FGSM) and then build a deep regression model that estimates the defensibility of each letter image before any attack.
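As a rough illustration of the measurement step, the sketch below runs I-FGSM against a simple classifier. Note that this is a minimal toy example, not the paper's setup: the paper attacks a deep character classifier, whereas here the victim is a hand-written logistic-regression model so that the input gradient is analytic, and "defensibility" is approximated by the number of iterations until the predicted label first flips (an assumption on our part; the paper may define the score differently).

```python
import numpy as np

def ifgsm_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=20):
    """Toy I-FGSM against a logistic-regression classifier.

    Each step moves x by alpha in the sign of the input gradient of the
    cross-entropy loss, then projects back into the eps-ball around the
    original input and into the valid pixel range [0, 1].
    Returns the adversarial example and the first step at which the
    predicted label flipped (np.inf if it never flipped), used here as
    a crude defensibility score: more steps = more defensive.
    """
    x0 = x.copy()
    x_adv = x.copy()
    flip_step = np.inf
    for t in range(steps):
        z = x_adv @ w + b             # logit
        p = 1.0 / (1.0 + np.exp(-z))  # P(y = 1)
        grad = (p - y) * w            # d loss / d x for logistic regression
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)  # stay in the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # stay a valid image
        pred = float((x_adv @ w + b) > 0)
        if flip_step == np.inf and pred != y:
            flip_step = t + 1
    return x_adv, flip_step
```

In the paper's pipeline, a score of this kind would be computed for many letter images and used as the regression target for the defensibility estimator.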
To answer the third question, we also propose a two-step method based on a generative adversarial network (GAN) for generating character images with higher defensibility.
Details will be available later.