What Makes Vision Transformers Robust Towards Bit-Flip Attack?

Abstract

The bit-flip attack (BFA) is a well-studied threat that can dramatically degrade the accuracy of a machine learning model by flipping a small number of bits in the model parameters. Numerous studies have focused on enhancing the performance of BFA and mitigating its effects on traditional Convolutional Neural Networks (CNNs). However, the security of vision transformers against BFA remains poorly understood. In our work, we conduct various experiments on vision transformer models and discover that the flipped bits are concentrated in the classification layer and the MLP layers, specifically within the first and last few blocks. Furthermore, we find an inverse relationship between the size of a transformer model and its robustness. We examine several defense strategies and find that protecting the initial blocks mitigates BFA most effectively. Our findings in this study can …
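To illustrate why a handful of bit flips can be so destructive (a generic sketch, not the attack procedure studied in the paper): model weights are typically stored as IEEE-754 floats, and flipping a single high-order exponent bit can change a weight's magnitude by many orders. The helper below, a hypothetical illustration using only the standard library, flips one bit in a float32 encoding.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) in the IEEE-754
    float32 encoding of `value` and return the result."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

# Flipping the most significant exponent bit (bit 30) of a
# typical small weight inflates it by dozens of orders of
# magnitude, which is why a few targeted flips can wreck accuracy.
w = 0.05
print(flip_bit(w, 30))  # astronomically large value
```

Flipping the same bit twice recovers the original float32 value, which is why BFA defenses often focus on detecting or masking a small set of vulnerable bits rather than protecting every parameter.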

Date
November 30, 2024
Authors
Xuan Zhou, Souvik Kundu, Dake Chen, Jie Huang, Peter Beerel
Book
International Conference on Pattern Recognition
Pages
424-438
Publisher
Springer Nature Switzerland