Tutorials

Adversarial Perturbations in Biometrics: Detection and Mitigation


Date of the tutorial: Monday, October 22nd.

Time of the tutorial: 1:30 - 5:30 PM

Speakers: Mayank Vatsa (IIIT Delhi), Richa Singh (IIIT Delhi), Nalini Ratha (IBM T. J. Watson Research Center)

Abstract: Models based on deep neural network architectures have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Recognizing this, many researchers have started to design methods that exploit the weaknesses of deep learning based algorithms, questioning their robustness and exposing their singularities. Adversarial attacks on automated classification systems have, however, been an area of interest for a long time. In 2002, Ratha et al. identified eight points of attack on a biometric system, and several of these attacks are relevant for non-biometric classification tasks as well, including object recognition and autonomous driving. For instance, the adversary can operate at the input level or the decision level, causing the classifier to produce incorrect predictions. It is therefore important to detect adversarial perturbations and mitigate the effects caused by such adversaries.

Research on adversarial learning has three key components: (i) creating adversarial images, (ii) detecting whether an image has been adversarially altered, and (iii) mitigating the effect of the adversarial perturbation process. These adversaries create different kinds of effects on the input, and detecting them requires a combination of hand-crafted and learned features; for instance, some existing attacks can be detected using principal components, while some hand-crafted attacks can be detected using well-defined image processing operations. This tutorial will focus on these three key ideas related to adversarial learning (perturbation, detection, and mitigation), building from the basics of adversarial learning to new algorithms for detection and mitigation, and concluding with some open research questions in this spectrum.
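As a rough illustration of components (i) and (ii), the sketch below generates a fast-gradient-sign perturbation and scores images by their reconstruction error outside a principal subspace fitted to clean data. This is not material from the tutorial itself: the model, the step size epsilon, and the clean-data principal components are assumed placeholders.

```python
# Illustrative sketch only: an FGSM-style perturbation generator and a
# simple PCA-residual detector. Model, epsilon, and components are assumed.
import numpy as np
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Create an adversarial image by stepping along the sign of the
    input gradient of the classification loss (fast gradient sign method)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def pca_residual_score(images, components):
    """Score images by their reconstruction error outside the principal
    subspace learned from clean data; unusually high scores suggest that
    an image may have been adversarially perturbed."""
    flat = images.reshape(len(images), -1)
    projected = flat @ components.T         # project onto the clean subspace
    reconstructed = projected @ components  # map back to image space
    return np.linalg.norm(flat - reconstructed, axis=1)
```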

