Using object-specific frequency information from labeled data to improve a CNN’s robustness to adversarial attacks
Abstract
Convolutional Neural Networks are particularly vulnerable to attacks that subtly manipulate their input data, commonly known as adversarial attacks. In this paper, a method of filtering images using the Fast Fourier Transform is explored, along with its potential to serve as a defense mechanism against such attacks. The main contribution, which sets this work apart from other methods that use the Fourier Transform as a filtering element in neural networks, is the use of labeled data to determine how to filter the images. This paper concludes that, while the proposed filtering is hardly better than a simple low-pass filter, it still manages to improve resistance to adversarial attacks with a minimal drop in the standard accuracy of the network.
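The low-pass baseline mentioned above can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a single-channel image and a circular pass band of radius `cutoff` (in frequency bins), where high-frequency components, the kind adversarial perturbations often exploit, are zeroed out in the Fourier domain.

```python
import numpy as np

def fft_lowpass(image, cutoff):
    """Apply a circular low-pass filter to a 2-D image via the FFT.

    cutoff: radius (in frequency bins) of the pass band; components
    farther from the spectrum's center are zeroed out.
    """
    # Shift the zero-frequency component to the center of the spectrum
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= cutoff  # keep only low frequencies
    # Invert the shift and the transform; discard residual imaginary part
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask))
    return np.real(filtered)

# Example: a smooth pattern corrupted by high-frequency noise
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(32), np.hanning(32))
noisy = clean + 0.1 * rng.standard_normal((32, 32))
smoothed = fft_lowpass(noisy, cutoff=6)
```

In the paper's setting, the cutoff (or, more generally, the frequency mask) would instead be derived from labeled training data rather than fixed by hand.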