In the past few years, researchers have discovered numerous ways to fool neural networks. Inputs crafted for this purpose are known as adversarial examples, and they have mainly been designed to fool object detectors and other networks that process visual data.
Today, Baidu has presented AdvBox – a new toolbox that generates adversarial examples capable of fooling neural networks. The toolbox is part of a series of AI security tools that Baidu has developed and open-sourced under the name Advbox Family. It works with the most popular deep learning frameworks, such as PyTorch, Caffe2, MXNet, TensorFlow, and Keras, and its main purpose is to serve as a benchmark for testing the robustness of deep neural networks against adversarial attacks.
Advbox contains a number of white-box and black-box attack methods, as well as defense methods against adversarial attacks. The toolbox covers many attack scenarios, including Face Recognition Attack, Stealth T-shirt Attack, and Deepfake Detect. All of these can be used directly from the command line to generate adversarial examples with zero coding; a sketch of what such an attack does under the hood follows below.
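To give a sense of what a white-box attack actually computes, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the classic attacks such toolboxes implement, written in PyTorch. This is an illustration of the technique, not AdvBox's own API; the pretrained ResNet-18 and the random input batch are placeholder assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative white-box FGSM attack; the pretrained ResNet-18 and the
# random input below are stand-ins, not code from the AdvBox toolbox.
model = models.resnet18(pretrained=True).eval()

def fgsm(model, x, label, eps=0.03):
    """Perturb x one step in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Signed-gradient step, clamped back to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)            # placeholder image batch
label = model(x).argmax(dim=1)            # attack the model's own prediction
x_adv = fgsm(model, x, label)
print(model(x_adv).argmax(dim=1), label)  # prediction often flips
```

Because FGSM needs the loss gradient with respect to the input, it is a white-box method; the black-box attacks AdvBox also ships query the model only through its outputs.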
The Advbox toolbox was open-sourced and is available under the Apache 2.0 license. More details about the implementation and the supported attack and defense mechanisms can be found in the paper published on arXiv. Advbox can be downloaded from the official GitHub page.