# Black-box Adversarial Attacks on Video Recognition Models (VBAD)

## Introduction

This is the code for the paper "Black-box Adversarial Attacks on Video Recognition Models". To boost the black-box attack, it uses perturbations transferred from an ImageNet pre-trained model and reduces the dimensionality of the attack space through partition-based rectification. More details can be found in the paper.
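To illustrate the dimensionality-reduction idea, here is a minimal, hypothetical sketch of partition-based rectification: rather than estimating a gradient value for every pixel of the video perturbation, the perturbation is split into coarse spatial partitions, one value is kept per partition, and that value is broadcast back over its patch. The function name, shapes, and the averaging rule are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def partition_rectify(perturbation: np.ndarray, block: int) -> np.ndarray:
    """Average the perturbation within block x block spatial patches,
    then tile each patch average back to full resolution.

    perturbation: array of shape (frames, H, W); H and W must be
    divisible by `block`.
    """
    t, h, w = perturbation.shape
    assert h % block == 0 and w % block == 0
    # Reshape to (frames, H/block, block, W/block, block) and average
    # over the two intra-block axes -> one value per partition.
    coarse = perturbation.reshape(
        t, h // block, block, w // block, block
    ).mean(axis=(2, 4))
    # Broadcast each partition value back over its patch.
    return np.repeat(np.repeat(coarse, block, axis=1), block, axis=2)

# A 16-frame, 224x224 perturbation reduced to 4x4 partitions per frame.
noise = np.random.randn(16, 224, 224).astype(np.float32)
rectified = partition_rectify(noise, block=56)
print(rectified.shape)  # (16, 224, 224)
```

The attacker then only needs to estimate `frames * (H/block) * (W/block)` values per query instead of one per pixel, which is what makes query-limited black-box estimation tractable.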

## Requirements

The code is tested with Python 3.6.7 and PyTorch 0.4.1.

```shell
pip install -r requirements.txt  # install requirements
```

We use the pre-trained I3D model from https://github.com/piergiaj/pytorch-i3d.

## Usage

### Targeted attack

```shell
sh ./targeted_attack.sh
```

### Untargeted attack

```shell
sh ./untargeted_attack.sh
```

## Cite

If you find this work useful, please cite the following:

```bibtex
@inproceedings{jiang2019black,
  author    = {Linxi Jiang and
               Xingjun Ma and
               Shaoxiang Chen and
               James Bailey and
               Yu{-}Gang Jiang},
  title     = {Black-box Adversarial Attacks on Video Recognition Models},
  booktitle = {Proceedings of the 27th {ACM} International Conference on Multimedia,
               {MM} 2019, Nice, France, October 21-25, 2019},
  pages     = {864--872},
  year      = {2019}
}
```

## Contact

For questions related to VBAD, please send an email to [email protected]