This repository is the official implementation of our paper *Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks*. In this paper, we propose a novel and effective black-box unrestricted attack, Natural Color Fool (NCF), which is guided by realistic color distributions sampled from a publicly available dataset. The following is the simplified pipeline of NCF (optimizing one image variant without initialization reset):
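At a glance, the pipeline can be pictured with the schematic sketch below. This is illustrative pseudocode only, not code from this repository; every helper name (`segment`, `sample_color_distribution`, `recolor`, `adversarial_loss`) is a hypothetical placeholder.

```python
# Schematic sketch of the simplified pipeline: recolor semantic regions with
# realistic color distributions and keep the variant that best fools the surrogate.
# All helpers below are hypothetical placeholders, not functions from this repository.
def ncf_single_variant(image, surrogate_model, color_library, n_candidates=10):
    regions = segment(image)                          # semantic regions, e.g. from Swin-T masks
    best_adv, best_loss = image, float("-inf")
    for _ in range(n_candidates):
        # draw a realistic target color distribution for every region
        targets = {r: sample_color_distribution(color_library, r) for r in regions}
        candidate = recolor(image, regions, targets)  # shift each region toward its target colors
        loss = adversarial_loss(surrogate_model, candidate)
        if loss > best_loss:
            best_adv, best_loss = candidate, loss
    return best_adv
```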
If you just want to quickly reproduce the results of our paper, run:
```bash
git clone https://github.com/VL-Group/Natural-Color-Fool.git
conda create -n ncf python==3.8
conda activate ncf
conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.1 -c pytorch -c conda-forge
conda install matplotlib h5py scipy tqdm
pip install wandb timm

cd Natural-Color-Fool
wget -P ./dataset/ https://github.com/VL-Group/Natural-Color-Fool/releases/download/data/images.zip
unzip -q -d ./dataset/ ./dataset/images.zip
wget -P ./dataset/ https://github.com/VL-Group/Natural-Color-Fool/releases/download/data/lib_299.zip
unzip -q -d ./dataset/ ./dataset/lib_299.zip
wget -P ./segm/ https://github.com/VL-Group/Natural-Color-Fool/releases/download/data/masks.zip
unzip -q -d ./segm/ ./segm/masks.zip

python main.py --gpu 0
```
- Create the environment:

  ```bash
  conda create -n ncf python==3.8
  conda activate ncf
  conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.1 -c pytorch -c conda-forge
  conda install matplotlib
  pip install mmcv-full==1.3.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
  pip install mmsegmentation==0.11.0
  conda install h5py scipy tqdm
  pip install wandb timm
  ```
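  Optionally, sanity-check that the pinned versions are picked up. This is a quick check we suggest, not a script from the repository:

  ```python
  # Verify the pinned versions and CUDA availability.
  import torch, torchvision, mmcv, mmseg

  print(torch.__version__, torchvision.__version__)  # expected: 1.10.1, 0.11.2
  print(torch.cuda.is_available())                   # True if the CUDA 11.1 toolkit is usable
  print(mmcv.__version__, mmseg.__version__)         # expected: 1.3.0, 0.11.0
  ```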
- Datasets: download the ImageNet-compatible dataset from Releases and unzip it in `./dataset/`.
- Color Distribution Library: download it from Releases and unzip it in `./dataset/`.
To reproduce the results of this paper, you need to obtain masks for all images using the semantic segmentation model Swin-T.
- Configure the semantic segmentation environment: clone the Swin-Transformer-Semantic-Segmentation repository to any location and install it:

  ```bash
  git clone https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation.git
  cd Swin-Transformer-Semantic-Segmentation
  pip install -e .
  ```
- Download the pre-trained weights (or here) for the semantic segmentation model Swin-T and unzip them in `./segm/pretrained/`.
- To perform semantic segmentation of images, run:

  ```bash
  python segm/get_segMasks.py
  ```
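  For reference, the core of this step with the mmsegmentation 0.x API looks roughly like the sketch below; the config, checkpoint, and image paths are placeholders, and `segm/get_segMasks.py` is the actual script that loops over the whole dataset:

  ```python
  # Rough sketch: produce a per-pixel semantic mask with a Swin-T segmentor.
  from mmseg.apis import init_segmentor, inference_segmentor

  config = "path/to/upernet_swin_tiny_ade20k_config.py"      # placeholder config path
  checkpoint = "./segm/pretrained/swin_tiny_upernet.pth"     # placeholder checkpoint path
  model = init_segmentor(config, checkpoint, device="cuda:0")
  mask = inference_segmentor(model, "path/to/image.png")[0]  # numpy array of class indices
  ```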
- Store the color distribution space of each image in advance:

  ```bash
  python dataset/get_lib.py
  ```
- To generate adversarial examples, run:

  ```bash
  python main.py
  ```

  The results are stored in `./adv/`.
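As an optional, informal check (not part of the repository), the generated examples can be fed to an unseen black-box model with a few lines of timm code; the model choice and file handling below are only an example:

```python
# Spot-check: run a black-box model over the adversarial examples in ./adv/.
import os

import timm
import torch
from PIL import Image
from timm.data import create_transform, resolve_data_config

model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
transform = create_transform(**resolve_data_config({}, model=model))

adv_dir = "./adv/"
for name in sorted(os.listdir(adv_dir))[:5]:  # look at a handful of images
    x = transform(Image.open(os.path.join(adv_dir, name)).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pred = model(x).argmax(dim=1).item()
    print(name, "-> predicted ImageNet class", pred)
```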
The parameters of NCF are set in `config_NCF.yaml`. To test different models, modify the `white_models_name` and `black_models_name` entries in `config_NCF.yaml`.
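For illustration only, such a configuration might look like the snippet below; the model names are assumptions, so consult the `config_NCF.yaml` shipped with the repository for the exact keys and accepted values:

```yaml
# Illustrative excerpt of config_NCF.yaml (hypothetical values, not the defaults)
white_models_name: ["resnet50"]               # surrogate (white-box) model(s) used to craft examples
black_models_name: ["vgg19", "inception_v3"]  # black-box models used to measure transferability
```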
The sources of the pre-trained weights used in this paper are as follows:

- CNNs: official pre-trained weights from torchvision.
- Transformers: pre-trained weights from the timm library.
- $\rm Inc\mbox{-}v3_{ens3}$, $\rm IncRes\mbox{-}v2_{ens}$: pre-trained weights from the repo tf_to_pytorch_model.
- Others: pre-trained weights from the corresponding papers.
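For example, the CNN and Transformer weights can be pulled straight from torchvision and timm (the model names below are illustrative; the adversarially trained ensembles instead come from the converted tf_to_pytorch_model checkpoints):

```python
# Load ImageNet-pretrained models: CNNs from torchvision, Transformers from timm.
import timm
import torchvision.models as tv_models

resnet50 = tv_models.resnet50(pretrained=True)                        # official torchvision weights
vit_b16 = timm.create_model("vit_base_patch16_224", pretrained=True)  # timm weights
```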
If you find this work useful in your research, please consider citing:
```bibtex
@inproceedings{yuan2022natural,
  author    = {Shengming Yuan and
               Qilong Zhang and
               Lianli Gao and
               Yaya Chen and
               Jingkuan Song},
  title     = {Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks},
  booktitle = {NeurIPS},
  year      = {2022}
}
```