Evaluation system for image generation using GANs.

This repository is based on stylegan3, with several new capabilities:
- Tools for heat-map visualization of how the feature distributions of different feature extractors influence the evaluation results (`grad_cam.py`).
- Tools for matching the label-count histogram of a generated dataset to the histogram of the real dataset (`label_match.py`, `gen_match.py`, `histogram.py`).
- Centered Kernel Alignment metrics (`cka`).
- Multi-level metrics computed over selectable feature-extractor layers (`layers`).
- General improvements: a new and stable evaluation system for image generation using GANs.
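As background for the `cka` metric: Centered Kernel Alignment compares two sets of features directly by representation similarity rather than by Gaussian statistics. A minimal linear-CKA sketch in NumPy (the `linear_cka` helper below is illustrative, not the repository's implementation):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X: (n_samples, d1) features from one extractor/layer.
    Y: (n_samples, d2) features from another extractor/layer.
    Returns a similarity in [0, 1]; 1 means the representations agree
    up to an orthogonal transform and isotropic scaling.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based formulation for the linear kernel.
    cross = np.linalg.norm(X.T @ Y) ** 2   # ||X^T Y||_F^2
    norm_x = np.linalg.norm(X.T @ X)       # ||X^T X||_F
    norm_y = np.linalg.norm(Y.T @ Y)       # ||Y^T Y||_F
    return cross / (norm_x * norm_y)
```

Because the score is invariant to orthogonal transforms of either feature space, it is well suited to comparing features from different extractors or layers.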
Compatibility:
- Compatible with the original metrics and operations of stylegan3.
- Linux and Windows are supported, but we recommend Linux for performance and compatibility reasons.
- 1–8 high-end NVIDIA GPUs with at least 12 GB of memory. We have done all testing and development using Tesla V100 and A100 GPUs.
- 64-bit Python 3.8 and PyTorch 1.9.0 (or later). See https://pytorch.org for PyTorch install instructions.
- CUDA toolkit 11.1 or later. (Why is a separate CUDA toolkit installation required? See Troubleshooting).
- GCC 7 or later (Linux) or Visual Studio (Windows) compilers. Recommended GCC version depends on CUDA version, see for example CUDA 11.4 system requirements.
- Python libraries: see environment.yml for exact library dependencies. You can use the following commands with Miniconda3 to create and activate your StyleGAN3 Python environment:
```bash
conda env create -f environment.yml
conda activate stylegan3
```
- Docker users:
- Ensure you have correctly installed the NVIDIA container runtime.
- Use the provided Dockerfile to build an image with the required library dependencies.
The code relies heavily on custom PyTorch extensions that are compiled on the fly using NVCC. On Windows, compilation requires Microsoft Visual Studio. We recommend installing Visual Studio Community Edition and adding it to `PATH` using `"C:\Program Files (x86)\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build\vcvars64.bat"`.
See Troubleshooting for help on common installation and run-time problems.
See stylegan3 for basic operations such as generating images and training.
```bash
# Pre-trained network pickle: specify the dataset explicitly, print the result to stdout, and save it to a txt file.
python calc_metrics.py \
    --metrics=fid50k_full \
    --data /mnt/petrelfs/zhangyichi/data/ffhq256_50k.zip \
    --eval_bs=1000 \
    --layers=Conv2d_4a_3x3,Mixed_5d,Mixed_6e,Mixed_7c \
    --mirror=1 \
    --cache=1 \
    --feature_save_flag=1 \
    --cfg=stylegan2 \
    --random=0 \
    --max_real=50000 \
    --num_gen=50000 \
    --save_name=ffhq_full_vs_ffhq_50K_random_new_set \
    --generate /mnt/petrelfs/zhangyichi/generate_datasets/ffhq/random_50K_ffhq \
    --network /mnt/petrelfs/zhangyichi/fid/stylegan2-ffhq-256x256.pkl
```
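For reference, `fid50k_full` reduces each feature set to a mean vector and covariance matrix, then computes the Fréchet distance between the two Gaussians: `||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})`. A minimal NumPy/SciPy sketch (the `frechet_distance` helper name is ours, not the repository's API):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    mu1, mu2: (d,) feature means; sigma1, sigma2: (d, d) feature covariances.
    """
    diff = mu1 - mu2
    # Matrix square root of the covariance product.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerics
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical statistics give a distance of 0; the metric grows with both mean displacement and covariance mismatch.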
```bash
# Metrics: fid/kid/cka. Choose detectors for grad_cam and save the result images to an HTML page.
python -u grad_cam.py \
    --metrics=fid \
    --detectors=inception,clip,moco_vit_i,clip_vit_B16 \
    --stats_path /mnt/petrelfs/zhangyichi/stats/mu_sigma \
    --html_name=visualize_fid_ffhq \
    --generate_image_path /mnt/petrelfs/zhangyichi/generate_datasets/ffhq_cam \
    --outdir /mnt/petrelfs/zhangyichi/grad_cam/processed_ffhq_fid
```
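As background for `grad_cam.py`: Grad-CAM weights each channel of a target convolutional layer's activations by the spatial mean of its gradient, sums the weighted channels, and keeps only positive evidence. A minimal NumPy sketch of that core step (illustrative only; the repository's version runs per detector and metric):

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Core Grad-CAM step.

    activations, gradients: (channels, H, W) arrays from a target conv layer.
    Returns an (H, W) heat map normalized to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for visualization
    return cam
```

The resulting map is typically upsampled to the input resolution and overlaid on the image as a heat map.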
```bash
# Compute label statistics of the real dataset using inception_v3/resnet50.
python -u label_match.py \
    --real_dataset /mnt/petrelfs/zhangyichi/data/ffhq256_50k.zip \
    --inception_label /mnt/petrelfs/zhangyichi/data/real_ffhq_labels_inception.pickle \
    --resnet50_label /mnt/petrelfs/zhangyichi/data/real_ffhq_labels_resnet50.pickle
```
```bash
# Generate images whose label histogram matches the real dataset; save the images to outdir.
python -u gen_match.py \
    --seeds=8000000-8100000 \
    --trunc=1 \
    --limit=0.001 \
    --cfg=stylegan2 \
    --num_real=50000 \
    --inception_label /mnt/petrelfs/zhangyichi/data/real_ffhq_labels_inception.pickle \
    --detector=inception \
    --outdir /mnt/petrelfs/zhangyichi/generate_datasets/ffhq/match_inception_ffhq_final \
    --network /mnt/petrelfs/zhangyichi/fid/stylegan2-ffhq-256x256.pkl
```
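One simple way to match a generated dataset's label histogram to the real one is greedy selection: classify each candidate image and accept it only while its label's bin is still below the real dataset's count. A sketch under that assumption (`match_labels` is illustrative, not necessarily `gen_match.py`'s exact algorithm):

```python
from collections import Counter

def match_labels(real_labels, candidate_labels):
    """Greedily select candidates so their label histogram matches the real one.

    real_labels: predicted class ids of the real dataset (the target histogram).
    candidate_labels: predicted class ids of generated candidates, in order.
    Returns indices into candidate_labels for the accepted samples.
    """
    target = Counter(real_labels)  # desired count per label
    taken = Counter()              # accepted count per label so far
    selected = []
    for i, label in enumerate(candidate_labels):
        if taken[label] < target[label]:
            taken[label] += 1
            selected.append(i)
    return selected
```

Over-represented labels are rejected once their bin is full, which is why the command above sweeps a large seed range to find enough candidates for rare labels.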
```bash
# Choose the real and generated datasets to compare in the histogram, and the detector to use.
python -u histogram.py \
    --real_dataset /mnt/petrelfs/zhangyichi/data/ffhq256_50k.zip \
    --gen_dataset /mnt/petrelfs/zhangyichi/generate_datasets/ffhq/match_inception_ffhq_new \
    --detector inception_v3 \
    --histogram_save /mnt/petrelfs/zhangyichi/histogram/ffhq_inceptionset_inception.png
```
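Beyond the saved plot, one way to quantify how closely the real and generated label histograms agree is histogram intersection over normalized label counts. A small sketch (both helper names are ours, not the repository's):

```python
import numpy as np

def label_histogram(labels, num_classes):
    """Normalized label-count histogram over integer class ids."""
    h = np.bincount(labels, minlength=num_classes).astype(float)
    return h / h.sum()

def histogram_intersection(p, q):
    """Overlap between two normalized histograms; 1.0 means identical."""
    return float(np.minimum(p, q).sum())
```

A score near 1.0 indicates the generated dataset reproduces the real dataset's label distribution well; lower scores flag mode imbalance.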
Recommended metrics:
- `moco_vit_i_cka50k_full`: Centered Kernel Alignment using a MoCo-ViT extractor trained on ImageNet.
- `fid50k_full`: Fréchet inception distance[1] against the full dataset.
- `kid50k_full`: Kernel inception distance[2] against the full dataset.
- `pr50k3_full`: Precision and recall[3] against the full dataset.
- `ppl2_wend`: Perceptual path length[4] in W, endpoints, full image.
- `eqt50k_int`: Equivariance[5] w.r.t. integer translation (EQ-T).
- `eqt50k_frac`: Equivariance w.r.t. fractional translation (EQ-Tfrac).
- `eqr50k`: Equivariance w.r.t. rotation (EQ-R).
Legacy metrics:
- `cka50k`: Centered Kernel Alignment against 50k real images.
- `fid50k`: Fréchet inception distance against 50k real images.
- `kid50k`: Kernel inception distance against 50k real images.
- `pr50k3`: Precision and recall against 50k real images.
- `is50k`: Inception score[6] for CIFAR-10.
References:
1. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, Heusel et al., 2017
2. Demystifying MMD GANs, Bińkowski et al., 2018
3. Improved Precision and Recall Metric for Assessing Generative Models, Kynkäänniemi et al., 2019
4. A Style-Based Generator Architecture for Generative Adversarial Networks, Karras et al., 2018
5. Alias-Free Generative Adversarial Networks, Karras et al., 2021
6. Improved Techniques for Training GANs, Salimans et al., 2016