
Module google/imagenet/pnasnet_large/feature_vector/1

Module URL: https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/1

Overview

PNASNet-5 is a family of convolutional neural networks for image classification. The architecture of its convolutional cells (or layers) was found by Progressive Neural Architecture Search. PNASNet reuses several techniques from its precursor NASNet, including regularization by path dropout. PNASNet and NASNet were originally published by Chenxi Liu et al.: "Progressive Neural Architecture Search", 2017, and Barret Zoph et al.: "Learning Transferable Architectures for Scalable Image Recognition", 2017, respectively.

PNASNets come in various sizes. This TF-Hub module uses the TF-Slim implementation pnasnet_large of PNASNet-5 for ImageNet, which uses 12 cells (plus 2 for the "ImageNet stem") and starts with 216 convolutional filters (after the stem). It has an input size of 331x331 pixels.

The module contains a trained instance of the network, packaged to get feature vectors from images. If you want the full model including the classification head it was originally trained for, use module google/imagenet/pnasnet_large/classification/1 instead.

Training

The checkpoint exported into this module was pnasnet-5_large_2017_12_13/model.ckpt downloaded from TF-Slim's pre-trained models. Its weights were originally obtained by training on the ILSVRC-2012-CLS dataset for image classification ("ImageNet").

Usage

This module implements the common signature for computing image feature vectors. It can be used like

import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/1")
height, width = hub.get_expected_image_size(module)  # (331, 331) for this module.
images = ...  # A batch of images with shape [batch_size, height, width, 3].
features = module(images)  # Features with shape [batch_size, num_features].

...or using the signature name image_feature_vector. The output for each image in the batch is a feature vector of size num_features = 4320.
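As an illustration of the explicit form, the following sketch assumes the usual TF1 hub.Module calling convention for named signatures, with the common image input key "images" and output key "default":

features = module(dict(images=images),
                  signature="image_feature_vector",
                  as_dict=True)["default"]  # Shape [batch_size, 4320].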

For this module, the size of the input image is fixed to height x width = 331 x 331 pixels. The input images are expected to have color values in the range [0,1], following the common image input conventions.
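A minimal preprocessing sketch in TF1-style code, assuming a JPEG-encoded input file (image_path is a placeholder) and the height and width obtained from the module above:

import tensorflow as tf

image = tf.image.decode_jpeg(tf.read_file(image_path), channels=3)  # uint8 values in [0, 255].
image = tf.image.convert_image_dtype(image, tf.float32)             # float32 values scaled to [0, 1].
image = tf.image.resize_images(image, [height, width])              # Resize to the expected 331x331 input.
images = tf.expand_dims(image, 0)                                    # Add batch dimension -> [1, 331, 331, 3].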

Fine-tuning

Consumers of this module can fine-tune it.

Fine-tuning requires importing the graph version with tag set {"train"} in order to operate batch normalization and dropout in training mode. The dropout probability in NASNet path dropout is not scaled with the training steps of fine-tuning and remains at the final (maximal) value from the initial training.
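A minimal fine-tuning sketch with the TF1 hub.Module API; images, labels, and num_classes are placeholders for your own input pipeline and target task, and the optimizer and learning rate are illustrative choices, not recommendations:

import tensorflow as tf
import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/1",
                    trainable=True, tags={"train"})    # "train" graph: BN and dropout in training mode.
features = module(images)                              # images as above, 331x331, values in [0, 1].
logits = tf.layers.dense(features, units=num_classes)  # New classifier head for the target task.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)  # Batch-norm moving-average updates.
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)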