Module URL: https://tfhub.dev/google/imagenet/inception_v3/classification/1
Inception V3 is a neural network architecture for image classification, originally published by
- Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna: "Rethinking the Inception Architecture for Computer Vision", 2015.
This TF-Hub module uses the TF-Slim implementation of inception_v3. The module contains a trained instance of the network, packaged to do the image classification that the network was trained on. If you merely want to transform images into feature vectors, use the module google/imagenet/inception_v3/feature_vector/1 instead, and save the space occupied by the classification layer.
The checkpoint exported into this module was inception_v3_2016_08_28/inception_v3.ckpt, downloaded from TF-Slim's pre-trained models. Its weights were originally obtained by training on the ILSVRC-2012-CLS dataset for image classification ("ImageNet").
This module implements the common signature for image classification. It can be used like
```python
import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/classification/1")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
logits = module(images)  # Logits with shape [batch_size, num_classes].
```
...or using the signature name image_classification. The indices into logits are the num_classes = 1001 classes of the classification from the original training (see above).
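For instance, the same call can be made through the explicit signature name, and a predicted class index recovered with an arg-max over the logits. This is a sketch that reuses the module and images defined in the snippet above:

```python
import tensorflow as tf

# Equivalent to the default call above, but via the explicit signature name.
logits = module(images, signature="image_classification")

# Index of the highest-scoring class among the 1001 classes, for each image.
predicted_class = tf.argmax(logits, axis=1)
```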
This module can also be used to compute image feature vectors, using the signature name image_feature_vector.
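For example, a minimal sketch of computing feature vectors with the same module instance (again reusing module and images from above):

```python
# Same module instance and input batch as before, different output signature:
# the activations of the layer just below the classification head.
feature_vectors = module(images, signature="image_feature_vector")
# feature_vectors has shape [batch_size, num_features].
```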
For this module, the size of the input image is fixed to height x width = 299 x 299 pixels. The input images are expected to have color values in the range [0,1], following the common image input conventions.
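As an illustration, here is a sketch of one way to bring a JPEG file into this format with TF 1.x image ops; the function and path names are placeholders, not part of the module's API:

```python
import tensorflow as tf

def load_image(jpeg_path, height=299, width=299):
  """Decodes a JPEG, scales values into [0,1], and resizes to the expected size."""
  image = tf.image.decode_jpeg(tf.read_file(jpeg_path), channels=3)
  # convert_image_dtype turns uint8 values into floats in the range [0, 1].
  image = tf.image.convert_image_dtype(image, tf.float32)
  # Resize to the module's fixed input resolution.
  image = tf.image.resize_images(image, [height, width])
  return image
```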
In principle, consumers of this module can fine-tune it. However, fine-tuning through a large classification layer might be prone to overfitting.
Fine-tuning requires importing the graph version with tag set {"train"} in order to operate batch normalization in training mode.
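A minimal sketch of loading the module for fine-tuning (tags and trainable are constructor arguments of hub.Module; the surrounding training setup is omitted):

```python
import tensorflow_hub as hub

# Load the graph variant tagged {"train"} so batch normalization runs in
# training mode, and mark the module's variables as trainable.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/inception_v3/classification/1",
    tags={"train"},
    trainable=True)
```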