Module URL: https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1
Inception V3 is a neural network architecture for image classification, originally published by
- Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna: "Rethinking the Inception Architecture for Computer Vision", 2015.
This TF-Hub module uses the TF-Slim implementation of `inception_v3`.
The module contains a trained instance of the network, packaged to get
feature vectors from images.
If you want the full model including the classification it was originally
trained for, use module `google/imagenet/inception_v3/classification/1` instead.
The checkpoint exported into this module was `inception_v3_2016_08_28/inception_v3.ckpt`,
downloaded from TF-Slim's pre-trained models.
Its weights were originally obtained by training on the ILSVRC-2012-CLS
dataset for image classification ("Imagenet").
This module implements the common signature for computing image feature vectors. It can be used like:

```python
import tensorflow_hub as hub

module = hub.Module("https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1")
height, width = hub.get_expected_image_size(module)
images = ...  # A batch of images with shape [batch_size, height, width, 3].
features = module(images)  # Features with shape [batch_size, num_features].
```
...or using the signature name `image_feature_vector`. The output for each image
in the batch is a feature vector of size `num_features` = 2048.
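As a sketch of the named-signature form, reusing `module` and `images` from above and assuming the input key `images` and output key `default` of the common image feature vector signature:

```python
# Same computation, selecting the signature explicitly by name; the feature
# vector is returned under the "default" output key.
outputs = module(dict(images=images), signature="image_feature_vector", as_dict=True)
features = outputs["default"]  # Shape [batch_size, 2048].
```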
For this module, the size of the input image is fixed to
height x width = 299 x 299 pixels.
The input images are expected to have color values in the range [0,1],
following the common image input conventions.
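To illustrate these conventions, the sketch below decodes a JPEG file and resizes it to the expected shape; the file path and the particular `tf.image` ops are assumptions for illustration, not part of the module itself:

```python
import tensorflow as tf

def load_image(path, height=299, width=299):
    # Decode a JPEG and convert to float32 in [0, 1], as the module expects.
    image = tf.image.decode_jpeg(tf.read_file(path), channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)  # scales to [0, 1]
    image = tf.image.resize_images(image, [height, width])
    return image

# Hypothetical batch of one image; "photo.jpg" is a placeholder path.
images = tf.expand_dims(load_image("photo.jpg"), axis=0)
```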
Consumers of this module can fine-tune it.
This requires importing the graph version with tag set {"train"}
in order to operate batch normalization in training mode.
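A minimal sketch of loading the module for fine-tuning, using the `trainable` and `tags` arguments of `hub.Module`:

```python
import tensorflow_hub as hub

# Load the "train" graph version so batch normalization runs in training mode,
# and mark the module variables as trainable for fine-tuning.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1",
    trainable=True,
    tags={"train"})
features = module(images)  # images: [batch_size, 299, 299, 3] floats in [0, 1].
```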