- secretmanager:
- Add IAM helper for generic resource IAM handle.
- cloudbuild:
- Migrate to microgen in a major version.
- Various updates to autogenerated clients.
- profiler:
- Support disabling CPU profile collection.
- Log when a profile creation attempt begins.
- compute/metadata:
- Fix panic on malformed URLs.
- InstanceName returns actual instance name.
- Various updates to autogenerated clients.
- functions/metadata:
- Handle string resources in JSON unmarshaller.
- Various updates to autogenerated clients.
- Various updates to autogenerated clients.
This release drops support for Go 1.9 and Go 1.10: we continue to officially support Go 1.11, Go 1.12, and Go 1.13.
- Various updates to autogenerated clients.
- Add cloudbuild/apiv1 client.
This is an empty release that was created solely to aid in storage's module carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
This is an empty release that was created solely to aid in spanner's module carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
This is an empty release that was created solely to aid in firestore's module carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
- spanner:
- Retry "Session not found" for read-only transactions.
- Retry aborted PDMLs.
- spanner/spannertest:
- Fix a bug that caused 0x-prefixed numbers to be parsed incorrectly.
- storage:
- Add HMACKeyOptions.
- Remove *REGIONAL from StorageClass documentation. MULTI_REGIONAL, DURABLE_REDUCED_AVAILABILITY, and REGIONAL are no longer best-practice StorageClasses, but they are still acceptable values.
- trace:
- Remove cloud.google.com/go/trace. Package cloud.google.com/go/trace has been marked OBSOLETE for several years: it is now no longer provided. If you relied on this package, please vendor it or switch to using https://cloud.google.com/trace/docs/setup/go (which obsoleted it).
This is an empty release that was created solely to aid in pubsub's module carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
- compute/metadata:
- Add Email method.
- storage:
- Fix duplicated retry logic.
- Add ReaderObjectAttrs.StartOffset.
- Support reading the last N bytes of a file when a negative range is given, such as `obj.NewRangeReader(ctx, -10, -1)`; see the sketch below.
- Add HMACKey listing functionality.
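A minimal sketch of the negative-range read, assuming an existing `*storage.Client` named `client` and a `ctx`; bucket and object names are placeholders:

```go
// Read the last 10 bytes of an object by passing a negative offset.
obj := client.Bucket("my-bucket").Object("my-object")
r, err := obj.NewRangeReader(ctx, -10, -1)
if err != nil {
	// TODO: Handle error.
}
defer r.Close()
tail, err := ioutil.ReadAll(r) // tail holds the final 10 bytes
```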
- spanner/spannertest:
- Support primary keys with no columns.
- Fix MinInt64 parsing.
- Implement deletion of key ranges.
- Handle reads during a read-write transaction.
- Handle returning DATE values.
- pubsub:
- Fix Ack/Modack request size calculation.
- logging:
- Add auto-detection of monitored resources on GAE Standard.
This is an empty release that was created solely to aid in bigtable's module carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
This is an empty release that was created solely to aid in bigquery's module carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
This is an empty release that was created solely to aid in datastore's module carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
- datastore:
- Interface elements whose underlying types are supported are now supported.
- Reduce time to initial retry from 1s to 100ms.
- firestore:
- Add Increment transformation.
- storage:
- Allow emulator with STORAGE_EMULATOR_HOST.
- Add methods for HMAC key management.
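A sketch of the new HMAC key management surface, assuming an existing `*storage.Client`; the project ID and service account email below are placeholders:

```go
// Create an HMAC key for a service account. The Secret field is only
// populated on the newly created key.
key, err := client.CreateHMACKey(ctx, "my-project", "sa@my-project.iam.gserviceaccount.com")
if err != nil {
	// TODO: Handle error.
}
fmt.Println(key.AccessID, key.Secret)
```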
- pubsub:
- Add PublishCount and PublishLatency measurements.
- Add DefaultPublishViews and DefaultSubscribeViews for convenience of importing all views.
- Add Subscription.PushConfig.AuthenticationMethod.
- spanner:
- Allow emulator usage with SPANNER_EMULATOR_HOST.
- Add cloud.google.com/go/spanner/spannertest, a spanner emulator.
- Add cloud.google.com/go/spanner/spansql which contains types and a parser for the Cloud Spanner SQL dialect.
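A rough sketch of pointing a spanner client at the new in-memory spannertest server; the database path is arbitrary, and the wiring (via `google.golang.org/grpc`, `google.golang.org/api/option`, and `google.golang.org/api/iterator`-style error handling) reflects typical usage rather than a prescribed setup:

```go
// Start the fake server on an ephemeral port and dial it directly.
srv, err := spannertest.NewServer("localhost:0")
if err != nil {
	// TODO: Handle error.
}
conn, err := grpc.Dial(srv.Addr, grpc.WithInsecure())
if err != nil {
	// TODO: Handle error.
}
client, err := spanner.NewClient(ctx, "projects/p/instances/i/databases/d", option.WithGRPCConn(conn))
```

Alternatively, set SPANNER_EMULATOR_HOST and create the client as usual.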
- asset:
- Add apiv1p2beta1 client.
This is an empty release that was created solely to aid in logging's module carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
- bigtable:
- Add an admin method to update an instance and clusters.
- Fix bttest regex matching behavior for alternations (things like `|a`).
- Expose BlockAllFilter filter.
- bigquery:
- Add Routines API support.
- storage:
- Add read-only Bucket.LocationType.
- logging:
- Add TraceSampled to Entry.
- Fix to properly extract {Trace, Span}Id from X-Cloud-Trace-Context.
- pubsub:
- Add Cloud Key Management to TopicConfig.
- Change ExpirationPolicy to optional.Duration.
- automl:
- Add apiv1beta1 client.
- iam:
- Fix compilation problem with iam/credentials/apiv1.
- bigtable:
- Check results from PredicateFilter in bttest, which fixes certain false matches.
- profiler:
- debugLog checks user defined logging options before logging.
- spanner:
- PartitionedUpdates respect query parameters.
- StartInstance allows specifying cloud API access scopes.
- bigquery:
- Use empty slice instead of nil for ValueSaver, fixing an issue with zero-length, repeated, nested fields causing panics.
- firestore:
- Return same number of snapshots as doc refs (in the form of duplicate records) during GetAll.
- replay:
- Change references to IPv4 addresses to localhost, making replay compatible with IPv6.
- all:
- Update to protobuf-golang v1.3.1.
- datastore:
- Attempt to decode GAE-encoded keys if initial decoding attempt fails.
- Support integer time conversion.
- pubsub:
- Add PublishSettings.BundlerByteLimit. If users receive pubsub.ErrOverflow, this value should be adjusted higher.
- Use IPv6 compatible target in testutil.
- bigtable:
- Fix Latin-1 regexp filters in bttest, allowing \C.
- Expose PassAllFilter.
- profiler:
- Add log messages for slow path in start.
- Fix start to allow retry until success.
- firestore:
- Add admin client.
- containeranalysis:
- Add apiv1 client.
- grafeas:
- Add apiv1 client.
- bigtable:
- Implement DeleteInstance in bttest.
- Return an error on invalid ReadRowsRequest.RowRange key ranges in bttest.
- bigquery:
- Move RequirePartitionFilter outside of TimePartitioning.
- Expose models API.
- firestore:
- Allow array values in create and update calls.
- Add CollectionGroup method.
- pubsub:
- Add ExpirationPolicy to Subscription.
- storage:
- Add V4 signing.
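A hedged sketch of generating a V4 signed URL; it assumes the `SignedURLOptions.Scheme` field and uses placeholder credentials:

```go
u, err := storage.SignedURL("my-bucket", "my-object", &storage.SignedURLOptions{
	Scheme:         storage.SigningSchemeV4,
	Method:         "GET",
	GoogleAccessID: "sa@my-project.iam.gserviceaccount.com",
	PrivateKey:     pemKey, // PEM-encoded service account private key
	Expires:        time.Now().Add(15 * time.Minute),
})
if err != nil {
	// TODO: Handle error.
}
fmt.Println(u)
```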
- rpcreplay:
- Match streams by first sent request. This further improves rpcreplay's ability to distinguish streams.
- httpreplay:
- Set up Man-In-The-Middle config only once. This should improve proxy creation when multiple proxies are used in a single process.
- Remove error on empty Content-Type, allowing requests with no Content-Type header but a non-empty body.
- all:
- Fix an edge case bug in auto-generated library pagination by properly propagating pagetoken.
This update includes a substantial reduction in our transitive dependency list by way of updating to [email protected].
- spanner:
- Error implements GRPCStatus, allowing status.Convert.
- bigtable:
- Fix a bug in bttest that prevents single column queries returning results that match other filters.
- Remove verbose retry logging.
- logging:
- Ensure RequestUrl has proper UTF-8, removing the need for users to wrap and rune replace manually.
- recaptchaenterprise:
- Add v1beta1 client.
- phishingprotection:
- Add v1beta1 client.
This patch release re-builds the go.sum; this was not possible in the previous release.
- firestore:
- Add sentinel value DetectProjectID for auto-detecting project ID.
- Add OpenCensus tracing for public methods.
- Marked stable. All future changes come with a backwards compatibility guarantee.
- Removed firestore/apiv1beta1. All users relying on this low-level library should migrate to firestore/apiv1. Note that most users should use the high-level firestore package instead.
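A minimal sketch of the DetectProjectID sentinel added above:

```go
// The client resolves the project ID from the environment and credentials.
client, err := firestore.NewClient(ctx, firestore.DetectProjectID)
if err != nil {
	// TODO: Handle error.
}
defer client.Close()
```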
- pubsub:
- Allow large messages in synchronous pull case.
- Cap bundler byte limit. This should prevent OOM conditions when there are a very large number of message publishes occurring.
- storage:
- Add ETag to BucketAttrs and ObjectAttrs.
- datastore:
- Removed some non-sensical OpenCensus traces.
- webrisk:
- Add v1 client.
- asset:
- Add v1 client.
- cloudtasks:
- Add v2 client.
This patch release removes github.com/golang/lint from the transitive dependency list, resolving `go get -u` problems.
Note: this release intentionally has a broken go.sum. Please use v0.37.4.
This patch release is mostly intended to bring in v0.3.0 of google.golang.org/api, which fixes a GCF deployment issue.
Note: we had, to date, accidentally marked Redis as stable. In this release, we've fixed that by downgrading its documentation to alpha, matching its status in other languages and in the docs.
- all:
- Document context in generated libraries.
Small go.mod version bumps to bring in v0.2.0 of google.golang.org/api, which introduces a new oauth2 url.
- spanner:
- Add BatchDML method.
- Reduced initial time between retries.
- bigquery:
- Produce better error messages for InferSchema.
- Add logical type control for avro loads.
- Add support for the GEOGRAPHY type.
- datastore:
- Add sentinel value DetectProjectID for auto-detecting project ID.
- Allow flatten tag on struct pointers.
- Fixed a bug that caused invalid queries to panic; they now return an error instead.
- profiler:
- Add ability to override GCE zone and instance.
- pubsub:
- BEHAVIOR CHANGE: Refactor error code retry logic. RPCs should now more consistently retry specific error codes based on whether they're idempotent or non-idempotent.
- httpreplay: Fixed a bug where a non-GET request with a zero-length body caused the Content-Length header to be dropped.
- iot:
- Add new apiv1 client.
- securitycenter:
- Add new apiv1 client.
- cloudscheduler:
- Add new apiv1 client.
- spanner:
- Reduce minimum retry backoff from 1s to 100ms. This makes time between retries much faster and should improve latency.
- storage:
- Add support for Bucket Policy Only.
- kms:
- Add ResourceIAM helper method.
- Deprecate KeyRingIAM and CryptoKeyIAM. Please use ResourceIAM.
- firestore:
- Switch from v1beta1 API to v1 API.
- Allow emulator with FIRESTORE_EMULATOR_HOST.
- bigquery:
- Add NumLongTermBytes to Table.
- Add TotalBytesProcessedAccuracy to QueryStatistics.
- irm:
- Add new v1alpha2 client.
- talent:
- Add new v4beta1 client.
- rpcreplay:
- Fix connection to work with grpc >= 1.17.
- It is now required for an actual gRPC server to be running for Dial to succeed.
- spanner:
- Adds OpenCensus views back to public API.
- all:
- Add go.mod and go.sum.
- Switch usage of gax-go to gax-go/v2.
- bigquery:
- Fix bug where time partitioning could not be removed from a table.
- Fix panic that occurred with empty query parameters.
- bttest:
- Fix bug where deleted rows were returned by ReadRows.
- bigtable/emulator:
- Configure max message size to 256 MiB.
- firestore:
- Allow non-transactional queries in transactions.
- Allow StartAt/EndBefore on direct children at any depth.
- QuerySnapshotIterator.Stop may be called in an error state.
- Fix a bug that prevented resetting transaction write state between retries.
- functions/metadata:
- Make Metadata.Resource a pointer.
- logging:
- Make SpanID available in logging.Entry.
- metadata:
- Wrap non-200 error codes in a typed error.
- profiler:
- Add function to check if function name is within a particular file in the profile.
- Set parent field in create profile request.
- Return kubernetes client to start cluster, so client can be used to poll cluster.
- Add function for checking if filename is in profile.
- pubsub:
- Fix bug where messages expired without an initial modack in synchronous=true mode.
- Receive does not retry ResourceExhausted errors.
- spanner:
- client.Close now cancels existing requests and should be much faster for large amounts of sessions.
- Correctly allow MinOpened sessions to be spun up.
- functions/metadata:
- Switch to using JSON in context.
- Make Resource a value.
- vision: Fix ProductSearch return type.
- datastore: Add an example for how to handle MultiError.
- compute: Removes an erroneously added go.mod.
- logging: Populate source location in fromLogEntry.
- bttest:
- Add support for apply_label_transformer.
- expr:
- Add expr library.
- firestore:
- Support retrieval of missing documents.
- kms:
- Add IAM methods.
- pubsub:
- Clarify extension documentation.
- scheduler:
- Add v1beta1 client.
- vision:
- Add product search helper.
- Add new product search client.
Note: This release is the last to support Go 1.6 and 1.8.
- bigquery:
- Add support for removing an expiration.
- Ignore NeverExpire in Table.Create.
- Validate table expiration time.
- cbt:
- Add note about not supporting arbitrary bytes.
- datastore:
- Align key checks.
- firestore:
- Return an error when using Start/End without providing values.
- pubsub:
- Add pstest Close method.
- Clarify MaxExtension documentation.
- securitycenter:
- Add v1beta1 client.
- spanner:
- Allow nil in mutations.
- Improve doc of SessionPoolConfig.MaxOpened.
- Increase session deletion timeout from 5s to 15s.
- bigtable:
- Group mutations across multiple requests.
- bigquery:
- Link to bigquery troubleshooting errors page in bigquery.Error comment.
- cbt:
- Fix go generate command.
- Document usage of both maxage + maxversions.
- datastore:
- Passing nil keys results in ErrInvalidKey.
- firestore:
- Clarify what Document.DataTo does with untouched struct fields.
- profiler:
- Validate service name in agent.
- pubsub:
- Fix deadlock with pstest and ctx.Cancel.
- Fix a possible deadlock in pstest.
- trace:
- Update doc URL with new fragment.
Special thanks to @fastest963 for going above and beyond helping us to debug hard-to-reproduce Pub/Sub issues.
- spanner: DML support added. See https://godoc.org/cloud.google.com/go/spanner#hdr-DML_and_Partitioned_DML for more information.
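A minimal sketch of the new DML support inside a read-write transaction; table and column names are made up:

```go
_, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
	rowCount, err := txn.Update(ctx, spanner.Statement{
		SQL:    "UPDATE Singers SET LastName = @name WHERE SingerId = 1",
		Params: map[string]interface{}{"name": "Grant"},
	})
	if err != nil {
		return err
	}
	fmt.Println("updated rows:", rowCount)
	return nil
})
```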
- bigtable: bttest supports row sample filter.
- functions: metadata package added for accessing Cloud Functions resource metadata.
- bigtable:
- Add retry to all idempotent RPCs.
- cbt supports complex GC policies.
- Emulator supports arbitrary bytes in regex filters.
- firestore: Add ArrayUnion and ArrayRemove.
- logging: Add the ContextFunc option to supply the context used for asynchronous RPCs.
- profiler: Ignore NotDefinedError when fetching the instance name
- pubsub:
- BEHAVIOR CHANGE: Receive doesn't retry if an RPC returns codes.Cancelled.
- BEHAVIOR CHANGE: Receive retries on Unavailable instead of returning.
- Fix deadlock.
- Restore Ack/Nack/Modacks metrics.
- Improve context handling in iterator.
- Implement synchronous mode for Receive.
- pstest: add Pull.
- spanner: Add a metric for the number of sessions currently opened.
- storage:
- Canceling the context releases all resources.
- Add additional RetentionPolicy attributes.
- vision/apiv1: Add LocalizeObjects method.
- bigtable:
- Emulator returns Unimplemented for snapshot RPCs.
- bigquery:
- Support zero-length repeated, nested fields.
- cloud assets:
- Add v1beta client.
- datastore:
- Don't nil out transaction ID on retry.
- firestore:
- BREAKING CHANGE: When watching a query with Query.Snapshots, QuerySnapshotIterator.Next returns a QuerySnapshot which contains read time, result size, change list and the DocumentIterator (previously, QuerySnapshotIterator.Next returned just the DocumentIterator). See: https://godoc.org/cloud.google.com/go/firestore#Query.Snapshots.
- Add array-contains operator.
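A hedged sketch of consuming the new QuerySnapshot returned by Query.Snapshots (collection and field names are hypothetical; `iterator` is `google.golang.org/api/iterator`):

```go
iter := client.Collection("cities").Where("pop", ">", 1000000).Snapshots(ctx)
defer iter.Stop()
for {
	snap, err := iter.Next()
	if err != nil {
		// TODO: Handle error.
		break
	}
	fmt.Println("read time:", snap.ReadTime, "results:", snap.Size)
	for {
		doc, err := snap.Documents.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: Handle error.
			break
		}
		fmt.Println(doc.Data())
	}
}
```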
- IAM:
- Add iam/credentials/apiv1 client.
- pubsub:
- Canceling the context passed to Subscription.Receive causes Receive to return when processing finishes on all messages currently in progress, even if new messages are arriving.
- redis:
- Add redis/apiv1 client.
- storage:
- Add Reader.Attrs.
- Deprecate several Reader getter methods: please use Reader.Attrs for these instead.
- Add ObjectHandle.Bucket and ObjectHandle.Object methods.
- bigquery:
- Allow modification of encryption configuration and partitioning options to a table via the Update call.
- Add a SchemaFromJSON function that converts a JSON table schema.
- bigtable:
- Restore cbt count functionality.
- containeranalysis:
- Add v1beta client.
- spanner:
- Fix a case where an iterator might not be closed correctly.
- storage:
- Add ServiceAccount method https://godoc.org/cloud.google.com/go/storage#Client.ServiceAccount.
- Add a method to Reader that returns the parsed value of the Last-Modified header.
- bigquery:
- Support filtering listed jobs by min/max creation time.
- Support data clustering (https://godoc.org/cloud.google.com/go/bigquery#Clustering).
- Include job creator email in Job struct.
- bigtable:
- Add `RowSampleFilter`.
- emulator: BREAKING BEHAVIOR CHANGE: Regexps in row, family, column and value filters must match the entire target string to succeed. Previously, the emulator was succeeding on partial matches. NOTE: As of this release, this change only affects the emulator when run from this repo (bigtable/cmd/emulator/cbtemulator.go). The version launched from `gcloud` will be updated in a subsequent `gcloud` release.
- dataproc: Add apiv1beta2 client.
- datastore: Save non-nil pointer fields on omitempty.
- logging: populate Entry.Trace from the HTTP X-Cloud-Trace-Context header.
- logging/logadmin: Support writer_identity and include_children.
- pubsub:
- Support labels on topics and subscriptions.
- Support message storage policy for topics.
- Use the distribution of ack times to determine when to extend ack deadlines. The only user-visible effect of this change should be that programs that call only `Subscription.Receive` need no IAM permissions other than `Pub/Sub Subscriber`.
- storage:
- Support predefined ACLs.
- Support additional ACL fields other than Entity and Role.
- Support bucket websites.
- Support bucket logging.
- Added Code of Conduct
- bigtable:
- cbt: Support a GC policy of "never".
- errorreporting:
- Support User.
- Close now calls Flush.
- Use OnError (previously ignored).
- Pass through the RPC error as-is to OnError.
- httpreplay: A tool for recording and replaying HTTP requests (for the bigquery and storage clients in this repo).
- kms: v1 client added
- logging: add SourceLocation to Entry.
- storage: improve CRC checking on read.
- bigquery: Support for the NUMERIC type.
- bigtable:
- cbt: Optionally specify columns for read/lookup
- Support instance-level administration.
- oslogin: New client for the OS Login API.
- pubsub:
- The package is now stable. There will be no further breaking changes.
- Internal changes to improve Subscription.Receive behavior.
- storage: Support updating bucket lifecycle config.
- spanner: Support struct-typed parameter bindings.
- texttospeech: New client for the Text-to-Speech API.
- bigquery: Add DDL stats to query statistics.
- bigtable:
- cbt: Add cells-per-column limit for row lookup.
- cbt: Make it possible to combine read filters.
- dlp: v2beta2 client removed. Use the v2 client instead.
- firestore, spanner: Fix compilation errors due to protobuf changes.
- bigtable:
- cbt: Support cells per column limit for row read.
- bttest: Correctly handle empty RowSet.
- Fix ReadModifyWrite operation in emulator.
- Fix API path in GetCluster.
- bigquery:
- BEHAVIOR CHANGE: Retry on 503 status code.
- Add dataset.DeleteWithContents.
- Add SchemaUpdateOptions for query jobs.
- Add Timeline to QueryStatistics.
- Add more stats to ExplainQueryStage.
- Support Parquet data format.
- datastore:
- Support omitempty for times.
- dlp:
- BREAKING CHANGE: Remove v1beta1 client. Please migrate to the v2 client, which is now out of beta.
- Add v2 client.
- firestore:
- BEHAVIOR CHANGE: Treat set({}, MergeAll) as valid.
- iam:
- Support JWT signing via SignJwt callopt.
- profiler:
- BEHAVIOR CHANGE: PollForSerialOutput returns an error when context.Done.
- BEHAVIOR CHANGE: Increase the initial backoff to 1 minute.
- Avoid returning empty serial port output.
- pubsub:
- BEHAVIOR CHANGE: Don't backoff during next retryable error once stream is healthy.
- BEHAVIOR CHANGE: Don't backoff on EOF.
- pstest: Support Acknowledge and ModifyAckDeadline RPCs.
- redis:
- Add v1 beta Redis client.
- spanner:
- Support SessionLabels.
- speech:
- Add api v1 beta1 client.
- storage:
- BEHAVIOR CHANGE: Retry reads when retryable error occurs.
- Fix delete of object in requester-pays bucket.
- Support KMS integration.
- bigquery:
- Add OpenCensus tracing.
- firestore:
- BREAKING CHANGE: If a document does not exist, return a DocumentSnapshot whose Exists method returns false. DocumentRef.Get and Transaction.Get return the non-nil DocumentSnapshot in addition to a NotFound error. DocumentRef.GetAll and Transaction.GetAll return a non-nil DocumentSnapshot instead of nil.
- Add DocumentIterator.Stop. Call Stop whenever you are done with a DocumentIterator.
- Added Query.Snapshots and DocumentRef.Snapshots, which provide realtime notification of updates. See https://cloud.google.com/firestore/docs/query-data/listen.
- Canceling an RPC now always returns a grpc.Status with codes.Canceled.
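Given the new not-found behavior above, a get-and-check might look roughly like this (collection and document IDs are placeholders; `status` and `codes` are the gRPC packages):

```go
snap, err := client.Collection("users").Doc("alice").Get(ctx)
if status.Code(err) == codes.NotFound {
	fmt.Println("missing; Exists() ==", snap.Exists()) // snap is non-nil even on NotFound
} else if err != nil {
	// TODO: Handle error.
} else {
	fmt.Println(snap.Data())
}
```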
- spanner:
- Add `CommitTimestamp`, which supports inserting the commit timestamp of a transaction into a column.
- bigquery: Support SchemaUpdateOptions for load jobs.
- bigtable:
- Add SampleRowKeys.
- cbt: Support union, intersection GCPolicy.
- Retry admin RPCS.
- Add trace spans to retries.
- datastore: Add OpenCensus tracing.
- firestore:
- Fix queries involving Null and NaN.
- Allow Timestamp protobuffers for time values.
- logging: Add a WriteTimeout option.
- spanner: Support Batch API.
- storage: Add OpenCensus tracing.
- bigquery:
- Support customer-managed encryption keys.
- bigtable:
- Improved emulator support.
- Support GetCluster.
- datastore:
- Add general mutations.
- Support pointer struct fields.
- Support transaction options.
- firestore:
- Add Transaction.GetAll.
- Support document cursors.
- logging:
- Support concurrent RPCs to the service.
- Support per-entry resources.
- profiler:
- Add config options to disable heap and thread profiling.
- Read the project ID from $GOOGLE_CLOUD_PROJECT when it's set.
- pubsub:
- BEHAVIOR CHANGE: Release flow control after ack/nack (instead of after the callback returns).
- Add SubscriptionInProject.
- Add OpenCensus instrumentation for streaming pull.
- storage:
- Support CORS.
- bigquery:
- Marked stable.
- Schema inference of nullable fields supported.
- Added TimePartitioning to QueryConfig.
- firestore: Data provided to DocumentRef.Set with a Merge option can contain Delete sentinels.
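For example, a merge write can now remove a field with the Delete sentinel (field names are hypothetical):

```go
_, err := client.Collection("users").Doc("alice").Set(ctx, map[string]interface{}{
	"nickname": "al",
	"obsolete": firestore.Delete, // removed as part of the merge
}, firestore.MergeAll)
```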
- logging: Clients can accept parent resources other than projects.
- pubsub:
- pubsub/pstest: A lightweight fake for pubsub. Experimental; feedback welcome.
- Support updating more subscription metadata: AckDeadline, RetainAckedMessages and RetentionDuration.
- oslogin/apiv1beta: New client for the Cloud OS Login API.
- rpcreplay: A package for recording and replaying gRPC traffic.
- spanner:
- Add a ReadWithOptions that supports a row limit, as well as an index.
- Support query plan and execution statistics.
- Added OpenCensus support.
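A sketch of the new ReadWithOptions call with a secondary index and a row limit; table, index and column names are made up:

```go
iter := client.Single().ReadWithOptions(ctx, "Accounts", spanner.AllKeys(),
	[]string{"AccountID", "Nickname"},
	&spanner.ReadOptions{Index: "AccountsByNickname", Limit: 10})
err := iter.Do(func(r *spanner.Row) error {
	var id, nick string
	if err := r.Columns(&id, &nick); err != nil {
		return err
	}
	fmt.Println(id, nick)
	return nil
})
```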
- storage: Clarify checksum validation for gzipped files (it is not validated when the file is served uncompressed).
- firestore BREAKING CHANGES:
- Remove UpdateMap and UpdateStruct; rename UpdatePaths to Update. Change `docref.UpdateMap(ctx, map[string]interface{}{"a.b": 1})` to `docref.Update(ctx, []firestore.Update{{Path: "a.b", Value: 1}})`. Change `docref.UpdateStruct(ctx, []string{"Field"}, aStruct)` to `docref.Update(ctx, []firestore.Update{{Path: "Field", Value: aStruct.Field}})`.
- Rename MergePaths to Merge; require args to be FieldPaths.
- A value stored as an integer can be read into a floating-point field, and vice versa.
- bigtable/cmd/cbt:
- Support deleting a column.
- Add regex option for row read.
- spanner: Mark stable.
- storage:
- Add Reader.ContentEncoding method.
- Fix handling of SignedURL headers.
- bigquery:
- If Uploader.Put is called with no rows, it returns nil without making a call.
- Schema inference supports the "nullable" option in struct tags for non-required fields.
- TimePartitioning supports "Field".
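A sketch of the "nullable" struct-tag option used by schema inference; the struct and field names are arbitrary:

```go
type Item struct {
	Name  string
	Notes string `bigquery:"notes,nullable"`
}

schema, err := bigquery.InferSchema(Item{})
if err != nil {
	// TODO: Handle error.
}
fmt.Println(schema[1].Required) // false, because of the "nullable" tag
```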
- Other bigquery changes:
- `JobIterator.Next` returns `*Job`; removed `JobInfo` (BREAKING CHANGE).
- UseStandardSQL is deprecated; set UseLegacySQL to true if you need Legacy SQL.
- Uploader.Put will generate a random insert ID if you do not provide one.
- Support time partitioning for load jobs.
- Support dry-run queries.
- A `Job` remembers its last retrieved status.
- Support retrieving job configuration.
- Support labels for jobs and tables.
- Support dataset access lists.
- Improve support for external data sources, including data from Bigtable and Google Sheets, and tables with external data.
- Support updating a table's view configuration.
- Fix uploading civil times with nanoseconds.
- storage:
- Support PubSub notifications.
- Support Requester Pays buckets.
- profiler: Support goroutine and mutex profile types.
- firestore: beta release. See the announcement.
- errorreporting: The existing package has been redesigned.
- errors: This package has been removed. Use errorreporting.
- bigquery BREAKING CHANGES:
- Standard SQL is the default for queries and views.
- `Table.Create` takes `TableMetadata` as a second argument, instead of options.
- `Dataset.Create` takes `DatasetMetadata` as a second argument.
- `DatasetMetadata` field `ID` renamed to `FullID`.
- `TableMetadata` field `ID` renamed to `FullID`.
- Other bigquery changes:
- The client will append a random suffix to a provided job ID if you set `AddJobIDSuffix` to true in a job config.
- Listing jobs is supported.
- Better retry logic.
- vision, language, speech: clients are now stable.
- monitoring: client is now beta.
- profiler:
- Rename InstanceName to Instance, ZoneName to Zone
- Auto-detect service name and version on AppEngine.
- bigquery: UseLegacySQL options for CreateTable and QueryConfig. Use these options to continue using Legacy SQL after the client switches its default to Standard SQL.
- bigquery: Support for updating dataset labels.
- bigquery: Set DatasetIterator.ProjectID to list datasets in a project other than the client's. DatasetsInProject is no longer needed and is deprecated.
- bigtable: Fail ListInstances when any zones fail.
- spanner: support decoding of slices of basic types (e.g. []string, []int64, etc.).
- logging/logadmin: UpdateSink no longer creates a sink if it is missing (actually a change to the underlying service, not the client).
- profiler: Service and ServiceVersion replace Target in Config.
- pubsub: Subscription.Receive now uses streaming pull.
- pubsub: add Client.TopicInProject to access topics in a different project than the client.
- errors: renamed errorreporting. The errors package will be removed shortly.
- datastore: improved retry behavior.
- bigquery: support updates to dataset metadata, with etags.
- bigquery: add etag support to Table.Update (BREAKING: etag argument added).
- bigquery: generate all job IDs on the client.
- storage: support bucket lifecycle configurations.
- Clients for spanner, pubsub and video are now in beta.
- New client for DLP.
- spanner: performance and testing improvements.
- storage: requester-pays buckets are supported.
- storage, profiler, bigtable, bigquery: bug fixes and other minor improvements.
- pubsub: bug fixes and other minor improvements.
- pubsub: Subscription.ModifyPushConfig replaced with Subscription.Update.
- pubsub: Subscription.Receive now runs concurrently for higher throughput.
- vision: cloud.google.com/go/vision is deprecated. Use cloud.google.com/go/vision/apiv1 instead.
- translation: now stable.
- trace: several changes to the surface. See the link below.
- pubsub: Replace `sub.ModifyPushConfig(ctx, pubsub.PushConfig{Endpoint: "https://example.com/push"})` with `sub.Update(ctx, pubsub.SubscriptionConfigToUpdate{PushConfig: &pubsub.PushConfig{Endpoint: "https://example.com/push"}})`.
- trace: trace.GRPCServerInterceptor will be provided from *trace.Client. Given an initialized `*trace.Client` named `tc`, instead of `s := grpc.NewServer(grpc.UnaryInterceptor(trace.GRPCServerInterceptor(tc)))` write `s := grpc.NewServer(grpc.UnaryInterceptor(tc.GRPCServerInterceptor()))`.
- trace: trace.GRPCClientInterceptor will also be provided from *trace.Client. Instead of `conn, err := grpc.Dial(srv.Addr, grpc.WithUnaryInterceptor(trace.GRPCClientInterceptor()))` write `conn, err := grpc.Dial(srv.Addr, grpc.WithUnaryInterceptor(tc.GRPCClientInterceptor()))`.
- trace: We removed the deprecated `trace.EnableGRPCTracing`. Use the gRPC interceptor as a dial option as shown below when initializing Cloud package clients:

      c, err := pubsub.NewClient(ctx, "project-id", option.WithGRPCDialOption(grpc.WithUnaryInterceptor(tc.GRPCClientInterceptor())))
      if err != nil { ... }
- Breaking changes to some autogenerated clients.
- rpcreplay package added.
- profiler package added.
- storage:
- Retry Objects.Insert call.
- Add ProgressFunc to Writer.
- pubsub: breaking changes:
- Publish is now asynchronous (announcement).
- Subscription.Pull replaced by Subscription.Receive, which takes a callback (announcement).
- Message.Done replaced with Message.Ack and Message.Nack.
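A rough sketch of the new asynchronous Publish and callback-based Receive, assuming existing `topic` and `sub` handles:

```go
// Publish returns immediately; Get blocks until the server acks the message.
res := topic.Publish(ctx, &pubsub.Message{Data: []byte("hello")})
id, err := res.Get(ctx)
if err != nil {
	// TODO: Handle error.
}
fmt.Println("published message", id)

// Receive replaces Pull; the callback must Ack or Nack each message.
err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
	fmt.Printf("got: %s\n", m.Data)
	m.Ack()
})
```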
- Release of a client library for Spanner. See the blog post. Note that although the Spanner service is beta, the Go client library is alpha.
- Beta release of BigQuery, DataStore, Logging and Storage. See the blog post.
- bigquery:
- struct support. Read a row directly into a struct with `RowIterator.Next`, and upload a row directly from a struct with `Uploader.Put`. You can also use field tags. See the [package documentation][cloud-bigquery-ref] for details.
- The `ValueList` type was removed. It is no longer necessary. Instead of `var v ValueList ... it.Next(&v)` use `var v []Value ... it.Next(&v)`.
- Previously, repeatedly calling `RowIterator.Next` on the same `[]Value` or `ValueList` would append to the slice. Now each call resets the size to zero first.
- Schema inference will infer the SQL type BYTES for a struct field of type []byte. Previously it inferred STRING.
- The types `uint`, `uint64` and `uintptr` are no longer supported in schema inference. BigQuery's integer type is INT64, and those types may hold values that are not correctly represented in a 64-bit signed integer.
- The SQL types DATE, TIME and DATETIME are now supported. They correspond to
the
Date
,Time
andDateTime
types in the newcloud.google.com/go/civil
package. - Support for query parameters.
- Support deleting a dataset.
- Values from INTEGER columns will now be returned as int64, not int. This will avoid errors arising from large values on 32-bit systems.
- datastore:
- Nested Go structs encoded as Entity values, instead of a flattened list of the embedded struct's fields. This means that you may now have twice-nested slices, e.g. `type State struct { Cities []struct{ Populations []int } }`. See the announcement for more details.
- Contexts no longer hold namespaces; instead you must set a key's namespace explicitly. Also, key functions have been changed and renamed.
- The WithNamespace function has been removed. To specify a namespace in a Query, use the Query.Namespace method: `q := datastore.NewQuery("Kind").Namespace("ns")`.
- All the fields of Key are exported. That means you can construct any Key with a struct literal: `k := &Key{Kind: "Kind", ID: 37, Namespace: "ns"}`.
- As a result of the above, the Key methods Kind, ID, Name, Parent, SetParent and Namespace have been removed.
- `NewIncompleteKey` has been removed, replaced by `IncompleteKey`. Replace `NewIncompleteKey(ctx, kind, parent)` with `IncompleteKey(kind, parent)`, and if you do use namespaces, make sure you set the namespace on the returned key.
- `NewKey` has been removed, replaced by `NameKey` and `IDKey`. Replace `NewKey(ctx, kind, name, 0, parent)` and `NewKey(ctx, kind, "", id, parent)` with `NameKey(kind, name, parent)` and `IDKey(kind, id, parent)`, and if you do use namespaces, make sure you set the namespace on the returned key.
- The `Done` variable has been removed. Replace `datastore.Done` with `iterator.Done`, from the package `google.golang.org/api/iterator`.
- The `Client.Close` method will have a return type of error. It will return the result of closing the underlying gRPC connection.
- See the announcement for more details.
- bigquery:
- `NewGCSReference` is now a function, not a method on `Client`.
- `Table.LoaderFrom` now accepts a `ReaderSource`, enabling loading data into a table from a file or any `io.Reader`.
- Client.Table and Client.OpenTable have been removed. Replace `client.OpenTable("project", "dataset", "table")` with `client.DatasetInProject("project", "dataset").Table("table")`.
- Client.CreateTable has been removed. Replace `client.CreateTable(ctx, "project", "dataset", "table")` with `client.DatasetInProject("project", "dataset").Table("table").Create(ctx)`.
- Dataset.ListTables has been replaced with Dataset.Tables. Replace `tables, err := ds.ListTables(ctx)` with

      it := ds.Tables(ctx)
      for {
          table, err := it.Next()
          if err == iterator.Done {
              break
          }
          if err != nil {
              // TODO: Handle error.
          }
          // TODO: use table.
      }
- Client.Read has been replaced with Job.Read, Table.Read and Query.Read. Replace `it, err := client.Read(ctx, job)` with `it, err := job.Read(ctx)`, and similarly for reading from tables or queries.
- The iterator returned from the Read methods is now named RowIterator. Its behavior is closer to the other iterators in these libraries. It no longer supports the Schema method; see the next item. Replace

      for it.Next(ctx) {
          var vals ValueList
          if err := it.Get(&vals); err != nil {
              // TODO: Handle error.
          }
          // TODO: use vals.
      }
      if err := it.Err(); err != nil {
          // TODO: Handle error.
      }

  with

      for {
          var vals ValueList
          err := it.Next(&vals)
          if err == iterator.Done {
              break
          }
          if err != nil {
              // TODO: Handle error.
          }
          // TODO: use vals.
      }

  Instead of the `RecordsPerRequest(n)` option, write `it.PageInfo().MaxSize = n`. Instead of the `StartIndex(i)` option, write `it.StartIndex = i`.
- ValueLoader.Load now takes a Schema in addition to a slice of Values. Replace `func (vl *myValueLoader) Load(v []bigquery.Value)` with `func (vl *myValueLoader) Load(v []bigquery.Value, s bigquery.Schema)`.
- Table.Patch is replaced by Table.Update. Replace

      p := table.Patch()
      p.Description("new description")
      metadata, err := p.Apply(ctx)

  with

      metadata, err := table.Update(ctx, bigquery.TableMetadataToUpdate{
          Description: "new description",
      })
- Client.Copy is replaced by separate methods for each of its four functions. All options have been replaced by struct fields.
- To load data from Google Cloud Storage into a table, use Table.LoaderFrom. Replace `client.Copy(ctx, table, gcsRef)` with `table.LoaderFrom(gcsRef).Run(ctx)`. Instead of passing options to Copy, set fields on the Loader:

      loader := table.LoaderFrom(gcsRef)
      loader.WriteDisposition = bigquery.WriteTruncate
- To extract data from a table into Google Cloud Storage, use Table.ExtractorTo. Set fields on the returned Extractor instead of passing options. Replace `client.Copy(ctx, gcsRef, table)` with `table.ExtractorTo(gcsRef).Run(ctx)`.
- To copy data into a table from one or more other tables, use Table.CopierFrom. Set fields on the returned Copier instead of passing options. Replace `client.Copy(ctx, dstTable, srcTable)` with `dstTable.CopierFrom(srcTable).Run(ctx)`.
- To start a query job, create a Query and call its Run method. Set fields on the query instead of passing options. Replace `client.Copy(ctx, table, query)` with `query.Run(ctx)`.
- Table.NewUploader has been renamed to Table.Uploader. Instead of options, configure an Uploader by setting its fields. Replace `u := table.NewUploader(bigquery.UploadIgnoreUnknownValues())` with

      u := table.Uploader()
      u.IgnoreUnknownValues = true
- pubsub: remove `pubsub.Done`. Use `iterator.Done` instead, where `iterator` is the package `google.golang.org/api/iterator`.
- storage:
- AdminClient replaced by methods on Client. Replace `adminClient.CreateBucket(ctx, bucketName, attrs)` with `client.Bucket(bucketName).Create(ctx, projectID, attrs)`.
- BucketHandle.List replaced by BucketHandle.Objects. Replace

      for query != nil {
          objs, err := bucket.List(d.ctx, query)
          if err != nil { ... }
          query = objs.Next
          for _, obj := range objs.Results {
              fmt.Println(obj)
          }
      }

  with

      iter := bucket.Objects(d.ctx, query)
      for {
          obj, err := iter.Next()
          if err == iterator.Done {
              break
          }
          if err != nil { ... }
          fmt.Println(obj)
      }

  (The `iterator` package is at `google.golang.org/api/iterator`.) Replace `Query.Cursor` with `ObjectIterator.PageInfo().Token`. Replace `Query.MaxResults` with `ObjectIterator.PageInfo().MaxSize`.
- ObjectHandle.CopyTo replaced by ObjectHandle.CopierFrom. Replace `attrs, err := src.CopyTo(ctx, dst, nil)` with `attrs, err := dst.CopierFrom(src).Run(ctx)`. Replace `attrs, err := src.CopyTo(ctx, dst, &storage.ObjectAttrs{ContentType: "text/html"})` with

      c := dst.CopierFrom(src)
      c.ContentType = "text/html"
      attrs, err := c.Run(ctx)
- ObjectHandle.ComposeFrom replaced by ObjectHandle.ComposerFrom. Replace `attrs, err := dst.ComposeFrom(ctx, []*storage.ObjectHandle{src1, src2}, nil)` with `attrs, err := dst.ComposerFrom(src1, src2).Run(ctx)`.
- ObjectHandle.Update's ObjectAttrs argument replaced by ObjectAttrsToUpdate. Replace `attrs, err := obj.Update(ctx, &storage.ObjectAttrs{ContentType: "text/html"})` with `attrs, err := obj.Update(ctx, storage.ObjectAttrsToUpdate{ContentType: "text/html"})`.
- ObjectHandle.WithConditions replaced by ObjectHandle.If. Replace `obj.WithConditions(storage.Generation(gen), storage.IfMetaGenerationMatch(mgen))` with `obj.Generation(gen).If(storage.Conditions{MetagenerationMatch: mgen})`. Replace `obj.WithConditions(storage.IfGenerationMatch(0))` with `obj.If(storage.Conditions{DoesNotExist: true})`.
- `storage.Done` replaced by `iterator.Done` (from package `google.golang.org/api/iterator`).
- Package preview/logging deleted. Use logging instead.
- Logging client replaced with preview version (see below).
- New clients for some of Google's Machine Learning APIs: Vision, Speech, and Natural Language.
- Preview version of a new [Stackdriver Logging][cloud-logging] client in `cloud.google.com/go/preview/logging`. This client uses gRPC as its transport layer, and supports log reading, sinks and metrics. It will replace the current client at `cloud.google.com/go/logging` shortly.