| <html><body> |
| <style> |
| |
| body, h1, h2, h3, div, span, p, pre, a { |
| margin: 0; |
| padding: 0; |
| border: 0; |
| font-weight: inherit; |
| font-style: inherit; |
| font-size: 100%; |
| font-family: inherit; |
| vertical-align: baseline; |
| } |
| |
| body { |
| font-size: 13px; |
| padding: 1em; |
| } |
| |
| h1 { |
| font-size: 26px; |
| margin-bottom: 1em; |
| } |
| |
| h2 { |
| font-size: 24px; |
| margin-bottom: 1em; |
| } |
| |
| h3 { |
| font-size: 20px; |
| margin-bottom: 1em; |
| margin-top: 1em; |
| } |
| |
| pre, code { |
| line-height: 1.5; |
| font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace; |
| } |
| |
| pre { |
| margin-top: 0.5em; |
| } |
| |
| h1, h2, h3, p { |
  font-family: Arial, sans-serif;
| } |
| |
| h1, h2, h3 { |
| border-bottom: solid #CCC 1px; |
| } |
| |
| .toc_element { |
| margin-top: 0.5em; |
| } |
| |
| .firstline { |
  margin-left: 2em;
| } |
| |
| .method { |
| margin-top: 1em; |
| border: solid 1px #CCC; |
| padding: 1em; |
| background: #EEE; |
| } |
| |
| .details { |
| font-weight: bold; |
| font-size: 14px; |
| } |
| |
| </style> |
| |
| <h1><a href="spanner_v1.html">Cloud Spanner API</a> . <a href="spanner_v1.projects.html">projects</a> . <a href="spanner_v1.projects.instances.html">instances</a> . <a href="spanner_v1.projects.instances.databases.html">databases</a> . <a href="spanner_v1.projects.instances.databases.sessions.html">sessions</a></h1> |
| <h2>Instance Methods</h2> |
| <p class="toc_element"> |
| <code><a href="#beginTransaction">beginTransaction(session, body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Begins a new transaction. This step can often be skipped:</p> |
| <p class="toc_element"> |
| <code><a href="#commit">commit(session, body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Commits a transaction. The request includes the mutations to be</p> |
| <p class="toc_element"> |
| <code><a href="#create">create(database, x__xgafv=None)</a></code></p> |
| <p class="firstline">Creates a new session. A session can be used to perform</p> |
| <p class="toc_element"> |
| <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p> |
| <p class="firstline">Ends a session, releasing server resources associated with it.</p> |
| <p class="toc_element"> |
| <code><a href="#executeSql">executeSql(session, body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Executes an SQL query, returning all rows in a single reply. This</p> |
| <p class="toc_element"> |
| <code><a href="#executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Like ExecuteSql, except returns the result</p> |
| <p class="toc_element"> |
| <code><a href="#get">get(name, x__xgafv=None)</a></code></p> |
| <p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p> |
| <p class="toc_element"> |
| <code><a href="#read">read(session, body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Reads rows from the database using key lookups and scans, as a</p> |
| <p class="toc_element"> |
| <code><a href="#rollback">rollback(session, body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Rolls back a transaction, releasing any locks it holds. It is a good</p> |
| <p class="toc_element"> |
| <code><a href="#streamingRead">streamingRead(session, body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Like Read, except returns the result set as a</p> |
| <h3>Method Details</h3> |
| <div class="method"> |
| <code class="details" id="beginTransaction">beginTransaction(session, body, x__xgafv=None)</code> |
| <pre>Begins a new transaction. This step can often be skipped: |
| Read, ExecuteSql and |
| Commit can begin a new transaction as a |
| side-effect. |
| |
| Args: |
| session: string, Required. The session in which the transaction runs. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The request for BeginTransaction. |
| "options": { # # Transactions # Required. Options for the new transaction. |
| # |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
| # |
| # ## Snapshot Read-Only Transactions |
| # |
        # Snapshot read-only transactions provide a simpler method than
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
        # or read-write transactions, because they are able to execute far
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
        # As a result of the two-phase execution, bounded staleness reads are
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # A transaction. |
| "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen |
| # for the transaction. Not returned by default: see |
| # TransactionOptions.ReadOnly.return_read_timestamp. |
| "id": "A String", # `id` may be used to identify the transaction in subsequent |
| # Read, |
| # ExecuteSql, |
| # Commit, or |
| # Rollback calls. |
| # |
| # Single-use read-only transactions do not have IDs, because |
| # single-use transactions do not support multiple requests. |
| }</pre> |
| </div> |
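<p>As a sketch of how the request body above is assembled in practice, the helper below builds a <code>BeginTransaction</code> body for either mode. The helper name and its defaults are illustrative, not part of the API; the body it returns matches the <code>TransactionOptions</code> shape documented above, and would be passed as the <code>body</code> argument of <code>sessions().beginTransaction(session=..., body=...)</code> on a discovery-built <code>spanner</code> service object.</p>

```python
def begin_transaction_body(read_only=True, strong=True):
    """Build a BeginTransaction request body (illustrative helper).

    For read-only mode this requests a strong, single-timestamp snapshot
    and asks Cloud Spanner to return the chosen read timestamp; for
    read-write mode the options object is empty, per the schema above.
    """
    if read_only:
        return {
            'options': {
                'readOnly': {
                    'strong': strong,
                    'returnReadTimestamp': True,
                }
            }
        }
    return {'options': {'readWrite': {}}}


# Example: a strong read-only transaction request body.
body = begin_transaction_body(read_only=True)
```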
| |
| <div class="method"> |
| <code class="details" id="commit">commit(session, body, x__xgafv=None)</code> |
| <pre>Commits a transaction. The request includes the mutations to be |
| applied to rows in the database. |
| |
| `Commit` might return an `ABORTED` error. This can occur at any time; |
| commonly, the cause is conflicts with concurrent |
| transactions. However, it can also happen for a variety of other |
| reasons. If `Commit` returns `ABORTED`, the caller should re-attempt |
| the transaction from the beginning, re-using the same session. |
| |
| Args: |
| session: string, Required. The session in which the transaction to be committed is running. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The request for Commit. |
| "transactionId": "A String", # Commit a previously-started transaction. |
| "mutations": [ # The mutations to be executed when this transaction commits. All |
| # mutations are applied atomically, in the order they appear in |
| # this list. |
| { # A modification to one or more Cloud Spanner rows. Mutations can be |
| # applied to a Cloud Spanner database by sending them in a |
| # Commit call. |
| "insert": { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist, |
| # the write or transaction fails with error `ALREADY_EXISTS`. |
| # replace operations. |
| "table": "A String", # Required. The table whose rows will be written. |
| "values": [ # The values to be written. `values` can contain more than one |
| # list of values. If it does, then multiple rows are written, one |
| # for each entry in `values`. Each list in `values` must have |
| # exactly as many entries as there are entries in columns |
| # above. Sending multiple lists is equivalent to sending multiple |
| # `Mutation`s, each containing one `values` entry and repeating |
| # table and columns. Individual values in each list are |
| # encoded as described here. |
| [ |
| "", |
| ], |
| ], |
| "columns": [ # The names of the columns in table to be written. |
| # |
| # The list of columns must contain enough columns to allow |
| # Cloud Spanner to derive values for all primary key columns in the |
| # row(s) to be modified. |
| "A String", |
| ], |
| }, |
| "replace": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, it is |
| # deleted, and the column values provided are inserted |
| # instead. Unlike insert_or_update, this means any values not |
| # explicitly written become `NULL`. |
| # replace operations. |
| "table": "A String", # Required. The table whose rows will be written. |
| "values": [ # The values to be written. `values` can contain more than one |
| # list of values. If it does, then multiple rows are written, one |
| # for each entry in `values`. Each list in `values` must have |
| # exactly as many entries as there are entries in columns |
| # above. Sending multiple lists is equivalent to sending multiple |
| # `Mutation`s, each containing one `values` entry and repeating |
| # table and columns. Individual values in each list are |
| # encoded as described here. |
| [ |
| "", |
| ], |
| ], |
| "columns": [ # The names of the columns in table to be written. |
| # |
| # The list of columns must contain enough columns to allow |
| # Cloud Spanner to derive values for all primary key columns in the |
| # row(s) to be modified. |
| "A String", |
| ], |
| }, |
| "delete": { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named |
| # rows were present. |
| "table": "A String", # Required. The table whose rows will be deleted. |
| "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. The primary keys of the rows within table to delete. |
| # the keys are expected to be in the same table or index. The keys need |
| # not be sorted in any particular way. |
| # |
| # If the same key is specified multiple times in the set (for example |
| # if two ranges, two keys, or a key and a range overlap), Cloud Spanner |
| # behaves as if the key were only specified once. |
| "ranges": [ # A list of key ranges. See KeyRange for more information about |
| # key range specifications. |
| { # KeyRange represents a range of rows in a table or index. |
| # |
| # A range has a start key and an end key. These keys can be open or |
| # closed, indicating if the range includes rows with that key. |
| # |
| # Keys are represented by lists, where the ith value in the list |
| # corresponds to the ith component of the table or index primary key. |
| # Individual values are encoded as described here. |
| # |
| # For example, consider the following table definition: |
| # |
| # CREATE TABLE UserEvents ( |
| # UserName STRING(MAX), |
| # EventDate STRING(10) |
| # ) PRIMARY KEY(UserName, EventDate); |
| # |
| # The following keys name rows in this table: |
| # |
| # "Bob", "2014-09-23" |
| # |
| # Since the `UserEvents` table's `PRIMARY KEY` clause names two |
| # columns, each `UserEvents` key has two elements; the first is the |
| # `UserName`, and the second is the `EventDate`. |
| # |
| # Key ranges with multiple components are interpreted |
| # lexicographically by component using the table or index key's declared |
| # sort order. For example, the following range returns all events for |
| # user `"Bob"` that occurred in the year 2015: |
| # |
| # "start_closed": ["Bob", "2015-01-01"] |
| # "end_closed": ["Bob", "2015-12-31"] |
| # |
| # Start and end keys can omit trailing key components. This affects the |
| # inclusion and exclusion of rows that exactly match the provided key |
| # components: if the key is closed, then rows that exactly match the |
| # provided components are included; if the key is open, then rows |
| # that exactly match are not included. |
| # |
| # For example, the following range includes all events for `"Bob"` that |
| # occurred during and after the year 2000: |
| # |
| # "start_closed": ["Bob", "2000-01-01"] |
| # "end_closed": ["Bob"] |
| # |
| # The next example retrieves all events for `"Bob"`: |
| # |
| # "start_closed": ["Bob"] |
| # "end_closed": ["Bob"] |
| # |
| # To retrieve events before the year 2000: |
| # |
| # "start_closed": ["Bob"] |
| # "end_open": ["Bob", "2000-01-01"] |
| # |
| # The following range includes all rows in the table: |
| # |
| # "start_closed": [] |
| # "end_closed": [] |
| # |
| # This range returns all users whose `UserName` begins with any |
| # character from A to C: |
| # |
| # "start_closed": ["A"] |
| # "end_open": ["D"] |
| # |
| # This range returns all users whose `UserName` begins with B: |
| # |
| # "start_closed": ["B"] |
| # "end_open": ["C"] |
| # |
| # Key ranges honor column sort order. For example, suppose a table is |
| # defined as follows: |
| # |
        #     CREATE TABLE DescendingSortedTable (
| # Key INT64, |
| # ... |
| # ) PRIMARY KEY(Key DESC); |
| # |
| # The following range retrieves all rows with key values between 1 |
| # and 100 inclusive: |
| # |
| # "start_closed": ["100"] |
| # "end_closed": ["1"] |
| # |
| # Note that 100 is passed as the start, and 1 is passed as the end, |
| # because `Key` is a descending column in the schema. |
| "endOpen": [ # If the end is open, then the range excludes rows whose first |
| # `len(end_open)` key columns exactly match `end_open`. |
| "", |
| ], |
| "startOpen": [ # If the start is open, then the range excludes rows whose first |
| # `len(start_open)` key columns exactly match `start_open`. |
| "", |
| ], |
| "endClosed": [ # If the end is closed, then the range includes all rows whose |
| # first `len(end_closed)` key columns exactly match `end_closed`. |
| "", |
| ], |
| "startClosed": [ # If the start is closed, then the range includes all rows whose |
| # first `len(start_closed)` key columns exactly match `start_closed`. |
| "", |
| ], |
| }, |
| ], |
| "keys": [ # A list of specific keys. Entries in `keys` should have exactly as |
| # many elements as there are columns in the primary or index key |
| # with which this `KeySet` is used. Individual key values are |
| # encoded as described here. |
| [ |
| "", |
| ], |
| ], |
| "all": True or False, # For convenience `all` can be set to `true` to indicate that this |
| # `KeySet` matches all keys in the table or index. Note that any keys |
| # specified in `keys` or `ranges` are only yielded once. |
| }, |
| }, |
| "update": { # Arguments to insert, update, insert_or_update, and # Update existing rows in a table. If any of the rows does not |
| # already exist, the transaction fails with error `NOT_FOUND`. |
| # replace operations. |
| "table": "A String", # Required. The table whose rows will be written. |
| "values": [ # The values to be written. `values` can contain more than one |
| # list of values. If it does, then multiple rows are written, one |
| # for each entry in `values`. Each list in `values` must have |
| # exactly as many entries as there are entries in columns |
| # above. Sending multiple lists is equivalent to sending multiple |
| # `Mutation`s, each containing one `values` entry and repeating |
| # table and columns. Individual values in each list are |
| # encoded as described here. |
| [ |
| "", |
| ], |
| ], |
| "columns": [ # The names of the columns in table to be written. |
| # |
| # The list of columns must contain enough columns to allow |
| # Cloud Spanner to derive values for all primary key columns in the |
| # row(s) to be modified. |
| "A String", |
| ], |
| }, |
| "insertOrUpdate": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, then |
| # its column values are overwritten with the ones provided. Any |
| # column values not explicitly written are preserved. |
| # replace operations. |
| "table": "A String", # Required. The table whose rows will be written. |
| "values": [ # The values to be written. `values` can contain more than one |
| # list of values. If it does, then multiple rows are written, one |
| # for each entry in `values`. Each list in `values` must have |
| # exactly as many entries as there are entries in columns |
| # above. Sending multiple lists is equivalent to sending multiple |
| # `Mutation`s, each containing one `values` entry and repeating |
| # table and columns. Individual values in each list are |
| # encoded as described here. |
| [ |
| "", |
| ], |
| ], |
| "columns": [ # The names of the columns in table to be written. |
| # |
| # The list of columns must contain enough columns to allow |
| # Cloud Spanner to derive values for all primary key columns in the |
| # row(s) to be modified. |
| "A String", |
| ], |
| }, |
| }, |
| ], |
| "singleUseTransaction": { # # Transactions # Execute mutations in a temporary transaction. Note that unlike |
| # commit of a previously-started transaction, commit with a |
| # temporary transaction is non-idempotent. That is, if the |
| # `CommitRequest` is sent to Cloud Spanner more than once (for |
| # instance, due to retries in the application, or in the |
| # transport library), it is possible that the mutations are |
| # executed more than once. If this is undesirable, use |
| # BeginTransaction and |
| # Commit instead. |
| # |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
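The wall-time retry policy above can be sketched in Python; `TransactionAborted` and `execute_txn` are hypothetical stand-ins for however the client surfaces an `ABORTED` commit and re-runs the transaction's reads and Commit in the same session:

```python
import time

class TransactionAborted(Exception):
    """Hypothetical placeholder for an ABORTED commit outcome."""

def commit_with_retry(execute_txn, deadline_seconds=60.0, base_delay=0.05):
    """Retry the whole transaction on ABORTED, capping total wall time
    spent retrying instead of capping the number of attempts."""
    start = time.monotonic()
    delay = base_delay
    while True:
        try:
            # execute_txn should run all reads plus Commit in the same
            # session, so the session's lock priority carries over.
            return execute_txn()
        except TransactionAborted:
            if time.monotonic() - start > deadline_seconds:
                raise  # wall-time budget exhausted; surface the abort
            time.sleep(delay)
            delay = min(delay * 2, 2.0)  # exponential backoff, capped
```

Capping attempts instead of wall time would penalize contended rows, where many quick aborts in a row are normal.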
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
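For example, a keep-alive could reuse this API's ExecuteSql request body (a sketch; the transaction `id` shown is a placeholder, not a real transaction ID):

```python
# Minimal ExecuteSql request body that keeps a read-write transaction
# from being considered idle. A real "id" comes from BeginTransaction
# (or from ResultSetMetadata.transaction when using "begin").
keep_alive_body = {
    "sql": "SELECT 1",
    "transaction": {"id": "PLACEHOLDER-TXN-ID"},
}
```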
| # |
| # ## Snapshot Read-Only Transactions |
| # |
| # Snapshot read-only transactions provide a simpler method than |
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
| # or read-write transactions, because they are able to execute far |
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
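As a sketch, the three bounds map onto the `readOnly` options that appear later in this request body (the staleness values are illustrative; durations use the JSON `Duration` encoding, e.g. `"15s"`):

```python
# Hypothetical examples of the three timestamp bounds as
# TransactionOptions "readOnly" payloads.
strong = {"readOnly": {"strong": True}}           # default: freshest data
bounded = {"readOnly": {"maxStaleness": "15s"}}   # newest timestamp within 15s
exact = {"readOnly": {"exactStaleness": "60s"}}   # read exactly 60s in the past
```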
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
| # As a result of the two-phase execution, bounded staleness reads are |
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
| # restriction also applies to in-progress reads and/or SQL queries whose |
| # timestamps become too old while executing. Reads and SQL queries with |
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large-scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # The response for Commit. |
| "commitTimestamp": "A String", # The Cloud Spanner timestamp at which the transaction committed. |
| }</pre> |
| </div> |
| |
| <div class="method"> |
| <code class="details" id="create">create(database, x__xgafv=None)</code> |
| <pre>Creates a new session. A session can be used to perform |
| transactions that read and/or modify data in a Cloud Spanner database. |
| Sessions are meant to be reused for many consecutive |
| transactions. |
| |
| Sessions can only execute one transaction at a time. To execute |
| multiple concurrent read-write/write-only transactions, create |
| multiple sessions. Note that standalone reads and queries use a |
| transaction internally, and count toward the one transaction |
| limit. |
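A tiny session pool illustrates the one-transaction-per-session rule: to run N transactions concurrently, hold N sessions and hand each in-flight transaction its own. Here `create_session` is a hypothetical stand-in for the `sessions().create(...).execute()` call:

```python
import queue

class SessionPool:
    """Sketch of a fixed-size pool; each checked-out session runs at
    most one transaction at a time."""

    def __init__(self, create_session, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(create_session())

    def acquire(self):
        return self._free.get()   # blocks until a session is free

    def release(self, session):
        self._free.put(session)   # make the session reusable
```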
| |
| Cloud Spanner limits the number of sessions that can exist at any given |
| time; thus, it is a good idea to delete idle and/or unneeded sessions. |
| Aside from explicit deletes, Cloud Spanner can delete sessions for which no |
| operations are sent for more than an hour. If a session is deleted, |
| requests to it return `NOT_FOUND`. |
| |
| Idle sessions can be kept alive by sending a trivial SQL query |
| periodically, e.g., `"SELECT 1"`. |
| |
| Args: |
| database: string, Required. The database in which the new session is created. (required) |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # A session in the Cloud Spanner API. |
| "name": "A String", # Required. The name of the session. |
| }</pre> |
| </div> |
| |
| <div class="method"> |
| <code class="details" id="delete">delete(name, x__xgafv=None)</code> |
| <pre>Ends a session, releasing server resources associated with it. |
| |
| Args: |
| name: string, Required. The name of the session to delete. (required) |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # A generic empty message that you can re-use to avoid defining duplicated |
| # empty messages in your APIs. A typical example is to use it as the request |
| # or the response type of an API method. For instance: |
| # |
| # service Foo { |
| # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); |
| # } |
| # |
| # The JSON representation for `Empty` is empty JSON object `{}`. |
| }</pre> |
| </div> |
| |
| <div class="method"> |
| <code class="details" id="executeSql">executeSql(session, body, x__xgafv=None)</code> |
| <pre>Executes an SQL query, returning all rows in a single reply. This |
| method cannot be used to return a result set larger than 10 MiB; |
| if the query yields more data than that, the query fails with |
| a `FAILED_PRECONDITION` error. |
| |
| Queries inside read-write transactions might return `ABORTED`. If |
| this occurs, the application should restart the transaction from |
| the beginning. See Transaction for more details. |
| |
| Larger result sets can be fetched in streaming fashion by calling |
| ExecuteStreamingSql instead. |
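A minimal request body for a one-shot query might look like this sketch, using the `singleUse` selector described below (the SQL text is illustrative):

```python
# Hypothetical one-shot query body: a temporary strong read-only
# transaction is the most efficient way to run a single SQL statement.
body = {
    "sql": "SELECT 1",
    "transaction": {"singleUse": {"readOnly": {"strong": True}}},
}
```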
| |
| Args: |
| session: string, Required. The session in which the SQL query should be performed. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The request for ExecuteSql and |
| # ExecuteStreamingSql. |
| "transaction": { # The transaction to use. If none is provided, the default is a |
| # temporary read-only transaction with strong concurrency. |
| # |
| # This message is used to select the transaction in which a |
| # Read or |
| # ExecuteSql call runs. |
| # |
| # See TransactionOptions for more information about transactions. |
| "begin": { # Begin a new transaction and execute this read or SQL query in |
| # it. The transaction ID of the new transaction is returned in |
| # ResultSetMetadata.transaction, which is a Transaction. |
| # |
| # # Transactions |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
| # |
| # ## Snapshot Read-Only Transactions |
| # |
| # Snapshot read-only transactions provide a simpler method than |
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
| # or read-write transactions, because they are able to execute far |
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
| # As a result of the two-phase execution, bounded staleness reads are |
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
| # restriction also applies to in-progress reads and/or SQL queries whose |
| # timestamps become too old while executing. Reads and SQL queries with |
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large-scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| "singleUse": { # Execute the read or SQL query in a temporary transaction. |
| # This is the most efficient way to execute a transaction that |
| # consists of a single SQL query. |
| # |
| # # Transactions |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
| # |
| # ## Snapshot Read-Only Transactions |
| # |
| # Snapshot read-only transactions provide a simpler method than |
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
| # or read-write transactions, because they are able to execute far |
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
| # As a result of the two-phase execution, bounded staleness reads are |
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
| # restriction also applies to in-progress reads and/or SQL queries whose |
| # timestamps become too old while executing. Reads and SQL queries with |
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| "id": "A String", # Execute the read or SQL query in a previously-started transaction. |
| }, |
| "resumeToken": "A String", # If this request is resuming a previously interrupted SQL query |
| # execution, `resume_token` should be copied from the last |
| # PartialResultSet yielded before the interruption. Doing this |
| # enables the new SQL query execution to resume where the last one left |
| # off. The rest of the request parameters must exactly match the |
| # request that yielded this token. |
| "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type |
| # from a JSON value. For example, values of type `BYTES` and values |
| # of type `STRING` both appear in params as JSON strings. |
| # |
| # In these cases, `param_types` can be used to specify the exact |
| # SQL type for some or all of the SQL query parameters. See the |
| # definition of Type for more information |
| # about SQL types. |
| "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a |
| # table cell or returned from an SQL query. |
| "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type` |
| # provides type information for the struct's fields. |
| "code": "A String", # Required. The TypeCode for this type. |
| "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type` |
| # is the type of the array elements. |
| }, |
| }, |
| "queryMode": "A String", # Used to control the amount of debugging information returned in |
| # ResultSetStats. |
| "sql": "A String", # Required. The SQL query string. |
| "params": { # The SQL query string can contain parameter placeholders. A parameter |
| # placeholder consists of `'@'` followed by the parameter |
| # name. Parameter names consist of any combination of letters, |
| # numbers, and underscores. |
| # |
| # Parameters can appear anywhere that a literal value is expected. The same |
| # parameter name can be used more than once, for example: |
| # `"WHERE id > @msg_id AND id < @msg_id + 100"` |
| # |
| # It is an error to execute an SQL query with unbound parameters. |
| # |
| # Parameter values are specified using `params`, which is a JSON |
| # object whose keys are parameter names, and whose values are the |
| # corresponding parameter values. |
| "a_key": "", # Properties of the object. |
| }, |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Results from Read or |
| # ExecuteSql. |
| "rows": [ # Each element in `rows` is a row whose format is defined by |
| # metadata.row_type. The ith element |
| # in each row matches the ith field in |
| # metadata.row_type. Elements are |
| # encoded based on type as described |
| # here. |
| [ |
| "", |
| ], |
| ], |
| "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this |
| # result set. These can be requested by setting |
| # ExecuteSqlRequest.query_mode. |
| "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result. |
| "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting |
| # with the plan root. Each PlanNode's `id` corresponds to its index in |
| # `plan_nodes`. |
| { # Node information for nodes appearing in a QueryPlan.plan_nodes. |
| "index": 42, # The `PlanNode`'s index in node list. |
| "kind": "A String", # Used to determine the type of node. May be needed for visualizing |
| # different kinds of nodes differently. For example, if the node is a |
| # SCALAR node, it will have a condensed representation |
| # which can be used to directly embed a description of the node in its |
| # parent. |
| "displayName": "A String", # The display name for the node. |
| "executionStats": { # The execution statistics associated with the node, contained in a group of |
| # key-value pairs. Only present if the plan was returned as a result of a |
| # profile query. For example, number of executions, number of rows/time per |
| # execution etc. |
| "a_key": "", # Properties of the object. |
| }, |
| "childLinks": [ # List of child node `index`es and their relationship to this parent. |
| { # Metadata associated with a parent-child relationship appearing in a |
| # PlanNode. |
| "variable": "A String", # Only present if the child node is SCALAR and corresponds |
| # to an output variable of the parent node. The field carries the name of |
| # the output variable. |
| # For example, a `TableScan` operator that reads rows from a table will |
| # have child links to the `SCALAR` nodes representing the output variables |
| # created for each column that is read by the operator. The corresponding |
| # `variable` fields will be set to the variable names assigned to the |
| # columns. |
| "childIndex": 42, # The node to which the link points. |
| "type": "A String", # The type of the link. For example, in Hash Joins this could be used to |
| # distinguish between the build child and the probe child, or in the case |
| # of the child being an output variable, to represent the tag associated |
| # with the output variable. |
| }, |
| ], |
| "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes. |
| # `SCALAR` PlanNode(s). |
| "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases |
| # where the `description` string of this node references a `SCALAR` |
| # subquery contained in the expression subtree rooted at this node. The |
| # referenced `SCALAR` subquery may not necessarily be a direct child of |
| # this node. |
| "a_key": 42, |
| }, |
| "description": "A String", # A string representation of the expression subtree rooted at this node. |
| }, |
| "metadata": { # Attributes relevant to the node contained in a group of key-value pairs. |
| # For example, a Parameter Reference node could have the following |
| # information in its metadata: |
| # |
| # { |
| # "parameter_reference": "param1", |
| # "parameter_type": "array" |
| # } |
| "a_key": "", # Properties of the object. |
| }, |
| }, |
| ], |
| }, |
| "queryStats": { # Aggregated statistics from the execution of the query. Only present when |
| # the query is profiled. For example, a query could return the statistics as |
| # follows: |
| # |
| # { |
| # "rows_returned": "3", |
| # "elapsed_time": "1.22 secs", |
| # "cpu_time": "1.19 secs" |
| # } |
| "a_key": "", # Properties of the object. |
| }, |
| }, |
| "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information. |
| "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result |
| # set. For example, a SQL query like `"SELECT UserId, UserName FROM |
| # Users"` could return a `row_type` value like: |
| # |
| # "fields": [ |
| # { "name": "UserId", "type": { "code": "INT64" } }, |
| # { "name": "UserName", "type": { "code": "STRING" } }, |
| # ] |
| "fields": [ # The list of fields that make up this struct. Order is |
| # significant, because values of this struct type are represented as |
| # lists, where the order of field values matches the order of |
| # fields in the StructType. In turn, the order of fields |
| # matches the order of columns in a read request, or the order of |
| # fields in the `SELECT` clause of a query. |
| { # Message representing a single field of a struct. |
| "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field. |
| # table cell or returned from an SQL query. |
| "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type` |
| # provides type information for the struct's fields. |
| "code": "A String", # Required. The TypeCode for this type. |
| "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type` |
| # is the type of the array elements. |
| }, |
| "name": "A String", # The name of the field. For reads, this is the column name. For |
| # SQL queries, it is the column alias (e.g., `"Word"` in the |
| # query `"SELECT 'hello' AS Word"`), or the column name (e.g., |
| # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some |
| # columns might have an empty name (e.g., `"SELECT |
| # UPPER(ColName)"`). Note that a query result can contain |
| # multiple fields with the same name. |
| }, |
| ], |
| }, |
| "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the |
| # information about the new transaction is yielded here. |
| "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen |
| # for the transaction. Not returned by default: see |
| # TransactionOptions.ReadOnly.return_read_timestamp. |
| "id": "A String", # `id` may be used to identify the transaction in subsequent |
| # Read, |
| # ExecuteSql, |
| # Commit, or |
| # Rollback calls. |
| # |
| # Single-use read-only transactions do not have IDs, because |
| # single-use transactions do not support multiple requests. |
| }, |
| }, |
| }</pre> |
| </div> |
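The parameterized-query fields documented above (`sql`, `params`, `paramTypes`) can be assembled as a plain dict before the call. The sketch below is illustrative only: the table, column, and parameter names are hypothetical, and `make_execute_sql_body` is a helper invented here, not part of the generated client.

```python
# Build a request body for sessions.executeSql(). INT64 (and BYTES)
# parameter values travel as JSON strings, so paramTypes is needed to
# disambiguate them from STRING, as described above.

def make_execute_sql_body(sql, params=None, param_types=None):
    """Assemble the body dict accepted by sessions.executeSql()."""
    body = {'sql': sql}
    if params:
        body['params'] = params            # JSON object: parameter name -> value
    if param_types:
        body['paramTypes'] = param_types   # parameter name -> Type, e.g. {'code': 'INT64'}
    return body

body = make_execute_sql_body(
    'SELECT Id, Name FROM Singers WHERE Id > @min_id',
    params={'min_id': '100'},                   # INT64 value encoded as a JSON string
    param_types={'min_id': {'code': 'INT64'}},
)

# With the generated client, the call would then look like:
#   service.projects().instances().databases().sessions() \
#       .executeSql(session=session_name, body=body).execute()
```

Because no transaction selector is set, the query would run in a temporary strong read-only transaction, the documented default.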
| |
| <div class="method"> |
| <code class="details" id="executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</code> |
| <pre>Like ExecuteSql, except returns the result |
| set as a stream. Unlike ExecuteSql, there |
| is no limit on the size of the returned result set. However, no |
| individual row in the result set can exceed 100 MiB, and no |
| column value can exceed 10 MiB. |
| |
| Args: |
| session: string, Required. The session in which the SQL query should be performed. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The request for ExecuteSql and |
| # ExecuteStreamingSql. |
| "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a |
| # temporary read-only transaction with strong concurrency. |
| # Read or |
| # ExecuteSql call runs. |
| # |
| # See TransactionOptions for more information about transactions. |
| "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in |
| # it. The transaction ID of the new transaction is returned in |
| # ResultSetMetadata.transaction, which is a Transaction. |
| # |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
| # |
| # ## Snapshot Read-Only Transactions |
| # |
| # Snapshot read-only transactions provide a simpler method than |
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
| # or read-write transactions, because they are able to execute far |
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
| # As a result of the two-phase execution, bounded staleness reads are |
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
| # restriction also applies to in-progress reads and/or SQL queries whose |
| # timestamps become too old while executing. Reads and SQL queries with |
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction. |
| # This is the most efficient way to execute a transaction that |
| # consists of a single SQL query. |
| # |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
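A minimal sketch of this advice in Python: retry on `ABORTED` in the same session, bound total wall time rather than attempt count, and back off between attempts. `run_transaction`, the `Aborted` exception, and the 60-second budget are stand-ins for illustration, not part of this API.

```python
import random
import time


class Aborted(Exception):
    """Stand-in for a commit attempt that returns `ABORTED`."""


def retry_transaction(run_transaction, max_wall_time=60.0):
    """Retry `run_transaction` until it succeeds or the time budget is spent.

    Caps total wall time instead of the number of attempts, since a
    transaction can abort many times in a short period.
    """
    deadline = time.monotonic() + max_wall_time
    backoff = 0.01
    while True:
        try:
            return run_transaction()  # should reuse the original session
        except Aborted:
            if time.monotonic() >= deadline:
                raise
            time.sleep(backoff * random.uniform(0.5, 1.5))
            backoff = min(backoff * 2, 1.0)
```

Retrying in the same session matters because the session's lock priority increases with each consecutive abort.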
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
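One hedged way to implement that keep-alive on the client: a background thread that issues `SELECT 1` on an interval until stopped. `execute_sql` here is a placeholder for whatever executes a statement in the open transaction.

```python
import threading


def keep_transaction_alive(execute_sql, stop_event, interval=5.0):
    """Run `SELECT 1` every `interval` seconds until `stop_event` is set.

    Event.wait returns False on timeout, so the loop fires once per
    interval and exits promptly once the event is set.
    """
    while not stop_event.wait(interval):
        execute_sql("SELECT 1")  # any trivial statement resets the idle timer
```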
| # |
| # ## Snapshot Read-Only Transactions |
| # |
| # Snapshot read-only transactions provide a simpler method than |
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
| # or read-write transactions, because they are able to execute far |
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
| # As a result of the two-phase execution, bounded staleness reads are |
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
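For illustration, the three bound types map onto `TransactionOptions` JSON roughly like this (the specific duration values are hypothetical; duration strings use the API's `s` suffix):

```python
# Strong (the default): see every transaction committed before the read starts.
strong = {"readOnly": {"strong": True}}

# Exact staleness: read exactly 10 seconds in the past, and ask Cloud Spanner
# to report the read timestamp it used in the resulting Transaction message.
exact = {"readOnly": {"exactStaleness": "10s", "returnReadTimestamp": True}}

# Bounded staleness: the newest timestamp within 15 seconds that avoids
# blocking at the nearest replica; usable only in single-use transactions.
bounded = {"readOnly": {"maxStaleness": "15s"}}
```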
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
| # restriction also applies to in-progress reads and/or SQL queries whose |
| # timestamps become too old while executing. Reads and SQL queries with |
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| "id": "A String", # Execute the read or SQL query in a previously-started transaction. |
| }, |
| "resumeToken": "A String", # If this request is resuming a previously interrupted SQL query |
| # execution, `resume_token` should be copied from the last |
| # PartialResultSet yielded before the interruption. Doing this |
| # enables the new SQL query execution to resume where the last one left |
| # off. The rest of the request parameters must exactly match the |
| # request that yielded this token. |
| "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type |
| # from a JSON value. For example, values of type `BYTES` and values |
| # of type `STRING` both appear in params as JSON strings. |
| # |
| # In these cases, `param_types` can be used to specify the exact |
| # SQL type for some or all of the SQL query parameters. See the |
| # definition of Type for more information |
| # about SQL types. |
| "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a |
| # table cell or returned from an SQL query. |
| "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type` |
| # provides type information for the struct's fields. |
| "code": "A String", # Required. The TypeCode for this type. |
| "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type` |
| # is the type of the array elements. |
| }, |
| }, |
| "queryMode": "A String", # Used to control the amount of debugging information returned in |
| # ResultSetStats. |
| "sql": "A String", # Required. The SQL query string. |
| "params": { # The SQL query string can contain parameter placeholders. A parameter |
| # placeholder consists of `'@'` followed by the parameter |
| # name. Parameter names consist of any combination of letters, |
| # numbers, and underscores. |
| # |
| # Parameters can appear anywhere that a literal value is expected. The same |
| # parameter name can be used more than once, for example: |
| # `"WHERE id > @msg_id AND id < @msg_id + 100"` |
| # |
| # It is an error to execute an SQL query with unbound parameters. |
| # |
| # Parameter values are specified using `params`, which is a JSON |
| # object whose keys are parameter names, and whose values are the |
| # corresponding parameter values. |
| "a_key": "", # Properties of the object. |
| }, |
| } |
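Putting the fields above together, a filled-in request body might look like the following sketch. The table, column, and parameter names are made up; the point is that the base64-encoded `BYTES` value would be indistinguishable from a `STRING` without the `paramTypes` hint.

```python
body = {
    "sql": "SELECT m.Payload FROM Messages m "
           "WHERE m.Id > @msg_id AND m.Checksum = @checksum",
    # JSON values for each placeholder; BYTES values travel base64-encoded.
    "params": {"msg_id": "1234", "checksum": "aGVsbG8="},
    # Disambiguate @checksum as BYTES rather than STRING.
    "paramTypes": {"checksum": {"code": "BYTES"}},
}
```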
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Partial results from a streaming read or SQL query. Streaming reads and |
| # SQL queries better tolerate large result sets, large rows, and large |
| # values, but are a little trickier to consume. |
| "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such |
| # as TCP connection loss. If this occurs, the stream of results can |
| # be resumed by re-sending the original request and including |
| # `resume_token`. Note that executing any other transaction in the |
| # same session invalidates the token. |
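That resume contract can be sketched as a wrapper that re-sends the original request with the latest token after an interruption. `execute_streaming` is a hypothetical callable that yields `PartialResultSet` dicts for a request; it is not part of this API.

```python
def stream_with_resume(request, execute_streaming, max_restarts=5):
    """Yield PartialResultSets, re-sending the request after interruptions."""
    resume_token = None
    for _ in range(max_restarts + 1):
        req = dict(request)  # every other parameter must match exactly
        if resume_token:
            req["resumeToken"] = resume_token
        try:
            for partial in execute_streaming(req):
                resume_token = partial.get("resumeToken", resume_token)
                yield partial
            return  # stream finished cleanly
        except ConnectionError:
            continue  # re-send with the last token we saw
    raise ConnectionError("stream interrupted too many times")
```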
| "chunkedValue": True or False, # If true, then the final value in values is chunked, and must |
| # be combined with more values from subsequent `PartialResultSet`s |
| # to obtain a complete field value. |
| "values": [ # A streamed result set consists of a stream of values, which might |
| # be split into many `PartialResultSet` messages to accommodate |
| # large rows and/or large values. Every N complete values defines a |
| # row, where N is equal to the number of entries in |
| # metadata.row_type.fields. |
| # |
| # Most values are encoded based on type as described |
| # here. |
| # |
| # It is possible that the last value in values is "chunked", |
| # meaning that the rest of the value is sent in subsequent |
| # `PartialResultSet`(s). This is denoted by the chunked_value |
| # field. Two or more chunked values can be merged to form a |
| # complete value as follows: |
| # |
| # * `bool/number/null`: cannot be chunked |
| # * `string`: concatenate the strings |
| # * `list`: concatenate the lists. If the last element in a list is a |
| # `string`, `list`, or `object`, merge it with the first element in |
| # the next list by applying these rules recursively. |
| # * `object`: concatenate the (field name, field value) pairs. If a |
| # field name is duplicated, then apply these rules recursively |
| # to merge the field values. |
| # |
| # Some examples of merging: |
| # |
| # # Strings are concatenated. |
| # "foo", "bar" => "foobar" |
| # |
| # # Lists of non-strings are concatenated. |
| # [2, 3], [4] => [2, 3, 4] |
| # |
| # # Lists are concatenated, but the last and first elements are merged |
| # # because they are strings. |
| # ["a", "b"], ["c", "d"] => ["a", "bc", "d"] |
| # |
| # # Lists are concatenated, but the last and first elements are merged |
| # # because they are lists. Recursively, the last and first elements |
| # # of the inner lists are merged because they are strings. |
| # ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"] |
| # |
| # # Non-overlapping object fields are combined. |
| # {"a": "1"}, {"b": "2"} => {"a": "1", "b": 2"} |
| # |
| # # Overlapping object fields are merged. |
| # {"a": "1"}, {"a": "2"} => {"a": "12"} |
| # |
| # # Examples of merging objects containing lists of strings. |
| # {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]} |
| # |
| # For a more complete example, suppose a streaming SQL query is |
| # yielding a result set whose rows contain a single string |
| # field. The following `PartialResultSet`s might be yielded: |
| # |
| # { |
| # "metadata": { ... } |
| # "values": ["Hello", "W"] |
| # "chunked_value": true |
| # "resume_token": "Af65..." |
| # } |
| # { |
| # "values": ["orl"] |
| # "chunked_value": true |
| # "resume_token": "Bqp2..." |
| # } |
| # { |
| # "values": ["d"] |
| # "resume_token": "Zx1B..." |
| # } |
| # |
| # This sequence of `PartialResultSet`s encodes two rows, one |
| # containing the field value `"Hello"`, and a second containing the |
| # field value `"World" = "W" + "orl" + "d"`. |
| "", |
| ], |
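The merge rules above translate directly into a small recursive function. This is a sketch for understanding the rules; the real client library has its own implementation.

```python
def merge_chunked(a, b):
    """Merge two adjacent chunks of a Cloud Spanner value.

    Strings concatenate; lists concatenate, merging a mergeable
    boundary pair recursively; objects union, merging duplicate keys
    recursively. Bools, numbers, and nulls are never chunked.
    """
    if isinstance(a, str) and isinstance(b, str):
        return a + b
    if isinstance(a, list) and isinstance(b, list):
        if a and b and isinstance(a[-1], (str, list, dict)):
            return a[:-1] + [merge_chunked(a[-1], b[0])] + b[1:]
        return a + b
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, value in b.items():
            out[key] = merge_chunked(out[key], value) if key in out else value
        return out
    raise ValueError("bool/number/null values cannot be chunked")
```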
| "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this |
| # streaming result set. These can be requested by setting |
| # ExecuteSqlRequest.query_mode and are sent |
| # only once with the last response in the stream. |
| "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result. |
| "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting |
| # with the plan root. Each PlanNode's `id` corresponds to its index in |
| # `plan_nodes`. |
| { # Node information for nodes appearing in a QueryPlan.plan_nodes. |
| "index": 42, # The `PlanNode`'s index in node list. |
| "kind": "A String", # Used to determine the type of node. May be needed for visualizing |
| # different kinds of nodes differently. For example, If the node is a |
| # SCALAR node, it will have a condensed representation |
| # which can be used to directly embed a description of the node in its |
| # parent. |
| "displayName": "A String", # The display name for the node. |
| "executionStats": { # The execution statistics associated with the node, contained in a group of |
| # key-value pairs. Only present if the plan was returned as a result of a |
| # profile query. For example, number of executions, number of rows/time per |
| # execution etc. |
| "a_key": "", # Properties of the object. |
| }, |
| "childLinks": [ # List of child node `index`es and their relationship to this parent. |
| { # Metadata associated with a parent-child relationship appearing in a |
| # PlanNode. |
| "variable": "A String", # Only present if the child node is SCALAR and corresponds |
| # to an output variable of the parent node. The field carries the name of |
| # the output variable. |
| # For example, a `TableScan` operator that reads rows from a table will |
| # have child links to the `SCALAR` nodes representing the output variables |
| # created for each column that is read by the operator. The corresponding |
| # `variable` fields will be set to the variable names assigned to the |
| # columns. |
| "childIndex": 42, # The node to which the link points. |
| "type": "A String", # The type of the link. For example, in Hash Joins this could be used to |
| # distinguish between the build child and the probe child, or in the case |
| # of the child being an output variable, to represent the tag associated |
| # with the output variable. |
| }, |
| ], |
| "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes. |
| # `SCALAR` PlanNode(s). |
| "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases |
| # where the `description` string of this node references a `SCALAR` |
| # subquery contained in the expression subtree rooted at this node. The |
| # referenced `SCALAR` subquery may not necessarily be a direct child of |
| # this node. |
| "a_key": 42, |
| }, |
| "description": "A String", # A string representation of the expression subtree rooted at this node. |
| }, |
| "metadata": { # Attributes relevant to the node contained in a group of key-value pairs. |
| # For example, a Parameter Reference node could have the following |
| # information in its metadata: |
| # |
| # { |
| # "parameter_reference": "param1", |
| # "parameter_type": "array" |
| # } |
| "a_key": "", # Properties of the object. |
| }, |
| }, |
| ], |
| }, |
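Because each `PlanNode`'s `id` equals its index in `plan_nodes` and children are referenced by `childIndex`, the plan tree can be reconstructed with a short walk. A sketch, assuming the root is node 0 as the pre-order guarantee implies:

```python
def plan_tree_lines(plan_nodes, node_id=0, depth=0):
    """Yield an indented displayName line for each node, depth-first."""
    node = plan_nodes[node_id]
    yield "  " * depth + node.get("displayName", "?")
    for link in node.get("childLinks", []):
        yield from plan_tree_lines(plan_nodes, link["childIndex"], depth + 1)
```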
| "queryStats": { # Aggregated statistics from the execution of the query. Only present when |
| # the query is profiled. For example, a query could return the statistics as |
| # follows: |
| # |
| # { |
| # "rows_returned": "3", |
| # "elapsed_time": "1.22 secs", |
| # "cpu_time": "1.19 secs" |
| # } |
| "a_key": "", # Properties of the object. |
| }, |
| }, |
| "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information. |
| # Only present in the first response. |
| "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result |
| # set. For example, a SQL query like `"SELECT UserId, UserName FROM |
| # Users"` could return a `row_type` value like: |
| # |
| # "fields": [ |
| # { "name": "UserId", "type": { "code": "INT64" } }, |
| # { "name": "UserName", "type": { "code": "STRING" } }, |
| # ] |
| "fields": [ # The list of fields that make up this struct. Order is |
| # significant, because values of this struct type are represented as |
| # lists, where the order of field values matches the order of |
| # fields in the StructType. In turn, the order of fields |
| # matches the order of columns in a read request, or the order of |
| # fields in the `SELECT` clause of a query. |
| { # Message representing a single field of a struct. |
| "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field. |
| # table cell or returned from an SQL query. |
| "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type` |
| # provides type information for the struct's fields. |
| "code": "A String", # Required. The TypeCode for this type. |
| "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type` |
| # is the type of the array elements. |
| }, |
| "name": "A String", # The name of the field. For reads, this is the column name. For |
| # SQL queries, it is the column alias (e.g., `"Word"` in the |
| # query `"SELECT 'hello' AS Word"`), or the column name (e.g., |
| # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some |
| # columns might have an empty name (e.g., `"SELECT |
| # UPPER(ColName)"`). Note that a query result can contain |
| # multiple fields with the same name. |
| }, |
| ], |
| }, |
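Combining `row_type` with the flat `values` stream described above: every `len(fields)` complete values form one row, in field order, so rows can be assembled like this (a sketch):

```python
def rows_from_values(fields, values):
    """Group a flat `values` list into dict rows keyed by field name."""
    names = [f["name"] for f in fields]
    n = len(names)
    assert len(values) % n == 0, "stream does not contain whole rows"
    return [dict(zip(names, values[i:i + n]))
            for i in range(0, len(values), n)]
```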
| "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the |
| # information about the new transaction is yielded here. |
| "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen |
| # for the transaction. Not returned by default: see |
| # TransactionOptions.ReadOnly.return_read_timestamp. |
| "id": "A String", # `id` may be used to identify the transaction in subsequent |
| # Read, |
| # ExecuteSql, |
| # Commit, or |
| # Rollback calls. |
| # |
| # Single-use read-only transactions do not have IDs, because |
| # single-use transactions do not support multiple requests. |
| }, |
| }, |
| }</pre> |
| </div> |
| |
| <div class="method"> |
| <code class="details" id="get">get(name, x__xgafv=None)</code> |
| <pre>Gets a session. Returns `NOT_FOUND` if the session does not exist. |
| This is mainly useful for determining whether a session is still |
| alive. |
| |
| Args: |
| name: string, Required. The name of the session to retrieve. (required) |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # A session in the Cloud Spanner API. |
| "name": "A String", # Required. The name of the session. |
| }</pre> |
| </div> |
| |
| <div class="method"> |
| <code class="details" id="read">read(session, body, x__xgafv=None)</code> |
| <pre>Reads rows from the database using key lookups and scans, as a |
| simple key/value style alternative to |
| ExecuteSql. This method cannot be used to |
| return a result set larger than 10 MiB; if the read matches more |
| data than that, the read fails with a `FAILED_PRECONDITION` |
| error. |
| |
| Reads inside read-write transactions might return `ABORTED`. If |
| this occurs, the application should restart the transaction from |
| the beginning. See Transaction for more details. |
| |
| Larger result sets can be yielded in streaming fashion by calling |
| StreamingRead instead. |
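As a hedged illustration, a `read` request body that selects two columns for every row via a secondary index might look like this (the table, index, and column names are invented):

```python
read_body = {
    "table": "Users",
    "index": "UsersByEmail",      # optional: interpret/sort by this index
    "columns": ["UserId", "Email"],
    "keySet": {"all": True},      # a KeySet covering every row
}
```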
| |
| Args: |
| session: string, Required. The session in which the read should be performed. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The request for Read and |
| # StreamingRead. |
| "index": "A String", # If non-empty, the name of an index on table. This index is |
| # used instead of the table primary key when interpreting key_set |
| # and sorting result rows. See key_set for further information. |
| "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a |
| # temporary read-only transaction with strong concurrency. |
| # Read or |
| # ExecuteSql call runs. |
| # |
| # See TransactionOptions for more information about transactions. |
| "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in |
| # it. The transaction ID of the new transaction is returned in |
| # ResultSetMetadata.transaction, which is a Transaction. |
| # |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
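| # The wall-time-limited retry loop described above can be sketched in |
| # Python. `run_transaction` and `Aborted` are hypothetical stand-ins for |
| # the application's transaction function and the client library's abort |
| # error: |

```python
import time


class Aborted(Exception):
    """Stand-in for the client library's ABORTED error."""


def run_with_retries(run_transaction, max_wall_seconds=60.0):
    """Retry an aborted transaction until a total wall-time budget is spent.

    Rather than capping the number of attempts, keep retrying (in the same
    session, so the session's lock priority grows with each abort) until
    `max_wall_seconds` have elapsed.
    """
    deadline = time.monotonic() + max_wall_seconds
    backoff = 0.05  # seconds; grows exponentially between attempts
    while True:
        try:
            return run_transaction()
        except Aborted:
            if time.monotonic() + backoff > deadline:
                raise  # wall-time budget exhausted; surface the abort
            time.sleep(backoff)
            backoff = min(backoff * 2, 2.0)
```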
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
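| # A minimal keep-alive sketch for the pattern above, assuming a |
| # hypothetical `execute_sql` callable that runs a SQL string inside the |
| # open transaction: |

```python
import threading


def keep_alive(execute_sql, stop_event, interval_seconds=5.0):
    """Periodically run a trivial query so the transaction is never idle.

    `execute_sql` is a hypothetical callable that executes a SQL string in
    the open transaction; any cheap statement such as `SELECT 1` works.
    Stay well under the 10-second idle threshold described above.
    """
    while not stop_event.wait(interval_seconds):
        execute_sql("SELECT 1")
```

| # Run it on a daemon thread and set the stop event just before Commit or |
| # Rollback. |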
| # |
| # ## Snapshot Read-Only Transactions |
| # |
| # Snapshot read-only transactions provide a simpler method than |
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
| # or read-write transactions, because they are able to execute far |
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
| # As a result of the two-phase execution, bounded staleness reads are |
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
| # restriction also applies to in-progress reads and/or SQL queries whose |
| # timestamps become too old while executing. Reads and SQL queries with |
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
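| # The timestamp bounds above correspond to the `readOnly` options in the |
| # request body below. A sketch of the three forms as Python dicts (the |
| # duration string format, e.g. "10s", is an assumption based on the |
| # string-typed staleness fields): |

```python
# Strong read (the default bound): see everything committed before the read.
strong_read = {"readOnly": {"strong": True}}

# Exact staleness: read exactly 10 seconds in the past, and ask Cloud
# Spanner to report the timestamp it used. (The "10s" duration string
# format is an assumption; the field is string-typed in this API.)
exact_stale = {
    "readOnly": {"exactStaleness": "10s", "returnReadTimestamp": True},
}

# Bounded staleness: let Cloud Spanner pick the freshest timestamp that is
# at most 15 seconds old and can be served at a nearby replica without
# blocking. Single-use transactions only.
bounded_stale = {"readOnly": {"maxStaleness": "15s"}}
```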
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction. |
| # This is the most efficient way to execute a transaction that |
| # consists of a single SQL query. |
| # |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # The transaction modes and timestamp bounds documented above apply to |
| # single-use transactions as well. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| "id": "A String", # Execute the read or SQL query in a previously-started transaction. |
| }, |
| "resumeToken": "A String", # If this request is resuming a previously interrupted read, |
| # `resume_token` should be copied from the last |
| # PartialResultSet yielded before the interruption. Doing this |
| # enables the new read to resume where the last read left off. The |
| # rest of the request parameters must exactly match the request |
| # that yielded this token. |
| "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. # Required. `key_set` identifies the rows to be yielded. `key_set` names the |
| # primary keys of the rows in `table` to be yielded, unless `index` |
| # is present. If `index` is present, then `key_set` instead names |
| # index keys in `index`. |
| # |
| # Rows are yielded in table primary key order (if `index` is empty) |
| # or index key order (if `index` is non-empty). |
| # |
| # It is not an error for the `key_set` to name rows that do not |
| # exist in the database. Read yields nothing for nonexistent rows. |
| # All the keys are expected to be in the same table or index. The keys need |
| # not be sorted in any particular way. |
| # |
| # If the same key is specified multiple times in the set (for example |
| # if two ranges, two keys, or a key and a range overlap), Cloud Spanner |
| # behaves as if the key were only specified once. |
| "ranges": [ # A list of key ranges. See KeyRange for more information about |
| # key range specifications. |
| { # KeyRange represents a range of rows in a table or index. |
| # |
| # A range has a start key and an end key. These keys can be open or |
| # closed, indicating if the range includes rows with that key. |
| # |
| # Keys are represented by lists, where the ith value in the list |
| # corresponds to the ith component of the table or index primary key. |
| # Individual values are encoded as described here. |
| # |
| # For example, consider the following table definition: |
| # |
| # CREATE TABLE UserEvents ( |
| # UserName STRING(MAX), |
| # EventDate STRING(10) |
| # ) PRIMARY KEY(UserName, EventDate); |
| # |
| # The following keys name rows in this table: |
| # |
| # "Bob", "2014-09-23" |
| # |
| # Since the `UserEvents` table's `PRIMARY KEY` clause names two |
| # columns, each `UserEvents` key has two elements; the first is the |
| # `UserName`, and the second is the `EventDate`. |
| # |
| # Key ranges with multiple components are interpreted |
| # lexicographically by component using the table or index key's declared |
| # sort order. For example, the following range returns all events for |
| # user `"Bob"` that occurred in the year 2015: |
| # |
| # "start_closed": ["Bob", "2015-01-01"] |
| # "end_closed": ["Bob", "2015-12-31"] |
| # |
| # Start and end keys can omit trailing key components. This affects the |
| # inclusion and exclusion of rows that exactly match the provided key |
| # components: if the key is closed, then rows that exactly match the |
| # provided components are included; if the key is open, then rows |
| # that exactly match are not included. |
| # |
| # For example, the following range includes all events for `"Bob"` that |
| # occurred during and after the year 2000: |
| # |
| # "start_closed": ["Bob", "2000-01-01"] |
| # "end_closed": ["Bob"] |
| # |
| # The next example retrieves all events for `"Bob"`: |
| # |
| # "start_closed": ["Bob"] |
| # "end_closed": ["Bob"] |
| # |
| # To retrieve events before the year 2000: |
| # |
| # "start_closed": ["Bob"] |
| # "end_open": ["Bob", "2000-01-01"] |
| # |
| # The following range includes all rows in the table: |
| # |
| # "start_closed": [] |
| # "end_closed": [] |
| # |
| # This range returns all users whose `UserName` begins with any |
| # character from A to C: |
| # |
| # "start_closed": ["A"] |
| # "end_open": ["D"] |
| # |
| # This range returns all users whose `UserName` begins with B: |
| # |
| # "start_closed": ["B"] |
| # "end_open": ["C"] |
| # |
| # Key ranges honor column sort order. For example, suppose a table is |
| # defined as follows: |
| # |
| # CREATE TABLE DescendingSortedTable ( |
| # Key INT64, |
| # ... |
| # ) PRIMARY KEY(Key DESC); |
| # |
| # The following range retrieves all rows with key values between 1 |
| # and 100 inclusive: |
| # |
| # "start_closed": ["100"] |
| # "end_closed": ["1"] |
| # |
| # Note that 100 is passed as the start, and 1 is passed as the end, |
| # because `Key` is a descending column in the schema. |
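| # The narrative examples above translate directly into the `ranges`, |
| # `keys`, and `all` fields below. A sketch as Python dicts: |

```python
# All of Bob's events during 2015, as a closed-closed KeyRange over the
# (UserName, EventDate) primary key of the UserEvents example table.
bob_2015 = {
    "ranges": [{
        "startClosed": ["Bob", "2015-01-01"],
        "endClosed": ["Bob", "2015-12-31"],
    }],
}

# A KeySet can also name individual keys; each entry has one element per
# primary-key column.
one_row = {"keys": [["Bob", "2014-09-23"]]}

# Or, for convenience, match every key in the table or index.
whole_table = {"all": True}
```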
| "endOpen": [ # If the end is open, then the range excludes rows whose first |
| # `len(end_open)` key columns exactly match `end_open`. |
| "", |
| ], |
| "startOpen": [ # If the start is open, then the range excludes rows whose first |
| # `len(start_open)` key columns exactly match `start_open`. |
| "", |
| ], |
| "endClosed": [ # If the end is closed, then the range includes all rows whose |
| # first `len(end_closed)` key columns exactly match `end_closed`. |
| "", |
| ], |
| "startClosed": [ # If the start is closed, then the range includes all rows whose |
| # first `len(start_closed)` key columns exactly match `start_closed`. |
| "", |
| ], |
| }, |
| ], |
| "keys": [ # A list of specific keys. Entries in `keys` should have exactly as |
| # many elements as there are columns in the primary or index key |
| # with which this `KeySet` is used. Individual key values are |
| # encoded as described here. |
| [ |
| "", |
| ], |
| ], |
| "all": True or False, # For convenience `all` can be set to `true` to indicate that this |
| # `KeySet` matches all keys in the table or index. Note that any keys |
| # specified in `keys` or `ranges` are only yielded once. |
| }, |
| "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit` |
| # is zero, the default is no limit. |
| "table": "A String", # Required. The name of the table in the database to be read. |
| "columns": [ # The columns of table to be returned for each row matching |
| # this request. |
| "A String", |
| ], |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Results from Read or |
| # ExecuteSql. |
| "rows": [ # Each element in `rows` is a row whose format is defined by |
| # metadata.row_type. The ith element |
| # in each row matches the ith field in |
| # metadata.row_type. Elements are |
| # encoded based on type as described |
| # here. |
| [ |
| "", |
| ], |
| ], |
| "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this |
| # result set. These can be requested by setting |
| # ExecuteSqlRequest.query_mode. |
| "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result. |
| "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting |
| # with the plan root. Each PlanNode's `id` corresponds to its index in |
| # `plan_nodes`. |
| { # Node information for nodes appearing in a QueryPlan.plan_nodes. |
| "index": 42, # The `PlanNode`'s index in node list. |
| "kind": "A String", # Used to determine the type of node. May be needed for visualizing |
| # different kinds of nodes differently. For example, if the node is a |
| # SCALAR node, it will have a condensed representation |
| # which can be used to directly embed a description of the node in its |
| # parent. |
| "displayName": "A String", # The display name for the node. |
| "executionStats": { # The execution statistics associated with the node, contained in a group of |
| # key-value pairs. Only present if the plan was returned as a result of a |
| # profile query. For example, number of executions, number of rows/time per |
| # execution etc. |
| "a_key": "", # Properties of the object. |
| }, |
| "childLinks": [ # List of child node `index`es and their relationship to this parent. |
| { # Metadata associated with a parent-child relationship appearing in a |
| # PlanNode. |
| "variable": "A String", # Only present if the child node is SCALAR and corresponds |
| # to an output variable of the parent node. The field carries the name of |
| # the output variable. |
| # For example, a `TableScan` operator that reads rows from a table will |
| # have child links to the `SCALAR` nodes representing the output variables |
| # created for each column that is read by the operator. The corresponding |
| # `variable` fields will be set to the variable names assigned to the |
| # columns. |
| "childIndex": 42, # The node to which the link points. |
| "type": "A String", # The type of the link. For example, in Hash Joins this could be used to |
| # distinguish between the build child and the probe child, or in the case |
| # of the child being an output variable, to represent the tag associated |
| # with the output variable. |
| }, |
| ], |
| "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes. |
| # `SCALAR` PlanNode(s). |
| "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases |
| # where the `description` string of this node references a `SCALAR` |
| # subquery contained in the expression subtree rooted at this node. The |
| # referenced `SCALAR` subquery may not necessarily be a direct child of |
| # this node. |
| "a_key": 42, |
| }, |
| "description": "A String", # A string representation of the expression subtree rooted at this node. |
| }, |
| "metadata": { # Attributes relevant to the node contained in a group of key-value pairs. |
| # For example, a Parameter Reference node could have the following |
| # information in its metadata: |
| # |
| # { |
| # "parameter_reference": "param1", |
| # "parameter_type": "array" |
| # } |
| "a_key": "", # Properties of the object. |
| }, |
| }, |
| ], |
| }, |
| "queryStats": { # Aggregated statistics from the execution of the query. Only present when |
| # the query is profiled. For example, a query could return the statistics as |
| # follows: |
| # |
| # { |
| # "rows_returned": "3", |
| # "elapsed_time": "1.22 secs", |
| # "cpu_time": "1.19 secs" |
| # } |
| "a_key": "", # Properties of the object. |
| }, |
| }, |
| "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information. |
| "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result |
| # set. For example, a SQL query like `"SELECT UserId, UserName FROM |
| # Users"` could return a `row_type` value like: |
| # |
| # "fields": [ |
| # { "name": "UserId", "type": { "code": "INT64" } }, |
| # { "name": "UserName", "type": { "code": "STRING" } }, |
| # ] |
| "fields": [ # The list of fields that make up this struct. Order is |
| # significant, because values of this struct type are represented as |
| # lists, where the order of field values matches the order of |
| # fields in the StructType. In turn, the order of fields |
| # matches the order of columns in a read request, or the order of |
| # fields in the `SELECT` clause of a query. |
| { # Message representing a single field of a struct. |
| "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field. |
| # table cell or returned from an SQL query. |
| "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type` |
| # provides type information for the struct's fields. |
| "code": "A String", # Required. The TypeCode for this type. |
| "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type` |
| # is the type of the array elements. |
| }, |
| "name": "A String", # The name of the field. For reads, this is the column name. For |
| # SQL queries, it is the column alias (e.g., `"Word"` in the |
| # query `"SELECT 'hello' AS Word"`), or the column name (e.g., |
| # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some |
| # columns might have an empty name (e.g., `"SELECT |
| # UPPER(ColName)"`). Note that a query result can contain |
| # multiple fields with the same name. |
| }, |
| ], |
| }, |
| "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the |
| # information about the new transaction is yielded here. |
| "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen |
| # for the transaction. Not returned by default: see |
| # TransactionOptions.ReadOnly.return_read_timestamp. |
| "id": "A String", # `id` may be used to identify the transaction in subsequent |
| # Read, |
| # ExecuteSql, |
| # Commit, or |
| # Rollback calls. |
| # |
| # Single-use read-only transactions do not have IDs, because |
| # single-use transactions do not support multiple requests. |
| }, |
| }, |
| }</pre> |
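The ResultSet described above returns row values positionally: the ith element of each row corresponds to the ith entry of `metadata.rowType.fields`. A minimal sketch of pairing the two, assuming a ResultSet dict of this shape (the field names and row values below are illustrative, not from a real response):

```python
def rows_as_dicts(result_set):
    """Pair each row's values with the field names from metadata.rowType.

    `result_set` is a ResultSet dict as documented above: values in each
    row appear in the same order as metadata.rowType.fields.
    """
    fields = result_set["metadata"]["rowType"]["fields"]
    names = [f.get("name", "") for f in fields]
    return [dict(zip(names, row)) for row in result_set.get("rows", [])]


# An example ResultSet shaped like the documented response.
example = {
    "metadata": {
        "rowType": {
            "fields": [
                {"name": "UserId", "type": {"code": "INT64"}},
                {"name": "UserName", "type": {"code": "STRING"}},
            ]
        }
    },
    # Note: INT64 values arrive as decimal strings in the JSON encoding.
    "rows": [["1", "alice"], ["2", "bob"]],
}

print(rows_as_dicts(example))
```

Note that elements are still encoded per their Cloud Spanner type (for example, INT64 as a decimal string), so this sketch only attaches names; type decoding is a separate step.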
| </div> |
| |
| <div class="method"> |
| <code class="details" id="rollback">rollback(session, body, x__xgafv=None)</code> |
| <pre>Rolls back a transaction, releasing any locks it holds. It is a good |
| idea to call this for any transaction that includes one or more |
| Read or ExecuteSql requests and |
| ultimately decides not to commit. |
| |
| `Rollback` returns `OK` if it successfully aborts the transaction, the |
| transaction was already aborted, or the transaction is not |
| found. `Rollback` never returns `ABORTED`. |
| |
| Args: |
| session: string, Required. The session in which the transaction to roll back is running. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The request for Rollback. |
| "transactionId": "A String", # Required. The transaction to roll back. |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # A generic empty message that you can re-use to avoid defining duplicated |
| # empty messages in your APIs. A typical example is to use it as the request |
| # or the response type of an API method. For instance: |
| # |
| # service Foo { |
| # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); |
| # } |
| # |
| # The JSON representation for `Empty` is an empty JSON object `{}`. |
| }</pre> |
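The Rollback request body is just the transaction id returned by an earlier beginTransaction call. A minimal sketch, where `begin_response` stands in for a Transaction message and the session path in the trailing comment is an assumed placeholder:

```python
def rollback_body(begin_response):
    """Build the Rollback request body from a beginTransaction response.

    `begin_response` is a Transaction message dict; single-use read-only
    transactions have no `id` and cannot be rolled back.
    """
    return {"transactionId": begin_response["id"]}


begin_response = {"id": "txn-123"}  # shape of a Transaction message
body = rollback_body(begin_response)
print(body)

# With a built discovery `service` object this body would be sent as:
#   service.projects().instances().databases().sessions().rollback(
#       session="projects/p/instances/i/databases/d/sessions/s",
#       body=body).execute()
```

Because `Rollback` returns `OK` even when the transaction was already aborted or is not found, the call is safe to issue unconditionally in cleanup paths.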
| </div> |
| |
| <div class="method"> |
| <code class="details" id="streamingRead">streamingRead(session, body, x__xgafv=None)</code> |
| <pre>Like Read, except returns the result set as a |
| stream. Unlike Read, there is no limit on the |
| size of the returned result set. However, no individual row in |
| the result set can exceed 100 MiB, and no column value can exceed |
| 10 MiB. |
| |
| Args: |
| session: string, Required. The session in which the read should be performed. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The request for Read and |
| # StreamingRead. |
| "index": "A String", # If non-empty, the name of an index on table. This index is |
| # used instead of the table primary key when interpreting key_set |
| # and sorting result rows. See key_set for further information. |
| "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a |
| # temporary read-only transaction with strong concurrency. |
| # Read or |
| # ExecuteSql call runs. |
| # |
| # See TransactionOptions for more information about transactions. |
| "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in |
| # it. The transaction ID of the new transaction is returned in |
| # ResultSetMetadata.transaction, which is a Transaction. |
| # |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
| # |
| # ## Snapshot Read-Only Transactions |
| # |
| # Snapshot read-only transactions provide a simpler method than |
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
| # or read-write transactions, because they are able to execute far |
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
| # As a result of the two-phase execution, bounded staleness reads are |
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
| # restriction also applies to in-progress reads and/or SQL queries whose |
| # timestamps become too old while executing. Reads and SQL queries with |
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction. |
| # This is the most efficient way to execute a transaction that |
| # consists of a single SQL query. |
| # |
| # |
| # Each session can have at most one active transaction at a time. After the |
| # active transaction is completed, the session can immediately be |
| # re-used for the next transaction. It is not necessary to create a |
| # new session for each transaction. |
| # |
| # # Transaction Modes |
| # |
| # Cloud Spanner supports two transaction modes: |
| # |
| # 1. Locking read-write. This type of transaction is the only way |
| # to write data into Cloud Spanner. These transactions rely on |
| # pessimistic locking and, if necessary, two-phase commit. |
| # Locking read-write transactions may abort, requiring the |
| # application to retry. |
| # |
| # 2. Snapshot read-only. This transaction type provides guaranteed |
| # consistency across several reads, but does not allow |
| # writes. Snapshot read-only transactions can be configured to |
| # read at timestamps in the past. Snapshot read-only |
| # transactions do not need to be committed. |
| # |
| # For transactions that only read, snapshot read-only transactions |
| # provide simpler semantics and are almost always faster. In |
| # particular, read-only transactions do not take locks, so they do |
| # not conflict with read-write transactions. As a consequence of not |
| # taking locks, they also do not abort, so retry loops are not needed. |
| # |
| # Transactions may only read/write data in a single database. They |
| # may, however, read/write data in different tables within that |
| # database. |
| # |
| # ## Locking Read-Write Transactions |
| # |
| # Locking transactions may be used to atomically read-modify-write |
| # data anywhere in a database. This type of transaction is externally |
| # consistent. |
| # |
| # Clients should attempt to minimize the amount of time a transaction |
| # is active. Faster transactions commit with higher probability |
| # and cause less contention. Cloud Spanner attempts to keep read locks |
| # active as long as the transaction continues to do reads, and the |
| # transaction has not been terminated by |
| # Commit or |
| # Rollback. Long periods of |
| # inactivity at the client may cause Cloud Spanner to release a |
| # transaction's locks and abort it. |
| # |
| # Reads performed within a transaction acquire locks on the data |
| # being read. Writes can only be done at commit time, after all reads |
| # have been completed. |
| # Conceptually, a read-write transaction consists of zero or more |
| # reads or SQL queries followed by |
| # Commit. At any time before |
| # Commit, the client can send a |
| # Rollback request to abort the |
| # transaction. |
| # |
| # ### Semantics |
| # |
| # Cloud Spanner can commit the transaction if all read locks it acquired |
| # are still valid at commit time, and it is able to acquire write |
| # locks for all writes. Cloud Spanner can abort the transaction for any |
| # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees |
| # that the transaction has not modified any user data in Cloud Spanner. |
| # |
| # Unless the transaction commits, Cloud Spanner makes no guarantees about |
| # how long the transaction's locks were held for. It is an error to |
| # use Cloud Spanner locks for any sort of mutual exclusion other than |
| # between Cloud Spanner transactions themselves. |
| # |
| # ### Retrying Aborted Transactions |
| # |
| # When a transaction aborts, the application can choose to retry the |
| # whole transaction again. To maximize the chances of successfully |
| # committing the retry, the client should execute the retry in the |
| # same session as the original attempt. The original session's lock |
| # priority increases with each consecutive abort, meaning that each |
| # attempt has a slightly better chance of success than the previous. |
| # |
| # Under some circumstances (e.g., many transactions attempting to |
| # modify the same row(s)), a transaction can abort many times in a |
| # short period before successfully committing. Thus, it is not a good |
| # idea to cap the number of retries a transaction can attempt; |
| # instead, it is better to limit the total amount of wall time spent |
| # retrying. |
| # |
| # ### Idle Transactions |
| # |
| # A transaction is considered idle if it has no outstanding reads or |
| # SQL queries and has not started a read or SQL query within the last 10 |
| # seconds. Idle transactions can be aborted by Cloud Spanner so that they |
| # don't hold on to locks indefinitely. In that case, the commit will |
| # fail with error `ABORTED`. |
| # |
| # If this behavior is undesirable, periodically executing a simple |
| # SQL query in the transaction (e.g., `SELECT 1`) prevents the |
| # transaction from becoming idle. |
| # |
| # ## Snapshot Read-Only Transactions |
| # |
| # Snapshot read-only transactions provide a simpler method than |
| # locking read-write transactions for doing several consistent |
| # reads. However, this type of transaction does not support writes. |
| # |
| # Snapshot transactions do not take locks. Instead, they work by |
| # choosing a Cloud Spanner timestamp, then executing all reads at that |
| # timestamp. Since they do not acquire locks, they do not block |
| # concurrent read-write transactions. |
| # |
| # Unlike locking read-write transactions, snapshot read-only |
| # transactions never abort. They can fail if the chosen read |
| # timestamp is garbage collected; however, the default garbage |
| # collection policy is generous enough that most applications do not |
| # need to worry about this in practice. |
| # |
| # Snapshot read-only transactions do not need to call |
| # Commit or |
| # Rollback (and in fact are not |
| # permitted to do so). |
| # |
| # To execute a snapshot transaction, the client specifies a timestamp |
| # bound, which tells Cloud Spanner how to choose a read timestamp. |
| # |
| # The types of timestamp bound are: |
| # |
| # - Strong (the default). |
| # - Bounded staleness. |
| # - Exact staleness. |
| # |
| # If the Cloud Spanner database to be read is geographically distributed, |
| # stale read-only transactions can execute more quickly than strong |
| # or read-write transactions, because they are able to execute far |
| # from the leader replica. |
| # |
| # Each type of timestamp bound is discussed in detail below. |
| # |
| # ### Strong |
| # |
| # Strong reads are guaranteed to see the effects of all transactions |
| # that have committed before the start of the read. Furthermore, all |
| # rows yielded by a single read are consistent with each other -- if |
| # any part of the read observes a transaction, all parts of the read |
| # see the transaction. |
| # |
| # Strong reads are not repeatable: two consecutive strong read-only |
| # transactions might return inconsistent results if there are |
| # concurrent writes. If consistency across reads is required, the |
| # reads should be executed within a transaction or at an exact read |
| # timestamp. |
| # |
| # See TransactionOptions.ReadOnly.strong. |
| # |
| # ### Exact Staleness |
| # |
| # These timestamp bounds execute reads at a user-specified |
| # timestamp. Reads at a timestamp are guaranteed to see a consistent |
| # prefix of the global transaction history: they observe |
| # modifications done by all transactions with a commit timestamp <= |
| # the read timestamp, and observe none of the modifications done by |
| # transactions with a larger commit timestamp. They will block until |
| # all conflicting transactions that may be assigned commit timestamps |
| # <= the read timestamp have finished. |
| # |
| # The timestamp can either be expressed as an absolute Cloud Spanner commit |
| # timestamp or a staleness relative to the current time. |
| # |
| # These modes do not require a "negotiation phase" to pick a |
| # timestamp. As a result, they execute slightly faster than the |
| # equivalent boundedly stale concurrency modes. On the other hand, |
| # boundedly stale reads usually return fresher results. |
| # |
| # See TransactionOptions.ReadOnly.read_timestamp and |
| # TransactionOptions.ReadOnly.exact_staleness. |
| # |
| # ### Bounded Staleness |
| # |
| # Bounded staleness modes allow Cloud Spanner to pick the read timestamp, |
| # subject to a user-provided staleness bound. Cloud Spanner chooses the |
| # newest timestamp within the staleness bound that allows execution |
| # of the reads at the closest available replica without blocking. |
| # |
| # All rows yielded are consistent with each other -- if any part of |
| # the read observes a transaction, all parts of the read see the |
| # transaction. Boundedly stale reads are not repeatable: two stale |
| # reads, even if they use the same staleness bound, can execute at |
| # different timestamps and thus return inconsistent results. |
| # |
| # Boundedly stale reads execute in two phases: the first phase |
| # negotiates a timestamp among all replicas needed to serve the |
| # read. In the second phase, reads are executed at the negotiated |
| # timestamp. |
| # |
| # As a result of the two-phase execution, bounded staleness reads are |
| # usually a little slower than comparable exact staleness |
| # reads. However, they are typically able to return fresher |
| # results, and are more likely to execute at the closest replica. |
| # |
| # Because the timestamp negotiation requires up-front knowledge of |
| # which rows will be read, it can only be used with single-use |
| # read-only transactions. |
| # |
| # See TransactionOptions.ReadOnly.max_staleness and |
| # TransactionOptions.ReadOnly.min_read_timestamp. |
| # |
| # ### Old Read Timestamps and Garbage Collection |
| # |
| # Cloud Spanner continuously garbage collects deleted and overwritten data |
| # in the background to reclaim storage space. This process is known |
| # as "version GC". By default, version GC reclaims versions after they |
| # are one hour old. Because of this, Cloud Spanner cannot perform reads |
| # at read timestamps more than one hour in the past. This |
| # restriction also applies to in-progress reads and/or SQL queries whose |
| # timestamps become too old while executing. Reads and SQL queries with |
| # too-old read timestamps fail with the error `FAILED_PRECONDITION`. |
| "readWrite": { # Options for read-write transactions. # Transaction may write. |
| # |
| # Authorization to begin a read-write transaction requires |
| # `spanner.databases.beginOrRollbackReadWriteTransaction` permission |
| # on the `session` resource. |
| }, |
| "readOnly": { # Options for read-only transactions. # Transaction will not write. |
| # |
| # Authorization to begin a read-only transaction requires |
| # `spanner.databases.beginReadOnlyTransaction` permission |
| # on the `session` resource. |
| "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`. |
| # |
| # This is useful for requesting fresher data than some previous |
| # read, or data that is fresh enough to observe the effects of some |
| # previously committed transaction whose timestamp is known. |
| # |
| # Note that this option can only be used in single-use transactions. |
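#
# For example (an illustrative RFC 3339 timestamp):
#
#     "minReadTimestamp": "2017-01-15T01:30:15.045123456Z"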
| "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in |
| # the Transaction message that describes the transaction. |
| "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness` |
| # seconds. Guarantees that all writes that have committed more |
| # than the specified number of seconds ago are visible. Because |
| # Cloud Spanner chooses the exact timestamp, this mode works even if |
| # the client's local clock is substantially skewed from Cloud Spanner |
| # commit timestamps. |
| # |
| # Useful for reading the freshest data available at a nearby |
| # replica, while bounding the possible staleness if the local |
| # replica has fallen behind. |
| # |
| # Note that this option can only be used in single-use |
| # transactions. |
| "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness` |
| # old. The timestamp is chosen soon after the read is started. |
| # |
| # Guarantees that all writes that have committed more than the |
| # specified number of seconds ago are visible. Because Cloud Spanner |
| # chooses the exact timestamp, this mode works even if the client's |
| # local clock is substantially skewed from Cloud Spanner commit |
| # timestamps. |
| # |
| # Useful for reading at nearby replicas without the distributed |
| # timestamp negotiation overhead of `max_staleness`. |
| "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes, |
| # reads at a specific timestamp are repeatable; the same read at |
| # the same timestamp always returns the same data. If the |
| # timestamp is in the future, the read will block until the |
| # specified timestamp, modulo the read's deadline. |
| # |
| # Useful for large scale consistent reads such as mapreduces, or |
| # for coordinating many reads against a consistent snapshot of the |
| # data. |
| "strong": True or False, # Read at a timestamp where all previously committed transactions |
| # are visible. |
| }, |
| }, |
| "id": "A String", # Execute the read or SQL query in a previously-started transaction. |
| }, |
| "resumeToken": "A String", # If this request is resuming a previously interrupted read, |
| # `resume_token` should be copied from the last |
| # PartialResultSet yielded before the interruption. Doing this |
| # enables the new read to resume where the last read left off. The |
| # rest of the request parameters must exactly match the request |
| # that yielded this token. |
| "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the |
| # primary keys of the rows in table to be yielded, unless index |
| # is present. If index is present, then key_set instead names |
| # index keys in index. |
| # |
| # Rows are yielded in table primary key order (if index is empty) |
| # or index key order (if index is non-empty). |
| # |
| # It is not an error for the `key_set` to name rows that do not |
| # exist in the database. Read yields nothing for nonexistent rows. |
# All the keys are expected to be in the same table or index. The keys need
# not be sorted in any particular way.
| # |
| # If the same key is specified multiple times in the set (for example |
| # if two ranges, two keys, or a key and a range overlap), Cloud Spanner |
| # behaves as if the key were only specified once. |
| "ranges": [ # A list of key ranges. See KeyRange for more information about |
| # key range specifications. |
| { # KeyRange represents a range of rows in a table or index. |
| # |
| # A range has a start key and an end key. These keys can be open or |
| # closed, indicating if the range includes rows with that key. |
| # |
| # Keys are represented by lists, where the ith value in the list |
| # corresponds to the ith component of the table or index primary key. |
| # Individual values are encoded as described here. |
| # |
| # For example, consider the following table definition: |
| # |
| # CREATE TABLE UserEvents ( |
| # UserName STRING(MAX), |
| # EventDate STRING(10) |
| # ) PRIMARY KEY(UserName, EventDate); |
| # |
| # The following keys name rows in this table: |
| # |
| # "Bob", "2014-09-23" |
| # |
| # Since the `UserEvents` table's `PRIMARY KEY` clause names two |
| # columns, each `UserEvents` key has two elements; the first is the |
| # `UserName`, and the second is the `EventDate`. |
| # |
| # Key ranges with multiple components are interpreted |
| # lexicographically by component using the table or index key's declared |
| # sort order. For example, the following range returns all events for |
| # user `"Bob"` that occurred in the year 2015: |
| # |
| # "start_closed": ["Bob", "2015-01-01"] |
| # "end_closed": ["Bob", "2015-12-31"] |
| # |
| # Start and end keys can omit trailing key components. This affects the |
| # inclusion and exclusion of rows that exactly match the provided key |
| # components: if the key is closed, then rows that exactly match the |
| # provided components are included; if the key is open, then rows |
| # that exactly match are not included. |
| # |
| # For example, the following range includes all events for `"Bob"` that |
| # occurred during and after the year 2000: |
| # |
| # "start_closed": ["Bob", "2000-01-01"] |
| # "end_closed": ["Bob"] |
| # |
| # The next example retrieves all events for `"Bob"`: |
| # |
| # "start_closed": ["Bob"] |
| # "end_closed": ["Bob"] |
| # |
| # To retrieve events before the year 2000: |
| # |
| # "start_closed": ["Bob"] |
| # "end_open": ["Bob", "2000-01-01"] |
| # |
| # The following range includes all rows in the table: |
| # |
| # "start_closed": [] |
| # "end_closed": [] |
| # |
| # This range returns all users whose `UserName` begins with any |
| # character from A to C: |
| # |
| # "start_closed": ["A"] |
| # "end_open": ["D"] |
| # |
| # This range returns all users whose `UserName` begins with B: |
| # |
| # "start_closed": ["B"] |
| # "end_open": ["C"] |
| # |
| # Key ranges honor column sort order. For example, suppose a table is |
| # defined as follows: |
| # |
# CREATE TABLE DescendingSortedTable (
| # Key INT64, |
| # ... |
| # ) PRIMARY KEY(Key DESC); |
| # |
| # The following range retrieves all rows with key values between 1 |
| # and 100 inclusive: |
| # |
| # "start_closed": ["100"] |
| # "end_closed": ["1"] |
| # |
| # Note that 100 is passed as the start, and 1 is passed as the end, |
| # because `Key` is a descending column in the schema. |
| "endOpen": [ # If the end is open, then the range excludes rows whose first |
| # `len(end_open)` key columns exactly match `end_open`. |
| "", |
| ], |
| "startOpen": [ # If the start is open, then the range excludes rows whose first |
| # `len(start_open)` key columns exactly match `start_open`. |
| "", |
| ], |
| "endClosed": [ # If the end is closed, then the range includes all rows whose |
| # first `len(end_closed)` key columns exactly match `end_closed`. |
| "", |
| ], |
| "startClosed": [ # If the start is closed, then the range includes all rows whose |
| # first `len(start_closed)` key columns exactly match `start_closed`. |
| "", |
| ], |
| }, |
| ], |
| "keys": [ # A list of specific keys. Entries in `keys` should have exactly as |
| # many elements as there are columns in the primary or index key |
| # with which this `KeySet` is used. Individual key values are |
| # encoded as described here. |
| [ |
| "", |
| ], |
| ], |
| "all": True or False, # For convenience `all` can be set to `true` to indicate that this |
| # `KeySet` matches all keys in the table or index. Note that any keys |
| # specified in `keys` or `ranges` are only yielded once. |
| }, |
"limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
# is zero (the default), no limit is applied.
| "table": "A String", # Required. The name of the table in the database to be read. |
| "columns": [ # The columns of table to be returned for each row matching |
| # this request. |
| "A String", |
| ], |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Partial results from a streaming read or SQL query. Streaming reads and |
| # SQL queries better tolerate large result sets, large rows, and large |
| # values, but are a little trickier to consume. |
| "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such |
| # as TCP connection loss. If this occurs, the stream of results can |
| # be resumed by re-sending the original request and including |
| # `resume_token`. Note that executing any other transaction in the |
| # same session invalidates the token. |
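#
# A resume sketch in Python (hypothetical variable names; assumes the
# discovery-based client and the `streamingRead` method):
#
#     body["resumeToken"] = last_token  # token from the last PartialResultSet
#     service.projects().instances().databases().sessions() \
#         .streamingRead(session=session_name, body=body)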
| "chunkedValue": True or False, # If true, then the final value in values is chunked, and must |
| # be combined with more values from subsequent `PartialResultSet`s |
| # to obtain a complete field value. |
| "values": [ # A streamed result set consists of a stream of values, which might |
| # be split into many `PartialResultSet` messages to accommodate |
| # large rows and/or large values. Every N complete values defines a |
| # row, where N is equal to the number of entries in |
| # metadata.row_type.fields. |
| # |
| # Most values are encoded based on type as described |
| # here. |
| # |
| # It is possible that the last value in values is "chunked", |
| # meaning that the rest of the value is sent in subsequent |
| # `PartialResultSet`(s). This is denoted by the chunked_value |
| # field. Two or more chunked values can be merged to form a |
| # complete value as follows: |
| # |
| # * `bool/number/null`: cannot be chunked |
| # * `string`: concatenate the strings |
| # * `list`: concatenate the lists. If the last element in a list is a |
| # `string`, `list`, or `object`, merge it with the first element in |
| # the next list by applying these rules recursively. |
| # * `object`: concatenate the (field name, field value) pairs. If a |
| # field name is duplicated, then apply these rules recursively |
| # to merge the field values. |
| # |
| # Some examples of merging: |
| # |
| # # Strings are concatenated. |
| # "foo", "bar" => "foobar" |
| # |
| # # Lists of non-strings are concatenated. |
| # [2, 3], [4] => [2, 3, 4] |
| # |
| # # Lists are concatenated, but the last and first elements are merged |
| # # because they are strings. |
| # ["a", "b"], ["c", "d"] => ["a", "bc", "d"] |
| # |
| # # Lists are concatenated, but the last and first elements are merged |
| # # because they are lists. Recursively, the last and first elements |
| # # of the inner lists are merged because they are strings. |
| # ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"] |
| # |
| # # Non-overlapping object fields are combined. |
# {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
| # |
| # # Overlapping object fields are merged. |
| # {"a": "1"}, {"a": "2"} => {"a": "12"} |
| # |
# # Example of merging objects containing lists of strings.
| # {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]} |
| # |
| # For a more complete example, suppose a streaming SQL query is |
| # yielding a result set whose rows contain a single string |
| # field. The following `PartialResultSet`s might be yielded: |
| # |
| # { |
| # "metadata": { ... } |
| # "values": ["Hello", "W"] |
| # "chunked_value": true |
| # "resume_token": "Af65..." |
| # } |
| # { |
| # "values": ["orl"] |
| # "chunked_value": true |
| # "resume_token": "Bqp2..." |
| # } |
| # { |
| # "values": ["d"] |
| # "resume_token": "Zx1B..." |
| # } |
| # |
| # This sequence of `PartialResultSet`s encodes two rows, one |
| # containing the field value `"Hello"`, and a second containing the |
| # field value `"World" = "W" + "orl" + "d"`. |
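#
# The merge rules above can be sketched in Python (a minimal,
# illustrative implementation, not part of the API):
#
#     def merge_chunks(a, b):
#         if isinstance(a, str):
#             return a + b          # strings: concatenate
#         if isinstance(a, list):  # lists: concatenate, merging at the seam
#             if a and b and isinstance(a[-1], (str, list, dict)):
#                 return a[:-1] + [merge_chunks(a[-1], b[0])] + b[1:]
#             return a + b
#         if isinstance(a, dict):  # objects: merge duplicated fields
#             out = dict(a)
#             for k, v in b.items():
#                 out[k] = merge_chunks(out[k], v) if k in out else v
#             return out
#         raise ValueError("bool/number/null values cannot be chunked")
#
# For example, merge_chunks(["a", "b"], ["c", "d"]) yields ["a", "bc", "d"].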
| "", |
| ], |
| "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this |
| # streaming result set. These can be requested by setting |
| # ExecuteSqlRequest.query_mode and are sent |
| # only once with the last response in the stream. |
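#
# For example, to receive these statistics, the ExecuteSql request
# might set (a sketch):
#
#     "queryMode": "PROFILE"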
| "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result. |
| "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting |
| # with the plan root. Each PlanNode's `id` corresponds to its index in |
| # `plan_nodes`. |
| { # Node information for nodes appearing in a QueryPlan.plan_nodes. |
| "index": 42, # The `PlanNode`'s index in node list. |
| "kind": "A String", # Used to determine the type of node. May be needed for visualizing |
# different kinds of nodes differently. For example, if the node is a
| # SCALAR node, it will have a condensed representation |
| # which can be used to directly embed a description of the node in its |
| # parent. |
| "displayName": "A String", # The display name for the node. |
| "executionStats": { # The execution statistics associated with the node, contained in a group of |
| # key-value pairs. Only present if the plan was returned as a result of a |
| # profile query. For example, number of executions, number of rows/time per |
| # execution etc. |
| "a_key": "", # Properties of the object. |
| }, |
| "childLinks": [ # List of child node `index`es and their relationship to this parent. |
| { # Metadata associated with a parent-child relationship appearing in a |
| # PlanNode. |
| "variable": "A String", # Only present if the child node is SCALAR and corresponds |
| # to an output variable of the parent node. The field carries the name of |
| # the output variable. |
| # For example, a `TableScan` operator that reads rows from a table will |
| # have child links to the `SCALAR` nodes representing the output variables |
| # created for each column that is read by the operator. The corresponding |
| # `variable` fields will be set to the variable names assigned to the |
| # columns. |
| "childIndex": 42, # The node to which the link points. |
| "type": "A String", # The type of the link. For example, in Hash Joins this could be used to |
| # distinguish between the build child and the probe child, or in the case |
| # of the child being an output variable, to represent the tag associated |
| # with the output variable. |
| }, |
| ], |
| "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes. |
| # `SCALAR` PlanNode(s). |
| "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases |
| # where the `description` string of this node references a `SCALAR` |
| # subquery contained in the expression subtree rooted at this node. The |
| # referenced `SCALAR` subquery may not necessarily be a direct child of |
| # this node. |
| "a_key": 42, |
| }, |
| "description": "A String", # A string representation of the expression subtree rooted at this node. |
| }, |
| "metadata": { # Attributes relevant to the node contained in a group of key-value pairs. |
| # For example, a Parameter Reference node could have the following |
| # information in its metadata: |
| # |
| # { |
| # "parameter_reference": "param1", |
| # "parameter_type": "array" |
| # } |
| "a_key": "", # Properties of the object. |
| }, |
| }, |
| ], |
| }, |
| "queryStats": { # Aggregated statistics from the execution of the query. Only present when |
| # the query is profiled. For example, a query could return the statistics as |
| # follows: |
| # |
| # { |
| # "rows_returned": "3", |
| # "elapsed_time": "1.22 secs", |
| # "cpu_time": "1.19 secs" |
| # } |
| "a_key": "", # Properties of the object. |
| }, |
| }, |
| "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information. |
| # Only present in the first response. |
| "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result |
| # set. For example, a SQL query like `"SELECT UserId, UserName FROM |
| # Users"` could return a `row_type` value like: |
| # |
| # "fields": [ |
| # { "name": "UserId", "type": { "code": "INT64" } }, |
| # { "name": "UserName", "type": { "code": "STRING" } }, |
| # ] |
| "fields": [ # The list of fields that make up this struct. Order is |
| # significant, because values of this struct type are represented as |
| # lists, where the order of field values matches the order of |
| # fields in the StructType. In turn, the order of fields |
| # matches the order of columns in a read request, or the order of |
| # fields in the `SELECT` clause of a query. |
| { # Message representing a single field of a struct. |
| "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field. |
| # table cell or returned from an SQL query. |
| "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type` |
| # provides type information for the struct's fields. |
| "code": "A String", # Required. The TypeCode for this type. |
| "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type` |
| # is the type of the array elements. |
| }, |
| "name": "A String", # The name of the field. For reads, this is the column name. For |
| # SQL queries, it is the column alias (e.g., `"Word"` in the |
| # query `"SELECT 'hello' AS Word"`), or the column name (e.g., |
| # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some |
# columns might have an empty name (e.g., `"SELECT
| # UPPER(ColName)"`). Note that a query result can contain |
| # multiple fields with the same name. |
| }, |
| ], |
| }, |
| "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the |
| # information about the new transaction is yielded here. |
| "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen |
| # for the transaction. Not returned by default: see |
| # TransactionOptions.ReadOnly.return_read_timestamp. |
"id": "A String", # `id` may be used to identify the transaction in subsequent
# Read, ExecuteSql, Commit, or Rollback calls.
| # |
| # Single-use read-only transactions do not have IDs, because |
| # single-use transactions do not support multiple requests. |
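#
# For example, a follow-up request might reference this transaction as
# follows (a sketch; `txn_id` is the `id` returned here):
#
#     "transaction": { "id": txn_id }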
| }, |
| }, |
| }</pre> |
| </div> |
| |
| </body></html> |