| <html><body> |
| <style> |
| |
| body, h1, h2, h3, div, span, p, pre, a { |
| margin: 0; |
| padding: 0; |
| border: 0; |
| font-weight: inherit; |
| font-style: inherit; |
| font-size: 100%; |
| font-family: inherit; |
| vertical-align: baseline; |
| } |
| |
| body { |
| font-size: 13px; |
| padding: 1em; |
| } |
| |
| h1 { |
| font-size: 26px; |
| margin-bottom: 1em; |
| } |
| |
| h2 { |
| font-size: 24px; |
| margin-bottom: 1em; |
| } |
| |
| h3 { |
| font-size: 20px; |
| margin-bottom: 1em; |
| margin-top: 1em; |
| } |
| |
| pre, code { |
| line-height: 1.5; |
| font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace; |
| } |
| |
| pre { |
| margin-top: 0.5em; |
| } |
| |
| h1, h2, h3, p { |
font-family: Arial, sans-serif;
| } |
| |
| h1, h2, h3 { |
| border-bottom: solid #CCC 1px; |
| } |
| |
| .toc_element { |
| margin-top: 0.5em; |
| } |
| |
| .firstline { |
margin-left: 2em;
| } |
| |
| .method { |
| margin-top: 1em; |
| border: solid 1px #CCC; |
| padding: 1em; |
| background: #EEE; |
| } |
| |
| .details { |
| font-weight: bold; |
| font-size: 14px; |
| } |
| |
| </style> |
| |
| <h1><a href="remotebuildexecution_v2.html">Remote Build Execution API</a> . <a href="remotebuildexecution_v2.actionResults.html">actionResults</a></h1> |
| <h2>Instance Methods</h2> |
| <p class="toc_element"> |
| <code><a href="#get">get(instanceName, hash, sizeBytes, inlineStdout=None, inlineOutputFiles=None, inlineStderr=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Retrieve a cached execution result.</p> |
| <p class="toc_element"> |
| <code><a href="#update">update(instanceName, hash, sizeBytes, body, resultsCachePolicy_priority=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Upload a new execution result.</p> |
| <h3>Method Details</h3> |
| <div class="method"> |
| <code class="details" id="get">get(instanceName, hash, sizeBytes, inlineStdout=None, inlineOutputFiles=None, inlineStderr=None, x__xgafv=None)</code> |
| <pre>Retrieve a cached execution result. |
| |
| Implementations SHOULD ensure that any blobs referenced from the |
| ContentAddressableStorage |
are available at the time the
ActionResult is returned, and will
remain available for some period of time afterwards. The TTLs of the
referenced blobs SHOULD be increased if necessary and applicable.
| |
| Errors: |
| |
| * `NOT_FOUND`: The requested `ActionResult` is not in the cache. |
| |
| Args: |
| instanceName: string, The instance of the execution system to operate against. A server may |
| support multiple instances of the execution system (with their own workers, |
| storage, caches, etc.). The server MAY require use of this field to select |
| between them in an implementation-defined fashion, otherwise it can be |
| omitted. (required) |
| hash: string, The hash. In the case of SHA-256, it will always be a lowercase hex string |
| exactly 64 characters long. (required) |
| sizeBytes: string, The size of the blob, in bytes. (required) |
| inlineStdout: boolean, A hint to the server to request inlining stdout in the |
| ActionResult message. |
| inlineOutputFiles: string, A hint to the server to inline the contents of the listed output files. |
| Each path needs to exactly match one path in `output_files` in the |
| Command message. (repeated) |
| inlineStderr: boolean, A hint to the server to request inlining stderr in the |
| ActionResult message. |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # An ActionResult represents the result of an |
| # Action being run. |
| "executionMetadata": { # ExecutedActionMetadata contains details about a completed execution. # The details of the execution that originally produced this result. |
| "outputUploadStartTimestamp": "A String", # When the worker started uploading action outputs. |
| "workerCompletedTimestamp": "A String", # When the worker completed the action, including all stages. |
"queuedTimestamp": "A String", # When the action was added to the queue.
| "worker": "A String", # The name of the worker which ran the execution. |
| "executionStartTimestamp": "A String", # When the worker started executing the action command. |
| "inputFetchStartTimestamp": "A String", # When the worker started fetching action inputs. |
| "workerStartTimestamp": "A String", # When the worker received the action. |
| "outputUploadCompletedTimestamp": "A String", # When the worker finished uploading action outputs. |
| "executionCompletedTimestamp": "A String", # When the worker completed executing the action command. |
| "inputFetchCompletedTimestamp": "A String", # When the worker finished fetching action inputs. |
| }, |
| "outputFileSymlinks": [ # The output files of the action that are symbolic links to other files. Those |
| # may be links to other output files, or input files, or even absolute paths |
| # outside of the working directory, if the server supports |
| # SymlinkAbsolutePathStrategy.ALLOWED. |
| # For each output file requested in the `output_files` field of the Action, |
| # if the corresponding file existed after |
| # the action completed, a single entry will be present either in this field, |
| # or in the `output_files` field, if the file was not a symbolic link. |
| # |
| # If an output symbolic link of the same name was found, but its target |
| # type was not a regular file, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputSymlink` is similar to a |
| # Symlink, but it is used as an |
| # output in an `ActionResult`. |
| # |
| # `OutputSymlink` is binary-compatible with `SymlinkNode`. |
| "path": "A String", # The full path of the symlink relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
| "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. |
| # The target path can be relative to the parent directory of the symlink or |
| # it can be an absolute path starting with `/`. Support for absolute paths |
| # can be checked using the Capabilities |
| # API. The canonical form forbids the substrings `/./` and `//` in the target |
| # path. `..` components are allowed anywhere in the target path. |
| }, |
| ], |
"stderrDigest": { # The digest for a blob containing the standard error of the action, which
# can be retrieved from the
# ContentAddressableStorage.
# A content digest. A digest for a given blob consists of the size of the blob
| # and its hash. The hash algorithm to use is defined by the server, but servers |
| # SHOULD use SHA-256. |
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| "stdoutRaw": "A String", # The standard output buffer of the action. The server SHOULD NOT inline |
| # stdout unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
| "stderrRaw": "A String", # The standard error buffer of the action. The server SHOULD NOT inline |
| # stderr unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
"stdoutDigest": { # The digest for a blob containing the standard output of the action, which
# can be retrieved from the
# ContentAddressableStorage.
# A content digest. A digest for a given blob consists of the size of the blob
| # and its hash. The hash algorithm to use is defined by the server, but servers |
| # SHOULD use SHA-256. |
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| "outputFiles": [ # The output files of the action. For each output file requested in the |
| # `output_files` field of the Action, if the corresponding file existed after |
| # the action completed, a single entry will be present either in this field, |
| # or the `output_file_symlinks` field if the file was a symbolic link to |
| # another file. |
| # |
| # If an output of the same name was found, but was a directory rather |
| # than a regular file, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputFile` is similar to a |
| # FileNode, but it is used as an |
| # output in an `ActionResult`. It allows a full file path rather than |
| # only a name. |
| "path": "A String", # The full path of the file relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
| "isExecutable": True or False, # True if file is executable, false otherwise. |
| "contents": "A String", # The contents of the file if inlining was requested. The server SHOULD NOT inline |
| # file contents unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
"digest": { # The digest of the file's content.
# A content digest. A digest for a given blob consists of the size of the blob
| # and its hash. The hash algorithm to use is defined by the server, but servers |
| # SHOULD use SHA-256. |
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| }, |
| ], |
| "outputDirectorySymlinks": [ # The output directories of the action that are symbolic links to other |
| # directories. Those may be links to other output directories, or input |
| # directories, or even absolute paths outside of the working directory, |
| # if the server supports |
| # SymlinkAbsolutePathStrategy.ALLOWED. |
| # For each output directory requested in the `output_directories` field of |
| # the Action, if the directory existed after the action completed, a |
| # single entry will be present either in this field, or in the |
| # `output_directories` field, if the directory was not a symbolic link. |
| # |
| # If an output of the same name was found, but was a symbolic link to a file |
| # instead of a directory, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputSymlink` is similar to a |
| # Symlink, but it is used as an |
| # output in an `ActionResult`. |
| # |
| # `OutputSymlink` is binary-compatible with `SymlinkNode`. |
| "path": "A String", # The full path of the symlink relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
| "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. |
| # The target path can be relative to the parent directory of the symlink or |
| # it can be an absolute path starting with `/`. Support for absolute paths |
| # can be checked using the Capabilities |
| # API. The canonical form forbids the substrings `/./` and `//` in the target |
| # path. `..` components are allowed anywhere in the target path. |
| }, |
| ], |
| "outputDirectories": [ # The output directories of the action. For each output directory requested |
| # in the `output_directories` field of the Action, if the corresponding |
| # directory existed after the action completed, a single entry will be |
| # present in the output list, which will contain the digest of a |
| # Tree message containing the |
# directory tree, and a path exactly equal to the corresponding
# `output_directories` member of the Action.
| # |
| # As an example, suppose the Action had an output directory `a/b/dir` and the |
| # execution produced the following contents in `a/b/dir`: a file named `bar` |
| # and a directory named `foo` with an executable file named `baz`. Then, |
# the OutputDirectory entry will contain (hashes shortened for readability):
| # |
# ```
# // OutputDirectory proto, in text format:
| # { |
| # path: "a/b/dir" |
| # tree_digest: { |
| # hash: "4a73bc9d03...", |
| # size: 55 |
| # } |
| # } |
| # // Tree proto with hash "4a73bc9d03..." and size 55: |
| # { |
| # root: { |
| # files: [ |
| # { |
| # name: "bar", |
| # digest: { |
| # hash: "4a73bc9d03...", |
| # size: 65534 |
| # } |
| # } |
| # ], |
| # directories: [ |
| # { |
| # name: "foo", |
| # digest: { |
| # hash: "4cf2eda940...", |
| # size: 43 |
| # } |
| # } |
| # ] |
| # } |
# children: {
| # // (Directory proto with hash "4cf2eda940..." and size 43) |
| # files: [ |
| # { |
| # name: "baz", |
| # digest: { |
| # hash: "b2c941073e...", |
| # size: 1294, |
| # }, |
| # is_executable: true |
| # } |
| # ] |
| # } |
| # } |
| # ``` |
| # If an output of the same name was found, but was not a directory, the |
| # server will return a FAILED_PRECONDITION. |
| { # An `OutputDirectory` is the output in an `ActionResult` corresponding to a |
| # directory's full contents rather than a single file. |
| "path": "A String", # The full path of the directory relative to the working directory. The path |
| # separator is a forward slash `/`. Since this is a relative path, it MUST |
| # NOT begin with a leading forward slash. The empty string value is allowed, |
| # and it denotes the entire working directory. |
"treeDigest": { # The digest of the encoded
# Tree proto containing the
# directory's contents.
# A content digest. A digest for a given blob consists of the size of the blob
| # and its hash. The hash algorithm to use is defined by the server, but servers |
| # SHOULD use SHA-256. |
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| }, |
| ], |
| "exitCode": 42, # The exit code of the command. |
| }</pre> |
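<pre>The `hash` and `sizeBytes` arguments above are the two halves of the content
digest described in the response schema. Assuming the server uses SHA-256 (as
the schema recommends), a client might derive them from a blob's bytes like
this sketch; the `service` call in the trailing comment is hypothetical and
the blob contents are a placeholder:

```python
import hashlib

def make_digest_args(blob: bytes):
    """Return the (hash, sizeBytes) pair for a blob, per the Digest rules:
    a lowercase hex SHA-256 string exactly 64 characters long, plus the
    size in bytes sent as a string (int64 fields travel as JSON strings)."""
    return hashlib.sha256(blob).hexdigest(), str(len(blob))

action_bytes = b"placeholder for the binary-encoded Action message"
digest_hash, digest_size = make_digest_args(action_bytes)

# Hypothetical call through a built client object, using the parameters
# documented above (requires a `service` object and credentials):
# result = service.actionResults().get(
#     instanceName="",  # may be omitted if the server has a single instance
#     hash=digest_hash,
#     sizeBytes=digest_size,
#     inlineStdout=True,
# ).execute()
```
</pre>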
| </div> |
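<pre>Because the server MAY omit inlining even when it was requested, a caller has
to handle both an inlined `stdoutRaw` and a digest-only response. A minimal
sketch of that fallback, keyed to the response shape documented above
(`fetch_blob` stands in for any ContentAddressableStorage download helper):

```python
def get_stdout(result: dict, fetch_blob) -> str:
    """Return the action's stdout, preferring the inlined copy.

    `result` is an ActionResult dict as returned by get(); `fetch_blob`
    is any callable taking (hash, sizeBytes) that reads the blob from
    the ContentAddressableStorage.
    """
    raw = result.get("stdoutRaw")
    if raw is not None:
        return raw  # server honored the inlineStdout hint
    digest = result.get("stdoutDigest")
    if digest is None:
        return ""  # the action produced no stdout
    return fetch_blob(digest["hash"], digest["sizeBytes"])
```

The same pattern applies to `stderrRaw`/`stderrDigest`, and to the inlined
`contents` field on output files.
</pre>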
| |
| <div class="method"> |
| <code class="details" id="update">update(instanceName, hash, sizeBytes, body, resultsCachePolicy_priority=None, x__xgafv=None)</code> |
| <pre>Upload a new execution result. |
| |
| In order to allow the server to perform access control based on the type of |
| action, and to assist with client debugging, the client MUST first upload |
| the Action that produced the |
| result, along with its |
| Command, into the |
| `ContentAddressableStorage`. |
| |
| Errors: |
| |
| * `INVALID_ARGUMENT`: One or more arguments are invalid. |
| * `FAILED_PRECONDITION`: One or more errors occurred in updating the |
| action result, such as a missing command or action. |
| * `RESOURCE_EXHAUSTED`: There is insufficient storage space to add the |
| entry to the cache. |
| |
| Args: |
| instanceName: string, The instance of the execution system to operate against. A server may |
| support multiple instances of the execution system (with their own workers, |
| storage, caches, etc.). The server MAY require use of this field to select |
| between them in an implementation-defined fashion, otherwise it can be |
| omitted. (required) |
| hash: string, The hash. In the case of SHA-256, it will always be a lowercase hex string |
| exactly 64 characters long. (required) |
| sizeBytes: string, The size of the blob, in bytes. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # An ActionResult represents the result of an |
| # Action being run. |
| "executionMetadata": { # ExecutedActionMetadata contains details about a completed execution. # The details of the execution that originally produced this result. |
| "outputUploadStartTimestamp": "A String", # When the worker started uploading action outputs. |
| "workerCompletedTimestamp": "A String", # When the worker completed the action, including all stages. |
"queuedTimestamp": "A String", # When the action was added to the queue.
| "worker": "A String", # The name of the worker which ran the execution. |
| "executionStartTimestamp": "A String", # When the worker started executing the action command. |
| "inputFetchStartTimestamp": "A String", # When the worker started fetching action inputs. |
| "workerStartTimestamp": "A String", # When the worker received the action. |
| "outputUploadCompletedTimestamp": "A String", # When the worker finished uploading action outputs. |
| "executionCompletedTimestamp": "A String", # When the worker completed executing the action command. |
| "inputFetchCompletedTimestamp": "A String", # When the worker finished fetching action inputs. |
| }, |
| "outputFileSymlinks": [ # The output files of the action that are symbolic links to other files. Those |
| # may be links to other output files, or input files, or even absolute paths |
| # outside of the working directory, if the server supports |
| # SymlinkAbsolutePathStrategy.ALLOWED. |
| # For each output file requested in the `output_files` field of the Action, |
| # if the corresponding file existed after |
| # the action completed, a single entry will be present either in this field, |
| # or in the `output_files` field, if the file was not a symbolic link. |
| # |
| # If an output symbolic link of the same name was found, but its target |
| # type was not a regular file, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputSymlink` is similar to a |
| # Symlink, but it is used as an |
| # output in an `ActionResult`. |
| # |
| # `OutputSymlink` is binary-compatible with `SymlinkNode`. |
| "path": "A String", # The full path of the symlink relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
| "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. |
| # The target path can be relative to the parent directory of the symlink or |
| # it can be an absolute path starting with `/`. Support for absolute paths |
| # can be checked using the Capabilities |
| # API. The canonical form forbids the substrings `/./` and `//` in the target |
| # path. `..` components are allowed anywhere in the target path. |
| }, |
| ], |
"stderrDigest": { # The digest for a blob containing the standard error of the action, which
# can be retrieved from the
# ContentAddressableStorage.
# A content digest. A digest for a given blob consists of the size of the blob
| # and its hash. The hash algorithm to use is defined by the server, but servers |
| # SHOULD use SHA-256. |
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| "stdoutRaw": "A String", # The standard output buffer of the action. The server SHOULD NOT inline |
| # stdout unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
| "stderrRaw": "A String", # The standard error buffer of the action. The server SHOULD NOT inline |
| # stderr unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
"stdoutDigest": { # The digest for a blob containing the standard output of the action, which
# can be retrieved from the
# ContentAddressableStorage.
# A content digest. A digest for a given blob consists of the size of the blob
| # and its hash. The hash algorithm to use is defined by the server, but servers |
| # SHOULD use SHA-256. |
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| "outputFiles": [ # The output files of the action. For each output file requested in the |
| # `output_files` field of the Action, if the corresponding file existed after |
| # the action completed, a single entry will be present either in this field, |
| # or the `output_file_symlinks` field if the file was a symbolic link to |
| # another file. |
| # |
| # If an output of the same name was found, but was a directory rather |
| # than a regular file, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputFile` is similar to a |
| # FileNode, but it is used as an |
| # output in an `ActionResult`. It allows a full file path rather than |
| # only a name. |
| "path": "A String", # The full path of the file relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
"isExecutable": True or False, # True if the file is executable, false otherwise.
| "contents": "A String", # The contents of the file if inlining was requested. The server SHOULD NOT inline |
| # file contents unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
"digest": { # The digest of the file's content.
    #
    # A content digest. A digest for a given blob consists of the size of the blob
    # and its hash. The hash algorithm to use is defined by the server, but servers
    # SHOULD use SHA-256.
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| }, |
| ], |
| "outputDirectorySymlinks": [ # The output directories of the action that are symbolic links to other |
| # directories. Those may be links to other output directories, or input |
| # directories, or even absolute paths outside of the working directory, |
| # if the server supports |
| # SymlinkAbsolutePathStrategy.ALLOWED. |
| # For each output directory requested in the `output_directories` field of |
| # the Action, if the directory existed after the action completed, a |
| # single entry will be present either in this field, or in the |
| # `output_directories` field, if the directory was not a symbolic link. |
| # |
| # If an output of the same name was found, but was a symbolic link to a file |
| # instead of a directory, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputSymlink` is similar to a |
| # Symlink, but it is used as an |
| # output in an `ActionResult`. |
| # |
| # `OutputSymlink` is binary-compatible with `SymlinkNode`. |
| "path": "A String", # The full path of the symlink relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
| "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. |
| # The target path can be relative to the parent directory of the symlink or |
| # it can be an absolute path starting with `/`. Support for absolute paths |
| # can be checked using the Capabilities |
| # API. The canonical form forbids the substrings `/./` and `//` in the target |
| # path. `..` components are allowed anywhere in the target path. |
| }, |
| ], |
| "outputDirectories": [ # The output directories of the action. For each output directory requested |
| # in the `output_directories` field of the Action, if the corresponding |
| # directory existed after the action completed, a single entry will be |
| # present in the output list, which will contain the digest of a |
| # Tree message containing the |
# directory tree, and a path exactly equal to the corresponding Action
# output_directories member.
| # |
| # As an example, suppose the Action had an output directory `a/b/dir` and the |
| # execution produced the following contents in `a/b/dir`: a file named `bar` |
| # and a directory named `foo` with an executable file named `baz`. Then, |
| # output_directory will contain (hashes shortened for readability): |
| # |
| # ```json |
| # // OutputDirectory proto: |
| # { |
| # path: "a/b/dir" |
| # tree_digest: { |
| # hash: "4a73bc9d03...", |
| # size: 55 |
| # } |
| # } |
| # // Tree proto with hash "4a73bc9d03..." and size 55: |
| # { |
| # root: { |
| # files: [ |
| # { |
| # name: "bar", |
| # digest: { |
| # hash: "4a73bc9d03...", |
| # size: 65534 |
| # } |
| # } |
| # ], |
| # directories: [ |
| # { |
| # name: "foo", |
| # digest: { |
| # hash: "4cf2eda940...", |
| # size: 43 |
| # } |
| # } |
| # ] |
| # } |
#   children: {
| # // (Directory proto with hash "4cf2eda940..." and size 43) |
| # files: [ |
| # { |
| # name: "baz", |
| # digest: { |
| # hash: "b2c941073e...", |
| # size: 1294, |
| # }, |
| # is_executable: true |
| # } |
| # ] |
| # } |
| # } |
| # ``` |
| # If an output of the same name was found, but was not a directory, the |
| # server will return a FAILED_PRECONDITION. |
| { # An `OutputDirectory` is the output in an `ActionResult` corresponding to a |
| # directory's full contents rather than a single file. |
| "path": "A String", # The full path of the directory relative to the working directory. The path |
| # separator is a forward slash `/`. Since this is a relative path, it MUST |
| # NOT begin with a leading forward slash. The empty string value is allowed, |
| # and it denotes the entire working directory. |
"treeDigest": { # The digest of the encoded
    # Tree proto containing the
    # directory's contents.
    #
    # A content digest. A digest for a given blob consists of the size of the blob
    # and its hash. The hash algorithm to use is defined by the server, but servers
    # SHOULD use SHA-256.
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| }, |
| ], |
| "exitCode": 42, # The exit code of the command. |
| } |
| |
| resultsCachePolicy_priority: integer, The priority (relative importance) of this content in the overall cache. |
| Generally, a lower value means a longer retention time or other advantage, |
| but the interpretation of a given value is server-dependent. A priority of |
| 0 means a *default* value, decided by the server. |
| |
The particular semantics of this field are up to the server. In particular,
each server will have its own supported range of priorities, and will
decide how these map onto retention/eviction policy.
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # An ActionResult represents the result of an |
| # Action being run. |
| "executionMetadata": { # ExecutedActionMetadata contains details about a completed execution. # The details of the execution that originally produced this result. |
| "outputUploadStartTimestamp": "A String", # When the worker started uploading action outputs. |
| "workerCompletedTimestamp": "A String", # When the worker completed the action, including all stages. |
"queuedTimestamp": "A String", # When the action was added to the queue.
| "worker": "A String", # The name of the worker which ran the execution. |
| "executionStartTimestamp": "A String", # When the worker started executing the action command. |
| "inputFetchStartTimestamp": "A String", # When the worker started fetching action inputs. |
| "workerStartTimestamp": "A String", # When the worker received the action. |
| "outputUploadCompletedTimestamp": "A String", # When the worker finished uploading action outputs. |
| "executionCompletedTimestamp": "A String", # When the worker completed executing the action command. |
| "inputFetchCompletedTimestamp": "A String", # When the worker finished fetching action inputs. |
| }, |
| "outputFileSymlinks": [ # The output files of the action that are symbolic links to other files. Those |
| # may be links to other output files, or input files, or even absolute paths |
| # outside of the working directory, if the server supports |
| # SymlinkAbsolutePathStrategy.ALLOWED. |
| # For each output file requested in the `output_files` field of the Action, |
| # if the corresponding file existed after |
| # the action completed, a single entry will be present either in this field, |
| # or in the `output_files` field, if the file was not a symbolic link. |
| # |
| # If an output symbolic link of the same name was found, but its target |
| # type was not a regular file, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputSymlink` is similar to a |
| # Symlink, but it is used as an |
| # output in an `ActionResult`. |
| # |
| # `OutputSymlink` is binary-compatible with `SymlinkNode`. |
| "path": "A String", # The full path of the symlink relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
| "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. |
| # The target path can be relative to the parent directory of the symlink or |
| # it can be an absolute path starting with `/`. Support for absolute paths |
| # can be checked using the Capabilities |
| # API. The canonical form forbids the substrings `/./` and `//` in the target |
| # path. `..` components are allowed anywhere in the target path. |
| }, |
| ], |
"stderrDigest": { # The digest for a blob containing the standard error of the action, which
    # can be retrieved from the
    # ContentAddressableStorage.
    #
    # A content digest. A digest for a given blob consists of the size of the blob
    # and its hash. The hash algorithm to use is defined by the server, but servers
    # SHOULD use SHA-256.
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| "stdoutRaw": "A String", # The standard output buffer of the action. The server SHOULD NOT inline |
| # stdout unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
| "stderrRaw": "A String", # The standard error buffer of the action. The server SHOULD NOT inline |
| # stderr unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
"stdoutDigest": { # The digest for a blob containing the standard output of the action, which
    # can be retrieved from the
    # ContentAddressableStorage.
    #
    # A content digest. A digest for a given blob consists of the size of the blob
    # and its hash. The hash algorithm to use is defined by the server, but servers
    # SHOULD use SHA-256.
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| "outputFiles": [ # The output files of the action. For each output file requested in the |
| # `output_files` field of the Action, if the corresponding file existed after |
| # the action completed, a single entry will be present either in this field, |
| # or the `output_file_symlinks` field if the file was a symbolic link to |
| # another file. |
| # |
| # If an output of the same name was found, but was a directory rather |
| # than a regular file, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputFile` is similar to a |
| # FileNode, but it is used as an |
| # output in an `ActionResult`. It allows a full file path rather than |
| # only a name. |
| "path": "A String", # The full path of the file relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
"isExecutable": True or False, # True if the file is executable, false otherwise.
| "contents": "A String", # The contents of the file if inlining was requested. The server SHOULD NOT inline |
| # file contents unless requested by the client in the |
| # GetActionResultRequest |
| # message. The server MAY omit inlining, even if requested, and MUST do so if inlining |
| # would cause the response to exceed message size limits. |
"digest": { # The digest of the file's content.
    #
    # A content digest. A digest for a given blob consists of the size of the blob
    # and its hash. The hash algorithm to use is defined by the server, but servers
    # SHOULD use SHA-256.
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| }, |
| ], |
| "outputDirectorySymlinks": [ # The output directories of the action that are symbolic links to other |
| # directories. Those may be links to other output directories, or input |
| # directories, or even absolute paths outside of the working directory, |
| # if the server supports |
| # SymlinkAbsolutePathStrategy.ALLOWED. |
| # For each output directory requested in the `output_directories` field of |
| # the Action, if the directory existed after the action completed, a |
| # single entry will be present either in this field, or in the |
| # `output_directories` field, if the directory was not a symbolic link. |
| # |
| # If an output of the same name was found, but was a symbolic link to a file |
| # instead of a directory, the server will return a FAILED_PRECONDITION. |
| # If the action does not produce the requested output, then that output |
| # will be omitted from the list. The server is free to arrange the output |
| # list as desired; clients MUST NOT assume that the output list is sorted. |
| { # An `OutputSymlink` is similar to a |
| # Symlink, but it is used as an |
| # output in an `ActionResult`. |
| # |
| # `OutputSymlink` is binary-compatible with `SymlinkNode`. |
| "path": "A String", # The full path of the symlink relative to the working directory, including the |
| # filename. The path separator is a forward slash `/`. Since this is a |
| # relative path, it MUST NOT begin with a leading forward slash. |
| "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`. |
| # The target path can be relative to the parent directory of the symlink or |
| # it can be an absolute path starting with `/`. Support for absolute paths |
| # can be checked using the Capabilities |
| # API. The canonical form forbids the substrings `/./` and `//` in the target |
| # path. `..` components are allowed anywhere in the target path. |
| }, |
| ], |
| "outputDirectories": [ # The output directories of the action. For each output directory requested |
| # in the `output_directories` field of the Action, if the corresponding |
| # directory existed after the action completed, a single entry will be |
| # present in the output list, which will contain the digest of a |
| # Tree message containing the |
# directory tree, and a path exactly equal to the corresponding Action
# output_directories member.
| # |
| # As an example, suppose the Action had an output directory `a/b/dir` and the |
| # execution produced the following contents in `a/b/dir`: a file named `bar` |
| # and a directory named `foo` with an executable file named `baz`. Then, |
| # output_directory will contain (hashes shortened for readability): |
| # |
| # ```json |
| # // OutputDirectory proto: |
| # { |
| # path: "a/b/dir" |
| # tree_digest: { |
| # hash: "4a73bc9d03...", |
| # size: 55 |
| # } |
| # } |
| # // Tree proto with hash "4a73bc9d03..." and size 55: |
| # { |
| # root: { |
| # files: [ |
| # { |
| # name: "bar", |
| # digest: { |
| # hash: "4a73bc9d03...", |
| # size: 65534 |
| # } |
| # } |
| # ], |
| # directories: [ |
| # { |
| # name: "foo", |
| # digest: { |
| # hash: "4cf2eda940...", |
| # size: 43 |
| # } |
| # } |
| # ] |
| # } |
#   children: {
| # // (Directory proto with hash "4cf2eda940..." and size 43) |
| # files: [ |
| # { |
| # name: "baz", |
| # digest: { |
| # hash: "b2c941073e...", |
| # size: 1294, |
| # }, |
| # is_executable: true |
| # } |
| # ] |
| # } |
| # } |
| # ``` |
| # If an output of the same name was found, but was not a directory, the |
| # server will return a FAILED_PRECONDITION. |
| { # An `OutputDirectory` is the output in an `ActionResult` corresponding to a |
| # directory's full contents rather than a single file. |
| "path": "A String", # The full path of the directory relative to the working directory. The path |
| # separator is a forward slash `/`. Since this is a relative path, it MUST |
| # NOT begin with a leading forward slash. The empty string value is allowed, |
| # and it denotes the entire working directory. |
"treeDigest": { # The digest of the encoded
    # Tree proto containing the
    # directory's contents.
    #
    # A content digest. A digest for a given blob consists of the size of the blob
    # and its hash. The hash algorithm to use is defined by the server, but servers
    # SHOULD use SHA-256.
| # |
| # The size is considered to be an integral part of the digest and cannot be |
| # separated. That is, even if the `hash` field is correctly specified but |
| # `size_bytes` is not, the server MUST reject the request. |
| # |
| # The reason for including the size in the digest is as follows: in a great |
| # many cases, the server needs to know the size of the blob it is about to work |
| # with prior to starting an operation with it, such as flattening Merkle tree |
| # structures or streaming it to a worker. Technically, the server could |
| # implement a separate metadata store, but this results in a significantly more |
| # complicated implementation as opposed to having the client specify the size |
| # up-front (or storing the size along with the digest in every message where |
| # digests are embedded). This does mean that the API leaks some implementation |
| # details of (what we consider to be) a reasonable server implementation, but |
| # we consider this to be a worthwhile tradeoff. |
| # |
| # When a `Digest` is used to refer to a proto message, it always refers to the |
| # message in binary encoded form. To ensure consistent hashing, clients and |
| # servers MUST ensure that they serialize messages according to the following |
| # rules, even if there are alternate valid encodings for the same message: |
| # |
| # * Fields are serialized in tag order. |
| # * There are no unknown fields. |
| # * There are no duplicate fields. |
| # * Fields are serialized according to the default semantics for their type. |
| # |
| # Most protocol buffer implementations will always follow these rules when |
| # serializing, but care should be taken to avoid shortcuts. For instance, |
| # concatenating two messages to merge them may produce duplicate fields. |
| "sizeBytes": "A String", # The size of the blob, in bytes. |
| "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string |
| # exactly 64 characters long. |
| }, |
| }, |
| ], |
| "exitCode": 42, # The exit code of the command. |
| }</pre> |
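<p>The digest rules above (a mandatory lowercase-hex SHA-256 hash plus a mandatory byte size, with int64 fields serialized as JSON strings) can be sketched in Python. The helper name <code>make_digest</code> is illustrative and is not part of this client library.</p>

```python
import hashlib


def make_digest(blob: bytes) -> dict:
    """Build a Digest dict for a blob: lowercase hex SHA-256 plus size.

    Per the API contract, hash and size_bytes are both integral parts of
    the digest; a server MUST reject a request that supplies one without
    the other. sizeBytes is a string because int64 fields are serialized
    as strings in JSON.
    """
    return {
        "hash": hashlib.sha256(blob).hexdigest(),  # 64 lowercase hex chars
        "sizeBytes": str(len(blob)),
    }
```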
| </div> |
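<p>When the server inlines <code>stdoutRaw</code>, a client can verify it against <code>stdoutDigest</code>. This sketch assumes the raw bytes arrive base64-encoded, as bytes fields do in JSON transcoding; <code>stdout_matches_digest</code> is a hypothetical helper, not part of this library.</p>

```python
import base64
import hashlib


def stdout_matches_digest(stdout_raw_b64: str, stdout_digest: dict) -> bool:
    """Check an inlined stdoutRaw field against stdoutDigest.

    Both the SHA-256 hash and the size must match, since the size is an
    integral part of the digest and cannot be separated from the hash.
    """
    raw = base64.b64decode(stdout_raw_b64)
    return (
        hashlib.sha256(raw).hexdigest() == stdout_digest["hash"]
        and str(len(raw)) == stdout_digest["sizeBytes"]
    )
```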
| |
| </body></html> |