| <html><body> |
| <style> |
| |
| body, h1, h2, h3, div, span, p, pre, a { |
| margin: 0; |
| padding: 0; |
| border: 0; |
| font-weight: inherit; |
| font-style: inherit; |
| font-size: 100%; |
| font-family: inherit; |
| vertical-align: baseline; |
| } |
| |
| body { |
| font-size: 13px; |
| padding: 1em; |
| } |
| |
| h1 { |
| font-size: 26px; |
| margin-bottom: 1em; |
| } |
| |
| h2 { |
| font-size: 24px; |
| margin-bottom: 1em; |
| } |
| |
| h3 { |
| font-size: 20px; |
| margin-bottom: 1em; |
| margin-top: 1em; |
| } |
| |
| pre, code { |
| line-height: 1.5; |
| font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace; |
| } |
| |
| pre { |
| margin-top: 0.5em; |
| } |
| |
| h1, h2, h3, p { |
font-family: Arial, sans-serif;
| } |
| |
| h1, h2, h3 { |
| border-bottom: solid #CCC 1px; |
| } |
| |
| .toc_element { |
| margin-top: 0.5em; |
| } |
| |
| .firstline { |
margin-left: 2em;
| } |
| |
| .method { |
| margin-top: 1em; |
| border: solid 1px #CCC; |
| padding: 1em; |
| background: #EEE; |
| } |
| |
| .details { |
| font-weight: bold; |
| font-size: 14px; |
| } |
| |
| </style> |
| |
| <h1><a href="ml_v1.html">AI Platform Training & Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.jobs.html">jobs</a></h1> |
| <h2>Instance Methods</h2> |
| <p class="toc_element"> |
| <code><a href="#cancel">cancel(name, body=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Cancels a running job.</p> |
| <p class="toc_element"> |
| <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Creates a training or a batch prediction job.</p> |
| <p class="toc_element"> |
| <code><a href="#get">get(name, x__xgafv=None)</a></code></p> |
| <p class="firstline">Describes a job.</p> |
| <p class="toc_element"> |
| <code><a href="#getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Gets the access control policy for a resource.</p> |
| <p class="toc_element"> |
| <code><a href="#list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Lists the jobs in the project.</p> |
| <p class="toc_element"> |
| <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p> |
| <p class="firstline">Retrieves the next page of results.</p> |
| <p class="toc_element"> |
| <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Updates a specific job resource.</p> |
| <p class="toc_element"> |
| <code><a href="#setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Sets the access control policy on the specified resource. Replaces any</p> |
| <p class="toc_element"> |
| <code><a href="#testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</a></code></p> |
| <p class="firstline">Returns permissions that a caller has on the specified resource.</p> |
| <h3>Method Details</h3> |
| <div class="method"> |
| <code class="details" id="cancel">cancel(name, body=None, x__xgafv=None)</code> |
| <pre>Cancels a running job. |
| |
| Args: |
| name: string, Required. The name of the job to cancel. (required) |
| body: object, The request body. |
| The object takes the form of: |
| |
| { # Request message for the CancelJob method. |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # A generic empty message that you can re-use to avoid defining duplicated |
| # empty messages in your APIs. A typical example is to use it as the request |
| # or the response type of an API method. For instance: |
| # |
| # service Foo { |
| # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); |
| # } |
| # |
# The JSON representation for `Empty` is an empty JSON object `{}`.
| }</pre> |
| </div> |
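<p>As a hedged illustration (not part of the generated reference), the arguments <code>cancel</code> expects can be assembled as below: the full job resource name plus an empty request body, since the CancelJob request message has no fields. The client call itself is shown in comments because it needs credentials; <code>cancel_args</code> is a hypothetical helper, not part of the client library.</p>

```python
# Sketch: keyword arguments for projects().jobs().cancel().

def cancel_args(project, job_id):
    """Build the kwargs for a cancel() call on a running job."""
    return {
        'name': 'projects/{}/jobs/{}'.format(project, job_id),
        'body': {},  # CancelJob's request message is an empty object.
    }

# With a built discovery client `ml`:
#   ml.projects().jobs().cancel(**cancel_args('my-project', 'my_job')).execute()
```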
| |
| <div class="method"> |
| <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code> |
| <pre>Creates a training or a batch prediction job. |
| |
| Args: |
| parent: string, Required. The project name. (required) |
| body: object, The request body. |
| The object takes the form of: |
| |
| { # Represents a training or prediction job. |
| "createTime": "A String", # Output only. When the job was created. |
| "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job. |
| "dataFormat": "A String", # Required. The format of the input data files. |
| "outputPath": "A String", # Required. The output Google Cloud Storage location. |
| "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain |
| # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>. |
| "A String", |
| ], |
| "region": "A String", # Required. The Google Compute Engine region to run the prediction job in. |
| # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> |
| # for AI Platform services. |
| "versionName": "A String", # Use this field if you want to specify a version of the model to use. The |
| # string is formatted the same way as `model_version`, with the addition |
| # of the version information: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"` |
| "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for |
| # the model to use. |
| "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch |
| # prediction. If not set, AI Platform will pick the runtime version used |
| # during the CreateVersion request for this model version, or choose the |
| # latest stable version when model version information is not available |
| # such as when the model is specified by uri. |
| "modelName": "A String", # Use this field if you want to use the default version for the specified |
| # model. The string must use the following format: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL"` |
| "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for |
| # this job. Please refer to |
| # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html) |
| # for information about how to use signatures. |
| # |
| # Defaults to |
| # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants) |
| # , which is "serving_default". |
| "batchSize": "A String", # Optional. Number of records per batch, defaults to 64. |
| # The service will buffer batch_size number of records in memory before |
| # invoking one Tensorflow prediction call internally. So take the record |
| # size and memory available into consideration when setting this parameter. |
| "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing. |
| # Defaults to 10 if not specified. |
| }, |
| "labels": { # Optional. One or more labels that you can add, to organize your jobs. |
| # Each label is a key-value pair, where both the key and the value are |
| # arbitrary strings that you supply. |
| # For more information, see the documentation on |
| # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. |
| "a_key": "A String", |
| }, |
| "endTime": "A String", # Output only. When the job processing was completed. |
| "trainingOutput": { # Represents results of a training job. Output only. # The current training job result. |
| "trials": [ # Results for individual Hyperparameter trials. |
| # Only set for hyperparameter tuning jobs. |
| { # Represents the result of a single hyperparameter tuning trial from a |
| # training job. The TrainingOutput object that is returned on successful |
| # completion of a training job with hyperparameter tuning includes a list |
| # of HyperparameterOutput objects, one for each successful trial. |
| "endTime": "A String", # Output only. End time for the trial. |
| "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| "hyperparameters": { # The hyperparameters given to this trial. |
| "a_key": "A String", |
| }, |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for trials of built-in algorithms jobs that have succeeded. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "startTime": "A String", # Output only. Start time for the trial. |
| "allMetrics": [ # All recorded object metrics for this trial. This field is not currently |
| # populated. |
| { # An observed value of a metric. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| ], |
| "trialId": "A String", # The trial id for these results. |
| "isTrialStoppedEarly": True or False, # True if the trial is stopped early. |
| "state": "A String", # Output only. The detailed state of the trial. |
| }, |
| ], |
| "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully. |
| # Only set for hyperparameter tuning jobs. |
| "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job. |
| "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job. |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for built-in algorithms jobs. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "consumedMLUnits": 3.14, # The amount of ML units consumed by the job. |
| "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning |
| # trials. See |
| # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag) |
| # for more information. Only set for hyperparameter tuning jobs. |
| }, |
| "errorMessage": "A String", # Output only. The details of a failure or a cancellation. |
| "predictionOutput": { # Represents results of a prediction job. # The current prediction job result. |
| "errorCount": "A String", # The number of data instances which resulted in errors. |
| "nodeHours": 3.14, # Node hours used by the batch prediction job. |
| "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time. |
| "predictionCount": "A String", # The number of generated predictions. |
| }, |
| "startTime": "A String", # Output only. When the job processing was started. |
| "state": "A String", # Output only. The detailed state of a job. |
| "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job. |
| # to submit your training job, you can specify the input parameters as |
| # command-line arguments and/or in a YAML configuration file referenced from |
| # the --config command-line argument. For details, see the guide to [submitting |
| # a training job](/ai-platform/training/docs/training-jobs). |
| "serviceAccount": "A String", # Optional. Specifies the service account for workload run-as account. |
| # Users submitting jobs must have act-as permission on this run-as account. |
| # If not specified, then CMLE P4SA will be used by default. |
| "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers. |
| # |
| # You should only set `workerConfig.acceleratorConfig` if `workerType` is set |
| # to a Compute Engine machine type. [Learn about restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `workerConfig.imageUri` only if you build a custom image for your |
| # worker. If `workerConfig.imageUri` has not been set, AI Platform uses |
| # the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it will override default ENTRYPOINT of the docker image. |
| # If not provided, the docker image's ENTRYPOINT is used. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment |
| # variable when training with a custom container. Defaults to `false`. [Learn |
| # more about this |
| # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master) |
| # |
| # This field has no effect for training jobs that don't use a custom |
| # container. |
| "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's master worker. You must specify this field when `scaleTier` is set to |
| # `CUSTOM`. |
| # |
| # You can use certain Compute Engine machine types directly in this field. |
| # The following types are supported: |
| # |
| # - `n1-standard-4` |
| # - `n1-standard-8` |
| # - `n1-standard-16` |
| # - `n1-standard-32` |
| # - `n1-standard-64` |
| # - `n1-standard-96` |
| # - `n1-highmem-2` |
| # - `n1-highmem-4` |
| # - `n1-highmem-8` |
| # - `n1-highmem-16` |
| # - `n1-highmem-32` |
| # - `n1-highmem-64` |
| # - `n1-highmem-96` |
| # - `n1-highcpu-16` |
| # - `n1-highcpu-32` |
| # - `n1-highcpu-64` |
| # - `n1-highcpu-96` |
| # |
| # Learn more about [using Compute Engine machine |
| # types](/ml-engine/docs/machine-types#compute-engine-machine-types). |
| # |
| # Alternatively, you can use the following legacy machine types: |
| # |
| # - `standard` |
| # - `large_model` |
| # - `complex_model_s` |
| # - `complex_model_m` |
| # - `complex_model_l` |
| # - `standard_gpu` |
| # - `complex_model_m_gpu` |
| # - `complex_model_l_gpu` |
| # - `standard_p100` |
| # - `complex_model_m_p100` |
| # - `standard_v100` |
| # - `large_model_v100` |
| # - `complex_model_m_v100` |
| # - `complex_model_l_v100` |
| # |
| # Learn more about [using legacy machine |
| # types](/ml-engine/docs/machine-types#legacy-machine-types). |
| # |
| # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this |
| # field. Learn more about the [special configuration options for training |
| # with |
| # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers. |
| # |
| # You should only set `parameterServerConfig.acceleratorConfig` if |
| # `parameterServerType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `parameterServerConfig.imageUri` only if you build a custom image for |
| # your parameter server. If `parameterServerConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it will override default ENTRYPOINT of the docker image. |
| # If not provided, the docker image's ENTRYPOINT is used. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "region": "A String", # Required. The region to run the training job in. See the [available |
| # regions](/ai-platform/training/docs/regions) for AI Platform Training. |
| "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs |
| # and other data needed for training. This path is passed to your TensorFlow |
| # program as the '--job-dir' command-line argument. The benefit of specifying |
| # this field is that Cloud ML validates the path for use in training. |
| "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify |
| # this field or specify `masterConfig.imageUri`. |
| # |
| # The following Python versions are available: |
| # |
| # * Python '3.7' is available when `runtime_version` is set to '1.15' or |
| # later. |
| # * Python '3.5' is available when `runtime_version` is set to a version |
| # from '1.4' to '1.14'. |
| # * Python '2.7' is available when `runtime_version` is set to '1.15' or |
| # earlier. |
| # |
| # Read more about the Python versions available for [each runtime |
| # version](/ml-engine/docs/runtime-version-list). |
| "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune. |
| "maxTrials": 42, # Optional. How many training trials should be attempted to optimize |
| # the specified hyperparameters. |
| # |
| # Defaults to one. |
| "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial |
| # early stopping. |
| "params": [ # Required. The set of parameters to tune. |
| { # Represents a single hyperparameter to optimize. |
| "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be integers if |
# type is `INTEGER`.
| "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories. |
| "A String", |
| ], |
| "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube. |
| # Leave unset for categorical parameters. |
| # Some kind of scaling is strongly recommended for real or integral |
| # parameters (e.g., `UNIT_LINEAR_SCALE`). |
| "discreteValues": [ # Required if type is `DISCRETE`. |
| # A list of feasible points. |
| # The list should be in strictly increasing order. For instance, this |
| # parameter might have possible settings of 1.5, 2.5, and 4.0. This list |
| # should not contain more than 1,000 values. |
| 3.14, |
| ], |
| "type": "A String", # Required. The type of the parameter. |
| "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be integers if |
| # type is `INTEGER`. |
| "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in |
| # a HyperparameterSpec message. E.g., "learning_rate". |
| }, |
| ], |
| "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing |
| # the hyperparameter tuning job. You can specify this field to override the |
| # default failing criteria for AI Platform hyperparameter tuning jobs. |
| # |
| # Defaults to zero, which means the service decides when a hyperparameter |
| # job should fail. |
| "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For |
| # current versions of TensorFlow, this tag name should exactly match what is |
| # shown in TensorBoard, including all scopes. For versions of TensorFlow |
| # prior to 0.12, this should be only the tag passed to tf.Summary. |
| # By default, "training/hptuning/metric" will be used. |
| "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to |
# continue with. The job id will be used to find the corresponding Vizier
# study guid and resume the study.
| "goal": "A String", # Required. The type of goal to use for tuning. Available types are |
| # `MAXIMIZE` and `MINIMIZE`. |
| # |
| # Defaults to `MAXIMIZE`. |
| "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter |
| # tuning job. |
| # Uses the default AI Platform hyperparameter tuning |
| # algorithm if unspecified. |
| "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently. |
| # You can reduce the time it takes to perform hyperparameter tuning by adding |
# trials in parallel. However, each trial only benefits from the information
| # gained in completed trials. That means that a trial does not get access to |
| # the results of trials running at the same time, which could reduce the |
| # quality of the overall optimization. |
| # |
| # Each trial will use the same scale tier and machine types. |
| # |
| # Defaults to one. |
| }, |
| "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's evaluator nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `evaluatorCount` is greater than zero. |
| "network": "A String", # Optional. The full name of the Google Compute Engine |
| # [network](/compute/docs/networks-and-firewalls#networks) to which the Job |
| # is peered. For example, projects/12345/global/networks/myVPC. Format is of |
| # the form projects/{project}/global/networks/{network}. Where {project} is a |
# project number, as in '12345', and {network} is the network name.
| # |
| # Private services access must already be configured for the network. If left |
| # unspecified, the Job is not peered with any network. Learn more - |
| # Connecting Job to user network over private |
| # IP. |
| "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's parameter server. |
| # |
| # The supported values are the same as those described in the entry for |
| # `master_type`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `parameter_server_count` is greater than zero. |
| "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's worker nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # If you use `cloud_tpu` for this value, see special instructions for |
| # [configuring a custom TPU |
| # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `workerCount` is greater than zero. |
| "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker. |
| # |
| # You should only set `masterConfig.acceleratorConfig` if `masterType` is set |
| # to a Compute Engine machine type. Learn about [restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `masterConfig.imageUri` only if you build a custom image. Only one of |
| # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more |
| # about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
# The following rules apply to container_command and container_args:
# - If you do not supply command or args:
#   The defaults defined in the Docker image are used.
# - If you supply a command but no args:
#   The default Entrypoint and the default Cmd defined in the Docker image
#   are ignored. Your command is run without any arguments.
# - If you supply only args:
#   The default Entrypoint defined in the Docker image is run with the args
#   that you supplied.
# - If you supply a command and args:
#   The default Entrypoint and the default Cmd defined in the Docker image
#   are ignored. Your command is run with your args.
# It cannot be set if a custom container image is not provided.
# Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
# both cannot be set at the same time.
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
# If provided, it overrides the default ENTRYPOINT of the Docker image.
# If not provided, the Docker image's ENTRYPOINT is used.
# It cannot be set if a custom container image is not provided.
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job. |
| # Each replica in the cluster will be of the type specified in |
| # `evaluator_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `evaluator_type`. |
| # |
| # The default value is zero. |
| "args": [ # Optional. Command-line arguments passed to the training application when it |
| # starts. If your job uses a custom container, then the arguments are passed |
| # to the container's <a class="external" target="_blank" |
| # href="https://docs.docker.com/engine/reference/builder/#entrypoint"> |
| # `ENTRYPOINT`</a> command. |
| "A String", |
| ], |
| "pythonModule": "A String", # Required. The Python module name to run after installing the packages. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must |
| # either specify this field or specify `masterConfig.imageUri`. |
| # |
| # For more information, see the [runtime version |
| # list](/ai-platform/training/docs/runtime-version-list) and learn [how to |
| # manage runtime versions](/ai-platform/training/docs/versioning). |
| "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training |
| # job. Each replica in the cluster will be of the type specified in |
| # `parameter_server_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `parameter_server_type`. |
| # |
| # The default value is zero. |
| "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators. |
| # |
| # You should only set `evaluatorConfig.acceleratorConfig` if |
| # `evaluatorType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `evaluatorConfig.imageUri` only if you build a custom image for |
| # your evaluator. If `evaluatorConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
# The following rules apply to container_command and container_args:
# - If you do not supply command or args:
#   The defaults defined in the Docker image are used.
# - If you supply a command but no args:
#   The default Entrypoint and the default Cmd defined in the Docker image
#   are ignored. Your command is run without any arguments.
# - If you supply only args:
#   The default Entrypoint defined in the Docker image is run with the args
#   that you supplied.
# - If you supply a command and args:
#   The default Entrypoint and the default Cmd defined in the Docker image
#   are ignored. Your command is run with your args.
# It cannot be set if a custom container image is not provided.
# Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
# both cannot be set at the same time.
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
# If provided, it overrides the default ENTRYPOINT of the Docker image.
# If not provided, the Docker image's ENTRYPOINT is used.
# It cannot be set if a custom container image is not provided.
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to |
| # protect resources created by a training job, instead of using Google's |
| # default encryption. If this is set, then all resources created by the |
| # training job will be encrypted with the customer-managed encryption key |
| # that you specify. |
| # |
| # [Learn how and when to use CMEK with AI Platform |
| # Training](/ai-platform/training/docs/cmek). |
| # a resource. |
| "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key |
| # used to protect a resource, such as a training job. It has the following |
| # format: |
| # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}` |
| }, |
| "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each |
| # replica in the cluster will be of the type specified in `worker_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `worker_type`. |
| # |
| # The default value is zero. |
| "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job. |
| "maxWaitTime": "A String", |
| "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can |
| # contain up to nine fractional digits, terminated by `s`. If not specified, |
| # this field defaults to `604800s` (seven days). |
| # |
| # If the training job is still running after this duration, AI Platform |
| # Training cancels it. |
| # |
| # For example, if you want to ensure your job runs for no more than 2 hours, |
| # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds / |
| # minute). |
| # |
| # If you submit your training job using the `gcloud` tool, you can [provide |
| # this field in a `config.yaml` |
| # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters). |
| # For example: |
| # |
| # ```yaml |
| # trainingInput: |
| # ... |
| # scheduling: |
| # maxRunningTime: 7200s |
| # ... |
| # ``` |
| }, |
| "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers |
| # and parameter servers. |
| "packageUris": [ # Required. The Google Cloud Storage location of the packages with |
| # the training program and any additional dependencies. |
| # The maximum number of package URIs is 100. |
| "A String", |
| ], |
| }, |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a job from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform job updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `GetJob`, and |
| # systems are expected to put that etag in the request to `UpdateJob` to |
| # ensure that their change will be applied to the same version of the job. |
| "jobId": "A String", # Required. The user-specified id of the job. |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
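A request body like the one documented above can be assembled as a plain Python dict and submitted with the generated client. The sketch below is illustrative only: the project ID, bucket path, module name, and job ID are placeholder assumptions, and the commented-out `discovery.build` call requires the `google-api-python-client` package and application-default credentials.

```python
def make_training_job(job_id, package_uri, python_module, region):
    """Build a minimal Job resource for projects.jobs.create.

    Only the required TrainingInput fields are set; scaleTier BASIC avoids
    the CUSTOM-tier machine-type requirements described above.
    """
    return {
        "jobId": job_id,
        "trainingInput": {
            "scaleTier": "BASIC",
            "packageUris": [package_uri],
            "pythonModule": python_module,
            "region": region,
            "runtimeVersion": "1.15",
            "pythonVersion": "3.7",
        },
    }

# All of these values are hypothetical placeholders.
job = make_training_job(
    "my_job_001",
    "gs://my-bucket/trainer-0.1.tar.gz",
    "trainer.task",
    "us-central1",
)

# With the client library installed, the job would be submitted roughly
# like this (network call, so shown only as a comment):
#
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   request = ml.projects().jobs().create(
#       parent="projects/my-project", body=job)
#   response = request.execute()
```

The response of `create` has the same shape as the Job object described under Returns below, with output-only fields such as `createTime` and `state` populated by the service.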
| |
| Returns: |
| An object of the form: |
| |
| { # Represents a training or prediction job. |
| "createTime": "A String", # Output only. When the job was created. |
| "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job. |
| "dataFormat": "A String", # Required. The format of the input data files. |
| "outputPath": "A String", # Required. The output Google Cloud Storage location. |
| "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain |
| # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>. |
| "A String", |
| ], |
| "region": "A String", # Required. The Google Compute Engine region to run the prediction job in. |
| # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> |
| # for AI Platform services. |
| "versionName": "A String", # Use this field if you want to specify a version of the model to use. The |
| # string is formatted the same way as `model_version`, with the addition |
| # of the version information: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"` |
| "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for |
| # the model to use. |
| "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch |
| # prediction. If not set, AI Platform will pick the runtime version used |
| # during the CreateVersion request for this model version, or choose the |
# latest stable version when model version information is not available,
# such as when the model is specified by `uri`.
| "modelName": "A String", # Use this field if you want to use the default version for the specified |
| # model. The string must use the following format: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL"` |
| "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for |
| # this job. Please refer to |
| # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html) |
| # for information about how to use signatures. |
| # |
# Defaults to
# [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants),
# which is "serving_default".
| "batchSize": "A String", # Optional. Number of records per batch, defaults to 64. |
# The service buffers `batch_size` records in memory before making one
# internal TensorFlow prediction call, so take the record size and
# available memory into consideration when setting this parameter.
| "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing. |
| # Defaults to 10 if not specified. |
| }, |
| "labels": { # Optional. One or more labels that you can add, to organize your jobs. |
| # Each label is a key-value pair, where both the key and the value are |
| # arbitrary strings that you supply. |
| # For more information, see the documentation on |
| # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. |
| "a_key": "A String", |
| }, |
| "endTime": "A String", # Output only. When the job processing was completed. |
| "trainingOutput": { # Represents results of a training job. Output only. # The current training job result. |
| "trials": [ # Results for individual Hyperparameter trials. |
| # Only set for hyperparameter tuning jobs. |
| { # Represents the result of a single hyperparameter tuning trial from a |
| # training job. The TrainingOutput object that is returned on successful |
| # completion of a training job with hyperparameter tuning includes a list |
| # of HyperparameterOutput objects, one for each successful trial. |
| "endTime": "A String", # Output only. End time for the trial. |
| "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| "hyperparameters": { # The hyperparameters given to this trial. |
| "a_key": "A String", |
| }, |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for trials of built-in algorithms jobs that have succeeded. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "startTime": "A String", # Output only. Start time for the trial. |
| "allMetrics": [ # All recorded object metrics for this trial. This field is not currently |
| # populated. |
| { # An observed value of a metric. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| ], |
| "trialId": "A String", # The trial id for these results. |
| "isTrialStoppedEarly": True or False, # True if the trial is stopped early. |
| "state": "A String", # Output only. The detailed state of the trial. |
| }, |
| ], |
| "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully. |
| # Only set for hyperparameter tuning jobs. |
| "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job. |
| "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job. |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for built-in algorithms jobs. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "consumedMLUnits": 3.14, # The amount of ML units consumed by the job. |
| "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning |
| # trials. See |
| # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag) |
| # for more information. Only set for hyperparameter tuning jobs. |
| }, |
| "errorMessage": "A String", # Output only. The details of a failure or a cancellation. |
| "predictionOutput": { # Represents results of a prediction job. # The current prediction job result. |
| "errorCount": "A String", # The number of data instances which resulted in errors. |
| "nodeHours": 3.14, # Node hours used by the batch prediction job. |
| "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time. |
| "predictionCount": "A String", # The number of generated predictions. |
| }, |
| "startTime": "A String", # Output only. When the job processing was started. |
| "state": "A String", # Output only. The detailed state of a job. |
| "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job. |
| # to submit your training job, you can specify the input parameters as |
| # command-line arguments and/or in a YAML configuration file referenced from |
| # the --config command-line argument. For details, see the guide to [submitting |
| # a training job](/ai-platform/training/docs/training-jobs). |
| "serviceAccount": "A String", # Optional. Specifies the service account for workload run-as account. |
| # Users submitting jobs must have act-as permission on this run-as account. |
| # If not specified, then CMLE P4SA will be used by default. |
| "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers. |
| # |
| # You should only set `workerConfig.acceleratorConfig` if `workerType` is set |
| # to a Compute Engine machine type. [Learn about restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `workerConfig.imageUri` only if you build a custom image for your |
| # worker. If `workerConfig.imageUri` has not been set, AI Platform uses |
| # the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
# The following rules apply to container_command and container_args:
# - If you do not supply command or args:
#   The defaults defined in the Docker image are used.
# - If you supply a command but no args:
#   The default Entrypoint and the default Cmd defined in the Docker image
#   are ignored. Your command is run without any arguments.
# - If you supply only args:
#   The default Entrypoint defined in the Docker image is run with the args
#   that you supplied.
# - If you supply a command and args:
#   The default Entrypoint and the default Cmd defined in the Docker image
#   are ignored. Your command is run with your args.
# It cannot be set if a custom container image is not provided.
# Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
# both cannot be set at the same time.
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
# If provided, it overrides the default ENTRYPOINT of the Docker image.
# If not provided, the Docker image's ENTRYPOINT is used.
# It cannot be set if a custom container image is not provided.
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment |
| # variable when training with a custom container. Defaults to `false`. [Learn |
| # more about this |
| # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master) |
| # |
| # This field has no effect for training jobs that don't use a custom |
| # container. |
| "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's master worker. You must specify this field when `scaleTier` is set to |
| # `CUSTOM`. |
| # |
| # You can use certain Compute Engine machine types directly in this field. |
| # The following types are supported: |
| # |
| # - `n1-standard-4` |
| # - `n1-standard-8` |
| # - `n1-standard-16` |
| # - `n1-standard-32` |
| # - `n1-standard-64` |
| # - `n1-standard-96` |
| # - `n1-highmem-2` |
| # - `n1-highmem-4` |
| # - `n1-highmem-8` |
| # - `n1-highmem-16` |
| # - `n1-highmem-32` |
| # - `n1-highmem-64` |
| # - `n1-highmem-96` |
| # - `n1-highcpu-16` |
| # - `n1-highcpu-32` |
| # - `n1-highcpu-64` |
| # - `n1-highcpu-96` |
| # |
| # Learn more about [using Compute Engine machine |
| # types](/ml-engine/docs/machine-types#compute-engine-machine-types). |
| # |
| # Alternatively, you can use the following legacy machine types: |
| # |
| # - `standard` |
| # - `large_model` |
| # - `complex_model_s` |
| # - `complex_model_m` |
| # - `complex_model_l` |
| # - `standard_gpu` |
| # - `complex_model_m_gpu` |
| # - `complex_model_l_gpu` |
| # - `standard_p100` |
| # - `complex_model_m_p100` |
| # - `standard_v100` |
| # - `large_model_v100` |
| # - `complex_model_m_v100` |
| # - `complex_model_l_v100` |
| # |
| # Learn more about [using legacy machine |
| # types](/ml-engine/docs/machine-types#legacy-machine-types). |
| # |
| # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this |
| # field. Learn more about the [special configuration options for training |
| # with |
| # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers. |
| # |
| # You should only set `parameterServerConfig.acceleratorConfig` if |
| # `parameterServerType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `parameterServerConfig.imageUri` only if you build a custom image for |
| # your parameter server. If `parameterServerConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
# The following rules apply to container_command and container_args:
# - If you do not supply command or args:
#   The defaults defined in the Docker image are used.
# - If you supply a command but no args:
#   The default Entrypoint and the default Cmd defined in the Docker image
#   are ignored. Your command is run without any arguments.
# - If you supply only args:
#   The default Entrypoint defined in the Docker image is run with the args
#   that you supplied.
# - If you supply a command and args:
#   The default Entrypoint and the default Cmd defined in the Docker image
#   are ignored. Your command is run with your args.
# It cannot be set if a custom container image is not provided.
# Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
# both cannot be set at the same time.
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
# If provided, it overrides the default ENTRYPOINT of the Docker image.
# If not provided, the Docker image's ENTRYPOINT is used.
# It cannot be set if a custom container image is not provided.
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "region": "A String", # Required. The region to run the training job in. See the [available |
| # regions](/ai-platform/training/docs/regions) for AI Platform Training. |
| "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs |
| # and other data needed for training. This path is passed to your TensorFlow |
| # program as the '--job-dir' command-line argument. The benefit of specifying |
| # this field is that Cloud ML validates the path for use in training. |
| "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify |
| # this field or specify `masterConfig.imageUri`. |
| # |
| # The following Python versions are available: |
| # |
| # * Python '3.7' is available when `runtime_version` is set to '1.15' or |
| # later. |
| # * Python '3.5' is available when `runtime_version` is set to a version |
| # from '1.4' to '1.14'. |
| # * Python '2.7' is available when `runtime_version` is set to '1.15' or |
| # earlier. |
| # |
| # Read more about the Python versions available for [each runtime |
| # version](/ml-engine/docs/runtime-version-list). |
| "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune. |
| "maxTrials": 42, # Optional. How many training trials should be attempted to optimize |
| # the specified hyperparameters. |
| # |
| # Defaults to one. |
| "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial |
| # early stopping. |
| "params": [ # Required. The set of parameters to tune. |
| { # Represents a single hyperparameter to optimize. |
| "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be integers if |
| # type is INTEGER. |
| "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories. |
| "A String", |
| ], |
| "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube. |
| # Leave unset for categorical parameters. |
| # Some kind of scaling is strongly recommended for real or integral |
| # parameters (e.g., `UNIT_LINEAR_SCALE`). |
| "discreteValues": [ # Required if type is `DISCRETE`. |
| # A list of feasible points. |
| # The list should be in strictly increasing order. For instance, this |
| # parameter might have possible settings of 1.5, 2.5, and 4.0. This list |
| # should not contain more than 1,000 values. |
| 3.14, |
| ], |
| "type": "A String", # Required. The type of the parameter. |
| "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be integers if |
| # type is `INTEGER`. |
| "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in |
| # a HyperparameterSpec message. E.g., "learning_rate". |
| }, |
| ], |
| "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing |
| # the hyperparameter tuning job. You can specify this field to override the |
| # default failing criteria for AI Platform hyperparameter tuning jobs. |
| # |
| # Defaults to zero, which means the service decides when a hyperparameter |
| # job should fail. |
| "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For |
| # current versions of TensorFlow, this tag name should exactly match what is |
| # shown in TensorBoard, including all scopes. For versions of TensorFlow |
| # prior to 0.12, this should be only the tag passed to tf.Summary. |
| # By default, "training/hptuning/metric" will be used. |
| "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to |
| # continue with. The job id will be used to find the corresponding vizier |
| # study guid and resume the study. |
| "goal": "A String", # Required. The type of goal to use for tuning. Available types are |
| # `MAXIMIZE` and `MINIMIZE`. |
| # |
| # Defaults to `MAXIMIZE`. |
| "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter |
| # tuning job. |
| # Uses the default AI Platform hyperparameter tuning |
| # algorithm if unspecified. |
| "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently. |
| # You can reduce the time it takes to perform hyperparameter tuning by adding |
| # trials in parallel. However, each trial only benefits from the information |
| # gained in completed trials. That means that a trial does not get access to |
| # the results of trials running at the same time, which could reduce the |
| # quality of the overall optimization. |
| # |
| # Each trial will use the same scale tier and machine types. |
| # |
| # Defaults to one. |
| }, |
| "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's evaluator nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `evaluatorCount` is greater than zero. |
| "network": "A String", # Optional. The full name of the Google Compute Engine |
| # [network](/compute/docs/networks-and-firewalls#networks) to which the Job |
| # is peered. For example, projects/12345/global/networks/myVPC. The format is |
| # projects/{project}/global/networks/{network}, where {project} is a |
| # project number, as in '12345', and {network} is a network name. |
| # |
| # Private services access must already be configured for the network. If left |
| # unspecified, the Job is not peered with any network. Learn more about |
| # connecting a Job to a user network over private IP. |
| "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's parameter server. |
| # |
| # The supported values are the same as those described in the entry for |
| # `master_type`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `parameter_server_count` is greater than zero. |
| "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's worker nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # If you use `cloud_tpu` for this value, see special instructions for |
| # [configuring a custom TPU |
| # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `workerCount` is greater than zero. |
| "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker. |
| # |
| # You should only set `masterConfig.acceleratorConfig` if `masterType` is set |
| # to a Compute Engine machine type. Learn about [restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `masterConfig.imageUri` only if you build a custom image. Only one of |
| # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more |
| # about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # It cannot be set if a custom container image is not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it overrides the default ENTRYPOINT of the Docker image. |
| # If not provided, the Docker image's ENTRYPOINT is used. |
| # It cannot be set if a custom container image is not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job. |
| # Each replica in the cluster will be of the type specified in |
| # `evaluator_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `evaluator_type`. |
| # |
| # The default value is zero. |
| "args": [ # Optional. Command-line arguments passed to the training application when it |
| # starts. If your job uses a custom container, then the arguments are passed |
| # to the container's <a class="external" target="_blank" |
| # href="https://docs.docker.com/engine/reference/builder/#entrypoint"> |
| # `ENTRYPOINT`</a> command. |
| "A String", |
| ], |
| "pythonModule": "A String", # Required. The Python module name to run after installing the packages. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must |
| # either specify this field or specify `masterConfig.imageUri`. |
| # |
| # For more information, see the [runtime version |
| # list](/ai-platform/training/docs/runtime-version-list) and learn [how to |
| # manage runtime versions](/ai-platform/training/docs/versioning). |
| "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training |
| # job. Each replica in the cluster will be of the type specified in |
| # `parameter_server_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `parameter_server_type`. |
| # |
| # The default value is zero. |
| "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators. |
| # |
| # You should only set `evaluatorConfig.acceleratorConfig` if |
| # `evaluatorType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `evaluatorConfig.imageUri` only if you build a custom image for |
| # your evaluator. If `evaluatorConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # It cannot be set if a custom container image is not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it overrides the default ENTRYPOINT of the Docker image. |
| # If not provided, the Docker image's ENTRYPOINT is used. |
| # It cannot be set if a custom container image is not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to a resource. # Optional. Options for using customer-managed encryption keys (CMEK) to |
| # protect resources created by a training job, instead of using Google's |
| # default encryption. If this is set, then all resources created by the |
| # training job will be encrypted with the customer-managed encryption key |
| # that you specify. |
| # |
| # [Learn how and when to use CMEK with AI Platform |
| # Training](/ai-platform/training/docs/cmek). |
| "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key |
| # used to protect a resource, such as a training job. It has the following |
| # format: |
| # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}` |
| }, |
| "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each |
| # replica in the cluster will be of the type specified in `worker_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `worker_type`. |
| # |
| # The default value is zero. |
| "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job. |
| "maxWaitTime": "A String", |
| "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can |
| # contain up to nine fractional digits, terminated by `s`. If not specified, |
| # this field defaults to `604800s` (seven days). |
| # |
| # If the training job is still running after this duration, AI Platform |
| # Training cancels it. |
| # |
| # For example, if you want to ensure your job runs for no more than 2 hours, |
| # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds / |
| # minute). |
| # |
| # If you submit your training job using the `gcloud` tool, you can [provide |
| # this field in a `config.yaml` |
| # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters). |
| # For example: |
| # |
| # ```yaml |
| # trainingInput: |
| # ... |
| # scheduling: |
| # maxRunningTime: 7200s |
| # ... |
| # ``` |
| }, |
| "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers |
| # and parameter servers. |
| "packageUris": [ # Required. The Google Cloud Storage location of the packages with |
| # the training program and any additional dependencies. |
| # The maximum number of package URIs is 100. |
| "A String", |
| ], |
| }, |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a job from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform job updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `GetJob`, and |
| # systems are expected to put that etag in the request to `UpdateJob` to |
| # ensure that their change will be applied to the same version of the job. |
| "jobId": "A String", # Required. The user-specified id of the job. |
| }</pre> |
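The schema above describes the request body as a nested JSON object. As a minimal sketch of how those pieces fit together in the Python client, the following assembles a `trainingInput` body with a hyperparameter tuning spec and a `scheduling.maxRunningTime` limit, then leaves the actual API call commented out. The project ID, bucket, job ID, and package names are illustrative placeholders, not values from this page.

```python
# Sketch: assembling a training job body per the schema above.
# Python 3.7 requires runtimeVersion 1.15 or later, as noted in the schema.
training_input = {
    "scaleTier": "CUSTOM",
    "masterType": "n1-standard-8",      # required when scaleTier is CUSTOM
    "region": "us-central1",
    "runtimeVersion": "1.15",
    "pythonVersion": "3.7",
    "pythonModule": "trainer.task",
    "packageUris": ["gs://example-bucket/trainer-0.1.tar.gz"],  # max 100 URIs
    "jobDir": "gs://example-bucket/output",
    "scheduling": {"maxRunningTime": "7200s"},  # cancel job after 2 hours
    "hyperparameters": {
        "goal": "MAXIMIZE",
        "hyperparameterMetricTag": "training/hptuning/metric",
        "maxTrials": 10,
        "maxParallelTrials": 2,
        "params": [
            {
                "parameterName": "learning_rate",
                "type": "DOUBLE",
                "minValue": 0.0001,
                "maxValue": 0.1,
                "scaleType": "UNIT_LOG_SCALE",
            }
        ],
    },
}

job_body = {"jobId": "my_job_001", "trainingInput": training_input}

# Submitting the job would look roughly like this (requires credentials):
# from googleapiclient import discovery
# ml = discovery.build("ml", "v1")
# request = ml.projects().jobs().create(
#     parent="projects/my-project", body=job_body)
# response = request.execute()
```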
| </div> |
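Job creation is asynchronous, so callers typically poll the `get` method below until the returned `state` is terminal. A minimal sketch of that check, assuming the `SUCCEEDED`, `FAILED`, and `CANCELLED` values of the Job state enum are the terminal ones; the project and job names in the commented-out call are placeholders.

```python
# Sketch: deciding when to stop polling jobs.get for a submitted job.
TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}

def is_done(job: dict) -> bool:
    """Return True once a job dict returned by jobs.get has a terminal state."""
    return job.get("state") in TERMINAL_STATES

# A polling loop would look roughly like this (requires credentials):
# import time
# from googleapiclient import discovery
# ml = discovery.build("ml", "v1")
# name = "projects/my-project/jobs/my_job_001"
# job = ml.projects().jobs().get(name=name).execute()
# while not is_done(job):
#     time.sleep(30)
#     job = ml.projects().jobs().get(name=name).execute()
# print(job["state"], job.get("errorMessage", ""))
```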
| |
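For a completed hyperparameter tuning job, the `trainingOutput.trials` list in the `get` response below carries one `HyperparameterOutput` per successful trial. A hedged sketch of picking the winning trial by `finalMetric.objectiveValue`; the sample response data is invented for illustration.

```python
# Sketch: selecting the best trial from TrainingOutput.trials.
def best_trial(training_output: dict, goal: str = "MAXIMIZE") -> dict:
    """Return the trial whose finalMetric.objectiveValue best matches the goal."""
    trials = [t for t in training_output.get("trials", []) if "finalMetric" in t]
    key = lambda t: t["finalMetric"]["objectiveValue"]
    return (max if goal == "MAXIMIZE" else min)(trials, key=key)

# Invented example of the relevant slice of a jobs.get response:
sample_output = {
    "isHyperparameterTuningJob": True,
    "trials": [
        {"trialId": "1",
         "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.81},
         "hyperparameters": {"learning_rate": "0.01"}},
        {"trialId": "2",
         "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.93},
         "hyperparameters": {"learning_rate": "0.001"}},
    ],
}

winner = best_trial(sample_output)  # trial "2" for a MAXIMIZE goal
```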
| <div class="method"> |
| <code class="details" id="get">get(name, x__xgafv=None)</code> |
| <pre>Describes a job. |
| |
| Args: |
| name: string, Required. The name of the job to get the description of. (required) |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Represents a training or prediction job. |
| "createTime": "A String", # Output only. When the job was created. |
| "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job. |
| "dataFormat": "A String", # Required. The format of the input data files. |
| "outputPath": "A String", # Required. The output Google Cloud Storage location. |
| "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain |
| # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>. |
| "A String", |
| ], |
| "region": "A String", # Required. The Google Compute Engine region to run the prediction job in. |
| # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> |
| # for AI Platform services. |
| "versionName": "A String", # Use this field if you want to specify a version of the model to use. The |
| # string is formatted the same way as `model_version`, with the addition |
| # of the version information: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"` |
| "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for |
| # the model to use. |
| "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch |
| # prediction. If not set, AI Platform will pick the runtime version used |
| # during the CreateVersion request for this model version, or choose the |
| # latest stable version when model version information is not available, |
| # such as when the model is specified by its uri. |
| "modelName": "A String", # Use this field if you want to use the default version for the specified |
| # model. The string must use the following format: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL"` |
| "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for |
| # this job. Please refer to |
| # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html) |
| # for information about how to use signatures. |
| # |
| # Defaults to |
| # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants) |
| # , which is "serving_default". |
| "batchSize": "A String", # Optional. Number of records per batch, defaults to 64. |
| # The service will buffer batch_size number of records in memory before |
| # invoking one TensorFlow prediction call internally. So take the record |
| # size and memory available into consideration when setting this parameter. |
| "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing. |
| # Defaults to 10 if not specified. |
| }, |
| "labels": { # Optional. One or more labels that you can add to organize your jobs. |
| # Each label is a key-value pair, where both the key and the value are |
| # arbitrary strings that you supply. |
| # For more information, see the documentation on |
| # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. |
| "a_key": "A String", |
| }, |
| "endTime": "A String", # Output only. When the job processing was completed. |
| "trainingOutput": { # Represents results of a training job. Output only. # The current training job result. |
| "trials": [ # Results for individual Hyperparameter trials. |
| # Only set for hyperparameter tuning jobs. |
| { # Represents the result of a single hyperparameter tuning trial from a |
| # training job. The TrainingOutput object that is returned on successful |
| # completion of a training job with hyperparameter tuning includes a list |
| # of HyperparameterOutput objects, one for each successful trial. |
| "endTime": "A String", # Output only. End time for the trial. |
| "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| "hyperparameters": { # The hyperparameters given to this trial. |
| "a_key": "A String", |
| }, |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for trials of built-in algorithms jobs that have succeeded. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "startTime": "A String", # Output only. Start time for the trial. |
| "allMetrics": [ # All recorded objective metrics for this trial. This field is not |
| # currently populated. |
| { # An observed value of a metric. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| ], |
| "trialId": "A String", # The trial id for these results. |
| "isTrialStoppedEarly": True or False, # True if the trial is stopped early. |
| "state": "A String", # Output only. The detailed state of the trial. |
| }, |
| ], |
| "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully. |
| # Only set for hyperparameter tuning jobs. |
| "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job. |
| "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job. |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for built-in algorithms jobs. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "consumedMLUnits": 3.14, # The amount of ML units consumed by the job. |
| "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning |
| # trials. See |
| # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag) |
| # for more information. Only set for hyperparameter tuning jobs. |
| }, |
| "errorMessage": "A String", # Output only. The details of a failure or a cancellation. |
| "predictionOutput": { # Represents results of a prediction job. # The current prediction job result. |
| "errorCount": "A String", # The number of data instances which resulted in errors. |
| "nodeHours": 3.14, # Node hours used by the batch prediction job. |
| "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time. |
| "predictionCount": "A String", # The number of generated predictions. |
| }, |
| "startTime": "A String", # Output only. When the job processing was started. |
| "state": "A String", # Output only. The detailed state of a job. |
| "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job. |
| # to submit your training job, you can specify the input parameters as |
| # command-line arguments and/or in a YAML configuration file referenced from |
| # the --config command-line argument. For details, see the guide to [submitting |
| # a training job](/ai-platform/training/docs/training-jobs). |
| "serviceAccount": "A String", # Optional. The service account to use as the workload run-as account. |
| # Users submitting jobs must have act-as permission on this run-as account. |
| # If not specified, the CMLE P4SA is used by default. |
| "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers. |
| # |
| # You should only set `workerConfig.acceleratorConfig` if `workerType` is set |
| # to a Compute Engine machine type. [Learn about restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `workerConfig.imageUri` only if you build a custom image for your |
| # worker. If `workerConfig.imageUri` has not been set, AI Platform uses |
| # the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # It cannot be set if a custom container image is not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it overrides the default ENTRYPOINT of the Docker image. |
| # If not provided, the Docker image's ENTRYPOINT is used. |
| # It cannot be set if a custom container image is not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment |
| # variable when training with a custom container. Defaults to `false`. [Learn |
| # more about this |
| # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master) |
| # |
| # This field has no effect for training jobs that don't use a custom |
| # container. |
| "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's master worker. You must specify this field when `scaleTier` is set to |
| # `CUSTOM`. |
| # |
| # You can use certain Compute Engine machine types directly in this field. |
| # The following types are supported: |
| # |
| # - `n1-standard-4` |
| # - `n1-standard-8` |
| # - `n1-standard-16` |
| # - `n1-standard-32` |
| # - `n1-standard-64` |
| # - `n1-standard-96` |
| # - `n1-highmem-2` |
| # - `n1-highmem-4` |
| # - `n1-highmem-8` |
| # - `n1-highmem-16` |
| # - `n1-highmem-32` |
| # - `n1-highmem-64` |
| # - `n1-highmem-96` |
| # - `n1-highcpu-16` |
| # - `n1-highcpu-32` |
| # - `n1-highcpu-64` |
| # - `n1-highcpu-96` |
| # |
| # Learn more about [using Compute Engine machine |
| # types](/ml-engine/docs/machine-types#compute-engine-machine-types). |
| # |
| # Alternatively, you can use the following legacy machine types: |
| # |
| # - `standard` |
| # - `large_model` |
| # - `complex_model_s` |
| # - `complex_model_m` |
| # - `complex_model_l` |
| # - `standard_gpu` |
| # - `complex_model_m_gpu` |
| # - `complex_model_l_gpu` |
| # - `standard_p100` |
| # - `complex_model_m_p100` |
| # - `standard_v100` |
| # - `large_model_v100` |
| # - `complex_model_m_v100` |
| # - `complex_model_l_v100` |
| # |
| # Learn more about [using legacy machine |
| # types](/ml-engine/docs/machine-types#legacy-machine-types). |
| # |
| # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this |
| # field. Learn more about the [special configuration options for training |
| # with |
| # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers. |
| # |
| # You should only set `parameterServerConfig.acceleratorConfig` if |
| # `parameterServerType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `parameterServerConfig.imageUri` only if you build a custom image for |
| # your parameter server. If `parameterServerConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
|           # - If you supply a command but no args: |
|           #   The default `ENTRYPOINT` and the default `CMD` defined in the Docker |
|           #   image are ignored. Your command is run without any arguments. |
|           # - If you supply only args: |
|           #   The default `ENTRYPOINT` defined in the Docker image is run with the |
|           #   args that you supplied. |
|           # - If you supply a command and args: |
|           #   The default `ENTRYPOINT` and the default `CMD` defined in the Docker |
|           #   image are ignored. Your command is run with your args. |
|           # This field cannot be set if a custom container image is not provided. |
|           # Note that this field and [TrainingInput.args] are mutually exclusive: |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
|           # If provided, it overrides the default `ENTRYPOINT` of the Docker image. |
|           # If not provided, the Docker image's `ENTRYPOINT` is used. |
|           # This field cannot be set if a custom container image is not provided. |
|           # Note that this field and [TrainingInput.args] are mutually exclusive: |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "region": "A String", # Required. The region to run the training job in. See the [available |
| # regions](/ai-platform/training/docs/regions) for AI Platform Training. |
| "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs |
|       # and other data needed for training. This path is passed to your TensorFlow |
|       # program as the `--job-dir` command-line argument. The benefit of specifying |
|       # this field is that AI Platform Training validates the path for use in training. |
| "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify |
| # this field or specify `masterConfig.imageUri`. |
| # |
| # The following Python versions are available: |
| # |
| # * Python '3.7' is available when `runtime_version` is set to '1.15' or |
| # later. |
| # * Python '3.5' is available when `runtime_version` is set to a version |
| # from '1.4' to '1.14'. |
| # * Python '2.7' is available when `runtime_version` is set to '1.15' or |
| # earlier. |
| # |
| # Read more about the Python versions available for [each runtime |
| # version](/ml-engine/docs/runtime-version-list). |
|     "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of hyperparameters to tune. |
| "maxTrials": 42, # Optional. How many training trials should be attempted to optimize |
| # the specified hyperparameters. |
| # |
| # Defaults to one. |
| "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial |
| # early stopping. |
| "params": [ # Required. The set of parameters to tune. |
| { # Represents a single hyperparameter to optimize. |
|             "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
|                 # should be unset if type is `CATEGORICAL`. This value should be an |
|                 # integer if type is `INTEGER`. |
| "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories. |
| "A String", |
| ], |
| "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube. |
| # Leave unset for categorical parameters. |
| # Some kind of scaling is strongly recommended for real or integral |
| # parameters (e.g., `UNIT_LINEAR_SCALE`). |
| "discreteValues": [ # Required if type is `DISCRETE`. |
| # A list of feasible points. |
| # The list should be in strictly increasing order. For instance, this |
| # parameter might have possible settings of 1.5, 2.5, and 4.0. This list |
| # should not contain more than 1,000 values. |
| 3.14, |
| ], |
| "type": "A String", # Required. The type of the parameter. |
|             "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
|                 # should be unset if type is `CATEGORICAL`. This value should be an |
|                 # integer if type is `INTEGER`. |
| "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in |
| # a HyperparameterSpec message. E.g., "learning_rate". |
| }, |
| ], |
| "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing |
| # the hyperparameter tuning job. You can specify this field to override the |
| # default failing criteria for AI Platform hyperparameter tuning jobs. |
| # |
| # Defaults to zero, which means the service decides when a hyperparameter |
| # job should fail. |
| "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For |
| # current versions of TensorFlow, this tag name should exactly match what is |
| # shown in TensorBoard, including all scopes. For versions of TensorFlow |
| # prior to 0.12, this should be only the tag passed to tf.Summary. |
| # By default, "training/hptuning/metric" will be used. |
|         "resumePreviousJobId": "A String", # Optional. The id of a prior hyperparameter tuning job that you want |
|             # to continue. The job id is used to find the corresponding Vizier |
|             # study guid and resume that study. |
| "goal": "A String", # Required. The type of goal to use for tuning. Available types are |
| # `MAXIMIZE` and `MINIMIZE`. |
| # |
| # Defaults to `MAXIMIZE`. |
| "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter |
| # tuning job. |
| # Uses the default AI Platform hyperparameter tuning |
| # algorithm if unspecified. |
| "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently. |
| # You can reduce the time it takes to perform hyperparameter tuning by adding |
|         # trials in parallel. However, each trial only benefits from the information |
| # gained in completed trials. That means that a trial does not get access to |
| # the results of trials running at the same time, which could reduce the |
| # quality of the overall optimization. |
| # |
| # Each trial will use the same scale tier and machine types. |
| # |
| # Defaults to one. |
| }, |
| "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's evaluator nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `evaluatorCount` is greater than zero. |
| "network": "A String", # Optional. The full name of the Google Compute Engine |
| # [network](/compute/docs/networks-and-firewalls#networks) to which the Job |
|       # is peered. For example, `projects/12345/global/networks/myVPC`. The format |
|       # is `projects/{project}/global/networks/{network}`, where {project} is a |
|       # project number, as in '12345', and {network} is the network name. |
| # |
| # Private services access must already be configured for the network. If left |
|       # unspecified, the Job is not peered with any network. Learn more about |
|       # connecting a Job to a user network over private IP. |
| "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's parameter server. |
| # |
| # The supported values are the same as those described in the entry for |
| # `master_type`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `parameter_server_count` is greater than zero. |
| "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's worker nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # If you use `cloud_tpu` for this value, see special instructions for |
| # [configuring a custom TPU |
| # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `workerCount` is greater than zero. |
| "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker. |
| # |
| # You should only set `masterConfig.acceleratorConfig` if `masterType` is set |
| # to a Compute Engine machine type. Learn about [restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `masterConfig.imageUri` only if you build a custom image. Only one of |
| # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more |
| # about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
|         # - If you supply a command but no args: |
|         #   The default `ENTRYPOINT` and the default `CMD` defined in the Docker |
|         #   image are ignored. Your command is run without any arguments. |
|         # - If you supply only args: |
|         #   The default `ENTRYPOINT` defined in the Docker image is run with the |
|         #   args that you supplied. |
|         # - If you supply a command and args: |
|         #   The default `ENTRYPOINT` and the default `CMD` defined in the Docker |
|         #   image are ignored. Your command is run with your args. |
|         # This field cannot be set if a custom container image is not provided. |
|         # Note that this field and [TrainingInput.args] are mutually exclusive: |
|         # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
|         # If provided, it overrides the default `ENTRYPOINT` of the Docker image. |
|         # If not provided, the Docker image's `ENTRYPOINT` is used. |
|         # This field cannot be set if a custom container image is not provided. |
|         # Note that this field and [TrainingInput.args] are mutually exclusive: |
|         # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job. |
| # Each replica in the cluster will be of the type specified in |
| # `evaluator_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `evaluator_type`. |
| # |
| # The default value is zero. |
| "args": [ # Optional. Command-line arguments passed to the training application when it |
| # starts. If your job uses a custom container, then the arguments are passed |
| # to the container's <a class="external" target="_blank" |
| # href="https://docs.docker.com/engine/reference/builder/#entrypoint"> |
| # `ENTRYPOINT`</a> command. |
| "A String", |
| ], |
| "pythonModule": "A String", # Required. The Python module name to run after installing the packages. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must |
| # either specify this field or specify `masterConfig.imageUri`. |
| # |
| # For more information, see the [runtime version |
| # list](/ai-platform/training/docs/runtime-version-list) and learn [how to |
| # manage runtime versions](/ai-platform/training/docs/versioning). |
| "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training |
| # job. Each replica in the cluster will be of the type specified in |
| # `parameter_server_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `parameter_server_type`. |
| # |
| # The default value is zero. |
| "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators. |
| # |
| # You should only set `evaluatorConfig.acceleratorConfig` if |
| # `evaluatorType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `evaluatorConfig.imageUri` only if you build a custom image for |
| # your evaluator. If `evaluatorConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
|           # - If you supply a command but no args: |
|           #   The default `ENTRYPOINT` and the default `CMD` defined in the Docker |
|           #   image are ignored. Your command is run without any arguments. |
|           # - If you supply only args: |
|           #   The default `ENTRYPOINT` defined in the Docker image is run with the |
|           #   args that you supplied. |
|           # - If you supply a command and args: |
|           #   The default `ENTRYPOINT` and the default `CMD` defined in the Docker |
|           #   image are ignored. Your command is run with your args. |
|           # This field cannot be set if a custom container image is not provided. |
|           # Note that this field and [TrainingInput.args] are mutually exclusive: |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
|           # If provided, it overrides the default `ENTRYPOINT` of the Docker image. |
|           # If not provided, the Docker image's `ENTRYPOINT` is used. |
|           # This field cannot be set if a custom container image is not provided. |
|           # Note that this field and [TrainingInput.args] are mutually exclusive: |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
|     "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to a resource. # Optional. Options for using customer-managed encryption keys (CMEK) to |
|         # protect resources created by a training job, instead of using Google's |
|         # default encryption. If this is set, then all resources created by the |
|         # training job will be encrypted with the customer-managed encryption key |
|         # that you specify. |
|         # |
|         # [Learn how and when to use CMEK with AI Platform |
|         # Training](/ai-platform/training/docs/cmek). |
| "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key |
| # used to protect a resource, such as a training job. It has the following |
| # format: |
| # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}` |
| }, |
| "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each |
| # replica in the cluster will be of the type specified in `worker_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `worker_type`. |
| # |
| # The default value is zero. |
| "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job. |
| "maxWaitTime": "A String", |
| "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can |
| # contain up to nine fractional digits, terminated by `s`. If not specified, |
| # this field defaults to `604800s` (seven days). |
| # |
| # If the training job is still running after this duration, AI Platform |
| # Training cancels it. |
| # |
| # For example, if you want to ensure your job runs for no more than 2 hours, |
| # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds / |
| # minute). |
| # |
| # If you submit your training job using the `gcloud` tool, you can [provide |
| # this field in a `config.yaml` |
| # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters). |
| # For example: |
| # |
| # ```yaml |
| # trainingInput: |
| # ... |
| # scheduling: |
| # maxRunningTime: 7200s |
| # ... |
| # ``` |
| }, |
| "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers |
| # and parameter servers. |
| "packageUris": [ # Required. The Google Cloud Storage location of the packages with |
| # the training program and any additional dependencies. |
| # The maximum number of package URIs is 100. |
| "A String", |
| ], |
| }, |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a job from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform job updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `GetJob`, and |
| # systems are expected to put that etag in the request to `UpdateJob` to |
| # ensure that their change will be applied to the same version of the job. |
| "jobId": "A String", # Required. The user-specified id of the job. |
| }</pre> |
| </div> |
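As a quick orientation to the Job schema documented above, the following sketch assembles a minimal request body for `create` and shows (commented out) how it would be submitted with the Google API Python client. The project ID, bucket path, package name, and module name are placeholders, not real resources; only field names taken from the schema above are used.

```python
# Build a Job resource dict for projects.jobs.create(). All field names
# (jobId, trainingInput, scaleTier, masterType, region, packageUris,
# pythonModule, runtimeVersion, pythonVersion, scheduling) come from the
# schema documented above.

def make_training_job_body(job_id, region, package_uri, module_name):
    """Return the `body` dict for a CUSTOM-tier training job."""
    return {
        "jobId": job_id,
        "trainingInput": {
            "scaleTier": "CUSTOM",
            "masterType": "n1-standard-8",   # a supported Compute Engine type
            "region": region,
            "packageUris": [package_uri],    # at most 100 package URIs
            "pythonModule": module_name,
            "runtimeVersion": "1.15",
            "pythonVersion": "3.7",          # requires runtime_version 1.15+
            # Cancel the job if it runs longer than 2 hours.
            "scheduling": {"maxRunningTime": "7200s"},
        },
    }

body = make_training_job_body(
    "my_training_job_001", "us-central1",
    "gs://my-bucket/trainer-0.1.tar.gz", "trainer.task")

# With application-default credentials configured, the request would be:
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   ml.projects().jobs().create(
#       parent="projects/my-project", body=body).execute()
```

The response to `create` (and later to `get`) is a Job resource of the same shape, with server-populated fields such as `etag` added.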
| |
| <div class="method"> |
| <code class="details" id="getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</code> |
| <pre>Gets the access control policy for a resource. |
| Returns an empty policy if the resource exists and does not have a policy |
| set. |
| |
| Args: |
| resource: string, REQUIRED: The resource for which the policy is being requested. |
| See the operation documentation for the appropriate value for this field. (required) |
| options_requestedPolicyVersion: integer, Optional. The policy format version to be returned. |
| |
| Valid values are 0, 1, and 3. Requests specifying an invalid value will be |
| rejected. |
| |
| Requests for policies with any conditional bindings must specify version 3. |
| Policies without any conditional bindings may specify any valid value or |
| leave the field unset. |
| |
| To learn which resources support conditions in their IAM policies, see the |
| [IAM |
| documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # An Identity and Access Management (IAM) policy, which specifies access |
| # controls for Google Cloud resources. |
| # |
| # |
| # A `Policy` is a collection of `bindings`. A `binding` binds one or more |
| # `members` to a single `role`. Members can be user accounts, service accounts, |
| # Google groups, and domains (such as G Suite). A `role` is a named list of |
| # permissions; each `role` can be an IAM predefined role or a user-created |
| # custom role. |
| # |
| # For some types of Google Cloud resources, a `binding` can also specify a |
| # `condition`, which is a logical expression that allows access to a resource |
| # only if the expression evaluates to `true`. A condition can add constraints |
| # based on attributes of the request, the resource, or both. To learn which |
| # resources support conditions in their IAM policies, see the |
| # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| # |
| # **JSON example:** |
| # |
| # { |
| # "bindings": [ |
| # { |
| # "role": "roles/resourcemanager.organizationAdmin", |
| # "members": [ |
| # "user:[email protected]", |
| # "group:[email protected]", |
| # "domain:google.com", |
| # "serviceAccount:[email protected]" |
| # ] |
| # }, |
| # { |
| # "role": "roles/resourcemanager.organizationViewer", |
| # "members": [ |
| # "user:[email protected]" |
| # ], |
| # "condition": { |
| # "title": "expirable access", |
| # "description": "Does not grant access after Sep 2020", |
| # "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", |
| # } |
| # } |
| # ], |
| # "etag": "BwWWja0YfJA=", |
| # "version": 3 |
| # } |
| # |
| # **YAML example:** |
| # |
| # bindings: |
| # - members: |
| # - user:[email protected] |
| # - group:[email protected] |
| # - domain:google.com |
| # - serviceAccount:[email protected] |
| # role: roles/resourcemanager.organizationAdmin |
| # - members: |
| # - user:[email protected] |
| # role: roles/resourcemanager.organizationViewer |
| # condition: |
| # title: expirable access |
| # description: Does not grant access after Sep 2020 |
| # expression: request.time < timestamp('2020-10-01T00:00:00.000Z') |
|     # etag: BwWWja0YfJA= |
|     # version: 3 |
| # |
| # For a description of IAM and its features, see the |
| # [IAM documentation](https://cloud.google.com/iam/docs/). |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a policy from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform policy updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `getIamPolicy`, and |
| # systems are expected to put that etag in the request to `setIamPolicy` to |
| # ensure that their change will be applied to the same version of the policy. |
| # |
| # **Important:** If you use IAM Conditions, you must include the `etag` field |
| # whenever you call `setIamPolicy`. If you omit this field, then IAM allows |
| # you to overwrite a version `3` policy with a version `1` policy, and all of |
| # the conditions in the version `3` policy are lost. |
| "auditConfigs": [ # Specifies cloud audit logging configuration for this policy. |
| { # Specifies the audit configuration for a service. |
| # The configuration determines which permission types are logged, and what |
| # identities, if any, are exempted from logging. |
| # An AuditConfig must have one or more AuditLogConfigs. |
| # |
| # If there are AuditConfigs for both `allServices` and a specific service, |
| # the union of the two AuditConfigs is used for that service: the log_types |
| # specified in each AuditConfig are enabled, and the exempted_members in each |
| # AuditLogConfig are exempted. |
| # |
| # Example Policy with multiple AuditConfigs: |
| # |
| # { |
| # "audit_configs": [ |
| # { |
| # "service": "allServices", |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # }, |
| # { |
| # "log_type": "DATA_WRITE" |
| # }, |
| # { |
| # "log_type": "ADMIN_READ" |
| # } |
| # ] |
| # }, |
| # { |
| # "service": "sampleservice.googleapis.com", |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ" |
| # }, |
| # { |
| # "log_type": "DATA_WRITE", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # } |
| # ] |
| # } |
| # ] |
| # } |
| # |
| # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ |
| # logging. It also exempts [email protected] from DATA_READ logging, and |
| # [email protected] from DATA_WRITE logging. |
| "service": "A String", # Specifies a service that will be enabled for audit logging. |
| # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`. |
| # `allServices` is a special value that covers all services. |
| "auditLogConfigs": [ # The configuration for logging of each type of permission. |
| { # Provides the configuration for logging a type of permissions. |
| # Example: |
| # |
| # { |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # }, |
| # { |
| # "log_type": "DATA_WRITE" |
| # } |
| # ] |
| # } |
| # |
| # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting |
| # [email protected] from DATA_READ logging. |
| "logType": "A String", # The log type that this config enables. |
| "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of |
| # permission. |
| # Follows the same format of Binding.members. |
| "A String", |
| ], |
| }, |
| ], |
| }, |
| ], |
| "version": 42, # Specifies the format of the policy. |
| # |
| # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value |
| # are rejected. |
| # |
| # Any operation that affects conditional role bindings must specify version |
| # `3`. This requirement applies to the following operations: |
| # |
| # * Getting a policy that includes a conditional role binding |
| # * Adding a conditional role binding to a policy |
| # * Changing a conditional role binding in a policy |
| # * Removing any role binding, with or without a condition, from a policy |
| # that includes conditions |
| # |
| # **Important:** If you use IAM Conditions, you must include the `etag` field |
| # whenever you call `setIamPolicy`. If you omit this field, then IAM allows |
| # you to overwrite a version `3` policy with a version `1` policy, and all of |
| # the conditions in the version `3` policy are lost. |
| # |
| # If a policy does not include any conditions, operations on that policy may |
| # specify any valid version or leave the field unset. |
| # |
| # To learn which resources support conditions in their IAM policies, see the |
| # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a |
| # `condition` that determines how and when the `bindings` are applied. Each |
| # of the `bindings` must contain at least one member. |
| { # Associates `members` with a `role`. |
| "role": "A String", # Role that is assigned to `members`. |
| # For example, `roles/viewer`, `roles/editor`, or `roles/owner`. |
| "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding. |
| # |
| # If the condition evaluates to `true`, then this binding applies to the |
| # current request. |
| # |
| # If the condition evaluates to `false`, then this binding does not apply to |
| # the current request. However, a different role binding might grant the same |
| # role to one or more of the members in this binding. |
| # |
| # To learn which resources support conditions in their IAM policies, see the |
| # [IAM |
| # documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| # syntax. CEL is a C-like expression language. The syntax and semantics of CEL |
| # are documented at https://github.com/google/cel-spec. |
| # |
| # Example (Comparison): |
| # |
| # title: "Summary size limit" |
| # description: "Determines if a summary is less than 100 chars" |
| # expression: "document.summary.size() < 100" |
| # |
| # Example (Equality): |
| # |
| # title: "Requestor is owner" |
| # description: "Determines if requestor is the document owner" |
| # expression: "document.owner == request.auth.claims.email" |
| # |
| # Example (Logic): |
| # |
| # title: "Public documents" |
| # description: "Determine whether the document should be publicly visible" |
| # expression: "document.type != 'private' && document.type != 'internal'" |
| # |
| # Example (Data Manipulation): |
| # |
| # title: "Notification string" |
| # description: "Create a notification string with a timestamp." |
| # expression: "'New message received at ' + string(document.create_time)" |
| # |
| # The exact variables and functions that may be referenced within an expression |
| # are determined by the service that evaluates it. See the service |
| # documentation for additional information. |
| "expression": "A String", # Textual representation of an expression in Common Expression Language |
| # syntax. |
| "title": "A String", # Optional. Title for the expression, i.e. a short string describing |
| # its purpose. This can be used e.g. in UIs which allow to enter the |
| # expression. |
| "location": "A String", # Optional. String indicating the location of the expression for error |
| # reporting, e.g. a file name and a position in the file. |
| "description": "A String", # Optional. Description of the expression. This is a longer text which |
| # describes the expression, e.g. when hovered over it in a UI. |
| }, |
| "members": [ # Specifies the identities requesting access for a Cloud Platform resource. |
| # `members` can have the following values: |
| # |
| # * `allUsers`: A special identifier that represents anyone who is |
| # on the internet; with or without a Google account. |
| # |
| # * `allAuthenticatedUsers`: A special identifier that represents anyone |
| # who is authenticated with a Google account or a service account. |
| # |
|         # * `user:{emailid}`: An email address that represents a specific Google |
|         #    account. For example, `[email protected]`. |
| # |
| # |
| # * `serviceAccount:{emailid}`: An email address that represents a service |
| # account. For example, `[email protected]`. |
| # |
| # * `group:{emailid}`: An email address that represents a Google group. |
| # For example, `[email protected]`. |
| # |
| # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique |
| # identifier) representing a user that has been recently deleted. For |
| # example, `[email protected]?uid=123456789012345678901`. If the user is |
| # recovered, this value reverts to `user:{emailid}` and the recovered user |
| # retains the role in the binding. |
| # |
| # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus |
| # unique identifier) representing a service account that has been recently |
| # deleted. For example, |
| # `[email protected]?uid=123456789012345678901`. |
| # If the service account is undeleted, this value reverts to |
| # `serviceAccount:{emailid}` and the undeleted service account retains the |
| # role in the binding. |
| # |
| # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique |
| # identifier) representing a Google group that has been recently |
| # deleted. For example, `[email protected]?uid=123456789012345678901`. If |
| # the group is recovered, this value reverts to `group:{emailid}` and the |
| # recovered group retains the role in the binding. |
| # |
| # |
| # * `domain:{domain}`: The G Suite domain (primary) that represents all the |
| # users of that domain. For example, `google.com` or `example.com`. |
| # |
| "A String", |
| ], |
| }, |
| ], |
| }</pre> |
| </div> |
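A minimal sketch of the etag-preserving read-modify-write cycle the policy notes above describe. The merge step is written as a pure helper so it can be shown without credentials; the project and job names, the example member email, and the use of a companion `setIamPolicy` method are assumptions for illustration only.

```python
# Sketch of the IAM read-modify-write cycle described above. The policy
# mutation is a pure helper; the surrounding API calls are sketched in the
# trailing comments and require real credentials and resource names.

def add_binding(policy, role, member):
    """Return a copy of `policy` with `member` granted `role`.

    The `etag` from getIamPolicy is carried over unchanged so that
    setIamPolicy can detect concurrent edits, and `version` is set to 3
    so any conditional role bindings in the policy are preserved.
    """
    updated = dict(policy)          # keeps 'etag' if present
    updated['version'] = 3
    bindings = [dict(b) for b in policy.get('bindings', [])]
    for b in bindings:
        if b.get('role') == role:
            if member not in b.get('members', []):
                b['members'] = list(b.get('members', [])) + [member]
            break
    else:
        bindings.append({'role': role, 'members': [member]})
    updated['bindings'] = bindings
    return updated

# Hypothetical usage with the discovery client (all names are placeholders):
#
#   from googleapiclient import discovery
#   ml = discovery.build('ml', 'v1')
#   name = 'projects/my-project/jobs/my_job'
#   policy = ml.projects().jobs().getIamPolicy(
#       resource=name, options_requestedPolicyVersion=3).execute()
#   policy = add_binding(policy, 'roles/ml.viewer', 'user:[email protected]')
#   ml.projects().jobs().setIamPolicy(
#       resource=name, body={'policy': policy}).execute()
```

Requesting `options_requestedPolicyVersion=3` and round-tripping the returned `etag` is what prevents the version `1` overwrite scenario warned about above.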
| |
| <div class="method"> |
| <code class="details" id="list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</code> |
| <pre>Lists the jobs in the project. |
| |
| If there are no jobs that match the request parameters, the list |
| request returns an empty response body: {}. |
| |
| Args: |
| parent: string, Required. The name of the project for which to list jobs. (required) |
| filter: string, Optional. Specifies the subset of jobs to retrieve. |
| You can filter on the value of one or more attributes of the job object. |
| For example, retrieve jobs with a job identifier that starts with 'census': |
| <p><code>gcloud ai-platform jobs list --filter='jobId:census*'</code> |
| <p>List all failed jobs with names that start with 'rnn': |
| <p><code>gcloud ai-platform jobs list --filter='jobId:rnn* |
| AND state:FAILED'</code> |
| <p>For more examples, see the guide to |
| <a href="/ml-engine/docs/tensorflow/monitor-training">monitoring jobs</a>. |
| pageToken: string, Optional. A page token to request the next page of results. |
| |
| You get the token from the `next_page_token` field of the response from |
| the previous call. |
| pageSize: integer, Optional. The number of jobs to retrieve per "page" of results. If there |
| are more remaining results than this number, the response message will |
| contain a valid value in the `next_page_token` field. |
| |
| The default value is 20, and the maximum page size is 100. |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
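The `pageToken` / `next_page_token` handshake above can be sketched as a loop. The page-fetching step is injected as a callable so the paging logic itself stands alone; with the real client it would be something like `lambda pageToken: ml.projects().jobs().list(parent=parent, pageToken=pageToken).execute()`, where the project name is a placeholder.

```python
# Sketch of paging through ListJobs responses: keep requesting pages,
# passing each response's `nextPageToken` back as `pageToken`, until the
# token is absent (the last page). `fetch_page` stands in for a real call
# such as ml.projects().jobs().list(parent=..., pageToken=...).execute().

def iter_jobs(fetch_page):
    """Yield every job across all pages of a ListJobs-style response."""
    token = None
    while True:
        response = fetch_page(pageToken=token)
        for job in response.get('jobs', []):
            yield job
        token = response.get('nextPageToken')
        if not token:
            return
```

This combines naturally with the filter syntax shown above, e.g. `fetch_page = lambda pageToken: ml.projects().jobs().list(parent='projects/my-project', filter='jobId:census*', pageToken=pageToken).execute()` (project and filter are illustrative).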
| Returns: |
| An object of the form: |
| |
| { # Response message for the ListJobs method. |
| "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a |
| # subsequent call. |
| "jobs": [ # The list of jobs. |
| { # Represents a training or prediction job. |
| "createTime": "A String", # Output only. When the job was created. |
| "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job. |
| "dataFormat": "A String", # Required. The format of the input data files. |
| "outputPath": "A String", # Required. The output Google Cloud Storage location. |
| "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain |
| # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>. |
| "A String", |
| ], |
| "region": "A String", # Required. The Google Compute Engine region to run the prediction job in. |
| # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> |
| # for AI Platform services. |
| "versionName": "A String", # Use this field if you want to specify a version of the model to use. The |
| # string is formatted the same way as `model_version`, with the addition |
| # of the version information: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"` |
| "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for |
| # the model to use. |
| "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch |
| # prediction. If not set, AI Platform will pick the runtime version used |
| # during the CreateVersion request for this model version, or choose the |
|           # latest stable version when model version information is not available, |
|           # such as when the model is specified by uri. |
| "modelName": "A String", # Use this field if you want to use the default version for the specified |
| # model. The string must use the following format: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL"` |
| "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for |
| # this job. Please refer to |
| # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html) |
| # for information about how to use signatures. |
| # |
| # Defaults to |
| # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants) |
| # , which is "serving_default". |
| "batchSize": "A String", # Optional. Number of records per batch, defaults to 64. |
| # The service will buffer batch_size number of records in memory before |
|           # invoking one TensorFlow prediction call internally. So take the record |
| # size and memory available into consideration when setting this parameter. |
| "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing. |
| # Defaults to 10 if not specified. |
| }, |
| "labels": { # Optional. One or more labels that you can add, to organize your jobs. |
| # Each label is a key-value pair, where both the key and the value are |
| # arbitrary strings that you supply. |
| # For more information, see the documentation on |
| # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. |
| "a_key": "A String", |
| }, |
| "endTime": "A String", # Output only. When the job processing was completed. |
| "trainingOutput": { # Represents results of a training job. Output only. # The current training job result. |
| "trials": [ # Results for individual Hyperparameter trials. |
| # Only set for hyperparameter tuning jobs. |
| { # Represents the result of a single hyperparameter tuning trial from a |
| # training job. The TrainingOutput object that is returned on successful |
| # completion of a training job with hyperparameter tuning includes a list |
| # of HyperparameterOutput objects, one for each successful trial. |
| "endTime": "A String", # Output only. End time for the trial. |
| "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| "hyperparameters": { # The hyperparameters given to this trial. |
| "a_key": "A String", |
| }, |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for trials of built-in algorithms jobs that have succeeded. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "startTime": "A String", # Output only. Start time for the trial. |
| "allMetrics": [ # All recorded object metrics for this trial. This field is not currently |
| # populated. |
| { # An observed value of a metric. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| ], |
| "trialId": "A String", # The trial id for these results. |
| "isTrialStoppedEarly": True or False, # True if the trial is stopped early. |
| "state": "A String", # Output only. The detailed state of the trial. |
| }, |
| ], |
| "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully. |
| # Only set for hyperparameter tuning jobs. |
| "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job. |
| "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job. |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for built-in algorithms jobs. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "consumedMLUnits": 3.14, # The amount of ML units consumed by the job. |
| "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning |
| # trials. See |
| # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag) |
| # for more information. Only set for hyperparameter tuning jobs. |
| }, |
| "errorMessage": "A String", # Output only. The details of a failure or a cancellation. |
| "predictionOutput": { # Represents results of a prediction job. # The current prediction job result. |
| "errorCount": "A String", # The number of data instances which resulted in errors. |
| "nodeHours": 3.14, # Node hours used by the batch prediction job. |
| "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time. |
| "predictionCount": "A String", # The number of generated predictions. |
| }, |
| "startTime": "A String", # Output only. When the job processing was started. |
| "state": "A String", # Output only. The detailed state of a job. |
| "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job. |
| # to submit your training job, you can specify the input parameters as |
| # command-line arguments and/or in a YAML configuration file referenced from |
| # the --config command-line argument. For details, see the guide to [submitting |
| # a training job](/ai-platform/training/docs/training-jobs). |
| "serviceAccount": "A String", # Optional. Specifies the service account for workload run-as account. |
| # Users submitting jobs must have act-as permission on this run-as account. |
| # If not specified, then CMLE P4SA will be used by default. |
| "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers. |
| # |
| # You should only set `workerConfig.acceleratorConfig` if `workerType` is set |
| # to a Compute Engine machine type. [Learn about restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `workerConfig.imageUri` only if you build a custom image for your |
| # worker. If `workerConfig.imageUri` has not been set, AI Platform uses |
| # the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
|           # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
|           # This field cannot be set unless a custom container image is provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it will override default ENTRYPOINT of the docker image. |
| # If not provided, the docker image's ENTRYPOINT is used. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment |
| # variable when training with a custom container. Defaults to `false`. [Learn |
| # more about this |
| # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master) |
| # |
| # This field has no effect for training jobs that don't use a custom |
| # container. |
| "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's master worker. You must specify this field when `scaleTier` is set to |
| # `CUSTOM`. |
| # |
| # You can use certain Compute Engine machine types directly in this field. |
| # The following types are supported: |
| # |
| # - `n1-standard-4` |
| # - `n1-standard-8` |
| # - `n1-standard-16` |
| # - `n1-standard-32` |
| # - `n1-standard-64` |
| # - `n1-standard-96` |
| # - `n1-highmem-2` |
| # - `n1-highmem-4` |
| # - `n1-highmem-8` |
| # - `n1-highmem-16` |
| # - `n1-highmem-32` |
| # - `n1-highmem-64` |
| # - `n1-highmem-96` |
| # - `n1-highcpu-16` |
| # - `n1-highcpu-32` |
| # - `n1-highcpu-64` |
| # - `n1-highcpu-96` |
| # |
| # Learn more about [using Compute Engine machine |
| # types](/ml-engine/docs/machine-types#compute-engine-machine-types). |
| # |
| # Alternatively, you can use the following legacy machine types: |
| # |
| # - `standard` |
| # - `large_model` |
| # - `complex_model_s` |
| # - `complex_model_m` |
| # - `complex_model_l` |
| # - `standard_gpu` |
| # - `complex_model_m_gpu` |
| # - `complex_model_l_gpu` |
| # - `standard_p100` |
| # - `complex_model_m_p100` |
| # - `standard_v100` |
| # - `large_model_v100` |
| # - `complex_model_m_v100` |
| # - `complex_model_l_v100` |
| # |
| # Learn more about [using legacy machine |
| # types](/ml-engine/docs/machine-types#legacy-machine-types). |
| # |
| # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this |
| # field. Learn more about the [special configuration options for training |
| # with |
| # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers. |
| # |
| # You should only set `parameterServerConfig.acceleratorConfig` if |
| # `parameterServerType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `parameterServerConfig.imageUri` only if you build a custom image for |
| # your parameter server. If `parameterServerConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
|           # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
|           # This field cannot be set unless a custom container image is provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it will override default ENTRYPOINT of the docker image. |
| # If not provided, the docker image's ENTRYPOINT is used. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "region": "A String", # Required. The region to run the training job in. See the [available |
| # regions](/ai-platform/training/docs/regions) for AI Platform Training. |
| "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs |
| # and other data needed for training. This path is passed to your TensorFlow |
| # program as the '--job-dir' command-line argument. The benefit of specifying |
| # this field is that Cloud ML validates the path for use in training. |
| "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify |
| # this field or specify `masterConfig.imageUri`. |
| # |
| # The following Python versions are available: |
| # |
| # * Python '3.7' is available when `runtime_version` is set to '1.15' or |
| # later. |
| # * Python '3.5' is available when `runtime_version` is set to a version |
| # from '1.4' to '1.14'. |
| # * Python '2.7' is available when `runtime_version` is set to '1.15' or |
| # earlier. |
| # |
| # Read more about the Python versions available for [each runtime |
| # version](/ml-engine/docs/runtime-version-list). |
| "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune. |
| "maxTrials": 42, # Optional. How many training trials should be attempted to optimize |
| # the specified hyperparameters. |
| # |
| # Defaults to one. |
| "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial |
| # early stopping. |
| "params": [ # Required. The set of parameters to tune. |
| { # Represents a single hyperparameter to optimize. |
| "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be integers if |
| # type is INTEGER. |
| "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories. |
| "A String", |
| ], |
| "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube. |
| # Leave unset for categorical parameters. |
| # Some kind of scaling is strongly recommended for real or integral |
| # parameters (e.g., `UNIT_LINEAR_SCALE`). |
| "discreteValues": [ # Required if type is `DISCRETE`. |
| # A list of feasible points. |
| # The list should be in strictly increasing order. For instance, this |
| # parameter might have possible settings of 1.5, 2.5, and 4.0. This list |
| # should not contain more than 1,000 values. |
| 3.14, |
| ], |
| "type": "A String", # Required. The type of the parameter. |
| "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be integers if |
| # type is `INTEGER`. |
| "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in |
| # a HyperparameterSpec message. E.g., "learning_rate". |
| }, |
| ], |
| "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing |
| # the hyperparameter tuning job. You can specify this field to override the |
| # default failing criteria for AI Platform hyperparameter tuning jobs. |
| # |
| # Defaults to zero, which means the service decides when a hyperparameter |
| # job should fail. |
| "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For |
| # current versions of TensorFlow, this tag name should exactly match what is |
| # shown in TensorBoard, including all scopes. For versions of TensorFlow |
| # prior to 0.12, this should be only the tag passed to tf.Summary. |
| # By default, "training/hptuning/metric" will be used. |
| "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to |
| # continue with. The job id will be used to find the corresponding vizier |
| # study guid and resume the study. |
| "goal": "A String", # Required. The type of goal to use for tuning. Available types are |
| # `MAXIMIZE` and `MINIMIZE`. |
| # |
| # Defaults to `MAXIMIZE`. |
| "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter |
| # tuning job. |
| # Uses the default AI Platform hyperparameter tuning |
| # algorithm if unspecified. |
| "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently. |
| # You can reduce the time it takes to perform hyperparameter tuning by adding |
|             # trials in parallel. However, each trial only benefits from the information |
| # gained in completed trials. That means that a trial does not get access to |
| # the results of trials running at the same time, which could reduce the |
| # quality of the overall optimization. |
| # |
| # Each trial will use the same scale tier and machine types. |
| # |
| # Defaults to one. |
| }, |
| "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's evaluator nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `evaluatorCount` is greater than zero. |
|     "network": "A String", # Optional. The full name of the Google Compute Engine |
|         # [network](/compute/docs/networks-and-firewalls#networks) to which the Job |
|         # is peered, in the format `projects/{project}/global/networks/{network}`, |
|         # where {project} is a project number, such as `12345`, and {network} is a |
|         # network name. For example, `projects/12345/global/networks/myVPC`. |
|         # |
|         # Private services access must already be configured for the network. If left |
|         # unspecified, the Job is not peered with any network. Learn more about |
|         # connecting a Job to a user network over private IP. |
| "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's parameter server. |
| # |
| # The supported values are the same as those described in the entry for |
|         # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
|         # `parameterServerCount` is greater than zero. |
| "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's worker nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # If you use `cloud_tpu` for this value, see special instructions for |
| # [configuring a custom TPU |
| # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `workerCount` is greater than zero. |
| "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker. |
| # |
| # You should only set `masterConfig.acceleratorConfig` if `masterType` is set |
| # to a Compute Engine machine type. Learn about [restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `masterConfig.imageUri` only if you build a custom image. Only one of |
| # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more |
| # about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
|           # The following rules apply to container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
|           # This field cannot be set unless a custom container image is provided. |
|           # Note that this field and `TrainingInput.args` are mutually exclusive; |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
|           # If provided, it overrides the default ENTRYPOINT of the Docker image. |
|           # If not provided, the Docker image's ENTRYPOINT is used. |
|           # This field cannot be set unless a custom container image is provided. |
|           # Note that this field and `TrainingInput.args` are mutually exclusive; |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job. |
|     # Each replica in the cluster will be of the type specified in |
|     # `evaluatorType`. |
|     # |
|     # This value can only be used when `scaleTier` is set to `CUSTOM`. If you |
|     # set this value, you must also set `evaluatorType`. |
| # |
| # The default value is zero. |
| "args": [ # Optional. Command-line arguments passed to the training application when it |
| # starts. If your job uses a custom container, then the arguments are passed |
| # to the container's <a class="external" target="_blank" |
| # href="https://docs.docker.com/engine/reference/builder/#entrypoint"> |
| # `ENTRYPOINT`</a> command. |
| "A String", |
| ], |
| "pythonModule": "A String", # Required. The Python module name to run after installing the packages. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must |
| # either specify this field or specify `masterConfig.imageUri`. |
| # |
| # For more information, see the [runtime version |
| # list](/ai-platform/training/docs/runtime-version-list) and learn [how to |
| # manage runtime versions](/ai-platform/training/docs/versioning). |
| "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training |
|     # job. Each replica in the cluster will be of the type specified in |
|     # `parameterServerType`. |
|     # |
|     # This value can only be used when `scaleTier` is set to `CUSTOM`. If you |
|     # set this value, you must also set `parameterServerType`. |
| # |
| # The default value is zero. |
| "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators. |
| # |
| # You should only set `evaluatorConfig.acceleratorConfig` if |
| # `evaluatorType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `evaluatorConfig.imageUri` only if you build a custom image for |
| # your evaluator. If `evaluatorConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
|           # The following rules apply to container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
|           # This field cannot be set unless a custom container image is provided. |
|           # Note that this field and `TrainingInput.args` are mutually exclusive; |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
|           # If provided, it overrides the default ENTRYPOINT of the Docker image. |
|           # If not provided, the Docker image's ENTRYPOINT is used. |
|           # This field cannot be set unless a custom container image is provided. |
|           # Note that this field and `TrainingInput.args` are mutually exclusive; |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
|     "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to a resource. # Optional. Options for using customer-managed |
|         # encryption keys (CMEK) to protect resources created by a training job, |
|         # instead of using Google's default encryption. If this is set, then all |
|         # resources created by the training job will be encrypted with the |
|         # customer-managed encryption key that you specify. |
|         # |
|         # [Learn how and when to use CMEK with AI Platform |
|         # Training](/ai-platform/training/docs/cmek). |
| "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key |
| # used to protect a resource, such as a training job. It has the following |
| # format: |
| # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}` |
| }, |
|     "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each |
|     # replica in the cluster will be of the type specified in `workerType`. |
|     # |
|     # This value can only be used when `scaleTier` is set to `CUSTOM`. If you |
|     # set this value, you must also set `workerType`. |
| # |
| # The default value is zero. |
| "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job. |
| "maxWaitTime": "A String", |
| "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can |
| # contain up to nine fractional digits, terminated by `s`. If not specified, |
| # this field defaults to `604800s` (seven days). |
| # |
| # If the training job is still running after this duration, AI Platform |
| # Training cancels it. |
| # |
| # For example, if you want to ensure your job runs for no more than 2 hours, |
| # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds / |
| # minute). |
| # |
| # If you submit your training job using the `gcloud` tool, you can [provide |
| # this field in a `config.yaml` |
| # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters). |
| # For example: |
| # |
| # ```yaml |
| # trainingInput: |
| # ... |
| # scheduling: |
| # maxRunningTime: 7200s |
| # ... |
| # ``` |
| }, |
| "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers |
| # and parameter servers. |
| "packageUris": [ # Required. The Google Cloud Storage location of the packages with |
| # the training program and any additional dependencies. |
| # The maximum number of package URIs is 100. |
| "A String", |
| ], |
| }, |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a job from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform job updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `GetJob`, and |
| # systems are expected to put that etag in the request to `UpdateJob` to |
| # ensure that their change will be applied to the same version of the job. |
| "jobId": "A String", # Required. The user-specified id of the job. |
| }, |
| ], |
| }</pre> |
| </div> |
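The `etag` field above recommends a read-modify-write cycle for job updates. A minimal sketch of that cycle in Python, assuming `ml` is a client built with `googleapiclient.discovery.build('ml', 'v1')` and that `add_label` is a hypothetical helper name, not part of the API surface:

```python
def add_label(ml, job_name, key, value):
    """Add one label to a job using the etag-based read-modify-write cycle.

    `ml` is assumed to be the service object returned by
    googleapiclient.discovery.build('ml', 'v1'); `add_label` is a
    hypothetical helper, not part of the API surface.
    """
    jobs = ml.projects().jobs()
    # Read: GetJob returns the job's current etag.
    job = jobs.get(name=job_name).execute()
    # Modify: change only the labels map (the one updatable field).
    labels = dict(job.get("labels", {}))
    labels[key] = value
    # Write: send the etag back so a concurrent update is rejected
    # instead of being silently overwritten.
    body = {"labels": labels, "etag": job["etag"]}
    return jobs.patch(name=job_name, body=body, updateMask="labels").execute()
```

If another writer changed the job between the `get` and the `patch`, the service rejects the request, and the cycle can simply be retried from the `get`.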
| |
| <div class="method"> |
| <code class="details" id="list_next">list_next(previous_request, previous_response)</code> |
| <pre>Retrieves the next page of results. |
| |
| Args: |
| previous_request: The request for the previous page. (required) |
| previous_response: The response from the request for the previous page. (required) |
| |
| Returns: |
| A request object that you can call 'execute()' on to request the next |
| page. Returns None if there are no more items in the collection. |
| </pre> |
| </div> |
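Because `list_next` returns `None` once the collection is exhausted, paging through every job reduces to a simple loop. A sketch, assuming `jobs_resource` is the object returned by `client.projects().jobs()` and `iter_jobs` is a hypothetical helper name:

```python
def iter_jobs(jobs_resource, parent):
    """Yield every job under `parent`, following pagination.

    `jobs_resource` is assumed to be the object returned by
    client.projects().jobs(); this helper relies only on its
    `list` and `list_next` methods.
    """
    request = jobs_resource.list(parent=parent)
    while request is not None:
        response = request.execute()
        for job in response.get("jobs", []):
            yield job
        # list_next returns None when there are no more pages.
        request = jobs_resource.list_next(request, response)
```

This avoids handling `nextPageToken` by hand: the helper keeps issuing requests until `list_next` signals the end of the collection.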
| |
| <div class="method"> |
| <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code> |
| <pre>Updates a specific job resource. |
|  |
| Currently, the only supported field to update is `labels`. |
| |
| Args: |
| name: string, Required. The job name. (required) |
| body: object, The request body. |
| The object takes the form of: |
| |
| { # Represents a training or prediction job. |
| "createTime": "A String", # Output only. When the job was created. |
| "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job. |
| "dataFormat": "A String", # Required. The format of the input data files. |
| "outputPath": "A String", # Required. The output Google Cloud Storage location. |
| "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain |
| # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>. |
| "A String", |
| ], |
| "region": "A String", # Required. The Google Compute Engine region to run the prediction job in. |
| # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> |
| # for AI Platform services. |
| "versionName": "A String", # Use this field if you want to specify a version of the model to use. The |
| # string is formatted the same way as `model_version`, with the addition |
| # of the version information: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"` |
| "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for |
| # the model to use. |
| "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch |
| # prediction. If not set, AI Platform will pick the runtime version used |
| # during the CreateVersion request for this model version, or choose the |
| # latest stable version when model version information is not available |
| # such as when the model is specified by uri. |
| "modelName": "A String", # Use this field if you want to use the default version for the specified |
| # model. The string must use the following format: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL"` |
| "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for |
| # this job. Please refer to |
| # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html) |
| # for information about how to use signatures. |
| # |
|     # Defaults to |
|     # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants), |
|     # which is "serving_default". |
|     "batchSize": "A String", # Optional. Number of records per batch, defaults to 64. |
|     # The service buffers batch_size records in memory before invoking one |
|     # TensorFlow prediction call internally, so take the record size and |
|     # available memory into consideration when setting this parameter. |
| "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing. |
| # Defaults to 10 if not specified. |
| }, |
|   "labels": { # Optional. One or more labels that you can add to organize your jobs. |
| # Each label is a key-value pair, where both the key and the value are |
| # arbitrary strings that you supply. |
| # For more information, see the documentation on |
| # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. |
| "a_key": "A String", |
| }, |
| "endTime": "A String", # Output only. When the job processing was completed. |
| "trainingOutput": { # Represents results of a training job. Output only. # The current training job result. |
| "trials": [ # Results for individual Hyperparameter trials. |
| # Only set for hyperparameter tuning jobs. |
| { # Represents the result of a single hyperparameter tuning trial from a |
| # training job. The TrainingOutput object that is returned on successful |
| # completion of a training job with hyperparameter tuning includes a list |
| # of HyperparameterOutput objects, one for each successful trial. |
| "endTime": "A String", # Output only. End time for the trial. |
| "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| "hyperparameters": { # The hyperparameters given to this trial. |
| "a_key": "A String", |
| }, |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for trials of built-in algorithms jobs that have succeeded. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "startTime": "A String", # Output only. Start time for the trial. |
|       "allMetrics": [ # All recorded objective metrics for this trial. This field is not |
|           # currently populated. |
| { # An observed value of a metric. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| ], |
| "trialId": "A String", # The trial id for these results. |
| "isTrialStoppedEarly": True or False, # True if the trial is stopped early. |
| "state": "A String", # Output only. The detailed state of the trial. |
| }, |
| ], |
| "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully. |
| # Only set for hyperparameter tuning jobs. |
| "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job. |
| "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job. |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for built-in algorithms jobs. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "consumedMLUnits": 3.14, # The amount of ML units consumed by the job. |
| "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning |
| # trials. See |
| # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag) |
| # for more information. Only set for hyperparameter tuning jobs. |
| }, |
| "errorMessage": "A String", # Output only. The details of a failure or a cancellation. |
| "predictionOutput": { # Represents results of a prediction job. # The current prediction job result. |
| "errorCount": "A String", # The number of data instances which resulted in errors. |
| "nodeHours": 3.14, # Node hours used by the batch prediction job. |
| "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time. |
| "predictionCount": "A String", # The number of generated predictions. |
| }, |
| "startTime": "A String", # Output only. When the job processing was started. |
| "state": "A String", # Output only. The detailed state of a job. |
|   "trainingInput": { # Represents input parameters for a training job. # Input parameters to create a training job. |
|       # When using the gcloud command to submit your training job, you can |
|       # specify the input parameters as command-line arguments and/or in a YAML |
|       # configuration file referenced from the --config command-line argument. |
|       # For details, see the guide to [submitting a training |
|       # job](/ai-platform/training/docs/training-jobs). |
|     "serviceAccount": "A String", # Optional. The service account that the training workload runs as. |
|     # Users submitting jobs must have act-as permission on this run-as account. |
|     # If not specified, the CMLE P4SA is used by default. |
| "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers. |
| # |
| # You should only set `workerConfig.acceleratorConfig` if `workerType` is set |
| # to a Compute Engine machine type. [Learn about restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `workerConfig.imageUri` only if you build a custom image for your |
| # worker. If `workerConfig.imageUri` has not been set, AI Platform uses |
| # the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
|           # The following rules apply to container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
|           # This field cannot be set unless a custom container image is provided. |
|           # Note that this field and `TrainingInput.args` are mutually exclusive; |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
|           # If provided, it overrides the default ENTRYPOINT of the Docker image. |
|           # If not provided, the Docker image's ENTRYPOINT is used. |
|           # This field cannot be set unless a custom container image is provided. |
|           # Note that this field and `TrainingInput.args` are mutually exclusive; |
|           # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment |
| # variable when training with a custom container. Defaults to `false`. [Learn |
| # more about this |
| # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master) |
| # |
| # This field has no effect for training jobs that don't use a custom |
| # container. |
| "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's master worker. You must specify this field when `scaleTier` is set to |
| # `CUSTOM`. |
| # |
| # You can use certain Compute Engine machine types directly in this field. |
| # The following types are supported: |
| # |
| # - `n1-standard-4` |
| # - `n1-standard-8` |
| # - `n1-standard-16` |
| # - `n1-standard-32` |
| # - `n1-standard-64` |
| # - `n1-standard-96` |
| # - `n1-highmem-2` |
| # - `n1-highmem-4` |
| # - `n1-highmem-8` |
| # - `n1-highmem-16` |
| # - `n1-highmem-32` |
| # - `n1-highmem-64` |
| # - `n1-highmem-96` |
| # - `n1-highcpu-16` |
| # - `n1-highcpu-32` |
| # - `n1-highcpu-64` |
| # - `n1-highcpu-96` |
| # |
| # Learn more about [using Compute Engine machine |
| # types](/ml-engine/docs/machine-types#compute-engine-machine-types). |
| # |
| # Alternatively, you can use the following legacy machine types: |
| # |
| # - `standard` |
| # - `large_model` |
| # - `complex_model_s` |
| # - `complex_model_m` |
| # - `complex_model_l` |
| # - `standard_gpu` |
| # - `complex_model_m_gpu` |
| # - `complex_model_l_gpu` |
| # - `standard_p100` |
| # - `complex_model_m_p100` |
| # - `standard_v100` |
| # - `large_model_v100` |
| # - `complex_model_m_v100` |
| # - `complex_model_l_v100` |
| # |
| # Learn more about [using legacy machine |
| # types](/ml-engine/docs/machine-types#legacy-machine-types). |
| # |
| # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this |
| # field. Learn more about the [special configuration options for training |
| # with |
| # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers. |
| # |
| # You should only set `parameterServerConfig.acceleratorConfig` if |
| # `parameterServerType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `parameterServerConfig.imageUri` only if you build a custom image for |
| # your parameter server. If `parameterServerConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # This field cannot be set unless a custom container image is |
| # provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it overrides the default ENTRYPOINT of the Docker image. |
| # If not provided, the Docker image's ENTRYPOINT is used. |
| # This field cannot be set unless a custom container image is |
| # provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "region": "A String", # Required. The region to run the training job in. See the [available |
| # regions](/ai-platform/training/docs/regions) for AI Platform Training. |
| "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs |
| # and other data needed for training. This path is passed to your TensorFlow |
| # program as the '--job-dir' command-line argument. The benefit of specifying |
| # this field is that Cloud ML validates the path for use in training. |
| "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify |
| # this field or specify `masterConfig.imageUri`. |
| # |
| # The following Python versions are available: |
| # |
| # * Python '3.7' is available when `runtime_version` is set to '1.15' or |
| # later. |
| # * Python '3.5' is available when `runtime_version` is set to a version |
| # from '1.4' to '1.14'. |
| # * Python '2.7' is available when `runtime_version` is set to '1.15' or |
| # earlier. |
| # |
| # Read more about the Python versions available for [each runtime |
| # version](/ml-engine/docs/runtime-version-list). |
| "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune. |
| "maxTrials": 42, # Optional. How many training trials should be attempted to optimize |
| # the specified hyperparameters. |
| # |
| # Defaults to one. |
| "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial |
| # early stopping. |
| "params": [ # Required. The set of parameters to tune. |
| { # Represents a single hyperparameter to optimize. |
| "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be an integer if |
| # type is `INTEGER`. |
| "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories. |
| "A String", |
| ], |
| "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube. |
| # Leave unset for categorical parameters. |
| # Some kind of scaling is strongly recommended for real or integral |
| # parameters (e.g., `UNIT_LINEAR_SCALE`). |
| "discreteValues": [ # Required if type is `DISCRETE`. |
| # A list of feasible points. |
| # The list should be in strictly increasing order. For instance, this |
| # parameter might have possible settings of 1.5, 2.5, and 4.0. This list |
| # should not contain more than 1,000 values. |
| 3.14, |
| ], |
| "type": "A String", # Required. The type of the parameter. |
| "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be an integer if |
| # type is `INTEGER`. |
| "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in |
| # a HyperparameterSpec message. E.g., "learning_rate". |
| }, |
| ], |
| "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing |
| # the hyperparameter tuning job. You can specify this field to override the |
| # default failing criteria for AI Platform hyperparameter tuning jobs. |
| # |
| # Defaults to zero, which means the service decides when a hyperparameter |
| # job should fail. |
| "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For |
| # current versions of TensorFlow, this tag name should exactly match what is |
| # shown in TensorBoard, including all scopes. For versions of TensorFlow |
| # prior to 0.12, this should be only the tag passed to tf.Summary. |
| # By default, "training/hptuning/metric" will be used. |
| "resumePreviousJobId": "A String", # Optional. The ID of a previous hyperparameter tuning job to resume. |
| # The job ID is used to find the corresponding Vizier study GUID and |
| # resume that study. |
| "goal": "A String", # Required. The type of goal to use for tuning. Available types are |
| # `MAXIMIZE` and `MINIMIZE`. |
| # |
| # Defaults to `MAXIMIZE`. |
| "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter |
| # tuning job. |
| # Uses the default AI Platform hyperparameter tuning |
| # algorithm if unspecified. |
| "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently. |
| # You can reduce the time it takes to perform hyperparameter tuning by adding |
| # trials in parallel. However, each trial only benefits from the information |
| # gained in completed trials. That means that a trial does not get access to |
| # the results of trials running at the same time, which could reduce the |
| # quality of the overall optimization. |
| # |
| # Each trial will use the same scale tier and machine types. |
| # |
| # Defaults to one. |
| }, |
| "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's evaluator nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `evaluatorCount` is greater than zero. |
| "network": "A String", # Optional. The full name of the Google Compute Engine |
| # [network](/compute/docs/networks-and-firewalls#networks) to which the Job |
| # is peered. For example, `projects/12345/global/networks/myVPC`. The format |
| # is `projects/{project}/global/networks/{network}`, where {project} is a |
| # project number, as in '12345', and {network} is a network name. |
| # |
| # Private services access must already be configured for the network. If left |
| # unspecified, the Job is not peered with any network. Learn more about |
| # connecting a Job to a user network over private IP. |
| "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's parameter server. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `parameter_server_count` is greater than zero. |
| "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's worker nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # If you use `cloud_tpu` for this value, see special instructions for |
| # [configuring a custom TPU |
| # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `workerCount` is greater than zero. |
| "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker. |
| # |
| # You should only set `masterConfig.acceleratorConfig` if `masterType` is set |
| # to a Compute Engine machine type. Learn about [restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `masterConfig.imageUri` only if you build a custom image. Only one of |
| # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more |
| # about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # This field cannot be set unless a custom container image is |
| # provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it overrides the default ENTRYPOINT of the Docker image. |
| # If not provided, the Docker image's ENTRYPOINT is used. |
| # This field cannot be set unless a custom container image is |
| # provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job. |
| # Each replica in the cluster will be of the type specified in |
| # `evaluator_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `evaluator_type`. |
| # |
| # The default value is zero. |
| "args": [ # Optional. Command-line arguments passed to the training application when it |
| # starts. If your job uses a custom container, then the arguments are passed |
| # to the container's <a class="external" target="_blank" |
| # href="https://docs.docker.com/engine/reference/builder/#entrypoint"> |
| # `ENTRYPOINT`</a> command. |
| "A String", |
| ], |
| "pythonModule": "A String", # Required. The Python module name to run after installing the packages. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must |
| # either specify this field or specify `masterConfig.imageUri`. |
| # |
| # For more information, see the [runtime version |
| # list](/ai-platform/training/docs/runtime-version-list) and learn [how to |
| # manage runtime versions](/ai-platform/training/docs/versioning). |
| "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training |
| # job. Each replica in the cluster will be of the type specified in |
| # `parameter_server_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `parameter_server_type`. |
| # |
| # The default value is zero. |
| "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators. |
| # |
| # You should only set `evaluatorConfig.acceleratorConfig` if |
| # `evaluatorType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `evaluatorConfig.imageUri` only if you build a custom image for |
| # your evaluator. If `evaluatorConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # This field cannot be set unless a custom container image is |
| # provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it overrides the default ENTRYPOINT of the Docker image. |
| # If not provided, the Docker image's ENTRYPOINT is used. |
| # This field cannot be set unless a custom container image is |
| # provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to a resource. # Optional. Options for using customer-managed encryption keys (CMEK) to |
| # protect resources created by a training job, instead of using Google's |
| # default encryption. If this is set, then all resources created by the |
| # training job will be encrypted with the customer-managed encryption key |
| # that you specify. |
| # |
| # [Learn how and when to use CMEK with AI Platform |
| # Training](/ai-platform/training/docs/cmek). |
| "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key |
| # used to protect a resource, such as a training job. It has the following |
| # format: |
| # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}` |
| }, |
| "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each |
| # replica in the cluster will be of the type specified in `worker_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `worker_type`. |
| # |
| # The default value is zero. |
| "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job. |
| "maxWaitTime": "A String", |
| "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can |
| # contain up to nine fractional digits, terminated by `s`. If not specified, |
| # this field defaults to `604800s` (seven days). |
| # |
| # If the training job is still running after this duration, AI Platform |
| # Training cancels it. |
| # |
| # For example, if you want to ensure your job runs for no more than 2 hours, |
| # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds / |
| # minute). |
| # |
| # If you submit your training job using the `gcloud` tool, you can [provide |
| # this field in a `config.yaml` |
| # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters). |
| # For example: |
| # |
| # ```yaml |
| # trainingInput: |
| # ... |
| # scheduling: |
| # maxRunningTime: 7200s |
| # ... |
| # ``` |
| }, |
| "scaleTier": "A String", # Required. Specifies the machine types and the number of replicas for |
| # workers and parameter servers. |
| "packageUris": [ # Required. The Google Cloud Storage location of the packages with |
| # the training program and any additional dependencies. |
| # The maximum number of package URIs is 100. |
| "A String", |
| ], |
| }, |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a job from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform job updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `GetJob`, and |
| # systems are expected to put that etag in the request to `UpdateJob` to |
| # ensure that their change will be applied to the same version of the job. |
| "jobId": "A String", # Required. The user-specified id of the job. |
| } |
| |
| updateMask: string, Required. Specifies the path, relative to `Job`, of the field to update. |
| To adopt the etag mechanism, include the `etag` field in the mask, and include the |
| `etag` value in your job resource. |
| |
| For example, to change the labels of a job, the `update_mask` parameter |
| would be specified as `labels,etag`, and the |
| `PATCH` request body would specify the new value, as follows: |
| { |
| "labels": { |
| "owner": "Google", |
| "color": "Blue" |
| }, |
| "etag": "33a64df551425fcc55e4d42a148795d9f25f89d4" |
| } |
| If `etag` matches the one on the server, the labels of the job will be |
| replaced with the given ones, and the server-side `etag` will be |
| recalculated. |
| |
| Currently the only supported update masks are `labels` and `etag`. |
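The etag-guarded read-modify-write cycle above can be sketched in Python. This is a minimal illustration, not an official snippet: `make_label_patch` is a hypothetical helper, the job dict is fabricated, and the commented-out client call assumes the Google API Python client with default credentials and placeholder resource names.

```python
# Build the request body and update mask for an etag-guarded label update.
def make_label_patch(job, new_labels):
    # Send back the job's current etag so the server can detect concurrent
    # modifications; list both fields in the update mask.
    body = {"labels": new_labels, "etag": job["etag"]}
    return body, "labels,etag"

# Pretend this dict came from a previous projects.jobs.get response.
current = {"jobId": "my_job",
           "etag": "33a64df551425fcc55e4d42a148795d9f25f89d4"}
body, mask = make_label_patch(current, {"owner": "alice", "color": "blue"})

# With the Google API Python client, the PATCH request would then look like
# (commented out because it requires credentials; names are placeholders):
# from googleapiclient import discovery
# ml = discovery.build("ml", "v1")
# ml.projects().jobs().patch(
#     name="projects/my-project/jobs/my_job",
#     updateMask=mask, body=body).execute()
```

If the server's etag no longer matches, the update is rejected and the caller should re-read the job and retry.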
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Represents a training or prediction job. |
| "createTime": "A String", # Output only. When the job was created. |
| "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job. |
| "dataFormat": "A String", # Required. The format of the input data files. |
| "outputPath": "A String", # Required. The output Google Cloud Storage location. |
| "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain |
| # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>. |
| "A String", |
| ], |
| "region": "A String", # Required. The Google Compute Engine region to run the prediction job in. |
| # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a> |
| # for AI Platform services. |
| "versionName": "A String", # Use this field if you want to specify a version of the model to use. The |
| # string is formatted the same way as `model_version`, with the addition |
| # of the version information: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"` |
| "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for |
| # the model to use. |
| "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch |
| # prediction. If not set, AI Platform will pick the runtime version used |
| # during the CreateVersion request for this model version, or choose the |
| # latest stable version when model version information is not available, |
| # such as when the model is specified by `uri`. |
| "modelName": "A String", # Use this field if you want to use the default version for the specified |
| # model. The string must use the following format: |
| # |
| # `"projects/YOUR_PROJECT/models/YOUR_MODEL"` |
| "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for |
| # this job. Please refer to |
| # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html) |
| # for information about how to use signatures. |
| # |
| # Defaults to |
| # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants), |
| # which is "serving_default". |
| "batchSize": "A String", # Optional. Number of records per batch, defaults to 64. |
| # The service buffers batch_size records in memory before invoking one |
| # TensorFlow prediction call internally, so take the record size and |
| # available memory into consideration when setting this parameter. |
| "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing. |
| # Defaults to 10 if not specified. |
| }, |
| "labels": { # Optional. One or more labels that you can add to organize your jobs. |
| # Each label is a key-value pair, where both the key and the value are |
| # arbitrary strings that you supply. |
| # For more information, see the documentation on |
| # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. |
| "a_key": "A String", |
| }, |
| "endTime": "A String", # Output only. When the job processing was completed. |
| "trainingOutput": { # Represents results of a training job. Output only. # The current training job result. |
| "trials": [ # Results for individual Hyperparameter trials. |
| # Only set for hyperparameter tuning jobs. |
| { # Represents the result of a single hyperparameter tuning trial from a |
| # training job. The TrainingOutput object that is returned on successful |
| # completion of a training job with hyperparameter tuning includes a list |
| # of HyperparameterOutput objects, one for each successful trial. |
| "endTime": "A String", # Output only. End time for the trial. |
| "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| "hyperparameters": { # The hyperparameters given to this trial. |
| "a_key": "A String", |
| }, |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for trials of built-in algorithms jobs that have succeeded. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "startTime": "A String", # Output only. Start time for the trial. |
| "allMetrics": [ # All recorded objective metrics for this trial. This field is not currently |
| # populated. |
| { # An observed value of a metric. |
| "trainingStep": "A String", # The global training step for this metric. |
| "objectiveValue": 3.14, # The objective value at this training step. |
| }, |
| ], |
| "trialId": "A String", # The trial id for these results. |
| "isTrialStoppedEarly": True or False, # True if the trial is stopped early. |
| "state": "A String", # Output only. The detailed state of the trial. |
| }, |
| ], |
| "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully. |
| # Only set for hyperparameter tuning jobs. |
| "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job. |
| "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job. |
| "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs. |
| # Only set for built-in algorithms jobs. |
| "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job |
| # saves the trained model. Only set for successful jobs that don't use |
| # hyperparameter tuning. |
| "framework": "A String", # Framework on which the built-in algorithm was trained. |
| "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was |
| # trained. |
| "pythonVersion": "A String", # Python version on which the built-in algorithm was trained. |
| }, |
| "consumedMLUnits": 3.14, # The amount of ML units consumed by the job. |
| "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning |
| # trials. See |
| # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag) |
| # for more information. Only set for hyperparameter tuning jobs. |
| }, |
| "errorMessage": "A String", # Output only. The details of a failure or a cancellation. |
| "predictionOutput": { # Represents results of a prediction job. # The current prediction job result. |
| "errorCount": "A String", # The number of data instances which resulted in errors. |
| "nodeHours": 3.14, # Node hours used by the batch prediction job. |
| "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time. |
| "predictionCount": "A String", # The number of generated predictions. |
| }, |
| "startTime": "A String", # Output only. When the job processing was started. |
| "state": "A String", # Output only. The detailed state of a job. |
| "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job. |
| # to submit your training job, you can specify the input parameters as |
| # command-line arguments and/or in a YAML configuration file referenced from |
| # the --config command-line argument. For details, see the guide to [submitting |
| # a training job](/ai-platform/training/docs/training-jobs). |
| "serviceAccount": "A String", # Optional. Specifies the service account for workload run-as account. |
| # Users submitting jobs must have act-as permission on this run-as account. |
| # If not specified, then CMLE P4SA will be used by default. |
| "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers. |
| # |
| # You should only set `workerConfig.acceleratorConfig` if `workerType` is set |
| # to a Compute Engine machine type. [Learn about restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `workerConfig.imageUri` only if you build a custom image for your |
| # worker. If `workerConfig.imageUri` has not been set, AI Platform uses |
| # the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # This field cannot be set unless a custom container image is provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it will override default ENTRYPOINT of the docker image. |
| # If not provided, the docker image's ENTRYPOINT is used. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment |
| # variable when training with a custom container. Defaults to `false`. [Learn |
| # more about this |
| # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master) |
| # |
| # This field has no effect for training jobs that don't use a custom |
| # container. |
| "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's master worker. You must specify this field when `scaleTier` is set to |
| # `CUSTOM`. |
| # |
| # You can use certain Compute Engine machine types directly in this field. |
| # The following types are supported: |
| # |
| # - `n1-standard-4` |
| # - `n1-standard-8` |
| # - `n1-standard-16` |
| # - `n1-standard-32` |
| # - `n1-standard-64` |
| # - `n1-standard-96` |
| # - `n1-highmem-2` |
| # - `n1-highmem-4` |
| # - `n1-highmem-8` |
| # - `n1-highmem-16` |
| # - `n1-highmem-32` |
| # - `n1-highmem-64` |
| # - `n1-highmem-96` |
| # - `n1-highcpu-16` |
| # - `n1-highcpu-32` |
| # - `n1-highcpu-64` |
| # - `n1-highcpu-96` |
| # |
| # Learn more about [using Compute Engine machine |
| # types](/ml-engine/docs/machine-types#compute-engine-machine-types). |
| # |
| # Alternatively, you can use the following legacy machine types: |
| # |
| # - `standard` |
| # - `large_model` |
| # - `complex_model_s` |
| # - `complex_model_m` |
| # - `complex_model_l` |
| # - `standard_gpu` |
| # - `complex_model_m_gpu` |
| # - `complex_model_l_gpu` |
| # - `standard_p100` |
| # - `complex_model_m_p100` |
| # - `standard_v100` |
| # - `large_model_v100` |
| # - `complex_model_m_v100` |
| # - `complex_model_l_v100` |
| # |
| # Learn more about [using legacy machine |
| # types](/ml-engine/docs/machine-types#legacy-machine-types). |
| # |
| # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this |
| # field. Learn more about the [special configuration options for training |
| # with |
| # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers. |
| # |
| # You should only set `parameterServerConfig.acceleratorConfig` if |
| # `parameterServerType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `parameterServerConfig.imageUri` only if you build a custom image for |
| # your parameter server. If `parameterServerConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # This field cannot be set unless a custom container image is provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it will override default ENTRYPOINT of the docker image. |
| # If not provided, the docker image's ENTRYPOINT is used. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "region": "A String", # Required. The region to run the training job in. See the [available |
| # regions](/ai-platform/training/docs/regions) for AI Platform Training. |
| "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs |
| # and other data needed for training. This path is passed to your TensorFlow |
| # program as the '--job-dir' command-line argument. The benefit of specifying |
| # this field is that Cloud ML validates the path for use in training. |
| "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify |
| # this field or specify `masterConfig.imageUri`. |
| # |
| # The following Python versions are available: |
| # |
| # * Python '3.7' is available when `runtime_version` is set to '1.15' or |
| # later. |
| # * Python '3.5' is available when `runtime_version` is set to a version |
| # from '1.4' to '1.14'. |
| # * Python '2.7' is available when `runtime_version` is set to '1.15' or |
| # earlier. |
| # |
| # Read more about the Python versions available for [each runtime |
| # version](/ml-engine/docs/runtime-version-list). |
| "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune. |
| "maxTrials": 42, # Optional. How many training trials should be attempted to optimize |
| # the specified hyperparameters. |
| # |
| # Defaults to one. |
| "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial |
| # early stopping. |
| "params": [ # Required. The set of parameters to tune. |
| { # Represents a single hyperparameter to optimize. |
| "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be integers if |
| # type is INTEGER. |
| "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories. |
| "A String", |
| ], |
| "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube. |
| # Leave unset for categorical parameters. |
| # Some kind of scaling is strongly recommended for real or integral |
| # parameters (e.g., `UNIT_LINEAR_SCALE`). |
| "discreteValues": [ # Required if type is `DISCRETE`. |
| # A list of feasible points. |
| # The list should be in strictly increasing order. For instance, this |
| # parameter might have possible settings of 1.5, 2.5, and 4.0. This list |
| # should not contain more than 1,000 values. |
| 3.14, |
| ], |
| "type": "A String", # Required. The type of the parameter. |
| "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field |
| # should be unset if type is `CATEGORICAL`. This value should be integers if |
| # type is `INTEGER`. |
| "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in |
| # a HyperparameterSpec message. E.g., "learning_rate". |
| }, |
| ], |
| "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing |
| # the hyperparameter tuning job. You can specify this field to override the |
| # default failing criteria for AI Platform hyperparameter tuning jobs. |
| # |
| # Defaults to zero, which means the service decides when a hyperparameter |
| # job should fail. |
| "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For |
| # current versions of TensorFlow, this tag name should exactly match what is |
| # shown in TensorBoard, including all scopes. For versions of TensorFlow |
| # prior to 0.12, this should be only the tag passed to tf.Summary. |
| # By default, "training/hptuning/metric" will be used. |
| "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to |
| # continue with. The job id will be used to find the corresponding vizier |
| # study guid and resume the study. |
| "goal": "A String", # Required. The type of goal to use for tuning. Available types are |
| # `MAXIMIZE` and `MINIMIZE`. |
| # |
| # Defaults to `MAXIMIZE`. |
| "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter |
| # tuning job. |
| # Uses the default AI Platform hyperparameter tuning |
| # algorithm if unspecified. |
| "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently. |
| # You can reduce the time it takes to perform hyperparameter tuning by adding |
| # trials in parallel. However, each trial only benefits from the information |
| # gained in completed trials. That means that a trial does not get access to |
| # the results of trials running at the same time, which could reduce the |
| # quality of the overall optimization. |
| # |
| # Each trial will use the same scale tier and machine types. |
| # |
| # Defaults to one. |
| }, |
| "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's evaluator nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `evaluatorCount` is greater than zero. |
| "network": "A String", # Optional. The full name of the Google Compute Engine |
| # [network](/compute/docs/networks-and-firewalls#networks) to which the Job |
| # is peered. For example, projects/12345/global/networks/myVPC. Format is of |
| # the form projects/{project}/global/networks/{network}. Where {project} is a |
| # project number, as in '12345', and {network} is the network name. |
| # |
| # Private services access must already be configured for the network. If left |
| # unspecified, the Job is not peered with any network. Learn more about |
| # connecting a Job to a user network over private IP. |
| "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's parameter server. |
| # |
| # The supported values are the same as those described in the entry for |
| # `master_type`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `parameter_server_count` is greater than zero. |
| "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training |
| # job's worker nodes. |
| # |
| # The supported values are the same as those described in the entry for |
| # `masterType`. |
| # |
| # This value must be consistent with the category of machine type that |
| # `masterType` uses. In other words, both must be Compute Engine machine |
| # types or both must be legacy machine types. |
| # |
| # If you use `cloud_tpu` for this value, see special instructions for |
| # [configuring a custom TPU |
| # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine). |
| # |
| # This value must be present when `scaleTier` is set to `CUSTOM` and |
| # `workerCount` is greater than zero. |
| "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker. |
| # |
| # You should only set `masterConfig.acceleratorConfig` if `masterType` is set |
| # to a Compute Engine machine type. Learn about [restrictions on accelerator |
| # configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `masterConfig.imageUri` only if you build a custom image. Only one of |
| # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more |
| # about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # This field cannot be set unless a custom container image is provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it will override default ENTRYPOINT of the docker image. |
| # If not provided, the docker image's ENTRYPOINT is used. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job. |
| # Each replica in the cluster will be of the type specified in |
| # `evaluator_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `evaluator_type`. |
| # |
| # The default value is zero. |
| "args": [ # Optional. Command-line arguments passed to the training application when it |
| # starts. If your job uses a custom container, then the arguments are passed |
| # to the container's <a class="external" target="_blank" |
| # href="https://docs.docker.com/engine/reference/builder/#entrypoint"> |
| # `ENTRYPOINT`</a> command. |
| "A String", |
| ], |
| "pythonModule": "A String", # Required. The Python module name to run after installing the packages. |
| "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must |
| # either specify this field or specify `masterConfig.imageUri`. |
| # |
| # For more information, see the [runtime version |
| # list](/ai-platform/training/docs/runtime-version-list) and learn [how to |
| # manage runtime versions](/ai-platform/training/docs/versioning). |
| "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training |
| # job. Each replica in the cluster will be of the type specified in |
| # `parameter_server_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `parameter_server_type`. |
| # |
| # The default value is zero. |
| "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators. |
| # |
| # You should only set `evaluatorConfig.acceleratorConfig` if |
| # `evaluatorType` is set to a Compute Engine machine type. [Learn |
| # about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # |
| # Set `evaluatorConfig.imageUri` only if you build a custom image for |
| # your evaluator. If `evaluatorConfig.imageUri` has not been |
| # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica. |
| # [Learn about restrictions on accelerator configurations for |
| # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu) |
| # Note that the AcceleratorConfig can be used in both Jobs and Versions. |
| # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and |
| # [accelerators for online |
| # prediction](/ml-engine/docs/machine-types-online-prediction#gpus). |
| "type": "A String", # The type of accelerator to use. |
| "count": "A String", # The number of accelerators to attach to each machine running the job. |
| }, |
| "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container |
| # Registry. Learn more about [configuring custom |
| # containers](/ai-platform/training/docs/distributed-training-containers). |
| "containerArgs": [ # Arguments to the entrypoint command. |
| # The following rules apply for container_command and container_args: |
| # - If you do not supply command or args: |
| # The defaults defined in the Docker image are used. |
| # - If you supply a command but no args: |
| # The default EntryPoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run without any arguments. |
| # - If you supply only args: |
| # The default Entrypoint defined in the Docker image is run with the args |
| # that you supplied. |
| # - If you supply a command and args: |
| # The default Entrypoint and the default Cmd defined in the Docker image |
| # are ignored. Your command is run with your args. |
| # This field cannot be set unless a custom container image is provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching |
| # the one used in the custom container. This field is required if the replica |
| # is a TPU worker that uses a custom container. Otherwise, do not specify |
| # this field. This must be a [runtime version that currently supports |
| # training with |
| # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support). |
| # |
| # Note that the version of TensorFlow included in a runtime version may |
| # differ from the numbering of the runtime version itself, because it may |
| # have a different [patch |
| # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20). |
| # In this field, you must specify the runtime version (TensorFlow minor |
| # version). For example, if your custom container runs TensorFlow `1.x.y`, |
| # specify `1.x`. |
| "containerCommand": [ # The command with which the replica's custom container is run. |
| # If provided, it will override default ENTRYPOINT of the docker image. |
| # If not provided, the docker image's ENTRYPOINT is used. |
| # It cannot be set if custom container image is |
| # not provided. |
| # Note that this field and [TrainingInput.args] are mutually exclusive, i.e., |
| # both cannot be set at the same time. |
| "A String", |
| ], |
| }, |
| "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to |
| # protect resources created by a training job, instead of using Google's |
| # default encryption. If this is set, then all resources created by the |
| # training job will be encrypted with the customer-managed encryption key |
| # that you specify. |
| # |
| # [Learn how and when to use CMEK with AI Platform |
| # Training](/ai-platform/training/docs/cmek). |
| # a resource. |
| "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key |
| # used to protect a resource, such as a training job. It has the following |
| # format: |
| # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}` |
| }, |
| "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each |
| # replica in the cluster will be of the type specified in `worker_type`. |
| # |
| # This value can only be used when `scale_tier` is set to `CUSTOM`. If you |
| # set this value, you must also set `worker_type`. |
| # |
| # The default value is zero. |
| "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job. |
| "maxWaitTime": "A String", |
| "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can |
| # contain up to nine fractional digits, terminated by `s`. If not specified, |
| # this field defaults to `604800s` (seven days). |
| # |
| # If the training job is still running after this duration, AI Platform |
| # Training cancels it. |
| # |
| # For example, if you want to ensure your job runs for no more than 2 hours, |
| # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds / |
| # minute). |
| # |
| # If you submit your training job using the `gcloud` tool, you can [provide |
| # this field in a `config.yaml` |
| # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters). |
| # For example: |
| # |
| # ```yaml |
| # trainingInput: |
| # ... |
| # scheduling: |
| # maxRunningTime: 7200s |
| # ... |
| # ``` |
| }, |
| "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers |
| # and parameter servers. |
| "packageUris": [ # Required. The Google Cloud Storage location of the packages with |
| # the training program and any additional dependencies. |
| # The maximum number of package URIs is 100. |
| "A String", |
| ], |
| }, |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a job from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform job updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `GetJob`, and |
| # systems are expected to put that etag in the request to `UpdateJob` to |
| # ensure that their change will be applied to the same version of the job. |
| "jobId": "A String", # Required. The user-specified id of the job. |
| }</pre> |
| </div> |
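As a sketch of how the `trainingInput` fields above fit together, the following builds a job body for `create` with a custom scale tier and a hyperparameter tuning spec. The project ID, bucket paths, job ID, and module name are placeholders, not values from this API's documentation:

```python
# Sketch: a Job body for projects.jobs.create, using fields documented above.
# All gs:// paths, the project ID, and the job ID are hypothetical placeholders.
job_body = {
    "jobId": "my_training_job_001",
    "trainingInput": {
        "scaleTier": "CUSTOM",
        "masterType": "n1-standard-8",   # required when scaleTier is CUSTOM
        "workerType": "n1-standard-8",
        "workerCount": "2",              # replica counts are strings in this API
        "region": "us-central1",
        "runtimeVersion": "1.15",
        "pythonVersion": "3.7",          # Python 3.7 requires runtime 1.15+
        "pythonModule": "trainer.task",
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "jobDir": "gs://my-bucket/output",
        "hyperparameters": {
            "goal": "MAXIMIZE",
            "maxTrials": 10,
            "maxParallelTrials": 2,
            "params": [
                {
                    # DOUBLE parameters take min/max bounds and a scale type.
                    "parameterName": "learning_rate",
                    "type": "DOUBLE",
                    "minValue": 0.0001,
                    "maxValue": 0.1,
                    "scaleType": "UNIT_LOG_SCALE",
                }
            ],
        },
    },
}

# With an authorized discovery client, the job would be submitted roughly as:
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   response = ml.projects().jobs().create(
#       parent="projects/my-project", body=job_body).execute()
```

Note that `masterType`, `workerType`, and `workerCount` are only honored because `scaleTier` is `CUSTOM`; with a predefined tier they must be omitted.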
| |
| <div class="method"> |
| <code class="details" id="setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</code> |
| <pre>Sets the access control policy on the specified resource. Replaces any |
| existing policy. |
| |
| Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors. |
| |
| Args: |
| resource: string, REQUIRED: The resource for which the policy is being specified. |
| See the operation documentation for the appropriate value for this field. (required) |
| body: object, The request body. |
| The object takes the form of: |
| |
| { # Request message for `SetIamPolicy` method. |
| "policy": { # An Identity and Access Management (IAM) policy, which specifies access # REQUIRED: The complete policy to be applied to the `resource`. The size of |
| # the policy is limited to a few 10s of KB. An empty policy is a |
| # valid policy but certain Cloud Platform services (such as Projects) |
| # might reject them. |
| # controls for Google Cloud resources. |
| # |
| # |
| # A `Policy` is a collection of `bindings`. A `binding` binds one or more |
| # `members` to a single `role`. Members can be user accounts, service accounts, |
| # Google groups, and domains (such as G Suite). A `role` is a named list of |
| # permissions; each `role` can be an IAM predefined role or a user-created |
| # custom role. |
| # |
| # For some types of Google Cloud resources, a `binding` can also specify a |
| # `condition`, which is a logical expression that allows access to a resource |
| # only if the expression evaluates to `true`. A condition can add constraints |
| # based on attributes of the request, the resource, or both. To learn which |
| # resources support conditions in their IAM policies, see the |
| # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| # |
| # **JSON example:** |
| # |
| # { |
| # "bindings": [ |
| # { |
| # "role": "roles/resourcemanager.organizationAdmin", |
| # "members": [ |
| # "user:[email protected]", |
| # "group:[email protected]", |
| # "domain:google.com", |
| # "serviceAccount:[email protected]" |
| # ] |
| # }, |
| # { |
| # "role": "roles/resourcemanager.organizationViewer", |
| # "members": [ |
| # "user:[email protected]" |
| # ], |
| # "condition": { |
| # "title": "expirable access", |
| # "description": "Does not grant access after Sep 2020", |
| # "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", |
| # } |
| # } |
| # ], |
| # "etag": "BwWWja0YfJA=", |
| # "version": 3 |
| # } |
| # |
| # **YAML example:** |
| # |
| # bindings: |
| # - members: |
| # - user:[email protected] |
| # - group:[email protected] |
| # - domain:google.com |
| # - serviceAccount:[email protected] |
| # role: roles/resourcemanager.organizationAdmin |
| # - members: |
| # - user:[email protected] |
| # role: roles/resourcemanager.organizationViewer |
| # condition: |
| # title: expirable access |
| # description: Does not grant access after Sep 2020 |
| # expression: request.time < timestamp('2020-10-01T00:00:00.000Z') |
| # etag: BwWWja0YfJA= |
| # version: 3 |
| # |
| # For a description of IAM and its features, see the |
| # [IAM documentation](https://cloud.google.com/iam/docs/). |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a policy from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform policy updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `getIamPolicy`, and |
| # systems are expected to put that etag in the request to `setIamPolicy` to |
| # ensure that their change will be applied to the same version of the policy. |
| # |
| # **Important:** If you use IAM Conditions, you must include the `etag` field |
| # whenever you call `setIamPolicy`. If you omit this field, then IAM allows |
| # you to overwrite a version `3` policy with a version `1` policy, and all of |
| # the conditions in the version `3` policy are lost. |
| "auditConfigs": [ # Specifies cloud audit logging configuration for this policy. |
| { # Specifies the audit configuration for a service. |
| # The configuration determines which permission types are logged, and what |
| # identities, if any, are exempted from logging. |
| # An AuditConfig must have one or more AuditLogConfigs. |
| # |
| # If there are AuditConfigs for both `allServices` and a specific service, |
| # the union of the two AuditConfigs is used for that service: the log_types |
| # specified in each AuditConfig are enabled, and the exempted_members in each |
| # AuditLogConfig are exempted. |
| # |
| # Example Policy with multiple AuditConfigs: |
| # |
| # { |
| # "audit_configs": [ |
| # { |
| # "service": "allServices", |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # }, |
| # { |
| # "log_type": "DATA_WRITE" |
| # }, |
| # { |
| # "log_type": "ADMIN_READ" |
| # } |
| # ] |
| # }, |
| # { |
| # "service": "sampleservice.googleapis.com", |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ" |
| # }, |
| # { |
| # "log_type": "DATA_WRITE", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # } |
| # ] |
| # } |
| # ] |
| # } |
| # |
| # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ |
| # logging. It also exempts [email protected] from DATA_READ logging, and |
| # [email protected] from DATA_WRITE logging. |
| "service": "A String", # Specifies a service that will be enabled for audit logging. |
| # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`. |
| # `allServices` is a special value that covers all services. |
| "auditLogConfigs": [ # The configuration for logging of each type of permission. |
| { # Provides the configuration for logging a type of permissions. |
| # Example: |
| # |
| # { |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # }, |
| # { |
| # "log_type": "DATA_WRITE" |
| # } |
| # ] |
| # } |
| # |
| # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting |
| # [email protected] from DATA_READ logging. |
| "logType": "A String", # The log type that this config enables. |
| "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of |
| # permission. |
| # Follows the same format as Binding.members. |
| "A String", |
| ], |
| }, |
| ], |
| }, |
| ], |
| "version": 42, # Specifies the format of the policy. |
| # |
| # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value |
| # are rejected. |
| # |
| # Any operation that affects conditional role bindings must specify version |
| # `3`. This requirement applies to the following operations: |
| # |
| # * Getting a policy that includes a conditional role binding |
| # * Adding a conditional role binding to a policy |
| # * Changing a conditional role binding in a policy |
| # * Removing any role binding, with or without a condition, from a policy |
| # that includes conditions |
| # |
| # **Important:** If you use IAM Conditions, you must include the `etag` field |
| # whenever you call `setIamPolicy`. If you omit this field, then IAM allows |
| # you to overwrite a version `3` policy with a version `1` policy, and all of |
| # the conditions in the version `3` policy are lost. |
| # |
| # If a policy does not include any conditions, operations on that policy may |
| # specify any valid version or leave the field unset. |
| # |
| # To learn which resources support conditions in their IAM policies, see the |
| # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a |
| # `condition` that determines how and when the `bindings` are applied. Each |
| # of the `bindings` must contain at least one member. |
| { # Associates `members` with a `role`. |
| "role": "A String", # Role that is assigned to `members`. |
| # For example, `roles/viewer`, `roles/editor`, or `roles/owner`. |
| "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding. |
| # |
| # If the condition evaluates to `true`, then this binding applies to the |
| # current request. |
| # |
| # If the condition evaluates to `false`, then this binding does not apply to |
| # the current request. However, a different role binding might grant the same |
| # role to one or more of the members in this binding. |
| # |
| # To learn which resources support conditions in their IAM policies, see the |
| # [IAM |
| # documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| # syntax. CEL is a C-like expression language. The syntax and semantics of CEL |
| # are documented at https://github.com/google/cel-spec. |
| # |
| # Example (Comparison): |
| # |
| # title: "Summary size limit" |
| # description: "Determines if a summary is less than 100 chars" |
| # expression: "document.summary.size() < 100" |
| # |
| # Example (Equality): |
| # |
| # title: "Requestor is owner" |
| # description: "Determines if requestor is the document owner" |
| # expression: "document.owner == request.auth.claims.email" |
| # |
| # Example (Logic): |
| # |
| # title: "Public documents" |
| # description: "Determine whether the document should be publicly visible" |
| # expression: "document.type != 'private' && document.type != 'internal'" |
| # |
| # Example (Data Manipulation): |
| # |
| # title: "Notification string" |
| # description: "Create a notification string with a timestamp." |
| # expression: "'New message received at ' + string(document.create_time)" |
| # |
| # The exact variables and functions that may be referenced within an expression |
| # are determined by the service that evaluates it. See the service |
| # documentation for additional information. |
| "expression": "A String", # Textual representation of an expression in Common Expression Language |
| # syntax. |
| "title": "A String", # Optional. Title for the expression, i.e. a short string describing |
| # its purpose. This can be used e.g. in UIs which allow to enter the |
| # expression. |
| "location": "A String", # Optional. String indicating the location of the expression for error |
| # reporting, e.g. a file name and a position in the file. |
| "description": "A String", # Optional. Description of the expression. This is a longer text which |
| # describes the expression, e.g. when hovered over it in a UI. |
| }, |
| "members": [ # Specifies the identities requesting access for a Cloud Platform resource. |
| # `members` can have the following values: |
| # |
| # * `allUsers`: A special identifier that represents anyone who is |
| # on the internet; with or without a Google account. |
| # |
| # * `allAuthenticatedUsers`: A special identifier that represents anyone |
| # who is authenticated with a Google account or a service account. |
| # |
| # * `user:{emailid}`: An email address that represents a specific Google |
| # account. For example, `[email protected]` . |
| # |
| # * `serviceAccount:{emailid}`: An email address that represents a service |
| # account. For example, `[email protected]`. |
| # |
| # * `group:{emailid}`: An email address that represents a Google group. |
| # For example, `[email protected]`. |
| # |
| # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique |
| # identifier) representing a user that has been recently deleted. For |
| # example, `[email protected]?uid=123456789012345678901`. If the user is |
| # recovered, this value reverts to `user:{emailid}` and the recovered user |
| # retains the role in the binding. |
| # |
| # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus |
| # unique identifier) representing a service account that has been recently |
| # deleted. For example, |
| # `[email protected]?uid=123456789012345678901`. |
| # If the service account is undeleted, this value reverts to |
| # `serviceAccount:{emailid}` and the undeleted service account retains the |
| # role in the binding. |
| # |
| # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique |
| # identifier) representing a Google group that has been recently |
| # deleted. For example, `[email protected]?uid=123456789012345678901`. If |
| # the group is recovered, this value reverts to `group:{emailid}` and the |
| # recovered group retains the role in the binding. |
| # |
| # * `domain:{domain}`: The G Suite domain (primary) that represents all the |
| # users of that domain. For example, `google.com` or `example.com`. |
| # |
| "A String", |
| ], |
| }, |
| ], |
| }, |
| "updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only |
| # the fields in the mask will be modified. If no mask is provided, the |
| # following default mask is used: |
| # |
| # `paths: "bindings, etag"` |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # An Identity and Access Management (IAM) policy, which specifies access |
| # controls for Google Cloud resources. |
| # |
| # A `Policy` is a collection of `bindings`. A `binding` binds one or more |
| # `members` to a single `role`. Members can be user accounts, service accounts, |
| # Google groups, and domains (such as G Suite). A `role` is a named list of |
| # permissions; each `role` can be an IAM predefined role or a user-created |
| # custom role. |
| # |
| # For some types of Google Cloud resources, a `binding` can also specify a |
| # `condition`, which is a logical expression that allows access to a resource |
| # only if the expression evaluates to `true`. A condition can add constraints |
| # based on attributes of the request, the resource, or both. To learn which |
| # resources support conditions in their IAM policies, see the |
| # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| # |
| # **JSON example:** |
| # |
| # { |
| # "bindings": [ |
| # { |
| # "role": "roles/resourcemanager.organizationAdmin", |
| # "members": [ |
| # "user:[email protected]", |
| # "group:[email protected]", |
| # "domain:google.com", |
| # "serviceAccount:[email protected]" |
| # ] |
| # }, |
| # { |
| # "role": "roles/resourcemanager.organizationViewer", |
| # "members": [ |
| # "user:[email protected]" |
| # ], |
| # "condition": { |
| # "title": "expirable access", |
| # "description": "Does not grant access after Sep 2020", |
| # "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')", |
| # } |
| # } |
| # ], |
| # "etag": "BwWWja0YfJA=", |
| # "version": 3 |
| # } |
| # |
| # **YAML example:** |
| # |
| # bindings: |
| # - members: |
| # - user:[email protected] |
| # - group:[email protected] |
| # - domain:google.com |
| # - serviceAccount:[email protected] |
| # role: roles/resourcemanager.organizationAdmin |
| # - members: |
| # - user:[email protected] |
| # role: roles/resourcemanager.organizationViewer |
| # condition: |
| # title: expirable access |
| # description: Does not grant access after Sep 2020 |
| # expression: request.time < timestamp('2020-10-01T00:00:00.000Z') |
| # etag: BwWWja0YfJA= |
| # version: 3 |
| # |
| # For a description of IAM and its features, see the |
| # [IAM documentation](https://cloud.google.com/iam/docs/). |
| "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| # prevent simultaneous updates of a policy from overwriting each other. |
| # It is strongly suggested that systems make use of the `etag` in the |
| # read-modify-write cycle to perform policy updates in order to avoid race |
| # conditions: An `etag` is returned in the response to `getIamPolicy`, and |
| # systems are expected to put that etag in the request to `setIamPolicy` to |
| # ensure that their change will be applied to the same version of the policy. |
| # |
| # **Important:** If you use IAM Conditions, you must include the `etag` field |
| # whenever you call `setIamPolicy`. If you omit this field, then IAM allows |
| # you to overwrite a version `3` policy with a version `1` policy, and all of |
| # the conditions in the version `3` policy are lost. |
| "auditConfigs": [ # Specifies cloud audit logging configuration for this policy. |
| { # Specifies the audit configuration for a service. |
| # The configuration determines which permission types are logged, and what |
| # identities, if any, are exempted from logging. |
| # An AuditConfig must have one or more AuditLogConfigs. |
| # |
| # If there are AuditConfigs for both `allServices` and a specific service, |
| # the union of the two AuditConfigs is used for that service: the log_types |
| # specified in each AuditConfig are enabled, and the exempted_members in each |
| # AuditLogConfig are exempted. |
| # |
| # Example Policy with multiple AuditConfigs: |
| # |
| # { |
| # "audit_configs": [ |
| # { |
| # "service": "allServices", |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # }, |
| # { |
| # "log_type": "DATA_WRITE" |
| # }, |
| # { |
| # "log_type": "ADMIN_READ" |
| # } |
| # ] |
| # }, |
| # { |
| # "service": "sampleservice.googleapis.com", |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ" |
| # }, |
| # { |
| # "log_type": "DATA_WRITE", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # } |
| # ] |
| # } |
| # ] |
| # } |
| # |
| # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ |
| # logging. It also exempts [email protected] from DATA_READ logging, and |
| # [email protected] from DATA_WRITE logging. |
| "service": "A String", # Specifies a service that will be enabled for audit logging. |
| # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`. |
| # `allServices` is a special value that covers all services. |
| "auditLogConfigs": [ # The configuration for logging of each type of permission. |
| { # Provides the configuration for logging a type of permissions. |
| # Example: |
| # |
| # { |
| # "audit_log_configs": [ |
| # { |
| # "log_type": "DATA_READ", |
| # "exempted_members": [ |
| # "user:[email protected]" |
| # ] |
| # }, |
| # { |
| # "log_type": "DATA_WRITE" |
| # } |
| # ] |
| # } |
| # |
| # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting |
| # [email protected] from DATA_READ logging. |
| "logType": "A String", # The log type that this config enables. |
| "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of |
| # permission. |
| # Follows the same format as Binding.members. |
| "A String", |
| ], |
| }, |
| ], |
| }, |
| ], |
| "version": 42, # Specifies the format of the policy. |
| # |
| # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value |
| # are rejected. |
| # |
| # Any operation that affects conditional role bindings must specify version |
| # `3`. This requirement applies to the following operations: |
| # |
| # * Getting a policy that includes a conditional role binding |
| # * Adding a conditional role binding to a policy |
| # * Changing a conditional role binding in a policy |
| # * Removing any role binding, with or without a condition, from a policy |
| # that includes conditions |
| # |
| # **Important:** If you use IAM Conditions, you must include the `etag` field |
| # whenever you call `setIamPolicy`. If you omit this field, then IAM allows |
| # you to overwrite a version `3` policy with a version `1` policy, and all of |
| # the conditions in the version `3` policy are lost. |
| # |
| # If a policy does not include any conditions, operations on that policy may |
| # specify any valid version or leave the field unset. |
| # |
| # To learn which resources support conditions in their IAM policies, see the |
| # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a |
| # `condition` that determines how and when the `bindings` are applied. Each |
| # of the `bindings` must contain at least one member. |
| { # Associates `members` with a `role`. |
| "role": "A String", # Role that is assigned to `members`. |
| # For example, `roles/viewer`, `roles/editor`, or `roles/owner`. |
| "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding. |
| # |
| # If the condition evaluates to `true`, then this binding applies to the |
| # current request. |
| # |
| # If the condition evaluates to `false`, then this binding does not apply to |
| # the current request. However, a different role binding might grant the same |
| # role to one or more of the members in this binding. |
| # |
| # To learn which resources support conditions in their IAM policies, see the |
| # [IAM |
| # documentation](https://cloud.google.com/iam/help/conditions/resource-policies). |
| # syntax. CEL is a C-like expression language. The syntax and semantics of CEL |
| # are documented at https://github.com/google/cel-spec. |
| # |
| # Example (Comparison): |
| # |
| # title: "Summary size limit" |
| # description: "Determines if a summary is less than 100 chars" |
| # expression: "document.summary.size() < 100" |
| # |
| # Example (Equality): |
| # |
| # title: "Requestor is owner" |
| # description: "Determines if requestor is the document owner" |
| # expression: "document.owner == request.auth.claims.email" |
| # |
| # Example (Logic): |
| # |
| # title: "Public documents" |
| # description: "Determine whether the document should be publicly visible" |
| # expression: "document.type != 'private' && document.type != 'internal'" |
| # |
| # Example (Data Manipulation): |
| # |
| # title: "Notification string" |
| # description: "Create a notification string with a timestamp." |
| # expression: "'New message received at ' + string(document.create_time)" |
| # |
| # The exact variables and functions that may be referenced within an expression |
| # are determined by the service that evaluates it. See the service |
| # documentation for additional information. |
| "expression": "A String", # Textual representation of an expression in Common Expression Language |
| # syntax. |
| "title": "A String", # Optional. Title for the expression, i.e. a short string describing |
| # its purpose. This can be used e.g. in UIs which allow to enter the |
| # expression. |
| "location": "A String", # Optional. String indicating the location of the expression for error |
| # reporting, e.g. a file name and a position in the file. |
| "description": "A String", # Optional. Description of the expression. This is a longer text which |
| # describes the expression, e.g. when hovered over it in a UI. |
| }, |
| "members": [ # Specifies the identities requesting access for a Cloud Platform resource. |
| # `members` can have the following values: |
| # |
| # * `allUsers`: A special identifier that represents anyone who is |
| # on the internet; with or without a Google account. |
| # |
| # * `allAuthenticatedUsers`: A special identifier that represents anyone |
| # who is authenticated with a Google account or a service account. |
| # |
| # * `user:{emailid}`: An email address that represents a specific Google |
| # account. For example, `[email protected]` . |
| # |
| # * `serviceAccount:{emailid}`: An email address that represents a service |
| # account. For example, `[email protected]`. |
| # |
| # * `group:{emailid}`: An email address that represents a Google group. |
| # For example, `[email protected]`. |
| # |
| # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique |
| # identifier) representing a user that has been recently deleted. For |
| # example, `[email protected]?uid=123456789012345678901`. If the user is |
| # recovered, this value reverts to `user:{emailid}` and the recovered user |
| # retains the role in the binding. |
| # |
| # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus |
| # unique identifier) representing a service account that has been recently |
| # deleted. For example, |
| # `[email protected]?uid=123456789012345678901`. |
| # If the service account is undeleted, this value reverts to |
| # `serviceAccount:{emailid}` and the undeleted service account retains the |
| # role in the binding. |
| # |
| # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique |
| # identifier) representing a Google group that has been recently |
| # deleted. For example, `[email protected]?uid=123456789012345678901`. If |
| # the group is recovered, this value reverts to `group:{emailid}` and the |
| # recovered group retains the role in the binding. |
| # |
| # * `domain:{domain}`: The G Suite domain (primary) that represents all the |
| # users of that domain. For example, `google.com` or `example.com`. |
| # |
| "A String", |
| ], |
| }, |
| ], |
| }</pre> |
| </div> |
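The `SetIamPolicy` request body above is usually built by fetching the current policy with `getIamPolicy`, merging changes locally, and sending the result back with its `etag` intact. The sketch below covers only the local merge step; `add_member` is a hypothetical helper, and the role names, members, and etag value are illustrative.

```python
# Hypothetical merge helper: find (or create) the binding for a role and
# add a member to it. Operates on the plain-dict Policy shape shown above.
def add_member(policy, role, member):
    bindings = policy.setdefault("bindings", [])
    for binding in bindings:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    bindings.append({"role": role, "members": [member]})
    return policy

# A policy as it might come back from getIamPolicy (values illustrative).
policy = {
    "bindings": [
        {"role": "roles/ml.viewer", "members": ["user:alice@example.com"]},
    ],
    "etag": "BwWWja0YfJA=",
    "version": 1,
}

add_member(policy, "roles/ml.viewer", "user:bob@example.com")
add_member(policy, "roles/ml.admin", "group:ml-team@example.com")

# The request body round-trips the etag so a concurrent update is detected.
request_body = {"policy": policy}
```

With the real client, this dict would be passed as the `body` argument of `setIamPolicy`.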
| |
| <div class="method"> |
| <code class="details" id="testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</code> |
| <pre>Returns permissions that a caller has on the specified resource. |
| If the resource does not exist, this will return an empty set of |
| permissions, not a `NOT_FOUND` error. |
| |
| Note: This operation is designed to be used for building permission-aware |
| UIs and command-line tools, not for authorization checking. This operation |
| may "fail open" without warning. |
| |
| Args: |
| resource: string, REQUIRED: The resource for which the policy detail is being requested. |
| See the operation documentation for the appropriate value for this field. (required) |
| body: object, The request body. |
| The object takes the form of: |
| |
| { # Request message for `TestIamPermissions` method. |
| "permissions": [ # The set of permissions to check for the `resource`. Permissions with |
| # wildcards (such as '*' or 'storage.*') are not allowed. For more |
| # information, see |
| # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions). |
| "A String", |
| ], |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Response message for `TestIamPermissions` method. |
| "permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is |
| # allowed. |
| "A String", |
| ], |
| }</pre> |
| </div> |
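The `TestIamPermissions` contract can be mirrored locally: the response is the subset of requested permissions the caller actually holds, and wildcard permissions are rejected. `check_permissions` below is a hypothetical stand-in for the service call, not part of the client library, and the permission names are illustrative.

```python
# Hypothetical local model of the TestIamPermissions contract: return the
# subset of requested permissions that are granted; reject wildcards, which
# the real service does not allow in requests.
def check_permissions(granted, requested):
    for perm in requested:
        if "*" in perm:
            raise ValueError("wildcard permissions are not allowed: " + perm)
    return [p for p in requested if p in granted]

granted = {"ml.jobs.get", "ml.jobs.list"}
allowed = check_permissions(granted, ["ml.jobs.get", "ml.jobs.cancel"])
print(allowed)  # the caller may read jobs but not cancel them
```

Note that, as documented above, a missing resource yields an empty result rather than a `NOT_FOUND` error, so the response is suitable for permission-aware UIs but not for authorization checks.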
| |
| </body></html> |