| <html><body> |
| <style> |
| |
| body, h1, h2, h3, div, span, p, pre, a { |
| margin: 0; |
| padding: 0; |
| border: 0; |
| font-weight: inherit; |
| font-style: inherit; |
| font-size: 100%; |
| font-family: inherit; |
| vertical-align: baseline; |
| } |
| |
| body { |
| font-size: 13px; |
| padding: 1em; |
| } |
| |
| h1 { |
| font-size: 26px; |
| margin-bottom: 1em; |
| } |
| |
| h2 { |
| font-size: 24px; |
| margin-bottom: 1em; |
| } |
| |
| h3 { |
| font-size: 20px; |
| margin-bottom: 1em; |
| margin-top: 1em; |
| } |
| |
| pre, code { |
| line-height: 1.5; |
| font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace; |
| } |
| |
| pre { |
| margin-top: 0.5em; |
| } |
| |
| h1, h2, h3, p { |
font-family: Arial, sans-serif;
| } |
| |
| h1, h2, h3 { |
| border-bottom: solid #CCC 1px; |
| } |
| |
| .toc_element { |
| margin-top: 0.5em; |
| } |
| |
| .firstline { |
margin-left: 2em;
| } |
| |
| .method { |
| margin-top: 1em; |
| border: solid 1px #CCC; |
| padding: 1em; |
| background: #EEE; |
| } |
| |
| .details { |
| font-weight: bold; |
| font-size: 14px; |
| } |
| |
| </style> |
| |
| <h1><a href="ml_v1beta1.html">Google Cloud Machine Learning Engine</a> . <a href="ml_v1beta1.projects.html">projects</a> . <a href="ml_v1beta1.projects.models.html">models</a></h1> |
| <h2>Instance Methods</h2> |
| <p class="toc_element"> |
| <code><a href="ml_v1beta1.projects.models.versions.html">versions()</a></code> |
| </p> |
| <p class="firstline">Returns the versions Resource.</p> |
| |
| <p class="toc_element"> |
| <code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Creates a model which will later contain one or more versions.</p> |
| <p class="toc_element"> |
| <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p> |
| <p class="firstline">Deletes a model.</p> |
| <p class="toc_element"> |
| <code><a href="#get">get(name, x__xgafv=None)</a></code></p> |
<p class="firstline">Gets information about a model, including its name, the description (if set), and the default version.</p>
| <p class="toc_element"> |
| <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None)</a></code></p> |
| <p class="firstline">Lists the models in a project.</p> |
| <p class="toc_element"> |
| <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p> |
| <p class="firstline">Retrieves the next page of results.</p> |
| <h3>Method Details</h3> |
| <div class="method"> |
| <code class="details" id="create">create(parent, body, x__xgafv=None)</code> |
| <pre>Creates a model which will later contain one or more versions. |
| |
| You must add at least one version before you can request predictions from |
| the model. Add versions by calling |
| [projects.models.versions.create](/ml-engine/reference/rest/v1beta1/projects.models.versions/create). |
| |
| Args: |
| parent: string, Required. The project name. |
| |
| Authorization: requires `Editor` role on the specified project. (required) |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # Represents a machine learning solution. |
| # |
| # A model can have multiple versions, each of which is a deployed, trained |
| # model ready to receive prediction requests. The model itself is just a |
| # container. |
| "regions": [ # Optional. The list of regions where the model is going to be deployed. |
| # Currently only one region per model is supported. |
| # Defaults to 'us-central1' if nothing is set. |
| # Note: |
| # * No matter where a model is deployed, it can always be accessed by |
| # users from anywhere, both for online and batch prediction. |
| # * The region for a batch prediction job is set by the region field when |
| # submitting the batch prediction job and does not take its value from |
| # this field. |
| "A String", |
| ], |
| "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to |
| # handle prediction requests that do not specify a version. |
| # |
| # You can change the default version by calling |
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1beta1/projects.models.versions/setDefault).
| # |
| # Each version is a trained model deployed in the cloud, ready to handle |
| # prediction requests. A model can have multiple versions. You can get |
| # information about all of the versions of a given model by calling |
| # [projects.models.versions.list](/ml-engine/reference/rest/v1beta1/projects.models.versions/list). |
| "description": "A String", # Optional. The description specified for the version when it was created. |
| "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment. |
| # If not set, Google Cloud ML will choose a version. |
| "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the |
| # model. You should generally use `automatic_scaling` with an appropriate |
| # `min_nodes` instead, but this option is available if you want predictable |
| # billing. Beware that latency and error rates will increase if the |
# traffic exceeds the capacity of the system to serve it based on
| # the selected number of nodes. |
| "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, |
| # starting from the time the model is deployed, so the cost of operating |
| # this model will be proportional to `nodes` * number of hours since |
| # last billing cycle. |
| }, |
| "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to |
| # create the version. See the |
| # [overview of model |
| # deployment](/ml-engine/docs/concepts/deployment-overview) for more |
# information.
| # |
| # When passing Version to |
| # [projects.models.versions.create](/ml-engine/reference/rest/v1beta1/projects.models.versions/create) |
| # the model service uses the specified location as the source of the model. |
| # Once deployed, the model version is hosted by the prediction service, so |
| # this location is useful only as a historical record. |
| # The total number of model files can't exceed 1000. |
| "lastUseTime": "A String", # Output only. The time the version was last used for prediction. |
| "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in |
| # response to increases and decreases in traffic. Care should be |
| # taken to ramp up traffic according to the model's ability to scale |
| # or you will start seeing increases in latency and 429 response codes. |
| "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These |
| # nodes are always up, starting from the time the model is deployed, so the |
| # cost of operating this model will be at least |
| # `rate` * `min_nodes` * number of hours since last billing cycle, |
| # where `rate` is the cost per node-hour as documented in |
| # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing), |
| # even if no predictions are performed. There is additional cost for each |
| # prediction performed. |
| # |
| # Unlike manual scaling, if the load gets too heavy for the nodes |
| # that are up, the service will automatically add nodes to handle the |
| # increased load as well as scale back as traffic drops, always maintaining |
| # at least `min_nodes`. You will be charged for the time in which additional |
| # nodes are used. |
| # |
| # If not specified, `min_nodes` defaults to 0, in which case, when traffic |
| # to a model stops (and after a cool-down period), nodes will be shut down |
| # and no charges will be incurred until traffic to the model resumes. |
| }, |
| "createTime": "A String", # Output only. The time the version was created. |
| "isDefault": True or False, # Output only. If true, this version will be used to handle prediction |
| # requests that do not specify a version. |
| # |
| # You can change the default version by calling |
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1beta1/projects.models.versions/setDefault).
"name": "A String", # Required. The name specified for the version when it was created.
| # |
| # The version name must be unique within the model it is created in. |
| }, |
| "name": "A String", # Required. The name specified for the model when it was created. |
| # |
| # The model name must be unique within the project it is created in. |
"onlinePredictionLogging": True or False, # Optional. If true, enables Stackdriver Logging for online prediction.
| # Default is false. |
| "description": "A String", # Optional. The description specified for the model when it was created. |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Represents a machine learning solution. |
| # |
| # A model can have multiple versions, each of which is a deployed, trained |
| # model ready to receive prediction requests. The model itself is just a |
| # container. |
| "regions": [ # Optional. The list of regions where the model is going to be deployed. |
| # Currently only one region per model is supported. |
| # Defaults to 'us-central1' if nothing is set. |
| # Note: |
| # * No matter where a model is deployed, it can always be accessed by |
| # users from anywhere, both for online and batch prediction. |
| # * The region for a batch prediction job is set by the region field when |
| # submitting the batch prediction job and does not take its value from |
| # this field. |
| "A String", |
| ], |
| "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to |
| # handle prediction requests that do not specify a version. |
| # |
| # You can change the default version by calling |
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1beta1/projects.models.versions/setDefault).
| # |
| # Each version is a trained model deployed in the cloud, ready to handle |
| # prediction requests. A model can have multiple versions. You can get |
| # information about all of the versions of a given model by calling |
| # [projects.models.versions.list](/ml-engine/reference/rest/v1beta1/projects.models.versions/list). |
| "description": "A String", # Optional. The description specified for the version when it was created. |
| "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment. |
| # If not set, Google Cloud ML will choose a version. |
| "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the |
| # model. You should generally use `automatic_scaling` with an appropriate |
| # `min_nodes` instead, but this option is available if you want predictable |
| # billing. Beware that latency and error rates will increase if the |
# traffic exceeds the capacity of the system to serve it based on
| # the selected number of nodes. |
| "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, |
| # starting from the time the model is deployed, so the cost of operating |
| # this model will be proportional to `nodes` * number of hours since |
| # last billing cycle. |
| }, |
| "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to |
| # create the version. See the |
| # [overview of model |
| # deployment](/ml-engine/docs/concepts/deployment-overview) for more |
# information.
| # |
| # When passing Version to |
| # [projects.models.versions.create](/ml-engine/reference/rest/v1beta1/projects.models.versions/create) |
| # the model service uses the specified location as the source of the model. |
| # Once deployed, the model version is hosted by the prediction service, so |
| # this location is useful only as a historical record. |
| # The total number of model files can't exceed 1000. |
| "lastUseTime": "A String", # Output only. The time the version was last used for prediction. |
| "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in |
| # response to increases and decreases in traffic. Care should be |
| # taken to ramp up traffic according to the model's ability to scale |
| # or you will start seeing increases in latency and 429 response codes. |
| "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These |
| # nodes are always up, starting from the time the model is deployed, so the |
| # cost of operating this model will be at least |
| # `rate` * `min_nodes` * number of hours since last billing cycle, |
| # where `rate` is the cost per node-hour as documented in |
| # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing), |
| # even if no predictions are performed. There is additional cost for each |
| # prediction performed. |
| # |
| # Unlike manual scaling, if the load gets too heavy for the nodes |
| # that are up, the service will automatically add nodes to handle the |
| # increased load as well as scale back as traffic drops, always maintaining |
| # at least `min_nodes`. You will be charged for the time in which additional |
| # nodes are used. |
| # |
| # If not specified, `min_nodes` defaults to 0, in which case, when traffic |
| # to a model stops (and after a cool-down period), nodes will be shut down |
| # and no charges will be incurred until traffic to the model resumes. |
| }, |
| "createTime": "A String", # Output only. The time the version was created. |
| "isDefault": True or False, # Output only. If true, this version will be used to handle prediction |
| # requests that do not specify a version. |
| # |
| # You can change the default version by calling |
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1beta1/projects.models.versions/setDefault).
"name": "A String", # Required. The name specified for the version when it was created.
| # |
| # The version name must be unique within the model it is created in. |
| }, |
| "name": "A String", # Required. The name specified for the model when it was created. |
| # |
| # The model name must be unique within the project it is created in. |
"onlinePredictionLogging": True or False, # Optional. If true, enables Stackdriver Logging for online prediction.
| # Default is false. |
| "description": "A String", # Optional. The description specified for the model when it was created. |
| }</pre> |
| </div> |
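A minimal sketch of calling <code>create</code> from Python. The helper below builds the request body from the Model schema shown above; the project id "my-project" and model name "census" are placeholders, and the commented-out discovery-client call assumes the standard google-api-python-client usage pattern with credentials available.

```python
def model_create_body(name, region="us-central1", description=None,
                      online_logging=False):
    """Return a Model resource dict suitable for the create() request body."""
    body = {
        "name": name,                          # must be unique within the project
        "regions": [region],                   # only one region is supported
        "onlinePredictionLogging": online_logging,
    }
    if description is not None:
        body["description"] = description      # optional field, omitted if unset
    return body

# Usage with the discovery client (requires credentials; not executed here):
# from googleapiclient import discovery
# ml = discovery.build("ml", "v1beta1")
# ml.projects().models().create(
#     parent="projects/my-project",
#     body=model_create_body("census", description="Census income model"),
# ).execute()
```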
| |
| <div class="method"> |
| <code class="details" id="delete">delete(name, x__xgafv=None)</code> |
| <pre>Deletes a model. |
| |
| You can only delete a model if there are no versions in it. You can delete |
| versions by calling |
| [projects.models.versions.delete](/ml-engine/reference/rest/v1beta1/projects.models.versions/delete). |
| |
| Args: |
| name: string, Required. The name of the model. |
| |
| Authorization: requires `Editor` role on the parent project. (required) |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # This resource represents a long-running operation that is the result of a |
| # network API call. |
| "metadata": { # Service-specific metadata associated with the operation. It typically |
| # contains progress information and common metadata such as create time. |
| # Some services might not provide such metadata. Any method that returns a |
| # long-running operation should document the metadata type, if any. |
| "a_key": "", # Properties of the object. Contains field @type with type URL. |
| }, |
| "error": { # The `Status` type defines a logical error model that is suitable for different # The error result of the operation in case of failure or cancellation. |
| # programming environments, including REST APIs and RPC APIs. It is used by |
| # [gRPC](https://github.com/grpc). The error model is designed to be: |
| # |
| # - Simple to use and understand for most users |
| # - Flexible enough to meet unexpected needs |
| # |
| # # Overview |
| # |
| # The `Status` message contains three pieces of data: error code, error message, |
| # and error details. The error code should be an enum value of |
| # google.rpc.Code, but it may accept additional error codes if needed. The |
| # error message should be a developer-facing English message that helps |
| # developers *understand* and *resolve* the error. If a localized user-facing |
| # error message is needed, put the localized message in the error details or |
| # localize it in the client. The optional error details may contain arbitrary |
| # information about the error. There is a predefined set of error detail types |
| # in the package `google.rpc` that can be used for common error conditions. |
| # |
| # # Language mapping |
| # |
| # The `Status` message is the logical representation of the error model, but it |
| # is not necessarily the actual wire format. When the `Status` message is |
| # exposed in different client libraries and different wire protocols, it can be |
| # mapped differently. For example, it will likely be mapped to some exceptions |
| # in Java, but more likely mapped to some error codes in C. |
| # |
| # # Other uses |
| # |
| # The error model and the `Status` message can be used in a variety of |
| # environments, either with or without APIs, to provide a |
| # consistent developer experience across different environments. |
| # |
| # Example uses of this error model include: |
| # |
| # - Partial errors. If a service needs to return partial errors to the client, |
| # it may embed the `Status` in the normal response to indicate the partial |
| # errors. |
| # |
| # - Workflow errors. A typical workflow has multiple steps. Each step may |
| # have a `Status` message for error reporting. |
| # |
| # - Batch operations. If a client uses batch request and batch response, the |
| # `Status` message should be used directly inside batch response, one for |
| # each error sub-response. |
| # |
| # - Asynchronous operations. If an API call embeds asynchronous operation |
| # results in its response, the status of those operations should be |
| # represented directly using the `Status` message. |
| # |
| # - Logging. If some API errors are stored in logs, the message `Status` could |
| # be used directly after any stripping needed for security/privacy reasons. |
| "message": "A String", # A developer-facing error message, which should be in English. Any |
| # user-facing error message should be localized and sent in the |
| # google.rpc.Status.details field, or localized by the client. |
| "code": 42, # The status code, which should be an enum value of google.rpc.Code. |
| "details": [ # A list of messages that carry the error details. There will be a |
| # common set of message types for APIs to use. |
| { |
| "a_key": "", # Properties of the object. Contains field @type with type URL. |
| }, |
| ], |
| }, |
| "done": True or False, # If the value is `false`, it means the operation is still in progress. |
| # If true, the operation is completed, and either `error` or `response` is |
| # available. |
| "response": { # The normal response of the operation in case of success. If the original |
| # method returns no data on success, such as `Delete`, the response is |
| # `google.protobuf.Empty`. If the original method is standard |
| # `Get`/`Create`/`Update`, the response should be the resource. For other |
| # methods, the response should have the type `XxxResponse`, where `Xxx` |
| # is the original method name. For example, if the original method name |
| # is `TakeSnapshot()`, the inferred response type is |
| # `TakeSnapshotResponse`. |
| "a_key": "", # Properties of the object. Contains field @type with type URL. |
| }, |
| "name": "A String", # The server-assigned name, which is only unique within the same service that |
| # originally returns it. If you use the default HTTP mapping, the |
| # `name` should have the format of `operations/some/unique/name`. |
| }</pre> |
| </div> |
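Because <code>delete</code> returns a long-running Operation (the shape shown above), callers typically need to unpack it. A hedged sketch of a helper that does so; the field names follow the Operation schema above, but the error-handling policy (raising <code>RuntimeError</code>) is an illustrative choice, not part of the API.

```python
def operation_result(op):
    """Return the Operation's response dict, or raise if it failed or is
    still in progress (sketch; field names per the Operation schema)."""
    if not op.get("done", False):
        # Caller should poll (or use list_next-style retries) until done.
        raise RuntimeError("operation still in progress: %s" % op.get("name"))
    if "error" in op:
        err = op["error"]
        raise RuntimeError("operation failed (code %s): %s"
                           % (err.get("code"), err.get("message")))
    # For Delete, the response is google.protobuf.Empty, i.e. an empty dict.
    return op.get("response", {})
```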
| |
| <div class="method"> |
| <code class="details" id="get">get(name, x__xgafv=None)</code> |
| <pre>Gets information about a model, including its name, the description (if |
| set), and the default version (if at least one version of the model has |
| been deployed). |
| |
| Args: |
| name: string, Required. The name of the model. |
| |
| Authorization: requires `Viewer` role on the parent project. (required) |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # Represents a machine learning solution. |
| # |
| # A model can have multiple versions, each of which is a deployed, trained |
| # model ready to receive prediction requests. The model itself is just a |
| # container. |
| "regions": [ # Optional. The list of regions where the model is going to be deployed. |
| # Currently only one region per model is supported. |
| # Defaults to 'us-central1' if nothing is set. |
| # Note: |
| # * No matter where a model is deployed, it can always be accessed by |
| # users from anywhere, both for online and batch prediction. |
| # * The region for a batch prediction job is set by the region field when |
| # submitting the batch prediction job and does not take its value from |
| # this field. |
| "A String", |
| ], |
| "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to |
| # handle prediction requests that do not specify a version. |
| # |
| # You can change the default version by calling |
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1beta1/projects.models.versions/setDefault).
| # |
| # Each version is a trained model deployed in the cloud, ready to handle |
| # prediction requests. A model can have multiple versions. You can get |
| # information about all of the versions of a given model by calling |
| # [projects.models.versions.list](/ml-engine/reference/rest/v1beta1/projects.models.versions/list). |
| "description": "A String", # Optional. The description specified for the version when it was created. |
| "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment. |
| # If not set, Google Cloud ML will choose a version. |
| "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the |
| # model. You should generally use `automatic_scaling` with an appropriate |
| # `min_nodes` instead, but this option is available if you want predictable |
| # billing. Beware that latency and error rates will increase if the |
# traffic exceeds the capacity of the system to serve it based on
| # the selected number of nodes. |
| "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, |
| # starting from the time the model is deployed, so the cost of operating |
| # this model will be proportional to `nodes` * number of hours since |
| # last billing cycle. |
| }, |
| "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to |
| # create the version. See the |
| # [overview of model |
| # deployment](/ml-engine/docs/concepts/deployment-overview) for more |
# information.
| # |
| # When passing Version to |
| # [projects.models.versions.create](/ml-engine/reference/rest/v1beta1/projects.models.versions/create) |
| # the model service uses the specified location as the source of the model. |
| # Once deployed, the model version is hosted by the prediction service, so |
| # this location is useful only as a historical record. |
| # The total number of model files can't exceed 1000. |
| "lastUseTime": "A String", # Output only. The time the version was last used for prediction. |
| "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in |
| # response to increases and decreases in traffic. Care should be |
| # taken to ramp up traffic according to the model's ability to scale |
| # or you will start seeing increases in latency and 429 response codes. |
| "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These |
| # nodes are always up, starting from the time the model is deployed, so the |
| # cost of operating this model will be at least |
| # `rate` * `min_nodes` * number of hours since last billing cycle, |
| # where `rate` is the cost per node-hour as documented in |
| # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing), |
| # even if no predictions are performed. There is additional cost for each |
| # prediction performed. |
| # |
| # Unlike manual scaling, if the load gets too heavy for the nodes |
| # that are up, the service will automatically add nodes to handle the |
| # increased load as well as scale back as traffic drops, always maintaining |
| # at least `min_nodes`. You will be charged for the time in which additional |
| # nodes are used. |
| # |
| # If not specified, `min_nodes` defaults to 0, in which case, when traffic |
| # to a model stops (and after a cool-down period), nodes will be shut down |
| # and no charges will be incurred until traffic to the model resumes. |
| }, |
| "createTime": "A String", # Output only. The time the version was created. |
| "isDefault": True or False, # Output only. If true, this version will be used to handle prediction |
| # requests that do not specify a version. |
| # |
| # You can change the default version by calling |
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1beta1/projects.models.versions/setDefault).
"name": "A String", # Required. The name specified for the version when it was created.
| # |
| # The version name must be unique within the model it is created in. |
| }, |
| "name": "A String", # Required. The name specified for the model when it was created. |
| # |
| # The model name must be unique within the project it is created in. |
"onlinePredictionLogging": True or False, # Optional. If true, enables Stackdriver Logging for online prediction.
| # Default is false. |
| "description": "A String", # Optional. The description specified for the model when it was created. |
| }</pre> |
| </div> |
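Since <code>get</code> returns the Model resource shown above, and <code>defaultVersion</code> is only populated once at least one version has been deployed, a small helper can extract the default version safely. A sketch assuming only the field names in the resource schema above:

```python
def default_version_name(model):
    """Return the model's default version name, or None if no version
    has been deployed yet (defaultVersion is output-only and may be absent)."""
    version = model.get("defaultVersion")
    return version.get("name") if version else None
```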
| |
| <div class="method"> |
| <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None)</code> |
| <pre>Lists the models in a project. |
| |
| Each project can contain multiple models, and each model can have multiple |
| versions. |
| |
| Args: |
| parent: string, Required. The name of the project whose models are to be listed. |
| |
| Authorization: requires `Viewer` role on the specified project. (required) |
| pageToken: string, Optional. A page token to request the next page of results. |
| |
| You get the token from the `next_page_token` field of the response from |
| the previous call. |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| pageSize: integer, Optional. The number of models to retrieve per "page" of results. If there |
| are more remaining results than this number, the response message will |
| contain a valid value in the `next_page_token` field. |
| |
| The default value is 20, and the maximum page size is 100. |
| |
| Returns: |
| An object of the form: |
| |
| { # Response message for the ListModels method. |
| "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a |
| # subsequent call. |
| "models": [ # The list of models. |
| { # Represents a machine learning solution. |
| # |
| # A model can have multiple versions, each of which is a deployed, trained |
| # model ready to receive prediction requests. The model itself is just a |
| # container. |
| "regions": [ # Optional. The list of regions where the model is going to be deployed. |
| # Currently only one region per model is supported. |
| # Defaults to 'us-central1' if nothing is set. |
| # Note: |
| # * No matter where a model is deployed, it can always be accessed by |
| # users from anywhere, both for online and batch prediction. |
| # * The region for a batch prediction job is set by the region field when |
| # submitting the batch prediction job and does not take its value from |
| # this field. |
| "A String", |
| ], |
| "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to |
| # handle prediction requests that do not specify a version. |
| # |
| # You can change the default version by calling |
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1beta1/projects.models.versions/setDefault).
| # |
| # Each version is a trained model deployed in the cloud, ready to handle |
| # prediction requests. A model can have multiple versions. You can get |
| # information about all of the versions of a given model by calling |
| # [projects.models.versions.list](/ml-engine/reference/rest/v1beta1/projects.models.versions/list). |
| "description": "A String", # Optional. The description specified for the version when it was created. |
| "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment. |
| # If not set, Google Cloud ML will choose a version. |
| "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the |
| # model. You should generally use `automatic_scaling` with an appropriate |
| # `min_nodes` instead, but this option is available if you want predictable |
| # billing. Beware that latency and error rates will increase if the |
# traffic exceeds the capacity of the system to serve it based on
| # the selected number of nodes. |
| "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, |
| # starting from the time the model is deployed, so the cost of operating |
| # this model will be proportional to `nodes` * number of hours since |
| # last billing cycle. |
| }, |
| "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to |
| # create the version. See the |
| # [overview of model |
| # deployment](/ml-engine/docs/concepts/deployment-overview) for more |
# information.
| # |
| # When passing Version to |
| # [projects.models.versions.create](/ml-engine/reference/rest/v1beta1/projects.models.versions/create) |
| # the model service uses the specified location as the source of the model. |
| # Once deployed, the model version is hosted by the prediction service, so |
| # this location is useful only as a historical record. |
| # The total number of model files can't exceed 1000. |
| "lastUseTime": "A String", # Output only. The time the version was last used for prediction. |
| "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in |
| # response to increases and decreases in traffic. Care should be |
| # taken to ramp up traffic according to the model's ability to scale |
| # or you will start seeing increases in latency and 429 response codes. |
| "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These |
| # nodes are always up, starting from the time the model is deployed, so the |
| # cost of operating this model will be at least |
| # `rate` * `min_nodes` * number of hours since last billing cycle, |
| # where `rate` is the cost per node-hour as documented in |
| # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing), |
| # even if no predictions are performed. There is additional cost for each |
| # prediction performed. |
| # |
| # Unlike manual scaling, if the load gets too heavy for the nodes |
| # that are up, the service will automatically add nodes to handle the |
| # increased load as well as scale back as traffic drops, always maintaining |
| # at least `min_nodes`. You will be charged for the time in which additional |
| # nodes are used. |
| # |
| # If not specified, `min_nodes` defaults to 0, in which case, when traffic |
| # to a model stops (and after a cool-down period), nodes will be shut down |
| # and no charges will be incurred until traffic to the model resumes. |
| }, |
| "createTime": "A String", # Output only. The time the version was created. |
| "isDefault": True or False, # Output only. If true, this version will be used to handle prediction |
| # requests that do not specify a version. |
| # |
| # You can change the default version by calling |
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1beta1/projects.models.versions/setDefault).
"name": "A String", # Required. The name specified for the version when it was created.
| # |
| # The version name must be unique within the model it is created in. |
| }, |
| "name": "A String", # Required. The name specified for the model when it was created. |
| # |
| # The model name must be unique within the project it is created in. |
"onlinePredictionLogging": True or False, # Optional. If true, enables Stackdriver Logging for online prediction.
| # Default is false. |
| "description": "A String", # Optional. The description specified for the model when it was created. |
| }, |
| ], |
| }</pre> |
| </div> |
| |
| <div class="method"> |
| <code class="details" id="list_next">list_next(previous_request, previous_response)</code> |
| <pre>Retrieves the next page of results. |
| |
| Args: |
| previous_request: The request for the previous page. (required) |
| previous_response: The response from the request for the previous page. (required) |
| |
| Returns: |
| A request object that you can call 'execute()' on to request the next |
| page. Returns None if there are no more items in the collection. |
| </pre> |
| </div> |
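The <code>list</code>/<code>list_next</code> pair above implements standard page-token pagination. A sketch of a generator that walks every page; <code>models_api</code> is assumed to be the collection object from the discovery client (e.g. <code>ml.projects().models()</code>), whose request objects expose <code>execute()</code>.

```python
def iter_models(models_api, parent, page_size=20):
    """Yield every model in the project, following pages until
    list_next returns None (i.e. no next_page_token in the response)."""
    request = models_api.list(parent=parent, pageSize=page_size)
    while request is not None:
        response = request.execute()
        for model in response.get("models", []):
            yield model
        # list_next builds the next request from the previous request/response
        # pair, or returns None when the collection is exhausted.
        request = models_api.list_next(request, response)
```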
| |
| </body></html> |