<html><body>
<style>
body, h1, h2, h3, div, span, p, pre, a {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}
body {
font-size: 13px;
padding: 1em;
}
h1 {
font-size: 26px;
margin-bottom: 1em;
}
h2 {
font-size: 24px;
margin-bottom: 1em;
}
h3 {
font-size: 20px;
margin-bottom: 1em;
margin-top: 1em;
}
pre, code {
line-height: 1.5;
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}
pre {
margin-top: 0.5em;
}
h1, h2, h3, p {
font-family: Arial, sans-serif;
}
h1, h2, h3 {
border-bottom: solid #CCC 1px;
}
.toc_element {
margin-top: 0.5em;
}
.firstline {
margin-left: 2em;
}
.method {
margin-top: 1em;
border: solid 1px #CCC;
padding: 1em;
background: #EEE;
}
.details {
font-weight: bold;
font-size: 14px;
}
</style>
<h1><a href="ces_v1beta.html">Gemini Enterprise for Customer Experience API</a> . <a href="ces_v1beta.projects.html">projects</a> . <a href="ces_v1beta.projects.locations.html">locations</a> . <a href="ces_v1beta.projects.locations.apps.html">apps</a> . <a href="ces_v1beta.projects.locations.apps.scheduledEvaluationRuns.html">scheduledEvaluationRuns</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<p class="toc_element">
<code><a href="#create">create(parent, body=None, scheduledEvaluationRunId=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a scheduled evaluation run.</p>
<p class="toc_element">
<code><a href="#delete">delete(name, etag=None, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes a scheduled evaluation run.</p>
<p class="toc_element">
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets details of the specified scheduled evaluation run.</p>
<p class="toc_element">
<code><a href="#list">list(parent, filter=None, orderBy=None, pageSize=None, pageToken=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists all scheduled evaluation runs in the given app.</p>
<p class="toc_element">
<code><a href="#list_next">list_next()</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
<code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates a scheduled evaluation run.</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="close">close()</code>
<pre>Close httplib2 connections.</pre>
</div>
<div class="method">
<code class="details" id="create">create(parent, body=None, scheduledEvaluationRunId=None, x__xgafv=None)</code>
<pre>Creates a scheduled evaluation run.
Args:
parent: string, Required. The app to create the scheduled evaluation run for. Format: `projects/{project}/locations/{location}/apps/{app}` (required)
body: object, The request body.
The object takes the form of:
{ # Represents a scheduled evaluation run configuration.
  &quot;active&quot;: True or False, # Optional. Whether this configuration is active.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the scheduled evaluation run was created.
&quot;createdBy&quot;: &quot;A String&quot;, # Output only. The user who created the scheduled evaluation run.
&quot;description&quot;: &quot;A String&quot;, # Optional. User-defined description of the scheduled evaluation run.
&quot;displayName&quot;: &quot;A String&quot;, # Required. User-defined display name of the scheduled evaluation run config.
&quot;etag&quot;: &quot;A String&quot;, # Output only. Etag used to ensure the object hasn&#x27;t changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
&quot;lastCompletedRun&quot;: &quot;A String&quot;, # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
&quot;lastUpdatedBy&quot;: &quot;A String&quot;, # Output only. The user who last updated the evaluation.
&quot;name&quot;: &quot;A String&quot;, # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
  &quot;nextScheduledExecutionTime&quot;: &quot;A String&quot;, # Output only. The next time this scheduled evaluation run is due to execute.
  &quot;request&quot;: { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
&quot;app&quot;: &quot;A String&quot;, # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
&quot;appVersion&quot;: &quot;A String&quot;, # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
&quot;config&quot;: { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
&quot;evaluationChannel&quot;: &quot;A String&quot;, # Optional. The channel to evaluate.
&quot;inputAudioConfig&quot;: { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the input audio data.
        &quot;noiseSuppressionLevel&quot;: &quot;A String&quot;, # Optional. The noise suppression level to apply to the input audio. Available values are &quot;low&quot;, &quot;moderate&quot;, &quot;high&quot;, and &quot;very_high&quot;.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the input audio data.
},
&quot;outputAudioConfig&quot;: { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the output audio data.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the output audio data.
},
&quot;toolCallBehaviour&quot;: &quot;A String&quot;, # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
},
&quot;displayName&quot;: &quot;A String&quot;, # Optional. The display name of the evaluation run.
&quot;evaluationDataset&quot;: &quot;A String&quot;, # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
&quot;evaluations&quot;: [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
&quot;A String&quot;,
],
&quot;generateLatencyReport&quot;: True or False, # Optional. Whether to generate a latency report for the evaluation run.
&quot;goldenRunMethod&quot;: &quot;A String&quot;, # Optional. The method to run the evaluation if it is a golden evaluation. If not set, default to STABLE.
&quot;optimizationConfig&quot;: { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
&quot;assistantSession&quot;: &quot;A String&quot;, # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
&quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The error message if the optimization run failed.
&quot;generateLossReport&quot;: True or False, # Optional. Whether to generate a loss report.
&quot;lossReport&quot;: { # Output only. The generated loss report.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;reportSummary&quot;: &quot;A String&quot;, # Output only. The summary of the loss report.
&quot;shouldSuggestFix&quot;: True or False, # Output only. Whether to suggest a fix for the losses.
&quot;status&quot;: &quot;A String&quot;, # Output only. The status of the optimization run.
},
&quot;personaRunConfigs&quot;: [ # Optional. The configuration to use for the run per persona.
{ # Configuration for running an evaluation for a specific persona.
&quot;persona&quot;: &quot;A String&quot;, # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
&quot;taskCount&quot;: 42, # Optional. The number of tasks to run for the persona.
},
],
&quot;runCount&quot;: 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
&quot;scheduledEvaluationRun&quot;: &quot;A String&quot;, # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
},
  &quot;schedulingConfig&quot;: { # Evaluation scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
    &quot;daysOfWeek&quot;: [ # Optional. The days of the week to run the evaluation. Applicable only for the Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
      42,
    ],
    &quot;frequency&quot;: &quot;A String&quot;, # Required. The frequency with which to run the evaluation.
    &quot;startTime&quot;: &quot;A String&quot;, # Required. Timestamp when the evaluation should start.
  },
  &quot;totalExecutions&quot;: 42, # Output only. The total number of times this scheduled run has been executed.
&quot;updateTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the evaluation was last updated.
}
scheduledEvaluationRunId: string, Optional. The ID to use for the scheduled evaluation run, which will become the final component of the scheduled evaluation run&#x27;s resource name. If not provided, a unique ID will be automatically assigned.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Represents a scheduled evaluation run configuration.
  &quot;active&quot;: True or False, # Optional. Whether this configuration is active.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the scheduled evaluation run was created.
&quot;createdBy&quot;: &quot;A String&quot;, # Output only. The user who created the scheduled evaluation run.
&quot;description&quot;: &quot;A String&quot;, # Optional. User-defined description of the scheduled evaluation run.
&quot;displayName&quot;: &quot;A String&quot;, # Required. User-defined display name of the scheduled evaluation run config.
&quot;etag&quot;: &quot;A String&quot;, # Output only. Etag used to ensure the object hasn&#x27;t changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
&quot;lastCompletedRun&quot;: &quot;A String&quot;, # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
&quot;lastUpdatedBy&quot;: &quot;A String&quot;, # Output only. The user who last updated the evaluation.
&quot;name&quot;: &quot;A String&quot;, # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
  &quot;nextScheduledExecutionTime&quot;: &quot;A String&quot;, # Output only. The next time this scheduled evaluation run is due to execute.
  &quot;request&quot;: { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
&quot;app&quot;: &quot;A String&quot;, # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
&quot;appVersion&quot;: &quot;A String&quot;, # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
&quot;config&quot;: { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
&quot;evaluationChannel&quot;: &quot;A String&quot;, # Optional. The channel to evaluate.
&quot;inputAudioConfig&quot;: { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the input audio data.
        &quot;noiseSuppressionLevel&quot;: &quot;A String&quot;, # Optional. The noise suppression level to apply to the input audio. Available values are &quot;low&quot;, &quot;moderate&quot;, &quot;high&quot;, and &quot;very_high&quot;.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the input audio data.
},
&quot;outputAudioConfig&quot;: { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the output audio data.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the output audio data.
},
&quot;toolCallBehaviour&quot;: &quot;A String&quot;, # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
},
&quot;displayName&quot;: &quot;A String&quot;, # Optional. The display name of the evaluation run.
&quot;evaluationDataset&quot;: &quot;A String&quot;, # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
&quot;evaluations&quot;: [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
&quot;A String&quot;,
],
&quot;generateLatencyReport&quot;: True or False, # Optional. Whether to generate a latency report for the evaluation run.
&quot;goldenRunMethod&quot;: &quot;A String&quot;, # Optional. The method to run the evaluation if it is a golden evaluation. If not set, default to STABLE.
&quot;optimizationConfig&quot;: { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
&quot;assistantSession&quot;: &quot;A String&quot;, # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
&quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The error message if the optimization run failed.
&quot;generateLossReport&quot;: True or False, # Optional. Whether to generate a loss report.
&quot;lossReport&quot;: { # Output only. The generated loss report.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;reportSummary&quot;: &quot;A String&quot;, # Output only. The summary of the loss report.
&quot;shouldSuggestFix&quot;: True or False, # Output only. Whether to suggest a fix for the losses.
&quot;status&quot;: &quot;A String&quot;, # Output only. The status of the optimization run.
},
&quot;personaRunConfigs&quot;: [ # Optional. The configuration to use for the run per persona.
{ # Configuration for running an evaluation for a specific persona.
&quot;persona&quot;: &quot;A String&quot;, # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
&quot;taskCount&quot;: 42, # Optional. The number of tasks to run for the persona.
},
],
&quot;runCount&quot;: 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
&quot;scheduledEvaluationRun&quot;: &quot;A String&quot;, # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
},
  &quot;schedulingConfig&quot;: { # Evaluation scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
    &quot;daysOfWeek&quot;: [ # Optional. The days of the week to run the evaluation. Applicable only for the Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
      42,
    ],
    &quot;frequency&quot;: &quot;A String&quot;, # Required. The frequency with which to run the evaluation.
    &quot;startTime&quot;: &quot;A String&quot;, # Required. Timestamp when the evaluation should start.
  },
  &quot;totalExecutions&quot;: 42, # Output only. The total number of times this scheduled run has been executed.
&quot;updateTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the evaluation was last updated.
}</pre>
</div>
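As a worked example, the create() call above can be driven through the google-api-python-client resource chain. This is a minimal sketch, not a definitive recipe: the project, location, app, and run IDs are placeholders, `DAILY` is an assumed frequency value, and constructing the `service` object (e.g. via `googleapiclient.discovery.build` with default credentials) is left to the caller.

```python
def create_scheduled_run(service, parent, run_id="nightly-eval"):
    """Create a scheduled evaluation run under the given app resource.

    `service` is assumed to be a built CES API client; `parent` is the app
    resource name, e.g. "projects/{project}/locations/{location}/apps/{app}".
    """
    body = {
        "displayName": "Nightly regression eval",  # Required display name.
        "request": {"app": parent},                # Required RunEvaluationRequest.
        "schedulingConfig": {                      # Required timing/frequency config.
            "frequency": "DAILY",                  # Hypothetical enum value.
            "startTime": "2025-01-01T00:00:00Z",
        },
    }
    return (
        service.projects().locations().apps().scheduledEvaluationRuns()
        .create(parent=parent, body=body, scheduledEvaluationRunId=run_id)
        .execute()
    )
```

If `scheduledEvaluationRunId` is omitted, the server assigns a unique ID, as noted in the method details above.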
<div class="method">
<code class="details" id="delete">delete(name, etag=None, x__xgafv=None)</code>
<pre>Deletes a scheduled evaluation run.
Args:
name: string, Required. The resource name of the scheduled evaluation run to delete. (required)
etag: string, Optional. The etag of the ScheduledEvaluationRun. If provided, it must match the server&#x27;s etag.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
}</pre>
</div>
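The optional `etag` parameter on delete() enables an optimistic-concurrency guard: read the run, then delete with the etag you read, so the delete fails if the resource changed in between. A hedged sketch, where `resource` stands in for `service.projects().locations().apps().scheduledEvaluationRuns()`:

```python
def delete_if_unchanged(resource, name):
    """Delete a scheduled evaluation run only if it has not changed.

    Reads the run's current etag, then passes it to delete(); per the method
    details above, a provided etag must match the server's etag.
    """
    run = resource.get(name=name).execute()
    return resource.delete(name=name, etag=run.get("etag")).execute()
```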
<div class="method">
<code class="details" id="get">get(name, x__xgafv=None)</code>
<pre>Gets details of the specified scheduled evaluation run.
Args:
name: string, Required. The resource name of the scheduled evaluation run to retrieve. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Represents a scheduled evaluation run configuration.
  &quot;active&quot;: True or False, # Optional. Whether this configuration is active.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the scheduled evaluation run was created.
&quot;createdBy&quot;: &quot;A String&quot;, # Output only. The user who created the scheduled evaluation run.
&quot;description&quot;: &quot;A String&quot;, # Optional. User-defined description of the scheduled evaluation run.
&quot;displayName&quot;: &quot;A String&quot;, # Required. User-defined display name of the scheduled evaluation run config.
&quot;etag&quot;: &quot;A String&quot;, # Output only. Etag used to ensure the object hasn&#x27;t changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
&quot;lastCompletedRun&quot;: &quot;A String&quot;, # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
&quot;lastUpdatedBy&quot;: &quot;A String&quot;, # Output only. The user who last updated the evaluation.
&quot;name&quot;: &quot;A String&quot;, # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
  &quot;nextScheduledExecutionTime&quot;: &quot;A String&quot;, # Output only. The next time this scheduled evaluation run is due to execute.
  &quot;request&quot;: { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
&quot;app&quot;: &quot;A String&quot;, # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
&quot;appVersion&quot;: &quot;A String&quot;, # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
&quot;config&quot;: { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
&quot;evaluationChannel&quot;: &quot;A String&quot;, # Optional. The channel to evaluate.
&quot;inputAudioConfig&quot;: { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the input audio data.
        &quot;noiseSuppressionLevel&quot;: &quot;A String&quot;, # Optional. The noise suppression level to apply to the input audio. Available values are &quot;low&quot;, &quot;moderate&quot;, &quot;high&quot;, and &quot;very_high&quot;.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the input audio data.
},
&quot;outputAudioConfig&quot;: { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the output audio data.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the output audio data.
},
&quot;toolCallBehaviour&quot;: &quot;A String&quot;, # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
},
&quot;displayName&quot;: &quot;A String&quot;, # Optional. The display name of the evaluation run.
&quot;evaluationDataset&quot;: &quot;A String&quot;, # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
&quot;evaluations&quot;: [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
&quot;A String&quot;,
],
&quot;generateLatencyReport&quot;: True or False, # Optional. Whether to generate a latency report for the evaluation run.
&quot;goldenRunMethod&quot;: &quot;A String&quot;, # Optional. The method to run the evaluation if it is a golden evaluation. If not set, default to STABLE.
&quot;optimizationConfig&quot;: { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
&quot;assistantSession&quot;: &quot;A String&quot;, # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
&quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The error message if the optimization run failed.
&quot;generateLossReport&quot;: True or False, # Optional. Whether to generate a loss report.
&quot;lossReport&quot;: { # Output only. The generated loss report.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;reportSummary&quot;: &quot;A String&quot;, # Output only. The summary of the loss report.
&quot;shouldSuggestFix&quot;: True or False, # Output only. Whether to suggest a fix for the losses.
&quot;status&quot;: &quot;A String&quot;, # Output only. The status of the optimization run.
},
&quot;personaRunConfigs&quot;: [ # Optional. The configuration to use for the run per persona.
{ # Configuration for running an evaluation for a specific persona.
&quot;persona&quot;: &quot;A String&quot;, # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
&quot;taskCount&quot;: 42, # Optional. The number of tasks to run for the persona.
},
],
&quot;runCount&quot;: 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
&quot;scheduledEvaluationRun&quot;: &quot;A String&quot;, # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
},
  &quot;schedulingConfig&quot;: { # Evaluation scheduling configuration details. # Required. Configuration for the timing and frequency with which to execute the evaluations.
    &quot;daysOfWeek&quot;: [ # Optional. The days of the week to run the evaluation. Applicable only for the Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
      42,
    ],
    &quot;frequency&quot;: &quot;A String&quot;, # Required. The frequency with which to run the evaluation.
    &quot;startTime&quot;: &quot;A String&quot;, # Required. Timestamp when the evaluation should start.
  },
  &quot;totalExecutions&quot;: 42, # Output only. The total number of times this scheduled run has been executed.
&quot;updateTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the evaluation was last updated.
}</pre>
</div>
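The list() and list_next() methods follow the standard google-api-python-client pagination pattern: execute a list request, consume the page, then call list_next() with the previous request and response to obtain the next request (or None when `nextPageToken` is absent). A minimal sketch, where `resource` stands in for `service.projects().locations().apps().scheduledEvaluationRuns()` and the IDs are placeholders:

```python
def iter_scheduled_runs(resource, parent, page_size=50):
    """Yield every scheduled evaluation run under an app, page by page."""
    request = resource.list(parent=parent, pageSize=page_size)
    while request is not None:
        response = request.execute()
        for run in response.get("scheduledEvaluationRuns", []):
            yield run
        # list_next returns None once there are no more pages.
        request = resource.list_next(request, response)
```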
<div class="method">
<code class="details" id="list">list(parent, filter=None, orderBy=None, pageSize=None, pageToken=None, x__xgafv=None)</code>
<pre>Lists all scheduled evaluation runs in the given app.
Args:
parent: string, Required. The resource name of the app to list scheduled evaluation runs from. (required)
filter: string, Optional. Filter to be applied when listing the scheduled evaluation runs. See https://google.aip.dev/160 for more details. Currently supports filtering by: * request.evaluations:evaluation_id * request.evaluation_dataset:evaluation_dataset_id
orderBy: string, Optional. Field to sort by. Supported fields are: &quot;name&quot; (ascending), &quot;create_time&quot; (descending), &quot;update_time&quot; (descending), &quot;next_scheduled_execution&quot; (ascending), and &quot;last_completed_run.create_time&quot; (descending). If not included, &quot;update_time&quot; will be the default. See https://google.aip.dev/132#ordering for more details.
pageSize: integer, Optional. Requested page size. Server may return fewer items than requested. If unspecified, server will pick an appropriate default.
pageToken: string, Optional. The next_page_token value returned from a previous list EvaluationService.ListScheduledEvaluationRuns call.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response message for EvaluationService.ListScheduledEvaluationRuns.
&quot;nextPageToken&quot;: &quot;A String&quot;, # A token that can be sent as ListScheduledEvaluationRunsRequest.page_token to retrieve the next page. Absence of this field indicates there are no subsequent pages.
&quot;scheduledEvaluationRuns&quot;: [ # The list of scheduled evaluation runs.
{ # Represents a scheduled evaluation run configuration.
      &quot;active&quot;: True or False, # Optional. Whether this configuration is active.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the scheduled evaluation run was created.
&quot;createdBy&quot;: &quot;A String&quot;, # Output only. The user who created the scheduled evaluation run.
&quot;description&quot;: &quot;A String&quot;, # Optional. User-defined description of the scheduled evaluation run.
&quot;displayName&quot;: &quot;A String&quot;, # Required. User-defined display name of the scheduled evaluation run config.
&quot;etag&quot;: &quot;A String&quot;, # Output only. Etag used to ensure the object hasn&#x27;t changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
&quot;lastCompletedRun&quot;: &quot;A String&quot;, # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
&quot;lastUpdatedBy&quot;: &quot;A String&quot;, # Output only. The user who last updated the evaluation.
&quot;name&quot;: &quot;A String&quot;, # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
      &quot;nextScheduledExecutionTime&quot;: &quot;A String&quot;, # Output only. The next time this scheduled evaluation run is due to execute.
      &quot;request&quot;: { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
&quot;app&quot;: &quot;A String&quot;, # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
&quot;appVersion&quot;: &quot;A String&quot;, # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
&quot;config&quot;: { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
&quot;evaluationChannel&quot;: &quot;A String&quot;, # Optional. The channel to evaluate.
&quot;inputAudioConfig&quot;: { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the input audio data.
            &quot;noiseSuppressionLevel&quot;: &quot;A String&quot;, # Optional. The noise suppression level to apply to the input audio. Available values are &quot;low&quot;, &quot;moderate&quot;, &quot;high&quot;, and &quot;very_high&quot;.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the input audio data.
},
&quot;outputAudioConfig&quot;: { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the output audio data.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the output audio data.
},
&quot;toolCallBehaviour&quot;: &quot;A String&quot;, # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
},
&quot;displayName&quot;: &quot;A String&quot;, # Optional. The display name of the evaluation run.
&quot;evaluationDataset&quot;: &quot;A String&quot;, # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
&quot;evaluations&quot;: [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
&quot;A String&quot;,
],
&quot;generateLatencyReport&quot;: True or False, # Optional. Whether to generate a latency report for the evaluation run.
&quot;goldenRunMethod&quot;: &quot;A String&quot;, # Optional. The method to run the evaluation if it is a golden evaluation. If not set, default to STABLE.
&quot;optimizationConfig&quot;: { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
&quot;assistantSession&quot;: &quot;A String&quot;, # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
&quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The error message if the optimization run failed.
&quot;generateLossReport&quot;: True or False, # Optional. Whether to generate a loss report.
&quot;lossReport&quot;: { # Output only. The generated loss report.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;reportSummary&quot;: &quot;A String&quot;, # Output only. The summary of the loss report.
&quot;shouldSuggestFix&quot;: True or False, # Output only. Whether to suggest a fix for the losses.
&quot;status&quot;: &quot;A String&quot;, # Output only. The status of the optimization run.
},
&quot;personaRunConfigs&quot;: [ # Optional. The configuration to use for the run per persona.
{ # Configuration for running an evaluation for a specific persona.
&quot;persona&quot;: &quot;A String&quot;, # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
&quot;taskCount&quot;: 42, # Optional. The number of tasks to run for the persona.
},
],
&quot;runCount&quot;: 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
&quot;scheduledEvaluationRun&quot;: &quot;A String&quot;, # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
},
&quot;schedulingConfig&quot;: { # Eval scheduling configuration details # Required. Configuration for the timing and frequency with which to execute the evaluations.
&quot;daysOfWeek&quot;: [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
42,
],
&quot;frequency&quot;: &quot;A String&quot;, # Required. The frequency with which to run the eval.
&quot;startTime&quot;: &quot;A String&quot;, # Required. Timestamp when the eval should start.
},
&quot;totalExecutions&quot;: 42, # Output only. The total number of times this run has been executed.
&quot;updateTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the evaluation was last updated.
},
],
}</pre>
</div>
<div class="method">
<code class="details" id="list_next">list_next()</code>
<pre>Retrieves the next page of results.
Args:
previous_request: The request for the previous page. (required)
previous_response: The response from the request for the previous page. (required)
Returns:
A request object that you can call &#x27;execute()&#x27; on to request the next
page. Returns None if there are no more items in the collection.
</pre>
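<p>For illustration, the following sketch pages through all scheduled evaluation runs with <code>list()</code> and <code>list_next()</code>. The project, location, and app IDs are hypothetical, credentials are assumed to be configured in the environment, and the name of the repeated field on the list response is assumed from the resource name:</p>
<pre>
from googleapiclient.discovery import build

# Build the client for this API (service name and version per this document).
ces = build(&#x27;ces&#x27;, &#x27;v1beta&#x27;)

runs = ces.projects().locations().apps().scheduledEvaluationRuns()
request = runs.list(parent=&#x27;projects/my-project/locations/us-central1/apps/my-app&#x27;)
while request is not None:
    response = request.execute()
    for run in response.get(&#x27;scheduledEvaluationRuns&#x27;, []):  # field name assumed
        print(run[&#x27;name&#x27;])
    # list_next returns None once there are no more pages in the collection.
    request = runs.list_next(previous_request=request, previous_response=response)
</pre>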
</div>
<div class="method">
<code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
<pre>Updates a scheduled evaluation run.
Args:
name: string, Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId} (required)
body: object, The request body.
The object takes the form of:
{ # Represents a scheduled evaluation run configuration.
&quot;active&quot;: True or False, # Optional. Whether this config is active.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the scheduled evaluation run was created.
&quot;createdBy&quot;: &quot;A String&quot;, # Output only. The user who created the scheduled evaluation run.
&quot;description&quot;: &quot;A String&quot;, # Optional. User-defined description of the scheduled evaluation run.
&quot;displayName&quot;: &quot;A String&quot;, # Required. User-defined display name of the scheduled evaluation run config.
&quot;etag&quot;: &quot;A String&quot;, # Output only. Etag used to ensure the object hasn&#x27;t changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
&quot;lastCompletedRun&quot;: &quot;A String&quot;, # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
&quot;lastUpdatedBy&quot;: &quot;A String&quot;, # Output only. The user who last updated the evaluation.
&quot;name&quot;: &quot;A String&quot;, # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
&quot;nextScheduledExecutionTime&quot;: &quot;A String&quot;, # Output only. The next time this is scheduled to execute.
&quot;request&quot;: { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
&quot;app&quot;: &quot;A String&quot;, # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
&quot;appVersion&quot;: &quot;A String&quot;, # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
&quot;config&quot;: { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
&quot;evaluationChannel&quot;: &quot;A String&quot;, # Optional. The channel to evaluate.
&quot;inputAudioConfig&quot;: { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the input audio data.
&quot;noiseSuppressionLevel&quot;: &quot;A String&quot;, # Optional. The noise suppression level to apply to the input audio. Available values are &quot;low&quot;, &quot;moderate&quot;, &quot;high&quot;, &quot;very_high&quot;.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the input audio data.
},
&quot;outputAudioConfig&quot;: { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the output audio data.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the output audio data.
},
&quot;toolCallBehaviour&quot;: &quot;A String&quot;, # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
},
&quot;displayName&quot;: &quot;A String&quot;, # Optional. The display name of the evaluation run.
&quot;evaluationDataset&quot;: &quot;A String&quot;, # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
&quot;evaluations&quot;: [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
&quot;A String&quot;,
],
&quot;generateLatencyReport&quot;: True or False, # Optional. Whether to generate a latency report for the evaluation run.
&quot;goldenRunMethod&quot;: &quot;A String&quot;, # Optional. The method to run the evaluation if it is a golden evaluation. If not set, defaults to STABLE.
&quot;optimizationConfig&quot;: { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
&quot;assistantSession&quot;: &quot;A String&quot;, # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
&quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The error message if the optimization run failed.
&quot;generateLossReport&quot;: True or False, # Optional. Whether to generate a loss report.
&quot;lossReport&quot;: { # Output only. The generated loss report.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;reportSummary&quot;: &quot;A String&quot;, # Output only. The summary of the loss report.
&quot;shouldSuggestFix&quot;: True or False, # Output only. Whether to suggest a fix for the losses.
&quot;status&quot;: &quot;A String&quot;, # Output only. The status of the optimization run.
},
&quot;personaRunConfigs&quot;: [ # Optional. The configuration to use for the run per persona.
{ # Configuration for running an evaluation for a specific persona.
&quot;persona&quot;: &quot;A String&quot;, # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
&quot;taskCount&quot;: 42, # Optional. The number of tasks to run for the persona.
},
],
&quot;runCount&quot;: 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
&quot;scheduledEvaluationRun&quot;: &quot;A String&quot;, # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
},
&quot;schedulingConfig&quot;: { # Eval scheduling configuration details # Required. Configuration for the timing and frequency with which to execute the evaluations.
&quot;daysOfWeek&quot;: [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
42,
],
&quot;frequency&quot;: &quot;A String&quot;, # Required. The frequency with which to run the eval.
&quot;startTime&quot;: &quot;A String&quot;, # Required. Timestamp when the eval should start.
},
&quot;totalExecutions&quot;: 42, # Output only. The total number of times this run has been executed.
&quot;updateTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the evaluation was last updated.
}
updateMask: string, Optional. Field mask is used to control which fields get updated. If the mask is not present, all fields will be updated.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Represents a scheduled evaluation run configuration.
&quot;active&quot;: True or False, # Optional. Whether this config is active.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the scheduled evaluation run was created.
&quot;createdBy&quot;: &quot;A String&quot;, # Output only. The user who created the scheduled evaluation run.
&quot;description&quot;: &quot;A String&quot;, # Optional. User-defined description of the scheduled evaluation run.
&quot;displayName&quot;: &quot;A String&quot;, # Required. User-defined display name of the scheduled evaluation run config.
&quot;etag&quot;: &quot;A String&quot;, # Output only. Etag used to ensure the object hasn&#x27;t changed during a read-modify-write operation. If the etag is empty, the update will overwrite any concurrent changes.
&quot;lastCompletedRun&quot;: &quot;A String&quot;, # Output only. The last successful EvaluationRun of this scheduled execution. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationRuns/{evaluationRun}`
&quot;lastUpdatedBy&quot;: &quot;A String&quot;, # Output only. The user who last updated the evaluation.
&quot;name&quot;: &quot;A String&quot;, # Identifier. The unique identifier of the scheduled evaluation run config. Format: projects/{projectId}/locations/{locationId}/apps/{appId}/scheduledEvaluationRuns/{scheduledEvaluationRunId}
&quot;nextScheduledExecutionTime&quot;: &quot;A String&quot;, # Output only. The next time this is scheduled to execute.
&quot;request&quot;: { # Request message for EvaluationService.RunEvaluation. # Required. The RunEvaluationRequest to schedule.
&quot;app&quot;: &quot;A String&quot;, # Required. The app to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}`
&quot;appVersion&quot;: &quot;A String&quot;, # Optional. The app version to evaluate. Format: `projects/{project}/locations/{location}/apps/{app}/versions/{version}`
&quot;config&quot;: { # EvaluationConfig configures settings for running the evaluation. # Optional. The configuration to use for the run.
&quot;evaluationChannel&quot;: &quot;A String&quot;, # Optional. The channel to evaluate.
&quot;inputAudioConfig&quot;: { # InputAudioConfig configures how the CES agent should interpret the incoming audio data. # Optional. Configuration for processing the input audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the input audio data.
&quot;noiseSuppressionLevel&quot;: &quot;A String&quot;, # Optional. The noise suppression level to apply to the input audio. Available values are &quot;low&quot;, &quot;moderate&quot;, &quot;high&quot;, &quot;very_high&quot;.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the input audio data.
},
&quot;outputAudioConfig&quot;: { # OutputAudioConfig configures how the CES agent should synthesize outgoing audio responses. # Optional. Configuration for generating the output audio.
&quot;audioEncoding&quot;: &quot;A String&quot;, # Required. The encoding of the output audio data.
&quot;sampleRateHertz&quot;: 42, # Required. The sample rate (in Hertz) of the output audio data.
},
&quot;toolCallBehaviour&quot;: &quot;A String&quot;, # Optional. Specifies whether the evaluation should use real tool calls or fake tools.
},
&quot;displayName&quot;: &quot;A String&quot;, # Optional. The display name of the evaluation run.
&quot;evaluationDataset&quot;: &quot;A String&quot;, # Optional. An evaluation dataset to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationDatasets/{evaluationDataset}`
&quot;evaluations&quot;: [ # Optional. List of evaluations to run. Format: `projects/{project}/locations/{location}/apps/{app}/evaluations/{evaluation}`
&quot;A String&quot;,
],
&quot;generateLatencyReport&quot;: True or False, # Optional. Whether to generate a latency report for the evaluation run.
&quot;goldenRunMethod&quot;: &quot;A String&quot;, # Optional. The method to run the evaluation if it is a golden evaluation. If not set, defaults to STABLE.
&quot;optimizationConfig&quot;: { # Configuration for running the optimization step after the evaluation run. # Optional. Configuration for running the optimization step after the evaluation run. If not set, the optimization step will not be run.
&quot;assistantSession&quot;: &quot;A String&quot;, # Output only. The assistant session to use for the optimization based on this evaluation run. Format: `projects/{project}/locations/{location}/apps/{app}/assistantSessions/{assistantSession}`
&quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The error message if the optimization run failed.
&quot;generateLossReport&quot;: True or False, # Optional. Whether to generate a loss report.
&quot;lossReport&quot;: { # Output only. The generated loss report.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;reportSummary&quot;: &quot;A String&quot;, # Output only. The summary of the loss report.
&quot;shouldSuggestFix&quot;: True or False, # Output only. Whether to suggest a fix for the losses.
&quot;status&quot;: &quot;A String&quot;, # Output only. The status of the optimization run.
},
&quot;personaRunConfigs&quot;: [ # Optional. The configuration to use for the run per persona.
{ # Configuration for running an evaluation for a specific persona.
&quot;persona&quot;: &quot;A String&quot;, # Optional. The persona to use for the evaluation. Format: `projects/{project}/locations/{location}/apps/{app}/evaluationPersonas/{evaluationPersona}`
&quot;taskCount&quot;: 42, # Optional. The number of tasks to run for the persona.
},
],
&quot;runCount&quot;: 42, # Optional. The number of times to run the evaluation. If not set, the default value is 1 per golden, and 5 per scenario.
&quot;scheduledEvaluationRun&quot;: &quot;A String&quot;, # Optional. The resource name of the `ScheduledEvaluationRun` that is triggering this evaluation run. If this field is set, the `scheduled_evaluation_run` field on the created `EvaluationRun` resource will be populated from this value. Format: `projects/{project}/locations/{location}/apps/{app}/scheduledEvaluationRuns/{scheduled_evaluation_run}`
},
&quot;schedulingConfig&quot;: { # Eval scheduling configuration details # Required. Configuration for the timing and frequency with which to execute the evaluations.
&quot;daysOfWeek&quot;: [ # Optional. The days of the week to run the eval. Applicable only for Weekly and Biweekly frequencies. 1 is Monday, 2 is Tuesday, ..., 7 is Sunday.
42,
],
&quot;frequency&quot;: &quot;A String&quot;, # Required. The frequency with which to run the eval.
&quot;startTime&quot;: &quot;A String&quot;, # Required. Timestamp when the eval should start.
},
&quot;totalExecutions&quot;: 42, # Output only. The total number of times this run has been executed.
&quot;updateTime&quot;: &quot;A String&quot;, # Output only. Timestamp when the evaluation was last updated.
}</pre>
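<p>For illustration, the following sketch deactivates a scheduled run by patching only the <code>active</code> field via <code>updateMask</code>. The project, location, app, and run IDs are hypothetical, and credentials are assumed to be configured in the environment:</p>
<pre>
from googleapiclient.discovery import build

# Build the client for this API (service name and version per this document).
ces = build(&#x27;ces&#x27;, &#x27;v1beta&#x27;)

name = (&#x27;projects/my-project/locations/us-central1/apps/my-app&#x27;
        &#x27;/scheduledEvaluationRuns/my-scheduled-run&#x27;)
response = ces.projects().locations().apps().scheduledEvaluationRuns().patch(
    name=name,
    body={&#x27;active&#x27;: False},
    updateMask=&#x27;active&#x27;,  # only &#x27;active&#x27; is updated; other fields are untouched
).execute()
</pre>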
</div>
</body></html>