<html><body>
<style>
body, h1, h2, h3, div, span, p, pre, a {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}
body {
font-size: 13px;
padding: 1em;
}
h1 {
font-size: 26px;
margin-bottom: 1em;
}
h2 {
font-size: 24px;
margin-bottom: 1em;
}
h3 {
font-size: 20px;
margin-bottom: 1em;
margin-top: 1em;
}
pre, code {
line-height: 1.5;
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}
pre {
margin-top: 0.5em;
}
h1, h2, h3, p {
font-family: Arial, sans-serif;
}
h1, h2, h3 {
border-bottom: solid #CCC 1px;
}
.toc_element {
margin-top: 0.5em;
}
.firstline {
margin-left: 2em;
}
.method {
margin-top: 1em;
border: solid 1px #CCC;
padding: 1em;
background: #EEE;
}
.details {
font-weight: bold;
font-size: 14px;
}
</style>
<h1><a href="checks_v1alpha.html">Checks API</a> . <a href="checks_v1alpha.aisafety.html">aisafety</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="#classifyContent">classifyContent(body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Analyze a piece of content with the provided set of policies.</p>
<p class="toc_element">
<code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="classifyContent">classifyContent(body=None, x__xgafv=None)</code>
<pre>Analyze a piece of content with the provided set of policies.
Args:
body: object, The request body.
The object takes the form of:
{ # Request proto for ClassifyContent RPC.
&quot;classifierVersion&quot;: &quot;A String&quot;, # Optional. Version of the classifier to use. If not specified, the latest version will be used.
&quot;context&quot;: { # Context about the input that will be used to help on the classification. # Optional. Context about the input that will be used to help on the classification.
&quot;prompt&quot;: &quot;A String&quot;, # Optional. Prompt that generated the model response.
},
&quot;input&quot;: { # Content to be classified. # Required. Content to be classified.
&quot;textInput&quot;: { # Text input to be classified. # Content in text format.
&quot;content&quot;: &quot;A String&quot;, # Actual piece of text to be classified.
&quot;languageCode&quot;: &quot;A String&quot;, # Optional. Language of the text in ISO 639-1 format. If the language is invalid or not specified, the system will try to detect it.
},
},
&quot;policies&quot;: [ # Required. List of policies to classify against.
{ # List of policies to classify against.
&quot;policyType&quot;: &quot;A String&quot;, # Required. Type of the policy.
&quot;threshold&quot;: 3.14, # Optional. Score threshold to use when deciding if the content is violative or non-violative. If not specified, the default 0.5 threshold for the policy will be used.
},
],
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Response proto for ClassifyContent RPC.
&quot;policyResults&quot;: [ # Results of the classification for each policy.
{ # Result for one policy against the corresponding input.
&quot;policyType&quot;: &quot;A String&quot;, # Type of the policy.
&quot;score&quot;: 3.14, # Final score for the results of this policy.
&quot;violationResult&quot;: &quot;A String&quot;, # Result of the classification for the policy.
},
],
}</pre>
</div>
<div class="method">
<code class="details" id="close">close()</code>
<pre>Close httplib2 connections.</pre>
</div>
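<p>The schema above can be exercised from Python with the google-api-python-client library. The sketch below assembles a ClassifyContent request body matching the documented fields; the policy type <code>"HARASSMENT"</code>, the threshold value, and the sample text are illustrative placeholders, not values confirmed by this reference, and the final API call is shown in comments because it requires configured credentials.</p>

```python
# Build a request body for aisafety.classifyContent() following the
# documented ClassifyContent schema. Optional fields are included only
# when a value is supplied, matching the "Optional." annotations above.
def make_classify_request(text, policy_types, language_code=None, threshold=None):
    """Assemble the request body dict for aisafety.classifyContent()."""
    text_input = {"content": text}
    if language_code is not None:
        # ISO 639-1 code; if omitted, the service tries to detect the language.
        text_input["languageCode"] = language_code
    policies = []
    for policy_type in policy_types:
        policy = {"policyType": policy_type}
        if threshold is not None:
            # Score threshold for deciding violative vs. non-violative;
            # the service defaults to 0.5 per policy when unset.
            policy["threshold"] = threshold
        policies.append(policy)
    return {"input": {"textInput": text_input}, "policies": policies}


body = make_classify_request(
    "Some user-generated text.", ["HARASSMENT"], threshold=0.7
)
print(body)

# Issuing the call requires application credentials; sketched, not executed:
# from googleapiclient.discovery import build
# service = build("checks", "v1alpha")
# response = service.aisafety().classifyContent(body=body).execute()
# for result in response.get("policyResults", []):
#     print(result["policyType"], result["score"], result["violationResult"])
```

<p>Each entry of <code>policyResults</code> in the response carries the policy type, its final score, and the classification outcome, as described in the return schema above.</p>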
</body></html>