| <html><body> |
| <style> |
| |
| body, h1, h2, h3, div, span, p, pre, a { |
| margin: 0; |
| padding: 0; |
| border: 0; |
| font-weight: inherit; |
| font-style: inherit; |
| font-size: 100%; |
| font-family: inherit; |
| vertical-align: baseline; |
| } |
| |
| body { |
| font-size: 13px; |
| padding: 1em; |
| } |
| |
| h1 { |
| font-size: 26px; |
| margin-bottom: 1em; |
| } |
| |
| h2 { |
| font-size: 24px; |
| margin-bottom: 1em; |
| } |
| |
| h3 { |
| font-size: 20px; |
| margin-bottom: 1em; |
| margin-top: 1em; |
| } |
| |
| pre, code { |
| line-height: 1.5; |
| font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace; |
| } |
| |
| pre { |
| margin-top: 0.5em; |
| } |
| |
| h1, h2, h3, p { |
font-family: Arial, sans-serif;
| } |
| |
| h1, h2, h3 { |
| border-bottom: solid #CCC 1px; |
| } |
| |
| .toc_element { |
| margin-top: 0.5em; |
| } |
| |
| .firstline { |
margin-left: 2em;
| } |
| |
| .method { |
| margin-top: 1em; |
| border: solid 1px #CCC; |
| padding: 1em; |
| background: #EEE; |
| } |
| |
| .details { |
| font-weight: bold; |
| font-size: 14px; |
| } |
| |
| </style> |
| |
| <h1><a href="speech_v1p1beta1.html">Cloud Speech-to-Text API</a> . <a href="speech_v1p1beta1.speech.html">speech</a></h1> |
| <h2>Instance Methods</h2> |
| <p class="toc_element"> |
| <code><a href="#longrunningrecognize">longrunningrecognize(body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Performs asynchronous speech recognition: receive results via the</p> |
| <p class="toc_element"> |
| <code><a href="#recognize">recognize(body, x__xgafv=None)</a></code></p> |
| <p class="firstline">Performs synchronous speech recognition: receive results after all audio</p> |
| <h3>Method Details</h3> |
| <div class="method"> |
| <code class="details" id="longrunningrecognize">longrunningrecognize(body, x__xgafv=None)</code> |
| <pre>Performs asynchronous speech recognition: receive results via the |
| google.longrunning.Operations interface. Returns either an |
| `Operation.error` or an `Operation.response` which contains |
| a `LongRunningRecognizeResponse` message. |
| For more information on asynchronous speech recognition, see the |
| [how-to](https://cloud.google.com/speech-to-text/docs/async-recognize). |
| |
| Args: |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The top-level message sent by the client for the `LongRunningRecognize` |
| # method. |
| "audio": { # Contains audio data in the encoding specified in the `RecognitionConfig`. # *Required* The audio data to be recognized. |
| # Either `content` or `uri` must be supplied. Supplying both or neither |
| # returns google.rpc.Code.INVALID_ARGUMENT. See |
| # [content limits](/speech-to-text/quotas#content). |
| "content": "A String", # The audio data bytes encoded as specified in |
| # `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a |
| # pure binary representation, whereas JSON representations use base64. |
| "uri": "A String", # URI that points to a file that contains audio data bytes as specified in |
| # `RecognitionConfig`. The file must not be compressed (for example, gzip). |
| # Currently, only Google Cloud Storage URIs are |
| # supported, which must be specified in the following format: |
| # `gs://bucket_name/object_name` (other URI formats return |
| # google.rpc.Code.INVALID_ARGUMENT). For more information, see |
| # [Request URIs](https://cloud.google.com/storage/docs/reference-uris). |
| }, |
| "config": { # Provides information to the recognizer that specifies how to process the # *Required* Provides information to the recognizer that specifies how to |
| # process the request. |
| # request. |
| "languageCode": "A String", # *Required* The language of the supplied audio as a |
| # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. |
| # Example: "en-US". |
| # See [Language Support](/speech-to-text/docs/languages) |
| # for a list of the currently supported language codes. |
| "audioChannelCount": 42, # *Optional* The number of channels in the input audio data. |
| # ONLY set this for MULTI-CHANNEL recognition. |
| # Valid values for LINEAR16 and FLAC are `1`-`8`. |
# Valid values for OGG_OPUS are `1`-`254`.
# The only valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is `1`.
# If `0` or omitted, defaults to one channel (mono).
# Note: We only recognize the first channel by default.
# To perform independent recognition on each channel, set
# `enable_separate_recognition_per_channel` to `true`.
| "encoding": "A String", # Encoding of audio data sent in all `RecognitionAudio` messages. |
| # This field is optional for `FLAC` and `WAV` audio files and required |
| # for all other audio formats. For details, see AudioEncoding. |
| "enableAutomaticPunctuation": True or False, # *Optional* If 'true', adds punctuation to recognition result hypotheses. |
| # This feature is only available in select languages. Setting this for |
| # requests in other languages has no effect at all. |
| # The default 'false' value does not add punctuation to result hypotheses. |
| # Note: This is currently offered as an experimental service, complimentary |
| # to all users. In the future this may be exclusively available as a |
| # premium feature. |
| "alternativeLanguageCodes": [ # *Optional* A list of up to 3 additional |
| # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, |
| # listing possible alternative languages of the supplied audio. |
| # See [Language Support](/speech-to-text/docs/languages) |
| # for a list of the currently supported language codes. |
# If alternative languages are listed, the recognition result will contain
# the transcription in the most likely language detected, including the
# main language_code. The recognition result will include the language tag
# of the language detected in the audio.
| # Note: This feature is only supported for Voice Command and Voice Search |
| # use cases and performance may vary for other use cases (e.g., phone call |
| # transcription). |
| "A String", |
| ], |
| "enableSeparateRecognitionPerChannel": True or False, # This needs to be set to `true` explicitly and `audio_channel_count` > 1 |
| # to get each channel recognized separately. The recognition result will |
| # contain a `channel_tag` field to state which channel that result belongs |
| # to. If this is not true, we will only recognize the first channel. The |
| # request is billed cumulatively for all channels recognized: |
| # `audio_channel_count` multiplied by the length of the audio. |
| "enableWordTimeOffsets": True or False, # *Optional* If `true`, the top result includes a list of words and |
| # the start and end time offsets (timestamps) for those words. If |
| # `false`, no word-level time offset information is returned. The default is |
| # `false`. |
| "enableSpeakerDiarization": True or False, # *Optional* If 'true', enables speaker detection for each recognized word in |
| # the top alternative of the recognition result using a speaker_tag provided |
| # in the WordInfo. |
| # Note: Use diarization_config instead. This field will be DEPRECATED soon. |
| "maxAlternatives": 42, # *Optional* Maximum number of recognition hypotheses to be returned. |
| # Specifically, the maximum number of `SpeechRecognitionAlternative` messages |
| # within each `SpeechRecognitionResult`. |
| # The server may return fewer than `max_alternatives`. |
# Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
# one. If omitted, the server will return a maximum of one.
| "profanityFilter": True or False, # *Optional* If set to `true`, the server will attempt to filter out |
| # profanities, replacing all but the initial character in each filtered word |
| # with asterisks, e.g. "f***". If set to `false` or omitted, profanities |
| # won't be filtered out. |
| "useEnhanced": True or False, # *Optional* Set to true to use an enhanced model for speech recognition. |
| # If `use_enhanced` is set to true and the `model` field is not set, then |
| # an appropriate enhanced model is chosen if: |
| # 1. project is eligible for requesting enhanced models |
| # 2. an enhanced model exists for the audio |
| # |
| # If `use_enhanced` is true and an enhanced version of the specified model |
| # does not exist, then the speech is recognized using the standard version |
| # of the specified model. |
| # |
# Enhanced speech models require that you opt in to data logging using
| # instructions in the |
| # [documentation](/speech-to-text/docs/enable-data-logging). If you set |
| # `use_enhanced` to true and you have not enabled audio logging, then you |
| # will receive an error. |
| "sampleRateHertz": 42, # Sample rate in Hertz of the audio data sent in all |
| # `RecognitionAudio` messages. Valid values are: 8000-48000. |
| # 16000 is optimal. For best results, set the sampling rate of the audio |
| # source to 16000 Hz. If that's not possible, use the native sample rate of |
| # the audio source (instead of re-sampling). |
| # This field is optional for FLAC and WAV audio files, but is |
| # required for all other audio formats. For details, see AudioEncoding. |
| "diarizationSpeakerCount": 42, # *Optional* |
| # If set, specifies the estimated number of speakers in the conversation. |
# If not set, defaults to `2`.
# Ignored unless enable_speaker_diarization is set to `true`.
| # Note: Use diarization_config instead. This field will be DEPRECATED soon. |
| "enableWordConfidence": True or False, # *Optional* If `true`, the top result includes a list of words and the |
| # confidence for those words. If `false`, no word-level confidence |
| # information is returned. The default is `false`. |
| "model": "A String", # *Optional* Which model to select for the given request. Select the model |
| # best suited to your domain to get best results. If a model is not |
| # explicitly specified, then we auto-select a model based on the parameters |
| # in the RecognitionConfig. |
| # <table> |
| # <tr> |
| # <td><b>Model</b></td> |
| # <td><b>Description</b></td> |
| # </tr> |
| # <tr> |
| # <td><code>command_and_search</code></td> |
| # <td>Best for short queries such as voice commands or voice search.</td> |
| # </tr> |
| # <tr> |
| # <td><code>phone_call</code></td> |
| # <td>Best for audio that originated from a phone call (typically |
# recorded at an 8 kHz sampling rate).</td>
| # </tr> |
| # <tr> |
| # <td><code>video</code></td> |
# <td>Best for audio that originated from video or includes multiple
# speakers. Ideally the audio is recorded at a 16 kHz or greater
| # sampling rate. This is a premium model that costs more than the |
| # standard rate.</td> |
| # </tr> |
| # <tr> |
| # <td><code>default</code></td> |
| # <td>Best for audio that is not one of the specific audio models. |
| # For example, long-form audio. Ideally the audio is high-fidelity, |
# recorded at a 16 kHz or greater sampling rate.</td>
| # </tr> |
| # </table> |
| "diarizationConfig": { # *Optional* Config to enable speaker diarization and set additional |
| # parameters to make diarization better suited for your application. |
| # Note: When this is enabled, we send all the words from the beginning of the |
# audio for the top alternative in every consecutive STREAMING response.
| # This is done in order to improve our speaker tags as our models learn to |
| # identify the speakers in the conversation over time. |
| # For non-streaming requests, the diarization results will be provided only |
| # in the top alternative of the FINAL SpeechRecognitionResult. |
| "minSpeakerCount": 42, # *Optional* Only used if diarization_speaker_count is not set. |
| # Minimum number of speakers in the conversation. This range gives you more |
| # flexibility by allowing the system to automatically determine the correct |
| # number of speakers. If not set, the default value is 2. |
| "enableSpeakerDiarization": True or False, # *Optional* If 'true', enables speaker detection for each recognized word in |
| # the top alternative of the recognition result using a speaker_tag provided |
| # in the WordInfo. |
| "maxSpeakerCount": 42, # *Optional* Only used if diarization_speaker_count is not set. |
| # Maximum number of speakers in the conversation. This range gives you more |
| # flexibility by allowing the system to automatically determine the correct |
| # number of speakers. If not set, the default value is 6. |
| }, |
| "speechContexts": [ # *Optional* array of SpeechContext. |
| # A means to provide context to assist the speech recognition. For more |
| # information, see [Phrase Hints](/speech-to-text/docs/basics#phrase-hints). |
| { # Provides "hints" to the speech recognizer to favor specific words and phrases |
| # in the results. |
| "phrases": [ # *Optional* A list of strings containing words and phrases "hints" so that |
| # the speech recognition is more likely to recognize them. This can be used |
| # to improve the accuracy for specific words and phrases, for example, if |
| # specific commands are typically spoken by the user. This can also be used |
| # to add additional words to the vocabulary of the recognizer. See |
| # [usage limits](/speech-to-text/quotas#content). |
| # |
| # List items can also be set to classes for groups of words that represent |
| # common concepts that occur in natural language. For example, rather than |
| # providing phrase hints for every month of the year, using the $MONTH class |
| # improves the likelihood of correctly transcribing audio that includes |
| # months. |
| "A String", |
| ], |
| "boost": 3.14, # Hint Boost. Positive value will increase the probability that a specific |
| # phrase will be recognized over other similar sounding phrases. The higher |
| # the boost, the higher the chance of false positive recognition as well. |
| # Negative boost values would correspond to anti-biasing. Anti-biasing is not |
| # enabled, so negative boost will simply be ignored. Though `boost` can |
| # accept a wide range of positive values, most use cases are best served with |
| # values between 0 and 20. We recommend using a binary search approach to |
| # finding the optimal value for your use case. |
| }, |
| ], |
| "metadata": { # Description of audio data to be recognized. # *Optional* Metadata regarding this request. |
| "recordingDeviceType": "A String", # The type of device the speech was recorded with. |
| "originalMediaType": "A String", # The original media the speech was recorded on. |
| "microphoneDistance": "A String", # The audio type that most closely describes the audio being recognized. |
| "obfuscatedId": "A String", # Obfuscated (privacy-protected) ID of the user, to identify number of |
| # unique users using the service. |
| "recordingDeviceName": "A String", # The device used to make the recording. Examples 'Nexus 5X' or |
| # 'Polycom SoundStation IP 6000' or 'POTS' or 'VoIP' or |
| # 'Cardioid Microphone'. |
| "industryNaicsCodeOfAudio": 42, # The industry vertical to which this speech recognition request most |
| # closely applies. This is most indicative of the topics contained |
| # in the audio. Use the 6-digit NAICS code to identify the industry |
| # vertical - see https://www.naics.com/search/. |
| "audioTopic": "A String", # Description of the content. Eg. "Recordings of federal supreme court |
| # hearings from 2012". |
| "originalMimeType": "A String", # Mime type of the original audio file. For example `audio/m4a`, |
| # `audio/x-alaw-basic`, `audio/mp3`, `audio/3gpp`. |
| # A list of possible audio mime types is maintained at |
| # http://www.iana.org/assignments/media-types/media-types.xhtml#audio |
| "interactionType": "A String", # The use case most closely describing the audio content to be recognized. |
| }, |
| }, |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # This resource represents a long-running operation that is the result of a |
| # network API call. |
| "metadata": { # Service-specific metadata associated with the operation. It typically |
| # contains progress information and common metadata such as create time. |
| # Some services might not provide such metadata. Any method that returns a |
| # long-running operation should document the metadata type, if any. |
| "a_key": "", # Properties of the object. Contains field @type with type URL. |
| }, |
| "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation. |
| # different programming environments, including REST APIs and RPC APIs. It is |
| # used by [gRPC](https://github.com/grpc). Each `Status` message contains |
| # three pieces of data: error code, error message, and error details. |
| # |
| # You can find out more about this error model and how to work with it in the |
| # [API Design Guide](https://cloud.google.com/apis/design/errors). |
| "message": "A String", # A developer-facing error message, which should be in English. Any |
| # user-facing error message should be localized and sent in the |
| # google.rpc.Status.details field, or localized by the client. |
| "code": 42, # The status code, which should be an enum value of google.rpc.Code. |
| "details": [ # A list of messages that carry the error details. There is a common set of |
| # message types for APIs to use. |
| { |
| "a_key": "", # Properties of the object. Contains field @type with type URL. |
| }, |
| ], |
| }, |
| "done": True or False, # If the value is `false`, it means the operation is still in progress. |
| # If `true`, the operation is completed, and either `error` or `response` is |
| # available. |
| "response": { # The normal response of the operation in case of success. If the original |
| # method returns no data on success, such as `Delete`, the response is |
| # `google.protobuf.Empty`. If the original method is standard |
| # `Get`/`Create`/`Update`, the response should be the resource. For other |
| # methods, the response should have the type `XxxResponse`, where `Xxx` |
| # is the original method name. For example, if the original method name |
| # is `TakeSnapshot()`, the inferred response type is |
| # `TakeSnapshotResponse`. |
| "a_key": "", # Properties of the object. Contains field @type with type URL. |
| }, |
| "name": "A String", # The server-assigned name, which is only unique within the same service that |
| # originally returns it. If you use the default HTTP mapping, the |
| # `name` should be a resource name ending with `operations/{unique_id}`. |
| }</pre> |
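<p>Below is a minimal usage sketch, not part of the generated reference. It assumes the <code>google-api-python-client</code> library with Application Default Credentials, a hypothetical FLAC file in Cloud Storage, and the API's separately documented operations resource for polling. It is a sketch of the call pattern, not a definitive implementation.</p>
<pre>import time

from googleapiclient.discovery import build

# Build the Speech-to-Text client from the public discovery document.
service = build('speech', 'v1p1beta1')

body = {
    'config': {
        'encoding': 'FLAC',
        'sampleRateHertz': 16000,
        'languageCode': 'en-US',
        'enableWordTimeOffsets': True,
    },
    'audio': {
        # Exactly one of `content` or `uri` may be set.
        'uri': 'gs://my-bucket/my-audio.flac',  # hypothetical bucket/object
    },
}

operation = service.speech().longrunningrecognize(body=body).execute()

# Poll the google.longrunning.Operations interface until `done` is true.
while not operation.get('done'):
    time.sleep(30)
    operation = service.operations().get(name=operation['name']).execute()

# A finished operation carries either `error` or `response`.
if 'error' in operation:
    raise RuntimeError(operation['error'].get('message'))
print(operation['response'])  # a LongRunningRecognizeResponse message</pre>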
| </div> |
| |
| <div class="method"> |
| <code class="details" id="recognize">recognize(body, x__xgafv=None)</code> |
| <pre>Performs synchronous speech recognition: receive results after all audio |
| has been sent and processed. |
| |
| Args: |
| body: object, The request body. (required) |
| The object takes the form of: |
| |
| { # The top-level message sent by the client for the `Recognize` method. |
| "audio": { # Contains audio data in the encoding specified in the `RecognitionConfig`. # *Required* The audio data to be recognized. |
| # Either `content` or `uri` must be supplied. Supplying both or neither |
| # returns google.rpc.Code.INVALID_ARGUMENT. See |
| # [content limits](/speech-to-text/quotas#content). |
| "content": "A String", # The audio data bytes encoded as specified in |
| # `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a |
| # pure binary representation, whereas JSON representations use base64. |
| "uri": "A String", # URI that points to a file that contains audio data bytes as specified in |
| # `RecognitionConfig`. The file must not be compressed (for example, gzip). |
| # Currently, only Google Cloud Storage URIs are |
| # supported, which must be specified in the following format: |
| # `gs://bucket_name/object_name` (other URI formats return |
| # google.rpc.Code.INVALID_ARGUMENT). For more information, see |
| # [Request URIs](https://cloud.google.com/storage/docs/reference-uris). |
| }, |
| "config": { # Provides information to the recognizer that specifies how to process the # *Required* Provides information to the recognizer that specifies how to |
| # process the request. |
| # request. |
| "languageCode": "A String", # *Required* The language of the supplied audio as a |
| # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. |
| # Example: "en-US". |
| # See [Language Support](/speech-to-text/docs/languages) |
| # for a list of the currently supported language codes. |
| "audioChannelCount": 42, # *Optional* The number of channels in the input audio data. |
| # ONLY set this for MULTI-CHANNEL recognition. |
| # Valid values for LINEAR16 and FLAC are `1`-`8`. |
# Valid values for OGG_OPUS are `1`-`254`.
# The only valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is `1`.
# If `0` or omitted, defaults to one channel (mono).
# Note: We only recognize the first channel by default.
# To perform independent recognition on each channel, set
# `enable_separate_recognition_per_channel` to `true`.
| "encoding": "A String", # Encoding of audio data sent in all `RecognitionAudio` messages. |
| # This field is optional for `FLAC` and `WAV` audio files and required |
| # for all other audio formats. For details, see AudioEncoding. |
| "enableAutomaticPunctuation": True or False, # *Optional* If 'true', adds punctuation to recognition result hypotheses. |
| # This feature is only available in select languages. Setting this for |
| # requests in other languages has no effect at all. |
| # The default 'false' value does not add punctuation to result hypotheses. |
| # Note: This is currently offered as an experimental service, complimentary |
| # to all users. In the future this may be exclusively available as a |
| # premium feature. |
| "alternativeLanguageCodes": [ # *Optional* A list of up to 3 additional |
| # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tags, |
| # listing possible alternative languages of the supplied audio. |
| # See [Language Support](/speech-to-text/docs/languages) |
| # for a list of the currently supported language codes. |
# If alternative languages are listed, the recognition result will contain
# the transcription in the most likely language detected, including the
# main language_code. The recognition result will include the language tag
# of the language detected in the audio.
| # Note: This feature is only supported for Voice Command and Voice Search |
| # use cases and performance may vary for other use cases (e.g., phone call |
| # transcription). |
| "A String", |
| ], |
| "enableSeparateRecognitionPerChannel": True or False, # This needs to be set to `true` explicitly and `audio_channel_count` > 1 |
| # to get each channel recognized separately. The recognition result will |
| # contain a `channel_tag` field to state which channel that result belongs |
| # to. If this is not true, we will only recognize the first channel. The |
| # request is billed cumulatively for all channels recognized: |
| # `audio_channel_count` multiplied by the length of the audio. |
| "enableWordTimeOffsets": True or False, # *Optional* If `true`, the top result includes a list of words and |
| # the start and end time offsets (timestamps) for those words. If |
| # `false`, no word-level time offset information is returned. The default is |
| # `false`. |
| "enableSpeakerDiarization": True or False, # *Optional* If 'true', enables speaker detection for each recognized word in |
| # the top alternative of the recognition result using a speaker_tag provided |
| # in the WordInfo. |
| # Note: Use diarization_config instead. This field will be DEPRECATED soon. |
| "maxAlternatives": 42, # *Optional* Maximum number of recognition hypotheses to be returned. |
| # Specifically, the maximum number of `SpeechRecognitionAlternative` messages |
| # within each `SpeechRecognitionResult`. |
| # The server may return fewer than `max_alternatives`. |
# Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of
# one. If omitted, the server will return a maximum of one.
| "profanityFilter": True or False, # *Optional* If set to `true`, the server will attempt to filter out |
| # profanities, replacing all but the initial character in each filtered word |
| # with asterisks, e.g. "f***". If set to `false` or omitted, profanities |
| # won't be filtered out. |
| "useEnhanced": True or False, # *Optional* Set to true to use an enhanced model for speech recognition. |
| # If `use_enhanced` is set to true and the `model` field is not set, then |
| # an appropriate enhanced model is chosen if: |
| # 1. project is eligible for requesting enhanced models |
| # 2. an enhanced model exists for the audio |
| # |
| # If `use_enhanced` is true and an enhanced version of the specified model |
| # does not exist, then the speech is recognized using the standard version |
| # of the specified model. |
| # |
# Enhanced speech models require that you opt in to data logging using
| # instructions in the |
| # [documentation](/speech-to-text/docs/enable-data-logging). If you set |
| # `use_enhanced` to true and you have not enabled audio logging, then you |
| # will receive an error. |
| "sampleRateHertz": 42, # Sample rate in Hertz of the audio data sent in all |
| # `RecognitionAudio` messages. Valid values are: 8000-48000. |
| # 16000 is optimal. For best results, set the sampling rate of the audio |
| # source to 16000 Hz. If that's not possible, use the native sample rate of |
| # the audio source (instead of re-sampling). |
| # This field is optional for FLAC and WAV audio files, but is |
| # required for all other audio formats. For details, see AudioEncoding. |
| "diarizationSpeakerCount": 42, # *Optional* |
| # If set, specifies the estimated number of speakers in the conversation. |
# If not set, defaults to `2`.
# Ignored unless enable_speaker_diarization is set to `true`.
| # Note: Use diarization_config instead. This field will be DEPRECATED soon. |
| "enableWordConfidence": True or False, # *Optional* If `true`, the top result includes a list of words and the |
| # confidence for those words. If `false`, no word-level confidence |
| # information is returned. The default is `false`. |
| "model": "A String", # *Optional* Which model to select for the given request. Select the model |
| # best suited to your domain to get best results. If a model is not |
| # explicitly specified, then we auto-select a model based on the parameters |
| # in the RecognitionConfig. |
| # <table> |
| # <tr> |
| # <td><b>Model</b></td> |
| # <td><b>Description</b></td> |
| # </tr> |
| # <tr> |
| # <td><code>command_and_search</code></td> |
| # <td>Best for short queries such as voice commands or voice search.</td> |
| # </tr> |
| # <tr> |
| # <td><code>phone_call</code></td> |
| # <td>Best for audio that originated from a phone call (typically |
# recorded at an 8 kHz sampling rate).</td>
| # </tr> |
| # <tr> |
| # <td><code>video</code></td> |
# <td>Best for audio that originated from video or includes multiple
# speakers. Ideally the audio is recorded at a 16 kHz or greater
| # sampling rate. This is a premium model that costs more than the |
| # standard rate.</td> |
| # </tr> |
| # <tr> |
| # <td><code>default</code></td> |
| # <td>Best for audio that is not one of the specific audio models. |
| # For example, long-form audio. Ideally the audio is high-fidelity, |
# recorded at a 16 kHz or greater sampling rate.</td>
| # </tr> |
| # </table> |
| "diarizationConfig": { # *Optional* Config to enable speaker diarization and set additional |
| # parameters to make diarization better suited for your application. |
| # Note: When this is enabled, we send all the words from the beginning of the |
# audio for the top alternative in every consecutive STREAMING response.
| # This is done in order to improve our speaker tags as our models learn to |
| # identify the speakers in the conversation over time. |
| # For non-streaming requests, the diarization results will be provided only |
| # in the top alternative of the FINAL SpeechRecognitionResult. |
| "minSpeakerCount": 42, # *Optional* Only used if diarization_speaker_count is not set. |
| # Minimum number of speakers in the conversation. This range gives you more |
| # flexibility by allowing the system to automatically determine the correct |
| # number of speakers. If not set, the default value is 2. |
| "enableSpeakerDiarization": True or False, # *Optional* If 'true', enables speaker detection for each recognized word in |
| # the top alternative of the recognition result using a speaker_tag provided |
| # in the WordInfo. |
| "maxSpeakerCount": 42, # *Optional* Only used if diarization_speaker_count is not set. |
| # Maximum number of speakers in the conversation. This range gives you more |
| # flexibility by allowing the system to automatically determine the correct |
| # number of speakers. If not set, the default value is 6. |
| }, |
| "speechContexts": [ # *Optional* array of SpeechContext. |
| # A means to provide context to assist the speech recognition. For more |
| # information, see [Phrase Hints](/speech-to-text/docs/basics#phrase-hints). |
| { # Provides "hints" to the speech recognizer to favor specific words and phrases |
| # in the results. |
| "phrases": [ # *Optional* A list of strings containing words and phrases "hints" so that |
| # the speech recognition is more likely to recognize them. This can be used |
| # to improve the accuracy for specific words and phrases, for example, if |
| # specific commands are typically spoken by the user. This can also be used |
| # to add additional words to the vocabulary of the recognizer. See |
| # [usage limits](/speech-to-text/quotas#content). |
| # |
| # List items can also be set to classes for groups of words that represent |
| # common concepts that occur in natural language. For example, rather than |
| # providing phrase hints for every month of the year, using the $MONTH class |
| # improves the likelihood of correctly transcribing audio that includes |
| # months. |
| "A String", |
| ], |
| "boost": 3.14, # Hint Boost. Positive value will increase the probability that a specific |
| # phrase will be recognized over other similar sounding phrases. The higher |
| # the boost, the higher the chance of false positive recognition as well. |
| # Negative boost values would correspond to anti-biasing. Anti-biasing is not |
| # enabled, so negative boost will simply be ignored. Though `boost` can |
| # accept a wide range of positive values, most use cases are best served with |
| # values between 0 and 20. We recommend using a binary search approach to |
| # finding the optimal value for your use case. |
| }, |
| ], |
| "metadata": { # Description of audio data to be recognized. # *Optional* Metadata regarding this request. |
| "recordingDeviceType": "A String", # The type of device the speech was recorded with. |
| "originalMediaType": "A String", # The original media the speech was recorded on. |
| "microphoneDistance": "A String", # The audio type that most closely describes the audio being recognized. |
| "obfuscatedId": "A String", # Obfuscated (privacy-protected) ID of the user, to identify number of |
| # unique users using the service. |
| "recordingDeviceName": "A String", # The device used to make the recording. Examples 'Nexus 5X' or |
| # 'Polycom SoundStation IP 6000' or 'POTS' or 'VoIP' or |
| # 'Cardioid Microphone'. |
| "industryNaicsCodeOfAudio": 42, # The industry vertical to which this speech recognition request most |
| # closely applies. This is most indicative of the topics contained |
| # in the audio. Use the 6-digit NAICS code to identify the industry |
| # vertical - see https://www.naics.com/search/. |
| "audioTopic": "A String", # Description of the content. Eg. "Recordings of federal supreme court |
| # hearings from 2012". |
| "originalMimeType": "A String", # Mime type of the original audio file. For example `audio/m4a`, |
| # `audio/x-alaw-basic`, `audio/mp3`, `audio/3gpp`. |
| # A list of possible audio mime types is maintained at |
| # http://www.iana.org/assignments/media-types/media-types.xhtml#audio |
| "interactionType": "A String", # The use case most closely describing the audio content to be recognized. |
| }, |
| }, |
| "name": "A String", # *Optional* The name of the model to use for recognition. |
| } |
| |
| x__xgafv: string, V1 error format. |
| Allowed values |
| 1 - v1 error format |
| 2 - v2 error format |
| |
| Returns: |
| An object of the form: |
| |
| { # The only message returned to the client by the `Recognize` method. It |
| # contains the result as zero or more sequential `SpeechRecognitionResult` |
| # messages. |
| "results": [ # Output only. Sequential list of transcription results corresponding to |
| # sequential portions of audio. |
| { # A speech recognition result corresponding to a portion of the audio. |
| "languageCode": "A String", # Output only. The |
| # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag of the |
# language in this result. This language code was detected to have the
# highest likelihood of being spoken in the audio.
| "alternatives": [ # Output only. May contain one or more recognition hypotheses (up to the |
| # maximum specified in `max_alternatives`). |
| # These alternatives are ordered in terms of accuracy, with the top (first) |
| # alternative being the most probable, as ranked by the recognizer. |
| { # Alternative hypotheses (a.k.a. n-best list). |
| "confidence": 3.14, # Output only. The confidence estimate between 0.0 and 1.0. A higher number |
| # indicates an estimated greater likelihood that the recognized words are |
| # correct. This field is set only for the top alternative of a non-streaming |
# result or of a streaming result where `is_final=true`.
| # This field is not guaranteed to be accurate and users should not rely on it |
# to always be provided.
| # The default of 0.0 is a sentinel value indicating `confidence` was not set. |
| "transcript": "A String", # Output only. Transcript text representing the words that the user spoke. |
| "words": [ # Output only. A list of word-specific information for each recognized word. |
| # Note: When `enable_speaker_diarization` is true, you will see all the words |
| # from the beginning of the audio. |
| { # Word-specific information for recognized words. |
| "confidence": 3.14, # Output only. The confidence estimate between 0.0 and 1.0. A higher number |
| # indicates an estimated greater likelihood that the recognized words are |
| # correct. This field is set only for the top alternative of a non-streaming |
# result or of a streaming result where `is_final=true`.
| # This field is not guaranteed to be accurate and users should not rely on it |
# to always be provided.
| # The default of 0.0 is a sentinel value indicating `confidence` was not set. |
| "endTime": "A String", # Output only. Time offset relative to the beginning of the audio, |
| # and corresponding to the end of the spoken word. |
| # This field is only set if `enable_word_time_offsets=true` and only |
| # in the top hypothesis. |
| # This is an experimental feature and the accuracy of the time offset can |
| # vary. |
| "word": "A String", # Output only. The word corresponding to this set of information. |
| "startTime": "A String", # Output only. Time offset relative to the beginning of the audio, |
| # and corresponding to the start of the spoken word. |
| # This field is only set if `enable_word_time_offsets=true` and only |
| # in the top hypothesis. |
| # This is an experimental feature and the accuracy of the time offset can |
| # vary. |
| "speakerTag": 42, # Output only. A distinct integer value is assigned for every speaker within |
| # the audio. This field specifies which one of those speakers was detected to |
# have spoken this word. Value ranges from `1` to diarization_speaker_count.
# speaker_tag is set if enable_speaker_diarization = `true` and only in the
| # top alternative. |
| }, |
| ], |
| }, |
| ], |
| "channelTag": 42, # For multi-channel audio, this is the channel number corresponding to the |
| # recognized result for the audio from that channel. |
# For audio_channel_count = N, its output values can range from `1` to `N`.
| }, |
| ], |
| }</pre> |
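<p>Below is a minimal usage sketch, not part of the generated reference. It assumes the <code>google-api-python-client</code> library with Application Default Credentials and a hypothetical local file of raw 16-bit PCM audio; it sends the bytes base64-encoded in <code>content</code>, as JSON requests require for bytes fields.</p>
<pre>import base64

from googleapiclient.discovery import build

# Build the Speech-to-Text client from the public discovery document.
service = build('speech', 'v1p1beta1')

# Read raw audio bytes; JSON requests carry bytes fields as base64 text.
with open('command.raw', 'rb') as f:  # hypothetical raw LINEAR16 file
    audio_bytes = f.read()

body = {
    'config': {
        'encoding': 'LINEAR16',
        'sampleRateHertz': 16000,
        'languageCode': 'en-US',
        'speechContexts': [{'phrases': ['turn on the lights']}],  # optional hints
    },
    'audio': {
        'content': base64.b64encode(audio_bytes).decode('utf-8'),
    },
}

response = service.speech().recognize(body=body).execute()

for result in response.get('results', []):
    # Alternatives are ordered most-probable first.
    best = result['alternatives'][0]
    print(best['transcript'], best.get('confidence'))</pre>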
| </div> |
| |
| </body></html> |