Protected constructor.

The language which VoiceML should listen to.
Options for the ML model to be used.
An array of VoiceML.QnaAction elements. It is used to pass the context in each QnaAction to the DialogML.
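A minimal sketch of passing QnaAction contexts along with the listening options; the postProcessingActions property name and the QnaAction.create(context) signature are assumptions and should be checked against the current API reference:

```js
// Sketch: attach QnaAction elements so their context is passed to the DialogML.
// Assumes VoiceML.QnaAction.create(context) and a postProcessingActions
// property on ListeningOptions (both unverified here).
var options = VoiceML.ListeningOptions.create();
options.postProcessingActions = [
    VoiceML.QnaAction.create("Our store is open 9am to 5pm, Monday through Friday.")
];
```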
Whether complete transcriptions should be returned. Such transcriptions are returned after the user has stopped speaking, and are marked with isFinalTranscription=true in the OnListeningUpdate.
Whether interim transcriptions should be returned. Such transcriptions are returned while the user is still speaking; however, they may be less accurate and can change in subsequent transcriptions. These interim results are marked with isFinalTranscription=false in the OnListeningUpdate.
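A short sketch of how these two flags surface in the OnListeningUpdate event; the flag names shouldReturnAsrTranscription and shouldReturnInterimAsrTranscription, the VoiceMLModule input, and the eventData fields are assumptions based on the descriptions above:

```js
//@input Asset.VoiceMLModule vmlModule

var options = VoiceML.ListeningOptions.create();
options.shouldReturnAsrTranscription = true;        // final results (isFinalTranscription=true)
options.shouldReturnInterimAsrTranscription = true; // live results (isFinalTranscription=false)

script.vmlModule.onListeningUpdate.add(function (eventData) {
    if (eventData.isFinalTranscription) {
        print("Final: " + eventData.transcription);
    } else {
        print("Interim: " + eventData.transcription);
    }
});
script.vmlModule.startListening(options);
```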
Supports multiple speech contexts for increased transcription accuracy.
An optional attribute to specify which speech recognizer ML model to use when transcribing. When creating a new ListeningOptions, the value of this attribute defaults to SPEECH_RECOGNIZER. The supported values are: SPEECH_RECOGNIZER.
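If the recognizer needs to be set explicitly, a sketch might look like the following; whether the value is a string or an enum member is an assumption, and since SPEECH_RECOGNIZER is already the default, setting it is normally redundant:

```js
var options = VoiceML.ListeningOptions.create();
// SPEECH_RECOGNIZER is the default; the string form of the value is an assumption.
options.speechRecognizer = "SPEECH_RECOGNIZER";
```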
In cases where specific words are expected from the users, the transcription accuracy of these words can be improved by strengthening their likelihood in context. The strength is scaled from 1 to 10 (10 being the strongest increase); the default value is 5.
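A sketch of boosting expected words via a speech context; the addSpeechContext(phrases, strength) method name and signature are assumptions based on the description above:

```js
var options = VoiceML.ListeningOptions.create();
// Boost the expected answers with strength 7 (scale 1-10, default 5).
options.addSpeechContext(["yes", "no", "maybe"], 7);
```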
Static.

create: Creates voice command options.
Returns the name of this object's type.
Returns true if the object matches or derives from the passed in type.
Returns true if this object is the same as other. Useful for checking if two references point to the same thing.
Provides the configuration for the audio input processing output. This can either include NLP processing using the VoiceML.BaseNlpModel or directly retrieving the transcription.
speechContext provides the ability to further improve the transcription accuracy given an assumed context.

Example
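A minimal end-to-end sketch combining the options above; the VoiceMLModule input, the languageCode property name, and the onListeningUpdate/startListening calls are assumptions based on typical VoiceML module usage:

```js
//@input Asset.VoiceMLModule vmlModule

// Configure listening: language, final + interim transcriptions, speech context.
var options = VoiceML.ListeningOptions.create();
options.languageCode = "en_US"; // assumed property name for the language setting
options.shouldReturnAsrTranscription = true;
options.shouldReturnInterimAsrTranscription = true;
options.addSpeechContext(["open", "close"], 5);

// Print only final transcriptions.
script.vmlModule.onListeningUpdate.add(function (eventData) {
    if (eventData.isFinalTranscription) {
        print("Transcription: " + eventData.transcription);
    }
});
script.vmlModule.startListening(options);
```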