The name of the Asset in Lens Studio.
onListeningEnabled (Readonly): Registers a callback which will be called when microphone permissions are taken away from the Lens; stopListening() is implicitly called in that case.
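A handler can be registered for this event in the same way as for the enabled event below. A minimal sketch, assuming the module is bound to the script as script.vmlModule; the handler body and its status string are illustrative:

```javascript
// Illustrative handler for when microphone permissions are revoked.
// stopListening() is implied by the event itself, so the handler only needs
// to update Lens-side state; here it just builds a status string.
var onListeningDisabledHandler = function () {
    return "Listening disabled: microphone permissions revoked";
};
// In a Lens Studio script, registration would look like:
// script.vmlModule.onListeningDisabled.add(onListeningDisabledHandler);
```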
onListeningEnabled (Readonly): Registers a callback which will be called when microphone permissions are granted to the Lens, the microphone is initialized, and the module is actively listening. The expected design pattern is to start the listening session once those permissions have been granted:
//@input Asset.VoiceMLModule vmlModule
var onListeningEnabledHandler = function() {
    script.vmlModule.startListening(options);
};
script.vmlModule.onListeningEnabled.add(onListeningEnabledHandler);
onListeningError (Readonly): Registers a callback which will be called when the VoiceML module can't process the inputs. Most errors are due to network connectivity or misconfigured NLP inputs.
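A sketch of an error handler; the eventArgs fields used here (error, description) are assumptions about the error event payload and should be checked against the current API reference:

```javascript
// Illustrative error handler: formats the error for logging. The field
// names eventArgs.error and eventArgs.description are assumed, not confirmed.
var onListeningErrorHandler = function (eventArgs) {
    return "Listening error: " + eventArgs.error + " - " + eventArgs.description;
};
// Registration in a Lens Studio script (illustrative):
// script.vmlModule.onListeningError.add(onListeningErrorHandler);
```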
onListeningUpdate (Readonly): Registers a callback which will be called with interim transcription results or related NLP model responses.
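A sketch of a transcription-oriented update handler; the fields transcription and isFinalTranscription are assumptions about the update event payload:

```javascript
// Illustrative update handler distinguishing interim from final results.
// eventArgs.transcription and eventArgs.isFinalTranscription are assumed names.
var onListeningUpdateHandler = function (eventArgs) {
    if (eventArgs.isFinalTranscription) {
        return "Final: " + eventArgs.transcription;
    }
    return "Interim: " + eventArgs.transcription;
};
// In a Lens Studio script, registration would look like:
// script.vmlModule.onListeningUpdate.add(onListeningUpdateHandler);
```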
Allows the user to provide voice commands for VoiceML to execute on behalf of the user. Currently supported commands: "Take a Snap", "Start Recording", "Stop Recording". If a command is detected, it will be automatically executed by the system and returned as part of the VoiceML.NlpCommandResponse in the onListeningUpdate callback. You can retrieve the executed command with the following snippet:
var onUpdateListeningEventHandler = function(eventArgs) {
    var commandResponses = eventArgs.getCommandResponses();
    var nlpResponseText = "";
    for (var i = 0; i < commandResponses.length; i++) {
        var commandResponse = commandResponses[i];
        nlpResponseText += "Command Response: " + commandResponse.modelName +
            "\n command: " + commandResponse.command;
    }
};
Returns the name of this object's type.
Returns true if the object matches or derives from the passed-in type.
Returns true if this object is the same as other. Useful for checking if two references point to the same thing.
Exposes User Data: Starts transcribing the user's audio. The NLP model is connected and sends results back via an event; transcription and interim results can optionally be requested. Note that you can only call startListening() after microphone permissions have been granted; it is recommended to call it only after the onListeningEnabled callback has fired.
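The recommended flow can be sketched as follows. The options object here is a plain stand-in built with assumed option names (languageCode, shouldReturnAsrTranscription, shouldReturnInterimAsrTranscription); in a Lens it would come from VoiceML.ListeningOptions.create(), and the names should be verified against the API reference:

```javascript
// Builds a stand-in listening options object. In a Lens Studio script this
// would instead be: var options = VoiceML.ListeningOptions.create();
var buildListeningOptions = function () {
    var options = {};
    options.languageCode = "en_US";                     // assumed option name
    options.shouldReturnAsrTranscription = true;        // request full transcription
    options.shouldReturnInterimAsrTranscription = true; // request interim results
    return options;
};
// In a Lens Studio script, start listening only once permissions are granted:
// script.vmlModule.onListeningEnabled.add(function () {
//     script.vmlModule.startListening(buildListeningOptions());
// });
```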
Allows the Lens to incorporate transcription, keyword detection, voice command detection, and other NLP-based features.
See
VoiceML guide.
Deprecated
Example
Example usage of the VoiceMLModule: configuring listening options, applying speech contexts, handling keyword and intent models, processing transcription responses, and handling listening events and errors.