The name of the Asset in Lens Studio.
Readonly
onListeningDisabled
Registers a callback which will be called when microphone permissions are revoked from the Lens. stopListening() is implicitly called in that case.
Readonly
onListeningEnabled
Registers a callback which will be called when microphone permissions are granted to the Lens, the microphone is initialized, and the module is actively listening. The expected design pattern is to start the listening session once those permissions have been granted:
//@input Asset.VoiceMLModule vmlModule
var onListeningEnabledHandler = function() {
    script.vmlModule.startListening(options);
};
script.vmlModule.onListeningEnabled.add(onListeningEnabledHandler);
Readonly
onListeningError
Registers a callback which will be called if the VoiceML module can't process the inputs. Most errors are due to network connectivity or misconfigured NLP inputs.
Readonly
onListeningUpdate
Registers a callback which will be called with interim transcription results or related NLP model responses.
Readonly
enableSystemCommandApi()
Allows the user to provide voice commands for VoiceML to execute on behalf of the user. Currently supported commands: "Take a Snap", "Start Recording", "Stop Recording". When a command is detected, it is automatically executed by the system and returned as part of the NlpCommandResponse in the onListeningUpdate callback. You can retrieve the executed command using the following snippet:
var onUpdateListeningEventHandler = function(eventArgs) {
    var commandResponses = eventArgs.getCommandResponses();
    var nlpResponseText = "";
    for (var i = 0; i < commandResponses.length; i++) {
        var commandResponse = commandResponses[i];
        nlpResponseText += "Command Response: " + commandResponse.modelName +
            "\n command: " + commandResponse.command;
    }
};
script.vmlModule.onListeningUpdate.add(onUpdateListeningEventHandler);
isSame(other)
Returns true if this object is the same as other. Useful for checking whether two references point to the same underlying object.
Exposes User Data
startListening(options)
Starts transcribing the user's audio. The NLP model is connected and sends results back through an event; transcription and interim results can optionally be requested. Note that you can only call startListening after microphone permissions have been granted, so it is recommended to call startListening only after the onListeningEnabled callback has fired.
VoiceML Module allows voice input and commands. It enables transcription of speech, detection of keywords within the transcription, and intents, as well as system commands (such as "Take a Snap"). You can use one VoiceML Module per Lens.
Example