Assets
Assets are resources imported into Lens Studio. See also: Importing and Updating Resources Guide
AnimationAsset
Asset that contains multiple animation layers. Animation assets themselves do not handle playing or orchestrating animations. This is left to the animation player component to handle.
Asset.AnimationAsset
AnimationCurveTrack
An asset that contains one or more animation curves. When evaluating multiple values, the values are selected from left to right in order. For example, for a vec3 containing x, y, z, the components correspond to track indices 0, 1, and 2 from left to right.
Asset.AnimationCurveTrack
[DEPRECATED] AnimationLayer
Configures an animation layer for a single SceneObject. Gives access to position, rotation, scale and blend shape animation tracks. See also: Playing 3D Animation Guide, AnimationMixer, Animation.
Asset.AnimationLayer
AnimationTrack
The base class for animation tracks.
Asset.AnimationTrack
AnimationCurveTrack, AnimationLayer, FloatAnimationTrack, IntAnimationTrack, QuaternionAnimationTrack, Vec2AnimationTrack, Vec3AnimationTrack, Vec4AnimationTrack
Asset
Base class for all assets used in the engine.
Asset
AnimationAsset, AnimationTrack, AudioEffectAsset, AudioTrackAsset, BinAsset, BitmojiModule, CloudStorageModule, CollisionMesh, ConnectedLensModule, DeviceTrackingModule, DialogModule, Font, GaussianSplattingAsset, GltfAsset, HairDataAsset, LeaderboardModule, LocalizationsAsset, LocationAsset, LocationCloudStorageModule, MapModule, MarkerAsset, Material, Matter, Object3DAsset, ObjectPrefab, Physics.WorldSettingsAsset, ProcessedLocationModule, RawLocationModule, RemoteMediaModule, RemoteReferenceAsset, RemoteServiceModule, RenderMesh, ScanModule, ScriptAsset, TextInputModule, TextToSpeechModule, Texture, VFXAsset, VoiceMLModule, WorldUnderstandingModule
AudioEffectAsset
Configures an audio effect for AudioEffectComponent.
Asset.AudioEffectAsset
AudioTrackAsset
Represents an audio file asset. See also: AudioComponent.
Asset.AudioTrackAsset
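A minimal sketch of playing an audio track through an AudioComponent; the input names here are illustrative:
//@input Asset.AudioTrackAsset audioTrack
//@input Component.AudioComponent audioComponent

// Assign the track to the component and play it once.
script.audioComponent.audioTrack = script.audioTrack;
script.audioComponent.play(1);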
BitmojiModule
Provides access to getting information about the current user's Bitmoji.
Asset.BitmojiModule
BodyTrackingAsset
Asset used to configure Body Tracking for the ObjectTracking3D component.
Asset.BodyTrackingAsset
CloudStorageModule
Provides access to Cloud Storage.
Asset.CloudStorageModule
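A rough sketch of requesting a cloud store from the module; the option defaults and callback signatures below are assumptions and should be checked against the Cloud Storage guide:
//@input Asset.CloudStorageModule cloudStorageModule

var cloudStorageOptions = CloudStorageOptions.create();
script.cloudStorageModule.getCloudStore(cloudStorageOptions, function(store) {
    // Assumed success callback: the returned store can now read and write values.
    print("Cloud store ready");
}, function(code, message) {
    // Assumed error callback signature.
    print("Cloud store error: " + code + " " + message);
});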
CollisionMesh
Physics.CollisionMesh
DeformingCollisionMesh, FixedCollisionMesh
ConnectedLensModule
Connected Lenses Module allows use of networked Lens communication capabilities (real-time communication, co-located session creation and joining, and shared persistent storage). It's recommended to only use one ConnectedLensModule per Lens.
Asset.ConnectedLensModule
ConnectedLensModule.ConnectionInfo, ConnectedLensModule.HostUpdateInfo, ConnectedLensModule.RealtimeStoreCreationInfo, ConnectedLensModule.RealtimeStoreDeleteInfo, ConnectedLensModule.RealtimeStoreKeyRemovalInfo, ConnectedLensModule.RealtimeStoreOwnershipUpdateInfo, ConnectedLensModule.RealtimeStoreUpdateInfo, ConnectedLensModule.SessionShareType, ConnectedLensModule.UserInfo
DeformingCollisionMesh
Physics.DeformingCollisionMesh
DeviceTrackingModule
The module that provides the DeviceTracking component.
Asset.DeviceTrackingModule
FixedCollisionMesh
Physics.FixedCollisionMesh
[DEPRECATED] FloatAnimationTrack
The base class for animation tracks using float values.
Asset.FloatAnimationTrack
FloatAnimationTrackKeyFramed, FloatBezierAnimationTrackKeyFramed
[DEPRECATED] FloatAnimationTrackKeyFramed
Represents an animation track using float value keyframes.
Asset.FloatAnimationTrackKeyFramed
[DEPRECATED] FloatBezierAnimationTrackKeyFramed
Represents an animation track using vec3 value keyframes for a bezier curve.
Asset.FloatBezierAnimationTrackKeyFramed
Font
A font asset used for rendering text. Used by Text. For more information, see the Text guide.
Asset.Font
GaussianSplattingAsset
Asset that contains Gaussian Splats. Used with GaussianSplattingVisual.
Asset.GaussianSplattingAsset
GltfAsset
Represents a GLTF 3D Model.
Asset.GltfAsset
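A minimal sketch of instantiating a GLTF model under this script's SceneObject; tryInstantiate and its material argument are assumptions to verify against the GltfAsset reference:
//@input Asset.GltfAsset gltfAsset
//@input Asset.Material gltfMaterial

// Instantiate the GLTF hierarchy as a child of this object (assumed signature).
var gltfRoot = script.gltfAsset.tryInstantiate(script.getSceneObject(), script.gltfMaterial);
if (gltfRoot) {
    print("GLTF instantiated: " + gltfRoot.name);
}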
HairDataAsset
Hair asset converted from an FBX containing splines to be used with Hair Visual.
Asset.HairDataAsset
HandTracking3DAsset
Asset.HandTracking3DAsset
[DEPRECATED] IntAnimationTrack
The base class for animation tracks using integer values.
Asset.IntAnimationTrack
IntStepAnimationTrackKeyFramed, IntStepNoLerpAnimationTrackKeyFramed
[DEPRECATED] IntStepAnimationTrackKeyFramed
Represents an animation track using stepped integer value keyframes.
Asset.IntStepAnimationTrackKeyFramed
[DEPRECATED] IntStepNoLerpAnimationTrackKeyFramed
Represents an animation track using stepped integer value keyframes.
Asset.IntStepNoLerpAnimationTrackKeyFramed
LeaderboardModule
A module which provides the Leaderboard API.
Asset.LeaderboardModule
LevelsetColliderAsset
Collider asset generated from a mesh to be used with the Hair Visual as part of the hair simulation.
Asset.LevelsetColliderAsset
LocalizationsAsset
Asset used with the Localizations system to support custom localization strings.
Asset.LocalizationsAsset
LocationAsset
Provides a frame of reference in which to localize objects to the real world. Use with LocatedAtComponent.
Asset.LocationAsset
LocationCloudStorageModule
Provides access to location cloud storage depending upon the LocationCloudStorageOptions.
Asset.LocationCloudStorageModule
MLAsset
Binary ML model supplied by the user.
Asset.MLAsset
MapModule
Module for providing Map utils.
Asset.MapModule
MarkerAsset
Defines a marker to use for Marker Tracking with MarkerTrackingComponent. For more information, see the Marker Tracking guide.
Asset.MarkerAsset
Material
An asset that describes how visual objects should appear. Each Material is a collection of Passes which define the actual rendering passes. Materials are used by MeshVisuals for drawing meshes in the scene.
Asset.Material
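A short sketch of adjusting a pass parameter on a material at runtime; baseColor is only an assumption about what this particular material's pass exposes:
//@input Asset.Material material

// mainPass gives access to the material's first Pass.
var pass = script.material.mainPass;
pass.baseColor = new vec4(1.0, 0.0, 0.0, 1.0); // assumes the pass defines a baseColor parameter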
Matter
Settings for the physical substance, such as friction and bounciness, of a collider. If unset, uses the default matter from the world settings.
Physics.Matter
Object3DAsset
Base class for configuring object tracking in the ObjectTracking3D component.
Asset.Object3DAsset
ObjectPrefab
A reusable object hierarchy stored as a resource. Can be instantiated through script or brought into the scene through Lens Studio. For more information, see the Prefabs guide.
Asset.ObjectPrefab
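A minimal sketch of instantiating a prefab from script as a child of the current SceneObject:
//@input Asset.ObjectPrefab prefab

// Create an instance of the prefab under this script's SceneObject.
var instance = script.prefab.instantiate(script.getSceneObject());
instance.name = "PrefabInstance";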
Physics.WorldSettingsAsset
Stores reusable settings uniform for a world (such as gravity magnitude and direction). See also: WorldComponent.worldSettings.
Physics.WorldSettingsAsset
ProcessedLocationModule
Asset.ProcessedLocationModule
[DEPRECATED] QuaternionAnimationTrack
The base class for animation tracks using quaternion values.
Asset.QuaternionAnimationTrack
QuaternionAnimationTrackKeyFramed, QuaternionAnimationTrackXYZEuler
[DEPRECATED] QuaternionAnimationTrackKeyFramed
Represents an animation track using quaternion value keyframes.
Asset.QuaternionAnimationTrackKeyFramed
[DEPRECATED] QuaternionAnimationTrackXYZEuler
Represents a rotation animation track using Euler angles.
Asset.QuaternionAnimationTrackXYZEuler
RemoteMediaModule
Provides access to remote media.
Asset.RemoteMediaModule
RemoteReferenceAsset
Provides a reference to a remote asset (i.e., an asset outside of the Lens size limit) that can be downloaded at runtime using script.
Asset.RemoteReferenceAsset
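A sketch of downloading a remote reference at runtime; the callbacks below only print, and the concrete type of the downloaded asset depends on what the reference points to:
//@input Asset.RemoteReferenceAsset remoteRef

script.remoteRef.downloadAsset(function(asset) {
    // Called once the asset has finished downloading.
    print("Downloaded remote asset: " + asset.name);
}, function() {
    print("Failed to download remote asset");
});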
RemoteServiceModule
Provides access to remote services. For Spectacles, this module provides access to the open internet.
Asset.RemoteServiceModule
RenderMesh
Represents a mesh asset. See also: RenderMeshVisual.
Asset.RenderMesh
[Exposes User Data] ScanModule
Asset for detecting an object through the Scan system.
Asset.ScanModule
ScanModule.Contexts
ScriptAsset
Represents a JavaScript script which can be used to add logic in your Lens.
Asset.ScriptAsset
SegmentationModel
Segmentation model used for SegmentationTextureProvider.
Asset.SegmentationModel
TextToSpeechModule
Allows generation of speech from a given text. You can use only one TextToSpeechModule in a Lens. However, its methods can be called multiple times in parallel if needed.
Asset.TextToSpeechModule
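A rough sketch of synthesizing speech and playing the result through an AudioComponent; the options and callback parameters follow the Text To Speech guide but should be treated as assumptions:
//@input Asset.TextToSpeechModule tts
//@input Component.AudioComponent audioComponent

var ttsOptions = TextToSpeech.Options.create();
script.tts.synthesize("Hello from my Lens", ttsOptions, function(audioTrackAsset) {
    // Play the generated audio once it is ready.
    script.audioComponent.audioTrack = audioTrackAsset;
    script.audioComponent.play(1);
}, function(error, description) {
    print("TTS error: " + error + " " + description);
});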
Texture
Represents a texture file asset.
Asset.Texture
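A minimal sketch of swapping a texture on an Image component at runtime; baseTex is an assumption about the parameter name of the image's default material:
//@input Asset.Texture newTexture
//@input Component.Image image

// Replace the image's texture (assumes the material exposes a baseTex parameter).
script.image.mainPass.baseTex = script.newTexture;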
UpperBodyTrackingAsset
An asset containing the upper body tracker. It is optimized to track with the face and in selfie use cases.
Asset.UpperBodyTrackingAsset
VFXAsset
Defines a VFX to use with VFX Component. For more information, see the VFX Guide.
Asset.VFXAsset
[DEPRECATED] Vec2AnimationTrack
Represents an animation track using vec2 value keyframes.
Asset.Vec2AnimationTrack
Vec2AnimationTrackKeyFramed
[DEPRECATED] Vec2AnimationTrackKeyFramed
Represents an animation track using vec2 value keyframes.
Asset.Vec2AnimationTrackKeyFramed
[DEPRECATED] Vec3AnimationTrack
Represents an animation track using vec3 value keyframes.
Asset.Vec3AnimationTrack
Vec3AnimationTrackKeyFramed, Vec3AnimationTrackXYZ
[DEPRECATED] Vec3AnimationTrackKeyFramed
Represents an animation track using vec3 value keyframes.
Asset.Vec3AnimationTrackKeyFramed
[DEPRECATED] Vec3AnimationTrackXYZ
Represents a vec3 animation track composed of separate x, y, and z animation tracks.
Asset.Vec3AnimationTrackXYZ
[DEPRECATED] Vec4AnimationTrack
Represents an animation track using vec4 value keyframes.
Asset.Vec4AnimationTrack
Vec4AnimationTrackKeyFramed
[DEPRECATED] Vec4AnimationTrackKeyFramed
Represents an animation track using vec4 value keyframes.
Asset.Vec4AnimationTrackKeyFramed
VoiceMLModule
VoiceML Module allows voice input and commands. It enables transcription of speech, detection of keywords within the transcription and intents, as well as system commands (such as "Take a Snap"). You can use one VoiceML Module per Lens.
Asset.VoiceMLModule
VoiceMLModule.AnswerStatusCodes, VoiceMLModule.NlpResponsesStatusCodes, VoiceMLModule.SpeechRecognizer
Methods
enableSystemCommands() : void
Allows the user to issue voice commands which VoiceML executes on their behalf. Currently supported commands: "Take a Snap", "Start Recording", "Stop Recording". When a command is detected, it is automatically executed by the system and returned as part of the NlpCommandResponse in the onListeningUpdate callback. You can retrieve the command that was executed using the following snippet:
var onUpdateListeningEventHandler = function(eventArgs) {
    var commandResponses = eventArgs.getCommandResponses();
    var nlpResponseText = "";
    for (var i = 0; i < commandResponses.length; i++) {
        var commandResponse = commandResponses[i];
        nlpResponseText += "Command Response: " + commandResponse.modelName + "\n command: " + commandResponse.command;
    }
    // Print the collected command responses.
    print(nlpResponseText);
};
startListening(VoiceML.ListeningOptions options) : void
(Exposes User Data)
Starts transcribing the user's audio. The NLP model is connected and sends back results through an event; transcription and interim results can optionally be requested. Note that you can only call startListening after microphone permissions have been granted; it is recommended to call startListening only after the onListeningEnabled callback has been called.
Properties
onListeningDisabled : ()
(readonly)
Registers a callback which will be called when microphone permissions are revoked from the Lens. stopListening() is implicitly called in that case.
onListeningEnabled : ()
(readonly)
Registers a callback which will be called when microphone permissions are granted to the Lens and the microphone is initialized and actively listening. The expected design pattern is to start the listening session once those permissions have been granted:
//@input Asset.VoiceMLModule vmlModule
// `options` is a VoiceML.ListeningOptions configured elsewhere in the script.
var onListeningEnabledHandler = function() {
    script.vmlModule.startListening(options);
};
script.vmlModule.onListeningEnabled.add(onListeningEnabledHandler);
onListeningError :
(readonly)
Registers a callback which will be called if the VoiceML module can't process the inputs. Most errors are due to network connectivity or misconfigured NLP inputs.
onListeningUpdate :
(readonly)
Registers a callback which will be called with interim transcriptions and related NLP model results.
Inherited Methods
isOfType(String type) : Boolean
Returns true if the object matches or derives from the passed in type.
isSame(ScriptObject other) : Boolean
Returns true if this object is the same as other. Useful for checking if two references point to the same thing.
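A small usage sketch of the inherited type check, assuming a generic asset input:
//@input Asset someAsset

if (script.someAsset.isOfType("Asset.Texture")) {
    print("This asset is a Texture");
}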
Examples
//@input Asset.VoiceMLModule vmlModule {"label": "Voice ML Module"}
//@input Asset.AudioTrackAsset audioTrack
var options = VoiceML.ListeningOptions.create();
options.speechRecognizer = VoiceMLModule.SpeechRecognizer.Default;
//General Option
options.shouldReturnAsrTranscription = true;
options.shouldReturnInterimAsrTranscription = true;
//Speech Context
var phrasesOne = ["carrot", "tomato"];
var boostValueOne = 5;
options.addSpeechContext(phrasesOne,boostValueOne);
var phrasesTwo = ["orange", "apple"];
var boostValueTwo = 6;
options.addSpeechContext(phrasesTwo,boostValueTwo);
//NLPKeywordModel
var nlpKeywordModel = VoiceML.NlpKeywordModelOptions.create();
nlpKeywordModel.addKeywordGroup("Vegetable", ["carrot", "tomato"]);
nlpKeywordModel.addKeywordGroup("Fruit", ["orange", "apple"]);
//Command
var nlpIntentModel = VoiceML.NlpIntentsModelOptions.create("VOICE_ENABLED_UI");
nlpIntentModel.possibleIntents = ["next", "back", "left", "right", "up", "down", "first", "second", "third", "fourth", "fifth", "sixth", "seventh", "eighth", "ninth", "tenth"];
options.nlpModels =[nlpKeywordModel, nlpIntentModel];
var onListeningEnabledHandler = function() {
script.vmlModule.startListening(options);
};
var onListeningDisabledHandler = function() {
script.vmlModule.stopListening();
};
var getErrorMessage = function(response) {
var errorMessage = "";
switch (response) {
case "#SNAP_ERROR_INDECISIVE":
errorMessage = "indecisive";
break;
case "#SNAP_ERROR_INCONCLUSIVE":
errorMessage = "inconclusive";
break;
case "#SNAP_ERROR_NONVERBAL":
errorMessage = "non verbal";
break;
case "#SNAP_ERROR_SILENCE":
errorMessage = "too long silence";
break;
default:
if (response.includes("#SNAP_ERROR")) {
errorMessage = "general error";
} else {
errorMessage = "unknown error";
}
}
return errorMessage;
};
var parseKeywordResponses = function(keywordResponses) {
var keywords = [];
var code = "";
for (var kIterator = 0; kIterator < keywordResponses.length; kIterator++) {
var keywordResponse = keywordResponses[kIterator];
switch (keywordResponse.status.code) {
case VoiceMLModule.NlpResponsesStatusCodes.OK:
code= "OK";
for (var keywordsIterator = 0; keywordsIterator < keywordResponse.keywords.length; keywordsIterator++) {
var keyword = keywordResponse.keywords[keywordsIterator];
if (keyword.includes("#SNAP_ERROR")) {
var errorMessage = getErrorMessage(keyword);
print("Keyword Error: " + errorMessage);
break;
}
keywords.push(keyword);
}
break;
case VoiceMLModule.NlpResponsesStatusCodes.ERROR:
code = "ERROR";
print("Status Code: "+code+ " Description: " + keywordResponse.status.code.description);
break;
default:
print("Status Code: No Status Code");
}
}
return keywords;
};
var parseCommandResponses = function(commandResponses) {
var commands = [];
var code = "";
for (var iIterator = 0; iIterator < commandResponses.length; iIterator++) {
var commandResponse = commandResponses[iIterator];
switch (commandResponse.status.code) {
case VoiceMLModule.NlpResponsesStatusCodes.OK:
code= "OK";
var command = commandResponse.intent;
if (command.includes("#SNAP_ERROR")) {
var errorMessage = getErrorMessage(command);
print("Command Error: " + errorMessage);
break;
}
commands.push(commandResponse.intent);
break;
case VoiceMLModule.NlpResponsesStatusCodes.ERROR:
code = "ERROR";
print("Status Code: "+code+ " Description: " + commandResponse.status.code.description);
break;
default:
print("Status Code: No Status Code");
}
}
return commands;
};
var onUpdateListeningEventHandler = function(eventArgs) {
if (eventArgs.transcription.trim() == "") {
return;
}
print("Transcription: " + eventArgs.transcription);
if (!eventArgs.isFinalTranscription) {
return;
}
print("Final Transcription: " + eventArgs.transcription);
//Keyword Results
var keywordResponses = eventArgs.getKeywordResponses();
var keywords = parseKeywordResponses(keywordResponses);
if (keywords.length > 0) {
var keywordResponseText = "";
for (var kIterator=0;kIterator<keywords.length;kIterator++) {
keywordResponseText += keywords[kIterator] +"\n";
}
print("Keywords:" + keywordResponseText);
}
//Command Results
var commandResponses = eventArgs.getIntentResponses();
var commands = parseCommandResponses(commandResponses);
if (commands.length > 0) {
var commandResponseText = "";
for (var iIterator=0;iIterator<commands.length;iIterator++) {
commandResponseText += commands[iIterator]+"\n";
}
print("Commands: " + commandResponseText);
}
};
var onListeningErrorHandler = function(eventErrorArgs) {
print("Error: " + eventErrorArgs.error + " desc: "+ eventErrorArgs.description);
};
//VoiceML Callbacks
script.vmlModule.onListeningUpdate.add(onUpdateListeningEventHandler);
script.vmlModule.onListeningError.add(onListeningErrorHandler);
script.vmlModule.onListeningEnabled.add(onListeningEnabledHandler);
script.vmlModule.onListeningDisabled.add(onListeningDisabledHandler);