Version: 4.55.1

Scripting ML Component

Although you can configure and run your ML models using the ML Component found in the Lens Studio UI, its API gives you additional flexibility to fine-tune how your model runs. You can also use it to pass data into your model or get data out of it.

This page goes through some common scenarios when scripting the ML Component. You can find the full list of APIs in the API documentation.

Take a look at the Scripting guide to learn how to use the scripting system.

ML Component Lifecycle

For reference, you can see the lifecycle of an ML Component below.

Create ML Component

You can create the ML Component in the same way you would create other components in script.

var mlComponent = script.sceneObject.createComponent('MLComponent');

Like with the ML Component UI, we need to initialize its settings. In particular:

We can pass a model into our script by creating a script input and selecting our model in the Inspector panel of the object that contains the script.

//@input Asset.MLAsset model

Then we can assign the model to the created ML Component.

mlComponent.model = script.model;

Next, configure and build the ML Component’s input placeholders using the input builder.

//@input string inputName = "input"
var inputBuilder = MachineLearning.createInputBuilder();
inputBuilder.setName(script.inputName);
inputBuilder.setShape(new vec3(64, 64, 3));
var inputPlaceholder = inputBuilder.build();

And its output placeholders using output builder:

//@input string outputName = "probs"
var outputBuilder = MachineLearning.createOutputBuilder();
outputBuilder.setName(script.outputName);
outputBuilder.setShape(new vec3(1, 1, 1));
outputBuilder.setOutputMode(MachineLearning.OutputMode.Data);
var outputPlaceholder = outputBuilder.build();

A transformer can also be created in script and attached via the setTransformer method of the input and output placeholder builders:

var transformer = MachineLearning.createTransformerBuilder()
    .setVerticalAlignment(VerticalAlignment.Center)
    .setHorizontalAlignment(HorizontalAlignment.Center)
    .setRotation(TransformerRotation.Rotate180)
    .setFillColor(new vec4(0, 0, 0, 1))
    .build();
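
The built transformer can then be attached when constructing a placeholder. A sketch, reusing the input builder calls shown above (this code only runs inside Lens Studio):

var inputBuilder = MachineLearning.createInputBuilder();
inputBuilder.setName('input');
inputBuilder.setShape(new vec3(64, 64, 3));
// Attach the transformer built above to this input placeholder
inputBuilder.setTransformer(transformer);
var inputPlaceholder = inputBuilder.build();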

Build ML Component

Now that we’ve set up our input and output, we can build our ML Component so that it is ready to run our model when we need it. Provide an array of all input and output placeholders as the parameter of the build function:

mlComponent.build([inputPlaceholder, outputPlaceholder]);

You can also set up your model, input, and output in the Inspector panel but build the component in script, by disabling the Auto Build checkbox and calling build with an empty array:

var mlComponent = script.getComponent('Component.MLComponent');
mlComponent.build([]);

When an empty array is passed as the parameter, the input and output settings are taken from the ML Component settings specified in the Inspector panel.

Callbacks

There are a couple of useful callback functions available for hooking logic to the state of your ML Component:

onLoadingFinished is called when the model has finished building, and right before it can start running. After the model has been loaded you can start working with its inputs and outputs.

mlComponent.onLoadingFinished = onLoadingFinished;

function onLoadingFinished() {
    // do something
    // access inputs and outputs
    // start running
}

Please be sure to set this callback before calling build, otherwise it may not be executed. If you are using the Auto Build checkbox, set the callback in the script’s Initialized event.

onRunningFinished is called each time the model finishes running. You can use this callback in both synchronous and asynchronous modes, whether running the model once or on update.

For example, to use this callback to process an ML Component’s output when running the model asynchronously you can do:

mlComponent.onRunningFinished = onRunningFinished;
mlComponent.runImmediate(false);

function onRunningFinished() {
    // process output
}

These callbacks are properties which means that if you assign another function to the callback, it will replace the previous one.

Accessing inputs and outputs

You can access inputs and outputs only after the model has been built.

For example, setting an input texture and saving a reference to the input data:

//@input Asset.Texture deviceCameraTexture
//@input string textureInputName
//@input string dataInputName
var inputData;

function onLoadingFinished() {
    var input = mlComponent.getInput(script.textureInputName);
    input.texture = script.deviceCameraTexture;
    var input1 = mlComponent.getInput(script.dataInputName);
    inputData = input1.data;
}

All input and output data are Float32Arrays.

You can’t overwrite the data array itself (e.g. input1.data = newData); instead, update its values in place:

// ...
for (var i = 0; i < inputData.length; i++) {
    inputData[i] = someNewValue;
}
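
For example, a common preprocessing step is to copy normalized pixel values into the input array. This sketch uses plain JavaScript on a Float32Array; the raw values and names are illustrative:

```javascript
// Hypothetical example: normalize 8-bit channel values (0-255)
// into the [0, 1] range expected by many models.
var pixelValues = [0, 128, 255, 64]; // illustrative raw channel values
var inputData = new Float32Array(pixelValues.length);

for (var i = 0; i < inputData.length; i++) {
    inputData[i] = pixelValues[i] / 255.0; // update values in place
}
```

In a real Lens, inputData would be the array obtained from getInput as shown above, so the values land directly in the model's input.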

Output data or textures can be accessed in the same way:

var outputData = mlComponent.getOutput(script.output1Name).data;
var outputTexture = mlComponent.getOutput(script.output2Name).texture;
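
Once the output data is available, it can be post-processed with plain JavaScript. For instance, for a classification model the most likely class is a simple argmax over the Float32Array (a sketch with illustrative probability values):

```javascript
// Hypothetical example: find the index of the highest value
// in a model's output data array (argmax over class probabilities).
var outputData = new Float32Array([0.1, 0.7, 0.2]); // illustrative probabilities

var bestIndex = 0;
for (var i = 1; i < outputData.length; i++) {
    if (outputData[i] > outputData[bestIndex]) {
        bestIndex = i;
    }
}
// bestIndex now holds the index of the most likely class
```

In a real Lens, outputData would be the array obtained from getOutput, typically read inside the onRunningFinished callback.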

It is more efficient to save a reference to the input/output data array or texture than to access it through getInput/getOutput every time.

Running ML Component

Depending on what your model does, you can tell the ML Component to run inference:

  • Immediately / scheduled
  • Synchronously / asynchronously
  • Once / run always

runImmediate(bool isSync): starts running the ML model now, either waiting for it to finish before returning (sync) or letting it finish naturally (async). Either way, onRunningFinished will be invoked during the next update after inference finishes.

Examples:

runImmediate(false): runs the ML Component once, asynchronously (use the onRunningFinished callback)

runImmediate(true): runs the ML Component once, synchronously. The next line of code won’t be executed until the model finishes running, so the output can be processed right away.
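
For example, a synchronous run lets you read the output immediately after the call returns. A sketch using the names from the examples above (this code only runs inside Lens Studio):

mlComponent.runImmediate(true);
// Safe to read right away: the model has already finished running
var outputData = mlComponent.getOutput(script.outputName).data;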

runScheduled(bool isRecurring, MachineLearning.FrameTiming startTiming, MachineLearning.FrameTiming endTiming): starts running at the specified startTiming and waits for the run to finish at the specified endTiming (or lets it finish naturally if endTiming == FrameTiming.None).

  • isRecurring: if true, inference will start each time startTiming is reached, as long as an inference isn’t already in progress.
  • startTiming: specifies when the ML Component starts running.
  • endTiming: specifies when the ML Component finishes running.

Frame Timing

startTiming is the point where all the inputs are ready, and endTiming is the point at which the output is required. There are several options available for the startTiming and endTiming:

MachineLearning.FrameTiming.Update: start and/or end during the ML Component’s update, which naturally happens before any script’s update is invoked. This is the best option for preparing tracking-type data: the model runs before scripts in the current frame, so any script that accesses the tracking data will already have it ready.

MachineLearning.FrameTiming.LateUpdate: start and/or end during the ML Component’s LateUpdate, after all scripts have updated, but before they receive LateUpdate.

MachineLearning.FrameTiming.OnRender: start and/or end at a specific point during frame rendering. This is the best option when the input texture has to be prepared during the frame and the output texture should also be used in that same frame.

MachineLearning.FrameTiming.None: only valid as an end timing; means don’t wait. Inference will start again each time the previous run finishes (asynchronous running).

The chart below shows when these moments happen in relation to script events:

If startTiming == endTiming, the model runs synchronously. Otherwise, the model runs in parallel while still guaranteeing that the output data is available at a certain moment in time.

If startTiming != endTiming, make sure not to update inputs or read output data between these two moments: the ML Component is running on a parallel thread, so the result may not be reliable.

Enabling the Auto Run checkbox on the ML Component is equivalent to: runScheduled(true, MachineLearning.FrameTiming.OnRender, MachineLearning.FrameTiming.OnRender)

Use the Scene Config panel to specify exactly when the OnRender frame timing happens:

Check out the Style Transfer Template or the Object Detection Template to see how different update modes for the ML Component are used.

You can use a Behavior script to run the ML Component with different settings without writing any code:
