Lens Scripting API

    Enumeration InferenceMode

    Inference modes used by MLComponent.inferenceMode. Each mode describes how the neural network will be run.

    //@input Component.MLComponent mlComponent
    // Force the neural network to run on the CPU.
    script.mlComponent.inferenceMode = MachineLearning.InferenceMode.CPU;
    Enumeration Members

    Accelerator: number

    MLComponent will attempt to use a dedicated hardware accelerator to run the neural network. If the device doesn't support it, GPU mode will be used instead.

    Auto: number

    MLComponent will automatically decide how to run the neural network based on what is supported. It will start with Accelerator, then fall back to GPU, then CPU.
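    The fallback order used by Auto can be sketched as the following selection logic. This is an illustration only, not engine code; `deviceSupports` is a hypothetical capability check, not part of the Lens Scripting API.

    ```javascript
    // Sketch of Auto mode's fallback chain: try Accelerator first,
    // then GPU, then CPU. CPU is assumed always available.
    function resolveAutoMode(deviceSupports) {
        var order = ["Accelerator", "GPU", "CPU"];
        for (var i = 0; i < order.length; i++) {
            if (deviceSupports(order[i])) {
                return order[i];
            }
        }
        // CPU mode is available on all devices.
        return "CPU";
    }
    ```

    For example, on a device without a dedicated accelerator but with GPU support, this sketch resolves to "GPU".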

    CPU: number

    MLComponent will run the neural network on the CPU. Available on all devices.

    GPU: number

    MLComponent will attempt to run the neural network on the GPU. If the device doesn't support it, CPU mode will be used instead.
