Accelerator: MLComponent will attempt to run the neural network on a dedicated hardware accelerator. If the device doesn't support one, GPU mode is used instead.
Auto: MLComponent will automatically decide how to run the neural network based on what the device supports, starting with Accelerator, then falling back to GPU, then CPU.
CPU: MLComponent will run the neural network on the CPU. Available on all devices.
GPU: MLComponent will attempt to run the neural network on the GPU. If the device doesn't support it, CPU mode is used instead.
Inference modes used by MLComponent.inferenceMode. Each mode describes how the neural network will be run.

Example
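To make the automatic fallback order concrete, here is a minimal sketch in plain JavaScript. This is not the MLComponent API itself: the `InferenceMode` object, the `resolveAutoMode` helper, and the device capability flags are all hypothetical, written only to illustrate the Accelerator → GPU → CPU selection described above.

```javascript
// Hypothetical mode names mirroring the descriptions above.
const InferenceMode = { Accelerator: "Accelerator", GPU: "GPU", CPU: "CPU" };

// Illustrative sketch of how the automatic mode could pick a backend:
// try the dedicated accelerator first, then the GPU, then the CPU.
function resolveAutoMode(device) {
  if (device.hasAccelerator) return InferenceMode.Accelerator;
  if (device.hasGPU) return InferenceMode.GPU;
  return InferenceMode.CPU; // CPU is available on all devices
}

console.log(resolveAutoMode({ hasAccelerator: false, hasGPU: true })); // "GPU"
```

In an actual script, the mode is set on the component's inferenceMode property before running the network; consult the MLComponent reference for the exact enum names exposed to scripts.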