Compatibility
SnapML takes care of running your model in the most optimized way possible, leveraging device-specific hardware acceleration when available. This page provides information on which ML layers are available for acceleration.
✔️ means it is available. ❌ means it is not available.
Layers | CPU (all devices) | iPhone GPU | iPhone NPU | Android GPU / Spectacles (2021) GPU | Comments |
---|---|---|---|---|---|
Absolute | ✔️ | ✔️ | ✔️ | ✔️ | |
Argmax | ✔️ | ✔️ | ❌ | ✔️ | Supported along the channel axis |
Argmin | ✔️ | ✔️ | ❌ | ✔️ | Supported along the channel axis |
Batch Matrix Multiplication | ✔️ | ✔️ | ❌ | ❌ | |
Batch Normalization | ✔️ | ✔️ | ✔️ | ✔️ | Supports batch normalization fusion |
Concat | ✔️ | ✔️ | ✔️ | ✔️ | Supports concatenation along the batch and channel axes |
Constant | ✔️ | ✔️ | ❌ | ✔️ | Supports constant tensors in eltwise operations on Android GPU |
Convolution | ✔️ | ✔️ | ✔️ | ✔️ | Supports the same dilation and stride in height and width; padding can't be larger than the kernel size on CPU and GPU; Android GPU supports regular convolution (group = 1) and depthwise convolution (see the convolution example below the table) |
Transposed Convolution | ✔️ | ✔️ | ✔️ | ✔️ | Supports a dilation of 1; the same stride, padding, and kernel size in height and width; regular transposed convolution (group = 1) and depthwise transposed convolution; padding can't be larger than the kernel size |
Cos | ✔️ | ✔️ | ❌ | ✔️ | |
Eltwise Add | ✔️ | ✔️ | ✔️ | ✔️ | Supports operations among multiple tensors, with broadcasting along the channel or height/width axes |
Eltwise Div | ✔️ | ✔️ | ❌ | ✔️ | Supports operations among multiple tensors where the first tensor has the largest size, with broadcasting along the channel or height/width axes |
Eltwise Max | ✔️ | ✔️ | ✔️ | ✔️ | Supports operations among multiple tensors; all tensors must have the same dimensions |
Eltwise Min | ✔️ | ✔️ | ✔️ | ✔️ | Supports operations among multiple tensors; all tensors must have the same dimensions |
Eltwise Mul | ✔️ | ✔️ | ✔️ | ✔️ | Supports operations among multiple tensors, with broadcasting along the channel or height/width axes |
Eltwise Sub | ✔️ | ✔️ | ❌ | ✔️ | Supports operations among multiple tensors where the first tensor has the largest size, with broadcasting along the channel or height/width axes |
Elu | ✔️ | ✔️ | ✔️ | ✔️ | |
Embedding | ✔️ | ✔️ | ❌ | ✔️ | |
Exponential | ✔️ | ✔️ | ❌ | ✔️ | |
Flatten | ✔️ | ✔️ | ❌ | ✔️ | |
Fully Connected | ✔️ | ✔️ | ✔️ | ✔️ | |
Instance Normalization | ✔️ | ✔️ | ❌ | ✔️ | |
Inverse | ✔️ | ✔️ | ❌ | ✔️ | |
Leaky ReLU | ✔️ | ✔️ | ✔️ | ✔️ | |
Linear | ✔️ | ✔️ | ✔️ | ✔️ | |
Log | ✔️ | ✔️ | ✔️ | ✔️ | |
LSTM | ✔️ | ✔️ | ❌ | ❌ | |
Padding | ✔️ | ✔️ | ✔️ | ✔️ | Supports CONSTANT, REFLECT and SYMMETRIC padding types |
Permute | ✔️ | ✔️ | ❌ | ✔️ | |
Pooling Average | ✔️ | ✔️ | ✔️ | ✔️ | Padding can't be larger than kernel size |
Pooling Global Average | ✔️ | ✔️ | ✔️ | ✔️ | |
Pooling Max | ✔️ | ✔️ | ✔️ | ✔️ | Padding can't be larger than kernel size |
Power | ✔️ | ✔️ | ✔️ | ✔️ | |
Parametric ReLU | ✔️ | ✔️ | ✔️ | ✔️ | |
Reduce Mean | ✔️ | ✔️ | ❌ | ✔️ | Supports reduction along the N, C, HW, and HWC axes on CPU and iPhone GPU, and along the N, C, and HW axes on Android GPU |
Reduce Max | ✔️ | ✔️ | ❌ | ✔️ | Supports reduction along the N, C, HW, and HWC axes on CPU and iPhone GPU, and along the N, C, and HW axes on Android GPU |
ReLU | ✔️ | ✔️ | ✔️ | ✔️ | |
ReLU6 | ✔️ | ✔️ | ✔️ | ✔️ | |
Reshape | ✔️ | ✔️ | ❌ | ✔️ | |
Resize Bilinear | ✔️ | ✔️ | ✔️ | ✔️ | Supports the same scaling factor in height and width; setting align_corners to true during training is recommended, since that implementation is consistent between TensorFlow and PyTorch (see the resize example below the table) |
Resize Nearest Neighbor | ✔️ | ✔️ | ✔️ | ✔️ | Supports the same scaling factor in height and width |
RNN | ✔️ | ✔️ | ❌ | ❌ | |
Sigmoid | ✔️ | ✔️ | ✔️ | ✔️ | |
Sin | ✔️ | ✔️ | ❌ | ✔️ | |
Slice | ✔️ | ✔️ | ❌ | ✔️ | |
Softmax | ✔️ | ✔️ | ✔️ | ✔️ | Supported along the channel axis |
Softplus | ✔️ | ✔️ | ❌ | ✔️ | |
Softsign | ✔️ | ✔️ | ❌ | ✔️ | |
Square Root | ✔️ | ✔️ | ❌ | ✔️ | |
Tanh | ✔️ | ✔️ | ✔️ | ✔️ | |
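For reference, here is a minimal sketch, assuming a PyTorch training pipeline, of convolution settings that stay within the constraints listed above: the same stride and dilation in height and width, padding no larger than the kernel size, and either a regular (group = 1) or depthwise convolution. The layer sizes are illustrative only.

```python
import torch
import torch.nn as nn

# Regular convolution (groups=1): the same stride and dilation in H and W,
# and padding (1) no larger than the kernel size (3).
regular_conv = nn.Conv2d(
    in_channels=16, out_channels=32,
    kernel_size=3, stride=2, dilation=1, padding=1,
    groups=1,
)

# Depthwise convolution: groups equals the number of input channels.
depthwise_conv = nn.Conv2d(
    in_channels=16, out_channels=16,
    kernel_size=3, stride=1, padding=1,
    groups=16,
)

x = torch.randn(1, 16, 64, 64)
print(regular_conv(x).shape)    # torch.Size([1, 32, 32, 32])
print(depthwise_conv(x).shape)  # torch.Size([1, 16, 64, 64])
```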
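And a minimal sketch, again assuming PyTorch, of the Resize Bilinear recommendation: the same scale factor in height and width, with align_corners set to true so the resize behaves consistently between TensorFlow and PyTorch.

```python
import torch
import torch.nn.functional as F

# Bilinear upsampling with the same scale factor for H and W;
# align_corners=True keeps the TensorFlow and PyTorch implementations consistent.
x = torch.randn(1, 3, 32, 32)
y = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
print(y.shape)  # torch.Size([1, 3, 64, 64])
```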