SnapML Overview

In this guide, we will go over the key concepts behind how machine learning works within Lens Studio and Snapchat. Many of these ideas are common to machine learning in general, so you may be familiar with a few of them already. If not, don’t worry, we’re here to help!

If you've built an ML-powered app and would like to explore leveraging Snapchat's global, highly engaged audience to drive discovery and user acquisition, please get in touch. For qualified partners, there are both no-cost organic and premium opportunities with SnapML. We look forward to hearing from you!

Machine Learning and Lens Studio

Practically speaking, if you’ve been using segmentation, skeletal tracking, or other such features, you might already be using machine learning in Lens Studio without realizing it!

In addition to the built-in ML (machine learning) models which come with Lens Studio, Lens Studio 3.0 introduces SnapML. SnapML allows you to add your own ML models to your Lenses, extending Lens Studio beyond its built-in capabilities!

Lenses built with SnapML are distributed in the same way as other Lenses, which means they are available to millions of Snapchatters without anyone having to download a new app or do anything additional!

How it Works

The capabilities you can add to Lens Studio and Lenses depend on the ML models that you have. An ML model provides the instructions for applying a learned algorithm to some input in order to arrive at a result.

 You can import models created using many different frameworks, such as PyTorch, TensorFlow, and Roboflow, as well as any framework compatible with ONNX.

In the case of Lens Studio, the outputs of models are used to enable features in Lenses.

For example, one model may take the camera input, run it through its computational graph, and arrive at a texture in which the sky is white and everything else is black. In other words, this model segments the sky.

Using ML Models

Models you bring into Lens Studio act like black boxes, in that Lens Studio does not know exactly what they need or what they provide. In other words, since every model may have different inputs (what it needs) and outputs (what it provides), you’ll need to tell Lens Studio what it should provide to the model, and how to use the result.

 Lens Studio can use the Open Neural Network Exchange (.onnx) or TensorFlow Frozen Graph (.pb) format.

For example, in the sky segmentation case above, we’d tell Lens Studio to pass in the camera texture so the model can detect where the sky is in the frame. But another model may only need a texture cropped around the face, such as a model which outputs (classifies) whether a person is wearing glasses. Yet another model may not need any texture at all.

In the same way, the output of each model may be different. While the sky segmentation model described above would output a black and white image which can be used as a mask texture, a model which classifies whether someone is wearing glasses would only need to output the probability that glasses are present.
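
To make this concrete, here is a minimal sketch of how those two input contracts might look from a Lens script, assuming ML Components (introduced below) whose models expose inputs named "image" and "face"; the actual names come from whatever model you import, so yours will differ:

    //@input Component.MLComponent skyML
    //@input Component.MLComponent glassesML
    //@input Asset.Texture cameraTexture
    //@input Asset.Texture faceCropTexture

    // (Inputs are available once each ML Component has finished building.)
    // The sky segmentation model needs the full camera frame...
    script.skyML.getInput("image").texture = script.cameraTexture;
    // ...while the glasses classifier only needs a crop around the face,
    // e.g. a Face Crop Texture created in the Resources panel.
    script.glassesML.getInput("face").texture = script.faceCropTexture;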

This means SnapML can implement not only computer vision use cases, like detecting glasses, but also machine learning based visual effects, like style transfer!

 A model can output data or images. If your output is an image, you can use it directly on an Image Component, or anywhere else a texture is used!
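
For instance, a segmentation mask output could be wired straight to an Image Component from a script. A minimal sketch, assuming an ML Component with a texture-mode output named "mask" and an Image using the default material (where "baseTex" is the base texture parameter):

    //@input Component.MLComponent mlComponent
    //@input Component.Image image

    // Use the model's texture output like any other texture: here, as the
    // base texture of the Image Component's material.
    script.image.mainPass.baseTex = script.mlComponent.getOutput("mask").texture;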

ML Component

You instruct Lens Studio to use a model via the ML Component. With it, you can not only tell Lens Studio what data to pass into the model and what data comes out of it, but also how the model should be run: every frame, when the user takes an action, or even on another thread so your Lens can continue to run in real time!

In other words, these models are used in Lens Studio via the ML Component, in a similar manner to how textures are used on the Image Component.

 SnapML takes care of running your model in the most optimized way possible, leveraging device-specific hardware acceleration when available. Take a look at the Compatibility Table for more information.
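
As an illustration, a script-driven setup might look like the sketch below. The placeholder name "image" is an assumption for this example; runScheduled and runImmediate are the two ways to trigger inference, and scheduled runs execute asynchronously so the Lens keeps rendering in real time:

    //@input Asset.MLAsset model
    //@input Asset.Texture inputTexture

    // Create the ML Component and hand it the imported model asset.
    var mlComponent = script.getSceneObject().createComponent("Component.MLComponent");
    mlComponent.model = script.model;
    mlComponent.onLoadingFinished = onLoadingFinished;
    // Building with an empty array uses the input/output placeholders
    // configured on the model asset itself.
    mlComponent.build([]);

    function onLoadingFinished() {
        // Feed the model its input texture (e.g. the camera feed).
        mlComponent.getInput("image").texture = script.inputTexture;
        // Run once per frame, between frame updates.
        mlComponent.runScheduled(true, MachineLearning.FrameTiming.Update, MachineLearning.FrameTiming.Update);
        // Alternatively, run on demand (e.g. when the user taps):
        // mlComponent.runImmediate(false);
    }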

Templates

Lens Studio comes with seven templates which demonstrate how to use SnapML and the ML Component.

  • Classification: Useful as a starting point for binary classification type models which output the probability of something. The template comes with a model that returns the probability that someone is wearing glasses and triggers an effect based on this information (see the sketch after this list).
  • Object Detection: Useful as a starting point for object detection type models which output the location and probability of an object on the camera feed. The template comes with both a car and a food detection model, as well as a way to visually call out detections.
  • Style Transfer: Useful as a starting point for style transfer type models which take in a texture and return a modified texture. The template comes with an example style transfer model.
  • Custom Segmentation: Useful as a starting point for segmentation type models which take in a texture and return a segmented version of that texture. The template comes with a pizza segmentation model and uses the Material Editor to make the pizza look sizzling.
  • Ground Segmentation: A template which uses a segmentation model to segment the ground. The template comes with a way to replace the ground with a material and to occlude objects not on the ground.
  • Keyword Detection: Useful as a starting point for audio related ML models. The template comes with two models that can return the probability of a spoken word given a spectrogram analysis of the audio.
  • Multi Object Detection: Useful as a starting point for multi-class object detection ML models. The template comes with a model that allows you to detect 7 classes of objects: cat, dog, potted plant, TV, car, bottle, cup.
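
For the Classification template case, reading the model’s result from a script might look like this sketch; the output name "probability" and the 0.5 threshold are assumptions for illustration:

    //@input Component.MLComponent mlComponent
    //@input SceneObject glassesEffect

    script.mlComponent.onRunningFinished = function () {
        // A binary classifier outputs a single float in [0, 1].
        var prob = script.mlComponent.getOutput("probability").data[0];
        // Toggle the effect based on the predicted probability.
        script.glassesEffect.enabled = prob > 0.5;
    };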

These templates come with their own sample models so you don’t need to worry about training your own! That being said, these templates are designed with flexibility in mind, and you should be able to use them with similar models of your own.

For example, if you wanted to create a Lens that draws a detection box around household objects, you can bring in an object detection model trained on household objects, plug it into the Object Detection Template, and it will work.

 Check out the ML Templates Library to find more templates you can use, including foot tracking, eyebrow removal, tree segmentation, and more.

 It might take longer for your SnapML Lenses to be reviewed. Please refer to the Community Submission guide under Lens Statuses for more information.

Making Your Own Model

While models are built outside of Lens Studio, the template guides provide walkthroughs, as well as example code (Python notebooks), that you can use to build your own model!
