World Mesh
Capabilities
World Mesh provides a real-time 3D reconstruction of the real world based on what your device sees. This allows your experience to better match and respond to the environment that it’s in.
Take a look at the World Mesh, Simple World Mesh, and Depth Materials templates for examples of World Mesh in action. The World Mesh template also comes with an example of designing a fallback for devices that do not yet have World Mesh.
When a device provides a World Mesh, you have access to the following:
- World Mesh: An automatically generated mesh that represents the real world in 3D. You can use this to occlude AR effects based on the real world. Use it as you would any other mesh.
- Hit Test Results: Using the ray casting or hit test functions, you can learn about the mesh that was hit at a certain point, as well as its position and normal. Learn more about this in the API documentation or in the scripting section below.
- Depth Texture: You can get a texture representing the depth of the world in the camera view.
When using World Mesh, your Camera object should contain a Device Tracking component set to the World option so that the generated mesh can be accurately mapped to the real world.
World Mesh is available on:
- Devices with LiDAR
- Recent devices with ARKit and ARCore
- Spectacles (2021)
However, there are some differences to note:
- On non-LiDAR devices and Spectacles (2021), the World Mesh will change over time as the Lens continues to refine its understanding of the surfaces it sees.
- When LiDAR is available, doing a Hit Test will provide you with semantic information about the surface hit. Types provided are: wall, floor, ceiling, table, seat, window, door, and none.
In general, LiDAR will work better and be more accurate given the dedicated sensor. For example: World Mesh currently works better indoors than outdoors on devices without LiDAR. In addition, Depth Texture data will be more accurate on devices with LiDAR.
World Mesh 2.0 Release
World AR experiences refer to Lenses designed for the outward-facing camera, looking out at the world — in other words, not a selfie experience. With world mesh technology, the end user can place virtual objects on real-life surfaces. With that in mind, in order to create believable, truly immersive experiences, it is critical that world mesh technology can accurately reconstruct a space — like a room, street, or building — in 3D and in real time.
In 2021, Snap AR released the first version of World Mesh in Lens Studio for all devices. However, the technology was limited in its accuracy and precision. As part of Lens Studio version 4.55, Snap has released World Mesh 2.0, an update that represents a significant improvement in the accuracy, support, and precision of reconstructed meshes. Our novel approach to increasing mesh accuracy flips the script on why non-LiDAR world mesh wasn’t a developer’s first choice. In doing so, we also increased the number of supported phones with access to high-quality, 3D meshing in real time.
As shown in the visuals below, the previous technology produced much coarser and far less precise scans that ended up garbled and messy. Now, with the 2.0 technology, the result is much finer, more accurate, and more realistic.
| World Mesh 1.0 | World Mesh 2.0 |
| --- | --- |
| ![]() | ![]() |
Please be sure to update the Snapchat app to version 12.48 or higher.
Working with World Mesh 2.0
The following templates have been updated to take advantage of World Mesh 2.0:
Custom Location AR Updates
Even Custom Location AR benefits from the new and improved World Mesh 2.0. Custom Location AR puts the power in Lens Developers’ hands to map and create customized location AR experiences. It enables Lens Developers to map a specific location, upload it to Lens Studio, and author AR content for that specific site.

With Lens Studio 4.55, Custom Location AR is accessible to more people around the world as the Custom Location Creator Lens is now available for a wider variety of mobile phones (see full list here)! Snap has also introduced a colorized Location Mesh for improved surface understanding of a scanned location within Lens Studio, which makes the feature even more user-friendly for virtual object positioning. Check out the visual below, which shows just how big of a difference there is when using a colorized mesh.

World Mesh 2.0 Best Practices and Scanning Guidelines
Here are some of the new best practices when working with World Mesh 2.0:
- Scanning motion:
  - Make sure that your device is in constant motion.
  - Ideally, move your phone in a steady “sweeping figure 8” motion in front of you for a few seconds, while moving around in the space you are scanning. This will significantly improve the quality of the mesh you are getting.
  - Avoid remaining static in space and rotating your device around one spot.
  - Avoid extreme vertical viewing angles (pointing up and pointing down) and avoid fast camera motions.
- Positioning:
  - The maximum range of the meshing is 5m, and the minimum is 20cm. Try to move within 1-2m of objects you’re scanning, moving closer/further as needed to correct errors.
  - Make sure to cover as many viewpoints as you can: this will help fill holes in the mesh.
- Lighting conditions:
  - Make sure to scan with enough ambient brightness. Avoid scanning at dusk or night-time.
- If you are scanning thin objects, set the “Mesh resolution” setting to “Max”.
- Avoid scanning reflective/glass objects: the meshing quality is expected to be low in those cases.
- Observe the resulting mesh as much as possible while you are scanning, and try to correct potential artifacts by capturing as many perspectives as you can, and moving around and closer to the problematic areas.
Using World Mesh and Depth Textures
There are a variety of ways you can integrate geometric understanding of the world into your Lens!
Adding a World Mesh
You can add a World Mesh by pressing + > World Mesh in the Resources panel.
You can treat this mesh as you would with any other mesh. For example, you can display the World Mesh:
You can quickly display a mesh by adding it from the Objects panel > + > World Mesh.
Once a World Mesh is added, you can modify its settings via its API, or through the World Mesh resource in the Resources panel.
- Use Normals: whether surface normal information should be baked into the mesh.
- Classification (for devices with LiDAR): whether classification should be baked into the mesh. Note that in Lens Studio classification data will always be provided.
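For example, here is a minimal sketch of adjusting these settings from a script, mirroring the control pattern used in the scripting section below. It assumes the World Mesh is assigned to a RenderMeshVisual input named worldMesh:

// @input Component.RenderMeshVisual worldMesh

// Access the World Mesh provider through the mesh's control
var worldMeshControl = script.worldMesh.mesh.control;

// Bake surface normal information into the generated mesh
worldMeshControl.useNormals = true;

// Bake classification data into the mesh (devices with LiDAR)
worldMeshControl.meshClassificationFormat = MeshClassificationFormat.PerVertexFast;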
Adding a Depth Texture
In some cases, you may only need depth data, rather than a constructed mesh, such as to do a depth of field effect.
You can access the Depth Texture by going into the Resources panel > + > Depth Texture.
You can use this depth texture as an input into the Material Editor to create a shader based depth effect. Take a look at the examples in the Depth Materials Template.
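For example, here is a minimal sketch of wiring a Depth Texture into a custom material from a script. The depthMap parameter name is a hypothetical assumption; it stands in for whatever texture parameter your Material Editor graph actually exposes:

// @input Asset.Texture depthTexture
// @input Asset.Material depthMaterial

// Assign the Depth Texture to the material's texture parameter.
// Note: `depthMap` is a hypothetical name; it must match the
// parameter defined in your Material Editor graph.
script.depthMaterial.mainPass.depthMap = script.depthTexture;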
Like World Mesh, Depth Texture data can come from a variety of sources. For example, you may get depth data from a device with depth sensors in the front camera, through Spectacles' dual cameras, or through ARCore’s depth data. When World Mesh is used, depth data will be provided by World Mesh.
Scripting and World Mesh
With scripting, you can get additional information about the World Mesh so that your Lens can respond to surfaces that it sees.
For example, the script below sends a ray from the center of the screen into the world until it hits a mesh, then prints out information about the point on the mesh at that location.
// @input Component.DeviceTracking tracker
// @input Component.RenderMeshVisual worldMesh

// Screen position we want to do a hit test on
var centerOfScreenPos = new vec2(0.5, 0.5);

// Check if World Mesh is available
if (script.tracker.worldTrackingCapabilities.sceneReconstructionSupported) {
    // Get the world mesh API
    var worldTrackingProvider = script.worldMesh.mesh.control;

    // Set up world mesh to check classification
    // Note: Only available on devices with LiDAR
    worldTrackingProvider.meshClassificationFormat =
        MeshClassificationFormat.PerVertexFast;
    worldTrackingProvider.useNormals = true;

    // Do a hit test
    var hitTestResults = script.tracker.hitTestWorldMesh(centerOfScreenPos);
    hitTestResults.forEach(function (hitTestResult) {
        // Get vertex information
        var position = hitTestResult.position;
        var normal = hitTestResult.normal;
        // Classification possibilities at:
        // /api/classes/TrackedMeshFaceClassification/
        var classification = hitTestResult.classification;

        // Do something with the data
        print(
            'Hit mesh at ' + position +
            ' with normal of ' + normal +
            ' and classification of: ' + classification
        );
    });
}
Try using this data to modify the position and rotation of an object so that it is always facing up from the surface!
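For example, here is a minimal sketch building on the script above. The targetObject input is an assumption, standing in for whichever object you want to place on the surface:

// @input Component.DeviceTracking tracker
// @input SceneObject targetObject

var centerOfScreenPos = new vec2(0.5, 0.5);
var hitTestResults = script.tracker.hitTestWorldMesh(centerOfScreenPos);
if (hitTestResults.length > 0) {
    var hit = hitTestResults[0];
    var transform = script.targetObject.getTransform();
    // Move the object to the point where the ray hit the mesh
    transform.setWorldPosition(hit.position);
    // Rotate the object so its up axis matches the surface normal
    transform.setWorldRotation(quat.rotationFromTo(vec3.up(), hit.normal));
}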
As Spectacles (2021) does not have a single screen, if you are targeting this device specifically, it might make sense to use a different method to generate hitTestResults, like this:
// @input Component.DeviceTracking tracker
// @input SceneObject cameraObject

// Do a hit test with a ray that is 10 meters (1000 cm) long
var from = script.cameraObject.getTransform().getWorldPosition();
var to = from.add(script.cameraObject.getTransform().back.uniformScale(1000));
var hitTestResults = script.tracker.raycastWorldMesh(from, to);
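However you generate hitTestResults, you can also filter them by classification. For example, this sketch reacts only to floor surfaces (classification data requires a device with LiDAR):

// Keep only results classified as floor
var floorHits = hitTestResults.filter(function (hitTestResult) {
    return hitTestResult.classification === TrackedMeshFaceClassification.Floor;
});
if (floorHits.length > 0) {
    print('Found a floor surface at ' + floorHits[0].position);
}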
Passing World Mesh Data
In addition to using the World Mesh itself, you can use the World Mesh as a source of data. For example, you can use it to understand normals and surfaces in the world so that VFX can collide with the real world!
Take a look at the Simple World Mesh template to see various examples of this in action!