Version: 5.x

Person Normals and Depth

Lens Studio comes with two built-in textures that provide information about the surfaces of the body. The Body Depth Texture provides the distance (in centimeters) from the camera to each pixel of the body. The Body Normal Texture represents the surface direction at each pixel of the body. There is no additional performance cost to using both at the same time.

Body Normals Texture

Use the Body Normals Texture to apply realistic lighting effects to bodies.

Getting started with the Body Normals Texture

Normals tell you the shape of a surface at a given point. The Body Normals Texture contains estimates of the body's shape for every pixel in the image.

Add a Body Normals Texture in the Asset Browser panel:

The Body Normals Texture has four channels. The first three channels correspond to the XYZ direction of the normal with respect to the camera:

  • +X directs to the right of the camera
  • +Y directs up
  • +Z directs towards the viewer

The fourth channel corresponds to the confidence in the normal estimate. A low confidence value implies that the normal estimate may not be accurate.

The Body Normals Texture only produces values for a single person. See which person is associated with each texture by clicking on the texture in the Asset Browser panel.

Use the Body Normals Texture as a normal map

Create a simple PBR material that uses Body Normals to light a person using the PBR shader node.

Open the Body Normals PBR node:

Here, the normal texture is passed into the material as a Texture 2D Parameter. To use the normal texture, make sure that it's sampled as a Normal Map, which converts its values from their texture representation (values between 0 and 1) into their real-world representation (values between -1 and 1).
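The Normal Map conversion is a simple remap of each direction channel. Here is a minimal Python sketch of that math; the texel values are hypothetical, and in practice the conversion happens inside the material graph, not in a script:

```python
def decode_normal(texel):
    """Convert stored texture values (0..1) to normal components (-1..1).

    texel: an (r, g, b, a) sample from the Body Normals Texture.
    The alpha channel is the confidence and passes through unchanged.
    """
    r, g, b, a = texel
    # Remap each of the XYZ channels from [0, 1] to [-1, 1].
    return (r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0, a)

# A stored value of 0.5 encodes a component of 0; 1.0 encodes +1.
print(decode_normal((0.5, 0.5, 1.0, 0.9)))  # (0.0, 0.0, 1.0, 0.9): facing the viewer
```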

To ensure that you only visualize the material when the normal information is valid, map the alpha channel of the Body Normals Texture to the opacity channel of the PBR material:

Instead of attaching the material to a mesh, attach the material to an Image component in an orthographic camera to visualize how the material looks:

To make the image fill the screen, select Stretch for the Stretch Mode.

Putting it all together, you can see realistic reflections on the person:

Change the material properties to get different visual effects:

Modifying existing materials to use Body Normals

You can modify other materials to use Body Normals textures. For example, open the Toon Material from the Asset Library:

This material uses the normal direction and the light direction to find which areas should be “flat” and which areas should be “shadowed”. It expects to be applied to a mesh with a calculated Surface Normal direction:
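The flat-versus-shadowed decision comes down to comparing the lighting term (the dot product of the normal and the light direction) against a cutoff. A rough Python sketch of the idea, with a hypothetical threshold value:

```python
def toon_shade(normal, light_dir, threshold=0.3):
    """Classify a pixel as 'flat' (lit) or 'shadowed' from its normal.

    normal and light_dir are unit vectors; threshold is a hypothetical
    cutoff on the lighting term n . l.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return "flat" if n_dot_l > threshold else "shadowed"

# A surface facing the light is flat-shaded; one facing away is shadowed.
print(toon_shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # flat
print(toon_shade((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # shadowed
```

Swapping the mesh's Surface Normal for the decoded Body Normal only changes where `normal` comes from; the shading logic stays the same.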

Replace the Surface Normal direction with our Body Normal direction (remembering to sample the texture as a Normal Map):

To use this material with an orthographic camera, modify the “outline” to use the camera view direction rather than the camera facing ratio:

Modify the opacity to use the confidence value from our Body Normals Texture:

Putting it all together:

Since this material uses the input lighting direction, we can adjust the light orientation to get different lighting effects:

Combining with Upper Garment Segmentation

You can combine the Body Normal Textures with other textures as well. For example, you can combine normal information with Upper Garment Segmentation to isolate where you want your effects to appear.

Start with the simple PBR material referenced earlier. Next, add Texture 2D Parameter nodes to pass textures into the material. Add a node for the camera texture so we can set the albedo of the material. Similarly, add a node for the Upper Garment Segmentation texture to decide where to apply this material.

Lastly, to have them displayed in our material, connect these nodes to the albedo and opacity ports of the PBR node:

Now the base color of the material is the color of the camera texture. Since we want the Upper Garment Segmentation and the Body Normals Texture confidence values to determine the final opacity, we will multiply their values together.

Since we only want the confidence value of the Body Normals Texture, we will use the Swizzle node to get the fourth channel (w of xyzw). Segmentation textures, in contrast, store the cutout information in the first channel (x of xyzw), so we will use the Swizzle node to get the x channel.

Lastly, we need to pass our textures into the material. Add the Upper Garment Segmentation texture in the Asset Browser panel:

Attach all textures to the material and modify the Graph Parameters to get a plastic-like effect on the garment:

Body Depth Texture

Use the Body Depth Texture to create realistic depth based materials.

Use the depth texture property of the camera to achieve seamless virtual object occlusion. This feature works even when the phone does not have a depth camera available.

When using the Body Depth Texture in materials, always select the Nearest Filtering Mode. If you don't, your material won't work on some Android devices due to a lack of support:

Getting started with the Body Depth Texture

The Body Depth Texture provides you with a person's depth for every pixel. To use it, add a Body Depth Texture in the Asset Browser panel:

The Body Depth Texture only produces values for a single person. See which person is associated with each texture by clicking on the texture in the Asset Browser panel.

The Body Depth Texture also provides a threshold on the minimum allowable confidence level. Pixels with a confidence lower than this threshold will be set to the Body Depth Texture’s far plane, which is set to 10 meters by default.
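The thresholding behavior can be sketched as a simple per-pixel rule in Python (depths in centimeters, with the default 10 m far plane):

```python
FAR_PLANE_CM = 1000.0  # default far plane: 10 meters, in centimeters

def thresholded_depth(depth_cm, confidence, min_confidence):
    """Pixels below the confidence threshold are pushed to the far plane."""
    return depth_cm if confidence >= min_confidence else FAR_PLANE_CM

print(thresholded_depth(150.0, 0.9, 0.5))  # 150.0: confident, depth kept
print(thresholded_depth(150.0, 0.2, 0.5))  # 1000.0: low confidence, far plane
```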

Visualize depth

We can visualize the Body Depth Texture by creating a simple material that remaps depth to color values.

Create a Graph material in the Asset Browser panel.

This material has three parameters:

  • The depth texture
  • The minimum depth
  • The range of depth which we want to visualize

Notice that the output of the Body Depth Texture parameter is Depth.x.

This is because Depth Map is selected in the type field of the node:

The output value corresponds to the distance from the camera to the object at each pixel, measured in centimeters, and is negative: each additional centimeter decreases the value by one. For example, a depth of one meter corresponds to an output value of -100.

For convenience, multiply this value by -1 to make the output value a positive number.

Now, using the given minimum depth parameter and range, remap the positive depth value to 0-1 and connect it to a shader to visualize the depth.
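The negate-and-remap chain can be written out as a few lines of Python; the clamp at the end is an assumption, matching how a 0-1 color value behaves:

```python
def remap_depth(raw_depth, min_depth_cm, range_cm):
    """Turn the raw Depth.x value into a 0-1 visualization value.

    raw_depth is negative (one meter reads as -100), so negate it first,
    then remap [min_depth, min_depth + range] to [0, 1], clamped.
    """
    positive_depth = -raw_depth                      # multiply by -1
    t = (positive_depth - min_depth_cm) / range_cm   # remap to 0..1
    return max(0.0, min(1.0, t))                     # clamp to valid color range

# One meter, with a 50 cm minimum depth and a 100 cm range, maps to 0.5.
print(remap_depth(-100.0, 50.0, 100.0))  # 0.5
```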

Use a Screen Image to view the result:

Notice that the Stretch Mode should be set to Stretch so that the Body Depth Texture covers the whole screen.

Make a Depth Slice material

Take what you've learned in the previous sections and create a material that draws a colored line on the parts of the person within a certain depth range.

Starting from the previous depth visualization material, add a color input parameter and replace the remap with an explicit check to see if the depth is within the provided range:

If the depth is not in range, output a default value of [0,0,0,0] (black with zero alpha). Here's what that looks like when visualized:
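The explicit range check replaces the remap like so; a Python sketch of the per-pixel logic, with hypothetical parameter values:

```python
def depth_slice(raw_depth, min_depth_cm, range_cm, color):
    """Return the slice color when the (negated) depth falls inside
    [min_depth, min_depth + range], else the default [0,0,0,0]."""
    positive_depth = -raw_depth
    if min_depth_cm <= positive_depth <= min_depth_cm + range_cm:
        return color
    return (0.0, 0.0, 0.0, 0.0)  # zero alpha: camera feed shows through in Normal blend

# A person at 1.2 m falls inside a 1.0-1.5 m slice; one at 3 m does not.
print(depth_slice(-120.0, 100.0, 50.0, (1.0, 0.0, 0.0, 1.0)))  # (1.0, 0.0, 0.0, 1.0)
print(depth_slice(-300.0, 100.0, 50.0, (1.0, 0.0, 0.0, 1.0)))  # (0.0, 0.0, 0.0, 0.0)
```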

To see the underlying camera feed for these pixels, set Blend Mode to Normal within the material:

Now you see the normal camera output when the depth is not in range:

Occlude objects using the camera depth texture

Use the Body Depth Texture with a virtual scene to get realistic occlusions.

First, create a camera in the Scene Hierarchy panel:

In this camera’s properties, set Depth Clear Option to Custom Texture. Choose the Body Depth Texture as the Custom Texture input:

Add the object you want to be occluded when the person is closer to the camera than the object is. The calculation of this occlusion will be handled by the camera and the Body Depth Texture.

To see the occlusion working, add a sphere and place it at a depth similar to the person in your preview.
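Conceptually, the camera's depth test reduces to a per-pixel comparison between the body depth and the virtual object's depth. A minimal sketch of that test, with hypothetical depths in centimeters:

```python
def occluded(body_depth_cm, object_depth_cm):
    """A virtual object pixel is hidden when the person is closer
    to the camera than the object at that pixel."""
    return body_depth_cm < object_depth_cm

# A person at 1.5 m occludes a sphere placed at 2 m, but not one at 1 m.
print(occluded(150.0, 200.0))  # True
print(occluded(150.0, 100.0))  # False
```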
