Version: 4.55.1

Pixel Accurate Rendering

When working with 2D elements, in some cases, you might want to render content perfectly based on their pixel size. For example: a logo which you want to be correctly sized to avoid aliasing. Or, if you’re designing UIs in an external software, it can be useful to be able to translate exact measurements (e.g. pixels or points) from those software into your Lens.

Lens Studio allows you to switch the units you are working with to help you translate these external assets into your Lens, as well as leverage the entire resolution of the device’s screen by using Canvas and Overlay Render Target.

Rendering to a Device’s Native Resolution

A Lens is rendered through a variety of “Render Targets” (i.e. textures that the cameras in your scene send their results to). By default, you are taking advantage of the “Capture Target” and the “Live Target”. That is: when you are viewing the Lens live, you will see whatever is on the “Live Target”, and when you take a Snap, you will see whatever is in the “Capture Target”. This information is set up in the Scene Config panel of Lens Studio.

Learn more about Render Targets in the Scene Set Up guide

In addition to these render targets, you can use the Overlay Render Target. Unlike the Capture and Live targets, this target runs at your device’s full resolution! This is useful when you’re looking to display detailed content such as text.

Since the Overlay Render Target takes full advantage of the current device’s display, its contents cannot be shown in a recorded Snap.

On the left you can see a camera rendering to an Overlay Target, and on the right the same camera rendering to a Live Target. Notice how, in the left image, the rulers are a bit smaller; this is because the image is rendering at the simulated device's native resolution, which is higher than the normal Lens resolution.

To do this:

  1. In the Resources panel > + > Render Target.
  2. In the Objects panel, select the camera object you want to render at full device display resolution.
  3. In the Inspector panel, in the Camera component, select the field “Render Target” and choose your newly created Render Target.
  4. In the Scene Config panel, select the field “Overlay Target” and choose your newly created Render Target.

Pro-tip: Doing Objects panel > + > Overlay Camera will automatically do the above steps for you!

If you don’t see the Scene Config panel, in the Lens Studio menu bar > Windows > Panels > Scene Config.

You can download the following textures to help test the result of your setup:

Choosing the Units for a Camera to Use

By default, objects are positioned in World Units. But by adding the Canvas component on a Camera, you can tell every descendant of the camera to use a different unit:

  • Pixels: the smallest addressable elements on a display.
  • Points: sometimes referred to as “Density Independent Pixels (dp)”, these are abstractions of pixels that make it easier to deal with displays of different densities.

In most cases, you will want to use Points, as they make it easier to handle the wide variety of devices that can display Lenses. That is to say: on a newer device with a high-density display, a 500x500px image might occupy just a corner of the screen, whereas on an older device it might take up the whole screen.
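The points-to-pixels relationship described above can be sketched in a few lines. This is a minimal illustration, not a Lens Studio API; the density values (1x, 3x) are illustrative examples of low- and high-density displays.

```javascript
// Sketch of the points-to-pixels relationship: a point is scaled by the
// display's density factor to get physical pixels.
function pointsToPixels(points, density) {
  return points * density;
}

// A 500-point image occupies more physical pixels on a denser display,
// but covers roughly the same physical area on screen.
console.log(pointsToPixels(500, 1)); // 500 px on a 1x (low-density) device
console.log(pointsToPixels(500, 3)); // 1500 px on a 3x (high-density) device
```

This is why a fixed pixel size looks different across devices, while a point size stays visually consistent.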

To do this:

  1. In the Objects panel, select the camera object you want to render at full device display resolution.
  2. In the Inspector panel, press “Add component”, then choose “Canvas”.
  3. In the newly added Canvas component, choose “Points” from the drop down menu.

Download the example 500x500px texture which is shown in the demo above.

Setting Up an Image

Now that the camera and Lens are set up, you can configure your content as you normally would. For example:

With your Orthographic Camera selected, add a new Screen Image in the Objects panel.

With the newly added image selected, in the Inspector panel, you can configure its Screen Transform to use a Fixed Size. Notice that we’re no longer sizing based on the normalized size of the screen (0-1), but rather an absolute value in the unit we chose earlier.

Note that not all values in the Screen Transform component will use the Canvas’ units. For example, if you want to offset the Position of the image, by default it still uses normalized units (that is, 1 being the screen size), since the image has no way of knowing how an absolute value should relate to the canvas.

To have these offset values use the units we chose earlier, you can use Pin to Edge.

Different devices will have different aspect ratios and sizes, so you should ensure that your positioning and sizes still work on a variety of displays. For example, a 100px offset from an edge may be closer to or farther from the center of the screen depending on the device.
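To see why a fixed offset lands differently on different devices, consider the fraction of the screen a 100px edge offset occupies. The screen widths below are illustrative, not tied to specific devices:

```javascript
// Illustrative screen widths in pixels; real devices vary.
const screens = [
  { name: "older device", width: 750 },
  { name: "newer device", width: 1179 },
];

// A fixed pixel offset expressed as a fraction of the screen width.
function offsetFraction(offsetPx, screenWidth) {
  return offsetPx / screenWidth;
}

for (const s of screens) {
  // The same 100 px offset covers a larger fraction of the narrower screen,
  // so the element sits visibly closer to the center on that device.
  console.log(s.name, offsetFraction(100, s.width));
}
```

The narrower the screen, the larger the fraction the same offset consumes, which is exactly why previewing on multiple devices matters.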

Advanced Usage

As you continue to improve the visual quality of your Lens by leveraging the device’s native resolution, you should keep in mind how this might affect the rendering of elements.

Ensuring Pixel Accuracy

Setting up your scene correctly is critical to ensuring you are displaying your content as you intended. In some cases you might want to inspect the resolution a camera is rendering at. You can do this with a small script:

// @input Component.Camera camera
// Prints the camera's orthographic size in the units currently in effect
// (world units by default, or the Canvas component's chosen unit).
const orthoSize = script.camera.getOrthographicSize();
print(orthoSize);

Depending on the preview settings, Canvas settings, and render target, you can get quite different results! So it’s very important to make sure you’ve set up your scene correctly. For example, on one phone's preview you might get the following:

  • Default Orthographic Camera settings: {x: 11.1111, y: 20}
  • Canvas component set to Pixels: {x: 800, y: 1440}
  • Canvas component set to Points: {x: 11.1111, y: 20}
  • Canvas component set to Pixels, on Overlay Target: {x: 1179, y: 2097}
  • Canvas component set to Points, on Overlay Target: {x: 393, y: 699}
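You can verify the density relationship directly from the example readings above. The numbers are taken from the sample values listed; your own device will report its own resolution:

```javascript
// Example readings from the list above (Overlay Target on a 3x-density preview).
const pixels = { x: 1179, y: 2097 }; // Canvas set to Pixels
const points = { x: 393, y: 699 };   // Canvas set to Points

// Dividing the pixel reading by the point reading recovers the
// display's density factor on both axes.
const densityX = pixels.x / points.x;
const densityY = pixels.y / points.y;
console.log(densityX, densityY); // 3 3
```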

In most cases you will want to use Canvas Component with Overlay Target. If you don't use the Overlay Target, you will be using the pixel resolution of the input camera.

With those values, you can notice that the Pixels setting on the Overlay Target is 3x the Points setting on the Overlay Target. This is because the chosen preview device has a higher density display.

This higher density specification may sound familiar if you’ve used other software that allows you to define different resolutions on image export (e.g. @1x, @2x, @3x).

In most cases you will want to use Points as your setup will apply to more devices (i.e. regardless of their device density).

More importantly, this flexibility means you have to pay attention to how you export your image from other software. That is: if your software exports at pixel values (e.g. @3x for the high density phone illustrated above), to get it to look right you will want to set the Canvas to the Pixels unit. However, if you export it in points (e.g. @1x), you will want to set the Canvas to the Points unit.
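As a quick sanity check of that export math (all numbers illustrative): an asset exported at @3x carries three times the pixels of its point size, so on a 3x-density device it maps back onto the intended point layout exactly.

```javascript
// Given a texture's native pixel width and the device's density factor,
// compute how many points the texture spans when shown at native pixel size.
function nativePixelsToPoints(texturePixels, deviceDensity) {
  return texturePixels / deviceDensity;
}

// A 1500 px-wide asset exported at @3x, displayed on a 3x-density device,
// spans 500 points -- matching a 500-point layout with no resizing.
console.log(nativePixelsToPoints(1500, 3)); // 500
```

When the export scale and the Canvas unit agree like this, the renderer doesn't have to resample the texture.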

Exporting at the right resolution will improve the rendering quality of your texture, as it doesn’t have to be resized by the renderer, which might result in aliasing.

Pro-tip: You can optionally write a script that switches between different texture sizes based on the device density display.
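The pro-tip above could be sketched as follows. The asset names and the way you obtain the density value are assumptions for illustration (e.g. you might derive density by comparing the camera's size in Pixels vs Points, as shown earlier), not a built-in Lens Studio API:

```javascript
// Pick a texture variant based on the display's density factor.
// The "@1x/@2x/@3x" names are hypothetical asset names in your project.
function pickTexture(density) {
  if (density >= 3) return "logo@3x";
  if (density >= 2) return "logo@2x";
  return "logo@1x";
}

console.log(pickTexture(3)); // "logo@3x"
console.log(pickTexture(1)); // "logo@1x"
```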

Working with the Text Component

The size of a Text component is unaffected by the Canvas’ unit settings, as it is simply correlated with the screen height. Meaning: you don’t need to do anything special!

However, you should note that since screens might have different aspect ratios, or overall size, text might flow differently. In addition, the Screen Transform settings will still apply as usual. So, as always, you should check your Lens with different devices in the Preview panel.

In the image below, you can see that the Text component might overflow outside of the Fixed Size dimensions set in the Screen Transform component. The two images differ because the screen resolution changes between devices, NOT because the font size changes.

If we change the Canvas component to the Pixels unit, note how the font size stays the same, but the width of the text box changes. This is because these two devices use a higher density screen, meaning that the fixed size we set earlier in Points needs to be multiplied by some factor (the screen density) to achieve the same look.

Pro tip: It is possible to get the calculated pixel/point size of a Text component by using its Extents Target. That is: you can set the Extents Target of the Text component to another Screen Transform, and that Screen Transform will be resized to match the actual text size (instead of the “baked in” Screen Transform setting of the Text component). You can then read the size of the Extents Target’s Screen Transform to get the dynamically calculated point/pixel size of the screen text!

Working with Shaders

When you’re making a Material graph, it’s important to consider how the wide variety of sizes can impact your shader.

For example, if you’re calculating a rounded border, the radius of the border might dramatically change based on the units the camera is rendering at. That is: by default the camera size is 20, but when using the Canvas with the Pixels unit, the camera size will be whatever pixel size the device screen has.

One way to account for this is to use the Code Node: read the projection matrix to determine the size the camera is rendering at, then use that as a multiplier.

bool epsilonEqual(float a, float b) {
    return a + .0005 > b && a - .0005 < b;
}

vec2 ImageSize = vec2(scale_x, scale_y);
// An orthographic projection has a zero in the last row's w component.
bool isOrtho = !epsilonEqual(system.getMatrixProjection()[3].w, 0.0);

if (isOrtho) {
    vec2 orthoSize = abs(vec2(system.getMatrixProjection()[0].x, system.getMatrixProjection()[1].y));
    // 0.1 is the baseline because the default camera size is 20,
    // which yields a projection y scale of 2 / 20 = 0.1.
    float orthoRatio = orthoSize.y / 0.1;
    ImageSize = ImageSize * orthoRatio;
}
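The baseline used in the shader can be double-checked with a quick sketch in plain JavaScript. This assumes the standard orthographic projection, where the matrix's y scale is 2 divided by the camera height:

```javascript
// For an orthographic camera of height h, the projection matrix's
// y scale entry is 2 / h.
function projectionYScale(orthoHeight) {
  return 2 / orthoHeight;
}

// The shader divides by 0.1 because the default camera size of 20
// produces a y scale of 2 / 20 = 0.1; the ratio is 1 at the default.
function orthoRatio(orthoHeight) {
  return projectionYScale(orthoHeight) / 0.1;
}

console.log(orthoRatio(20)); // 1 -- the default camera size needs no scaling
```

For a camera rendering at device pixel sizes, `orthoHeight` is much larger than 20, so the ratio shrinks accordingly and keeps the border radius visually constant.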

You can see an example of this in the 9 Slicing Material found in the Asset Library.
