
Connected Lens

Connected Lenses is a Beta experience and cannot be used for Sponsored Lenses.

The Connected Template is a great starting point for making Connected Lenses, which allow Snapchatters to join their friends in a single AR space, whether they are in the same location or somewhere remote. The template comes with:

  • A reusable Invitation Flow that takes care of how users join a session with their friends
  • A Relay system that helps you pass messages between participants
  • An avatar system that provides a sense of co-presence when participants are not in the same room
  • Examples of synced object instantiation and transform manipulation

If you haven’t yet, we recommend familiarizing yourself with the Connected Lenses concept via the Connected Lenses Overview.

New update to ConnectedController.js: adds better error messaging for users joining Colocated sessions. Download the file here.

Guide

Invitation Flow

A key part of the Connected Lens experience is being able to join others in their Lens. As a result, there are multiple ways in which your Lens can be opened. The Invitation Flow portion of the template takes care of this.

When the Snapchatter first opens the Lens, they are introduced to a Splash screen. This screen introduces the current Connected Lens experience and provides the option to start a “shared session” by selecting the Call to Action button to launch the Connected Lens.

You can adapt this screen to fit your needs by modifying the Screen: Splash Screen objects under Connected > Splash, Menu & Colocated Flow. In most cases, all you will need to do is replace the Image in the Teaser Graphic object.

If the Snapchatter opens this Lens directly (versus being invited via Chat or a Snapcode), they will be presented with a Mode Select screen. As with the Splash Screen, in most cases all you will need to do is replace the Image in the Teaser Graphic object, in this case under Connected > Splash, Menu & Colocated Flow > Screen: Mode Select.

The Invitation Flow takes care of guiding the user to scan their room and generating a Snapcode for their friends to scan and join (if they choose Start with Friends in Your Room), as well as letting them invite friends through Chat (if they choose Start with Friends).
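
The template wires all of this up for you, but it can help to see the underlying API. Below is a minimal, hedged sketch of creating or joining a session with the Connected Lens Module; confirm the exact callback names and signatures against the Connected Lenses API reference.

```js
// Minimal sketch: create or join a Connected Lens session.
// The template's Invitation Flow handles this for you.
//@input Asset.ConnectedLensModule connectedLensModule

var options = ConnectedLensSessionOptions.create();

options.onSessionCreated = function (session, sessionCreationType) {
    // Fired once the session exists locally; sessionCreationType indicates
    // whether this user created a new session or is joining an existing one.
    print("Session created: " + sessionCreationType);
};

options.onConnected = function (session) {
    // Fired when this participant can start sending and receiving data.
    print("Connected to session");
};

script.connectedLensModule.createSession(options);
```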

You can reuse this portion of the template independent of the experience you are trying to make.

Your Connected Lens will automatically allow for text and audio chat similar to a Snap Games experience!

Session Objects Overview

Once the Snapchatter goes through the Invitation Flow, they will be brought into a shared session where the Lens can send data back and forth between each participant. This is the core part of the Lens that you will modify to create your custom experience.

The template comes with three different examples of sending data back and forth between participants:

  • Network Avatar: An object that represents each participant when they join the session. Demonstrates syncing the object’s transform, as well as syncing a text field with the participant’s display name
  • Emote Controller: A system that lets each participant press a button to instantiate an image object that other participants can see. Demonstrates simple instantiation of objects synchronized across participants
  • Building Blocks: A system that lets each participant instantiate an object that persists and can be moved. Demonstrates instantiation of objects whose transforms are synchronized

You can find each example under the Session Objects object. Each of them operates independently and you can mix and match them depending on your needs.

Each of these examples leverages the Relay script, which contains many functions to help you send data across the session.
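
Relay is a template script, so its exact helper names are best checked in its JSDoc comments. Underneath, data travels as string messages on the multiplayer session; a simplified sketch of that layer (the callback signature may vary by Lens Studio version):

```js
// Sketch: raw session messaging, which the Relay script wraps with helpers.
// Assumes `session` is the MultiplayerSession from the Invitation Flow.
function sendEmote(emoteIndex) {
    // Messages are strings, so structured data is typically JSON-encoded.
    session.sendMessage(JSON.stringify({ type: "emote", index: emoteIndex }));
}

// Registered on ConnectedLensSessionOptions before the session is created.
function onMessageReceived(session, senderId, message, senderInfo) {
    var data = JSON.parse(message);
    if (data.type === "emote") {
        print(senderId + " sent emote " + data.index);
    }
}
```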

If an object is static in the scene, you can simply add it as a regular object! You can see an example of this on the Center object, a child of Session Objects with no script attached.

Network Avatar Example

The Network Avatar example contains a prefab that is instantiated for each participant in the session.

Customizing the Avatar

Under the NetworkAvatar object, you will find a Visuals > Avatar object. Simply modify the contents of this object to replace the Avatar in your experience.

As with all prefabs, once you are done, make sure to apply the changes you’ve made by pressing the Apply button at the top of the Inspector panel on the EyeAvatar prefab.

Under the EyeAvatar object you will also find the DisplayName Text object. You may style this object as well depending on your needs, but make sure to leave the object’s name unchanged, as it is how the NetworkAvatar script finds the object to populate with the user’s display name.
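
This name-based lookup is why renaming breaks it: a script typically finds the text object by walking the instantiated prefab’s hierarchy. The helper below is illustrative, not the template’s actual code, and assumes avatarInstance is the instantiated prefab’s root.

```js
// Illustrative helper: find a descendant SceneObject by name.
function findChildByName(root, name) {
    for (var i = 0; i < root.getChildrenCount(); i++) {
        var child = root.getChild(i);
        if (child.name === name) {
            return child;
        }
        var found = findChildByName(child, name);
        if (found) {
            return found;
        }
    }
    return null;
}

var displayNameObj = findChildByName(avatarInstance, "DisplayName Text");
if (displayNameObj) {
    displayNameObj.getComponent("Component.Text").text = "Display Name";
}
```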

Understanding how NetworkAvatar works

If you’d like to modify how the avatar system works, or do something similar, feel free to take a look at the scripts in this example. In short:

When a participant joins the session, the NetworkAvatar script instantiates the EyeAvatar prefab, moves it based on the Camera object (the participant’s point of view), and fills in the DisplayName Text object with the user’s display name.

Then, on the EyeAvatar object there is a SyncTransform script, and on the DisplayName Text object there is a SyncText script. Each of these scripts synchronizes the instantiated object across all participants so that everyone sees the same thing!
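
The camera-following part reduces to a per-frame copy of the camera’s pose; a minimal sketch (input names are illustrative):

```js
// Sketch: keep the local avatar at the participant's point of view each frame.
//@input SceneObject cameraObject
//@input SceneObject avatar

script.createEvent("UpdateEvent").bind(function () {
    var camTransform = script.cameraObject.getTransform();
    var avatarTransform = script.avatar.getTransform();
    avatarTransform.setWorldPosition(camTransform.getWorldPosition());
    avatarTransform.setWorldRotation(camTransform.getWorldRotation());
    // SyncTransform on the avatar then replicates this pose to everyone else.
});
```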

You’ll find the same SyncTransform and SyncText script used throughout the template.

Emote Controller Example

The Emote Controller allows a participant to press a button to instantiate an object where they are in the space; the object destroys itself after a certain time.

Customizing Emote Controller

To modify this experience, simply replace the Textures list in the Inspector Panel of the EmoteController object.

Understanding how Emote Controller works

As with the NetworkAvatar, you have a controller which provides the logic for what the effect does, as well as the prefab that gets instantiated and synced across participants.

You can find the effect for each emoji on the Prefab itself! On the prefab you will find an AutoDestroy script which destroys the object after some time, and on the image inside the prefab you will find the tween which floats the emoji up!
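
The self-destruction boils down to a delayed callback; a minimal sketch of the AutoDestroy idea (the lifetime input is illustrative):

```js
// Sketch: destroy this object after `lifetime` seconds.
//@input float lifetime = 2.0

var destroyEvent = script.createEvent("DelayedCallbackEvent");
destroyEvent.bind(function () {
    script.getSceneObject().destroy();
});
destroyEvent.reset(script.lifetime);
```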

In addition, you will note that the EmoteController has references to Buttons. If you right-click on these fields and choose Select, Lens Studio will show you objects under SessionUI. These objects represent the setup for the on-screen buttons that are used to trigger the emotes.

These buttons themselves are not networked, but are bound to the instantiation logic found in EmoteController. Try modifying these buttons, and look at the EmoteController script to see how you can modify the EmoteController to fit your needs.
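
As a sketch of that wiring, a tap on a button’s Interaction component can call into the controller; spawnEmote below is illustrative, standing in for EmoteController’s actual instantiation function:

```js
// Sketch: bind a tap on an on-screen button to the instantiation logic.
//@input Component.InteractionComponent buttonInteraction

script.buttonInteraction.onTap.add(function () {
    // In the template, this would call EmoteController's spawn function,
    // which also sends a message so other participants see the emote.
    spawnEmote(0);
});
```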

Building Blocks Example

The Building Blocks example allows the user to instantiate different prefabs depending on different button presses.

Customizing Building Blocks

To modify what objects are instantiated, modify or replace the Prefabs under the Building Blocks object. As before, don’t forget to press Apply after modifying a prefab.

Each prefab is then connected to the BuildingBlocksController under the Prefabs list. Each item in the Prefabs list has a corresponding texture in the Textures list, which represents the button the user will press to instantiate the object.

Try swapping these prefabs and textures, as well as adding more to fit your needs!
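
If you write your own controller, parallel lists like these map naturally to script array inputs. A minimal sketch with illustrative names (spawnBlock is not the template’s actual function):

```js
// Sketch: parallel prefab/texture lists, instantiating by button index.
//@input Asset.ObjectPrefab[] prefabs
//@input Asset.Texture[] textures

function spawnBlock(index, parent) {
    if (index < 0 || index >= script.prefabs.length) {
        return null;
    }
    // textures[index] is shown on the corresponding button;
    // prefabs[index] is instantiated when that button is pressed.
    return script.prefabs[index].instantiate(parent);
}
```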

The blocks are not saved in storage. When a user joins the session, they will be sent the current state of the session by another participant. However, if all participants leave, the data is lost.
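
If you need data that outlives this handoff, the Connected Lenses API also offers realtime stores with a configurable persistence setting. A hedged sketch of creating one; confirm the option and callback names against the API reference:

```js
// Sketch: create a realtime store whose lifetime is tied to the session.
// Other persistence modes exist for longer-lived data.
var storeOptions = RealtimeStoreCreateOptions.create();
storeOptions.persistence = RealtimeStoreCreateOptions.Persistence.Session;
storeOptions.initialStore = GeneralDataStore.create();

session.createRealtimeStore(
    storeOptions,
    function (store) {
        print("Store created");
    },
    function (message) {
        print("Store creation failed: " + message);
    }
);
```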

Understanding how Building Blocks works

Similar to the NetworkAvatar and Emote Controller, the BuildingBlocksController instantiates objects as needed based on user interaction.

Like NetworkAvatar, the prefabs that get instantiated have SyncTransform attached to them so that their transforms are reflected across all participants. These prefabs need the SyncTransform (unlike EmoteController’s) because the user can manipulate them after they’ve been generated (in this case via the DragFromCamera script).

You’ll notice that inside the prefab is the actual mesh with a Tween attached to animate it. Since it’s not important that the animations are completely in sync, we animate the child object so that SyncTransform does not have to send a message every frame about the object’s position!
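
In practice the split looks like this: SyncTransform sits on the prefab’s root, while a purely cosmetic tween runs locally on an unsynced child. A sketch, assuming the scene includes the Tween Manager helper (“bounce” is an illustrative tween name):

```js
// Sketch: run the cosmetic animation on a child that is not synced.
//@input SceneObject animatedChild

// Each device plays the tween locally, so SyncTransform never has to
// broadcast per-frame animation offsets for the root object.
global.tweenManager.startTween(script.animatedChild, "bounce");
```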

Like the Emote Controller, the Building Blocks controller has references to Button objects. As before, you can right-click > Select them to find where they are positioned.

Using Sync Transform

Now that we have a general understanding of the different examples, let's add our own! In this case, we can add a box that everyone can manipulate.

Setting up the scene

First we’ll add a Box under a new object in Session Objects. In the Scene Hierarchy panel, with Session Objects selected, press + > Child Scene Object. Then, with the new object selected, press + > Box.

Next, let’s make the Box manipulable. With the Box selected, in the Inspector panel, choose Add Component > Manipulate. Then we’ll add an Interaction component so the user can interact with it: Add Component > Interaction. Set this up as you usually would: Camera for Camera, and the Box itself as the Mesh Visual.

Note that since Connected Lenses utilize World Tracking, where the 0,0,0 position of the scene is the Camera’s position, we will need to offset the parent object to where we want the object to be visible, e.g., bringing it down and forward so that it appears in front of the user.
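
You can set this offset in the Inspector, or from a small script on the parent object; the values below are illustrative (world units are centimeters):

```js
// Sketch: place the parent slightly below and about 1m in front of the
// camera origin so the box starts in view.
script.getSceneObject().getTransform().setLocalPosition(new vec3(0, -20, -100));
```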

Finally, we just need to sync the object across participants. Choose Add Component > Script Component, then in the Script component choose the SyncTransform script.

Customizing the SyncTransform

All we need to do is provide the Network Id for the object so that the script knows how to synchronize the same object across participants. Select the Network Id field and type in a unique string.

Lastly, you can choose to sync the Position, Rotation, or Scale of the object depending on your needs.

As with many networked systems, there are additional options for how the objects are synced. For the most part you will not have to modify them:

Send Frequency (in seconds): The maximum number of times data will be sent to other participants per second.

Time Offset (in milliseconds): Since it takes some time for data to be passed through the network, networked objects are often interpolated so that different devices see relatively similar things. In other words, this number trades off between smoothness (more offset) and latency (less offset).
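
To see why more offset means smoother motion, here is the idea in miniature: render the remote object slightly in the past, interpolating between the two received samples that bracket that render time. This is a simplified sketch of the concept, not the template’s implementation:

```js
// Simplified sketch of time-offset interpolation for a remote position.
// `samples` holds received updates as { time, position }, newest last.
var samples = [];
var timeOffset = 0.1; // seconds; more offset = smoother, but more latency

function getInterpolatedPosition(nowSeconds) {
    var renderTime = nowSeconds - timeOffset;
    for (var i = samples.length - 1; i > 0; i--) {
        var a = samples[i - 1];
        var b = samples[i];
        if (a.time <= renderTime && renderTime <= b.time) {
            var t = (renderTime - a.time) / (b.time - a.time);
            return vec3.lerp(a.position, b.position, t);
        }
    }
    // Nothing brackets the render time: fall back to the newest sample.
    return samples.length > 0 ? samples[samples.length - 1].position : vec3.zero();
}
```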

Further Customization

There’s a lot more you can do with Connected Lenses than what is in the template! Each script in the template is annotated with JSDoc comments so that it is easy to reference in your external script editor.

Try taking a look at the example sync scripts (SyncTransform, SyncText) and see how they use the EntityView class to describe how an object should be synced. Note how EntityView uses PropertyWatcherSet to describe which properties should be tracked for each entity. Lastly, see how EntityView utilizes Relay to communicate data for each networked participant!

Start with SyncText to see the basic structure, then look at SyncTransform to see how different properties can be synced, and finally try making a SyncMaterial!
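
As a starting point, here is the core idea of a SyncMaterial without the template’s EntityView plumbing: watch a material property, broadcast it when it changes, and apply it when a remote update arrives. Everything here is a hedged sketch; sendToAll stands in for whatever Relay helper you use, and the wiring of onRemoteColor to incoming messages is omitted.

```js
// Sketch of the SyncMaterial idea: watch, broadcast, apply.
//@input Asset.Material material
//@input string networkId = "syncMaterial"

var lastColor = script.material.mainPass.baseColor;

function colorsEqual(a, b) {
    return a.r === b.r && a.g === b.g && a.b === b.b && a.a === b.a;
}

script.createEvent("UpdateEvent").bind(function () {
    var color = script.material.mainPass.baseColor;
    if (!colorsEqual(color, lastColor)) {
        lastColor = color;
        // sendToAll is illustrative; in the template this goes through Relay.
        sendToAll(script.networkId, { r: color.r, g: color.g, b: color.b, a: color.a });
    }
});

// Called when a remote participant's color update arrives.
function onRemoteColor(data) {
    lastColor = new vec4(data.r, data.g, data.b, data.a);
    script.material.mainPass.baseColor = lastColor;
}
```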

Download Another Example

If you’re ready to dive into a more complex Connected Lens experience, check out the Connected Block Drawing example project. It is built as a complete Lens experience that you can pick apart to see how a full Lens might be constructed, and it uses some of the core ideas from the template. We will update this project with features that are useful for Shared AR experiences.
