Lens Scripting API

    Privacy Restrictions on APIs

When using an API that provides access to the internet or communication with other Lens sessions (such as RemoteServiceModule, CloudStorageModule, InternetModule, or ConnectedLensModule), access to APIs that provide information about the current user will be restricted. This protects user privacy by ensuring their data stays only in the local Lens session.

    API Description
    CameraFrame An entity which provides metadata about the current camera image supplied by CameraTextureProvider. Modeled after the VideoFrame web API.
    CameraModule Provides access to a specific camera on a Spectacles device.
    CameraModule.createCameraRequest Creates a camera request object.
    CameraModule.createImageRequest Spectacles: creates a CameraModule.ImageRequest. This object can be used to configure a request for a high resolution image of the user's camera stream. The resolution of this image is fixed at 3200x2400.
    CameraModule.CameraRequest An object that is used to request the desired camera ID. It should be passed to the CameraModule to get back a camera texture.
    CameraModule.ImageRequest An object used to configure a request for a high resolution still image of the user's camera stream; created via CameraModule.createImageRequest.
    DepthTextureProvider.sampleDepthAtPoint Get the depth at the given point.
    DeviceInfoSystem.getOS Returns the operating system type of the device.
    DeviceTracking.getPointCloud Returns the 3D point cloud representing important features visible by the camera.
    ImageFrame Spectacles: ImageFrame contains the results of a still image request initiated from the CameraModule. Still images are high resolution images of the user's current camera stream.
    LocalizationSystem.getLanguage Returns the language code of the language being used on the device. Example: "en" (for English)
    LocalizationSystem.localize The method takes a localization key and returns the localized string according to device language. Useful for localizing strings before formatting them and assigning them to Text.
    LocationService.getCurrentPosition Retrieves the device's current location. onSuccess: a callback function that takes a GeoPosition object as its sole input parameter. onError: a callback function that takes a string error message as its sole input parameter.
    MicrophoneAudioProvider.getAudioFrame Writes current frame audio data to the passed in Float32Array and returns its shape. The length of the array can't be more than maxFrameSize.
    ProceduralTextureProvider.getPixels Returns a Uint8 array containing the pixel values in a region of the texture. The region starts at the pixel coordinates x, y, and extends rightward by width and upward by height. Values returned are integers ranging from 0 to 255.
    ScanModule Provides access to a Scan system that allows users to scan objects, places, and cars with a database of item labels within a Lens.
    TensorMath.textureToGrayscale Converts the texture to a set of 0-255 grayscale values, and outputs the result into outTensor. outTensor should be a Uint8Array of shape {width, height, 1}.
    UserContextSystem.getAllFriends Retrieve the Snapchatter's friends list in order to access details like display name, birthdate, or Bitmoji.
    UserContextSystem.getBestFriends Retrieve the Snapchatter's best friends in order to access details like display name, birthdate, or Bitmoji.
    UserContextSystem.getCurrentUser Retrieve a SnapchatUser representing the current user.
    UserContextSystem.getMyAIUser Retrieve a SnapchatUser object for MyAI which you can use to access the MyAI Bitmoji or other details.
    UserContextSystem.getUsersInCurrentContext Gets the list of friends in the current context, such as 1:1 chats, 1:many chats, and group chats.
    UserContextSystem.requestBirthdate Provides the user's birth date as a Date object.
    UserContextSystem.requestBirthdateFormatted Provides the user's birth date as a localized string.
    UserContextSystem.requestCity Provides the name of the city the user is currently located in.
    UserContextSystem.requestUsername Provides the user's username.
    VoiceMLModule.startListening Starts transcribing the user's audio. The NLP model is connected and sends back results via an event; transcription and interim results can optionally be requested. Note that you can only call startListening after microphone permission has been granted. It is recommended to call startListening only after the onListeningEnabled callback has fired.
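To illustrate the pixel layout described for ProceduralTextureProvider.getPixels, the sketch below indexes into a flat Uint8 buffer, assuming four channels (RGBA) per pixel. The provider itself is only available inside a Lens, so a plain array stands in for its output; `pixelAt` is a hypothetical helper, not part of the API.

```javascript
// Illustrative only: shows how a region returned by
// ProceduralTextureProvider.getPixels could be indexed, assuming
// four consecutive bytes (R, G, B, A) per pixel.
function pixelAt(pixels, regionWidth, col, row) {
    var base = (row * regionWidth + col) * 4;
    return {
        r: pixels[base],
        g: pixels[base + 1],
        b: pixels[base + 2],
        a: pixels[base + 3],
    };
}

// A stand-in 2x2 RGBA region (16 bytes), shaped like a getPixels result.
var region = new Uint8Array([
    255, 0, 0, 255,     0, 255, 0, 255,       // row 0: red, green
    0, 0, 255, 255,     255, 255, 255, 255,   // row 1: blue, white
]);

var p = pixelAt(region, 2, 1, 0); // column 1, row 0
```

Each value is an integer from 0 to 255, matching the range stated in the description above.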
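The grayscale conversion performed by TensorMath.textureToGrayscale can be sketched in plain JavaScript. The docs do not state the exact luminance weights, so the common BT.601 coefficients are assumed here; the output matches the documented shape of {width, height, 1} as a flat Uint8Array.

```javascript
// Sketch of a texture-to-grayscale conversion over a flat RGBA buffer.
// Assumes BT.601 luminance weights (an approximation; the real
// TensorMath.textureToGrayscale weights are not documented here).
function toGrayscale(rgba, width, height) {
    var out = new Uint8Array(width * height); // shape {width, height, 1}
    for (var i = 0; i < width * height; i++) {
        var r = rgba[i * 4];
        var g = rgba[i * 4 + 1];
        var b = rgba[i * 4 + 2];
        out[i] = Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }
    return out;
}

// Two pixels: white and black.
var rgba = new Uint8Array([255, 255, 255, 255, 0, 0, 0, 255]);
var gray = toGrayscale(rgba, 2, 1);
```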
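The callback contract of LocationService.getCurrentPosition can be sketched as follows. The real service only exists inside a Lens, so a stub stands in for it here; the GeoPosition field names (latitude, longitude) are assumptions and should be checked against the actual API.

```javascript
// Sketch of the getCurrentPosition callback contract, with a stubbed
// LocationService. The stub always succeeds; the real service would
// query the device and may invoke onError instead.
var stubLocationService = {
    getCurrentPosition: function (onSuccess, onError) {
        onSuccess({ latitude: 40.7128, longitude: -74.006 }); // stub data
    },
};

var result = null;
stubLocationService.getCurrentPosition(
    function (geoPosition) {
        // onSuccess: receives a GeoPosition object as its sole parameter.
        result = geoPosition.latitude + "," + geoPosition.longitude;
    },
    function (errorMessage) {
        // onError: receives a string error message as its sole parameter.
        result = "error: " + errorMessage;
    }
);
```

Exactly one of the two callbacks fires per request, so all handling logic lives inside them.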