Remote Service Gateway

Overview

Spectacles offers a set of APIs that work with user-sensitive data, such as the camera frame, location, and audio. By default, access to this data is disabled when a Lens uses internet-connected components. A Lens can access both sensitive data and the internet at the same time only by enabling certain Experimental APIs through a feature called Extended Permissions, but Lenses that use Extended Permissions cannot be published.

The Remote Service Gateway allows Lenses with access to user-sensitive data to both access the internet and be published, but only when they use the Remote Service Gateway to call the Supported Services listed below.

Supported Services

OpenAI

  • Chat Completions - Generate conversational AI responses using GPT models
  • Image Generation - Create images from text descriptions
  • Image Edit - Create edited or extended images
  • Image Variation - Create a variation of a given image
  • Create Speech - Convert text to natural-sounding speech audio
  • Realtime - Real-time conversational AI with voice capabilities

Gemini

  • Model - Access Google's Gemini large language models for multimodal generations
  • Live - Real-time conversational AI interactions with voice and video capabilities

DeepSeek

  • Chat Completions with DeepSeek-R1 Reasoning - Advanced AI chat with step-by-step reasoning capabilities

Snap3D

  • Text to 3D - Generate 3D models (GLB) and assets from text descriptions and images. See the examples and API reference for details.

Getting Started

Prerequisites

Lens Studio v5.10.1 or later
Spectacles OS v5.062 or later

The APIs are only available on Spectacles.

Setup Instructions

Remote Service Gateway Token Generator

All of the Remote Service Gateway APIs require an API token, which can be generated with the Lens Studio Token Generator. The Remote Service Gateway Token Generator plugin is available in the Asset Library under the Spectacles section. After installing the plugin, open the token generator from the Lens Studio main menu via Windows -> Remote Service Gateway Token, then use the Generate Token button to create a token. The generated token can be copied and used for API access.

This token has no expiration date, is tied to your Snapchat account, and can be used across multiple projects and computers. If a token has already been generated on another computer while logged in to My Lenses with the same Snapchat account, the generator displays that existing token instead of creating a new one.

If a new token needs to be generated, the existing token can be revoked from the same window (Windows -> Remote Service Gateway Token) using the Revoke Token button.

Revoking a token will invalidate Remote Service Gateway API usage for all existing lenses that use the token. This action cannot be undone.

Although this is a public API token, it is unique to your account and should be treated as confidential. Do not include this token when sharing your project with others or committing code to version control systems.

Remote Service Gateway Package

The Asset Library, under the Spectacles section, contains the Remote Service Gateway package, which includes the RemoteServiceModule, helper scripts, and examples for quick setup and use of the APIs. For initial setup, manually enter the token into the RemoteServiceGatewayCredentials component.

Examples

The following examples assume that you have already installed the Remote Service Gateway package from the Asset Library.

For more detailed examples, refer to the example prefab included in the Remote Service Gateway package (available in the Asset Library) or the AI Playground available in the Lens Studio Homepage Sample Project section.

OpenAI Example

This example demonstrates how to integrate OpenAI's chat completion API into Spectacles Lenses, allowing developers to send prompts with system instructions and user questions to GPT models.

import { OpenAI } from 'Remote Service Gateway.lspkg/HostedExternal/OpenAI';

@component
export class OpenAIExample extends BaseScriptComponent {
  onAwake() {
    this.createEvent('OnStartEvent').bind(() => {
      this.doChatCompletions();
    });
  }

  doChatCompletions() {
    // Send a chat completion request and print the model's reply.
    OpenAI.chatCompletions({
      model: 'gpt-4.1-nano',
      messages: [
        {
          role: 'system',
          content:
            "You are an incredibly smart but witty AI assistant who likes to answer life's greatest mysteries in under two sentences",
        },
        {
          role: 'user',
          content: 'Is a hotdog a sandwich?',
        },
      ],
      temperature: 0.7,
    })
      .then((response) => {
        print(response.choices[0].message.content);
      })
      .catch((error) => {
        print('Error: ' + error);
      });
  }
}

Multipart Example

This example demonstrates how to create a multipart body request, which is required by some OpenAI APIs.

The following code is an example of how to create a multipart body request. It is not a complete implementation and does not include the actual API request.

@component
export class MultipartExample extends BaseScriptComponent {
  @input
  base: Texture;
  @input
  mask: Texture;

  onAwake() {
    const fields = {
      prompt: 'Add a futuristic city skyline inside the square',
      size: '512x512',
    };

    // Unique boundary string (typically generated)
    const boundary = 'cc8d4dad00a64865babb5c2dd6c835e9';

    const files: {
      image: [string, Texture, string];
      mask: [string, Texture, string];
    } = {
      image: ['base_image.png', this.base, 'image/png'],
      mask: ['mask_image.png', this.mask, 'image/png'],
    };

    // Build the body
    const body = this.buildMultipartFormData(fields, files, boundary);

    // Create the request here (see the sketch after this example)
  }

  buildMultipartFormData(
    fields: { prompt: string; size: string },
    files: {
      image: [string, Texture, string];
      mask: [string, Texture, string];
    },
    boundary: string
  ) {
    const encoder = new TextEncoder();
    const CRLF = '\r\n';
    const chunks: Uint8Array[] = [];

    // Add fields
    for (const [name, value] of Object.entries(fields)) {
      chunks.push(encoder.encode(`--${boundary}${CRLF}`));
      chunks.push(
        encoder.encode(
          `Content-Disposition: form-data; name="${name}"${CRLF}${CRLF}`
        )
      );
      chunks.push(encoder.encode(`${value}${CRLF}`));
    }

    // Add files
    for (const [name, [filename, filedata, contentType]] of Object.entries(
      files
    )) {
      chunks.push(encoder.encode(`--${boundary}${CRLF}`));
      chunks.push(
        encoder.encode(
          `Content-Disposition: form-data; name="${name}"; filename="${filename}"${CRLF}`
        )
      );
      chunks.push(encoder.encode(`Content-Type: ${contentType}${CRLF}${CRLF}`));

      // Note: Texture inputs would need to be encoded to raw bytes (e.g. PNG data)
      // before being appended; only byte and string payloads are handled here.
      if (filedata instanceof Uint8Array || filedata instanceof ArrayBuffer) {
        chunks.push(
          filedata instanceof Uint8Array ? filedata : new Uint8Array(filedata)
        );
      } else if (typeof filedata === 'string') {
        chunks.push(encoder.encode(filedata));
      }

      chunks.push(encoder.encode(CRLF));
    }

    // Add final boundary (just one CRLF here)
    chunks.push(encoder.encode(`--${boundary}--${CRLF}`));
    chunks.push(encoder.encode(CRLF));

    // Combine all chunks into a single Uint8Array
    const totalLength = chunks.reduce((sum, chunk) => sum + chunk.length, 0);
    const body = new Uint8Array(totalLength);
    let offset = 0;

    for (const chunk of chunks) {
      body.set(chunk, offset);
      offset += chunk.length;
    }

    return body;
  }
}
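
The body built above must be sent with a Content-Type header that carries the same boundary string. The snippet below is a minimal sketch of that missing step only; the actual request call depends on the endpoint and request API you use, so refer to the example prefab in the Remote Service Gateway package for a complete implementation.

// Continuing inside onAwake() above: the request's Content-Type header must
// repeat the boundary string that was passed to buildMultipartFormData.
const contentTypeHeader = 'multipart/form-data; boundary=' + boundary;
// The header and the Uint8Array body are then handed to whatever request API
// you use to call the endpoint; that part is omitted here.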

Gemini Example

This example demonstrates how to integrate Gemini's Model API into Spectacles Lenses, allowing developers to send prompts with system instructions and user questions.

import { Gemini } from 'Remote Service Gateway.lspkg/HostedExternal/Gemini';
import { GeminiTypes } from 'Remote Service Gateway.lspkg/HostedExternal/GeminiTypes';

@component
export class GeminiExample extends BaseScriptComponent {
  onAwake() {
    this.createEvent('OnStartEvent').bind(() => {
      this.textToTextExample();
    });
  }

  textToTextExample() {
    const request: GeminiTypes.Models.GenerateContentRequest = {
      model: 'gemini-2.0-flash',
      type: 'generateContent',
      body: {
        contents: [
          {
            parts: [
              {
                text: "You are an incredibly smart but witty AI assistant who likes to answer life's greatest mysteries in under two sentences",
              },
            ],
            role: 'model',
          },
          {
            parts: [
              {
                text: 'Is a hotdog a sandwich?',
              },
            ],
            role: 'user',
          },
        ],
      },
    };

    Gemini.models(request)
      .then((response) => {
        print(response.candidates[0].content.parts[0].text);
      })
      .catch((error) => {
        print('Error: ' + error);
      });
  }
}

DeepSeek Example

This example demonstrates how to integrate DeepSeek's R1 Reasoning API into Spectacles Lenses, allowing developers to send prompts with system instructions and user questions.

Please be aware that DeepSeek's chat completions processing may require significant time to complete. Allow for extended response times when testing this functionality.

import { DeepSeek } from 'Remote Service Gateway.lspkg/HostedSnap/Deepseek';
import { DeepSeekTypes } from 'Remote Service Gateway.lspkg/HostedSnap/DeepSeekTypes';

@component
export class DeepSeekExample extends BaseScriptComponent {
  onAwake() {
    this.createEvent('OnStartEvent').bind(() => {
      this.doChatCompletions();
    });
  }

  doChatCompletions() {
    const messageArray: Array<DeepSeekTypes.ChatCompletions.Message> = [
      {
        role: 'system',
        content:
          "You are an incredibly smart but witty AI assistant who likes to answer life's greatest mysteries in under two sentences",
      },
      {
        role: 'user',
        content: 'Is a hotdog a sandwich?',
      },
    ];

    const deepSeekRequest: DeepSeekTypes.ChatCompletions.Request = {
      model: 'DeepSeek-R1',
      messages: messageArray,
      max_tokens: 2048,
      temperature: 0.7,
    };

    DeepSeek.chatCompletions(deepSeekRequest)
      .then((response) => {
        const reasoningContent = response?.choices[0]?.message?.reasoning_content;
        const messageContent = response?.choices[0]?.message?.content;

        print('Reasoning: ' + reasoningContent);
        print('Final answer: ' + messageContent);
      })
      .catch((error) => {
        print('Error: ' + error);
      });
  }
}

Snap3D Example

This example demonstrates how to integrate Snap3D into Spectacles Lenses, allowing you to generate 3D assets from a text prompt (the pipeline goes from text to a 2D image to a 3D mesh).

Please be aware that Snap3D processing may require significant time to complete. Allow for extended response times when testing this functionality.

The Snap3D example code shown here is for illustrative purposes only and will not execute as presented. Please refer to the complete example in the Remote Service Gateway package for functional implementation details.

import { Snap3D } from 'Remote Service Gateway.lspkg/HostedSnap/Snap3D';
import { Snap3DTypes } from 'Remote Service Gateway.lspkg/HostedSnap/Snap3DTypes';

@component
export class Snap3DExample extends BaseScriptComponent {
  onAwake() {
    this.createEvent('OnStartEvent').bind(() => {
      this.do3DGeneration();
    });
  }

  do3DGeneration() {
    Snap3D.submitAndGetStatus({
      prompt: 'A cute cartoony hotdog character',
      format: 'glb',
      refine: true,
      use_vertex_color: false,
    })
      .then((submitGetStatusResults) => {
        submitGetStatusResults.event.add(([value, assetOrError]) => {
          if (value === 'image') {
            let imageAsset = assetOrError as Snap3DTypes.TextureAssetData;
            // Apply imageAsset.texture
          } else if (value === 'base_mesh') {
            let gltfAsset = assetOrError as Snap3DTypes.GltfAssetData;
            // Apply gltfAsset.gltf
          } else if (value === 'refined_mesh') {
            let gltfAsset = assetOrError as Snap3DTypes.GltfAssetData;
            // Apply gltfAsset.gltf
          } else if (value === 'failed') {
            let error = assetOrError as {
              errorMsg: string;
              errorCode: number;
            };
            print('Error: ' + error.errorMsg);
          }
        });
      })
      .catch((error) => {
        print('Error: ' + error);
      });
  }
}

API Reference

Snap3D API

The Snap3D API provides endpoints for generating 3D meshes from text prompts or images. The process involves submitting a generation task and then polling for completion status.

Submit Generation Task

POST /submit

Submits a task to generate a 3D mesh from a text prompt or image input. This is an asynchronous endpoint that returns a task ID for status polling.

Request Body (JSON)
  • prompt (string, optional, default: none) - Text prompt describing the desired 3D object (e.g., "a blue frog")
  • image_b64 (string, optional, default: none) - Base64-encoded image input used for mesh generation
  • seed (int, optional, default: -1) - Random seed for result reproducibility
  • format (AssetFormat, required) - Output format: "glb" for 3D mesh, "png" for image
  • refine (bool, optional, default: true) - Whether to run mesh refinement after initial generation
  • use_case (string, required) - Arbitrary string to tag the use case (e.g., "easylens")
  • run_ald (bool, optional, default: true) - Whether to apply ALD (advanced latent diffusion) on the prompt
  • run_prompt_augmentation (bool, optional, default: true) - Whether to augment the prompt for improved generation
  • use_vertex_color (bool, optional, default: false) - If true, uses vertex color instead of UV mapping for texturing

Response

  • success (bool) - Indicates whether the task was successfully submitted
  • task_id (string) - Unique ID for tracking and retrieving the result of the task
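
For illustration, a minimal submit request body might look like the following (the values are hypothetical; only format and use_case are required):

const submitBody = {
  prompt: 'a blue frog', // optional text prompt
  format: 'glb', // required output format
  refine: true, // run mesh refinement after the base mesh is generated
  use_case: 'example_lens', // required, arbitrary tag for the use case
};
// A successful submission returns a response such as:
// { "success": true, "task_id": "<task id string>" }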

Check Generation Status

GET /get_status

Retrieves the current status and results of a previously submitted generation task.

Request Parameters
  • task_id (string, required) - Task ID obtained from successful submission

Response

  • status (Status, always present) - Current status of the generation task
  • stage (Stage, optional) - Latest (or current) execution stage
  • error_msg (string, optional) - Error message if status is failed
  • error_code (int, optional) - Error code if status is failed
  • artifacts (Artifact[], optional) - Array of generated artifacts (URLs and metadata)
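
For illustration, a response for a completed task might look like the following (the URLs and values are hypothetical; see the data types below):

const statusResponse = {
  status: 'completed',
  stage: 'refined_mesh_gen',
  artifacts: [
    {
      url: 'https://example.com/generated/image.png', // pre-signed URL (hypothetical)
      artifact_type: 'image',
      format: 'png',
    },
    {
      url: 'https://example.com/generated/mesh.glb', // pre-signed URL (hypothetical)
      artifact_type: 'refined_mesh',
      format: 'glb',
    },
  ],
};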

Data Types

AssetFormat

  • glb - Outputs a 3D mesh in GLB format
  • png - References a generated driving image

Status

  • initialized - Task created but not yet running
  • running - Currently executing the corresponding stage
  • completed - All requested stages have been completed
  • failed - Generation failed

Stage

  • image_gen - Stage 1: Image generation
  • base_mesh_gen - Stage 2: Base mesh generation
  • refined_mesh_gen - Stage 3: Mesh refinement

Artifact

  • url (string) - Pre-signed URL of the generated artifact
  • artifact_type (ArtifactType) - Type of the generated asset
  • format (string) - File format (e.g., GLB, PNG)

ArtifactType

  • image - Stage 1: Original generated image
  • base_mesh - Stage 2: Generated base mesh
  • refined_mesh - Stage 3: Generated refined mesh
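
To tie the endpoints and data types together, the sketch below outlines the submit-then-poll flow in TypeScript. httpPostJson, httpGetJson, and delay are hypothetical stand-ins for whatever HTTP and timing utilities your environment provides; they are not part of the Remote Service Gateway package, and Lenses should normally use the Snap3D helper shown in the examples above rather than calling these endpoints directly.

// Hypothetical helpers standing in for an HTTP client and a timer; not part of the package.
declare function httpPostJson(path: string, body: object): Promise<any>;
declare function httpGetJson(path: string, params: object): Promise<any>;
declare function delay(ms: number): Promise<void>;

function pollStatus(taskId: string) {
  httpGetJson('/get_status', { task_id: taskId }).then((status) => {
    if (status.status === 'completed') {
      // Prefer the refined mesh if present, otherwise fall back to the first artifact.
      const artifacts = status.artifacts || [];
      const refined = artifacts.find((a: any) => a.artifact_type === 'refined_mesh');
      const artifact = refined || artifacts[0];
      print('Mesh URL: ' + artifact.url);
    } else if (status.status === 'failed') {
      print('Error ' + status.error_code + ': ' + status.error_msg);
    } else {
      // Still initialized or running: wait, then poll again.
      delay(2000).then(() => pollStatus(taskId));
    }
  });
}

// Submit the generation task, then start polling with the returned task_id.
httpPostJson('/submit', {
  prompt: 'a blue frog',
  format: 'glb',
  refine: true,
  use_case: 'example_lens',
}).then((submitResponse) => {
  if (submitResponse.success) {
    pollStatus(submitResponse.task_id);
  }
});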

Known Limitations

  • The chat_completions endpoint does not support streaming.
  • For the Gemini Live API, only models/gemini-2.0-flash-live-preview-04-09 is supported.