Version: 5.x
Supported on: Snapchat

Character Skin Generator

The Character Skin Generator plugin enables the creation of Lenses powered by ML models that transform users into full-body digital characters with pixel-level accuracy.

Getting Started

Launching Skin Generator

  1. Navigate to the Lens Studio AI option in the top-right corner of Lens Studio. Then, click the GenAI Home Page tab in the sidebar.

  2. On the GenAI Home Page tab, you may need to scroll down to find Character Skin Generator.

User Interface Overview

  1. Creation Panel: Provides an input field for the image prompt.

  2. Gallery: Displays all created effects along with their statuses.

  3. Status Bar: Shows information about recent actions and system feedback.

Creating a New Effect

  1. Upload the Image Reference you want to use to create your effect.

    See the Best Practices guide to learn how to achieve the best results.

  2. Select the Non-humanoid anatomy checkbox if your reference image shows a character or object without a human-like body structure (for example, a banana, cactus, snowman, or other non-humanoid shapes).

  3. When your prompt is ready, click Generate previews.

  4. A new tile will appear in the Gallery with a loading indicator in the corner, showing the progress. A corresponding message is also displayed in the Status Bar at the bottom.

    Preview generation may take up to 5 minutes. You can close the plugin and return later.

  5. Once the preview has been generated, you can click its tile in the Gallery to open the details page. Here, you can use the arrows on the sides to preview the effect on different models. Additionally, you can click the button in the bottom-right corner to view the original image the effect was applied to.

  6. Happy with the preview? Great! Click Train model to start training your model. You’ll be able to track the training status directly in the Gallery.

    Training the model may take 8-12 hours. You can close the plugin and return later.

  7. Once training is complete, you can import the effect directly from the Gallery by clicking the Import button, or click Import to project on the details page to add it to your project.

  8. When you’re satisfied with the result, save your project and push the Lens for testing on Snapchat. See the Pairing to Snapchat guide to test your Lens on a device, and the Publishing guide to learn more about sharing your creation.

Best Practices

Image Reference Recommendations

You can use any image, but for best results, upload an Image Reference that meets the following criteria:

  • High-quality image (at least 720px tall);
  • Plain, high-contrast background;
  • Front-facing orientation of the object;
  • Larger head with clear, human facial features;
  • Simple textures;
  • No protruding elements;
  • Full body visible (head to toe);
  • Human-like body shape in an A-pose or T-pose.
(Images: examples of ideal, good, and poor Image References.)
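
If you prepare several reference images at once, a small script can flag files that miss the size recommendation before you upload them. The sketch below is not part of the plugin; it is a minimal Pillow-based check in which the references folder, the portrait-orientation warning, and the 720px threshold (taken from the list above) are illustrative assumptions.

```python
from pathlib import Path

from PIL import Image  # pip install Pillow

MIN_HEIGHT = 720  # mirrors the "at least 720px tall" recommendation


def check_reference(path: Path) -> list[str]:
    """Return warnings for an image that may not meet the recommendations."""
    warnings = []
    with Image.open(path) as img:
        width, height = img.size
        if height < MIN_HEIGHT:
            warnings.append(
                f"{path.name}: only {height}px tall (recommended: at least {MIN_HEIGHT}px)"
            )
        if width > height:
            # Full-body, head-to-toe references are usually portrait-oriented.
            warnings.append(f"{path.name}: landscape orientation ({width}x{height})")
    return warnings


if __name__ == "__main__":
    for file in sorted(Path("references").glob("*.png")):
        for warning in check_reference(file):
            print(warning)
```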

Figure Crop

The tool automatically crops the uploaded image to focus on the human figure and key control points, including the head, arms, and legs. Accurate cropping helps ensure higher-quality model generation.

Background Contrast

For best results, upload an image with a solid background that clearly contrasts with the character’s colors. Low-contrast images may cause visual artifacts, such as a glow along the figure’s outline or incorrect figure detection.

In the example below, the generation boundary captures part of the background from the reference image:

(Images: low-contrast reference and resulting model.)

Increasing contrast and clearly separating the subject from the background improves figure recognition and overall output quality:

(Images: high-contrast reference and resulting model.)
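
If you want a quick sanity check before uploading, a rough luminance comparison between the image border and its center can flag low-contrast references. This is only a heuristic sketch, not the plugin's figure-detection logic; the thresholds, the reference.png filename, and the assumption that the border is mostly background are illustrative.

```python
from PIL import Image, ImageStat  # pip install Pillow


def luminance(img):
    """Mean luminance (0-255) of a grayscale copy of the image."""
    return ImageStat.Stat(img.convert("L")).mean[0]


def background_contrast(path, border=0.05, min_gap=60):
    """Rough check: compare the image border (assumed background)
    with the central region (assumed subject)."""
    with Image.open(path) as img:
        w, h = img.size
        bw, bh = int(w * border), int(h * border)

        # Central crop, assumed to contain mostly the character.
        center = img.crop((bw * 2, bh * 2, w - bw * 2, h - bh * 2))

        # Border strips, assumed to contain mostly the background.
        strips = [
            img.crop((0, 0, w, bh)),        # top
            img.crop((0, h - bh, w, h)),    # bottom
            img.crop((0, 0, bw, h)),        # left
            img.crop((w - bw, 0, w, h)),    # right
        ]
        bg = sum(luminance(s) for s in strips) / len(strips)
        gap = abs(luminance(center) - bg)

        if gap < min_gap:
            print(f"Low contrast ({gap:.0f}): consider a plainer, clearly contrasting background")
        else:
            print(f"Contrast looks OK ({gap:.0f})")


background_contrast("reference.png")
```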

Background Color

Recommended background options include white, black, or a transparent PNG. When using an image reference, the tool transfers colors and textures directly from the uploaded image. If the source image is dark, the resulting model will also appear dark. You can improve results by adjusting brightness and saturation before uploading.

(Images: dark image reference and resulting model.)
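
Any image editor can make this adjustment before upload; as one possibility, the Pillow sketch below flattens a transparent PNG onto a white background and lifts brightness and saturation. The enhancement factors and file names are illustrative assumptions, not values the plugin requires.

```python
from PIL import Image, ImageEnhance  # pip install Pillow


def brighten_reference(path, out_path, brightness=1.3, saturation=1.2):
    """Flatten transparency onto a white background and lift brightness
    and saturation before uploading a dark reference image."""
    img = Image.open(path).convert("RGBA")

    # Composite onto a plain white background (one of the recommended options).
    background = Image.new("RGBA", img.size, (255, 255, 255, 255))
    flat = Image.alpha_composite(background, img).convert("RGB")

    # Lift brightness and saturation; tune the factors per image.
    flat = ImageEnhance.Brightness(flat).enhance(brightness)
    flat = ImageEnhance.Color(flat).enhance(saturation)

    flat.save(out_path)


brighten_reference("dark_reference.png", "adjusted_reference.png")
```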

Lighting is also transferred. In the example below, side lighting from the reference image is reflected in the generated ML model.

(Images: image reference with side lighting and resulting model.)

Image References with Faces

If your reference image includes a realistic face, it will be transferred to the model:

(Images: realistic face reference and resulting model.)

However, realistic faces generally produce lower-quality results compared to stylized or hand-drawn faces:

(Images: realistic face reference and result, compared with a hand-drawn face reference and result.)

Image References with Heads

Head size in the reference image affects facial quality:

  • Images with very small heads tend to produce lower-quality facial transfers.
  • Using a reference with a larger head helps the tool better calculate and transfer facial features.

As a result, facial detail and overall quality in the final render are typically higher when the head is more prominent in the reference image. Experiment with different references to achieve the best outcome.

(Images: examples of ideal head proportions.)

Supported Content

The technology performs best when creating ML models for:

  • 2D characters with clear shapes;
  • Stylized humanoid characters;
  • Simple 3D styles;
  • Objects with simplified textures.

Unsupported Content

The technology does not support:

  • Accessories or objects attached to hands (these may be cropped);
  • Objects with elements extending beyond the human body silhouette (for example, deer antlers or objects above the head).