Face Generator
Design powerful face effects with advanced stylization and transformation capabilities. From subtle morphs and hairstyles to bold geometry changes, Face Generator gives creators full control over how a face looks. Use text prompts or image samples as inputs to generate unique styles, and apply them to the entire face or focus on specific areas.
Getting Started
Launching Face Generator
- Navigate to the **Lens Studio AI** option in the top right corner of Lens Studio, then click the **GenAI Home Page** tab in the sidebar.
- On the **GenAI Home Page** tab, you may need to scroll down to find Face Generator.
User Interface Overview
- Creation Panel: Provides tools for creating and editing face effects.
- Gallery: Shows all the effects you’ve created along with their statuses.
Enhanced Model
The Enhanced Model is ideal for morphs, characters, and animal transformations while preserving the user's identity.
Effect Creation Flow
- Click the **Surprise me** button to try one of the default prompts and get familiar with the model.
- If you are ready to use your own prompt, enter a description of the effect you’d like to generate in the **Effect Prompt** text field (up to 500 characters). See the Best Practices section to learn how to achieve the best results.
- You can add an image input to guide the generation and make the result closer to your vision. You can use the **Effect Prompt**, an **Image Reference**, or a combination of both, whichever works best for your case. See the Best Practices section to learn how to achieve the best results.
- The settings can significantly affect the final result, helping you achieve the best match for your vision. See the Settings section to learn more.
  - **Reference Strength**: Controls how strongly the effect follows the Image Reference. A higher value makes the result closer to the Image Reference, but reduces similarity to the original user photo.
  - **Attributes Preservation**: Affects the hair and headwear area the most. Higher values keep the hair area more consistent and prevent hair or headwear from being removed.
  - **Seed**: Controls randomness. Use the same number to recreate the same look, or try different ones for new variations.
- When your prompt is ready, press the **Generate Previews** button.
- A new tile will appear in the Gallery with a loading indicator in the corner showing the progress. Generating the preview may take up to 5 minutes, but you can close the plugin and come back later.
- Once the preview has been generated, click its tile in the Gallery to open the details page. Use the arrows on the sides to preview the effect on different models, and click the button in the bottom-right corner to view the original image the effect was applied to.
- If the preview doesn’t match your expectations, click **Copy Settings** to adjust your prompts or seed.
- You can make as many tweaks as needed until the result feels right; each change appears in the Gallery as a new effect without altering the current one, so you can always go back and compare whether the updated effect looks better than the original.
- Happy with the preview? Click **Train Model** to start training your model. You can track the training status directly in the Gallery. Training may take up to 2 hours, but you can close the plugin and come back later.
- Once training is complete, click the **Import** button in the Gallery, or click **Import to project** on the details page, to add the effect to your project.
- Preview the result in the Preview panel and continue building your Lens.
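The Seed setting in the flow above behaves like a standard random-number seed: the same seed with the same inputs reproduces the same output, while a different seed produces a new variation. A minimal Python sketch of the idea (a generic illustration, not Face Generator’s internals):

```python
import random

def generate_variation(seed: int) -> list[float]:
    """Toy stand-in for a generative model: the seed fully
    determines the 'random' output."""
    rng = random.Random(seed)
    return [round(rng.random(), 3) for _ in range(4)]

# Same seed -> identical result, so a look can be recreated exactly.
assert generate_variation(42) == generate_variation(42)
# Different seed -> a new variation of the same effect.
assert generate_variation(42) != generate_variation(7)
```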
Best Practices
Effect Prompt
You can use different types of prompts — both short and long — to guide the generation process. It’s usually best to start with a short prompt and refine it as you go. Add adjectives, specific details, or additional descriptions to bring the result closer to your desired look. Keep in mind that the tool interprets your prompt literally.
- Focus on the essentials. Describe only the changes you want to make, and avoid unnecessary or general details.
- Skip background descriptions. There’s no need to mention the background unless it’s part of your design.
- Use descriptive words. Adjectives like 3D, cartoon, or realistic can help you control the style of the generated image.
- Long prompts. More detailed prompts can help you create stronger and more unique designs. Combine them with fine-tuned settings to achieve precise and distinctive results.
- Prompt Weighting. You can use special symbols in your Effect Prompt to control how much influence specific words or phrases have on the final result, emphasizing or reducing their impact as needed.
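The weighting idea above can be illustrated with a small parser for the `(phrase:weight)` convention used by many generative tools. The syntax here is an assumption for illustration only; check which symbols Face Generator actually supports:

```python
import re

# Matches "(phrase:weight)" spans, a common emphasis convention in
# generative tools. Face Generator's actual syntax may differ; this
# only illustrates the idea of per-phrase weighting.
WEIGHT_RE = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_weights(prompt: str) -> list[tuple[str, float]]:
    """Return (phrase, weight) pairs; unweighted text defaults to 1.0."""
    parts: list[tuple[str, float]] = []
    pos = 0
    for m in WEIGHT_RE.finditer(prompt):
        plain = prompt[pos:m.start()].strip()
        if plain:
            parts.append((plain, 1.0))
        parts.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip()
    if tail:
        parts.append((tail, 1.0))
    return parts

# "pointed ears" gets extra emphasis; everything else stays at 1.0.
parse_weights("cartoon elf, (pointed ears:1.4), subtle makeup")
```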
Image Reference
Use a high-quality image with a solid background and a frontal orientation of the object. It is best if the object has human-face-like features, such as generalized eyes and lips.
*(Image galleries: examples of ideal, good, and poor Image References.)*
For example, take this strawberry:
The ML model doesn’t understand that these dots are meant to represent eyes or how to map them from the reference design onto a person’s face. As a result, the generated images will look like this:
But if you use a strawberry like this:
The ML model recognizes the eye placement and the familiar shapes and features in the original image, allowing it to transfer them to the user’s face more accurately. The resulting images will look like this:
Effect Prompt + Image Reference
You can combine an Image Reference with an Effect Prompt to expand your design possibilities and explore different styles, even if they differ from the original image. If you want to recolor the skin or add a new hairstyle, try entering a custom Effect Prompt in the corresponding field.
Comparison of Prompt Combinations
- Effect Prompt only: *(images: original user image, Effect Prompt, processed user image)*
- Image Reference only: *(images: original user image, Image Reference, processed user image)*
- Effect Prompt + Image Reference: *(images: original user image, Effect Prompt + Image Reference, processed user image)*
Design Consistency
To prevent model flickering, make sure your design renders consistently across all previews. Your effect should look as similar as possible from one person to another, ensuring the generated ML model maintains the intended appearance. If the effect appears unstable or changes noticeably between persons, try adjusting your prompt or settings until you achieve consistent results.
Here’s an example of an unstable effect: you can see noticeable flickering and changes in ear shape as the head turns.
If we look at how the effect appears on different people, we can see noticeable variations - including changes in face shape, eye appearance, and ear size and shape.
*(Images: inconsistent previews across four different faces.)*
Try lowering the Attributes Preservation setting to achieve a more uniform result across all subjects. Below is an example of a consistent dataset, where the effect remains stable across different faces.
*(Images: consistent previews across four different faces.)*
Crop
Keep in mind that the tool operates within a defined area. It transforms the user’s head and the surrounding region. Any part of your design that extends beyond this area will be cropped. For the best visual results, make sure your design stays within the crop area.
*(Images: the crop area and a cropped effect.)*
Examples
Using this model, you can achieve high-quality visual effects such as morphing, gender swaps, facial hair (mustache or beard), beauty and style transformations, age changes (old or baby), animal transformations, and the addition of accessories.
- **Transformations**: Create a wide variety of characters, from stylized cartoons to fantasy beings like orcs or elves. For the best results, use high-quality reference images or detailed text prompts. Characters with human-like features typically deliver the most consistent results. You can adjust or recolor skin tones by experimenting with different seeds or adding a supporting text prompt alongside your image.
- **Morphing**: Morphing is the most flexible and controllable generation type. Using image references and customization options, you can transform the user while preserving their unique features and skin texture. By adjusting the settings, transformations can range from subtle enhancements to dramatic changes; experiment to find the perfect balance.
- **3D cartoon**: The tool provides extensive support for cartoon-style designs. You can create a wide range of transformations and fine-tune them using the available settings.
- **Animal transformation**: By choosing an appropriate reference image and adjusting the settings, you can achieve a controlled resemblance to the input image. Keep in mind that the model is optimized for facial detection; animals with more anthropomorphic (human-like) features yield the best results.
- **Accessories**: You can also use this model to generate accessories such as glasses, hats, and masks. When generating models, make sure your designs remain consistent across all variations.
Settings
Reference Strength
Controls how strongly the effect follows the Image Reference. A higher value makes the result closer to the Image Reference, but reduces similarity to the original user photo.
*(Images: the Image Reference and results at Reference Strength values 2.35, 5.50, and 7.75.)*
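Conceptually, Reference Strength acts like a blend factor between the user’s original look and the Image Reference. The linear interpolation below is only an analogy for how the setting trades one off against the other, not the tool’s actual algorithm; the feature lists and the `max_strength` bound are hypothetical:

```python
def blend(user_features, reference_features, strength, max_strength=10.0):
    """Conceptual linear blend: higher strength pulls the result toward
    the reference and away from the user's original look.
    (Illustrative analogy only, not Face Generator's implementation.)"""
    t = strength / max_strength
    return [(1 - t) * u + t * r
            for u, r in zip(user_features, reference_features)]

user = [1.0, 0.0]  # hypothetical features of the user photo
ref = [0.0, 1.0]   # hypothetical features of the Image Reference

assert blend(user, ref, 0.0) == user        # no reference influence
assert blend(user, ref, 10.0) == ref        # fully matches reference
assert blend(user, ref, 5.0) == [0.5, 0.5]  # halfway between the two
```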
Attributes Preservation
Affects the hair and headwear area the most. Higher values provide stronger preservation of the user’s hairstyle and headwear.
*(Images: the Image Reference and results at Attributes Preservation values 1.00, 5.50, and 10.00.)*
























