This guide walks you through training a “dual-character model” in Scenario, which allows multiple characters to maintain their unique traits across various poses and expressions without requiring manual adjustments or inpainting.
Building on the process outlined in the previous guide (Multi-Character Scenes), this approach delivers exceptional consistency for character pairs (or small sets) that appear together frequently.
A dedicated dual-character model offers several advantages for projects requiring consistent character interactions.
You'll achieve stronger character consistency across multiple generations, eliminating the need for constant inpainting to fix character traits.
This gives you greater control over multi-character interactions for storytelling and game design, ultimately creating a more efficient workflow when generating the same character pairs repeatedly.
The main limitation is specificity: this approach works best for pre-defined character combinations rather than assembling random characters on demand. It's ideal for projects where specific characters appear together consistently, such as game protagonists, story characters, or recurring subjects.
To train an effective dual-character model, you'll need a carefully curated dataset featuring both characters together.
Start with 8-12 high-quality images showing both characters in various interactions. These images can be created using the workflow from section 6.1, with refinements made in Scenario Canvas if needed.
Ensure each character maintains their core visual traits (hairstyle, proportions, outfit) while appearing in different poses or expressions.
All images should share the same overall style for maximum consistency.
For better model flexibility, you may include a mix of image formats by leaving some images in landscape format ("Fit to square") while cropping others to square format. This helps the model adapt to different ratio requirements.
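To illustrate the square-cropping half of this mixed-ratio preparation, here is a minimal Python sketch. The helper name and the idea of center-cropping are assumptions for illustration, not part of Scenario's tooling; landscape images would simply be left untouched.

```python
def center_square_crop_box(width: int, height: int) -> tuple:
    """Return (left, top, right, bottom) for a centered square crop.

    Hypothetical helper for preparing the square-cropped portion of a
    mixed-ratio training set.
    """
    side = min(width, height)  # the square's side is the shorter dimension
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# A 1920x1080 landscape frame yields a centered 1080x1080 crop box:
print(center_square_crop_box(1920, 1080))  # (420, 0, 1500, 1080)
```

The same box can be fed to any image library's crop function when you build the dataset.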
Captions play a crucial role in dual-character models, as they determine how your model responds to prompts. Scenario's automated captioning may not always be optimal for these models, so manual editing is recommended:
Structure your captions to follow a consistent format such as "Character A (description) + Action/Scene + Character B (description)".
Keep captions focused on essential character traits and their interaction. You might use AI tools like ChatGPT to help generate consistently structured descriptions across your training set.
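The caption structure above can be enforced programmatically when preparing a training set. This is a minimal sketch; the helper name and the example characters ("Mira", "Bren") are hypothetical and used purely for illustration.

```python
def build_caption(char_a: str, desc_a: str, action: str,
                  char_b: str, desc_b: str) -> str:
    """Assemble a caption in the consistent
    'Character A (description) + Action/Scene + Character B (description)' structure."""
    return f"{char_a} ({desc_a}) {action} {char_b} ({desc_b})"

# Hypothetical characters, for illustration only:
caption = build_caption(
    "Mira", "red-haired archer in a green cloak",
    "sharing a campfire with",
    "Bren", "bearded dwarf in iron armor",
)
print(caption)
# Mira (red-haired archer in a green cloak) sharing a campfire with Bren (bearded dwarf in iron armor)
```

Applying one such template across every caption in the set keeps the model's response to prompts predictable.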
Once your dataset is prepared with proper captions, you're ready to train:
Use the default automatic training settings for balanced results, or select the "Subject" preset when training on SDXL.
Assign a clear model name, thumbnail, and relevant tags for easy organization. Add the tag "character" or "dual-character" to help with filtering later.
Click "Start Training" to begin the process. You'll receive a notification when training is complete.
After training, test your model thoroughly to confirm it maintains character consistency. Use Prompt Spark to generate relevant prompts following the same structure as your captions: "Character A (description) + Action/Scene + Character B (description)."
Review your results to ensure both characters remain visually distinct with their defining features intact. If minor adjustments are needed, you can always use Retouch to edit specific elements, switching from the dual-character model to the original single-character model for precise control.
Your dual-character model is compatible with Scenario's full feature set, giving you additional ways to refine and control outputs.
Use reference images
You can leverage any reference mode (like Image-to-Image or ControlNet) to guide pose or composition while maintaining character consistency. Here’s an example with a “Triple Character Model”:
Try Scenario Live for real-time generation
Simply sketch a rough composition in the left panel and watch as your characters are rendered with consistent features and traits in the right panel:
By training a dual-character model, you create a powerful tool that eliminates constant refinements when generating character pairs. The process is similar to training any other model; the key difference lies in careful caption preparation and a consistent prompt structure. This workflow streamlines the creation of character interactions, ensuring your characters remain consistent through every scene and scenario you create.
Image-to-Image Generation: Scenario API Documentation - POST /generate/img2img

- For SDXL LoRA models, use `scheduler: "LCMScheduler"` and `numInferenceSteps: 10`.
- For Flux Dev LoRA models, use `baseModelId: "flux.1-schnell"` and `numInferenceSteps: 4`.
- Use `numSample: 1` for every generation if you want a live effect.
Quentin