Advanced Captioning


Image captioning is key for training high-quality custom AI models. Captions define the key elements of your training dataset, helping the AI recognize important details while ignoring non-essential ones. Well-structured captions guide the AI’s learning process, ensuring accurate and consistent outputs.


Use Auto-Captioning as a Starting Point

Scenario’s auto-captioning feature is the fastest and most efficient way to generate captions and works well in most cases. However, it's recommended to review and refine captions as needed for better accuracy. Auto-generated descriptions may not always capture key elements, especially for abstract concepts, so manual adjustments can improve training results.


Understand Caption Structure

A strong caption comes from describing the image as if you were explaining it to someone who can't see it. Keep it clear, structured, and detailed—mention key elements like characters, objects, colors, outfits, and distinctive traits. Use commas to separate details for clarity when needed.

Think of captions like well-crafted descriptive sentences:

  1. Identify the main subject(s) (character, object, or scene).

  2. Describe the action taking place (if any).

  3. Add extra details about the background, colors, emotions, or artistic style.

  4. Optionally, include a unique trigger word to give the model a consistent handle on the subject, as explained below.
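The four steps above can be sketched as a small helper that assembles caption parts in order. This is purely illustrative: the function name and fields are not part of Scenario's tooling, just a way to make the structure concrete.

```python
def build_caption(subject, action=None, details=None, trigger=None):
    """Assemble a caption: optional trigger word, subject, action, then extra details."""
    parts = []
    if trigger:
        parts.append(trigger)      # unique trigger word (optional, step 4)
    parts.append(subject)          # main subject(s) (step 1)
    if action:
        parts.append(action)       # action taking place, if any (step 2)
    if details:
        parts.extend(details)      # background, colors, emotions, style (step 3)
    return ", ".join(parts)

caption = build_caption(
    subject="a young knight in silver armor",
    action="raising a sword toward the sky",
    details=["stormy gray background", "dramatic lighting", "digital painting style"],
)
```

Comma-separated parts keep each detail distinct, matching the guidance above about using commas to separate details for clarity.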


Use Trigger Words (Optional)

For greater control, you can use a unique token (also called a trigger word): a specific word or phrase that consistently represents a character, style, or concept across prompts. It should be distinctive and not resemble common dictionary words, so the AI doesn't attach unintended meanings to it. This technique is especially useful in SDXL, though it can occasionally be applied in Flux as well.

Example of a captioned dataset using the trigger word “Vendalixia” (the name chosen for this character)
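Applying a trigger word like the one above means prepending it to every caption in the dataset, without exception. A minimal sketch, assuming hypothetical file names and captions (only "Vendalixia" comes from the example above):

```python
# Illustrative dataset: image file name -> caption text.
dataset = {
    "img_001.png": "a warrior woman with crimson hair, standing in a forest",
    "img_002.png": "a warrior woman with crimson hair, casting a spell",
}

TRIGGER = "Vendalixia"

# Prepend the trigger word so it appears uniformly in every caption.
captions = {name: f"{TRIGGER}, {text}" for name, text in dataset.items()}
```

Because the token is applied uniformly, the model learns to associate it with the character, and prompting with it later recalls that character.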


Create Effective Captions: Adapt Them to Your Training Goals

The level of detail in captions should align with the model's purpose. If the goal is to train a style, a trigger word is not always necessary, and captions can simply describe the general scene. However, if the focus is training a specific subject, captions should ideally highlight defining features, including colors, materials, and proportions. A trigger word may be useful in this scenario, as it helps ensure the subject can be recalled accurately in future generations.

A good rule of thumb is to caption only what you expect to change later. For example, if a character's outfit isn't included in the caption, it may be harder to modify it in future generations. If the outfit doesn't need to change, you can leave it out (and add it through prompts instead).


Balancing Detail and Brevity

Captions should be clear and precise. They must be detailed enough for the AI to learn key features but not overly complex. Overloaded captions can confuse the model, while vague captions may not provide enough information. Keeping captions concise yet informative leads to better results.


Consistency Matters

Maintaining consistent captioning across a dataset helps the AI model learn more effectively. Changing phrasing or descriptions too often can lead to inconsistencies in outputs. If you use a trigger word, apply it uniformly throughout the dataset. Similarly, if you are training a character, ensure key traits like hair color, clothing, or expressions are described in the same way across all images.
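One way to keep a dataset consistent is a quick automated check before training: flag any caption that is missing the trigger word or a key trait phrase. This sketch is a hypothetical helper, not part of Scenario's platform, and the captions and trait names are illustrative.

```python
def check_consistency(captions, trigger, required_traits=()):
    """Return {file name: list of missing phrases} for inconsistent captions."""
    problems = {}
    for name, text in captions.items():
        missing = []
        if trigger and trigger not in text:
            missing.append(trigger)          # trigger word must appear uniformly
        missing += [t for t in required_traits if t not in text]
        if missing:
            problems[name] = missing
    return problems

captions = {
    "img_001.png": "Vendalixia, crimson hair, green cloak, walking in a forest",
    "img_002.png": "a woman with crimson hair, sitting by a campfire",
}
issues = check_consistency(captions, "Vendalixia", required_traits=["crimson hair"])
# img_002.png is flagged because it omits the trigger word
```

A check like this catches the phrasing drift described above: if one image says "crimson hair" and another says "red hair", the second would be flagged until the descriptions are unified.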

In some cases, no captions at all can be a viable strategy. This applies mainly to Flux-based style models trained with 50+ diverse images. When the dataset is large, well-balanced, and visually consistent, the model can learn the style organically without requiring textual reinforcement.
