Optimize Generation Settings

This guide walks you through optimizing image quality in Scenario to achieve sharp, high-quality outputs tailored to your creative needs. By adjusting key settings like Sampling Steps and Guidance, and leveraging tools like Enhance, you can enhance results efficiently.

Adjust Sampling Steps

Sampling Steps determine how many iterations the AI uses to refine an image, directly impacting detail and generation time.

  • Flux: Defaults to 28 steps, with a range of 1–50.

  • SDXL: Defaults to 30 steps, with a range of 10–150. 

Adjust Sampling Steps only in the final stage of a process/workflow, as a way to polish a generation slightly further. Their impact is marginal, especially on Flux; they have more effect on SDXL.

Tip: Start with defaults (28 for Flux, 30 for SDXL) and tweak within the recommended ranges to find your sweet spot.

Set the Guidance value

The Guidance slider controls how strictly the AI follows your prompt, balancing creativity and accuracy.

  • Flux: Works well between 3 and 5. Use 3 for more creative freedom or 5 for tighter prompt adherence.

  • SDXL: Benefits from 6 to 12. Higher values sharpen focus on prompt details, while lower ones allow flexibility.

Tip: Make small adjustments so you refine results without overcorrecting.

Like Sampling Steps, Guidance is best adjusted in the final stage of a process/workflow, to polish outputs when the model struggles with prompt adherence. It works better on SDXL than on Flux, and setting Guidance too high will distort the style.
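The recommended ranges above can be sketched the same way. This helper is illustrative only (not a Scenario API call); it keeps a Guidance value inside the range this guide suggests for each model.

```python
# Recommended Guidance ranges from this guide (Flux: 3-5, SDXL: 6-12).
GUIDANCE_RANGE = {"flux": (3.0, 5.0), "sdxl": (6.0, 12.0)}

def clamp_guidance(model: str, value: float) -> float:
    """Clamp a Guidance value into the recommended range for the model."""
    lo, hi = GUIDANCE_RANGE[model]
    return max(lo, min(hi, value))
```

So a Guidance of 7 is left as-is for SDXL but pulled down to 5.0 for Flux, where higher values start to harm the style.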

Leverage the Seed Value

Every image generated in Scenario gets a unique Seed value, which acts like a blueprint for reproducing or tweaking your results.

  • Reproducing: Input the exact seed (e.g., 1464071123) with the same prompt to recreate an image perfectly.

  • Refining: Keep the seed and tweak the prompt or use the image as a reference for controlled variations.

  • Exploring: Adjust the seed by small increments (e.g., +1) or switch models for new styles while keeping the core composition.

Note: With tools like ControlNet, seeds won’t replicate exactly but still offer consistency.

Tip: Use seeds to iterate precisely; you can find them in the image info panel after generation.
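The three seed workflows above can be sketched as plain request dictionaries. The field names here are hypothetical (Scenario's actual request format may differ); the seed and prompt values are the examples used in this guide.

```python
# A base generation request (hypothetical field names).
base = {"prompt": "Portrait of a dwarf in armor, 4K", "seed": 1464071123}

# Reproducing: the same prompt + the same seed recreates the image.
reproduce = dict(base)

# Refining: keep the seed, tweak the prompt for a controlled variation.
refine = dict(base, prompt=base["prompt"] + ", dramatic lighting")

# Exploring: nudge the seed in small increments (+1, +2, ...) to vary
# details while keeping the core composition.
explore = [dict(base, seed=base["seed"] + i) for i in range(1, 4)]
```

The key point is that the seed is the only thing that changes in the "Exploring" case, which is what keeps the composition stable across variations.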


Choose a Scheduler (SDXL Only)

Schedulers, available exclusively for Stable Diffusion models like SDXL, guide the step-by-step refinement of images by managing noise reduction and clarity at each timestep.

  • Role: Schedulers refine the denoising process, influencing quality, speed, and style. The default scheduler suits most cases, but advanced users can experiment for greater control.

  • Common Options:

    • Euler & Euler Ancestral: Fast and efficient, requiring fewer steps for quality results.

    • LCM Scheduler: The fastest option, boosting generation speed on SDXL. Use lower Sampling Steps and Guidance, but expect potential quality trade-offs.

    • DDIM: Efficient with reduced processing time.

    • DPM (Single/Multi-Step): Enhances quality through differential equation approximations.

    • KDPM & KDPM Ancestral: Offer fine control and diversity in outputs.

    • UniPC: Balances quality and speed with a predictor-corrector approach.

    • DDPM: High-quality results but requires more iterations.

    • PNDM: Speeds up DDPMs while preserving quality.

    • Heun Sampling: Precise and adaptive for efficient diffusion.

  • Choosing a Scheduler:

    • Prioritize speed? Try LCM or Euler.

    • Focus on quality? Use DDPM or DPM.

    • Need control? Experiment with KDPM or UniPC.

  • Example (LCM Scheduler):

    • Model: SDXL

    • Prompt: "Portrait of a dwarf in armor, 4K"

    • Seed: 1595264324

    • Benefits: Faster generation with lower default settings.

    • Trade-Off: May reduce detail or introduce artifacts; test and adjust accordingly.

Tip: Stick to the default scheduler for simplicity; explore others to match your project’s goals. For a full list, see the Schedulers Documentation.
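The "Choosing a Scheduler" guidance above boils down to a small priority-to-scheduler map. This function is purely illustrative (not part of Scenario); the scheduler names are the ones listed in this guide, and anything else falls back to the default scheduler.

```python
# Map a priority to the schedulers this guide suggests for it.
def pick_scheduler(priority: str) -> list[str]:
    options = {
        "speed": ["LCM", "Euler"],      # fastest options
        "quality": ["DDPM", "DPM"],     # higher quality, more iterations
        "control": ["KDPM", "UniPC"],   # fine control / predictor-corrector
    }
    # Fall back to the default scheduler for anything else.
    return options.get(priority, ["default"])
```

For most projects the fallback is the right answer; the explicit branches are for advanced users who know which trade-off they want.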

Select the Base Model (SDXL)

Choosing the right Base Model sets the foundation for your image generation. Depending on how your model was trained, you may need to switch the base model accordingly. You cannot yet train a model based on Juggernaut on Scenario, but you can import Juggernaut-based models and generate with them on Scenario by selecting the correct base model.

More information on the base models:

  • Default (SDXL): The standard Stable Diffusion XL model, versatile and designed for general-purpose image generation from text prompts. It’s a solid starting point without specialized tweaks.

  • Juggernaut XL V9: A customized SDXL variant optimized for photorealistic images, excelling in details like skin, lighting, and contrast. Best for realistic portraits or scenes. Try it with 30–40 steps, DPM++ 2M Karras sampler, and CFG 3–7 at 832x1216 resolution.

  • Juggernaut XI V11: A more advanced SDXL-based model, improving prompt adherence, composition, and details like hands, eyes, and faces. It uses modern captioning with GPT-4 Vision for better interpretation of complex prompts, making it ideal for detailed human renders or intricate scenes.

Your choice depends on your goal: Default (SDXL) for flexibility, Juggernaut XL V9 for photorealism, or Juggernaut XI V11 for precision and prompt fidelity.

Tip: Test each model with your prompt to see which aligns best.
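The goal-to-model mapping above can be summarized in a couple of lookup tables. These are illustrative sketches: the model and sampler names come from this guide, but the dictionaries themselves are not Scenario configuration, and the preset picks single values from within the ranges the guide suggests for Juggernaut XL V9.

```python
# Which base model to reach for, per goal (names from this guide).
BASE_MODEL_FOR_GOAL = {
    "general": "Default (SDXL)",
    "photorealism": "Juggernaut XL V9",
    "prompt_fidelity": "Juggernaut XI V11",
}

# Hypothetical starting preset for Juggernaut XL V9, using values
# inside the ranges suggested above (30-40 steps, CFG 3-7).
JUGGERNAUT_V9_PRESET = {
    "steps": 35,
    "sampler": "DPM++ 2M Karras",
    "cfg": 5,
    "resolution": (832, 1216),
}
```

Treat the preset as a starting point to test against your own prompts, not a fixed recipe.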

Adjusting Sampling Steps and Guidance in Scenario is a powerful way to enhance image quality, but moderation is key to avoiding unintended consequences. Overadjusting, such as pushing Sampling Steps too high or setting excessive Guidance, can lead to a loss of creative flexibility and style. Start with the recommended defaults (Sampling Steps: 28 for Flux, 30 for SDXL; Guidance: 3–5 for Flux, 6–12 for SDXL) and make small, deliberate tweaks within the suggested ranges. Combine these with tools like Enhance for sharp, tailored outputs that match your vision efficiently.



Emmanuel de Maistre
