To effectively use the Scenario API, it helps to understand a few key concepts:
In Scenario, a model is the AI system that generates images based on your prompts or references. To create a custom model through the API, you upload training images and specify parameters. The Scenario platform handles the training process in the cloud.
Models come in several types:

- Base models, such as Flux or SDXL
- Custom-trained models (LoRAs) for specific styles or subjects
- Multi-LoRAs, which combine multiple custom models
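As a sketch of the training flow described above, the request body for creating a custom model might look like the following. The field names and structure here are illustrative assumptions, not the confirmed Scenario API schema:

```python
import json

def build_training_request(name, training_image_ids, model_type="lora"):
    """Assemble a JSON body for a hypothetical model-training request.

    The keys below are assumptions for illustration only:
    - name: display name for the custom model
    - type: kind of model to train (e.g. a LoRA fine-tune)
    - trainingImages: IDs of images previously uploaded to the platform
    """
    return {
        "name": name,
        "type": model_type,
        "trainingImages": training_image_ids,
    }

payload = build_training_request("pixel-art-style", ["img_001", "img_002"])
print(json.dumps(payload, indent=2))
```

Once submitted, the platform handles training in the cloud, so the client only needs to assemble the request and later check on the resulting job.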
An inference is a request to generate images using a model. With a single API call, you can specify parameters like prompts, reference images, and settings to generate multiple images simultaneously.
Inference parameters mirror what you'd set in the web interface: model selection, prompt text, reference images, sampling steps, guidance values, and other generation settings.
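To make the parameter mirroring concrete, here is a minimal sketch of an inference request body. The exact key names (`numSamples`, `numInferenceSteps`, `guidance`) are assumptions for illustration, not the confirmed API schema:

```python
def build_inference_request(prompt, num_images=4, steps=30, guidance=7.5):
    """Assemble a hypothetical inference request body.

    Mirrors the settings you would pick in the web interface:
    prompt text, number of images per call, sampling steps, and
    guidance strength (how closely generation follows the prompt).
    """
    return {
        "prompt": prompt,
        "numSamples": num_images,
        "numInferenceSteps": steps,
        "guidance": guidance,
    }

body = build_inference_request("isometric pixel-art castle", num_images=2)
print(body["numSamples"])
```

Because one call can request several images, batching related variations into a single inference is usually cheaper than issuing one call per image.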
When you submit an API request for training a model or generating images, Scenario creates a job to track the progress. The job contains all information about your request and its outputs.
Jobs have states such as “canceled”, “failure”, “in-progress”, “queued”, “success”, and “warming-up”, allowing you to monitor progress and retrieve results when ready.
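The usual pattern is to poll a job until it reaches a terminal state. The sketch below uses the states listed above; `fetch_job` stands in for a real GET request to the API (its shape and the `status` field name are assumptions for illustration), so the example is shown with a stub:

```python
import time

# States after which the job will not change again.
TERMINAL_STATES = {"success", "failure", "canceled"}

def wait_for_job(fetch_job, job_id, poll_interval=2.0, timeout=600.0):
    """Poll until the job leaves 'queued'/'warming-up'/'in-progress'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job(job_id)  # stand-in for a real API call
        if job["status"] in TERMINAL_STATES:
            return job
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")

# Stubbed fetch for demonstration: the job succeeds on the third poll.
_states = iter(["queued", "in-progress", "success"])
def fake_fetch(job_id):
    return {"id": job_id, "status": next(_states)}

result = wait_for_job(fake_fetch, "job_123", poll_interval=0.0)
print(result["status"])  # success
```

In a real client you would replace `fake_fetch` with an HTTP GET against the job resource and keep a sensible poll interval to avoid hammering the API.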
An image is the output generated from an inference request. Each image includes metadata such as the prompt, model, and settings used to create it, just as in the web application; this metadata is accessible through the /assets endpoint of the API.
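A record returned by the /assets endpoint could then be unpacked like this. The field names (`metadata`, `modelId`, `seed`) are assumptions for illustration, so the example runs against a sample record:

```python
def describe_asset(asset):
    """Extract the generation settings from one hypothetical asset record."""
    meta = asset.get("metadata", {})
    return {
        "prompt": meta.get("prompt"),   # prompt used to generate the image
        "model": meta.get("modelId"),   # model that produced it
        "seed": meta.get("seed"),       # seed, useful for reproducing results
    }

sample = {
    "id": "asset_abc",
    "url": "https://example.com/asset_abc.png",
    "metadata": {"prompt": "pixel-art castle", "modelId": "model_1", "seed": 42},
}
summary = describe_asset(sample)
print(summary["prompt"])  # pixel-art castle
```

Keeping this metadata with each image makes results reproducible: re-submitting the same prompt, model, and seed should regenerate a comparable image.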