Tutorial: Welcome To Scenario! How to Start 👋
How to Begin
Welcome to Scenario! We are on a mission to revolutionize the way games are made by building advanced generative AI technology that enables anyone to easily create game assets. This introductory guide is designed to help you navigate our web app.
Scenario Walkthrough
On our web app and iOS app, you can design custom fine-tuned diffusion generators to produce 2D image assets. Assets can range from character art to patterns to landscapes. In this short guide we will briefly cover:
Navigating the interface
Generating images with the Scenario public models
Training custom generators
Using prompts to generate images with a custom model
By the end of this tutorial, you will have a basic understanding of how to use Scenario to generate game assets, and how to use our tools to train your own custom generators. Let's get started!
Navigating the Interface
In its Alpha version, Scenario has a simple and straightforward interface. When you first log into Scenario you will find the following navigation categories:
Home
When you click through to the home screen you will find your custom generators at the top of the page, with your most recent images directly below. This page also shows whether any images or generators are still in the process of being created.
Generators
All of your custom-made generators are housed here, and they can be sorted by whether they are in draft form, in the process of being trained, or fully trained. If you click on an individual generator icon, you can see all the images used in its training dataset, and you can delete the generator from this screen.
Images
All of the images you’ve generated can be found within this section. By hovering over an image, you can also see which generator token was used and what your prompt was.
Create a Generator
In this section you can create your own custom generator. This allows you to take your own curated dataset and design custom diffusion models specifically for applications such as character creation within a certain style, texture patterns for gaming assets, unique tokens, and more.
Generate Images
Your fully trained private custom generators and our public generators are found in this section. This is where you can test out and prompt with any of the generators you have access to. Some users find that the public generators are a good way to create datasets for future generators.
Generating Images with the Scenario Public Models
Scenario has a number of public models that are available for all users who want to make assets on the platform. These models include:
Potion Generator
Low-poly Dwarf Generator
Badges
Isometric Buildings
Spell Books
Midjourney V4 Style Model
Stable Diffusion V1.5 Model
The more specialized models, such as the Potion or Low-poly Dwarf generators, are a great way to see what an ideal custom model might look like. If you want to see the kind of range a good custom model should be capable of, try generating a few images without adding any extra prompt modifiers.
The Midjourney and Stable Diffusion models are strong tools for creating datasets when you don’t already have one. You can use these models to ideate new concepts or to recreate existing ones. You can also use programs like the CLIP Interrogator to develop clear pathways towards prompting specific concepts.
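As a rough illustration, the sketch below uses the open-source clip-interrogator Python package to suggest a prompt for a reference image. The package, its class names, and the example file path are assumptions based on that community library; none of this is part of Scenario itself.

```python
# Sketch: reverse-engineer a prompt from a reference image with the
# open-source clip-interrogator package (pip install clip-interrogator).
# Package/API names are assumptions based on that library, not Scenario tooling.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

image = Image.open("reference_potion.png").convert("RGB")  # hypothetical file
prompt = ci.interrogate(image)  # returns a caption plus style modifiers
print(prompt)  # use the suggested wording as a starting point for your own prompts
```

The suggested wording is only a starting point; trim it down to the handful of terms that actually describe the style or subject you want.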

Training Custom Generators
Most users are likely on this platform for the custom generators, which are incredibly useful tools. At its most basic level, a custom generator can be used to create a variety of general and hyper-focused assets in user-directed styles and subjects.
We will be releasing a number of focused tutorials for various styles, subjects, and hybrids. In general, you will need to collect and curate between 15 and 30 images for a dataset. These images should represent the style, the subject, or both that you would most like to produce. Each type of object or style has its own nuances, and we recommend checking out our tutorials on specific use cases.
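If you like to tidy a dataset with a script before uploading, the following sketch center-crops and resizes a folder of images to a consistent square size using Pillow. The folder names and the 512-pixel target are assumptions for illustration, not Scenario requirements.

```python
# Sketch: center-crop and resize training images to a uniform square
# before uploading them as a dataset. Folder names and the 512 px target
# are assumptions, not Scenario requirements.
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("raw_images"), Path("dataset"), 512
DST.mkdir(exist_ok=True)

for path in sorted(SRC.glob("*")):
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    img = Image.open(path).convert("RGB")
    side = min(img.size)                                   # largest centered square
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((SIZE, SIZE))
    img.save(DST / f"{path.stem}.png")
```

Consistently framed, similarly sized images tend to make the resulting generator more predictable.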

Using Prompts to Generate Images with a Custom Model
All of your custom-trained models become available directly on Scenario to create images within roughly 20 minutes to 2 hours of training. You can access them both on the web app and on any Apple device.
Prompting with custom models differs slightly from traditional prompting on a Stable Diffusion or Midjourney system. Once you have trained a model, you will find that you only need a few modifying tokens (another way to refer to the words that make up an overall prompt).
Some good starting suggestions are to add guiding prompts that describe the art medium or style, communicate information about the subject, or give direction for color and mood. These prompt modifiers should either reinforce your dataset or fill in gaps where it may not be producing what you want. You can reference our prompt guide for more information.
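To make the idea of modifier slots concrete, here is a small sketch that assembles prompts from a medium, a subject, and a color/mood term. The example wording and subjects are made up for illustration and are not tied to any particular generator.

```python
# Sketch: compose prompts for a custom generator from a few modifier slots.
# The example medium, subjects, and mood are illustrative only.
medium = "hand-painted concept art"
subjects = ["armored knight", "forest ranger"]
mood = "warm autumn palette, soft lighting"

for subject in subjects:
    prompt = f"{subject}, {medium}, {mood}"
    print(prompt)
# e.g. "armored knight, hand-painted concept art, warm autumn palette, soft lighting"
```

Keeping the slots consistent across generations makes it easier to tell which modifier is actually changing the output.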

Best of Luck!
We are so glad to support you on your journey of creation! We hope you enjoy this new adventure, and we look forward to seeing your creations out in the world.
Updated on: 30/12/2022