
Tutorial: NPC Character Training Workflow

Introduction



Thanks for joining us for our regularization class guide series. In this tutorial we will walk you through one way you might approach training a custom generator using the Scenario webapp. This tutorial is a great starting point for beginners who are just getting used to training custom models.

In our Concept Art training guide, we went over three main categories your custom generator might fall into. We recommend you review that guide, but as a recap those categories are style, subject, and hybrid.

This NPC/Character guide is focused on hybrid training. That means you should think of the custom generator you make here as a hyper-focused look at one type of character style you can make on Scenario. Characters/NPCs/Mobs tend to be more specialized and sit between beginner and intermediate level. Feel free to read more about regularization classes here.

Training an NPC Robot Class



This tutorial is going to focus on training an NPC custom generator aimed at producing a highly specialized robot NPC style. We've provided a link to our dataset and welcome you to follow along with our training, or feel free to follow along using your own robot style.


Curating the Dataset



When you take a look at the dataset provided, take a moment to notice what the images have in common, and also what makes each image different from the rest of the dataset.

A few similarities jump out in our robot style. It is possible you will notice more details, but a good starting point is:

Each subject falls into a very similar category - robots.
The robots have a particular look and quality to them.
The overall aesthetic is consistent - no image is in an entirely unexpected aesthetic.
All characters are in the same setting.

What makes each image distinct is just as important. Here we notice that:

Each robot has a unique range of color and posture.
None of the robots are designed exactly the same way.

As a general rule, the things that are shared throughout a fine-tuning dataset are prioritized in the training, and the things that differ are not. This isn't a perfect rule - there are exceptions - but it is a good starting point and rule of thumb. In this case, you can reasonably expect that both the style elements and the majority of the subject elements will carry through; however, the AI should understand that the robots are not all the same character. It should also notice that the robots can look more humanoid or more animal-like.

You will also notice that there are 36 images in this dataset. In this case we have provided more images with distinct differences to direct our generator towards a more nuanced and subtle range of output. This is what makes this particular training more on the intermediate side, and it is also a good way to learn what kind of nuance a larger dataset can support.

The minimum number of images you should use is five. However, we don't recommend going below eight, as your generator is more likely to underfit with fewer training images. It will depend on what you are training, but somewhere between 10 and 30 images tends to be ideal.
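
If you keep your dataset in a local folder before uploading, a quick script can sanity-check that the image count and resolutions line up with the guidance above. This is a minimal sketch, assuming a hypothetical "robot_dataset" folder and the Pillow library; the Scenario webapp does not require anything like this.

```python
# Minimal dataset audit sketch (not part of the Scenario webapp).
# Assumes your training images live in a local folder named "robot_dataset".
from pathlib import Path
from PIL import Image

DATASET_DIR = Path("robot_dataset")   # hypothetical folder name
VALID_SUFFIXES = {".png", ".jpg", ".jpeg", ".webp"}

images = [p for p in DATASET_DIR.iterdir() if p.suffix.lower() in VALID_SUFFIXES]
print(f"Found {len(images)} training images")

if len(images) < 5:
    print("Below the 5-image minimum - add more images before training.")
elif len(images) < 8:
    print("Fewer than 8 images - the generator may underfit.")
elif len(images) > 30:
    print("More than 30 images - expect a more nuanced, intermediate-level training.")

# Report each image's resolution so you can spot outliers before uploading.
for path in images:
    with Image.open(path) as img:
        print(f"{path.name}: {img.width}x{img.height}")
```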

It is important to remember that you may need to refine and retrain your dataset, particularly when you first start using the program.


Create Your Generator



Once you have your dataset, you’ll be ready to create your generator. You will need to go through the following steps:

Go to Home > Create a Generator
Upload your images to Add Training Images. If you need to crop them, do so at this step (see the sketch after this list).
For this model we will want to select Remove Background for each image. We want the training to be clear that the background is not part of the subtlety we are attempting to train.
Click Next when you are ready.
You will now be prompted to choose a name and a regularization class. In this case we will name the model Rbt and pick NPC/Characters/Mobs > Robots as our regularization class.
Click Next. The next page covers learning rate and text encoding. You do not need to change anything here, and you may launch the training.
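
If you prefer to crop your images before uploading rather than in the webapp, a square center-crop keeps the subject in frame. This is a minimal sketch, assuming hypothetical "robot_dataset" and "robot_dataset_square" folders and the Pillow library.

```python
# Minimal pre-upload cropping sketch. The same crop can be done in the webapp itself.
from pathlib import Path
from PIL import Image

SRC = Path("robot_dataset")         # hypothetical source folder
DST = Path("robot_dataset_square")  # hypothetical output folder
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.png"):
    with Image.open(path) as img:
        side = min(img.width, img.height)
        # Center-crop to a square so the robot stays in frame.
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img.crop((left, top, left + side, top + side)).save(DST / path.name)
```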

It should take anywhere from 20 minutes to around 2 hours for your generator to train. Check back in a little bit!

Test Your Generator



Once your training is complete, it’s time to test your outputs. Follow the guide we’ve created in our document on How to Prompt. If you are happy with your work, proceed to the next step. In some cases, you may find you want to adjust the outputs. You are always welcome to add or remove images and retry.

If you are not happy with your output there are a few ways to proceed.

First, you may decide to use an imperfect generator with some extra prompts to create more nuanced images for your dataset. You can use these images to replace or add to your original dataset when you train a new model.

Your other option is to remove images from your original dataset that you see showing up too often. These are easy to identify: if, when you generate images without additional prompts, you see one or two aesthetics or subjects from the dataset coming through multiple times, that indicates a need to prune and retrain.
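
If you keep a local copy of the dataset, pruning can be as simple as moving the over-represented images aside before re-uploading and retraining. This is a minimal sketch with hypothetical file and folder names; replace them with the images you actually see repeating.

```python
# Minimal pruning sketch: move over-represented images out of the training set.
from pathlib import Path
import shutil

DATASET_DIR = Path("robot_dataset")        # hypothetical dataset folder
PRUNED_DIR = Path("robot_dataset_pruned")  # where removed images are kept
PRUNED_DIR.mkdir(exist_ok=True)

# Hypothetical examples - replace with the images whose look dominates your outputs.
overrepresented = ["robot_12.png", "robot_27.png"]

for name in overrepresented:
    src = DATASET_DIR / name
    if src.exists():
        shutil.move(str(src), str(PRUNED_DIR / name))
        print(f"Moved {name} out of the training set")
```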


Conclusion



It’s very easy to use Scenario to make a general model with the NPC/Characters/Mobs regularization class. Although it may take some adjusting, practice is very important when mastering custom generator training tools. We can’t wait to see your results - make sure to tag @Scenario_gg on Twitter so we can see what you’ve made!

Updated on: 05/01/2023
