# CFG Scale
The classifier-free guidance (CFG) scale is a parameter that controls how closely the model follows your prompt. Typical values behave roughly as follows:
- 1 – Mostly ignores your prompt.
- 3 – More creative.
- 7 – A good balance between following the prompt and creative freedom.
- 15 – Follows the prompt more closely.
- 30 – Strictly follows the prompt.
Here are a few examples of increasing the CFG scale using the same random seed. In general, you should avoid using the two extreme values – 1 and 30.
Recommendation: Start at 7. If you want the image to follow your prompt more closely, increase this value.
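If you prefer to experiment in code rather than in a GUI, the same parameter is exposed as `guidance_scale` in the Hugging Face diffusers library. Below is a minimal sketch, assuming the diffusers package is installed; the model ID, prompt, and seed are placeholders, not values from this article.

```python
# Minimal sketch with the diffusers library: sweep the CFG scale over a fixed
# seed so that only the guidance strength changes between images.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a lighthouse at sunset"  # placeholder prompt

for cfg in [1, 3, 7, 15, 30]:
    # Re-seed every iteration so the initial noise is identical across images.
    generator = torch.Generator(device="cuda").manual_seed(42)
    image = pipe(prompt, guidance_scale=cfg, generator=generator).images[0]
    image.save(f"cfg_{cfg}.png")
```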
# Sampling Steps
Increasing the number of sampling steps generally improves image quality. Typically, 20 steps with the Euler sampler is enough to produce a sharp, high-quality image. Beyond that, the image keeps changing slightly, but it becomes different rather than necessarily better.
Recommendation: 20 steps. If the result looks low quality, increase the value.
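To see this effect yourself, here is a similar diffusers sketch (again with a placeholder model ID, prompt, and seed) that renders the same seed at 20 and at 60 steps; expect small differences rather than an obvious jump in quality.

```python
# Sketch: render the same seed at 20 and at 60 steps. The higher step count
# usually changes details slightly rather than clearly improving quality.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a lighthouse at sunset"  # placeholder prompt

for steps in (20, 60):
    generator = torch.Generator(device="cuda").manual_seed(42)  # identical noise
    image = pipe(prompt, num_inference_steps=steps, generator=generator).images[0]
    image.save(f"steps_{steps}.png")
```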
# Sampling Method
Depending on the GUI you are using, several sampling methods are available. They are simply different numerical methods for solving the diffusion equation. In theory they should produce similar results, but in practice they differ slightly due to numerical deviations. There is no single correct choice here: the only criterion is whether the image looks good, and the numerical accuracy of the method should not be your concern.
Not all methods run at the same speed, however. Here are the processing times for the various methods.
Below is the image generated after 20 sampling steps.
Some online discussions claim that certain sampling methods tend to produce specific styles. This claim has no theoretical basis. My advice is to keep it simple: use the Euler method with 20 sampling steps.
Recommendation: Choose the Euler method.
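In the diffusers library, the sampling method corresponds to the scheduler class. The sketch below, again with a placeholder model ID and prompt, explicitly selects the Euler scheduler and runs the 20 steps recommended earlier.

```python
# Sketch: select the Euler sampler (scheduler) explicitly and run 20 steps.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the Euler scheduler while keeping the model's noise-schedule config.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a watercolor painting of a lighthouse at sunset",  # placeholder prompt
    num_inference_steps=20,
    guidance_scale=7,
).images[0]
image.save("euler_20_steps.png")
```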
# Seed
The random seed determines the initial noise pattern, which in turn determines the final image.
Setting it to -1 means a new random seed is used each time, which is useful when you want to explore new images. A fixed seed, on the other hand, produces the same image every time, given the same prompt and settings.
If you used a random seed, how do you find the seed of a particular image? In the output info box, you should see something like this:
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 4239744034, Size: 512×512, Model hash: 7460a6fa
Just copy this seed value into the seed input box. If you generate multiple images at once, the seed of the second image is this number plus 1, and so on. Alternatively, click the recycle button to reuse the seed of the last generated image.
Recommendation: Set to -1 for exploration. Set to a specific value for fine-tuning.
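The same idea expressed with diffusers: seed a generator with the logged value to reproduce an image, or pass a list of generators with consecutive seeds to mimic the web UI's seed-plus-one convention for a batch. The prompt below is a placeholder, the seed is the example value from the parameter line above, and note that a GUI and diffusers will not in general produce pixel-identical images from the same seed.

```python
# Sketch of seed handling in diffusers. Prompt is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a lighthouse at sunset"

# Fixed seed: the same prompt and settings reproduce the same image.
generator = torch.Generator(device="cuda").manual_seed(4239744034)
image = pipe(prompt, generator=generator).images[0]

# For a batch, one generator per image with consecutive seeds mirrors the
# "second image uses seed + 1" convention described above.
generators = [
    torch.Generator(device="cuda").manual_seed(4239744034 + i) for i in range(4)
]
images = pipe(prompt, num_images_per_prompt=4, generator=generators).images
```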
# Image Size
The dimensions of the output image. Since Stable Diffusion is trained on 512×512 images, setting strongly portrait or landscape dimensions may cause unexpected issues, such as the subject being duplicated. Keep the image square whenever possible.
Recommendation: Set the image size to 512×512.
# Batch Size
The batch size is the number of images generated at once. Since the final image depends heavily on the random seed, generating several images at once is a good idea: it quickly shows you what the current prompt can do.
Recommendation: Set the batch size to 4 or 8.
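The image-size and batch-size recommendations map onto a single diffusers call: `height` and `width` set the image size, and `num_images_per_prompt` plays the role of the batch size. A minimal sketch, with a placeholder model ID and prompt:

```python
# Sketch: 512x512 output and a batch of 4 images from one prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

images = pipe(
    "a watercolor painting of a lighthouse at sunset",  # placeholder prompt
    height=512,
    width=512,
    num_images_per_prompt=4,  # four variations from different random noise
).images

for i, img in enumerate(images):
    img.save(f"batch_{i}.png")
```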
# Face Restoration
A little-known weakness of Stable Diffusion is that it often struggles with faces and eyes. Face restoration is a post-processing step that uses a separately trained AI model to correct faces in the generated image.
To enable it, check the box next to “Restore Faces”. Go to the “Settings” tab and select “CodeFormer” under “Face Restoration Model”.
Here are two examples. The left image is without face restoration, and the right image is with face restoration.
Recommendation: Enable face restoration when generating images with faces.