Introduction
Last night, Runway's Gen-2 model took a big step forward in video generation: stability and clarity now reach a commercially usable level. So I put together a quick usage guide for everyone to play with!
Usage Guide
This guide covers the cost, basic usage, and generation patterns of Runway's Gen-2 model. Gen-2 is very easy to use: you can feed it existing high-quality images, or images generated with Midjourney (MJ) or Stable Diffusion (SD).
Runway Website:
https://runwayml.com/ai-magic-tools/gen-2/
Usage Method
The basic workflow for Gen-2 is very simple: after logging into Runway, select Gen-2, upload an image, and click [Generate 4s]. For the details, see the image overview.
Once generation completes, click the play button to preview the clip. You can also extend it by another 4 seconds, but the extended segment is generally not as good as the first generation.
More advanced usage includes prompt control, motion speed control, and camera control, which give you finer command over the video's movement. Short prompts are less likely to distort the video content, while camera controls distort it more easily and take more experimentation; an illustrative prompt is sketched below.
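For reference, a light-touch prompt in this spirit might look like the line below. The wording is purely illustrative, my own example rather than an official Runway template:

```
slow camera zoom in, soft lighting, subtle natural motion, high detail
```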
Here are a few small details to pay attention to:
For the paid version, remember to turn on the [Upscale] option to generate HD video; the longest side then reaches around 1800 pixels. Turn on the [Remove watermark] option to get rid of the watermark.
Don't upload overly large images: images above 4K may get stuck and fail to generate. If that happens, cancel the task manually and upload a smaller image (see the resizing sketch after this list).
Videos are saved to the Gen-2 folder by default. When the folder gets too full, you can rename it and create a new [Gen-2] folder; new videos will automatically be saved there. Folders can be downloaded in bulk, which is very convenient.
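If you hit the oversized-image problem above, a minimal Python sketch like the following can shrink the file before upload. It assumes Pillow is installed (pip install pillow), and the 2048-pixel cap is my own conservative choice, not an official Runway limit:

```python
from PIL import Image

MAX_SIDE = 2048  # conservative cap, well below the ~4K size that tends to stall

def downscale_for_gen2(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path)
    # thumbnail() resizes in place, keeps the aspect ratio,
    # and never enlarges an image that is already small enough.
    img.thumbnail((MAX_SIDE, MAX_SIDE), Image.LANCZOS)
    img.save(dst_path)

downscale_for_gen2("input_4k.png", "input_for_gen2.png")
```

thumbnail() is used instead of resize() here so the call is a no-op for images already within the cap.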
Generation Patterns
Here are some patterns I've noticed. They are not necessarily 100% accurate, and discussion is welcome.
Regenerating the same image multiple times generally does not change the overall direction of motion. For example, if the first generation produces a rotation, regenerating will still produce a rotation, though the details will vary.
High-quality, visually complex images tend to produce better results. Generation often simplifies the source image, so starting with more detail helps.
At the moment, the model is best at abstract effects, such as abstract lines and animation. Real people come out a notch worse, while IP characters (mascots and branded characters) come out well.
Some videos come out blurry during generation and others don't; IP characters and abstract images are more likely to stay sharp.
Generation can go in many directions, including rotation, zooming in, animated IP characters, blending, and so on. Some images end up with only subtle motion.
Gen-1's video generation is currently not very good: clarity is low and the results are mediocre. Unless you have a specific need for it, I don't recommend using it.
More AI Video Generation Attempts
3D animation of B-end (enterprise-product) icons also looks promising! There are plenty more effects worth exploring!
Conclusion
If you want to discuss more details, you can also join the Stable Diffusion Alchemy Pavilion's AI video task force and trade alchemical secrets with fellow practitioners: picking herbs, controlling the fire, schemes for evaluating the finished elixirs, and so on~ The road to immortality is long; let's walk it together, haha!