
New Intelligence Report
[New Intelligence Guide] Can “Oppenheimer” be completed in 7 steps? Is the film industry 2.0 really coming?
After “Barbenheimer” went viral online a few days ago, many netizens have been “replicating” the amazing technique of using Midjourney + Gen-2 to create movies!
One netizen shared his own tutorial, showing that it takes only 7 steps to create “Barbenheimer”; Karpathy praised the workflow as “Film Production Industry 2.0”.
A 20-second animated short with a complete plot and 6 scenes, finished in just 7 steps: even Cao Zhi, famed for composing a poem within seven steps, would call it professional!
7 Steps to Complete Barbenheimer, Stunning Results
1. Use ChatGPT to write the storyboard script, along with the subtitles.
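For those who want to script this step, here is a minimal sketch using the OpenAI Python SDK; the prompt wording and model name are our own assumptions, since the tutorial's actual prompt was not published.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt; adjust the shot count and tone to taste.
prompt = (
    "Write a storyboard script for a 20-second short film mashing up "
    "'Barbie' and 'Oppenheimer'. Use 6 shots; for each shot, give a "
    "one-line visual description and a one-line subtitle."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```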
2. Based on the storyboard script, use Midjourney to generate the first image of each shot.
This may be the only step of the 7 with any real barrier to entry, since you need to write your own prompt for each image.
That said, the prompts are not very long, and anyone with a basic knowledge of English can give it a try.
These are the starting images of several other scenes in the short film, all generated using Midjourney.
3. To ensure a consistent color tone in the short film, you need to adjust the tone of each image using photo editing software.
For example, if the color tone in the short film is a retro style, the original images generated by Midjourney may not match.
After adjusting with any photo editing software, all scenes will have a more consistent style.
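You don't even need professional software for a rough pass. As an illustration, here is a minimal sketch of a “retro” grade using Pillow: desaturate, soften the contrast, then warm the tint. The file names and adjustment values are illustrative assumptions; what keeps the style consistent is running the same function over every shot.

```python
from PIL import Image, ImageEnhance

def retro_grade(in_path: str, out_path: str) -> None:
    """Apply a crude retro look: lower saturation, then warm the tint."""
    img = Image.open(in_path).convert("RGB")
    img = ImageEnhance.Color(img).enhance(0.6)      # desaturate
    img = ImageEnhance.Contrast(img).enhance(0.9)   # soften contrast
    r, g, b = img.split()
    r = r.point(lambda v: min(255, int(v * 1.08)))  # boost reds for warmth
    b = b.point(lambda v: int(v * 0.92))            # pull down blues
    Image.merge("RGB", (r, g, b)).save(out_path)

# Run the same grade over all 6 shots so the whole film matches.
for i in range(1, 7):
    retro_grade(f"shot_{i}.png", f"shot_{i}_graded.png")
```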
4. Use Gen-2 to animate each photo, turning 6 photos into 6 scenes.
5. Then use ElevenLabs to generate the voiceover for the subtitles.
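As a sketch of how this step could be automated, the snippet below calls ElevenLabs' v1 text-to-speech REST endpoint via the requests library. The API key, voice ID, and subtitle line are placeholders, and the request body is kept minimal; check ElevenLabs' documentation for the current parameters.

```python
import requests

API_KEY = "your-elevenlabs-api-key"  # placeholder
VOICE_ID = "your-voice-id"           # pick a voice in the ElevenLabs UI

def subtitle_to_speech(text: str, out_path: str) -> None:
    """Send one subtitle line to ElevenLabs and save the returned MP3."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    resp = requests.post(
        url,
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={"text": text},
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # the response body is the audio itself

subtitle_to_speech("This is the subtitle for shot one.", "line_1.mp3")
```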
6. Then use Final Cut Pro to combine the animation, sound, and special effects, and the short film is basically complete.
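If you don't have Final Cut Pro, a rough cut can be assembled with ffmpeg instead; below is a sketch driven from Python, assuming the 6 Gen-2 clips share the same encoding and are listed in order in clips.txt, and that the ElevenLabs lines have been merged into voiceover.mp3 (all file names are hypothetical).

```python
import subprocess

# clips.txt lists the Gen-2 clips in order, one per line:
#   file 'scene_1.mp4'
#   file 'scene_2.mp4'
#   ...
subprocess.run(
    [
        "ffmpeg",
        "-f", "concat", "-safe", "0", "-i", "clips.txt",  # join the 6 clips
        "-i", "voiceover.mp3",                            # ElevenLabs audio
        "-map", "0:v", "-map", "1:a",   # video from the clips, audio from the MP3
        "-c:v", "copy", "-c:a", "aac",  # copy video as-is, re-encode audio
        "-shortest",                    # stop at the shorter of the two
        "film.mp4",
    ],
    check=True,
)
```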
7. Finally, use CapCut to add subtitles, and you’re done!
Experience Gained from Testing
The biggest obstacle is that Gen-2 can only generate animation from a single image, with essentially random motion, so faces in particular often end up heavily distorted.
I originally wanted to make a video of Boss Ma building a starship, but the facial distortion was too severe: in every clip, only the first frame actually looked like Boss Ma, and one second later it was anyone's guess who he had turned into.
So if you want to use Gen-2 to make a video involving a celebrity, it is still close to impossible at this point.
Worse, these facial distortions and character movements are completely random and outside the user's control.
One netizen made a longer video by grabbing the last frame of each 4-second clip and feeding it back into Gen-2 to generate the next segment.
After about 40 seconds of this, accumulated distortion had turned a beautifully animated character into something closer to a sculpture.
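If you want to try this frame-chaining trick yourself, the “grab the last frame” step is easy to automate. Here is a minimal sketch with OpenCV; clip.mp4 and last_frame.png are placeholder names, and reading frames sequentially is slower than seeking but more reliable across codecs.

```python
import cv2

def extract_last_frame(video_path: str, out_path: str) -> None:
    """Save the final frame of a clip so it can be fed back into Gen-2."""
    cap = cv2.VideoCapture(video_path)
    last_frame = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video; last_frame now holds the final frame
        last_frame = frame
    cap.release()
    if last_frame is None:
        raise ValueError(f"no frames could be read from {video_path}")
    cv2.imwrite(out_path, last_frame)

extract_last_frame("clip.mp4", "last_frame.png")
```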
Even when an image contains obvious motion cues, Gen-2 still cannot understand them, so the generated result may not match your expectations.
Character actions are equally hard to control; you may have to regenerate from the same image many times before getting a satisfactory animation.
So, with its current capabilities, Gen-2 is still far from being able to handle complex scripts at will; for now, scripts should avoid facial close-ups as much as possible.
However, if a future Gen-2 can fully combine prompts with images, so that the image moves according to the prompt's description, the results would take a big leap forward.
Other Results from Midjourney + Gen-2
Master Guizang used Gen-2 to recreate several scenes from “Oppenheimer” that are nearly indistinguishable from the real thing.
He also compared similar scenes against those generated by Pika Labs.
The image below shows Pika Labs' results.
Beyond the original “Barbenheimer” clip, many netizens have started their own animation side projects with the golden duo of Gen-2 and Midjourney.
Let’s enjoy some impressive animation demonstrations.
Notice that facial distortion in these demos is far less severe than in our own testing, most likely because the authors cherry-picked the least distorted results from many attempts.
A netizen generated a trailer for a Marvel character, and it looks remarkably authentic; with its animation effects and lighting, it feels like an official Marvel production.
This is a horror movie trailer generated by a netizen, who cleverly used Gen-2's facial distortion to heighten the horror atmosphere.
This clip, also made by a netizen with MJ + Gen-2, has a strong artistic feel, complete with director-style camera work.
This animation was generated from an oil-painting-style still image; despite some distortion, the result is genuinely good.
