I made this video in two hours and it is 100% generative content. Why did I do this?
If you know me, then you know I love cars. As such, there's always going to be, for better or for worse, a soft spot in my heart for the Fast and Furious franchise. I can't help it. It's part of who I am.
That said, making a film of that nature is, at least right now in my career, inaccessible.
Until AI came around.
Now, I can make a spec teaser for how I would treat the Fast and Furious franchise moving forward. And I can do it with nuance, with a real point of view, and with a consistent idea.
How did I do this? With two main services.
First, I generated images with a consistent look and feel, almost entirely with Google DeepMind's ImageFX. I chose this model over something like Midjourney because it handles real-world physics and grounds its images in reality better than its competitors; I found that ImageFX rendered more realistic and accurate images for what I was trying to accomplish.
Next, I took those images and used them as source material in Luma AI's Dream Machine. After testing OpenAI's Sora, Runway's Gen-3 Alpha and Gen-3 Alpha Turbo, and KLING AI's video rendering services, I found that for realistic and timely renders, as well as for obeying camera movement and composition changes, Luma AI's Dream Machine performed best for me. That's not to say it doesn't have its pitfalls, or that the other services aren't fantastic in their own right; this is just where I landed for this project.
What I didn't like:
Google DeepMind's ImageFX does not, at least to my knowledge (please correct me if I'm wrong), let the user copy the prompt from a previously rendered image into the next one. ImageFX acts, for the most part, as a one-and-done generator. If I generate iterations, it holds on to the visual motifs pretty well, but as soon as I start a new session, it loses the plot, and there doesn't seem to be a way to go back to a rendered image and copy its prompt once it lands in the gallery.
I also think Dream Machine could considerably dial back its tendency to move characters and cameras unprompted. If I did not give specific instructions for characters NOT to move, Luma AI would indeed take it upon itself to make some... creative... decisions.
My main takeaways:
- AI is a tool. NOT a replacement for creative people.
- There are a bajillion tools out there, and I'd rather spend my time learning how to make a few of them work for me (and, for that matter, finding my favorite tools in the toolbox) than "spray and pray" with every new and evolving tool that comes along. If I did that, I'd get overwhelmed entirely too quickly.
- Fast and Furious can be saved. Universal Pictures, I'd be happy to help with this evolution of the F&F universe.
Enjoy. And go make something, too!
_____________________
My Cameras: [ Link ]
My Mics: [ Link ]
Tripods: [ Link ]
Lenses: [ Link ]
Lights: [ Link ]
TORETTO: A FAST AND FURIOUS STORY - Teaser Trailer
Tags
filmmaker, content creator, cinematography tips, Fast and Furious reboot, AI-generated content, AI in filmmaking, Google DeepMind ImageFX, Luma AI Dream Machine, Fast and Furious concept teaser, AI creative process, AI for video rendering, AI filmmaking tools, Universal Pictures Fast and Furious, realistic AI rendering, AI cinematic tools, Luma AI tutorial, AI-generated movie teaser, filmmaking with AI, realistic AI renders, AI for storytelling, AI spec teaser