Hey everyone! This video is a bit spontaneous because I was in the middle of covering something else when Runway decided to shake things up again! 🚀 They’ve soft-launched Act One, and wow – it might be the most impressive video-to-video restyle tool I’ve seen. Let’s dive in and explore what it means for AI filmmaking: the good, the bad, and the absolutely wild results Act One is capable of!
In this video, I walk you through:
• Runway’s journey from Gen-1 to Act One
• The evolution from text-to-image to advanced video restyling
• My experiments with micro-short films and challenges I faced using video-to-video AI
• Thoughts on how Act One compares to earlier tools and where it might push the limits
I also break down a few interesting quirks, like how Act One handles expressive eye movement, and throw in a fun critique of cinematic details (hint: more wigs are always better). If you’re curious about the potential of mixing practical filmmaking with AI, you’re in the right place.
Chapters:
00:00 Intro – Stable Diffusion Interrupted!
00:36 Act One by Runway – A First Look
00:52 AI Video Evolution – Gen-1 to Act One
01:29 Current Video-to-Video in Runway
01:43 My Micro-Short Film Workflow (Challenges)
02:57 Video-to-Video Pitfalls and Funny Fails
03:31 Act One Example
04:50 Layering Performances with Masking
05:46 Can We Control AI-Generated Characters?
06:04 Questions and Access to Act One?
06:41 Input Questions About the Driving Video
07:19 The State of AI Video-to-Video
07:39 Fresh Examples of Act One
08:51 Final Thoughts – Hitting Refresh on Runway
Related Links:
🔗 My video on creating a micro-short film with AI: [ Link ]
🔗 Example from Nicholas Neubert with music integration: [ Link ]
🔗 Follow Jon Finger’s fun experiments with Gen-3: [ Link ]
If you enjoyed this deep dive and want more off-the-cuff explorations of creative AI tools, hit like and subscribe to stay updated. I’m beyond excited to see where Act One takes us next. Let’s break some new ground! 🎥