
The Machine Cinema Times - June 20th, 2025
... still wondering if robots have celluloid dreams.
In this issue:
Creator Survey - please participate!
New Podcast Episode: Re-Making History w/ Gen AI
feat. Max Einhorn & Matt Wenhardt from Gennie AI Studio
“Overheard in Basecamp” for the week June 12-18
Community Call 6/18 Recording and Notes
Creator Survey - Credits for Participating!
We are running a Machine Cinema survey on AI creator rates to help creators figure out how to charge for their work. We'll publish the results soon, but would love your feedback for the community. To thank you for completing the survey, we'll send you complimentary credits on Luma Dream Machine. Email hq@fantastic.day once you've completed it.
Click Here for Survey, Thank You!
Machine Cinema Podcast: Re-Making History w/ Gen AI
Link to conversation on YouTube, Spotify, and Apple. Follow us on Instagram.
In this episode, Fred talks with Max Einhorn and Matt Wenhardt, respectively co-founder and Head of Production at Gennie, the generative production company behind the AI-powered recreations in Killer Kings—a new historical true crime series now airing on Sky History (part of the Hearst and A+E UK network). Max and Matt share how Gennie is using generative tools not as gimmicks, but as production-grade solutions to one of nonfiction TV’s longest-standing limitations: cost-effective, believable reenactments. They also dig into their scalable workflow, legal guardrails, and plans to build genre-focused IP beyond client work.
Highlights:
How Gennie created the AI recreations that bring historical kings to life in Killer Kings
Why AI is unlocking new possibilities in budget-constrained nonfiction formats
What it takes to build a repeatable, production-ready generative workflow
How Gennie is navigating copyright, indemnification, and ethical prompting
Their roadmap: from work-for-hire to original, AI-native IP
Overheard in Basecamp – Week of June 12–18, 2025
Our Machine Cinema Basecamp is a firehose of activity, and even the most diehard members of our community can feel overwhelmed sometimes. This weekly digest of hot topics, links, and articles shared in the group is here to make sure you never miss a beat.
If you’d like to join the conversation, this link is your invitation.
Full disclosure: we had our robot friend help us pull all this together, and they are sometimes prone to harmless mistakes.
🔥 HOT TOPICS
🎬 Midjourney Video Drops—And It’s Got That Sauce
Midjourney’s new video model launched to a wave of excitement and immediate experimentation. The early verdict? Aesthetics are strong. Motion’s still janky. But the potential is very real.
Especially good at painterly and stylized looks
Motion quality varies—higher motion often works better than subtle movement
Some say it feels “cheap,” others see a foundation for future dominance
💬 “Midjourney just has that visual sauce most platforms don’t.”
🧑‍⚖️ QC Nightmares: Frame Duplication Is Everywhere
Pros working in TV, film, and high-end commercial pipelines raised a growing issue with AI video models: frame duplication and cadence inconsistencies that threaten professional use. These artifacts may pass the “eye test” on YouTube, but broadcast pipelines flag them as QC errors.
Repeated frames spotted in Gen-4, Veo, Pika, Kling, and others
PAL and EBU pipelines are especially sensitive—flagging even subtle stutter
Common fixes include frame-by-frame trimming, nesting, and re-rendering (a rough detection sketch follows below)
💬 “If one rogue frame keeps a great story off the screen, that’s a loss we can avoid.”
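For anyone who wants to catch these before delivery, here's a minimal sketch of a duplicate-frame check using OpenCV's Python bindings. The threshold and file name are illustrative assumptions, not a broadcast-grade QC pass; tune both for your footage:

```python
# Rough sketch: flag duplicated consecutive frames in a clip with OpenCV.
# diff_threshold and "clip.mp4" are assumptions for illustration only.
import cv2
import numpy as np

def find_duplicate_frames(path, diff_threshold=0.5):
    """Yield frame indices whose pixels barely differ from the previous frame."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    idx = 0
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        # Mean absolute pixel difference; near-zero suggests a repeated frame.
        diff = np.mean(cv2.absdiff(frame, prev))
        if diff < diff_threshold:
            yield idx
        prev = frame
    cap.release()

for i in find_duplicate_frames("clip.mp4"):
    print(f"possible duplicated frame at index {i}")
```

A pass like this won't catch subtle cadence drift the way PAL/EBU QC tooling does, but it's a cheap first screen before you start trimming frame by frame.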
🗣️ Should You Justify AI to Audiences?
A filmmaker asked: Do I need to tell audiences why I used AI? The answer seems to be yes—at least for now. Viewers still bring bias, and creators who explain their approach build more trust.
BTS (behind-the-scenes) content helps reframe AI as a tool, not a threat
Viewers rally behind emotional reasons more than technical ones
“Story-first” is still the golden rule.
💬 “Make movies for people who hate AI.”
🎥 Cinematography in the Age of Prompts
We’re not quite at “full cinematographic control via text prompt”—but we’re getting closer. Members shared their hacks for lens simulation, lighting continuity, and camera movement inside AI workflows.
Synthetic depth maps (via MiDaS, Zoe, ControlNet) help lock DOF and parallax (see the sketch after this list)
First/last frame anchoring improves coherence for animation
The future: true “world models” with metadata baked in
💬 “We're halfway to a virtual camera department.”
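If you want to try the depth-map trick yourself, here's a minimal sketch using the publicly documented MiDaS small model via torch.hub. The hub entry points are real; the file names and 0–255 normalization are illustrative assumptions, a starting point rather than a pipeline:

```python
# Sketch: monocular depth map from a single frame with MiDaS (via torch.hub).
# "intel-isl/MiDaS" and "MiDaS_small" are the documented hub entries;
# frame.png / frame_depth.png and the normalization are assumptions.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform  # preprocessing matched to the small model

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img)).squeeze().cpu().numpy()

# Resize back to the frame's resolution and normalize so the depth pass can
# drive a DOF blur or parallax displacement in comp.
depth = cv2.resize(depth, (img.shape[1], img.shape[0]))
depth = 255 * (depth - depth.min()) / (depth.max() - depth.min())
cv2.imwrite("frame_depth.png", depth.astype("uint8"))
```

The resulting grayscale pass can be fed into ControlNet for depth-conditioned generation, or used directly as a lens-blur mask to fake consistent DOF across shots.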
🎭 AI as the New Ripomatic
Is AI storytelling really all that different from the long-accepted practice of “ripomatics” (previs made from film clips)? Some say no—and that using AI to sketch a project is even more transformative than cutting up old trailers.
Editing AI outputs feels closer to authorship than mimicry
Creative labor is still labor, even if machines are helping
The conversation may shift as more artists own their hybrid workflows
💬 “It’s like ripomatics, but with less ripping—and more making.”
💸 Consolidation Incoming? Follow the Money.
Midjourney might be getting sued, but others are getting signed. While legal cases ramp up, so do acquisitions—and most agree: consolidation is inevitable.
Google, OpenAI, ByteDance lead the race on resources
Startups with ethical models (like Asteria) are playing the long game
Meanwhile, Midjourney has 2M+ paying users. That’s not nothing.
💬 “The war isn’t sustainable. In the end, VCs probably win.”
💡 ComfyUI: Love It, Hate It, Can’t Ignore It
ComfyUI continues to divide the community. It’s powerful and modular—but also confusing and buggy. Power users embrace its flexibility. Others are begging for a UX glow-up.
Workflows are powerful, but node errors are common.
Suggestions included building cloud-based, plug-and-play workflow libraries.
New tools like Fuser and Weavy were floated as alternatives in the infinite canvas space.
💬 “Is the juice worth the squeeze?”
🧠 Prompt Markets & the Future of Creative IP
A side thread explored the growing market of creators selling AI prompts instead of services. Some saw it as smart leverage. Others questioned its sustainability.
Monetizing style as prompt IP is gaining traction
Buyers expect production-ready results
Authenticity still matters—especially when claiming high-profile clients
🎭 Community, Collaboration, & the Future of Gen AI Events
From drink-and-draw parties to full-fledged global hackathons, creators are finding new ways to jam, test, and launch together. One proposed idea: a multi-day festival with open access to tools, judged by virality and audience response rather than traditional panels.
💬 “100 multidisciplinary teams. No sleep. Full output. Let's go.”
🔗 Link Drop
🧪 Tools & Industry
🎥 Film & Art Projects
📚 Thought Pieces
🎥 Machine Cinema “GenTalks” Community Call – June 18 Recap
📺 View Recording: Watch here – 72 mins
🔥 This Week’s Guests
Adi Sideman (Popcorn.co)
Maddie Hong, Julia Harris, Kenny Miller, Rachel Leventhal (Emergence film team)
🎬 Segment 1: Popcorn.co Demo with Adi Sideman
Adi introduced Popcorn, a new AI video creation platform focused on social vibe moviemaking — a playful, consumer-facing tool enabling users to co-direct films via chat with a “vibe agent.”
🛠️ Platform Highlights:
Uses multimodal orchestration: GPT, Sora, Kling, etc.
Generates scenes and shots with a collaborative co-director model
Real-time editing: change dialogue, shot length, setting, character look with natural language
Autonomous, consistent characters that retain knowledge across videos
Social layer in development: remixable assets (characters, props, scenes)
🍿 Live Demos Included - Links embedded here
Ember’s Last Stand – Volcano, magicians, epic showdown.
Cthulhu Suburbia – Lovecraftian horror meets Jersey family life.
Custom “Machine Cinema” Mini-Epic – Fred & Min as Roman generals conquering with AI.
“We're not just democratizing filmmaking—we’re weaponizing imagination.” – Popcorn Film Narrator
🧠 Takeaways:
Costs currently $6–18/min but expected to drop
Platform aims to become a social entertainment hub, not just a tool
Prompt-based creative control via the “vibe agent” replaces timeline editing
Early access available for beta users
🪰 Segment 2: Emergence – A Behind-the-Scenes Deep Dive
Maddie Hong, Julia Harris, Kenny Miller, Rachel Leventhal shared their workflow and inspirations for Emergence, a short AI/hybrid film that became a top 10 finalist in the Runway AI Film Festival (6,000+ entries).
🎥 Concept:
A cicada’s transformation journey, inspired by Maddie’s childhood memories of bug swarms in Virginia.
🛠️ Workflow Highlights:
Mix of AI video (Sora, Runway, Flux) + live-action probe lens footage
Custom dirt LoRAs trained to achieve consistent underground/macro styles
Custom-built tool by Kenny to manage LoRA versioning + prompt testing
Editing + post: Julia Harris handled cohesion through Premiere, VFX, color, grain layers
“Every citizen becomes a storyteller, armed with tools that amplify their dreams.”
“What did it cost? A dream and some dirt.” – Maddie Hong
💡 Creative Tools Used:
Flux: Preferred for realism over SDXL
LoRA Training: On cicada life cycle + specific environments
Topaz Labs: Upscaling/degrading loop to add final polish
🗣️ Reflections on Teamwork in AI Filmmaking:
Collaborative “hub-and-spoke” structure with Maddie as creative anchor
Cross-disciplinary contributions (editing, 3D, sound, LoRA tuning)
Fluid team roles vs. traditional siloed production structure
Use of prompt libraries, visual references (Malick, Aronofsky, Lanthimos) to unify aesthetic
💬 Community Q&A Highlights:
Popcorn’s “Snapdragon” reference was a hallucination via character knowledge base
Editing in Popcorn = directing via agent, not timeline-based
Live-action + AI hybrid support coming soon (starting with photo uploads)
Beta access: 100,000 credits ≈ $300 (no margin, just testing phase)
💬 Final Thoughts:
This GenTalks was a tale of two paths:
Popcorn: accessible, fast prototyping & meme-style narrative tools
Emergence: meticulous, artistic experimentation blending AI + practical filmmaking
Together, they represent the evolving spectrum of generative filmmaking—from scrappy remix culture to cinematic craft.
Love seeing that creative market map - super helpful!
One category you might want to consider adding is generative motion as distinct from text-to-video. I'm talking about tools like Cartwheel, Krikey AI, and Maya's MotionMaker. They are generative models that produce movement data in 3D space that can then be applied to rigs in conventional animation software.