Brave New Worlds: Mixing AI with archive, ethics and innovation
I remember my first gig in non-fiction, doing archival photo and film research for the late, great filmmaker Bruce Sinofsky on Good Rockin’ Tonight: The Legacy of Sun Records, an episode of PBS’ ‘American Masters’ that covered the story of the legendary Memphis recording studio. I recall the busy work of chasing down never-before-seen images from the past — the history present not just in the images but in their caretakers as well — and the treasure of authenticity all these different layers combined to express. It was a meticulous process of human-to-human interactions, far removed from Google search engines and lightyears away from the robotic alchemy of Generative AI prompts, imaginings, and hallucinations.
That’s why I find it so inspiring that the loudest non-fiction industry contingent in the Generative AI fray thus far is the growing movement of the Archival Producers Alliance, a non-profit founded by Jennifer Petrucelli, Rachel Antell and Stephanie Jenkins that has attracted more than 300 members since launching in November 2023. You’ve perhaps read about them already, as various publications (including this one) have followed how the APA is attempting to set best practices for how Generative AI intersects with the doc-making workflow.
Jenkins (pictured above) says that the organization’s formation was an organic response to what was starting to happen on productions. “We pulled together some archival producers and we were like, ‘What are you guys being asked to do?'” she recalls. “It’s a lot of generative fill for photographs, generating a picture of a child that they couldn’t clear the image for. And all of a sudden, it’s a slippery slope.”
That led Jenkins, Antell and Petrucelli to the conclusion that “no one’s really thinking about it. Combined with where we are in the industry right now and the cutting of budgets… People are like, whoa, is this vulnerable system [of checks and balances in archive-led content] about to be taken away?”
And these were merely the warning signs behind the scenes — the full-blown red flags then quickly began to emerge onscreen. For example, in the recent Netflix film What Jennifer Did, savvy viewers and critics were quick to notice Generative AI’s notorious mangling of hands in what seemed to be a faked archival photo, along with other clues such as inconsistent teeth in a “photographed” smile. Thus far neither Netflix nor the filmmakers have directly addressed whether AI tools were used to manipulate or generate the imagery.
There’s also the notable case of Roadrunner, Morgan Neville’s doc about the late Anthony Bourdain, which kicked up debate when the filmmakers generated an AI voice to read some of Bourdain’s writings. There was an unfortunate public disagreement between the filmmaker and Bourdain’s widow about whether consent had been obtained to conduct this creative experiment, and upon release the film had no disclaimer to alert viewers.
These and other examples point to the lack of guardrails in this still-gestating combination of new technology and a legacy media industry. The pertinent issue is the underlying risk the tech poses to non-fiction content and documentary films: a distorted relationship with historical sources, in which neither images nor audio can be trusted. These are unsettling consequences for the craft in an age already rife with misinformation and disinformation. As Jenkins aptly puts it, “We need to have some sort of ethical boundary and a shared set of language so we can make decisions that represent the people making the work.”
In the coming months, Jenkins and her team will make public their “Guidelines for Use of Generative AI in Documentaries,” which they have been assembling across several months in consultation with dozens of partners. By the time of the document’s release, the goal is to have secured more than 100 endorsements from industry leaders and community members to represent broad consensus on the recommendations. (Full disclosure: I got an early look, as well as the chance to weigh in and offer suggestions, and I am proud to be listed towards the end as a contributor.)
One key section of the guidelines cautions filmmakers about “Alterations to Primary Sources.” What used to be limited to Photoshop touch-ups or colorizing has now advanced significantly, thanks to the ability of digital tools to manipulate reference imagery. Just in the last couple of weeks, the Video Generator wars have been peaking with the releases of Kling, Runway’s Gen 3, and, most notably, Luma’s Dream Machine, which cuts directly to the APA’s concerns: click here to see your favorite classic album covers brought to life.
Is it cool or creepy to watch the Beatles “actually” walk (or, in George Harrison’s case, stumble) across Abbey Road? Choose for yourself, but, at the very least, the APA wants these creative choices to be disclosed. And if this is the future of re-enactments, there needs to be the same “intentionality, and the same care for accuracy and sensitivity.”
“Being a documentary filmmaker is bearing witness to the world around you; we want to uncover past truths, [but] we can’t take the objectivity of history for granted,” says Jenkins.
Underlying much of the APA’s work is a core value: transparency. The net positive is that the APA offers us guidance on what kinds of seatbelts we need to build for these fancy sports cars rolling out of Silicon Valley. And, given the tech world’s track record with moving fast, breaking things and apologizing later (see The Social Dilemma), it’s a welcome development to see the non-profit sector play its role in this innovation cycle. With any luck, these efforts will extend to the news industry.
But beyond the caution, I was also happy to see both within the guidelines and in speaking to Jenkins a genuine curiosity for the creative possibilities Generative AI offers. At Florentine Films, Jenkins works closely with industry icon Ken Burns, a filmmaker whose technique of zooming in and panning over archival materials on physical animation stands rendered him an eponymous “style” for Apple screensavers once upon a time. A new paradigm emerged in 2002 when Brett Morgen’s The Kid Stays in the Picture popularized the use of 2.5D animation for archival content, when filmmakers added stylized motion to elements of a static archival image. Both of these techniques breathe storytelling life into materials that used to mostly hang in museums. If used responsibly, with adherence to transparency and ethical sourcing practices, what new experiences can Generative AI yield?
Many of my favorite artists and colleagues in the Generative AI scene are themselves either from the non-fiction community or doc filmmakers, so I polled them to see what experiments are currently being conducted with archival/historical content. Not surprisingly, the granddaddy of History Channel topics emerged as the frontrunner: the JFK assassination, with not one but two innovative experiments worth noting.
Before they were AI film school educators, Curious Refuge’s Caleb and Shelby Ward (pictured right) had careers in traditional VFX. Eight years ago, the native Texans added a new chapter to this dark episode of their state’s history by putting the famous Zapruder film into a 360 Video workflow. (Here’s a blog link to that effort; it’s best viewed with VR goggles.)
With the help of compositing software, the Wards were able to meticulously overlay the historical film, frame by frame, inside a modern-day VR scene of the same location, creating a seamless experience of what it was like to stand in Zapruder’s shoes, and also making sense of theories about how JFK was assassinated. It’s worth noting how detailed and transparent the blog post is in laying out the step-by-step process of recreating this scene. This was not yet Generative AI, but it served as a precursor of future possibilities.
On the other end of the spectrum is Matt Zien’s new venture, KNGMKR Labs. In the last year, the former development executive for the Intellectual Property Corporation (Night Stalker, Leah Remini: Scientology and the Aftermath) has teamed up with Mac Boucher to form a modern creative studio that leverages state-of-the-art tools such as Generative AI for film, TV and live music installations. He showed me Camelot, an eye-popping in-house experiment that gives their own take on the Kennedy assassination.
On the surface, the work would appear to be the APA’s worst nightmare: a fully generated film that taps into tropes of archival imagery and newsreels, and fully fakes the entire experience at a very high level of visual quality. But it’s also transparently fake — per Zien, the piece “reimagines an alternate history where JFK survives his 1963 assassination, visually exploring an alternate timeline where the great American president goes on to live a long and happy life.” (For an example of this alternate timeline, see the image at the top of this article, which imagines an elderly JFK enjoying a Boston Celtics game in the 1980s.)
Between the pendulum swing of 360 VR and historical fiction, I’m personally imagining an even more progressive entry that would break the format even further and exit the linear experience. Some of the more cutting-edge recent AI experiments have involved Gaussian Splat technology. You can see what I mean here and here, but simply put, it’s a tool that allows a filmmaker to create highly realistic 3D versions of scenes by inputting 2D images. What’s more, you can then extrapolate beyond the input data to experience “the world” of the 2D images.
What if we applied this tool to archival still images? If we could follow the APA’s guidelines and bring the same standard of historical accuracy and sensitivity that we’d bring to a re-enactment to digital 3D set and costume design, could we then use authentic archival source material to faithfully render an interactive 3D world based on photos shot on that fateful day in Dallas? Is this a video game? A new type of museum installation, or educational tool? Specific to our industry, does it represent a whole new frontier for non-fiction content creators?
Undoubtedly, there will be more questions to ask as these and other new possibilities emerge, and I hope that those will be accompanied by additional guardrails or best practices where necessary. One thing is clear to me: just as the seatbelt we’ve come to rely on when driving wasn’t widely used in cars until the 1950s, having Jenkins and the APA as essential allies in this ecosystem of innovation will make sure we can keep moving forward, with care.