OpenAI Sora shorts to debut at Tribeca Film Festival — a big mainstream moment for AI
Award-winning filmmakers to use Sora
OpenAI has struck a deal with the Tribeca Film Festival and will showcase five shorts made using its artificial intelligence video engine, Sora.
Sora was announced in February and is capable of creating multi-shot longer-form clips from a single prompt, something other AI video tools struggle to achieve.
The model has yet to be released to the public, with OpenAI focusing instead on getting it into the hands of filmmakers and creatives to push it to its limits and improve training data.
The Tribeca deal will see five filmmakers use Sora to create original shorts exclusively for the festival. It isn't clear if they have to be made entirely in Sora or whether Sora can be used alongside traditional techniques.
Who will be making the short films?
The five creators making Sora Shorts for Tribeca are actor and filmmaker Bonnie Discepolo, filmmaker Ellie Foumbi, writer and director Nikyatu Jusu, genre-defying filmmaker Reza Sixo Safai, and Emmy Award-winning director Michaela Ternasky-Holland.
Each of the shorts, made specifically for the festival, will be shown on June 15.
The filmmakers were each given just a few weeks to complete their projects. This was in part to show the productivity improvements that could be gained from using AI tools in filmmaking.
“Tribeca is rooted in the foundational belief that storytelling inspires change. Humans need stories to thrive and make sense of our wonderful and broken world,” said co-founder and CEO of Tribeca Enterprises Jane Rosenthal.
She added: “Sometimes these stories come to us as a feature film, an immersive experience, a piece of art, or even an AI-generated short film. I can’t wait to see what this group of fiercely creative Tribeca alumni come up with.”
What is Sora and why are filmmakers using it?
Sora was announced in February 2024 by OpenAI. It is a diffusion model built on a transformer architecture that can generate longer-form (up to a minute) video clips with consistent characters and fluid motion.
Unlike other tools like Runway or Pika Labs, it can create multiple shots from a single prompt, such as having it start with a close-up of a character and then cut to them walking away. Other models tend to focus on a single shot and create three-second clips.
Due to the level of realism and potential for misuse, OpenAI has rolled Sora out slowly, starting with its own red team to test its boundaries, then with a core group of digital filmmakers. It is now expanding access to more mainstream and traditional filmmakers through Tribeca.
Adobe has also worked with OpenAI to potentially integrate Sora into a future version of its flagship Premiere Pro video editing software. This would allow for the in-timeline creation of b-roll, or the extension of an existing clip if you didn't film a long enough sequence.
What about filmmaker concerns over AI?
The use of artificial intelligence in the creative field has been controversial. This is in part due to concerns over the provenance of the data used to train the models, but also the impact it will have on creative jobs — particularly in visual effects.
However, the positive take on it centers around the ability for more filmmakers to create more immersive works on a smaller budget, unlocking imaginations in ways not previously possible.
With AI able to generate b-roll, extend shots and even create visual effects, a filmmaker combining it with traditional techniques could create a blockbuster on an indie budget.
As part of this deal, OpenAI taught the filmmakers how to use Sora and other AI tools to make their creations. They also had to adhere to rules negotiated last year by SAG and other unions with respect to the use of AI.
These include disclosing any use of generative AI (including in the writing process), adhering to copyright, and obtaining approval and a license for any replication of a living creator's voice or image.