Runway’s ‘better and faster’ Gen-3 AI video model is coming in the ‘next few days’

(Image credit: Runway Gen-3)

AI video platform Runway will release its Gen-3 model “in the next few days” and it will include “major improvement in fidelity, consistency, and motion over previous generations of models,” while also being considerably faster, the company told Tom’s Guide.

Runway released Gen-2, the first commercially available text-to-video AI model, in June last year, and since then a revolution in synthetic video has been unleashed on the world. It now competes with the likes of Pika Labs, Haiper, Luma Labs and the yet-to-be-released Sora.

Gen-3 is a major step-change for Runway and the AI video space. It was built from the ground up on new infrastructure purpose-built for large-scale multimodal training, and the model was trained on images and video at the same time for improved realism.

The public will get access to an Alpha version “in the next few days.” Anastasis Germanidis, Runway CTO and co-founder, told me this was the smallest of a new generation of frontier AI models coming from the company as a result of the new training infrastructure.

What makes Runway Gen-3 different?

(Image credit: Runway Gen-3)

Runway Gen-3 offers improved control over motion within a video, as well as a better understanding of real-world movement and physics. Combine that with its photorealism and you’ve got a model that can create videos almost indistinguishable from reality.


There were some surprises for the team when they first used Gen-3 after it completed training, including its approach to scene creation. This is possible because it generates videos of at least 10 seconds, where the previous generation capped out at about four seconds.

“The ability to create unusual transitions has been one of the most fun and surprising ways we’ve been using Gen-3 Alpha internally,” said Germanidis. He told me: “The model is able to incorporate and make sense of drastic changes in the environment with very pleasing results.”

As well as changing scenes and environments, you get much greater “temporal control,” as the model was trained with “multiple highly descriptive captions per scene, which makes it capable of generating videos that have unusual and interesting transitions of environment and action, as well as precise key-framing of specific elements in time,” he explained.

“These model improvements paired with existing control modes such as Motion Brush, Advanced Camera Controls, and Director Mode give our users more control than ever before.”

You can start with images, text or even video using Gen-3, whereas Gen-2 doesn’t support video as an input. It doesn’t matter which you use, according to Germanidis. “Gen-3 Alpha improves significantly in terms of temporal consistency and has much-reduced morphing compared to Gen-2 for both text and image inputs.”

Creating a General World Model

(Image credit: Runway Gen-3)

Germanidis told Tom’s Guide this was the “first of the next generation of foundation models trained by Runway from the ground up.” He added that future versions “will reach and exceed the scale of large language models,” such as Google Gemini and Anthropic’s Claude.


In the same way the big AI labs like OpenAI and Anthropic are working towards Artificial General Intelligence (AGI), Runway is working to build “General World Models.”

“A general world model,” explained Germanidis, “is an AI system that builds an internal representation of an environment, and uses it to simulate future events within that environment.”

“The aim of general world models will be to represent and simulate a wide range of situations and interactions, like those encountered in the real world,” he added.

While Gen-3 isn’t in itself a General World Model, it is the first step, Germanidis told me. “It’s still very early, and this is the first and smallest of our upcoming models.”

“The model can struggle with complex character and object interactions, and generations don’t always follow the laws of physics precisely,” he warned. So don’t get overly excited, but remember this is just step one.

Ryan Morrison
AI Editor

