5 Best AI video generators — tested and compared
Artificial intelligence will now create video for you — we tried it
Generative AI video has taken on a new meaning in the past year, evolving from tools that pulled together clips from a massive stock library into models that can synthesize video nearly indistinguishable from reality, all from a text prompt.
Runway kickstarted this revolution in February last year when Gen-2, the first commercially available AI video generator, emerged from its Discord test-bed. Pika Labs quickly followed with Pika 1.0, and then several services built on Stable Video Diffusion came online.
Things started to break through for synthetic video earlier this year when OpenAI unveiled Sora, revealing that the scale of compute and training data were among the biggest factors in making a breakthrough in realism and motion quality.
We now have several Sora-level models: some readily available or coming very soon, such as Luma Labs Dream Machine and Runway’s Gen-3, and others not as easily accessed, such as the Chinese video model Kling.
Pika Labs and Haiper are also updating regularly, and everyone expects something new from StabilityAI around Stable Video 2. For now, here is a list of generative video models I’ve used and tested that are readily available to anyone with the time or money to try them.
What makes the best AI video generators?
A good generative AI video platform needs to create high-resolution clips with clear visuals, minimal artifacts and reasonably realistic motion.
It should follow the prompt you give it, whether that is text or an image, and offer reasonably quick generation times.
I'd also expect a platform built around the model with additional features such as inpainting, clip extensions and the ability to upscale lower-resolution clips.
| Platform | Credits with free plan | Cost of cheapest paid plan | Credits with cheapest paid plan |
| --- | --- | --- | --- |
| Luma Labs | 30/month | $29 | 150/month |
| Pika Labs | 250 total | $10 | 700/month |
| Runway | 125 total | $15 | 650/month |
| Haiper | 10/day | $10 | Unlimited |
| FinalFrame | N/A | $3 | 20 |
For each of these reviews I've included a short video that I generated myself on that platform, using the default settings with no custom features or additional prompts.
Best overall video
Luma Labs Dream Machine
Dream Machine came out of nowhere, and from a company previously focused on generative 3D content. Luma Labs' Genie model was a big milestone in text-to-3D generation, and it seems the company took some of that understanding and built a text-to-video and image-to-video model.
Demand was so high for Dream Machine when it first launched that the company had to quickly implement a daily limit of just five generations for free users. It has also taken to social media to appeal for more compute power to run its model.
Each video generated is about five seconds long and it is impressive at following prompts. You can give it a descriptive idea and it will then improve that to get the best result from the model.
Soon after launch, the ability to extend a clip by up to five seconds was also added, although in my experience this can be a bit hit-and-miss. When it works it is effortless, but you have to get the prompt exactly right or it will make some weird changes to your original video.
The videos created with Luma Labs Dream Machine are as realistic as anything I’ve seen from the Sora examples, with impressive levels of motion control. Unlike Sora, I've been able to see this for myself in videos I've created. It is easy to use, enhances your own prompts and works well with traditional filmmaking cues like a dolly-in.
It comes with 30 video generations per month with the free plan, which are used up very quickly if you want to do more than play about. The paid plans start at $30 a month for 120 creations on top of your 30 free. It also removes the watermark, allows for commercial use and gives you a higher priority in the queue.
Best value platform
Pika Labs
Pika Labs is one of the best overall AI video platforms, but its particular strength is turning images from services like Midjourney or Ideogram into video, thanks to an update to its image-to-video model earlier this month.
Using its built-in motion tools gives you the best results, especially for scenes requiring a slow zoom or pan, but you can also instruct the model with a text prompt alongside the image.
A new-generation model is coming soon but for now, Pika Labs runs a first-generation synthetic video model. A couple of months ago that alone would have been something to shout about, and Pika Labs was among the best, but compared to second-generation models like Sora and Kling it is showing its age.
However, as a platform, it has a lot to offer and if you start with an image the comparisons are less obvious. It generates three-second clips extendable up to 16 seconds and offers upscaling as well as the ability to inpaint a specific region of a video.
What makes me say it is one of the best platforms is the addition of sound effects, which can be your own custom noises or generated to match the video, and lip-sync technology created in partnership with ElevenLabs. The lip sync doesn’t move the head, but it is a good quick solution.
The free plan gives you a total of 300 credits, with the ability to buy additional credits as needed. The Standard plan is $10 per month for 1,050 credits, renewed monthly. Both include the ability to upscale videos and remove the watermark.
Best overall platform
Runway
Runway has unveiled Gen-3 but, at the time of writing, it hadn’t been released to the public. If it had, Runway might have been my best overall, because its next-generation model offers 10-second clips, advanced motion control and impressive degrees of realism. This review is based on Gen-2.
Its current generation, Gen-2, is of a similar quality to Pika Labs' Pika 1.0, but it doesn’t seem to be as good at generating video from an image prompt. What it does have in its favor is an impressive toolkit of features, including Motion Brush.
Motion Brush lets you paint over specific parts of an image and animate only that aspect, or dictate exactly how it should be animated and move. It's not perfect, but it's as good as first-generation synthetic video gets. If you get the painting right and are descriptive with your prompt, you can get very good results.
The other tool that stands out from the crowd is its impressive lip-sync system. It also uses ElevenLabs and lets you add your own voice, but unlike Pika it also animates head movement to create a more realistic video output.
Each video in Gen-2 is about four seconds, with the ability to extend up to 16 seconds. Motion accuracy varies, with a lot of blurring and warping, but using the built-in controls can improve the quality of motion in each scene.
The free plan gives you a flat 125 credits with no option to add more, and you can't upscale or remove watermarks. The base plan is $15 per month for 625 credits, renewed every month; it includes upscaling, watermark removal and access to other features such as texture creation, custom model training and 4K exports.
Best for prompt following
Haiper
Haiper is a relative newcomer that has focused primarily on prompt adherence. It sits somewhere between the first-generation models and the likes of Luma Labs Dream Machine, with impressive motion thanks to its diffusion transformer model, but its clips are short enough to be hard to properly judge.
You can create video from text or use an image as the initial prompt. It also supports repainting part of a generated video, and this works with videos uploaded from outside the platform, including your own footage. For example, you could share a video of yourself and change your head to that of a cat.
Extensions are coming soon; clips currently start at four seconds, with eight-second initial generations due in the next update.
What makes Haiper stand out is how well it follows a prompt and how good its AI model is at interpreting likely motion within the video. I spoke to the developers early on and they said it actually works better if you leave the AI to work out how to manage movement within the video.
It is currently in beta and the free version allows for 10 creations per day. They contain a watermark and can't be used commercially.
The base plan is $10 a month for unlimited creations and early access to new features, but these videos also carry a watermark and can't be used commercially. The $30-a-month plan is required for watermark-free, commercial-use video.
Best for experimenting
FinalFrame
I love FinalFrame. It doesn’t have the best video quality or even the best motion, but it has speed of iteration and new features in its favor.
Built by a small team, the bootstrapped platform quickly adds new technology and features as they become available and isn’t afraid to try new things.
It is also very easy to use. Quickly give it an image or text prompt and it will turn it into a video, adding it to a library in a UI similar to a video editor like Final Cut Pro.
While the quality isn't great, in my tests its motion is more realistic than that of some of the big players, including Pika Labs. It works best when prompted with an image first, but it also creates impressive AI images using a version of Stable Diffusion.
One thing that stands out for me is the lip syncing. It works impressively from a video generation, keeping natural movement while matching the lip movement to the speech you give the model.
Unlike the other platforms, which require a monthly commitment, with FinalFrame you just buy the credits you need. The basic plan is $3 for 20 credits.
Want to know more about using AI for creative work? Here's our breakdown of the best AI image generators.
Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover. When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?