I've spent 200 hours testing the best AI video generators — here are my top picks

In a very short amount of time, AI video has gone from short, 2-3 second clips barely identifiable as video to tools like Google Veo 2 capable of generating minutes of high-definition footage barely distinguishable from reality. And we’re only just getting started.

Runway kickstarted this revolution in February 2023 with the release of Gen-2, the first commercially available AI video generator, which emerged from the company's Discord test bed.

Pika Labs quickly followed with Pika 1.0, and then several Stable Video Diffusion-based services came online. Things really started to break through for synthetic video when OpenAI unveiled Sora, revealing that the scale of compute and training data is among the biggest factors in achieving realism and motion quality.

Sora is now live, albeit in a cut-down version compared to what was promised, and we’ve got a range of other models as good as or better than the OpenAI flagship. These include Runway’s Gen-3, Pika 2.0, the Chinese models Kling and Hailuo MiniMax, as well as Luma Labs’ new Ray2.

What makes the best AI video generators?

All of the best AI video generators are now as much a “platform” as they are a place to make a few seconds of motion from text or an image. For example, most now include some form of motion brush, lip-syncing and multiple model types, along with unique features such as keyframing.

Regardless of additional features, a good generative AI video platform needs to be able to create high-resolution clips with clear visuals, minimal artifacts and reasonably realistic motion.

It should follow the prompt you give it, whether that comes in the form of text or an image, and should also offer reasonably quick generation times at a reasonable price.

Platform | Credits with free plan | Cost of cheapest paid plan | Credits with basic paid plan | Commercial use on basic plan?
Luma Labs | Limited | $9.99 | 3,200/month | No
Pika Labs | 150/month | $10 | 700/month | No
Runway | 125 total | $15 | 625/month | Yes
Haiper | 10/day | $10 | Unlimited | No
Kling | Login bonus | $10 | 660 | Yes
Sora | N/A | $20/month | 50/month | Yes
Hailuo | Purchasable | $14.99 | 4,000/month | Yes

Tips for generating video with AI

Creating video content with AI isn’t all that different from creating AI images: you need to be descriptive and paint a picture with words. The biggest difference is that you also need to specify motion and describe how the scene and the objects within it should move.

The best way to utilize these tools, especially the more advanced ones capable of 10 or more seconds of video from a single prompt, is to use cinematography language. Describe the placement and motion of the camera, outline lighting and explain scene changes if needed.

For example, you could create a video of a couple dining by describing the camera slowly panning from a wide shot of the room to a close-up of their smiles and gestures. Add details like warm candlelight, a softly blurred cityscape through the window, and natural movements like one pouring wine while the other laughs.

You could use this prompt: “A cozy restaurant with dim, golden lighting. The camera begins with a wide shot, capturing the elegant dining room and softly blurred cityscape through the window. It slowly pans towards a couple at a table, smiling and laughing, as one reaches out to pour wine into the other’s glass. The warm candlelight flickers gently on their faces, creating an intimate and inviting mood.”

  • Use Cinematic Language: Include film terms to help guide the AI such as camera angles, movements and lighting
  • Specify Motion and Actions: Describe how elements within the scene should move including objects and characters
  • Define the Environment and Atmosphere: Use detailed descriptions of the setting to set the context and mood including lighting, weather and background items
  • Maintain Temporal Consistency: Set a logical sequence of events that are coherent and match the progression of the video and action you want to see
  • Iterate and Refine Prompts: Experiment with different prompt structures and details to achieve the desired outcome. Review the generated videos and adjust your prompts accordingly to improve quality and relevance. This iterative process helps in fine-tuning the AI's output to match your vision.
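
To make it easier to iterate on prompts like the restaurant example above, here is a minimal sketch, in Python, of one way to assemble the cinematic elements the tips describe (setting, camera, motion, lighting, mood) into a single prompt before pasting it into whichever platform you use. The build_video_prompt helper and its parameter names are purely illustrative and not part of any platform's API.

```python
# Hypothetical helper: combines the cinematic elements described above into
# one descriptive text-to-video prompt. Illustrative only -- every platform
# accepts free-form text, so adapt the wording to the model you're using.

def build_video_prompt(setting: str, camera: str, motion: str,
                       lighting: str, mood: str) -> str:
    """Join the elements into a single prompt, one sentence per element."""
    parts = [setting, camera, motion, lighting, mood]
    return " ".join(part.strip().rstrip(".") + "." for part in parts if part)

prompt = build_video_prompt(
    setting="A cozy restaurant with dim, golden lighting and a softly blurred cityscape through the window",
    camera="The camera begins with a wide shot of the dining room, then slowly pans in to a close-up of a couple at a table",
    motion="One reaches out to pour wine into the other's glass as they smile and laugh",
    lighting="Warm candlelight flickers gently on their faces",
    mood="An intimate, inviting mood with cinematic depth",
)
print(prompt)
```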

My favorite AI video platforms

I’ve pulled together a selection of the best AI video platforms I’ve used over nearly two years of testing. For each model, I’ve generated a video with the same prompt to show the quality differences between them.

The list only includes models I’ve personally tried and put to the test. It also only features synthetic video models, excluding avatar models like Synthesia and HeyGen.

The prompt for the videos I've shared with each of these entries is: "A lone cyclist on an empty rural road at golden hour, the light casting long shadows on the asphalt. Surrounding fields of tall grass glow with a warm orange hue, and the cyclist, in a bright jersey, rides steadily toward the camera. Dynamic perspective with cinematic depth."
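
If you want to reproduce a comparison like this yourself, the approach is simply to keep one fixed prompt, submit it to each platform in turn and save each result under the platform's name. The sketch below shows the idea in Python; the submit_job function is a placeholder rather than a real client, since each service has its own web interface or API.

```python
# Illustrative sketch of a side-by-side comparison: one fixed prompt, many
# platforms. submit_job() is a placeholder -- swap in the web UI or API client
# for whichever service you actually use.

TEST_PROMPT = (
    "A lone cyclist on an empty rural road at golden hour, the light casting "
    "long shadows on the asphalt. Surrounding fields of tall grass glow with a "
    "warm orange hue, and the cyclist, in a bright jersey, rides steadily "
    "toward the camera. Dynamic perspective with cinematic depth."
)

PLATFORMS = ["Kling", "Hailuo", "Sora", "Dream Machine", "Pika", "Runway", "Haiper"]

def submit_job(platform: str, prompt: str) -> str:
    """Placeholder: submit a text-to-video job and return the saved file path."""
    raise NotImplementedError(f"wire this up to {platform}'s interface")

for platform in PLATFORMS:
    try:
        print(f"{platform}: saved {submit_job(platform, TEST_PROMPT)}")
    except NotImplementedError as exc:
        print(f"{platform}: skipped ({exc})")
```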

Best for Visual Realism

Kling

Impressive motion and video quality

Reasons to buy

+ High-quality outputs
+ Advanced motion dynamics
+ Dual operation modes

Reasons to avoid

- Slow generations

Kling is one of the best AI video models currently available, excelling in visual realism and smooth motion. It offers advanced features like lip-syncing for dialogue, virtual try-on tools for fashion applications, and, at least for the older model versions, the ability to extend clips.

According to Kling, the latest release has an uncanny ability to follow complex instructions, including specific camera movements, timing changes and the visual structure of the scene. I put this to the test and found it to be true, although version 1.6 does have some limitations at the moment, including no extension capability.

I’ve found that Kling videos tend to look more realistic, with better texturing and lighting than other models and more consistent motion. It still falls foul of many of the same issues around artifacts, people merging and subtle motion, but overall it gets things right more often than the rest.

Built by the Chinese video platform company Kuaishou, Kling also comes with the KOLORS image model. You can generate images for a fraction of the cost to get an idea of how the final visual might look if you decide to then turn it into a video.

It comes with a free plan that rewards you with daily credits when you log in, and the standard plan, with 660 base credits, is $5. A professional 5-second video costs about 35 credits, or 20 credits if you don't mind lower resolution.

Best for Prompt Adherence

Hailuo MiniMax

Reliable and precise output

Reasons to buy

+ High-quality short videos
+ 720p at 25 FPS
+ Impressive prompt following
+ Fast generations

Reasons to avoid

- 6-second clip limit

Hailuo is one of my favorite AI video platforms to use. It launched early in 2024 and shines when it comes to prompt adherence. It also matches the visual quality of Kling.

When it first launched it was largely in Chinese and little more than a simple input box. It is now a full-featured AI platform with a chatbot, AI voice cloning and a video generation model.

Over the past few months, we’ve seen the Hailuo team add a range of new features including a character reference model that lets you give it an image of a person and have them appear within the video. This is similar to Pika’s ‘Ingredients’.

Hailuo is my go-to if I want a more complex video. Its prompt adherence and motion accuracy are ideal for scenes where groups of people are moving or where there is a lot of complex movement.

The free plan includes daily credits every time you log in, while the base subscription is $9.99 per month for 1,000 credits, bonus credits for daily logins and no watermarks.

Best for Storyboarding

Sora

A storyboarding powerhouse

Reasons to buy

+ Cinematic visuals
+ Deep language understanding
+ Multi-shot capabilities

Reasons to avoid

- Development ongoing
- Limited access

OpenAI's Sora is finally available, albeit only outside of the EU and UK. The version made public isn’t as powerful as the one previewed a year ago, but it still has impressive features such as the clever storyboard.

Available in text and image-to-video versions, it can take your prompt and turn it into between 5 and 15 seconds of compelling video. Motion is largely accurate and visual realism is impressive, although it doesn't quite live up to its initial promise, and other models seem to have caught up.

Some of Sora's features make it stand out. The platform includes Remix, which allows users to modify videos while preserving their core elements, and Storyboard, which aids in planning and structuring scenes.

There’s also a style preset function and the ability to blend elements from multiple videos, although for me Storyboard is the standout. It lets you place an image or text prompt at any point on the video timeline, and Sora builds the clip around it.

Sora is integrated into OpenAI's ChatGPT subscription plans. The ChatGPT Plus plan, priced at $20 per month, supports up to 50 videos per month at 720p resolution and five seconds in duration. The ChatGPT Pro plan, at $200 per month, provides unlimited video generation, resolutions up to 1080p and durations of up to 20 seconds.

OpenAI says it is launching standalone plans for Sora outside of ChatGPT this year.

Best for Collaborating with AI

Luma Labs Dream Machine

A co-creator for your vision

Reasons to buy

+ Realistic video generation
+ Image-to-video feature
+ Quick processing
+ Collaboration features

Reasons to avoid

- Occasional visual artifacts

Luma Labs' Dream Machine is one of the best interfaces for working with artificial intelligence video and image platforms. It can be used to create high-quality, realistic videos from text and images. It is able to create videos in seconds and you can iterate on the original idea just as quickly.

Even with the rapid generation of both images and video, the quality is impressive. This includes accurate and natural motion as well as photorealistic visuals.

A significant advancement in Dream Machine's capabilities is the introduction of the Ray2 model. Ray2 enhances realism by improving the understanding of real-world physics, resulting in faster and more natural motion in generated videos.

Despite its advanced features, users may encounter generation issues, such as stalled or failing outputs. Luma Labs provides comprehensive guides to troubleshoot these problems.

The built-in Photon image model is also impressive, and Dream Machine is particularly useful for working out prompts, which you could then even use with another model.

Best for Character Consistency

Pika Labs

Keeping Characters Cohesive

Reasons to buy

+ Dynamic video creation
+ Customizable motion
+ Easy to use
+ Character customization

Reasons to avoid

- No clip extension on v2.0

Pika Labs is one of my favorite AI video platforms. Its most impressive feature is one of its most recent — ingredients. This feature lets you give it an image of a person, object or style and have it incorporate them into the final video output.

Ingredients launched with Pika 2.0, which brought improved motion and realism as well as a suite of tools that make it one of the best platforms of its type I’ve tried during my time covering generative AI.

No stranger to implementing features aimed at making the process of creating AI videos easier, the new features in Pika 2 include adding “ingredients” into the mix to create videos that more closely match your ideas, templates with pre-built structures, and more Pikaffects.

Pikaffects was the AI lab’s first foray into this type of improved controllability and saw companies like Fenty and Balenciaga, as well as celebrities and individuals, share videos of products, landmarks, and objects being squished, exploded, and blown up.

Pika Labs offers a range of pricing plans to suit different user needs. The Free Plan provides 250 initial credits, with a daily refill of 30 credits, allowing users to explore the platform's capabilities at no cost.

Best All-Rounder

Runway

Versatility meets innovation

Reasons to buy

+ High-quality visuals
+ Rapid rendering
+ Intuitive interface

Reasons to avoid

- No monthly top-ups on free plan
- Can appear game-like

Runway launched the original commercial AI video model. It is now on Gen-3 and has improved by leaps and bounds over the original, including the ability to control the exact motion of the final video generation.

With the Gen-3 Alpha model, users can input text or images to produce unique video clips. You can set the image input as the start, middle or end of the final output, further steering exactly how it should look.

Runway's tools have been used in various projects, including films and music videos, showcasing their impact on modern storytelling. Imagine exploring a huge, invisible world full of creative possibilities — this tool turns that into a reality.

Another recent feature is essentially "outpainting" for AI video. This lets you convert a portrait video into landscape, or the reverse, with nothing but a simple prompt, filling in the new space to match the look of the original clip.

Runway has also announced a new AI image model called Frames. This lets you control the style and structure of each image and then animate it. The model hasn't launched yet but will make for an important addition.

Best for Experimenting

Haiper

Playground for creative exploration

Reasons to buy

+ Hyper-realistic videos
+ Fast generation times
+ User-friendly templates

Reasons to avoid

- Minimal features on free plan
- Watermark-free only on the most expensive plan

Haiper is a bit of an underdog in the AI video space, but it is shipping a range of impressive features, including templates and motion consistency.

It includes a user-friendly interface and is one of the cheapest platforms, offering unlimited generations on even the lower tier plans. It also includes an AI painting tool, which allows users to modify specific areas of a video by adjusting colors, textures, and elements, thereby enhancing and transforming visual content.

Despite its robust features, Haiper has some limitations. Free users must contend with watermarked videos, which can be a drawback for those looking to use the content commercially. You also need to pay for the top-tier plans to have commercial usage rights for the videos you generate.

By leveraging a proprietary combination of transformer-based models and diffusion techniques, Haiper 2.0 improves video quality, realism and production speed. This update adds more lifelike and smoother movement, potentially setting a new standard for the best AI video generators.

Since its launch, Haiper has continued to push the boundaries of video AI, introducing several tools, including a built-in HD upscaler and keyframe conditioning for more precise control over video content. The platform continues to evolve with plans to expand its AI tools, including features that support longer video generation and advanced content customization.


Want to know more about using AI for creative work? Here's our breakdown of the best AI image generators.

Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover. When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?