I tried Haiper 1.5 — the latest Sora-challenging AI video model
Up to 8-second videos
Haiper, the artificial intelligence video lab, has released version 1.5 of its generative model, offering initial clips of up to eight seconds and improved visual quality.
This is the latest update from a growing number of AI video platforms all chasing the realism, natural movement and clip duration of OpenAI's yet-to-be-released Sora model.
I put Haiper 1.5 to the test with a series of prompts and it feels more like an upgrade to the first-generation model than the kind of significant step change we saw between Runway Gen-2 and Gen-3, or with the release of Luma Labs' Dream Machine.
That isn’t to say Haiper isn’t an incredibly impressive model. It is, and it offers some of the best value of all the AI video platforms. It’s just that it has yet to reach the motion quality of Runway Gen-3 or solve the morphing and distortion problems found in Haiper 1.0.
What makes Haiper 1.5 different?
Haiper is the brainchild of former Google DeepMind researchers Yishu Miao and Ziyu Wang. Based in London, the company is focused on building foundation AI models and working towards artificial general intelligence.
The video model has been designed to be particularly good at understanding motion, so the tool hasn’t been built with motion controls like those in Runway or Pika Labs; instead, the AI predicts what movement is needed. I have found it works better if you leave specific motion instructions out of the prompt.
The startup first emerged from stealth with a ready-to-go model just four months ago and already has 1.5 million users. The previous maximum video length was four seconds, but for most users it only went to two seconds — basically a GIF. The new model can generate eight-second clips from the start.
It is one of the easiest AI video models to use, with a strong community built around creation. It offers a range of examples and prompt ideas and can be used to generate video from text or to animate an image.
Creating prompts to test Haiper 1.5
"8-second videos, video extension, AND upscaler!? Just a one-month #glowup, courtesy of Haiper v1.5 😊" pic.twitter.com/E2f5nGNqAM (July 16, 2024)
With Haiper 1.5, clips can be up to eight seconds long, although I noticed it occasionally slows down the footage rather than creating more movement.
You can also now produce clips of up to eight seconds in high definition, which was previously reserved for very short two-second shots.
As with Pika Labs, you can upscale or extend any of the videos generated using Haiper. Each extension adds four seconds to the original.
1. The koi pond
The first test was to see how well it handles the motion of multiple creatures, and I'd say it did a surprisingly good job. There wasn't too much warping or merging of the fish, although one looks like it is swimming above the pond.
The prompt: "A serene koi pond in a Japanese garden, with colorful fish swimming beneath floating lotus flowers."
2. A city street at night
Next was a test of a complex visual environment, in this case a busy city with bright lights and lots of people, as well as the degree of animation. The GIF reflects just how slowly the people moved in the final video. You'd have to play it at double speed.
This was the simple prompt: "A bustling city street at night, neon signs flickering, and people hurrying past in the rain."
3. Making sushi
Hands are a nightmare for AI models and unfortunately Haiper is no different. While initially it looks like it's cracked it, the five seconds after what is shown in the GIF turn into a weird, nightmarish mush. The full video is on the Haiper website.
"A close-up of a chef's hands preparing sushi, carefully slicing fish and rolling rice."
4. Blooming flower
This was the only outright fail of the test prompts. I think it may have needed either more specific instructions to capture the movement or even simpler ones. Every AI video model works slightly differently, so it's a tough call.
The prompt I used was: "A time-lapse of a flower blooming, petals unfurling in vibrant colors." I tried the same prompt with Luma Labs and, while the result was more realistic, it also failed to show the time-lapse.
5. Astronaut in space
I love using space prompts because space often confuses models when it comes to motion, or they generate multiple Earths. Haiper did a good job here and even showed the astronaut slowly moving. It's worth viewing the full video.
I used this prompt: "An astronaut floating in space, with Earth visible in the background and stars twinkling."
6. Steampunk city (image)
The next test was of Haiper's image-to-video model rather than just text. I started by generating an image of a steampunk city, then gave it to Haiper with a motion prompt. It did a good job of animating the unusual scene.
Prompt for Ideogram, the AI image generator: "Steampunk cityscape with airships and clockwork mechanisms". The motion prompt for Haiper alongside the image: "Gears turning, airships slowly moving across the sky."
7. The northern lights (image)
Finally, the Northern Lights. This is a useful test for all AI video models and usually one where you start with text, but I wanted to see how it would animate an image. It did a very good job, and the full eight-second video is worth viewing.
Prompt for Ideogram, the AI image generator: "Northern lights dancing over a snowy mountain landscape." The motion prompt for Haiper alongside the image: "Aurora borealis shifting and swirling in the night sky."
Final thoughts
Haiper 1.5 is a clear improvement on Haiper 1.0, as well as on models like Runway Gen-2 and Pika Labs 1.0, but it is very much an interim upgrade. If Haiper has achieved this with a 1.5 model, I can't wait to see what version 2.0 is like.
Clips were sometimes slowed down or suffered from morphing, but overall the model showed a big improvement in photorealism, movement and consistency. This was in part due to the doubling of clip length.
More from Tom's Guide
- 5 Best AI video generators — tested and compared
- OpenAI just released a Sora-generated music video — and it’s like something out of a fever dream
- AI may be able to make real-time video in a year — this is huge