I’ve just seen the future of memes — Pika launches 1.5 and it can cake-ify anything
A new type of gif?
Pika Labs, one of the first commercial artificial intelligence video platforms, has finally come out with its version 1.5 model, and it's taken an interesting turn compared to the likes of Runway Gen-3 and Luma Labs, focusing on fun and memes as a way to draw attention to its capabilities.
As well as updates to the underlying model that include image-to-video, text-to-video, and ever-improving degrees of motion realism, there are custom-built effects called Pikaffects that let you take an image and manipulate parts of it to turn it into cake, squish it into slime, or crush it.
Soon after launch, heavy load on the Pika Labs servers meant it was taking some people up to 12 hours to get a video to generate, but that seems to be easing now, especially if you create one of the meme-effect-style videos. My personal favorite is the explosion.
I decided to put it to the test, creating a number of images and then trying out the different default Pikaffects, including blowing up a London telephone box, crushing a chessboard, and inflating a skull.
How does Pika Labs 1.5 work?
"Sry, we forgot our password. PIKA 1.5 IS HERE. With more realistic movement, big screen shots, and mind-blowing Pikaffects that break the laws of physics, there's more to love about Pika than ever before. Try it." pic.twitter.com/lOEVZIRygx (October 1, 2024)
The equation for artificial intelligence seems to be data plus compute power plus time equals a better model, and that's exactly the recipe Pika Labs has followed. The company has spent the last few months cooking up something special, with features not found on any other platform.
While there are a number of default meme effects, apparently there are hidden effects that you can add. I suspect at some point in the future people will be able to create their own effects and share them with others. I’d quite like to see text effects where an object is transformed into 3D text on the screen.
At some point, when the server load is a little calmer, I plan to do a proper deep dive into the other capabilities of the model. For now, to put it to the test, I created five images in Ideogram where an object or entity is front and center and then ran them all through Pika Labs.
Inflate it: A vintage typewriter
Image prompt: "A beautifully detailed vintage typewriter sitting on a wooden desk, in a cozy study with soft natural light streaming in through a window, surrounded by books and papers, warm and nostalgic atmosphere."
Melt it: A space helmet
Image prompt: "An astronaut's space helmet resting on a table in a futuristic space station, with the reflection of distant stars and planets in the visor, soft blue ambient lighting, sleek and highly detailed textures."
Explode it: A phone booth
Image prompt: "A classic British red telephone booth standing tall on a quiet London street, with wet cobblestones reflecting the streetlights, iconic architecture in the background, evening twilight, detailed and realistic."
Squish it: A grand piano
Image prompt: "A grand piano in a grand concert hall, polished black finish reflecting the soft stage lights, the elegant interior of the hall with red velvet curtains and rows of seats, dramatic and serene atmosphere."
Cake-ify it: A double-decker bus
Image prompt: "A bright red double-decker bus parked on a busy London street, with people walking by and iconic buildings in the background, mid-afternoon sunlight, highly detailed and realistic, cityscape."
Final thoughts
These aren’t all perfect, but they are an early indicator of one new way AI video could be used in the future — to create gifs and memes. I was able to generate 5-second gifs from each of the videos, and in each case, they were under 10MB, perfect for sharing on social or in a message.
Apple is already pointing some of its generative AI in the meme direction with emoji creation and image customization based on someone's photo, so maybe this is the next obvious evolution.