I tried Adobe's Firefly 3 image generation tool — it takes photorealism to a new level
Firefly 3 has better photorealism
Adobe has released Firefly 3, the latest version of its artificial intelligence image generation model, along with upgrades to generative fill in Photoshop.
The upgrades to Firefly bring significant improvements in photorealism, prompt adherence and overall control over the final image.
Firefly 3 was trained on billions of licensed stock images, with more detailed labelling of lighting, structure and style to improve the overall output. It is initially available through the Firefly website and will roll out to other Adobe products in the future, including alongside AI video models like Sora in Premiere Pro.
In addition to generative fill, Adobe is launching generative expand for the Firefly web app. This feature allows users to expand the canvas or change the orientation of any image, and Firefly fills in the gaps.
What is Adobe Firefly 3?
Adobe has been rapidly improving its Firefly family of generative AI models, adding new features such as style reference and integrating them with existing Adobe products.
Firefly 3 was trained on licensed Adobe Stock images, a library that includes some creations made with Midjourney, and will likely be the first version to gain video capabilities later this year.
Most people will interact with Firefly through generative fill in Photoshop or through template and other content creation in Illustrator and InDesign, but Adobe is also investing in its standalone Firefly web app.
The text-to-image model can also use an existing image and copy its style or structure. For example, if you have a photograph with perfect lighting, you could copy that style, or if you have a product image, you could copy the structure but change the content.
“In just over a year, Firefly has become the image generation tool of choice used by millions of creators to ideate every day, and we’re just getting started,” said Ely Greenfield, chief technology officer, Digital Media at Adobe.
“As we continue to advance the state of the art with Image 3 Foundation Model, we cannot wait to see how our creative community will push the bounds of what’s possible with this beta build.”
How well does Adobe Firefly 3 work?
I tried it on a handful of prompts and compared the results to Firefly 2, which was already impressive at art- and design-based creation. Firefly 3 is a major step up for photorealistic images, and I think it will hit the stock image sector hard.
Adobe says the work on Firefly 3 focused on speeding up ideation, allowing designers to go from an idea to a fully fledged image with as little time and friction as possible.
I think the company has achieved it. Unlike Midjourney, where you have to learn multiple parameters and how to use them, Firefly offers a series of clear, well-defined menu options.
Firefly 3 seems to have better photorealism, a wider variety of outputs from a single prompt across styles like photo, art and illustration, as well as options to set the mood or lighting.
The company says it also has a better understanding of the prompt, which I put to the test by asking it to show me an old castle by a lake with people boating, and it was spot on. This means it pays to avoid vague prompts: give Firefly the details you want and it will get them right.