The Best Things We Saw At Adobe MAX 2023
by Rudy Sanchez on 10/14/2023 | 5 Minute Read
Pumpkin spice lattes aren’t the only thing that comes around every fall. Adobe also holds its annual MAX conference in Los Angeles every October.
MAX is when Adobe showcases the latest and greatest software features, and unsurprisingly, artificial intelligence (AI) loomed large in the firm’s keynote this year. Some of this year’s highlights were open secrets, such as Generative Fill in Photoshop. Instead of waiting a year to announce and make available new features, Adobe has been releasing updates in open beta to users.
Usually, software developers wouldn’t put out a feature that’s still not production-ready. But just like that sourdough starter buried in the back of your fridge, AI needs constant feeding, so an open beta release makes sense. The downside is that it takes the “wow” factor away in Adobe’s keynote, as many of the announcements have been in users’ hands for some time.
Nonetheless, Adobe still had some applause-worthy features to show off in 2023. In particular, significant updates powered by Firefly (Adobe’s name for its AI) in Photoshop, Illustrator, and Express give us our first substantial look at what AI-aided creative design will look like in the near future.
Here are some of the highlights from this year’s Adobe keynote and the future-forward magic show otherwise known as Sneaks.
Let's get this one out of the way first, as it's something we're already using around the office. In Photoshop, Firefly powers features like Generative Fill, which allows users to resize an image and have AI expand the canvas with generated content that matches the original image.
On the Illustrator side of things, Generative Match combines text prompts with user-selected reference images to create new images. For example, if the user types in “grumpy cat sitting in front of an empty plate inside a restaurant” and adds a watercolor painting as a reference, Firefly will generate a picture in that same watercolor style. Generative Match is available on the web and in the Illustrator desktop application. Adobe says that with Generative Match, teams can quickly and easily create new assets that are consistent with their brands.
Also available now in Illustrator is Text to Vector Graphic, which, according to Adobe, is the world’s first vector graphic-generating AI model. Images generated by Text to Vector are editable vector images designed to be refined and completed by creatives. What's more, you can duplicate your creations, group or ungroup them, and open the layers panel to precisely edit all of those minuscule details. You can even recolor the hues in your image using Generative Recolor, which applies new color palettes using text prompts.
Another new feature announced for Illustrator is Retype, which identifies text in an image or photograph, matches the font, and makes it editable.
Though Project Stardust leaked ahead of the conference, Adobe showed off the object-aware editor that allows users to quickly move, edit, or delete elements of an image simply by clicking on them. A user can, for example, select random people in the background and remove them like Marty McFly's siblings fading from the photo.
Glass reflections can be tedious to remove from photos, but Adobe’s Project See Through can eliminate reflections from an image with relative ease. Meanwhile, Project Neo gives 2D designers easy ways to create and incorporate 3D objects into projects like infographics, posters, and logos.
Creating custom lettering from an existing glyph can now be done in seconds thanks to Adobe’s Project Glyph Ease. Project Glyph Ease analyzes a hand-drawn or existing glyph or set of glyphs, like a store sign, and generates an entire character set. The rendered glyphs can then be edited and refined in Illustrator. That said, if you want to slave away at your desk creating letterforms based on a deli sign you walked by once just for the fun of it, have at it.
Adobe also showed off Project Draw and Delight, a tool that takes rough sketches or doodles and generates polished, refined artwork in various styles with different backgrounds and color palettes.
Last but not least, we have to talk about that dress. Project Primrose, courtesy of Adobe's Christine Dierk, featured a seemingly innocuous dress covered in scales. However, the dress was a shapeshifting canvas, something highly interactive that could alter its pattern in the blink of an eye. Sure, the technology is highly applicable to the fashion industry, but we could even see it being used someday on packaging, with flexible designs that can constantly change on grocery store shelves.
Whatever fears creators may harbor about AI-powered design features, they certainly haven’t been scared of using them. According to the software developer, designers and non-designers alike have generated billions of images using Adobe’s Firefly technology.
“The community adoption of Firefly is incredible, like 3 billion images generated since March, that was not that long ago,” said Deepa Subramaniam, VP of Creative Cloud product marketing, Adobe. “One billion of those images came in the last month alone. This hockey stick type of engagement is really incredible to see. The community is a key part of our product development. It’s an ongoing dialogue that’s very creator first, and they’re having a real, tangible impact on the development and the roadmap directly.”
The era of AI-assisted design is here, and Adobe is putting these tools in the hands of creatives everywhere. Less clear, however, is what it will mean to be a designer in the future. If Adobe’s Firefly is as powerful as it appears in demos, designers will push pixels less and guide the bots to do much of the work.
Images courtesy of Adobe.