Photoshop’s new AI assistant makes it easier than ever to edit images



Today Adobe is launching the public beta of its new AI assistant for Photoshop Web and Photoshop Mobile. The company’s impressive new assistant technology lets anyone do seemingly flawless photo editing—Nano Banana style—simply by prompting the apps. It then goes further, giving you easy and precise ways to interact with the software, whether by voice or by navigating the interface with your finger.

Photoshop Mobile and Web have included AI features for a while. The web version already had Adobe Firefly generative AI features like generative fill and generative expand. The previous mobile version of Photoshop became truly usable because it smartly integrated AI to allow for making accurate object selections with your fat finger.

This new AI assistant integration removes any lingering difficulty from image editing, putting it in competition with popular AI image generators like Google’s Nano Banana, OpenAI’s GPT-Image, or ByteDance’s Seedream. Unlike those models, however, the new assistant works inside Photoshop Mobile and Web, giving users far more image-editing precision through the apps’ tools.

Plus, it adds the possibility of “upstreaming” results beyond posting an edited image on social media. Users will be able to move AI-edited files into the full Adobe creative app workflows: going full Photoshop on the desktop, integrating them into a Premiere project, or publishing a book in Acrobat.

How the new Photoshop Web and Mobile work

When you click on the assistant icon, the model first analyzes the raw pixels on your screen. The assistant essentially scans the image to identify both the overall context and the specific objects within the frame—recognizing the difference between a human subject in the foreground, all the different objects present, and a chaotic crowd in the background. Once it maps out the “reality” in the image, the app provides you with proactive recommendations.

The assistant suggests edits, which can be any number of things depending on the nature of the image: removing “scattered objects to tighten the composition,” refining the lighting, adjusting the color palette, or anything in between. If you prefer to be hands-off, you can tell the machine to do it for you, or you can choose to bypass the automation and do your own thing.

Taking the manual route means you can use your voice or text prompts to manipulate the image while retaining granular control over the assistant’s actions. In the mobile app, for instance, you can issue a vocal command to alter a specific object—like removing the cropped head of a dude in the background—and the assistant will automatically isolate that element and place the changes on a dedicated layer.

Think of layers as transparent sheets of acetate stacked on top of each other; you can shuffle them around, duplicate them, or delete the background from the top sheet without permanently destroying the original photograph at the bottom of the pile. You can even sandwich generated typography securely behind a subject but in front of a newly created background. You just have to talk to the assistant to add the text wherever you want, even specifying the typography, color, size, and then move it or manipulate it using your fingers.
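Adobe hasn’t published how its layer engine is implemented, but the acetate-sheet analogy maps cleanly onto a simple data structure. As a rough mental model only, here is a tiny, hypothetical Python sketch of a non-destructive layer stack: edits live on their own layers, upper layers win where they have pixels, and deleting an edit layer never touches the original photo at the bottom.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One transparent 'sheet'. Pixels are stored sparsely as a dict
    mapping (x, y) -> color; missing keys mean 'transparent here'."""
    name: str
    pixels: dict = field(default_factory=dict)

class Document:
    """A non-destructive layer stack: the original photo is just the
    bottom layer and is never modified by the edits stacked above it."""
    def __init__(self, background: Layer):
        self.layers = [background]  # ordered bottom -> top

    def add_layer(self, layer: Layer) -> None:
        self.layers.append(layer)  # each new edit lands on its own sheet

    def delete_layer(self, name: str) -> None:
        # Discarding an edit leaves every other layer untouched
        self.layers = [l for l in self.layers if l.name != name]

    def flatten(self) -> dict:
        # Composite bottom-to-top: upper layers overwrite where opaque
        out: dict = {}
        for layer in self.layers:
            out.update(layer.pixels)
        return out

photo = Layer("original", {(0, 0): "sky", (1, 0): "pineapple"})
doc = Document(photo)
doc.add_layer(Layer("remove-pineapple", {(1, 0): "table"}))
flattened = doc.flatten()       # the edit covers the pineapple
doc.delete_layer("remove-pineapple")
restored = doc.flatten()        # original pixels come right back
```

The key property the analogy describes: `photo.pixels` is never mutated, so any edit can be reordered or thrown away at will.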

All very cool and Minority Report-ish.

Over on the web version, the new assistant introduces a feature called AI Markup to give users absolute precision over image alterations. Located within a contextual task bar, this tool lets you use your finger, mouse, or stylus to draw directly on the image, effectively outlining a digital quarantine zone for the artificial intelligence.

By physically marking up the canvas and adding text prompts, you establish strict borders that control exactly where the computational changes happen. It allows you to draw rough shapes to integrate entirely new objects into the scene, or to generate specific adjustment layers that fine-tune the contrast, shadows, and highlights of an isolated element in the image.

Adobe’s demonstration of the web platform illustrates how this localized editing works in practice. Using the AI Markup tool, a user highlighted specific fruits on a table to execute hyper-targeted commands. By drawing over the objects, the user was able to completely erase a pineapple from the composition, transmute a pomegranate into an apple, and shift an item’s color to blue. Because the artificial intelligence is confined to the marked boundaries, the rest of the image’s lighting, shadows, and surrounding elements remain untouched.

This is a big difference from the latest AI image editing models, which will alter the image, even if only slightly, no matter how many times you tell them not to touch a single pixel and just remove the damn banana from the fruit bowl. That happens because those models have to “re-imagine” the entire image without the element, and generative AI always makes small mistakes and hallucinates a bit, no matter how hard you try to avoid it. Photoshop’s assistant, however, only changes the marked area, on a new transparent layer, leaving the rest of the image untouched.
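The contrast between the two approaches is easy to state in code. This is not Adobe’s implementation, just a hypothetical sketch of mask-confined editing: new pixel values are computed only where the user drew the markup, and everything outside the mask is passed through bit-for-bit, so the surrounding lighting and shadows cannot drift.

```python
def masked_edit(image, mask, edit):
    """Apply `edit` only inside the user-drawn mask.

    image: 2D list of pixel values
    mask:  2D list of booleans, same shape, True where the user drew
    edit:  function mapping an old pixel value to a new one
    Pixels outside the mask are copied through unchanged.
    """
    return [
        [edit(px) if inside else px
         for px, inside in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]

image = [["sky",  "banana"],
         ["bowl", "bowl"]]
mask  = [[False, True],      # only the banana is marked up
         [False, False]]

# Replace whatever is inside the mask with bowl pixels
result = masked_edit(image, mask, lambda px: "bowl")
```

A whole-image generator, by contrast, would recompute every pixel of `image`, so even the unmasked regions come back subtly different.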

The mobile application’s voice- and text-activated capabilities are equally utilitarian. In one demo, a user commanded, “Remove the person in the foreground.” The app instantly identified the human shape, excised it, and—understanding the context of the remaining image—suggested a logical follow-up at the bottom of the screen: “remove background people.”

Other voice commands ranged from “Turn this into a night scene with polar lights” to asking the app to “Make the bridge darker.” When the latter command was issued, the software automatically generated a digital mask around the architectural structure and applied a targeted brightness and contrast adjustment layer. Users can also dictate text generation—saying “Add text that says Kyoto in white” or “Add text, Golden Gate”—and then manually tweak the font, size, position, and color.

Access to the beta depends on your subscription tier. Through April 9, users paying for Photoshop on the web and mobile, along with current Firefly customers, get unlimited AI generations. Free users on web and mobile, meanwhile, are capped at 20 free generations to get started.
