PHOTOSHOP’S “KILLER” DIDN’T KILL IT—ADOBE OPENED THE DOOR

I learned Photoshop back when there was one Undo. No History panel. No “let me test five options then decide.” You got a single shot, like waiting a week for film to come back and realizing your thumb photobombed every frame.

Since then, Photoshop’s power has grown like a coral reef: gorgeous, complex, and occasionally sharp enough to draw blood. Every year something genuinely magical lands—then gets tucked into flyouts inside flyouts. I still teach designers who can recite “Layer Style → Blending Options → Blend If” like a secret handshake. We joke that Photoshop’s UI is a museum where the new wings open into older basements.

Cut to 2025. Google drops Gemini 2.5 Flash Image, code-named Nano Banana. Adobe, plot twist, invites it into Firefly and Adobe Express the same day. Did Photoshop die? Nah. Adobe basically said: pick the best model for the job, inside our garden.

TL;DR (answer first):

Google’s Gemini 2.5 Flash Image model is live today in Adobe Firefly and Adobe Express, with outputs invisibly watermarked via SynthID. Adobe has formally opened Firefly to third-party models (OpenAI, Google, Flux, etc.) and says credits apply across models. In Photoshop desktop, you can currently pick Firefly model versions for Generative Fill—but partner models aren’t exposed in the PS UI (yet) as of Sept 18, 2025. If Adobe ships a clean model picker in Photoshop, the workflow finally feels modern. If not, designers will keep bouncing between tabs. 

The real issue isn’t the model. It’s the menu.

Here’s my lived loop: I open PS to do a five-minute cleanup… thirty minutes later I’m spelunking menus. The right-click menu stretches off screen. Tools play musical chairs in a single flyout. And the greatest trick in the app—Blend If—still hides three clicks deep like a boss battle.

If Adobe becomes a model switchboard, then model choice can’t be a scavenger hunt. I want two things front-and-center in Photoshop’s contextual bar:

  1. Choose model (Firefly, Gemini, etc.)

  2. Why this model (speed, likeness, photorealism, style control), with one-line plain-English guidance

Give me that, and suddenly the model feels like a brush preset, not a labyrinth.

Where this actually helps you (not just on a slide)

1) Fast comps / pitch frames

You’ve got 90 minutes to sell a visual direction. Gemini’s strengths in likeness consistency and precise local edits help you stabilize faces, hands, or product angles quickly. Do the heavy lift in Firefly (Gemini selected), then drop into PS for layer-level cleanup. No more round-tripping to six tabs.

Micro-workflow:

  • Rough prompt + reference → Firefly (select Gemini)

  • Export with Content Credentials on

  • In PS: quick masked dodge/burn, Selective Color for brand alignment, and a smart object conversion for future swaps

2) Real production files

When a file is going to prepress (hi, CMYK), discipline beats prompts. The model gives you a head start; finishing inside PS gives you repeatability. Same as always: naming layers, masking like a grown-up, and keeping an eye on output intent.

Micro-workflow:

  • Generate/extend on web (Gemini or Firefly model)

  • Back to PS: Stamp Visible into a smart object; keep original layers below

  • Soft proof, tighten gamut warnings, and lock color with adjustment layers you can explain to print
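
That "tighten gamut warnings" step doesn't have to live only inside PS, either. Here's a minimal sketch using Pillow's LittleCMS bindings: round-trip the comp through your press profile and mask whatever shifts. The file names (comp.png, USWebCoatedSWOP.icc) are placeholders; swap in whatever CMYK profile your printer actually specifies.

```python
# pip install Pillow
from PIL import Image, ImageChops, ImageCms

# Assumptions: comp.png is your flattened comp; USWebCoatedSWOP.icc is a
# stand-in for the press profile your output intent actually calls for.
img = Image.open("comp.png").convert("RGB")
srgb = ImageCms.createProfile("sRGB")
press = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")

# Round-trip sRGB -> press CMYK -> sRGB; colors the press can't reproduce
# come back visibly shifted.
to_cmyk = ImageCms.buildTransform(
    srgb, press, "RGB", "CMYK",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)
to_rgb = ImageCms.buildTransform(
    press, srgb, "CMYK", "RGB",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)
proof = ImageCms.applyTransform(ImageCms.applyTransform(img, to_cmyk), to_rgb)

# White pixels = colors that moved past a small threshold: your gamut warning.
diff = ImageChops.difference(img, proof).convert("L")
diff.point(lambda v: 255 if v > 12 else 0).save("gamut_warning_mask.png")
```

It's a blunt instrument next to PS's soft proofing, but it can batch-check a folder of comps without opening a single one.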

3) Brand consistency across a series

Need ten variants that actually match? The model picker matters. Use the engine that’s best at prompt fidelity for generation; use Photoshop to enforce color harmony, grain, and type so a carousel feels like one family, not a collage.
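
One way to do part of that enforcement before the files ever hit PS (my sketch, not an Adobe feature): histogram-match every variant against an approved "hero" frame with scikit-image. File names here are placeholders, and this assumes the ten variants are already close in content.

```python
# pip install scikit-image imageio
import imageio.v3 as iio
from skimage.exposure import match_histograms

# Assumption: hero.png is the approved "look" frame; variant_1.png through
# variant_9.png are the other generations you want pulled toward it.
reference = iio.imread("hero.png")

for i in range(1, 10):
    variant = iio.imread(f"variant_{i}.png")
    # Match each channel's histogram to the hero frame so the whole series
    # shares one tonal/color fingerprint before fine-tuning in PS.
    matched = match_histograms(variant, reference, channel_axis=-1)
    iio.imwrite(f"variant_{i}_matched.png", matched.astype("uint8"))
```

Grain and type still happen in PS; this just keeps the carousel from drifting in color before you start.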

Pricing + provenance (the boring stuff that quietly saves your butt)

  • Cost sanity: Public guidance puts Gemini 2.5 Flash Image around $0.039 per image via Google’s API/AI Studio (a minimal API-and-math sketch follows this list). On Adobe’s side, the story is “credits apply across models,” but the per-partner specifics are still foggy. Clear, in-app pricing would stop a lot of ticket threads.

  • Trust layer: Google’s SynthID invisibly watermarks outputs; Adobe’s Content Credentials (C2PA) adds a “nutrition label” that persists through edits. Together, they’re the most client-friendly provenance stack we’ve got right now—especially for enterprise and agencies with legal review.
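
On the cost-sanity point above: if you want to verify that per-image number yourself, here's a minimal sketch against Google's google-genai Python SDK. The model ID, the ~1,290-tokens-per-image billing, and the ~$30-per-million-output-tokens rate reflect Google's public docs at the time of writing; treat them as assumptions and check current pricing before budgeting a campaign.

```python
# pip install google-genai
from google import genai

# Assumes GOOGLE_API_KEY (or GEMINI_API_KEY) is set in your environment,
# and that the public model ID is still "gemini-2.5-flash-image-preview".
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents="Photoreal product shot of a ceramic mug, softbox key light, 45-degree angle",
)

# Generated images come back as inline_data parts; save the first one.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("comp_v1.png", "wb") as f:
            f.write(part.inline_data.data)
        break

# Rough cost math from Google's published token pricing: one image bills
# as ~1,290 output tokens at ~$30 per 1M tokens, i.e. ~$0.039 per image.
images, tokens_per_image, usd_per_m = 10, 1290, 30.0
print(f"~${images * tokens_per_image * usd_per_m / 1_000_000:.2f} for {images} variants")
```

Note this only covers the direct API route; what a generation costs in Adobe credits is exactly the foggy part called out above.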

House rule at House of gAi: keep Content Credentials ON by default. If a brand asks, you can show what touched the image—model, app, edits—without breaking a sweat.

What Adobe has to nail next (or we keep living in tabs)

  1. Photoshop model picker with partner models visible—right in the contextual bar.

  2. Explainability in human English: “This model is faster for X; this one is better for Y.”

  3. Transparent credits/pricing inside Creative Cloud (no “is unlimited… actually unlimited?” confusion).

  4. Content Credentials on by default—plus a clear per-layer note of “which model touched what.”

Do that, and PS stops feeling like a relic museum and turns into a modern studio rack. You change the channel strip, not the whole DAW.


FAQS

  1. Can I use Google’s Gemini inside Photoshop?

    Not in the desktop model picker yet; it’s live in Firefly/Express. Adobe has signaled broader partner access. 

  2. How much does Gemini 2.5 Flash Image cost?

    About $0.039 per image via the Gemini API/AI Studio: each image bills as roughly 1,290 output tokens at about $30 per million. Enterprise pricing via Vertex AI mirrors the same token math.

  3. Do Adobe credits work with partner models?

    Adobe says credits apply across third-party models in Firefly; revenue-share terms aren’t public. 

  4. What’s the difference between SynthID and Content Credentials?

    SynthID is an invisible pixel watermark; Content Credentials is verifiable metadata (C2PA) that persists through edits. Use both. 

  5. Is Figma using Gemini?

    Yes—Figma’s AI image tools list Google Gemini (and OpenAI) for making/editing visuals.  
