Adobe recently announced Firefly Image Model 5, giving creators the choice and flexibility to use the best model for any creative task alongside our editing tools. Google’s latest image model, Gemini 3 Pro Image (Nano Banana Pro), is now integrated into Adobe Firefly and Photoshop. It joins a lineup of partner models across image, video, and audio, including those from Black Forest Labs, ElevenLabs, Google, Ideogram, Luma AI, Moonvalley, OpenAI, Pika, Runway, and Topaz Labs.
Google’s Nano Banana Pro model improves editing quality, letting you use text prompts to refine parts of an image, adjust aspect ratios, increase resolution, and change camera angles or lighting. It also handles text in images and can localize visuals with translated text. Using Google Search’s knowledge base, the model can generate factually accurate visuals, making it a complementary model in creative apps.
In Adobe Firefly, a creative AI studio for video, image, and audio, creators can use Nano Banana Pro in the Text to Image feature. The model is also available in Firefly Boards, an AI-powered collaborative moodboarding tool where creators can brainstorm, organize ideas, and visualize early-stage concepts together.
In Photoshop, Nano Banana Pro is now available in Generative Fill, giving creative professionals fast, prompt-based ways to make specific edits, generate high-resolution content, and explore creative variations. It follows the recent addition of Google’s Nano Banana and Black Forest Labs’ FLUX.1 Kontext [pro], the first partner models integrated into Generative Fill earlier this year. With Firefly and partner models in Photoshop, creators can move from generative experimentation to refining results with layers, masks, and selections for precise editing, all without interrupting their workflow.