Google's Gemini 3.1 Flash image model. Reaches 95% of Pro quality at 2-3x the speed and a fraction of the credits. Supports up to 4K, reference images, and features Pro doesn't have.
NanoBanana 2 launched on February 26, 2026, and it changed the math on when to use Pro. Built on Google's Gemini 3.1 Flash architecture, it generates images in 4-6 seconds at 1K and handles 4K output in under 30 seconds. It has two features Pro doesn't: Image Search Grounding (the model can pull real-world references from Google Search during generation) and Thinking Mode (three levels of reasoning depth so you can trade speed for quality per request). Character consistency holds for up to 5 people across generations, and it accepts up to 14 reference images for editing.
Toggle this on and the model searches Google during generation. It pulls real-world references for landmarks, logos, products, and public figures. The result is more accurate than prompting from memory alone. Pro doesn't have this.
Three levels: Minimal (fastest), High (best quality), and Dynamic (the model decides). Minimal is the default. High adds a couple of seconds, but the model plans the composition before rendering. Useful for complex prompts where the first pass misses the mark.
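As a rough sketch, a request using these options might look like the following. The field names and model identifier here are illustrative assumptions, not the documented DreamSun request format:

```python
# Illustrative request payload only -- every field name below is an
# assumption for the sake of example, not a documented DreamSun API.
request = {
    "model": "nanobanana-2",         # hypothetical model identifier
    "prompt": "A lighthouse at dusk. Storm clouds overhead.",
    "resolution": "1K",              # one of: 0.5K, 1K, 2K, 4K
    "thinking_mode": "high",         # minimal (default) | high | dynamic
    "image_search_grounding": True,  # pull real-world references while generating
}
```

The point is the shape of the trade-off: flip `thinking_mode` to `high` when a complex prompt keeps missing, and leave it at the default when speed matters.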
NanoBanana 2 generates at 0.5K, 1K, 2K, or 4K. Even at 4K, you're looking at 15-30 seconds. Start at 1K for iteration, then re-generate your winners at 4K for production.
NanoBanana 2 is a pattern matcher built on Gemini's multimodal reasoning. It responds to visual descriptions better than camera commands. Google's own guidance: "Describe the scene, don't just list keywords."
"Looking up at chin, subject towers against open sky, exaggerated perspective depth" works better than "low angle, 24mm, f/1.4." The model understands visual descriptions. Camera specs are hints, not instructions.
Commas blend concepts. Periods compartmentalize them. "A woman in a red dress. Dark alleyway behind her. Warm key light from the left." keeps colors clean. Run it together with commas and the red bleeds into the environment.
Words at the beginning get the most attention. Words at the end are suggestions. Lead with subject and action, then environment, then lighting.
"Cinematic lighting" is vague. Instead: "Hard key light from the upper left catching the cheekbone. Deep shadows on the right. Warm practicals in the background." Each source described separately.
The model defaults to clean output. Push toward photorealism with: "visible pores, micro skin texture, peach fuzz, film grain, Kodak Portra 400." The imperfections sell it.
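Putting the tips above together, a complete prompt might read like this. The scene itself is just an example; what matters is the structure:

```python
# Example prompt assembled from the tips above: subject and action first,
# periods between concepts, each light source named separately, texture last.
prompt = (
    "A woman in a red dress, mid-stride. "   # subject + action up front
    "Dark alleyway behind her. "             # environment, compartmentalized
    "Hard key light from the upper left catching the cheekbone. "
    "Deep shadows on the right. Warm practicals in the background. "
    "Visible pores, micro skin texture, peach fuzz, film grain, Kodak Portra 400."
)
```

Each period fences off a concept, so the red stays on the dress instead of bleeding into the alley, and the texture cues at the end nudge the model toward photorealism.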
NanoBanana 2 is not a downgrade from Pro. It's a newer model on a different architecture (Gemini 3.1 Flash vs Gemini 3 Pro) with its own exclusive features. Pro still wins on absolute quality. NanoBanana 2 wins on speed, features, and value.
Use NanoBanana 2 for iteration, volume, and anything where speed matters. Switch to Pro for final hero images, print campaigns, or scenes with complex spatial relationships. Both are available on DreamSun, and you can switch between them in the same session.
Open the image generator and select NanoBanana 2 from the model list. It's the most popular model on DreamSun.
Describe what you want to see in full sentences. Subject, environment, lighting, texture. Separate concepts with periods. Add reference images if you want to guide the style or maintain a character.
At 4-6 seconds per image, you can try 10 variations in under a minute. Start at 1K. Once you find what works, bump to 2K or 4K for the final version.
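The draft-then-upscale loop can be sketched as follows. Here `generate` is a hypothetical stand-in for whatever call your client or tooling exposes, not a real DreamSun function:

```python
# Workflow sketch: iterate cheaply at 1K, then re-render the winner at 4K.
# generate() is a hypothetical placeholder, not an actual DreamSun API call.
def generate(prompt: str, resolution: str) -> str:
    # In practice this would submit the prompt to the image generator and
    # return the resulting image; here it just returns a labeled string.
    return f"[{resolution}] {prompt}"

prompt = "A lighthouse at dusk. Storm clouds overhead."
drafts = [generate(prompt, "1K") for _ in range(10)]  # fast 1K exploration
winner_prompt = prompt                                # keep the prompt that worked
final = generate(winner_prompt, "4K")                 # production render
```

The design point is simple economics: ten 1K drafts cost less time and fewer credits than one 4K miss, so you only pay for 4K once you know the prompt works.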
Pick a plan that fits your workflow and start generating with NanoBanana 2. Monthly plans give you more credits at a better per-image rate. You always see the credit cost before generating.
NanoBanana 2 is an AI image model built on Google's Gemini 3.1 Flash architecture. Released February 2026, it generates images from text prompts at up to 4K resolution. It supports editing with up to 14 reference images and character consistency for up to 5 people, and it has exclusive features like Image Search Grounding and Thinking Mode. On DreamSun, you can use it alongside Nano Banana Pro, Grok Imagine, Recraft 4, and other models.
NanoBanana 2 pricing depends on resolution. You can see the exact credit cost before generating. Pick a DreamSun plan (Starter, Creator, or Pro) to get monthly credits, and your credits work with any model including NanoBanana 2.
Four options: 0.5K (512px, fastest and cheapest), 1K (standard), 2K, and 4K. Start at 1K for drafts and iteration, then re-generate at 4K for production output.
Different architectures, different strengths. NanoBanana 2 runs on Gemini 3.1 Flash, is 2-3x faster, and has exclusive features (Image Search Grounding, Thinking Mode). Pro runs on Gemini 3 Pro, has better shadow/lighting detail, and supports negative prompts. NanoBanana 2 reaches about 95% of Pro quality. For most use cases, that gap is invisible without pixel-level comparison.
Describe the scene in full sentences instead of listing keywords. Separate subject, environment, and lighting with periods to prevent color bleed. Put the most important details first. For photorealism, add texture: "visible pores, film grain, Kodak Portra 400." Google's own guidance: describe what you see, not camera settings.
A feature exclusive to NanoBanana 2. When enabled, the model searches Google during generation to pull real-world reference images. This improves accuracy for landmarks, logos, products, and well-known subjects. Pro doesn't have this.
Yes. Images you generate on DreamSun are yours to use commercially. All outputs include SynthID watermarking from Google. For high-stakes commercial work, also check Google's terms for the Gemini platform.
Yes. Upload up to 14 reference images and NanoBanana 2 uses them to guide style, composition, or character appearance. On DreamSun, adding references automatically switches the model to edit mode.