Google announced new personalized image creation features in the Gemini app, including the ability to draw on a user's Google Photos library to generate images that reflect personal context, according to a post on the company's Keyword blog.

The post, titled "New ways to create personalized images in the Gemini app," describes the update as tying image generation to what Google calls "personal intelligence." The page's headline references Nano Banana alongside Google Photos, though the body of the announcement, as published, does not state a standalone model version name that DeepBrief could verify in the available source text.

What Google Says Is New

According to Google's post, users of the Gemini app can now create images that incorporate personal context pulled from their own Google Photos library. The company frames the feature as a way to generate images that "reflect your unique life," per the post's summary text.

The announcement is categorized under the Gemini app product area on Google's Keyword blog and is dated April 16, 2026, according to the page's published timestamp.

"Nano Banana now uses your personal context and Google Photos to create images that reflect your unique life."

That description appears in the share metadata for the post, attributing the personalization behavior to the Nano Banana image capability and to Google Photos integration.

Model Naming

The exact model designation Google is using for this release does not appear as a standalone statement in the portion of the blog post body available to DeepBrief. The name "Nano Banana" appears in the page's URL slug and in headline and share-card contexts on the Keyword blog. Readers looking for a formal model version identifier should consult Google's post directly and the company's developer documentation.

Until Google publishes a model-name statement in the post body or in accompanying developer materials, DeepBrief is not attributing a specific version number to this release.

Personal Context As A Product Direction

Google's post groups the update under a broader "personal intelligence" framing, according to the URL path and section tags on the Keyword blog. The company positions the Google Photos connection as the mechanism that allows Gemini to reference a user's own images when generating new ones, per the post.

In the text reviewed by DeepBrief, the announcement does not specify rollout regions, pricing tiers, supported Gemini app surfaces, or whether the feature requires a paid Gemini subscription. Nor does it disclose safety controls, opt-in requirements for Google Photos access, or retention policies for images used as personalization inputs.

Sourcing

Google's Keyword post is the sole primary source for this report. The company has not, in the material reviewed, published benchmark comparisons, developer API details, or third-party evaluations alongside the announcement. DeepBrief found no independent corroborating coverage at the time of writing.

Readers seeking the company's own framing should refer to Google's post at blog.google/innovation-and-ai/products/gemini-app/personal-intelligence-nano-banana/.