Granola, the AI-powered meeting note-taking app, makes user notes viewable to anyone with a link and uses meeting content for internal AI training, and both behaviors are on by default, according to The Verge.

Granola markets itself as an "AI notepad for people in back-to-back meetings." The app integrates with a user's calendar, captures audio from meetings, and generates structured, bulleted summaries it calls "notes." Users can edit those notes, invite collaborators, and query them using a built-in AI assistant. The product has attracted a following among professionals who move between calls continuously — precisely the people whose meetings are most likely to contain confidential information.

The Privacy Gap Between Marketing and Default Settings

The company describes notes as "private by default" on its website — a claim that sits awkwardly alongside the actual default behavior. When a user shares a note link, that link is accessible to anyone who receives or finds it, with no authentication required. The distinction matters: a note is not proactively published, but the access controls are permissive enough that a forwarded email or a Slack message could expose sensitive content to unintended recipients.

The second issue is AI training. Granola uses meeting notes for internal model training unless users actively navigate to settings and opt out. This is a meaningful default for an app designed to sit inside professional meetings, which routinely include legal discussions, personnel decisions, financial forecasts, and client communications.

What Gets Captured — and What That Means for Workplaces

Granola captures audio directly from meetings, not just typed notes. The AI then processes that audio into a written summary. This means the training data in question is not abstract metadata — it is, in effect, a structured transcript of real workplace conversations, attributed to real people in real organizations.

For individual users, the risk is manageable if they are aware of it: audit shared links, disable AI training in settings, and review what has already been shared. For enterprise users or anyone whose employer has data handling obligations — healthcare, legal, financial services — the defaults may conflict with internal compliance requirements or sector regulations before a single setting is changed.

Privacy researchers have consistently flagged the gap between stated policy and default configuration as the most consequential variable in consumer app privacy. A 2023 study by researchers at Carnegie Mellon University, examining over 150 mobile and productivity applications, found that fewer than 12% of users change default privacy settings — meaning the design of defaults is, in practice, the policy.

How Granola Compares to Similar Tools

Granola is not alone in the AI meeting-notes category. Otter.ai, Fireflies.ai, and Microsoft's Copilot-integrated meeting summaries all record and process meeting audio. The specifics of what each uses for training, and what requires opt-out versus opt-in consent, vary — but the category as a whole has received limited regulatory scrutiny relative to the sensitivity of the data it routinely handles.

What sets Granola's case apart is the combination of two defaults: shareable-by-link access and AI training inclusion. Either alone might be an acceptable product design choice with appropriate disclosure. Together, they mean a user's meeting content can be both accessed by third parties and fed into model training without any affirmative action on the user's part.

Granola has not, according to The Verge's reporting, changed its default settings in response to the disclosure, and the company did not immediately respond to The Verge's request for comment.

What Users Should Do Now

For existing Granola users, the immediate practical steps are: check the app's privacy or sharing settings to opt out of AI training, and review any previously shared note links to assess whether they should be revoked. Users in regulated industries should consider whether the app's defaults are compatible with their organization's data policies before the next meeting is recorded.

The episode also illustrates a broader pattern worth watching: AI productivity tools move fast, default toward data utility, and rely on users to discover and reverse settings that do not serve their interests. That asymmetry — between what an app does by default and what users assume it does — is where most real-world privacy exposure occurs.

What This Means

If you use Granola, your meeting notes may already be accessible via shared links and included in AI training data — check your settings now and opt out actively, because the defaults will not protect you.