January 23, 2026

Google Gemini incident exposed a bigger AI identity security problem

When AI works as intended and security still fails anyway

The recently disclosed Google Gemini vulnerability did not involve stolen credentials, malware, or a compromised system. Instead, private calendar data was exfiltrated through malicious calendar invites using a technique known as indirect prompt injection. Cybersecurity researchers discovered that hidden instructions could be embedded inside a calendar event’s description. When a user later asked Gemini a simple question about their schedule, the AI interpreted those hidden instructions, summarized private calendar data, and wrote that summary into a new calendar event.

In some configurations, that new event became visible to attackers with no user clicks, no new permissions and no security alerts. From a traditional security perspective, everything looked normal. From an AI identity-security perspective, everything was broken.

The real issue was identity abuse, not AI misbehavior

This was not a case of Gemini “going rogue.” Gemini behaved exactly as designed. It had legitimate permission to:

  • Read calendar data
  • Create calendar events

What failed was not authorization. It was contextual control and intent governance. Traditional security controls asked: “Is Gemini authorized to do this?” They never asked: “Should Gemini do this right now, with this data, in this context?” That missing question represents one of the biggest gaps in modern AI security.

What the attack actually was

This was not a conventional exploit involving malware, stolen keys, or injected code. It was a semantic attack targeting how AI systems interpret language as instructions. In this case, an attacker embedded carefully crafted language inside the description field of an otherwise normal calendar invite. Because Gemini is designed to parse event metadata to answer harmless questions like “What meetings do I have today?”, it read the malicious text as part of its reasoning process.

The attack unfolded like this:

  1. An attacker sends a legitimate-looking calendar invite containing hidden instructions.
  2. A user later asks Gemini about their schedule.
  3. Gemini processes all relevant calendar data, including the hidden prompt.
  4. The AI summarizes private meeting information and writes it into a new calendar event.
  5. In some enterprise configurations, that new event exposes sensitive data back to the attacker.

There was no phishing link. No malicious payload. No credential compromise. Just language manipulating how an AI reasons and acts.

Why traditional IAM misses these attacks

Identity and Access Management (IAM) systems are built to answer one question well: “Who can access what?” They were not designed for systems that:

  • Act autonomously
  • Chain permissions across services
  • Respond to untrusted language context
  • Execute multi-step workflows without explicit user actions

In the Gemini incident, IAM saw legitimate read and write operations. Security tools saw normal activity. Yet sensitive data still crossed trust boundaries. This reflects a broader class of AI-native attack patterns commonly referred to as prompt injection or promptware, where AI systems treat user-provided content as executable intent.

Language is now an execution surface

The malicious payload in this vulnerability was not code. It was natural language, embedded in calendar metadata and triggered later by a completely normal user query. This shift fundamentally changes the threat model.

Vulnerabilities now exist in:

  • Context
  • Memory
  • AI-driven orchestration

Not just in software bugs or misconfigurations. Any AI system that can read from one system and write to another, even with correct permissions, can unintentionally create new data paths through language alone.

Where Unosecur fits: Identity security for a language-driven world

Unosecur is an identity security platform designed to help organizations manage and secure both human and non-human identities across complex environments. It does not attempt to fix prompt injection inside language models. That problem lives at the AI-model level. Instead, Unosecur focuses on visibility, governance, monitoring, and response, where identity security teams can take concrete action.

1. Treating AI services as first-class identities

Unosecur treats AI services as identities that must be governed, not as opaque features. This enables:

  • Unified visibility across all identities, including AI and automation
  • Clear tracking of permissions and access paths
  • Inclusion of AI services in identity risk assessments

2. Visibility into identity behavior

In incidents like the Gemini vulnerability, behavior is the real signal. Unosecur continuously monitors identity activity and builds behavioral baselines to detect anomalies such as:

  • Unexpected cross-service operations
  • Unusual write actions
  • Identity behavior that deviates from historical norms

3. Context-aware oversight of identity actions

Instead of stopping at “can this identity act,” Unosecur helps teams evaluate whether an action makes sense in context. This allows security teams to prioritize investigations where identity behavior and context indicate elevated risk, even when permissions are technically correct.

4. Reducing blast radius and supporting remediation

No platform can eliminate malicious language influence entirely. Unosecur helps reduce impact by:

  • Limiting how far sensitive data can propagate
  • Constraining the breadth of actions available to an identity
  • Flagging risky behavior early for investigation and response

5. Audit and compliance readiness

In regulated environments, visibility is non-negotiable. Unosecur provides:

  • Detailed audit trails of identity actions
  • Real-time compliance reporting
  • Access reviews to support forensic analysis after suspicious AI behavior

Why this matters beyond Gemini

AI systems are increasingly used to:

  • Analyze documents
  • Summarize inboxes
  • Automate workflows
  • Access APIs

As language increasingly drives outcomes, identity governance must evolve alongside it. The Gemini vulnerability is not an isolated case. Research has repeatedly shown how embedded instructions in emails, documents, and calendar invites can manipulate AI behavior. It's safe to say that AI did not break security, it exposed weak identity and intent-governance controls.

Final thought: AI security is identity security

As AI systems continue to:

  • Act autonomously
  • Chain permissions across services
  • Operate continuously
  • Respond to untrusted language

Security can no longer stop at “Does this identity have access?” It must also ask “Is this identity behaving safely right now, in this context?” That is the problem Unosecur helps organizations address by strengthening identity visibility, governance, monitoring, and response across modern systems.

Security checklist: AI + identity

Identity visibility & inventory

  • Catalog all identities, including AI services
  • Document permissions and access paths

Behavior monitoring

  • Enable continuous monitoring
  • Establish behavioral baselines

Context-aware detection

  • Correlate actions with context
  • Prioritize cross-service and anomalous activity

Access controls

  • Apply least-privilege principles
  • Conduct regular access reviews

Audit & compliance

  • Maintain detailed identity logs
  • Review and investigate anomalies

Incident response

  • Define playbooks for identity misuse
  • Enable rapid investigation and containment

The Gemini incident is a useful reminder of how security assumptions are shifting. AI systems did not bypass access controls. They did not exploit software vulnerabilities. They operated entirely within their assigned permissions. What changed was how language influenced action, and how easily legitimate identity privileges could be combined in unintended ways. As AI becomes more autonomous, more connected, and more embedded into daily workflows, identity security can no longer be limited to static access decisions. Security teams need visibility into how identities actually behave, how permissions interact across systems, and where context turns legitimate access into risk. This is where identity security platforms like Unosecur play a critical role not by altering AI models or interpreting language, but by strengthening governance, monitoring, and accountability for every identity operating inside complex environments, including AI services.

For organizations adopting AI at scale, the question is no longer whether identities have access. It is whether those identities are behaving safely, consistently, and as intended over time. Understanding that risk starts with visibility. A focused identity risk assessment is often the first step toward uncovering where AI-driven workflows may be creating silent exposure long before an incident makes it visible.

Explore our other blogs

Don’t let hidden identities cost you millions

Discover and lock down human & NHI risks at scale—powered by AI, zero breaches.