RSAC Innovation Sandbox 2026: Two Sides Of AI On Display
AI already runs inside most enterprises. Forrester’s Q4 2025 AI Pulse Survey shows that 50% of organizations were piloting agentic AI, while 24% had it in production. Security teams are catching up after the fact. The RSAC Innovation Sandbox (ISB) finalists (ZeroPath, Token Security, Realm Labs, Humanix, Glide Identity, Geordie AI, Fig Security, Crash Override, Clearly AI, Charm Security) attack that gap from two sides: 1) how to control AI systems and mitigate AI risks; and 2) how to use AI to keep security teams from collapsing under their own workload.
The winner: Geordie AI, with an AI governance platform that discovers AI agents running across code, cloud, and endpoints, maps each agent’s “anatomy” (its tools, skills, and connections), then provides runtime observability of agent actions.
Our pick: Realm Labs. Its runtime monitoring and visibility into how AI is thinking piqued our interest as a possible foundation for analyzing and classifying intent, both harmful and benign, to better secure AI systems.
ISB Finalists Address Issues That Enterprises Face Today
Through the various pitches, we noted how the finalists addressed several types of issues that enterprises face today:
- AI agents slip past inventories and lack monitoring and constraints. One pitch highlighted an example of a Fortune 500 customer that uncovered more than 600 AI agents it didn’t know existed. No one acted surprised. That’s the baseline now. Most security teams can’t answer basic questions like how many agents are running, who owns them, and what they touch. They then struggle to implement meaningful approaches for real-time visibility and controls. The winner, Geordie AI, and finalists Token Security and Realm Labs tackled these issues directly.
- AI-driven attacks on people need defenses to keep pace. One‑time passcodes buckle under AI‑driven phishing and voice fraud. From Glide Identity, SIM‑based cryptographic identity emerged as an alternative, anchored in hardware people already carry. Both humans and AI are subject to social engineering. Humanix monitors conversations and intervenes while attacks unfold, while Charm Security provides resolution agents to resolve scams and disruption agents (honeybots) that engage with attackers.
- Managing code security and software vulnerabilities at scale is a monumental effort. Application security teams struggle to review growing volumes of AI-generated code, detect unapproved components, identify vulnerable dependencies, and more. ZeroPath’s code security suite finds vulnerabilities, verifies exploitability, and offers a path to remediation by combining deterministic scans with AI-augmented triage and prioritization. Crash Override provides an intelligence layer for the software supply chain that captures how software is created, with an AirTag equivalent to track software and see what it’s doing in production. Clearly AI combines lightweight code review, threat modeling, and third-party risk assessment, using AI agents for ongoing evaluation of vendor privacy, risk, and AI governance; it augments and accelerates your existing processes for security reviews.
- The fragility of the modern SOC hinders detections. From cobbling together fragmented security operations infrastructure across data pipelines, SIEM, and SOAR, to dealing with blind spots, a fragile SOC undermines confidence in observability. Fig Security addresses these concerns for mature enterprises and MSSPs through its SOC resilience platform, which maps data flows and detection rules, detects failures and blind spots, simulates changes, and suggests fixes for your SOC plumbing.
Security Leaders: Seize The Opportunity
Across very different products in different categories, the direction was consistent. AI expands the attack surface and compresses security work, while governance, identity, and reliability determine what can operate at scale. Blind spots will accumulate as the enterprise moves into a vibe-coded, agentic world, and security can’t spin its wheels while that happens. Most of the startups in Innovation Sandbox are unlikely to mature as independent platforms. They address narrow, high-friction problems that align closely with existing security and cloud platforms, making acquisition or integration their likely outcome. To keep up with business innovation, security leaders need to do the following:
- Establish authoritative visibility into AI agents running in the environment. Assume AI agents exist across code, cloud services, SaaS platforms, and endpoints without security ownership. Direct teams to inventory AI agents by discovering what is executing, who owns it, and what systems and data it can touch. Treat unknown agents as unmanaged risk, not innovation debt. The outcome is a defensible baseline that lets you prioritize controls based on exposure rather than assumptions.
- Enforce runtime accountability for AI behavior and identity. Move AI security controls from policy and review artifacts to runtime monitoring and identity binding. Require that AI agents and AI-mediated interactions are observable during execution and tied to clear ownership and strong identity controls. This directly addresses agent drift, misuse, and AI-driven social engineering that bypasses static safeguards. The outcome is reduced fraud exposure, faster detection of harmful behavior, and auditable accountability when incidents occur.
- Embrace agentic development security (ADS). ADS focuses on securing AI-powered software development by preventing, detecting, prioritizing, and remediating flaws, while providing continuous intelligence on code, workflows, and applications. It’s needed to keep pace with AI coding agents and agentic development. No single vendor fully meets this vision today; Forrester’s upcoming ADS landscape and wave will highlight which vendors are leading and shaping this critical space.
- Stabilize security operations reliability before scaling AI-driven detections. Treat SOC observability and data pipeline integrity as a prerequisite for AI at scale. Validate that detection rules, data flows, and response automation function as intended before adding more AI-generated signals. Fragile SOC plumbing amplifies blind spots and noise when AI increases event volume and ambiguity. The outcome is improved confidence in detections, faster containment, and fewer high-impact failures caused by missed or broken controls.
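The runtime-accountability recommendation above can be illustrated with a minimal sketch of an identity-bound, tamper-evident action log. This is an assumption-laden toy, not any finalist’s design: each agent action is bound to an identity string and hash-chained to the previous entry, so the trail is auditable when incidents occur.

```python
import hashlib
import json
import time

def record_action(log: list[dict], agent_id: str, identity: str, action: str) -> dict:
    """Append one agent action, hash-chained to the previous entry so the
    trail is tamper-evident and every action is bound to an identity."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"agent_id": agent_id, "identity": identity,
             "action": action, "ts": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

log: list[dict] = []
record_action(log, "billing-bot", "svc:finance-eng", "read:invoices")
record_action(log, "billing-bot", "svc:finance-eng", "send:email")

# Verify the chain: each entry must reference the previous entry's hash.
chain_intact = all(log[i]["prev"] == log[i - 1]["hash"] for i in range(1, len(log)))
print("audit chain intact:", chain_intact)
```

A production system would sign entries with the agent’s workload identity rather than a bare string, but the core property is the same: every AI-mediated action is observable and attributable after the fact.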
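The SOC-reliability point (validate that detection rules and data flows function before scaling AI signals) reduces to a simple check in its most basic form. The rule and source names below are hypothetical; the sketch just shows the shape of the test: a detection rule whose feeding pipeline is dead is a blind spot, however well-written the rule is.

```python
def find_blind_spots(rules: list[dict], live_sources: set[str]) -> list[str]:
    """A detection rule with no live data source is a blind spot."""
    return [r["name"] for r in rules if r["source"] not in live_sources]

rules = [
    {"name": "impossible-travel", "source": "idp_logs"},
    {"name": "ransomware-canary", "source": "edr_telemetry"},
]
live_sources = {"idp_logs"}  # assume the edr_telemetry pipeline is broken

print(find_blind_spots(rules, live_sources))  # -> ['ransomware-canary']
```

Real SOC-resilience tooling layers schema checks, volume baselines, and change simulation on top of this, but the prerequisite is the same: know which detections are actually receiving data before asking AI to generate more of it.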
Forrester AI Blog
https://www.forrester.com/blogs/rsac-innovation-sandbox-2026-two-sides-of-ai-on-display/