March 18, 2026


The Question Has Changed
For decades, enterprise security has been built around a single question: _who is the user?_ Authentication. Role-based access. Network perimeters. These controls were designed for a world where a human logs in, clicks around, and logs out. They remain necessary. But they are no longer sufficient.

AI systems have changed the model. They don't log in and wait for instructions. They act - querying data, triggering workflows, moving between systems, operating on behalf of users who may not be watching. The relevant question is no longer who the user is. It is: who authorised this action, and can you prove it?

The McKinsey incident illustrates exactly what happens when that question goes unanswered. The platform had authentication. It had access controls. It still exposed production data at scale, because those controls were never designed to govern autonomous action at execution time.

The Governance Gap
Most enterprise security models focus on three controls: authentication, role-based access, and network protection. These work well when humans interact directly with software. They break down when AI systems act as intermediaries. When an AI agent executes an action inside your infrastructure, your systems need to be able to answer these questions before the action executes - not after (a sketch of such a check follows this list):

- Is this actor authorised to perform this specific action?
- What is the declared intent, and does it match what is being attempted?
- Does this action fall within permitted policy boundaries?
- Has the relevant user consented to their data being used in this way?
- Can we produce a verifiable record that proves all of the above?
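To make the shape of that check concrete, here is a minimal sketch of a pre-execution gate that evaluates those five questions before an action is allowed to run. It is illustrative only: the types, field names, and `evaluate` logic are assumptions for this post, not a real Nuggets API, and a production trust layer would verify cryptographic credentials rather than in-memory flags.

```python
from dataclasses import dataclass

# Illustrative types only - field names and checks are assumptions,
# not any specific product's schema.
@dataclass
class ActionRequest:
    actor_id: str               # verifiable identity of the acting agent
    action: str                 # e.g. "db.query"
    resource: str               # e.g. "crm.contacts"
    declared_intent: str        # purpose bound to this action
    consent_token: str | None   # proof the data subject consented

@dataclass
class DelegatedAuthority:
    actor_id: str
    allowed_actions: set[str]   # explicit, scoped grants
    intents: set[str]           # purposes this grant covers
    expires_at: float           # time-bound, epoch seconds

def evaluate(req: ActionRequest, grant: DelegatedAuthority,
             policy_allows, consent_valid, now: float) -> tuple[bool, dict]:
    """Answer the five questions; return (allow, evidence record)."""
    checks = {
        "authorised": (req.actor_id == grant.actor_id
                       and req.action in grant.allowed_actions
                       and now < grant.expires_at),
        "intent_matches": req.declared_intent in grant.intents,
        "policy_permits": policy_allows(req.action, req.resource),
        "consent_present": consent_valid(req.consent_token, req.resource),
    }
    record = {"request": vars(req), "checks": checks, "at": now}
    return all(checks.values()), record   # the record is the auditable proof
```

The point is the shape, not the implementation: every check runs at execution time, and the decision and its evidence are produced together rather than reconstructed later.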
What Runtime Governance Looks Like
This model is best understood as an AI Trust Layer. It operates at execution time - not at login - and enforces six primitives of trust against every action an AI system attempts to take:

- Identity. Every actor - human, agent, or machine - must have a verifiable identity. Trust is not granted by virtue of being inside the network. And that identity must be bound to the accountability of the individual or business behind it, so that every action can ultimately be traced back to a responsible party.
- Authority. Identity alone is not enough. Systems must validate what authority has actually been delegated. A user permitting an AI assistant to summarise documents has not automatically authorised it to query production databases or enumerate infrastructure. Authority must be explicit, scoped, and time-bound.
- Intent. What was the action intended to accomplish? Binding declared purpose to execution creates a checkpoint. When the action diverges from the intent - when a summarisation request starts probing infrastructure - the system should intervene.
- Policy. Access controls evaluated at login are too early and too coarse. Policy must be enforced at the moment an action executes: which systems can be accessed, which data can be queried, which workflows can be triggered, how much autonomy is permitted.
- Consent. Actions taken on behalf of individuals require explicit, enforceable consent. This is not a UX nicety. It is a governance control. Consent must be captured, validated, and bound to the action at runtime.
- Accountability. Every action must produce a cryptographic, tamper-resistant record: who acted, under what authority, with what intent, against which policy, with what consent, and what the outcome was (one way to build such a record is sketched below). Not for audit theatrics, but because without proof, governance is just policy on paper.
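One common way to make accountability records tamper-evident is a hash chain: each record commits to the hash of the previous one, so any retroactive edit breaks every subsequent link. A minimal sketch, assuming a simple JSON record format - the field names here are illustrative, not a prescribed schema:

```python
import hashlib
import json

def append_record(chain: list[dict], entry: dict) -> dict:
    """Append a tamper-evident record to an in-memory chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "prev": prev_hash,
        # entry carries: who acted, under what authority, with what
        # intent, against which policy, with what consent, and outcome
        "entry": entry,
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    record = {**body, "hash": digest}
    chain.append(record)
    return record

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any mutation breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"prev": rec["prev"], "entry": rec["entry"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

In production such records would also be signed (for example with per-actor keys) and anchored externally, but the shape is the same: the proof is produced at the moment of action, not assembled after the fact.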
Security Prevents Attacks. Governance Limits Damage.
No security architecture can guarantee zero vulnerabilities. McKinsey responded quickly, patching all affected endpoints within 24 hours. Lilli had been in production for two years. Their own internal scanners missed the issue. The CodeWall agent found it because autonomous systems probe continuously, chain findings relentlessly, and do not follow checklists. The purpose of governance is not to prevent every attack. It is to ensure that when weaknesses exist - and they always will - the blast radius is contained. An agent that finds an unauthenticated endpoint should not be able to read 46 million messages. An action that was never authorised should not execute, even if the technical request succeeds. Strong runtime governance changes the calculus: exploitation becomes harder to monetise when every action requires verified authority and produces accountable evidence.
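Blast radius can be bounded by making grants quantitative as well as qualitative. A minimal sketch, assuming hypothetical scope fields (a per-action row quota and an expiry); the names and limits are illustrative, not drawn from the incident or any product:

```python
import time

# Hypothetical scoped grant - names and limits are illustrative.
GRANT = {
    "actor": "summariser-agent",
    "allowed": {("read", "messages")},
    "max_rows_per_action": 200,       # a summary never needs millions
    "expires_at": time.time() + 900,  # 15-minute, time-bound delegation
}

def permit(actor: str, verb: str, resource: str, rows: int) -> bool:
    """Deny anything outside the grant's scope, quota, or lifetime."""
    return (actor == GRANT["actor"]
            and (verb, resource) in GRANT["allowed"]
            and rows <= GRANT["max_rows_per_action"]
            and time.time() < GRANT["expires_at"])

# A request sized for the task passes; an exploited endpoint asking
# for the whole table is refused, even though the low-level request
# would technically succeed.
assert permit("summariser-agent", "read", "messages", 150) is True
assert permit("summariser-agent", "read", "messages", 46_000_000) is False
```

A weakness may still exist, but what any single compromised credential can do is capped by the grant, not by the size of the table behind it.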
The Shift Organisations Must Make
The traditional model assumed: users log in → software executes. The emerging model is: actors delegate authority → AI systems take actions. That shift demands that governance move from identity at login to trust evaluated at runtime. Organisations deploying AI in production need infrastructure that enforces this - not as a future aspiration, but today, as agents move from pilots into live systems handling real data, real decisions, and real consequences. Those that build this foundation will be able to scale AI safely. Those that do not are building on infrastructure that was never designed for an autonomous world.

Nuggets is the trust layer that governs AI actions at execution. Every action is identifiable, authorised and verifiable by default across enterprise systems. Learn more about the Nuggets AI Agent Identity solution or book an enterprise discovery call.