The Breakthrough: AWS Rex Open-Sourced
On May 4, 2026, AWS quietly open-sourced a significant piece of infrastructure that changes how security teams architect agentic AI deployments. The project, Trusted Remote Execution (Rex), checks every system operation an AI-generated script attempts against a Cedar policy defined by the host owner rather than the agent. The runtime achievement is real, but it leaves a critical data security problem untouched.
What AWS Solved
The mechanics are clean. Scripts run in Rhai, a lightweight embedded language with no built-in OS access. Every read, write, or open is intercepted by a Rex SDK call, which evaluates a Cedar policy before permitting the underlying system call. If the policy denies the action, the script receives an ACCESS_DENIED_EXCEPTION and the operation never reaches the kernel. The script and policy are versioned separately, and the host owner defines what is allowed.
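The interception pattern described above can be sketched in a few lines. This is a hypothetical Python illustration of the gating idea, not the Rex SDK (which embeds Rhai and evaluates real Cedar policies); the names Policy, AccessDeniedException, and gated_read are invented for clarity.

```python
# Illustrative sketch of Rex-style gating: the policy decision happens
# before any kernel call, so a denied operation never reaches the OS.
# All names here are hypothetical, not the actual Rex SDK surface.

class AccessDeniedException(Exception):
    """Raised when the policy denies an operation."""

class Policy:
    def __init__(self, allowed):
        # allowed: set of (action, resource) pairs the host owner permits
        self.allowed = allowed

    def permits(self, action, resource):
        return (action, resource) in self.allowed

def gated_read(policy, path):
    # Check the owner-defined policy first; only then touch the filesystem.
    if not policy.permits("read", path):
        raise AccessDeniedException(f"read denied for {path}")
    with open(path) as f:
        return f.read()
```

The essential property is ordering: the check precedes the system call, so a script that fails the check receives an exception and nothing else happens.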
The targeted use case is explicit. AWS Rex is designed to contain three specific failure modes in agentic AI: hallucinated code, prompt injection, and overly eager task interpretation. None of these is hypothetical. Each is a documented attack class, and leading AI labs have publicly conceded they may never be fully solved. OpenAI said in late 2025 that prompt injection is unlikely ever to be fully solved. Anthropic acknowledged that prompt injection remains far from solved, especially as models take more real-world actions.
The architectural inversion is real. Most agent sandboxes bound the agent's behavior; Rex inverts that by bounding what any host operation the agent invokes can accomplish. This shift in where trust lives is encoded in production code. The pattern is the right pattern. AWS has endorsed an architecture that treats prompts as instructions rather than access controls and treats the agent's claimed identity as something to verify. Vendor security questionnaires, internal architecture reviews, and audit evidence packages can now reference a working open-source implementation of that pattern. That is the runtime layer, and it should be adopted.
What AWS Did Not Solve
Now the part that should change how every security and compliance leader reads this announcement. Rex governs system calls, not data security. That distinction is not a footnote; it is the difference between protecting the host from the agent and protecting data from misuse, and between passing a runtime audit and passing a regulatory one.
A Cedar policy can permit file_system::Action::"read" on a customer-records file. That is the right policy at the kernel layer. But at the data layer, a different set of questions must be asked: Is this read happening on behalf of a specific human user with the right authorization? Is the requester operating within the scope of the engagement that authorized access? Are the returned records minimum-necessary for the task? Are any subject to deletion requests, legal holds, or jurisdictional restrictions? Is the access logged in a tamper-evident form for future audits? Rex does not answer these questions. Cedar policies on system calls cannot answer them. They live one layer below the runtime, where data security must be enforced.
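The questions above can be made concrete as a filter that runs at the data layer, after the syscall is already permitted. This is an illustrative sketch, not anything shipped with Rex; the Record and Request fields and the authorize_records function are hypothetical names standing in for whatever schema an organization actually uses.

```python
# Hypothetical data-layer check: a syscall policy has already allowed the
# read; this layer decides which records may be released, and why.
from dataclasses import dataclass, field

@dataclass
class Record:
    subject_id: str
    jurisdiction: str
    deletion_requested: bool = False
    consented_purposes: set = field(default_factory=set)

@dataclass
class Request:
    human_user: str                  # the person the agent acts on behalf of
    purpose: str                     # the engagement authorizing access
    allowed_jurisdictions: set = field(default_factory=set)

def authorize_records(request, records, audit_log):
    """Return only the records permissible for this user and purpose."""
    released = []
    for r in records:
        ok = (
            request.purpose in r.consented_purposes          # purpose limitation
            and not r.deletion_requested                     # honors erasure requests
            and r.jurisdiction in request.allowed_jurisdictions
        )
        # Every decision, allow or deny, is recorded for later audit.
        audit_log.append((request.human_user, request.purpose, r.subject_id, ok))
        if ok:
            released.append(r)
    return released
```

Note that none of these conditions is expressible as a constraint on a file read: they depend on who the human principal is, what the engagement authorizes, and the state of each record.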
Without this data-layer enforcement, an organization can run every agentic workload through Rex, prove that no script ever exceeded host permissions, and still be unable to demonstrate to a regulator that the right person authorized the right access for the right purpose. This matters operationally and legally. GDPR Article 5 demands purpose limitation, data minimization, storage limitation, and accountability. HIPAA's minimum-necessary standard requires controls on which data the agent accesses, not just system calls. CMMC Level 2 access control families assume enforced authorization for AI access to controlled unclassified information. None of these frameworks is satisfied by runtime gating alone.
The Numbers Make the Gap Concrete
Recent industry research reveals that 63% of organizations cannot enforce purpose limitations on AI agents, 60% cannot quickly terminate a misbehaving agent, 55% cannot isolate AI systems from broader network access, and 54% cannot validate AI inputs. Some of these gaps are exactly what Rex closes at the runtime layer: termination, isolation, input validation. Others, like purpose limitation, are data-semantics controls that cannot be enforced on a system call; they must be enforced on the data.
Only 43% of organizations have a centralized AI data gateway. The remaining 57% run agentic AI through fragmented or partial data-layer controls. Adding Rex to that 57% closes the runtime gap but leaves the data gap intact. The audit-defensible layer is not the kernel; it is the data.
The Architecture Data Security Requires
The architecture that holds up under regulatory enforcement is layered, and the layers are not interchangeable. Runtime controls like Rex enforce what the host will permit. Identity controls enforce who the agent is acting on behalf of. Data-layer controls—attribute-based access control evaluated against classification, jurisdiction, consent, and purpose—enforce what data the agent is allowed to touch. Each layer addresses a different failure mode, and none substitutes for the others.
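The layering described above can be sketched as a pipeline in which each layer can veto independently. This is an illustrative composition under assumed names (runtime_layer_permits, identity_layer_resolves, data_layer_permits, handle); it is not an AWS design, only a sketch of the claim that no layer substitutes for another.

```python
# Sketch of the three-layer check: runtime, identity, data.
# Each layer answers a different question and any one can deny.

def runtime_layer_permits(action, resource):
    # Rex/Cedar analogue: will the host perform this operation at all?
    return (action, resource) in {("read", "customers.db")}

def identity_layer_resolves(agent_token):
    # Which human user is this agent acting on behalf of?
    return {"agent-7f": "alice"}.get(agent_token)

def data_layer_permits(user, purpose, record_tags):
    # ABAC over consent and purpose, evaluated on data attributes.
    return purpose in record_tags.get("consented_purposes", set())

def handle(agent_token, action, resource, purpose, record_tags):
    if not runtime_layer_permits(action, resource):
        return "denied: runtime"
    user = identity_layer_resolves(agent_token)
    if user is None:
        return "denied: identity"
    if not data_layer_permits(user, purpose, record_tags):
        return "denied: data"
    return f"allowed for {user}"
```

A request that passes the runtime check can still fail at identity or data, which is precisely why runtime gating alone cannot satisfy purpose limitation.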
The data layer is where data security lives. It is where every access is authenticated against the human user the agent represents, where every authorization decision respects classification and consent, and where every operation produces a tamper-evident audit record that outlasts the model that initiated it. AWS does not provide that layer in the Rex release. It is the architect's responsibility and must be built explicitly.
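One common way to make an audit record tamper-evident, offered here as an assumption rather than anything the Rex release specifies, is to chain each entry to the hash of the previous one, so that any retroactive edit invalidates every later entry.

```python
# Hash-chained audit log sketch: each link commits to the previous hash,
# so editing any past entry breaks verification of the whole suffix.
import hashlib
import json

def append_entry(chain, entry):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"entry": entry, "prev": prev_hash, "hash": digest})

def verify(chain):
    prev_hash = "0" * 64
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if link["prev"] != prev_hash or link["hash"] != expected:
            return False
        prev_hash = link["hash"]
    return True
```

The evidence property is the point: a verifier can detect after the fact that a record was altered, which is what "outlasts the model that initiated it" requires in practice.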
What This Means for Security and Compliance Leaders
The right operational response has three parts. First, adopt the runtime pattern. Rex is open-source under Apache 2.0 and runs on Linux and macOS with no procurement obstacle. Second, do not treat runtime gating as the whole answer. Map current controls against the Five Eyes advisory's five risk categories—privilege, design and configuration, behavior, structural, and accountability—and identify where the architecture stops at the kernel and where the data layer is still ungoverned. Third, build the audit trail at the layer that survives model lifecycle changes. The model can be retired, the runtime replaced, but the data layer is the only place where evidence outlasts the agent that produced it.
AWS solved part of the problem. Data security—the part that shows up in audits, regulatory inquiries, breach notifications, and litigation discovery—requires governance at the data layer, and AWS did not address it. The runtime layer just got easier. The data layer remains the architect's responsibility, and it is the layer that decides whether the next agentic AI audit succeeds or fails.
Source: TechRepublic News