CSAI Foundation Expands Agentic AI Security Push

The Cloud Security Alliance (CSA) announced a series of CSAI Foundation milestones aimed at securing what it calls the agentic control plane, including a new catastrophic risk initiative, CVE Numbering Authority authorization and the acquisition of two agentic AI specifications.

The April 29 announcement, made at the CSA Agentic AI Security Summit, centers on governance and assurance for agentic AI systems. CSA said the milestones expand the CSAI Foundation's 2026 mission of "Securing the Agentic Control Plane."

According to CSA, the announcements include the launch of the STAR for AI Catastrophic Risk Annex, authorization as a CVE Numbering Authority through MITRE and the acquisition of the Autonomous Action Runtime Management specification and Agentic Trust Framework.

"The global economy is contending with two exponentials at once: frontier models leapfrogging each other month over month, and viral, bottom-up adoption of agents inside the business," said Jim Reavis, CEO and co-founder of CSA. "Today's announcements give enterprises, auditors, and regulators the technical specifications and assurance scaffolding to say yes to agentic AI without losing control of it."

Catastrophic Risk Annex Planned
The STAR for AI Catastrophic Risk Annex is being launched with support from Coefficient Giving, which CSA described as a philanthropic organization backing long-horizon AI safety work. CSA said the annex extends the AI Controls Matrix and STAR for AI assurance program to cover scenarios involving loss of human oversight, uncontrolled system behavior and other large-scale, irreversible, society-wide consequences.

The annex is designed to focus on controls that can be tested in production environments, according to CSA. A related CSA blog post said the project will identify existing AICM controls relevant to catastrophic risk, introduce new controls where gaps exist, and define evidence requirements and testing criteria suitable for independent assessment.

The rollout is planned in four phases from June 2026 through December 2027. Phase 1 (June through September 2026) will translate catastrophic risk scenarios into auditable control language. Phase 2 (October through December 2026) will develop validation protocols. Phase 3 (January through June 2027) will bring the annex into real-world environments through pilot assessments, assessor training and reference implementations. Phase 4 (July through December 2027) will produce public STAR for AI registry entries, benchmarking and a State of Catastrophic AI Risk Controls Report.

CSA said the annex will align with the NIST AI RMF, the EU AI Act and ISO/IEC 42001. The announcement did not include specific control text for the annex.

AICM and STAR for AI Context
The annex builds on CSA's AI Controls Matrix, which CSA describes as a vendor-agnostic framework for cloud-based AI systems. CSA says the AICM contains 243 control objectives across 18 security domains and maps to standards including ISO 42001, ISO 27001, NIST AI RMF 1.0 and BSI AIC4.

The AICM package includes the matrix itself, mapping to NIST AI 600-1, ISO 42001 and the EU AI Act, implementation guidelines, auditing guidelines, the AI-CAIQ questionnaire, introductory guidance and a STAR for AI Level 1 submission guide, according to CSA.

CSA describes STAR for AI as an extension of its Security, Trust, Assurance and Risk program into AI. According to CSA, STAR for AI provides a security controls framework, AI safety pledge and certification program for AI systems. CSA also says STAR for AI includes levels ranging from self-assessment to a Level 2 designation that requires third-party ISO/IEC 42001 certification and a Valid-AI-ted AI-CAIQ.

CNA Authorization and AI Risk Observatory Work
CSA said it has been authorized by the CVE Program as a CVE Numbering Authority, with an initial operational scope covering vulnerabilities in its software tools. The CVE Program describes CNAs as organizations authorized to assign CVE IDs and publish CVE Records within distinct scopes.

CSA said the authorization is part of its work on an AI Risk Observatory. The organization said CSAI is organizing research work streams and operational projects with existing CNAs and ecosystem partners. The listed areas of focus include responsible agentic-specific vulnerability coordination, CVE/NVD ecosystem gaps, AI-assisted human-verified vulnerability enrichment and practical guidance for defenders.

AARM Stewardship Moves to CSAI
CSA also announced that Vanta contributed the Autonomous Action Runtime Management specification to the CSAI Foundation. CSA described AARM as an open system specification for securing AI-driven actions at runtime across context, policy, intent and behavior. CSA said AARM founder Herman Errico will continue to lead development of the specification as working group chair.

The AARM site describes components including a context accumulator, policy engine, approval service, deferral service, receipt generator and telemetry exporter. It lists implementation architectures including protocol gateway, SDK or instrumentation, kernel or eBPF and vendor integration. AARM's conformance requirements include pre-execution interception, context accumulation, policy evaluation with intent alignment, five authorization decisions, tamper-evident receipts and identity binding.

AARM's site also lists optional extended requirements, including semantic distance tracking, telemetry export and least privilege enforcement. It describes AARM as an open specification intended to define what an AARM-conformant system must do rather than prescribe how the system must be built.
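To make the conformance requirements above concrete, the following is a minimal illustrative sketch of what an AARM-style runtime flow could look like: an action is intercepted before execution, evaluated against policy, and recorded in a receipt. All class, method and policy names here are invented for illustration and are not taken from the AARM specification; the sketch also uses only three decision outcomes rather than the five the specification requires.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AARM-style flow. Names are invented for
# illustration and are not drawn from the AARM specification itself.

@dataclass
class ActionRequest:
    agent_id: str
    action: str
    target: str
    context: dict = field(default_factory=dict)  # accumulated context

class RuntimeGateway:
    """Intercepts an agent action before execution, evaluates policy,
    and records a receipt for each authorization decision."""

    def __init__(self, policies):
        # Each policy is a callable: ActionRequest -> decision string or None.
        self.policies = policies
        # A conformant system would make this storage tamper-evident.
        self.receipts = []

    def authorize(self, request: ActionRequest) -> str:
        # Pre-execution interception: every action passes through here.
        for policy in self.policies:
            decision = policy(request)
            if decision is not None:
                break
        else:
            decision = "deny"  # default-deny when no policy matches
        # Receipt generation: record the decision for later audit.
        self.receipts.append({"agent": request.agent_id,
                              "action": request.action,
                              "decision": decision})
        return decision

# Example policy: allow reads, defer writes to human approval.
def read_write_policy(req):
    if req.action == "read":
        return "allow"
    if req.action == "write":
        return "defer"
    return None

gateway = RuntimeGateway([read_write_policy])
print(gateway.authorize(ActionRequest("agent-1", "read", "crm")))   # allow
print(gateway.authorize(ActionRequest("agent-1", "delete", "crm"))) # deny
```

A real implementation would sit at one of the architectural layers the AARM site lists, such as a protocol gateway or eBPF instrumentation, rather than in-process as sketched here.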

ATF Stewardship Also Transferring
CSA said it has an agreement with Josh Woodruff, founder of MassiveScale.AI, to transfer stewardship of the Agentic Trust Framework. CSA described Woodruff as a CSA Research Fellow and co-chair of the CSA Zero Trust Working Group, and said he will continue to lead ATF development.

The ATF site says the framework applies zero trust principles to AI agents. It describes ATF as mapping to frameworks including CSA AICM, NIST AI RMF, SOC 2, ISO/IEC 42001, ISO/IEC 27001 and EU AI Act articles. The ATF site says the canonical specification is maintained on GitHub and that the specification is published under a Creative Commons Attribution 4.0 International license.

CSA's announcement did not disclose financial terms for either specification acquisition, nor did it say whether CSAI will alter existing licensing, governance or contribution processes for AARM or ATF after the stewardship transfers.

About the Author

David Ramel is an editor and writer at Converge 360.
