
How AI Works with Zero Trust to Secure the Cloud

New cloud research examines the interaction of advanced AI and the zero-trust cybersecurity approach.

Last year's debut of advanced generative AI systems like ChatGPT came with significant cybersecurity concerns, as the cutting-edge tech can be used by both the bad guys and the good guys.

It can help threat actors multiply phishing and other attacks, for example, while organizations can use AI to help thwart them. Key to the latter is zero trust, which has become an increasingly popular security approach as organizations are besieged by sophisticated ransomware and other cybersecurity exploits. It has been adopted by major cloud players like Microsoft to fight ransomware and has been described as the future of network security.

[Figure: Zero Trust Principles (source: Microsoft).]

Zero trust eschews the standard security approach of walling off IT systems behind a secure network perimeter. It has grown in popularity with the advent of hybrid work models, the proliferation of endpoints and bring-your-own devices, disparate and interconnected systems spanning clouds and enterprise datacenters, and general IT complexity. Instead of trying to secure perimeters, zero trust assumes that such fortress-style defenses will fail or have already been breached, and seeks to limit the damage that can result.
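In practice, that means every request is verified explicitly rather than trusted because it originated inside the network. Here is a minimal sketch of that per-request check in Python; the field names and policy are illustrative, not any vendor's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # identity verified explicitly (e.g., via MFA)
    device_compliant: bool     # endpoint meets posture policy
    resource: str              # what the user is asking for
    granted_scopes: set        # resources this identity may touch

def authorize(req: AccessRequest) -> bool:
    """Evaluate every request on its own merits; never trust it just
    because it arrived from inside the corporate network."""
    if not req.user_authenticated:   # verify explicitly
        return False
    if not req.device_compliant:     # assume breach: check device health too
        return False
    # Least privilege: only explicitly granted resources are reachable,
    # which limits the blast radius if an account is compromised.
    return req.resource in req.granted_scopes
```

Contrast this with perimeter security, where the same request would be waved through simply for originating on the internal network.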

A recent survey-based report from MIT Technology Review Insights states a key finding: surveyed organizations are pretty much "all in for zero-trust." Sponsored by Infosys Cobalt, the "2023 Global Cloud Ecosystem" report, published last month, is based on a survey of 400 executives and interviews with experts on the global cloud economy.

"Some 86 percent of the survey respondents use zero-trust architecture," the survey stated. "Primarily a public and hybrid cloud model that removes most instances of trust by default, zero trust has become a standard for cloud and AI."

The report isn't the first to observe the AI/zero trust relationship, of course, as a 2023 article on the Unite.AI site lists three basic ways AI can empower zero trust:

  • Provide users with a better experience: for example, by reducing the number of hoops users must jump through to obtain access.
  • Create and calculate risk scores: Because machine learning learns from past behavior, it can help zero-trust systems generate real-time risk scores from network, device and other relevant data; organizations can then weigh those scores when deciding whether to grant an access request.
  • Automatically provide access to users: AI can grant access requests automatically based on the generated risk scores, saving time for the IT department (a sketch combining these last two points appears after this list).
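To illustrate the second and third points, here is a minimal sketch in Python. The signals, weights and thresholds are all hypothetical; a production system would learn them from historical access data rather than hard-coding them:

```python
# Hypothetical signal weights; a real system would learn these from
# historical access data rather than hard-coding them.
WEIGHTS = {
    "unrecognized_device": 0.4,
    "unusual_location": 0.3,
    "off_hours_access": 0.2,
    "failed_logins_recent": 0.1,
}

def risk_score(signals: dict) -> float:
    """Combine boolean risk signals into a score between 0 and 1."""
    return sum(WEIGHTS[name] for name, active in signals.items() if active)

def decide(signals: dict) -> str:
    """Auto-grant low-risk requests, challenge medium risk, deny high risk."""
    score = risk_score(signals)
    if score < 0.3:
        return "grant"         # no extra hoops for the user
    if score < 0.7:
        return "step-up-auth"  # e.g., prompt for MFA
    return "deny"

# Example: known device, usual location, but off-hours access.
print(decide({
    "unrecognized_device": False,
    "unusual_location": False,
    "off_hours_access": True,
    "failed_logins_recent": False,
}))  # -> "grant"
```

Because the score is computed per request, low-risk users get fast, automatic access while riskier requests are escalated, which is the user-experience win the article describes.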

The new MIT Technology Review Insights report provides the cloud context for the AI/zero trust relationship.

"Cloud and AI draw from a broad data surface and generate rapid change, which presents new risks," the report stated. "Cloud, a requirement for growing AI and automation, also offers the breadth to identify and classify cloud assets with data cataloging, fast access, and visibility. This ensures asset risks are understood and prioritized."

The AI/zero trust relationship was also the subject of an article published this year by the Cloud Security Alliance, titled "Zero Trust and AI: Better Together" and written by Chris Hogan, an exec at Mastercard.

It describes three key opportunities unlocked by the intersection of AI and zero trust:

  • Behavioral analytics and anomaly detection: Empowered by AI, behavioral analytics scrutinizes user and entity actions to establish a baseline of "normal" behavior. This real-time monitoring flags anomalies and potential threats, continually learning and adapting to emerging patterns. By serving as a sentinel for unauthorized access or compromised accounts, AI reinforces the very essence of zero trust (a simplified sketch of the baseline idea follows this list).
  • Automated threat response and remediation: AI's role extends beyond identification alone. Here, AI takes the lead in automating response measures, including swift isolation of compromised devices, withdrawal of access privileges, or initiation of incident response protocols. By scripting AI into incident response playbooks, organizations can expeditiously identify and neutralize threats, a prime function of the zero trust model.
  • Adaptive access control: AI technologies that are embedded in the fabric of access control systems can dynamically adjust privileges in response to real-time risk assessments. Enriched with context such as user location, device health and behavior patterns, AI generates an informed narrative for granting or denying resource access. This nimble approach seamlessly aligns with a core tenet of zero trust -- least privilege -- a principle etched deep within its philosophy.
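As a simplified illustration of the behavioral-analytics point, the sketch below learns a baseline from a user's past activity (downloads per hour, a hypothetical metric) and flags observations that fall far outside it. Real products use much richer models, but the baseline-and-deviation principle is the same:

```python
import statistics

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Learn "normal" behavior from past observations."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, baseline: tuple[float, float],
                 threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# A user's typical downloads per hour over recent sessions.
history = [4, 6, 5, 7, 5, 6, 4, 5]
baseline = build_baseline(history)

print(is_anomalous(6, baseline))    # False: within the normal range
print(is_anomalous(250, baseline))  # True: possible exfiltration; cut access
```

In a zero-trust setting, an anomaly flag like this would feed directly into the adaptive access controls described above, revoking or stepping down privileges in real time.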

Another report, published last month by Zscaler, examined how organizations are rushing to use generative AI tools despite significant security concerns. It listed four steps organizations can take to secure generative AI use, two of which involve zero trust:

  • Implement a holistic zero trust architecture to authorize only approved AI applications and users.
  • Conduct thorough security risk assessments for new AI applications to clearly understand and respond to vulnerabilities.
  • Establish a comprehensive logging system for tracking all AI prompts and responses.
  • Enable zero trust-powered Data Loss Prevention (DLP) measures for all AI activities to safeguard against data exfiltration (a combined sketch of these last two steps appears below).
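Here is a minimal sketch combining the logging and DLP steps. The regex patterns are crude placeholders for a real DLP engine's classifiers, and `send_to_model` stands in for whatever AI client an organization actually uses:

```python
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-audit")

# Hypothetical DLP patterns; a real deployment would rely on its DLP
# engine's classifiers rather than a couple of regexes.
DLP_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-like strings
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-number-like strings
]

def dlp_scan(prompt: str) -> bool:
    """Return True if the prompt appears to contain sensitive data."""
    return any(p.search(prompt) for p in DLP_PATTERNS)

def audited_ai_call(user: str, prompt: str, send_to_model) -> str:
    """Block risky prompts, and log every prompt/response pair for audit."""
    if dlp_scan(prompt):
        log.warning("DLP block: user=%s prompt withheld from model", user)
        raise PermissionError("Prompt blocked by DLP policy")
    response = send_to_model(prompt)
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    }))
    return response
```

Routing all AI traffic through a gate like this gives security teams the audit trail the third step calls for, and a chokepoint where zero-trust DLP policy can be enforced before data ever leaves the organization.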

Looking beyond individual organizations, Accountable Tech, AI Now Institute and EPIC earlier this year jointly released the "Zero Trust AI Governance" framework, which offers policymakers a roadmap to help address "the urgent societal risks posed by these technologies."

The framework has three overarching principles:

  • Time is of the essence -- start by vigorously enforcing existing laws.
  • Bold, easily administrable, bright-line rules are necessary.
  • At each phase of the AI system lifecycle, the burden should be on companies to prove their systems are not harmful.

Governance was also the subject of another key finding of the MIT Technology Review Insights report, alongside the "all in for zero-trust" finding: cloud-centric organizations expect strong data governance but don't always get it.

"Strong data privacy protection and governance is essential to accelerate cloud adoption," the report said. "Perceptions of national data sovereignty and privacy frameworks vary, underscoring the lack of global standards. Most respondents decline to say their countries are leaders in the space, but more than two-thirds say they keep pace."

The other key finding relates to financials, stating that the cloud helps the top and bottom lines globally.

Noting that cybersecurity is always a priority, the report said: "Public and hybrid cloud assets raise cybersecurity concerns and increase threat surfaces, and AI tools are being used to improve the predictability of attacks. As cloud helps AI and automation capabilities mature, AI algorithm development is accelerated with the scale that cloud compute provides. These cloud-produced AI tools provide more accurate and predictive security tools, such as service-provider and enterprise endpoint incursion and anomaly detection, which improve data cataloging, access, and visibility."

The survey backing the report was conducted in June.

About the Author

David Ramel is an editor and writer for Converge360.
