News

AI-Powered Threat Detection: From Reactive Defense to Proactive Security

Allan Liska, intelligence analyst at Recorded Future and a ransomware expert who appears frequently on networks like CNN and PBS, took a deeper dive into modern app security during the Pure AI webcast today titled "Secure by Intelligent Design: Mastering Modern App Security with AI." In his session, "AI-Powered Threat Detection: Proactive Security for Modern Applications," Liska outlined how security, development, and AI teams can (and must) converge to build AI-enabled defenses directly into the software lifecycle. The two-part summit is available for on-demand replay.

"You don't want developers sort of going all cowboy and picking whatever model they want to use."

Allan Liska, Intelligence Analyst, Recorded Future

"What we're trying to do," Liska explained, "is go from ad hoc developers that are using AI at different phases individually or independently, to how does AI get inserted into the whole thing?" He stressed that effective integration means AI must be embedded into every phase of the secure software development lifecycle (SSDLC) -- not tacked on as an afterthought.

Making AI Part of the Entire SDLC


Liska opened by mapping out a typical SSDLC: Requirements → Design → Development → Testing → Maintenance. AI, he argued, should play a role across all of them. "Just adding AI into the development or testing phases can actually leave your organization, and its applications, more vulnerable," as Liska's slide cautioned.

He advocated for organizations to avoid disrupting established workflows. "Pick the development process first and then pick an AI framework that works within that development process," he said. For example, a team working in Kubernetes should select models optimized for that environment rather than restructuring to suit a particular AI tool.

"It's understandable," Liska acknowledged, "because many companies, like Duolingo and Box, have declared themselves AI-first. So already, it's likely your developers are using AI as part of their process." However, he stressed the need to move from isolated usage to strategic integration, avoiding "cowboy developers" choosing incompatible models without governance.

AI for Requirements and Design: More than Speed


AI's NLP capabilities can help parse unstructured sources -- meeting notes, emails, documentation -- into structured, usable development requirements. "Trying to pore over all of those notes and not miss anything can be really challenging," Liska said. By structuring large volumes of stakeholder input automatically, AI tools reduce manual effort and help surface duplicate or conflicting requests.

Additionally, AI can flag unrealistic or ambiguous requests. "Marketing doesn't necessarily speak in development terms," Liska noted, "and development doesn't speak in marketing terms. So using AI at this stage can help you find those vague things and get more rigorous requirements."

During the design phase, AI can also recommend scalable, secure architectures and detect bottlenecks. "You can have AI ingest the specifications from a new cloud provider and match that out," he said. "AI can analyze performance data, identify potential bottlenecks, things like -- and we'll talk more about this later -- things like API problems, other types of network configuration errors, etc., that may cause problems in the future...."

Embedding AI During Development and Testing


AI-powered code analysis was another high-impact area. "It's really good at analyzing code in real time and providing feedback to developers on potential security risks as they write code," Liska said. This includes catching common weaknesses like buffer overflows or identifying flawed third-party libraries.

Liska recalled one case where an AI system caught a critical flaw in a longstanding dependency: "It was able to say, hey, this app you're using has listed vulnerabilities that don't appear to have been patched. Maybe we should find another solution."

Testing is where AI arguably performs best. "The testing phase relies on sifting through enormous amounts of data to find anomalies -- and AI is really good at that," he emphasized. AI can scan massive codebases, flag behavioral outliers, and spot risky API usage. "I mean, think about just a couple years ago, what we have with Log4j -- how many developers had the Log4j library in their systems, and maybe weren't even aware that they had it..."
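The Log4j scenario Liska cites -- a vulnerable library buried in a dependency list that nobody remembers adding -- boils down to matching a project's manifest against known-bad releases. A minimal sketch, assuming a simple `name==version` manifest format and an illustrative (not real) advisory list:

```python
# Minimal sketch: flag dependencies at or below a known-vulnerable release.
# The advisory data below is illustrative, not a real vulnerability feed.

KNOWN_VULNERABLE = {
    "log4j-core": "2.14.1",  # illustrative: versions <= this are affected
    "oldcrypto": "1.0.3",
}

def parse_manifest(text):
    """Parse simple 'name==version' lines into (name, version) pairs."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps.append((name.strip(), version.strip()))
    return deps

def audit(text):
    """Return dependencies whose version is at or below a known-bad one."""
    findings = []
    for name, version in parse_manifest(text):
        bad = KNOWN_VULNERABLE.get(name)
        if bad and tuple(map(int, version.split("."))) <= tuple(map(int, bad.split("."))):
            findings.append((name, version))
    return findings

manifest = """
requests==2.31.0
log4j-core==2.14.1
"""
print(audit(manifest))  # [('log4j-core', '2.14.1')]
```

Real AI-assisted scanners go further -- reasoning about transitive dependencies and unpatched advisories -- but the core check is this kind of exhaustive matching at a scale humans miss.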

AI in Production and Maintenance: Real-Time Vigilance


Post-deployment, AI helps organizations detect threats by analyzing real-time traffic, logs, and behavior. "You want to be able to test continuously," Liska said, "not just inside the application but outside as well. What kind of network traffic is targeting your application?"

He warned that attackers commonly exploit leaked credentials and API keys. "We've seen nightmare stories of AWS or GCP keys leaked -- and suddenly some dude in Romania is mining for Bitcoin on your dime." AI can scan for those exposures, monitor anomalous behavior, and trigger early alerts.
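Leaked-credential scanning of the kind Liska describes works because many secrets have recognizable shapes. AWS access key IDs, for instance, follow a well-known documented format ("AKIA" followed by 16 uppercase alphanumeric characters), so a pattern match catches accidental exposure in code or logs. A minimal sketch:

```python
import re

# Minimal sketch of secret scanning: AWS access key IDs have a documented
# shape (AKIA + 16 uppercase alphanumerics), making leaks pattern-matchable.
AWS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def scan_for_keys(text):
    """Return any substrings that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)

# AWS's own documented example key, safe to use in tests:
sample = 'config = {"aws_access_key_id": "AKIAIOSFODNN7EXAMPLE"}'
print(scan_for_keys(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Production scanners add entropy checks and many more key formats (GCP, GitHub tokens, etc.), but the principle -- continuously sweep code, configs, and logs for secret-shaped strings -- is the same.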

AI also enables smarter incident response by automating triage, enriching context, and recommending remediations. "You want to be proactive, not reactive," he urged. "Replacing a server before it fails or identifying leaked credentials before they're used by the bad guys -- that's a huge win."
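The "replace a server before it fails" idea rests on baseline-deviation detection: learn what normal looks like, then alert when a metric strays far from it. A minimal sketch using a simple z-score test (the threshold and baseline values are illustrative assumptions, not from the talk):

```python
import statistics

# Minimal sketch of proactive monitoring: flag a metric sample that
# deviates sharply from its recent baseline via a z-score test.
# Threshold and window size are illustrative choices.

def is_anomalous(history, latest, threshold=3.0):
    """Return True if `latest` lies more than `threshold` standard
    deviations from the mean of `history`."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) / stdev > threshold

baseline = [120, 118, 125, 122, 119, 121, 124, 120]  # e.g. requests/sec
print(is_anomalous(baseline, 123))  # False: within normal variation
print(is_anomalous(baseline, 480))  # True: likely worth an alert
```

Real AI-driven monitoring replaces the static z-score with learned models that account for seasonality and multivariate signals, but the proactive posture Liska advocates starts with exactly this shift: alert on deviation from baseline, not on failure after the fact.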

Final Advice: Govern Before You Deploy


Liska concluded by stressing that organizations should define their security and development frameworks first, then layer in AI -- never the reverse. "You don't want developers sort of going all cowboy and picking whatever model they want to use," he said. "You want guidance: this is the framework, these are the models that fit. You want it to be the model that's going to fit into your framework."

Looking ahead, he noted that regulatory pressure is mounting. "In the EU, we're already seeing more interest in AI regulation and AI trust. In the U.S., it'll come more slowly -- but it will come."

Ultimately, AI's real promise in application security isn't just faster analysis -- it's enabling a shift from reactive firefighting to proactive defense. And as Liska put it: "It can go a long way toward improving the application development and security life cycle, but it has to be used comprehensively and with forethought."

And More
Beyond the top topics discussed above, Liska also covered AI-driven threat modeling, cloud configuration scanning, runtime traffic analysis, and the importance of secure coding education using AI tools. You can learn all about those in the replay.

And, although replays are fine -- this was just today, after all, so timeliness isn't an issue -- there are benefits to attending such summits and webcasts from Virtualization & Cloud Review and sister sites in person. Paramount among these is the ability to ask questions of the presenters, a rare chance to get one-on-one advice from bona fide subject matter experts (not to mention the chance to win free prizes -- in this case a Nespresso Vertuoplus Deluxe Bundle, which was awarded to a random attendee during a session by sponsor Snyk, the AI trust company).

With all that in mind, here are some upcoming summits and webcasts from our parent company in the month of June:

About the Author

David Ramel is an editor and writer at Converge 360.
