In-Depth

Take 5 With Tom Fenton: Hard Truths of Enterprise AI Adoption

After my discussion with Christian Posta, Global Field CTO of Solo.io, I started thinking about AI in the enterprise and how it will be adopted.

Before delving into how AI will be adopted, it's important to address the hype surrounding it. Corporate boards and executives promote AI as a transformative force that will boost their company's competitiveness and profitability. For the teams responsible for implementing these strategies, however, the instruction is frequently ambiguous: a mandate to "just go figure it out." That ambiguity does enable rapid experimentation, allowing teams to learn quickly and develop promising new agents to accomplish tasks. I have seen similar scenarios with other technologies, such as virtualization and containerization.

But as practitioners know, a critical gap emerges when it's time to move from a working pilot to a scalable, enterprise-wide solution. The very things that made the experiment successful suddenly become roadblocks. In this article, I explore five challenges that can arise when taking AI from the lab to the enterprise. I use quotes from Christian that got me thinking about these traps, though I in no way speak for him.

1. The 'Successful' Pilot is a Scalability Trap
The AI agent that worked perfectly on a developer's laptop or when calling out to a specific SaaS API or MCP server creates a false sense of progress. This success is a trap because the methods that make the pilot work -- direct API calls, lax security, and unmonitored environments -- are the exact things that make it impossible to scale securely and, more importantly, governably across an enterprise.

When you try to replicate this model across an entire team or company, it breaks down. The enterprise quickly loses track: What services are being called, and by what? What security keys are they using? How much is this going to cost? There is zero visibility and control, creating a governance nightmare with severe consequences.

Christian put this succinctly when he said, "...if something goes wrong--a jailbreak, a prompt injection, or something like this--they have no control. They don't even know it; they have no visibility... from a governance standpoint... it will be a nightmare. It'll take down a business."

2. Forget Multi-Cloud; AI Sprawl is the New Nightmare
Just when infrastructure teams were getting a handle on the last big thing in IT -- containerized workflows and multi-cloud deployment -- a new, more fragmented challenge has emerged: AI sprawl. The AI agent ecosystem is often deployed across a far broader and more fractured landscape than traditional cloud infrastructure ever was.

While platform teams have been struggling to bring a couple of clouds together, AI agents and their MCP servers are being spun up across various teams. They run on various core platforms, use SaaS applications such as Databricks and Salesforce, or run on on-prem Kubernetes clusters. This sprawl makes it nearly impossible for infrastructure and security teams to get a complete picture of what is running, where it's running, and how to apply consistent governance.

Christian laid out the problem by stating, "The agent ecosystem and where they're deploying and running things is even more fractured than what we would see with platform teams trying to bring a couple of clouds together."

3. When Making AI Work, Security is the Last Thing on Anyone's Mind
In the rush to experiment and prove that an AI agent can perform its intended function, security is almost always an afterthought. The priority is making it work, not making it secure.

The risk here goes far beyond traditional hacking. The real danger is a lack of governance. We need observability to see what these agents are doing. We need to be able to audit them and show a regulator that we know which decision an agent made and why.

We need proper credentials and policy enforcement for every external call. Without this foundation, the enterprise cannot answer the most critical questions, creating an unacceptable risk for any regulated or security-conscious organization.
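To make this concrete, here is a minimal sketch of what "proper credentials and policy enforcement for every external call" could look like: a gate that every agent tool call passes through, which checks a credential and an allowlist and records an audit entry for each decision. The tool names, rules, and structure here are invented for illustration; a real system would proxy the call to the actual service and write to a durable audit sink.

```python
import datetime
import json

# Hypothetical illustration: a policy gate for agent tool calls. Every call is
# credentialed, checked against an allowlist, and logged for later audit.
ALLOWED_TOOLS = {"crm_lookup", "code_review"}  # per-agent allowlist (illustrative)
AUDIT_LOG = []                                 # stand-in for a real audit sink

def call_tool(agent_id, tool, args, token=None):
    """Enforce policy before forwarding a tool call, and record the decision."""
    decision = "allow"
    if token is None:
        decision = "deny: missing credential"
    elif tool not in ALLOWED_TOOLS:
        decision = f"deny: tool '{tool}' not permitted for {agent_id}"

    # Every call, allowed or denied, leaves an audit trail: who, what, why, when.
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "args": json.dumps(args),
        "decision": decision,
    })
    if decision != "allow":
        raise PermissionError(decision)
    return f"forwarded {tool} for {agent_id}"  # real system would proxy the call
```

With a gate like this in place, the questions a regulator might ask -- which agent called which service, with what credential, and why -- have answers in the log rather than guesses.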

Christian has seen this in action and said that "...security is the last thing on people's minds when they're trying to make things work."

4. To Trust AI, We Must Tame Its Unpredictability with 'Skills'
A core tension exists at the heart of enterprise AI. Unlike the structured, deterministic programs and workflows we currently rely on, AI is probabilistic and non-deterministic: it offers incredible power and flexibility, but it poses a significant risk for enterprises that need consistency, reliability, and control. The solution isn't to eliminate AI's flexibility but to inject determinism where it matters most.

Companies and programs do this through the concept of "skills." Skills are codified instructions, organizational conventions, and non-negotiable business rules packaged and given to agents. For example, instead of letting an agent invent a method for code review, a "skill" can provide it with the company's specific, internal standards. These skills ensure that while an agent can be flexible in its reasoning, it must adhere to critical company policies, making it both powerful and trustworthy.
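As a rough sketch of the idea, a skill can be as simple as company rules codified as data and injected ahead of the agent's task, so the deterministic part (policy) constrains the probabilistic part (the model's reasoning). The skill name, rules, and prompt format below are invented for illustration, not any specific product's skill format.

```python
# Hypothetical "skill": non-negotiable code-review rules packaged as data.
CODE_REVIEW_SKILL = {
    "name": "internal-code-review",
    "rules": [
        "Flag any hard-coded credential or API key.",
        "Require a changelog entry for public API changes.",
        "Reject commits that disable existing tests.",
    ],
}

def build_prompt(skill, diff_text):
    """Prepend the skill's fixed rules to the agent's task prompt."""
    rules = "\n".join(f"- {r}" for r in skill["rules"])
    return (
        f"You must follow these non-negotiable rules:\n{rules}\n\n"
        f"Review the following diff:\n{diff_text}"
    )
```

The agent remains free in how it reasons about the diff, but the rules it must apply are the company's, not whatever it invents on a given day.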

5. For AI's Future, Look to the History of APIs
To me, the current state of enterprise AI adoption feels like a repeat of the chaotic early days of APIs a couple of decades ago. Just as developers once created duplicated, inconsistent, and ungoverned APIs across the organization, teams are now doing the same thing with AI agents.

Fortunately, the solutions from the API era provide a clear path and blueprint for our current dilemma. Back then, the answer we came up with to address duplicated effort and inconsistent standards was to create shared libraries. Today, the AI equivalent of those shared libraries is "skills." By establishing robust governance, ensuring complete observability, and creating a catalog of reusable skills, we can build a consistent "golden path" for developers to follow. This will allow us to avoid our past mistakes and build a mature AI infrastructure from the start.
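Carrying the shared-library analogy one step further, a skills catalog could be sketched as a simple registry: a team publishes a skill once, and other teams fetch it instead of re-inventing their own. This is an invented illustration of the pattern, not a real product's API.

```python
# Hypothetical skills catalog: the AI analogue of a shared library.
class SkillCatalog:
    def __init__(self):
        self._skills = {}

    def register(self, name, skill):
        # Refuse duplicates: the point of a catalog is reuse, not re-invention.
        if name in self._skills:
            raise ValueError(f"skill '{name}' already exists; reuse it instead")
        self._skills[name] = skill

    def get(self, name):
        return self._skills[name]

catalog = SkillCatalog()
catalog.register("code-review", {"rules": ["No hard-coded secrets."]})
# A second team fetches the shared skill rather than writing its own copy.
shared = catalog.get("code-review")
```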

I lived through the early API days, as did Christian, and in our discussion he said, "What's happening with MCP... it feels like a deja vu moment from what we were doing 20 years ago with APIs."

The Journey from Experimentation to Architecture
I was glad I had the chance to sit down with Christian to discuss where we are with AI and where we need to head to take full advantage of it. Successfully scaling enterprise AI is ultimately about more than the novelty of the model or the cleverness of the algorithm. It is an infrastructure challenge that hinges on building the right architecture for security, governance, and observability from the ground up.

Many people think of truly autonomous AI as a far-off concept, but the future is already here. Christian pointed this out as he looked out his window in Phoenix, seeing Waymo cars driving around, with no one in them. Self-driving cars are a prime example of fully autonomous AI, and we see them operating safely in the real world. Enterprises will reach that level of agent autonomy, but they won't allow a financial analyst agent to operate with the keys to their bank account unless they can prove it's safe.

The autonomous future isn't a decade away; it's driving on our roads now. The question is not whether we will grant AI agents more power (we will), but how we will build guardrails to ensure they operate not just intelligently, but also safely and in alignment with our organizational values.

About the Author

Tom Fenton has a wealth of hands-on IT experience gained over the past 30 years in a variety of technologies, with the past 20 years focusing on virtualization and storage. He previously worked as a Technical Marketing Manager for ControlUp. He also previously worked at VMware in Staff and Senior-level positions. He has also worked as a Senior Validation Engineer with The Taneja Group, where he headed the Validation Service Lab and was instrumental in starting up its vSphere Virtual Volumes practice. He's on X @vDoppler.
