In-Depth
A Conversation with Solo.io About AI Agents and Skills in Enterprise IT
I recently had
the opportunity to (virtually) sit down and chat with Christian
Posta, Global Field CTO of Solo.io, about treating agents and
skills as first-class
resources and elevating them from ad hoc prototypes to components
that can be reliably operated as part of a production‑grade
platform in the enterprise.
I was excited to
talk to the folks at Solo, as I have seen increased interest in AI
agents and have been wondering how they will be rolled out and
managed at scale.
About Agents and Skills
Before diving
into my conversation with Christian, I wanted to provide more
information about, and my take on, agents, skills, and how they are
used.
AI agents,
which are better known than AI skills, are autonomous software
entities designed to perceive their environment, make decisions,
and take actions toward achieving specific goals with minimal human
intervention. These agents can range from simple rule-based
chatbots that respond to predefined queries to sophisticated systems
that navigate complex, dynamic environments.
A few examples
include those used in autonomous vehicles and virtual assistants that
communicate between and manage tasks across multiple apps. To get a
tad technical, at their core, AI agents leverage techniques from
machine learning, reinforcement learning, and natural language
processing to interpret inputs, adapt to new information, and
optimize outcomes. The key value of AI agents lies in their ability
to operate continuously, handle repetitive or large-scale tasks,
and assist humans by reducing cognitive load and operational
friction.
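The perceive-decide-act loop described above can be sketched in a few lines of Python. This is a generic, rule-based illustration of the pattern — the class and its logic are invented for this article, not taken from any product:

```python
# Minimal perceive-decide-act loop: a generic illustration of how an
# AI agent operates autonomously toward a goal (not any product's API).
class ThermostatAgent:
    """Rule-based agent that keeps a room near a target temperature."""

    def __init__(self, target: float, tolerance: float = 0.5):
        self.target = target
        self.tolerance = tolerance

    def perceive(self, environment: dict) -> float:
        # Read the relevant signal from the environment.
        return environment["temperature"]

    def decide(self, temperature: float) -> str:
        # Choose an action that moves the environment toward the goal.
        if temperature < self.target - self.tolerance:
            return "heat"
        if temperature > self.target + self.tolerance:
            return "cool"
        return "idle"

    def act(self, environment: dict) -> str:
        action = self.decide(self.perceive(environment))
        # In a real system this would drive an actuator or call an API.
        return action

agent = ThermostatAgent(target=21.0)
print(agent.act({"temperature": 18.0}))  # heat
```

A sophisticated agent swaps the hand-written `decide` rules for a learned policy or an LLM call, but the loop itself stays the same.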
Skills,
in the context of AI agents, refer to modular, encapsulated
capabilities that an agent can execute to perform specific functions
or solve particular problems. Think of skills as building blocks
where each skill represents an ability like booking a flight,
summarizing a document, querying a database, controlling a smart
device, or generating creative content. By decomposing behavior
into discrete skills, developers can compose more complex agent
behaviors, reuse capabilities across different agents, and update or
improve individual skills without redesigning the entire system.
Modern frameworks for AI agents often enable dynamic skill discovery,
orchestration, and learning, allowing agents to select and sequence
skills autonomously based on the user's intent and contextual cues.
This modularity accelerates development, enhances flexibility, and
broadens the range of tasks agents can competently handle.
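The building-block idea can be made concrete with a small, hypothetical skill registry: each skill is a self-contained function, and a dispatcher selects one based on the user's intent. The names (`skill`, `dispatch`) and toy implementations are illustrative, not any framework's real API:

```python
# Hypothetical skill registry: each skill is a modular, reusable
# capability the agent can discover and invoke by intent.
SKILLS = {}

def skill(name):
    """Decorator that registers a function as a named skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("summarize")
def summarize(text: str) -> str:
    # Toy implementation: a real skill would call a model or service.
    return text.split(".")[0] + "."

@skill("word_count")
def word_count(text: str) -> str:
    return f"{len(text.split())} words"

def dispatch(intent: str, text: str) -> str:
    """Select and run a skill by intent; unknown intents fail loudly."""
    if intent not in SKILLS:
        raise KeyError(f"No skill registered for intent '{intent}'")
    return SKILLS[intent](text)

print(dispatch("word_count", "agents compose skills"))  # 3 words
```

Because each skill lives behind a stable name, one can be improved or replaced without touching the dispatcher or the other skills — the modularity benefit described above.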
To help you and
me better understand the differences between agents and skills, I
created the image below.
About Solo.io
Christian
works for Solo.io, which is a leader in cloud-native
networking. It is a privately held company founded in 2017 and
headquartered in Cambridge, Massachusetts. Its core mission is to
help organizations securely connect, scale, and monitor services and
APIs across hybrid and multi-cloud environments, particularly
Kubernetes-based workloads.
Solo.io's
flagship offerings are branded under the Gloo platform. These
include API gateways (Gloo Gateway and Gloo AI Gateway), service-mesh
management (Gloo Mesh), and developer-focused tooling such as the
Spotlight Developer Platform, built on Backstage. These
solutions were designed to simplify the complex networking and
security challenges inherent in modern microservices architectures.
They do this by
unifying connectivity, observability, and policy enforcement across
clusters and clouds. The company is deeply invested in the
open-source ecosystem: it has donated projects to, and remains a key
contributor to, open-source projects including Istio, kagent, Envoy,
Kubernetes, and Cilium, and it is well regarded in, supportive of, and
influential within the CNCF community.
Since its launch
in 2017, Solo.io has experienced strong growth, earned
industry recognition, and secured substantial venture backing. It
achieved unicorn status after a $135 million Series C funding
round, which valued it at $1 billion. It has raised its capital from
VC luminaries such as Altimeter Capital, Redpoint Ventures, and True
Ventures.
Solo's products
are used by many Fortune 2000 organizations, including Grainger,
TomTom, FICO, and Fitch Ratings, as well as others looking to
modernize their API infrastructure and enhance their API security.
As a player in
the cloud-native community, Solo.io has helped many of its customers
with AI-ready infrastructure and internal developer platforms to
empower and navigate the complexities of cloud-native ecosystems.
This includes its foray into AI Agents and skills.
About Solo.io, Agents, and Skills
Solo.io's AI
agent and skills capabilities fall under its kagent and agentgateway
products, which are what Christian spent most of our time discussing,
so I will give you some background on them.
kagent -- Cloud-Native Framework for AI Agents
Kagent is an
open-source, Kubernetes-native framework from Solo.io that
brings agentic AI directly into cloud-native infrastructure. It
lets DevOps and platform teams build, deploy, and run autonomous AI
agents within Kubernetes clusters to automate tasks such as
troubleshooting, configuration management, performance optimization,
and other multi-step operations across services and workloads.
The framework
includes support for agent reasoning and planning, integration
with popular agent frameworks such as the Google Agent Development Kit
and LangGraph, and the tools (functions) agents use to interact with
their environment, which they access through standardized protocols
such as the Model Context Protocol (MCP). The basic kagent framework
is free, but Solo.io also offers an enterprise distribution that adds
enhanced management, observability, security policies, and pre-built
agents for everyday cloud-native tasks, helping teams move agents
beyond the prototype stage and into production.
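Treating an agent as a declarative, validated resource — rather than ad hoc code — is the general pattern here. The sketch below shows that pattern in plain Python; the field names and validation rules are my own invention for illustration, not kagent's actual resource schema:

```python
# Hypothetical sketch of an agent declared as a validated, declarative
# resource -- the general pattern, not kagent's actual schema.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    name: str
    model: str                      # which LLM the agent uses
    system_message: str             # the agent's standing instructions
    tools: list = field(default_factory=list)  # tools the agent may call

def validate(spec: AgentSpec, approved_models: set) -> list:
    """Return a list of policy violations; empty means deployable."""
    errors = []
    if spec.model not in approved_models:
        errors.append(f"model '{spec.model}' is not on the approved list")
    if not spec.system_message.strip():
        errors.append("system_message must not be empty")
    return errors

spec = AgentSpec(
    name="log-triage",
    model="gpt-4o",
    system_message="Diagnose failing pods from their logs.",
    tools=["fetch_logs"],
)
print(validate(spec, approved_models={"gpt-4o"}))  # []
```

A Kubernetes-native framework expresses the same idea as a custom resource that a controller validates and reconciles, which is what makes agents operable with the cluster's existing tooling.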
In kagent,
skills are treated as first-class capabilities that guide an AI
agent's planning and execution of tasks. Unlike raw tools that
perform a specific function (e.g., fetch logs), skills are
higher-level descriptions of what an agent is capable of doing to
achieve a goal. They help structure agent behavior by influencing
tool selection, planning, and autonomous decision-making, shaping how
agents interpret user intent and turn it into defined actions. I do
need to mention that Solo.io's ecosystem also includes
agentregistry. This registry project helps manage and share skills,
agents, and MCP tool servers across teams, making these capabilities
discoverable and reusable in production environments.
Agentgateway -- AI-Native Data Plane for Agent
Connectivity
agentgateway
is Solo.io's AI-native connectivity layer. It is a data plane
designed to support communication among agentic systems. Solo.io
developed it because traditional API and AI gateways were not built
for agent-to-agent (A2A) or agent-to-tool (A2T) interactions.
Agentgateway
provides unified support for protocols such as MCP and A2A, enabling
seamless, secure communication between agents, tools, and LLMs across
different environments. It includes features for security, telemetry,
observability, governance, and integration with existing REST APIs as
agent-ready tools, making it easier to route, monitor, and control
agent interactions at scale. It can do this regardless of whether
they occur within Kubernetes or across bare metal, containers, or
VMs. This product underpins a broader “Agent Mesh” architecture
that ensures consistent connectivity regardless of how the agents are
built or deployed.
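The data-plane role described above — mediating every agent-to-tool call, enforcing policy, and recording telemetry — can be sketched as follows. This is my own minimal illustration of the concept; the class and policy shape are not agentgateway's actual API:

```python
# Hypothetical data-plane sketch: a gateway sits between agents and
# tools, enforces a per-agent allow-list, and records telemetry.
# Illustrative only -- not agentgateway's actual API.
class AgentGateway:
    def __init__(self, policy: dict):
        self.policy = policy      # agent identity -> allowed tool names
        self.call_counts = {}     # telemetry: (agent, tool) -> call count

    def route(self, agent: str, tool: str, handler, *args):
        """Forward a tool call only if policy allows it; count every call."""
        allowed = self.policy.get(agent, set())
        if tool not in allowed:
            raise PermissionError(f"agent '{agent}' may not call '{tool}'")
        key = (agent, tool)
        self.call_counts[key] = self.call_counts.get(key, 0) + 1
        return handler(*args)

gw = AgentGateway(policy={"triage-agent": {"fetch_logs"}})
logs = gw.route("triage-agent", "fetch_logs",
                lambda pod: f"logs for {pod}", "web-1")
print(logs)  # logs for web-1
```

Because every interaction flows through one choke point, identity, authorization, and observability come for free regardless of where the agents and tools actually run.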
Conversation with Christian
During my
conversation with Christian, he emphasized the importance of treating
agents and skills as first‑class
Kubernetes resources, elevating them from ad hoc prototypes to
components that can be reliably operated as part of a
production‑grade
platform.
He stressed that enterprise AI adoption today is caught in a tension
between
high-level strategic mandates and the fragmented reality of "Shadow
IT" experimentation. While executives push for rapid AI
integration, development teams scramble to build pilots in isolated
environments, such as local IDEs. To speed things up, they use
insecure methods such as direct calls to external SaaS providers. By
doing this, they create a "scaling wall" in which security
teams ultimately veto promising proofs of concept because they
lack the necessary infrastructure oversight. Transitioning from these
narrow, insecure pilots to a scalable production environment requires
shifting away from a functionality-only mindset toward a structured
approach that meets enterprise-grade governance requirements.
Without a
centralized framework, the proliferation of AI agents threatens to
become an operational and security nightmare that could jeopardize an
entire business. The primary risks include a total loss of
visibility into service dependencies, credential sprawl that
increases the attack surface, and unpredictable cost overruns from
unmonitored model usage. Furthermore, the absence of a control layer
leaves organizations vulnerable to prompt injection attacks and
"jailbreaks" with no mechanism for detection or response.
To avoid these systemic risks, enterprises must implement a "golden
path" for deployment that mirrors the evolution of API
management, utilizing a central registry to catalog approved models
and "skills," codified instructions that ensure probabilistic
AI remains deterministic and compliant with internal company
standards.
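The "golden path" idea — a central registry that a deployment pipeline consults before anything is promoted — can be sketched briefly. The registry contents, statuses, and function below are invented for illustration, not a real product's catalog format:

```python
# Hypothetical central registry sketch: a catalog of approved models
# and skills that a promotion pipeline consults before deployment.
# Names, statuses, and structure are illustrative only.
REGISTRY = {
    "models": {"gpt-4o": "approved", "experimental-llm": "pilot-only"},
    "skills": {"summarize": "1.2.0", "fetch_logs": "0.9.1"},
}

def promotion_check(model: str, skills: list) -> tuple:
    """Gate a deployment: only approved models and cataloged skills pass."""
    if REGISTRY["models"].get(model) != "approved":
        return (False, f"model '{model}' is not approved for production")
    missing = [s for s in skills if s not in REGISTRY["skills"]]
    if missing:
        return (False, f"uncataloged skills: {missing}")
    return (True, "ok")

print(promotion_check("gpt-4o", ["summarize"]))  # (True, 'ok')
print(promotion_check("experimental-llm", []))
```

The parallel with API management is direct: just as an API catalog stopped teams from shipping undocumented endpoints, a skill and model registry stops teams from shipping unvetted agent capabilities.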
He also argued that
the future of enterprise AI lies in the inevitable shift from simple
assistants to fully autonomous agents, making robust Layer 7
application networking a non-negotiable prerequisite. By leveraging
technologies such as service meshes and API gateways, organizations
can establish a unified control layer that manages identity, enforces
secure communication via mTLS (mutual Transport Layer Security),
and provides comprehensive audit trails for every decision an agent
makes. This infrastructure allows for "runtime discovery"
across fractured deployment landscapes, including various cloud
providers and on-premises clusters. Ultimately, by balancing the
inherent flexibility of AI with deterministic controls and rigorous
observability, businesses can safely scale their AI initiatives from
individual experiments to powerful, autonomous enterprise assets.
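One of those deterministic controls — a comprehensive audit trail for every decision an agent makes — is easy to picture as a wrapper around the agent's decision functions. The decorator and decision logic below are my own toy illustration of the idea:

```python
# Hypothetical audit-trail sketch: every decision an agent makes is
# recorded with its inputs and outcome, giving operators a reviewable log.
import time

AUDIT_LOG = []

def audited(agent_name):
    """Decorator that records each call to an agent's decision function."""
    def wrap(fn):
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "agent": agent_name,
                "function": fn.__name__,
                "args": repr(args),
                "result": repr(result),
                "timestamp": time.time(),
            })
            return result
        return inner
    return wrap

@audited("scaling-agent")
def choose_replicas(cpu_load: float) -> int:
    # Toy decision logic: scale out under load, otherwise hold steady.
    return 5 if cpu_load > 0.8 else 2

choose_replicas(0.9)
print(AUDIT_LOG[0]["result"])  # 5
```

In production the log would ship to a tamper-evident store rather than an in-memory list, but the principle is the same: no agent decision happens without a record of who decided what, with which inputs.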
Final Thoughts
I covered a lot
of ground in this article and in my discussion with Christian. Still,
the gist of both centered on transitioning AI agents and skills from
experimental prototypes into production-grade enterprise resources.
Solo.io addresses
the challenge of managing these at scale through its Gloo platform
and newer "agentic" products, such as kagent, a
Kubernetes-native framework for deploying autonomous agents, and
agentgateway, a connectivity layer designed explicitly for
agent-to-agent (A2A) and agent-to-tool (A2T)
interactions.
On a practical
note, Christian said that without a centralized infrastructure, the
rise of "Shadow IT" AI experiments creates significant
security risks, including credential sprawl, unpredictable costs, and
a lack of visibility into service dependencies. To overcome this,
enterprise AI must adopt a deployment path that mirrors the
evolution of API management, using a central registry to ensure
that probabilistic AI remains deterministic and compliant. By
leveraging service meshes and API gateways, organizations can
establish a unified control layer that enforces secure mTLS
communication and maintains audit trails, ultimately allowing
businesses to scale AI initiatives into powerful, autonomous
enterprise assets safely.
You can get more
information about Solo.io
here.