KubeCon 2026 EU Pre-event Recap

In a previous article, I laid out the reasons "Why I Need to Attend KubeCon Europe 2026." After the first day of the event, I can say my reasons were spot on, and the conference exceeded the rather lofty expectations I had for it.

The conference ran from March 23-26, 2026, at RAI Amsterdam. It featured hundreds of sessions, keynotes, and vendors covering everything from AI usage and implementation to platform engineering. In follow-up articles, I will cover the keynotes and announcements made during the conference and provide a summary of some of the discussions I had with vendors at the show. But, before delving into my recap of the official conference, I want to cover a co-located event that I attended the day before KubeCon officially kicked off.

Pre-event Recap
The day before the conference officially kicked off, there were about a dozen pre-events.

The pre-event I chose to attend was titled "Virtual Machines (VM) on Kubernetes Day: Bridging the gap between VMs and Containers." During this Portworx-hosted session, the presenters began with an introduction to running VMs on Kubernetes (K8s), then delved into factors for a successful VM migration to K8s, platform strategies for running VMs on K8s, and use cases for doing so.

I enjoyed this event because it focused not on a specific technology but on one of the bigger issues we are seeing: companies reevaluating their data center strategies.

For background, Portworx, the company that put on the event, is no stranger to K8s and was one of the original cloud-native storage platforms for it. The company was acquired by Pure Storage a few years back. Its product addressed one of the more pressing issues at the birth of K8s: providing persistent, container-aware storage for a platform that was, at the time, geared toward stateless, ephemeral workloads. Portworx has evolved over the years and now enables enterprises to run stateful applications with enterprise features such as high availability, data protection, backup, and disaster recovery across on-premises and cloud infrastructures.

The event kicked off by discussing how, over the past two decades, the enterprise data center has been VM-centric. But over the last two years, the landscape has shifted, and IT leaders at companies both large and small are now facing looming, sometimes very expensive, renewals of their existing data center infrastructure software. They are at a crossroads: continue with their legacy VM-based data center architecture or try something different. Many of these companies see Kubernetes as the future of their data centers.

Due to these changes, the transition is coming up faster than many would have liked, and they are looking to tools like KubeVirt, an open-source Kubernetes extension for running and managing virtual machines alongside containers in the same K8s cluster, to help them complete the changeover.
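To give a concrete sense of what "a VM as a Kubernetes object" looks like, here is a minimal, illustrative KubeVirt VirtualMachine manifest. This is a sketch only; the VM name and container-disk image are hypothetical placeholders, not something shown at the event:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                # hypothetical name
spec:
  running: true                # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:       # boot disk shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

Once KubeVirt is installed in a cluster, a manifest like this is applied with `kubectl apply -f vm.yaml`, and the `virtctl` CLI can start, stop, or open a console to the VM, so the VM follows the same declarative workflow as any other Kubernetes resource.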

The presenters shared stories showing this approach is viable, such as one organization that migrated 120,000 VMs to a Kubernetes-native stack. This signals that, for many organizations, the "Everything-on-K8s" approach is feasible, and that extending an orchestrator once reserved for containers to virtual machines is doable, even for cutting-edge workloads like AI.

The presenters said that many of the companies they have spoken with are renewing their existing data center infrastructure contracts, but with a 24- to 36-month exit ramp rather than a continuation plan. Their strategy seems to be to de-risk the move: rather than a "big bang" overnight migration of Tier 0 production workloads, they are starting with Tier 2 and Tier 3 environments (development and QA) to prove out running VMs on K8s with the KubeVirt stack. This lets them validate the technology and become comfortable with it before fully decommissioning their legacy hypervisor.

While the technical building blocks (KubeVirt, CSI, and advanced CNI) are now production-ready, the human element remains a bottleneck to deployment. The transition from the "Click-Ops" world of traditional virtualization to the "GitOps/DevOps" world of Kubernetes requires a massive cultural shift and staff retraining. Understandably, infrastructure teams that have spent the last 20 years mastering GUI-based configuration are often dazed and confused by the shift to K8s' declarative, API-driven workflows.

Furthermore, managing "noisy neighbors" becomes a significant challenge: if one VM saturates the network during a boot storm or a live migration, the entire cluster can suffer. Standard BGP and BPF implementations in K8s are often insufficient for high availability (HA) across multiple data centers, which is why tools like the Calico L2 bridge or Cilium need to be deployed. Basically, the K8s architecture needs to let VMs keep their identities and dependencies, so the application team doesn't have to reconfigure its entire stack just because the underlying hypervisor changed.

Vendor Discussions
KubeCon is more than the CNCF; it is also about the ecosystem surrounding Kubernetes, containers, and other cloud-native technologies. One of the more interesting and enjoyable parts of KubeCon is talking with people and companies on the showcase floor. Below is a recap of discussions I had with a few of the vendors.

Cast AI
Spending time with Leon Kuperman, CTO of Cast AI, I learned about the company's evolution since its founding in 2020 from a Kubernetes cost-optimization tool into a comprehensive platform that automates cloud infrastructure, workload right-sizing, and GPU management.

After our discussion, I felt like I had a better grasp of what Cast AI does and how it empowers its customers. In simple terms, Cast AI is a cloud infrastructure automation company that optimizes Kubernetes environments using AI-driven automation. The SaaS-based platform embeds autonomous agents in Kubernetes clusters to continuously monitor and manage resource allocation, scale workloads, reduce cloud costs, and improve performance. Leon also mentioned that Cast AI can act as a multicloud broker, enabling customers to purchase resources from public cloud providers.

Leon also mentioned that a week before KubeCon, the company released Kimchi, a CLI tool that configures your AI coding assistants to use open-source models hosted by Cast AI. It was designed to reduce AI costs by intelligently offloading tasks from expensive, proprietary models to open-source models running on optimized private clusters. He said that since they started using it internally, they have seen a huge drop in their bill for cloud-based coding assistants. It looks very interesting, and more information about it can be found at libraries.io.

ZEDEDA
One of the more interesting discussions I had was with Said Ouissal, the CEO and Founder of ZEDEDA, as he not only discussed his product line but also how AI is radically changing the IT world.

For background, ZEDEDA specializes in edge computing and orchestration. During my conversation, Said told me about the evolution of their Edge Virtualization Engine (Project EVE), a bespoke, hardware-agnostic (x86, Arm, GPU, RISC-V), API-driven, open-source operating system designed to manage hardware diversity and maintain zero-trust security in rugged, remote environments.

More importantly, he discussed how ZEDEDA's Edge Intelligence Platform takes an AI-driven approach to building and orchestrating agents, models, applications, and infrastructure at the edge. The platform is analogous to VMware's vCenter but designed specifically for the edge, allowing enterprises to manage, update, and monitor nodes across vast distances and varying connectivity states.

Said gave me some real-world examples of how artificial intelligence is moving out of the cloud and onto local devices, particularly using LLMs and autonomous agents. He told me about how it is being used for predictive maintenance of industrial machinery, automated monitoring of oil wells, and even in far-flung fields like AI-enhanced customer experiences in the car wash industry. Ultimately, he emphasized that scalable orchestration and robust security are the primary hurdles to widespread AI adoption at the physical edge of the network, and how his company is helping companies overcome these challenges.

Final Thoughts on Day 0
The momentum behind K8s as a platform for both VMs and containers is strong. I thoroughly enjoyed the event I attended, and in chatting with other attendees that night, I found that they enjoyed their pre-events as well.

Overall, this was a very good and informative event, and I am glad I got to KubeCon a day early to attend it. In my next article, I will recap the keynotes from the first day of KubeCon.
