A Chip off the Virtualization Block

Virtualization isn't just for software; processors need to be virtualization-aware, too. To that end, Intel and AMD are pushing the boundaries of what you can do on a chip.

The virtualization industry is moving forward at a whiplash-causing rate: Just a few years ago, it was all about server consolidation. Now server consolidation is mainstream, and the hot areas are desktop virtualization, client hypervisors and cellphone hypervisors.

Of course, underpinning all that technology is the hardware -- specifically, the processor, which works hand-in-hand with the software to enable that lightning advancement. Without processor cooperation, virtualization's forward progress would sputter to a halt. That's why the two big chipmakers, Intel Corp. and Advanced Micro Devices Inc., are working hard to make their chips more virtualization-friendly.

First, though, it's important to understand why processors need to be specially crafted for virtualization purposes. Virtualization can work on processors that aren't virtualization-aware, but it's a torturous, slow process, and the added overhead strips away almost all of the benefits in a data center.

Torturous Workarounds
"It's very difficult to cleanly virtualize on a CPU architecture that wasn't designed for virtualization," says Steve Grobman, director of business-client architecture for Intel. "If you look at what a virtualization environment needs to do, it's all about transitioning the CPU from the virtual operating environment to the [physical environment] under certain conditions ... So there were ways to do it, but it was extremely complicated and added a lot of verbosity."

It also affected development on the software side, according to Grobman. "What the [virtualization] vendors ended up doing was spending a lot of their time figuring out how to essentially work around the fact that the processor didn't natively support a virtualized environment."

That led to Intel and AMD making the move to add virtualization capabilities to their chips. And now, after years of development, virtualization-tuned processors are a key part of each company's product lines.
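
What does "virtualization in the chip" look like at the lowest level? Both vendors advertise their extensions through the x86 CPUID instruction: Intel VT-x shows up as the VMX bit (leaf 1, ECX bit 5) and AMD-V as the SVM bit (leaf 0x80000001, ECX bit 2). Here's a minimal sketch -- written for GCC or Clang on an x86 machine, and purely illustrative -- that checks for them:

```c
/* Illustrative sketch: detect Intel VT-x or AMD-V via CPUID.
 * Build with GCC or Clang on x86: cc -o vtcheck vtcheck.c */
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1, ECX bit 5: Intel VMX (the core of VT-x). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        puts("Intel VT-x (VMX) present");

    /* CPUID leaf 0x80000001, ECX bit 2: AMD SVM (the core of AMD-V). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        puts("AMD-V (SVM) present");

    return 0;
}
```

Note that the bit only says the silicon has the feature; firmware can still disable it, so hypervisors do further checks before relying on it.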

For a long time, Intel had the server-processor business almost all to itself. That changed in April 2003, when AMD introduced its Opteron line of server chips. AMD, in fact, may have seen the potential of virtualization-tuned processors before Intel, and moved aggressively to add virtualization to a wide range of its processors.

As Margaret Lewis, AMD's director of commercial solutions, notes, "the vast majority of processors we sell, both client and server, have virtualization built in." The only exceptions are the Sempron chip family for low-end desktops and laptops, and the Geode family for embedded devices. That commitment to virtualization on the processor has helped AMD do well in the market, says analyst Matt Eastwood, IDC group vice president for enterprise platforms.

Horse Race
"Initially, AMD had a bit of a lead when virtualization first started," Eastwood says. "You did tend to see more Opteron-based servers, proportionally. Intel still had 80 percent of the market, but AMD was doing better in their overall penetration of virtualization. The percentage of AMD shipments that had virtualization was much higher than Intel."

Eastwood continues: "AMD tended to have a greater presence proportionally in the four-socket and above market than Intel did. Opteron was seen as the platform for consolidation and for very performance-oriented workloads."

That has changed over time, according to Eastwood. "More recently, the needle started to move toward Intel, so that the advantages AMD had with Opteron started to dissipate as Intel caught up."

That's why both chipmakers are desperately seeking a competitive advantage over the other. It's a two-horse race, and Intel has a big lead. AMD, however, believes it has the features that will allow it to pull ahead.

The AMD View
It begins with AMD Virtualization (AMD-V), the company's umbrella term for its virtualization technologies across all processors. "The two main ideas with AMD-V are to cut the overhead associated with virtualization, and to make sure we're helping maintain a good experience for the user -- whether [they're] an IT user or end user in the virtualized world," explains Lewis.

One recent advance in AMD-V, which AMD says Intel doesn't offer, is Rapid Virtualization Indexing (RVI), also known as nested page tables. "RVI helps to set up better virtual-memory management," explains Lewis. Memory management is a core aspect of virtualization; the better it's managed, the better virtualization performs.
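
For the curious, AMD enumerates the hardware feature behind RVI through a dedicated CPUID leaf: leaf 0x8000000A reports SVM capabilities, and EDX bit 0 is the nested-paging (NP) bit. A small illustrative sketch, in the same vein as the earlier one:

```c
/* Illustrative sketch: check for AMD nested paging, the hardware
 * feature behind RVI. CPUID leaf 0x8000000A reports SVM features;
 * EDX bit 0 is NP (nested paging). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(0x8000000A, &eax, &ebx, &ecx, &edx) && (edx & 1u))
        puts("AMD nested paging (RVI) supported");
    else
        puts("Nested paging not reported (or not an AMD CPU)");
    return 0;
}
```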

Lewis calls RVI "a huge step forward," and points to a recent VMware Inc. performance paper to back up her claim. VMware evaluated RVI with Shanghai, AMD's quad-core Opteron processor released last November.

"Some workloads had up to a 42 percent performance increase with RVI. That kind of work on the hardware level is exactly what we need to make virtualization pervasive -- we're reducing the complexity and the overhead. The penalty for running things virtualized becomes more acceptable," Lewis comments.

Intel is plugging its own new memory technology, called extended page tables (EPT). "We move part of what currently happens in software and [creates] some performance overhead into the CPU; so, with extended page tables, you're able to enhance performance," says Grobman.
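
Intel enumerates EPT through model-specific registers that only privileged code can read, but on Linux the kernel distills the answer into an "ept" flag in /proc/cpuinfo (AMD's nested paging appears as "npt"). A Linux-only sketch, again illustrative rather than authoritative:

```c
/* Illustrative, Linux-only sketch: look for the kernel-reported
 * "ept" (Intel) or "npt" (AMD) flags in /proc/cpuinfo. The real
 * enumeration lives in privileged VMX capability MSRs. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[4096];
    int ept = 0, npt = 0;
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) { perror("/proc/cpuinfo"); return 1; }

    /* Feature flags appear on "flags" lines (and, on newer
     * kernels, a separate "vmx flags" line). */
    while (fgets(line, sizeof line, f)) {
        if (strstr(line, "flags")) {
            if (strstr(line, " ept")) ept = 1;
            if (strstr(line, " npt")) npt = 1;
        }
    }
    fclose(f);

    puts(ept ? "Intel EPT reported" : npt ? "AMD NPT/RVI reported"
             : "No second-level paging flag reported");
    return 0;
}
```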

Another advantage touted by AMD is "extended migration," which allows live migration between disparate processors. Lewis explains the benefits: "People need to be able to migrate virtual machines [VMs] easily from one hardware environment to another. [Extended migration] helps virtualization software understand the level of functionality in the processor, and helps the virtualization software set up for moving or migrating running virtual machines between processors. For AMD, you can migrate running virtual machines with VMware from processors that we had out in the market from 2005 to processors that we'll bring out in the market in the foreseeable future."
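
AMD doesn't spell out the mechanism here, but the general idea behind any cross-generation migration scheme is to expose guests to a feature baseline that every host in the pool can honor. The sketch below is conceptual only -- the host names and feature masks are made up -- showing that baseline computed as the bitwise intersection of each host's CPUID feature bits:

```c
/* Conceptual sketch only -- not AMD's implementation. A migration
 * pool can expose guests to the intersection of all hosts' CPUID
 * feature bits, so a VM never sees a feature some host lacks. */
#include <stdio.h>

/* Hypothetical per-host snapshot of CPUID leaf 1 ECX/EDX bits. */
struct host_features { const char *name; unsigned int ecx, edx; };

int main(void)
{
    struct host_features pool[] = {
        { "host-2005-era", 0x0000e3bd, 0xbfebfbff },  /* made-up values */
        { "host-2009-era", 0x0008e3fd, 0xbfebfbff },  /* made-up values */
    };
    unsigned int ecx = ~0u, edx = ~0u;

    for (int i = 0; i < 2; i++) {   /* AND the per-host masks together */
        ecx &= pool[i].ecx;
        edx &= pool[i].edx;
    }
    printf("guest-visible baseline: ecx=%#x edx=%#x\n", ecx, edx);
    return 0;
}
```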

That's important to customers, Lewis says. "They don't want to have these pools of virtual machines that have to be kept siloed off because you can't do live migration between them." She says AMD offers a wider range of live migration for VMware than Intel does.

Economic Hardships
For all those advantages, however, AMD faces challenges that go beyond technology. At the end of January, it reported huge losses for the last quarter of 2008: revenue declined 33 percent to $1.2 billion, and the quarter's net loss reached $1.4 billion. For the full year, AMD reported revenue of $5.8 billion.

Despite the gloomy numbers, IDC's Eastwood isn't predicting AMD's funeral. He says the company was poorly managed in the past, but is overcoming that. "I think they've cleaned up a lot of that. AMD has made a lot of changes to their organization; a lot of [its problems] have to do with the past and how they conducted their business from a capital perspective. That's really improved," he notes.

The fact that AMD is struggling, however, could affect its ability to sell into the virtualization market, Eastwood continues. "The challenge for AMD has been that their business has tended to follow the perceptions of their performance. When there was a perception that they had an advantage, and it was in workloads where performance was the big factor, they tended to do better."

Those advantages may not be enough in the recession, Eastwood says. "Given the economic climate that we're heading into, Intel could very well be seen as the safer choice in terms of long-term viability," he explains. "Certain types of customers will always buy Intel; that's just the way it is. Other customers are going to buy based on their perceptions of performance; those types of customers tend to bounce back and forth. So it's not really clear for 2009 which way that's going to bounce."

Intel has its own issues, too, and economics is part of them. It took a hit in its fourth quarter of 2008, reporting revenue of $8.2 billion -- a decline of 23 percent from the same quarter a year earlier. It also took some public flogging in January owing to the discovery of a flaw in its Trusted Execution Technology (TXT). TXT is currently only in client chips, Grobman says, but it's "absolutely" planned for server-based chips in the future. The natural assumption is that the TXT flaw will be corrected before the technology reaches servers.

Grobman sums up Intel's virtualization vision in one sentence: "To improve robustness and enable [virtualization] vendors to write a very clean software architecture to enable virtualization."

Virtualization Inside Intel
Intel's name for its virtualization technologies is VT. Grobman explains how VT fulfills the company's virtualization vision. "What we've done with our VT architecture is looked at it from the ground up and asked, 'What could we do to our CPU architecture to make it such that you could write a very elegant, clean virtualization layer?'" he says. "For the virtualization vendors, when they use VT, it's a much simpler and therefore inherently more secure [environment] because their code base can be smaller and they can really focus on adding features versus focusing on these workarounds that they previously had to do."

Because of VT, Grobman says, vendors "actually program the CPU with very specific conditions on when it should transfer control from that guest or the virtualized operating environment into the controlling operating environment."
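
Grobman doesn't name a specific interface, but Linux's KVM, which sits directly on top of VT and AMD-V, makes that mechanism visible: the host runs a guest until one of those programmed conditions -- an I/O port access, a halt instruction -- hands control back, surfacing as an exit reason. A minimal sketch of that run-until-exit loop (Linux-only, error handling trimmed for space):

```c
/* Minimal KVM sketch (Linux, VT-x or AMD-V required): run a tiny
 * 16-bit guest until the hardware hands control back to the host.
 * The exit reasons below are the "specific conditions" a
 * hypervisor programs into the CPU. Error handling trimmed. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    /* Guest code: write 'A' to port 0x3f8, then halt. */
    const uint8_t code[] = { 0xba, 0xf8, 0x03,  /* mov dx, 0x3f8 */
                             0xb0, 'A',         /* mov al, 'A'   */
                             0xee,              /* out dx, al    */
                             0xf4 };            /* hlt           */

    int kvm   = open("/dev/kvm", O_RDWR);
    int vm    = ioctl(kvm, KVM_CREATE_VM, 0);
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof code);

    struct kvm_userspace_memory_region region = {
        .guest_phys_addr = 0x1000, .memory_size = 0x1000,
        .userspace_addr  = (uint64_t)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    struct kvm_run *run = mmap(NULL, ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0),
                               PROT_READ | PROT_WRITE, MAP_SHARED, vcpu, 0);

    struct kvm_sregs sregs;
    ioctl(vcpu, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0; sregs.cs.selector = 0;   /* flat real mode */
    ioctl(vcpu, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs = { .rip = 0x1000, .rflags = 2 };
    ioctl(vcpu, KVM_SET_REGS, &regs);

    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);                /* enter the guest... */
        switch (run->exit_reason) {             /* ...until it exits */
        case KVM_EXIT_IO:                       /* guest touched a port */
            putchar(*((char *)run + run->io.data_offset));
            break;
        case KVM_EXIT_HLT:                      /* guest halted: done */
            putchar('\n');
            return 0;
        }
    }
}
```

Run on a VT- or AMD-V-capable Linux box, this prints the 'A' the guest wrote to the port and exits when the guest halts -- each step a hardware-mediated transfer of control of exactly the kind Grobman describes.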

Another technology Intel champions is its Virtualization Technology for Directed I/O (VT-d). In a nutshell, VT-d facilitates remapping of devices into virtual environments.

Grobman gives an example of how it works. "Let's say you're running a server and you have a few virtual servers that have very intensive networking workloads, and you have multiple network interfaces on a machine," he says. "You could use this technology to remap an entire physical network adapter into a virtual environment, so that as the IT guy you can choose -- you can let the virtual machine manager take the network and virtualize it and share it; or, if I have a bunch of these, I can say, 'OK, I know this VM is consuming a lot of network bandwidth, and I'd rather give it exclusive access to its own NIC.'"

It has application beyond servers, Grobman claims: "In a client scenario, you can use this technology for something like remapping graphics. If you wanted to give a virtualized environment direct access to fully accelerated 3-D graphics without compromising anything -- or other I/O devices -- it enables you to pick a device, a piece of hardware on the platform, and directly map it into one of the virtual environments."
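
One practical footnote: on Linux, whether VT-d (or AMD's equivalent, AMD-Vi) is actually switched on -- a prerequisite for handing a NIC or graphics adapter to a single VM -- can be checked by looking for populated IOMMU groups in sysfs. A small, Linux-only sketch:

```c
/* Linux-only sketch: if /sys/kernel/iommu_groups has entries, the
 * IOMMU (Intel VT-d or AMD-Vi) is enabled and device passthrough
 * of the kind Grobman describes is possible. */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    int groups = 0;
    DIR *d = opendir("/sys/kernel/iommu_groups");
    if (!d) { puts("IOMMU not enabled (or not Linux)"); return 1; }

    for (struct dirent *e; (e = readdir(d)); )
        if (e->d_name[0] != '.')   /* skip "." and ".." */
            groups++;
    closedir(d);

    printf("%d IOMMU group(s) found%s\n", groups,
           groups ? ": passthrough is available" : "");
    return 0;
}
```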

Virtually Everything?
With these advancements from Intel and AMD, are we heading toward the day when any and all aspects of a data center, including OSes, apps and client devices, can be virtualized? AMD's Lewis won't predict, but says it could happen. "In the business-computing world, we're getting to the realm where there isn't an application that can't be virtualized," she says. "We've talked to customers that are virtualizing Oracle databases, and SAP and ERP programs. Everybody did consolidation, but users are now moving toward bigger systems and getting more confident as virtualization matures."

Intel's Grobman is similarly guarded. "There's additional complexity introduced by a virtualization layer versus having discrete hardware. With the ability of hardware-based virtualization, the complexity of that virtualization software has been reduced," he explains.

But, he adds, "By the sheer nature that you have that extra complexity in the system, I don't know that I'd go as far as saying that it's 100 percent identical to running on different physical machines. Therefore, it's going to need IT looking at the sophistication and maturity of the virtualization products, and on a case-by-case basis determining whether they want to do that segmentation with virtualization software or with physical segmentation."

The question is whether these types of advances will continue at this rate in the current difficult economic climate. Will research and development be hampered by layoffs? Both companies say no. AMD, for example, has already announced that it will have a six-core processor out by the end of this year, and Lewis says the company will continue to add cores.

Adding cores by itself isn't enough, however. "You can't add computational capability or cores without an eye to what the power efficiency is," Lewis comments. "We're continually balancing what we can do to deliver more computational capabilities, but within acceptable power envelopes for our users. Users can't go back to days where computational capabilities were directly linked to power consumption of servers. That's one thing that we balance."

Intel already has six-core processors out as part of its Xeon 7400 series, putting it ahead of AMD in the core race. And it has already announced that its new series of Nehalem chips will support up to eight cores.

Recession-Proof?
Still, Eastwood warns both companies against overproducing, especially in the recession. "If you're AMD or Intel, you need to make sure that you're not overdoing it in terms of SKUs for your processors. The volumes just won't be there to take it too deep at the processor level. Server volumes are at 8 million per year, and probably 12 to 15 million server processors per year."

That doesn't mean Eastwood thinks virtualization adoption will slow, however; quite the contrary. "In our surveys, 70 percent to 80 percent [of respondents] describe themselves as mainstream adopters," he says. In addition, the average amount of an IT infrastructure that's virtualized, according to IDC, is about 30 percent. Eastwood expects that number to rise to 50 percent in the next year. "We expect adoption to be very strong in the next year," he says.

That doesn't necessarily translate into more processors -- instead, it could be an increase in "VM density," or the number of VMs per physical host. "This is probably the biggest wildcard we'll look at in 2009 and beyond," says Eastwood. "Today, the average host has five or six VMs, but the people we're speaking with say they feel comfortable going to 10 or 12; frankly, given the condition of the economic climate, we can see people getting to 10 or 12 faster than they would've liked. We think VM densities will go up, which could slow down demand for servers," and hence, processors.

That will certainly affect Intel and AMD going forward, Eastwood predicts: "While there will be an emphasis on projects that save money, like consolidating and virtualizing, there will be some pullback in other areas, particularly new applications and new IT services that may have been on the roadmap to invest in. Now [companies may] say, 'We're going to wait on this until next quarter, next year, something like that.'"

Well Positioned
Those are short-term concerns. In the long term, Eastwood and IDC have a much rosier outlook for the chip manufacturers, and the future of processor-based virtualization. "There's still a huge amount of road to run here," Eastwood says. "If you look at the installed base, it's far less virtual. There's still a lot of work to do and a lot of potential to push virtualization adoption forward, so for AMD and Intel I think there's still a pretty good opportunity."
