Virtualization - Ask the Experts #3
by Anand Lal Shimpi on September 2, 2010 9:00 PM EST - Posted in
- IT Computing
- Virtualization
- Intel
Our Ask the Experts series continues with another round of questions.
A couple of months ago we ran a webcast with Intel Fellow Rich Uhlig, VMware Chief Platform Architect Rich Brunner and myself. The goal was to talk about the past, present and future of virtualization. In preparation for the webcast we solicited questions from all of you; unfortunately, we only had an hour during the webcast to address them. Rich Uhlig from Intel, Rich Brunner from VMware and our own Johan de Gelas all agreed to answer some of your questions in a six-part series we're calling Ask the Experts. Each week we'll showcase three questions you guys asked about virtualization and provide answers from our panel of three experts. These responses haven't been edited and come straight from the experts.
If you'd like to see your question answered here leave it in the comments. While we can't guarantee we'll get to everything, we'll try to pick a few from the comments to answer as the weeks go on.
Question #1 by AnandTech user mpsii
I am having a hard time trying to determine the best hardware for server and desktop virtualization. There do not seem to be any benchmarks showing the performance comparison of, for example, a Phenom II X6 vs Core i7 (quad) processor.
Answer #1 by Johan de Gelas, AnandTech Senior IT Editor
A Phenom II X6 "Thuban" is almost identical to an "Istanbul" Opteron 2400. The Core i7 9xx is the little brother of the quad-core Xeon 5500. In both cases, the only real difference in a single-socket system is that the memory controller of the desktop CPUs runs with unbuffered memory instead of buffered ECC memory. So the desktop chips are slightly faster, as the memory latency is a bit lower. We have performed quite a bit of virtualization benchmarking on both the Xeons and Opterons, running Microsoft's Hyper-V and VMware's ESX/vSphere, so you can get a rough idea of how the desktop CPUs compare running virtualized applications (http://www.anandtech.com/tag/IT).
Question #2 by Gary G.
If Type 1 virtualization runs a hypervisor on bare metal to host a guest OS, and type 2 virtualization features an OS hosting a hypervisor to host a guest OS, when will we have a type 3 virtualization? Type 3 virtualization would be physical hardware abstraction sufficient to run multiple physical hardware instruction sets. In this case, the hypervisor could run Sparc on Intel or vice versa. Is that part of our future history?
Answer #2 by Rich Brunner, VMware Chief Platform Architect
It is certainly technically feasible, but it may not run at the performance you want even with clever binary translation tricks. Folks have debated this for a while as a bridge to bring legacy, mission-critical workloads based on RISC architectures to more commonly available commodity hardware. I can't rule it out in that context, but I do not see it as a trend for new workloads.
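To see why performance is the sticking point, here is a minimal interpreter sketch in plain C, using a made-up two-instruction guest ISA rather than any real SPARC decoder: every guest instruction costs a fetch, a decode and a dispatch branch on the host before any useful work gets done. Binary translation amortizes that overhead by caching translated blocks of host code, but the translation work itself never disappears.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 2-instruction guest ISA: opcode in the top byte,
 * register indices packed below. Real RISC decoding is far richer. */
enum { OP_ADD = 0x01, OP_HALT = 0xFF };

typedef struct {
    uint32_t regs[8];
    uint32_t pc;
} guest_cpu;

/* Interpret one guest instruction per loop iteration: fetch, decode,
 * dispatch -- several host instructions of overhead per guest instruction. */
static void interp_run(guest_cpu *cpu, const uint32_t *code, size_t len)
{
    while (cpu->pc < len) {
        uint32_t insn = code[cpu->pc++];              /* fetch  */
        uint8_t  op   = insn >> 24;                   /* decode */
        uint8_t  rd   = (insn >> 16) & 0x7;
        uint8_t  rs   = (insn >>  8) & 0x7;
        switch (op) {                                 /* dispatch */
        case OP_ADD:  cpu->regs[rd] += cpu->regs[rs]; break;
        case OP_HALT: return;
        default:      fprintf(stderr, "bad opcode\n"); return;
        }
    }
}

int main(void)
{
    /* r0 += r1 twice, then halt */
    const uint32_t code[] = { 0x01000100, 0x01000100, 0xFF000000 };
    guest_cpu cpu = { .regs = {0, 21}, .pc = 0 };
    interp_run(&cpu, code, 3);
    printf("r0 = %u\n", cpu.regs[0]);                 /* prints 42 */
    return 0;
}
```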
Question #3 by Aaron P.
As the number of CPU cores increases, so does the consolidation ratio. Can you discuss what initiatives are being pursued that seek to limit the impact of a server failure for a machine hosting potentially hundreds of virtual servers?
Answer #3 by Rich Uhlig, Intel Fellow
Broadly speaking, there are a couple of ways to address the challenge: you can develop approaches to recover from and correct faults when they happen, or you can develop mechanisms to contain faults to limit their effects.
ECC memory is a well-known approach for detecting and correcting memory faults, and is a good example of the first approach. The same principle of fault recovery can be applied to other resources in the platform beyond memory, such as the system interconnect for coherency and I/O (e.g., the use of CRC to detect link-level errors and trigger packet retransmission in hardware).
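As a toy illustration of the detect-and-correct idea, here is a minimal Hamming(7,4) encode/decode sketch in C; real server ECC operates on 64-bit words with wider SECDED or chipkill-style codes, but the syndrome mechanism is the same in spirit.

```c
#include <stdint.h>
#include <stdio.h>

/* Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
 * Bit layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4 */
static uint8_t hamming_encode(uint8_t nibble)
{
    uint8_t d1 = (nibble >> 0) & 1, d2 = (nibble >> 1) & 1;
    uint8_t d3 = (nibble >> 2) & 1, d4 = (nibble >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;
    uint8_t p2 = d1 ^ d3 ^ d4;
    uint8_t p3 = d2 ^ d3 ^ d4;
    return (uint8_t)(p1 << 0 | p2 << 1 | d1 << 2 | p3 << 3 |
                     d2 << 4 | d3 << 5 | d4 << 6);
}

/* Decode a codeword; correct a single flipped bit if the syndrome is nonzero. */
static uint8_t hamming_decode(uint8_t cw)
{
    uint8_t b[8] = {0};
    for (int i = 1; i <= 7; i++) b[i] = (cw >> (i - 1)) & 1;
    uint8_t s = (uint8_t)((b[1]^b[3]^b[5]^b[7]) << 0 |
                          (b[2]^b[3]^b[6]^b[7]) << 1 |
                          (b[4]^b[5]^b[6]^b[7]) << 2);
    if (s) b[s] ^= 1;                      /* syndrome = position of bad bit */
    return (uint8_t)(b[3] | b[5] << 1 | b[6] << 2 | b[7] << 3);
}

int main(void)
{
    uint8_t cw = hamming_encode(0xB);      /* data = 1011 */
    cw ^= 1 << 4;                          /* flip one bit "in memory" */
    printf("recovered 0x%X\n", hamming_decode(cw));   /* prints 0xB */
    return 0;
}
```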
When faults can’t be corrected, it is still useful to contain them to support higher-level recovery algorithms. This can be done by tagging uncorrectable data errors with “poison” bits that follow the data through the system (called poison forwarding). If the poisoned data is later used, hardware raises a machine-check exception to system software (OS or hypervisor), along with information about the nature of the fault. Ideally, this kind of hardware support enables a hypervisor to perform a more targeted action in response to a fault (e.g., to shut down only the VMs affected by a given fault, rather than bringing down the entire platform and all the VMs running on it).
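A rough sketch of what that targeted action could look like from the hypervisor's side, in C with entirely made-up structures and names (no real hypervisor exposes exactly this interface): the machine-check handler maps the poisoned physical address back to the VM that owns it and tears down only that guest, falling back to a full platform stop only when no owner can be identified.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Hypothetical hypervisor bookkeeping -- illustrative only. */
typedef struct { int id; uint64_t pa_base, pa_size; bool running; } vm_t;

static vm_t vms[] = {           /* which host-physical range each VM owns */
    { 1, 0x100000000ull, 0x080000000ull, true },
    { 2, 0x180000000ull, 0x080000000ull, true },
};

static vm_t *owner_of(uint64_t host_pa)
{
    for (size_t i = 0; i < sizeof vms / sizeof vms[0]; i++)
        if (host_pa >= vms[i].pa_base &&
            host_pa <  vms[i].pa_base + vms[i].pa_size)
            return &vms[i];
    return NULL;
}

/* Called when a machine check reports an uncorrectable but contained
 * (poisoned) data error at a known physical address. */
static void handle_poison_mce(uint64_t host_pa, bool address_valid)
{
    vm_t *victim = address_valid ? owner_of(host_pa) : NULL;
    if (victim) {
        victim->running = false;                    /* kill only that guest */
        printf("VM %d shut down; others keep running\n", victim->id);
    } else {
        printf("fault not attributable: whole platform must go down\n");
    }
}

int main(void)
{
    handle_poison_mce(0x1A0000000ull, true);   /* lands in VM 2's memory */
    return 0;
}
```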
Intel has added a rich set of new features to our EX server product line that extend the kinds of faults that can be corrected or contained, including QPI link recovery and poison forwarding, support for PCIe advanced error reporting, and memory mirroring, among others. This collection of features is all part of the “RAS” (short for “Reliability, Availability and Serviceability”) capabilities of our EX-class platforms, and we plan to extend and improve them over time.
The above features go toward improving the reliability of a single server, but sometimes you can lose an entire platform (e.g., due to a loss of power). In this case, an interesting emerging solution is to use virtualization to maintain a replica of VM state on another platform, either by replaying its execution or by checkpointing the VM’s state as it runs. In the event of a full platform failure, workload execution can resume on another platform based on the state of the replica VM. Virtualization also pairs nicely with other established methods for high availability, like cluster-based failover solutions. In this case, a standby machine in a failover cluster can be provided by a VM, rather than having to devote a full physical machine to that purpose.
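Here is a very rough sketch of the checkpoint-based flavor in C, with a toy in-memory "VM state" standing in for what a real implementation would track via dirty-page logging: the primary periodically copies its state to a replica, and after a simulated platform loss the replica resumes from the last checkpoint, losing at most one interval of work.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-in for a VM's full state; real systems track dirty pages
 * and ship only the differences since the last checkpoint. */
typedef struct { uint64_t guest_work_done; uint8_t memory[4096]; } vm_state;

static void checkpoint(const vm_state *primary, vm_state *replica)
{
    memcpy(replica, primary, sizeof *replica);     /* ship state to standby */
}

static void run_interval(vm_state *vm)
{
    vm->guest_work_done += 100;                    /* guest makes progress  */
}

int main(void)
{
    vm_state primary = {0}, replica = {0};

    for (int epoch = 0; epoch < 5; epoch++) {
        run_interval(&primary);
        checkpoint(&primary, &replica);            /* every epoch boundary  */
    }
    run_interval(&primary);                        /* work since last checkpoint */

    /* Simulated platform failure: primary is gone, standby takes over. */
    printf("primary had done %llu units, replica resumes at %llu\n",
           (unsigned long long)primary.guest_work_done,
           (unsigned long long)replica.guest_work_done);
    return 0;
}
```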
As we see consolidation ratios increase over time, I’d expect to see hardware mechanisms for fault recovery and containment to co-evolve with software (hypervisor) use of those mechanisms to provide higher-level properties of service availability and system fault tolerance, both within and across physical platforms.
15 Comments
yyrkoon - Friday, September 3, 2010
About the second question, and the answer given: granted, this was not the answer I really wanted to hear, but it seems to me that "we" rarely get exactly what "we" want, especially where computer hardware/software is concerned. The way I see it, at minimum embedded systems development could benefit from this greatly. Not everyone is a professional in this field, and lots of (possible) budding embedded developers *could* potentially get ahead without having to buy actual hardware. Of course, with *NIX you have tools such as QEMU, and probably a plethora of other tools that I am unaware of. Of course, I am far from being an expert in the embedded systems design field.
One example I could think of after talking to a friend who has been an electronics engineer for more than 30 years would be this: take the average arcade board from the '80s. Some, or possibly many, ran processors similar to, let's say, an 8085 (yes, that's right, not an 8086 or 8088), and then in a lot of cases had a ton of custom hardware all doing various things while working with the main processor. Really, not too dissimilar from a modern computer system, but definitely not as standardized. These "systems" did not really run an OS as we now know it, but perhaps did have some form of a boot loader. Again, I am not an expert, but I really cannot see any processor not having at least a minimalist boot loader to set up all the accompanying/attached hardware.
Now, how cool would it be to actually be able to emulate that processor, while at the same time being able to construct your custom hardware (in software)? I am sure someone can see where I am going with this, and I am sure it is not too much of a stretch to think that *someone* could already be doing this in-house. We already (sort of) have this with FPGAs, but from the angle I am coming from, it is not really the same thing. For starters, FPGAs are less than ideal in many cases, for different reasons, even *if* they can be emulated entirely in software.
Maybe what I am suggesting is out of scope with this discussion as some may see it, seeing as emulation and virtualization are two different things: one is a virtualization of hardware, while the other is a virtualization of software (simply put). Also, I do know that what I am proposing here is very complex. It would not be a simple thing to do, but it also is not impossible.
With VMware, for example, there is already hardware emulation happening on some level. Perhaps not on the processor front, but with various other devices; even so, I am fairly sure there is at minimum a good bit of processor abstraction happening. It is also true that we *could* technically run, let's say, Debian on top of another software architecture (HVM or not), and then run QEMU within that "VM". Somehow, though, that just seems overcomplicated to me, and less than ideal.
Anyways, yeah, I do not know. I am basically putting my thoughts into writing. Maybe someone out there with more experience can set me straight?
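To make the idea a bit more concrete, here is a minimal sketch in plain C (made-up register layout, not tied to any real board) of the usual trick emulators use for custom hardware: model each chip as a memory-mapped device with read/write callbacks hooked into the emulated CPU's load/store path.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical custom chip with two registers, modeled in software. */
#define REG_DATA    0x00   /* write a value        */
#define REG_COMMAND 0x04   /* write 1 = "do work"  */

typedef struct {
    uint32_t data;
    uint32_t result;
} toy_chip;

/* Callbacks the emulated CPU core would call on loads/stores that hit
 * this device's address window -- the same pattern QEMU and MAME use. */
static uint32_t chip_read(toy_chip *c, uint32_t offset)
{
    return (offset == REG_DATA) ? c->result : 0;
}

static void chip_write(toy_chip *c, uint32_t offset, uint32_t value)
{
    if (offset == REG_DATA)
        c->data = value;
    else if (offset == REG_COMMAND && value == 1)
        c->result = c->data * 2;          /* whatever the "hardware" does */
}

int main(void)
{
    toy_chip chip = {0};
    chip_write(&chip, REG_DATA, 21);      /* guest code stores to the chip */
    chip_write(&chip, REG_COMMAND, 1);    /* kicks off the operation       */
    printf("chip returned %u\n", chip_read(&chip, REG_DATA));  /* 42 */
    return 0;
}
```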
miteethor - Friday, September 3, 2010
If you want arcade boards emulated, look at MAME; it already runs thousands of different custom arcade boards in software.
yyrkoon - Friday, September 3, 2010
Yeah, thanks for that information. However, I was already aware of MAME, and what I was talking about was just an example. What I was getting at was having the ability to create custom hardware (in software) to assist with various aspects of design. While on the subject of MAME, though: I think MAME is a perfect example of why someone may want to design in software and then have the final implementation done in hardware. The hardware technology used to make these arcade boards is ancient by today's standards, yet it runs the "application" much faster than it can be emulated on a modern (and really far more powerful) system. A major difference here, of course, is that an arcade board is a specific-purpose implementation, whereas an x86/x64 platform was designed with general purpose in mind.
So, running with this hypothetical example: it might be possible to run an ARM-based VM on an x86/x64-based host. It might also be conceivable that there could be various platform "plugins", where you could pick the exact platform to more closely match your target system. Then you would need a boot loader, and possibly an OS. After that, it would just be a simple matter of developing driver modules for your own purposes.
It may also be conceivable that the same thing could be done with a PIC or 8051 microcontroller, but I think it would get much harder once you start thinking of processors that do not necessarily have a standard platform, or even "require" an OS, period.
Once again, this is an idea of mine that I think would be really cool, if it ever came to life. Everyone has ideas, and hey perhaps someone does have a better way of going about the same thing. It is just an idea :)
marraco - Friday, September 3, 2010
What about the consumer market? Enabling virtualization at home would allow a single machine to serve terminals for each member of the household.
Why is it still not available?
nafhan - Friday, September 3, 2010
Because any advantages in that setup are minimal compared to the cost and complexity of getting such a system set up versus running the OS directly on independent pieces of commodity hardware. That's also why most businesses (even large ones, where the admin advantages would be most pronounced) don't use configurations like that.
I have heard of people setting up "kiosk"-type PCs where the user boots directly into a VM. A new copy of the VM gets loaded on startup and deleted on shutdown/logout each time the machine is used. This is more for security than ease of use, though.
marraco - Saturday, September 4, 2010
I completely disagree. When I was a student and shared a computer with my sister (she studied architecture), I set up two virtual machines using Thinsoft Betwin on a Pentium II system.
It allowed me to save the cost of a second case, motherboard, processor, memory, optical drive and hard disk.
It also allowed me to invest in a more powerful computer, with better components all around.
We both ran heavy workloads. The only hurdle was temporary freezes when somebody accessed the hard drive.
Look at this photo:
http://iis2004.blogsome.com/images/cite09.jpeg
You see a common cyber café.
You see lots of wasted money on unused hardware. A mountain of wasted money. People just use those computers to access web pages.
The initial investment is huge, and most of the computers sit unused for long stretches. Energy costs run wild. Those places get hot very fast when they fill up with users, and cooling costs go through the roof.
Frequently a user messes up a computer and it stops working, adding costs from lost clients and maintenance.
Schools have similar problems.
An i7 920 computer with 12 GB of RAM and a fast SSD could replace up to 7 of those computers, reducing energy costs by a lot and reducing the space used by the machines.
When a user messes up a computer, you just start a fresh copy of the virtual machine.
When fewer people are using the system, each of them gets access to a more powerful machine.
It would require small modifications to existing virtualization software, and it would open access to new markets.
Some cybercafés are used for gaming:
http://www.osscc.org/wp-content/uploads/2007/04/cy...
But those machines have only integrated video, with low requirements, and I think that NVIDIA (or AMD) would be interested in replacing that integrated video with a small number of more powerful discrete cards.
The last two GeForces I bought included trial copies of Betwin, and the video card makers know that their cards have already reached a level of performance that needs more monitors connected to justify the processing power, at least for the common user.
So they would be interested in working with virtualization companies to develop virtualization capabilities. It's a perfect fit for massively parallel architectures.
pkoi - Tuesday, September 7, 2010
I like your post. I've always been a big fan of Betwin's possibilities, but never implemented it to its fullest.
chukked - Sunday, September 5, 2010
We do use our single machine as two machines, my brother and I simultaneously. I have the following setup:
processor = intel core 2 duo E8300 (supports VT-x, VT-d, execute disable bit)
mainboard = intel DQ45CB (supports VT-x, VT-d)
graphics card = ATI Radeon 5670 1GB (lets me use two displays at full resolution)
RAM = 4GB
HDD1 = Seagate Barracuda, 500GB
HDD2 = WD Caviar Black, 500GB
PS/2 = keyboard1, mouse1
USB = keyboard2, mouse2 (attached to VMware exclusively)
VDU1 and VDU2 attached to Graphics card
we have separate sound also:
front panel sound capabilities for user1 (host OS windows 7)
back panel sound capabilities for user2 (guest OS windows XP)
host OS = windows 7
Guest OS = windows XP
VMM = VMware 7
windows 7 on HDD1
windows 7 pagefile on HDD1
VM (Windows XP) for second user on HDD2 (VM config = 2GB memory, single processor)
first user simply uses the native Windows 7 (HDD1, keyboard1, mouse1, VDU1)
start vmware
run virtual machine
grab USB keyboard and mouse exclusively in the virtual machine through the VMware menu (menu, VM, removable devices)
move virtual machine to second monitor
switch to full screen mode
second user is on
Benefits
save power,
save space,
less waste once the machine retires,
reduce TCO,
save on power backup setup cost and its later running cost
It works great for us, and it should for anyone else too.
Just remember to choose a VT-d enabled board along with a VT-x enabled processor for good efficiency.
:)
Chukked
marraco - Sunday, September 5, 2010
Great news. Thanks for posting.
ggathagan - Friday, September 3, 2010
The main point behind virtualization is to maximize the usage of a server's CPU, memory and storage capabilities. In addition to the complexity and its attendant cost that nafhan mentions, you also have to consider that VMs are not focused on processes that involve direct user input.
By contrast, your average home user is performing tasks that require a screen, a keyboard and a mouse. As such, the VM is not a good fit for that situation.
Additionally, most VMs emulate low-grade hardware when it comes to things like graphics and sound.
Video and sound capabilities are often key components in the types of activities the average home user needs.
What you describe would be closer to a thin client type of system.
Even in that scenario, however, the hardware needed to give each thin client appropriate audio/visual capabilities would be the major cost.
As such, there's just not enough savings in setting up centralized processing and storage to make it worth developing and marketing media-capable thin client systems.