With virtualization support built into recent Microsoft operating systems, including the client editions of Windows 8 and 10, the Hyper-V role is no longer the exclusive domain of system administrators in mid-size companies. Hyper-V can plausibly replace Oracle's popular VirtualBox for entry-level (client) virtualization. Before installing the role, however, you need to confirm that the system requirements are met; otherwise you may receive the message: "The virtual machine could not be started because the hypervisor is not running." What should you pay attention to when choosing hardware for virtualization? Can the situation be saved if the hardware has already been purchased? Let's consider this in this post.
So, you have deployed Hyper-V on Windows Server 2008, and when you try to start a virtual machine, you get a window with this error.

Do not despair; the situation can probably still be saved. Note that the OS must be 64-bit: you cannot deploy Hyper-V on a 32-bit system at all. The first thing to do is check that the corresponding options are enabled in the BIOS, namely Intel VT or AMD-V. Next, make sure that your processor actually supports virtualization; both Intel and AMD provide verification utilities for their platforms, one of which is shown in the picture below.
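On newer systems (Windows 8 / Windows Server 2012 and later) the built-in systeminfo tool reports the Hyper-V hardware requirements directly, so a quick first check can be done from the command line; the output shown in the comments is only illustrative and depends on your hardware:

```shell
:: Run from a command prompt. On Windows 8 / Server 2012 and later,
:: systeminfo ends its report with a "Hyper-V Requirements" section.
systeminfo

:: Illustrative output (values depend on the machine):
:: Hyper-V Requirements:   VM Monitor Mode Extensions: Yes
::                         Virtualization Enabled In Firmware: Yes
::                         Second Level Address Translation: Yes
::                         Data Execution Prevention Available: Yes
```

If "Virtualization Enabled In Firmware" reports No, the fix is in the BIOS settings described above, not in Windows.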

The Coreinfo utility from Mark Russinovich (Sysinternals) can also help identify processor capabilities.
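Assuming the utility in question is Sysinternals Coreinfo, a typical check looks like this; the -v switch limits the output to virtualization-related features (an asterisk means the feature is present, a dash means it is absent):

```shell
:: Coreinfo is a free Sysinternals tool by Mark Russinovich.
:: Run from an elevated command prompt in the folder containing coreinfo.exe.
coreinfo -v

:: Illustrative fields on an Intel CPU:
:: HYPERVISOR      -    Hypervisor is present
:: VMX             *    Supports Intel hardware-assisted virtualization
:: EPT             *    Supports Intel extended page tables (SLAT)
```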


Another common problem is the inability to launch virtual machines on Windows Server 2008 R2 hosts whose processors support Advanced Vector Extensions (AVX). This OS does not natively support AVX, but a hotfix is available for this situation.

Cause. The hypervisor is not running. The following message appears in the system event log: "The virtual machine could not be started because the hypervisor is not running."

Resolution. To run the hypervisor, the physical computer must meet certain hardware requirements. For more information, see Requirements for Installing Hyper-V. If the computer does not meet the requirements, you will not be able to use it to run virtual machines. If it does meet the requirements and the hypervisor still is not running, you may need to enable hardware virtualization and hardware-enforced Data Execution Prevention (DEP) in the BIOS. After changing these settings, you must power the computer off and back on: the changes will not take effect after a simple restart.

Cause. The virtual disk used as the system disk is attached to a SCSI controller.

Resolution. Attach the system disk to an IDE controller. For instructions, see Configuring disks and storage devices.

Cause. The virtual machine is configured to use a physical CD/DVD drive as installation media, and another virtual machine is already using that drive.

Resolution. Only one virtual machine can access a physical CD/DVD drive at a time. Disconnect the drive from the other virtual machine and try again.

Unable to install an operating system on a virtual machine over the network.

Cause. The virtual machine is using a (synthetic) network adapter instead of a legacy network adapter, or the legacy network adapter is not connected to the appropriate external network.

Resolution. Make sure the virtual machine is configured to use a legacy network adapter connected to an external network that provides installation services. For instructions on configuring network adapters, see Configuring your network.

The virtual machine is automatically suspended.

Cause. A virtual machine pauses automatically if there is not enough free space on the volume where its snapshots or virtual hard disks are stored. In Hyper-V Manager the virtual machine's state is shown as Paused-Critical.

Resolution. Free up disk space: use Hyper-V Manager to apply or delete snapshots individually, or, to delete all snapshots at once, export the virtual machine without its snapshot data and then re-import it.

When trying to create or start a virtual machine, the following error messages appear: "A user has opened a mapped section", "A network resource or device is no longer available", or "An I/O operation was interrupted due to the end of the command stream or at the request of the application."

Cause.

Resolution.

The virtual machines disappeared from the Hyper-V Manager console.

Cause. The cause may be an antivirus program running in the management operating system whose real-time scanning component is monitoring the Hyper-V virtual machine files.

Resolution. Exclude the virtual machine files from real-time scanning. For the specific files involved, see Microsoft Knowledge Base article 961804 (http://go.microsoft.com/fwlink/?LinkId=143978).
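On servers where Windows Defender is the active antivirus, the exclusions can be added from an elevated PowerShell prompt. This is only a sketch: the Add-MpPreference cmdlet exists only on systems with Defender, and the paths below are the default Hyper-V locations, which may differ on your server (consult KB961804 for the full list):

```shell
# Elevated PowerShell; requires Windows Defender (Add-MpPreference cmdlet).
# Default Hyper-V configuration and virtual hard disk folders - adjust
# the paths if your virtual machine storage is located elsewhere.
Add-MpPreference -ExclusionPath "C:\ProgramData\Microsoft\Windows\Hyper-V"
Add-MpPreference -ExclusionPath "C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks"

# Hyper-V management and worker processes:
Add-MpPreference -ExclusionProcess "vmms.exe", "vmwp.exe"
```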

When using a virtual machine connection, the mouse pointer changes to a dot or gets stuck in the virtual machine window.

Cause. Integration services are not installed in the operating system on the virtual machine.

Resolution. If the operating system in the virtual machine is supported, integration services are available for it. Install integration services to improve mouse integration. For instructions, see Installing an operating system in a virtual machine. If the guest operating system is not supported, you can use a keyboard shortcut to release the mouse from the virtual machine window; the default shortcut is CTRL+ALT+LEFT ARROW.

I cannot use the mouse to control a virtual machine when I use Remote Desktop Connection to connect to the server running Hyper-V.

Cause. When you use Hyper-V Manager to connect to a virtual machine, the connection is provided by the Virtual Machine Connection component. Using Virtual Machine Connection inside a Remote Desktop Connection session, however, is not supported until integration services are installed, so the loss of mouse functionality is expected.

Resolution. Do not use Virtual Machine Connection inside a Remote Desktop Connection session until integration services are installed. There are several ways to work around this problem.

  • Install integration services. For instructions, see Installing an operating system in a virtual machine.
  • Establish a remote desktop session directly on the virtual machine.
  • Log into the console of the server running Hyper-V and use the Virtual Machine Connection component to connect to the virtual machine.
  • Install the Hyper-V management tools on a supported client computer to get the Virtual Machine Connection component, and use it to create a session to the virtual machine. For more information, see the Windows Server 2008 technical library (http://go.microsoft.com/fwlink/?LinkId=143558).

When Device Manager is opened in the operating system in a virtual machine, some devices are marked as unknown.

Cause. If integration services are not installed, Device Manager does not recognize the devices that are optimized for use in virtual machines running on Hyper-V. Which devices show up as unknown depends on the guest operating system and may include: VMBus, Microsoft VMBus HID Miniport, Microsoft VMBus Network Adapter, and the storvsc miniport.

Elimination. If the operating system on the virtual machine is supported, integration services will be available for that operating system. Once Integration Services is installed, Device Manager will recognize the devices that are available for this operating system in the virtual machine. For instructions, see Installing an operating system in a virtual machine.

You need to monitor virtual machine performance, but the processor information in Task Manager does not show what processor resources a virtual machine is using.

Cause. Task Manager does not show CPU information for virtual machines.

Resolution. Use Reliability and Performance Monitor to view CPU usage for the virtual machines running on a Hyper-V server; it displays data collected from the Hyper-V performance counters. To open it, click Start, select Run, and type perfmon.

The data from the following performance counters can be viewed in the management operating system (the one running the Hyper-V role):

  • Hyper-V Hypervisor Logical Processor, % Guest Run Time: shows how much physical processor resource is used to run virtual machines. This counter does not identify individual virtual machines or the amount of resources each one consumes.
  • Hyper-V Hypervisor Virtual Processor, % Guest Run Time: shows how much virtual processor resource a given virtual machine is consuming.
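The same counters can also be sampled from PowerShell with the Get-Counter cmdlet. A sketch, assuming the English counter names as they appear in Performance Monitor (localized systems may use translated names):

```shell
# PowerShell: sample the Hyper-V processor counters from the management OS.

# Physical CPU time consumed by all virtual machines combined:
Get-Counter -Counter "\Hyper-V Hypervisor Logical Processor(_Total)\% Guest Run Time"

# Virtual CPU time broken down per virtual processor instance:
Get-Counter -Counter "\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time"
```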

Hyper-V, the virtualization environment native to Windows in its server editions, as well as in some desktop versions and editions, does not always run virtual machines and their guest operating systems without problems. One such problem is a notification that pops up when a virtual machine is started, saying that it cannot start because the hypervisor is not running.

What is this error and how can it be fixed?

A window with this error is a generic symptom: the cause may lie in several different things.

System requirements

If Windows itself does not meet the requirements for working with Hyper-V (not every desktop edition includes this component), the feature simply cannot be enabled. But there are hardware requirements as well. Failing to meet them may not prevent enabling the hypervisor, but it can later cause exactly this error.

Hyper-V requires:

  • At least 4 GB of RAM;
  • A 64-bit processor with SLAT support and hardware virtualization technology.

The BCD store

The error in question may indicate a misconfigured Boot Configuration Data (BCD) store. The Hyper-V component is deeply integrated into Windows and starts before the system kernel does. If changes affecting hypervisor startup were made to the BCD store, they may be incorrect; alternatively, hypervisor startup may have been deliberately disabled earlier to temporarily free up computer resources. In either case, the hypervisor startup setting in the BCD store must be corrected or returned to its default by re-enabling automatic startup of Hyper-V. To do that, open a command prompt as administrator (this is required) and enter:

bcdedit /set hypervisorlaunchtype auto

Then reboot.
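You can check the current value before and after making the change from the same elevated command prompt:

```shell
:: Show the boot entry for the currently running Windows installation.
bcdedit /enum {current}

:: In the output, look for the line:
::   hypervisorlaunchtype    Auto
:: A value of "Off" means hypervisor startup was deliberately disabled.
```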

AMD Bulldozer

Hyper-V does not work on AMD processors based on the Bulldozer architecture.

Virtualization technologies

For a virtualization environment to function under any hypervisor, the processor must support a hardware virtualization technology: Intel Virtualization Technology (VT-x) or AMD-V. You can check whether a given processor supports these technologies on the specification pages of the Intel and AMD websites. And, of course, virtualization must be enabled in the BIOS.
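Besides the vendors' specification pages, on Windows 8 / Server 2012 and later the same information can be queried locally through WMI; a sketch (these processor properties are not populated on older versions of Windows, so an empty result there is not conclusive):

```shell
# PowerShell: report virtualization-related processor capabilities.
Get-WmiObject Win32_Processor |
    Select-Object Name,
                  VirtualizationFirmwareEnabled,
                  SecondLevelAddressTranslationExtensions
```

VirtualizationFirmwareEnabled reflects the BIOS setting; SecondLevelAddressTranslationExtensions reports SLAT support.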

Another important nuance: on Intel processors, two specific technologies must be disabled in the BIOS: Intel VT-d and Trusted Execution. The built-in Windows hypervisor does not get along with them. So the BIOS settings for working with Hyper-V should look like this: virtualization technology enabled, and these two specific technologies disabled.

Hyper-V is an example of server virtualization technology. This means that Hyper-V can virtualize an entire computer, running multiple operating systems (usually server operating systems) on one physical computer (usually server-class hardware). Each guest operating system believes (if an operating system can believe anything) that it owns the computer and has exclusive use of its hardware resources (or whatever subset of resources the virtual machine has access to). Thus, each operating system runs in a separate virtual machine, and all the virtual machines run on the same physical computer. In a typical non-virtualized environment, only one operating system can run on a computer; Hyper-V removes that limitation. Before looking at how Hyper-V works, we need to understand the general principles of how virtual machines work.

Understanding virtual machines

A virtual machine is a computing environment implemented in software that allocates the hardware resources of a physical computer so that multiple operating systems can run on a single machine. Each operating system runs in its own virtual machine and gets dedicated logical instances of processors, hard drives, network cards, and other hardware resources. An operating system running in a virtual machine does not know it is running in a virtual environment and behaves as if it had complete control of the computer's hardware. Implementing virtual machines as described means that server virtualization must meet the following requirements:

  • Management interfaces
    Server virtualization requires management interfaces that enable administrators to create, configure, and control the virtual machines running on a computer. These interfaces must also support programmatic administration and network access for remote control of virtual machines.
  • Memory management
    Server virtualization requires a memory manager to ensure that all virtual machines receive dedicated and isolated memory resources.
  • Scheduler
    Server virtualization requires a scheduler to control virtual machine access to physical resources. The scheduler must be configurable by the administrator and able to assign different priority levels to requests for the hardware.
  • Finite state machine
    Server virtualization requires a state machine that tracks the current state of every virtual machine on the computer. Virtual machine state information includes the CPU, memory, and device state, as well as whether the machine is started or stopped. The state machine must also support controlled transitions between the different states.
  • Storage and networking
    Server virtualization requires the ability to provision storage and network resources on the computer so that each virtual machine has its own access to hard disks and network interfaces. In addition, it requires that multiple virtual machines be able to access physical devices at the same time while maintaining consistency, isolation, and security.
  • Virtualized devices
    Server virtualization requires virtualized devices that present operating systems running in virtual machines with logical representations of devices that behave exactly like their physical counterparts. In other words, when the OS in a virtual machine accesses what it sees as a physical device, it is actually accessing the corresponding virtualized device, and the access process is identical to accessing a physical device.
  • Virtual device drivers
    To virtualize a server, virtual device drivers must be installed in the operating systems running in the virtual machines. Virtual device drivers give applications access to virtual representations of hardware and I/O connections, just as they would have with physical hardware.
Below we will see that Microsoft's Hyper-V server virtualization solution meets all of these requirements, but first let's look at the core software component that makes server virtualization possible: the hypervisor.

Understanding the hypervisor

A hypervisor is a virtualization platform that allows multiple operating systems to run on a single physical computer, the host. The hypervisor's main functions are to create isolated execution environments for all of the virtual machines and to manage communication between each guest operating system and the underlying hardware of the physical computer. The term "hypervisor" was coined in 1972, when IBM updated the control program of the System/370 computing platform to support virtualization. The creation of the hypervisor was a milestone in the evolution of computing: it overcame the platform's architectural limitations and reduced the cost of using mainframes. Hypervisors differ in type, that is, whether they run directly on physical hardware or are hosted within an operating system environment, and in design: monolithic or microkernel.

Type 1 hypervisors

Type 1 hypervisors run directly on the physical hardware of the host computer and act as control programs. In other words, they run on bare metal. Guest operating systems run in virtual machines in a layer above the hypervisor (see Figure 1).

Because Type 1 hypervisors run directly on the hardware rather than inside an OS environment, they generally provide better performance, availability, and security than other types. Type 1 hypervisors are implemented in the following server virtualization products:

  • Microsoft Hyper-V
  • Citrix XenServer
  • VMware ESX Server

Type 2 hypervisors

Type 2 hypervisors run within an operating system environment on the host computer, and guest operating systems run in virtual machines above the hypervisor (see Figure 2). This type of virtualization is commonly called hosted virtualization. Comparing Figures 1 and 2, it is clear that on Type 2 platforms the guest operating systems are separated from the underlying hardware by an extra layer. That extra layer between the virtual machines and the hardware degrades performance on Type 2 platforms and limits the number of virtual machines that can be run in practice. Type 2 hypervisors are implemented in the following server virtualization products:

  • Microsoft Virtual Server
  • VMware Server
The desktop virtualization product Microsoft Virtual PC also uses a Type 2 hypervisor architecture.

Monolithic hypervisors

In the monolithic hypervisor architecture, the device drivers that support the underlying hardware reside in, and are managed by, the hypervisor itself (see Figure 3).

The monolithic architecture has both advantages and disadvantages. For example, monolithic hypervisors do not require a controlling (parent) operating system, since all guest systems interact directly with the underlying hardware through the hypervisor's device drivers; this is one of the advantages of the monolithic design. On the other hand, the drivers must be written specifically for the hypervisor, which presents significant difficulty given the variety of motherboards, storage controllers, network adapters, and other hardware. As a result, vendors of monolithic hypervisor platforms must work closely with hardware manufacturers to ensure that drivers exist for the hypervisor, and they depend on those manufacturers to supply drivers for their products. Consequently, the range of devices usable by virtualized operating systems on monolithic hypervisor platforms is much narrower than when the same operating systems run on physical computers. An important characteristic of this architecture is that it ignores one of the most important security principles: defense in depth, in which several lines of defense are created. In this model there is no defense in depth, since everything runs in the most privileged part of the system. An example of a server virtualization product that uses a monolithic hypervisor architecture is VMware ESX Server.

Microkernel hypervisors

Microkernel hypervisors do not require hypervisor-specific drivers, because an operating system running in the main (parent) partition provides the execution environment that device drivers need to access the underlying physical hardware of the host. Partitions are discussed later; for now, think of the term "partition" as equivalent to a virtual machine. On microkernel hypervisor platforms, device drivers need to be installed only in the parent partition. There is no need to install them in the guest operating systems, because guests access the physical hardware of the host only by going through the parent partition. In other words, the microkernel architecture gives guest operating systems no direct access to the underlying hardware; physical devices are reached only by interacting with the parent partition. Figure 4 shows the microkernel hypervisor architecture in more detail.

The microkernel architecture has several advantages over the monolithic one. First, since no hypervisor-specific drivers are needed, a wide range of existing vendor-supplied drivers can be used. Second, device drivers do not run inside the hypervisor, so the hypervisor carries less load, is smaller, and is more robust. Third, and most importantly, the potential attack surface is minimized because no extraneous code is loaded into the hypervisor (device drivers are written by third parties and are therefore considered extraneous code from the hypervisor developer's point of view). Malware penetrating the hypervisor and taking control of all the virtual operating systems on the computer is the last thing you want to experience. The only downside of the microkernel design is the need for a dedicated parent partition, which adds some (usually minimal) overhead, since child partitions must interact with the parent partition to access the hardware. A significant advantage of the Hyper-V microkernel architecture is that it provides defense in depth: Hyper-V minimizes the code executing in the hypervisor itself and moves more functions up the stack (for example, the state machine and the management interfaces, which run higher up the stack in user mode). What is an example of a server virtualization platform with a microkernel architecture? It is, of course, Microsoft Hyper-V, with Windows Server 2008 or later running in its parent partition.

Main features of Hyper-V

Below are some of the highlights of the original version of the Microsoft Hyper-V platform:

  • Support for various OS
    Hyper-V supports the simultaneous execution of different types of operating systems, including 32-bit and 64-bit systems across various platforms (for example, Windows, Linux, and others).
  • Extensibility
    Hyper-V technology has standard Windows Management Instrumentation (WMI) and APIs that enable ISVs and developers to quickly create custom tools and extensions for the virtualization platform.
  • Network load balancing
    Hyper-V includes virtual switch capabilities, which means virtual machines can easily be configured to run with Windows Network Load Balancing (NLB) to balance load across virtual machines on different servers.
  • Microkernel architecture
    Hyper-V has a 64-bit microkernel hypervisor architecture that lets the platform provide a wide range of device support methods along with improved performance and security.
  • Hardware virtualization
    Hyper-V requires the Intel VT or AMD-V hardware virtualization technologies.
  • Hardware Sharing Architecture
    Hyper-V uses a Virtualization Service Provider (VSP) and Virtualization Services Client (VSC) architecture to provide increased access and utilization of hardware resources (such as disks, network, and video).
  • Fast migration
    Hyper-V allows you to move a running virtual machine from one physical host to another with minimal latency. It does this through the highly available management tools of Windows Server 2008 and System Center.
  • Scalability
    Hyper-V supports multiple processors and cores at the host level, and extended memory access at the virtual machine level. This support provides scalability for virtualization environments to host large numbers of virtual machines on a single host. However, quick migration capabilities also allow scaling across multiple sites.
  • Symmetric Multiprocessor Architecture (SMP) support
    Hyper-V supports up to four processors in a virtual machine environment for running multithreaded applications in a virtual machine.

  • Virtual machine snapshots
    Hyper-V can take snapshots of running virtual machines for fast rollback, which simplifies backup and recovery solutions.

All of these features are detailed in this overview, but the most interesting are the features added to Hyper-V in R2, described below.
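As an illustration of the WMI interfaces mentioned under Extensibility, a minimal PowerShell sketch that lists the virtual machines on a Hyper-V host might look like this. It assumes the root\virtualization namespace used by Windows Server 2008/2008 R2 (later releases moved to root\virtualization\v2):

```shell
# PowerShell: enumerate virtual machines through the Hyper-V WMI provider.
# Msvm_ComputerSystem returns the host itself plus one entry per VM;
# filtering on Caption = "Virtual Machine" keeps only the VMs.
Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem |
    Where-Object { $_.Caption -eq "Virtual Machine" } |
    Select-Object ElementName, EnabledState
```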

What's New in Hyper-V R2

New functionality has been added to the Hyper-V role in Windows Server 2008 R2, improving its flexibility, performance, and scalability. Let's consider these features in more detail.

Increased flexibility

Hyper-V R2 contains the following new features that increase the flexibility to deploy and maintain a server virtualization infrastructure:

  • Live migration
    Hyper-V R2 includes a live migration feature that allows a virtual machine to be moved from one Hyper-V server to another without interrupting network connectivity, without user downtime, and without disrupting service; the move causes only a brief performance dip lasting a few seconds. Live migration enables high availability for servers and applications running on clustered Hyper-V hosts in a virtualized data center. It also simplifies hardware upgrades and maintenance of host computers and opens up new capabilities, such as balancing load across hosts for maximum energy efficiency or optimal processor utilization. Live migration is detailed in the "Working with live migration" section below.
  • Cluster Shared Volumes
    Cluster Shared Volumes is a new feature of failover clustering in Windows Server 2008 R2. It provides a single, consistent file namespace that lets all nodes in a cluster access the same storage. Using Cluster Shared Volumes is highly recommended for live migration and is described below in the "Working with live migration" section.
  • Hot add and removal of storage
    The R2 version of Hyper-V allows virtual hard disks and pass-through disks to be added to or removed from a running virtual machine without shutting it down and restarting it. This lets all of the storage used by a virtual machine be adjusted to changing workloads without downtime, and it enables new backup capabilities for Microsoft SQL Server, Microsoft Exchange Server, and data centers. To use this feature, the virtual or pass-through disks must be attached to the virtual machine through a virtual SCSI controller. For more information on adding SCSI controllers to virtual machines, see the "Managing Virtual Machines" section below.
  • Processor compatibility mode
    The new processor compatibility mode available in Hyper-V R2 allows a virtual machine to be moved between host computers whose processors come from the same vendor (AMD or Intel). This simplifies upgrading the Hyper-V host infrastructure by making it easier to migrate virtual machines from older hardware to newer hardware, and it adds flexibility when migrating virtual machines between cluster nodes. For example, processor compatibility mode can be used to migrate virtual machines from an Intel Core 2 host to an Intel Pentium 4 host, or from an AMD Opteron host to an AMD Athlon host. Note that processor compatibility mode only allows migrations between hosts with processors from the same vendor: AMD-to-AMD and Intel-to-Intel migrations are supported, while AMD-to-Intel and Intel-to-AMD migrations are not. For more information on processor compatibility mode and how to configure it, see the sidebar "How It Works: Processor Compatibility Mode".

Increased performance

Hyper-V R2 contains the following new features that can improve the performance of a server virtualization infrastructure:

  1. Supports up to 384 simultaneously running virtual machines and up to 512 virtual processors on each server
    With the appropriate hardware, Hyper-V R2 can be used to reach previously unattainable levels of server consolidation. For example, one Hyper-V host computer can host:
    • 384 virtual machines with one processor (significantly less than the 512 virtual processor limit)
    • 256 virtual machines with two processors (total 512 virtual processors)
    • 128 virtual machines with four processors (total 512 virtual processors)

    You can also run any combination of single-, dual-, and quad-processor virtual machines, as long as the total number of virtual machines does not exceed 384 and the total number of virtual processors allocated to them does not exceed 512. These capabilities allow Hyper-V R2 to provide the highest virtual machine density available on the market today. By comparison, the previous version of Hyper-V in Windows Server 2008 SP2 supported only up to 24 logical processors and up to 192 virtual machines. Note that when failover clustering is used, Hyper-V R2 supports up to 64 virtual machines per cluster node.

  2. Support for Second Level Address Translation (SLAT)
    In Hyper-V R2, address translation for virtual machines is handled by the processor rather than by Hyper-V code that maintains the mappings programmatically. SLAT adds a second level of page tables beneath the regular x86/x64 page tables, providing a layer of indirection between virtual machine memory accesses and physical memory accesses. When used with appropriate processors (for example, Intel processors with Extended Page Tables (EPT), starting with the i7 generation, or recent AMD processors with Nested Page Tables (NPT)), Hyper-V R2 significantly improves system performance in many scenarios. The gains come from improved memory management and a reduced number of memory copies needed to use these processor features. Performance improves especially when working with large data sets (for example, with Microsoft SQL Server). The memory overhead of the Microsoft hypervisor can drop from 5 percent to 1 percent of total physical memory, leaving more memory for child partitions and allowing a higher degree of consolidation.

  3. VM Chimney
    This feature allows a virtual machine's TCP/IP traffic to be offloaded to the host's physical network adapter. The physical network adapter and the operating system must support TCP Chimney Offload; offloading improves virtual machine performance by reducing the load on the logical processors. Support for TCP Chimney Offload in Microsoft Windows appeared in versions
    Note that not all applications can use this feature: applications that use pre-allocated buffers and long-lived connections with large data transfers benefit the most. Also keep in mind that physical network adapters supporting TCP Chimney Offload can handle only a limited number of offloaded connections, which are shared by all the virtual machines on the host.

  4. Virtual Machine Queue (VMQ) support
    Hyper-V R2 supports Virtual Machine Device Queues (VMDq), part of Intel Virtualization Technology for Connectivity. VMQ moves the sorting of virtual machine network traffic from the Virtual Machine Manager to the network controller. This lets a single physical network adapter appear to the guest as multiple adapters (queues), which optimizes CPU utilization, improves network throughput, and provides better control over virtual machine traffic. The host no longer buffers direct memory access (DMA) data from the devices, because the network adapter can use DMA to place packets directly into the virtual machine's memory. Shortening the I/O path improves performance. For more information on VMDq, see the Intel website at http://www.intel.com/network/connectivity/vtc_vmdq.htm.

  5. Jumbo frame support
    Jumbo frames are Ethernet frames carrying more than 1500 bytes of payload. They were previously available only in non-virtualized environments; Hyper-V R2 makes them usable in virtual machines and supports frames of up to 9014 bytes (if the underlying physical network supports them).

As a result, jumbo frames can improve network throughput and reduce CPU usage when transferring large files.

Increased scalability

Hyper-V R2 contains the following new features that increase the scalability of the server virtualization infrastructure:

  • Support for up to 64 logical processors in the host processor pool
    The number of logical processors supported in this version of Hyper-V is four times greater than in the previous version. This lets enterprises use the latest large, scalable server systems to maximize the benefits of consolidating existing workloads, and such systems also make it easier to give each virtual machine multiple processors. Hyper-V supports up to four virtual processors per virtual machine.
  • Core parking support
    The core parking feature allows Windows and Hyper-V to consolidate work onto the minimum number of processor cores, suspending inactive cores by placing them in a low-power C-state (the "parked" state). This allows virtual machines to be scheduled onto fewer active cores rather than spread across all of them, bringing the data center closer to a green computing model by reducing the power consumed by the hosts' CPUs.

Hyper-V vs Virtual Server Comparison

The broad capabilities of Hyper-V have already led to the technology replacing Microsoft Virtual Server in many organizations that previously used Virtual Server for server consolidation, business continuity, testing, and development. At the same time, Virtual Server can still find application in corporate virtualization infrastructure. Table 1 compares some of the features and technical details of Hyper-V and Virtual Server.

Table 1. Comparison of components and specifications of Virtual Server 2005 R2 SP1 and Hyper-V R2

The table compares the two products (Virtual Server 2005 R2 SP1 vs. Hyper-V R2) along the following components and specifications:

Architecture
  • Virtualization type: hosted (Virtual Server) vs. hypervisor-based (Hyper-V)

Performance and scalability
  • 32-bit virtual machines
  • 64-bit virtual machines
  • 32-bit hosts
  • 64-bit hosts
  • Virtual machines with multiple processors
  • Maximum guest RAM per virtual machine
  • Maximum number of guest CPUs per virtual machine
  • Maximum host RAM
  • Maximum number of running virtual machines
  • Resource management

Availability
  • Guest failover
  • Host failover
  • Host migration
  • Virtual machine snapshots

Management
  • Extensibility and scripting support
  • User interface: web interface vs. MMC 3.0 snap-in
  • SCVMM integration

For more information about Virtual Server features and downloads, go to http://www.microsoft.com/windowsserversystem/virtualserver/downloads.aspx. For information on migrating virtual machines from Virtual Server to Hyper-V, see "Virtual Machine Migration Guide: How To Migrate from Virtual Server to Hyper-V" in the TechNet Library at http://technet.microsoft.com/en-us/library/dd296684.aspx.