Best practices for deploying an HP EVA array with Microsoft Hyper-V R2


Table of contents

Executive summary
Environment planning
  Compatibility matrix
  Sizing for Hyper-V
  Memory on a Hyper-V host
Configuration details
  Blade enclosure
  Clustered Hyper-V servers
  Management server
  Command View EVA server
  Storage
  I/O test configuration
Storage options
  Multipath in Windows Server 2008 R2
  EVA Vraid levels
  Cluster-shared versus VM-specific volumes
  EVA configuration considerations
Disk types
  Virtual hard disks
    Fixed VHD
    Dynamically expanding VHD
    Fixed versus dynamically expanding VHD performance
    Differencing disks
    Snapshots
    VHD and volume sizing
  Pass-through disks
  Disk expansion
  Disk options summary
Virtual disk controllers
  Disk controller performance
  Disks per controller type
  Controller usage recommendation
VM and storage management
  Server Manager
    Remote Management
    Hyper-V Manager
    Failover Cluster Manager
  Performance Monitor
  System Center Virtual Machine Manager
    Quick Storage Migration
  HP Command View EVA software
  HP Systems Insight Manager
  System Center Operations Manager (SCOM)
VM deployment options
  Windows Deployment Services
  EVA Business Copy snapclones
  Deployment with SCVMM
    VM cloning with SCVMM
    VM template creation and deployment with SCVMM
    Physical-to-virtual (P2V) deployment through SCVMM
Summary
Appendix A: Disk expansion
  VHD expansion
For more information
  HP links
  Microsoft links
Feedback

Executive summary

Server virtualization has been widely adopted in production data centers and continues to gain momentum due to benefits in power consumption, reduced data center footprint, IT flexibility, consolidation, availability, and lower total cost of ownership (TCO). With HP server, storage, infrastructure, and software products, businesses can take advantage of a converged infrastructure. Doing so reduces segmentation of IT departments and enables better resource use through pools of virtualized assets that adapt to ever-changing business requirements.

As server and storage consolidation increases, so does the risk of production outages. Consolidation also raises the workload on servers and storage to new levels. Highly available, high-performance storage and server solutions are an integral part of creating an efficient, dependable environment. With the HP StorageWorks Enterprise Virtual Array (EVA) family, HP BladeSystem components, and Microsoft Windows Server 2008 R2 Hyper-V (Hyper-V), businesses can create a highly available and high-performance solution.

This white paper outlines the virtualized infrastructure and offers best practices for planning and deploying the EVA with Hyper-V on HP ProLiant BladeSystem servers. New Hyper-V R2 features, such as Live Migration and Cluster Shared Volumes, help resolve consolidation challenges. This white paper serves as a resource for IT professionals who are responsible for implementing a Hyper-V environment and covers many server, storage, and virtualization concepts. HP strongly recommends thoroughly reviewing the documentation supplied with individual solution components to gain the in-depth knowledge necessary for a comprehensive and reliable solution.

Target audience: This white paper is intended for solutions architects, engineers, and project managers involved with the deployment of HP StorageWorks arrays with virtualization solutions. Recommendations are offered in this white paper, but it should not be regarded as a standalone reference. Familiarize yourself with virtualized infrastructures and with networking in a heterogeneous environment. A basic knowledge of HP ProLiant servers, HP StorageWorks EVA products, and management software, such as HP StorageWorks Command View EVA, is required. Links to information on these topics are available in the For more information section. In addition, it is important to understand the basic concepts of the Microsoft Hyper-V architecture and how this product virtualizes hardware resources. For more information, see Hyper-V Architecture.

This white paper describes testing performed in November.

Environment planning

Compatibility matrix

HP SAN compatibility is a key verification step in designing a hardware and software solution. The compatibility tool is available from the HP StorageWorks Single Point of Connectivity Knowledge (SPOCK) website.

Note
An HP Passport account is required to access SPOCK.

In addition to hardware interoperability requirements, SPOCK provides detailed version and configuration constraints through the Solution Software link. After logging in to SPOCK, select View by Array under the SAN Compatibility section on the left side of the screen. When prompted to navigate to a specific storage array, select the Refine link. After choosing an array, select an operating system and view the complete configuration details.

Sizing for Hyper-V

While this white paper suggests many best practices for a Hyper-V environment, it is not intended to be a sizing guide, nor does it suggest the maximum capabilities of the equipment used in testing. A critical piece of planning a virtualized environment is correctly sizing the equipment. For help sizing your environment, see HP Sizer for Microsoft Hyper-V 2008 R2 and the documents listed in For more information.

Converting a physical server to a virtual machine (VM) is a convenient method of replacing aging hardware. Converting multiple servers to VMs and consolidating them on fewer physical hosts also creates significant savings in equipment, power, cooling, and real estate. Doing so, however, requires careful preparation to maintain performance and availability. Because server consolidation is a common reason for implementing a virtualized solution, Hyper-V host servers should have at least as many resources as the sum of the resources used by the physical servers being converted to VMs, plus overhead for the host operating system.

Memory on a Hyper-V host

Hyper-V does not allow memory overcommit (assigning more memory to the host and VMs on that host than is physically available), nor can VM memory be paged to disk. To be certain the Hyper-V server has sufficient resources, provide physical memory equal to the sum of all memory allocated to local VMs plus the following (see the sketch after this list):

- At least 512 MB for the host operating system
- 300 MB for the hypervisor
- 32 MB for the first GB of RAM allocated to each virtual machine
- 8 MB for every additional GB of RAM allocated to each virtual machine

For more information, see Checklist: Optimizing Performance on Hyper-V.
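These per-VM overheads add up quickly on consolidated hosts. The following is a minimal PowerShell sketch of the arithmetic; the VM names and memory allocations are hypothetical.

    # Estimate the physical memory a Hyper-V host needs, following the
    # guidelines above. VM names and allocations are examples only.
    $vmMemoryGB = @{ "VM1" = 4; "VM2" = 8; "VM3" = 2 }

    $requiredMB = 512 + 300                      # host OS plus hypervisor
    foreach ($memGB in $vmMemoryGB.Values) {
        $requiredMB += $memGB * 1024             # memory allocated to the VM
        $requiredMB += 32                        # overhead for the first GB
        $requiredMB += 8 * ($memGB - 1)          # overhead per additional GB
    }
    "Provide at least {0:N0} MB ({1:N1} GB) of physical RAM" -f $requiredMB, ($requiredMB / 1024)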

Configuration details

This project consists of an HP BladeSystem c3000 enclosure with five HP ProLiant BL460c blade servers and an EVA4400 storage array. These components are connected in two fabrics, each with a Brocade 4Gb SAN Switch for HP c-Class BladeSystem, as shown in Figure 1.

Figure 1. Configuration overview

Blade enclosure

The HP ProLiant BladeSystem provides many convenient features through the Onboard Administrator (OA), including environment status information, remote management, and remote connectivity to servers and switches, making it a great tool for administrators. This powerful interface even includes clickable images of the enclosure and its components for easy navigation and control, as shown in Figure 2.

HP Integrated Lights-Out (iLO) is a management tool that provides power management, virtual media control, remote console access, and many other administrative benefits. iLO is also tightly integrated with OA. Together, these components make the blade enclosure easy to manage and control remotely.

Five BL460c blade servers are used in this environment. These servers each have Intel Xeon processors, 16 GB of RAM (except for the storage management server, which has only 8 GB), and a dual-port Emulex LPe1105-HP 4Gb FC HBA for HP c-Class BladeSystem for connecting to the SAN.

Figure 2. HP BladeSystem Onboard Administrator

Clustered Hyper-V servers

Three servers (named HyperV1, HyperV2, and HyperV3) are placed in a Microsoft Windows failover cluster for application and VM high availability. These servers run Windows Server 2008 R2 Datacenter, which allows more than four VMs per Hyper-V host, and have the Hyper-V role installed. They are used for consolidating many physical servers because each host houses several VMs, each of which can represent a physical server being consolidated. One of the three host servers, HyperV3, has a slightly different processor. This is used to verify the new ability of Hyper-V R2 to live migrate VMs to servers with slightly different processors.

Note
Hyper-V Live Migration can move VMs only between servers whose processors are from the same vendor and processor family. Hyper-V cannot live migrate VMs between AMD-based and Intel-based servers.

Note
VM (guest) operating systems cannot be clustered in this configuration. To include the guest OS in a cluster, iSCSI storage must be used.

For testing purposes in this environment, each VM runs one of three guest operating systems: Windows Server 2008, Windows Server 2008 R2, or Red Hat Enterprise Linux 5.3. While the OS of each physical (host) server resides on its local hard drive, each VM (guest OS) and all data volumes that those VMs use are located on EVA Vdisks (LUNs) and presented to all three Hyper-V servers. This provides high availability of those volumes.

Table 1 and Table 2 list the Hyper-V host server specifications in this environment.

Table 1. HyperV1 and HyperV2 host server specifications
  Purpose: Hyper-V host (clustered nodes 1 and 2)
  Operating system: Windows Server 2008 R2 Datacenter with Hyper-V
  Processors: Two Intel Xeon (dual-core) 3.00 GHz (4 cores total)
  Memory: 16 GB

Table 2. HyperV3 host server specifications
  Purpose: Hyper-V host (clustered node 3)
  Operating system: Windows Server 2008 R2 Datacenter with Hyper-V
  Processors: One Intel Xeon (quad-core) 3.00 GHz (4 cores total)
  Memory: 16 GB

Management server

A non-clustered blade server is used for server, VM, and cluster management. The management software can be installed either directly on the host OS or on individual VMs created on the management server.

Table 3. Management host server specifications
  Purpose: Server, VM, and cluster management
  Operating system: Windows Server 2008 R2 Datacenter with Hyper-V
  Processors: Two Intel Xeon (dual-core) 3.00 GHz (4 cores total)
  Memory: 16 GB
  Software: Microsoft System Center Virtual Machine Manager 2008 R2 (SCVMM); Microsoft System Center Operations Manager 2007 R2 (SCOM); HP Systems Insight Manager 5.3 SP1

Command View EVA server

A non-clustered blade server running Windows Server 2008 is used for storage management with HP StorageWorks Command View EVA 9.1 (CV-EVA). Windows Server 2008 R2 is not used on this host because (at the time of completion of this white paper) CV-EVA is not supported on Windows Server 2008 R2 or on a Hyper-V VM.

Table 4. CV-EVA host server specifications
  Purpose: Storage management
  Operating system: Windows Server 2008 Enterprise
  Processors: Two Intel Xeon (dual-core) 3.00 GHz (4 cores total)
  Memory: 8 GB
  Software: HP StorageWorks Command View EVA 9.1

Storage

The EVA4400 used in this environment runs XCS firmware and has four disk shelves populated with 15K Fibre Channel hard disk drives. However, only 16 of those drives are used to hold VM OS disks and other local data or applications. The remaining disks are used for other applications that are not relevant to this project. This configuration follows existing EVA best practices, which suggest having a multiple of eight drives in each disk group.

Note
EVA performance best practices suggest using as few disk groups as possible. When considering using multiple EVA disk groups, carefully evaluate each workload and decide whether environments with similar workloads can share disk groups for improved performance. For more information, see the HP StorageWorks 4400/6400/8400 Enterprise Virtual Array configuration - Best practices white paper.

The logical configuration used in this project is shown in Figure 3.

Figure 3. Logical configuration

I/O test configuration

Although this project is not meant to benchmark the performance of the environment, testing a workload is useful for determining best practices. In this environment, an I/O-intensive workload is generated that is 60% random and 40% sequential, with a 60/40 read/write ratio. Block sizes range uniformly between 8 KB and 64 KB, in 8 KB increments (that is, 8 KB, 16 KB, 24 KB, and so on, up to 64 KB).

Storage options

Multipath in Windows Server 2008 R2

When setting up the storage environment, be sure to obtain the latest Multipath I/O (MPIO) drivers and management software and install them on each server that accesses the EVA. At the release of this white paper, the current version of the HP MPIO Full Featured DSM for EVA4x00/6x00/8x00 families of Disk Arrays (EVA MPIO DSM) does not yet support Cluster Shared Volumes. If Cluster Shared Volumes are used, the built-in Microsoft Windows MPIO drivers must be used. To use the Microsoft MPIO drivers and tool, enable the Multipath I/O feature as explained in Installing and Configuring MPIO. If, however, HP StorageWorks Business Copy EVA Software or HP StorageWorks Continuous Access EVA Software is used, or if more manual control over the MPIO settings is desired, use the HP MPIO device-specific module (DSM) software. The Windows MPIO DSM for EVA software is available from Download drivers and software. Also, follow the existing EVA best practices in the HP StorageWorks 4400/6400/8400 Enterprise Virtual Array configuration - Best practices white paper.
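If the built-in Microsoft DSM is used, the Multipath I/O feature can also be enabled from an elevated PowerShell session. The following is a minimal sketch; note that the mpclaim step claims all attached multipath-capable devices for the Microsoft DSM and reboots the server (drop -r to defer the reboot).

    Import-Module ServerManager
    Add-WindowsFeature Multipath-IO          # install the MPIO feature

    # Claim all available devices for the Microsoft DSM (reboots the host).
    mpclaim -r -i -a ""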

EVA Vraid levels

Because each Vdisk on the EVA is striped across all of the disks in a disk group, one of the largest factors in array performance is the number of disks in the disk group. However, the Virtual RAID (Vraid) level can also have a significant impact on performance. With the EVA4400, three Vraid levels are of interest: Vraid1, Vraid5, and Vraid6. To test the performance of these Vraid levels, the previously specified workload is applied to fixed virtual hard disks (VHDs) attached to several VMs. The average I/O operations per second (IOPS) and response times (ms) for each Vraid level are shown in Figure 4.

Figure 4. IOPS and response times (ms) by Vraid level

As shown in the IOPS by Vraid Level chart, Vraid1 outperforms Vraid5 by 16%, while Vraid5 outperforms Vraid6 by 37% for this workload (results differ for each workload). The drawback is that because Vraid1 mirrors data, it requires significantly more disk capacity than Vraid5 or Vraid6. While these results are not meant to benchmark the EVA, they demonstrate that the Vraid level must be carefully considered. For performance-critical environments, Vraid1 is the clear choice. However, with sufficient disks, and for lower performance or availability requirements, consider Vraid5 or Vraid6. Vraid6 is similar in structure to Vraid5 except that it uses two parity disks instead of one, creating extra I/O traffic but also greater availability.

Best Practice
For the highest performance and availability, use Vraid1 for EVA Vdisks. For lower performance or availability requirements, consider using Vraid5 or Vraid6.
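To weigh that capacity drawback, a rough comparison helps. The sketch below uses simplifying assumptions (mirroring for Vraid1, one parity disk per stripe for Vraid5, two for Vraid6; the drive count and size are hypothetical); actual EVA usable capacity varies with configuration and sparing.

    # Illustrative usable-capacity comparison for a disk group.
    $drives  = 16
    $driveGB = 300                           # hypothetical drive size
    $rawGB   = $drives * $driveGB

    $efficiency = @{
        "Vraid1" = 1 / 2                     # mirrored data
        "Vraid5" = 4 / 5                     # assumes 4 data + 1 parity
        "Vraid6" = 4 / 6                     # assumes 4 data + 2 parity
    }
    foreach ($level in "Vraid1", "Vraid5", "Vraid6") {
        "{0}: ~{1:N0} GB usable of {2:N0} GB raw" -f $level, ($rawGB * $efficiency[$level]), $rawGB
    }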

Cluster-shared versus VM-specific volumes

Prior to Hyper-V R2, to allow a VM in a cluster to fail over or migrate without impacting other VMs, each VM had to be on its own LUN presented by external storage. Each LUN also had to be carefully sized to provide adequate capacity for the VHD, configuration files, snapshots, and other potential data, all while wasting as little extra drive capacity as possible. This method of storage management easily leads to either wasted storage or the need to frequently grow LUNs and volumes. Also, under this model, environments with many VMs need numerous LUNs presented to the Hyper-V hosts, complicating storage management. Furthermore, during failover or migration, ownership of these LUNs must be transferred between hosts because only one host in the cluster can write to these volumes at a time.

The Cluster Shared Volume (CSV) feature helps to address these issues. For more information, see Using Cluster Shared Volumes in a Failover Cluster in Windows Server 2008 R2. With a CSV, multiple Hyper-V hosts can read and write to the same volume (same LUN) at the same time, allowing multiple VMs from different hosts to be placed on the same shared volume. Additionally, ownership of this volume/LUN does not have to be transferred between hosts when a failover or migration occurs, which allows for faster failover and migration times. In fact, in this environment, start-to-finish times for live migrating VMs using CSVs are up to 33% faster than when using individual (non-CSV) LUNs for each VM. Also, the visible pause in the guest OS when using individual LUNs is roughly 6 seconds, compared to only a 2-second pause when using CSVs (when no workload is applied).

Note
To perform live migrations, Hyper-V hosts must be in a cluster. However, CSVs are not required for live migration. Live migration performance depends heavily on the current workload applied to the VM, the amount of RAM the VM owns, and the bandwidth of the virtual network used for live migration. Live migration takes a different amount of time in each environment.

With this flexibility, a few large LUNs (each with a CSV on it) can be created for many (or all) VMs and their associated configuration files and snapshots. This method of storage allocation also reduces wasted disk space because the VMs share a larger capacity pool.

To test the I/O performance of CSVs versus non-CSV volumes on individual LUNs, I/O workloads are run on multiple VMs (ranging from 1 to 10 VMs at a time). In one series of tests ("X LUNs"), several EVA Vdisks are presented (as LUNs) to the host and formatted, each holding a VHD for a different VM. This is the method required under the original release of Hyper-V. In the second series of tests ("2 CSVs"), two large EVA Vdisks are presented to the host, formatted, and added to the cluster as CSVs. In this case, all of the VM VHDs reside on these two CSVs. The workload previously specified is applied, and the IOPS achieved is shown in Figure 5. Impressively, the CSVs perform comparably to individual non-CSV LUNs for each VHD. The CSVs also have response times that are almost identical to the individual volumes when testing five or fewer VMs.

Figure 5. IOPS for VMs on separate volumes versus two CSVs

Note
This is not meant to be a sizing guide, nor do these test results benchmark the EVA. The workload applied is a very heavy load for a 16-drive disk group. Monitor the disk latency in your environment and, if necessary, add disk drives to the disk group to improve performance.

Because the EVA spreads the Vdisks across the entire disk group, EVA performance is not significantly affected by having more or fewer Vdisks, as long as there are at least two Vdisks to balance across the two EVA controllers. Notice, however, that the final column of data points (with 10 VMs tested) shows a performance improvement when using separate volumes for each VM. This is because the HBA on each host is set by default to have LUN-based queues, meaning there is a queue for each LUN (EVA Vdisk) presented to the host. Therefore, with 10 LUNs, there are more queues on the host sending I/O requests, allowing fewer I/O conflicts and keeping the EVA busier than with only two LUNs. If using a small number of Vdisks on the EVA, consider increasing the HBA queue depths as recommended in the LUN count influences performance section of the HP StorageWorks 4400/6400/8400 Enterprise Virtual Array configuration - Best practices white paper.

Best Practice
For ease of management, faster migrations, and excellent performance, use Cluster Shared Volumes in your clustered Hyper-V environment. If very few Vdisks (LUNs) are used on the EVA, consider increasing HBA queue depths for possible performance improvements.

EVA configuration considerations

The CSV testing previously mentioned uses two LUNs on the EVA. Some testing is also done with only one CSV on one LUN, with all VM data residing on that LUN. Performance results when using two LUNs (one CSV on each LUN) are 2% to 8% better, depending on the number of VMs, than when using only one CSV and LUN. This is because with two LUNs, each is managed by one of the EVA controllers. If only one LUN exists, the controller managing that LUN must service all requests, eliminating the performance benefits of the second controller. Requests sent to the secondary controller are proxied to the managing controller to be serviced, increasing service times.

Simply having an even number of LUNs, however, does not guarantee optimally balanced performance. By default, when creating a LUN in CV-EVA, the preferred path option is No preference. With this setting, by mere chance, the LUNs with the heaviest workloads might be managed by one controller, leaving the other nearly idle. It is therefore beneficial to specify which LUNs should be managed by which controller to balance the workload across both controllers.

Best Practice
For optimal performance, balance usage of the EVA controllers by specifically placing each LUN on a desired controller based on the LUN's expected workload.

If an event causes a controller to go offline (even briefly), all LUNs that the controller manages are moved to the other controller to maintain availability, and they do not immediately fail back when the first controller is again available. Therefore, a small outage on controller A might leave controller B managing all LUNs, limiting performance potential even when both controllers are available. To avoid this scenario, check the managing controller of the LUNs on the EVA periodically and after significant events occur in the environment. This can be done using CV-EVA or HP StorageWorks EVA Performance Monitor (EVAPerf). In CV-EVA, view Vdisk Properties and select the Presentation tab to view or change the managing controller of the desired LUN, as shown in Figure 6. When changing the managing controller, be sure to click Save changes at the top of the page or the changes will not be applied. The owning controller for all LUNs can be quickly viewed in EVAPerf by issuing the EVAPerf command with the vd (virtual disk) parameter, as shown in Figure 7. HP StorageWorks Storage System Scripting Utility (SSSU) for CV-EVA can also be used to change ownership of the LUNs and proves to be a powerful scripting tool for changing many aspects of the EVA configuration. For more information about CV-EVA and SSSU for CV-EVA, see Manuals - HP StorageWorks Command View EVA Software.

Best Practice
Use CV-EVA and EVAPerf to monitor the managing controller for each LUN and rebalance the LUNs across the controllers with CV-EVA or SSSU, if necessary.
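Rebalancing can also be scripted with SSSU. The following sketch writes a command file and passes it to the utility; the manager credentials, system name, Vdisk paths, and PREFERRED_PATH keyword values are assumptions to verify against the SSSU reference for your firmware version.

    # Hypothetical SSSU script preferring controller A for one Vdisk and
    # controller B for another (all names below are examples only).
    $script = "SELECT MANAGER localhost USERNAME=admin PASSWORD=password",
              "SELECT SYSTEM EVA4400",
              'SET VDISK "\Virtual Disks\CSV1" PREFERRED_PATH=PATH_A_BOTH',
              'SET VDISK "\Virtual Disks\CSV2" PREFERRED_PATH=PATH_B_BOTH',
              "EXIT"
    $script | Set-Content rebalance.txt
    sssu.exe "FILE rebalance.txt"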

Figure 6. Viewing and setting the managing controller for a Vdisk in CV-EVA

Figure 7. Viewing virtual disk properties in EVAPerf

Disk types

Two disk configurations can be used for a VM to access storage: virtual hard disks and pass-through disks. Each of these disk types has its own purpose and special characteristics, as explained in the following sections.

Virtual hard disks

A virtual hard disk (VHD) is a file with a .vhd extension that exists on a formatted disk on a Hyper-V host. This disk can be local or external, such as on a SAN, and each VHD has a maximum capacity of 2,040 GB. To use the VHD, it is assigned to a VM, which places its operating system or other data on it. This can be done either during or after VM creation, whereas pass-through disks are attached to an existing VM. Three VHD types (fixed, dynamically expanding, and differencing) provide the option to focus on either performance or capacity management.

Fixed VHD

With a fixed VHD, the VHD file consumes the specified capacity at creation time, allocating the initially requested drive space all at once and limiting fragmentation of the VHD file on disk. From the VM's perspective, this VHD type then behaves much like any disk presented to an OS. The VHD file represents a hard drive: as data is written to a hard drive, the data fills that drive; in this case, the VM writes data in the existing VHD file but does not expand that file.

Dynamically expanding VHD

With a dynamically expanding VHD, a maximum capacity is specified. However, upon creation, the VHD file grows only to consume as much capacity on the volume as is currently required. As the VM writes more data to the VHD, the file dynamically grows until it reaches the maximum capacity specified at creation time. Because dynamically expanding VHDs consume only the capacity they currently need, they are very efficient for disk capacity savings. However, whenever the VHD file needs to grow, I/O requests might be delayed because it takes time to expand the file. Additionally, increased VHD file fragmentation might occur as the file is spread across the disk and mixed with other I/O traffic on the disk.

Fixed versus dynamically expanding VHD performance

To directly compare the performance of fixed and dynamically expanding VHDs, VMs with each VHD type have a 50 GB file placed on the VHD. This large file forces the dynamically expanding VHD files to grow, roughly matching the size of the fixed VHD and eliminating the need for expansion during the I/O test. The workload specified in I/O test configuration is then applied to the VMs. With the fixed and dynamically expanding VHD files nearly identical in size and the same workload applied to both, the dynamically expanding VHDs are expected to perform equal to the fixed VHDs. The results, however, reveal that the fixed VHDs achieve up to 7% more IOPS at a 7% lower latency. It is also important to recognize that dynamically growing a VHD, as would occur in a real environment, would further slow its performance and likely cause increased fragmentation. For this reason, when using VHDs for VM storage, place performance-sensitive applications and VMs on fixed VHDs. Dynamically expanding VHDs, on the other hand, offer significant disk capacity savings and should be used when performance is not critical.

Best Practice
Use fixed VHDs rather than dynamically expanding VHDs for production environments where performance is critical. Where capacity is more of a concern, or for general use, use dynamically expanding disks.
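Both VHD types can be created in Hyper-V Manager or scripted. The following is a minimal sketch using the diskpart utility included in Windows Server 2008 R2; the file paths and 40 GB size are hypothetical (diskpart sizes are in MB).

    # Create a fixed VHD and a dynamically expanding VHD (sizes in MB).
    "create vdisk file=C:\VMs\fixed.vhd maximum=40960 type=fixed",
    "create vdisk file=C:\VMs\dynamic.vhd maximum=40960 type=expandable" | diskpart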
Differencing disks

A differencing disk allows the creation of new (child) VMs from previously existing (parent) VMs. To create a differencing disk, the parent VM is shut down and its VHD is put into read-only mode to protect it. Any changes to the parent's VHD after the creation of the differencing disk ruin the data integrity of that differencing disk. A new VHD is then created by specifying the differencing disk type and making a reference to the parent VHD. The differencing disk that is created contains changes that

would otherwise be made to the parent VM's VHD. The differencing disk is then assigned to a new (child) VM that, when started, is identical to the read-only parent VM. At this point, the child VM can be changed and used like any other VM. Because the differencing disk records only the changes to the parent VM, it initially uses much less disk capacity than the parent VHD. Figure 8 shows the VHD for the parent VM (FixVM.vhd) and the highlighted differencing disk (CSV7_Diff.vhd) for the child VM.

Note
Differencing disks use the dynamic disk type, allowing them to grow dynamically as needed. Therefore, differencing disks experience the same performance issues as dynamic disks.

Figure 8. Differencing disk file

This technique allows rapid provisioning of test VMs because multiple differencing disks and the associated child VMs can be created referencing the same parent VM, creating a tree structure of parent-child relationships. A child VM can also serve as a parent for another differencing disk, thus creating a chain of child VMs.

Note
Because multiple differencing disks can depend on the same VHD, VMs that use differencing disks cannot be moved with live or quick migration. If the child VM is in the same cluster resource group as the parent, moving the parent VM also moves the child, but this is not a live migration.

With differencing disks, many VMs can be created quickly with the same installation and applications, and then each can be changed as needed. This is ideal for testing different stages of an upgrade or compatibility between applications or updates. A child VM that uses a differencing disk

can also be merged with the parent VM (assuming that the child VM is the only child of the parent VM), or the differencing disk can be converted to a fixed or dynamically expanding VHD of its own, making the child VM independent of the parent.

For example, if a VM has Windows Server 2008 installed, a differencing disk can be created from it and applied to a child VM. The child can then have a service pack or update installed to verify functionality with existing software. Next, that child can become the parent for another differencing disk and VM, which can have the next service pack or update installed. In this manner, multiple upgrade paths can be tested and thoroughly verified. Then, either of the child VMs can be merged with the parent VM to apply those changes to the parent, or a child VM's VHD can be converted to a fixed or dynamically expanding VHD to allow it to run independently of the parent VM.

Recognize that the more differencing disks there are on a volume, the more I/O is created, degrading performance. The features and functionality of differencing disks are invaluable for test and development environments, but are likely not suitable for a production environment because of that performance impact. Also, because LUNs on the EVA are striped across the entire disk group, do not use differencing disks on LUNs that share a disk group with a production system.

Best Practice
Use differencing disks for test and development environments to rapidly deploy similar VMs and to test compatibility. To avoid performance degradation, do not use differencing disks in a production environment or in environments that share a disk group with a production system.

Snapshots

With Hyper-V, a point-in-time snapshot can be taken of a VM to save its current state, whether the VM is turned off or still running. This saved state includes not just the content of the VHD, but also the state of the memory and configuration settings, which allows a VM to be rolled back to a previous state of the machine. Be aware that rolling back some applications, such as a database, might cause synchronization issues or data inconsistency. Rolling back VMs with such applications must be done carefully and might also require restoration techniques to return those applications to a functional state.

Snapshots are not an alternative to backups. A point-in-time copy can be taken while an application is changing data on the disk, so application data might not be in a consistent state. Also, if the base VHD is lost or corrupted, the snapshot is no longer usable, so it is very important to have an alternative backup strategy.

Note
Snapshots are not a backup solution. Use Microsoft System Center Data Protection Manager (SCDPM), HP Data Protector, or another backup utility to prevent data loss in case of a hardware failure or data corruption.

Multiple snapshots of a VM can be taken, creating a chain of possible restore points as shown in Figure 9.

Figure 9. Snapshot chain

Snapshots record changes to the VM by creating a differencing disk with an .avhd extension, as well as a duplicate XML file with VM configuration data, a .bin file with VM memory contents, and a .vsv file with processor and other VM state data. If the VM is off when the snapshot is created, no .bin or .vsv files are created because there is no memory or state to record.

Because snapshots contain changes to the disk, memory, and configuration, they start small but can grow rapidly, consuming otherwise available disk capacity and creating additional I/O traffic. This can hinder the performance of VMs and other applications on the Hyper-V host. Be aware that taking a snapshot of a VM creates a differencing disk for every VHD that the VM owns. Therefore, the more VHDs the VM owns, the longer the snapshot takes, the more disk capacity it consumes, and the greater the potential for performance degradation. Also, while there is a setting in Server Manager called Snapshot File Location, this setting only changes the storage location of the VM memory, state, and configuration components of the snapshot. The differencing disk file is still stored on the volume that holds the original VHD.

Note
Snapshots cannot be taken of VMs that have any pass-through disks attached.

The ability to roll a VM back to a previous point in time is very useful when a VM experiences undesirable results while installing patches, updates, or new software, or while performing other configuration changes. The performance impact of having existing snapshots, however, is not ideal for most production environments. Because snapshots cause extra I/O, it is beneficial to remove snapshots that are no longer needed. Deleting one or more snapshots starts a merging/deleting process that removes the requested snapshots. If, however, the VM is running when a snapshot is deleted, the system does not remove the differencing disk immediately. The differencing disk is actually still used until the VM is shut down, at which time the differencing disk is merged with any necessary VHDs and the disk space it used is released.

Best Practice
Use Hyper-V snapshots for test and development environments before installing new patches, updates, or software. To avoid performance degradation, and because VMs must be shut down to properly remove differencing disks (.avhd files), do not use snapshots in production environments or on a disk group shared with a production system.

VHD and volume sizing

One benefit of virtualization is potential storage consolidation. Because of the common best practice of using separate disks for OS and application data, many physical servers have large amounts of unused space on their OS (root) drives. Collectively, those physical servers might have terabytes of unused disk capacity. With Hyper-V VMs, however, OS VHDs can be sized more appropriately to save disk capacity. Properly sizing VHDs at creation time is important because undersizing a VHD can cause problems for the owning VM (just as a full root disk does for a physical server) and oversizing a VHD wastes disk capacity. Although it might be tempting to size a VHD just slightly larger than necessary for the OS and applications, remember that unless manually changed, the swap (paging) file resides on the root VHD of the VM. Also be sure to plan for patches, application installations, and other events that might increase the capacity that the root VHD needs.

Sizing the volume that the VHD resides on is also very important. For ease of management, it is common to place the configuration files on the same volume as the root VHD. Remember, however, that the configuration file with a .bin extension consumes as much disk capacity as there is memory given to the VM. Also remember that snapshots are stored on the same volume that holds the VHD. Hyper-V also periodically uses a small amount of capacity (generally less than 30 MB) on the VHD's root volume. Without sufficient free capacity on that volume, a VM might transition to a Paused-Critical state, where all VM operations are halted until the issue is resolved and the VM is manually returned to a Running state. To avoid VM complications, reserve at least 512 MB on any volumes that a VM owns. If snapshots are used, reserve more capacity, depending on the workload. Also carefully monitor free space on those volumes. While this might be easy with volumes that hold fixed VHDs, dynamically expanding VHDs and differencing disks might grow suddenly, causing failures due to lack of disk capacity.

Best Practice
Keep at least 512 MB free on volumes holding virtual machine VHDs or configuration data. If dynamically expanding VHDs, differencing disks, or snapshots are used, keep sufficient disk space free for unexpected sudden increases in VHD or AVHD size. The necessary capacity depends on the workloads applied.

Best Practice
Use HP Storage Essentials or Microsoft System Center Operations Manager (SCOM) to monitor free disk capacity on volumes that hold VM VHDs or configuration files. Alert the storage administrator if free disk capacity drops below the desired threshold.
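As a simple safety net alongside those tools, the check can also be scripted. The following is a minimal sketch using WMI; the 512 MB threshold follows the guideline above, and the alerting action is left as an assumption to adapt (for example, email or an event log entry).

    # Report local volumes with less than 512 MB free.
    $thresholdMB = 512
    Get-WmiObject Win32_Volume -Filter "DriveType = 3" |
        Where-Object { $_.FreeSpace / 1MB -lt $thresholdMB } |
        ForEach-Object {
            "WARNING: {0} has only {1:N0} MB free" -f $_.Name, ($_.FreeSpace / 1MB)
        }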

Pass-through disks

Pass-through disks differ from VHDs in that they are not online to the Hyper-V host, nor do they use a VHD file. Instead, they are passed directly to the VM, allowing less processing overhead and slightly better performance for the disk. In this environment, pass-through disks show only up to a 5% performance improvement over VHDs. However, pass-through disks have another significant benefit: no capacity limit. VHDs have a limit of 2,040 GB. Pass-through disks do, however, lack many of the features available with VHDs, such as the use of snapshots, differencing disks, CSVs, and overall portability. With a pass-through disk, the disk is presented to the Hyper-V host but left offline. It is later brought online from within the VM.

Note
When using pass-through disks, make sure that the disk is offline to the host OS. If a host and guest OS each attempt to access a pass-through disk at the same time, the disk can become corrupted.

When using the Create New Virtual Machine wizard, a pass-through disk cannot be attached to a VM at the time of the VM's creation. Instead, when placing the OS on a pass-through disk, create the VM and, when choosing a disk, select the Attach a virtual hard disk later option. After the VM is created, in the VM settings, choose the physical disk option instead of selecting a VHD and select a disk from the list. Be aware that because the VM consumes the entire pass-through disk, the configuration files must be placed on a different volume, whereas they can reside on the same volume as a VHD.

Best Practice
Because VHDs offer functionality beyond what is available with pass-through disks, do not use pass-through disks for OS boot disks. Use pass-through disks only when application performance is of extreme importance, when the application vendor recommends allowing raw access to the disk, or when a single disk needs a capacity greater than 2 TB.

Disk expansion

If a fixed VHD's capacity is nearly consumed, or a dynamically expanding VHD reaches its maximum limit and more capacity is desired, it is possible to expand the VHD. Pass-through disks can also be expanded in certain circumstances. Note, however, that disk expansion (VHD or pass-through) is not the same as having a dynamically expanding VHD.

Warning
Attempts to expand pass-through disks that act as the OS boot disk or pass-through disks that use an IDE controller frequently result in data corruption. Do not attempt to expand bootable pass-through disks or pass-through disks that use IDE controllers without first testing the expansion thoroughly and having a current backup of the disk.

Expanding VHDs and pass-through disks might first require increasing the EVA Vdisk capacity. While an EVA Vdisk's capacity can be increased with concurrent I/O activity, expanding a VHD requires that the VHD be disconnected briefly from the VM or, alternatively, that the VM be shut down. Modifying the structure of any disk includes inherent risk. Before expanding, compressing, or otherwise changing any disk, make a backup and stop I/O to that disk to prevent data corruption. For more information about how to expand a VHD or pass-through disk, see Appendix A: Disk expansion.

Best Practice
If necessary, VHDs and some pass-through disks can be expanded. Before expanding a disk, be sure to have a current backup of the disk and pause the I/O traffic to that disk to avoid data corruption.

Disk options summary

Figure 10 shows a logical comparison of the different disk types. For the best performance, use pass-through disks, but remember that they lack many of the features that are available with VHDs and perform only slightly better than fixed VHDs. Therefore, using fixed VHDs might be the better option, while dynamically expanding VHDs can provide significant savings in disk capacity.

Figure 10. Logical disk types

Virtual disk controllers

To present a VHD or pass-through disk to a VM, an IDE or SCSI virtual disk controller must be used. Controller performance has improved over older virtualization technologies, and there are several subtle differences to be aware of.

Disk controller performance

In some previous virtualization tools from Microsoft, virtual SCSI controllers performed better than virtual IDE controllers. This is because the SCSI controllers are synthetic devices designed for minimal overhead (they do not simulate a real or physical device), and I/O requests are quickly sent from the guest across the Virtual Machine Bus (VMBus) to the host I/O stack. IDE controllers, however, emulate a real device, which previously required extra processing before I/O requests were sent to the host I/O stack. With Hyper-V, however, a filter driver bypasses much of the extra processing and improves performance to equal that of the SCSI controller. Testing in this project confirms this, with drives on SCSI controllers performing less than 3% better than those on IDE controllers.

Disks per controller type

Hyper-V allows two IDE controllers and four SCSI controllers per VM. Each IDE controller can have two devices attached, and one of these four total IDE devices must be the VM's boot disk. If a DVD drive is desired, it must also reside on an IDE controller. This leaves only two device slots (three if no DVD drive is used) available for other IDE drives to be attached, after which SCSI controllers must be used to allow more devices. Each of the four SCSI controllers has 64 device slots, allowing 256 total SCSI drives. Although DVD drives cannot reside on a SCSI controller and a VM cannot boot from a SCSI controller, with the release of R2, the SCSI controller provides the benefit of hot-adding and hot-removing drives on a VM. To allow this capability, be sure to add the desired SCSI controllers to the VM before powering on the VM because controllers cannot be added or removed while a VM is running. Also, Microsoft recommends spreading disks across separate SCSI controllers for optimal performance. For more information, see Performance Tuning Guidelines for Windows Server 2008 R2.

Best Practice
To allow VMs to hot-add or hot-remove disks without requiring VM shutdown, and for optimal performance, add all four virtual SCSI disk controllers to each VM at setup time and balance presented storage evenly across the controllers.

Controller usage recommendation

A common best practice recommends placing the OS and application data on separate disks. This can increase availability because it prevents application workloads from impacting the operating system. For consistency and ease of management, place application and data disks on SCSI controllers.

Best Practice
The boot disk for a Hyper-V VM must use a virtual IDE controller. For ease of management, and to use the hot-add or hot-remove disk feature of Hyper-V R2, place application and data disks on virtual SCSI disk controllers.

If necessary, the controller type that a disk is attached to can be changed. However, because IDE controllers cannot hot-add or hot-remove disks, the VM must be shut down first. After the VM is shut down, change the controller by simply removing the disk from one controller and adding it to another.

VM and storage management

While virtualization has many benefits, managing a virtualized environment can still be difficult because there are many new concepts and challenges. Several tools for managing storage and a Hyper-V environment are discussed in the following sections.

Server Manager

Server Manager, which includes a compilation of modules in the Microsoft Management Console (MMC), has interfaces for managing failover clusters, local and external storage, performance, and many more components of the server. It is also likely the most accurate source of information on VMs in a Hyper-V environment because it resides on each full Windows Server installation and remains very current. Many management applications are installed on management servers and must request information from the necessary host. Some collect VM status information only periodically and do not always present current data.

Remote Management

With Windows Server 2008 R2, Server Manager can manage remote hosts as well. To do this, first enable remote management from the target server's Server Manager by clicking the Configure Server Manager Remote Management link and selecting the box in the pop-up window, as shown in Figure 11. Then, in Server Manager on another Windows Server 2008 R2 host, right-click the server name, select Connect to Another Computer, and enter the server name. This can also be done from the Action menu.

Figure 11. Configuring Server Manager Remote Management

Server Core installations of Windows Server 2008 R2 can also be managed from a remote Server Manager. To enable remote management on the Server Core machine, run SCONFIG and select the 4) Configure Remote Management option. After configuring the necessary firewall ports, select the Connect to Another Computer option from a host with a full installation of Windows Server 2008 R2 (as explained previously). With this functionality, the storage of a Server Core machine can be managed from the Server Manager GUI. For more information about remote management, see Remote Management with Server Manager and Windows Server 2008 R2 Core: Introducing SCONFIG.

Note
You can also manage the storage of Windows Server 2008 R2 machines (Server Core or full edition) from a workstation running Windows 7 or Windows Vista by installing the Remote Server Administration Tools for Windows 7 or the Remote Server Administration Tools for Windows Vista, respectively.
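On full installations, the same remote management setting can also be enabled from an elevated PowerShell prompt. This sketch assumes the Configure-SMRemoting.ps1 script that ships with Windows Server 2008 R2; verify the script name and switches on your build.

    # Enable Server Manager remote management (equivalent to the checkbox
    # shown in Figure 11).
    & "$env:windir\system32\Configure-SMRemoting.ps1" -Force -Enable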

Hyper-V Manager

After the Hyper-V server role is installed, the Hyper-V Manager component becomes available, which you can use to view the VMs on a host (local or remote) and perform various tasks. From the Hyper-V Manager, you can edit VM settings, take snapshots, connect to the VM, and more (see Figure 12). This is a critical tool for managing Hyper-V VMs and is included in Windows Server 2008 R2.

Figure 12. Hyper-V Manager within Server Manager

Failover Cluster Manager

Another valuable feature in Server Manager is the Failover Cluster Manager. While server administrators who have set up a cluster in previous versions of Windows might be familiar with this component, the Failover Cluster Manager has two new features of interest when working with VMs.

First, right-clicking a cluster presents a new option to enable CSVs. After CSVs have been enabled, a new folder for CSVs appears under the cluster, as shown in Figure 13. CSV storage details are visible in this folder, including the capacity and path of each CSV, none of which use drive letters. CSVs can now be created from unused cluster storage.

Note
To add a CSV, the desired volume must already be a resource in the cluster (visible in the Storage folder under the cluster name) and available. If any VM or other resource is using the volume, it cannot be turned into a CSV.

Figure 13. CSVs from within Failover Cluster Manager

Second, if the cluster is properly configured, live migration becomes an option. To perform one, right-click the VM, highlight Live migrate virtual machine to another node, and select a target, as shown in Figure 14. This option is also available through the Actions window.

Figure 14. Live migration from within Failover Cluster Manager

VMs can be started, stopped, and generally managed from within Failover Cluster Manager, making it a valuable tool for managing a highly available virtualized environment. However, some features, such as creating snapshots and importing or exporting VMs, are still only available in the Hyper-V Manager.
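Both of these operations can also be scripted with the FailoverClusters PowerShell module included in Windows Server 2008 R2. The following is a minimal sketch; the cluster group and node names are hypothetical.

    Import-Module FailoverClusters

    # Add an available cluster disk and convert it to a Cluster Shared Volume.
    $disk = Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name $disk.Name

    # Live migrate a clustered VM group to another node (names are examples).
    Move-ClusterVirtualMachineRole -Name "VM1" -Node "HyperV2" -MigrationType Live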

Performance Monitor

The Windows Performance Monitor (Perfmon) is a very useful tool for determining the performance bottleneck in a solution. Perfmon has numerous counters, with categories that include server hardware, networking, database, cluster, and Hyper-V. While it is possible to collect Perfmon counters from each VM that runs Windows, doing so can be very tedious. Instead, collect Hyper-V counters on the host, such as those in the Hyper-V Virtual Storage Device category, to avoid having to compile results from multiple VMs. Also, monitor the Cluster Shared Volumes category to determine whether a CSV is being properly used. Counters from the Hyper-V Virtual Storage Device and Cluster Shared Volumes categories can also be compared with the host's LogicalDisk category to see the difference between I/O requests from the VMs and what the host actually sends to disk. Because the storage devices can also be on a SAN, these counters can be compared to EVAPerf statistics, such as the virtual disk counters seen by running evaperf with the vd option.

Best Practice
Instead of collecting numerous performance counters from each VM, use the Windows Performance Monitor on the host to collect Hyper-V counters. Compare counter sets such as Cluster Shared Volumes and Hyper-V Virtual Storage Device to those in the LogicalDisk set to understand disk performance from the VM's perspective versus the host's perspective.

In addition to monitoring disk performance, it is important to watch the performance of the processors because it can be challenging to properly balance processor usage in a virtualized environment. On non-virtualized servers, it might be sufficient to monitor the \Processor(*)\% Processor Time counter to determine whether a server's processors are being properly used. However, with Hyper-V enabled, monitoring this counter either on the host or on the VMs can be very misleading. This is because, for performance monitoring, the host OS is considered another guest OS or VM, and that counter reports only the processor usage for that particular VM or host OS. Therefore, viewing the \Processor(*)\% Processor Time counter shows only usage as seen by the host, not the total processor usage of the physical server.

On VMs, this value can be further skewed if more virtual processors are allocated to the resident VMs than there are logical processors available. For example, if a server has two dual-core CPUs, it has a total of four logical processors. However, if the server also has four VMs, each with two virtual processors, then eight total virtual processors compete for CPU time (not including the virtual processors for the host OS). If each of these VMs has a CPU-intensive workload, they might saturate the available logical processors and cause excessive context switching, further decreasing processor performance. In this scenario, consider that all four logical processors are busy 100% of the time due to VM workloads. Because one logical processor is roughly equivalent to one virtual processor and there are eight virtual processors for the VMs, only roughly half of the virtual processor resources are fully used. Therefore, if the \Processor(*)\% Processor Time counter is viewed on any or all of the VMs, it shows (on average) 50% usage, suggesting that the server is capable of a heavier workload.

Consider, on the other hand, a scenario with only two VMs, each with only one virtual processor. With each VM running a heavy workload, the \Processor(*)\% Processor Time counter shows high usage, suggesting that the VM, and even the physical server, is not capable of handling the load. The same counter from the host's perspective, however, might show very low usage because the host is relatively idle. This might lead one to assume that the server is capable of handling the workload and that all is well, when in fact allocating more virtual processors to each VM could saturate the server's logical processors.

The values in this example are overly simplified because workloads are rarely so evenly distributed and processor usage is rarely so clearly balanced. However, the example does reveal the importance of understanding the counters found in Perfmon. To properly monitor processor usage in a Hyper-V environment, pay attention to two counter sets: Hyper-V Hypervisor Logical Processor and Hyper-V Hypervisor Virtual Processor. To view the physical processor usage of the entire server (host OS and guest VMs), view the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time counter.
Viewing only the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time counter, however, is not sufficient for understanding the processor usage on the VMs; also watch the \Hyper-V Hypervisor Virtual Processor(_Total)\% Total Run Time counter. If the virtual processor usage is high and the logical processor usage is low, add more VMs or more virtual processors to existing VMs. If the logical processor usage is high and the virtual processor usage is low, there are likely more virtual processors than logical processors, causing unnecessary context switching. In this case, reduce the number of virtual processors and attempt to reach a 1:1 ratio of virtual to logical processors by either moving VMs to another server or reducing the virtual processors allocated to

each VM. If both logical and virtual processors show high usage, move VMs to a different server or add processor resources, if possible.

Best Practice
Do not use the \Processor(*)\% Processor Time counter on the host or VMs to monitor the Hyper-V server's total processor usage. Instead, use the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time counter to view processor usage. This includes usage from the host and guest operating systems.

Best Practice
Monitor VM processor usage with the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time and \Hyper-V Hypervisor Virtual Processor(_Total)\% Total Run Time counters. If the virtual processor usage is high and the logical processor usage is low, add VMs or virtual processors to existing VMs. If the logical processor usage is high and the virtual processor usage is low, move VMs to a different server, reduce virtual processors on local VMs, or add physical processors to the server.

For more information about Hyper-V performance monitoring, including a flowchart for processor usage analysis, see Measuring Performance on Hyper-V. For information about calculating expected processor requirements and adding, removing, or otherwise configuring virtual processors on a VM, see A quick sizing guide for Microsoft Hyper-V R2 running on HP ProLiant servers.
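The counters named above can also be collected from the command line with the Get-Counter cmdlet available in Windows Server 2008 R2; a brief sketch follows (the host name and sampling interval are hypothetical).

    # Sample host-wide and per-VM processor usage every 5 seconds.
    $counters = '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
                '\Hyper-V Hypervisor Virtual Processor(_Total)\% Total Run Time'
    Get-Counter -ComputerName HyperV1 -Counter $counters -SampleInterval 5 -MaxSamples 12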

System Center Virtual Machine Manager

The Microsoft System Center suite is a family of products that prove very helpful in managing any Windows environment. System Center Virtual Machine Manager 2008 R2 (SCVMM) is a powerful application that makes monitoring, managing, and deploying VMs easier and faster. Much of the virtual machine information available in Server Manager is also available in SCVMM in a more centralized interface. In addition to VM summary information, such as processor, memory, and other details, SCVMM shows networking and storage information. When performing many actions, such as migrating between hosts or taking snapshots, one helpful feature is the job detail tab, which shows each step of the job. Figure 15 shows a VM storage migration.

Note
If hosts are added to a cluster after they are imported into SCVMM, the cluster nodes must be removed from the SCVMM interface and re-added for SCVMM to recognize the cluster and update the Failover Cluster Manager when performing operations such as quick storage migration.

Figure 15. SCVMM VM job progress

From within SCVMM, VM migrations can be performed and settings can be changed. VMs can be deployed, snapshots can be created and managed (in SCVMM, snapshots are called checkpoints), and physical hosts can be converted to VMs. For administrators who are also working with VMware, SCVMM can manage VMware ESX servers through VMware VirtualCenter, allowing a single management pane. SCVMM also has a feature for converting VMware VMs to Hyper-V VMs. SCVMM has a library for easy VM deployment (see Deployment with SCVMM), and it integrates with other System Center applications. SCVMM also shows the underlying PowerShell scripts used to perform operations, making it easy for administrators who are unfamiliar with PowerShell to learn the commands and structure necessary to write their own scripts to automate the deployment of VMs.

Quick Storage Migration

Another feature available with Hyper-V R2 and SCVMM 2008 R2 is Quick Storage Migration (QSM), which allows the storage location of a VM's VHDs to be moved to a different volume with limited downtime for the VM. Because these volumes can reside in completely separate locations, QSM can migrate a VM's VHDs from local to shared storage, from a traditional volume to a CSV, or from one SAN to an entirely different SAN, if desired. Quick Storage Migration is not limited by storage protocol, meaning that it can move a VHD between Fibre Channel and iSCSI SANs. This makes it easy to migrate existing VMs from an older storage array to the new EVA.

Note
Quick Storage Migration cannot be performed if the VM has a pass-through disk attached. To perform a migration, first disconnect the pass-through disks, and then perform the migration. If necessary, copy the data on the pass-through disk to a VHD to include that data in the migration.

HP Command View EVA software

Managing an EVA is very simple with CV-EVA because it provides a convenient interface to the EVA's many powerful features. In the past, CV-EVA was available only as a server-based management (SBM) tool, but recent EVA releases also allow array-based management (ABM), which uses the EVA management module and requires no external server for CV-EVA. However, the two previously mentioned utilities, EVAPerf and SSSU, are not available on the management module. Using EVAPerf along with Windows Perfmon to monitor EVA performance, and SSSU to script configuration changes on the EVA, can greatly enhance management of a Hyper-V environment.

Best Practice
To understand disk performance of the entire solution, run EVAPerf with the vd (Virtual Disk) option and compare the results to Perfmon counter sets such as LogicalDisk, HP DSM High Performance Provider, Hyper-V Virtual Storage Device, and Cluster Shared Volumes.

HP Systems Insight Manager

Many administrators use HP Systems Insight Manager (SIM) to manage their physical servers. This utility has a Virtualization Manager Module that allows SIM to recognize VMs and even perform basic functions such as starting, stopping, deploying, and copying VMs, as shown in Figure 16.
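Before moving on, returning to the EVAPerf best practice above: virtual disk statistics can be captured from the command line on the server where EVAPerf is installed (typically the Command View EVA server). This is a hedged sketch; the -cont (continuous sampling interval) and -csv flags vary by EVAPerf release, so verify them against the EVAPerf help for your version.

    # Capture per-Vdisk statistics every 10 seconds to a CSV file for
    # side-by-side comparison with the Perfmon counter sets listed above.
    .\evaperf vd -cont 10 -csv | Out-File eva_vd_stats.csv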

Figure 16. HP SIM VM recognition

System Center Operations Manager (SCOM)

Another component of the Microsoft System Center suite is System Center Operations Manager (SCOM). This tool is useful for monitoring the physical state of the Hyper-V host servers and VMs, providing server health information and configurable warnings and alerts when thresholds are reached, such as when processor or memory usage rises above a certain point. SCOM also has many management packs that enable rich application monitoring. When integrated properly with SCVMM, SCOM can also monitor the performance of the VMs, generating reports that can be viewed from either SCOM or SCVMM (see Figure 17).

Note
If using SCOM 2007, the HP StorageWorks management pack can be integrated for extra storage management functionality within SCOM. This management pack is not yet available for SCOM 2007 R2.

Figure 17. Virtual Machine Utilization report from SCOM/SCVMM

When properly integrated, SCOM can pass Performance and Resource Optimization (PRO) tips to SCVMM. These tips are based on configurable settings and can be implemented automatically to increase the availability or performance of a Hyper-V environment. For example, a PRO tip might warn when a clustered Hyper-V host server reaches 90% CPU usage and, if configured to act automatically, SCVMM will move a VM from that host to another Hyper-V host in the cluster. Properly configured PRO tips can prevent degraded performance and even avert server failures. PRO tips can be enabled by viewing the properties for a cluster or host group, as shown in Figure 18. Each VM can be set individually to inherit the parent group's PRO tip settings. For more information about PRO tips, see TechNet Webcast: PRO Tips in System Center Virtual Machine Manager 2008 R2 (Level 300).

Best Practice
Integrate System Center Operations Manager with System Center Virtual Machine Manager to generate Performance and Resource Optimization (PRO) tips that increase availability and performance. If desired, allow the System Center applications to automatically perform the recommended PRO tips to avoid system failures in extreme circumstances.

Warning
If not properly configured and tested, automatic operations without administrator supervision can cause degraded performance or other unintended results. Thoroughly test all PRO tips before allowing them to be activated automatically.

Figure 18. Cluster group PRO tip settings

The System Center suite can also be integrated with the HP Insight Control 6.0 suite for increased functionality.

VM deployment options

One of the greatest benefits of using Hyper-V is the ability to easily deploy VMs, whether creating brand-new VMs, converting physical servers to virtual ones, or converting VMware VMs to Hyper-V VMs. There are several deployment tools and methods available. In addition to deploying new VMs with a fresh OS image, several of the methods described in the following sections can also create some form of clone of a server. When creating VM clones for deployment, it is important to remove server names and other server-specific information before creating the clone to avoid communication conflicts. This can be done by running the Sysprep command with the /generalize and /shutdown options.
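For reference, the command looks like the following when run inside the source VM. The /oobe switch is not mentioned above but is commonly included so that the deployed copy boots to Windows Welcome; treat it as an optional addition.

    # Generalize the OS image and shut the VM down, ready for capture.
    C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown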

Best Practice
To avoid server name conflicts when deploying virtual machines based on existing operating system images or clones, run the Sysprep command with the /generalize and /shutdown options prior to creating the image or clone.

Note
When using SCVMM to create a virtual machine template, it is not necessary to run Sysprep first because SCVMM runs Sysprep and removes server names and other conflicting data as part of the template creation process.

Windows Deployment Services

Many large environments already make use of the Windows Deployment Services (WDS) tool. This tool allows deployment of new or captured server images to both physical servers and VMs. A server can therefore be set up with all service packs, patches, and applications, and the image can then be captured and redeployed to servers and VMs much more quickly than installing a new OS and applying the desired changes manually. WDS can also deploy VHD files to physical servers, allowing them to boot from a VHD. To use WDS to deploy VMs, a new VM with a blank disk must be set up and then booted from the network using a legacy NIC, which is less efficient for the VM and host than using the VM's native (synthetic) NIC. If WDS is already set up in the environment, this method can speed up VM deployment.

EVA Business Copy snapclones

If a Business Copy license for CV-EVA is installed, it is possible to use snapclones to create what can be considered a VM clone. By creating an EVA snapclone of a Vdisk, the OS and all data on that Vdisk are duplicated, and the VHD on the clone can then be attached to a VM. Note, however, that even if the snapclone name is changed in CV-EVA, the duplicate LUN appears identical to the host, including LUN capacity and formatting. If many duplicate LUNs are presented to the same host, determining which LUN is which might be difficult. For this reason, using snapclones is not the recommended method for creating a duplicate VM.

Deployment with SCVMM

Although some management components of SCVMM have already been discussed, the SCVMM deployment tools are of special interest. SCVMM allows OS images, hardware profiles, and operating system profiles to be stored in its library for easy deployment. These profiles and images make VM cloning and template deployment very efficient.

VM cloning with SCVMM

Creating a VM clone through the SCVMM wizard is efficient and easy. If moving or duplicating a VM to an entirely different environment, creating a clone first ensures that if the export fails due to network or other issues, the original is still available and unchanged. Also, VMs or clones can be moved to the SCVMM library for later deployment. This is an ideal method for duplicating existing VMs.

VM template creation and deployment with SCVMM

With System Center Virtual Machine Manager, a deployment template can be made from a VM and stored in the SCVMM library for repeated use (see Figure 19). After the template is created, the hardware and OS profiles previously mentioned make deploying new VMs very easy.

Additionally, XML unattend files can be used to further automate configuration (a sample unattend fragment appears just before Figure 20). Using SCVMM templates to deploy new VMs is perhaps the most effective method of deployment because, with profiles, unattend files, and existing images, new VMs can be named, automatically added to a domain, and have applications and updates preinstalled. Many other settings can also be configured, reducing an administrator's post-installation task list.

Figure 19. SCVMM Library with templates, profiles, and images for deployment

Creating a VM template in SCVMM does not require running Sysprep first, as other deployment tools do, because Sysprep is part of the template creation process. Be aware, however, that creating a template consumes the source VM, so it might be beneficial to first create a clone of the source VM, and then use the clone to create the new template.

Best Practice
Use XML unattend files, hardware and operating system profiles, and SCVMM templates to easily deploy highly configurable VMs. Because creating a VM template consumes the source VM, use SCVMM to create a VM clone of the desired VM, and then create the template from that VM clone.

Physical-to-virtual (P2V) deployment through SCVMM

Server consolidation is one of the greatest reasons for implementing a virtualized environment. However, creating new VMs and changing them to function as the physical servers did can be a slow process. To simplify the process, SCVMM has a feature to convert a physical server to a VM. The Convert Physical Server (P2V) Wizard scans the specified server and presents valuable information about it, as shown in Figure 20.
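Before turning to the P2V wizard shown in Figure 20, the following minimal unattend fragment illustrates the kind of automation described above. It follows the standard Windows Server 2008 R2 unattend schema; the computer name and domain are hypothetical, and a real domain join also requires credentials, which are omitted here.

    <?xml version="1.0" encoding="utf-8"?>
    <unattend xmlns="urn:schemas-microsoft-com:unattend">
      <settings pass="specialize">
        <component name="Microsoft-Windows-Shell-Setup"
                   processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <ComputerName>NEWVM01</ComputerName>
        </component>
        <component name="Microsoft-Windows-UnattendedJoin"
                   processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <Identification>
            <JoinDomain>example.local</JoinDomain>
          </Identification>
        </component>
      </settings>
    </unattend>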

Figure 20. SCVMM convert physical to virtual wizard system information scan

When performing a P2V conversion, it is possible to select some or all (the default) of the desired disks (both local and external) to convert to VHDs for the new VM. However, all selected disks are converted to VHDs on one (specified) volume. If any of the new VM's disks are intended to be pass-through disks, or if they should reside on separate volumes or disks, this must be configured manually. To avoid consuming excessive network bandwidth and time, instead of converting all physical disks to VHDs, if possible (for example, if the storage is on a SAN), unpresent the disks from the physical host, and re-present them to the Hyper-V host and new VM after the VM has been created. Although this cannot be done for the root partition, which must be converted when using the P2V conversion wizard, re-presenting the remaining disks can yield significant time and bandwidth savings.

Best Practice
To save time and consume less bandwidth when performing a physical-to-virtual conversion, convert only the root partition and any disks that cannot be moved to connect to the Hyper-V host. For all other (non-root) disks, simply unpresent them from the physical server, re-present them to the Hyper-V host, and attach them to the new VM.

Although the default VHD type is dynamic, it is possible to choose a fixed VHD for a P2V conversion. A previously discussed performance best practice is to use fixed rather than dynamic VHDs. A fixed VHD, however, is created at the full specified size of the source disk, including unused capacity, as shown in Figure 21. In environments with large disk drives or slow networks, such a conversion might take a significant amount of time and bandwidth. Using a dynamic VHD copies only the data in use, resulting in faster conversions, and is therefore recommended when source disks are large or network bandwidth is limited. If fixed VHDs are desired and performance is a concern, then after converting large disks to dynamic VHDs, convert the dynamic VHDs to fixed within Hyper-V and expand them if necessary.

Best Practice
To avoid copying large quantities of unused disk capacity across the network, for all required P2V disk conversions, convert to dynamically expanding VHDs instead of fixed VHDs. If necessary, after the conversion, convert the dynamically expanding VHDs to fixed VHDs and expand them for improved performance.

Figure 21. SCVMM convert physical to virtual wizard volume configuration
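Hyper-V R2 has no dedicated PowerShell module, but the dynamic-to-fixed conversion described above can be scripted through the WMI interface in the root\virtualization namespace. The following is a sketch under that assumption; the paths are hypothetical, and the parameter order and type constants (2 = fixed, 3 = dynamically expanding) should be verified against the Msvm_ImageManagementService documentation.

    # Obtain the image management service on the Hyper-V host.
    $ims = Get-WmiObject -Namespace "root\virtualization" `
                         -Class Msvm_ImageManagementService

    # Convert a dynamic VHD to a new fixed VHD (type 2). Ensure the
    # target volume has room for the full provisioned size.
    $result = $ims.ConvertVirtualHardDisk(
        "D:\VMs\app01\disk1-dynamic.vhd",
        "D:\VMs\app01\disk1-fixed.vhd",
        2)

    # The call is asynchronous; a return value of 4096 means a job was
    # started. Monitor the job referenced by $result.Job until it completes.
    $result.ReturnValue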

Summary

This white paper details best practices and storage considerations for a Hyper-V environment with the EVA4400. The primary takeaway from this paper is that understanding Hyper-V and the underlying storage can greatly improve performance, manageability, and overall satisfaction with the virtualized environment.

Appendix A Disk expansion

Any time configuration changes are made on a file system holding critical application or operating system data, it is best to back up those volumes before performing the changes.

Warning
Attempts to expand pass-through disks that hold the OS root partition or that are attached to an IDE controller frequently result in data corruption. Do not attempt to expand bootable pass-through disks or pass-through disks that use IDE controllers without first testing the expansion thoroughly and having a current backup of the disk.

VHD expansion

Note
To expand a pass-through disk, perform only steps 1 and 2.

1. Expand the Vdisk (LUN) on the EVA with CV-EVA by changing the Requested value for the Vdisk and clicking Save changes, as shown in Figure 22. Wait for the Allocated capacity to match the new Requested capacity.

Figure 22. Expanding an EVA Vdisk
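If SSSU is available (see the HP Command View EVA software section), step 1 can also be scripted rather than performed in the CV-EVA GUI. The following sketch assumes SSSU syntax along these lines; the manager name, credentials, system name, Vdisk path, and size are hypothetical, and the exact command form should be verified against the SSSU reference for your Command View EVA release.

    SELECT MANAGER cveva-server USERNAME=administrator PASSWORD=password
    SELECT SYSTEM "EVA4400"
    SET VDISK "\Virtual Disks\HyperV\vm-data01" SIZE=500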

Note
EVA LUNs can also be expanded from within Windows by installing the HP StorageWorks VDS & VSS Hardware Providers, turning on the Virtual Disk service, and enabling the Storage Manager for SANs feature in Windows. For more information, see Storage Manager for SANs or download the Storage Manager for SANs Step-by-Step Guide.

2. Using the Disk Management tool on the Hyper-V host, rescan the disks, as shown in Figure 23, to reveal the extra capacity, and extend the Windows volume to the desired size, as shown in Figure 24. (A scriptable alternative using diskpart follows Figure 24.)

Figure 23. Rescanning the disks on the Hyper-V host

Figure 24. Extending the Windows volume
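The rescan-and-extend in step 2 can also be performed with diskpart on the Hyper-V host, which is useful when scripting many expansions. The volume number below is hypothetical; confirm it with list volume first.

    DISKPART> rescan
    DISKPART> list volume
    DISKPART> select volume 3
    DISKPART> extend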

3. Shut down the VM that owns the VHD file on the newly expanded volume to allow configuration changes to the VHD file.

Note
If the VHD to be expanded is not the root (OS) VHD, the VHD can simply be removed from the VM instead of shutting down the entire VM. After the VHD is successfully expanded, it can be reattached to the VM. If the VM is left running during the VHD expansion, be sure to stop I/O traffic to that VHD to prevent application errors when the disk is removed.

4. With the VM shut down (or the VHD disconnected from the VM), open the VM settings dialog box, as shown in Figure 25.

Figure 25. Editing a VHD

5. In the wizard, locate the desired VHD, and select the option to Expand the VHD.

6. Set the new size, as shown in Figure 26. Notice that the new VHD size chosen leaves several GB still available on the expanded volume, as previously suggested.

Figure 26. Setting the new VHD capacity

7. After the VHD expansion is complete, turn on the VM (or reattach the VHD to the running VM).

8. Rescan and expand the volume with the guest OS (VM) disk management tool, as was done at the host OS level, as shown in Figure 27.

Figure 27. Extending the volume capacity on the VM
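As with the disk conversion discussed earlier, the Edit Virtual Hard Disk wizard used in steps 4 through 6 can be scripted through WMI on the Hyper-V host. This sketch makes the same assumptions as the conversion example; the path and size are hypothetical, and MaxInternalSize is specified in bytes.

    # Obtain the image management service on the Hyper-V host.
    $ims = Get-WmiObject -Namespace "root\virtualization" `
                         -Class Msvm_ImageManagementService

    # Expand the VHD to a new internal size of 120 GB (PowerShell
    # expands the GB suffix to bytes).
    $result = $ims.ExpandVirtualHardDisk("D:\VMs\app01\disk1.vhd", 120GB)

    # 0 = success; 4096 = asynchronous job started (monitor $result.Job).
    $result.ReturnValue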
