Microsoft Private Cloud Fast Track on Dell vStart for Enterprise Virtualization


The Dell vStart for Enterprise Virtualization as a foundation for the Microsoft Private Cloud Fast Track solution based on System Center 2012.

Dell Enterprise Product Group Global Solutions Engineering

A00

This document is for informational purposes only and may contain typographical errors and technical inaccuracies. The content is provided as is, without express or implied warranties of any kind.

© 2012 Dell Inc. All rights reserved. Dell and its affiliates cannot be responsible for errors or omissions in typography or photography. Dell, the Dell logo, PowerEdge, Compellent, and Force10 are trademarks of Dell Inc. Intel and Xeon are registered trademarks of Intel Corporation in the U.S. and other countries. Microsoft, Windows, Windows Server, Hyper-V, SQL Server, Active Directory, and Vista are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell disclaims proprietary interest in the marks and names of others.

June 2012 Rev A00

Contents

Executive summary
Private Cloud Fast Track Program Description
Business Value
Technical Benefits
Reference Architecture
Logical Architecture
Server Architecture
SAN Design
Network Architecture
Virtualization Architecture
Fabric Management
Fabric Management Hosts
Management Logical Architecture
Service Management
Backup and Disaster Recovery
Security
Service Delivery
Operations

Tables

Table 1 Server Configurations
Table 2 Traffic Descriptions
Table 3 Sample VLAN and subnet configuration
Table 4 CSV Limits
Table 5 Guest Template Specs
Table 6 Supported Guest Server OS
Table 7 Supported Guest Client OS
Table 8 SQL Data Location and Size Examples
Table 9 SQL Databases and Instances
Table 10 Backup types and capabilities

Figures

Figure 1 vStart 1000m Logical Architecture
Figure 2 Key Hardware Components in Dell vStart 1000m
Figure 3 Dell vStart Blade LAN Connectivity Overview
Figure 4 NPAR and VLAN Configuration on a M620 Server
Figure 5 NPAR and VLAN Configuration on a R620 Server
Figure 6 vStart SAN Connectivity Overview
Figure 7 Fault Domain Configuration in vStart SAN
Figure 8 SAN FC network and zoning
Figure 9 Compellent Snapshot Consistency
Figure 10 Design Pattern
Figure 11 Management Architecture

Executive summary

Private Cloud Fast Track Program Description

The Microsoft Private Cloud Fast Track Program is a joint reference architecture for building private clouds that combines Microsoft software, consolidated guidance, and validated configurations with Dell technology, including computing power, network and storage architectures, and value-added software components. Microsoft Private Cloud Fast Track utilizes the core capabilities of Windows Server, Hyper-V, and System Center to deliver a private cloud Infrastructure as a Service offering. The key software components of every Reference Implementation are Windows Server 2008 R2 SP1, Hyper-V, and System Center 2012.

Business Value

The Microsoft Private Cloud Fast Track Program provides a reference architecture for building private clouds on each organization's unique terms. Each Fast Track solution helps organizations implement private clouds with increased ease and confidence. Among the benefits of the Microsoft Private Cloud Fast Track Program are faster deployment, reduced risk, and a lower cost of ownership.

Reduced risk:
- Tested, end-to-end interoperability of compute, storage, and network
- Predefined, out-of-box solutions based on a common cloud architecture that has already been tested and validated
- High degree of service availability through automated load balancing

Lower cost of ownership:
- A cost-optimized, platform- and software-independent solution for rack system integration
- High performance and scalability with the Windows Server 2008 R2 operating system and Hyper-V technology
- Minimized backup times and fulfilled recovery time objectives for each business-critical environment

Technical Benefits

The Dell vStart architecture and Microsoft Private Cloud Fast Track Program integrate best-in-class hardware implementations with Microsoft's software to create a Reference Implementation. This solution has been co-developed by Dell and Microsoft and has gone through a validation process. As a Reference Implementation, Dell and Microsoft have taken on the work of building a private cloud that is ready to meet a customer's needs.

Faster deployment:
- End-to-end architectural and deployment guidance

- Streamlined infrastructure planning due to predefined capacity
- Enhanced functionality and automation through deep knowledge of infrastructure
- Integrated management for virtual machine (VM) and infrastructure deployment
- Self-service portal for rapid and simplified provisioning of resources

Reference Architecture

The logical architecture comprises two parts. The first is the Fabric, the physical infrastructure (servers, storage, network) that hosts and runs all customer/consumer virtual machines. In the vStart solution, the Fabric consists of between 8 and 32 Hyper-V host servers in one or more clusters, a highly scalable SAN, and an Ethernet LAN. The second is Fabric Management, a set of virtual machines comprising the SQL and System Center management infrastructure. The vStart solution utilizes two Hyper-V host servers in a dedicated cluster for Fabric Management, along with the SAN and Ethernet LAN. The logical architecture can be seen in Figure 1.

[Figure 1 shows the vStart 1000m logical architecture: workload virtual machines on (8, 16, 24 or 32) PowerEdge M620 blades in up to two PowerEdge M1000e enclosures, each enclosure fitted with two PowerConnect 10GbE pass-through modules and two Dell 8/4 (Brocade M5424) 8Gbps FC modules; management virtual machines on two PowerEdge R620 servers; two Force10 S4810 Ethernet switches uplinked to the core network, plus a Force10 S55 switch; two Brocade 5100 Fibre Channel switches; and Compellent storage with three 24-bay 2.5" enclosures and one 12-bay 3.5" enclosure.]

Figure 1 vStart 1000m Logical Architecture

Logical Architecture

The Dell vStart solution includes hardware, software, and services components to simplify the ordering, design, and deployment of the private cloud environment. Dell vStart 1000m is the first Dell private cloud solution to include 12th-generation PowerEdge blades, Dell Compellent storage, and Dell Force10 network switching. These components include between 8 and 32 compute nodes, two PowerEdge management servers, two SAN controllers, two 48-port 10GbE LAN switches, and two 40-port 8Gbps SAN switches. Figure 2 shows the major hardware components in vStart 1000m. The solution also includes the virtualization hypervisors from Microsoft and the associated plug-ins from Dell.

Figure 2 Key Hardware Components in Dell vStart 1000m

Server Architecture

The Dell PowerEdge R620 management servers and the PowerEdge M620 managed compute nodes in the Dell vStart 1000m solution are Certified for Windows Server 2008 R2 and listed in the Windows Server Catalog.

The host server architecture is a critical component of the virtualized infrastructure, as well as a key variable in the consolidation ratio and cost analysis. The ability of the host server to handle the workload of a large number of consolidation candidates increases the consolidation ratio and helps provide the desired cost benefit. The system architecture of the host server refers to the general category of the server hardware itself. Examples include rack-mounted servers, blade servers, and large symmetric multiprocessor (SMP) servers. The primary tenet to consider when selecting system architectures is that each Hyper-V host will contain multiple guests with multiple workloads. Processor, RAM, storage, and network capacity are critical, as are high I/O capacity and low latency. It is critical to ensure that the host server is able to provide the required capacity in each of these categories.

Blade/Rack Chassis Design

The PowerEdge blade chassis and rack servers used in this solution utilize redundant power supplies. Furthermore, the Dell vStart configuration includes Power Distribution Units (PDUs). The vStart PDUs are designed to be connected to two different datacenter power sources: the PDUs on the left side are connected to one datacenter power source, while the PDUs on the right side are connected to another. This design ensures that there is no single point of failure in the power architecture.

Blade/Rack Design

The cloud solution utilizes different models of Dell PowerEdge 12th-generation servers for the computing and management nodes. The Dell PowerEdge M620 blade servers, along with the Dell PowerEdge M1000e blade chassis enclosure, comprise the Host Cluster, while the R620 rack servers serve as the Management Cluster hosts. The Dell vStart 1000m solution begins at 8 managed compute nodes and scales up to 32. Two rack servers are used in the solution for management.

Blade Chassis Enclosure: The Dell PowerEdge M1000e is a high-density, energy-efficient blade chassis that supports up to sixteen half-height blade servers, or eight full-height blade servers, and six I/O modules. A high-speed passive mid-plane connects the server modules to the I/O modules, management, and power in the rear of the chassis. The enclosure includes a flip-out LCD screen (for local configuration), six hot-pluggable/redundant power supplies, and nine hot-pluggable N+1 redundant fan modules.

Blade Servers: The Dell PowerEdge M620 is a half-height blade which offers a memory capacity of up to 768GB of RAM along with scalable I/O capabilities. Powered by Intel Xeon E5-2600 series processors and Dell's unique Select Network Adapter flexible NIC technology, the M620's hyper-dense design provides high-performance processing in a compact form factor. There are thirty-two PowerEdge M620 servers along with two PowerEdge M1000e blade chassis enclosures included in the Dell vStart configuration. Each M620 server is configured with two Intel Xeon E5-2660 2.2GHz 8-core processors and 128GB of memory. Each of the M620 servers also includes a PERC H710 RAID controller along with two 300GB 15K RPM hard drives configured in a RAID-1 for the local storage.

Improvements in Intel Xeon E5 processors: With Intel Integrated I/O, the E5 family merges the I/O controller onto the processor and reduces latency. Combined with PCI Express 3.0, this can triple the pace at which data moves into and out of the processor, greatly improving performance. Intel has also built its Advanced Encryption Standard-New Instructions (AES-NI) and Intel Trusted Execution Technology (TXT) into the new chips. Data can be encrypted faster, both at rest and in transit, while improvements to TXT mean data center managers can identify and quarantine compromised chipsets or even individual VMs much faster and get clean assets and VMs back online without having to shut down entire systems or hypervisors.

I/O Modules: The chassis enclosure provides three separate fabrics, referred to as A, B, and C. Each fabric has two I/O modules for redundancy, making a total of six I/O modules in the enclosure. The I/O modules are named A1, A2, B1, B2, C1, and C2. The I/O module slots can be populated with Ethernet switches, Fibre Channel modules, and pass-through modules. (InfiniBand switch modules are also supported.) In the vStart 1000m configuration, Fabric A is populated with PowerConnect 10GbE pass-through modules, and Fabric B is populated with Dell 8/4 SAN modules.

Chassis Management: The Dell PowerEdge M1000e has integrated management through a redundant Chassis Management Controller (CMC) module for enclosure management and integrated keyboard, video, and mouse (iKVM) modules. Through the CMC, the enclosure supports FlexAddress technology, which enables the blade enclosure to lock the World Wide Names (WWN) of the Fibre Channel controllers and Media Access Control (MAC) addresses of the Ethernet controllers to specific blade slots. This enables seamless swapping or upgrading of blade servers with Ethernet and Fibre Channel controllers without affecting the LAN or SAN configuration.

Embedded Management with Dell's Lifecycle Controller: The Lifecycle Controller is the engine for advanced embedded management and is delivered as part of iDRAC Enterprise in the Dell PowerEdge blade servers. It includes 1GB of managed and persistent storage that embeds systems management features directly on the server, thus eliminating the media-based delivery of system management tools and utilities previously needed for systems management. Embedded management includes:

- Unified Server Configurator (USC) aims at local 1-to-1 deployment via a graphical user interface (GUI) for operating system install, updates, configuration, and for performing diagnostics on single, local servers. This eliminates the need for multiple option ROMs for hardware configuration.
- Remote Services are standards-based interfaces that enable consoles to integrate, for example, bare-metal provisioning and one-to-many OS deployments for servers located remotely. Dell's Lifecycle Controller takes advantage of the capabilities of both USC and Remote Services to deliver significant advancement and simplification of server deployment.
- Lifecycle Controller Serviceability aims at simplifying server re-provisioning and/or replacing failed parts, and thus reduces maintenance downtime.

More information on Dell blade servers is available from Dell.

Management Server: The R620 is a 1U rack server designed with a dual-socket, multi-core processor architecture, a dense memory configuration, and redundant local drives configurable in a RAID.
The vStart configuration for the Hyper-V Private Cloud Fast Track requirement includes two PowerEdge R620 servers. Each of the R620s includes two Intel Xeon E5-2660 2.2GHz 8-core

processors and 128GB of memory. They also each include a PERC H710 RAID controller configured as RAID-1.

Compute Server Configuration

Table 1 Server Configurations

- Server Model: PowerEdge M620
- Processor: (2) x Intel Xeon E5-2660, 2.2GHz, 8-core, 20M cache, Turbo, HT
- Memory: 128 GB (16 x 8 GB DDR3 dual-rank DIMMs, 1333MHz)
- Local storage and controller: (1) x PERC H710 integrated mini RAID controller; (2) x 300GB 15K RPM drives configured in a RAID-1

Management Server Configuration

- Server Model: PowerEdge R620
- Processor: (2) x Intel Xeon E5-2660, 8-core, 20M cache, Turbo, HT
- Memory: 128GB (16 x 8 GB DDR3 dual-rank DIMMs, 1333MHz)
- Local storage and controller: (1) x PERC H710 integrated mini RAID controller; (2) x 300GB 15K RPM drives configured in a RAID-1

Server Component Overview

Note: The processor supports Intel Virtualization Technology (Intel VT) and hardware-enforced Data Execution Prevention (DEP).

Server / Blade Storage Connectivity

In the vStart configuration, each PowerEdge M620 and R620 server uses an internal PERC H710 RAID controller connected to two HDDs configured in a RAID-1. This RAID volume hosts Windows Server 2008 R2 SP1 for the hypervisor OS. Each server also includes a dual-port 8Gbps Fibre Channel adapter for attaching to SAN volumes and dual-port 10GbE network adapters for passing iSCSI traffic through to the guest VMs.

Server / Blade Network Connectivity

Each host network adapter in both the fabric and fabric management hosts utilizes network teaming technology to provide highly available network adapters to each layer of the networking stack. The teaming architecture closely follows the Hyper-V: Live Migration Network Configuration Guide but extends further to provide highly available networking to each traffic type used in the architecture.

NDC and NPAR: The network adapter implemented in the Dell 12th-generation PowerEdge servers changes the long-standing design concept of embedding the LAN onto the motherboard

(LOM). The LOM is replaced with a small, removable card called the Network Daughter Card, or NDC. The NDC provides the flexibility of choosing network adapters (4x 1GbE, 2x 10GbE, or 2x Converged Network Adapter). In addition, the NDC supports network partitioning (NPAR), which allows splitting the 10GbE pipe on the NDC without changing the switch module configuration. With NPAR, administrators can split the 10GbE pipe into two or four separate partitions, or physical functions, and allocate the desired bandwidth and resources as needed. Each of these partitions is an actual PCI Express function that appears as a separate physical NIC in the server's system ROM, operating systems, and hypervisor.

In the vStart 1000m configuration, each M620 blade server is configured with a Broadcom BCM57810 NDC providing two 10GbE ports. These ports are wired to the pass-through modules in Fabric A, and the corresponding ports on the A1 and A2 modules are connected to the two Dell Force10 S4810 switches outside the blade chassis enclosure. Meanwhile, each R620 rack server is configured with a Broadcom BCM57810 add-in NIC providing two 10Gb SFP+ ports, which are connected to the two Force10 S4810 switches. The two Force10 S4810 switches are configured with an ISL to provide an inter-switch traffic link. Network connectivity for the M620 is illustrated in Figure 3.

Figure 3 Dell vStart Blade LAN Connectivity Overview

On both the M620 and R620 servers, each 10GbE port is partitioned into four ports using NPAR, so a total of eight NICs are created on each server. On each server, four NIC teams are created and configured with Smart Load Balancing™ and Failover by using the Broadcom Advanced Control Suite utility. Figure 4 illustrates the network configuration on an M620 blade server. Different VLAN IDs are assigned to the teamed NICs to segregate the traffic on the host and provide the segmentation necessary for cluster management, Live Migration (LM), cluster private, virtual machine, and other traffic types as described in Table 2. The VLAN configuration used in the Dell vStart configuration is listed in Table 3.
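As a quick sanity check of the NPAR layout, the partitions can be enumerated from the parent partition. The following is a minimal sketch using the WMI class available in Windows Server 2008 R2; it assumes only that NPAR has been enabled as described above.

```
# List the physical NICs the parent partition sees. With NPAR enabled on both
# 10GbE ports, eight adapters should appear on each M620 or R620 server.
Get-WmiObject Win32_NetworkAdapter -Filter "PhysicalAdapter = TRUE" |
    Select-Object Name, NetConnectionID, MACAddress
```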

Figure 4 NPAR and VLAN Configuration on a M620 Server

Figure 5 NPAR and VLAN Configuration on a R620 Server

Table 2 Traffic Descriptions

- Fabric Node Management: Supports virtualization management traffic and communication between the host servers in the cluster.
- Fabric Live Migration: Supports migration of VMs between the host servers in the cluster.
- Fabric CSV: Supports cluster shared volume network communication between the servers in the cluster.
- Tenant VM: Supports communication between the VMs hosted on the cluster and external systems.
- Out-of-Band Management: Supports configuration and monitoring of the servers (through the iDRAC management interface), storage arrays, and network switches.
- iSCSI Data: Supports iSCSI traffic between the servers and storage array(s). In addition, traffic between the arrays is supported.

- Management VM: Supports the virtual machine traffic for the management virtual machines.
- Fabric Management Node Management: Supports virtualization management traffic and communication between the host servers in the management cluster.
- Fabric Management Live Migration: Supports migration of VMs between the host servers in the management cluster.
- Fabric Management CSV: Supports cluster shared volume network communication between the servers in the management cluster.

Table 3 Sample VLAN and subnet configuration

- Out-of-Band Management: /24 subnet
- Compute Node Management: /24 subnet
- Compute Live Migration: /24 subnet
- Compute CSV: /24 subnet
- Management VM Network: /24 subnet
- iSCSI: /24 subnet
- Management Hypervisor: /24 subnet
- Management Cluster LM: /24 subnet
- Management CSV: /24 subnet
- SQL Clustering: /24 subnet
- VM Network: /22 subnet

Each traffic type is assigned its own VLAN.

Server / Blade HA and Redundancy

The PowerEdge M1000e blade chassis enclosure is designed with redundant power supplies and redundant fans. Each M620 uses a PERC H710 RAID controller, and two hard drives are configured in a RAID-1 that hosts the parent operating system.

The design of the PowerEdge R620 servers includes high-availability and redundancy features such as redundant fans and power supplies that are distributed to independent power sources. The servers also use a PERC H710 controller with two hard disks configured in a RAID-1 to protect the parent OS against single disk failures.

SAN Design

Storage Options

The Dell vStart 1000m uses the Dell Compellent Series 40 for shared SAN storage. The Series 40 provides the solution with both Fibre Channel and iSCSI front-end storage connectivity options. Fibre Channel is used for hypervisor connectivity and provides a dedicated SAN fabric for the storage traffic. This gives the hypervisors dedicated bandwidth and offers the VMs a very low-latency, high-bandwidth option for storage connectivity. The iSCSI interfaces are added to the Compellent Series 40 to provide an interface for the VMs to have direct access to storage, enabling in-guest clustering.

SAN Storage Protocols iSCSI and FC

The Dell vStart 1000m solution utilizes both Fibre Channel and iSCSI protocols. The Fibre Channel storage, being a low-latency and ultra-high-performance infrastructure, is used for the parent-to-storage connectivity. The host-to-array Fibre Channel runs at 8Gbps. For Hyper-V, iSCSI-capable storage provides an advantage in that it is a protocol that can also be utilized by Hyper-V guest virtual machines for guest clustering. This requires that VM storage traffic and other network traffic flow over the same interface; however, this contention is mitigated through the use of VLANs and NPAR on the network adapter.

Storage Network

In the vStart 1000m configuration, each M620 blade server is configured with a QLogic dual-port QME2572 8Gb FC mezzanine card. Both FC ports are wired across the M1000e mid-plane to the two Dell 8/4 FC SAN modules in Fabric B. The SAN modules are trunked to the Brocade 5100 Fibre Channel switches. The front-end FC ports of the Compellent Series 40 are connected to the 5100 FC SAN switches. Figure 6 provides an overview of the vStart's SAN connectivity.

For the management servers, each R620 is configured with a QLogic QLE2562 8Gbps FC I/O card and connected to the Brocade 5100 top-of-rack SAN switches. To further support the Fabric Management guest cluster, the R620 server is also configured with iSCSI connectivity to the Compellent by using its dual-port Broadcom BCM57810 10GbE add-in NIC. Both ports are configured with NPAR and SLB teaming. The connectivity to the Compellent iSCSI front end is established via the two Force10 S4810 switches. To provide fully redundant and independent paths for storage I/O, MPIO is enabled by the iSCSI initiator on the host. The iSCSI traffic on the R620 is segregated by the implementation of NPAR and VLANs. QoS is provided by the bandwidth settings in NPAR.
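For the guest-cluster iSCSI connections described above, the built-in iscsicli utility in Windows Server 2008 R2 can attach an initiator to the Compellent iSCSI front end. A minimal sketch; the portal address and target IQN are placeholders:

```
# Register the Compellent iSCSI portal (placeholder address), then list and
# log in to the discovered target. iscsicli ships with Windows Server 2008 R2.
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
# QLoginTarget takes the target IQN reported by ListTargets (placeholder IQN).
iscsicli QLoginTarget iqn.2002-03.com.compellent:5000d310000example
```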

Figure 6 vStart SAN Connectivity Overview

Cluster Shared Volumes

Cluster Shared Volumes (CSVs) are the storage volumes of choice for Hyper-V clusters. Developed by Microsoft exclusively for Hyper-V, they enable multiple Hyper-V cluster nodes to simultaneously access the VMs on a shared volume. CSVs are used throughout the vStart 1000m solution for both the fabric and fabric management servers.

CSV Limits

The limitations below are actually imposed by the NTFS file system and are inherited by CSV.

Table 4 CSV Limits

- Maximum Volume Size: 256 TB
- Maximum # Partitions: 128
- Directory Structure: Unrestricted
- Maximum Files per CSV: 4+ Billion

- Maximum VMs per CSV: Unlimited

CSV Requirements

- All cluster nodes must use Windows Server 2008 R2
- All cluster nodes must use the same drive letter for the system disk
- All cluster nodes must be on the same logical network subnet; virtual LANs (VLANs) are required for multi-site clusters running CSV
- NT LAN Manager (NTLM) authentication in the local security policy must be enabled on cluster nodes
- SMB must be enabled for each network on each node that will carry CSV cluster communications
- Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks must be enabled in the network adapter's properties to enable all nodes in the cluster to communicate with the CSV
- The Hyper-V role must be installed on any cluster node that may host a VM

CSV Volume Sizing

Because all cluster nodes can access all CSV volumes simultaneously, we can now use standard LUN allocation methodologies based on the performance and capacity requirements of the workloads running within the VMs themselves. Generally speaking, isolating the VM operating system I/O from the application data I/O is a good start, in addition to application-specific I/O considerations such as segregating databases from transaction logs and creating SAN volumes and/or storage pools that factor in the I/O profile itself (i.e., random read and write operations vs. sequential write operations).

CSV's architecture differs from other traditional clustered file systems, which frees it from common scalability limitations. As a result, there is no special guidance for scaling the number of Hyper-V nodes or VMs on a CSV volume other than ensuring that the overall I/O requirements of the VMs expected to run on the CSV are met by the underlying storage system and storage network. While rare, disks and volumes can enter a state where a chkdsk is required, which with large disks may take a long time to complete, causing downtime of the volume during this process somewhat proportional to the volume's size.

Each enterprise application you plan to run within a VM may have unique storage recommendations, and perhaps even virtualization-specific storage guidance. That guidance applies to use with CSV volumes as well. The important thing to keep in mind is that all VMs' virtual disks running on a particular CSV will contend for storage I/O. Also worth noting is that individual SAN LUNs do not necessarily equate to dedicated disk spindles. A SAN storage pool or RAID array may contain many LUNs. A LUN is simply a logical representation of a disk provisioned from a pool of disks. Therefore, if an enterprise application requires specific storage IOPS or disk response times, you must consider all the LUNs in use on that storage pool. An application that would require dedicated physical disks were it not virtualized may require dedicated storage pools and CSV volumes for the VM in which it runs.
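Once a Compellent LUN has been presented to all cluster nodes and added as a cluster disk, it can be converted to a CSV from any node. A minimal sketch using the Failover Clustering module in Windows Server 2008 R2; the disk resource name is a placeholder:

```
Import-Module FailoverClusters

# Promote an available cluster disk to a Cluster Shared Volume.
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Verify the CSV state and its mount point under C:\ClusterStorage.
Get-ClusterSharedVolume | Format-List Name, State
```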

CSV Design Patterns

Single CSV per Cluster

In the Single CSV per Cluster design pattern, the SAN is configured to present a single large LUN to all the nodes in the host cluster. The LUN is configured as a CSV in Failover Clustering. All VM-related files (VHDs, configuration files, etc.) belonging to the VMs hosted on the cluster are stored on the CSV. Optionally, data de-duplication functionality provided by the SAN can be utilized (if supported by the SAN vendor).

[Diagram: the SAN presents host boot volumes (if booting from SAN), host cluster witness disk volumes, and one large CSV holding multiple VMs/VHDs with no data/IO optimization; data de-duplication is optional.]

Multiple CSVs per Cluster

In the Multiple CSVs per Cluster design pattern, the SAN is configured to present two or more large LUNs to all the nodes in the host cluster. The LUNs are configured as CSVs in Failover Clustering. All VM-related files (VHDs, configuration files, etc.) belonging to the VMs hosted on the cluster are stored on the CSVs. Optionally, data de-duplication functionality provided by the SAN can be utilized (if supported by the SAN vendor).

[Diagram: the SAN presents host boot volumes (if booting from SAN), host cluster witness disk volumes, and multiple CSVs holding multiple VMs/VHDs with no data/IO optimization; data de-duplication is optional.]

For both the Single and Multiple CSV patterns, each CSV has the same I/O characteristics, so each individual VM has all its associated VHDs (OS, data, and logs) stored on one of the CSVs.

Multiple I/O Optimized CSVs per Cluster

In the Multiple I/O Optimized CSVs per Cluster design pattern, the SAN is configured to present multiple LUNs to all the nodes in the host cluster, but the LUNs are optimized for particular I/O patterns such as fast sequential read performance or fast random write performance. The LUNs are configured as CSVs in Failover Clustering. All VHDs belonging to the VMs hosted on the cluster are stored on the CSVs but targeted to the most appropriate CSV for the given I/O needs.

[Diagram: the SAN presents host boot volumes (if booting from SAN), host cluster witness disk volumes, and per-cluster CSV volumes: CSV Volume 1 for VM operating systems; CSV Volume 2 for VM database / random R/W I/O; CSV Volume 3 for VM logging / sequential write I/O; CSV Volume 4 for VM staging, P2V, and V2V; CSV Volume 5 for VM configuration files, volatile memory, and pagefiles. No data de-duplication is used.]

In the Multiple I/O Optimized CSVs per Cluster design pattern, each individual VM has its associated VHDs stored on the appropriate CSV for the required I/O profile: the OS VHD on CSV Volume 1 (VM operating systems), the data VHD on CSV Volume 2 (VM database / random R/W I/O), and the logs VHD on CSV Volume 3 (VM logging / sequential write I/O). Note that a single VM can have multiple VHDs, and each VHD can be stored on a different CSV (provided all CSVs are available to the host cluster the VM is created on).

High Availability

In order to maintain continuous connectivity to stored data from the server, controller-level fault domains are established to create redundant I/O paths. These fault domains provide continuous connectivity with no single point of failure. As illustrated in Figure 7, Domain 1 includes connections through the top 5100 to each of the two Compellent controllers, and Domain 2 includes connections through the bottom 5100 to each of the two Compellent controllers. In this implementation, if

one physical port fails, the virtual port will move to another physical port within the same fault domain and on the same controller.

Figure 7 Fault Domain Configuration in vStart SAN

Performance

The Dell Compellent Series 40, with its dual-controller configuration and 8Gb Fibre Channel interconnects, provides high bandwidth for data flows. This bandwidth is complemented by a large variety of drives in multiple speeds and sizes. The Series 40 also uses virtual port IQNs and WWNs, thereby enabling higher throughput and fault tolerance.

Drive Types

Dell Compellent storage enclosures feature 6Gb interconnects, so organizations can scale up and out with ease. Administrators can mix SSDs and hard drives in the same system, as well as drives with the same form factor (but different speeds and capacities) in the same storage enclosure. The Dell vStart 1000m base solution, based upon the number of compute nodes, uses multiples of four enclosures. Three enclosures utilize 24-bay 2.5" drives and the fourth enclosure uses 12-bay 3.5" drives. The 2.5" drives selected are 146GB 15K RPM and the 3.5" drives are 3TB 7K RPM. This provides a balance of space and IOPS.

RAID Array Design

The Dell Compellent Series 40 supports RAID 5, 6, and 10. The Compellent Storage Center dynamically sets up RAID based upon the demands of the storage.

Multi-pathing

For Windows Server 2008 R2, the built-in generic Microsoft DSM (MSDSM) provides adequate functionality for Dell Compellent. In all cases, multi-pathing should be used. Generally, storage vendors will build a DSM (device-specific module) on top of Microsoft's Windows Server 2008 R2 MPIO software. Each DSM and HBA will have its own unique multi-pathing options, recommended number of connections, etc.
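The resulting MPIO configuration can be inspected from the parent partition with the mpclaim utility that ships with the Windows Server 2008 R2 MPIO feature. A minimal sketch:

```
# List MPIO-managed disks and their load-balance policies.
mpclaim.exe -s -d

# Show the individual paths behind MPIO disk 0.
mpclaim.exe -s -d 0
```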

Fibre Channel Zoning

In the Dell vStart 1000m, as illustrated in Figure 8, FC zones are created on the Brocade 5100 FC SAN switches such that each zone consists of one initiator (one HBA port in a blade server) and the Compellent FC ports.

Figure 8 SAN FC network and zoning

iSCSI

In the Dell vStart 1000m, iSCSI connectivity to the SAN is mainly used by the guest cluster. Traffic separation is implemented with VLANs.

Encryption and Authentication

CHAP is available for use on the Compellent Storage Center.

Jumbo Frames

In the vStart configuration, jumbo frames are enabled for all devices of the SAN fabric. This includes the server network interface ports, the network switch interfaces, and the Compellent interfaces.
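End-to-end jumbo frame support can be validated from a host with a simple don't-fragment ping; the target address below is a placeholder for a Compellent iSCSI port. A payload of 8972 bytes plus 28 bytes of IP/ICMP headers exercises a 9000-byte MTU:

```
# -f sets the don't-fragment flag; -l sets the ICMP payload size in bytes.
# If any device in the path lacks jumbo frame support, the ping fails.
ping -f -l 8972 192.168.1.50
```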

Thin Provisioning

Thin provisioning is a common practice, particularly in virtualization environments, and it allows for efficient use of the available storage capacity. The LUN and corresponding CSV may grow as needed, typically in an automated fashion, to ensure availability of the LUN (auto-grow). However, as storage becomes over-provisioned in this scenario, very careful management and capacity planning are critical.

Dell Compellent Thin Provisioning delivers the highest enterprise storage utilization possible by eliminating pre-allocated but unused capacity. The software, Dynamic Capacity, completely separates allocation from utilization, enabling users to provision any size volume up front yet only consume disk space when data is written. Thin Write technology assesses the incoming payload and designates capacity for each write on demand, leaving unused disk space in the storage pool for other servers and applications.

Volume Cloning

Volume cloning is another common practice in virtualization environments. It can be used for both host and VM volumes, dramatically reducing host installation times and VM provisioning times. Compellent offers the Remote Instant Replay™ feature to support volume cloning. Remote Instant Replay leverages space-efficient snapshots between local and remote sites for cost-effective disaster recovery and business continuity. Following initial site synchronization, only incremental changes in enterprise data need to be replicated, minimizing capacity requirements and speeding recovery. Known as Thin Replication, this approach enables network storage administrators to choose between Fibre Channel and native IP connectivity for data transfer.

Volume Snapshots

SAN volume snapshots are a common method of providing a point-in-time, instantaneous backup of a SAN volume or LUN. These snapshots are typically block-level and only utilize storage capacity as blocks change on the originating volume. Some SANs provide tight integration with Hyper-V, combining the Hyper-V VSS Writer on hosts with volume snapshots on the SAN. This integration provides a comprehensive and high-performing backup and recovery solution.

Dell Compellent Replay Manager is snapshot consistency software that integrates with the Microsoft Volume Shadow Copy Service (VSS) to ensure the integrity of Exchange Server, SQL Server, and Hyper-V data. By initiating snapshots when the I/O is quiesced, Replay Manager provides time-consistent snapshots even if the application or virtual machine (VM) is running during the process. The application-aware technology leverages Data Instant Replay, which creates space-efficient snapshots for recovery of most volumes to a server in less than 10 seconds.
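Before relying on application-consistent SAN snapshots, it is worth confirming that the Hyper-V VSS writer is registered and healthy in the parent partition. A minimal sketch using the built-in vssadmin tool:

```
# List VSS writers and show a few lines of context around the Hyper-V writer;
# its state should read "Stable" with no last error.
vssadmin list writers | Select-String -Context 0,4 "Hyper-V"
```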

Figure 9 Compellent Snapshot Consistency

Storage Tiering

Storage tiering is the practice of physically partitioning data into multiple distinct classes based on price, performance, or other attributes. Data may be dynamically moved among classes in a tiered storage implementation based on access activity or other considerations. This is normally achieved through a combination of varying types of disks used for different data types (i.e., production, non-production, backups, etc.). Storage tiering suits high-I/O applications such as Microsoft Exchange.

Dell Compellent Fluid Data storage dynamically moves enterprise data to the optimal storage tier based on actual use. The most active blocks reside on high-performance SSDs and drives, while infrequently accessed data migrates to lower-cost, high-capacity drives. The result is network storage that remains in tune with application needs, with overall storage costs cut by up to 80%.

Storage Automation

One of the objectives of the Microsoft private cloud solution is to enable rapid provisioning and de-provisioning of virtual machines. Doing so at large scale requires tight integration with the storage architecture and robust automation. Provisioning a new virtual machine on an existing LUN is a simple operation; however, provisioning a new CSV LUN, adding it to a host cluster, and similar tasks are relatively complicated and must be automated. System Center Virtual Machine Manager 2012 (VMM 2012) enables end-to-end automation of this process through SAN integration using the SMI-S protocol.

Historically, many storage vendors have designed and implemented their own storage management systems, APIs, and command-line utilities. This has made it a challenge to leverage a common set of tools, scripts, etc. across heterogeneous storage solutions. For the robust automation that is required in advanced datacenter virtualization, a SAN solution supporting SMI-S is required. Preference is also given to SANs supporting standard and common automation interfaces such as PowerShell. The Dell Compellent Series 40 supports SMI-S, which enables dynamic provisioning and de-provisioning.
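As an illustration of what this SMI-S integration enables, the following is a minimal sketch using the VMM 2012 PowerShell module; the pool name, LUN name, and size are placeholders, and exact parameters may vary with the VMM 2012 release in use.

```
Import-Module virtualmachinemanager   # available where the VMM 2012 console is installed

# Pick a storage pool discovered through the Compellent SMI-S provider
# (the pool name is a placeholder).
$pool = Get-SCStoragePool -Name "Compellent-Tier1"

# Carve a new 1TB logical unit that can then be assigned to the host
# cluster and formatted as a CSV.
New-SCStorageLogicalUnit -StoragePool $pool -Name "CSV03" -DiskSizeMB 1048576
```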

Network Architecture

Three-Tier Network Design

Many network architectures include a tiered design with three or more tiers, such as Core, Distribution/Aggregation, and Access. Designs are driven by the port bandwidth and quantity required at the edge, as well as the ability of the Distribution/Aggregation and Core tiers to provide higher-speed uplinks to aggregate traffic. Additional considerations include Ethernet broadcast boundaries and limitations, spanning tree and/or other loop avoidance technologies, etc.

Core

The Core tier is the high-speed backbone for the network architecture. The Core is typically comprised of two modular switch chassis providing a variety of service and interface module options. The datacenter Core may interface with other network modules (other datacenters, branch, campus, etc.).

Aggregation

The Aggregation (or Distribution) tier consolidates connectivity from multiple Access tier switch uplinks. This tier is commonly implemented in end-of-row switches or in a centralized wiring closet or MDF (main distribution frame) room. The Aggregation tier provides both high-speed switching and more advanced features such as Layer 3 routing and other policy-based networking capabilities. The Aggregation tier must have redundant, high-speed uplinks to the Core tier for high availability.

Access

The Access tier provides device connectivity to the datacenter network. This is commonly implemented using Layer 2 Ethernet switches, typically via blade chassis switch modules or top-of-rack (ToR) switches. The Access tier must provide redundant connectivity for devices, required port features, and adequate capacity for both access (device) ports and uplink ports. The Access tier may also provide features related to NIC teaming, such as Link Aggregation Control Protocol (LACP); certain teaming solutions may require LACP switch features.

The diagrams below illustrate two three-tier network models, one providing 10Gb Ethernet to devices and the other providing 1Gb Ethernet to devices. Dell vStart uses 10GbE for the network connectivity. The Force10 S4810 supports VLAN trunks and LAG, and a LAG is implemented between the two Force10 S4810 switches in the vStart 1000m solution.

[Diagram: two three-tier network models, one delivering 10Gb Ethernet to the edge and one delivering 1Gb Ethernet to the edge. Core switches connect to aggregation switches, which connect to top-of-rack or blade-module access switches, over 2 x 10Gb and 4 x 10Gb Ethernet links. In the 10Gb model, hosts attach through teamed links carrying the Mgmt, iSCSI, CSV, LM, and VM VLANs/vNICs; in the 1Gb model, hosts attach through dedicated 1Gb links per traffic type.]

Collapsed Core Network Design

In smaller environments, a more simplified network architecture than the three-tier model may be adequate. This can be achieved by combining the Core and Aggregation tiers, sometimes called a Collapsed Core. In this design, the Core switches provide both Core and Aggregation functionality. The smaller number of tiers and switches lowers cost at the expense of future flexibility.

[Diagram: in the collapsed core (Core + Aggregation) model, the combined core switches connect directly to the top-of-rack or blade-module access switches over 2 x 10Gb Ethernet links, with the same host VLAN/vNIC layout as above.]

High Availability and Resiliency

Providing redundant paths from the device (server) through all the network tiers to the Core is highly recommended for high availability and resiliency. A variety of technologies (NIC teaming, Spanning Tree Protocol, etc.) can be utilized to ensure redundant path availability without looping. Each network tier should include redundant switches. With redundant pairs of Access tier switches, individual switch resiliency is slightly less important, so the expense of redundant power supplies and other component redundancy may not be required. At the Aggregation and Core tiers, full hardware redundancy in addition to device redundancy is recommended due to the critical nature of those tiers. Despite best efforts, sometimes devices fail, become damaged, or get misconfigured. For these situations, remote management and the ability to remotely power cycle all devices become important for restoring service rapidly.

The Inter-Switch Link (ISL) between the Dell Force10 network switches used in the vStart LAN is configured as a LAG. The ISL provides two 40Gbps back-end connections between the two switches for a fault-tolerant design. The Force10 switches also have redundant uplinks to the core network, so that connectivity is maintained if either switch path to the core network fails. Host connectivity is maintained during a switch failure through the teaming interfaces of each host.

Network Security & Isolation

The network architecture must enable both security and isolation of network traffic. A variety of technologies can be used individually or in concert to assist in security and isolation:

- VLANs: VLANs enable traffic on one physical LAN to be subdivided into multiple virtual LANs or broadcast domains. This is accomplished by configuring devices or switch ports to tag traffic with specific VLAN IDs. A VLAN trunk is a network connection able to carry multiple VLANs, with each VLAN tagged with specific VLAN IDs.
- ACLs: Access Control Lists (ACLs) enable traffic to be filtered or forwarded based on a variety of characteristics, such as protocol and source/destination port. ACLs can be used to prohibit certain traffic types from reaching the network or to enable/prevent traffic from reaching specific endpoints.
- IPsec: IPsec enables both authentication and encryption of network traffic to protect from man-in-the-middle attacks as well as network sniffing and other data collection activities.
- QoS: Quality of Service enables rules to be set based on traffic type or attributes, so that one form of traffic does not block all others (by throttling it), or to ensure critical traffic always has a certain amount of bandwidth allocated.

Network Automation

Remote interfaces and management of the network infrastructure via SSH or a similar protocol are important to both automation and resiliency of the datacenter network. Remote access and administration protocols can be used by management systems to automate complex or error-prone configuration activities. For example, adding a VLAN to a distributed set of Access tier switches can be automated to avoid the potential for human error.

Virtualization Architecture

Storage Virtualization

Storage virtualization is a concept in IT system administration referring to the abstraction (separation) of logical storage from physical storage, so that storage may be accessed without regard to the physical or heterogeneous structure beneath it. This separation gives the systems administrator increased flexibility in how storage is managed for end users.

Network Virtualization

In computing, network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to the software containers on a single system. Whether virtualization is internal or external depends on the implementation provided by the vendors that support the technology. Various equipment and software vendors offer network virtualization by combining any of the following:

- Network hardware, such as switches and network adapters, also known as network interface cards (NICs)
- Networks, such as virtual LANs (VLANs), and containers such as virtual machines (VMs)
- Network storage devices
- Network media, such as Ethernet and Fibre Channel

Server Virtualization

Hardware virtualization uses software to create a virtual machine (VM) that emulates a physical computer. This creates a separate OS environment that is logically isolated from the host server. By providing multiple VMs at once, this approach allows several operating systems to run simultaneously on a single physical machine.

Hyper-V technology is based on a 64-bit hypervisor-based microkernel architecture that enables standard services and resources to create, manage, and execute virtual machines. The Windows Hypervisor runs directly above the hardware and ensures strong isolation between the partitions by enforcing access policies for critical system resources such as memory and processors. The Windows Hypervisor does not contain any third-party device drivers or code, which minimizes its attack surface and provides a more secure architecture.

In addition to the Windows Hypervisor, there are two other major elements to consider in Hyper-V: the parent partition and child partitions. The parent partition is a special virtual machine that runs Windows Server 2008 R2, controls the creation and management of child partitions, and maintains direct access to hardware resources. In this model, device drivers for physical devices are installed in the parent

partition. In contrast, the role of a child partition is to provide a virtual machine environment for the installation and execution of guest operating systems and applications. A detailed poster with more information is available from Microsoft.

Windows Server 2008 R2 SP1 and Hyper-V Host Design

The recommendations in this section adhere to the support statements in the following article: Requirements and Limits for Virtual Machines and Hyper-V in Windows Server 2008 R2.

Licensing

Certain versions of Windows Server 2008 R2 (namely Standard, Enterprise, and Datacenter editions) include virtualization use rights, which is the right and license to run a specified number of Windows-based virtual machines. Windows Server 2008 R2 Standard edition includes use rights for one running virtual machine. Windows Server 2008 R2 Enterprise edition includes use rights for up to four virtual machines. This does not limit the number of guests that the host can run; it means that licenses for four Windows Server guests are included. To run more than four, you simply need to ensure you have valid Windows Server licenses for the additional virtual machines. In contrast to the two other editions, Windows Server 2008 R2 Datacenter edition includes unlimited virtualization use rights, which, from a licensing standpoint, allows you to run as many Windows Server guests as you like on the licensed physical server.

OS Configuration

The following outlines the general considerations for the Hyper-V host operating system. Note that these are not meant to be installation instructions but rather the process requirements and order. Hyper-V requires specific hardware. To install and use the Hyper-V role, you will need the following:

- An x64-based processor. Hyper-V is available in 64-bit editions of Windows Server 2008: specifically, the 64-bit editions of Windows Server 2008 Standard, Windows Server 2008 Enterprise, and Windows Server 2008 Datacenter. Hyper-V is not available for 32-bit (x86) editions or Windows Server 2008 for Itanium-based systems. However, the Hyper-V management tools are available for 32-bit editions.
- Hardware-assisted virtualization. This is available in processors that include a virtualization option: specifically, processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology.
- Hardware-enforced Data Execution Prevention (DEP) must be available and enabled. Specifically, you must enable the Intel XD bit (execute disable bit) or AMD NX bit (no execute bit).
- Use Windows Server 2008 R2, either the Full or Server Core installation option. Note: there is no upgrade path from Server Core to Full or vice versa, so make this selection carefully.
- Use the latest hardware device drivers.
- The Hyper-V parent partition OS must be domain-joined.
- The Hyper-V server role and Failover Clustering feature are required.

- Apply relevant Windows updates, including out-of-band updates not offered on Microsoft Update (see the Hyper-V Update List for Windows Server 2008 R2).
- All nodes, networks, and storage must pass the Cluster Validation Wizard (a validation sketch follows this list).
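Cluster validation can be run from PowerShell as well as from the wizard. A minimal sketch using the Failover Clustering module in Windows Server 2008 R2; the node names are placeholders:

```
Import-Module FailoverClusters

# Validate the candidate nodes before creating the host cluster; the HTML
# report is written to the current user's temp directory.
Test-Cluster -Node "M620-01","M620-02","M620-03","M620-04"
```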

The Dell vStart 1000m uses Windows Server 2008 R2 SP1 Datacenter edition for the hypervisor. This edition of Windows Server is chosen to provide customers an unlimited number of virtual machine licenses.

Memory and Hyper-V Dynamic Memory

Dynamic Memory is a Hyper-V feature that helps you use physical memory more efficiently. With Dynamic Memory, Hyper-V treats memory as a shared resource that can be reallocated automatically among running virtual machines. Dynamic Memory adjusts the amount of memory available to a virtual machine based on changes in memory demand and values that you specify. Dynamic Memory is available for Hyper-V in Windows Server 2008 R2 Service Pack 1 (SP1). You can make the Dynamic Memory feature available by applying the service pack to the Hyper-V role in Windows Server 2008 R2 or to Microsoft Hyper-V Server 2008 R2.

For a complete description of Dynamic Memory features, settings, and design considerations, refer to the Hyper-V Dynamic Memory Configuration Guide. This guide provides the specific OS, service pack, and integration component levels for supported operating systems. The guide also contains the minimum recommended Startup RAM setting for all supported operating systems. In addition to the general guidance above, specific applications or workloads, particularly those with built-in memory management capability such as SQL or Exchange, may provide workload-specific guidance. The Fast Track Reference Architecture utilizes SQL Server 2008 R2, and the SQL product group has published best-practices guidance for Dynamic Memory in Running SQL Server with Hyper-V Dynamic Memory.

Storage Adapters

Fibre Channel / iSCSI / CNA HBA Configuration: In the Dell vStart 1000m solution, both M620 and R620 servers use FC adapters for storage connectivity. Both the compute and management servers use the 10GbE NIC ports to provide segregated iSCSI connectivity for the guest cluster. The FC and iSCSI connections from each host are balanced correspondingly across the QLogic mezzanine cards or the BCM57810 10GbE NICs on the servers. This provides fault tolerance for the connections at the adapter level. Jumbo frames are enabled on the iSCSI network.

MPIO Configuration

The Microsoft MPIO architecture supports iSCSI, Fibre Channel, and serial attached SCSI (SAS) SAN connectivity by establishing multiple sessions or connections to the storage array. Multi-pathing solutions use redundant physical path components (adapters, cables, and switches) to create logical paths between the server and the storage device. In the event that one or more of these components fails, causing the path to fail, multi-pathing logic uses an alternate path for I/O so that applications can still access their data. Each network interface card (in the iSCSI case) or HBA should be connected using redundant switch infrastructures to provide continued access to storage in the event of a failure in a storage fabric component.

Failover times vary by storage vendor and can be configured by using timers in the Microsoft iSCSI Software Initiator driver or by modifying the Fibre Channel host bus adapter driver parameter settings.

There are two connection options for Dell Compellent to support multi-path connectivity: legacy ports and virtual ports. In legacy mode, front-end I/O ports are broken into primary and reserve ports based on a fault domain. Primary/reserve ports allow I/O to use the primary path; the reserve port is in standby mode until a primary port fails over to it. In terms of MPIO, this requires twice the I/O ports to enable multiple paths, and even more ports are required for a dual fabric. Virtual ports allow all front-end I/O ports to be virtualized: all front-end I/O ports can be used at the same time for load balancing as well as for failover to another port. Virtual ports are available for Fibre Channel connections only, iSCSI connections only, or both Fibre Channel and iSCSI. In vStart 1000m, virtual ports are configured to support MPIO for both FC and iSCSI.

Performance Settings

The following Hyper-V R2 network performance improvements should be tested and considered for production use:

- TCP Checksum Offload is recommended; it benefits both CPU and overall network throughput performance, and it is fully supported by Live Migration.
- Support for jumbo frames was introduced with Windows Server 2008; Hyper-V in Windows Server 2008 R2 simply extends this capability to VMs. So, just as in physical network scenarios, jumbo frames add the same basic performance enhancements to virtual networking, including up to six times larger payloads per packet, which improves overall throughput and also reduces CPU utilization for large file transfers.

The Dell vStart 1000m has dedicated networks separated for cluster private (heartbeat), VM, and iSCSI traffic. See Server / Blade Network Connectivity for details.

NIC Teaming Configurations

NIC teaming can be utilized to enable multiple, redundant NICs and connections between servers and Access tier network switches. Teaming can be enabled via hardware- or software-based approaches, and it can enable multiple scenarios including path redundancy, failover, and load balancing. Dell vStart uses Broadcom SLB teaming for the VM network. Figure 4 illustrates the NIC teaming configuration implemented on each PowerEdge M620 server.

Fibre Channel and Ethernet

In this design pattern, a traditional, physically separated approach of Ethernet and Fibre Channel is utilized. For the LAN, two 10GbE adapters are teamed, combining both LAN and iSCSI SAN traffic on the same physical infrastructure but with dedicated VLANs. For storage, two Fibre Channel HBAs are utilized with MPIO for failover and load balancing. The NIC teaming is provided at the blade or interconnect layer and is transparent to the host operating system. Each VLAN is presented to the OS as an individual NIC, and in Hyper-V a virtual switch is created for each. For Fibre Channel, MPIO is provided by the host OS combined with the Microsoft DSM. Using FC enables host clustering but not guest clustering; to meet that requirement, the SAN also presents iSCSI LUNs over Ethernet through the virtual machine network(s).

Hyper-V Host Failover Cluster Design

A Hyper-V host failover cluster is a group of independent servers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). In the case of a planned migration (called Live Migration), users experience no perceptible service interruption.

The host servers are one of the critical components of a dynamic, virtual infrastructure. Consolidation of multiple workloads onto the host servers requires that those servers be highly available. Windows Server 2008 R2 provides advances in failover clustering that enable high availability and Live Migration of virtual machines between physical nodes.
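Cluster creation itself can also be scripted once validation passes. A minimal sketch with the Failover Clustering module; the cluster name, node names, and IP address are placeholders:

```
Import-Module FailoverClusters

# Create a fabric host cluster from validated nodes and assign its
# management name and static IP address.
New-Cluster -Name "FabricClu01" -Node "M620-01","M620-02" -StaticAddress 172.20.10.50
```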

Host Failover Cluster Topology
The Microsoft Private Cloud Fast Track defines two standard design patterns. It is recommended that the server topology consist of at least two Hyper-V host clusters. The first, with at least two nodes, is referred to as the management cluster; the second and any additional clusters are referred to as fabric host clusters. In some cases, such as smaller-scale scenarios or specialized solutions, the management and fabric workloads can be consolidated onto the fabric host cluster. Special care must be taken in that case to ensure resource availability for the virtual machines that host the various parts of the management stack. See the management cluster design discussion later in this document for details.

Each host cluster can contain up to 16 nodes. Host clusters require some form of shared storage, such as a Fibre Channel or iSCSI SAN. Given that the Dell vStart 1000m solution can support up to 32 nodes, multiple clusters must be created.

Host Cluster Networks
A variety of host cluster networks are required for a Hyper-V failover cluster. The network requirements enable high availability and high performance. The specific requirements and recommendations for network configuration are published on TechNet; see the Hyper-V: Live Migration Network Configuration Guide. The Network Architecture section earlier in this document describes the Dell vStart host cluster network configuration.

Management Network
A dedicated management network is required so that hosts can be managed without competing with guest traffic. A dedicated network provides a degree of separation for security and ease of management. This typically implies dedicating one network adapter per host and one port per network device to the management network. Additionally, most server manufacturers provide a separate out-of-band management capability that enables remote management of server hardware outside of the host operating system.

iSCSI Network
If using iSCSI, a dedicated iSCSI network is required so that storage traffic is not in contention with any other traffic. This typically implies dedicating two network adapters per host and two ports per network device to the iSCSI network.

CSV/Cluster Communication Network
Usually, when the cluster node that owns a virtual hard disk (VHD) file in CSV performs disk input/output (I/O), the node communicates directly with the storage, for example through a storage area network (SAN). However, storage connectivity failures sometimes prevent a given node from communicating directly with the storage. To maintain function until the failure is corrected, the node redirects the disk I/O through a cluster network (the preferred network for CSV) to the node where the disk is currently mounted. This is called CSV redirected I/O mode.

Live Migration Network
During Live Migration, the contents of the memory of the VM running on the source node are transferred to the destination node over a LAN connection. To ensure high-speed transfer, a dedicated Live Migration network is required.

Virtual Machine Network(s)
The virtual machine network(s) are dedicated to virtual machine LAN traffic. The VM network can be two or more 1 Gb Ethernet networks, one or more networks created via NIC teaming, or virtual networks created from shared 10 Gb Ethernet NICs.
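The five dedicated networks just described lend themselves to a tiny validation script. The Python sketch below uses placeholder VLAN IDs rather than the validated assignments in Table 3; the rule it enforces is simply one dedicated VLAN per traffic class.

# Placeholder VLAN IDs for illustration; see Table 3 for the validated plan.
CLUSTER_NETWORKS = {
    "Management":      10,
    "iSCSI":           20,
    "CSV/Cluster":     30,
    "Live Migration":  40,
    "Virtual Machine": 50,
}

def validate(networks):
    """Flag any two traffic classes that share a VLAN."""
    seen, problems = {}, []
    for name, vlan in networks.items():
        if vlan in seen:
            problems.append(f"VLAN {vlan} carries both {seen[vlan]} and {name}")
        seen[vlan] = name
    return problems or ["cluster network plan OK"]

print("\n".join(validate(CLUSTER_NETWORKS)))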

Host Failover Cluster Storage
Cluster Shared Volumes (CSV) is a feature that simplifies the configuration and management of Hyper-V virtual machines in failover clusters. With CSV, on a failover cluster that runs Hyper-V, multiple virtual machines can use the same LUN (disk) yet fail over (or move from node to node) independently of one another. CSV provides increased flexibility for volumes in clustered storage; for example, it allows you to keep system files separate from data to optimize disk performance, even if the system files and the data are contained within virtual hard disk (VHD) files. If you choose to use live migration for your clustered virtual machines, CSV can also provide performance improvements for the live migration process. CSV is available in versions of Windows Server 2008 R2 and of Microsoft Hyper-V Server 2008 R2 that include failover clustering. In the vStart 1000m for Microsoft Private Cloud, at least one LUN is created on the Compellent storage array and connected and configured as a CSV on the host failover cluster.

Hyper-V Guest VM Design
Standardization is a key tenet of private cloud architectures, and it applies to virtual machines as well. A standardized collection of virtual machine templates can both drive predictable performance and greatly improve capacity planning capabilities. As an example, the table below illustrates what a basic VM template library would look like.

Table 5 Guest Template Specs

Template    Size     Specs                              Network   OS
Template 1  Small    1 vCPU, 2 GB memory, 50 GB disk    VLAN X    WS 2003 R2
Template 2  Med      2 vCPU, 4 GB memory, 100 GB disk   VLAN X    WS 2003 R2
Template 3  X-Large  4 vCPU, 8 GB memory, 200 GB disk   VLAN X    WS 2003 R2
Template 4  Small    1 vCPU, 2 GB memory, 50 GB disk    VLAN Y    WS
Template 5  Med      2 vCPU, 4 GB memory, 100 GB disk   VLAN Y    WS
Template 6  X-Large  4 vCPU, 8 GB memory, 200 GB disk   VLAN Y    WS
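A template library like Table 5 is easy to represent in code, and fixed template sizes are what make capacity math trivial. The following is an illustrative Python sketch only; the sizes are copied from Table 5 and the names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class VMTemplate:
    vcpus: int
    memory_gb: int
    disk_gb: int
    network: str

# Specs mirror the VLAN X rows of Table 5.
LIBRARY = {
    "small":   VMTemplate(1, 2, 50,  "VLAN X"),
    "medium":  VMTemplate(2, 4, 100, "VLAN X"),
    "x-large": VMTemplate(4, 8, 200, "VLAN X"),
}

def capacity_needed(requests):
    """Total resources for a batch of template-based requests."""
    picks = [LIBRARY[size] for size in requests]
    return (sum(t.vcpus for t in picks),
            sum(t.memory_gb for t in picks),
            sum(t.disk_gb for t in picks))

print(capacity_needed(["small"] * 10 + ["medium"] * 4 + ["x-large"] * 2))
# -> (26, 52, 1300): 26 vCPUs, 52 GB RAM, 1300 GB of disk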

VM Storage

Dynamically Expanding Disks
Dynamically expanding virtual hard disks provide storage capacity as needed to store data. The size of the VHD file is small when the disk is created and grows as data is added to the disk. The size of the VHD file does not shrink automatically when data is deleted from the virtual hard disk; however, you can compact the disk to decrease the file size after data is deleted by using the Edit Virtual Hard Disk Wizard.

Fixed-Size Disks
Fixed virtual hard disks provide storage capacity by using a VHD file that is the size specified for the virtual hard disk when the disk is created. The size of the VHD file remains fixed regardless of the amount of data stored; however, you can use the Edit Virtual Hard Disk Wizard to increase the size of the virtual hard disk, which increases the size of the VHD file. By allocating the full capacity at the time of creation, fragmentation at the host level is not an issue (fragmentation inside the VHD itself must be managed within the guest).

Differencing Disks
Differencing virtual hard disks provide storage to enable you to make changes to a parent virtual hard disk without altering that disk. The size of the VHD file for a differencing disk grows as changes are stored to the disk.

Pass-Through Disks
Hyper-V enables virtual machine guests to directly access local disks or SAN LUNs that are attached to the physical server, without requiring the volume to be presented to the host server. The virtual machine guest accesses the disk directly (using the disk's GUID) without going through the host's file system. Given that the performance difference between fixed disks and pass-through disks is now negligible, the decision rests on manageability. For instance, if the data on the volume will be very large (hundreds of gigabytes), a VHD is hardly portable at that size given the extreme amount of time it takes to copy. Also bear in mind the backup scheme: with pass-through disks, the data can only be backed up from within the guest. When utilizing pass-through disks, there is no VHD file; the LUN is used directly by the guest, so there is no dynamic sizing capability or snapshot capability.

In-Guest iSCSI Initiator
Hyper-V can also utilize iSCSI storage by directly connecting to iSCSI LUNs through the guest's virtual network adapters. This is mainly used for access to large volumes, for volumes on SANs to which the Hyper-V host itself is not connected, or for guest clustering. Guests cannot boot from iSCSI LUNs accessed through the virtual network adapters without utilizing a third-party iSCSI initiator.
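The portability argument against very large VHDs above is simple arithmetic. A quick estimate of copy time at an assumed sustained throughput; 100 MB/s is an illustrative figure, not a measured vStart number.

def copy_hours(size_gb, throughput_mb_s=100):
    """Hours to copy a VHD of size_gb at a sustained throughput."""
    return size_gb * 1024 / throughput_mb_s / 3600

for size_gb in (100, 500, 1000):
    print(f"{size_gb:5d} GB VHD: ~{copy_hours(size_gb):.1f} h to copy")
# 100 GB ~0.3 h, 500 GB ~1.4 h, 1000 GB ~2.8 h, before any verification
# or backup windows, which is why pass-through disks can win at this size.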

VM Networking
Hyper-V guests support two types of virtual network adapters: synthetic and emulated. Synthetic adapters make use of the Hyper-V VMBUS architecture and are the high-performance, native devices; they require the Hyper-V Integration Services to be installed within the guest. Emulated adapters are available to all guests, even if Integration Services are not available, but they perform much more slowly and should be used only if a synthetic adapter is unavailable.

You can create many virtual networks on the server running Hyper-V to provide a variety of communications channels. For example, you can create networks to provide the following:

Communications between virtual machines only. This type of virtual network is called a private network.
Communications between the host server and virtual machines. This type of virtual network is called an internal network.
Communications between a virtual machine and a physical network, by creating an association to a physical network adapter on the host server. This type of virtual network is called an external network.

Virtual Processors
Refer to the table below for the supported number of virtual processors in a Hyper-V guest. Note that the list is somewhat dynamic: improvements to the Integration Services for Hyper-V are periodically released, adding support for additional operating systems; see Microsoft's current guest operating system documentation for up-to-date information.

Table 6 Supported Guest Server OS (editions; supported vCPU counts)

Windows Server 2008 R2 with SP1 (Standard, Enterprise, Datacenter, and Web editions): 1, 2, 3, or 4
Windows Server 2008 R2 (Standard, Enterprise, Datacenter, and Windows Web Server 2008 R2): 1, 2, 3, or 4
Windows Server 2008 (Standard, Standard without Hyper-V, Enterprise, Enterprise without Hyper-V, Datacenter, Datacenter without Hyper-V, Windows Web Server 2008, and HPC Edition): 1, 2, 3, or 4
Windows Server 2003 R2 with SP2 (Standard, Enterprise, Datacenter, and Web): 1 or 2
Windows Home Server 2011 (Standard): 1, 2, or 4
Windows Storage Server 2008 R2 (Essentials): 1, 2, or 4
Windows Small Business Server 2011 (Essentials): 1 or 2
Windows Small Business Server 2011 (Standard): 1, 2, or 4
Windows Server 2003 R2 x64 Edition with SP2 (Standard, Enterprise, and Datacenter): 1 or 2
Windows Server 2003 with SP2 (Standard, Enterprise, Datacenter, and Web): 1 or 2
Windows Server 2003 x64 Edition with SP2 (Standard, Enterprise, and Datacenter): 1 or 2
Windows 2000 Server with SP4 (Server, Advanced Server): 1. IMPORTANT: support for this operating system ended on July 13, 2010.
CentOS 6.0 and 6.1 (x86 and x64 editions): 1, 2, or 4
CentOS 5.x (x86 and x64 editions): 1, 2, or 4
Red Hat Enterprise Linux 6.0 and 6.1 (x86 and x64 editions): 1, 2, or 4
Red Hat Enterprise Linux 5.2 through 5.7 (x86 and x64 editions): 1, 2, or 4
SUSE Linux Enterprise Server 11 with SP1 (x86 and x64 editions): 1, 2, or 4
SUSE Linux Enterprise Server 10 with SP4 (x86 and x64 editions): 1, 2, or 4

Table 7 Supported Guest Client OS (editions; supported virtual processors)

Windows 7 with SP1 (Enterprise, Ultimate, and Professional; 32-bit and 64-bit, including N and KN editions): 1, 2, 3, or 4
Windows 7 (Enterprise, Ultimate, and Professional; 32-bit and 64-bit, including N and KN editions): 1, 2, 3, or 4
Windows Vista (Business, Enterprise, and Ultimate, including N and KN editions): 1 or 2
Windows XP with SP3 (Professional): 1 or 2. Important: performance might be degraded on Windows XP with SP3 when the server running Hyper-V uses an AMD processor; for more information, see the Microsoft article Degraded I/O Performance Using a Windows XP Virtual Machine with Windows Server 2008 Hyper-V.
Windows XP with SP2 (Professional): 1. IMPORTANT: support for this operating system ended on July 13, 2010.
Windows XP x64 Edition with SP2 (Professional): 1 or 2

Hyper-V supports a maximum ratio of 8 virtual processors (VPs) per logical processor (LP) for server workloads, and 12 VPs per LP for VDI workloads. A logical processor is defined as a processing core seen by the host operating system or parent partition; with Intel Hyper-Threading, each thread is considered an LP. A 16-LP server therefore supports a maximum of 128 VPs, which in turn equates to 128 single-processor VMs, 64 dual-processor VMs, or 32 quad-processor VMs. The 8:1 and 12:1 VP:LP ratios are maximum supported limits; it is recommended that lower ratios be used in practice.
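The VP:LP arithmetic above reduces to one line of code. A short sketch: ratio=8 is the server-workload ceiling and ratio=12 the VDI ceiling, with hyper-threading doubling the LP count as described.

def vp_ceiling(sockets, cores_per_socket, hyperthreading=True, ratio=8):
    """Logical processors and the supported virtual-processor ceiling."""
    lps = sockets * cores_per_socket * (2 if hyperthreading else 1)
    return lps, lps * ratio

print(vp_ceiling(1, 8))             # (16, 128): the 16-LP example above
print(vp_ceiling(2, 8))             # (32, 256): 2 x 8 cores with HT
print(vp_ceiling(2, 8, ratio=12))   # (32, 384): same host at the VDI ratio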

Fabric Management

Fabric Management Hosts

Design
Fabric management hosts run Windows Server 2008 R2 with Service Pack 1 (64-bit) and the Hyper-V role. As a function of the scalability of the vStart solution, the supporting System Center products and their dependencies run within Hyper-V virtual machines. A two-node fabric management cluster is created to provide high availability of the fabric management workloads. This cluster is dedicated to the virtual machines running the suite of products that provide IaaS management functionality and is not intended to run additional workloads. Depending on the management scale point, more or less management host capacity may be required.

Refer to the Server Architecture section for details about the hardware used for the fabric management hosts in the Dell vStart solution.

Compute (CPU)
The management virtual machine workloads are expected to have fairly high utilization, so a conservative vCPU-to-core ratio is utilized. The fabric management hosts use two CPUs with 8 cores each, as well as Hyper-Threading.

Memory (RAM)
Host memory is sized to support the System Center products and their dependencies that provide IaaS management functionality. As such, the fabric management hosts each have 128 GB of memory.

Network
The fabric management hosts use a single 10GbE multi-port network adapter. The adapter provides NPAR and teaming to enable a bandwidth guarantee and network port fault tolerance.

Storage Connectivity
The fabric management hosts include a RAID controller for local storage fault tolerance, as well as a Fibre Channel adapter for cluster storage. The RAID controller provides a RAID 1 volume across the local hard drives, giving the parent partition a fault-tolerant boot disk. The fabric management hosts also have connections to the iSCSI network to provide network interfaces to the VMs that will run guest clusters.

Management Logical Architecture

Logical Design
The following depicts the management logical architecture of the two-node management cluster:

Figure 11 Management Architecture

The management architecture consists of two physical nodes in a failover cluster with Compellent SAN storage, iSCSI, and redundant network connections. This provides a highly available platform for the management systems. Some systems have additional high-availability options; the most effective HA option is leveraged in those cases.

The management architecture in the Dell vStart 1000m for Microsoft Private Cloud Fast Track solution includes:

Microsoft System Center 2012 Virtual Machine Manager
Microsoft System Center 2012 Operations Manager
Dell PowerEdge Servers Management Packs for System Center Operations Manager
Dell Compellent Storage Center Management Packs for System Center Operations Manager
Dell PRO Management Packs for System Center Operations Manager
Microsoft System Center 2012 Orchestrator
Microsoft System Center 2012 Service Manager

Microsoft System Center 2012 App Controller
Microsoft Cloud Services Process Pack
Microsoft SQL Server 2008 R2 SP1 (Optional)

Management Systems Architecture

Pre-Requisite Infrastructure
The following section outlines this architecture and its dependencies within a customer environment.

Active Directory Domain Services
Active Directory Domain Services (AD DS) is a required foundational component. Fast Track supports Windows Server 2008 and Windows Server 2008 R2 SP1 AD DS customer deployments; previous versions are not directly supported for all workflow provisioning and de-provisioning automation. It is assumed that AD DS deployments exist on-site, and deployment of these services is not in scope for the typical deployment.

Forests and Domains: The preferred approach is to integrate into an existing AD DS forest and domain. This is not a hard requirement; a dedicated resource forest or domain may also be employed as an additional part of the deployment. Fast Track supports multiple domains and multiple forests in a trusted environment using two-way forest trusts.

Trusts: Fast Track enables multi-domain support within a single forest where two-way forest (Kerberos) trusts exist between all domains. This is referred to as multi-domain or intra-forest support. Inter-forest (multi-forest) scenarios are supported as well.

DNS: DNS name resolution is a required element for System Center 2012 components and the process automation solution. Active Directory-integrated DNS is required for the automated provisioning and de-provisioning components within System Center Orchestrator runbooks. Full support and automation are provided for Windows Server 2008 and Windows Server 2008 R2 SP1 Active Directory-integrated DNS deployments. Non-Microsoft or non-Active Directory-integrated DNS solutions may be possible, but they do not provide automated creation and removal of DNS records related to virtual machine provisioning and de-provisioning; such solutions would either require manual intervention for these scenarios or require modifications to the Cloud Services Process Pack Orchestrator runbooks.

DHCP: To support dynamic provisioning and management of physical and virtual compute capacity within the IaaS infrastructure, DHCP is utilized for all physical and virtual machines to support runbook automation. For physical hosts, such as the fabric management cluster nodes and the scale-unit cluster nodes, DHCP reservations are recommended so that physical servers and NICs always have known IP addresses while providing centralized management of those addresses via DHCP.
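A DHCP reservation plan for fabric hosts is essentially a MAC-to-IP table. A minimal sketch using only the Python standard library; the subnet and MAC addresses are illustrative placeholders, not values from this solution.

import ipaddress

def build_reservations(subnet, macs, first_offset=10):
    """Pair each fabric NIC's MAC with a fixed address so hosts keep
    known IPs while addressing stays centrally managed in DHCP."""
    addresses = list(ipaddress.ip_network(subnet).hosts())[first_offset:]
    return {mac: str(ip) for mac, ip in zip(macs, addresses)}

fabric_nics = ["00-15-5D-00-00-01", "00-15-5D-00-00-02"]
for mac, ip in build_reservations("192.168.10.0/24", fabric_nics).items():
    print(f"reservation: {mac} -> {ip}")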

SQL Server
Two SQL Servers are deployed to support the solution, configured as a failover cluster containing all the databases for each System Center product. Each SQL VM is configured with four (4) vCPUs, 32 GB of RAM, and four vNICs (one LAN, one cluster, and two iSCSI to support MPIO). The SQL VMs access iSCSI-based shared storage, with two LUNs configured for each hosted database instance. A guest cluster is used for several reasons, chiefly to maintain high availability of the System Center databases during both host and guest OS patching and during host or guest failures. Should the needs (CPU, RAM, IO) of the solution exceed what two VMs can provide, additional VMs can be added to the virtual SQL cluster and each SQL instance moved to its own VM in the cluster. This requires SQL Server 2008 Enterprise Edition, which is the recommendation.

In the Dell vStart for Enterprise Virtualization solution, the SQL cluster is also used to provide the highly available file server that hosts the VMM library share. This implementation is due to the known restrictions on library share locations in a VMM cluster.

SQL Server Configuration
Two non-HA VMs on different Hyper-V hosts:
Windows Server 2008 R2 SP1 Enterprise
Four vCPUs
32 GB memory (not Dynamic Memory)
Four vNICs (one client connections, one cluster communications, two iSCSI)
Storage: OS VHD, 20 iSCSI LUNs

Table 8 SQL Data Location and Size Examples

LUN            Purpose               Size
LUN 1, FC      VM Operating System   60 GB VHD
LUN 2, iSCSI   SQL Cluster Quorum    1 GB
LUN 3, iSCSI   SQL Cluster DTC       5 GB
LUN 4, iSCSI   SC VMM DB             50 GB
LUN 5, iSCSI   SC VMM Log            50 GB
LUN 6, iSCSI   SC AC DB              50 GB
LUN 7, iSCSI   SC AC Log             50 GB
LUN 8, iSCSI   SC ORCH DB            50 GB
LUN 9, iSCSI   SC ORCH Log           50 GB
LUN 10, iSCSI  SC OM DB              50 GB
LUN 11, iSCSI  SC OM Log             50 GB
LUN 12, iSCSI  SC OM DW DB           1 TB
LUN 13, iSCSI  SC OM DW Log          1 TB
LUN 14, iSCSI  SC SM DB              50 GB
LUN 15, iSCSI  SC SM Log             50 GB
LUN 16, iSCSI  SC SM AS DB           50 GB
LUN 17, iSCSI  SC SM AS Log          50 GB
LUN 18, iSCSI  SC SM DW DB           500 GB
LUN 19, iSCSI  SC SM DW Log          500 GB
LUN 20, iSCSI  SC SP Farm DB         50 GB
LUN 21, iSCSI  SC SP Farm Log        50 GB

Table 9 SQL Databases and Instances

DB Client        Instance Name    DB Name               Authentication
VMM              <Instance 1>     <VMM_DB>              Win Auth
Ops Mgr          <Instance 2>     <Ops_Mgr_DB>          Win Auth
Ops Mgr DW       <Instance 3>     <Ops_Mgr_DW>          Win Auth
Svc Mgr          <Instance 4>     <Svc_Mgr_DB>          Win Auth
Svc Mgr DW       <Instance 5>     <Svc_Mgr_DW>          Win Auth
Svc Mgr AS       <Instance 6>     <Svc_Mgr_AS_DB>       Win Auth
Orchestrator     <Instance 7>     <Orchestrator_DB>     Win Auth
App Controller   <Instance 8>     <AppController_DB>    Win Auth
SharePoint       <Instance 9>     <SP_Farm_DB>          Win Auth
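As a sanity check, the iSCSI LUNs in Table 8 can be totaled programmatically. Sizes below are transcribed from the table, with 1 TB counted as 1024 GB.

SQL_ISCSI_LUNS_GB = {
    "SQL Cluster Quorum": 1,  "SQL Cluster DTC": 5,
    "SC VMM DB": 50,          "SC VMM Log": 50,
    "SC AC DB": 50,           "SC AC Log": 50,
    "SC ORCH DB": 50,         "SC ORCH Log": 50,
    "SC OM DB": 50,           "SC OM Log": 50,
    "SC OM DW DB": 1024,      "SC OM DW Log": 1024,
    "SC SM DB": 50,           "SC SM Log": 50,
    "SC SM AS DB": 50,        "SC SM AS Log": 50,
    "SC SM DW DB": 500,       "SC SM DW Log": 500,
    "SC SP Farm DB": 50,      "SC SP Farm Log": 50,
}

total = sum(SQL_ISCSI_LUNS_GB.values())
print(f"{len(SQL_ISCSI_LUNS_GB)} iSCSI LUNs, {total} GB "
      f"(~{total / 1024:.1f} TB) to provision on the Compellent array")
# -> 20 iSCSI LUNs, matching the "OS VHD, 20 iSCSI LUNs" storage line above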

Virtual Machine Manager
System Center 2012 Virtual Machine Manager (VMM) has many uses in the solution. Two VMM servers are deployed and configured in a failover cluster, using a dedicated SQL instance on the virtualized SQL cluster. The VMM installation uses the HA file server services running on the SQL cluster nodes. Additional library servers can be added as needed (for instance, one per physical location). VMM and Operations Manager integration is configured during the installation process. The following hardware configuration is used:

VMM Server Configuration
Two non-HA VMs, guest clustered:
Windows Server 2008 R2 SP1
Four vCPUs
4 GB startup memory (8 GB maximum Dynamic Memory)
Four vNICs (management, cluster, and two iSCSI)
Storage: OS VHD, 1 iSCSI LUN

Operations Manager
System Center 2012 Operations Manager (OM) is used for monitoring and remediation in the solution. Two OM servers are deployed in a single management group, using a dedicated SQL instance on the virtualized SQL cluster. A third OM server is deployed as a reporting server. An OM agent is installed on every guest VM, as well as on every management host and scale-unit cluster node, to support health monitoring functionality.

The Operations Manager installation uses a dedicated SQL instance on the virtualized SQL cluster and follows a split SQL configuration: SQL Server Reporting Services (SSRS) and SQL Server Analysis Services (SSAS) reside on the third OM VM, while the OM database and OM data warehouse database utilize a dedicated instance on the virtualized SQL cluster. The following estimated database sizes are provided:

Estimated SQL Database Sizes
72 GB OM DB, 2.1 TB OM DW DB (Large)
32 GB OM DB, 1.0 TB OM DW DB (Medium)

The following hardware configurations are used:

Operations Manager Management Servers
Two non-HA VMs:
Windows Server 2008 R2 SP1
Four vCPUs
4 GB startup memory (16 GB maximum Dynamic Memory)
One vNIC
Storage: OS VHD

Operations Manager Reporting Server
One HA VM:
Windows Server 2008 R2 SP1
Four vCPUs
4 GB memory
One vNIC
Storage: OS VHD

Management Packs
Virtual Machine Manager 2012
Windows Server Base Operating System
Windows Server Failover Clustering
Windows Server 2008 Hyper-V
Microsoft SQL Server Management Pack
Microsoft Windows Server Internet Information Services (IIS) 2000/2003/2008
System Center MPs
Dell Server Management Packs v5.0
Dell Compellent Management Packs v2.0

Service Manager
The Service Manager management server is installed on two virtual machines. A third virtual machine hosts the Service Manager data warehouse server. Both the Service Manager database and the data warehouse database use a dedicated SQL instance on the virtualized SQL cluster. The Service Manager Self-Service Portal is hosted on a fourth VM. The following VM configurations are used:

Service Manager Management Servers
Two HA VMs:
Windows Server 2008 R2 SP1
Four vCPUs
4 GB memory (16 GB maximum Dynamic Memory)
One vNIC
Storage: OS VHD

Service Manager Data Warehouse Server
One HA VM:
Windows Server 2008 R2 SP1
Four vCPUs
16 GB memory
One vNIC
Storage: OS VHD

Service Manager Portal Server
One HA VM:
Windows Server 2008 R2 SP1
Four vCPUs
4 GB memory (8 GB maximum Dynamic Memory)
One vNIC
Storage: OS VHD

Service Manager Estimated SQL Database Sizes
50 GB SM DB, 100 GB SM DW DB

Orchestrator
The Orchestrator installation uses a dedicated SQL instance on the virtualized SQL cluster. Two Orchestrator runbook servers are leveraged for HA and scale purposes. Orchestrator provides built-in failover capability (it does not use failover clustering). By default, if an Orchestrator server fails, any workflows that were running on that server are started (not restarted) on the other Orchestrator server. The difference between starting and restarting is that restarting implies saving and maintaining state, enabling an instance of a workflow to keep running; Orchestrator only guarantees that it will start any workflows that were started on the failed server. State may (and likely will) be lost, meaning a request may fail. Most workflows have some degree of state management built in, which helps mitigate this risk.

The other reason two Orchestrator servers are deployed by default is scalability. By default, each Orchestrator runbook server can run a maximum of 50 simultaneous workflows. This limit can be increased depending on server resources, but an additional server is needed to accommodate larger-scale environments.

Orchestrator Servers
Two non-HA VMs:
Windows Server 2008 R2 SP1
Four vCPUs
4 GB memory (8 GB maximum Dynamic Memory)
One vNIC
Storage: OS VHD
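The start-versus-restart semantics and the 50-workflow default cap lend themselves to a toy model. In the Python sketch below the server names are invented and nothing here is the Orchestrator API; it only illustrates the behavior described above.

MAX_RUNBOOKS = 50   # default per-runbook-server concurrency cap

class RunbookServer:
    def __init__(self, name):
        self.name, self.running = name, []

    def start(self, workflow):
        if len(self.running) >= MAX_RUNBOOKS:
            raise RuntimeError(f"{self.name} at capacity")
        self.running.append(workflow)

def fail_over(failed, survivor):
    """On server failure, workflows are *started* elsewhere; in-flight
    state is not carried over, so a request may need to be retried."""
    for wf in failed.running:
        survivor.start(wf)          # fresh start, prior progress lost
    failed.running = []

a, b = RunbookServer("orch-1"), RunbookServer("orch-2")
for i in range(40):
    (a if i % 2 else b).start(f"wf-{i}")
fail_over(a, b)
print(f"{b.name} now runs {len(b.running)} of the cluster's workflows")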

App Controller
Because the Service Manager portal is utilized, App Controller must also be installed. App Controller uses a dedicated SQL instance on the virtualized SQL cluster, and a single App Controller server is installed on the management cluster. Service Manager provides the service catalog and service request mechanism, Orchestrator provides the automated provisioning, and App Controller provides the end-user interface for connecting to and managing workloads post-provisioning.

App Controller Server
One HA VM:
Windows Server 2008 R2 SP1
Four vCPUs
4 GB memory (8 GB maximum Dynamic Memory)
One vNIC
Storage: OS VHD

Management Scenarios Architecture

Management Scenarios
Below are the primary management scenarios addressed in Fast Track, although the management layer can provide many more capabilities:

Fabric Management
Fabric Provisioning
IT Service Provisioning (including Platform and Application Provisioning)
VM Provisioning and De-provisioning
Fabric and IT Service Maintenance
Fabric and IT Service Monitoring
Resource Optimization
Service Management
Reporting (used by chargeback, capacity, service management, health, and performance)
Backup and Disaster Recovery
Security

Fabric Management
Fabric management is the act of pooling multiple disparate computing resources together and being able to sub-divide, allocate, and manage them as a single fabric. The methods below make this possible.

Hardware Integration

Storage Integration
The Dell vStart 1000m uses the Compellent Series 40 controllers, which support the SMI-S protocol. SMI-S is used in Virtual Machine Manager to discover, classify, and provision storage on the Compellent storage arrays through the VMM console. VMM fully automates the assignment of storage to a Hyper-V host or Hyper-V host cluster and tracks the storage that it manages.

Dell Manager Server
One HA VM:
Windows Server 2008 R2 SP1
Two vCPUs
2 GB memory (8 GB maximum Dynamic Memory)
One vNIC
Storage: OS VHD

Network Integration
Networking in Virtual Machine Manager includes several enhancements that enable administrators to efficiently provision network resources for a virtualized environment. The networking enhancements include the following:

The ability to create and define logical networks. A logical network, together with one or more associated network sites, is a user-defined named grouping of IP subnets, VLANs, or IP subnet/VLAN pairs used to organize and simplify network assignments. Possible examples include BACKEND, FRONTEND, LAB, MANAGEMENT, and BACKUP. Logical networks represent an abstraction of the underlying physical network infrastructure, which enables modeling the network based on business needs and connectivity properties. After a logical network is created, it can be used to specify the network on which a host or a virtual machine (standalone or part of a service) is deployed. Users can assign logical networks as part of virtual machine and service creation without having to understand the network details.

Static IP address and MAC address pool assignment. If one or more IP subnets are associated with a network site, static IP address pools can be created from those subnets. Static IP address pools enable VMM to automatically allocate static IP addresses to Windows-based virtual machines running on any managed Hyper-V, VMware ESX, or Citrix XenServer host. VMM can automatically assign static IP addresses from the pool to stand-alone virtual machines, to virtual machines deployed as part of a service, and to physical computers when VMM is used to deploy them as Hyper-V hosts. Additionally, a static IP address pool can define a reserved range of IP addresses for load balancer virtual IP (VIP) addresses; VMM automatically assigns a virtual IP address to a load balancer during the deployment of a load-balanced service tier.
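A static IP address pool with a reserved VIP range can be sketched with Python's standard ipaddress module. This models the VMM concept only; the subnet and pool sizes are illustrative, and it does not call any VMM API.

import ipaddress

class StaticIPPool:
    """IP pool carved from a network site's subnet, with a reserved
    slice at the front for load-balancer VIP addresses."""

    def __init__(self, subnet, vip_count=8):
        hosts = list(ipaddress.ip_network(subnet).hosts())
        self.vips, self.free = hosts[:vip_count], hosts[vip_count:]
        self.leased = {}

    def allocate(self, vm_name):
        ip = self.free.pop(0)
        self.leased[vm_name] = ip
        return ip

    def release(self, vm_name):
        self.free.append(self.leased.pop(vm_name))

pool = StaticIPPool("10.20.30.0/24")
print("VIP range:", pool.vips[0], "-", pool.vips[-1])
print("vm-web-01 gets", pool.allocate("vm-web-01"))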

Load Balancer Integration
Hardware load balancers can be added to System Center 2012 Virtual Machine Manager (VMM). By adding load balancers to VMM management and creating associated virtual IP templates (VIP templates), users who create services can automatically provision load balancers when they create and deploy a service.

Fabric Provisioning
In accordance with the principle of standardization and automation, creating the fabric and adding capacity should always be an automated process. In Virtual Machine Manager, this is achieved via a multi-step process:

Provisioning Hyper-V hosts
Configuring host properties, networking, and storage
Creating Hyper-V host clusters

Each step in this process has dependencies:

Provisioning Hyper-V hosts
A PXE boot server
Dynamic DNS registration
A standard base image to be used for Hyper-V hosts
Hardware driver files in the VMM library
A host profile in the VMM library
A baseboard management controller (BMC) on the physical server

Configuring host properties, networking, and storage
Host property settings
Storage integration from above, plus additional MPIO and/or iSCSI configuration
Network: the logical networks to be associated with the physical network adapter must already be configured; if a logical network has associated network sites, one or more of the sites must be scoped to the host group where the host resides

Creating Hyper-V host clusters
The hosts must meet all requirements for Windows Server Failover Clustering
The hosts must be managed by VMM
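Each provisioning step above is effectively gated by a pre-flight checklist. A minimal sketch follows; the labels paraphrase the dependency lists above and are not VMM object types.

# Hypothetical dependency map distilled from the lists above.
STEPS = {
    "provision hosts": {"PXE server", "dynamic DNS", "base image",
                        "driver files", "host profile", "BMC"},
    "configure hosts": {"host properties", "storage integration",
                        "logical networks scoped to host group"},
    "create cluster":  {"failover clustering prerequisites",
                        "hosts managed by VMM"},
}

def ready(step, satisfied):
    """Report whether a provisioning step's dependencies are all met."""
    missing = STEPS[step] - satisfied
    return f"{step}: OK" if not missing else f"{step}: missing {missing}"

have = {"PXE server", "dynamic DNS", "base image", "driver files",
        "host profile", "BMC"}
print(ready("provision hosts", have))
print(ready("create cluster", have))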

VMM Private Clouds
Once the fabric resources (such as storage, networking, library servers and shares, host groups, and hosts) have been configured, they can be sub-divided and allocated for self-service consumption via the creation of VMM private clouds. During private cloud creation, the underlying fabric resources that will be available in the cloud are selected, library paths for private cloud users are configured, and the capacity of the cloud is set. For example, for a cloud created for use by the Finance department:

Name the cloud (for example, Finance)
Scope it to one or more host groups
Select which logical networks, load balancers, and VIP templates are available to the cloud
Specify which storage classifications are available to the cloud
Select which library shares are available to the cloud for VM storage
Specify granular capacity limits for the cloud (virtual CPUs, memory, and so on)
Select which capability profiles are available to the cloud (capability profiles match the type of hypervisor platforms running in the selected host groups; the built-in capability profiles represent the minimum and maximum values that can be configured for a virtual machine on each supported hypervisor platform)

VM Provisioning and De-provisioning
One of the primary cloud attributes is user self-service: providing the consumer of a service the ability to request that service and have it automatically provisioned. In the Microsoft private cloud solution, this refers to the ability of a user to request one or more virtual machines or to delete one or more of their existing virtual machines. The infrastructure scenario supporting this capability is the VM provisioning and de-provisioning process. This process is initiated from the self-service portal or tenant user interface and triggers an automated workflow in the infrastructure, through System Center Virtual Machine Manager, to create or delete a virtual machine based on the authorized settings input by the user or tenant. Provisioning could be template-based, such as requesting a small, medium, or large VM template, or a series of selections could be made by the user (vCPUs, RAM, and so on). If authorized, the provisioning process creates a new VM per the user's request, adds the VM to any relevant management products in the Microsoft private cloud (such as System Center), and enables access to the VM by the requestor.

IT Service Provisioning
In VMM, a service is a set of virtual machines that are configured and deployed together and managed as a single entity, for example a deployment of a multi-tier line-of-business application. In the VMM console, the Service Template Designer is used to create a service template, which defines the configuration of the service. The service template includes information about the virtual machines that are deployed as part of the service, which applications to install on them, and the networking configuration needed for the service (including the use of a load balancer).

Resource Optimization
Elasticity, the perception of infinite capacity, and the perception of continuous availability are Microsoft private cloud architecture principles that relate to resource optimization. This management scenario deals with optimizing resources by dynamically moving workloads around the infrastructure based on performance, capacity, and availability metrics. Examples include distributing workloads across the infrastructure for maximum performance, or consolidating as many workloads as possible onto the smallest number of hosts for a higher consolidation ratio.

VMM Dynamic Optimization migrates virtual machines to perform resource balancing within host clusters that support live migration, according to the configured settings. Dynamic Optimization looks to correct three possible scenarios, in priority order:

VMs that have configuration problems on their current host
VMs that are causing their host to exceed configured performance thresholds
Unbalanced resource consumption across hosts
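A toy prioritization pass over the three scenarios above is sketched below; the thresholds and host data are invented, and real Dynamic Optimization weighs considerably more signals than this.

def optimization_plan(hosts, threshold=80, spread_limit=30):
    """Order corrective actions by the three priorities above."""
    actions = []
    for h in hosts:
        for vm in h["vms"]:
            if vm.get("misconfigured"):
                actions.append((1, f"migrate {vm['name']} off {h['name']}: config problem"))
        if h["cpu"] > threshold:
            actions.append((2, f"migrate load off {h['name']}: over {threshold}% CPU"))
    cpu = [h["cpu"] for h in hosts]
    if max(cpu) - min(cpu) > spread_limit:
        actions.append((3, "rebalance cluster: uneven resource consumption"))
    return [text for _, text in sorted(actions)]

cluster = [
    {"name": "node1", "cpu": 92, "vms": [{"name": "vm-a", "misconfigured": True}]},
    {"name": "node2", "cpu": 35, "vms": []},
]
print("\n".join(optimization_plan(cluster)))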

VMM Power Optimization is an optional feature of Dynamic Optimization, available only when a host group is configured to migrate virtual machines through Dynamic Optimization. Through Power Optimization, VMM saves energy by turning off hosts that are not needed to meet resource requirements within a host cluster and turning them back on when they are needed again. By default, VMM performs power optimization all of the time when the feature is turned on; however, the hours and days of the week when power optimization is performed can be scheduled. For example, initially schedule power optimization only on weekends or when low resource usage on hosts is anticipated; after observing the effects in the environment, the hours may be increased or decreased. For Power Optimization, the computers must have a baseboard management controller (BMC) that enables out-of-band management.

Dell PRO-Enabled Management Packs
PRO provides an open and extensible framework for the creation of management packs for virtualized applications or associated hardware. In building these PRO-enabled management packs, Dell has created a customized solution that combines Dell monitoring offerings with the comprehensive monitoring and issue-resolution capabilities of PRO. Dell has incorporated awareness of system resources through the use of OpenManage Server Administrator in the PRO-enabled management packs. Watch points include, but are not limited to, server temperature, local RAID status, and power supply state. With these pre-determined watch points and resolution steps, PRO can react dynamically to adverse situations.

Fabric and IT Service Maintenance
The Microsoft private cloud running on the Dell vStart 1000m enables maintenance on any component of the solution without impacting its availability. Examples include the need to update or patch a host server or to add additional storage to the SAN. During planned maintenance, the system ensures that unnecessary alerts or events are not generated in the management systems.

VMM 2012 includes the built-in ability to maintain the fabric servers in a controlled, orchestrated manner. Fabric servers include the following physical computers managed by VMM: Hyper-V hosts and Hyper-V clusters, library servers, Pre-Boot Execution Environment (PXE) servers, the Windows Server Update Services (WSUS) server, and the VMM management server. VMM supports on-demand compliance scanning and remediation of the fabric. Administrators can monitor the update status of the servers, scan for compliance, and remediate updates for selected servers; they can also exempt resources from installation of an update.

VMM supports orchestrated updates of Hyper-V host clusters. When a VMM administrator performs update remediation on a host cluster, VMM places one cluster node at a time in maintenance mode and then installs updates. If the cluster supports live migration, intelligent placement is used to migrate virtual machines off the cluster node; if the cluster does not support live migration, VMM saves state for the virtual machines. This feature requires the use of a Windows Server Update Services (WSUS) server.
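Reduced to its control loop, the orchestrated update flow just described looks like the following sketch; the print statements stand in for the real VMM and WSUS operations.

def remediate_cluster(nodes, supports_live_migration=True):
    """One node at a time: maintenance mode, evacuate VMs, patch."""
    for node in nodes:
        print(f"{node}: enter maintenance mode")
        if supports_live_migration:
            print(f"{node}: live-migrate VMs via intelligent placement")
        else:
            print(f"{node}: save state for running VMs")
        print(f"{node}: install updates from WSUS, reboot if required")
        print(f"{node}: exit maintenance mode")

remediate_cluster(["fabric-node1", "fabric-node2", "fabric-node3"])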

Fabric and IT Service Monitoring
The Microsoft private cloud enables monitoring of every major component of the solution and generates alerts based on performance, capacity, and availability metrics. Examples include monitoring server availability, CPU, and storage utilization. Monitoring of the fabric is performed via the integration of Operations Manager and Virtual Machine Manager and the associated management packs for the infrastructure components. Enabling this integration allows Operations Manager to automatically discover, monitor, and report on essential performance and health characteristics of any object managed by VMM:

Health and performance of all VMM-managed hosts and VMs
Diagram views in Operations Manager reflecting all VMM-deployed hosts, services, VMs, private clouds, IP address pools, storage pools, and more
Performance and Resource Optimization (PRO), which can now be configured at a very granular level and delegated to specific self-service users
Monitoring and automated remediation of physical servers, storage, and network devices

Reporting
The solution also provides a centralized reporting capability. The reporting capability provides standard reports detailing capacity, utilization, and other system metrics, and it serves as the foundation for capacity- or utilization-based billing and chargeback to tenants. In a service-oriented IT model, reporting serves the following purposes:

Systems performance and health
Capacity metering and planning
Service-level availability
Usage-based metering and chargeback
Incident and problem reports that help IT focus its efforts

As a result of VMM and Operations Manager integration, several reports are created and available by default. Metering and chargeback reports, as well as incident and problem reports, are enabled by the use of Service Manager and the Cloud Services Process Pack.

Service Management System
The goal of Service Manager 2012 is to support IT service management in a broad sense. This includes implementing Information Technology Infrastructure Library (ITIL) processes, such as change management and incident management, and it can also include processes for other things, such as allocating resources from a private cloud. Service Manager 2012 maintains a Configuration Management Database (CMDB), the repository for nearly all configuration and management-related information in the System Center environment.

With the System Center Cloud Services Process Pack, this information includes Virtual Machine Manager 2012 resources such as virtual machine templates and virtual machine service templates, which are copied regularly from the VMM 2012 library into the CMDB. This allows objects such as VMs and users to be tied to Orchestrator runbooks for automated request fulfillment, metering and chargeback, and more.

User Self-Service
The Microsoft user self-service solution consists of three elements:

Service Manager Self-Service Portal
Cloud Services Process Pack
App Controller

Service Manager 2012 provides its own Self-Service Portal. Using the information in the CMDB, Service Manager 2012 can create a service catalog that shows the services available to a particular user. For example, suppose a user wants to create a virtual machine in the group's cloud. Instead of passing the request directly on to VMM 2012, as System Center App Controller 2012 does, Service Manager 2012 starts a workflow to handle the request. The workflow contacts the user's manager to get approval for the request; if the request is approved, the workflow then starts a System Center Orchestrator 2012 runbook.

The Service Manager Self-Service Portal consists of two parts and has the prerequisite of a Service Manager server and database:

Web content server
SharePoint Web Part

These roles are co-located on a single dedicated server.

The Cloud Services Process Pack is an add-on component that enables IaaS capabilities through the Service Manager Self-Service Portal and Orchestrator runbooks. It provides:

Standardized and well-defined processes for requesting and managing cloud services, including the ability to define projects, capacity pools, and virtual machines
Natively supported request, approval, and notification capabilities that enable businesses to effectively manage their own allocated infrastructure capacity pools

App Controller is the portal a self-service user utilizes after the request has been fulfilled, in order to connect to and manage their virtual machines and services. App Controller connects directly to Virtual Machine Manager, using the credentials of the authenticated user, to display his or her VMs and services and to provide a configurable set of actions.
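The division of labor just described (Service Manager for request and approval, Orchestrator for provisioning, App Controller for post-provisioning access) can be compressed into a single illustrative function; the callbacks below are stand-ins, not product APIs.

def fulfill_request(user, service, manager_approves, start_runbook):
    """Request flow sketched from the text: file a catalog request,
    run the approval workflow, then trigger a provisioning runbook."""
    print(f"service catalog request: {user} -> {service}")
    if not manager_approves(user, service):
        return "request rejected"
    start_runbook(service)          # provisioning handled by Orchestrator
    return "provisioned; managed afterwards through App Controller"

# Stand-in callbacks for the approval step and the runbook trigger.
print(fulfill_request(
    "alice", "small VM in Finance cloud",
    manager_approves=lambda u, s: True,
    start_runbook=lambda s: print(f"orchestrator runbook started for {s}"),
))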

Dell Compellent Storage Management
Dell Compellent Storage Center enables enterprises of all sizes to move beyond simply storing data to actively and intelligently managing it. Powerful network storage software with built-in intelligence and automated storage management functions optimizes the provisioning, placement, and protection of enterprise data throughout its lifecycle:

Storage Virtualization: advanced disk-level virtualization so all servers have continuous access to all storage
Thin Provisioning: provisioning of any size virtual volume without consuming physical capacity until data is written to disk
Automated Tiered Storage: dynamic block-level migration of data between storage tiers and disk tracks based on the frequency of use
Continuous Snapshots: space-efficient snapshots for continuous protection and near-instant recovery to any point in time
Dynamic Storage Migration: on-demand movement and sharing of storage volumes between two arrays without disruption
Boot from SAN: space-efficient snapshots stored on the SAN for boot volumes, virtual machine templates, and more
Remote Replication: replication of data between sites using snapshots for cost-effective disaster recovery
Storage Management: a single-pane-of-glass management console providing comprehensive storage capacity and performance monitoring and reporting

Dell PowerEdge Server Management Utilities

Dell OpenManage Essentials
Dell OpenManage Essentials combines innovative simplicity in design with intuitive usability and operational efficiency to perform essential hardware management tasks in multivendor operating system and hypervisor environments. OpenManage Essentials automates basic repetitive hardware management tasks, discovering, monitoring, and updating enterprise servers, storage, and networking from a single, easy-to-use, one-to-many console.

A key feature of OpenManage Essentials is its ability to monitor Dell PowerEdge 12th-generation servers with or without a systems management software agent. Although agents can be powerful tools for managing infrastructure elements, they are OS-dependent, take up significant network and processor bandwidth, and are extra software that IT administrators must install, configure, and test. OpenManage Essentials communicates directly with the server's embedded management, the Integrated Dell Remote Access Controller 7 (iDRAC7) with Lifecycle Controller, to enable agent-free remote management and monitoring of server hardware components such as storage, networking, processors, and memory. No processor cycles are spent on agent execution or intensive inventory collection, and an agent-free environment also enhances system security.

To maximize Dell systems health, performance, and uptime, OpenManage Essentials delivers essential hardware element management, including:

Automated discovery, inventory, and monitoring of Dell PowerEdge servers and Dell Force10 switches
Agent-free, automated server monitoring and BIOS, firmware, and driver updates for Dell PowerEdge servers, blade systems, and internal storage
Management of PowerEdge servers within Windows and Hyper-V environments
Context-sensitive link-and-launch management of Dell blade chassis and Dell Force10 switches
Interfaces for the following optional Dell systems management solutions:
  Repository Manager, to facilitate and secure precise control of system updates
  OpenManage Power Center, to optimize and control power consumption
  KACE K1000 Appliance service desk, to provide actionable alerts describing the status of Dell servers, storage, and switches
  Dell ProSupport phone-home services for your data center resources

By enabling easy and comprehensive inventory, monitoring, and updating of Dell hardware, OpenManage Essentials helps significantly reduce the complexity of, and time spent on, repetitive hardware management tasks, enhancing IT efficiency.

Server Out-of-Band Management: iDRAC7
The iDRAC7 with Lifecycle Controller enables agent-free monitoring, managing, maintaining, updating, and deploying of Dell servers from any location, regardless of OS status. The embedded iDRAC7 with Lifecycle Controller is available on all Dell PowerEdge servers in the Dell vStart 1000m solution. Lifecycle Controller simplifies server management, enabling:

Reduced recovery time in case of failure
Extended reach of administrators to larger numbers of distant servers
Enhanced security through secure access to remote servers
Enhanced embedded management, providing local deployment and simplified serviceability through Unified Server Configuration and Web Services for Management (WS-Management) interfaces

Service Management
The Service Management layer provides the means for automating and adapting IT service management best practices, such as those found in the Microsoft Operations Framework (MOF) and the IT Infrastructure Library (ITIL), to provide built-in processes for incident resolution, problem resolution, and change control.

Microsoft Operations Framework (MOF) 4.0 provides relevant, practical, and accessible guidance for today's IT pros. MOF strives to seamlessly blend business and IT goals while establishing and implementing reliable, cost-effective IT services. MOF is a free, downloadable framework that encompasses the entire service management lifecycle. Read MOF online.

Backup and Disaster Recovery
In a virtualized datacenter, there are three commonly used backup types: host-based, guest-based, and SAN-based. The table below contrasts these types:

Table 10 Backup types and capabilities

Capability                                                          Host Based   Guest Based   SAN Snapshot
Protection of VM configuration                                          X                          X
Protection of host and cluster configuration                            X                          X
Protection of virtualization-specific data such as VM snapshots         X                          X
Protection of data inside the VM                                        X             X            X
Protection of data inside the VM stored on pass-through disks                         X            X
Support for VSS-based backups for supported OS and applications         X             X            X
Support for Continuous Data Protection                                  X             X
Ability to granularly recover specific files or applications
inside the VM                                                                         X

Data Protection Manager 2012
System Center 2012 Data Protection Manager (DPM) enables disk-based and tape-based data protection and recovery for servers such as SQL Server, Exchange Server, SharePoint, virtual servers, and file servers, as well as for Windows desktops and laptops. DPM can also centrally manage system state and Bare Metal Recovery (BMR). When using DPM 2012 for Hyper-V, you should be fully aware of, and incorporate, the recommendations in Microsoft's DPM documentation.

Security
The three pillars of IT security are confidentiality, integrity, and availability (CIA). IT infrastructure threat modeling is the practice of considering what attacks might be attempted against the different components in an IT infrastructure. Generally, threat modeling assumes the following conditions:

Organizations have resources (in this case, IT components) that they wish to protect.
All resources are likely to exhibit some vulnerabilities.
People might exploit these vulnerabilities to cause damage or gain unauthorized access to information.
Properly applied security countermeasures help mitigate threats that exist because of vulnerabilities.

The IT infrastructure threat modeling process is a systematic analysis of IT components that compiles component information into profiles. The goal of the process is to develop a threat model portfolio, which is a collection of component profiles. One way to establish these pillars as a basis for threat modeling IT infrastructure is through Microsoft Operations Framework (MOF) 4.0, a framework that provides practical guidance for managing IT practices and activities throughout the entire IT lifecycle.

The Reliability Service Management Function (SMF) in the Plan Phase of MOF addresses creating plans for confidentiality, integrity, availability, continuity, and capacity. The Policy SMF in the Plan Phase provides context to help understand the reasons for policies, their creation, validation, and enforcement, and includes processes to communicate policy, incorporate feedback, and help IT maintain compliance with directives. The Deliver Phase contains several SMFs that help ensure that project planning, solution building, and the final release of the solution are accomplished in ways that fulfill requirements and create a solution that is fully supportable and maintainable when operating in production.

For more information, see the IT Infrastructure Threat Modeling Guide and the Security Risk Management Guide.

Security for the Microsoft private cloud is founded on three pillars: protected infrastructure, application access, and network access.

Protected Infrastructure
A defense-in-depth strategy is utilized at each layer of the Microsoft private cloud architecture. Security technologies and controls must be implemented in a coordinated fashion.

An entry point represents data or process flow that traverses a trust boundary. Any portion of an IT infrastructure in which data or processes traverse from a less-trusted zone into a more-trusted zone should have a higher review priority. Users, processes, and IT components all operate at specific trust levels that vary between fully trusted and fully untrusted. Typically, parity exists between the level of trust assigned to a user, process, or IT component and the level of trust associated with the zone in which the user, process, or component resides.

Malicious software poses numerous threats to organizations, from intercepting a user's logon credentials with a keystroke logger to achieving complete control over a computer or an entire network by using a rootkit. Malicious software can cause Web sites to become inaccessible, destroy or corrupt data, and reformat hard disks. Effects can include additional costs, such as the cost to disinfect computers.


More information

Optimized Storage Solution for Enterprise Scale Hyper-V Deployments

Optimized Storage Solution for Enterprise Scale Hyper-V Deployments Optimized Storage Solution for Enterprise Scale Hyper-V Deployments End-to-End Storage Solution Enabled by Sanbolic Melio FS and LaScala Software and EMC SAN Solutions Proof of Concept Published: March

More information

ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK

ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK KEY FEATURES PROVISION FROM BARE- METAL TO PRODUCTION QUICKLY AND EFFICIENTLY Controlled discovery with active control of your hardware Automatically

More information

Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance

Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance Dell PowerEdge Blades Outperform Cisco UCS in East-West Network Performance This white paper compares the performance of blade-to-blade network traffic between two enterprise blade solutions: the Dell

More information

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright

Converged Networking Solution for Dell M-Series Blades. Spencer Wheelwright Converged Networking Solution for Dell M-Series Blades Authors: Reza Koohrangpour Spencer Wheelwright. THIS SOLUTION BRIEF IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

Dell Virtual Remote Desktop Reference Architecture. Technical White Paper Version 1.0

Dell Virtual Remote Desktop Reference Architecture. Technical White Paper Version 1.0 Dell Virtual Remote Desktop Reference Architecture Technical White Paper Version 1.0 July 2010 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect

IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Connect Executive Overview This white paper describes how Cisco VFrame Server Fabric ization Software works with IBM BladeCenter H to provide

More information

HBA Virtualization Technologies for Windows OS Environments

HBA Virtualization Technologies for Windows OS Environments HBA Virtualization Technologies for Windows OS Environments FC HBA Virtualization Keeping Pace with Virtualized Data Centers Executive Summary Today, Microsoft offers Virtual Server 2005 R2, a software

More information

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest

More information

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products

MaxDeploy Ready. Hyper- Converged Virtualization Solution. With SanDisk Fusion iomemory products MaxDeploy Ready Hyper- Converged Virtualization Solution With SanDisk Fusion iomemory products MaxDeploy Ready products are configured and tested for support with Maxta software- defined storage and with

More information

Microsoft Exchange Server 2013 Virtualized Solution with Dell PowerEdge VRTX

Microsoft Exchange Server 2013 Virtualized Solution with Dell PowerEdge VRTX Microsoft Exchange Server 2013 Virtualized Solution with Dell PowerEdge VRTX A Dell Reference Architecture for 2,000 users with mailbox resiliency. Dell Global Solutions Engineering June 2013 Revision

More information

Part 1 - What s New in Hyper-V 2012 R2. Clive.Watson@Microsoft.com Datacenter Specialist

Part 1 - What s New in Hyper-V 2012 R2. Clive.Watson@Microsoft.com Datacenter Specialist Part 1 - What s New in Hyper-V 2012 R2 Clive.Watson@Microsoft.com Datacenter Specialist Microsoft Cloud OS Vision Public Cloud Azure Virtual Machines Windows Azure Pack 1 Consistent Platform Windows Azure

More information

Deployment Guide. How to prepare your environment for an OnApp Cloud deployment.

Deployment Guide. How to prepare your environment for an OnApp Cloud deployment. Deployment Guide How to prepare your environment for an OnApp Cloud deployment. Document version 1.07 Document release date 28 th November 2011 document revisions 1 Contents 1. Overview... 3 2. Network

More information

Introducing. Markus Erlacher Technical Solution Professional Microsoft Switzerland

Introducing. Markus Erlacher Technical Solution Professional Microsoft Switzerland Introducing Markus Erlacher Technical Solution Professional Microsoft Switzerland Overarching Release Principles Strong emphasis on hardware, driver and application compatibility Goal to support Windows

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

Brocade Solution for EMC VSPEX Server Virtualization

Brocade Solution for EMC VSPEX Server Virtualization Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,

More information

Pivot3 Reference Architecture for VMware View Version 1.03

Pivot3 Reference Architecture for VMware View Version 1.03 Pivot3 Reference Architecture for VMware View Version 1.03 January 2012 Table of Contents Test and Document History... 2 Test Goals... 3 Reference Architecture Design... 4 Design Overview... 4 The Pivot3

More information

IOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org

IOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org IOmark- VDI Nimbus Data Gemini Test Report: VDI- 130906- a Test Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VDI, VDI- IOmark, and IOmark are trademarks of Evaluator

More information

High Availability with Windows Server 2012 Release Candidate

High Availability with Windows Server 2012 Release Candidate High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...

More information

Dell PowerVault MD Series Storage Arrays: IP SAN Best Practices

Dell PowerVault MD Series Storage Arrays: IP SAN Best Practices Dell PowerVault MD Series Storage Arrays: IP SAN Best Practices A Dell Technical White Paper Dell Symantec THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND

More information

Red Hat enterprise virtualization 3.0 feature comparison

Red Hat enterprise virtualization 3.0 feature comparison Red Hat enterprise virtualization 3.0 feature comparison at a glance Red Hat Enterprise is the first fully open source, enterprise ready virtualization platform Compare the functionality of RHEV to VMware

More information

Evaluation of Dell PowerEdge VRTX Shared PERC8 in Failover Scenario

Evaluation of Dell PowerEdge VRTX Shared PERC8 in Failover Scenario Evaluation of Dell PowerEdge VRTX Shared PERC8 in Failover Scenario Evaluation report prepared under contract with Dell Introduction Dell introduced its PowerEdge VRTX integrated IT solution for remote-office

More information

Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment.

Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Preparation Guide v3.0 BETA How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Document version 1.0 Document release date 25 th September 2012 document revisions 1 Contents 1. Overview...

More information

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one

More information

Cisco, Citrix, Microsoft, and NetApp Deliver Simplified High-Performance Infrastructure for Virtual Desktops

Cisco, Citrix, Microsoft, and NetApp Deliver Simplified High-Performance Infrastructure for Virtual Desktops Cisco, Citrix, Microsoft, and NetApp Deliver Simplified High-Performance Infrastructure for Virtual Desktops Greater Efficiency and Performance from the Industry Leaders Citrix XenDesktop with Microsoft

More information

Performance characterization report for Microsoft Hyper-V R2 on HP StorageWorks P4500 SAN storage

Performance characterization report for Microsoft Hyper-V R2 on HP StorageWorks P4500 SAN storage Performance characterization report for Microsoft Hyper-V R2 on HP StorageWorks P4500 SAN storage Technical white paper Table of contents Executive summary... 2 Introduction... 2 Test methodology... 3

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described

More information

What Is Microsoft Private Cloud Fast Track?

What Is Microsoft Private Cloud Fast Track? What Is Microsoft Private Cloud Fast Track? MICROSOFT PRIVATE CLOUD FAST TRACK is a reference architecture for building private clouds that combines Microsoft software, consolidated guidance, and validated

More information

Deploying Microsoft Hyper-V with Dell EqualLogic PS Series Arrays

Deploying Microsoft Hyper-V with Dell EqualLogic PS Series Arrays TECHNICAL REPORT Deploying Microsoft Hyper-V with Dell EqualLogic PS Series Arrays ABSTRACT This technical report details information and best practices for deploying Microsoft Hyper-V with Dell EqualLogic

More information

DVS Enterprise. Reference Architecture. VMware Horizon View Reference

DVS Enterprise. Reference Architecture. VMware Horizon View Reference DVS Enterprise Reference Architecture VMware Horizon View Reference THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED

More information

HP ConvergedSystem 900 for SAP HANA Scale-up solution architecture

HP ConvergedSystem 900 for SAP HANA Scale-up solution architecture Technical white paper HP ConvergedSystem 900 for SAP HANA Scale-up solution architecture Table of contents Executive summary... 2 Solution overview... 3 Solution components... 4 Storage... 5 Compute...

More information

Brocade and EMC Solution for Microsoft Hyper-V and SharePoint Clusters

Brocade and EMC Solution for Microsoft Hyper-V and SharePoint Clusters Brocade and EMC Solution for Microsoft Hyper-V and SharePoint Clusters Highlights a Brocade-EMC solution with EMC CLARiiON, EMC Atmos, Brocade Fibre Channel (FC) switches, Brocade FC HBAs, and Brocade

More information

Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study

Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study White Paper Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study 2012 Cisco and/or its affiliates. All rights reserved. This

More information

Mit Soft- & Hardware zum Erfolg. Giuseppe Paletta

Mit Soft- & Hardware zum Erfolg. Giuseppe Paletta Mit Soft- & Hardware zum Erfolg IT-Transformation VCE Converged and Hyperconverged Infrastructure VCE VxRack EMC VSPEX Blue IT-Transformation IT has changed dramatically in last past years The requirements

More information

Deploying Exchange Server 2007 SP1 on Windows Server 2008

Deploying Exchange Server 2007 SP1 on Windows Server 2008 Deploying Exchange Server 2007 SP1 on Windows Server 2008 Product Group - Enterprise Dell White Paper By Ananda Sankaran Andrew Bachler April 2008 Contents Introduction... 3 Deployment Considerations...

More information

Best Practices for Virtualised SharePoint

Best Practices for Virtualised SharePoint Best Practices for Virtualised SharePoint Brendan Law Blaw@td.com.au @FlamerNZ Flamer.co.nz/spag/ Nathan Mercer Nathan.Mercer@microsoft.com @NathanM blogs.technet.com/nmercer/ Agenda Why Virtualise? Hardware

More information

Managing Microsoft Hyper-V Server 2008 R2 with HP Insight Management

Managing Microsoft Hyper-V Server 2008 R2 with HP Insight Management Managing Microsoft Hyper-V Server 2008 R2 with HP Insight Management Integration note, 4th Edition Introduction... 2 Overview... 2 Comparing Insight Management software Hyper-V R2 and VMware ESX management...

More information

Evaluation of Enterprise Data Protection using SEP Software

Evaluation of Enterprise Data Protection using SEP Software Test Validation Test Validation - SEP sesam Enterprise Backup Software Evaluation of Enterprise Data Protection using SEP Software Author:... Enabling you to make the best technology decisions Backup &

More information

VTrak 15200 SATA RAID Storage System

VTrak 15200 SATA RAID Storage System Page 1 15-Drive Supports over 5 TB of reliable, low-cost, high performance storage 15200 Product Highlights First to deliver a full HW iscsi solution with SATA drives - Lower CPU utilization - Higher data

More information

Networking Solutions for Storage

Networking Solutions for Storage Networking Solutions for Storage Table of Contents A SAN for Mid-Sized Businesses... A Total Storage Solution... The NETGEAR ReadyDATA RD 0... Reference Designs... Distribution Layer... Access LayeR...

More information

Microsoft Exchange Solutions on VMware

Microsoft Exchange Solutions on VMware Design and Sizing Examples: Microsoft Exchange Solutions on VMware Page 1 of 19 Contents 1. Introduction... 3 1.1. Overview... 3 1.2. Benefits of Running Exchange Server 2007 on VMware Infrastructure 3...

More information

CompTIA Cloud+ Course Content. Length: 5 Days. Who Should Attend:

CompTIA Cloud+ Course Content. Length: 5 Days. Who Should Attend: CompTIA Cloud+ Length: 5 Days Who Should Attend: Project manager, cloud computing services Cloud engineer Manager, data center SAN Business analyst, cloud computing Summary: The CompTIA Cloud+ certification

More information

Dell High Availability Solutions Guide for Microsoft Hyper-V R2. A Dell Technical White Paper

Dell High Availability Solutions Guide for Microsoft Hyper-V R2. A Dell Technical White Paper Dell High Availability Solutions Guide for Microsoft Hyper-V R2 A Dell Technical White Paper THIS WHITE PAPER IS FOR INFORMATIONAL PURPOPERATING SYSTEMS ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN

ENTERPRISE STORAGE WITH THE FUTURE BUILT IN ENTERPRISE STORAGE WITH THE FUTURE BUILT IN Breakthrough Efficiency Intelligent Storage Automation Single Platform Scalability Real-time Responsiveness Continuous Protection Storage Controllers Storage

More information

Overcoming Security Challenges to Virtualize Internet-facing Applications

Overcoming Security Challenges to Virtualize Internet-facing Applications Intel IT IT Best Practices Cloud Security and Secure ization November 2011 Overcoming Security Challenges to ize Internet-facing Applications Executive Overview To enable virtualization of Internet-facing

More information

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX

MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX White Paper MICROSOFT HYPER-V SCALABILITY WITH EMC SYMMETRIX VMAX Abstract This white paper highlights EMC s Hyper-V scalability test in which one of the largest Hyper-V environments in the world was created.

More information

Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200

Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200 Improving IT Operational Efficiency with a VMware vsphere Private Cloud on Lenovo Servers and Lenovo Storage SAN S3200 Most organizations routinely utilize a server virtualization infrastructure to benefit

More information

EMC Unified Storage for Microsoft SQL Server 2008

EMC Unified Storage for Microsoft SQL Server 2008 EMC Unified Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON and EMC FAST Cache Reference Copyright 2010 EMC Corporation. All rights reserved. Published October, 2010 EMC believes the information

More information

NET ACCESS VOICE PRIVATE CLOUD

NET ACCESS VOICE PRIVATE CLOUD Page 0 2015 SOLUTION BRIEF NET ACCESS VOICE PRIVATE CLOUD A Cloud and Connectivity Solution for Hosted Voice Applications NET ACCESS LLC 9 Wing Drive Cedar Knolls, NJ 07927 www.nac.net Page 1 Table of

More information

Remote/Branch Office IT Consolidation with Lenovo S2200 SAN and Microsoft Hyper-V

Remote/Branch Office IT Consolidation with Lenovo S2200 SAN and Microsoft Hyper-V Remote/Branch Office IT Consolidation with Lenovo S2200 SAN and Microsoft Hyper-V Most data centers routinely utilize virtualization and cloud technology to benefit from the massive cost savings and resource

More information

CompTIA Cloud+ 9318; 5 Days, Instructor-led

CompTIA Cloud+ 9318; 5 Days, Instructor-led CompTIA Cloud+ 9318; 5 Days, Instructor-led Course Description The CompTIA Cloud+ certification validates the knowledge and best practices required of IT practitioners working in cloud computing environments,

More information

The Methodology Behind the Dell SQL Server Advisor Tool

The Methodology Behind the Dell SQL Server Advisor Tool The Methodology Behind the Dell SQL Server Advisor Tool Database Solutions Engineering By Phani MV Dell Product Group October 2009 Executive Summary The Dell SQL Server Advisor is intended to perform capacity

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

Agent-free Inventory and Monitoring for Storage and Network Devices in Dell PowerEdge 12 th Generation Servers

Agent-free Inventory and Monitoring for Storage and Network Devices in Dell PowerEdge 12 th Generation Servers Agent-free Inventory and Monitoring for Storage and Network Devices in Dell PowerEdge 12 th Generation Servers This Dell Technical White Paper provides an overview on the agent-free monitoring feature

More information

Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008

Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008 Best Practices Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008 Installation and Configuration Guide 2010 LSI Corporation August 13, 2010

More information

Optimizing SQL Server Storage Performance with the PowerEdge R720

Optimizing SQL Server Storage Performance with the PowerEdge R720 Optimizing SQL Server Storage Performance with the PowerEdge R720 Choosing the best storage solution for optimal database performance Luis Acosta Solutions Performance Analysis Group Joe Noyola Advanced

More information

Open-E Data Storage Software and Intel Modular Server a certified virtualization solution

Open-E Data Storage Software and Intel Modular Server a certified virtualization solution Open-E Data Storage Software and Intel Modular Server a certified virtualization solution Contents 1. New challenges for SME IT environments 2. Open-E DSS V6 and Intel Modular Server: the ideal virtualization

More information

Veeam 74-409 Study Webinar Server Virtualization with Windows Server Hyper-V and System Center. Orin Thomas @orinthomas

Veeam 74-409 Study Webinar Server Virtualization with Windows Server Hyper-V and System Center. Orin Thomas @orinthomas Veeam 74-409 Study Webinar Server Virtualization with Windows Server Hyper-V and System Center Orin Thomas @orinthomas http://hyperv.veeam.com/ study-guide-microsoft-certification-exam-74-409-server-virtualization-windows-server-hyper-v-system-center-4202/

More information

STORAGE CENTER WITH NAS STORAGE CENTER DATASHEET

STORAGE CENTER WITH NAS STORAGE CENTER DATASHEET STORAGE CENTER WITH STORAGE CENTER DATASHEET THE BENEFITS OF UNIFIED AND STORAGE Combining block and file-level data into a centralized storage platform simplifies management and reduces overall storage

More information

Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware

Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware Deploying Microsoft Exchange Server 2010 in a virtualized environment that leverages VMware virtualization and NetApp unified storage

More information

Qsan AegisSAN Storage Application Note for Surveillance

Qsan AegisSAN Storage Application Note for Surveillance Qsan AegisSAN Storage Application Note for Surveillance Qsan AegisSAN P300Q P500Q F400Q F300Q 1/5 Qsan AegisSAN Storage Systems: Secure and Reliable Qsan delivers more reliable surveillance solution with

More information

Server and Storage Virtualization with IP Storage. David Dale, NetApp

Server and Storage Virtualization with IP Storage. David Dale, NetApp Server and Storage Virtualization with IP Storage David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this

More information

Frequently Asked Questions: EMC UnityVSA

Frequently Asked Questions: EMC UnityVSA Frequently Asked Questions: EMC UnityVSA 302-002-570 REV 01 Version 4.0 Overview... 3 What is UnityVSA?... 3 What are the specifications for UnityVSA?... 3 How do UnityVSA specifications compare to the

More information

MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION

MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION Reference Architecture Guide MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION EMC VNX, EMC VMAX, EMC ViPR, and EMC VPLEX Microsoft Windows Hyper-V, Microsoft Windows Azure Pack, and Microsoft System

More information

VMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014

VMware Virtual SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Table of Contents Introduction.... 3 vsphere Architectural Overview... 4 SAN Backup

More information

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server

More information

Dell FlexAddress for PowerEdge M-Series Blades

Dell FlexAddress for PowerEdge M-Series Blades Dell FlexAddress for PowerEdge M-Series Blades June 16, 2008 Authored By: Rick Ward, Mike J Roberts and Samit Ashdhir Information in this document is subject to change without notice. Copyright 2008 Dell

More information

Pluribus Netvisor Solution Brief

Pluribus Netvisor Solution Brief Pluribus Netvisor Solution Brief Freedom Architecture Overview The Pluribus Freedom architecture presents a unique combination of switch, compute, storage and bare- metal hypervisor OS technologies, and

More information

Reference Architecture - Microsoft Exchange 2013 on Dell PowerEdge R730xd

Reference Architecture - Microsoft Exchange 2013 on Dell PowerEdge R730xd Reference Architecture - Microsoft Exchange 2013 on Dell PowerEdge R730xd Reference Implementation for up to 8000 mailboxes Dell Global Solutions Engineering June 2015 A Dell Reference Architecture THIS

More information

PARALLELS CLOUD STORAGE

PARALLELS CLOUD STORAGE PARALLELS CLOUD STORAGE Performance Benchmark Results 1 Table of Contents Executive Summary... Error! Bookmark not defined. Architecture Overview... 3 Key Features... 5 No Special Hardware Requirements...

More information

Zadara Storage Cloud A whitepaper. @ZadaraStorage

Zadara Storage Cloud A whitepaper. @ZadaraStorage Zadara Storage Cloud A whitepaper @ZadaraStorage Zadara delivers two solutions to its customers: On- premises storage arrays Storage as a service from 31 locations globally (and counting) Some Zadara customers

More information

Oracle Database Scalability in VMware ESX VMware ESX 3.5

Oracle Database Scalability in VMware ESX VMware ESX 3.5 Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises

More information

Microsoft Private Cloud Fast Track Reference Architecture

Microsoft Private Cloud Fast Track Reference Architecture Microsoft Private Cloud Fast Track Reference Architecture Microsoft Private Cloud Fast Track is a reference architecture designed to help build private clouds by combining Microsoft software with NEC s

More information

Data Center Networking Designing Today s Data Center

Data Center Networking Designing Today s Data Center Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability

More information

Windows Server 2008 R2 Hyper V. Public FAQ

Windows Server 2008 R2 Hyper V. Public FAQ Windows Server 2008 R2 Hyper V Public FAQ Contents New Functionality in Windows Server 2008 R2 Hyper V...3 Windows Server 2008 R2 Hyper V Questions...4 Clustering and Live Migration...5 Supported Guests...6

More information

Cloud Optimize Your IT

Cloud Optimize Your IT Cloud Optimize Your IT Windows Server 2012 The information contained in this presentation relates to a pre-release product which may be substantially modified before it is commercially released. This pre-release

More information

June 2009. Blade.org 2009 ALL RIGHTS RESERVED

June 2009. Blade.org 2009 ALL RIGHTS RESERVED Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL RIGHTS

More information

Virtualizing Microsoft SQL Server on Dell XC Series Web-scale Converged Appliances Powered by Nutanix Software. Dell XC Series Tech Note

Virtualizing Microsoft SQL Server on Dell XC Series Web-scale Converged Appliances Powered by Nutanix Software. Dell XC Series Tech Note Virtualizing Microsoft SQL Server on Dell XC Series Web-scale Converged Appliances Powered by Nutanix Software Dell XC Series Tech Note The increase in virtualization of critical applications such as Microsoft

More information

Unified Computing Systems

Unified Computing Systems Unified Computing Systems Cisco Unified Computing Systems simplify your data center architecture; reduce the number of devices to purchase, deploy, and maintain; and improve speed and agility. Cisco Unified

More information

Protect SQL Server 2012 AlwaysOn Availability Group with Hitachi Application Protector

Protect SQL Server 2012 AlwaysOn Availability Group with Hitachi Application Protector Protect SQL Server 2012 AlwaysOn Availability Group with Hitachi Application Protector Tech Note Nathan Tran The purpose of this tech note is to show how organizations can use Hitachi Applications Protector

More information