FlexPod Datacenter with Microsoft Private Cloud


NetApp Verified Architecture

FlexPod Datacenter with Microsoft Private Cloud
Microsoft Private Cloud Fast Track v4 with Windows Server 2012 R2, System Center 2012 R2, and NetApp Clustered Data ONTAP

Glenn Sizemore, NetApp
May 2014 | NVA-0010 | Version 1.0 | Status: Final

TABLE OF CONTENTS

1 NetApp Verified Architecture
2 Solution Overview
2.1 Problem Statement
2.2 Target Audience
2.3 Technology Solution
2.4 Use Case Summary
3 Primary Use Case for FlexPod Datacenter with Microsoft Private Cloud Solution
3.1 Resource Pooling
3.2 Elasticity and the Perception of Infinite Capacity
3.3 Perception of Continuous Availability
3.4 Predictability
3.5 Metering and Chargeback
3.6 Multi-Tenancy
3.7 Security and Identity
4 Capacity Concepts
4.1 Characterized Workloads
4.2 Uncharacterized Workloads
4.3 Resource Pools
4.4 Capacity-Planning Methodology
4.5 Defining the Resource Budget
4.6 Defining Buckets for Uncharacterized Workloads
4.7 Hardware Requirements
4.8 Software Requirements
4.9 Networking
4.10 Storage
Storage Options for Windows Server 2012 R2
Virtual Infrastructure
Management
Data Protection
Alternative Use Cases
Design Validation
Success Stories
Conclusion

References
Supporting Documents
Recommended Documents
Version History
Acknowledgements

LIST OF TABLES
Table 1) Example VM classes
Table 2) Distinctions in levels of service
Table 3) CSV parameters
Table 4) Feature comparison of Microsoft Hyper-V capabilities
Table 5) Host cluster networks

LIST OF FIGURES
Figure 1) FlexPod component families
Figure 2) FlexPod discrete uplink design with NetApp clustered Data ONTAP
Figure 3) NetApp integration with Microsoft
Figure 4) NetApp clustered Data ONTAP controller as a monitored object on the Microsoft System Center 2012 R2 Operations Manager console
Figure 5) NetApp OnCommand System Manager
Figure 6) NetApp SMHV 2.0 distributed application-consistent backup architecture
Figure 7) NetApp SMHV 2.0 architecture
Figure 8) Backup types
Figure 9) NetApp SnapVault integration into SMHV
Figure 10) NetApp SMHV SnapVault options and Snapshot labels

1 NetApp Verified Architecture

The NetApp Verified Architecture (NVA) program offers customers a validated architecture for NetApp solutions. The NVA provides customers with a NetApp solution architecture that:

- Is thoroughly tested
- Is prescriptive
- Minimizes customers' deployment risk
- Accelerates their time to results

2 Solution Overview

This document describes FlexPod Datacenter with Microsoft Private Cloud, a solution for deploying Cisco and NetApp technologies as a shared cloud infrastructure that has been validated under the Microsoft Private Cloud Fast Track v4 program. Microsoft Private Cloud Fast Track is a joint effort between Microsoft and its hardware partners to deliver preconfigured virtualization and private cloud solutions. The program focuses on new technologies and services in Microsoft Windows Server in addition to investments in Microsoft System Center. The validated designs in the Microsoft Private Cloud Fast Track program deliver best-in-class solutions from Microsoft's hardware partners that incorporate Microsoft technologies, investments, and best practices.

The private cloud model provides much of the efficiency and agility of cloud computing, along with the increased control and customization that are achieved through dedicated private resources. Through the Microsoft Private Cloud Fast Track v4 validated FlexPod Datacenter with Microsoft Private Cloud solution, Cisco, NetApp, and Microsoft can offer organizations both the control and the flexibility that are required to reap the potential benefits of the private cloud. Microsoft Private Cloud Fast Track uses the core capabilities of Windows Server, Hyper-V, and System Center to deliver a private cloud infrastructure as a service offering. FlexPod Datacenter with Microsoft Private Cloud builds on the Microsoft Fast Track program to deliver industry-leading integration and implementation guidance.

2.1 Problem Statement

Cloud-style architectures offer significant cost reductions and increased business agility. However, these systems are complex and difficult to install and configure. This NVA document is designed to reduce deployment and design time for FlexPod customers and partners by providing specific guidance for creating a FlexPod Datacenter with Microsoft Private Cloud solution.

2.2 Target Audience

The FlexPod Datacenter with Microsoft Private Cloud NVA is recommended for the following audiences:

- Customer or partner architects
- Customer IT business leaders
- Private-cloud architects

2.3 Technology Solution

Industry trends indicate a vast data center transformation toward shared infrastructure and cloud computing, sometimes referred to as software-defined computing. Enterprise customers are moving away from isolated centers of IT operation toward more cost-effective virtualized environments. The objective of the move toward virtualization, and eventually to software-defined cloud computing, is to increase agility and reduce cost.

Especially because companies must address resistance to change in both their organizational and their technical IT models, achieving this transformation can seem daunting and complex. To accelerate the process and simplify the evolution to a shared-cloud, software-defined infrastructure, Cisco and NetApp have developed a solution called FlexPod Datacenter with Microsoft Private Cloud that is validated by Microsoft Private Cloud Fast Track v4.

FlexPod is a predesigned best-practice data center architecture that is built on Cisco Unified Computing System (Cisco UCS), the Cisco Nexus family of switches, and NetApp fabric-attached storage (FAS) systems, as shown in Figure 1. FlexPod is a suitable platform for running a variety of virtualization hypervisors as well as bare-metal operating systems (OSs) and enterprise workloads. FlexPod delivers not only a baseline configuration, but also the flexibility to be sized and optimized to accommodate many different use cases and requirements.

Figure 1) FlexPod component families.

This document describes the FlexPod Datacenter with Microsoft Private Cloud solution from Cisco and NetApp, validated by Microsoft Private Cloud Fast Track v4. It discusses design choices and deployment best practices for using this shared infrastructure platform.

2.4 Use Case Summary

FlexPod Datacenter with Microsoft Private Cloud is a multipurpose platform that delivers a wide variety of workloads in an enterprise setting and offers the following key features:

- Nondisruptive operations
- Server Message Block (SMB) 3.0 protocol
- Offloaded data transfer (ODX)
- Simplified storage management
- Backup and recovery

Nondisruptive operations are achieved by combining the flexibility and power of Windows Server 2012 R2 Hyper-V with the performance, availability, and efficiency of NetApp clustered Data ONTAP. This combination empowers the infrastructure or fabric management team to fully manage all aspects of the cloud without affecting nested customer instances.

Windows Server 2012 introduced an evolution in simplified virtual machine (VM) storage management with SMB 3.0 and continuously available file shares. Clustered Data ONTAP 8.2 added support for this robust protocol, allowing the Microsoft Private Cloud solution to achieve the benefits of consolidated VM storage and a shared-nothing architecture. Because of the NetApp Virtual Storage Tier (VST) and the flash storage capabilities of the Data ONTAP architecture, this consolidation offers both greater efficiency and improved performance.

ODX is an offload technology in Windows Server that allows the OS to hand off any copy operation to the storage controller. This offload is transparent and requires no customer plug-ins or software. The result is that Windows hosts can be loaded with greater density because the host OS is not consumed by file-transfer operations. In addition, its token-exchange architecture makes ODX cross-protocol capable. New in System Center 2012 R2 Virtual Machine Manager (SCVMM) is the capability to implement Fast File Copy VM deployment through ODX.

Building on the Storage Management Initiative Specification (SMI-S) published by the Storage Networking Industry Association (SNIA), an open standard for enterprise storage management, the solution can achieve fully integrated storage provisioning and management, either from Windows Server itself or through SCVMM. However, standards-based management does not cover all possible contingencies because it is a subset of the capabilities of all vendors. Therefore, to facilitate more advanced deployments, Microsoft System Center Orchestrator and the NetApp Data ONTAP PowerShell toolkit enable complete end-to-end orchestration and automation workflows. When these solutions are combined with System Center Service Manager, the workflows can be extended as integrated service offerings without the need for complex customer-built portals. They can also be extended further through integration with Microsoft Service Management Automation and the Windows Azure Management Pack.

NetApp SnapManager for Microsoft Hyper-V provides a complete backup and recovery infrastructure for the private cloud. New in the latest release are integration with distributed backup operations for Microsoft Cluster Shared Volumes (CSVs) and the ability to back up VMs located on an SMB share by using the Microsoft remote volume shadow copy service (remote VSS). These advances allow the native integration of NetApp Snapshot and FlexClone technologies to perform fast backup and restore operations, regardless of the size of the VMs.
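Because ODX is negotiated transparently between Windows and the storage controller, no configuration is required on the FlexPod components; even so, administrators sometimes want to confirm that copy offload has not been disabled on a Hyper-V host. The following minimal PowerShell check is offered as an illustration only and is not part of the validated deployment steps; it reads the registry value that Microsoft documents for this purpose, where a value of 0 means ODX is enabled:

    # Check whether ODX (copy offload) is enabled on this Windows Server host.
    # FilterSupportedFeaturesMode = 0 means ODX is enabled; 1 means it is disabled.
    Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' `
                     -Name FilterSupportedFeaturesMode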
3 Primary Use Case for FlexPod Datacenter with Microsoft Private Cloud Solution

The architecture principles of Microsoft Private Cloud conform to the cloud attributes outlined by the National Institute of Standards and Technology (NIST) definition of cloud computing: on-demand self-service,

broad network access, resource pooling, rapid elasticity, and measured service. Similarly, the Microsoft Hyper-V cloud architecture is based on these seven principles:

- Resource pooling
- Elasticity and the perception of infinite capacity
- Perception of continuous availability
- Predictability
- Metering and chargeback
- Multi-tenancy
- Security and identity

3.1 Resource Pooling

Resource optimization, which promotes efficiency and cost reduction, is primarily achieved through resource pooling. Abstracting the solution platform from the physical infrastructure allows resources to be optimized through shared use. Allowing multiple consumers to share resources results in higher resource utilization and more efficient use of the infrastructure. Behind many of the Hyper-V cloud principles is the element of optimization through abstraction, which helps improve agility and reduce cost.

3.2 Elasticity and the Perception of Infinite Capacity

From a consumer's perspective, cloud services appear to have infinite capacity. Like electric utilities, they are available for as much or as little use as needed. This utility approach to computing requires proactive capacity planning so that requests can be satisfied on demand. Applying the principle of elasticity reactively and in isolation often leads to inefficient use of resources and unnecessary costs. But when an organization encourages desired consumer behavior, it can use this principle to balance the demand for agility with the cost of unused capacity.

3.3 Perception of Continuous Availability

From the consumer's perspective, cloud services always appear to be available when needed. The consumer should never experience an interruption of service, even if failures occur in the Hyper-V cloud environment. To achieve this perception, organizations must take a mature service management approach that combines inherent application resiliency with infrastructure redundancies in a highly automated environment. As with the perception of infinite capacity, this principle can be achieved only in conjunction with the other Hyper-V cloud principles.

3.4 Predictability

Predictability is a fundamental cloud principle for both consumers and providers. From the consumer's perspective, cloud services should be consistent; that is, they should have the same quality and capabilities each time they are used. For the provider, delivering this predictability requires homogenizing the underlying physical servers, network devices, and storage systems to create an underlying infrastructure that can offer a consistent experience to the hosted workloads. In relation to service management, the provider delivers predictability by standardizing service offerings and processes. Following the principle of predictability is fundamental to achieving quality of service (QoS).

3.5 Metering and Chargeback

When IT professionals are asked to deliver a service to the business, they typically purchase the necessary components and then build an infrastructure that is specific to the service requirements. Often, this approach results in a longer time to market and increased cost caused by duplicate infrastructure. In

addition, the service often fails to meet business expectations for agility and cost control. The problem is often compounded when an existing service must be expanded or upgraded. This common approach to infrastructure deployment has forced most businesses to use complex forecasting models and guesswork to predict future needs for each business unit.

Taking a service provider's perspective toward delivering infrastructure transforms the IT approach. If infrastructure is provided as a service, IT can use a shared resource model that enables economies of scale. Because the resource pool is combined, variations in need among business units can be absorbed, and the forecasting model becomes simpler while accuracy increases. This principle of providing IT as a service, combined with the other principles, helps the organization achieve greater agility at a lower cost.

3.6 Multi-Tenancy

Multi-tenancy refers to the capability of the infrastructure to be logically subdivided and provisioned to different organizations or organizational units. The traditional example is a hosting company that provides servers to multiple customer departments. Increasingly, this model is also being used by centralized IT departments that provide services to multiple business units within a single organization, treating each as a customer or tenant.

3.7 Security and Identity

Security for the Hyper-V cloud is founded on three principles:

- Protected infrastructure
- Application access
- Network access

Protected infrastructure takes advantage of security and identity technologies so that hosts, information, and applications are secured across all scenarios in the data center, including the physical (on-premises) and virtual (on-premises and cloud) environments.

Application access helps IT managers extend vital applications to internal users as well as to important business partners and cloud users.

Network access uses an identity-centric approach so that users, whether based in the central office or in remote locations, have more secure access no matter what device they use. This security helps users stay productive and work effectively.

Most important from a security standpoint, the secure data center makes use of a common integrated technology to help users gain basic access by using a common identity. Management is integrated across physical, virtual, and cloud environments so that businesses can take advantage of all capabilities without the need for significant additional financial investments.

4 Capacity Concepts

Capacity planning relies on workloads, resource pools, and a capacity budget. Workloads in IT environments can generally be divided into two categories:

- Characterized workloads
- Uncharacterized workloads

Effective capacity planning requires a basic understanding of these two distinct types of workloads.

4.1 Characterized Workloads

Characterized workloads are well studied, well understood, and well defined. They are generally associated with major applications that have industry-wide adoption rates. An example of a characterized workload is Microsoft Exchange. Generally, because the requirements of characterized workloads are well known and well understood, very precise sizing tools already exist for them.

4.2 Uncharacterized Workloads

As their name implies, uncharacterized workloads vary widely and are neither well defined nor well understood. There are generally no capacity-planning or sizing tools available for uncharacterized workloads because of their nonstandard nature. Most public and private cloud general-purpose virtualized client and server workloads fall into the uncharacterized category.

4.3 Resource Pools

Within shared infrastructure, resources are managed by grouping like resources together into resource pools. A public or private cloud resource can be computing (CPU or memory), storage (performance or capacity), or bandwidth (external for client connectivity, or internal for connectivity between VMs or between VMs and other resources such as storage). The combination of these resource pools becomes the basis for a resource budget.

4.4 Capacity-Planning Methodology

Capacity planning relies on the definition of a set of resource pools that, when combined, become the resource budget. Characterized or well-defined workloads within the environment are sized using normal processes and tools, and the result is subtracted from the resource budget. What remains can be applied to uncharacterized workloads. The methodology presented here categorizes uncharacterized workloads into averaged buckets and then subtracts the buckets from the resource budget. The first step in this process is to define the budget.

4.5 Defining the Resource Budget

A public or private cloud environment has many types of resources. For the purposes of capacity planning, the network is assumed, and the focus is on the computing and storage resource types.

Computing Resources

Computing resources fall into two broad categories: processing (CPU) and random access memory (RAM). When creating VMs in Hyper-V, one early decision to make is determining how much CPU and RAM to allocate to the VM.

Hyper-V uses the concept of logical processors in defining CPU resources. A logical processor is either a physical core or, when hyperthreading is enabled, a hardware thread on a core. A physical server with two CPU sockets, each with a 6-core hyperthreaded processor, is said to contain 24 logical processors. Windows Server 2012 R2 has a hard limit of 2,048 virtual CPUs per host. This limit, however, does not mean that every host can use that many virtual processors without any penalty. Therefore, NetApp recommends using the number of logical processors to calculate the total number of virtual processors that can be supported, which is determined by an acceptable fan-out ratio. For server workloads, NetApp defines the fan-out ratio as up to 8:1. For virtual desktop infrastructure (VDI) workloads, fan-out ratios of up to 12:1 are supported. For capacity planning, NetApp does not automatically assume the maximum fan-out ratios. Customers should perform their own analyses to determine the appropriate fan-out ratio for their requirements.

Each server in the environment contains a set amount of RAM. In Windows Server 2012 R2, memory can be added dynamically to a Hyper-V VM.
This capability makes it possible to set minimum and maximum amounts of RAM, along with priorities, in addition to fixed-memory provisioning. The method used for

applying RAM determines the amount that should be subtracted from the resource pool for each VM. In either case, the RAM resource pool is the sum of the RAM available on the servers in the deployment.

Storage Resources

Storage resources can be broadly categorized in terms of performance and capacity. When defining storage resources, it is often desirable to have more than one storage pool, each with its own capacity and performance characteristics. The use of multiple pools is especially desirable in planning for different levels of service. The gold level of service would likely accommodate more I/O operations per second (IOPS) than the bronze level, for example. Independent of the number of pools, each storage pool can be defined in terms of IOPS and capacity.

4.6 Defining Buckets for Uncharacterized Workloads

Not all VMs are created equal. In fact, a main reason that VM workloads are uncharacterized is that they vary so widely. The resource consumption of a VDI desktop differs radically from the resource consumption of a departmental Microsoft SharePoint server. One way to bring order to the disparity among VM workloads is to divide these widely varying workloads into a relatively small number of buckets. NetApp uses the terms small, medium, and large to define VM classes. These classes are not static and should be customized to reflect the realities of the organization. Table 1 shows an example of this type of categorization.

Table 1) Example VM classes.

VM Class   Storage IOPS   Disk Capacity   RAM     CPUs
Small      25             40GB            2GB     1
Medium     -              -               4GB     2
Large      -              -               12GB    4

Table 2 shows distinctions in levels of service.

Table 2) Distinctions in levels of service.

Service Level   Backup   Retention                   Mirror   VMs per LUN
Bronze          Weekly   -                           -        Unlimited
Silver          Weekly   2 weeks, 0 days, 0 hours    Weekly   60
Gold            Daily    2 weeks, 7 days, 0 hours    Daily    30
Platinum        Hourly   2 weeks, 7 days, 12 hours   Hourly   15

As illustrated in Table 2, the levels of service focus primarily on integration of backup, recovery, and disaster recovery as a service. However, these services also affect the number of VMs per CSV when block storage is used. For instance, the platinum service level calls for fewer VMs per CSV than the bronze level because the capability to restore a failed VM (after a hardware or other failure) depends to some degree on how many VMs must be restored. These restrictions do not apply to VM storage on SMB 3.0 file shares.

After the VM classes have been established and overlaid with the service levels, the environment can be sized predictably by dividing the resource budget by the per-class resource consumption defined in the buckets established earlier. For more information about sizing private cloud infrastructure, refer to NetApp TR-4014: Microsoft Private Cloud Built on FlexPod Capacity Planning Guide.
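The arithmetic behind this bucket-based sizing can be sketched in a few lines of PowerShell. The example below is illustrative only and is not part of the validated deployment: the host names are placeholders, the 8:1 fan-out ratio is the server-workload value discussed in section 4.5, and the small-VM profile (1 vCPU, 2GB RAM) comes from Table 1.

    # Illustrative resource-budget calculation for uncharacterized workloads.
    # Assumed values: an 8:1 server fan-out ratio and the "small" VM class
    # from Table 1 (1 vCPU, 2GB RAM). Host names are placeholders.
    $hosts      = 'HV-Node01', 'HV-Node02', 'HV-Node03', 'HV-Node04'
    $fanOut     = 8            # vCPUs per logical processor (server workloads)
    $smallVcpu  = 1
    $smallRamGB = 2

    $hostInfo    = Get-VMHost -ComputerName $hosts
    $vcpuBudget  = ($hostInfo | Measure-Object LogicalProcessorCount -Sum).Sum * $fanOut
    $ramBudgetGB = [math]::Floor((($hostInfo | Measure-Object MemoryCapacity -Sum).Sum) / 1GB)

    # The budget is constrained by whichever resource runs out first.
    $byCpu = [math]::Floor($vcpuBudget / $smallVcpu)
    $byRam = [math]::Floor($ramBudgetGB / $smallRamGB)
    "Estimated small-VM capacity: {0}" -f ([math]::Min($byCpu, $byRam))

In practice the storage pools (IOPS and capacity) must be checked the same way, and any characterized workloads are sized with their own tools and subtracted from the budget first, as described in section 4.4.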

4.7 Hardware Requirements

This section describes the hardware used in the solution.

System Overview

FlexPod is a best-practice data center architecture that is built with three components:

- Cisco UCS
- Cisco Nexus switches
- NetApp FAS systems

These components are connected and configured according to the best practices of both Cisco and NetApp to provide the optimal platform for running a variety of enterprise workloads with confidence. FlexPod can scale up for greater performance and capacity (adding computing, network, or storage resources individually as needed), or it can scale out for environments that need multiple consistent deployments (rolling out additional FlexPod stacks). FlexPod delivers not only a baseline configuration but also the flexibility to be sized and optimized to accommodate many different use cases.

Typically, the more scalable and flexible a solution is, the more difficult it becomes to maintain a single unified architecture capable of offering the same features and functions across each implementation. Overcoming this challenge is one of the key benefits of FlexPod. Each of the component families shown in Figure 1 offers platform and resource options to scale the infrastructure up or down while supporting the same features and functions that are required under the configuration and connectivity best practices of FlexPod.

Design Principles

FlexPod addresses four primary design principles: availability, scalability, flexibility, and manageability. The related architecture goals are as follows:

- Application availability. Deliver accessible and ready-to-use services.
- Scalability. Address increasing demand with appropriate resources.
- Flexibility. Provide new services or recover resources without requiring infrastructure modification.
- Manageability. Facilitate efficient infrastructure operations through open standards and APIs.

Note: Performance and security are crucial design criteria that were not directly addressed in this project but are addressed in other collateral and benchmarking and solution-testing efforts. Capabilities and basic security elements were validated.

FlexPod Discrete Uplink Design

Figure 2 shows the FlexPod discrete uplink design with clustered Data ONTAP. As the illustration shows, the design is fully redundant in the computing, network, and storage layers. There is no single point of failure from a device or traffic-path perspective.

Figure 2) FlexPod discrete uplink design with NetApp clustered Data ONTAP.

The FlexPod discrete uplink design is an end-to-end Ethernet transport solution supporting multiple local area network (LAN) and storage area network (SAN) protocols, most notably Fibre Channel over Ethernet (FCoE). The solution provides a unified 10-Gigabit Ethernet (10GbE) enabled fabric defined by dedicated FCoE uplinks and dedicated Ethernet uplinks between the Cisco UCS fabric interconnects and the Cisco Nexus switches, as well as converged connectivity between the NetApp storage devices and the same multipurpose Cisco Nexus switches.

The FlexPod discrete uplink design does not employ a dedicated SAN switching environment and requires no dedicated Fibre Channel (FC) connectivity. The Cisco Nexus 5500 platform switches are configured in N_Port ID Virtualization (NPIV) mode, providing storage services for the FCoE-based traffic traversing the fabric.

As Figure 2 shows, link-aggregation technology plays an important role, providing improved aggregate bandwidth and link resiliency across the solution stack. The NetApp storage controllers, Cisco UCS, and Cisco Nexus 5500 platform all support active port channels using the IEEE 802.3ad Link Aggregation Control Protocol (LACP). Port channel technology is a link-aggregation technique that offers link fault tolerance and traffic distribution (load balancing) for improved aggregate bandwidth across

member ports. In addition, the Cisco Nexus 5000 Series offers virtual PortChannel (vPC) capabilities. vPCs allow links that are physically connected to two different Cisco Nexus 5500 platform devices to appear as a single logical port channel to a third device, essentially offering device fault tolerance. vPCs address aggregate bandwidth and link and device resiliency. The Cisco UCS fabric interconnects and NetApp FAS controllers benefit from the Cisco Nexus vPC abstraction, gaining link and device resiliency as well as full use of a nonblocking Ethernet fabric.

Note: The Spanning Tree Protocol does not actively block redundant physical links in a properly configured vPC-enabled environment, so all vPC member ports should be in a forwarding state.

This dedicated uplink design uses FCoE-capable NetApp FAS controllers. From a storage traffic perspective, both standard LACP and Cisco vPC link-aggregation technologies play important roles in the FlexPod discrete uplink design. Figure 2 shows the use of dedicated FCoE uplinks between the Cisco UCS fabric interconnects and Cisco Nexus 5500 platform unified switches. The Cisco UCS fabric interconnects operate in N-Port Virtualization (NPV) mode, so the servers' FC traffic is either manually or automatically pinned to a specific FCoE uplink; in this case, to either of the two FCoE port channels. Using discrete FCoE port channels with distinct VSANs allows an organization to maintain traditional SAN A and SAN B fabric separation best practices, including separate zone databases. The vPC links between the Cisco Nexus 5500 platform switches and the NetApp storage controllers' unified target adapters 2 (UTA2) are converged, supporting both FCoE and traditional 10GbE traffic and providing a robust last-mile connection between the initiator and the target.

The initial storage configuration of this solution is a two-node high-availability (HA) pair with NetApp clustered Data ONTAP. An HA pair consists of like storage nodes, such as the NetApp FAS3200, FAS6200, or FAS8200 Series. Scalability is achieved by adding storage capacity (disks or shelves) to an existing HA pair or by adding HA pairs to the cluster or storage domain. For SAN environments, the NetApp clustered Data ONTAP offering allows up to three HA pairs, comprising six clustered nodes, to form a single logical entity and a large resource pool of storage that can be easily managed, logically carved, and efficiently consumed. For network-attached storage (NAS) environments, up to 24 nodes can be configured. In both scenarios, the HA interconnect allows each HA node pair to assume control of its partner's storage (disks or shelves) directly. The local physical HA storage failover capability does not extend beyond the HA pair.

Furthermore, a cluster of nodes does not have to include similar hardware. Rather, individual nodes in an HA pair are configured alike, allowing customers to scale as needed as they bring additional HA pairs into the larger cluster. Network failover is independent of the HA interconnect. Network failover of each node in the cluster is supported by both the interconnect and the switching fabric, permitting cluster, data, and management network interfaces to fail over to different nodes in the cluster, which extends failover beyond the HA pair.

Note: Beginning with clustered Data ONTAP 8.2, NetApp storage systems can be configured to operate without cluster interconnect switches when a 2-node storage system is deployed.
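As an optional quick check (not a step prescribed by this document), the Data ONTAP PowerShell Toolkit can be used to confirm the node count and aggregate layout of a cluster built this way. The sketch below assumes the toolkit is installed on a management host; the cluster management address and credentials are placeholders.

    # Connect to the cluster management LIF (address and credentials are placeholders).
    Import-Module DataONTAP
    Connect-NcController -Name 192.168.1.50 -Credential (Get-Credential)

    # List the nodes in the cluster and the aggregates available for provisioning.
    Get-NcNode
    Get-NcAggr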
4.8 Software Requirements

The solution uses the following NetApp software:

- Clustered Data ONTAP
- Data ONTAP SMI-S Agent 5.1
- SnapDrive for Windows
- SnapManager for Hyper-V (SMHV)
- OnCommand Plug-In for Microsoft (OCPM)
- Data ONTAP PowerShell toolkit 3.1

The solution uses the following Microsoft software:

- Windows Server 2012 R2
- SQL Server 2012
- System Center 2012 R2 Virtual Machine Manager
- System Center 2012 R2 Operations Manager
- System Center 2012 R2 Orchestrator
- System Center 2012 R2 Service Manager
- System Center 2012 R2 App Controller
- Windows Azure Integration Pack

The solution uses the following Cisco software:

- Cisco UCS Manager 2.2(1c)
- Cisco Nexus 1000V 4.2(1)SV2(2.1a)
- Cisco UCS Manager Management Pack for Microsoft System Center Operations Manager
- Cisco UCS IP 1.0 release for SCO 1.0
- Cisco UCS Microsoft System Center Virtual Machine Manager

4.9 Networking

Cisco Nexus 5500 Platform Switch

The Cisco Nexus 5000 Series is designed for data center environments, with cut-through technology that enables consistent low-latency Ethernet solutions; front-to-back or back-to-front cooling; and data ports in the rear, bringing switching into close proximity with servers and making cable runs short and simple. The switch series is highly serviceable, with redundant, hot-pluggable power supplies and fan modules. It uses data center class Cisco NX-OS software for high reliability and ease of management.

The Cisco Nexus 5500 platform extends the industry-leading versatility of the Cisco Nexus 5000 Series 10GbE data center class switches and provides innovative advances toward higher density, lower latency, and multilayer services. The Cisco Nexus 5500 platform is well suited for enterprise-class data center server access-layer deployments across a diverse set of physical, virtual, storage-access, and high-performance computing (HPC) data center environments.

The switch used in this FlexPod architecture is the Cisco Nexus 5548UP. It has the following specifications:

- A one-rack-unit (1RU) 1GbE or 10GbE switch
- 32 fixed unified ports on the base chassis and one expansion slot, for a total of 48 ports
- Expansion slot support for any of three module types: unified ports; 1-, 2-, 4-, or 8-Gb/sec native FC; and Ethernet or FCoE
- Throughput of up to 960Gb/sec

For more information, refer to Cisco Nexus 5000 Series Switches.

Cisco Nexus 2232PP 10GbE Fabric Extender

The Cisco Nexus 2232PP 10GbE fabric extender provides thirty-two 10GbE and FCoE Enhanced Small Form-Factor Pluggable (SFP+) server ports and eight 10GbE and FCoE SFP+ uplink ports in a compact 1RU form factor.

The built-in standalone software, Cisco Integrated Management Controller (IMC), manages Cisco UCS C-Series rack servers. When a Cisco UCS C-Series rack server is integrated with Cisco UCS Manager by using the Cisco Nexus 2232PP, the management controller no longer manages the server. Instead, the server is managed by the Cisco UCS Manager software, through the Cisco UCS Manager GUI or the command-line interface (CLI). The Cisco Nexus 2232PP provides data and control traffic support for the integrated Cisco UCS C-Series server.

Cisco Nexus 1000V Switch for Microsoft Hyper-V

The Cisco Nexus 1000V Switch provides a comprehensive and extensible architectural platform for VM and cloud networking. This switch is designed to accelerate server virtualization and multi-tenant cloud deployments in a secure and operationally transparent manner. Integrated into the Microsoft Windows Server 2012 R2 Hyper-V hypervisor and SCVMM, the Cisco Nexus 1000V provides these advantages:

- Advanced VM networking, based on the Cisco NX-OS operating system and IEEE 802.1Q switching technology
- Policy-based VM connectivity
- Mobile VM security and network policy
- A nondisruptive operating model for server virtualization and networking teams
- Virtualized network services, with Cisco vPath providing a single architecture for layer 4 through layer 7 network services, such as load balancing, firewall, and WAN acceleration

These capabilities help make the VM a basic building block of the data center, with full switching capabilities and a variety of layer 4 through layer 7 services in both dedicated and multi-tenant cloud environments. With the introduction of Virtual Extensible LAN (VXLAN) on the Cisco Nexus 1000V, network isolation among VMs can scale beyond the limits of traditional VLANs for cloud-scale networking.

Note: For more information about the Cisco Nexus 1000V Switch and the Cisco Nexus 1010 Virtual Services Appliance, refer to Cisco Nexus 1000V Switch for Microsoft Hyper-V and Cisco Nexus 1010 Virtual Services Appliance.

Cisco Data Center Virtual Machine Fabric Extender

Cisco Data Center VM Fabric Extender (VM-FEX) technology collapses virtual and physical switching infrastructures into a single, easy-to-manage environment that provides the following benefits:

- Simplified operations, eliminating the need for a separate virtual networking infrastructure
- Improved network security, limiting VLAN proliferation
- Optimized network utilization, reducing broadcast domains
- Enhanced application performance, offloading VM switching from the host CPU to application-specific integrated circuits (ASICs) on the parent switch

VM-FEX is supported on Windows Server 2012 R2 Hyper-V hypervisors, and it fully supports workload mobility through Hyper-V quick migration and live migration. VM-FEX eliminates the virtual switch in the hypervisor by providing individual VMs with virtual ports on the physical network switch. VM I/O is sent directly to the upstream physical network switch, which takes full responsibility for VM switching and policy enforcement. This approach leads to consistent treatment for all network traffic, virtual or physical. VM-FEX collapses virtual and physical switching layers into one and reduces the number of network management points by an order of magnitude. Although software-based devices work efficiently, they add unavoidable overhead on the I/O path: they introduce latency, increase overall path length, and consume computing cycles.
With the single-root I/O virtualization (SR-IOV) capability, part of the network adapter hardware is exposed inside the VM, and it provides a direct I/O path to the network hardware. For this reason, a vendor-specific driver must be loaded onto the VM to use the virtual function network adapter.
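To illustrate how SR-IOV is surfaced on the Hyper-V side, independent of the Cisco VM-FEX components, the following is a minimal sketch rather than a prescribed configuration; the switch, adapter, and VM names are placeholders, and the physical adapter must support SR-IOV. The virtual switch is created with IOV enabled, and the VM's network adapter is given an IOV weight so that a virtual function is assigned when the guest driver is present.

    # Create an SR-IOV-capable external virtual switch on a physical NIC (names are placeholders).
    New-VMSwitch -Name 'SRIOV-vSwitch' -NetAdapterName 'PhysicalNIC1' -EnableIov $true

    # Attach a VM network adapter to the switch and request a virtual function.
    Connect-VMNetworkAdapter -VMName 'TestVM01' -SwitchName 'SRIOV-vSwitch'
    Set-VMNetworkAdapter     -VMName 'TestVM01' -IovWeight 50

    # Confirm whether a virtual function was actually assigned to the VM.
    Get-VMNetworkAdapter -VMName 'TestVM01' | Select-Object Name, IovWeight, Status

Note that SR-IOV must be enabled when the virtual switch is created; it cannot be turned on for an existing switch.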

4.10 Storage

This section describes the solution's unified storage architecture.

NetApp FAS and Clustered Data ONTAP

NetApp offers a unified storage architecture. The term unified refers to a family of storage systems that simultaneously supports SAN storage (through FCoE, FC, and iSCSI) and NAS (through CIFS and NFS) across many operating environments, such as VMware, Windows, and UNIX environments. This single architecture provides access to data through industry-standard protocols, including NFS, CIFS, iSCSI, FCP, SCSI, and NDMP. Connectivity options include standard Ethernet (10/100/1000Mb or 10GbE) and FC (1, 2, 4, or 8 Gb/sec). In addition, all systems can be configured with high-performance solid-state drives (SSDs) or serial-attached SCSI (SAS) disks for primary storage applications, low-cost serial ATA (SATA) disks for secondary applications (backup, archiving, and so on), or a mix of the different disk types.

A storage system running Data ONTAP, also known as the storage controller, is the hardware device that sends and receives data to and from the host. This unit detects and gathers information about its own hardware configuration, storage system components, operating status, hardware failures, and other error conditions. The storage controller is redundantly connected to its storage through disk shelves, which are the containers or device carriers that hold disks and associated hardware, such as power supplies, connectivity interfaces, and cabling.

If storage requirements change over time, NetApp storage offers the flexibility to change quickly as needed without expensive and disruptive major equipment upgrades. This flexibility applies to a variety of types of changes:

- Physical changes, such as expansion of a controller to accept more disk shelves and subsequently more hard disk drives (HDDs) without an outage
- Logical or configuration changes, such as expansion of a RAID group to incorporate the new drives without requiring an outage
- Access-protocol changes, such as modification of a virtual representation of a hard drive to a host by changing a logical unit number (LUN) from FC access to iSCSI access, with no data movement required; only a simple dismount of the FC LUN and a mount of the same LUN using iSCSI are needed

In addition, a single copy of data can be shared between Windows and UNIX systems, with each environment allowed to access the data through native protocols and applications. In a system that was originally purchased with all-SATA disks for backup applications, high-performance SAS disks could be added to support primary storage applications, such as Oracle applications, Microsoft Exchange, or Microsoft SQL Server.

NetApp clustered Data ONTAP expands this traditional flexibility by allowing the dynamic relocation of either the logical storage container or the volume through the volume move feature, as well as the reassignment of entire parity groups, or aggregates, through aggregate relocation. These features allow a truly nondisruptive architecture in which any component of the storage system can be upgraded, resized, or redesigned without disruption of the private cloud infrastructure.

NetApp storage solutions provide redundancy and fault tolerance through clustered storage controllers and hot-swappable redundant components, such as cooling fans, power supplies, disk drives, and shelves. This highly available and flexible architecture enables customers to manage all data under one common infrastructure while meeting mission-critical uptime requirements.
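As an illustration of the nondisruptive volume move capability mentioned above, the Data ONTAP PowerShell Toolkit exposes volume move operations. The sketch below is illustrative only; the volume, aggregate, and SVM names are placeholders, and the cmdlet and parameter names should be verified against the toolkit version installed in your environment.

    # Illustrative only: nondisruptively relocate a volume to another aggregate.
    # Volume, aggregate, and SVM names are placeholders; verify cmdlet names
    # against the installed Data ONTAP PowerShell Toolkit release.
    Import-Module DataONTAP
    Connect-NcController -Name cluster-mgmt.example.local -Credential (Get-Credential)

    # Start the volume move, then poll its progress.
    Start-NcVolMove -Name vm_datastore01 -DestinationAggregate aggr02_node02 -VserverContext svm_hyperv
    Get-NcVolMove   -Name vm_datastore01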
The storage efficiency built into Data ONTAP offers substantial space savings, allowing more data to be stored at lower cost. Data protection includes replication services, so that valuable data is backed up and recoverable from an alternative location. The following features provide storage efficiency and data protection:

- Thin-provisioned volumes are created using virtual sizing. They appear to be provisioned at their full capacity but are actually created much smaller and use additional space only when it is needed. Unused storage is shared across all volumes, and the volumes can grow and shrink on demand.
- NetApp Snapshot copies are automatically scheduled point-in-time copies that write only changed blocks, with no performance penalty. Snapshot copies consume little storage space because only changes to the active file system are written. Individual files and directories can easily be recovered from any Snapshot copy, and the entire volume can be restored to any Snapshot state in seconds.
- NetApp FlexClone volumes are instant virtual copies of datasets that use almost no space. The clones are writable, but only changes to the original are stored, so they provide rapid, space-efficient creation of additional data copies that are well suited for test and development environments.
- Deduplication removes redundant data blocks in primary and secondary storage, with flexible policies to determine when the deduplication process is run.
- Compression compresses data blocks. Compression can be run whether or not deduplication is enabled and can provide additional space savings, whether run alone or together with deduplication.
- NetApp SnapMirror volumes can be asynchronously replicated either within the cluster or to another cluster.

All of these capabilities are exposed through the logical management construct of a storage virtual machine (SVM), formerly known as a Vserver.

Storage Virtual Machines

The secure logical storage partition through which data is accessed in clustered Data ONTAP is known as an SVM. A cluster serves data through at least one and possibly multiple SVMs. An SVM is a logical abstraction that represents a set of physical resources of the cluster. Data volumes and logical interfaces (LIFs) are created and assigned to an SVM and can reside on any node in the cluster to which the SVM has been given access. An SVM can own resources on multiple nodes concurrently, and those resources can be moved nondisruptively from one node to another. For example, a flexible volume can be nondisruptively moved to a new node and aggregate, or a data LIF can be transparently reassigned to a different physical network port. The SVM abstracts the cluster hardware and is not tied to specific physical hardware.

An SVM is capable of supporting multiple data protocols concurrently. Volumes within the SVM can be joined together to form a single NAS namespace, which makes all of an SVM's data available to NFS and CIFS clients through a single share or mount point. For example, a 24-node cluster licensed for UNIX and Microsoft Windows File Services that has a single SVM configured with thousands of volumes can be accessed from a single network interface on one of the nodes. SVMs also support block-based protocols, and LUNs can be created and exported by using iSCSI, FC, or FCoE. Any or all of these data protocols can be configured for use within a given SVM.

An SVM is a secure entity; therefore, it is aware of only the resources that have been assigned to it and has no knowledge of other SVMs and their respective resources. Each SVM operates as a separate and distinct entity with its own security domain. Tenants can manage the resources allocated to them through a delegated SVM administration account.
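To make the storage efficiency features above concrete, the following Data ONTAP PowerShell Toolkit sketch creates a thin-provisioned volume inside an SVM and enables deduplication on it. This is a hedged illustration rather than a prescribed procedure; the controller address, SVM, aggregate, and volume names are placeholders.

    # Illustrative: create a thin-provisioned NAS volume in an SVM and enable deduplication.
    # Controller address, SVM, aggregate, and volume names are placeholders.
    Import-Module DataONTAP
    Connect-NcController -Name cluster-mgmt.example.local -Credential (Get-Credential)

    New-NcVol -Name hyperv_vol01 -Aggregate aggr01_node01 -JunctionPath /hyperv_vol01 `
              -Size 500g -SpaceReserve none -VserverContext svm_hyperv

    # Turn on deduplication (SIS) for the new volume and start an initial scan.
    Enable-NcSis -Path /vol/hyperv_vol01 -VserverContext svm_hyperv
    Start-NcSis  -Path /vol/hyperv_vol01 -VserverContext svm_hyperv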
Each SVM can connect to unique authentication zones, such as Active Directory (AD), Lightweight Directory Access Protocol (LDAP), or Network Information Service (NIS). An SVM is effectively isolated from other SVMs that share the same physical hardware. From a performance perspective, maximum IOPS and throughput levels can be set for each SVM by using QoS policy groups, which allow the cluster administrator to quantify the performance capabilities allocated to each SVM.

Clustered Data ONTAP is highly scalable, and additional storage controllers and disks can easily be added to existing clusters to scale capacity and performance to meet increasing demands. Because the cluster contains virtual storage servers, SVMs are also highly scalable. As new nodes or aggregates are

added to the cluster, the SVM can be nondisruptively configured to use them. New disk, cache, and network resources can be made available to the SVM to create new data volumes or to migrate existing workloads to these new resources to balance performance.

This scalability also makes the SVM highly resilient. SVMs are no longer tied to the lifecycle of a given storage controller. As new replacement hardware is introduced, SVM resources can be moved nondisruptively from the old controllers to the new ones, and the old controllers can be retired from service while the SVM is still online and available to serve data.

SVMs have three main components:

- Logical interfaces. All SVM networking is performed through LIFs created within the SVM. As logical constructs, LIFs are abstracted from the physical networking ports on which they reside.
- Flexible volumes. A flexible volume is the basic unit of storage for an SVM. An SVM has a root volume and can have one or more data volumes. Data volumes can be created in any aggregate that has been delegated by the cluster administrator for use by the SVM. Depending on the data protocols used by the SVM, volumes can contain LUNs for use with block protocols, files for use with NAS protocols, or both concurrently. For access through NAS protocols, the volume must be added to the SVM namespace through the creation of a client-visible directory called a junction.
- Namespaces. Each SVM has a distinct namespace through which all of the NAS data shared from that SVM can be accessed. This namespace is essentially a map to all of the junctioned volumes for the SVM, regardless of the node or the aggregate on which they physically reside. Volumes can be joined at the root of the namespace or beneath other volumes that are part of the namespace hierarchy. For more information about namespaces, refer to NetApp TR-4129: Namespaces in Clustered Data ONTAP.

For more information, refer to the NetApp Data ONTAP 8 Operating System documentation.

Storage Options for Windows Server 2012 R2

This section describes the storage options available for Windows Server 2012 R2, including the use of CSVs, SMB 3.0 continuously available file shares, and storage automation.

Cluster Shared Volumes

Windows Server 2008 R2 included the first version of Windows failover clustering to offer a distributed file access solution, allowing a single New Technology File System (NTFS) volume to be accessed simultaneously by multiple nodes in a cluster. Windows Server 2012 expanded on this base capability, introducing many new capabilities. Windows Server 2012 R2 has further expanded those base capabilities by adding the following features:

- Optimized CSV placement policies. Previous versions of Windows included a coordinator node that owned the physical disk resource and communicated with all other nodes for all I/O operations. In Windows Server 2012 R2, CSV ownership is automatically rebalanced whenever anything occurs that might affect CSV placement, such as a CSV failing over, a node joining the cluster, or a node being restarted. This mechanism keeps the cluster well balanced and maximizes the available I/O for all cluster resources.
- Increased CSV resiliency. Windows Server 2012 R2 adds a dedicated service to monitor the health of the CSV. With this service, if the node becomes unhealthy for any reason, the cluster automatically relocates the coordination services to a healthy node.
In addition, the CSV services have been subdivided, with one service dedicated to regular file traffic, such as clients that access an NTFS share, and another service dedicated to handling internode traffic over the CSV network. These changes increase CSV resiliency and improve the scalability of SMB traffic between nodes.
- CSV cache. Windows Server 2012 added the capability to assign up to 20% of the total physical RAM to a read cache; however, the read cache was disabled by default. In Windows Server 2012 R2, the read cache is enabled by default, and it can now be configured to use up to 80% of the total RAM allocation.

- Improved CSV diagnostics. Windows Server 2012 R2 now allows the state of each node to be viewed, enabling the administrator to see whether a node is in redirected I/O mode on a per-node basis.

These enhancements, as well as others, help create an enterprise-ready hosting platform for organizations seeking to deploy traditional block storage. For a complete list of the enhancements made in Windows Server 2012 R2, refer to What's New in Failover Clustering in Windows Server 2012 R2.

CSV Characteristics

Table 3 shows the characteristics that are defined by NTFS and inherited by CSV.

Table 3) CSV parameters.

CSV Parameter                     Characteristic
Maximum volume size               256TB
Maximum number of partitions      128
Directory structure               Unrestricted
Maximum number of files per CSV   More than 4 billion
Maximum number of VMs per CSV     Unlimited

CSV Sizing

Because all cluster nodes can access all CSVs simultaneously, IT managers can now use standard LUN allocation methodologies, based on the performance and capacity requirements of the expected workloads. In general, isolating the VM OS I/O from the application data I/O is a good start. In addition, it is helpful to implement application-specific considerations, such as segregating the database I/O from the logging I/O and creating SAN volumes and storage pools that factor in the I/O profile (that is, random read-write operations rather than sequential write operations).

The architecture of CSV differs from that of traditional clustered file systems, which frees it from common scalability limitations. Therefore, no special guidance is needed for scaling the number of Hyper-V nodes or VMs on a CSV volume. The important point to remember is that all VM virtual disks running on a particular CSV contend for storage I/O. For this reason, it is extremely important to give the CSV network appropriate priority. For more information, refer to Designating a Preferred Network for Cluster Shared Volumes Communication in the Microsoft TechNet Library.

Performance

Storage performance is a complex mix of drive, interface, controller, cache, protocol, SAN, host bus adapter (HBA), driver, and OS considerations. The overall performance of the storage architecture typically is measured in terms of maximum throughput (MB/sec) and/or maximum IOPS for a given latency or response time (in milliseconds [ms]). Although each of these performance measurements is important, IOPS for a given latency is the most relevant to server virtualization.

Using NetApp VST

NetApp VST uses NetApp Flash Cache technology. This deduplication-aware technology uses the flash-memory cache to intelligently store large numbers of recently accessed blocks. The NetApp VST model can significantly increase the performance of an array in servicing the I/O load (or challenge) of a boot storm or a steady-state event.

NetApp FAS controllers use two techniques to optimize both write and read performance. Write performance is optimized by the NetApp WAFL (Write Anywhere File Layout) file system, which delivers writes to the RAID groups as a sequential stream of full-stripe write operations, which is the most efficient

method for destaging the write cache. This technique provides optimal disk use by reducing write latency. NetApp FAS controllers also use NetApp Flash Cache to optimize read operations.

Multipathing

Multipathing should be used in all cases. NetApp provides a device-specific module (DSM) that runs on top of the Windows Server 2012 R2 multipath I/O (MPIO) framework and supports the NetApp storage platform. The NetApp DSM offers advanced active-active policies while providing precise failover and path recovery, as well as load balancing, for NetApp LUNs.

Fibre Channel SAN

FC is an option because it is a supported storage connection protocol. FC is a robust, mature storage protocol that supports multipathing through Microsoft MPIO and the NetApp DSM.

iSCSI SAN

As with an FC-connected SAN, which is naturally on its own isolated network, the iSCSI SAN must be on an isolated network, for both security and performance. Any networking standard practice for achieving this goal is acceptable, including a physically separate, dedicated storage network and a physically shared network with the iSCSI SAN running on a private VLAN. The switch hardware must provide class-of-service (CoS) or QoS guarantees for the private VLAN. In addition, iSCSI security and frame-size settings can be applied through two methods:

- Encryption and authentication. If multiple clusters or systems are used on the same SAN, proper segregation or device isolation must be provided. In other words, the storage used by cluster A must be visible only to cluster A, not to any other cluster and not to a node from a different cluster. NetApp recommends using a session-authentication protocol, such as Challenge Handshake Authentication Protocol (CHAP), to provide a degree of security as well as segregation. Mutual CHAP or IP Security (IPsec) can also be used.
- Jumbo frames. If they are supported at all points in the entire path of the iSCSI network, jumbo frames can increase throughput by up to 20%. Jumbo frames are supported in Hyper-V at the host and guest levels. If jumbo frames are not supported at any point in the network and this feature is enabled, the network device fragments the data packets and causes a decrease in performance.

SMB 3.0 Continuously Available File Shares

A major new component of clustered Data ONTAP 8.2 is support for the SMB 3.0 NAS protocol, which enables NetApp customers to use the SMB 3.0 features introduced with Windows Server 2012. With these new features, clustered Data ONTAP can be used to host a VM's virtual disks and configuration settings on a CIFS file share. The SMB 3.0 features implemented in clustered Data ONTAP 8.2 to support continuously available file shares and Hyper-V storage include the following:

- Persistent handles (continuously available file shares)
- Witness protocol
- Cluster client failover (CCF)
- Scale-out awareness
- Offloaded data transfer (ODX)
- Remote VSS

Persistent Handles (Continuously Available File Shares)

To enable continuous availability on a file share, the SMB client opens a file on behalf of the application, such as a VM running on a Hyper-V host, and requests persistent handles for the virtual hard disk format

(VHDX) file. When the SMB server receives a request to open a file with a persistent handle, the SMB server retains sufficient information about the file handle, along with a unique resume key supplied by the SMB client. Persistent handle information is shared among the nodes in a cluster. In the case of a planned move of file share resources from one node to another, or in the case of node failure, the SMB client reconnects to an active and available node and reopens the file by using persistent handles. The application or the VM running on the SMB client computer does not experience any failures or errors during this operation. From a VM perspective, it appears that the I/O operations to the virtual disk were delayed for a short time, similar to a brief loss of connectivity to the disk; however, no disruption is noticed.

Witness Protocol

When an SMB server node fails, the SMB client usually relies on the Transmission Control Protocol (TCP) timeout to detect a failure of the file share resource, such as an open file. SMB 3.0 allows variable values for TCP timeouts, and because the virtual disk is a critical resource, the VM running on a Hyper-V server needs fast detection of network resource failover. The Witness protocol significantly improves the SMB client reconnect time.

During connection to a shared resource (TREE_CONNECT), the SMB server provides information about features enabled on the share: for instance, whether the resource is clustered, scaled out, and continuously available. The SMB client then requests this same data from other nodes. Upon receiving the information, the SMB client registers itself with the other node. In the event of a cluster node failure, the SMB client is already connected to another node that can detect the failure and then notify the SMB client. This feature saves the SMB client from having to wait until the TCP timeout ends; instead, the client immediately initiates reconnection to the running node, reducing the amount of time that it is disconnected from the resource. For VMs with virtual disks stored on such SMB shares, disk disconnection time is reduced to the point that the VM does not detect such disconnects as hardware failures. This feature is enabled on clustered Data ONTAP by default only if all best practices are followed and a LIF is present on each node in the cluster in every SVM. Note also that the Witness protocol is used only with continuously available shares.

Cluster Client Failover

To increase redundancy in a VM environment, Hyper-V servers should be placed in a Microsoft failover cluster. When the Hyper-V server node running a VM fails, the VM is live migrated or moved to another node. Before CCF with SMB 3.0, a VM that moved to another cluster node was considered a new application instance. Connection of new application instances to files already open on file shares had to wait until the TCP timeout ended and the file handle was closed. CCF enables the VM to open a virtual disk file on a file share and provide a unique application identifier. When a Hyper-V server cluster node fails, the VM starts on another Hyper-V server node and supplies the same application identifier, letting the SMB server close existing file handles. The SMB client can then reconnect to the previously open file.

Scale-Out Awareness

Clustered Data ONTAP is a scale-out architecture by design and provides the capability to serve data from multiple nodes.
It brings additional data redundancy to the network and spreads the load of multiple SMB clients among multiple nodes in a cluster. Scale-out awareness allows SMB clients to connect to all nodes in the cluster and access the same data.

Offloaded Data Transfer

Although the ODX copy offload feature is not required to run a Hyper-V workload over SMB 3.0, this feature can drastically improve VM deployment time for typical deployments in which the customer needs
to provision multiple VMs. The main advantages of this feature are that it is transparent to client machines and no data is sent over the network during file copy operations. Clustered Data ONTAP provides different mechanisms on the back end to copy data blocks. In the case of a single volume that serves a file share, NetApp uses its single-instance storage (SIS) clone feature, which eliminates the data copy process by creating only pointers. This feature accelerates back-end operations and significantly improves copy performance when ODX is used on the NetApp platform, compared to ODX implementations on other storage arrays. When data is copied outside the volume, the process remains offloaded, and no traffic travels through the client or the network.

Remote Volume Shadow Copy Service

VSS is a framework that coordinates application I/O and physical storage on the same server and allows creation of application-consistent Snapshot copies of the storage. Microsoft Windows Server 2012 R2 extends the functions of VSS to multiple servers. For instance, an application running on one server has storage on another server's file share. Remote VSS coordinates I/O activities during a backup process between both servers and provides application-consistent backup Snapshot copies of the storage for applications running remotely on the storage server. Clustered Data ONTAP 8.2 extends the functions of remote VSS by plugging into the VSS framework; a VSS service runs on a NetApp controller, and a VSS provider runs on a Windows Server 2012 R2 device. From the perspective of VSS, the NetApp array performs in the same way as a Windows file server.

Storage Automation

One objective of the Microsoft Private Cloud solution is to enable rapid provisioning and deprovisioning of VMs. Doing so on a large scale requires tight integration with the storage architecture as well as robust automation. Provisioning a new VM on a preexisting LUN is a simple operation. However, provisioning a new CSV LUN and adding it to a host cluster are relatively complicated tasks that should be automated. Historically, many storage vendors have designed and implemented their own storage management systems, APIs, and command-line utilities. This has made it a challenge to use a common set of tools and scripts across heterogeneous storage solutions. To address this challenge, NetApp supports the Microsoft management tools and APIs shown in Figure 3. Specifically, NetApp provides the Data ONTAP PowerShell toolkit, which allows the management of NetApp controllers from Microsoft Windows PowerShell in addition to the standards-based management offered in SMI-S.
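As a minimal illustration of this kind of automation, the following Windows PowerShell sketch uses Data ONTAP PowerShell Toolkit cmdlets to provision a volume and a LUN intended for use as a new CSV. The controller address, aggregate, volume, LUN, igroup, and initiator names are hypothetical placeholders, and cmdlet parameters should be verified against the toolkit documentation for the version in use.

Import-Module DataONTAP

# Connect to the cluster management LIF (address and credentials are examples only)
Connect-NcController 192.168.100.50 -Credential (Get-Credential)

# Create a volume and a LUN sized for a new CSV (names and sizes are illustrative)
New-NcVol -Name csv_vol01 -Aggregate aggr1_node01 -Size 600g -JunctionPath /csv_vol01
New-NcLun -Path /vol/csv_vol01/csv_lun01 -Size 500g -OsType hyper_v -Unreserved

# Map the LUN to an igroup containing the Hyper-V cluster nodes' iSCSI initiators
New-NcIgroup -Name hvcluster01 -Protocol iscsi -Type windows
Add-NcIgroupInitiator -Name hvcluster01 -Initiator iqn.1991-05.com.microsoft:hv-node01.contoso.local
Add-NcLunMap -Path /vol/csv_vol01/csv_lun01 -InitiatorGroup hvcluster01

From here, the new LUN would be brought online on the Hyper-V nodes and added as a CSV with the failover clustering cmdlets, or the same end-to-end workflow can be driven through SMI-S and SCVMM.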

23 Figure 3) NetApp integration with Microsoft Virtual Infrastructure The solution s virtual infrastructure includes Microsoft Windows Server 2012 R2 Hyper-V and Microsoft System Center 2012 R2. Microsoft Windows Server 2012 R2 Hyper-V Microsoft Windows Server 2012 R2 Hyper-V provides significant scalability and expands support for host processors and memory. It includes the following new features: Support for up to 64 processors and 1TB of memory for Hyper-V VMs, including in many cases supporting 4 to 16 times the density of processors, memory, cluster nodes, and running VMs Support for innovative server features, including the ability to project a virtual nonuniform memory access (NUMA) topology onto a VM to provide optimal performance and workload scalability in large VM configurations Improvements to dynamic memory, including minimum memory and Hyper-V smart paging Note: Minimum memory allows Hyper-V to reclaim the unused memory from VMs to allow higher VM consolidation numbers. Smart paging is used to bridge the memory gap between minimum and startup memory by allowing VMs to start reliably when the minimum memory setting has indirectly led to an insufficient amount of available physical memory during restart. Runtime configuration of memory settings, including increasing the maximum memory and decreasing the minimum memory of running VMs The following updated features help the virtualization infrastructure support the configuration of large high-performance VMs to maintain demanding workloads: 23 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.

24 VHDX offers greater capacity (up to 64TB of storage), helps provide additional protection from corruption from power failures, and prevents performance degradation on large-sector physical disks by optimizing structure alignment. Virtual Fibre Channel (VFC) support offers VMs unmediated access to SAN LUNs. VFC enables scenarios such as running the Windows Failover Cluster Management feature inside the guest OS of a VM connected to shared FC storage. VFC supports MPIO, NPIV for one-to-many mappings, and up to four VFC adapters per VM. Microsoft Windows Server 2012 R2 includes the following networking enhancements: Support for SR-IOV Third-party extensions to the Hyper-V extensible switch QoS minimum bandwidth Network virtualization IEEE Data Center Bridging (DCB) The virtualization layer is one of the primary enablers in environments with greater IT maturity. The decoupling of hardware, OSs, data, applications, and user state opens a wide range of options for easier management and distribution of workloads across the physical infrastructure. The capability of the virtualization layer to migrate running VMs from one server to another without downtime, along with many other features provided by hypervisor-based virtualization technologies, enables a comprehensive set of solution capabilities. These capabilities can be used by the automation, management, and orchestration layers to maintain desired states and to proactively address decaying hardware or other issues that would otherwise cause faults or service disruptions. Like the hardware layer, the automation, management, and orchestration layers must be able to manage the virtualization layer. Virtualization provides an abstraction of software from hardware that moves most management and automation operations to software instead of requiring users to perform manual operations on physical hardware. With this release, Windows Server 2012 Hyper-V introduces a number of improvements in both virtualization features and scalability. Table 4 compares scalability improvements and feature enhancements. Table 4) Feature comparison of Microsoft Hyper-V capabilities. Feature Windows Server 2008 Windows Server 2008 R2 Windows Server 2012 R2 Scale Hardware logical processor support 16 logical processors 64 logical processors 320 logical processors Physical memory support 1TB 1TB 4TB Cluster scale 16 nodes and up to 1,000 VMs 16 nodes and up to 1,000 VMs 64 nodes and up to 4,000 VMs VM processor support Up to 4 virtual processors Up to 4 virtual processors Up to 64 virtual processors VM memory Up to 64GB Up to 64GB Up to 1TB Live migration Yes, one at a time Yes, one at a time Yes, with no limits up to as many as the hardware allows 24 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.

25 Feature Windows Server 2008 Windows Server 2008 R2 Windows Server 2012 R2 Servers in a cluster Virtual processor to logical processor ratio 8:1 8:1 for server No limits, up to as many as the hardware allows 12:1 for client (VDI) Storage Live storage migration No; quick storage migration through Microsoft SCVMM No; quick storage migration through Microsoft SCVMM Yes, with no limits, up to as many as the hardware allows VMs on file storage No No Yes, SMB 3 Guest FC No No Yes Virtual disk format Virtual hard disk (VHD) up to 2TB VHD up to 2TB VHD up to 2TB VHDX up to 64TB VM guest clustering Yes, by using iscsi Yes, by using iscsi Yes, by using iscsi, FC, or SMB Native 4,000-disk support No No Yes Live VHD merge No, offline No, offline Yes Live new parent No No Yes Secure ODX No No Yes Networking Network interface card (NIC) teaming Yes, by way of partners Yes, by way of partners Windows NIC teaming in box VLAN tagging Yes Yes Yes MAC address spoofing protection Address Resolution Protocol (ARP) spoofing protection No Yes, with R2 SP1 Yes No Yes, with R2 SP1 Yes SR-IOV networking No No Yes Network QoS No No Yes Network metering No No Yes Network monitor modes No No Yes IPsec task offload No No Yes VM trunk mode No No Yes Manageability Hyper-V PowerShell No No Yes 25 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.

26 Feature Windows Server 2008 Windows Server 2008 R2 Windows Server 2012 R2 Network PowerShell No No Yes Storage PowerShell No No Yes SCONFIG No Yes Yes Enable and disable shell No (server core at OS setup) No (server core at OS setup) Yes; additional minimal shell environment (MINSHELL) Microsoft Windows VMConnect support for Microsoft RemoteFX No Yes The Hyper-V host cluster requires different types of network access, as described in Table 5. Table 5) Host cluster networks. Network Access Type Purpose of Network Access Type Network Traffic Requirements Recommended Network Access VM access Workloads running on VMs usually require external network connectivity to service client requests. Varies Public access that can be teamed for link aggregation or to fail over the cluster Clusters and CSVs This is the preferred network used by the cluster for communications to maintain cluster health. This network is also used by CSV to send data between owner and nonowner nodes. If storage access is interrupted, this network is used to access CSV or to maintain and back up CSV. The cluster should have access to more than one network for communication to make it highly available. Usually low bandwidth and low latency; occasionally, high bandwidth Private access SMB 3.0 Access storage through SMB 3.0. High bandwidth and low latency Live migration Transfer VM memory and state. High bandwidth and low latency during migrations Storage Access storage through iscsi. High bandwidth and low latency Usually, dedicated and private access Private access Usually, dedicated and private access 26 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.

Network Access Type Purpose of Network Access Type Network Traffic Requirements Recommended Network Access
Management Manage the Hyper-V management OS; this network is used by Hyper-V Manager. Low bandwidth Public access that can be teamed to fail over the cluster

Highly available host servers are one critical component of a dynamic virtual infrastructure. A Hyper-V host failover cluster is a group of independent servers that work together to increase the availability of applications and services. The clustered servers (nodes) are connected physically. If one cluster node fails, another node begins to provide service. In the case of a planned live migration, users experience no perceptible service interruption.

NetApp Microsoft Hyper-V Storage Options

Storage for virtual disks (VHDX files) can be provided from the NetApp storage system in the following ways:

Block-level LUN over FCP or iSCSI attached directly to Hyper-V servers, presented as volumes on a standalone Hyper-V server or as CSVs in a Microsoft failover cluster
File-level storage on a NetApp SMB 3.0 continuously available file share

NetApp Integration with Microsoft Windows Server 2012 R2

NetApp provides tight integration between the storage (block or file level) resources and the Windows Server 2012 R2 host through the following technologies and products:

The ODX feature works transparently on a Windows Server 2012 host and provides much faster provisioning of VMs from the master image: migrating, moving, importing, and exporting either the whole VM or only the VM storage. On NetApp arrays, ODX works across protocols and between file and block storage, which allows a mix of storage options for Windows 2012 Hyper-V VM storage.
The NetApp SMI-S agent provides a unified storage management interface that can be used to discover, monitor, and manage NetApp storage systems. It provides transparent integration of NetApp storage into Windows Server 2012 R2 and Microsoft SCVMM.
SnapManager 2.02 for Hyper-V with remote VSS capabilities protects VM resources running on block-level attached LUNs and CSVs and on remote SMB 3.0 file shares. It uses NetApp Snapshot technology to offload the backup process from the Hyper-V host to the NetApp storage system. It can use NetApp SnapMirror technologies for off-site backup operations at remote locations. This tool has an easy-to-use management interface, along with a set of Windows PowerShell cmdlets for robust automation.
Thin provisioning is a part of the storage efficiency technologies that, along with deduplication, reduce both the allocated disk space and the overall cost of storage.

Microsoft System Center 2012 R2

Microsoft System Center 2012 R2 helps organizations deliver flexible and cost-effective private cloud infrastructure in a self-service model, using existing data center hardware and software. It provides a common management experience across data centers and private-hosted or partner-hosted clouds. To deliver the best experience for modern applications, Microsoft System Center 2012 R2 offers deep insight into applications, down to the level of client script performance. System Center 2012 R2 delivers the tools and capabilities that organizations need to scale their capacity and, where necessary, to use cloud resources as well.
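To make the two virtual disk placement options described above concrete, the following sketch uses the in-box Hyper-V PowerShell module to create one VM whose VHDX lives on a CSV backed by a NetApp LUN and one whose VHDX lives on a NetApp continuously available SMB 3.0 share. All paths, names, and sizes are hypothetical examples rather than validated values from this architecture.

# VM on a Cluster Shared Volume backed by a NetApp block-level LUN (FC or iSCSI)
New-VM -Name sql-vm01 -MemoryStartupBytes 8GB -Generation 2 `
    -Path C:\ClusterStorage\Volume1\sql-vm01 `
    -NewVHDPath C:\ClusterStorage\Volume1\sql-vm01\sql-vm01.vhdx -NewVHDSizeBytes 100GB

# VM on a clustered Data ONTAP SMB 3.0 continuously available share
New-VM -Name web-vm01 -MemoryStartupBytes 4GB -Generation 2 `
    -Path \\svm01-cifs\hyperv_share\web-vm01 `
    -NewVHDPath \\svm01-cifs\hyperv_share\web-vm01\web-vm01.vhdx -NewVHDSizeBytes 60GB

In the clustered case, the VM can then be made highly available with the failover clustering Add-ClusterVirtualMachineRole cmdlet.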

Microsoft System Center 2012 R2 offers unique application management capabilities that can deliver agile, predictable application services. Using the App Controller, Operations Manager, and SCVMM components of System Center 2012 R2, applications can be delivered as a service, with the service as a deployed instance of a cloud-style application, along with its associated configuration and virtual infrastructure. Microsoft System Center 2012 R2 includes the application management capabilities described in the following sections.

Standardized Application Provisioning

Microsoft SCVMM offers service templates to help define standardized application blueprints. A service template typically includes specifications for the hardware, the OS, and the application packages that compose the service. SCVMM supports multiple package types for Microsoft .NET applications, including Microsoft Web Deploy (msdeploy) for the web tier (Microsoft IIS), Microsoft Server Application Virtualization (Server App-V) for the application tier, and SQL Server data-tier application (DAC) packages for the data tier. It also specifies application configuration requirements, such as topology, elasticity, scale-out rules, health thresholds, and upgrade rules. Server App-V, a unique technology in SCVMM, optimizes applications for private cloud deployments by abstracting the application from the underlying OS and virtual infrastructure. By enabling image-based management, Server App-V simplifies application upgrades and maintenance.

Comprehensive Hybrid Application Management

Microsoft App Controller offers application owners a single view to manage application services and VMs, whether they are on premises, at the location of service providers, or using Microsoft Windows Azure. App Controller provides the capability to deploy and migrate VMs to the Microsoft Windows Azure VM service. You can migrate core applications such as Microsoft SQL Server, Active Directory (AD), and SharePoint Server from on-premises environments to Windows Azure with just a few mouse clicks.

360-Degree Application Monitoring, Diagnosis, and Dev Ops

Microsoft Operations Manager offers deep application and transaction monitoring insight for .NET applications and J2EE application servers, and it helps efficiently isolate the root cause of application performance problems to the offending line of code. Outside-in monitoring with Microsoft Global Service Monitor (GSM) and Operations Manager offers real-time visibility into application performance as experienced by end users. Operations Manager and GSM integrate with Microsoft Visual Studio to facilitate development and operations (Dev Ops) collaboration, helping remediate application problems more quickly. Operations Manager offers easy-to-use reporting and custom dashboarding. The Service Manager and Orchestrator components of Microsoft System Center 2012 automate core organizational process workflows, such as incident management, problem management, change management, and release management. You can also integrate and extend existing toolsets and build flexible workflows (or runbooks) to automate processes across IT assets and organizations.

Standardized IT Services

Microsoft System Center 2012 R2 provides the following service delivery and automation capabilities:

Definition of standardized service offerings by using dependencies in a centralized configuration management database (CMDB)

29 Publication of standardized service offerings through the service catalog offered by the Microsoft Service Manager Provisioning and allocation of pooled infrastructure resources to internal business unit IT (BUIT) teams by using either the Cloud Services Process Pack that is natively integrated into Service Manager or the Windows Azure Management Pack and its integration into Azure Service Management Automation Chargeback (or showback) of storage, network, and computing costs to BUIT teams and specifying pricing for BUIT teams at different levels of granularity Maintaining compliance with pertinent industry regulations and business needs through the IT Governance, Risk Management, and Compliance (GRC) Process Pack Consumer-Identified, Accessed, and Requested IT Services Microsoft System Center 2012 R2 offers consumers the following functions: Capability to enable self-service infrastructure with the self-service portal offered by Service Manager Capability to set access and resource quota levels on a per-user or per-buit basis Capture and tracking of required service request information Automated Processes and Systems for Fulfilling Service Requests The following automation functions provide complete end-to-end service request processing: Integration and extension of automation across System Center and third-party management toolsets and extension of automation to Windows Azure VM workflows Orchestration of automated workflows across multiple processes, departments, and systems Automation of service request provisioning for end-to-end request fulfillment Microsoft System Center 2012 R2 provides a common management toolset for configuring, provisioning, monitoring, and operating an IT infrastructure. A typical infrastructure has physical and virtual resources running heterogeneous OSs. The integrated physical, virtual, private, and public cloud management capabilities in System Center 2012 can help manage IT efficiently and optimize ROI for those resources. Physical and Virtual Infrastructure Management Capabilities Microsoft System Center 2012 R2 supports the following infrastructure management capabilities: Deployment and configuration of virtual servers and Hyper-V with Microsoft SCVMM Management of VMware vsphere and Citrix XenServer through a single interface Automatic deployment of Hyper-V to bare-metal servers and creation of Hyper-V clusters Provisioning of everything from OSs to physical servers, patches, and endpoint protection, with Microsoft Configuration Manager 4.13 Management This section describes the management software used in the solution: NetApp OnCommand Plug-In for Microsoft NetApp OnCommand System Manager NetApp SnapDrive for Windows Microsoft System Center 2012 R2: App Controller Operations Manager Orchestrator 29 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.

Service Manager
SCVMM
Microsoft SQL Server 2012 R2

NetApp SnapManager for Hyper-V is described in the next section.

NetApp OnCommand Plug-In for Microsoft

NetApp OCPM enables and simplifies the management of servers and storage systems in Microsoft System Center 2012 R2. OCPM offers native integration with System Center Operations Manager and SCVMM. These integrations provide the intelligence to make System Center fully storage aware, thus simplifying the day-to-day administration of NetApp storage and amplifying the effectiveness of System Center monitoring and alerting. Figure 4 shows a clustered Data ONTAP controller as a monitored object in System Center 2012 R2 Operations Manager.

Figure 4) NetApp clustered Data ONTAP controller as a monitored object on the Microsoft System Center 2012 R2 Operations Manager console.

NetApp OnCommand System Manager

NetApp OnCommand System Manager enables administrators to manage individual storage systems or clusters of NetApp storage systems through an easy-to-use browser-based interface. System Manager comes with wizards and workflows, simplifying common storage tasks such as creation of volumes, LUNs, quota trees (qtrees), shares, and exports, which saves time and prevents errors. System Manager works across all NetApp storage resources, including NetApp FAS2000, FAS3000, FAS6000, and FAS8000 series and FlexArray systems. Figure 5 shows a sample window in NetApp OnCommand System Manager.

31 Figure 5) NetApp OnCommand System Manager. NetApp SnapDrive for Windows NetApp SnapDrive for Windows is an enterprise-class storage and data management application that simplifies storage management and increases the availability of application data. Key functionalities include storage provisioning, NetApp Snapshot copies with file system consistent data, rapid application recovery, and the capability to manage data easily. NetApp SnapDrive for Windows complements the native file system and volume manager and integrates transparently with the clustering technology supported by the host OS. Microsoft System Center 2012 R2 App Controller Microsoft App Controller is part of the Microsoft System Center suite. It offers a common self-service experience that can help administrators easily configure, deploy, and manage VMs and services across private clouds. App Controller provides the user interface for connecting and managing workloads after provisioning. Microsoft System Center 2012 R2 Operations Manager Microsoft Operations Manager is part of the Microsoft System Center suite. It provides infrastructure monitoring that is flexible and cost effective. It helps deliver the predictable performance and availability of vital applications, and it offers comprehensive monitoring for the data center and the private cloud. Microsoft System Center 2012 R2 Orchestrator Microsoft Orchestrator is part of the Microsoft System Center suite. It provides a centralized automation and workflow engine for the entire System Center suite. Orchestrator enables automation of any element of the Microsoft Private Cloud through Orchestrator Integration Packs. These integrations are combined into workflows or runbooks that can be triggered on demand. 31 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.

32 Microsoft System Center 2012 R2 Service Manager Microsoft Service Manager is part of the Microsoft System Center suite. It provides an integrated platform for automating and adapting an organization s IT service management best practices, such as those found in Microsoft Operations Framework (MOF) and Information Technology Infrastructure Library (ITIL). It offers built-in processes for incident and problem resolution, change control, and asset lifecycle management. Microsoft System Center 2012 R2 Virtual Machine Manager Microsoft SCVMM is part of the Microsoft System Center suite. It is a management solution for the virtualized data center, enabling administrators to configure and manage the virtualization host, networking, and storage resources to create and deploy VMs and services to private clouds. Microsoft SQL Server 2012 R2 Microsoft SQL Server is a highly available database management and analysis system for e-commerce, line-of-business, and data-warehousing solutions. It stores data and provides reporting services for the System Center components Data Protection This section explains how the NetApp solution protects data through NetApp SnapManager 2.02 for Hyper-V (SMHV) and NetApp SnapVault software. NetApp SnapManager for Hyper-V 2.02 NetApp SMHV 2.02 automates and simplifies backup and restore operations for Windows Server 2012 environments that are hosted on NetApp storage systems. SMHV enables application-consistent dataset backup operations according to protection policies set by the storage administrator. VM backup operations can also be restored from those application-consistent backup copies. SMHV enables you to back up and restore multiple VMs across multiple hosts. Policies can be applied to the datasets to automate backup tasks such as scheduling, retention, and replication. Users can perform the following tasks with NetApp SMHV: Group VMs into datasets that have the same protection requirements and apply policies to those datasets. Back up and restore VMs running in dedicated and shared disks. Back up and restore VMs that are running on CSVs and using Windows failover clustering. Back up and restore VMs that are running on continuously available SMB 3.0 shares hosted on clustered Data ONTAP 8.2. Use scheduling policies to automate dataset backup operations. Perform on-demand backup of datasets. Use retention policies to retain dataset backup copies for as long as necessary. Update the SnapMirror and SnapVault destination location after a backup operation successfully finishes. Specify custom scripts to run before or after a backup operation. Restore VMs from backup copies. Monitor the status of all scheduled and running jobs. Manage hosts remotely from a management console. Provide consolidated reports for dataset backup, restore, and configuration operations. 32 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.

Perform a combination of crash-consistent and application-consistent backup operations.
Support Windows PowerShell cmdlets for most GUI operations.

NetApp SMHV Backup and Restore Architecture on Windows Server 2012

The following sections describe the benefits of the SMHV backup-and-restore architecture on Windows Server 2012.

Distributed Application-Consistent Backup Operations

In Windows Server 2012 R2, the CSV file system (CSVFS) introduced distributed application-consistent backup operations, which avoid multiple backup requests to each node in the cluster. The entire backup operation is performed from the coordinator node (the cluster owner) by using the new CSV writer and the CSV shadow copy provider. NetApp SMHV starts the backup session on the first node and adds the Microsoft Hyper-V writer and VMs running on the node that requested the backup. SMHV then adds the CSV writer and all other VMs running on the passive-node CSV volumes to its call list and creates the backup.

Distributed application-consistent backup operations are space efficient because they create only one NetApp Snapshot copy for each volume instead of creating one for each node and volume combination. This space savings is significant when large numbers of nodes are involved in the backup. Because Data ONTAP imposes a limit on the maximum number of Snapshot copies that can be stored for a volume, creating fewer copies per backup also allows more VM backup copies to be retained. This feature allows backup of all VMs in a cluster to be consistent in a single application-consistent backup operation.

To achieve this distributed backup mechanism, Microsoft has introduced a new CSV writer and CSV shadow copy provider:

The CSV writer serves volume- and component-level metadata from the nonrequesting node for CSV volumes and acts as a proxy by including the Hyper-V writers from the remote node for the backup session.
The CSV provider acts as the default software provider for CSV volumes and coordinates VSS freeze and thaw across all cluster nodes to provide application and crash consistency. The CSV provider coordinates the VSS backup activities from all of the Hyper-V writers on the partner cluster nodes to put the VM in an application-consistent state. It also verifies that the CSV shadow copy volume is writable for the partner node written by Hyper-V during the autorecovery process.

Figure 6 shows the NetApp SMHV 2.0 distributed application-consistent backup architecture.
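To illustrate the space-efficiency point with rough numbers (the 255-copy figure is the per-volume Snapshot limit commonly cited for Data ONTAP 8.x, and the retention math is for illustration only): if a four-node cluster runs VMs on a single CSV volume and each backup produces one Snapshot copy per node, every backup consumes four of the volume's available copies, so fewer than 64 backup copies can be retained; with a distributed backup that creates one copy per volume, the same volume can hold roughly four times as many recovery points before the limit is reached.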

34 Figure 6) NetApp SMHV 2.0 distributed application-consistent backup architecture. Best Practices To achieve a successful backup process and faster backup performance, NetApp recommends having no more than 15 CSVFS LUNs in a single NetApp SMHV backup dataset that belong to the same NetApp storage system. In other words, VMs hosted on no more than 15 CSVFS LUNs belonging to the same storage system should be grouped together in a single dataset. If 20 CSVFS LUNs are hosted on a single NetApp storage system, NetApp recommends creating a minimum of two datasets and spreading the VMs (CSVFS LUNs) evenly across these datasets. NetApp recommends not performing operations related to SMHV during storage live migration because such operations could corrupt the VM. The following limitations apply to the distributed application-consistent backup function on Windows Server 2012 R2: The distributed backup mechanism for Windows 2012 R2 is not applicable for the crash-consistent backup feature in SMHV. All of the VHD files that belong to a VM must be hosted on CSVFS LUNs and not on a mix of CSVFS and shared disks. VHDX files larger than 14TB cannot be created and backed up because of Data ONTAP limitations. Hyper-V over SMB and Remote VSS on Windows Server 2012 NetApp Data ONTAP 8.2 supports two important features developed specifically for Windows Server 2012 environments: 34 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.

Continuously available shares for Hyper-V over SMB
Remote VSS

Continuously available SMB shares can be created by using the provisioning templates in SnapDrive 7.0 for Windows and hosting VMs on them. These VMs can be backed up by using SMHV with remote VSS. When a backup is initiated, SMHV acts as a VSS requestor and adds the SMB 3.0 share containing Hyper-V VMs to the VSS Snapshot copy set. VSS invokes the new SMB file share copy provider component in Windows Server 2012 R2 to send the Microsoft Remote Procedure Call (RPC) commands to the SMB target (the storage system) to coordinate the VSS backup operations. The new file share shadow copy agent (the remote VSS provider) running on the SMB target is responsible for creating the actual hardware Snapshot copy. Data ONTAP 8.2 uses the file share shadow copy agent to make the application-consistent backup copy of the SMB shares. Figure 7 shows the NetApp SMHV 2.0 architecture.

Figure 7) NetApp SMHV 2.0 architecture.

Note: SnapMirror and SnapVault are updated once per cluster after all of the node-level backup copies have been created.

Note: For Hyper-V over SMB, the CIFS server and SVM should have separate names.

Note: SMHV 2.0 uses clones directly from the active file system (AFS) during a backup. Although this approach increases backup performance, cloning from the AFS consumes twice the space that cloning without the AFS does. You can disable cloning from AFS by creating the DisableCloneFromAfs registry key and setting its value to 1. The registry key is located on the host system under HKEY_LOCAL_MACHINE > System > Services > SWSvc > Parameters.

Note: Hyper-V Server 2012 R2 does not support application-consistent backup functions for VMs in SMB shares.
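For administrators who prefer to script the AFS cloning change described in the note above, a minimal PowerShell sketch follows. It assumes the key lives under CurrentControlSet (the usual location for service parameters) and that the value is a DWORD; neither detail is stated explicitly above, so verify against the SMHV documentation before applying it.

# Disable SMHV cloning from the active file system (assumed DWORD value under the SnapDrive service key)
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\SWSvc\Parameters'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'DisableCloneFromAfs' -PropertyType DWord -Value 1 -Force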

36 Crash-Consistent Backup Operations Backup copies created through NetApp SMHV can be either application consistent or crash consistent. Application-consistent backup copies are created in coordination with VSS so that the applications running in the VM are stopped before the NetApp Snapshot copy is created. This type of backup helps confirm the integrity of application data and therefore can safely be used to restore a VM, and the applications running on it, to a consistent state. Although application-consistent backup operations are the most suitable solution for data protection and recovery of Hyper-V VMs, they have the following drawbacks: Application-consistent backup operations are slower because of VSS involvement with the parent and guest OSs. Because the application writer in the VM and the Hyper-V writer in the parent OS are involved in the backup process, failure in any of the components results in a failed backup. The Hyper-V writer uses the autorecovery process to make the VMs consistent. Autorecovery results in the creation of two Snapshot copies on the storage system. Therefore, each Hyper-V backup requires two Snapshot copies to be created per storage system volume. If multiple VMs are running on different nodes in a cluster, but on the same CSV, SMHV still must create one backup copy per node, as required by VSS. Therefore, SMHV creates several Snapshot copies on the same CSV for different VMs. Considering these drawbacks, it is desirable to have a quicker way to create Hyper-V VM backup copies. The crash-consistent backup function allows backup copies to be created quickly. A crash-consistent backup of a VM does not use VSS to quiesce data, and there is no autorecovery process during a crash-consistent backup operation. This type of backup operation simply creates a Snapshot copy on the NetApp storage system for all of the LUNs used by the VMs involved in the dataset. The data in the backup copy is the same as it would be after a system failure or a power outage. All SMHV functions, such as scheduling, restore, script execution, SnapMirror updates, backup retention, and so on, are supported for crash-consistent backup operations. Crash-consistent backup copies are also supported for both SAN and SMB 3.0 environments. Figure 8 shows the application-consistent and crash-consistent backup types in the context of the NetApp Backup Dataset wizard. 36 FlexPod Datacenter with Microsoft Private Cloud 2014 NetApp, Inc. All Rights Reserved.
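Conceptually, a crash-consistent backup reduces to a storage-side Snapshot copy of the volumes that hold the VM files, with no VSS quiesce. The following Data ONTAP PowerShell Toolkit sketch shows that underlying operation in isolation; it is not the SMHV implementation itself, and the controller address, volume, and Snapshot names are hypothetical.

# Take a point-in-time, crash-consistent Snapshot copy of the volume backing a CSV
Connect-NcController 192.168.100.50 -Credential (Get-Credential)
New-NcSnapshot -Volume csv_vol01 -Snapshot ("crash_" + (Get-Date -Format yyyyMMdd_HHmmss))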

Figure 8) Backup types.

Note: The Saved state backup policy is not applicable for crash-consistent backup and restore operations because crash-consistent backup operations do not involve the Hyper-V VSS writer.

Note: SMHV supports parallel execution of crash-consistent and application-consistent backup operations. It also supports parallel crash-consistent backup execution. However, because of a timeout error in SnapDrive for Windows, users might observe some problems while such operations are running.

Note: Restoration of crash-consistent backup copies in SMB environments fails if the directories that host the environments are renamed after the backup operation is performed.

Note: The crash-consistent backup function can be used to create the latest backup copy of all data just before performing an application-consistent restore operation on a VM. This practice provides a safe backup to which to revert, if necessary.

Best Practice

The crash-consistent backup feature is not a replacement for application-consistent backup operations. It enables the creation of frequent recovery points. Therefore, NetApp recommends that you perform frequent crash-consistent backup operations and fewer application-consistent backup operations.

NetApp SnapVault and SMHV

SMHV 2.0 supports SnapVault in clustered Data ONTAP 8.2 environments. After SMHV completes a backup operation for a dataset, the backup copy can be updated to the SnapVault destination. Figure 9 shows SnapVault integration into SMHV.

Figure 9) NetApp SnapVault integration into SMHV.

Users can also configure the retention period of a Snapshot copy on the SnapVault destination storage system by selecting a label from the Snapshot Label drop-down list while configuring the SMHV backup workflow, as shown in Figure 10.

Figure 10) NetApp SMHV SnapVault options and Snapshot labels.
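The retention that such a label drives is defined on the storage side in the SnapMirror (SnapVault) policy applied to the destination volume. As a hedged sketch of what that pairing might look like, the following example uses Invoke-NcSsh from the Data ONTAP PowerShell Toolkit to add a rule that keeps 30 copies tagged with a hypothetical smhv_daily label; the policy, SVM, and label names are illustrative and must match the label chosen in the SMHV wizard.

# Keep 30 SnapVault copies whose Snapshot label matches the label selected in SMHV
Connect-NcController 192.168.100.50 -Credential (Get-Credential)
Invoke-NcSsh "snapmirror policy add-rule -vserver svm_backup -policy smhv_vault_policy -snapmirror-label smhv_daily -keep 30"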


More information

A Solution Guide. Converged Infrastructure and Private Cloud

A Solution Guide. Converged Infrastructure and Private Cloud A Solution Guide Converged Infrastructure and Private Cloud Businesses are looking at cloud, virtualisation and converged infrastructures because existing data centres are becoming ill equipped to help

More information

UCS M-Series Modular Servers

UCS M-Series Modular Servers UCS M-Series Modular Servers The Next Wave of UCS Innovation Marian Klas Cisco Systems June 2015 Cisco UCS - Powering Applications at Every Scale Edge-Scale Computing Cloud-Scale Computing Seamlessly Extend

More information

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer

The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer The Future of Computing Cisco Unified Computing System Markus Kunstmann Channels Systems Engineer 2009 Cisco Systems, Inc. All rights reserved. Data Centers Are under Increasing Pressure Collaboration

More information

Datasheet NetApp FAS8000 Series

Datasheet NetApp FAS8000 Series Datasheet NetApp FAS8000 Series Respond more quickly to changing IT needs with unified scale-out storage and industry-leading data management KEY BENEFITS Support More Workloads Run SAN and NAS workloads

More information

Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study

Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study White Paper Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study 2012 Cisco and/or its affiliates. All rights reserved. This

More information

Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage. Applied Technology

Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage. Applied Technology Windows Server 2008 Hyper-V Backup and Replication on EMC CLARiiON Storage Applied Technology Abstract This white paper provides an overview of the technologies that are used to perform backup and replication

More information

Cisco Unified Computing System with Microsoft Hyper-V Recommended Practices

Cisco Unified Computing System with Microsoft Hyper-V Recommended Practices Cisco Unified Computing System with Microsoft Hyper-V Recommended Practices White Paper January 2014 Preface As both Cisco Unified Computing System (Cisco UCS ) and Microsoft Hyper-V continue to gain market

More information

A virtual SAN for distributed multi-site environments

A virtual SAN for distributed multi-site environments Data sheet A virtual SAN for distributed multi-site environments What is StorMagic SvSAN? StorMagic SvSAN is a software storage solution that enables enterprises to eliminate downtime of business critical

More information

Simplify Data Management and Reduce Storage Costs with File Virtualization

Simplify Data Management and Reduce Storage Costs with File Virtualization What s Inside: 2 Freedom from File Storage Constraints 2 Simplifying File Access with File Virtualization 3 Simplifying Data Management with Automated Management Policies 4 True Heterogeneity 5 Data Protection

More information

Uncompromised business agility with Oracle, NetApp and VMware

Uncompromised business agility with Oracle, NetApp and VMware Tag line, tag line Uncompromised business agility with Oracle, NetApp and VMware HroUG Conference, Rovinj Pavel Korcán Sr. Manager Alliances South & North-East EMEA Using NetApp Simplicity to Deliver Value

More information

BlueArc unified network storage systems 7th TF-Storage Meeting. Scale Bigger, Store Smarter, Accelerate Everything

BlueArc unified network storage systems 7th TF-Storage Meeting. Scale Bigger, Store Smarter, Accelerate Everything BlueArc unified network storage systems 7th TF-Storage Meeting Scale Bigger, Store Smarter, Accelerate Everything BlueArc s Heritage Private Company, founded in 1998 Headquarters in San Jose, CA Highest

More information

VMware vsphere on NetApp. Course: 5 Day Hands-On Lab & Lecture Course. Duration: Price: $ 4,500.00. Description:

VMware vsphere on NetApp. Course: 5 Day Hands-On Lab & Lecture Course. Duration: Price: $ 4,500.00. Description: Course: VMware vsphere on NetApp Duration: 5 Day Hands-On Lab & Lecture Course Price: $ 4,500.00 Description: Managing a vsphere storage virtualization environment requires knowledge of the features that

More information

Microsoft SMB File Sharing Best Practices Guide

Microsoft SMB File Sharing Best Practices Guide Technical White Paper Microsoft SMB File Sharing Best Practices Guide Tintri VMstore, Microsoft SMB 3.0 Protocol, and VMware 6.x Author: Neil Glick Version 1.0 06/15/2016 @tintri www.tintri.com Contents

More information

Clustered Data ONTAP 8.3 Administration and Data Protection Training

Clustered Data ONTAP 8.3 Administration and Data Protection Training Clustered Data ONTAP 8.3 Administration and Data Protection Training Format: Duration: Classroom, Online 5 days Description This course uses lecture and hands-on exercises to teach basic administration

More information

Optimally Manage the Data Center Using Systems Management Tools from Cisco and Microsoft

Optimally Manage the Data Center Using Systems Management Tools from Cisco and Microsoft White Paper Optimally Manage the Data Center Using Systems Management Tools from Cisco and Microsoft What You Will Learn Cisco is continuously innovating to help businesses reinvent the enterprise data

More information

What s New with VMware Virtual Infrastructure

What s New with VMware Virtual Infrastructure What s New with VMware Virtual Infrastructure Virtualization: Industry-Standard Way of Computing Early Adoption Mainstreaming Standardization Test & Development Server Consolidation Infrastructure Management

More information

Mit Soft- & Hardware zum Erfolg. Giuseppe Paletta

Mit Soft- & Hardware zum Erfolg. Giuseppe Paletta Mit Soft- & Hardware zum Erfolg IT-Transformation VCE Converged and Hyperconverged Infrastructure VCE VxRack EMC VSPEX Blue IT-Transformation IT has changed dramatically in last past years The requirements

More information

Cisco Unified Computing System: Meet the Challenges of Virtualization with Microsoft Hyper-V

Cisco Unified Computing System: Meet the Challenges of Virtualization with Microsoft Hyper-V White Paper Cisco Unified Computing System: Meet the Challenges of Virtualization with Microsoft Hyper-V What You Will Learn The modern virtualized data center is today s new IT service delivery foundation,

More information

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one

More information

EMC Virtual Infrastructure for Microsoft SQL Server

EMC Virtual Infrastructure for Microsoft SQL Server Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate

More information

OmniCube. SimpliVity OmniCube and Multi Federation ROBO Reference Architecture. White Paper. Authors: Bob Gropman

OmniCube. SimpliVity OmniCube and Multi Federation ROBO Reference Architecture. White Paper. Authors: Bob Gropman OmniCube SimpliVity OmniCube and Multi Federation ROBO Reference Architecture White Paper Authors: Bob Gropman Date: April 13, 2015 SimpliVity and OmniCube are trademarks of SimpliVity Corporation. All

More information

How To Evaluate Netapp Ethernet Storage System For A Test Drive

How To Evaluate Netapp Ethernet Storage System For A Test Drive Performance evaluation sponsored by NetApp, Inc. Introduction Ethernet storage is advancing towards a converged storage network, supporting the traditional NFS, CIFS and iscsi storage protocols and adding

More information

Cisco Unified Data Center: The Foundation for Private Cloud Infrastructure

Cisco Unified Data Center: The Foundation for Private Cloud Infrastructure White Paper Cisco Unified Data Center: The Foundation for Private Cloud Infrastructure Providing Agile and Efficient Service Delivery for Sustainable Business Advantage What You Will Learn Enterprises

More information

Nutanix Tech Note. Configuration Best Practices for Nutanix Storage with VMware vsphere

Nutanix Tech Note. Configuration Best Practices for Nutanix Storage with VMware vsphere Nutanix Tech Note Configuration Best Practices for Nutanix Storage with VMware vsphere Nutanix Virtual Computing Platform is engineered from the ground up to provide enterprise-grade availability for critical

More information

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 White Paper HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 Abstract This white paper demonstrates key functionality demonstrated in a lab environment

More information

How To Optimize Sharepoint With Netapp Data Management And Storage

How To Optimize Sharepoint With Netapp Data Management And Storage White Paper Transitioning to Microsoft SharePoint and Virtualization with Microsoft and NetApp Sachin Chheda, Mike Noble, Calvin Nieh, NetApp and Bill Baer, Microsoft October 2010 WP-7122 OPTIMIZING A

More information

Deep Dive on SimpliVity s OmniStack A Technical Whitepaper

Deep Dive on SimpliVity s OmniStack A Technical Whitepaper Deep Dive on SimpliVity s OmniStack A Technical Whitepaper By Hans De Leenheer and Stephen Foskett August 2013 1 Introduction This paper is an in-depth look at OmniStack, the technology that powers SimpliVity

More information

Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing

Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing White Paper Deliver Fabric-Based Infrastructure for Virtualization and Cloud Computing What You Will Learn The data center infrastructure is critical to the evolution of IT from a cost center to a business

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

New Hitachi Virtual Storage Platform Family. Name Date

New Hitachi Virtual Storage Platform Family. Name Date New Hitachi Virtual Storage Platform Family Name Date Familiar Challenges and Big Transformations Too Much Information Too Much Complexity 24 x 7 Expectations Continually Rising Costs Software-Defined

More information

EMC: The Virtual Data Center

EMC: The Virtual Data Center EMC: The Virtual Data Center Dejan Živanovi EMC Technology Solution Group Sr. Technology Consultant High-End platforms & Business Continuity 1 Business Challenges Data growing at 70% annually 80% are files

More information

Understanding Cisco Cloud Fundamentals CLDFND v1.0; 5 Days; Instructor-led

Understanding Cisco Cloud Fundamentals CLDFND v1.0; 5 Days; Instructor-led Understanding Cisco Cloud Fundamentals CLDFND v1.0; 5 Days; Instructor-led Course Description Understanding Cisco Cloud Fundamentals (CLDFND) v1.0 is a five-day instructor-led training course that is designed

More information

Unified Storage for the Private Cloud Dennis Chapman NetApp

Unified Storage for the Private Cloud Dennis Chapman NetApp Unified Storage for the Private Cloud Dennis Chapman NetApp Data Center Transformation Application- Based Silos Zones of Virtualization Private Cloud Public Cloud Apps Servers Network Hybrid Storage IT

More information