Lenovo ThinkServer High-Availability Solutions


Lenovo ThinkServer High-Availability Solutions

With Lenovo ThinkServer SA120 DAS Array, LSI Syncro CS 9286-8e, and Microsoft Windows Server 2012

Lenovo Enterprise Product Group
Version 1.0
June 2014

Copyright Lenovo 2014

LENOVO PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. This information could include technical inaccuracies or typographical errors. Changes may be made to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this Lenovo product, and use of those Web sites is at your own risk.

The following terms are trademarks of Lenovo in the United States, other countries, or both: Lenovo and ThinkServer. Intel and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries. Microsoft, Windows Storage Server 2012, Windows Server 2012, and the Windows Logo are trademarks of Microsoft Corporation in the United States and/or other countries. LSI, the LSI & Design logo, and MegaRAID are trademarks or registered trademarks of LSI Corporation in the United States and/or other countries.

Contents

1.0 Introduction
2.0 High-Availability Solutions
  2.1 Solution Architecture
  2.2 Network Architecture
  2.3 Virtual Disk Layout
  2.4 Storage Array Design
3.0 Solution Hardware Recommendations
  3.1 ThinkServer Systems
  3.2 ThinkServer SA120 DAS Array
  3.3 Scaling the Solution
4.0 Configuration Guide
  4.1 Pre-installation Tasks
  4.2 Making the Physical Connections
  4.3 Creating Virtual Drives on Each Server Node
  4.4 Operating System Installation and Configuration
  4.5 Configure Networking
  4.6 Create Virtual Drives on the SA120
  4.7 Expose Storage to the Cluster Nodes
  4.8 Creating the Cluster
5.0 Creating the Highly Available Storage Cluster
  5.1 Install the File Services Role
  5.2 Create the Highly Available File Server
  5.3 Create a File Share
  5.4 Mapping User Folders to the Highly Available File Server Share
  5.5 Test a Cluster Failover
6.0 Creating the Highly Available Application Cluster
  6.1 Install Hyper-V
  6.2 Create a Virtual Switch
  6.3 Add a Disk as CSV to Store Virtual Machine Data
  6.4 Create a Highly Available Virtual Machine
  6.5 Test a Planned Failover
  6.6 Test an Unplanned Failover
References

1.0 Introduction

Organizations expect their IT environments to operate with mission-critical reliability. End users expect that key applications such as email, database, and transaction processing will always be available, and that their data will be protected against loss in the event of a hardware or software failure.

Shared storage is essential to achieving many of the benefits of high availability. Storage area networks (SANs) satisfy this need, but a SAN can also be very complex and expensive to deploy and manage. Network-attached storage (NAS) can be more affordable, but adding reliability and data protection to NAS can significantly increase the cost.

High-availability systems give applications a means of continuing to run if a server on which they are running fails. In a high-availability solution, servers work together in a cluster to provide redundancy to each other, maximizing uptime through fault-tolerant components. When a server in the cluster (a node) fails, the workload moves automatically to other nodes in the cluster with little interruption, a process known as failover. High-availability configurations can also balance CPU load by moving applications to servers with lower CPU utilization, in a way that is transparent to clients.

Solution Benefits
- Enterprise-class, high-availability server, application, and storage at a fraction of the cost and complexity of existing HA solutions
- Storage resilience and performance similar to high-cost storage solutions, such as Fibre Channel SAN devices

New virtualization and failover clustering capabilities of Microsoft Windows Server 2012 make high-availability application and storage solutions easier to configure and less expensive to deploy. The Windows Server platform provides high availability and scalability to many types of server workloads, including Microsoft Hyper-V hosts, SQL Server, and Exchange, as well as file share storage for users and server applications.

This document describes a ThinkServer solution that provides a continuously available hardware and software platform utilizing Microsoft Windows Server 2012 R2 Failover Clustering, which provides transparent failover without data loss. The LSI Syncro adapter provides robust hardware RAID data protection while supporting cluster failover, something that otherwise cannot be done natively within Windows Server. The Lenovo ThinkServer SA120 direct-attached storage array (also known as a JBOD) completes the solution, enabling shared storage as reliable as a SAN at a fraction of the cost. This solution rivals the functionality and scalability of advanced architectures using SANs, while reducing capital and management costs and complexity. It is well suited to departments, workgroups, and mid-size enterprises, and especially to customers with limited IT staff and constrained budgets.

This document provides guidance for installing, configuring, and supporting the solution. It is intended for IT administrators and managers, as well as business partners planning to evaluate or deploy these storage solutions using Lenovo servers. It assumes a working knowledge of Windows networking and server software. Additional information beyond the scope of this document can be found in the References section.

2.0 High-Availability Solutions

The high-availability solutions described in this document can serve two primary purposes.

1. A highly available storage cluster provides continuously available networked storage for users and server applications such as Microsoft SQL Server and Hyper-V virtual machines (see Figure 1).
2. A highly available application cluster enables Windows Server clustered roles to run on physical servers or on virtual machines installed on the servers running Hyper-V (see Figure 2).

Both solutions can achieve levels of reliability, availability, manageability, and performance similar to those expected of solutions using a SAN, but at a lower acquisition cost.

Highly Available Storage Cluster

This solution provides continuously available, centralized networked file services for general use, just like a SAN, to traditional information workers and server application workloads. The solution enables continuous access to SMB and NFS file shares as well as iSCSI storage targets, with transparent failover for connections to those services. This capability is appropriate for users who need access to the same files and applications, or where centralized backup and file management is needed.

Figure 1 Highly Available Storage Cluster Solution Stack

In addition, this solution can leverage failover clustering and the capabilities of Windows SMB 3.0 to provide file shares that store server application data, such as Hyper-V virtual machine files or SQL Server database files.

Microsoft calls file shares associated with this type of clustered file server scale-out file shares. In this configuration, all file shares are simultaneously accessible through all nodes in the cluster, referred to as an active-active configuration. This configuration provides better utilization of network bandwidth by automatically aggregating the bandwidth available from multiple redundant network paths between the application servers and the SMB 3.0 shares hosted on the storage server, and it provides resiliency against a network failure. Connections to the shares are also load-balanced by redirecting clients to the cluster node with the best access to the volume used by the file share. This is the recommended file server type when deploying either Hyper-V or Microsoft SQL Server over SMB.

Other important storage server roles and features of Windows Server 2012 R2 that can be employed include:

- Data Deduplication: Deduplication can significantly improve the efficiency of storage space utilization by storing a single copy of identical data on the volume. This can deliver storage optimization ratios of 2:1 for general file servers and up to 20:1 for virtualization data.
- DFS Namespaces and Replication: In a larger network, users can be given a centralized folder namespace through which the underlying file shares on different servers and in different sites are made available to access and store files. DFS Namespaces map clients' logical file requests to physical server files without having to search or map multiple locations. If deployed in a distributed environment (e.g. a branch office), DFS Replication provides synchronization capabilities between the central and remote servers across limited-bandwidth network connections.
- BranchCache: BranchCache optimizes the usage of wide area network (WAN) links by locally caching remote data based on predefined policies. When a user accesses content on remote servers, BranchCache copies the content from the remote servers and caches it on the branch office server, allowing compatible clients to access the content from the local server rather than over the WAN. Subsequent requests for the same data will be served from the local server until updates are required.
- Volume Shadow Copy Service (VSS): VSS is used to create a point-in-time image (shadow copy) of one or more volumes. It provides enhanced data protection through high-fidelity backups, rapid data restores, and data transport. VSS for SMB file shares allows backup operations to be performed using snapshots of remote file shares supporting SMB-based server applications (for example, SQL over SMB).

Highly Available Application Cluster

The highly available application cluster increases the availability of applications and services running on the member nodes. If one or more of the cluster nodes fail, other nodes begin to provide service (the failover process). In addition, the clustered roles are proactively monitored to verify that they are working properly. If they are not working, they are restarted or moved to another node. In this solution design, the shared storage is part of, and is managed by, the clustered nodes, although a highly available storage cluster as described above could provide it.

Figure 2 Highly Available Application Cluster Solution Stack

2.1 Solution Architecture

This guide focuses on configuring both the storage cluster solution and the application cluster solution. Steps to create the storage cluster are the same as for the application cluster, with the addition of enabling Hyper-V or other operating system services for high availability in the application solution.

Windows Server 2012 R2 is installed on two ThinkServer systems deployed as a failover cluster. An LSI Syncro RAID controller in each server in the cluster connects to an SA120 JBOD to provide shared storage with hardware RAID capability. Storage resiliency is provided by using redundant connections from each cluster node to the SA120. Syncro provides hardware RAID to guard against data loss in the event of a drive failure. Syncro also mirrors the input/output (I/O) data cache in real time across the two controllers to support the failover cluster functionality. Because the data cache in both controllers is completely mirrored, data is not lost in the event of an unplanned failover.

The cluster connects to the public network using standard Ethernet, and network resiliency can be provided by using multiple redundant Ethernet connections to redundant switches. A private network is used for cluster-internal communications. An optional separate management network can also be configured for management of the servers.

To support the cluster, at least one Active Directory Domain Services controller is needed for centralized security and management of the cluster member computers. DNS services are also required. It is assumed that Active Directory and DNS are deployed at the customer site; deployment of these services is not in scope for this document. Figure 3 shows the logical architecture of the solution.

Figure 3 Logical Architecture

2.2 Network Architecture

The network architecture requires a minimum of two networks to be configured. The first provides a private network for internal cluster communications. With only two nodes, the server-to-server network connection can be made directly (using a crossover cable) without going through a switch; otherwise, this network must be on a separate subnet from all other network communications. The second network provides access to the high-availability cluster and to infrastructure services over cost-efficient Ethernet connections (1 Gb or 10 Gb). The choice of 1 Gb versus 10 Gb Ethernet networking can be made based on the intended workload.

If resiliency against network failures is required, the solution must have redundant paths to each cluster server. Additional network adapters can be added and each NIC connected to redundant switches to provide continued access to the cluster in the event of a network component failure. When multiple NICs are available, network path redundancy, failover, load balancing, and aggregation of the available bandwidth on the NIC ports can be configured through NIC teaming or the SMB Multichannel capability in Windows Server 2012 R2.

Optionally, a third network can be configured for management of the servers. Dedicating a network to this function prevents competition with guest traffic and provides a degree of separation for security and ease of management. Additionally, the server out-of-band management can be combined on this network.

2.3 Virtual Disk Layout

The Syncro CS controllers work together to achieve file sharing, cache coherency, heartbeat monitoring, and redundancy. In order to maintain data synchronization between the controllers, a particular virtual disk can be accessed or owned by only a single controller at any given point in time (a local virtual disk, as shown in Figure 4). The other Syncro controller is aware of the virtual disk, but has only indirect access to it (a remote virtual disk).

Figure 4 Local Virtual Disks

Access to a remote virtual disk is accomplished with I/O shipping, which is a means of submitting I/O requests from one controller to the controller that owns the virtual disk. As shown in Figure 5, when a controller requires access to a remote virtual disk, the I/O is shipped to the remote controller, which then processes the I/O locally. This preserves the active-active configuration of the cluster nodes; however, I/O requests serviced by local virtual disks are much faster than those serviced by remote virtual disks.

Figure 5 Remote Virtual Disks

From a performance perspective, the situation shown in Figure 5 is non-optimal, as there is additional command-processing overhead associated with shipped I/O. The preferred configuration is to co-locate the virtual disks with the server cluster node that is primarily driving the I/O load. Avoid configurations with multiple virtual disks whose I/O load is split between the server nodes.

2.4 Storage Array Design

The storage array RAID level should be selected based on consideration of several factors, most importantly performance, fault tolerance, and storage capacity. However, not all of these factors can be optimized at the same time. In general, a storage configuration such as RAID 10 is appropriate for virtual machine usage, balancing performance and capacity. RAID 5 can be used when more total drive capacity should be allocated to storage. RAID 1 is sufficient for server boot volumes.

The examples shown in this document use the storage configuration shown in Figure 6. In the SA120, a total of 12 hot-swap 7,200 rpm, 6Gbps SAS drives are organized into two drive groups (DG0 and DG1), each composed of five drives in a RAID 5 configuration (4 data + 1 parity). Two additional drives are dedicated as global hot spares for the cluster. The first drive group (DG0) is divided into two virtual disks. The first virtual disk (JBOD VD0) is used for the Quorum drive, and the second (VD1) is used as a shared virtual drive for application or file data. The second drive group (DG1) is configured as a single shared virtual drive (VD2) for application or file data. The larger virtual drives can be further subdivided into partitions within Windows, and ownership of the virtual disks can be designated to a particular node of the cluster during cluster setup if desired.

Each server will have a single drive group composed of two drives in a RAID 1 configuration. The drive group will be used for the operating system and its associated partitions, and is organized into a single virtual drive (Server VD0). This configuration provides optimum performance as well as protection against a drive failure in this group.
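As a sizing aid, the usable capacity of each RAID 5 group follows directly from this layout: one drive's worth of capacity per group is consumed by parity, so

    usable capacity per group = (5 - 1) x single-drive capacity

With 1TB drives this is roughly 4TB per group; with 2TB drives it is roughly 8TB, which Windows reports as about 7.3TB and which matches the shared-data volume sizes shown later in this guide.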

Figure 6 Drive Configuration

3.0 Solution Hardware Recommendations

Recognizing that system results are highly dependent on the specific workload, this section describes recommended hardware for the solutions that can be used as a starting point for larger or more feature-rich configurations.

3.1 ThinkServer Systems

Enterprise-class Lenovo ThinkServer systems are an ideal choice for customers seeking affordable options that pack a punch. ThinkServer systems provide the performance, security, and reliability needed to support any workload. The servers feature balanced designs, flexible configurations, and expansive I/O to handle demanding deployments. Powerful new network adapter, storage controller, and sophisticated RAID choices increase scalability, reliability, and I/O capacity to handle growing requirements for large and compute-intensive, scale-out applications. With attractive price points, built-in redundancy, high-reliability components, and sophisticated cooling technology, enterprise-class ThinkServer systems deliver outstanding value.

For the highly available storage cluster, Lenovo recommends two ThinkServer RD340 dual-CPU servers connected to the ThinkServer SA120 JBOD for shared storage. A typical configuration for each of the ThinkServer RD340 systems includes:

- Intel Xeon processors
  - Entry solutions: one 8-core CPU per node
  - Large-capacity solutions: two 8-core CPUs per node
- Memory
  - Entry solutions: 32GB memory
  - Large-capacity solutions: 64GB memory (for large active datasets, e.g. greater than 1GBps throughput)
- LSI Syncro CS RAID adapter for connection to the SA120 JBOD
- ThinkServer RAID 300 for the internal drives in a RAID 1 configuration for the operating system
- Two 500GB SATA HDDs for system boot drives
- Four 1 Gb Ethernet interfaces for the network connections, for network resiliency and load balancing
  - One heartbeat 1GbE
  - Two external 1GbE
  - One system management 1GbE

For the highly available application cluster, Lenovo recommends two ThinkServer RD640 systems. A typical configuration for each of the RD640 servers includes:

- Intel Xeon processors
  - Entry solutions: one 8-core CPU per node
  - Large-capacity solutions: two 8-core CPUs per node
- Memory
  - Entry solutions: 64GB memory (with 1 CPU)
  - Large-capacity solutions: 128GB memory (with 2 CPUs)
- LSI Syncro CS RAID adapter for connection to the SA120 JBOD
- ThinkServer RAID 500 or ThinkServer RAID 700 controller for the internal drives in a RAID 1 configuration for the operating system
- Two 500GB SATA HDDs for system boot drives
- Four 1 Gb Ethernet interfaces for the network connections, for network resiliency and load balancing
  - One heartbeat 1GbE
  - Two external 1GbE
  - One system management 1GbE

Ordering information for these typical configurations is provided in Table 1 and Table 2. Two servers are required for each solution.

Table 1 Storage Cluster Server Configuration

Part Number | Description | Quantity
70AB001XUX | RD340 (1U rack server with 4 x 3.5-inch hot-swap HDD bays): 1 x Intel Xeon processor E v2 (8 cores, 20MB cache, 1.9GHz, 7.2GT/s QPI); 1 x 8GB DDR3L-1600MHz (2Rx8) RDIMM; ThinkServer RAID 300 (RAID 0, 1, 10); 2 x integrated 1 Gb Ethernet; ThinkServer Management Module; slim DVD optical; 1 x 550W Gold hot-swap redundant power supply; ThinkServer tool-less rail kit; Next Business Day On-site Warranty, 3 Years Parts and Labor | 1
0C19534 | ThinkServer 8GB DDR3L-1600MHz (2Rx8) RDIMM | 3
4XB0F28655 | ThinkServer Syncro CS 9286-8e 6Gb High Availability Enablement Kit by LSI (includes two ThinkServer 1-meter external mini-SAS cables) | 1
0A89473 | ThinkServer 500GB 7.2K 3.5-inch enterprise 6Gbps SATA hot-swap hard drive | 2
0C19506 | ThinkServer 1Gbps Ethernet I350-T2 Server Adapter by Intel (dual port, 1Gb BASE-T) | 1
67Y2624 | ThinkServer Management Module Premium for Remote iKVM | 1
4X20E | 550W Gold hot-swap redundant power supply | 1
SM | Windows Server 2012 R2 Standard | 1

Table 2 Application Cluster Server Configuration

Part Number | Description | Quantity
70B10007UX | RD640 (2U rack server with 8 x 2.5-inch hot-swap HDD bays): 1 x Intel Xeon processor E v2 (8 cores, 20MB cache, 2.00GHz, 7.20GT/s QPI); 1 x 8GB DDR3L-1600MHz (2Rx8) RDIMM; 1 x ThinkServer RAID 700 Adapter II (RAID 0, 1, 5, 6, 10, 50, 60); 2 x integrated 1 Gb Ethernet; ThinkServer Management Module; slim DVD R/W optical; 1 x 800W Gold hot-swap redundant power supply; ThinkServer tool-less rail kit; Next Business Day On-site Warranty, 3 Years Parts and Labor | 1
0C19534 | ThinkServer 8GB DDR3L-1600MHz (2Rx8) RDIMM | 7
4XB0F28655 | ThinkServer Syncro CS 9286-8e 6Gb High Availability Enablement Kit by LSI (includes two ThinkServer 1-meter external mini-SAS cables) | 1
0C19495 | ThinkServer 500GB 7.2K 2.5-inch enterprise 6Gbps SATA hot-swap hard drive | 2
0C19506 | ThinkServer 1Gbps Ethernet I350-T2 Server Adapter by Intel (dual port, 1Gb BASE-T) | 1
67Y2624 | ThinkServer Management Module Premium for Remote iKVM | 1
4X20E | 800W Gold hot-swap redundant power supply | 1
4XI0E51562 | Windows Server 2012 R2 Datacenter | 1

3.2 ThinkServer SA120 DAS Array

The ThinkServer SA120 is a 2U rack-mountable storage enclosure that provides both 2.5-inch and 3.5-inch drive bays in a single enclosure. The SA120 is unique in that twelve 3.5-inch hard disk drives (HDDs) mount in the front while four 2.5-inch drives mount in the rear of the enclosure. The rear 2.5-inch bays are reserved exclusively for optional Intel enterprise solid-state drives (SSDs), providing an optimal tiered storage platform in one dense enclosure.¹

¹ The 2.5-inch SSD drives are not supported with the Syncro solutions.

The SA120 supports direct-attached 6Gbps SAS connectivity and integrates seamlessly with ThinkServer rack and tower models via supported ThinkServer LSI SAS and RAID adapters. The SA120 features hot-swap disk drives, SAS input/output controller cards (IOCCs), and redundant fans and power supplies. Drives and power supplies are common with other ThinkServer systems and can be shared, increasing convenience and reducing overall costs.

A typical configuration for the SA120 includes:
- Two IOCCs with dual SAS connections per controller
- Twelve 1TB 7,200 rpm SAS 3.5-inch HDDs

Table 3 provides ordering information for the SA120 typical configuration.

Table 3 SA120 Configuration

Part Number | Description | Quantity
70F10001UX | SA120 (2U rack-mountable disk array with 12 x 3.5-inch hot-swap HDD bays): dual ThinkServer Storage Array I/O Modules (6 Gbps); dual redundant 550W PSUs; two ThinkServer 1-meter external mini-SAS cables; ThinkServer static rail kit; Next Business Day On-site Warranty, 3 Years Parts and Labor | 1
0C19530 | ThinkServer 3.5-inch 1TB 7.2K SAS 6Gbps hot-swap hard drive | 12

3.3 Scaling the Solution

The servers and SA120 hardware can scale to optimize for cost and performance requirements. The factors most likely to be modified to scale the solution include:

- Increase processing bandwidth for auxiliary processes (e.g. anti-virus, deduplication, backup for storage, or additional VMs for applications) by raising the performance and power rating of the processors and increasing the amount of installed memory in each server cluster node.
- Increase network IOPS by increasing the number of NIC ports, or the bandwidth of the ports, in each server cluster node.

- Expand the storage array capacity by adding more, or higher-capacity, drives. Capacity can also be increased by adding additional clusters (servers and JBOD shared storage).²
- Enhance performance by adding additional drives (more spindles in a RAID virtual drive).

² The clusters described in this document are limited to two servers and one JBOD.

Table 4 provides recommended options to address capacity and performance requirements, and to enable connectivity to various Ethernet networks.

Table 4 Server Expansion Options

Option | Description | Part Number
Memory | ThinkServer 4GB DDR3-1866MHz (1Rx8) RDIMM | 4X70F28585
Memory | ThinkServer 8GB DDR3-1866MHz (1Rx4) RDIMM | 4X70F28586
Memory | ThinkServer 16GB DDR3-1866MHz (2Rx4) RDIMM | 4X70F28587
Memory | ThinkServer 4GB DDR3L-1600MHz (1Rx8) RDIMM | 0C19533
Memory | ThinkServer 8GB DDR3L-1600MHz (2Rx8) RDIMM | 0C19534
Memory | ThinkServer 16GB DDR3L-1600MHz (2Rx4) RDIMM | 0C19535
HDDs | ThinkServer 500GB 7.2K 3.5-inch Enterprise 6Gbps SATA Hot Swap Hard Drive | 0A89473
HDDs | ThinkServer 1TB 7.2K 3.5-inch Enterprise 6Gbps SATA Hot Swap Hard Drive | 0A89474
HDDs | ThinkServer 2TB 7.2K 3.5-inch Enterprise 6Gbps SATA Hot Swap Hard Drive | 0A89475
HDDs | ThinkServer 3TB 7.2K 3.5-inch Enterprise 6Gbps SATA Hot Swap Hard Drive | 0A89477
HDDs | ThinkServer 3.5-inch 4TB 7.2K Enterprise SATA 6Gbps Hot Swap Hard Drive | 0C19520
HDDs | ThinkServer 3.5-inch 300GB 15K SAS 6Gbps Hot Swap Hard Drive | 67Y2616
HDDs | ThinkServer 3.5-inch 600GB 15K SAS 6Gbps Hot Swap Hard Drive | 4XB0F28644
HDDs | ThinkServer 3.5-inch 1TB 7.2K SAS 6Gbps Hot Swap Hard Drive | 0C19530
HDDs | ThinkServer 3.5-inch 2TB 7.2K SAS 6Gbps Hot Swap Hard Drive | 0C19531
HDDs | ThinkServer 3.5-inch 3TB 7.2K SAS 6Gbps Hot Swap Hard Drive | 0C19532
Network Adapters | ThinkServer 1Gbps Ethernet I350-T2 Server Adapter by Intel | 0C19506
Network Adapters | ThinkServer 1Gbps Ethernet I350-T4 Server Adapter by Intel | 0C19507
Network Adapters | Lenovo 10Gbps Ethernet X520-SR2 Server Adapter by Intel | 0C19487
Network Adapters | Lenovo 10Gbps Ethernet X520-DA2 Server Adapter by Intel | 0C19486
Network Adapters | Lenovo 10Gbps Ethernet X540-T2 Server Adapter by Intel | 0C19497
Network Adapters | Lenovo 10Gbps Ethernet Fibre Module by Intel | 0C19488

Table 5 SA120 Expansion Options

Option | Description | Part Number
HDDs | ThinkServer 3.5-inch 1TB 7.2K SAS 6Gbps Hot Swap Hard Drive | 0C19530
HDDs | ThinkServer 3.5-inch 2TB 7.2K SAS 6Gbps Hot Swap Hard Drive | 0C19531
HDDs | ThinkServer 3.5-inch 3TB 7.2K SAS 6Gbps Hot Swap Hard Drive | 0C19532
HDDs | ThinkServer 3.5-inch 4TB 7.2K SAS 6Gbps Hot Swap Hard Drive | 4XB0F28635
Cables | ThinkServer 1 meter External mini-SAS cable | 4X90F31495
Cables | ThinkServer 2 meters External mini-SAS cable | 4X90F31496
Cables | ThinkServer 4 meters External mini-SAS cable | 4X90F31497
Cables | ThinkServer 6 meters External mini-SAS cable | 4X90F

4.0 Configuration Guide

This section explains how to set up the hardware components and configure the high-availability cluster. The basic steps are as follows:

1. Configure the hardware: ensure firmware is up to date and hardware settings are configured.
2. Install and make physical connections to the hardware.
3. Configure the drive groups and the virtual drives on each server and the SA120:
   - Configure the internal RAID and virtual drive for the ThinkServer OS boot drive.
   - Configure the shared virtual drives in the SA120 with Syncro.
4. Install and configure Windows Server 2012 R2 on both servers in the cluster.
5. Install and configure the cluster feature on both servers.
6. Enable high-availability services for the storage cluster or the application cluster.
7. Test the failover cluster.

4.1 Pre-installation Tasks

To prepare for installation of Windows Server 2012 R2, ensure the following tasks are completed:

1. Select and install the desired server storage and network connectivity options. Recommended options are listed in Table 4.
2. Ensure that the server firmware is up to date. If necessary, update the system BIOS, ThinkServer Management Module (TMM), and Syncro controller to the latest versions. Server BIOS and TMM updates can be installed using the ThinkServer Firmware Updater tool, available from the Lenovo support website.
3. Configure BIOS settings, including:
   a. System date and time
   b. Boot devices and boot order
   c. TMM management interfaces

4.2 Making the Physical Connections

Hardware connections should be made as follows:

Storage Connections

Figure 7 shows the SAS cable connections for two ThinkServer nodes and a single SA120 enclosure. Dual connections to the controllers provide redundant paths that safeguard against cable or controller failure. Table 6 summarizes the connections.

Figure 7 SAS Connections

Table 6 SAS Point-to-Point Connections

Server Connection | SA120 Connection
Server A Syncro Top Connector | I/O Module 1 A
Server A Syncro Bottom Connector | I/O Module 2 A
Server B Syncro Top Connector | I/O Module 1 B
Server B Syncro Bottom Connector | I/O Module 2 B

Network Connections

The servers have three integrated 1 GbE ports (one can be shared with, or dedicated to, the TMM for system management), and the server can support additional 1 GbE or 10 GbE ports with optional Ethernet adapters. In the basic configuration, a two-port Ethernet adapter is used to connect to the public local area network for access to the failover cluster.

In a basic configuration, connections to the public network are made with ports 1 and 2 (connections A1 and A2 in Figure 8) to redundant switches. When additional optional Ethernet adapters are used (in 2U servers), additional aggregate bandwidth topologies are possible.

A private heartbeat network connection is required for the cluster; it attaches to an isolated network segment that is shared among the failover cluster nodes. There should be no other network communication on this network segment. The most typical connection type for the heartbeat segment between the nodes of a two-node failover cluster is a crossover network cable. This method is used in this document (connection B in Figure 8). If you connect to the LAN infrastructure, the network segment must be isolated.

The management port is typically connected to a separate Ethernet switch or VLAN dedicated to management traffic (connection C in Figure 8).

Figure 8 Network Connections

4.3 Creating Virtual Drives on Each Server Node

Before attempting to install the operating system on the server, the internal RAID subsystem on each server must be configured. This can be accomplished either by using the EasyStartup configuration tool to preconfigure the RAID subsystem and install the operating system, or manually. Manual configuration can be done using either the pre-boot WebBIOS Configuration Utility or the MegaRAID CLI interface, which is suitable for scripting. The WebBIOS Configuration Utility allows the creation, management, and deletion of RAID arrays from the available physical drives attached to the RAID adapter. If RAID volumes have already been configured, the Configuration Utility does not automatically change their configuration.

To configure the internal server RAID subsystem:

1. Enter WebBIOS during system POST.
2. Create a new RAID configuration in which a drive group is created from the available HDDs, and a RAID 1 virtual drive is created from the drive group.
3. Ensure that the new virtual drive is set as the boot drive.

Figure 9 shows the completed configuration in WebBIOS.

Figure 9 Configure Server Virtual Drive
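For unattended builds, the MegaRAID CLI mentioned above can create the same boot volume from a command prompt once the OS (or a preinstallation environment) is running. A minimal sketch, assuming the MegaCli64 binary is installed and the two internal drives report as enclosure 252, slots 0 and 1 (hypothetical IDs; confirm them with the first query):

    # List physical drives to find the actual enclosure:slot IDs
    MegaCli64 -PDList -aALL

    # Create a RAID 1 virtual drive from the two internal drives for the OS boot volume
    MegaCli64 -CfgLdAdd -r1 [252:0,252:1] WB RA Direct -a0

    # Verify the new virtual drive
    MegaCli64 -LDInfo -Lall -a0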

4.4 Operating System Installation and Configuration

Windows Server 2012 R2 can be installed manually or by using EasyStartup. Both nodes should run the same version of the operating system and be updated to the same level. Configure basic OS settings, including networking and other features, before creating the failover cluster.

4.4.1 Install OS and Perform Initial Configuration

To install the OS manually, complete the following steps:

1. Depending on your server configuration, attach an external CD/DVD reader device.
2. Install the OS from the media and follow the prompts, completing the installation as directed by the installation routine.

After the OS is successfully installed, log on to the system using the local administrator password created during the installation process. After logging in, the Server Manager is displayed (see Figure 10).

Figure 10 Server Manager

3. In Server Manager, select Local Server to perform basic system configuration:
- Change the Computer Name for each node. In our example, we use:
  - Server node 1: csnode1
  - Server node 2: csnode2
- Configure the system date and time / time zone.
- If desired, enable and configure Remote Desktop.
- Enable Remote Management (remote management of this server from other servers).
- Ensure all required hardware device drivers are installed and updated to the latest levels. In particular, the Syncro device driver should be at the current level. Use Device Manager to update the drivers as shown in Figure 11.

Figure 11 Update Device Drivers

- Configure and install Windows Updates.
- Add each server node to the same Active Directory domain; a reboot will be required. Future logins to the servers should use the domain account.

4.4.2 Enabling Clustered RAID Controller Support

Support for clustered RAID controllers is not enabled by default in Microsoft Windows Server 2012 R2. To enable support for this feature, perform the following steps:

1. Open Registry Editor (regedit.exe).
2. Locate and then create the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters
3. Right-click the Parameters key and then choose New.
4. Select DWORD and give it a name of AllowBusTypeRAID.
5. Once the value is created, set its data to 0x01.

Figure 12 Clustered RAID Registry Key

6. Exit the Registry Editor.
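The same change can be made from an elevated PowerShell prompt on each node; a minimal sketch (the key path and value name come from the steps above):

    # Create the Parameters subkey if it does not already exist
    New-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters" -Force | Out-Null

    # Create the AllowBusTypeRAID DWORD value and set it to 1 (0x01)
    New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters" `
        -Name "AllowBusTypeRAID" -PropertyType DWord -Value 1 -Force | Out-Null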

4.5 Configure Networking

Naming the network connections will simplify management of the failover cluster. In addition, some TCP/IP settings for the failover cluster must be configured exactly, while others permit choices to match your network configuration. Specific recommendations are provided in this section and are shown in Figure 13.

4.5.1 Public Network

Figure 13 Network Connections

The public network attaches to the local area network for client access to the cluster. This is the network that clients will use to access the failover cluster. Multiple ports on the same network enable load balancing and redundancy.

The public network ports (named Public-1 and Public-2) can use either statically assigned TCP/IP settings or the default settings provided through the Dynamic Host Configuration Protocol (DHCP). DHCP is the preferred method of assigning the addresses for the public interfaces, because this will simplify the configuration of the cluster on the network. Use DHCP-assigned addresses for the physical network adapter's IP address, as well as for all virtual IP addresses assigned to virtual servers configured within the failover cluster if configuring an application cluster solution. No additional network configuration is typically required if DHCP assignment is used, except for setting a reservation in the DHCP scope if you want the cluster to have a consistent address.

4.5.2 Heartbeat Network

The heartbeat network (named Heartbeat-1) is used only for the heartbeat communication between failover cluster nodes, so most network services for this interface can be disabled. To modify network connection properties:

1. Open Network Connections. Click Start, right-click Network, and then click Properties. In Network and Sharing Center, click Change adapter settings.
2. Open Properties for a network connection. Right-click the network connection and then click Properties.

Figure 14 Heartbeat Network Settings

3. Uncheck the following unnecessary network features:
- Client for Microsoft Networks
- File and Printer Sharing for Microsoft Networks
- QoS Packet Scheduler
- Internet Protocol Version 6 (TCP/IPv6)

4. Double-click Internet Protocol Version 4 (TCP/IPv4) to modify its properties. It is common to use an address range of 10.x.x.x for the private heartbeat network. Enter a different IP address for each server. A default gateway and DNS server are not necessary when using a crossover cable for this network and do not need to be entered.

Figure 15 Heartbeat Network IP Address

Click the DNS tab and uncheck Register this connection's addresses in DNS. Click the WINS tab, uncheck Enable LMHOSTS lookup, and select Disable NetBIOS over TCP/IP.

5. Click OK to save the changes.
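The same interface settings can be applied from PowerShell; a minimal sketch, assuming the heartbeat connection was renamed Heartbeat-1 as recommended and this node should use 10.10.10.1 (a hypothetical address in the 10.x.x.x range suggested above):

    # Assign the static heartbeat address; no gateway or DNS server is needed
    New-NetIPAddress -InterfaceAlias "Heartbeat-1" -IPAddress 10.10.10.1 -PrefixLength 24

    # Do not register this interface in DNS
    Set-DnsClient -InterfaceAlias "Heartbeat-1" -RegisterThisConnectionsAddresses $false

    # Unbind the services that are unnecessary on the heartbeat segment
    Disable-NetAdapterBinding -Name "Heartbeat-1" -ComponentID ms_msclient  # Client for Microsoft Networks
    Disable-NetAdapterBinding -Name "Heartbeat-1" -ComponentID ms_server    # File and Printer Sharing
    Disable-NetAdapterBinding -Name "Heartbeat-1" -ComponentID ms_pacer     # QoS Packet Scheduler
    Disable-NetAdapterBinding -Name "Heartbeat-1" -ComponentID ms_tcpip6    # IPv6

Use a different address (for example 10.10.10.2) on the second node.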

4.5.3 Configure NIC Teaming

In order to use more than one Ethernet port together in the cluster, the adapters need to be teamed prior to the creation of the cluster. In Windows Server 2012 R2, NICs can be teamed via software from the NIC manufacturer (such as Intel) or through the built-in load balancing and failover (LBFO) option within Windows Server 2012 R2. To configure teaming using the Intel software for the NIC shown in the base configuration, complete the following steps:

1. Open Network Connections from the Control Panel, right-click the first adapter to be used in the NIC team, and select Properties. Then click Configure. Click the Teaming tab and check the option to Team this adapter with other adapters. Then click the New Team button.

Figure 16 NIC Teaming Control Panel

2. Create a name for the team. In this example, Public-Team is used. Click Next to continue.

Figure 17 New Team Wizard (Name the Team)

3. Select the network adapter ports to be teamed. In the figure below, the Intel I350 network adapter ports are selected.

Figure 18 New Team Wizard (Select Adapters)

4. Choose the type of teaming method. In the figure below, Adaptive Load Balancing is selected. This allows for both load balancing and fault tolerance on the port team. No special switch configuration is needed to use this mode.

Figure 19 New Team Wizard (Select Teaming Method)

5. Click Finish to complete the Team Wizard.

Figure 20 New Team Wizard (Completed)

6. A new network adapter is created that represents the NIC team, and the connections that make up the team are added to the available network connections.

Figure 21 Teamed Network Connection
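If you prefer the built-in Windows Server 2012 R2 LBFO teaming over the vendor utility, the equivalent team can be created with one PowerShell command; a sketch assuming the two public ports use the connection names from this guide:

    # Switch-independent teaming needs no special switch configuration,
    # similar to the Adaptive Load Balancing mode chosen above
    New-NetLbfoTeam -Name "Public-Team" -TeamMembers "Public-1", "Public-2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic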

4.6 Create Virtual Drives on the SA120

Before the drives in the SA120 can be used, they must be configured into drive groups, which hold one or more divisions known as virtual drives. Each virtual drive is assigned a RAID level and is seen by the host computer system as a single drive volume. The high-availability cluster configuration requires that virtual disks used for storage be shared; otherwise, they are visible only to the controller node that created them. A minimum of one shared virtual disk is required for use as a quorum disk to enable the operating system's cluster support. This section explains how to configure the virtual disks using the WebBIOS pre-boot utility. This procedure will configure the virtual drives as shown in section 2.4, Storage Array Design.

To coordinate the configuration of the two controller nodes, both nodes must be booted into the WebBIOS pre-boot utility simultaneously. After powering on the two nodes in the cluster, rapidly access both consoles. One of the systems is used to create the virtual drives while the other system simply remains in the pre-boot utility. This approach keeps the second system in a state that does not fail over while the virtual drives are being created on the first system.

1. Simultaneously power on both servers.
2. On each system, when prompted during POST, type CTRL-H for the Syncro controller to access the WebBIOS pre-boot utility. Wait until both systems are running the WebBIOS utility, and then proceed to the next step.

Figure 22 Prompt to Enter Syncro WebBIOS

3. Select the LSI Syncro card from the menu if more than one LSI adapter is present. Click Start.

Figure 23 WebBIOS Adapter Selection

4. On the WebBIOS main page, click Configuration Wizard, as shown in Figure 24.

Figure 24 WebBIOS Main Page

5. The Configuration Wizard appears. Select New Configuration and click Next.
6. On the Select Configuration screen, select Virtual Drive Configuration and press Next.

Figure 25 Select Configuration

7. On the Select Configuration Method screen, select Manual Configuration and click Next.

Figure 26 Select Configuration Method

8. The Drive Group Definition screen appears. In the Drives panel on the left, select the drives to be included in the drive group, and click Add To Array. Hold down the Ctrl key to select multiple drives simultaneously. In this example, select drives in slots 0 through 4 for the first drive group, as shown in Figure 27.

Figure 27 Drive Group Definition

9. After adding the drives to the drive group, click Accept DG and then click Next.

Figure 28 Accept Drive Group 0 Definition

10. On the Span Definition screen, select the drive group just created and click Add to SPAN, then click Next.

Figure 29 Span Definition

11. On the Virtual Drive Definition screen, create the virtual drives as described in section 2.4, Storage Array Design. In this first drive group, create a virtual drive for the Quorum; the remaining space will be used for shared data. The quorum disk must be at least 50MB, but it does not require more than 1GB of space. In this example, we recommend that 500MB be allocated, as shown in Figure 30.

Ensure that the Provide Shared Access checkbox is selected.

Figure 30 Virtual Drive Definition for Quorum

The Provide Shared Access option enables a shared virtual drive that both controller nodes can access. If this option is deselected, the virtual drive will be available exclusively to the node that creates it. After all settings have been configured, click Accept, and then click Next.

12. On the Confirmation Page, select Yes to confirm usage of Write Back with BBU mode.
13. Click Back to return to the Virtual Drive Definition page to create the second virtual drive in the drive group for shared data. Settings for this virtual drive are shown in Figure 31. To use the remaining space available, click Update Size to quickly enter the value in the Select Size field.

Figure 31 Virtual Drive Definition for Shared Data

14. Repeat the previous steps to create the other drive groups and virtual drives as desired. As the virtual drives are configured on the first controller node, the other controller node's drive listing is updated to reflect the use of the drives.

Figure 32 Drive Group 1 Definition

15. When prompted, click Yes to save the configuration, and click Yes to confirm that you want to initialize it.
16. Define hot-spare disks for the virtual drives to maximize the level of data protection. Syncro supports global hot spares and dedicated hot spares. Global hot spares are global for the cluster, not just for a controller.

Select Drives from the main menu and select the drives to configure as spares. Select Properties, then press Go.

17. Select Make Global HSP and click Go.

Figure 33 Select Drive for Hot Spare

Figure 34 Configure Hot Spare

18. When finished, the drive groups, virtual drives, and hot spares can be viewed from the main screen, as shown in Figure 35.

Figure 35 Syncro Configuration Logical View

19. When all virtual drives and spares are configured, exit WebBIOS and reboot both systems.

4.7 Expose Storage to the Cluster Nodes

Before the failover cluster is created, verify that all cluster servers can see the shared disks.

1. To verify from one console that all servers can see the shared disks, make sure that you add all computers that you want to add as cluster nodes to Server Manager.
2. In Server Manager, click File and Storage Services, and then under Volumes, click Disks.
3. Under each server, verify that the shared disks are listed.
4. All shared disks must be formatted with one or more NTFS volumes.
5. One of the shared disks is used as the quorum disk. It must be formatted with an NTFS volume.

Storage can be configured for use with the cluster from either Server Manager or the Disk Management plugin in Computer Management. This section demonstrates using Server Manager.

1. On one of the server nodes, open Server Manager. Select File and Storage Services, and then Disks. Available disks will appear as unknown and online. Figure 36 shows the following drives:
- Drive 0: Server boot drive (Windows Server 2012); should not be used for cluster storage
- Drive 1: 500MB for Quorum
- Drive 2: 7.27TB for Shared Data
- Drive 3: 7.28TB for Shared Data

Figure 36 Server Manager Disks

2. For each disk to prepare, select the disk, right-click, and select New Volume from the context menu.

Figure 37 Create Volume

3. The New Volume Wizard will appear. Select the disk to use to create the volume. If both server nodes are known to the system, the volume can be configured to be controlled by that node. Click Next.

Figure 38 Select Server and Disk

4. Specify the volume size. In this example, allocate all available capacity for the volume. Click Next.

Figure 39 Specify the Volume Size

5. Assign a drive letter to the volume. The drive letters will not necessarily be the same on every node of the cluster. In Figure 40, the drive letter is assigned as Q to indicate the Quorum drive.

Figure 40 Assign Drive Letter

6. Select the format options as shown in Figure 41. Name the volume to match its intended purpose. In this example we use the following volume names:
- Quorum drive: Quorum
- Shared VD0: VD-0
- Shared VD1: VD-1
7. Confirm the settings and click Create.

Figure 41 File System Settings

Figure 42 Confirm New Volume Settings

8. Verify that each server recognizes the disks by viewing the disks in Server Manager or in Disk Management.

Figure 43 Shared Storage in Server Manager
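The same preparation can be scripted on one node; a minimal sketch, assuming the 500MB quorum LUN surfaced as disk number 1 (verify the numbers with Get-Disk first):

    # Show disks that are still unpartitioned
    Get-Disk | Where-Object PartitionStyle -eq 'RAW'

    # Bring the quorum LUN online, partition it, and format it as NTFS
    Set-Disk -Number 1 -IsOffline $false
    Initialize-Disk -Number 1 -PartitionStyle GPT
    New-Partition -DiskNumber 1 -DriveLetter Q -UseMaximumSize
    Format-Volume -DriveLetter Q -FileSystem NTFS -NewFileSystemLabel "Quorum"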

4.8 Creating the Cluster

The following section describes how to configure and validate the failover cluster in Windows Server 2012 R2.

4.8.1 Installing the Failover Clustering Feature

The Microsoft Windows Server 2012 R2 operating system installation does not enable the clustering feature by default. Follow these steps to view the system settings and to enable clustering.

1. Launch the Server Manager dashboard.

Figure 44 Server Manager Dashboard

2. If the Before You Begin box appears, click Next. Select Role-based or feature-based installation.

Figure 45 Add Roles and Features Wizard

3. In the Select Destination Server box, select the local server and click Next.

Figure 46 Select Destination Server

4. On the Select Server Roles screen, click Next.
5. On the Select Features screen, select the Failover Clustering checkbox. Click Next.
6. Confirm the selection and click Install.

Figure 47 Select Features

Figure 48 Confirm Installation Selections

7. Close the Installation Wizard when the installation has completed.

Figure 49 Installation Progress

8. Repeat these steps on the other server that will form the cluster.
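The feature can also be enabled from PowerShell on each node; a minimal sketch:

    # Install Failover Clustering along with Failover Cluster Manager and the PowerShell module
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools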

4.8.2 Validating the Failover Cluster Configuration

Microsoft recommends that the configuration be validated before the cluster is formed. Validation verifies that network, storage, and system configuration requirements are met and that the nodes can form an effective cluster. To do this, run the Validate a Configuration wizard. The tests in the validation wizard include simulations of cluster actions and inspect the following aspects of the system:

- System: These tests analyze whether the two server modules meet specific requirements, such as running the same operating system version with the same software updates.
- Network: These tests analyze whether the planned cluster networks meet specific requirements, such as requirements for network redundancy.
- Storage: These tests analyze whether the storage meets specific requirements, such as whether the storage correctly supports the required SCSI commands and handles simulated cluster actions correctly.

To validate the configuration, perform the following steps:

1. Launch the Failover Cluster Manager tool from Server Manager: select Server Manager > Tools > Failover Cluster Manager.

Figure 50 Failover Cluster Manager

2. In the actions pane, click Validate Configuration. The Validate a Configuration Wizard starts.
3. In the Select Servers screen, enter the name of each server to be added to the cluster. Click Add after each name is entered. After all nodes are listed, click Next.

4. Select Run all tests, and click Next.

Figure 51 Select Cluster Servers

Figure 52 Cluster Validation Test Options

5. Confirm the tests to run, then click Next to begin.

Figure 53 Cluster Validation Confirmation

6. When the tests complete, a summary of the results will be provided. The detailed results can be viewed by clicking View Report. Deselect Create the cluster now using the validated nodes and click Finish.

Figure 54 Cluster Validation Summary Report

If any of the validation tests fail or result in a warning, you should review the validation report and resolve the issues before creating the cluster. Be sure to run the Validate a Configuration Wizard again to verify that all issues have been resolved.
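Validation can also be run from PowerShell; a minimal sketch using the node names from this guide:

    # Run the full validation suite against both intended cluster nodes and produce a report
    Test-Cluster -Node csnode1, csnode2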

4.8.3 Creating the Failover Cluster

After successfully completing the cluster validation, create the failover cluster by performing the following steps:

1. Launch the Failover Cluster Manager tool.

Figure 55 Failover Cluster Manager

2. In the actions pane, click Create Cluster... The Create Cluster Wizard starts.
3. In the Select Servers screen, enter the name of each server to be added to the cluster. Click Add after each name is entered. After all nodes are listed, click Next.

Figure 56 Select Servers

4. Enter the name that you want to assign to the cluster in the cluster name field. If the wizard requests that an IP address be entered for the cluster, deselect all networks; the networks will be configured later in section 4.8.4, Set Cluster Network Properties. Click Next.

Figure 57 Cluster Name and IP Address

5. A confirmation page containing the cluster properties appears. If no other changes are required, you have the option to specify available storage by selecting the Add all eligible storage to the cluster check box. Deselect this box; the storage will be added to the cluster later in section 4.8.5, Add Disks to the Cluster. Click Next.

Figure 58 Create Cluster Confirmation

6. After the cluster is created, a cluster creation report summary appears. This report includes any errors or warnings encountered. Click the View Report button for additional details about the report. In this case, a warning is generated because no storage has been added to the cluster and, as a result, no quorum drive has been configured yet. The quorum drive will be configured in section 4.8.6, Create the Quorum Drive. Click Finish.

Figure 59 Create Cluster Summary
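The same cluster can be created from PowerShell; a sketch, using a hypothetical cluster name of CS-Cluster:

    # Create the cluster without claiming storage; disks are added in section 4.8.5
    New-Cluster -Name "CS-Cluster" -Node csnode1, csnode2 -NoStorage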

4.8.4 Set Cluster Network Properties

After the failover cluster has been created, configure the network usage in Failover Cluster Manager. This step tells the cluster which network connections are used by the cluster, and which are available for network access by clients. To configure network connections in the failover cluster, perform the following steps:

1. Open Failover Cluster Manager. Expand the cluster, and expand the Networks node.

Figure 60 Cluster Networks

2. Select a network, and select Properties from the Action panel.
3. Under Name, type the corresponding network name for the connection. This should match the network connection naming convention created earlier (see section 4.5, Configure Networking).
4. Click the appropriate network options for the connection. Refer to Table 7 below for the information used in this example.

Note that by default, only networks configured with a default gateway will be set automatically to Allow clients to connect through this network. The network connections you create for the public network (that is, the connections clients use to connect to the cluster) will have a default gateway address whether you statically assign the addresses or use DHCP. The isolated network segments used for the heartbeat communication do not have default gateways assigned. When the failover cluster is created, the wizard should correctly configure these networks based on the addressing used.

Table 7 Cluster Network Settings

Network | Name | Cluster Use | IP Address | Allow cluster network communication on this network | Allow clients to connect through this network
Cluster Network 1 | Public | Client Access and Cluster | #/21 | Yes | Yes
Cluster Network 2 | Mgmt | None | #/24 | No | No
Cluster Network 3 | Heartbeat | Cluster Only | #/24 | Yes | No

(The # entries stand in for the site-specific subnet addresses.)
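These properties can also be set from the cluster PowerShell module; a sketch using the names from Table 7 (Role 3 = cluster and client, 1 = cluster only, 0 = none):

    # Rename the networks to match the convention used in this guide
    (Get-ClusterNetwork "Cluster Network 1").Name = "Public"
    (Get-ClusterNetwork "Cluster Network 2").Name = "Mgmt"
    (Get-ClusterNetwork "Cluster Network 3").Name = "Heartbeat"

    # Set how the cluster may use each network
    (Get-ClusterNetwork "Public").Role    = 3   # cluster and client traffic
    (Get-ClusterNetwork "Mgmt").Role      = 0   # not used by the cluster
    (Get-ClusterNetwork "Heartbeat").Role = 1   # cluster-only traffic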

4.8.5 Add Disks to the Cluster

Storage that was previously created and exposed to the cluster nodes must now be made available for the cluster to use.

1. Open Failover Cluster Manager. Expand the cluster, and expand the Disks node.
2. In the actions panel, click Add Disk.

Figure 61 Failover Cluster Disks

3. Select the disk or disks to add and click OK. The selected disks are brought online.

Figure 62 Add Disks to Cluster

4. The disks added to the cluster appear in the Failover Cluster Manager. These disks will be used to create the Quorum drive, as well as shared storage for the failover cluster.

Figure 63 Cluster Disks
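From PowerShell, every disk that passes the cluster's eligibility checks can be added in one pipeline; a minimal sketch:

    # Add all eligible shared disks to the cluster as available storage
    Get-ClusterAvailableDisk | Add-ClusterDisk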

4.8.6 Create the Quorum Drive

The quorum drive is required for the cluster to function correctly. To configure or change the quorum settings, perform the following steps:

1. Open Failover Cluster Manager, and select the cluster. With the cluster selected, under Actions, click More Actions, and then click Configure Cluster Quorum Settings. The Configure Cluster Quorum Wizard appears. Click Next.

Figure 64 Configure Quorum

2. On the Select Quorum Configuration Option page, select Select the Quorum Witness. Click Next.

Figure 65 Select Quorum Configuration Options

3. On the Select Quorum Witness page, select the option to configure a disk witness, and then click Next.

Figure 66 Select Quorum Witness

4. On the Configure Storage Witness page, select the storage volume that you want to assign as the disk witness, and then click Next.

Figure 67 Configure Storage Witness

5. Confirm your selections on the confirmation page that appears, and then click Next.

Figure 68 Confirm Cluster Quorum Settings

6. After the wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report. Click Next to exit the wizard.

Figure 69 Configure Cluster Quorum Summary Report

7. After completion of the wizard, the quorum witness will be listed in the Failover Cluster Manager.

Figure 70 Quorum in Failover Cluster Manager
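The witness can also be assigned from PowerShell; a sketch, assuming the 500MB volume came in as the cluster resource named "Cluster Disk 1" (check the actual name with Get-ClusterResource):

    # Use a node-and-disk-majority quorum with the small shared volume as the disk witness
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"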

5.0 Creating the Highly Available Storage Cluster

This section provides steps to configure and deploy the failover cluster as a high-availability file server. Two types of file servers can be created. The first is a File Server for General Use, which provides file shares to users and applications that open and close files frequently. It supports the NFS and SMB protocols, as well as Data Deduplication, DFS Replication, and other File Services roles, but it cannot use a Cluster Shared Volume (CSV) for storage. The second is a Scale-Out File Server for Application Data, which provides storage to server applications or Hyper-V VMs that leave files open for extended periods. This server type supports SMB, but not NFS, nor does it support the file services that the File Server for General Use provides. A Scale-Out File Server uses a CSV for storage. In this section, a highly available File Server for General Use is created.

5.1 Install the File Services Role

The File Services role should already be installed on the nodes of the failover cluster. If it is not, or if you want to verify that the role is installed, use the following steps.

1. Open Server Manager. Click Start, click Administrative Tools, and then click Server Manager.
2. Click the Roles node, and then click Add Roles. Click Next.

3. Click the File Services checkbox, if it is not already selected, and then click Next.
4. Click all the appropriate Role Services for your cluster to provide (such as DFS, FSRM, NFS, and so on), and then click Next.
5. Click Install.
6. When the wizard completes, click Close.

Repeat these steps on each node of the cluster.
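Alternatively, the file server role services can be added from PowerShell on each node; a sketch with a representative set of role services (adjust the list to what your cluster should provide; Get-WindowsFeature shows all available names):

    # File server plus two optional role services (DFS Namespaces and FSRM)
    Install-WindowsFeature -Name FS-FileServer, FS-DFS-Namespace, FS-Resource-Manager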

5.2 Create the Highly Available File Server
1. Open Failover Cluster Manager. Expand the cluster, and select the Roles node. In the Actions panel, click Configure Role.
Figure 71 Configure Role
2. The High Availability Wizard starts. Click Next.
3. Click File Server in the list of available roles, and then click Next.
Figure 72 Select Role
4. Select File Server for General Use and click Next.
Figure 73 File Server Type
5. Enter the name of the file server (CS-Cluster-FS1 in this example). If you are prompted to specify the networks to use, uncheck all statically assigned networks, because they represent isolated networks that clients cannot access. Click Next.

Figure 74 Client Access Point
6. Select one of the available disks to allocate to the file server cluster, and then click Next.
Figure 75 Select Storage
7. Click Next to confirm the operation.

Figure 76 Confirm File Server Settings
8. After the file server configuration has finished, the server and its assigned storage are visible in Failover Cluster Manager.
Figure 77 File Server in Failover Cluster Manager
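A single PowerShell command creates the same clustered file server role. In this sketch the role name matches the example above, while "Cluster Disk 2" is an assumed disk resource name.

# Create the clustered File Server for General Use role and bind it to a cluster disk.
Add-ClusterFileServerRole -Name "CS-Cluster-FS1" -Storage "Cluster Disk 2"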

5.3 Create a File Share
Shared folders must be contained in the file server cluster in order to provide failover capability. The following steps demonstrate how to create an SMB file share in the server cluster.
1. Open Failover Cluster Manager. Expand the cluster, and click the Roles node to show the highly available file server just created.
Figure 78 File Share in Failover Cluster Manager
2. Select the file server to display its resources. From the Actions panel, click Add File Share.
3. The New Share Wizard appears. Follow the instructions in the wizard; they depend on the file services you selected when installing the File Services role. In this example, we create a simple SMB share. Click Next.
Figure 79 Create SMB Share
4. In the Share Location pane, enter a location for the file share on a disk that is available to the cluster. Click Next.

Figure 80 Select the Share Location
5. Enter a name for the file share in the Share Name field. The wizard displays the remote path that users of the file server will use to access their shared files. Click Next.
Figure 81 Select Share Name
6. If the path entered does not exist, a warning is displayed with the option to create the path or to go back and correct the entry. Click OK to continue and create the share location.

Figure 82 New Share Path Does Not Exist
7. In the Configure Share Settings panel, select at least Enable Continuous Availability, so that the file share continues to operate without interruption in the event of a system fault. Click Next.
Figure 83 Configure Share Settings
8. Finally, specify permissions for the share. Click Next.
Figure 84 Specify Share Permissions

9. A confirmation page appears. Click Create to create the share.
Figure 85 Confirm Share Settings
10. At the completion of the wizard, the file share is displayed under the file server role of the cluster in Failover Cluster Manager.
Figure 86 Share in High Availability File Server
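The share can also be created from PowerShell on the node that currently owns the file server role. In this sketch the share name matches the example used in section 5.4 below; the local path and the group granted access are assumptions.

# Create a continuously available SMB share scoped to the clustered file server.
# The path and the group granted Full Control are assumed values.
New-SmbShare -Name "CS-GPFileShare" -Path "E:\Shares\CS-GPFileShare" `
    -ScopeName "CS-Cluster-FS1" -ContinuouslyAvailable $true `
    -FullAccess "DOMAIN\FileShareUsers"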

5.4 Mapping User Folders to the Highly Available File Server Share
Users can now access the highly available file server by mapping to the SMB share that was created. Direct users to \\<highly available file server name>\<file share name>. In the example above, this is \\CS-Cluster-FS1\CS-GPFileShare.
Connecting to the file server created in Failover Cluster Manager (instead of connecting to the cluster name or to any of the nodes in the cluster) may not be intuitive for users. The purpose of the highly available file server is to remain online regardless of which server is hosting the service, so the connection is made to the role rather than to a physical computer.
5.5 Test a Cluster Failover
After the failover cluster has been created and high-availability roles have been configured, the cluster's failover ability can be tested in Failover Cluster Manager. Use the following steps to test failover by moving a role to another node in the cluster.
1. Open Failover Cluster Manager and select the cluster. Expand the Roles node.
2. Select the role to move, in this case the highly available file server just created.
3. Right-click the role, and click Move on the context menu. Select the cluster node to move the role to.
Figure 87 Move Clustered Role
If the move operation completes successfully, there will be no errors or warnings, and the summary view of the service or application will update the Current Owner field to show the new node's name.
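The same failover test can be run from PowerShell; the destination node name below is an assumption.

# Move the clustered file server role to another node, then confirm the new owner.
Move-ClusterGroup -Name "CS-Cluster-FS1" -Node "Node2"
Get-ClusterGroup -Name "CS-Cluster-FS1"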

6.0 Creating the Highly Available Application Cluster
This section provides steps to configure and deploy the failover cluster with clustered Hyper-V virtual machines. In this configuration, guest VMs are managed through Failover Cluster Manager; in a standalone Hyper-V environment, guest VMs are managed through Hyper-V Manager.
6.1 Install Hyper-V
To install the Hyper-V role, perform the following steps; a PowerShell equivalent follows the steps.
1. Open Server Manager.
2. Click the Roles node, and then click Add Roles. Click Next.
3. Click the Hyper-V checkbox, if it is not already selected, and then click Next.
Figure 88 Select Hyper-V Role
4. In the Create Virtual Switch panel, select the Public-Team network adapter to which the virtual switch will be attached. Click Next.
Figure 89 Create Virtual Switch

5. In the Virtual Machine Migration panel, uncheck Allow this server to send and receive live migrations of virtual machines. Migration of VMs will be handled by the cluster. Click Next.
Figure 90 Virtual Machine Migration
6. The Default Stores panel allows you to select the default location for virtual machine files. Accept the default for now. Click Next.
Figure 91 Default Stores
7. The confirmation page is displayed. Click Install to install the Hyper-V role.

Figure 92 Confirm Hyper-V Role Selections
8. When the wizard completes, click Close. Repeat these steps on each node of the cluster.
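The Hyper-V role can likewise be pushed to both nodes from one console. The node names in this sketch are assumptions; each node restarts to complete the installation.

# Install the Hyper-V role and management tools on both nodes, restarting each one.
Invoke-Command -ComputerName Node1, Node2 -ScriptBlock {
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
}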

6.2 Create a Virtual Switch
Perform this step on both physical computers if you did not create the virtual switch when you installed the Hyper-V role. This virtual switch provides the highly available virtual machine with access to the physical network.
1. Open Hyper-V Manager.
2. From the Actions menu, click Virtual Switch Manager.
3. Under Create virtual switch, select External.
4. Click Create Virtual Switch. The New Virtual Switch page appears.
Figure 93 Virtual Switch Manager
5. Type a name for the new switch. Make sure you use exactly the same name on both servers running Hyper-V.
6. Under Connection Type, click External network, and then select the physical network adapter.
7. Click OK to save the virtual network and close Virtual Switch Manager.
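Creating the switch from PowerShell makes it easy to keep the name identical on both nodes. The switch name and node names below are assumptions; the adapter name matches the Public-Team example from section 6.1.

# Create an identically named external virtual switch on each node,
# bound to the teamed public adapter and shared with the management OS.
Invoke-Command -ComputerName Node1, Node2 -ScriptBlock {
    New-VMSwitch -Name "Public-VSwitch" -NetAdapterName "Public-Team" -AllowManagementOS $true
}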

6.3 Add a Disk as CSV to Store Virtual Machine Data
To implement certain scenarios for clustered virtual machines, the virtual machine storage and virtual hard disk file should be configured as Cluster Shared Volumes (CSV). CSV can enhance the availability and manageability of virtual machines by enabling multiple nodes to concurrently access a single shared storage volume, and it supports live migration of a Hyper-V virtual machine between nodes in a failover cluster. To configure a disk in clustered storage as a CSV volume, perform the following steps.
1. Open Failover Cluster Manager. Expand the cluster, expand Storage, and then click the Disks node.
Figure 94 Failover Cluster Manager Disks
2. Right-click a cluster disk, and then click Add to Cluster Shared Volumes.
Figure 95 Create CSV
3. The Assigned To column changes to Cluster Shared Volume.
Figure 96 CSV in Failover Cluster Manager
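In PowerShell, the same step is a single command; "Cluster Disk 3" is an assumed disk resource name.

# Add an available cluster disk to Cluster Shared Volumes, then list CSVs.
# CSV volumes mount under C:\ClusterStorage on every node.
Add-ClusterSharedVolume -Name "Cluster Disk 3"
Get-ClusterSharedVolume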

6.4 Create a Highly Available Virtual Machine
As a best practice, if you create a virtual machine on a failover cluster node, create it as a highly available virtual machine by running the Hyper-V New Virtual Machine Wizard directly from Failover Cluster Manager. A virtual machine created in this way is automatically configured for high availability.
1. In Failover Cluster Manager, select or specify the cluster that you want. Ensure that the console tree under the cluster is expanded. Click Roles.
Figure 97 Failover Cluster Manager Roles

2. In the Actions pane, click Virtual Machines, and then click New Virtual Machine.
Figure 98 Add HA VM
3. Select a cluster node on which to initially install the VM, and click OK.
Figure 99 New Virtual Machine Node
4. The New Virtual Machine Wizard appears. Click Next.
5. On the Specify Name and Location page, specify a name for the virtual machine. In this example, we use CS-VM1. Click Store the virtual machine in a different location, and then type the full path or click Browse and navigate to the CSV created earlier. Click Next.
Figure 100 VM Name and Location

6. Specify the VM generation. Click Next.
Figure 101 VM Generation
7. On the Assign Memory page, specify the amount of memory required for the operating system that will run on this virtual machine. In this example, specify 1024 MB. Click Next.

Figure 102 Assign VM Memory
8. On the Configure Networking page, connect the VM to the virtual switch that you configured in section 6.1, Install Hyper-V. Click Next.
Figure 103 Configure VM Networking
9. On the Connect Virtual Hard Disk page, click Create a virtual hard disk. Type the full path or click Browse and navigate to the CSV created earlier. Click Next.

Figure 104 Connect Virtual Hard Disk
10. On the Installation Options page, specify the location of the guest OS installation media, or defer the installation to a later time. Click Finish.
Figure 105 VM Installation Options
The virtual machine is created, and the High Availability Wizard in Failover Cluster Manager automatically configures it for high availability.
11. The highly available VM is added to the cluster in Failover Cluster Manager.
Figure 106 VM in Failover Cluster Manager
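The VM can also be created and clustered from PowerShell on the node that will initially own it. The CSV volume path, VHDX size, and switch name below are assumptions; the VM name and memory follow the example above.

# Create a Generation 2 VM on CSV storage, then register it as a clustered role.
New-VM -Name "CS-VM1" -MemoryStartupBytes 1GB -Generation 2 `
    -Path "C:\ClusterStorage\Volume1" `
    -NewVHDPath "C:\ClusterStorage\Volume1\CS-VM1\CS-VM1.vhdx" -NewVHDSizeBytes 40GB `
    -SwitchName "Public-VSwitch"
Add-ClusterVirtualMachineRole -VirtualMachine "CS-VM1"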

6.5 Test a Planned Failover
To test a planned failover, you can live migrate the clustered virtual machine that you created to another node.
1. In Failover Cluster Manager, select or specify the cluster that you want. Ensure that the console tree under the cluster is expanded.
2. To select the destination node for live migration of the clustered virtual machine, right-click CS-VM1 (the clustered virtual machine previously created), point to Move, point to Live Migration, and then click Select Node. As the virtual machine is moved, the status is displayed in the results pane (center pane).
3. Verify that the move succeeded by inspecting the details of each node.
6.6 Test an Unplanned Failover
To test an unplanned failover of the clustered virtual machine, you can stop the Cluster service on the node that owns the clustered virtual machine.
1. In Failover Cluster Manager, select or specify the cluster that you want. Ensure that the console tree under the cluster is expanded.
2. Expand the console tree under Nodes.
3. Right-click the node that currently owns the virtual machine, point to More Actions, and then click Stop Cluster Service.
4. Verify that the virtual machine fails over and restarts on another node in the cluster.
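Both tests have PowerShell equivalents; the node names here are assumptions. Remember to restart the Cluster service on the stopped node after the unplanned test.

# Planned failover: live migrate the VM to another node.
Move-ClusterVirtualMachineRole -Name "CS-VM1" -Node "Node2" -MigrationType Live
# Unplanned failover: stop the Cluster service on the node that owns the VM,
# verify the VM's new owner, then bring the node back into the cluster.
Stop-ClusterNode -Name "Node2"
Get-ClusterGroup -Name "CS-VM1"
Start-ClusterNode -Name "Node2"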
