Data Center High Availability Clusters
CHAPTER 1

High Availability Clusters Overview

A cluster is a collection of servers that operate as if they were a single machine. The primary purpose of high availability (HA) clusters is to provide uninterrupted access to data, even if a server loses network or storage connectivity, fails completely, or the application running on the server fails. HA clusters are mainly used for e-mail and database servers, and for file sharing.

In their most basic implementation, HA clusters consist of two server machines (referred to as nodes) that share common storage. Data is saved to this storage, and if one node cannot provide access to it, the other node can take client requests. Figure 1-1 shows a typical two-node HA cluster with the servers connected to shared storage (a disk array). During normal operation, only one server is processing client requests and has access to the storage; this may vary with different vendors, depending on the implementation of clustering.

HA clusters can be deployed in a server farm in a single physical facility, or in different facilities at various distances for added resiliency. The latter type of cluster is often referred to as a geocluster.

Figure 1-1 Basic HA Cluster (client, virtual address, public network, and private network carrying heartbeats, status, and control)
Geoclusters are becoming very popular as a tool to implement business continuance. Geoclusters improve the time that it takes to bring an application back online after the servers in the primary site become unavailable. In business continuance terminology, geoclusters combined with disk-based replication offer a better recovery time objective (RTO) than tape restore or manual migration.

HA clusters can be categorized according to various parameters, such as the following:

- How hardware is shared (shared nothing, shared disk, shared everything)
- At which level the system is clustered (OS-level clustering or application-level clustering)
- Applications that can be clustered
- Quorum approach
- Interconnect required

One of the most relevant ways to categorize HA clusters is how hardware is shared, and more specifically, how storage is shared. There are three main cluster categories:

- Clusters using mirrored disks: Volume manager software is used to create mirrored disks across all the machines in the cluster. Each server writes to the disks that it owns and to the disks of the other servers that are part of the same cluster.
- Shared nothing clusters: At any given time, only one node owns a disk. When a node fails, another node in the cluster gains access to the same disk. Typical examples include IBM High Availability Cluster Multiprocessing (HACMP) and Microsoft Cluster Server (MSCS).
- Shared disk clusters: All nodes have access to the same storage. A locking mechanism protects against race conditions and data corruption. Typical examples include IBM mainframe Sysplex technology and Oracle Real Application Clusters.

Technologies that may be required to implement shared disk clusters include a distributed volume manager, which virtualizes the underlying storage so that all servers access the same storage, and a cluster file system, which controls read/write access to a single file system on the shared SAN.

More sophisticated clustering technologies offer shared-everything capabilities, where not only the file system but also memory and processors are shared, offering the user a single system image (SSI). In this model, applications do not need to be cluster-aware. Processes are launched on any of the available processors, and if a server or processor becomes unavailable, the process is restarted on a different processor.

The following is a partial list of clustering software from various vendors, including the architecture to which each belongs, the operating system on which it runs, and the applications it can support:

- HP MC/Serviceguard: Clustering software for HP-UX (the OS running on HP Integrity servers and PA-RISC platforms) and Linux. HP Serviceguard on HP-UX provides clustering for Oracle, Informix, Sybase, DB2, Progress, NFS, Apache, and Tomcat. HP Serviceguard on Linux provides clustering for Apache, NFS, MySQL, Oracle, Samba, PostgreSQL, Tomcat, and SendMail. For more information, see the following URL:
- HP NonStop computing: Provides clusters that run with the HP NonStop OS. NonStop OS runs on the HP Integrity line of servers (which uses Intel Itanium processors) and the NonStop S-series servers (which use MIPS processors). NonStop uses a shared nothing architecture and was developed by Tandem Computers. For more information, see the following URL:
- HP OpenVMS High Availability Cluster Service: This clustering solution was originally developed for VAX systems, and now runs on HP Alpha and HP Integrity servers.
This is OS-level clustering that offers an SSI. For more information, see the following URL:
- HP TruCluster: Clusters for Tru64 UNIX (also known as Digital UNIX). Tru64 UNIX runs on HP Alpha servers. This is OS-level clustering that offers an SSI. For more information, see the following URL:
- IBM HACMP: Clustering software for servers running AIX and Linux. HACMP is based on a shared nothing architecture. For more information, see the following URL:
- MSCS: Belongs to the category of clusters that are referred to as shared nothing. MSCS can provide clustering for applications such as file shares, Microsoft SQL databases, and Exchange servers. For more information, see the following URL:
- Oracle Real Application Clusters (RAC): Provides a shared disk solution that runs on Solaris, HP-UX, Windows, HP Tru64 UNIX, Linux, AIX, and OS/390. For more information about Oracle RAC 10g, see the following URL:
- Sun Cluster: Runs on Solaris and supports many applications, including Oracle, Siebel, SAP, and Sybase. For more information, see the following URL:
- Veritas (now Symantec) Cluster Server: This is a mirrored disk cluster. Veritas supports applications such as Microsoft Exchange, Microsoft SQL databases, SAP, BEA, Siebel, Oracle, DB2, PeopleSoft, and Sybase; in addition to these applications, you can create agents to support custom applications. It runs on HP-UX, Solaris, Windows, AIX, and Linux. For more information, see the following URLs:

Note: A single server can run several server clustering software packages to provide high availability for different server resources.

Note: For more information about the performance of database clusters, see the following URL:

Clusters can be stretched to distances beyond the local data center facility to provide metro or regional clusters. Virtually any cluster software can be configured to run as a stretch cluster, which means a cluster at metro distances. Vendors of cluster software often offer a geocluster version of their software that has been specifically designed to have no intrinsic distance limitations. Examples of geoclustering software include the following:

- EMC Automated Availability Manager Data Source (also called AAM): This HA clustering solution can be used for both local and geographical clusters. It supports Solaris, HP-UX, AIX, Linux, and Windows. AAM supports several applications, including Oracle, Exchange, SQL Server, and Windows services. It supports a wide variety of file systems and volume managers, and it supports the EMC SRDF/S and SRDF/A storage-based replication solutions. For more information, see the following URL:
- Oracle Data Guard: Provides data protection for databases situated at data centers at metro, regional, or even continental distances. It is based on redo log shipping between active and standby databases. For more information, see the following URL:
- Veritas (now Symantec) Global Cluster Manager: Allows failover from a local cluster in one site to a local cluster in a remote site. It runs on Solaris, HP-UX, and Windows. For more information, see the following URL:
- HP Continental Cluster for HP-UX: For more information, see the following URL:
- IBM HACMP/XD (Extended Distance): Available with various data replication technology combinations, such as HACMP/XD Geographic Logical Volume Manager (GLVM) and HACMP/XD HAGEO replication for geographical distances. For more information, see the following URL:

HA Clusters Basics

HA clusters are typically made of two servers, such as the configuration shown in Figure 1-1. One server actively processes client requests, while the other server monitors the main server so that it can take over if the primary one fails. When the cluster consists of two servers, the monitoring can happen on a dedicated cable that interconnects the two machines, or on the network.

From a client point of view, the application is accessible via a name (for example, a DNS name), which in turn maps to a virtual IP address that can float from one machine to another, depending on which machine is active. Figure 1-2 shows a clustered file share.

Figure 1-2 Client Access to a Clustered Application (File Share Example)

In this example, the client sends requests to the machine named sql-duwamish, whose IP address is a virtual address that can be owned by either node1 or node2. The left side of Figure 1-3 shows the configuration of a cluster IP address. From the clustering software point of view, this IP address appears as a monitored resource and is tied to the application, as described in Concept of Group, page 1-7. In this case, the virtual IP address for sql-duwamish is associated with the clustered application, a shared folder called test.
Figure 1-3 Virtual Address Configuration with MSCS

HA Clusters in Server Farms

Figure 1-4 shows where HA clusters are typically deployed in a server farm. Databases are typically clustered to appear as a single machine to the upstream web/application servers. In multi-tier applications, such as J2EE-based applications and Microsoft .NET, this type of cluster is used at the very bottom of the processing tiers to protect application data.
Figure 1-4 HA Cluster Use in Typical Server Farms (default gateway, web servers, application servers, and database servers)

Applications

Installing clustering software on a server does not by itself mean that an application running on that server benefits from the clustering. Unless an application is cluster-aware, a crash of the application process does not necessarily cause a failover to the process running on the redundant machine. Similarly, if the public network interface card (NIC) of the main machine fails, there is no guarantee that the application processing will fail over to the redundant server. For this to happen, you need an application that is cluster-aware.

Each vendor of cluster software provides immediate support for certain applications. For example, Veritas provides enterprise agents for SQL Server and Exchange, among others. You can also develop your own agent for other applications. Similarly, EMC AAM provides application modules for Oracle, Exchange, SQL Server, and so forth.
In the case of MSCS, the cluster service monitors all the resources by means of the Resource Manager, which monitors the state of the application via the Application DLL. By default, MSCS provides support for several application types, as shown in Figure 1-5. For example, MSCS monitors a clustered SQL database by means of the Distributed Transaction Coordinator DLL.

Figure 1-5 Example of Resource DLL from MSCS

It is not uncommon for a server to run several clustering applications. For example, you can run one software program to cluster a particular database, another program to cluster the file system, and still another program to cluster a different application. It is outside the scope of this document to go into the details of this type of deployment, but it is important to realize that assessing the network requirements of a clustered server might require considering not just one but multiple clustering software applications. For example, you can deploy MSCS to provide clustering for a SQL database, and you might also install EMC SRDF Cluster Enabler to fail over the disks. The LAN communication profile of the MSCS software is different from the profile of the EMC SRDF CE software.

Concept of Group

One key concept with clusters is the group. The group is the unit of failover; in other words, it is the bundling of all the resources that constitute an application, including its IP address, its name, its disks, and so on. Figure 1-6 shows an example of the grouping of resources: the shared folder application, its IP address, the disk that this application uses, and the network name. If any one of these resources is not available, for example if the disk is not reachable by this server, the group fails over to the redundant machine.
Figure 1-6 Example of Group

The failover of a group from one machine to another can be automatic or manual. It happens automatically when a key resource in the group fails. Figure 1-7 shows an example: when the NIC on node1 goes down, the application group fails over to node2. This is shown by the fact that, after the failover, node2 owns the disk that stores the application data. When a failover happens, node2 mounts the disk and starts the application by using the API provided by the Application DLL.

Figure 1-7 Failover of Group (quorum and application data disks)
The failover can also be manual, in which case it is called a move. Figure 1-8 shows a group (DiskGroup1) failing over to node2 (called target2 in these examples; see the owner of the group), either as the result of a move or as the result of a failure. After the failover or move, nothing changes from the client perspective. The only difference is that the machine that receives the traffic is node2 (target2) instead of node1 (target1).

Figure 1-8 Move of a Group

LAN Communication

The LAN communication between the nodes of a cluster obviously depends on the software vendor that provides the clustering function. As previously stated, to assess the network requirements, it is important to know all the various software components running on the server that provide clustering functions.

Virtual IP Address

The virtual IP address (VIP) is the floating IP address associated with a given application or group. Figure 1-3 shows the VIP for the clustered shared folder (that is, DiskGroup1 in the group configuration). In this example, the VIP is the virtual address of the group, while node1 (target1) and node2 (target2) each have their own physical addresses. When the VIP and its associated group are active on node1 and traffic comes into the public network VLAN, either router sends an ARP request for the VIP and node1 answers. When the VIP moves or fails over to node2, node2 answers the ARP requests from the routers.

Note: From this description, it appears that the two nodes that form the cluster need to be part of the same subnet, because the VIP address stays the same after a failover. This is true for most clusters, except when they are geographically dispersed, in which case certain vendors allow solutions where the IP address can be different at each location, and the DNS resolution process takes care of mapping incoming requests to the new address.
The following trace helps explain this concept:

ICMP Echo (ping) request
ICMP Echo (ping) reply
Broadcast ARP: Who has <VIP>? Tell <router>
Broadcast ARP: Who has <VIP>? (gratuitous ARP)

When node1 fails, node2 detects this by using the heartbeats and then verifies its connectivity to the default gateway. It then announces its MAC address, sending out a gratuitous ARP that indicates that the VIP has moved to node2.

Public and Private Interface

As previously mentioned, the nodes in a cluster communicate over a public and a private network. The public network is used to receive client requests, while the private network is mainly used for monitoring. Node1 and node2 monitor the health of each other by exchanging heartbeats on the private network. If the private network becomes unavailable, they can use the public network. You can have more than one private network connection for redundancy.

Figure 1-1 shows the public network and a direct connection between the servers for the private network. Most deployments simply use a different VLAN for the private network connection. Alternatively, it is also possible to use a single LAN interface for both public and private connectivity, but this is not recommended for redundancy reasons.

Figure 1-9 shows what happens when node1 (target1) fails. Node2 is monitoring node1 and does not hear any heartbeats, so it declares target1 failed (see the right side of Figure 1-9). At this point, the client traffic goes to node2 (target2).

Figure 1-9 Public and Private Interface and a Failover (public LAN and private LAN carrying the heartbeats)
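The two routers mentioned above are typically an HSRP pair that provides the default gateway on the public VLAN where the VIP resides. The following is a minimal configuration sketch of such a gateway pair; VLAN 10, HSRP group 10, and the 10.10.10.0/24 addressing are illustrative assumptions, not values taken from the figures.

! Aggregation switch 1 (intended HSRP active router for the public VLAN)
interface Vlan10
 description Cluster public segment
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Aggregation switch 2 (HSRP standby router)
interface Vlan10
 description Cluster public segment
 ip address 10.10.10.3 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 100
 standby 10 preempt

The cluster nodes and the VIP share this subnet and use 10.10.10.1 as their default gateway; when the VIP fails over, the gratuitous ARP described above refreshes the ARP entry on whichever HSRP router is forwarding.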
Heartbeats

From a network design point of view, the type of heartbeat used by the clustering software often decides whether the connectivity between the servers can be routed. For local clusters, it is almost always assumed that the two or more servers communicate over a Layer 2 link, which can be either a direct cable or simply a VLAN. The following traffic traces provide a better understanding of the traffic flows between the nodes:

UDP Source port: 3343 Destination port: 3343
UDP Source port: 3343 Destination port: 3343
UDP Source port: 3343 Destination port: 3343
UDP Source port: 3343 Destination port: 3343

The source and destination addresses are the IP addresses of the servers on the private network, and this traffic is unicast. If the number of servers is greater than or equal to three, the heartbeat mechanism typically changes to multicast. The following is an example of how the server-to-server traffic might appear on either the public or the private segment:

UDP Source port: 3343 Destination port: 3343
UDP Source port: 3343 Destination port: 3343
UDP Source port: 3343 Destination port: 3343

In this case the destination is a multicast address in the site-local scope. A closer look at the payload of these UDP frames reveals that the packet has a time-to-live (TTL) of 1:

Internet Protocol, Src Addr: <node private IP>, Dst Addr: <site-local multicast address>
Fragment offset: 0
Time to live: 1
Protocol: UDP (0x11)

The following is another possible heartbeat that you may find:

UDP Source port: 23 Destination port: 23
UDP Source port: 23 Destination port: 23
UDP Source port: 23 Destination port: 23

Here the destination address belongs to the link-local address range, and these packets are also generated with TTL=1.

These traces show that the private network connectivity between nodes in a cluster typically requires Layer 2 adjacency between the nodes; in other words, a non-routed VLAN. The Design chapter outlines options where routing can be introduced between the nodes when certain conditions are met.

Layer 2 or Layer 3 Connectivity

Based on what has been discussed in Virtual IP Address, page 1-9, and Heartbeats, page 1-11, you can see why Layer 2 adjacency is required between the nodes of a local cluster. The documentation from the cluster software vendors reinforces this concept. Quoting from the IBM HACMP documentation:

Between cluster nodes, do not place intelligent switches, routers, or other network equipment that do not transparently pass through UDP broadcasts and other packets to all cluster nodes. This prohibition includes equipment that optimizes protocols such as Proxy ARP and MAC address caching, transforming multicast and broadcast protocol requests into unicast requests, and ICMP optimizations.
Quoting from the MSCS documentation:

The private and public network connections between cluster nodes must appear as a single, non-routed LAN that uses technologies such as virtual LANs (VLANs). In these cases, the network must be able to provide a guaranteed, maximum round-trip latency between nodes of no more than 500 milliseconds. The cluster interconnect must appear as a standard LAN.

For more information, see the following URL:

According to Microsoft, future releases might address this restriction to allow building clusters across multiple Layer 3 hops.

Note: Some Cisco technologies can be used in certain cases to introduce Layer 3 hops between the nodes. An example is a feature called Local Area Mobility (LAM). LAM works for unicast traffic only, and it does not necessarily satisfy the requirements of the software vendor because it relies on Proxy ARP.

As a result of this requirement, most cluster networks are currently similar to the one shown in Figure 1-10; the left side shows the physical topology, and the right side shows the logical topology and VLAN assignment. The continuous line represents the public VLAN, while the dotted line represents the private VLAN segment. This design can be enhanced by using more than one NIC for the private connection. For more details, see Complete Design, page 1-22.

Figure 1-10 Typical LAN Design for HA Clusters (agg1, agg2, HSRP)

Disk Considerations

Figure 1-7 displays a typical failover of a group, where the disk ownership is moved from node1 to node2. This procedure requires that the disk be shared between the two nodes, such that when node2 becomes active, it has access to the same data as node1. Different clusters provide this functionality differently: some clusters follow a shared disk architecture, where every node can write to every disk (and a sophisticated lock mechanism prevents the inconsistencies that could arise from concurrent access to the same data); others follow a shared nothing architecture, where only one node owns a given disk at any given time.
Shared Disk

With either architecture (shared disk or shared nothing), from a storage perspective the disk needs to be connected to the servers in such a way that any server in the cluster can access it by means of a simple software operation. The disks to which the servers connect are typically protected with a redundant array of independent disks (RAID): RAID 1 at a minimum, or RAID 0+1 or RAID 10 for higher levels of I/O. This approach minimizes the chance of losing data when a disk fails, because the disk array itself provides disk redundancy and data mirroring. You can also provide access to shared data with a shared SCSI bus, with network-attached storage (NAS), or even with iSCSI.

Quorum Concept

Figure 1-11 shows what happens if all the communication between the nodes in the cluster is lost. Both nodes bring the same group online, which results in an active-active scenario. Incoming requests go to both nodes, which then try to write to the shared disk, thus causing data corruption. This is commonly referred to as the split-brain problem.

Figure 1-11 Theoretical Split-Brain Scenario

The mechanism that protects against this problem is the quorum. For example, MSCS has a quorum disk that contains the database with the cluster configuration information and information on all the objects managed by the cluster. Only one node in the cluster owns the quorum at any given time. Figure 1-12 shows various failure scenarios where, despite the fact that the nodes in the cluster are completely isolated, there is no data corruption because of the quorum concept.
Figure 1-12 LAN Failures in the Presence of a Quorum Disk: scenarios (a), (b), and (c), each showing the management segments, the quorum disk, and the application disk

In scenario (a), node1 owns the quorum, and that is also where the group for the application is active. When the communication between node1 and node2 is cut, nothing happens; node2 tries to reserve the quorum, but it cannot, because the quorum is already owned by node1.

Scenario (b) shows that when node1 loses communication with the public VLAN, which is used by the application group, it can still communicate with node2 and instruct node2 to take over the disk for the application group. This is because node2 can still talk to the default gateway. For management purposes, if the quorum disk as part of the cluster group is associated with the public interface, the quorum disk can also be transferred to node2, but it is not necessary. At this point, client requests go to node2 and everything works.

Scenario (c) shows what happens when the communication is lost between node1 and node2 while node2 owns the application group. Node1 owns the quorum, so it can bring resources online, and the application group is brought up on node1.

The key concept is that when all communication is lost, the node that owns the quorum is the one that can bring resources online, while if partial communication still exists, the node that owns the quorum is the one that can initiate the move of an application group. When all communication is lost, the node that does not own the quorum (referred to as the challenger) performs a SCSI reset to get ownership of the quorum disk. The owning node (referred to as the defender) performs a SCSI reservation at an interval of 3 seconds, and the challenger retries after 7 seconds. As a result, if a node owns the quorum, it still holds it after the communication failure. Obviously, if the defender loses connectivity to the disk, the challenger can take over the quorum and bring all the resources online. This is shown in Figure 1-13.
Figure 1-13 Node1 Losing All Connectivity on the LAN and SAN

There are several options for how the quorum can be implemented; the quorum disk is just one of them. A different approach is the majority node set, where a copy of the quorum configuration is saved on the local disk of each node instead of on the shared disk. In this case, the arbitration over which node can bring resources online is based on being able to communicate with more than half of the nodes that form the cluster. Figure 1-14 shows how the majority node set quorum works.

Figure 1-14 Majority Node Set Quorum
Each local copy of the quorum disk is accessed via the network by means of Server Message Block (SMB). When nodes are disconnected, as in Figure 1-14, a node needs to have the vote of the majority of the nodes to be the master. This implies that this design requires an odd number of nodes. Also notice that there is no quorum disk configured on the storage array.

Network Design Considerations

Routing and Switching Design

Figure 1-12 through Figure 1-14 show various failure scenarios and how the quorum concept helps prevent data corruption. As the diagrams show, it is very important to consider the implications of the routing configuration, especially when dealing with a geocluster (see the subsequent section in this document). The routing configuration should ensure that traffic enters the network from the router that matches the node that is preferred to own the quorum. By matching the quorum and routing configurations, when there is no LAN connectivity, there is no chance that traffic is routed to the node whose resources are offline.

Figure 1-15 shows how to configure the preferred owner for a given resource; for example, the quorum disk. This configuration needs to match the routing configuration.

Figure 1-15 Configuring the Preferred Owner for a Resource (Quorum Example)

Controlling the inbound traffic from a routing point of view and matching the routing configuration to the quorum requires the following (a configuration sketch follows this list):

- Redistributing the connected subnets
- Filtering out the subnets where there are no clusters configured (this is done with route maps)
- Giving a better (preferred) cost to the subnets advertised by switch1
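The following is a minimal sketch of the first two steps, assuming OSPF as the IGP, process number 1, and an illustrative cluster subnet of 10.10.10.0/24; none of these values come from this design, and the cost preference (the third bullet) is illustrated in the Complete Design section at the end of this chapter.

! Redistribute only the connected cluster subnet into the IGP
ip prefix-list CLUSTER-SUBNET seq 5 permit 10.10.10.0/24
!
route-map CLUSTER-ONLY permit 10
 match ip address prefix-list CLUSTER-SUBNET
!
router ospf 1
 redistribute connected subnets route-map CLUSTER-ONLY
! Connected subnets not matched by the prefix list are dropped by the route map's implicit deny.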
Figure 1-16 (a) shows how the public and private segments map to a typical topology with Layer 3 switches. The public and private VLANs are trunked on an EtherChannel between switch1 and switch2. With this topology, when the connectivity between switch1 and switch2 is lost, the nodes cannot talk with each other on either segment. This is actually preferable to losing only one segment (for example, only the public segment), because by losing both segments at the same time, the topology converges as shown in Figure 1-16 (b) no matter which node owned the disk group for the application.

Figure 1-16 Typical Local Cluster Configuration (switch1 and switch2, public and private segments, quorum and application data disks)

Importance of the Private Link

Figure 1-17 shows the configuration of a cluster where the public interface is used both for client-to-server connectivity and for the heartbeat/interconnect. This configuration does not protect against the failure of a NIC or of the link that connects to the switch, because the node that owns the quorum cannot instruct the other node to take over the application group. The result is that both nodes in the cluster go offline.
Figure 1-17 Cluster Configuration with a Promiscuous Port and No Private Link

For this reason, Cisco highly recommends using at least two NICs, one for the public network and one for the private network, even if they both connect to the same switch. Otherwise, a single NIC failure can make the cluster completely unavailable, which is exactly the opposite of the purpose of the HA cluster design.
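A minimal access-switch sketch of this recommendation follows; VLAN 10 (public), VLAN 20 (private), and the interface numbers are illustrative assumptions. The key point is that the private VLAN is carried at Layer 2 only, with no switched virtual interface, so the heartbeat traffic is never routed.

vlan 10
 name CLUSTER-PUBLIC
vlan 20
 name CLUSTER-PRIVATE
!
interface GigabitEthernet1/0/10
 description node1 public NIC
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
!
interface GigabitEthernet1/0/11
 description node1 private (heartbeat) NIC
 switchport mode access
 switchport access vlan 20
 spanning-tree portfast
!
interface GigabitEthernet1/0/48
 description trunk toward the peer switch, carrying both segments
 switchport mode trunk
 switchport trunk allowed vlan 10,20
! No "interface Vlan20" SVI is defined anywhere, which keeps the private VLAN non-routed.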
NIC Teaming

Servers with a single NIC interface have several single points of failure: the NIC card, the cable, and the switch to which the NIC connects. NIC teaming is a solution developed by NIC card vendors to eliminate these single points of failure by providing special drivers that allow two NIC cards to be connected to two different access switches or to different line cards on the same access switch. If one NIC card fails, the secondary NIC card assumes the IP address of the server and takes over operation without disruption. The various types of NIC teaming solutions include active/standby and active/active. All solutions require the NIC cards to have Layer 2 adjacency with each other. Figure 1-18 shows examples of NIC teaming configurations.

Figure 1-18 NIC Teaming Configurations (default gateway with HSRP; active/standby and active/active NIC pairs sharing one IP address, with the standby NIC assuming the active NIC's MAC and IP address on failover)

With Switch Fault Tolerance (SFT) designs, one port is active and the other is standby, using one common IP address and MAC address. With Adaptive Load Balancing (ALB) designs, one port receives and all ports transmit, using one IP address and multiple MAC addresses. Figure 1-19 shows an Intel NIC teaming software configuration where the user has grouped two interfaces (in this case from the same NIC) and has selected the ALB mode.

Figure 1-19 Typical NIC Teaming Software Configuration
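From the switch side, an SFT or ALB/TLB team typically looks like two independent access ports in the same VLAN; no EtherChannel is configured, because the teaming driver handles failover and, in ALB mode, balances transmit traffic across multiple MAC addresses. The following sketch assumes VLAN 10 for the public segment and illustrative interface numbers; contrast it with the static EtherChannel configuration shown later for FEC/GEC teaming.

! Two member ports of the same NIC team, possibly on different access switches
interface GigabitEthernet1/0/12
 description server public NIC 1 (team member)
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
!
interface GigabitEthernet1/0/13
 description server public NIC 2 (team member)
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast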
Depending on the cluster server vendor, NIC teaming may or may not be supported. For example, in the case of MSCS, teaming is supported for the public-facing interface but not for the private interconnects. For this reason, it is advisable to use multiple links for the private interconnect, as described at the following URL:

Quoting from Microsoft:

Microsoft does not recommend that you use any type of fault-tolerant adapter or teaming for the heartbeat. If you require redundancy for your heartbeat connection, use multiple network adapters set to Internal Communication Only and define their network priority in the cluster configuration. Issues have been seen with early multi-ported network adapters, so verify that your firmware and driver are at the most current revision if you use this technology. Contact your network adapter manufacturer for information about compatibility on a server cluster. For more information, see the following article in the Microsoft Knowledge Base: Network Adapter Teaming and Server Clustering.

Another variation of the NIC teaming configuration consists of using cross-stack EtherChannels. For more information, see the following URL:

Figure 1-20 (a) shows the network design with cross-stack EtherChannels. You need two or more Cisco Catalyst 3750 switches interconnected with the appropriate stack interconnect cable, as described at the following URL:

The aggregation switches are dual-connected to each stack member (access1 and access2); the servers are similarly dual-connected to each stack member. EtherChanneling is configured on the aggregation switches as well as on the switch stack. Link Aggregation Control Protocol (LACP) is not supported across switches in the stack, so the channel group must be configured in mode on; this means that the aggregation switches also need to be configured with the channel group in mode on. Figure 1-20 (b) shows the equivalent topology of Figure 1-20 (a), where the stack of access switches appears as a single device to the aggregation switches and the servers.

Figure 1-20 Configuration with Cross-Stack EtherChannels: (a) physical topology with agg1, agg2, and the StackWise stack (acc1 and acc2 joined by the stack interconnect); (b) equivalent logical topology
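On the switch stack, a static cross-stack EtherChannel toward a server configured for FEC/GEC teaming might look like the following sketch; the member interfaces (one per stack member), the channel-group number, and VLAN 10 are illustrative assumptions.

! One member port on each switch in the stack, forced on (no PAgP/LACP negotiation)
interface GigabitEthernet1/0/21
 description server NIC 1 (static FEC/GEC team)
 switchport mode access
 switchport access vlan 10
 channel-group 10 mode on
!
interface GigabitEthernet2/0/21
 description server NIC 2 (static FEC/GEC team)
 switchport mode access
 switchport access vlan 10
 channel-group 10 mode on
!
! The logical port channel carries the same access configuration
interface Port-channel10
 switchport mode access
 switchport access vlan 10

The uplink EtherChannels toward the aggregation switches are built the same way, with the channel group in mode on at both ends.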
Configuration of the channeling on the server requires the selection of static link aggregation, either Fast EtherChannel (FEC) or Gigabit EtherChannel (GEC), depending on the type of NIC card installed, as shown in Figure 1-21.

Figure 1-21 Configuration of EtherChanneling on the Server Side

Compared with the ALB mode (or TLB, whichever name the vendor uses for this mechanism), this deployment has the advantage that all the server links are used in both the outbound and inbound directions, thus providing more effective load balancing of the traffic. In terms of high availability, there is little difference from the ALB mode:

- With StackWise technology, if one of the switches in the stack fails (for example, the master), the remaining one takes over Layer 2 forwarding in about one second (see the following URL:). The FEC or GEC configuration of the NIC teaming driver stops using the link connecting to the failed switch and continues on the remaining link.
- With an ALB configuration, when access1 fails, the teaming software simply forwards the traffic on the remaining link.

In both cases, the traffic drop amounts to a few seconds.

Storage Area Network Design

From a SAN point of view, the key requirement for HA clusters is that both nodes need to be able to see the same storage. Arbitration of which node is allowed to write to the disk happens at the cluster software level, as previously described in Quorum Concept, page 1-13. HA clusters are often configured for multipath I/O (MPIO) for additional redundancy. This means that each server is configured with two host bus adapters (HBAs) and connects to two fabrics. The disk array is in turn connected to each fabric, so each server has two paths to the same LUN. Unless special MPIO software is installed on the server, the server thinks that each HBA gives access to a different disk. The MPIO software provides a single view of the disk across these two paths and load balancing between them. Two examples of this type of software are EMC PowerPath and HP AutoPath. MPIO software can be provided by HBA vendors, storage vendors, or volume manager vendors. Each product operates in a different layer of the stack, as shown in Figure 1-22. Several mechanisms can be used by this software to identify the same disk when it appears on two different HBAs.
Figure 1-22 SAN Configuration: three host software stacks (user applications, file system/database, volume manager, SCSI and HBA drivers), showing the layers where multipath I/O can be implemented, with dual HBAs and redundant Fibre Channel paths to the disk

MPIO can use several load distribution/HA algorithms: active/standby, round robin, least I/O (the path with fewer I/O requests), or least blocks (the path with fewer blocks). Not all MPIO software is compatible with clusters, because sometimes the locking mechanisms required by the cluster software cannot be supported with MPIO. To discover whether a certain cluster software package is compatible with a specific MPIO solution, see the hardware and software compatibility matrix provided by the cluster vendor. As an example, in the case of Microsoft, see the following URLs:

Besides verifying the MPIO compatibility with the cluster software, it is also important to verify which mode of operation is compatible with the cluster. For example, it may be more likely that the active/standby configuration is compatible than the load balancing configurations.

Besides MPIO, the SAN configuration for cluster operations is fairly simple: you just need to configure zoning correctly so that all nodes in the cluster can see the same LUNs, and similarly, on the storage array, LUN masking needs to present the LUNs to all the nodes in the cluster (if MPIO is present, the LUN needs to be mapped to each port connecting to the SAN).
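On a Cisco MDS fabric, such a zoning configuration might look like the following sketch; the VSAN number and the port world wide names (pWWNs) are placeholders, and each of the two redundant fabrics would carry an equivalent configuration for its own set of HBA ports.

! node1 HBA, node2 HBA, and the array port in the same zone so both nodes see the same LUNs
zone name HA-CLUSTER vsan 10
  member pwwn 21:00:00:00:00:00:00:01
  member pwwn 21:00:00:00:00:00:00:02
  member pwwn 50:06:00:00:00:00:00:01
!
zoneset name HA-CLUSTER-ZS vsan 10
  member HA-CLUSTER
!
zoneset activate name HA-CLUSTER-ZS vsan 10

LUN masking is configured separately on the array so that the same LUNs are presented to both nodes (and to each array port when MPIO is used).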
Complete Design

Figure 1-23 shows the end-to-end design with a typical data center network. Each clustered server is dual-homed to the LAN and to the SAN. NIC teaming is configured for the public interface; with this design, it might use the ALB mode (also called TLB, depending on the NIC vendor) to take advantage of the forwarding uplinks of each access switch. MPIO is configured for storage access. The private connection is carried on a different port. If you require redundancy for the private connection, you would configure an additional one, without the two being teamed together.

Figure 1-23 Design Options with Looped Access, with (b) Being the Preferred Design: (a) public VLAN 10 teamed across acc1 and acc2 with a single private interface on VLAN 20; (b) public VLAN 10 plus redundant private interfaces on VLANs 20 and 30

Figure 1-23 (a) shows a possible design where each server has a private connection to a single switch. This design works fine except when one of the two switches fails, as shown. In this case, the heartbeat (represented as the dashed line in the figure) needs to traverse the remaining link in the teamed public interface. Depending on the clustering software vendor, this configuration might or might not work. As previously stated, Microsoft, for example, does not recommend carrying the heartbeat on a teamed interface.

Figure 1-23 (b) shows a possible alternative design with redundancy on the private links. In this case, there are three VLANs: VLAN 10 for the public interface, and VLANs 20 and 30 for the private links. VLAN 20 is local to the access switch on the left, and VLAN 30 is local to the access switch on the right. Each node has a private link to each access switch. If one access switch fails, the heartbeat communication (represented as the dashed line in the figure) continues on the private links connected to the remaining access switch.
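A sketch of the VLAN layout for the Figure 1-23 (b) option follows. The key point is that each private VLAN exists only on its own access switch and is never trunked, while the public VLAN is carried toward the aggregation layer; the VLAN numbers match the text, and the interface numbers are illustrative assumptions.

! acc1: public VLAN 10 plus its own local private VLAN 20
vlan 10
 name CLUSTER-PUBLIC
vlan 20
 name CLUSTER-PRIVATE-ACC1
!
interface GigabitEthernet1/0/48
 description uplink toward the aggregation layer
 switchport mode trunk
 switchport trunk allowed vlan 10
!
! acc2: public VLAN 10 plus its own local private VLAN 30
vlan 10
 name CLUSTER-PUBLIC
vlan 30
 name CLUSTER-PRIVATE-ACC2
!
interface GigabitEthernet1/0/48
 description uplink toward the aggregation layer
 switchport mode trunk
 switchport trunk allowed vlan 10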
Figure 1-24 (a) shows the design with a loop-free access.

Figure 1-24 Design with a Loop-Free Access (a) and an Important Failure Scenario (b)

This design follows the same strategy as Figure 1-23 (b) for the private links. The teaming configuration most likely leverages Switch Fault Tolerance, because there is no direct link from the access switch on the right toward the left aggregation switch, where HSRP is likely to be primary.

One important failure scenario is shown in Figure 1-24 (b), where the two access switches are disconnected, thus creating a split subnet. To address this problem and make sure that the cluster can continue to work, it is a good design practice to match the preferred owner for the quorum disk to the aggregation switch that advertises the path with the best metric. This is not the normal default configuration for the aggregation switches/routers; you have to explicitly configure the routing so that aggregation1 is the preferred path to the cluster. This is achieved, for example, by using the redistribute connected command with a route map to filter out all the subnets except the cluster subnet, and by using route maps to assign a better cost to the route advertised by agg1 compared to the one advertised by agg2.
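Building on the redistribution sketch shown in Routing and Switching Design, the preference for agg1 can be expressed by setting a different metric in the route map on each aggregation switch; the metric values below are illustrative assumptions, and the prefix list and OSPF process are the same as in that earlier sketch.

! agg1: lower (better) metric, so inbound traffic prefers this entry point
route-map CLUSTER-ONLY permit 10
 match ip address prefix-list CLUSTER-SUBNET
 set metric 10
!
! agg2: same filter, higher metric, used only if agg1 stops advertising the cluster subnet
route-map CLUSTER-ONLY permit 10
 match ip address prefix-list CLUSTER-SUBNET
 set metric 100
!
router ospf 1
 redistribute connected subnets route-map CLUSTER-ONLY

Matching the preferred owner of the quorum group to the node reachable through agg1 then keeps inbound routing and quorum ownership on the same side of a split subnet.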
Windows Geo-Clustering: SQL Server
Windows Geo-Clustering: SQL Server Edwin Sarmiento, Microsoft SQL Server MVP, Microsoft Certified Master Contents Introduction... 3 The Business Need for Geo-Clustering... 3 Single-location Clustering
Astaro Deployment Guide High Availability Options Clustering and Hot Standby
Connect With Confidence Astaro Deployment Guide Clustering and Hot Standby Table of Contents Introduction... 2 Active/Passive HA (Hot Standby)... 2 Active/Active HA (Cluster)... 2 Astaro s HA Act as One...
EMC Invista: The Easy to Use Storage Manager
EMC s Invista SAN Virtualization System Tested Feb. 2006 Page 1 of 13 EMC Invista: The Easy to Use Storage Manager Invista delivers centrally managed LUN Virtualization, Data Mobility, and Copy Services
Violin Memory Arrays With IBM System Storage SAN Volume Control
Technical White Paper Report Best Practices Guide: Violin Memory Arrays With IBM System Storage SAN Volume Control Implementation Best Practices and Performance Considerations Version 1.0 Abstract This
HiCommand Dynamic Link Manager (HDLM) for Windows Systems User s Guide
HiCommand Dynamic Link Manager (HDLM) for Windows Systems User s Guide MK-92DLM129-13 2007 Hitachi Data Systems Corporation, ALL RIGHTS RESERVED Notice: No part of this publication may be reproduced or
Quantum StorNext. Product Brief: Distributed LAN Client
Quantum StorNext Product Brief: Distributed LAN Client NOTICE This product brief may contain proprietary information protected by copyright. Information in this product brief is subject to change without
Implementing Storage Concentrator FailOver Clusters
Implementing Concentrator FailOver Clusters Technical Brief All trademark names are the property of their respective companies. This publication contains opinions of StoneFly, Inc. which are subject to
Building a Highly Available and Scalable Web Farm
Page 1 of 10 MSDN Home > MSDN Library > Deployment Rate this page: 10 users 4.9 out of 5 Building a Highly Available and Scalable Web Farm Duwamish Online Paul Johns and Aaron Ching Microsoft Developer
Overview of I/O Performance and RAID in an RDBMS Environment. By: Edward Whalen Performance Tuning Corporation
Overview of I/O Performance and RAID in an RDBMS Environment By: Edward Whalen Performance Tuning Corporation Abstract This paper covers the fundamentals of I/O topics and an overview of RAID levels commonly
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center
Delivering High Availability Solutions with Red Hat Cluster Suite
Delivering High Availability Solutions with Red Hat Cluster Suite Abstract This white paper provides a technical overview of the Red Hat Cluster Suite layered product. The paper describes several of the
Backup Exec 9.1 for Windows Servers. SAN Shared Storage Option
WHITE PAPER Optimized Performance for SAN Environments Backup Exec 9.1 for Windows Servers SAN Shared Storage Option 11/20/2003 1 TABLE OF CONTENTS Executive Summary...3 Product Highlights...3 Approaches
TechBrief Introduction
TechBrief Introduction Leveraging Redundancy to Build Fault-Tolerant Networks The high demands of e-commerce and Internet applications have required networks to exhibit the same reliability as the public
Deploying Windows Streaming Media Servers NLB Cluster and metasan
Deploying Windows Streaming Media Servers NLB Cluster and metasan Introduction...................................................... 2 Objectives.......................................................
How To Run A Web Farm On Linux (Ahem) On A Single Computer (For Free) On Your Computer (With A Freebie) On An Ipv4 (For Cheap) Or Ipv2 (For A Free) (For
Creating Web Farms with Linux (Linux High Availability and Scalability) Horms (Simon Horman) [email protected] October 2000 http://verge.net.au/linux/has/ http://ultramonkey.sourceforge.net/ Introduction:
Feature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V
Comparison and Contents Introduction... 4 More Secure Multitenancy... 5 Flexible Infrastructure... 9 Scale, Performance, and Density... 13 High Availability... 18 Processor and Memory Support... 24 Network...
VXLAN: Scaling Data Center Capacity. White Paper
VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where
Networking Topology For Your System
This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.
M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2.
M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. What are the different types of virtualization? Explain
Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs
Disaster Recovery Design Ehab Ashary University of Colorado at Colorado Springs As a head of the campus network department in the Deanship of Information Technology at King Abdulaziz University for more
Redundancy and load balancing at L3 in Local Area Networks. Fulvio Risso Politecnico di Torino
Redundancy and load balancing at L3 in Local Area Networks Fulvio Risso Politecnico di Torino 1 Problem: the router is a single point of failure H1 H2 H3 VLAN4 H4 VLAN4 Corporate LAN Corporate LAN R1 R2
High Availability and MetroCluster Configuration Guide For 7-Mode
Data ONTAP 8.2 High Availability and MetroCluster Configuration Guide For 7-Mode NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1(408) 822-6000 Fax: +1(408) 822-4501 Support telephone:
Symantec NetBackup Clustered Master Server Administrator's Guide
Symantec NetBackup Clustered Master Server Administrator's Guide for Windows, UNIX, and Linux Release 7.5 Symantec NetBackup Clustered Master Server Administrator's Guide The software described in this
RESILIENT NETWORK DESIGN
Matěj Grégr RESILIENT NETWORK DESIGN 1/36 2011 Brno University of Technology, Faculty of Information Technology, Matěj Grégr, [email protected] Campus Best Practices - Resilient network design Campus
Multiple Public IPs (virtual service IPs) are supported either to cover multiple network segments or to increase network performance.
EliteNAS Cluster Mirroring Option - Introduction Real Time NAS-to-NAS Mirroring & Auto-Failover Cluster Mirroring High-Availability & Data Redundancy Option for Business Continueity Typical Cluster Mirroring
