Citrix TriScale clustering tech note




Expanding capacity with TriScale clustering

Citrix TriScale clustering provides the ability to scale out Citrix NetScaler capacity to massive levels by enabling up to 32 NetScaler appliances, physical or virtual, to work in unison to deliver one or more applications. It also provides an alternative to high availability (HA) application delivery controller (ADC) appliance deployments, so that 100 percent of network resources are put to productive use. Here are some key value propositions of TriScale clustering:

• 3 Tbps+ scale: Two to 32 NetScaler appliances can be clustered together to scale ADC capacity beyond 3 Tbps. Because capacity can be added incrementally, and only when needed, datacenter managers can scale without overprovisioning the network.
• Unified cluster management: NetScaler clusters provide a single policy management view for the entire cluster. All aspects of management remain constant, even as the cluster expands to meet future growth. Policy changes are made just once and automatically propagate across all nodes.
• Ensured transparency: TriScale enables the NetScaler appliance cluster to appear as a single resource (i.e., a single virtual IP address) to all users, preserving ADC transparency.
• Seamless, linear scalability: Adding or removing nodes from a NetScaler cluster incurs zero downtime and results in a predictable, linear change to aggregate performance and capacity. New nodes added to a cluster immediately advertise themselves and automatically begin to absorb their share of the load.
• Built-in fault tolerance: TriScale clustering provides inherent reliability. If a node fails or becomes unreachable, the traffic handled by that node is automatically redistributed to the remaining active nodes through the normal traffic distribution mechanism of TriScale. In addition, the cluster can be configured with a spare node, which remains in passive, standby mode but automatically replaces a node that fails.

How clustering works

By grouping multiple NetScaler appliances together and configuring them to work in unison, TriScale technology enables IT to efficiently scale the ADC infrastructure. One or more applications can be supported by multiple NetScaler appliances simultaneously, so that they benefit from the aggregate capacity of all cluster nodes.

A cluster can be treated exactly like a single node. In other words, a NetScaler cluster behaves logically like one NetScaler device. This holds true even from a networking perspective: for example, all ports across multiple NetScaler appliances in a cluster can be treated as if they belonged to one device.

A NetScaler cluster is configured and managed as a single system. The virtual IP (VIP) address, representing the termination point for an application, is striped across all nodes in the cluster. Application requests and responses are distributed fairly among all cluster nodes.
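As an illustrative sketch of how a cluster is brought up and then treated as a single system, the command sequence below creates a cluster instance, adds the first appliance as node 0, assigns a cluster IP (CLIP) address and enables the cluster. The node ID and IP addresses are hypothetical, and exact command syntax can differ between NetScaler releases, so verify each step against the documentation for your version.

On the NSIP of the first appliance (10.102.29.60 in this sketch), create the cluster instance and add the appliance as node 0:

    > add cluster instance 1
    > add cluster node 0 10.102.29.60 -state ACTIVE

Assign the cluster IP (CLIP) address through which all further management will take place, then enable the cluster and save the configuration:

    > add ns ip 10.102.29.61 255.255.255.255 -type clip
    > enable cluster instance 1
    > save ns config

From this point on, configuration is performed against the cluster IP address (10.102.29.61 here) rather than against individual appliances; a warm reboot of the node is typically required before the cluster becomes operational.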

Cluster communication

The NetScaler cluster communicates with application clients through the physical connections between the cluster nodes and the client-side connecting device; these connections are typically aggregated via a Layer 2 switch. The logical grouping of these physical connections is called the client data plane. Similarly, the cluster communicates with backend servers through the physical connections between the cluster nodes and the server-side connecting device. The logical grouping of these physical connections is called the server data plane.

In addition, cluster nodes communicate with each other using a cluster backplane. All cluster nodes reside on the same subnet and are connected via an Ethernet backplane. The backplane, which includes the physical connections between each cluster node and a backplane switch, is the backbone of the cluster system and serves three purposes:

1. Management synchronization is accomplished via the backplane to support a single configuration across all nodes. Individual NetScaler nodes are not managed separately.
2. Synchronization of persistence and health check information ensures that individual nodes can be inserted into and removed from the cluster pool.
3. Packets can be forwarded to other nodes in the cluster in the event a connection is being tracked by one node but packets for that connection happen to arrive on a different node.

The backplane can also be wired for redundancy using typical switch redundancy topologies. Figure 1 shows the logical grouping of the physical connections that form the client data plane, server data plane and cluster backplane.

Figure 1: Cluster communication interfaces
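Building on the sketch in the previous section, the fragment below illustrates how a node's backplane interface might be designated when an additional appliance is announced to the cluster. The node ID, IP address and interface number are placeholders; confirm the option syntax for your NetScaler release.

On the cluster IP address, add the second appliance as node 1 and name its backplane interface:

    > add cluster node 1 10.102.29.62 -state ACTIVE -backplane 1/1/1

Here 1/1/1 denotes interface 1/1 on node 1. That interface should be cabled to the dedicated backplane switch rather than to the client or server data plane, because it carries the management synchronization, state replication and steered packets described above.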

Some of the benefits of clustering include:

• Layer 2 scalability: Multiple ports (across multiple NetScaler boxes in a cluster) can be selected to form a Link Aggregation Group (LAG). This group behaves in full compliance with 802.1AX/LACP. Ports on the other side of the cluster can similarly be terminated on an Ethernet switch with the corresponding LAG defined.
• Layer 3 routing advantages: Ports on the NetScaler nodes can be used in the same way as in a single-device configuration. They use Layer 3 techniques such as equal-cost multi-path (ECMP) routing to ensure incoming packet flows take advantage of all connected wires.
• Linksets: Linksets allow multiple cluster ports to be connected on the same broadcast domain (Ethernet segment) without the need to define a LAG on the switch side. The NetScaler devices coordinate management of broadcast traffic (ARP requests and ARP responses) and suppress spanning tree protocol loops that may otherwise result. This eliminates any Layer 2 switch limitation on the number of ports that can be put in a LAG. Fine-grained, failure-aware flow distribution code running on the NetScaler nodes determines how traffic is distributed across the links, and linksets enable even utilization of all links of a LAG.

Most importantly, the cluster does not rely on the external environment to evenly distribute traffic among cluster nodes. Ports can be selected from one node, or multiple nodes, when providing connectivity to incoming traffic. Furthermore, regardless of the way traffic arrives at the NetScaler appliance, the cluster guarantees even workload distribution on each device or node.

Cluster configuration and synchronization

One of the cluster nodes is designated as the configuration coordinator (CCO). The CCO coordinates configuration synchronization across the cluster and owns the management address of the cluster, which is called the cluster IP address. Administrative access to the CCO is achieved through the cluster IP address, as shown in Figure 2.

Figure 2: Configuring the NetScaler cluster through the cluster IP address
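As a brief, hypothetical example of this single point of administration, the command below defines a load balancing virtual server once, through the cluster IP address; the change propagates to every node and the VIP is striped across the cluster. The virtual server name is an assumption used only for illustration, and syntax may vary by release.

On the cluster IP address:

    > add lb vserver vs_app HTTP 10.102.29.66 80

No per-node configuration is required; clients see the single VIP 10.102.29.66 regardless of which node receives or processes a given flow.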

All configuration changes performed through the cluster IP address are propagated to the cluster nodes through the cluster backplane. When a new appliance is added to the cluster, its configuration settings and files (SSL certificates, licenses, DNS parameters, etc.) are automatically cleared and the appliance is synchronized with the cluster configuration. (Note: Policy configurations should be saved before re-provisioning a NetScaler appliance into a cluster so that information is not lost.)

When an existing cluster node rejoins the cluster (e.g., after a failure or intentional disabling of a node), the cluster checks the node's configuration. If there is a mismatch between the current configuration of the node and that being managed by the CCO, the node is automatically synchronized using one of the following techniques:

• Full synchronization: If the difference between configurations exceeds 255 commands, all of the configurations implemented on the CCO are applied to the node that is rejoining the cluster.
• Incremental synchronization: If the difference between configurations is less than 256 commands, only the configurations that are not the same are applied to the node that is rejoining the cluster. This results in faster synchronization when configuration deltas are small.

Only configurations performed on the CCO by accessing it through the cluster IP address are synchronized to the other cluster nodes. Configurations performed by accessing cluster nodes through their NetScaler IP (NSIP) addresses are not synchronized to the other cluster nodes.

Working with striped and spotted IP management addresses

In a clustered deployment, the NetScaler management IP (MIP) address and subnet IP (SNIP) address can be striped or spotted. A striped IP address is active on all the nodes of the cluster; IP addresses configured on the cluster without specifying an owner node are active on all the cluster nodes. A spotted IP address is active on, and owned exclusively by, a single node; IP addresses configured on the cluster by specifying an owner node are active only on the node that is specified as the owner.

Figure 3 shows striped and spotted IP addresses in a three-node cluster. The virtual IP (VIP) address 10.102.29.66 is striped on all the cluster nodes, and the SNIP address 10.102.29.99 is active on NS0, NS1 and NS2.

Figure 3: Example of a three-node cluster with striped and spotted IP addresses
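In configuration terms, the difference between striped and spotted addresses comes down to whether an owner node is specified. The sketch below reuses the SNIP address from Figure 3 and adds one hypothetical spotted address; option names may vary by release.

On the cluster IP address, a SNIP added without an owner node is striped and active on every node:

    > add ns ip 10.102.29.99 255.255.255.0 -type SNIP

A SNIP added with an owner node is spotted and active only on that node (NS2 in this example):

    > add ns ip 10.102.29.111 255.255.255.0 -type SNIP -ownerNode 2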

Traffic distribution within the cluster

The specific NetScaler node that receives application requests or responses from an external connecting device (e.g., a Layer 2 switch) is referred to as the Flow Receiver. The node that actually processes the traffic flow is called the Flow Processor. Both the Flow Receiver and the Flow Processor are active nodes in the cluster and are fully capable of handling traffic.

One of three traffic distribution mechanisms is employed to determine which node will serve as the Flow Receiver and receive traffic: ECMP routing, cluster aggregation (CLAG) or linksets. Each of these mechanisms uses a different algorithm to determine the Flow Receiver. The Flow Receiver then uses internal cluster logic to select the appropriate Flow Processor, that is, the node that will handle the traffic. The Flow Processor is determined based on the set of active nodes and the traffic flow.
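As one example of these mechanisms, a linkset can be defined and bound to member interfaces from several nodes so that flow distribution is handled by the cluster itself rather than by a switch-side LAG. The linkset name and interface numbers below are illustrative; check the exact syntax for your release.

On the cluster IP address:

    > add linkset LS/1
    > bind linkset LS/1 -ifnum 0/1/2 1/1/2 2/1/2

This groups interface 1/2 on nodes 0, 1 and 2 into a single linkset on the same broadcast domain, without requiring a corresponding LAG to be configured on the upstream switch.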

Figure 4: Traffic distribution in a cluster

Figure 4 shows a client request flowing through the NetScaler cluster. The client sends a request to a striped VIP address. A traffic distribution mechanism configured on the client data plane selects one of the cluster nodes as the Flow Receiver. The Flow Receiver receives the traffic, determines the proper Flow Processor that must process the traffic and then steers the request to that node through the cluster backplane. If the Flow Receiver and Flow Processor reside on the same physical node, the request does not traverse the cluster backplane.

Cluster and node states

Cluster node classification includes three types of states: admin states, operational states and health states.

Admin state: An admin state is configured when you add the node to the cluster. It indicates the purpose of the node:

• Active: Nodes in this state serve traffic if they are operational and healthy.
• Passive: Nodes in this state do not serve traffic, but remain fully synchronized with the cluster. These nodes are useful during maintenance activity because they can be upgraded without removing the node from the cluster.
• Spare: Nodes in this state do not serve traffic, but remain fully synchronized with the cluster. Spare nodes act as backup nodes for the cluster. If one of the Active nodes becomes unavailable, the operational state of one of the Spare nodes automatically becomes Active and that node starts serving traffic.

Only nodes whose admin state is Active, operational state is Active and health status is Up can serve traffic. If one node in a two-node cluster had a health status of Down, the remaining node would remain functional and continue to support traffic.
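The admin states described above are set with ordinary configuration commands, as in the hedged sketch below (node IDs are placeholders; confirm the syntax for your release).

On the cluster IP address, keep node 3 fully synchronized but out of the traffic path as a backup:

    > set cluster node 3 -state SPARE

Drain node 1 for maintenance without removing it from the cluster:

    > set cluster node 1 -state PASSIVE

Display the admin, operational and health states of every node:

    > show cluster node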

Cluster coordination

The following techniques are employed to elect a cluster node for coordination and to ensure full reliability in the event of a failure:

• An operating cluster has an elected cluster coordinator. This election process is automatic. Once elected, the coordinator is responsible for handling any action that requires the entire cluster to operate as one, including adding or removing cluster nodes and rebalancing traffic as required.
• Every node in the cluster has the most up-to-date information about the cluster. Failover is built into the cluster such that any node is capable of taking over the responsibility of another node in the event of failure.
• The cluster continuously monitors the topology, connectivity and reachability status of all its links and services. When a change is detected (e.g., a node joining or leaving, or a downstream router failure), a distributed algorithm runs on all nodes to find the maximally connected set of nodes. This ensures that the most well-connected subset of nodes is selected for processing traffic through the cluster. Nodes that suffer connectivity loss are transparently disallowed from serving traffic.

Cluster management

NetScaler cluster deployments are managed via a centralized dashboard, accessible through the cluster management IP (CLIP) address. The dashboard provides a complete view of the cluster deployment across nodes, as well as per-node details for detailed monitoring and troubleshooting. The NetScaler Command Center central monitoring and management solution also supports cluster monitoring and management by enabling the following operational tasks:

• Setting up a cluster: Existing non-clustered NetScaler appliances can be reconfigured as nodes within a single cluster.
• Adding/removing a node: A node can be added incrementally to an existing cluster or removed from an existing cluster.
• Centralized monitoring: The performance of cluster nodes can be monitored on a real-time basis.
• Gathering report and log information: Aggregated statistics and logs from all cluster nodes can be obtained.

Cluster support for NetScaler functionality

NetScaler clustering supports the most popular and commonly deployed NetScaler features, including:

• Load balancing
• Content switching
• SSL offload

• Compression
• Routing
• Application Firewall
• ActionAnalytics
• Bandwidth-based spillover
• Content rewrite
• L4 denial of service (DoS) protections
• Nitro RESTful API
• HTTP DoS protections*
• Static and dynamic caching*
• DNS caching*

* Single-node support only

In addition, each node must be of the same form factor (e.g., NetScaler MPX physical appliance or NetScaler VPX virtual appliance), with a matching maximum capacity rating, the same NetScaler software edition and an identical set of licensed NetScaler features.

Failure management and fault tolerance

NetScaler clustering replicates session-based information such as persistence information and SSL handshakes. A consistent view across the cluster is also maintained, including health check responses and server response times. For example, a retry of an HTTP transaction that lands on another node after a failure will behave correctly and load balance to the correct server.

Unlike many chassis-based clustering solutions, where a single node failure may cause all connections across all nodes to drop, NetScaler clustering implements a more effective loss model. In every scenario, the loss of information in the cluster is proportional to the amount of loss suffered. For example, if one node in a five-node cluster were to fail, the loss would be no more than 20 percent. During all failure scenarios, the cluster continues to guarantee that work is evenly divided among the remaining nodes.

When a cluster is absorbing the loss of a node, or rebalancing as a node joins or leaves, the clustering algorithms do not impose additional processing cost during recovery. This is critical, because otherwise the loss of a single node could lead to a cascading failure, where the resultant recovery phase causes more nodes to fail. For example, when a node fails, the work corresponding to that node is equally divided among the remaining nodes. If instead, as is common in other clusters, the entire burden of the failed node were placed on just one other node, effectively doubling its load, that second node could fail as well, leading to a cascading failure.

TriScale clustering in operation

Clustering provides the ability to dynamically increase or decrease the number of NetScaler nodes serving one or more applications based on required capacity. Adding a new node is a simple operation that can be accomplished while live traffic is flowing through the deployment. The node is added to the cluster and its configuration is automatically synchronized with the CCO. Once the node is active, traffic is gracefully distributed to the new node. Existing connections are not shifted to it; instead, the node participates in scaling up by processing its share of new traffic directed to the cluster.

NetScaler clusters also support the notion of a Spare node, which can be part of an active cluster without participating in traffic processing. This node's configuration is synchronized with the CCO, and it is ready to replace a failed or unreachable node at any time. If an active node goes down for any reason, the Spare node assumes an Active state and replaces the faulty node in real time. This replacement mechanism is fully automated and does not require administrator intervention. Support for spare capacity provides redundancy in the deployment and a viable alternative to traditional HA pair deployments. As a best practice, it is recommended to have one Spare node for every five to 10 active nodes in a cluster to ensure a high degree of cluster reliability and sustained capacity.
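To make the scale-out and Spare-node workflow concrete, the sketch below shows how a new appliance might be introduced into a running cluster: it is first announced through the cluster IP address and then joined from the new node itself. The addresses, node ID and password placeholder are hypothetical, and command syntax may vary by NetScaler release.

On the cluster IP address, announce the new appliance, in this case as a Spare node:

    > add cluster node 4 10.102.29.64 -state SPARE -backplane 4/1/1

On the NSIP of the new appliance (10.102.29.64), join the cluster and save the configuration:

    > join cluster -clip 10.102.29.61 -password <nsroot_password>
    > save ns config

After the customary warm reboot, the node synchronizes its configuration from the CCO and, as a Spare, stands by to take over automatically if an active node fails. Had it been added with -state ACTIVE, it would instead begin absorbing its share of new traffic immediately.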