
EliteNAS Cluster Mirroring Option - Introduction

Real Time NAS-to-NAS Mirroring & Auto-Failover: Cluster Mirroring is a high-availability and data-redundancy option for business continuity.

[Figure: Typical Cluster Mirroring Setup - Unix/Linux/... servers connected to Node-0 and Node-1]

Cluster Mirroring is real-time, synchronized, block-level, volume-based data replication with automatic fail-over from one node to the other. It can be viewed as a RAID-1 mirror across two NAS units over the LAN. Cluster Mirroring is for network shares/volumes only (iSCSI and FC targets are under development). Although the cluster always runs in mirroring mode, several configuration variations are available:

- Configuration-1: Active/Standby
- Configuration-2: Active/Active with Static Load-Balancing
- Configuration-3: Active/Active with Dynamic Load-Balancing
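
To make "real-time, synchronized, block-level" concrete, here is a minimal Python sketch (illustrative only, not EliteNAS code; all names are invented) of a RAID-1-style mirror across two nodes, where a write is acknowledged only after both nodes hold the block:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        blocks: dict = field(default_factory=dict)  # block number -> data

    class MirroredVolume:
        """A RAID-1-like mirror of one volume across two NAS nodes."""

        def __init__(self, primary: Node, secondary: Node):
            self.primary = primary
            self.secondary = secondary

        def write(self, block_no: int, data: bytes) -> None:
            # Synchronous replication: store the block on the primary,
            # push the same block to the secondary (in the real product,
            # over the Sync Port link), and only then acknowledge.
            self.primary.blocks[block_no] = data
            self.secondary.blocks[block_no] = data

        def read(self, block_no: int) -> bytes:
            # Reads are served by the node that owns the virtual service IP.
            return self.primary.blocks[block_no]

    node0, node1 = Node("Node-0"), Node("Node-1")
    vol = MirroredVolume(node0, node1)
    vol.write(0, b"block 0 payload")
    assert node0.blocks == node1.blocks  # both nodes hold identical data

If Node-0 fails, Node-1 already holds every acknowledged block, which is what makes automatic fail-over possible.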

Multiple Public IPs (virtual service IPs) are supported, either to cover multiple network segments or to increase network performance. A second NIC port handles data synchronization between the clustered nodes; it is referred to as the Sync Port or Clustered Port. It can be a dedicated NIC port or shared with the data port used for client access. The NIC port behind the Public IP that serves data to clients is sometimes referred to as the Data Port or Public Port.

Cluster Mirroring Basic Requirements

- Two (2) NAS nodes are needed.
- Each NAS node needs at least two NIC ports: one to provide data access to the network, the other for data mirroring between the nodes (the Sync Port).
- Each node must have a different name in the Server Name field.
- Clustering is an optional feature and requires a license for each node in the cluster. The clustering license can be purchased separately or together with the unit. To activate it, enter the license key from the Web Admin page: System > Updates > New License.
- Note: both nodes must be on the same LAN; clustering over a WAN is not supported.

Cluster Performance Considerations

- A matching RAID storage system. A fast network interface (such as 10Gb or InfiniBand) does not guarantee fast network throughput unless the RAID storage system can supply the same bandwidth or more. To reach the desired network throughput, make sure the RAID controller, the type of disks, and the number of disks together provide enough bandwidth.
- A dedicated network link between the two nodes for the Sync Port/Cluster Port. The connection can be a direct cable between the NIC ports, with no switch in between, at either 1Gb or 10Gb. A 10GbE NIC for the Sync Port is recommended even when the Data Ports are only Gigabit NICs: a fast Sync Port significantly shortens a node Fail-Back.
- Caching mode for the network volume/share. A clustered volume/share offers two cache modes: Write-Back or Write-Through. Write-Back gives better write performance but risks losing the data still in cache if power is suddenly lost or the system locks up; Write-Through minimizes that risk but gives far lower write performance (see the sketch after this list). For details, refer to the document "Steps for Cluster Configuration".
- CPU & memory. The NAS system always uses any available memory for caching and buffering, so the more memory, the better. In normal operation, even in cluster mirroring mode, CPU utilization is very low. During a node Fail-Back, however, the surviving node uses full CPU power and maximum memory to calculate the data that must be re-synced to the failing-back node and pushes that data across the Sync Port link. The faster the CPU, the larger the memory, and the higher the bandwidth of the Sync Port link, the sooner both nodes are in sync again and the failing-back node can be put back into clustering service.
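
To make the cache-mode trade-off concrete, here is a minimal Python sketch (illustrative only, not EliteNAS code) of the two write behaviors: Write-Through forces every write to stable storage before acknowledging it, while Write-Back acknowledges as soon as the data is in the OS cache.

    import os

    def write_through(path: str, data: bytes) -> None:
        # Write-Through: O_SYNC forces the data to stable storage before
        # the write call returns, so a power loss cannot lose an
        # acknowledged write -- at the cost of much lower throughput.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
        try:
            os.write(fd, data)
        finally:
            os.close(fd)

    def write_back(path: str, data: bytes) -> None:
        # Write-Back: the data lands in the OS page cache and the write is
        # acknowledged immediately; the kernel flushes it later. Fast, but
        # data still in cache is lost on a sudden power outage or lock-up.
        with open(path, "wb") as f:
            f.write(data)  # no fsync: flushing happens in the background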

Cluster Implementation Planning

- System hardware: two NAS node servers.
- Network: Gigabit or 10Gb NICs; decide whether the Sync Port gets a dedicated NIC or shares NIC ports with the Data Port.
- NAS general settings: the Server Names, and whether a DNS server is in place and working properly.
- NAS cluster settings: whether each share/volume will use Write-Back or Write-Through mode; which NIC will be the Sync Port (give it a host name and IP) and which NIC will be the Data Port (give it a host name and IP as well). These host-name/IP associations are registered on the network host page.

Cluster Caveats & Limitations

- Cluster Mirroring and Failover is for LAN only, not for WAN.
- The default network share/volume uses EXT4 as the file system. With EXT4, creating the file system takes time, depending on the size of the share/volume. If you get a blank black screen during file-system creation, do not panic; it is normal. Adding a node to the cluster, or assigning an ethX port to a Public IP, also produces a momentary blank black screen; this is normal as well.
- Refer to "Cluster Valid Public IP Config" for which network configurations are valid and which are not.
- Clustered volumes must be exactly the same size on both nodes.
- The Data Port/Public Port on both nodes must use the same port number/ethX port; for example, both nodes use eth0 as the data port, or both use eth1 as the public port. (A validation sketch of these checks follows this list.)
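
The size and ethX-port rules above lend themselves to a pre-flight check. Below is a minimal Python sketch (illustrative only; the field names and values are invented) that validates the main preconditions before the cluster is built:

    # Hypothetical per-node settings; substitute your nodes' real values.
    node0 = {"server_name": "nas-node0", "data_port": "eth0",
             "volume_bytes": 4_000_000_000_000}
    node1 = {"server_name": "nas-node1", "data_port": "eth0",
             "volume_bytes": 4_000_000_000_000}

    def check_cluster_preconditions(a: dict, b: dict) -> list:
        problems = []
        if a["server_name"] == b["server_name"]:
            problems.append("each node needs a different Server Name")
        if a["data_port"] != b["data_port"]:
            problems.append("Data-Port must use the same ethX port on both nodes")
        if a["volume_bytes"] != b["volume_bytes"]:
            problems.append("clustered volumes must be exactly the same size")
        return problems

    print(check_cluster_preconditions(node0, node1) or "preconditions OK")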

Active-Standby Configuration

All clients map the share(s) on the Public IP (virtual service IP). During normal operation, all network activity accesses Node-0 via the 1st NIC port. Any write from a client to the Public IP is synchronized from Node-0 to Node-1 via the 2nd NIC ports, the Clustered Ports. All reads are served from Node-0 only. If Node-0 goes down, all client access automatically and transparently fails over to Node-1. The failover process takes just seconds.

[Figure: Unix/Linux/... servers accessing the Public IP; Active Cluster Node-0, Standby Cluster Node-1]
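
Because clients only ever address the virtual service IP, failover needs no client-side reconfiguration; at most a client retries the same address for a few seconds. A minimal Python sketch (illustrative only; the IP and port below are invented, since the document does not give Node-0's virtual IP):

    import socket
    import time

    VIP = "192.168.1.167"   # hypothetical virtual service IP
    PORT = 445              # e.g. the SMB port of the mapped share

    def connect_with_retry(retries: int = 10, delay: float = 2.0) -> socket.socket:
        # Clients always dial the virtual IP, never a node's own address.
        # During the few seconds of failover the attempt fails; retrying
        # the same IP reaches Node-1 once it has taken over the VIP.
        for _ in range(retries):
            try:
                return socket.create_connection((VIP, PORT), timeout=5)
            except OSError:
                time.sleep(delay)
        raise ConnectionError("cluster did not fail over within the retry window")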

Active-Active with Static Load-Balancing Configuration

Separate the clients into two groups:

- Clients in Group #1 map the share(s) on the Public IP (virtual service IP) owned by Node-0 (green lines in the figure).
- Clients in Group #2 map the same share(s) on the Public IP (virtual service IP) 192.168.1.168, owned by Node-1 (purple lines in the figure).

Node-0 serves its own clients while standing by for Node-1, and Node-1 serves its own clients while standing by for Node-0. Data updates on Node-0 are synchronized to Node-1 in real time, and data updates on Node-1 are likewise synchronized to Node-0 in real time. Write performance to the cluster (both nodes) in this Active-Active configuration is better than in the Active-Standby configuration.

[Figure: Unix/Linux/... servers; Active Cluster Node-0 is the owner of one virtual IP, Active Cluster Node-1 is the owner of virtual IP 192.168.1.168]
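
"Static" here means each client is permanently assigned to one group by the administrator. A minimal Python sketch (illustrative only; Node-0's virtual IP is invented, since the document elides it) of one way to split clients evenly:

    import zlib

    VIPS = ["192.168.1.167",   # hypothetical virtual IP owned by Node-0
            "192.168.1.168"]   # virtual IP owned by Node-1 (from the figure)

    def vip_for_client(hostname: str) -> str:
        # Hash the client's hostname so that roughly half of the clients
        # map the share via Node-0's virtual IP and half via Node-1's.
        return VIPS[zlib.crc32(hostname.encode()) % len(VIPS)]

    for host in ("ws-01", "ws-02", "ws-03", "db-01"):
        print(host, "->", vip_for_client(host))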

Active-Active with Dynamic Load-Balancing Configuration

This setup requires an external device: either a DNS server configured for round-robin, or a 3rd-party network load-balancing device configured with a load-balancing policy. Consult the documentation of the specific round-robin DNS server or 3rd-party load-balancing device for how to configure the load-balancing virtual IP.

[Figure: Unix/Linux/... servers reach a virtual IP published by a round-robin DNS server or a network load-balancing device; Active Cluster Node-0 is the owner of one virtual IP, Active Cluster Node-1 is the owner of virtual IP 192.168.1.168]
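
With round-robin DNS, one host name resolves to both nodes' virtual IPs and the DNS server rotates the answer order, so successive clients land on different nodes. A minimal Python sketch (illustrative only; the host name is invented) of what a client-side lookup sees:

    import socket

    # A round-robin DNS server would answer this hypothetical name with
    # both nodes' virtual IPs, rotating their order on each query.
    NAME = "nas-cluster.example.com"

    def pick_address(port: int = 445) -> str:
        # getaddrinfo returns every A record; taking the first one means
        # the DNS server's rotation decides which node this client uses.
        infos = socket.getaddrinfo(NAME, port, socket.AF_INET, socket.SOCK_STREAM)
        return infos[0][4][0]

    print("this client will use", pick_address())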