HAOSCAR 2.0: an open source HA-enabling framework for mission critical systems
Rajan Sharma and Thanadech Thanakornworakij, {tth010,rsh018}@latech.edu

High availability (HA) is essential in mission-critical computing systems to enable breakthrough science and to advance economic and business development, especially in today's digital world. HA systems are increasingly vital because they sustain critical services for users, and to stay competitive most companies need more reliable systems to support their daily business. We therefore foresee the critical importance of enabling the cyber infrastructure with HA.

HAOSCAR 2.0 is independent of OSCAR and now supports both Debian- and Red Hat-based Linux systems; we verified it on Ubuntu 9.10 Server Edition for Debian support and on CentOS 5.6 for Red Hat support. HAOSCAR enhances availability by adopting component redundancy to eliminate single points of failure, and it incorporates a self-healing mechanism, failure detection, automatic synchronization, failover, and failback. In the next release we plan to add an API for advanced users, through which developers and administrators can extend HAOSCAR's functionality via the provided hooks; the API will let users create event notification services and powerful rule-based systems, and can also be used to determine the state of monitored services. In this article we also give examples of existing and new systems whose availability HAOSCAR 2.0 improves: a web application and an exchange server that exchanges patient data between hospitals.

The need for highly available systems is increasing dramatically: every company or organization needs more reliable systems to support its daily business, and we see this demand extending to many kinds of systems. Initially, HAOSCAR was tied to the OSCAR packages. It eliminated the single point of failure in OSCAR clusters by duplicating the head node; because all applications in a cluster may fail if the head node fails, HAOSCAR provided a self-healing mechanism, failure detection and recovery, failover, and failback in OSCAR clusters. HAOSCAR adds a secondary head node to the system, and when the primary node fails, the secondary head node takes over its responsibilities.

The HAOSCAR team sees the need for high availability in scientific discovery as well as in critical business problems, so we developed a new version of HAOSCAR that is not tied to OSCAR. HAOSCAR now supports OSCAR clusters as well as Debian- and Red Hat-based systems in general, and we plan to support Rocks in the future. The main goals of the HAOSCAR 2.0 project are to improve flexibility, provide an open solution, and combine high availability with high-performance computing. HAOSCAR should support most Linux-based IT infrastructure (such as web servers and clusters) by providing the redundancy that mission-critical applications require. To achieve high availability, component redundancy eliminates single points of failure, and our enhancements incorporate a self-healing mechanism, failure detection, automatic synchronization, failover, and failback. HAOSCAR provides a simple high-availability solution for users: the installation process consists of a few steps that ask the user for information.
HAOSCAR includes a feature that clones the system during installation to keep the data and software stacks consistent; if the primary component fails, the cloned node takes over the head node's responsibilities. HAOSCAR also monitors services
with a flexible, event-driven, rule-based system. Moreover, HAOSCAR provides data synchronization between the primary and secondary systems. All of these features are enabled during the installation process.

HAOSCAR 2.0 Hardware Architecture

Figure 1 illustrates the architecture of HAOSCAR. The beta release supports redundant private network interfaces with failover (as shown in Figure 1) and reliable external storage, and users can manually add and configure more NICs, switches, and external storage.

Figure 1. HAOSCAR Hardware Architecture

HAOSCAR consists of the following major system components:

1. Primary server: responsible for receiving requests and distributing them to the specified clients. Each server has at least two NICs: one connected to the Internet via a public network address, one connected to the local LAN to which both head nodes are attached, and any additional optional NICs may be connected anywhere.
2. Standby primary server: monitors the primary server, activates its own services, and takes over when a failure of the primary server is detected.

3. Multiple clients (optional): dedicated to computation.

4. Local LAN switches: provide local connectivity among the head and client/compute nodes.

Each head node should have at least two NICs: one public interface to the outside network and one private interface to its local LAN and compute nodes. The configuration depends on how a user wants to connect the NICs to the public and private networks; our illustrations assume that eth0 is the private interface and eth1 is the public interface. Figure 2 shows a sample HAOSCAR head node network configuration.

Figure 2. Sample HAOSCAR head node network configuration
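To make the assumed layout concrete, a Debian-style /etc/network/interfaces along the following lines would give a head node its private (eth0) and public (eth1) addresses. This is only an illustrative sketch: the addresses, netmasks, and gateway are hypothetical values, not settings prescribed by HAOSCAR.

    # /etc/network/interfaces (illustrative values only)
    auto eth0
    iface eth0 inet static
        address 10.0.0.1          # private LAN toward the standby and compute nodes
        netmask 255.255.255.0

    auto eth1
    iface eth1 inet static
        address 192.0.2.10        # public address reachable from outside
        netmask 255.255.255.0
        gateway 192.0.2.1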
HAOSCAR 2.0 Software Architecture

Figure 3. HAOSCAR Software Architecture

HAOSCAR combines several existing technology packages to provide an HA solution. The HAOSCAR software architecture has three components, as shown in Figure 3. The first component is IP monitoring using Heartbeat, a service designed to detect failures of physical components such as the network and IP service; when the primary node is unhealthy, Heartbeat handles the IP failover and failback mechanism. The second component is service monitoring with Monit, a small, lightweight service that watches the health of important services to keep them highly available. Monit attempts to restart a failing service four times by default (the count is tunable); if the service cannot be restarted successfully, Monit signals Heartbeat to trigger a failover. The third component, data synchronization, is provided by a service called hafilemon. During a failback event, data is synchronized from the secondary server back to the primary server; this backward synchronization propagates changes made to the secondary server while it was acting as the head node. By default, hafilemon invokes rsync two minutes after it detects the first change to a file, so that groups of changes are transmitted together; users can change this interval to suit their applications. SystemImager is used to clone the primary node during the installation process, creating a standby head node image from the primary server. Moreover, when users need to replace the secondary node, they simply run a script that invokes SystemImager to clone the primary system onto a new machine.

Improvements in the Beta Release over Earlier Versions of HAOSCAR

This beta version of HAOSCAR 2.0 introduces several enhancements and new features compared to earlier versions. The project's primary goal is to remove the OSCAR dependencies from HAOSCAR and reintegrate the corresponding functionality into its core, making it a truly standalone high-availability solution for any cluster or server platform. The new features in this beta release include:

1. Head node redundancy: This beta version of HAOSCAR 2.0 supports an active/hot-standby model for the head node, whereas active/warm-standby was the model of choice in earlier versions. We also plan to support an active-active multi-head architecture in a future release; the active-active model offers both performance and availability, because both head nodes can provide services simultaneously, but its implementation is quite complicated and can lead to data inconsistency when failures occur.

2. Heartbeat integration: Heartbeat provides primary-node outage detection plus failover and failback mechanisms over serial-line and UDP connectivity. In this beta release, Heartbeat uses a set of virtual IP addresses, which makes failback possible; in earlier versions, Heartbeat was set up to use the primary server's own IP address, which prevented failback from occurring. The current version of Heartbeat supports only a pair of nodes. In a future release we plan to use HAL-R2 (HA Linux Release 2), a major revision of the Linux-HA system, to extend Heartbeat's functionality to monitor resources across multiple nodes for correct operation. We also plan to integrate HAL-R2 with the active-active multi-node architecture so that this model delivers both high performance and high availability for efficient computing.
3. Networking interfaces: In this beta version of HAOSCAR 2.0, multiple virtual IPs can be selected on both the local and external networks to support multiple networking interfaces. Multiple IPs can be assigned while setting up the primary server during installation, whereas earlier versions could assign only a single IP at installation time and therefore did not support multiple interfaces.
4. Services: This beta version adds hafilemon, a daemon that monitors changes made in given directory trees and calls rsync accordingly. To prevent rsync from being called too frequently, a user can set a smallest-synchronization-interval threshold: the hafilemon daemon will not call rsync while (current time - last sync time) < threshold, and will call rsync later, once the threshold criterion is met (a minimal sketch of this rule follows this list). Hafilemon has been implemented with Sys::Gamin for conceptual testing and will be reimplemented with inotify in C in the near future. This module is new; the hafilemon service was not available in earlier versions of HAOSCAR.

5. Cross-platform support: Earlier versions of HAOSCAR 2.0 supported only Debian-based systems, so fewer members of the Linux community were able to use HAOSCAR. This beta release supports both Debian- and Red Hat-based systems, making HAOSCAR available to the whole Linux community.
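As a rough illustration, the threshold rule above can be expressed as a short shell loop. This is a conceptual sketch only, not HAOSCAR's hafilemon: the watched directory, the destination host, and the use of inotifywait are stand-in assumptions.

    #!/bin/sh
    # Sketch of hafilemon's rate-limiting rule (illustrative, not the real daemon).
    THRESHOLD=120              # smallest synchronization interval in seconds (tunable)
    LAST_SYNC=0
    while inotifywait -r -qq -e modify,create,delete /var/www; do
        NOW=$(date +%s)
        ELAPSED=$((NOW - LAST_SYNC))
        # If the threshold has not yet elapsed since the last sync, wait out the rest.
        [ "$ELAPSED" -lt "$THRESHOLD" ] && sleep $((THRESHOLD - ELAPSED))
        rsync -az --delete /var/www/ standby:/var/www/
        LAST_SYNC=$(date +%s)
    done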
HAOSCAR 2.0 Applications

Web Application

HAOSCAR provides high availability to mission-critical applications running on a web server. A number of factors can make such a web application unavailable, including hardware and software failures, power issues, and routine maintenance of the web server. HAOSCAR supports the key high-availability features that let a web server achieve maximum uptime. During the installation of HAOSCAR 2.0 on a web server, it clones the primary web server to act as a standby server, and the administrator specifies the paths of the directories where the web application files and MySQL database tables are kept, so that data can be synchronized between the primary and standby servers. The primary and standby servers should have homogeneous hardware, and their network cards should support PXE boot. The primary web server receives requests from clients and serves them; when a failure occurs on the primary server, the standby web server takes over and configures the same IP address as the primary web server, so that all requests are redirected to the standby. Any changes made to the database while the standby server is active are synchronized back to the primary server to ensure data consistency. The synchronization is done with rsync, which synchronizes directories by copying the contents of one directory so that it looks exactly like the other: rsync obtains the file lists of the source and destination directories, compares them by the specified criteria (file size, creation/modification date, or checksum), and then makes the destination directory reflect all the changes that happened to the source since the last synchronization session. When the primary web server becomes available again after repair, it becomes the standby server by default; users who want the repaired server to be the primary again must run the failback script.
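As an illustration of this synchronization step, mirroring a web root and a MySQL data directory to the standby could look like the following; the paths and the standby hostname are hypothetical, not HAOSCAR defaults.

    # Mirror web files and database tables to the standby server.
    # -a preserves permissions and timestamps, -z compresses in transit,
    # --delete removes destination files that no longer exist on the source.
    rsync -az --delete /var/www/ standby:/var/www/
    rsync -az --delete /var/lib/mysql/ standby:/var/lib/mysql/

A checksum comparison, the strictest of the criteria mentioned above, can be forced with rsync's -c option at the cost of extra I/O.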
Patient Data Exchange Server

An application currently in development to gain HA capabilities is the Patient Data Exchange Server, an application used to exchange and manage patients' records and data among health care providers through a secure and reliable process. It acts as a broker among health care providers, offering secure and reliable services such as health care provider registration, patient data exchange brokerage, patient data transfer in push and pull modes, and data exchange via open standards such as HL7, DICOM, and future standard protocols, as shown in Figure 4. Backed by HAOSCAR's high-availability infrastructure, it would not only achieve maximum uptime but, in case of downtime, would seamlessly synchronize all data from the primary server to the standby server, which would in turn take over the primary server's IP address, making the Patient Data Exchange Server highly available. To make this a simple, one-time process for the system administrator running the exchange server, we decided to combine and integrate HAOSCAR and the exchange server into a single package. Installing this package installs the exchange server and makes it HA-enabled at the same time: it first installs all the exchange server components in the appropriate destinations and then starts the HAOSCAR installation, during which the administrator specifies the web application directories and databases used by the exchange server, and these are automatically picked up for synchronization by HAOSCAR.

Figure 4. Patient Data Exchange Server with HA

Cybertools

Petashare

Petashare is a project of the Louisiana Optical Network Initiative (LONI) that supports the need to collaborate on and share large-scale scientific data across seven Louisiana campuses. The Petashare project provides data management, scheduling, and storage tools to support scientists' collaborative research. Petashare storage is managed by the Integrated Rule-Oriented Data System (iRODS); every campus runs iRODS to share and handle data across the network. iRODS is composed of two servers: a Data server, which stores the large-scale scientific data, and a Metadata server, which keeps the location of every file on the Data servers in the network. Metadata servers are replicated across the network. To make Petashare highly available, HAOSCAR 2.0 creates a secondary server for the Metadata server; if the Metadata server fails, the secondary server takes over the responsibilities of the primary, and users can still access the Petashare storage across the network. In the case of a Data server failure, the system does not yet support replicating the data within the same site, so if the files a user needs are on the failed server, the user cannot retrieve them.

Rocks Cluster Distribution

Previous versions of HAOSCAR supported only OSCAR clusters, but the new HAOSCAR supports Rocks as well. After the user finishes setting up a Rocks cluster, HAOSCAR clones the Rocks head node to create a standby head node. When the primary head node fails, the secondary head node takes over its responsibilities. After the primary head node is repaired, it becomes the standby node; if a user wants to make the repaired head node primary again, HAOSCAR provides a manual script to do so.

High Availability Tools Configuration and Installation (HATCI) Customization

Service Monitoring (Monit) Configuration

HAOSCAR 2.0 uses Monit to monitor and maintain services that need to be highly available. Users who are familiar with Monit may configure it manually. Some basic configuration options, in /etc/monit/monitrc:

    set daemon 120
    set httpd port 2188 and use address localhost
        allow localhost
        # allow admin:monit

"set daemon 120" runs Monit as a service daemon that polls each watched process/file every 120 seconds. "set httpd port 2188 and use address localhost / allow localhost" spawns an httpd daemon that Monit uses to access its daemon functionality at runtime. "allow admin:monit" would allow anyone to log in to the httpd service remotely as "admin" with the password "monit", permitting remote maintenance of the monitored services.

For each system-critical service, Monit needs a pair of checks: a process check, which Monit uses to maintain the process, and a pid-file check, which coordinates between Monit and Heartbeat failover. In the example below, HAOSCAR 2.0 monitors the sshd service. If the service is not running or cannot communicate on port 22, Monit restarts it, up to five times; if the service still does not work, HAOSCAR runs the failover script and gives up trying to restart the service. For example:

    check process sshd with pidfile /var/run/sshd.pid
        start program = "/etc/init.d/ssh start"
        stop program = "/etc/init.d/ssh stop"
        if 5 restarts within 5 cycles then timeout
        if failed port 22 protocol ssh then restart

    check file sshdpid with path /var/run/sshd.pid
        if changed timestamp for 5 cycles then exec "/bin/sh /usr/bin/fail-over"
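After editing the control file, Monit's standard command-line operations can be used to validate and apply the configuration; these are stock Monit commands, not HAOSCAR-specific tooling.

    monit -t          # syntax-check /etc/monit/monitrc
    monit reload      # re-read the control file
    monit status      # query service state via the embedded httpd (port 2188 above)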
Additional examples of service configurations are available in the Monit documentation; be sure to include the pid-file check so that Monit and Heartbeat interact correctly.

IP Availability (Heartbeat) Configuration

In /etc/ha.d/ha.cf:

    logfile /var/log/haoscar/heartbeat.log
    logfacility local0
    udpport 694
    keepalive 2
    deadtime 30
    initdead 120
    bcast eth0

"udpport" defines the port over which the primary and secondary servers communicate. "keepalive" specifies the polling interval the primary server uses to reassert itself to the secondary server. "deadtime" is how long the secondary server will wait without reassertion from the primary before taking over the IPs. "initdead" is the deadtime used when the system is first brought online. "bcast" defines which NIC Heartbeat uses to communicate between nodes.

In /etc/ha.d/haresources:

    Primary-Server

This file lists the virtual IPs to be used by Heartbeat and the system to which they belong by default. Both files must remain synchronized across both servers.

Data Synchronization (HA-OSCAR filemon) Configuration

In /etc/init.d/ha-oscar-filemon:

    start-stop-daemon --start --pidfile $PID_FILE --background --make-pidfile \
        --exec $DAEMON -- --recursive --period $period \
        --primary=$primary --secondary=$secondary $watch_dirs

"--period" defines how long the daemon waits before rsync transmits a new set of changes to the secondary server.
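Filled in with hypothetical values, the same invocation might expand as follows; the pid file, daemon path, period, addresses, and watch directories are all illustrative placeholders rather than HAOSCAR defaults.

    start-stop-daemon --start --pidfile /var/run/ha-oscar-filemon.pid \
        --background --make-pidfile --exec /usr/sbin/ha-oscar-filemon -- \
        --recursive --period 120 --primary=10.0.0.1 --secondary=10.0.0.2 \
        /var/www /var/lib/mysql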