Highly Available Service Environments
Chad La Joie Page 1 of 8 3/6/2003

Introduction

This paper gives a brief overview of the common issues that can occur at the network, hardware, and application layers when an organization must address the availability and performance of a service, along with possible solutions. It presents this information clearly and concisely, without marketing buzzwords. Reading this paper will not make a person an expert on high availability systems, nor equip them to go out and build one. Instead, it should get them actively thinking about the issues involved, both with their services and with high availability systems, and provide a starting point for when they are ready to seek more information.

Network

The network is often the last piece of infrastructure service designers look at when planning a service. There is a misconception that as long as a service's connection to the network is not slow or close to maximum capacity, there is nothing else to address on the network side. Presented below is a description of common network configurations, the problems that can occur in them, and methods for addressing those problems.

Standard Network Environment

The standard configuration of a network, pictured below, is to have one or more routers that make up the core of the network. These routers establish the internal addressing scheme (subnets) and connect the network to a service provider, like AT&T or Verizon. Switches then connect to these routers in order to provide connections to individual systems. Depending on the size of the network, there may be multiple layers of routers and switches to distribute some of the work of handling the network traffic.
[Figure 1: Standard Network Environment (routers, switches, servers)]

While this network configuration is very useful in many circumstances, and easy to maintain because of its simplicity, it does have some problems if the goal is to create a highly available environment for services.

The first problem lies with the routers themselves. In most network configurations, if a router fails, anything connected to that router is isolated. Until recently the only way to address this problem was to purchase routers that included as much redundancy as possible. These routers contain things like redundant network interfaces, power supplies, and routing engines. While this definitely helps, there is still a chance that a router can fail and leave a segment of the network isolated, and routers with redundant hardware are often far more expensive than those without. To address this issue a special network protocol, the Virtual Router Redundancy Protocol (VRRP), was created. VRRP allows one router to take over the operations of another router in the event of a failure. This allows a company to invest in hardware that does not have redundant features, spending less money, and still eliminate the router as a single point of failure in the network.

The next problem lies with the router-to-switch connection and is especially acute if the routers are using VRRP. While VRRP can allow one router to take over for a failed router, it cannot move the physical connection from the failed router to the operational one. Therefore the switches must have connections to all of the routers that are set up in this redundant fashion. For example, if the two bottom routers in the diagram above employ VRRP, then each switch pictured will need to be connected to both routers so that, in the event of a failure, the remaining router will still be able to contact the switch and the servers connected to it. This is true even if the routers are not using VRRP; in that case, instead of a whole-router failure, a single network interface failure will isolate a segment of the network.
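The failover behavior VRRP provides can be illustrated with a toy election loop: each router has a priority, and the highest-priority router that is still alive handles traffic for the shared virtual address. This is only a sketch of the idea; the Router class here is hypothetical, and real VRRP elects a master through periodic multicast advertisements between the routers themselves, not a central function.

```python
# Toy sketch of VRRP-style failover: the highest-priority live router
# "owns" the virtual IP; when it fails, a backup takes over.
# Illustrative classes only -- real VRRP uses multicast advertisements.

class Router:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority
        self.alive = True

def elect_master(routers):
    """Return the live router with the highest priority, or None."""
    live = [r for r in routers if r.alive]
    return max(live, key=lambda r: r.priority) if live else None

r1 = Router("router-a", priority=200)
r2 = Router("router-b", priority=100)

assert elect_master([r1, r2]) is r1   # router-a owns the virtual IP
r1.alive = False                      # router-a fails...
assert elect_master([r1, r2]) is r2   # ...router-b takes over
```

Clients keep pointing at the one virtual address the whole time, which is what makes the takeover transparent to them.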
Now that the router hardware and router/switch connections are not single points of failure, only one major problem remains. Like the router/switch connections, the switch/server connections could fail and isolate a server. This failure could be caused by a single switch interface failure or by a complete switch failure. The very nature of switches allows computers to be hooked up to more than one, likely without any additional configuration.* Therefore, to protect against interface or complete switch failures, each server should be connected to two or more switches.

Highly Available Network Environment

Employing all the suggestions above creates a network, often termed a multihomed network, without a single point of failure, which is very desirable for hosting services that need to be available the vast majority of the time. The diagram below shows what a small highly available network might look like.

[Figure 2: Highly Available Network Environment (routers employing VRRP, switches, servers)]

In addition to being highly available, this network may provide other benefits depending on the capability of the hardware used. Many servers and switches can be set up to take more than one network connection and treat them as a single virtual interface, an approach sometimes called binding (or bonding). This allows them to make use of all links, when they are operational, to gain additional bandwidth, which in turn can improve the response time of services hosted in this environment.

* Some switches that provide advanced features may require additional configuration in order to handle this type of setup.
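The binding idea described above can be sketched in the same toy style: several physical links are presented as one virtual interface that spreads traffic round-robin across whichever links are currently up, so a failed link reduces bandwidth without cutting connectivity. The classes are illustrative, not any operating system's actual bonding API.

```python
# Toy sketch of link binding: multiple physical links behind one
# virtual interface, with automatic failover to the surviving links.

class Link:
    def __init__(self, name):
        self.name = name
        self.up = True

class VirtualInterface:
    """Several physical links presented as a single interface."""
    def __init__(self, links):
        self.links = links
        self._next = 0

    def pick_link(self):
        """Round-robin over the links that are currently up."""
        up = [l for l in self.links if l.up]
        if not up:
            raise RuntimeError("all links down")
        link = up[self._next % len(up)]
        self._next += 1
        return link

bond = VirtualInterface([Link("eth0"), Link("eth1")])
first = bond.pick_link().name
second = bond.pick_link().name
assert {first, second} == {"eth0", "eth1"}  # both links carry traffic

bond.links[0].up = False                    # eth0 fails
assert bond.pick_link().name == "eth1"      # traffic continues on eth1
```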
Server Hardware

When it comes to server hardware, there are two approaches often taken to ensure the availability of a server. One approach is to pack a single server with as much redundant hardware as possible. The other is to purchase multiple, often commodity, machines and create some mechanism by which working machines take over for failed machines. These approaches are analogous to the approaches, discussed above, for ensuring the availability of the network.

Redundant Hardware Method

Many companies that produce servers for mission critical applications, like IBM, HP, and Sun, sell hardware that contains at least two of every piece of hardware inside the system, paired with an operating system that can reroute work away from hardware that fails, a breed of hardware these companies often refer to as enterprise class systems. Each company has different mechanisms for doing this, which succeed to varying degrees in providing fault-tolerant hardware. Usually these enterprise class systems are far more expensive than a system, or set of systems, that provides the same amount of computing power without the internal redundant hardware. For the extra expense, however, a customer usually gets not only a greater mean time between complete system failures but also better support, quicker turnaround on hardware failures, and arguably better engineered hardware. Together these things can go a long way toward making sure servers, and the services running on them, are available for the maximum amount of time.

Extra hardware costs aside, there can be some very real problems with this approach. First, some enterprise class systems have a controlling piece of hardware for every set of redundant hardware. If this controlling hardware fails, the system goes down. For example, suppose a server has 4 CPUs (C1-C4) and C1 is the controlling hardware for all the CPUs. If C1 fails, all the CPUs become unusable and the server goes down. If C2, C3, and/or C4 fail, things continue as they normally would, except perhaps performing operations a bit more slowly. This sort of architecture is definitely something to look out for and avoid.

Another problem with this approach is that, no matter how it is implemented, there is still only one machine. If that machine needs to be taken down for maintenance, then all the services running on it will be unavailable. This could result in a loss of business, a situation most companies would prefer to avoid.

The last problem is one of resources. Most enterprise class systems run a special operating system. These operating systems require skills to maintain that people in an organization may not possess. Also, because of the unique operating system, certain applications may not be available for the system.

Load Balanced Commodity Hardware

The second approach to ensuring the high availability of a server, as stated above, is to take a collection of commodity hardware and set it up in a manner where active hardware takes over for failed hardware. Such a setup is usually called a cluster, with
each system in the cluster known as a node. The mechanism used to provide the fail-over capability is usually a process called load balancing.

To implement this approach a designer will need not only two or more servers but also two or more network switches and load balancing network appliances, such as F5's BIG-IP or Foundry's ServerIron systems. Each server is connected to each of the switches, which are then connected to each load balancer. Each load balancer should then be connected to two switches on the external network, the same switches the servers were connected to in the highly available network environment described above.

In this setup an instance of the service is deployed on each node in the load balanced cluster. The load balancers treat all operational machines as one virtual machine, addressable by one or more IP addresses and host names. Incoming requests are then forwarded to one of the servers based on some policy in the load balancers, such as round robin. The load balancers periodically scan the cluster to make sure all servers are up. If one or more servers are down, no requests are sent to them until they come back online. Most load balancers will also detect, when forwarding a request, a server that is not operational but has not yet been marked as such; the load balancer will then mark the server as offline and forward the request to another server.

[Figure 3: Load Balanced Server Environment (external switches, load balancers, switches, servers)]

This approach to highly available servers has many benefits over the single system approach. First, it is often much less costly, even taking into account the additional hardware (switches and load balancers) required. Second, because the servers are commodity hardware, a company will likely already have the resources to administer the machines. Third, because more than one server is running the service at any one time, servers can be taken down for maintenance without taking down the service. Lastly, this solution is often easier to scale and usually offers better performance for the price.

This approach is, however, not without problems. First, commodity hardware often comes with commodity support. This could mean slower turnaround times when a system does fail. Second, the mean time between failures for each system will be lower
because each system lacks any sort of internal redundancy. Third, while adding more systems to the load balanced cluster will increase performance, this is only true in certain cases and can quickly become an exercise in diminishing returns. Lastly, as will be discussed later, some applications are not designed to work in a load balanced environment, which can lead to problematic behavior when they are placed in one.

Highly Available Server Environment

The best approach to creating a highly available server environment is to use both of the methods discussed above. This maximizes the strengths of each approach and minimizes most of their weaknesses. This approach creates a load balanced setup, as described above, but instead of using commodity hardware, low-end enterprise class hardware is used for the servers. Usually this means the server has redundant CPUs, hard drives, and power supplies. This solution tends to offer greater performance gains than either solution individually, as well as a greater mean time between failures and better support. It still allows individual machines to be taken down for maintenance without losing the service, and it increases the benefits of adding new machines to the cluster. The only drawbacks to this approach are that it is more costly than the load balanced commodity hardware approach, though usually still cheaper than the single-system approach, and that it still presents a problem for applications not designed to run in load balanced environments.

Services

When it comes to deploying a service, most service designers assume the service will be deployed on a single machine. If it is then deployed in a highly available environment, as described above, odd and nondeterministic behavior can be exhibited. There are at least three broad categories of issues that can cause this behavior, and any service may demonstrate one or more of them.
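As background for those issues, the forwarding behavior of the load balancers described above (round-robin dispatch, plus marking a node offline when a forward fails mid-request) can be sketched as follows. The classes are illustrative stand-ins for appliance behavior, not any vendor's configuration or API.

```python
# Toy sketch of a load balancer: round-robin dispatch across online
# nodes, with a node marked offline if a forward to it fails.

class Node:
    def __init__(self, name):
        self.name = name
        self.online = True

    def handle(self, request):
        if not self.online:
            raise ConnectionError(self.name)
        return f"{self.name} handled {request}"

class LoadBalancer:
    """One virtual service address in front of a cluster of nodes."""
    def __init__(self, nodes):
        self.nodes = nodes
        self._next = 0

    def forward(self, request):
        # Try each node at most once, skipping/marking dead ones.
        for _ in range(len(self.nodes)):
            node = self.nodes[self._next % len(self.nodes)]
            self._next += 1
            if not node.online:
                continue
            try:
                return node.handle(request)
            except ConnectionError:
                node.online = False   # detected mid-forward: mark it down
        raise RuntimeError("no nodes available")

cluster = LoadBalancer([Node("node1"), Node("node2")])
assert cluster.forward("req-1") == "node1 handled req-1"
assert cluster.forward("req-2") == "node2 handled req-2"
cluster.nodes[0].online = False           # node1 crashes
assert cluster.forward("req-3") == "node2 handled req-3"
```

A real appliance also runs the periodic health scan described earlier; the mid-forward detection shown here only covers failures that occur between scans.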
These issues, described next, can often be avoided with little trouble during the design of the service, but it becomes far harder to fix them after development, especially if the service is an off-the-shelf application.

Persistent Connections

When a client communicates with a service, the most costly operation, time-wise and sometimes resource-wise, is creating the communication channel between the two. Therefore most applications will, if the communication protocol being used allows it, try to create persistent connections. Persistent connections are connections that are created once and used many times; that is, they persist between each usage. This allows a client to incur the cost of creating the connection only once.

A few problems can arise if the client is connecting to a service that is placed in a highly available environment. First, if the client is unable to reach the same instance of the service it had originally communicated with, the client's persistent connection will fail and the client may report the service as down, which may not be the case. This situation
can occur either because the node with which the client originally communicated went down, or because the load balancer sent the client's request to another node.

The easiest way to avoid this problem is, of course, not to use persistent connections. Such a solution, however, is rarely practical, especially if the client is another application that makes very frequent requests to a service. A better way to handle this problem is to make sure the client gracefully handles persistent connection failures and, when a failure is encountered, tries to re-establish the connection. Most load balancers can also be set up to allow a client to communicate with the same node it originally communicated with, assuming the node is still available. This setup, often referred to as sticky sessions or session affinity, coupled with the client gracefully handling failed connections, greatly reduces the likelihood of encountering a problem when using persistent connections.

Local Storage of Data

Many applications store things like configuration files, internal state information, or other data on the local file system. If this data is only ever used for the internal functionality of the service, it is unlikely that problems will arise, aside from the headache administrators will get from having to change configuration files on every node whenever a configuration change is needed. If, however, the service stores and operates on information locally and then returns it to a client, problems will arise in a highly available environment, because this information will not be available to all the other nodes. For example, consider a message board service that stores messages in a file on the local file system. A client initiates a request and ends up at Node1, posts a message, and leaves. Later the client comes back to view the message board, ends up at Node2, and does not see their message.
This is not because the service failed to receive the message and properly store it, but because the message is stored on the local file system of Node1.

The solution to this problem is, like the previous one, fairly easy: do not store information locally; instead use a central data store (e.g., a database). This allows all instances of a service to read and act on the same data. If this option is used, though, there are two things the data store will need to do. First, it must have mechanisms in place to handle multiple simultaneous write requests, otherwise it may become corrupt. Second, it needs to be redundant, so that a single point of failure is not introduced into the highly available environment.

In-Memory Information

Every service stores some amount of information in memory and, like information stored on the local file system, if this information is only used internally by the application there is no issue. Most services, however, employ techniques such as caching and in-memory session management, which can lead to problems in a highly available environment.

If a service uses in-memory caching, and the vast majority of them do, there are a few things that can be done, at design time, to alleviate or diminish problems that may
arise. First, if it is acceptable to return old data that may have changed on another node, nothing needs to be done. If this is not acceptable, then the caches between nodes must be synchronized. Such a synchronization process will, at the very least, need to invalidate a cached object in all caches when it is invalidated or changed in one cache. It may also replicate the addition of objects to the cache, as well as changes to cached objects, instead of just invalidating those objects when a change occurs. This type of caching system is sometimes referred to as a lateral caching system. So, if a service requires cache consistency, some sort of lateral caching will need to be employed. Unfortunately, lateral caching libraries are not available for most programming languages, and those that are available are immature.

Another common area where the use of in-memory artifacts may cause problems is when a service employs some sort of session management. Sessions often store, in memory, information specific to each client. If the client arrives at a different node than the one that initiated the session, this information will not be available, which may lead to odd, or at the very least inconvenient, behavior for the client. The most common type of services that do this are web applications.

If a service does employ session management, there are a couple of things that can be done to address problems that occur when the service is placed in a highly available environment. First, a lateral cache could be used to replicate session information to each node. Second, if the service is a web application, many enterprise web application servers automatically replicate sessions if they are aware that they are in a highly available environment. A load balancer employing sticky sessions can also go a long way toward addressing these issues.
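The minimal invalidation behavior of a lateral cache can be sketched as follows: when one node changes an object, it tells its peers to drop their copies, forcing them back to the shared data store on the next read. This is an in-process illustration with hypothetical classes; a real lateral cache would exchange these invalidations over the network.

```python
# Toy sketch of lateral cache invalidation: a write on one node
# evicts the corresponding entry from every peer node's cache.

class NodeCache:
    """Per-node in-memory cache that invalidates peers' copies on change."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.peers = []       # other nodes' caches in the cluster

    def put(self, key, value):
        self.data[key] = value
        for peer in self.peers:          # lateral invalidation
            peer.data.pop(key, None)

    def get(self, key):
        # A miss (None) means: go back to the central data store.
        return self.data.get(key)

a, b = NodeCache("node-a"), NodeCache("node-b")
a.peers, b.peers = [b], [a]

a.put("user:42", "v1")
b.data["user:42"] = "v1"        # node-b has cached the same object
a.put("user:42", "v2")          # node-a changes it...
assert b.get("user:42") is None # ...node-b's stale copy is gone
assert a.get("user:42") == "v2"
```

A replicating variant would push the new value to peers instead of evicting it; the trade-off is more network traffic for fewer cache misses.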
If a client always communicates with the same node, these in-memory issues go away so long as the node never becomes inactive. Since node failure should be a very rare occurrence, it is often acceptable to use sticky sessions to solve this problem. In cases where session management is critical, for example in financial transactions, it may still be necessary to recover from a node failure, which means session information will still need to be replicated.

Conclusion

While highly available service environments require a great deal of thought in order to plan and operate well, most organizations that handle a large volume of electronic transactions will eventually encounter a need for them. It is vitally important to note, however, that like all technologies these environments are not magic panaceas that will fix availability and performance problems in every situation. They are simply a useful tool to consider when faced with such issues. Highly available service environments can pound out a lot of issues; the trick is to not view everything as a nail.
Introduction. What is a Remote Console? What is the Server Service? A Remote Control Enabled (RCE) Console
Contents Introduction... 3 What is a Remote Console?... 3 What is the Server Service?... 3 A Remote Control Enabled (RCE) Console... 3 Differences Between the Server Service and an RCE Console... 4 Configuring
Cisco Application Networking for IBM WebSphere
Cisco Application Networking for IBM WebSphere Faster Downloads and Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address
Rapid Bottleneck Identification
Rapid Bottleneck Identification TM A Better Way to Load Test WHITEPAPER You re getting ready to launch or upgrade a critical Web application. Quality is crucial, but time is short. How can you make the
CS514: Intermediate Course in Computer Systems
: Intermediate Course in Computer Systems Lecture 7: Sept. 19, 2003 Load Balancing Options Sources Lots of graphics and product description courtesy F5 website (www.f5.com) I believe F5 is market leader
Technical Analysis Document
Technical Architecture Technical Analysis Document The table below shows the various possibilities that sonic sounds have to host their e-commerce site on. The hosting type is described and then advantages
Optimizing Data Center Networks for Cloud Computing
PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,
Nutanix Tech Note. Failure Analysis. 2013 All Rights Reserved, Nutanix Corporation
Nutanix Tech Note Failure Analysis A Failure Analysis of Storage System Architectures Nutanix Scale-out v. Legacy Designs Types of data to be protected Any examination of storage system failure scenarios
Secure Networks for Process Control
Secure Networks for Process Control Leveraging a Simple Yet Effective Policy Framework to Secure the Modern Process Control Network An Enterasys Networks White Paper There is nothing more important than
The Microsoft Large Mailbox Vision
WHITE PAPER The Microsoft Large Mailbox Vision Giving users large mailboxes without breaking your budget Introduction Giving your users the ability to store more e mail has many advantages. Large mailboxes
Routing Security Server failure detection and recovery Protocol support Redundancy
Cisco IOS SLB and Exchange Director Server Load Balancing for Cisco Mobile SEF The Cisco IOS SLB and Exchange Director software features provide a rich set of server load balancing (SLB) functions supporting
Virtualized Domain Name System and IP Addressing Environments. White Paper September 2010
Virtualized Domain Name System and IP Addressing Environments White Paper September 2010 Virtualized DNS and IP Addressing Environments As organizations initiate virtualization projects in their operating
The objective of WebSphere MQ clustering is to make this system as easy to administer and scale as the Single Queue Manager solution.
1 2 It would be nice if we could place all the queues in one place. We could then add processing capacity around this single Queue manager as required and start multiple servers on each of the processors.
How To Send Video At 8Mbps On A Network (Mpv) At A Faster Speed (Mpb) At Lower Cost (Mpg) At Higher Speed (Mpl) At Faster Speed On A Computer (Mpf) At The
Will MPEG Video Kill Your Network? The thought that more bandwidth will cure network ills is an illusion like the thought that more money will ensure human happiness. Certainly more is better. But when
Last time. Data Center as a Computer. Today. Data Center Construction (and management)
Last time Data Center Construction (and management) Johan Tordsson Department of Computing Science 1. Common (Web) application architectures N-tier applications Load Balancers Application Servers Databases
AppDirector Load balancing IBM Websphere and AppXcel
TESTING & INTEGRATION GROUP SOLUTION GUIDE AppDirector Load balancing IBM Websphere and AppXcel INTRODUCTION...2 RADWARE APPDIRECTOR...3 RADWARE APPXCEL...3 IBM WEBSPHERE...4 SOLUTION DETAILS...4 HOW IT
VoIP Buying Guide for Small Business
VoIP Buying Guide for Small Business By Brad Chacos, PCWorld Aug 14, 2012 6:00 PM The cord-cutting movement isn't limited to consumer cable and Netflix. As Voice over Internet Protocol communication matures
HA / DR Jargon Buster High Availability / Disaster Recovery
HA / DR Jargon Buster High Availability / Disaster Recovery Welcome to Maxava s Jargon Buster. Your quick reference guide to Maxava HA and industry technical terms related to High Availability and Disaster
IT White Paper. N + 1 Become Too Many + 1?
IT White Paper Balancing Scalability and Reliability in the Critical Power system: When Does N + 1 Become Too Many + 1? Summary Uninterruptible Power Supply (UPS) protection can be delivered through a
CS 188/219. Scalable Internet Services Andrew Mutz October 8, 2015
CS 188/219 Scalable Internet Services Andrew Mutz October 8, 2015 For Today About PTEs Empty spots were given out If more spots open up, I will issue more PTEs You must have a group by today. More detail
Layer 4-7 Server Load Balancing. Security, High-Availability and Scalability of Web and Application Servers
Layer 4-7 Server Load Balancing Security, High-Availability and Scalability of Web and Application Servers Foundry Overview Mission: World Headquarters San Jose, California Performance, High Availability,
Active-Active and High Availability
Active-Active and High Availability Advanced Design and Setup Guide Perceptive Content Version: 7.0.x Written by: Product Knowledge, R&D Date: July 2015 2015 Perceptive Software. All rights reserved. Lexmark
Fax Server Cluster Configuration
Fax Server Cluster Configuration Low Complexity, Out of the Box Server Clustering for Reliable and Scalable Enterprise Fax Deployment www.softlinx.com Table of Contents INTRODUCTION... 3 REPLIXFAX SYSTEM
Agenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.
Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance
Pivot3 Desktop Virtualization Appliances. vstac VDI Technology Overview
Pivot3 Desktop Virtualization Appliances vstac VDI Technology Overview February 2012 Pivot3 Desktop Virtualization Technology Overview Table of Contents Executive Summary... 3 The Pivot3 VDI Appliance...
Creating SANLess Microsoft SQL Server Failover Cluster Instances with SIOS DataKeeper Cluster Edition and SanDisk Fusion iomemory
Creating SANLess Microsoft SQL Server Failover Cluster Instances with SIOS DataKeeper Cluster Edition and SanDisk Fusion iomemory Learn how deploying both DataKeeper Cluster Edition and SanDisk Fusion
NetIQ Access Manager 4.1
White Paper NetIQ Access Manager 4.1 Performance and Sizing Guidelines Performance, Reliability, and Scalability Testing Revisions This table outlines all the changes that have been made to this document
Cisco Application Networking for Citrix Presentation Server
Cisco Application Networking for Citrix Presentation Server Faster Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address
Availability Digest. Stratus Avance Brings Availability to the Edge February 2009
the Availability Digest Stratus Avance Brings Availability to the Edge February 2009 Business continuity has not yet been extended to the Edge. What is the Edge? It is everything outside of the corporate
SECURE WEB GATEWAY DEPLOYMENT METHODOLOGIES
WHITEPAPER In today s complex network architectures it seems there are limitless ways to deploy networking equipment. This may be the case for some networking gear, but for web gateways there are only
Active-Active Servers and Connection Synchronisation for LVS
Active-Active Servers and Connection Synchronisation for LVS Simon Horman (Horms) [email protected] VA Linux Systems Japan K.K. www.valinux.co.jp with assistance from NTT Commware Coporation www.nttcom.co.jp
Availability Digest. www.availabilitydigest.com. Redundant Load Balancing for High Availability July 2013
the Availability Digest Redundant Load Balancing for High Availability July 2013 A large data center can comprise hundreds or thousands of servers. These servers must not only be interconnected, but they
WHITE PAPER. How To Build a SAN. The Essential Guide for Turning Your Windows Server Into Shared Storage on Your IP Network
WHITE PAPER How To Build a SAN The Essential Guide for Turning Your Windows Server Into Shared Storage on Your IP Network TABLE OF CONTENTS Introduction... 3 What is a SAN?... 4 Why iscsi Storage?... 4
Surround SCM Backup and Disaster Recovery Solutions
and Disaster Recovery Solutions by Keith Vanden Eynden Investing in a source code management application, like, protects your code from accidental overwrites, deleted versions, and other common errors.
A Dell Technical White Paper Dell Storage Engineering
Networking Best Practices for Dell DX Object Storage A Dell Technical White Paper Dell Storage Engineering THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND
Server Traffic Management. Jeff Chase Duke University, Department of Computer Science CPS 212: Distributed Information Systems
Server Traffic Management Jeff Chase Duke University, Department of Computer Science CPS 212: Distributed Information Systems The Server Selection Problem server array A server farm B Which server? Which
Multiple Public IPs (virtual service IPs) are supported either to cover multiple network segments or to increase network performance.
EliteNAS Cluster Mirroring Option - Introduction Real Time NAS-to-NAS Mirroring & Auto-Failover Cluster Mirroring High-Availability & Data Redundancy Option for Business Continueity Typical Cluster Mirroring
Building Nameserver Clusters with Free Software
Building Nameserver Clusters with Free Software Joe Abley, ISC NANOG 34 Seattle, WA, USA Starting Point Discrete, single-host authoritative nameservers several (two or more) several (two or more) geographically
Bigdata High Availability (HA) Architecture
Bigdata High Availability (HA) Architecture Introduction This whitepaper describes an HA architecture based on a shared nothing design. Each node uses commodity hardware and has its own local resources
Vess A2000 Series HA Surveillance with Milestone XProtect VMS Version 1.0
Vess A2000 Series HA Surveillance with Milestone XProtect VMS Version 1.0 2014 PROMISE Technology, Inc. All Rights Reserved. Contents Introduction 1 Purpose 1 Scope 1 Audience 1 What is High Availability?
Scaling Microsoft SQL Server
Recommendations and Techniques for Scaling Microsoft SQL To support many more users, a database must easily scale out as well as up. This article describes techniques and strategies for scaling out the
Reduce your downtime to the minimum with a multi-data centre concept
Put your business-critical activities in good hands If your income depends on the continuous availability of your servers, you should ask your hosting provider for a high availability solution. You may
