Building HA Linux Cluster A tutorial for IEEE Cluster Conference 2001


Building HA Linux Clusters: A tutorial for IEEE Cluster Conference 2001
Ibrahim.Haddad@Ericsson.com, Ericsson Research Corporate Unit, Ericsson Canada

Purpose of the tutorial:
1. Share our experience in building near-telecom-grade HA Linux clusters.
2. Address design and implementation issues.
3. Share our vision of the future of carrier-class server nodes.

Tutorial Description
- Introduction: clusters and cluster computing, types of clusters, HA clusters, system HW architecture, the ARIES project, Linux as an OS for telecom platforms
- Intermediate: issues in building HA clusters, steps, testing the cluster's availability and performance
- Advanced topics: load balancing and traffic distribution, IPv6, security, cluster simulation

Clustering and Ericsson Research Open Architecture Research
- Mission: conduct applied research on access, applications, networks, and services for advanced mobile networks, in co-operation with product units, core units, customers, and research organizations, to demonstrate the viability of distributed systems for servers and applications in IP networks.
- Activities: cluster computing, cluster simulation, IP telephony services, IP routers.
- ECUR Lab: state-of-the-art telecom-grade equipment.

Part I: Introduction to Clusters
- Clusters
- HA clusters
- High availability
- Fault tolerance
- Overview of the system architecture

What is a cluster?
A cluster is a collection of connected, independent computers that work together to solve a problem.
Ericsson Research Open Architecture Lab, Montreal, Canada

Clustering Goals
- High availability: isolate or reduce the impact of a failure in a machine, resource, or device through redundancy and failover techniques.
- Scalability: expand the capacity of servers in terms of processors, memory, storage, or other resources to support business growth.
- Improved processing speed: high performance, improved access time, improved response time.
- Load balancing: efficient resource utilization.
- Manageability: reduce system management costs through appropriate system management facilities.

Where are clusters used today?
Clusters are used in:
- Computational science research (universities, national labs)
- ISPs and ASPs
- Telecom and telephony services research labs
- Government labs (such as ministries of defence)
However, they are still:
- Too hard to set up and use
- Too many options without clear winners
- Too many learning curves to climb, too many times

HA Clusters
- HA clusters have the ability to continue operating even if a server fails.
- HA systems are no longer reserved for traditional mission-critical applications: there is an increasing need for almost all commercial applications and systems to provide users with minimal downtime.
- This even extends, for example, to customers demanding minimal downtime for regular system maintenance and upgrades.
- Cost of downtime.

High Availability (HA)
- HA is defined as the capability of a computing resource to remain on-line in the face of a variety of potential subsystem failures: failure of a power supply, network access, storage devices, and so on.
- Routine maintenance to upgrade an operating system or application may demand that a subsystem be taken off-line.
- High availability is measured as a percentage.
- In a monolithic system (vs. a cluster), each of these events would interrupt service.

HA Degrees

Fault Tolerance
- An alternative approach to gaining greater availability is a fault-tolerant architecture.
- All critical subsystems in a fault-tolerant system are redundant; in the case of a subsystem failure, a "hot spare" is available (replicated power supplies, cooling systems, disks, CPUs, ...).
- The fault-tolerant approach is expensive: to guarantee reliability, you essentially purchase several computing subsystems, only one of which carries the workload at any one time.
- Second and subsequent systems shadow processing and mirror the storage of data, without contributing to the overall capacity of the system.

High Availability vs. Fault Tolerance
- High availability for cluster architectures works differently. Five clustered nodes may divide the load for a set of critical applications, with all five nodes contributing toward processing the tasks at hand.
- If one of the nodes fails, the four remaining nodes pick up the load. Depending on the load at the time of the failure, performance will drop no more than 20% (1/5).
- Switching the load of the failed machine to the other machines usually takes a short period of time.
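The capacity arithmetic above can be checked with a few lines of shell (a trivial sketch; the five-node count is just the example used in this tutorial):

```shell
#!/bin/sh
# Capacity lost when one of N equally loaded cluster nodes fails.
# With N nodes sharing the load, losing one removes 100/N percent of
# total capacity; the survivors each absorb 1/(N-1) more of the work.
nodes=5
loss=$((100 / nodes))                 # share carried by the failed node
extra=$((100 / (nodes - 1) - loss))   # extra percentage points per survivor
echo "capacity drop: ${loss}%"
echo "added load per surviving node: ${extra} percentage points"
```

With nodes=5 this prints a 20% capacity drop, matching the 1/5 figure above.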

High Availability vs. Fault Tolerance
- Monolithic systems can be expected to perform 99% of the time; 1% downtime translates to 90 hours in a year, or about 3.5 days. Not acceptable in the telecom world.
- Fault-tolerant systems can improve reliability further, to the point where it is far more likely that extrinsic factors will interrupt service.
- Cluster architectures can be tuned in accordance with the cost of downtime.

Software vs. Hardware Availability
- Hardware availability is increasing at a higher pace than software availability/reliability; in some cases, the platform may be available but not the application.
- Software has bugs; it may cause applications to crash. Keeping redundancy in applications and maintaining process state is complex.
- In telecom, the required uptime includes both platform and application uptime: end-users don't care about running platforms when the required application is unavailable.

Failover
- Failover is the ability of a cluster to detect problems in a node and to accommodate ongoing processing by routing applications to other nodes.
- This process may be programmed or scripted so that steps are taken automatically, without operator intervention.
- Fundamental to failover is communication among nodes: signaling that they are functioning correctly and reporting problems when they occur.

Heartbeat
- Each node listens actively to make sure that all of its companions are alive. When a node fails, the cluster interconnect software takes action.
- More sophisticated cluster software reacts to the problem by shifting applications and users automatically, and reconnecting them to one or more healthy nodes of the cluster.
- Journals may be necessary to bring an application up to its current transaction with integrity; databases may need to be reloaded.
- The catastrophic failure of a node is one event to which the cluster management environment must respond. Other potential problems may be the failure of a network interface, a RAID failure, and so on.
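The heartbeat idea can be sketched in a few lines of shell. This is a toy illustration, not the interconnect software used in the tutorial's cluster; the peer names and the failover action are placeholders:

```shell
#!/bin/sh
# Toy heartbeat monitor: probe each peer once and report failures.
# PEERS and the failover action are hypothetical placeholders.
PEERS="${PEERS:-node1 node2 node3}"

check_peer() {
    # One ICMP echo with a short timeout; non-zero exit means "down".
    ping -c 1 -W 1 "$1" >/dev/null 2>&1
}

failover() {
    # A real cluster manager would migrate the virtual IP and restart
    # the failed node's applications on a healthy node here.
    echo "peer $1 is down: initiating failover"
}

monitor_once() {
    for peer in $PEERS; do
        check_peer "$peer" || failover "$peer"
    done
}

# A real daemon would loop:  while :; do monitor_once; sleep 2; done
monitor_once
```

A production heartbeat would also guard against false positives (one missed probe is not a failure) and report over a redundant link, as the Ethernet redundancy section later describes.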

Issues in Building HA Clusters
- How to automatically build and boot the nodes?
- Which file systems to use in the cluster?
- What types of traffic distribution and load balancing mechanisms to adopt, depending on the applications?
- How to build redundancy, and to what extent?
- How to manage the cluster remotely?
- How to add/remove nodes without affecting operations?

The HW System Architecture
A typical research Linux cluster for telecom applications at Ericsson Research Canada.

The Needs
- Multi-CPU rackmount server systems that are flexible and can support several developers working on different projects; instead of having one server shelf and many diskless CPUs, we wanted shelves that can be autonomous from each other.
- The systems should be as close as possible to real production systems, so that system integration problems can be studied and overcome:
  - 48-volt power supply systems
  - CompactFlash support
  - pre-installed SCSI disks and tape backups
  - software RAID configurations
  - as many Fast Ethernet ports as possible on each CPU

Server Hardware Platform
- CompactPCI design, -48V central-office powered, NEBS-compliance ready
- 16 P3 500 MHz processors, 512 MB RAM each
- 6 Ethernet ports per processor
- 8 SCSI disk banks (3 x 18 GB)
- Fully redundant and hot-swappable

The Solution
- Each system has four shelves (a total of 16 CPUs and 24 hard drives).
- Each shelf has 4 Pentium III 500 MHz CPUs, each with 512 MB RAM, CompactFlash, 2 onboard Fast Ethernet ports, and 4 Fast Ethernet ports supplied by a ZNYX 474 card.
- Two of the CPUs are diskless; two have a SCSI bus with three 18 GB hard drives and a DDS-4 capable DAT tape drive.
- Each system also has a Knurr alarm management system and two 37-port (36 x 100BaseT plus one fiber) 3Com switches.

IO Front View
- 4 x 150 W power supplies in n+1, hot-swappable
- Dual DAT tape, hot-swappable
- Dual RAID array (3 x 18 GB SCSI each), all hot-swappable
- Technician ground for wrist strap; chassis ground
- Quad cPCI-MXS64 processors with a quad-Ethernet ZNYX card, all hot-swappable
- Ventilation: in at front bottom, out at rear top

IO Rear View
- All rear I/O: cPCI-MXS64 and quad-Ethernet ZNYX cards
- Sealed for airflow and EMI protection
- Dual-input power filter module (single input for this first iteration), -48V input
- Alarm information output
- Wiring duct

Processors Magazine, Front View
- 6 x 150 W power supplies in n+1, hot-swappable
- 15 x cPCI-MXP64 processors, all hot-swappable
- Technician ground for wrist strap
- Ventilation: in at front bottom, out at rear top

Alternative (Cheap) Hardware Used to Build Clusters
- Off-the-shelf 1U Celeron/Pentium III
- 256/512/768 MB RAM
- 20 GB IDE HD
- Floppy/CD-ROM
- 2 Fast Ethernet ports
- 2 USB ports
- Cybex remote control unit
- 48-port Ethernet switch
- Fiber connections

Part II: The ARIES Project
Advanced Research on Internet E-Servers

Overview
- Today vs. tomorrow
- Challenges
- ARIES project
- Internet server requirements
- Operating system requirements
- Linux kernel
- ARIES research areas

Today vs. Tomorrow

Today                      | Tomorrow
---------------------------|-------------------------------
32-bit architecture        | 64-bit architecture
Memory < 1 GB              | Memory > 10 GB
Processor 1 GHz            | Processor > 10 GHz
Fast Ethernet              | Gigabit Ethernet
IPv4                       | IPv4 and IPv6
Network file systems       | Storage area networks
TelORB                     | TSP (TelORB & Linux)
Basic security             | Severe security requirements
Hundreds of TPS            | Thousands of TPS
< 2 million subscribers    | > 10 million subscribers

Challenges
- Bringing telecom-grade characteristics to Linux: HA, scalability, interoperability, HW, languages
- Supporting IPv4 and IPv6
- Achieving linear scalability
- Reliable distributed file systems and distributed network storage
- Efficient load balancing and traffic distribution
- Providing a high level of security
- Providing remote operation & management

Internet Server Requirements
- Capacity scalability: scale up any of the components in order to achieve a linear increase.
- Load balancing: provide dynamic load-balancing mechanisms that detect and react to the unavailability, addition, and removal of components in the system.
- Availability: meet high-availability requirements.
- Operation and maintenance: performed (remotely) without affecting the system's performance and availability.
- Response time.

Internet Server Requirements (cont.)
- Geographical diversity: spread across several points of presence, with support for geographic mirroring; communication is over dedicated high-speed links.
- Protocols: support UDP, TCP, HTTP, H.323, IIOP (CORBA), and SIP in the same scalable fashion.
- Single IP interface: clients access the server application through a single IP address.
- Security: high security requirements (e- and m-commerce).

Operating System Requirements
Characteristics:
1. Very high availability and robustness implemented in software
2. (Soft) real time
3. Scalability
4. Performance
Openness:
1. Hardware
2. Languages: C++, Java
3. Interoperability: CORBA/IIOP, TCP/IP, Java RMI
4. 3rd-party software

ARIES 2000
Find and prototype the necessary technology to prove the feasibility of an Internet server that provides telecom-grade characteristics:
- High reliability ( % uptime)
- High performance (x transactions/sec/processor)
- High scalability (number of CPUs and IP connections)
- High throughput (fast and reliable voice/data streaming)
- Open to fast reconfiguration
Use Linux and Open Source software as the base technology, with commodity off-the-shelf hardware and software, for a better cost/performance ($/TPS) ratio.

ARIES 2001
Find, prototype, and develop the necessary technology to enhance the clustering capabilities of TelORB and Linux and fulfil the future demands of Mobile Internet servers.
Research areas:
- IPv6: bringing IPv6 to carrier-class server nodes
- Security: building security at various cluster levels
- Load balancing: efficient use of cluster resources

Part III: Linux as a Candidate Operating System for Near-Telecom-Grade Clusters

Kernel Features
- Openness (HW/SW, languages, interoperability, 3rd-party SW)
- Flexibility (access to kernel code)
- Value for money
- Kernel stability
- Optimizable kernel (for size, speed, and features)
- Support for embedded devices (PDAs, cell phones, ...)
- IPv6 support
- Access to faster releases of IP features than other operating systems
- Support for a variety of network protocols

The Enterprise Linux Kernel (v2.4)
New system architecture features:
- Support for multiple architectures, including IA-64 (not bound to one supplier)
- Improved support for symmetric multiprocessing (SMP), up to 64 processors
- Enhanced task synchronization and threading facilities for increased efficiency
File system features:
- Improved Virtual File System layer for efficiency
- Support for advanced file system capabilities such as journaling: ReiserFS, ext3, XFS, and JFS

The Enterprise Linux Kernel (v2.4), continued
Improved networking features:
- Rewritten networking layer, resulting in performance improvements
- Improved IPv4 implementation / improved NAT performance
- Improved support for IPv6
- Kernel HTTP daemon
- New version of NFS (enhanced security and performance)
- Support for Logical Volume Management
Increased device support:
- Support for 16 Ethernet cards
- Support for new SCSI controllers
- Support for additional RAID devices

Break
Next: Building the Linux HA Cluster

Part IV: Building a HA Linux Cluster
How can we build a Linux-based cluster, using Open Source software, to achieve continuous availability?
Part of the work conducted within ARIES.

Outline
- Architecture of a typical cluster
- Basic steps in building the cluster
- Building the disks
- Automating installations
- Lessons learned
- Installing and starting application servers
- Building redundancy
- Traffic distribution with LVS
- Load balancing with MOSIX

Architecture of Our Typical Linux Cluster
- 2 redundant master nodes: DHCP server, TFTP server, NFS server, install/BpBatch server, security gateway, NTP server
- Single IP interface for incoming traffic
- Traffic nodes: host the application servers (Apache, Jigsaw, Tomcat, RealServer, Icecast) and serve requests

Basic Steps: Prepare the Master Nodes
- Install Linux: Red Hat 6.2 server installation
- Configure the Linux kernel to support the required configs; very minimal and optimized
- Configure the network settings: 4 ports, two of each on different networks
- Set up Ethernet redundancy
- Set up DHCP; collect node information such as MAC addresses
- Set up NFS: patch the kernel source code with the NFS redundancy patches, recompile the kernel and the special mount program; the mount point and /etc/exports on both servers are the same
- Set up TFTP and NTP
- Install BpBatch

Basic Steps: Kernel and Ramdisk Preparation
- Build a Linux kernel to be used on the traffic nodes; 2 kernels should be prepared, depending on whether the node has disks or not
- Prepare 2 types of ramdisks: one for nodes with disks and one for diskless nodes
- Prepare the traffic nodes: modify the BIOS setup so that the boot sequence is:
  - For nodes with disks: LAN 1, LAN 2, Flash, disk
  - For diskless nodes: LAN 1, LAN 2, Flash
  (For disk processors, check for a flag file in Flash to decide where to boot from)
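The DHCP step above ties each collected MAC address to a fixed IP and a BpBatch boot file. An ISC dhcpd entry for one traffic node might look roughly like this (a sketch only: the subnet, MAC address, IP addresses, and file name are made up for illustration, not taken from the project's configuration):

```
# Hypothetical ISC dhcpd fragment for one traffic node.
subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.1;
    next-server 192.168.1.2;            # TFTP/BpBatch boot server

    host traffic-node-1 {
        hardware ethernet 00:10:4b:aa:bb:cc;   # collected from the node
        fixed-address 192.168.1.11;
        filename "bpbatch";              # BpBatch loader served over TFTP
        # Additional DHCP option tags select the BpBatch script to run
        # (diskless vs. disk node); the exact tags are setup-dependent.
    }
}
```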

Cluster Server Automated Installation
- In each cabinet, two redundant CPUs with disks are Linux boot servers for the rest of the cabinet.
- In the Linux cabinet, the remaining six CPUs with disks can have their configuration rebuilt automatically from the two master CPUs.
- Automated installation of these systems uses BpBatch. The main feature of BpBatch is its partition-cloning facility, which lets us create an image of a computer's hard disk partition and then distribute and install this image on a cluster of PCs.
- Big success: we can do a complete rebuild of a RAID system from a corrupted base (all disks corrupted and CompactFlash corrupted) entirely from the network.

Sample BpBatch Script: Diskless Nodes
For diskless processors:

    LogVars "Print"
    Print "INFO: Starting BpBatch script\r\n"
    Set CacheNever="ON"
    LinuxBoot "LATEST-bzImage" "root=/dev/ram ramdisk_size=50000" "LATEST-ramdisk"

Sample BpBatch Script: Disk Nodes

    LogVars "Print"
    Print "INFO: Starting BpBatch script\r\n"
    if valid 0:1 goto ChooseStage
    Print "INFO: CFlash is not valid\r\n"
    goto RebuildDisk
    :ChooseStage
    Print "INFO: CFlash is valid\r\n"
    if exist "{1:1}/second.step" goto Stage2
    Print "INFO: Reboot from disk sdc\r\n"
    if exist "{0:1}/third.stp" goto Stage3
    Print "INFO: Booting raid from disk sda\r\n"
    if exist "{0:1}/boota.stp" goto JustBoota
    Print "INFO: No I mean Booting raid from disk sdb\r\n"
    if exist "{0:1}/bootb.stp" goto JustBootb
    Print "INFO: Oups! System in bad shape, we have to Rebuild\r\n"
    goto RebuildDisk

Sample BpBatch Script: Master Nodes
- At any time, a master node can be rebuilt, given that the other master node is up.
- We prepared a minimal ramdisk and a script that lets us build a disk image very fast.
- Boot from LAN, configure the network settings based on a DHCP broadcast from the live master node, rebuild the disk, install all data, start services.

Automated Installations: Disk-CPU Automatic Rebuild
- Boot from LAN (LAN 1/LAN 2); if the LANs are not available, boot from Flash; if the LANs and Flash are not available, boot from disk
- Activate the Ethernet connections (we have 6 connections; we only activate 4)
- Build a disk CPU from scratch using BpBatch scripts
- Set up RAID
- Install the ramdisk, which includes all the applications
- Start application servers such as web and streaming servers

Automatic RAID Setup
- Check the Flash for a special file: if the file does not exist, boot from LAN and rebuild the disk with RAID; if the file exists, boot from disk.
- Stage 1 (boot from LAN): install a minimal kernel, rebuild the Flash, partition sdc, download the ramdisk and install it on sdc.
- Stage 2 (boot from LAN on sdc): partition sda and sdb for RAID, mark sdc as faulty during the RAID setup, perform the RAID setup.
- Stage 3 (boot from LAN on sda): add sdc to the RAID setup; the system is up; synchronize the disks.
- Disk map: sda, sdb, sdc, plus the Flash.

Diskless-CPU Automatic Rebuild
- Boot from LAN (LAN 1/LAN 2)
- Activate the (4) Ethernet connections
- Download the ramdisk
- Start the application servers
- Config files are stored on NFS to keep the ramdisk size minimal

Automatic Installations: Evaluation
Disk-CPU automatic installation:
- Advantages: no manual intervention; the setup can be easily adapted to different configurations; very fast to sub-cluster.
- Inconveniences: not very fast (the RAID setup consumes 3 reboots); limited ramdisk size due to BpBatch and TFTP limitations.
Diskless-CPU automatic installation:
- Advantages: very fast (less than 45 seconds).
- Inconveniences: limited ramdisk size (15 MB) due to BpBatch and TFTP limitations.

Building the Ramdisk
Ramdisk for diskless nodes:
- Start from someone else's ramdisk; many are available on the Internet
- Or prepare your own: do a very minimal install of a system (just the minimum required to have a working system), get rid of everything you don't need, write a script to build the ramdisk image, and save the image under the tftpboot directory
Ramdisk for disk nodes:
- Build and customize one node
- Use BpBatch utilities to capture the disk image
- Use BpBatch scripts invoked by DHCP to rebuild the disks

Things Learned About the Freeware BpBatch
- A complete rebuild of a SCSI RAID configuration through BpBatch requires three reboot sequences and writing to the CompactFlash.
- Handling of disk Master Boot Records (MBRs) is buggy; we had to learn how to work around BpBatch bugs, which took a lot of time.
- Few types of disk partitions are supported: access to Linux ext2 and DOS FAT16 partitions works; access to DOS FAT12 (the default CompactFlash setup) or Linux RAID partitions does not. In general, BpBatch does not deal well with partition tables it has not created itself.
- Use of DHCP-supplied values to drive BpBatch is also somewhat buggy.
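The "write a script to build the ramdisk image" step above might look roughly like this 2.2/2.4-era outline (a sketch only: the paths and the 16 MB size are illustrative, not the project's values, and the commands must run as root):

```
# Sketch: build a compressed ext2 ramdisk image for TFTP boot (run as root).
dd if=/dev/zero of=/tmp/ramdisk.img bs=1k count=16384   # empty 16 MB file
mke2fs -F -m 0 /tmp/ramdisk.img                         # ext2 inside it
mount -o loop /tmp/ramdisk.img /mnt/ramdisk             # loopback mount
cp -a /build/minimal-root/. /mnt/ramdisk/               # minimal system tree
umount /mnt/ramdisk
gzip -9 -c /tmp/ramdisk.img > /tftpboot/LATEST-ramdisk  # serve via TFTP
```

The gzipped image is what the BpBatch LinuxBoot line downloads, which is why the TFTP size limits mentioned above constrain the ramdisk.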

29 More things learned about auto-install BpBatch only knows how to transfer things using unicast TFTP this is somewhat unreliable and introduces unnecessary limits on the file sizes which can be downloaded using BpBatch A network installation sequence that requires multiple reboots must keep state in the client, since the boot server must be stateless Since BpBatch will not read RAID partitions, this forced us to keep the boot stage information in the CompactFlash Conclusion: Find alternatives to BpBatch to reduce bug-hunting and system rebuild time Commercial version of BpBatch: Ericsson Canada Application servers As test application servers, we used: Web Servers: Apache, Jigsaw, and Tomcat Streaming servers: Real Server and Icecast Binaries are contained within the ramdisk /usr/local/bin/ Configuration files are the same for all CPUS One copy is available on NFS and shared among all CPUs /nfs/apache/httpd.conf for instance Update one copy only and you get a new config on all nodes Startup script are in /etc/rc.d/* Jigsaw and Tomcat rely on JAVA JDK was available on NFS for diskless nodes due to space limitations imposed on the size of the ramdisk /nfs/jdk1.2.2/ Ericsson Canada 29

Redundancy in a Clustered System
How to build a cluster that can meet our HA requirements?

Redundancy Schemes
Servers must maintain high availability. How to reach telecom-grade uptime? By building redundancy into the system:
- Redundancy at the network level
- Redundancy at the file system level
- Redundancy at the disk level
- Redundancy at the CPU level

Redundancy Levels
- Ethernet connections: monitor the status of link 1; if link 1 is down, delete the route to link 1 and direct traffic to link 2.
- NFS redundancy: implemented at the kernel level, with a special mount program; switch servers on timeout.
- Software RAID: automatic setup, RAID 1 and RAID 5, disk-architecture independent.
(Diagram: servers 1..N, each with ports eth0-eth3, behind a firewall cluster with a single entry point, connected to primary and secondary NFS servers with their NFS data sources.)

Ethernet Connections

Ethernet Redundancy
Goal: keep the Ethernet connections up.
- The Ethernet redundancy daemon monitors the link status of the primary port.
- On link down, the route of the first port is deleted and the traffic goes to the second port.
- When the link comes up again, wait a bit, then switch back to the primary port.
- The backup port's routes have higher metrics than the primary's. The daemon reads routing data from /proc/net/route, deletes the routes bound to port eth[n+1], and adds routes for port eth[n+1] using the configuration of port eth[n] with a higher routing metric value (i.e., lower priority).

Ethernet Redundancy Daemon: Operation Phase
- Poll the running bit (patched tulip driver) for port eth[n]; if down, delete the routes bound to eth[n]. Traffic bound to port eth[n] is now diverted to port eth[n+1].
- Poll the running bit for port eth[n]; if up, re-install the routes for port eth[n] and ping the broadcast address. Traffic bound to port eth[n+1] is now diverted back to port eth[n], which has the lower metric value (higher routing priority).
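The route-switching step can be sketched in shell. This is an illustration, not the daemon itself: the network, netmask, and interface names are made up, the real daemon polled the tulip driver's running bit rather than a status file, and it would execute the route commands (as root) instead of printing them:

```shell
#!/bin/sh
# Sketch of the ERD failover decision: emit the route command the
# daemon would run. NET/MASK/PRIMARY are illustrative placeholders.
NET="192.168.1.0"
MASK="255.255.255.0"
PRIMARY="eth0"

link_up() {
    # Stand-in for polling the driver's "running" bit: a status file
    # containing "1" means the link is up.
    [ "$(cat "$1" 2>/dev/null)" = "1" ]
}

switch_routes() {
    status_file=$1
    if link_up "$status_file"; then
        # Primary is back: re-install its low-metric (preferred) route.
        echo "route add -net $NET netmask $MASK dev $PRIMARY metric 0"
    else
        # Primary is down: drop its route; traffic then follows the
        # higher-metric route already installed on the backup port.
        echo "route del -net $NET netmask $MASK dev $PRIMARY"
    fi
}

# Example: pretend the primary link just went down.
echo 0 > /tmp/eth0.status
switch_routes /tmp/eth0.status
# prints: route del -net 192.168.1.0 netmask 255.255.255.0 dev eth0
```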

Ethernet Redundancy Daemon
- Configure a pair of ports, eth[n] and eth[n+1], with the same IP and MAC addresses.
- Use the following command to change the MAC address of a network interface card:

    ifconfig eth[n+1] hw ether aa:bb:cc:dd:ee:ff

- Run the ERD daemon.

NFS Redundancy
Goal: maintain data availability by having redundant NFS servers.
The NFS kernel files were modified to support NFS redundancy:

Kernel file                          | Changes
-------------------------------------|------------------------------------
/usr/src/linux/fs/nfs/inode.c        | All the NFS redundancy support
/usr/src/linux/net/sunrpc/sched.c    | Raises the timeout flag on timeout
/usr/src/linux/net/sunrpc/clnt.c     | Raises the timeout flag on timeout

NFS Redundancy
- A special version of the mount program has also been implemented. The two servers are passed to the mount program and then to the kernel:

    % mount -t nfs server1,server2:/nfs_mnt_point

- The NFS mount point has to be the same on both servers.
- When a timeout occurs: a kernel thread is woken up, the data from the two servers is switched, the various caches are cleaned up, and the dcache is updated with new handles from the new server.

Data Redundancy Through Software RAID
- Goal: have data available in the case of a disk crash. Solution: use software RAID 1 and RAID 5.
- The first thing we did, before working on the boot sequence, was to install and set up one CPU with RAID partitions using the CD-ROM installation of Red Hat 6.2 (professional version).
- From this installation, we tarred and zipped the different partitions.

Disk Partitioning
- sda1, sdb1, and sdc1 make up the md0 partition, which is /boot
- sda6, sdb6, and sdc6 make up the md1 partition, which is /var
- sda5, sdb5, and sdc5 make up the md2 partition, which is /
- sda8, sdb8, and sdc8 make up the md3 partition, which is /home
- sda7, sdb7, and sdc7 make up the md4 partition, which is the swap

Building RAID Setups
- We wrote a script that auto-formats a disk with this exact geometry and these partitions.
- You need to compile a kernel with these options under block devices:
    Autodetect RAID partitions [y]
    Linear (append) mode [m]
    RAID-1 (mirroring) mode [m]
    RAID-4/RAID-5 [m]
    Initial RAM disk (initrd) support [y]
- To automate the setup on the other nodes, we used DHCP with BpBatch scripts. With BpBatch, we download a small kernel image that is used to partition the disk with our script. The CPU then downloads via FTP the full kernel, the different file systems, and the config files and scripts used in the next step (see the Automatic RAID Setup slide).
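With the raidtools of that era, the md0 (/boot) mirror above would be described by an /etc/raidtab entry along these lines (a sketch: the chunk size and spare settings are assumptions, not taken from the project's configuration):

```
# Hypothetical /etc/raidtab entry for md0 (/boot) as a three-way
# RAID-1 mirror across sda1, sdb1, and sdc1.
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         3
    nr-spare-disks        0
    persistent-superblock 1
    chunk-size            4
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
    device                /dev/sdc1
    raid-disk             2
```

The array is then created with "mkraid /dev/md0"; during the staged setup described earlier, sdc would initially be marked faulty and added back in stage 3.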

Testing the Achieved Availability
- Ethernet: disconnect the Ethernet cables while running a benchmark. No interruption in service; 0 failed requests.
- SW RAID: remove a disk from the cabinet, wait a few minutes, put it back. The disk is detected and synchronized with the other disks; synchronization time depends on the size of the disk and the number of changes.
- NFS: shut down one NFS server or disconnect its Ethernet cables. The application servers switch to the second NFS server. Transparent; no failures.

Screenshot of a CPU

Traffic Distribution
How can we allow a very large number of clients to reach a large pool of servers presented as a single virtual IP address?

The Ideal Solution
- High availability: no single point of failure, minimal impact in case of failure
- Scalability: same cost/performance ($/TPS), same response time
- No impact on, or requirements for, clients, servers, or routers

Linux Virtual Server
- LVS is a server built on a cluster of real servers, with the load balancer running on Linux.
- 3 architectures: network address translation (NAT), direct routing (DR), and IP tunnelling (IPT).
- The architecture of the cluster is transparent to end users: users only see a single IP address.
- LVS detects the failure of a node and reconfigures itself.
- LVS is not highly scalable: bottlenecks at the LVS server level, and limited scalability at the nodes level (~20).

LVS via NAT
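A NAT-mode virtual service is configured on the director with ipvsadm rules along these lines (a sketch: the virtual and real IP addresses are made up, and the weighted least-connection scheduler is just one of the scheduling algorithms LVS offers):

```
# Hypothetical ipvsadm rules for a NAT-mode virtual web service.
# 10.0.0.1 is the single virtual IP; 192.168.1.11/12 are real servers.
ipvsadm -A -t 10.0.0.1:80 -s wlc                       # virtual service, wlc scheduler
ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.11:80 -m -w 1   # real server via NAT (-m)
ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.12:80 -m -w 1
```

For direct routing, -m would be replaced by -g (gatewaying), which is the DR variant compared in the performance figures below.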

Scalability of LVS via NAT
- 8 traffic processors getting their traffic directly from WebBench vs. 8 traffic processors getting their traffic through LVS.
(Charts: requests/second vs. number of clients, 1 to 60, for the LVS results and for the LVS vs. non-LVS setup with 1 LVS director and 8 traffic CPUs.)

LVS via DR

Performance Comparison: DR vs. NAT
- 8 traffic processors getting their traffic through LVS using direct routing vs. 8 traffic processors getting their traffic through LVS using NAT.
- DR delivers almost 3x the performance of NAT.
(Chart: requests per second vs. number of clients, 1 to 56, for LVS-DR and LVS-NAT.)

LVS Conclusions
- LVS is easy to install and manage, and very useful. It is a potential solution for small to mid-size web farms that need a software-based solution for traffic distribution.
- LVS's future is promising, with plans to add more load-balancing algorithms, geographic-based scheduling, the integration of the heartbeat code and the Coda distributed fault-tolerant file system into the virtual server, and an implementation of the virtual server over IPv6.
- Compared to other packages, LVS provides many unique features, such as support for multiple scheduling algorithms and for various request-forwarding methods (NAT, direct routing, tunneling).
- Our next step regarding LVS is to try out the other two implementations (direct routing and IP tunneling) and compare their performance with the NAT implementation on the same setup.
- We found some restrictions using LVS under heavy load: LVS cannot handle the high number of transactions per second that our servers will be receiving.


More information

VTrak 15200 SATA RAID Storage System

VTrak 15200 SATA RAID Storage System Page 1 15-Drive Supports over 5 TB of reliable, low-cost, high performance storage 15200 Product Highlights First to deliver a full HW iscsi solution with SATA drives - Lower CPU utilization - Higher data

More information

Solaris For The Modern Data Center. Taking Advantage of Solaris 11 Features

Solaris For The Modern Data Center. Taking Advantage of Solaris 11 Features Solaris For The Modern Data Center Taking Advantage of Solaris 11 Features JANUARY 2013 Contents Introduction... 2 Patching and Maintenance... 2 IPS Packages... 2 Boot Environments... 2 Fast Reboot...

More information

How to Choose your Red Hat Enterprise Linux Filesystem

How to Choose your Red Hat Enterprise Linux Filesystem How to Choose your Red Hat Enterprise Linux Filesystem EXECUTIVE SUMMARY Choosing the Red Hat Enterprise Linux filesystem that is appropriate for your application is often a non-trivial decision due to

More information

Chapter 2 TOPOLOGY SELECTION. SYS-ED/ Computer Education Techniques, Inc.

Chapter 2 TOPOLOGY SELECTION. SYS-ED/ Computer Education Techniques, Inc. Chapter 2 TOPOLOGY SELECTION SYS-ED/ Computer Education Techniques, Inc. Objectives You will learn: Topology selection criteria. Perform a comparison of topology selection criteria. WebSphere component

More information

ELIXIR LOAD BALANCER 2

ELIXIR LOAD BALANCER 2 ELIXIR LOAD BALANCER 2 Overview Elixir Load Balancer for Elixir Repertoire Server 7.2.2 or greater provides software solution for load balancing of Elixir Repertoire Servers. As a pure Java based software

More information

Network Attached Storage. Jinfeng Yang Oct/19/2015

Network Attached Storage. Jinfeng Yang Oct/19/2015 Network Attached Storage Jinfeng Yang Oct/19/2015 Outline Part A 1. What is the Network Attached Storage (NAS)? 2. What are the applications of NAS? 3. The benefits of NAS. 4. NAS s performance (Reliability

More information

OVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available

OVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available Phone: (603)883-7979 sales@cepoint.com Cepoint Cluster Server CEP Cluster Server turnkey system. ENTERPRISE HIGH AVAILABILITY, High performance and very reliable Super Computing Solution for heterogeneous

More information

What the student will need:

What the student will need: COMPTIA SERVER+: The Server+ course is designed to help the student take and pass the CompTIA Server+ certification exam. It consists of Book information, plus real world information a student could use

More information

- An Essential Building Block for Stable and Reliable Compute Clusters

- An Essential Building Block for Stable and Reliable Compute Clusters Ferdinand Geier ParTec Cluster Competence Center GmbH, V. 1.4, March 2005 Cluster Middleware - An Essential Building Block for Stable and Reliable Compute Clusters Contents: Compute Clusters a Real Alternative

More information

Introduction to Gluster. Versions 3.0.x

Introduction to Gluster. Versions 3.0.x Introduction to Gluster Versions 3.0.x Table of Contents Table of Contents... 2 Overview... 3 Gluster File System... 3 Gluster Storage Platform... 3 No metadata with the Elastic Hash Algorithm... 4 A Gluster

More information

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com Parallels Cloud Storage White Paper Performance Benchmark Results www.parallels.com Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...

More information

LinuxWorld Conference & Expo Server Farms and XML Web Services

LinuxWorld Conference & Expo Server Farms and XML Web Services LinuxWorld Conference & Expo Server Farms and XML Web Services Jorgen Thelin, CapeConnect Chief Architect PJ Murray, Product Manager Cape Clear Software Objectives What aspects must a developer be aware

More information

Running a Workflow on a PowerCenter Grid

Running a Workflow on a PowerCenter Grid Running a Workflow on a PowerCenter Grid 2010-2014 Informatica Corporation. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise)

More information

High Performance Cluster Support for NLB on Window

High Performance Cluster Support for NLB on Window High Performance Cluster Support for NLB on Window [1]Arvind Rathi, [2] Kirti, [3] Neelam [1]M.Tech Student, Department of CSE, GITM, Gurgaon Haryana (India) arvindrathi88@gmail.com [2]Asst. Professor,

More information

Scala Storage Scale-Out Clustered Storage White Paper

Scala Storage Scale-Out Clustered Storage White Paper White Paper Scala Storage Scale-Out Clustered Storage White Paper Chapter 1 Introduction... 3 Capacity - Explosive Growth of Unstructured Data... 3 Performance - Cluster Computing... 3 Chapter 2 Current

More information

High Availability Solutions for the MariaDB and MySQL Database

High Availability Solutions for the MariaDB and MySQL Database High Availability Solutions for the MariaDB and MySQL Database 1 Introduction This paper introduces recommendations and some of the solutions used to create an availability or high availability environment

More information

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE

IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores

More information

Cloud Based Application Architectures using Smart Computing

Cloud Based Application Architectures using Smart Computing Cloud Based Application Architectures using Smart Computing How to Use this Guide Joyent Smart Technology represents a sophisticated evolution in cloud computing infrastructure. Most cloud computing products

More information

NEC Corporation of America Intro to High Availability / Fault Tolerant Solutions

NEC Corporation of America Intro to High Availability / Fault Tolerant Solutions NEC Corporation of America Intro to High Availability / Fault Tolerant Solutions 1 NEC Corporation Technology solutions leader for 100+ years Established 1899, headquartered in Tokyo First Japanese joint

More information

Overview of I/O Performance and RAID in an RDBMS Environment. By: Edward Whalen Performance Tuning Corporation

Overview of I/O Performance and RAID in an RDBMS Environment. By: Edward Whalen Performance Tuning Corporation Overview of I/O Performance and RAID in an RDBMS Environment By: Edward Whalen Performance Tuning Corporation Abstract This paper covers the fundamentals of I/O topics and an overview of RAID levels commonly

More information

Windows Server Performance Monitoring

Windows Server Performance Monitoring Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly

More information

Building a Linux Cluster

Building a Linux Cluster Building a Linux Cluster CUG Conference May 21-25, 2001 by Cary Whitney Clwhitney@lbl.gov Outline What is PDSF and a little about its history. Growth problems and solutions. Storage Network Hardware Administration

More information

Highly Available Mobile Services Infrastructure Using Oracle Berkeley DB

Highly Available Mobile Services Infrastructure Using Oracle Berkeley DB Highly Available Mobile Services Infrastructure Using Oracle Berkeley DB Executive Summary Oracle Berkeley DB is used in a wide variety of carrier-grade mobile infrastructure systems. Berkeley DB provides

More information

How To Fix A Powerline From Disaster To Powerline

How To Fix A Powerline From Disaster To Powerline Perforce Backup Strategy & Disaster Recovery at National Instruments Steven Lysohir 1 Why This Topic? Case study on large Perforce installation Something for smaller sites to ponder as they grow Stress

More information

SERVER CLUSTERING TECHNOLOGY & CONCEPT

SERVER CLUSTERING TECHNOLOGY & CONCEPT SERVER CLUSTERING TECHNOLOGY & CONCEPT M00383937, Computer Network, Middlesex University, E mail: vaibhav.mathur2007@gmail.com Abstract Server Cluster is one of the clustering technologies; it is use for

More information

POWER ALL GLOBAL FILE SYSTEM (PGFS)

POWER ALL GLOBAL FILE SYSTEM (PGFS) POWER ALL GLOBAL FILE SYSTEM (PGFS) Defining next generation of global storage grid Power All Networks Ltd. Technical Whitepaper April 2008, version 1.01 Table of Content 1. Introduction.. 3 2. Paradigm

More information

Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7

Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7 Introduction 1 Performance on Hosted Server 1 Figure 1: Real World Performance 1 Benchmarks 2 System configuration used for benchmarks 2 Figure 2a: New tickets per minute on E5440 processors 3 Figure 2b:

More information

Synology High Availability (SHA)

Synology High Availability (SHA) Synology High Availability (SHA) Based on DSM 5.1 Synology Inc. Synology_SHAWP_ 20141106 Table of Contents Chapter 1: Introduction... 3 Chapter 2: High-Availability Clustering... 4 2.1 Synology High-Availability

More information

Virtual SAN Design and Deployment Guide

Virtual SAN Design and Deployment Guide Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore

More information

HRG Assessment: Stratus everrun Enterprise

HRG Assessment: Stratus everrun Enterprise HRG Assessment: Stratus everrun Enterprise Today IT executive decision makers and their technology recommenders are faced with escalating demands for more effective technology based solutions while at

More information

Astaro Deployment Guide High Availability Options Clustering and Hot Standby

Astaro Deployment Guide High Availability Options Clustering and Hot Standby Connect With Confidence Astaro Deployment Guide Clustering and Hot Standby Table of Contents Introduction... 2 Active/Passive HA (Hot Standby)... 2 Active/Active HA (Cluster)... 2 Astaro s HA Act as One...

More information

CommuniGate Pro White Paper. Dynamic Clustering Solution. For Reliable and Scalable. Messaging

CommuniGate Pro White Paper. Dynamic Clustering Solution. For Reliable and Scalable. Messaging CommuniGate Pro White Paper Dynamic Clustering Solution For Reliable and Scalable Messaging Date April 2002 Modern E-Mail Systems: Achieving Speed, Stability and Growth E-mail becomes more important each

More information

SCALABILITY AND AVAILABILITY

SCALABILITY AND AVAILABILITY SCALABILITY AND AVAILABILITY Real Systems must be Scalable fast enough to handle the expected load and grow easily when the load grows Available available enough of the time Scalable Scale-up increase

More information

S y s t e m A r c h i t e c t u r e

S y s t e m A r c h i t e c t u r e S y s t e m A r c h i t e c t u r e V e r s i o n 5. 0 Page 1 Enterprise etime automates and streamlines the management, collection, and distribution of employee hours, and eliminates the use of manual

More information

CHAPTER 15: Operating Systems: An Overview

CHAPTER 15: Operating Systems: An Overview CHAPTER 15: Operating Systems: An Overview The Architecture of Computer Hardware, Systems Software & Networking: An Information Technology Approach 4th Edition, Irv Englander John Wiley and Sons 2010 PowerPoint

More information

Grid on Blades. Basil Smith 7/2/2005. 2003 IBM Corporation

Grid on Blades. Basil Smith 7/2/2005. 2003 IBM Corporation Grid on Blades Basil Smith 7/2/2005 2003 IBM Corporation What is the problem? Inefficient utilization of resources (MIPS, Memory, Storage, Bandwidth) Fundamentally resources are being wasted due to wide

More information

Proposal for Virtual Private Server Provisioning

Proposal for Virtual Private Server Provisioning Interpole Solutions 1050, Sadguru Darshan, New Prabhadevi Road, Mumbai - 400 025 Tel: 91-22-24364111, 24364112 Email : response@interpole.net Website: www.interpole.net Proposal for Virtual Private Server

More information

Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment.

Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Preparation Guide v3.0 BETA How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Document version 1.0 Document release date 25 th September 2012 document revisions 1 Contents 1. Overview...

More information

Best Practices for Data Sharing in a Grid Distributed SAS Environment. Updated July 2010

Best Practices for Data Sharing in a Grid Distributed SAS Environment. Updated July 2010 Best Practices for Data Sharing in a Grid Distributed SAS Environment Updated July 2010 B E S T P R A C T I C E D O C U M E N T Table of Contents 1 Abstract... 2 1.1 Storage performance is critical...

More information

June 2009. Blade.org 2009 ALL RIGHTS RESERVED

June 2009. Blade.org 2009 ALL RIGHTS RESERVED Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL RIGHTS

More information

Chapter 1 - Web Server Management and Cluster Topology

Chapter 1 - Web Server Management and Cluster Topology Objectives At the end of this chapter, participants will be able to understand: Web server management options provided by Network Deployment Clustered Application Servers Cluster creation and management

More information

HP reference configuration for entry-level SAS Grid Manager solutions

HP reference configuration for entry-level SAS Grid Manager solutions HP reference configuration for entry-level SAS Grid Manager solutions Up to 864 simultaneous SAS jobs and more than 3 GB/s I/O throughput Technical white paper Table of contents Executive summary... 2

More information

CGL Architecture Specification

CGL Architecture Specification CGL Architecture Specification Mika Karlstedt Helsinki 19th February 2003 Seminar paper for Seminar on High Availability and Timeliness in Linux University of Helsinki Department of Computer science i

More information

PARALLELS CLOUD STORAGE

PARALLELS CLOUD STORAGE PARALLELS CLOUD STORAGE Performance Benchmark Results 1 Table of Contents Executive Summary... Error! Bookmark not defined. Architecture Overview... 3 Key Features... 5 No Special Hardware Requirements...

More information

IBM Security QRadar SIEM Version 7.2.6. High Availability Guide IBM

IBM Security QRadar SIEM Version 7.2.6. High Availability Guide IBM IBM Security QRadar SIEM Version 7.2.6 High Availability Guide IBM Note Before using this information and the product that it supports, read the information in Notices on page 35. Product information This

More information

NIVEO Network Attached Storage Series NNAS-D5 NNAS-R4. More information: WWW.NIVEOPROFESSIONAL.COM INFO@NIVEOPROFESSIONAL.COM

NIVEO Network Attached Storage Series NNAS-D5 NNAS-R4. More information: WWW.NIVEOPROFESSIONAL.COM INFO@NIVEOPROFESSIONAL.COM NIVEO Network Attached Storage Series NNAS-D5 NNAS-R4 More information: WWW.NIVEOPROFESSIONAL.COM INFO@NIVEOPROFESSIONAL.COM Product Specification Introduction The NIVEO NNAS series is specifically designed

More information

High Availability Essentials

High Availability Essentials High Availability Essentials Introduction Ascent Capture s High Availability Support feature consists of a number of independent components that, when deployed in a highly available computer system, result

More information

DELL RAID PRIMER DELL PERC RAID CONTROLLERS. Joe H. Trickey III. Dell Storage RAID Product Marketing. John Seward. Dell Storage RAID Engineering

DELL RAID PRIMER DELL PERC RAID CONTROLLERS. Joe H. Trickey III. Dell Storage RAID Product Marketing. John Seward. Dell Storage RAID Engineering DELL RAID PRIMER DELL PERC RAID CONTROLLERS Joe H. Trickey III Dell Storage RAID Product Marketing John Seward Dell Storage RAID Engineering http://www.dell.com/content/topics/topic.aspx/global/products/pvaul/top

More information

Active-Active and High Availability

Active-Active and High Availability Active-Active and High Availability Advanced Design and Setup Guide Perceptive Content Version: 7.0.x Written by: Product Knowledge, R&D Date: July 2015 2015 Perceptive Software. All rights reserved. Lexmark

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...

More information

IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect

IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Virtual Connect IBM BladeCenter H with Cisco VFrame Software A Comparison with HP Connect Executive Overview This white paper describes how Cisco VFrame Server Fabric ization Software works with IBM BladeCenter H to provide

More information

Microsoft Exchange Server 2003 Deployment Considerations

Microsoft Exchange Server 2003 Deployment Considerations Microsoft Exchange Server 3 Deployment Considerations for Small and Medium Businesses A Dell PowerEdge server can provide an effective platform for Microsoft Exchange Server 3. A team of Dell engineers

More information

Google File System. Web and scalability

Google File System. Web and scalability Google File System Web and scalability The web: - How big is the Web right now? No one knows. - Number of pages that are crawled: o 100,000 pages in 1994 o 8 million pages in 2005 - Crawlable pages might

More information

SAN TECHNICAL - DETAILS/ SPECIFICATIONS

SAN TECHNICAL - DETAILS/ SPECIFICATIONS SAN TECHNICAL - DETAILS/ SPECIFICATIONS Technical Details / Specifications for 25 -TB Usable capacity SAN Solution Item 1) SAN STORAGE HARDWARE : One No. S.N. Features Description Technical Compliance

More information

Virtualised MikroTik

Virtualised MikroTik Virtualised MikroTik MikroTik in a Virtualised Hardware Environment Speaker: Tom Smyth CTO Wireless Connect Ltd. Event: MUM Krackow Feb 2008 http://wirelessconnect.eu/ Copyright 2008 1 Objectives Understand

More information

Distributed RAID Architectures for Cluster I/O Computing. Kai Hwang

Distributed RAID Architectures for Cluster I/O Computing. Kai Hwang Distributed RAID Architectures for Cluster I/O Computing Kai Hwang Internet and Cluster Computing Lab. University of Southern California 1 Presentation Outline : Scalable Cluster I/O The RAID-x Architecture

More information

WHITE PAPER 1 WWW.FUSIONIO.COM

WHITE PAPER 1 WWW.FUSIONIO.COM 1 WWW.FUSIONIO.COM WHITE PAPER WHITE PAPER Executive Summary Fusion iovdi is the first desktop- aware solution to virtual desktop infrastructure. Its software- defined approach uniquely combines the economics

More information

theguard! ApplicationManager System Windows Data Collector

theguard! ApplicationManager System Windows Data Collector theguard! ApplicationManager System Windows Data Collector Status: 10/9/2008 Introduction... 3 The Performance Features of the ApplicationManager Data Collector for Microsoft Windows Server... 3 Overview

More information

Load Balancing for Microsoft Office Communication Server 2007 Release 2

Load Balancing for Microsoft Office Communication Server 2007 Release 2 Load Balancing for Microsoft Office Communication Server 2007 Release 2 A Dell and F5 Networks Technical White Paper End-to-End Solutions Team Dell Product Group Enterprise Dell/F5 Partner Team F5 Networks

More information

Ultra Thin Client TC-401 TC-402. Users s Guide

Ultra Thin Client TC-401 TC-402. Users s Guide Ultra Thin Client TC-401 TC-402 Users s Guide CONTENT 1. OVERVIEW... 3 1.1 HARDWARE SPECIFICATION... 3 1.2 SOFTWARE OVERVIEW... 4 1.3 HARDWARE OVERVIEW...5 1.4 NETWORK CONNECTION... 7 2. INSTALLING THE

More information

Traditionally, a typical SAN topology uses fibre channel switch wiring while a typical NAS topology uses TCP/IP protocol over common networking

Traditionally, a typical SAN topology uses fibre channel switch wiring while a typical NAS topology uses TCP/IP protocol over common networking Network Storage for Business Continuity and Disaster Recovery and Home Media White Paper Abstract Network storage is a complex IT discipline that includes a multitude of concepts and technologies, like

More information

by Kaleem Anwar, Muhammad Amir, Ahmad Saeed and Muhammad Imran

by Kaleem Anwar, Muhammad Amir, Ahmad Saeed and Muhammad Imran The Linux Router The performance of the Linux router makes it an attractive alternative when concerned with economizing. by Kaleem Anwar, Muhammad Amir, Ahmad Saeed and Muhammad Imran Routers are amongst

More information

Distributed File Systems

Distributed File Systems Distributed File Systems Paul Krzyzanowski Rutgers University October 28, 2012 1 Introduction The classic network file systems we examined, NFS, CIFS, AFS, Coda, were designed as client-server applications.

More information

SIP-DECT Knowledge Base SIP-DECT System Update

SIP-DECT Knowledge Base SIP-DECT System Update SIP-DECT Knowledge Base SIP-DECT System Update MAI 2015 DEPL-2046 VERSION 1.6 KNOWLEDGE BASE TABLE OF CONTENT 1) Introduction... 2 2) Update (New Service Pack in the same Release)... 3 2.1 OMM HOSTED ON

More information

MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM?

MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM? MEASURING WORKLOAD PERFORMANCE IS THE INFRASTRUCTURE A PROBLEM? Ashutosh Shinde Performance Architect ashutosh_shinde@hotmail.com Validating if the workload generated by the load generating tools is applied

More information

CT505-30 LANforge-FIRE VoIP Call Generator

CT505-30 LANforge-FIRE VoIP Call Generator 1 of 11 Network Testing and Emulation Solutions http://www.candelatech.com sales@candelatech.com +1 360 380 1618 [PST, GMT -8] CT505-30 LANforge-FIRE VoIP Call Generator The CT505-30 supports SIP VOIP

More information

Symantec Endpoint Protection 11.0 Architecture, Sizing, and Performance Recommendations

Symantec Endpoint Protection 11.0 Architecture, Sizing, and Performance Recommendations Symantec Endpoint Protection 11.0 Architecture, Sizing, and Performance Recommendations Technical Product Management Team Endpoint Security Copyright 2007 All Rights Reserved Revision 6 Introduction This

More information

High Availability with Postgres Plus Advanced Server. An EnterpriseDB White Paper

High Availability with Postgres Plus Advanced Server. An EnterpriseDB White Paper High Availability with Postgres Plus Advanced Server An EnterpriseDB White Paper For DBAs, Database Architects & IT Directors December 2013 Table of Contents Introduction 3 Active/Passive Clustering 4

More information

Technology Insight Series

Technology Insight Series Evaluating Storage Technologies for Virtual Server Environments Russ Fellows June, 2010 Technology Insight Series Evaluator Group Copyright 2010 Evaluator Group, Inc. All rights reserved Executive Summary

More information

Installation Guide July 2009

Installation Guide July 2009 July 2009 About this guide Edition notice This edition applies to Version 4.0 of the Pivot3 RAIGE Operating System and to any subsequent releases until otherwise indicated in new editions. Notification

More information

Red Hat Enterprise linux 5 Continuous Availability

Red Hat Enterprise linux 5 Continuous Availability Red Hat Enterprise linux 5 Continuous Availability Businesses continuity needs to be at the heart of any enterprise IT deployment. Even a modest disruption in service is costly in terms of lost revenue

More information

Server and Storage Virtualization with IP Storage. David Dale, NetApp

Server and Storage Virtualization with IP Storage. David Dale, NetApp Server and Storage Virtualization with IP Storage David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this

More information

VERITAS Storage Foundation 4.3 for Windows

VERITAS Storage Foundation 4.3 for Windows DATASHEET VERITAS Storage Foundation 4.3 for Windows Advanced Volume Management Technology for Windows In distributed client/server environments, users demand that databases, mission-critical applications

More information

Business Continuity: Choosing the Right Technology Solution

Business Continuity: Choosing the Right Technology Solution Business Continuity: Choosing the Right Technology Solution Table of Contents Introduction 3 What are the Options? 3 How to Assess Solutions 6 What to Look for in a Solution 8 Final Thoughts 9 About Neverfail

More information

Tushar Joshi Turtle Networks Ltd

Tushar Joshi Turtle Networks Ltd MySQL Database for High Availability Web Applications Tushar Joshi Turtle Networks Ltd www.turtle.net Overview What is High Availability? Web/Network Architecture Applications MySQL Replication MySQL Clustering

More information

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging In some markets and scenarios where competitive advantage is all about speed, speed is measured in micro- and even nano-seconds.

More information

QuickStart Guide vcenter Server Heartbeat 5.5 Update 2

QuickStart Guide vcenter Server Heartbeat 5.5 Update 2 vcenter Server Heartbeat 5.5 Update 2 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent

More information

Web Application s Performance Testing

Web Application s Performance Testing Web Application s Performance Testing B. Election Reddy (07305054) Guided by N. L. Sarda April 13, 2008 1 Contents 1 Introduction 4 2 Objectives 4 3 Performance Indicators 5 4 Types of Performance Testing

More information

Whitepaper Continuous Availability Suite: Neverfail Solution Architecture

Whitepaper Continuous Availability Suite: Neverfail Solution Architecture Continuous Availability Suite: Neverfail s Continuous Availability Suite is at the core of every Neverfail solution. It provides a comprehensive software solution for High Availability (HA) and Disaster

More information

Simplest Scalable Architecture

Simplest Scalable Architecture Simplest Scalable Architecture NOW Network Of Workstations Many types of Clusters (form HP s Dr. Bruce J. Walker) High Performance Clusters Beowulf; 1000 nodes; parallel programs; MPI Load-leveling Clusters

More information

An Oracle White Paper November 2010. Oracle Real Application Clusters One Node: The Always On Single-Instance Database

An Oracle White Paper November 2010. Oracle Real Application Clusters One Node: The Always On Single-Instance Database An Oracle White Paper November 2010 Oracle Real Application Clusters One Node: The Always On Single-Instance Database Executive Summary... 1 Oracle Real Application Clusters One Node Overview... 1 Always

More information

Distributed File System. MCSN N. Tonellotto Complements of Distributed Enabling Platforms

Distributed File System. MCSN N. Tonellotto Complements of Distributed Enabling Platforms Distributed File System 1 How do we get data to the workers? NAS Compute Nodes SAN 2 Distributed File System Don t move data to workers move workers to the data! Store data on the local disks of nodes

More information

Configuring Windows Server Clusters

Configuring Windows Server Clusters Configuring Windows Server Clusters In Enterprise network, group of servers are often used to provide a common set of services. For example, Different physical computers can be used to answer request directed

More information

EWeb: Highly Scalable Client Transparent Fault Tolerant System for Cloud based Web Applications

EWeb: Highly Scalable Client Transparent Fault Tolerant System for Cloud based Web Applications ECE6102 Dependable Distribute Systems, Fall2010 EWeb: Highly Scalable Client Transparent Fault Tolerant System for Cloud based Web Applications Deepal Jayasinghe, Hyojun Kim, Mohammad M. Hossain, Ali Payani

More information

Cloud Computing for Control Systems CERN Openlab Summer Student Program 9/9/2011 ARSALAAN AHMED SHAIKH

Cloud Computing for Control Systems CERN Openlab Summer Student Program 9/9/2011 ARSALAAN AHMED SHAIKH Cloud Computing for Control Systems CERN Openlab Summer Student Program 9/9/2011 ARSALAAN AHMED SHAIKH CONTENTS Introduction... 4 System Components... 4 OpenNebula Cloud Management Toolkit... 4 VMware

More information

FioranoMQ 9. High Availability Guide

FioranoMQ 9. High Availability Guide FioranoMQ 9 High Availability Guide Copyright (c) 1999-2008, Fiorano Software Technologies Pvt. Ltd., Copyright (c) 2008-2009, Fiorano Software Pty. Ltd. All rights reserved. This software is the confidential

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described

More information

VIA CONNECT PRO Deployment Guide

VIA CONNECT PRO Deployment Guide VIA CONNECT PRO Deployment Guide www.true-collaboration.com Infinite Ways to Collaborate CONTENTS Introduction... 3 User Experience... 3 Pre-Deployment Planning... 3 Connectivity... 3 Network Addressing...

More information

(Scale Out NAS System)

(Scale Out NAS System) For Unlimited Capacity & Performance Clustered NAS System (Scale Out NAS System) Copyright 2010 by Netclips, Ltd. All rights reserved -0- 1 2 3 4 5 NAS Storage Trend Scale-Out NAS Solution Scaleway Advantages

More information