CERN local High Availability solutions and experiences. Thorsten Kleinwort CERN IT/FIO WLCG Tier 2 workshop CERN
1 CERN local High Availability solutions and experiences. Thorsten Kleinwort, CERN IT/FIO. WLCG Tier 2 workshop, CERN, 16 June 2006
2 Introduction
- Different hardware used for GRID services
- Various techniques
- First experiences and recommendations
3 GRID Service issues
- Most GRID services do not have built-in failover functionality
- Stateful servers: information is lost on a crash
- Domino effects on failures
- Scalability issues: e.g. the need for more than one CE
4 Different approaches
Server hardware setup:
- Cheap off-the-shelf hardware is good enough only for batch nodes (WN)
- Improve reliability (at higher cost):
  - Dual power supplies (and test that the failover works!)
  - UPS
  - RAID on system/data disks (hot-swappable)
  - Remote console/BIOS access
  - Fibre Channel disk infrastructure
- At CERN: Mid-Range (Disk) Servers
6 Various Techniques
Where possible, increase the number of servers to improve the service:
- WN, UI: trivial
- BDII: possible; the state information is very volatile and is re-queried periodically
- CE, RB: more difficult, but possible for scalability
7 Load Balancing
For multiple servers, use a DNS alias or DNS load balancing to select the right one:
- Simple DNS aliasing: for selecting the master or the slave server; useful for scheduled interventions; also allows dedicated (VO-specific) host names pointing to the same machine (e.g. for the RB)
- Pure DNS round robin (no server check), configurable in DNS: to balance the load equally
- Load balancing service, based on server load/availability/checks: uses monitoring information to determine the best machine(s)
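From the client side, pure DNS round robin is simply a set of A records behind one alias whose order rotates between queries. A minimal sketch of how to see this, using lxplus.cern.ch (the alias from the next slide) as the example:

```python
# Resolve the alias and list the A records behind it; with pure DNS round
# robin the record order rotates between queries, spreading clients over
# the pool of back-end servers.
import socket

infos = socket.getaddrinfo("lxplus.cern.ch", 22, proto=socket.IPPROTO_TCP)
for info in infos:
    print(info[4][0])  # one back-end server IP per record
```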
8 DNS Load Balancing
[Diagram: a LB server monitors the LXPLUSXXX nodes via SNMP/Lemon, selects the least loaded ones, and updates the LXPLUS entry in DNS, so that clients connecting to LXPLUS.CERN.CH reach the best node]
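A minimal sketch of that update loop. The get_node_load() and publish_alias() helpers are hypothetical stand-ins for the real SNMP/Lemon query and the dynamic DNS update, and the interval is an assumption:

```python
import random
import time

CANDIDATES = ["lxplus001", "lxplus002", "lxplus003"]  # pool behind the alias

def get_node_load(host):
    # Hypothetical stand-in for the SNMP/Lemon load query; simulated here.
    return random.uniform(0.0, 10.0)

def publish_alias(alias, hosts):
    # Hypothetical stand-in for the dynamic DNS update of the alias.
    print(f"{alias} -> {hosts}")

def rebalance(alias="lxplus.cern.ch", n_best=2):
    loads = {}
    for host in CANDIDATES:
        try:
            loads[host] = get_node_load(host)
        except OSError:
            pass  # unreachable node: leave it out of the alias entirely
    # Publish the least loaded node(s) under the alias.
    publish_alias(alias, sorted(loads, key=loads.get)[:n_best])

if __name__ == "__main__":
    while True:
        rebalance()
        time.sleep(60)  # re-evaluation interval is an assumption
```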
9 Load balancing & scaling
Publish several (distinct, individually named) servers for the same site to distribute the load:
- Can also be done for stateful services, because clients stick to the server they selected
- Helps to reduce high load (CE, RB)
10 Use existing services
- All GRID applications that support Oracle as a data backend (will) use the existing Oracle service at CERN for their state information
- This allows for stateless, and therefore load-balanced, applications
- The Oracle solution is based on RAC servers with FC-attached disks
11 High Availability Linux
- Add-on to standard Linux
- Switches an IP address between two servers, either when one server crashes or on request
- The machines monitor each other via heartbeat
- To further increase availability, state information can be kept on (shared) FC storage
12 HA Linux + FC disk
[Diagram: two servers behind SERVICE.CERN.CH, linked by heartbeat and attached through redundant FC switches to shared FC disks; no single point of failure]
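A minimal sketch of the takeover mechanism under stated assumptions: the Linux ip and arping tools are available, the addresses are placeholders, and peer health is reduced to a single ping. Real HA Linux/heartbeat uses dedicated heartbeat links and resource scripts instead:

```python
import subprocess
import time

SERVICE_IP = "192.0.2.10/24"  # placeholder address for SERVICE.CERN.CH
PEER = "192.0.2.11"           # placeholder address of the partner node
INTERFACE = "eth0"

def peer_alive():
    # Simplified heartbeat: one ICMP ping with a 1 s timeout.
    return subprocess.run(["ping", "-c", "1", "-W", "1", PEER],
                          stdout=subprocess.DEVNULL).returncode == 0

def take_over_ip():
    # Bring the service IP up locally and announce it with gratuitous ARP,
    # which is roughly what heartbeat's IPaddr resource does on failover.
    subprocess.run(["ip", "addr", "add", SERVICE_IP, "dev", INTERFACE])
    subprocess.run(["arping", "-U", "-c", "3", "-I", INTERFACE,
                    SERVICE_IP.split("/")[0]])

if __name__ == "__main__":
    while peer_alive():
        time.sleep(2)
    take_over_ip()
```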
13 WMS: CE & RB
- CE & RB are stateful servers, so load balancing is used only for scaling: done for both RB and CE
- CEs and RBs run on Mid-Range Server hardware
- If one server goes down, the information it keeps is lost
- [A possible future option: an FC storage backend behind load-balanced, stateless servers]
14 GRID Services: DMS & IS
DMS (Data Management System):
- SE: Storage Element
- FTS: File Transfer Service
- LFC: LCG File Catalogue
IS (Information System):
- BDII: Berkeley Database Information Index
15 DMS: SE
- SE: castorgrid, a load-balanced front-end cluster with a CASTOR storage backend
- 8 simple machines (batch type)
- Here both load balancing and failover work (but not for plain GridFTP)
16 DMS: FTS & LFC
LFC & FTS service:
- Load-balanced front end
- Oracle database backend
Additionally, for the FTS service:
- VO agent daemons, with one warm spare shared by all VOs that gets hot when needed (failover sketched after the next slide)
- Channel agent daemons (master & slave)
17 FTS: VO agent (failover)
[Diagram: one VO agent node each for ALICE, ATLAS, CMS and LHCb, plus a shared SPARE node that takes over for whichever agent fails]
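A minimal sketch of that single-warm-spare pattern. The host names and the check/activation helpers are assumptions, with the health check reduced to a ping:

```python
import subprocess
import time

VO_AGENTS = {"alice": "ftsa01", "atlas": "ftsa02",
             "cms": "ftsa03", "lhcb": "ftsa04"}  # hypothetical host names
SPARE = "ftsa05"

def agent_alive(host):
    # Simplified health check: one ping; a real check would probe the daemons.
    return subprocess.run(["ping", "-c", "1", "-W", "1", host],
                          stdout=subprocess.DEVNULL).returncode == 0

def activate_spare(vo):
    # Hypothetical activation step: configure the spare with the failed VO's
    # agent daemons, turning it from warm to hot.
    print(f"activating {SPARE} as the {vo} VO agent")

if __name__ == "__main__":
    while True:
        failed = [vo for vo, host in VO_AGENTS.items() if not agent_alive(host)]
        if failed:
            activate_spare(failed[0])  # only one spare, so first failure wins
            break
        time.sleep(30)
```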
19 IS: BDII
BDII: load-balanced Mid-Range Servers:
- LCG-BDII: top-level BDII
- PROD-BDII: site BDII
- In preparation: <VO>-BDII
The state information lives in a volatile cache and is re-queried within minutes
21 Example: BDII
[Chart, annotated: first BDII in production; second BDII in production]
22 AAS: MyProxy
- MyProxy has a replication function for a slave server that allows read-only proxy retrieval [not used any more]
- Instead, the slave server regularly gets a read-write copy from the master (rsync, every minute) to allow a DNS switch-over
- HA Linux handles the failover
23 HA Linux (MyProxy)
[Diagram: PX101 and PX102 behind MYPROXY.CERN.CH, linked by heartbeat, with the repository replicated from master to slave via rsync]
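A minimal sketch of that once-a-minute replication, assuming hypothetical host and repository path names:

```python
import subprocess
import time

SLAVE = "px102.cern.ch"   # hypothetical slave host name
REPO = "/var/myproxy/"    # hypothetical repository path

def replicate_once():
    # -a preserves permissions/ownership; --delete keeps the slave an exact
    # mirror, so a DNS switch-over finds a consistent read-write copy.
    subprocess.run(["rsync", "-a", "--delete", REPO, f"{SLAVE}:{REPO}"],
                   check=True)

if __name__ == "__main__":
    while True:
        replicate_once()
        time.sleep(60)  # "every minute", as on the slide
```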
24 AAS: VOMS
[Diagram: the LCG-VOMS alias, managed by HA Linux, in front of voms102 and voms103, health-checked with voms-ping; the VOMS DB sits on an Oracle 10g RAC]
25 MS: SFT, GRVW, MONB
- Not seen as critical
- Nevertheless the servers are migrated to Mid-Range Servers
- GRVW has a hot standby
- SFT & MONB run on Mid-Range Servers