Condor for the Grid
- Lionel Hardy
1 Condor for the Grid
References:
1) Condor and the Grid. Douglas Thain, Todd Tannenbaum, and Miron Livny. In Grid Computing: Making the Global Infrastructure a Reality, Fran Berman, Anthony J. G. Hey, and Geoffrey Fox, editors. John Wiley, 2003.
2) Condor-G: A Computation Management Agent for Multi-Institutional Grids. J. Frey, T. Tannenbaum, I. Foster, M. Livny, and S. Tuecke. In Proceedings of the 10th International Symposium on High Performance Distributed Computing (HPDC-10), August 2001.
2 Utilization of Diverse Resources
- Most users could make use of diverse resources for tasks such as simulation, image processing, and rendering
- Problem: potential barriers to these resources
  - Diverse mechanisms, policies, failure modes, and performance characteristics
  - These arise when crossing administrative domains
- A solution must satisfy three key user requirements
  - Discover, acquire, and reliably manage computational resources dynamically in the course of everyday activities
  - Users shouldn't be bothered with resource location, the mechanisms needed to use resources, keeping track of the computational tasks on the resources, or reacting to failure
  - They DO care about how long their jobs take and how much they cost
3 Condor Background
- Started in 1988 by Dr. Miron Livny
- Takes advantage of computing resources that might otherwise be wasted
- "Condor is a reliable, scalable, and proven distributed system, with many years of operating experience. It manages a computing system with multiple owners, multiple users, and no centralized administrative structure. It deals gracefully with failures and delivers a large number of computing cycles on the time scale of months to years." (3)
4 Condor Features
- ClassAds
  - Match job requests with machines
- Job checkpointing and migration
  - Records checkpoints periodically
  - Able to resume an application from its checkpoint file
  - Safeguards the computation time already invested in the job
- Remote system calls
  - Preserve the local execution environment
  - Users don't have to make files available on remote workstations
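The matchmaking above works on ClassAds: machines and jobs both advertise attribute lists, and each side's Requirements expression is evaluated against the other side's attributes. A minimal sketch of the idea (the attribute values are invented for illustration; real ads carry many more attributes):

```
# Machine ClassAd (advertised by a workstation)
MyType       = "Machine"
Arch         = "INTEL"
OpSys        = "LINUX"
Memory       = 512
State        = "Unclaimed"

# Matching fragment of a job ClassAd
Requirements = (Arch == "INTEL") && (OpSys == "LINUX") && (Memory >= 256)
Rank         = Memory
```

The matchmaker pairs the job with any machine whose ad satisfies the job's Requirements (and vice versa), preferring machines with a higher Rank value.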
5 Simple Condor Pool
Figure 11.4 from (1)
6 Gateway Flocking
- The Condor software spread, creating various pools
- A user could only participate in one community
- In 1994, a solution, gateway flocking, was developed
Figure 11.5 from (1)
7 Direct Flocking
- Problem with gateway flocking: it only allows sharing at the organizational level
- Direct flocking allows an agent to report itself to multiple matchmakers
Figure 11.7 from (1)
8 Condor in relation to the Grid
Figure 11.2 from (1)
9 Slight Grid Detour
Condor-G exploits several Grid protocols:
- GSI (Grid Security Infrastructure)
  - An essential building block for other Grid protocols
  - Handles authentication and authorization
- MDS-2 (Monitoring and Discovery Service)
  - A resource uses GRRP (Grid Resource Registration Protocol) to notify other Grid entities of its existence
  - Those entities use GRIP (Grid Resource Information Protocol) to obtain information about the resource's status
- GASS (Global Access to Secondary Storage)
  - Mechanisms for transferring data between a remote HTTP, FTP, or GASS server and the local machine
- GRAM (Grid Resource Allocation and Management)
10 GRAM
- In 1998, the Globus project designed GRAM (Grid Resource Allocation and Management Protocol)
- Provides an abstraction for remote process queuing
- Three aspects are important in relation to Condor:
  - Security
  - Two-phase commit
  - Fault tolerance
- To take advantage of GRAM, the user needs a system that can:
  - Remember what jobs have been submitted, where the jobs are, and what those jobs are doing
  - Handle job failure and, if necessary, resubmit jobs
  - Track large numbers of jobs, with queueing, prioritization, and logging
- Condor-G was created to do these things
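Without such a system, each GRAM submission was a one-shot command. A sketch of what direct GRAM use looked like in this era (the gatekeeper host and paths are invented; exact RSL attributes varied by deployment):

```
globusrun -r gatekeeper.example.edu/jobmanager-pbs \
    '&(executable=/home/user/sim)(arguments=run1)(count=1)'
```

Nothing here remembers the job, watches its progress, or retries it on failure; that bookkeeping is exactly what Condor-G adds on top of GRAM.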
11 Intro to Condor-G
- Treats the Grid as a local resource
- Provides an API and command-line tools to:
  - Submit jobs
  - Query a job's status or cancel the job
  - Be informed of job termination or problems
  - Obtain access to detailed logs
- Maintains the look and feel of a local resource manager
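The local look and feel comes from the standard condor_submit interface; a Condor-G job of this era was described with a submit file that names a Globus resource. A sketch, with an invented gatekeeper host:

```
# sim.submit -- Condor-G submit description file (sketch)
executable      = my_sim
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager
output          = sim.out
error           = sim.err
log             = sim.log
queue
```

condor_submit sim.submit queues the job, and condor_q and condor_rm give the usual status and cancel operations, just as for a local Condor job.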
12 Condor-G (under the hood)
- Stages a job's standard I/O and executable using GASS
- Uses GRAM to submit jobs to remote queues
- Problems:
  - Must direct a job to a particular queue without knowing the availability of resources
  - Does not support the features of each system underlying GRAM
  - Many of Condor's features, like checkpointing, are also missing
- Wait a tick... I think we can fix that last problem
  - Gliding in gives us the reach of Condor-G and the features of Condor
13 Condor-G without gliding in
[Diagram (Figure 1 from (2)): the end user's requests go to the Condor-G scheduler on the job submission machine, which keeps a persistent job queue and forks a Condor-G GridManager. The GridManager and a GASS server talk to the Globus GateKeeper at the job execution site, which forks Globus JobManagers; these submit jobs X and Y to the site job scheduler (PBS, Condor, LSF, LoadLeveler, NQE, etc.).]
14 Gliding In (all figures from 11.9 of (1))
The process requires three steps
15 Gliding In - Step Two
16 Gliding In - Step Three
17 Condor-G with gliding in
[Diagram (Figure 2 from (2)): the job submission machine again runs the Condor-G scheduler, persistent job queue, forked Condor-G GridManager, and GASS server, plus a Condor shadow process for job X and a Condor-G collector. The GridManager submits glide-in Condor daemons through the Globus daemons and local site scheduler at the job execution site [see Figure 1]; the glided-in Condor daemons report resource information to the collector, and job X runs under the Condor system-call trapping and checkpoint library, with redirected system-call data flowing back to the shadow.]
18 More Condor-G tidbits
- Failure handling covers:
  - Crash of the Globus JobManager
  - Crash of the machine that manages the remote resource
  - Crash of the machine on which the GridManager is executing
  - Network failure between the two machines
- Credential management
  - A GSI proxy is given a finite lifetime
  - The Condor-G agent periodically analyzes the credentials of users with queued jobs
  - Credentials may have to be forwarded to a remote location
- Resource discovery and scheduling: different strategies
  - A user-supplied list of GRAM servers
  - A personal resource broker that runs as part of the Condor-G agent
19 Condor-G FAQ (from (3))
Q: Do I need to have a Condor pool installed to use Condor-G?
A: No, you do not. Condor-G is only the job management part of Condor. It makes perfect sense to install Condor-G on just one machine within an organization. Note that access to remote resources using a Globus interface will be done through that one machine using Condor-G.
Q: I am an existing Condor pool administrator, and I want to be able to let my users access resources managed by Globus, as well as access the Condor pool. Should I install Condor-G?
A: Yes.
Q: I want to install Globus on my Condor pool, and have external users submit jobs into the pool. Do I need to install Condor-G?
A: No, you do not need to install Condor-G. You need to install Globus, to get the "Condor jobmanager setup package."
20 More from the Condor-G FAQ
Q: I am the administrator at Physics, and I have a 64-node cluster running Condor. The administrator at Chemistry is also running Condor on her 64-node cluster. We would like to be able to share resources. Do I need to install Globus and Condor-G?
A: You may, but you do not need to. Condor has a feature called flocking, which lets multiple Condor pools share resources. By setting configuration variables, jobs may be executed on either cluster. Flocking is good (better than Condor-G) because all the Condor features, such as checkpointing and remote system calls, continue to work. Unfortunately, flocking only works between Condor pools, so access to a resource managed by PBS, for example, will still require Globus and Condor-G to submit jobs into the PBS queue.
Q: Why would I want to use Condor-G?
A: If you have more than a trivial set of work to get done, you will find that without Condor you spend a great deal of time managing your jobs; your time would be better spent working with your results. For example, imagine you want to run 1000 jobs on the NCSA Itanium cluster without Condor-G. To use the Globus interface directly, you will type 'globusrun' 1000 times. Now imagine that you want to check on the status of your jobs: using 'globus-job-status' 1000 times will not be much fun. And heaven help you if you find a bug and need to cancel your thousand jobs, then resubmit all 1000 of them. Under Condor-G, job submission and management are simplified. Condor-G also eases submission and management when the job flow is complex. For example, suppose the first 100 jobs must run before any of the next 100 jobs; once those 200 jobs are done, the next 700 may start; and the last 100 may run only after the first 900 finish. Condor DAGMan manages all of the dependencies for you, so you can avoid writing a rather complex Perl script.
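The 1000-job scenario above collapses into a single submit file under Condor-G: the queue statement replaces a thousand globusrun invocations, and the $(Process) macro keeps the per-job output files apart. A sketch (file and host names are invented):

```
# thousand.submit -- sketch: 1000 jobs in one submission
executable      = analyze
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager
output          = out.$(Process)
error           = err.$(Process)
log             = analyze.log
queue 1000
```

A single condor_rm then cancels the whole batch, and one log file records the history of all thousand jobs.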
21 Problem Solvers
- A problem solver is a high-level structure built on top of the Condor agent
- It relies on a Condor agent in two ways:
  - It uses the agent as a service for reliably executing jobs
  - The agent makes the problem solver itself reliable
- Two problem solvers are provided with Condor
  - Master-Worker (MW)
  - The directed acyclic graph manager (DAGMan)
22 Master-Worker
- Solves problems of indeterminate size on a large and unreliable workforce
- The master consists of three components
  - Work list: the outstanding work the master wants done
  - Tracking module: accounts for remote worker processes
  - Steering module: directs the computation by examining results, modifying the work list, and obtaining a sufficient number of workers
- Packaged as source for several C++ classes
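The three master components can be sketched in a few lines of Python; this is an illustration of the pattern, not the actual MW C++ classes, and every name in it is invented for the sketch:

```python
# Sketch of the master side of the master-worker pattern: a work list,
# a tracking structure for in-flight tasks, and a completion hook where
# a steering module could examine results and add new work.

class Master:
    def __init__(self, work):
        self.work_list = list(work)   # outstanding work the master wants done
        self.in_flight = {}           # tracking module: work assigned to workers
        self.results = []

    def assign(self, worker_id):
        # Hand the next piece of work to an idle worker, if any remains.
        if not self.work_list:
            return None
        task = self.work_list.pop(0)
        self.in_flight[worker_id] = task
        return task

    def worker_failed(self, worker_id):
        # An unreliable worker died: put its task back on the work list.
        task = self.in_flight.pop(worker_id, None)
        if task is not None:
            self.work_list.append(task)

    def complete(self, worker_id, result):
        # Steering-module hook: record the result (and possibly add work).
        self.in_flight.pop(worker_id, None)
        self.results.append(result)

master = Master(range(4))
t = master.assign("w1")        # w1 receives task 0
master.worker_failed("w1")     # w1 crashes; task 0 returns to the work list
for wid in ("w2", "w3"):
    task = master.assign(wid)
    master.complete(wid, task * task)
```

The key property is that a crashed worker loses nothing permanently: its task simply reappears on the work list for reassignment.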
23 DAGMan
- A meta-scheduler for Condor
- A service for executing multiple jobs with explicitly declared dependencies
- Keeps private logs
  - These allow it to resume a DAG where it left off, even after crashes
24 DAGMan properties
- Jobs can complete in any order, but the graph must maintain the properties of a DAG
  - No cycles
  - Directed edges
- A JOB statement associates an abstract name (A) with a file (a.condor) that describes a complete Condor job
- B and C can't run until A finishes
- in.pl and out.pl run before and after C is executed, respectively
  - PRE scripts usually prepare the environment
  - POST scripts tear down the environment
25 DAGMan logistics
- DAGs are defined by .dag files
- Submitted with the command condor_submit_dag <dagfile>
- The DAGMan daemon runs as its own Condor job
- On a job failure, DAGMan proceeds as far as it can and then generates a rescue file
  - The rescue file is used to restore the DAG's state when ready, and the process continues as if nothing happened
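Putting the pieces together, a .dag file for the A/B/C example from the previous slide looks like this sketch (the JOB, PARENT/CHILD, and SCRIPT statements follow the slide's names; b.condor and c.condor are assumed to exist alongside a.condor):

```
# example.dag -- DAGMan input file (sketch)
JOB A a.condor
JOB B b.condor
JOB C c.condor
PARENT A CHILD B C       # B and C cannot run until A finishes
SCRIPT PRE  C in.pl      # prepare the environment before C runs
SCRIPT POST C out.pl     # tear down the environment after C runs
```

condor_submit_dag example.dag starts DAGMan; if, say, C fails, the generated rescue file records which jobs already completed so the DAG can resume from that point.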
26 Split Execution
- Making the job feel at home in its environment requires cooperation between the submission machine and the execution machine
  - This cooperation is called split execution
- Two distinct components: the shadow and the sandbox
  - Shadow: represents the user to the system
    - Provides the executable, arguments, environment, etc.
  - Sandbox: gives the job a safe place to play
    - Sand: gives the job everything it needs to run properly
    - Box: protects the job from harm from the outside world
- Condor has several universes to create the job environment
27 Condor Universes
- Standard universe
  - Supplied with the earliest Condor facilities
  - Provides emulation for standard system calls and allows checkpointing
  - Converts all of a job's standard system calls into RPCs back to the shadow
- Java universe
  - Added to Condor in late 2001
  - Originally, users had to submit an entire JVM binary as the job; this worked, but it was inefficient
  - A level of abstraction was introduced to create a complete Java environment
  - The location of the JVM is provided by the local administrator
  - The sandbox must place this and other components in a private execution directory, and start the JVM under the user's credentials
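The Java universe abstraction shows up in the submit file: the user names a class file and main class instead of shipping a JVM. A sketch (class and file names are invented for illustration):

```
# java.submit -- Java universe submit description file (sketch)
universe    = java
executable  = Sim.class
arguments   = Sim
output      = sim.out
error       = sim.err
log         = sim.log
queue
```

The sandbox locates the administrator-configured JVM, stages the class files into the job's private execution directory, and runs the program under the user's credentials.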
28 Questions???