PRACE hardware, software and services. David Henty, EPCC
1 PRACE hardware, software and services (David Henty, EPCC)
2 Why?
- Weather, Climatology, Earth Science: degree of warming, scenarios for our future climate; understanding and predicting ocean properties and variations; weather and flood events
- Astrophysics, Elementary particle physics, Plasma physics: systems and structures which span a large range of different length and time scales; quantum field theories like QCD; ITER
- Material Science, Chemistry, Nanoscience: understanding complex materials, complex chemistry, nanoscience; the determination of electronic and transport properties
- Life Science: systems biology, chromatin dynamics, large-scale protein dynamics, protein association and aggregation, supramolecular systems, medicine
- Engineering: complex helicopter simulation, biomedical flows, gas turbines and internal combustion engines, forest fires, green aircraft, virtual power plant
3 Supercomputing Drives Science through Simulation
- Environment: Weather/Climatology, Pollution/Ozone Hole
- Ageing Society: Medicine, Biology
- Materials/Inf. Tech: Spintronics, Nano-science
- Energy: Plasma Physics, Fuel Cells
4 Sum of Performance per Country (TOP500)
5 Rationale
- Europe must maintain its high standards in computational science and engineering
- Europe has to guarantee independent access to HPC systems of the highest performance class for all computational scientists in its member states
- Scientific excellence requires peer review on a European scale to foster the best ideas and groups
- User requirements for a variety of architectures require coordinated procurement
- EU and national governments have to establish a robust and persistent funding scheme
6 HPC on ESFRI Roadmap 2006
- First comprehensive definition of RIs at European level
- RIs are major pillars of the European Research Area
- A European HPC service: strategic competitiveness; attractiveness for researchers; access based on excellence; supporting industrial development
7 The ESFRI Vision for a European HPC service
- European HPC facilities at the top of an HPC provisioning pyramid:
  - Tier-0: 3-6 European Centres for Petaflop capability (PRACE)
  - Tier-0: ? European Centres for Exaflop capability
  - Tier-1: National Centres (DEISA/PRACE)
  - Tier-2: Regional/University Centres
- Creation of a European HPC ecosystem: scientific and industrial user communities; HPC service providers on all tiers; Grid infrastructures; the European HPC hard- and software industry
8 PRACE in Europe
9 PRACE Timeline
- HPCEUR, HET, PRACE MoU
- PRACE Preparatory Phase (EU Grant: INFSO-RI, 10 Mio.)
- PRACE Implementation Phase (1IP, 2IP)
- PRACE Operation
- PRACE (AISBL), a legal entity with (current) seat location in Brussels
10 Purpose of Workshop
- Introduce you to the DECI-7 process
- Get you logged on to Tier-1 machines
- Make sure you can compile and run simple codes
- Inform you of the applications support available
- Get you started on your own codes
11 Timetable
Day 1:
- 13:30-13:45 Welcome and Introduction to SARA
- 13:45-14:30 PRACE hardware, software and services
- 14:30-15:30 Use of Certificates, Using gsi-ssh and gridftp
- 15:30-16:00 Coffee Break
- 16:00-17:30 Hands-on session (practical examples)
Day 2:
- 09:30-10:15 PRACE support for the DECI projects
- 10:15-10:30 Remote Visualization
- 10:30-11:00 Hands-on sessions (users' own application codes)
- 11:00-11:30 Coffee Break
- 11:30-12:30 Hands-on sessions (users' own application codes)
- 12:30-13:30 Lunch
- 13:30-15:30 One-to-one sessions (optional)
12 Access to PRACE resources
- Regular calls for proposals
- Successful projects are allocated a maximum number of CPU hours and given access for a limited period of time
- Linked calls: can apply for Tier-0 or Tier-1
- Tier-1 access is via DECI (a continuation of the DEISA scheme)
  - active projects (you!) are DECI-7
  - the call is already open for DECI-8, starting May
13 Tier-0 Systems
- IBM Blue Gene/P JUGENE (FZJ, Germany)
- Bull Bullx cluster CURIE (CEA/GENCI, France)
- Cray XE6 HERMIT (HLRS, Germany)
- SuperMUC (LRZ, Germany)
- MareNostrum (BSC, Spain)
- FERMI (CINECA, Italy)
14 Tier-1 Systems: specialist
- Cray XT4/5/6 and Cray XE6: EPCC (UK), KTH (Sweden), CSC (Finland)
- IBM Blue Gene/P: IDRIS (France), RZG (Germany), NCSA (Bulgaria)
- IBM Power 6: RZG (Germany), SARA (The Netherlands), CINECA (Italy)
15 Tier-1 Systems: clusters
- FZJ (Germany, Bull Nehalem cluster)
- LRZ (Germany, Xeon cluster)
- HLRS (Germany, NEC Nehalem cluster plus GPGPU cluster)
- CINES (France, SGI ICE 8200)
- BSC (Spain, IBM PowerPC)
- CINECA (Italy, Westmere plus GPGPU cluster)
- PSNC (Poland, Bullx plus GPGPU cluster and HP cluster)
- ICHEC (Ireland, SGI ICE 8200)
16 DECI Terminology
- Every project has a single HOME site and one or more EXECUTION sites
- Home site: your main point of contact; you will have a named person responsible for login accounts etc.
- Execution sites: where you run your jobs
17 Accounts
- HOME site: you must apply for an account here
  - you supply a certificate from your national Certificate Authority
  - the account is automatically propagated to your execution site(s)
  - additional information may be needed depending on local arrangements, e.g. signing up to codes of conduct
- EXECUTION sites: using the same user name as at your home site is recommended
  - access is via gsissh; some sites may also support ssh
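As a sketch of what a gsissh login looks like in practice, the helper below builds the command line a DECI user might run. The account name, host, and port are illustrative assumptions (port 2222 is a common choice for GSI-OpenSSH services, but your execution site's documentation is authoritative):

```python
# Build the argument list for a GSI-SSH login to a PRACE execution site.
# Host name and port below are placeholders, not confirmed site values.

def build_gsissh_cmd(user, host, port=2222):
    """Return the gsissh invocation as a list suitable for subprocess.run()."""
    return ["gsissh", "-p", str(port), f"{user}@{host}"]

cmd = build_gsissh_cmd("pr1utr11", "login.example-site.eu")
print(" ".join(cmd))
```

Keeping the command as a list (rather than a single string) avoids shell-quoting problems if you later pass it to `subprocess.run()`.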
18 Security Infrastructure
- Standard public/private key setup
  - private key: known only to its owner
  - public key: known to everyone
- One key encrypts, the other decrypts: this gives both authentication and privacy
(figure by Borja Sotomayor, tutorial/multiplehtml/ch09s03.html; from J. Schopf, Globus Alliance)
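The "one key encrypts, the other decrypts" idea from the slide can be seen in miniature with textbook RSA. The tiny primes below are purely illustrative (this is nowhere near secure, and real X.509 infrastructure uses much more than raw RSA), but the round trip shows the asymmetry:

```python
# Toy RSA with the classic textbook parameters -- illustration only.
p, q = 61, 53
n = p * q        # 3233: the modulus, part of both keys
e = 17           # public exponent
d = 2753         # private exponent: (e * d) % lcm(p-1, q-1) == 1

def apply_key(m, exponent, modulus=n):
    """Modular exponentiation: the same operation serves both directions."""
    return pow(m, exponent, modulus)

message = 65
ciphertext = apply_key(message, e)   # anyone can encrypt with the public key
recovered = apply_key(ciphertext, d) # only the private key holder can decrypt
print(message, ciphertext, recovered)
```

Swapping the roles (encrypting with d, verifying with e) is what turns the same mathematics into a digital signature, which is how certificate-based authentication works.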
19 Certificates
- Similar to a passport or driver's licence
- All PRACE users have an X.509 certificate
  - certified by their national certificate authority
  - special temporary certificates are issued for training sessions
- Enables secure authentication
(figure by Rachana Ananthakrishnan, from J. Schopf, Globus Alliance)
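One practical check worth knowing is whether a certificate's validity window is still open. Python's standard `ssl` module can parse the `notBefore`/`notAfter` date strings that appear in X.509 certificates; the dates below are made-up examples, not real certificate values:

```python
# Parse X.509 validity dates and check the window is non-empty.
# The date strings are illustrative, in the format ssl expects.
import ssl

not_before = "Jan  1 00:00:00 2011 GMT"
not_after = "Dec 31 23:59:59 2012 GMT"

start = ssl.cert_time_to_seconds(not_before)  # seconds since the Epoch
end = ssl.cert_time_to_seconds(not_after)

print("validity window is", end - start, "seconds")
```

If a training certificate suddenly stops working, an expired `notAfter` date is one of the first things to rule out.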
20 Managing certificates
- Your certificate must be installed on your local machine, protected by a private password/passphrase
- The list of user certificates is held by PRACE in a central LDAP (Lightweight Directory Access Protocol) database
- PRACE sites synchronise with the LDAP at regular intervals
- Ask your home site if you have problems!
21 CPU time is accounted in standard core-hours, e.g. the conversion factors for DEISA were:
- AMD I2: 1.1
- Intel (various clock speeds): 1.4 / 1.6 / 2.8 / 3
- Intel Westmere: 2.6-2.7
- NEC SX8: 6
- NEC SX9: 36
- XT5 (CSCS): 1.2
(the factors for the remaining systems, including P4+, PPC, XE6 12C, Xeon X5560/X5570 and XT5 DC, are garbled in this transcription)
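The normalisation itself is just a per-architecture multiplier applied to raw machine core-hours. A minimal sketch, using only the factors that are legible in the table above:

```python
# Convert raw machine core-hours into standard core-hours.
# Factors taken from the legible entries in the DEISA table above.
FACTORS = {
    "AMD I2": 1.1,
    "Intel Westmere": 2.7,
    "NEC SX8": 6,
    "NEC SX9": 36,
    "XT5 (CSCS)": 1.2,
}

def to_standard_core_hours(raw_hours, machine):
    """Scale raw core-hours by the machine's normalisation factor."""
    return raw_hours * FACTORS[machine]

print(to_standard_core_hours(100, "NEC SX9"))
```

This is why an allocation quoted in standard core-hours drains faster on a vector machine like the SX9 than on a commodity cluster: 100 raw hours there count as 3600 standard hours.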
22 Current status
- Tier-1 systems are new to PRACE
  - DECI-7 is the first PRACE DECI
  - integration may not be complete for all sites
- Training accounts: we provide temporary accounts for today
  - pr1utrxx (xx = 11, 12, ..., 35)
  - notional home site is EPCC (HECToR)
  - temporary certificate from the ECMWF CA
  - execution sites are at SARA, CSC and CINECA, chosen to span a range of architectures
23 DECI-7 Statistics
- 54 applications for 200 million standard core-hours
- 35 successful projects, allocated 90 million standard core-hours
- Start date: 1st November 2011; end date: 31st October 2012
  - you MUST use your CPU allocation within this period
- Final reports are due within 3 months of project completion
24 Useful resources
- Documentation: currently provided via the DEISA site, migrating to the PRACE site
- Reporting problems: Trouble Ticket System, which will be opened up to users in the future
- Monitoring CPU usage: done via the DART tool; your home site can advise on installing this
25 Graphical client
- Run a graphical client on your local machine
- Uniform interface to different HPC systems
- Support for data transfer, workflows etc.
26 Meal: Brasserie Harkema