Concept and Implementation of CLUSTERIX: National Cluster of Linux Systems


Roman Wyrzykowski (1), Norbert Meyer (2), and Maciej Stroinski (2)

(1) Czestochowa University of Technology, Institute of Computer & Information Sciences, Dabrowskiego 73, Czestochowa, Poland, roman@icis.pcz.pl
(2) Poznan Supercomputing and Networking Center, Noskowskiego 10, Poznan, Poland, {meyer, stroins}@man.poznan.pl

Abstract. This paper presents the concept and implementation of the National Cluster of Linux Systems (CLUSTERIX) - a distributed PC-cluster (or metacluster) of a new generation, based on the Polish Optical Network PIONIER. Its implementation makes it possible to deploy a production Grid environment consisting of local PC-clusters with 64- and 32-bit Linux machines, located in independent centers across Poland. The management software, developed as Open Source, allows for dynamic changes in the metacluster configuration. The resulting system will be tested on a set of pilot distributed applications developed as a part of the project. The project is implemented by 12 Polish supercomputing centers and metropolitan area networks.

1 Introduction

PC-clusters using Open Source software such as Linux are now the most common and widely available parallel systems. At the same time, the capabilities of Gigabit/s wide-area networks are increasing rapidly, to the point where it becomes feasible, and indeed attractive, to think of a high-end integrated metacluster environment rather than a set of disjoint local clusters. Such metaclusters [3,17,18] can be viewed as key elements of the modern Grid infrastructure, used by scientists and engineers to solve computationally and data-demanding problems. In Poland, we have access to all the crucial elements necessary to build the national Linux metacluster. The most important among them is the Polish Optical Network PIONIER [15,16]. It is an intelligent, multi-channel optical network using DWDM technology, with a bandwidth of n x (10, 40, ...) Gb/s, based on the IP protocol. On the transport layer, this network provides allocation of dedicated resources for specified applications, Grids, and thematic networks.

2 Project Goals and Status

The main objective of the CLUSTERIX project [1] is to develop mechanisms and tools that allow for the deployment of a production Grid environment whose backbone consists of dedicated, local Linux clusters with 64-bit machines. Local clusters are placed in geographically distant, independent centers connected by the Polish Optical Network PIONIER. It is assumed that (in theory) any Linux cluster may be attached to the backbone dynamically, as a so-called dynamic cluster. As a result, a geographically distributed Linux cluster is obtained, with a dynamically changing configuration, fully operational and integrated with services offered by other projects. The project started in December 2003 and lasts 32 months. It is divided into two stages: (i) research and development, with an estimated duration of 20 months, and (ii) a deployment stage. The project is implemented by 12 Polish supercomputing centers and metropolitan area networks affiliated to Polish universities, with Czestochowa University of Technology as the project coordinator. It is important to note the phrase "production Grid"; it means the development of a software/hardware infrastructure accessible for real computing, fully operational and integrated with services offered by other projects related to the PIONIER program [16], e.g., the National Computational Cluster based on the LSF batch system, the National Data Warehouse, and the virtual laboratory project. Delivering advanced and specialized services integrated into a single coherent system requires additional mechanisms not available in the existing pilot installations (see, e.g., the CrossGrid testbed [2]). These are commonly constrained by the assumption of a static infrastructure in terms of the number of nodes and services provided, as well as the number of users organized into virtual organizations.
In contrast, in CLUSTERIX we provide mechanisms and tools for an automated attachment of dynamic clusters; for example, non-dedicated clusters or labs may be attached to the backbone during the night or at weekends. In the CLUSTERIX project, a lot of emphasis is laid on the usage of the IPv6 protocol [8] and its added functionality - enhanced reliability and QoS. This functionality, delivered to the application level and at least used in the middleware, would allow for a better quality of services. No production IPv6-based Grid infrastructure exists at present, but taking into account the duration of the project, it may be assumed that the IPv6 standard will be widely used. Therefore, the developed tools will support both IPv6 and IPv4. After the system is built, it will be tested on a set of pilot applications created as a part of the project. An important goal of the project is also to support potential CLUSTERIX users in the preparation of their Grid applications, thus creating a group of people able to use the cluster in an optimal way after the research and deployment works are finished.
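The night-and-weekend availability of dynamic clusters mentioned above can be sketched as a simple policy check. The working-hours window below is an assumption made for illustration, not an actual CLUSTERIX setting:

```python
from datetime import datetime

# Hypothetical availability policy for a dynamic cluster: a lab's
# machines may join the backbone outside working hours (assumed here
# to be 8:00-18:00 on weekdays) and at any time on weekends.
def may_attach(now: datetime, work_start: int = 8, work_end: int = 18) -> bool:
    if now.weekday() >= 5:          # Saturday (5) or Sunday (6)
        return True
    return now.hour < work_start or now.hour >= work_end

# A lab cluster is attachable on a Sunday afternoon...
assert may_attach(datetime(2005, 3, 6, 14, 0))
# ...but not on a Wednesday at noon.
assert not may_attach(datetime(2005, 3, 9, 12, 0))
```

In practice such a decision would be taken by the cluster's firewall/router when registering with the backbone; the sketch only captures the time-window rule itself.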

3 Pilot Installation

The CLUSTERIX project includes a pilot installation (Fig. 1) consisting of 12 local clusters located in independent centers across Poland. They are interconnected via dedicated 1 Gb/s channels provided by the PIONIER optical network.

Fig. 1. Pilot installation in the CLUSTERIX project (a map of the 12 local clusters, ranging from 6 to 32 Itanium2 nodes per site, with 12 to 172 GB RAM, 219 to 6278 GB of disk, and Gigabit Ethernet or InfiniBand switches, in centers across Poland including Gdańsk, Szczecin, Białystok, Poznań, Zielona Góra, Wrocław, Łódź, Częstochowa, Kraków, and Lublin)

The core of the testbed is equipped with 127 Intel Itanium2 nodes managed by the Linux OS (Debian distribution, kernel 2.6.x). A computational node includes two Itanium2 processors (1.3 GHz, 3 MB cache), 4 GB or 8 GB RAM, a 73 or 146 GB SCSI HDD, as well as two network interfaces (Gigabit Ethernet, and InfiniBand or Myrinet). Such a dual network interface allows for creating two independent communication channels, dedicated to the exchange of messages during computations and to NFS support, respectively. The efficient access to the PIONIER backbone is provided through a Gigabit Ethernet L2/L3 coupling switch (see Fig. 2).

Fig. 2. Architecture of the CLUSTERIX infrastructure

Selected 32-bit machines are dedicated to the management of local clusters and of the entire infrastructure. While users' tasks are allowed to execute only on computational nodes, each local cluster is equipped with an access node where the Globus Toolkit [5] and the local batch system are running. All machines inside a local cluster are protected by a firewall, which is also used as a router for the attachment of dynamic clusters. Access to resources of the National Linux Cluster is allowed only from machines called entry points; physical users can possess accounts only on these dedicated nodes. It is assumed that end-users' applications are submitted to the CLUSTERIX system through WWW portals. An important element of the pilot installation is the Data Storage System. Before the execution of an application, input data are fetched from storage elements and transferred to access nodes; after the execution, output data are returned from access nodes to storage elements. The Data Storage System includes a distributed implementation of a data broker. Currently, each storage element is equipped with a 2 TB HDD.

4 Pilot Applications

The National Linux Cluster will be used for running HTC applications, as well as large-scale distributed applications that require the parallel use of resources of one or more local clusters (meta-applications). In the project, selected end-users' applications are being developed for the experimental verification of the project assumptions and deliverables, as well as to achieve real application results. It is clear that applications, and their ability to use distributed resources efficiently, will finally decide on the success of computational Grids. Because of the hierarchical architecture of the CLUSTERIX infrastructure, it is not a trivial issue to adapt an application for its efficient execution on the metacluster.
This requires parallelization on several levels corresponding to the metacluster architecture, taking into account heterogeneity in both the computing power of different nodes and the network performance between various subsystems. Another problem is the variable availability of Grid components. In the CLUSTERIX project, the MPICH-G2 tool [10], based on the Globus Toolkit, is used as a Grid-enabled implementation of the MPI standard. The list of pilot applications includes, among others: FEM modeling of castings solidification; modeling of transonic flows and design of advanced tip devices; prediction of protein structures from a sequence of amino acids and simulation of protein folding; investigation of properties of bio-molecular systems, for drug design; large-scale simulations of blood circulation in micro-capillaries; astrophysical simulations; and the GAMESS package in the CLUSTERIX environment.
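As a toy illustration of the heterogeneity-aware parallelization described above, work units can be split across nodes in proportion to their relative speeds. The speed figures and the greedy rounding rule are illustrative, not taken from the pilot applications:

```python
# Toy sketch of heterogeneity-aware partitioning: distribute n_units of
# work over nodes proportionally to their (assumed known) relative
# speeds, as a metacluster-aware application might do at start-up.
def partition(n_units: int, speeds: list[float]) -> list[int]:
    total = sum(speeds)
    shares = [int(n_units * s / total) for s in speeds]
    # Hand out the units lost to integer rounding, fastest nodes first.
    leftover = n_units - sum(shares)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:leftover]:
        shares[i] += 1
    return shares

# A node three times as fast receives three times the work.
assert partition(1000, [1.0, 3.0]) == [250, 750]
# No work unit is ever lost to rounding.
assert sum(partition(1000, [1.0, 2.0, 1.0])) == 1000
```

A real meta-application would apply the same idea hierarchically: first across local clusters (weighted by aggregate speed and inter-site bandwidth), then across nodes within each cluster.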

5 CLUSTERIX Middleware

5.1 Technologies and Architecture

The middleware developed in the project should allow for:
- managing clusters with a dynamically changing configuration, including temporarily attached clusters;
- submitting, executing and monitoring HPC/HTC applications according to users' preferences;
- efficient management of users and virtual organizations;
- effective management of network resources, with the use of IPv6 protocols;
- integration of services delivered as the outcome of other projects, especially those related to the PIONIER program, e.g., the data warehouse and other computational services;
- respecting local policies of administration and management within independent domains;
- convenient access to resources and applications, using an integrated interface;
- a high level of reliability and security in the heterogeneous environment.

The CLUSTERIX software is developed as Open Source and is based on the Globus Toolkit 2.4 and Web Services, with Globus 2.4 available in the Globus 3.2 distribution. The use of Web Services makes the created software easier to reuse, and allows for interoperability with other Grid systems on the service level. It is important to note that initially the OGSI/OGSA concept [4], implemented in Globus 3, was assumed to be used in CLUSTERIX. However, due to the rapid transition to Globus 4 and the WS-Resource Framework, we had to revise the initial decision and chose Globus 2.4 as the only possibility to build the CLUSTERIX production environment, taking into account the time limitations of the project. The usage of the Open Source approach allows anybody to access the project source code, modify it, and publish the changes. This makes the software more reliable and secure. Open software is easier to integrate with newly developed and existing software, like the GridLab resource management system [6], which is adopted in the CLUSTERIX project. The architecture of the CLUSTERIX middleware is shown in Fig. 3.
In the successive subsections, we concisely describe some key components of this middleware.

5.2 Resource Management System

In CLUSTERIX, we build on the GridLab Resource Management System (GRMS) developed in the GridLab project [6]. The main functionality of GRMS includes: the ability to choose the best resource for the task execution, according to the job description and a chosen mapping algorithm; submitting the GRMS task according to the job description;

Fig. 3. Architecture of the CLUSTERIX middleware

the ability to migrate the GRMS task to a better resource; the ability to cancel the task; providing information about the task status, and other information about tasks, e.g., the name of the host where the task is/was running; and the ability to transfer input and output files. This approach implies the necessity to integrate GRMS with services developed in the CLUSTERIX project, such as the monitoring/information system, the data management system, the checkpointing mechanism, and the management of users' accounts and virtual organizations. The additional functionality of GRMS developed for CLUSTERIX includes: communication with resource management systems in different domains, cooperation with the network resource management system, support for MPICH-G2, and a prediction module. The prediction module is crucial for providing an efficient use of available resources. The basic functionality of this module includes: the ability to predict execution times and resource demands of tasks, using available information about resources and tasks; prediction of the time spent by tasks in queues of local batch systems; and the ability to take into account prediction errors and find resource assignments that are the least sensitive to these errors. The prediction module uses reasoning techniques based on knowledge discovery, statistical approaches, and rough sets.

5.3 Data Management System

Grid applications deal with large volumes of data. Consequently, effective data management solutions are vital for Grids. For the CLUSTERIX project, the Clusterix Data Management System (CDMS) has been developed, based on the analysis of existing implementations and users' requirements [12]. Special attention has been paid to making the system user-friendly and efficient, aiming at the creation of a reliable and secure Data Storage System [9].
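The stage-in / stage-out cycle that the Data Storage System supports (input data fetched from storage elements before execution, output returned afterwards, as described in Section 3) can be modeled in miniature. All names below are hypothetical:

```python
# Sketch of the stage-in / run / stage-out cycle, with storage elements
# and the access node modeled as plain dictionaries mapping file names
# to contents (names are invented for illustration).
def run_job(job, storage, access_node):
    # Stage in: fetch input data from a storage element to the access node.
    access_node[job["input"]] = storage[job["input"]]
    # Execute on computational nodes (modeled here as a pure function).
    result = job["run"](access_node[job["input"]])
    # Stage out: return output data to the storage element.
    storage[job["output"]] = result
    return result

storage = {"mesh.dat": [3, 1, 2]}
job = {"input": "mesh.dat", "output": "mesh.sorted", "run": sorted}
assert run_job(job, storage, {}) == [1, 2, 3]
assert storage["mesh.sorted"] == [1, 2, 3]
```

In CLUSTERIX the transfers themselves go through the data broker over GridFTP; the sketch only shows the ordering of the staging steps around execution.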
Taking into account Grid-specific networking parameters - different bandwidths, current load, and network technologies between geographically distant sites - CDMS tries to optimize data throughput via replication and replica selection techniques. Another key feature to be considered during the implementation of Grid data services is fault tolerance. In CDMS, the modular design and distributed operation model assure the elimination of a single point of failure. In particular, multiple instances of the data broker are running concurrently, and their coherence is provided by a synchronization subsystem. The basic technologies used in the development of CDMS include the GridFTP and GSI components of Globus 2.4, as well as Web Services implemented using the gSOAP plugin from GridLab [6].
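Throughput-driven replica selection can be sketched as follows; the bandwidth/load model is a deliberately simplified stand-in for the measurements CDMS actually uses:

```python
# Sketch of replica selection as throughput optimization (measurements
# invented): estimate the effective transfer rate to each replica from
# the link bandwidth and its current load, and pick the best replica.
def best_replica(replicas: dict[str, tuple[float, float]]) -> str:
    # Each value is (bandwidth in Mb/s, load in [0, 1)); the effective
    # rate is the free share of the link.
    rate = {name: bw * (1.0 - load) for name, (bw, load) in replicas.items()}
    return max(rate, key=rate.get)

reps = {"wroclaw": (1000.0, 0.9),   # fast link, heavily loaded
        "lublin": (1000.0, 0.2),    # fast link, lightly loaded
        "lodz": (100.0, 0.0)}       # slow link, idle
assert best_replica(reps) == "lublin"   # 800 Mb/s effective
```

The same scoring function can drive replica placement in reverse: a new replica is worth creating at a site whose effective rate to likely consumers exceeds that of the existing copies.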

5.4 Network Resource Management System

The Polish Optical Network PIONIER, which is used in CLUSTERIX as the backbone interconnect, is based on DWDM optical technology and the 10 Gigabit Ethernet standard. PIONIER allows creating dedicated VLANs based on the 802.1q standard, as well as setting traffic priorities based on the 802.1p standard. Additionally, the Black Diamond 6808 switches from Extreme Networks, which are installed in the backbone, support a proprietary protocol which allows us to guarantee bandwidth for a given VLAN. Based on the requirements of the CLUSTERIX middleware and pilot applications, it has been decided to establish two dedicated VLANs within the PIONIER network: a computational network with a bandwidth of 1 Gb/s, and a management network with a bandwidth of 100 Mb/s, dedicated to the configuration of the metacluster, measurement purposes, software upgrading, etc. In accordance with the project goals, the IPv6 protocol is extensively deployed in the system infrastructure and middleware. Apart from the advantages mentioned in Section 2, the use of IPv6 offers other benefits; e.g., the Mobile IPv6 protocol allows us to use fixed IPv6 addresses for dynamic clusters, irrespective of the place of their attachment. Note that the IPsec mechanism is used to improve security on the network level, for both the IPv6 and IPv4 protocols. In CLUSTERIX, the SNMP protocol is used for the management of all elements that are critical for network operation, like backbone and coupling switches, access nodes, storage elements, and firewalls. This allows us to build the Network Resource Management System, which contains the following main components: measurement agents; a database containing the results of measurements; a network resource manager (network broker); and a graphical user interface. The results of measurements are then used for traffic management, and in the Clusterix Data Management System.
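A minimal sketch of the measurement database behind such a broker, assuming agents push periodic per-link bandwidth samples (the schema and smoothing window are invented for illustration):

```python
from collections import defaultdict, deque

# Toy measurement database for the network broker: agents append
# periodic bandwidth samples per link, and the broker queries a
# smoothed (mean of the most recent samples) value per link.
class MeasurementDB:
    def __init__(self, window: int = 5):
        self.samples = defaultdict(lambda: deque(maxlen=window))

    def record(self, link: str, mbps: float) -> None:
        self.samples[link].append(mbps)

    def recent_mean(self, link: str) -> float:
        s = self.samples[link]
        return sum(s) / len(s)

db = MeasurementDB(window=3)
for v in (900.0, 950.0, 700.0, 800.0):   # the oldest sample falls out
    db.record("poznan-czestochowa", v)
assert db.recent_mean("poznan-czestochowa") == (950.0 + 700.0 + 800.0) / 3
```

A bounded window keeps the broker's view responsive to current conditions rather than historical averages, which matters when the same figures also feed replica selection in CDMS.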
Providing a required quality of network services is extremely important in order to deliver the performance capacities of the CLUSTERIX infrastructure to end-users' applications. To this aim, the following techniques are deployed: creation of two VLANs in the computational network, with different priorities (normal and high); tagging of IP packets (especially promising for the IPv6 protocol); and differentiated services.

5.5 Management of Users' Accounts and Virtual Organizations

Grids, as geographically distributed, large-scale environments, place strong demands on the management of users' accounts and virtual organizations (VOs) [4].
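One such demand, flexible resource authorization, can be pictured as a chain of policy plugins, each voting to accept, reject, or abstain. This is a toy model of the idea, not the actual plugin interface used in CLUSTERIX:

```python
# Toy plugin-chain authorization: each plugin returns True (accept),
# False (reject), or None (abstain); the first decisive answer wins.
# Plugin names and the chain semantics are illustrative.
def ban_list(user, banned={"mallory"}):
    return False if user in banned else None

def grid_mapfile(user, mapped={"alice", "bob"}):
    return True if user in mapped else None

def authorize(user, plugins):
    for plugin in plugins:
        verdict = plugin(user)
        if verdict is not None:
            return verdict
    return False                     # default: deny

chain = [ban_list, grid_mapfile]
assert authorize("alice", chain) is True
assert authorize("mallory", chain) is False
assert authorize("eve", chain) is False   # no plugin accepted
```

Ordering the chain (ban lists before membership checks) and defaulting to deny are the design choices that let independent sites layer their own policies without modifying each other's plugins.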

Apart from scalability, fault tolerance, and security, the main requirement is flexibility, because management tools must be able to take into account the different roles of: user, administrator of a resource, manager of a virtual organization, and manager of a group of resources. Unfortunately, existing tools do not support a flexible policy of resource authorization and accounting. A new management tool developed in CLUSTERIX features an open architecture based on plugins. This allows for different methods of authorization. The architecture of this tool is highly distributed in order to provide scalability and fault tolerance. Another important feature is the dynamic assignment of accounts, since a pool of accounts is assigned to users when required. The availability of a Site Accounting Information System (SAIS) and a Virtual Organization Information System (VOIS) for every site and VO, respectively, gives the tool the ability to collect and store accounting information across all sites and VOs. The kernel of this tool is the Globus Authorization Module (GAM) - an extension of the Globus gatekeeper. GAM provides different authorization plugins, which implement different authorization policies, such as: accept all users from the grid-mapfile; ban users put on the black list; accept all users from a certain VO; query a remote authorization system; and accept all users with a certificate matching a given template. GAM also collects basic accounting information (time, user, account, etc.), which is stored and processed in SAISs and VOISs.

5.6 User Interface

The main design goal is to create a flexible portal interface that can be easily extended and adapted for use with different applications. We focus on the separation of the visualization part from the application's logic, as well as on the possibility of extending the framework at run-time.
Support of the VRML, X3D, SVG and chart (JPEG, PNG) formats of output presentation is also an important goal, as well as security and fault tolerance. It is necessary to provide for the adaptation of ready end-users' applications, and their easy, seamless installation on multiple hosts. These requirements constrain us to use SSH as the communication channel (see Fig. 4), so that the only way of communication between the portal and CLUSTERIX services is interaction with the SSH session server. The security of the interface is provided by using encrypted communication protocols, and by storing certificates on entry points rather than in portals. A new feature is the use of XML-based application extensions, which we call parsers, and which allow us to describe the rules of interaction between users and applications,

including the input data format, output data parsing, and visualization.

Fig. 4. Architecture of the user interface

Parsers are dynamically loaded Perl components, which are generated based on XML descriptions provided by users. In combination with SSH sessions and pseudo-terminals, which have been successfully used for the implementation of the Web Condor Interface [11], parsers and application-specific managers allow us to gain persistence and interaction possibilities in Grids. The resulting SSH Session Server Framework allows for a fully distributed implementation, which is an efficient way to provide fault-tolerant features. The proposed framework is used together with the GridSphere Portlet Framework [7], an outcome of the GridLab Project. This gives us the possibility to use a variety of built-in features of the GridSphere technology, such as the users' secure space, chart generation, etc. Fig. 5 presents a portal screenshot for a demo application which has been developed [14] for the FEM modeling of heat transfer in castings.

6 Conclusions

This paper presents the concept and implementation of CLUSTERIX, a geographically distributed Linux cluster based on the Polish Optical Network

PIONIER.

Fig. 5. A portal screenshot for the heat transfer demo application

The main objective of the CLUSTERIX project is to develop mechanisms and tools that allow for the deployment of a production Grid environment. The CLUSTERIX backbone consists of dedicated, local Linux clusters with 64-bit machines. It is assumed that so-called dynamic clusters may be attached to the backbone dynamically. For example, non-dedicated clusters and labs may be attached to the backbone during the night or at weekends. As a result, CLUSTERIX is an open and complex structure whose efficient management is not a trivial issue. Among the most important problems not covered in this paper are: security issues, management of cluster software, monitoring of cluster nodes, and checkpointing. For example, the possible attachment of dynamic clusters results in an untrusted environment which is difficult to secure comprehensively. In solving the security issues in CLUSTERIX, the major approach is to integrate existing products and find the configuration which provides the best security level for both types of clusters - backbone and dynamic ones.

Acknowledgements. The CLUSTERIX project has been funded by the Polish Ministry of Science and Information Society Technologies under grant 6T C/ We would also like to thank Intel Corporation for sponsoring the project and for help in building the pilot installation.

References

1. CLUSTERIX Project Home Page,
2. CrossGrid Exploitation Website,
3. DemoGrid Project,
4. Foster, I., Kesselman, C., Nick, J.M., Tuecke, S.: The Physiology of the Grid. In: Grid Computing - Making the Global Infrastructure a Reality, J. Wiley & Sons, 2003
5. Globus Project Home Page,
6. GridLab: A Grid Application Toolkit and Testbed,
7. GridSphere Portal,
8. IPv6: The Next Generation Internet,
9. Karczewski, K., Kuczynski, L., Wyrzykowski, R.: Secure Data Transfer and Replication Mechanisms in Grid Environments. In: Proc. Cracow '03 Grid Workshop, Cracow (2003)
10. Karonis, N., Toonen, B., Foster, I.: MPICH-G2: A Grid-Enabled Implementation of the Message Passing Interface. Journal of Parallel and Distributed Computing (JPDC), Vol. 63, No. 5, May 2003
11. Kuczynski, T., Wyrzykowski, R.: Cluster Monitoring and Management in the WebCI Environment. Lect. Notes in Comp. Sci., Springer-Verlag, 3019 (2004)
12. Kuczynski, L., Karczewski, K., Wyrzykowski, R.: Clusterix Data Management System. In: Proc. Cracow '04 Grid Workshop, Cracow (2004) (in print)
13. Olas, T., Karczewski, K., Tomas, A., Wyrzykowski, R.: FEM Computations on Clusters Using Different Models of Parallel Programming. Lect. Notes in Comp. Sci. (2002)
14. Olas, T., Wyrzykowski, R.: Porting Thermomechanical Applications to CLUSTERIX Environment. In: Proc. Cracow '04 Grid Workshop, Cracow (2004) (in print)
15. PIONIER Home Page,
16. Weglarz, J.: Poznan Networking and Supercomputing Center: 10 Years of Experience in Building IT Infrastructure for e-Science in Poland,
17. The TeraGrid: A Primer,
18. Wyrzykowski, R., Meyer, N., Stroinski, M.: PC-Based LINUX Metaclusters as Key Elements of Grid Infrastructure. In: Proc. Cracow '02 Grid Workshop, Cracow (2002)


More information

Cloud Computing. Lecture 5 Grid Case Studies 2014-2015

Cloud Computing. Lecture 5 Grid Case Studies 2014-2015 Cloud Computing Lecture 5 Grid Case Studies 2014-2015 Up until now Introduction. Definition of Cloud Computing. Grid Computing: Schedulers Globus Toolkit Summary Grid Case Studies: Monitoring: TeraGRID

More information

Scheduling and Resource Management in Computational Mini-Grids

Scheduling and Resource Management in Computational Mini-Grids Scheduling and Resource Management in Computational Mini-Grids July 1, 2002 Project Description The concept of grid computing is becoming a more and more important one in the high performance computing

More information

A Taxonomy and Survey of Grid Resource Planning and Reservation Systems for Grid Enabled Analysis Environment

A Taxonomy and Survey of Grid Resource Planning and Reservation Systems for Grid Enabled Analysis Environment A Taxonomy and Survey of Grid Resource Planning and Reservation Systems for Grid Enabled Analysis Environment Arshad Ali 3, Ashiq Anjum 3, Atif Mehmood 3, Richard McClatchey 2, Ian Willers 2, Julian Bunn

More information

Manjrasoft Market Oriented Cloud Computing Platform

Manjrasoft Market Oriented Cloud Computing Platform Manjrasoft Market Oriented Cloud Computing Platform Innovative Solutions for 3D Rendering Aneka is a market oriented Cloud development and management platform with rapid application development and workload

More information

Web Service Based Data Management for Grid Applications

Web Service Based Data Management for Grid Applications Web Service Based Data Management for Grid Applications T. Boehm Zuse-Institute Berlin (ZIB), Berlin, Germany Abstract Web Services play an important role in providing an interface between end user applications

More information

Performance of the NAS Parallel Benchmarks on Grid Enabled Clusters

Performance of the NAS Parallel Benchmarks on Grid Enabled Clusters Performance of the NAS Parallel Benchmarks on Grid Enabled Clusters Philip J. Sokolowski Dept. of Electrical and Computer Engineering Wayne State University 55 Anthony Wayne Dr., Detroit, MI 4822 phil@wayne.edu

More information

Digital Library for Multimedia Content Management

Digital Library for Multimedia Content Management Digital Library for Multimedia Content Management Cezary Mazurek, Maciej Stroinski, Sebastian Szuber Pozna_ Supercomputing and Networking Centre, ul. Noskowskiego 10, 61-704 Pozna_, POLAND tel. +48 61

More information

1.1.1 Introduction to Cloud Computing

1.1.1 Introduction to Cloud Computing 1 CHAPTER 1 INTRODUCTION 1.1 CLOUD COMPUTING 1.1.1 Introduction to Cloud Computing Computing as a service has seen a phenomenal growth in recent years. The primary motivation for this growth has been the

More information

- An Essential Building Block for Stable and Reliable Compute Clusters

- An Essential Building Block for Stable and Reliable Compute Clusters Ferdinand Geier ParTec Cluster Competence Center GmbH, V. 1.4, March 2005 Cluster Middleware - An Essential Building Block for Stable and Reliable Compute Clusters Contents: Compute Clusters a Real Alternative

More information

Exploiting Remote Memory Operations to Design Efficient Reconfiguration for Shared Data-Centers over InfiniBand

Exploiting Remote Memory Operations to Design Efficient Reconfiguration for Shared Data-Centers over InfiniBand Exploiting Remote Memory Operations to Design Efficient Reconfiguration for Shared Data-Centers over InfiniBand P. Balaji, K. Vaidyanathan, S. Narravula, K. Savitha, H. W. Jin D. K. Panda Network Based

More information

Michał Jankowski Maciej Brzeźniak PSNC

Michał Jankowski Maciej Brzeźniak PSNC National Data Storage - architecture and mechanisms Michał Jankowski Maciej Brzeźniak PSNC Introduction Assumptions Architecture Main components Deployment Use case Agenda Data storage: The problem needs

More information

Grid Computing Vs. Cloud Computing

Grid Computing Vs. Cloud Computing International Journal of Information and Computation Technology. ISSN 0974-2239 Volume 3, Number 6 (2013), pp. 577-582 International Research Publications House http://www. irphouse.com /ijict.htm Grid

More information

Dynamic allocation of servers to jobs in a grid hosting environment

Dynamic allocation of servers to jobs in a grid hosting environment Dynamic allocation of s to in a grid hosting environment C Kubicek, M Fisher, P McKee and R Smith As computational resources become available for use over the Internet, a requirement has emerged to reconfigure

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

STUDY AND SIMULATION OF A DISTRIBUTED REAL-TIME FAULT-TOLERANCE WEB MONITORING SYSTEM

STUDY AND SIMULATION OF A DISTRIBUTED REAL-TIME FAULT-TOLERANCE WEB MONITORING SYSTEM STUDY AND SIMULATION OF A DISTRIBUTED REAL-TIME FAULT-TOLERANCE WEB MONITORING SYSTEM Albert M. K. Cheng, Shaohong Fang Department of Computer Science University of Houston Houston, TX, 77204, USA http://www.cs.uh.edu

More information

IBM Deep Computing Visualization Offering

IBM Deep Computing Visualization Offering P - 271 IBM Deep Computing Visualization Offering Parijat Sharma, Infrastructure Solution Architect, IBM India Pvt Ltd. email: parijatsharma@in.ibm.com Summary Deep Computing Visualization in Oil & Gas

More information

IPv6/IPv4 Automatic Dual Authentication Technique for Campus Network

IPv6/IPv4 Automatic Dual Authentication Technique for Campus Network IPv6/IPv4 Automatic Dual Authentication Technique for Campus Network S. CHITPINITYON, S. SANGUANPONG, K. KOHT-ARSA, W. PITTAYAPITAK, S. ERJONGMANEE AND P. WATANAPONGSE Agenda Introduction Design And Implementation

More information

Chapter 1 - Web Server Management and Cluster Topology

Chapter 1 - Web Server Management and Cluster Topology Objectives At the end of this chapter, participants will be able to understand: Web server management options provided by Network Deployment Clustered Application Servers Cluster creation and management

More information

Poland. networking, digital divide andgridprojects. M. Pzybylski The Poznan Supercomputing and Networking Center, Poznan, Poland

Poland. networking, digital divide andgridprojects. M. Pzybylski The Poznan Supercomputing and Networking Center, Poznan, Poland Poland networking, digital divide andgridprojects M. Pzybylski The Poznan Supercomputing and Networking Center, Poznan, Poland M. Turala The Henryk Niewodniczanski Instytut of Nuclear Physics PAN and ACK

More information

Grid Computing vs Cloud

Grid Computing vs Cloud Chapter 3 Grid Computing vs Cloud Computing 3.1 Grid Computing Grid computing [8, 23, 25] is based on the philosophy of sharing information and power, which gives us access to another type of heterogeneous

More information

Trademark Notice. General Disclaimer

Trademark Notice. General Disclaimer Trademark Notice General Disclaimer Intelligent Management, Centralized Operation & Maintenance Huawei Data Center Network Management Solution A data center is an integrated IT application environment

More information

EMC Virtual Infrastructure for Microsoft SQL Server

EMC Virtual Infrastructure for Microsoft SQL Server Microsoft SQL Server Enabled by EMC Celerra and Microsoft Hyper-V Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information in this publication is accurate

More information

COMP5426 Parallel and Distributed Computing. Distributed Systems: Client/Server and Clusters

COMP5426 Parallel and Distributed Computing. Distributed Systems: Client/Server and Clusters COMP5426 Parallel and Distributed Computing Distributed Systems: Client/Server and Clusters Client/Server Computing Client Client machines are generally single-user workstations providing a user-friendly

More information

Course Syllabus. Fundamentals of Windows Server 2008 Network and Applications Infrastructure. Key Data. Audience. Prerequisites. At Course Completion

Course Syllabus. Fundamentals of Windows Server 2008 Network and Applications Infrastructure. Key Data. Audience. Prerequisites. At Course Completion Key Data Product #: 3380 Course #: 6420A Number of Days: 5 Format: Certification Exams: Instructor-Led None This course syllabus should be used to determine whether the course is appropriate for the students,

More information

PROGRESS Portal Access Whitepaper

PROGRESS Portal Access Whitepaper PROGRESS Portal Access Whitepaper Maciej Bogdanski, Michał Kosiedowski, Cezary Mazurek, Marzena Rabiega, Malgorzata Wolniewicz Poznan Supercomputing and Networking Center April 15, 2004 1 Introduction

More information

Overlapping Data Transfer With Application Execution on Clusters

Overlapping Data Transfer With Application Execution on Clusters Overlapping Data Transfer With Application Execution on Clusters Karen L. Reid and Michael Stumm reid@cs.toronto.edu stumm@eecg.toronto.edu Department of Computer Science Department of Electrical and Computer

More information

Implementing the Application Control Engine Service Module

Implementing the Application Control Engine Service Module Course: Implementing the Application Control Engine Service Module Duration: 4 Day Hands-On Lab & Lecture Course Price: $ 2,995.00 Learning Credits: 30 Hitachi HiPass: 4 Description: Implementing the Application

More information

Cisco Application Networking Manager Version 2.0

Cisco Application Networking Manager Version 2.0 Cisco Application Networking Manager Version 2.0 Cisco Application Networking Manager (ANM) software enables centralized configuration, operations, and monitoring of Cisco data center networking equipment

More information

Grid Scheduling Architectures with Globus GridWay and Sun Grid Engine

Grid Scheduling Architectures with Globus GridWay and Sun Grid Engine Grid Scheduling Architectures with and Sun Grid Engine Sun Grid Engine Workshop 2007 Regensburg, Germany September 11, 2007 Ignacio Martin Llorente Javier Fontán Muiños Distributed Systems Architecture

More information

Development of Software Dispatcher Based. for Heterogeneous. Cluster Based Web Systems

Development of Software Dispatcher Based. for Heterogeneous. Cluster Based Web Systems ISSN: 0974-3308, VO L. 5, NO. 2, DECEMBER 2012 @ SRIMC A 105 Development of Software Dispatcher Based B Load Balancing AlgorithmsA for Heterogeneous Cluster Based Web Systems S Prof. Gautam J. Kamani,

More information

Article on Grid Computing Architecture and Benefits

Article on Grid Computing Architecture and Benefits Article on Grid Computing Architecture and Benefits Ms. K. Devika Rani Dhivya 1, Mrs. C. Sunitha 2 1 Assistant Professor, Dept. of BCA & MSc.SS, Sri Krishna Arts and Science College, CBE, Tamil Nadu, India.

More information

Virtual machine interface. Operating system. Physical machine interface

Virtual machine interface. Operating system. Physical machine interface Software Concepts User applications Operating system Hardware Virtual machine interface Physical machine interface Operating system: Interface between users and hardware Implements a virtual machine that

More information

Collaborative & Integrated Network & Systems Management: Management Using Grid Technologies

Collaborative & Integrated Network & Systems Management: Management Using Grid Technologies 2011 International Conference on Computer Communication and Management Proc.of CSIT vol.5 (2011) (2011) IACSIT Press, Singapore Collaborative & Integrated Network & Systems Management: Management Using

More information

FOXBORO. I/A Series SOFTWARE Product Specifications. I/A Series Intelligent SCADA SCADA Platform PSS 21S-2M1 B3 OVERVIEW

FOXBORO. I/A Series SOFTWARE Product Specifications. I/A Series Intelligent SCADA SCADA Platform PSS 21S-2M1 B3 OVERVIEW I/A Series SOFTWARE Product Specifications Logo I/A Series Intelligent SCADA SCADA Platform PSS 21S-2M1 B3 The I/A Series Intelligent SCADA Platform takes the traditional SCADA Master Station to a new

More information

A Web Services Data Analysis Grid *

A Web Services Data Analysis Grid * A Web Services Data Analysis Grid * William A. Watson III, Ian Bird, Jie Chen, Bryan Hess, Andy Kowalski, Ying Chen Thomas Jefferson National Accelerator Facility 12000 Jefferson Av, Newport News, VA 23606,

More information

Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks. An Oracle White Paper April 2003

Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks. An Oracle White Paper April 2003 Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks An Oracle White Paper April 2003 Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building

More information

Grid Computing: A Ten Years Look Back. María S. Pérez Facultad de Informática Universidad Politécnica de Madrid mperez@fi.upm.es

Grid Computing: A Ten Years Look Back. María S. Pérez Facultad de Informática Universidad Politécnica de Madrid mperez@fi.upm.es Grid Computing: A Ten Years Look Back María S. Pérez Facultad de Informática Universidad Politécnica de Madrid mperez@fi.upm.es Outline Challenges not yet solved in computing The parents of grid Computing

More information

A QoS-aware Method for Web Services Discovery

A QoS-aware Method for Web Services Discovery Journal of Geographic Information System, 2010, 2, 40-44 doi:10.4236/jgis.2010.21008 Published Online January 2010 (http://www.scirp.org/journal/jgis) A QoS-aware Method for Web Services Discovery Bian

More information

Grid-based Distributed Data Mining Systems, Algorithms and Services

Grid-based Distributed Data Mining Systems, Algorithms and Services Grid-based Distributed Data Mining Systems, Algorithms and Services Domenico Talia Abstract Distribution of data and computation allows for solving larger problems and execute applications that are distributed

More information

Grid Scheduling Dictionary of Terms and Keywords

Grid Scheduling Dictionary of Terms and Keywords Grid Scheduling Dictionary Working Group M. Roehrig, Sandia National Laboratories W. Ziegler, Fraunhofer-Institute for Algorithms and Scientific Computing Document: Category: Informational June 2002 Status

More information

Resource Management on Computational Grids

Resource Management on Computational Grids Univeristà Ca Foscari, Venezia http://www.dsi.unive.it Resource Management on Computational Grids Paolo Palmerini Dottorato di ricerca di Informatica (anno I, ciclo II) email: palmeri@dsi.unive.it 1/29

More information

PRACE WP4 Distributed Systems Management. Riccardo Murri, CSCS Swiss National Supercomputing Centre

PRACE WP4 Distributed Systems Management. Riccardo Murri, CSCS Swiss National Supercomputing Centre PRACE WP4 Distributed Systems Management Riccardo Murri, CSCS Swiss National Supercomputing Centre PRACE WP4 WP4 is the Distributed Systems Management activity User administration and accounting Distributed

More information

IP SAN Fundamentals: An Introduction to IP SANs and iscsi

IP SAN Fundamentals: An Introduction to IP SANs and iscsi IP SAN Fundamentals: An Introduction to IP SANs and iscsi Updated April 2007 Sun Microsystems, Inc. 2007 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, CA 95054 USA All rights reserved. This

More information

Network device management solution

Network device management solution iw Management Console Network device management solution iw MANAGEMENT CONSOLE Scalability. Reliability. Real-time communications. Productivity. Network efficiency. You demand it from your ERP systems

More information

TELEMEDICAL PORTAL TELEMEDYCYNA WIELKOPOLSKA

TELEMEDICAL PORTAL TELEMEDYCYNA WIELKOPOLSKA XI Conference "Medical Informatics & Technologies" - 2006 telemedical portal,regional telemedicine, medical teleconsultations Jurek BŁASZCZYŃSKI *, Michał KOSIEDOWSKI ^, Cezary MAZUREK ^, Roman SŁOWIŃSKI

More information

Stream Processing on GPUs Using Distributed Multimedia Middleware

Stream Processing on GPUs Using Distributed Multimedia Middleware Stream Processing on GPUs Using Distributed Multimedia Middleware Michael Repplinger 1,2, and Philipp Slusallek 1,2 1 Computer Graphics Lab, Saarland University, Saarbrücken, Germany 2 German Research

More information

Survey and Taxonomy of Grid Resource Management Systems

Survey and Taxonomy of Grid Resource Management Systems Survey and Taxonomy of Grid Resource Management Systems Chaitanya Kandagatla University of Texas, Austin Abstract The resource management system is the central component of a grid system. This paper describes

More information

LinuxWorld Conference & Expo Server Farms and XML Web Services

LinuxWorld Conference & Expo Server Farms and XML Web Services LinuxWorld Conference & Expo Server Farms and XML Web Services Jorgen Thelin, CapeConnect Chief Architect PJ Murray, Product Manager Cape Clear Software Objectives What aspects must a developer be aware

More information

Fusion Service Schedule Virtual Data Centre ( VDC ) Version FUS-VDC-7.1

Fusion Service Schedule Virtual Data Centre ( VDC ) Version FUS-VDC-7.1 Fusion Service Schedule Virtual Data Centre ( VDC ) Version FUS-VDC-7.1 1 DEFINITIONS AND INTERPRETATIONS 1.1. Words or phrases used with capital letters in this Service Schedule shall have the same meanings

More information

SolarWinds Log & Event Manager

SolarWinds Log & Event Manager Corona Technical Services SolarWinds Log & Event Manager Training Project/Implementation Outline James Kluza 14 Table of Contents Overview... 3 Example Project Schedule... 3 Pre-engagement Checklist...

More information

Middleware support for the Internet of Things

Middleware support for the Internet of Things Middleware support for the Internet of Things Karl Aberer, Manfred Hauswirth, Ali Salehi School of Computer and Communication Sciences Ecole Polytechnique Fédérale de Lausanne (EPFL) CH-1015 Lausanne,

More information

VTrak 15200 SATA RAID Storage System

VTrak 15200 SATA RAID Storage System Page 1 15-Drive Supports over 5 TB of reliable, low-cost, high performance storage 15200 Product Highlights First to deliver a full HW iscsi solution with SATA drives - Lower CPU utilization - Higher data

More information

Introduction. Need for ever-increasing storage scalability. Arista and Panasas provide a unique Cloud Storage solution

Introduction. Need for ever-increasing storage scalability. Arista and Panasas provide a unique Cloud Storage solution Arista 10 Gigabit Ethernet Switch Lab-Tested with Panasas ActiveStor Parallel Storage System Delivers Best Results for High-Performance and Low Latency for Scale-Out Cloud Storage Applications Introduction

More information

KNOWLEDGE GRID An Architecture for Distributed Knowledge Discovery

KNOWLEDGE GRID An Architecture for Distributed Knowledge Discovery KNOWLEDGE GRID An Architecture for Distributed Knowledge Discovery Mario Cannataro 1 and Domenico Talia 2 1 ICAR-CNR 2 DEIS Via P. Bucci, Cubo 41-C University of Calabria 87036 Rende (CS) Via P. Bucci,

More information

DEDICATED MANAGED SERVER PROGRAM

DEDICATED MANAGED SERVER PROGRAM DEDICATED MANAGED SERVER PROGRAM At Dynamic, we understand the broad spectrum of issues that come with purchasing and managing your own hardware and connectivity. They can become costly and labor intensive

More information

Recommended hardware system configurations for ANSYS users

Recommended hardware system configurations for ANSYS users Recommended hardware system configurations for ANSYS users The purpose of this document is to recommend system configurations that will deliver high performance for ANSYS users across the entire range

More information

Oracle SDN Performance Acceleration with Software-Defined Networking

Oracle SDN Performance Acceleration with Software-Defined Networking Oracle SDN Performance Acceleration with Software-Defined Networking Oracle SDN, which delivers software-defined networking, boosts application performance and management flexibility by dynamically connecting

More information

ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy

ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy OVERVIEW The global communication and the continuous growth of services provided through the Internet or local infrastructure require to

More information

Distributed RAID Architectures for Cluster I/O Computing. Kai Hwang

Distributed RAID Architectures for Cluster I/O Computing. Kai Hwang Distributed RAID Architectures for Cluster I/O Computing Kai Hwang Internet and Cluster Computing Lab. University of Southern California 1 Presentation Outline : Scalable Cluster I/O The RAID-x Architecture

More information

An Experience in Accessing Grid Computing Power from Mobile Device with GridLab Mobile Services

An Experience in Accessing Grid Computing Power from Mobile Device with GridLab Mobile Services An Experience in Accessing Grid Computing Power from Mobile Device with GridLab Mobile Services Abstract In this paper review the notion of the use of mobile device in grid computing environment, We describe

More information

A Theory of the Spatial Computational Domain

A Theory of the Spatial Computational Domain A Theory of the Spatial Computational Domain Shaowen Wang 1 and Marc P. Armstrong 2 1 Academic Technologies Research Services and Department of Geography, The University of Iowa Iowa City, IA 52242 Tel:

More information

Client/Server Computing Distributed Processing, Client/Server, and Clusters

Client/Server Computing Distributed Processing, Client/Server, and Clusters Client/Server Computing Distributed Processing, Client/Server, and Clusters Chapter 13 Client machines are generally single-user PCs or workstations that provide a highly userfriendly interface to the

More information

OMU350 Operations Manager 9.x on UNIX/Linux Advanced Administration

OMU350 Operations Manager 9.x on UNIX/Linux Advanced Administration OMU350 Operations Manager 9.x on UNIX/Linux Advanced Administration Instructor-Led Training For versions 9.0, 9.01, & 9.10 OVERVIEW This 5-day instructor-led course focuses on advanced administration topics

More information

PARALLEL & CLUSTER COMPUTING CS 6260 PROFESSOR: ELISE DE DONCKER BY: LINA HUSSEIN

PARALLEL & CLUSTER COMPUTING CS 6260 PROFESSOR: ELISE DE DONCKER BY: LINA HUSSEIN 1 PARALLEL & CLUSTER COMPUTING CS 6260 PROFESSOR: ELISE DE DONCKER BY: LINA HUSSEIN Introduction What is cluster computing? Classification of Cluster Computing Technologies: Beowulf cluster Construction

More information

Operating System for the K computer

Operating System for the K computer Operating System for the K computer Jun Moroo Masahiko Yamada Takeharu Kato For the K computer to achieve the world s highest performance, Fujitsu has worked on the following three performance improvements

More information

Cloud Based Distributed Databases: The Future Ahead

Cloud Based Distributed Databases: The Future Ahead Cloud Based Distributed Databases: The Future Ahead Arpita Mathur Mridul Mathur Pallavi Upadhyay Abstract Fault tolerant systems are necessary to be there for distributed databases for data centers or

More information

Real-Time Analysis of CDN in an Academic Institute: A Simulation Study

Real-Time Analysis of CDN in an Academic Institute: A Simulation Study Journal of Algorithms & Computational Technology Vol. 6 No. 3 483 Real-Time Analysis of CDN in an Academic Institute: A Simulation Study N. Ramachandran * and P. Sivaprakasam + *Indian Institute of Management

More information

High Performance. CAEA elearning Series. Jonathan G. Dudley, Ph.D. 06/09/2015. 2015 CAE Associates

High Performance. CAEA elearning Series. Jonathan G. Dudley, Ph.D. 06/09/2015. 2015 CAE Associates High Performance Computing (HPC) CAEA elearning Series Jonathan G. Dudley, Ph.D. 06/09/2015 2015 CAE Associates Agenda Introduction HPC Background Why HPC SMP vs. DMP Licensing HPC Terminology Types of

More information

Cisco Application Networking for IBM WebSphere

Cisco Application Networking for IBM WebSphere Cisco Application Networking for IBM WebSphere Faster Downloads and Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address

More information

Centralized Systems. A Centralized Computer System. Chapter 18: Database System Architectures

Centralized Systems. A Centralized Computer System. Chapter 18: Database System Architectures Chapter 18: Database System Architectures Centralized Systems! Centralized Systems! Client--Server Systems! Parallel Systems! Distributed Systems! Network Types! Run on a single computer system and do

More information

SERVER CLUSTERING TECHNOLOGY & CONCEPT

SERVER CLUSTERING TECHNOLOGY & CONCEPT SERVER CLUSTERING TECHNOLOGY & CONCEPT M00383937, Computer Network, Middlesex University, E mail: vaibhav.mathur2007@gmail.com Abstract Server Cluster is one of the clustering technologies; it is use for

More information

The FEDERICA Project: creating cloud infrastructures

The FEDERICA Project: creating cloud infrastructures The FEDERICA Project: creating cloud infrastructures Mauro Campanella Consortium GARR, Via dei Tizii 6, 00185 Roma, Italy Mauro.Campanella@garr.it Abstract. FEDERICA is a European project started in January

More information

Software services competence in research and development activities at PSNC. Cezary Mazurek PSNC, Poland

Software services competence in research and development activities at PSNC. Cezary Mazurek PSNC, Poland Software services competence in research and development activities at PSNC Cezary Mazurek PSNC, Poland Workshop on Actions for Better Participation of New Member States to FP7-ICT Timişoara, 18/19-03-2010

More information

UNLOCK YOUR IEC 61850 TESTING EXCELLENCE

UNLOCK YOUR IEC 61850 TESTING EXCELLENCE IMPROVE EFFICIENCY TEST WITH CONFIDENCE OF KNOW-HOW LEARN AND EXPAND YOUR IEC 61850 SKILLS MASTER YOUR NETWORK KNOWLEDGE GENERATE TEST RESULTS UNLOCK YOUR IEC 61850 TESTING EXCELLENCE Connect To & Read

More information

When talking about hosting

When talking about hosting d o s Cloud Hosting - Amazon Web Services Thomas Floracks When talking about hosting for web applications most companies think about renting servers or buying their own servers. The servers and the network

More information