An Optimistic Parallel Simulation Protocol for Cloud Computing Environments


Asad Waqar Malik 1, Alfred J. Park 2, Richard M. Fujimoto 3
1 National University of Science and Technology, Pakistan
2 IBM T.J. Watson Research Center, Yorktown Heights, USA
3 School of Computational Science and Engineering, Georgia Institute of Technology, USA

Abstract

Cloud computing offers the ability to provide parallel and distributed simulation services remotely to users through the Internet. Services hosted within the cloud can incur processing delays due to load sharing among other active services, and these delays can cause optimistic simulation protocols to perform poorly. This article discusses problems such as increased rollbacks and memory usage that can degrade the performance of optimistic parallel simulations. The Time Warp Straggler Message Identification Protocol (TW-SMIP) is described as one approach to addressing this problem. Experimental evidence shows that this mechanism can significantly reduce the frequency of rollbacks and memory consumption relative to a traditional Time Warp system.

1. Introduction

Cloud computing is a paradigm where software is provided as a service across virtualized computing resources available to clients at remote locations. Cloud computing hides resource availability issues, making this infrastructure appealing to users with varying computational requirements, from storage applications to compute-intensive tasks. Large-scale parallel simulations often require compute time on high performance computing machines and clusters. Access to such resources may be problematic because such facilities can have large acquisition costs and ongoing management expenses. Cloud computing offers the potential to make parallel simulation much more accessible to a larger portion of the modeling and simulation community by eliminating or reducing such costs and risks. In a companion paper (Fujimoto, Malik et al. 2010) we describe the potential benefits and challenges of executing parallel and distributed simulations in cloud computing environments. Here, we focus on one class of parallel simulations: those using optimistic synchronization mechanisms.

Parallel discrete event simulation (PDES) refers to the execution of a discrete event simulation program across multiple processors. Typically this is done to scale simulations to larger configurations, to increase the detail and fidelity of the model, and/or to reduce execution time (Fujimoto 2000). PDES has been applied to a variety of applications such as modeling large-scale telecommunication networks (Fujimoto, Perumalla et al. 2003), manufacturing (Lendermann, Low et al. 2005), and transportation systems (Perumalla 2006), to mention a few. A PDES program consists of a collection of logical processes (LPs) that communicate by exchanging time stamped messages or events.

A fundamental problem in PDES concerns the synchronization of the parallel simulation program. Each LP must process incoming messages (events) in time stamp order. This is necessary to ensure that events in the simulated future do not affect events in the past. However, if an LP has received an event with, say, time stamp 10, how can it be sure no event will later arrive from another LP with a time stamp smaller than 10? This issue is referred to as the synchronization problem.
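The constraint can be made concrete with a short sketch. The following hypothetical Python fragment shows an LP that processes its pending events in timestamp order; the class and method names are illustrative and are not taken from any particular PDES system.

```python
import heapq
import itertools

class LogicalProcess:
    """Minimal sketch of an LP obeying the timestamp-order rule."""

    def __init__(self):
        self.pending = []                # min-heap of (timestamp, seq, event)
        self.seq = itertools.count()     # tie-breaker for equal timestamps
        self.local_time = 0              # timestamp of the last processed event

    def receive(self, timestamp, event):
        if timestamp < self.local_time:
            # The synchronization problem: this message is "in the past"
            # relative to events already processed. A conservative protocol
            # prevents this case; Time Warp instead recovers via rollback.
            raise RuntimeError(f"late event at t={timestamp} < {self.local_time}")
        heapq.heappush(self.pending, (timestamp, next(self.seq), event))

    def process_next(self):
        timestamp, _, event = heapq.heappop(self.pending)
        self.local_time = timestamp
        return timestamp, event          # the handler for 'event' runs here
```

Time Warp, introduced next, replaces the error branch above with a rollback.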
Time Warp (Jefferson 1985) is a well-known approach to addressing the synchronization problem that uses rollbacks. Each LP is allowed to process whatever events it has received. If it receives a straggler message, i.e., a new event with a time stamp smaller than that of events it has already processed, it must undo or roll back the computations for those events and re-execute them in the proper (time stamp) sequence. If the rolled back computation sent one or more messages to other LPs, the rollback must unsend these messages; a mechanism called anti-messages is used to cancel them. Rollback-based mechanisms, more generally referred to as optimistic synchronization, are described in greater detail in (Fujimoto 2000).

Execution of traditional optimistic PDES systems in the presence of external interference from other user computations can lead to an excessive number of rollbacks, as illustrated by the work described in (Carothers and Fujimoto 2000). This is because computations from other users will slow the progress in simulation time of some LPs relative to others that are running on more lightly loaded processors, resulting in more straggler messages and longer rollbacks than would otherwise occur. Additionally, cloud computing environments may exhibit longer communication delays than tightly-coupled high performance computing platforms, further increasing the likelihood of straggler messages. To address these issues, we describe the TW-SMIP protocol, which dynamically adjusts the execution of each LP based on local parameters and straggler messages. The protocol avoids barrier synchronizations, and instead dynamically limits forward execution of LPs to reduce the amount of erroneous computation and the generation of incorrect messages.

Time Warp consists of two distinct components: a local control mechanism and a global control mechanism. Local control (i.e., state management, rollback recovery, and anti-messages) is implemented within each processor, independent of the other processors. The global control mechanism is used to commit operations such as I/O that cannot be rolled back and to reclaim memory resources by computing a Global Virtual Time (GVT) value. GVT is the minimum simulation time among all unprocessed or partially processed messages and anti-messages in the system.
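The local control mechanism can be sketched as follows. This is a minimal, hypothetical Python illustration of generic Time Warp behavior (state saving, rollback, anti-messages), not code from TW-SMIP; the Message and TimeWarpLP names, the send callback, and the dict-based state are all assumptions of the sketch.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Message:
    timestamp: int
    payload: str = field(compare=False, default="")
    anti: bool = field(compare=False, default=False)

class TimeWarpLP:
    """Sketch of Time Warp local control: optimistic event processing with
    state saving, rollback, and anti-messages."""

    def __init__(self, send):
        self.send = send        # callback delivering a Message to another LP
        self.pending = []       # min-heap of unprocessed input messages
        self.processed = []     # processed messages, in timestamp order
        self.snapshots = []     # (timestamp, state) saved before each event
        self.sent_log = []      # (send_time, dest, message), for un-sending
        self.state = {}
        self.local_time = 0

    def receive(self, msg):
        if msg.timestamp < self.local_time:   # straggler: undo optimistic work
            self.rollback(msg.timestamp)
        heapq.heappush(self.pending, msg)     # (anti-message annihilation omitted)

    def process_next(self):
        msg = heapq.heappop(self.pending)
        self.snapshots.append((msg.timestamp, dict(self.state)))  # state saving
        self.local_time = msg.timestamp
        self.processed.append(msg)
        # ... the event handler runs here; any output goes through self.emit() ...

    def emit(self, dest, msg):
        self.sent_log.append((self.local_time, dest, msg))
        self.send(dest, msg)

    def rollback(self, to_time):
        # Restore the state saved before the earliest rolled-back event.
        while self.snapshots and self.snapshots[-1][0] >= to_time:
            _, self.state = self.snapshots.pop()
        # Re-enqueue rolled-back inputs so they re-execute in timestamp order.
        while self.processed and self.processed[-1].timestamp >= to_time:
            heapq.heappush(self.pending, self.processed.pop())
        # "Unsend" outputs of rolled-back events by issuing anti-messages.
        while self.sent_log and self.sent_log[-1][0] >= to_time:
            _, dest, msg = self.sent_log.pop()
            self.send(dest, Message(msg.timestamp, msg.payload, anti=True))
        self.local_time = to_time
```

A real implementation additionally annihilates matching message/anti-message pairs on receipt, and reclaims the snapshot, processed, and sent logs once GVT passes them, as discussed below.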
2. Optimistic Execution in the Cloud

Challenges that arise in executing Time Warp programs under a cloud computing architecture include:

1. Effective utilization of resources
2. Load distribution
3. Efficient execution despite network traffic and communication delays
4. Fault tolerance
5. Process synchronization

The techniques to address these challenges must be provided automatically and transparently to application programs within the cloud. They present certain challenges concerning the optimistic execution paradigm used in Time Warp. Traditional approaches to PDES, and most work using Time Warp to date, assume a fixed set of dedicated computing resources and typically do not address fault tolerance concerns. These assumptions are too restrictive for cloud environments. We touch upon each of these issues below.

In a cloud computing environment resources are shared among multiple users. The number and nature of the workloads presented by these users can vary over time. New resources may become available during the execution of a long-running Time Warp program as existing jobs complete, or existing resources may become more heavily utilized as new jobs are initiated on behalf of this or other clients. Ideally, the Time Warp program should adapt as these changes occur to make the most effective use of the resources made available for its execution. This may entail distributing the execution over additional processors, or reducing the number of processors, during the execution of the Time Warp program. This dynamic environment contrasts with the largely static environment typically used for Time Warp, where a set of processors is dedicated to the execution and the Time Warp program is restricted to using only those processors until execution completes.

Unlike traditional parallel computing applications, where a poorly balanced system results in idle processors while other processors are overburdened with computation, a poorly balanced Time Warp program may not result in idle processors. This is because processors without sufficient workload may be optimistically performing computations that are later rolled back. Load distribution must therefore consider rolled back computation in assessing the amount of workload placed on a processor (Carothers and Fujimoto 2000).

Network traffic and communication delays are significant in current implementations of cloud computing infrastructures. Delayed messages may increase the number of straggler messages, i.e., messages that arrive late and result in a rollback. These effects may be alleviated by considering communication delays and the likelihood of increased rollbacks when determining the most appropriate mapping of Time Warp LPs to processors. Further, large communication delays may affect the algorithm used to compute GVT, so they should be considered in implementing the global control mechanism.
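Reusing the names from the earlier sketch, a frozen-snapshot version of the GVT computation and the memory reclamation it enables might look like the following. This is a sketch under simplifying assumptions; production GVT algorithms compute the same minimum asynchronously, without pausing the LPs, which is precisely where large communication delays complicate matters.

```python
def compute_gvt(lps, in_transit):
    """GVT over a frozen global snapshot: the minimum timestamp among all
    unprocessed events at the LPs and all messages (including anti-messages)
    sent but not yet received. 'lps' are TimeWarpLP instances as sketched above."""
    candidates = [lp.pending[0].timestamp for lp in lps if lp.pending]  # heap roots
    candidates += [m.timestamp for m in in_transit]
    return min(candidates, default=float("inf"))

def fossil_collect(lp, gvt):
    """Reclaim history older than GVT: no rollback can ever reach a virtual
    time before GVT, so earlier snapshots and logs are no longer needed.
    (A practical implementation keeps the last snapshot at or before GVT.)"""
    lp.snapshots = [s for s in lp.snapshots if s[0] >= gvt]
    lp.processed = [m for m in lp.processed if m.timestamp >= gvt]
    lp.sent_log = [(t, d, m) for (t, d, m) in lp.sent_log if t >= gvt]
```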
The Time Warp program must be able to tolerate failures in the underlying computing infrastructure. It should be able to run to completion despite processor and storage failures or network outages. Redundant execution of LPs must be managed in a way that ensures correct results are obtained without an excessive amount of wasted computation, especially in the context of Time Warp's optimistic style of execution, which may result in replicated versions of LPs executing entirely different computations.

Finally, as mentioned earlier, synchronization is a fundamental problem that must be addressed in order to achieve efficient execution of Time Warp programs in cloud computing environments. Execution of optimistic simulations across cloud computing architectures introduces new problems in addition to the straggler message and rollback issues that arise in traditional Time Warp frameworks. Under a cloud computing architecture, machines may be servicing other jobs and requests concurrently with the optimistic simulation. This leads to nonuniform and asymmetric processing conditions that can degrade the performance of Time Warp programs. Here, we focus on this synchronization problem and on performance issues when deploying Time Warp programs in cloud computing environments.

While there is little work to date concerning synchronization of Time Warp programs in cloud computing environments, synchronization on conventional parallel and distributed computing platforms is a mature area of research. Several synchronization techniques were developed that employ optimism control, limiting the forward execution of LPs to improve performance; several such techniques are discussed in (Fujimoto 2000). The mechanism described in (Madisetti and Hardaker 1992) is perhaps most closely related to the approach described here: special synchronization messages are used to minimize the cascading rollback effect. However, these mechanisms do not address concerns particular to cloud environments, especially the need to execute over nondedicated computing resources.

3. A Cloud Architecture for Time Warp

A cloud computing infrastructure offers numerous benefits, such as reconfigurable dynamic resources, while unifying and simplifying access to resources without burdening end-users with the costs and complexities associated with acquiring and managing the underlying hardware and software layers. This computing paradigm is particularly appealing for parallel and distributed simulations, as virtualized resources can be configured to meet the demands of the simulation, which may vary widely from processor-bound executions to those that are memory-bound. Traditional monolithic PDES simulators designed for static, tightly-coupled cluster systems are not well suited to a cloud computing environment. Because resources are virtualized in a cloud environment, direct and full control of the underlying physical resources is not feasible. Unpredictable processing and additional delays can adversely affect the performance of a Time Warp system, which is sensitive to an execution environment that is not fully dedicated to the simulation.

Figure 1. TW-SMIP architecture stack (user interface, event initialization, simulation processing, storage (state saving), event distribution, and network modules)

A Time Warp system that is integrated into the cloud computing platform as a service, and that is aware of its environment, can compensate for certain disadvantages of the infrastructure, such as load sharing of physical resources between unrelated processes. Efficient execution of optimistic PDES application codes on cloud computing will require new software infrastructure and algorithms that are aware of the underlying cloud infrastructure. The TW-SMIP architecture is shown in Figure 1. The communication module is responsible for handling communication among LPs, and is implemented over the underlying network module that provides interprocessor communication. The current implementation uses MPI as the network module. The event distribution and simulation processing modules are the main components responsible for event management; these include the logic for processing events, handling rollbacks, and sending and receiving anti-messages. The storage module implements state saving functions. The event initialization module provides mechanisms to begin the execution by providing input events to LPs based on simulation parameters. The user interface defines an application program interface for LPs.
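As a rough illustration of how these modules might fit together, the following Python skeleton mirrors the Figure 1 stack. All class and method names here are hypothetical, chosen for this sketch rather than taken from the TW-SMIP implementation.

```python
class NetworkModule:
    """Interprocessor communication; TW-SMIP implements this layer over MPI."""
    def send(self, processor, data): ...
    def receive(self): ...

class StorageModule:
    """State saving, so that a rollback can restore an earlier LP state."""
    def save_state(self, lp_id, timestamp, state): ...
    def restore_state(self, lp_id, timestamp): ...

class EventDistributionModule:
    """Routes events and anti-messages between local and remote LPs."""
    def __init__(self, network: NetworkModule):
        self.network = network
    def deliver(self, destination_lp, message): ...

class SimulationProcessingModule:
    """Processes events, handles rollbacks, sends and receives anti-messages."""
    def __init__(self, storage: StorageModule, distribution: EventDistributionModule):
        self.storage = storage
        self.distribution = distribution
    def process(self, lp, message): ...

class EventInitializationModule:
    """Seeds LPs with initial input events derived from simulation parameters."""
    def initialize(self, lps, parameters): ...
```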

In order to reduce communication overheads, especially in cloud environments where such overheads can be significant, it is often advantageous to map multiple LPs to an individual processor (or virtual machine). An algorithm is required to map LPs to processors. This algorithm must balance communication overheads against effective load distribution, while maintaining sufficient concurrency relative to the number of processors that may be allocated to the execution. Though this is an important issue, it is beyond the scope of the work presented here and is not addressed further. The mapping of LPs to processors used in the experiments described later was derived manually.

4. The TW-SMIP Protocol

TW-SMIP is an optimistic synchronization protocol intended to address concerns about interference and communication delays. Periodic status messages, termed heartbeat (HB) messages, are distributed to the LPs residing on a processor to provide information concerning LPs residing on other processors that may send messages. These HB messages are superimposed over a standard Time Warp mechanism. HB messages include information concerning sent messages for straggler detection. They are given higher priority than other messages, and are not subject to message bundling, in order to minimize their latency. The TW-SMIP protocol is based on straggler message identification to avoid frequent rollbacks due to the asymmetric and uneven processing loads that can be expected to arise. TW-SMIP performs boundary-based synchronization of LPs running on distributed nodes in the cloud architecture. Here we assume the use of TCP/IP, ensuring the reliable delivery of messages, and that multiple LPs can be mapped to a single processor. The requirement of reliable message delivery is necessary in Time Warp to ensure repeatability and to guarantee that the parallel execution produces exactly the same results as a sequential execution of the simulation. This is not a severe requirement for clouds implemented on localized computing clusters, nor for geographically distributed implementations where the volume of communication does not necessitate best effort communication.

TW-SMIP is designed to reduce communication overhead and limit rolled back computation. Generated HB messages are only sent to processors with which communication has occurred since the last computed GVT value. This approach can significantly reduce the number of HB messages generated during the simulation if not all processors directly communicate with each other. The TW-SMIP protocol is useful for large distributed simulations where the system is prone to network congestion, no specialized broadcast capabilities are available, and/or the simulations span multiple LANs, because null HB messages are not used. The principle used in this approach is to send HB messages only where communication has occurred. After a fixed wall clock time, each processor enters the HB phase and generates HB messages for the processors with which it communicates. LPs continue processing future events while receiving HB messages. An LP stops processing events when it discovers a straggler message through an HB message; it must roll back to the timestamp of the straggler message. At the same time, it stops other LPs from performing false computation by generating anti-messages. The HB-message-based scheme performs boundary-based synchronization only for those LPs that have straggler messages; it does not prevent other LPs from processing future events.

TW-SMIP executes in a manner similar to a traditional Time Warp program, with the addition of the HB messages. Specifically, after a fixed interval of time, each processor enters a straggler message identification phase and sends HB messages to all other processors with which it communicates. HB messages constrain the execution of LPs that may have advanced too far into the future. LPs can generate HB messages independently of other LPs in the simulation. HB messages consist of two array fields: timestamp (TS) and message identification number (MID). Arrays are used to hold information concerning multiple messages. For example, a source LP fills these two fields with the timestamps and message identification numbers of the messages it generates for another LP. During the simulation, each LP saves the TS and MID values of each message it sends to or receives from other LPs. Upon receiving an HB message, each LP compares the received information with the locally saved information. Thus, each LP has two lists: one maintained by the LP that logs messages as they arrive, and a second created upon receipt of an HB message. If the lists are not identical, then one or more straggler messages exist in the system. This immediately interrupts event processing, and the LP rolls back the simulation to the point where a straggler message is expected. If the timestamp of the straggler message is greater than the local time of the destination node, the LP keeps processing events until the time of the straggler message is reached, and then pauses. An HB message may itself be delayed; under these circumstances, the receiving LP simply ignores the HB message if all the messages identified in it have already been received. Additionally, the corresponding entries in the receive list are removed.
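A minimal sketch of this comparison might look as follows. HeartbeatMessage and check_heartbeat are hypothetical names, and MID-based set matching is one plausible reading of the description above, not the actual TW-SMIP data layout.

```python
from dataclasses import dataclass

@dataclass
class HeartbeatMessage:
    """Sketch of an HB message: parallel arrays of timestamps (TS) and message
    identification numbers (MID) for messages sent toward the receiving LP."""
    ts: list[int]
    mid: list[int]

def check_heartbeat(hb: HeartbeatMessage, received_mids: set[int]):
    """Compare the sender's sent-list against the local receive log. Returns
    the smallest timestamp announced in the HB but not yet received (a
    straggler still in transit), or None if the HB carries nothing new."""
    missing = [t for t, m in zip(hb.ts, hb.mid) if m not in received_mids]
    return min(missing) if missing else None   # None: delayed HB, ignore it

# Usage at the receiving LP (names hypothetical):
# ts = check_heartbeat(hb, {m.mid for m in receive_list})
# if ts is not None:
#     if ts < lp.local_time:
#         lp.rollback(ts)                # straggler is in the past: roll back
#     else:
#         run_until_and_pause(lp, ts)    # process up to ts, then wait for it
```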

Our proposed protocol utilizes this heartbeat mechanism to control optimistic execution across the cloud computing infrastructure. Specifically, straggler message identification is used to reduce rollbacks that would otherwise result in wasted computation. This reduces memory consumption and paces LPs more uniformly across the simulation in the presence of asymmetric processing loads. Further details of the protocol are described in (Malik, Park et al. 2009).

5. Performance Study

The following empirical study examines the behavior of the TW-SMIP protocol under different asymmetric and symmetric processor loads. Our protocol is compared with a traditional Time Warp mechanism to quantify the improvement offered by straggler message identification and to validate its utility. The experiments were performed on dual-core 3.2 GHz Intel Xeon processors with 6 GB of memory per node. GNU/Red Hat Linux with a 64-bit kernel was installed on each machine. Nodes were interconnected via Fast Ethernet. Twelve of these nodes were used in the following tests.

To analyze the performance of the TW-SMIP implementation, the benchmark model described in (Madisetti, Hardaker et al. 1993) was used. This benchmark was designed to capture the computational characteristics of simulations such as those used to model load sharing in electrical power grids. In this simulation program, each LP acts as a source and generates two types of messages: self and propagating. Self-messages are those sent by a source LP to itself with a defined timestamp increment. Propagating messages are sent to another LP in the network. Messages are generated with a timestamp T = LocalTime + L, where L is the lookahead. Applications such as electrical power grid simulations exhibit such behavior, where load sharing requests are distributed among power stations; if a request cannot be processed locally, it is propagated to a neighboring node in the graph. As discussed in (Madisetti, Hardaker et al. 1993), other network applications such as air traffic simulations exhibit similar behaviors. This application provides a challenging test case for the TW-SMIP protocol because the self-messages can result in some LPs advancing far ahead of others, only to be rolled back by subsequent straggler messages. For the experiments described here, upon processing an event an LP sends a message to another LP with probability 0.5; otherwise it sends a message to itself. If the message is sent to another LP, it randomly selects a neighboring LP based on a predefined network topology. The synthetic topology used in these experiments is a two-dimensional grid where each node has N, S, E, W, NE, NW, SE, and SW neighbors. Here, 1000 LPs are mapped to a single processor.
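The benchmark's message-generation rule can be sketched as follows. The wrap-around grid and the function names are assumptions of this illustration, not details given in (Madisetti, Hardaker et al. 1993).

```python
import random

def neighbors(x, y, width, height):
    """Eight-neighbor (N, S, E, W, NE, NW, SE, SW) grid topology; this sketch
    assumes wrap-around at the grid edges."""
    return [((x + dx) % width, (y + dy) % height)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def next_message(lp, local_time, lookahead, width, height):
    """On processing an event: with probability 0.5 send to a random neighbor
    (a propagating message), otherwise to the LP itself (a self-message).
    Either way the timestamp is T = LocalTime + Lookahead."""
    timestamp = local_time + lookahead
    if random.random() < 0.5:
        dest = random.choice(neighbors(lp[0], lp[1], width, height))
    else:
        dest = lp
    return dest, timestamp
```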
To analyze TW-SMIP under asymmetric conditions, a series of experiments was performed with varying HB period and workload. The HB period is defined as the time between successive HB messages generated by each LP. Asymmetric test conditions are achieved by varying the background workload across the pool of machines used to gather data, so that processors may be lightly loaded or heavily loaded. The background jobs are generated using a tool called Stress, a workload generator for POSIX systems (Stress Library) that allows a configurable amount of CPU, memory, I/O, and disk stress to be placed on the system. Scenarios termed lightly loaded run a background load of two CPU-bound processes, one I/O-bound process, and one memory allocator process; heavily loaded scenarios run four CPU-bound processes, two I/O-bound processes, and one memory allocator process. The background workload is generated on each node.

Figure 2 shows representative results. It indicates the number of rolled back events as well as the total number of events that were processed for the lightly loaded scenario under different HB periods. The data point at an infinite HB period indicates the number of rolled back events when no HB messages are used; this corresponds to the performance of a Time Warp system without TW-SMIP. The number of committed events for the different runs remained constant at approximately 2.5 million events. The event rate indicates the number of committed events per unit time, and varies between 0.5 and 1.0 million events per second.

Figure 2. TW-SMIP execution scenario

Figure 3. Efficiency comparison of test cases

As Figure 2 shows, the TW-SMIP protocol significantly reduces the number of rolled back events compared to a conventional Time Warp system. The number of rolled back events increases if the HB period is set to either too high or too low a value. When the HB period is too large, the protocol is not effective in limiting the optimistic execution of LPs, resulting in an increased number of rolled back events. As expected, the traditional Time Warp system with no HB messages (an HB period of infinity) yields a large number of rolled back events. When the HB messages are too frequent, however, processing of the HB messages themselves becomes a bottleneck that amplifies any imbalances in the parallel simulation execution, again resulting in a large number of rolled back events. The fact that the number of rolled back events remains low over a relatively broad range, from 4 to 100 milliseconds in this test case, suggests that it may not be necessary to fine tune the HB period in order to achieve the benefits of the TW-SMIP protocol.

The efficiency of the simulation runs for three different scenarios with different background workloads is shown in Figure 3. Efficiency is defined as the number of committed events divided by the total number of events that are processed, and gives an indication of the fraction of time the system spends processing events that are eventually committed. These data verify that TW-SMIP offers the greatest benefit at moderate HB periods, ranging from a few milliseconds to 100 milliseconds. Not surprisingly, the nonuniformly distributed loads yield more rollbacks and reduced efficiency.
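For concreteness, efficiency can be computed directly from the committed and processed event counts; the 91% figure used in the worked example below is taken from the comparison reported in the next section.

```python
def efficiency(committed, processed):
    """Fraction of processed events that are eventually committed,
    i.e., never rolled back."""
    return committed / processed

# With roughly 2.5 million committed events (as in the runs above), a 91%
# efficiency implies about 2.5e6 / 0.91 ~= 2.75 million processed events,
# i.e., roughly 0.25 million events rolled back and re-executed.
print(f"{efficiency(2.5e6, 2.5e6 / 0.91):.0%}")   # -> 91%
```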

Figure 4. TW-SMIP efficiency vs. traditional Time Warp synchronization

A comparison between TW-SMIP and a traditional Time Warp synchronization mechanism is shown in Figure 4. A mix of uniform and non-uniform, lightly and heavily loaded conditions was used for this study. For each test case, the HB period that yielded the best performance in the prior tests was used: HB periods of 0.003, 0.07 and were used for the lightly loaded uniform, lightly loaded non-uniform, and highly loaded non-uniform scenarios, respectively. In the lightly loaded uniform test case, TW-SMIP provides significantly improved efficiency over the traditional Time Warp approach. Under the non-uniform test cases, the observed data also show significant performance improvements in both the lightly loaded and heavily loaded scenarios. For example, TW-SMIP exhibits an efficiency of 91% in the lightly loaded uniform test case, compared to 76% for the traditional Time Warp implementation.

An analysis of TW-SMIP was also performed for open queuing simulations on 12 dual-core nodes. These test scenarios were run under different background workloads, and the results were qualitatively similar to the prior experiments. They demonstrated that the event rate decreases as HB messages become less frequent, due to an increased number of rollbacks. Heavily loaded systems with less frequent rollbacks better utilize their resources. As expected, less frequent HB messages failed to reduce the number of rollbacks and produced additional overhead on heavily loaded systems. This series of experiments demonstrates that the TW-SMIP protocol achieves better utilization of resources than a traditional Time Warp implementation under a variety of external workloads. A traditional Time Warp system generates frequent rollbacks in a resource sharing environment such as that of a cloud infrastructure; TW-SMIP overcomes this problem by using heartbeat messages as a mechanism to detect straggler messages.

6. Concluding Remarks

Cloud computing offers the promise of providing an execution platform without exposing the complicated details of PDES execution to users. However, it is well known that optimistic PDES programs under traditional Time Warp frameworks can perform poorly where resources are shared among many users, leading to slower execution due to asymmetric and uneven processing. In such an environment, running optimistic simulations without any optimism control can lead to lower execution efficiency. The TW-SMIP protocol is a first step in addressing the asymmetric background loads that are inherent in cloud computing environments. The protocol defines dynamic synchronization points for individual LPs based on straggler messages. Handling these straggler messages can improve the efficiency of the system, which in turn leads to improved utilization of resources by lessening the amount of rolled back computation.

Much additional research is required before the potential of cloud computing for optimistic parallel simulations can be fully realized. Perhaps foremost, experience in executing optimistic parallel simulations in contemporary cloud environments is lacking. Development frameworks and tools are needed to enable implementation of parallel simulation application codes on cloud computing architectures.

Acknowledgement

Funding for this research was provided in part by NSF Grant ATM.

References

Carothers, C. D. and R. M. Fujimoto (2000). "Efficient Execution of Time Warp Programs on Heterogeneous, NOW Platforms." IEEE Trans. Parallel Distrib. Syst. 11(3).
Fujimoto, R. (2000). Parallel and Distributed Simulation Systems. Wiley Interscience.
Fujimoto, R. M., A. W. Malik, et al. (2010). "Parallel and Distributed Simulation in the Cloud." Simulation Magazine, Society for Modeling and Simulation, Intl., 1(3).
Fujimoto, R. M., K. S. Perumalla, et al. (2003). "Large-Scale Network Simulation -- How Big? How Fast?" Modeling, Analysis and Simulation of Computer and Telecommunication Systems.
Jefferson, D. (1985). "Virtual Time." ACM Transactions on Programming Languages and Systems 7(3).
Lendermann, P., M. Y. H. Low, et al. (2005). "An Integrated and Adaptive Decision-Support Framework for High-Tech Manufacturing and Service Networks." Proceedings of the 2005 Winter Simulation Conference.
Madisetti, V. and D. A. Hardaker (1992). "Synchronization Mechanisms for Distributed Event-Driven Computation." ACM Transactions on Modeling and Computer Simulation 2.
Madisetti, V. K., D. A. Hardaker, et al. (1993). "The MIMDIX Operating System for Parallel Simulation and Supercomputing." Journal of Parallel and Distributed Computing 18(4).
Malik, A., A. Park, et al. (2009). "Optimistic Synchronization of Parallel Simulations in Cloud Computing Environments." IEEE International Conference on Cloud Computing.
Perumalla, K. S. (2006). "A Systems Approach to Scalable Transportation Network Modeling." Winter Simulation Conference, Monterey, CA, IEEE.
Stress Library.

Asad Waqar Malik is a PhD candidate at the National University of Science and Technology (NUST), Pakistan. He received his MS in Software Engineering and his Bachelor of Computer Science degrees from NUST and Hamdard University, respectively. He has been working in the distributed simulation field and also worked as an international scholar at the Georgia Institute of Technology. He has five international conference publications. His research interests include real-time decision support systems, distributed simulation, and C4I systems.

Alfred Park is a postdoctoral research scientist at the IBM T.J. Watson Research Center in Yorktown Heights, New York. He received his BS, MS, and PhD in Computer Science from the Georgia Institute of Technology in 2002, 2004, and 2009, respectively. His interests are in large scale stream processing systems, high performance computing, metacomputing, and parallel and distributed simulation.

Richard Fujimoto is a Regents Professor and Chair of the School of Computational Science and Engineering at the Georgia Institute of Technology. He received his M.S. and Ph.D. degrees from the University of California, Berkeley in 1980 and 1983, respectively. He has published over 200 articles on parallel and distributed simulation. Among his past activities, he led the definition of the time management services for the DoD High Level Architecture (HLA).


More information

AN EFFICIENT LOAD BALANCING ALGORITHM FOR A DISTRIBUTED COMPUTER SYSTEM. Dr. T.Ravichandran, B.E (ECE), M.E(CSE), Ph.D., MISTE.,

AN EFFICIENT LOAD BALANCING ALGORITHM FOR A DISTRIBUTED COMPUTER SYSTEM. Dr. T.Ravichandran, B.E (ECE), M.E(CSE), Ph.D., MISTE., AN EFFICIENT LOAD BALANCING ALGORITHM FOR A DISTRIBUTED COMPUTER SYSTEM K.Kungumaraj, M.Sc., B.L.I.S., M.Phil., Research Scholar, Principal, Karpagam University, Hindusthan Institute of Technology, Coimbatore

More information

Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7

Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7 Introduction 1 Performance on Hosted Server 1 Figure 1: Real World Performance 1 Benchmarks 2 System configuration used for benchmarks 2 Figure 2a: New tickets per minute on E5440 processors 3 Figure 2b:

More information

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide An Oracle White Paper July 2011 1 Disclaimer The following is intended to outline our general product direction.

More information

PERFORMANCE ANALYSIS OF KERNEL-BASED VIRTUAL MACHINE

PERFORMANCE ANALYSIS OF KERNEL-BASED VIRTUAL MACHINE PERFORMANCE ANALYSIS OF KERNEL-BASED VIRTUAL MACHINE Sudha M 1, Harish G M 2, Nandan A 3, Usha J 4 1 Department of MCA, R V College of Engineering, Bangalore : 560059, India sudha.mooki@gmail.com 2 Department

More information

Upgrading a Telecom Billing System with Intel Xeon Processors

Upgrading a Telecom Billing System with Intel Xeon Processors WHITE PAPER Xeon Processors Billing System Migration Upgrading a Telecom Billing System with Xeon Processors Migrating from a legacy RISC platform to a server platform powered by Xeon processors has helped

More information

IBM Software Group. Lotus Domino 6.5 Server Enablement

IBM Software Group. Lotus Domino 6.5 Server Enablement IBM Software Group Lotus Domino 6.5 Server Enablement Agenda Delivery Strategy Themes Domino 6.5 Server Domino 6.0 SmartUpgrade Questions IBM Lotus Notes/Domino Delivery Strategy 6.0.x MRs every 4 months

More information

Scheduling and Resource Management in Computational Mini-Grids

Scheduling and Resource Management in Computational Mini-Grids Scheduling and Resource Management in Computational Mini-Grids July 1, 2002 Project Description The concept of grid computing is becoming a more and more important one in the high performance computing

More information

White Paper. Recording Server Virtualization

White Paper. Recording Server Virtualization White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...

More information

Workshop on Parallel and Distributed Scientific and Engineering Computing, Shanghai, 25 May 2012

Workshop on Parallel and Distributed Scientific and Engineering Computing, Shanghai, 25 May 2012 Scientific Application Performance on HPC, Private and Public Cloud Resources: A Case Study Using Climate, Cardiac Model Codes and the NPB Benchmark Suite Peter Strazdins (Research School of Computer Science),

More information

EMC VPLEX FAMILY. Continuous Availability and data Mobility Within and Across Data Centers

EMC VPLEX FAMILY. Continuous Availability and data Mobility Within and Across Data Centers EMC VPLEX FAMILY Continuous Availability and data Mobility Within and Across Data Centers DELIVERING CONTINUOUS AVAILABILITY AND DATA MOBILITY FOR MISSION CRITICAL APPLICATIONS Storage infrastructure is

More information

How To Monitor And Test An Ethernet Network On A Computer Or Network Card

How To Monitor And Test An Ethernet Network On A Computer Or Network Card 3. MONITORING AND TESTING THE ETHERNET NETWORK 3.1 Introduction The following parameters are covered by the Ethernet performance metrics: Latency (delay) the amount of time required for a frame to travel

More information

A SWOT ANALYSIS ON CISCO HIGH AVAILABILITY VIRTUALIZATION CLUSTERS DISASTER RECOVERY PLAN

A SWOT ANALYSIS ON CISCO HIGH AVAILABILITY VIRTUALIZATION CLUSTERS DISASTER RECOVERY PLAN A SWOT ANALYSIS ON CISCO HIGH AVAILABILITY VIRTUALIZATION CLUSTERS DISASTER RECOVERY PLAN Eman Al-Harbi 431920472@student.ksa.edu.sa Soha S. Zaghloul smekki@ksu.edu.sa Faculty of Computer and Information

More information

Mizan: A System for Dynamic Load Balancing in Large-scale Graph Processing

Mizan: A System for Dynamic Load Balancing in Large-scale Graph Processing /35 Mizan: A System for Dynamic Load Balancing in Large-scale Graph Processing Zuhair Khayyat 1 Karim Awara 1 Amani Alonazi 1 Hani Jamjoom 2 Dan Williams 2 Panos Kalnis 1 1 King Abdullah University of

More information

MAGENTO HOSTING Progressive Server Performance Improvements

MAGENTO HOSTING Progressive Server Performance Improvements MAGENTO HOSTING Progressive Server Performance Improvements Simple Helix, LLC 4092 Memorial Parkway Ste 202 Huntsville, AL 35802 sales@simplehelix.com 1.866.963.0424 www.simplehelix.com 2 Table of Contents

More information

Apache Hadoop. Alexandru Costan

Apache Hadoop. Alexandru Costan 1 Apache Hadoop Alexandru Costan Big Data Landscape No one-size-fits-all solution: SQL, NoSQL, MapReduce, No standard, except Hadoop 2 Outline What is Hadoop? Who uses it? Architecture HDFS MapReduce Open

More information

THE EXPAND PARALLEL FILE SYSTEM A FILE SYSTEM FOR CLUSTER AND GRID COMPUTING. José Daniel García Sánchez ARCOS Group University Carlos III of Madrid

THE EXPAND PARALLEL FILE SYSTEM A FILE SYSTEM FOR CLUSTER AND GRID COMPUTING. José Daniel García Sánchez ARCOS Group University Carlos III of Madrid THE EXPAND PARALLEL FILE SYSTEM A FILE SYSTEM FOR CLUSTER AND GRID COMPUTING José Daniel García Sánchez ARCOS Group University Carlos III of Madrid Contents 2 The ARCOS Group. Expand motivation. Expand

More information

Distributed Systems. Examples. Advantages and disadvantages. CIS 505: Software Systems. Introduction to Distributed Systems

Distributed Systems. Examples. Advantages and disadvantages. CIS 505: Software Systems. Introduction to Distributed Systems CIS 505: Software Systems Introduction to Distributed Systems Insup Lee Department of Computer and Information Science University of Pennsylvania Distributed Systems Why distributed systems? o availability

More information

D1.2 Network Load Balancing

D1.2 Network Load Balancing D1. Network Load Balancing Ronald van der Pol, Freek Dijkstra, Igor Idziejczak, and Mark Meijerink SARA Computing and Networking Services, Science Park 11, 9 XG Amsterdam, The Netherlands June ronald.vanderpol@sara.nl,freek.dijkstra@sara.nl,

More information