
Parallel and Distributed Simulation in the Cloud

Richard M. Fujimoto 1, Asad Waqar Malik 2 and Alfred J. Park 3
1 School of Computational Science and Engineering, Georgia Institute of Technology, USA
2 National University of Science and Technology, Pakistan
3 IBM T.J. Watson Research Center, Yorktown Heights, USA

Abstract

Cloud computing offers the ability to transparently provide computing services remotely to users through the Internet, freeing them of the burdens associated with managing computing resources and facilities. It offers the potential to make parallel and distributed simulation capabilities much more widely accessible to users who are not experts in this technology and do not have ready access to high performance computing platforms. However, services hosted within the cloud can incur significant performance degradations. This article discusses the potential benefits and technical challenges that arise in utilizing cloud platforms for parallel and distributed simulations, along with a potential solution approach.

1. What is Cloud Computing?

Cloud computing has been gaining much attention in recent years as a means of realizing a long sought-after vision where computing is provided to consumers as a utility, not unlike electricity, water, or natural gas. As such, it is revolutionizing portions of the IT industry conducive to this computational model. In contrast to the norm today, where organizations potentially incur large capital expenditure (CapEx) costs to build and operate their own computing infrastructure and IT services to meet the computing needs of local users, cloud computing is a paradigm where computing services are implemented on servers that can be accessed by clients operating at remote locations throughout the Internet (Hewitt 2008). It is an approach where computing resources are provided as a service, e.g., Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). These services are inherently flexible and agile: users can choose the granularity of service and pay for only the computing resources they need.

Commercial cloud computing services have made inroads and achieved success utilizing this paradigm. Amazon Web Services' Elastic Compute Cloud (EC2) offers compute cycles and storage charged on an hourly and monthly basis, respectively (Lizhe, Jie et al. 2008). EC2 provides virtualized hardware upon which users can build a wide variety of applications; some higher-level services, e.g., storage management services, are also provided. Microsoft's Azure platform supports applications that are developed using .NET libraries. Google AppEngine provides an environment for creating web service applications that operate on Google's infrastructure. While EC2 offers a low-level version of the cloud where users can build their own software stacks over virtualized hardware, Azure and AppEngine offer successively higher-level views and services, trading off flexibility for increased functionality and ease of use.

Cloud computing outsources computing infrastructure services to the cloud provider, reducing or eliminating the need for organizations to offer extensive IT hardware and services. This may provide certain economic advantages to its users. First, the cloud provider may be able to provide computing infrastructure more affordably than a locally operated facility, because it can take advantage of certain economies of scale to reduce its operating costs.
In addition, the operator can place its facilities in locations that offer lower costs for electricity, power, cooling, labor, and property, and/or lower taxes. Electricity and cooling now represent a large fraction, e.g., a third, of the operating costs of a modern datacenter, while the cost of electricity in states such as California can be several times higher than in locations in Idaho or Washington (Armbrust, Fox et al. 2009).

The pay-as-you-go cost structure of cloud computing is attractive if the computational requirements of users vary greatly over time. Cloud computing allows the amount of computational power that is used, and the cost, to expand and shrink according to the user's needs. The alternative is either to overprovision the infrastructure within the user's organization to meet the peak expected workload, leaving the facility underutilized during periods of less intense usage, or to create a lower cost facility that is unable to meet the demands at times of peak usage.

Cloud computing represents a fusion of many related technologies and paradigms. It includes concepts associated with paradigms such as grid, utility, and autonomic computing. Grid computing aggregates computational resources across different administrative domains so that they can act in concert to perform very large tasks. Grids are often heterogeneous and geographically dispersed (Aymerich, Fenu et al. 2008). Applications of grid computing require a large number of processing cycles, need access to large amounts of data, and can generate huge datasets. Autonomic computing (AC) is based on self-management, self-configuration, self-healing, and self-protection. An AC system uses closed control loops: without any external input it monitors and controls the system. It is a self-adaptive system that dynamically changes its behavior in response to new situations and needs (Wang 2004). Utility computing is a provisioning model based on the idea of outsourcing and on-demand availability; services and resources are provided to the end user and charged based on their usage (Haifeng, Galligan et al. 2005). Cloud computing includes elements of these associated approaches, and introduces new ways to interact with different services compared to traditional client/server architectures.

While there has been much research in cloud computing and related technologies, comparatively little work has focused on their use in simulation, especially parallel and distributed simulation. There has been some work in federating distributed simulations utilizing the High Level Architecture (HLA) over grids (Cai, Turner et al. 2002; Xie, Teo et al. 2005; Chen, Turner et al. 2006; Pan, Turner et al. 2007). Other related work includes web-based simulation (Reichenthal 2002; Fitzgibbons, Fujimoto et al. 2004; Huang, Xiang et al. 2004), the Extensible Modeling and Simulation Framework (XMSF) for web services (Pullen, Brunton et al. 2005), and object request broker (ORB) based frameworks (Cholkar and Koopman 1999; D'Ambrogio and Gianni 2004). Execution of parallel and distributed simulations over clouds represents an interesting opportunity, but presents certain technical challenges, as will be discussed next.

2. Parallel Simulation in the Cloud

Parallel discrete event simulation (PDES) refers to the execution of a discrete event simulation program across multiple processors. Typically this is done to scale simulations to larger configurations, to increase the detail and fidelity of the model, and/or to reduce execution time (Fujimoto 2000). As computational hardware becomes increasingly parallel (e.g., multi-core CPUs) with only modest clock speed improvements, simulation developers must turn to parallelism to address performance concerns.
While cloud computing presents an intriguing means of offering traditional sequential simulation applications to users, its use for parallel and distributed simulations is perhaps more compelling. A significant impediment to the widespread exploitation of this technology has been the availability of suitable computing platforms. Cloud computing lowers the barrier to exploiting these technologies because it eliminates the need to purchase, and more importantly, to operate and maintain high performance computing equipment at the local site. Further, by providing parallel and distributed simulation software as a service, cloud computing offers the ability to hide many of the complications of executing parallel and distributed simulation codes from the user, offering the potential to make exploitation of this technology much less risky than is the case today.

As such, cloud computing can help to address a long-standing problem faced by the parallel and distributed simulation community: simplifying exploitation of the technology by domain scientists and engineers who are not experts in parallel computation or parallel simulation techniques. Currently, careful consideration of the hardware platform on which the simulation is to run is typically required to achieve good performance. Shielding the simulation modeler from the details of the underlying system can accelerate the adoption and use of parallel and distributed simulation techniques.

A cloud platform offers certain advantages from a simulation software vendor's perspective as well. Offering and maintaining this technology on a well-known, stable platform such as a virtual cluster in EC2 is a much less daunting undertaking than offering software that is sufficiently general that it can operate effectively on whatever platforms and configurations users happen to have in place at their local site. This reduces the costs associated with delivering parallel and distributed simulation products to end users, further enhancing its attractiveness.

The appeal of cloud computing for high performance computing applications has not gone unnoticed by cloud computing vendors. For example, Amazon's EC2 supports the Message Passing Interface (MPI), the standard communications protocol used by message-based parallel programs that run on clusters and supercomputers. This provides a direct path to making parallel and distributed simulations, at least those based on MPI, readily available to end users of the cloud.

But this is not to say that parallel and distributed simulation in the cloud is a certainty, nor is it straightforward; many key issues remain. Many of these concern cloud environments in general and are not specific to parallel or distributed simulation. For example, users must be assured that proprietary and confidential data will be secure in a cloud environment before they will consider such a move. Services must be reliable, nearly always available, secure, and resistant to cyber attacks. Further, execution of parallel and distributed simulations in cloud environments introduces additional issues, as will be discussed next.

Preliminary work in benchmarking parallel scientific programs in Amazon's EC2 provides some insight into the technical issues that will arise in executing parallel and distributed simulations in a cloud environment (Walker 2008; Ekanayake and Fox 2009). It has been observed that parallel scientific codes executed over EC2 ran significantly slower than on dedicated nodes of a cluster. Two key issues are communication and interference. Data processing applications using paradigms such as MapReduce (Dean and Ghemawat 2008) have enjoyed much success in cloud environments, but they do not require the extensive communication among computing tasks that arises in parallel and distributed simulation codes. Cloud environments are often better at providing high bandwidth communication among applications than at providing low latency (Walker 2008). This is problematic for many simulation applications, which typically send many small messages requiring quick delivery rather than fewer large messages requiring high bandwidth alone.
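To see why latency dominates in this setting, consider a simple latency-plus-bandwidth cost model. The following sketch is our own illustration, not taken from the article or from any measured system; the latency and bandwidth values are assumed. It compares sending many small event messages individually against sending the same payload as a single bundled message.

# Hypothetical illustration: per-message latency dominates when many small
# messages are sent individually, while a single bundled transfer pays the
# latency cost only once. The numbers below are assumed, not measured.

def transfer_time(num_messages, bytes_per_message, latency_s, bandwidth_bytes_per_s):
    """Time to send num_messages separately under a latency + size/bandwidth model."""
    per_message = latency_s + bytes_per_message / bandwidth_bytes_per_s
    return num_messages * per_message

def bundled_time(num_messages, bytes_per_message, latency_s, bandwidth_bytes_per_s):
    """Time to send the same payload as one aggregated message."""
    total_bytes = num_messages * bytes_per_message
    return latency_s + total_bytes / bandwidth_bytes_per_s

if __name__ == "__main__":
    latency = 500e-6          # assumed 500 microsecond effective message latency
    bandwidth = 100e6 / 8     # assumed ~100 Mb/s effective bandwidth
    n, size = 1000, 200       # 1000 events of 200 bytes each
    print("individual sends:", transfer_time(n, size, latency, bandwidth), "s")
    print("bundled send:    ", bundled_time(n, size, latency, bandwidth), "s")

Under these assumed numbers the bundled transfer pays the latency charge only once, which is the intuition behind the message aggregation strategy adopted in the approach described in Section 3.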
Interference is the second issue. Cloud environments are shared among many users, and individual users are not guaranteed exclusive access to the processors assigned to their virtual cluster. In principle, gang scheduling techniques can be used to ensure that an individual user is allocated a set of physical nodes at the same time instant; however, this property may not be guaranteed by the cloud provider. This can lead to difficulties for parallel simulation applications, especially those that utilize optimistic synchronization techniques. Interactive distributed simulations require that simulation computations be completed to meet hard or soft real-time constraints; response time guarantees would have to be provided by the cloud provider to fulfill this requirement.

In the following, we describe an approach that utilizes the master/worker paradigm for executing parallel and distributed simulation codes. Developed for volunteer computing and desktop grid applications, this approach features bulk communication (rather than transmission of many small messages), automated support for fault tolerance and load balancing, and migration of computations to the data, properties that are advantageous in the cloud. In a companion paper to appear later, we describe an optimistic synchronization protocol to address the interference problem.

3. Aurora

Here, we adopt terminology used in parallel discrete event simulation (PDES). Specifically, a PDES program consists of a collection of logical processes (LPs) that communicate by exchanging time-stamped messages. We use the terms events and messages synonymously, unless stated otherwise. Each LP is a sequential discrete event simulation that operates by processing events in time stamp order to model the evolution of the system over time. The computation associated with each event involves modifying state variables to reflect changes in the system occurring at that instant in time, and (possibly) scheduling new events with time stamps in the simulated future.

The master/worker paradigm is a mechanism for dividing a potentially large computation into smaller portions of work that can be distributed to a pool of workers operating under the direction of a master controller. Although this concept has been well studied in the general distributed computing literature, it has not been extensively explored for parallel discrete event simulation codes. The master/worker paradigm is well suited to cloud computing environments. Unlike traditional PDES codes that rely on frequent exchange of small messages among LPs, the master/worker approach features an automated bundling process that aggregates messages destined for different destinations and transmits them as one unit, achieving better utilization of the high bandwidth, relatively high latency interconnect in cloud environments. Further, by caching the state of LPs among the worker nodes, the execution mechanism has the effect of migrating computations to the data, a strategy that has proven successful in data-intensive computing applications based on the MapReduce paradigm frequently found in clouds.

General distributed computing software does not have the same requirements as PDES programs, and thus infrastructure must be specifically designed to support this simulation environment. Traditional master/worker systems lease work units to workers for execution. For example, SETI@Home (Anderson, Cobb et al. 2002) divides received radio telescope data into portions that workers can download for analysis. These work units are independent of the work units assigned to other workers and require no communication between workers as they perform the computations leased by the master. These conditions simplify the runtime requirements of the underlying master/worker infrastructure, but such infrastructures are inadequate to support PDES. In conventional PDES systems, a simulation consists of many logical processes that communicate by exchanging time-stamped messages as the simulation progresses. The first important difference in a master/worker PDES system is that the LP state must be tracked and stored.
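To make this concrete, the sketch below shows one plausible, simplified form a leased work unit might take: an LP's state plus its pending time-stamped messages, processed in time stamp order up to a bound supplied by the master. The class and function names are our own illustrative choices, not Aurora's API.

import heapq

class WorkUnit:
    """Hypothetical work unit: one LP's state plus its pending time-stamped messages."""
    def __init__(self, lp_id, state, pending_messages):
        self.lp_id = lp_id
        self.state = state                      # application-defined state
        self.queue = list(pending_messages)     # (timestamp, payload) tuples
        heapq.heapify(self.queue)               # keep messages in time stamp order
        self.now = 0.0

    def handle(self, timestamp, payload):
        """Application-defined event handler; returns new messages for other LPs."""
        self.state["count"] = self.state.get("count", 0) + 1
        # e.g., forward the job to a neighboring LP after a fixed delay
        return [(timestamp + 1.0, self.lp_id + 1, payload)]

def process_lease(unit, time_bound):
    """Process events in time stamp order up to time_bound (the safe bound
    supplied by the master); return the updated unit and outgoing messages."""
    outgoing = []
    while unit.queue and unit.queue[0][0] <= time_bound:
        timestamp, payload = heapq.heappop(unit.queue)
        unit.now = timestamp
        outgoing.extend(unit.handle(timestamp, payload))
    return unit, outgoing

if __name__ == "__main__":
    wu = WorkUnit(lp_id=7, state={}, pending_messages=[(0.5, "job"), (2.5, "job")])
    wu, out = process_lease(wu, time_bound=1.0)   # only the first event is safe
    print(wu.state, out)

In this picture, the dictionary of state variables and the leftover message queue are exactly what a master/worker PDES system must persist between leases, as discussed next.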
Some general distributed computing projects only require a final result at the end of their computation; a PDES system, however, must maintain the state of an LP that may be leased to a different worker at a later time. Furthermore, the PDES system must ensure that time-stamped messages are delivered in the correct order to the proper destination LP to preserve the local causality constraint (LCC). Therefore, in addition to correctly ordering messages sent between LPs, these messages must be maintained for future delivery to any LP or collection of LPs that is leased as a work unit to an available worker. Moreover, work units must be properly constrained during execution, as processing events and messages arbitrarily far into the future may violate the LCC. Consequently, conservative or optimistic time management schemes must be employed alongside LP and work unit management in a master/worker PDES system.
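As a hedged illustration of what a conservative scheme could compute, the following sketch derives a safe processing bound for one work unit from the current simulation times of the other work units and per-link lookahead values, in the spirit of the centralized conservative mechanism described later for Aurora's time managers. The data layout and function name are assumptions, not Aurora's implementation, and messages already in transit are ignored for simplicity.

def conservative_bound(unit_times, lookahead, target):
    """Lower bound on the time stamp of any future message that can arrive at
    `target`: the minimum over all other work units of (current simulation
    time + lookahead on the link to target). Events at `target` with time
    stamps at or below this bound are safe to process.

    unit_times: dict mapping work unit id -> current simulation time
    lookahead:  dict mapping (source id, destination id) -> lookahead value
    """
    bound = float("inf")
    for source, now in unit_times.items():
        if source == target:
            continue
        la = lookahead.get((source, target))
        if la is None:
            continue  # no link from source to target in the connectivity graph
        bound = min(bound, now + la)
    return bound

if __name__ == "__main__":
    times = {0: 4.0, 1: 5.0, 2: 3.5}
    la = {(0, 2): 1.0, (1, 2): 0.1}
    print(conservative_bound(times, la, target=2))  # min(4.0 + 1.0, 5.0 + 0.1) = 5.0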

Figure 1. Overview of interaction between different back-end services and a worker during a typical simulation

These key differences and requirements between general distributed computing programs and PDES in a master/worker paradigm are the impetus for the Aurora system. The need for highly scalable and high performance distributed simulations drives the architecture of the system, which is described next. The description of the Aurora architecture is based on that presented in (Fujimoto, Park et al. 2007).

3.1 Conceptual Overview

The master/worker paradigm promotes a clear separation between the controller and the workers, which perform computation on the work units passed to them by the master. The Aurora architecture divides the master infrastructure into three main back-end services, as shown in Figure 1: a proxy, one or more work unit state servers, and one or more work unit message state servers. The distributed nature of the back-end services helps to ensure the scalability and robustness of the Aurora system. These back-end services communicate over TCP/IP sockets. The Aurora workers perform the actual simulation computation by contacting the proper back-end services to download the simulation state and associated messages for each work unit lease. Computation is done locally, independent of other workers. Once the computation completes, the final state and messages are uploaded to the work unit state and message state servers. This process repeats for a specified number of iterations or until the simulation reaches its end time. As shown in Figure 1, the worker only contacts the back-end services pertinent to its own execution, as directed by the proxy.

To support multiple concurrent simulations, Aurora uses the concept of simulation packages (SimPkg). A simulation package contains all the runtime parameters of the simulation, such as the number of work units, a lookahead connectivity graph, simulation begin and end times, wallclock time deadlines per work unit lease, and possible initial states. These specifications are uploaded to the proxy service, where the necessary metadata are created and resources are allocated prior to starting the simulation and allowing workers to execute.

3.2 Aurora Proxy

The Aurora proxy service is the central controller that oversees the simulation and the other two back-end components, which are handled by internal managers. The work unit state and message state servers are controlled by independent managers within the proxy that keep track of metadata such as worker keys, server IP addresses, and the allocation of simulation packages to work unit storage. The proxy contains three managers: a work unit state server manager, a message state server manager, and a simulation package manager (see Figure 2). The work unit state server manager holds control information such as state server IP addresses and a list of the work units of a simulation package that each server is hosting. The message state server manager operates similarly to the state server manager, except that it keeps track of message state servers instead of work unit state servers.
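The simulation package manager's unit of bookkeeping is the simulation package described above. As an informal picture of what such a specification might contain, the sketch below defines a hypothetical SimPkg structure holding the parameters listed earlier; the field names and types are our own guesses at a reasonable layout, not Aurora's actual format.

from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass
class SimPkg:
    """Hypothetical simulation package: runtime parameters uploaded to the proxy."""
    num_work_units: int
    # lookahead connectivity graph: (source work unit, destination work unit) -> lookahead
    lookahead: Dict[Tuple[int, int], float]
    sim_begin: float
    sim_end: float
    lease_deadline_s: float                  # wallclock deadline per work unit lease
    initial_states: Dict[int, Any] = field(default_factory=dict)

# Example: a 4-work-unit package with a uniform lookahead of 0.1 between neighbors.
pkg = SimPkg(
    num_work_units=4,
    lookahead={(i, (i + 1) % 4): 0.1 for i in range(4)},
    sim_begin=0.0,
    sim_end=100.0,
    lease_deadline_s=30.0,
)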

Figure 2. Internal metadata and state management

The baseline functionality for both the work unit state server manager and the message state server manager is inherited from a common manager class, but the two are kept separate at the implementation level because their caching mechanisms differ depending on whether the server is supporting atomic work unit states or individual time-stamped messages. The simulation package manager stores the metadata of a simulation package and performs any work related to simulation packages and their associated work units. For example, when a simulation package is uploaded, the simulation manager finds space on available servers where work unit and future message state can be stored. Worker requests for work are passed to the simulation manager for leasing duties, as well as to keep track of which work units have been leased or locked down to prevent further leasing. The simulation manager instantiates time managers as specified by the simulation package. Time managers hold information regarding leased work unit simulation times and are responsible for computing safe processing bounds for any given work unit. Time managers may employ conservative or optimistic time management schemes, or a combination of both. The implementation described here uses a centralized conservative time management mechanism.

3.3 Work Unit and Message State Services

The other components of the Aurora back-end service are the work unit state and message state servers. The work unit state server contains the current values of the state vectors for each work unit and the simulation time for which this state is valid. These work unit states are application-defined contiguous blocks of memory and are handled by pack and unpack procedures overridden by the Aurora workers. The message state server is similar to the state server except that, instead of storing the state of work units, future messages are stored in time stamp order according to their destination work unit. These services can easily be adapted for optimistic execution and caching routines by simply storing state and message histories instead of discarding previous values upon state and message updates. Although distribution of the workload among the proxy, work unit state, and message state servers has increased scalability and performance, much of the future performance potential of the Aurora system lies in exploiting optimistic execution and employing caching mechanisms. These services provide the baseline architecture for these types of performance enhancements.
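The work unit states just described are application-defined contiguous blocks of memory handled by pack and unpack procedures. To suggest what such procedures might look like, here is a small, hypothetical sketch using Python's struct module; the article does not give Aurora's interface or wire format, so the state layout and function names below are assumptions.

import struct

# Assumed state layout for illustration: (current simulation time, job count,
# queue length), packed as one contiguous, fixed-size block of bytes.
STATE_FORMAT = "<dii"   # little-endian: double, int, int

def pack_state(sim_time, job_count, queue_length):
    """Serialize the work unit state into a contiguous block for upload."""
    return struct.pack(STATE_FORMAT, sim_time, job_count, queue_length)

def unpack_state(blob):
    """Restore the work unit state downloaded from the state server."""
    sim_time, job_count, queue_length = struct.unpack(STATE_FORMAT, blob)
    return {"sim_time": sim_time, "job_count": job_count, "queue_length": queue_length}

if __name__ == "__main__":
    blob = pack_state(12.5, 42, 3)
    print(len(blob), "bytes:", unpack_state(blob))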

3.4 Workers and Work Unit Lifecycle

The Aurora workers perform work as dictated by the master, which comprises the proxy and the associated state servers. The Aurora worker performs iterations of a work unit lifecycle as specified at run time. This lifecycle is comprised of four major steps: (1) initialization, (2) work unit setup, (3) application-defined execution, and (4) work unit finalization.

The initialization step is typically done only once, when the worker starts up. This step includes thread initialization, signal handler setup, and command-line parsing. After this step completes, the worker begins the work unit setup and download phase to populate itself with the proper state variables and messages to process for the leased simulation execution time. The steps for the work unit setup are performed in the following order:

1. Worker key request
2. Request work unit and metadata
3. Populate internal metadata structures and create a work unit manager if necessary
4. Contact the state server and download state
5. Contact the message state server and download messages
6. Unpack the work unit state
7. Unpack the work unit messages and populate the incoming message queue in time-stamp order

A worker key is a unique identifier issued to the worker upon first contact by the proxy. The worker uses its key to identify itself in future communications. Once a unique worker key is issued, the worker performs a work unit request, in response to which the proxy may or may not lease a work unit to the requesting worker. If a work unit is available to be leased, the associated metadata about the work unit is downloaded from the proxy service. Internally, the worker then sets up supporting data structures for the impending work unit. Once this step has completed, the worker contacts the proper work unit state and message state servers and downloads the packed information. The work unit state is then processed by the application-defined unpack method to load the current state variables. The message states are automatically unpacked by the Aurora worker into a time-stamp ordered priority queue for processing during the simulation execution.

After the work unit setup has completed, the application-defined simulation execution method is invoked and the simulation runs until the end time specified during the work unit metadata download (step 2 of the work unit setup). During the simulation execution, if the work unit detects a message that is destined for another work unit, it performs a destination message server lookup. This lookup is performed in a separate thread from the main execution loop and the result is cached for future use. When the simulation reaches the end time specified by the proxy, work unit finalization begins, which is the final step in the work unit lifecycle. This process is comprised of seven steps:

1. Pack the final state
2. Pack future messages destined for itself
3. Contact the proxy to begin consistency convergence
4. Collate and pack future messages destined for other work units by message state server keys; deliver the collated messages to the proper message state servers
5. Upload the work unit state to the state server
6. Upload the messages packed in step 2 to the message state servers
7. Receive consistency convergence verification from the proxy

The first step calls an application-defined method for packing the final state into a contiguous area of memory for upload to the state server. Next, the worker packages any remaining unprocessed messages on the time-stamp ordered message queue, which may have been created when the work unit sent messages to itself during simulation execution. The worker then initiates a consistency convergence with the proxy. A consistency convergence locks down the work unit so that it may not be leased again or updated in multiple lease scenarios; this allows atomic commits and prevents inconsistent updates of the work unit and message states.
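The article does not spell out how the proxy implements this lock-down, but one plausible sketch of the bookkeeping is shown below; the class and method names are hypothetical and are only meant to illustrate the lease/lock/commit cycle described above.

class ProxyLeaseTable:
    """Hypothetical lease bookkeeping: a work unit under consistency convergence
    is locked and cannot be leased again until the convergence is verified."""
    def __init__(self, work_unit_ids):
        self.status = {wu: "available" for wu in work_unit_ids}

    def lease(self, wu):
        if self.status[wu] != "available":
            return False                 # leased or locked; the worker must wait (deferred)
        self.status[wu] = "leased"
        return True

    def begin_convergence(self, wu):
        assert self.status[wu] == "leased"
        self.status[wu] = "locked"       # no further leases or updates allowed

    def verify_convergence(self, wu):
        assert self.status[wu] == "locked"
        self.status[wu] = "available"    # state and messages committed atomically

if __name__ == "__main__":
    table = ProxyLeaseTable([0, 1])
    print(table.lease(0))        # True
    table.begin_convergence(0)
    print(table.lease(0))        # False: locked during convergence
    table.verify_convergence(0)
    print(table.lease(0))        # True again once the commit is verified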
The worker then begins a message collation process in which future messages destined for work units that reside on the same physical message state server are packed together. This reduces the frequency of small message updates and allows the workers to update groups of messages at one time.
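The collation step can be pictured as a simple group-by over destination message state servers, as in the following illustrative sketch; the mapping from work unit to server and the message shapes are assumed, not Aurora's internal representation.

from collections import defaultdict

def collate(outgoing_messages, server_of_work_unit):
    """Group outgoing (timestamp, destination work unit, payload) messages by the
    message state server hosting the destination, so each server receives one
    bulk update instead of many small ones."""
    batches = defaultdict(list)
    for timestamp, dest_wu, payload in outgoing_messages:
        server_key = server_of_work_unit[dest_wu]
        batches[server_key].append((timestamp, dest_wu, payload))
    return dict(batches)

if __name__ == "__main__":
    messages = [(1.2, 3, "job"), (1.4, 5, "job"), (2.0, 3, "job")]
    server_map = {3: "msgserver-A", 5: "msgserver-B"}
    for server, batch in collate(messages, server_map).items():
        print(server, "receives", len(batch), "message(s) in one upload")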

After the collation process completes, the final state and the future messages destined for the work unit itself are uploaded to the appropriate servers. The final step is a verification of consistency convergence from the proxy, confirming that the process was completed successfully without errors on the back-end. During steps 4-6, the Aurora state and message state servers send messages to the proxy indicating updates to their respective states. The proxy acknowledges these messages and tracks the consistency convergence process. The final result of the update is returned to the worker.

3.5 Performance

For a performance evaluation of the Aurora system, a torus queuing network was used to compare the impact of work unit granularity and the amount of work per work unit lease on overall system throughput. In this simulation, servers can be aggregated into subnets that can then be mapped to single work units. The coarse-grained queuing network was configured as a 250,000-server 500x500 closed torus network with 625 partitions of 25x25 torus subnets. The fine-grained queuing network was configured as a 22,500-server 150x150 closed torus network partitioned into 10x10 torus subnets that can be leased as work units. The coarse-grained simulation had internal links within each work unit with a delay of 10 microseconds and external work unit-to-work unit delays of 1 millisecond. The fine-grained simulation had the external delay set at 0.1 milliseconds, while the internal delay remained unchanged. Both simulation scenarios generated 10,000 jobs destined for internal servers and 10,000 jobs destined for servers external to the work unit. Additionally, both scenarios had job service times exponentially distributed with a mean of 5 microseconds. The differences in simulation parameters ensured significant differences in the amount of work per work unit lease.

A total of 42 processors of heterogeneous architectures were used in this performance evaluation. Sixteen processors in two compute nodes were 8-way 550 MHz Pentium III machines with 4 GB of memory per node, and the remaining 26 processors were 3.06 GHz SMT-enabled Pentium Xeon processors across 13 nodes with 1 GB of memory per node. The Aurora back-end was run on three 2.8 GHz Pentium Xeon machines with 4 GB of memory each. All machines were running RedHat Linux and had Fast Ethernet among each set of similar processors, but were on disparate LANs between machine types. The back-end setup for the Coarse and Fine scenarios contained one work unit state server and one message state server, whereas the Fine (2) scenario used one work unit state server and two message state servers.

For the figures below, Deferred refers to the amount of wallclock time (seconds) a worker spends waiting for a valid work unit lease from the back-end. Import indicates the amount of time the worker spends downloading work unit metadata, work unit state vectors, and messages, plus the associated time spent in the application-dependent deserialization routine. Finalize is the time to perform the logical inverse of import, in which the work unit state and messages are serialized and consistency convergence is achieved on the back-end services for the returning work unit. Application is the time spent executing application simulation code.

The overheads for the coarse and fine-grained simulations are detailed in Figure 3. As expected, the coarse-grained simulation incurred much less overhead than the fine-grained simulations.
For the fine-grained simulations, the majority of the time is spent in the work unit overheads, most notably work unit finalization and import. With an additional message state server available, a reduction in overhead time can be observed in the Fine (2) scenario. Figure 4 details the global event throughput for each of the scenarios. Although the coarse-grained simulation was expected to have higher event throughput than the fine-grained simulations due to lower overhead, the Fine (2) scenario showed that adding only one more message state server resulted in a 53% increase in event throughput.

4. Concluding Remarks

Cloud computing offers the promise of outsourcing the task of providing and managing the execution platform while hiding many of the complicated details of PDES execution from users.

As such, it offers the possibility of making parallel and distributed simulation technology much more readily accessible to the simulation community. As cloud computing environments become more common, their use for parallel and distributed simulations becomes more attractive.

Figure 3. Processor time breakdown

In this article, we attempted to highlight some of the benefits and important challenges associated with executing parallel and distributed simulations in cloud environments, and suggested some possible solutions that have been implemented and evaluated. This work represents only an initial step in executing parallel and distributed codes in cloud computing environments, and there are many avenues for future research. As noted earlier, use of the cloud for interactive parallel and distributed simulations is an open area of study. Experience with simulation application codes in cloud environments is limited, and development frameworks and tools are needed. Security and reliability issues merit greater attention. A mechanism for auto-tuning parallel and distributed codes to improve convenience for users is another area that merits investigation. The interaction of fault tolerance mechanisms with parallel and distributed simulation execution also requires further study.

Acknowledgement

This research was funded in part by NSF Grant ATM.

Figure 4. Event throughput

References

Anderson, D. P., J. Cobb, et al. (2002). "SETI@home: an experiment in public-resource computing." Communications of the ACM 45.

Armbrust, M., A. Fox, et al. (2009). Above the Clouds: A Berkeley View of Cloud Computing. Berkeley, CA, Electrical Engineering and Computer Sciences, University of California at Berkeley.

Aymerich, F. M., G. Fenu, et al. (2008). An Approach to a Cloud Computing Network. First International Conference on the Applications of Digital Information and Web Technologies (ICADIWT).

Cai, W., S. J. Turner, et al. (2002). A load management system for running HLA-based distributed simulations over the grid. Proceedings of the Sixth IEEE International Workshop on Distributed Simulation and Real-Time Applications, Fort Worth, TX.

Chen, D., S. J. Turner, et al. (2006). A Framework for Robust HLA-based Distributed Simulations. International Workshop on Principles of Advanced and Distributed Simulation, Singapore.

Cholkar, A. and P. Koopman (1999). A widely deployable Web-based network simulation framework using CORBA IDL-based APIs. Proceedings of the 31st Conference on Winter Simulation, Volume 2. Phoenix, Arizona, ACM Press.

D'Ambrogio, A. and D. Gianni (2004). Using CORBA to Enhance HLA Interoperability in Distributed and Web-Based Simulation. Computer and Information Sciences.

Dean, J. and S. Ghemawat (2008). "MapReduce: simplified data processing on large clusters." Communications of the ACM 51(1).

Ekanayake, J. and G. Fox (2009). High Performance Parallel Computing with Clouds and Cloud Technologies. Bloomington, IN, Department of Computer Science, Indiana University.

Fitzgibbons, J. B., R. M. Fujimoto, et al. (2004). IDSim: an extensible framework for Interoperable Distributed Simulation. Proceedings of the IEEE International Conference on Web Services, San Diego, CA.

Fujimoto, R. (2000). Parallel and Distributed Simulation Systems. Wiley Interscience.

Fujimoto, R. M. (1990). "Parallel discrete event simulation." Communications of the ACM 33(10).

Fujimoto, R. M., A. Park, et al. (2007). Towards Flexible, Reliable, High Throughput Parallel Discrete Event Simulations. Cooperative Research in Science and Technology (COST) Symposium on Modeling and Simulation in Telecommunications.

Haifeng, G., P. Galligan, et al. (2005). The Application of Utility Computing and Web-Services to Inventory Optimisation. Proceedings of the 2005 IEEE International Conference on Services Computing (SCC 05), Volume 2.

Hewitt, C. (2008). "ORGs for Scalable, Robust, Privacy-Friendly Client Cloud Computing." IEEE Internet Computing 12(5).

Huang, Y., X. Xiang, et al. (2004). A Self Manageable Infrastructure for Supporting Web-based Simulations. Proceedings of the 37th Annual Symposium on Simulation. Arlington, VA, IEEE Computer Society.

Lizhe, W., T. Jie, et al. (2008). Scientific Cloud Computing: Early Definition and Experience. 10th IEEE International Conference on High Performance Computing and Communications (HPCC).

Pan, K., S. J. Turner, et al. (2007). A Service Oriented HLA RTI on the Grid. IEEE International Conference on Web Services (ICWS).

Pullen, J. M., R. Brunton, et al. (2005). "Using Web services to integrate heterogeneous simulations in a grid environment." Future Generation Computer Systems 21(1).

Reichenthal, S. W. (2002). Re-introducing web-based simulation. Proceedings of the 34th Conference on Winter Simulation. San Diego, California, Winter Simulation Conference.

Walker, E. (2008). "Benchmarking Amazon EC2 for High Performance Scientific Computing."

Wang, Y. (2004). On autonomous computing and cognitive processes. Third IEEE International Conference on Cognitive Informatics, IEEE.

Xie, Y., Y. M. Teo, et al. (2005). Servicing Provisioning for HLA-Based Distributed Simulation on the Grid. Proceedings of the 19th Workshop on Principles of Advanced and Distributed Simulation. Monterey, CA, IEEE Computer Society.

Richard Fujimoto is a Regents Professor and Chair of the School of Computational Science and Engineering at the Georgia Institute of Technology. He received his M.S. and Ph.D. degrees from the University of California (Berkeley) in 1980 and 1983, respectively. He has published over 200 articles on parallel and distributed simulation. Among his past activities, he led the definition of the time management services for the DoD High Level Architecture (HLA).

Asad Waqar Malik is a PhD candidate at the National University of Science and Technology (NUST), Pakistan. He received his MS in Software Engineering and his Bachelor of Computer Science degrees from NUST and Hamdard University, respectively.
He has been working in the distributed simulation field and has also worked as an international scholar at the Georgia Institute of Technology. He has five international conference publications. His research interests include real-time decision support systems, distributed simulation, and C4I systems.

Alfred Park is a postdoctoral research scientist at the IBM T.J. Watson Research Center in Yorktown Heights, New York. He received his BS, MS, and PhD in Computer Science from the Georgia Institute of Technology in 2002, 2004, and 2009, respectively. His interests are in large-scale stream processing systems, high performance computing, metacomputing, and parallel and distributed simulation.


for my computation? Stefano Cozzini Which infrastructure Which infrastructure Democrito and SISSA/eLAB - Trieste

for my computation? Stefano Cozzini Which infrastructure Which infrastructure Democrito and SISSA/eLAB - Trieste Which infrastructure Which infrastructure for my computation? Stefano Cozzini Democrito and SISSA/eLAB - Trieste Agenda Introduction:! E-infrastructure and computing infrastructures! What is available

More information

Relational Databases in the Cloud

Relational Databases in the Cloud Contact Information: February 2011 zimory scale White Paper Relational Databases in the Cloud Target audience CIO/CTOs/Architects with medium to large IT installations looking to reduce IT costs by creating

More information

A New Approach of CLOUD: Computing Infrastructure on Demand

A New Approach of CLOUD: Computing Infrastructure on Demand A New Approach of CLOUD: Computing Infrastructure on Demand Kamal Srivastava * Atul Kumar ** Abstract Purpose: The paper presents a latest vision of cloud computing and identifies various commercially

More information

Getting Familiar with Cloud Terminology. Cloud Dictionary

Getting Familiar with Cloud Terminology. Cloud Dictionary Getting Familiar with Cloud Terminology Cloud computing is a hot topic in today s IT industry. However, the technology brings with it new terminology that can be confusing. Although you don t have to know

More information

AN IMPLEMENTATION OF E- LEARNING SYSTEM IN PRIVATE CLOUD

AN IMPLEMENTATION OF E- LEARNING SYSTEM IN PRIVATE CLOUD AN IMPLEMENTATION OF E- LEARNING SYSTEM IN PRIVATE CLOUD M. Lawanya Shri 1, Dr. S. Subha 2 1 Assistant Professor,School of Information Technology and Engineering, Vellore Institute of Technology, Vellore-632014

More information

Overview. The Cloud. Characteristics and usage of the cloud Realities and risks of the cloud

Overview. The Cloud. Characteristics and usage of the cloud Realities and risks of the cloud Overview The purpose of this paper is to introduce the reader to the basics of cloud computing or the cloud with the aim of introducing the following aspects: Characteristics and usage of the cloud Realities

More information

Cloud Based Distributed Databases: The Future Ahead

Cloud Based Distributed Databases: The Future Ahead Cloud Based Distributed Databases: The Future Ahead Arpita Mathur Mridul Mathur Pallavi Upadhyay Abstract Fault tolerant systems are necessary to be there for distributed databases for data centers or

More information

Cloud Computing An Introduction

Cloud Computing An Introduction Cloud Computing An Introduction Distributed Systems Sistemi Distribuiti Andrea Omicini andrea.omicini@unibo.it Dipartimento di Informatica Scienza e Ingegneria (DISI) Alma Mater Studiorum Università di

More information

IBM 000-281 EXAM QUESTIONS & ANSWERS

IBM 000-281 EXAM QUESTIONS & ANSWERS IBM 000-281 EXAM QUESTIONS & ANSWERS Number: 000-281 Passing Score: 800 Time Limit: 120 min File Version: 58.8 http://www.gratisexam.com/ IBM 000-281 EXAM QUESTIONS & ANSWERS Exam Name: Foundations of

More information

Building an AWS-Compatible Hybrid Cloud with OpenStack

Building an AWS-Compatible Hybrid Cloud with OpenStack Building an AWS-Compatible Hybrid Cloud with OpenStack AWS is Transforming IT Amazon Web Services (AWS) commands a significant lead in the public cloud services market, with revenue estimated to grow from

More information

A Middleware Strategy to Survive Compute Peak Loads in Cloud

A Middleware Strategy to Survive Compute Peak Loads in Cloud A Middleware Strategy to Survive Compute Peak Loads in Cloud Sasko Ristov Ss. Cyril and Methodius University Faculty of Information Sciences and Computer Engineering Skopje, Macedonia Email: sashko.ristov@finki.ukim.mk

More information

Key Considerations and Major Pitfalls

Key Considerations and Major Pitfalls : Key Considerations and Major Pitfalls The CloudBerry Lab Whitepaper Things to consider before offloading backups to the cloud Cloud backup services are gaining mass adoption. Thanks to ever-increasing

More information

IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications

IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications Open System Laboratory of University of Illinois at Urbana Champaign presents: Outline: IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications A Fine-Grained Adaptive

More information

Distributed Systems and Recent Innovations: Challenges and Benefits

Distributed Systems and Recent Innovations: Challenges and Benefits Distributed Systems and Recent Innovations: Challenges and Benefits 1. Introduction Krishna Nadiminti, Marcos Dias de Assunção, and Rajkumar Buyya Grid Computing and Distributed Systems Laboratory Department

More information

<Insert Picture Here> Infrastructure as a Service (IaaS) Cloud Computing for Enterprises

<Insert Picture Here> Infrastructure as a Service (IaaS) Cloud Computing for Enterprises Infrastructure as a Service (IaaS) Cloud Computing for Enterprises Speaker Title The following is intended to outline our general product direction. It is intended for information

More information

Informatica Ultra Messaging SMX Shared-Memory Transport

Informatica Ultra Messaging SMX Shared-Memory Transport White Paper Informatica Ultra Messaging SMX Shared-Memory Transport Breaking the 100-Nanosecond Latency Barrier with Benchmark-Proven Performance This document contains Confidential, Proprietary and Trade

More information

I D C M A R K E T S P O T L I G H T

I D C M A R K E T S P O T L I G H T I D C M A R K E T S P O T L I G H T The New IP: Building the Foundation of Datacenter Network Automation March 2015 Adapted from Worldwide Enterprise Communications and Datacenter Network Infrastructure

More information

Control 2004, University of Bath, UK, September 2004

Control 2004, University of Bath, UK, September 2004 Control, University of Bath, UK, September ID- IMPACT OF DEPENDENCY AND LOAD BALANCING IN MULTITHREADING REAL-TIME CONTROL ALGORITHMS M A Hossain and M O Tokhi Department of Computing, The University of

More information

A SURVEY ON MAPREDUCE IN CLOUD COMPUTING

A SURVEY ON MAPREDUCE IN CLOUD COMPUTING A SURVEY ON MAPREDUCE IN CLOUD COMPUTING Dr.M.Newlin Rajkumar 1, S.Balachandar 2, Dr.V.Venkatesakumar 3, T.Mahadevan 4 1 Asst. Prof, Dept. of CSE,Anna University Regional Centre, Coimbatore, newlin_rajkumar@yahoo.co.in

More information

Amazon EC2 XenApp Scalability Analysis

Amazon EC2 XenApp Scalability Analysis WHITE PAPER Citrix XenApp Amazon EC2 XenApp Scalability Analysis www.citrix.com Table of Contents Introduction...3 Results Summary...3 Detailed Results...4 Methods of Determining Results...4 Amazon EC2

More information

Planning the Migration of Enterprise Applications to the Cloud

Planning the Migration of Enterprise Applications to the Cloud Planning the Migration of Enterprise Applications to the Cloud A Guide to Your Migration Options: Private and Public Clouds, Application Evaluation Criteria, and Application Migration Best Practices Introduction

More information

Optimizing Shared Resource Contention in HPC Clusters

Optimizing Shared Resource Contention in HPC Clusters Optimizing Shared Resource Contention in HPC Clusters Sergey Blagodurov Simon Fraser University Alexandra Fedorova Simon Fraser University Abstract Contention for shared resources in HPC clusters occurs

More information

Demystifying the Cloud Computing 02.22.2012

Demystifying the Cloud Computing 02.22.2012 Demystifying the Cloud Computing 02.22.2012 Speaker Introduction Victor Lang Enterprise Technology Consulting Services Victor Lang joined Smartbridge in early 2003 as the company s third employee and currently

More information

Carol Palmer, Principal Product Manager, Oracle Corporation

Carol Palmer, Principal Product Manager, Oracle Corporation USING ORACLE INTERMEDIA IN RETAIL BANKING PAYMENT SYSTEMS Carol Palmer, Principal Product Manager, Oracle Corporation INTRODUCTION Payment systems deployed by retail banks today include traditional paper

More information

Big data management with IBM General Parallel File System

Big data management with IBM General Parallel File System Big data management with IBM General Parallel File System Optimize storage management and boost your return on investment Highlights Handles the explosive growth of structured and unstructured data Offers

More information

Viswanath Nandigam Sriram Krishnan Chaitan Baru

Viswanath Nandigam Sriram Krishnan Chaitan Baru Viswanath Nandigam Sriram Krishnan Chaitan Baru Traditional Database Implementations for large-scale spatial data Data Partitioning Spatial Extensions Pros and Cons Cloud Computing Introduction Relevance

More information

Axceleon s CloudFuzion Turbocharges 3D Rendering On Amazon s EC2

Axceleon s CloudFuzion Turbocharges 3D Rendering On Amazon s EC2 Axceleon s CloudFuzion Turbocharges 3D Rendering On Amazon s EC2 In the movie making, visual effects and 3D animation industrues meeting project and timing deadlines is critical to success. Poor quality

More information

Manjrasoft Market Oriented Cloud Computing Platform

Manjrasoft Market Oriented Cloud Computing Platform Manjrasoft Market Oriented Cloud Computing Platform Innovative Solutions for 3D Rendering Aneka is a market oriented Cloud development and management platform with rapid application development and workload

More information

Scientific and Technical Applications as a Service in the Cloud

Scientific and Technical Applications as a Service in the Cloud Scientific and Technical Applications as a Service in the Cloud University of Bern, 28.11.2011 adapted version Wibke Sudholt CloudBroker GmbH Technoparkstrasse 1, CH-8005 Zurich, Switzerland Phone: +41

More information

IT as a Service. Transforming IT with the Windows Azure Platform. November 2010

IT as a Service. Transforming IT with the Windows Azure Platform. November 2010 IT as a Service Transforming IT with the Windows Azure Platform November 2010 Version 1.0 11/9/2010 Contents Understanding IT as a Service... 1 Realizing IT as a Service: The Importance of PaaS... 4 What

More information

Delivering Quality in Software Performance and Scalability Testing

Delivering Quality in Software Performance and Scalability Testing Delivering Quality in Software Performance and Scalability Testing Abstract Khun Ban, Robert Scott, Kingsum Chow, and Huijun Yan Software and Services Group, Intel Corporation {khun.ban, robert.l.scott,

More information

Elastic Private Clouds

Elastic Private Clouds White Paper Elastic Private Clouds Agile, Efficient and Under Your Control 1 Introduction Most businesses want to spend less time and money building and managing IT infrastructure to focus resources on

More information

Datacenters and Cloud Computing. Jia Rao Assistant Professor in CS http://cs.uccs.edu/~jrao/cs5540/spring2014/index.html

Datacenters and Cloud Computing. Jia Rao Assistant Professor in CS http://cs.uccs.edu/~jrao/cs5540/spring2014/index.html Datacenters and Cloud Computing Jia Rao Assistant Professor in CS http://cs.uccs.edu/~jrao/cs5540/spring2014/index.html What is Cloud Computing? A model for enabling ubiquitous, convenient, ondemand network

More information

This paper defines as "Classical"

This paper defines as Classical Principles of Transactional Approach in the Classical Web-based Systems and the Cloud Computing Systems - Comparative Analysis Vanya Lazarova * Summary: This article presents a comparative analysis of

More information

Recommendations for Performance Benchmarking

Recommendations for Performance Benchmarking Recommendations for Performance Benchmarking Shikhar Puri Abstract Performance benchmarking of applications is increasingly becoming essential before deployment. This paper covers recommendations and best

More information

Performance of the Cloud-Based Commodity Cluster. School of Computer Science and Engineering, International University, Hochiminh City 70000, Vietnam

Performance of the Cloud-Based Commodity Cluster. School of Computer Science and Engineering, International University, Hochiminh City 70000, Vietnam Computer Technology and Application 4 (2013) 532-537 D DAVID PUBLISHING Performance of the Cloud-Based Commodity Cluster Van-Hau Pham, Duc-Cuong Nguyen and Tien-Dung Nguyen School of Computer Science and

More information

Network Infrastructure Services CS848 Project

Network Infrastructure Services CS848 Project Quality of Service Guarantees for Cloud Services CS848 Project presentation by Alexey Karyakin David R. Cheriton School of Computer Science University of Waterloo March 2010 Outline 1. Performance of cloud

More information

On Cloud Computing Technology in the Construction of Digital Campus

On Cloud Computing Technology in the Construction of Digital Campus 2012 International Conference on Innovation and Information Management (ICIIM 2012) IPCSIT vol. 36 (2012) (2012) IACSIT Press, Singapore On Cloud Computing Technology in the Construction of Digital Campus

More information