Explicit Spatial Scattering for Load Balancing in Conservatively Synchronized Parallel Discrete-Event Simulations
Sunil Thulasidasan, Shiva Prasad Kasiviswanathan, Stephan Eidenbenz, Phillip Romero
Los Alamos National Laboratory

Abstract

We re-examine the problem of load balancing in conservatively synchronized parallel discrete-event simulations executed on high-performance computing clusters, focusing on simulations where computational and messaging load tend to be spatially clustered. Such domains are frequently characterized by the presence of geographic hot-spots, i.e., regions that generate significantly more simulation events than others. Examples of such domains include simulations of urban regions, transportation networks, and other networks where interaction between entities is often constrained by physical proximity. Noting that in conservatively synchronized parallel simulations the speed of execution is determined by the slowest (i.e., most heavily loaded) simulation process, we study different partitioning strategies for achieving an equitable processor-load distribution in domains with spatially clustered load. In particular, we study the effectiveness of partitioning via spatial scattering in achieving optimal load balance. In this partitioning technique, nearby entities are explicitly assigned to different processors, thereby scattering the load across the cluster. This is motivated by two observations: (i) since load is spatially clustered, spatial scattering should, intuitively, spread the load across the compute cluster, and (ii) in parallel simulations, equitable distribution of CPU load is a greater determinant of execution speed than message-passing overhead. Through large-scale simulation experiments, on both abstracted and real simulation models, we observe that scatter partitioning, even with its greatly increased messaging overhead, significantly outperforms more conventional spatial partitioning techniques that seek to reduce messaging overhead. Further, even if hot-spots change over the course of the simulation, load continues to be balanced under spatial scattering as long as the underlying feature of spatial clustering is retained, leading us to the observation that spatial scattering can often obviate the need for dynamic load balancing.

I. INTRODUCTION

In parallel computing, achieving an equitable distribution of load across the available computing resources is a significant challenge and a practical necessity on high-performance computing platforms, where idle processors represent a huge waste of computational power. The problem of unbalanced load is especially egregious in conservatively synchronized parallel simulation systems, where causality constraints dictate that the simulation clocks of any two processes not differ by more than a given time window (called the look-ahead) [6]. Thus, a processor with a relatively large event load, whose simulation clock advances at a much slower rate, can severely degrade the performance of the entire simulation. The load distribution in a parallel simulation is inherently an outcome of the partitioning strategy used to assign simulation entities to processors.

Most research into load balancing and partitioning in parallel simulations has approached the problem from the perspective of dynamic load balancing. An algorithm based on process migration (a challenging task in itself) for dynamic load balancing of conservative parallel simulations is described in [2].
Another dynamic load balancing algorithm based on spatial partitioning, for a simulation employing optimistic synchronization, is described in [4] and [5]; it involves the movement of spatial data between neighboring processes. A scheme using PVM [1] for parallel discrete-event simulations is described in [8]. General partitioning and data decomposition strategies for parallel processing are explored in [3], but the authors do not specifically consider parallel simulation applications. Dynamic load balancing, in general, presents significant implementation challenges in parallel simulations, owing to the complexity of migrating a simulation object to a different process.

In this paper, we explore static partitioning strategies for load balancing in parallel simulations, confining ourselves specifically to problems where (i) conservatively synchronized parallel simulation is used, and (ii) the problem domains exhibit spatial clustering of load. Such domains are frequently characterized by the presence of geographic hot-spots, i.e., regions that generate significantly more computational and messaging load than other regions. Examples of such domains
include simulations of urban regions, transportation networks, epidemiological and biological networks and, generally, networks where interaction between entities is constrained by physical proximity.

The main partitioning technique described in this paper, spatial scattering, in which nearby entities are explicitly assigned to different processors with the aim of spreading the load across the processors, is motivated by two observations: (i) in many domains, load tends to be spatially clustered, and spatial scattering should therefore spread the load across the compute cluster, and (ii) in parallel simulations, equitable distribution of CPU load is often a greater determinant of execution speed than message-passing overhead. An analysis of the scatter partitioning technique, based on observations similar to the ones that motivated this work, was originally presented in [7]. In that model (based on the authors' experience with one-dimensional fluid-flow computations and other numerically intensive problem domains), work-loads of nearby regions also exhibit strong correlation, and the conclusions reached are similar to ours. The cost of the increased communication (which is very architecture dependent) that results from a scatter-based decomposition of processor workloads was not part of that model; thus, our work is a validation of the results of [7] on present-day architectures with high-speed interconnects.

Through large-scale parallel simulation experiments, on both abstracted and real simulation models on HPC clusters, we observe that scatter partitioning, even with its greatly increased messaging overhead, often significantly outperforms more conventional spatial partitioning techniques that seek to reduce messaging overhead. Further, even if hot-spots change over the course of the simulation, load continues to be balanced as long as the underlying feature of spatial clustering is retained, which leads us to the observation that spatial scattering can often obviate the need for dynamic load balancing.

In what follows, we first present a method to generate an abstracted model of a network with spatially clustered entity and load distribution. Next, we present experimental results from distributed simulation experiments on the abstract model using different partitioning strategies, comparing the performance of scatter partitioning with other algorithms. Finally, we evaluate load-balancing and scaling results using scatter partitioning on a real application simulator, a scalable, parallel transportation simulator that uses real road networks and realistic traffic patterns.

II. MODELING SIMULATION APPLICATIONS WITH SPATIAL LOAD CLUSTERING

In this section, we describe an abstracted simulation model that we use to compare different partitioning schemes. Our model allows us to capture real-world features like the presence of dense regions with large load profiles. We first construct a graph G = (V, E), which we call the simulation graph; the vertices in the graph, which are points in a plane lying within a bounding box, are also the load centers that generate the computational and messaging load. Messages are sent between vertices in G.

a) Vertex creation process: We start with a bounding box B (a square in our case) and discretize B into n equally sized square cells {c_1, ..., c_n}. The population of each cell (i.e., the number of vertices in each cell) is generated according to a distribution D (in our experiments D is either normal, uniform, or exponential).
We set the mean of the distribution to a fixed value (in the case of the normal distribution, the standard deviation is set to a fixed fraction of the mean). Let s_i, chosen from D, be the size of cell c_i (where 1 ≤ i ≤ n). Then, for each cell c_i, we assign s_i random vertices that are distributed within the given cell. Figures 1 and 2 show the spread of the vertices in the bounding box when the distribution D is normal, uniform, and exponential, respectively.

b) Edge creation process: Once the vertices are created, we create the edges in G. For our purposes, edges represent channels of communication between vertices in the graph, i.e., each vertex only communicates with another vertex with which it shares an edge. Since one of the features of networks that exhibit spatial clustering is short edges, during the edge creation phase an edge is much more likely to be assigned between vertices that are close to each other. Let v be a vertex in G and let α ≥ 2 be a parameter. Define D(v) = Σ_{u ∈ V} d(u,v)^(-α), where d(u,v) is the Euclidean distance between u and v. We pick an edge (v,u) in the graph G with probability d(v,u)^(-α) / D(v). This guarantees that most of the edges within the graph are short. In our experiments, we set α = 2.
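To make the construction above concrete, the following is a minimal, illustrative C++ sketch of the vertex and edge creation steps; it is not the authors' SimCore code. The cell-population distribution and its parameters, and the number of neighbor draws per vertex, are stand-in assumptions; only the distance-biased edge probability d(v,u)^(-α)/D(v) is taken from the text.

#include <algorithm>
#include <cmath>
#include <random>
#include <utility>
#include <vector>

struct Vertex { double x, y; };
using Edge = std::pair<int, int>;

// Vertex creation: the bounding box [0,B) x [0,B) is cut into k x k cells and
// each cell receives a population s_i drawn from the chosen distribution D.
std::vector<Vertex> placeVertices(double B, int k, std::mt19937& rng) {
    std::vector<Vertex> vertices;
    const double cell = B / k;
    std::normal_distribution<double> D(100.0, 10.0);           // D = normal; parameters are placeholders
    std::uniform_real_distribution<double> offset(0.0, cell);
    for (int row = 0; row < k; ++row)
        for (int col = 0; col < k; ++col) {
            int s = std::max(0, static_cast<int>(std::lround(D(rng))));  // cell population s_i
            for (int j = 0; j < s; ++j)
                vertices.push_back({col * cell + offset(rng), row * cell + offset(rng)});
        }
    return vertices;
}

// Edge creation: for each vertex v, neighbors are sampled with probability
// d(v,u)^(-alpha) / D(v), where D(v) = sum_u d(v,u)^(-alpha), so short edges dominate.
// Drawing a fixed number of neighbors per vertex (to hit a target average degree)
// is an assumption; the text leaves the exact number of draws unspecified.
std::vector<Edge> createEdges(const std::vector<Vertex>& vs, double alpha,
                              int neighborsPerVertex, std::mt19937& rng) {
    std::vector<Edge> edges;
    for (int v = 0; v < static_cast<int>(vs.size()); ++v) {
        std::vector<double> weight(vs.size(), 0.0);
        for (int u = 0; u < static_cast<int>(vs.size()); ++u) {
            if (u == v) continue;
            double d = std::hypot(vs[v].x - vs[u].x, vs[v].y - vs[u].y);
            weight[u] = std::pow(std::max(d, 1e-9), -alpha);   // d(v,u)^(-alpha)
        }
        std::discrete_distribution<int> pick(weight.begin(), weight.end());  // normalizes by D(v)
        for (int j = 0; j < neighborsPerVertex; ++j)
            edges.push_back({v, pick(rng)});
    }
    return edges;
}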
c) Computational load generation procedure: We generate a load demand for each vertex in the graph as follows: for a cell c_i with s_i vertices lying inside it, the mean of the load (normally distributed within the cell) is s_i, with the standard deviation set to a fixed fraction of s_i. For every vertex v ∈ V we do the following: if c_j is the cell that contains v (i.e., the coordinates of v lie within the boundary of c_j), then we pick a random number n_v from a normal distribution with mean s_j and the corresponding standard deviation. The load on v is then proportional to n_v (the exact mechanism for generating load is described in Section IV). The right part of Figure 2 shows the spatial distribution of computational load when D is exponentially distributed. The load of a vertex dictates the number of cycles spent by the processor to which v has been assigned. Additional computational load might be generated for each vertex due to messages it receives (see below).

d) Message load generation procedure: The previous three steps are executed in the network creation phase, and their output is the input to the simulation. Message generation occurs during the simulation as follows: initially, each vertex v schedules m_v messages for itself, m_v being dependent on the load demand for that vertex generated in the previous step. When a vertex u receives a message it does the following: with probability p, vertex u generates another message and sends this message to one of its neighbors picked randomly from its neighbor list, and with probability 1 - p it executes a busy-wait cycle to emulate computational load. We refer to p as the message forwarding probability; it is an input parameter in our experiments.
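As an illustration of steps c) and d), here is a small C++ sketch of the per-vertex load draw and the message-handling rule; it is not the actual load simulator code. The standard-deviation divisor and the sendTo/busyWait callbacks are assumptions introduced only for the example.

#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Load demand n_v for a vertex lying in cell j with population s_j.
double loadDemand(double s_j, std::mt19937& rng) {
    std::normal_distribution<double> load(s_j, s_j / 10.0);   // sigma as a fraction of s_j (divisor assumed)
    return std::max(0.0, load(rng));
}

// Behavior of a vertex upon receiving a message: with probability p, forward a new
// message to a random neighbor; otherwise burn cycles to emulate computational load.
void onMessage(double p, const std::vector<int>& neighbors, std::mt19937& rng,
               void (*sendTo)(int dst), void (*busyWait)()) {
    std::bernoulli_distribution forward(p);
    if (forward(rng) && !neighbors.empty()) {
        std::uniform_int_distribution<std::size_t> pick(0, neighbors.size() - 1);
        sendTo(neighbors[pick(rng)]);   // message forwarding, probability p
    } else {
        busyWait();                     // computational load, probability 1 - p
    }
}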
III. PARTITIONING STRATEGIES

As noted before, in parallel discrete-event simulations with conservative synchronization schemes, the difference between the simulation clocks of any two logical processes is constrained by the look-ahead time. Thus, a simulation process with a relatively high computational load, whose simulation clock progresses at a slower rate, slows down the entire system; essentially, the speed of simulation execution is determined by the slowest process. The computational load on a simulation process is determined by the partitioning scheme used to assign simulation objects to processors. In this section, we explore different strategies for partitioning the simulation workload on a high-performance cluster. The ideal partitioning algorithm achieves perfectly balanced computational load while also keeping message overhead to a minimum. For networks with spatially clustered load, we explore the effectiveness of four different partitioning algorithms in this paper.

(1) Geographic Partitioning with Balanced Entity Load: In networks with spatially clustered load, message trajectories are also spatially constrained. Thus, most of the messages are exchanged between entities that lie close to each other. A partitioning algorithm that minimizes messaging overhead would assign entities that are geographically close to the same processor. A simple geographic partitioning scheme would divide the simulated region into a grid, and assign every entity that falls into a grid cell to the same processor. Since the network is not uniformly distributed in terms of node density (as is the case with the exponentially and normally distributed networks seen earlier), a naive geographic partitioning would end up with highly unbalanced load. To mitigate this effect, we perform a geometric sweep assigning an equal number of entities to each grid cell; the resulting partition consists of non-uniform rectangular grid cells.

(2) Geographic Partitioning with Balanced Computational Load: Computational load distribution can also be highly uneven in certain networks, especially networks with exponential properties. In such cases, a geographic partitioning with balanced entities can result in unbalanced computational load. An alternative to balancing the entity count is to balance the estimated computational load across processors during the geographic sweep. While this is fairly easy to do when one has a priori knowledge about load, as is the case in our abstracted model, in real applications this is often a difficult task. Load profiles can vary significantly during the course of the simulation, and complicated network effects can come into play, altering the estimated load contribution of a network entity.

(3) Scatter Partitioning: Scatter partitioning exploits the property of spatial clustering of load by explicitly assigning entities that lie close to each other to different processors, i.e., entities are scattered across the cluster. Since entities that are close to each other have similar load characteristics, scattering the entities achieves the effect of scattering the load across the cluster. The obvious drawback of this approach is the additional messaging and synchronization overhead. However, one of our observations in this paper is that, often, for large-scale parallel simulations on high-performance clusters with fast interconnects, computation time dominates message-passing overhead. As we shall see in the following sections, the additional overhead from message passing is more than compensated for by the balanced load. An illustrative sketch contrasting schemes (1) and (3) is given below.
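The following C++ sketch (illustrative only, not the authors' implementation) contrasts a geographic sweep with balanced entity counts against scatter partitioning. The sweep key (sort by x, then y) and the round-robin dealing order are assumptions; the essential difference is that the sweep keeps spatial neighbors on the same processor while scatter deals them out across all P processors.

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <tuple>
#include <vector>

struct Vertex { double x, y; };   // same vertex type as in the earlier sketch

static std::vector<int> spatialOrder(const std::vector<Vertex>& vs) {
    std::vector<int> order(vs.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return std::tie(vs[a].x, vs[a].y) < std::tie(vs[b].x, vs[b].y);  // sweep along x, then y
    });
    return order;
}

// (1) Geographic sweep with balanced entity counts: cut the spatial order into P
// contiguous slabs of (nearly) equal size, so neighbors share a processor.
std::vector<int> geographicBalancedEntities(const std::vector<Vertex>& vs, int P) {
    std::vector<int> part(vs.size());
    std::vector<int> order = spatialOrder(vs);
    for (std::size_t rank = 0; rank < order.size(); ++rank)
        part[order[rank]] = static_cast<int>(rank * P / order.size());
    return part;
}

// (3) Scatter partitioning: walk the same spatial order but deal vertices out
// round-robin, so geographically adjacent (and similarly loaded) vertices land on
// different processors.
std::vector<int> scatterPartition(const std::vector<Vertex>& vs, int P) {
    std::vector<int> part(vs.size());
    std::vector<int> order = spatialOrder(vs);
    for (std::size_t rank = 0; rank < order.size(); ++rank)
        part[order[rank]] = static_cast<int>(rank % P);
    return part;
}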
Fig. 1. Left: Heat plot depicting the distribution of the load centers (vertices) when D is the normal distribution. The darker a cell, the more vertices it has; vertices within a cell have similar load characteristics. Right: Heat plot for the uniform distribution.

Fig. 2. Left: Heat plot depicting the distribution of the load centers (vertices) when D is the exponential distribution. Right: Distribution of the computational load.

IV. DISTRIBUTED SIMULATIONS OF ABSTRACT MODEL

To simulate our abstracted network and load model, we implemented a parallel, discrete-event load simulator. For this, we used a generic C++ framework called SimCore that provides application programming interfaces (APIs) and generic data structures for building scalable distributed-memory discrete-event simulations. SimCore uses PrimeSSF [9] as its underlying simulation engine, which uses conservative synchronization protocols and provides APIs for message passing. The load simulator uses SimCore constructs to model the abstract network, which primarily consists of nodes that are scattered in a given region according to some statistical distribution (described in Section II). Each node is assigned a load profile, which depends on the node density of the grid cell containing the node.

The main global inputs to the simulation are: a) the total simulation time T, b) an a priori total event load L, c) the probability parameter p that determines the computational and messaging load, and d) the mean number of computational cycles C for each computation. Let N(V) = Σ_{v ∈ V} n_v, where n_v (defined in Section II) is the load parameter generated for vertex v. The basic algorithm for the load simulator is given in Figure 3.

Since p < 1, the total number of events generated in the simulation will die out (as it forms a subcritical branching process), independently of the type of partition or even the type of network. Each event causes a message to be generated only if the recipient of the event is on a different processor; thus, the number of messages generated depends on the partitioning algorithm. Obviously, this is a greatly simplified model of a simulation application; I/O, which is often a major bottleneck, is not simulated directly. However, the slowdown of a particular simulation process due to I/O can be captured through busy-wait cycles.
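To make the die-out claim concrete (this back-of-the-envelope calculation is ours, not from the original text): each event spawns exactly one follow-on event with probability p and none otherwise, so a single seed event gives rise to an expected 1 + p + p^2 + ... = 1/(1 - p) events in its chain. With L seed events, the expected total event count is therefore L/(1 - p), which is finite whenever p < 1; for the largest forwarding probability used below, p = 0.9, the simulator processes roughly 10L events in expectation.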
LOAD SIMULATOR ALGORITHM

1) Each vertex v schedules (n_v / N(V)) * L events for itself. Note that the sum of all these events is L.

2) Upon receiving an event, a node u:
a) With probability p, sends an event (message) to a neighboring node, or
b) With probability 1 - p, simulates a computational cycle. Computational cycles are implemented by a simple busy-wait loop; the number of iterations is geometrically distributed with mean C.

Fig. 3. Load Simulator Algorithm

V. SIMULATION EXPERIMENTS ON GENERAL MODEL

In this section, we present experimental results for the load simulator using the various partitioning schemes described in Section III. All experiments were carried out on a high-performance computing cluster at the Los Alamos National Laboratory, comprising 29 compute nodes, with each node consisting of two 64-bit AMD Opteron 2.6 GHz processors and 8 GB of memory. The compute nodes are connected by a Voltaire InfiniBand high-speed interconnect. For the partitioning experiments described in this section, we limited the cluster size to 64 compute nodes; scaling results on larger cluster sizes for real simulation applications are described in Section VI.

We simulated networks with the different spatial distributions (exponential, normal, and uniform), with an average connectivity degree of five. For the input parameters (see the previous section), we used fixed values of T, L, and C across all runs. We test scenarios with different messaging overheads by varying the value of p to be one of 0.1, 0.5, or 0.9.

Figure 4 shows the execution time and the number of messages passed in the load simulator under the various partitioning schemes for the different types of networks, when the messaging probability is 0.9. Even in this scenario with the largest number of inter-process messages, scatter outperforms geographic partitioning (with balanced entities) by almost an order of magnitude, despite its message overhead being twice that of the geographic partitioning schemes. For the subsequent scenario (Figure 5), when the message passing probability is decreased to 0.5, scatter has even better relative performance, even though the absolute execution time increases because of the increased number of busy-wait cycles (computational load). The results shown here are for the exponentially distributed network; similar results were observed in the normally and uniformly distributed networks.

Figures 6 and 7 illustrate the underlying reason for the superior performance of scatter in comparison to the geographic partitioning schemes. Each 3D bar represents the computational load (in terms of busy-wait cycles) on one processor of the 64-processor cluster in the exponential network case, for p = 0.9 and p = 0.5. Not surprisingly, the worst performing scheme (geographic with balanced entities) is also the most unbalanced (the latter being the cause of the former), while the plots for scatter partitioning are more or less flat, indicating a well-balanced load across the cluster. The Min/Max load ratio depicted in these figures is a useful metric for fairness comparison: since the speed of execution in a synchronized simulation is determined by the slowest process, a low Min/Max ratio will severely degrade performance, with the ideal ratio being 1.
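As a side note, the Min/Max fairness metric is straightforward to compute at the end of a run. The following C++/MPI sketch (illustrative, not taken from the load simulator's instrumentation) reduces each rank's accumulated busy-wait load and reports the ratio, whose ideal value is 1; the placeholder local load in main() is an assumption for the example.

#include <mpi.h>
#include <cstdio>

// Compute the Min/Max load ratio across all ranks from each rank's local load
// (e.g., accumulated busy-wait cycles). A ratio near 1 indicates a well-balanced
// run; a small ratio means the slowest rank is dragging down the whole simulation.
double minMaxFairness(double localLoad) {
    double minLoad = 0.0, maxLoad = 0.0;
    MPI_Allreduce(&localLoad, &minLoad, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);
    MPI_Allreduce(&localLoad, &maxLoad, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
    return (maxLoad > 0.0) ? minLoad / maxLoad : 1.0;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    // Placeholder local load; in the simulator this would be the busy-wait total.
    double localBusyWait = 1.0e9 + 1.0e7 * rank;
    double fairness = minMaxFairness(localBusyWait);
    if (rank == 0) std::printf("Min/Max load ratio: %.4f\n", fairness);
    MPI_Finalize();
    return 0;
}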
VI. PERFORMANCE ON REAL APPLICATIONS

In this section, we present results for the different partitioning algorithms on a real application simulator. FastTrans [11], [12] is a scalable, parallel transportation microsimulator for real-world road networks that can simulate and route millions of vehicles significantly faster than real time. FastTrans combines parallel, discrete-event simulation (using conservative synchronization) with a queue-based approach to traffic modeling to accelerate road network simulations. Vehicular trips and activities are generated using the detailed and realistic activity modeling capability of the Urban Population Mobility modeling tools that were developed as part of the TRANSIMS [10] project at the Los Alamos National Laboratory.

For the results presented in this section, we simulated a portion of the road network in the north-east region of the United States, covering most of the urban regions of New York and parts of New Jersey and Connecticut; the network graph consists of approximately half a million nodes, about a million links, and over 25 million vehicular trips. A more detailed description of FastTrans is given in [11], [12]. Figure 8 depicts a heat map of the road network in the New York region, with areas in red being the busiest points in the network and areas in blue being regions with low traffic volume. Spatial clustering of load is a conspicuous feature of the road network, with urban downtowns having much higher activity than other regions; simulated activity decreases significantly as one moves away from the core regions. Further, vehicle trajectories are, by nature, spatially constrained. Thus, the overwhelming majority of messages (which are used to represent vehicles in FastTrans) are exchanged between neighboring network elements. The spatial load clustering and localized messaging nature of FastTrans make it a good candidate for testing the effectiveness of scatter partitioning in comparison to the other schemes.
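To illustrate why vehicle messages stay local, here is a generic, heavily simplified C++ sketch of a queue-based road-link model in an event-driven simulation of this kind; it is not FastTrans code, and the traversal time and service headway are invented constants.

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

struct Vehicle { std::uint64_t id; double arrivalTime; };

struct Link {
    double traversalTime = 30.0;   // free-flow traversal time in seconds (illustrative)
    double nextFreeExit  = 0.0;    // earliest time the next vehicle can leave (queue state)
    std::vector<int> downstream;   // ids of the adjacent links reachable from this one

    // A vehicle arrival is an event carrying little more than a timestamp. The handler
    // queues the vehicle behind earlier traffic and schedules its departure toward one
    // downstream link; that departure becomes an inter-processor message only when the
    // chosen link is owned by a different rank, so all traffic stays between neighbors.
    std::pair<int, double> onArrival(const Vehicle& v, int chosenDownstream) {
        double exitTime = std::max(v.arrivalTime + traversalTime, nextFreeExit);
        nextFreeExit = exitTime + 2.0;   // fixed per-vehicle service headway (assumed)
        return {chosenDownstream, exitTime};
    }
};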
Fig. 4. Comparison of the execution time (above) and messages passed for the exponential, normal, and uniform networks under different partitioning schemes, with the message forwarding probability p = 0.9.

Fig. 5. Comparison of the execution time (above) and messages passed with the message forwarding probability p = 0.5.

Fig. 6. Computational load (busy-wait) distribution across processors in the load simulator in a 64-CPU run under different partitioning schemes with p = 0.9. The first two plots are for geographic partitioning with balanced entities and with balanced computational load, respectively, while the plot on the right is for scatter partitioning. The underlying distribution D is exponential.

Fig. 7. Computational load (busy-wait) distribution across processors in the load simulator in a 64-CPU run for balanced entity, balanced computational load, and scatter partitioning with p = 0.5. The underlying distribution D is exponential.
Fig. 8. Distribution of event load in the New York region.

Fig. 9. Assignment of entities under the geographic partitioning scheme with balanced entities. All entities in one colored box are assigned to the same processor.

Fig. 10. A zoomed-in view of the assignment of entities under the scatter partitioning scheme. Nearby entities are assigned different colors (processors).

Fig. 11. Comparison of execution times of FastTrans (as a function of the number of CPUs) under different partitioning schemes.

Fig. 12. Comparison of the number of messages passed in FastTrans (as a function of the number of CPUs) under different partitioning schemes.

Fig. 13. Comparison of the fairness of computational load in FastTrans under different partitioning schemes.

All partitioning experiments described in this section were conducted on a high-performance computing cluster at Los Alamos National Laboratory for different processor configurations, ranging from 32 to 512 processors. Figure 11 illustrates the basic performance of the various partitioning schemes in terms of execution speed. Geographic partitioning with balanced entities performs the worst, while scatter partitioning is the fastest, outperforming geographic partitioning with balanced entities by about an order of magnitude. Interestingly, performance in terms of message-passing overhead is almost the reverse of execution time: the number of inter-process messages passed under scatter partitioning is an order of magnitude higher than under geographic partitioning (Figure 12).

The fairness of the load distribution is shown in Figures 14 through 16. These figures illustrate the computational load on each processor in the 256-processor set-up, with load being defined as a weighted sum of event and routing load. Each bar represents the load on one CPU over the entire simulation. The best-performing partitioning scheme (scatter) clearly produces a more balanced distribution than the other schemes. The fairness of the partitioning schemes, calculated as the ratio of the minimum to the maximum processor load for a given experiment (with the ideal ratio being 1), is a useful metric, since the speed of execution in a synchronized simulation is determined by the slowest process. Figure 13 compares this ratio for the various partitioning schemes; note how these ratios correlate with the speed of execution.

A. A Note on Dynamic Load Balancing

Achieving an equitable distribution of load across a cluster is a significant challenge as well as a necessity for computationally intensive tasks, especially on high-performance clusters where idle processors represent a huge waste of computational power. A partitioning algorithm can exploit knowledge of the problem domain and assign a priori load profiles to entities in the network, but as we have seen, these do not always translate into a balanced run-time load distribution. Further, complicated network effects and highly dynamic scenarios can result in significant geographic migrations of load. For example, in the case of a transportation network, the event and routing load profiles may differ significantly between the simulation of a disaster scenario and the simulation of a normal day.
Fig. 14. Computational load distribution of FastTrans in a 256-CPU run under scatter partitioning. Each bar represents the load on one CPU.

Fig. 15. Computational load distribution of FastTrans in a 256-CPU run under geographic partitioning with balanced entities.

Fig. 16. Computational load distribution of FastTrans in a 256-CPU run under geographic partitioning with balanced load.

However, if the load patterns continue to exhibit spatial clustering (as is often the case), it is fairly intuitive to see that scatter partitioning would continue to achieve a balanced load profile, thus largely obviating the need for dynamic load balancing. Generally, for problem domains that exhibit spatial clustering of load, scatter partitioning is a simple and highly effective approach.

VII. CONCLUSION

In this paper we presented a partitioning technique for achieving balanced computational load in distributed simulations of networks that exhibit spatial load clustering. Such domains include transportation networks, epidemiological networks, and other networks where geographic proximity constrains interaction between entities, often leading to geographic hot-spots. Through simplified models and a real application, we demonstrated the effectiveness of explicit spatial scattering for achieving near-optimal load balance and vastly improved execution times, even at the cost of increased messaging overhead compared to more traditional topological partitioning schemes. Directions for future research include validating the effectiveness of scatter partitioning for different types of simulation applications.

REFERENCES

[1] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam. PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Networked Parallel Computing. MIT Press, 1994.
[2] A. Boukerche and S. K. Das. Dynamic load balancing strategies for conservative parallel simulations. ACM SIGSIM Simulation Digest, 27(1), 1997.
[3] Phyllis E. Crandall and Michael J. Quinn. A partitioning advisory system for networked data-parallel processing. Concurrency: Practice and Experience, 1995.
[4] E. Deelman and B. K. Szymanski. Simulating spatially explicit problems on high performance architectures. Journal of Parallel and Distributed Computing, 62(3), 2002.
[5] Ewa Deelman and Boleslaw K. Szymanski. Dynamic load balancing in parallel discrete event simulation for spatially explicit problems. In Workshop on Parallel and Distributed Simulation, pages 46-53.
[6] Richard M. Fujimoto. Parallel discrete event simulation. Communications of the ACM, 33(10):30-53, 1990.
[7] D. Nicol and J. Saltz. An analysis of scatter decomposition. IEEE Transactions on Computers, 39(11), 1990.
[8] A. N. Pears and N. Thong. A dynamic load balancing architecture for PDES using PVM on clusters. In Recent Advances in Parallel Virtual Machine and Message Passing Interface, pages 66-73.
[9] PRIME: Parallel Real-time Immersive network Modeling Environment.
[10] L. Smith, R. Beckman, D. Anson, K. Nagel, and M. E. Williams. TRANSIMS: Transportation analysis and simulation system. In Proceedings of the Fifth National Conference on Transportation Planning Methods, Seattle, Washington, 1995.
[11] S. Thulasidasan and S. Eidenbenz. Accelerating traffic microsimulations: A parallel discrete-event queue-based approach for speed and scale. In Proceedings of the Winter Simulation Conference, 2009.
[12] S. Thulasidasan, S. Kasiviswanathan, S. Eidenbenz, E. Galli, S. Mniszewski, and P. Romero. Designing systems for large-scale, discrete-event simulations: Experiences with the FastTrans parallel microsimulator. In HiPC, 2009.