Disk-aware Request Distribution-based Web Server Power Management


Lin Zhong
Technical Report CE-4-ZPJ, Department of Electrical Engineering, Princeton University, Princeton, NJ 08544

Abstract

This work is concerned with reducing the power consumption of cluster-based web servers. We focus on server hard disks, a major source of server power consumption. We started with a modification of Logsim, a simulator for cluster-based web servers. The new simulator, NLogsim, behaves like a cluster-based web server that handles requests as they arrive. Based on NLogsim, we expose the relationship between file-cache size, workload, and disk idle periods. We explore the possibility of turning disks off dynamically according to the cluster's performance and of distributing requests in a disk-aware fashion. With our disk-aware request distribution, we show that disks can be turned off for more than 70% of the time with a reasonable rate of disk switchings and without significantly affecting the cluster's performance. This may lead to a more than 2.5X reduction in disk power. We also show that disk power management is complementary to CPU and whole-server power management, and is often more flexible and effective than server power management.

I. Introduction

We introduce the project from two different perspectives: request distribution and power management.

A. Request distribution

How fast a request for a file is served by a cluster of web servers is coarsely determined by two factors: 1) how many requests are ahead of it when it is assigned to a server; 2) where the requested file is stored in that server, in memory (the file cache) or on hard disk. These two factors boil down to two server performance rules: 1) avoid overloading any server, and 2) improve file-cache hit rates. Conventional request distribution techniques achieve these performance rules with a common philosophy: all working servers should work in a similar way.
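The conventional philosophy can be illustrated with a minimal, hypothetical dispatcher in which every working server is treated alike: a request goes to the server already caching its file, otherwise to the least loaded one. The Server class, field names, and setup below are illustrative only, not code from Logsim or LARD:

```python
# Minimal sketch of the conventional locality-aware philosophy:
# all working servers are treated the same way.
# Class and field names here are illustrative, not from Logsim.

class Server:
    def __init__(self, name):
        self.name = name
        self.active_connections = 0   # simple proxy for load

def dispatch(target, servers, assignment):
    """Send a request to the server already caching the file,
    falling back to the least loaded server otherwise."""
    if target in assignment:
        server = assignment[target]
    else:
        server = min(servers, key=lambda s: s.active_connections)
        assignment[target] = server
    server.active_connections += 1
    return server

servers = [Server("be%d" % i) for i in range(4)]
assignment = {}
first = dispatch("/index.html", servers, assignment)
second = dispatch("/index.html", servers, assignment)
assert first is second   # locality: same file goes to the same back-end
```

Note that nothing in this scheme distinguishes servers by disk state; that uniformity is exactly what the rest of this report revisits.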
Indeed, it is friendly to CPU power management (see the Appendix) because the CPU can scale its power consumption with the workload. However, there is so far no such scalability for hard disks. Hard disks have to rely on the STANDBY mode, or on being turned off, to save energy. For hard disks to benefit from these two modes, disk idle periods must be long enough that the overhead of mode transitions is amortized. We find that there are many idle periods for hard disks in web servers, accounting for 20% to 90% of the total time. Unfortunately, the conventional philosophy distributes the idle periods evenly among the servers in the cluster, so that the number of idle periods long enough for entering the STANDBY/OFF mode is reduced. Our request distribution technique is based on a different philosophy: different servers work on different types of requests. In our solution, there are two types of servers: servers with their hard disk on and servers with their hard disk off. Servers with disks on serve requests for non-popular files, while servers with disks off serve requests for popular, cached files. By properly distributing requests and adjusting the number of diskless servers, the performance rules can still be achieved.

B. Power management

Power consumption by web servers has become an important concern. Research has been conducted on CPU and whole-server power management. CPU power management dynamically adjusts CPU speed by varying the clock frequency and supply voltage. Server power management dynamically turns servers in a cluster off or on. Both solutions capitalize on the dynamics of web server workload intensity: if there are more requests to serve per second, one needs more servers and faster CPUs, and vice versa. We propose to capitalize on the temporal dynamics of the demand for disk bandwidth: turning the hard disk of a web server in a cluster off or on, and distributing requests in a disk-aware way.
We call these disk power management and disk-aware request distribution (DARD), respectively. The rationale behind disk power management is that the disk load will be quite low if the locality of file requests is high and the file-cache size is large. DARD improves the opportunities for disk power management. Built on DARD, disk power management is more flexible than server power management because of its lower impact on the cluster's capacity and its smaller overhead, and it is complementary to CPU power management. We validate our techniques with a simulator for cluster-based web servers, a modified Logsim called NLogsim, and the World Cup 1998 web server traces. We show that with DARD and disk power management, disks can be turned off for a significant fraction of the time without incurring much overhead. We also compared it with server

power management and investigated different load-balancing techniques for DARD. Due to lack of time, power models are not yet implemented in NLogsim. Therefore, we report the percentage of time that disks are powered off as an indirect indicator of power savings, and the number of times disks are turned off/on as an indirect indicator of overhead. However, an approximate power analysis shows that a more than 2.5X disk power reduction is possible. The report is organized as follows. In Section II, background and motivations for this project are offered, and the relationships between file-cache size, workload, and hard-disk idle time are exposed by simulation. In Section III, the new request distribution mechanism for disk power management is presented. Details about NLogsim and the experimental settings are provided in Section IV. Experimental results are presented in Section V, together with a simple power analysis. Related works and conclusions are then offered in Sections VI and VII, respectively.

II. Background and Motivations

In this section, we present background and motivational information for this project.

A. Web server power consumption

A web server is essentially a computing and storage device without a display. Table 1 summarizes the power characterization of a web server in [2]. That server has a desktop hard disk, which is not always the case. Therefore, Table 1 also shows the power consumption of a high-performance hard disk [7], which is more common in web servers. As the table shows, the CPU and the hard disk are the two dominant sources of power consumption. After CPU power management techniques are applied, the hard disk will loom large, especially when a high-performance disk is used.

B. Hard disk power consumption

Unlike CPUs, hard disks are barely scalable in terms of performance. Commercially available disks usually operate in one of three modes. In the ACTIVE mode, the disk is writing/reading data.
When there is no disk access, a disk is in the IDLE mode with the spindle motor still spinning. For power savings, most disks offer the STANDBY mode, in which the spindle motor is stopped. Moreover, powering the disk off can be regarded as a power mode too (the OFF mode). Without industrial support for multi-speed disks, one has to rely on the STANDBY and OFF modes to save disk power. Transitions between modes incur power and delay overhead. For the IBM Ultrastar 36ZX [7], the transition from IDLE to STANDBY takes 15 seconds, and that from STANDBY to ACTIVE takes 26 seconds at 34.8 Watts, while transitions between IDLE and ACTIVE are instant. Since transitions from STANDBY/OFF to ACTIVE/IDLE incur significant delay and power overhead, using the STANDBY and OFF modes is efficient only when the idle periods are long enough. This motivates our proposal to distribute requests so that disk idle periods are concentrated. Moreover, since the STANDBY mode still consumes significant power, the hard disk may be turned completely off once the idle periods are concentrated.

C. Disk idle periods for LARD

Using NLogsim, we would like to reveal the relationships between disk idle periods and other cluster configurations for locality-aware request distribution (LARD) clusters [11]. The fraction of total simulation time in which disks are idle hints at the upper bound for disk power savings. Since an idle period must be long enough for a disk to benefit from the STANDBY and OFF modes, the fraction of total disk idle time spent in sufficiently long idle periods is also interesting.

C.1 File-cache size

Trace WC-5-4 and a four-server cluster configuration are used in this analysis. Please refer to Section IV for details about the web access trace.
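The break-even reasoning above can be made concrete. The sketch below uses the Ultrastar 36ZX figures quoted in this section (idle 22.3 W and standby 12.7 W from Table 1, spin-up 26 s at 34.8 W, spin-down 15 s); the spin-down power figure is garbled in the source, so the idle power is assumed for it purely for illustration:

```python
# Break-even idle-period length for entering STANDBY, using the
# IBM Ultrastar 36ZX figures quoted in Section II: idle 22.3 W,
# standby 12.7 W, spin-up 26 s at 34.8 W, spin-down 15 s.
# The spin-down power is missing from the source text; 22.3 W
# (the idle power) is ASSUMED here purely for illustration.

P_IDLE, P_STANDBY = 22.3, 12.7   # Watts
T_DOWN, P_DOWN = 15.0, 22.3      # spin-down: seconds, assumed Watts
T_UP, P_UP = 26.0, 34.8          # spin-up: seconds, Watts

def breakeven_seconds():
    """Smallest idle period T for which spinning down saves energy.

    Staying idle costs P_IDLE * T.  Spinning down costs the two
    transitions plus standby power for the remaining time:
      E = T_DOWN*P_DOWN + T_UP*P_UP + (T - T_DOWN - T_UP)*P_STANDBY
    Solving E < P_IDLE * T for T gives the threshold below.
    """
    fixed = T_DOWN * (P_DOWN - P_STANDBY) + T_UP * (P_UP - P_STANDBY)
    return fixed / (P_IDLE - P_STANDBY)

print("break-even idle period: %.1f s" % breakeven_seconds())
```

Under these assumed numbers the threshold is on the order of a minute, which is why short, scattered idle periods are of little use for disk power savings.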
Figure 1 shows the fraction of total simulation time in which disks are idle, and the fractions of disk idle time spent in idle periods longer than 5 seconds and 10 seconds, as the file-cache size on each server increases from 32 MB to 512 MB. It demonstrates that the disks spend most of their time idle, and that increasing the file-cache size does not increase their idle time very much. This may be attributed to the relatively light workload and the relatively small total size of the requested files in the WC-5-4 trace. However, increasing the file-cache size drastically increases the fraction of idle time spent in long idle periods.

C.2 Workload intensity

Figure 2 shows how the disk idle time percentage changes when the requested file sizes and the request rate increase in the way described in Section IV. The high fraction of time that disks spend in idle periods offers enormous opportunities for disk power savings. Conventional time-out power management could be applied to server disks, but it would waste many of these opportunities during the time-out periods. Moreover, since LARD forwards requests to servers without regard to their disk status, its disk idle periods are shortened compared to DARD, as we will see later.

III. Disk-aware Request Distribution

The best-known request distribution technique for cluster-based web server performance is locality-aware request distribution (LARD) [11]. Our technique is based on LARD.

A. Assumptions

A.1 Information keeping

We use the same front-end/back-end cluster architecture as in [11]. The following assumptions are made: As in LARD, the front-end is responsible for handing off new connections and passing incoming data from the client to the back-ends.

Table 1
Power consumption of different components in a white-box server

Component   Description                     Idle (Watt)                   Busy (Watt)
CPU         Pentium III, 2.0 V, 600 MHz
Memory      512 MB
Hard disk   IBM 6 GB Deskstar + controller  about 2.                      about 2.
Hard disk   IBM Ultrastar 36ZX, 33.6 GB     22.3 (idle), 12.7 (standby)   39

Fig. 1. Disk idle periods vs. file-cache sizes.
Fig. 2. Disk idle time percentage.

The front-end knows whether a file is cached and, if so, where. The front-end keeps the following information about every back-end: 1) whether its hard disk is on; 2) recent request delays and finishing times.

A.2 Cluster configuration

Although LARD investigates different configurations of cluster-based web servers, we focus on the following configuration. Each back-end has a single hard disk. Unless otherwise noted, all systems have four back-ends, and each back-end has a 32 MB file cache. A file can be cached at no more than one back-end at a given time; that is, there is no file-cache replication. The file-cache replacement policy is Greedy-Dual-Size (GDS) [3]. The hard disk of a back-end can be turned off and on at the front-end's instruction. Back-ends are able to transfer files to each other at the front-end's instruction. Better strategies actually obviate this last assumption.

B. Basic Disk-aware Request Distribution

When a request r reaches the front-end, it is handled using the strategy presented in Algorithm 1.

Algorithm 1 Basic DARD strategy.
  if server[r.target] = null then
      n, server[r.target] <- {least loaded disk node};
  else
      n <- server[r.target];
  send r to n;
  return;

When a request for a file arrives, the front-end first checks whether the file is cached at some back-end. If so, the request is immediately sent to that back-end. Otherwise, the front-end picks the least loaded back-end with its disk on and sends the request to it.
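Algorithm 1 can be sketched in Python as follows; the Node class and the server mapping are hypothetical stand-ins for the front-end state described above, not actual NLogsim code:

```python
# A Python sketch of the basic DARD strategy (Algorithm 1).
# Node and the `server` mapping are illustrative stand-ins for the
# front-end state; they are not taken from Logsim/NLogsim.

class Node:
    """A back-end with its disk on; `load` is its recent late delay rate."""
    def __init__(self, name, load=0.0):
        self.name, self.load = name, load
        self.inbox = []          # requests forwarded to this back-end

    def send(self, request):
        self.inbox.append(request)

def basic_dard(request, server, disk_nodes):
    """Route `request`: a cached file goes to its caching node; an
    uncached file goes to the least loaded node with its disk on."""
    target = request["target"]
    node = server.get(target)
    if node is None:
        node = min(disk_nodes, key=lambda n: n.load)
        server[target] = node    # remember the caching node
    node.send(request)
    return node

disk_nodes = [Node("be0", 0.2), Node("be1", 0.05)]
server = {}
n1 = basic_dard({"target": "/a.html"}, server, disk_nodes)
n2 = basic_dard({"target": "/a.html"}, server, disk_nodes)
assert n1 is n2 and n1.name == "be1"   # locality preserved
```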
There are two differences between basic DARD and basic LARD. First, requests for un-cached files are sent only to back-ends with disks on. Second, since we do not permit file-cache replication, we send a request for a cached file to the caching node whether or not it is overloaded. Unlike LARD, which relies entirely on request distribution to avoid overloading a back-end, DARD uses a mechanism called file migration to avoid overloading back-ends.

B.1 Back-end load

LARD and Logsim [11] use the number of active connections as the gauge of a back-end's load, an indirect indicator of the back-end's delay in responding to a request. In this project, we instead use the rate of recent late delays, similar to the metric used in [6]. A response is late if its delay is longer than 5 ms [6]. The front-end keeps the recent delays and the corresponding finishing times for each back-end. In this project, "recent requests" means the last one hundred requests finished in the last ten seconds. The load is calculated as the fraction of late delays among the recent delays, called the recent late delay rate. If no request finished in the last ten seconds, the rate is set to zero. We use the recent late delay rate as a back-end's load and use the late delay rate of the whole system as a performance gauge.
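The load metric just described can be sketched as follows; the class name and bookkeeping are illustrative, with the 5 ms threshold and the 100-request/10-second window taken from the text:

```python
# Sketch of the "recent late delay rate" load metric: the fraction of
# late responses among the last 100 requests finished in the last ten
# seconds.  The threshold and window sizes come from the text; the
# class itself is an illustrative stand-in for front-end bookkeeping.

from collections import deque

LATE_THRESHOLD = 0.005   # seconds (5 ms, as in the text)
WINDOW_COUNT = 100       # last one hundred requests
WINDOW_SECONDS = 10.0    # finished in the last ten seconds

class BackendLoad:
    def __init__(self):
        # (finish_time, delay); maxlen keeps only the last 100 entries
        self.finished = deque(maxlen=WINDOW_COUNT)

    def record(self, finish_time, delay):
        self.finished.append((finish_time, delay))

    def recent_late_delay_rate(self, now):
        recent = [(t, d) for (t, d) in self.finished
                  if now - t <= WINDOW_SECONDS]
        if not recent:           # no request finished recently
            return 0.0
        late = sum(1 for (_, d) in recent if d > LATE_THRESHOLD)
        return late / len(recent)
```

For example, with one 10 ms response and one 1 ms response finished a few seconds ago, the rate is 0.5; once both fall out of the ten-second window it returns to zero.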

C. File migration

File migration is the key mechanism for achieving the server performance rules. It is initiated periodically by the front-end. Algorithm 2 illustrates the first policy we designed; later, in Section V, we will present better and simpler policies. Note that the least evictable file is the last file the file-cache replacement policy would evict to free space. SLAHIGH means "too high for the service-level agreement"; 0.5 is used in this study. Similarly, SLALOW means "too low for the service-level agreement"; 0.1 is used.

Algorithm 2 File migration strategy.
  m <- {most loaded diskless node};
  n <- {most loaded disk node};
  k <- {least loaded diskless node};
  if m.load > n.load then
      /* Pipe 1: avoid overloading diskless nodes */
      f <- {most evictable file in m};
      evict f from m;
  else if n.load > SLAHIGH then
      /* Pipe 2: avoid overloading disk nodes */
      f <- {least evictable file in n};
      evict f from n;
      insert f in k;
  return;

View the back-ends as water tanks and the load as the water in the tanks. The basic DARD strategy determines how water enters the system and tries to balance load among back-ends with disks. The file migration strategy forms two unidirectional pipes between the group of diskless nodes and the group of disk nodes. Pipe 1 makes sure that the water level in the diskless nodes is always lower than that in the disk nodes. Pipe 2 makes sure the disk nodes will not be overloaded, by leaking popular files into the diskless nodes. One potential problem with Algorithm 2 is that it may encourage file exchanging between diskless and disk back-ends when the workload is high. This problem could be solved by imposing an extra condition requiring the least evictable file in the disk node to actually be less evictable than the most evictable file in the diskless node. This extra condition would stop Pipe 2 after popular files have been transferred to diskless nodes. However, it is more expensive, since it requires the front-end to keep global file-popularity information.
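The two pipes of Algorithm 2 can be sketched in Python; the CacheNode class is a hypothetical stand-in whose file list is kept ordered from most to least evictable:

```python
# Sketch of the file-migration policy (Algorithm 2), following the
# water-tank picture above.  CacheNode is an illustrative stand-in;
# its `files` list is ordered from most to least evictable.
# SLAHIGH = 0.5 as in the text.

SLAHIGH = 0.5

class CacheNode:
    def __init__(self, name, load, files):
        self.name, self.load = name, load
        self.files = list(files)          # most -> least evictable

    def most_evictable_file(self):
        return self.files[0]

    def least_evictable_file(self):
        return self.files[-1]

    def evict(self, f):
        self.files.remove(f)

    def insert(self, f):
        self.files.append(f)

def migrate_files(diskless_nodes, disk_nodes):
    """Pipe 1 drains an overloaded diskless node by evicting its most
    evictable file; Pipe 2 leaks a popular file from an overloaded
    disk node into the least loaded diskless node."""
    if not diskless_nodes:                # nothing to balance against
        return
    m = max(diskless_nodes, key=lambda x: x.load)  # most loaded diskless
    n = max(disk_nodes, key=lambda x: x.load)      # most loaded disk node
    k = min(diskless_nodes, key=lambda x: x.load)  # least loaded diskless
    if m.load > n.load:
        m.evict(m.most_evictable_file())           # Pipe 1
    elif n.load > SLAHIGH:
        f = n.least_evictable_file()
        n.evict(f)                                 # Pipe 2
        k.insert(f)
```

With one overloaded diskless node, Pipe 1 fires and sheds its most evictable file; with an overloaded disk node, Pipe 2 fires and moves its least evictable (most popular) file to a diskless node.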
Later on we will discuss other file-migration policies that may avoid this exchange-encouraging problem. In the next subsection, we address how to adjust the total capacity of the disk nodes by dynamically turning disks on and off according to the water level.

D. Disk management

The front-end periodically checks the load of the disk nodes. If they are overloaded, a diskless node is selected and its disk is turned on. On the other hand, if the load of the disk nodes is low, a disk node is selected and its disk is turned off. The disk management strategy is illustrated by Algorithm 3.

Algorithm 3 Disk management strategy.
  n <- {least loaded disk node};
  num <- {number of disk nodes};
  if n.load > SLAHIGH and num < NUMNODES then
      m <- {least loaded diskless node};
      turn on m's disk;
      m.hasdisk <- 1;
  else if num > 1 then
      ld <- {total load of disk nodes};
      if ld < SLALOW then
          m <- {least loaded disk node};
          turn off m's disk after it finishes all pending work;
          m.hasdisk <- 0;
  return;

Note that at least one node has its disk on at any given time, and the system starts with all disks on.

E. Overall architecture

The overall DARD system combines the strategies in Algorithms 1, 2, and 3, as illustrated by Algorithm 4.

Algorithm 4 Overall DARD architecture.
  while 1 do
      r <- GetNextRequest();
      if r != NULL then
          BasicDARD(r);
      if (currtime - lastdmtime) > DMPERIOD then
          DiskManagement();
          continue;
      if (currtime - lastfmtime) > FMPERIOD then
          FileMigration();
  end while

Note that DMPERIOD and FMPERIOD are different, with DMPERIOD > FMPERIOD, since the delay overhead of turning a disk off and on is large. In this study, DMPERIOD is 120 seconds and FMPERIOD is 12 seconds. Ideally, we should explore different values for them. Since at most one file is migrated per file migration and FMPERIOD is relatively long, the exchange-encouraging problem mentioned above can be avoided to some extent.

IV. Simulation

In this section, the real-time cluster-based web server simulator based on Logsim [11] is detailed.
The input web server trace is also described.

A. Simulation model

Since Logsim is designed mainly for file-cache behavior and throughput studies, its input requests have no timestamps. That is, the simulator processes a new request only when it finishes one.

Since we are interested in the idle periods of web servers, the request arrival time becomes important: the simulator has to handle requests when the simulation-internal clock reaches their arrival times. Therefore, we modified Logsim so that it handles requests in a real-time fashion. The modified Logsim is called NLogsim. Logsim is an event-based simulator: it contains an event loop and maintains an internal clock. NLogsim utilizes this event loop and internal clock. When NLogsim accepts a new request for processing, it also keeps the information about the next request, and it checks the internal clock in the event loop to see whether it is time to accept that next request. When there are no simulation events, NLogsim accepts the next request immediately and pushes the internal clock forward to that request's arrival time.

B. Simulation inputs

Each input to the simulator consists of a unique token, the size of the requested file, and the request arrival time. The trace used in this study is the 1998 World Cup web site access logs [1] from the Internet Traffic Archive [14]. Due to the overwhelming size of the WorldCup98 trace, only the trace of the first ten days with web accesses (day 5 to day 14) is used. Since the trace is relatively old, modern web servers will see more intense request rates and requests for larger files. Therefore, we scale the time intervals between requests and/or the file sizes to generate new traces. A trace named WC-4-2 has the time intervals scaled down by four times and the file sizes doubled, which is also called a 4X request-rate change and a 2X file-size change.

V. Experimental results

In this section, various aspects of DARD are investigated, and comparisons against LARD and other server power management techniques are made. DARD uses the file migration policy illustrated in Algorithm 2. In the following experiments, the default request-rate increase is five and the default file-size increase is four.

A.
DARD vs LARD

Figure 3 shows how the late delay rate and the max delay for DARD and LARD change when the requested file size increases. Figure 4 shows how the CPU and disk idle time percentages and the file-cache hit rate change; it also gives the average percentage of total time that DARD keeps a disk off. Figures 5 and 6 show how the same metrics change when the rate of file requests increases. Figure 7 shows the cumulative distribution of delays for both LARD and DARD on WC-5-4. Figure 8 shows the disk idle time distribution for both LARD and DARD for the trace WC-5-4; note that both axes are in logarithmic scale. Figures 3 to 8 demonstrate that DARD has performance close to LARD's and significantly increases the length of disk idle periods. Note that the CPU idle percentage for DARD is as high as that for LARD, which means DARD does not reduce the system's opportunities to benefit from CPU power management [6]. In this sense, DARD and disk power management are very much complementary to CPU power management.

Fig. 3. Late delay rate and max delay for LARD and DARD when the size of requested file increases.
Fig. 4. Performance metrics for LARD and DARD when the size of requested file increases.

B. DARD vs SARD

Many researchers have proposed to turn off servers when the workload is low to save power. When a server is turned off, the front-end does not forward requests to it until it is turned on again. This is called server-aware request distribution (SARD). We would like to see how DARD compares with SARD. DARD again uses the file migration policy illustrated in Algorithm 2. Figures 9 to 12 show the same metrics we showed for LARD vs. DARD in Figures 3 to 6. They clearly demonstrate DARD's performance advantage over SARD, especially when the file size increases. This is because turning a server off makes its file cache unavailable.
In DARD, when a server's disk is turned off, its file cache continues to function. Since turning off a whole server has a larger impact on the cluster's capacity than turning off a server's disk, we expect SARD systems to oscillate more, i.e., to turn servers off and on more frequently. This is verified by Figure 13, which shows the numbers of turn-ons and turn-offs for SARD and DARD when the request rate increases. Figure 13 demonstrates that SARD has to switch servers between on and off much more often to match the cluster's capacity to workload fluctuations.

Fig. 5. Late delay rate and max delay for LARD and DARD when the file request rate increases.
Fig. 6. Performance metrics for LARD and DARD when the file request rate increases.
Fig. 7. Cumulative delay distributions for LARD and DARD.
Fig. 8. Length distributions of disk idle periods for LARD and DARD.

C. File migration

Algorithm 2 shows the first policy we designed. However, it requires inter-back-end file transfers for Pipe 2, which not only impose performance/power overhead but also increase system complexity. In this subsection, we consider three alternatives. First, DARD will work even with no file migration at all; in this case, the system is called SDARD (simple DARD). Second, since Pipe 2 in Algorithm 2 is expensive, we consider a file migration policy with only Pipe 1; this is called RDARD1 (realistic DARD 1). Third, a different Pipe 2 can be used: instead of migrating the most evictable file from the most loaded disk back-end to the least loaded diskless back-end, the file can simply be evicted from the file cache, so no inter-back-end file transfer is needed. DARD using this policy is called RDARD2 (realistic DARD 2). The following figures show various aspects of these file migration policies; note that DARD using Algorithm 2 is still called DARD. Figures 14 and 15 show the maximum delay values; the late delay rates for these policies are very close. The figures demonstrate that the performance of DARD, RDARD1, and RDARD2 is barely distinguishable, while SDARD is slightly worse. Figures 16 and 17 show the percentage of total time the disks are turned off. Again, DARD, RDARD1, and RDARD2 are hard to distinguish, while SDARD is slightly worse. Figures 18 and 19 show the numbers of times that a disk is turned off/on. RDARD1 and RDARD2 are better than both DARD and SDARD, although the reason is not yet clear. Therefore, instead of the original policy, which requires inter-server file transfers, RDARD1 and RDARD2 can be used with similar performance and lower overhead. Moreover, they are simpler and consistent with the file-caching system. In view of SDARD's worse performance and smaller disk-off time, one can also conclude that file migration improves both a DARD system's performance and its power efficiency. Algorithm 2 balances load between diskless and disk back-ends by offloading files from the diskless nodes' file caches (Pipe 1) and transferring files from disk nodes to diskless nodes (Pipe 2). It is interesting to take a closer look at RDARD1 and RDARD2. Neither has a pipe to transfer files from disk nodes to diskless nodes; therefore, disk nodes are likely to be overloaded when the workload increases. Disk power management will then turn on the disk of a diskless node. When the workload becomes low again, a disk node is converted back into a diskless one. In this way, RDARD1 and RDARD2 avoid overloading disk nodes through disk power management.

D. A simple power analysis

The trace WC-5-4 is about 155,875 seconds (about 43.5 hours) long. Using the power data for the IBM Ultrastar 36ZX described in Section II, we present the disk power savings in

Table 2. The average power consumption would be 22.5 Watts if disks were merely put into the IDLE mode when there is no access. DARD's average disk power is more than 2.5 times smaller. If the STANDBY mode is used instead of turning disks off, the power reduction is about 27% smaller. The power advantage of our DARD and disk power management strategies is clear.

Fig. 9. Late delay rate and max delay for SARD and DARD when the size of requested file increases.
Fig. 10. Performance metrics for SARD and DARD when the size of requested file increases.
Fig. 11. Late delay rate and max delay for SARD and DARD when the file request rate increases.
Fig. 12. Performance metrics for SARD and DARD when the file request rate increases.
Fig. 13. The numbers of turning-offs and turning-ons for SARD and DARD when the file request rate increases.

VI. Related works

In this section, we discuss related works from the two perspectives from which we introduced the project in Section I.

A. Request distribution

Conventional request distribution techniques focus on improving file-cache performance and cluster throughput. LARD [11], on which this work is based, may be the best among them. Only recently have IBM researchers investigated request-distribution issues related to turning servers on and off for power saving [13], which is a quite different power management strategy from this work.

B. Web server power management

The power consumption of web servers can be reduced in many components: CPU, disks, memory, and network interfaces. A very recent survey can be found in [10]. A detailed power-consumption profile can be found in [2].
Many researchers have investigated turning whole servers off in a cluster to save power [4, 12, 13]. Elnozahy et al. investigated using dynamic voltage scaling (DVS) to reduce CPU power consumption [6]. DVS is much more flexible than the server on-off solution, but its power savings are limited to the CPU's share. The work proposed in this project is complementary to all these CPU and server power management techniques. Another group of researchers has investigated hard-disk power consumption in servers. Carrera et al. proposed to use multi-speed disks for disk power savings at low load. Similarly, dynamic rotations per minute (DRPM) was proposed in [8] to dynamically change the platters' spinning speed, trading a linear latency increase for a quadratic power saving in the hard-disk spindle motor. However, there is not yet industrial support for multi-speed or DRPM hard disks. Colarelli and Grunwald [5]

proposed to use massive arrays of idle disks (MAID) for storage servers, so that individual disks in an array can be turned on or off for power savings and the array gains a scalability comparable to CPU DVS. More realistically, others have proposed to exploit the STANDBY mode for disk power savings in transaction-processing servers [7]. However, even the STANDBY mode consumes significant power, as shown in Section II. As pointed out by those authors for transaction-processing workloads, and by Section II for web access workloads, idle periods for server disks tend to be very short, although the total percentage of idle time is high. None of these works investigated how to lengthen disk idle periods so that more idle disk time could be used for power savings. As far as we know, this project is the first work on this problem.

Table 2
A simple power analysis

Strategy   Disk off time (%)   Disk switchings (#)   Avg. power, OFF mode (Watt)   Avg. power, STANDBY mode (Watt)
DARD
RDARD1
RDARD2

Fig. 14. Max response delays for RDARD1, RDARD2, SDARD, and DARD when the size of requested file increases.
Fig. 15. Max response delays for RDARD1, RDARD2, SDARD, and DARD when the file request rate increases.
Fig. 16. Percentage of disk off time for RDARD1, RDARD2, SDARD, and DARD when the size of requested file increases.
Fig. 17. Percentage of disk off time for RDARD1, RDARD2, SDARD, and DARD when the file request rate increases.

VII. Future works and Conclusions

A. Future works

Much work remains to be done. First, power models should be incorporated into NLogsim so that it also reports the power consumption of hardware components. Second, a file is currently allowed to be cached at no more than one back-end; how DARD would behave, and how it should be changed, with multiple caching sites allowed is interesting. Third, there are many aspects of DARD itself to explore, for example the values of DMPERIOD and FMPERIOD and the estimation of the recent late delay rate.
Fourth, it would also be interesting to see how DARD works for servers with multiple hard disks. Fifth, in this work we claim, without experimental support, that DARD and disk power management are complementary to CPU power management; it would be interesting to see experimentally whether a combination of CPU power management and disk power management can outperform server power management.

B. Conclusions

In this project, we investigated the possibility of turning server disks off and distributing requests in a disk-aware way to reduce the power consumption of cluster-based web servers. The best strategy we found consists of disk-aware request distribution, file migration, and disk power management. The only infrastructure change it requires is

the capability of the front-end to notify a back-end to turn its disk off/on. We conducted simulations based on NLogsim, a modified Logsim. Our strategy is shown to reduce disk power drastically while maintaining performance close to that of LARD, the best request distribution strategy for performance. We compared our strategy with server-aware request distribution and server power management; our strategy is shown to have much better performance, less overhead, and more flexibility. We expect that DARD and disk power management, if combined with CPU power management, will achieve power savings close to those of server power management while being more flexible. However, when the workload is extremely low for a long time, server power management is still more power-efficient. Therefore, the best power-saving strategy may be a combination of CPU, disk, and whole-server power management.

Fig. 18. Number of times that disks are turned off/on for RDARD1, RDARD2, SDARD, and DARD when the size of requested file increases.
Fig. 19. Number of times that disks are turned off/on for RDARD1, RDARD2, SDARD, and DARD when the file request rate increases.

VIII. Acknowledgment

This project was conducted as the final project for the Fall 2003 COS518 Advanced Operating Systems course at Princeton University. Professor Vivek Pai was the course professor and provided the source code of Logsim; he should be specially credited. The author also thanks the other authors of LARD and Logsim, on which this work is based.

References

[1] M. Arlitt and T. Jin, "Workload characterization of the 1998 World Cup web site," Internet Systems and Applications Laboratory, HP Laboratories Palo Alto, Tech. Rep. HPL (R.1), Sep. 1999.
[2] P. Bohrer, E. Elnozahy, T. Keller, M. Kistler, C. Lefurgy, and R. Rajamony, "The case for power management in web servers," in Power-Aware Computing (R. Graybill and R. Melhem, editors),
Kluwer/Plenum series in Computer Science, 2002. [3] P. Cao and S. Irani, Cost-aware WWW proxy caching algorithms, in Proc. 1997 USENIX Symp. Internet Technologies and Systems (USITS-97), Monterey, CA, 1997. [4] J. Chase and R. Doyle, Balance of power: Energy management for server clusters, in Proc. 8th Workshop on Hot Topics in Operating Systems, May 2001. [5] D. Colarelli and D. Grunwald, Massive arrays of idle disks for storage archives, in Proc. IEEE/ACM SC2002 Conf. IEEE Computer Society, 2002, p. 47. [6] M. Elnozahy, M. Kistler, and R. Rajamony, Energy conservation policies for web servers, in Proc. 4th USENIX Symp. Internet Technologies and Systems, March 2003. [7] S. Gurumurthi, A. Sivasubramaniam, M. Kandemir, H. Franke, N. Vijaykrishnan, and M. J. Irwin, Interplay of energy and performance for disk arrays running transaction processing workloads, in Proc. Intl. Symp. Performance Analysis of Systems and Software, March 2003. [8] S. Gurumurthi, A. Sivasubramaniam, M. Kandemir, and H. Franke, Reducing disk power consumption in servers with DRPM, Computer, vol. 36, no. 12, 2003. [9] Hölder's inequalities. [10] C. Lefurgy, K. Rajamani, F. Rawson, W. Felter, M. Kistler, and T. W. Keller, Energy management for commercial servers, Computer, vol. 36, no. 12, 2003. [11] V. S. Pai, M. Aron, G. Banga, M. Svendsen, P. Druschel, W. Zwaenepoel, and E. M. Nahum, Locality-aware request distribution in cluster-based network servers, in Architectural Support for Programming Languages and Operating Systems, 1998. [12] E. Pinheiro, R. Bianchini, E. Carrera, and T. Heath, Load balancing and unbalancing for power and performance in cluster-based systems, Department of Computer Science, Rutgers University, Tech. Rep. DCS-TR-440, May 2001. [13] K. Rajamani and C. Lefurgy, On evaluating request-distribution schemes for saving energy in server clusters, in Proc. Intl. Symp. Performance Analysis of Systems and Software, March 2003.
[14] The Internet Traffic Archive.

Appendix

The observation is that, for the same amount of work with a given deadline, it is more energy/power efficient to have N CPUs working at a lower performance/power level than to have (N-1) CPUs working at a higher performance/power level. Assume the workload is W in a fixed period T and there are N CPUs of the same configuration. Suppose the ith CPU gets a workload share of W_i. Assume each CPU can adjust its clock speed f_i and supply voltage v_i so that all CPUs finish their shares just in time T, which is the most energy-efficient way. Therefore, f_i = βW_i. Since the supply voltage is approximately proportional to the clock speed f_i, the power consumption of the ith CPU, P_i, is

P_i = c f_i v_i^2 = α W_i^3

The total power for the N CPUs, P, is

P = Σ_{i=1}^{N} P_i = α Σ_{i=1}^{N} W_i^3

Since Σ_{i=1}^{N} W_i = W and W_i ≥ 0, P is minimal when W_1 = W_2 = ... = W_N, according to Hölder's inequalities [9].
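The minimality claim above can be sanity-checked numerically. The sketch below (illustrative Python, not part of the report; α = 1, N = 4, and W = 100 are arbitrary choices) compares the total power of the even split against random non-negative splits of the same total workload:

```python
import random

def total_power(shares, alpha=1.0):
    """Total CPU power under the cubic model P_i = alpha * W_i**3."""
    return alpha * sum(w ** 3 for w in shares)

N, W = 4, 100.0
even = [W / N] * N  # equal shares: the claimed minimum

random.seed(1)
for _ in range(1000):
    # Random non-negative split of W across N CPUs: take N-1 cut
    # points in [0, W] and use the consecutive gaps as shares.
    cuts = sorted(random.uniform(0, W) for _ in range(N - 1))
    shares = [b - a for a, b in zip([0.0] + cuts, cuts + [W])]
    assert total_power(shares) >= total_power(even) - 1e-9

print(total_power(even))  # 4 * 25**3 = 62500.0
```

Every random split sums to W by construction, so the loop checks exactly the constrained minimization in the appendix: no uneven split beats the even one under the cubic power model.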


More information

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi

EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Celerra Unified Storage Platforms Using iscsi EMC Business Continuity for Microsoft SQL Server Enabled by SQL DB Mirroring Applied Technology Abstract Microsoft SQL Server includes a powerful capability to protect active databases by using either

More information

Dr. J. W. Bakal Principal S. S. JONDHALE College of Engg., Dombivli, India

Dr. J. W. Bakal Principal S. S. JONDHALE College of Engg., Dombivli, India Volume 5, Issue 6, June 2015 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Factor based Resource

More information

Offloading file search operation for performance improvement of smart phones

Offloading file search operation for performance improvement of smart phones Offloading file search operation for performance improvement of smart phones Ashutosh Jain mcs112566@cse.iitd.ac.in Vigya Sharma mcs112564@cse.iitd.ac.in Shehbaz Jaffer mcs112578@cse.iitd.ac.in Kolin Paul

More information

Condusiv s V-locity Server Boosts Performance of SQL Server 2012 by 55%

Condusiv s V-locity Server Boosts Performance of SQL Server 2012 by 55% openbench Labs Executive Briefing: April 19, 2013 Condusiv s Server Boosts Performance of SQL Server 2012 by 55% Optimizing I/O for Increased Throughput and Reduced Latency on Physical Servers 01 Executive

More information

Integrated Application and Data Protection. NEC ExpressCluster White Paper

Integrated Application and Data Protection. NEC ExpressCluster White Paper Integrated Application and Data Protection NEC ExpressCluster White Paper Introduction Critical business processes and operations depend on real-time access to IT systems that consist of applications and

More information

THE wide deployment of Web browsers as the standard

THE wide deployment of Web browsers as the standard IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 16, NO. 3, MARCH 2005 1 Workload-Aware Load Balancing for Clustered Web Servers Qi Zhang, Alma Riska, Member, IEEE, Wei Sun, Evgenia Smirni,

More information

Quiz for Chapter 6 Storage and Other I/O Topics 3.10

Quiz for Chapter 6 Storage and Other I/O Topics 3.10 Date: 3.10 Not all questions are of equal difficulty. Please review the entire quiz first and then budget your time carefully. Name: Course: Solutions in Red 1. [6 points] Give a concise answer to each

More information

Real Time Data Communication over Full Duplex Network Using Websocket

Real Time Data Communication over Full Duplex Network Using Websocket Real Time Data Communication over Full Duplex Network Using Websocket Shruti M. Rakhunde 1 1 (Dept. of Computer Application, Shri Ramdeobaba College of Engg. & Mgmt., Nagpur, India) ABSTRACT : Internet

More information

High Performance Cluster Support for NLB on Window

High Performance Cluster Support for NLB on Window High Performance Cluster Support for NLB on Window [1]Arvind Rathi, [2] Kirti, [3] Neelam [1]M.Tech Student, Department of CSE, GITM, Gurgaon Haryana (India) arvindrathi88@gmail.com [2]Asst. Professor,

More information

Highly Available Mobile Services Infrastructure Using Oracle Berkeley DB

Highly Available Mobile Services Infrastructure Using Oracle Berkeley DB Highly Available Mobile Services Infrastructure Using Oracle Berkeley DB Executive Summary Oracle Berkeley DB is used in a wide variety of carrier-grade mobile infrastructure systems. Berkeley DB provides

More information

Performance evaluation of Web Information Retrieval Systems and its application to e-business

Performance evaluation of Web Information Retrieval Systems and its application to e-business Performance evaluation of Web Information Retrieval Systems and its application to e-business Fidel Cacheda, Angel Viña Departament of Information and Comunications Technologies Facultad de Informática,

More information

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB Planet Size Data!? Gartner s 10 key IT trends for 2012 unstructured data will grow some 80% over the course of the next

More information

ESX Server Performance and Resource Management for CPU-Intensive Workloads

ESX Server Performance and Resource Management for CPU-Intensive Workloads VMWARE WHITE PAPER VMware ESX Server 2 ESX Server Performance and Resource Management for CPU-Intensive Workloads VMware ESX Server 2 provides a robust, scalable virtualization framework for consolidating

More information

Server Operational Cost Optimization for Cloud Computing Service Providers over a Time Horizon

Server Operational Cost Optimization for Cloud Computing Service Providers over a Time Horizon Server Operational Cost Optimization for Cloud Computing Service Providers over a Time Horizon Haiyang Qian and Deep Medhi University of Missouri Kansas City, Kansas City, MO, USA Abstract Service providers

More information