Traffic-Aware Resource Provisioning for Distributed Clouds
ENERGY EFFICIENCY

Dan Xu, AT&T Labs
Xin Liu, University of California, Davis
Athanasios V. Vasilakos, Lulea University of Technology, Sweden

Examining important cloud traffic characteristics and optimizations produced fine-grained traffic-awareness approaches that can more efficiently reduce energy costs for distributed clouds with dynamic, diverse traffic.

Cloud-computing-based traffic has been growing rapidly in recent years. Cisco forecasted that annual global datacenter IP traffic will reach 7.7 zettabytes by the end of 2017, with cloud IP traffic reaching 5.3 zettabytes.1 Correspondingly, service providers, including Google, Microsoft, Facebook, and AT&T, are building and expanding their datacenters nationwide and worldwide. Such geographically distributed datacenters are often referred to as Internet datacenters (IDCs), and we use "cloud" as a general term that refers to an IDC's collection of hardware, software, and services. An IDC typically consumes many megawatts of power, which imposes a significant electricity cost on its operator. For example, Google's datacenters continuously draw nearly 300 million watts.2 To reduce energy costs, researchers have proposed load-aware server provisioning schemes, in which the number of active servers is controlled dynamically based on the load.3-6 When the load is low, extra servers can be put into sleep mode. In this paradigm, obtaining traffic volume information is a challenging issue. As the "Related Work in Resource Provisioning" sidebar describes, many researchers have worked on traffic-aware cloud resource provisioning. However, considerable room remains for achieving fine-grained traffic awareness.
In the load-aware server provisioning schemes,4-7 researchers typically consider traffic dynamics on a large time scale only, such as tens of minutes, during which the traffic demand (that is, the input to the server provisioning algorithms) is usually assumed static within the current server-provisioning interval. However, as we observed from a real

30 IEEE CLOUD COMPUTING, PUBLISHED BY THE IEEE COMPUTER SOCIETY
RELATED WORK IN RESOURCE PROVISIONING

Many researchers have explored traffic-aware cloud resource provisioning. Common approaches include prediction- or estimation-based approaches. For example, Gong Chen and his colleagues propose statistical models to forecast Microsoft servers' load of connection-intensive requests (such as Messenger logins), and design server provisioning schemes accordingly.1 Minghong Lin and his colleagues propose basing their online server provisioning design on an estimate of the load size at the beginning of each time interval.2 Other work gives the workload as the input of the resource provisioning algorithms.3,4 In those schemes, server provisioning is often jointly designed with load dispatching, that is, by determining the load at each server. To guarantee service quality, some work explicitly considers a QoS constraint, such as a delay bound.3

References

1. G. Chen et al., "Energy-Aware Server Provisioning and Load Dispatching for Connection-Intensive Internet Services," Proc. 5th USENIX Symp. Networked Systems Design and Implementation (NSDI '08), 2008.
2. M. Lin et al., "Dynamic Right-Sizing for Power-Proportional Data Centers," IEEE/ACM Trans. Networking, vol. 21, no. 5, 2013.
3. L. Rao et al., "Minimizing Electricity Cost: Optimization of Distributed Internet Data Centers in a Multi-Electricity-Market Environment," Proc. IEEE INFOCOM, 2010.
4. J. Tu et al., "Dynamic Provisioning in Next-Generation Data Centers with On-Site Power Production," Proc. 4th Int'l Conf. Future Energy Systems (e-Energy '13), 2013.

datacenter traffic trace, and as Theophilus Benson and his colleagues indicated,8 cloud traffic demand also varies over a small time scale, such as hundreds of milliseconds, and often shows burstiness. Here, we'll discuss the key challenges caused by traffic dynamics in a small time scale, such as server overloading.
In considering traffic dynamics on both large and small time scales, we propose a framework of solutions that can guarantee service quality at a low cost for distributed clouds. Our solutions leverage joint server provisioning (dam building) for large time scales and cross-IDC load shifting (water balancing) for small time scales, in analogy to flood control. We discuss how cross-IDC load shifting can be designed to efficiently and cost-effectively alleviate small-time-scale traffic dynamics. Like existing work,6,9 our cross-IDC load-shifting schemes can leverage electricity price diversity to reduce energy costs. To achieve a desirable balance between costs and service quality, it's also important to exploit the heterogeneity of cloud jobs. Cloud jobs fall into two general categories: delay-sensitive jobs (DSJs), such as interactive jobs, and delay-tolerant jobs (DTJs), such as batch jobs and analytical jobs. Most existing work considers only interactive jobs.4,6,7 Although some work considers both interactive and batch jobs,10 joint resource provisioning for both DSJs and DTJs hasn't been studied in depth. In this article, we further examine cloud traffic diversity and consider joint DSJ and DTJ provisioning.

Cloud Traffic Dynamics and Diversity

In this section, we examine the traffic dynamics and diversity of clouds. We first show the traffic dynamics of an IDC at different time scales.

Traffic Dynamics and Time Scales

User request traffic at an IDC varies over time. For example, research shows that the login rate of Windows Live Messenger changes significantly at different hours.4 Dynamic traffic leads to a time-varying load demand. Current traffic- or load-aware server provisioning schemes often use load prediction or estimation methods to capture the variation. However, those schemes address only large-time-scale traffic variation, such as predicting the future load approximately every half hour.
Small-time-scale traffic variation is rarely considered.

JANUARY/FEBRUARY 2015, IEEE CLOUD COMPUTING
FIGURE 1. Datacenter traffic burstiness: traffic size (Mbits) at one-second granularity over a three-minute snapshot. The traffic trace is collected from the datacenter of a major US service provider.

We analyze the traffic of an IDC operated by a major US service provider. We extract the traffic information from the IDC's Hadoop Distributed File System (HDFS) log and plot the one-second traffic volume for a three-minute snapshot (see Figure 1). As the figure shows, the one-second traffic exhibits an obvious on/off pattern. Other research shows the on/off traffic pattern for a Microsoft datacenter,8 at time granularities of 15 and 100 milliseconds. Small-time-scale traffic burstiness is difficult to address for two reasons. First, it's hard to predict traffic variation on a small time scale. Second, it's impossible to perform dynamic server provisioning according to the small-time-scale traffic variation, due to the large time delay in turning on servers. Thus, server provisioning can be performed only on a large time scale, such as every few tens of minutes to hours. Traffic burstiness might incur both congestion and server idleness at different times. Sometimes, traffic spikes impose a load that's larger than the available capacity. This overloading degrades service performance, such as by incurring a large service delay. It shouldn't happen frequently, because most Internet requests, such as search and Web browsing, tolerate only a small latency. When traffic goes low, that is, into off cycles, servers become idle and power is wasted. To address these issues, we view the IDCs as rivers or lakes and overloading as flooding. We design joint server provisioning and cross-IDC load shifting, with server provisioning performed on a large time scale and considered as dam building.
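To make the on/off pattern concrete, the following toy generator (entirely our own construction, not derived from the article's trace) produces a bursty one-second series of the kind Figure 1 shows; the burst size and on-probability are illustrative assumptions:

```python
import random

random.seed(1)

def onoff_trace(n_seconds, p_on=0.3, burst_mbit=40.0):
    """Toy on/off traffic trace at one-second granularity: each second is
    'on' with probability p_on and carries a randomly sized burst,
    otherwise it is silent (an 'off' cycle)."""
    return [random.uniform(0.5, 1.5) * burst_mbit if random.random() < p_on else 0.0
            for _ in range(n_seconds)]

trace = onoff_trace(180)  # a three-minute snapshot, as in Figure 1
print(sum(1 for x in trace if x == 0.0) / len(trace))  # fraction of 'off' seconds
```

During the off fraction of seconds, provisioned servers sit idle and waste power; during the on bursts, demand can exceed capacity, which is exactly the tension the dam-building metaphor addresses.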
On a small time scale, cross-IDC load shifting is performed to reduce traffic burstiness, such that a smaller number of servers (a lower dam) is needed at each IDC. Cross-IDC load shifting is like water balancing among rivers, lakes, and reservoirs. We discuss efficient load-shifting policies that can reduce total energy costs for distributed clouds. The traffic we consider here mainly refers to interactive jobs. In a cloud, many other types of jobs might require provisioning in different ways, as we now describe.

Cloud Traffic Diversity

As we noted earlier, IDC traffic comes from both DSJs and DTJs. Resource provisioning for DTJs should differ from that for DSJs. DSJs have a small delay tolerance and thus must be served promptly. In contrast, DTJs can be buffered in queues. DSJs have a higher service priority than DTJs, so capacity for DSJs must be guaranteed first. The remaining server capacity, which varies instantaneously due to dynamic DSJ traffic demand, can then serve DTJs via dynamic speed scaling. This paradigm is called valley filling, that is, using DTJs to fill the valleys of the DSJ load demand (see Figure 2). Our discussion of interactive job traffic refers to DSJ traffic. When traffic is low, some servers become idle. In this case, they can be allocated to process DTJs rather than shut down. The cross-IDC load shifting for DSJs also fills the valleys of some IDCs. But this load shifting doesn't make the valleys disappear; there's still some remaining capacity to utilize. DTJs can also be shifted from one IDC to another, such as to exploit more efficient servers or a lower electricity price in another location.

FIGURE 2. Overloading of delay-sensitive jobs (DSJs) and valley filling by delay-tolerant jobs (DTJs). The DSJ load demand exceeds the total available capacity at four time intervals. If the measurement period starts from the first interval and lasts to the last interval (that is, the period is 34 time intervals), the overloading probability is calculated as 4/34, or roughly 11.8 percent.

Also, DSJs or DTJs can be further divided into different classes according to job types and service requirements. For example, search requests and Web browsing requests can be considered two different DSJ classes, whereas user information backup and weblog analysis can be two different DTJ classes.

Building Dams and Balancing Water

Using water and damming as metaphors for IDC resource provisioning, DTJ provisioning is like filling a river valley: the water level (the DSJ load) is below the height of the dam (an IDC's total active server capacity). Efficient valley filling is challenging, however. On one hand, DTJ load demand is high, which often incurs a large energy cost. On the other hand, DTJs also desire as small a delay as possible. Moreover, valley filling must be jointly designed with DSJ provisioning. Here, we discuss joint server provisioning, cross-IDC load shifting, and valley filling for distributed clouds. Our design goal is to minimize the total costs, including energy costs and load-shifting costs, while satisfying DSJ QoS constraints and assuring a desirable delay performance for DTJs. Given the problem's complexity, we first focus on DSJs only, discussing joint server provisioning and cross-IDC load shifting; we then address how valley filling can work jointly with server provisioning and load shifting.
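As a rough illustration of the valley-filling rule described above (the function name and values are ours, not the article's), the capacity available to DTJs in each small interval is simply whatever headroom the instantaneous DSJ demand leaves under the dam:

```python
def valley_fill(total_capacity, dsj_demand):
    """Capacity left for delay-tolerant jobs after delay-sensitive jobs
    are served in each interval; negative headroom means the interval is
    overloaded, so DTJs receive nothing."""
    return [max(0.0, total_capacity - d) for d in dsj_demand]

# Example: a fixed active capacity of 10 units and varying DSJ demand.
# The third interval is overloaded (demand 12 > capacity 10).
print(valley_fill(10.0, [4.0, 9.5, 12.0, 6.0]))  # -> [6.0, 0.5, 0.0, 4.0]
```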
Measuring Server Overloading

As discussed earlier, small-time-scale DSJ traffic burstiness can lead to overloading, that is, a situation in which load demand exceeds the provisioned capacity. To measure overloading, we introduce a QoS metric called overloading probability, defined as the probability that the DSJ load demand is larger than the available server capacity. As Figure 2 shows, IDC overloading can be monitored over time. We can set a small time interval, such as hundreds of milliseconds, to track overloading, and then set a measurement period, such as hours, to calculate the overloading probability. We can monitor overloading probability by first counting the number of time intervals with overloading, and then dividing that count by the total number of small time intervals in the measurement period. In practice, we can use a sliding window of measurement periods to update an IDC's overloading probability. We can directly translate overloading probability into a DSJ's QoS requirements (that is, its service latency requirements), because the DSJ load demand is implicitly based on a given service latency requirement. A smaller service latency requirement leads to a larger load demand for the same amount of traffic. We can consider overloading probability as the percentage of time during which the service latency requirement can't be guaranteed. A small overloading probability requirement is desirable for guaranteeing DSJ service quality. To reduce the overloading probability, we can increase the available capacity by provisioning more active servers. This is like building a higher dam to prevent flooding. However, turning on many additional active servers isn't cost effective. To address this issue, traffic burstiness at each IDC must be reduced first, so that a lower capacity can meet the same overloading probability requirement.
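The sliding-window measurement described above can be sketched as follows; the class name and window size are our own illustrative choices, not the article's:

```python
from collections import deque

class OverloadingMonitor:
    """Tracks overloading probability over a sliding window of small time
    intervals (each interval might be hundreds of milliseconds long)."""
    def __init__(self, window_intervals):
        # deque with maxlen automatically evicts the oldest interval
        self.window = deque(maxlen=window_intervals)

    def record(self, load_demand, available_capacity):
        """Record one small interval: was demand above capacity?"""
        self.window.append(load_demand > available_capacity)

    def overloading_probability(self):
        # overloaded intervals / total intervals in the window
        return sum(self.window) / len(self.window) if self.window else 0.0

m = OverloadingMonitor(window_intervals=34)
for load in [5, 12, 7, 14, 9]:        # capacity fixed at 10 units
    m.record(load, 10)
print(m.overloading_probability())     # -> 0.4 (2 of 5 intervals overloaded)
```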
Cross-IDC Load Shifting

To reduce traffic burstiness, we designed cross-IDC load-shifting schemes that leverage traffic's statistical multiplexing gain. Although existing work has considered cross-IDC load shifting for exploiting electricity price diversity,9 how to use load shifting
to efficiently reduce traffic burstiness and further achieve cost effectiveness hasn't been investigated. With load shifting, an IDC's excess traffic can be processed by another IDC that has lower traffic demand. Thus, the load demand at each IDC is smoothed. Here, we consider different policies for load shifting. First, we consider a policy in which each IDC instantaneously shifts its traffic to another one according to a fixed ratio. Consider D_i as the traffic volume of IDC i, which is a time-varying random variable. IDC i shifts its traffic to IDC j according to the current D_i and a fixed ratio r_ij (that is, it shifts r_ij D_i). Any pair of IDCs can shift traffic to each other. Thus, we call this scheme ratio-based load swapping (RBLS). In RBLS, the traffic-shifting ratio r_ij is a control variable to optimize. We must consider many factors when optimizing r_ij. For example, if IDC j has a large maximum server capacity, r_ij is likely to be large. If the current electricity price at IDC j is low, r_ij might also be large, letting IDC i shift more traffic to IDC j. Thus, RBLS can exploit various factors to reduce energy costs and explore server capacity more efficiently; it also uses statistical multiplexing gain to smooth traffic at each IDC. However, RBLS can be expensive to implement because the number of load-shifting pairs can be large, that is, O(N^2), where N is the operator's number of IDCs. We thus consider offloading policies, in which each IDC shifts traffic only to a selected offloader. In offloading, each IDC can shift traffic based on a fixed ratio; this is called ratio-based offloading (RBO). In RBO, the distribution of the remaining traffic demand at each IDC still follows the same (scaled-down) distribution as the original load demand. Alternatively, each IDC can shift the load beyond its available capacity to the offloader; this is called threshold-based offloading (TBO).
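A minimal sketch of the three policies, under our own simplifying assumptions (demands given as plain per-interval numbers, a single offloader, and self-ratios of zero); the function names are ours, not the article's:

```python
def rbls_shift(demand, ratios):
    """Ratio-based load swapping (RBLS): IDC i sends ratios[i][j] * demand[i]
    to IDC j (ratios[i][i] is assumed 0). Returns the post-shift demand."""
    n = len(demand)
    shifted = [demand[i] * (1.0 - sum(ratios[i])) for i in range(n)]
    for i in range(n):
        for j in range(n):
            shifted[j] += demand[i] * ratios[i][j]
    return shifted

def rbo_shift(demand, r):
    """Ratio-based offloading (RBO): each IDC sends a fixed fraction r of
    its load to a single offloader, so the remaining demand keeps the
    original (scaled-down) distribution."""
    kept = [d * (1.0 - r) for d in demand]
    return kept, sum(demand) * r

def tbo_shift(demand, capacity):
    """Threshold-based offloading (TBO): each IDC keeps load up to its
    active capacity and offloads only the excess, which the offloader
    aggregates (the reservoir in the flood-control metaphor)."""
    kept = [min(d, c) for d, c in zip(demand, capacity)]
    excess = sum(d - k for d, k in zip(demand, kept))
    return kept, excess

print(tbo_shift([12.0, 4.0], [10.0, 10.0]))  # -> ([10.0, 4.0], 2.0)
```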
Intuitively, TBO is more efficient than RBO in terms of decreasing overloading probability and saving energy costs. This is because, in RBO, each IDC's load demand is still random, so a large server capacity is still needed to constrain server overloading, especially when traffic demand at each IDC is heavy-tail distributed. In TBO, because the excess traffic of all IDCs is aggregated at the offloader, we can expect a higher statistical multiplexing gain. TBO is similar to discharging flood water from rivers to a reservoir, that is, only the amount of water (traffic) beyond the dam's height (the maximum active server capacity) is offloaded.

Server Provisioning and Load Shifting

Cross-IDC load shifting is performed on a small time scale to capture the small-time-scale traffic variation. Configuring load shifting, such as determining the traffic-splitting ratios in RBLS or each IDC's number of servers (that is, the offloading threshold in TBO), is coupled with server provisioning, which is performed on a large time scale. Within a relatively large time interval, load shifting follows the same configuration. The joint server provisioning and load-shifting schemes need each IDC's traffic information. To calculate the overloading probability, we need the DSJ traffic's mean and variance (before load shifting). Assuming a certain distribution of the DSJ traffic, we can calculate the overloading probability based on this mean and variance. We impose an overloading probability constraint, that is, the probability must be smaller than a predefined value, to guarantee service quality. Our objective is to minimize all IDCs' costs, including energy costs and load-shifting costs. An IDC's energy cost is the product of the electricity price and the total power its servers consume. Server power consumption can be modeled as an increasing function of both the number of active servers and the average load.
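The article doesn't give a concrete power model, so the sketch below assumes a commonly used affine form (idle power plus a load-proportional term), which is one increasing function of both the number of active servers and the average load; the wattage constants are illustrative only:

```python
P_IDLE, P_PEAK = 100.0, 200.0  # watts per server; assumed values, not the article's

def idc_power(active_servers, avg_utilization):
    """Power as an increasing function of the number of active servers and
    the average load: per-server idle power plus a utilization-proportional
    term, summed over all active servers."""
    return active_servers * (P_IDLE + (P_PEAK - P_IDLE) * avg_utilization)

def energy_cost(price_per_kwh, active_servers, avg_utilization, hours):
    """Energy cost = electricity price x energy consumed over the interval."""
    return price_per_kwh * (idc_power(active_servers, avg_utilization) / 1000.0) * hours

# 50 active servers at 60% average utilization draw 8 kW; at $0.10/kWh,
# one hour costs $0.80.
print(energy_cost(0.10, 50, 0.6, 1.0))  # -> 0.8
```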
Problem 1

Joint server provisioning and load shifting for DSJs can be formulated as the following optimization problem (Problem 1). The main inputs for Problem 1 are each IDC's traffic statistics (mean and variance). The objective is to minimize the total energy and load-shifting costs. Problem 1 has three constraints:

- An overloading probability constraint at each IDC.
- A server allocation constraint at each IDC (the number of active servers at each IDC is upper bounded by the total number of servers).
- A load-shifting constraint between each pair of IDCs (the amount of traffic shifted is bounded by the bandwidth between the two IDCs).

Problem 1's outputs are the number of active servers at each IDC and the traffic-shifting parameters. Although DSJ traffic is dynamic, Problem 1 is a deterministic optimization problem based on traffic statistics that remain static for a large time interval. The variation of these traffic statistics represents precisely the large-time-scale traffic dynamics we discussed earlier. Thus, Problem 1 captures traffic dynamics on both the large and small time scales. We formulated Problem 1 with the different cross-IDC load-shifting schemes (RBLS, RBO, and TBO) using convex optimization models.11,12 Figure 3 compares the performance of
FIGURE 3. Simulation results of three joint server provisioning schemes based on convex optimization models: ratio-based load swapping (RBLS), ratio-based offloading (RBO), and threshold-based offloading (TBO), compared with server provisioning without cross-IDC load shifting. Total cost ($) is plotted against the overloading probability constraint at each IDC. Costs at an Internet datacenter (IDC) are the product of its power consumption and electricity price.

the different schemes, each of which works jointly with server provisioning. In the simulations, we consider 15 IDCs, each with a maximum of 100 servers. Each server's maximum CPU frequency is normalized to 1. Server capacity or load is defined as the total CPU frequency of all active servers. The number of active servers, constrained to at most 100, is a control variable to optimize. Load demand (before load shifting) at each IDC follows an exponential distribution with a mean of 10. We randomly selected each IDC's electricity price from one to five. The figure's x-axis shows the overloading probability constraint at each IDC. Clearly, a smaller overloading probability constraint leads the proposed optimization model to provision more active servers for each scheme, and thus a higher energy cost. In RBO and TBO, we selected one IDC as the offloader. As the figure shows, RBLS always leads to the smallest cost, because it can achieve traffic statistical multiplexing gain at every IDC. TBO is second. Although RBO performs the worst, it's still much better than no load shifting. Our results show that it's important to reduce traffic burstiness using an efficient cross-IDC load-shifting scheme. In practice, with operational complexity considered, TBO is a desirable load-shifting scheme under dynamic DSJ traffic demand.
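A Monte Carlo sketch of the multiplexing-gain argument, loosely following the simulation setup above (15 IDCs, exponentially distributed demand with mean 10). This is our own toy experiment, not the article's convex optimization model: the capacities, trial count, and the exact metric compared (any-IDC overload versus offloader overload under TBO-style pooling) are illustrative assumptions.

```python
import random

random.seed(42)
N_IDCS, TRIALS, MEAN_DEMAND = 15, 20_000, 10.0

def overload_prob(capacity, pool_excess=False, offloader_capacity=60.0):
    """Fraction of simulated intervals with an overload. Without pooling,
    an interval counts if any IDC's demand exceeds its own capacity.
    With pooling (TBO-like), each IDC's excess is aggregated at a single
    offloader, and the interval counts only if that pooled excess exceeds
    the offloader's capacity."""
    overloads = 0
    for _ in range(TRIALS):
        demand = [random.expovariate(1.0 / MEAN_DEMAND) for _ in range(N_IDCS)]
        if pool_excess:
            excess = sum(max(d - capacity, 0.0) for d in demand)
            overloads += excess > offloader_capacity
        else:
            overloads += any(d > capacity for d in demand)
    return overloads / TRIALS

# Pooling the bursts at an offloader sharply cuts the overload frequency
# for the same per-IDC capacity: the statistical multiplexing gain.
print(overload_prob(30.0), overload_prob(30.0, pool_excess=True))
```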
Filling Valleys: Joint Resource Provisioning for DSJs and DTJs

In terms of resource usage, DSJs have a higher priority than DTJs. When the current DSJ load demand is smaller than the available capacity, we provision the remaining capacity to DTJs, which is like filling a river valley.

Beyond a Best-Effort Scheme

Intuitively, it seems that valley filling provides best-effort provisioning to DTJs, because they can use only the remaining capacity. However, valley filling isn't simply best effort; DTJ provisioning must be carefully optimized together with DSJs. As we indicated earlier, DTJs contribute to energy costs and also require reasonable service latency. Thus, when performing server provisioning, that is, when determining total capacity, we can't consider only the DSJ overloading issue; we must also consider DTJ delay performance. Otherwise, even DTJ queue stability can't be guaranteed. Thus, valley filling must be configured jointly with server provisioning on a large time scale to determine how much average capacity DTJs should receive. On a small time scale, following that configuration, instantaneous DTJ capacity allocation is performed according to the current DSJ load demand. Figure 4 shows the joint server provisioning, cross-IDC load shifting, and DSJ/DTJ capacity allocation scheme.

Configuring Valley Filling

As Figure 4 shows, without DTJs, server provisioning in Problem 1 is deterministic and performed independently for each (large) time interval. This is because the overloading probability constraint for DSJs must be satisfied within each large time interval. Otherwise, for some large intervals, DSJs will experience poor performance. With DTJs, joint server provisioning and valley-filling configuration aren't independent across different large time intervals. In this case, DTJ delay performance, such as queue stability, is measured over a long time period, which might span many large time intervals. Also, DTJs
FIGURE 4. Joint server provisioning, load shifting, and valley filling at different time scales. Problem 1 (without delay-tolerant jobs, or DTJs) is deterministic and independent of other time intervals; Problem 2 is stochastic and correlated with other time intervals. In each large time interval (scale), the system estimates DSJ traffic statistics and configures cross-IDC load shifting, server provisioning, and valley filling; in each small time interval (scale), it performs instantaneous load shifting and valley filling.

can be buffered in a long queue, whereas DSJs must be served in real time. Moreover, configuring valley filling must incorporate both the load variation of DSJs and the temporal dynamics of electricity prices. For example, if the electricity price is expected to be lower in the next large time interval, less capacity can be allocated to DTJs in the current large time interval. Historic information can also affect the current decision, such as through the DTJ queue.

Problem 2

We now consider Problem 2, a stochastic problem of joint server provisioning, load shifting, and valley filling, designed based on Problem 1. Here, we can shift the load of DTJs from one IDC to another. For DTJ load shifting, the purpose isn't to exploit statistical multiplexing gain, because DTJs don't suffer from small-time-scale traffic dynamics; their traffic can be smoothed by buffering. Instead, DTJ load shifting can exploit a lower electricity price or a higher service rate at another IDC. Given these different purposes, the structures of DSJ and DTJ load shifting are also different. First, the objective of Problem 2 is to minimize the total costs, which now also include the energy costs contributed by DTJs. Unlike Problem 1, the total costs here aren't for each single (large) time interval, but rather the average costs over all large time intervals.
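Combining the time-average objective just described with the capacity-allocation and queue-stability constraints the article lists for Problem 2, the problem's structure can be sketched in our own notation (the article doesn't give the exact formulation, so symbol choices and the form of each term are assumptions):

```latex
% m_i(t): active servers at IDC i;  C: capacity per server;
% \lambda_i(t): DSJ load served at IDC i;  b_{ik}(t): capacity given to
% DTJ class k at IDC i;  \bar{a}_{ik}: average DTJ arrival rate.
\begin{align*}
\min \quad
  & \lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T}
    \mathbb{E}\!\left[\sum_{i}
      \bigl(\text{energy cost}_i(t) + \text{shifting cost}_i(t)\bigr)\right] \\
\text{s.t.} \quad
  & \lambda_i(t) + \sum_{k} b_{ik}(t) \le C\, m_i(t)
      \quad \forall i, t
      \quad \text{(capacity allocation)} \\
  & \lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T} b_{ik}(t) \ \ge\ \bar{a}_{ik}
      \quad \forall i, k
      \quad \text{(DTJ queue stability)} \\
  & \text{plus Problem 1's overloading, server allocation,}\\
  & \text{and load-shifting constraints.}
\end{align*}
```

The queue-stability line is the "average service rate exceeds average arrival rate" condition stated in the text; bounding the average queue length is the equivalent alternative the article mentions.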
Second, there are additional constraints. The sum of the load demands of DSJs and DTJs must be smaller than the total available capacity; this is the capacity allocation constraint. To guarantee DTJ queue stability, the average queue length should be upper bounded, or, alternatively, the average service rates for DTJs should be larger than their average arrival rates. We determine the service rate for DTJs at each IDC by the capacity allocated. The overloading probability, server allocation, and load-shifting constraints must also still be satisfied. Thus, Problem 1's constraints constitute a subset of Problem 2's constraints.

Leveraging DTJ queue information. Problem 2 is difficult because DSJs and DTJs must be considered jointly, along with the traffic dynamics of DSJs and the stochastic properties of DTJ provisioning. We developed several algorithms that solve Problem 2.13,14 Our proposed algorithms barely leverage system statistical information, which is difficult to obtain in practice, yet nonetheless guarantee cost and delay bounds. One important difference among our schemes is whether the DTJ queue backlog information at each IDC is leveraged; note that when a class of DTJs is shifted to another IDC, a sub-queue is created for that DTJ class in the destination IDC. We evaluate the performance of a DTJ queue-based valley-filling scheme that leverages the current DTJ queue information to adjust capacity for DTJs. The queue-based scheme's basic idea is to achieve a tunable tradeoff between DTJ queue length and IDC energy costs. The scheme's objective is to minimize the sum of two functions of the control variable vector, that is, the server capacity assigned
FIGURE 5. Performance of valley filling that leverages queue information of delay-tolerant jobs (DTJs): total cost ($) and DTJ queue delay (minutes) versus the percentage of DTJ load demand out of the total load demand, for service-rate-based and queue-based valley filling. Here, five Internet datacenters (IDCs) are considered, with 15 classes of delay-sensitive jobs (DSJs) and 10 classes of DTJs; the load demands of the DSJs and DTJs are comparable.

to each class of DTJs at each IDC. One function is increasing in the control variable vector, such as the energy cost function, and is weighted by a control parameter V. The other is decreasing in the control variable vector: a weighted linear sum of the server capacity for each DTJ class at each IDC, multiplied by -1, where the weight on each class's server capacity at each IDC is the current queue length of that DTJ class. Clearly, when the queue length of a DTJ class becomes large, the decreasing function has a large weight, so a high server capacity will be assigned to that DTJ class due to the minimization objective; this in turn reduces the queue length of the DTJ class. The control parameter V tunes the tradeoff between energy costs and DTJ queue length. Setting a large value of V gives the increasing function a large weight, which leads to a small server capacity due to the minimization objective, and thus reduces energy costs. When V is set small, the decreasing function has a large weight, so a large server capacity will be assigned, which further reduces DTJ queue length. The scheme's constraints are the same as those in Problem 2. We compared the queue-based scheme to a service-rate-based scheme that doesn't leverage any queue length information but does guarantee long-term queue stability. The service-rate-based scheme almost achieves the minimum cost with DTJ queue stability assured.
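The V-weighted objective above has the flavor of a drift-plus-penalty decision. The sketch below is our own simplification, not the article's algorithm: it assumes a linear energy cost (a single price per unit of capacity), under which minimizing V * price * sum(b) - sum(Q_k * b_k) over one interval reduces to greedily giving capacity to the longest queues whose backlog exceeds the V-weighted price:

```python
def allocate_dtj_capacity(queues, price, V, remaining_capacity):
    """One interval's valley-filling decision under a linear cost model:
    minimize V * price * sum(b) - sum(Q_k * b_k)
    subject to sum(b) <= remaining_capacity and 0 <= b_k <= Q_k.
    Capacity goes to classes in decreasing order of backlog, and only to
    classes whose backlog Q_k exceeds the V-weighted price."""
    order = sorted(range(len(queues)), key=lambda k: queues[k], reverse=True)
    alloc = [0.0] * len(queues)
    left = remaining_capacity
    for k in order:
        if left <= 0 or queues[k] <= V * price:
            break  # remaining classes have even smaller backlogs
        alloc[k] = min(queues[k], left)  # never allocate beyond the backlog
        left -= alloc[k]
    return alloc

# Backlogs 8, 2, and 5; with V*price = 3, only the classes with backlog
# above 3 receive capacity, longest queue first.
print(allocate_dtj_capacity([8.0, 2.0, 5.0], price=1.0, V=3.0,
                            remaining_capacity=10.0))  # -> [8.0, 0.0, 2.0]
```

Raising V raises the backlog threshold, shrinking total allocated capacity (lower energy cost, longer queues); lowering V does the opposite, which is exactly the tunable tradeoff described in the text.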
As Figure 5 shows, the queue-based scheme achieves a significantly smaller queue delay at a slightly larger cost. This result implies the importance of leveraging DTJ queue information in valley filling. Details of this study are available elsewhere.13 The intuition behind the DTJ queue-based provisioning scheme is that the capacity allocation for each class of DTJs at each IDC is correlated with that class's queue length. When a DTJ queue in an IDC is long, a larger server capacity is assigned to reduce the queue length, and vice versa. In practice, we can tune queue-based algorithms to achieve a desirable tradeoff between energy costs and DTJ queue delay.

Coupling load shifting to capacity allocation. Intuitively, the classic stochastic optimization method of back-pressure routing15 can be a good candidate for DTJ load shifting among IDCs because it guarantees DTJ queue stability. Back-pressure routing's basic idea is to shift more of a DTJ class's traffic to an IDC that has a smaller backlog of that class. However, we found that back-pressure routing isn't optimal for reducing energy costs in the distributed-clouds scenario. This is because, in back-pressure routing, the decision of cross-IDC DTJ load shifting is based mainly on the difference between the amounts of DTJ class traffic in the original IDC and each destination IDC. Cross-IDC load shifting isn't directly correlated with capacity allocation in each IDC. Thus, back-pressure routing doesn't leverage electricity price location diversity well. To address this issue, we designed a cross-IDC DTJ load-shifting scheme that's closely coupled to server capacity allocation at each IDC. The scheme's basic idea is that the amount of a DTJ class load
FIGURE 6. Simulation results comparing the cross-IDC load-shifting scheme based on back-pressure routing with our scheme of closely coupling cross-IDC delay-tolerant job (DTJ) load shifting with capacity allocation, in terms of total cost ($) and DTJ queue delay (sec). The simulations considered five IDCs and 10 classes of DTJs. The result shows the importance of closely coupling cross-IDC DTJ load shifting with capacity allocation in cloud resource provisioning.

that's shifted to another IDC depends only on how much server capacity in the destination IDC is assigned to that DTJ class. As Figure 6 shows, our scheme achieves a much smaller energy cost and a much smaller DTJ queue delay than the cross-IDC DTJ load-shifting scheme based on back-pressure routing.15

Many challenges and opportunities exist related to efficient cloud management:

- Traffic dynamics might need to be considered at different time granularities.
- Applications with different resource requirements, such as CPU, I/O, disk, and memory, must be better coordinated and provisioned.
- Multiplexing different applications by virtualization could further improve efficiency.
- Jobs with different QoS or service deadlines must be further prioritized.
- Accurately modeling server energy costs and load-shifting costs is challenging.

We will consider all of these issues in our future work.

References

1. "Cisco Global Cloud Index: Forecast and Methodology," white paper, Cisco Systems.
2. J. Glanz, "Power, Pollution and the Internet," New York Times, 22 Sept. 2012; -vast-amounts-of-energy-belying-industry-image.html.
3. Y.Y. Chen et al., "Managing Server Energy and Operational Costs in Hosting Centers," Proc. ACM SIGMETRICS Int'l Conf.
Measurement and Modeling of Computer Systems, 2005.
4. G. Chen et al., "Energy-Aware Server Provisioning and Load Dispatching for Connection-Intensive Internet Services," Proc. 5th USENIX Symp. Networked Systems Design and Implementation (NSDI 08), 2008.
5. M. Lin et al., "Dynamic Right-Sizing for Power-Proportional Data Centers," IEEE/ACM Trans. Networking, vol. 21, no. 5, 2013.
6. L. Rao et al., "Minimizing Electricity Cost: Optimization of Distributed Internet Data Centers in a Multi-Electricity-Market Environment," Proc. IEEE INFOCOM, 2010.
7. J. Tu et al., "Dynamic Provisioning in Next-Generation Data Centers with On-Site Power Production," Proc. 4th Int'l Conf. Future Energy Systems (e-Energy 13), 2013.
8. T. Benson et al., "Understanding Data Center Traffic Characteristics," ACM SIGCOMM Computer Comm. Rev., vol. 40, no. 1, 2010.
9. R. Stanojevic and R. Shorten, "Distributed Dynamic Speed Scaling," Proc. IEEE INFOCOM, 2010.
10. H. Xu and B. Li, "Temperature Aware Workload Management in Geo-Distributed Datacenters,"
IEEE Trans. Parallel and Distributed Systems, preprint.
11. D. Xu, X. Liu, and B. Fan, "Minimizing Energy Cost for Internet-Scale Datacenters with Dynamic Traffic," Proc. IEEE 19th Int'l Workshop on Quality of Service (IWQoS), 2011.
12. D. Xu, X. Liu, and B. Fan, "Efficient Server Provisioning and Offloading Policies for Internet Datacenters with Dynamic Load Demand," IEEE Trans. Computers, vol. 64, no. 3, Mar. 2014.
13. D. Xu and X. Liu, "Geographic Trough Filling for Internet Datacenters," Proc. IEEE INFOCOM, 2012.
14. L. Georgiadis, M.J. Neely, and L. Tassiulas, "Resource Allocation and Cross-Layer Control in Wireless Networks," Foundations and Trends in Networking, vol. 1, no. 1.
15. D. Xu, X. Liu, and Z. Niu, "Joint Resource Provisioning for Internet Datacenters with Diverse and Dynamic Traffic," IEEE Trans. Cloud Computing, preprint.

DAN XU is a senior member of technical staff at AT&T Labs in California. His research interests include datacenter network resource provisioning and energy-efficiency design, along with 3G/4G mobile network performance modeling and analysis. Xu has a PhD in computer science from the University of California, Davis. Contact him at danxu@ucdavis.edu.

XIN LIU is a professor in the Computer Science Department at the University of California, Davis. Her research interests include wireless communication networks. Liu has a PhD in electrical engineering from Purdue University. She's a Chancellor's Fellow and a member of IEEE. Contact her at liu@cs.ucdavis.edu.

ATHANASIOS V. VASILAKOS is a professor in the Department of Computer Science, Electrical, and Space Engineering at the Lulea University of Technology, Sweden. His research interests include cloud computing, cyber-physical systems, and green computing/networking. He's a senior member of IEEE. Contact him at th.vasilako@gmail.com.