OPTIMIZING ENERGY IN DATACENTER USING DVFS


CHAPTER 5

OPTIMIZING ENERGY IN DATACENTER USING DVFS

5.1 Introduction

Energy efficiency and low carbon strategies have attracted a lot of concern. The goal of 20% energy efficiency and carbon reduction by 2020 drove the Information and Communication Technologies (ICT) sector (Koutitas 2010) to strategies that incorporate modern designs for low carbon and sustainable growth. The ICT sector is part of the 2020 goal and participates in three different ways: in the direct way, ICT is called to reduce its own energy demands (green networks, green IT); in the indirect way, ICT is used for carbon displacement; and in the systematic way, ICT collaborates with other sectors of the economy to provide energy efficiency (smart grids, smart buildings, intelligent transportation systems, etc.). ICT, and datacenters in particular, have a strong impact on global CO2 emissions. Moreover, an important part of the operational expenditure is due to electricity demands. This work presents the sources of inefficiency and the challenges that have to be addressed to reduce the carbon emissions and electricity expenses of the sector.

The datacenter is the most active element of an ICT infrastructure: it provides computation and storage resources and supports the respective applications. The datacenter infrastructure is central to the ICT architecture, from which all content is sourced or through which it passes. Worldwide, datacenters consume around 40 billion kWh of electricity per year, and a big portion of this consumption is wasted due to inefficiencies and non-optimized designs. According to the Gartner Report, a typical datacenter consumes the same amount of energy per year as a large number of households, and the electricity consumption of datacenters is about 0.5% of world production (Koutitas 2010). In terms of carbon emissions this power consumption pattern is comparable to that of the airline industry and to the emissions generated by

Argentina, Malaysia and the Netherlands.

Energy efficiency in ICT is defined as the ratio of data processed over the required energy (Gbps/Watt) and is different from power conservation, where the target is to reduce energy demands without considering the data volume. Taking this ratio into consideration, green IT technologies have important benefits: they reduce electricity costs and OPEX (operational expenditure); improve corporate image; provide sustainability; extend the useful life of hardware; reduce IT maintenance activities; reduce carbon emissions and help prevent climate change; and provide foundations for the penetration of renewable energy sources in IT systems.

The demand for high speed data transfer and storage capacity, together with the increasing growth of broadband subscribers and services, will make green technologies of vital importance for the telecommunication industry in the near future. Already, recent research and technological papers show that energy efficiency is an important issue for future networks. A review of energy efficient technologies for wireless and wired networks is presented by Brown et al (2007), where the design of energy efficient WDM (Wavelength Division Multiplexing) ring networks is highlighted. It is shown that energy efficiency can be achieved by increasing the capital expenditure of the

network, by reducing the complexity and by utilizing management schemes. The case of thin client solutions has also been investigated, and it has been shown that employing power states in the operation of a datacenter can yield energy efficiency (Tsirogiannis et al 2010). Efforts have been cited related to agreeing on and enabling standard efficiency metrics, real time measurement systems, modeling energy efficiency, suggesting optimal designs, incorporating renewable energy sources in the datacenter and developing sophisticated algorithms for designing and managing datacenters. These approaches have been published by various companies, experts in the field and organizations (Rivoire 2007; Poess 2008; Barroso 2007). Although the IT industry has begun greening major corporate datacenters, most of the cyber infrastructure on a university campus or in SMEs still involves a complex, ad hoc network of clusters placed in small departmental facilities with a suboptimal energy environment.

5.2 Datacenter Infrastructure and Power Consumption

Datacenter Infrastructure

Datacenters incorporate critical and non-critical equipment. Critical equipment comprises the devices responsible for data delivery, usually named IT equipment. Non-critical equipment comprises the devices responsible for cooling and power delivery, named the Non Critical Physical Infrastructure (NCPI). Fig 5.1 presents a typical datacenter block diagram. The overall design of a datacenter can be classified in four categories, Tier I to Tier IV, each one presenting advantages and disadvantages related to power consumption and availability. In most cases, availability and safety issues lead to redundant N+1, N+2 or 2N datacenter designs, and this has a serious effect on power consumption.

Figure 5.1 Typical datacenter infrastructure

According to Fig 5.1, a datacenter has the following main units:

Heat Rejection: usually placed outside the main infrastructure; incorporates chillers and dry coolers and presents an N+1 design.

Pump Room: pumps chilled water between the dry coolers and the CRACs (Computer Room Air Conditioners); presents an N+1 design (one pump in standby).

Switchgear: provides direct distribution to mechanical equipment and, via the UPS, to electrical equipment.

UPS: Uninterruptible Power Supply modules provide power and are usually designed with multiple redundant configurations for safety; typically 1000 kVA to 800 kW per module.

EG: Emergency Generators supply the necessary power to the datacenter in case of a breakdown; usually diesel generators.

PDU: Power Distribution Units deliver power to the IT equipment; usually 200 kW per unit, with dual PDUs (2N) for redundancy and safety.

CRAC: Computer Room Air Conditioners provide cooling and air flow to the IT equipment; air discharge is usually in an up-flow or down-flow configuration.

IT Room: incorporates computers and servers placed in blades, cabinets or suites in a grid formation; provides data manipulation and transfer.

5.3 Power Consumption in Datacenters

The overall datacenter power consumption is the sum of the power consumed by each unit. Efficiency of individual parts is an important step for greening the datacenter, but optimization is achieved by addressing the overall datacenter design. The power delivery in a typical datacenter is presented in Fig 5.2. The power path is divided into two types: a series path and a parallel path. The power enters the datacenter from the main utility (electric grid, generator) or the Renewable Energy Supply (RES) (Saynor 2003) and feeds the switchgear in series. Within the switchgear, transformers scale the voltage down. In case of a utility failure, the UPS maintains the voltage and is also fed by the EG. The UPS incorporates batteries for emergency

power supply and processes the voltage with a double AC-DC-AC conversion to protect from utility failures and to smooth the transition to the EG system. Of course, the AC-DC-AC conversion is a process that wastes power and reduces the overall efficiency. The output of the UPS feeds the PDUs that are placed within the main datacenter room. The PDUs break the high voltage from the UPS into many lower-voltage circuits to supply the electronic equipment. Finally, power is consumed by the IT processes, namely storage, networking, CPU and, in general, data manipulation.

Figure 5.2 Power delivery in a typical datacenter
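The cumulative effect of the series power path can be sketched numerically. The stage efficiencies below are illustrative assumptions, not measured values; the point is that losses multiply along the chain before power reaches the IT load.

```python
# Illustrative sketch of the series power path of Fig 5.2.
# Stage efficiencies are assumed example values, not measured data.
STAGES = {
    "switchgear/transformer": 0.98,
    "UPS (double AC-DC-AC conversion)": 0.90,
    "PDU": 0.97,
}

def delivered_fraction(stages):
    """Fraction of input power that actually reaches the IT equipment."""
    frac = 1.0
    for eff in stages.values():
        frac *= eff          # losses compound multiplicatively
    return frac

frac = delivered_fraction(STAGES)
print(f"IT receives {frac:.1%} of the grid input")   # about 85.6% here
```

Even with individually good stages, the double AC-DC-AC conversion in the UPS dominates the series losses in this sketch.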

Figure 5.3 Power consumption by various parts of a datacenter (system refers to motherboards, fans, etc.)

The parallel path feeds the cooling system, which is important for the heat protection of a datacenter. The cooling system is also connected to the EG, since without cooling a typical datacenter can operate only for a few minutes before overheating. The cooling system incorporates fans and liquid chillers. The power distribution of these processes in an inefficient datacenter is presented in Fig 5.3. It can be observed that almost 70% of the power is consumed by non-critical operations like cooling, power delivery and conversion, and only 30% is used by the IT equipment. Of course, a portion of this percentage is also spent on networking, CPU, fans, storage and memory processing. In other
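The 70/30 split of Fig 5.3 corresponds to a poor Power Usage Effectiveness (PUE), the standard ratio of total facility power to IT power. A minimal sketch, with an assumed example input power:

```python
def pue(total_power_kw, it_power_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_power_kw / it_power_kw

# Fig 5.3's inefficient datacenter: only ~30% of input power reaches IT.
total = 1000.0          # kW drawn from the utility (example value)
it = 0.30 * total       # ~30% consumed by the IT equipment
print(pue(total, it))   # ~3.33, far from the ideal PUE of 1.0
```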

words, the useful work of the datacenter is associated with a fraction of power delivered to the IT equipment that is smaller than 30%.

The power consumption pattern presented in Figure 5.3 is not constant with time but varies according to different parameters (Schneider et al 2012), the main ones being the workload of the datacenter and the outside environment. Modeling the energy efficiency and the losses of the datacenter's equipment is a complex task, and crude simplifying assumptions have yielded great errors. Firstly, the losses associated with the power and cooling equipment cannot be assumed constant: it has been observed that the energy efficiency of this equipment is a function of the IT load and presents a nonlinear behavior. In addition, this equipment usually operates at loads lower than its maximum capacity, which increases the losses of the system. Finally, the heat generated by the network-critical physical infrastructure (NCPI) equipment is not insignificant. In general, the losses of NCPI equipment are highly correlated to the workload of the datacenter in a complex nonlinear relationship. According to the input workload, the losses of NCPI equipment can be categorized as follows:

No load losses: losses that are fixed even if the datacenter has no workload. The loss percentage increases as the load decreases.

Proportional losses: losses that depend linearly on the workload. The loss percentage is constant with load.

Square law losses: losses that depend on the square of the workload. These losses appear at high workloads (over 90%). The loss percentage decreases with a decrease of load.

The IT equipment also presents non-constant losses, and its energy efficiency depends on the input workload. Based on these observations it is concluded that
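The three loss categories above can be combined into a simple additive loss model. The coefficients below are illustrative assumptions chosen only to show the shape of the behavior:

```python
def ncpi_losses(load, fixed=0.08, prop=0.05, square=0.02):
    """NCPI losses as a fraction of rated power, for an IT load in [0, 1].
    fixed  -> no-load losses (present even with zero workload)
    prop   -> proportional losses (linear in load)
    square -> square-law losses (dominant at high workloads)
    Coefficients are assumed example values.
    """
    return fixed + prop * load + square * load ** 2

# At zero workload the facility still loses 8% of rated power here, so the
# loss *percentage* relative to useful load grows as the load shrinks.
print(ncpi_losses(0.0), ncpi_losses(0.5), ncpi_losses(1.0))
```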

the energy efficiency of a datacenter is a complicated, non-constant parameter that varies with time. For this reason, a technique for measuring and predicting the datacenter's energy efficiency is of great importance.

5.4 Sources of Losses at Datacenters

The operation of datacenters suffers great inefficiencies: a great amount of power is wasted on the operation of non-critical equipment and on removing the heat produced by the electronic equipment. The main disadvantage of real datacenters is that a great amount of energy is wasted on cooling or is transformed into heat because of the inefficient operation of electronic equipment, whether NCPI or IT. The main causes of power waste are summarized as follows:

Power units (UPS, transformers, etc.) operate below their full load capacities.

UPS are oversized with respect to the actual load requirements in order to avoid operating near their capacity limit.

Air conditioning equipment consumes extra power to deliver the cool air flow over long distances.

Inefficient UPS equipment.

Blockages between air conditioners and equipment that lead to inefficient operation.

No virtualization and consolidation.

Inefficient servers.

No close-coupled cooling.

No efficient lighting.

No energy management and monitoring.

Underutilization due to N+1 or 2N redundant designs.

Oversizing of the datacenter.

Under-floor blockages that contribute to inefficiency by forcing cooling devices to work harder to accommodate the heat removal requirements of the existing load.

The procedure to transform a datacenter into an energy efficient one (a green datacenter) is complex, and it can only be achieved by targeting both individual part optimization, which can be considered as operational costs, and overall system performance, which can be considered as planning actions. According to Figs 5.2 and 5.3, the optimized operation of the datacenter requires the input power to be minimized without affecting the operation of the IT equipment.

5.5 Challenges for Energy Efficiency

In general, efficiency can be achieved through the optimization of the operation and through optimal planning. Moreover, standards can be engineered in order to drive energy efficiency. This process incorporates the domains presented in Fig 5.4.

Figure 5.4 Directions for green datacenter transformation (free cooling, air-flow management, liquid heat removal, room layout; alternative energy sources, tri-generation; efficient UPS and transformers, blade servers, dual-core processors, energy proportional computing, right sizing)

5.6 Optimization of Operational Costs

The operational costs are associated with the optimization of individual equipment, namely the IT equipment and the NCPI.

IT Equipment

Efficiency of the IT equipment is an important step for the green operation of a datacenter (Kumon 2012). The DCeP metric and the ITEE and ITEU factors show that energy efficiency is correlated to the efficient use of the IT equipment. Achieving efficiency at the IT level can be considered the most important strategy for a green datacenter, since for every Watt saved in computation two additional Watts are saved: one Watt in power conversion and one Watt in cooling. In general, the following actions are necessary to achieve this goal.

Retiring unused servers - Some datacenters have application servers which are operating but have no users. These servers add no-load losses to the system and need to be removed. The usable lifetime of servers within the datacenter varies greatly, ranging from as little as two years for some x86 servers to seven years or more for large, scalable SMP server systems (Jamal 2009). IDC surveys indicate that almost 40% of deployed servers have been operating in place for four years or longer; that represents over 12 million single core-based servers still in use. In addition, the servers of most datacenters have been designed for cost optimization and not for energy efficiency. Many servers in datacenters have power supplies that are only 70% efficient; the lost 30% of the power goes into the server as heat. Having inefficient power supplies therefore means that excess money is spent on power, and additional cooling is needed. Another problem with current servers is that they are used at only 15-25% of their capacity. From a power and cooling perspective, the problem is that the amount of power required to run traditional servers does not vary linearly with server utilization. Therefore, running ten servers at 10% utilization will consume more power than running one or two servers at 80-90% utilization each.

Migrating to more energy efficient platforms - Blade servers, which produce less heat in a smaller area, can be used. Non-blade systems require bulky, space inefficient components that run hot, duplicated across many computers that may or may not perform at capacity. By locating services in one place and sharing resources between the blade computers, the overall utilization becomes more efficient. The efficiency of blade centers is obvious if one considers their power, cooling, networking and storage capabilities. Koutitas (2010) has presented an example where 53 blade servers consuming 21 kW provided 3.6 Tflops of operation, whereas in 2002 the same performance required an Intel Architecture based HPC cluster of 512 servers arranged in 25 racks and consuming 128 kW. This means that, compared to 2002 technology, today only about 17% of the energy is necessary for the same performance.

Power of Blade Servers - Computers operate over a range of DC voltages, but utilities deliver power as AC at higher voltages, so one or more Power Supply Units (PSUs) are required for the conversion. To ensure that the failure of one power source does not affect the operation of the computer, even entry level servers often have redundant power supplies, adding to the bulk and heat of the design. In a blade enclosure, a single power source provides the supply for all blades; this may come as a power supply inside the enclosure or as a dedicated PSU supplying DC to multiple enclosures.

Cooling of Blade Servers - During operation, mechanical and electrical components produce heat, which must be removed to ensure the proper functionality of the system. Most blade enclosures, like most computing systems, remove heat by using fans.
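The consolidation argument above (ten lightly loaded servers versus one or two well-utilized ones) can be checked with a common linear server power model. Peak power, the idle fraction and the comparison of ten servers at 10% against two at 50% (the same aggregate work) are assumed example values:

```python
def server_power(u, p_peak=300.0, idle_frac=2 / 3):
    """Linear power model: an idle server draws idle_frac of peak power
    (Sec 5.8 notes ~2/3), plus a part proportional to utilization u in [0,1].
    p_peak and idle_frac are assumed example values."""
    return p_peak * (idle_frac + (1.0 - idle_frac) * u)

# Same aggregate work (one server-equivalent), two placements:
spread = 10 * server_power(0.10)    # ten servers at 10% utilization
packed = 2 * server_power(0.50)     # two servers sharing the same load
print(spread, packed)               # ~2100 W versus ~500 W
```

Because idle power dominates, consolidating the same work onto fewer machines cuts total power by roughly a factor of four in this sketch.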

A blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade enclosures feature high speed, adjustable fans and control logic, or even liquid cooling systems, tuned to the cooling requirements of the system.

Networking of Blade Servers - The blade enclosure provides one or more network buses to which the blades connect, so that ports are present at a single location (versus one in each computer chassis), reducing the connection cost for the individual devices. This also means that the probability of wiring blocking the air flow of the cooling systems in the datacenter is minimized.

More Efficient Servers - For energy efficiency, vendors should focus on making processors that consume less energy and use the available energy as efficiently as possible. This requires close coordination in system design between the server manufacturer and the processor vendor. Practical strategies for power efficient computing technologies have been presented covering microelectronics, off-chip interconnect, the power delivery system, the underlying logic devices and the associated cache memory. In addition, the replacement of old single core processors with dual or quad core machines is important; this combination can improve performance per Watt and efficiency per square meter.

Energy Proportional Computing - Many servers operate at a fraction of their maximum processing capacity. Efficiency can be achieved when the server scales down its power use while the workload is below its maximum capacity. All servers are used heavily when the search traffic is high, but during periods of low traffic a server might serve hundreds of queries per second, so idle periods are unlikely to last longer than a few seconds. In addition, it is well known that rapid transitions between idle and full mode consume high energy. Barroso et al (2007) studied over 5000 Google servers and showed that the activity profile for most of the time is limited to 20-30% of the servers' maximum capacity. A server's power consumption responds differently to varying input workloads.

Figure 5.5 Comparison of power usage and energy efficiency, relative to peak, for Energy Proportional Computing (EPC) and a common server (NO EPC) (Kim 2010)

In Fig 5.5 the normalized power usage and energy efficiency of a common server are presented as a function of the utilization (% of maximum capacity). It can be observed that for typical server operation (10-50%) the energy efficiency is very low, meaning that the server is oversized for the given input workload. Even for a no-workload scenario, the power consumption of the server is high. In the case of Energy Proportional Computing (EPC) (Kim 2010), the energy varies proportionally to the input work. Energy-proportional machines must exhibit a wide dynamic power range, a property that might be rare in other domains but is not unprecedented in computing equipment. To achieve energy proportional computing, two key features are necessary: a wide dynamic power range and active low power modes.
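The qualitative curves of Fig 5.5 can be reproduced with a minimal model. The 50% idle power of the non-EPC server is an assumed example value:

```python
def power_no_epc(u, idle_frac=0.5):
    """Conventional server: draws idle_frac of peak power even at zero load
    (idle_frac is an assumed example value). Power is normalized to peak."""
    return idle_frac + (1.0 - idle_frac) * u

def power_epc(u):
    """Ideal energy-proportional server: power tracks utilization."""
    return u

def efficiency(u, power_model):
    """Useful work per unit power, normalized so that peak load gives 1.0."""
    p = power_model(u)
    return u / p if p > 0 else 0.0

# In the typical 10-50% operating region the conventional server is far
# below its peak efficiency, while the ideal EPC server stays at 1.0.
for u in (0.1, 0.3, 0.5, 1.0):
    print(u, round(efficiency(u, power_no_epc), 2), efficiency(u, power_epc))
```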

Current processors have a wide dynamic power range of even more than 70% of peak power. On the other hand, the dynamic power range of other equipment is much narrower: DRAM 50%, disk drives 25% and network switches 15%. A processor running at a lower voltage-frequency mode can still execute instructions without requiring a performance impacting mode transition. Such active low-power modes are largely missing in the other components of the system: networking equipment rarely offers any low-power modes, and the only low-power modes currently available in mainstream DRAM and disks are fully inactive ones. A technique to improve power efficiency is Dynamic Voltage and Frequency Scaling (DVFS) (Yang et al 2012), where the performance of the CPU is adjusted dynamically (via clock rate and voltage) to match the workload, providing performance on demand. Another advantage of energy proportional machines is the ability to develop power management software that can identify underutilized equipment and set it to idle or sleep modes. Finally, benchmarks such as SPECpower_ssj2008, a standard application representative of a broad class of server workloads, can help isolate efficiency differences in the hardware platform. SPECpower_ssj2008 is the first industry-standard SPEC benchmark that evaluates the power and performance characteristics of volume server class and multi-node class computers. With SPECpower_ssj2008, SPEC is defining server power measurement standards in the same way it has done for performance.

5.7 Problem Description

The majority of prior research work in the area of analyzing power utilization concentrates on task scheduling in the datacenter, with task allocation among the application servers targeting power saving or criteria considering thermal factors. The major research gap is that only few implementation works have been
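The benefit of DVFS follows from the first-order CMOS dynamic power relation P ≈ C·V²·f. Assuming, as a first-order approximation, that supply voltage can be scaled down with frequency, halving both cuts dynamic power to roughly one eighth; the voltage and frequency values below are illustrative:

```python
def dynamic_power(v, f, c=1.0):
    """First-order CMOS dynamic power: P = C * V^2 * f
    (c is a switching-capacitance constant, illustrative here)."""
    return c * v * v * f

# Illustrative DVFS step: halve the clock and, assuming voltage scales
# with frequency, halve the supply voltage as well.
full = dynamic_power(v=1.2, f=2.0e9)
half = dynamic_power(v=0.6, f=1.0e9)
print(half / full)   # ~0.125: about one eighth of the dynamic power
```

This cubic sensitivity is why lowering the voltage-frequency operating point pays off far more than simply idling between bursts of work.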

carried out in the past that consider datacenters, unwanted power consumption and task provisioning together. Cloud computing is associated with diverse emerging issues that are yet to be resolved. Unfortunately, the current potential and capability of cloud computing has not yet been enhanced to cater for the crucial scope of its usage; there are both technical and non-technical gaps. One of the most prominent issues is that, although the vision of cloud computing is deployment at any location irrespective of time and devices, existing clouds, particularly in the case of distributed systems, are reported to deliver very poor performance in applications. Another significant issue is that the migration of data from a client's infrastructure to the cloud is actually very costly and is also a time consuming process. Various legal provisions are yet to be created for the acceptance of this new technology in the socioeconomic market. In many datacenters today, the cost of power is second only to the actual cost of the equipment investments. Today cloud datacenters consume 1-2 percent of world energy; if left unchecked, they will rapidly consume an ever-increasing amount of power.

5.8 Proposed System

The proposed technique attempts to reduce the cumulative power utilization of a specified datacenter by choosing suitable computing resources for task processing, depending on the load quantity and transmission capability of the different parts of the datacenter. The transmission capability is represented by the available bandwidth facilitated to the individual application servers or clusters of servers. The proposed technique designs an architectural framework possessing a better designed datacenter topology. The main contribution of this proposed system is (i) an analysis and in-depth study of real-time Cloud services with virtualization, and (ii) the investigation of various energy-aware virtual machine optimizing schemes based on Dynamic Voltage Frequency Scaling (DVFS).
The weighted grouping of the server level W_s, rack level W_r, and module level W_m criteria is given by,

Weighted Grouping = P·W_s + Q·W_r + R·W_m (5.1)

where P, Q and R are the weighting coefficients. The allocation of a higher load to the servers is preferred if the value of P is high. Priority is higher for tasks placed in racks with reduced traffic when the value of Q is higher, and the selection of loaded modules is preferred if the value of R is higher. For understanding the magnitude of the task allocation in the datacenter, the value of R plays an important role. The implementation plan of the proposed system is shown in Fig 5.6.

Figure 5.6 Implementation Plan
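Equation 5.1 is a plain weighted sum and can be sketched directly. The coefficient and criterion values below are illustrative assumptions:

```python
def weighted_grouping(w_s, w_r, w_m, p, q, r):
    """Eq (5.1): combined server-, rack- and module-level selection metric."""
    return p * w_s + q * w_r + r * w_m

# Illustrative values: a large p favours loading individual servers, a
# large q favours racks with low traffic, a large r favours loaded modules.
print(weighted_grouping(0.9, 0.7, 0.6, p=0.5, q=0.3, r=0.2))   # ~0.78
```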

The selection of the processing server combines the server load S_L(l) with the transmission capability T_C(q), which corresponds to the equivalent share of the uplink resources at the top of rack switch:

W_s(l,q) = S_L(l) · (T_C(q)/Δ_r)^(1/φ) (5.2)

In equation (5.2), S_L(l) is a parameter proportional to the load of an individual server l, T_C(q) represents the load at the rack uplink, evaluated from the traffic blocking intensity in the designed queue q, Δ_r is a bandwidth over-scheduling parameter at the rack switch and φ is a coefficient governing the share between S_L(l) and T_C(q). Both S_L(l) and T_C(q) lie within the range (0,1), and a greater value of φ reduces the significance of the network variant T_C(q). Accordingly, the criteria influencing the rack and module levels can be expressed as:

W_r(l,q) = S_R(l) · (T_m(q)/Δ_m)^(1/φ), with S_R(l) = (1/n) Σ_{i=1..n} S_L(l_i) (5.3)

W_m(l) = S_M(l) = (1/k) Σ_{j=1..k} S_R(l_j) (5.4)

In equation (5.4), S_R(l) is the load on a rack, derived from the sum of the loads of all the servers in the rack, and S_M(l) is the module load, derived as the normalized sum of all the rack loads in the module; n and k are the number of servers in a rack and the number of racks in the module respectively, T_m(q) depends on the network load at the incoming networking devices, and Δ_m represents the channel capacity over-scheduling criterion at the module switches. It can be seen that the module level criterion W_m possesses only load-related components l. This is due to the fact that all the modules are associated with uniform networking devices and share a similar channel capacity, deploying a technique that performs per-flow load balancing by computing a hash function on every ingress message. It was also noted that an idle server in a datacenter
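A sketch of the three selection criteria, under the reading of Equations 5.2-5.4 used in this section (function and symbol names are illustrative):

```python
def w_server(s_l, t_c, delta_r, phi):
    """Eq (5.2): server criterion; a larger phi damps the traffic term t_c."""
    return s_l * (t_c / delta_r) ** (1.0 / phi)

def w_rack(server_loads, t_m, delta_m, phi):
    """Eq (5.3): normalized sum of the rack's server loads, weighted by
    the uplink traffic term."""
    s_r = sum(server_loads) / len(server_loads)
    return s_r * (t_m / delta_m) ** (1.0 / phi)

def w_module(rack_loads):
    """Eq (5.4): the module criterion uses load components only."""
    return sum(rack_loads) / len(rack_loads)

# A loaded rack behind a congested uplink (small traffic term) scores
# lower than an equally loaded rack with a free uplink, steering new
# tasks away from hotspots.
print(w_rack([0.9, 0.8], t_m=0.2, delta_m=1.0, phi=2.0))
print(w_rack([0.9, 0.8], t_m=0.9, delta_m=1.0, phi=2.0))
```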

consumes about two-thirds of its peak power, which implies that any provisioning scheme must consolidate the allocated tasks on the minimum feasible group of computing resources. It is also observed that if the application servers are kept in uniform execution mode at high frequency, hardware reliability decreases along with performance effectiveness. Therefore, the proposed load criterion is designed as in equation (5.5):

S_L(l) = 1/(1 + e^(-l/ε)) − 1/(1 + e^(-(l-(1-ε))/ε)) (5.5)

The first component of equation (5.5) defines the shape of the function, while the second term acts as a penalty targeted at the region near the highest load of the server. The factor ε defines the size and the shift of the descent slope. The load of an application server l is constrained to (0,1). In the case of a highly dynamic load, the load on an application server is evaluated as the sum of the processing loads of all currently allocated tasks. In the case of jobs with a fixed completion deadline, the load on the server can be represented as the least quantity of computing resources required by the application server to accomplish all the jobs at the predefined time as per the SLA. In a datacenter, the application servers share the top of rack switch to meet their transmission requirements. Conversely, computing the exact share of this channel capacity utilized by a specific application server, or by a higher-frequency transmission during execution, is highly expensive in terms of processing. In order to mitigate this, equations (5.3) and (5.4) use a variant based on the occupancy level of the job queue at the networking device and scale the channel capacity by the over-scheduling criterion Δ.
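The load criterion of Equation 5.5, under the reading used here, can be sketched as follows; the value of ε is an assumed example:

```python
import math

def server_load_criterion(l, eps=0.05):
    """Eq (5.5) as read here: a rising sigmoid favouring loaded servers,
    minus a penalty sigmoid centred near full load (l -> 1 - eps).
    eps is an assumed example value."""
    favour = 1.0 / (1.0 + math.exp(-l / eps))
    penalty = 1.0 / (1.0 + math.exp(-(l - (1.0 - eps)) / eps))
    return favour - penalty

# Moderate-to-high load is rewarded, but servers close to saturation are
# penalized, discouraging placements that would overload a machine.
for load in (0.1, 0.5, 0.9, 1.0):
    print(load, round(server_load_criterion(load), 3))
```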

Instead of depending on the instantaneous queue size, the occupancy level q is normalized by the total queue size T_max into the range (0,1), which corresponds to the span between an empty and a fully occupied buffer. By depending on the buffer occupancy, the proposed system becomes responsive to growing congestion in the racks rather than to momentary differences in transmission rate. Therefore, T(q) is represented as in equation (5.6):

T(q) = e^(−(2q/T_max)²) (5.6)

5.9 Research Methodology

Three-tier trees of hosts and switches form the most widely used datacenter architecture, consisting of the core tier at the root of the tree, the aggregation tier that is responsible for routing, and the access tier that holds the pool of computing servers, as in Fig 5.7. Early datacenters used two-tier architectures; such datacenters could, depending on the per-host bandwidth requirements, typically support not more than 5,000 hosts. Given that the pools of servers in today's datacenters are of the order of 100,000 hosts, and given the requirement to keep layer-2 switches in the access network, a three-tiered design becomes the most appropriate option.
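Returning to Equation 5.6, the buffer-occupancy term, under the reading used here, can be sketched as follows:

```python
import math

def t_buffer(q, t_max):
    """Eq (5.6) as read here: the traffic term decays from 1.0 (empty
    buffer) towards 0 as the queue q approaches the full size t_max."""
    return math.exp(-((2.0 * q) / t_max) ** 2)

# Empty, half-full and full buffer: the term falls off early, so the
# scheduler reacts to growing congestion before the buffer saturates.
print(t_buffer(0, 100), t_buffer(50, 100), t_buffer(100, 100))
```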

Figure 5.7 Three-tier datacenter architecture (core, aggregation and access networks; racks of computing servers connected by GE links to L2/L3 rack switches and by 10 GE links upwards through L3 switches)

Although 10 Gigabit Ethernet (GE) transceivers are commercially available, in a three-tiered architecture the computing servers (grouped in racks) are interconnected using 1 GE links. This is due to the fact that 10 GE transceivers are (a) too expensive and (b) offer more capacity than needed for connecting individual computing servers. In current datacenters, rack connectivity is achieved with inexpensive Top-of-Rack (ToR) switches. A typical ToR switch aggregates 48 GE links, which interconnect the computing servers within a rack, onto two 10 GE uplinks. The ratio between the capacities of the downlinks and the uplinks of a switch defines its oversubscription ratio, here equal to 48/20 = 2.4:1. Therefore, individual servers can use only about 416 Mb/s out of their 1 GE links. At the higher layers of the hierarchy, the racks are arranged in modules (see Fig 5.7), with a pair of aggregation switches servicing the module connectivity. Typical oversubscription ratios are around 1.5:1 for these aggregation switches, which further reduces the available bandwidth for the individual computing servers to about 277 Mb/s. The bandwidth between the aggregation and core networks is distributed using a multi-path routing technology, Equal Cost Multi-Path (ECMP) routing (Kantola 2011). The ECMP technique performs a
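The oversubscription arithmetic above can be sketched directly; the helper name is illustrative:

```python
def per_server_bw(link_mbps, n_links, uplink_mbps, n_uplinks):
    """Effective per-server bandwidth behind an oversubscribed switch."""
    oversub = (link_mbps * n_links) / (uplink_mbps * n_uplinks)
    return link_mbps / oversub

tor = per_server_bw(1000, 48, 10000, 2)   # ToR: 48/20 = 2.4:1 oversubscription
agg = tor / 1.5                           # aggregation layer adds ~1.5:1
print(tor, agg)                           # ~416.7 and ~277.8 Mb/s
```

These match the roughly 416 Mb/s and 277 Mb/s figures cited above for the rack and aggregation levels.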

per-flow load balancing that differentiates the flows by computing a hash function on the incoming packet headers. For a three-tiered architecture, the maximum number of allowable ECMP paths bounds the total number of core switches to eight. Such a bound also limits the bandwidth deliverable to the aggregation switches. This limitation is overcome with the commercial availability of standardized 100 GE links. Datacenter architecture remains an active research topic, and fat-tree successors are constantly being proposed for large-scale datacenters. However, given the fact that not a single datacenter has been built to date based on such proposals, we restrict the scope of this thesis to the three-tiered architecture; nevertheless, the findings remain valid for other types of datacenter topologies.

5.10 Result Analysis

For performance evaluation purposes, the proposed system was implemented in Java. The simulator is designed to capture datacenter communication processes at the packet level. It implements a set of energy-efficient optimization techniques, such as DVFS, and offers tools to monitor energy consumption in datacenter servers, switches and other components. A three-tier tree datacenter topology consisting of 1536 servers arranged into 32 racks, each holding 48 servers, served by 8 aggregation switches and 4 core switches (see Fig 5.7), was used in all simulation experiments. We used 1 GE links for interconnecting the servers inside the racks, while 10 GE links were used to form the fat-tree topology interconnecting the access, aggregation and core switches. The propagation delay on all of the links was set to 10 ns. The workload generation events and user arrivals are exponentially distributed in time to mimic a typical process. As soon as a scheduling decision is taken for a newly arrived workload, it is sent for execution over the datacenter network to the selected server. The size of a workload is equal to 15 KB.
Being fragmented into standard Ethernet frames, the 15 KB workload occupies 10 Ethernet packets. During execution, each workload produces a constant bit rate stream of 1 Mb/s directed out of the datacenter. Such a stream is designed to mimic the behavior of the most common video sharing applications. To add uncertainty, during execution each workload communicates with another randomly chosen workload by sending a 75 KB message internally. A message of the same size is also sent out of the datacenter at the moment of task completion as an external communication. Servers left idle by the scheduler are powered down to reduce power consumption, using a dynamic power management technique. The same technique is applied to unused network switches in the access and aggregation networks. The core network switches always remain operational at the full rate due to their crucial importance in communications. The Green scheduler consolidates the workload, leaving most of the servers (around 1016 on average) in idle mode; those servers are powered down. The load of the operating servers is thereby kept close to the maximum, while communication and congestion levels in the network are not taken into consideration. As a consequence, the workloads scheduled together can produce a combined load exceeding the ToR switch forwarding capacity, which causes network congestion. The round-robin scheduler follows a completely opposite policy: it distributes the communicational and computing loads equally among switches and servers, so the network traffic is balanced and no server is overloaded. However, the drawback is that no server or network switch is left idle for powering down, making the round-robin scheduler the least energy-efficient. The proposed technique achieves workload consolidation for power efficiency while preventing computing servers and network switches from overloading. In fact, the average load of the rack switch uplink is around 0.95 and the average load of an operating server is around 0.9.
Such load levels ensure that no additional delays in job communications are caused by network congestion. However, this advantage comes at the price of a slight increase in the number of running servers: on average, the proposed scheduler leaves 956 servers idle, as opposed to the 1016 servers left idle by the Green scheduler. To explore the uplink load in detail, the traffic statistics at the most loaded ToR switches were measured. Under the Green scheduler, the link is constantly overloaded and the queue remains almost constantly full, which causes multiple congestion losses. All queues were limited to 1000 Ethernet packets in our simulations. Under the proposed scheduler, the buffer occupancy is mostly below half of its size, with an average of 213 packets. At certain instants the queue even remains empty, having no packets to send. This results in a slightly reduced uplink utilization level. The proposed system is evaluated on the Java platform on a 32-bit machine running the Windows OS with a 1.84 GHz processor. The prototype design of the DVFS protocol is layered into task allocation, group creation, and processing elements. Virtual machines are developed for accepting a number of tasks, taking into account the simulation time and the processing speed of the system. The framework is developed to track the communication in the datacenter. The optimization is checked through JouleX (see Appendix). YouTube is taken as the application served, since multimedia applications are normally heavy, frequently visited, and offer sharing privileges to users.
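The queueing behavior measured above can be illustrated with a minimal drop-tail buffer limited to 1000 Ethernet packets, as in the simulations. This is a sketch with our own naming, not the actual simulator code: an overloaded uplink fills the buffer and counts congestion losses, while a lightly loaded one stays partially occupied.

```java
import java.util.ArrayDeque;

// Minimal drop-tail model of a ToR uplink buffer with a fixed
// capacity in Ethernet packets (1000 in the simulations above).
public class DropTailQueue {
    private final ArrayDeque<Integer> buffer = new ArrayDeque<>();
    private final int capacity;
    private int drops = 0;

    public DropTailQueue(int capacity) {
        this.capacity = capacity;
    }

    // Enqueue a packet; when the buffer is full the packet is
    // discarded and counted as a congestion loss.
    public boolean offer(int packetId) {
        if (buffer.size() >= capacity) {
            drops++;
            return false;
        }
        buffer.add(packetId);
        return true;
    }

    // Dequeue one packet per transmission opportunity, if any.
    public Integer poll() {
        return buffer.poll();
    }

    public int occupancy() { return buffer.size(); }
    public int drops()     { return drops; }
}
```

Offering 1200 packets to a 1000-packet queue with no service in between leaves the queue full and records 200 losses, mirroring the constantly overloaded uplink described above.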

Figure 5.8 JouleX interface

Figure 5.9 JouleX interface with graphical representation of power consumption

The proposed system implements the studied DVFS scheme along with the traditional DVS scheme; both schemes are considered for energy optimization of the datacenter and all of its components. The simulation settings are as follows:

Number of servers = 1536
Number of racks = 32
Number of application servers per rack = 48
Propagation delay on links = 10 ns
Task size = 15 KB

A communication link is created for connecting the application servers within each rack. The total set of work is exponentially dispersed to the executors.
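The settings above, together with the exponentially distributed workload dispersal, can be collected in a small configuration sketch (the class and field names are ours, not taken from the simulator):

```java
import java.util.Random;

// Simulation settings from the experiment description, plus the
// exponential inter-arrival sampling used to disperse the workload.
public class SimConfig {
    static final int RACKS = 32;
    static final int SERVERS_PER_RACK = 48;
    static final int SERVERS = RACKS * SERVERS_PER_RACK;  // 1536
    static final int AGGREGATION_SWITCHES = 8;
    static final int CORE_SWITCHES = 4;
    static final double LINK_PROPAGATION_DELAY_NS = 10.0;
    static final int TASK_SIZE_KB = 15;

    // Inverse-transform sampling: exponentially distributed
    // inter-arrival time (seconds) for a given mean arrival rate,
    // i.e. a Poisson arrival process.
    static double nextInterArrival(Random rng, double ratePerSecond) {
        return -Math.log(1.0 - rng.nextDouble()) / ratePerSecond;
    }
}
```

Sampling many inter-arrival times at rate λ gives an empirical mean near 1/λ, which is the "exponentially dispersed in time" behavior the experiments rely on.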

Figure 5.10 Server workload distribution performed by the proposed system, the previous DVFS implementation, and the round-robin technique

Figure 5.11 Joint uplink traffic loads

Once the provisioning decision is taken for a newly arrived task, it is forwarded to the selected server for execution. While processing, the task generates a uniform bit rate stream of 1.6 Mb/s, which is forwarded out of the datacenter. To increase the challenges at simulation time, each task communicates with another, arbitrarily chosen task by transmitting a dummy message of 125 KB internally. At the event of task accomplishment, a packet of the same size is also forwarded out of the datacenter. The mean task load in the datacenter is fixed at 45%, dispersed among the application servers; a round-robin algorithm can be used for this purpose. Fig. 5.10 shows the load distribution achieved by all three of the evaluated schedulers, i.e. the previous DVFS system, the proposed DVFS system, and the conventional round-robin. Fig. 5.11 shows the combined uplink load at the corresponding rack switches. The previous DVFS schema consolidates the workload, leaving most of the servers (around 1016 on average) idle in the evaluated datacenter. These servers are then powered down. However, the load of the active servers is maintained very close to the maximum, and no consideration of network congestion levels or communication delays is performed at this point. As a consequence, a quantity of workloads scheduled by the previous DVFS schema actually generates an integrated load exceeding the ToR switch forwarding capacity, which finally results in traffic congestion over the cloud network. The proposed algorithm is compared experimentally with the previously implemented DVFS technique as well as the conventional round-robin algorithm; the comparative analysis is presented in Fig. 5.10, which represents the load allocated to the servers for all the analyzed schedulers, and Fig. 5.11, which represents the joint uplink traffic load. The round-robin scheduler, in contrast, follows a completely opposite policy: it distributes the computing and communicational loads equally among servers and switches, so the network traffic is balanced and no server is overloaded.
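The round-robin policy just described can be sketched in a few lines (a minimal illustration; the class and method names are ours, not part of the simulator):

```java
// Conventional round-robin placement: tasks are spread cyclically
// over all servers, balancing load across the whole datacenter.
public class RoundRobinScheduler {
    private final int serverCount;
    private int next = 0;

    public RoundRobinScheduler(int serverCount) {
        this.serverCount = serverCount;
    }

    // Returns the index of the server assigned to the next task.
    public int assign() {
        int server = next;
        next = (next + 1) % serverCount;
        return server;
    }
}
```

With 1536 servers and at least as many tasks, every server receives work under this policy, so none can be left idle for power-down.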
However, the drawback is that no server or network switch is left idle for powering down, which makes the round-robin scheduler the least energy-efficient in this analysis. The previous DVFS schema as well as the proposed model are the most efficient schemes for reducing power drainage in datacenters. The proposed schema is able to power down around two thirds of the cloud servers along with their network switches, which potentially minimizes the amount of unwanted power drainage. With the proposed system, the datacenter energy consumption is reduced to half of that of the traditional and frequently used round-robin scheduler. Compared to the previous DVFS schema, the proposed technique adds around (a) 3.2% to the total datacenter energy consumption, (b) 3% to the servers' energy consumption, and (c) 1.7% to the switches' energy consumption. This minor increase in energy consumption is justified by the additional computing and communicational resources, detected by the proposed technique, that are required for keeping the quality of job execution at the desired level. In contrast to the previous DVFS schema, the proposed model deploys network awareness to detect congestion hotspots in the datacenter network and adjusts its job consolidation methodology accordingly. This becomes particularly relevant for data-intensive jobs, which are constrained more by the availability of communication resources than by the available computing capacities.

Summary

The proposed system identifies the role of the transmission structure in datacenter power utilization and highlights a unique process which jointly manages power and network resources. It also presents the current challenges in achieving energy efficiency in local and regional datacenter systems. The power consumption of the different layers of the datacenter is investigated, and it is shown that great portions of power are wasted both in non-critical infrastructure and in IT equipment. A review of the available metrics for energy efficiency was discussed, and the effect of energy efficiency on carbon emissions and operational costs was computed. It is shown that there are great expectations for cost and carbon savings when the datacenter operates in a green manner. Strategies for developing and transforming a green datacenter were also presented, based on operational and planning actions.
It was shown that energy efficiency should target overall system optimization of both non-critical and IT equipment, with the main focus placed on cooling, power delivery systems, virtualization, and workload management. The proposed system maintains an equilibrium among task processing and accomplishment, the power utilization of the datacenter, and the optimal network requirements. The system addresses the research gap of task grouping, reducing the number of active application servers in the datacenter and dispersing the network resources in order to mitigate the congestion which might possibly occur in the datacenter and lead to unwanted power consumption.


Computer Network. Interconnected collection of autonomous computers that are able to exchange information

Computer Network. Interconnected collection of autonomous computers that are able to exchange information Introduction Computer Network. Interconnected collection of autonomous computers that are able to exchange information No master/slave relationship between the computers in the network Data Communications.

More information

How High Temperature Data Centers & Intel Technologies save Energy, Money, Water and Greenhouse Gas Emissions

How High Temperature Data Centers & Intel Technologies save Energy, Money, Water and Greenhouse Gas Emissions Intel Intelligent Power Management Intel How High Temperature Data Centers & Intel Technologies save Energy, Money, Water and Greenhouse Gas Emissions Power and cooling savings through the use of Intel

More information

Recommendations for Measuring and Reporting Overall Data Center Efficiency

Recommendations for Measuring and Reporting Overall Data Center Efficiency Recommendations for Measuring and Reporting Overall Data Center Efficiency Version 1 Measuring PUE at Dedicated Data Centers 15 July 2010 Table of Contents 1 Introduction... 1 1.1 Purpose Recommendations

More information

BC43: Virtualization and the Green Factor. Ed Harnish

BC43: Virtualization and the Green Factor. Ed Harnish BC43: Virtualization and the Green Factor Ed Harnish Agenda The Need for a Green Datacenter Using Greener Technologies Reducing Server Footprints Moving to new Processor Architectures The Benefits of Virtualization

More information

Office of the Government Chief Information Officer. Green Data Centre Practices

Office of the Government Chief Information Officer. Green Data Centre Practices Office of the Government Chief Information Officer Green Data Centre Practices Version : 2.0 April 2013 The Government of the Hong Kong Special Administrative Region The contents of this document remain

More information

Power Distribution Considerations for Data Center Racks

Power Distribution Considerations for Data Center Racks Power Distribution Considerations for Data Center Racks Power Distribution Considerations for Data Center Racks Table of Contents 3 Overview 4 At what voltage do you run all the IT equipment? 4 What electrical

More information

Virtual Machine in Data Center Switches Huawei Virtual System

Virtual Machine in Data Center Switches Huawei Virtual System Virtual Machine in Data Center Switches Huawei Virtual System Contents 1 Introduction... 3 2 VS: From the Aspect of Virtualization Technology... 3 3 VS: From the Aspect of Market Driving... 4 4 VS: From

More information

Power Management in Cloud Computing using Green Algorithm. -Kushal Mehta COP 6087 University of Central Florida

Power Management in Cloud Computing using Green Algorithm. -Kushal Mehta COP 6087 University of Central Florida Power Management in Cloud Computing using Green Algorithm -Kushal Mehta COP 6087 University of Central Florida Motivation Global warming is the greatest environmental challenge today which is caused by

More information

Advanced Computer Networks. Datacenter Network Fabric

Advanced Computer Networks. Datacenter Network Fabric Advanced Computer Networks 263 3501 00 Datacenter Network Fabric Patrick Stuedi Spring Semester 2014 Oriana Riva, Department of Computer Science ETH Zürich 1 Outline Last week Today Supercomputer networking

More information

Automatic Workload Management in Clusters Managed by CloudStack

Automatic Workload Management in Clusters Managed by CloudStack Automatic Workload Management in Clusters Managed by CloudStack Problem Statement In a cluster environment, we have a pool of server nodes with S running on them. Virtual Machines are launched in some

More information

Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29.

Broadband Networks. Prof. Dr. Abhay Karandikar. Electrical Engineering Department. Indian Institute of Technology, Bombay. Lecture - 29. Broadband Networks Prof. Dr. Abhay Karandikar Electrical Engineering Department Indian Institute of Technology, Bombay Lecture - 29 Voice over IP So, today we will discuss about voice over IP and internet

More information

SummitStack in the Data Center

SummitStack in the Data Center SummitStack in the Data Center Abstract: This white paper describes the challenges in the virtualized server environment and the solution Extreme Networks offers a highly virtualized, centrally manageable

More information

Analysis of data centre cooling energy efficiency

Analysis of data centre cooling energy efficiency Analysis of data centre cooling energy efficiency An analysis of the distribution of energy overheads in the data centre and the relationship between economiser hours and chiller efficiency Liam Newcombe

More information

Data Center Energy Efficiency Looking Beyond PUE

Data Center Energy Efficiency Looking Beyond PUE Data Center Energy Efficiency Looking Beyond PUE No Limits Software White Paper #4 By David Cole 2011 No Limits Software. All rights reserved. No part of this publication may be used, reproduced, photocopied,

More information

High-density modular data center making cloud easier. Frank Zheng Huawei

High-density modular data center making cloud easier. Frank Zheng Huawei High-density modular data center making cloud easier Frank Zheng Huawei 2012-10-10 Content 1 2 3 4 Cloud s impact to data center Concerns of DC construction 4S DC making cloud easier Success stories Cloud?

More information

BALANCING LOAD FOR OPTIMAL PERFORMANCE OF DATA CENTER

BALANCING LOAD FOR OPTIMAL PERFORMANCE OF DATA CENTER BALANCING LOAD FOR OPTIMAL PERFORMANCE OF DATA CENTER Karthik Narayan M 1, Shantharam Nayak 2, Annamma Abraham 3 1 4 th Sem M.Tech (SE), RVCE, Bangalore 59. 2 Associate professor ISE-RVCE and Research

More information

PROPRIETARY CISCO. Cisco Cloud Essentials for EngineersV1.0. LESSON 1 Cloud Architectures. TOPIC 1 Cisco Data Center Virtualization and Consolidation

PROPRIETARY CISCO. Cisco Cloud Essentials for EngineersV1.0. LESSON 1 Cloud Architectures. TOPIC 1 Cisco Data Center Virtualization and Consolidation Cisco Cloud Essentials for EngineersV1.0 LESSON 1 Cloud Architectures TOPIC 1 Cisco Data Center Virtualization and Consolidation 2010 Cisco and/or its affiliates. All rights reserved. Cisco Confidential

More information

Power and Cooling Capacity Management for Data Centers

Power and Cooling Capacity Management for Data Centers Power and Cooling Capacity for Data Centers By Neil Rasmussen White Paper #150 Executive Summary High density IT equipment stresses the power density capability of modern data centers. Installation and

More information

Performance Comparison of Fujitsu PRIMERGY and PRIMEPOWER Servers

Performance Comparison of Fujitsu PRIMERGY and PRIMEPOWER Servers WHITE PAPER FUJITSU PRIMERGY AND PRIMEPOWER SERVERS Performance Comparison of Fujitsu PRIMERGY and PRIMEPOWER Servers CHALLENGE Replace a Fujitsu PRIMEPOWER 2500 partition with a lower cost solution that

More information

Any-to-any switching with aggregation and filtering reduces monitoring costs

Any-to-any switching with aggregation and filtering reduces monitoring costs Any-to-any switching with aggregation and filtering reduces monitoring costs Summary Physical Layer Switches can filter and forward packet data to one or many monitoring devices. With intuitive graphical

More information

Datacenter Power Delivery Architectures : Efficiency and Annual Operating Costs

Datacenter Power Delivery Architectures : Efficiency and Annual Operating Costs Datacenter Power Delivery Architectures : Efficiency and Annual Operating Costs Paul Yeaman, V I Chip Inc. Presented at the Darnell Digital Power Forum September 2007 Abstract An increasing focus on datacenter

More information

CROSS LAYER BASED MULTIPATH ROUTING FOR LOAD BALANCING

CROSS LAYER BASED MULTIPATH ROUTING FOR LOAD BALANCING CHAPTER 6 CROSS LAYER BASED MULTIPATH ROUTING FOR LOAD BALANCING 6.1 INTRODUCTION The technical challenges in WMNs are load balancing, optimal routing, fairness, network auto-configuration and mobility

More information

Flexible SDN Transport Networks With Optical Circuit Switching

Flexible SDN Transport Networks With Optical Circuit Switching Flexible SDN Transport Networks With Optical Circuit Switching Multi-Layer, Multi-Vendor, Multi-Domain SDN Transport Optimization SDN AT LIGHT SPEED TM 2015 CALIENT Technologies 1 INTRODUCTION The economic

More information

2. Research and Development on the Autonomic Operation. Control Infrastructure Technologies in the Cloud Computing Environment

2. Research and Development on the Autonomic Operation. Control Infrastructure Technologies in the Cloud Computing Environment R&D supporting future cloud computing infrastructure technologies Research and Development on Autonomic Operation Control Infrastructure Technologies in the Cloud Computing Environment DEMPO Hiroshi, KAMI

More information

Electrical Systems. <Presenter>

Electrical Systems. <Presenter> Review Root Causes of Energy Inefficiency Roadmap to Maximize Energy Efficiency Best Practices Alternative Energy Sources Controls Take Aways Seek Professional Help. ENERGY INEFFICIENCY OF

More information

Concepts Introduced in Chapter 6. Warehouse-Scale Computers. Important Design Factors for WSCs. Programming Models for WSCs

Concepts Introduced in Chapter 6. Warehouse-Scale Computers. Important Design Factors for WSCs. Programming Models for WSCs Concepts Introduced in Chapter 6 Warehouse-Scale Computers introduction to warehouse-scale computing programming models infrastructure and costs cloud computing A cluster is a collection of desktop computers

More information

Network analysis and dimensioning course project

Network analysis and dimensioning course project Network analysis and dimensioning course project Alexander Pyattaev March 10, 2014 Email: alexander.pyattaev@tut.fi 1 Project description Your task in this project is to be a network architect. You will

More information

Windows Server Performance Monitoring

Windows Server Performance Monitoring Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly

More information

AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK

AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK Abstract AN OVERVIEW OF QUALITY OF SERVICE COMPUTER NETWORK Mrs. Amandeep Kaur, Assistant Professor, Department of Computer Application, Apeejay Institute of Management, Ramamandi, Jalandhar-144001, Punjab,

More information

DATA CENTER ASSESSMENT SERVICES

DATA CENTER ASSESSMENT SERVICES DATA CENTER ASSESSMENT SERVICES ig2 Group Inc. is a Data Center Science Company which provides solutions that can help organizations automatically optimize their Data Center performance, from assets to

More information

Network Virtualization for Large-Scale Data Centers

Network Virtualization for Large-Scale Data Centers Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning

More information

CarbonDecisions. The green data centre. Why becoming a green data centre makes good business sense

CarbonDecisions. The green data centre. Why becoming a green data centre makes good business sense CarbonDecisions The green data centre Why becoming a green data centre makes good business sense Contents What is a green data centre? Why being a green data centre makes good business sense 5 steps to

More information

Optimizing Data Center Networks for Cloud Computing

Optimizing Data Center Networks for Cloud Computing PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,

More information