Computer Networks xxx (2012) xxx-xxx
Contents lists available at SciVerse ScienceDirect. Computer Networks. Journal homepage: www.elsevier.com/locate/comnet

Environmental-aware virtual data center network

Kim Khoa Nguyen a, Mohamed Cheriet a, Mathieu Lemay b, Victor Reijs c, Andrew Mackarel c, Alin Pastrama d

a Department of Automated Manufacturing Engineering, Ecole de Technologie Superieure, University of Quebec, Montreal, Quebec, Canada
b Inocybe Technologies Inc., Gatineau, Quebec, Canada
c HEAnet Ltd., Dublin, Ireland
d NORDUnet, Kastrup, Denmark

Corresponding author: K.K. Nguyen.

Article history: Received 15 September; received in revised form 21 February; accepted 20 March; available online xxxx.

Keywords: GreenStar Network; Mantychore FP7; Green ICT; Virtual data center; Neutral carbon network

Abstract

Cloud computing services have recently become a ubiquitous service delivery model, covering a wide range of applications from personal file sharing to enterprise data warehousing. Building green data center networks that provide cloud computing services is an emerging trend in the Information and Communication Technology (ICT) industry, because of global warming and the potential greenhouse gas (GHG) emissions resulting from cloud services. As one of the first worldwide initiatives provisioning ICT services based entirely on renewable energy such as solar, wind and hydroelectricity across Canada and around the world, the GreenStar Network (GSN) was developed to dynamically transport user services to be processed in data centers built in proximity to green energy sources, reducing the GHG emissions of ICT equipment. Under the current approach, which focuses mainly on reducing energy consumption at the micro level through energy efficiency improvements, overall energy consumption will eventually increase due to the growing demand from new services and users, resulting in an increase in GHG emissions. Based on the cooperation between Mantychore FP7 and the GSN, our approach is therefore much broader and more appropriate because it focuses on GHG emission reductions at the macro level. This article presents outcomes of our implementation of such a network model, which spans multiple green nodes in Canada, Europe and the USA. The network provides cloud computing services based on dynamic provisioning of network slices through the relocation of virtual data centers.

2012 Elsevier B.V. All rights reserved.

1. Introduction

Nowadays, reducing greenhouse gas (GHG) emissions is becoming one of the most challenging research topics in Information and Communication Technology (ICT) because of the alarming growth of indirect GHG emissions resulting from the overwhelming use of ICT electrical devices [1]. The current approach to the ICT GHG problem is to improve energy efficiency, aiming to reduce energy consumption at the micro level. Research projects following this direction have focused on micro-processor design, computer design, power-on-demand architectures and virtual machine consolidation techniques.
However, a micro-level energy efficiency approach will likely lead to an overall increase in energy consumption due to the Khazzoom-Brookes postulate (also known as the Jevons paradox) [2], which states that energy efficiency improvements that, on the broadest considerations, are economically justified at the micro level lead to higher levels of energy consumption at the macro level. Therefore, we believe that reducing GHG emissions at the macro level is a more appropriate solution. Large ICT companies, like Microsoft, which consumes up to 27 MW of energy at any given time [1], have built their data centers near green power sources. Unfortunately, many computing centers are not so close to

green energy sources. Thus, a green energy distributed network is an emerging technology, given that the losses incurred in energy transmission over power utility infrastructures are much higher than those caused by data transmission, which makes relocating a data center near a renewable energy source a more efficient solution than trying to bring the energy to an existing location.

The GreenStar Network (GSN) project [3,4] is the first worldwide initiative aimed at providing ICT services based entirely on renewable energy sources such as solar, wind and hydroelectricity across Canada and around the world. The network can transport user service applications to be processed in data centers built in proximity to green energy sources, so that the GHG emissions of ICT equipment are reduced to a minimum. Whilst energy efficiency techniques are still encouraged for low-end client equipment (e.g., hand-held devices, home PCs), the heaviest computing services should be dedicated to data centers powered completely by green energy. This is enabled by the abundant reserve of natural green energy resources in Canada, Europe and the USA. The carbon credit saving that we refer to in this article is the emission due to the operation of the network; the GHG emissions during the production phase of the equipment used in the network and in the server farms are not considered, since no special equipment is deployed in the GSN.

In order to move virtualized data centers towards network nodes powered by green energy sources distributed in such a multi-domain network, particularly between the European and North American domains, the GSN is based on a flexible routing platform provided by the Mantychore FP7 project [5], which collaborates with the GSN project to enhance the carbon footprint exchange standard for ICT services. This collaboration enables research on the feasibility of powering e-infrastructures in multiple domains worldwide with renewable energy sources. Management and technical policies will be developed to leverage virtualization, which helps to migrate virtual infrastructure resources from one site to another based on power availability. This will facilitate the use of renewable energy within the GSN, which provides an Infrastructure as a Service (IaaS) management tool. Integrating connectivity between parts of the European National Research and Education Network (NREN) infrastructures and the GSN develops the competencies needed to understand how a set of green nodes, each powered by a different renewable energy source, could be integrated into an everyday network. Energy considerations are taken into account before moving virtual services, which are relocated without suffering connectivity interruptions. The influence of physical location on that relocation is also addressed, such as weather prediction and the estimation of solar power generation.

The main objective of the GSN/Mantychore liaison is to create a pilot study and a testbed environment from which to derive best practices and guidelines to follow when building low carbon networks. Core nodes are linked by an underlying high speed optical network with up to 100 Gbit/s bandwidth capacity provided by CANARIE.
Note that optical networks show only a modest increase in power consumption, especially with new 100 Gbit/s technology, in comparison to electronic equipment such as routers and aggregators [6]. The migration of virtual data centers over network nodes is indeed the result of a convergence of server and network virtualization as virtual infrastructure management. The GSN as a network architecture is built with multiple layers, resulting in a large number of resources to be managed. Virtualized management has therefore been proposed for service delivery regardless of the physical location of the infrastructure, which is determined by resource providers. This allows complex underlying services to remain hidden inside the infrastructure provider. Resources are allocated according to user requirements; hence high utilization and optimization levels can be achieved. During the service, the user monitors and controls resources as if he were the owner, allowing the user to run their applications in a virtual infrastructure powered by green energy sources.

The remainder of this article is organized as follows. In Section 2, we present the connection plan of the GSN/Mantychore joint project as well as its green nodes. Section 3 gives the physical architecture of data centers powered by renewable energy. Next, the virtualization solution of green data centers is provided in Section 4. Section 5 is dedicated to the cloud-based management solution that we implemented in the GSN. In Section 6, we formulate the optimization problem of renewable energy utilization in the network. Section 7 reports experimental and simulation results obtained when a virtual data center service is hosted in the network. Finally, we draw conclusions and present future work.

2. Provisioning of ICT services with renewable energy

Rising energy costs and working in an austerity-based environment with dynamically changing business requirements have raised the focus of the NREN community on controlling some characteristics of these connectivity services, so that users can change some of the service characteristics without having to renegotiate with the service provider. In the European NREN community, connectivity services are provisioned on a manual basis, with some effort now focused on automating service setup and operation. Automation and monitoring of energy usage and emissions will be one of the new provisioning constraints employed in emerging networks.

The Mantychore FP7 project has evolved from the previous research projects MANTICORE and MANTICORE II [5,7]. The initial MANTICORE project goal was to implement a proof of concept based on the idea that routers and an IP network can be set up as a service (IP Network as a Service, IPNaaS, i.e., a managed Layer 3 network). MANTICORE II continued in the steps of its predecessor to implement stable and robust software while running trials on a range of network equipment. The Mantychore FP7 project allows the NRENs to provide a complete, flexible network service that offers research communities the ability to create an IP network under their control, where they can configure:

(a) Layer 1, optical links. Users will be able to get access control over optical devices such as optical switches, to configure important properties of their cards and

ports. Mantychore integrates the Argia framework [8], which provides complete control of optical resources.

(b) Layer 2, Ethernet and MPLS. Users will be able to get control over Ethernet and MPLS (Layer 2.5) switches to configure different services. In this aspect, Mantychore will integrate the Ether project [9] and its capabilities for the management of Ethernet and MPLS resources.

(c) Layer 3. The Mantychore FP7 suite includes a set of features for: (i) configuration and creation of virtual networks, (ii) configuration of physical interfaces, (iii) support of routing protocols, both internal (RIP, OSPF) and external (BGP), (iv) support of QoS and firewall services, (v) creation, modification and deletion of resources (interfaces, routers), both physical and logical, and (vi) support of IPv6, allowing the configuration of IPv6 on interfaces, routing protocols and networks.

Fig. 1. The GreenStar Network.

Fig. 1 shows the connection plan of the GSN. The Canadian section of the GSN has the largest deployment, with six GSN nodes powered by sun, wind and hydroelectricity. It is connected to the European green nodes in Ireland (HEAnet), Iceland (NORDUnet), Spain (i2cat) and the Netherlands (SURFnet), and to nodes in other parts of the world such as China (WiCo), Egypt (Smart Village) and the USA (ESNet). One of the key objectives of the liaison between the Mantychore FP7 and GSN projects is to enable renewable energy provisioning for NRENs. Building competency in using renewable energy resources is vital for any NREN with such an abundance of natural power generation resources at its backdoor, and this has been targeted as a potential major industry for the future. HEAnet in Ireland [10], where the share of electricity generated from renewable energy sources in 2009 was 14.4%, has connected two GSN nodes via the GEANT Plus service and its own NREN network: a GSN solar powered node in the South East of Ireland and a

wind powered, grid-supplied location in the North East of the country. NORDUnet [11], which links the Scandinavian countries having the highest proportion of renewable energy sources in Europe, houses a GSN node at a data center in Reykjavik (Iceland) and also contributes to the GSN/Mantychore controller interface development. In Spain [12], where 12.5% of energy comes from renewable energy sources (mostly solar and wind), i2cat is leading the Mantychore development as well as actively defining the interface for GSN/Mantychore, and will set up a solar powered node in Lleida (Spain). The connected nodes and their sources of energy are shown in Table 1.

Table 1. List of connected GSN nodes.

Node           Location                    Type of energy
ETS            Montreal, QC, Canada        Hydroelectricity
UQAM           Montreal, QC, Canada        Hydroelectricity
CRC            Ottawa, ON, Canada          Solar
Cybera         Calgary, AB, Canada         Solar
BastionHost    Halifax, NS, Canada         Wind
RackForce      Kelowna, BC, Canada         Hydroelectricity
CalIT2         San Diego, CA, USA          Solar/DC power
NORDUnet       Reykjavik, Iceland          Geothermal
SURFnet        Utrecht, The Netherlands    Wind
HEAnet-DKIT    Dundalk, Ireland            Wind
HEAnet-EPA     Wexford, Ireland            Solar
i2cat          Barcelona, Spain            Solar
WICO           Shanghai, China             Solar

Recently, industrial projects have been built around the world to generate renewable energy to offset data center power consumption [1,13]. However, they focus on single data centers. The GSN is characterized by the cooperation of multiple green data centers to reduce the overall GHG emissions of a network. To the best of our knowledge, it is the first ICT carbon reduction research initiative of this type in the world. Prior to our work, theoretical research was done on the optimization of power consumption (including brown energy) in a network of data centers based on the redistribution of workload across nodes [14-16]. The GSN goes one step further with a real deployment of a renewable energy powered network. In addition, our approach is to dynamically relocate virtual data centers through a cloud management middleware.

3. Architecture of a green powered data center

Fig. 2 illustrates the physical architecture of green nodes: one powered by hydroelectricity, one by solar energy and one by wind. The solar panels are grouped in bundles of 9 or 10 panels, each panel generating a power of W. The wind turbine system is a 15 kW generator. After being accumulated in a battery bank, electrical energy is treated by an inverter/charger in order to produce an appropriate output current for computing and networking devices.

Fig. 2. Architecture of green nodes (hydro, wind and solar types).
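The power chain just described (generator, charge controller, battery bank, inverter, IT load) can be summarized by a simple energy-balance loop. The sketch below is illustrative only: the capacity, load and efficiency numbers are assumptions for demonstration, not measurements from GSN nodes, and the grid exchange behavior it anticipates is described in more detail in Section 4.

```python
# Illustrative energy balance for a green node (not GSN software).
# All numbers below are assumptions for demonstration only.
BATTERY_WH_MAX = 30_000      # assumed battery bank capacity (Wh)
IT_LOAD_W = 1_200            # assumed steady IT + cooling load (W)
INVERTER_EFF = 0.9           # assumed DC-to-AC conversion efficiency

def step(battery_wh, generated_w, dt_h=1.0):
    """Advance the node state by dt_h hours and report grid exchange (Wh).

    Positive grid value: energy bought from the grid.
    Negative grid value: surplus sold to the grid.
    """
    demand_wh = IT_LOAD_W * dt_h / INVERTER_EFF     # DC energy the load needs
    battery_wh += generated_w * dt_h - demand_wh
    grid_wh = 0.0
    if battery_wh < 0:                              # battery exhausted: buy from grid
        grid_wh = -battery_wh
        battery_wh = 0.0
    elif battery_wh > BATTERY_WH_MAX:               # battery full: sell the surplus
        grid_wh = -(battery_wh - BATTERY_WH_MAX)
        battery_wh = BATTERY_WH_MAX
    return battery_wh, grid_wh

# One sunny hour (3 kW of solar) followed by one calm night hour (no generation).
b, g1 = step(15_000, generated_w=3_000)
b, g2 = step(b, generated_w=0)
print(round(b), g1, g2)
```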

User applications run on multiple Dell PowerEdge R710 systems, hosted in a rack mount structure inside an outdoor climate-controlled enclosure. The air conditioning and heating elements are powered by green energy at solar and wind nodes; they are connected to the regular power grid at hydro nodes. As servers are installed in the outdoor enclosure, the carbon assessment of a node can be highly accurate. The PDUs (Power Distribution Units), provided with power monitoring features, measure electrical current and voltage. Within each node, servers are linked by a local network, which is then connected to the core network through Gigabit Ethernet (GE) transceivers. Data flows are transferred among GSN nodes over dedicated circuits (such as lightpaths or P2P links), tunnels over the Internet, or logical IP networks.

The Montreal GSN node plays the role of a manager (hub node) that opportunistically sets up the required connectivity at Layer 1 and Layer 2 using dynamic services, then pushes Virtual Machines (VMs) or software virtual routers from the hub to a sun or wind node (spoke node) when power is available. VMs are pulled back to the hub node when power dwindles. In such a case, the spoke node may switch over to grid energy to run other services if required; however, GSN services are powered entirely by green energy. The VMs are used to run user applications, particularly heavy-computing services. Based on this testbed network, experiments and research are performed targeting cloud management algorithms and the optimization of intermittently available renewable energy sources.

4. Virtualizing a data center

In the GSN, dynamic management of green data centers is achieved through virtualization. All data center components are virtualized and placed into a cloud which is then controlled by a cloud manager. From the user's point of view, a virtual data center can be seen as an information factory that runs 24/7 to deliver a sustained service (i.e., a stream of data). It includes virtual machines (VMs) linked by a virtual network. The primary benefit of virtualization technologies across different IT resources is to boost overall effectiveness while improving application service delivery (performance, availability, responsiveness, security) to sustain business growth in an environmentally friendly manner.

Fig. 3 gives a schematic overview of the virtualization process of a simple data center powered by renewable energy sources (wind, solar) and the power grid, including:

- Power generators, which produce electrical energy from renewable sources such as wind, solar or geothermal.
- A charge controller, which controls the generated electricity going to the battery bank, preventing overcharging and overvoltage.
- A battery bank, which stores energy and delivers power.
- An inverter, which converts DC from the battery bank to an AC supply used by electrical devices. When the voltage of the battery bank is not sufficient, the inverter will buy current from the power grid.
  In the CRC (Ottawa) data center, the battery voltage is maintained between 24 Vdc and 29 Vdc; otherwise, electricity from the power grid is bought (it is AC and therefore is not converted by the inverter). If the generated power is too high and is not completely consumed by the data center equipment, the surplus is sold to the power grid.
- A Power Distribution Unit (PDU), which distributes electricity from the inverter to servers, switches and routers. There are infeeds and outlets in the PDU for measuring and controlling the incoming and outgoing currents. Currently, the GSN software supports Raritan, Server Tech, Avocent, APC and Eaton PDUs.
- Climate control equipment, which allows the temperature and humidity of the data center's interior to be accurately controlled.
- Servers and switches, which are used to build the data center's network.

Fig. 3. Green data center and virtualization.

Each device in the data center is virtualized by a software tool and is then represented as a resource (e.g., a PDU resource, a network resource, etc.). These resource instances communicate with devices, parse commands and decide on appropriate actions. Communication with the devices can be done through the Telnet, SSH or SNMP protocols. Some devices come with the manufacturer's software; however, such software is usually designed to interact with human users (e.g., through a web interface). The advantage of the virtualization approach is that a resource can be used by other services or resources, which enables auto-management processes. Fig. 4 shows an example of the resources developed for PDUs and computing servers. Similar resources are developed for power generators, climate controls, switches and routers. Server virtualization is able to control the physical server and the hypervisors used on the server. The GSN supports the KVM and XEN hypervisors.

Fig. 4. Example of resources in the cloud and a subset of their commands.

The Facility Resource (Fig. 3) is not attached to a physical device. It is responsible for linking the Power Source, PDU and Climate resources in order to produce measurement data which is used to make decisions for data center management. Each power consumer or generator provides a metering function which allows the user to view the power consumed or produced by the device. The total power consumption of a data center is reported by the Facility Resource.

In the current implementation, network devices are virtualized at three layers. At the physical layer, Argia [8] treats optical cross-connects as software objects and can provision and reconfigure optical lightpaths within a single domain or across multiple, independently managed domains. At Layer 2, Ether [9] is used, which is similar in concept to Argia but with a focus on Ethernet and MPLS networks; Ether allows users to acquire ports on an enterprise switch and manage VLANs or MPLS configurations on their own. At the network layer, a virtualized control plane created by the Mantychore FP7 project [5] is deployed, which was specifically designed for IP networks with the ability to define and configure physical and/or logical IP networks.
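Before moving to the management middleware, the device-as-resource abstraction of this section can be pictured with a small sketch. This is not the GSN's IaaS Framework code (which is Java/OSGi based, Section 5); it is an illustrative Python outline in which the class names, commands and injected readings are assumptions.

```python
# Illustrative outline of the resource abstraction (hypothetical names, not GSN code).
from abc import ABC, abstractmethod

class Resource(ABC):
    """A virtualized device: parses commands and talks to the device (SSH/SNMP/Telnet)."""
    @abstractmethod
    def power_w(self) -> float: ...

class PDUResource(Resource):
    def __init__(self, outlet_readings):
        # In the real system readings would come from the PDU over SNMP or SSH;
        # here they are injected as plain (volts, amps) pairs per outlet.
        self.outlet_readings = outlet_readings
    def power_w(self):
        return sum(v * a for v, a in self.outlet_readings.values())
    def outlet_on(self, outlet):        # basic on/off command exposed to the cloud
        print(f"PDU: switching outlet {outlet} on")

class PowerSourceResource(Resource):
    def __init__(self, generated_w): self.generated_w = generated_w
    def power_w(self): return self.generated_w

class FacilityResource:
    """Not attached to a device: combines PDU, power source and climate readings."""
    def __init__(self, pdu, source, humidity_pct):
        self.pdu, self.source, self.humidity_pct = pdu, source, humidity_pct
    def report(self):
        return {"consumed_w": self.pdu.power_w(),
                "generated_w": self.source.power_w(),
                "humidity_pct": self.humidity_pct}

pdu = PDUResource({1: (230.0, 1.8), 2: (230.0, 0.9)})
facility = FacilityResource(pdu, PowerSourceResource(1500.0), humidity_pct=62)
print(facility.report())
```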

In order to build a virtual data center, all Compute, Network, Climate, PDU and Power Source resources are hosted by a cloud service, which allows resources to be always active and available to control devices. In many data centers, e.g. [13], energy management and server management are separate functions. This imposes challenges for data center management, particularly when power sources are intermittent, because the server load cannot be adjusted to adapt to the power provisioning capacity, and the data center strictly depends on power providers. As power and servers are managed in a unified manner, our proposed model allows completely autonomous management of the data center. Our power management solution is designed with four key characteristics:

- Auto-configuration: the system is able to detect a new device connecting to a data center. It will ask the administrator to provide the appropriate configuration for the new device.
- Easy monitoring: comfortable access to real-time information on energy consumption, helping the administrator to make power regulation decisions.
- Remote control: online access to devices, enabling these appliances to be controlled remotely.
- Optimized planning: helps reduce GHG emissions by balancing workloads according to the capacity of servers and the available energy sources in the entire network.

An example of how the power management function interacts is as follows. The user wants to connect a new device (e.g., a server) to the data center. When this new server is plugged into a PDU and turned on, the PDU resource detects it by scanning the outlets and finding the load current of the outlet. If there is no further information about the device, the system considers it a basic resource and assigns only a basic on/off command to it. The user will be asked to provide access information for the device (e.g., username, password, protocol) and the type of device. When the required information is provided and a new Compute resource has been identified, it will be exposed to the cloud. If the device is non-intelligent (e.g., a light), its consumption level is accounted for in the total consumption of the data center; however, no particular action will be taken on this device (e.g., workload shared with other devices through a migration when power dwindles). Management of a network of virtual data centers is described in the next section.

5. Management of a virtual data center network

From an operational point of view, the GreenStar Network is controlled by a computing and energy management middleware, as shown in Fig. 5, which includes two distinct layers:

- Resource layer: contains the drivers for each physical device. Each resource controls a device and offers services to a higher layer. Basic functions for any resource include start, stop, monitor, power control, report device status and run application. Compute Resources control physical servers and VMs. Power Source Resources manage energy generators (e.g., wind turbines or solar panels). PDU Resources control the Power Distribution Units (PDUs). Climate Resources monitor the weather conditions and humidity of the data centers. A Facility Resource links the Power Source, PDU and Climate resources of a data center with its Compute resources. It determines the power consumed by each server and sends notifications to managers when the power or environmental conditions change.
- Management layer: includes a Controller and a set of managers. Each manager is responsible for its individual resources. For example, the Cloud Manager manages Compute resources, the Facility Manager manages the Facility resources, and the Network Manager handles data center connectivity. The Controller is the brain of the network and is responsible for determining the optimized location of each VM. It computes the action plans to be executed on each set of resources, and then orders the managers to perform them (a minimal sketch of this interaction is given below).
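The following minimal Python sketch illustrates the Controller/manager split just described: the Controller turns placement decisions into action plans and dispatches them to the managers, which only execute. It is an illustration only; class and method names are hypothetical, and the actual GSN middleware is implemented in Java on the J2EE/OSGi-based IaaS Framework.

```python
# Illustrative sketch of the Controller/manager split (hypothetical names, not GSN code).
from dataclasses import dataclass

@dataclass
class Action:
    manager: str      # which manager should execute this action
    operation: str    # e.g. "migrate_vm", "setup_link", "power_off_host"
    params: dict

class Manager:
    """A manager executes actions on the resources it owns; it does not decide placement."""
    def execute(self, action: Action) -> None:
        print(f"{self.__class__.__name__}: {action.operation}({action.params})")

class CloudManager(Manager): pass      # drives Compute resources (VMs, hypervisors)
class NetworkManager(Manager): pass    # drives connectivity between nodes
class FacilityManager(Manager): pass   # drives Facility/PDU/Climate resources

class Controller:
    """Keeps the network-wide view and turns placement decisions into action plans."""
    def __init__(self):
        self.managers = {"cloud": CloudManager(),
                         "network": NetworkManager(),
                         "facility": FacilityManager()}

    def plan_and_dispatch(self, vm_locations: dict) -> None:
        # vm_locations maps a VM name to the data center chosen for it
        # (the placement itself would come from the optimization of Section 6).
        for vm, target in vm_locations.items():
            plan = [Action("network", "setup_link", {"to": target}),
                    Action("cloud", "migrate_vm", {"vm": vm, "to": target})]
            for action in plan:
                self.managers[action.manager].execute(action)

Controller().plan_and_dispatch({"vm-01": "CRC-Ottawa"})
```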
Based on information provided by the Controller, these managers can execute relocation and migration tasks. The relationship between the Controller and the associated managers can be regarded as the controller/forwarder relationship in an IP router. The Controller keeps an overall view of the entire network; it computes the best location for each VM and updates a Resource Location Table (RLT). A snapshot of the RLT is provided to the managers. When there is a request for the creation of a new VM, a manager will consult the table to determine the best location for the new VM. If there is a change in the network, e.g., when the power source of a data center dwindles, the Controller re-computes the best locations for the VMs and updates the managers with the new RLT.

The middleware is implemented on the J2EE/OSGi platform, using components of the IaaS Framework [17]. It is designed in a modular manner so that the GSN can easily be extended to cover different layers and technologies. Through Web interfaces, users may determine GHG emission boundaries based on information about VM power and energy sources, and then take actions to reduce GHG emissions. The project is therefore ISO compliant. Indeed, cloud management is not a new topic; however, the IaaS Framework was developed for the GSN because the project requires an open platform converging server and network virtualization. Whilst many cloud management solutions in the market focus particularly on computing resources, IaaS Framework components can be used to build network virtualization tools [6], allowing flexible setup of data flows among data centers. Indeed, the virtual network management provided by cloud middleware such as OpenNebula [18] and OpenStack [19] is limited to the IP layer, while the IaaS Framework deals with network virtualization at all three network layers. The ability to incorporate third-party power control components is another advantage of the IaaS Framework.

In order to compute the best location for each VM, each host in the network is associated with a number of parameters. Key parameters include the current power level and the available operating time, which can be determined through power monitoring and weather measurements in addition to statistical information. The Controller implements an optimization algorithm to maximize the utilization of data centers within the period when renewable energy is available. In other words, we try to push as many VMs as possible to the hosts powered by renewable energy. A number of constraints are also taken into account, such as:

- Host capacity limitation: each VM requires a certain amount of memory and a number of CPU cycles. Thus, a physical server can host only a limited number of VMs.

- Network limitation: each service requires a certain amount of bandwidth. Thus, the number of VMs moved to a data center is limited by the available network bandwidth of that data center.
- Power source limitation: each data center is powered by a number of energy sources, which must meet the power requirements of the physical servers and accessories. When the power dwindles, the load on the physical servers should be reduced by moving out VMs.

Fig. 5. The GreenStar Network middleware architecture.

The GreenStar Network is able to automatically make scheduling decisions on dynamically migrating and consolidating VMs among physical servers within data centers to meet workload requirements while maximizing renewable energy utilization, especially for performance-sensitive applications. In the design of the GSN architecture, several key issues are addressed, including when to trigger VM migrations and how to select alternative physical machines to achieve optimized VM placement.

5.1. Environmental-aware migration of virtual data centers

In the GSN, a virtual data center migration decision can be made in the following situations:

- If the power dwindles below a threshold which is not sufficient to keep the data center operational. This event is detected and reported by the Facility resource.
- If the climate conditions are not suitable for the data center (i.e., the temperature is too low or too high, or the humidity level is too high). This event is detected and reported by the Climate resource.

The Climate resource is also able to import forecast information represented in XML format, which is used to plan migrations for a given period ahead. A migration involves four steps: (i) setting up a new environment (i.e., in the target data center) for hosting the VMs with the required configurations, (ii) configuring the network connection, (iii) moving the VMs and their running state information through this high speed connection to the new locations, and (iv) turning off computing resources at the original node. Note that the migration is implemented in a live manner based on support from hypervisors [20], meaning that a VM can be moved from one physical server to another while continuously running, without any noticeable effects from the point of view of the end users. During this procedure, the memory of the virtual machine is iteratively copied to the destination without stopping its execution. A delay of around ms is required to perform the final synchronization before the virtual machine begins executing at its final destination, providing an illusion of seamless migration. In our experiments with the online interactive application GeoChronos [21], each VM migration requires 32 Mbps of bandwidth in order to keep the service live during the migration; thus a 10 Gbit/s link between two data centers can transport more than 300 VMs in parallel. Given that each VM occupies one processor and that each server has up to 16 processors, 20 servers can be moved in parallel. Generally, only one data center needs to be migrated at a given moment in time. However, the Controller might be unable to migrate all VMs of a data center, because the other servers in the network lack sufficient capacity. In such a case, if there is no further requirement (e.g., user preferences on the VMs to be moved), the Controller will try to migrate as many VMs as possible, starting from the smallest VM (in terms of memory capacity).
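As a worked illustration of the reasoning above, the following Python sketch orders VMs smallest-memory-first (as described) and estimates how many can be migrated in parallel over a given link. The 32 Mbps per-VM figure and the 10 Gbit/s link come from the text; the function, names and the simple ceiling model are illustrative assumptions, not GSN code.

```python
import math

# Per-VM live-migration bandwidth from the GeoChronos experiment described above,
# and the capacity of an inter-node lightpath. Everything else is illustrative.
PER_VM_MBPS = 32
LINK_MBPS = 10_000          # a 10 Gbit/s lightpath

def plan_migration(vms, spare_memory_gb):
    """Order VMs smallest-first and keep as many as the destination can hold.

    vms: dict mapping VM name -> memory footprint in GB.
    spare_memory_gb: free memory available at the destination data center.
    """
    chosen, used = [], 0.0
    for name, mem in sorted(vms.items(), key=lambda kv: kv[1]):
        if used + mem <= spare_memory_gb:
            chosen.append(name)
            used += mem
    parallel = min(len(chosen), LINK_MBPS // PER_VM_MBPS)  # ~312 VMs fit on 10 Gbit/s
    waves = math.ceil(len(chosen) / parallel) if chosen else 0
    return chosen, parallel, waves

vms = {"web": 2, "db": 16, "batch": 8, "cache": 4}
print(plan_migration(vms, spare_memory_gb=24))
```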

6. Optimization problem formulation

The key challenge of a mixed power management solution is to determine how long the current power source can keep the data center running. We define an operating hour (ophour) metric as the time that the data center can remain properly operational with its current power source. ophour is calculated from the stored energy in the battery bank, the system voltage and the electric current. It can take two values:

- ophour under current load (ophour_currentload): the time that the data center can be operational assuming no additional load will be put on the servers.
- ophour under maximum load (ophour_maximumload): the time that the data center can be operational assuming all servers run at their full capacity (memory and CPU).

The consumption of each server is determined through the PDU, which reports the voltage and current of the server. The value of ophour is calculated as follows:

ophour_{currentload} = \frac{E_{max} \cdot BatteryChargeState}{V_{out} \cdot I_{out}}    (1)

ophour_{maximumload} = \frac{E_{max} \cdot BatteryChargeState}{V_{out} \cdot I_{max}}    (2)

where I_{max} is the electrical current recorded by the PDU outlet when the server is running at its maximal capacity (reported by server benchmarking tools [22]). The maximal capacity of a server is reached when a heavy load application is run.

The optimization problem of data center migration is formulated as follows. Given N VMs hosted by a data center, the ophour value (op) of the data center is provided by its Facility Resource using Eqs. (1) and (2). When the ophour is lower than a threshold T defined by the administrator, the data center needs to be relocated. Each VM V_i with capacity C_i = \{C_{ik}\}, i.e., memory capacity and number of CPUs, will be moved to an alternate location. In the GSN, the threshold is T = 1 for all data centers.

There are M physical data centers powered by green energy sources that will be used to host the virtual data center. Each green data center D_j is currently powered by an energy source S_j having a green factor E_j. In the GSN, the green factor of wind and solar sources is 100, and that of hydro and geothermal sources is 95. The ophour of data center D_j is op_j, given its current energy source. The data center D_j has L_j Compute resources \{R_{jl}\}, each with available capacity (CPU and memory) C_{jl} = \{C_{jlk}\} and hosting H_{jl} VMs \{V_{jlh}\}, where each VM V_{jlh} has capacity C_{jlh} = \{C_{jlhk}\}. When a VM V_i is moved to D_j, the new ophour of the data center is predicted as:

op'_j = op_j \left( 1 - \sum_k \frac{a_k C_{ik}}{\sum_{l=1}^{L_j} C_{jlk}} \right)    (3)

where a_k is a constant determining the power consumption of CPU and memory access, as described in [23]. The following matrix of binary variables represents the virtual data center relocation result:

x_{ijl} = \begin{cases} 1 & \text{if } V_i \text{ is hosted by } R_{jl} \text{ in } D_j \\ 0 & \text{otherwise} \end{cases}    (4)

The objective is to maximize the utilization of green energy; in other words, we try to keep the VMs running on green data centers as long as possible. The problem is therefore to maximize

\sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{l=1}^{L_j} x_{ijl} \, op_j \left( 1 - \sum_k \frac{a_k C_{ik}}{\sum_{l'=1}^{L_j} C_{jl'k}} \right) E_j    (5)

subject to:

\sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{l=1}^{L_j} x_{ijl} = N    (6)

x_{ijl} C_{ik} + \sum_{h=1}^{H_{jl}} C_{jlhk} \le C_{jlk} \quad \forall i, j, l, k    (7)

op_j \left( 1 - \sum_k \frac{a_k C_{ik}}{\sum_{l=1}^{L_j} C_{jlk}} \right) > T \quad \forall i, j    (8)

We assume that the optical switches on the underlying CANARIE network are powered by a sustainable green power source (i.e., hydroelectricity). Also, note that the power consumption of the optical lightpaths linking data centers is very small compared to the consumption of the data centers. Therefore, migrations do not increase the consumption of brown energy in the network, and the power consumption of the underlying network is not included in the objective function. In reality, as the GSN is supported by a very high-speed underlying network and the current nodes are tiny data centers with few servers, the time required for migrating an entire node is on the order of minutes, whilst a full battery bank is able to power a node for a period of ten hours. Additional user-specific constraints, for example that VMs from a single virtual data center must be placed in a single physical data center, are not taken into account in the current network model.

This formulation is referred to as the original mathematical optimization problem. Its solution indicates whether the virtual data center can be relocated and gives the optimized relocation scheme. If no solution is found, in other words, if there are not sufficient resources in the network to host the VMs, a notification is sent to the system administrator, who may then decide to scale up existing data centers. Since the objective function is nonlinear and there are nonlinear constraints, the optimization model is a nonlinear programming problem with binary variables. The problem can be considered a bin packing problem with variable bin sizes and prices, which is proven NP-hard. In our implementation, we apply a modification of the Best Fit Decreasing (BFD) algorithm [24] to solve the problem. Our heuristic is as follows: as the goal is to maximize green power utilization, data centers are first sorted in decreasing order of their available power; VMs are sorted in increasing order of their size and are then placed into the data centers in the list, starting from the top. Given the relatively small number of resources in the network (on the order of a few hundred), the solution can be obtained in a few minutes. Without the proposed ordering heuristic, the gap between a best-fit solution and the optimum is about 25% for 13 network nodes [25].
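A minimal Python sketch of the modified Best Fit Decreasing heuristic described above follows. It is illustrative only: the data structures and names are hypothetical, capacities are reduced to a single memory dimension, and the predicted ophour of Eq. (3) is simplified accordingly (a single assumed coefficient a).

```python
# Illustrative sketch of the ordering heuristic described above (not GSN code).
# Each data center is reduced to one capacity dimension (memory) for brevity.
from dataclasses import dataclass, field

@dataclass
class DataCenter:
    name: str
    ophour: float            # op_j: hours of autonomy on the current source
    capacity_gb: float       # total compute capacity (memory, single dimension)
    free_gb: float
    placed: list = field(default_factory=list)

def predicted_ophour(dc: DataCenter, vm_gb: float, a: float = 1.0) -> float:
    # Simplified form of Eq. (3): op_j * (1 - a * C_i / sum_l C_jl)
    return dc.ophour * (1.0 - a * vm_gb / dc.capacity_gb)

def place_vms(vms: dict, dcs: list, threshold: float = 1.0) -> dict:
    """Modified BFD: data centers sorted by available power (ophour) descending,
    VMs sorted by size ascending; each VM goes to the first feasible data center."""
    placement = {}
    dcs = sorted(dcs, key=lambda d: d.ophour, reverse=True)
    for vm, size in sorted(vms.items(), key=lambda kv: kv[1]):
        for dc in dcs:
            if size <= dc.free_gb and predicted_ophour(dc, size) > threshold:
                dc.free_gb -= size
                dc.ophour = predicted_ophour(dc, size)
                dc.placed.append(vm)
                placement[vm] = dc.name
                break
    return placement

dcs = [DataCenter("CRC-Ottawa", ophour=6.0, capacity_gb=128, free_gb=64),
       DataCenter("Cybera-Calgary", ophour=9.0, capacity_gb=128, free_gb=32)]
print(place_vms({"web": 2, "db": 16, "batch": 8}, dcs))
```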

7. Experimental results

We evaluate the time during which a virtual data center in the GSN/Mantychore can remain viable without the power grid, assuming that the processing power required by the data center is equivalent to 48 processors. Weather conditions are also taken into account, with a predefined threshold of 80% for the humidity level. We also assume that the Power Usage Effectiveness (PUE) of the data center is 1.5, meaning that the total power used by the data center is 1.5 times the power effectively used for computing resources. In reality, a lower PUE can be obtained with more advanced data center architectures.

Fig. 6A and B show, respectively, the electricity generated by a solar PV system installed at a data center in Ottawa over a period of three months, and the humidity level of the data center. State ON in Fig. 6C indicates that the data center is powered by solar energy. When the electric current is too small or the solar PV charger is off due to bad weather, the data center is powered from a battery bank for a limited time; it then has to switch to the power grid (state OFF) if virtualization and migration techniques are not present. The data center service fails (state FAIL) when the battery bank is empty or the humidity level is too high (>80%).

As shown in Fig. 6C, the data center has to switch frequently from the solar PV to the power grid when the autonomous power regulation feature is not implemented, as the weather gets worse during the last months of the year. From mid-December to the end of January, the electricity generated by the solar PV is negligible, so the data center has to be connected to the power grid permanently. The data center fails about 15% of the time, due to either humidity or power level.

The same workload and power pattern is imported into the Controller in order to validate the performance of our system when the data center is virtualized, as shown in Fig. 6D. As the virtual data center is relocated to alternate locations when the solar energy dwindles or the humidity level rises above the threshold, the time it is in the ON state is much longer than in Fig. 6C. The zero failing time was achieved because the network has never been running at its full capacity; in other words, the other data centers have always had enough space (memory and CPU) to receive the VMs of the Ottawa center. Although the time when the virtual data center is powered entirely by solar energy increases significantly compared to Fig. 6C (e.g., over 80%), from the second half of January the data center still has to be connected to the power grid (state OFF), because renewable energy is not sufficient to power all the data centers in the network in mid-winter.

Extensive experiments showed that a virtual data center having 10 VMs with a total of 40 GB of memory takes about five minutes to be relocated across nodes on the Canadian section of the GSN, from the western to the eastern coast.
This time almost doubles when VMs are moved to a node located in the southern USA or in Europe, due to the larger number of transit switches. The performance of migration is extremely low when VMs go to China, due to the lack of lightpath tunnels; this suggests that optical connectivity is required for this kind of network. It is worth noting that the node in China is not permanently connected to the GSN and is therefore not involved in our optimization problem. As GSN nodes are distributed across different time zones, we observed that VMs in solar nodes are moved from East (i.e., Europe) to West (i.e., North America) following the sunlight. When solar power is not sufficient for running VMs, wind, hydroelectricity and geothermal nodes take over.

Note that the GSN is still an ongoing research project, and the experimental results presented in this paper were obtained over a short period of time. Some remarks need to be added:

- VM migration can be costly, especially in bandwidth-constrained or high-latency networks. In reality, the GSN is deployed on top of a high speed optical network provided by CANARIE, with up to 100 Gbps of capacity; it is worth noting that CANARIE is one of the fastest networks in the world [26]. As a research project, the cost of the communication network is not yet taken into account, although it could be high for intercontinental connections. Indeed, our algorithm attempts to minimize the number of migrations in the network by keeping VMs running on data centers as long as possible; thus, the network resources used for virtual data center relocations are also minimized. However, with the overwhelming utilization of WDM networks, especially the next generation high-speed networks such as Internet2 [27], GENI [28] and GEANT [12], we believe that the portion of networking in the overall cost model will be reduced, making the green data center network realistic.
- The size of the GSN nodes is very small compared to real corporate data centers. Thus, the time for relocating a virtual data center could be underestimated in our research. However, some recent research suggests that small data centers may be more profitable than large-scale ones [29,30]. The GSN model would therefore be appropriate for this kind of data center. In addition to the issue of available renewable power, mega-scale data centers would result in a very long migration time when an entire data center is relocated. Also, the resources used for migrations increase with the size of the data center. As migrations do not generate user revenue, a network of large-scale data centers would hence become unrealistic.
- All GSN nodes are located in northern countries. Thus, it is hard to keep the network operational in mid-winter periods with solar energy. Our observations showed that the hub could easily be overloaded. A possible solution is to partition the network into multiple sections, each having a hub powered by sustainable energy sources. Each hub would be responsible for maintaining the service of one section; for example, the GSN may have two hubs, one in North America and the other in Europe.

Fig. 6. Power, humidity level and data center state from October 12, 2010 to January 26, 2011.

8. Conclusion

In this paper, we have presented a prototype of a Future Internet powered only by green energy sources. As a result of the cooperation between European and North American researchers, the GreenStar Network is a promising model for dealing with GHG reporting and carbon tax issues for large ICT organizations. Based on the Mantychore FP7 project, a number of techniques have been developed in order to provision renewable energy for ICT services worldwide. Virtualization techniques are shown to be the most appropriate solution to manage such a network and to migrate virtual data centers following the availability of green energy sources, such as solar and wind.

Our future work includes research on the quality of services hosted by the GSN and on scalable resource management.

Acknowledgments

The authors thank all partners for their contribution to the GSN and Mantychore FP7 projects. The GSN project is financially supported by CANARIE Inc.

References

[1] The Climate Group, SMART2020: Enabling the low carbon economy in the information age, Report on behalf of the Global eSustainability Initiative (GeSI).
[2] H.D. Saunders, The Khazzoom-Brookes postulate and neoclassical growth, Energy Journal 13 (4) (1992).
[3] The GreenStar Network Project. <http://www.greenstarnetwork.com/>.
[4] C. Despins, F. Labeau, T. Le Ngoc, R. Labelle, M. Cheriet, C. Thibeault, F. Gagnon, A. Leon-Garcia, O. Cherkaoui, B. St. Arnaud, J. Mcneill, Y. Lemieux, M. Lemay, Leveraging green communications for carbon emission reductions: techniques, testbeds and emerging carbon footprint standards, IEEE Communications Magazine 49 (8) (2011).
[5] E. Grasa, X. Hesselbach, S. Figuerola, V. Reijs, D. Wilson, J.M. Uzé, L. Fischer, T. de Miguel, The MANTICORE project: providing users with a logical IP network service, in: TERENA Networking Conference.
[6] S. Figuerola, M. Lemay, V. Reijs, M. Savoie, B. St. Arnaud, Converged optical network infrastructures in support of future internet and grid services using IaaS to reduce GHG emissions, Journal of Lightwave Technology 27 (12) (2009).
[7] E. Grasa, S. Figuerola, X. Barrera, et al., MANTICORE II: IP network as a service pilots at HEAnet, NORDUnet and RedIRIS, in: TERENA Networking Conference.
[8] E. Grasa, S. Figuerola, A. Forns, G. Junyent, J. Mambretti, Extending the Argia software with a dynamic optical multicast service to support high performance digital media, Optical Switching and Networking 6 (2) (2009).
[9] S. Figuerola, M. Lemay, Infrastructure services for optical networks [Invited], Journal of Optical Communications and Networking 1 (2) (2009).
[10] HEAnet website. <>.
[11] NORDUnet website. <>.
[12] J. Moth, G. Kramer, GN3 Study of Environmental Impact: Inventory of Greenhouse Gas Emissions and Removals, NORDUnet, GÉANT Technical Report.
[13] P. Thibodeau, Wind power data center project planned in urban area, <>.
[14] Z. Liu, M. Lin, A. Wierman, S.H. Low, L.L.H. Andrew, Greening geographic load balancing, in: Proceedings of ACM SIGMETRICS, 2011.
[15] S.K. Garg, C.S. Yeo, A. Anandasivam, R. Buyya, Environment-conscious scheduling of HPC applications on distributed cloud-oriented data centers, Journal of Parallel and Distributed Computing (JPDC) 71 (6) (2011).
[16] A. Qureshi, R. Weber, H. Balakrishnan, J. Guttag, B. Maggs, Cutting the electric bill for internet-scale systems, ACM Computer Communication Review 39 (4) (2009).
[17] M. Lemay, K.-K. Nguyen, B. St. Arnaud, M. Cheriet, Convergence of cloud computing and network virtualization: towards a neutral carbon network, IEEE Internet Computing Magazine (2011).
[18] D. Milojičić, I.M. Llorente, R.S. Montero, OpenNebula: a cloud management tool, IEEE Internet Computing Magazine 15 (2) (2011).
[19] OpenStack Open Source Cloud Computing Software. <>.
[20] M. Nelson, B.H. Lim, G. Hutchins, Fast transparent migration for virtual machines, in: Proceedings of the USENIX Annual Technical Conference, 2005.
[21] C. Kiddle, GeoChronos: a platform for earth observation scientists, OpenGridForum 28 (3) (2010).
[22] V. Makhija, B. Herndon, P. Smith, L. Roderick, E. Zamost, J. Anderson, VMmark: a scalable benchmark for virtualized systems, Technical Report, VMware.
[23] A. Kansal, F. Zhao, J. Liu, N. Kothari, Virtual machine power metering and provisioning, in: Proceedings of the 1st ACM Symposium on Cloud Computing, 2010.
[24] V.V. Vazirani, Approximation Algorithms, Springer.
[25] G. Dósa, The tight bound of first fit decreasing bin-packing algorithm is FFD(I) <= (11/9)OPT(I) + 6/9, Combinatorics, Algorithms, Probabilistic and Experimental Methodologies 4614 (2007).
[26] The British Broadcasting Corporation (BBC), Scientists break world record for data transfer speeds, BBC News Technology.
[27] R. Summerhill, The new Internet2 network, in: 6th GLIF Meeting.
[28] J.S. Turner, A proposed architecture for the GENI backbone platform, in: Proceedings of ACM/IEEE Symposium ANCS.
[29] V. Valancius, N. Laoutaris, L. Massoulié, C. Diot, P. Rodriguez, Greening the internet with nano data centers, in: Proceedings of CoNEXT '09, 2009.
[30] K. Church, A. Greenberg, J. Hamilton, On delivering embarrassingly distributed cloud services, in: Proceedings of HotNets-VII, 2008.

Kim Khoa Nguyen is a Research Fellow in the Automation Engineering Department of the École de Technologie Supérieure (University of Quebec). He is the key architect of the GreenStar Network project and a member of the Synchromedia Consortium. Since 2008 he has been with the Optimization of Communication Networks Research Laboratory at Concordia University. His research includes green ICT, cloud computing, smart grids, router architectures and wireless networks. He holds a PhD in Electrical and Computer Engineering from Concordia University.

Mohamed Cheriet is a Full Professor in the Automation Engineering Department at the École de Technologie Supérieure (University of Quebec). He is co-founder of the Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), and founder and director of the Synchromedia Consortium. Dr. Cheriet serves on the editorial boards of Pattern Recognition (PR), the International Journal of Document Analysis and Recognition (IJDAR), and the International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI). He is a member of IAPR, a senior member of the IEEE, and the Founder and Premier Chair of the IEEE Montreal Chapter of Computational Intelligent Systems (CIS).

Mathieu Lemay holds a degree in electrical engineering from the École de Technologie Supérieure (2005) and a master's in optical networks (2007). He is currently a Ph.D. candidate at the Synchromedia Consortium of ETS. He is the Founder, President and CEO of Inocybe Technologies Inc. He is currently involved in Green IT and is leading the IaaS Framework open source initiative. His main research themes are virtualization, network segmentation, service-oriented architectures and distributed systems.

Victor Reijs studied at the University of Twente in the Netherlands and then worked for KPN Telecom Research and SURFnet. He was involved in CLNS/TUBA (an earlier alternative to IPv6) and gained experience with X.25 and ATM in national and international environments. His last activity at SURFnet was the tender for SURFnet5 (a step towards optical networking). He is currently managing the Network Development team of HEAnet and is actively involved in international activities such as GN3, FEDERICA and Mantychore (IP Networks as a Service), as well as in (optical) networking, point-to-point links, virtualization and monitoring in general.

Andrew Mackarel has been a Project Manager in HEAnet's NetDev Group for the past 5 years. His work concerns the planning and deployment of large networking projects and the introduction of new services such as Bandwidth on Demand. Previously, Andrew worked for Bell Labs and Lucent Technologies as a Test Manager in their Optical Networking Group. He has worked as a Principal Test Engineer with Lucent, Ascend Communications and Stratus Computers. He has over 30 years of work experience in the telecommunications and computing industries.

Alin Pastrama has been working for NORDUnet as a NOC Engineer since 2010 and has been involved in a number of research and deployment projects, both internal and external, such as Mantychore FP7 and GN3 Bandwidth-on-Demand. He has an M.Sc. degree in Communication Systems from the Royal Institute of Technology (KTH) in Sweden.


OpenNaaS based Management Solution for inter-data Centers Connectivity OpenNaaS based Management Solution for inter-data Centers Connectivity José Ignacio Aznar 1, Manel Jara 1, Adrián Roselló 1, Dave Wilson 2, Sergi Figuerola 1 1 Distributed Applications and Networks Area

More information



More information

GreenCloud: a packet-level simulator of energy-aware cloud computing data centers

GreenCloud: a packet-level simulator of energy-aware cloud computing data centers J Supercomput DOI 10.1007/s11227-010-0504-1 GreenCloud: a packet-level simulator of energy-aware cloud computing data centers Dzmitry Kliazovich Pascal Bouvry Samee Ullah Khan Springer Science+Business

More information


IP TELEPHONY POCKET GUIDE IP TELEPHONY POCKET GUIDE BY BARRY CASTLE 2nd Edition September 2004 ShoreTel, Inc. 960 Stewart Drive Sunnyvale, CA 94085 408.331.3300 1.800.425.9385 TABLE OF CONTENTS

More information

Network Monitoring with Software Defined Networking

Network Monitoring with Software Defined Networking Network Monitoring with Software Defined Networking Towards OpenFlow network monitoring Vassil Nikolaev Gourov Master of Science Thesis Network Architectures and Services Faculty of Electrical Engineering,

More information


PROJECT FINAL REPORT PROJECT FINAL REPORT Grant Agreement number: 212117 Project acronym: FUTUREFARM Project title: FUTUREFARM-Integration of Farm Management Information Systems to support real-time management decisions and

More information

Cloud Computing Operations Research

Cloud Computing Operations Research Cloud Computing Operations Research Ilyas Iyoob 1, Emrah Zarifoglu 2, A. B. Dieker 3 Abstract This paper argues that the cloud computing industry faces many decision problems where operations research

More information

Customer Cloud Architecture for Big Data and Analytics

Customer Cloud Architecture for Big Data and Analytics Customer Cloud Architecture for Big Data and Analytics Executive Overview Using analytics reveals patterns, trends and associations in data that help an organization understand the behavior of the people

More information

Online Social Network Data Placement over Clouds

Online Social Network Data Placement over Clouds Online Social Network Data Placement over Clouds Dissertation zur Erlangung des mathematisch-naturwissenschaftlichen Doktorgrades "Doctor rerum naturalium" der Georg-August-Universität Göttingen in the

More information

A Review on Cloud Computing: Design Challenges in Architecture and Security

A Review on Cloud Computing: Design Challenges in Architecture and Security Journal of Computing and Information Technology - CIT 19, 2011, 1, 25 55 doi:10.2498/cit.1001864 25 A Review on Cloud Computing: Design Challenges in Architecture and Security Fei Hu 1, Meikang Qiu 2,JiayinLi

More information

Masaryk University Faculty of Informatics. Master Thesis. Database management as a cloud based service for small and medium organizations

Masaryk University Faculty of Informatics. Master Thesis. Database management as a cloud based service for small and medium organizations Masaryk University Faculty of Informatics Master Thesis Database management as a cloud based service for small and medium organizations Dime Dimovski Brno, 2013 2 Statement I declare that I have worked

More information

Does My Infrastructure Look Big In This? A review of Fujitsu s Infrastructure Services. Author: Robin Bloor Published: April 2002. Bloor.

Does My Infrastructure Look Big In This? A review of Fujitsu s Infrastructure Services. Author: Robin Bloor Published: April 2002. Bloor. Does My Infrastructure Look Big In This? A review of Fujitsu s Infrastructure Services Author: Robin Bloor Published: April 2002 Bloor Research Table of Contents Contents Management Summary................................

More information