Environmental-aware virtual data center network

Kim Khoa Nguyen a, Mohamed Cheriet a, Mathieu Lemay b, Victor Reijs c, Andrew Mackarel c, Alin Pastrama d

a Department of Automated Manufacturing Engineering, Ecole de Technologie Superieure, University of Quebec, Montreal, Quebec, Canada
b Inocybe Technologies Inc., Gatineau, Quebec, Canada
c HEAnet Ltd., Dublin, Ireland
d NORDUnet, Kastrup, Denmark

Keywords: GreenStar Network; Mantychore FP7; Green ICT; Virtual data center; Neutral carbon network

Abstract

Cloud computing services have recently become a ubiquitous service delivery model, covering a wide range of applications from personal file sharing to enterprise data warehousing. Building green data center networks that provide cloud computing services is an emerging trend in the Information and Communication Technology (ICT) industry, because of global warming and the potential greenhouse gas (GHG) emissions resulting from cloud services. As one of the first worldwide initiatives provisioning ICT services entirely based on renewable energy such as solar, wind and hydroelectricity across Canada and around the world, the GreenStar Network (GSN) was developed to dynamically transport user services to be processed in data centers built in proximity to green energy sources, reducing the GHG emissions of ICT equipment. Under the current approach, which focuses mainly on reducing energy consumption at the micro level through energy efficiency improvements, overall energy consumption will eventually increase due to the growing demand from new services and users, resulting in an increase in GHG emissions. Based on the cooperation between Mantychore FP7 and the GSN, our approach is therefore much broader and more appropriate, because it focuses on GHG emission reductions at the macro level. This article presents some outcomes of our implementation of such a network model, which spans multiple green nodes in Canada, Europe and the USA. The network provides cloud computing services based on dynamic provisioning of network slices through the relocation of virtual data centers.

1. Introduction

Nowadays, reducing greenhouse gas (GHG) emissions is becoming one of the most challenging research topics in Information and Communication Technology (ICT) because of the alarming growth of indirect GHG emissions resulting from the overwhelming use of ICT electrical devices [1]. The current approach to the ICT GHG problem is to improve energy efficiency, aiming to reduce energy consumption at the micro level. Research projects following this direction have focused on micro-processor design, computer design, power-on-demand architectures and virtual machine consolidation techniques.
However, a micro-level energy efficiency approach will likely lead to an overall increase in energy consumption due to the Khazzoom-Brookes postulate (also known as the Jevons paradox) [2], which states that energy efficiency improvements that, on the broadest considerations, are economically justified at the micro level lead to higher levels of energy consumption at the macro level. Therefore, we believe that reducing GHG emissions at the macro level is a more appropriate solution. Large ICT companies, like Microsoft, which consumes up to 27 MW of energy at any given time [1], have built their data centers near green power sources. Unfortunately, many computing centers are not so close to green energy sources. Thus, a green energy distributed network is an emerging technology, given that the losses incurred in energy transmission over power utility infrastructures are much higher than those caused by data transmission, which makes relocating a data center near a renewable energy source a more efficient solution than trying to bring the energy to an existing location.

The GreenStar Network (GSN) project [3,4] is the first worldwide initiative aimed at providing ICT services based entirely on renewable energy sources such as solar, wind and hydroelectricity across Canada and around the world. The network can transport user service applications to be processed in data centers built in proximity to green energy sources, so that the GHG emissions of ICT equipment are reduced to a minimum. Whilst energy efficiency techniques are still encouraged for low-end client equipment (e.g., hand-held devices, home PCs), the heaviest computing services should be dedicated to data centers powered completely by green energy. This is enabled by the large, abundant reserve of natural green energy resources in Canada, Europe and the USA. The carbon credit saving that we refer to in this article is the emission due to the operation of the network; the GHG emission during the production phase of the equipment used in the network and in the server farms is not considered, since no special equipment is deployed in the GSN.

In order to move virtualized data centers towards network nodes powered by green energy sources distributed in such a multi-domain network, particularly between the European and North American domains, the GSN is based on a flexible routing platform provided by the Mantychore FP7 project [5], which collaborates with the GSN project to enhance the carbon footprint exchange standard for ICT services. This collaboration enables research on the feasibility of powering e-infrastructures in multiple domains worldwide with renewable energy sources. Management and technical policies will be developed to leverage virtualization, which helps to migrate virtual infrastructure resources from one site to another based on power availability. This will facilitate the use of renewable energy within the GSN, providing an Infrastructure as a Service (IaaS) management tool. Integrating connectivity between parts of the European National Research and Education Network (NREN) infrastructures and the GSN develops the competencies needed to understand how a set of green nodes (where each one is powered by a different renewable energy source) could be integrated into an everyday network. Energy considerations are taken into account before moving virtual services, so that they can be relocated without suffering connectivity interruptions. The influence of physical location on that relocation is also addressed, such as weather prediction and the estimation of solar power generation.

The main objective of the GSN/Mantychore liaison is to create a pilot study and a testbed environment from which to derive best practices and guidelines to follow when building low carbon networks. Core nodes are linked by an underlying high speed optical network having up to 100 Gbit/s bandwidth capacity provided by CANARIE.
Note that optical networks show only a modest increase in power consumption, especially with the new 100 Gbit/s technology, in comparison to electronic equipment such as routers and aggregators [6]. The migration of virtual data centers over network nodes is indeed the result of a convergence of server and network virtualization under virtual infrastructure management. The GSN as a network architecture is built with multiple layers, resulting in a large number of resources to be managed. Virtualized management has therefore been proposed for service delivery regardless of the physical location of the infrastructure, which is determined by the resource providers. This allows complex underlying services to remain hidden inside the infrastructure provider. Resources are allocated according to user requirements; hence high utilization and optimization levels can be achieved. During the service, the user monitors and controls resources as if he were the owner, allowing the user to run their applications in a virtual infrastructure powered by green energy sources.

The remainder of this article is organized as follows. In Section 2, we present the connection plan of the GSN/Mantychore joint project as well as its green nodes. Section 3 gives the physical architecture of data centers powered by renewable energy. Next, the virtualization solution for green data centers is provided in Section 4. Section 5 is dedicated to the cloud-based management solution that we implemented in the GSN. In Section 6, we formulate the optimization problem of renewable energy utilization in the network. Section 7 reports experimental and simulation results obtained when a virtual data center service is hosted in the network. Finally, we draw conclusions and present future work.

2. Provisioning of ICT services with renewable energy

Rising energy costs and operating in an austerity-based environment with dynamically changing business requirements have raised the focus of the NREN community on controlling some characteristics of these connectivity services, so that users can change some of the service characteristics without having to renegotiate with the service provider. In the European NREN community, connectivity services are provisioned on a manual basis, with some effort now focusing on automating service setup and operation. Automation and monitoring of energy usage and emissions will be one of the new provisioning constraints employed in emerging networks.

The Mantychore FP7 project has evolved from the previous research projects MANTICORE and MANTICORE II [5,7]. The initial MANTICORE project goal was to implement a proof of concept based on the idea that routers and an IP network can be set up as a service (IPNaaS), i.e., as a managed Layer 3 network. MANTICORE II continued in the steps of its predecessor to implement stable and robust software while running trials on a range of network equipment. The Mantychore FP7 project allows the NRENs to provide a complete, flexible network service that offers research communities the ability to create an IP network under their control, where they can configure:

(a) Layer 1, optical links. Users will be able to get access control over optical devices, like optical switches, to configure important properties of their cards and ports. Mantychore integrates the Argia framework [8], which provides complete control of optical resources.
(b) Layer 2, Ethernet and MPLS. Users will be able to get control over Ethernet and MPLS (Layer 2.5) switches to configure different services. In this aspect, Mantychore will integrate the Ether project [9] and its capabilities for the management of Ethernet and MPLS resources.
(c) Layer 3. The Mantychore FP7 suite includes a set of features for: (i) configuration and creation of virtual networks, (ii) configuration of physical interfaces, (iii) support of routing protocols, both internal (RIP, OSPF) and external (BGP), (iv) support of QoS and firewall services, (v) creation, modification and deletion of resources (interfaces, routers), both physical and logical, and (vi) support of IPv6, allowing the configuration of IPv6 on interfaces, routing protocols and networks.

Fig. 1. The GreenStar Network.

Fig. 1 shows the connection plan of the GSN. The Canadian section of the GSN has the largest deployment, with six GSN nodes powered by sun, wind and hydroelectricity. It is connected to the European green nodes in Ireland (HEAnet), Iceland (NORDUnet), Spain (i2cat) and the Netherlands (SURFnet), and to other nodes in other parts of the world, such as in China (WiCo), Egypt (Smart Village) and the USA (ESNet). One of the key objectives of the liaison between the Mantychore FP7 and GSN projects is to enable renewable energy provisioning for NRENs. Building competency in using renewable energy resources is vital for any NREN with such an abundance of natural power generation resources at its backdoor, and this has been targeted as a potential major industry for the future. HEAnet in Ireland [10], where the share of electricity generated from renewable energy sources in 2009 was 14.4%, has connected two GSN nodes via the GEANT Plus service and its own NREN network: a solar powered GSN node in the South East of Ireland and a wind powered, grid supplied location in the North East of the country.

NORDUnet [11], which links the Scandinavian countries having the highest proportion of renewable energy sources in Europe, houses a GSN node at a data center in Reykjavík (Iceland) and also contributes to the GSN/Mantychore controller interface development. In Spain [12], where 12.5% of energy comes from renewable energy sources (mostly solar and wind), i2cat is leading the Mantychore development as well as actively defining the GSN/Mantychore interface, and it will set up a solar powered node in Lleida (Spain). The connected nodes and their sources of energy are shown in Table 1.

Table 1. List of connected GSN nodes.

Node          Location                    Type of energy
ETS           Montreal, QC, Canada        Hydroelectricity
UQAM          Montreal, QC, Canada        Hydroelectricity
CRC           Ottawa, ON, Canada          Solar
Cybera        Calgary, AB, Canada         Solar
BastionHost   Halifax, NS, Canada         Wind
RackForce     Kelowna, BC, Canada         Hydroelectricity
CalIT2        San Diego, CA, USA          Solar/DC power
NORDUnet      Reykjavik, Iceland          Geothermal
SURFnet       Utrecht, The Netherlands    Wind
HEAnet-DKIT   Dundalk, Ireland            Wind
HEAnet-EPA    Wexford, Ireland            Solar
i2cat         Barcelona, Spain            Solar
WICO          Shanghai, China             Solar

Recently, industrial projects have been built around the world for generating renewable energy to offset data center power consumption [1,13]. However, they focus on a single data center. The GSN is characterized by the cooperation of multiple green data centers to reduce the overall GHG emissions of a network; to the best of our knowledge, it is the first ICT carbon reduction research initiative of this type in the world. Prior to our work, theoretical research had been done on the optimization of power consumption (including brown energy) in a network of data centers based on the redistribution of workload across nodes [14-16]. The GSN goes one step further with a real deployment of a renewable energy powered network. In addition, our approach is to dynamically relocate virtual data centers through a cloud management middleware.

3. Architecture of a green powered data center

Fig. 2 illustrates the physical architecture of the green nodes: a hydroelectricity node and two other nodes, one powered by solar energy and the other by wind. The solar panels are grouped in bundles of 9 or 10 panels, each panel generating a power of W. The wind turbine system is a 15 kW generator.

Fig. 2. Architecture of green nodes (hydro, wind and solar types).

After being accumulated in a battery bank, the electrical energy is treated by an inverter/charger in order to produce an appropriate output current for the computing and networking devices. User applications run on multiple Dell PowerEdge R710 systems, hosted by a rack mount structure in an outdoor climate-controlled enclosure. The air conditioning and heating elements are powered by green energy at the solar and wind nodes; they are connected to the regular power grid at the hydro nodes. As the servers are installed in the outdoor enclosure, the carbon assessment of a node can be highly accurate. The PDUs (Power Distribution Units), provided with power monitoring features, measure electrical current and voltage. Within each node, servers are linked by a local network, which is then connected to the core network through GE transceivers. Data flows are transferred among GSN nodes over dedicated circuits (such as lightpaths or P2P links), tunnels over the Internet, or logical IP networks.

The Montreal GSN node plays the role of a manager (hub node) that opportunistically sets up the required Layer 1 and Layer 2 connectivity using dynamic services, then pushes Virtual Machines (VMs) or software virtual routers from the hub to a sun or wind node (spoke node) when power is available. VMs will be pulled back to the hub node when power dwindles. In such a case, the spoke node may switch over to grid energy to run other services if required; however, GSN services are powered entirely by green energy. The VMs are used to run user applications, particularly heavy-computing services. Based on this testbed network, experiments and research are performed targeting cloud management algorithms and optimization of the intermittently available renewable energy sources.

4. Virtualizing a data center

In the GSN, dynamic management of green data centers is achieved through virtualization. All data center components are virtualized and placed into a cloud which is then controlled by a cloud manager. From the user's point of view, a virtual data center can be seen as an information factory that runs 24/7 to deliver a sustained service (i.e., a stream of data). It includes virtual machines (VMs) linked by a virtual network. The primary benefit of virtualization technologies across different IT resources is to boost overall effectiveness while improving application service delivery (performance, availability, responsiveness, security) to sustain business growth in an environmentally friendly manner.

Fig. 3 gives a schematic overview of the virtualization process of a simple data center which is powered by renewable energy sources (i.e., wind and solar) and the power grid, including:

Power generators, which produce electrical energy from renewable sources, such as wind, solar or geothermal.
A charge controller, which controls the generated electricity going to the battery bank, preventing overcharging and overvoltage.
A battery bank, which stores energy and delivers power.
An inverter, which converts DC from the battery bank to an AC supply used by the electrical devices. When the voltage of the battery bank is not sufficient, the inverter will buy current from the power grid. In the CRC (Ottawa) data center, the battery voltage is maintained between 24 Vdc and 29 Vdc; otherwise, electricity from the power grid is bought (it is AC and therefore is not converted by the inverter). If the generated power is too high and is not completely consumed by the data center equipment, the surplus is sold to the power grid.
A Power Distribution Unit (PDU), which distributes electricity from the inverter to servers, switches and routers. The PDU has infeeds and outlets for measuring and controlling the incoming and outgoing currents. Currently, the GSN software supports Raritan, Server Tech, Avocent, APC and Eaton PDUs.
Climate control equipment, which allows the temperature and humidity of the data center's interior to be accurately controlled.
Servers and switches, which are used to build the data center's network.

Fig. 3. Green data center and virtualization.

Each device in the data center is virtualized by a software tool, and is then represented as a resource (e.g., PDU resource, network resource, etc.). These resource instances communicate with the devices, parse commands and decide to perform appropriate actions. Communications with the devices can be done through the Telnet, SSH or SNMP protocols. Some devices come with the manufacturer's software; however, such software is usually designed to interact with human users (e.g., through a web interface). The advantage of the virtualization approach is that a resource can be used by other services, or other resources, which enables auto-management processes. Fig. 4 shows an example of the resources developed for PDUs and computing servers. Similar resources are developed for power generators, climate controls, switches and routers. Server virtualization is able to control the physical server and the hypervisors used on the server; the GSN supports the KVM and XEN hypervisors. The Facility Resource (Fig. 3) is not attached to a physical device. It is responsible for linking the Power Source, PDU and Climate resources in order to create measurement data which is used to make decisions for data center management. Each power consumer or generator provides a metering function which allows the user to view the power consumed or produced by the device. The total power consumption of a data center is reported by the Facility Resource.

Fig. 4. Example of resources in the cloud and a subset of their commands.

In the current implementation, network devices are virtualized at three layers. At the physical layer, Argia [8] treats optical cross-connects as software objects and can provision and reconfigure optical lightpaths within a single domain or across multiple, independently managed domains. At Layer 2, Ether [9] is used, which is similar in concept to Argia but with a focus on Ethernet and MPLS networks; Ether allows users to acquire ports on an enterprise switch and manage VLAN or MPLS configurations on their own. At the network layer, a virtualized control plane created by the Mantychore FP7 project [5] is deployed, which was specifically designed for IP networks with the ability to define and configure physical and/or logical IP networks.
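As an illustration of this resource abstraction, the following minimal Java sketch shows how a PDU resource could expose per-outlet readings and how a Facility resource could aggregate them into the total power report used for management decisions. The class and method names are ours, not the actual IaaS Framework API, and in the real system the readings would be obtained from the devices over Telnet, SSH or SNMP.

import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the resource abstraction; not the real IaaS Framework API. */
interface Resource {
    String name();
}

/** One PDU outlet measurement: voltage (V) and current (A) as reported by the device. */
class OutletReading {
    final int outlet;
    final double voltage;
    final double current;
    OutletReading(int outlet, double voltage, double current) {
        this.outlet = outlet; this.voltage = voltage; this.current = current;
    }
    double watts() { return voltage * current; }
}

/** Wraps a physical PDU; record() stands in for a poll of the device. */
class PduResource implements Resource {
    private final String name;
    private final List<OutletReading> readings = new ArrayList<>();
    PduResource(String name) { this.name = name; }
    public String name() { return name; }
    void record(OutletReading r) { readings.add(r); }
    double totalWatts() {
        return readings.stream().mapToDouble(OutletReading::watts).sum();
    }
}

/** Links power-related resources and reports the data center's total consumption. */
class FacilityResource implements Resource {
    private final String name;
    private final List<PduResource> pdus = new ArrayList<>();
    FacilityResource(String name) { this.name = name; }
    public String name() { return name; }
    void attach(PduResource pdu) { pdus.add(pdu); }
    double totalConsumptionWatts() {
        return pdus.stream().mapToDouble(PduResource::totalWatts).sum();
    }
}

public class FacilityDemo {
    public static void main(String[] args) {
        PduResource pdu = new PduResource("pdu-1");
        pdu.record(new OutletReading(1, 120.0, 2.5));   // a server drawing about 300 W
        pdu.record(new OutletReading(2, 120.0, 0.4));   // a switch drawing about 48 W
        FacilityResource facility = new FacilityResource("ottawa-node");
        facility.attach(pdu);
        System.out.printf("Total consumption: %.1f W%n", facility.totalConsumptionWatts());
    }
}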

In order to build a virtual data center, all Compute, Network, Climate, PDU and Power Source resources are hosted by a cloud service, which allows the resources to be always active and available to control the devices. In many data centers, e.g. [13], energy management and server management are separate functions. This imposes challenges in data center management, particularly when power sources are intermittent, because the server load cannot be adjusted to adapt to the power provisioning capacity, and the data center strictly depends on power providers. As power and servers are managed in a unified manner, our proposed model allows a completely autonomous management of the data center. Our power management solution is designed with four key characteristics:

Auto-configuration: the system is able to detect a new device connecting to a data center. It will ask the administrator to provide the appropriate configuration for the new device.
Easy monitoring: comfortable access to real-time information on energy consumption, helping the administrator to make power regulation decisions.
Remote control: allows online access to devices, enabling these appliances to be controlled remotely.
Optimized planning: helps reduce GHG emissions by balancing workloads according to the capacity of the servers and the available energy sources in the entire network.

An example of how the power management function interacts is as follows. The user wants to connect a new device (e.g., a server) to the data center. When this new server is plugged into a PDU and turned on, the PDU resource detects it by scanning the outlets and finding the load current of the outlet. If there is no further information about the device, the system considers it as a basic resource and assigns only a basic on/off command to the device. The user will be asked to provide access information for the device (e.g., username, password, protocol) and the type of device. When the required information is provided and a new Compute resource has been identified, it will be exposed to the cloud. If the device is non-intelligent (e.g., a light), its consumption level is accounted for in the total consumption of the data center; however, no particular action will be taken on this device (e.g., no workload shared with other devices through a migration when power dwindles).
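The following minimal Java sketch illustrates this auto-configuration flow under simplified assumptions; the names, the detection threshold and the returned labels are illustrative only and do not reflect the actual GSN implementation.

import java.util.Optional;

/** Hypothetical sketch of the auto-configuration flow; names are illustrative, not the GSN API. */
class AutoConfigSketch {

    record AccessInfo(String user, String password, String protocol, String deviceType) {}

    /** Result of probing one PDU outlet during a periodic scan. */
    record OutletProbe(int outlet, double loadCurrentAmps) {
        boolean active() { return loadCurrentAmps > 0.1; }   // assumed detection threshold
    }

    /** Called for every outlet found active during a PDU scan. */
    static String onNewDevice(OutletProbe probe, Optional<AccessInfo> providedByAdmin) {
        if (!probe.active()) {
            return "ignore";                                  // nothing plugged into this outlet
        }
        if (providedByAdmin.isEmpty()) {
            // Unknown device: treat it as a basic resource with only on/off control;
            // its draw still counts towards the facility's total consumption.
            return "basic on/off resource on outlet " + probe.outlet();
        }
        AccessInfo info = providedByAdmin.get();
        if (info.deviceType().equals("server")) {
            // Access information allows a full Compute resource to be exposed to the cloud.
            return "compute resource (" + info.protocol() + ") on outlet " + probe.outlet();
        }
        return "monitored-only device on outlet " + probe.outlet();
    }

    public static void main(String[] args) {
        System.out.println(onNewDevice(new OutletProbe(3, 1.8), Optional.empty()));
        System.out.println(onNewDevice(new OutletProbe(4, 2.1),
                Optional.of(new AccessInfo("admin", "secret", "ssh", "server"))));
    }
}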
Management of a network of virtual data centers is discussed in the next section.

5. Management of a virtual data center network

From an operational point of view, the GreenStar Network is controlled by a computing and energy management middleware, as shown in Fig. 5, which includes two distinct layers:

Resource layer: contains the drivers of each physical device. Each resource controls a device and offers services to a higher layer. Basic functions for any resource include start, stop, monitor, power control, report device status and run application. Compute Resources control physical servers and VMs. Power Source Resources manage energy generators (e.g., wind turbines or solar panels). PDU Resources control the Power Distribution Units (PDUs). Climate Resources monitor the weather conditions and the humidity of the data centers. A Facility Resource links the Power Source, PDU and Climate resources of a data center with its Compute resources. It determines the power consumed by each server and sends notifications to the managers when the power or environmental conditions change.
Management layer: includes a Controller and a set of managers. Each manager is responsible for its individual resources. For example, the Cloud Manager manages Compute resources, the Facility Manager manages the Facility resources, and the Network Manager handles data center connectivity. The Controller is the brain of the network, responsible for determining the optimized location of each VM. It computes the action plans to be executed on each set of resources, and then orders the managers to perform them.

Fig. 5. The GreenStar Network middleware architecture.

Based on information provided by the Controller, these managers can execute relocation and migration tasks. The relationship between the Controller and the associated managers can be regarded as the Controller/Forwarder connection in an IP router. The Controller keeps an overall view of the entire network; it computes the best location for each VM and updates a Resource Location Table (RLT). A snapshot of the RLT is provided to the managers. When there is a request for the creation of a new VM, a manager will consult the table to determine the best location for the new VM. If there is a change in the network, e.g., when the power source of a data center dwindles, the Controller re-computes the best locations for the VMs and updates the managers with the new RLT.

The middleware is implemented on the J2EE/OSGi platform, using components of the IaaS Framework [17]. It is designed in a modular manner so the GSN can easily be extended to cover different layers and technologies. Through Web interfaces, users may determine GHG emission boundaries based on information about VM power and energy sources, and then take actions to reduce GHG emissions. The project is therefore ISO compliant. Indeed, cloud management is not a new topic; however, the IaaS Framework was developed for the GSN because the project requires an open platform converging server and network virtualization. Whilst many cloud management solutions in the market focus particularly on computing resources, IaaS Framework components can be used to build network virtualization tools [6], allowing flexible setup of data flows among data centers. Indeed, the virtual network management provided by cloud middleware such as OpenNebula [18] and OpenStack [19] is limited to the IP layer, while the IaaS Framework deals with network virtualization on all three network layers. The ability to incorporate third-party power control components is also an advantage of the IaaS Framework.

In order to compute the best location for each VM, each host in the network is associated with a number of parameters. Key parameters include the current power level and the available operating time, which can be determined respectively through a power monitor and weather measurements, in addition to statistical information. The Controller implements an optimization algorithm to maximize the utilization of the data centers within the period when renewable energy is available. In other words, we try to push as many VMs as possible to the hosts powered by renewable energy.
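To make the Controller/manager exchange described above more concrete, the minimal Java sketch below shows a hypothetical Resource Location Table being recomputed by the Controller when the available green operating time changes, and consulted by a manager when a new VM has to be placed. The placement policy shown here is a deliberately simplified stand-in for the optimization of Section 6.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical sketch of the RLT exchange; not the GSN middleware API. */
class RltSketch {

    /** Snapshot mapping each VM to its currently preferred data center. */
    static class ResourceLocationTable {
        private final Map<String, String> bestLocation = new HashMap<>();
        void set(String vmId, String dataCenter) { bestLocation.put(vmId, dataCenter); }
        String lookup(String vmId) { return bestLocation.getOrDefault(vmId, "hub"); }
    }

    /** The Controller keeps the overall view and recomputes the table when a power source dwindles. */
    static class Controller {
        ResourceLocationTable recompute(List<String> vmIds, Map<String, Double> opHoursPerDataCenter) {
            // Toy policy: send every VM to the data center with the longest remaining green
            // operating time; the real Controller solves the placement problem of Section 6.
            String best = opHoursPerDataCenter.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey).orElse("hub");
            ResourceLocationTable rlt = new ResourceLocationTable();
            vmIds.forEach(vm -> rlt.set(vm, best));
            return rlt;
        }
    }

    /** A manager only consults its RLT snapshot when a new VM has to be placed. */
    static class CloudManager {
        private ResourceLocationTable snapshot = new ResourceLocationTable();
        void update(ResourceLocationTable rlt) { this.snapshot = rlt; }
        String placeNewVm(String vmId) { return snapshot.lookup(vmId); }
    }

    public static void main(String[] args) {
        Controller controller = new Controller();
        CloudManager manager = new CloudManager();
        manager.update(controller.recompute(List.of("vm-42"),
                Map.of("ottawa-solar", 2.5, "montreal-hydro", 24.0)));
        System.out.println("vm-42 goes to " + manager.placeNewVm("vm-42"));
    }
}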

A number of constraints are also taken into account, such as:

Host capacity limitation: each VM requires a certain amount of memory and a number of CPU cycles. Thus, a physical server can host only a limited number of VMs.
Network limitation: each service requires a certain amount of bandwidth. So, the number of VMs moved to a data center is limited by the available network bandwidth of the data center.
Power source limitation: each data center is powered by a number of energy sources, which must meet the power requirements of the physical servers and accessories. When the power dwindles, the load on the physical servers should be reduced by moving out VMs.

The GreenStar Network is able to automatically make scheduling decisions on dynamically migrating and consolidating VMs among physical servers within data centers to meet workload requirements while maximizing renewable energy utilization, especially for performance-sensitive applications. In the design of the GSN architecture, several key issues are addressed, including when to trigger VM migrations and how to select alternative physical machines to achieve an optimized VM placement.

6. Environmental-aware migration of virtual data centers

In the GSN, a virtual data center migration decision can be made in the following situations:

If the power dwindles below a threshold which is not sufficient to keep the data center operational. This event is detected and reported by the Facility resource.
If the climate conditions are not suitable for the data center (i.e., the temperature is too low or too high, or the humidity level is too high). This event is detected and reported by the Climate resource.

The Climate resource is also able to import forecast information represented in XML format, which is used to plan migrations for a given period ahead. A migration involves four steps: (i) setting up a new environment (i.e., in the target data center) for hosting the VMs with the required configurations, (ii) configuring the network connection, (iii) moving the VMs and their running state information through this high speed connection to the new locations, and (iv) turning off the computing resources at the original node. Note that the migration is implemented in a live manner based on support from hypervisors [20], meaning that a VM can be moved from one physical server to another while continuously running, without any noticeable effects from the point of view of the end users. During this procedure, the memory of the virtual machine (VM) is iteratively copied to the destination without stopping its execution. A delay of around ms is required to perform the final synchronization before the virtual machine begins executing at its final destination, providing an illusion of seamless migration. In our experiments with the online interactive application GeoChronos [21], each VM migration requires 32 Mbps of bandwidth in order to keep the service live during the migration; thus a 10 Gbit/s link between two data centers can transport more than 300 VMs in parallel. Given that each VM occupies one processor and that each server has up to 16 processors, 20 servers can be moved in parallel. Generally, only one data center needs to be migrated at a given moment in time. However, the Controller might be unable to migrate all the VMs of a data center because the servers in the network are over capacity. In such a case, if there is no further requirement (e.g., user preferences on the VMs to be moved), the Controller will try to migrate as many VMs as possible, starting from the smallest VM (in terms of memory capacity).
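The following Java sketch outlines the four-step relocation procedure under simplified assumptions; the method bodies are placeholders standing in for the actual provisioning, network setup, hypervisor-driven live migration and power control operations.

import java.util.List;

/** Hypothetical sketch of the four-step relocation procedure; method bodies are placeholders. */
class RelocationSketch {

    record Vm(String id, int memoryGb) {}

    /** Relocate a virtual data center from source to target following steps (i)-(iv). */
    static void relocate(String source, String target, List<Vm> vms) {
        provisionTargetEnvironment(target, vms);       // (i) prepare hosts and configurations
        configureNetworkPath(source, target);          // (ii) e.g., a dedicated lightpath or a tunnel
        for (Vm vm : vms) {
            liveMigrate(vm, source, target);           // (iii) iterative memory copy, VM keeps running
        }
        powerDownComputeResources(source);             // (iv) free the node whose power dwindled
    }

    static void provisionTargetEnvironment(String target, List<Vm> vms) {
        System.out.println("provisioning " + vms.size() + " VM slots at " + target);
    }
    static void configureNetworkPath(String source, String target) {
        System.out.println("setting up connectivity " + source + " -> " + target);
    }
    static void liveMigrate(Vm vm, String source, String target) {
        System.out.println("live-migrating " + vm.id() + " (" + vm.memoryGb() + " GB)");
    }
    static void powerDownComputeResources(String source) {
        System.out.println("powering down compute resources at " + source);
    }

    public static void main(String[] args) {
        relocate("ottawa-solar", "montreal-hydro", List.of(new Vm("vm-1", 4), new Vm("vm-2", 4)));
    }
}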

6.1. Optimization problem formulation

The key challenge of a mixed power management solution is to determine how long the current power source can keep the data center running. We define an operating hour (ophour) metric as the time during which the data center can be properly operational with the current power source. ophour is calculated based on the energy stored in the battery bank, the system voltage and the electric current. It can take two values:

ophour under current load (ophour_currentload): the time that the data center can be operational assuming no additional load will be put on the servers.
ophour under maximum load (ophour_maximumload): the time that the data center can be operational assuming all servers run at their full capacity (memory and CPU).

The consumption of each server is determined through the PDU, which reports the voltage and current of the server. The value of ophour is calculated as follows:

  ophour_{currentload} = \frac{E_{max} \cdot BatteryChargeState}{V_{out} \cdot I_{out}}    (1)

  ophour_{maximumload} = \frac{E_{max} \cdot BatteryChargeState}{V_{out} \cdot I_{max}}    (2)

where I_{max} is the electrical current recorded by the PDU outlet when the server is running at its maximal capacity (reported by server benchmarking tools [22]). The maximal capacity of a server is reached when a heavy load application is run.

The optimization problem of data center migration is formulated as follows. Given N VMs hosted by a data center, the ophour value (op) of the data center is provided by its Facility Resource using Eqs. (1) and (2). When the ophour is lower than a threshold T defined by the administrator, the data center needs to be relocated. Each VM V_i with capacity C_i = \{C_{ik}\}, i.e., memory capacity and number of CPUs, will be moved to an alternate location. In the GSN, the threshold T = 1 for all data centers.

There are M physical data centers powered by green energy sources that will be used to host the virtual data center. Each green data center D_j is currently powered by an energy source S_j having a green factor E_j. In the GSN, the green factor of wind and solar sources is 100, and that of hydro and geothermal sources is 95. The ophour of data center D_j under its current energy source is op_j. The data center D_j has L_j Compute resources \{R_{jl}\}, each of which has available capacity (CPU and memory) C_{jl} = \{C_{jlk}\} and is hosting H_{jl} VMs \{V_{jlh}\}, each VM V_{jlh} having capacity C_{jlh} = \{C_{jlhk}\}. When a VM V_i is moved to D_j, the new ophour op'_j of the data center is predicted as follows:

  op'_j = op_j \left( 1 - \sum_k \frac{a_k C_{ik}}{\sum_{l=1}^{L_j} C_{jlk}} \right)    (3)

where a_k is a specific constant determining the power consumption for CPU and memory access, as described in [23]. The following matrix of binary variables represents the virtual data center relocation result:

  x_{ijl} = \begin{cases} 1 & \text{if } V_i \text{ is hosted by } R_{jl} \text{ in } D_j \\ 0 & \text{otherwise} \end{cases}    (4)

The objective function is to maximize the utilization of green energy; in other words, we try to keep the VMs running on green data centers as long as possible. So, the problem is to maximize

  \sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{l=1}^{L_j} x_{ijl} \, op_j \left( 1 - \sum_k \frac{a_k C_{ik}}{\sum_{l=1}^{L_j} C_{jlk}} \right) E_j    (5)

subject to:

  \sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{l=1}^{L_j} x_{ijl} = N    (6)

  x_{ijl} C_{ik} + \sum_{h=1}^{H_{jl}} C_{jlhk} \le C_{jlk} \quad \forall i, j, l, k    (7)

  op_j \left( 1 - \sum_k \frac{a_k C_{ik}}{\sum_{l=1}^{L_j} C_{jlk}} \right) > T \quad \forall i, j    (8)

We assume that the optical switches on the underlying CANARIE network are powered by a sustainable green power source (i.e., hydroelectricity). Also, note that the power consumption of the optical lightpaths linking data centers is very small compared to the consumption of the data centers. Therefore, migrations do not increase the consumption of brown energy in the network, and the power consumption of the underlying network is hence not included in the objective function. In reality, as the GSN is supported by a very high-speed underlying network and the current nodes are tiny data centers with few servers, the time required for migrating an entire node is in the order of minutes, whilst a full battery bank is able to power a node for a period of ten hours. Additional user-specific constraints, for example that VMs from a single virtual data center are required to be placed in a single physical data center, are not taken into account in the current network model.

This formulation is referred to as the original mathematical optimization problem. The solution of the problem will show whether the virtual data center can be relocated and, if so, the optimized relocation scheme. If no solution is found, in other words, if there are not sufficient resources in the network to host the VMs, a notification will be sent to the system administrator, who may then make a decision to scale up the existing data centers. Since the objective function is nonlinear and there are nonlinear constraints, the optimization model is a nonlinear programming problem with binary variables. The problem can be considered as a bin packing problem with variable bin sizes and prices, which is proven NP-hard. In our implementation, we apply a modification of the Best Fit Decreasing (BFD) algorithm [24] to solve the problem. Our heuristic is as follows: as the goal is to maximize green power utilization, the data centers are first sorted in decreasing order of their available power. The VMs are sorted in increasing order of their size, and are then placed into the data centers in the list, starting from the top. Given the relatively small number of resources in the network (i.e., in the order of a few hundred), the solution can be obtained in a few minutes. Without the proposed ordering heuristic, the gap between a best-fit solution and the optimum is about 25% for 13 network nodes [25].
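A minimal Java sketch of the ordering heuristic is given below, assuming illustrative capacity figures and using the remaining green operating time as the measure of available power; the feasibility check is a simplified stand-in for constraints (7) and (8).

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Hypothetical sketch of the modified Best Fit Decreasing heuristic; illustrative only. */
class BfdSketch {

    record Vm(String id, double cpu, double memoryGb) { double size() { return cpu + memoryGb; } }

    static class DataCenter {
        final String name;
        final double opHours;          // remaining green operating time, cf. Eq. (1)
        double cpuFree, memFree;       // remaining capacity after placements
        DataCenter(String name, double opHours, double cpuFree, double memFree) {
            this.name = name; this.opHours = opHours; this.cpuFree = cpuFree; this.memFree = memFree;
        }
        boolean fits(Vm vm) { return vm.cpu() <= cpuFree && vm.memoryGb() <= memFree; }
        void host(Vm vm) { cpuFree -= vm.cpu(); memFree -= vm.memoryGb(); }
    }

    /** Data centers sorted by decreasing available green power, VMs by increasing size. */
    static List<String> place(List<Vm> vms, List<DataCenter> dcs, double thresholdT) {
        List<DataCenter> byPower = new ArrayList<>(dcs);
        byPower.sort(Comparator.comparingDouble((DataCenter d) -> d.opHours).reversed());
        List<Vm> bySize = new ArrayList<>(vms);
        bySize.sort(Comparator.comparingDouble(Vm::size));

        List<String> plan = new ArrayList<>();
        for (Vm vm : bySize) {
            for (DataCenter dc : byPower) {
                // Simplified check of constraints (7) and (8): enough free capacity and
                // a green operating time that stays above the threshold T.
                if (dc.fits(vm) && dc.opHours > thresholdT) {
                    dc.host(vm);
                    plan.add(vm.id() + " -> " + dc.name);
                    break;
                }
            }
        }
        return plan;   // VMs left out of the plan would trigger a notification to the administrator
    }

    public static void main(String[] args) {
        List<DataCenter> dcs = List.of(
                new DataCenter("calgary-solar", 6.0, 16, 64),
                new DataCenter("montreal-hydro", 24.0, 32, 128));
        List<Vm> vms = List.of(new Vm("vm-a", 2, 8), new Vm("vm-b", 1, 4));
        place(vms, dcs, 1.0).forEach(System.out::println);
    }
}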

7. Experimental results

We evaluate the time during which a virtual data center in the GSN/Mantychore network can remain viable without the power grid, assuming that the processing power required by the data center is equivalent to 48 processors. Weather conditions are also taken into account, with a predefined threshold of 80% for the humidity level. We also assume that the Power Usage Effectiveness (PUE) of the data center is 1.5, meaning that the total power used by the data center is 1.5 times the power effectively used for the computing resources. In reality, a lower PUE can be obtained with more advanced data center architectures.

Fig. 6A and B show, respectively, the electricity generated by a solar PV system installed at a data center in Ottawa during a period of three months, and the humidity level of the data center. State ON in Fig. 6C indicates that the data center is powered by solar energy. When the electric current is too small or the solar PV charger is off due to bad weather, the data center is powered from the battery bank for a limited time; it then has to switch to the power grid (state OFF) if virtualization and migration techniques are not present. The data center service fails (state FAIL) when the battery bank is empty or the humidity level is too high (>80%).

Fig. 6. Power, humidity level and data center state from October 12, 2010 to January 26, 2011.

As shown in Fig. 6C, the data center has to switch frequently from the solar PV to the power grid when the autonomous power regulation feature is not implemented, as the weather gets worse during the last months of the year. From mid-December to the end of January, the electricity generated by the solar PV is negligible, so the data center has to be connected to the power grid permanently. The data center fails about 15% of the time, due to either the humidity or the power level.

The same workload and power pattern is imported into the Controller in order to validate the performance of our system when the data center is virtualized, as shown in Fig. 6D. As the virtual data center is relocated to alternate locations when the solar energy dwindles or the humidity level rises above the threshold, the time it is in the ON state is much longer than in Fig. 6C. The zero failing time was achieved because the network has never been running at its full capacity; in other words, the other data centers always have enough space (memory and CPU) to receive the VMs of the Ottawa center. Although the time when the virtual data center is powered entirely by solar energy increases significantly compared to Fig. 6C (e.g., over 80%), from the second half of January the data center still has to be connected to the power grid (state OFF), because renewable energy is not sufficient to power all the data centers in the network in mid-winter.

Extensive experiments showed that a virtual data center having 10 VMs with a total of 40 GB of memory takes about five minutes to be relocated across nodes on the Canadian section of the GSN, from the western to the eastern coast. This time almost doubles when VMs are moved to a node located in the southern USA or in Europe, due to the larger number of transit switches. The performance of migration is extremely low when VMs go to China, due to the lack of lightpath tunnels; this suggests that optical connections are required for this kind of network. It is worth noting that the node in China is not permanently connected to the GSN, and therefore it is not involved in our optimization problem. As GSN nodes are distributed across different time zones, we observed that VMs in solar nodes are moved from East (i.e., Europe) to West (i.e., North America) following the sunlight. When solar power is not sufficient for running the VMs, wind, hydroelectricity and geothermal nodes take over. Note that the GSN is still an on-going research project, and the experimental results presented in this paper were obtained over a short period of time. Some remarks need to be added:

VM migration would be costly, especially for bandwidth constrained or high latency networks. In reality, the GSN is deployed on top of a high speed optical network provided by CANARIE having up to 100 Gbps capacity; it is worth noting that CANARIE is one of the fastest networks in the world [26]. As a research project, the cost of the communication network is not yet taken into account, although it could be high for intercontinental connections. Indeed, our algorithm attempts to minimize the number of migrations in the network by keeping VMs running on data centers as long as possible; thus, the network resources used for virtual data center relocations are also minimized. Moreover, with the overwhelming utilization of WDM networks, especially the next generation high-speed networks such as Internet2 [27], GENI [28] or GÉANT [12], we believe that the share of networking in the overall cost model will be reduced, making the green data center network realistic.

The size of the GSN nodes is very small compared to real corporate data centers. Thus, the time for relocating a virtual data center could be underestimated in our research. However, some recent research suggests that small data centers would be more profitable than large-scale ones [29,30]; the GSN model would therefore be appropriate for this kind of data center. In addition to the issue of available renewable power, mega-scale data centers would result in a very long migration time when an entire data center is relocated. Also, the resources used for migrations increase with the size of the data center. As migrations do not generate user revenue, a network of large scale data centers would hence become unrealistic.

All GSN nodes are located in northern countries. Thus, it is hard to keep the network operational in mid-winter periods with solar energy. Our observations showed that the hub could easily be overloaded. A possible solution is to partition the network into multiple sections, each having a hub powered by sustainable energy sources.

Each hub will then be responsible for maintaining the service of one section. For example, the GSN may have two hubs, one in North America and the other in Europe.

8. Conclusion

In this paper, we have presented a prototype of a Future Internet powered only by green energy sources. As a result of the cooperation between European and North American researchers, the GreenStar Network is a promising model to deal with GHG reporting and carbon tax issues for large ICT organizations. Based on the Mantychore FP7 project, a number of techniques have been developed in order to provision renewable energy for ICT services worldwide. Virtualization techniques are shown to be the most appropriate solution to manage such a network and to migrate virtual data centers following the availability of green energy sources, such as solar and wind.

Our future work includes research on the quality of the services hosted by the GSN and on scalable resource management.

Acknowledgments

The authors thank all partners for their contribution to the GSN and Mantychore FP7 projects. The GSN project is financially supported by CANARIE Inc.

References

[1] The Climate Group, SMART2020: Enabling the low carbon economy in the information age, Report on behalf of the Global eSustainability Initiative (GeSI).
[2] H.D. Saunders, The Khazzoom-Brookes postulate and neoclassical growth, Energy Journal 13 (4) (1992).
[3] The GreenStar Network Project. <http://www.greenstarnetwork.com/>.
[4] C. Despins, F. Labeau, T. Le Ngoc, R. Labelle, M. Cheriet, C. Thibeault, F. Gagnon, A. Leon-Garcia, O. Cherkaoui, B. St. Arnaud, J. Mcneill, Y. Lemieux, M. Lemay, Leveraging green communications for carbon emission reductions: techniques, testbeds and emerging carbon footprint standards, IEEE Communications Magazine 49 (8) (2011).
[5] E. Grasa, X. Hesselbach, S. Figuerola, V. Reijs, D. Wilson, J.M. Uzé, L. Fischer, T. de Miguel, The MANTICORE project: providing users with a logical IP network service, in: TERENA Networking Conference, 5/.
[6] S. Figuerola, M. Lemay, V. Reijs, M. Savoie, B. St. Arnaud, Converged optical network infrastructures in support of future internet and grid services using IaaS to reduce GHG emissions, Journal of Lightwave Technology 27 (12) (2009).
[7] E. Grasa, S. Figuerola, X. Barrera, et al., MANTICORE II: IP network as a service pilots at HEAnet, NORDUnet and RedIRIS, in: TERENA Networking Conference, 6/.
[8] E. Grasa, S. Figuerola, A. Forns, G. Junyent, J. Mambretti, Extending the Argia software with a dynamic optical multicast service to support high performance digital media, Optical Switching and Networking 6 (2) (2009).
[9] S. Figuerola, M. Lemay, Infrastructure services for optical networks [Invited], Journal of Optical Communications and Networking 1 (2) (2009) A247-A.
[10] HEAnet website. <http://www.heanet.ie/>.
[11] NORDUnet website. <http://www.nordu.net>.
[12] J. Moth, G. Kramer, GN3 Study of Environmental Impact: Inventory of Greenhouse Gas Emissions and Removals, NORDUnet, GÉANT Technical Report, 9/.
[13] P. Thibodeau, Wind power data center project planned in urban area, <http://www.computerworld.com/>.
[14] L. Zhenhua, L. Minghong, A. Wierman, S.H. Low, L.L.H. Andrew, Greening geographic load balancing, in: Proceedings of ACM SIGMETRICS, 2011.
[15] S.K. Garg, C.S. Yeo, A. Anandasivam, R. Buyya, Environment-conscious scheduling of HPC applications on distributed cloud-oriented data centers, Journal of Parallel and Distributed Computing (JPDC) 71 (6) (2011).
[16] A. Qureshi, R. Weber, H. Balakrishnan, J. Guttag, B. Maggs, Cutting the electric bill for internet-scale systems, ACM Computer Communication Review 39 (4) (2009).
[17] M. Lemay, K.-K. Nguyen, B. St. Arnaud, M. Cheriet, Convergence of cloud computing and network virtualization: towards a neutral carbon network, IEEE Internet Computing Magazine (2011).
[18] D. Milojičić, I.M. Llorente, R.S. Ruben, OpenNebula: a cloud management tool, IEEE Internet Computing Magazine 15 (2) (2011).
[19] OpenStack.org, OpenStack Open Source Cloud Computing Software. <http://openstack.org>.
[20] M. Nelson, B.H. Lim, G. Hutchins, Fast transparent migration for virtual machines, in: Proceedings of the USENIX Annual Technical Conference, 2005.
[21] C. Kiddle, GeoChronos: a platform for earth observation scientists, OpenGridForum 28 (3) (2010).
[22] V. Makhija, B. Herndon, P. Smith, L. Roderick, E. Zamost, J. Anderson, VMmark: a scalable benchmark for virtualized systems, Technical Report TR, VMware.
[23] A. Kansal, F. Zhao, J. Liu, N. Kothari, Virtual machine power metering and provisioning, in: Proceedings of the 1st ACM Symposium on Cloud Computing, 2010.
[24] V.V. Vazirani, Approximation Algorithms, Springer.
[25] G. Dósa, The tight bound of first fit decreasing bin-packing algorithm is FFD(I) ≤ (11/9)OPT(I) + 6/9, Combinatorics, Algorithms, Probabilistic and Experimental Methodologies 4614 (2007).
[26] The British Broadcasting Corporation (BBC), Scientists break world record for data transfer speeds, BBC News Technology.
[27] R. Summerhill, The new Internet2 network, in: 6th GLIF Meeting.
[28] J.S. Turner, A proposed architecture for the GENI backbone platform, in: Proceedings of the ACM/IEEE Symposium ANCS.
[29] V. Valancius, N. Laoutaris, L. Massoulié, C. Diot, P. Rodriguez, Greening the internet with nano data centers, in: Proceedings of CoNEXT '09, 2009.
[30] K. Church, A. Greenberg, J. Hamilton, On delivering embarrassingly distributed cloud services, in: Proceedings of HotNets-VII, 2008.

Kim Khoa Nguyen is a Research Fellow at the Automation Engineering Department of the École de Technologie Supérieure (University of Quebec). He is a key architect of the GreenStar Network project and a member of the Synchromedia Consortium. Since 2008 he has been with the Optimization of Communication Networks Research Laboratory at Concordia University. His research includes green ICT, cloud computing, smart grid, router architectures and wireless networks. He holds a PhD in Electrical and Computer Engineering from Concordia University.

Mohamed Cheriet is a Full Professor in the Automation Engineering Department at the École de Technologie Supérieure (University of Quebec). He is co-founder of the Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), and founder and director of the Synchromedia Consortium. Dr. Cheriet serves on the editorial boards of Pattern Recognition (PR), the International Journal of Document Analysis and Recognition (IJDAR), and the International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI). He is a member of IAPR, a senior member of the IEEE and the Founder and Premier Chair of the IEEE Montreal Chapter of Computational Intelligent Systems (CIS).

Mathieu Lemay holds a degree in electrical engineering from the École de Technologie Supérieure (2005) and a master's in optical networks (2007). He is currently a Ph.D. candidate at the Synchromedia Consortium of ETS. He is the Founder, President and CEO of Inocybe Technologies Inc. He is currently involved in Green IT and is leading the IaaS Framework Open Source initiative. His main research themes are virtualization, network segmentation, service-oriented architectures and distributed systems.

Victor Reijs: After studying at the University of Twente in the Netherlands, Victor Reijs worked for KPN Telecom Research and SURFnet. He was involved in CLNS/TUBA (an earlier alternative to IPv6) and gained experience with X.25 and ATM in national and international environments. His last activity at SURFnet was the tender for SURFnet5 (a step towards optical networking). He is currently managing the Network Development team of HEAnet and is actively involved in international activities such as GN3, FEDERICA and Mantychore (IP Networks as a Service), as well as (optical) networking, point-to-point links, virtualization and monitoring in general.

Andrew Mackarel has been a Project Manager in HEAnet's NetDev Group for the past 5 years. His work concerns the planning and deployment of large networking projects and the introduction of new services such as Bandwidth on Demand. Previously, Andrew worked for Bell Labs and Lucent Technologies as Test Manager in their Optical Networking Group. He has worked as a Principal Test Engineer with Lucent, Ascend Communications and Stratus Computers. He has over 30 years of work experience in the Telecommunications and Computing Industries.

Alin Pastrama has been working for NORDUnet as a NOC Engineer since 2010 and has been involved in a number of research and deployment projects, both internal and external, such as Mantychore FP7 and GN3 Bandwidth-on-Demand. He has an M.Sc. degree in Communication Systems from the Royal Institute of Technology (KTH) in Sweden.


Heterogeneous Workload Consolidation for Efficient Management of Data Centers in Cloud Computing Heterogeneous Workload Consolidation for Efficient Management of Data Centers in Cloud Computing Deep Mann ME (Software Engineering) Computer Science and Engineering Department Thapar University Patiala-147004

More information

DC4Cities: Incresing Renewable Energy Utilization of Data Centre in a smart cities. Page 1

DC4Cities: Incresing Renewable Energy Utilization of Data Centre in a smart cities. Page 1 DC4Cities: Incresing Renewable Energy Utilization of Data Centre in a smart cities M A RTA CHINNICI R E S E A R C H E R E N E A Page 1 DC4Cities Origins Green Service Level Agreements To Adapt the Renewable

More information

MANTICORE II: IP Network as a Service Pilots at HEAnet, NORDUnet and RedIRIS

MANTICORE II: IP Network as a Service Pilots at HEAnet, NORDUnet and RedIRIS MANTICORE II: IP Network as a Service Pilots at HEAnet, NORDUnet and RedIRIS Eduard Grasa 1, Sergi Figuerola 1, Xavier Barrera 1, Laia Ferrao 1, Carlos Baez 1, Pau Minoves 1, Xavier Hesselbach 2, Juan

More information

Environmental and Green Cloud Computing

Environmental and Green Cloud Computing International Journal of Allied Practice, Research and Review Website: www.ijaprr.com (ISSN 2350-1294) Environmental and Green Cloud Computing Aruna Singh and Dr. Sanjay Pachauri Abstract - Cloud computing

More information

Energy Conscious Virtual Machine Migration by Job Shop Scheduling Algorithm

Energy Conscious Virtual Machine Migration by Job Shop Scheduling Algorithm Energy Conscious Virtual Machine Migration by Job Shop Scheduling Algorithm Shanthipriya.M 1, S.T.Munusamy 2 ProfSrinivasan. R 3 M.Tech (IT) Student, Department of IT, PSV College of Engg & Tech, Krishnagiri,

More information

Energy Constrained Resource Scheduling for Cloud Environment

Energy Constrained Resource Scheduling for Cloud Environment Energy Constrained Resource Scheduling for Cloud Environment 1 R.Selvi, 2 S.Russia, 3 V.K.Anitha 1 2 nd Year M.E.(Software Engineering), 2 Assistant Professor Department of IT KSR Institute for Engineering

More information

Leveraging Renewable Energy in Data Centers: Present and Future

Leveraging Renewable Energy in Data Centers: Present and Future Leveraging Renewable Energy in Data Centers: Present and Future Ricardo Bianchini Department of Computer Science Collaborators: Josep L. Berral, Inigo Goiri, Jordi Guitart, Md. Haque, William Katsak, Kien

More information

IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures

IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Introduction

More information

GN1 (GÉANT) Deliverable D13.2

GN1 (GÉANT) Deliverable D13.2 Contract Number: IST-2000-26417 Project Title: GN1 (GÉANT) Deliverable 13.2 Technology Roadmap for Year 3 Contractual Date: 31 July 2002 Actual Date: 16 August 2002 Work Package: WP8 Nature of Deliverable:

More information

Network Virtualization for Large-Scale Data Centers

Network Virtualization for Large-Scale Data Centers Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning

More information

Dynamic Resource management with VM layer and Resource prediction algorithms in Cloud Architecture

Dynamic Resource management with VM layer and Resource prediction algorithms in Cloud Architecture Dynamic Resource management with VM layer and Resource prediction algorithms in Cloud Architecture 1 Shaik Fayaz, 2 Dr.V.N.Srinivasu, 3 Tata Venkateswarlu #1 M.Tech (CSE) from P.N.C & Vijai Institute of

More information

2. Research and Development on the Autonomic Operation. Control Infrastructure Technologies in the Cloud Computing Environment

2. Research and Development on the Autonomic Operation. Control Infrastructure Technologies in the Cloud Computing Environment R&D supporting future cloud computing infrastructure technologies Research and Development on Autonomic Operation Control Infrastructure Technologies in the Cloud Computing Environment DEMPO Hiroshi, KAMI

More information

White Paper. Juniper Networks. Enabling Businesses to Deploy Virtualized Data Center Environments. Copyright 2013, Juniper Networks, Inc.

White Paper. Juniper Networks. Enabling Businesses to Deploy Virtualized Data Center Environments. Copyright 2013, Juniper Networks, Inc. White Paper Juniper Networks Solutions for VMware NSX Enabling Businesses to Deploy Virtualized Data Center Environments Copyright 2013, Juniper Networks, Inc. 1 Table of Contents Executive Summary...3

More information

How to Maximize Data Center Efficiencies with Rack Level Power Distribution and Monitoring

How to Maximize Data Center Efficiencies with Rack Level Power Distribution and Monitoring How to Maximize Data Center Efficiencies with Rack Level Power Distribution and Monitoring Brian Rehnke e-pdu Business Development Manager November, 2011 The Problems Energy Usage in most data centers

More information

When Does Colocation Become Competitive With The Public Cloud? WHITE PAPER SEPTEMBER 2014

When Does Colocation Become Competitive With The Public Cloud? WHITE PAPER SEPTEMBER 2014 When Does Colocation Become Competitive With The Public Cloud? WHITE PAPER SEPTEMBER 2014 Table of Contents Executive Summary... 2 Case Study: Amazon Ec2 Vs In-House Private Cloud... 3 Aim... 3 Participants...

More information

When Does Colocation Become Competitive With The Public Cloud?

When Does Colocation Become Competitive With The Public Cloud? When Does Colocation Become Competitive With The Public Cloud? PLEXXI WHITE PAPER Affinity Networking for Data Centers and Clouds Table of Contents EXECUTIVE SUMMARY... 2 CASE STUDY: AMAZON EC2 vs IN-HOUSE

More information

A Dynamic Resource Management with Energy Saving Mechanism for Supporting Cloud Computing

A Dynamic Resource Management with Energy Saving Mechanism for Supporting Cloud Computing A Dynamic Resource Management with Energy Saving Mechanism for Supporting Cloud Computing Liang-Teh Lee, Kang-Yuan Liu, Hui-Yang Huang and Chia-Ying Tseng Department of Computer Science and Engineering,

More information

Setting deadlines and priorities to the tasks to improve energy efficiency in cloud computing

Setting deadlines and priorities to the tasks to improve energy efficiency in cloud computing Setting deadlines and priorities to the tasks to improve energy efficiency in cloud computing Problem description Cloud computing is a technology used more and more every day, requiring an important amount

More information

Brain of the Virtualized Data Center

Brain of the Virtualized Data Center Brain of the Virtualized Data Center Contents 1 Challenges of Server Virtualization... 3 1.1 The virtual network breaks traditional network boundaries... 3 1.2 The live migration function of VMs requires

More information

A Case Study about Green Cloud Computing: An Attempt towards Green Planet

A Case Study about Green Cloud Computing: An Attempt towards Green Planet A Case Study about Green Cloud Computing: An Attempt towards Green Planet Devinder Kaur Padam 1 Analyst, HCL Technologies Infra Structure Department, Sec-126 Noida, India 1 ABSTRACT: Cloud computing is

More information

CA Cloud Overview Benefits of the Hyper-V Cloud

CA Cloud Overview Benefits of the Hyper-V Cloud Benefits of the Hyper-V Cloud For more information, please contact: Email: sales@canadianwebhosting.com Ph: 888-821-7888 Canadian Web Hosting (www.canadianwebhosting.com) is an independent company, hereinafter

More information

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c

More information

Last time. Data Center as a Computer. Today. Data Center Construction (and management)

Last time. Data Center as a Computer. Today. Data Center Construction (and management) Last time Data Center Construction (and management) Johan Tordsson Department of Computing Science 1. Common (Web) application architectures N-tier applications Load Balancers Application Servers Databases

More information

Network Performance Channel

Network Performance Channel Network Performance Channel Net Optics Products Overview MIHAJLO PRERAD, Network Performance Channel GmbH Who we are Network Performance Channel GmbH Leading global value added distributor specialized

More information

An Active Packet can be classified as

An Active Packet can be classified as Mobile Agents for Active Network Management By Rumeel Kazi and Patricia Morreale Stevens Institute of Technology Contact: rkazi,pat@ati.stevens-tech.edu Abstract-Traditionally, network management systems

More information

Power & Environmental Monitoring

Power & Environmental Monitoring Data Centre Monitoring Made Easy Power & Environmental Monitoring Features & Benefits Packet Power provides the easiest, most cost effective way to capture detailed power and temperature information for

More information

Optimizing Data Center Networks for Cloud Computing

Optimizing Data Center Networks for Cloud Computing PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,

More information

Elastic Management of Cluster based Services in the Cloud

Elastic Management of Cluster based Services in the Cloud First Workshop on Automated Control for Datacenters and Clouds (ACDC09) June 19th, Barcelona, Spain Elastic Management of Cluster based Services in the Cloud Rafael Moreno Vozmediano, Ruben S. Montero,

More information

Lightpath Planning and Monitoring

Lightpath Planning and Monitoring Lightpath Planning and Monitoring Ronald van der Pol 1, Andree Toonk 2 1 SARA, Kruislaan 415, Amsterdam, 1098 SJ, The Netherlands Tel: +31205928000, Fax: +31206683167, Email: rvdp@sara.nl 2 SARA, Kruislaan

More information

Increase Simplicity and Improve Reliability with VPLS on the MX Series Routers

Increase Simplicity and Improve Reliability with VPLS on the MX Series Routers SOLUTION BRIEF Enterprise Data Center Interconnectivity Increase Simplicity and Improve Reliability with VPLS on the Routers Challenge As enterprises improve business continuity by enabling resource allocation

More information

Virtualizing the SAN with Software Defined Storage Networks

Virtualizing the SAN with Software Defined Storage Networks Software Defined Storage Networks Virtualizing the SAN with Software Defined Storage Networks Introduction Data Center architects continue to face many challenges as they respond to increasing demands

More information

Software-Defined Networks Powered by VellOS

Software-Defined Networks Powered by VellOS WHITE PAPER Software-Defined Networks Powered by VellOS Agile, Flexible Networking for Distributed Applications Vello s SDN enables a low-latency, programmable solution resulting in a faster and more flexible

More information

OVERLAYING VIRTUALIZED LAYER 2 NETWORKS OVER LAYER 3 NETWORKS

OVERLAYING VIRTUALIZED LAYER 2 NETWORKS OVER LAYER 3 NETWORKS OVERLAYING VIRTUALIZED LAYER 2 NETWORKS OVER LAYER 3 NETWORKS Matt Eclavea (meclavea@brocade.com) Senior Solutions Architect, Brocade Communications Inc. Jim Allen (jallen@llnw.com) Senior Architect, Limelight

More information

Unified Computing Systems

Unified Computing Systems Unified Computing Systems Cisco Unified Computing Systems simplify your data center architecture; reduce the number of devices to purchase, deploy, and maintain; and improve speed and agility. Cisco Unified

More information

ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center

ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center ALCATEL-LUCENT ENTERPRISE DATA CENTER SWITCHING SOLUTION Automation for the next-generation data center A NEW NETWORK PARADIGM What do the following trends have in common? Virtualization Real-time applications

More information

Private cloud computing advances

Private cloud computing advances Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud

More information

Software Define Storage (SDs) and its application to an Openstack Software Defined Infrastructure (SDi) implementation

Software Define Storage (SDs) and its application to an Openstack Software Defined Infrastructure (SDi) implementation Software Define Storage (SDs) and its application to an Openstack Software Defined Infrastructure (SDi) implementation This paper discusses how data centers, offering a cloud computing service, can deal

More information

Demonstrating the high performance and feature richness of the compact MX Series

Demonstrating the high performance and feature richness of the compact MX Series WHITE PAPER Midrange MX Series 3D Universal Edge Routers Evaluation Report Demonstrating the high performance and feature richness of the compact MX Series Copyright 2011, Juniper Networks, Inc. 1 Table

More information

Coordinated and Optimized Control of Distributed Generation Integration

Coordinated and Optimized Control of Distributed Generation Integration Coordinated and Optimized Control of Distributed Generation Integration Hui Yu & Wenpeng Luan China Electric Power Research Institute July, 2015 1 Background and introduction 2 Coordinated control of distributed

More information

How Router Technology Shapes Inter-Cloud Computing Service Architecture for The Future Internet

How Router Technology Shapes Inter-Cloud Computing Service Architecture for The Future Internet How Router Technology Shapes Inter-Cloud Computing Service Architecture for The Future Internet Professor Jiann-Liang Chen Friday, September 23, 2011 Wireless Networks and Evolutional Communications Laboratory

More information

FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology. August 2011

FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology. August 2011 FlexNetwork Architecture Delivers Higher Speed, Lower Downtime With HP IRF Technology August 2011 Page2 Executive Summary HP commissioned Network Test to assess the performance of Intelligent Resilient

More information

ADAPTIVE LOAD BALANCING ALGORITHM USING MODIFIED RESOURCE ALLOCATION STRATEGIES ON INFRASTRUCTURE AS A SERVICE CLOUD SYSTEMS

ADAPTIVE LOAD BALANCING ALGORITHM USING MODIFIED RESOURCE ALLOCATION STRATEGIES ON INFRASTRUCTURE AS A SERVICE CLOUD SYSTEMS ADAPTIVE LOAD BALANCING ALGORITHM USING MODIFIED RESOURCE ALLOCATION STRATEGIES ON INFRASTRUCTURE AS A SERVICE CLOUD SYSTEMS Lavanya M., Sahana V., Swathi Rekha K. and Vaithiyanathan V. School of Computing,

More information

Designing Applications with Distributed Databases in a Hybrid Cloud

Designing Applications with Distributed Databases in a Hybrid Cloud Designing Applications with Distributed Databases in a Hybrid Cloud Evgeniy Pluzhnik 1, Oleg Lukyanchikov 2, Evgeny Nikulchev 1 & Simon Payain 1 1 Moscow Technological Institute, Moscow, 119334, Russia,

More information

M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2.

M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. What are the different types of virtualization? Explain

More information

Operational Solution for Service Providers deploying Metro Ethernet, Carrier Ethernet or Wireless Ethernet Backhaul Applications.

Operational Solution for Service Providers deploying Metro Ethernet, Carrier Ethernet or Wireless Ethernet Backhaul Applications. Operational Solution for Service Providers deploying Metro Ethernet, Carrier Ethernet or Wireless Ethernet Backhaul Applications. Network Power Management White Paper White Paper STI-100-010 November 2013

More information

diversifeye Application Note

diversifeye Application Note diversifeye Application Note Test Performance of IGMP based Multicast Services with emulated IPTV STBs Shenick Network Systems Test Performance of IGMP based Multicast Services with emulated IPTV STBs

More information

FNT EXPERT PAPER. // From Cable to Service AUTOR. Data Center Infrastructure Management (DCIM) www.fntsoftware.com

FNT EXPERT PAPER. // From Cable to Service AUTOR. Data Center Infrastructure Management (DCIM) www.fntsoftware.com FNT EXPERT PAPER AUTOR Oliver Lindner Head of Business Line DCIM FNT GmbH // From Cable to Service Data Center Infrastructure Management (DCIM) Data center infrastructure management (DCIM), as understood

More information

Green Cloud Computing: Balancing and Minimization of Energy Consumption

Green Cloud Computing: Balancing and Minimization of Energy Consumption Green Cloud Computing: Balancing and Minimization of Energy Consumption Ms. Amruta V. Tayade ASM INSTITUTE OF MANAGEMENT & COMPUTER STUDIES (IMCOST), THANE, MUMBAI. University Of Mumbai Mr. Surendra V.

More information

Network Virtualization: A Tutorial

Network Virtualization: A Tutorial Network Virtualization: A Tutorial George N. Rouskas Department of Computer Science North Carolina State University http://rouskas.csc.ncsu.edu/ Network Virtualization: A Tutorial OFC 2012, March 2012

More information

DATA CENTER PHYSICAL INFRASTRUCTURE

DATA CENTER PHYSICAL INFRASTRUCTURE Is your datacenter running with maximum performance? Datacenter managers have been challenged to maintain or increase availability, maximize efficiency, and optimize the use of existing capacity. However,

More information

A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems

A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems Anton Beloglazov, Rajkumar Buyya, Young Choon Lee, and Albert Zomaya Present by Leping Wang 1/25/2012 Outline Background

More information

Software Defined Network Application in Hospital

Software Defined Network Application in Hospital InImpact: The Journal of Innovation Impact: ISSN 2051-6002 : http://www.inimpact.org Special Edition on Innovation in Medicine and Healthcare : Vol. 6. No. 1 : pp.1-11 : imed13-011 Software Defined Network

More information

Agenda. NRENs, GARR and GEANT in a nutshell SDN Activities Conclusion. Mauro Campanella Internet Festival, Pisa 9 Oct 2015 2

Agenda. NRENs, GARR and GEANT in a nutshell SDN Activities Conclusion. Mauro Campanella Internet Festival, Pisa 9 Oct 2015 2 Agenda NRENs, GARR and GEANT in a nutshell SDN Activities Conclusion 2 3 The Campus-NREN-GÉANT ecosystem CAMPUS networks NRENs GÉANT backbone. GÉANT Optical + switching platforms Multi-Domain environment

More information

AN IMPLEMENTATION OF E- LEARNING SYSTEM IN PRIVATE CLOUD

AN IMPLEMENTATION OF E- LEARNING SYSTEM IN PRIVATE CLOUD AN IMPLEMENTATION OF E- LEARNING SYSTEM IN PRIVATE CLOUD M. Lawanya Shri 1, Dr. S. Subha 2 1 Assistant Professor,School of Information Technology and Engineering, Vellore Institute of Technology, Vellore-632014

More information

Benchmarking the Performance of XenDesktop Virtual DeskTop Infrastructure (VDI) Platform

Benchmarking the Performance of XenDesktop Virtual DeskTop Infrastructure (VDI) Platform Benchmarking the Performance of XenDesktop Virtual DeskTop Infrastructure (VDI) Platform Shie-Yuan Wang Department of Computer Science National Chiao Tung University, Taiwan Email: shieyuan@cs.nctu.edu.tw

More information

Energy Management Solutions

Energy Management Solutions Energy Management Solutions Introducing the world s most innovative intelligent Power Distribution Units (ipdus) designed to simplify rack energy and environmental management of your data center. Enlogic

More information

Layer 3 Network + Dedicated Internet Connectivity

Layer 3 Network + Dedicated Internet Connectivity Layer 3 Network + Dedicated Internet Connectivity Client: One of the IT Departments in a Northern State Customer's requirement: The customer wanted to establish CAN connectivity (Campus Area Network) for

More information

Brocade One Data Center Cloud-Optimized Networks

Brocade One Data Center Cloud-Optimized Networks POSITION PAPER Brocade One Data Center Cloud-Optimized Networks Brocade s vision, captured in the Brocade One strategy, is a smooth transition to a world where information and applications reside anywhere

More information

OpenNebula Leading Innovation in Cloud Computing Management

OpenNebula Leading Innovation in Cloud Computing Management OW2 Annual Conference 2010 Paris, November 24th, 2010 OpenNebula Leading Innovation in Cloud Computing Management Ignacio M. Llorente DSA-Research.org Distributed Systems Architecture Research Group Universidad

More information

Scalable. Reliable. Flexible. High Performance Architecture. Fault Tolerant System Design. Expansion Options for Unique Business Needs

Scalable. Reliable. Flexible. High Performance Architecture. Fault Tolerant System Design. Expansion Options for Unique Business Needs Protecting the Data That Drives Business SecureSphere Appliances Scalable. Reliable. Flexible. Imperva SecureSphere appliances provide superior performance and resiliency for demanding network environments.

More information

Enhance Service Delivery and Accelerate Financial Applications with Consolidated Market Data

Enhance Service Delivery and Accelerate Financial Applications with Consolidated Market Data White Paper Enhance Service Delivery and Accelerate Financial Applications with Consolidated Market Data What You Will Learn Financial market technology is advancing at a rapid pace. The integration of

More information

software networking Jithesh TJ, Santhosh Karipur QuEST Global

software networking Jithesh TJ, Santhosh Karipur QuEST Global software defined networking Software Defined Networking is an emerging trend in the networking and communication industry and it promises to deliver enormous benefits, from reduced costs to more efficient

More information

DDS-Enabled Cloud Management Support for Fast Task Offloading

DDS-Enabled Cloud Management Support for Fast Task Offloading DDS-Enabled Cloud Management Support for Fast Task Offloading IEEE ISCC 2012, Cappadocia Turkey Antonio Corradi 1 Luca Foschini 1 Javier Povedano-Molina 2 Juan M. Lopez-Soler 2 1 Dipartimento di Elettronica,

More information

GEANT Green Best Practices

GEANT Green Best Practices GEANT Green Best Practices Andrew Mackarel NORDUNET Conference Sept 2012 Oslo GEANT Studies! Leader of NAT3-T5 GEANT Environmental team comprised of individuals who are interested in Environmental aspects

More information

Data Center Networking Designing Today s Data Center

Data Center Networking Designing Today s Data Center Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability

More information

Analysis of Network Segmentation Techniques in Cloud Data Centers

Analysis of Network Segmentation Techniques in Cloud Data Centers 64 Int'l Conf. Grid & Cloud Computing and Applications GCA'15 Analysis of Network Segmentation Techniques in Cloud Data Centers Ramaswamy Chandramouli Computer Security Division, Information Technology

More information

International Journal of Advance Research in Computer Science and Management Studies

International Journal of Advance Research in Computer Science and Management Studies Volume 3, Issue 6, June 2015 ISSN: 2321 7782 (Online) International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online

More information

DC Power Solutions Overview. DC Power Solutions for EVERY Network Application.

DC Power Solutions Overview. DC Power Solutions for EVERY Network Application. DC Power Solutions Overview DC Power Solutions for EVERY Network Application. DC Power Solutions for Every Network Application Eaton recognizes the challenges that are unique to different customers applications.

More information

Building a Converged Infrastructure with Self-Service Automation

Building a Converged Infrastructure with Self-Service Automation Building a Converged Infrastructure with Self-Service Automation Private, Community, and Enterprise Cloud Scenarios Prepared for: 2012 Neovise, LLC. All Rights Reserved. Case Study Report Introduction:

More information

A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network

A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network ASEE 2014 Zone I Conference, April 3-5, 2014, University of Bridgeport, Bridgpeort, CT, USA A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network Mohammad Naimur Rahman

More information

Traffic Monitoring in a Switched Environment

Traffic Monitoring in a Switched Environment Traffic Monitoring in a Switched Environment InMon Corp. 1404 Irving St., San Francisco, CA 94122 www.inmon.com 1. SUMMARY This document provides a brief overview of some of the issues involved in monitoring

More information

ConnectX -3 Pro: Solving the NVGRE Performance Challenge

ConnectX -3 Pro: Solving the NVGRE Performance Challenge WHITE PAPER October 2013 ConnectX -3 Pro: Solving the NVGRE Performance Challenge Objective...1 Background: The Need for Virtualized Overlay Networks...1 NVGRE Technology...2 NVGRE s Hidden Challenge...3

More information

Comprehensive Data Center Energy Management Solutions

Comprehensive Data Center Energy Management Solutions FROM INSIGHT TO EFFICIENCY: Comprehensive Data Center Energy Management Solutions Since 1995, facility managers and BAS professionals have relied on the Niagara framework to provide full convergence of

More information

Xperience of Programmable Network with OpenFlow

Xperience of Programmable Network with OpenFlow International Journal of Computer Theory and Engineering, Vol. 5, No. 2, April 2013 Xperience of Programmable Network with OpenFlow Hasnat Ahmed, Irshad, Muhammad Asif Razzaq, and Adeel Baig each one is

More information

Silver Peak s Virtual Acceleration Open Architecture (VXOA)

Silver Peak s Virtual Acceleration Open Architecture (VXOA) Silver Peak s Virtual Acceleration Open Architecture (VXOA) A FOUNDATION FOR UNIVERSAL WAN OPTIMIZATION The major IT initiatives of today data center consolidation, cloud computing, unified communications,

More information

An Experimental Study of Load Balancing of OpenNebula Open-Source Cloud Computing Platform

An Experimental Study of Load Balancing of OpenNebula Open-Source Cloud Computing Platform An Experimental Study of Load Balancing of OpenNebula Open-Source Cloud Computing Platform A B M Moniruzzaman 1, Kawser Wazed Nafi 2, Prof. Syed Akhter Hossain 1 and Prof. M. M. A. Hashem 1 Department

More information

Flauncher and DVMS Deploying and Scheduling Thousands of Virtual Machines on Hundreds of Nodes Distributed Geographically

Flauncher and DVMS Deploying and Scheduling Thousands of Virtual Machines on Hundreds of Nodes Distributed Geographically Flauncher and Deploying and Scheduling Thousands of Virtual Machines on Hundreds of Nodes Distributed Geographically Daniel Balouek, Adrien Lèbre, Flavien Quesnel To cite this version: Daniel Balouek,

More information

Value stacking with Data Center Infrastructure Management (DCIM) - driving energy and capacity efficiency -

Value stacking with Data Center Infrastructure Management (DCIM) - driving energy and capacity efficiency - Press Frimley, UK 13 April 2016 PRESS RELEASE For immediate Release Value stacking with Data Center Infrastructure Management (DCIM) - driving energy and capacity efficiency - Organisations rely heavily

More information

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 Network Virtualization Overview... 1 Network Virtualization Key Requirements to be validated...

More information

OpenNaaS: an European Open Source framework for the delivery of NaaS An enabler for SDN and NFV www.opennaas.org

OpenNaaS: an European Open Source framework for the delivery of NaaS An enabler for SDN and NFV www.opennaas.org for the delivery of NaaS An enabler for SDN and NFV www.opennaas.org Sergi Figuerola, Technology and Innovation Director (sergi.figuerola@i2cat.net) GLIF 13 th Meeting, Singapoure, October 4 th,2013 RINA

More information

HP Virtualization Performance Viewer

HP Virtualization Performance Viewer HP Virtualization Performance Viewer Efficiently detect and troubleshoot performance issues in virtualized environments Jean-François Muller - Principal Technical Consultant - jeff.muller@hp.com HP Business

More information