A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network




ASEE 2014 Zone I Conference, April 3-5, 2014, University of Bridgeport, Bridgeport, CT, USA

A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network

Mohammad Naimur Rahman, Department of Electrical and Computer Engineering, University of New Haven, West Haven, CT, USA, MRahm1@unh.newhaven.edu
Dr. Amir Esmailpour, Department of Electrical and Computer Engineering, University of New Haven, West Haven, CT, USA, aesmailpour@newhaven.edu

Abstract -- Data centers are experiencing a rapid increase in network traffic and in the number of servers per rack due to the exponential growth of data, while at the same time they must cope with the demands of cloud computing and emerging web applications. Compatibility must also be considered when several data centers are interconnected. Present data center networks, based on electrical packet switching, consume considerable power and suffer from bottlenecks. Advances in optical fiber technology have made it possible to use optical circuit switches in data centers, offering a promising solution with high bandwidth, low latency, and far lower power consumption. In this paper we present a possible bottom-to-top electrical and optical packet switch architecture for data centers and an interconnectivity solution between them. Furthermore, we propose two network interconnects: an optical circuit switch (OCS) that handles long-lived bulk data transfers over the optical network, and a secondary, lower-bandwidth electronic packet switch (EPS) network that handles short data transfers.

Keywords -- Big Data, Data Center Network, Electronic Packet Switch, Optical Circuit Switch.

I. INTRODUCTION

Over the last few years, data center networking topology has improved significantly. Thanks to the commercial development of high-speed switching devices and the multilayer architecture of data centers, we now have more powerful data centers with low latency, scalability, and higher bandwidth.
However, due to the exponential increase of Internet traffic and emerging web applications, data center performance still falls short of requirements. Numerous data-sensitive applications running on the servers require low-latency communication with one another and must also meet their bandwidth requirements [1]. At the same time, system compatibility must be considered when interconnecting data centers. Interconnected data centers may require long-lived bulk data transfers, which are sometimes not possible over a traditional 10 Gbps Ethernet electronic packet switch link between the servers and the access/top-of-rack (ToR) switch. An electronic packet switch network must sustain this increased traffic while maintaining communication with the servers as well as with other data centers. Furthermore, as more processing cores are integrated into a single communication chip and the number of servers grows with the rapid increase of data, depending on an electronic packet switch between the servers and the access/ToR switch is inefficient and increases power consumption. To meet the growing bandwidth and power demands of data centers, a new connection scheme must be developed that provides high throughput, low latency, and low power consumption. An optimal solution would therefore be an optical fiber link between the servers and the access/ToR switch [2].

Figure 1 shows a block diagram of a typical data center with a four-layer hierarchical network architecture. From bottom to top: the first layer consists of the servers, which connect upward to the access layer. The second layer is the access layer switch, often called the ToR switch, to which many servers are connected. The third layer is the aggregation layer, which connects to the access layer switches below it. The fourth and highest layer is the core layer, where the core switches connect to routers above and to the aggregation layer below, forming a data center block.
When a user generates a request, it is forwarded through the Internet to the top layer of the data center. The core switches route the ingress traffic to the appropriate server. A request may require communication with other servers; for example, a web request may require communication between application and database servers [3]. The main advantage of this architecture is that it scales easily and offers good fault tolerance and quick failover. Its main drawback, however, is high power consumption due to the multiple layers of switches, and the latency introduced by multiple store-and-forward processing stages [4].
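The hop count in this hierarchy can be made concrete with a minimal sketch: traffic climbs only as high as the lowest layer the two servers share, so cross-pod traffic pays the most store-and-forward stages. The switch names and placements below are illustrative assumptions, not taken from the paper.

```python
# Minimal model of the four-layer hierarchy: each server is placed
# under an access (ToR), aggregation, and core switch. The names and
# placements are illustrative only.
PLACEMENT = {
    "server1": ("access1", "agg1", "core1"),
    "server2": ("access1", "agg1", "core1"),   # same rack as server1
    "server3": ("access2", "agg1", "core1"),   # same pod, other rack
    "server4": ("access4", "agg2", "core1"),   # different pod
}

def switch_path(src: str, dst: str) -> list:
    """Switches traversed between two servers: traffic climbs only as
    high as the lowest layer the two servers have in common."""
    a, b = PLACEMENT[src], PLACEMENT[dst]
    if a[0] == b[0]:                           # same ToR switch
        return [a[0]]
    if a[1] == b[1]:                           # same aggregation switch
        return [a[0], a[1], b[0]]
    return [a[0], a[1], a[2], b[1], b[0]]      # up to the core and back

print(switch_path("server1", "server2"))       # ['access1']
print(switch_path("server1", "server4"))       # 5 store-and-forward hops
```

Each entry in the returned path is one store-and-forward stage, which is exactly where the latency and power costs noted above accumulate.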

Fig. 1. Architecture of a present data center

II. BACKGROUND AND MOTIVATION

Because the amount and size of traffic are increasing exponentially, the architecture of traditional data centers is no longer sufficient to handle the traffic and meet future challenges. The cluster-server architecture of the data center uses Ethernet connections at 10 Gbps per link between the server and the access switch or top-of-rack (ToR) switch, while the upper-layer switches have both optical and electrical links. As the number of clients increases rapidly and web applications proliferate, the traffic between servers is growing dramatically; in the new architecture, servers spend most of their time communicating with each other rather than serving clients. Recent studies suggest that optimal performance is achieved when servers spend 80% of their activity within their local segment [5]. In addition, in order to maintain system efficiency and provide support, large organizations are often required to interconnect data centers located at remote sites. These considerations are the basis of the motivation for the top-down optical-electrical link topological data center architecture proposed for Big Data in this study.

III. RELATED WORK

Recently, a few optical interconnection schemes have been proposed for data center networks; this section reviews them with a general insight into their architectures. An electrical-optical hybrid network called c-Through has been proposed by G. Wang et al. as an enhancement of current data center networks [10]. As Figure 2 shows, the access/ToR switches are connected both to an electrical packet-based network (i.e., Ethernet) and to an optical circuit-based network. A traffic monitoring system is required to determine the configuration of the optical switch. This monitoring system is placed in each host and measures the bandwidth requirements to the other hosts. An optical configuration manager collects this information and determines the configuration of the optical switch based on the traffic demand. After configuring the optical switch, it informs all the access/ToR switches to route packets accordingly. Edmonds' algorithm is used to compute the configuration [11].

Fig. 2. Architecture of the c-Through network

Fig. 3. Architecture of the Helios data center network: electrical/optical switch
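The circuit-assignment step in c-Through can be sketched as a maximum-weight matching: racks are vertices, measured traffic demands are edge weights, and each rack can hold one optical circuit at a time. c-Through solves this with Edmonds' algorithm [11]; the brute-force search below is only a stand-in that gives the same answer on small inputs, and the rack names and traffic values are illustrative assumptions.

```python
from itertools import combinations

def max_weight_matching(demand):
    """Brute-force maximum-weight matching over rack pairs.

    `demand` maps frozenset({rack_a, rack_b}) -> measured traffic.
    c-Through computes this matching with Edmonds' algorithm; for a
    handful of racks, exhaustive search yields the same result.
    """
    edges = list(demand.items())
    best, best_weight = [], 0
    # Try every subset of edges; keep the heaviest vertex-disjoint one.
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            racks = [rack for pair, _ in subset for rack in pair]
            if len(racks) != len(set(racks)):
                continue  # two edges share a rack: not a matching
            weight = sum(t for _, t in subset)
            if weight > best_weight:
                best, best_weight = [p for p, _ in subset], weight
    return best, best_weight

circuits, total = max_weight_matching({
    frozenset({"rack1", "rack2"}): 900,   # heavy, long-lived transfer
    frozenset({"rack1", "rack3"}): 100,
    frozenset({"rack2", "rack4"}): 850,
    frozenset({"rack3", "rack4"}): 700,
})
# circuits pairs rack1<->rack2 and rack3<->rack4 (total weight 1600)
```

Note that the greedy choice (taking the 900 and 850 edges) is infeasible because both touch rack2's counterpart constraints; the matching formulation is what keeps every rack on at most one circuit.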

N. Farrington et al. from UCSD proposed a hybrid electrical/optical switch architecture called Helios [8]. Helios follows the architecture of typical two-layer data center networks, similar to c-Through, but it uses WDM links. Figure 3 illustrates the setup used in [8]: the access/ToR switches are defined as pod switches, and the core can be either an electrical packet switch or an optical circuit switch. The electrical packet switches provide all-to-all communication, while high-bandwidth, slowly changing communication between ToR switches is handled by the optical circuit switch. Helios thus tries to combine the best of optical and electrical networks.

IV. PROPOSED ARCHITECTURE

The architecture of current data centers suffers from challenges such as latency, bandwidth, energy consumption, and quick failover. Keeping future demand in mind, large organizations must interconnect data centers located at remote sites in order to maintain system efficiency. Hence we propose to connect the servers to the upper-layer switches with high availability and low latency. Connecting the servers directly to the upper layer using both electrical and optical links fulfills the requirements of the proposed scheme. Figure 4 depicts the architecture that we propose, which is an enhancement of the current data center network; blue lines indicate electrical connectivity and black dashed lines indicate optical connectivity. Due to emerging demand, servers now require high-bandwidth, low-latency communication. Several processes running in the data center produce data-intensive workloads, such as those generated by MapReduce and Hadoop, which are less latency-sensitive but require fast bulk transfers between servers. Optical connectivity consumes less power while providing greater bandwidth than electrical links.
In our architecture, we propose to connect the servers directly using both optical and electrical links, which meets this demand while also achieving better load balancing.

Fig. 4. Proposed architecture of the data center

V. DATA CENTER INTERCONNECTIVITY

Data center interconnection can be provided by several schemes, at layer 1, 2, or 3. The interconnection method can be selected based on the available transport options, such as dark fiber or high-bandwidth Ethernet. Considering the transport options and the layered architecture of the data center, [9] recommends interconnecting at the layer 2 aggregation layer. Figure 5 shows a data center interconnectivity solution in which each data center is connected at its aggregation layer over high-speed optical fiber.
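The split between the two fabrics in the proposed topology can be sketched as a per-flow selector: short transfers stay on the electronic packet switch (EPS), while long-lived bulk transfers and inter-data-center traffic are steered onto the optical circuit switch (OCS). The size threshold and the flow fields below are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

# Illustrative cutoff: transfers above this size are treated as
# "long-lived bulk" traffic (assumed value, not from the paper).
BULK_BYTES = 100 * 1024 * 1024  # 100 MiB

@dataclass
class Flow:
    src_server: str
    dst_server: str
    expected_bytes: int
    inter_dc: bool = False  # does the flow cross to another data center?

def select_fabric(flow: Flow) -> str:
    """Return which fabric carries the flow in the hybrid topology."""
    # Inter-data-center traffic needs large bandwidth and low latency,
    # so it rides the optical interconnect at the aggregation layer.
    if flow.inter_dc:
        return "OCS"
    # Long-lived bulk transfers (e.g. MapReduce/Hadoop shuffles) go
    # optical; short transfers stay on the electrical packet network.
    return "OCS" if flow.expected_bytes >= BULK_BYTES else "EPS"

print(select_fabric(Flow("s1", "s2", 4 * 1024)))       # EPS
print(select_fabric(Flow("s1", "s2", 2 * 1024 ** 3)))  # OCS
```

In a real deployment the classification would come from a traffic monitor, as in c-Through, rather than from a declared transfer size.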

Fig. 5. Data center interconnect architecture

A. How this scheme helps intra- and inter-data-center networking

Figures 4 and 5 show that each server has both optical and electrical connectivity. The electrical links are used mainly for communication between servers and for short data transfers, whereas the optical links are used for long bulk transfers. This is necessary when transferring data between data centers, which requires large bandwidth and low latency. At the same time, because of the layered architecture, servers can communicate with each other via the access layer without going up to the higher layers. Load balancing is achieved through the hybrid electrical and optical links.

Because of the direct optical connectivity, the main advantage lies in server virtualization. Virtual environments require greater bandwidth and higher performance with minimal overhead on the server, and the optical connectivity improves virtual Ethernet port aggregator switching. It thus becomes easier to migrate servers virtually, not only within a data center but also between data centers. The hybrid links significantly improve both intra- and inter-data-center networking scenarios.

B. Cost and Power Consumption

As optical switches are introduced at every layer, the load-sharing capability of the network increases and the power consumption of the data center is reduced. By reducing the number of electrical switches at each layer and replacing them with optical switches, the overall cost and power consumption associated with electrical switches are reduced [7].

VI. CONCLUSION AND FUTURE WORK

In this paper, a possible bottom-to-top electrical and optical architecture is presented. Optical connectivity offers advantages such as lower power consumption, higher bandwidth, and lower latency. Introducing both types of switches in each layer helps with load balancing, and it also helps when interconnecting data centers that require large bandwidth between servers.
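The power argument for replacing electrical ports with optical ones can be illustrated with a back-of-the-envelope estimate. The per-port wattages below are illustrative assumptions in the ballpark reported for hybrid designs such as Helios [8], not figures from this paper.

```python
# Illustrative per-port power draw; real numbers depend on the
# hardware (assumed values, not taken from the paper).
EPS_WATTS_PER_PORT = 12.5   # electronic packet switch port
OCS_WATTS_PER_PORT = 0.24   # optical circuit switch port

def fabric_power(ports: int, optical_fraction: float) -> float:
    """Total switch-port power when `optical_fraction` of the ports
    are moved from electrical to optical switching."""
    optical = int(ports * optical_fraction)
    electrical = ports - optical
    return electrical * EPS_WATTS_PER_PORT + optical * OCS_WATTS_PER_PORT

all_electrical = fabric_power(1024, 0.0)
hybrid = fabric_power(1024, 0.5)
print(f"{all_electrical:.0f} W -> {hybrid:.0f} W")  # 12800 W -> 6523 W
```

Even under these rough assumptions, shifting half the ports to optical circuit switching roughly halves the switch-port power, which is the effect the paragraph above relies on.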
Because of the higher bandwidth, migrating servers virtually, not only within a data center but also between data centers, becomes easier. A hybrid electrical and optical networking topology therefore improves the overall scenario for the data center and for the big data network. In ongoing work, we are measuring the performance of the optical circuit switch and electronic packet switch networks between the servers and the ToR switches. We are running simulations to compare the switching times of the optical and electrical links, and we are also measuring the fault tolerance and failover time of the optical network. Depending on the traffic characteristics and latency

requirements of the applications, we will monitor the time required in the packet switch and circuit switch networks for handling short and long bulk data transfers. Based on those analyses, we will introduce QoS (Quality of Service) inside the data center and discuss how this scenario works for a big data network.

REFERENCES
[1] G. Schulz, The Green and Virtual Data Center, 1st ed. Boston, MA, USA: Auerbach Publications, 2009.
[2] L. Schares, D. M. Kuchta, and A. F. Benner, "Optics in future data center networks," in Symposium on High-Performance Interconnects, 2010, pp. 104-108.
[3] Cisco Data Center Interconnect Design and Deployment Guide. Cisco Press, 2010.
[4] K. Chen, C. Hu, X. Zhang, K. Zheng, Y. Chen, and A. V. Vasilakos, "Survey on routing in data centers: insights and future directions," IEEE Network, vol. 25, no. 4, pp. 6-10, Jul. 2011.
[5] LAN Switching Limitations. Available: http://en.wikipedia.org/wiki/lan_switching.
[6] K. J. Barker, A. Benner, R. Hoare, and A. Hoisie, "On the Feasibility of Optical Circuit Switching for High Performance Computing Systems," in SuperComputing, November 2005.
[7] Y. Zhang, P. Chowdhury, M. Tornatore, and B. Mukherjee, "Energy Efficiency in Telecom Optical Networks," IEEE Communications Surveys and Tutorials, vol. 12, no. 4, pp. 441-458, 2010.
[8] N. Farrington, G. Porter, S. Radhakrishnan, H. H. Bazzaz, V. Subramanya, Y. Fainman, G. Papen, and A. Vahdat, "Helios: A Hybrid Electrical/Optical Switch Architecture for Modular Data Centers," University of California, San Diego, CA.
[9] Cisco Data Center Interconnect (White Paper C11-493718), Cisco Press.
[10] G. Wang, D. G. Andersen, M. Kaminsky, K. Papagiannaki, T. E. Ng, M. Kozuch, and M. Ryan, "c-Through: Part-time Optics in Data Centers," in Proceedings of the ACM SIGCOMM 2010 Conference, 2010, pp. 327-338.
[11] J. Edmonds, "Paths, trees, and flowers," Canadian Journal of Mathematics, vol. 17, pp. 449-467, 1965.