ASIC: An Architecture for Scalable Intra-domain Control in OpenFlow




Pingping Lin, Jun Bi, Hongyu Hu
Network Research Center, Department of Computer Science, Tsinghua University
Tsinghua National Laboratory for Information Science and Technology
4-204, FIT Building, Beijing, 100084, China
linpp@netarchlab.tsinghua.edu.cn, junbi@tsinghua.edu.cn, huhongyu@cernet.edu.cn

ABSTRACT
Currently, the architecture of network devices is closed, which is detrimental to network innovation. Software defined networking decouples this vertically coupled architecture and reconstructs the Internet as a modular structure, and the idea is now widely accepted. OpenFlow, a typical instance of software defined networking, has been deployed by many universities and research institutions all over the world. With the increasing scale of deployment, the poor scalability of the centralized control mode becomes more and more obvious. To solve this scalability problem, this paper adopts the idea of load balancing and proposes ASIC, an architecture for scalable intra-domain control in OpenFlow. ASIC balances all the data flow initialization requests across several physical controllers in an OpenFlow network, and those requests are then processed in parallel against a shared global network view. ASIC also builds a data cluster for the global network view. In this way, the scalability problem in the intra-domain control plane can be completely solved. At the end of this paper, an emulation shows the feasibility of ASIC.

Categories and Subject Descriptors
C.2.1 [Computer Communication Networks]: Network Architecture and Design - Centralized networks.

General Terms
Performance, Design, Experimentation.

Keywords
Software Defined Networking, OpenFlow, Scalability.

1. INTRODUCTION
Currently, the architecture of the network device is closed. This is unfavorable to network innovation, and new protocols for the core network layer or core network devices are hard to deploy.
Software defined networking (SDN) [1] decouples the vertically and tightly coupled architecture and reconstructs the Internet as a modular structure. At the same time, it opens up the control plane and the protocol implementation in the control plane, so the architecture of the network device is no longer closed. In this way, SDN promotes the rapid innovation and evolution of the network. The idea of SDN is currently well received by both academic and industry researchers; SDN has become a popular topic in recent years and is considered a promising way to re-architect the Internet. In addition, research groups such as the Open Networking Summit [2] have been formed. The Open Networking Foundation [3] is defining the standard for SDN, and more than 50 companies, including Google, Facebook, Microsoft, IBM, VMware, Juniper, and Cisco, have joined the foundation to accelerate the creation of standards, products, and applications.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. CFI 12, September 11-12, 2012, Seoul, Korea. Copyright 2012 ACM 978-1-4503-1690-3/12/09 $15.00.

At present, SDN already has some implementations such as OpenFlow [4] and NetOpen [5]. OpenFlow, the most popular instance, is deployed by many universities and research institutions around the world. OpenFlow adopts the centralized control model and is made up of the OpenFlow controller, FlowVisor [6], the OpenFlow switch, and the OpenFlow protocol, as shown in Figure 1.

Figure 1. Architecture of OpenFlow (applications such as multicast and network status statistics sit above FlowVisor, which speaks the OpenFlow protocol to the switches).

OpenFlow separates the control plane from the network equipment and moves it to the controller. The controller takes charge of all the functions of the control plane, while the switch retains only the basic data forwarding function. FlowVisor achieves network virtualization, so that different applications on the network controllers, such as multicast [7], network situation assessment [8], and network status statistics, can form their own private virtual networks. Furthermore, the controller can run on an ordinary host or server and control the data packet forwarding in switches through a standardized protocol and a secure channel. In the centralized control model, all the routes are determined by the controller, so the first packet of each data flow

is sent to the central controller. The controller then computes the routing path for each data flow and installs it into the related switches according to a global network view. The first packet of each data flow is usually called the data flow initialization request. The request processing capability of a single controller is limited: 1) NOX [9] can process about 30K requests per second [10]; 2) Maestro [11] can process about 600K requests per second. In fact, large-scale network environments always carry vast amounts of data flows: 1) a 1500-server cluster might generate 100K requests per second [12]; 2) a 100-switch data center might generate 10,000K requests per second [11]. With the increasing scale of deployment, the scalability problem of centralized control becomes more and more obvious. Thus, this paper proposes building a powerful logical controller made up of several physical controllers. The scalability problem in the control plane then evolves into a cooperative performance problem of multiple controllers, as illustrated in Figure 2.

Figure 2. The scalability problem in OpenFlow control.

Based on the analysis above, to improve the performance of the control plane, this paper applies the ideas of load balancing, parallel processing, data sharing, and clustering to the OpenFlow network, and proposes ASIC, an architecture for scalable intra-domain control: the first step is to balance the vast amounts of data flow initialization requests, and the second is to process those requests with several controllers simultaneously according to a shared global network view. Each controller computes and installs the routing path for every request independently. In this way, ASIC has the ability to solve the scalability problem completely. The rest of the paper is organized as follows: the next section presents the related work. Section 3 describes the design of the ASIC architecture for scalable intra-domain control.
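The gap between per-controller capacity and aggregate request rate quoted above directly fixes how many equivalent physical controllers a logical controller must contain. A minimal back-of-the-envelope sketch (the function name is ours, not the paper's):

```python
import math

def controllers_needed(request_rate: int, per_controller_capacity: int) -> int:
    """Number of equivalent physical controllers required to absorb
    a given aggregate rate of data flow initialization requests."""
    return math.ceil(request_rate / per_controller_capacity)

# Figures quoted above: NOX handles ~30K req/s, Maestro ~600K req/s,
# and a 100-switch data center can generate ~10,000K req/s.
print(controllers_needed(10_000_000, 30_000))   # NOX-class controllers -> 334
print(controllers_needed(10_000_000, 600_000))  # Maestro-class controllers -> 17
```

Even with the fastest single controller, a large data center needs more than a dozen physical controllers, which motivates the load-balanced multi-controller design below.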
This paper then gives the emulation and evaluation of the proposed mechanism in Section 4. Finally, Section 5 concludes the paper.

2. RELATED WORK
Research on the scalability of the control plane falls mainly into two categories: 1) improving the request processing speed of the controller software itself; 2) new proposals that pre-install the data forwarding rules into switches, cluster the data flow initialization requests, return certain rights to the switch for local network events, or deploy multiple geographically distributed controllers, each handling the switches of a local area.

2.1 Research on High Performance Software
At present, there are many kinds of controller systems: NOX [9], Maestro [11], Beacon [13], SNAC [14], and so on. Some of them, such as Beacon and Maestro, try to improve the performance of the controller through multi-threading. Furthermore, controller software systems are usually deployed on a multicore host or server. However, for large-scale data centers or networks, a single physical controller is not enough, as already shown in Figure 2.

2.2 Research on Solutions for Scalable Control Plane
There are already some proposals for a scalable control plane, such as HyperFlow [15], DevoFlow [16], DIFANE [17], and ONIX [29]. HyperFlow proposes a distributed control plane for OpenFlow. In HyperFlow, a domain runs multiple controllers simultaneously, and each controller handles the switches of a local area. To share the global information among those controllers, HyperFlow adopts a distributed file system named WheelFS [18], which is designed for WAN (Wide Area Network) environments. Each HyperFlow controller has the right to deal with network events within a certain local area, and events that affect the global network are announced from time to time. Once the other controllers learn the event information, they replay the event to achieve synchronization of the global view.
However, this approach can only deal with events that do not occur frequently, such as link status changes. For some data centers or OpenFlow networks, the administrator needs to know all the real-time events, which HyperFlow does not support at present. DevoFlow adopts clustering, or fuzzy matching, technology. After clustering the data flows, the controller processes not a single data flow but a class of data flows. At the same time, DevoFlow gives the switch as much decision-making right as possible. Since clustering reduces the burden on the controller, there is a trade-off between the number of classes and the burden on the control plane: the looser the fuzzy matching rules, the lighter the burden; conversely, when the fuzzy matching rules are set strictly to exact match, the scalability problem reoccurs. DIFANE returns some control rights to the switch. By pre-installing forwarding rules, the switch no longer needs to forward the initialization request of a matched data flow to the controller, and instead forwards the flow directly according to the pre-installed rules. In this way, DIFANE alleviates the burden on controllers and enlarges the control scope of a controller. ONIX treats each partition of the network as a logical node when making global routing decisions, and each partition makes its own local routing decisions. The common drawback of DIFANE and ONIX is that they cannot achieve real-time visibility of the global data flow status. By the analysis above, the existing solutions can only reduce the load of the controller to a certain extent, or increase the control scale of the controller at the expense of a real-time network view. So far, there has been no solution that completely solves control plane scalability for large-scale networks.
In order to completely solve the scalability issue, this paper proposes a scalable mechanism with a three-level architecture, in which each level has its own scalable approach and can evolve independently.
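The three-level mechanism just outlined (balance the requests, process them on equivalent controllers, share one global view) can be sketched end-to-end as a toy. All names and the tiny precomputed view below are illustrative stand-ins, not the paper's code:

```python
# Level 3: the shared global network view (here, precomputed routing paths).
shared_view = {"10.0.0.1": ["s1", "s3", "s7"],
               "10.0.0.2": ["s2", "s5"]}

class Controller:
    """Level 2: an equivalent physical controller that consults the
    shared view and records the flowtable entries it installs."""
    def __init__(self, name):
        self.name = name
        self.installed = []                 # (switch, flow) entries

    def handle(self, src_ip):
        path = shared_view[src_ip]          # read the global view
        for switch in path:                 # install entries along the path
            self.installed.append((switch, src_ip))
        return path

controllers = [Controller(f"ctrl-{i}") for i in range(3)]

def balance(src_ip):
    """Level 1: the balancer only dispatches; here simple round-robin."""
    ctrl = controllers[balance.count % len(controllers)]
    balance.count += 1
    return ctrl.handle(src_ip)
balance.count = 0

print(balance("10.0.0.1"))  # first request goes to ctrl-0
```

The balancer never computes routes itself; any controller can serve any request because all of them read the same view.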

3. THE ASIC ARCHITECTURE
3.1 Overview of ASIC
The capability of a single controller for handling data flow initialization requests is limited. In order to deal with large-scale networks that have vast amounts of initialization requests, this paper suggests a multi-controller collaboration and network view sharing method to process the requests in parallel. ASIC is a powerful logical controller and mainly includes three levels: the first level is a load balancing system; the second level is the controller cluster; and the third level is a distributed storage system for data sharing. In ASIC, the load balancer only forwards the requests, and the functions of all controllers are equivalent. When a data flow initialization request packet arrives at the logical controller, it is first forwarded to a physical controller in the second level by the load balancing system in the first level. The physical controller then calculates the routing path by consulting the global network view in the third level, and installs the routing path into the corresponding switches. For the replying routing path packets, the physical controller sends them directly to the switches without passing through the load balancer in the first level. In detail, the routing path is expressed by entries in a flowtable [4]. Routing path installation is the process by which the controller installs the flowtable entries into the corresponding switches over the secure channel described in the OpenFlow protocol.

3.2 Load Balancing of Initialization Requests

Figure 3. Applying load balancing to an OpenFlow network (data flow initialization requests from the intranet pass through the load balancer to the controllers, which share the global network view and reply directly to the switches).

As shown in Figure 3, all the data flow requests within a domain are distributed to different controllers by the load balancing equipment, and the load balancer can distribute the requests by a variety of algorithms, such as round-robin scheduling [19] or hash scheduling by IP address [20]. All the physical controllers are equivalent. Their tasks are to calculate the routing path based on the global network view, generate flowtable entries, and install the entries into the corresponding switches. From the perspective of a single controller, its role in ASIC is the same as in the current OpenFlow environment. In an actual network environment, the load balancer or packet dispatcher can be selected according to the number of requests. For example, a network might use a router as a balancer (using the policy routing model [21]: turning off the route learning function and forwarding packets according to statically configured rules, such as forwarding certain packets to a specific port), an OpenFlow switch (distributing requests via the action attribute [4] in the flowtable), a professional Web load balancer (like the F5-BIG-LTM-8950 product of F5 Corporation), or software such as the Click router [22][23].

3.3 Storage Cluster for Data Sharing
In order to achieve data sharing for the distributed control plane and to provide a consistent global view to each controller, this paper suggests adopting a mature data storage system that includes at least two parts: 1) persistent storage that supports transactions (a transaction is a group of operations; if any operation in the group fails, the whole group must be undone, or rolled back); 2) caching storage. Controllers deal directly with the caching storage for both reading and writing, while the persistent storage can be used for data analysis or to maintain network status during a reboot. We explain the data cluster in two steps.
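Hash scheduling by IP address, one of the dispatch policies named above, can be sketched in a few lines. The controller names and the CRC32 choice of hash are our illustrative assumptions; any stable hash works:

```python
import zlib

def choose_controller(src_ip: str, controllers: list) -> str:
    """A stable hash of the source IP selects the physical controller,
    so all requests from one source consistently hit the same
    controller (unlike round-robin, which spreads them arbitrarily)."""
    return controllers[zlib.crc32(src_ip.encode()) % len(controllers)]

controllers = ["ctrl-1", "ctrl-2", "ctrl-3", "ctrl-4", "ctrl-5"]
# The mapping is deterministic for a given source address:
assert choose_controller("10.0.0.7", controllers) == choose_controller("10.0.0.7", controllers)
```

Determinism matters here: since the controllers are equivalent and stateless with respect to each other, either policy is correct, but a per-source hash keeps any per-flow bookkeeping on one controller.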
Step 1: To give readers a concrete picture of the persistent data storage, this paper takes the MySQL [24] database as an example; it could be replaced by any other persistent storage system, such as a DHT (distributed hash table). MySQL is a mature technology with good scalability in both reading and writing. A feasible design for the data sharing in OpenFlow is shown in Figure 4.

Figure 4. Writing and reading mechanism of the distributed storage for the shared data (writes go to the master DB, which replicates to the slave DBs; reads come from the slaves).

In Figure 4, one master database may correspond to several slave databases. The master database is responsible for data writing (INSERT, DELETE, and UPDATE), while the slave databases are responsible for data reading (SELECT). Data consistency between the master database and the slave databases is achieved by the data replication mechanism. Thus, scalability in data reading can be achieved by increasing the number of slave databases. To achieve scalability in data writing, the master database should be split. A network administrator can vertically split the master database according to the actual requirements to form groups of one-master, multi-slave units. For example, when grouping by source IP address, data with related source IP addresses is written to the same master-slaves database group. Meanwhile, in order to obtain a global view of the domain, the controller reads data fragments from the slave databases of the corresponding group units according to the actual requirements and then assembles the data fragments together.

Step 2: Furthermore, in order to accelerate the data reading and writing speed, ASIC adopts a memory caching system (such as Memcached [25][26]). The memory caching system caches data in memory to shorten the data reading time compared with reading directly from the database.
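The read/write split and the vertical split by source IP described in Step 1 can be sketched as follows. The class, the string stand-ins for database connections, and the first-octet grouping key are all illustrative assumptions, not the paper's implementation:

```python
import itertools

class MasterSlaveGroup:
    """One one-master/multi-slave unit from Figure 4: writes go to the
    master, reads rotate over the slaves."""
    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)

    def write(self, statement):        # INSERT / DELETE / UPDATE -> master
        return (self.master, statement)

    def read(self, query):             # SELECT -> next slave in rotation
        return (next(self._slaves), query)

def group_for(src_ip, groups):
    """Vertical split by source IP: related addresses map to the same
    one-master/multi-slave group (here keyed on the first octet)."""
    return groups[int(src_ip.split(".")[0]) % len(groups)]

groups = [MasterSlaveGroup("master-0", ["slave-0a", "slave-0b"]),
          MasterSlaveGroup("master-1", ["slave-1a", "slave-1b"])]
print(group_for("10.0.0.7", groups).write("INSERT path ..."))
```

Read scalability comes from adding slaves to a group; write scalability comes from adding groups, exactly the two knobs the text identifies.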
Therefore, based on the MySQL master-slaves architecture, to further accelerate the data reading speed, the data cluster in ASIC adds a memory caching system as shown in Figure 5.

Figure 5. Adding the memory caching pool to the data sharing mechanism (controllers read from and write to the Memcached pool, which synchronizes with the persistent storage nodes).

After adding the memory caching system to the data cluster, data reading and writing go against the Memcached pool instead of the persistent database. The Memcached pool and the persistent storage synchronize their data from time to time, and the persistent storage in the data cluster can then be used for data analysis or to maintain network status during a reboot.

3.4 Scalability Analysis of ASIC
The ASIC architecture can dynamically adjust its scale according to the number of initialization requests or the network size. The load balancing equipment can range from a software balancer up to a current Internet backbone router; with backbone routers, the load balancing equipment will not bottleneck the entire logical controller. Along with the performance improvement of the load balancing device, ASIC can increase the number of physical controllers correspondingly, and different controllers can run different controller software systems independently. Once the number of requests increases, the number of databases should also increase correspondingly. Thus, each of the three levels in ASIC has its own scalable solution and can be extended to meet the requirements of large-scale networks.

3.5 Application Deployment in the Control Plane

Figure 6. The deployment of applications in the control plane (applications such as topology calculation, QoS routing, and access control run on the network operating system, which controls packet forwarding in the production network).

Current OpenFlow experimental environments usually deploy one physical controller, which is unable to run a large number of applications. In ASIC, the controllers behind the load balancer can run different controller software systems (network operating systems) such as NOX and Beacon.
The control plane applications from third-party developers can be installed on one or several physical controllers. All controllers share the same global network view; the only difference is the controller software installed. Thus, developers can install their applications selectively, according to their preferred development language or to the functions of the different controllers.

4. EXPERIMENT AND RESULTS
To verify the feasibility of ASIC, this paper carries out an emulation experiment of the entire ASIC framework. The performance metrics of a controller are mainly the request processing throughput and the data flow setup latency.

4.1 Experiment Environment and Design

Figure 7. The experimental topology (five request-generating hosts connect through the load balancer to five controllers, which share a data cluster of one MySQL master, five MySQL slaves, and five Memcached servers).

Hardware configuration: ten hosts (Intel Core 2 Quad processor, 4GB memory, 1Gbps network interface ports), five of which play the role of data flow initialization request packet generators. The other five hosts work as Memcached cache servers. To ensure the request sending speed, the request packets with different IP addresses are generated in advance. The load balancer in this experiment is a router with 1Gbps ports. All the controllers and the MySQL database servers are blade servers (Intel Core 2 Quad processor, 4GB memory, 1Gbps network interface ports), and every network interface in this experiment supports 1Gbps per port. The switch is the Ruijie RG-S5760-24GT/4SFP.

Software configuration: all hosts and servers run Ubuntu 10.04 with Java 1.6.0_25 installed. The controller software is Beacon 1.0.0, mildly modified for the experiment, and each Beacon controller runs four threads.
The cache servers run Memcached 1.4.13, and the database system is MySQL 5.1.42.

Experiment design: we emulate an OpenFlow domain with a multihoming [27] scenario, in which two Internet service providers, A and B, are configured as the network exits. Some users' IP addresses belong to provider A, and some belong to provider B. When the logical controller receives request packets with different source IP addresses, it should route them along different paths to the appropriate exit. The five request-generating hosts are connected to five different ports of the router, and the router cascades the five controllers on the other side. With this connection, the experiment emulates the data flow initialization request processing of a real network environment. In the initial stage, we first calculate the routing paths, store them in the databases, and then cache them in the Memcached servers. By sending requests at high speed to the

router, the request processing capability of each controller can be counted.

4.2 Experimental Results and Analysis
4.2.1 Throughput
By monitoring the Memcached servers, we verified that the modified Beacon software indeed reads data directly from the cache servers. By counting the number of data fetches from the cache, the request processing speeds of controller 1 and controller 2 are shown in Figure 8.

Figure 8. The request processing speeds of controller 1 and controller 2.

Further, we record the capability of all the controllers as follows:

Table 1. Statistics of controllers processing the requests
Controller:          1        2        3        4        5
Speed (requests/s):  414,598  403,470  425,553  383,376  394,511

In this experiment, the capacity of a single Beacon controller is about 400,000 requests per second. After load balancing, the processing capability of the logical controller, which includes 5 Beacon controllers, is about 400,000 x 5 = 2,000,000 requests per second. In this experiment, we use only 5 controllers and one MySQL master-slaves group. If we expand the scale of the deployment, the capacity of the logical controller will grow correspondingly and linearly. Thus, the purpose of load balancing is achieved. The capability of handling requests is directly related to the size of the network topology: the larger the topology, the longer the average time for the controller to compute a routing path. In this experiment, each controller computes for each request (using the source IP address to determine which routing path to fetch) and fetches the global network view (reading the routing path from the cache), so it is reasonable for each controller to process 400,000 requests per second. With the expansion of the network size, we can apply a finer-grained vertical split to the master database and build larger data clusters and distributed storage. Thus, the data storage also has good scalability.
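The aggregate figure quoted above follows directly from the per-controller rates in Table 1:

```python
# Per-controller request rates measured in Table 1 (requests per second).
rates = [414_598, 403_470, 425_553, 383_376, 394_511]

aggregate = sum(rates)
average = aggregate / len(rates)
print(f"average per controller: {average:,.0f} req/s")  # close to 400K
print(f"logical controller:     {aggregate:,} req/s")   # close to 2M
```

The measured aggregate, just over 2 million requests per second, matches the 400,000 x 5 estimate in the text.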
If we adopt an Internet backbone router as the request distribution equipment (for example, the Huawei Quidway S9312 switch can reach a throughput of 1,320,000,000 packets per second in theory), the logical controller can in theory support the size of an actual autonomous domain in the current Internet.

4.2.2 Latency Analysis
Compared with a single controller deployed in the current network, the extra delay in this powerful logical controller is O(h) + O(f(n)), which is acceptable to end users based on the following analysis: 1) h is the number of balancer hops a request traverses in the balancing system before reaching a physical controller. Because each balancer, like a router, has a limited number of ports for connecting physical controllers, cascading balancers is necessary in some circumstances. Since each balancer is only responsible for packet distribution, the time to traverse a balancer equals the time to traverse an ordinary switch, so the time spent in this step is acceptable. 2) f(n) represents the time consumed to fetch the global network view data from the shared data storage, where n is the number of nodes on which the data segments are located. Because the data is cached in memory, controllers can get it directly from the Memcached devices at high speed. In addition, the distributed data can be fetched by multiple threads in parallel. We suggest connecting the cache devices and the controllers to the same switch; this way, the delay of fetching the distributed data can drop to f(n / num(threads)). For these two reasons, the data fetching time f(n) is negligible.

5. CONCLUSION AND FUTURE WORK
Reviewing the development of the computer, the computer field adopted a common underlying hardware (hardware based on the x86 instruction set [28]).
Based on the x86 instruction set, computer operating systems and software present a trend toward diversification. Today's network is showing a similar trend: OpenFlow reduces the logic of the network switching equipment and lets the data plane in the switch forward packets through a uniform flowtable. At the same time, the flowtable is operated through a public network API (Application Programming Interface) defined in the OpenFlow protocol. The control plane and the data plane are decoupled and evolve independently. The control logic and the applications in the control plane no longer rely on network equipment vendors to implement them, and network researchers can do arbitrary innovation on the controllers. In this way, OpenFlow enables the rapid innovation of networks and the rapid evolution of the Internet. Thus, OpenFlow can be considered the x86 instruction set of the network. This paper adopts the ideas of load balancing, parallel processing, data sharing, and clustering to solve the scalability problem in the OpenFlow control plane for the first time, and proposes the ASIC architecture: load balancing the data flow initialization requests and using multiple controllers to process those requests in parallel by sharing a global network view held in a cached data cluster. Furthermore, in the ASIC framework, the three levels have their own scalability solutions and can evolve independently. By clustering the network devices and building large-scale data clusters, ASIC has the ability to completely resolve the scalability issue of the centralized control plane. Future work mainly focuses on deploying ASIC in an actual network environment and monitoring its behavior and actual performance.

6. ACKNOWLEDGMENTS
Supported by the National Science Foundation of China under Grant 61073172, the Program for New Century Excellent Talents in University, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant 200800030034, and the

the National Basic Research Program ("973" Program) of China under Grant 2009CB320501.

7. REFERENCES

[1] N. McKeown. Keynote talk: Software-defined networking. In Proc. of IEEE INFOCOM '09, Apr. 2009.
[2] Open Networking Summit: http://opennetsummit.org/.
[3] Open Networking Foundation: https://www.opennetworking.org/.
[4] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner. OpenFlow: enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review, 38(2), 2008.
[5] N. Kim and J. Kim. Building NetOpen networking services over OpenFlow-based programmable networks. In Proc. of ICOIN '11, Jan. 2011.
[6] R. Sherwood, G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar. FlowVisor: A Network Virtualization Layer. Tech. Rep. OPENFLOW-TR-2009-01, OpenFlow Consortium, October 2009.
[7] E. Karipidis, N. D. Sidiropoulos, and Z.-Q. Luo. Quality of service and max-min-fair transmit beamforming to multiple co-channel multicast groups. IEEE Trans. Signal Process., 56(3), pp. 1268-1279, March 2008.
[8] Yiqin, Dengguo Feng, Kai Chen, and Yifeng Lian. Research on monitor position in network situation assessment. American Journal of Engineering and Technology Research, 11(9), pp. 2197-2203, 2011.
[9] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker. NOX: Towards an operating system for networks. ACM SIGCOMM Computer Communication Review, 38(3), pp. 105-110, July 2008.
[10] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker. Applying NOX to the datacenter. In Proceedings of the Workshop on Hot Topics in Networks (HotNets-VIII), 2009.
[11] Z. Cai, A. L. Cox, and T. S. E. Ng. Maestro: A System for Scalable OpenFlow Control. Tech. Rep. TR10-08, Rice University, 2010.
[12] S. Kandula, S. Sengupta, A. Greenberg, and P. Patel. The Nature of Datacenter Traffic: Measurements & Analysis. In Proc. IMC, 2009.
[13] Beacon: http://beaconcontroller.net/.
[14] Simple Network Access Control (SNAC): http://www.openflow.org/wp/snac/.
[15] A. Tootoonchian and Y. Ganjali. HyperFlow: A distributed control plane for OpenFlow. In INM/WREN Workshop, 2010.
[16] A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee. DevoFlow: Scaling Flow Management for High-Performance Networks. In Proc. SIGCOMM, 2011.
[17] M. Yu, J. Rexford, M. J. Freedman, and J. Wang. Scalable Flow-Based Networking with DIFANE. In Proc. SIGCOMM, Aug. 2010.
[18] J. Stribling, Y. Sovran, I. Zhang, X. Pretzer, J. Li, M. F. Kaashoek, and R. Morris. Flexible, wide-area storage for distributed systems with WheelFS. In Proceedings of the 6th USENIX Symposium on Networked Systems Design and Implementation (NSDI '09), Apr. 2009.
[19] R. V. Rasmussen and M. A. Trick. Round robin scheduling - a survey. European Journal of Operational Research, 188(3), pp. 617-636, 2008.
[20] YingChun Lei, Yili Gong, Song Zhang, and GuoJie Li. Research on scheduling algorithms in Web cluster servers. Journal of Computer Science and Technology, 18(6), pp. 703-716, Nov. 2003.
[21] H. Tangmunarunkit, R. Govindan, D. Estrin, and S. Shenker. The impact of routing policy on Internet paths. In Proc. 20th IEEE INFOCOM, Alaska, USA, Apr. 2001.
[22] Click Modular Router: http://www.read.cs.ucla.edu/click/click.
[23] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F. Kaashoek. The Click modular router. ACM Transactions on Computer Systems (TOCS), 18(3), pp. 263-297, Aug. 2000.
[24] MySQL: http://www.mysql.com/.
[25] Memcached, a distributed memory object caching system: http://memcached.org/.
[26] J. Petrovic. Using Memcached for Data Distribution in Industrial Environment. In Proceedings of the Third International Conference on Systems, pp. 368-372, Apr. 2008.
[27] B. Sousa, K. Pentikousis, and M. Curado. Multihoming management for Future Networks. ACM/Springer Mobile Networks and Applications, 16(4), pp. 505-517, Aug. 2011.
[28] I. J. Huang and T. C. Peng.
Analysis of x86 instruction set usage for DOS/Windows applications and its implication on superscalar design. IEICE Transactions on Information and Systems, E85-D(6), pp. 929-939, 2002.
[29] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, and S. Shenker. Onix: a distributed control platform for large-scale production networks. In Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation (OSDI '10), pp. 1-6, October 2010.