Enhancing Responsiveness and Scalability for OpenFlow Networks via Control-Message Quenching
Tie Luo, Hwee-Pin Tan, Philip C. Quan, Yee Wei Law and Jiong Jin
Institute for Infocomm Research, A*STAR, Singapore
Faculty of Mathematics, University of Cambridge, UK
Department of Electrical & Electronic Engineering, University of Melbourne, Australia
{luot,

Abstract: OpenFlow has been envisioned as a promising approach to next-generation programmable and easy-to-manage networks. However, the inherent heavy switch-controller communication in OpenFlow may throttle controller responsiveness and, ultimately, network scalability. In this paper, we identify that a key cause of this problem lies in flow setup, and propose a Control-Message Quenching (CMQ) scheme to address it. CMQ requires minimal changes to OpenFlow, imposes no overhead on the central controller, which is often the performance bottleneck, and is lightweight and simple to implement. We show, via worst-case analysis and numerical results, an upper bound on the performance improvement that CMQ can achieve, and evaluate the average performance via experiments using a widely-adopted prototyping system. Our experimental results demonstrate considerable enhancement of controller responsiveness and network scalability by using CMQ, with reduced flow setup latency and elevated network throughput.

I. INTRODUCTION

The advent of software-defined networking (SDN) [1] and OpenFlow [2] is ushering in a new era of networking with a clearly-decoupled network architecture: a programmable data plane and a centralized control plane. The data plane is tasked with flow-based packet forwarding, while the control plane centralizes all intelligent control such as routing, traffic engineering and QoS control. Flow-based packet forwarding is realized by programmable (i.e., user-customizable) flow tables via an open interface called OpenFlow, which is the de facto standard communication protocol between the control and data planes. The control plane is usually embodied by a central controller, providing a global view of the underlying network to upper-layer applications. This architecture is depicted in Fig. 1.

Fig. 1. The architecture of SDN [1].

The split architecture of SDN, gelled by OpenFlow, brings substantial benefits such as consistent global policy enforcement and much simplified network management. While the standardization process, steered by the Open Networking Foundation consortium [3], is still ongoing, SDN and OpenFlow have won wide recognition and support from a large number of industry giants such as Microsoft, Google, Facebook, HP, Deutsche Telekom, Verizon, Cisco, IBM and Samsung. It has also attracted substantial attention from academia, where research has recently heated up on a variety of topics related to OpenFlow.

In this paper, we address OpenFlow scalability by way of enhancing controller responsiveness. This was motivated by the imbalance between the flow setup rate offered by OpenFlow and the demand of real production networks. The flow setup rate, as measured by Curtis et al. [4] on an HP ProCurve 5406zl switch, is 275 flows per second. On the other hand, a data center with 1500 servers, as reported by Kandula et al. [5], has a median flow arrival rate of 100K flows per second, and a 100-switch network, according to Benson et al. [6], can see a spike of 10M flow arrivals per second in the worst case. This indicates a clear gap.
The inability of OpenFlow to meet these demanding requirements stems from an inherent design decision that leads to frequent switch-controller communication: to keep the data plane simple, OpenFlow delegates the task of routing (as required by flow setup) from switches to the central controller. As a result, switches have to consult the controller frequently for instructions on how to handle incoming packets, which taxes the controller's processing power and tends to congest switch-controller connections, thereby imposing a serious bottleneck on the scalability of OpenFlow. Prior studies on the scalability issue focus on designing a distributed control plane [7], [8], devolving control to switches [4], [9], [10] or end-hosts [11], or using multi-threaded controllers [12], [13]. These approaches either entail significant changes to OpenFlow (while a wide range of commercial OpenFlow products and prototypes are already off the shelf) or are complex to implement.

In this paper, we take a different, algorithmic approach, rather than the architectural or implementation-based ones used by the above-mentioned work. We propose a Control-Message Quenching (CMQ) scheme to enhance controller responsiveness, in order to reduce flow setup latency and boost network throughput. By suppressing redundant control messages, CMQ conserves computational and bandwidth resources for the control plane and mitigates the controller bottleneck problem, thereby enhancing network scalability. CMQ is lightweight and easy to implement; it runs locally on switches without imposing any overhead on the controller. We show, via worst-case analysis and numerical results, an upper bound on the control-message reduction that CMQ can achieve, and evaluate its average performance via experiments using a widely-adopted prototyping system. The experimental results demonstrate the effectiveness of CMQ via considerably reduced flow setup latency and elevated network throughput.

The rest of the paper is organized as follows. Section II reviews the literature and Section III presents our proposed solution together with a worst-case analysis. We evaluate the performance of CMQ in Section IV, and conclude the paper in Section V.

II. RELATED WORK

Notwithstanding the various advantages mentioned elsewhere, OpenFlow's architecture of a much-simplified data plane coupled with a centralized control plane creates higher demand on switch-controller communication, making the controller prone to becoming a bottleneck and throttling network scalability. To address this, Yu et al. proposed DIFANE [9] to push network intelligence down to the data plane. It partitions pre-installed flow-matching rules among multiple authorized switches and lets each authorized switch oversee one partition of the network; normal switches only communicate in the data plane with each other or with authorized switches, while only authorized switches need to talk to the controller, and only occasionally. However, the visibility of flow states and statistics is compromised, as they are largely hidden from the controller. It also requires significant changes to OpenFlow. Mahout [11] attempts to identify significant flows (called elephant flows therein) by having end-hosts look at the TCP buffers of outgoing flows, and does not invoke the controller for insignificant flows (called mice therein). The downside is that it requires modifying end-hosts, which is difficult to achieve because end-hosts are beyond the control of network operators. DevoFlow [4] (i) devolves control partly from the controller back to switches, an idea similar to [9] but incurring smaller changes to OpenFlow, using a technique called rule cloning, and (ii) only involves the controller for elephant flows, an idea similar to [11] but without requiring modified end-hosts. It also allows switches to make local routing decisions without consulting the controller, through multipath support and rapid rerouting. We take a different approach by improving the working procedures of the OpenFlow protocol to enhance controller responsiveness; we only require minimal changes to switches, and our proposed scheme is much simpler than DevoFlow. Another line of thought for improving OpenFlow scalability focuses on redesigning the control plane.
One such approach is to design a distributed control plane, as in Onix [7] and HyperFlow [8]. Another approach is to implement multi-threaded controllers, as done by Maestro [12] and NOX-MT [13]. Distributed control incurs additional overhead for synchronizing the controllers' network-wide views, and increases complexity. Multi-threading is orthogonal to, yet compatible with, our approach, and hence can be applied on top of CMQ.

III. CONTROL-MESSAGE QUENCHING

Our solution is based on the working procedures of the OpenFlow protocol, particularly the flow setup process. We identify that the major traffic in switch-controller communication comprises two types of control messages: (1) packet-in and (2) packet-out or flow-mod. A packet-in message is sent by a switch to the controller in order to seek instructions on how to handle a packet upon a table-miss, an event in which an incoming packet fails to match any entry in the switch's flow table. The controller, upon receiving the packet-in, will compute a route for the packet based on the packet header information encapsulated in the packet-in, and then respond to the requesting switch by sending a packet-out or flow-mod message: (i) if the requesting switch has encapsulated the entire original packet in the packet-in (usually because it has no local queue), the controller will use a packet-out, which also encapsulates the entire packet; (ii) otherwise, if the packet-in only contains the header of the original packet, a flow-mod will be used instead, which likewise includes only the packet header, so that the switch can associate it with the original packet (in its local queue). Either way, the switch is instructed on how to handle the packet (e.g., which port to send it to), and will record the instruction by installing a new entry in its flow table, to avoid re-consulting the controller for subsequent packets that should be handled the same way.
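For concreteness, the following is a minimal controller-side sketch of this reactive flow-setup handshake. It is our illustration, not the authors' code: it is written against the POX framework's OpenFlow 1.0 API (rather than the NOX controller used in Section IV), and compute_route is a hypothetical placeholder for the controller's routing logic.

```python
from pox.core import core
import pox.openflow.libopenflow_01 as of

def compute_route(dpid, packet):
    # Hypothetical placeholder: a real controller computes a route from its
    # global network view; here we simply flood.
    return of.OFPP_FLOOD

def _handle_PacketIn(event):
    # Invoked for every table-miss a switch reports via packet-in.
    packet = event.parsed
    out_port = compute_route(event.dpid, packet)

    # Respond with flow-mod: install a flow entry matching this packet's
    # headers, and release the packet the switch buffered locally.
    fm = of.ofp_flow_mod()
    fm.match = of.ofp_match.from_packet(packet)
    fm.idle_timeout = 60                        # entry expiry, as in Section IV
    fm.actions.append(of.ofp_action_output(port=out_port))
    fm.buffer_id = event.ofp.buffer_id          # ties the entry to the queued packet
    event.connection.send(fm)

def launch():
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
```

Subsequent packets of the same flow then match the installed entry and never reach the controller, until the entry expires.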
This mechanism works fine when the switch-controller connection, referred to as the OpenFlow channel [14],¹ has ample bandwidth and the controller is hyper-fast, or when the demand for flow setup is low to moderate. However, chances are good that neither condition is met. On one hand, our analysis and measurements show that the round trip time (RTT), the interval between the instant a packet-in is sent out and the instant the corresponding packet-out or flow-mod is received, can be considerably long: for instance, according to our measurement (Section IV), RTT is on the order of milliseconds even in low-load conditions, which is in line with what others (e.g., [4], [15]) have reported. On the other hand, the flow inter-arrival time (the reciprocal of the flow setup request rate) can reach the microsecond range [5], [6], which is several orders of magnitude shorter than the RTT. For general cases, and for a more rigorous understanding, we analyze the problem as below.

¹ An OpenFlow channel can be provisioned either in band or out of band, which is out of the scope of OpenFlow.

A. Analysis

Let us consider a network with s edge switches, each of which connects to h hosts. The data traffic between the hosts is specified by the following matrix, where λ_ij is the packet arrival rate at host i destined to host j, and i, j = 1, 2, ..., sh:

\[
\begin{array}{c|ccccccccc}
i \backslash j & 1 & 2 & \cdots & h & h+1 & \cdots & 2h & \cdots & sh \\
\hline
1 & 0 & \lambda_{12} & \cdots & \lambda_{1h} & \lambda_{1,h+1} & \cdots & \lambda_{1,2h} & \cdots & \lambda_{1,sh} \\
2 & \lambda_{21} & 0 & \cdots & \lambda_{2h} & \lambda_{2,h+1} & \cdots & \lambda_{2,2h} & \cdots & \lambda_{2,sh} \\
\vdots & & & & & & & & & \\
sh & \lambda_{sh,1} & \lambda_{sh,2} & \cdots & & & & & & 0 \\
\end{array}
\]

Denote by RTT_k the average round trip time for switch k, where k = 1, 2, ..., s. Denote by N_k the average number of packet-in messages that switch k sends during RTT_k; i.e., N_k / RTT_k is switch k's flow setup request rate. Consider the worst case, where either no flow-table entries have been set up or all entries have expired (OpenFlow supports both idle-entry timeouts and hard timeouts); in other words, switch k's flow table is empty. In this case, each packet arrival will trigger a packet-in to be sent, and N_k achieves its maximum:

\[
N_k^{\max} = RTT_k \sum_{i=(k-1)h+1}^{kh} \Bigg( \sum_{j=1}^{(k-1)h} \lambda_{ij} + \sum_{j=kh+1}^{sh} \lambda_{ij} \Bigg) \tag{1}
\]

where it is assumed that switch k is configured to be aware of the set of h hosts attached to it and will only consult the controller when the destination host is not attached to it. Otherwise, the switch will consult the controller for every destination host and Eqn. (1) becomes

\[
N_k^{\max} = RTT_k \sum_{i=(k-1)h+1}^{kh} \sum_{j=1}^{sh} \lambda_{ij} \tag{2}
\]

which does not alter the problem in principle. Henceforth, we focus on Eqn. (1).

B. Numerical Results

To obtain concrete values, let us consider a three-level Clos network [16] that consists of s = 80 edge switches with 8 uplinks each, 80 aggregation switches, and 8 core switches, depicted in Fig. 2. Each edge switch is attached to h = 20 servers.

Fig. 2. A Clos network with 168 switches and 1600 servers.

In a typical data-center setting, each server transfers 128 MB to every other server per second with 1500-byte packets, each link is of capacity 1 Gbps, and OpenFlow channels are provisioned out of band with bandwidth 1 Gbps, whereby the controller can accomplish 275 flow setups per second [4]. This setting translates to a packet arrival rate of λ_ij ≈ 85.3 K packets/sec, or an inter-arrival time of 11.7 µs, as well as RTT_k = 1/(275/80) ≈ 291 ms. It then follows from Eqn. (1) that

\[
N_k^{\max} \approx 7.8 \times 10^8, \quad k = 1, 2, \ldots, 80, \tag{3}
\]

or about 2.7 giga packet-in messages per second, which far exceeds the OpenFlow channel bandwidth (1 Gbps) and will likely cause congestion.
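These figures can be verified with a few lines of arithmetic. The script below is our check, not the authors' code; the 1500-byte packet size is inferred from the stated 11.7 µs inter-arrival time of a 128 MB/s pair-wise transfer.

```python
# Worst-case packet-in load for the example Clos network (Section III-B).
s, h = 80, 20                       # edge switches; hosts per edge switch
lam = 128e6 / 1500                  # lambda_ij: 128 MB/s in 1500-byte packets
rtt_k = 1 / (275.0 / s)             # 275 flow setups/s shared by 80 edge switches
pairs = h * (s * h - h)             # <src, dst> pairs crossing switch k: 20 x 1580

n_max = rtt_k * pairs * lam         # Eqn (1) with uniform lambda_ij
print(f"lambda_ij = {lam:,.0f} pkt/s; inter-arrival = {1e6 / lam:.1f} us")
print(f"RTT_k = {rtt_k * 1e3:.0f} ms")
print(f"N_k^max = {n_max:.2e} packet-in per RTT")        # ~7.8e8, Eqn (3)
print(f"rate = {pairs * lam:.2e} packet-in per second")  # ~2.7e9
```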
C. Algorithm

We propose a Control-Message Quenching (CMQ) scheme to address this problem. In CMQ, a switch sends, during each RTT, only one packet-in message for each source-destination pair, regardless of how many table-misses that pair incurs. The rationale is that packets associated with such a pair will be treated as belonging to the same flow, so the extra packet-out or flow-mod messages would carry redundant information and are thus unnecessary. To achieve this, each switch maintains a dynamically updated list, L, of source-destination address² pairs ⟨s, d⟩. For each incoming packet that fails to match any flow-table entry, i.e., when a table-miss occurs, the switch checks the packet against L: it sends a packet-in and inserts ⟨s, d⟩ into L only if the packet's ⟨s, d⟩ ∉ L; otherwise, it simply queues the packet. After receiving the controller's response, i.e., a packet-out or flow-mod, the backlogged packets that match the address information encapsulated in the response are processed accordingly, as per the original OpenFlow. A new flow entry is also installed in the flow table for the sake of subsequent packets. This process is formally described in Algorithm 1.

² Depending on the type of the switch, the address could be an IP address, Ethernet address, MPLS label, or MAC address.

Algorithm 1 Control-Message Quenching (CMQ)
1: L := ∅
2: for each incoming event e do
3:   if e = an incoming packet then
4:     look up flow table to match the packet
5:     if matched then
6:       handle the packet as per the matched entry
7:     else
8:       look up L for the packet's source-destination address pair ⟨s, d⟩
9:       if found then
10:        enqueue packet  // suppresses packet-in
11:      else
12:        send packet-in to controller
13:        enlist ⟨s, d⟩ in L
14:      end if
15:    end if
16:  else if e = an incoming packet-out or flow-mod then
17:    dequeue and process matched packets as per instruction
18:    install a new entry in flow table
19:    de-list corresponding ⟨s, d⟩ from L
20:  end if
21: end for

At line 12, as per OpenFlow, the associated packet will be queued at the ingress port in ternary content-addressable memory (TCAM) if the switch has one; otherwise, the packet-in will encapsulate the entire packet. At lines 10 and 17, the queue is the same TCAM queue as above if the switch has one; otherwise, the switch shall allocate such a queue in its OS user or kernel space.³

³ Although OS memory is generally slower than TCAM, this does not present a problem, because the enqueueing happens during an RTT, which is essentially an idle period; for dequeueing, one can dedicate a thread for non-blocking processing.
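As an illustration, here is a minimal Python sketch of Algorithm 1's switch-side state machine. It is ours, not the authors' implementation: the packet object, forward() and send_packet_in() are hypothetical stand-ins for a real switch's data path and OpenFlow channel I/O, and the backlog dictionary stands in for the TCAM/OS queue discussed above.

```python
from collections import defaultdict

class CMQSwitch:
    """Switch-side sketch of Algorithm 1: at most one packet-in per
    source-destination pair is outstanding at any time."""

    def __init__(self):
        self.flow_table = {}              # (src, dst) -> action (installed entries)
        self.backlog = defaultdict(list)  # keys form the list L; values are queued packets

    def on_packet(self, pkt):
        key = (pkt.src, pkt.dst)
        action = self.flow_table.get(key)
        if action is not None:             # lines 4-6: table hit
            self.forward(pkt, action)
        elif key in self.backlog:          # lines 8-10: <s,d> already in L
            self.backlog[key].append(pkt)  # quench: suppress duplicate packet-in
        else:                              # lines 11-13: first miss for <s,d>
            self.backlog[key].append(pkt)
            self.send_packet_in(pkt)

    def on_response(self, key, action):
        self.flow_table[key] = action      # line 18: install the new flow entry
        for pkt in self.backlog.pop(key, []):  # lines 17, 19: drain backlog, de-list
            self.forward(pkt, action)

    def forward(self, pkt, action):
        pass                               # stand-in for the real data path

    def send_packet_in(self, pkt):
        pass                               # stand-in for OpenFlow channel I/O
```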
D. Analysis Revisited

Let us reconsider the network described in Section III-A, now with CMQ. Let RTT'_k denote the round trip time for switch k when using CMQ, and N'_k the average number of packet-in messages sent by switch k during RTT'_k. Still considering the worst case, we have

\[
N_k'^{\max} = \sum_{i=(k-1)h+1}^{kh} \Bigg( \sum_{j=1}^{(k-1)h} \min\{1, \lambda_{ij} RTT_k'\} + \sum_{j=kh+1}^{sh} \min\{1, \lambda_{ij} RTT_k'\} \Bigg). \tag{4}
\]

Each summation term is capped at 1 because at most one packet-in will be sent for each source-destination pair of hosts in one RTT.

Revisiting the example Clos network given in Section III-B, we first note that λ_ij RTT_k ≈ 2.5 × 10⁴. Hence, it is plausible to assume that λ_ij RTT'_k > 1, because RTT'_k is unlikely to be more than 4 orders of magnitude smaller than RTT_k (which was later verified by our experiments, too). It then follows from Eqn. (4) that

\[
N_k'^{\max} = 3.16 \times 10^4, \quad k = 1, 2, \ldots, 80. \tag{5}
\]

Compared to Eqn. (3), this indicates a remarkable reduction of control messages. For a comparison on the same time scale, i.e., N_k^max / RTT_k versus N'_k^max / RTT'_k flow setup requests per second, the reduction becomes (N_k^max / N'_k^max) · (RTT'_k / RTT_k) fold, which is still substantial since RTT'_k is unlikely to be more than 4 orders of magnitude smaller than RTT_k, as mentioned above.
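To make Eqns. (4)-(5) concrete, here is a short numeric check under the same Section III-B setting (our arithmetic, mirroring the earlier script):

```python
# With CMQ, each of the 20 x 1580 source-destination pairs crossing an edge
# switch contributes at most one packet-in per RTT' (Eqn (4), capped at 1).
s, h = 80, 20
lam = 128e6 / 1500                  # lambda_ij from Section III-B
rtt_k = 80 / 275.0                  # RTT_k without CMQ, ~291 ms
pairs = h * (s * h - h)             # 31,600 pairs

print(f"lambda_ij * RTT_k = {lam * rtt_k:.2e}")  # ~2.5e4 >> 1, so min{1, .} = 1
n_cmq_max = pairs                   # Eqn (5): N'_k^max = 3.16e4
print(f"N'_k^max = {n_cmq_max:.2e} packet-in per RTT'")
print(f"per-RTT reduction vs Eqn (3): {lam * rtt_k * pairs / n_cmq_max:.1e}x")
```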
Our worst-case analysis gives an upper bound on the (average) performance improvement, in terms of the reduction of flow setup requests, that CMQ can achieve.⁴ This shows significant potential; to evaluate the average performance, we conduct the experiments described next.

⁴ It is not obvious that N_k^max / N'_k^max gives an upper bound on the average performance; replacing the maxima with the averages N_k and N'_k would be more intuitive. In fact, because of the capping at 1 in Eqn. (4), the difference between N'_k^max and N'_k is by far smaller than that between N_k^max and N_k. Hence, what we derive is a de facto upper bound.

IV. PERFORMANCE EVALUATION

To quantify the performance gain of CMQ, we conduct experiments using Mininet [17], a prototyping system widely adopted by institutions including Stanford, Princeton, Berkeley, Purdue, ICSI, UMass, NEC, NASA, and Deutsche Telekom Labs. One important feature of Mininet is that users can deploy exactly the same code and test scripts in a real production network. We use probably the most popular OpenFlow controller, NOX [18], and a popular OpenFlow switch model, Open vSwitch [19]. We also use Wireshark [20], a network protocol analyzer, to verify the protocol workflow.

A. Experiment Setup

Our experiments use three topologies: Linear, Tree-8 and Tree-16,⁵ as shown in Fig. 3. We employ two traffic modes, latency mode and throughput mode, as used by cbench [21]. In latency mode, a switch initiates only one outstanding flow setup request, waiting for a response from the controller before soliciting the next request. In throughput mode, each switch sends packets at its maximum rate, attempting to saturate the network.

⁵ Both tree and Clos topologies are typical in data center networks; we did not use Clos because Mininet does not allow loops to appear in its topology.
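The following minimal Mininet sketch (ours; it assumes a NOX controller already listening on the default 127.0.0.1:6633 and uses Mininet's stock TreeTopo) builds the Tree-8 topology and runs the iperf throughput measurement used below. The other topologies differ only in the TreeTopo parameters and the host pair measured.

```python
#!/usr/bin/env python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topolib import TreeTopo

def run_tree8():
    # Tree-8 as in Fig. 3b: depth=3, fanout=2 -> 8 hosts, 7 switches.
    net = Mininet(topo=TreeTopo(depth=3, fanout=2),
                  controller=lambda name: RemoteController(
                      name, ip='127.0.0.1', port=6633))
    net.start()
    h1, h8 = net.get('h1', 'h8')
    print(net.iperf((h1, h8)))   # end-to-end throughput between H1 and H8
    net.stop()

if __name__ == '__main__':
    run_tree8()
```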
(a) Linear: 2 switches. (b) Tree-8: depth=3 and fanout=2. (c) Tree-16: depth=2 and fanout=4.
Fig. 3. Topologies used in experiments. Rectangles represent switches and ellipses represent hosts.

The traffic pattern is designated as follows. In latency mode, the first host sends packets to the last host and the other hosts do not generate traffic: that is, H1→H4 in Linear (Fig. 3a), H1→H8 in Tree-8 (Fig. 3b), and H1→H16 in Tree-16 (Fig. 3c). In throughput mode, in Linear, H1→H3 and H2→H4; in Tree-8, H_i sends packets to H_{(i+4) mod 8}, i.e., H1→H5, ..., H4→H8, H5→H1, ..., H8→H4 (treating H8 as H0; 8 flows); in Tree-16, H_i sends packets to H_{(i+4) mod 16} (16 flows). Therefore, the hop count of each route within Tree-8 or Tree-16 is equal (6 and 4, respectively).

The metrics we evaluate are (1) RTT, i.e., the flow setup latency as defined in Section III, which we measure on all switches and then average, and (2) end-to-end throughput, aggregated over all flows, which we measure using iperf. Note that RTT is essentially a control-plane metric and throughput a data-plane metric; using these two metrics allows us to see how the two planes quantitatively interact.

In all networks, link capacity is 1 Gbps, and OpenFlow channels are provisioned out of band with a bandwidth of 1 Gbps as well. The flow-table entry timeout is 60 seconds by default. All results are averaged over 10 equal-setting runs. In each run, the first 5 seconds are treated as warm-up and excluded from the statistics.

B. Experimental Results

As a baseline, we first measure RTT for OpenFlow without CMQ and report the data in Table I. In throughput mode, RTT increases significantly, by about 13, 143 and 175 fold compared to latency mode on the three topologies, respectively. This clearly indicates congested OpenFlow channels and necessitates a remedy.

TABLE I. RTT IN OPENFLOW (UNIT: MS); columns: Linear, Tree-8, Tree-16; rows: latency mode, throughput mode.

Next, we measure RTT for OpenFlow with CMQ in throughput mode (it should be understood that there is no difference between OpenFlow with and without CMQ in latency mode, since with only one outstanding flow setup request there are no duplicate packet-in messages to quench) and present the results in Fig. 4a, where the RTT of original OpenFlow is reproduced from Table I. We can see that CMQ reduces RTT by 14.6%, 20.7% and 21.2% in Linear, Tree-8 and Tree-16, respectively. This is considerable and, more desirably, the improvement increases as the network gets larger, which indicates an aggregating effect and is meaningful for network scalability.

To see how RTT affects throughput, we measure throughput in the same (throughput) mode, and show the results in Fig. 4b. We see that, with CMQ, network throughput is increased by 7.8%, 12.0% and 14.8% for Linear, Tree-8 and Tree-16, respectively. On one hand, this improvement is not as large as in the case of RTT, because what CMQ directly acts on is reducing RTT, which only takes effect during flow setup and not during the lifetime of a flow. On the other hand, the improvement will become larger if the controller is configured in band, because the quenched control traffic will additionally relieve the traffic burden on the data plane as well.

Finally, we zoom in on the single-flow case by measuring the throughput for each network containing only one flow: H1→H4, H1→H8 and H1→H16 in Linear, Tree-8 and Tree-16, respectively (as in the latency mode). The results are shown in Fig. 4c. In addition to observing improvement similar to the multi-flow case of Fig. 4b, we make two other observations from a cross-comparison with Fig. 4b: (1) For each topology, single-flow throughput is slightly higher than multi-flow throughput. This is because, in the multi-flow case, each topology effectively has at least one shared, bottleneck switch on all the routes, i.e., S5 and S6 in Linear, S9 in Tree-8, and S17 in Tree-16, which essentially prevents concurrency. (2) Tree-16 has lower multi-flow throughput but higher single-flow throughput than Tree-8. This is because, in the single-flow case, H1→H16 in Tree-16 takes a shorter path than H1→H8 in Tree-8 (4 hops versus 6 hops).

(a) RTT (ms) in throughput mode. (b) Throughput (Mbps) in throughput mode. (c) Single-flow throughput (Mbps).
Fig. 4. Performance evaluation for RTT and throughput under different modes.

Remark: Our experiments demonstrate that CMQ considerably enhances controller responsiveness, as evidenced by reduced RTT, i.e., shorter flow setup latency. As a direct result, this improvement in the control plane leads to increased throughput in the data plane. More importantly, it substantially benefits network scalability, as the (central) controller is now much less involved in flow setup (most flow setup requests are suppressed by CMQ running on the switches) and OpenFlow channels become more congestion-proof. This implies that more computational resource and bandwidth are conserved, and network capacity is effectively expanded to accommodate higher traffic demand.

V. CONCLUSION AND FUTURE WORK

In this paper, we address OpenFlow scalability by enhancing controller responsiveness via a Control-Message Quenching (CMQ) scheme. Compared to prior studies that address the same issue, our solution requires minimal changes to OpenFlow switches, imposes zero overhead on the controller (which is bottleneck-prone), and is lightweight and simple to implement. These features are important considerations for industry adoption. Using analysis and experiments based on a widely-adopted prototyping system, we demonstrate that CMQ is effective in improving controller responsiveness, which ultimately leads to more scalable OpenFlow networks. This study also benefits resource-constrained yet large-scale OpenFlow networks; for instance, it could be used by Sensor OpenFlow [22], the first work that synergizes OpenFlow and wireless sensor networks. In future work, we plan to experiment with larger networks and more extensive settings, including traffic patterns and parameters such as flow-entry expiry time, and to investigate the effect of in-band channel provisioning.

REFERENCES

[1] Open Networking Foundation, "Software-defined networking: The new norm for networks," white paper, April 2012.
[2] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 69-74, 2008.
[3] Open Networking Foundation, https://www.opennetworking.org.
[4] A. R. Curtis, J. C. Mogul, J. Tourrilhes, P. Yalagandula, P. Sharma, and S. Banerjee, "DevoFlow: Scaling flow management for high-performance networks," in ACM SIGCOMM, 2011.
[5] S. Kandula, S. Sengupta, A. Greenberg, P. Patel, and R. Chaiken, "The nature of data center traffic: Measurements and analysis," in ACM Internet Measurement Conference (IMC), 2009.
[6] T. Benson, A. Akella, and D. A. Maltz, "Network traffic characteristics of data centers in the wild," in ACM Internet Measurement Conference (IMC), 2010.
[7] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama, and S. Shenker, "Onix: a distributed control platform for large-scale production networks," in the 9th USENIX Conference on Operating Systems Design and Implementation (OSDI), 2010.
[8] A. Tootoonchian and Y. Ganjali, "HyperFlow: a distributed control plane for OpenFlow," in INM/WREN. USENIX Association, 2010.
[9] M. Yu, J. Rexford, M. J. Freedman, and J. Wang, "Scalable flow-based networking with DIFANE," in ACM SIGCOMM, 2010.
[10] A. S.-W. Tam, K. Xi, and H. J. Chao, "Use of devolved controllers in data center networks," in IEEE INFOCOM Workshop on Cloud Computing, 2011.
[11] A. R. Curtis, W. Kim, and P. Yalagandula, "Mahout: Low-overhead datacenter traffic management using end-host-based elephant detection," in IEEE INFOCOM, 2011.
[12] Z. Cai, A. L. Cox, and T. S. E. Ng, "Maestro: A system for scalable OpenFlow control," Rice University, Tech. Rep. TR10-08, 2010.
[13] A. Tootoonchian, S. Gorbunov, Y. Ganjali, and M. Casado, "On controller performance in software-defined networks," in USENIX Workshop on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE), 2012.
[14] Open Networking Foundation, "OpenFlow switch specification," version 1.3.0, April 16, 2012. [Online]. Available: stories/downloads/specification/openflow-spec-v1.3.0.pdf
[15] R. Sherwood, G. Gibb, K.-K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar, "Can the production network be the testbed?" in the 9th USENIX Conference on Operating Systems Design and Implementation (OSDI), 2010.
[16] C. Clos, "A study of non-blocking switching networks," Bell System Technical Journal, vol. 32, no. 2, 1953.
[17] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: Rapid prototyping for software-defined networks," in ACM HotNets, 2010.
[18] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: Towards an operating system for networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 3, pp. 105-110, 2008.
[19] Open vSwitch, http://openvswitch.org.
[20] Wireshark, http://www.wireshark.org.
[21] R. Sherwood and K.-K. Yap, "Cbench: an OpenFlow controller benchmarker."
[22] T. Luo, H.-P. Tan, and T. Q. S. Quek, "Sensor OpenFlow: Enabling software-defined wireless sensor networks," IEEE Communications Letters, 2012, to appear.