CherryPick: Tracing Packet Trajectory in Software-Defined Datacenter Networks
Praveen Tammana, University of Edinburgh; Rachit Agarwal, UC Berkeley; Myungjin Lee, University of Edinburgh

ABSTRACT

SDN-enabled datacenter network management and debugging can benefit from the ability to trace packet trajectories. For example, such functionality allows measuring the traffic matrix, detecting traffic anomalies, localizing network faults, etc. Existing techniques for tracing packet trajectories require either large data collection overhead or a large amount of data plane resources such as switch flow rules and packet header space. We present CherryPick, a scalable, yet simple technique for tracing packet trajectories. The core idea of our technique is to cherry-pick the links that are key to representing an end-to-end path of a packet, and to embed them into its header on its way to the destination. Preliminary evaluation on a fat-tree topology shows that CherryPick requires minimal switch flow rules, while using header space close to state-of-the-art techniques.

Categories and Subject Descriptors: C.2.3 [Computer-Communication Networks]: Network Operations

Keywords: Network monitoring; software-defined network

1. INTRODUCTION

Software-Defined Networking (SDN) has emerged as a key technology for making datacenter network management easier and more fine-grained. SDN allows network operators to express the desired functionality using high-level abstractions at the control plane, which are automatically translated into low-level functionality at the data plane. However, debugging SDN-enabled networks is challenging. In addition to network misconfiguration errors and failures [10, 13, 16], network operators need to ensure that operations at the data plane conform to the high-level policies expressed at the control plane.
Noting that traditional tools (e.g., NetFlow, sFlow, SNMP, traceroute) are simply insufficient to debug SDN-enabled networks, a number of tools have been developed recently [1, 11, 12, 14, 17, 26, 27].

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. SOSR 2015, June 17-18, 2015, Santa Clara, CA, USA. Copyright 2015 ACM.

A particularly interesting problem in SDN debugging is being able to reason about the flow of traffic (e.g., tracing individual packet trajectories) through the network [11, 13, 14, 16, 17, 27]. Such functionality enables measuring the network traffic matrix [28], detecting traffic anomalies caused by congestion [9], localizing network failures [13, 14, 16], or simply ensuring that forwarding behavior at the data plane matches the policies at the control plane [11]. We discuss related work in depth in §5, but note that existing tools for tracing packet trajectories take one of two broad approaches. On the one hand, tools like NetSight [11] support a wide range of queries using after-the-fact analysis, but also incur large out-of-band data collection overhead. In contrast, in-band tools (e.g., PathQuery [17] and PathletTracer [27]) significantly reduce data collection overhead at the cost of supporting a narrower range of queries. We present CherryPick, a scalable, yet simple in-band technique for tracing packet trajectories in SDN-enabled datacenter networks.
CherryPick is designed with the goal of minimizing two data plane resources: the number of switch flow rules and the packet header space. Indeed, existing approaches to tracing packet trajectories in SDN trade off one of these resources to minimize the other. At one end of the spectrum is the most naïve approach: assign each network link a unique identifier and have switches embed the identifier into the packet header during the forwarding process. This minimizes the number of switch flow rules required, but has high packet header space overhead, especially when packets traverse non-shortest paths (e.g., due to failures along the shortest path). At the other end are techniques like PathletTracer [27] that aim to minimize the packet header space, but end up requiring a large number of switch flow rules (§3); PathQuery [17] acknowledges a similar limitation in terms of switch resources. CherryPick minimizes the number of switch flow rules required to trace packet trajectories by building upon the naïve approach: each network link is assigned a unique identifier, and switches simply embed the identifier into the packet header during the forwarding process. However, in contrast to the naïve approach, CherryPick minimizes the packet header space by selectively picking a minimum number of essential links to represent an end-to-end path. By exploiting the fact that datacenter network topologies are often well-structured, CherryPick requires packet header space comparable to state-of-the-art solutions [27], while retaining the minimal switch flow rule requirement of the naïve approach. For instance, Table 1 compares the number of switch flow rules and the packet header space required by CherryPick against the above two approaches for a 48-ary fat-tree topology.
Table 1: CherryPick achieves the best of the two existing techniques for tracing packet trajectories: the minimal number of switch flow rules required by the naïve approach, and close to the minimal packet header space required by PathletTracer [27]. These results are for a 48-ary fat-tree topology; M and B stand for million and billion respectively. See §3 for details. [Table columns: CherryPick, PathletTracer, Naïve; rows: path length, #flow rules, #header bits.]

In summary, we make three contributions:

- We design CherryPick, a simple and scalable packet trajectory tracing technique for SDN-enabled datacenter networks. The main idea in CherryPick is to exploit the structure of datacenter network topologies to minimize the number of switch flow rules and the packet header space required to trace packet trajectories. We apply CherryPick to a fat-tree topology in this paper to demonstrate the benefits of the technique (§2).

- We show that CherryPick can trace all 4- and 6-hop paths in an up-to 72-ary fat-tree with no hardware modification by using IEEE 802.1ad double-tagging (§2).

- We evaluate CherryPick over a 48-ary fat-tree topology. Our results show that CherryPick requires a minimal number of switch flow rules while using packet header space close to state-of-the-art techniques (§3).

2. CHERRYPICK

In this section, we describe the CherryPick design in detail, with a focus on enabling L2/L3 packet trajectory tracing in a fat-tree network topology. We discuss, in §4, how this design can be generalized to other network topologies.

2.1 Preliminaries

The fat-tree topology. A k-ary fat-tree topology contains three layers of k-port switches: Top-of-Rack (ToR), aggregate (Agg) and core. A 4-ary fat-tree topology is presented in Figure 1 (ToR switches are nodes labeled T, Agg switches A, and core switches C). The topology consists of k pods, each of which has a layer of k/2 ToR switches and a layer of k/2 Agg switches.
k/2 ports of each ToR switch are directly connected to servers, and each of the remaining k/2 ports is connected to one of k/2 Agg switches. The remaining k/2 ports of each Agg switch are connected to k/2 core switches. There are a total of k^2/4 core switches, where port i on each core switch is connected to pod i. To ease the following discussion, we group the links present in the topology into two categories: i) intra-pod links and ii) pod-core links. An intra-pod link is one connecting a ToR and an Agg switch, whereas a pod-core link is one connecting an Agg and a core switch. Note that there are k^2/4 intra-pod links within a pod and a total of k^3/4 pod-core links. In addition, we define an affinity core segment, or affinity segment, to be the group of core switches that can be directly reached by the Agg switch at a particular position in each pod. In Figure 1, C1 and C2 in affinity segment 1 are directly reached by the Agg switches at the left-hand side of each pod (i.e., A1, A3, A5, and A7).

[Figure 1: A 4-ary fat-tree topology, with ToR switches T1-T8 and Agg switches A1-A8 (two of each per pod across Pods 1-4), core switches C1-C4 grouped into affinity segments 1 and 2, and hosts Src and Dst.]

Routing along non-shortest paths. While datacenter networks typically use shortest path routing, packets can traverse non-shortest paths for several reasons. First, node and/or link failures can force routing of packets along non-shortest paths [5, 18, 24]. Consider, for instance, the topology in Figure 1, where a packet is being routed between Src and Dst along the shortest path Src -> T1 -> A1 -> C1 -> A3 -> T3 -> Dst. If link C1-A3 fails upon the packet's arrival, the packet will be forced to traverse a non-shortest path. Second, recently proposed techniques reroute packets along alternate (potentially non-shortest) paths [20, 23, 25] to avoid congested links. Finally, misconfiguration may also create similar situations. We use detour to collectively refer to situations that force packets to traverse a non-shortest path.
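As a concrete check of these counts, the two link categories can be enumerated programmatically. The following sketch (function and variable names are our own, purely illustrative) builds the link lists for a k-ary fat-tree:

```python
# Sketch: enumerate the links of a k-ary fat-tree and check the counts
# stated above (k^2/4 intra-pod links per pod, k^3/4 pod-core links in
# total). Names and tuple encodings here are illustrative only.

def fat_tree_links(k):
    """Return (intra_pod, pod_core) link lists for a k-ary fat-tree."""
    h = k // 2
    intra_pod = []   # (pod, tor_index, agg_index)
    pod_core = []    # (pod, agg_index, core_index)
    for pod in range(k):
        for t in range(h):           # each ToR connects to every Agg in its pod
            for a in range(h):
                intra_pod.append((pod, t, a))
        for a in range(h):           # the Agg at position a reaches the cores
            for c in range(h):       # of affinity segment a via k/2 uplinks
                pod_core.append((pod, a, a * h + c))
    return intra_pod, pod_core

intra, core = fat_tree_links(4)
assert len(intra) == 4 * (4 ** 2 // 4)   # k^2/4 intra-pod links in each of k pods
assert len(core) == 4 ** 3 // 4          # k^3/4 pod-core links in total
```

For k = 4 this yields 16 links of each category, matching the formulas in the text.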
The ability to trace packet trajectories in the case of packet detours is thus important, since detours may reveal network failures and/or misconfiguration issues. However, packet detours complicate trajectory tracing due to the vastly increased number of possible paths, even in a medium-size datacenter. For instance, given a 48-ary fat-tree topology, the number of shortest paths (i.e., 4-hop paths) between a host pair in different pods is just 576. On the other hand, there exist almost 1.31 million 6-hop paths for the same host pair. As we show in §3, techniques that work well for tracing shortest paths [27] do not necessarily scale to the case of packet detours.

Detour model. As we discuss in §4, CherryPick is designed to work with arbitrary routing schemes. However, to ease the discussion in this paper, we focus on a simplified detour model where, once the packet is forwarded by the Agg switch in the source pod, it does not traverse any ToR switch other than the ones in the destination pod. For instance, in Figure 1, consider a packet traversing from Src in Pod 1 to Dst in Pod 2. Under this simplification, the packet visits none of the ToR switches T5, T6, T7 and T8 once it leaves T1. Of course, in practice, packets may visit ToR switches in non-destination pods. We leave a complete discussion of the general detour model to the full version.

2.2 Overview of CherryPick

We now give a high-level description of the CherryPick design. Consider the naïve approach that embeds in the packet header an identifier (ID) for each link that the packet traverses. For a 48-port switch, it is easy to see that this approach requires ceil(log2(48)) = 6 bits to represent each link. Indeed, the header space requirement for this naïve approach is far higher than the theoretical bound of log2(P) bits, where P is the number of paths between any source-destination pair. For tracing 4-hop paths, the naïve scheme requires 24 bits, whereas only 10 bits are theoretically required since P is 576 (P = k^2/4).
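The arithmetic above can be spelled out in a short sketch (illustrative only, not from the paper): the naïve scheme spends ceil(log2(k)) bits per hop, while the information-theoretic bound is ceil(log2(P)) bits for P candidate paths.

```python
# Sketch of the header-space arithmetic for the naive scheme versus the
# theoretical bound, for a 48-ary fat-tree. Names are illustrative.
from math import ceil, log2

k = 48                      # switch port count (48-ary fat-tree)
hops = 4                    # shortest inter-pod path length
P = k ** 2 // 4             # number of 4-hop paths between a host pair

naive_bits = hops * ceil(log2(k))
bound_bits = ceil(log2(P))

assert P == 576
assert naive_bits == 24     # 4 links x 6 bits per link
assert bound_bits == 10     # since 2^9 < 576 <= 2^10
```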
CherryPick builds upon the observation that datacenter network topologies are often well-structured and allow reconstructing the end-to-end path without actually storing each link as the
packet traverses. CherryPick thus cherry-picks a minimum number of links essential to represent an end-to-end path. For instance, for the fat-tree topology, it suffices to store the ID of the pod-core link to reconstruct any 4-hop path. To handle a longer path, in addition to picking a pod-core link, CherryPick selects one extra link every additional 2 hops. Hence, tracing any n-hop path (n >= 4) requires only (n-4)/2 + 1 links' worth of header space.[1] However, the cherry-picking of links makes it impossible to use local port IDs as link identifiers, and using global link IDs requires a large number of bits per ID due to the sheer number of links in the topology. CherryPick thus assigns link IDs in such a manner that each link ID requires fewer bits than a global ID and the end-to-end path between any source-destination pair can be reconstructed without any ambiguity. We discuss in §2.3 how CherryPick reconstructs end-to-end paths by cherry-picking the links along with a careful assignment of non-global link IDs.

CherryPick leverages VLAN tagging to embed the chosen links in the packet header. While the OpenFlow standard [19] does not dictate how many VLAN tags can be inserted in the header, commodity SDN switches typically support only IEEE 802.1ad double-tagging. With two tags, CherryPick can keep track of all 1.31 million 6-hop paths in the 48-ary fat-tree while keeping switch flow memory overhead low. As hardware programmability in SDN switches increases [3, 12], we expect that the issue raised by the limited number of tags will be mitigated.

2.3 Design

We now discuss the CherryPick design in depth. We focus on three aspects of the design: (1) selectively picking links that allow reconstructing the end-to-end path, and configuring switch rules to enable link picking; (2) careful assignment of link IDs to further minimize the packet header space; and (3) the path reconstruction process using the link IDs embedded in the packet header.

Picking links.
Consider a packet arriving at an input port. We call the link attached to the input port the ingress link. For each packet, every switch applies a simple link selection mechanism that can be easily converted into OpenFlow rules. If the packet matches one of the rules, the ingress link is picked and its ID is embedded into the packet using a VLAN tag. The following describes the link selection mechanism at each switch level, and Figure 2 shows flow rules derived from these mechanisms:

ToR: If a ToR switch receives the packet from an Agg switch and the packet's source belongs to the same pod, the switch picks the ingress link connected to the Agg switch that forwarded the packet. However, if both source and destination are in the same pod, the switch ignores the ingress link (we use the write_metadata command to implement the "do nothing" operation). In all other cases, no link is picked.

Aggregate: If an Agg switch receives the packet from a ToR switch and the packet's destination is in the same pod, the ingress link is chosen. Otherwise, no link is picked.

Core: A core switch always picks the ingress link.

Using the above set of rules, CherryPick selects the minimum number of links required to reconstruct the end-to-end path of any packet. For ease of exposition, we present four examples in Figure 3. First, Figure 3(a) illustrates the baseline 4-hop scenario. In this scenario, core switch C2 picks only its ingress link, while the other switches on the path (e.g., A1, A3 and T3) do nothing.

[1] A 2-hop path is the shortest path between servers in the same pod, for which CherryPick simply picks one intra-pod link at the Agg switch.
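The per-layer selection mechanism above can be sketched as a plain predicate, with OpenFlow matching abstracted away; the function name and argument encoding are our own, purely illustrative, not the paper's implementation:

```python
# Sketch of the link-selection decision at each switch layer. A switch
# returns True when it should pick its ingress link and push the link's
# ID as a VLAN tag. All names here are illustrative.

def picks_ingress(layer, from_layer, src_pod, dst_pod, switch_pod):
    if layer == "core":                        # core: always pick
        return True
    if layer == "agg":                         # Agg: packet from ToR, dst in same pod
        return from_layer == "tor" and dst_pod == switch_pod
    if layer == "tor":                         # ToR: packet from Agg, src in same pod,
        return (from_layer == "agg"            # unless the flow stays inside the pod
                and src_pod == switch_pod
                and dst_pod != switch_pod)
    return False

# Baseline 4-hop path Src -> T1 -> A1 -> C2 -> A3 -> T3 -> Dst (pod 1 to pod 2):
assert picks_ingress("core", "agg", 1, 2, None)        # C2 picks
assert not picks_ingress("agg", "core", 1, 2, 2)       # A3 does not
assert not picks_ingress("tor", "agg", 1, 2, 2)        # T3 does not
```

For the source-pod detour of Figure 3(b), the ToR predicate fires at T2 (source in its pod, destination elsewhere), matching the example in the text.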
Figure 2: OpenFlow table entries at each switch layer for the 4-ary fat-tree.

(a) ToR switch:
  Port | IP Src        | IP Dst        | Action
  3    | 10.pod.0.0/16 | 10.pod.0.0/16 | write_metadata: 0x0/0x
  4    | 10.pod.0.0/16 | 10.pod.0.0/16 | write_metadata: 0x0/0x
  3    | 10.pod.0.0/16 | *             | push_vlan_id: linkid(3)
  4    | 10.pod.0.0/16 | *             | push_vlan_id: linkid(4)

(b) Aggregate switch:
  Port | IP Src | IP Dst        | Action
  1    | *      | 10.pod.0.0/16 | push_vlan_id: linkid(1)
  2    | *      | 10.pod.0.0/16 | push_vlan_id: linkid(2)

(c) Core switch:
  Port | IP Src | IP Dst | Action
  1    | *      | *      | push_vlan_id: linkid(1)
  2    | *      | *      | push_vlan_id: linkid(2)
  3    | *      | *      | push_vlan_id: linkid(3)
  4    | *      | *      | push_vlan_id: linkid(4)

In this example, addresses follow the form 10.pod.switch.host, where pod denotes the pod number (where the switch is), switch denotes the position of the switch in the pod, and host denotes the sequential ID of each host. These entries are stored in a separate table placed at the beginning of the table pipeline. In (a), 3 and 4 are the port numbers connected to the Agg layer. In (b), 1 and 2 are the port numbers connected to the ToR layer.

In case of one detour at the source pod (Figure 3(b)), T2 and C2 each pick the ingress link of the packet, while the other switches do not. A similar picking process occurs in case of one detour at the destination pod (Figure 3(c)). When one detour occurs between aggregate and core switches (Figure 3(d)), only the core switches that see the detoured packet pick the ingress links.

Link ID assignment. CherryPick assigns IDs for intra-pod links and pod-core links separately. i) Intra-pod links: Since pods are separated by core switches, the same set of link IDs is used across all pods. Since each pod has (k/2)^2 intra-pod links, we need (k/2)^2 IDs. Links at the same position across pods have the same link ID. ii) Pod-core links: Assigning IDs to pod-core links such that the correct end-to-end path can be reconstructed, while minimizing the number of bits required to represent the ID, is non-trivial. Indeed, one way to assign IDs is to consider all pod-core links and assign each link a unique ID.
However, there are k^3/4 pod-core links, and such an approach would require log2(k^3/4) bits per link ID. We show that this can be significantly reduced by viewing the problem as an edge coloring problem on a complete bipartite graph. In this problem, the goal is to assign colors to the edges of the graph such that adjacent edges of a vertex have different colors. Edge-coloring a complete bipartite graph requires α different colors, where α is the graph's maximum degree. To view a fat-tree as a complete bipartite graph with two disjoint sets, we treat each pod as a vertex in the first set and each affinity core segment as a vertex in the second set (compare Figures 4(a) and 4(b)). We group the edges (links) from an Agg switch to an affinity core segment and supersede them by one edge that we call a cluster edge (Figure 4(b)). Since the maximum degree is k (i.e., the number of cluster edges at an affinity core segment), we need k different colors to edge-color this bipartite graph. Note that one cluster edge is a collection of k/2 links. Therefore, we need k different color sets such that each color set has k/2
different colors, and any two color sets are disjoint (Figure 4(c)). Thus, in total, k(k/2) different colors are required. The actual color assignment is done by applying a near-linear time algorithm [6]. Figure 4(d) shows the resulting color allocation.

[Figure 3: An illustration of link cherry-picking in CherryPick on a 4-ary fat-tree; each panel shows the packet trajectory, the picked links and the picking nodes. (x, y) means x hops with a detour at location y: (a) (4, -), (b) (6, src pod), (c) (6, dst pod), (d) (6, core).]

[Figure 4: Edge-coloring pod-core links in a 4-ary fat-tree: (a) 4-ary fat-tree; (b) a bipartite graph of pods (Pods 1-4) and affinity segments (Segments 1-2) connected by cluster edges; (c) uniquely colored links; (d) edge-colored fat-tree.]

Putting it together, the number of unique IDs required is 3k^2/4 ((k/2)^2 for the intra-pod links and k(k/2) for the pod-core links). Thus, CherryPick requires a total of log2(3k^2/4) bits to represent each link. For 48-ary and 72-ary fat-trees, CherryPick requires just 11 and 12 bits respectively to represent each link. CherryPick can thus support an up-to 72-ary fat-tree topology using the 12 bits available in a VLAN tag.

Path reconstruction. In our scheme, when a packet reaches its destination, it contains the minimum set of link IDs necessary to reconstruct the complete path. To keep it simple, suppose that the link IDs in the header are extracted and stored in the order in which they were selected. At the destination, a network topology graph is given, and each link in the graph is annotated with one ID. The path reconstruction process begins from the source ToR switch in the graph. Initially, a list S contains the source ToR switch.
Until all the link IDs are consumed, the following steps are executed: i) take one link ID (say, l) from the ID list and find, in the topology graph, a link whose ID is l (if l is in the intra-pod ID space, search for the link in either the source or the destination pod, depending on whether the pod-core link has been consumed; otherwise, search for it in the current affinity segment); ii) identify the two switches (say, s_a and s_b) that form the link; iii) of the two, choose the one (say, s_a) closer to the switch (say, s_r) that was most recently added to S; iv) find a shortest path (which is simple because it is either a 1-hop or a 2-hop path) between s_a and s_r, and add all intermediate nodes (those closer to s_r first) and then s_a to S; v) add the remaining switch s_b to S. After all link IDs are consumed, we add to S the switches that form a shortest path from the last switch included in S to the destination ToR switch. Finally, we obtain the complete path by enumerating the switches in S.

3. EVALUATION

In this section, we present preliminary evaluation results for CherryPick, and compare it against PathletTracer [27] over a 48-ary fat-tree topology. We evaluate the two schemes in terms of the number of switch flow rules (§3.1), packet header space (§3.2) and end-host resources (§3.3) required for tracing packet trajectories. While preliminary, our evaluation suggests that:

CherryPick requires a minimal number of switch flow rules to trace packet trajectories. In particular, CherryPick requires as many rules as the number of ports per switch. In contrast, PathletTracer requires a number of switch flow rules linear in the number of paths that the switch belongs to. For tracing 6-hop paths in a 48-ary fat-tree topology, for instance, CherryPick requires three orders of magnitude fewer switch rules than PathletTracer while supporting similar functionality.

CherryPick requires packet header space close to state-of-the-art techniques.
Compared to PathletTracer, CherryPick trades off slightly higher packet header space requirements for significantly improved scalability in terms of the number of switch flow rules required to trace packet trajectories.

CherryPick requires minimal resources at the end hosts for tracing packet trajectories. In particular, CherryPick requires as much as three orders of magnitude fewer entries at the destination than PathletTracer for tracing 6-hop paths on a 48-ary fat-tree topology.

3.1 Switch flow rules

CherryPick, for any given switch, requires as many flow rules as the number of ports at that switch. In contrast, for any given switch, PathletTracer requires as many switch flow rules as the number of paths that contain that switch. Since the latter depends on the layer at which the switch resides, we plot the number of switch flow rules for CherryPick and PathletTracer for each layer separately (see Figure 5). We observe that for tracing shortest paths only, the number of switch flow rules required by PathletTracer is comparable to that required by CherryPick. However, if one desires to trace non-shortest paths (e.g., in case of failures), the switch flow rule requirement of PathletTracer grows super-linearly with the number of hops constituting the paths.[2]

[2] The number of switch flow rules required by PathletTracer could be reduced by tracing pathlets (sub-paths of an end-to-end path), at the cost of coarser tracing compared to CherryPick.
[Figure 5: CherryPick requires a number of switch flow rules comparable to PathletTracer for tracing shortest paths; panels (a) ToR, (b) Aggregate, (c) Core. For tracing non-shortest paths (e.g., paths packets may traverse in case of failures), the number of switch flow rules required by PathletTracer increases super-linearly, whereas the number required by CherryPick remains constant.]

[Figure 6: CherryPick requires packet header space comparable to PathletTracer for tracing packet trajectories. In particular, for tracing 6-hop paths in a 48-ary fat-tree topology, CherryPick requires 22 bits while PathletTracer requires 21 bits worth of header space.]

For tracing 6-hop paths, for instance, PathletTracer requires over a million rules on a ToR switch, tens of thousands of rules on an Aggregate switch, and thousands of rules on a Core switch. This means that PathletTracer would not scale well for path tracing at the L3 layer, because packets may follow non-shortest paths. In contrast, CherryPick requires a small number of switch flow rules, independent of path length.

3.2 Packet header space

We now evaluate the number of bits in the packet header required to trace packet trajectories (see Figure 6). Recall from §2 that to enable tracing of any n-hop path in a fat-tree topology, CherryPick requires embedding (n-4)/2 + 1 links in the packet header. The number of bits required to uniquely represent each link increases logarithmically with the number of ports per switch; for a 48-ary fat-tree topology, each link requires 11 bits worth of space. PathletTracer requires log2(P) bits worth of header space, where P is the number of paths between the source and the destination. We observe that CherryPick requires slightly more packet header space than PathletTracer (especially for longer paths); however, as discussed earlier, CherryPick trades off this slightly higher header space requirement for significantly improved scalability in terms of switch flow rules.
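Using the formulas above, the 22-bit versus 21-bit comparison in Figure 6 can be reproduced with a short sketch (the 6-hop path count is the ~1.31 million figure from §2.2; names are illustrative):

```python
# Sketch of the header-space comparison for 6-hop paths in a 48-ary
# fat-tree: CherryPick embeds (n-4)/2 + 1 link IDs of ceil(log2(3k^2/4))
# bits each; PathletTracer needs ceil(log2(P)) bits for P paths.
from math import ceil, log2

k = 48
bits_per_link = ceil(log2(3 * k ** 2 // 4))   # 11 bits for k = 48

def cherrypick_bits(n_hops):
    links = (n_hops - 4) // 2 + 1             # links embedded for an n-hop path
    return links * bits_per_link

P6 = 1_310_000                                # ~6-hop paths per host pair (from the paper)
assert bits_per_link == 11
assert cherrypick_bits(6) == 22               # two 11-bit link IDs
assert ceil(log2(P6)) == 21                   # PathletTracer's requirement
```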
3.3 End host resources

Finally, we compare the performance of CherryPick against PathletTracer in terms of the resource requirements at the end host. Each of the two schemes stores certain entries at the end host to trace the packet trajectory using the information carried in the packet header. Processing the packet header requires relatively simple lookups into the stored entries for both schemes, which is a lightweight operation. We hence quantify only the number of entries required by both schemes.

[Figure 7: CherryPick requires significantly fewer entries than PathletTracer at each end host to trace packet trajectories. In particular, for tracing 6-hop paths in a 48-ary fat-tree topology, CherryPick requires less than 1MB worth of entries while PathletTracer requires approximately 12GB worth of entries at each end host.]

CherryPick stores, at each destination, the entire set of network links, each annotated with a unique identifier. PathletTracer requires storing a codebook, where each entry is a code assigned to one of the unique paths. For a fair comparison, we assume that individual hosts in PathletTracer store an equal-sized subset of the codebook that is relevant only to them. Figure 7 shows that PathletTracer needs to store a non-trivial number of entries if non-shortest paths need to be traced. For instance, PathletTracer requires more than 10^9 entries at each end host to trace 6-hop paths, which translates to approximately 12GB worth of entries, as each entry is about 12 bytes long. As discussed earlier, this overhead for PathletTracer could be reduced at the cost of tracing at coarser granularities. In contrast, since CherryPick stores only the set of links constituting the network, it requires a small, fixed number of entries (~56K entries per host for a 48-ary fat-tree topology).
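The ~56K figure follows directly from the link counts of §2.1; a minimal arithmetic sketch (variable names are illustrative):

```python
# Sketch of the end-host storage estimate: CherryPick stores one entry
# per network link, and a k-ary fat-tree has k * (k^2/4) intra-pod links
# plus k^3/4 pod-core links, i.e. k^3/2 links in total.
k = 48
intra_pod_links = k * (k ** 2 // 4)   # k^2/4 links in each of k pods
pod_core_links = k ** 3 // 4
total_links = intra_pod_links + pod_core_links

assert total_links == k ** 3 // 2
assert total_links == 55_296          # ~56K entries per host, as stated
```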
4. DISCUSSION

Preliminary evaluation (§3) suggests that CherryPick can enable packet trajectory tracing functionality in SDN-enabled datacenter networks while efficiently utilizing data plane resources. In this section, we discuss a number of research challenges for CherryPick that constitute our ongoing work.
Generalization to other network topologies. In this paper, we focused on the CherryPick design for the fat-tree topology. A natural question is whether it is possible to generalize CherryPick to other datacenter network topologies (e.g., VL2, DCell, BCube, HyperX, etc.). While these topologies differ, we observe that they share a common structure: they can be broken down into bipartite-like graphs. Intuitively, CherryPick may be able to exploit this property to cherry-pick a subset of links in the path. Generalizing CherryPick to these topologies, and devising techniques to automatically produce the corresponding flow rules, constitutes our main ongoing work.

Impact of routing schemes and link failures. The current design of CherryPick is agnostic to the routing protocol used in the network. In particular, the design does not assume that packets are routed along shortest paths (e.g., the 6-hop paths in the evaluation results of §3). In fact, since each packet carries all the information required to trace its own trajectory, CherryPick works even when packets within a single flow traverse different paths [8, 21]. The design also works when packets encounter link failures on their way to the destination: such packets may traverse a non-shortest path, and CherryPick will still pick the necessary links and be able to reconstruct the end-to-end trajectory.

Impact of packet drops. One of the assumptions made in the design of CherryPick is that the packet trajectory is traced when the packet reaches the destination. However, a packet may not reach the destination for a multitude of reasons, including packet drops due to network congestion, routing loops,[3] or switch misconfiguration (e.g., race conditions [11]). Note that this problem is fundamental to most techniques that reconstruct the packet trajectory at the destination [27].
Prior work suggests partially mitigating this issue by forwarding the header of the dropped packet to the controller and/or a designated server for analysis. The challenge for CherryPick here is that, because it selectively picks the links that represent an end-to-end path, it may be difficult to identify the precise location of a packet drop. We are currently investigating this issue.

Impact of controller and switch misconfiguration. At a minimum, the flow rules of CherryPick must be correctly pre-installed at switches, and the switches must insert correct link identifiers into packet headers. However, if the SDN controller and switches behave erratically with respect to inserting correct link identifiers into packet headers, it may not be possible for CherryPick to trace the correct packet trajectory. Note that both of these issues are related to the correctness of the SDN control plane. Indeed, there has been significant recent work [2, 4, 15] on ensuring the correctness of control plane behavior in SDN-enabled networks. In contrast, CherryPick focuses on debugging the SDN data plane by enabling packet trajectory tracing as packets traverse the network data plane.

Packet marking. Similar to other schemes that rely on packet marking, CherryPick faces the challenge of limited packet header space. CherryPick currently relies on the multiple VLAN tagging capability specified in the SDN standard [19] to partially mitigate this issue. Indeed, adding more than two tags is possible in commodity OpenFlow switches (e.g., Pica8 switches), but their ASICs understand only two VLAN tags. In the current implementation of commodity OpenFlow switches, more than two tags disrupt other rules that match against fields in the IP and TCP layers.

[3] In OpenFlow switches, the Decrement IP TTL action is supported but not enabled by default. If enabled, routing loops may result in packet drops.
We believe that this limitation may become less severe as SDN switch hardware becomes more reconfigurable, as proposed in several other research works [3, 12].

5. RELATED WORK

We now compare and contrast CherryPick against key related work [1, 11, 13, 14, 16, 17, 26, 27]. PathQuery [17] and PathletTracer [27], like CherryPick, enable tracing of packet trajectories by embedding the required information into the packet header. While these approaches are agnostic to the underlying network topology, they require a large number of switch flow rules even in moderate-size datacenter networks. In contrast, CherryPick aggressively exploits the structure of datacenter network topologies to enable similar functionality while requiring significantly fewer switch flow rules and comparable packet header space. Several other techniques attempt to minimize the data plane resources needed to trace packet trajectories. In particular, approaches like [7, 22] propose to keep track of trajectories on a per-flow basis by partially marking path information across multiple packets belonging to the same flow. However, these techniques do not work for short (mice) flows or when packets in a flow are split over multiple paths. Data plane resources for tracing packet trajectories can also be minimized by injecting test packets [1, 26] and using the paths taken by these packets as a proxy for the paths taken by user traffic. However, short-term network dynamics may cause test packets and user traffic to be routed along different paths, so these techniques may produce imprecise inferences. NetSight [11] is an out-of-band approach in that it traces packet trajectories by collecting logs at switches along the packet trajectory. In contrast, CherryPick is an in-band approach, as packets themselves carry path information. CherryPick also differs from approaches like VeriFlow [14], Anteater [16] and Header Space Analysis [13].
In particular, these approaches rely either on the flow rules installed at switches [14] or on data plane configuration information [13, 16] to perform data plane debugging (e.g., checking for loops or reachability). In contrast, CherryPick traces the trajectories of real traffic directly on the data plane, with no reliance on such information.

Recent advances in data plane programmability [3, 12] make it easier to enable numerous network debugging functionalities, including packet trajectory tracing. CherryPick can efficiently implement path tracing on top of such flexible platforms; our approach is thus complementary to them.

6. CONCLUSION

We have presented CherryPick, a simple yet scalable technique for tracing packet trajectories in SDN-enabled datacenter networks. The core insight in CherryPick is that the structure of datacenter network topologies makes it possible to reconstruct end-to-end paths from a few essential links. To that end, CherryPick cherry-picks a subset of the links along a packet's trajectory and embeds them into the packet header. Applying CherryPick to a fat-tree topology, we showed that it requires minimal switch flow rules to trace packet trajectories, while requiring packet header space close to that of state-of-the-art techniques.

Acknowledgments

We thank the anonymous reviewers for their insightful comments. This work was supported by EPSRC grant EP/L02277X/1.
7. REFERENCES

[1] K. Agarwal, E. Rozner, C. Dixon, and J. Carter. SDN Traceroute: Tracing SDN Forwarding Without Changing Network Behavior. In ACM HotSDN.
[2] T. Ball, N. Bjørner, A. Gember, S. Itzhaky, A. Karbyshev, M. Sagiv, M. Schapira, and A. Valadarsky. VeriCon: Towards Verifying Controller Programs in Software-Defined Networks. In ACM PLDI.
[3] P. Bosshart, G. Gibb, H.-S. Kim, G. Varghese, N. McKeown, M. Izzard, F. Mujica, and M. Horowitz. Forwarding Metamorphosis: Fast Programmable Match-Action Processing in Hardware for SDN. In ACM SIGCOMM.
[4] M. Canini, D. Venzano, P. Perešíni, D. Kostić, and J. Rexford. A NICE Way to Test OpenFlow Applications. In USENIX NSDI.
[5] M. Chiesa, I. Nikolaevskiy, A. Panda, A. Gurtov, M. Schapira, and S. Shenker. Exploring the Limits of Static Failover Routing. CoRR, abs/ .
[6] R. Cole, K. Ost, and S. Schirra. Edge-Coloring Bipartite Multigraphs in O(E log D) Time. Combinatorica, 21(1).
[7] D. Dean, M. Franklin, and A. Stubblefield. An Algebraic Approach to IP Traceback. ACM Transactions on Information and System Security, 5(2).
[8] A. Dixit, P. Prakash, Y. C. Hu, and R. R. Kompella. On the Impact of Packet Spraying in Data Center Networks. In IEEE INFOCOM.
[9] N. G. Duffield and M. Grossglauser. Trajectory Sampling for Direct Traffic Observation. IEEE/ACM ToN, 9(3).
[10] P. Gill, N. Jain, and N. Nagappan. Understanding Network Failures in Data Centers: Measurement, Analysis, and Implications. In ACM SIGCOMM.
[11] N. Handigol, B. Heller, V. Jeyakumar, D. Mazières, and N. McKeown. I Know What Your Packet Did Last Hop: Using Packet Histories to Troubleshoot Networks. In USENIX NSDI.
[12] V. Jeyakumar, M. Alizadeh, Y. Geng, C. Kim, and D. Mazières. Millions of Little Minions: Using Packets for Low Latency Network Programming and Visibility. In ACM SIGCOMM.
[13] P. Kazemian, G. Varghese, and N. McKeown. Header Space Analysis: Static Checking for Networks. In USENIX NSDI.
[14] A. Khurshid, X. Zou, W. Zhou, M. Caesar, and P. B. Godfrey. VeriFlow: Verifying Network-Wide Invariants in Real Time. In USENIX NSDI.
[15] M. Kuzniar, P. Peresini, M. Canini, D. Venzano, and D. Kostic. A SOFT Way for OpenFlow Switch Interoperability Testing. In ACM CoNEXT.
[16] H. Mai, A. Khurshid, R. Agarwal, M. Caesar, P. B. Godfrey, and S. T. King. Debugging the Data Plane with Anteater. In ACM SIGCOMM.
[17] S. Narayana, J. Rexford, and D. Walker. Compiling Path Queries in Software-Defined Networks. In ACM HotSDN.
[18] S. Nelakuditi, S. Lee, Y. Yu, Z.-L. Zhang, and C.-N. Chuah. Fast Local Rerouting for Handling Transient Link Failures. IEEE/ACM ToN.
[19] Open Networking Foundation. OpenFlow Switch Specification.
[20] P. Prakash, A. Dixit, Y. C. Hu, and R. Kompella. The TCP Outcast Problem: Exposing Unfairness in Data Center Networks. In USENIX NSDI.
[21] C. Raiciu, S. Barre, C. Pluntke, A. Greenhalgh, D. Wischik, and M. Handley. Improving Datacenter Performance and Robustness with Multipath TCP. In ACM SIGCOMM.
[22] S. Savage, D. Wetherall, A. Karlin, and T. Anderson. Practical Network Support for IP Traceback. In ACM SIGCOMM.
[23] F. P. Tso, G. Hamilton, R. Weber, C. S. Perkins, and D. P. Pezaros. Longer Is Better: Exploiting Path Diversity in Data Center Networks. In IEEE ICDCS.
[24] B. Yang, J. Liu, S. Shenker, J. Li, and K. Zheng. Keep Forwarding: Towards k-Link Failure Resilient Routing. In IEEE INFOCOM.
[25] K. Zarifis, R. Miao, M. Calder, E. Katz-Bassett, M. Yu, and J. Padhye. DIBS: Just-in-Time Congestion Mitigation for Data Centers. In ACM EuroSys.
[26] H. Zeng, P. Kazemian, G. Varghese, and N. McKeown. Automatic Test Packet Generation. IEEE/ACM ToN, 22(2).
[27] H. Zhang, C. Lumezanu, J. Rhee, N. Arora, Q. Xu, and G. Jiang. Enabling Layer 2 Pathlet Tracing Through Context Encoding in Software-Defined Networking. In ACM HotSDN.
[28] Y. Zhang, M. Roughan, N. Duffield, and A. Greenberg. Fast Accurate Computation of Large-Scale IP Traffic Matrices from Link Loads. In ACM SIGMETRICS, 2003.
