Ethernet-based Software Defined Network (SDN)
Tzi-cker Chiueh
Cloud Computing Research Center for Mobile Applications (CCMA), ITRI
Cloud Data Center Architecture
[Figure: physical servers hosting VMs (VM0, VM1, ..., VMn) in compute and storage server racks, connected by data center network fabrics to Layer-3 border routers; in-line network services include load balancing, traffic shaping, intrusion detection, and NAT/VPN]
Cloud Data Center Network
- Cloud data centers are big and shared
- Scalable and available data center fabrics
  - Not all links are used
  - No load-sensitive routing
  - Fail-over latency is high (> 5 seconds)
- Network virtualization: each virtual data center (VDC) gets to define its own network
  - VMs in a VDC belong to one or multiple subnets (broadcast domains)
  - Each VDC has its own private IP address space
  - Each VDC has a set of public IP addresses
  - Each VDC has a set of external VPN connections
  - Each VDC has its own Internet traffic shaping policy, intra-VDC and inter-VDC firewalling policy, and server load balancing policy
SDN Software Architecture
[Figure: user applications and management software sit on top of control plane applications; each control plane application talks to the controller through the northbound API; the controller programs the data plane of virtual switches, OpenFlow switches, and Ethernet switches through the southbound API]
SDN ≠ OpenFlow
Can we apply SDN to Ethernet switches, especially in the cloud data center space? Peregrine
Peregrine
- A unified Layer-2-only data center network for LAN and SAN traffic
- An SDN architecture using only commodity Ethernet switches: centralized control plane and distributed data plane
- Turn off Ethernet's control protocols: spanning tree, source learning, flooding of unknown-destination-MAC packets, broadcast of ARP and DHCP
  - VLAN is optional
- Centralized load-balancing routing using a real-time traffic matrix
- Fast fail-over using pre-computed primary/backup routes
- Native support for network virtualization
  - Private IP address space reuse
  - Multiple subnets per virtual network
Peregrine Software Architecture
[Figure: a route server and a directory server control Ethernet switch-based network fabrics; each physical server runs a Peregrine agent in the hypervisor beneath its VMs (e.g., VM2, VM3)]
Dynamic Traffic Engineering
- Periodic collection of a real-time traffic matrix
  - Traffic volume between each pair of VMs
  - Traffic volume between each pair of PMs
- Load-balancing routing algorithm considers:
  - Loads on the physical links
  - Number of hops
  - Forwarding table entries
  - Prioritization
- Computed routes are programmatically installed in the forwarding tables of the Ethernet switches
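The routing criteria above can be sketched as a shortest-path search whose primary cost is the worst link load along a path, with hop count as a tie-breaker. This is a minimal illustration of the idea, not Peregrine's actual algorithm; all names and the topology format are assumptions.

```python
import heapq

def least_loaded_path(links, load, src, dst):
    """Pick a path minimizing the maximum link load, breaking ties by
    hop count. `links` maps a node to its neighbors; `load` maps a
    (node, neighbor) pair to its measured utilization in [0, 1]."""
    # Heap entries: (worst load so far, hop count, node, path so far)
    heap = [(0.0, 0, src, [src])]
    best = {}
    while heap:
        worst, hops, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if best.get(node, (2.0, float("inf"))) <= (worst, hops):
            continue  # already reached this node via a better state
        best[node] = (worst, hops)
        for nxt in links[node]:
            if nxt in path:
                continue  # avoid loops
            w = max(worst, load[(node, nxt)])
            heapq.heappush(heap, (w, hops + 1, nxt, path + [nxt]))
    return None
```

With a traffic matrix feeding `load` periodically, recomputing and reinstalling these routes gives the load-sensitive routing that plain spanning-tree Ethernet lacks.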
Fast Failure Recovery
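The mechanism named earlier, pre-computed primary/backup routes, can be sketched as follows: each destination carries two pre-installed paths, and a link-down notification simply flips affected traffic to the backup, with no route recomputation in the failure path. The class and method names are illustrative, not Peregrine's actual code.

```python
class FailoverTable:
    """Minimal sketch of fail-over via pre-computed primary/backup routes."""

    def __init__(self):
        self.routes = {}   # dst -> (primary_path, backup_path)
        self.active = {}   # dst -> path currently in use

    def install(self, dst, primary, backup):
        """Install both paths up front; primary is active initially."""
        self.routes[dst] = (primary, backup)
        self.active[dst] = primary

    def link_down(self, link):
        """Called from a link-down event handler (e.g., an SNMP trap):
        any destination whose active path crosses the failed link is
        switched to its pre-computed backup immediately."""
        a, b = link
        for dst, (primary, backup) in self.routes.items():
            path = self.active[dst]
            hops = list(zip(path, path[1:]))
            if (a, b) in hops or (b, a) in hops:
                self.active[dst] = backup
```

Because the backup is already in place, recovery latency is bounded by failure detection and table lookup rather than by a routing protocol's reconvergence, which is the point of contrast with the >5-second fail-over of conventional fabrics.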
Network Virtualization
- Multiple virtual networks running on a single physical network
- The network of each virtual data center (VDC) consists of:
  - VMs whose MAC addresses are pre-assigned
  - A single layer-2 network
  - A complete private IP address space, organized into multiple subnets, each with its own broadcast domain
  - A set of public IP addresses
  - Its own copy of the DHCP and DNS services
- Security: intra-VDC and inter-VDC firewall policy
- SLA: traffic shaping policy
- Scalability: server load balancing policy
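The per-VDC state enumerated above can be sketched as a small data model; two VDCs can hold the same private subnet because each subnet is scoped to its VDC's own broadcast domain. Field and method names here are illustrative, not Peregrine's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualDataCenter:
    """Sketch of per-VDC state: private subnets with their member VMs,
    public addresses, and per-tenant policies."""
    name: str
    subnets: dict = field(default_factory=dict)      # CIDR -> list of VM MACs
    public_ips: list = field(default_factory=list)
    firewall_policy: dict = field(default_factory=dict)
    traffic_shaping: dict = field(default_factory=dict)

    def add_vm(self, subnet_cidr, mac):
        # MAC addresses are pre-assigned; each subnet is its own
        # broadcast domain within this VDC's layer-2 network.
        self.subnets.setdefault(subnet_cidr, []).append(mac)

    def broadcast_domain(self, mac):
        """Return the subnet (broadcast domain) a VM belongs to."""
        for cidr, macs in self.subnets.items():
            if mac in macs:
                return cidr
        return None
```

Private address reuse falls out naturally: `VirtualDataCenter("a")` and `VirtualDataCenter("b")` can both carry `10.0.1.0/24` without conflict, since forwarding is keyed by (VDC, subnet), not by the bare prefix.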
Peregrine in the SDN Framework
- Data plane: Ethernet switches
- Southbound API: SNMP and CLI
- Controller: (1) physical network resource set-up, (2) physical topology record keeping, (3) SNMP trap processing, (4) Ethernet switch configuration, including forwarding table programming, (5) traffic load information
- Northbound API:
  - Failure/congestion notification, including SNMP trap packet delivery
  - ARP request packet delivery
  - Forwarding table programming
  - Physical topology/traffic load querying
- Control plane applications:
  - Dynamic traffic engineering
  - Fast fail-over for data/control plane failures
  - Network virtualization
Current Status
- A fully operational Peregrine prototype that works on a 10-switch and 100-server test-bed
  - Start-up and shut-down without an out-of-band control network
  - Fail-over for both data plane and control plane failures
- To-do items:
  - Refactor Peregrine in the OpenDaylight software framework
  - Encapsulate Peregrine's network virtualization capability into a Quantum plug-in implementation
  - Embed dynamic load balancing and fast fail-over logic inside an open-source OpenFlow controller
  - An agentless implementation of Peregrine
Ethernet SAL Plugin
- Start with a subset of the SAL API, which is designed for OpenFlow switches
- Define their semantics on Ethernet switches
- Add a set of new APIs
Manipulation of Ethernet Switches (1/2)
1. Disable STP: make all ports change to the forwarding state
2. Disable BPDU flooding: prevent the switch from flooding BPDU packets when STP is off
3. Per-port enable/disable of ingress/egress flooding: prevent broadcast storms
4. Per-port enable/disable of ingress source MAC checking: allow asymmetric routes
Manipulation of Ethernet Switches (2/2)
5. Per-port enable/disable of source MAC learning: all routes are static
6. Write/delete and read the forwarding database: insert/modify/delete routing rules
7. Set up the SNMP host and community: collect link up/down event traps
8. Set up LLDP and collect LLDP tables: discover the network topology
9. Set up ACL rules
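Since the southbound interface is SNMP and CLI rather than OpenFlow, steps like the forwarding-database write (step 6) boil down to rendering configuration lines and pushing them to the switch. A minimal sketch of that rendering; the command syntax shown is illustrative only, and real vendor CLIs differ:

```python
def fdb_write_commands(vlan, mac, port):
    """Render a static forwarding-database entry (step 6 above) as a
    list of CLI lines. The syntax is a generic illustration, not any
    specific vendor's command set."""
    return [
        "configure terminal",
        f"mac-address-table static {mac} vlan {vlan} interface {port}",
        "end",
    ]
```

A controller-side driver would send such lines over an SSH/Telnet session per switch, while SNMP handles the read side (traps for link events, LLDP and traffic counters for topology and load).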
Corresponding SAL APIs (1/2)

Ethernet switch manipulation: add/modify/remove ACL
SAL API: sal.flowprogrammer
- IPluginInFlowProgrammerService: addFlow(), addFlowAsync(), modifyFlow(), modifyFlowAsync(), removeAllFlows(), removeFlow(), removeFlowAsync()
- IPluginOutFlowProgrammerService: flowErrorReported(), flowRemoved()
Corresponding SAL APIs (2/2)

Ethernet switch manipulation: collect LLDP table
SAL API: sal.inventory, sal.topology
- IPluginInInventoryService: getNodeConnectorProps(), getNodeProps()
- IPluginInTopologyService: sollicitRefresh()
Proposed New SAL APIs
- disableSTP()
- disableBPDUFlooding()
- disableBroadcastFlooding()
- disableMulticastFlooding()
- disableUnknownFlooding()
- disableSourceMACCheck()
- disableSourceLearning()
- addFDBEntry()
- modifyFDBEntry()
- deleteFDBEntry()
- setSNMPHost()
- setSNMPComm()
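A sketch of how a subset of these proposed calls might behave, written in Python for brevity; the real SAL extension would be a Java service interface in OpenDaylight, and the state layout here is an assumption for illustration.

```python
class EthernetSwitchSAL:
    """Illustrative in-memory rendering of the proposed Ethernet SAL
    extensions: per-node feature toggles plus a forwarding database."""

    def __init__(self):
        self.fdb = {}       # (node, MAC) -> egress port
        self.disabled = set()  # (node, feature) pairs explicitly turned off

    def disable_stp(self, node):
        # Per the plugin's model: no spanning tree, all ports forwarding
        self.disabled.add((node, "stp"))

    def disable_source_learning(self, node):
        # All routes are static; the controller owns the FDB
        self.disabled.add((node, "source-learning"))

    def add_fdb_entry(self, node, mac, port):
        self.fdb[(node, mac)] = port

    def delete_fdb_entry(self, node, mac):
        self.fdb.pop((node, mac), None)
```

A real implementation would back each method with the SNMP set or CLI push from the earlier slides, keeping this object as the controller's view of intended switch state.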
Development Schedule

Milestone      | Date  | Deliverables
M2             | 8/21  | Final release plan
M3             | 9/18  | A plugin that supports Ethernet-based SDN
M4             | 10/16 | A plugin that implements all Ethernet-compatible OpenDaylight SAL APIs
M5             | 11/13 | Extended APIs in SAL
RC0            | 11/20 | RC0
RC1            | 11/27 | RC1
RC2            | 12/4  | RC2
Formal Release | 12/11 | Release 1.0
Thank You!
Questions and Comments?
tcc@itri.org.tw