OpenStack and Cumulus Linux Validated Design Guide
Deploying OpenStack with Network Switches Running Cumulus Linux
OPENSTACK AND CUMULUS LINUX: VALIDATED DESIGN GUIDE

Contents

OpenStack with Cumulus Linux
    Objective
    Enabling Choice of Hardware in the Data Center
    Combined Solution Using OpenStack and Cumulus Linux
    Driving Towards Operational Efficiencies
    Intended Audience for Network Design and Build
OpenStack Network Architecture in a PoC or Small Test/Dev Environment
    Network Architecture and Design Considerations
OpenStack Network Architecture in a Cloud Data Center
    Network Architecture
    Scaling Out
    Out-of-Band Management
Building an OpenStack Cloud with Cumulus Linux
    Minimum Hardware Requirements
    Network Assumptions and Numbering
    Build Steps
        1. Set Up Physical Network
        2. Basic Physical Network Configuration
        3. Verify Connectivity
        4. Set Up Physical Servers
        5. Configure Spine Switches
        6. Configure Each Pair of Leaf Switches
        7. Configure the OpenStack Controller
        8. Configure Each Compute Node
        9. Create Project Networks
        10. Start VMs Using the OpenStack Horizon Web UI
Conclusion
    Summary
    References
Appendix A: Example /etc/network/interfaces Configurations
    leaf01
    leaf02
    leaf03
    leaf04
    spine01
    spine02
Appendix B: Network Setup Checklist

Version 1.1
November 11, 2015

About Cumulus Networks

Unleash the power of Open Networking with Cumulus Networks. Founded by veteran networking engineers from Cisco and VMware, Cumulus Networks makes the first Linux operating system for networking hardware and fills a critical gap in realizing the true promise of the software-defined data center. Just as Linux completely transformed the economics and innovation on the server side of the data center, Cumulus Linux is doing the same for the network. It is radically reducing the costs and complexities of operating modern data center networks for service providers and businesses of all sizes. Cumulus Networks has received venture funding from Andreessen Horowitz, Battery Ventures, Sequoia Capital, Peter Wagner and four of the original VMware founders. For more information visit cumulusnetworks.com or @cumulusnetworks.

© 2015 Cumulus Networks. CUMULUS, the Cumulus Logo, CUMULUS NETWORKS, and the Rocket Turtle Logo (the "Marks") are trademarks and service marks of Cumulus Networks, Inc. in the U.S. and other countries. You are not permitted to use the Marks without the prior written consent of Cumulus Networks. The registered trademark Linux is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide basis. All other marks are used under fair use or license from their respective owners. The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
OpenStack with Cumulus Linux

Objective

This Validated Design Guide presents a design and implementation approach for deploying OpenStack with network switches running Cumulus Linux. Detailed steps are included for installing and configuring both switches and servers.

Enabling Choice of Hardware in the Data Center

Cloud-oriented infrastructure designs have revolutionized how server applications are delivered in the data center. They reduce CapEx costs by commoditizing server hardware platforms and OpEx costs by automating and orchestrating infrastructure deployment and management. The same benefits of commodity hardware and automation are now available to networking in the data center. With Cumulus Linux, network administrators have a multi-platform network OS that provides freedom of choice in network switch hardware. Because Cumulus Linux is Linux, data center administrators have access to the rich ecosystem of existing Linux automation tools, along with the ability to deploy, administer, and monitor compute servers and network switches in a converged way.

OpenStack is a cloud platform for enterprise and commercial IT environments. Widely deployed in private and public cloud applications, OpenStack offers a rich variety of components that can be combined to build a tailored cloud solution. OpenStack enables data center architects to use commodity server hardware to build infrastructure environments that deliver the agility and easy scaling promised by the cloud. The cloud allows infrastructure consumers to request and utilize capacity in seconds rather than hours or days, providing radical CapEx and OpEx savings while delivering rapid, self-service deployment of capacity for IT consumers. Cumulus Networks believes the same design principles should hold true for networking.
A network device can be configured at first boot, so an administrator can quickly replace failed equipment instead of spending valuable time and resources troubleshooting hardware. This enables new support models that drive down operational costs. Imagine managing your own set of hot spare switches, guaranteeing that a replacement will always be available instead of paying for ongoing support for every device. This is the same model most organizations currently use for managing large fleets of servers. Additionally, Cumulus Linux can help you achieve the same CapEx and OpEx efficiencies for your networks by enabling an open market approach for switching platforms, and by offering a radically simple automated lifecycle management framework built on the industry's best open source tools. By using bare metal servers and network switches, you can achieve cost savings that would have been impossible just a few years ago.

Combined Solution Using OpenStack and Cumulus Linux

Both Cumulus Linux and Linux/OpenStack are software solutions that run on top of bare metal hardware. Because both solutions are hardware-agnostic, customers can select their chosen platform from a wide array of suppliers who often employ highly competitive pricing models. The software defines the performance and behavior of the environment and allows the administrator to exercise version control and programmatic approaches that are already in use by DevOps teams. Refer to the Cumulus Linux Hardware Compatibility List (HCL) at cumulusnetworks.com/hcl for a list of hardware vendors and their supported model numbers, descriptions, switch silicon, and CPU type.
Driving Towards Operational Efficiencies

Figure 1. OpenStack and Cumulus Linux

OpenStack enables the building of cloud environments using commodity off-the-shelf servers combined with standard Linux virtualization, monitoring, and management technologies. Cloud users can request resources (compute VMs, storage, network) using APIs and self-service Web interfaces, and those resources are allocated and delivered without human intervention. The hardware in the cloud is thus homogeneous, and users neither know nor care where their resources are physically allocated. Operators monitor aggregate resource utilization, so management becomes a capacity planning exercise rather than worrying about individual workloads and users.

OpenStack comprises subcomponents that work together to deliver a cloud. The major components are:

1. Nova, which manages compute resources for VMs.
2. Glance, which manages OS disk images.
3. Cinder, which manages VM block storage.
4. Swift, which manages unstructured data objects.
5. Keystone, which provides authentication and authorization services.
6. Horizon, a Web-based UI.
7. Neutron, which provides virtual networking and services.

Cumulus Linux complements OpenStack by delivering the same automated, self-service operational model to the network. And since the underlying operating system is the same on the OpenStack nodes and the switches, the same automation, monitoring, and management tools can be used, greatly simplifying provisioning and operations. Cumulus Linux offers powerful automation capabilities by way of technologies such as ONIE, zero touch provisioning, PXE, and Puppet. The combination of bare metal hardware with a consistent Linux platform enables you to leverage automation to deploy servers and networks together. Thus, you can use a unified set of tools to automate the installation and configuration of both switches and servers.
You can use a common automation framework that uses a simple config file to install and configure an entire pod of switches and call OpenStack to install and configure the servers, all without any human intervention.
Intended Audience for Network Design and Build

The rest of this document is aimed at the data center architect or administrator interested in evaluating a Proof of Concept (PoC) or deploying a production cloud using Cumulus Linux and OpenStack. The implementer is expected to have basic knowledge of Linux commands, logging in, navigating the file system, and editing files. A basic understanding of Layer 2 networking is assumed, such as interfaces, bonds (also known as LAGs), and bridges.

If you are using this guide to help you set up your OpenStack and Cumulus Linux environment, we assume you have Cumulus Linux installed and licensed on switches from the Cumulus Linux HCL. Additional information on Cumulus Linux software, licensing, and supported hardware may be found on cumulusnetworks.com or by contacting sales@cumulusnetworks.com.

This guide references the Icehouse release of OpenStack.

OpenStack Network Architecture in a PoC or Small Test/Dev Environment

Network Architecture and Design Considerations

Figure 2 shows the network design of a typical Proof of Concept (PoC) or small test/dev environment running OpenStack.

Figure 2. PoC or Test/Dev OpenStack Environment
Figure 3 below details the connectivity for the hypervisor.

Figure 3. Hypervisor Host Detail

The network architecture for an OpenStack PoC follows a simplified Top of Rack (ToR) access-tier-only design, all within Layer 2, while the single services rack provides a gateway to the rest of the network and also contains all the hypervisor hosts. The services rack contains the OpenStack controller, and can optionally contain any load balancers, firewalls, and other network services. Using Layer 2 works well for OpenStack Kilo, since it allows the use of Neutron ML2. For optimal network performance, 10G switches are used for the ToR/access switches.

The network design employs Multi-Chassis Link Aggregation (MLAG) for host path redundancy and link aggregation for network traffic optimization. The switches are paired into a single logical switch for MLAG, with a peer LACP bond link between pair members. No breakout cables are used in this design.

A single OpenStack controller instance is assumed in this design. Connectivity to external networks is assumed to be via a pair of links to routers, with a single upstream default route. These links are connected to the leaf switches in the services rack, since it contains the controller. This guide assumes the routers have been configured with VRRP or some other first-hop redundancy protocol. If there is only one upstream router link, connect it to either of the leaf switches in the services rack.
OpenStack Network Architecture in a Cloud Data Center

Network Architecture

The network design of a typical cloud data center running OpenStack is shown in Figure 4.

Figure 4. Enterprise Data Center Network OpenStack Environment

The network architecture for an OpenStack data center follows the traditional hierarchy of core, aggregation switch (also known as spine), and access switch (also known as leaf) tiers, all within Layer 2, while a single services rack provides a gateway to the rest of the network. The services rack contains the OpenStack controller and compute nodes, and can optionally contain load balancers, firewalls, and other network services. Using Layer 2 works well for OpenStack Kilo, since it allows the use of Neutron ML2. For optimal network performance, 40G switches are used for aggregation switches, and 10G switches are used for access switches.

The network design employs MLAG for host and network path redundancy and link aggregation for network traffic optimization. Switches are paired into logical switches for MLAG, with a peer LACP bond link between pair members. No breakout cables are used in this design.

A single OpenStack controller instance is assumed in this design. Connectivity to external networks is assumed to be via a pair of links to routers, with a single upstream default route. These links are connected to the leaf switches in the services rack, which is the one that contains the controller. This guide assumes the routers have been configured with VRRP or some other router redundancy protocol. If there is only one upstream router link, connect it to either of the leaf switches in the services rack.
Scaling Out

Scaling out the architecture involves adding more hosts to the access switch pairs, and then adding more access switches in pairs as needed, as shown in Figure 5.

Figure 5. Adding Additional Switches

Once the limit for the aggregation switch pair has been reached, an additional network pod of aggregation/access switch tiers may be added, as shown in Figure 6. Each new pod has its own services rack and OpenStack controller.

Figure 6. Adding Network Pods/OpenStack Clusters
Out-of-Band Management

An important supplement to the high capacity production data network is the management network used to administer infrastructure elements, such as network switches, physical servers, and storage systems. The architecture of these networks varies considerably based on their intended use, the elements themselves, and access isolation requirements.

This solution guide assumes that a single Layer 2 domain is used to administer the network switches and the management interfaces on the controller and hypervisor hosts. These operations include imaging the elements, configuring them, and monitoring the running system. This network is expected to host both DHCP and HTTP servers, such as isc-dhcp and apache2, as well as provide DNS forward and reverse resolution. In general, these networks provide some means to connect to the corporate network, typically a connection through a router or jump host. Figure 7 below shows the logical and, where possible, physical connections of each element as well as the services required to realize this deployment.

Figure 7. Out-of-Band Management
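As an illustration only, a minimal isc-dhcp subnet declaration for this management network might look like the fragment below. The addressing follows the out-of-band network used later in this guide (192.168.0.0/24, with 192.168.0.254 as the gateway seen in the example switch configuration); the address pool and DNS server address are assumptions for the sketch, not values from this guide.

```
# /etc/dhcp/dhcpd.conf -- hypothetical sketch for the OOB management network
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.100 192.168.0.200;       # assumed pool for unprovisioned devices
  option routers 192.168.0.254;            # router/jump host to the corporate network
  option domain-name-servers 192.168.0.1;  # assumed DNS server on this segment
}
```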
Building an OpenStack Cloud with Cumulus Linux

Minimum Hardware Requirements

For PoC, test/dev:

- 3x x86 servers, each with 2x 10G NICs + 1x 1G NIC
- 2x 48 port 10G switches, with 40G uplinks

Note that this design may be scaled up to 47 hypervisor nodes.

For a cloud data center:

- 5x x86 servers, each with 2x 10G NICs + 1x 1G NIC
- 4x 48 port 10G leaf switches, with 40G uplinks
- 2x 32 port 40G spine switches

Note that this design may be scaled up to 1535 hypervisor nodes. If required, additional OpenStack clusters may be configured and connected to the core/external routers. OpenStack scalability limits will be hit before full scale is achieved.
Network Assumptions and Numbering

The network design for the full cloud deployment (6 switches, 5 servers) is shown in Figure 8 below. The PoC subset is just the first pair of leafs and no spine switches. The implementation does not assume the use of IPMI, as it is intended to demonstrate as generic a network as possible.

Figure 8. Cloud Data Center Network Topology

Note that the peer bonds for MLAG support are always the last two interfaces on each switch. For spines, they are swp31 and swp32. For leafs, they are swp51 and swp52. The next-to-last two interfaces on each leaf are for the uplinks to spine01 and spine02.

Also note that the same subnet is used for every MLAG peer pair. This is safe because the addresses are only used on the link between the pairs. Routing protocols will not distribute these routes because they are part of the link-local 169.254.0.0/16 subnet.

The details for the switches, hosts, and logical interfaces are as follows:
leaf01

connected to            logical interface   description                                   physical interfaces
leaf02                  peerlink            peer bond utilized for MLAG traffic           swp51, swp52
leaf02                  peerlink.4094       subinterface used for clagd communication     N/A
spine01, spine02        uplink              uplink for MLAG to spine01 and spine02        swp49, swp50
external router         N/A                 for accessing the outside network             swp48
multiple hosts          access ports        connect to compute hosts                      swp1 through swp44
controller              compute01           bond to controller for host-to-switch MLAG    swp1
compute01               compute02           bond to compute01 for host-to-switch MLAG     swp2
out-of-band management  N/A                 out-of-band management interface              eth0

leaf02

connected to            logical interface   description                                   physical interfaces
leaf01                  peerlink            peer bond utilized for MLAG traffic           swp51, swp52
leaf01                  peerlink.4094       subinterface used for clagd communication     N/A
spine01, spine02        uplink              uplink for MLAG to spine01 and spine02        swp49, swp50
external router         N/A                 for accessing the outside network             swp48
multiple hosts          access ports        connect to hosts                              swp1 through swp44
controller              compute01           bond to controller for host-to-switch MLAG    swp1
compute01               compute02           bond to compute01 for host-to-switch MLAG     swp2
out-of-band management  N/A                 out-of-band management interface              eth0

leaf0n

Repeat the above configurations for each additional pair of leafs, minus the external router interfaces.
spine01

connected to            logical interface   description                                   physical interfaces
spine02                 peerlink            peer bond utilized for MLAG traffic           swp31, swp32
spine02                 peerlink.4094       subinterface used for clagd communication     N/A
multiple leafs          leaf ports          connect to leaf switch pairs                  swp1 through swp30
leaf01, leaf02          downlink1           bond to a leaf switch pair                    swp1, swp2
leaf03, leaf04          downlink2           bond to another leaf switch pair              swp3, swp4
out-of-band management  N/A                 out-of-band management interface              eth0

spine02

connected to            logical interface   description                                   physical interfaces
spine01                 peerlink            peer bond utilized for MLAG traffic           swp31, swp32
spine01                 peerlink.4094       subinterface used for clagd communication     N/A
multiple leafs          leaf ports          connect to leaf switches                      swp1 through swp30
leaf01, leaf02          downlink1           bond to a peerlink group                      swp1, swp2
leaf03, leaf04          downlink2           bond to another peerlink group                swp3, swp4
out-of-band management  N/A                 out-of-band management interface              eth0

The manual process detailed below has some fixed parameters for things like VLAN ranges and IP addresses. These can be changed. If you're following the manual process and want to use different parameters, be careful to modify the numbers in the configuration to match.
The parameters you are most likely to need to change are the external subnet and default route. Get this information from whoever configured your access to the outside world (either the Internet or the rest of the data center network).

Parameter                                 Default Setting
OpenStack tenant VLANs                    200-2000
OpenStack tenant subnets                  10.10.TENANT#.0/24
External (public) VLAN                    101
External (public) subnet                  192.168.100.0/24
External default route                    192.168.100.1
External IP of controller                 192.168.100.2
External IP of first compute node         192.168.100.3
OpenStack API VLAN                        102
OpenStack API subnet                      10.254.192.0/20
OpenStack API IP of controller            10.254.192.1
OpenStack API IP of first compute node    10.254.192.2
Out-of-band management network            192.168.0.0/24
clagd peer VLAN                           4094
clagd peer subnet                         169.254.255.0/30
clagd system ID (base)                    44:38:39:ff:00:01
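To make the tenant numbering scheme concrete, here is a small illustrative shell helper (our own sketch, not part of the guide's build steps; the function name is invented) that maps a tenant number to its subnet under the 10.10.TENANT#.0/24 default above:

```shell
#!/bin/sh
# Hypothetical helper: print the default subnet for a given tenant number,
# following the 10.10.TENANT#.0/24 convention in the parameter table above.
tenant_subnet() {
    printf '10.10.%d.0/24\n' "$1"
}

tenant_subnet 1    # prints 10.10.1.0/24
tenant_subnet 42   # prints 10.10.42.0/24
```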
Build Steps

Here are the detailed steps for manually installing and configuring the cloud. If you are building the simpler PoC/test/dev configuration, skip step 5 (configure spine switches), as well as any steps that reference spine01, spine02, leaf03, and leaf04.

The steps are:

Physical Network and Servers

1. Set up physical network. Rack and cable all network switches. Install Cumulus Linux. Install license.
2. Basic physical network configuration. Name switches. Bring up out-of-band management ports. Bring up front panel ports.
3. Verify connectivity. Use LLDP to ensure that the topology is as expected, and that switches can communicate.
4. Set up physical servers. Install Ubuntu Server 14.04 on each of the servers.

Network Topology

5. Configure spine switches. Configure the MLAG peer bond between the pair.
6. Configure each pair of leaf switches. Configure the MLAG peer bond between each pair.

OpenStack

7. Configure the OpenStack controller. Install all components and configure.
8. Configure each OpenStack compute node. Install all components and configure.
9. Create tenant networks. Use the Neutron CLI.
10. Start VMs using the OpenStack Horizon Web UI. Attach a laptop to the external network. Point a Web browser at http://192.168.100.2/horizon, and log in (user: admin, pass: adminpw). Start a VM in your new OpenStack cloud. Note that you can also plug the laptop into the management network, if that is easier.
1. Set Up Physical Network

Rack all servers and switches, and wire them together according to the wiring plan. Install Cumulus Linux, install your license, and gain serial console access on each switch, as described in the Quick Start Guide of the Cumulus Linux Documentation.

2. Basic Physical Network Configuration

Cumulus Linux contains a number of text editors, including nano, vi, and zile; this guide uses nano in its examples.

First, edit the hostname file to change the hostname:

cumulus@cumulus$ nano /etc/hostname

Change cumulus to spine01, and save the file. Make the same change to /etc/hosts:

cumulus@cumulus$ nano /etc/hosts

Change the first occurrence of cumulus on the line that starts with 127.0.1.1, then save the file. For example, for spine01, you would edit the line to look like:

127.0.1.1 spine01 cumulus

Reboot the switch so the new hostname takes effect:

cumulus@cumulus$ sudo reboot

Configure Interfaces on Each Switch

By default, a switch with Cumulus Linux freshly installed has no switch port interfaces defined. Define the basic characteristics of swp1 through swpN by creating stanza entries for each switch port (swp) in the /etc/network/interfaces file. Each stanza should include the following statements:

auto <switch port name>
allow-<alias> <switch port name>
iface <switch port name>

The auto keyword specifies that the interface is brought up automatically after issuing a reboot or service networking restart command. The allow- keyword is a way to group interfaces so they can be brought up or down as a group. For example, allow-hosts compute01 adds the device compute01 to the alias group hosts. Running ifup --allow=hosts then brings up all of the interfaces with allow-hosts in their configuration.

On each switch, define the physical ports to be used according to the network topology as described in Figure 8 and the corresponding table that follows the figure.
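For example, a stanza that places an interface in the hosts alias group might look like the fragment below. This is an illustrative sketch only: the bond name compute01 and the alias group hosts follow the example in the text, and the bond-slaves line assumes the compute01 host is cabled to swp2 as in the leaf tables above.

```
auto compute01
allow-hosts compute01
iface compute01
    bond-slaves swp2

# After editing, bring up every interface in the "hosts" group at once:
#   cumulus@leaf01$ sudo ifup --allow=hosts
```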
For the leaf switches, the basic interface configuration is the range of interfaces from swp1 to swp52. On the spine switches, the range is swp1 to swp32. For example, the configuration on leaf01 would look like:

cumulus@leaf01$ nano /etc/network/interfaces

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2
...
auto swp52
iface swp52

Additional attributes such as speed and duplex can be set. Refer to the Settings section of the Understanding Network Interfaces chapter of the Cumulus Linux Documentation for more information. Configure all leaf switches identically.

Instead of manually configuring each interface definition, you can programmatically define them using shorthand syntax that leverages Python Mako templates. For information about configuring interfaces with Mako, read this knowledge base article.

Once all configurations have been defined in the /etc/network/interfaces file, run the ifquery command to ensure that all syntax is proper and the interfaces are created as expected:

cumulus@leaf01$ ifquery -a
auto lo
iface lo inet loopback

auto eth0
iface eth0
    address 192.168.0.90/24
    gateway 192.168.0.254

auto swp1
iface swp1
...

Once all configurations have been defined in the /etc/network/interfaces file, apply them to ensure they are loaded into the kernel. There are several methods for applying configuration changes, depending on when and what changes you want to apply:
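As a sketch of the Mako approach mentioned above (verify the exact syntax against the knowledge base article for your Cumulus Linux version), the 52 per-port stanzas on a leaf could be generated with a loop directly inside /etc/network/interfaces:

```
# /etc/network/interfaces -- Mako template sketch (illustrative only)
% for port in range(1, 53):
auto swp${port}
iface swp${port}

% endfor
```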
sudo ifreload -a
    Parse interfaces labelled with auto that have been added to or modified in the configuration file, and apply changes accordingly. Note: This command is disruptive to traffic only on interfaces that have been modified.

sudo service networking restart
    Restart all interfaces labelled with auto as defined in the configuration file, regardless of what has or has not been recently modified. Note: This command is disruptive to all traffic on the switch, including the eth0 management network.

sudo ifup <swpX>
    Parse an individual interface labelled with auto as defined in the configuration file and apply changes accordingly. Note: This command is disruptive to traffic only on interface swpX.

For example, on leaf01, to apply the new configuration to all changed interfaces labeled with auto:

cumulus@leaf01:~$ sudo ifreload -a

or individually:

cumulus@leaf01:~$ sudo ifup swp1
cumulus@leaf01:~$ sudo ifup swp2
...
cumulus@leaf01:~$ sudo ifup swp52

The above configuration in the /etc/network/interfaces file is persistent, which means the configuration applies even after you reboot the switch. Another option to test network connectivity is to run a shell loop to bring up each front panel interface temporarily (until the next reboot), so that LLDP traffic can flow. This lets you verify in the next step that the wiring is done correctly:

cumulus@spine01$ for i in `grep '^swp' /var/lib/cumulus/porttab | cut -f1`; do sudo ip link set dev $i up; done

Repeat the above steps on each of spine02, leaf01, leaf02, leaf03, and leaf04, changing the hostname appropriately in each command or file.
3. Verify Connectivity

Back on spine01, use LLDP to verify that the cabling is correct, according to the cabling diagram:

cumulus@spine01$ sudo lldpctl | less
<snip>
-------------------------------------------------------------------------------
Interface:    swp31, via: LLDP, RID: 4, Time: 0 day, 00:12:10
  Chassis:
    ChassisID:    mac 44:38:39:00:49:0a
    SysName:      spine02
    SysDescr:     Cumulus Linux
    Capability:   Bridge, off
    Capability:   Router, on
  Port:
    PortID:       ifname swp31
    PortDescr:    swp31
-------------------------------------------------------------------------------
Interface:    swp32, via: LLDP, RID: 4, Time: 0 day, 00:12:10
  Chassis:
    ChassisID:    mac 44:38:39:00:49:0a
    SysName:      spine02
    SysDescr:     Cumulus Linux
    Capability:   Bridge, off
    Capability:   Router, on
  Port:
    PortID:       ifname swp32
    PortDescr:    swp32
-------------------------------------------------------------------------------

The output above shows only the last two interfaces, which you can see are correctly connected to the other spine switch, based on the SysName field being spine02. Verify that the remote-side interfaces are correct per the wiring diagram, using the PortID field.

Note: Type q to quit less when you are done verifying.

Repeat the lldpctl command on spine02 to verify the rest of the connectivity.

4. Set Up Physical Servers

Install the Ubuntu Server 14.04 LTS release on each server, as described in Ubuntu's Installing from CD documentation. During the install, configure the two drives into a RAID1 mirror, and then configure LVM on the mirror. Create a 1G swap partition and a 50G root partition. Leave the rest of the mirror's space free for the creation of VMs.

Make sure that the openssh server is installed, and configure the management network such that you have out-of-band SSH access to the servers. As part of the installation process you will create a user, which will have sudo access. Remember the username and password you created for later.
Name the controller node (the one attached to swp1 on leaf01/leaf02) controller and name the compute nodes compute01, compute02, and so on.
Populate the hostname alias for the controller and each of the compute nodes in the /etc/hosts file. Using the name controller matches the sample configurations in the official OpenStack install guide. Edit the /etc/hosts file on the controller and each compute node, adding the following entries at the end:

10.254.192.1 controller
10.254.192.2 compute01
10.254.192.3 compute02
...

5. Configure Spine Switches

Enable MLAG Peering between Switches

An instance of the clagd daemon runs on each MLAG switch member to keep track of various networking information, including MAC addresses, which are needed to maintain the peer relationship. clagd communicates with its peer on the other switch across a Layer 3 interface between the two switches. This Layer 3 network should not be advertised by routing protocols, nor should the VLAN be trunked anywhere else in the network. This interface is designed to be a keep-alive reachability test and to synchronize the switch state across the directly attached peer bond.

Create the VLAN subinterface for clagd communication and assign an IP address for this subinterface. A unique 802.1q tag is recommended to avoid mixing data traffic with the clagd control traffic.

To enable MLAG peering between switches, configure clagd on each switch by creating a peerlink subinterface in /etc/network/interfaces with a unique 802.1q tag. Set values for the following parameters under the peerlink subinterface:

- address. The local IP address/netmask of this peer switch. Cumulus Networks recommends you use a link local address; for example, 169.254.1.X/30.
- clagd-enable. Set to yes (default).
- clagd-peer-ip. Set to the IP address assigned to the peer interface on the peer switch.
- clagd-backup-ip. Set to an IP address on the peer switch reachable independently of the peerlink; for example, the management interface or a routed interface that does not traverse the peerlink.
- clagd-sys-mac.
Set to a unique MAC address you assign to both peer switches. Cumulus Networks recommends you use addresses within the Cumulus Linux reserved range of 44:38:39:FF:00:00 through 44:38:39:FF:FF:FF.

On both spine switches, edit /etc/network/interfaces and add the following sections at the bottom:

# Bond for the peerlink. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp31 swp32
    bond-use-carrier 1
On spine01, add a VLAN for the MLAG peering communications:

# VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address 169.254.255.1/30
    clagd-enable yes
    clagd-peer-ip 169.254.255.2
    clagd-backup-ip 192.168.0.95/24
    clagd-sys-mac 44:38:39:ff:00:00

On spine02, add a VLAN for the MLAG peering communications. Note the change of the last octet in the address and clagd-peer-ip lines:

# VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address 169.254.255.2/30
    clagd-enable yes
    clagd-peer-ip 169.254.255.1
    clagd-backup-ip 192.168.0.94/24
    clagd-sys-mac 44:38:39:ff:00:00

On both spine switches, bring up the peering interfaces. The --with-depends option tells ifup to bring up the peerlink bond first, since peerlink.4094 depends on it:

cumulus@spine0n:~$ sudo ifup --with-depends peerlink.4094

On spine01, verify that you can ping spine02:

cumulus@spine01$ ping -c 3 169.254.255.2
PING 169.254.255.2 (169.254.255.2) 56(84) bytes of data.
64 bytes from 169.254.255.2: icmp_req=1 ttl=64 time=0.716 ms
64 bytes from 169.254.255.2: icmp_req=2 ttl=64 time=0.681 ms
64 bytes from 169.254.255.2: icmp_req=3 ttl=64 time=0.588 ms

--- 169.254.255.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.588/0.661/0.716/0.061 ms

Now on both spine switches, verify that the peers are connected:

cumulus@spine01:~$ clagctl
The peer is alive
Peer Priority, ID, and Role: 32768 44:38:39:00:49:87 secondary
Our Priority, ID, and Role: 32768 44:38:39:00:49:06 primary
Peer Interface and IP: peerlink.4094 169.254.255.2
Backup IP: 192.168.0.95 (active)
System MAC: 44:38:39:ff:00:00

The MAC addresses in the output will be different depending on the MACs issued to your hardware.
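If you are scripting this verification, one simple approach is to look for the "The peer is alive" line in the clagctl output. The helper below is our own illustrative sketch (the function name is invented); it reads the output on stdin, so here it is fed a captured transcript, while on a real switch you would pipe clagctl into it:

```shell
#!/bin/sh
# Hypothetical helper: succeed if clagctl output (read on stdin)
# reports a live MLAG peer.
clag_peer_alive() {
    grep -q '^The peer is alive'
}

# Example with a captured transcript; on a switch: clagctl | clag_peer_alive
sample_output='The peer is alive
Peer Priority, ID, and Role: 32768 44:38:39:00:49:87 secondary'

if printf '%s\n' "$sample_output" | clag_peer_alive; then
    echo "MLAG peer is up"
fi
```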
Now that the spines are peered, create the bonds for the connections to the leaf switches. On both spine switches, edit /etc/network/interfaces and add the following at the end:

#Bonds down to the pairs of leafs.
auto downlink1
allow-leafs downlink1
iface downlink1
    bond-slaves swp1 swp2
    bond-use-carrier 1
    clag-id 1

auto downlink2
allow-leafs downlink2
iface downlink2
    bond-slaves swp3 swp4
    bond-use-carrier 1
    clag-id 2

You can add more stanzas for more pairs of leaf switches as needed, modifying the sections in green above. For example, to add a third stanza, you'd use downlink3; the corresponding swp interfaces would be swp5 and swp6, and clag-id 3.

Bridge together the MLAG peer bond and all the leaf bonds. On both switches, edit /etc/network/interfaces and add the following at the end:

#Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports peerlink downlink1 downlink2
    bridge-stp on
    bridge-vids 100-2000
    mstpctl-treeprio 12288

If you added more downlink# interfaces in the previous step, add them to the end of the bridge-ports line.

If you're familiar with the traditional Linux bridge mode, you may be surprised that we called the bridge "bridge" instead of br0. The reason is that we're using the new VLAN-aware Linux bridge mode in this example, which doesn't require multiple bridge interfaces for common configurations. It trades off some of the flexibility of the traditional mode in return for supporting very large numbers of VLANs. See the Cumulus Linux documentation for more information on the two bridging modes supported in Cumulus Linux.
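The downlink pattern above is regular enough to script. A minimal sketch, assuming POSIX shell and the naming conventions used in this guide (the gen_downlink helper itself is illustrative, not part of Cumulus Linux):

```shell
# Print the /etc/network/interfaces stanza for the bond down to leaf
# pair N. Pair N uses front-panel ports swp(2N-1) and swp(2N) and
# clag-id N, matching the downlink1/downlink2 examples above.
gen_downlink() {
    n=$1
    printf '#Bond down to leaf pair %s.\n' "$n"
    printf 'auto downlink%s\n' "$n"
    printf 'allow-leafs downlink%s\n' "$n"
    printf 'iface downlink%s\n' "$n"
    printf '  bond-slaves swp%s swp%s\n' "$((2*n - 1))" "$((2*n))"
    printf '  bond-use-carrier 1\n'
    printf '  clag-id %s\n' "$n"
}

gen_downlink 3
```

Appending the output for a new pair to /etc/network/interfaces, and adding the new downlink# name to the bridge-ports line, is all a third leaf pair requires.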
Finally, on both spine01 and spine02, bring up all the interfaces, bonds, and bridges. The --with-depends option tells ifup to bring up any down interfaces that are needed by the bridge:

cumulus@spine0n:~$ sudo ifup --with-depends bridge

6. Configure Each Pair of Leaf Switches

On each leaf switch, edit /etc/network/interfaces, and add the following sections at the bottom:

#Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp51 swp52
    bond-use-carrier 1

On odd-numbered leaf switches, add a VLAN for the MLAG peering communications. Note that the last octet of the clagd-sys-mac must be the same for each switch in a pair, but incremented for subsequent pairs. For example, leaf03 and leaf04 should have 03 as the last octet:

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address 169.254.255.1/30
    clagd-enable yes
    clagd-peer-ip 169.254.255.2
    clagd-backup-ip 192.168.0.91/24
    clagd-sys-mac 44:38:39:ff:00:02

On even-numbered leaf switches, add a VLAN for the MLAG peering communications. Note the change of the last octet in the address and clagd-peer-ip lines. Also note that for subsequent pairs of switches, the last octet of clagd-sys-mac must match as described for the odd-numbered switches:

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address 169.254.255.2/30
    clagd-enable yes
    clagd-peer-ip 169.254.255.1
    clagd-backup-ip 192.168.0.90/24
    clagd-sys-mac 44:38:39:ff:00:02

On each leaf switch, bring up the peering interfaces:

cumulus@leaf0n:~$ sudo ifup --with-depends peerlink.4094
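The odd/even addressing rule above can be captured in a small helper so new leaf pairs stay consistent. A sketch, assuming POSIX shell; the pair-to-MAC mapping encodes this guide's convention (pair 1, leaf01/leaf02, uses 44:38:39:ff:00:02; pair 2 uses 44:38:39:ff:00:03; and so on), which you should adapt to your own reserved-range plan:

```shell
# Derive the MLAG peering parameters for leaf number N under the
# conventions in this guide: odd leafs take 169.254.255.1, even leafs
# 169.254.255.2, and each pair shares a clagd-sys-mac whose last
# octet is the pair number plus one.
mlag_params() {
    n=$1
    pair=$(( (n + 1) / 2 ))
    if [ $((n % 2)) -eq 1 ]; then
        addr=169.254.255.1 peer=169.254.255.2
    else
        addr=169.254.255.2 peer=169.254.255.1
    fi
    printf 'leaf%02d: address %s/30 clagd-peer-ip %s clagd-sys-mac 44:38:39:ff:00:%02x\n' \
        "$n" "$addr" "$peer" $((pair + 1))
}

mlag_params 4
```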
BUILDING AN OPENSTACK CLOUD WITH CUMULUS LINUX On each odd numbered leaf switch, verify that you can ping its corresponding even-numbered leaf switch: cumulus@leaf0n:~$ ping -c 3 169.254.255.2 PING 169.254.255.2 (169.254.255.2) 56(84) bytes of data. 64 bytes from 169.254.255.2: icmp_req=1 ttl=64 time=0.716 ms 64 bytes from 169.254.255.2: icmp_req=2 ttl=64 time=0.681 ms 64 bytes from 169.254.255.2: icmp_req=3 ttl=64 time=0.588 ms --- 169.254.255.2 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2001ms rtt min/avg/max/mdev = 0.588/0.661/0.716/0.061 ms Now, on each leaf switch, verify that the peers are connected: cumulus@leaf0n:~$ clagctl The peer is alive Peer Priority, ID, and Role: 32768 6c:64:1a:00:39:5a primary Our Priority, ID, and Role: 32768 6c:64:1a:00:39:9b secondary Peer Interface and IP: peerlink.4094 169.254.255.2 Backup IP: 192.168.0.91 (active) System MAC: 44:38:39:ff:00:02 Now that the leafs are peered, create the uplink bonds connecting the leafs to the spines. On each leaf switch, edit /etc/network/interfaces and add the following at the end: #Bond up to the spines. auto uplink iface uplink bond-slaves swp49 swp50 bond-use-carrier 1 clag-id 1000 On each leaf switch, bring up the bond up to the spine: cumulus@leaf0n:~$ sudo ifup --with-depends uplink On each leaf switch, verify that the link to the spine is up: cumulus@leaf0n:~$ ip link show dev uplink 2: uplink: <BROADCAST,MULTICAST,UP,LOWER_UP> qdisc pfifo_fast state UP qlen 1000 link/ether 44:38:39:00:49:06 brd ff:ff:ff:ff:ff:ff The UP,LOWER_UP (shown in green above) line means that the bond itself is up (UP), and slave interfaces (swp49 and swp50) are up (LOWER_UP). www.cumulusnetworks.com 25
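When you have many leafs, the UP,LOWER_UP check is worth automating. A small sketch that classifies the flags field of one line of ip link output (the flags_ok name is purely illustrative):

```shell
# Return "yes" if an `ip link show` flags field contains both UP
# (the bond itself is up) and LOWER_UP (at least one slave has
# link), else "no".
flags_ok() {
    case $1 in
        *UP,*LOWER_UP*) echo yes ;;
        *) echo no ;;
    esac
}

flags_ok '<BROADCAST,MULTICAST,UP,LOWER_UP>'
```

On a live switch you could feed it real output, for example flags_ok "$(ip link show dev uplink)".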
On leaf01 and leaf02, and only leaf01 and leaf02, configure the interfaces going to the core/external routers. These are associated with the external VLAN (101), but are configured as access ports and therefore untagged. Edit /etc/network/interfaces and add the following at the end:

auto swp48
iface swp48
    bridge-access 101

Create the bonds for the connections to the servers. On each leaf switch, edit /etc/network/interfaces and add the following at the end:

#Bonds down to the host.
#Only one swp, because the other swp is on the peer switch.
auto compute01
allow-hosts compute01
iface compute01
    bond-slaves swp1
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 1

Repeat the above stanza for each front panel port that has servers attached. You'll need to adjust compute01, swp1, and the value for clag-id everywhere they appear (in green). For example, for swp2, change each compute01 to compute02 and swp1 to swp2, and change clag-id from 1 to 2.

Bridge together the MLAG peer bond, the uplink bond, and all the host bonds. On each leaf switch, edit /etc/network/interfaces and add the following at the end:

#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink swp48 peerlink compute01 compute02 compute03
    bridge-vids 100-2000
    bridge-stp on
    mstpctl-treeprio 16384

If you added more host bonds in the previous step, add them to the end of the bridge-ports line. Note that swp48 (in green above) should only be present on leaf01 and leaf02, not on subsequent leafs.

Finally, on each leaf switch, bring up all the interfaces, bonds and bridges:

cumulus@leaf0n:~$ sudo ifup --with-depends bridge
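The per-port host bond stanzas are likewise mechanical. A sketch under the same assumptions (POSIX shell; the computeNN naming and the port-number-equals-clag-id convention come from this guide):

```shell
# Print the host-facing bond stanza for front-panel port N. The bond
# name computeNN, the single slave swpN, and clag-id N mirror the
# example stanza above; the LACP-bypass and edge-port settings are
# carried over unchanged.
gen_host_bond() {
    n=$1
    name=$(printf 'compute%02d' "$n")
    printf 'auto %s\n' "$name"
    printf 'allow-hosts %s\n' "$name"
    printf 'iface %s\n' "$name"
    printf '  bond-slaves swp%s\n' "$n"
    printf '  bond-lacp-bypass-allow 1\n'
    printf '  mstpctl-portadminedge yes\n'
    printf '  mstpctl-bpduguard yes\n'
    printf '  clag-id %s\n' "$n"
}

gen_host_bond 2
```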
7. Configure the OpenStack Controller

The server connected to swp3 on leaf01 and leaf02 will be the OpenStack controller. It will manage all the other servers, which run VMs. ssh into it as the user you configured when installing the OS.

Configure the uplinks. The server has two 10G interfaces; in this example they are called p1p1 and p1p2. They may be named differently on other server hardware platforms. The ifenslave package must be installed for bonding support, and the vlan package must be installed for VLAN support. To install them, run:

cumulus@controller$ sudo apt-get install ifenslave vlan

For the bond to come up, the bonding driver needs to be loaded. Similarly, for VLANs, the 802.1q driver must be loaded. So that they will be loaded automatically at boot time, edit /etc/modules and add the following to the end:

bonding
8021q

Now load the modules:

cumulus@controller$ sudo modprobe bonding
cumulus@controller$ sudo modprobe 8021q
OPENSTACK AND CUMULUS LINUX: VALIDATED DESIGN GUIDE Edit /etc/network/interfaces to add the following at the end: #The bond, one subinterface goes to each leaf. auto bond0 iface bond0 inet manual up ip link set dev $IFACE up down ip link set dev $IFACE down bond-slaves none #First 10G link. auto p1p1 iface p1p1 inet manual bond-master bond0 #Second 10G link. auto p1p2 iface p1p2 inet manual bond-master bond0 #OpenStack API VLAN. auto bond0.102 iface bond0.102 inet static address 10.254.192.1 netmask 255.255.240.0 #External VLAN. auto bond0.101 iface bond0.101 inet static address 192.168.100.2 netmask 255.255.255.0 gateway 192.168.100.1 Note that Ubuntu uses ifupdown, while Cumulus Linux uses ifupdown2. The configuration format is similar, but many constructs that work on the switch will not work in Ubuntu. Now bring up the interfaces: cumulus@controller$ sudo ifup -a Verify that the VLAN interface is UP and LOWER_UP: cumulus@controller$ sudo ip link show bond0.102 9: bond0.102@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether 90:e2:ba:7c:28:28 brd ff:ff:ff:ff:ff:ff In the following sections, read the notes after the links before you follow the OpenStack install guide sections, as they contain important additional information you ll need. 28
Enable the Ubuntu OpenStack repository, using the directions in the OpenStack packages install guide. Note that you'll have to use sudo when installing the packages.

Install the database server (MySQL), using the directions in the OpenStack database install guide. Note that you'll have to use sudo when installing the packages. Make sure to remember the database password you chose for later. Use 10.254.192.1 for the bind-address when configuring MySQL. When running mysql_secure_installation, the root password is the MySQL password you chose; there is no need to reset it during mysql_secure_installation.

Install the message broker (RabbitMQ) using the directions in the OpenStack message queue install guide. Note that you'll have to use sudo when installing the packages. Make sure to remember the RabbitMQ password you chose for the openstack user, as you will need it later.

Install the Keystone authentication service using the directions in the OpenStack Keystone install guide. Note that you'll have to use sudo with the commands in that guide. Make sure to remember the ADMIN_TOKEN you generated using openssl for later.

Create the service entity and API endpoint using the directions in the OpenStack "Create service entity and API endpoint" guide. Rather than using sudo, openstack commands use the Keystone service, so the ADMIN_TOKEN will be used initially while setting up the identity service and endpoint.

Create users, roles, and projects using the directions in the OpenStack "Create projects, users, and roles" guide. OpenStack commands use the Keystone service, so the ADMIN_TOKEN will be used initially while setting up the users, roles, and projects. For a simple test deployment, we recommend admin/adminpw and demo/demopw for the usernames and passwords for the admin user and demo user.

Create an OpenStack RC file to set the various environment variables needed to run OpenStack commands.
This simplifies running commands as various OpenStack users; just source the RC file any time you want to change users. The directions are in the OpenStack client environment script guide. Don't use sudo for these commands. If you used admin/adminpw for your admin user, replace ADMIN_PASS with adminpw. To help identify which user environment is sourced, it is useful to also set the prompt in each script to indicate the user. Append this line after the other export commands in the RC files:

export PS1='\u[OS_${OS_USERNAME}]@\h:\w\$ '

Verify that Keystone is operating properly using the directions in the OpenStack Keystone verification guide. Don't use sudo for these commands. You don't need to recreate the admin-openrc.sh file you created previously; rather, this step will test the settings of the environment scripts created in the openrc install guide.

Install the Glance image storage service using the directions in the OpenStack Glance install guide. Note that command prompts in that guide that end with a # symbol must be run with sudo, while command prompts that end with a $ symbol do not. The OpenStack commands should be executed with the admin credentials sourced.

Import a demo Linux image into the Glance inventory, using the directions in the OpenStack Glance verification guide. This image provides an OS to start in VMs to demonstrate OpenStack. This guide assumes your server has direct access to the Internet; however, if you need an HTTP proxy to access the Internet from your environment, you can specify the proxy prior to wget:

cumulus@controller$ http_proxy="http://my.http.proxy/" wget http://

Install the Nova compute controller service using the directions in the OpenStack Nova install guide (Controller Node). Note that command prompts in that guide that end with a # symbol must be run with sudo, while command prompts that end with a $ symbol do not.
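For reference, an admin RC file for this deployment might look like the sketch below. The exact variable set and auth URL depend on your OpenStack release and Keystone API version, so treat every value here as an assumed example (adminpw and the controller host alias follow this guide's conventions):

```shell
# admin-openrc.sh -- assumed example; adjust to your Keystone setup.
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=adminpw
export OS_AUTH_URL=http://controller:35357/v2.0
# Prompt hint so you can tell which credentials are currently sourced.
export PS1='\u[OS_${OS_USERNAME}]@\h:\w\$ '
```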
You'll need to replace the example 10.0.0.11 IP addresses in the my_ip, vncserver_listen, and vncserver_proxyclient_address fields with 10.254.192.1.

Install the Neutron networking service and components, as explained below. Working with Neutron requires some understanding of the requirements of the OpenStack deployment. Neutron is multi-faceted, in that it can provide Layer 3 routing, Layer 2 switching, DHCP service, firewall service, load-balancing service, and more. The OpenStack install guide
shows how to install a network node and utilizes the Open vSwitch agent, but neither is necessary here. To keep it simple, this design will use the DHCP agent and the Linux bridge agent.

This guide deviates a little from the published OpenStack Neutron install guide (Controller node). Therefore, be sure to make the following changes, denoted by the corresponding section title. Note that command prompts in that guide that end with a # symbol must be run with sudo, while command prompts that end with a $ symbol do not.

Section "To install the Networking components":

cumulus@controller$ sudo apt-get install neutron-server neutron-plugin-ml2 neutron-plugin-linuxbridge-agent python-neutronclient neutron-dhcp-agent

Section "To configure the Networking server component": During step d, use the following configs:

[DEFAULT]
...
core_plugin = ml2
service_plugins =
allow_overlapping_ips = True

Section "To configure the Modular Layer 2 (ML2) plug-in": This design uses the Linux bridge mechanism driver (agent) to build the networking connections for instances. In this design, the controller node needs to handle instance traffic for the DHCP agent. Use the following configuration for /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types =
mechanism_drivers = linuxbridge

[ml2_type_flat]
flat_networks = toswitch

[ml2_type_vlan]
network_vlan_ranges = toswitch:201:1000

[linux_bridge]
physical_interface_mappings = toswitch:bond0

[vxlan]
enable_vxlan = False

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Section "To configure Compute to use Networking": Since ML2 is using the Linux bridge agent, the interface driver needs to match.
During step a, use the following configs:

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

Since there will not be a dedicated network node in this design, the DHCP and metadata agents are still needed and will run on the controller. The following instructions are based on sections in the published OpenStack Neutron install guide (Network node). Configure the controller using the modified instructions here.

Section "To configure the DHCP agent": Edit the /etc/neutron/dhcp_agent.ini file:

[DEFAULT]
...
verbose = True
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True

Section "To configure the metadata agent": Configure the metadata agent on the controller node using the steps in the published guide. All steps should be executed on the controller for the metadata agent and Nova, by editing the files /etc/neutron/metadata_agent.ini and /etc/nova/nova.conf.

Finalize the controller install by executing the following:

sudo service nova-api restart
sudo service neutron-server restart
sudo service neutron-plugin-linuxbridge-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart

Install the Horizon Web dashboard packages, as explained in the OpenStack Horizon install guide. Then remove the openstack-dashboard-ubuntu-theme package, as it may cause rendering issues:

cumulus@controller$ sudo apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
cumulus@controller$ sudo apt-get remove --purge openstack-dashboard-ubuntu-theme

Installing the Horizon Web interface is optional.
If installed, it is not a good idea to expose the Horizon Web interface to untrusted networks without hardening the configuration. www.cumulusnetworks.com 31
8. Configure Each Compute Node

The remaining servers are all compute nodes. They run VMs, as directed by the controller. Connect to the node, using ssh as the user you configured when installing the OS. In this example, that user is called cumulus.

Enable IP forwarding. Since you're using multi-host mode, each compute node needs to use NAT for floating IP addresses. Edit /etc/sysctl.conf and uncomment the following line:

#net.ipv4.ip_forward=1

This enables forwarding on reboots, but not immediately, so enable it right now as well:

cumulus@compute0n:~$ sudo sysctl -w net.ipv4.ip_forward=1

Configure the uplinks. The server has two 10G interfaces; in this example they are called p1p1 and p1p2. They may be named differently on other server hardware platforms. The ifenslave package must be installed for bonding support, and the vlan package must be installed for VLAN support. To install them, run:

cumulus@compute01$ sudo apt-get install ifenslave vlan

For the bond to come up, the bonding driver needs to be loaded. Similarly, for VLANs, the 802.1q driver must be loaded. So that they will be loaded automatically at boot time, edit /etc/modules and add the following to the end:

bonding
8021q

Now load the modules:

cumulus@compute0n:~$ sudo modprobe bonding
cumulus@compute0n:~$ sudo modprobe 8021q
Edit /etc/network/interfaces and add the following at the end:

#The bond, one interface goes to each leaf.
auto bond0
iface bond0 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down
    bond-slaves none

#First 10G link.
auto p1p1
iface p1p1 inet manual
    bond-master bond0

#Second 10G link.
auto p1p2
iface p1p2 inet manual
    bond-master bond0

#OpenStack API VLAN.
auto bond0.102
iface bond0.102 inet static
    address 10.254.192.2
    netmask 255.255.240.0

#External network access VLAN.
auto bond0.101
iface bond0.101 inet static
    address 192.168.100.3
    netmask 255.255.255.0
    gateway 192.168.100.1

You'll need to increment the API VLAN's IP address (shown in green above, on bond0.102) for each compute node. You'll also need to increment the external VLAN's IP address (shown in green above, on bond0.101). The examples given above are for compute01. For compute02, you would use 10.254.192.3 and 192.168.100.4.

Note: Ubuntu uses ifupdown, while Cumulus Linux uses ifupdown2. The configuration format is similar, but many advanced configurations that work on the switch will not work in Ubuntu.

Now bring up the interfaces:

cumulus@compute0n:~$ sudo ifup -a

Verify that the VLAN interface is UP and LOWER_UP:

cumulus@compute0n:~$ sudo ip link show bond0.102
9: bond0.102@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 90:e2:ba:7c:28:28 brd ff:ff:ff:ff:ff:ff
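The incrementing rule can be expressed as a tiny helper so each node's addresses stay consistent as you add hosts. A sketch (the compute_ips name is illustrative; the offsets follow this guide, where the controller owns 10.254.192.1 and 192.168.100.2, so node N takes N+1 and N+2):

```shell
# Print the two VLAN addresses for compute node N, following the
# guide's scheme: API VLAN 10.254.192.(N+1)/20 on bond0.102 and
# external VLAN 192.168.100.(N+2)/24 on bond0.101.
compute_ips() {
    n=$1
    printf 'compute%02d: bond0.102 10.254.192.%s/20 bond0.101 192.168.100.%s/24\n' \
        "$n" "$((n + 1))" "$((n + 2))"
}

compute_ips 2
```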
Add a hostname alias for the controller. Edit /etc/hosts and add the following at the end:

10.254.192.1 controller

Verify that this node can talk to the controller over the API VLAN:

cumulus@compute0n:~$ ping -c 3 controller
PING controller (10.254.192.1) 56(84) bytes of data.
64 bytes from controller (10.254.192.1): icmp_seq=1 ttl=64 time=0.229 ms
64 bytes from controller (10.254.192.1): icmp_seq=2 ttl=64 time=0.243 ms
64 bytes from controller (10.254.192.1): icmp_seq=3 ttl=64 time=0.220 ms

--- controller ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.220/0.230/0.243/0.019 ms

Install the Nova compute node service using the directions in the OpenStack Nova compute install guide. Note that command prompts in that guide that end with a # symbol must be run with sudo, while command prompts that end with a $ symbol do not.

NOTE: The Ubuntu Nova package has a bug whereby the default nova.conf has the key logdir, but it should be log_dir. This is easily fixed using the following command:

sudo sed -i "s/\(log\)\(dir\)/\1_\2/g" /etc/nova/nova.conf

Section "To install and configure Compute hypervisor components", parts 2c and 2d: the value of MANAGEMENT_INTERFACE_IP_ADDRESS is 10.254.192.2 for compute01.

Install the Neutron compute node service using the directions in the OpenStack Neutron install guide (Compute node). Since we are using the Linux bridge agent, be sure to make the following changes, denoted by the corresponding section title. Note that command prompts in that guide that end with a # symbol must be run with sudo, while command prompts that end with a $ symbol do not.

Section "To install the Networking components":

cumulus@compute0n:~$ sudo apt-get install neutron-plugin-ml2 neutron-plugin-linuxbridge-agent

Section "To configure the Networking server component": During step d, use the following configs:

[DEFAULT]
...
core_plugin = ml2
service_plugins =
allow_overlapping_ips = True
Section "To configure the Modular Layer 2 (ML2) plug-in": This design uses the Linux bridge mechanism driver (agent) to build the networking connections for instances. Use the following configuration for /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types =
mechanism_drivers = linuxbridge

[ml2_type_flat]
flat_networks = toswitch

[ml2_type_vlan]
network_vlan_ranges = toswitch:201:1000

[linux_bridge]
physical_interface_mappings = toswitch:bond0

[vxlan]
enable_vxlan = False

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Section "To configure Compute to use Networking": Since ML2 is using the Linux bridge agent, the interface driver needs to match. For step a, use the following configs:

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

Now restart the services:

cumulus@compute0n:~$ sudo service nova-compute restart
cumulus@compute0n:~$ sudo service neutron-plugin-linuxbridge-agent restart

Repeat all the steps in this section on the rest of the compute nodes, changing the hostnames and IP addresses appropriately in each command or file.
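The logdir fix from the Nova step is a plain substitution; here it is demonstrated on a sample string so you can see exactly what the sed expression rewrites before running it against nova.conf:

```shell
# Rewrite "logdir" to "log_dir", as the guide's sed command does to
# /etc/nova/nova.conf; demonstrated on a string instead of the file.
fix_logdir() {
    printf '%s\n' "$1" | sed 's/\(log\)\(dir\)/\1_\2/g'
}

fix_logdir 'logdir=/var/log/nova'
```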
9. Create Project Networks

On the controller, source the openrc.sh file for the project user account. In Neutron, a network is owned by a project (tenant). Alternatively, a network may be shared by all projects using the --shared option.

To create VLAN networks, use the neutron net-create command. Add the optional --shared option to allow any project to use the network. The --provider options reference the Neutron ML2 plugin providing the service. The physical_network is the same name specified in ml2_conf.ini, the network_type is vlan, and the segmentation_id is the VLAN ID.

cumulus[os_user1]@os-controller:~$ neutron net-create vmnet1 \
  --provider:physical_network toswitch --provider:network_type vlan \
  --provider:segmentation_id 201

Associate a subnet with the network using the neutron subnet-create command. The allocation-pool defines the DHCP address pool used on the subnet.

cumulus[os_user1]@os-controller:~$ neutron subnet-create vmnet1 10.10.1.0/24 \
  --name SUBNET1 --allocation-pool start=10.10.1.100,end=10.10.1.199 \
  --gateway 10.10.1.1

In the next example, we automate the creation of 10 VLANs and subnets, which may take a minute. These will all be owned by the admin user, so the --shared option is used. Each network will create a bridge interface in Linux, and the specified segmentation ID will be the 802.1q VLAN tag used.

cumulus[os_admin]@os-controller:~$ for i in `seq 1 10`; do
  neutron net-create vmnet$i --shared --provider:physical_network toswitch \
    --provider:network_type vlan --provider:segmentation_id $((200+i))
  neutron subnet-create vmnet$i 10.10.$i.0/24 --name SUBNET$i \
    --allocation-pool start=10.10.$i.100,end=10.10.$i.199 \
    --gateway 10.10.$i.1
done

The default quota is set to only allow ten networks per project.
Creating an eleventh network results in the following error:

root[os_admin]@os-controller:~$ neutron net-create vmnet11 \
  --provider:physical_network toswitch --provider:network_type vlan \
  --provider:segmentation_id 211
Quota exceeded for resources: ['network']

To launch a VM using the CLI, follow this guide. Otherwise, proceed to deploy the OpenStack Web UI, named Horizon.

10. Start VMs Using the OpenStack Horizon Web UI

Point a Web browser at http://192.168.100.2/horizon and log in (user: admin, password: adminpw). Start a VM in your new OpenStack cloud. The documentation describing the Horizon Web UI is available here.
CONCLUSION Conclusion Summary The fundamental abstraction of hardware from software and providing customers a choice through a hardware agnostic approach is core to the philosophy of Cumulus Networks and fits very well within the software-centric, commodity hardware friendly design of OpenStack. Just as OpenStack users have choice in server compute and storage, they can tap the power of Open Networking and select from a broad range of switch providers running Cumulus Linux. Choice and CapEx savings are only the beginning. OpEx savings come from agility through automation. Just as OpenStack orchestrates the cloud by enabling the automated provisioning of hosts, virtual networks, and VMs through the use of APIs and interfaces, Cumulus Linux enables network and data center architects to leverage automated provisioning tools and templates to define and provision physical networks. References Article/Document OpenStack Documentation Database Install Guide Message Queue Install Guide Keystone Install Guide Users Install Guide Services Install Guide Openrc Install Guide Keystone Verification Install Guide Glance Install Guide Nova Install Guide Neutron Network Install Guide URL http://docs.openstack.org/kilo/installguide/install/apt/content/index.html Cumulus Linux Documentation http://docs.cumulusnetworks.com Quick Start Guide Understanding Network Interfaces MLAG LACP Bypass Authentication, Authorization, and Accounting Zero Touch Provisioning www.cumulusnetworks.com 37
OPENSTACK AND CUMULUS LINUX: VALIDATED DESIGN GUIDE Article/Document Cumulus Linux KB Articles Configuring /etc/network/interfaces with Mako Demos and Training Installing collectd and graphite Manually Putting All Switch Ports into a Single VLAN Cumulus Linux Product Information Software Pricing Hardware Compatibility List Cumulus Linux Downloads Cumulus Linux Repository Cumulus Networks GitHub Repository URL http://cumulusnetworks.com/product/pricing/ https://support.cumulusnetworks.com/hc/enus/articles/202868023 https://support.cumulusnetworks.com/hc/enus/sections/200398866 https://support.cumulusnetworks.com/hc/enus/articles/201787586 https://support.cumulusnetworks.com/hc/enus/articles/203748326 http://cumulusnetworks.com/support/linux-hardwarecompatibility-list/ http://cumulusnetworks.com/downloads/ http://repo.cumulusnetworks.com https://github.com/cumulusnetworks/ 38
APPENDIX A: EXAMPLE /ETC/NETWORK/INTERFACES CONFIGURATIONS Appendix A: Example /etc/network/interfaces Configurations leaf01 cumulus@leaf01$ cat /etc/network/interfaces auto eth0 iface eth0 address 192.168.0.90/24 gateway 192.168.0.254 # physical interface configuration auto swp1 iface swp1 auto swp2 iface swp2 auto swp3 iface swp3.. auto swp48 iface swp48 bridge-access 101.. auto swp52 iface swp52 # peerlink bond for clag #Bond for the peer link. MLAG control traffic and data when links are down. auto peerlink iface peerlink bond-slaves swp51 swp52 www.cumulusnetworks.com 39
OPENSTACK AND CUMULUS LINUX: VALIDATED DESIGN GUIDE bond-use-carrier 1 #VLAN for the MLAG control traffic. auto peerlink.4094 iface peerlink.4094 address 169.254.255.1/30 clagd-peer-ip 169.254.255.2 clagd-backup-ip 192.168.0.91/24 clagd-sys-mac 44:38:39:ff:00:02 #Bond up to the spines. auto uplink iface uplink bond-slaves swp49 swp50 bond-use-carrier 1 clag-id 1000 #Bonds down to the host. Only one swp, because the other swp is on the peer switch. auto compute01 allow-hosts compute01 iface compute01 bond-slaves swp1 bond-lacp-bypass-allow 1 mstpctl-portadminedge yes mstpctl-bpduguard yes clag-id 1 auto compute02 allow-hosts compute02 iface compute02 bond-slaves swp2 40
bond-lacp-bypass-allow 1 mstpctl-portadminedge yes mstpctl-bpduguard yes clag-id 2 auto controller allow-hosts controller iface controller bond-slaves swp3 bond-lacp-bypass-allow 1 mstpctl-portadminedge yes mstpctl-bpduguard yes clag-id 3 #Bridge that connects our peer, uplink to the spines, and the hosts. auto bridge iface bridge bridge-vlan-aware yes bridge-ports uplink swp48 peerlink compute01 compute02 controller bridge-stp on bridge-vids 100-2000 mstpctl-treeprio 16384
OPENSTACK AND CUMULUS LINUX: VALIDATED DESIGN GUIDE leaf02 cumulus@leaf02$ cat /etc/network/interfaces auto eth0 iface eth0 address 192.168.0.91/24 gateway 192.168.0.254 # physical interface configuration auto swp1 iface swp1 auto swp2 iface swp2 auto swp3 iface swp3.. auto swp48 iface swp48 bridge-access 101.. auto swp52 iface swp52 # peerlink bond for clag #Bond for the peer link. MLAG control traffic and data when links are down. auto peerlink iface peerlink bond-slaves swp51 swp52 bond-use-carrier 1 #VLAN for the MLAG control traffic. auto peerlink.4094 42
APPENDIX A: EXAMPLE /ETC/NETWORK/INTERFACES CONFIGURATIONS iface peerlink.4094 address 169.254.255.2/30 clagd-peer-ip 169.254.255.1 clagd-backup-ip 192.168.0.90/24 clagd-sys-mac 44:38:39:ff:00:02 #Bond up to the spines. auto uplink iface uplink bond-slaves swp49 swp50 bond-use-carrier 1 clag-id 1000 #Bonds down to the host. Only one swp, because the other swp is on the peer switch. auto compute01 allow-hosts compute01 iface compute01 bond-slaves swp1 bond-lacp-bypass-allow 1 mstpctl-portadminedge yes mstpctl-bpduguard yes clag-id 1 auto compute02 allow-hosts compute02 iface compute02 bond-slaves swp2 bond-lacp-bypass-allow 1 mstpctl-portadminedge yes mstpctl-bpduguard yes clag-id 2 www.cumulusnetworks.com 43
auto controller allow-hosts controller iface controller bond-slaves swp3 bond-lacp-bypass-allow 1 mstpctl-portadminedge yes mstpctl-bpduguard yes clag-id 3 #Bridge that connects our peer, uplink to the spines, and the hosts. auto bridge iface bridge bridge-vlan-aware yes bridge-ports uplink swp48 peerlink compute01 compute02 controller bridge-stp on bridge-vids 100-2000 mstpctl-treeprio 16384
leaf03

cumulus@leaf03$ cat /etc/network/interfaces
auto eth0
iface eth0
    address 192.168.0.92/24
    gateway 192.168.0.254

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
...
auto swp52
iface swp52

# peerlink bond for clag
#Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp51 swp52
    bond-use-carrier 1

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address 169.254.255.1/30
    clagd-peer-ip 169.254.255.2
    clagd-backup-ip 192.168.0.93/24
    clagd-sys-mac 44:38:39:ff:00:03

#Bond up to the spines.
auto uplink
iface uplink
    bond-slaves swp49 swp50
    bond-use-carrier 1
    clag-id 1000

#Bonds down to the host. Only one swp, because the other swp is on the peer switch.
auto compute03
allow-hosts compute03
iface compute03
    bond-slaves swp1
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 3

auto compute04
allow-hosts compute04
iface compute04
    bond-slaves swp2
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 4

#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink swp48 peerlink compute03 compute04
    bridge-stp on
    bridge-vids 100-2000
    mstpctl-treeprio 16384
leaf04

cumulus@leaf04$ cat /etc/network/interfaces
auto eth0
iface eth0
    address 192.168.0.93/24
    gateway 192.168.0.254

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
...
auto swp52
iface swp52

# peerlink bond for clag
#Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp51 swp52
    bond-use-carrier 1

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address 169.254.255.2/30
    clagd-peer-ip 169.254.255.1
    clagd-backup-ip 192.168.0.92/24
    clagd-sys-mac 44:38:39:ff:00:03

#Bond up to the spines.
auto uplink
iface uplink
    bond-slaves swp49 swp50
    bond-use-carrier 1
    clag-id 1000

#Bonds down to the host. Only one swp, because the other swp is on the peer switch.
auto compute03
allow-hosts compute03
iface compute03
    bond-slaves swp1
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 3

auto compute04
allow-hosts compute04
iface compute04
    bond-slaves swp2
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 4

#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink swp48 peerlink compute03 compute04
    bridge-stp on
    bridge-vids 100-2000
    mstpctl-treeprio 16384
spine01

cumulus@spine01$ sudo vi /etc/network/interfaces
auto eth0
iface eth0
    address 192.168.0.94/24
    gateway 192.168.0.254

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
...
auto swp32
iface swp32

# peerlink bond for clag
auto peerlink
iface peerlink
    bond-slaves swp31 swp32
    bond-use-carrier 1

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address 169.254.255.1/30
    clagd-enable yes
    clagd-peer-ip 169.254.255.2
    clagd-backup-ip 192.168.0.95/24
    clagd-sys-mac 44:38:39:ff:00:00

# leaf01-leaf02 downlink
auto downlink1
allow-leafs downlink1
iface downlink1
    bond-slaves swp1 swp2
    bond-use-carrier 1
    clag-id 1

# leaf03-leaf04 downlink
auto downlink2
allow-leafs downlink2
iface downlink2
    bond-slaves swp3 swp4
    bond-use-carrier 1
    clag-id 2

#Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports peerlink downlink1 downlink2
    bridge-stp on
    bridge-vids 100-2000
    mstpctl-treeprio 12288
spine02

cumulus@spine02$ sudo vi /etc/network/interfaces
auto eth0
iface eth0
    address 192.168.0.95/24
    gateway 192.168.0.254

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
...
auto swp32
iface swp32

# peerlink bond for clag
auto peerlink
iface peerlink
    bond-slaves swp31 swp32
    bond-use-carrier 1

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address 169.254.255.2/30
    clagd-enable yes
    clagd-peer-ip 169.254.255.1
    clagd-backup-ip 192.168.0.94/24
    clagd-sys-mac 44:38:39:ff:00:00

# leaf01-leaf02 downlink
auto downlink1
allow-leafs downlink1
iface downlink1
    bond-slaves swp1 swp2
    bond-use-carrier 1
    clag-id 1

# leaf03-leaf04 downlink
auto downlink2
allow-leafs downlink2
iface downlink2
    bond-slaves swp3 swp4
    bond-use-carrier 1
    clag-id 2

#Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports peerlink downlink1 downlink2
    bridge-stp on
    bridge-vids 100-2000
    mstpctl-treeprio 12288
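The bridge stanzas above are easy to get wrong by hand, for example by listing a bond under bridge-ports with a misspelled name. As a sanity check before reloading a configuration, a small script can flag bridge-ports members that have no matching iface stanza. This is an illustrative sketch of our own, not a Cumulus Linux tool; the function name and the single-file layout are assumptions.

```shell
# Sketch of a sanity check for an /etc/network/interfaces file: flag any
# bridge-ports member that has no matching "iface <name>" stanza.
# check_bridge_ports is a hypothetical helper, not a Cumulus Linux command.
check_bridge_ports() {
    file="$1"
    # collect every name listed after a "bridge-ports" keyword
    members=$(awk '/^[[:space:]]*bridge-ports/ { for (i = 2; i <= NF; i++) print $i }' "$file")
    for m in $members; do
        # a member counts as defined if an "iface <name>" stanza exists
        if ! grep -Eq "^iface $m( |\$)" "$file"; then
            echo "undefined bridge member: $m"
        fi
    done
}
```

Run against a switch's interfaces file before applying it: a typo such as bridge-ports uplinks, when the bond is actually named uplink, shows up as "undefined bridge member: uplinks".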
Appendix B: Network Setup Checklist

1. Set up physical network.
   Tasks:
   - Select network switches.
   - Plan cabling.
   - Install Cumulus Linux.
   Considerations:
   - Refer to the HCL and hardware guides at http://cumulusnetworks.com/support/hcl.
   - Refer to the knowledge base article Suggested Transceivers and Cables: https://support.cumulusnetworks.com/hc/en-us/articles/202983783.
   - Generally, higher-numbered ports on a switch are reserved for uplinks, so: assign downlinks or host ports to the low end (swp1, swp2, and so on), reserve higher-numbered ports for the network, and reserve the highest ports for MLAG peer links.
   - Connect all console ports.
   - Obtain the latest version of Cumulus Linux.
   - Obtain a license key, which is separate from the Cumulus Linux OS distribution.
   - To minimize variables and aid troubleshooting, use identical versions across switches: the same X.Y.Z release, packages, and patch levels.
   - See the Quick Start Guide in the Cumulus Linux documentation.

2. Basic physical network configuration.
   Tasks:
   - Reserve management space.
   - Edit configuration files.
   Considerations:
   - Reserve a pool of IP addresses, and define hostnames and DNS. RFC 1918 addresses should be used where possible. Note: we used RFC 6598 addresses in our automation explicitly to avoid colliding with any existing RFC 1918 deployments.
   - Apply standards and conventions to promote similar configurations. For example, place stanzas in the same order in configuration files across switches, and specify child interfaces before their parent interfaces (so a bond member appears earlier in the file than the bond itself). This allows for standardization, easier maintenance and troubleshooting, and easier automation and templating.
   - Consider naming conventions for consistency, readability, and manageability; this also helps facilitate automation. For example, call your leaf switches leaf01 and leaf02 rather than leaf1 and leaf2. Use all lowercase for names, and avoid characters that are not DNS-compatible.
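The naming advice above can be enforced mechanically. A hedged sketch of a helper of our own (not part of the guide's automation) that accepts only lowercase, DNS-safe labels:

```shell
# Sketch: accept only lowercase DNS-safe labels (letters, digits, interior
# hyphens, no leading/trailing hyphen). valid_name is a hypothetical helper.
valid_name() {
    printf '%s\n' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}
```

For example, `valid_name leaf01` succeeds, while `valid_name Spine_02` fails, since uppercase letters and underscores are not DNS-friendly.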
   - Define child interfaces before using them in parent interfaces. For example, create the member interfaces of a bond before defining the bond interface itself.
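The child-before-parent rule can also be checked automatically. An illustrative awk sketch (ours, not a shipped tool) that warns when a bond references a member interface not yet defined earlier in the file:

```shell
# Sketch: warn when a bond-slaves line names an interface that has not
# been defined earlier in the file. check_child_order is hypothetical.
check_child_order() {
    awk '
        /^iface /     { defined[$2] = 1; current = $2 }
        /bond-slaves/ {
            for (i = 2; i <= NF; i++)
                if (!($i in defined))
                    printf "bond %s uses %s before it is defined\n", current, $i
        }
    ' "$1"
}
```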
   - Define switch ports (swp) in /etc/network/interfaces on each switch: instantiate the swp interfaces for use with the ifup and ifdown commands.
   - Set speed and duplex. These settings depend on your network.

3. Verify connectivity.
   - Use LLDP (Link Layer Discovery Protocol). LLDP is useful for debugging or verifying cabling between directly attached switches. By default, Cumulus Linux listens for and advertises LLDP packets on all configured Layer 3 routed or Layer 2 access ports. LLDP is supported on tagged interfaces or those configured as 802.1q subinterfaces. The lldpctl command displays a dump of the connected interfaces.

4. Set up physical servers.
   - Install Ubuntu.

5. Configure spine switches.
   Tasks:
   - Create the peer link bond between each pair of switches.
   - Enable MLAG.
   - Assign clagd-sys-mac.
   - Assign priority.
   Considerations:
   - Assign an IP address for the clagd peer link. Consider using a link-local address (RFC 3927, 169.254/16) to avoid it being advertised, or an RFC 1918 private address.
   - Use a very high-numbered VLAN, if possible, to separate the peer communication traffic from the typical VLANs handling data traffic. Valid VLAN IDs end at 4094.
   - Set up MLAG in switch pairs. There's no particular order necessary for connecting pairs.
   - Assign a unique clagd-sys-mac value per pair. This value is used in the spanning tree calculation, so assigning unique values prevents overlapping MAC addresses. Use the range reserved for Cumulus Networks: 44:38:39:FF:00:00 through 44:38:39:FF:FF:FF.
   - Define primary and secondary switches in an MLAG pair, if desired; otherwise, the switches elect a primary on their own. Set the priority if you want to explicitly control which switch is designated primary.

6. Configure each pair of leaf switches.
   - Repeat the steps used for the spine switches; the steps for leaf switches are similar.
   - Connect to the core routers.

7. Configure the OpenStack controller.
   - Install and configure all components.
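Because every MLAG pair needs a clagd-sys-mac drawn from the Cumulus-reserved block, a quick regex check helps catch typos before deployment. A sketch under our own naming (in_clagd_range is illustrative, not a Cumulus command):

```shell
# Sketch: true only for MACs inside the Cumulus Networks reserved range
# 44:38:39:FF:00:00 through 44:38:39:FF:FF:FF (either letter case).
# in_clagd_range is a hypothetical helper.
in_clagd_range() {
    printf '%s\n' "$1" | grep -Eqi '^44:38:39:ff(:[0-9a-f]{2}){2}$'
}
```

For example, `in_clagd_range 44:38:39:ff:00:02 && echo ok` prints ok, while a MAC outside the reserved prefix fails the check.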
8. Configure each compute node.
   - Enable IP forwarding.
   - Configure uplinks.
   - Load modules.

9. Create tenant networks.
   - Create networks and VLANs.
   - Create subnets and IP address ranges.

10. Start VMs using the OpenStack Horizon web UI.
   - Log into the admin web UI. Note that there is no Network tab.