Rackspace Private Cloud Networking (2015-10-07)
Copyright 2014 Rackspace. All rights reserved.

This documentation is intended to help users understand OpenStack Networking in conjunction with Rackspace Private Cloud v4.
Table of Contents

1. Preface ... 1
1.1. About Rackspace Private Cloud ... 1
1.2. Rackspace Private Cloud configuration ... 1
1.3. Rackspace Private Cloud support ... 2
2. Networking in Rackspace Private Cloud ... 3
2.1. Network types ... 3
2.2. Namespaces ... 4
2.3. Metadata ... 4
2.4. OVS bridges ... 4
2.5. OpenStack Networking and high availability ... 4
I. Networking with nova-network ... 5
3. Floating IP addresses ... 6
3.1. Creating a pool of floating IP addresses ... 6
II. Networking with OpenStack Networking ... 8
4. OpenStack Networking installation ... 9
4.1. Installing a new cluster with OpenStack Networking ... 11
4.2. OpenStack Networking limitations ... 11
5. OpenStack Networking configuration ... 13
5.1. Interface Configurations ... 16
6. Creating a network ... 18
7. Security Groups ... 20
8. RPCDaemon ... 21
8.1. RPCDaemon overview ... 21
8.2. RPCDaemon operation ... 21
8.3. DHCPAgent plugin ... 21
8.4. L3Agent plugin ... 22
8.5. RPCDaemon configuration ... 22
8.6. General RPCDaemon options ... 22
8.7. DHCPAgent and L3Agent options ... 23
8.8. Dump plugin options ... 23
8.9. Command line options ... 23
9. Troubleshooting ... 24
10. Additional resources ... 26
10.1. Document Change History ... 26
1. Preface

1.1. About Rackspace Private Cloud

Rackspace Private Cloud is a set of tools that allow you to quickly install an OpenStack private cloud, configured as recommended by Rackspace OpenStack specialists. Rackspace Private Cloud uses Chef to create an OpenStack cluster on Ubuntu, CentOS, or Red Hat Enterprise Linux. The installation scripts provide a familiar approach for Linux system administrators, and can be updated easily without downloading and installing a new ISO.

Note
This documentation applies to Rackspace Private Cloud v4. The Rackspace Private Cloud v9 Installation is available at http://docs.rackspace.com/rpc/api/v9/bk-rpc-installation/content/rpc-common-front.html

1.2. Rackspace Private Cloud configuration

Rackspace Private Cloud v4.1.5 installs OpenStack Grizzly, which contains these components:

- Compute (nova)
- Image Service (glance)
- Dashboard (horizon)
- Identity (keystone)
- Virtual Network (neutron)

Rackspace Private Cloud v4.2.2 installs OpenStack Havana, which contains these components:

- Compute (nova)
- Image Service (glance)
- Dashboard (horizon)
- Identity (keystone)
- Virtual Network (neutron)
- Metering (ceilometer)

Object Storage (swift) is also available in the Rackspace Private Cloud Object Storage offering.
1.3. Rackspace Private Cloud support

Rackspace offers 365x24x7 support for Rackspace Private Cloud. If you are interested in purchasing Escalation Support or Core Support for your cloud, or taking advantage of our training offerings, contact us at <opencloudinfo@rackspace.com>.

You can also visit the Rackspace Private Cloud community forums. The forum is open to all Rackspace Private Cloud users and is moderated and maintained by Rackspace personnel and OpenStack specialists: https://community.rackspace.com/products/f/45

For any other information regarding your Rackspace Private Cloud, refer to the Rackspace Private Cloud release notes.
2. Networking in Rackspace Private Cloud

A network deployed with the Rackspace Private Cloud cookbooks uses nova-network by default, but OpenStack Networking (project name Neutron) can be manually enabled. For proof-of-concept and demonstration purposes, nova-network is adequate. If you intend to build a production cluster and require software-defined networking, use OpenStack Networking.

This book contains information and procedures for managing the default networking component, nova-network, in Part I, Networking with nova-network [5]. If you want to use OpenStack Networking for your private cloud, you must specify it in your Chef environment and configure the nodes appropriately. The book contains the basic procedures required in Part II, Networking with OpenStack Networking [8].

When OpenStack Networking is enabled, the components are installed on the controller, compute, and network nodes in this configuration:

Controller node: Hosts the OpenStack Networking server service, which provides the networking API, communicates with the agents, and tracks them.

Compute node: Hosts an OVS plugin agent.

Network node: Hosts these agents:

- DHCP agent: Spawns and controls the dnsmasq processes to provide leases to instances. This agent also spawns the neutron-ns-metadata-proxy processes as part of the metadata system.
- Metadata agent: Provides a metadata proxy to the nova-api-metadata service. The neutron-ns-metadata-proxy processes direct traffic that they receive in their namespaces to the proxy.
- OVS plugin agent: Controls OVS network bridges and the routes between them using patch, tunnel, or tap without requiring an external OpenFlow controller.
- L3 agent: Performs L3 forwarding and NAT.

Note
You can use the single-network-node role alone or in combination with the ha-controller1 or single-compute roles. The single-network-node role can reside on the same device as the ha-controller1 and single-compute roles if you are using the devices in combination.

2.1. Network types

The OpenStack Networking configuration provided by the Rackspace Private Cloud cookbooks allows you to choose between VLAN or GRE isolated networks, both provider- and tenant-specific. From the provider side, an administrator can also create a flat network.
The type of network that is used for private tenant networks is determined by the network_type attribute, which can be edited in the Chef override_attributes. This attribute sets both the default provider network type and the only type of network that tenants are able to create. Administrators can always create flat and VLAN networks. GRE networks of any type require the network_type to be set to gre.

2.2. Namespaces

For each network you create, the network node (or controller node, if combined) will have a unique network namespace (netns) created by the DHCP and Metadata agents. The netns hosts an interface and IP addresses for dnsmasq and the neutron-ns-metadata-proxy. You can view the namespaces with the ip netns [list] command, and can interact with them with the ip netns exec <namespace> <command> command. An example is shown at the end of this chapter.

2.3. Metadata

Not all networks or VMs need metadata access. Rackspace recommends that you use metadata if you are using a single network. If you need metadata, you will need to enable metadata route injection when creating a subnet. If you need to use a default route and provide instances with access to the metadata route, see Procedure 6.2, Creating a subnet [18]. Note that this approach will not provide metadata on CirrOS images. However, booting a CirrOS instance with nova boot --config-drive will bypass the metadata route requirement.

2.4. OVS bridges

An OVS bridge for provider traffic is created and configured on the nodes where single-network-node and single-compute are applied. Bridges are created, but physical interfaces are not added. An OVS bridge is not created on a controller-only node.

When creating networks, you can specify the type and properties, such as flat or VLAN, shared or tenant, or provider or overlay. These properties identify and determine the behavior and resources of instances attached to the network. The cookbooks will create bridges for the configuration that you specify, although they do not add physical interfaces to provider bridges. For example, if you specify a network type of GRE, a br-tun tunnel bridge will be created to handle overlay traffic.

2.5. OpenStack Networking and high availability

OpenStack Networking has high availability (HA) as of Rackspace Private Cloud v4.2.0, and has been tested on Ubuntu 12.04 and CentOS 6.4. For an HA configuration, you must configure the OpenStack Networking roles on the controller node (ha-controller1). Do not use a standalone network node if you require HA from your OpenStack Networking configuration.
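The commands described in Section 2.2 can be used to inspect a DHCP namespace directly on the network node. The namespace name and addresses below are hypothetical examples; list the real names on your node first:

# ip netns list
qdhcp-a1b2c3d4-e5f6-7890-abcd-ef1234567890
# ip netns exec qdhcp-a1b2c3d4-e5f6-7890-abcd-ef1234567890 ip addr
# ip netns exec qdhcp-a1b2c3d4-e5f6-7890-abcd-ef1234567890 ping -c 1 10.0.0.2

The first command shows one qdhcp- namespace per DHCP-enabled network (and, where L3 routers are in use, qrouter- namespaces); the other two run ordinary networking commands inside the selected namespace.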
Part I. Networking with nova-network
3. Floating IP addresses

This chapter discusses how to manage floating IP addresses in Rackspace Private Cloud, using the default network management tool, nova-network. For more information about networking in OpenStack, refer to the Networking chapter of the OpenStack Cloud Administrator Guide.

Note
Currently, the dashboard does not permit robust network management. The procedures in this chapter use the dashboard in conjunction with the nova commands on the command line to manage networking.

3.1. Creating a pool of floating IP addresses

This section shows you how to create a pool of floating IP addresses, allocate an address to a project, and assign it to an instance.

Note
If your cloud is hosted in a Rackspace data center, contact your Rackspace support representative for assistance with floating IP addresses.

1. Log in to the controller node as the root user.

2. Issue the nova floating-ip-create command with the appropriate variables for your environment. If you have an IP range that has been specified for you, use the CIDR you have been given:

# nova floating-ip-create --ip_range=cidr

This creates a pool of floating IP addresses, which are available to all projects on the host. You can now use the dashboard to allocate a floating IP address and assign it to an instance.

3. With your project selected in the navigation panel, open the Access & Security page.

4. On the Floating IPs tab, click the Allocate IP to Project button.

5. In the Allocate Floating IP dialog box, accept the default (typically Floating) in the Pool drop-down menu and click Allocate IP. You will receive a confirmation message that a floating IP address has been allocated to the project, and the IP address will appear in the Floating IPs table. This reserves the address for the project, but does not immediately associate it with an instance.

6. To associate the address with an instance, locate the IP address in the Floating IPs table, and click Associate IP.
7. In the Manage Floating IP Associations dialog, ensure that the allocated IP address is selected and select the instance from the Instance menu. Click Associate to complete the association.

You will receive a confirmation message that the IP address has been associated with the instance. The instance ID will now appear in the Floating IPs table, associated with the IP address. It may be a few minutes before the IP address is included on the Instances table on the Instances & Volumes page.
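The allocation and association steps can also be performed entirely from the command line with the nova client, which may be quicker when scripting. The pool name, instance name, and address below are hypothetical examples; check the available pools and instances with nova floating-ip-pool-list and nova list:

# nova floating-ip-create Floating
# nova floating-ip-list
# nova add-floating-ip example-instance 192.0.2.25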
Part II. Networking with OpenStack Networking
4. OpenStack Networking installation

This chapter discusses how to install OpenStack Networking in Rackspace Private Cloud as a replacement for the default network management tool, nova-network. To use OpenStack Networking, you must specify it in your Chef environment.

The procedures in this book assume that you have controller and compute nodes already configured, and that you will be adding the single-network-node role to an existing cluster where a controller node and at least one compute node have already been configured. You can configure OpenStack Networking roles on the controller node or on a stand-alone network node; that is, the single-network-node role can be applied to an already-configured controller node or to a stand-alone network node.

The assumed environment is as follows:

- A minimum of one controller node configured with the ha-controller1 role, with an out-of-band eth0 management interface. This controller node can also be configured with the single-network-node role, which means the controller node serves the dual purpose of hosting core OpenStack services and networking services.
- A minimum of one compute node configured with the single-compute role, with an out-of-band eth0 management interface and an eth1 physical provider interface.
- If you do not require OpenStack Networking HA, and are not using the controller node for the single-network-node role, you will need one network node that will be configured with the single-network-node role, with an out-of-band eth0 management interface and an eth1 physical provider interface. You should not use a stand-alone network node if you need OpenStack Networking to be HA.

Before you begin the OpenStack Networking installation, make sure you have the Classless Inter-Domain Routing (CIDR) ranges for the following:

- The Nova network CIDR
- The public network CIDR
- The management network CIDR

Also make sure you have the following information:

- The name of the Nova cluster
- The login credentials for an OpenStack administrative user

The Nova, public, and management networks must be pre-existing, working networks with addresses already configured on the hosts. They are defined by CIDR range, and any network interface with an address within the named CIDR range is assumed to be included in that network. You or your hosting provider must provision the CIDRs. You can specify the same CIDR for multiple networks; all three networks can use the same CIDR, but this is not recommended for production environments.
This table lists the networks and the services that bind to the IP address within each of these general networks:

Network: nova
Services: keystone-admin-api, nova-xvpvnc-proxy, nova-novnc-proxy, nova-novnc-server

Network: public
Services: graphite-api, keystone-service-api, glance-api, glance-registry, nova-api, nova-ec2-admin, nova-ec2-public, nova-volume, neutron-api, cinder-api, ceilometer-api, horizon-dash, horizon-dash_ssl

Network: management
Services: graphite-statsd, graphite-carbon-line-receiver, graphite-carbon-pickle-receiver, graphite-carbon-cache-query, memcached, collectd, mysql, keystone-internal-api, glance-admin-api, glance-internal-api, nova-internal-api, nova-admin-api, cinder-internal-api, cinder-admin-api, cinder-volume, ceilometer-internal-api, ceilometer-admin-api, ceilometer-central, rabbitmq-server
On Ubuntu, you may also need to install a linux-headers package on the nodes where the networking roles will be applied and on the compute nodes. This allows the openvswitch-datapath-dkms tool to build the OpenVSwitch module required by the service. Install the package with this command:

# apt-get install linux-headers-`uname -r`

4.1. Installing a new cluster with OpenStack Networking

If you are installing a cluster and want to install OpenStack Networking from the start, follow the instructions documented in the Rackspace Private Cloud Installation, and take the following additional steps:

1. Modify the override_attributes as described in Procedure 5.1, Editing the override attributes [13].

2. Add the single-network-node role to the run list when creating controller nodes, as shown in the following examples.

For a single controller:

# knife node run_list add <devicehostname> \
'role[single-controller],role[single-network-node]'

For HA controllers:

# knife node run_list add <devicehostname> \
'role[ha-controller1],role[single-network-node]'
# knife node run_list add <devicehostname> \
'role[ha-controller2],role[single-network-node]'

4.2. OpenStack Networking limitations

When using OpenStack Networking, controller nodes with networking features and standalone networking nodes require namespace kernel features which are not available in the default kernel shipped with Red Hat Enterprise Linux 6.4, CentOS 6.4, and older versions of these operating systems. If you require these features, we recommend that you use Rackspace Private Cloud v4.2.0 with CentOS 6.4 or Ubuntu 12.04 for the controller and networking nodes.

On CentOS 6.4, after applying the single-network-node role to the device, you must reboot it to use the appropriate version of the CentOS 6.4 kernel.
For more information about OpenStack Networking limitations, see the OpenStack Networking documentation. For more information about Red Hat-derivative kernel limitations, see the RDO FAQ.
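As a quick check after installing the headers and running chef-client on a node with networking roles, you can confirm that the Open vSwitch kernel module is loaded and that the switch responds; this is an illustrative verification rather than a required installation step:

# lsmod | grep openvswitch
# ovs-vsctl show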
5. OpenStack Networking configuration

This chapter discusses the various components that require configuration in order to use OpenStack Networking with Rackspace Private Cloud.

Procedure 5.1. Editing the override attributes

Before you can apply roles to your environment, edit the override_attributes section of your environment file to add the OpenStack Networking attributes.

1. On the Chef server, run the knife environment edit command and edit the nova network section of the override_attributes to specify OpenStack Networking:

"override_attributes": {
  "nova": {
    "network": {
      "provider": "neutron"
    }
  },
  "osops_networks": {
    "nova": "<novanetworkcidr>",
    "public": "<publicnetworkcidr>",
    "management": "<managementnetworkcidr>"
  }
}

2. If you require a Generic Routing Encapsulation (GRE) network, you must also add a network_type attribute:

"override_attributes": {
  "nova": {
    "network": {
      "provider": "neutron"
    }
  },
  "neutron": {
    "ovs": {
      "network_type": "gre"
    }
  }
}

3. If you need to customize your provider_network settings, you will need to add a provider_networks block to the override_attributes. Ensure that the label and bridge settings match the name of the interface for the instance network, and that the vlans value is correct for your provider VLANs, for example:
"neutron": { "ovs": { "provider_networks": [ { "label": "ph-eth1", "bridge": "br-eth1", "vlans": "1:1000" } ] } } 4. If you are working in an HA environment, add the neutron-api virtual IP (VIP) to the override-attributes. These attribute blocks define which VIPs are associated with which service, and define the virtual router ID (VRID) and network for each VIP: "override_attributes": { "vips": { "rabbitmq-queue": "<rabbitmqvip>", "horizon-dash": "<haproxyvip>", "horizon-dash_ssl": "<haproxyvip>" "keystone-service-api": "<haproxyvip>", "keystone-admin-api": "<haproxyvip>", "keystone-internal-api": "<haproxyvip>", "nova-xvpvnc-proxy": "<haproxyvip>", "nova-api": "<haproxyvip>", "nova-ec2-public": "<haproxyvip>", "nova-novnc-proxy": "<haproxyvip>", "cinder-api": "<haproxyvip>", "glance-api": "<haproxyvip>", "glance-registry": "<haproxyvip>", "swift-proxy": "<haproxyvip>", "neutron-api": "<haproxyvip>", "mysql-db": "<mysqlvip>", "config": { "<rabbitmqvip>": { "vrid": <rabbitmqvirtualrouterid>, "network": "<networkname>" }, "<haproxyvip>": { "vrid": <haproxyvirtualrouterid>, "network": "<networkname>" }, "<mysqlvip>": { "vrid": <mysqlvirtualrouterid>, "network": "<networkname>" } } } } Procedure 5.2. Applying the network role The single-network-node role can be applied in your environment after the controller node installation is complete, and eth1 is configured. You must also have updated the 14
override_attributes section of your environment file, as described in Procedure 5.1, Editing the override attributes [13]. The single-network-node role can be applied to a stand-alone network node, or to an existing controller node.

1. Add the single-network-node role to the target node's run list:

# knife node run_list add <devicehostname> 'role[single-network-node]'

2. Log in to the target node using ssh.

3. Run chef-client on the node. It will take chef-client several minutes to complete the installation tasks. As it does so, it will provide output to help you monitor the progress of the installation.

Note
On CentOS 6.4, after applying the single-network-node role to the device, you must reboot it to use the appropriate version of the CentOS 6.4 kernel.

Procedure 5.3. Configuring L3 routers

Rackspace Private Cloud supports L3 routing. L3 routers can connect multiple OpenStack Networking L2 networks and can provide a gateway to connect one or more private L2 networks to a shared external network. By default, an L3 router uses source NAT (SNAT) to manage all traffic.

1. Create a network to use as the external network for an L3 router:

# neutron net-create --provider:network_type=local \
--router:external=true public

2. Create a network and subnets, then use the neutron router-create command to create L3 routers:

# neutron router-create <routername>

3. For an internal router, use neutron router-interface-add to add subnets to the router:

# neutron router-interface-add <routername> <subnetuuid>

4. To use a router to connect to an external network, which allows the router to act as a NAT gateway for external traffic, use neutron router-gateway-set:

# neutron router-gateway-set <routername> <externalnetworkid>

5. Use the router-list command to view all routers in the environment:
# neutron router-list

For more information about L3 routing, see the OpenStack Networking Administrator Guide.

5.1. Interface Configurations

Rackspace Private Cloud supports a separated plane interface configuration. In this configuration, the control plane includes the management interface, management IP address, service IP bindings, VIPs, and VIP traffic. The control plane communicates through a different logical interface than the data plane, which includes all OpenStack instance traffic.

Separated plane configuration is recommended in situations where high traffic on the data plane will impede service traffic on the control plane. In this configuration, you need separate switches, firewalls, and routers for each plane. Combined plane configurations are not supported.

The out-of-band eth0 management interfaces are where the primary IP address of each node is located, and they are not controlled by OpenStack Networking. The eth1 physical provider interfaces have no IP addresses and must be configured to be up on boot.

Procedure 5.4. Creating a separated plane configuration on Ubuntu

1. Add an eth1 entry in /etc/network/interfaces. This ensures that eth1 will come up on boot:

auto eth1
iface eth1 inet manual
up ip link set $IFACE up
down ip link set $IFACE down

2. Bring up eth1 with the ifup command:

# ifup eth1

3. Add the interface as a port in the bridge:

# ovs-vsctl add-port br-eth1 eth1

Procedure 5.5. Creating a separated plane configuration on CentOS

1. Create the OVS bridge in the /etc/sysconfig/network-scripts/ifcfg-br-eth1 file:
DEVICE=br-eth1
ONBOOT=yes
BOOTPROTO=none
STP=off
NM_CONTROLLED=no
HOTPLUG=no
DEVICETYPE=ovs
TYPE=OVSBridge

2. Modify the bridge interface in the /etc/sysconfig/network-scripts/ifcfg-eth1 file:

DEVICE=eth1
BOOTPROTO=none
HWADDR=<hardwareAddress>
NM_CONTROLLED=no
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE="ovs"
OVS_BRIDGE=br-eth1
UUID="UUID"
IPV6INIT=no
USERCTL=no
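After the interface and bridge configuration is in place, you can confirm the separated plane wiring with a few illustrative commands; eth1 should appear as a port on br-eth1 and should not carry an IP address:

# ifup eth1
# ovs-vsctl list-ports br-eth1
# ip addr show eth1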
6. Creating a network

This chapter discusses how to create a network with OpenStack Networking. For more information about networking with OpenStack Networking, see the Networking chapter of the OpenStack Cloud Administrator Guide.

Important
All the procedures in this chapter require you to have added the single-network-node role to your environment and configured the environment files and interfaces as described in Chapter 5, OpenStack Networking configuration [13].

Procedure 6.1. Creating a network

The neutron net-create command is used to create a network with OpenStack Networking. You will require authentication details to run this command, which can be found in the openrc file created during installation.

On the controller node, create a provider network with the neutron net-create command.

a. For a flat network:

# neutron net-create --provider:physical_network=ph-eth1 \
--provider:network_type=flat NetworkName

b. For a VLAN network:

# neutron net-create --provider:physical_network=ph-eth1 \
--provider:network_type=vlan --provider:segmentation_id=100 NetworkName

c. For a GRE network:

# neutron net-create --provider:network_type=gre \
--provider:segmentation_id=100 NetworkName

d. For an L3 router external network:

# neutron net-create --provider:network_type=local \
--router:external=true public

Procedure 6.2. Creating a subnet

The neutron subnet-create command is used to create subnets with OpenStack Networking.
1. Create a subnet:

# neutron subnet-create --name range-one NetworkName SubnetCIDR

2. If you are using metadata, you will need to configure a default metadata route. This can be done using a configuration with no gateway IP, and a static route from 0.0.0.0/0 to your gateway IP address. You will also need a DHCP allocation pool, with a range that starts beyond the gateway IP, in order to accommodate the gateway IP correctly:

# neutron subnet-create --name SubnetName NetworkName SubnetCIDR \
--no-gateway \
--host-route destination=0.0.0.0/0,nexthop=10.0.0.1 \
--allocation-pool start=10.0.0.2,end=10.0.0.251

With this configuration, dnsmasq will pass both routes to instances. Metadata will be routed correctly without any changes on the external gateway.

If you have a non-routed network and are not using a gateway, you can create the subnet with the --no-gateway option. A metadata route will automatically be created:

# neutron subnet-create --name SubnetName NetworkName SubnetCIDR --no-gateway

3. You can also specify --dns-nameservers when creating the subnet. If a name server is not specified, the instances will try to make DNS queries through the default gateway IP:

# neutron subnet-create --name SubnetName NetworkName SubnetCIDR \
--dns-nameservers list=true DNSNameserverIP
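As a worked illustration, the following sequence creates a VLAN provider network and a subnet that injects the metadata route, then verifies the result. Run the commands after sourcing the openrc file created during installation; the network name, VLAN ID, and addresses are hypothetical examples, so substitute values that match your provider_networks configuration:

# neutron net-create --provider:physical_network=ph-eth1 \
--provider:network_type=vlan --provider:segmentation_id=200 example-net
# neutron subnet-create --name example-subnet example-net 10.0.0.0/24 \
--no-gateway \
--host-route destination=0.0.0.0/0,nexthop=10.0.0.1 \
--allocation-pool start=10.0.0.2,end=10.0.0.251
# neutron net-list
# neutron subnet-show example-subnet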
7. Security Groups

This chapter discusses how to work with security groups within OpenStack Networking. OpenStack Networking security groups are managed from the command line using neutron security-group-* commands. To allow ping and ssh access to instances, you will need to add the ping and ssh rules to the default security group.

Procedure 7.1. Modifying the default security group rules

Use the neutron security-group-rule-create command with the name of the protocol to allow, and other details.

1. To allow ping:

# neutron security-group-rule-create --protocol icmp \
--direction ingress default

2. To allow ssh:

# neutron security-group-rule-create --protocol tcp --port-range-min 22 \
--port-range-max 22 --direction ingress default

3. Check the updated default security group rules with the neutron security-group-rule-list command. You might need to identify the default security group by its UUID, which can be found with the neutron security-group-list command:

# neutron security-group-list -c id -c tenant_id -c name
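Beyond the default group, you can create purpose-specific groups with the same neutron security-group-* commands and attach them to instances at boot. The group name below is a hypothetical example that opens inbound HTTP:

# neutron security-group-create web --description "Allow inbound HTTP"
# neutron security-group-rule-create --protocol tcp --port-range-min 80 \
--port-range-max 80 --direction ingress web
# neutron security-group-rule-list

When booting an instance, the group can be attached with the nova boot --security-groups option.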
8. RPCDaemon

This section describes the Rackspace Private Cloud RPCDaemon, a utility that facilitates high availability (HA) of virtual routers and DHCP services in a Rackspace Private Cloud OpenStack cluster using OpenStack Networking (neutron). Generally, you will not need to interact directly with RPCDaemon. This information is provided to give you an understanding of how Rackspace has implemented HA in OpenStack Networking and to assist in the event of troubleshooting.

8.1. RPCDaemon overview

RPCDaemon is a Python-based daemon that is designed to monitor network topology changes in an OpenStack cluster running OpenStack Networking. It automatically makes changes to the networking configuration to maintain availability of services even in the event of an OpenStack Networking node failure. RPCDaemon supports both the Grizzly and Havana releases of OpenStack and is part of Rackspace Private Cloud v4.1.3 and v4.2.2.

Currently, OpenStack does not include built-in support for highly available virtual routers or DHCP services. These services are scheduled to a single network node and are not rescheduled when that node fails. Because these services are normally scheduled evenly across the available network nodes, the failure of a single network node causes IP addressing and routing failures on the share of tenant networks scheduled to that node. To avert this risk, most production deployments of OpenStack use the nova-network driver in HA mode, or use OpenStack Networking with provider networks to externalize these services for HA. However, these solutions reduce the utility of OpenStack's software-defined networking. Rackspace developed RPCDaemon to address these issues and improve the utility of OpenStack Networking to meet Rackspace Private Cloud production requirements.

8.2. RPCDaemon operation

RPCDaemon monitors the AMQP message bus for actionable events. It is automatically installed on a network node as part of the single-network-node role. Three plugins are currently implemented:

- DHCPAgent: Implements HA in DHCP services.
- L3Agent: Implements HA in virtual routers.
- Dump: Dumps message traffic. This is typically only used for development or troubleshooting purposes and is not discussed here.

8.3. DHCPAgent plugin

The DHCPAgent plugin performs the following tasks:

- Periodically removes DHCP services from an OpenStack Networking DHCP agent that is no longer reporting itself as available.
- Periodically provisions DHCP services on every neutron DHCP agent node that does not already have them provisioned.
- Ensures that DHCP services are deprovisioned on all neutron DHCP agent nodes when a DHCP-enabled network is removed.

The operational effect of these actions is that when you create new DHCP-enabled networks, DHCP servers appear on every neutron network node rather than on a single neutron network node. While this slightly increases DHCP traffic from multiple offers to each DHCP discovery request, it does so safely, because the OpenStack DHCP implementation uses DHCP reservations to ensure virtual machines always boot with predictable IP addresses. Because of this, any available network node can service DHCP requests, even in the event of catastrophic failure of a single network node.

8.4. L3Agent plugin

The L3Agent plugin runs periodically and only monitors virtual routers that are currently assigned to L3 agents. If the L3Agent plugin observes an inactive L3 agent that OpenStack Networking shows as hosting a virtual router, then the L3Agent plugin deprovisions the virtual router from that node, and reprovisions it on another active neutron L3 agent node.

This reprovisioning action does not occur immediately, and there will be some minimal network interruption while the virtual router is migrated. However, the corrective action happens without intervention, and any network outage is transient. This process allows higher availability of virtual routing, and the minimal interruption may be acceptable for some production workloads.

8.5. RPCDaemon configuration

Not all configuration options are currently exposed by the Rackspace Private Cloud cookbooks. This section describes the configuration values in the RPCDaemon configuration file, which is typically located at /etc/rpcdaemon.conf.

8.6. General RPCDaemon options

General daemon options are specified in the Daemon section of the configuration file. The available options include:

- plugins: Space-separated list of plugins to load. Valid options include L3Agent, DHCPAgent, and Dump.
- rpchost: Kombu connection URL for the OpenStack message server. In the case of RabbitMQ, an IP address is sufficient. See the Kombu documentation for more information on Kombu connection URLs.
- pidfile: Location of the daemon pid file.
- logfile: Location of the log file.
- loglevel: Verbosity of logging. Valid options include DEBUG, INFO, WARNING, ERROR, and CRITICAL.
- check_interval: The interval in seconds at which to run plugin checks.

8.7. DHCPAgent and L3Agent options

DHCPAgent plugin options are specified in the DHCPAgent section of the configuration file, and L3Agent plugin options are specified in the L3Agent section of the configuration file. Logs are sent to the logfile specified in the Daemon section, while the log level is independently configurable.

The following configuration options are available for the DHCPAgent and L3Agent plugins:

- conffile: Path to the neutron configuration file.
- loglevel: Verbosity of logging.
- timeout: Maximum time for API calls to complete. This also affects failover speed.
- queue_expire: Auto-terminate RabbitMQ queues if there is no activity in a specified time.

8.8. Dump plugin options

The Dump plugin options are specified in the Dump section of the configuration file. The Dump plugin is most useful when:

- the loglevel option has been set to DEBUG. In this mode, dumped messages will be logged in the logfile specified in the Daemon section.
- running in foreground mode. See Section 8.9, Command line options, for more information.

The following configuration options are available for the Dump plugin:

- loglevel: Verbosity of logging. DEBUG will produce the most useful results.
- queue: Queue to dump. Typically neutron, to view network-related messages.

8.9. Command line options

The rpcdaemon command currently takes two options:

- -d: Run in the foreground and do not detach. When running in the foreground, a pidfile is not dropped, the default log level is set to DEBUG, and the daemon logs to stderr rather than to the specified logfile. This is most useful for running the Dump plugin, but can be helpful in development as well.
- -c: Specify the path to the configuration file. The default configuration file path is /usr/local/etc/rpcdaemon.conf, but init scripts on packaged versions of RPCDaemon pass -c /etc/rpcdaemon.conf.
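As an illustration of how these options fit together, the following is a minimal sketch of an rpcdaemon.conf, assuming the usual INI-style key = value syntax; the RabbitMQ address, file locations, and interval shown are hypothetical examples rather than shipped defaults:

[Daemon]
plugins = DHCPAgent L3Agent
rpchost = 192.0.2.10
pidfile = /var/run/rpcdaemon.pid
logfile = /var/log/rpcdaemon.log
loglevel = INFO
check_interval = 60

[DHCPAgent]
conffile = /etc/neutron/neutron.conf
timeout = 30

[L3Agent]
conffile = /etc/neutron/neutron.conf
timeout = 30

To troubleshoot against this file, you could run the daemon in the foreground:

# rpcdaemon -d -c /etc/rpcdaemon.conf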
9. Troubleshooting

This chapter identifies common problems that you may encounter when deploying an OpenStack network, and suggestions for resolving them.

Q: If I'm on RHEL 6.4 or CentOS 6.4 or older, why won't my namespaces work properly when using OpenStack Networking (neutron)?

A: Controller nodes with networking features and standalone networking nodes require namespace kernel features which aren't available in the default kernel. Rackspace recommends that you use Rackspace Private Cloud v4.2.0 with CentOS 6.4, or Ubuntu 12.04, for the controller and networking nodes.

Note
On CentOS 6.4, after applying the single-network-node role to the device, you must reboot it to use the appropriate version of the CentOS 6.4 kernel.

Q: How do I resolve an issue with adding or deleting subnets?

A: You may be experiencing issues with dnsmasq or neutron-dhcp-agent. The following steps will help you determine whether the issue is with dnsmasq, neutron-server, or neutron-dhcp-agent.

Procedure 9.1. To identify OpenStack Networking issues

1. Ensure that dnsmasq is running with pgrep -fl dnsmasq. If it is not, restart neutron-dhcp-agent.

2. If dnsmasq is running, confirm that the IP address is in the namespace with ip netns list.

3. Identify the qdhcp-<networkuuid> namespace with ip netns exec qdhcp-<networkuuid> ip addr and ensure that the IP address on the interface is present and matches the one present for dnsmasq. To verify what the expected IP address is, use neutron port-list and neutron port-show <portuuid>.

4. Use cat /var/lib/neutron/dhcp/<networkuuid>/host to determine the leases that dnsmasq is configured with by OpenStack Networking.

If the dnsmasq configuration is correct, but dnsmasq is not responding with leases and the bridge/interface is created and running, pkill dnsmasq and restart neutron-dhcp-agent.

If dnsmasq does not include the correct leases, verify that neutron-server is running correctly and that it can communicate with neutron-dhcp-agent. If it is running correctly, and the bridge/interface is created and running, restart neutron-dhcp-agent.
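The checks in Procedure 9.1 can be condensed into a short command sequence on the network node. The network UUID is a placeholder, and the restart command assumes the distribution's standard neutron-dhcp-agent init script:

# pgrep -fl dnsmasq
# ip netns list
# ip netns exec qdhcp-<networkuuid> ip addr
# neutron port-list
# cat /var/lib/neutron/dhcp/<networkuuid>/host
# pkill dnsmasq
# service neutron-dhcp-agent restart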
Q: When using nova-network, how do I resolve starting instances that fail with the following error message?

dnsmasq: failed to create listening socket for dhcp_server_address: Cannot assign requested address

A: It is possible that nova-network is unable to run sysctl. Check that the relevant user's PATH includes /usr/sbin:/sbin and that sysctl -n net.ipv4.ip_forward succeeds without sudo.
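For example, running the following as the same user as the nova-network service confirms both conditions; a result of 1 indicates that IP forwarding is enabled:

# echo $PATH
# sysctl -n net.ipv4.ip_forward
1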
10. Additional resources

These additional resources are designed to help you learn more about the Rackspace Private Cloud Software and OpenStack. If you are an advanced user and are comfortable with APIs, the OpenStack API documentation is available in the OpenStack API Documentation library.

- OpenStack API Quick Start
- Programming OpenStack Compute API
- OpenStack Compute Developer
- Rackspace Private Cloud Knowledge Center
- OpenStack Manuals
- OpenStack API Reference
- OpenStack - Nova Developer Documentation
- OpenStack - Glance Developer Documentation
- OpenStack - Keystone Developer Documentation
- OpenStack - Horizon Developer Documentation
- OpenStack - Cinder Developer Documentation

10.1. Document Change History

This version replaces and obsoletes all previous versions. The most recent set of changes are listed in the following table:

Revision Date: September 25, 2014
Summary of Changes: Rackspace Private Cloud v9 Software General Availability release

Revision Date: August 28, 2014
Summary of Changes: Rackspace Private Cloud v9 Software Limited Availability release