rackspace.com/cloud/private

Rackspace Private Cloud (2014-03-31)

Copyright 2014 Rackspace. All rights reserved.

This guide is intended to assist Rackspace customers in downloading and installing Rackspace Private Cloud, powered by OpenStack. The document is for informational purposes only and is provided AS IS.

RACKSPACE MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, AS TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS DOCUMENT AND RESERVES THE RIGHT TO MAKE CHANGES TO SPECIFICATIONS AND PRODUCT/SERVICES DESCRIPTION AT ANY TIME WITHOUT NOTICE. RACKSPACE SERVICES OFFERINGS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS MUST TAKE FULL RESPONSIBILITY FOR APPLICATION OF ANY SERVICES MENTIONED HEREIN. EXCEPT AS SET FORTH IN RACKSPACE GENERAL TERMS AND CONDITIONS AND/OR CLOUD TERMS OF SERVICE, RACKSPACE ASSUMES NO LIABILITY WHATSOEVER, AND DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO ITS SERVICES INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT.

Except as expressly provided in any written license agreement from Rackspace, the furnishing of this document does not give you any license to patents, trademarks, copyrights, or other intellectual property. Rackspace, the Rackspace logo, Fanatical Support, and OpenCenter are either registered trademarks or trademarks of Rackspace US, Inc. in the United States and/or other countries. OpenStack is either a registered trademark or a trademark of the OpenStack Foundation in the United States and/or other countries. Third-party trademarks and trade names appearing in this document are the property of their respective owners. Such third-party trademarks have been printed in caps or initial caps and are used for referential purposes only. We do not intend our use or display of other companies' trade names, trademarks, or service marks to imply a relationship with, or endorsement or sponsorship of us by, these other companies.

Table of Contents

1. Overview
   1.1. Intended Audience
   1.2. Document Change History
   1.3. Additional Resources
   1.4. Contact Rackspace
2. About Rackspace Private Cloud
   2.1. What is Rackspace Private Cloud?
   2.2. The Rackspace Private Cloud configuration
        2.2.1. Supported OpenStack features
        2.2.2. Unsupported OpenStack features
   2.3. Rackspace Private Cloud support
3. Rackspace Private Cloud installation prerequisites and concepts
   3.1. Hardware requirements
        3.1.1. Chef server requirements
        3.1.2. Cluster node requirements
   3.2. Software requirements
   3.3. Internet requirements
   3.4. Network requirements
        3.4.1. Networking in Rackspace Private Cloud cookbooks
        3.4.2. Preparing for the installation
        3.4.3. Instance access considerations in nova-network
   3.5. Proxy considerations
        3.5.1. Configuring proxy environment settings on the nodes
        3.5.2. Testing proxy settings
   3.6. High availability
   3.7. Availability zones
4. Installing OpenStack with Rackspace Private Cloud tools
   4.1. Prepare the nodes
   4.2. Install Chef server, cookbooks, and chef-client
        4.2.1. Install Chef server
        4.2.2. Install Rackspace Private Cloud Cookbooks
        4.2.3. Install chef-client
   4.3. Installing OpenStack
        4.3.1. Overview of the configuration
        4.3.2. Create an environment
        4.3.3. Define network attributes
        4.3.4. Set the node environments
        4.3.5. Add a controller node
        4.3.6. Controller node high availability
        4.3.7. Add a compute node
        4.3.8. Troubleshooting the installation
5. Configuring OpenStack Networking
   5.1. OpenStack Networking concepts
        5.1.1. Network types
        5.1.2. Namespaces
        5.1.3. Metadata
        5.1.4. OVS bridges
        5.1.5. OpenStack Networking and high availability
   5.2. OpenStack Networking prerequisites
   5.3. Configuring OpenStack Networking
        5.3.1. Networking infrastructure
        5.3.2. Editing the override attributes for Networking
        5.3.3. Apply the network role
        5.3.4. Interface configurations
   5.4. Creating a network
        5.4.1. Creating a subnet
        5.4.2. Configuring L3 routers
        5.4.3. Configuring Load Balancing as a Service (LBaaS)
        5.4.4. Configuring Firewall as a Service (FWaaS)
        5.4.5. Configuring VPN as a Service (VPNaaS)
   5.5. Installing a new cluster with OpenStack Networking
   5.6. Troubleshooting OpenStack Networking
   5.7. RPCDaemon
        5.7.1. RPCDaemon overview
        5.7.2. RPCDaemon operation
        5.7.3. RPCDaemon configuration
        5.7.4. Command line options
6. OpenStack metering
   6.1. OpenStack metering implementation in Rackspace Private Cloud
   6.2. Using OpenStack metering
7. OpenStack Orchestration
   7.1. Installing Orchestration
   7.2. Using Orchestration
8. Accessing the cloud
   8.1. Accessing the controller node
   8.2. Accessing the dashboard
        8.2.1. Using your logo in the OpenStack dashboard
   8.3. OpenStack client utilities
   8.4. Viewing and setting environment variables
9. Creating an instance in the cloud
   9.1. Image management
        9.1.1. Uploading AMI images
        9.1.2. Converting VMDK Linux images
   9.2. Network management
   9.3. Create a project
   9.4. Generate an SSH keypair
   9.5. Update the default security group
   9.6. Create an instance
        9.6.1. File injection best practice
   9.7. Accessing the instance
        9.7.1. Logging in to the instance
        9.7.2. Accessing the instance by SSH
        9.7.3. Managing floating IP addresses
   9.8. What's next?
10. OpenStack Image Storage
    10.1. Local File Storage
    10.2. Rackspace Cloud Files
    10.3. Swift storage
11. Glossary of terms

1. Overview

Rackspace has developed a fast, no-charge, and easy way to deploy a Rackspace Private Cloud powered by OpenStack in any data center. This method is suitable for anyone who wants to install a stable, tested, and supportable OpenStack private cloud, and can be used for all scenarios from initial evaluations to production deployments. Two versions are available:

- Rackspace Private Cloud, based on the OpenStack Havana code base
- Rackspace Private Cloud v4.1.3, based on the OpenStack Grizzly code base

1.1. Intended Audience

This guide is intended for anyone who wants to deploy an OpenStack-powered cloud that has been tested and optimized by the OpenStack experts at Rackspace. This document includes an overview of Rackspace Private Cloud and instructions for downloading and deploying Rackspace Private Cloud in the data center of your choice. To use the product and this document, you should have prior knowledge of OpenStack and cloud computing, and basic Linux administration skills.

1.2. Document Change History

This version of the guide replaces and obsoletes all previous versions. The most recent changes are described in the following table:

Revision Date       Summary of Changes
March 17, 2014      Rackspace Private Cloud Limited Availability release.
December 18, 2013   Rackspace Private Cloud v4.2.1 release.
November 13, 2013   Rackspace Private Cloud v4.2.0 Early Access release.
September 12, 2013  Rackspace Private Cloud v4.1.2 updates. Updates to network override attributes. Updates to HA VIPs. RabbitMQ v4.1.0 to v4.1.2 upgrade instructions.
July 31, 2013       OpenStack Networking on CentOS 6.4.
July 24, 2013       Added information about a known issue with Neutron and Red Hat Enterprise Linux/CentOS.
July 16, 2013       HA Controller instructions updated.
June 25, 2013       OpenStack Networking concepts and structures updated and placed in a separate chapter.
April 8, 2013       Release of Rackspace Private Cloud v4.0.0.
March 20, 2013      Minor edits and corrections. Package update: change to the install command; updated information about HA.
March 6, 2013       Release of Rackspace Private Cloud v3.0.
November 15, 2012   Release of Rackspace Private Cloud v2.0. Added information about the Folsom implementation, OpenStack Block Storage, changing the Horizon dashboard, proxy settings, changing rate limits, updating the cookbooks, and configuring OpenStack Image Storage to use Rackspace Cloud Files.
August 15, 2012     Release of Rackspace Private Cloud v1.0.

1.3. Additional Resources

- Rackspace Private Cloud Knowledge Center
- OpenStack Manuals
- OpenStack API Reference
- OpenStack Nova Developer Documentation
- OpenStack Glance Developer Documentation
- OpenStack Keystone Developer Documentation
- OpenStack Horizon Developer Documentation
- OpenStack Cinder Developer Documentation

1.4. Contact Rackspace

For more information about sales and support, send an email to <opencloudinfo@rackspace.com>. If you have feedback about the product and the documentation, send an email to <RPCFeedback@rackspace.com>. For the documentation, you can also leave a comment at the Knowledge Center. For more troubleshooting information and user discussion, you can also inquire at the Rackspace Private Cloud Support Forum at https://community.rackspace.com/products/f/45.

2. About Rackspace Private Cloud

This chapter describes the Rackspace Private Cloud configuration and support offerings.

2.1. What is Rackspace Private Cloud?

Rackspace offers a tool set that enables users to quickly deploy a private cloud OpenStack cluster configured according to the recommendations of Rackspace OpenStack specialists.

Previous versions of Rackspace Private Cloud were packaged in an ISO that contained a full Ubuntu OS and a Chef server running on a virtual machine. Although the ISO was a convenient and simple package, it did not allow large deployments. The user also had no choice of host operating system, and a Chef server running on a virtual machine was resource-intensive.

Rackspace Private Cloud v4.0.0 and later can be deployed with a Chef-based approach that enables users to create an OpenStack cluster on Ubuntu, CentOS, or Red Hat Enterprise Linux. This version uses installation scripts, which create a more traditional application experience for the Linux system administrator. It also offers a framework that can be updated without downloading and deploying a whole new ISO.

2.2. The Rackspace Private Cloud configuration

The following table lists the OpenStack versions and components supported by the current releases of Rackspace Private Cloud.

                              v4.1.3    Current release
OpenStack Grizzly             X
OpenStack Havana                        X
Compute (Nova)                X         X
Image Service (Glance)        X         X
Dashboard (Horizon)           X         X
Identity (Keystone)           X         X
Virtual Network (Neutron)     X         X
Metering (Ceilometer)                   X

Object Storage (Swift) is available in the Rackspace Private Cloud Object Storage offering.

The following diagram shows a typical Rackspace Private Cloud Mass Compute reference architecture in which instances reside directly on the Compute nodes. More information about Rackspace Private Cloud reference architectures can be found on the Common Rackspace Private Cloud reference architectures page.

2.2.1. Supported OpenStack features

Rackspace supports features such as floating IP address management, security groups, availability zones, and the Python command-line clients. The following OpenStack features and configurations are supported:

- Separated plane configurations
- NFS and iSCSI file storage as backing stores for VM storage
- VNC Proxy
- KVM hypervisor
- Nova Multi Scheduler instead of Filter Scheduler
- Keystone integrated authentication
- Glance integrated image service
- Horizon dashboard
- Cirros, Ubuntu 12.04, and CentOS/RHEL guest instances, to the extent that they can be booted and pinged
- Single metadata server running on each device
- Cloud management through OpenStack APIs
- High availability for all Nova service components and APIs, Cinder, and Keystone, as well as the scheduler, RabbitMQ, and MySQL
- Cinder block storage service, documented in Rackspace Private Cloud: OpenStack Block Storage
- Swift object storage service, available as the Rackspace Private Cloud Object Storage offering

Rackspace Private Cloud also supports the use of Rackspace Cloud Files as a back end for OpenStack Image Storage.

2.2.2. Unsupported OpenStack features

The following OpenStack features are not supported:

- Nova object store
- Nova volumes
- Clustered file system solutions
- Xen and other hypervisors
- Centralized metadata servers
- Contents of guest instances after a successful boot
- Any other OpenStack project, extension, or configuration not explicitly listed as a supported feature or installed component

Rackspace Private Cloud is an evolving product and will continue to be developed and enhanced.

2.3. Rackspace Private Cloud support

Rackspace Private Cloud is offered primarily as a "do it yourself" package, at no charge. You can also access the Rackspace Private Cloud Support Forum at the following URL: https://community.rackspace.com/products/f/45

The forum is open to all Rackspace Private Cloud users and is moderated and maintained by Rackspace personnel and OpenStack specialists.

Rackspace offers 365x24x7 support for Rackspace Private Cloud. If you are interested in purchasing Rackspace Private Cloud Escalation Support or Core Support, or you plan to install on more than 20 nodes, send an email to <opencloudinfo@rackspace.com>.

3. Rackspace Private Cloud installation prerequisites and concepts

This chapter lists the prerequisites for installing a cluster with the Rackspace Private Cloud tools. Rackspace recommends that you review this chapter in detail before attempting the installation.

3.1. Hardware requirements

Rackspace has tested Rackspace Private Cloud deployment with a physical device for each of the following nodes:

- A Chef server
- An OpenStack Nova Controller node
- Additional physical machines with OpenStack Nova Compute nodes as required

If you have different requirements for your environment, contact Rackspace.

3.1.1. Chef server requirements

Rackspace recommends that the Chef server hardware meet the following requirements:

- 4 GB RAM
- 50 GB disk space
- Dual socket CPU with dual core

3.1.2. Cluster node requirements

Each node in the cluster will have chef-client installed on it. The hardware requirements vary depending on the purpose of the node, and each device should support VT-x. Refer to the following table for detailed requirements.

Node Type        Requirements
Nova Controller  16 GB RAM; 144 GB disk space; dual socket CPU with dual core, or single socket quad core
Nova Compute     32 GB RAM; 144 GB disk space; dual socket CPU with dual core, or single socket quad core

CPU overcommit is set at 16:1 vCPUs to cores, and memory overcommit is set at 1.5:1. Each physical core can support up to 16 virtual cores; for example, one dual-core processor can support up to 32 virtual cores. If you require more virtual cores, add additional physical nodes to your configuration.
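As a quick capacity sketch (the node size below is an example value, not a requirement from this guide), the stated overcommit ratios translate into schedulable capacity like this:

```shell
# Hypothetical sizing helper using the overcommit ratios above:
# CPU overcommit 16:1 (vCPUs to cores), memory overcommit 1.5:1.
cores=4          # example: dual socket CPU with dual core
ram_gb=32        # example: Nova Compute node RAM

vcpu_ratio=16    # 16 virtual cores per physical core
max_vcpus=$((cores * vcpu_ratio))

# 1.5:1 memory overcommit, computed as x15/10 to stay in integer arithmetic
max_vram_gb=$((ram_gb * 15 / 10))

echo "schedulable vCPUs: ${max_vcpus}"     # -> schedulable vCPUs: 64
echo "schedulable RAM: ${max_vram_gb} GB"  # -> schedulable RAM: 48 GB
```

With these example values the result (64 vCPUs) matches the guide's statement that a dual-core processor supports up to 32 virtual cores: 2 cores x 16 = 32, so 4 cores yield 64.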

For a list of Private Cloud certified devices, refer to Private Cloud Certified Devices - Compute.

3.2. Software requirements

The following table lists the operating systems on which Rackspace Private Cloud has been tested.

Operating System               Tested Rackspace Private Cloud Version
Ubuntu 12.04                   All versions
Ubuntu 12.10 and later         None
CentOS 6.3                     v4.0.0 and earlier
CentOS 6.4                     v4.1.2 and later
CentOS 6.5                     and later
Red Hat Enterprise Linux 6.0   None

It is possible to install Rackspace Private Cloud on untested OSes, but this may cause unexpected issues. If you require OpenStack Networking, Rackspace recommends that you use Rackspace Private Cloud or later with CentOS 6.4 or Ubuntu 12.04 for the Controller and Networking nodes.

The following kernels have been tested:

- linux-image-3.8.0-38-generic
- linux-image-3.2.0-58-generic
- kernel-2.6.32-358.123.2.openstack.el6.x86_64

3.3. Internet requirements

Internet access is required to complete the installation. Ensure that the devices you use can reach the internet to download the installation files.

3.4. Network requirements

A network deployed with the Rackspace Private Cloud cookbooks uses nova-network by default, but OpenStack Networking (Neutron, formerly Quantum) can be enabled manually. For proof-of-concept and demonstration purposes, nova-network is adequate, but if you intend to build a production cluster and require software-defined networking for any reason, you should use OpenStack Networking, which is designed as a replacement for nova-network. If you want to use OpenStack Networking for your private cloud, you must specify it in your Chef environment and configure the nodes appropriately. See Configuring OpenStack Networking for detailed information about OpenStack Networking concepts and instructions for configuring OpenStack Networking in your environment.
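One way to check a node against the tested-kernel list is a small shell helper. This is a sketch, not part of the Rackspace tooling; the kernel versions are copied from the list above with the linux-image-/kernel- package-name prefixes stripped so that they match uname -r output:

```shell
# Report whether a kernel release string matches one of the kernels
# that this guide lists as tested.
kernel_status() {
  tested="3.8.0-38-generic 3.2.0-58-generic 2.6.32-358.123.2.openstack.el6.x86_64"
  case " $tested " in
    *" $1 "*) echo "kernel $1: tested" ;;
    *)        echo "kernel $1: untested" ;;
  esac
}

# Check the kernel the node is actually running.
kernel_status "$(uname -r)"
```

Running this on a node with the 3.2.0-58-generic kernel prints "kernel 3.2.0-58-generic: tested"; anything outside the list is reported as untested.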

3.4.1. Networking in Rackspace Private Cloud cookbooks

The cookbooks are built on the following assumptions:

- The IP addresses of the infrastructure are fixed.
- Binding endpoints for the services are best described by the networks to which they are connected.

The cookbooks contain definitions for three general networks and the services that bind to them. When you install a private cloud and configure the networking, you must specify pre-existing, working networks with addresses already configured on the hosts. The networks are defined by CIDR range, and any network interface with an address within the named CIDR range is assumed to be included in that network. The CIDRs must be provisioned by your hosting provider or by you. You can specify the same CIDR for multiple networks; all three networks can use the same CIDR, but this is not recommended in production environments.

The following table lists the networks and the services that bind to an IP address within each of these general networks.

nova network:
- keystone-admin-api
- nova-xvpvnc-proxy
- nova-novnc-proxy
- nova-novnc-server

public network:
- graphite-api
- keystone-service-api
- glance-api
- glance-registry
- nova-api
- nova-ec2-admin
- nova-ec2-public
- nova-volume
- neutron-api
- cinder-api
- ceilometer-api
- horizon-dash
- horizon-dash_ssl

management network:
- graphite-statsd
- graphite-carbon-line-receiver
- graphite-carbon-pickle-receiver
- graphite-carbon-cache-query
- memcached
- collectd
- mysql
- keystone-internal-api
- glance-admin-api
- glance-internal-api
- nova-internal-api
- nova-admin-api
- cinder-internal-api
- cinder-admin-api
- cinder-volume
- ceilometer-internal-api
- ceilometer-admin-api
- ceilometer-central

The configuration allows you to have either multiple interfaces or VLAN-separated subinterfaces. You can create VLAN-tagged interfaces either in the nova-network configuration (with Linux bridges) or in Neutron (with OVS). Single-NIC deployment is also possible. Both multi-NIC and single-NIC configurations are described in Configuring OpenStack Networking.

3.4.2. Preparing for the installation

You need the following information for the installation:

- The nova network address in CIDR format.
- The nova network bridge, such as br100 or eth0. This will be used as the VM bridge on Compute nodes. The default value is usually acceptable. This bridge will be created by Nova as necessary and does not need to be manually configured.
- The public network address in CIDR format. This is where public API services, such as the public Keystone endpoint and the dashboard, will run. An IP address from this network should be configured on all hosts in the Nova cluster.
- The public network interface, such as eth0 or eth1. This is the network interface that is connected to the public (Internet/WAN) network on Compute nodes. In a nova-network configuration, instance traffic on Compute nodes is NATed out from this interface unless a specific floating IP address has been assigned to that instance.
- The management network address in CIDR format. This network is used for communication among services such as monitoring and syslog. This may be the same as your Nova public network if you do not want to separate service traffic.
- The VM network CIDR range. This is the range from which IP addresses will be automatically assigned to VMs. An address from this network will be visible from within your instance on eth0.
This network should be dedicated to OpenStack and not shared with other services.

- The network interface of the VM network for the Compute nodes (such as eth1).
- The name of the Nova cluster. This should be unique and composed of alphanumeric characters, hyphens (-), or underscores (_).
- The name of the default availability zone. Rackspace recommends using nova as the default.
- For nova-network configurations, an optional NAT exclusion CIDR range or ranges for networks configured with a DMZ: a comma-separated list of CIDR network ranges that will be excluded from NAT rules. This enables direct communication to and from instances from other network ranges without the use of floating IPs.

3.4.3. Instance access considerations in nova-network

In a nova-network configuration, by default, the instances that you create in the OpenStack cluster can be publicly accessed via NAT only by assigning floating IP addresses to them. Before you assign a floating IP address to an instance, you must have a pool of addresses to choose from. Your network security team must provision an address range and assign it to your environment, and these addresses must be publicly accessible. Floating IP addresses are not specified during the installation process; once the Controller node is operational, you can add them with the nova-manage floating create --ip_range command. Refer to "Managing floating IP addresses".

You can also make the instances accessible to other hosts in the network by default by configuring the cloud with a network DMZ. The network DMZ range cannot be the same as the nova network range. Specifying a DMZ enables NAT-free network traffic between the virtual machine instances and resources outside of the nova fixed network. For example, if the nova fixed network is 10.1.0.0/16 and you specify a DMZ of 172.16.0.1/12, any devices or hosts in that range will be able to communicate with the instances on the nova fixed network.
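Because the installation worksheet above asks for several addresses in CIDR format, a quick format check before editing the Chef environment can catch typos early. This is a hypothetical helper, not part of the Rackspace tools, and it validates only the notation, not whether the range is actually provisioned or routable:

```shell
# Return success if the argument looks like IPv4 CIDR notation
# (a.b.c.d/len). Format check only; octet and prefix-length values
# are not bounds-checked.
is_cidr() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'
}

is_cidr "10.1.0.0/16" && echo "10.1.0.0/16: ok"
is_cidr "10.1.0.0"    || echo "10.1.0.0: missing prefix length"
```

Run against the example values in this section, the first call succeeds and the second flags the missing /16-style prefix.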

To use the DMZ, you must have at least two NICs on the deployment servers. One NIC must be dedicated to the VM instances.

3.5. Proxy considerations

In general, the Rackspace Private Cloud installation instructions assume that none of your nodes are behind a proxy. If they are behind a proxy, review this section before proceeding with the installation. Rackspace has not yet tested a hybrid environment where some nodes are behind the proxy and others are not.

3.5.1. Configuring proxy environment settings on the nodes

You must make your proxy settings available to the entire OS on each node by configuring /etc/environment as follows:

    # /etc/environment
    http_proxy=http://<yourproxyurl>:<port>
    https_proxy=http://<yourproxyurl>:<port>
    ftp_proxy=http://<yourproxyurl>:<port>
    no_proxy=<localhost>,<node1>,<node2>

Replace node1 and node2 with the hostnames of your nodes. In all cases, no_proxy is required and must contain a localhost entry. If localhost is missing, the Omnibus Chef Server installation will fail.

Ubuntu requires http_proxy and no_proxy at a minimum. CentOS requires http_proxy, https_proxy, and no_proxy for yum packages and key updates. However, because installation methods might change over time, Rackspace recommends that you set as many variables as you can.
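Before logging out and back in, it can be worth linting the file you just wrote, since a missing localhost entry in no_proxy breaks the Omnibus Chef Server installation. A small sketch (the proxy URL and hostnames are placeholders; point FILE at /etc/environment on a real node instead of the temp file used here):

```shell
# Write an example proxy environment file and check the one rule the
# guide calls out: no_proxy must contain a localhost entry.
FILE="${FILE:-$(mktemp)}"
cat > "$FILE" <<'EOF'
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
ftp_proxy=http://proxy.example.com:3128
no_proxy=localhost,chef-server,node1,node2
EOF

if grep -q '^no_proxy=.*localhost' "$FILE"; then
  echo "no_proxy includes localhost"
else
  echo "WARNING: no_proxy is missing localhost"
fi
```

With the example contents this prints "no_proxy includes localhost"; on a real node the warning branch tells you to fix /etc/environment before installing Chef.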

The nodes must also have sudo configured to retain the following environment variables:

    Defaults env_keep += "http_proxy https_proxy ftp_proxy no_proxy"

In Ubuntu, this can be put in a file in /etc/sudoers.d. In CentOS, ensure that your version loads files in /etc/sudoers.d before adding this variable.

3.5.2. Testing proxy settings

You can verify that the proxy settings are correct by logging in and running env and sudo env. You should see the configured http_proxy, https_proxy, ftp_proxy, and no_proxy settings in the output, as in the following example.

    $ env
    TERM=screen-256color
    SHELL=/bin/bash
    SSH_CLIENT=<ssh-url>
    SSH_TTY=/dev/pts/0
    LC_ALL=en_US
    http_proxy=<yourproxyurl>:<port>
    ftp_proxy=<yourproxyurl>:<port>
    USER=admin
    PATH=/usr/local/sbin
    PWD=/home/admin
    LANG=en_US.UTF-8
    https_proxy=<yourproxyurl>:<port>
    SHLVL=1
    HOME=/home/admin
    no_proxy=localhost,chef-server,client1
    LOGNAME=admin
    SSH_CONNECTION=<sshConnectionInformation>
    LC_CTYPE=en_US.UTF-8
    LESSOPEN=| /usr/bin/lesspipe %s
    LESSCLOSE=/usr/bin/lesspipe %s %s
    _=/usr/bin/env

    $ sudo env
    TERM=screen-256color
    LC_ALL=en_US
    http_proxy=<yourproxyurl>:<port>
    ftp_proxy=<yourproxyurl>:<port>
    PATH=/usr/local/sbin
    LANG=en_US.UTF-8
    https_proxy=<yourproxyurl>:<port>
    HOME=/home/admin
    no_proxy=localhost,chef-server,client1
    LC_CTYPE=en_US.UTF-8
    SHELL=/bin/bash
    LOGNAME=root
    USER=root
    USERNAME=root
    MAIL=/var/mail/root
    SUDO_COMMAND=/usr/bin/env
    SUDO_USER=admin
    SUDO_UID=1000
    SUDO_GID=1000

3.6. High availability

Rackspace Private Cloud can implement support for high availability (HA) for all Nova service components and APIs, Cinder, Keystone, and Glance, as well as the scheduler, RabbitMQ, and MySQL. HA functionality is powered by Keepalived and HAProxy.
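The manual env and sudo env inspection above can also be scripted. This sketch (not from the guide) reports any of the four variables that is unset in the current shell:

```shell
# Print the status of each proxy variable the guide requires.
check_proxy_vars() {
  for var in http_proxy https_proxy ftp_proxy no_proxy; do
    eval "val=\${$var:-}"
    if [ -n "$val" ]; then
      echo "$var is set"
    else
      echo "$var is MISSING"
    fi
  done
}

check_proxy_vars
```

Running it once as the admin user and once under sudo (sudo sh -c '... check_proxy_vars') mirrors the env/sudo env check and makes a missing env_keep entry obvious.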

Rackspace Private Cloud uses the following methods to implement HA in your cluster:

- MySQL master-master replication and active-passive failover: MySQL is installed on both Controller nodes, and master-master replication is configured between the nodes. Keepalived manages connections to the two nodes, so that only one node receives read/write requests at any one time.
- RabbitMQ active-passive failover: RabbitMQ is installed on both Controller nodes. Keepalived manages connections to the two nodes, so that only one node is active at any one time.
- API load balancing: All services that are stateless and can be load balanced (essentially all the APIs and a few others) are installed on both Controller nodes. HAProxy is then installed on both nodes, and Keepalived manages connections to HAProxy, which makes HAProxy itself HA. Keystone endpoints and all API access go through Keepalived.

3.7. Availability zones

Availability zones enable you to manage and isolate different nodes within the environment. For example, you might want to isolate different sets of Compute nodes to provide different resources to customers. If one availability zone experiences downtime, other zones in the cluster are not affected.

When you create a Nova cluster, it is created with a default availability zone, and all Compute nodes are assigned to that zone. You can create additional availability zones within the cluster as needed.

4. Installing OpenStack with Rackspace Private Cloud tools

This chapter discusses the process for installing an OpenStack environment with the Rackspace Private Cloud cookbooks. For networking, these instructions apply only to the default nova-network configuration. For information about OpenStack Networking (Neutron, formerly Quantum), see Configuring OpenStack Networking. For information about upgrading between Rackspace Private Cloud versions, refer to the Rackspace Private Cloud Upgrade.

The installation process involves the following stages:

- Preparing the nodes
- Installing Chef server, the Rackspace Private Cloud cookbooks, and chef-client
- Creating a Chef environment and defining attributes
- Setting the node environments
- Applying the Controller and Compute roles to the nodes

Note

Before you begin, Rackspace recommends that you review Installation Prerequisites and Concepts to ensure that you have completed all necessary preparations for the installation process.

4.1. Prepare the nodes

Before you begin, ensure that the OS is up to date on the nodes. Log in to each node and run the appropriate update for the OS and the package manager. You should also have an administrative user (such as admin) with the same user name configured across all nodes that will be part of your environment.

4.2. Install Chef server, cookbooks, and chef-client

Your environment must have a Chef server, the latest versions of the Rackspace Private Cloud cookbooks, and chef-client on each node within the environment. You must install the Chef server node first. Installation is performed via a curl command that launches an installation script. The script downloads the packages from GitHub and uses them to install the components. You can review the scripts in the GitHub repository at https://github.com/rcbops/support-tools/tree/master/chef-install.
Before you begin, ensure that curl is available, or install it with apt-get install -y curl on Ubuntu or yum install curl on CentOS.
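As a small preflight sketch (the helper name is invented for illustration, and the package commands are the ones given above), you can confirm that curl is on the PATH before attempting the installer download:

```python
# Sketch: verify that curl is available before running the Chef install script.
import shutil

def curl_available():
    # shutil.which returns the full path of the executable, or None if absent.
    return shutil.which("curl") is not None

if __name__ == "__main__":
    if curl_available():
        print("curl found")
    else:
        print("install curl first (apt-get install -y curl / yum install curl)")
```
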

4.2.1. Install Chef server

The Chef server should be a device that is accessible on ports 443 and 80 by the devices that will be configured as OpenStack cluster nodes. On distributions running iptables, you may need to enable access on these ports.

By default, the script installs Chef 11.0.8 with a set of randomly generated passwords, and also installs a Knife configuration that is set up for the root user. The following variables are added to your environment:

- CHEF_SERVER_VERSION: defaults to 11.0.8
- CHEF_URL: defaults to https://<hosturl>:443
- CHEF_UNIX_USER: the user for which the Knife configuration is set; defaults to root
- A set of randomly generated passwords: CHEF_WEBUI_PASSWORD, CHEF_AMQP_PASSWORD, CHEF_POSTGRESQL_PASSWORD, CHEF_POSTGRESQL_RO_PASSWORD

Procedure 4.1. To install the Chef server

1. Log in to the device that will be the Chef server, then download and run the install-chef-server.sh script.
   # curl -s -O https://raw.github.com/rcbops/support-tools/master/chef-install/install-chef-server.sh
   # bash install-chef-server.sh
2. Source the environment file to enable the knife command.
   # source /root/.bash_profile
3. Run the following command to ensure that knife is working correctly.
   # knife client list

If the command runs successfully, the installation is complete. If it does not run successfully, you may need to log out of the server and log in again to re-source the environment.
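The install script generates the four service passwords listed above at random. The following sketch shows the idea in Python; this is an illustrative generation scheme, not the script's actual method:

```python
# Sketch: generate random service passwords of the kind the install script creates.
import secrets

def random_password(length=16):
    # token_urlsafe produces a URL-safe random string; truncate to the desired length.
    return secrets.token_urlsafe(length)[:length]

if __name__ == "__main__":
    names = ["CHEF_WEBUI_PASSWORD", "CHEF_AMQP_PASSWORD",
             "CHEF_POSTGRESQL_PASSWORD", "CHEF_POSTGRESQL_RO_PASSWORD"]
    for name in names:
        print(f"{name}={random_password()}")
```
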

4.2.2. Install Rackspace Private Cloud Cookbooks

The Rackspace Private Cloud cookbooks are set up as Git submodules and are hosted at http://github.com/rcbops/chef-cookbooks, with individual cookbook repositories at http://github.com/rcbops-cookbooks. The following procedure describes the download process for the full suite, but you can also download individual cookbook repositories, such as the Nova repository at https://github.com/rcbops-cookbooks/nova.

Procedure 4.2. To download and install cookbooks from GitHub

1. Log in to your Chef server or to a workstation that has knife access to the Chef server.
2. Verify that the knife.rb configuration file contains the correct cookbook_path setting.
3. Use git clone to download the cookbooks.
   # git clone https://github.com/rcbops/chef-cookbooks.git
4. Navigate to the chef-cookbooks directory.
   # cd chef-cookbooks
5. Check out the desired version of the cookbooks. The current versions are and v4.1.3.
   # git checkout <version>
   # git submodule init
   # git submodule sync
   # git submodule update
6. Upload the cookbooks to the Chef server.
   # knife cookbook upload -a -o cookbooks
7. Apply the updated roles.
   # knife role from file roles/*rb

Your Chef cookbooks are now up to date.

4.2.3. Install chef-client

All of the nodes in your OpenStack cluster need to have chef-client installed and configured to communicate with the Chef server. This can be most easily accomplished with the knife

bootstrap command. The nodes on which the OpenStack Object Storage cluster will be configured should be able to access the Chef server on ports 443 and 80. Note that this will not work if you are behind an HTTP proxy.

Each client node must have a resolvable hostname. If the hostname cannot resolve, the nodes will not be able to check in properly.

Procedure 4.3. To bootstrap nodes to the Chef server

1. Log in to the Chef server as root.
2. Generate an ssh key with the ssh-keygen command. Accept the defaults when prompted.
3. Use the knife bootstrap command to bootstrap the nodes to the Chef server. This command installs chef-client on the target node and allows it to communicate with the server. You will specify the name of the environment, the user name that will be associated with the ssh key, and the IP address of the node. For a single controller node:
   # knife bootstrap -E <environmentname> -i .ssh/id_rsa_private \
     --sudo -x <sshusername> <nodeipaddress>
4. After you have completed the bootstrap process on each node, you must add the IP address and host name of the Chef server to the /etc/hosts file on each node. Log in to the first client node and open /etc/hosts with your preferred text editor.
5. Add a line with the Chef server's IP address and host name in the following format:
   <chefserveripaddress> <chefserverhostname>
6. Save the file.

Repeat steps 3-6 for each node in the environment.

4.3. Installing OpenStack

At this point, you have created a configuration management system for your OpenStack cluster, based on Chef, and given Chef the ability to manage the nodes in the environment. You are now ready to use the Rackspace Private Cloud cookbooks to deploy OpenStack. This section demonstrates a typical OpenStack installation and includes additional information about customizing or modifying your installation.

4.3.1.
Overview of the configuration

A typical OpenStack installation configured with Rackspace Private Cloud cookbooks consists of the following components:

- One or two infrastructure controller nodes that host central services, such as RabbitMQ, MySQL, and the Horizon dashboard. These nodes are referred to as Controller nodes in this document.
- One or more servers that host virtual machines. These nodes are referred to as Compute nodes.
- If you are using OpenStack Networking, you may have a standalone network node. Networking roles can also be applied to the Controller node. This is explained in detail in Configuring OpenStack Networking.

The cookbooks are based on the following assumptions:

- All OpenStack services, such as Nova and Glance, use MySQL as their database.
- High availability is provided by VRRP.
- Load balancing is provided by HAProxy.
- KVM is the hypervisor.
- The network will be flat HA nova-network, or will be Neutron-controlled.

More information is available at the Rackspace Private Cloud Reference Architectures page.

4.3.2. Create an environment

The first step is to create an environment on the Chef server. In this example, the knife environment create command is used to create an environment called private-cloud. The -d flag adds a description of the environment.

   # knife environment create private-cloud -d "Rackspace Private Cloud OpenStack Environment"

This creates a JSON environment file that can be edited directly to add attributes specific to your configuration. To edit the environment, run the knife environment edit command:

   # knife environment edit private-cloud

This opens a text editor where the environment settings can be modified and override attributes added.

4.3.3. Define network attributes

You must now add a set of override attributes to define the nova, public, and management networks in your environment. For more information about what you need to configure networking, refer to Network Requirements.

Note

This information is for configuring nova-network, which a Rackspace Private Cloud environment uses by default.
If you want to use OpenStack Networking, see Configuring OpenStack Networking.

To define override attributes, you will need to run the knife environment edit command and add a networking section, substituting your network information. The and v4.1.3 cookbooks use hash syntax to define network attributes. The syntax is as follows:

   "override_attributes": {
     "nova": {
       "network": {
         "public_interface": "<publicinterface>"
       },
       "networks": {
         "public": {
           "label": "public",
           "bridge_dev": "<VMNetworkInterface>",
           "dns2": "8.8.4.4",
           "ipv4_cidr": "<VMNetworkCIDR>",
           "bridge": "<networkbridge>",
           "dns1": "8.8.8.8"
         }
       }
     },
     "mysql": {
       "allow_remote_root": true,
       "root_network_acl": "%"
     },
     "osops_networks": {
       "nova": "<novanetworkcidr>",
       "public": "<publicnetworkcidr>",
       "management": "<managementnetworkcidr>"
     }
   }
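The hash above can also be generated programmatically and pasted into knife environment edit. The following sketch builds the structure and emits JSON; the helper name build_network_overrides is invented for illustration, and the values shown are the documentation-range placeholders:

```python
# Sketch: build the override_attributes networking hash and emit it as JSON.
import json

def build_network_overrides(public_interface, bridge_dev, ipv4_cidr, bridge,
                            nova_cidr, public_cidr, mgmt_cidr):
    return {
        "nova": {
            "network": {"public_interface": public_interface},
            "networks": {
                "public": {
                    "label": "public",
                    "bridge_dev": bridge_dev,
                    "dns1": "8.8.8.8",
                    "dns2": "8.8.4.4",
                    "ipv4_cidr": ipv4_cidr,
                    "bridge": bridge,
                }
            },
        },
        "mysql": {"allow_remote_root": True, "root_network_acl": "%"},
        "osops_networks": {
            "nova": nova_cidr,
            "public": public_cidr,
            "management": mgmt_cidr,
        },
    }

if __name__ == "__main__":
    overrides = build_network_overrides(
        "br100", "eth1", "198.51.100.0/24", "br100",
        "192.0.2.0/24", "192.0.2.0/24", "192.0.2.0/24")
    print(json.dumps({"override_attributes": overrides}, indent=2))
```
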

The following example shows an environment configuration in which all three networks are folded onto a single physical network. This network has an IP address in the 192.0.2.0/24 range. All internal services, API endpoints, and monitoring and management functions run over this network. VMs are brought up on a 198.51.100.0/24 network on eth1, connected to a bridge called br100.

   "override_attributes": {
     "nova": {
       "network": {
         "public_interface": "br100"
       },
       "networks": {
         "public": {
           "label": "public",
           "bridge_dev": "eth1",
           "dns2": "8.8.4.4",
           "ipv4_cidr": "198.51.100.0/24",
           "bridge": "br100",
           "dns1": "8.8.8.8"
         }
       }
     },
     "mysql": {
       "allow_remote_root": true,
       "root_network_acl": "%"
     },
     "osops_networks": {
       "nova": "192.0.2.0/24",
       "public": "192.0.2.0/24",
       "management": "192.0.2.0/24"
     }
   }

4.3.4. Set the node environments

To ensure that all changes are made correctly, you must now set the environments of the client nodes to match the environment created on the Chef server. While logged in to the Chef server, run the following command:

   # knife exec -E 'nodes.transform("chef_environment:_default") \
     { |n| n.chef_environment("<environmentname>") }'

This command updates the environment on all nodes in the cluster. Be aware that if you have any non-OpenStack nodes in your cluster, their environments will be altered as well.

4.3.5. Add a controller node

The Controller node (also known as an infrastructure node) must be installed before any Compute nodes are added. Until the Controller node chef-client run is complete, the endpoint information will not be pushed back to the Chef server, and the Compute nodes will be unable to locate or connect to infrastructure services.

A device with the ha-controller1 role assigned will include all core OpenStack services; this role should be used even in non-HA environments. For more information about HA, see Controller Node High Availability.

Note

Rackspace ONLY sells and supports a dual-controller architecture. Escalation and Core Support customers should always have dual-controller HA configurations. Users who install a single-controller cloud will not be supported by Rackspace Support. Contact your Rackspace Support representative for more information.

This procedure assumes that you have already installed chef-client on the device, as described in Install Chef Client, and that you are logged in to the Chef server.

Procedure 4.4. To install a single Controller node

1. Add the ha-controller1 role to the target node's run list.
   # knife node run_list add <devicehostname> 'role[ha-controller1]'
2. Log in to the target node via ssh.
3. Run chef-client on the node.

It will take chef-client several minutes to complete the installation tasks. chef-client provides output to help you monitor the progress of the installation.

4.3.6. Controller node high availability

By creating two Controller nodes in the environment and applying the ha-controller* roles to them, you can create a pair of Controller nodes that provide HA through VRRP, monitored by Keepalived. Each service has a VIP of its own, and failover occurs on a service-by-service basis. Refer to High Availability Concepts for more information about HA configuration.

Before you configure HA in your environment, you must allocate IP addresses for the MySQL, RabbitMQ, and HAProxy VIPs on an interface available to both Controller nodes. You will then add the VIPs to the override attributes.
Note

If you are upgrading your environment from an older configuration in which VIP vrid and networks were not defined, you may have to remove the Keepalived configurations in /etc/keepalived/conf.d/* and run chef-client before adding the VIP vrid and network definitions to override_attributes.

4.3.6.1. Havana VIP attribute blocks

These attribute blocks define which VIPs are associated with which service, and they also define the virtual router ID (vrid) and network for each VIP. The neutron-api VIP only needs to be specified if you are deploying OpenStack Networking. The following example shows the attributes for a (Havana) VIP configuration where the RabbitMQ VIP is 192.0.2.51, the HAProxy VIP is 192.0.2.52, and the MySQL VIP is 192.0.2.53:

   "override_attributes": {
     "vips": {
       "rabbitmq-queue": "192.0.2.51",
       "ceilometer-api": "192.0.2.52",
       "ceilometer-central-agent": "192.0.2.52",
       "cinder-api": "192.0.2.52",
       "glance-api": "192.0.2.52",
       "glance-registry": "192.0.2.52",
       "heat-api": "192.0.2.52",
       "heat-api-cfn": "192.0.2.52",
       "heat-api-cloudwatch": "192.0.2.52",
       "horizon-dash": "192.0.2.52",
       "horizon-dash_ssl": "192.0.2.52",
       "keystone-admin-api": "192.0.2.52",
       "keystone-internal-api": "192.0.2.52",
       "keystone-service-api": "192.0.2.52",
       "nova-api": "192.0.2.52",
       "nova-api-metadata": "192.0.2.52",
       "nova-ec2-public": "192.0.2.52",
       "nova-novnc-proxy": "192.0.2.52",
       "nova-xvpvnc-proxy": "192.0.2.52",
       "swift-proxy": "192.0.2.52",
       "neutron-api": "192.0.2.52",
       "mysql-db": "192.0.2.53",
       "config": {
         "192.0.2.51": { "vrid": 1, "network": "public" },
         "192.0.2.52": { "vrid": 2, "network": "public" },
         "192.0.2.53": { "vrid": 3, "network": "public" }
       }
     }
   }
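Each VIP listed in a vips block must have a matching entry under config, and the vrid values must not collide on the same network segment, or VRRP advertisements will conflict. A quick sanity-check sketch (check_vips is a hypothetical helper; the data is abbreviated from the Havana example):

```python
# Sketch: verify that every VIP has a vrid/network entry and that vrids are unique.
def check_vips(vips):
    config = vips["config"]
    # Every address used by a service (everything except the "config" key itself).
    addrs = {v for k, v in vips.items() if k != "config"}
    missing = addrs - set(config)
    vrids = [entry["vrid"] for entry in config.values()]
    has_duplicates = len(vrids) != len(set(vrids))
    return missing, has_duplicates

if __name__ == "__main__":
    vips = {
        "rabbitmq-queue": "192.0.2.51",
        "nova-api": "192.0.2.52",
        "mysql-db": "192.0.2.53",
        "config": {
            "192.0.2.51": {"vrid": 1, "network": "public"},
            "192.0.2.52": {"vrid": 2, "network": "public"},
            "192.0.2.53": {"vrid": 3, "network": "public"},
        },
    }
    print(check_vips(vips))  # → (set(), False)
```
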

4.3.6.2. Grizzly VIP attribute blocks

These attribute blocks define which VIPs are associated with which service, and they also define the virtual router ID (vrid) and network for each VIP. The quantum-api VIP only needs to be specified if you are deploying OpenStack Networking. The following example shows the attributes for a v4.1.n (Grizzly) VIP configuration where the RabbitMQ VIP is 192.0.2.51, the HAProxy VIP is 192.0.2.52, and the MySQL VIP is 192.0.2.53:

   "override_attributes": {
     "vips": {
       "rabbitmq-queue": "192.0.2.51",
       "cinder-api": "192.0.2.52",
       "glance-api": "192.0.2.52",
       "glance-registry": "192.0.2.52",
       "horizon-dash": "192.0.2.52",
       "horizon-dash_ssl": "192.0.2.52",
       "keystone-admin-api": "192.0.2.52",
       "keystone-internal-api": "192.0.2.52",
       "keystone-service-api": "192.0.2.52",
       "nova-api": "192.0.2.52",
       "nova-ec2-public": "192.0.2.52",
       "nova-novnc-proxy": "192.0.2.52",
       "nova-xvpvnc-proxy": "192.0.2.52",
       "swift-proxy": "192.0.2.52",
       "quantum-api": "192.0.2.52",
       "mysql-db": "192.0.2.53",
       "config": {
         "192.0.2.51": { "vrid": 1, "network": "public" },
         "192.0.2.52": { "vrid": 2, "network": "public" },
         "192.0.2.53": { "vrid": 3, "network": "public" }
       }
     }
   }

4.3.6.3. Installing a pair of High Availability Controller nodes

Follow this procedure to edit your environment file and apply the HA Controller roles to your Controller nodes.

Procedure 4.5. To install a pair of High Availability controller nodes

1. Open the environment file for editing.
   # knife environment edit <yourenvironmentname>
2. Locate the override_attributes section.
3. Add the VIP information to the override_attributes. If you are deploying a Havana environment, refer to Havana VIP attributes. If you are deploying a v4.1.n Grizzly environment, refer to Grizzly VIP attributes.
4. On the first Controller node, add the ha-controller1 role.
   # knife node run_list add <devicehostname> 'role[ha-controller1]'
5. On the second Controller node, add the ha-controller2 role.
   # knife node run_list add <devicehostname> 'role[ha-controller2]'
6. Run chef-client on the first Controller node.
7. Run chef-client on the second Controller node.
8. Run chef-client on the first Controller node again.

4.3.7. Add a compute node

The Compute nodes can be installed after the Controller node installation is complete.

Procedure 4.6. To install a single Compute node

1. Add the single-compute role to the target node's run list.
   # knife node run_list add <devicehostname> 'role[single-compute]'
2. Log in to the target node via ssh.
3. Run chef-client on the node.

It will take chef-client several minutes to complete the installation tasks. chef-client provides output to help you monitor the progress of the installation. Repeat this process on each Compute node. You will also need to run chef-client on each existing Compute node when additional Compute nodes are added.

4.3.8. Troubleshooting the installation

If the installation is unsuccessful, it may be due to one of the following issues.

- The node does not have access to the Internet. The installation process requires Internet access to download installation files, so ensure that the address for the nodes provides that access and that the proxy information that you entered is correct. You should also ensure that the nodes have access to a DNS server.
- Your network firewall is preventing Internet access. Ensure that the IP address that you assign to the Controller is available through the network firewall.

For more troubleshooting information and user discussion, you can also inquire at the Rackspace Private Cloud Support Forum at the following URL: https://community.rackspace.com/products/f/45
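The DNS side of the failure modes above can be screened with a small preflight sketch. The helper name is invented, and a real check would also test outbound HTTP access through your proxy:

```python
# Sketch: preflight check that a node can resolve DNS names before installing.
import socket

def can_resolve(hostname):
    try:
        socket.gethostbyname(hostname)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    # localhost should resolve via /etc/hosts even without external DNS.
    print(can_resolve("localhost"))
```
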

5. Configuring OpenStack Networking

A network deployed with the Rackspace Private Cloud cookbooks uses nova-network by default, but OpenStack Networking (Neutron) can be manually enabled. This section discusses the concepts behind the Rackspace deployment of Neutron and provides instructions for configuring it in your cluster.

Note

When using OpenStack Networking (Neutron), Controller nodes with Networking features and standalone Networking nodes require namespace kernel features that are not available in the default kernel shipped with RHEL 6.4, CentOS 6.4, and older versions of these operating systems. More information about Neutron limitations is available in the OpenStack documentation, and more information about Red Hat derivative kernel limitations is provided in the RDO FAQ.

If you require OpenStack Networking with these features, Rackspace recommends that you use Rackspace Private Cloud v4.2.0 with CentOS 6.4 or Ubuntu 12.04 for the Controller and Networking nodes. On CentOS 6.4, after applying the single-network-node role to the device, you must reboot it to use the appropriate version of the CentOS 6.4 kernel.

5.1. OpenStack Networking concepts

The Rackspace Private Cloud cookbooks deploy OpenStack Networking components on the Controller, Compute, and Network nodes in the following configuration:

Controller node: hosts the Neutron server service, which provides the networking API and communicates with and tracks the agents.

Network node:
- DHCP agent: spawns and controls dnsmasq processes to provide leases to instances. This agent also spawns neutron-ns-metadata-proxy processes as part of the metadata system.
- Metadata agent: provides a metadata proxy to the nova-api-metadata service. The neutron-ns-metadata-proxy processes direct traffic that they receive in their namespaces to the proxy.
- OVS plugin agent: controls OVS network bridges and routes between them via patch, tunnel, or tap without requiring an external OpenFlow controller.
- L3 agent: performs L3 forwarding and NAT.

Compute node: has an OVS plugin agent.

Note

You can use the single-network-node role alone or in combination with the ha-controller1 or single-compute roles.

5.1.1. Network types

The OpenStack Networking configuration provided by the Rackspace Private Cloud cookbooks allows you to choose between VLAN or GRE isolated networks, both provider- and tenant-specific. From the provider side, an administrator can also create a flat network.

The type of network used for private tenant networks is determined by the network_type attribute, which can be edited in the Chef override_attributes. This attribute sets both the default provider network type and the only type of network that tenants are able to create. Administrators can always create flat and VLAN networks. GRE networks of any type require the network_type to be set to gre.

5.1.2. Namespaces

For each network you create, the Network node (or Controller node, if combined) will have a unique network namespace (netns) created by the DHCP and Metadata agents. The netns hosts an interface and IP addresses for dnsmasq and the neutron-ns-metadata-proxy. You can view the namespaces with the ip netns [list] command, and can interact with them with the ip netns exec <namespace> <command> command.

5.1.3. Metadata

Not all networks or VMs need metadata access. Rackspace recommends that you use metadata if you are using a single network. If you need metadata, you will need to enable metadata route injection when creating a subnet. If you need to use a default route and also provide instances with access to the metadata route, refer to Creating a Subnet for more information. Note that this approach will not provide metadata on cirros images. However, booting a cirros instance with nova boot --config-drive will bypass the metadata route requirement.

5.1.4. OVS bridges

An OVS bridge for provider traffic is created and configured on the nodes where single-network-node and single-compute are applied. Bridges are created, but physical interfaces are not added.
An OVS bridge is not created on a Controller-only node.

When creating networks, you can specify the type and properties, such as Flat vs. VLAN, Shared vs. Tenant, or Provider vs. Overlay. These properties identify and determine the behavior and resources of instances attached to the network. The cookbooks will create bridges for the configuration that you specify, although they do not add physical interfaces to provider bridges. For example, if you specify a network type of GRE, a br-tun tunnel bridge will be created to handle overlay traffic.

5.1.5. OpenStack Networking and high availability

OpenStack Networking has been made HA as of Rackspace Private Cloud v4.2.0, and has been tested on Ubuntu 12.04 and CentOS 6.4. For an HA configuration, you must configure the OpenStack Networking roles on the Controller node. Do not use a standalone Network node if you require OpenStack Networking to be HA. For more information about HA, refer to Controller Node High Availability.

5.2. OpenStack Networking prerequisites

If you are using OpenStack Networking, you will have to specify it in your Chef environment. You should also have the following information:

- The nova network CIDR
- The public network CIDR
- The management network CIDR
- The name of the Nova cluster
- The password for an OpenStack administrative user

The nova, public, and management networks must be pre-existing, working networks with addresses already configured on the hosts. They are defined by CIDR range, and any network interface with an address within the named CIDR range is assumed to be included in that network. The CIDRs must be provisioned by your hosting provider or by yourself. You can specify the same CIDR for multiple networks. All three networks can use the same CIDR, but this is not recommended in production environments.

The following table lists the networks and the services that bind to an IP address within each of these general networks.

Network    Services
nova       keystone-admin-api
           nova-xvpvnc-proxy
           nova-novnc-proxy
public     nova-novnc-server
           graphite-api
           keystone-service-api
           glance-api
           glance-registry
           nova-api
           nova-ec2-admin
           nova-ec2-public
           nova-volume

Network     Services (continued)
management  neutron-api
            cinder-api
            ceilometer-api
            horizon-dash
            horizon-dash_ssl
            graphite-statsd
            graphite-carbon-line-receiver
            graphite-carbon-pickle-receiver
            graphite-carbon-cache-query
            memcached
            collectd
            mysql
            keystone-internal-api
            glance-admin-api
            glance-internal-api
            nova-internal-api
            nova-admin-api
            cinder-internal-api
            cinder-admin-api
            cinder-volume
            ceilometer-internal-api
            ceilometer-admin-api
            ceilometer-central

You should also have Controller and Compute nodes already configured. You may configure OpenStack Networking roles on the Controller node or on a standalone Network node. You should not use a standalone Network node if you need OpenStack Networking to be HA.

On Ubuntu, you may also need to install a linux-headers package on the nodes where the Networking roles will be applied and on the Compute nodes. This enables openvswitch-datapath-dkms to build the Open vSwitch module required by the service. The package is installed with the following command:

   $ sudo apt-get -y install linux-headers-`uname -r`

5.3. Configuring OpenStack Networking

By default, the Rackspace Private Cloud cookbooks create a cluster that uses nova-network. The configuration also supports OpenStack Networking (Neutron, formerly Quantum), which must be enabled by editing the override attributes in the Chef environment.
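Because the nova, public, and management networks are defined purely by CIDR membership, you can check which osops network a given interface address falls into with the standard ipaddress module. This is an illustrative sketch; the CIDRs below are the documentation-range examples, not requirements:

```python
# Sketch: map an interface address to the osops networks whose CIDR contains it.
import ipaddress

OSOPS_NETWORKS = {
    "nova": "192.0.2.0/24",
    "public": "192.0.2.0/24",
    "management": "192.0.2.0/24",
}

def networks_for(address):
    # A network "contains" an interface when the address is inside its CIDR.
    ip = ipaddress.ip_address(address)
    return sorted(name for name, cidr in OSOPS_NETWORKS.items()
                  if ip in ipaddress.ip_network(cidr))

if __name__ == "__main__":
    print(networks_for("192.0.2.10"))    # → ['management', 'nova', 'public']
    print(networks_for("198.51.100.5"))  # → []
```
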

5.3.1. Networking infrastructure

The procedures documented here assume that you will be adding the single-network-node role to an existing cluster where a Controller node and at least one Compute node have already been configured. This role can be applied to a standalone Network node, or to the already-configured Controller node. The assumed environment is as follows:

- A minimum of one Controller node configured with the ha-controller1 role, with an out-of-band eth0 management interface. The single-network-node role can also be applied to this node, so that the node serves the dual purpose of hosting core OpenStack services as well as networking services.
- A minimum of one Compute node configured with the single-compute role, with an out-of-band eth0 management interface and an eth1 physical provider interface.
- If you do not require OpenStack Networking HA and are not using the Controller node for the single-network-node role, you will need one Network node that will be configured with the single-network-node role, with an out-of-band eth0 management interface and an eth1 physical provider interface.

If you are installing an entirely new cluster, refer to Installing a New Cluster With OpenStack Networking.

5.3.2. Editing the override attributes for Networking

Before you apply the single-network-node role, you must update the override_attributes section of your environment file with the OpenStack Networking attributes.

Procedure 5.1. To edit override attributes for networking

1. On the Chef server, run the knife environment edit command and edit the nova network section of the override_attributes to specify OpenStack Networking, as in the following example:

   "override_attributes": {
     "nova": {
       "network": {
         "provider": "neutron"
       }
     },
     "osops_networks": {
       "nova": "<novanetworkcidr>",
       "public": "<publicnetworkcidr>",
       "management": "<managementnetworkcidr>"
     }
   }

If you require a GRE network, you must also add a network_type attribute.

   "override_attributes": {
     "nova": {
       "network": {
         "provider": "neutron"
       }
     },
     "neutron": {
       "ovs": {
         "network_type": "gre"
       }
     }
   }

2. If you need to customize your provider_network settings, add a provider_networks block to the override_attributes. Ensure that the label and bridge settings match the name of the interface for the instance network, and that the vlans value is correct for your provider VLANs, as in the following example:

   "neutron": {
     "ovs": {
       "provider_networks": [
         {
           "label": "ph-eth1",
           "bridge": "br-eth1",
           "vlans": "1:1000"
         }
       ]
     }
   }

3. If you are working in an HA environment, add the neutron-api VIP to the override_attributes. These attribute blocks define which VIPs are associated with which service, and they also define the virtual router ID (vrid) and network for each VIP.

   "override_attributes": {
     "vips": {
       "rabbitmq-queue": "<rabbitmqvip>",
       "horizon-dash": "<haproxyvip>",
       "horizon-dash_ssl": "<haproxyvip>",
       "keystone-service-api": "<haproxyvip>",
       "keystone-admin-api": "<haproxyvip>",
       "keystone-internal-api": "<haproxyvip>",
       "nova-xvpvnc-proxy": "<haproxyvip>",
       "nova-api": "<haproxyvip>",
       "nova-ec2-public": "<haproxyvip>",
       "nova-novnc-proxy": "<haproxyvip>",
       "cinder-api": "<haproxyvip>",
       "glance-api": "<haproxyvip>",
       "glance-registry": "<haproxyvip>",
       "swift-proxy": "<haproxyvip>",
       "neutron-api": "<haproxyvip>",
       "mysql-db": "<mysqlvip>",
       "config": {
         "<rabbitmqvip>": {
           "vrid": <rabbitmqvirtualrouterid>,
           "network": "<networkname>"
         },
         "<haproxyvip>": {
           "vrid": <haproxyvirtualrouterid>,
           "network": "<networkname>"
         },
         "<mysqlvip>": {
           "vrid": <mysqlvirtualrouterid>,
           "network": "<networkname>"
         }
       }
     }
   }

5.3.3. Apply the network role

The single-network-node role can be applied in your environment after the Controller node installation is complete and eth1 is configured. You can apply this role to a standalone Network node, or to the existing Controller node.

Note

On CentOS 6.4, after applying the single-network-node role to the device, you must reboot it to use the appropriate version of the CentOS 6.4 kernel.

Procedure 5.2. To apply the network role

1. Add the single-network-node role to the target node's run list.
   # knife node run_list add <devicehostname> 'role[single-network-node]'
2. Log in to the target node via ssh.
3. Run chef-client on the node.

It will take chef-client several minutes to complete the installation tasks. chef-client provides output to help you monitor the progress of the installation.

5.3.4. Interface Configurations

Rackspace Private Cloud supports the following interface configurations:

- Separated Plane: The control plane includes the management interface, management IP address, service IP bindings, VIPs, and VIP traffic. The control plane communicates through a different logical interface than the data plane, which includes all OpenStack instance traffic.
- Combined Plane: Both control and data planes communicate through a single logical interface on a single device.

A Separated Plane configuration is recommended in situations where high traffic on the data plane will impede service traffic on the control plane. In this configuration, you need separate switches, firewalls, and routers for each plane. A Combined Plane configuration maximizes port density efficiency, but is more difficult to configure. If you need a Combined Plane configuration, contact Rackspace Support for assistance.

5.3.4.1. Separated Plane Configuration

This section describes how to manually create separated plane configurations on Ubuntu and CentOS. The out-of-band eth0 management interface is where the primary IP address of the node is located, and it is not controlled by OpenStack Networking. The eth1 physical provider interfaces have no IP addresses and must be configured to be "up" on boot.

Procedure 5.3. To create a separated plane configuration on Ubuntu

1. Add an eth1 entry in /etc/network/interfaces:

   auto eth1
   iface eth1 inet manual
     up ip link set $IFACE up
     down ip link set $IFACE down

This ensures that eth1 will be up on boot.

2. To bring up eth1 without rebooting the device, use the ifup command:
   # ifup eth1
3. Add the interface as a port in the bridge.
   # ovs-vsctl add-port br-eth1 eth1

Procedure 5.4. To create a separated plane configuration on CentOS

1. Create the OVS bridge in the file /etc/sysconfig/network-scripts/ifcfg-br-eth1.

   DEVICE=br-eth1
   ONBOOT=yes
   BOOTPROTO=none
   STP=off
   NM_CONTROLLED=no
   HOTPLUG=no
   DEVICETYPE=ovs
   TYPE=OVSBridge

2. Modify the bridge interface in the file /etc/sysconfig/network-scripts/ifcfg-eth1.

   DEVICE=eth1
   BOOTPROTO=none
   HWADDR=<hardwareAddress>
   NM_CONTROLLED=no
   ONBOOT=yes
   TYPE=OVSPort
   DEVICETYPE="ovs"
   OVS_BRIDGE=br-eth1
   UUID="UUID"
   IPV6INIT=no
   USERCTL=no

5.4. Creating a network

The following procedure assumes that you have added the single-network-node role in your environment and configured the environment files and interfaces as described in the previous sections. This documentation provides a high-level overview of basic OpenStack Networking tasks. For detailed documentation, refer to the Networking chapter of the OpenStack Cloud Administrator Guide.

Authentication details are required for the neutron net-create command. These details can be found in the openrc file created during installation. On the Controller node, create a provider network with the neutron net-create command. For a flat network: # neutron net-create --provider:physical_network=ph-eth1 \ --provider:network_type=flat <networkname> For a VLAN network: # neutron net-create --provider:physical_network=ph-eth1 \ --provider:network_type=vlan --provider:segmentation_id=100 <networkname> For a GRE network: # neutron net-create --provider:network_type=gre \ --provider:segmentation_id=100 <networkname> For an L3 router external network: # neutron net-create --provider:network_type=local \ --router:external=true public 5.4.1. Creating a subnet Once you have created a network, you can create subnets with the neutron subnet-create command: # neutron subnet-create --name range-one <networkname> <subnetcidr> If you are using metadata, you will need to configure a default metadata route. To accomplish this, you will need to create a subnet with the following configuration: No gateway IP is specified. A static route is defined from 0.0.0.0/0 to your gateway IP address. The DHCP allocation pool is defined to accommodate the gateway IP by starting the pool range beyond the gateway IP. # neutron subnet-create --name <subnetname> <networkname> \ --no-gateway \

--host-route destination=0.0.0.0/0,nexthop=10.0.0.1 \ --allocation-pool start=10.0.0.2,end=10.0.0.251 With this configuration, dnsmasq will pass both routes to instances. Metadata will be routed correctly without any changes on the external gateway. If you have a non-routed network and are not using a gateway, you only need to create the subnet with --no-gateway. A metadata route will automatically be created. # neutron subnet-create --name <subnetname> <networkname> \ --no-gateway You can also specify --dns-nameservers for the subnet. If a name server is not specified, the instances will try to make DNS queries through the default gateway IP. # neutron subnet-create --name <subnetname> <networkname> \ --dns-nameservers list=true <dnsnameserverip> 5.4.2. Configuring L3 routers Rackspace Private Cloud supports L3 routing. L3 routers can connect multiple OpenStack Networking L2 networks and can also provide a gateway to connect one or more private L2 networks to a shared external network. An L3 router uses SNAT to manage all traffic by default. When you create a network to use as the external network for an L3 router, use the following command. # neutron net-create --provider:network_type=local \ --router:external=true public After you have created a network and subnets, you can use the neutron router-create command to create L3 routers. # neutron router-create <routername> For an internal router, use neutron router-interface-add to add subnets to the router. # neutron router-interface-add <routername> <subnetuuid> To use a router to connect to an external network, which will enable the router to act as a NAT gateway for external traffic, use neutron router-gateway-set. # neutron router-gateway-set <routername> <externalnetworkid>
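As a worked example, the commands above can be combined to build a VLAN network with a router attached. The network name, VLAN ID, and addresses below are illustrative only and must be adapted to your environment.

```bash
# Hypothetical end-to-end sequence; demo-net, VLAN 200, and
# 192.168.50.0/24 are placeholders for values from your environment.
neutron net-create --provider:physical_network=ph-eth1 \
    --provider:network_type=vlan --provider:segmentation_id=200 demo-net

# Add a subnet with a DNS name server for instances on the network.
neutron subnet-create --name demo-subnet demo-net 192.168.50.0/24 \
    --dns-nameservers list=true 8.8.8.8

# Create a router and attach the subnet to it (the client accepts
# the subnet name or UUID).
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
```

You can confirm the result with neutron net-list, neutron subnet-list, and neutron router-list.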

You can view all routers in the environment with the router-list command. # neutron router-list Additional information about L3 routing can be found in the OpenStack Networking Administrator Guide. 5.4.3. Configuring Load Balancing as a Service (LBaaS) Rackspace Private Cloud includes a technology preview implementation of LBaaS with the neutron-lbaas-agent recipe, which is part of the single-network-node role. The Rackspace Private Cloud cookbooks currently support only the HAProxy LBaaS service provider. LBaaS is offered as a technology preview feature for Rackspace Private Cloud and is not fully supported at this time. It is not currently recommended for Rackspace Private Cloud production environments. To enable LBaaS, you must first have a supported version of the cookbooks, with the single-network-node role applied to a node in your environment. You must then set the following attribute to true: node["neutron"]["lbaas"]["enabled"] = true To enable the LBaaS button in Horizon, you must set the following attribute in your environment: default["horizon"]["neutron"]["enable_lb"] = "True" Additional information about using LBaaS in OpenStack can be found on the OpenStack wiki: OpenStack Networking and LBaaS overview How to run LBaaS LBaaS architecture LBaaS agent 5.4.4. Configuring Firewall as a Service (FWaaS) Rackspace Private Cloud includes a technology preview implementation of FWaaS with the neutron-fwaas-agent recipe, which is part of the single-network-node role. FWaaS is offered as a technology preview feature for Rackspace Private Cloud and is not fully supported at this time. This feature can only be accessed via the API and CLI. It is not currently recommended for Rackspace Private Cloud production environments.

To enable FWaaS, you must first have a supported version of the cookbooks, with the single-network-node role applied to a node in your environment. You must then set the following attribute to true: node["neutron"]["fwaas"]["enabled"] = true Additional information about using FWaaS in OpenStack can be found on the OpenStack wiki article on FWaaS. 5.4.5. Configuring VPN as a Service (VPNaaS) Rackspace Private Cloud includes a technology preview implementation of VPNaaS with the neutron-vpnaas-agent recipe, which is part of the single-network-node role. VPNaaS is offered as a technology preview feature for Rackspace Private Cloud and is not fully supported at this time. This feature can only be accessed via the API and CLI. It is not currently recommended for Rackspace Private Cloud production environments. To enable VPNaaS, you must first have a supported version of the cookbooks, with the single-network-node role applied to a node in your environment. You must then set the following attribute to true: node["neutron"]["vpnaas"]["enabled"] = true Additional information about using VPNaaS in OpenStack can be found on the OpenStack wiki article on VPNaaS. Note Rackspace Private Cloud Support does not currently support technology preview features such as LBaaS, FWaaS, and VPNaaS. They are not recommended for Rackspace Private Cloud production environments, and are included for experienced OpenStack users only. 5.5. Installing a new cluster With OpenStack Networking If you are installing a cluster and want to install OpenStack Networking from the start, you will follow the instructions documented in Installing OpenStack With Rackspace Private Cloud Tools. You will take the following additional steps: Modify the override_attributes as documented in Editing the Override Attributes for Networking. Add the single-network-node role to the run list when creating Controller nodes, as shown in the following examples.

For a single controller: # knife node run_list add <devicehostname> \ 'role[ha-controller1],role[single-network-node]' For HA controllers: # knife node run_list add <devicehostname> \ 'role[ha-controller1],role[single-network-node]' # knife node run_list add <devicehostname> \ 'role[ha-controller2],role[single-network-node]' 5.6. Troubleshooting OpenStack Networking In the event you experience issues adding or deleting subnets, you may be experiencing issues with dnsmasq or neutron-dhcp-agent. The following steps will help you determine if the issue is with dnsmasq, neutron-server, or neutron-dhcp-agent. Procedure 5.5. To identify OpenStack Networking issues 1. Ensure that dnsmasq is running with pgrep -fl dnsmasq. If it is not, restart neutron-dhcp-agent. 2. If dnsmasq is running, confirm that the IP address is in the namespace with ip netns list. 3. Identify the qdhcp-<networkuuid> namespace with ip netns exec qdhcp-<networkuuid> ip addr and ensure that the IP on the interface is present and matches the one present for dnsmasq. To verify what the expected IP address is, use neutron port-list and neutron port-show <portuuid>. 4. Use cat /var/lib/neutron/dhcp/<networkuuid>/host to determine the leases that dnsmasq is configured with by OpenStack Networking. If the dnsmasq configuration is correct, but dnsmasq is not responding with leases and the bridge/interface is created and running, pkill dnsmasq and restart neutron-dhcp-agent. If dnsmasq does not include the correct leases, verify that neutron-server is running correctly and that it can communicate with neutron-dhcp-agent. If it is running correctly, and the bridge/interface is created and running, restart neutron-dhcp-agent. 5.7. RPCDaemon This section describes the Rackspace Private Cloud RPCDaemon, a utility that facilitates high availability of virtual routers and DHCP services in a Rackspace Private Cloud OpenStack cluster using OpenStack Networking (Neutron, formerly Quantum).

Generally, you will not need to interact directly with RPCDaemon. This information is provided to give you an understanding of how Rackspace has implemented HA in OpenStack Networking and to assist with troubleshooting. 5.7.1. RPCDaemon overview The RPCDaemon is a Python-based daemon that is designed to monitor network topology changes in an OpenStack cluster running OpenStack Networking, and automatically makes changes to the networking configuration to maintain availability of services even in the event of an OpenStack Networking node failure. RPCDaemon supports both Grizzly and Havana releases of OpenStack and is part of Rackspace Private Cloud v4.1.3 and later. Currently, OpenStack does not include built-in support for highly available virtual routers or DHCP services. These services are scheduled to a single network node and are not rescheduled when that node fails. Since these services are normally scheduled evenly, the failure of a single network node can cause failures in IP addressing and routing on a number of networks proportional to the number of network nodes in use. To avert this risk, most production deployments of OpenStack use the nova-network driver in HA mode, or use OpenStack Networking with provider networks to externalize these services for high availability. However, these solutions reduce the utility of OpenStack's software-defined networking. Rackspace developed RPCDaemon to address these issues and improve the utility of OpenStack Networking to meet Rackspace Private Cloud production requirements. 5.7.2. RPCDaemon operation RPCDaemon monitors the AMQP message bus for actionable events. It is automatically installed on a network node as part of the single-network-node role. Three plugins are currently implemented: DHCPAgent: Implements high availability for DHCP services. L3Agent: Implements high availability for virtual routers. Dump: Dumps message traffic. 
This is typically only used for development or troubleshooting purposes and is not discussed here. 5.7.2.1. DHCPAgent plugin The DHCPAgent plugin performs the following tasks: Periodically removes DHCP services from any OpenStack Networking DHCP agent that is no longer reporting itself as available. Periodically provisions DHCP services on every Neutron DHCP agent node that does not already have them provisioned. Ensures that DHCP services are deprovisioned on all Neutron DHCP agent nodes when a DHCP-enabled network is removed. The operational effect of these actions is that when you create new DHCP-enabled networks, DHCP servers appear on every Neutron network node rather than on a

single Neutron network node. While this slightly increases DHCP traffic from multiple offers to each DHCP discovery request, it does so safely, because the OpenStack DHCP implementation uses DHCP reservations to ensure virtual machines always boot with predictable IP addresses. Because of this, DHCP requests can continue to be serviced by other available network nodes, even in the event of catastrophic failure of a single network node. 5.7.2.2. L3Agent plugin The L3Agent plugin runs periodically and only monitors virtual routers that are currently assigned to L3 agents. If the L3Agent plugin observes an inactive L3 agent that OpenStack Networking shows as hosting a virtual router, then the L3Agent plugin deprovisions the virtual router from that node and reprovisions it on another active Neutron L3 agent node. This reprovisioning action does not occur immediately, and there will be some minimal network interruption while the virtual router is migrated. However, the corrective action happens without intervention, and any network outage is transient. This process does allow a higher availability of virtual routing, and the minimal interruption may be acceptable for some production workloads. 5.7.3. RPCDaemon configuration Not all configuration options are currently exposed by the Rackspace Private Cloud cookbooks. This section describes the configuration values in the RPCDaemon configuration file, which is typically located at /etc/rpcdaemon.conf. 5.7.3.1. General RPCDaemon options General daemon options are specified in the Daemon section of the configuration file. Available options include: plugins: Space-separated list of plugins to load. Valid options include L3Agent, DHCPAgent, and Dump. rpchost: Kombu connection URL for the OpenStack message server. In the case of RabbitMQ, an IP address is sufficient. See the Kombu Documentation for more information on Kombu connection URLs. pidfile: Location of the daemon pid file. logfile: Location of the log file. 
loglevel: Verbosity of logging. Valid options include DEBUG, INFO, WARNING, ERROR, and CRITICAL. check_interval: The interval in seconds in which to run plugin checks. 5.7.3.2. DHCPAgent and L3Agent options DHCPAgent plugin options are specified in the DHCPAgent section of the configuration file and L3Agent plugin options are specified in the L3Agent section of the configuration file. 42

Logs will also be sent to the logfile specified in the Daemon section, while the log level is independently configurable. The following configuration options are available for the DHCPAgent and L3Agent plugins: conffile: Path to the neutron configuration file. loglevel: Verbosity of logging. timeout: Max time for API calls to complete. This also affects failover speed. queue_expire: Auto-terminate RabbitMQ queues if no activity in the specified time. 5.7.3.3. Dump The Dump plugin options are specified in the Dump section of the configuration file. The Dump plugin is most useful when the loglevel option has been set to DEBUG. In this mode, dumped messages will be logged in the logfile specified in the Daemon section. The Dump plugin is most useful when running in foreground mode. See Command Line Options for more information. The following configuration options are available for the Dump plugin: loglevel: Verbosity of logging. DEBUG will produce the most useful results. queue: Queue to dump. Typically neutron to view network related messages. 5.7.4. Command line options The command rpcdaemon currently uses two command-line options: -d: Run in foreground and do not detach. When running in foreground, a pidfile is not dropped, the default log level is set to DEBUG, and the daemon logs to stderr rather than the specified logfile. This is most useful for running the Dump plugin, but can be helpful in development mode as well. -c: Specify the path to the configuration file. The default configuration file path is /usr/local/etc/rpcdaemon.conf, but init scripts on packaged versions of RPCDaemon pass -c /etc/rpcdaemon.conf.
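Putting the options above together, a minimal /etc/rpcdaemon.conf might look like the following sketch. The section and option names are taken from the descriptions above; the address and paths are placeholders, and the exact syntax should be checked against the file generated by the cookbooks in your environment.

```ini
[Daemon]
; Load both HA plugins; add Dump only for troubleshooting.
plugins = DHCPAgent L3Agent
; RabbitMQ address (placeholder).
rpchost = 192.0.2.10
pidfile = /var/run/rpcdaemon.pid
logfile = /var/log/rpcdaemon.log
loglevel = INFO
check_interval = 30

[DHCPAgent]
conffile = /etc/neutron/neutron.conf
loglevel = INFO
timeout = 60

[L3Agent]
conffile = /etc/neutron/neutron.conf
loglevel = INFO
timeout = 60
```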

6. OpenStack metering Rackspace Private Cloud offers full OpenStack Metering (Ceilometer) support, including access to OpenStack statistics through the Horizon dashboard. 6.1. OpenStack metering implementation in Rackspace Private Cloud The Rackspace Private Cloud implementation of Metering is based on a MySQL framework and uses SQLAlchemy. For more information about the parameters involved in this implementation, refer to the OpenStack documentation on Metering configuration options. For a list of database backends and what they support, refer to the OpenStack documentation on choosing a database backend for metering. 6.2. Using OpenStack metering Metering statistics are available through the Horizon dashboard. When you log in to the dashboard as the admin user, click on the Resource Usage tab to view the statistics. The following information is available: Global Disk Usage: Provides an average of the last 30 days of disk usage, broken down by tenant/project. Global Network Traffic Usage: Provides an average of the last 30 days of network traffic usage, broken down by tenant/project. Stats: Provides a graphic representation of all resources. You can view the usage for individual services, over a range of periods from the last day up to the last year. For more information about Metering, refer to the Ceilometer developer documentation, maintained by OpenStack.
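The statistics shown in the dashboard can also be queried from the command line with the ceilometer client. The meter name and period below are examples; run ceilometer meter-list first to see the meters available in your environment.

```bash
# List the meters Ceilometer is currently collecting.
ceilometer meter-list

# Show statistics for a meter (here, instance CPU utilization),
# aggregated over one-hour periods.
ceilometer statistics -m cpu_util -p 3600
```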

7. OpenStack Orchestration Rackspace Private Cloud offers a technology preview implementation of OpenStack Orchestration (Heat), which enables you to manage infrastructure and applications within OpenStack clouds. Note Rackspace Private Cloud Support does not currently support technology preview features. They are not recommended for Rackspace Private Cloud production environments, and are included for experienced OpenStack users only. OpenStack Orchestration is a service that orchestrates cloud applications onto cloud resources using the OpenStack Heat Orchestration Template (HOT) format, through an OpenStack REST API. OpenStack Orchestration also provides compatibility with the AWS CloudFormation template format. More information about Orchestration is available on the OpenStack wiki at https://wiki.openstack.org/wiki/heat. The Rackspace Private Cloud implementation of Orchestration includes the following components: The standard Heat API The heat-api-cfn API The CloudWatch API, which enables metric collection 7.1. Installing Orchestration OpenStack Orchestration is not included in the default Controller and all-in-one installations, but you can add Orchestration to your private cloud at any time by applying the heat-all role to your Controller node. Procedure 7.1. To install OpenStack Orchestration 1. Log in to the Chef server or a device that has knife access to the Chef server. 2. Add the heat-all role to the Controller node's run list. # knife node run_list add <devicehostname> 'role[heat-all]' 3. Log on to the Controller node via ssh. 4. Run chef-client on the Controller node. It will take chef-client several minutes to complete the installation tasks. chef-client will provide output to help you monitor the progress of the installation.
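For reference, a minimal HOT template that boots a single server has the following shape; the image and flavor names here are placeholders and must correspond to resources that exist in your cloud.

```yaml
heat_template_version: 2013-05-23

description: Minimal example stack that boots one server.

parameters:
  key_name:
    type: string
    description: Name of an existing keypair.

resources:
  example_server:
    type: OS::Nova::Server
    properties:
      image: my-image        # placeholder; use an image from your cloud
      flavor: m1.small
      key_name: { get_param: key_name }
```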

7.2. Using Orchestration OpenStack Orchestration is accessible through the Horizon dashboard and through the command line interface. Generally, interactions with Orchestration through the command line interface provide better error handling and user interaction. Refer to the Heat commands chapter of the OpenStack End User Guide for documentation of the heat client. The Orchestration page is accessed through the Project tab of the navigation bar. A repository of templates is maintained at https://github.com/openstack/heat-templates; you may also wish to develop your own for your environment. When a template is launched, it will create a "stack" of one or more instances, which may be configured to run applications. Follow this procedure to launch a stack from a URL, a template file, or by entering template information directly through the Horizon dashboard. Procedure 7.2. To launch OpenStack Orchestration 1. On the Orchestration page, click the Launch Stack button in the upper right corner of the screen. The Select Template dialog opens. 2. Select the Template Source and enter the source information: URL: Enter the URL where the template is located. File: Browse to the location of the template file on your workstation. Direct Input: Enter the template code in the Template Data field. 3. You will be prompted to enter information that the template requires to launch the stack. The exact information will vary depending on the template. 4. Click Launch Template. When the stack is complete, it will appear in the Stacks list. Any instances that have been created by the template will appear on the Instances page. Click on a stack name to view the following information: A network topology diagram. Resources in a healthy state are displayed in green, and those experiencing issues are displayed in red. Detailed information about the stack and its parameters. A list of resources being used by the stack, as well as detailed information about the resources. 
Events that have taken place in the stack, such as server creation.
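From the command line, a stack can be launched with the heat client; the stack name, template path, and parameter below are illustrative.

```bash
# Launch a stack from a local template file (hypothetical names).
heat stack-create example-stack -f /tmp/example-template.yaml \
    -P "key_name=demo-key"

# Monitor progress and review stack events.
heat stack-list
heat event-list example-stack
```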

8. Accessing the cloud This chapter describes the methods you will use to access your cloud. You should be familiar with the contents of this section before attempting to create an instance or perform other configuration and maintenance tasks. 8.1. Accessing the controller node Rackspace Private Cloud also installs the OpenStack client utilities necessary to use the cloud. You can access these features through the command line interface on the Controller node. To use them, log in to the Controller node via SSH as root. You can now run the following commands. $ source openrc $ nova flavor-list You should see output similar to the following:
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         |
| 2  | m1.small  | 2048      | 10   | 20        |      | 1     | 1.0         |
| 3  | m1.medium | 4096      | 10   | 40        |      | 2     | 1.0         |
| 4  | m1.large  | 8192      | 10   | 80        |      | 4     | 1.0         |
| 5  | m1.xlarge | 16384     | 10   | 160       |      | 8     | 1.0         |
+----+-----------+-----------+------+-----------+------+-------+-------------+
This is a list of "flavors", predefined resource configurations (memory, disk, and VCPUs) that you can assign to instances, and is an example of the information that you can access through the python-novaclient command line client. Note Do not remove the default flavors. Doing so will cause issues with the dashboard. You can also view the status of the Controller and Compute nodes and the nova components active on each while logged in as the root user. $ nova service-list You should see output similar to the following:
Binary            Host      Zone  Status   State  Updated_At
nova-scheduler    ctrl      nova  enabled  :-)    2012-08-02 14:51:34
nova-consoleauth  ctrl      nova  enabled  :-)    2012-08-02 14:51:41
nova-network      compute1  nova  enabled  :-)    2012-08-02 14:51:39
nova-compute      compute1  nova  enabled  :-)    2012-08-02 14:51:35

You can also view logs with the tail command. For example, to view nova.log, execute the following command: $ tail /var/log/nova/nova.log All logs are available in the /var/log/ directory and its subdirectories. 8.2. Accessing the dashboard In addition to the command line, you can use your web browser to access the Controller host. You can use the hostname or the IP address of the Controller node. You should see the OpenStack dashboard (Horizon) login page. Log in with the OpenStack username admin and the OpenStack admin password that you created during the Nova cluster creation. When the login is successful, you can configure additional users, create and manage images, and launch instances. Note Clusters created with earlier versions of Rackspace Private Cloud tools have a Rackspace-customized dashboard. In Rackspace Private Cloud v4.1.2, support for the Rackspace theme was deprecated to accommodate the new Neutron tab. 8.2.1. Using your logo in the OpenStack dashboard You can customize the dashboard by adding your own logo. Procedure 8.1. To add your logo to the Dashboard 1. Create a transparent PNG of your logo, sized to fit within a 200-pixel wide by 160-pixel tall space. 2. Name the file logo.png. 3. Save logo.png in the following location: /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/logo.png 4. If you have not already done so, switch to root access with sudo -i. 5. Open style.css for editing with nano. $ nano /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/css/style.css 6. Press Ctrl+w and search for: h1.brand.

7. Replace the entire h1.brand rule with the following: h1.brand a { background: url(../img/logo.png) center center no-repeat; display: block; height: 160px; text-indent: -9999px; margin: 25px auto; } 8. Press Ctrl+X; then press Y to commit the change. 9. Press Return to save style.css and exit the editor. 8.3. OpenStack client utilities The OpenStack client utilities are a convenient way to interact with OpenStack from the command line from your own workstation, without being directly logged in to the Controller node. The client utilities for Python are available via PyPI and can be installed on most Linux systems with Python available via pip install python-novaclient and pip install python-glanceclient. For more information, refer to the following links. python-novaclient Setting up python-novaclient python-glanceclient OpenStack Glance CLI Note The clients are maintained by the community and should be considered software in development. When in doubt, refer to the internal client help for more information. A command line client is also available for OpenStack Block Storage (Cinder). For more information about Cinder, refer to Configuring OpenStack Block Storage. 8.4. Viewing and setting environment variables The environment variables set in the openrc file are used by the OpenStack clients to provide the information necessary to authenticate to your cloud. When you are logged into the Controller node as root, you can view the openrc file. Caution Be careful with the information contained in openrc. This file contains administrative credentials by default. This file should not be edited, since it is automatically maintained by Chef. If you want to connect to the OpenStack installation via python-novaclient or other command line clients, you must add environment variables to your local environment. The easiest way to capture environment variables is to download them from the dashboard.

Procedure 8.2. To download environment variables in the Dashboard 1. Log into the dashboard. 2. In the upper right corner, click Settings. 3. In the navigation panel, select OpenStack Credentials. 4. Select the project for which you want to download the environment variables and click Download RC file. 5. After you have saved the file, open a local terminal and execute the command source openrc to add the environment variables to your local environment. The contents of the openrc file are as follows: #!/bin/bash # With the addition of Keystone, to use an openstack cloud you should # authenticate against keystone, which returns a **Token** and **Service # Catalog**. The catalog contains the endpoint for all services the # user/tenant has access to - including nova, glance, keystone, swift. # # *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We # will use the 1.1 *compute api* export OS_AUTH_URL=http://<controllerNodeURL>:5000/v2.0 # With the addition of Keystone we have standardized on the term **tenant** # as the entity that owns the resources. export OS_TENANT_ID=<tenantID> export OS_TENANT_NAME=<tenantName> # In addition to the owning entity (tenant), openstack stores the entity # performing the action as the **user**. export OS_USERNAME=<username> # With Keystone you pass the keystone password. echo "Please enter your OpenStack Password: " read -s OS_PASSWORD_INPUT export OS_PASSWORD=$OS_PASSWORD_INPUT 50
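Before running the clients, you can quickly confirm that the variables from openrc are present in your shell. The helper below is a hypothetical convenience script, not part of the openrc file itself; it checks only the variables that the clients require for authentication.

```bash
# Hypothetical helper (not part of openrc): print the name of each
# required OpenStack client variable that is not currently set.
missing_openstack_vars() {
  [ -z "$OS_AUTH_URL" ]    && echo "OS_AUTH_URL"
  [ -z "$OS_TENANT_NAME" ] && echo "OS_TENANT_NAME"
  [ -z "$OS_USERNAME" ]    && echo "OS_USERNAME"
  [ -z "$OS_PASSWORD" ]    && echo "OS_PASSWORD"
  return 0
}

# Warn before calling nova or glance if openrc has not been sourced.
if [ -n "$(missing_openstack_vars)" ]; then
  echo "Warning: openrc has not been sourced completely." >&2
fi
```

Run source openrc, then the check again; it should print nothing once all four variables are set.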

9. Creating an instance in the cloud OpenStack administration is documented in detail in the OpenStack Compute Administration Manual. In this section, we discuss key tasks you should perform that will allow you to launch instances. Refer to the official OpenStack documentation for more information. For these tasks, you must be logged in to the Dashboard as the admin user. These tasks can also be performed on the command line; some tasks require you to be logged into the controller via SSH, and some can be performed via python-novaclient on the controller or on a workstation. You should also be familiar with the material documented in "Accessing the Cloud". Note Nova volumes are not supported in Rackspace Private Cloud. For block storage, refer to the instructions for configuring OpenStack Block Storage. 9.1. Image management For more information about downloading and creating additional images, refer to the OpenStack Virtual Machine Image Guide. Images can be added via the Horizon dashboard when logged in as the admin user. Currently only images accessible via HTTP URL are supported, and the URL must be a valid and direct URL. Compressed image binaries in .zip and .tar.gz format are supported. Procedure 9.1. To add images in the Dashboard 1. Ensure that the Admin tab in the navigation panel is in view, and select Images. 2. Click Create New Image. 3. Enter the URL path to the image. 4. Select the image format from the drop-down menu. 5. Enter the minimum disk size in GB and the minimum RAM in MB. 6. Select the Public option to make the image available to all users. 7. Click Create Image. The new image will appear in the Images table. Adding an image with the command line You can use glance image-create when logged into the controller node, or if you have the Glance client installed on your local workstation and have configured your environment with administrative user access to the controller.
In the following example, the user has a virtual disk image in qcow2 format stored on the local file system at /tmp/images/test-image.img. When the image is imported, it will be named "Test Image" and will be public to any Glance user with access to the controller. 51

$ glance image-create --name "Test Image" --is-public true \ --container-format=bare --disk-format qcow2 < /tmp/images/test-image.img If the image is successfully added, Glance will return a confirmation similar to the following: Added new image with ID: 85a0a926-d3e5-4a22-a062-f9c78ed7a2c0 More information is available via the command glance help image-create. 9.1.1. Uploading AMI images AMI images can be added with the glance image-create command as described above. However, since AMI disk images do not include kernels, you will first need to upload a kernel to Glance, as in the following example. $ glance image-create name="<kernelimagename>" is_public=true \ --container_format=aki --disk_format=aki < <imagepath> After the kernel has been uploaded, you can then upload the AMI image, specifying the Glance ID of the uploaded kernel with the kernel_id property. $ glance image-create name="<imagename>" is_public=true \ --container_format=ami --disk_format=ami \ --property kernel_id=<kernelimageglanceid> < <imagepath> 9.1.2. Converting VMDK Linux images Rackspace has developed a Python-based tool that enables you to convert and upload single-disk Linux VMDK images. Multi-disk and Windows image conversions are not available at this time. This tool can be used on any workstation that supports Python. Before you begin, ensure that you have installed libguestfs, python-libguestfs, and python-hivex on your workstation. Procedure 9.2. To use the VMDK conversion tool 1. Clone the conversion script from https://github.com/rcbops/support-tools/tree/master/vmdk-conversion. 2. Use the source command to load your Nova environment variables into your shell on the workstation. 3. To ensure that your environment is configured correctly, run the glance index command. If the command executes successfully and returns a list of available images, you are ready to proceed. 4. You can now run convert.py. You can use the following options:

-i, --input <path>: Path to the VMDK image that you want to convert. This option is required. -o, --output <path>: The name of the qcow2 file that will be generated by the conversion process. -n, --name <name>: The name that the image will be assigned in Glance, if you are using the -u/--upload option. -u, --upload: Enables automatic uploading of the converted image to Glance. -s, --sysprep: Reserved for future use. -d, --debug <1-5>: Debug level. Level 5 is verbose. When the conversion process is complete, you can upload the image to Glance (if you did not already enable automatic image uploading). 9.2. Network management By default, your cluster is installed with nova-network. If you want to use OpenStack Networking, you must add a network node and enable OpenStack Networking as described in Adding OpenStack Networking. Currently, the Horizon dashboard does not permit robust network management. When you need to create a network or subnet, you should use the neutron commands. Refer to the procedures under Adding OpenStack Networking and to the OpenStack Networking Administration Guide for more information. 9.3. Create a project You must create a project before you can launch an instance. A demo project is available by default, but if you want to create your own project, follow this procedure. Procedure 9.3. To create a project in the Dashboard 1. Ensure that the Admin tab in the navigation panel is in view, and select Projects. 2. Click Create Project. 3. On the Project Info tab on the Create Project dialog, enter the domain ID and name, project name, and a brief description, and ensure that the Enabled option is selected. 4. On the Project Members tab, add users to the project to grant them access to the project. Click the user name in the All Users column to add them to the Project Members column. 
Typically, when configuring your first project, these will be the admin user and the demo user that you created during the installation process (not to be confused with the operating system user). When prompted for a role for each user, you may wish to assign the admin role to the admin user and the member role to the demo user. Refer to the OpenStack Keystone documentation for information about customizing roles.

5. On the Project Groups tab, you can add groups to the project to grant them access to it. By default, no groups are configured, but you can configure them on the Groups page.

6. You may also need to modify quotas, which set limits on the number of VCPUs that the project can contain, the number of instances that can be created, and more. On the Quotas tab, modify the quotas as needed and click Update Quota to save your changes.

7. The new project appears in the Projects table.

Your project is now ready for additional configuration. Log out as the administrator and log in as the demo user before proceeding. When logged in, ensure that the project is selected in the navigation bar.

Adding a project with the command line

On the command line, projects are managed while logged in as root with keystone tenant-create. For example, to create a project named Marketing, you would use sudo -i to switch to root and execute the following command:

$ keystone tenant-create --name Marketing --enabled true

9.4. Generate an SSH keypair

Keypairs provide secure authentication to an instance: they enable you to create instances securely and to log in to them afterward. Keypairs are generated separately for each project and assigned to instances at creation time. You can create as many keypairs in a project as you like.

Procedure 9.4. To generate an SSH keypair in the Dashboard

1. With your project selected in the navigation panel, select Access and Security.

2. On the Keypairs tab, click Create Keypair.

3. In the Create Keypair dialog, enter a name for the keypair.

4. You will be prompted to save the keypair.pem file.

Generating a keypair with the command line

On the command line, keypairs are managed with the nova keypair-* commands in python-novaclient.
When generating a keypair, you must have your OS_USERNAME and OS_TENANT_NAME configured in your environment to ensure that you have access to the correct project. Our user jdoe, after configuring their environment, would then issue the following command to generate a keypair:

$ nova keypair-add jdoe-keypair

The client generates a block of RSA private key text, which the user copies and saves to a file called jdoe-keypair.pem.

9.5. Update the default security group

A security group is a named set of rules that are applied to incoming packets for instances. Packets that match the parameters of the rules are given access to the instance; all other packets are blocked. At minimum, you should ensure that the default security group permits ping and SSH access. You may edit the default security group or add security groups as your security requirements dictate.

Procedure 9.5. To update the default security group in the Dashboard

1. With your project selected in the navigation panel, open the Access & Security page.

2. In the Security Groups table, click Edit Rules in the default security group row.

3. In the Edit Security Group Rules dialog box, enable SSH access by entering the following values:

   IP Protocol: TCP
   From Port: 22
   To Port: 22
   Source Group: CIDR
   CIDR: leave this as 0.0.0.0/0 to enable access from all networks, or enter a specific network, such as 192.0.2.0/24.

4. Click Add Rule. You will receive a confirmation message at the top of the Dashboard window that the new rule was added to the default security group.

To enable ping, repeat the procedure with a protocol of ICMP, a type of -1, and a code of -1.

Managing nova-network security groups with the command line

On the command line, security groups are managed with the nova secgroup-* commands in python-novaclient.
To add the ping and SSH rules to the default security group, issue the following commands:

$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

Use nova secgroup-list-rules to view the updated default security group rules:

$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Managing OpenStack Networking (Neutron) security groups with the command line

In OpenStack Networking, security groups are managed with the neutron security-group-* commands in the neutron client. To add the ping and SSH rules to the default security group, run the following commands:

$ neutron security-group-rule-create --protocol icmp --direction \
  ingress default
$ neutron security-group-rule-create --protocol tcp --port-range-min 22 \
  --port-range-max 22 --direction ingress default

Use neutron security-group-rule-list to view the updated default security group rules. Note that you may need to target the default security group by its UUID. You can use neutron security-group-list to view all security groups. To view the security groups associated with a particular tenant, run the following command:

$ neutron security-group-list --tenant-id <tenantid> -c id -c name

9.6. Create an instance

Before you can create an instance, you must have already generated a keypair and updated the default security group. The project in which you want to create the instance should be in focus on the Dashboard.

Procedure 9.6. To create an instance in the Dashboard

1. With your project selected in the navigation panel, open the Images & Snapshots page.

2. In the Images table, locate the image from which you want to create the instance and click Launch. For example, to create an Ubuntu 12.04 instance, select a precise image.

3. On the Details tab of the Launch Instances dialog, enter the following information:

   Availability Zone: The availability zone in which the instance will be created. If no availability zones have been defined, none will be listed.

   Instance Name: The name of the instance. You might choose a name like myinstance.

   Image: The image that the instance will be based on. This option is labeled Snapshot when Snapshot is selected as the Instance Boot Source.

   Flavor: The VCPU configuration.
   Note that instances with larger flavors can take a long time to create. If you are creating an instance for the first time and want something small to test with, select m1.small.

   Instance Count: Accept the default value of 1. To create multiple instances with this configuration, you could enter an integer up to the number permitted by your quota, which is 10 by default.

   Instance Boot Source: Specify whether the instance will be based on an image or a snapshot. Your first instance will not have any snapshots available yet.

4. On the Access and Security tab, enter the following information:

   Keypair: Select the keypair that you created earlier. You must assign a keypair when launching an Ubuntu instance.

   Admin Pass: Enter and confirm the password for the admin user on the instance.

   Accept the default security group.

5. On the Networking tab (if available), select the network on which you want the instance to reside. This tab is available only if you have OpenStack Networking enabled.

6. On the Volume Options tab (if available), you can choose to launch the instance with a storage volume attached. Select this only when you have a Block Storage volume created; for your first instance, select Don't boot from a volume.

7. On the Post-Creation tab, you can add customization scripts. Some instances support user data, such as root passwords or admin users. If you have the information available, you may enter it here.

8. Click Launch. The Instances and Volumes page opens with the new instance creation in process. The process should take less than a minute to complete, after which the instance status is listed as Active. You may need to refresh the page.

Launching an instance with the command line

On the command line, instance creation is managed with the nova boot command. Before you can launch an instance, you need to determine what images and flavors are available to create a new instance.
$ nova image-list
+--------------------------+----------------------------+--------+--------+
| ID                       | Name                       | Status | Server |
+--------------------------+----------------------------+--------+--------+
| 033c0027-[ID truncated]  | cirros-image               | ACTIVE |        |
| 0ccfc8c4-[ID truncated]  | My Image 2                 | ACTIVE |        |
| 85a0a926-[ID truncated]  | precise-image              | ACTIVE |        |
+--------------------------+----------------------------+--------+--------+

$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         |
| 2  | m1.small  | 2048      | 10   | 20        |      | 1     | 1.0         |
| 3  | m1.medium | 4096      | 10   | 40        |      | 2     | 1.0         |
| 4  | m1.large  | 8192      | 10   | 80        |      | 4     | 1.0         |
| 5  | m1.xlarge | 16384     | 10   | 160       |      | 8     | 1.0         |
+----+-----------+-----------+------+-----------+------+-------+-------------+

In the following example, an instance is launched from the precise-image image. It uses the m1.small flavor, which has an ID of 2, and is named markets-test.

$ nova boot --image precise-image --flavor="2" markets-test
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                               |
| OS-EXT-SRV-ATTR:host                | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-0000000d                    |
| OS-EXT-STS:power_state              | 0                                    |
| OS-EXT-STS:task_state               | scheduling                           |
| OS-EXT-STS:vm_state                 | building                             |
| accessIPv4                          |                                      |
| accessIPv6                          |                                      |
| adminPass                           | ATSEfRY9fZPx                         |
| config_drive                        |                                      |
| created                             | 2012-08-02T15:43:46Z                 |
| flavor                              | m1.small                             |
| hostId                              |                                      |
| id                                  | 5bf46a3b-084c-4ce1-b06f-e460e875075b |
| image                               | precise-image                        |
| key_name                            |                                      |
| metadata                            | {}                                   |
| name                                | markets-test                         |
| progress                            | 0                                    |
| status                              | BUILD                                |
| tenant_id                           | b4769145977045e2a9279c842b09be6a     |
| updated                             | 2012-08-02T15:43:46Z                 |
| user_id                             | 5f2f2c28bdc844f9845251290b524e80     |
+-------------------------------------+--------------------------------------+

You can also view the newly created instance at the command line with nova list.

$ nova list
+------------------+--------------+--------+-------------------+
| ID               | Name         | Status | Networks          |
+------------------+--------------+--------+-------------------+
| [ID truncated]   | markets-test | ACTIVE | public=192.0.2.0  |
+------------------+--------------+--------+-------------------+

Note

Booting a cirros instance with nova boot --config-drive will bypass the metadata route requirement. This enables you to use a cirros instance without special network configuration.

9.6.1. File injection best practice

When file injection is needed, Rackspace recommends enabling config-drive so that cloud-init can copy files to an instance. When performing this manually, set the --config-drive attribute to true in the nova boot command, as in the following example.

$ nova boot --config-drive=true --file /root/openrc=/root/openrc \
  --flavor 1 --image cirros-image <instancename>

You can also set an override attribute in your environment that enforces the use of config-drive at all times.

"nova": {
    "config": {
        "force_config_drive": true
    }
}

9.7. Accessing the instance

By default, all instances exist on a nova network that is not accessible by other hosts. There are several ways to access an instance. In all cases, be sure that you have updated the default security group.

If you added a DMZ range during installation, you can access the instance via SSH from other hosts within the DMZ.

Log in through the VNC console on the Dashboard. On the Instances & Volumes page, select VNC from the drop-down menu in the Instances table. If the console does not respond to keyboard input, click the grey bar at the top of the console window. For best results, you should be running the Dashboard in a Firefox browser with Flash installed.

Connect by SSH to the address that you assigned to the compute node, and then connect to the instance by SSH while logged in to the compute node. Refer to "Accessing the Instance by SSH on the Compute Node".

Assign a floating IP address to the instance and connect to that IP address by SSH. Refer to "Managing Floating IP Addresses".

9.7.1. Logging in to the instance

The login for each instance is determined by the configuration of the image from which it was created. Rackspace Private Cloud installations include a CirrOS image and an Ubuntu 12.04 (Precise) image.

CirrOS: Log in with the username cirros and the password cubswin:).

Ubuntu 12.04 Precise: Log in with the user ubuntu and the SSH key that you specified for the instance during the instance creation process. The key must be present on the host from which you are connecting to the instance, and you must log in with the key name and the -i flag. In the following example, the keypair file is named jdoe-keypair.pem.

$ ssh -i jdoe-keypair.pem ubuntu@192.0.2.0
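Before the ssh command above will accept the keypair file, its permissions usually need tightening: ssh refuses private keys that are readable by other users. A quick sketch, reusing the jdoe-keypair.pem name from the example above (the key text written here is a stand-in for the real block printed by nova keypair-add, and the final ssh line is shown for context only):

```shell
# Save the private key text from `nova keypair-add` into the .pem file,
# then restrict it to the owner; ssh rejects group/world-readable keys.
printf '%s\n' "-----BEGIN RSA PRIVATE KEY-----" > jdoe-keypair.pem
chmod 600 jdoe-keypair.pem
stat -c '%a' jdoe-keypair.pem   # prints 600
# ssh -i jdoe-keypair.pem ubuntu@192.0.2.0
```

Without the chmod step, ssh exits with an "UNPROTECTED PRIVATE KEY FILE" warning and ignores the key.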

For instances launched from other images, log in with the credentials defined in the image.

9.7.2. Accessing the instance by SSH

The following procedure is for instances created in an OpenStack Networking environment.

Procedure 9.7. To access the instance by SSH in OpenStack Networking

1. On the controller node, identify the namespace that the DHCP or L3 agent created for the network that the instance is attached to. This command can assist:

$ ip netns list

2. Verify that you can ping the instance from the namespace.

$ ip netns exec <namespace> ping <instanceipaddress>
PING 198.51.100.0 (198.51.100.0) 56(84) bytes of data.
64 bytes from 198.51.100.0: icmp_req=1 ttl=64 time=0.394 ms
64 bytes from 198.51.100.0: icmp_req=2 ttl=64 time=0.266 ms
64 bytes from 198.51.100.0: icmp_req=3 ttl=64 time=0.285 ms

3. Within the namespace, connect to the instance via SSH. You may be prompted for the password.

$ ip netns exec <namespace> ssh <username>@<instanceipaddress>

If the login requires an SSH key, log in with the key name and the -i flag.

$ ip netns exec <namespace> ssh -i <keypairname>.pem <instanceipaddress>

In a nova-network environment, these steps are performed on the compute node that the instance is running on, or on any node with access to the instance's network.

Procedure 9.8. To access the instance by SSH in nova-network

1. If you have one compute node, go on to Step 2. If you have more than one compute node, log in to the controller node, use sudo -i to switch to root, and execute the following command to identify the compute node on which the instance is hosted:

$ nova-manage vm list | grep <instancename>

The output includes the following information, where N is the number of the compute node. Compute nodes are numbered in the order in which you added them.

<instancename> compute<n> m1.small active 2012-08-13 00:42:53

2. Connect to the compute node via SSH.

3. Verify that you can ping the instance.

$ ping <instanceipaddress>
PING 198.51.100.0 (198.51.100.0) 56(84) bytes of data.
64 bytes from 198.51.100.0: icmp_req=1 ttl=64 time=0.394 ms
64 bytes from 198.51.100.0: icmp_req=2 ttl=64 time=0.266 ms
64 bytes from 198.51.100.0: icmp_req=3 ttl=64 time=0.285 ms

4. Connect to the instance.

$ ssh <username>@<instanceipaddress>

If the login requires an SSH key, log in with the key name and the -i flag. You may need to copy the *.pem keypair file associated with the instance to the compute node.

$ ssh -i <keypairname>.pem <instanceipaddress>

9.7.3. Managing floating IP addresses

If you are using nova-network instead of OpenStack Networking, you will need to use floating IP addresses. Before you can assign a floating IP address to an instance, you must have a pool of addresses to choose from. Your network security team must provision an address range and assign it to your environment. These addresses need to be publicly accessible.

Note

If your cloud is hosted in a Rackspace data center and you require more floating IP addresses, contact your Rackspace support representative for assistance.

Follow this procedure to create a pool of floating IP addresses, allocate an address to a project, and assign it to an instance.

Procedure 9.9. To manage floating IP addresses in nova-network

1. Log in to the controller node and use sudo -i to switch to root. Execute the following command, substituting for --ip_range the CIDR of the address range that was provisioned by your network security team:

$ nova-manage floating create --ip_range=<cidr>

This creates the pool of floating IP addresses, which will be available to all projects on the host. You can now allocate a floating IP address and assign it to an instance in the Dashboard.

2. Open the Access & Security page.

3. Click Allocate IP to Project above the Floating IPs table.

4. In the Allocate Floating IP dialog box, accept the default (typically Floating) in the Pool drop-down menu and click Allocate IP. You will receive a confirmation message that a floating IP address has been allocated to the project, and the address will appear in the Floating IPs table. This reserves the address for the project but does not immediately associate it with an instance.

5. In the row for the IP address, click Associate IP.

6. In the Manage Floating IP Associations dialog, ensure that the allocated IP address is selected, and select the instance from the Instance menu. Click Associate. You will receive a confirmation message that the IP has been associated with the instance.

The instance ID will now appear in the Floating IPs table, associated with the IP address. It may be a few minutes before the IP address is included in the Instances table on the Instances & Volumes page.

Once the IP address assignment is complete, you can access the instance from any Internet-enabled host by using SSH to the newly assigned floating IP. See "Logging in to the instance" for more information.

Managing floating IP addresses with the command line

Allocation and assignment of floating IP addresses are managed with the nova floating-ip-* commands. In this example, an IP address is first allocated to the Marketing project with the nova floating-ip-create command.

$ nova floating-ip-create marketing

The floating IP address has been reserved for the Marketing project and can now be associated with an instance with the nova add-floating-ip command. For this example, we'll associate the address with the instance markets-test.

$ nova add-floating-ip markets-test 203.0.113.0

After the command completes, you can confirm that the IP address has been associated with the nova floating-ip-list and nova list commands.

$ nova floating-ip-list
+-------------+--------------------------------------+-----------+------+
| Ip          | Instance Id                          | Fixed Ip  | Pool |
+-------------+--------------------------------------+-----------+------+
| 203.0.113.0 | 542235df-8ba4-4d08-90c9-b79f5a77c04f | 192.0.2.0 | nova |
+-------------+--------------------------------------+-----------+------+

$ nova list
+------------------+--------------+--------+---------------------------------+
| ID               | Name         | Status | Networks                        |
+------------------+--------------+--------+---------------------------------+
| [ID truncated]   | markets-test | ACTIVE | public=[<networkipaddresses>]   |
+------------------+--------------+--------+---------------------------------+

The first table shows that 203.0.113.0 is now associated with the markets-test instance ID, and the second table shows the IP address included among markets-test's public IP addresses.

9.8. What's next?

Congratulations! You have created a project and launched your first instance in your Rackspace Private Cloud cluster. You can now use your OpenStack environment for any purpose you like.

If you're a more advanced user and are comfortable with APIs, OpenStack API documentation is available in the OpenStack API Documentation library. The following documents are a good place to start:

OpenStack API Quick Start
Programming OpenStack Compute API
OpenStack Compute Developer

You may want to purchase Escalation Support or Core Support for your cloud, or take advantage of our training offerings. Contact us at <opencloudinfo@rackspace.com> for more information.

And please come join your fellow Rackspace Private Cloud users on our customer forums: https://community.rackspace.com/products/f/45

Welcome aboard!

10. OpenStack Image Storage

The Glance cookbook used for Rackspace Private Cloud supports OpenStack Image storage in the local file system, in OpenStack Object Storage (Swift), and in Rackspace Cloud Files.

Note

If you change the image storage location from Swift to Cloud Files (or vice versa), you must manually export and import the images.

10.1. Local File Storage

By default, OpenStack Image stores image files locally on the controller node, and as long as you are using local file storage, you do not have to make any changes to your configuration. In the event that you need to switch back to the local file system from a different storage method, follow these steps.

Procedure 10.1. To configure local image storage

1. Log in to the controller node and use sudo -i to switch to root access.

2. Define your text editor:

$ export EDITOR=vi

3. Use knife to open the environment file for editing.

$ knife environment edit <environmentfile>

4. Add the following attributes to the environment.

"glance": {
    "api": {
        "default_store": "file"
    },
    "images": [
        "cirros"
    ],
    "image_upload": true
}

5. Run chef-client to commit the change.

$ chef-client
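After chef-client converges, you can sanity-check that the override from step 4 is present in the environment JSON. A minimal sketch: the knife line is commented out because it needs a live Chef server, so a hand-written local copy of the environment fragment stands in for its output, and env.json is an invented file name:

```shell
# With a real Chef server:
#   knife environment show <environmentfile> -F json > env.json
# Simulated environment fragment for illustration:
cat > env.json <<'EOF'
{"override_attributes": {"glance": {"api": {"default_store": "file"}}}}
EOF
# The attribute should read "file" after the edit in step 4:
grep -o '"default_store": "file"' env.json
```

If the grep prints nothing, the attribute did not land and chef-client will not reconfigure the Glance API store.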

10.2. Rackspace Cloud Files

To use Rackspace Cloud Files for image storage, you must have an account. To sign up, visit the Rackspace Cloud Files web site.

Procedure 10.2. To configure Rackspace Cloud Files image storage

1. Log in to the controller node and use sudo -i to switch to root access.

2. Use the following command to obtain your Cloud Files tenant ID.

$ curl -s -X POST https://identity.api.rackspacecloud.com/v2.0/tokens \
  -H "Content-type: application/json" \
  -d '{"auth": {"passwordCredentials": {"username": "<cloudfilesusername>", "password": "<cloudfilespassword>"}}}' \
  | python -mjson.tool | grep "tenantId.*Mosso" | head -1

The output of this command will display on the screen. Copy and save the tenant ID.

3. Define your text editor:

$ export EDITOR=vi

4. Use knife to open the environment file for editing.

$ knife environment edit <environmentfile>

5. Add the following attributes to the environment, using the tenant ID that you obtained in Step 2 and your Cloud Files username and password.

"glance": {
    "api": {
        "default_store": "swift",
        "swift_store_user": "<cloudfilestenantid>:<cloudfilesusername>",
        "swift_store_key": "<cloudfilespassword>",
        "swift_store_auth_version": "2",
        "swift_store_auth_address": "https://identity.api.rackspacecloud.com/v2.0"
    },
    "images": [
        "cirros"
    ],
    "image_upload": true
}

6. Run chef-client to commit the change.

$ chef-client
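If you want to check the filter chain from step 2 before pointing it at the live identity endpoint, you can run the tail of the pipeline against a canned response. A sketch: the JSON fragment below is invented for illustration (a real token response nests tenantId values inside the serviceCatalog endpoints), and python3 stands in for the python interpreter shown in the procedure:

```shell
# Minimal fake of an identity token response:
cat > tokens.json <<'EOF'
{"access": {"serviceCatalog": [{"endpoints": [{"tenantId": "MossoCloudFS_abc123"}]}]}}
EOF
# Same json.tool / grep / head chain as the documented command,
# applied to the canned file instead of the curl output:
python3 -m json.tool < tokens.json | grep "tenantId.*Mosso" | head -1
```

The line that prints is the Cloud Files tenant ID you copy into swift_store_user in step 5.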

10.3. Swift storage

To use Swift storage, you must have a Swift cluster configured in your environment. Refer to Rackspace Private Cloud OpenStack Object Storage Installation for more information about the process of creating and configuring a Swift cluster.

Procedure 10.3. To configure Swift image storage

1. Log in to the controller node and use sudo -i to switch to root access.

2. Define your text editor:

$ export EDITOR=vi

3. Use knife to open the environment file for editing.

$ knife environment edit <environmentfile>

4. Add the following attributes to the environment.

"glance": {
    "api": {
        "default_store": "swift"
    },
    "images": [
        "cirros"
    ],
    "image_upload": true
}

5. Run chef-client to commit the change.

$ chef-client
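The note at the start of this chapter says that images must be exported and re-imported by hand when the backing store changes. A hedged sketch of that round trip: the glance commands are commented out because they need a live cloud and <imageid> is a placeholder, so a dummy file stands in for the exported image, and a plain cp stands in for the download/upload pair. Comparing checksums before and after guards against a truncated copy:

```shell
# With a live cloud (python-glanceclient v1 syntax):
#   glance image-download --file /tmp/exported.img <imageid>      # pull from old store
#   glance image-create --name "precise-image" --disk-format qcow2 \
#       --container-format bare < /tmp/exported.img               # push to new store
echo "fake image bits" > /tmp/exported.img
sum_before=$(md5sum /tmp/exported.img | awk '{print $1}')
cp /tmp/exported.img /tmp/reimported.img   # stands in for the round trip
sum_after=$(md5sum /tmp/reimported.img | awk '{print $1}')
[ "$sum_before" = "$sum_after" ] && echo "checksums match"
```

Glance records a checksum for each registered image, so the same comparison can be made against the checksum column of glance image-list after the re-import.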

11. Glossary of terms

Cinder
    Project name for OpenStack Block Storage, which supersedes nova-volume.

Compute
    OpenStack Compute is a compute service that provides server capacity in the cloud. Compute servers come in different flavors of memory, disk space, and CPU, and can be provisioned in minutes. Interactions with Compute servers can occur programmatically via the OpenStack Compute API or through the Dashboard.

Flavor
    A flavor is an available hardware configuration for a server. Each flavor has a unique combination of disk space, memory capacity, and priority for CPU time.

Floating IP address
    A floating IP address is an IP address (typically public) that can be dynamically assigned to an instance. This address enables network address translation (NAT) and allows an instance to be accessed from outside the nova fixed network.

Glance
    Project name for the Image Service software, which is the main image repository piece of OpenStack. It is the place where you upload your images, as well as the place from which they are consumed by the rest of the OpenStack system.

Image
    Images are your templates for creating new virtual machines. The OpenStack project that stores the available images is called Glance.

Keypairs
    These are simple SSH keys and are your credentials for accessing any running instances. Keypairs are added and managed using the Keypairs section of the user dashboard.

Keystone
    Project name for the Identity service software, which offers an integrated identity management system for OpenStack. It initially uses token-based authentication, but will eventually support plug-in modules for identity storage (LDAP, DB, file, PAM, Active Directory, and so on) and protocols (SAML, OAuth, OpenID, and so on).

MySQL
    Datastore that stores build-time and run-time state for a cloud infrastructure.

Neutron
    New project name (as of June 2013) for the Network service, which provides a network connectivity abstraction layer to OpenStack Compute.

Nova
    Project name for the Compute service that provisions and manages large networks of virtual machines, creating a redundant and scalable cloud computing platform.

Quantum
    Former project name for the Network service, which provides a network connectivity abstraction layer to OpenStack Compute.

RabbitMQ
    Provides robust messaging for applications. It is completely open source and based on open standard protocols.

Security Groups
    Security groups at this time exist mostly as tags for the servers and can be consumed via the metadata API with a simple curl command. Security groups can be specified as part of the "personality" of an instance.

Server
    A server is a virtual machine instance in the compute system. Flavor and image are requisite elements when creating a server.

Swift
    Project name for the Object Storage service software, which provides consistent and redundant storage and retrieval of fixed digital content.