
Standalone Object Storage: Rackspace Private Cloud Powered By OpenStack

RPCO v11 (2015-12-09)
Copyright 2015 Rackspace. All rights reserved.

This documentation is intended for Rackspace customers who want to deploy OpenStack Object Storage (swift) as a standalone application.

Table of Contents
1. Overview
   1.1. System requirements for standalone Object Storage
   1.2. Installation workflow
2. Example of Object Storage installation architecture
3. Prerequisites
4. Configure and mount storage devices
5. Networking for standalone Object Storage
6. Configure a standalone Object Storage deployment
7. Standalone Object Storage playbooks
   7.1. Running Object Storage playbooks
8. Verify the installation
9. Object Storage monitoring
   9.1. Service and response
   9.2. Object Storage monitors
A. Object Storage configuration files
   A.1. swift.yml example configuration file
   A.2. user_variables.yml configuration file
10. Document history and additional information
   10.1. Document change history
   10.2. Additional resources

List of Figures
1.1. Installation workflow
2.1. Object Storage architecture

List of Tables
4.1. Mounted devices

1. Overview

Object Storage (swift) is a multi-tenant object storage system. It is highly scalable, can manage large amounts of unstructured data, and provides a RESTful HTTP API.

Object Storage includes the following components:

Proxy servers (swift-proxy-server)
    Accepts OpenStack Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. It also serves file or container listings to web browsers. The proxy server takes each request, looks up locations for the account, container, or object, and routes the requests correctly. The proxy server also handles API requests. To improve performance, the proxy server can use an optional cache that is usually deployed with memcache.

Account servers (swift-account-server)
    Manages accounts defined with Object Storage.

    Note: Accounts are the root storage location for data.

Container servers (swift-container-server)
    Manages the mapping of containers, or folders, within Object Storage.

    Note: Containers are user-defined segments of the account namespace that provide the storage location where objects are found.

Object servers (swift-object-server)
    Manages actual objects, such as files, on the storage nodes.

    Note: Objects are the actual data stored in Object Storage (swift).

Ring
    A set of hash tables that associates each object with a specific physical device. There is one ring per type of data manipulated by Object Storage (objects, containers, and accounts). The set of rings is shared among every Object Storage node (storage and proxy). Each ring determines the physical devices (hard disks) where each object, container, and account will be stored. The number of devices on which an object is stored depends on the number of replicas (copies) specified for the Object Storage cluster.

Various periodic processes
    Perform housekeeping tasks on the large data store. The replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.

WSGI middleware
    Handles authentication (usually OpenStack Identity).

Object Storage integrates with the Compute layer and can be used as a back end for the Image service (glance), providing a horizontally scalable store for all image and snapshot data. Object Storage focuses on storing non-transactional data.

Object Storage data (the accounts, containers, and objects) are resources that are stored on physical hardware. Object Storage proxy containers are installed on specific infrastructure nodes. The other Object Storage nodes are storage nodes and can have any combination (at a device or drive level) of object, container, or account servers running on them. For example, given two servers with five drives each, each drive can be part of the ring for any combination of object, account, or container (including each having different storage policies within the object server).

Object Storage uses the following nodes:

- Three existing infrastructure nodes to run the swift-proxy-server processes. The proxy servers proxy requests to the appropriate storage nodes.
- Multiple storage nodes that run the swift-account-server, swift-container-server, and swift-object-server processes, which control storage of the account databases, the container databases, and the actual stored objects.

The following Object Storage features are not supported in RPCO Object Storage:

- Ring management and ring storage repositories
- Drive detection, formatting, mounting, and unmounting

1.1. System requirements for standalone Object Storage

Hardware
    Object Storage is designed to run on commodity hardware. Rackspace recommends that each node in the cluster meet the following minimum specifications:

    - 1 core per 3 TB of capacity
    - At least 6 SAS drives of at least 1 TB capacity each
    - At least 2 GB RAM, plus an additional 250 MB RAM per TB of drive capacity

    The amount of disk space depends on how much can fit into the rack efficiently. At Rackspace, storage servers are generic 4U servers with 24 2 TB SATA drives and 8 cores of processing power. (The RAM rule is scripted in the short sketch at the end of this chapter.)

    RAID on the storage drives is not required and is not recommended. Object Storage's disk usage pattern is unsuitable for RAID.

Operating system
    RPCO Object Storage runs on Ubuntu.

Software
    RPCO Object Storage requires the following components of Rackspace Private Cloud v11 (Kilo):

    - Identity (keystone)
    - Infrastructure services (SQLite database, memcached, and RabbitMQ)

Networking
    1 Gbps or 10 Gbps is suggested internally. For RPCO Object Storage, an external network should connect anything external to the proxy servers. The storage network is intended to be isolated on a private network or multiple private networks.

Database
    A SQLite database is part of the Rackspace Private Cloud Object Storage container and account management process.

Permissions
    RPCO Object Storage can be installed with either root permissions or as a user with sudo permissions if the sudoers file is configured to enable all the permissions.

1.2. Installation workflow

The following diagram shows the general workflow for a standalone Object Storage installation.

Figure 1.1. Installation workflow
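The RAM sizing rule from section 1.1 is easy to apply with shell arithmetic. A minimal sketch, assuming the 4U example above (24 drives of 2 TB each):

# 2 GB base plus 250 MB per TB of drive capacity
$ drive_tb=$((24 * 2))   # total capacity in TB
$ echo "$((2048 + 250 * drive_tb)) MB minimum RAM"   # prints 14048 MB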

2. Example of Object Storage installation architecture

This section provides an example Object Storage installation architecture. Object Storage uses the following constructs:

Node
    A host machine that runs one or more OpenStack Object Storage services.

Proxy node
    Runs proxy services.

Storage node
    Runs account, container, and object services. Contains the SQLite databases.

Ring
    A set of mappings between OpenStack Object Storage data and physical devices.

Replica
    A copy of an object. By default, three copies are maintained in the cluster.

Zone
    A logically separate section of the cluster. Related to independent failure characteristics.

Region (optional)
    A logically separate section of the cluster representing distinct physical locations, such as cities or countries. Similar to zones, but regions represent the physical locations of portions of the cluster.

To increase reliability and performance, additional proxy servers can be added.

The ring guarantees that every replica is stored in a separate zone. This is applicable only if the number of zones is greater than or equal to the replica count. A zone is a group of nodes that are as isolated as possible from other nodes (separate servers, network, power, even geography).

The following diagram shows one possible configuration for a minimal installation. In this example, each storage node is a separate zone in the ring. At a minimum, five zones are recommended.

Figure 2.1. Object Storage architecture
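Although RPCO builds and manages the rings through its playbooks (manual ring management is not supported, as noted in chapter 1), the relationship between partition power, replicas, zones, and weights can be illustrated with upstream swift's swift-ring-builder tool. The builder file name, IP addresses, ports, and device names below are illustrative assumptions, not values from this deployment.

# Create an object ring: 2^10 partitions, 3 replicas, and a minimum
# of 1 hour between moves of the same partition
$ swift-ring-builder object.builder create 10 3 1

# Add one weighted device per zone (region 1, zones 1-3)
$ swift-ring-builder object.builder add r1z1-203.0.113.11:6000/sdc 100
$ swift-ring-builder object.builder add r1z2-203.0.113.12:6000/sdc 100
$ swift-ring-builder object.builder add r1z3-203.0.113.13:6000/sdc 100

# Assign partitions to devices and write the ring file
$ swift-ring-builder object.builder rebalance

With three zones and three replicas, the ring can keep every replica in a separate zone, which is the guarantee described above.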

3. Prerequisites

Enable the trusty-backports repository. The trusty-backports repository is required to install Object Storage. Add repository details in /etc/apt/sources.list, and update the package list:

$ cd /opt/openstack-ansible/rpc_deployment
$ ansible hosts -m shell -a "sed -r -i 's/^# *(deb.*trusty-backports.*)$/\1/' /etc/apt/sources.list; apt-get update"
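To confirm that the repository is active on every host before continuing, a quick check (assuming the same Ansible inventory as above) is:

# Each host should now list an uncommented trusty-backports line
$ ansible hosts -m shell -a "grep -E '^deb.*trusty-backports' /etc/apt/sources.list"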

4. Configure and mount storage devices

This section offers a set of prerequisite instructions for setting up Object Storage storage devices. The storage devices must be set up before installing Object Storage.

Procedure 4.1. Configuring and mounting storage devices

RPCO Object Storage requires a minimum of three Object Storage devices with mounted storage drives. The example commands in this procedure assume that the storage devices for Object Storage are devices sdc through sdg.

1. Determine the storage devices on the node to be used for Object Storage.

2. Format each device on the node used for storage with XFS. While formatting the devices, add a unique label for each device.

   Note: Without labels, a failed drive can cause mount points to shift and data to become inaccessible.

   For example, create the file systems on the devices using the mkfs command:

   $ apt-get install xfsprogs
   $ mkfs.xfs -f -i size=1024 -L sdc /dev/sdc
   $ mkfs.xfs -f -i size=1024 -L sdd /dev/sdd
   $ mkfs.xfs -f -i size=1024 -L sde /dev/sde
   $ mkfs.xfs -f -i size=1024 -L sdf /dev/sdf
   $ mkfs.xfs -f -i size=1024 -L sdg /dev/sdg

3. Add the mount locations to the /etc/fstab file so that the storage devices are remounted on boot. The following example mount options are recommended when using XFS. Finish all modifications to the /etc/fstab file before mounting the new filesystems created within the storage devices.

   LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
   LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
   LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
   LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0
   LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0

4. Create the mount points for the devices using the mkdir command.

   $ mkdir -p /srv/node/sdc
   $ mkdir -p /srv/node/sdd
   $ mkdir -p /srv/node/sde
   $ mkdir -p /srv/node/sdf
   $ mkdir -p /srv/node/sdg

   The mount point is referenced as the mount_point parameter in the swift.yml file (/etc/rpc_deploy/conf.d/swift.yml).

5. Mount the storage devices:

   $ mount /srv/node/sdc
   $ mount /srv/node/sdd
   $ mount /srv/node/sde
   $ mount /srv/node/sdf
   $ mount /srv/node/sdg

To view an annotated example of the swift.yml file, see Appendix A, Object Storage configuration files.

For the following mounted devices:

Table 4.1. Mounted devices

Device      Mount location
/dev/sdc    /srv/node/sdc
/dev/sdd    /srv/node/sdd
/dev/sde    /srv/node/sde
/dev/sdf    /srv/node/sdf
/dev/sdg    /srv/node/sdg

the entry in the swift.yml would be:

drives:
  - name: sdc
  - name: sdd
  - name: sde
  - name: sdf
  - name: sdg

mount_point: /srv/node
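The per-device commands in this procedure can also be collapsed into a single loop. This is a minimal sketch, assuming the same sdc through sdg device names and /srv/node mount points used above:

#!/bin/bash
# Format, register in fstab, and mount each Object Storage device
for dev in sdc sdd sde sdf sdg; do
    mkfs.xfs -f -i size=1024 -L "$dev" "/dev/$dev"
    mkdir -p "/srv/node/$dev"
    echo "LABEL=$dev /srv/node/$dev xfs noatime,nodiratime,nobarrier,logbufs=8,nobootwait 0 0" >> /etc/fstab
    mount "/srv/node/$dev"
done

# Confirm that every device is mounted where the ring expects it
df -h /srv/node/*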

5. Networking for standalone Object Storage

Standalone Object Storage requires the br-storage and br-mgmt networks. These networks are specified in the provider_networks section of the /etc/rpc_deploy/user_config.yml file.

provider_networks:
  - network:
      container_bridge: "br-mgmt"
      container_interface: "eth1"
      type: "raw"
      ip_from_q: "container"
      group_binds:
        - all_containers
        - hosts
  - network:
      container_bridge: "br-storage"
      container_interface: "eth2"
      type: "raw"
      ip_from_q: "storage"
      group_binds:
        - glance_api
        - cinder_api
        - cinder_volume
        - nova_compute
        - swift_proxy
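Before continuing, it is worth confirming that both bridges exist and carry addresses on each target host. These are standard Linux commands; brctl assumes the bridge-utils package is installed.

# Show the bridges and their member interfaces
$ brctl show br-mgmt
$ brctl show br-storage

# Confirm that each bridge has an IP address assigned
$ ip addr show br-mgmt
$ ip addr show br-storage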

6. Configure a standalone Object Storage deployment

Standalone Object Storage is configured using the following files:

- /etc/rpc_deploy/rpc_user_config.yml
- /etc/rpc_deploy/rpc_user_variables.yml
- /etc/rpc_deploy/conf.d/swift.yml

Procedure 6.1. Updating the standalone Object Storage configuration files

1. In the rpc_user_config.yml, set the infra_hosts values so that only the Identity and infrastructure services are set up. The following example shows node1 through node3. The values must be adjusted for all infra_hosts.

   identity_hosts:
     node1:
       ip: 192.168.100.1
     node2:
       ip: 192.168.100.2
     node3:
       ip: 192.168.100.3
   shared-infra_hosts:
     node1:
       ip: 192.168.100.1
     node2:
       ip: 192.168.100.2
     node3:
       ip: 192.168.100.3

   Note: Do not set up any compute_hosts or storage_hosts in your rpc_user_config.yml file. When these hosts are not set up in the rpc_user_config.yml, the Ansible playbooks do not deploy them.

2. In the rpc_user_variables.yml, update the following user variables.

   Note: The rackspace_cloud variables are only needed if using Monitoring as a Service (MaaS).

   container_openstack_password:
   keystone_auth_admin_password:
   keystone_auth_admin_token:
   keystone_container_mysql_password:
   keystone_service_password:
   memcached_encryption_key:
   mysql_debian_sys_maint_password:
   mysql_root_password:
   rabbitmq_cookie_token:
   rabbitmq_password:
   rackspace_cloud_api_key:
   rackspace_cloud_auth_url:
   rackspace_cloud_password:
   rackspace_cloud_tenant_id:
   rackspace_cloud_username:
   rackspace_cloudfiles_tenant_id:
   rpc_support_holland_password:
   swift_container_mysql_password:
   swift_hash_path_prefix:
   swift_hash_path_suffix:
   swift_service_password:
   swift_dispersion_user:
   swift_dispersion_password:
   service_syslog:
   galera_root_password:

3. In the /etc/rpc_deploy/conf.d/swift.yml, update the global override values:

   global_overrides:
     swift:
       part_power: 8
       weight: 100
       min_part_hours: 1
       repl_number: 3
       storage_network: 'br-storage'
       replication_network: 'br-repl'
       drives:
         - name: sdc
         - name: sdd
         - name: sde
         - name: sdf
       mount_point: /mnt
       account:
       container:
       storage_policies:
         - policy:
             name: gold
             index: 0
             default: True
         - policy:
             name: silver
             index: 1
             repl_number: 3
             deprecated: True

   part_power
       Set the partition power value based on the total amount of storage the entire ring will use. Multiply the maximum number of drives ever used with this Object Storage installation by 100 and round that value up to the closest power of two. For example, a maximum of six drives, times 100, equals 600. The nearest power of two above 600 is two to the power of ten (1024), so the partition power is ten. (See the sketch after this procedure for a scripted version of this calculation.) The partition power cannot be changed after the Object Storage rings are built.

   weight
       The default weight is 100. If the drives are different sizes, set the weight value to avoid uneven distribution of data. For example, a 1 TB disk would have a weight of 100, while a 2 TB drive would have a weight of 200.

   min_part_hours
       The default value is 1. Set the minimum partition hours to the amount of time to lock a partition's replicas after a partition has been moved. Moving multiple replicas at the same time might make data inaccessible. This value can be set separately in the swift, container, account, and policy sections, with the value in lower sections superseding the value in the swift section.

   repl_number
       The default value is 3. Set the replication number to the number of replicas of each object. This value can be set separately in the swift, container, account, and policy sections, with the value in the more granular sections superseding the value in the swift section.

   storage_network
       By default, the swift services will listen on the default management IP. Optionally, specify the interface of the storage network.

       Note: If the storage_network is not set, but the storage_ips per host are set (or the storage_ip is not on the storage_network interface), the proxy server will not be able to connect to the storage services.

   replication_network
       Optionally, specify a dedicated replication network interface, so dedicated replication can be set up. If this value is not specified, no dedicated replication_network is set.

       Note: As with the storage_network, if the repl_ip is not set on the replication_network interface, replication will not work properly.

   drives
       Set the default drives per host. This is useful when all hosts have the same drives. These can be overridden on a per-host basis.

   mount_point
       Set the mount_point value to the location where the swift drives are mounted. For example, with a mount point of /mnt and a drive of sdc, a drive is mounted at /mnt/sdc on the swift_host. This can be overridden on a per-host basis.

   storage_policies
       Storage policies determine on which hardware data is stored, how the data is stored across that hardware, and in which region the data resides. Each storage policy must have a unique name and a unique index. There must be a storage policy with an index of 0 in the swift.yml file to use any legacy containers created before storage policies were instituted.

   default
       Set the default value to yes for at least one policy. This is the default storage policy for any non-legacy containers that are created.

   deprecated
       Set the deprecated value to yes to turn off storage policies.

   Note: For account and container rings, min_part_hours and repl_number are the only values that can be set. Setting them in this section overrides the defaults for the specific ring.

4. In the /etc/rpc_deploy/conf.d/swift.yml, update the Object Storage proxy hosts values:

   swift-proxy_hosts:
     infra-node1:
       ip: 192.0.2.1
     infra-node2:
       ip: 192.0.2.2
     infra-node3:
       ip: 192.0.2.3

   swift-proxy_hosts
       Set the IP address of the hosts that Ansible connects to when deploying the swift-proxy containers. The swift-proxy_hosts value should match the infra nodes.

5. In the /etc/rpc_deploy/conf.d/swift.yml, update the Object Storage hosts values:

   swift_hosts:
     swift-node1:
       ip: 192.0.2.4
       container_vars:
         swift_vars:
           zone: 0
     swift-node2:
       ip: 192.0.2.5
       container_vars:
         swift_vars:
           zone: 1
     swift-node3:
       ip: 192.0.2.6
       container_vars:
         swift_vars:
           zone: 2
     swift-node4:
       ip: 192.0.2.7
       container_vars:
         swift_vars:
           zone: 3
     swift-node5:
       ip: 192.0.2.8
       container_vars:
         swift_vars:
           storage_ip: 198.51.100.8
           repl_ip: 203.0.113.8
           zone: 4
           region: 3
           weight: 200
           groups:
             - account
             - container
             - silver
           drives:
             - name: sdb
               storage_ip: 198.51.100.9
               repl_ip: 203.0.113.9
               weight: 75
               groups:
                 - gold
             - name: sdc
             - name: sdd
             - name: sde
             - name: sdf

   swift_hosts
       Specify the hosts to be used as the storage nodes. The ip is the address of the host to which Ansible connects. Set the name and IP address of each Object Storage host. The swift_hosts section is not required.

   swift_vars
       Contains the Object Storage host specific values.

   storage_ip and repl_ip
       These values are based on the IP addresses of the host's storage_network or replication_network. For example, if the storage_network is br-storage and host1 has an IP address of 1.1.1.1 on br-storage, then that is the IP address that will be used for storage_ip. If only the storage_ip is specified, then the repl_ip defaults to the storage_ip. If neither is specified, both default to the host IP address.

       Note: Overriding these values on a host or drive basis can cause problems if the IP address that the service listens on is based on a specified storage_network or replication_network and the ring is set to a different IP address.

   zone
       The default is 0. Optionally, set the Object Storage zone for the ring.

   region
       Optionally, set the Object Storage region for the ring.

   weight
       The default weight is 100. If the drives are different sizes, set the weight value to avoid uneven distribution of data. This value can be specified on a host or drive basis (if specified at both, the drive setting takes precedence).

   groups
       Set the groups to list the rings to which a host's drive belongs. This can be set on a per-drive basis, which will override the host setting.

   drives
       Set the names of the drives on this Object Storage host. At least one name must be specified.

   In the following example, swift-node5 shows values in the swift_hosts section that will override the global values. Groups are set, which overrides the global settings for drive sdb. The weight is overridden for the host and specifically adjusted on drive sdb. Also, the storage_ip and repl_ip are set differently for sdb.

   swift-node5:
     ip: 192.0.2.8
     container_vars:
       swift_vars:
         storage_ip: 198.51.100.8
         repl_ip: 203.0.113.8
         zone: 4
         region: 3
         weight: 200
         groups:
           - account
           - container
           - silver
         drives:
           - name: sdb
             storage_ip: 198.51.100.9
             repl_ip: 203.0.113.9
             weight: 75
             groups:
               - gold
           - name: sdc
           - name: sdd
           - name: sde
           - name: sdf

6. Ensure the swift.yml is in the /etc/rpc_deploy/conf.d/ folder.

7. Optionally, if using the built-in HAProxy as the load balancer, add the following variables to the rpc_user_variables.yml file to ensure that only the appropriate load balancer endpoints are set:

   haproxy_service_configs:
     - service:
         haproxy_service_name: repo_all
         haproxy_backend_nodes: "{{ groups['pkg_repo'] }}"
         haproxy_port: 8181
         haproxy_backend_port: 8181
         haproxy_balance_type: http
     - service:
         haproxy_service_name: galera
         haproxy_backend_nodes: "{{ [groups['galera_all'][0]] }}" # list expected
         haproxy_backup_nodes: "{{ groups['galera_all'][1:] }}"
         haproxy_port: 3306
         haproxy_balance_type: tcp
         haproxy_timeout_client: 5000s
         haproxy_timeout_server: 5000s
         haproxy_backend_options:
           - "mysql-check user {{ galera_monitoring_user }}"
     - service:
         haproxy_service_name: swift_proxy
         haproxy_backend_nodes: "{{ groups['swift_proxy'] }}"
         haproxy_balance_alg: source
         haproxy_port: 8080
         haproxy_balance_type: http
     - service:
         haproxy_service_name: keystone_admin
         haproxy_backend_nodes: "{{ groups['keystone_all'] }}"
         haproxy_port: 35357
         haproxy_balance_type: http
         haproxy_backend_options:
           - "forwardfor"
           - "httpchk"
           - "httplog"
     - service:
         haproxy_service_name: keystone_service
         haproxy_backend_nodes: "{{ groups['keystone_all'] }}"
         haproxy_port: 5000
         haproxy_balance_type: http
         haproxy_backend_options:
           - "forwardfor"
           - "httpchk"
           - "httplog"
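The part_power rule described in step 3 is straightforward to script. The following is a minimal sketch of the calculation, assuming bash; max_drives is the largest number of drives the cluster will ever hold:

#!/bin/bash
# part_power: multiply max drives by 100, then take the exponent of
# the next power of two at or above that value
max_drives=6
target=$((max_drives * 100))   # 600 in the documented example

part_power=0
while [ $((1 << part_power)) -lt "$target" ]; do
    part_power=$((part_power + 1))
done

echo "part_power: $part_power"   # prints 10 (2^10 = 1024 >= 600)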

Procedure 6.2. Allowing all Identity users to use Object Storage

1. Set swift_allow_all_users in the user_variables.yml file to True. Any users with the _member_ role (all authorized Identity (keystone) users) can create containers and upload objects to Object Storage.

   Note: If this value is False, then by default, only users with the admin or swiftoperator role are allowed to create containers or manage tenants. When the backend type for the Image service (glance) is set to swift, the Image service can access the Object Storage cluster regardless of whether this value is True or False.

2. Run the Identity playbook:

   $ ansible-playbook -e @/etc/rpc_deploy/user_variables.yml \
     playbooks/openstack/keystone.yml
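When swift_allow_all_users is False, individual users can still be granted access by assigning them the swiftoperator role. A sketch using the standard OpenStack client; the user and tenant names are illustrative:

# Grant the swiftoperator role to user "alice" in tenant "demo"
$ openstack role add --project demo --user alice swiftoperator

# Confirm the assignment
$ openstack role list --project demo --user alice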

7. Standalone Object Storage playbooks

The Ansible Object Storage playbooks prepare the target hosts for Object Storage services and depend on the values in the swift.yml file.

7.1. Running Object Storage playbooks

Before running the standalone Object Storage playbooks, make sure the configuration files have been updated.

1. Change to the /opt/rpc-openstack/openstack-ansible/playbooks directory.

2. Run the setup-everything.yml playbook to install Object Storage and the necessary components of OpenStack:

   $ ansible-playbook -e @/etc/rpc_deploy/user_variables.yml \
     setup-everything.yml
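Before launching the full run, it can save time to confirm that Ansible can reach every target host. Both commands below use standard Ansible options; note that some tasks in these playbooks may not fully support check mode.

# Confirm connectivity to all hosts in the inventory
$ ansible hosts -m ping

# Optional dry run that reports what would change
$ ansible-playbook -e @/etc/rpc_deploy/user_variables.yml \
  --check setup-everything.yml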

8. Verify the installation

These commands can be run from the proxy server or any server that has access to Identity (keystone).

1. Ensure that the credentials are set correctly in the /root/openrc file, and then source it:

   $ source /root/openrc

2. Run the following command:

   $ swift stat
      Account: AUTH_11b9758b7049476d9b48f7a91ea11493
   Containers: 0
      Objects: 0
        Bytes: 0
   Content-Type: text/plain; charset=utf-8
   X-Timestamp: 1381434243.83760
   X-Trans-Id: txdcdd594565214fb4a2d33-0052570383
   X-Put-Timestamp: 1381434243.83760

3. Run the following commands to upload files to a container. Create the test.txt and test2.txt test files locally if needed (they can contain anything).

   $ swift upload myfiles test.txt
   $ swift upload myfiles test2.txt

4. Run the following command to download all files from the myfiles container:

   $ swift download myfiles
   test2.txt [headers 0.267s, total 0.267s, 0.000s MB/s]
   test.txt [headers 0.271s, total 0.271s, 0.000s MB/s]

If the files download successfully, Object Storage has been installed successfully.
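A few more swift client commands round out the smoke test. The container and file names are the same illustrative ones used above:

# List all containers in the account
$ swift list

# List the objects in the myfiles container
$ swift list myfiles

# Show metadata for a single object
$ swift stat myfiles test.txt

# Clean up the test container and its objects
$ swift delete myfiles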

9. Object Storage monitoring

The Rackspace Cloud Monitoring Service allows Rackspace Private Cloud customers to monitor system performance and safeguard critical data.

9.1. Service and response

When a threshold is reached or functionality fails, the Rackspace Cloud Monitoring Service generates an alert, which creates a ticket in the Rackspace ticketing system. This ticket moves into the RPCO support queue. Tickets flagged as monitoring alerts are given highest priority, and response is delivered according to the Service Level Agreement (SLA). Refer to the SLA for detailed information about incident severity levels and corresponding response times.

Specific monitoring alert guidelines can be set for the installation. These details should be arranged by a Rackspace account manager.

9.2. Object Storage monitors

Object Storage has its own set of monitors and alerts. For more information about installing the monitoring tools, see the RPCO Installation Guide.

The following checks are performed on services:

- Health checks on the services on each server. Object Storage makes a request to /healthcheck for each service to ensure that it is responding appropriately. If this check fails, determine why the service is failing and fix it accordingly.

- Health check against the proxy service on the virtual IP (VIP). Object Storage checks the load balancer address for the swift-proxy-server service and monitors the proxy servers as a whole rather than individually (which is handled by the individual service health checks). If this check fails, it suggests that there is no access to the VIP or that all of the services are failing.

The following checks are performed against the output of the swift-recon middleware:

- md5sum checks on the ring files across all Object Storage nodes. This check ensures that the ring files are the same on each node. If this check fails, determine why the md5sum for the ring is different and determine which of the ring files is correct. Copy the correct ring file onto the node that is causing the md5sum check to fail.

- md5sum checks on the swift.conf across all swift nodes. If this check fails, determine why the swift.conf is different and determine which swift.conf is correct. Copy the correct swift.conf onto the node that is causing the md5sum check to fail.

Asyncs pending
    This check monitors the average number of async pending requests and the percentage that are put in async pending. This happens when a PUT or DELETE fails (due to, for example, timeouts, heavy usage, or a failed disk). If this check fails, determine why requests are failing and being put in async pending status, and fix them accordingly.

Quarantine
    This check monitors the percentage of objects that are quarantined (objects that are found to have errors and moved to quarantine). An alert is set up against the account, container, and object servers. If this check fails, determine the cause of the corrupted objects and fix them accordingly.

Replication
    This check monitors the replication success percentage. An alert is set up against the account, container, and object servers. If this check fails, determine why objects are not replicating and fix the issue accordingly.
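The data behind these monitors can also be inspected by hand with the swift-recon CLI that ships with swift. The proxy address in the first command is illustrative:

# Query a service health endpoint directly
$ curl -i http://192.0.2.1:8080/healthcheck

# Compare ring and swift.conf md5sums across the cluster
$ swift-recon --md5

# Async pendings, quarantine counts, and replication stats
$ swift-recon object -a
$ swift-recon object -q
$ swift-recon object -r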

Appendix A. Object Storage configuration files

Table of Contents
A.1. swift.yml example configuration file
A.2. user_variables.yml configuration file

Object Storage is configured using the /etc/rpc_deploy/conf.d/swift.yml file and the /etc/rpc_deploy/user_variables.yml file. This appendix shows both configuration files and indicates which variables are required and which are optional.

A.1. swift.yml example configuration file

---
# Copyright 2015, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.

# Overview
# ========
#
# This file contains the configuration for the OpenStack Ansible Deployment
# (OSA) Object Storage (swift) service. Only enable these options for
# deployments that contain the Object Storage service. For more information
# on these options, see the documentation at
#
# http://docs.openstack.org/developer/swift/index.html
#
# You can customize the options in this file and copy it to
# /etc/openstack_deploy/conf.d/swift.yml or create a new file containing
# only necessary options for your environment before deployment.
#
# OSA implements PyYAML to parse YAML files and therefore supports structure
# and formatting options that augment traditional YAML. For example, aliases
# or references. For more information on PyYAML, see the documentation at
#
# http://pyyaml.org/wiki/PyYAMLDocumentation

# Configuration reference
# =======================

# Level: global_overrides (required)
# Contains global options that require customization for a deployment. For
# example, the ring structure. This level also provides a mechanism to
# override other options defined in the playbook structure.
#
#   Level: swift (required)
#   Contains options for swift.
#
#     Option: storage_network (required, string)
#     Name of the storage network bridge on target hosts. Typically
#     'br-storage'.
#
#     Option: repl_network (optional, string)
#     Name of the replication network bridge on target hosts. Typically
#     'br-repl'. Defaults to the value of the 'storage_network' option.
#
#     Option: part_power (required, integer)
#     Partition power. Applies to all rings unless overridden at the
#     'account' or 'container' levels or within a policy in the
#     'storage_policies' level. Immutable without rebuilding the rings.
#
#     Option: repl_number (optional, integer)
#     Number of replicas for each partition. Applies to all rings unless
#     overridden at the 'account' or 'container' levels or within a policy
#     in the 'storage_policies' level. Defaults to 3.
#
#     Option: min_part_hours (optional, integer)
#     Minimum time in hours between multiple moves of the same partition.
#     Applies to all rings unless overridden at the 'account' or 'container'
#     levels or within a policy in the 'storage_policies' level. Defaults
#     to 1.
#
#     Option: region (optional, integer)
#     Region of a disk. Applies to all disks in all storage hosts unless
#     overridden deeper in the structure. Defaults to 1.
#
#     Option: zone (optional, integer)
#     Zone of a disk. Applies to all disks in all storage hosts unless
#     overridden deeper in the structure. Defaults to 0.
#
#     Option: weight (optional, integer)
#     Weight of a disk. Applies to all disks in all storage hosts unless
#     overridden deeper in the structure. Defaults to 100.
#
#     Option: reclaim_age (optional, integer, default 604800)
#     The amount of time in seconds before items, such as tombstones, are
#     reclaimed. The default is 604800 (7 days).
#
#   Example:
#
#   Define a typical deployment:
#
#   - Storage network that uses the 'br-storage' bridge. Proxy containers
#     typically use the 'storage' IP address pool. However, storage hosts
#     use bare metal and require manual configuration of the 'br-storage'
#     bridge on each host.
#   - Replication network that uses the 'br-repl' bridge. Only storage
#     hosts contain this network. Storage hosts use bare metal and require
#     manual configuration of the bridge on each host.
#   - Ring configuration with partition power of 8, three replicas of each
#     file, and minimum 1 hour between migrations of the same partition.
#     All rings use region 1 and zone 0. All disks include a weight of 100.
#
#   swift:
#     storage_network: 'br-storage'
#     replication_network: 'br-repl'
#     part_power: 8
#     repl_number: 3
#     min_part_hours: 1
#     region: 1
#     zone: 0
#     weight: 100
#
#   Note: Most typical deployments override the 'zone' option in the
#   'swift_vars' level to use a unique zone for each storage host.
#
#     Option: mount_point (required, string)
#     Top-level directory for mount points of disks. Defaults to /mnt.
#     Applies to all hosts unless overridden deeper in the structure.
#
#     Level: drives (required)
#     Contains the mount points of disks. Applies to all hosts unless
#     overridden deeper in the structure.
#
#       Option: name (required, string)
#       Mount point of a disk. Use one entry for each disk. Applies to all
#       hosts unless overridden deeper in the structure.
#
#   Example:
#
#   Mount disks 'sdc', 'sdd', 'sde', and 'sdf' to the '/mnt' directory on
#   all storage hosts:
#
#   mount_point: /mnt
#   drives:
#     - name: sdc
#     - name: sdd
#     - name: sde
#     - name: sdf
#
#   Level: account (optional)
#   Contains 'min_part_hours' and 'repl_number' options specific to the
#   account ring.
#
#   Level: container (optional)
#   Contains 'min_part_hours' and 'repl_number' options specific to the
#   container ring.
#
#   Level: storage_policies (required)
#   Contains storage policies. Minimum one policy. One policy must include
#   the 'index: 0' and 'default: True' options.
#
#     Level: policy (required)
#     Contains a storage policy. Define for each policy.
#
#       Option: name (required, string)
#       Policy name.

#       Option: index (required, integer)
#       Policy index. One policy must include this option with a '0' value.
#
#       Option: policy_type (optional, string)
#       Defines policy as replication or erasure coding. Accepts
#       'replication' or 'erasure_coding' values. Defaults to 'replication'
#       if omitted.
#
#       Option: ec_type (conditionally required, string)
#       Defines the erasure coding algorithm. Required for erasure coding
#       policies.
#
#       Option: ec_num_data_fragments (conditionally required, integer)
#       Defines the number of object data fragments. Required for erasure
#       coding policies.
#
#       Option: ec_num_parity_fragments (conditionally required, integer)
#       Defines the number of object parity fragments. Required for erasure
#       coding policies.
#
#       Option: ec_object_segment_size (conditionally required, integer)
#       Defines the size of object segments in bytes. Swift sends incoming
#       objects to an erasure coding policy in segments of this size.
#       Required for erasure coding policies.
#
#       Option: default (conditionally required, boolean)
#       Defines the default policy. One policy must include this option
#       with a 'True' value.
#
#       Option: deprecated (optional, boolean)
#       Defines a deprecated policy.
#
#       Note: The following levels and options override any values higher
#       in the structure and generally apply to advanced deployments.
#
#       Option: repl_number (optional, integer)
#       Number of replicas of each partition in this policy.
#
#       Option: min_part_hours (optional, integer)
#       Minimum time in hours between multiple moves of the same partition
#       in this policy.
#
#   Example:
#
#   Define three storage policies: a default 'gold' policy, a deprecated
#   'silver' policy, and an erasure coding 'ec10-4' policy.
#
#   storage_policies:
#     - policy:
#         name: gold
#         index: 0
#         default: True
#     - policy:
#         name: silver
#         index: 1
#         repl_number: 3
#         deprecated: True
#     - policy:
#         name: ec10-4
#         index: 2
#         policy_type: erasure_coding
#         ec_type: jerasure_rs_vand
#         ec_num_data_fragments: 10
#         ec_num_parity_fragments: 4
#         ec_object_segment_size: 1048576

# --------
#
# Level: swift_proxy-hosts (required)
# List of target hosts on which to deploy the swift proxy service. Recommend
# three minimum target hosts for these services. Typically contains the same
# target hosts as the 'shared-infra_hosts' level in complete OpenStack
# deployments.
#
#   Level: <value> (optional, string)
#   Name of a proxy host.
#
#     Option: ip (required, string)
#     IP address of this target host, typically the IP address assigned to
#     the management bridge.
#
#     Level: container_vars (optional)
#     Contains options for this target host.
#
#       Level: swift_proxy_vars (optional)
#       Contains swift proxy options for this target host. Typical
#       deployments use this level to define read/write affinity settings
#       for proxy hosts.
#
#         Option: read_affinity (optional, string)
#         Specify which region/zones the proxy server should prefer for
#         reads from the account, container, and object services. For
#         example, read_affinity: "r1=100" would prefer region 1, and
#         read_affinity: "r1z1=100, r1=200" would prefer region 1 zone 1,
#         or if that is unavailable region 1, otherwise any available
#         region/zone. A lower number is a higher priority. When this
#         option is specified, the sorting_method is set to 'affinity'
#         automatically.
#
#         Option: write_affinity (optional, string)
#         Specify which region to prefer when object PUT requests are made.
#         For example, write_affinity: "r1" favours region 1 for object
#         PUTs.
#
#         Option: write_affinity_node_count (optional, string)
#         Specify how many copies to prioritise in the specified region on
#         handoff nodes for object PUT requests. Requires "write_affinity"
#         to be set in order to be useful. This is a short-term way to
#         ensure replication happens locally; Swift's eventual consistency
#         will ensure proper distribution over time. For example,
#         write_affinity_node_count: "2 * replicas" would try to store
#         object PUT replicas on up to 6 disks in region 1, assuming
#         replicas is 3 and write_affinity = r1.
#
# Example:
#
# Define three swift proxy hosts:
#
# swift_proxy-hosts:
#   infra1:
#     ip: 172.29.236.101
#     container_vars:
#       swift_proxy_vars:
#         read_affinity: "r1=100"
#         write_affinity: "r1"
#         write_affinity_node_count: "2 * replicas"
#   infra2:
#     ip: 172.29.236.102
#     container_vars:
#       swift_proxy_vars:
#         read_affinity: "r2=100"
#         write_affinity: "r2"
#         write_affinity_node_count: "2 * replicas"
#   infra3:
#     ip: 172.29.236.103
#     container_vars:
#       swift_proxy_vars:
#         read_affinity: "r3=100"
#         write_affinity: "r3"
#         write_affinity_node_count: "2 * replicas"

# --------
#
# Level: swift_hosts (required)
# List of target hosts on which to deploy the swift storage services.
# Recommend three minimum target hosts for these services.
#
#   Level: <value> (required, string)
#   Name of a storage host.
#
#     Option: ip (required, string)
#     IP address of this target host, typically the IP address assigned to
#     the management bridge.
#
#   Note: The following levels and options override any values higher in
#   the structure and generally apply to advanced deployments.
#
#     Level: container_vars (optional)
#     Contains options for this target host.
#
#       Level: swift_vars (optional)
#       Contains swift options for this target host. Typical deployments
#       use this level to define a unique zone for each storage host.
#
#         Option: storage_ip (optional, string)
#         IP address to use for accessing the account, container, and
#         object services if different than the IP address of the storage
#         network bridge on the target host. Also requires manual
#         configuration of the host.
#
#         Option: repl_ip (optional, string)
#         IP address to use for replication services if different than the
#         IP address of the replication network bridge on the target host.
#         Also requires manual configuration of the host.
#
#         Option: region (optional, integer)
#         Region of all disks.
#
#         Option: zone (optional, integer)
#         Zone of all disks.
#
#         Option: weight (optional, integer)
#         Weight of all disks.
#
#         Level: groups (optional)
#         List of one or more Ansible groups that apply to this host.
#
#         Example:
#
#         Deploy the account ring, container ring, and 'silver' policy:
#
#         groups:
#           - account
#           - container
#           - silver
#
#         Level: drives (optional)
#         Contains the mount points of disks specific to this host.
#
#           Level or option: name (optional, string)
#           Mount point of a disk specific to this host. Use one entry for
#           each disk. Functions as a level for disks that contain
#           additional options.
#
#             Option: storage_ip (optional, string)
#             IP address to use for accessing the account, container, and
#             object services of a disk if different than the IP address of
#             the storage network bridge on the target host. Also requires
#             manual configuration of the host.
#
#             Option: repl_ip (optional, string)
#             IP address to use for replication services of a disk if
#             different than the IP address of the replication network
#             bridge on the target host. Also requires manual configuration
#             of the host.
#
#             Option: region (optional, integer)
#             Region of a disk.
#
#             Option: zone (optional, integer)
#             Zone of a disk.
#
#             Option: weight (optional, integer)
#             Weight of a disk.
#
#             Level: groups (optional)
#             List of one or more Ansible groups that apply to this disk.
#
# Example:
#
# Define four storage hosts. The first three hosts contain typical options
# and the last host contains advanced options.
#
# swift_hosts:
#   swift-node1:
#     ip: 172.29.236.151
#     container_vars:
#       swift_vars:
#         zone: 0
#   swift-node2:
#     ip: 172.29.236.152
#     container_vars:
#       swift_vars:
#         zone: 1
#   swift-node3:
#     ip: 172.29.236.153
#     container_vars:
#       swift_vars:
#         zone: 2
#   swift-node4:
#     ip: 172.29.236.154
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.11
#         repl_ip: 203.0.113.11
#         region: 2
#         zone: 0
#         weight: 200
#         groups:
#           - account
#           - container
#           - silver
#         drives:
#           - name: sdc
#             storage_ip: 198.51.100.21
#             repl_ip: 203.0.113.21
#             weight: 75
#             groups:
#               - gold
#           - name: sdd
#           - name: sde
#           - name: sdf

A.2. user_variables.yml configuration file

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.

# Ceilometer Options
ceilometer_db_type: mongodb
ceilometer_db_ip: localhost

zone: 0 swift-node2: ip: 172.29.236.152 container_vars: swift_vars: zone: 1 swift-node3: ip: 172.29.236.153 container_vars: swift_vars: zone: 2 swift-node4: ip: 172.29.236.154 container_vars: swift_vars: storage_ip: 198.51.100.11 repl_ip: 203.0.113.11 region: 2 zone: 0 weight: 200 groups: - account - container - silver drives: - name: sdc storage_ip: 198.51.100.21 repl_ip: 203.0.113.21 weight: 75 groups: - gold - name: sdd - name: sde - name: sdf A.2. user_variables.yml configuration file --- Copyright 2014, Rackspace US, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/license-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Ceilometer Options ceilometer_db_type: mongodb ceilometer_db_ip: localhost 29

ceilometer_db_port: 27017 swift_ceilometer_enabled: False heat_ceilometer_enabled: False cinder_ceilometer_enabled: False glance_ceilometer_enabled: False nova_ceilometer_enabled: False neutron_ceilometer_enabled: False Aodh Options aodh_db_type: mongodb aodh_db_ip: localhost aodh_db_port: 27017 Glance Options Set glance_default_store to "swift" if using Cloud Files or swift backend or "rbd" if using ceph backend; the latter will trigger ceph to get installed on glance glance_default_store: file glance_notification_driver: noop `internalurl` will cause glance to speak to swift via ServiceNet, use `publicurl` to communicate with swift over the public network glance_swift_store_endpoint_type: internalurl Ceph client user for glance to connect to the ceph cluster glance_ceph_client: glance Ceph pool name for Glance to use glance_rbd_store_pool: images glance_rbd_store_chunk_size: 8 Nova When nova_libvirt_images_rbd_pool is defined, ceph will be installed on nova hosts. nova_libvirt_images_rbd_pool: vms by default we assume you use rbd for both cinder and nova, and as libvirt needs to access both volumes (cinder) and boot disks (nova) we default to reuse the cinder_ceph_client only need to change this if you'd use ceph for boot disks and not for volumes nova_ceph_client: nova_ceph_client_uuid: This defaults to KVM, if you are deploying on a host that is not KVM capable change this to your hypervisor type: IE "qemu", "lxc". nova_virt_type: kvm nova_cpu_allocation_ratio: 2.0 nova_ram_allocation_ratio: 1.0 If you wish to change the dhcp_domain configured for both nova and neutron dhcp_domain: Glance with Swift Extra options when configuring swift as a glance back-end. By default it will use the local swift installation. Set these when using a remote swift as a glance backend. NOTE: Ensure that the auth version matches your authentication endpoint. NOTE: If the password for glance_swift_store_key contains a dollar sign ($), it must be escaped with an additional dollar sign ($$), not a backslash. For 30

example, a password of "super$ecure" would need to be entered as "super$$ecure" below. See Launchpad Bug 1259729 for more details. glance_swift_store_auth_version: 3 glance_swift_store_auth_address: "https://some.auth.url.com" glance_swift_store_user: "OPENSTACK_TENANT_ID:OPENSTACK_USER_NAME" glance_swift_store_key: "OPENSTACK_USER_PASSWORD" glance_swift_store_container: "NAME_OF_SWIFT_CONTAINER" glance_swift_store_region: "NAME_OF_REGION" Cinder Ceph client user for cinder to connect to the ceph cluster cinder_ceph_client: cinder Ceph Enable these if you use ceph rbd for at least one component (glance, cinder, nova) ceph_apt_repo_url_region: "www" or "eu" for Netherlands based mirror ceph_stable_release: hammer Ceph Authentication - by default cephx is true cephx: true Ceph Monitors A list of the IP addresses for your Ceph monitors ceph_mons: - 10.16.5.40-10.16.5.41-10.16.5.42 Custom Ceph Configuration File (ceph.conf) By default, your deployment host will connect to one of the mons defined above to obtain a copy of your cluster's ceph.conf. If you prefer, uncomment ceph_conf_file and customise to avoid ceph.conf being copied from a mon. ceph_conf_file: [global] fsid = 00000000-1111-2222-3333-444444444444 mon_initial_members = mon1.example.local,mon2.example.local,mon3.example. local mon_host = 10.16.5.40,10.16.5.41,10.16.5.42 optionally, you can use this construct to avoid defining this list twice: mon_host = {{ ceph_mons join(',') }} auth_cluster_required = cephx auth_service_required = cephx SSL Settings Adjust these settings to change how SSL connectivity is configured for various services. For more information, see the openstack-ansible documentation section titled "Securing services with SSL certificates". SSL: Keystone These do not need to be configured unless you're creating certificates for services running behind Apache (currently, Horizon and Keystone). ssl_protocol: "ALL -SSLv2 -SSLv3" Cipher suite string from https://hynek.me/articles/hardening-your-webservers-ssl-ciphers/ ssl_cipher_suite: "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH +AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS" To override for Keystone only: - keystone_ssl_protocol 31

- keystone_ssl_cipher_suite To override for Horizon only: - horizon_ssl_protocol - horizon_ssl_cipher_suite SSL: RabbitMQ Set these variables if you prefer to use existing SSL certificates, keys and CA certificates with the RabbitMQ SSL/TLS Listener rabbitmq_user_ssl_cert: <path to cert on ansible deployment host> rabbitmq_user_ssl_key: <path to cert on ansible deployment host> rabbitmq_user_ssl_ca_cert: <path to cert on ansible deployment host> By default, openstack-ansible configures all OpenStack services to talk to RabbitMQ over encrypted connections on port 5671. To opt-out of this default, set the rabbitmq_use_ssl variable to 'false'. The default setting of 'true' is highly recommended for securing the contents of RabbitMQ messages. rabbitmq_use_ssl: true Additional pinning generator that will allow for more packages to be pinned as you see fit. All pins allow for package and versions to be defined. Be careful using this as versions are always subject to change and updates regarding security will become your problem from this point on. Pinning can be done based on a package version, release, or origin. Use "*" in the package name to indicate that you want to pin all package to a particular constraint. apt_pinned_packages: - { package: "lxc", version: "1.0.7-0ubuntu0.1" } - { package: "libvirt-bin", version: "1.2.2-0ubuntu13.1.9" } - { package: "rabbitmq-server", origin: "www.rabbitmq.com" } - { package: "*", release: "MariaDB" } Environment variable settings This allows users to specify the additional environment variables to be set which is useful in setting where you working behind a proxy. If working behind a proxy It's important to always specify the scheme as "http://". This is what the underlying python libraries will handle best. This proxy information will be placed both on the hosts and inside the containers. Example environment variable setup: proxy_env_url: http://username:pa$$w0rd@10.10.10.9:9000/ no_proxy_env: "localhost,127.0.0.1,{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %} {% endfor %}" global_environment_variables: HTTP_PROXY: "{{ proxy_env_url }}" HTTPS_PROXY: "{{ proxy_env_url }}" NO_PROXY: "{{ no_proxy_env }}" Multiple region support in Horizon: For multiple regions uncomment this configuration, and 32

add the extra endpoints below the first list item. horizon_available_regions: - { url: "{{ keystone_service_internalurl }}", name: "{{ keystone_service_region }}" } - { url: "http://cluster1.example.com:5000/v2.0", name: "RegionTwo" } SSH connection wait time If an increased delay for the ssh connection check is desired, uncomment this variable and set it appropriately. ssh_delay: 5 HAProxy Uncomment this to disable keepalived installation (cf. documentation) haproxy_use_keepalived: False HAProxy Keepalived configuration (cf. documentation) haproxy_keepalived_external_vip_cidr: "{{external_lb_vip_address}}/32" haproxy_keepalived_internal_vip_cidr: "{{internal_lb_vip_address}}/32" haproxy_keepalived_external_interface: haproxy_keepalived_internal_interface: Defines the default VRRP id used for keepalived with haproxy. Overwrite it to your value to make sure you don't overlap with existing VRRPs id on your network. Default is 10 for the external and 11 for the internal VRRPs haproxy_keepalived_external_virtual_router_id: haproxy_keepalived_internal_virtual_router_id: Defines the VRRP master/backup priority. Defaults respectively to 100 and 20 haproxy_keepalived_priority_master: haproxy_keepalived_priority_backup: All the previous variables are used in a var file, fed to the keepalived role. To use another file to feed the role, override the following var: haproxy_keepalived_vars_file: 'vars/configs/keepalived_haproxy.yml' 33

10. Document history and additional information

10.1. Document change history

This version replaces and obsoletes all previous versions. The most recent versions are listed in the following table:

Revision Date    Release information
2015-12-10       Rackspace Private Cloud r11.1.0 release

10.2. Additional resources

These additional resources help you learn more about Rackspace Private Cloud Powered By OpenStack and OpenStack.

- Rackspace Private Cloud v11 Administrator Guide
- Rackspace Private Cloud v11 FAQ
- Rackspace Private Cloud v11 Installation Guide
- Rackspace Private Cloud v11 Object Storage Deployment
- Rackspace Private Cloud v11 Operations Guide
- Rackspace Private Cloud v11 Release Notes
- Rackspace Private Cloud v11 Upgrade Guide
- Rackspace Private Cloud Knowledge Center
- OpenStack Documentation
- OpenStack Developer Documentation
- OpenStack API Quick Start
- OpenStack Block Storage (cinder) Developer Documentation
- OpenStack Compute (nova) Developer Documentation
- OpenStack Compute API v2 Developer Guide
- OpenStack Dashboard (horizon) Developer Documentation
- OpenStack Identity (keystone) Developer Documentation
- OpenStack Image service (glance) Developer Documentation
- OpenStack Object Storage (swift) Developer Documentation