Ubuntu Cloud Infrastructure - Jumpstart Deployment Customer - Date

Participants

Consultant Name, Canonical Cloud Consultant, name.lastname@canonical.com
Cloud Architect Name, Canonical Cloud Architect, name.lastname@canonical.com
Project Manager Name, Canonical Project Manager, name.lastname@canonical.com
Customer Name, Customer Company, customer@email.address

The provided hardware complies with the Hardware Prerequisites document

Before we start we must make sure that the hardware complies with the Customer Hardware Prerequisites: Ubuntu Seed Cloud.

Section: The provided hardware complies with the Hardware Prerequisites document
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Architecture Overview

Note: Please, if possible, try to adapt this diagram to the customer environment using the template here.

Provided Subnets
Management: 1.1.21.0/27
Internal: 1.1.21.32/27
Floating (Public): 1.1.21.64/26

Description of the Nodes and Services

Note: We list here the nodes and the services allocated to them and, optionally, an explanation of why each service goes to which node. The MAAS node, zookeeper node and controller node will be hosted on a single hypervisor.

Hostname                                IP                                          Services
maas.customer.com                       1.1.21.4                                    1. MAAS
zookeeper.customer.com                  Dynamically assigned by MAAS DHCP service   1. Zookeeper / Juju server
nova-cloud-controller-01.customer.com   Dynamically assigned by MAAS DHCP service   1. Nova Cloud Controller 2. MySQL 3. Keystone 4. OpenStack Dashboard (Horizon)
swift-storage-01.customer.com           Dynamically assigned by MAAS DHCP service   1. Swift Storage 2. Glance
swift-storage-02.customer.com           Dynamically assigned by MAAS DHCP service   1. Swift Storage
swift-storage-03.customer.com           Dynamically assigned by MAAS DHCP service   1. Swift Storage
swift-storage-04.customer.com           Dynamically assigned by MAAS DHCP service   1. Swift Storage
swift-storage-05.customer.com           Dynamically assigned by MAAS DHCP service   1. Swift Storage
swift-api-01.customer.com               Dynamically assigned by MAAS DHCP service   1. Swift Proxy
swift-api-02.customer.com               Dynamically assigned by MAAS DHCP service   1. Swift Proxy
db-rabbit-01.customer.com               Dynamically assigned by MAAS DHCP service   1. RabbitMQ
nova-compute-01.customer.com            Dynamically assigned by MAAS DHCP service   1. Nova Compute
nova-compute-02.customer.com            Dynamically assigned by MAAS DHCP service   1. Nova Compute

Section: Description of the nodes and services
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Installing the MAAS Server Operating System

Note: We assume in this example that the MAAS server has a dedicated NIC to manage the IPMI cards of the nodes. Please update accordingly for every customer.

1. Boot the server and install via USB or CD.
2. Choose Install Ubuntu Server.
3. Configure the network manually:
   IP: 1.1.21.4
   Netmask: 255.255.255.128
   Gateway: 1.1.21.1
   DNS: 1.1.21.22
   Hostname: maas
   Domain Name: customer.com

After that we add the IPMI network to eth1. Edit /etc/network/interfaces and add eth1:

auto eth1
iface eth1 inet static
    address 10.0.0.80
    netmask 255.255.255.0

Section: Installing the MAAS Server Operating System
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Installing and Configuring the MAAS Service

Install MAAS

Login by ssh to the MAAS server.

$ sudo apt-get update
$ sudo apt-get install -y maas maas-dhcp maas-dns maas-cli ntp

After this we can log into MAAS at http://1.1.21.4/maas
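Optional sanity check (a minimal sketch, assuming curl is installed on the MAAS server; the URL is the one above): confirm the web UI is answering before continuing.

$ curl -s -o /dev/null -w "%{http_code}\n" http://1.1.21.4/maas/
# Expect a 200, or a 30x redirect to the login page.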

Create a superuser account

This is the account you will use to log into the MAAS web interface. We will use root as the username and ubuntu as the password.

$ sudo maas createsuperuser

Import the isos from the Canonical archive

Specify the images to download. We are going to use only Ubuntu 12.04 LTS (Precise). Edit /etc/maas/import_pxe_files and change the RELEASES line so it looks like:

RELEASES="precise"

Save the file and exit.

Download the images:

$ sudo maas-import-pxe-files

Configure the rest with the Web UI

Add the ssh public key: Log into the MAAS web interface and add an ssh public key under your user preferences. This key will be added to the .ssh/authorized_keys of the ubuntu user for machines deployed via MAAS.

Configure the DHCP service: Click on the settings icon next to the search box in the MAAS main screen. Scroll down and under "Default domain for new nodes" enter customer.com. Then scroll back up and on "Cluster controllers" click the edit icon. Under "DNS zone name" enter customer.com. Click on the edit icon of the NIC that will be used for DHCP. Under "Management" select "Manage DHCP and DNS". Under "Interfaces" click on the edit button for the interface that will make the DHCP offers to the MAAS nodes and enter these settings:

IP: 1.1.21.4
Subnet mask: 255.255.255.224
Broadcast IP: 1.1.21.31
Router IP: 1.1.21.1
IP range low: 1.1.21.10
IP range high: 1.1.21.30
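Once the cluster interface is set to "Manage DHCP and DNS", a quick optional check (a sketch, assuming the dnsutils package is installed so dig is available) is that the MAAS DNS service answers for the new zone:

$ dig @1.1.21.4 customer.com SOA +short
# An SOA record coming back means the customer.com zone has been generated by MAAS.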

Configure squid-deb-proxy

Squid-deb-proxy will cache the deb packages downloaded by the MAAS nodes from the Ubuntu repositories to save bandwidth; this way the deb packages are not pulled from the public repositories more than once.

Allow the local subnet in squid-deb-proxy:

$ sudo su
$ cat << EOF > /etc/squid-deb-proxy/allowed-networks-src.acl.d/99-jumpstart
1.1.21.0/25
EOF

Add access to the following servers to squid-deb-proxy. These archives contain the OpenStack and other required deb packages:

$ sudo su
$ cat << EOF > /etc/squid-deb-proxy/mirror-dstdomain.acl.d/88-jumpstart
cloud-images.ubuntu.com
ubuntu-cloud.archive.canonical.com
keyserver.ubuntu.com
.launchpad.net
EOF

Apply the changes to squid-deb-proxy:

$ sudo restart squid-deb-proxy

Section: Installing and Configuring the MAAS Service
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Enlisting and Commissioning Machines

Power on all the remaining machines.

Note: The machines must be set to PXE boot. They will automatically PXE boot from the MAAS server, enlist, then power off. Once machines are enlisted they will appear in the MAAS Web UI as Declared.

Edit each machine in MAAS, changing its hostname to the desired one, for example nova-compute-01.customer.com, then click to Accept and Commission each machine.

Note: To accept all the nodes from the MAAS command line interface we would do the following:

1. Log into MAAS from the MAAS server:

$ maas-cli login root http://maas/maas/

2. Enter the API Key found in the MAAS Web UI under root and then Preferences.

3. Accept all the nodes:

$ maas-cli root nodes accept-all

MAAS will power on and commission the accepted machines. Once commissioned, the machines will show as being in the Ready state in the MAAS Web UI and will be powered off by MAAS.

Section: Enlisting and Commissioning Machines
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Setting up the Juju Bootstrap Node

Change the DNS name of the node that will be the bootstrap node

In the MAAS Web UI, change the name of the machine intended to be the bootstrap node to zookeeper.customer.com. Also change the names of all other machines to match their intended role, e.g. nova-compute-01.customer.com, nova-compute-02.customer.com, and so on.

Create the Juju client environment for MAAS

The Juju client can be installed on any machine; for convenience we will use the MAAS server.

Login by ssh to the MAAS server and install Juju from the ppa:juju/pkgs repository:

$ sudo apt-get install -y python-software-properties
$ sudo apt-add-repository ppa:juju/pkgs
$ sudo apt-get update
$ sudo apt-get install -y juju juju-jitsu charm-tools bzr
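Optional: confirm the client was actually installed from the Juju PPA rather than the Ubuntu archive (apt-cache is part of the standard apt tooling; the exact origin string shown will depend on the PPA mirror):

$ apt-cache policy juju
# The installed version should match the candidate coming from ppa.launchpad.net/juju/pkgs.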

Edit ~/.juju/environments.yaml on the MAAS node, adding the following with the correct OAuth API key found in the MAAS Web UI under the user preferences:

environments:
  mymaas:
    type: maas
    maas-server: 'http://1.1.21.4:80/maas'
    maas-oauth: 'cp4urdladeqnyzjpx5:shej3calr4kkkygrnj:bwmnfkgrqfp7mxkh9wmn2b3mmkksmnrx'
    admin-secret: 'nothing'
    default-series: precise
    juju-origin: ppa

Deploy the bootstrap/zookeeper Juju node

Note: Juju uses the ssh key that we configured in MAAS. We need to have that ssh key on the node we run juju bootstrap from.

Now run juju bootstrap, which will power on zookeeper.customer.com as the bootstrap node:

$ juju bootstrap --constraints maas-name=zookeeper.customer.com

Note: If we don't need or want to specify which node the zookeeper is installed on, we can omit the --constraints option above. This will take a few minutes to complete.

Note: After bootstrapping the node we will see it in the MAAS Web UI as Allocated to root. We can check the progress in the rsyslog file in MAAS for that node; if, for example, its IP is 1.1.21.16, we would do this:

$ sudo tail -f /var/log/maas/rsyslog/1-1-21-16.customer.com/2013/02/21/messages

As the installation completes, check the status:

$ juju status

At this point, the bootstrap/zookeeper node should show as running. Now clear the created constraint for maas-name:

$ juju set-constraints maas-name=

Make sure that the constraint is cleared:

$ juju get-constraints

Note: More information about juju constraints is available here:
https://juju.ubuntu.com/docs/constraints.html

We want the ppa version of Juju on the Juju node as well, so we proceed as we did before, this time on the Juju node:

$ juju ssh 0
$ sudo apt-add-repository ppa:juju/pkgs
$ sudo apt-get update
$ sudo apt-get install juju

Note: Alternatively, we can just include the following line in the file .juju/environments.yaml:

juju-origin: ppa

Section: Setting up the Juju Bootstrap Node
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Verify all the nodes by installing a clean Ubuntu image

We need to download the Ubuntu charm to deploy it to all the hosts. We can do this from the MAAS node:

$ sudo apt-get install charm-tools
$ mkdir -p $HOME/charms/precise
$ cd $HOME/charms/precise
$ charm get ubuntu

Note: We can specify with -n the total number of nodes we will be deploying the Ubuntu charm to. Deploy the Ubuntu charm to all the nodes:

$ juju deploy -n 18 ubuntu
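A quick way to check how the deployment is progressing across all 18 units (a minimal sketch using only the juju status output and standard shell tools):

$ juju status | grep agent-state | sort | uniq -c
# Re-run until every machine and unit agent reports a running/started state.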

After verifying that the installation of Ubuntu on the nodes went well, we destroy the ubuntu service that we just deployed:

$ juju destroy-service ubuntu

Section: Verify all the nodes by installing a clean Ubuntu image
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Install NTP on each machine

In order to ensure that all machines are in sync, install the ntp package on each machine:

$ for machine in `juju status | grep machine | grep -v machines | awk '{ print $2 }'`; do
      juju ssh $machine "sudo apt-get -y install ntp"
  done

Section: Install NTP on each machine
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.
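To confirm that every machine is actually syncing, the same loop pattern can be reused with ntpq, which ships with the ntp package (a sketch; the peer list will differ per site):

$ for machine in `juju status | grep machine | grep -v machines | awk '{ print $2 }'`; do
      juju ssh $machine "ntpq -p | head -n 5"
  done
# Each machine should list at least one reachable time source.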

Deploy OpenStack

All new machines should show as Ready in the MAAS Web UI and the bootstrap node should be running; we can now begin deploying the OpenStack charms.

Define where the services will be allocated

Note: The table at the beginning of this document shows the decided roles as well; this is a reminder before we start deploying roles/services to nodes.

We will use the following setup:

Juju Bootstrap node
  Main service: zookeeper
  Other services allocated: (none)

Nova Cloud Controller
  Main service: Nova Cloud Controller
  Other services allocated: Keystone, OpenStack Dashboard (Horizon)

Swift storage nodes (2-5)
  Main service: Swift Storage
  Other services allocated: (none)

Swift storage node 1
  Main service: Swift Storage
  Other services allocated: Glance

Swift Proxy node
  Main service: Swift Proxy
  Other services allocated: (none)

Nova Compute nodes (3-6)
  Main service: Nova Compute
  Other services allocated: (none)

RabbitMQ node
  Main service: RabbitMQ
  Other services allocated: MySQL

Note: We will deploy the main services, as described above, to the desired nodes with juju deploy, and the other services with jitsu deploy-to.

Section: Define where the services will be allocated
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Checkout the OpenStack charms locally

$ mkdir -p $HOME/charms/precise
$ cd $HOME/charms/precise
$ for i in glance keystone nova-cloud-controller nova-compute openstack-dashboard cinder rabbitmq-server swift-storage swift-proxy mysql landscape-client; do
      charm get $i
  done
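Optional quick check before writing the configuration file: make sure every charm directory was actually fetched (plain ls; the listing should match the loop above):

$ ls $HOME/charms/precise
# Expect one directory per charm: cinder, glance, keystone, landscape-client, mysql,
# nova-cloud-controller, nova-compute, openstack-dashboard, rabbitmq-server, swift-proxy, swift-storage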

Create the OpenStack configuration file

Edit /home/ubuntu/charms/openstack.yaml on the MAAS node, adding the following configuration needed for the OpenStack charms:

keystone:
  openstack-origin: cloud:precise-folsom/updates
  admin-password: openstack
nova-cloud-controller:
  openstack-origin: cloud:precise-folsom/updates
  network-manager: FlatDHCPManager
  bridge-interface: br100
  bridge-ip: 1.1.21.9
  bridge-netmask: 255.255.255.224
  config-flags: ec2_private_dns_show_ip=true,flat_network_bridge=br100,public_interface=eth0
nova-compute:
  openstack-origin: cloud:precise-folsom/updates
  bridge-interface: br100
  bridge-ip: 1.1.21.8
  bridge-netmask: 255.255.255.224
  flat-interface: eth0
  config-flags: ec2_private_dns_show_ip=true,flat_network_bridge=br100,public_interface=eth0
swift-proxy:
  openstack-origin: cloud:precise-folsom/updates
  country: "US"
  state: "NJ"
  locale: "Some locale"
  auth-type: keystone
swift-storage:
  openstack-origin: cloud:precise-folsom/updates
  block-device: sdb
  overwrite: "true"   # remove overwrite: "true" after deployment so the block device is not erased if you deploy again
glance:
  openstack-origin: cloud:precise-folsom/updates
cinder:
  openstack-origin: cloud:precise-folsom/updates
  block-device: sdb
  overwrite: "true"
openstack-dashboard:
  openstack-origin: cloud:precise-folsom/updates

Section: Checkout the OpenStack charms locally. Create the OpenStack configuration file
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.
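Since the whole deployment reads this file, it is worth validating the indentation before the first juju deploy. A minimal sketch, assuming the python-yaml package is installed on the MAAS node (it is not part of the steps above):

$ python -c "import yaml; yaml.safe_load(open('/home/ubuntu/charms/openstack.yaml'))" && echo "openstack.yaml parses cleanly"
# Any indentation mistake makes the parser raise an error instead of printing the message.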

Deploy the OpenStack Services

Note: juju deploy deploys a service to a MAAS node and juju add-unit scales out the deployed service to additional nodes. By using constraints we can decide which node we deploy to or scale out to.

Note: To watch the output of juju running hooks on each node, open an additional terminal on the MAAS server and run:

$ juju debug-log

Note: The OpenStack related services need the openstack.yaml file when we deploy them.

Deploy Swift services to the Swift storage nodes

We start by deploying Swift Storage on every node allocated to Swift Storage. From the directory where we have the openstack.yaml file and the charms directory, we run the following commands:

$ juju deploy --repository . --config=openstack.yaml --constraints maas-name=swift-storage-01.customer.com local:precise/swift-storage
$ juju set-constraints maas-name=swift-storage-02.customer.com
$ juju add-unit swift-storage
$ juju set-constraints maas-name=swift-storage-03.customer.com
$ juju add-unit swift-storage
$ juju set-constraints maas-name=swift-storage-04.customer.com
$ juju add-unit swift-storage
$ juju set-constraints maas-name=swift-storage-05.customer.com
$ juju add-unit swift-storage

Clear the constraints:

$ juju set-constraints maas-name=

Make sure that we don't overwrite sdb in future deployments:

$ juju set swift-storage overwrite=false

Deploy swift-proxy (Swift API)

$ juju deploy --repository . --config=openstack.yaml --constraints maas-name=swift-api-01.customer.com local:precise/swift-proxy

Deploy Nova Cloud Controller

$ juju deploy --repository . --config=openstack.yaml --constraints maas-name=nova-cloud-controller-01.customer.com local:precise/nova-cloud-controller

Find the ID of the nodes that will allocate each service

If we need to deploy more than one service to a particular node, we will use the jitsu deploy-to command, which works with node IDs instead of node names. With juju status we check the identification number assigned to each node and use it to deploy services to the desired nodes. If the output of juju status contains this:

  17:
    agent-state: running
    dns-name: nova-cloud-controller-01.customer.com
    instance-id: /MAAS/api/1.0/nodes/node-6de875ee-7ba9-11e2-88c2-003048fd7032/
    instance-state: unknown

then we use the node ID 17 to deploy the desired services to nova-cloud-controller-01.customer.com, one at a time, waiting for each to finish.

Note: If any of the services below has a dedicated machine, we use juju deploy as with the services above instead of jitsu deploy-to.

Deploy Glance

$ jitsu deploy-to <machine-id> --repository . --config=openstack.yaml local:precise/glance

Deploy Cinder

$ jitsu deploy-to <machine-id> --repository . --config=openstack.yaml local:precise/cinder

Deploy MySQL

$ jitsu deploy-to <machine-id> --repository . local:precise/mysql

Deploy RabbitMQ Server

$ jitsu deploy-to <machine-id> --repository . local:precise/rabbitmq-server

Deploy Nova Compute

$ jitsu deploy-to <machine-id> --repository . --config=openstack.yaml local:precise/nova-compute
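For the remaining deploy-to targets, the machine ID can be pulled out of juju status without scanning the whole output; a sketch using standard grep against the status format shown above (adjust the hostname for each target node):

$ juju status | grep -B 2 'dns-name: nova-cloud-controller-01.customer.com'
# The machine ID is the number ending in ':' two lines above the dns-name line.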

Deploy Keystone

$ jitsu deploy-to <machine-id> --repository . --config=openstack.yaml local:precise/keystone

Deploy Horizon

$ jitsu deploy-to <machine-id> --repository . --config=openstack.yaml local:precise/openstack-dashboard

Section: Deploy the OpenStack Charms
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Add relations between the OpenStack services

Check juju status

Verify that the proper machines are deployed and that there are no errors reported:

$ juju status

Note: It is recommended to verify that each relation is added successfully, using juju status, before moving to the next relation. Some relations may involve the same node, such as mysql, and may conflict if configuration happens on the same node at the same time.

Note: It is good practice to check the juju log during the creation of the relations:

$ juju debug-log

Start adding relations between charms:

$ juju add-relation keystone mysql

We wait until the relation is set. After it finishes, check it with juju status:

$ juju status mysql
$ juju status keystone

If the relations are set and the services started, then we proceed with the rest:

$ juju add-relation nova-cloud-controller mysql
$ juju add-relation nova-cloud-controller rabbitmq-server
$ juju add-relation nova-cloud-controller glance
$ juju add-relation nova-cloud-controller keystone
$ juju add-relation nova-compute mysql

$ juju add-relation nova-compute rabbitmq-server
$ juju add-relation nova-compute glance
$ juju add-relation nova-compute nova-cloud-controller
$ juju add-relation glance mysql
$ juju add-relation glance keystone
$ juju add-relation cinder keystone
$ juju add-relation cinder mysql
$ juju add-relation cinder rabbitmq-server
$ juju add-relation cinder nova-cloud-controller
$ juju add-relation openstack-dashboard keystone
$ juju add-relation swift-proxy swift-storage
$ juju add-relation swift-proxy keystone

Finally, the output of juju status should show all the relations.

Section: Add relations between the OpenStack services
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.

Set up Access to OpenStack and Start Booting VMs

Once the charms are deployed and the relations are added, we can configure the private network on each compute node.

Get the Keystone IP Address and admin_token

Get the admin token from the machine running the keystone service:

$ juju ssh keystone/0
$ sudo grep admin_token /etc/keystone/keystone.conf

Save the admin_token to use in the next step.

Create an rc file with the OpenStack environment

We create a nova.rc file that will have the required environment to run OpenStack commands (mainly nova).

First find the Keystone hostname

To find the Keystone hostname we can run juju status keystone:

$ juju status keystone | grep dns-name
2013-02-22 10:27:54,355 INFO Connecting to environment...
2013-02-22 10:27:55,038 INFO Connected to environment.
2013-02-22 10:27:55,490 INFO 'status' command finished successfully
    dns-name: <keystone_hostname>

Now create the rc file:

$ cat << EOF > ./nova.rc
export SERVICE_ENDPOINT=http://<keystone_hostname>:35357/v2.0/
export SERVICE_TOKEN=<keystone_admin_token>
export OS_AUTH_URL=http://<keystone_hostname>:35357/v2.0/
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
EOF

Load the rc file and check the OpenStack environment

Before we run any nova or glance command we load the file we just created:

$ source ./nova.rc
$ nova endpoints

At this point the output of nova endpoints should show the information of all the available OpenStack endpoints.

Download the Ubuntu Cloud Image

$ mkdir ~/iso
$ cd ~/iso
$ wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

Import the Ubuntu Cloud Image into Glance

Note: The glance command comes with the package glance-client, which may need to be installed on the machine you plan to run the command from.

$ sudo apt-get install glance-client
$ glance add name="precise x86_64" is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img

Create the OpenStack private network

Note: nova-manage can be run from the nova-cloud-controller node or any of the nova-compute nodes. To access the node we run the following command:

$ juju ssh nova-cloud-controller/0

Note: If the flat-interface (the physical interface used by the bridge connecting the VMs) on the compute nodes is not up, for instance if only eth0 is configured, then the interface needs to be brought up before we can boot instances. On the compute nodes, add these lines to /etc/network/interfaces and then run ifup eth1:

auto eth1
iface eth1 inet manual
    up ifconfig eth1 up

$ sudo nova-manage network create --label=private --fixed_range_v4=1.1.21.32/27 --num_networks=1 --network_size=32 --multi_host=t --bridge_interface=eth0 --bridge=br100

To make sure that we have created the network we can now run the following command:

$ sudo nova-manage network list

Create the OpenStack public / floating network

$ sudo nova-manage floating create --ip_range=1.1.21.64/26
$ sudo nova-manage floating list

Allow ping and ssh access by adding them to the default security group

Note: The following commands are run from a machine where we have the package python-novaclient installed, within a session where we have loaded the nova.rc file created above.

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Create and register the ssh keys in OpenStack

Generate a default keypair

This is done on the system we use as the OpenStack nova client. We name the private key file admin-key. Make sure the file doesn't exist before creating it.

$ ssh-keygen -t rsa -f ~/.ssh/admin-key

Copy the public key into the Nova Cloud Controller

We will name it admin-key.

Note: In the precise version of python-novaclient the command works with --pub_key instead of --pub-key.

$ nova keypair-add --pub-key ~/.ssh/admin-key.pub admin-key

And make sure it has been successfully created:

$ nova keypair-list

Create our first instance

We created an image with glance before; now we need the image ID to start our first instance. The ID can be found with this command:

$ nova image-list

Note: We can also use the command glance image-list.

Boot the instance:

$ nova boot --flavor=m1.small --image=<image_id_from_glance_index> --key-name admin-key testserver1

Add a floating IP to the new instance

First we allocate a floating IP from the ones we created above:

$ nova floating-ip-create

Then we associate the floating IP obtained above with the new instance:

$ nova add-floating-ip 9363f677-2a80-447b-a606-a5bd4970b8e6 1.1.21.65

Create and attach a Cinder volume to the instance

Note: All these steps can also be done through the Horizon Web UI.

We make sure that cinder works by creating a 1GB volume and attaching it to the VM:

$ cinder create --display_name test-cinder1 1

Get the ID of the volume with cinder list:

$ cinder list

Attach it to the VM as vdb:

$ nova volume-attach testserver1 bbb5c5c2-a5fd-4fe1-89c2-d16fe91578d4 /dev/vdb

Now we should be able to ssh into the VM testserver1 from a server with the private key we created above and see that vdb appears in /proc/partitions.

Section: Set up Access to OpenStack and Start Booting VMs
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.
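That last check can be scripted from the nova client machine; a minimal sketch assuming the floating IP associated earlier (1.1.21.65) and the admin-key generated above:

$ ssh -i ~/.ssh/admin-key ubuntu@1.1.21.65 "cat /proc/partitions"
# The vdb device should be listed once the volume is attached.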

Add the nodes to Landscape

The registration of the nodes is done in two parts:
1. First, registering the MAAS and Juju nodes to Landscape manually.
2. Then, using the landscape-client charm to register the subsequently Juju-deployed nodes.

Note: We will need the Landscape credentials (account name and password) to proceed with the registration of the nodes.

Register the MAAS and Juju nodes to Landscape

For the MAAS node and the Juju node, the registration is done manually.

Register the MAAS node

Log into the MAAS node, install the package and register the node:

$ sudo apt-get install landscape-client
$ sudo landscape-config --account-name <landscape account name> -p <landscape password>

Register the Juju node

$ juju ssh 0
$ sudo apt-get install landscape-client
$ sudo landscape-config --account-name <landscape account name> -p <landscape password>

Register the rest of the nodes with Juju

Deploy the landscape-client charm

The first thing we need is a configuration file for the landscape-client charm with the credentials:

$ cat << EOF > landscape.yaml
landscape-client:
  account-name: <landscape account name>
  registration-password: <landscape password>
EOF

The landscape-client charm was downloaded previously; if not, download it to the charms/precise directory with charm get landscape-client. We deploy it with Juju first:

$ juju deploy --repository . --config=landscape.yaml local:precise/landscape-client

Register the nodes using the landscape-client charm

The landscape-client charm is a subordinate charm; because of this it needs to be deployed by adding a relation to another service already deployed on the nodes.

Note: We only need one landscape-client per node, but we may have more than one service per node. We add relations to all the deployed services; if a node has two or more services, the second time we add a relation for that node nothing further happens.

Add the relations:

$ for i in glance keystone nova-cloud-controller nova-compute openstack-dashboard rabbitmq-server swift-storage swift-proxy mysql; do
      juju add-relation landscape-client $i
  done

After the relations are added we can accept the nodes by logging in with our credentials at https://landscape.canonical.com

Section: Register the MAAS and Juju nodes to Landscape
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.
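To confirm that the subordinate units actually came up on each node, juju status can be scoped to the charm, and the client's own log can be inspected on any one machine (the log path is an assumption about where landscape-client writes by default and may differ between versions):

$ juju status landscape-client
$ juju ssh nova-compute/0 "sudo tail -n 20 /var/log/landscape/broker.log"
# Look for a successful registration/exchange with the Landscape server.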

Enable VNC access to the OpenStack Instances

By default, the Ubuntu Cloud charms don't enable VNC access to the OpenStack instances. To enable it, we need to manually install the required packages and configure the Nova Compute nodes and the VNC proxy.

Set up the OpenStack VNC Proxy

We need to decide which node will be our VNC proxy. Let's use the Nova Cloud Controller for this setup. Check with juju status the service unit for nova-cloud-controller and ssh to it with juju:

$ juju ssh nova-cloud-controller/0

Install the required packages:

$ sudo apt-get install novnc nova-novncproxy websockify nova-consoleauth

Using the public IP of the Nova Cloud Controller node for the VNC proxy, write this configuration in /etc/nova/nova.conf:

vnc_enabled=true
vncserver_listen="0.0.0.0"
novncproxy_base_url="http://<ip of the proxy node>:6080/vnc_auto.html"

And restart the services:

$ sudo restart nova-consoleauth
$ sudo restart nova-novncproxy
$ sudo restart nova-scheduler

Set up VNC on the Nova Compute nodes

Find the IP for the bridge (usually on br100) and configure the VNC proxy in /etc/nova/nova.conf.

Note: There will be no IP until a VM has already been run on the Nova Compute node.

Edit /etc/nova/nova.conf and add these lines:

vnc_enabled=true
vncserver_listen="0.0.0.0"
novncproxy_base_url="http://<ip of the proxy node>:6080/vnc_auto.html"
vncserver_proxyclient_address=<compute_node_management_ip>

Change the Juju config flags of nova-compute and nova-cloud-controller

Get the current setting of config-flags for the nova-compute and nova-cloud-controller charms:

$ juju get nova-compute | grep -A3 config-flags
  config-flags:
    description: Comma separated list of key=value config flags to be set in nova.conf.
    type: string
    value: ec2_private_dns_show_ip=true,flat_network_bridge=br100,public_interface=eth0

And set them to include the VNC proxy configuration:

$ juju set nova-compute config-flags=ec2_private_dns_show_ip=true,flat_network_bridge=br100,public_interface=eth0,vnc_enabled=true,vncserver_listen=0.0.0.0,novncproxy_base_url=http://<ip of the proxy node>:6080/vnc_auto.html

Do exactly the same for the nova-cloud-controller service:

$ juju get nova-cloud-controller | grep -A3 config-flags
  config-flags:
    description: Comma separated list of key=value config flags to be set in nova.conf.
    type: string
    value: ec2_private_dns_show_ip=true,flat_network_bridge=br100,public_interface=eth0

$ juju set nova-cloud-controller config-flags=ec2_private_dns_show_ip=true,flat_network_bridge=br100,public_interface=eth0,vnc_enabled=true,vncserver_listen=0.0.0.0,novncproxy_base_url=http://<ip of the proxy node>:6080/vnc_auto.html

Now you should be able to log into Horizon and access the consoles of the instances.

Section: Enable VNC Access to the OpenStack Instances
Status: PENDING / DONE
Comments: If there are issues or comments worth pointing out include them here, if not leave it blank.
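A quick end-to-end check can also be done from the nova client session (with nova.rc sourced as before); the get-vnc-console subcommand is an assumption about the python-novaclient version shipped with the Folsom cloud archive:

$ nova get-vnc-console testserver1 novnc
# The returned URL should point at http://<ip of the proxy node>:6080/vnc_auto.html?token=...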

Notes and Useful Scripts

Shutting down and Starting up OpenStack

The recommended order for shutting down this particular OpenStack deployment is as follows. When shutting down, it is recommended to stop all instances first if possible.

Shutdown:
1. Nova Compute
2. Nova Cloud Controller
3. Swift
4. Juju Bootstrap Node (Zookeeper)
5. MAAS

Bringing up OpenStack:
1. MAAS
2. Juju Bootstrap Node
3. Nova Cloud Controller (wait until juju status completes successfully)
4. Swift
5. Nova Compute

Note: If, after restarting, juju ssh 0 doesn't log you into the bootstrap node, ssh to the bootstrap node and restart the juju agent:

$ sudo restart juju-machine-agent

Check IPMI privileges

Check the version of IPMI. We detected that in old IPMI versions (1.29) some privileges for the user that MAAS creates are not set by default.

Check the IPMI configuration:

$ bmc-config -h 10.0.0.87 -u ADMIN -p ADMIN --checkout

Ensure that Lan_Enable_IPMI_Msgs is set to Yes and Lan_Privilege_Limit is set to Administrator for the user that MAAS uses (username maas, maas-enlist or maas-commission).

Set them accordingly for the right user (in the example, User3):

$ bmc-config -h 10.0.0.87 -u ADMIN -p ADMIN --commit --key-pair="User3:Lan_Privilege_Limit=Administrator"

Force a node to PXE boot with IPMI

$ cat ipmi-config-reboot
Section Chassis_Power_Conf
        Power_Restore_Policy        Off_State_AC_Apply
EndSection
Section Chassis_Boot_Flags
        Boot_Flags_Persistent       No
        Boot_Device                 PXE
EndSection

$ ipmi-chassis-config -h 10.0.0.91 -u ADMIN -p ADMIN --commit --filename ipmi-config-reboot
$ ipmipower -h 10.0.0.91 -u ADMIN -p ADMIN --cycle --on-if-off
10.0.0.91: ok

PXE boot all the nodes at once

$ cat ipmi_list.txt
10.0.0.75
10.0.0.79
10.0.0.67
10.0.0.74
10.0.0.68

10.0.0.71
10.0.0.73
10.0.0.77
10.0.0.88
10.0.0.89
10.0.0.90
10.0.0.91
10.0.0.81
10.0.0.87
10.0.0.84
10.0.0.86

$ cat ipmi-config-reboot
Section Chassis_Power_Conf
        Power_Restore_Policy        Off_State_AC_Apply
EndSection
Section Chassis_Boot_Flags
        Boot_Flags_Persistent       No
        Boot_Device                 PXE
EndSection

$ cat pxe-boot-all.py
#!/usr/bin/env python
import subprocess

# For every server in ipmi_list.txt: push the one-off PXE boot config, then power-cycle it
servers = open('ipmi_list.txt').readlines()
for server in servers:
    server = server.strip()
    print "Server: %s" % server
    subprocess.check_output(['ipmi-chassis-config', '-h', '%s' % server, '-u', 'ADMIN',
                             '-p', 'ADMIN', '--commit', '--filename', './ipmi-config-reboot'])
    subprocess.check_output(['ipmipower', '-h', '%s' % server, '-u', 'ADMIN',
                             '-p', 'ADMIN', '--cycle', '--on-if-off'])

Check IPMI status

$ cat pxe-status.sh
#!/bin/bash
SERVER=${1}
[ -z ${SERVER} ] && exit 1
echo "Server: ${SERVER}"
ipmipower -h ${SERVER} -u ADMIN -p ADMIN --stat
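When checking the whole rack, the status script above can simply be looped over the same ipmi_list.txt used by pxe-boot-all.py (a small convenience sketch):

$ chmod +x pxe-status.sh
$ for server in $(cat ipmi_list.txt); do ./pxe-status.sh $server; done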

Restart Juju services

If Juju doesn't work as expected, for instance juju ssh 0 not logging in, we can restart the Juju services from the zookeeper node this way:

$ sudo restart juju-provision-agent
$ sudo restart juju-machine-agent

Increase the Client Connections for Zookeeper

By default Zookeeper has a limit of 10 simultaneous client connections, and this can lead to issues with Juju when we install a number of charms on the same node. Adding this line to /etc/zookeeper/conf/zoo.cfg will fix it:

maxClientCnxns=30

References

MAAS
MaaS Website: http://maas.ubuntu.com/
MaaS Documentation: http://maas.ubuntu.com/docs/
Setup the MaaS server: http://maas.ubuntu.com/docs/install.html

Juju
Juju documentation: https://juju.ubuntu.com/docs/index.html and https://juju.ubuntu.com/docs/getting-started.html
Install Juju: http://maas.ubuntu.com/docs/juju-quick-start.html
Setup Juju: http://maas.ubuntu.com/docs/juju-quick-start.html#your-api-key-ssh-key-and-environments-yaml
Juju FAQ: https://juju.ubuntu.com/docs/faq.html
Juju webcast: http://www.brighttalk.com/community/cloud-computing/webcasts?q=juju
Juju demo: http://www.youtube.com/watch?v=6j2fqenypdy

Juju Charms
Create Juju charms: https://juju.ubuntu.com/docs/write-charm.html
Troubleshooting: https://juju.ubuntu.com/docs/hook-debugging.html
Booting from a volume: http://docs.openstack.org/trunk/openstack-compute/admin/content/boot-from-volume.html
VNC Configuration: http://docs.openstack.org/trunk/openstack-compute/admin/content/nova-vncproxy-replaced-with-nova-novncproxy.html
VLAN Configuration: http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-vlan-networking.html

Full list of Juju Commands

# Bootstrap
juju bootstrap --constraints maas-name=zookeeper.customer.com
juju set-constraints maas-name=

# Deploy Swift Storage
juju deploy --repository . --config=openstack.yaml --constraints maas-name=swift-storage-01.customer.com local:precise/swift-storage
juju set-constraints maas-name=swift-storage-02.customer.com
juju add-unit swift-storage
juju set-constraints maas-name=swift-storage-03.customer.com
juju add-unit swift-storage
juju set-constraints maas-name=swift-storage-04.customer.com
juju add-unit swift-storage
juju set-constraints maas-name=swift-storage-05.customer.com
juju add-unit swift-storage

# Clear the constraints
juju set-constraints maas-name=

# Deploy Nova Cloud Controller
juju deploy --repository . --config=openstack.yaml --constraints maas-name=nova-cloud-controller-01.customer.com local:precise/nova-cloud-controller

# Deploy Swift Proxy ( Swift API )
juju deploy --repository . --config=openstack.yaml --constraints maas-name=swift-api-01.customer.com local:precise/swift-proxy

# Deploy Glance ( to swift-storage-01.customer.com )
jitsu deploy-to <machine-id-of-swift-storage-01> --repository . local:precise/glance

# Deploy MySQL ( to nova-cloud-controller-01.customer.com )
jitsu deploy-to <machine-id-of-nova-cloud-controller> --repository . local:precise/mysql

# Deploy RabbitMQ ( to nova-cloud-controller-01.customer.com )
jitsu deploy-to <machine-id-of-nova-cloud-controller> --repository . local:precise/rabbitmq-server

# Deploy nova-compute ( to swift-api-01.customer.com )
jitsu deploy-to <machine-id-of-swift-api-01> --repository . --config=openstack.yaml local:precise/nova-compute

# Deploy keystone ( to the zookeeper machine )
jitsu deploy-to 0 --repository . --config=openstack.yaml local:precise/keystone

# Deploy Horizon ( to the zookeeper machine )
jitsu deploy-to 0 --repository . local:precise/openstack-dashboard

#### Relate the services ####
juju add-relation keystone mysql
sleep 20
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server
juju add-relation nova-compute glance
juju add-relation nova-compute nova-cloud-controller

juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation cinder keystone
juju add-relation cinder mysql
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation openstack-dashboard keystone
juju add-relation swift-proxy keystone
juju add-relation swift-proxy swift-storage

## Get the keystone IP Address and admin_token ##
juju status keystone
- Get the IP address ( not the hostname ) of the machine running the keystone service
- With that IP, get the admin token from /etc/keystone/keystone.conf
  --- juju ssh keystone/0
  --- sudo grep admin_token /etc/keystone/keystone.conf
  --- Get the admin_token and save it somewhere

### Create an RC file for convenience ###
cat << EOF > ./nova.rc
export SERVICE_ENDPOINT=http://<keystone_ip>:35357/v2.0/
export SERVICE_TOKEN=<keystone_admin_token>
export OS_AUTH_URL=http://<keystone_ip>:35357/v2.0/
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
EOF

### Source the nova.rc file ###
source ./nova.rc

### Download the image and import it into glance ###
mkdir ~/iso
cd ~/iso
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img

glance add name="precise x86_64" is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img

### Create private network ###
sudo nova-manage network create --label=private --fixed_range_v4=1.1.21.32/27 --num_networks=1 --network_size=32 --multi_host=t --bridge_interface=eth0 --bridge=br100

### Create public network ###
sudo nova-manage floating create --ip_range=1.1.21.64/26

### Add ping and ssh to the default security group ###
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

### Create and register keys ###
cd ~/.ssh
ssh-keygen -t rsa -f ~/.ssh/admin-key
nova keypair-add --pub-key ~/.ssh/admin-key.pub admin-key

### Create our first instance ###
nova boot --flavor=m1.small --image=< image id from glance index > --key-name admin-key testserver1
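The quick list above stops at nova boot; for completeness, the floating-IP step from the main walkthrough can be appended in the same style (substitute the address returned by floating-ip-create):

### Add a floating IP to the instance ###
nova floating-ip-create
nova add-floating-ip testserver1 <floating_ip>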