Enterprise Private Cloud: OpenStack Deployment in 20 Minutes (Part 1)

Introduction

Welcome to this Oracle OpenWorld lab, and thanks for joining us. This lab will take you through the basics of configuring OpenStack on Oracle Solaris 11. OpenStack is a popular open source cloud infrastructure that has been integrated into Oracle Solaris. OpenStack includes a number of services that help you manage the compute, storage and network resources in your data center through a central web-based dashboard. These services can be summarized as follows:

Service Name   Description
Nova           Compute virtualization
Cinder         Block storage
Neutron        Software Defined Networking (SDN)
Keystone       Authentication between cloud services
Glance         Image management and deployment
Horizon        Web-based dashboard

Given the time allocated to us for this lab, we will simply set up OpenStack on a single node. In a typical enterprise deployment, these services would be spread across multiple nodes with load balancing and other high-availability capabilities.

With the Oracle Solaris 11.2 release, a new archive format was introduced called Unified Archives. Unified Archives provide easy golden-image style deployment, allowing administrators to quickly snapshot a running system and deploy it as clones within a cloud environment. Using this technology, an OpenStack-based
Unified Archive was created and made available, which makes deploying this complex software on a single node easy (see the Unified Archives download page on the Oracle Technology Network, .../storage/solaris11/downloads/unified-archives.html). However, for this lab we will take the manual route, to give you more experience with the OpenStack services and how they are configured.

Lab Setup

This lab has the following setup:

Oracle Solaris 11.2 (root password is solaris11)
Hostname of solaris, with an IP address in the lab's /21 range
IPS repository clone at /repository/publishers/solaris
OpenStack configuration script located at /root/hol_single_host.py
Oracle Solaris non-global zone Unified Archive located at /root/ngz-archive.uar

To start with, open up a Terminal window in the host OS and start an SSH connection to your assigned address, with root/solaris11 as the user/password combination:

# ssh root@<your assigned IP address>
Password:
Oracle Corporation      SunOS 5.11      11.2    June 2014

1. Installing the OpenStack packages

First we will install the OpenStack packages from the IPS package repository as follows:

# pkg install openstack rabbitmq rad-evs-controller
           Packages to install: 182
            Services to change:   3
       Create boot environment:  No
Create backup boot environment: Yes

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            182/182          ...          ...      ...

PHASE                                          ITEMS
Installing new actions                   26599/26599
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1

Now that we have successfully installed these packages, we need to restart the rad:local SMF service. RAD (the Remote Administration Daemon) provides programmatic access to the administrative interfaces on Oracle Solaris 11, and is used by the Oracle Solaris plugins for OpenStack.

# svcadm restart rad:local

We also need to enable the RabbitMQ service. RabbitMQ is a messaging system that enables communication between the core OpenStack services.

# svcadm enable rabbitmq
# svcs rabbitmq
STATE          STIME    FMRI
online         23:58:04 svc:/application/rabbitmq:default
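As a quick sanity check (not a required lab step), you can verify that the broker is listening on its default port, 5672. A minimal sketch; the exact netstat output format will vary:

# netstat -an | grep 5672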
2. Configuring Keystone

Keystone provides authentication between the core OpenStack services. It is the first service that we will configure and enable. OpenStack uses a series of configuration files with defined sections that contain key/value pairs. For this first service we will manually configure the appropriate settings, but all future services will use a script for convenience.

Edit /etc/keystone/keystone.conf and ensure the following settings are set as below:

[DEFAULT]
admin_token = ADMIN

[identity]
driver = keystone.identity.backends.sql.Identity

[catalog]
driver = keystone.catalog.backends.sql.Catalog

[token]
provider = keystone.token.providers.uuid.Provider

[signing]
token_format = UUID

Now enable the Keystone service:

# svcadm enable -rs keystone
# svcs keystone
STATE          STIME    FMRI
online         23:59:31 svc:/application/openstack/keystone:default

In order to allow for successful authentication, we need to populate the Keystone database with a number of users across different tenants that reflect the core OpenStack services. In our case we will use sample data provided by a script. In a production deployment you would associate Keystone with a directory service such as LDAP or Active Directory.

User      Tenant    Password
admin     demo      secrete
nova      service   nova
cinder    service   cinder
neutron   service   neutron
glance    service   glance
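The sample-data script we are about to run drives the standard Keystone v2.0 admin API, which you could equally call yourself. A rough, illustrative sketch of the kinds of calls involved (the names and arguments here are examples, not the script's exact sequence):

# export SERVICE_ENDPOINT=http://localhost:35357/v2.0/
# export SERVICE_TOKEN=ADMIN
# keystone tenant-create --name service
# keystone user-create --name glance --pass glance --tenant-id <service-tenant-id>
# keystone role-create --name admin
# keystone user-role-add --user-id <user-id> --role-id <role-id> --tenant-id <tenant-id>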
Let's run this script now:

# /usr/demo/openstack/keystone/sample_data.sh

The script prints a property/value table for each service it registers, listing that service's adminurl, internalurl and publicurl endpoints together with its generated id and service_id (the endpoint URLs are omitted here).

Let's verify this result by setting two environment variables, SERVICE_ENDPOINT and SERVICE_TOKEN, and running the keystone client-side command:

# export SERVICE_ENDPOINT=http://localhost:35357/v2.0/
# export SERVICE_TOKEN=ADMIN
# keystone user-list
id                                 name     enabled
bdefb773d3c61fed79d96c5540f9766    admin    True
8b54a70c235ee1179f15a198a70be099   cinder   True
7949ac987dd5c514e778ba             ec2      True
d79d19dc2945ed758747c2e2d8ab7e89   glance   True
ac11eb0e1aed68f2cc8bade5           neutron  True
d9e6d0ddfbaf4ca6a6ee9bb951877d3d   nova     True
eb3237eea75ae619aba6cf75a49f798f   swift    True

3. Configuring Glance

Glance is a service that provides image management in OpenStack. It is responsible for storing the array of images that you install onto the compute nodes when you create new VM instances. It is comprised of a few different services that we will need to configure first. For convenience, we have provided a script to do this quickly:

# ./hol_single_host.py glance
configuring glance

This script will configure the following files:

/etc/glance/glance-api.conf
/etc/glance/glance-registry.conf
/etc/glance/glance-cache.conf
/etc/glance/glance-api-paste.ini
/etc/glance/glance-registry-paste.ini
/etc/glance/glance-scrubber.conf

and provide the appropriate configuration for the Glance endpoints (usually the user and password information). Let's now enable the Glance services:

# svcadm enable -rs glance-api glance-db glance-registry glance-scrubber

We can check that this configuration is correct with the following:

# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=glance
# export OS_USERNAME=glance
# export OS_TENANT_NAME=service
# glance image-list
ID   Name   Disk Format   Container Format   Size   Status

As we can see from the above, we have successfully contacted the image registry, but there are no images currently loaded into Glance.
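Each service account we use in this lab needs the same four OS_* variables. If you find yourself retyping them, one convenient (purely optional) pattern is to keep a small rc file per service and source it; the file name here is our own invention:

# cat > /root/glancerc <<'EOF'
export OS_AUTH_URL=http://localhost:5000/v2.0/
export OS_USERNAME=glance
export OS_PASSWORD=glance
export OS_TENANT_NAME=service
EOF
# . /root/glancerc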
The next step is to populate Glance with an image that we can use for our instances. The Oracle Solaris implementation takes advantage of a new archive type called Unified Archives. You may either create your own archive or use one that we have provided as part of this VM.

Option 1: Create your own Unified Archive

Since we use Oracle Solaris Zones as the virtualization technology for compute, we will need to create a non-global zone. If you are tight on time already, consider choosing Option 2 below instead of this.

# zonecfg -z myzone create
# zoneadm -z myzone install
The following ZFS file system(s) have been created:
    rpool/VARSHARE/zones/myzone
Progress being logged to /var/log/zones/zoneadm.20140911T002211Z.myzone.install
       Image: Preparing at /system/zones/myzone/root.
 Install Log: /system/volatile/install.2985/install_log
 AI Manifest: /tmp/manifest.xml.jfaozf
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: myzone
Installation: Starting...

        Creating IPS image
        Startup linked: 1/1 done
        Installing packages from:
            solaris
                origin: ...

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            282/282          ...          ...      ...

PHASE                                          ITEMS
Installing new actions                   71043/71043
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual

        done.

        Done: Installation completed in ... seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.

Log saved in non-global zone as
/system/zones/myzone/root/var/log/zones/zoneadm.20140911T002211Z.myzone.install

# zoneadm -z myzone boot
# zoneadm list -cv
  ID NAME     STATUS     PATH                   BRAND    IP
   0 global   running    /                      solaris  shared
   1 myzone   running    /system/zones/myzone   solaris  excl

Let's now log in to the zone and do the final configuration.

# zlogin -C myzone
[Connected to zone 'myzone' console]

After logging in, you will be presented with the System Configuration Tool. We will need to do some final configuration prior to archiving this zone. This configuration will not be used when we deploy the zone, but we need to complete it before creating the archive. You can navigate through the tool using the function keys. Use the following settings:

Setting          Value
Hostname         myzone
Networking       Manual
DNS              Do not configure DNS
Name Services    None
Timezone/Locale  Choose any
Root password    solaris11

SC profile successfully generated as:
/etc/svc/profile/sysconfig/sysconfig-<timestamp>/sc_profile.xml

Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.4666

Hostname: myzone

myzone console login:

We can now log in with root/solaris11:

myzone console login: root
Password: solaris11
Sep 11 00:33:57 myzone login: ROOT LOGIN /dev/console
Oracle Corporation      SunOS 5.11      11.2    June 2014

Running the virtinfo command, we can see that we're in a non-global zone:

# virtinfo
NAME              CLASS
non-global-zone   current
logical-domain    parent

Prior to creating the Unified Archive, we need to do one more configuration trick. When we deploy instances using OpenStack, we typically provide an SSH public key that is used as the primary authentication mechanism for the instance. We need to ensure that this is a password-less operation, so we must make a configuration change to the SSH server service running within this non-global zone prior to snapshotting it. Edit /etc/ssh/sshd_config:

# vi /etc/ssh/sshd_config

Find the PermitRootLogin key/value pair and set it to without-password:

PermitRootLogin without-password

Now let's exit out of our zone with the ~~. escape sequence:

~~.
#

And finally we can create our Unified Archive:

# archiveadm create -z myzone myzone.uar
Initializing Unified Archive creation resources...
Unified Archive initialized: /root/myzone.uar
Logging to: /system/volatile/archive_log.5578
Executing dataset discovery...
Dataset discovery complete
Creating install media for zone(s)...
Media creation complete
Preparing archive system image...
Beginning archive stream creation...
Archive stream creation complete
Beginning final archive assembly...
Archive creation complete
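Before uploading, you can optionally sanity-check the archive with archiveadm's info subcommand, which summarizes the archive type and the systems it contains (output omitted here):

# archiveadm info /root/myzone.uar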
And upload this image to Glance:

# cd
# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=glance
# export OS_USERNAME=glance
# export OS_TENANT_NAME=service
# glance image-create --container-format bare --disk-format raw --is-public true \
  --name "Base Zone" --property architecture=sparc64 \
  --property hypervisor_type=solariszones --property vm_mode=solariszones < myzone.uar

Property                      Value
Property 'architecture'       sparc64
Property 'hypervisor_type'    solariszones
Property 'vm_mode'            solariszones
checksum                      336bdfe5f76876fe24907ee7
container_format              bare
created_at                    ...T00:52:...
deleted                       False
deleted_at                    None
disk_format                   raw
id                            b42e47ee-d8dc-e50c-d6e0-9206d761ce41
is_public                     True
min_disk                      0
min_ram                       0
name                          Base Zone
owner                         f17341f0a2a24ec9ec5f9ca497e8c0cc
protected                     False
size                          ...
status                        active
updated_at                    ...T00:52:...

Option 2: Use the existing Unified Archive

We have pre-created a Unified Archive that you can use for this lab. If you are tight on time, consider using the existing ngz-archive.uar file as follows:

# glance image-create --container-format bare --disk-format raw --is-public true \
  --name "Base Zone" --property architecture=sparc64 \
  --property hypervisor_type=solariszones --property vm_mode=solariszones < ngz-archive.uar

Property                      Value
Property 'architecture'       x86_64
Property 'hypervisor_type'    solariszones
Property 'vm_mode'            solariszones
checksum                      89ad653ac8ab4f8b431ed66b0
container_format              bare
created_at                    ...T00:13:...
deleted                       False
deleted_at                    None
disk_format                   raw
id                            37f73649-a046-e40c-eb34-e2b914c22005
is_public                     True
min_disk                      0
min_ram                       0
name                          Base Zone
owner                         f17341f0a2a24ec9ec5f9ca497e8c0cc
protected                     False
size                          ...
status                        active
updated_at                    ...T00:14:...

# glance image-list
ID                                    Name       Disk Format  Container Format  Size  Status
37f73649-a046-e40c-eb34-e2b914c22005  Base Zone  raw          bare              ...   active
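If you want to double-check the properties on the uploaded image (for example, that hypervisor_type is set to solariszones), the glance client can show a single image by name or ID; a minimal sketch, output omitted:

# glance image-show "Base Zone"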
4. Configuring Nova

Nova is the compute service in OpenStack, responsible for scheduling and deploying new instances when required. Like Glance, it is comprised of several different services that need to be configured and enabled. We will use our script again to do this quickly:

# ./hol_single_host.py nova
configuring nova

Nova does require a little more care in terms of the start order of services, so we will first enable the conductor service (which essentially proxies access to the Nova database from the compute nodes), and then the rest of the services:

# svcadm enable -rs nova-conductor
# svcadm enable -rs nova-api-ec2 nova-api-osapi-compute nova-scheduler nova-cert nova-compute

Let's check that Nova is functioning correctly by setting up some environment variables and viewing the endpoints:

# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=nova
# export OS_USERNAME=nova
# export OS_TENANT_NAME=service
# nova endpoints

The command prints a property/value table for each registered service (nova, neutron, glance, cinder, ec2, swift and keystone), listing each service's adminurl, internalurl and publicurl together with its id (the endpoint URLs are omitted here).
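While these credentials are set, you can also ask Nova which flavors it knows about; later in this lab we will pick one of the Solaris Zones flavors from this list when launching an instance. A minimal check, output omitted:

# nova flavor-list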
Nova looks to be functioning properly, so we can continue.

5. Configuring Cinder

Cinder provides block storage in OpenStack: typically the storage that you would attach to compute instances. As before, we will need to configure and enable several services:

# ./hol_single_host.py cinder
configuring cinder
# svcadm enable -rs cinder-api cinder-db cinder-scheduler cinder-volume:setup cinder-volume:default

Again, let's double-check that everything is working:

# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=cinder
# export OS_USERNAME=cinder
# export OS_TENANT_NAME=service
# cinder list
ID   Status   Display Name   Size   Volume Type   Bootable   Attached to

This looks correct, as we have not allocated any block storage to date.
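Although the lab doesn't require it, this is where you could create a small test volume to confirm the service end to end (on Oracle Solaris, Cinder volumes on this single node are backed by ZFS). An illustrative sketch with a name of our own choosing:

# cinder create --display-name testvol 1
# cinder list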
6. Configuring Neutron

Neutron provides networking capabilities in OpenStack, enabling VMs to talk to each other within the same tenants and subnets, and directly to the outside world. This is achieved using a number of different services. Behind the Oracle Solaris implementation is the Elastic Virtual Switch (EVS), which provides the necessary plumbing to span multiple compute nodes and route traffic appropriately. We will need to do some configuration outside OpenStack to provide a level of trust between EVS and Neutron, using SSH keys and RAD.

Let's first generate SSH keys for the evsuser, neutron and root users:

# su - evsuser -c "ssh-keygen -N '' -f /var/user/evsuser/.ssh/id_rsa -t rsa"
Generating public/private rsa key pair.
Your identification has been saved in /var/user/evsuser/.ssh/id_rsa.
Your public key has been saved in /var/user/evsuser/.ssh/id_rsa.pub.
The key fingerprint is:
13:cb:06:c4:88:5e:10:7d:84:8b:c8:38:30:83:89:9f
# su - neutron -c "ssh-keygen -N '' -f /var/lib/neutron/.ssh/id_rsa -t rsa"
Generating public/private rsa key pair.
Created directory '/var/lib/neutron/.ssh'.
Your identification has been saved in /var/lib/neutron/.ssh/id_rsa.
Your public key has been saved in /var/lib/neutron/.ssh/id_rsa.pub.
The key fingerprint is:
13:d6:ef:22:4b:f0:cf:9f:14:e3:ee:50:05:1a:c7:a5
# ssh-keygen -N '' -f /root/.ssh/id_rsa -t rsa
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c1:6f:a5:38:fc:11:85:16:ad:1d:ad:cd:2f:38:ce:26

We then need to take the various SSH public keys and add them to evsuser's authorized_keys file, to provide password-less access between these services:

# cat /var/user/evsuser/.ssh/id_rsa.pub /var/lib/neutron/.ssh/id_rsa.pub \
  /root/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys

Finally, we need to quickly log in as each of these users and answer the one-time host key prompt:

# su - evsuser -c "ssh evsuser@localhost true"
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 36:9b:74:4b:e9:57:11:70:bc:71:d6:4d:77:b4:74:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
# su - neutron -c "ssh evsuser@localhost true"
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 36:9b:74:4b:e9:57:11:70:bc:71:d6:4d:77:b4:74:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
# ssh evsuser@localhost true
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 36:9b:74:4b:e9:57:11:70:bc:71:d6:4d:77:b4:74:b3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
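To convince yourself that the trust is now password-less (an optional check, not a lab step), re-run one of the logins and confirm that no password prompt appears:

# su - neutron -c "ssh evsuser@localhost /usr/bin/true" && echo "trust OK"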
EVS uses the concept of a controller to manage the elastic virtual switch across the resources in the data center. We need to point the configuration at this single host (reached over SSH as evsuser) and initialize the EVS database:

# evsadm set-prop -p controller=ssh://evsuser@localhost
# evsadm
# evsadm show-prop
PROPERTY        PERM  VALUE                     DEFAULT
controller      rw    ssh://evsuser@localhost   --

For this setup, we will use VXLANs to appropriately tag our network traffic and provide isolation. We can do this configuration as follows:

# evsadm set-controlprop -p l2-type=vxlan
# evsadm set-controlprop -p vxlan-range=200-300

We will also need to set the uplink port for the controller to be net0 (the only NIC available to us):

# evsadm set-controlprop -p uplink-port=net0
# evsadm show-controlprop
PROPERTY            PERM  VALUE     DEFAULT  HOST
l2-type             rw    vxlan     vlan     --
uplink-port         rw    net0      --       --
vlan-range          rw    --        --       --
vlan-range-avail    r-    --        --       --
vxlan-addr          rw    --        --       --
vxlan-ipvers        rw    v4        v4       --
vxlan-mgroup        rw    --        --       --
vxlan-range         rw    200-300   --       --
vxlan-range-avail   r-    200-300   --       --

Now that we have done the basic configuration with EVS, we can go ahead and configure Neutron to use it. We will use the script for convenience:

# ./hol_single_host.py neutron
configuring neutron
# svcadm enable -rs neutron-server neutron-dhcp-agent

Let's test Neutron and make sure things are working:

# export OS_AUTH_URL=http://localhost:5000/v2.0/
# export OS_PASSWORD=neutron
# export OS_USERNAME=neutron
# export OS_TENANT_NAME=service
# neutron net-list

We see an empty result. This is expected, since we haven't created any networks yet.

7. Configuring Horizon

Finally we can configure Horizon, the web dashboard for OpenStack, which provides self-service capabilities in a multi-tenant environment. Let's go ahead and do that:

# ./hol_single_host.py horizon
configuring horizon
# cp /etc/apache2/2.2/samples-conf.d/openstack-dashboard-http.conf /etc/apache2/2.2/conf.d
# svcadm enable apache22
# svcs apache22
STATE          STIME    FMRI
online         1:53:42  svc:/network/http:apache22

8. Logging into Horizon

Within the host environment, open up a browser and navigate to the dashboard at the IP address allocated to you. Use admin/secrete as the user/password combination.
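If the login page doesn't come up, a quick way to check that Apache is serving the dashboard is to request it from the global zone. This assumes the sample configuration publishes the dashboard under /horizon; adjust the path if your configuration differs:

# curl -sI http://localhost/horizon/ | head -1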
After signing in you will see the main dashboard for the OpenStack administrator. On the left side of the screen you will see two tabs: one shows the administration panel, and the other shows the project panel, which lists the projects that the current user is a member of. We can think of projects as a way to provide organizational groupings. Instead of launching an instance as an administrator, let's go and create a new user under the Admin tab. Select the Users menu entry to display the following screen.
We can see that there are a few users already defined: these users either represent the administrator or are for the various OpenStack services. Let's go ahead and click on the Create User button and fill in some details for this user. We will include them in the demo project for now, but we could equally have created a new project if we wanted to. (The dashboard is driving the same Keystone API we used earlier from the command line; see the sketch below.)
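As a rough CLI equivalent of the Create User dialog (the user name and password here are illustrative, and you would substitute the demo tenant's real ID):

# keystone user-create --name labuser --pass labpass --tenant-id <demo-tenant-id>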
Sign out and log in as this new user. The next thing we need to do is add a keypair for our user. Choose the Access & Security menu entry to get the following screen:

There are no keypairs currently defined. Let's go ahead by clicking the Import Keypair button. In our case, let's use the SSH public key of our global zone:

# cat .ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0Khp4Th5VcKQW4LttqzKAR8O60gj43cB0CbdpiizEhXEbVgjI7IlnZlo9iSEFpJlnZrFQC8MU2L7Hn+CD5nXLT/uK90eAEVXVqwc4Y7IVbEjrABQyB74sGnJy+SHsCGgetjwVrifR9fkxFHgjxxkounxrpme86hdjrpzljfgyzzezjrtd1erwvnshhjdzmuac7cilfjen/wssm8tosakh+zwehwy3o08nzg2iwdmimpbwpwtrohjsh3w7xkde85d7uzebnjpd9kdaw6omxsy5clgv6geouexz/j4k29worr1xkr3jirqqlf3kw4yuk9jui/gphg2ltohisgjoelorq==
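For reference, the same import can be done from the command line with the nova client; the keypair name here is our own choice:

# nova keypair-add --pub-key /root/.ssh/id_rsa.pub mykey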
Having successfully imported the SSH keypair, let's now create a network for this instance. Choose the Networks menu entry to get the following screen:

There are no networks currently defined. Let's create a network by clicking on the Create Network button. We will create a network called mynetwork with a subnet called mysubnet, using the x.0/24 address range. This means that instances that choose this network will be given addresses within this range, starting at x.3.
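Behind this dialog, Horizon issues the same calls you could make with the neutron client. An illustrative sketch, with the subnet CIDR left as a placeholder for the range assigned to you:

# neutron net-create mynetwork
# neutron subnet-create --name mysubnet mynetwork <your x.0/24 range>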
Once we create our network, we should see it successfully created in the following screen:

Now we are ready to launch a new instance. Choose the Instances menu entry to get the following screen:

9. Launching an Instance

Let's launch a new instance by clicking on the Launch Instance button. We will call our instance myinstance, and give it the Oracle Solaris non-global zone tiny flavor. Flavors represent the amount of resources that we should give this instance; we can see here that we will get a root disk of 10GB and 2,048MB of RAM. We will choose to boot this instance from the image stored in Glance that we uploaded earlier, called Base Zone.
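The dashboard's Launch button maps onto a single API call; from the CLI the equivalent would look roughly like this (the flavor name, IDs and keypair name are placeholders for the values in your session):

# nova boot --flavor <solaris-zone-tiny-flavor> --image <Base Zone image ID> \
  --key-name mykey --nic net-id=<mynetwork ID> myinstance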
Once we are happy with the Details tab, we can move on to the Access & Security tab. We can see that our keypair has been pre-selected, so we can immediately move on to the Networking tab. Here we will need to select mynetwork as our network. Once we have finished this, we can click on the Launch button. After a little bit of time, we can see that our instance has successfully booted with an IP address of x.3.
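You can watch the same progress from the command line; while the underlying zone is installing, the instance sits in the BUILD state before going ACTIVE (output omitted):

# nova list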
We are now ready to log into this instance. In this lab we took the simple path of setting up just an internal network topology. In a typical cloud environment, we would set up an external network through which VMs could communicate with the outside world. Since we did not, we will access the instance from the global zone:

# ssh root@<the instance's x.3 address>
The authenticity of host '<address> (<address>)' can't be established.
RSA key fingerprint is 89:64:96:91:67:ab:6b:35:58:37:35:b8:ab:f3:e5:98.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '<address>' (RSA) to the list of known hosts.
Last login: Thu Sep 11 00:33:...
Oracle Corporation      SunOS 5.11      11.2    June 2014
# ipadm
NAME          CLASS/TYPE  STATE   UNDER  ADDR
lo0           loopback    ok      --     --
   lo0/v4     static      ok      --     127.0.0.1/8
   lo0/v6     static      ok      --     ::1/128
net0          ip          ok      --     --
   net0/dhcp  inherited   ok      --     .../24
# exit
logout
Connection to <address> closed.

10. Behind the Scenes

From the global zone, let's see what has been created with OpenStack. Let's first check to see what zones have been created:

# zoneadm list -cv
  ID NAME          STATUS     PATH                        BRAND    IP
   0 global        running    /                           solaris  shared
   2 instance-...  running    /system/zones/instance-...  solaris  excl

We can see that we have one non-global zone successfully running, which corresponds to our Nova instance. Let's now check to see what networks have been created for it:

# ipadm
NAME                   CLASS/TYPE  STATE   UNDER  ADDR
evsaf75747a_3_0        ip          ok      --     --
   evsaf75747a_3_0/v4  static      ok      --     .../24
lo0                    loopback    ok      --     --
   lo0/v4              static      ok      --     127.0.0.1/8
   lo0/v6              static      ok      --     ::1/128
net0                   ip          ok      --     --
   net0/v4             static      ok      --     .../24
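If you want to dig a little further (an optional exploration; the switch and VNIC names will differ in your session), the EVS and link-layer views from the global zone show the elastic virtual switch that Neutron created and the VNIC connecting the instance's zone to it:

# evsadm show-evs
# dladm show-vnic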