Extensible Access Control Framework for Cloud-Based Applications
19-02-2014 Version 1.0
OpenStack Installation Guideline [Grizzly Release]
Dr. Muhammad Awais Shibli [Principal Investigator]
Dr. Arshad Ali [Co-Principal Investigator]
National ICT R & D [Funding Organization]
Table of Contents
1. INTRODUCTION
   1.1 Installation of Ubuntu OS
   1.2 Requirements
2. PREPARING NODE/SYSTEM
   2.1 Add Repository
   2.2 Install NTP
   2.3 Install MySQL
   2.4 Install Messaging Service
   2.5 Install Linux bridging software
   2.6 Enable IP forwarding
3. INSTALLING OPENSTACK IDENTITY SERVICE (KEYSTONE)
4. INSTALLING OPENSTACK IMAGE SERVICE (GLANCE)
5. INSTALLING OPENSTACK NETWORKING SERVICE (QUANTUM)
6. INSTALLING OPENSTACK COMPUTE SERVICE (NOVA)
7. INSTALLING OPENSTACK CINDER COMPONENT (VOLUME)
8. INSTALLING OPENSTACK DASHBOARD COMPONENT (HORIZON)
9. APPENDIX
   9.1 Appendix A (Configuration Files)
   9.2 Appendix B (About Ubuntu Installation)
   9.3 Appendix C (Configuring Hypervisor)
Preface
This OpenStack installation manual is aimed at researchers, technologists, and system administrators eager to understand and deploy Cloud computing infrastructure projects based upon OpenStack software. This manual intends to help organizations looking to set up an OpenStack-based private Cloud. OpenStack is a collection of open source software projects that enterprises/service providers can use to set up and run their cloud compute and storage infrastructure. Rackspace and NASA are the key initial contributors to the stack.
This manual describes instructions for manually installing the OpenStack Grizzly release on 64-bit Ubuntu Server/Desktop 12.04 LTS with Keystone authentication and the dashboard. Specifically, the instructions describe how to install the Cloud controller and Compute on a single machine (node). In this manual, we have included the OpenStack Compute Infrastructure (Nova), OpenStack Imaging Service (Glance), OpenStack Identity Service (Keystone), OpenStack Volume (Cinder), OpenStack Networking (Quantum) and the OpenStack administrative web interface, Horizon (dashboard).
Target Audience
Our aim has been to provide a guide for beginners who are new to OpenStack. Good familiarity with virtualization is assumed, as troubleshooting OpenStack-related problems requires a good knowledge of virtualization. Similarly, familiarity with Cloud Computing concepts and terminology will be of help.
Acknowledgement
Most of the content has been borrowed from web resources like manuals, documentation, white papers etc. from OpenStack and Canonical; numerous posts on forums; discussions on the OpenStack IRC channel; and many articles on the web. We would like to thank the authors of all these resources.
Conventions
Commands and paths of configuration files are shown in Bold & Italic. Settings of configuration files are shown in Italic.
1. INTRODUCTION
We will deploy the Cloud Controller and Compute from the OpenStack Grizzly release manually on a single machine running Ubuntu 12.04, 64-bit Server/Desktop. Setting up Swift is not part of these instructions. The machine will use FlatDHCP networking mode. We will then add another compute machine that will run its own nova-network. We will use the Grizzly final release from the Ubuntu Cloud Archive. In our case, the Cloud Controller and Compute services will be on a single node. We will install OpenStack components such as Quantum, Nova, Keystone, Glance, Horizon and Cinder, and other tools such as LinuxBridge and KVM.
1.1 INSTALLATION OF UBUNTU OS
This guide is for the Ubuntu 12.04 LTS OS. Before installation of the OpenStack Cloud, the Ubuntu operating system must be installed on the system. More detail about the Ubuntu Server/Desktop installation is given in Appendix B of this manual.
If your OpenStack Cloud will be behind a proxy, then the following changes are required in the .bashrc file and the environment file (/etc/environment) of the Ubuntu OS. To apply the following changes on the server, please reboot the machine. We have assigned the static IP address 10.2.31.168 to the Ubuntu machine, and the proxy address is 10.3.3.3:8080 in our scenario.
1. Type the following command in the terminal. Please replace user_name in the command with the username on your system.
$ sudo nano /home/user_name/.bashrc
Add the following line at the end of the file and save it.
no_proxy="localhost,127.0.0.1,http://10.2.31.168:5000,http://10.2.31.168:9292,http://10.2.31.168:6080,http://10.2.31.168:6080/vnc_auto.html"
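After editing the file and opening a new terminal (or after the reboot mentioned above), the proxy variables can be sanity-checked quickly. This is only an illustrative sketch, assuming the same addresses used in this manual:
$ echo $no_proxy
$ env | grep -i proxy
The first command prints the per-user no_proxy variable set in .bashrc; the second lists the system-wide proxy variables once they have been added to /etc/environment in the next step.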
2. To apply the same settings for all users, add the lines given below to the environment file.
$ sudo nano /etc/environment
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
http_proxy="http://10.3.3.3:8080/"
https_proxy="https://10.3.3.3:8080/"
ftp_proxy="ftp://10.3.3.3:8080/"
socks_proxy="socks://10.3.3.3:8080/"
no_proxy="localhost,127.0.0.1,http://10.2.31.168:5000,http://10.2.31.168:9292,http://10.2.31.168:6080,http://10.2.31.168:6080/vnc_auto.html,10.2.31.168:5672"
1.2 REQUIREMENTS
We require only a single NIC on the server, with IP address 10.2.31.168. Our example installation architecture is shown below. Only one server will run all nova-* services and also drive all the virtual instances.
2. PREPARING NODE/SYSTEM
After installation of Ubuntu 12.04 Server/Desktop, we will prepare our system to run OpenStack. Run the following command to become root.
$ sudo -i
1. Add the Grizzly repositories:
# apt-get install ubuntu-cloud-keyring python-software-properties software-properties-common python-keyring
# echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list
2. Now update your system:
# apt-get update
# apt-get upgrade
# apt-get dist-upgrade
3. Networking: Set the static IP address of the Ethernet interface.
# nano /etc/network/interfaces
auto eth1
iface eth1 inet static
address 10.2.31.168
netmask 255.255.255.0
gateway 10.2.31.1
dns-nameservers 8.8.8.8
4. Restart the networking service to apply the settings:
# /etc/init.d/networking restart
5. Installing Network Time Protocol (NTP):
# apt-get install -y ntp
Set up the NTP server on your controller node so that it receives data by modifying the ntp.conf file and restarting the service.
# sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
# service ntp restart
6. Installing MySQL
Install MySQL and specify a password for the root user:
# apt-get install -y python-mysqldb mysql-server
Use sed to edit /etc/mysql/my.cnf to change bind-address from localhost (127.0.0.1) to any (0.0.0.0), and restart the mysql service, as root.
# sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
# service mysql restart
7. Installing Messaging Server
Install the messaging queue server. Typically this is either Qpid or RabbitMQ, but ZeroMQ (0MQ) is also available.
# apt-get install rabbitmq-server
Change the password of the default user 'guest' using the following command.
# rabbitmqctl change_password guest password
By default, RabbitMQ listens on localhost (127.0.0.1), but it can be changed to the system IP address (such as 10.2.31.168). In our case, RabbitMQ is listening on localhost and port 5672. We will use this RabbitMQ setting in the Nova, Quantum, Cinder and Glance components. You can get more detail about its settings by typing rabbitmqctl in the terminal.
Restart it. # /etc/init.d/rabbitmq-server restart 8. Check RabbitMQ status: # /etc/init.d/rabbitmq-server status
9. Listening Status using netstat
A quick way to check the listening status is sketched below, after step 11.
10. Other Services
These packages are used for bridging on Linux:
# apt-get install -y vlan bridge-utils
11. Enable IP Forwarding on the Server.
# sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
To save you from rebooting, perform the following:
# sysctl net.ipv4.ip_forward=1
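For step 9, the listening sockets can be inspected with netstat. This is a sketch; the exact output depends on which services have been installed at this point, but port 3306 is MySQL and port 5672 is RabbitMQ:
# netstat -lntp | grep -E ':3306|:5672'
After the bind-address change above, MySQL should be bound to 0.0.0.0:3306, while RabbitMQ should be listening on port 5672 on localhost.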
3. INSTALLING OPENSTACK IDENTITY SERVICE (KEYSTONE)
Keystone is an OpenStack project that provides Identity, Token, Catalog and Policy services for use specifically by projects in the OpenStack family.
1. Install keystone:
# apt-get install -y keystone
Verify your keystone is running:
# service keystone status
To manually create the database, start the mysql command line client by running:
# mysql -u root -p
Enter the mysql root user's password when prompted.
2. Create the keystone database.
>CREATE DATABASE keystone;
>GRANT ALL ON keystone.* TO 'keystoneuser'@'%' IDENTIFIED BY 'keystonepass';
>quit;
Update the connection attribute in /etc/keystone/keystone.conf to the new database:
sql_connection = mysql://keystoneuser:keystonepass@10.2.31.168/keystone
3. Restart the identity service:
# service keystone restart
4. Synchronize and populate the database:
# keystone-manage db_sync
Fill up the keystone database using the two scripts available at the following link
(https://github.com/mseknibilel/openstack-grizzly-install-Guide/tree/master/KeystoneScripts):
Modify the HOST_IP and HOST_IP_EXT variables before executing the scripts.
# nano /home/test/desktop/keystone_basic.sh
# nano /home/test/desktop/keystone_endpoints_basic.sh
5. Run the following commands to make the bash scripts executable.
# chmod +x keystone_basic.sh
# chmod +x keystone_endpoints_basic.sh
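Before executing them, check the variables at the top of each script. They look roughly like the following (a sketch: the exact variable names differ slightly between the two scripts, and the passwords shown here simply match the admin_pass and service_pass values used elsewhere in this manual):
HOST_IP=10.2.31.168
HOST_IP_EXT=10.2.31.168
ADMIN_PASSWORD=admin_pass
SERVICE_PASSWORD=service_pass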
# ./keystone_basic.sh
# ./keystone_endpoints_basic.sh
6. Create a simple credential file and load it so you won't be bothered later:
# nano creds
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://10.2.31.168:5000/v2.0/"
Load it using the following command.
# source creds
7. To test Keystone, we use a simple CLI command:
# keystone user-list
# keystone endpoint-list
Troubleshooting the Identity Service (Keystone)
To begin troubleshooting, look at the logs in the /var/log/keystone/keystone.log file (the location of log files is configured in the /etc/keystone/logging.conf file). The log shows all the components involved in the WSGI request and will ideally contain an error that explains why an authorization request failed. If you do not see the request at all in those logs, run keystone with the "--debug" flag, passed directly after the CLI command name and before its parameters, as in the sketch below.
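For example, to see the HTTP calls behind the user listing above (a sketch; --debug is a client-side flag and does not change the server configuration):
# keystone --debug user-list
# tail -n 50 /var/log/keystone/keystone.log
The first command prints the requests and responses exchanged with the Keystone endpoints; the second shows the most recent entries from the server-side log.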
4. INSTALLING OPENSTACK IMAGE SERVICE (GLANCE)
The OpenStack Image Service provides discovery, registration and delivery services for disk and server images. The ability to copy or snapshot a server image and immediately store it away is a powerful capability of the OpenStack cloud operating system. Stored images can be used as templates to get new servers up and running quickly and, when provisioning multiple servers, more consistently than installing a server operating system and individually configuring additional services.
1. Install the Image service:
# apt-get -y install glance
2. Verify your glance services are running:
# service glance-api status
# service glance-registry status
3. Configuring the Image Service database backend
Configure the backend data store. Create a glance MySQL database and grant the user full access to it. Start the MySQL command line client by running:
# mysql -u root -p
Enter the MySQL root user's password when prompted. To configure the MySQL database, create the glance database.
>CREATE DATABASE glance; >GRANT ALL ON glance.* TO 'glanceuser'@'%' IDENTIFIED BY 'glancepass'; >quit; The Image service has a number of options that you can use to configure the Glance API server, optionally the Glance Registry server, and the various storage backends that Glance can use to store images. 4. Update /etc/glance/glance-api-paste.ini with: [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory delay_auth_decision = true auth_host = 10.2.31.168 auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = glance admin_password = service_pass 5. Update the /etc/glance/glance-registry-paste.ini with: [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory auth_host = 10.2.31.168 auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass
6. Update /etc/glance/glance-api.conf with:
sql_connection = mysql://glanceuser:glancepass@10.2.31.168/glance
And add the following lines at the end of the glance-api.conf file:
[paste_deploy]
flavor = keystone
We are using RabbitMQ for messaging between OpenStack components. The following changes are required in glance-api.conf for RabbitMQ; the relevant settings are sketched below.
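The RabbitMQ-related settings in glance-api.conf are roughly the following, taken from the sample glance-api.conf in Appendix A; adjust rabbit_host and rabbit_password if your broker is not local or uses a different password:
notifier_strategy = rabbit
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = password
rabbit_virtual_host = /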
7. Update the /etc/glance/glance-registry.conf with:
sql_connection = mysql://glanceuser:glancepass@10.2.31.168/glance
And add the following lines at the end of the glance-registry.conf file:
[paste_deploy]
flavor = keystone
8. Restart the glance-api and glance-registry services:
# service glance-api restart; service glance-registry restart
Now populate and synchronize the database:
# glance-manage db_sync
9. Restart the services again to take the new modifications into account:
# service glance-api restart; service glance-registry restart
10. To test Glance, upload the cirros cloud image directly from the internet:
# glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
11. Now list the image to see what you have just uploaded:
# glance index
# glance image-list
Images of other operating systems can be added to Glance after the complete installation of OpenStack using its Dashboard (GUI).
Troubleshooting the Image Service (Glance)
To begin troubleshooting, look at the logs in /var/log/glance/registry.log or /var/log/glance/api.log.
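If the server cannot reach launchpad.net directly (for example, behind the proxy configured earlier), the image can be downloaded first and then uploaded from the local file with --file instead of --location. A sketch:
# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file cirros-0.3.0-x86_64-disk.img
The wget step respects the http_proxy/https_proxy variables set in the environment file, so this path also works when outbound traffic must go through the proxy.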
5. INSTALLING OPENSTACK NETWORKING SERVICE (QUANTUM)
Quantum (now known as Neutron) is an OpenStack project to provide "networking as a service" between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova).
1. Install the Quantum components:
# apt-get install -y quantum-server quantum-plugin-linuxbridge quantum-plugin-linuxbridge-agent dnsmasq quantum-dhcp-agent quantum-l3-agent
2. Configuring the Quantum database backend. Start the MySQL command line client by running:
# mysql -u root -p
Enter the MySQL root user's password when prompted. To configure the MySQL database, create the quantum database.
>CREATE DATABASE quantum;
>GRANT ALL ON quantum.* TO 'quantumuser'@'%' IDENTIFIED BY 'quantumpass';
>quit;
3. Verify all Quantum components are running:
# cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i status; done
4. Edit the /etc/quantum/quantum.conf file:
core_plugin = quantum.plugins.linuxbridge.lb_quantum_plugin.LinuxBridgePluginV2
Add the following lines at the end of the file.
[keystone_authtoken]
auth_host = 10.2.31.168
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing
5. Messaging queue (RabbitMQ) settings in the quantum.conf file:
# IP address of the RabbitMQ installation
rabbit_host = localhost
#rabbit_host = 10.2.31.168
# Password of the RabbitMQ server
rabbit_password = password # Port where RabbitMQ server is running/listening rabbit_port = 5672 # RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672) # rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port' rabbit_hosts = localhost:5672 # User ID used for RabbitMQ connections rabbit_userid = guest # Location of a virtual RabbitMQ installation. rabbit_virtual_host = / 6. Edit /etc/quantum/api-paste.ini [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory auth_host = 10.2.31.168 auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = quantum admin_password = service_pass
7. Edit the LinuxBridge plugin config file /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini with:
# under [DATABASE] section
sql_connection = mysql://quantumuser:quantumpass@10.2.31.168/quantum
# under [LINUX_BRIDGE] section
physical_interface_mappings = physnet1:eth0
# under [VLANS] section
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
8. Edit the /etc/quantum/l3_agent.ini
interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
9. Edit the /etc/quantum/dhcp_agent.ini
interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver
10. Update /etc/quantum/metadata_agent.ini
# The Quantum user information for accessing the Quantum API.
auth_url = http://10.2.31.168:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
# IP address used by Nova metadata server
nova_metadata_ip = 10.2.31.168
# TCP Port used by Nova metadata server
nova_metadata_port = 8775
metadata_proxy_shared_secret = helloopenstack
11. After making these changes, restart all Quantum services:
# cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done
Troubleshooting the Networking Service (Quantum)
To begin troubleshooting, look at the logs in the /var/log/quantum/server.log file.
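Once the agents are up, the setup can be exercised from the CLI by creating a test network and subnet. This is only a sketch: the names and the CIDR are illustrative, and the VLAN ID is picked automatically from the physnet1:1000:2999 range configured above.
# source creds
# quantum net-create testnet
# quantum subnet-create --name testsubnet testnet 192.168.100.0/24
# quantum net-list
If the network and subnet show up in the listing without errors, the quantum-server, the LinuxBridge plugin and the database connection are working.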
6. INSTALLING OPENSTACK COMPUTE SERVICE (NOVA)
6.1 CONFIGURING THE HYPERVISOR
For production environments the most tested hypervisors are KVM and Xen-based hypervisors. KVM runs through libvirt; Xen runs best through XenAPI calls. KVM is selected by default and requires the least additional configuration. This guide offers information for the KVM and QEMU hypervisors. Details about the hypervisors are given in Appendix C of this manual.
6.1.1 KVM
KVM is configured as the default hypervisor for Compute in OpenStack. The KVM hypervisor supports the following virtual machine image formats:
Raw
QEMU Copy-on-write (qcow2)
VMware virtual machine disk format (vmdk)
1. Checking for hardware virtualization support
The processors of your compute host need to support virtualization technology (VT) to use KVM. If you are running on Ubuntu, use the kvm-ok command to check whether your processor has VT support, it is enabled in the BIOS, and KVM is installed properly, as root. The kvm-ok command is available in the cpu-checker package, so install it first.
# apt-get install cpu-checker
# kvm-ok
2. Output of command
If KVM is enabled, the output should look something like:
INFO: /dev/kvm exists
KVM acceleration can be used.
If KVM is not enabled, the output should look something like:
INFO: Your CPU does not support KVM extensions
In the case that KVM acceleration is not supported, Compute should be configured to use a different hypervisor, such as QEMU or Xen.
3. KVM installation
Now install the packages for the KVM hypervisor:
# apt-get install -y kvm libvirt-bin pm-utils
Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun"
]
4. Delete the default virtual bridge
# virsh net-destroy default
# virsh net-undefine default
5. Restart the libvirt service to load the new values:
# service libvirt-bin restart
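As an additional check (a sketch, not a replacement for kvm-ok), the CPU flags and loaded modules can be inspected directly:
# egrep -c '(vmx|svm)' /proc/cpuinfo
# lsmod | grep kvm
A count of 0 from the first command means the CPU does not expose the VT extensions (vmx for Intel, svm for AMD); the second command shows whether the kvm and kvm_intel/kvm_amd kernel modules are loaded. This is how we determined that KVM is not usable on our hardware, which leads to the qemu setting used below.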
In this manual, we are using qemu instead of kvm because KVM support is not available on our hardware. If KVM support is available on your hardware, then replace qemu with kvm in the settings below for your deployment.
6.2 NOVA INSTALLATION
1. First of all, install the nova components (Compute Services):
# apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor nova-compute-kvm
2. Check the status of all nova services:
# cd /etc/init.d/; for i in $( ls nova-* ); do service $i status; done
3. Now we will configure the MySQL database for Nova. Start the mysql command line client by running:
# mysql -u root -p
Enter the mysql root user's password when prompted.
4. Create database for Nova: >CREATE DATABASE nova; >GRANT ALL ON nova.* TO 'novauser'@'%' IDENTIFIED BY 'novapass'; >quit; 5. Now modify authtoken section in the /etc/nova/api-paste.ini file to this: [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory auth_host = 10.2.31.168 auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = nova admin_password = service_pass signing_dirname = /tmp/keystone-signing-nova # Workaround for https://bugs.launchpad.net/nova/+bug/1154809 auth_version = v2.0
6. Modify the /etc/nova/nova.conf like this:
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=true
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=localhost
rabbit_port=5672
rabbit_userid="guest"
rabbit_password="password"
rabbit_virtual_host="/"
libvirt_use_virtio_for_bridges=true
connection_type=libvirt
libvirt_type=qemu
#libvirt_type=kvm
nova_url=http://10.2.31.168:8774/v1.1/
sql_connection=mysql://novauser:novapass@10.2.31.168/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
# Auth
use_deprecated_auth=false
auth_strategy=keystone
# Imaging service
glance_api_servers=10.2.31.168:9292
image_service=nova.image.glance.GlanceImageService
# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.2.31.168:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.2.31.168
vncserver_listen=0.0.0.0
# Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloopenstack
# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.2.31.168:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://10.2.31.168:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Compute
# compute_driver=libvirt.LibvirtDriver
# Cinder
# volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900
7. Edit the /etc/nova/nova-compute.conf
[DEFAULT]
libvirt_type=qemu
#libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
8. Synchronize and populate your nova database: #nova-manage db sync 9. Restart nova-* services: # cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done 10. Check for the smiling faces on nova-* services to confirm your installation: # nova-manage service list
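With the credentials file from the Keystone section loaded, a quick smoke test of the Compute API looks like this (a sketch; the image list should show the cirros image uploaded earlier, and the server list will be empty at this point):
# source creds
# nova image-list
# nova list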
Troubleshooting the Compute Service (Nova)
Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in /var/log/nova/nova-compute.log:
libvirtError: internal error no supported architecture for os type 'hvm'
This is a symptom that the KVM kernel modules have not been loaded. If you cannot start VMs after installation without rebooting, it is possible the permissions are not correct. This can happen if you load the KVM module before you have installed nova-compute. A sketch of loading the modules manually follows.
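On hardware that does support KVM, the missing modules from the error above can usually be loaded manually (a sketch; use kvm_amd instead of kvm_intel on AMD processors, and then restart the compute service):
# modprobe kvm
# modprobe kvm_intel
# lsmod | grep kvm
# service nova-compute restart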
7. INSTALLING OPENSTACK CINDER COMPONENT (VOLUME)
Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a Nova component called nova-volume, but has become an independent project since the Folsom release.
1. Install the required packages:
# apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms
2. Configure the iscsi services:
# sed -i 's/false/true/g' /etc/default/iscsitarget
3. Restart the services:
# service iscsitarget start
# service open-iscsi start
4. Now we will configure the MySQL database for Cinder. Start the mysql command line client by running:
# mysql -u root -p
Enter the mysql root user's password when prompted.
5. Create the database for Cinder:
>CREATE DATABASE cinder;
>GRANT ALL ON cinder.* TO 'cinderuser'@'%' IDENTIFIED BY 'cinderpass';
>quit;
6. Configure /etc/cinder/api-paste.ini like the following:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 10.2.31.168
service_port = 5000
auth_host = 10.2.31.168
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
signing_dir = /var/lib/cinder
7. Edit the /etc/cinder/cinder.conf to:
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinderuser:cinderpass@10.2.31.168/cinder
api_paste_config = /etc/cinder/api-paste.ini
#iscsi_helper = tgtadm
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
8. RabbitMQ settings in /etc/cinder/cinder.conf
# IP address of the RabbitMQ installation
rabbit_host = localhost
#rabbit_host = 10.2.31.168
# Password of the RabbitMQ server
rabbit_password = password
# Port where RabbitMQ server is running/listening
rabbit_port = 5672
# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
rabbit_hosts = 10.2.31.168:5672
# User ID used for RabbitMQ connections
rabbit_userid = guest
# Location of a virtual RabbitMQ installation.
rabbit_virtual_host = /
9. Synchronize your database:
# cinder-manage db sync
10. Create a volume group and name it cinder-volumes:
# dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
# losetup /dev/loop2 cinder-volumes
# fdisk /dev/loop2
Type in the following:
n
p
1
ENTER
ENTER
t
8e
w
11. Proceed to create the physical volume, then the volume group:
# pvcreate -ff /dev/loop2
# vgcreate cinder-volumes /dev/loop2
Beware that this volume group gets lost after a system reboot, so write the following line in the /etc/rc.local file before the exit 0 line.
# nano /etc/rc.local
losetup /dev/loop2 %Your_path_to_cinder_volumes%
12. Restart the cinder services:
# cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done
13. Verify if cinder services are running: #cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done Troubleshooting the Cinder Component (Volume) To begin troubleshooting, look at the logs in the /var/log/cinder/cinder.log file.
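To confirm that Cinder can actually allocate storage from the cinder-volumes group, create and list a small test volume. This is a sketch; the name is illustrative and the trailing 1 is the size in GB:
# source creds
# cinder create --display-name testvolume 1
# cinder list
If the volume reaches the available state, a matching logical volume should also be visible in the output of lvs.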
8. INSTALLING OPENSTACK DASHBOARD (HORIZON)
You can use a dashboard interface with an OpenStack Compute installation through the web-based console provided by the OpenStack Dashboard project.
1. Install the OpenStack Dashboard:
# apt-get install openstack-dashboard memcached
If you don't like the OpenStack Ubuntu theme, you can remove the package to disable it:
# dpkg --purge openstack-dashboard-ubuntu-theme
2. Reload Apache and memcached:
# service apache2 restart; service memcached restart
3. Validating the Dashboard install:
To validate the Dashboard installation, point your web browser to http://10.2.31.168/horizon. Once you connect to the Dashboard with the URL, you should see a login window. Enter the credentials for a user you created with the Identity Service, Keystone (in our case, username admin and password admin_pass).
Main Dashboard of Openstack:
4. First Instance using Dashboard (VM launch):
After a successful login to the dashboard, go to the Project tenant and create a new network for your new VM instance.
5. Network Setting:
Click on the Network menu, then create a new network by clicking on the +Create Network button. Set the network name and subnet details such as the network address, gateway address etc. The network with the name "new" is shown in the figure below.
6. After network creation, generate RSA keys by clicking on the Access & Security option and then Generate Keypair in the Project tab. The same can also be done from the CLI, as sketched below.
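A CLI sketch for generating the key pair (the name mykey is illustrative; the private key is written to mykey.pem and must be readable only by you):
# nova keypair-add mykey > mykey.pem
# chmod 600 mykey.pem
# nova keypair-list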
7. To launch a new instance, click on the Instances menu in the Dashboard as shown below. After this, click on the +Launch Instance button.
8. Set the instance details such as the image source, instance name and flavor. Also set the network for your new instance.
9. As shown in the figure below, the new network is selected for the VM instance. If no error occurs during the instance creation phase, the new instance will be displayed in the OpenStack dashboard under the Instances option. A CLI alternative for launching the instance is sketched below.
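For reference, the same instance can be launched from the CLI instead of the dashboard. This is a sketch: flavor 1 is typically m1.tiny, the key name matches the earlier keypair sketch, and the net-id placeholder must be replaced with the ID printed by quantum net-list for your network:
# quantum net-list
# nova boot --flavor 1 --image myfirstimage --key-name mykey --nic net-id=<id-of-your-network> myinstance
# nova list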
10. If your system is using a proxy and your cloud server is in the same network, then include the IP address of the cloud server in the ignore list of Firefox; otherwise the instance console will not work. Go to Options --> Advanced --> Network --> Settings.
11. The console view of the instance is shown in the figure below, reached by clicking on the Console tab.
Logs of the instance can be viewed by clicking on the Log tab:
9. APPENDIX
9.1 Appendix A (Configuration Files)
Sample configuration files:
1. environment
---------------------------------------------------------------------------------------------------------------------
http_proxy="http://10.3.3.3:8080/"
https_proxy="https://10.3.3.3:8080/"
ftp_proxy="ftp://10.3.3.3:8080/"
socks_proxy="socks://10.3.3.3:8080/"
no_proxy="localhost,127.0.0.1,http://10.2.31.168:5000,http://10.2.31.168:9292,http://10.2.31.168:6080,http://10.2.31.168:6080/vnc_auto.html,10.2.31.168:5672"
2. .bashrc
---------------------------------------------------------------------------------------------------------------------
no_proxy="localhost,127.0.0.1,http://10.2.31.168:5000,http://10.2.31.168:9292,http://10.2.31.168:6080,http://10.2.31.168:6080/vnc_auto.html"
3. creds
---------------------------------------------------------------------------------------------------------------------
#Paste the following:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="http://10.2.31.168:5000/v2.0/"
#export OS_AUTH_URL=http://192.168.100.51:5000/v2.0/
4. keystone.conf
---------------------------------------------------------------------------------------------------------------------
[DEFAULT]
# A "shared secret" between keystone and other openstack services
# admin_token = ADMIN
# The IP address of the network interface to listen on # bind_host = 0.0.0.0 # The port number which the public service listens on # public_port = 5000 # The port number which the public admin listens on # admin_port = 35357 # The base endpoint URLs for keystone that are advertised to clients # (NOTE: this does NOT affect how keystone listens for connections) # public_endpoint = http://localhost:%(public_port)d/ # admin_endpoint = http://localhost:%(admin_port)d/ # The port number which the OpenStack Compute service listens on # compute_port = 8774 # Path to your policy definition containing identity actions # policy_file = policy.json # Rule to check if no matching policy definition is found # FIXME(dolph): This should really be defined as [policy] default_rule # policy_default_rule = admin_required # Role for migrating membership relationships # During a SQL upgrade, the following values will be used to create a new role # that will replace records in the user_tenant_membership table with explicit # role grants. After migration, the member_role_id will be used in the API # add_user_to_project, and member_role_name will be ignored. # member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab # member_role_name = _member_ # === Logging Options === # Print debugging output # (includes plaintext request logging, potentially including passwords) # debug = False # Print more verbose output # verbose = False # Name of log file to output to. If not set, logging will go to stdout.
log_file = keystone.log # The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone # Use syslog for logging. # use_syslog = False # syslog facility to receive log lines # syslog_log_facility = LOG_USER # If this option is specified, the logging configuration file specified is # used and overrides any other logging options specified. Please see the # Python logging module documentation for details on logging configuration # files. # log_config = logging.conf # A logging.formatter log message format string which may use any of the # available logging.logrecord attributes. # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s # Format string for %(asctime)s in log records. # log_date_format = %Y-%m-%d %H:%M:%S # onready allows you to send a notification when the process is ready to serve # For example, to have it notify using systemd, one could set shell command: # onready = systemd-notify --ready # or a module with notify() method: # onready = keystone.common.systemd [sql] # The SQLAlchemy connection string used to connect to the database #connection = sqlite:////var/lib/keystone/keystone.db # the timeout before idle sql connections are reaped connection = mysql://keystoneuser:keystonepass@10.2.31.168/keystone # idle_timeout = 200 [identity] driver = keystone.identity.backends.sql.identity
# This references the domain to use for all Identity API v2 requests (which are # not aware of domains). A domain with this ID will be created for you by # keystone-manage db_sync in migration 008. The domain referenced by this ID # cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API. # There is nothing special about this domain, other than the fact that it must # exist to order to maintain support for your v2 clients. # default_domain_id = default [trust] driver = keystone.trust.backends.sql.trust # delegation and impersonation features can be optionally disabled # enabled = True [catalog] # dynamic, sql-based backend (supports API/CLI-based management commands) driver = keystone.catalog.backends.sql.catalog # static, file-based backend (does *NOT* support any management commands) # driver = keystone.catalog.backends.templated.templatedcatalog # template_file = default_catalog.templates [token] driver = keystone.token.backends.sql.token # Amount of time a token should remain valid (in seconds) # expiration = 86400 [policy] driver = keystone.policy.backends.sql.policy [ec2] driver = keystone.contrib.ec2.backends.sql.ec2 [ssl] #enable = True #certfile = /etc/keystone/ssl/certs/keystone.pem #keyfile = /etc/keystone/ssl/private/keystonekey.pem #ca_certs = /etc/keystone/ssl/certs/ca.pem #cert_required = True
[signing] #token_format = PKI #certfile = /etc/keystone/ssl/certs/signing_cert.pem #keyfile = /etc/keystone/ssl/private/signing_key.pem #ca_certs = /etc/keystone/ssl/certs/ca.pem #key_size = 1024 #valid_days = 3650 #ca_password = None [ldap] # url = ldap://localhost # user = dc=manager,dc=example,dc=com # password = None # suffix = cn=example,cn=com # use_dumb_member = False # allow_subtree_delete = False # dumb_member = cn=dumb,dc=example,dc=com # Maximum results per page; a value of zero ('0') disables paging (default) # page_size = 0 # The LDAP dereferencing option for queries. This can be either 'never', # 'searching', 'always', 'finding' or 'default'. The 'default' option falls # back to using default dereferencing configured by your ldap.conf. # alias_dereferencing = default # The LDAP scope for queries, this can be either 'one' # (onelevel/singlelevel) or 'sub' (subtree/wholesubtree) # query_scope = one # user_tree_dn = ou=users,dc=example,dc=com # user_filter = # user_objectclass = inetorgperson # user_domain_id_attribute = businesscategory # user_id_attribute = cn # user_name_attribute = sn # user_mail_attribute = email # user_pass_attribute = userpassword # user_enabled_attribute = enabled # user_enabled_mask = 0 # user_enabled_default = True
# user_attribute_ignore = tenant_id,tenants # user_allow_create = True # user_allow_update = True # user_allow_delete = True # user_enabled_emulation = False # user_enabled_emulation_dn = # tenant_tree_dn = ou=groups,dc=example,dc=com # tenant_filter = # tenant_objectclass = groupofnames # tenant_domain_id_attribute = businesscategory # tenant_id_attribute = cn # tenant_member_attribute = member # tenant_name_attribute = ou # tenant_desc_attribute = desc # tenant_enabled_attribute = enabled # tenant_attribute_ignore = # tenant_allow_create = True # tenant_allow_update = True # tenant_allow_delete = True # tenant_enabled_emulation = False # tenant_enabled_emulation_dn = # role_tree_dn = ou=roles,dc=example,dc=com # role_filter = # role_objectclass = organizationalrole # role_id_attribute = cn # role_name_attribute = ou # role_member_attribute = roleoccupant # role_attribute_ignore = # role_allow_create = True # role_allow_update = True # role_allow_delete = True # group_tree_dn = # group_filter = # group_objectclass = groupofnames # group_id_attribute = cn # group_name_attribute = ou # group_member_attribute = member # group_desc_attribute = desc
# group_attribute_ignore = # group_allow_create = True # group_allow_update = True # group_allow_delete = True [auth] methods = password,token password = keystone.auth.plugins.password.password token = keystone.auth.plugins.token.token [filter:debug] paste.filter_factory = keystone.common.wsgi:debug.factory [filter:token_auth] paste.filter_factory = keystone.middleware:tokenauthmiddleware.factory [filter:admin_token_auth] paste.filter_factory = keystone.middleware:admintokenauthmiddleware.factory [filter:xml_body] paste.filter_factory = keystone.middleware:xmlbodymiddleware.factory [filter:json_body] paste.filter_factory = keystone.middleware:jsonbodymiddleware.factory [filter:user_crud_extension] paste.filter_factory = keystone.contrib.user_crud:crudextension.factory [filter:crud_extension] paste.filter_factory = keystone.contrib.admin_crud:crudextension.factory [filter:ec2_extension] paste.filter_factory = keystone.contrib.ec2:ec2extension.factory [filter:s3_extension] paste.filter_factory = keystone.contrib.s3:s3extension.factory [filter:url_normalize] paste.filter_factory = keystone.middleware:normalizingfilter.factory
[filter:sizelimit] paste.filter_factory = keystone.middleware:requestbodysizelimiter.factory [filter:stats_monitoring] paste.filter_factory = keystone.contrib.stats:statsmiddleware.factory [filter:stats_reporting] paste.filter_factory = keystone.contrib.stats:statsextension.factory [filter:access_log] paste.filter_factory = keystone.contrib.access:accesslogmiddleware.factory [app:public_service] paste.app_factory = keystone.service:public_app_factory [app:service_v3] paste.app_factory = keystone.service:v3_app_factory [app:admin_service] paste.app_factory = keystone.service:admin_app_factory [pipeline:public_api] pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service [pipeline:admin_api] pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service [pipeline:api_v3] pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension service_v3 [app:public_version_service] paste.app_factory = keystone.service:public_version_app_factory [app:admin_version_service] paste.app_factory = keystone.service:admin_version_app_factory
[pipeline:public_version_api] pipeline = access_log sizelimit stats_monitoring url_normalize xml_body public_version_service [pipeline:admin_version_api] pipeline = access_log sizelimit stats_monitoring url_normalize xml_body admin_version_service [composite:main] use = egg:paste#urlmap /v2.0 = public_api /v3 = api_v3 / = public_version_api [composite:admin] use = egg:paste#urlmap /v2.0 = admin_api /v3 = api_v3 / = admin_version_api 5. glance-registry.conf --------------------------------------------------------------------------------------------------------------------- [DEFAULT] # Show more verbose log output (sets INFO log level output) #verbose = False # Show debugging output in logs (sets DEBUG log level output) #debug = False # Address to bind the registry server bind_host = 0.0.0.0 # Port the bind the registry server to bind_port = 9191 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! log_file = /var/log/glance/registry.log # Backlog requests when creating socket backlog = 4096
# TCP_KEEPIDLE value in seconds when creating socket. # Not supported on OS X. #tcp_keepidle = 600 # SQLAlchemy connection string for the reference implementation # registry server. Any valid SQLAlchemy connection string is fine. # See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_e ngine #sql_connection = sqlite:////var/lib/glance/glance.sqlite sql_connection = mysql://glanceuser:glancepass@10.2.31.168/glance # Period in seconds after which SQLAlchemy should reestablish its connection # to the database. # # MySQL uses a default `wait_timeout` of 8 hours, after which it will drop # idle connections. This can result in 'MySQL Gone Away' exceptions. If you # notice this, you can lower this value to ensure that SQLAlchemy reconnects # before MySQL can drop the connection. sql_idle_timeout = 3600 # Limit the api to return `param_limit_max` items in a call to a container. If # a larger `limit` query param is provided, it will be reduced to this value. api_limit_max = 1000 # If a `limit` query param is not provided in an api request, it will # default to `limit_param_default` limit_param_default = 25 # Role used to identify an authenticated user as administrator #admin_role = admin # Whether to automatically create the database tables. # Default: False #db_auto_create = False # ================= Syslog Options ============================ # Send logs to syslog (/dev/log) instead of to file specified # by `log_file` #use_syslog = False
# Facility to use. If unset defaults to LOG_USER. #syslog_log_facility = LOG_LOCAL1 # ================= SSL Options =============================== # Certificate file to use when starting registry server securely #cert_file = /path/to/certfile # Private key file to use when starting registry server securely #key_file = /path/to/keyfile # CA certificate file to use to verify connecting clients #ca_file = /path/to/cafile [keystone_authtoken] auth_host = 127.0.0.1 auth_port = 35357 auth_protocol = http admin_tenant_name = %SERVICE_TENANT_NAME% admin_user = %SERVICE_USER% admin_password = %SERVICE_PASSWORD% [paste_deploy] # Name of the paste configuration file that defines the available pipelines #config_file = glance-registry-paste.ini # Partial name of a pipeline in your paste configuration file with the # service name removed. For example, if your paste section name is # [pipeline:glance-registry-keystone], you would configure the flavor below # as 'keystone'. #flavor= [paste_deploy] flavor = keystone 6. glance-registry-paste.ini --------------------------------------------------------------------------------------------------------------------- # Use this pipeline for no auth - DEFAULT [pipeline:glance-registry] pipeline = unauthenticated-context registryapp # Use this pipeline for keystone auth
[pipeline:glance-registry-keystone] pipeline = authtoken context registryapp [app:registryapp] paste.app_factory = glance.registry.api.v1:api.factory [filter:context] paste.filter_factory = glance.api.middleware.context:contextmiddleware.factory [filter:unauthenticated-context] paste.filter_factory = glance.api.middleware.context:unauthenticatedcontextmiddleware.factory [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory auth_host = 10.2.31.168 auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = glance admin_password = service_pass 7. glance-api.conf --------------------------------------------------------------------------------------------------------------------- [DEFAULT] # Show more verbose log output (sets INFO log level output) #verbose = False # Show debugging output in logs (sets DEBUG log level output) #debug = False # Which backend scheme should Glance use by default is not specified # in a request to add a new image to Glance? Known schemes are determined # by the known_stores option below. # Default: 'file' default_store = file # List of which store classes and store class locations are # currently known to glance at startup. #known_stores = glance.store.filesystem.store,
# glance.store.http.store, # glance.store.rbd.store, # glance.store.s3.store, # glance.store.swift.store, # Maximum image size (in bytes) that may be uploaded through the # Glance API server. Defaults to 1 TB. # WARNING: this value should only be increased after careful consideration # and must be set to a value under 8 EB (9223372036854775808). #image_size_cap = 1099511627776 # Address to bind the API server bind_host = 0.0.0.0 # Port the bind the API server to bind_port = 9292 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! log_file = /var/log/glance/api.log # Backlog requests when creating socket backlog = 4096 # TCP_KEEPIDLE value in seconds when creating socket. # Not supported on OS X. #tcp_keepidle = 600 # SQLAlchemy connection string for the reference implementation # registry server. Any valid SQLAlchemy connection string is fine. # See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_e ngine #sql_connection = sqlite:////var/lib/glance/glance.sqlite sql_connection = mysql://glanceuser:glancepass@10.2.31.168/glance # Period in seconds after which SQLAlchemy should reestablish its connection # to the database. # # MySQL uses a default `wait_timeout` of 8 hours, after which it will drop # idle connections. This can result in 'MySQL Gone Away' exceptions. If you # notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection. sql_idle_timeout = 3600 # Number of Glance API worker processes to start. # On machines with more than one CPU increasing this value # may improve performance (especially if using SSL with # compression turned on). It is typically recommended to set # this value to the number of CPUs present on your machine. workers = 1 # Role used to identify an authenticated user as administrator #admin_role = admin # Allow unauthenticated users to access the API with read-only # privileges. This only applies when using ContextMiddleware. #allow_anonymous_access = False # Allow access to version 1 of glance api #enable_v1_api = True # Allow access to version 2 of glance api #enable_v2_api = True # Return the URL that references where the data is stored on # the backend storage system. For example, if using the # file system store a URL of 'file:///path/to/image' will # be returned to the user in the 'direct_url' meta-data field. # The default value is false. #show_image_direct_url = False # ================= Syslog Options ============================ # Send logs to syslog (/dev/log) instead of to file specified # by `log_file` #use_syslog = False # Facility to use. If unset defaults to LOG_USER. #syslog_log_facility = LOG_LOCAL0
# ================= SSL Options =============================== # Certificate file to use when starting API server securely #cert_file = /path/to/certfile # Private key file to use when starting API server securely #key_file = /path/to/keyfile # CA certificate file to use to verify connecting clients #ca_file = /path/to/cafile # ================= Security Options ========================== # AES key for encrypting store 'location' metadata, including # -- if used -- Swift or S3 credentials # Should be set to a random string of length 16, 24 or 32 bytes #metadata_encryption_key = <16, 24 or 32 char registry metadata key> # ============ Registry Options =============================== # Address to find the registry server registry_host = 0.0.0.0 # Port the registry server is listening on registry_port = 9191 # What protocol to use when connecting to the registry server? # Set to https for secure HTTP communication registry_client_protocol = http # The path to the key file to use in SSL connections to the # registry server, if any. Alternately, you may set the # GLANCE_CLIENT_KEY_FILE environ variable to a filepath of the key file #registry_client_key_file = /path/to/key/file # The path to the cert file to use in SSL connections to the # registry server, if any. Alternately, you may set the # GLANCE_CLIENT_CERT_FILE environ variable to a filepath of the cert file #registry_client_cert_file = /path/to/cert/file
# The path to the certifying authority cert file to use in SSL connections # to the registry server, if any. Alternately, you may set the # GLANCE_CLIENT_CA_FILE environ variable to a filepath of the CA cert file #registry_client_ca_file = /path/to/ca/file # When using SSL in connections to the registry server, do not require # validation via a certifying authority. This is the registry's equivalent of # specifying --insecure on the command line using glanceclient for the API # Default: False #registry_client_insecure = False # The period of time, in seconds, that the API server will wait for a registry # request to complete. A value of '0' implies no timeout. # Default: 600 #registry_client_timeout = 600 # Whether to automatically create the database tables. # Default: False #db_auto_create = False # ============ Notification System Options ===================== # Notifications can be sent when images are create, updated or deleted. # There are three methods of sending notifications, logging (via the # log_file directive), rabbit (via a rabbitmq queue), qpid (via a Qpid # message queue), or noop (no notifications sent, the default) notifier_strategy = rabbit #notifier_strategy = noop #Configuration options if sending notifications via rabbitmq (these are # the defaults) rabbit_host = localhost #rabbit_host = 10.2.31.168 rabbit_port = 5672 rabbit_use_ssl = false rabbit_userid = guest rabbit_password = password rabbit_virtual_host = / rabbit_notification_exchange = glance
rabbit_notification_topic = notifications rabbit_durable_queues = False # Configuration options if sending notifications via Qpid (these are # the defaults) qpid_notification_exchange = glance qpid_notification_topic = notifications qpid_host = localhost qpid_port = 5672 qpid_username = qpid_password = qpid_sasl_mechanisms = qpid_reconnect_timeout = 0 qpid_reconnect_limit = 0 qpid_reconnect_interval_min = 0 qpid_reconnect_interval_max = 0 qpid_reconnect_interval = 0 qpid_heartbeat = 5 # Set to 'ssl' to enable SSL qpid_protocol = tcp qpid_tcp_nodelay = True # ============ Filesystem Store Options ======================== # Directory that the Filesystem backend store # writes image data to filesystem_store_datadir = /var/lib/glance/images/ # ============ Swift Store Options ============================= # Version of the authentication service to use # Valid versions are '2' for keystone and '1' for swauth and rackspace swift_store_auth_version = 2 # Address where the Swift authentication service lives # Valid schemes are 'http://' and 'https://' # If no scheme specified, default to 'https://' # For swauth, use something like '127.0.0.1:8080/v1.0/' swift_store_auth_address = 127.0.0.1:5000/v2.0/
# User to authenticate against the Swift authentication service # If you use Swift authentication service, set it to 'account':'user' # where 'account' is a Swift storage account and 'user' # is a user in that account swift_store_user = jdoe:jdoe # Auth key for the user authenticating against the # Swift authentication service swift_store_key = a86850deb2742ec3cb41518e26aa2d89 # Container within the account that the account should use # for storing images in Swift swift_store_container = glance # Do we create the container if it does not exist? swift_store_create_container_on_put = False # What size, in MB, should Glance start chunking image files # and do a large object manifest in Swift? By default, this is # the maximum object size in Swift, which is 5GB swift_store_large_object_size = 5120 # When doing a large object manifest, what size, in MB, should # Glance write chunks to Swift? This amount of data is written # to a temporary disk buffer during the process of chunking # the image file, and the default is 200MB swift_store_large_object_chunk_size = 200 # Whether to use ServiceNET to communicate with the Swift storage servers. # (If you aren't RACKSPACE, leave this False!) # # To use ServiceNET for authentication, prefix hostname of # `swift_store_auth_address` with 'snet-'. # Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/ swift_enable_snet = False # If set to True enables multi-tenant storage mode which causes Glance images # to be stored in tenant specific Swift accounts. #swift_store_multi_tenant = False
# A list of swift ACL strings that will be applied as both read and # write ACLs to the containers created by Glance in multi-tenant # mode. This grants the specified tenants/users read and write access # to all newly created image objects. The standard swift ACL string # formats are allowed, including: # <tenant_id>:<username> # <tenant_name>:<username> # *:<username> # Multiple ACLs can be combined using a comma separated list, for # example: swift_store_admin_tenants = service:glance,*:admin #swift_store_admin_tenants = # The region of the swift endpoint to be used for single tenant. This setting # is only necessary if the tenant has multiple swift endpoints. #swift_store_region = # ============ S3 Store Options ============================= # Address where the S3 authentication service lives # Valid schemes are 'http://' and 'https://' # If no scheme specified, default to 'http://' s3_store_host = 127.0.0.1:8080/v1.0/ # User to authenticate against the S3 authentication service s3_store_access_key = <20-char AWS access key> # Auth key for the user authenticating against the # S3 authentication service s3_store_secret_key = <40-char AWS secret key> # Container within the account that the account should use # for storing images in S3. Note that S3 has a flat namespace, # so you need a unique bucket name for your glance images. An # easy way to do this is append your AWS access key to "glance". # S3 buckets in AWS *must* be lowercased, so remember to lowercase # your AWS access key if you use it in your bucket name below! s3_store_bucket = <lowercased 20-char aws access key>glance
# Do we create the bucket if it does not exist? s3_store_create_bucket_on_put = False # When sending images to S3, the data will first be written to a # temporary buffer on disk. By default the platform's temporary directory # will be used. If required, an alternative directory can be specified here. #s3_store_object_buffer_dir = /path/to/dir # When forming a bucket url, boto will either set the bucket name as the # subdomain or as the first token of the path. Amazon's S3 service will # accept it as the subdomain, but Swift's S3 middleware requires it be # in the path. Set this to 'path' or 'subdomain' - defaults to 'subdomain'. #s3_store_bucket_url_format = subdomain # ============ RBD Store Options ============================= # Ceph configuration file path # If using cephx authentication, this file should # include a reference to the right keyring # in a client.<user> section rbd_store_ceph_conf = /etc/ceph/ceph.conf # RADOS user to authenticate as (only applicable if using cephx) rbd_store_user = glance # RADOS pool in which images are stored rbd_store_pool = images # Images will be chunked into objects of this size (in megabytes). # For best performance, this should be a power of two rbd_store_chunk_size = 8 # ============ Delayed Delete Options ============================= # Turn on/off delayed delete delayed_delete = False # Delayed delete time in seconds scrub_time = 43200
# Directory that the scrubber will use to remind itself of what to delete # Make sure this is also set in glance-scrubber.conf scrubber_datadir = /var/lib/glance/scrubber # =============== Image Cache Options =========================== # Base directory that the Image Cache uses image_cache_dir = /var/lib/glance/image-cache/ [keystone_authtoken] auth_host = 127.0.0.1 auth_port = 35357 auth_protocol = http admin_tenant_name = %SERVICE_TENANT_NAME% admin_user = %SERVICE_USER% admin_password = %SERVICE_PASSWORD% [paste_deploy] # Name of the paste configuration file that defines the available pipelines #config_file = glance-api-paste.ini # Partial name of a pipeline in your paste configuration file with the # service name removed. For example, if your paste section name is # [pipeline:glance-api-keystone], you would configure the flavor below # as 'keystone'. #flavor= [paste_deploy] flavor = keystone 8. glance-api-paste.ini --------------------------------------------------------------------------------------------------------------------- # Use this pipeline for no auth or image caching - DEFAULT [pipeline:glance-api] pipeline = versionnegotiation unauthenticated-context rootapp # Use this pipeline for image caching and no auth [pipeline:glance-api-caching] pipeline = versionnegotiation unauthenticated-context cache rootapp # Use this pipeline for caching w/ management interface but no auth [pipeline:glance-api-cachemanagement]
pipeline = versionnegotiation unauthenticated-context cache cachemanage rootapp # Use this pipeline for keystone auth [pipeline:glance-api-keystone] pipeline = versionnegotiation authtoken context rootapp # Use this pipeline for keystone auth with image caching [pipeline:glance-api-keystone+caching] pipeline = versionnegotiation authtoken context cache rootapp # Use this pipeline for keystone auth with caching and cache management [pipeline:glance-api-keystone+cachemanagement] pipeline = versionnegotiation authtoken context cache cachemanage rootapp [composite:rootapp] paste.composite_factory = glance.api:root_app_factory /: apiversions /v1: apiv1app /v2: apiv2app [app:apiversions] paste.app_factory = glance.api.versions:create_resource [app:apiv1app] paste.app_factory = glance.api.v1.router:api.factory [app:apiv2app] paste.app_factory = glance.api.v2.router:api.factory [filter:versionnegotiation] paste.filter_factory = glance.api.middleware.version_negotiation:versionnegotiationfilter.factory [filter:cache] paste.filter_factory = glance.api.middleware.cache:cachefilter.factory [filter:cachemanage] paste.filter_factory = glance.api.middleware.cache_manage:cachemanagefilter.factory [filter:context] paste.filter_factory = glance.api.middleware.context:contextmiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
auth_host = 10.2.31.168
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

9. nova.conf
---------------------------------------------------------------------------------------------------------------------
#[DEFAULT]
#dhcpbridge_flagfile=/etc/nova/nova.conf
#dhcpbridge=/usr/bin/nova-dhcpbridge
#logdir=/var/log/nova
#state_path=/var/lib/nova
#lock_path=/var/lock/nova
#force_dhcp_release=true
#iscsi_helper=tgtadm
#root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
#verbose=true
#ec2_private_dns_show_ip=true
#api_paste_config=/etc/nova/api-paste.ini
#volumes_path=/var/lib/nova/volumes
#enabled_apis=ec2,osapi_compute,metadata

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=true
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
#rabbit_host=10.2.31.168
rabbit_host="localhost"
rabbit_password="password"
rabbit_port=5672
rabbit_use_ssl=false
rabbit_userid="guest"
rabbit_virtual_host="/"
libvirt_use_virtio_for_bridges=true
connection_type=libvirt
libvirt_type=qemu
nova_url=http://10.2.31.168:8774/v1.1/
sql_connection=mysql://novauser:novapass@10.2.31.168/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=10.2.31.168:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.2.31.168:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.2.31.168
vncserver_listen=0.0.0.0

# Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloopenstack

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.2.31.168:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://10.2.31.168:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900

10. nova-compute.conf
---------------------------------------------------------------------------------------------------------------------
[DEFAULT]
libvirt_type=qemu
#libvirt_type=kvm
#compute_driver=libvirt.LibvirtDriver
compute_driver=libvirt.LibvirtDriver
#libvirt_vif_type=ethernet
#libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver

11. api-paste.ini
---------------------------------------------------------------------------------------------------------------------
############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#######
# EC2 #
#######
[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#############
# Openstack #
#############
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.2.31.168
#auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the nova process is running as.
signing_dir = /var/lib/nova/keystone-signing
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0

12. cinder.conf
---------------------------------------------------------------------------------------------------------------------
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinderuser:cinderpass@10.2.31.168/cinder
api_paste_config = /etc/cinder/api-paste.ini
#iscsi_helper = tgtadm
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

#################### rabbit host
# kombu_ssl_ca_certs =
# IP address of the RabbitMQ installation
rabbit_host = localhost
#rabbit_host = 10.2.31.168
# Password of the RabbitMQ server
rabbit_password = password
# Port where RabbitMQ server is running/listening
rabbit_port = 5672
# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
rabbit_hosts = 10.2.31.168:5672
# User ID used for RabbitMQ connections
rabbit_userid = guest
# Location of a virtual RabbitMQ installation.
rabbit_virtual_host = /
# Maximum retries with trying to connect to RabbitMQ
# (the default of 0 implies an infinite retry count)
# rabbit_max_retries = 0
# RabbitMQ connection retry interval
# rabbit_retry_interval = 1
# Use HA queues in RabbitMQ (x-ha-policy: all). You need to
# wipe RabbitMQ database when changing this option. (boolean value)
# rabbit_ha_queues = false

13. quantum.conf
---------------------------------------------------------------------------------------------------------------------
[DEFAULT]
# Default log level is INFO
# verbose and debug has the same result.
# One of them will set DEBUG log level output
# debug = False
# verbose = False

# Where to store Quantum state files. This directory must be writable by the
# user executing the agent.
# state_path = /var/lib/quantum

# Where to store lock files
lock_path = $state_path/lock

# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S

# use_syslog -> syslog
# log_file and log_dir -> log_dir/log_file
# (not log_file) and log_dir -> log_dir/{binary_name}.log
# use_stderr -> stderr
# (not user_stderr) and (not log_file) -> stdout
# publish_errors -> notification system
# use_syslog = False
# syslog_log_facility = LOG_USER
# use_stderr = True
# log_file =
# log_dir =
# publish_errors = False

# Address to bind the API server
bind_host = 0.0.0.0

# Port to bind the API server to
bind_port = 9696

# Path to the extensions. Note that this can be a colon-separated list of
# paths. For example:
# api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
# The path of quantum.extensions is appended to this, so if your
# extensions are in there you don't need to specify them here
# api_extensions_path =

# Quantum plugin provider module
# core_plugin =
#core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
core_plugin = quantum.plugins.linuxbridge.lb_quantum_plugin.LinuxBridgePluginV2

# Advanced service modules
# service_plugins =

# Paste configuration file
api_paste_config = /etc/quantum/api-paste.ini

# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
# auth_strategy = keystone
# Base MAC address. The first 3 octets will remain unchanged. If the
# 4th octet is not 00, it will also be used. The others will be
# randomly generated.
# 3 octet
# base_mac = fa:16:3e:00:00:00
# 4 octet
# base_mac = fa:16:3e:4f:00:00

# Maximum amount of retries to generate a unique MAC address
# mac_generation_retries = 16

# DHCP Lease duration (in seconds)
# dhcp_lease_duration = 120

# Allow sending resource operation notification to DHCP agent
# dhcp_agent_notification = True

# Enable or disable bulk create/update/delete operations
# allow_bulk = True
# Enable or disable pagination
# allow_pagination = False
# Enable or disable sorting
# allow_sorting = False

# Enable or disable overlapping IPs for subnets
# Attention: the following parameter MUST be set to False if Quantum is
# being used in conjunction with nova security groups and/or metadata service.
# allow_overlapping_ips = False

# Ensure that configured gateway is on subnet
# force_gateway_on_subnet = False

# RPC configuration options. Defined in rpc __init__
# The messaging module to use, defaults to kombu.
# rpc_backend = quantum.openstack.common.rpc.impl_kombu
# Size of RPC thread pool
# rpc_thread_pool_size = 64
# Size of RPC connection pool
# rpc_conn_pool_size = 30
# Seconds to wait for a response from call or multicall
# rpc_response_timeout = 60
# Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
# rpc_cast_timeout = 30
# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
# allowed_rpc_exception_modules = quantum.openstack.common.exception, nova.exception

# AMQP exchange to connect to if using RabbitMQ or QPID
control_exchange = quantum

# If passed, use a fake RabbitMQ provider
# fake_rabbit = False

# Configuration options if sending notifications via kombu rpc (these are
# the defaults)
# SSL version to use (valid only if SSL enabled)
# kombu_ssl_version =
# SSL key file (valid only if SSL enabled)
# kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled)
# kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled)
# kombu_ssl_ca_certs =

# IP address of the RabbitMQ installation
rabbit_host = localhost
#rabbit_host = 10.2.31.168
# Password of the RabbitMQ server
rabbit_password = password
# Port where RabbitMQ server is running/listening
rabbit_port = 5672
# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
rabbit_hosts = localhost:5672
# User ID used for RabbitMQ connections
rabbit_userid = guest
# Location of a virtual RabbitMQ installation.
rabbit_virtual_host = /
# Maximum retries with trying to connect to RabbitMQ
# (the default of 0 implies an infinite retry count)
# rabbit_max_retries = 0
# RabbitMQ connection retry interval
# rabbit_retry_interval = 1
# Use HA queues in RabbitMQ (x-ha-policy: all). You need to
# wipe RabbitMQ database when changing this option. (boolean value)
# rabbit_ha_queues = false

# QPID
# rpc_backend=quantum.openstack.common.rpc.impl_qpid
# Qpid broker hostname
# qpid_hostname = localhost
# Qpid broker port
# qpid_port = 5672
# Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'
# qpid_hosts = localhost:5672
# Username for qpid connection
# qpid_username = ''
# Password for qpid connection
# qpid_password = ''
# Space separated list of SASL mechanisms to use for auth
# qpid_sasl_mechanisms = ''
# Seconds between connection keepalive heartbeats
# qpid_heartbeat = 60
# Transport to use, either 'tcp' or 'ssl'
# qpid_protocol = tcp
# Disable Nagle algorithm
# qpid_tcp_nodelay = True

# ZMQ
# rpc_backend=quantum.openstack.common.rpc.impl_zmq
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address.
# rpc_zmq_bind_address = *

# ============ Notification System Options =====================

# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)

# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = quantum.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = quantum.openstack.common.notifier.log_notifier
# RPC driver. DHCP agents need it.
notification_driver = quantum.openstack.common.notifier.rpc_notifier

# default_notification_level is used to form actual topic name(s) or to set logging level
default_notification_level = INFO

# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host

# Defined in rpc_notifier, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications

# Default maximum number of items returned in a single response,
# value == infinite and value < 0 means no max limit, and value must be
# greater than 0. If the number of items requested is greater than
# pagination_max_limit, server will just return pagination_max_limit
# of number of items.
# pagination_max_limit = -1

# Maximum number of DNS nameservers per subnet
# max_dns_nameservers = 5

# Maximum number of host routes per subnet
# max_subnet_host_routes = 20

# Maximum number of fixed ips per port
# max_fixed_ips_per_port = 5

# =========== items for agent management extension =============
# Seconds to regard the agent as down.
# agent_down_time = 5
# =========== end of items for agent management extension =====

# =========== items for agent scheduler extension =============
# Driver to use for scheduling network to DHCP agent
# network_scheduler_driver = quantum.scheduler.dhcp_agent_scheduler.ChanceScheduler
# Driver to use for scheduling router to a default L3 agent
# router_scheduler_driver = quantum.scheduler.l3_agent_scheduler.ChanceScheduler
# Allow auto scheduling networks to DHCP agent. It will schedule non-hosted
# networks to first DHCP agent which sends get_active_networks message to
# quantum server
# network_auto_schedule = True

# Allow auto scheduling routers to L3 agent. It will schedule non-hosted
# routers to first L3 agent which sends sync_routers message to quantum server
# router_auto_schedule = True
# =========== end of items for agent scheduler extension =====

[QUOTAS]
# resource name(s) that are supported in quota features
# quota_items = network,subnet,port

# default number of resource allowed per tenant, minus for unlimited
# default_quota = -1

# number of networks allowed per tenant, and minus means unlimited
# quota_network = 10

# number of subnets allowed per tenant, and minus means unlimited
# quota_subnet = 10

# number of ports allowed per tenant, and minus means unlimited
# quota_port = 50

# number of security groups allowed per tenant, and minus means unlimited
# quota_security_group = 10

# number of security group rules allowed per tenant, and minus means unlimited
# quota_security_group_rule = 100

# default driver to use for quota checks
# quota_driver = quantum.quota.ConfDriver

[DEFAULT_SERVICETYPE]
# Description of the default service type (optional)
# description = "default service type"

# Enter a service definition line for each advanced service provided
# by the default service type.
# Each service definition should be in the following format:
# <service>:<plugin>[:driver]

[AGENT]
# Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real
# root filter facility.
# Change to "sudo" to skip the filtering and just run the command directly
# root_helper = sudo
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf

# =========== items for agent management extension =============
# seconds between nodes reporting state to server, should be less than
# agent_down_time
# report_interval = 4
# =========== end of items for agent management extension =====

[keystone_authtoken]
auth_host = 10.2.31.168
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing
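Note that changes to the configuration files listed in this appendix take effect only after the corresponding services are restarted. The following commands are a minimal sketch using the Ubuntu Grizzly service names assumed throughout this guide; adjust them if your package or service names differ.
# service keystone restart
# service glance-api restart; service glance-registry restart
# service quantum-server restart
# service quantum-plugin-linuxbridge-agent restart
# service quantum-dhcp-agent restart; service quantum-metadata-agent restart
# cd /etc/init.d/; for i in $( ls nova-* ); do service $i restart; done
# service cinder-api restart; service cinder-scheduler restart; service cinder-volume restart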
Appendix B
About Ubuntu
Before installation of Ubuntu:
1. Check whether your system supports a 32-bit or a 64-bit operating system (a quick check from an existing Linux installation is shown at the end of this appendix).
2. Ubuntu is available in Desktop and Server editions. If you are a beginner, we suggest using Ubuntu Desktop for the OpenStack deployment. Ubuntu Server does not include a GUI, so it is not recommended for beginners.
3. We recommend a Long Term Support (LTS) release of Ubuntu for OpenStack.
4. Ubuntu Desktop is available at http://www.ubuntu.com/download/desktop. You can download it for 32-bit or 64-bit systems.
5. Ubuntu Server is available at http://www.ubuntu.com/download/server. You can download it for 32-bit or 64-bit systems.

Ubuntu Installation
1. Go to the following links for Ubuntu Desktop installation:
http://www.wikihow.com/install-ubuntu-linux
http://www.howtoforge.com/the-perfect-desktop-ubuntu-12.04-lts-precise-pangolin
2. Go to the following link for Ubuntu Server installation:
http://ubuntuserverguide.com/2012/05/how-to-install-ubuntu-server-12-04-lts-precise-pangolin-included-screenshot.html

Useful links for beginners:
Go through the following links if you are a beginner in Ubuntu:
http://freshtutorial.com/basic-ubuntu-command-tutorial-for-beginners/
http://www.tuxarena.com/static/intro_linux_cli.php
https://help.ubuntu.com/community/beginners/bashscripting
http://beginlinux.com/twitter/1094-the-beginners-guide-to-the-ubuntu-terminal
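As a quick check for step 1 above, you can run the following standard commands from any existing Linux system (a hedged sketch; your output will vary):
$ uname -m
x86_64        (a 64-bit kernel is running; i686 indicates 32-bit)
$ grep -o -w lm /proc/cpuinfo | sort -u
lm            (the 'lm' CPU flag means the processor supports 64-bit operation)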
Appendix C
CONFIGURING THE HYPERVISOR
For production environments the most tested hypervisors are KVM and Xen-based hypervisors. KVM runs through libvirt, while Xen runs best through XenAPI calls. KVM is selected by default and requires the least additional configuration. This guide offers information for the KVM and QEMU hypervisors.

KVM
KVM is configured as the default hypervisor for Compute in OpenStack. The KVM hypervisor supports the following virtual machine image formats:
Raw
QEMU Copy-on-write (qcow2)
VMware virtual machine disk format (vmdk)

1. Checking for hardware virtualization support
The processors of your compute host need to support virtualization technology (VT) to use KVM. If you are running on Ubuntu, use the kvm-ok command to check whether your processor has VT support, whether it is enabled in the BIOS, and whether KVM is installed properly. The kvm-ok command is provided by the cpu-checker package, so install it first and then run it as root:
# apt-get install cpu-checker
# kvm-ok
If KVM is enabled, the output should look something like:
INFO: /dev/kvm exists
KVM acceleration can be used
If KVM is not enabled, the output should look something like:
INFO: Your CPU does not support KVM extensions
In the case that KVM acceleration is not supported, Compute should be configured to use a different hypervisor, such as QEMU or Xen. On distributions that don't have kvm-ok, you can check whether your processor has VT support by looking at the processor flags in the /proc/cpuinfo file. For Intel processors, look for the vmx flag; for AMD processors, look for the svm flag. A simple way to check is to run the following command and see if there is any output:
$ egrep '(vmx|svm)' --color=always /proc/cpuinfo
Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the above command produced no output, you may need to reboot your machine, enter the system BIOS, and enable the VT option.

2. Enabling KVM:
KVM requires the kvm module and either the kvm-intel or kvm-amd module to be loaded. This may have been configured automatically on your distribution when KVM was installed. You can check that they have been loaded using lsmod, as follows, with expected output for Intel-based processors:
$ lsmod | grep kvm
kvm_intel             137721  9
kvm                   415459  1 kvm_intel
The following sections describe how to load the kernel modules for Intel-based and AMD-based processors if they were not loaded automatically by your distribution's KVM installation process.
Intel-based processors:
If your compute host is Intel-based, run the following as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-intel
Add the following lines to /etc/modules so that these modules will load on reboot:
kvm
kvm-intel

AMD-based processors:
If your compute host is AMD-based, run the following as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-amd
Add the following lines to /etc/modules so that these modules will load on reboot:
kvm
kvm-amd

3. KVM installation
Now install the packages for the KVM hypervisor:
# apt-get install -y kvm libvirt-bin pm-utils
Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun"
]
Delete the default virtual bridge:
# virsh net-destroy default
# virsh net-undefine default
Restart the libvirt service to load the new values:
# service libvirt-bin restart

QEMU
From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native (hardware-accelerated) virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment. The typical use cases for QEMU are:
Running on older hardware that lacks virtualization support.
Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not expose the required hardware support to guests.
KVM requires hardware support for acceleration. If hardware support is not available (e.g., if you are running Compute inside of a VM and the hypervisor does not expose the required hardware support), you can use QEMU instead. KVM and QEMU have the same level of support in OpenStack, but KVM will provide better performance. To enable QEMU in OpenStack, add the following lines to nova.conf:
compute_driver=libvirt.LibvirtDriver
libvirt_type=qemu
The QEMU hypervisor supports the following virtual machine image formats:
Raw
QEMU Copy-on-write (qcow2)
VMware virtual machine disk format (vmdk)
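After changing the hypervisor settings, the compute service must be restarted for them to take effect. The commands below are a minimal sketch assuming the file locations and Ubuntu service names used elsewhere in this guide; verify them against your own installation.
$ grep -E '^(libvirt_type|compute_driver)' /etc/nova/nova.conf /etc/nova/nova-compute.conf
(confirms the hypervisor options; this guide also keeps libvirt_type in nova-compute.conf)
$ sudo service nova-compute restart
$ sudo nova-manage service list
(nova-compute should be reported with a ":-)" state once it is back up)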