OpenStack Installation Guideline [Grizzly Release]


Extensible Access Control Framework for Cloud based Applications
Version 1.0
OpenStack Installation Guideline [Grizzly Release]
Dr. Muhammad Awais Shibli [Principal Investigator]
Dr. Arshad Ali [Co-Principal Investigator]
National ICT R & D [Funding Organization]

Table of Contents

1. INTRODUCTION
   Installation of Ubuntu OS
   Requirements
2. PREPARING NODE/SYSTEM
   Add Repository
   Install NTP
   Install MySQL
   Install Messaging Service
   Install Linux bridging software
   Enable IP forwarding
3. INSTALLING OPENSTACK IDENTITY SERVICE (KEYSTONE)
4. INSTALLING OPENSTACK IMAGE SERVICE (GLANCE)
5. INSTALLING OPENSTACK NETWORKING SERVICE (QUANTUM)
6. INSTALLING OPENSTACK COMPUTE SERVICE (NOVA)
7. INSTALLING OPENSTACK CINDER COMPONENT (VOLUME)
8. INSTALLING OPENSTACK DASHBOARD COMPONENT (HORIZON)
APPENDIX
   Appendix A (Configuration Files)
   Appendix B (About Ubuntu Installation)
   Appendix C (Configuring Hypervisor)

Preface

This OpenStack installation manual is aimed at researchers, technologists, and system administrators eager to understand and deploy Cloud computing infrastructure projects based upon OpenStack software. This manual intends to help organizations looking to set up an OpenStack based private Cloud. OpenStack is a collection of open source software projects that enterprises/service providers can use to set up and run their cloud compute and storage infrastructure. Rackspace and NASA are the key initial contributors to the stack. This manual describes instructions for manually installing the OpenStack Grizzly release on 64-bit Ubuntu Server/Desktop 12.04 LTS with Keystone authentication and the dashboard. Specifically, the instructions describe how to install the Cloud controller and Compute on a single machine (node). In this manual, we have included OpenStack Compute Infrastructure (Nova), OpenStack Imaging Service (Glance), OpenStack Identity Service (Keystone), OpenStack Volume (Cinder), OpenStack Networking (Quantum) and the OpenStack administrative web interface, Horizon (dashboard).

Target Audience

Our aim has been to provide a guide for beginners who are new to OpenStack. Good familiarity with virtualization is assumed, as troubleshooting OpenStack related problems requires a good knowledge of virtualization. Similarly, familiarity with Cloud Computing concepts and terminology will be of help.

Acknowledgement

Most of the content has been borrowed from web resources such as manuals, documentation and white papers from OpenStack and Canonical, numerous posts on forums, discussions on the OpenStack IRC channel and many articles on the web. We would like to thank the authors of all these resources.

Conventions

Commands and paths of configuration files are shown in Bold & Italic. Settings of configuration files are shown in Italic.

1. INTRODUCTION

We will deploy the Cloud Controller and Compute from the OpenStack Grizzly release manually on a single machine running Ubuntu 12.04, 64-bit Server/Desktop. Setting up Swift is not part of these instructions. The machine will use FlatDHCP networking mode. We will then add another compute machine that will run its own nova-network. We will use the Grizzly final release from the Ubuntu Cloud Archive. In our case, the Cloud Controller and Compute services will be on a single node. We will install OpenStack components such as Quantum, Nova, Keystone, Glance, Horizon and Cinder, and other tools such as LinuxBridge and KVM.

1.1 INSTALLATION OF UBUNTU OS

This guide is for Ubuntu 12.04 LTS. Before installation of the OpenStack Cloud, the Ubuntu operating system must be installed on the system. More detail about Ubuntu server/desktop installation is given in Appendix B of this manual. If your OpenStack Cloud will be behind a proxy, then the following changes are required in the .bashrc file and the environment file (/etc/environment) of the Ubuntu OS. To apply the following changes on the server, please reboot the machine. We have assigned a static IP address to the Ubuntu machine and the proxy address is :8080 in our scenario.

1. Type the following command in the terminal. Please replace User_Name in the command with the username on your system.

$ sudo nano /home/User_Name/.bashrc

Add the following lines at the end of the file and save it.

no_proxy="localhost, , :6080,
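For illustration only, a filled-in version of the proxy entries from steps 1 and 2 might look like the following; the proxy host proxy.example.com and the controller address 192.168.1.10 are assumptions, so substitute your own values.

http_proxy="http://proxy.example.com:8080/"
https_proxy="https://proxy.example.com:8080/"
ftp_proxy="ftp://proxy.example.com:8080/"
socks_proxy="socks://proxy.example.com:8080/"
no_proxy="localhost,127.0.0.1,192.168.1.10,192.168.1.10:6080"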

2. To apply the same settings for all users, add the lines given below to the environment file.

$ sudo nano /etc/environment

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
http_proxy="
https_proxy="
ftp_proxy="ftp:// :8080/"
socks_proxy="socks:// :8080/"
no_proxy="localhost, , 68:6080,

1.2 REQUIREMENTS

We require only a single NIC on the server, with IP address ( ). Our example installation architecture is shown below. Only one server will run all nova-* services and also drive all the virtual instances.

2. PREPARING NODE/SYSTEM

After installation of Ubuntu Server/Desktop, we will prepare our system to run OpenStack. Run the following command to become root.

$ sudo -i

1. Add the Grizzly repositories:

# apt-get install ubuntu-cloud-keyring python-software-properties software-properties-common python-keyring
# echo deb precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list

2. Now update your system:

# apt-get update
# apt-get upgrade
# apt-get dist-upgrade

3. Networking: Set the static IP address of the Ethernet interface.

# nano /etc/network/interfaces

auto eth1
iface eth1 inet static
address 
netmask 
gateway 
dns-nameservers 

4. Restart the networking service to apply the settings:

# /etc/init.d/networking restart
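As a point of reference, here is a sketch of how the repository entry from step 1 and the interface stanza from step 3 might look once filled in; the Ubuntu Cloud Archive URL and the 192.168.1.x addresses are assumptions for illustration, not values taken from this deployment.

# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" >> /etc/apt/sources.list.d/grizzly.list

auto eth1
iface eth1 inet static
address 192.168.1.10
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 8.8.8.8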

5. Installing Network Time Protocol (NTP):

# apt-get install -y ntp

Set up the NTP server on your controller node so that it receives data by modifying the ntp.conf file and restarting the service.

# sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver \nfudge  stratum 10/g' /etc/ntp.conf
# service ntp restart

6. Installing MySQL

Install MySQL and specify a password for the root user:

# apt-get install -y python-mysqldb mysql-server

Use sed to edit /etc/mysql/my.cnf to change bind-address from localhost ( ) to any ( ), and restart the mysql service as root.

# sed -i 's/ / /g' /etc/mysql/my.cnf
# service mysql restart

7. Installing Messaging Server

Install the messaging queue server. Typically this is either Qpid or RabbitMQ, but ZeroMQ (0MQ) is also available.

# apt-get install rabbitmq-server

Change the password of the default user 'guest' using the following command.

# rabbitmqctl change_password guest password

By default, RabbitMQ listens on localhost ( ), but it can be changed to the system IP address (like ). In our case, RabbitMQ is listening on localhost and port 5672. We will use this RabbitMQ setting in the Nova, Quantum, Cinder and Glance components. You can get more detail about its settings by typing rabbitmqctl in the terminal.
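To confirm that the broker is up and that the guest password change took effect, the rabbitmqctl tool shipped with rabbitmq-server can be queried directly; a quick check looks like this:

# rabbitmqctl status       (shows the node status, listeners and memory usage)
# rabbitmqctl list_users   (the built-in guest user should be listed)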

Restart the RabbitMQ service.

# /etc/init.d/rabbitmq-server restart

8. Check RabbitMQ status:

# /etc/init.d/rabbitmq-server status

9. Check the listening status using netstat.

10. Other Services

These packages are used for bridging on Linux:

# apt-get install -y vlan bridge-utils

11. Enable IP forwarding on the server.

# sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

To apply the change without rebooting, perform the following:

# sysctl net.ipv4.ip_forward=1
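For the listening status check in step 9, the open ports for MySQL (3306) and RabbitMQ (5672) can be inspected with netstat, for example:

# netstat -lntp | grep -E '3306|5672'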

3. INSTALLING OPENSTACK IDENTITY SERVICE (KEYSTONE)

Keystone is an OpenStack project that provides Identity, Token, Catalog and Policy services for use specifically by projects in the OpenStack family.

1. Install Keystone:

# apt-get install -y keystone

Verify that Keystone is running:

# service keystone status

To manually create the database, start the mysql command line client by running:

# mysql -u root -p

Enter the mysql root user's password when prompted.

2. Create the keystone database.

> CREATE DATABASE keystone;
> GRANT ALL ON keystone.* TO 'keystoneuser'@'%' IDENTIFIED BY 'keystonepass';
> quit;

Update the connection attribute in /etc/keystone/keystone.conf to point to the new database:

sql_connection = mysql://keystoneuser:keystonepass@ /keystone

3. Restart the identity service:

# service keystone restart

4. Synchronize and populate the database:

# keystone-manage db_sync

Fill up the keystone database using the two scripts available at the following link ( Guide/tree/master/KeystoneScripts). Modify the HOST_IP and HOST_IP_EXT variables before executing the scripts.

# nano /home/test/desktop/keystone_basic.sh

# nano /home/test/desktop/keystone_endpoints_basic.sh

5. Run the following commands to make the bash scripts executable.

# chmod +x keystone_basic.sh
# chmod +x keystone_endpoints_basic.sh
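As a rough sketch of what keystone_basic.sh does: it drives the keystone CLI with the admin service token to create the admin and service tenants, users and roles. The commands below are illustrative assumptions (the real script uses variables and looked-up IDs), so treat them as an outline rather than a replacement for the script.

export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT="http://<HOST_IP>:35357/v2.0"
keystone tenant-create --name admin --description "Admin Tenant"
keystone user-create --name admin --pass admin_pass
keystone role-create --name admin
keystone user-role-add --user-id <admin_user_id> --role-id <admin_role_id> --tenant-id <admin_tenant_id>

keystone_endpoints_basic.sh similarly registers the service catalog (keystone service-create and keystone endpoint-create) for Keystone, Glance, Nova, Quantum and Cinder.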

# ./keystone_basic.sh
# ./keystone_endpoints_basic.sh

6. Create a simple credential file and load it so you won't be bothered later:

# nano creds

export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="

Load it using the following command.

# source creds

7. To test Keystone, we use a simple CLI command:

# keystone user-list

# keystone endpoint-list

Troubleshooting the Identity Service (Keystone)

To begin troubleshooting, look at the logs in the /var/log/keystone/keystone.log file (the location of log files is configured in the /etc/keystone/logging.conf file). It shows all the components that have come in to the WSGI request, and will ideally have an error in that log that explains why an authorization request failed. If you're not seeing the request at all in those logs, then run keystone with --debug, where --debug is passed in directly after the CLI command and prior to its parameters.
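For example, with the creds file sourced, the same checks can be repeated with debugging enabled to see the raw HTTP requests sent to the identity API:

# keystone --debug user-list
# keystone --debug tenant-list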

4. INSTALLING OPENSTACK IMAGE SERVICE (GLANCE)

The OpenStack Image Service provides discovery, registration and delivery services for disk and server images. The ability to copy or snapshot a server image and immediately store it away is a powerful capability of the OpenStack cloud operating system. Stored images can be used as a template to get new servers up and running quickly and more consistently when provisioning multiple servers than installing a server operating system and individually configuring additional services.

1. Install the Image service:

# apt-get -y install glance

2. Verify your glance services are running:

# service glance-api status
# service glance-registry status

3. Configuring the Image Service database backend

Configure the backend data store: create a glance MySQL database and grant the user full access to it. Start the MySQL command line client by running:

# mysql -u root -p

Enter the MySQL root user's password when prompted. To configure the MySQL database, create the glance database.

> CREATE DATABASE glance;
> GRANT ALL ON glance.* TO 'glanceuser'@'%' IDENTIFIED BY 'glancepass';
> quit;

The Image service has a number of options that you can use to configure the Glance API server, optionally the Glance Registry server, and the various storage backends that Glance can use to store images.

4. Update /etc/glance/glance-api-paste.ini with:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
auth_host = 
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

5. Update /etc/glance/glance-registry-paste.ini with:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 
auth_port = 35357

auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = service_pass

6. Update /etc/glance/glance-api.conf with:

sql_connection = mysql://glanceuser:glancepass@ /glance

And add the following lines at the end of the glance-api.conf file:

[paste_deploy]
flavor = keystone

We are using RabbitMQ for messaging between OpenStack components. The following changes are required in glance-api.conf for RabbitMQ.
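The RabbitMQ options for glance-api.conf are shown in full in the sample file in Appendix A; in summary, the notifier and broker settings used in this deployment are along these lines (localhost broker, default guest user, and the password set earlier):

notifier_strategy = rabbit
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = password
rabbit_virtual_host = /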

7. Update /etc/glance/glance-registry.conf with:

sql_connection = mysql://glanceuser:glancepass@ /glance

And add the following lines at the end of the glance-registry.conf file:

[paste_deploy]
flavor = keystone

8. Restart the glance-api and glance-registry services:

# service glance-api restart; service glance-registry restart

Now you can populate or synchronize the database.

# glance-manage db_sync

9. Restart the services again to take into account the new modifications:

# service glance-api restart; service glance-registry restart

10. To test Glance, upload the cirros cloud image directly from the internet:

# glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --location 

Now list the images to see what you have just uploaded:

# glance index
# glance image-list

In Glance, images of different operating systems can be added after the complete installation of OpenStack using its Dashboard (GUI).

Troubleshooting the Image Service (Glance)

To begin troubleshooting, look at the logs in /var/log/glance/registry.log or /var/log/glance/api.log.
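As an aside on step 10: if the image URL is not reachable from the node (for example, because of the proxy), an alternative is to download the image first and upload it with --file. The cirros download URL below is an assumption based on the public cirros-cloud.net mirror and may need adjusting.

# wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
# glance image-create --name myfirstimage --is-public true --container-format bare --disk-format qcow2 --file cirros-0.3.1-x86_64-disk.img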

5. INSTALLING OPENSTACK NETWORKING SERVICE (QUANTUM)

Quantum (now known as Neutron) is an OpenStack project to provide "networking as a service" between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova).

1. Install the Quantum components:

# apt-get install -y quantum-server quantum-plugin-linuxbridge quantum-plugin-linuxbridge-agent dnsmasq quantum-dhcp-agent quantum-l3-agent

2. Configuring the Quantum database backend. Start the MySQL command line client by running:

# mysql -u root -p

Enter the MySQL root user's password when prompted. To configure the MySQL database, create the quantum database.

> CREATE DATABASE quantum;
> GRANT ALL ON quantum.* TO 'quantumuser'@'%' IDENTIFIED BY 'quantumpass';
> quit;

3. Verify all Quantum components are running:

# cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i status; done

4. Edit the /etc/quantum/quantum.conf file:

core_plugin = quantum.plugins.linuxbridge.lb_quantum_plugin.LinuxBridgePluginV2

Add the following lines at the end of the file.

[keystone_authtoken]
auth_host = 
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing

5. Messaging queue (RabbitMQ) settings in the quantum.conf file.

# IP address of the RabbitMQ installation
rabbit_host = localhost
#rabbit_host = 
# Password of the RabbitMQ server

rabbit_password = password
# Port where the RabbitMQ server is running/listening
rabbit_port = 5672
# RabbitMQ single or HA cluster (host:port pairs, i.e.: host1:5672, host2:5672)
# rabbit_hosts defaults to '$rabbit_host:$rabbit_port'
rabbit_hosts = localhost:5672
# User ID used for RabbitMQ connections
rabbit_userid = guest
# Location of a virtual RabbitMQ installation.
rabbit_virtual_host = /

6. Edit /etc/quantum/api-paste.ini:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass

7. Edit the LinuxBridge plugin config file /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini with:

# under the [DATABASE] section
sql_connection = mysql://quantumuser:quantumpass@ /quantum

# under the [LINUX_BRIDGE] section
physical_interface_mappings = physnet1:eth0

# under the [VLANS] section
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:

8. Edit the /etc/quantum/l3_agent.ini:

interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver

9. Edit the /etc/quantum/dhcp_agent.ini:

interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver

10. Update /etc/quantum/metadata_agent.ini:

# The Quantum user information for accessing the Quantum API.
auth_url = 
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass

# IP address used by the Nova metadata server
nova_metadata_ip = 

# TCP port used by the Nova metadata server
nova_metadata_port = 8775

metadata_proxy_shared_secret = helloopenstack

11. After making these changes, restart all Quantum services:

# cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done

Troubleshooting the Networking Service (Quantum)

To begin troubleshooting, look at the logs in the /var/log/quantum/server.log file.
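Once the services are back up, the agents and the API can also be exercised from the CLI (with the creds file from the Keystone chapter sourced). A short sketch; the network name and CIDR are illustrative values only:

# source creds
# quantum agent-list
# quantum net-create demo-net
# quantum subnet-create demo-net 10.5.5.0/24 --name demo-subnet

The agent-list output should show the DHCP, L3 and linuxbridge agents as alive.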

6. INSTALLING OPENSTACK COMPUTE SERVICE (NOVA)

6.1 CONFIGURING THE HYPERVISOR

For production environments the most tested hypervisors are KVM and Xen-based hypervisors. KVM runs through libvirt, while Xen runs best through XenAPI calls. KVM is selected by default and requires the least additional configuration. This guide offers information for the KVM and QEMU hypervisors. Details about the hypervisors are given in Appendix C of this manual.

KVM

KVM is configured as the default hypervisor for Compute in OpenStack. The KVM hypervisor supports the following virtual machine image formats:

Raw
QEMU Copy-on-write (qcow2)
VMware virtual machine disk format (vmdk)

1. Checking for hardware virtualization support

The processors of your compute host need to support virtualization technology (VT) to use KVM. If you are running on Ubuntu, use the kvm-ok command as root to check whether your processor has VT support, it is enabled in the BIOS, and KVM is installed properly. The kvm-ok command is available in the cpu-checker package, so install it first.

# apt-get install cpu-checker
# kvm-ok

2. Output of the command

If KVM is enabled, the output should look something like:

INFO: /dev/kvm exists
KVM acceleration can be used.

If KVM is not enabled, the output should look something like:

INFO: Your CPU does not support KVM extensions

In the case that KVM acceleration is not supported, Compute should be configured to use a different hypervisor, such as QEMU or Xen.

3. KVM installation

Now install the packages for the KVM hypervisor:

# apt-get install -y kvm libvirt-bin pm-utils

Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:

cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet", "/dev/net/tun"
]

4. Delete the default virtual bridge

# virsh net-destroy default
# virsh net-undefine default

5. Restart the libvirt service to load the new values:

# service libvirt-bin restart
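To confirm that the default bridge is gone and that libvirt restarted cleanly, virsh can be queried directly:

# virsh net-list --all   (the default network should no longer appear)
# virsh list --all       (no domains are expected yet on a fresh node)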

In this manual, we are using QEMU instead of KVM because KVM support is not available in our hardware. If KVM support is available in your hardware, then replace qemu with kvm in the settings below for your deployment.

6.2 NOVA INSTALLATION

1. First of all, install the Nova components (Compute services):

# apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor nova-compute-kvm

2. Check the status of all nova services:

# cd /etc/init.d/; for i in $( ls nova-* ); do service $i status; cd; done

3. Now we will configure the MySQL database for Nova. Start the mysql command line client by running:

# mysql -u root -p

Enter the mysql root user's password when prompted.

4. Create the database for Nova:

> CREATE DATABASE nova;
> GRANT ALL ON nova.* TO 'novauser'@'%' IDENTIFIED BY 'novapass';
> quit;

5. Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
signing_dirname = /tmp/keystone-signing-nova
# Workaround for 
auth_version = v2.0

6. Modify the /etc/nova/nova.conf like this:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=true
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
rabbit_host=localhost
rabbit_port=5672
rabbit_userid="guest"
rabbit_password="password"
rabbit_virtual_host="/"
libvirt_use_virtio_for_bridges=true
connection_type=libvirt
libvirt_type=qemu
#libvirt_type=kvm
nova_url=
sql_connection=mysql://novauser:novapass@ /nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers= :9292
image_service=nova.image.glance.GlanceImageService

# VNC configuration
novnc_enabled=true
novncproxy_base_url=
novncproxy_port=6080
vncserver_proxyclient_address=
vncserver_listen=

# Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloopenstack

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=
libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=

7. Edit the /etc/nova/nova-compute.conf:

[DEFAULT]
libvirt_type=qemu
#libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver

8. Synchronize and populate your nova database:

# nova-manage db sync

9. Restart nova-* services:

# cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done

10. Check for the smiling faces on nova-* services to confirm your installation:

# nova-manage service list
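Beyond the smiley faces, the Compute API itself can be exercised with the nova client once the creds file is sourced; a brief sanity check:

# source creds
# nova image-list    (should show the cirros image registered in Glance)
# nova flavor-list   (lists the default m1.* flavors)
# nova list          (empty until the first instance is booted)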

Troubleshooting the Compute Service (Nova)

Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in /var/log/nova/nova-compute.log:

libvirtError: internal error no supported architecture for os type 'hvm'

This is a symptom that the KVM kernel modules have not been loaded. If you cannot start VMs after installation without rebooting, it's possible the permissions are not correct. This can happen if you load the KVM module before you've installed nova-compute.

7. INSTALLING OPENSTACK CINDER COMPONENT (VOLUME)

Cinder provides an infrastructure for managing volumes in OpenStack. It was originally a Nova component called nova-volume, but has become an independent project since the Folsom release.

1. Install the required packages:

# apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms

2. Configure the iscsi services:

# sed -i 's/false/true/g' /etc/default/iscsitarget

3. Restart the services:

# service iscsitarget start
# service open-iscsi start

4. Now we will configure the MySQL database for Cinder. Start the mysql command line client by running:

# mysql -u root -p

Enter the mysql root user's password when prompted.

5. Create the database for Cinder:

> CREATE DATABASE cinder;
> GRANT ALL ON cinder.* TO 'cinderuser'@'%' IDENTIFIED BY 'cinderpass';
> quit;

6. Configure /etc/cinder/api-paste.ini like the following:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 
service_port = 5000
auth_host = 
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = service_pass
signing_dir = /var/lib/cinder

7. Edit the /etc/cinder/cinder.conf to:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinderuser:cinderpass@ /cinder
api_paste_config = /etc/cinder/api-paste.ini
#iscsi_helper = tgtadm
iscsi_helper = ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

8. RabbitMQ settings in /etc/cinder/cinder.conf:

# IP address of the RabbitMQ installation
rabbit_host = localhost
#rabbit_host = 
# Password of the RabbitMQ server
rabbit_password = password
# Port where the RabbitMQ server is running/listening
rabbit_port = 5672
# RabbitMQ single or HA cluster (host:port pairs, i.e.: host1:5672, host2:5672)
# rabbit_hosts defaults to '$rabbit_host:$rabbit_port'
rabbit_hosts = :5672
# User ID used for RabbitMQ connections
rabbit_userid = guest
# Location of a virtual RabbitMQ installation.
rabbit_virtual_host = /

9. Synchronize your database:

# cinder-manage db sync

10. Create a volume group and name it cinder-volumes:

# dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G

# losetup /dev/loop2 cinder-volumes
# fdisk /dev/loop2

Type in the following at the fdisk prompts:

n
p
1
ENTER
ENTER
t
8e
w

11. Proceed to create the physical volume and then the volume group:

# pvcreate -ff /dev/loop2
# vgcreate cinder-volumes /dev/loop2

Beware that this volume group gets lost after a system reboot, so write the following line in the /etc/rc.local file before the exit 0 line.

# nano /etc/rc.local

losetup /dev/loop2 %Your_path_to_cinder_volumes%

12. Restart the cinder services:

# cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done

13. Verify that the cinder services are running:

# cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done

Troubleshooting the Cinder Component (Volume)

To begin troubleshooting, look at the logs in the /var/log/cinder/cinder.log file.
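A simple end-to-end check is to create and then delete a small test volume with the cinder client (creds sourced as before); the volume name below is just an example:

# source creds
# cinder create --display-name test-vol 1
# cinder list        (the status should move from creating to available)
# cinder delete test-vol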

8. INSTALLING OPENSTACK DASHBOARD (HORIZON)

You can use a dashboard interface with an OpenStack Compute installation through the web-based console provided by the OpenStack Dashboard project.

1. Install the OpenStack Dashboard:

# apt-get install openstack-dashboard memcached

If you don't like the OpenStack Ubuntu theme, you can remove the package to disable it:

# dpkg --purge openstack-dashboard-ubuntu-theme

2. Reload Apache and memcached:

# service apache2 restart; service memcached restart

3. Validating the Dashboard install:

To validate the Dashboard installation, point your web browser to /horizon. Once you connect to the Dashboard with the URL, you should see a login window. Enter the credentials for the users you created with the Identity Service, Keystone (in our case, username admin and password admin_pass).

Main Dashboard of OpenStack:

4. First instance using the Dashboard (VM launch):

After a successful login to the dashboard, go to the Project tenant and create a new network for your new VM instance.

5. Network setting:

Click on the Network menu, then create a new network by clicking on the +Create Network button. Set the network name and subnet details such as network address, gateway address, etc. The network with the name "new" is shown in the figure below.

6. After network creation, generate RSA keys by clicking on the Access & Security option and then Generate KeyPair in the Project panel.

7. To launch a new instance, click on the Instances menu in the Dashboard, as shown below. After this, click on the +Launch Instance button.

8. Set the instance details such as image source, instance name and flavor. Also set the network for your new instance.

9. As shown in the figure below, the new network is selected for the VM instance. If no error occurs during the instance creation phase, the new instance will be displayed in the OpenStack dashboard under the Instances option.

10. If your system is using a proxy and your cloud server is also in the same network, then include the IP address of the cloud in the ignore list of Firefox; otherwise the instance console will not work. Go to Options --> Advanced --> Network --> Settings.

11. The console view of the instance is shown in the figure below by clicking on the Console tab.

Logs of the instance can be viewed by clicking on the Log tab:


Appendix

Appendix A (Sample Configuration Files)

1. environment

http_proxy="
https_proxy="
ftp_proxy="ftp:// :8080/"
socks_proxy="socks:// :8080/"
no_proxy="localhost, , 68:6080,

2. .bashrc

no_proxy="localhost, , 68:6080,

3. creds

# Paste the following:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin_pass
export OS_AUTH_URL="
#export OS_AUTH_URL=

4. keystone.conf

[DEFAULT]
# A "shared secret" between keystone and other openstack services
# admin_token = ADMIN

52 # The IP address of the network interface to listen on # bind_host = # The port number which the public service listens on # public_port = 5000 # The port number which the public admin listens on # admin_port = # The base endpoint URLs for keystone that are advertised to clients # (NOTE: this does NOT affect how keystone listens for connections) # public_endpoint = # admin_endpoint = # The port number which the OpenStack Compute service listens on # compute_port = 8774 # Path to your policy definition containing identity actions # policy_file = policy.json # Rule to check if no matching policy definition is found # FIXME(dolph): This should really be defined as [policy] default_rule # policy_default_rule = admin_required # Role for migrating membership relationships # During a SQL upgrade, the following values will be used to create a new role # that will replace records in the user_tenant_membership table with explicit # role grants. After migration, the member_role_id will be used in the API # add_user_to_project, and member_role_name will be ignored. # member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab # member_role_name = _member_ # === Logging Options === # Print debugging output # (includes plaintext request logging, potentially including passwords) # debug = False # Print more verbose output # verbose = False # Name of log file to output to. If not set, logging will go to stdout.

53 log_file = keystone.log # The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone # Use syslog for logging. # use_syslog = False # syslog facility to receive log lines # syslog_log_facility = LOG_USER # If this option is specified, the logging configuration file specified is # used and overrides any other logging options specified. Please see the # Python logging module documentation for details on logging configuration # files. # log_config = logging.conf # A logging.formatter log message format string which may use any of the # available logging.logrecord attributes. # log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s # Format string for %(asctime)s in log records. # log_date_format = %Y-%m-%d %H:%M:%S # onready allows you to send a notification when the process is ready to serve # For example, to have it notify using systemd, one could set shell command: # onready = systemd-notify --ready # or a module with notify() method: # onready = keystone.common.systemd [sql] # The SQLAlchemy connection string used to connect to the database #connection = sqlite:////var/lib/keystone/keystone.db # the timeout before idle sql connections are reaped connection = mysql://keystoneuser:keystonepass@ /keystone # idle_timeout = 200 [identity] driver = keystone.identity.backends.sql.identity

54 # This references the domain to use for all Identity API v2 requests (which are # not aware of domains). A domain with this ID will be created for you by # keystone-manage db_sync in migration 008. The domain referenced by this ID # cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API. # There is nothing special about this domain, other than the fact that it must # exist to order to maintain support for your v2 clients. # default_domain_id = default [trust] driver = keystone.trust.backends.sql.trust # delegation and impersonation features can be optionally disabled # enabled = True [catalog] # dynamic, sql-based backend (supports API/CLI-based management commands) driver = keystone.catalog.backends.sql.catalog # static, file-based backend (does *NOT* support any management commands) # driver = keystone.catalog.backends.templated.templatedcatalog # template_file = default_catalog.templates [token] driver = keystone.token.backends.sql.token # Amount of time a token should remain valid (in seconds) # expiration = [policy] driver = keystone.policy.backends.sql.policy [ec2] driver = keystone.contrib.ec2.backends.sql.ec2 [ssl] #enable = True #certfile = /etc/keystone/ssl/certs/keystone.pem #keyfile = /etc/keystone/ssl/private/keystonekey.pem #ca_certs = /etc/keystone/ssl/certs/ca.pem #cert_required = True

55 [signing] #token_format = PKI #certfile = /etc/keystone/ssl/certs/signing_cert.pem #keyfile = /etc/keystone/ssl/private/signing_key.pem #ca_certs = /etc/keystone/ssl/certs/ca.pem #key_size = 1024 #valid_days = 3650 #ca_password = None [ldap] # url = ldap://localhost # user = dc=manager,dc=example,dc=com # password = None # suffix = cn=example,cn=com # use_dumb_member = False # allow_subtree_delete = False # dumb_member = cn=dumb,dc=example,dc=com # Maximum results per page; a value of zero ('0') disables paging (default) # page_size = 0 # The LDAP dereferencing option for queries. This can be either 'never', # 'searching', 'always', 'finding' or 'default'. The 'default' option falls # back to using default dereferencing configured by your ldap.conf. # alias_dereferencing = default # The LDAP scope for queries, this can be either 'one' # (onelevel/singlelevel) or 'sub' (subtree/wholesubtree) # query_scope = one # user_tree_dn = ou=users,dc=example,dc=com # user_filter = # user_objectclass = inetorgperson # user_domain_id_attribute = businesscategory # user_id_attribute = cn # user_name_attribute = sn # user_mail_attribute = # user_pass_attribute = userpassword # user_enabled_attribute = enabled # user_enabled_mask = 0 # user_enabled_default = True

56 # user_attribute_ignore = tenant_id,tenants # user_allow_create = True # user_allow_update = True # user_allow_delete = True # user_enabled_emulation = False # user_enabled_emulation_dn = # tenant_tree_dn = ou=groups,dc=example,dc=com # tenant_filter = # tenant_objectclass = groupofnames # tenant_domain_id_attribute = businesscategory # tenant_id_attribute = cn # tenant_member_attribute = member # tenant_name_attribute = ou # tenant_desc_attribute = desc # tenant_enabled_attribute = enabled # tenant_attribute_ignore = # tenant_allow_create = True # tenant_allow_update = True # tenant_allow_delete = True # tenant_enabled_emulation = False # tenant_enabled_emulation_dn = # role_tree_dn = ou=roles,dc=example,dc=com # role_filter = # role_objectclass = organizationalrole # role_id_attribute = cn # role_name_attribute = ou # role_member_attribute = roleoccupant # role_attribute_ignore = # role_allow_create = True # role_allow_update = True # role_allow_delete = True # group_tree_dn = # group_filter = # group_objectclass = groupofnames # group_id_attribute = cn # group_name_attribute = ou # group_member_attribute = member # group_desc_attribute = desc

57 # group_attribute_ignore = # group_allow_create = True # group_allow_update = True # group_allow_delete = True [auth] methods = password,token password = keystone.auth.plugins.password.password token = keystone.auth.plugins.token.token [filter:debug] paste.filter_factory = keystone.common.wsgi:debug.factory [filter:token_auth] paste.filter_factory = keystone.middleware:tokenauthmiddleware.factory [filter:admin_token_auth] paste.filter_factory = keystone.middleware:admintokenauthmiddleware.factory [filter:xml_body] paste.filter_factory = keystone.middleware:xmlbodymiddleware.factory [filter:json_body] paste.filter_factory = keystone.middleware:jsonbodymiddleware.factory [filter:user_crud_extension] paste.filter_factory = keystone.contrib.user_crud:crudextension.factory [filter:crud_extension] paste.filter_factory = keystone.contrib.admin_crud:crudextension.factory [filter:ec2_extension] paste.filter_factory = keystone.contrib.ec2:ec2extension.factory [filter:s3_extension] paste.filter_factory = keystone.contrib.s3:s3extension.factory [filter:url_normalize] paste.filter_factory = keystone.middleware:normalizingfilter.factory

58 [filter:sizelimit] paste.filter_factory = keystone.middleware:requestbodysizelimiter.factory [filter:stats_monitoring] paste.filter_factory = keystone.contrib.stats:statsmiddleware.factory [filter:stats_reporting] paste.filter_factory = keystone.contrib.stats:statsextension.factory [filter:access_log] paste.filter_factory = keystone.contrib.access:accesslogmiddleware.factory [app:public_service] paste.app_factory = keystone.service:public_app_factory [app:service_v3] paste.app_factory = keystone.service:v3_app_factory [app:admin_service] paste.app_factory = keystone.service:admin_app_factory [pipeline:public_api] pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service [pipeline:admin_api] pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service [pipeline:api_v3] pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension service_v3 [app:public_version_service] paste.app_factory = keystone.service:public_version_app_factory [app:admin_version_service] paste.app_factory = keystone.service:admin_version_app_factory

59 [pipeline:public_version_api] pipeline = access_log sizelimit stats_monitoring url_normalize xml_body public_version_service [pipeline:admin_version_api] pipeline = access_log sizelimit stats_monitoring url_normalize xml_body admin_version_service [composite:main] use = egg:paste#urlmap /v2.0 = public_api /v3 = api_v3 / = public_version_api [composite:admin] use = egg:paste#urlmap /v2.0 = admin_api /v3 = api_v3 / = admin_version_api 5. glance-registry.conf [DEFAULT] # Show more verbose log output (sets INFO log level output) #verbose = False # Show debugging output in logs (sets DEBUG log level output) #debug = False # Address to bind the registry server bind_host = # Port the bind the registry server to bind_port = 9191 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! log_file = /var/log/glance/registry.log # Backlog requests when creating socket backlog = 4096

60 # TCP_KEEPIDLE value in seconds when creating socket. # Not supported on OS X. #tcp_keepidle = 600 # SQLAlchemy connection string for the reference implementation # registry server. Any valid SQLAlchemy connection string is fine. # See: ngine #sql_connection = sqlite:////var/lib/glance/glance.sqlite sql_connection = mysql://glanceuser:glancepass@ /glance # Period in seconds after which SQLAlchemy should reestablish its connection # to the database. # # MySQL uses a default `wait_timeout` of 8 hours, after which it will drop # idle connections. This can result in 'MySQL Gone Away' exceptions. If you # notice this, you can lower this value to ensure that SQLAlchemy reconnects # before MySQL can drop the connection. sql_idle_timeout = 3600 # Limit the api to return `param_limit_max` items in a call to a container. If # a larger `limit` query param is provided, it will be reduced to this value. api_limit_max = 1000 # If a `limit` query param is not provided in an api request, it will # default to `limit_param_default` limit_param_default = 25 # Role used to identify an authenticated user as administrator #admin_role = admin # Whether to automatically create the database tables. # Default: False #db_auto_create = False # ================= Syslog Options ============================ # Send logs to syslog (/dev/log) instead of to file specified # by `log_file` #use_syslog = False

61 # Facility to use. If unset defaults to LOG_USER. #syslog_log_facility = LOG_LOCAL1 # ================= SSL Options =============================== # Certificate file to use when starting registry server securely #cert_file = /path/to/certfile # Private key file to use when starting registry server securely #key_file = /path/to/keyfile # CA certificate file to use to verify connecting clients #ca_file = /path/to/cafile [keystone_authtoken] auth_host = auth_port = auth_protocol = http admin_tenant_name = %SERVICE_TENANT_NAME% admin_user = %SERVICE_USER% admin_password = %SERVICE_PASSWORD% [paste_deploy] # Name of the paste configuration file that defines the available pipelines #config_file = glance-registry-paste.ini # Partial name of a pipeline in your paste configuration file with the # service name removed. For example, if your paste section name is # [pipeline:glance-registry-keystone], you would configure the flavor below # as 'keystone'. #flavor= [paste_deploy] flavor = keystone 6. glance-registry-paste.ini # Use this pipeline for no auth - DEFAULT [pipeline:glance-registry] pipeline = unauthenticated-context registryapp # Use this pipeline for keystone auth

62 [pipeline:glance-registry-keystone] pipeline = authtoken context registryapp [app:registryapp] paste.app_factory = glance.registry.api.v1:api.factory [filter:context] paste.filter_factory = glance.api.middleware.context:contextmiddleware.factory [filter:unauthenticated-context] paste.filter_factory = glance.api.middleware.context:unauthenticatedcontextmiddleware.factory [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory auth_host = auth_port = auth_protocol = http admin_tenant_name = service admin_user = glance admin_password = service_pass 7. glance-api.conf [DEFAULT] # Show more verbose log output (sets INFO log level output) #verbose = False # Show debugging output in logs (sets DEBUG log level output) #debug = False # Which backend scheme should Glance use by default is not specified # in a request to add a new image to Glance? Known schemes are determined # by the known_stores option below. # Default: 'file' default_store = file # List of which store classes and store class locations are # currently known to glance at startup. #known_stores = glance.store.filesystem.store,

63 # glance.store.http.store, # glance.store.rbd.store, # glance.store.s3.store, # glance.store.swift.store, # Maximum image size (in bytes) that may be uploaded through the # Glance API server. Defaults to 1 TB. # WARNING: this value should only be increased after careful consideration # and must be set to a value under 8 EB ( ). #image_size_cap = # Address to bind the API server bind_host = # Port the bind the API server to bind_port = 9292 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! log_file = /var/log/glance/api.log # Backlog requests when creating socket backlog = 4096 # TCP_KEEPIDLE value in seconds when creating socket. # Not supported on OS X. #tcp_keepidle = 600 # SQLAlchemy connection string for the reference implementation # registry server. Any valid SQLAlchemy connection string is fine. # See: ngine #sql_connection = sqlite:////var/lib/glance/glance.sqlite sql_connection = mysql://glanceuser:glancepass@ /glance # Period in seconds after which SQLAlchemy should reestablish its connection # to the database. # # MySQL uses a default `wait_timeout` of 8 hours, after which it will drop # idle connections. This can result in 'MySQL Gone Away' exceptions. If you # notice this, you can lower this value to ensure that SQLAlchemy reconnects

64 # before MySQL can drop the connection. sql_idle_timeout = 3600 # Number of Glance API worker processes to start. # On machines with more than one CPU increasing this value # may improve performance (especially if using SSL with # compression turned on). It is typically recommended to set # this value to the number of CPUs present on your machine. workers = 1 # Role used to identify an authenticated user as administrator #admin_role = admin # Allow unauthenticated users to access the API with read-only # privileges. This only applies when using ContextMiddleware. #allow_anonymous_access = False # Allow access to version 1 of glance api #enable_v1_api = True # Allow access to version 2 of glance api #enable_v2_api = True # Return the URL that references where the data is stored on # the backend storage system. For example, if using the # file system store a URL of 'file:///path/to/image' will # be returned to the user in the 'direct_url' meta-data field. # The default value is false. #show_image_direct_url = False # ================= Syslog Options ============================ # Send logs to syslog (/dev/log) instead of to file specified # by `log_file` #use_syslog = False # Facility to use. If unset defaults to LOG_USER. #syslog_log_facility = LOG_LOCAL0

65 # ================= SSL Options =============================== # Certificate file to use when starting API server securely #cert_file = /path/to/certfile # Private key file to use when starting API server securely #key_file = /path/to/keyfile # CA certificate file to use to verify connecting clients #ca_file = /path/to/cafile # ================= Security Options ========================== # AES key for encrypting store 'location' metadata, including # -- if used -- Swift or S3 credentials # Should be set to a random string of length 16, 24 or 32 bytes #metadata_encryption_key = <16, 24 or 32 char registry metadata key> # ============ Registry Options =============================== # Address to find the registry server registry_host = # Port the registry server is listening on registry_port = 9191 # What protocol to use when connecting to the registry server? # Set to https for secure HTTP communication registry_client_protocol = http # The path to the key file to use in SSL connections to the # registry server, if any. Alternately, you may set the # GLANCE_CLIENT_KEY_FILE environ variable to a filepath of the key file #registry_client_key_file = /path/to/key/file # The path to the cert file to use in SSL connections to the # registry server, if any. Alternately, you may set the # GLANCE_CLIENT_CERT_FILE environ variable to a filepath of the cert file #registry_client_cert_file = /path/to/cert/file

66 # The path to the certifying authority cert file to use in SSL connections # to the registry server, if any. Alternately, you may set the # GLANCE_CLIENT_CA_FILE environ variable to a filepath of the CA cert file #registry_client_ca_file = /path/to/ca/file # When using SSL in connections to the registry server, do not require # validation via a certifying authority. This is the registry's equivalent of # specifying --insecure on the command line using glanceclient for the API # Default: False #registry_client_insecure = False # The period of time, in seconds, that the API server will wait for a registry # request to complete. A value of '0' implies no timeout. # Default: 600 #registry_client_timeout = 600 # Whether to automatically create the database tables. # Default: False #db_auto_create = False # ============ Notification System Options ===================== # Notifications can be sent when images are create, updated or deleted. # There are three methods of sending notifications, logging (via the # log_file directive), rabbit (via a rabbitmq queue), qpid (via a Qpid # message queue), or noop (no notifications sent, the default) notifier_strategy = rabbit #notifier_strategy = noop #Configuration options if sending notifications via rabbitmq (these are # the defaults) rabbit_host = localhost #rabbit_host = rabbit_port = 5672 rabbit_use_ssl = false rabbit_userid = guest rabbit_password = password rabbit_virtual_host = / rabbit_notification_exchange = glance

67 rabbit_notification_topic = notifications rabbit_durable_queues = False # Configuration options if sending notifications via Qpid (these are # the defaults) qpid_notification_exchange = glance qpid_notification_topic = notifications qpid_host = localhost qpid_port = 5672 qpid_username = qpid_password = qpid_sasl_mechanisms = qpid_reconnect_timeout = 0 qpid_reconnect_limit = 0 qpid_reconnect_interval_min = 0 qpid_reconnect_interval_max = 0 qpid_reconnect_interval = 0 qpid_heartbeat = 5 # Set to 'ssl' to enable SSL qpid_protocol = tcp qpid_tcp_nodelay = True # ============ Filesystem Store Options ======================== # Directory that the Filesystem backend store # writes image data to filesystem_store_datadir = /var/lib/glance/images/ # ============ Swift Store Options ============================= # Version of the authentication service to use # Valid versions are '2' for keystone and '1' for swauth and rackspace swift_store_auth_version = 2 # Address where the Swift authentication service lives # Valid schemes are ' and ' # If no scheme specified, default to ' # For swauth, use something like ' :8080/v1.0/' swift_store_auth_address = :5000/v2.0/

68 # User to authenticate against the Swift authentication service # If you use Swift authentication service, set it to 'account':'user' # where 'account' is a Swift storage account and 'user' # is a user in that account swift_store_user = jdoe:jdoe # Auth key for the user authenticating against the # Swift authentication service swift_store_key = a86850deb2742ec3cb41518e26aa2d89 # Container within the account that the account should use # for storing images in Swift swift_store_container = glance # Do we create the container if it does not exist? swift_store_create_container_on_put = False # What size, in MB, should Glance start chunking image files # and do a large object manifest in Swift? By default, this is # the maximum object size in Swift, which is 5GB swift_store_large_object_size = 5120 # When doing a large object manifest, what size, in MB, should # Glance write chunks to Swift? This amount of data is written # to a temporary disk buffer during the process of chunking # the image file, and the default is 200MB swift_store_large_object_chunk_size = 200 # Whether to use ServiceNET to communicate with the Swift storage servers. # (If you aren't RACKSPACE, leave this False!) # # To use ServiceNET for authentication, prefix hostname of # `swift_store_auth_address` with 'snet-'. # Ex. -> swift_enable_snet = False # If set to True enables multi-tenant storage mode which causes Glance images # to be stored in tenant specific Swift accounts. #swift_store_multi_tenant = False

69 # A list of swift ACL strings that will be applied as both read and # write ACLs to the containers created by Glance in multi-tenant # mode. This grants the specified tenants/users read and write access # to all newly created image objects. The standard swift ACL string # formats are allowed, including: # <tenant_id>:<username> # <tenant_name>:<username> # *:<username> # Multiple ACLs can be combined using a comma separated list, for # example: swift_store_admin_tenants = service:glance,*:admin #swift_store_admin_tenants = # The region of the swift endpoint to be used for single tenant. This setting # is only necessary if the tenant has multiple swift endpoints. #swift_store_region = # ============ S3 Store Options ============================= # Address where the S3 authentication service lives # Valid schemes are ' and ' # If no scheme specified, default to ' s3_store_host = :8080/v1.0/ # User to authenticate against the S3 authentication service s3_store_access_key = <20-char AWS access key> # Auth key for the user authenticating against the # S3 authentication service s3_store_secret_key = <40-char AWS secret key> # Container within the account that the account should use # for storing images in S3. Note that S3 has a flat namespace, # so you need a unique bucket name for your glance images. An # easy way to do this is append your AWS access key to "glance". # S3 buckets in AWS *must* be lowercased, so remember to lowercase # your AWS access key if you use it in your bucket name below! s3_store_bucket = <lowercased 20-char aws access key>glance

70 # Do we create the bucket if it does not exist? s3_store_create_bucket_on_put = False # When sending images to S3, the data will first be written to a # temporary buffer on disk. By default the platform's temporary directory # will be used. If required, an alternative directory can be specified here. #s3_store_object_buffer_dir = /path/to/dir # When forming a bucket url, boto will either set the bucket name as the # subdomain or as the first token of the path. Amazon's S3 service will # accept it as the subdomain, but Swift's S3 middleware requires it be # in the path. Set this to 'path' or 'subdomain' - defaults to 'subdomain'. #s3_store_bucket_url_format = subdomain # ============ RBD Store Options ============================= # Ceph configuration file path # If using cephx authentication, this file should # include a reference to the right keyring # in a client.<user> section rbd_store_ceph_conf = /etc/ceph/ceph.conf # RADOS user to authenticate as (only applicable if using cephx) rbd_store_user = glance # RADOS pool in which images are stored rbd_store_pool = images # Images will be chunked into objects of this size (in megabytes). # For best performance, this should be a power of two rbd_store_chunk_size = 8 # ============ Delayed Delete Options ============================= # Turn on/off delayed delete delayed_delete = False # Delayed delete time in seconds scrub_time = 43200

71 # Directory that the scrubber will use to remind itself of what to delete # Make sure this is also set in glance-scrubber.conf scrubber_datadir = /var/lib/glance/scrubber # =============== Image Cache Options =========================== # Base directory that the Image Cache uses image_cache_dir = /var/lib/glance/image-cache/ [keystone_authtoken] auth_host = auth_port = auth_protocol = http admin_tenant_name = %SERVICE_TENANT_NAME% admin_user = %SERVICE_USER% admin_password = %SERVICE_PASSWORD% [paste_deploy] # Name of the paste configuration file that defines the available pipelines #config_file = glance-api-paste.ini # Partial name of a pipeline in your paste configuration file with the # service name removed. For example, if your paste section name is # [pipeline:glance-api-keystone], you would configure the flavor below # as 'keystone'. #flavor= [paste_deploy] flavor = keystone 8. glance-api-paste.ini # Use this pipeline for no auth or image caching - DEFAULT [pipeline:glance-api] pipeline = versionnegotiation unauthenticated-context rootapp # Use this pipeline for image caching and no auth [pipeline:glance-api-caching] pipeline = versionnegotiation unauthenticated-context cache rootapp # Use this pipeline for caching w/ management interface but no auth [pipeline:glance-api-cachemanagement]

72 pipeline = versionnegotiation unauthenticated-context cache cachemanage rootapp # Use this pipeline for keystone auth [pipeline:glance-api-keystone] pipeline = versionnegotiation authtoken context rootapp # Use this pipeline for keystone auth with image caching [pipeline:glance-api-keystone+caching] pipeline = versionnegotiation authtoken context cache rootapp # Use this pipeline for keystone auth with caching and cache management [pipeline:glance-api-keystone+cachemanagement] pipeline = versionnegotiation authtoken context cache cachemanage rootapp [composite:rootapp] paste.composite_factory = glance.api:root_app_factory /: apiversions /v1: apiv1app /v2: apiv2app [app:apiversions] paste.app_factory = glance.api.versions:create_resource [app:apiv1app] paste.app_factory = glance.api.v1.router:api.factory [app:apiv2app] paste.app_factory = glance.api.v2.router:api.factory [filter:versionnegotiation] paste.filter_factory = glance.api.middleware.version_negotiation:versionnegotiationfilter.factory [filter:cache] paste.filter_factory = glance.api.middleware.cache:cachefilter.factory [filter:cachemanage] paste.filter_factory = glance.api.middleware.cache_manage:cachemanagefilter.factory [filter:context] paste.filter_factory = glance.api.middleware.context:contextmiddleware.factory

73 [filter:unauthenticated-context] paste.filter_factory = glance.api.middleware.context:unauthenticatedcontextmiddleware.factory [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory delay_auth_decision = true auth_host = auth_port = auth_protocol = http admin_tenant_name = service admin_user = glance admin_password = service_pass 9. nova.conf #[DEFAULT] #dhcpbridge_flagfile=/etc/nova/nova.conf #dhcpbridge=/usr/bin/nova-dhcpbridge #logdir=/var/log/nova #state_path=/var/lib/nova #lock_path=/var/lock/nova #force_dhcp_release=true #iscsi_helper=tgtadm #root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf #verbose=true #ec2_private_dns_show_ip=true #api_paste_config=/etc/nova/api-paste.ini #volumes_path=/var/lib/nova/volumes #enabled_apis=ec2,osapi_compute,metadata [DEFAULT] logdir=/var/log/nova state_path=/var/lib/nova lock_path=/run/lock/nova verbose=true api_paste_config=/etc/nova/api-paste.ini compute_scheduler_driver=nova.scheduler.simple.simplescheduler #rabbit_host=

74 rabbit_host="localhost" rabbit_password="password" rabbit_port=5672 rabbit_use_ssl=false rabbit_userid="guest" rabbit_virtual_host="/" libvirt_use_virtio_for_bridges=true connection_type=libvirt libvirt_type=qemu nova_url= root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf # Auth use_deprecated_auth=false auth_strategy=keystone # Imaging service glance_api_servers= :9292 image_service=nova.image.glance.glanceimageservice # Vnc configuration novnc_enabled=true novncproxy_base_url= novncproxy_port=6080 vncserver_proxyclient_address= vncserver_listen= # Metadata service_quantum_metadata_proxy = True quantum_metadata_proxy_shared_secret = helloopenstack # Network settings network_api_class=nova.network.quantumv2.api.api quantum_url= quantum_auth_strategy=keystone quantum_admin_tenant_name=service quantum_admin_username=quantum quantum_admin_password=service_pass quantum_admin_auth_url=

libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=

10. nova-compute.conf

[DEFAULT]
libvirt_type=qemu
#libvirt_type=kvm
#compute_driver=libvirt.LibvirtDriver
compute_driver=libvirt.LibvirtDriver
#libvirt_vif_type=ethernet
#libvirt_vif_driver=nova.virt.libvirt.vif.QuantumLinuxBridgeVIFDriver

11. api-paste.ini

############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#######
# EC2 #

[composite:ec2]
use = egg:Paste#urlmap
/services/cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#############
# Openstack #
#############

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host =
#auth_host =
auth_port =
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = service_pass
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the nova process is running as.
signing_dir = /var/lib/nova/keystone-signing
# Workaround for
auth_version = v

12. cinder.conf

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
sql_connection = mysql://cinderuser:cinderpass@ /cinder
api_paste_confg = /etc/cinder/api-paste.ini
#iscsi_helper = tgtadm
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes

#################### rabbit host
# kombu_ssl_ca_certs =

# IP address of the RabbitMQ installation
rabbit_host = localhost
#rabbit_host =

# Password of the RabbitMQ server
rabbit_password = password

# Port where RabbitMQ server is running/listening

rabbit_port = 5672

# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
rabbit_hosts = :5672

# User ID used for RabbitMQ connections
rabbit_userid = guest

# Location of a virtual RabbitMQ installation.
rabbit_virtual_host = /

# Maximum retries with trying to connect to RabbitMQ
# (the default of 0 implies an infinite retry count)
# rabbit_max_retries = 0

# RabbitMQ connection retry interval
# rabbit_retry_interval = 1

# Use HA queues in RabbitMQ (x-ha-policy: all). You need to
# wipe RabbitMQ database when changing this option. (boolean value)
# rabbit_ha_queues = false

13. quantum.conf

[DEFAULT]
# Default log level is INFO
# verbose and debug has the same result.
# One of them will set DEBUG log level output
# debug = False
# verbose = False

# Where to store Quantum state files. This directory must be writable by the
# user executing the agent.
# state_path = /var/lib/quantum

# Where to store lock files
lock_path = $state_path/lock

# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog -> syslog
# log_file and log_dir -> log_dir/log_file
# (not log_file) and log_dir -> log_dir/{binary_name}.log
# use_stderr -> stderr

# (not user_stderr) and (not log_file) -> stdout
# publish_errors -> notification system
# use_syslog = False
# syslog_log_facility = LOG_USER
# use_stderr = True
# log_file =
# log_dir =
# publish_errors = False

# Address to bind the API server
bind_host =

# Port the bind the API server to
bind_port = 9696

# Path to the extensions. Note that this can be a colon-separated list of
# paths. For example:
# api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
# The path of quantum.extensions is appended to this, so if your
# extensions are in there you don't need to specify them here
# api_extensions_path =

# Quantum plugin provider module
# core_plugin =
#core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
core_plugin = quantum.plugins.linuxbridge.lb_quantum_plugin.LinuxBridgePluginV2

# Advanced service modules
# service_plugins =

# Paste configuration file
api_paste_config = /etc/quantum/api-paste.ini

# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
# auth_strategy = keystone

# Base MAC address. The first 3 octets will remain unchanged. If the
# 4th octet is not 00, it will also be used. The others will be
# randomly generated.
# 3 octet
# base_mac = fa:16:3e:00:00:00
# 4 octet
# base_mac = fa:16:3e:4f:00:00

# Maximum amount of retries to generate a unique MAC address
# mac_generation_retries = 16

# DHCP Lease duration (in seconds)
# dhcp_lease_duration = 120

# Allow sending resource operation notification to DHCP agent
# dhcp_agent_notification = True

# Enable or disable bulk create/update/delete operations
# allow_bulk = True

# Enable or disable pagination
# allow_pagination = False

# Enable or disable sorting
# allow_sorting = False

# Enable or disable overlapping IPs for subnets
# Attention: the following parameter MUST be set to False if Quantum is
# being used in conjunction with nova security groups and/or metadata service.
# allow_overlapping_ips = False

# Ensure that configured gateway is on subnet
# force_gateway_on_subnet = False

# RPC configuration options. Defined in rpc __init__
# The messaging module to use, defaults to kombu.
# rpc_backend = quantum.openstack.common.rpc.impl_kombu
# Size of RPC thread pool
# rpc_thread_pool_size = 64
# Size of RPC connection pool
# rpc_conn_pool_size = 30
# Seconds to wait for a response from call or multicall
# rpc_response_timeout = 60
# Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
# rpc_cast_timeout = 30

# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
# allowed_rpc_exception_modules = quantum.openstack.common.exception, nova.exception

# AMQP exchange to connect to if using RabbitMQ or QPID
control_exchange = quantum

# If passed, use a fake RabbitMQ provider
# fake_rabbit = False

# Configuration options if sending notifications via kombu rpc (these are
# the defaults)
# SSL version to use (valid only if SSL enabled)
# kombu_ssl_version =
# SSL key file (valid only if SSL enabled)
# kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled)
# kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled)'
# kombu_ssl_ca_certs =

# IP address of the RabbitMQ installation
rabbit_host = localhost
#rabbit_host =

# Password of the RabbitMQ server
rabbit_password = password

# Port where RabbitMQ server is running/listening
rabbit_port = 5672

# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
rabbit_hosts = localhost:5672

# User ID used for RabbitMQ connections
rabbit_userid = guest

# Location of a virtual RabbitMQ installation.
rabbit_virtual_host = /

# Maximum retries with trying to connect to RabbitMQ
# (the default of 0 implies an infinite retry count)
# rabbit_max_retries = 0

# RabbitMQ connection retry interval
# rabbit_retry_interval = 1

# Use HA queues in RabbitMQ (x-ha-policy: all). You need to
# wipe RabbitMQ database when changing this option. (boolean value)

# rabbit_ha_queues = false

# QPID
# rpc_backend=quantum.openstack.common.rpc.impl_qpid
# Qpid broker hostname
# qpid_hostname = localhost
# Qpid broker port
# qpid_port = 5672
# Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'
# qpid_hosts = localhost:5672
# Username for qpid connection
# qpid_username = ''
# Password for qpid connection
# qpid_password = ''
# Space separated list of SASL mechanisms to use for auth
# qpid_sasl_mechanisms = ''
# Seconds between connection keepalive heartbeats
# qpid_heartbeat = 60
# Transport to use, either 'tcp' or 'ssl'
# qpid_protocol = tcp
# Disable Nagle algorithm
# qpid_tcp_nodelay = True

# ZMQ
# rpc_backend=quantum.openstack.common.rpc.impl_zmq
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address.
# rpc_zmq_bind_address = *

# ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)

# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = quantum.openstack.common.notifier.no_op_notifier
# Logging driver

# notification_driver = quantum.openstack.common.notifier.log_notifier
# RPC driver. DHCP agents needs it.
notification_driver = quantum.openstack.common.notifier.rpc_notifier

# default_notification_level is used to form actual topic name(s) or to set logging level
default_notification_level = INFO

# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host

# Defined in rpc_notifier, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications

# Default maximum number of items returned in a single response,
# value == infinite and value < 0 means no max limit, and value must
# greater than 0. If the number of items requested is greater than
# pagination_max_limit, server will just return pagination_max_limit
# of number of items.
# pagination_max_limit = -1

# Maximum number of DNS nameservers per subnet
# max_dns_nameservers = 5

# Maximum number of host routes per subnet
# max_subnet_host_routes = 20

# Maximum number of fixed ips per port
# max_fixed_ips_per_port = 5

# =========== items for agent management extension =============
# Seconds to regard the agent as down.
# agent_down_time = 5
# =========== end of items for agent management extension =====

# =========== items for agent scheduler extension =============
# Driver to use for scheduling network to DHCP agent
# network_scheduler_driver = quantum.scheduler.dhcp_agent_scheduler.ChanceScheduler
# Driver to use for scheduling router to a default L3 agent
# router_scheduler_driver = quantum.scheduler.l3_agent_scheduler.ChanceScheduler

# Allow auto scheduling networks to DHCP agent. It will schedule non-hosted
# networks to first DHCP agent which sends get_active_networks message to
# quantum server
# network_auto_schedule = True

# Allow auto scheduling routers to L3 agent. It will schedule non-hosted
# routers to first L3 agent which sends sync_routers message to quantum server
# router_auto_schedule = True
# =========== end of items for agent scheduler extension =====

[QUOTAS]
# resource name(s) that are supported in quota features
# quota_items = network,subnet,port

# default number of resource allowed per tenant, minus for unlimited
# default_quota = -1

# number of networks allowed per tenant, and minus means unlimited
# quota_network = 10

# number of subnets allowed per tenant, and minus means unlimited
# quota_subnet = 10

# number of ports allowed per tenant, and minus means unlimited
# quota_port = 50

# number of security groups allowed per tenant, and minus means unlimited
# quota_security_group = 10

# number of security group rules allowed per tenant, and minus means unlimited
# quota_security_group_rule = 100

# default driver to use for quota checks
# quota_driver = quantum.quota.ConfDriver

[DEFAULT_SERVICETYPE]
# Description of the default service type (optional)
# description = "default service type"
# Enter a service definition line for each advanced service provided
# by the default service type.
# Each service definition should be in the following format:

# <service>:<plugin>[:driver]

[AGENT]
# Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real
# root filter facility.
# Change to "sudo" to skip the filtering and just run the command directly
# root_helper = sudo
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf

# =========== items for agent management extension =============
# seconds between nodes reporting state to server, should be less than
# agent_down_time
# report_interval = 4
# =========== end of items for agent management extension =====

[keystone_authtoken]
auth_host =
auth_port =
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
signing_dir = /var/lib/quantum/keystone-signing
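Once the configuration files listed in this appendix are in place, the corresponding services must be restarted for the changes to take effect. The commands below are only a minimal sketch: they assume the default service names installed by the Ubuntu Cloud Archive packages used in this manual, that they are run as root on the single controller/compute node, and that the admin credentials (OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, OS_AUTH_URL) are already exported in the shell for the client commands, which act as simple smoke tests that each service answers.

# service glance-api restart; service glance-registry restart
# service nova-api restart; service nova-scheduler restart; service nova-compute restart
# service quantum-server restart
# service cinder-api restart; service cinder-scheduler restart; service cinder-volume restart

# glance image-list        # Glance should answer with the (possibly empty) image table
# nova list                # Nova should answer with the (possibly empty) instance table
# quantum net-list         # Quantum should answer with the (possibly empty) network table
# cinder list              # Cinder should answer with the (possibly empty) volume table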

Appendix B
About Ubuntu

Before installation of Ubuntu:

1. Check whether your system supports a 32-bit or a 64-bit operating system (a quick command-line check is shown at the end of this appendix).
2. Ubuntu is available in Desktop and Server editions. If you are a beginner, we suggest you use Ubuntu Desktop for the OpenStack deployment. Ubuntu Server does not include a GUI, so it is not recommended for beginners.
3. We recommend a Long Term Support (LTS) release of Ubuntu for OpenStack.
4. Ubuntu Desktop is available at You can download it for 32 bit or 64 bit systems.
5. Ubuntu Server is available at You can download it for 32 bit or 64 bit systems.

Ubuntu Installation

1. Go to the following link for Ubuntu Desktop installation:
2. Go to the following link for Ubuntu Server installation:

Useful links for beginners:

Go through the following links if you are a beginner in Ubuntu.
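Relating to item 1 above, the following is a minimal sketch of a command-line check, assuming you can open a terminal on the target machine. uname -m reports the architecture of the currently running operating system, and the lm flag in /proc/cpuinfo indicates a 64-bit capable processor:

$ uname -m
$ egrep -wq 'lm' /proc/cpuinfo && echo "CPU is 64-bit capable" || echo "CPU is 32-bit only"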

Appendix C
CONFIGURING THE HYPERVISOR

For production environments the most tested hypervisors are KVM and Xen-based hypervisors. KVM runs through libvirt, while Xen runs best through XenAPI calls. KVM is selected by default and requires the least additional configuration. This guide offers information for the KVM and QEMU hypervisors.

KVM

KVM is configured as the default hypervisor for Compute in OpenStack. The KVM hypervisor supports the following virtual machine image formats:

Raw
QEMU Copy-on-write (qcow2)
VMware virtual machine disk format (vmdk)

1. Checking for hardware virtualization support

The processors of your compute host need to support virtualization technology (VT) to use KVM. If you are running on Ubuntu, use the kvm-ok command to check that your processor has VT support, that it is enabled in the BIOS, and that KVM is installed properly. The kvm-ok command is available in the cpu-checker package, so install it first and then run both commands as root:

# apt-get install cpu-checker
# kvm-ok

If KVM is enabled, the output should look something like:

INFO: /dev/kvm exists
KVM acceleration can be used.

If KVM is not enabled, the output should look something like:

INFO: Your CPU does not support KVM extensions

If KVM acceleration is not supported, Compute should be configured to use a different hypervisor, such as QEMU or Xen. On distributions that don't have kvm-ok, you can check whether your processor has VT support by looking at the processor flags in the /proc/cpuinfo file. For Intel processors, look for the vmx flag, and for AMD processors, look for the svm flag. A simple way to check is to run the following command and see if there is any output:

$ egrep '(vmx|svm)' --color=always /proc/cpuinfo

Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the above command produces no output, you may need to reboot your machine, enter the system BIOS, and enable the VT option.

2. Enabling KVM

KVM requires the kvm module and either the kvm-intel or kvm-amd module to be loaded. This may have been configured automatically on your distribution when KVM was installed. You can check that they have been loaded using lsmod, as follows, with the expected output for Intel-based processors:

$ lsmod | grep kvm
kvm_intel
kvm                   kvm_intel

The following sections describe how to load the kernel modules for Intel-based and AMD-based processors if they were not loaded automatically by your distribution's KVM installation process.
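Purely as a convenience, and not part of the original procedure, here is a minimal sketch that assumes an Ubuntu host and root privileges. It reuses the /proc/cpuinfo check shown above to decide which module pair to load, before falling through to the vendor-specific steps below:

# if egrep -qw 'vmx' /proc/cpuinfo; then modprobe kvm; modprobe kvm-intel; fi
# if egrep -qw 'svm' /proc/cpuinfo; then modprobe kvm; modprobe kvm-amd; fi
# lsmod | grep kvm

If the last command prints nothing, follow the manual steps below and double-check the BIOS VT setting discussed earlier.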

Intel-based processors:

If your compute host is Intel-based, run the following as root to load the kernel modules:

# modprobe kvm
# modprobe kvm-intel

Add the following lines to /etc/modules so that these modules will load on reboot:

kvm
kvm-intel

AMD-based processors:

If your compute host is AMD-based, run the following as root to load the kernel modules:

# modprobe kvm
# modprobe kvm-amd

Add the following lines to /etc/modules so that these modules will load on reboot:

kvm
kvm-amd

3. KVM installation

Now install the packages for the KVM hypervisor:

# apt-get install -y kvm libvirt-bin pm-utils

Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun"
]

Delete the default virtual bridge:

# virsh net-destroy default
# virsh net-undefine default

Restart the libvirt service to load the new values:

# service libvirt-bin restart

QEMU

From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment. The typical use cases for QEMU are:

Running on older hardware that lacks virtualization support.
Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests.

KVM requires hardware support for acceleration. If hardware support is not available (e.g., if you are running Compute inside of a VM and the hypervisor does not expose the required hardware support), you can use QEMU instead. KVM and QEMU have the same level of support in OpenStack, but KVM provides better performance. To enable QEMU in OpenStack, add the following lines to nova.conf:

compute_driver=libvirt.LibvirtDriver
libvirt_type=qemu

The QEMU hypervisor supports the following virtual machine image formats:

Raw
QEMU Copy-on-write (qcow2)
VMware virtual machine disk format (vmdk)
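After changing the hypervisor settings in nova.conf or nova-compute.conf, it is worth confirming that Compute picked them up. This is a minimal sketch, assuming the default Ubuntu service name nova-compute and that the commands are run as root on the compute node:

# service nova-compute restart
# virsh capabilities | grep "domain type"     # lists the guest domain types libvirt exposes (qemu and, with VT, kvm)
# nova-manage service list                    # nova-compute should be listed with a :-) state

If nova-compute shows XXX instead of :-) in the service list, check /var/log/nova/nova-compute.log for libvirt errors.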

Introduction to Openstack, an Open Cloud Computing Platform. Libre Software Meeting

Introduction to Openstack, an Open Cloud Computing Platform. Libre Software Meeting Introduction to Openstack, an Open Cloud Computing Platform Libre Software Meeting 10 July 2012 David Butler BBC Research & Development [email protected] Introduction: Libre Software Meeting 2012

More information

1 Keystone OpenStack Identity Service

1 Keystone OpenStack Identity Service 1 Keystone OpenStack Identity Service In this chapter, we will cover: Creating a sandbox environment using VirtualBox and Vagrant Configuring the Ubuntu Cloud Archive Installing OpenStack Identity Service

More information

OpenStack Introduction. November 4, 2015

OpenStack Introduction. November 4, 2015 OpenStack Introduction November 4, 2015 Application Platforms Undergoing A Major Shift What is OpenStack Open Source Cloud Software Launched by NASA and Rackspace in 2010 Massively scalable Managed by

More information

Mirantis www.mirantis.com/training

Mirantis www.mirantis.com/training TM Mirantis www.mirantis.com/training Goals Understand OpenStack purpose and use cases Understand OpenStack ecosystem o history o projects Understand OpenStack architecture o logical architecture o components

More information

Cloud on TEIN Part I: OpenStack Cloud Deployment. Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat University

Cloud on TEIN Part I: OpenStack Cloud Deployment. Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat University Cloud on TEIN Part I: OpenStack Cloud Deployment Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat University Outline Objectives Part I: OpenStack Overview How OpenStack

More information

OpenStack Beginner s Guide (for Ubuntu - Precise)

OpenStack Beginner s Guide (for Ubuntu - Precise) OpenStack Beginner s Guide (for Ubuntu - Precise) v3.0, 7 May 2012 Atul Jha Johnson D Kiran Murari Murthy Raju Vivek Cherian Yogesh Girikumar OpenStack Beginner s Guide (for Ubuntu - Precise) v3.0, 7 May

More information

Configuring Keystone in OpenStack (Essex)

Configuring Keystone in OpenStack (Essex) WHITE PAPER Configuring Keystone in OpenStack (Essex) Joshua Tobin April 2012 Copyright Canonical 2012 www.canonical.com Executive introduction Keystone is an identity service written in Python that provides

More information

Summarized OpenStack Install Guide

Summarized OpenStack Install Guide Summarized OpenStack Install Guide Telmo Silva Morais Student of Doctoral Program of Informatics Engineering Computer Systems Security Faculty of Engineering, University of Porto Porto, Portugal [email protected]

More information

vrealize Operations Management Pack for OpenStack

vrealize Operations Management Pack for OpenStack vrealize Operations Management Pack for This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more

More information

การใช งานและต ดต งระบบ OpenStack ซอฟต แวร สาหร บบร หารจ ดการ Cloud Computing เบ องต น

การใช งานและต ดต งระบบ OpenStack ซอฟต แวร สาหร บบร หารจ ดการ Cloud Computing เบ องต น การใช งานและต ดต งระบบ OpenStack ซอฟต แวร สาหร บบร หารจ ดการ Cloud Computing เบ องต น Kasidit Chanchio [email protected] Thammasat University Vasinee Siripoonya Electronic Government Agency of Thailand

More information

Ubuntu Cloud Infrastructure - Jumpstart Deployment Customer - Date

Ubuntu Cloud Infrastructure - Jumpstart Deployment Customer - Date Ubuntu Cloud Infrastructure - Jumpstart Deployment Customer - Date Participants Consultant Name, Canonical Cloud Consultant,[email protected] Cloud Architect Name, Canonical Cloud Architect,

More information

Release Notes for Fuel and Fuel Web Version 3.0.1

Release Notes for Fuel and Fuel Web Version 3.0.1 Release Notes for Fuel and Fuel Web Version 3.0.1 June 21, 2013 1 Mirantis, Inc. is releasing version 3.0.1 of the Fuel Library and Fuel Web products. This is a cumulative maintenance release to the previously

More information

How To Install Openstack On Ubuntu 14.04 (Amd64)

How To Install Openstack On Ubuntu 14.04 (Amd64) Getting Started with HP Helion OpenStack Using the Virtual Cloud Installation Method 1 What is OpenStack Cloud Software? A series of interrelated projects that control pools of compute, storage, and networking

More information

An Introduction to OpenStack and its use of KVM. Daniel P. Berrangé <[email protected]>

An Introduction to OpenStack and its use of KVM. Daniel P. Berrangé <berrange@redhat.com> An Introduction to OpenStack and its use of KVM Daniel P. Berrangé About me Contributor to multiple virt projects Libvirt Developer / Architect 8 years OpenStack contributor 1 year

More information

A technical whitepaper describing steps to setup a Private Cloud using the Eucalyptus Private Cloud Software and Xen hypervisor.

A technical whitepaper describing steps to setup a Private Cloud using the Eucalyptus Private Cloud Software and Xen hypervisor. A technical whitepaper describing steps to setup a Private Cloud using the Eucalyptus Private Cloud Software and Xen hypervisor. Vivek Juneja Cloud Computing COE Torry Harris Business Solutions INDIA Contents

More information

OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora

OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora docs.openstack.org OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora (2013-06-11) Copyright 2012, 2013 OpenStack Foundation All rights reserved. The OpenStack system has several

More information

Manila OpenStack File Sharing Service

Manila OpenStack File Sharing Service Manila OpenStack File Sharing Service August 2015 Author: Mihai Patrascoiu Supervisor: Jose Castro Leon CERN openlab Summer Student Report 2015 Project Specification The CERN Computer Centre is host to

More information

Installation Runbook for Avni Software Defined Cloud

Installation Runbook for Avni Software Defined Cloud Installation Runbook for Avni Software Defined Cloud Application Version 2.5 MOS Version 6.1 OpenStack Version Application Type Juno Hybrid Cloud Management System Content Document History 1 Introduction

More information

EMC Data Protection Search

EMC Data Protection Search EMC Data Protection Search Version 1.0 Security Configuration Guide 302-001-611 REV 01 Copyright 2014-2015 EMC Corporation. All rights reserved. Published in USA. Published April 20, 2015 EMC believes

More information

Module I-7410 Advanced Linux FS-11 Part1: Virtualization with KVM

Module I-7410 Advanced Linux FS-11 Part1: Virtualization with KVM Bern University of Applied Sciences Engineering and Information Technology Module I-7410 Advanced Linux FS-11 Part1: Virtualization with KVM By Franz Meyer Version 1.0 February 2011 Virtualization Architecture

More information

Installing and Using the vnios Trial

Installing and Using the vnios Trial Installing and Using the vnios Trial The vnios Trial is a software package designed for efficient evaluation of the Infoblox vnios appliance platform. Providing the complete suite of DNS, DHCP and IPAM

More information

Keywords OpenStack, private cloud, infrastructure, AWS, IaaS

Keywords OpenStack, private cloud, infrastructure, AWS, IaaS Volume 3, Issue 10, October 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Deployment

More information

Privileged Cloud Storage By MaaS JuJu

Privileged Cloud Storage By MaaS JuJu Privileged Cloud Storage By MaaS JuJu Sarita Shankar Pol 1, S. V. Gumaste 2 1 Computer Engineering, Sharadchandra College of Engineering, Otur (Pune), India 2 Professor, Computer Engineering, Sharadchandra

More information

VMware vcenter Log Insight Getting Started Guide

VMware vcenter Log Insight Getting Started Guide VMware vcenter Log Insight Getting Started Guide vcenter Log Insight 1.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by

More information

Snakes on a cloud. A presentation of the OpenStack project. Thierry Carrez Release Manager, OpenStack

Snakes on a cloud. A presentation of the OpenStack project. Thierry Carrez Release Manager, OpenStack Snakes on a cloud A presentation of the OpenStack project Thierry Carrez Release Manager, OpenStack Cloud? Buzzword End-user services Software as a Service (SaaS) End-user services Online storage / streaming

More information

Acano solution. Virtualized Deployment R1.1 Installation Guide. Acano. February 2014 76-1025-03-B

Acano solution. Virtualized Deployment R1.1 Installation Guide. Acano. February 2014 76-1025-03-B Acano solution Virtualized Deployment R1.1 Installation Guide Acano February 2014 76-1025-03-B Contents Contents 1 Introduction... 3 1.1 Before You Start... 3 1.1.1 About the Acano virtualized solution...

More information

How To Run A Powerpoint On A Powerline On A Pc Or Macbook

How To Run A Powerpoint On A Powerline On A Pc Or Macbook Anexo 2 : Ficheros de Configuración. OpenStack configura las variables principales, de cada uno de sus componentes, en ficheros planos txt. Es habitual cometer fallos en la configuración de los ficheros

More information

Comodo MyDLP Software Version 2.0. Installation Guide Guide Version 2.0.010215. Comodo Security Solutions 1255 Broad Street Clifton, NJ 07013

Comodo MyDLP Software Version 2.0. Installation Guide Guide Version 2.0.010215. Comodo Security Solutions 1255 Broad Street Clifton, NJ 07013 Comodo MyDLP Software Version 2.0 Installation Guide Guide Version 2.0.010215 Comodo Security Solutions 1255 Broad Street Clifton, NJ 07013 Table of Contents 1.About MyDLP... 3 1.1.MyDLP Features... 3

More information

CA Performance Center

CA Performance Center CA Performance Center Single Sign-On User Guide 2.4 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation ) is

More information

Migration of virtual machine to cloud using Openstack Python API Clients

Migration of virtual machine to cloud using Openstack Python API Clients Migration of virtual machine to cloud using Openstack Python API Clients Jyoti Joshi 1, Manasi Thakur 2, Saurabh Mhatre 3, Pradnya Usatkar 4, Afrin Parmar 5 1 Assistant Professor Computer, R.A.I.T., University

More information

Deploying a Virtual Machine (Instance) using a Template via CloudStack UI in v4.5.x (procedure valid until Oct 2015)

Deploying a Virtual Machine (Instance) using a Template via CloudStack UI in v4.5.x (procedure valid until Oct 2015) Deploying a Virtual Machine (Instance) using a Template via CloudStack UI in v4.5.x (procedure valid until Oct 2015) Access CloudStack web interface via: Internal access links: http://cloudstack.doc.ic.ac.uk

More information

Installing and Configuring vcloud Connector

Installing and Configuring vcloud Connector Installing and Configuring vcloud Connector vcloud Connector 2.7.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new

More information

SUSE Cloud. www.suse.com. End User Guide. August 06, 2014

SUSE Cloud. www.suse.com. End User Guide. August 06, 2014 SUSE Cloud 4 August 06, 2014 www.suse.com End User Guide End User Guide List of Authors: Tanja Roth, Frank Sundermeyer Copyright 2006 2014 Novell, Inc. and contributors. All rights reserved. Licensed under

More information

http://www.trendmicro.com/download

http://www.trendmicro.com/download Trend Micro Incorporated reserves the right to make changes to this document and to the products described herein without notice. Before installing and using the software, please review the readme files,

More information

Cloud on TIEN Part I: OpenStack Cloud Deployment. Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat

Cloud on TIEN Part I: OpenStack Cloud Deployment. Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat Cloud on TIEN Part I: OpenStack Cloud Deployment Vasinee Siripoonya Electronic Government Agency of Thailand Kasidit Chanchio Thammasat Outline Part I: OpenStack Overview How OpenStack components work

More information

CS312 Solutions #6. March 13, 2015

CS312 Solutions #6. March 13, 2015 CS312 Solutions #6 March 13, 2015 Solutions 1. (1pt) Define in detail what a load balancer is and what problem it s trying to solve. Give at least two examples of where using a load balancer might be useful,

More information

IceWarp to IceWarp Server Migration

IceWarp to IceWarp Server Migration IceWarp to IceWarp Server Migration Registered Trademarks iphone, ipad, Mac, OS X are trademarks of Apple Inc., registered in the U.S. and other countries. Microsoft, Windows, Outlook and Windows Phone

More information

BlackBerry Enterprise Service 10. Version: 10.2. Configuration Guide

BlackBerry Enterprise Service 10. Version: 10.2. Configuration Guide BlackBerry Enterprise Service 10 Version: 10.2 Configuration Guide Published: 2015-02-27 SWD-20150227164548686 Contents 1 Introduction...7 About this guide...8 What is BlackBerry Enterprise Service 10?...9

More information

OnCommand Performance Manager 1.1

OnCommand Performance Manager 1.1 OnCommand Performance Manager 1.1 Installation and Setup Guide For Red Hat Enterprise Linux NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501

More information

NOC PS manual. Copyright Maxnet 2009 2015 All rights reserved. Page 1/45 NOC-PS Manuel EN version 1.3

NOC PS manual. Copyright Maxnet 2009 2015 All rights reserved. Page 1/45 NOC-PS Manuel EN version 1.3 NOC PS manual Copyright Maxnet 2009 2015 All rights reserved Page 1/45 Table of contents Installation...3 System requirements...3 Network setup...5 Installation under Vmware Vsphere...8 Installation under

More information

OpenStack & Hyper-V. Alessandro Pilo- CEO Cloudbase Solu.ons @cloudbaseit

OpenStack & Hyper-V. Alessandro Pilo- CEO Cloudbase Solu.ons @cloudbaseit OpenStack & Hyper-V Alessandro Pilo- CEO Cloudbase Solu.ons @cloudbaseit Cloudbase Solutions Company started in Italy as.net / Linux interop dev and consulting Branch started in Timisoara in 2012 to hire

More information

INUVIKA OVD VIRTUAL DESKTOP ENTERPRISE

INUVIKA OVD VIRTUAL DESKTOP ENTERPRISE INUVIKA OVD VIRTUAL DESKTOP ENTERPRISE MICROSOFT ACTIVE DIRECTORY INTEGRATION Agostinho Tavares Version 1.0 Published 06/05/2015 This document describes how Inuvika OVD 1.0 can be integrated with Microsoft

More information

Installation Runbook for F5 Networks BIG-IP LBaaS Plugin for OpenStack Kilo

Installation Runbook for F5 Networks BIG-IP LBaaS Plugin for OpenStack Kilo Installation Runbook for F5 Networks BIG-IP LBaaS Plugin for OpenStack Kilo Application Version F5 BIG-IP TMOS 11.6 MOS Version 7.0 OpenStack Version Application Type Openstack Kilo Validation of LBaaS

More information

Eucalyptus 3.4.2 User Console Guide

Eucalyptus 3.4.2 User Console Guide Eucalyptus 3.4.2 User Console Guide 2014-02-23 Eucalyptus Systems Eucalyptus Contents 2 Contents User Console Overview...4 Install the Eucalyptus User Console...5 Install on Centos / RHEL 6.3...5 Configure

More information

Introduction to the EIS Guide

Introduction to the EIS Guide Introduction to the EIS Guide The AirWatch Enterprise Integration Service (EIS) provides organizations the ability to securely integrate with back-end enterprise systems from either the AirWatch SaaS environment

More information

rackspace.com/cloud/private

rackspace.com/cloud/private rackspace.com/cloud/private Rackspace Private Cloud Networking (2015-10-07) Copyright 2014 Rackspace All rights reserved. This documentation is intended to help users understand OpenStack Networking in

More information

Guide to the LBaaS plugin ver. 1.0.2 for Fuel

Guide to the LBaaS plugin ver. 1.0.2 for Fuel Guide to the LBaaS plugin ver. 1.0.2 for Fuel Load Balancing plugin for Fuel LBaaS (Load Balancing as a Service) is currently an advanced service of Neutron that provides load balancing for Neutron multi

More information

VMware Identity Manager Connector Installation and Configuration

VMware Identity Manager Connector Installation and Configuration VMware Identity Manager Connector Installation and Configuration VMware Identity Manager This document supports the version of each product listed and supports all subsequent versions until the document

More information

Use Enterprise SSO as the Credential Server for Protected Sites

Use Enterprise SSO as the Credential Server for Protected Sites Webthority HOW TO Use Enterprise SSO as the Credential Server for Protected Sites This document describes how to integrate Webthority with Enterprise SSO version 8.0.2 or 8.0.3. Webthority can be configured

More information

Introduction to Mobile Access Gateway Installation

Introduction to Mobile Access Gateway Installation Introduction to Mobile Access Gateway Installation This document describes the installation process for the Mobile Access Gateway (MAG), which is an enterprise integration component that provides a secure

More information

Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems

Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems RH413 Manage Software Updates Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems Allocate an advanced file system layout, and use file

More information

insync Installation Guide

insync Installation Guide insync Installation Guide 5.2 Private Cloud Druva Software June 21, 13 Copyright 2007-2013 Druva Inc. All Rights Reserved. Table of Contents Deploying insync Private Cloud... 4 Installing insync Private

More information

SUSE Cloud 2.0. Pete Chadwick. Douglas Jarvis. Senior Product Manager [email protected]. Product Marketing Manager djarvis@suse.

SUSE Cloud 2.0. Pete Chadwick. Douglas Jarvis. Senior Product Manager pchadwick@suse.com. Product Marketing Manager djarvis@suse. SUSE Cloud 2.0 Pete Chadwick Douglas Jarvis Senior Product Manager [email protected] Product Marketing Manager [email protected] SUSE Cloud SUSE Cloud is an open source software solution based on OpenStack

More information

SOA Software API Gateway Appliance 7.1.x Administration Guide

SOA Software API Gateway Appliance 7.1.x Administration Guide SOA Software API Gateway Appliance 7.1.x Administration Guide Trademarks SOA Software and the SOA Software logo are either trademarks or registered trademarks of SOA Software, Inc. Other product names,

More information

rackspace.com/cloud/private

rackspace.com/cloud/private rackspace.com/cloud/private Rackspace Private Cloud Software v 3.0 (2013-03-06) Copyright 2013 Rackspace All rights reserved. This guide is intended to assist Rackspace customers in downloading and installing

More information

OpenStack Cloud Administrator Guide

OpenStack Cloud Administrator Guide docs.openstack.org OpenStack Cloud Administrator Guide current (2015-05-01) Copyright 2013-2015 OpenStack Foundation Some rights reserved. OpenStack offers open source software for cloud administrators

More information

HyTrust Appliance Administration Guide

HyTrust Appliance Administration Guide HyTrust Appliance Administration Guide Version 3.0.2 October, 2012 HyTrust Appliance Administration Guide Copyright 2009-2012 HyTrust Inc. All Rights Reserved. HyTrust, Virtualization Under Control and

More information

Setup Guide Access Manager 3.2 SP3

Setup Guide Access Manager 3.2 SP3 Setup Guide Access Manager 3.2 SP3 August 2014 www.netiq.com/documentation Legal Notice THIS DOCUMENT AND THE SOFTWARE DESCRIBED IN THIS DOCUMENT ARE FURNISHED UNDER AND ARE SUBJECT TO THE TERMS OF A LICENSE

More information

Building a Penetration Testing Virtual Computer Laboratory

Building a Penetration Testing Virtual Computer Laboratory Building a Penetration Testing Virtual Computer Laboratory User Guide 1 A. Table of Contents Collaborative Virtual Computer Laboratory A. Table of Contents... 2 B. Introduction... 3 C. Configure Host Network

More information

Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide

Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide July 2010 1 Specifications are subject to change without notice. The Cloud.com logo, Cloud.com, Hypervisor Attached Storage, HAS, Hypervisor

More information

DOCUMENTATION ON ADDING ENCRYPTION TO OPENSTACK SWIFT

DOCUMENTATION ON ADDING ENCRYPTION TO OPENSTACK SWIFT DOCUMENTATION ON ADDING ENCRYPTION TO OPENSTACK SWIFT BY MUHAMMAD KAZIM & MOHAMMAD RAFAY ALEEM 30/11/2013 TABLE OF CONTENTS CHAPTER 1: Introduction to Swift...3 CHAPTER 2: Deploying OpenStack.. 4 CHAPTER

More information

SUSE Cloud. www.suse.com. OpenStack End User Guide. February 20, 2015

SUSE Cloud. www.suse.com. OpenStack End User Guide. February 20, 2015 SUSE Cloud 5 www.suse.com February 20, 2015 OpenStack End User Guide OpenStack End User Guide Abstract OpenStack is an open-source cloud computing platform for public and private clouds. A series of interrelated

More information

Unitrends Virtual Backup Installation Guide Version 8.0

Unitrends Virtual Backup Installation Guide Version 8.0 Unitrends Virtual Backup Installation Guide Version 8.0 Release June 2014 7 Technology Circle, Suite 100 Columbia, SC 29203 Phone: 803.454.0300 Contents Chapter 1 Getting Started... 1 Version 8 Architecture...

More information

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V Installation Guide for Microsoft Hyper-V Egnyte Inc. 1890 N. Shoreline Blvd. Mountain View, CA 94043, USA Phone: 877-7EGNYTE (877-734-6983) www.egnyte.com 2013 by Egnyte Inc. All rights reserved. Revised

More information

Deploying workloads with Juju and MAAS in Ubuntu 13.04

Deploying workloads with Juju and MAAS in Ubuntu 13.04 Deploying workloads with Juju and MAAS in Ubuntu 13.04 A Dell Technical White Paper Kent Baxley Canonical Field Engineer Jose De la Rosa Dell Software Engineer 2 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

Secure Messaging Server Console... 2

Secure Messaging Server Console... 2 Secure Messaging Server Console... 2 Upgrading your PEN Server Console:... 2 Server Console Installation Guide... 2 Prerequisites:... 2 General preparation:... 2 Installing the Server Console... 2 Activating

More information

CloudCIX Bootcamp. The essential IaaS getting started guide. http://www.cix.ie

CloudCIX Bootcamp. The essential IaaS getting started guide. http://www.cix.ie The essential IaaS getting started guide. http://www.cix.ie Revision Date: 17 th August 2015 Contents Acronyms... 2 Table of Figures... 3 1 Welcome... 4 2 Architecture... 5 3 Getting Started... 6 3.1 Login

More information

Web Application Firewall

Web Application Firewall Web Application Firewall Getting Started Guide August 3, 2015 Copyright 2014-2015 by Qualys, Inc. All Rights Reserved. Qualys and the Qualys logo are registered trademarks of Qualys, Inc. All other trademarks

More information

F-Secure Messaging Security Gateway. Deployment Guide

F-Secure Messaging Security Gateway. Deployment Guide F-Secure Messaging Security Gateway Deployment Guide TOC F-Secure Messaging Security Gateway Contents Chapter 1: Deploying F-Secure Messaging Security Gateway...3 1.1 The typical product deployment model...4

More information

FileMaker Server 14. FileMaker Server Help

FileMaker Server 14. FileMaker Server Help FileMaker Server 14 FileMaker Server Help 2007 2015 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker and FileMaker Go are trademarks

More information

RSA Authentication Manager 8.1 Virtual Appliance Getting Started

RSA Authentication Manager 8.1 Virtual Appliance Getting Started RSA Authentication Manager 8.1 Virtual Appliance Getting Started Thank you for purchasing RSA Authentication Manager 8.1, the world s leading two-factor authentication solution. This document provides

More information

Getting Started with the CLI and APIs using Cisco Openstack Private Cloud

Getting Started with the CLI and APIs using Cisco Openstack Private Cloud Tutorial Getting Started with the CLI and APIs using Cisco Openstack Private Cloud In this tutorial we will describe how to get started with the OpenStack APIs using the command line, the REST interface

More information

Cloud Platform Comparison: CloudStack, Eucalyptus, vcloud Director and OpenStack

Cloud Platform Comparison: CloudStack, Eucalyptus, vcloud Director and OpenStack Cloud Platform Comparison: CloudStack, Eucalyptus, vcloud Director and OpenStack This vendor-independent research contains a product-by-product comparison of the most popular cloud platforms (along with

More information

UZH Experiences with OpenStack

UZH Experiences with OpenStack GC3: Grid Computing Competence Center UZH Experiences with OpenStack What we did, what went well, what went wrong. Antonio Messina 29 April 2013 Setting up Hardware configuration

More information

NSi Mobile Installation Guide. Version 6.2

NSi Mobile Installation Guide. Version 6.2 NSi Mobile Installation Guide Version 6.2 Revision History Version Date 1.0 October 2, 2012 2.0 September 18, 2013 2 CONTENTS TABLE OF CONTENTS PREFACE... 5 Purpose of this Document... 5 Version Compatibility...

More information

Postgres on OpenStack

Postgres on OpenStack Postgres on OpenStack Dave Page 18/9/2014 2014 EnterpriseDB Corporation. All rights reserved. 1 Introduction PostgreSQL: Core team member pgadmin lead developer Web/sysadmin teams PGCAC/PGEU board member

More information

rackspace.com/cloud/private

rackspace.com/cloud/private rackspace.com/cloud/private Rackspace Private Cloud (2014-03-31) Copyright 2014 Rackspace All rights reserved. This guide is intended to assist Rackspace customers in downloading and installing Rackspace

More information

OpenStack Towards a fully open cloud. Thierry Carrez Release Manager, OpenStack

OpenStack Towards a fully open cloud. Thierry Carrez Release Manager, OpenStack OpenStack Towards a fully open cloud Thierry Carrez Release Manager, OpenStack Cloud? Why we need open source IaaS A cloud building block Emergence of a standard Eliminate cloud vendor lock-in Enable federation

More information

RealPresence Platform Director

RealPresence Platform Director RealPresence CloudAXIS Suite Administrators Guide Software 1.3.1 GETTING STARTED GUIDE Software 2.0 June 2015 3725-66012-001B RealPresence Platform Director Polycom, Inc. 1 RealPresence Platform Director

More information

Avalanche Remote Control User Guide. Version 4.1.3

Avalanche Remote Control User Guide. Version 4.1.3 Avalanche Remote Control User Guide Version 4.1.3 ii Copyright 2012 by Wavelink Corporation. All rights reserved. Wavelink Corporation 10808 South River Front Parkway, Suite 200 South Jordan, Utah 84095

More information

VMware vcenter Log Insight Getting Started Guide

VMware vcenter Log Insight Getting Started Guide VMware vcenter Log Insight Getting Started Guide vcenter Log Insight 2.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by

More information

McAfee Endpoint Encryption for PC 7.0

McAfee Endpoint Encryption for PC 7.0 Migration Guide McAfee Endpoint Encryption for PC 7.0 For use with epolicy Orchestrator 4.6 Software COPYRIGHT Copyright 2012 McAfee, Inc. Do not copy without permission. TRADEMARK ATTRIBUTIONS McAfee,

More information

Install Your Own OpenStack Cloud Essex Edition

Install Your Own OpenStack Cloud Essex Edition Install Your Own OpenStack Cloud Essex Edition Eric Dodémont (dodeeric) [email protected] Version: 1.03 16 May 2012 License: Install Your Own OpenStack Cloud Essex Edition by Eric Dodémont is licensed

More information

Today. 1. Private Clouds. Private Cloud toolkits. Private Clouds and OpenStack Introduction

Today. 1. Private Clouds. Private Cloud toolkits. Private Clouds and OpenStack Introduction Today Private Clouds and OpenStack Introduction 1. Private Clouds 2. Introduction to OpenStack What is OpenStack? Architecture and Main components Demo: basic commands Luis Tomás Department of Computing

More information

Getting Started with OpenStack and VMware vsphere TECHNICAL MARKETING DOCUMENTATION V 0.1/DECEMBER 2013

Getting Started with OpenStack and VMware vsphere TECHNICAL MARKETING DOCUMENTATION V 0.1/DECEMBER 2013 Getting Started with OpenStack and VMware vsphere TECHNICAL MARKETING DOCUMENTATION V 0.1/DECEMBER 2013 Table of Contents Introduction.... 3 1.1 VMware vsphere.... 3 1.2 OpenStack.... 3 1.3 Using OpenStack

More information

Create a virtual machine at your assigned virtual server. Use the following specs

Create a virtual machine at your assigned virtual server. Use the following specs CIS Networking Installing Ubuntu Server on Windows hyper-v Much of this information was stolen from http://www.isummation.com/blog/installing-ubuntu-server-1104-64bit-on-hyper-v/ Create a virtual machine

More information

CERN Cloud Infrastructure. Cloud Networking

CERN Cloud Infrastructure. Cloud Networking CERN Cloud Infrastructure Cloud Networking Contents Physical datacenter topology Cloud Networking - Use cases - Current implementation (Nova network) - Migration to Neutron 7/16/2015 2 Physical network

More information

How To Use Openstack On Your Laptop

How To Use Openstack On Your Laptop Getting Started with OpenStack Charles Eckel, Cisco DevNet ([email protected]) Agenda What is OpenStack? Use cases and work loads Demo: Install and operate OpenStack on your laptop Getting help and additional

More information

Integrating PISTON OPENSTACK 3.0 with Microsoft Active Directory

Integrating PISTON OPENSTACK 3.0 with Microsoft Active Directory Integrating PISTON OPENSTACK 3.0 with Microsoft Active Directory May 21, 2014 This edition of this document applies to Piston OpenStack 3.0. To send us your comments about this document, e-mail [email protected].

More information

Enterprise Manager. Version 6.2. Installation Guide

Enterprise Manager. Version 6.2. Installation Guide Enterprise Manager Version 6.2 Installation Guide Enterprise Manager 6.2 Installation Guide Document Number 680-028-014 Revision Date Description A August 2012 Initial release to support version 6.2.1

More information

Ubuntu OpenStack on VMware vsphere: A reference architecture for deploying OpenStack while limiting changes to existing infrastructure

Ubuntu OpenStack on VMware vsphere: A reference architecture for deploying OpenStack while limiting changes to existing infrastructure TECHNICAL WHITE PAPER Ubuntu OpenStack on VMware vsphere: A reference architecture for deploying OpenStack while limiting changes to existing infrastructure A collaboration between Canonical and VMware

More information