OpenNebula and the Xen Hypervisor
NATIONAL COLLEGE OF IRELAND
OpenNebula and the Xen Hypervisor
Marc Reilly, Michael O Cearra
12/20/2012
[email protected]
Table of Contents
- Hybrid cloud design
  - Operating systems: Debian, OpenSuse, Ubuntu Server
  - Management tools: OpenNebula, XenServer, XCP
- Possible Designs: Example 1, Example 2, Chosen design
- Private Cloud Installation Guide
  - Pre-requisites
  - Configuring the nodes: Node prerequisites, Node Configuration
  - Launching a virtual machine
  - Installing the Sunstone self-service portal
- Provisioning of the public cloud
  - Register an account with Amazon Web Services
  - Set up SSH keys on the public cloud
  - Generate an x.509 certificate to authenticate OpenNebula with AWS
  - Download the AWS API
  - Configure the OpenNebula drivers and set the EC2 information manager configuration
  - EC2 API Tools configuration
  - Add the AWS host to OpenNebula
  - Launch instance
- Conclusion
Hybrid cloud design
When deciding on the best possible design for building our hybrid cloud, we came up with a wide range of cloud management tools, monitoring tools and operating systems onto which to install these components. From here we will explain which components we tried out, the features they offered, and the reasons we ended up choosing the hypervisor, management tools, monitoring tools and operating systems that we did. We will first discuss each section and then show and explain the design we settled on.

Operating systems

Debian
Debian is a free and open source operating system made up of a series of software packages. It includes GNU operating system tools and a Linux kernel, and has proven to be one of the most popular Linux distributions available. It is compatible with Xen and has an integrated GUI.
Problem: once we had it installed and were installing Xen onto it, we kept getting various errors, along with the network not working, so at the time we felt it better to try another operating system.

OpenSuse
OpenSuse is also a free and open source operating system made up of a series of software packages. It includes GNU operating system tools and a Linux kernel. It is compatible with Xen and includes both a default graphical user interface (GUI) and a command line interface option.
Problem: we also encountered problems while installing Xen; when we attempted to integrate it with Xen we came across errors which were very time consuming to overcome, so we felt the best course of action was to try another operating system.

Ubuntu Server
Ubuntu Server Edition can also run on VMware ESX Server, Oracle's VirtualBox and VM, Citrix Systems XenServer hypervisors, Microsoft Hyper-V, as well as Kernel-based Virtual Machine. Its firewall is extended to common services used by the operating system.
Ubuntu LTS Server Edition supports two major architectures: Intel x86 and AMD64. The server edition uses a character-based screen-mode interface for the installation, instead of a graphical installation process. It consists of the open core Eucalyptus, libvirt, and KVM or Xen virtualization technology.
Problems: we had none with this. It had no GUI, so we worked at its command prompt, but other than a few errors we were able to install and configure Xen onto it successfully.

Management tools

OpenNebula
OpenNebula is an open source cloud computing management application which can manage many different structures of datacentres and clouds. It has the ability to manage private, public and hybrid clouds. It helps bind a range of technologies such as storage, network, virtualization, monitoring and security to form a platform which deploys multi-tier services as virtual machines combining different cloud structures. Its features allow for integration, management, scalability, security and so on. It also promises standardisation, interoperability and portability, giving cloud users a wide range of cloud interfaces to choose from, including EC2 Query and vCloud, and hypervisors such as Xen. It can accommodate multiple hardware and software combinations in a data centre.
Its features include:
- User management: It is possible to configure multiple users, who will have access only to their own instances, with the ability to account for used resources and with limits enforced by quota
- VM Image management: Every disk image is registered and managed by a centralized image catalog
- Virtual Network management: It is possible to define multiple networks bonded to different physical interfaces, with either static or dynamic IP address assignment
- Virtual Machine management: Every machine has its own set of characteristics (for example, CPU, memory, disk storage, and virtual network) and can be launched under every available hypervisor of our cluster
- Service management: A group of virtual machines can be grouped for being deployed together at boot time, and every virtual machine can be configured at boot time, without the need to assign different disk images for similar machines
- Infrastructure management: The physical hosts can be managed alone or grouped on independent clusters, which is useful when you have a heterogeneous environment
- Storage management: Support for the most common storage solutions found in data centers, such as shared storage like Network Attached Storage (NAS), with specific support for optimal disk image management
- Information management: Every host and every virtual machine is actively monitored every few seconds, and integration with standard monitoring tools such as Ganglia is already available
- Scheduling: Virtual machines are deployed on host nodes following specific user requirements and resource-aware policies, such as packing, striping, or load-aware
- User interface: Command-line tools are available for managing every aspect of OpenNebula
- Operations center: Most of the information and tasks available from the command line are available on web interfaces browsable with any modern web browser on any operating system

For a hybrid cloud, which uses both local and remote resources, the two main features available are as follows:
- Cloud-bursting: The ability to add computing resources to your local infrastructure, using external resources, in order to meet peak demands or implement high-availability/disaster recovery strategies. This is essential for having a flexible and reliable infrastructure.
- Federation: The ability to combine different clusters, located in different physical positions, enabling higher levels of scalability and reliability.

Problems: We found no problems with it whatsoever. Judging by the features described, we came to the conclusion that it was the best suited management tool for our cloud.

XenServer
XenServer is an enterprise-ready, commercial virtualization platform that contains all the capabilities needed to create and manage a virtual infrastructure.
Some of its features include:
- Datacenter automation with XenServer: With Citrix XenServer, organizations can automate key IT processes to improve service delivery and business continuity for virtual environments, resulting in both time and money savings while providing more responsive IT services.
- Site Recovery: Provides site-to-site disaster recovery planning and services for virtual environments. Site recovery is easy to set up, fast to recover, and can be tested frequently to ensure disaster recovery plans remain valid.
- Dynamic Workload Balancing: Improves system utilization and increases application performance by automatically balancing virtual machines within a resource pool. Workload balancing intelligently places VMs on the most suitable host in the resource pool by matching application requirements to available hardware resources.
- High Availability: Automatically restarts virtual machines if a failure occurs at the VM, hypervisor, or server level. The auto restart capability allows users to protect all virtualized applications and bring higher levels of availability to the business.
- Automated VM Protection and Recovery: Utilizing an easy set-up wizard, administrators can create snapshot and archival policies. Regularly scheduled snapshots help to protect against data loss in case of a VM failure. The policies established are based on snapshot type, frequency, amount of historical data that is retained, and an archive location. Recovering a VM is completed by simply choosing the last known good archive.
- Memory Optimization: Reduces costs and improves application performance and protection by sharing unused server memory between VMs on the host server.
- Storage XenMotion: Move live running virtual machines and their associated virtual disk images within and across resource pools, leveraging local and shared storage. This enables users to move a VM and its VDI from a development to a production environment, move between tiers of storage when a VM is limited by storage capacity, and perform maintenance and upgrades with zero downtime.
- XenMotion: Citrix XenMotion eliminates the need for planned downtime by enabling active virtual machines to be moved to a new host with no application outages or downtime
- Web Console with Delegated Admin: Web Self Service provides IT administrators with a simple web-based console to delegate individual VM rights to application owners, as well as a way for application owners to manage the day to day operations of their VMs.
- Provisioning Services: Reduce storage requirements by creating a set of golden images which can be streamed to both physical and virtual servers for fast, consistent, and reliable application deployments.
- Distributed Virtual Switching: Create a multi-tenant, highly secure and extremely flexible network fabric that allows VMs to move freely within the network while maintaining security and control.
- XenServer Conversion Manager: Automate the process of converting VMware virtual machines into XenServer virtual machines with this simple batch conversion tool.
- Heterogeneous Pools: Enables resource pools to contain servers with different processor types, and supports full XenMotion, high availability, workload balancing, and shared storage functionality.
Problem: The main problem we had with this virtualisation management was that we had to use open source software, and we then became aware that this was commercial, which made it impossible for us to use.

XCP
Xen Cloud Platform is an open source virtualization management platform provided for cloud computing. It includes the Xen hypervisor, an enterprise-ready Xen API toolstack, and the ability to integrate with cloud, storage and networking solutions. Some other features include:
- VM lifecycle: live snapshots, checkpoint, migration
- Resource pools: flexible storage and networking
- Event tracking: progress, notification
- Upgrade and patching capabilities
- Real-time performance monitoring and alerting
- Built-in support and templates for Windows and Linux guests
- Open vSwitch support built in
- Storage XenMotion live migration (cross-pool migration, VDI migration)

Problem: The problem we had with this was that there were so many errors when integrating it with Xen, which made us believe that there was a better virtualisation management tool more suited to Xen.

Final outcome: We decided to work with Ubuntu Server as the operating system because, as stated before, it was working perfectly with Xen, and the added security of just using the command line instead of a GUI appealed to us. We decided to use OpenNebula for various reasons: it had a monitoring tool built into it, which appealed to us as integration between OpenNebula and the tool would be no issue, and we also liked the look of it, as it took note of all the important aspects of the cloud. It also had load balancing integrated, which would be a big help if we were to attempt to push into the public cloud, and lastly it gave us more information on the VMs we instantiated than just using Xen itself.
Possible Designs

Example 1
[Diagram: two hosts, one running Xen with local storage and one running OpenNebula with local storage, connected over SSH]
For the first possible design, we had two hosts: one would have Xen on it, the second host would have OpenNebula on it, and both would have storage operated by SSH. Whenever we needed to deploy an image we would send it over between the two hosts, which means a lot of time is taken up transferring these images. What is happening here is that firstly the network, image and template are created in OpenNebula, and when the VM is instantiated, everything is sent over to the other host. The main problem here was the waiting time while the image was being transferred, as it was taking far too long. The other problem was that the storage was a point of failure: it wasn't shared, but was also in the same host as the virtualisation management and the hypervisor respectively.

Example 2
[Diagram: two hosts, one running Xen and one running OpenNebula, both attached to shared storage]
For the second possible design, we had two hosts: one would have Xen on it, the second host would have OpenNebula on it, and both would use shared storage. This means that deploying an image from one host to another is unnecessary, as both hosts share the storage. What is happening here is that firstly the network, image and template are created in OpenNebula, and when the VM is instantiated, everything is sent over to the other host. This architecture is different to the first design in that it has shared storage, meaning there was no waiting time for the image to transfer. The bad point here was the same as with the first design,
though, in that the storage was a single point of failure: if one of the storage areas went down, then either the hypervisor or the virtualisation management would be affected as well.

Chosen design
[Diagram: replication server1 and replication server2, each running DRBD, Heartbeat and NFS, provide shared storage over the public network to the OpenNebula front-end and the OpenNebula node running Xen]
This was the third and final design, which we picked. What is happening here is that firstly the network, image and template are created in OpenNebula, and when the VM is instantiated, everything is sent over to the other host. What is different is that /var/lib/one will be made available between replication server1 and server2. The same folder will be exposed using an NFS share so that the OpenNebula front end and the OpenNebula node can share it. There will be high availability, as there are two storage areas. There is a heartbeat also present at all times. Shared storage is present, therefore all nodes have access to any updated data in the storage. Both
storage areas are replicating off each other. Once a VM has reached its maximum memory, it will then be pushed into the public cloud, to EC2.

Private Cloud Installation Guide

Pre-requisites
Install and configure four virtual machines in VMware Workstation. For this example I downloaded the ISO files and used these four Ubuntu virtual machines:

Host Name | Operating System | Use | Configuration
Server 1 | Ubuntu Server LTS | Primary network attached storage node | RAM: 1GB, CPU: 1 processor
Server 2 | Ubuntu Server LTS | Secondary network attached storage node | RAM: 1GB, CPU: 1 processor
oneserver | Ubuntu Desktop | OpenNebula frontend | RAM: 1.2GB, CPU: 2 processors
onehost | Ubuntu Server LTS | OpenNebula node (Xen hypervisor node) | RAM: 5GB, CPU: 4 processors

To create these virtual machines (VMs) in Workstation:
1. Open Workstation
2. Click File and then New Virtual Machine
3. Select Custom and Next
4. Select Next again, choose "I will install the operating system later" and continue
5. Then select Linux and Ubuntu 64-bit
6. On the next screen you will be greeted by the naming screen. Name the VMs as in the configuration above (do the same for memory and CPU also)
7. The next screen contains the network configuration, where you should select NAT. This will allow you to freely assign network configuration to your host VMs.
8. After this, choose the defaults until you reach the Specify Disk Capacity page. The configuration I've chosen is below, but you are free to provision resources according to your needs.

Host Name | Capacity
Server 1 | 50 GB
Server 2 | 50 GB
oneserver | 60 GB
onehost |

9. Continue until you reach this screen. Select Customize Hardware.
10. Then select New CD/DVD, select your ISO image and the Connect At Power On button. Click Close and Finish.
11. We will now configure the host network.

Workstation network configuration
The next step is to configure the network which the Workstation VMs will connect to.
1. Select Edit and then Virtual Network Editor at the top of the window in Workstation.
2. Select NAT, change the subnet IP address to , and unselect DHCP.
3. Then select NAT Settings and make sure the gateway IP is set to .
4. Select OK and then OK again.
5. The host network is now configured for your VMs, and your VMs are now ready to be installed and configured.

Server 1 and server 2 installation
Server 1 and server 2 are being used as network attached storage (NAS) replication servers. Therefore the installation of both these servers will be identical, apart from hostnames and IP addresses. To manually input the network configuration, press Enter when the OS is looking to find the network configuration using DHCP. Also, when additional packages are offered for installation, select openssh. The configuration used for these servers is:

Host Name: server 1 / server 2
Username: localadmin
Operating System: Ubuntu Server LTS
Network configuration: IP address: ; Domain: xencloud.com

Partitioning (both servers):
Name | Size | Mount point
sda | 100 MB | Boot
sda5 | 5 GB | Root
 | 1 GB | Swap
sda7 | 200 MB | Don't mount
sda8 | 60 GB | Don't mount

When both these servers are installed using the recommended configurations, we will move on to the oneserver setup.

oneserver Installation
The oneserver node will be the OpenNebula frontend. The front end is used to administer the hosts, i.e. the hypervisors. It is on the frontend that we will install OpenNebula. The network configuration for the frontend is different, and I shall go through it later on. When prompted, insert the following attributes:
Host Name: oneserver
Username: localadmin
Operating System: Ubuntu Desktop
Network configuration: IP address: ; Domain: xencloud.com
Partitioning: sda | 55 GB | Root; sda5 | 5 GB | Swap

onehost Installation
The onehost node is where we will install the hypervisor. The installation follows the same procedure as the NAS servers, except you must use the following configuration:

Host Name: onehost
Username: localadmin
Operating System: Ubuntu Server
Network configuration: IP address: ; Domain: xencloud.com
Partitioning: sda | 95 GB | Root; sda5 | 5 GB | Swap

Configuring the nodes
For the sake of transparency and ease of access, we configured the nodes through a secure shell (ssh) via the oneserver node. In order to do this we must first configure the network settings for oneserver. To do this we must follow these steps.

Node prerequisites
VT-x enabled
You must enable virtualization in the BIOS of the host machine, and in Workstation by accessing the VM's settings and enabling VT-x.
You must also go to the onehost.vmx file, usually stored under \Documents\Virtual Machines\xen onehost\xen onehost.vmx. Open that file in Notepad and insert the following line, save it, and HVM will be enabled.

hypervisor.cpuid.v0 = "false"
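On a Linux host the same .vmx edit can be scripted; a minimal sketch, using a temporary file as a stand-in for your own "xen onehost.vmx" path:

```shell
# Append the HVM-enabling flag to a .vmx file and confirm it was written.
# The temp file stands in for "\Documents\Virtual Machines\xen onehost\xen onehost.vmx".
vmx=$(mktemp)
echo 'hypervisor.cpuid.v0 = "false"' >> "$vmx"
grep -c 'hypervisor.cpuid.v0' "$vmx"
rm -f "$vmx"
```

If grep prints 1, the flag is present exactly once, which is what the guest needs to expose HVM.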
oneserver network configuration
Boot the oneserver and log in using localadmin. Open a terminal and edit the network interfaces file. To do this we issue the following command:

sudo pico /etc/network/interfaces

We then edit the file so it looks like this (fill in the address, netmask, network, broadcast, gateway and DNS values for your own network):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address
    netmask
    network
    broadcast
    gateway
    dns-nameservers
    dns-search xencloud.com

We then save the new configuration (Ctrl+O) and exit (Ctrl+X).

We must then add our new hosts to the /etc/hosts file. Please make sure the following hosts are added (one line per host, each starting with its IP address):

sudo pico /etc/hosts

server1.xencloud.com server1
server2.xencloud.com server2
oneserver.xencloud.com oneserver
onehost.xencloud.com onehost

Ctrl+O, then Ctrl+X.

sudo reboot

On reboot the VM should have network access. After installation you should update the packages using the command
sudo apt-get update

If the network was configured correctly, apt should update.

Secure shell setup
Secure shell provides secure network services between nodes. We will use it to access remote nodes from the frontend. The first step is to make sure openssh-server is installed on all nodes. To install it, run the following command on all nodes:

sudo apt-get install openssh-server

You must then generate a keypair in each host by running the following command. It will assign a public and a private key to the node. You must not share your private key with anybody.

ssh-keygen

After it is installed and the keys are generated in each node, we can ssh to remote nodes using the syntax username@host, e.g.

ssh oneadmin@server1

The first time you connect it will ask you if you want to add the host to a list of known hosts; you must select yes. You will then be prompted for the remote host user's password. After entering the password you should connect. (Note the hostname on the terminal to make sure the connection was successful.) You can also configure password-less access, which we will go through later in the OpenNebula installation.

Node Configuration

Server 1 and server 2
Server 1 and server 2 are replication servers providing high-availability RAID1 in case of node failure. This is a very important feature in providing better and uninterrupted service. The main packages used here are:

nfs-kernel-server
a. This package provides storage over the network.

drbd8
a. This package is used to provide highly available clusters by replication. When configured, it will replicate the contents of the primary storage to the secondary storage. It can be called network-based RAID1.

Heartbeat
a. This package allows nodes in a cluster to be aware of other nodes' activity or inactivity. This is done by both nodes sending signals to each other and waiting for a reply. If a reply is received, it is presumed that the other node is still available and no problems are present.
However, if no reply is received, an error in the other node is presumed. This tool was developed for providing highly available clusters. Therefore, when we use it with server1 (the primary node) and server 2 (the secondary node), we can implement high availability: when the primary node fails, the signal to the secondary server will not arrive, so the secondary server will take its place in providing storage to the network.

We will now install and configure these packages on server1 and server2 through an ssh connection from the terminal in oneserver. All features are to be installed on both servers unless specifically stated.
1. Power on server1, server2 and oneserver.
2. Open two terminal windows in oneserver and ssh to server1 and server2 (as below). Accept the host and enter the user passwords you set.

Terminal1: ssh localadmin@server1
Terminal2: ssh localadmin@server2

a. Network file server and replication setup
3. We now have to install ntp and ntpdate in order to synchronise time. This will help with communication and logs.

sudo apt-get install ntp ntpdate
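Heartbeat's failure rule described above can be sketched as a toy check; the keepalive, deadtime and reply-age values here are illustrative, not taken from our configuration:

```shell
# Toy model of heartbeat's deadtime rule: if the peer has not answered for
# longer than deadtime seconds, presume it dead and promote the secondary.
keepalive=2        # seconds between heartbeats (illustrative)
deadtime=10        # seconds of silence before declaring the peer dead
last_reply_age=12  # hypothetical: seconds since server2 last replied
if [ "$last_reply_age" -gt "$deadtime" ]; then
  echo "peer presumed dead: secondary takes over NFS"
else
  echo "peer alive"
fi
```

In the real cluster this decision is made inside heartbeat itself; the sketch only shows why deadtime must be comfortably larger than keepalive.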
4. The next step is to install the network file storage server. This will help provide access to storage throughout the network.

sudo apt-get install nfs-kernel-server

5. The network file server will be controlled by the heartbeat package, therefore we must remove it from start-up. Enter the following command twice on both servers. It should return the values below.

sudo update-rc.d -f nfs-kernel-server remove

6. You must now edit the /etc/exports file on both servers to make the folder /var/lib/one/ available over the network.

sudo pico /etc/exports
#insert the following line
/var/lib/one/ (rw,no_root_squash,no_all_squash,sync)

7. You must now install and configure drbd8 as discussed earlier.

sudo apt-get install drbd8-utils drbdlinks

8. Now we must change the drbd8 configuration (/etc/drbd.conf) to suit our installation, e.g. set storage devices, synchronisation rate and error handling.
resource r0 {
    protocol C;
    handlers {
        pri-on-incon-degr "halt -f";
    }
    startup {
        degr-wfc-timeout 120;   ## 2 minutes.
    }
    disk {
        on-io-error detach;
    }
    net {
    }
    syncer {
        rate 10M;
        al-extents 257;
    }
    on server1 {
        device /dev/drbd0;
        disk /dev/sda8;          # Data partition on server 1
        address :7788;           # IP address of server 1
        meta-disk /dev/sda7[0];  # Metadata for DRBD on server 1
    }
    on server2 {
        device /dev/drbd0;
        disk /dev/sda8;          # Data partition on server 2
        address :7788;           # IP address of server 2
        meta-disk /dev/sda7[0];  # Metadata for DRBD on server 2
    }
}

9. In server 1 & 2: when the configuration is set, we must load the DRBD kernel module.

sudo modprobe drbd
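Before loading the module, it can help to sanity-check that the configuration names both peers; a hedged sketch, using a cut-down inline sample in place of the real /etc/drbd.conf:

```shell
# Verify that a drbd.conf-style resource block contains an "on <host>"
# section for each replication server. The here-doc is a reduced sample,
# not our full configuration.
conf=$(cat <<'EOF'
resource r0 {
  protocol C;
  on server1 { device /dev/drbd0; disk /dev/sda8; meta-disk /dev/sda7[0]; }
  on server2 { device /dev/drbd0; disk /dev/sda8; meta-disk /dev/sda7[0]; }
}
EOF
)
for host in server1 server2; do
  printf '%s\n' "$conf" | grep -q "on $host" && echo "$host: configured"
done
```

On a live server, point the same grep at /etc/drbd.conf instead of the sample.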
10. Now we must create the meta-disk for drbd. This is basically a partition used to store metadata, i.e. data about data: the who, what, where and when information about how data is dealt with within drbd. We created this 200 MB partition during the installation.

sudo dd if=/dev/zero of=/dev/sda7 bs=1M count=

11. We can now create the metadata and start drbd. This will start r0 synchronisation between the servers. You can test this by running the last command.

sudo drbdadm create-md r0
sudo drbdadm up all
sudo cat /proc/drbd

a. If the connected state is shown, that is good news, as it means they are connected; if not, check your configuration files.

12. The next step is to set your primary and secondary server. The primary server will sync its files to the secondary. To do this, run the following commands on the primary server ONLY (server1 in this case).

sudo drbdadm -- --overwrite-data-of-peer primary all
sudo drbdadm disconnect r0
sudo drbdadm -- connect all

a. Now the primary server will start syncing with the secondary. To test and see its progress, run:

sudo cat /proc/drbd

b. The output shows the synchronisation progress, transfer speed and time remaining.

13. The last step in configuring the network file storage is making the NFS folder identical on both servers, to ensure both behave identically in case of failure. To do this you must first make the folder /var/lib/one. This is where the OpenNebula drivers and datastores will be stored.
sudo mkdir /var/lib/one

a. Then you must mount the shared partition to /var/lib/one and move the nfs folder to /var/lib/one. You must then link /var/lib/one/nfs with /var/lib/nfs.
   i. These commands should be carried out on server1 only:

sudo mount -t ext3 /dev/drbd0 /var/lib/one
sudo mv /var/lib/nfs/ /var/lib/one
sudo ln -s /var/lib/one/nfs/ /var/lib/nfs
sudo umount /var/lib/one

b. Then you must make changes to the NFS in server2:

rm -fr /var/lib/nfs/
ln -s /var/lib/one/nfs/ /var/lib/nfs

Heartbeat
1. The first step is to install the heartbeat package.

sudo apt-get install heartbeat

2. When heartbeat is installed, we create a configuration file (/etc/heartbeat/ha.cf) specifying different attributes such as hosts and logs.

sudo pico /etc/heartbeat/ha.cf
#insert into ha.cf
logfacility local0
keepalive 2
deadtime
bcast eth0
node server1 server2

3. The next step is adding the high availability resources to the /etc/heartbeat/haresources file. Here we define the shared folder for heartbeat so it can monitor it.

sudo pico /etc/heartbeat/haresources
#add the following line to the file, save and then exit
server1 IPaddr:: /24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/one::ext3 nfs-kernel-server
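With drbd running underneath heartbeat, the cat /proc/drbd check used earlier can also be scripted; a sketch, where the here-doc stands in for the real /proc/drbd on a healthy primary/secondary pair:

```shell
# Extract the DRBD connection state (cs:) and roles (ro:) from /proc/drbd.
# The here-doc mimics a healthy pair; on a live server read /proc/drbd directly.
sample=$(cat <<'EOF'
version: 8.3.11 (api:88/proto:86-96)
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
EOF
)
printf '%s\n' "$sample" | awk '/cs:/ {
  for (i = 1; i <= NF; i++) {
    if ($i ~ /^cs:/) cs = substr($i, 4)
    if ($i ~ /^ro:/) ro = substr($i, 4)
  }
  print "state=" cs " roles=" ro
}'
```

Anything other than Connected with UpToDate data on both sides means the replication needs attention before relying on failover.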
4. The next step is to change the permissions on the heartbeat authkeys file so no users can access it without permission.

sudo chmod 600 /etc/heartbeat/authkeys

5. Now we will start the drbd and heartbeat services.

sudo service drbd start
sudo service heartbeat start

onenode setup
onenode is used with the Xen hypervisor to run the virtual machines. We installed the operating system earlier, so now we will move on to configuring the operating system and installing the Xen hypervisor. But first we must add the network bridge for Xen and add the oneadmin user to the system for OpenNebula. We configured the network already during setup, so now we will configure the xenbr0 bridge. xenbr0 is the default Xen network bridge. A network bridge allows the VMs to connect to the network using the host's IP address. To configure the bridge we use the package bridge-utils, a tool which allows administrators to create and manage bridged networks.
1. To install bridge-utils we run the command:

sudo apt-get install bridge-utils
2. After the package installs, we will edit the network interfaces file (/etc/network/interfaces) as we did with oneserver.

sudo pico /etc/network/interfaces

We then edit the file so it looks like this (again filling in the address, netmask, network, broadcast, gateway and DNS values for your own network):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto xenbr0
iface xenbr0 inet static
    address
    netmask
    network
    broadcast
    gateway
    dns-nameservers
    dns-search xencloud.com
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off

We must then add our hosts to the /etc/hosts file. Please make sure the following hosts are added (each line starting with its IP address):

sudo pico /etc/hosts

oneserver.xencloud.com oneserver
onehost.xencloud.com onehost

i. We should then reboot the machine for the changes to take effect, or run the command:

sudo /etc/init.d/networking restart
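After the restart, brctl show (from bridge-utils) confirms the bridge took effect; a sketch that parses sample output, since the real command only works on the configured host (the bridge id shown is made up):

```shell
# Check that eth0 is attached to the xenbr0 bridge. The here-doc mimics
# `brctl show` output on a correctly configured onehost.
brctl_out=$(cat <<'EOF'
bridge name  bridge id          STP enabled  interfaces
xenbr0       8000.000c29aabbcc  no           eth0
EOF
)
if printf '%s\n' "$brctl_out" | awk '$1 == "xenbr0"' | grep -qw eth0; then
  echo "xenbr0 bridges eth0"
else
  echo "bridge not configured"
fi
```

If eth0 is missing from the interfaces column, the VMs will have no path to the physical network.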
3. When the system reboots, we can add the oneadmin user. This is the account used by OpenNebula for administration purposes. It is important that you follow these steps carefully, as wrong configuration can cause errors later on in the installation.
a. Firstly, add the group oneadmin with the id:

sudo groupadd -g oneadmin

b. Next, add the user oneadmin with the id to the group oneadmin, with the folder /var/lib/one/ as its home directory:

sudo useradd -u -m oneadmin -d /var/lib/one -s /bin/bash -g oneadmin

c. Now we must set a password for oneadmin:

sudo passwd oneadmin

d. We must set ownership of the /var/lib/one/ folder to the user and group oneadmin:

sudo chown -R oneadmin:oneadmin /var/lib/one

e. Now test that your login works, and then exit again to the localadmin user.

su -l oneadmin
exit

4. The next step is to install, configure and mount the network file storage. To do this we use the package nfs-common.
a. To install this package:

sudo apt-get install nfs-common

b. Now we must configure the machine to use the NFS from the replication server. To do this we must edit the file /etc/fstab.

sudo pico /etc/fstab
#insert the following, save and exit
:/var/lib/one /var/lib/one nfs rw,vers=3 0 0

c. Now we will create the directory /var/lib/one if it does not exist and mount the network file storage onto it. The last command, mount, should show that the network storage has been mounted successfully.

sudo mkdir /var/lib/one/
sudo mount /var/lib/one
mount
Setup the Xen Hypervisor
1. Now that the node has been fully configured, we can install the Xen hypervisor. To do this we run the following commands.
a. To install the hypervisor:

sudo apt-get install xen-hypervisor-amd64

b. You must then set grub to boot Xen as the default option, and run the command update-grub in order to save the changes.

sudo sed -i 's/GRUB_DEFAULT=.*\+/GRUB_DEFAULT="Xen 4.1-amd64"/' /etc/default/grub
sudo update-grub

c. You must then set the default toolstack for Xen to xm, as that is what is used with OpenNebula. Also add a limit to how much CPU and memory Xen can use by editing the grub configuration: change apparmor= to apparmor=0 and add the following line.

GRUB_CMDLINE_XEN="dom0_mem=2G,max:2G dom0_max_vcpus=2"

sudo sed -i 's/TOOLSTACK=.*\+/TOOLSTACK="xm"/' /etc/default/xen
sudo pico /etc/default/grub
#add the GRUB_CMDLINE_XEN line above, save and exit
sudo update-grub
sudo reboot

d. OpenNebula also requires the ability to run ruby files, specifically xenrb, from the frontend. Therefore ruby will need to be installed.

sudo apt-get install ruby

i. It is also important to note an additional requirement if you are running Xen: when running a VM, OpenNebula points to the wrong folder for keymaps. A fix for this is creating a symlink using the following command.

ln -s /usr/share/qemu-linaro/keymaps /usr/share/qemu/
2. You will also need to grant the oneadmin user password-less sudo for the commands OpenNebula's scripts run. In order to do this you edit the sudoers file as localadmin as follows:
   sudo visudo
   # add the following two lines, save and exit
   %xen ALL=(ALL) NOPASSWD: /usr/sbin/xm *
   %xen ALL=(ALL) NOPASSWD: /usr/sbin/xentop *
3. Now that Xen is installed the node is nearly fully configured. We just need to make sure openssh-server is installed, to allow OpenNebula to connect to the node:
   sudo apt-get install openssh-server
Now that onenode is fully configured we can move back to oneserver, the front-end, and start getting operational!

oneserver front-end setup
1. As with the previous node, onenode, we add the oneadmin user. This is the account used by OpenNebula for administration purposes. It is important that you follow these steps carefully, as a wrong configuration can cause errors later in the installation.
   a. First, create the /var/lib/ folder if it has not been previously created on this node:
      sudo mkdir -p /var/lib
   b. Then add the group oneadmin with id 10000:
      sudo groupadd -g 10000 oneadmin
   c. Next add the user oneadmin with id 10000 to the group oneadmin, with the folder /var/lib/one/ as its home directory:
      sudo useradd -u 10000 -m oneadmin -d /var/lib/one -s /bin/bash -g oneadmin
   d. Now set a password for oneadmin:
      sudo passwd oneadmin
   e. We must set ownership of the /var/lib/one/ folder to the oneadmin user and group:
      sudo chown -R oneadmin:oneadmin /var/lib/one
   f. Now test that your login works and then exit back to the localadmin user:
      su -l oneadmin
      exit
2. The next step is to install and configure the network file server. To do this we use the package nfs-kernel-server.
   a. To install this package:
      sudo apt-get install nfs-kernel-server
   b. Now we must configure the machine so that all files created reside on the network file storage and not on local storage. To do this we edit the file /etc/fstab:
      sudo pico /etc/fstab
      # insert the following, save and exit
      oneserver:/var/lib/one /var/lib/one nfs rw,vers=3 0 0
   c. Now we must edit the /etc/exports file to make /var/lib/one accessible to the other nodes (replace 192.168.1.11 with the IP of onenode):
      sudo pico /etc/exports
      # add the following line, save and exit
      /var/lib/one 192.168.1.11(rw,sync,no_subtree_check,no_root_squash,anonuid=10000,anongid=10000)
   d. For the last step we must start the network file server; it should report that the service has started. We can also list the mounted filesystems using df -h to confirm the directory is mounted:
      sudo /etc/init.d/nfs-kernel-server start
      df -h
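The exports entry from step 2c can likewise be generated into a scratch file for inspection before it goes into /etc/exports; 192.168.1.11 is a placeholder for onenode's IP.

```shell
#!/bin/sh
# Sketch: build the /etc/exports entry from step 2c in a scratch file.
# 192.168.1.11 is a placeholder for onenode's IP -- substitute your own.
NODE_IP="192.168.1.11"
EXPORTS="/tmp/exports.demo"

printf '/var/lib/one %s(rw,sync,no_subtree_check,no_root_squash,anonuid=10000,anongid=10000)\n' \
    "$NODE_IP" > "$EXPORTS"
cat "$EXPORTS"
```

Note that anonuid/anongid match the 10000 uid/gid given to oneadmin, so files written by anonymous clients still end up owned by oneadmin.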
3. The next step is to set up password-less secure shell access to onenode. To do this we must log in as oneadmin and generate our public and private keys. When prompted to enter a passphrase and a directory for the SSH keys just press Enter, i.e. choose the defaults. Note: you must be logged in as oneadmin.
   su -l oneadmin
   ssh-keygen
   a. When the keys have been successfully generated we need to copy the oneserver public key into the authorized_keys file in order to allow password-less access. To do this use the following command:
      cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
   b. The next step is to create a configuration file (/var/lib/one/.ssh/config) for SSH. Here we tell OpenSSH to accept requests from all hosts without strict host key checking:
      pico ~/.ssh/config
      # add the following lines to the file, save and exit
      Host *
          StrictHostKeyChecking no

Now that all nodes are configured correctly it is time to install OpenNebula on oneserver.

OpenNebula Installation
There are two ways you can install OpenNebula: through the Ubuntu package repository, or from source. We have chosen to install from source as it gives us the latest stable release, in which all known bugs have been patched and for which better support is available; the Ubuntu repository takes a while to be updated and lags behind the latest stable release available from source, OpenNebula 3.8. We will retrieve the source code from the official OpenNebula GitHub repository, compile it and then install it.
1. To retrieve the source from GitHub we will need two packages, git and git-core:
   sudo apt-get install git git-core
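The whole of step 3 can be tried in a throwaway directory before running it for real as oneadmin; a sketch (the scratch path is arbitrary):

```shell
#!/bin/sh
# Sketch of step 3 against a scratch HOME so the real ~/.ssh is untouched.
# In the guide this is run as oneadmin on oneserver.
DEMO_HOME="/tmp/oneadmin-ssh-demo"
rm -rf "$DEMO_HOME"
mkdir -p "$DEMO_HOME/.ssh"
chmod 700 "$DEMO_HOME/.ssh"

# Generate a key pair non-interactively; the empty passphrase matches
# the defaults the guide tells you to accept at the prompts.
ssh-keygen -q -t rsa -N "" -f "$DEMO_HOME/.ssh/id_rsa"

# Step 3a: authorise the front-end's own public key for password-less logins
cat "$DEMO_HOME/.ssh/id_rsa.pub" >> "$DEMO_HOME/.ssh/authorized_keys"

# Step 3b: disable strict host key checking for all hosts
cat > "$DEMO_HOME/.ssh/config" <<'EOF'
Host *
    StrictHostKeyChecking no
EOF

ls "$DEMO_HOME/.ssh"
```

Disabling StrictHostKeyChecking is convenient on a closed lab network but weakens protection against man-in-the-middle attacks, so it is a trade-off worth noting.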
2. When these packages are installed we can clone the GitHub repository onto our oneserver node. I recommend you create a dedicated folder for the OpenNebula source in case you need to roll back to an earlier version and re-compile/re-install. Therefore we will create a dedicated directory:
   sudo mkdir /var/lib/opennebula_source/
   sudo chown -R oneadmin:oneadmin /var/lib/opennebula_source/
   sudo mkdir /var/lib/opennebula_source/opennebula_3.8
   cd /var/lib/opennebula_source/opennebula_3.8
3. Now that we have created the folder structure and changed location to the specified folder, we can log in as oneadmin and clone the source code from GitHub:
   su -l oneadmin
   cd /var/lib/opennebula_source/opennebula_3.8
   git clone <URL of the official OpenNebula GitHub repository>
   exit
   a. Before we can compile and install the source there are a number of packages we need to install in order to carry out this task and run OpenNebula:
      cd /var/lib/opennebula_source/opennebula_3.8
      sudo apt-get install git libcurl3 libmysqlclient18 libruby1.8 libsqlite3-ruby libsqlite3-ruby1.8 libxmlrpc-c3-dev libxmlrpc-core-c3 mysql-common ruby ruby1.8
      sudo apt-get install libxml2-dev libmysqlclient-dev libmysql++-dev libsqlite3-ruby libexpat1-dev
      sudo apt-get install libc6 libgcc1 libmysqlclient18 libpassword-ruby libsequel-ruby libsqlite3-0 libssl0.9.8 libstdc++6 libxml2
      sudo apt-get install ruby rubygems libmysql-ruby libsqlite3-ruby libamazonec2-ruby
      sudo apt-get install libsqlite3-dev libxmlrpc-c3-dev g++ ruby libopenssl-ruby libssl-dev ruby-dev
      sudo apt-get install rake rubygems libxml-parser-ruby1.8 libxslt1-dev genisoimage scons
      sudo gem install nokogiri rake xmlparser
      sudo apt-get install mysql-server
   b. Once the packages are installed we have to configure the MySQL server. We must add the oneadmin user and grant all privileges to it by entering the MySQL shell and executing the following queries:
      mysql -uroot -pgalway123
      CREATE USER 'oneadmin'@'localhost' IDENTIFIED BY 'oneadmin';
      CREATE DATABASE opennebula;
      GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin'@'localhost';
      quit;
   c. Now we can compile and install OpenNebula using some of the packages just installed. We must make sure we execute the commands as root from the source code's folder:
      cd /var/lib/opennebula_source/opennebula_3.8
      sudo scons sqlite=no mysql=yes
      sudo ./install.sh -u oneadmin -g oneadmin -d /var/lib/one
Now OpenNebula is installed! But we have to make a couple of configuration changes before we can become operational.

Configuring OpenNebula
Before starting OpenNebula we must:
1. Create a profile file defining environment variables for OpenNebula's drivers. These variables are used to point towards certain OpenNebula folders:
   su -l oneadmin
   pico ~/.bash_profile
   # add the following to the file, save and exit
   export ONE_LOCATION=/var/lib/one
   export ONE_AUTH=$ONE_LOCATION/.one/one_auth
   export ONE_XMLRPC=http://localhost:2633/RPC2
   export PATH=$ONE_LOCATION/bin:/usr/local/bin:/var/lib/gems/1.8/bin/:/var/lib/gems/1.8/:$PATH
2. Once you have created the file and saved it you should source the file and test it using echo (it should return /var/lib/one):
   source ~/.bash_profile
   echo $ONE_LOCATION
3. Now you must store the OpenNebula username and password in the .one/one_auth file. This will be your OpenNebula username and password and is different from your Ubuntu password. You must also set ownership to oneadmin so that unauthorised access and editing cannot take place:
   mkdir ~/.one
   echo "oneadmin:<TYPE THE PASSWORD HERE>" > ~/.one/one_auth
   sudo chown oneadmin:oneadmin /var/lib/one/.one/one_auth
4. We must now edit the oned configuration file to make OpenNebula suit our installation. By default OpenNebula activates the KVM drivers in this file. We must comment these out and uncomment the Xen drivers, i.e. remove the # on the Xen drivers while inserting # before each KVM driver. We also need to change the database backend from sqlite to mysql (around lines 61 through 66 of the file):
   su -l oneadmin
   pico ~/etc/oned.conf
   # comment out the sqlite backend:
   #DB = [ backend = "sqlite" ]
   # uncomment and set the MySQL backend:
   DB = [ backend = "mysql",
          server  = "localhost",
          port    = 0,
          user    = "oneadmin",
          passwd  = "oneadmin",
          db_name = "opennebula" ]
   # save and exit
5. Now that oned is configured, log in as oneadmin and you can start OpenNebula using the command:
   su -l oneadmin
   one start
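The source/echo check from step 2 can be mimicked against a scratch profile to see exactly what is being tested; a minimal sketch (the scratch path and the trimmed PATH are illustrative only):

```shell
#!/bin/sh
# Sketch: write the environment variables from step 1 to a scratch profile
# and verify them in a subshell, mirroring the source/echo test in step 2.
PROFILE="/tmp/one_profile.demo"
cat > "$PROFILE" <<'EOF'
export ONE_LOCATION=/var/lib/one
export ONE_AUTH=$ONE_LOCATION/.one/one_auth
export PATH=$ONE_LOCATION/bin:/usr/local/bin:$PATH
EOF

# Equivalent of: source ~/.bash_profile && echo $ONE_LOCATION
sh -c ". $PROFILE && echo \$ONE_LOCATION"   # prints /var/lib/one
```

If the echo prints anything other than /var/lib/one, the profile was not written or sourced correctly and OpenNebula's drivers will not find their folders.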
Using OpenNebula
Now that we have configured OpenNebula we can add, delete and monitor hosts. We will start by adding the host onenode. Make sure OpenNebula is running, then add onenode to the cluster:
   su -l oneadmin
   onehost create onenode --im im_xen --vm vmm_xen --net dummy
If the host was added correctly you will be given an id, with which you can monitor the host. You can also monitor hosts using the onehost list or onehost top commands:
   onehost top
   onehost show 5
To demonstrate creating a virtual network, adding an image to the repository, adding a template and deploying a virtual machine we will use the ttylinux operating system. ttylinux is a small Linux distribution, ideal for our environment given its small resource requirements.
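OpenNebula definition files such as the network, image and VM templates used in the next steps are plain KEY = VALUE text, so they can be generated by script. A sketch producing the FIXED network definition used for the ttylinux demo (the lease IP 192.168.1.50 is a placeholder for an address on your bridge's subnet):

```shell
#!/bin/sh
# Sketch: generate the FIXED virtual-network definition for the ttylinux
# demo. The lease IP 192.168.1.50 is a placeholder -- use an address from
# your own bridge's subnet.
cat > /tmp/network.net <<'EOF'
NAME   = "my first network"
TYPE   = FIXED
BRIDGE = xenbr0
LEASES = [ IP="192.168.1.50" ]
EOF

# The file would then be registered with: onevnet create network.net
cat /tmp/network.net
```

Generating these files by script keeps them reproducible and makes it easy to stamp out variants for further networks.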
Launching a virtual machine
The first step is retrieving the required image and storing it on oneserver.
1. To do this we create a folder for storing it and then download the image into that folder (substitute the URL of the ttylinux tarball):
   mkdir -p /var/lib/image_templates/ttylinux
   cd /var/lib/image_templates/ttylinux
   wget <URL of the ttylinux tarball>
2. Now we must untar the image using the following command:
   tar xvf ttylinux.tar.gz
3. After the file unpacks we must create a network, an image and a deployment template. Here are the examples I used.
   a. network.net:
      pico network.net
      NAME   = "my first network"
      TYPE   = FIXED
      BRIDGE = xenbr0
      LEASES = [ IP="<lease IP on your bridge's subnet>" ]
      i. In order to create the network we use the onevnet command, which creates the network in OpenNebula:
         onevnet create network.net
   b. image.one:
      pico image.one
      NAME = ttylinux
      PATH = "/var/lib/image_templates/ttylinux/ttylinux.img"
      TYPE = OS
      PUBLIC = YES
      DESCRIPTION = "ttylinux image"
      i. In order to create the image we use the oneimage command, which registers the image in the OpenNebula datastore. You can select the datastore with the -d flag. When the image is created it will give you an id:
         oneimage create image.one -d default
   c. myfirsttemplate.one:
      CPU="1"
      DISK=[ BUS="ide", IMAGE="ttylinux", IMAGE_UNAME="oneadmin", READONLY="no", TARGET="xvda" ]
      GRAPHICS=[ LISTEN="<listen IP>", TYPE="vnc" ]
      MEMORY="512"
      NAME="my first vm"
      NIC=[ NETWORK="my first network", NETWORK_UNAME="oneadmin" ]
      OS=[ KERNEL="/usr/lib/xen-default/boot/hvmloader" ]
      RAW=[ DATA="builder = 'hvm' shadow_memory = 8 device_model = '/usr/lib/xen-default/bin/qemu-dm' boot = \"c\"", TYPE="xen" ]
      REQUIREMENTS="HOSTNAME = \"onenode\""
      i. In order to create the template we use the onetemplate command, which registers the template in OpenNebula ready to deploy the VM. We then use the command onetemplate instantiate <template id> to instantiate the VM:
         onetemplate create myfirsttemplate.one
         onetemplate instantiate 1
The VM should now be running. You can check its status using the command:
   onevm top
You now have a working private cloud! However, to make life easier for administrators and users, OpenNebula also has a self-service web portal used for managing and monitoring infrastructure, called the Sunstone server.

Installing the Sunstone self-service portal
To install Sunstone you do the following:
1. First, install the gems necessary to run the server:
   sudo ~/share/install_gems sunstone
2. When the installation finishes you can then install noVNC using the following commands:
   cd ~/share
   sudo ./install_novnc.sh
3. Sunstone is now ready to run! Simply enter the following command, then open Firefox/Chromium and navigate to the Sunstone web interface (by default on port 9869 of the front-end):
   sunstone-server start
When logged in to Sunstone the user can use all of OpenNebula's features through the very attractive GUI provided. It also provides a very valuable monitoring tool, which shows a lot: the amount of memory that has been allocated, the CPU usage, and whether there is additional memory available for your VMs. We are also shown how many users there are per group, and how many groups there are.
We can also see, at a glance, the different VMs available to us, including whether they are running properly and who has rights to them. The IPs are shown, along with the log associated with each VM, which shows us step by step what is wrong with the VM if there is an error. There is a chart showing how much memory is being used, whether it is being overloaded and whether the network is being overused.
In Xen's monitoring we have sufficient information about our VMs, such as the name, size, state and owner of each VM that is created, and when it was created. Along with that we can show information about the network: where it belongs and the type of network it is. We can also see the templates we created. To list the images and templates we created, we used the commands oneimage list and onetemplate list.
Provisioning of the public cloud
This section details the steps taken to launch an instance on Amazon Web Services (AWS) through our local OpenNebula infrastructure. AWS drivers are built into OpenNebula, making implementation a lot easier for users and providing near-infinite scalability. These factors make hybrid cloud computing with OpenNebula an attractive proposition. We have not implemented the public cloud integration yet, but we have researched the topic and found that the following is the procedure for configuring OpenNebula with AWS.
1. Register an account with Amazon Web Services
   To register with Amazon Web Services you simply go to the AWS website, click register and enter your details, including credit card information for payment and identification reasons. When you are fully registered, log in and navigate to the AWS console.
2. Set up SSH keys on the public cloud
   Enter the key pairs section of the AWS management console and select import key pair. This will open a prompt allowing you to upload the public key of your front-end, which will allow you to access future instances remotely.
3. Generate an X.509 certificate to authenticate OpenNebula with AWS
   In order to use the remote Amazon APIs we need to generate an X.509 certificate to authenticate with AWS. To do this, go to the security credentials section of your AWS account, select access credentials and select the X.509 certificate tab. Generate a new certificate, download it to /var/lib/one/etc and change the permissions on the file so that only oneadmin can access it.
4. Download the AWS API tools
   Before starting, install the Java JRE package on the front-end, as it is required by the AWS API tools:
      sudo apt-get install openjdk-6-jre-headless
   Edit your profile file to include the following line and then load the source:
      export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
   Now download the Amazon EC2 API tools from the AWS website and save them in a folder on the front-end, e.g. /var/lib/one/ec2-api-tools.
5. Configure the OpenNebula drivers and the EC2 information manager
   The next step is to uncomment the AWS drivers in the OpenNebula configuration file (oned.conf) and restart the front-end. Also, in the /var/lib/one/etc/im_ec2/im_ec2.conf file we can set the instance types and numbers permitted. This is a very important feature, as you do not want to over-provision resources: it can become very costly.
6. EC2 API tools configuration
   Now we must tell OpenNebula where the AWS API tools, certificate and keys are stored. To do this we edit the /var/lib/one/etc/vmm_ec2/vmm_ec2rc file and change the relevant paths to where we have stored the keys, API tools and certificate. It is also good practice to assign environment variables for your credentials and the API tools, for ease of access. To do this we edit the profile file and insert the following:
      export EC2_HOME=$ONE_LOCATION/share/ec2-api-tools
      export EC2_PRIVATE_KEY=$ONE_LOCATION/etc/pk.pem
      export EC2_CERT=$ONE_LOCATION/etc/cert.pem
      export PATH=$EC2_HOME/bin:$PATH
7. Add the AWS host to OpenNebula
   Once all those settings are modified we should be able to add an AWS host the same way as we would a normal local host, except using the AWS drivers:
      onehost create ec2 im_ec2 vmm_ec2 tm_dummy dummy
   We have also researched monitoring tools for the public cloud and found a service called Amazon CloudWatch which, when configured correctly, can return statistics on availability and status. But we cannot directly monitor the VMs' hosts on AWS, as there is no direct access to the hosts on AWS for security reasons.
8. Launch an instance
   First we need to define a VM template as before, which will contain only the instance type and the unique AWS image identifier (AMI). Every image on AWS is assigned an AMI id, and it is a very good way of identifying different images. The following is an example template named myfirstaws.one:
      EC2 = [ AMI="ami-4a0df923",
              INSTANCETYPE="t1.micro" ]
   Then we instantiate the image using the usual onevm command. To check whether the deployment was successful, run the onevm show command as below, with the VM id. It should return the information on the VM, including the IP address. If it does, the VM has been successfully created:
      onevm create myfirstaws.one
      onevm show <vm id>
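The EC2 template from step 8 can also be generated and sanity-checked by script before calling onevm create; a sketch using the guide's example AMI:

```shell
#!/bin/sh
# Sketch: generate the EC2 template from step 8 and sanity-check that the
# two required attributes are present before handing it to onevm create.
# The AMI id is the example value from the guide.
TPL="/tmp/myfirstaws.one"
cat > "$TPL" <<'EOF'
EC2 = [ AMI="ami-4a0df923",
        INSTANCETYPE="t1.micro" ]
EOF

# Warn if either required attribute is missing
for attr in AMI INSTANCETYPE; do
    grep -q "$attr" "$TPL" || echo "missing $attr"
done
cat "$TPL"
```

A check like this is cheap insurance: a template missing its AMI or instance type fails only at deployment time, after the request has already gone out to AWS.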
Conclusion
In conclusion, we have set up a private cloud infrastructure with the capability to burst onto AWS.
Thinspace deskcloud Quick Start Guide Version 1.2 Published: SEP-2014 Updated: 16-SEP-2014 2014 Thinspace Technology Ltd. All rights reserved. The information contained in this document represents the
Using VirtualBox ACHOTL1 Virtual Machines
Using VirtualBox ACHOTL1 Virtual Machines The steps in the Apache Cassandra Hands-On Training Level One courseware book were written using VMware as the virtualization technology. Therefore, it is recommended
Opsview in the Cloud. Monitoring with Amazon Web Services. Opsview Technical Overview
Opsview in the Cloud Monitoring with Amazon Web Services Opsview Technical Overview Page 2 Opsview In The Cloud: Monitoring with Amazon Web Services Contents Opsview in The Cloud... 3 Considerations...
OpenNebula The Open Source Solution for Data Center Virtualization
LinuxTag April 23rd 2012, Berlin OpenNebula The Open Source Solution for Data Center Virtualization Hector Sanjuan OpenNebula.org 1 What is OpenNebula? Multi-tenancy, Elasticity and Automatic Provision
Enterprise Manager. Version 6.2. Installation Guide
Enterprise Manager Version 6.2 Installation Guide Enterprise Manager 6.2 Installation Guide Document Number 680-028-014 Revision Date Description A August 2012 Initial release to support version 6.2.1
Verax Service Desk Installation Guide for UNIX and Windows
Verax Service Desk Installation Guide for UNIX and Windows March 2015 Version 1.8.7 and higher Verax Service Desk Installation Guide 2 Contact Information: E-mail: [email protected] Internet: http://www.veraxsystems.com/
13.1 Backup virtual machines running on VMware ESXi / ESX Server
13 Backup / Restore VMware Virtual Machines Tomahawk Pro This chapter describes how to backup and restore virtual machines running on VMware ESX, ESXi Server or VMware Server 2.0. 13.1 Backup virtual machines
TimeIPS Server. IPS256T Virtual Machine. Installation Guide
TimeIPS Server IPS256T Virtual Machine Installation Guide TimeIPS License Notification The terms and conditions applicable to the license of the TimeIPS software, sale of TimeIPS hardware and the provision
SERVER CLOUD DISASTER RECOVERY. User Manual
SERVER CLOUD DISASTER RECOVERY User Manual 1 Table of Contents 1. INTRODUCTION... 3 2. ACCOUNT SETUP OVERVIEW... 3 3. GETTING STARTED... 6 3.1 Sign up... 6 4. ACCOUNT SETUP... 8 4.1 AWS Cloud Formation
Introduction to Cloud Computing
Introduction to Cloud Computing Shang Juh Kao Dept. of Computer Science and Engineering National Chung Hsing University 2011/10/27 CSE, NCHU 1 Table of Contents 1. Introduction ( 資 料 取 自 NCHC 自 由 軟 體 實
Using iscsi with BackupAssist. User Guide
User Guide Contents 1. Introduction... 2 Documentation... 2 Terminology... 2 Advantages of iscsi... 2 Supported environments... 2 2. Overview... 3 About iscsi... 3 iscsi best practices with BackupAssist...
HP Client Automation Standard Fast Track guide
HP Client Automation Standard Fast Track guide Background Client Automation Version This document is designed to be used as a fast track guide to installing and configuring Hewlett Packard Client Automation
System Administration Training Guide. S100 Installation and Site Management
System Administration Training Guide S100 Installation and Site Management Table of contents System Requirements for Acumatica ERP 4.2... 5 Learning Objects:... 5 Web Browser... 5 Server Software... 5
How To Set Up Egnyte For Netapp Sync For Netapp
Egnyte Storage Sync For NetApp Installation Guide Introduction... 2 Architecture... 2 Key Features... 3 Access Files From Anywhere With Any Device... 3 Easily Share Files Between Offices and Business Partners...
Rally Installation Guide
Rally Installation Guide Rally On-Premises release 2015.1 [email protected] www.rallydev.com Version 2015.1 Table of Contents Overview... 3 Server requirements... 3 Browser requirements... 3 Access
Install Guide for JunosV Wireless LAN Controller
The next-generation Juniper Networks JunosV Wireless LAN Controller is a virtual controller using a cloud-based architecture with physical access points. The current functionality of a physical controller
RED HAT ENTERPRISE VIRTUALIZATION
Giuseppe Paterno' Solution Architect Jan 2010 Red Hat Milestones October 1994 Red Hat Linux June 2004 Red Hat Global File System August 2005 Red Hat Certificate System & Dir. Server April 2006 JBoss April
vsphere Replication for Disaster Recovery to Cloud
vsphere Replication for Disaster Recovery to Cloud vsphere Replication 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced
Installation Guide for Citrix XenServer 5.5
white paper Installation Guide for Citrix XenServer 5.5 Title: Installation Guide for Citrix XenServer 5.5 Author(s): Xtravirt (Paul Buckle) Target Audience: Technical - Novice Current Revision: 1.0 (Jul
Core Protection for Virtual Machines 1
Core Protection for Virtual Machines 1 Comprehensive Threat Protection for Virtual Environments. Installation Guide e Endpoint Security Trend Micro Incorporated reserves the right to make changes to this
Consolidated Monitoring, Analysis and Automated Remediation For Hybrid IT Infrastructures. Goliath Performance Monitor Installation Guide v11.
Consolidated Monitoring, Analysis and Automated Remediation For Hybrid IT Infrastructures Goliath Performance Monitor Installation Guide v11.6 (v11.6) Document Date: August 2015 www.goliathtechnologies.com
XenDesktop Implementation Guide
Consulting Solutions WHITE PAPER Citrix XenDesktop XenDesktop Implementation Guide Pooled Desktops (Local and Remote) www.citrix.com Contents Contents... 2 Overview... 4 Initial Architecture... 5 Installation
Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V
Installation Guide for Microsoft Hyper-V Egnyte Inc. 1890 N. Shoreline Blvd. Mountain View, CA 94043, USA Phone: 877-7EGNYTE (877-734-6983) www.egnyte.com 2013 by Egnyte Inc. All rights reserved. Revised
Freshservice Discovery Probe User Guide
Freshservice Discovery Probe User Guide 1. What is Freshservice Discovery Probe? 1.1 What details does Probe fetch? 1.2 How does Probe fetch the information? 2. What are the minimum system requirements
CXS-203-1 Citrix XenServer 6.0 Administration
Page1 CXS-203-1 Citrix XenServer 6.0 Administration In the Citrix XenServer 6.0 classroom training course, students are provided with the foundation necessary to effectively install, configure, administer,
Signiant Agent installation
Signiant Agent installation Release 11.3.0 March 2015 ABSTRACT Guidelines to install the Signiant Agent software for the WCPApp. The following instructions are adapted from the Signiant original documentation
Citrix XenServer Workload Balancing 6.5.0 Quick Start. Published February 2015 1.0 Edition
Citrix XenServer Workload Balancing 6.5.0 Quick Start Published February 2015 1.0 Edition Citrix XenServer Workload Balancing 6.5.0 Quick Start Copyright 2015 Citrix Systems. Inc. All Rights Reserved.
Interworks. Interworks Cloud Platform Installation Guide
Interworks Interworks Cloud Platform Installation Guide Published: March, 2014 This document contains information proprietary to Interworks and its receipt or possession does not convey any rights to reproduce,
OpenNebula @ Open Cloud Day. http://ungleich.ch
2015 http://ungleich.ch Introduction Nico Schottelius President of ungleich GmbH Linux/Unix Infrastructure Services Application Hosting (RubyOnRails, Django, Nodejs) Hosting based on OpenNebula and cdist
October 2011. Gluster Virtual Storage Appliance - 3.2 User Guide
October 2011 Gluster Virtual Storage Appliance - 3.2 User Guide Table of Contents 1. About the Guide... 4 1.1. Disclaimer... 4 1.2. Audience for this Guide... 4 1.3. User Prerequisites... 4 1.4. Documentation
Overview... 2. Customer Login... 2. Main Page... 2. VM Management... 4. Creation... 4 Editing a Virtual Machine... 6
July 2013 Contents Overview... 2 Customer Login... 2 Main Page... 2 VM Management... 4 Creation... 4 Editing a Virtual Machine... 6 Disk Management... 7 Deletion... 7 Power On / Off... 8 Network Management...
Quick Start Guide. Citrix XenServer Hypervisor. Server Mode (Single-Interface Deployment) Before You Begin SUMMARY OF TASKS
Quick Start Guide VX VIRTUAL APPLIANCES If you re not using Citrix XenCenter 6.0, your screens may vary. Citrix XenServer Hypervisor Server Mode (Single-Interface Deployment) 2013 Silver Peak Systems,
Comodo MyDLP Software Version 2.0. Installation Guide Guide Version 2.0.010215. Comodo Security Solutions 1255 Broad Street Clifton, NJ 07013
Comodo MyDLP Software Version 2.0 Installation Guide Guide Version 2.0.010215 Comodo Security Solutions 1255 Broad Street Clifton, NJ 07013 Table of Contents 1.About MyDLP... 3 1.1.MyDLP Features... 3
OGF25/EGEE User Forum Catania, Italy 2 March 2009
OGF25/EGEE User Forum Catania, Italy 2 March 2009 Constantino Vázquez Blanco Javier Fontán Muiños Raúl Sampedro Distributed Systems Architecture Research Group Universidad Complutense de Madrid 1/31 Outline
Deploy the ExtraHop Discover Appliance with Hyper-V
Deploy the ExtraHop Discover Appliance with Hyper-V 2016 ExtraHop Networks, Inc. All rights reserved. This manual, in whole or in part, may not be reproduced, translated, or reduced to any machine-readable
Bosch Video Management System High availability with VMware
Bosch Video Management System High availability with VMware en Technical Note Bosch Video Management System Table of contents en 3 Table of contents 1 Introduction 4 1.1 Restrictions 4 2 Overview 5 3
Syncplicity On-Premise Storage Connector
Syncplicity On-Premise Storage Connector Implementation Guide Abstract This document explains how to install and configure the Syncplicity On-Premise Storage Connector. In addition, it also describes how
Metalogix SharePoint Backup. Advanced Installation Guide. Publication Date: August 24, 2015
Metalogix SharePoint Backup Publication Date: August 24, 2015 All Rights Reserved. This software is protected by copyright law and international treaties. Unauthorized reproduction or distribution of this
PARA-VIRTUALIZATION IMPLEMENTATION IN UBUNTU WITH XEN HYPERVISOR Bachelor Thesis by: Gabriel IRO & Mehmet Batuhan ÖZCAN
BLEKINGE INSTITUTE OF TECHNOLOGY PARA-VIRTUALIZATION IMPLEMENTATION IN UBUNTU WITH XEN HYPERVISOR Bachelor Thesis by: Gabriel IRO & Mehmet Batuhan ÖZCAN Adviser: Prof. Dr. Kurt Tutschku Date of Filing:
Building Clouds with OpenNebula 3.2
OSDC 2012 24 th April. Nürnberg Building Clouds with OpenNebula 3.2 Constantino Vázquez Blanco dsa-research.org Distributed Systems Architecture Research Group Universidad Complutense de Madrid Building
Team Foundation Server 2012 Installation Guide
Team Foundation Server 2012 Installation Guide Page 1 of 143 Team Foundation Server 2012 Installation Guide Benjamin Day [email protected] v1.0.0 November 15, 2012 Team Foundation Server 2012 Installation
