Using Open Source Virtualization Technology In Computer Education

SUNY IT Master's Project
By: Ronny L. Bull
Master's Project Advisor: Geethapriya Thamilarasu, Ph.D.
Date: 9-04-2011

Abstract:

This paper outlines the work and research I performed while working on my Computer Science Master's Project at SUNY IT. The goal of this project was to create a scalable virtualization server cluster using Open Source technology for use by the SUNY IT Computer Science and Network Computer Security departments. The servers are to be used by students and faculty for labs, development, and research work. One server has also been dedicated to the Computer Science Network Administration department (a.k.a. The Dognet) for use in the migration of some of the current aging physical department servers to a virtual environment. A method for integrating the current Computer Science department LDAP user authentication has been provided, and virtual machines can be assigned to specific users, who can then access and control them over the web via a Java enabled browser.

Table of Contents:

1) Virtualization Technology Overview
2) Hypervisor Options & My Choice
3) Server Hardware & Issues
4) XCP Installation
5) Setting Up ISO Image Storage
6) Extending The Default LVM VG
7) Networking, VLANS, & Firewall Rules
8) Server Management
9) Bug Workarounds
10) User Access & Authentication
11) Conclusion
12) Future Research
13) Bibliography

1) Virtualization Technology Overview:

Virtualization technology allows an organization to leverage the power of multi-core server hardware to host multiple virtual servers on a single physical host. This allows the organization to consolidate a large server farm down to a few powerful machines, saving space and energy costs while still retaining the same level of service it previously relied upon. Server virtualization is made possible by
the use of a hypervisor, the basic abstraction layer of software that sits directly on the hardware below any operating systems. It is responsible for CPU scheduling and memory partitioning for the various virtual machines running on the physical hardware. It not only abstracts the physical hardware for the virtual machines, but also controls the execution of virtual machines as they share the common processing environment. The hypervisor has no knowledge of networking, external storage devices, video, or any other common I/O functions. [1]

The machines that run on top of the hypervisor fall into two categories: host and guest. The host operating system is a privileged virtual machine that has special rights to access physical I/O resources as well as interact with the other virtual machines running on the system. The guest operating systems have no direct access to the physical hardware on the machine, and rely on the host operating system to manage them. As a result the host operating system must be up and running before any guests are allowed to start. [1]

2) Hypervisor Options & My Choice:

Since the project only had a hardware budget I had to make do with a free and Open Source hypervisor solution. There are many virtualization products offered today, and all of the major players offer their hypervisors for free. However, not all of them are Open Source, and additional costs tend to creep in to unlock functionality via separate utilities. Some of these products are Microsoft Hyper-V, Citrix XenServer, and VMware ESXi. Although they are all very powerful and capable products, the extra costs involved to utilize them effectively were a limiting factor. So instead I decided to use Xen, the Open Source hypervisor.
Citrix bases all of its products on the Open Source Xen hypervisor technology, and after some research I discovered that the Xen community has developed an Open Source version of Citrix XenServer called Xen Cloud Platform [3], which is essentially a clone of Citrix XenServer version 5.6 FP1. It uses the same Xen API calls as Citrix XenServer, and can be managed effectively using the free Citrix XenCenter [4] virtual server management utility. Xen Cloud Platform, or XCP for short, is
a fairly new Linux distribution that runs the Xen hypervisor and a minimal CentOS Linux based host operating system (a.k.a. Domain0). Because it is so new and still considered to be in a testing state, there is very little documentation available. Most of the installation and setup I performed was done by trial and error; however, since XCP is based on Citrix XenServer version 5.6 FP1, I was able to reference the manual for that specific product [2] to figure out the API commands necessary to set up the servers.

3) Server Hardware Specs & Issues:

The project started with the construction of three identical custom built, quiet computing, rack mountable servers. The parts were ordered from the Internet, and the servers were built in house. The following hardware was used in each server:

Motherboard: SUPERMICRO MBD-X9SCM-O Server Motherboard (Sandy Bridge) w/ 2 integrated Gigabit Intel NICs
Processor: Intel Xeon E3-1240 @ 3.30GHz Quad Core w/ Hyper-Threading
RAM: 16 GB Crucial DDR3 SDRAM ECC Unbuffered Server Memory
Hard Drives: 2x Seagate Momentus XT 500GB Hybrid Hard Drive
Mounts: 2x Mushkin Enhanced drive adapter bracket
Rack Mount Case: Antec Take 4 + 4U With 650W Power Supply (Quiet Computing)
Rack Rails: Antec 20" Side Rails
Total Cost Per Server: $1,331.46

After the servers were assembled, trial installations were performed and some issues were encountered. Two of the three servers were locking up repeatedly during heavy I/O usage. At first overheating was suspected, but after extensive hardware stress tests and part swapping it was deduced that two of the three motherboards were faulty; they were ultimately RMA'd for repair. Once the hardware was all straightened out I began installing Xen Cloud Platform 1.1 Beta on the three servers.

4) XCP Installation:

I downloaded the Xen Cloud Platform 1.1 Beta ISO image from the web [5], and burned the image to a disc. I then temporarily installed
a CDROM drive into each server, and began the installations. The initial XCP installation was pretty straightforward. After answering a few prompts and setting up the management interface IP address I rebooted the server and was greeted by a panda bear floating in the clouds. The server finished booting and an ncurses based menu appeared on the screen which displayed some basic information about the server, allowed a few simple configuration settings to be edited, and provided menu options to control installed virtual machines and other resources. Once the networking for the management interface had been established this menu was also accessible via SSH.

Figure 1: xsconsole viewed in a remote terminal via SSH.

5) Setting Up ISO Image Storage:

All three servers were equipped with two 500GB hard drives. During the XCP installation I chose for it to only use the first hard drive (/dev/sda). It created an LVM partition using the entire drive, and installed the operating system on it. After the installation was completed I set up a 50GB ext3 partition on the second hard drive (/dev/sdb) in each machine to be used as a local ISO image storage repository. Once the partition was created and formatted I had to let the Xen API know about it. In order to do this I performed the following commands on each server:

mkdir -p /var/opt/xen/iso_import
mount /dev/sdb1 /var/opt/xen/iso_import

Then I added the following entry to /etc/fstab:

/dev/sdb1 /var/opt/xen/iso_import ext3 defaults 0 0

Finally I had to register the new storage repository with the Xen API:

xe sr-create name-label=iso_import type=iso \
  device-config:location=/var/opt/xen/iso_import/ \
  device-config:legacy_mode=true content-type=iso

Once the above steps were performed I could place ISO images directly into /var/opt/xen/iso_import and they would become immediately available for use when installing new virtual machines using XenCenter.

6) Extending The Default LVM VG:

After creating the ISO storage repository on the first partition of the second hard disk (/dev/sdb1) I still had 450GB of free space left to use. I decided to extend the default LVM volume group that was set up on the first hard
disk (/dev/sda) to include the rest of the space available on the second drive. This gives each server an additional 450GB of space to house additional virtual machines rather than just the 500GB from the first hard drive. Whenever a new virtual machine is created the Xen API creates a new LVM partition within the default volume group of the size requested by the user. The use of LVM provides a scalable storage solution that allows for the addition of extra hard disks down the road.

To add the rest of the second hard drive to the default LVM volume group I created another partition (/dev/sdb2) on the second hard disk using fdisk and set it to type 8e, which is Linux LVM. I then used the following commands to extend the default volume group onto /dev/sdb2. First I had to create the physical volume:

pvcreate /dev/sdb2

Then I needed to find out the name of the default LVM volume group:

pvdisplay

This returned a few lines of information, of which I was interested in the VG Name:

VG Name    VG_XenStorage-be47438c-f310-dcf6-744e-651ba2bfaff9

I then added the newly created physical volume to the existing volume group:

vgextend VG_XenStorage-be47438c-f310-dcf6-744e-651ba2bfaff9 /dev/sdb2

After running the vgs command I was able to verify that the additional 450GB had been added to the default volume group.

7) Networking, VLANS, & Firewall Rules:

Once the storage was all set up I had to configure the networking and firewall rules for each interface. The first network interface was assigned at installation to be used as the management port, and was put on the CSAdmin subnet. Firewall rules were set up that only allow access to the management interface by authorized administrators and the web access appliance. The management interface is the only interface that allows the use of API calls over it, and by locking access down to authorized users and appliances only, it prevents unauthorized users from making changes to the system.
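The management interface lockdown described above could be expressed as a handful of packet filter rules; the following is a minimal sketch using iptables, in which the interface name, admin subnet, and appliance address are illustrative assumptions (not the real CSAdmin values), and the rules are printed rather than applied so the sketch has no side effects:

```shell
#!/bin/sh
# Sketch: restrict the XCP management interface so that only the admin
# subnet and the web access appliance can reach the Xen API over it.
MGMT_IF="eth0"                 # assumed management interface
ADMIN_NET="192.168.100.0/24"   # assumed CSAdmin subnet
APPLIANCE="192.168.201.50"     # assumed xen1-web appliance address

# Print each rule instead of applying it, keeping the sketch safe to run.
emit() { echo "iptables $*"; }

emit -A INPUT -i "$MGMT_IF" -s "$ADMIN_NET" -j ACCEPT
emit -A INPUT -i "$MGMT_IF" -s "$APPLIANCE" -j ACCEPT
emit -A INPUT -i "$MGMT_IF" -j DROP
```

In real use the echo wrapper would be dropped so that iptables actually installs the rules; the final DROP rule is what shuts out everyone not explicitly accepted above it.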
The second network card was connected to a VLAN trunked port on the switch, and set up to be used by the virtual machines for network traffic. Multiple VLANS were created and bound to it for the machines to be assigned to. By using VLANS on the second interface, virtual machines can be placed on different subnets depending on their use and
who they are assigned to. Multiple VLANS were created on each server using the following commands. First I created a new network with a label:

xe network-create name-label=network201

This returned a unique ID:

d14427c2-4940-0db1-7d96-f55434af319e

Then I assigned the VLAN tag 201 to this network and bound it to the physical interface eth1, where pif-uuid is the UUID of eth1 obtained from xe pif-list:

xe vlan-create network-uuid=d14427c2-4940-0db1-7d96-f55434af319e \
  pif-uuid=e602777f-b4e9-e231-7858-81189c47c434 vlan=201

Once all of the VLANS were created and assigned to the physical interface eth1, I needed to assign eth1 itself to a VLAN and set up some necessary firewall rules so it could pass network traffic and respond to requests. To configure eth1 I ran the following command:

xe pif-reconfigure-ip uuid=d14427c2-4940-0db1-7d96-f55434af319e \
  mode=static IP=192.168.201.11 netmask=255.255.255.0 \
  gateway=192.168.201.1 DNS=192.168.201.3

eth1 was assigned to VLAN 201 and given a static IP address on the student subnet.

8) Server Management:

I now had three identical servers with Xen Cloud Platform 1.1 Beta installed, storage set up, and networking configured. One thing to note here is that I could not simply clone the first machine after it was complete in order to produce the second and third servers. Every resource registered in the Xen API is assigned a unique ID; if the machines were just clones of an original, all three would have the exact same UUIDs for each resource, which would complicate things immensely. So each machine had to be set up individually from scratch. To manage these servers, Citrix XenCenter [4] was installed on the administrator workstation Tolmin in the Dognet Lounge, and firewall rules were added that allowed Tolmin to connect to the management interfaces on each server. XenCenter allows an administrator to connect to multiple XenServer based servers and remotely manage them through a single application.
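Because the network-create and vlan-create steps from section 7 have to be repeated for every VLAN on every server, they lend themselves to a small loop. Below is a hedged sketch of such a script; the VLAN tags and the PIF UUID placeholder are assumptions, and the xe commands are printed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Sketch: generate the xe commands that create one XCP network per
# VLAN tag and bind each to the physical interface eth1.
PIF_UUID="<uuid-of-eth1-from-xe-pif-list>"   # placeholder, not a real UUID

network_cmd() {
    # $1 = VLAN tag
    echo "xe network-create name-label=network$1"
}

vlan_cmd() {
    # $1 = network UUID returned by network-create, $2 = VLAN tag
    echo "xe vlan-create network-uuid=$1 pif-uuid=$PIF_UUID vlan=$2"
}

for tag in 201 202 203; do   # assumed VLAN tags
    network_cmd "$tag"
    # in real use the UUID would be captured:
    #   net=$(xe network-create name-label=network$tag)
    vlan_cmd "<returned-network-uuid>" "$tag"
done
```

In real use the echoes would be removed and the UUID printed by network-create fed directly into vlan-create, as shown in the comment inside the loop.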
The application lets the administrator create, export, import, connect to, and manage virtual machines. It also allows the administrator to manage server resources, take snapshots of virtual machines, create templates from existing virtual machines, and fine-tune virtual machine configurations and CPU priority. The three servers were added to XenCenter and I configured three server pools
to assign the servers to: CSAdmin, NCS-Student, & CS-Student. Xen1 was assigned to the CSAdmin pool to be used by the CSAdmin department to migrate servers to. Xen2 was assigned to the NCS-Student pool to house virtual machines that NCS students will use in labs. Xen3 was added to the CS-Student pool to be used by CS students for labs, research, and development work. Due to the scalability of the project, more servers are expected to be added to these pools as they become available from ITS.

9) Bug Workarounds:

While setting up the server cluster I encountered a few bugs in the XCP software. I was able to come up with workarounds that I submitted to the XCP developers for consideration, and to make them aware of the glitches.

The first issue I encountered was when I tried to install a new Linux virtual machine using one of the included Linux templates. Every Linux template I tried would not allow me to boot from the installation CDROM even if I specified it as the default boot device. It turned out that there is a bug in all of the Linux templates that causes them not to set the boot priority upon virtual machine creation. To get by this issue you can use the Other Installation Media template to install any Linux distribution, or an already created virtual machine can be fixed by running the following at the command line:

xe vm-param-set uuid=<vm UUID> HVM-boot-policy="BIOS order" \
  HVM-boot-params:order="dc"

where <vm UUID> is the UUID returned for the specific virtual machine by the xe vm-list command. This command sets the boot order for the virtual machine to 'dc' (CDROM then hard drive).

The second bug I encountered took 30 days to show itself; it had to do with licensing.
XCP is supposed to be free and Open Source software with no license restrictions. However, when the developers ported XenServer they neglected to remove the license file that came with it, and since XenServer has a 30 day trial license, that limit also applied to XCP. To fix this issue I simply had to stop the Xen API service and edit the file /var/xapi/state.db. In that file there is a field called expiry; setting that variable to a date 30 days in the future fixes the license issue temporarily. This bug is supposed to be fixed in the next XCP update, but for the
time being I wrote a bash script that does it automatically.

The last bug I encountered was similar to the second. Whenever an administrator tried to take a snapshot of a virtual machine from XenCenter on Tolmin they would get an error that stated: "Snapshots require XenServer 5.5 or later." This seemed a bit strange since XCP is based on XenServer 5.6 FP1. It turned out that the Xen API needed to be fooled again: I had to stop the Xen API service, then edit the file /opt/xensource/bin/xapi and change the version variable from 1.1.0 to 5.6.0. Restarting the service after the change allowed snapshots to be taken from XenCenter on Tolmin. This fix does not persist after an update to XCP, so I created a simple script that can be run to make the change after an update.

10) User Access & Authentication:

Once all of the servers were set up and the bugs were squashed, I had to provide a way for administrators to provision virtual machines to individual users, and allow those users to access their virtual machines via a web page by authenticating with their CS LDAP credentials. After trying a few different approaches I settled on an Open Source project called XVP Appliance [6], which was designed as a drop-in virtual machine that provides secure web access to virtual machines hosted on Citrix XenServer and XCP servers. To install the appliance I simply imported it into the Xen1 server, fired it up, changed the hostname to xen1-web, and gave it an IP address on the student subnet. I had to create rules on the firewalls to allow traffic to reach it, as well as give it access to the management ports on all of the Xen servers. Then I set it up to use the CS LDAP server for authentication. Once the initial configuration changes were made I was able to use a command line utility to add the three server pools to the appliance, which also added the corresponding virtual machines assigned to each pool.
Users were also added via the utility simply by going to the user menu and typing in a CS LDAP user name to add to the system, then assigning that user a virtual machine from a pool. Access rights are applied on a per machine basis to allow fine-grained control over who gets to do what with each virtual machine.
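Conceptually, the appliance's LDAP authentication amounts to a simple bind attempt against the CS directory as the user logging in. The following is a hedged sketch of an equivalent check using OpenLDAP's ldapsearch; the server URI and DN layout are assumptions about the CS directory, and the command is only printed here, since a real check would pass -w "$password" and inspect ldapsearch's exit status:

```shell
#!/bin/sh
# Sketch: build the ldapsearch invocation that would verify a CS LDAP
# credential by binding as the user.
LDAP_URI="ldap://ldap.cs.sunyit.edu"   # assumed CS LDAP server
BASE_DN="dc=cs,dc=sunyit,dc=edu"       # assumed base DN

bind_cmd() {
    # $1 = CS LDAP user name
    echo "ldapsearch -x -H $LDAP_URI -D uid=$1,ou=People,$BASE_DN -b $BASE_DN -s base"
}

bind_cmd student1
```

A successful bind (exit status 0) would mean the credentials are valid; any bind failure would be rejected as a bad login.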
Once a user has been set up in the system they can visit https://xen1-web.cs.sunyit.edu in a Java enabled browser to access their virtual machines. Upon visiting the site the user is prompted for their authentication credentials, and is then presented with a list of the virtual machines that they have access to. Controls are provided that allow the user to start, shut down, and restart each machine. They can also gain console access to their machines through the interface. All virtual machine console communication is done over the standard VNC port 5900 from the web server to the host server the virtual machine resides on. The end user only needs access to port 443 on the web server for secure HTTP communication.

Figure 2: XVP Appliance Administrator View.

Figure 3: XVP Appliance console view of a Windows XP VM.

11) Conclusion:

Xen Cloud Platform proved to be a stable and robust competitor to the other enterprise level virtualization options currently offered. It provides a low cost (hardware only), highly scalable solution to server virtualization, and allows for the creation of Windows, Linux, and FreeBSD guests. The addition of a web based end user interface and LDAP authentication gives students and faculty a secure way to access their virtual machines from anywhere they have Internet access (provided that a proper firewall rule is created). VLAN networking also allows administrators to make certain that virtual machines are assigned to the proper subnets, and that no machines are put on a subnet where they can do harm to production servers and desktops. In addition, lab environments can be
consolidated, and students can now each have their own set of virtual machines to work with rather than sharing, maximizing their learning experience.

12) Future Research:

This project has opened my eyes to some possible research opportunities in the realm of virtualization. One such area that could be investigated is how to effectively optimize virtual machine resources so that the number of virtual machines running on a single host can be maximized. Currently each of the servers is outfitted with 16GB of RAM (upgradable to 32GB). Each running virtual machine consumes the fixed amount of RAM that it was allocated upon creation. If the combined memory of all running virtual machines comes close to or exceeds the 16GB limit, then no new machines can be started. Finding a way to optimize the amount of RAM allocated to each machine would be beneficial in allowing more machines to run on the server at one time. These numbers would have to be figured on a per machine basis based on priority, operating system usage, load, and purpose. However, general guidelines could be established to help optimize installed virtual machines in order to maximize the number of virtual machines running on a single host at one time.

13) Bibliography:

1) Xen Overview: http://xen.org/files/marketing/howdoesXenWork.pdf
2) Citrix XenServer 5.6 Feature Pack 1 Administrator's Guide: http://support.citrix.com/article/ctx127321
3) Xen Cloud Platform: http://xen.org/products/cloudxen.html
4) Citrix XenCenter: http://community.citrix.com/display/xs/xencenter
5) XCP Download: http://xen.org/download/xcp/index.html
6) XVP Appliance: http://www.xvpsource.org/?topic=about&page=xvpappliance
7) My notes in more detail: http://ronnybull.com/portfolio/mastersproject/