Technical white paper

HP Helion CloudSystem 9.0: Managing multiple hypervisors with OpenStack technology

Table of contents

Executive summary
HP Helion CloudSystem 9.0 overview
  Introducing HP Helion CloudSystem
  HP Helion CloudSystem Foundation
  HP Helion CloudSystem Enterprise
Supported hypervisors in HP Helion CloudSystem 9.0
Configuring compute nodes
  vCenter and ESXi compute node clusters
  Microsoft Hyper-V
  Activation of a Windows 2012 R2 Compute node
  Linux KVM
Monitoring hypervisors in HP Helion CloudSystem 9.0
Working with images
  Uploading images in Glance
Deploying instances
HP Helion CloudSystem Enterprise Cloud Service Automation
  Topology Service Design
  Create and Publish Service Offering
Summary
For more information
Executive summary

In this document we explain how to use multiple heterogeneous hypervisors as HP CloudSystem 9.0 Compute Nodes. CloudSystem 9.0 supports VMware ESXi, Microsoft Hyper-V, and Red Hat Enterprise Linux (RHEL) KVM as Compute Node resources. This document demonstrates how to configure and deploy instances to each of these supported platforms. Additionally, we create an HP Helion CloudSystem Enterprise Topology Service Design that deploys virtual machine instances across multiple platforms.

Target audience: This document is intended for system integrators and administrators of HP Helion CloudSystem Enterprise. The reader should be familiar with HP Helion CloudSystem and HP Cloud Service Automation.

HP Helion CloudSystem 9.0 overview

Introducing HP Helion CloudSystem

HP Helion CloudSystem is the industry's most complete, fully integrated, end-to-end private cloud solution, delivering automation, orchestration, and control across multiple clouds. Over 3,000 customers worldwide are using HP Helion CloudSystem today for quickly deploying IT services, managing or developing applications, streamlining operations, and more. From basic infrastructure cloud services to the most advanced application cloud services, HP Helion CloudSystem 9.0 offers enterprises and service providers a clear path to hybrid cloud. With the incorporation of HP Helion OpenStack and the HP Helion Development Platform (HDP) into the new HP Helion CloudSystem 9.0 offering, we've integrated a more complete OpenStack-based software offering directly into the product and added Cloud Foundry technology, allowing you to create a modern developer environment in which to develop and deploy cloud-native applications.
HP Helion CloudSystem works in a heterogeneous environment and includes hybrid cloud management software, and based on the customer's unique needs may also include servers, storage, and networking, combined with installation services, making it even more efficient to deploy a private cloud. HP Helion CloudSystem comes in two editions: HP Helion CloudSystem Foundation and HP Helion CloudSystem Enterprise.

HP Helion CloudSystem Foundation

If your organization needs a platform that supports basic cloud infrastructure services, then you will need the easy-to-deploy HP Helion CloudSystem Foundation, which targets simple IaaS and is delivered in the form of a virtual appliance. In addition to being a strong choice when simple infrastructure service delivery is the primary cloud objective, the small network footprint and lower price of HP Helion CloudSystem Foundation software make it useful as a door-opening offer for customers just getting started with cloud. It is built on HP Helion OpenStack and includes the HP Helion Development Platform (HDP) to enable the development of cloud-native applications.
With HP Helion CloudSystem Foundation you benefit from:

- Open APIs for both administrative and cloud service functions, including OpenStack-based APIs, enabling highly automated cloud delivery
- Added support for OpenStack Swift object storage, expanding the scope of storage-centric use cases and environments to provision
- The HP Helion Development Platform (HDP), which leverages leading open source technologies such as Cloud Foundry, Docker, and OpenStack, providing greater support for cloud-native application development
- Easier installation via a software appliance delivery model
- Support for deployment of services to VMware, Red Hat KVM, and Microsoft Hyper-V host environments, providing a more enterprise-friendly tool to provision virtualized environments in the cloud
- Enhanced networking with support for DVR and VXLAN for improved multi-tenancy
- Intelligent workload targeting through support of availability zones and host aggregates, improving separation of usage by different groups and services
HP Helion CloudSystem Enterprise

If you need a more robust, advanced cloud solution, then you are ready for HP Helion CloudSystem Enterprise. HP Helion CloudSystem Enterprise enables the broadest range of use cases and supports delivery of IaaS, PaaS, and SaaS models. These services are delivered through the inclusion of software such as HP Cloud Service Automation (CSA) and HP Operations Orchestration (OO). In addition, it includes HP Helion CloudSystem Foundation as well as HP Matrix Operating Environment (OE) as an alternate infrastructure provider that enables the delivery of a broad range of cloud services. HP Helion CloudSystem Enterprise is also delivered in the form of a virtual appliance.

HP Helion CloudSystem Enterprise provides all the capabilities and value of HP Helion CloudSystem Foundation plus:

- Advanced infrastructure and application service delivery in minutes
- Support for public cloud resource providers through CSA-enabled enterprise-class lifecycle management for hybrid cloud services
- Expanded support for physical server provisioning via HP OneView integration, allowing you to leverage HP OneView profiles to create physical service designs in CSA
- A self-service marketplace portal through CSA, allowing both easier service lifecycle management and the ability for IT consumers to request cloud services
- Investment protection for HP Matrix OE users: manage new OpenStack-based resource pools alongside your existing HP Matrix OE resource pools
- An enhanced drag-and-drop topology designer for rapid definition and orchestration of new multi-tier cloud services
- An improved Amazon Web Services (AWS) provider to support repatriating workloads to HP Helion Eucalyptus private cloud

Supported hypervisors in HP Helion CloudSystem 9.0

CloudSystem 9.0 supports KVM (QEMU), Microsoft Hyper-V, and VMware ESXi hypervisors. Organizations may be required to support virtual machine images in many different formats.
A key to portability is the ability of a private cloud solution to support a variety of virtual machine image formats. Virtual machine images can originate from many sources within an organization: internal development, physical migration, external providers, acquisitions, etc. HP Helion CloudSystem can provide a single solution to manage, deploy, and develop solutions based on the different image formats that exist within an organization. CloudSystem 9.0 supports heterogeneous hypervisors for Nova compute that can host a variety of image formats, including VMware ESXi vmdk, KVM qcow2, and Microsoft Hyper-V vhdx.

Configuring compute nodes

In this section we discuss configuration options and provide a high-level overview of configuring and activating CloudSystem 9 ESXi, Hyper-V, and KVM compute hypervisors. Refer to "Part III: Resource configuration in CloudSystem" in the HP Helion CloudSystem 9 Administrator Guide for complete details on installing, configuring, and activating Compute node resources.

vCenter and ESXi compute node clusters

To configure VMware ESXi servers as Nova compute node resources, the ESXi servers must be part of a VMware Distributed Resource Scheduler (DRS) cluster. The ESXi servers must have a shared datastore with sufficient capacity to host the organization's virtual machine instances. Each ESXi server in the cluster must be connected to the following CloudSystem networks:

- Datacenter Management Network
- Cloud Management Network
- Cloud Data Trunk
In Figure 1 we have a two-node DRS cluster named Compute in the HP Helion CloudSystem Datacenter. In this configuration each cluster node has a 45GB local datastore and the cluster nodes share a 2TB datastore.

Figure 1: vCenter CloudSystem 9.0 Compute Cluster

Before a VMware DRS cluster can be recognized by CloudSystem and subsequently activated, the vCenter server must be registered. Use the HP Helion CloudSystem Operations Console to register a vCenter server by selecting Menu > Integrated Tools > Register vCenter. Complete the form as shown in Figure 2 and select REGISTER.

Figure 2: Register vCenter with CloudSystem
Once the vCenter server is registered, all DRS clusters managed by the vCenter server are imported into HP Helion CloudSystem. The imported clusters can be viewed from the Operations Console under Menu > Compute > Compute Nodes. The status of the imported clusters will show as IMPORTED.

Figure 3: CloudSystem OPS CONSOLE Compute Nodes

As mentioned previously, all DRS clusters are imported. In this example the only VMware DRS cluster that can be configured for use as an HP Helion CloudSystem Compute node resource is the Compute cluster. Prior to activating an ESXi DRS cluster, ensure there are sufficient IP addresses available on the Datacenter Management Network (DCM). During the initial installation HP Helion CloudSystem requires a minimum of 12 IP addresses on the DCM to install HP Helion CloudSystem 9.0. The CloudSystem activation will install an OVSVAPP virtual machine on each ESXi server in the compute cluster, and each OVSVAPP will require an additional IP address on the DCM. To add IP addresses to the Datacenter Management Network, use the HP Helion CloudSystem OPS Console and select Menu > System > System Summary > Networking > UPDATE MANAGEMENT TRUNK > DC Management Network Detail. Select Add IP Range to add an additional range or Edit IP Range to expand the existing range.

Figure 4: Data Center Management Network

In our sample DCM configuration we have two DCM IP address ranges, 192.168.145.58-192.168.145.69 and 192.168.145.70-192.168.145.75. The first range, 192.168.145.58-192.168.145.69, was specified during the First-Time Installer (FTI) process. The second range, 192.168.145.70-192.168.145.75, was added using the HP Helion CloudSystem
9.0 OPS Console. These additional IP addresses provide six DCM IP addresses for the ESXi servers that will be used as compute node resources. The entire range 192.168.145.58-192.168.145.75 could have been specified during the First-Time Installer process. Additional IP addresses can be added as required.

Select the ellipsis to the right of the Compute cluster shown in Figure 3 and choose Activate. Enter the name(s) of the network adapter or adapters that are connected to the Cloud Data Trunk network and select Activate. In our example, shown in Figure 5, vmnic1 is connected to the Cloud Data Trunk.

Figure 5: ESXi Compute Node Activation

If there is an existing Virtual Distributed Switch connected to the Cloud Data Trunk, that switch can be specified by selecting the Distributed vSwitch Name radio button.
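The DCM sizing rule above (a 12-address base during installation plus one additional address per ESXi OVSVAPP) lends itself to a quick sanity check. The sketch below is illustrative only, using the sample ranges from this section; it is not a CloudSystem tool:

```python
from ipaddress import IPv4Address

def addresses_in_range(start: str, end: str) -> int:
    """Count the addresses in an inclusive IPv4 range such as an OPS Console IP range."""
    return int(IPv4Address(end)) - int(IPv4Address(start)) + 1

def dcm_addresses_needed(esxi_hosts: int, base: int = 12) -> int:
    """12 addresses for the initial installation plus one per ESXi OVSVAPP VM."""
    return base + esxi_hosts

# The two sample ranges configured in the OPS Console above
available = (addresses_in_range("192.168.145.58", "192.168.145.69")
             + addresses_in_range("192.168.145.70", "192.168.145.75"))

# Two ESXi hosts in the Compute DRS cluster
print(available, dcm_addresses_needed(2))  # 18 14 -> enough headroom
```

The same check scales to larger clusters: each additional ESXi host added to the DRS cluster raises the requirement by one DCM address for its OVSVAPP.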
The activation script will create two Distributed Switches, Cloud-Data-Trunk_<datacenter name> and CS-OVS-Trunk_<datacenter name>. The Cloud-Data-Trunk_<datacenter name> switch is connected to the Cloud Data Trunk network using the Cloud Trunk Interface(s) specified in the Activation form. The CS-OVS-Trunk_<datacenter name> switch has no dvUplink connections.

Figure 6: vCenter Cloud Data Trunk vDS

The results of the activation script can be viewed on the HP Helion CloudSystem Management appliance (ma1) by viewing the eon log file (/var/log/eon/eon.log).

Microsoft Hyper-V

HP Helion CloudSystem 9.0 supports Microsoft Windows Server 2012 R2 Datacenter edition with Microsoft Hyper-V as a Nova compute node resource. The Microsoft Windows 2012 R2 servers can be configured as standalone nodes or in a Microsoft Failover Cluster. In this example we have configured a Microsoft Failover Cluster as shown in Figure 7.

Figure 7: Microsoft Hyper-V Failover Cluster
In the sample configuration we have configured a shared LUN (Cluster Disk 1) as a Cluster Shared Volume. This is where our Hyper-V virtual machine instances will be deployed.

Figure 8: Microsoft Cluster Shared Volume

The virtual machine instances are created on the Cluster Shared Volume (CSV) at C:\ClusterStorage\Volume1\instance-<ID>. The vhdx disk that is used to create the instances is copied from the Glance repository to C:\ClusterStorage\Volume1\_base_WIN01 and _base_WIN02. Notice the .vhdx file name; it corresponds to the Glance image ID assigned to the image stored in the Glance repository.

Figure 9: Microsoft Hyper-V HP Helion CloudSystem Instance Location
The Microsoft Hyper-V cluster was configured with four network connections, three of which are used for connectivity by HP Helion CloudSystem 9.0. The DCM connection is connected to the HP Helion CloudSystem 9.0 Datacenter Management Network, the CMN connection is connected to the Cloud Management Network, and the DataTrunkInterface is connected to the Cloud Data Trunk. The DataTrunkInterface will be used by the Microsoft Hyper-V virtual switch that is created when the compute node is activated.

Figure 10: Microsoft Hyper-V Compute Node Network Connections

Activation of a Windows 2012 R2 Compute node

Before activating a Windows 2012 R2 Compute node, FreeRDP-Webconnect must be downloaded and copied to the HP Helion CloudSystem management appliance. Copy FreeRDPWebConnect.msi to /var/csm/www/msi/ on the HP Helion CloudSystem management appliance ma1. Refer to the HP Helion CloudSystem 9.0 Administrator Guide for complete instructions on copying the FreeRDP installer and setting the appropriate file permissions. FreeRDP is used by HP Helion CloudSystem to enable RDP access to instances on Microsoft Hyper-V hosts using the Launch Console feature. The infrastructure administrator is responsible for security and other updates to FreeRDP.

Use the HP Helion CloudSystem 9.0 Operations Console (Menu > Compute > Compute Nodes) to view the individual Windows 2012 R2 compute nodes. The individual nodes are added automatically to the HP Helion CloudSystem 9.0 Compute Nodes section when they receive a DHCP address from the Cloud Management Network. When a Windows 2012 R2 compute node is added it will have an initial state of IMPORTED until it is activated. Activate the compute node by selecting it, clicking the ellipsis to the right of the selected node, and choosing Actions > Activate. You will be presented with the screen shown in Figure 11.
Complete this form by providing user account and password information for the Windows 2012 R2 compute node and the name of the interface that is connected to the Cloud Data Trunk. In our example the Cloud Data Trunk interface is named DataTrunkInterface.
Figure 11: Microsoft Hyper-V Compute Node Activation in HP Helion CloudSystem

The activation script performs the following actions on the Microsoft Hyper-V Compute Node:

- Creates a virtual switch on the Windows 2012 R2 Hyper-V compute node with a default name of cs-datatrunk
- Installs the HP Helion CloudSystem software in C:\Program Files (x86)\hp\cloudsystem
- Creates and starts the Neutron Agent Service and the Nova Compute Service

The results of the activation script can be viewed on the HP Helion CloudSystem Management appliance (ma1) by viewing the eon log file (/var/log/eon/eon.log).

Linux KVM

KVM Compute nodes support either Red Hat Enterprise Linux (RHEL) 6.5 or RHEL 7.0. In this example we are using a compute node with RHEL 6.5. This document provides a high-level overview of the installation and activation tasks required for a RHEL 6.5 compute node; for complete detailed instructions please refer to the HP Helion CloudSystem 9.0 Administrator Guide.

Install RHEL 6.5 and select Virtual Host during the installation. Create a local yum repository on the compute node using the RHEL installation DVD and install sysfsutils and sg3_utils. The RHEL KVM compute node requires three interfaces connected to the following CloudSystem networks:

- Datacenter Management Network
- Cloud Data Trunk
- Cloud Management Network

In our example we have the following connections:

- eth0 is connected to the Datacenter Management Network
- eth1 is connected to the Cloud Data Trunk
- eth2 is connected to the Cloud Management Network

A DHCP identifier must be set on the DHCP client before the KVM compute node will be listed in the HP Helion CloudSystem 9.0 OPS Console under Compute Nodes. To configure the identifier, add the following lines to the /etc/dhcp/dhclient.conf file on the compute node:

send dhcp-client-identifier "kvm";
option is_kvm_node code 214 = string;
send is_kvm_node "yes";
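Since a missing identifier line is easy to overlook, a small validation helper can confirm that all three lines are present before restarting the DHCP client. This is a hypothetical helper for illustration only, not part of CloudSystem:

```python
# The three dhclient.conf lines required for KVM compute node discovery (from above)
REQUIRED_LINES = [
    'send dhcp-client-identifier "kvm";',
    'option is_kvm_node code 214 = string;',
    'send is_kvm_node "yes";',
]

def missing_dhclient_lines(conf_text: str) -> list:
    """Return any required KVM identifier lines absent from dhclient.conf contents."""
    present = {line.strip() for line in conf_text.splitlines()}
    return [line for line in REQUIRED_LINES if line not in present]

# Example: a configuration that is missing the last identifier line
sample = 'send dhcp-client-identifier "kvm";\noption is_kvm_node code 214 = string;\n'
print(missing_dhclient_lines(sample))  # ['send is_kvm_node "yes";']
```

In practice the helper would be fed the contents of /etc/dhcp/dhclient.conf read from the compute node.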
Make the following changes to the configuration of the interface on the Cloud Management Network, in our example eth2:

DEVICE="eth2"
BOOTPROTO="dhcp"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
PERSISTENT_DHCLIENT=1
DHCP_HOSTNAME=<NON-FQDN>

Ensure the KVM Compute node has sufficient disk space available to host the HP Helion CloudSystem instances that will be deployed. The default location for HP Helion CloudSystem instances on the KVM compute node is /var/lib/nova/instances. In our example we have mounted a 1TB volume.

Figure 12: Mount Point for RHEL Compute Node Instances

After following the instructions in the HP Helion CloudSystem 9.0 Administrator Guide on creating KVM compute nodes, the Compute node will be available for activation in the HP Helion CloudSystem OPS Console. Complete the Activation form by specifying the following parameters:

- Hypervisor Type: KVM
- User Name: root
- Password:
- Teamed NICs: one or more network interfaces connected to the Cloud Data Trunk

Figure 13: RHEL Compute Node Activation in HP Helion CloudSystem

Select Activate to launch the activation script and activate the KVM Compute Node. The results of the activation script can be viewed on the HP Helion CloudSystem Management appliance (ma1) by viewing the eon log file (/var/log/eon/eon.log).
Monitoring hypervisors in HP Helion CloudSystem 9.0

Using the HP Helion CloudSystem OPS CONSOLE Compute Nodes view, the administrator can view information about the compute node resources available to HP Helion CloudSystem. In Figure 14 we can see that there are four compute nodes activated: two Windows 2012 Hyper-V servers (WIN01 and WIN02) configured in a Microsoft Failover Cluster, a single ESXi DRS cluster named Compute that consists of two ESXi 5.5u2 servers, and a single RHEL 6.5 KVM server. The state of these compute nodes shows ACTIVATED. Prior to activation the state shows as IMPORTED, and during activation the state shows as ACTIVATING. If the activation process fails, the compute node state reverts to IMPORTED. Compute node failures can be analyzed by viewing the eon log file on the management appliance (ma1), /var/log/eon/eon.log.

Figure 14: HP Helion CloudSystem OPS Console Compute Nodes

Using the HP Helion OpenStack User Portal (Horizon) interface we can view the hypervisors and compute resources available to CloudSystem 9.0. This view provides information on the resources available to each hypervisor configured for instance deployment in HP Helion CloudSystem.

Figure 15: HP Helion OpenStack User Portal Hypervisors
The hypervisors can also be listed from the command line of the cloud controllers (cmc, cc1, and cc2) by entering nova hypervisor-list.

root@cmc:~# nova hypervisor-list
+----+-----------------------+
| ID | Hypervisor hostname   |
+----+-----------------------+
| 7  | rhel01.cloud.internal |
| 13 | domain-c878(compute)  |
| 16 | WIN01                 |
| 19 | WIN02                 |
+----+-----------------------+

Details of each hypervisor can be retrieved by entering nova hypervisor-show <ID>, using the ID for the hypervisor returned from the nova hypervisor-list command.

nova hypervisor-show <ID>

Working with images

HP Helion CloudSystem 9.0 supports several image formats. In this document we demonstrate how to deploy VMDK, VHDX, and QCOW2 image formats with both Windows and Linux operating systems. Links to publicly available image sources can be found at the URL below.

http://docs.openstack.org/image-guide/content/ch_obtaining_images.html

Image formats can be converted from one format to another using the qemu-img convert command; for example, a qcow2 image can be converted to a vmdk image.

qemu-img convert [-c] [-p] [-f fmt] [-t cache] [-O output_fmt] [-o options] [-S sparse_size] filename [filename2 [...]] output_filename
Uploading images in Glance

Use the HP Helion OpenStack User Portal (Horizon) to upload images into the Glance repository. Select Admin > Images > Create Image.

Figure 16: Create Image Form

Use the Create An Image form to specify the following information about the image:

- Name
- Description
- Source
- Format
- OS Type

Images can also be created from the command line using the glance command:

glance image-create --name Ubuntu-1nic --is-public=true --container-format=bare --disk-format=vmdk --property vmware_disktype="sparse" --property vmware_adaptertype="lsilogic" < Ubuntu-12.04-1nic-password.vmdk
An instance is deployed onto the appropriate hypervisor compute resource depending upon the disk image format of the image; in our example we've used VHDX, VMDK, and QCOW2. Some hypervisors can support multiple image formats. For example, KVM hypervisors can support both QCOW2 and VMDK image formats. To ensure that an instance based on an image is deployed to a specific hypervisor, the properties of the image can be modified to specify the hypervisor type. Use the glance commands shown below to add the hypervisor_type property to an image. In this example we have updated existing Glance images by adding the hypervisor_type property and setting it to hyperv, qemu, or vmware.

glance image-update --property hypervisor_type=hyperv win2012-vhdx
glance image-update --property hypervisor_type=qemu win2012-qcow2
glance image-update --property hypervisor_type=vmware Ubuntu-1nic

The HP Helion OpenStack User Portal (Horizon) can be used to update the metadata of an image by selecting Admin > Images > (select the image) > Edit > Update Metadata.

Figure 17: Image Metadata

The hypervisor_type property can also be specified when creating the image from the command line as shown below:

glance image-create --name vivid-server --is-public=true --container-format=bare --disk-format=qcow2 --property hypervisor_type=qemu < vivid-server-cloudimg.img
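Conceptually, what the hypervisor_type property enables is an image-to-host match in the Nova scheduler (the ImagePropertiesFilter). The sketch below illustrates only the matching idea; it is not Nova's actual implementation, and the host list is hypothetical, mirroring the environment in this paper:

```python
def filter_hosts(hosts, image_properties):
    """Keep only hosts whose hypervisor matches the image's hypervisor_type.
    Images without the property remain eligible for every host."""
    wanted = image_properties.get("hypervisor_type")
    if wanted is None:
        return list(hosts)
    return [h for h in hosts if h["hypervisor_type"].lower() == wanted.lower()]

# Hypothetical compute hosts (names echo the environment in this paper)
hosts = [
    {"name": "WIN01", "hypervisor_type": "hyperv"},
    {"name": "rhel01.cloud.internal", "hypervisor_type": "QEMU"},
    {"name": "domain-c878(compute)", "hypervisor_type": "vmware"},
]

# A qcow2 image tagged hypervisor_type=qemu only lands on the KVM host
print([h["name"] for h in filter_hosts(hosts, {"hypervisor_type": "qemu"})])
# ['rhel01.cloud.internal']
```

Note that the comparison is case-insensitive, which is why tagging an image with either qemu or QEMU selects the same KVM host.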
A list of images can be viewed using the HP Helion OpenStack User Portal (Horizon).

Figure 18: Images View

Images can also be listed using the glance command as shown below.

glance image-list

Figure 19: Glance Image List

The glance image-show <ID> command provides details about a specific image:

glance --insecure image-show 55dfcc4a-586b-438f-99ab-a5f8be3a567e

Figure 20: Glance Image Detail
Deploying instances

Use the HP Helion OpenStack User Portal (Horizon) interface to launch an instance by selecting Project > Compute > Instances > Launch Instance. Complete the Launch Instance form and the associated tabs.

Figure 21: HP Helion OpenStack User Portal Launch Instance

Instances can also be created at the command line using the nova command as shown below.

nova boot --image "vivid-server" --flavor m1.small --nic net-id=23e4224b-6643-479d-85c2-dd98ecd1b6f6 vivid-test

The deployed instances can be viewed from the HP Helion OpenStack User Portal (Horizon) Instances view.

Figure 22: HP Helion OpenStack User Portal Instances
Additionally, the instances can be viewed from the command line using the nova list command.

Figure 23: List instances with Nova

The nova show <ID> command can be used to view additional details about an instance.

Figure 24: Instance detail with nova show
The network topology view below shows the three virtual machine instances all running on the same tenant VLAN, Tenant 1. Each instance is running on a different hypervisor.

Figure 25: HP Helion OpenStack User Portal Network Topology

HP Helion CloudSystem Enterprise Cloud Service Automation

Now that the images have been configured and tested by deploying instances onto the respective HP Helion CloudSystem Compute Nodes, the images can be used in HP Helion CloudSystem Enterprise service designs. Next we'll demonstrate how instances can be launched to the different hypervisors by creating a simple CSA Topology Service Design and a CSA Marketplace Portal subscription.

Topology Service Design

Using the CSA admin portal we will create a topology service design to deploy instances to multiple supported hypervisors. From the CSA admin portal select Designs > Topology Designer > Create. Provide a Display Name for your Topology Design. In your topology design select the Designer tab; this provides a template for adding components to your design. Drag the following components from the component list to your design form:

- OpenStack Server
- OpenStack Network Interface
- OpenStack Private Network
- OpenStack Security Group
- OpenStack Floating IP
- OpenStack External Network
- OpenStack Router
Complete the properties for each component. The information for each component can be obtained using the HP Helion OpenStack User Portal (Horizon) or from the command line using OpenStack-based commands. In our sample configuration in Figure 26, the keypairname value is set to csakeypair. This key pair is created by logging into the HP Helion OpenStack User Portal as the enterpriseinternaluser and generating a key pair by selecting Project > Access & Security > Key Pairs > Create Key Pair. The service design must use the enterpriseinternaluser key pair when testing the design later in this section.

Each component requires a relationship, which is created by selecting the component and dragging a line to the corresponding component. The relationship requirements are highlighted by an alert showing a red triangle with an exclamation mark, indicating a relationship is required. Hovering over the alert displays the required components and relationships.

Figure 26: Cloud Service Automation Topology Design

Our sample service design contains three CSA groups, created by selecting the Manage Groups button at the bottom of the Topology Designer page. The groups are named Hyper-V, VMware, and KVM. Each group contains an OpenStack Server component, an OpenStack Floating IP component, and an OpenStack Network Interface component. The images specified in the OpenStack Server component of each group are configured to deploy instances to Microsoft Hyper-V, VMware, and KVM respectively. Components are added to CSA groups by selecting the orange dot on the component and then the edit property icon. This displays the Edit Component form where the Group can be selected. The OpenStack Security Group, OpenStack Private Network, OpenStack External Network, and OpenStack Router components are shared by all three CSA groups. By using CSA groups we can define operations that are performed on groups of servers.
For example, in this sample service design we can choose to deploy a different number of virtual machine instances to each group: three Microsoft Hyper-V instances, two VMware instances, and one KVM instance. We could also specify the application software that is deployed to each group, creating a multi-tiered application.
While still in the Topology Design, test the service design by selecting the Test tab and choosing the Test Run button. A successful test displays the Service Instance Status as Online, as shown in Figure 27.

Figure 27: Topology Design Test

The instances can also be viewed in the HP Helion OpenStack User Portal (Horizon) by selecting Project > Compute > Instances.

Figure 28: HP Helion OpenStack User Portal Topology Design Deployed Instances

With administrative permissions to the HP Helion CloudSystem Project demo, we can view where the instances are deployed by selecting Admin > System > Instances. We can see in Figure 29 that our instances are deployed to the following hosts:

- win01.hpiscmgmt.local
- vcenter-compute
- rhel01.hpiscmgmt.local

Figure 29: HP Helion OpenStack User Portal Instances

Now that our CSA Topology service design has been successfully tested, we can publish it to the HP Helion CloudSystem Marketplace Portal. While still in the Topology Design, choose Publish. Once a design has been published no further changes can be made to it. If a change is required, a new version must be created and published.
Create and Publish Service Offering

Next, create an Offering from the CSA Admin portal by selecting Offerings > Create. Complete the Create Offering form: select the service design that was created in the Topology Service Designer, provide a display name and version number, and choose Create.

Figure 30: Create Offering

Next, publish the Offering to a Marketplace Portal catalog. While still in the Offering, select the Publishing tab and choose Publish. In this example we publish our offering to the Global Shared Catalog. Select a category for the offering (we selected Application Servers) and choose Publish.

Figure 31: Publish Offering

The Offering is now available in the Marketplace Portal Global Shared Catalog.

Figure 32: Marketplace Portal
To deploy the service, select the Offering, choose Check out, and then choose Submit Request.

Figure 33: Marketplace Portal Service Checkout

Once the service has deployed successfully it is displayed as Online in the My Services view of the Marketplace Portal.

Figure 34: Marketplace Portal Online Service
By selecting the service and viewing the Service Topology, we can see the Topology Design of the deployed service. This view shows that three servers were created, one on each hypervisor, and each server was assigned an OpenStack Network Interface connected to the OpenStack Private Network and protected by the OpenStack Security Group. The three servers are each assigned an OpenStack Floating IP address, and these are connected to the OpenStack Router. The OpenStack Private Network is also connected to the OpenStack Router, and the OpenStack Router is connected to the OpenStack External Network.

Figure 35: Marketplace Portal Service Topology

Summary

In this white paper we have provided an overview of how to configure the supported hypervisors, Microsoft Hyper-V, KVM, and VMware, as compute resources in an HP Helion CloudSystem 9.0 environment. We have also demonstrated how to configure and upload images in the different formats that can be used with each of the supported hypervisors. The hypervisors can be seamlessly integrated into an HP Helion CloudSystem 9.0 Enterprise Topology design, allowing organizations to create and deploy service designs in a heterogeneous hypervisor environment. Finally, this document demonstrated how to create and publish an HP Helion CloudSystem Enterprise Topology Service Design to deploy HP Helion CloudSystem Enterprise instances across a heterogeneous multi-hypervisor environment using the HP Helion CloudSystem Enterprise Marketplace Portal.
For more information

For more information on HP Helion CloudSystem please visit:

HP Helion CloudSystem Solutions, hp.com/go/cloudsystem
HP Helion CloudSystem Evaluation, hp.com/go/trycloudsystem

To help us improve our documents, please provide feedback at hp.com/solutions/feedback.

Sign up for updates: hp.com/go/getupdated

Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows, and Windows Server are trademarks of the Microsoft group of companies. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. VMware is a registered trademark or trademarks of VMware, Inc. in the United States and/or other jurisdictions. The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed, or sponsored by the OpenStack Foundation, or the OpenStack community.

4AA6-2500ENW, October 2015