Red Hat Cloud Foundations Reference Architecture. Edition One: Private IaaS Clouds


Red Hat Cloud Foundations Reference Architecture
Edition One: Private IaaS Clouds
Version 1.0
April 2010

Red Hat Cloud Foundations Reference Architecture
Edition One: Private IaaS Clouds

1801 Varsity Drive
Raleigh NC USA
Phone:
Phone:
Fax:
PO Box
Research Triangle Park NC USA

Linux is a registered trademark of Linus Torvalds. Red Hat, Red Hat Enterprise Linux and the Red Hat "Shadowman" logo are registered trademarks of Red Hat, Inc. in the United States and other countries. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. UNIX is a registered trademark of The Open Group. Intel, the Intel logo, Xeon and Itanium are registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. All other trademarks referenced herein are the property of their respective owners.

© 2010 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available online). The information contained herein is subject to change without notice. Red Hat, Inc. shall not be liable for technical or editorial errors or omissions contained herein. Distribution of modified versions of this document is prohibited without the explicit permission of Red Hat, Inc. Distribution of this work or derivative of this work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from Red Hat, Inc.

The GPG fingerprint of the key is: CA B D6 9D FC 65 F6 EC C CD DB 42 A6 0E 2

Table of Contents

1 Executive Summary
2 Cloud Computing: Definitions
  2.1 Essential Characteristics
    2.1.1 On-demand Self-Service
    2.1.2 Resource Pooling
    2.1.3 Rapid Elasticity
    2.1.4 Measured Service
  2.2 Service Models
    2.2.1 Cloud Infrastructure as a Service (IaaS)
    2.2.2 Cloud Platform as a Service (PaaS)
    2.2.3 Cloud Software as a Service (SaaS)
    2.2.4 Examples of Cloud Service Models
  2.3 Deployment Models
    2.3.1 Private Cloud
    2.3.2 Public Cloud
    2.3.3 Hybrid Cloud
    2.3.4 Community Cloud
3 Red Hat and Cloud Computing
  3.1 Evolution, not Revolution - A Phased Approach to Cloud Computing
  3.2 Unlocking the Value of the Cloud
  3.3 Redefining the Cloud
    3.3.1 Deltacloud
4 A High Level Functional View of Cloud Computing
  4.1 Cloud User / Tenant
    4.1.1 User Log-In
    4.1.2 VM Deployment & Monitoring
    4.1.3 VM Orchestration & Discovery
  4.2 Cloud Provider / Administrator
    4.2.1 Tenant Account Management
    4.2.2 Virtualization Substrate Management
    4.2.3 Software Life-Cycle Management
    4.2.4 Operations Management
    4.2.5 Cloud Provider Functionality - Creating/Managing an IaaS Cloud Infrastructure
  4.3 Multi-Cloud Configurations
5 Red Hat Cloud: Software Stack and Infrastructure Components
  5.1 Red Hat Enterprise Linux
  5.2 Red Hat Enterprise Virtualization (RHEV) for Servers
  5.3 Red Hat Network (RHN) Satellite
    5.3.1 Cobbler
  5.4 JBoss Enterprise Middleware
    JBoss Enterprise Application Platform (EAP)
    JBoss Operations Network (JON)
  Red Hat Enterprise MRG Grid
6 Proof-of-Concept System Configuration
  Hardware Configuration
  Software Configuration
  Storage Configuration
  Network Configuration
7 Deploying Cloud Infrastructure Services
  Network Gateway
  Install First Management Node
  Create Satellite System
    Create Satellite VM
    Configure DHCP
    Configure DNS
    Install and Configure RHN Satellite Software
    Configure Multiple Organizations
    Configure Custom Channels for RHEL 5.5 Beta
  Cobbler
    Configure Cobbler
    Configure Cobbler Management of DHCP
    Configure Cobbler Management of DNS
    Configure Cobbler Management of PXE
  Build Luci VM
  Install Second Management Node
  Configure RHCS
  7.7 Configure VMs as Cluster Services
    Create Cluster Service of Satellite VM
    Create Cluster Service of Luci VM
  Configure NFS Service (for ISO Library)
  Create RHEV Management Platform
    Create VM
    Create Cluster Service of VM
    Install RHEV-M Software
    Configure the Data Center
8 Deploying VMs in Hypervisor Hosts
  Deploy RHEV-H Hypervisor
  Deploy RHEL Guests (PXE / ISO / Template) on RHEV-H Host
    Deploying RHEL VMs using PXE
    Deploying RHEL VMs using ISO Library
    Deploying RHEL VMs using Templates
  Deploy Windows Guests (ISO / Template) on RHEV-H Host
    Deploying Windows VMs using ISO Library
    Deploying Windows VMs using Templates
  Deploy RHEL + KVM Hypervisor Host
  Deploy RHEL Guests (PXE / ISO / Template) on KVM Hypervisor Host
    Deploying RHEL VMs using PXE
    Deploying RHEL VMs using ISO Library
    Deploying RHEL VMs using Templates
  Deploy Windows Guests (ISO / Template) on KVM Hypervisor Host
    Deploying Windows VMs using ISO Library
    Deploying Windows VMs using Templates
9 Deploying Applications in RHEL VMs
  Deploy Application in RHEL VMs
    Configure Application and Deploy Using Satellite
    Deploy Application Using Template
  Scale Application
10 Deploying JBoss Applications in RHEL VMs
  Deploy JON Server in Management Services Cluster
  Deploy JBoss EAP Application in RHEL VMs
    Deploy Using Satellite
    Deploy Using Template
  Scale JBoss EAP Application
11 Deploying MRG Grid Applications in RHEL VMs
  Deploy MRG Manager in Management Services Cluster
  Deploy MRG Grid in RHEL VMs
  Deploy MRG Grid Application
  Scale MRG Grid Application
12 Cloud End-User Use-Case Scenarios
13 References
Appendix A: Configuration Files
  A.1 Satellite answers.txt
  A.2 Cobbler settings
  A.3 rhq-install.sh
  A.4 Configuration Channels Files

1 Executive Summary

Red Hat's suite of open source software provides a rich infrastructure for cloud providers to build public/private cloud offerings. This Volume 1 guide for deploying the Red Hat infrastructure for a private cloud describes the foundation for building a Red Hat private cloud:

1. Deployment of infrastructure management services, e.g., Red Hat Network (RHN) Satellite, Red Hat Enterprise Virtualization (RHEV) Manager (RHEV-M), DNS service, DHCP service, PXE server, NFS server for ISO images, JON, and MRG Manager - most of them installed in virtual machines (VMs) in a Red Hat Cluster Suite (RHCS) cluster for high availability.
2. Deployment of a farm of RHEV host systems (either in the form of RHEV Hypervisors or as RHEL+KVM) to run tenants' VMs.
3. Demonstration of sample RHEL, JBoss, and MRG Grid applications in the tenant VMs.

Section 2 presents some commonly used definitions of cloud computing.

Section 3 discusses the phased adoption of cloud computing by enterprises, from the use of virtualization, to the deployment of internal clouds, leading to fully functional utility computing using private and public clouds.

Section 4 describes a high-level functional view of cloud computing. The model is described in terms of:
- Cloud administrator/provider actions and flows - to create and maintain the cloud infrastructure
- Cloud user/tenant actions and flows - to deploy and manage applications in the cloud

Section 5 describes the software infrastructure for the Red Hat Cloud.

Section 6 describes the configuration used for the proof-of-concept.

Section 7 is a detailed step-by-step guide for deploying cloud infrastructure management services in a Red Hat Cluster Suite (RHCS) cluster for high availability.

Section 8 is a detailed step-by-step guide for deploying RHEV host systems to run tenants' VMs.

Section 9 is a detailed step-by-step guide for deploying and scaling a sample RHEL application in tenant VMs.

Section 10 is a detailed step-by-step guide for deploying and scaling a sample JBoss application in tenant VMs.

Section 11 is a detailed step-by-step guide for deploying and scaling a sample MRG Grid application in tenant VMs.

Section 12 describes some end-user use-case scenarios of the cloud infrastructure outlined in Sections 6 through 11 above.

Section 13 lists referenced documents.

Future versions of the Red Hat Cloud Reference Architecture will take these concepts further:
- Red Hat Cloud Reference Architecture: Adding self-service
- Red Hat Cloud Reference Architecture: Managing mixed private clouds
- Red Hat Cloud Reference Architecture: Adding public clouds
- Red Hat Cloud Reference Architecture: Creating large-scale clouds

2 Cloud Computing: Definitions

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models. The following definitions have been proposed by the National Institute of Standards and Technology (NIST) in its cloud computing definition document.

2.1 Essential Characteristics

Cloud computing creates an illusion of infinite computing resources available on demand, thereby eliminating the need for cloud computing users to plan far ahead for provisioning.

2.1.1 On-demand Self-Service

A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service's provider.

2.1.2 Resource Pooling

The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

2.1.3 Rapid Elasticity

Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

2.1.4 Measured Service

Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
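As a concrete illustration of the metering idea, the sketch below aggregates a tenant's usage records against per-unit rates. The meter names and rates here are hypothetical assumptions for illustration, not drawn from any Red Hat product.

```python
# Illustrative sketch of a "measured service" metering function.
# Meter names and per-unit rates are hypothetical.
RATES = {
    "vm_hours": 0.10,          # per VM-hour
    "storage_gb_hours": 0.01,  # per GB-hour of allocated storage
    "network_gb": 0.05,        # per GB transferred
}

def bill(usage: dict) -> float:
    """Sum metered usage against per-unit rates, ignoring unknown meters."""
    return round(sum(RATES.get(meter, 0.0) * qty for meter, qty in usage.items()), 2)

tenant_usage = {"vm_hours": 100, "storage_gb_hours": 2000, "network_gb": 10}
print(bill(tenant_usage))  # 100*0.10 + 2000*0.01 + 10*0.05 = 30.5
```

The same aggregation shape works whether the meters are read from a hypervisor, a storage array, or a network counter; only the rate table changes.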

2.2 Service Models

2.2.1 Cloud Infrastructure as a Service (IaaS)

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and invoke arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

2.2.2 Cloud Platform as a Service (PaaS)

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

2.2.3 Cloud Software as a Service (SaaS)

The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

2.2.4 Examples of Cloud Service Models

Figure 1

2.3 Deployment Models

2.3.1 Private Cloud

The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Figure 2

2.3.2 Public Cloud

The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Figure 3

2.3.3 Hybrid Cloud

The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., load-balancing between clouds).

Figure 4

2.3.4 Community Cloud

The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

3 Red Hat and Cloud Computing

3.1 Evolution, not Revolution - A Phased Approach to Cloud Computing

While cloud computing requires virtualization as an underlying and essential technology, it is inaccurate to equate cloud computing with virtualization. The figure below displays the different levels of abstraction addressed by virtualization and cloud computing respectively.

Figure 5: Levels of Abstraction

The following figure illustrates a phased approach to technology adoption, starting with server consolidation using 'virtualization', then automating large deployments of virtualization within an enterprise using 'private clouds', and finally extending private clouds to hybrid environments leveraging public clouds as a utility.

Figure 6: Phases of Technology Adoption in the Enterprise

3.2 Unlocking the Value of the Cloud

Red Hat's approach does not lock an enterprise into one vendor's cloud stack; instead, it offers a rich set of solutions for building a cloud. These can be used alone or in conjunction with components from third-party vendors to create the optimal cloud to meet unique needs.

Cloud computing is one of the most important shifts in information technology to occur in decades. It has the potential to improve the agility of organizations by allowing them to:
1. Enhance their ability to respond to opportunities,
2. Bond more tightly with customers and partners, and
3. Reduce the cost to acquire and use IT in ways never before possible.

Red Hat is proud to be a leader in delivering the infrastructure necessary for reliable, agile, and cost-effective cloud computing. Red Hat's cloud vision is unlike that of any other IT vendor. Red Hat recognizes that IT infrastructure is - and will continue to be - composed of pieces from many different hardware and software vendors. Red Hat enables the use and management of these diverse assets as one cloud, making the cloud an evolution, not a revolution.

Red Hat's vision spans the entire range of cloud models:
- Building an internal Infrastructure as a Service (IaaS) cloud, or seamlessly using a third party's cloud
- Creating new Linux, LAMP, or Java applications online, as a Platform as a Service (PaaS)
- Providing the easiest path to migrating applications to attractive Software as a Service (SaaS) models

Red Hat's open source approach to cloud computing protects existing investment and manages diverse investments as one cloud - whether Linux or Windows; Red Hat Enterprise Virtualization, VMware, or Microsoft Hyper-V; Amazon EC2 or another vendor's IaaS; .Net or Java; JBoss or WebSphere; x86 or mainframe.

3.3 Redefining the Cloud

Cloud computing is the first major market wave where open source technologies are built in from the beginning, powering the vast majority of early clouds. Open source products that make up Red Hat's cloud infrastructure include:
- Red Hat Enterprise Virtualization
- Red Hat Enterprise Linux
- Red Hat Network Satellite
- Red Hat Enterprise MRG Grid
- JBoss Enterprise Middleware

In addition, Red Hat is leading work on and investing in several open source projects related to cloud computing. As these projects mature, and after they undergo rigorous testing, tuning, and hardening, the ideas from many of them may be incorporated into future versions of the Red Hat cloud infrastructure. These projects include:
- Deltacloud - Abstracts the differences between clouds
- BoxGrinder - Makes it easy to grind out server configurations for a multitude of virtualization fabrics
- Cobbler - Installation server for rapid setup of network installation environments
- Condor - Batch system managing millions of machines worldwide
- CoolingTower - Simple application-centric tool for deploying applications in the cloud
- Hail - Umbrella cloud computing project for cloud services
- Infinispan - Extremely scalable, highly available data grid platform
- Libvirt - Common, generic, and scalable layer to securely manage domains on a node
- Spice - Open remote computing solution for interaction with virtualized desktop devices
- Thincrust - Tools to build appliances for the cloud

3.3.1 Deltacloud

The goal of Deltacloud is simple: making many clouds act as one. Deltacloud aims to bridge the differences between diverse silos of infrastructure, allowing them to be managed as one. Organizations today may have different clouds built on, for example, Red Hat Enterprise Virtualization, VMware, or Hyper-V. The Deltacloud project is designed to make them manageable as one cloud, one pool of resources. Or organizations may wish to use internal cloud capacity as well as Amazon EC2, and perhaps capacity from other IaaS providers. The Deltacloud project is designed to make these manageable as one.

Today each IaaS cloud presents a unique API that developers and ISVs need to write to in order to consume the cloud service. The Deltacloud effort is creating a common, REST-based API, such that developers can write once and manage anywhere. Deltacloud is a cloud broker, so to speak, with drivers that map the API to both public clouds like EC2 and private virtualized clouds based on VMware and Red Hat Enterprise Linux with integrated KVM virtualization technology. The API can be test-driven with the self-service web console, which is also a part of the Deltacloud effort. While a young project, the response has been overwhelming, and the potential impact of letting users, developers, and IT consume cloud services via a common set of tools is enormous. To learn more, visit the Deltacloud project website.

Red Hat's unique open source development model means that one can observe, participate in, and improve the development of these technologies. It is done in the open to ensure interoperability and compatibility. It yields uncompromising, stable, reliable, secure, enterprise-class infrastructure software, which powers the world's markets, businesses, governments, and defense organizations. The power of this model is being harnessed to drive the cloud forward.
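The "write once, manage anywhere" idea of a common REST-based API can be sketched as follows. The broker host name below is a placeholder, and the `/api/instances` collection layout follows the general pattern of the Deltacloud API rather than any specific release.

```python
# Sketch of addressing a Deltacloud-style REST broker.
# BASE is a placeholder host; the broker's drivers would map these
# calls onto EC2, RHEV, VMware, etc.
from urllib.parse import urljoin, urlencode

BASE = "http://broker.example.com:3001/api/"  # hypothetical broker address

def collection_url(collection: str, **params) -> str:
    """Build the URL for a Deltacloud-style collection, e.g. instances."""
    url = urljoin(BASE, collection)
    return f"{url}?{urlencode(params)}" if params else url

# The same client-side call shape addresses any back-end cloud:
print(collection_url("instances"))
print(collection_url("instances", state="RUNNING"))
```

Because the client only ever sees this one URL scheme, swapping the underlying cloud is a broker-side driver change, not a client rewrite.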

4 A High Level Functional View of Cloud Computing

The Red Hat infrastructure for cloud computing is described in terms of:
1. Cloud administrator/provider interfaces to create and maintain the cloud infrastructure
2. Cloud user/tenant interfaces to deploy and manage applications in the cloud

Figure 7: Cloud Provider & Tenants

Note: Most cloud architecture write-ups only describe the cloud user interface. Since this reference architecture is intended to help enterprises set up private clouds using the Red Hat infrastructure, this document provides an overview of the cloud provider interfaces in addition to the cloud tenant interfaces.

Figure 8: Cloud Components & Interfaces

4.1 Cloud User / Tenant

The cloud user (or tenant) uses the user portal interfaces to deploy and manage their applications on top of a cloud infrastructure offered by a cloud provider. Three types of user portal functionality are covered at a very high level in this section:
1. User Log-In
2. VM Deployment & Monitoring
3. VM Orchestration & Discovery

4.1.1 User Log-In

User Account Management enables cloud users to create new accounts, log into existing accounts, and gain access to their (active or dormant) VMs. The user portal supports all these functions via a web/API interface that supports multi-tenancy, i.e., each user (or tenant) has secure access to only their own VMs and is isolated from VMs they do not own.

4.1.2 VM Deployment & Monitoring

The workhorses in a cloud are virtual machines loaded with the executable images (templates) of the application stack, with access to application data/storage, network connections, and a user portal. The user portal enables functions like import/export/backup of images in the VM, adding/editing VM resources, and state control of the VM via commands such as run, shutdown, and suspend.

4.1.3 VM Orchestration & Discovery

There are many patterns of how a cloud is used as a utility. For example, one IaaS pattern may be where the cloud provides fast provisioning of pre-configured virtual machines. Other details of usage patterns may involve application data persisting across VM invocations (stateful) or not persisting across VM invocations (stateless), or IP connections persisting across VM invocations or not. If a user starts a group of VMs running client-server applications, the virtual machines running the clients should be able to locate the virtual machines running the servers. VM orchestration and discovery services are used to organize VMs into groups of cooperating virtual machines by assigning parameters to VMs that can be used to customize each VM instance according to its role.
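The role-parameter idea above can be sketched with a minimal discovery helper. The field names and role values here are illustrative assumptions, not an actual RHEV or user-portal schema.

```python
# Minimal sketch of VM orchestration metadata: each VM instance carries
# a "role" parameter, and clients discover servers by querying that role.
# Field names and roles are illustrative assumptions.
from collections import defaultdict

def group_by_role(vms):
    """Index VM instances by their assigned role parameter."""
    groups = defaultdict(list)
    for vm in vms:
        groups[vm["role"]].append(vm["name"])
    return dict(groups)

fleet = [
    {"name": "vm01", "role": "db-server"},
    {"name": "vm02", "role": "app-client"},
    {"name": "vm03", "role": "app-client"},
]
# A client VM locates its servers by asking for the server role:
print(group_by_role(fleet)["db-server"])  # ['vm01']
```

A real orchestration service would also carry per-role customization parameters (addresses, credentials, start order), but the grouping-by-role lookup is the core of the discovery step.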

4.2 Cloud Provider / Administrator

The cloud provider has a set of management interfaces to create, monitor, and manage the cloud infrastructure. Four types of cloud administrator functionality are covered at a very high level in this section:
1. Tenant Account Management
2. Virtualization Substrate Management
3. Application / Software / Image Life-Cycle Management
4. Operations Management

4.2.1 Tenant Account Management

Tenant Account Management provides the security framework for creating and maintaining cloud user (or tenant) accounts. It tracks all the (virtual) hardware and software resources assigned to a tenant and provides the necessary isolation of a tenant's resources from unauthorized access. It offers an interface to track resource consumption and billing information on a per-tenant basis.

4.2.2 Virtualization Substrate Management

Virtualization Substrate Management is a centralized management system to administer and control all aspects of a virtualized infrastructure, including datacenters, clusters, hosts, and virtual machines. It offers rich functionality via both an API and a web browser GUI. Functions include:
- Live Migration: Dynamically move virtual machines between hosts with no service interruption.
- High Availability: Virtual machines automatically restart on another host in the case of host failure.
- Workload Management: Balance workloads in the datacenter by dynamically live-migrating virtual machines based on resource usage and policy.
- Power Management: During off-peak hours, concentrate virtual machines on fewer physical hosts to reduce power consumption on unused hosts.
- Maintenance Manager: Perform maintenance on hosts without guest downtime. Upgrade hypervisors directly from the management system.
- Image Manager: Create new virtual machines based on templates. Use snapshots to create point-in-time images of virtual machines.
- Monitoring: Real-time monitoring of virtual machines, host systems, and storage, with alerts and notifications.

- Security: Role-based access control allowing fine-grained access control and the creation of customized roles and responsibilities. Detailed audit trails covering GUI and API access.
- API: API for command-line management and automation.
- Centralized Host Management: Manage all aspects of host configuration, including network configuration, bonding, VLANs, and storage.

4.2.3 Software Life-Cycle Management

Software Life-Cycle Management is a software management solution deployed inside the customer's data center and firewall that provides software updates, configuration management, and life-cycle management across both physical and virtual servers. It supports:
- Operating system software
- Middleware software
- Application software

It also provides powerful systems administration capabilities, such as provisioning and monitoring for large deployments, and ensures that security fixes and configuration files are applied consistently across the entire environment.

4.2.4 Operations Management

Since the virtualized environment exists in a physical environment, Operations Management is a catch-all category that covers a whole host of management functions required to install, configure, and manage physical servers, storage, and networks. Other functions covered by Operations Management include overall physical datacenter security, performance, high availability, disaster tolerance, SLA/QoS, energy management, software licensing, and usage/billing/charge-back across divisions of a company.

4.2.5 Cloud Provider Functionality - Creating/Managing an IaaS Cloud Infrastructure

Cloud provider / administrator functionality includes:
1. Creating and managing cloud user accounts
2. Managing physical resources:
   - Servers
   - Storage
   - Network
   - Power
3. Managing the virtualization substrate:
   - Create virtual data centers and associated storage domains
   - Configure virtualization clusters (comprising virtual hosts) within the virtual data centers
   - Create pre-configured VMs on virtual hosts with default resources (vCPUs, vMem, vNetwork, and vStorage)
   - Deploy operating system and other software in pre-configured VMs
   - Create templates for pre-configured VMs
   - Offer interfaces to manage the virtualized environment: create new templates; shutdown/resume/snapshot/remove VMs
4. Managing images and the software stack / application life cycle
5. Managing security: users, groups, access controls, permissions
6. Offering a scheduling / dispatching function for scheduling work
7. Managing and monitoring SLA / QoS policy:
   - Performance
   - HA/DT
   - Power
8. Managing accounting / chargeback
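The "pre-configured VMs with default resources" step above can be sketched as a spec generator with a per-tenant quota check. The default values and the quota are invented for illustration and do not correspond to RHEV-M defaults.

```python
# Sketch of creating a pre-configured VM spec (vCPUs, memory, storage,
# network) under a per-tenant quota. Defaults and quota are hypothetical.
DEFAULTS = {"vcpus": 1, "vmem_gb": 2, "vstorage_gb": 20, "vnetwork": "bridged"}

def provision(tenant_vms: list, quota_vms: int = 10, **overrides) -> dict:
    """Append a new VM spec (defaults plus overrides), enforcing a VM quota."""
    if len(tenant_vms) >= quota_vms:
        raise RuntimeError("tenant VM quota exhausted")
    spec = {**DEFAULTS, **overrides}   # overrides win over defaults
    tenant_vms.append(spec)
    return spec

vms = []
spec = provision(vms, vmem_gb=4)       # default spec with a memory override
print(spec["vcpus"], spec["vmem_gb"])  # 1 4
```

Tracking the spec list per tenant is also what feeds the accounting/chargeback function listed above: the same records that enforce the quota can be metered for billing.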

4.3 Multi-Cloud Configurations

Figure 9 takes the cloud functionality shown in Figure 8 and extends it to a multi-cloud configuration.

Figure 9: Multi-Cloud Configuration - Components & Interfaces

5 Red Hat Cloud: Software Stack and Infrastructure Components

Figure 10 maps Red Hat infrastructure components to the cloud functionality shown in Figure 9.

Figure 10: Mapping Red Hat Components for Cloud Functionality

Recall that Red Hat itself does not operate a cloud, but its suite of open source software provides the infrastructure with which cloud providers are able to build public/private cloud offerings. Specifically:
1. IaaS based on:
   - RHEV
   - MRG Grid

2. PaaS based on:
   - JBoss

Figure 11 depicts the software stack of Red Hat cloud infrastructure components.

Figure 11: Red Hat Software Stack

5.1 Red Hat Enterprise Linux

Red Hat Enterprise Linux (RHEL) is the world's leading open source application platform. On one certified platform, RHEL offers a choice of:
- Applications - Thousands of certified ISV applications
- Deployment - Including standalone or virtual servers, cloud computing, or software appliances
- Hardware - A wide range of platforms from the world's leading hardware vendors

Red Hat has announced the fifth update to RHEL 5: Red Hat Enterprise Linux 5.5. RHEL 5.5 is designed to support the newer Intel Xeon Nehalem-EX platform as well as the upcoming AMD Opteron 6000 Series platform (formerly code-named "Magny-Cours"). We expect the new platforms to leverage Red Hat's history in scalable performance with new levels of core counts, memory, and I/O, offering users a very dense and scalable platform balanced for performance across many workload types. To increase the reliability of these systems, Red Hat supports Intel's expanded machine-check architecture, CPU fail-over, and memory sparing.

Red Hat also continues to make enhancements to its virtualization platform. New in RHEL 5.5 is support for greater guest density, meaning that more virtual machines can be supported on each physical server. Internal testing to date has shown that this release can support significantly more virtual guests than other virtualization products. The new hardware and protocols included in the beta significantly improve network scaling by providing direct access from the guest to the network. RHEL 5.5 also introduces improved interoperability with Microsoft Windows 7 through an update to Samba. This extends the Active Directory integration to better map users and groups on Red Hat Enterprise Linux systems and simplifies managing filesystems across platforms.

An important feature of any RHEL update is that kernel and user application programming interfaces (APIs) remain unchanged, ensuring RHEL 5 applications do not need to be rebuilt or re-certified.
The unchanged kernel and user APIs also extend to virtualized environments: with a fully integrated hypervisor, the application binary interface (ABI) consistency offered by RHEL means that applications certified to run on RHEL on physical machines are also certified when run on virtual machines. With this, the portfolio of thousands of certified applications for Red Hat Enterprise Linux applies to both environments.

5.2 Red Hat Enterprise Virtualization (RHEV) for Servers

Red Hat Enterprise Virtualization (RHEV) for Servers is an end-to-end virtualization solution designed to enable pervasive data center virtualization and unlock unprecedented capital and operational efficiency. RHEV is the ideal platform on which to build an internal or private cloud of Red Hat Enterprise Linux or Windows virtual machines. RHEV consists of the following two components:
- Red Hat Enterprise Virtualization Manager (RHEV-M) for Servers: A feature-rich server virtualization management system that provides advanced capabilities for hosts and guests, including high availability, live migration, storage management, system scheduler, and more.
- Red Hat Enterprise Virtualization Hypervisor (RHEV-H): A modern hypervisor based on Kernel-based Virtual Machine (KVM) virtualization technology, which can be deployed either as a standalone bare-metal hypervisor (included with Red Hat Enterprise Virtualization for Servers) or as Red Hat Enterprise Linux 5.4 and later (purchased separately) installed as a hypervisor host.

Some key characteristics of RHEV 2.1 are listed below:
- Scalability:
  - Host: Up to 256 cores, 1 TB RAM
  - Guest/VM: Up to 16 vCPUs, 64 GB RAM
  - Clusters: Over 50 hosts per cluster
  - Predictable, scalable performance for enterprise workloads from SAP, Oracle, Microsoft, Apache, etc.
- Advanced features: Memory page sharing, advanced scheduling capabilities, and more, inherited from the Red Hat Enterprise Linux kernel
- Guest operating system support:
  - Paravirtualized network and block drivers for highest performance
  - Red Hat Enterprise Linux guests (32-bit & 64-bit): Red Hat Enterprise Linux 3, 4, and 5
  - Microsoft Windows guests (32-bit & 64-bit): Windows Server 2003, Windows Server 2008, Windows XP; SVVP and WHQL certified
- Hardware support: All 64-bit x86 servers that support Intel VT or AMD-V technology and are certified for Red Hat Enterprise Linux 5 are certified for Red Hat Enterprise Virtualization.
Red Hat Enterprise Virtualization supports NAS/NFS, Fibre Channel, and iSCSI storage topologies.

5.3 Red Hat Network (RHN) Satellite

With RHN Satellite, all Red Hat Network functionality is hosted on the customer's own network, allowing much greater functionality and customization. The Satellite server connects with Red Hat over the public Internet to download new content and updates. This model also allows customers to take their Red Hat Network solution completely off-line if desired. Features include:
- An embedded database to store packages, profiles, and system information.
- Instantly update systems for security fixes or to provide packages or applications needed immediately.
- An API layer that allows the creation of scripts to automate functions or integrate with existing management applications.
- Distribute custom or third-party applications and updates.
- Create staged environments (development, test, production) to select, manage, and test content in a structured manner.
- Create errata for custom content, or modify existing errata to provide specific information to different groups.
- Access to advanced features in the Provisioning Module, such as bare-metal PXE boot provisioning and integrated network install trees.
- Access to the Red Hat Network Monitoring Module for tracking system and application performance.

RHN Satellite is Red Hat's on-premises systems management solution that provides software updates, configuration management, provisioning, and monitoring across both physical and virtual Red Hat Enterprise Linux servers. It offers customers enhanced performance, centralized control, and higher scalability for their systems, while deployed on a management server located inside the customer's data center and firewall. In September 2009, Red Hat released RHN Satellite 5.3, the first fully open source version of the product.
This latest version offers opportunities for increased flexibility and faster provisioning setups for customers with the incorporation of open source Cobbler technology in its provisioning architecture Cobbler Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It binds and automates many associated Linux tasks, eliminating the need for many various commands and applications when rolling out new systems and, in some cases, changing existing ones. With a simple series of commands, network installs can be configured for PXE, re-installations, media-based net-installs, and virtualized installs (supporting Xen and KVM). Cobbler can also optionally help with managing DHCP, DNS, and yum package mirroring infrastructure. In this regard, it is a more generalized automation application, rather than just dealing specifically with installations. There is also a lightweight built-in configuration management system as well as support for integrating with other configuration management systems. Cobbler has a command line interface as well as a web interface and several API access options. 31
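The Cobbler workflow described above is driven from its command line. The following sketch uses hypothetical names and paths (the actual commands used in this configuration appear in later sections) and shows how an install tree, profile, and system record might be registered:

```
# Import an install tree; this creates a distro and a default profile
cobbler import --name=rhel5-x86_64 --path=/distro/rhel5-server-x86_64

# Attach a kickstart file to the generated profile
cobbler profile edit --name=rhel5-x86_64 \
    --kickstart=/var/lib/cobbler/kickstarts/sample.ks

# Register a system record so it PXE-boots into that profile
cobbler system add --name=node01 --profile=rhel5-x86_64 \
    --mac=00:11:22:33:44:55

# Regenerate the PXE, DHCP, and DNS configuration files
cobbler sync
```

Running `cobbler check` first reports any missing prerequisites (tftp, dhcpd, SELinux booleans) before the server is put into service.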

5.4 JBoss Enterprise Middleware

The following JBoss Enterprise Middleware development tools, deployment platforms, and management environment are available via subscriptions that deliver industry-leading SLA-based production and development support, patches and updates, multi-year maintenance policies, and software assurance from Red Hat, the leader in open source solutions.

Development Tools:
- JBoss Developer Studio - PE (Portfolio Edition): everything needed to develop, test, and deploy rich web applications, enterprise applications, and SOA services.

Enterprise Platforms:
- JBoss Enterprise Application Platform: everything needed to deploy and host enterprise Java applications and services.
- JBoss Enterprise Web Platform: a standards-based solution for light and rich Java web applications.
- JBoss Enterprise Web Server: a single enterprise open source solution for large-scale websites and lightweight web applications.
- JBoss Enterprise Portal Platform: a platform for building and deploying portals for personalized user interaction with enterprise applications and automated business processes.
- JBoss Enterprise SOA Platform: a flexible, standards-based platform to integrate applications, SOA services, and business events, as well as to automate business processes.
- JBoss Enterprise BRMS: an open source business rules management system that enables easy business policy and rules development, access, and change management.
- JBoss Enterprise Data Services Platform: bridges the gap between diverse existing enterprise data sources and the new forms of data required by new projects, applications, and architectures.

Enterprise Frameworks:
- JBoss Hibernate Framework: industry-leading object/relational mapping and persistence.
- JBoss Seam Framework: a powerful application framework for building next-generation Web 2.0 applications.
- JBoss Web Framework Kit: a combination of popular open source web frameworks for building light and rich Java applications.
- JBoss jBPM Framework: business process automation and workflow engine.

Management:
- JBoss Operations Network (JON): an advanced management platform for inventorying, administering, monitoring, and updating JBoss Enterprise Platform deployments.

5.4.1 JBoss Enterprise Application Platform (EAP)

JBoss Enterprise Application Platform is the market-leading platform for innovative and scalable Java applications. Integrated, simplified, and delivered by the leader in enterprise open source software, it includes leading open source technologies for building, deploying, and hosting enterprise Java applications and services. JBoss Enterprise Application Platform balances innovation with enterprise-class stability by integrating the most popular clustered Java EE application server with next-generation application frameworks. Built on open standards, JBoss Enterprise Application Platform integrates JBoss Application Server with JBoss Hibernate, JBoss Seam, and other leading open source Java technologies from JBoss.org into a complete, simple enterprise solution for Java applications.

Features and Benefits:
- Complete Eclipse-based Integrated Development Environment (JBoss Developer Studio)
- Built for standards and interoperability: JBoss EAP supports a wide range of Java EE and Web Services standards
- Enterprise Java Beans and Java Persistence: JBoss EAP bundles and integrates Hibernate, the de facto leader in object/relational mapping and persistence
- Built-in Java Naming and Directory Interface (JNDI) support
- Built-in JTA for two-phase commit transaction support
- JBoss Seam Framework and Web Application Services
- Caching, Clustering, and High Availability
- Security Services
- Web Services and Interoperability
- Integration and Messaging Services
- Embeddable, Service-Oriented Architecture microkernel
- Consistent Manageability

JBoss Operations Network (JON)

JON is an integrated management platform that simplifies the development, testing, deployment, and monitoring of JBoss Enterprise Middleware. From the JON console one can:
- inventory resources from the operating system to applications.
- control and audit application configurations to standardize deployments.
- manage, monitor, and tune applications for improved visibility, performance, and availability.

One central console provides an integrated view and control of the JBoss middleware infrastructure.

The JON management platform (server-agent) delivers centralized systems management for the JBoss middleware product suite. With it, one can coordinate the many stages of the application life cycle, expose a cohesive view of middleware components through complex environments, improve operational efficiency and reliability through thorough visibility into production availability and performance, and effectively manage configuration and rollout of new applications across complex environments with a single, integrated tool. JON can:
- Auto-discover application resources: operating systems, applications, and services
- Store, edit, and set application configurations from one console
- Start, stop, or schedule an action on an application resource
- Remotely deploy applications
- Monitor and collect metric data for a particular platform, server, or service
- Alert support personnel based upon application alert conditions
- Assign roles to users to enable fine-grained access control to JON services

5.5 Red Hat Enterprise MRG Grid

MRG Grid provides high throughput and high performance computing. Additionally, it enables enterprises to move to a utility model of computing, helping them achieve both higher peak computing capacity and higher IT utilization by leveraging their existing infrastructure to build high performance grids. Based on the Condor project, MRG Grid provides the most advanced and scalable platform for high throughput and high performance computing, with capabilities like:
- scalability to run the largest grids in the world.
- advanced features for handling priorities, workflows, concurrency limits, utilization, low-latency scheduling, and more.
- support for a wide variety of tasks, ranging from sub-second calculations to long-running, highly parallel (MPI) jobs.
- the ability to schedule to all available computing resources, including local grids, remote grids, virtual machines, idle desktop workstations, and dynamically provisioned cloud infrastructure.

MRG Grid also enables enterprises to move to a utility model of computing, where they can:
- schedule a variety of applications across a heterogeneous pool of available resources.
- automatically handle seasonal workloads with high efficiency, utilization, and flexibility.
- dynamically allocate, provision, or acquire additional computing resources for additional applications and loads.
- execute across a diverse set of environments, ranging from virtual machines to bare-metal hardware to cloud-based infrastructure.
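As a concrete illustration of the kind of work MRG Grid schedules, here is a minimal Condor-style job description file. The executable path, resource requirements, and job count are invented for the example; the file would be submitted with `condor_submit`:

```
# Submit 100 instances of a hypothetical rendering job to the grid
universe     = vanilla
executable   = /usr/local/bin/render_frame
arguments    = $(Process)
requirements = (Arch == "X86_64") && (Memory >= 2048)
output       = frame_$(Process).out
error        = frame_$(Process).err
log          = render.log
queue 100
```

Each queued instance receives a distinct $(Process) number, and the scheduler matches the `requirements` expression against the advertised attributes of available execution slots.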

6 Proof-of-Concept System Configuration

This proof-of-concept deployment of the Red Hat infrastructure for a private cloud used the configuration shown in Figure 12, comprised of:

1. Infrastructure management services, e.g., Red Hat Network (RHN) Satellite, Red Hat Enterprise Virtualization Manager (RHEV-M), DNS service, DHCP service, PXE server, NFS server for ISO images, JON, and MRG Manager - most of them installed in virtual machines (VMs) in a Red Hat Cluster Suite (RHCS) cluster for high availability.
2. A farm of RHEV host systems (either in the form of RHEV Hypervisors or as RHEL+KVM) to run tenants' VMs.
3. Sample RHEL application(s), JBoss application(s), and MRG Grid application(s) deployed in the tenant VMs.

Figure 12

6.1 Hardware Configuration

NAT System [1 x HP ProLiant DL585 G2]:
- Quad socket, dual core (8 cores), AMD Opteron 8222
- 72 GB RAM
- 4 x 72 GB SAS 15K internal disk drives
- 2 x Broadcom BCM5706 Gigabit Ethernet Controller

Management Cluster Nodes [2 x HP ProLiant DL580 G5]:
- Quad socket, quad core (16 cores), Intel Xeon CPU
- 64 GB RAM
- 4 x 72 GB SAS 15K internal disk drives
- 2 x QLogic ISP2432-based 4Gb FC HBA
- 1 x Intel 82572EI Gigabit Ethernet Controller
- 2 x Broadcom BCM5708 Gigabit Ethernet Controller

Hypervisor Host Systems [2 x HP ProLiant DL370 G6]:
- Dual socket, quad core (8 cores), Intel Xeon CPU
- 48 GB RAM
- 6 x 146 GB SAS 15K internal disk drives
- 2 x QLogic ISP2532-based dual-port 8Gb FC HBA
- 4 x NetXen NX3031 1/10-Gigabit Ethernet Controller

Table 1: Hardware Configuration

6.2 Software Configuration

Software                                        Version
Red Hat Enterprise Linux (RHEL)                 5.5 Beta ( el5 kernel)
Red Hat Enterprise Virtualization (RHEV)        2.2 Beta
Red Hat Network (RHN) Satellite                 5.3
JBoss Enterprise Application Platform (EAP)     5.0
JBoss Operations Network (JON)                  2.2
Red Hat Enterprise MRG Grid                     1.2

Table 2: Software Configuration

6.3 Storage Configuration

1 x HP StorageWorks MSA2324fc Fibre Channel Storage Array + HP StorageWorks 70 Modular Smart Array with Dual Domain IO Module [24+25 x 146 GB 10K RPM SAS disks]:
- Storage Controller Code Version: M100R18; Loader Code Version:
- Memory Controller Code Version: F300R22
- Management Controller Code Version: W440R20; Loader Code Version:
- Expander Controller Code Version: 1036
- CPLD Code Version: 8
- Hardware Version: 56

1 x HP StorageWorks 4/16 SAN Switch: Firmware v
1 x HP StorageWorks 8/40 SAN Switch: Firmware v6.1.0a

Table 3: Storage Hardware

The MSA2324fc array was configured with four 11-disk RAID6 vdisks, each with a spare:

create vdisk level r6 disks  spare 1.12 VD
create vdisk level r6 disks  spare 1.24 VD
create vdisk level r6 disks  spare 2.12 VD
create vdisk level r6 disks  spare  VD4

LUNs were created and presented as outlined in the following table.

Volume          Size     Presentation                           Purpose
sat_disk        300 GB   Management Cluster                     Satellite Server VM OS disk
luci_disk       20 GB    Management Cluster                     Luci server VM OS disk
q_disk          50 MB    Management Cluster                     Management Cluster quorum
jon_disk        40 GB    Management Cluster                     JON VM OS disk
mgmtvirt_disk   300 GB   Management Cluster                     Management virtualization storage
rhevm_disk      30 GB    Management Cluster                     RHEV-M OS disk
rhev-nfs-fs     300 GB   Management Cluster                     RHEV-M ISO library
rhevm-storage   1 TB     Management Cluster, Hypervisor Hosts   RHEV-M storage pool

Table 4: LUN Configuration

As an example, the following commands were used to create the 30 GB rhevm_disk LUN and present it exclusively to each HBA in the management cluster nodes:

create volume rhevm-vm vdisk VD4 size 30GB lun 07
map volume rhevm-vm access rw ports a1,a2,b1,b2 lun 07 host monet_host0,degas_host0,degas_host1,monet_host1
unmap volume rhevm-storage

6.4 Network Configuration

The components of this cloud infrastructure were staged in a private subnet, giving the environment complete control of the network (e.g., DHCP, DNS, and PXE) without having to lobby IT for changes to support a segment that IT would not maintain and control. Other configurations are supported, but this one was the most time-efficient for this exercise. While the infrastructure is in a private subnet, access to and from the systems to the complete network is required. This was handled by configuring a system with network connections to both the private subnet and the public network. This machine served as a gateway between the networks, with iptables configured to perform Network Address Translation (NAT). The NAT system was configured to use the top address of the subnet as the gateway and a network domain name of ra.rh.com. The initial estimated IP requirement was approximately 1000 addresses in an RFC 1918 (address allocation for private internets) address space. The decision was made to use a class B style network, which would be in the 172.16.0.0/12 space. This number of addresses requires a 22-bit subnet mask; a /22 network yields 1024 addresses.
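The subnet arithmetic above can be checked with a short script. This sketch uses Python's ipaddress module and an illustrative 172.16.0.0/22 network carved from the RFC 1918 /12 block; the specific network is an assumption, not necessarily the one deployed in the lab:

```python
import ipaddress

# A /22 carved from the RFC 1918 172.16.0.0/12 block; the specific
# network below is illustrative, not the one used in this exercise.
net = ipaddress.ip_network("172.16.0.0/22")

print(net.netmask)        # 255.255.252.0 -- a 22-bit mask
print(net.num_addresses)  # 1024 addresses, above the ~1000 required
print(net[1])             # 172.16.0.1, first usable host address
print(net[-2])            # 172.16.3.254, last usable host address
```

Any /22 aligned on a 4-boundary in the third octet of the 172.16.0.0/12 block would serve equally well.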

7 Deploying Cloud Infrastructure Services

This section provides the set of detailed actions required to configure the Red Hat products that constitute the infrastructure used for a private cloud. The goal is to create a set of highly available cloud infrastructure management services. These cloud management services are then used to set up the cloud hosts, the VMs within those hosts, and finally to load applications into those VMs. High availability is achieved by clustering two RHEL nodes (active/passive) using the Red Hat Cluster Suite (RHCS). Each of the cluster nodes is set up to run RHEL 5.5 (with the bundled KVM hypervisor). For most management services, a VM is created (using the KVM hypervisor, not RHEV-M) and configured as an RHCS service, and then the management service is installed in the VM, e.g., the RHN Satellite VM or JON VM. A high-level walk-through of the steps to create these highly available cloud infrastructure management services is presented below.

1. Install RHEL + KVM on a node.
2. Use virt-manager to create a VM.
3. Install RHN Satellite in the VM (= Satellite VM).
4. Synchronize Satellite with RHN and download packages from all appropriate channels / child channels:
   - Base RHEL 5
   - Clustering (RHCS, ...)
   - Cluster storage (GFS, ...)
   - Virtualization (KVM, ...)
   - RHN Tools
   - RHEV management agents for RHEL hosts
5. Use multi-organization support in Satellite - create a Tenant organization and a Management organization.
6. Configure cobbler:
   - Configure cobbler's management of DHCP
   - Configure cobbler's management of DNS
   - Configure cobbler's management of PXE
7. Provision the MGMT-1 node from Satellite.
8. Migrate the Satellite VM to MGMT-1.
9. Provision additional cloud infrastructure management services on MGMT-1 (using Satellite where applicable = Satellite creates the VM, installs the OS and additional software):
   - RHEL VM: LUCI
   - Windows VM: RHEV-M
   - RHEL VM: JON
   - RHEL VM: MRG Manager
   - NFS service
10. Provision the MGMT-2 node from Satellite.
11. Turn MGMT-1 and MGMT-2 into an RHCS cluster.

12. Make the cloud infrastructure management services clustered services.
13. Balance the clustered services (for better performance).
14. Configure RHEV-M:
    - RHEV data center(s)
    - RHEV cluster(s) within the data center(s)

7.1 Network Gateway

The gateway system renoir.lab.bos.redhat.com was installed with a basic configuration of Red Hat Enterprise Linux 5.4 Advanced Platform, and iptables was configured to perform network address translation to allow communication between the private subnet and the public network.

Figure 13

The following details the procedure for this configuration.

1. Install Red Hat Enterprise Linux 5.4 Advanced Platform:
   a) Use an obvious naming convention for the operating system volume group (e.g., <hostname>natvg).
   b) Exclude all software groups when selecting software components.
   c) When prompted, configure the preferred network interface using DHCP.
   d) Set SELinux to permissive mode.
   e) Disable the firewall (iptables).
2. Configure Secure Shell (ssh) keys.

3. To prevent /etc/resolv.conf from being overwritten by DHCP, convert eth0 (/etc/sysconfig/network-scripts/ifcfg-eth0) to a static IP:

DEVICE=eth0
BOOTPROTO=static
NETMASK=
IPADDR=
HWADDR=00:1E:0B:BB:42:70
ONBOOT=yes
TYPE=Ethernet

4. Configure eth1 (/etc/sysconfig/network-scripts/ifcfg-eth1) with the gateway address for the private subnet:

DEVICE=eth1
BOOTPROTO=static
NETMASK=
IPADDR=
HWADDR=00:1E:0B:BB:42:72
TYPE=Ethernet
ONBOOT=yes

5. Update /etc/hosts with known addresses for NAT, DNS, etc.
6. To be able to search both public and private networks, edit /etc/resolv.conf to contain the following:

search ra.rh.com lab.bos.redhat.com
nameserver  # satellite system
nameserver
nameserver
nameserver

7. Edit /etc/sysctl.conf: set net.ipv4.ip_forward=1
8. Enable, configure, and save the iptables settings using the following commands:

chkconfig iptables on
service iptables start
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -j ACCEPT
service iptables save

7.2 Install First Management Node

Install and configure the first of the nodes that will comprise the management services cluster.

Figure 14

1. Disable fibre channel connectivity to the system (e.g., switch port disable, cable pull, HBA disable, etc.).
2. Install Red Hat Enterprise Linux 5.5 Advanced Platform:
   a) Use an obvious naming convention for the operating system volume group (e.g., <hostname>cloudvg).
   b) Include the Clustering and Virtualization software groups when selecting software components.
   c) Select the Customize Now option and highlight the Virtualization entry at left. Check the box for KVM; ensure the (Xen-based) Virtualization group is unchecked.
   d) When prompted, configure the preferred network interface using:
      - a static IP
      - the NAT server IP address as the default route
      - IP addresses for the locally configured DNS

   e) Set SELinux to permissive mode.
   f) Enable the firewall (iptables), leaving ports open for ssh, http, and https.
3. Configure Secure Shell (ssh) keys.
4. Update /etc/hosts with known addresses for NAT, DNS, etc.
5. Modify /etc/resolv.conf to contain the following:

search ra.rh.com
nameserver  # satellite system IP

6. Configure NTP using the following commands:

service ntpd start
chkconfig ntpd on

7. Modify the firewall rules to include openais, rgmanager, ricci, dlm, cssd, and vnc using the following commands:

iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dports 5404,5405 -j ACCEPT  # openais
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 41966,41967,41968,41969 -j ACCEPT  # rgmanager
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 11111 -j ACCEPT  # ricci
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 21064 -j ACCEPT  # dlm
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 50006,50008,50009 -j ACCEPT  # cssd
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dports 50007 -j ACCEPT  # cssd
iptables -I RH-Firewall-1-INPUT -m state --state NEW -p tcp --destination-port  -j ACCEPT  # vnc
iptables -I RH-Firewall-1-INPUT -m state --state NEW -p tcp --destination-port  -j ACCEPT  # vnc
service iptables save

8. Disable ACPID: chkconfig acpid off
9. Configure device-mapper:
   a) Enable device-mapper multipathing using the following commands:
      yum install device-mapper-multipath
      chkconfig multipathd on
      service multipathd start
   b) Edit /etc/multipath.conf accordingly to alias known devices.
10. Configure the cluster interconnect network.
11. Enable the fibre channel connectivity disabled in step 1.
12. To discover any fibre channel devices, either execute rescan-scsi-bus.sh or reboot the node.
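The device aliasing of step 9b is done in /etc/multipath.conf. A sketch of the relevant stanza follows; the WWID placeholders would be replaced with the values reported by `multipath -ll` for the actual LUNs:

```
# /etc/multipath.conf excerpt -- alias LUNs so they appear as
# /dev/mapper/sat_disk, /dev/mapper/rhevm_disk, etc.
multipaths {
    multipath {
        wwid  <WWID of the satellite LUN>
        alias sat_disk
    }
    multipath {
        wwid  <WWID of the RHEV-M LUN>
        alias rhevm_disk
    }
}
```

Stable aliases keep the cluster service definitions independent of device discovery order across the two management nodes.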

7.3 Create Satellite System

The satellite system provides the configuration management of the Red Hat Enterprise Linux systems and is the network maintainer of DHCP, DNS, and PXE.

7.3.1 Create Satellite VM

Figure 15

1. Convert the primary network of the management system to a bridge to allow sharing.
   a) Create a network bridge for virtualization by creating the bridge configuration file /etc/sysconfig/network-scripts/ifcfg-cumulus0:

DEVICE=cumulus0
TYPE=Bridge
BOOTPROTO=static
IPADDR=
NETMASK=
GATEWAY=
ONBOOT=yes

   b) Modify the existing public network file (e.g., ifcfg-eth#):
      - add BRIDGE=cumulus0

      - confirm BOOTPROTO=none
      - remove/comment out any static IP address
   c) Restart the network, confirming the bridge comes online:
      service network restart
   d) Reboot the node to make system services aware of the network changes.
2. Create a storage volume (e.g., sat_disk) of the appropriate size. See section 6.3 for greater detail on adding and presenting LUNs from storage.
3. Create the virtual machine using virt-manager:
   - Name: (e.g., ra-sat-vm)
   - Virtualization method: Fully virtualized
   - CPU architecture: x86_64
   - Hypervisor: kvm
   - Installation method: Local install media
   - OS Type: Linux
   - OS Variant: Red Hat Enterprise Linux 5.4 or later
   - Specify preferred installation media
   - Storage: block device (e.g., /dev/mapper/sat_disk)
   - Network connection: shared physical device (e.g., cumulus0)
   - Max memory: 8192
   - Startup memory: 8192
   - Virtual CPUs: 4
4. Install the OS, Red Hat Enterprise Linux 5.4 Advanced Platform:
   - Use the local device (e.g., vda) for the OS
   - Use an obvious naming convention for the OS volume group (e.g., SatVMVG)
   - Deselect all software groups
   - Configure network interface eth0 with a static IP address
   - Set SELinux to permissive mode
   - Enable the firewall
5. Open the required firewall ports:

iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT  # DNS/named
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT  # DNS/named
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 68 -j ACCEPT  # DHCP client
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 68 -j ACCEPT  # DHCP client
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 69 -j ACCEPT  # tftp
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 69 -j ACCEPT  # tftp
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 80 -j ACCEPT  # HTTP
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT  # HTTP

iptables -I RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 443 -j ACCEPT  # HTTPS
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT  # HTTPS
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 4545 -j ACCEPT  # RHN Satellite Server Monitoring
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 4545 -j ACCEPT  # RHN Satellite Server Monitoring
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 5222 -j ACCEPT  # XMPP Client Connection
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 5222 -j ACCEPT  # XMPP Client Connection
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 25151 -j ACCEPT  # Cobbler
iptables -I RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 25151 -j ACCEPT  # Cobbler
service iptables save

7.3.2 Configure DHCP

This initial DHCP configuration provides immediate functionality and becomes the basis of the template when cobbler is configured.

1. Install the DHCP software package:
   yum install dhcp
2. Create /etc/dhcpd.conf:
   a) Start by using the sample configuration:
      cp /usr/share/doc/dhcp*/dhcpd.conf.sample /etc/dhcpd.conf
   b) Edit the file, updating the following entries: subnet, netmask, routers, domain name, domain name server, dynamic IP range, and hosts.

#
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.sample
#
authoritative;
ddns-update-style interim;
ignore client-updates;
subnet  netmask  {
    # --- default gateway
    option routers ;
    option subnet-mask ;

    option domain-name "ra.rh.com";
    option domain-name-servers ;
    option time-offset ;  # Eastern Standard Time
    range ;
    default-lease-time 21600;
    max-lease-time 43200;

    host monet {
        option host-name "monet.ra.rh.com";
        hardware ethernet 00:1E:0B:42:7A;
        fixed-address ;
    }
    host degas {
        option host-name "degas.ra.rh.com";
        hardware ethernet 00:21:5A:5C:2E:46;
        fixed-address ;
    }
    host ra-sat-vm {
        option host-name "ra-sat-vm.ra.rh.com";
        hardware ethernet 54:52:00:6A:30:CA;
        fixed-address ;
    }
    host ra-luci-vm {
        option host-name "ra-luci-vm.ra.rh.com";
        hardware ethernet 54:52:00:50:80:0A;
        fixed-address ;
    }
    host ra-rhevm-vm {
        option host-name "ra-rhevm-vm.ra.rh.com";
        hardware ethernet 54:52:00:07:B0:85;
        fixed-address ;
    }
    host renoir {
        option host-name "renoir.ra.rh.com";
        hardware ethernet 00:18:71:EB:87:9D;
        fixed-address ;
    }
}

3. Check the syntax of the dhcpd.conf file and resolve any issues:
   service dhcpd configtest
4. Start the service:
   service dhcpd start
   chkconfig dhcpd on
5. Boot a test system and verify that an appropriate entry is produced in /var/lib/dhcpd/dhcpd.leases
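Because the transcription elides the actual addresses, the following sketch uses invented values, but it shows a quick way to sanity-check a dhcpd.conf address plan: every fixed address should fall inside the subnet and outside the dynamic range:

```python
import ipaddress

# Invented stand-ins for the elided lab addresses.
subnet = ipaddress.ip_network("172.16.0.0/22")
pool_lo = ipaddress.ip_address("172.16.1.100")  # dynamic range start
pool_hi = ipaddress.ip_address("172.16.1.200")  # dynamic range end
fixed = {
    "monet.ra.rh.com":     "172.16.0.11",
    "ra-sat-vm.ra.rh.com": "172.16.0.2",
}

for host, addr in fixed.items():
    ip = ipaddress.ip_address(addr)
    assert ip in subnet, f"{host}: {ip} outside {subnet}"
    assert not (pool_lo <= ip <= pool_hi), f"{host}: {ip} collides with pool"
print("address plan OK")
```

A mistake here (a fixed address inside the dynamic pool) would otherwise surface only later as intermittent lease conflicts.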

7.3.3 Configure DNS

1. Install the DNS software and related configuration tool:
   yum install named system-config-bind
2. Edit /etc/host.conf to include the bind keyword:
   order hosts,bind
3. Create a file that contains all hosts to be defined, in the format:
   <IP Address> <Fully Qualified Host Name>
4. Invoke system-config-bind and perform the following to create the configuration file (/etc/named.conf) and zone files in /var/named:
   - Import the file of all defined hosts
   - Define forwarders using the options settings
5. Test the configuration and resolve any issues:
   service named configtest
6. Start the service:
   service named start
   chkconfig named on

7.3.4 Install and Configure RHN Satellite Software

This installation uses the embedded database for Satellite. For complete details, refer to the Red Hat Network Satellite Installation guide.

Figure 16

1. Register ra-sat-vm with the central Red Hat Network:
   rhn_register
2. Obtain a Satellite certificate and place it in a known location.
3. Download redhat-rhn-satellite-5.3-server-x86_64-5-embedded-oracle.iso. Starting at the RHN website, select the following links: Download Software -> expand Red Hat Enterprise Linux (v. 5 for 64-bit x86_64) -> Red Hat Network Satellite (v5.3 for Server v5 AMD64 / Intel64) -> Satellite Installer for RHEL-5 - (Embedded Database)
4. Mount the CD image:
   mount -o loop /root/redhat-rhn-satellite-5.3-server-x86_64-5-embedded-oracle.iso /media/cdrom
5. Create an answers.txt for the installation.

   a) Copy the sample answers.txt:
      cp /media/cdrom/install/answers.txt /tmp/
   b) Edit the copied file, addressing all of the following required fields and any desired optional fields (refer to Appendix A for the example used):
      - admin-email
      - SSL data: ssl-set-org, ssl-set-org-unit, ssl-set-city, ssl-set-state, ssl-set-country, ssl-password
      - satellite-cert-file
      - ssl-config-sslvhost
6. Start the installation:
   cd /media/cdrom; ./install.pl --answer-file=/tmp/answers.txt
7. After completion of the installation, direct a Web browser to the displayed address and perform the following steps:
   a) Create Satellite Administrator
   b) General Configuration
   c) RHN Satellite Configuration - Monitoring
   d) RHN Satellite Configuration - Bootstrap
   e) RHN Satellite Configuration - Restart
8. Prepare channels:
   a) List the authorized channels:
      satellite-sync --list-channels
   b) Download the base channel (could take several hours):
      satellite-sync -c rhel-x86_64-server-5
   c) Optionally download any desired child channels using the syntax described above.

Configure Multiple Organizations

Using multiple organizations can make a single satellite appear as multiple discrete instances. Organizations can be configured to share software channels. This configuration created a management organization and a tenant organization. The elements of the management organization consisted of the cluster members, NAT server, luci VM, JON VM, etc. All RHEV VMs are registered to the tenant organization. Separating the organizations allows the tenants complete functional access to a satellite for RHEV-based VMs while providing security by restricting access to the management systems via satellite.

1. Access the administrator account of the satellite. Navigate to the Admin tab and select create new organization. Fill in all of the fields:
   - Organization Name

   - Desired Login
   - Desired Password
   - Confirm Password
   - First Name
   - Last Name
2. After selecting Create Organization, the System Entitlement page is displayed. Input the number of entitlements of each entitlement type that this organization will be allocated and select Update Organization.
3. Navigate to the Software Channel Entitlements page. Update the channel entitlement allocation for all channels.
4. Navigate to the Trusts page. Select to trust all organizations and select Modify Trusts.

Configure Custom Channels for RHEL 5.5 Beta

1. Create a new channel for each of the following:
   - rhel5-5-x86_64-server [base channel]
   - rhel5-5-x86_64-vt
   - rhel5-5-x86_64-cluster
   - rhel5-5-x86_64-clusterstorage
   a) Starting at the satellite home page, select the following links: Channels -> Manage Software Channels -> create new channel, and provide the information below for each channel created:
      - Channel Name
      - Channel Label
      - Parent Channel [None indicates base channel]
      - Parent Channel Architecture (e.g., x86_64)
      - Channel Summary
      - Organization Sharing (e.g., public)
2. Place packages into the created channels (assumes the distribution has been made available under /distro):

rhnpush -v -c rhel5-5-x86_64-server --server=http://localhost/app --dir=/distro/rhel5-server-x86_64/server -u admin -p <password>
rhnpush -v -c rhel5-5-x86_64-vt --server=http://localhost/app --dir=/distro/rhel5-server-x86_64/vt -u admin -p <password>
rhnpush -v -c rhel5-5-x86_64-cluster --server=http://localhost/app --dir=/distro/rhel5-server-x86_64/cluster -u admin -p <password>
rhnpush -v -c rhel5-5-x86_64-clusterstorage --server=http://localhost/app --dir=/distro/rhel5-server-x86_64/clusterstorage -u admin -p <password>

3. Clone the RHN Tools child channel as a RHEL5-5 child channel:
   a) Starting at Satellite Home, select the following links: Channels -> Manage Software Channels -> clone channel
      - Clone From: Red Hat Network Tools for RHEL Server (v.5 64-bit x86_64)

      - Clone: Current state of the channel (all errata)
      - Click Create Channel
   In the Details page displayed:
      - Parent Channel: (e.g., rhel5-5-x86_64-server)
      - Channel Name: use provided or specify name
      - Channel Label: use provided or specify label
      - Base Channel Architecture: x86_64
      - Channel Summary: use provided or specify summary
      - Enter any optional (non-asterisk) information as desired
      - Click Create Channel
   On the re-displayed Details page:
      - Organizational Sharing: Public
      - Click Update Channel
4. Make the distribution kickstartable:
   a) Starting at Satellite Home, select the following links: Systems -> Kickstart -> Distributions -> create new distribution
      - Distribution Label: (e.g., rhel5-5_x86-64)
      - Tree Path: /distro/rhel5-server-x86_64
      - Base Channel: rhel5-5-x86_64-server
      - Installer Generation: Red Hat Enterprise Linux 5
      - [optional] Kernel Options and Post Kernel Options
      - Create Kickstart Distribution

Cobbler

RHN Satellite includes the Cobbler server, which allows administrators to centralize their system installation and provisioning infrastructure. Cobbler is an installation server that collects the various methods of performing unattended system installations, whether for server, workstation, or guest systems in a fully or para-virtualized setup. Cobbler provides several tools to assist in pre-installation guidance, kickstart file management, content channel management, and more.

Configure Cobbler

The steps listed in this section perform the initial configuration of cobbler. The sections that follow provide the procedures for cobbler's management of additional services.

1. Configure the following settings in /etc/cobbler/settings (the complete settings file can be found in Appendix A.2):

redhat_management_server: "ra-sat-vm.ra.rh.com"
server: ra-sat-vm.ra.rh.com
register_new_installs: 1
redhat_management_type: "site"

   Note: do NOT set scm_track_enabled: 1 unless git has been installed.
2. Set the SELinux boolean that allows the HTTPD web service components network access:
   setsebool -P httpd_can_network_connect true

3. Check the configuration (ignore the warning about the version of reposync):
   cobbler check
4. Synchronize the cobbler-controlled files:
   cobbler sync
5. Restart the satellite:
   /usr/sbin/rhn-satellite restart

Configure Cobbler Management of DHCP

1. Configure the following settings in /etc/cobbler/settings (the complete settings file can be found in Appendix A.2):

manage_dhcp: 1
dhcpd_bin: /usr/sbin/dhcpd
dhcpd_conf: /etc/dhcpd.conf
restart_dhcp: 1

2. Verify the [dhcp] section of /etc/cobbler/modules.conf is set as:
   module = manage_isc
3. Create /etc/cobbler/dhcp.template based on the existing /etc/dhcpd.conf created earlier, with an additional section of macros to add managed systems, as shown in the excerpt below:

#
# DHCP Server Configuration file.
# see /usr/share/doc/dhcp*/dhcpd.conf.sample
#
authoritative;
ddns-update-style interim;
ignore client-updates;
subnet  netmask  {
[...]
    host renoir {
        option host-name "renoir.ra.rh.com";
        hardware ethernet 00:1E:0B:BB:42:72;
        fixed-address ;
    }
    #for dhcp_tag in $dhcp_tags.keys():
    ## group could be subnet if dhcp tags align with the subnets
    ## or any valid dhcpd.conf construct... if the default dhcp tag in cobbler
    ## is used, the group block can be deleted for a flat configuration
    ## group for Cobbler DHCP tag: $dhcp_tag
    #for mac in $dhcp_tags[$dhcp_tag].keys():
    #set iface = $dhcp_tags[$dhcp_tag][$mac]
    host $iface.name {
        hardware ethernet $mac;
        #if $iface.ip_address:

    fixed-address $iface.ip_address;
    #end if
    #if $iface.hostname:
    option host-name "$iface.hostname";
    #end if
}
#end for
#end for
}

6. Synchronize cobbler controlled files:
   cobbler sync
7. Verify the generated /etc/dhcpd.conf

Configure Cobbler Management of DNS

1. Configure the following settings in /etc/cobbler/settings. The complete settings file can be found in Appendix A.2:
   manage_dns: 1
   restart_dns: 1
   bind_bin: /usr/sbin/named
   named_conf: /etc/named.conf
   manage_forward_zones:
   - 'ra.rh.com'
   manage_reverse_zones:
   - ' '
   - ' '
   - ' '
   - ' '
2. Verify the [dns] section of /etc/cobbler/modules.conf is set as:
   module = manage_bind
3. Create /etc/cobbler/named.template based on the existing /etc/named.conf created earlier. Modifications required include:
   removing references to zones that will be managed as specified in /etc/cobbler/settings
   adding a section with macros for the managed zones

// Red Hat BIND Configuration Tool
//
// Default initial "Caching Only" name server configuration
//
options {
    forwarders { port 53; port 53; };
    forward first;
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";

    statistics-file "/var/named/data/named_stats.txt";
};
zone "." IN {
    type hint;
    file "named.root";
};
zone "localdomain." IN {
    type master;
    file "localdomain.zone";
    allow-update { none; };
};
zone "localhost." IN {
    type master;
    file "localhost.zone";
    allow-update { none; };
};
zone " in-addr.arpa." IN {
    type master;
    file "named.local";
    allow-update { none; };
};
zone " ip6.arpa." IN {
    type master;
    file "named.ip6.local";
    allow-update { none; };
};
zone "255.in-addr.arpa." IN {
    type master;
    file "named.broadcast";
    allow-update { none; };
};
zone "0.in-addr.arpa." IN {
    type master;
    file "named.zero";
    allow-update { none; };
};
#for $zone in $forward_zones
zone "${zone}." {
    type master;
    file "$zone";
};
#end for
#for $zone, $arpa in $reverse_zones
zone "${arpa}." {
    type master;
    file "$zone";
};
#end for
include "/etc/rndc.key";

60 4. Note: Zone files will be named as specified in /etc/cobbler/settings, changed from the original name specified in /etc/named.conf 5. Create zone templates a) Create /etc/cobbler/zone_templates mkdir /etc/cobbler/zone_templates b) Copy zone files for the managed zones from /var/named to /etc/cobbler/zone_templates changing to specified name and appending $host_record to the end of the contents of each file. 6. Synchronize cobbler controlled files cobbler sync 7. Verify generated /etc/named.conf Configure Cobbler Management of PXE 1. Configure the following settings in /etc/cobbler/settings. The complete settings file can be found in Appendix A.2 2. Verify the following setting in /etc/cobbler/settings next_server: ra-sat-vm.ra.rh.com 3. Edit /etc/xinetd.d/tftp to verify the following entry... disable=no 4. Add/Verify that /etc/cobbler/dhcp.template has the following entries: filename "pxelinux.0"; range dynamic-bootp ; next-server ; # satellite address 5. Synchronize cobbler controlled files cobbler sync 6. Verify a system can PXE boot 60
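The PXE entries added to /etc/cobbler/dhcp.template can be checked with a short script before running cobbler sync. This is a sketch: a sample template fragment is built here for illustration (the hostname stands in for the elided next-server address), and the real check would read /etc/cobbler/dhcp.template.

```shell
# Confirm the PXE-related entries exist in a dhcp.template.
# Sketch: sample fragment built here -- point "tmpl" at /etc/cobbler/dhcp.template.
tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
filename "pxelinux.0";
range dynamic-bootp;
next-server ra-sat-vm.ra.rh.com;  # satellite address (placeholder value)
EOF
pxe_ok=yes
grep -q 'filename "pxelinux.0";' "$tmpl" || pxe_ok=no
grep -q '^next-server ' "$tmpl" || pxe_ok=no
rm -f "$tmpl"
echo "pxe entries: $pxe_ok"
```

A template missing either entry reports "pxe entries: no", which would leave PXE clients unable to fetch the bootloader.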

7.4 Build Luci VM

Create and configure the virtual machine on which luci will run for cluster management.

1. On a management cluster node, create a network bridge for the cluster interconnect.
   Create bridge configuration file /etc/sysconfig/network-scripts/ifcfg-ic0:
   DEVICE=ic0
   BOOTPROTO=none
   ONBOOT=yes
   TYPE=Bridge
   IPADDR=<IP address>
   NETMASK=<IP mask>
   Modify the existing interconnect network ifcfg-eth# file as follows:
   add BRIDGE=ic0
   confirm BOOTPROTO=none
   remove/comment out any static IP address
   Verify the bridge configuration with a network restart:
   service network restart

   Reboot the node to make system services aware of the network changes.
2. Create a storage volume (e.g., luci_disk) of appropriate size. See section 6.3 for greater detail on adding and presenting LUNs from storage.
3. Using virt-manager, create the luci VM using the following input:
   Name: ra-luci-vm
   Set Virtualization Method: Fully virtualized
   CPU architecture: x86_64
   Hypervisor: kvm
   Select Local install media installation method
   OS Type: Linux
   OS Variant: Red Hat Enterprise Linux 5.4 or later
   Specify preferred installation media
   Specify Block device storage location (e.g., /dev/mapper/luci_disk)
   Specify Shared physical device network connection (e.g., cumulus0)
   Max memory: 2048
   Startup memory: 2048
   Virtual CPUs: 2
4. Install OS: Red Hat Enterprise Linux 5.5 Advanced Platform
   Use local device (e.g., vda) for OS
   Use an obvious naming convention for the OS volume group (e.g., LuciVMVG)
   Deselect all software groups
   Configure network interface eth0 with static IP address
   Set SELinux to permissive mode
   Enable firewall
5. Open firewall ports 80, 443, and 8084:
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 80 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 80 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 443 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 443 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 8084 -j ACCEPT # luci
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 8084 -j ACCEPT # luci
   service iptables save
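The virt-manager inputs above can also be expressed as a single virt-install command line. The sketch below only assembles and prints the command; the option spellings assume the RHEL 5 virt-install tool, and the ISO path is hypothetical.

```shell
# Assemble (but do not execute) a virt-install command equivalent to the
# virt-manager inputs above. Sketch: option names assume the RHEL 5
# virt-install tool; the install media path is a made-up example.
cmd="virt-install --name ra-luci-vm --hvm --arch x86_64 \
--ram 2048 --vcpus 2 \
--disk path=/dev/mapper/luci_disk \
--network bridge=cumulus0 \
--os-variant rhel5.4 \
--cdrom /var/lib/libvirt/images/rhel55.iso"
echo "$cmd"
```

Printing the command first allows it to be reviewed against the inputs listed in step 3 before being run on the management node.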

7.5 Install Second Management Node

Install and configure the second of the nodes that will comprise the management services cluster.

1. Disable fibre channel connectivity with the system (e.g., switch port disable, cable pull, HBA disable, etc.).
2. Install Red Hat Enterprise Linux 5.5 Advanced Platform:
   a) Use an obvious naming convention for the operating system volume group (e.g., <hostname>cloudvg).
   b) Include the Clustering and Virtualization software groups when selecting software components.
   c) Select the Customize Now option and highlight the Virtualization entry at left. Check the box for KVM. Ensure Virtualization is unchecked.

   d) When prompted, configure the preferred network interface using:
      a static IP
      the NAT server IP address as a default route
      IP addresses for locally configured DNS
   e) Set SELinux to permissive mode
   f) Enable the firewall (iptables), leaving ports open for ssh, http, and https.
3. Configure Secure Shell (ssh) keys
4. Update /etc/hosts with known addresses for NAT, DNS, etc.
5. Edit /etc/resolv.conf to contain the following:
   search ra.rh.com
   nameserver   # satellite system IP
6. Configure NTP using the following commands:
   service ntpd start
   chkconfig ntpd on
7. Modify firewall rules to include openais, rgmanager, ricci, dlm, cssd, and vnc using the following commands:
   iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dports 5404,5405 -j ACCEPT # openais
   iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 41966,41967,41968,41969 -j ACCEPT # rgmanager
   iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 11111 -j ACCEPT # ricci
   iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 21064 -j ACCEPT # dlm
   iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dports 50006,50008,50009 -j ACCEPT # cssd
   iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dports 50007 -j ACCEPT # cssd
   iptables -I RH-Firewall-1-INPUT -m state --state NEW -p tcp --destination-port <port> -j ACCEPT # vnc
   iptables -I RH-Firewall-1-INPUT -m state --state NEW -p tcp --destination-port <port> -j ACCEPT # vnc
   service iptables save
   service iptables restart
8. Disable the ACPI daemon to allow an integrated fence device to shut down a server immediately rather than attempting a clean shutdown:
   chkconfig acpid off
9. Configure device-mapper
   a) Enable device-mapper multipathing using the following commands:
      yum install device-mapper-multipath
      chkconfig multipathd on
      service multipathd start
   b) Edit /etc/multipath.conf accordingly to alias known devices
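The cluster firewall rules in step 7 above all follow one pattern, so they can be generated from a protocol/port table instead of being typed individually. A sketch that only prints the commands; review the output, then run it (or pipe it to sh) on the node itself. The table below carries a subset of the services from step 7.

```shell
# Print cluster firewall rules from a protocol/ports/service table.
# Sketch: emits the commands only; nothing is applied to the firewall here.
gen_rules() {
  while read -r proto ports svc; do
    printf 'iptables -I RH-Firewall-1-INPUT -m state --state NEW -m multiport -p %s --dports %s -j ACCEPT  # %s\n' \
      "$proto" "$ports" "$svc"
  done <<'EOF'
udp 5404,5405 openais
tcp 11111 ricci
tcp 21064 dlm
EOF
}
rules=$(gen_rules)
echo "$rules"
```

Extending the table to the remaining services (rgmanager, cssd, vnc) keeps all node firewalls consistent from a single list.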

65 10. Create cluster interconnect bridged network. Create bridge configuration file /etc/sysconfig/network-scripts/ifcfg-ic0 DEVICE=ic0 BOOTPROTO=none ONBOOT=yes TYPE=Bridge IPADDR=<IP address> NETMASK=<IP mask> Modify existing interconnect network file (e.g., ifcfg-eth#) as follows: add BRIDGE=ic0 confirm BOOTPROTO=none confirm ONBOOT=yes remove/comment out any static IP address Verify bridge configuration with a network restart: service network restart 11. Convert primary network of management system to bridge to allow sharing. a) Create network bridge for virtualization: Create bridge configuration file /etc/sysconfig/network-scripts/ifcfg-cumulus0 DEVICE=cumulus0 TYPE=Bridge BOOTPROTO=static IPADDR= NETMASK= GATEWAY= ONBOOT=yes b) Modify the existing public network file (e.g., ifcfg-eth#) add BRIDGE=cumulus0 confirm BOOTPROTO=none remove/comment out any static IP address c) Restart network, confirming the bridge comes online service network restart 12. Enable fibre channel connectivity disabled in step Reboot to discover fibre channel devices and make system services aware of network changes. 65
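The two bridge files created in steps 10 and 11 share one shape, so generating them from variables avoids typos. A sketch that writes to a temporary directory; on a real node the file belongs in /etc/sysconfig/network-scripts/, and the <IP address>/<IP mask> placeholders come from this section.

```shell
# Generate an ifcfg bridge file from variables (sketch; written to a temp
# dir here -- real target is /etc/sysconfig/network-scripts/).
dir=$(mktemp -d)
BR=ic0
IP='<IP address>'
MASK='<IP mask>'
cat > "$dir/ifcfg-$BR" <<EOF
DEVICE=$BR
BOOTPROTO=none
ONBOOT=yes
TYPE=Bridge
IPADDR=$IP
NETMASK=$MASK
EOF
lines=$(wc -l < "$dir/ifcfg-$BR")
grep -q '^TYPE=Bridge$' "$dir/ifcfg-$BR" && bridge_ok=yes || bridge_ok=no
rm -rf "$dir"
echo "$lines lines, bridge=$bridge_ok"
```

Rerunning with BR=cumulus0 and the public network values produces the second bridge file of step 11 from the same template.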

66 7.6 Configure RHCS Now that the clustering software is present on the targeted cluster nodes and the luci server, the clustering agent and server modules can be engaged. Figure Start the ricci service on each server that will join the cluster: service ricci start 2. On the remote server on which luci was installed, an administrative password must be set using luci_admin before the service can be started: luci_admin init 3. Restart luci: service luci restart 66

4. The first time luci is accessed via a web browser at https://<luci_servername>:8084, the user will need to accept two SSL certificates before being directed to the login page.
5. Enter the login name and chosen password to view the luci home page.
6. In the Luci Home page, click on the cluster tab at the top of the page and then on Create a New Cluster from the menu bar at left. In the cluster creation window, enter the preferred name for the cluster (15 characters maximum), the host names assigned to the local interconnect of each server, and their root passwords. This window also provides options to:
   use the clustering software already present on the system or download the required packages
   enable shared storage support
   reboot the systems prior to joining the new cluster
   check to verify that system passwords are identical
   view the SSL certificate fingerprints of each server
7. Note that it is possible to use the external hostnames of the servers to build a cluster. This means that the cluster will use the public LAN for its inter-node communications and heartbeats. It also means that the server running luci will need to be able to access the clustered systems on the same public LAN. A safer, strongly recommended configuration is to use the interconnect names (or their IP addresses) when building the cluster. This will require that the luci server also have a

connection to the private LAN and removes the possibility of public I/O traffic interfering with the cluster activities.
8. Click the Submit button to download (if selected) and install the cluster software packages onto each node, create the cluster configuration file, propagate the file to each cluster member, and start the cluster. This then displays the main configuration window for the newly created cluster. The General tab displays the cluster name and provides a method for modifying the configuration version and advanced cluster properties.
9. The Fence tab will display the fence and XVM daemon properties window. While the default value of Post-Join Delay is 3, a more practical setting is between 20 and 30 seconds, though it can vary to user preference. For this effort, the Post-Join Delay was set to 30 seconds while default values were used for the other parameters. Set the Post-Join Delay value as preferred and click Apply.
10. The Multicast tab displays the multicast configuration window. The default option to Let cluster choose the multicast address is selected because Red Hat Cluster software chooses the multicast address for management communication across clustered nodes. If the user must use a specific multicast address, click Specify the multicast

address manually, enter the address and click Apply for changes to take effect. Otherwise, leave the default selections alone.
11. The Quorum Partition tab displays the quorum partition configuration window. Reference the Considerations for Using Quorum Disk and Global Cluster Properties sections of Configuring and Managing a Red Hat Cluster for further considerations regarding the use of a cluster quorum device. To understand the use of quorum disk parameters and heuristics, refer to the qdisk(5) man page.
    Create a storage volume (e.g., qdisk) of appropriate size. See section 6.3 for greater detail on adding and presenting LUNs from storage.
    The mkqdisk command will create the quorum partition. Specify the device and a unique identifying label:
    mkqdisk -c /dev/mapper/qdisk -l q_disk
    Now that an appropriate label has been assigned to the quorum partition or disk, configure the newly labeled q_disk as the cluster quorum device. Once the preferred quorum attributes have been entered and any desired heuristic(s),

70 and their respective scores, have been defined, click Apply to create the quorum device. If further information regarding quorum partition details and heuristics is required, please reference: the Considerations for Using Quorum Disk and Global Cluster Properties sections of Configuring and Managing a Red Hat Cluster the Cluster Project FAQ Red Hat Knowledgebase Article ID the qdisk(5) man page 12. Once the initial cluster creation has completed, configure each of the clustered nodes. 70
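For reference, the quorum device and heuristic configured through luci in step 11 end up as a quorumd block in /etc/cluster/cluster.conf. A minimal sketch: the label comes from the mkqdisk step above, while the interval, tko, and score values and the ping target are illustrative assumptions, not values from this effort.

```xml
<!-- Illustrative quorumd fragment; attribute values and the ping target
     are example assumptions -- tune per the qdisk(5) man page. -->
<quorumd interval="1" tko="10" votes="1" label="q_disk">
  <heuristic program="ping -c1 -t1 192.168.1.1" score="1" interval="2"/>
</quorumd>
```

Inspecting this fragment after clicking Apply is a quick way to confirm luci recorded the intended label and heuristic.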

71 13. A failover domain is a chosen subset of cluster members that are eligible to run a cluster service in the event of a node failure. From the cluster details window, click Failover Domains and then Add a Failover Domain. 71

72 14. Click on the Fence tab to configure a Fence Daemon. 72

73 15. Click on the Add a fence device for this level link at the bottom of the system details page to reveal the Fence Device form. Enter the information for the fence device being used. Click on Update main fence properties to proceed. 73

7.7 Configure VMs as Cluster Services

7.7.1 Create Cluster Service of Satellite VM

1. If running, shut down the satellite VM prior to configuring it as a cluster service. When the Automatically Start this Service option is enabled for a cluster service, the service starts as soon as it is created, which would conflict with an already running satellite VM.
   virsh shutdown ra-sat-vm
2. In the luci cluster configuration window, select the following links: Services -> Add a Virtual Machine Service and enter the information necessary to create the service:
   VM name: ra-sat-vm
   Path to VM Configuration Files: /etc/libvirt/qemu
   Leave VM Migration Mapping empty
   Migration Type: live
   Hypervisor: KVM
   Check the box to Automatically Start this Service
   Leave the NFS Lock Workaround and Run Exclusive boxes unchecked
   FO Domain: ciab_fod
   Recovery Policy: Restart
   Max restarts: 2
   Length of restart: 60
   Select Update Virtual Machine Service

7.7.2 Create Cluster Service of Luci VM

When creating a service of the Luci VM, the VM cannot be shut down prior to configuring the service because Luci is required to create the service itself. Given this, the service is first created without the auto start option, then modified afterward to enable it.

1. In the luci cluster configuration window, select the following links: Services -> Add a Virtual Machine Service and enter the information necessary to create the service:
   VM name: ra-luci-vm
   Path to VM Configuration Files: /etc/libvirt/qemu
   Leave VM Migration Mapping empty
   Migration Type: live
   Hypervisor: KVM
   Leave the Automatically Start this Service, NFS Lock Workaround, and Run Exclusive boxes unchecked
   FO Domain: ciab_fod
   Recovery Policy: Restart
   Max restarts: 2
   Length of restart: 60
   Select Update Virtual Machine Service
2. In the luci cluster configuration window, select the following links: Services -> Configure a Service -> ra-luci-vm:
   Check the box to Automatically Start this Service
   Select Update Virtual Machine Service
3. Start the luci service on a management cluster node:
   clusvcadm -e vm:ra-luci-vm

7.8 Configure NFS Service (for ISO Library)

Create and configure an NFS cluster service to provide storage for the RHEV-M ISO image library.

1. Modify firewall rules on all nodes in the management cluster:
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 2049 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 111 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 111 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 662 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 662 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 875 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 875 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 892 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 892 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
   iptables -I RH-Firewall-1-INPUT -p udp -m udp --dport 32769 -j ACCEPT
2. Edit /etc/sysconfig/nfs on each node in the management cluster to verify the following lines are uncommented as shown in the file excerpt below:
   RQUOTAD_PORT=875
   LOCKD_TCPPORT=32803
   LOCKD_UDPPORT=32769
   MOUNTD_PORT=892
   STATD_PORT=662
   STATD_OUTGOING_PORT=
3. Enable NFS service:
   chkconfig nfs on
4. Create a storage volume (e.g., rhev-nfs-fs) of appropriate size. See section 6.3 for greater detail on adding and presenting LUNs from storage.
5. Create and check the file system on the target volume:
   mkfs -t ext3 /dev/mapper/rhev-nfs-fs
   fsck -y /dev/mapper/rhev-nfs-fs
6. In the luci cluster configuration window:
   a) Select the following links: Resources -> Add a Resource
      Select type: IP Address
      Enter reserved IP address
      Click Submit
   b) Select the following links: Resources -> Add a Resource
      Select type: File System
      Enter name
      Select ext3
      Enter mountpoint: /rhev
      Path to mapper dev [e.g., /dev/mapper/rhev-nfs]
      Options: rw
      Click Submit
   c) Select the following links: Resources -> Add a Resource
      Select type: NFS Export
      Enter export name
      Click Submit
   d) Select the following links: Resources -> Add a Resource

78 Select type: NFS Client Enter name Enter FQDN of first management cluster node Options: rw Check the Allow to Recover box Click Submit e) Select the following links: Resources -> Add a Resource Select Type: NFS Client Enter name Enter FQDN of second management cluster node Options: rw Check the Allow to Recover box Click Submit f) Select the following links: Services -> Add a Service Service name: rhev-nfs Check the box to Automatically Start this Service Leave NFS lock workaround and Run Exclusive boxes unchecked 78

79 FO Domain: ciab_fod Recovery Policy: Restart Max restarts: 2 Length of restart: 60 Select Update Virtual Machine Service NOTE: When configuring the NFS export resource for an NFS service, it must be configured as a child of the File System resource. Additionally, each NFS client resource for an NFS service must be configured as a child of the NFS export resource. g) Following the child configuration rule as described in the previous step, add each of the above resources created in steps 'a' through 'e' (IP, NFS Export, both NFS Clients) to the rhev-nfs service using the "Add a resource to this service" button. 79
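The fixed NFS ports set earlier in /etc/sysconfig/nfs can be verified with a short script. A sketch: the sample file below mirrors the excerpt in step 2, and the real check would read /etc/sysconfig/nfs on each management cluster node.

```shell
# Check that the NFS daemons are pinned to the expected fixed ports.
# Sketch: a sample file is built here in place of the real /etc/sysconfig/nfs.
nfs_cfg=$(mktemp)
cat > "$nfs_cfg" <<'EOF'
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
EOF
pinned=0
for kv in RQUOTAD_PORT=875 LOCKD_TCPPORT=32803 LOCKD_UDPPORT=32769 MOUNTD_PORT=892 STATD_PORT=662; do
    grep -q "^${kv}$" "$nfs_cfg" && pinned=$((pinned + 1))
done
rm -f "$nfs_cfg"
echo "pinned ports: $pinned/5"
```

Pinning these ports matters because the firewall rules of step 1 only pass the fixed values; an unpinned daemon would pick a random port and be blocked.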

7.9 Create RHEV Management Platform

7.9.1 Create VM

Create the virtual machine where the RHEV-M software will reside.

1. Create a storage volume (e.g., rhevm_disk) of appropriate size. See section 6.3 for greater detail on adding and presenting LUNs from storage.
2. Use virt-manager to create the RHEV-M VM:
   Name: rhevm-vm
   Set Virtualization Method: Fully virtualized
   CPU architecture: x86_64
   Hypervisor: kvm
   Select Local install media installation method
   OS Type: Windows
   OS Variant: Microsoft Windows 2008
   Specify preferred installation media
   Specify Block device storage location (e.g., /dev/mapper/rhevm_disk)
   Specify Shared physical device network connection (e.g., cumulus0)

   Max memory: 2048
   Startup memory: 2048
   Virtual CPUs: 2
3. Install Windows Server 2008 R2 Enterprise:
   a) Reference Section 15 [Installing with a virtualized floppy disk] of the Red Hat Virtualization Guide for instruction on installing the para-virtualized drivers during a Windows installation. Proceed with installation.
   b) Select language preference
   c) Select OS: Windows Server 2008 R2 Enterprise (Full Installation)
   d) Accept license terms
   e) Select Custom (Advanced) to install a new copy of Windows
   f) Load the PV driver if the installer fails to identify any devices on which to install
   g) After the system reboots (twice) and prepares for first use, set the password when prompted
   h) The Initial Configuration Tasks window provides the opportunity to:
      activate Windows
      set the time zone
      enable automatic updates
      install available updates
   i) Disable Windows firewall

7.9.2 Create Cluster Service of VM

1. In the luci cluster configuration window, select the following links: Services -> Add a Virtual Machine Service and enter the following:
   VM Name: rhevm-vm
   Path to VM Configuration Files: /etc/libvirt/qemu
   VM Migration Mapping: (leave empty)
   Migration Type: live
   Hypervisor: KVM
   Check the box to Automatically Start this Service
   Leave the NFS Lock Workaround and Run Exclusive boxes unchecked
   Failover Domain: ciab_fod
   Recovery Policy: Restart
   Max Restarts: 2
   Length of Restart: 60
   Select Update Virtual Machine Service

7.9.3 Install RHEV-M Software

This release of the Red Hat Enterprise Virtualization Manager was hosted on Microsoft Windows Server 2008 R2 Enterprise.

1. Open the TCP port on each management cluster node:
   iptables -I RH-Firewall-1-INPUT -p tcp --dport <port> -m state --state NEW -j ACCEPT
2. Install Windows Server 2008 R2 Enterprise and any applicable updates.
3. RHEV Manager utilizes the .NET Framework. Verify that .NET Framework 3.5 is present on the system. In Windows Server 2008 R2, .NET Framework can be enabled in the Server Manager (Start -> All Programs -> Administrative Tools -> Server Manager, if it does not auto start at login). Once started, click Features to expand the category. .NET Framework is the first feature in the list of features to enable. If there are features already enabled and .NET Framework is not listed among them, click Add Features to see the list of remaining features and install it.
4. Red Hat requires that Windows PowerShell 2.0 be installed. It is included in the Windows 2008 R2 installation, but if it is not present on the system, the appropriate version for the OS can be obtained by searching the Microsoft web site. If PowerShell has been installed on the system, it will have its own icon in the Windows taskbar, or a command window can be opened by typing 'powershell' in the Run... dialog box of the Start menu.
5. System and user authentication can be local or through the use of an Active Directory Domain. If there is an existing domain, an administrator can join it using the Computer Name tab of the System Properties window. Another option is to configure the system which runs the RHEV Manager software as a domain controller.

83 6. Prior to installing the RHEV Management software, repeat visits to Windows Update until there are no more applicable updates. Additionally, configure the system to schedule automatic Windows updates. 7. The RHEV-M installation program must be available to the server. While an ISO image containing the needed software can be downloaded using the download software link, the following procedure will reliably find the software components. From Red Hat Network using an account with the RHEV for Servers entitlement, select the Red Hat Enterprise Virtualization Manager Channel filter in the Channels tab. Expand the Red Hat Enterprise Virtualization entry and select the appropriate architecture for the product to be installed. Select the Downloads link near the top of the page. Select the Windows Installer to download the RHEV Manager installation program. While on this page, also download the images for Guest Tools ISO, VirtIO Drivers VFD, and VirtIO Drivers ISO. 83


8. Execute the installation program (e.g., rhevm exe). After the initial screen, accept the End User License Agreement. When the feature checklist screen is displayed, verify that all features have been selected.
9. Choose either to use an existing SQL Server DB or to install the express version locally. After selecting to install SQLEXPRESS, a strong password must be entered for the 'sa' user. The destination folder for the install may be changed. The destination web site for the portal can be chosen next, or the defaults are used.
10. On the next screen, specify whether to use Domain or local authentication. If local is used, provide the user name and password for an account belonging to the Administrators group.
11. In the next window, enter the organization and computer names for use in certificate generation. The option to change the net console port is provided. Proceeding past the Review screen, the installation begins. The installation process prompts the administrator to install OpenSSL, which provides secure connectivity to Red Hat Enterprise Virtualization Hypervisor and Red Hat Enterprise Linux as well as other systems. pywin32 is installed on the server. If selected, as in this case, SQLEXPRESS is installed. The RHEV Manager installs with no further interaction other than when

the install has completed.
12. Click the Finish button to complete the installation.
13. Verify the install by starting RHEV Manager. From the Start menu, select All Programs -> Red Hat -> RHEV Manager -> RHEVManager. The certificate is installed during the first portal access. At the Login screen, enter the User Name and Password for the RHEV administrator, specified during installation, to reach the administration portal.

7.9.4 Configure the Data Center

Create and configure a data center with a storage pool and a populated ISO image library for VM installations.

1. Create a new data center. In RHEV-M in the Data Centers tab, click the New button:
   Name: (e.g., Cloud_DC1)
   Description: [optional]
   Type: FCP
   Compatibility Version:
2. Create a new cluster within the data center. In the Clusters tab, click the New button:
   Name: (e.g., dc1-clus1)
   Description: [optional]
   Data Center: Cloud_DC1
   Memory Over Commit: Server Load
   CPU Name: (e.g., Intel Xeon)
   Compatibility Version:
3. Add a host. Reference Sections 8.1 and 8.4 for the instructions to add a host.
4. Create the storage pool. Assuming a LUN for use as the storage pool exists and has been presented to all target hosts of this data center, select the Storage tab in RHEV Manager and click New Domain:

88 Name: (e.g., fc1_1tb) Domain Function: Data Storage Type: FCP Leave Build New Domain selected Ensure the correct host name is selected in the 'Use host' list Select the desired LUN from the list of Discovered LUNs and click Add to move it to the Selected LUNs window Click OK 5. Create the ISO Library. Select the Storage tab in RHEV Manager and click New Domain: Name: (e.g., ISO Library) Domain Function: ISO Storage Type: NFS Export Path: enter <server>:<path> to the exported mount point (e.g., rhev-nfs.ra.rh.com:/rhev/iso_library) Click OK 6. Attach the ISO library and storage pool to the data center. In the Data Centers tab, select/highlight the newly created data center: Click the Storage tab in the lower half of the window Click the Attach Domain button Select the check box corresponding to the newly created storage pool and click OK Click the Attach ISO button Select the check box corresponding to the newly created ISO image library Click OK 7. Populate the ISO library. The Guest Tools and VirtIO driver images that were downloaded when the RHEV Manager installer was downloaded are recommended software for availability in the ISO Library as well as any OS images desired for VM OS installs. NOTE: User must be Administrator to run RHEV Apps until BZ is resolved. On the RHEV Manager system, select Start -> All Programs -> Red Hat -> RHEV Manager -> ISO Uploader In the Red Hat Virtualization ISO Uploader window, press the Add button to select any or all of the images (.iso,.vfd) previously downloaded Select the correct Data Center from the pull down list Click the Upload button 88

89 8 Deploying VMs in Hypervisor Hosts After creating the cloud infrastructure management services, the next steps involve creating RHEV-H and RHEL hypervisor hosts. Once done, create VMs within those hosts for each possible use-case. For RHEV-H host: 1. Use Satellite to provision RHEV-HOST-1 2. Provision VMs on RHEV-HOST-1 Deploy RHEL VMs using: Use Case 1: ISO libraries via NFS service Use Case 2: Template via RHEV-M Use Case 3: PXE via Satellite Deploy Windows VMs using: Use Case 1: ISO libraries via NFS service Use Case 2: Template via RHEV-M For RHEL host: 1. Shut down VMs and put RHEV-H hosts in maintenance mode 2. Use Satellite to provision RHEL-HOST-1 3. Use RHEV-M to incorporate RHEL-HOST-1 as a RHEV host 4. Provision VMs on RHEL-HOST-1 Deploy RHEL VMs using: Use Case 1: ISO libraries via NFS service Use Case 2: Template via RHEV-M Use Case 3: PXE via Satellite Deploy Windows VMs using: Use Case 1: ISO libraries via NFS service Use Case 2: Template via RHEV-M 89

8.1 Deploy RHEV-H Hypervisor

The RHEV Hypervisor is a live image, delivered as a bootable ISO that installs the live image onto the local machine. Because Red Hat Enterprise Linux is primarily delivered as a collection of packages, RHN Satellite is used to manage those packages. Since the RHEV hypervisor is delivered as a live image, it cannot currently be managed in a satellite channel; however, it can be configured as a PXE bootable image using cobbler.

1. Enable PXE of the RHEV-H live image by performing the following procedures on the Satellite VM:
   a) Download the RHEV Hypervisor Beta RPM (e.g., rhev-hypervisor el5rhev.noarch.rpm)
      Login to the RHN Web Site
      Locate the Search field near the top of the page
      Select Packages search
      Enter 'rhev-hypervisor' in the search box
      Select rhev-hypervisor
      Select Beta (e.g., rhev-hypervisor el5rhev.noarch)

      Near the bottom of this page, select the Download Package link
      Install the package:
      rpm -ivh rhev-hypervisor el5rhev.noarch.rpm
   b) Since later versions may be installed (e.g., Beta 2 or GA), rename the file to be identifiable:
      cd /usr/share/rhev-hypervisor
      mv rhev-hypervisor.iso rhev-hypervisor-2.2beta1.iso
   c) Using the livecd tools which were installed with the hypervisor package, generate the files needed for PXE:
      livecd-iso-to-pxeboot rhev-hypervisor-2.2beta1.iso
   d) Also rename the generated tftpboot subdirectory to be more specific:
      mv tftpboot tftpboot2.2beta1
   e) Create a cobbler distro from the tftpboot files, ignoring warnings related to exceeding kernel options length:
      cobbler distro add --name="rhevh_2.2beta1" --kernel=/usr/share/rhev-hypervisor/tftpboot2.2beta1/vmlinuz0 --initrd=/usr/share/rhev-hypervisor/tftpboot2.2beta1/initrd0.img --kopts="rootflags=loop root=/rhev-hypervisor-2.2beta1.iso rootfstype=auto liveimg"
   f) Create a cobbler profile which uses the recently created distro; this will be used for interactive installations of the hypervisor:
      cobbler profile add --name=rhevh_2.2beta1 --distro=rhevh_2.2beta1
   g) Create an additional cobbler profile supplying additional kernel options which will automate the hypervisor configuration and installation:
      cobbler profile add --name=rhevh_2.2beta1auto --distro=rhevh_2.2beta1 --kopts="storage_init=/dev/cciss/c0d0 storage_vol=::::: BOOTIF=eth0 management_server=ra-rhevm-vm.ra.rh.com netconsole=ra-rhevm-vm.ra.rh.com"
   h) Synchronize cobbler's configuration with the system filesystems:
      cobbler sync
2. Prepare cobbler to PXE boot the system
   a) If the system does not have a cobbler system record, create one:
      cobbler system add --name=vader.ra.rh.com --profile=rhevh_2.2beta1auto --mac=00:25:b3:a8:6f:19 --ip= --hostname=vader.ra.rh.com --dns-name=vader.ra.rh.com
   b) If the system does have a cobbler system record, modify it to use the automated profile:
      cobbler system edit --name=vader.ra.rh.com --profile=rhevh_2.2beta1auto
      cobbler sync

3. PXE boot the system
   a) Disable fibre channel connectivity to the system (e.g., switch port disable, cable pull, HBA disable, etc.)
   b) Interact with the BIOS to start the PXE boot. The system will install.
   c) Enable the fibre channel connectivity that was disabled in step a)

4. At the RHEV Manager Host tab, approve the system
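The cobbler steps above (e through h) lend themselves to a small script. The sketch below only assembles the command strings and prints them (a dry run) so it can be reviewed on any machine; the version string and paths are the example values used in this section, and on a real Satellite host the final echo loop would be replaced by executing each command.

```shell
#!/bin/sh
# Dry-run sketch: assemble the cobbler commands that publish a RHEV-H
# live image for PXE boot. VER and the paths are this section's examples.
VER="2.2beta1"
HYP_DIR="/usr/share/rhev-hypervisor"
TFTP_DIR="$HYP_DIR/tftpboot$VER"

# Build each command first so it can be inspected or logged before use.
CMD_DISTRO="cobbler distro add --name=rhevh_$VER \
  --kernel=$TFTP_DIR/vmlinuz0 --initrd=$TFTP_DIR/initrd0.img \
  --kopts=\"rootflags=loop root=/rhev-hypervisor-$VER.iso rootfstype=auto liveimg\""
CMD_PROFILE="cobbler profile add --name=rhevh_$VER --distro=rhevh_$VER"
CMD_SYNC="cobbler sync"

# Dry run: print instead of executing. On a Satellite host with cobbler
# installed, replace 'echo' with 'eval' to run the commands.
for c in "$CMD_DISTRO" "$CMD_PROFILE" "$CMD_SYNC"; do
    echo "$c"
done
```

The same pattern extends naturally to the automated profile and the per-host system records.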

8.2 Deploy RHEL Guests (PXE / ISO / Template) on RHEV-H Host

Figure Deploying RHEL VMs using PXE

1. Configure Activation Key
   a) Starting at the satellite home page for the tenant user, select the following links: Systems -> Activation Keys -> create new key, and provide the information below:
      Description: (e.g., RHEL55key)
      Base Channel: rhel5-5-x86_64-server
      Add-On Entitlements: Monitoring, Provisioning
      Create Activation Key
   b) Select the Child Channels tab, add RHN Tools, select Update Key

2. Configure Kickstart
   a) Starting at the satellite home page for the tenant user, select the following links: Systems -> Kickstart -> Profiles -> create new kickstart profile, and provide the information below:
      Label: (e.g., RHEL55guest)
      Base Channel: rhel5-5-x86_64-server
      Kickstartable Tree: rhel55_x86-64
      Virtualization Type: None
      Select Next to accept the input and proceed to the next page
      Select Default Download Location
      Select Next to accept the input and proceed to the next page
      Specify New Root Password and Verify
      Click Finish
   b) In the Kickstart Details -> Details tab
      Log custom post scripts
      Click Update Kickstart
   c) In the Kickstart Details -> Operating System tab
      Select Child Channels (e.g., RHN Tools)
      Since this is a base-only install, verify no Repositories checkboxes are selected
      Click Update Kickstart
   d) In the Kickstart Details -> Advanced Options tab
      Verify reboot is selected
      Change firewall to --enabled
   e) In the System Details -> Details tab
      Enable Configuration Management and Remote Commands
      Click Update System Details
   f) In the Activation Keys tab
      Select RHEL55key
      Click Update Activation Keys

3. Confirm all active hosts are RHEV-H hosts; place any RHEL hosts into maintenance mode.

4. Create RHEV VM
   a) At the RHEV Manager Virtual Machines tab, select New Server

   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., rhel55guest1)
      Description: [optional]
      Host Cluster: (e.g., dc1-clus1)
      Template: [blank]
      Memory Size: (e.g., 2048)
      CPU Sockets: (e.g., 1)
      CPUs Per Socket: (e.g., 2)
      Operating System: Red Hat Enterprise Linux 5.x x64
   c) In the Boot Sequence tab, provide the following:
      Second Device: Network (PXE)
   d) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
      Type: Red Hat VirtIO
   e) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
      Size (GB): (e.g., 8)
      The defaults for the remaining entries are adequate

5. Boot VM
   a) In the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run button or the equivalent right mouse button menu option
   c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
   d) After the initial PXE boot, the Cobbler PXE boot menu will display; select the kickstart that was previously created (e.g., RHEL55guest:22:tenants)
   e) The VM will reboot when the installation is complete

Deploying RHEL VMs using ISO Library

1. If not already in place, populate the ISO library with the RHEL 5.5 ISO image, which can be downloaded via the RHN web site.
   NOTE: The user must be Administrator to run the RHEV applications until BZ is resolved.
   a) On the RHEV Manager system, select Start -> All Programs -> Red Hat -> RHEV Manager -> ISO Uploader
   b) In the Red Hat Virtualization ISO Uploader window, press the Add button to select any or all of the images (.iso, .vfd) previously downloaded
   c) Select the correct Data Center from the pull-down list
   d) Click the Upload button to place the ISO image into the ISO Library

2. Confirm all active hosts are RHEV-H hosts; place any RHEL hosts into maintenance mode.

3. Create RHEV VM
   a) At the RHEV Manager Virtual Machines tab, select New Server
   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., rhel55guest2)
      Description: [optional]
      Host Cluster: (e.g., dc1-clus1)
      Template: Blank
      Memory Size: (e.g., 2048)
      CPU Sockets: (e.g., 1)
      CPUs Per Socket: (e.g., 2)
      Operating System: Red Hat Enterprise Linux 5.x x64
   c) In the Boot Sequence tab, provide the following:
      Second Device: CD-ROM
      Select the Attach CD checkbox
      Specify the CD/DVD to mount (e.g., rhel-server-5.5-x86_64-dvd.iso)
   d) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
      Type: Red Hat VirtIO
      The defaults for the remaining entries are adequate
   e) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
      Size (GB): (e.g., 8)
      The defaults for the remaining entries are adequate

4. Boot VM
   a) At the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run button or the equivalent right mouse button menu option
   c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
   d) The VM will boot the DVD; the remaining installation will need to be performed through the console.
   e) After the software is installed, the VM will prompt to reboot. After the reboot, answer the First Boot interrogation. Since the Satellite's specific certificate is not local to the VM at this time, skip registering with RHN.
   f) After the system Login Screen displays, login and register with the Satellite.
      Install the Satellite certificate:

         rpm -ivh
      Start the rhn_register program and provide the following information:
         Select to receive updates from Red Hat Network Satellite
         Specify the Red Hat Network Location: (e.g., https://ra-sat-vm.ra.rh.com)
         Select and specify the SSL certificate: /usr/share/rhn/rhn-org-trusted-ssl-cert
         Provide the tenant user credentials
         Verify the System Name and send Profile Data

Deploying RHEL VMs using Templates

1. Confirm all active hosts are RHEV-H hosts; place any RHEL hosts into maintenance mode.

2. Create Template
   a) Prepare the template system to register with the satellite upon booting.
      Identify the activation key to use to register the system. The Activation Keys page (in the Systems tab) of the satellite will list the existing keys for each organization. Alternatively, if the system was PXE installed using satellite, the register command, which includes the key used, can be found in /root/cobbler.ks:
         grep rhnreg cobbler.ks
      The following commands place commands in the proper script to execute on the next boot:
         cp /etc/rc.d/rc.local /etc/rc.d/rc.local.pretemplate
         echo "rhnreg_ks --force --serverurl=https://ra-sat-vm.ra.rh.com/xmlrpc --sslcacert=/usr/share/rhn/rhn-org-trusted-ssl-cert --activationkey=22-f0b9a335f83c50ef9e5af6a520430aa1" >> /etc/rc.d/rc.local
         echo "mv /etc/rc.d/rc.local.pretemplate /etc/rc.d/rc.local" >> /etc/rc.d/rc.local
   b) Before shutting down the system which will be used to create the template, some level of clearing of the configuration settings should be performed. At a minimum, the hostname should not be hard-coded, as this can lead to confusion when the hostname does not match the IP currently assigned. The following commands remove the name that was set at install time; DHCP will set the name upon boot:
         cp /etc/sysconfig/network /tmp
         grep -v HOSTNAME= /tmp/network > /etc/sysconfig/network
      Alternatively, a more extensive method of clearing the configuration settings is to use the sys-unconfig command. sys-unconfig causes the system to reconfigure the network, authentication, and several other subsystems on the next boot.

   c) If not already shut down, shut down the VM
   d) At the RHEV Manager Virtual Machines tab, select the appropriate VM and select either the Make Template button or the right mouse button menu option:
      Name: (e.g., RHEL55_temp)
      Description: [optional]
      While the template is being created, the image is locked. Confirm the template exists in the Templates tab after the creation is complete.

3. Create New VM using template
   a) At the RHEV Manager Virtual Machines tab, select New Server
   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., rhel55guest3)
      Description: [optional]
      Template: (e.g., RHEL55_temp)
      Confirm or override the remaining entries
   c) If the Data Center and Cluster are set to v2.2 compatibility, the provisioning can be changed from thin to preallocated. In the Allocation tab, provide the following:
      Provisioning: Clone
      Disk 1: Preallocated

4. The newly created VM will have a Locked Image while being instantiated. When the process is complete, the VM is ready to boot.
   a) At the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run button or the equivalent right mouse button menu option
   c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
   d) This system will be known to satellite by its progenitor's ID; therefore it must register with Satellite. Start the rhn_register program and provide the following information:
      Select Yes, Continue when presented with the old system's registration
      Confirm to receive updates from Red Hat Network Satellite and the Red Hat Network Location: (e.g., https://ra-sat-vm.ra.rh.com)
      Provide the tenant user credentials
      Verify the System Name and send Profile Data
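The rc.local manipulation in step 2a of the template preparation above is a general "run once on next boot" pattern: save the pristine script, append the one-shot command, then append a command that restores the original. The following self-contained sketch demonstrates the pattern in a scratch directory, with a placeholder command standing in for the real rhnreg_ks invocation.

```shell
#!/bin/sh
# Sketch of the run-once rc.local pattern, using a scratch directory.
# 'echo register' stands in for the real rhnreg_ks command.
RCDIR=$(mktemp -d)
printf '#!/bin/sh\n# original rc.local\n' > "$RCDIR/rc.local"

# Save the pristine copy, then append the one-shot command followed by
# the command that restores the original script.
cp "$RCDIR/rc.local" "$RCDIR/rc.local.pretemplate"
echo "echo register >> $RCDIR/boot.log" >> "$RCDIR/rc.local"
echo "mv $RCDIR/rc.local.pretemplate $RCDIR/rc.local" >> "$RCDIR/rc.local"

# Simulate the first boot: the hook runs once and removes itself.
sh "$RCDIR/rc.local"
```

After the simulated boot, boot.log records that the one-shot command ran, and rc.local is back to its original contents.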

8.3 Deploy Windows Guests (ISO / Template) on RHEV-H Host

Figure Deploying Windows VMs using ISO Library

1. If not already in place, populate the ISO library with the ISO image or images needed for the Windows installation. The RHEV Tools CD and the VirtIO driver virtual floppy should also be in the ISO Library.
   NOTE: The user must be Administrator to run the RHEV applications until BZ is resolved.
   a) On the RHEV Manager system, select Start -> All Programs -> Red Hat -> RHEV Manager -> ISO Uploader
   b) In the Red Hat Virtualization ISO Uploader window, press the Add button to select any or all of the images (.iso, .vfd) previously downloaded

   c) Select the correct Data Center from the pull-down list
   d) Click the Upload button to place the ISO image into the ISO Library

2. Confirm all active hosts are RHEV-H hosts; place any RHEL hosts into maintenance mode.

3. Create Windows VM
   a) At the RHEV Manager Virtual Machines tab, select New Server
   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., w2k3guest1)
      Description: [optional]
      Host Cluster: (e.g., dc1-clus1)
      Template: Blank
      Memory Size: (e.g., 2048)
      CPU Sockets: (e.g., 1)
      CPUs Per Socket: (e.g., 2)
      Operating System: (e.g., Windows 2003)
   c) In the First Run tab:
      Provide a Domain if used
      Verify the Time Zone is correct
   d) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
      Type: Red Hat VirtIO
      The defaults for the remaining entries are adequate
   e) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
      Size (GB): (e.g., 12)
      The defaults for the remaining entries are adequate

4. Boot VM
   a) At the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run Once option of the Run button, or the Run Once option in the right mouse button menu, and provide the following entries:
      Select the Attach Floppy checkbox and indicate the virtio-drivers vfd should be mounted
      Select the Attach CD checkbox and indicate which CD/DVD should be mounted (e.g., Win2003R2-disk1.iso)
      Verify that Network is last in the Boot Sequence

   c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
   d) The VM will boot the DVD; the remaining installation will need to be performed through the console.
   e) Perform all the actions to install the operating system. Some versions of Windows will recognize the mounted floppy and automatically use the VirtIO disk driver; others may require the operator to select the drivers to load. If a second CD/DVD is required, activate the right mouse button menu on the VM and use the Change CD option located near the bottom of the options.
   f) The CD should be changed to the RHEV Tools CD using the right mouse button menu on the VM and the Change CD option located near the bottom of the options. Once the disk is mounted, the RHEV Tools found on this disk should be installed. This includes any VirtIO drivers not previously loaded (e.g., network).
   g) Red Hat recommends that all applicable Windows Updates be applied and that the Windows installation be activated.

Deploying Windows VMs using Templates

1. Confirm all active hosts are RHEV-H hosts; place any RHEL hosts into maintenance mode.

2. Create Template
   a) Windows systems should be sysprep'ed prior to being used as the source of a template. There are differences between the various versions of Windows, so it is best to consult the Microsoft documentation for the exact procedure. The highlights of the process for Windows 2003 are outlined below:
      Create a C:\sysprep folder
      Mount the installation ISO (disk 1) [assume drive D:]
      Extract the contents of Deploy.cab from D:\Support\Tools into the sysprep folder
      Create the sysprep.ini file by executing C:\sysprep\setupmgr.exe
         Select Create New
         Select Sysprep Setup
         Select the appropriate software version (e.g., Windows 2003 Server Standard Edition)
         Select to Fully Automate
         Provide a Name and Organization
         Specify the appropriate Time Zone
         Provide the Product Key
         Specify that the computer Name should be automatically generated
         Provide the desired Administrator password in encrypted format
         Finish
      Since this sysprep.ini file may be used on other instances, the user may want to copy it to a known shared location

      Execute sysprep.exe
         Do not reset the grace period for activation
         Shutdown Mode should be set to Shut down
         Reseal
      The sysprep process will shut down the VM
   b) At the RHEV Manager Virtual Machines tab, select the appropriate VM and select either the Make Template button or the right mouse button menu option:
      Name: (e.g., w2k3_temp)
      Description: [optional]
   c) While the template is being created, the image is locked. Confirm the template exists in the Templates tab after the creation is complete.

3. Create New VM using template
   a) At the RHEV Manager Virtual Machines tab, select New Server
   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., w2k3guest2)
      Description: [optional]
      Template: (e.g., w2k3_temp)
      Confirm or override the remaining entries
   c) If the Data Center and Cluster are set to v2.2 compatibility, the provisioning can be changed from thin to preallocated. In the Allocation tab, provide the following:
      Provisioning: Clone
      Disk 1: Preallocated

4. The newly created VM will have a Locked Image while being instantiated. When the process is complete, the VM is ready to boot.
   a) At the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run button or the equivalent right mouse button menu option
   c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
   d) Respond to any system prompts upon booting the VM

8.4 Deploy RHEL + KVM Hypervisor Host

Figure

1. The satellite certificate being used does not include an entitlement for the RHEV Management Agents beta channel, so a custom channel will be created.
   a) On the satellite server, create a directory to hold the packages:
         mkdir -p /var/satellite/mychannels/mgmtagents/
   b) Download the packages:
      Log in to Red Hat Network
      Select the Channels tab, then the All Beta Channels tab
      Filter on Red Hat Enterprise Linux
      Expand the base channel to list all the child channels
      Select the x86_64 link next to the Red Hat Enterprise Virtualization Management Agent 5 Beta option
      Select the Packages tab
      Select all 4 packages and select the Download Packages button
      The next page informs the user that the multiple packages will be combined into a tar file. Select the Download Selected Packages Now button. Save the tar file to the created directory.
   c) Extract the files in the same directory:
         tar xvf mgmtagentbeta1.tar --strip-components 1
   d) Create the custom channel:
      Log into RHN Satellite as the management organization administrator
      Select the Channels tab, then Manage Software Channels on the left side of the page, then the create new channel option near the top of the page, providing the information below:
         Channel Name
         Channel Label
         Parent Channel (e.g., rhel5-5-x86_64-server)
         Parent Channel Architecture (e.g., x86_64)
         Channel Summary
         Organization Sharing (e.g., public)
   e) Place the previously downloaded and extracted packages into the created channel:
         rhnpush -v -c rhev22mgmtagentsbeta1 --server=http://localhost/app --dir=/var/satellite/mychannels/mgmtagents -u manage -p <password>

2. Configure Activation Key
   a) Starting at the satellite home page for the tenant user, select the following links: Systems -> Activation Keys -> create new key, and provide the information below:
      Description: (e.g., RHELHkey)
      Base Channel: rhel5-5-x86_64-server
      Add-On Entitlements: Monitoring, Provisioning, Virtualization Platform
      Create Activation Key
   b) Select the Child Channels tab and select the following:
      RHN Tools Channel [e.g., Red Hat Network Tools for RHEL Server (v.5 64-bit x86_64)]
      Virtualization Channel (e.g., rhel5-5-x86_64-vt)
      RHEV Management Channel [e.g., Red Hat Enterprise Virt Management Agent (v.5 for x86_64)]
      Click Update Key

3. If not previously created, configure Kickstart
   a) Starting at the satellite home page for the tenant user, select the following links: Systems -> Kickstart -> Profiles -> create new kickstart profile, and provide the information below:
      Label: (e.g., RHELH55-x86_64)
      Base Channel: rhel5-5-x86_64-server
      Kickstartable Tree: rhel55_x86-64
      Virtualization Type: None

      Select Next to accept the input and proceed to the next page
      Select Default Download Location
      Select Next to accept the input and proceed to the next page
      Specify New Root Password and Verify
      Click Finish
   b) In the Kickstart Details -> Details tab
      Log custom post scripts
      Click Update Kickstart
   c) In the Kickstart Details -> Operating System tab
      Select Child Channels (e.g., RHN Tools, Virtualization, RHEV Mgmt Agents)
      Click Update Kickstart
   d) In the Kickstart Details -> Variables tab
      Define disk=cciss/c0d0
   e) In the Kickstart Details -> Advanced Options tab
      Change clearpart to --linux --drives=$disk
      Verify reboot is selected
      Change firewall to --enabled
   f) In the System Details -> Details tab
      Confirm SELinux is Permissive
      Enable Configuration Management and Remote Commands
      Click Update System Details
   g) In the System Details -> Partitioning tab
         partition swap --size= --maxsize= --ondisk=$disk
         partition /boot --fstype=ext3 --size=200 --ondisk=$disk
         partition pv.01 --size= --grow --ondisk=$disk
         volgroup rhelh_vg pv.01
         logvol / --vgname=rhelh_vg --name=rootvol --size= --grow
   h) In the Activation Keys tab
      Select RHELHkey
      Click Update Activation Keys
   i) A single script is used to disable the GPG check of the custom channels (since the beta packages have not been signed), open the required firewall ports, install some RHN tools, and make sure all installed software is up to date:
         echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "[rhel5-5-x86_64-server-snap4]" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "[rhel5-5-x86_64-vt-snap4]" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "[clone-rhn-tools-rhel-x86_64-server-5]" >> /etc/yum/pluginconf.d/rhnplugin.conf

         echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "[rhev-mgmt-agents]" >> /etc/yum/pluginconf.d/rhnplugin.conf
         echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf
         /bin/cp /etc/sysconfig/iptables /tmp/iptables
         /usr/bin/head -n -2 /tmp/iptables > /etc/sysconfig/iptables
         /bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
         /bin/echo "-A RH-Firewall-1-INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT" >> /etc/sysconfig/iptables
         /bin/echo "-A RH-Firewall-1-INPUT -p tcp -m multiport --dport -j ACCEPT" >> /etc/sysconfig/iptables
         /bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 49152: -j ACCEPT" >> /etc/sysconfig/iptables
         /bin/echo "-A RH-Firewall-1-INPUT -m physdev --physdev-is-bridged -j ACCEPT" >> /etc/sysconfig/iptables
         /usr/bin/tail -2 /tmp/iptables >> /etc/sysconfig/iptables
         /usr/bin/yum -y install osad rhn-virtualization-host
         /sbin/chkconfig osad on
         /usr/bin/yum -y update

4. On the satellite system, create a cobbler record for the system to be configured as the RHEL/KVM host:
      cobbler system add --name=yoda.ra.rh.com --profile=rhelh55-x86_64 --mac=00:25:b3:a9:b0:01 --ip= --hostname=yoda.ra.rh.com --dns-name=yoda.ra.rh.com

5. Add the system as a host to the RHEV Manager
   a) On the RHEV Manager Host tab, select New and provide the following information in the New Host dialog:
      Name: (e.g., yoda.ra.rh.com)
      Address: (e.g., )
      Verify the Host Cluster
      Root Password
      Optionally, enable Power Management and provide the necessary data
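The iptables portion of the kickstart script above relies on new ACCEPT rules landing before the chain's final REJECT and COMMIT lines, hence the head/tail splice around /etc/sysconfig/iptables. The sketch below reproduces that splice on a scratch copy with a minimal rule set; the file contents and the single inserted rule are illustrative.

```shell
#!/bin/sh
# Sketch of the iptables-file splice used in the kickstart script:
# new ACCEPT rules must precede the trailing REJECT/COMMIT lines, so the
# file is split around the insertion point with head and tail (GNU head
# supports the negative line count used here).
WORK=$(mktemp -d)
cat > "$WORK/iptables" <<'EOF'
-A RH-Firewall-1-INPUT -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
EOF

cp "$WORK/iptables" "$WORK/iptables.orig"
head -n -2 "$WORK/iptables.orig" > "$WORK/iptables"   # drop REJECT + COMMIT
echo '-A RH-Firewall-1-INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT' \
    >> "$WORK/iptables"                               # insert the new rule
tail -2 "$WORK/iptables.orig" >> "$WORK/iptables"     # restore the tail
```

In the real script the same commands operate on /etc/sysconfig/iptables, with /tmp/iptables holding the pristine copy.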

8.5 Deploy RHEL Guests (PXE / ISO / Template) on KVM Hypervisor Host

Figure Deploying RHEL VMs using PXE

1. If not previously created, configure Activation Key
   a) Starting at the satellite home page for the tenant user, select the following links: Systems -> Activation Keys -> create new key, and provide the information below:
      Description: (e.g., RHEL55key)
      Base Channel: rhel5-5-x86_64-server
      Add-On Entitlements: Monitoring, Provisioning
      Create Activation Key
   b) Select the Child Channels tab, add RHN Tools, select Update Key

2. If not previously created, configure Kickstart
   a) Starting at the satellite home page for the tenant user, select the following links: Systems -> Kickstart -> Profiles -> create new kickstart profile, and provide the information below:
      Label: (e.g., RHEL55guest)
      Base Channel: rhel5-5-x86_64-server
      Kickstartable Tree: rhel55_x86-64
      Virtualization Type: None
      Select Next to accept the input and proceed to the next page
      Select Default Download Location
      Select Next to accept the input and proceed to the next page
      Specify New Root Password and Verify
      Click Finish
   b) In the Kickstart Details -> Details tab
      Log custom post scripts
      Click Update Kickstart
   c) In the Kickstart Details -> Operating System tab
      Select Child Channels (e.g., RHN Tools)
      Since this is a base-only install, verify no Repositories checkboxes are selected
      Click Update Kickstart
   d) In the Kickstart Details -> Advanced Options tab
      Verify reboot is selected
      Change firewall to --enabled
   e) In the System Details -> Details tab
      Enable Configuration Management and Remote Commands
      Click Update System Details
   f) In the Activation Keys tab
      Select RHEL55key
      Click Update Activation Keys

3. Confirm all active hosts are RHEL/KVM hosts; place any RHEV hosts into maintenance mode.

4. Create RHEV VM
   a) At the RHEV Manager Virtual Machines tab, select New Server
   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., rhel55guest4)
      Description: [optional]
      Host Cluster: (e.g., dc1-clus1)
      Template: [blank]

      Memory Size: (e.g., 2048)
      CPU Sockets: (e.g., 1)
      CPUs Per Socket: (e.g., 2)
      Operating System: Red Hat Enterprise Linux 5.x x64
   c) In the Boot Sequence tab, provide the following:
      Second Device: Network (PXE)
   d) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
      Type: Red Hat VirtIO
   e) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
      Size (GB): (e.g., 8)
      The defaults for the remaining entries are adequate

5. Boot VM
   a) In the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run button or the equivalent right mouse button menu option
   c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
   d) After the initial PXE boot, the Cobbler PXE boot menu will display; select the kickstart that was previously created (e.g., RHEL55guest:22:tenants)
   e) The VM will reboot when the installation is complete

Deploying RHEL VMs using ISO Library

1. If not already in place, populate the ISO library with the RHEL 5.5 ISO image, which can be downloaded via the RHN web site.
   NOTE: The user must be Administrator to run the RHEV applications until BZ is resolved.
   a) On the RHEV Manager system, select Start -> All Programs -> Red Hat -> RHEV Manager -> ISO Uploader
   b) In the Red Hat Virtualization ISO Uploader window, press the Add button to select any or all of the images (.iso, .vfd) previously downloaded
   c) Select the correct Data Center from the pull-down list
   d) Click the Upload button to place the ISO image into the ISO Library

2. Confirm all active hosts are RHEL/KVM hosts; place any RHEV hosts into maintenance mode.

3. Create RHEV VM
   a) At the RHEV Manager Virtual Machines tab, select New Server

   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., rhel55guest5)
      Description: [optional]
      Host Cluster: (e.g., dc1-clus1)
      Template: Blank
      Memory Size: (e.g., 2048)
      CPU Sockets: (e.g., 1)
      CPUs Per Socket: (e.g., 2)
      Operating System: Red Hat Enterprise Linux 5.x x64
   c) In the Boot Sequence tab, provide the following:
      Second Device: CD-ROM
      Select the Attach CD checkbox
      Specify the CD/DVD to mount (e.g., rhel-server-5.5-x86_64-dvd.iso)
   d) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
      Type: Red Hat VirtIO
      The defaults for the remaining entries are adequate
   e) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
      Size (GB): (e.g., 8)
      The defaults for the remaining entries are adequate

4. Boot VM
   a) In the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run button or the equivalent right mouse button menu option
   c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
   d) The VM will boot the DVD; the remaining installation will need to be performed through the console.
   e) After the software is installed, the VM will prompt to reboot. After the reboot, answer the First Boot interrogation. Since the Satellite's specific certificate is not local to the VM at this time, skip registering with RHN.
   f) After the system Login Screen displays, login and register with the Satellite.
      Install the Satellite certificate:
         rpm -ivh
      Start the rhn_register program and provide the following information:

         Select to receive updates from Red Hat Network Satellite
         Specify the Red Hat Network Location: (e.g., https://ra-sat-vm.ra.rh.com)
         Select and specify the SSL certificate: /usr/share/rhn/rhn-org-trusted-ssl-cert
         Provide the tenant user credentials
         Verify the System Name and send Profile Data

Deploying RHEL VMs using Templates

1. Confirm all active hosts are RHEL/KVM hosts; place any RHEV hosts into maintenance mode.

2. Create Template
   a) Prepare the template system to register with the satellite upon booting.
      Identify the activation key to use to register the system. The Activation Keys page (in the Systems tab) of the satellite will list the existing keys for each organization. Alternatively, if the system was PXE installed using satellite, the register command, which includes the key used, can be found in /root/cobbler.ks:
         grep rhnreg cobbler.ks
      The following commands place commands in the proper script to execute on the next boot:
         cp /etc/rc.d/rc.local /etc/rc.d/rc.local.pretemplate
         echo "rhnreg_ks --force --serverurl=https://ra-sat-vm.ra.rh.com/xmlrpc --sslcacert=/usr/share/rhn/rhn-org-trusted-ssl-cert --activationkey=22-f0b9a335f83c50ef9e5af6a520430aa1" >> /etc/rc.d/rc.local
         echo "mv /etc/rc.d/rc.local.pretemplate /etc/rc.d/rc.local" >> /etc/rc.d/rc.local
   b) Before shutting down the system which will be used to create the template, some level of clearing of the configuration settings should be performed. At a minimum, the hostname should not be hard-coded, as this can lead to confusion when the hostname does not match the IP currently assigned. The following commands remove the name that was set at install time; DHCP will set the name upon boot:
         cp /etc/sysconfig/network /tmp
         grep -v HOSTNAME= /tmp/network > /etc/sysconfig/network
      Alternatively, a more extensive method of clearing the configuration settings is to use the sys-unconfig command. sys-unconfig causes the system to reconfigure the network, authentication, and several other subsystems on the next boot.
   c) If not already shut down, shut down the VM

   d) At the RHEV Manager Virtual Machines tab, select the appropriate VM and select either the Make Template button or the right mouse button menu option:
      Name: (e.g., RHEL55_temp)
      Description: [optional]
      While the template is being created, the image is locked. Confirm the template exists in the Templates tab after the creation is complete.

3. Create New VM using template
   a) At the RHEV Manager Virtual Machines tab, select New Server
   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., rhel55guest6)
      Description: [optional]
      Template: (e.g., RHEL55_temp)
      Confirm or override the remaining entries
   c) If the Data Center and Cluster are set to v2.2 compatibility, the provisioning can be changed from thin to preallocated. In the Allocation tab, provide the following:
      Provisioning: Clone
      Disk 1: Preallocated

4. The newly created VM will have a Locked Image while being instantiated. When the process is complete, the VM is ready to boot.
   a) At the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run button or the equivalent right mouse button menu option
   c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
   d) This system will be known to satellite by its progenitor's ID; therefore it must register with Satellite. Start the rhn_register program and provide the following information:
      Select Yes, Continue when presented with the old system's registration
      Confirm to receive updates from Red Hat Network Satellite and the Red Hat Network Location: (e.g., https://ra-sat-vm.ra.rh.com)
      Provide the tenant user credentials
      Verify the System Name and send Profile Data
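The hostname-clearing commands in step 2b of the template preparation above amount to rewriting /etc/sysconfig/network without its HOSTNAME= line, so that DHCP supplies the name on the next boot. The sketch below performs the same rewrite against a scratch copy with made-up contents, so it can be tried safely off the template system.

```shell
#!/bin/sh
# Sketch of clearing the hard-coded hostname before templating.
# Operates on a scratch copy of /etc/sysconfig/network; the file
# contents below are illustrative.
WORK=$(mktemp -d)
cat > "$WORK/network" <<'EOF'
NETWORKING=yes
HOSTNAME=rhel55guest6.ra.rh.com
GATEWAY=10.0.0.1
EOF

# Copy the file aside, then rewrite it without the HOSTNAME= line,
# leaving every other setting untouched.
cp "$WORK/network" /tmp/network.$$
grep -v 'HOSTNAME=' /tmp/network.$$ > "$WORK/network"
rm -f /tmp/network.$$
```

On a real template system the source and destination are both /etc/sysconfig/network, with /tmp holding the intermediate copy as in the procedure above.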

8.6 Deploy Windows Guests (ISO / Template) on KVM Hypervisor Host

Figure Deploying Windows VMs using ISO Library

1. If not already in place, populate the ISO library with the ISO image or images needed for the Windows installation. The RHEV Tools CD and the VirtIO driver virtual floppy should also be in the ISO Library.
   NOTE: The user must be Administrator to run the RHEV applications until BZ is resolved.
   a) On the RHEV Manager system, select Start -> All Programs -> Red Hat -> RHEV Manager -> ISO Uploader
   b) In the Red Hat Virtualization ISO Uploader window, press the Add button to select any or all of the images (.iso, .vfd) previously downloaded
   c) Select the correct Data Center from the pull-down list

   d) Click the Upload button to place the ISO image into the ISO Library

2. Confirm all active hosts are RHEL/KVM hosts; place any RHEV hosts into maintenance mode.

3. Create Windows VM
   a) At the RHEV Manager Virtual Machines tab, select New Server
   b) In the New Server Virtual Machine dialog, General tab, provide the following data:
      Name: (e.g., w2k8guest1)
      Description: [optional]
      Host Cluster: (e.g., dc1-clus1)
      Template: [blank]
      Memory Size: (e.g., 4096)
      CPU Sockets: (e.g., 1)
      CPUs Per Socket: (e.g., 2)
      Operating System: (e.g., Windows 2008 R2)
   c) In the First Run tab:
      Provide a Domain if used
      Verify the Time Zone is correct
   d) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
      Type: Red Hat VirtIO
      The defaults for the remaining entries are adequate
   e) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
      Size (GB): (e.g., 20)
      The defaults for the remaining entries are adequate

4. Boot VM
   a) At the RHEV Manager Virtual Machines tab, select the newly created VM
   b) Either select the Run Once option of the Run button, or the Run Once option in the right mouse button menu, and provide the following entries:
      Select the Attach Floppy checkbox and indicate the virtio-drivers vfd should be mounted
      Select the Attach CD checkbox and indicate which CD/DVD should be mounted (e.g., en_windows_server_2008_r2_dvd.iso)

Verify that Network is last in the Boot Sequence
c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
d) The VM will boot from the CD/DVD; the remaining installation will need to be performed through the console.
e) Perform all the actions to install the operating system. Some versions of Windows will recognize the mounted floppy and automatically use the VirtIO disk driver; others may require the operator to select the drivers to load. If a second CD/DVD is required, activate the right mouse button menu on the VM and use the Change CD option located near the bottom of the options.
f) The CD/DVD should be changed to the RHEV Tools using the right mouse button menu on the VM and the Change CD option located near the bottom of the options. Once the disk is mounted, the RHEV Tools found on this disk should be installed. This will include any VirtIO drivers not previously loaded (e.g., network)
g) Red Hat recommends that all applicable Windows Updates be applied and that the Windows installation be activated

Deploying Windows VMs using Templates
1. Confirm all active hosts are RHEL/KVM hosts; place any RHEV Hosts into maintenance mode.
2. Create Template
a) Windows systems should be 'sysprep'ed prior to being used as the source of a template. The procedure differs across versions of Windows, so it is best to consult the Microsoft documentation for the exact steps. The highlights of the process for Windows 2008 are listed below:
Create the sysprep.ini file by executing C:\Windows\System32\sysprep\sysprep.exe
Set System Cleanup Action to Enter System Out-of-Box Experience (OOBE)
Shutdown Mode should be set to Shut down
The sysprep process will shut down the VM
b) At the RHEV Manager Virtual Machines tab, select the appropriate VM and either the Make Template button or the right mouse button menu option
Name: (e.g., w2k8_temp)
Description: [optional]
c) While creating the template the image is locked.
Confirm the template exists in the Templates tab after the creation is complete.
3. Create New VM using template
a) At the RHEV Manager Virtual Machines tab, select New Server

b) In the New Server Virtual Machine dialog, General tab, provide the following data:
Name: (e.g., w2k8guest2)
Description: [optional]
Template: (e.g., w2k8_temp)
Confirm or override the remaining entries
c) If the Data Center and Cluster are set to v2.2 compatibility, the provisioning can be changed from thin to preallocated. In the Allocation tab, provide the following:
Provisioning: Clone
Disk 1: Preallocated
4. The newly created VM will have a Locked Image while being instantiated. When the process is complete the VM is ready to boot.
a) At the RHEV Manager Virtual Machines tab, select the newly created VM
b) Either select the Run button or the equivalent right mouse button menu option
c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
d) Respond to any system prompts upon booting the VM

9 Deploying Applications in RHEL VMs

9.1 Deploy Application in RHEL VMs
The application is based on a server-side Java exerciser. The standard means of executing this application scales the workload according to the CPU horsepower that is available.

Figure: Configure Application and Deploy Using Satellite

1. Prepare Application
a) Create a script (javaapp) to run the application in an infinite loop
b) Create a control script (javaappd) to be placed in /etc/init.d to automatically start and optionally stop the workload

c) Adjust any application settings as desired
d) Create a compressed tar file which contains the entire inventory to be delivered and installed onto the target system
2. Build Application RPM
a) As root, make sure the rpm-build package is installed:
yum -y install rpm-build
b) As a user, create the directory structure to use for RPM creation:
mkdir ~/rpmbuild
cd ~/rpmbuild
mkdir BUILD RPMS SOURCES SPECS SRPMS
c) Create a ~/.rpmmacros file identifying the top of the build structure:
echo "%_topdir /home/juser/rpmbuild" > ~/.rpmmacros
d) Copy the compressed tar file into the SOURCES directory
e) Create the SPECS/javaApp.spec file referencing the previously created compressed tar file by name:

Summary: A tool which will start a Java based load on the system
Name: javaapp
Version: 1
Release: 0
License: GPL
Group: Other
Source0: javaApp.tgz
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root
#URL:
BuildArch: noarch
#BuildRequires:

%description
The javaapp package installs into /usr/javaApp and creates an init script to start the load on the system upon reboot.

%prep
rm -rf $RPM_BUILD_DIR/javaApp
zcat $RPM_SOURCE_DIR/javaApp.tgz | tar -xvf -

%install
rm -rf $RPM_BUILD_ROOT/usr
rm -rf $RPM_BUILD_ROOT/etc
install -d $RPM_BUILD_ROOT/usr/javaApp/xml
install -d $RPM_BUILD_ROOT/etc/init.d
install -m 755 javaapp/javaappd $RPM_BUILD_ROOT/etc/init.d
install -m 755 javaapp/javaapp $RPM_BUILD_ROOT/usr/javaApp
install -m 644 javaapp/check.jar javaapp/jbb.jar javaapp/specjbb_config.props javaapp/specjbb.props $RPM_BUILD_ROOT/usr/javaApp
install -m 644 javaapp/xml/template-document.xml javaapp/xml/jbb-document.dtd $RPM_BUILD_ROOT/usr/javaApp/xml

%clean
rm -rf %{buildroot}

%files
%defattr(-,root,root)
/etc/init.d/javaappd
/usr/javaApp/javaapp
/usr/javaApp/check.jar
/usr/javaApp/jbb.jar
/usr/javaApp/specjbb_config.props
/usr/javaApp/specjbb.props
/usr/javaApp/xml/jbb-document.dtd
/usr/javaApp/xml/template-document.xml

%post
chkconfig --add javaappd
chkconfig javaappd on
service javaappd start

f) Build the RPM:
rpmbuild -v -bb SPECS/javaApp.spec
g) Copy the RPM from RPMS/noarch/ to the satellite system
3. Sign RPM
a) As root on the satellite system:
gpg --gen-key
Select the default key type, DSA and Elgamal
Specify a desired key length of at least 1024
Specify and confirm that the key will not expire
Specify and confirm the Real Name and address
Enter a passphrase
The key will generate
b) List the keys:
gpg --list-keys --fingerprint
c) The ~/.rpmmacros file should contain entries specifying the key type and the key ID, which can be obtained from the listing above:
%_signature gpg
%_gpg_name 27D514A0
d) Sign the RPM:
rpm --resign javaapp-1-0.noarch.rpm
Enter the passphrase used when the key was generated
e) Save the public key to a file:
gpg --export -a '<Real Name>' > public_key.txt

f) Place a copy of the public key in a web accessible area:
cp public_key.txt /var/www/html/pub/app-rpm-gpg-key
4. Create Custom Channel
a) Log into RHN Satellite as the tenant organization administrator
b) Select the Channels tab, then Manage Software Channels on the left side of the page, then the create new channel option near the top of the page, providing the information below:
Channel Name: (e.g., ourapps)
Channel Label: (e.g., ourapps)
Parent Channel: (e.g., rhel5-5-x86_64-server)
Parent Channel Architecture: x86_64
Channel Summary: (e.g., ourapps)
Channel Description: In-house developed Applications
Channel Access Control: Organization Sharing: public
Security: GPG: GPG key URL: (e.g., )
Security: GPG: GPG key ID: (e.g., 27D514A0)
Security: GPG: GPG key Fingerprint: (e.g., 6DC6 F770 4EA1 BCC6 A9E2 6A4A 5DA5 3D96 27D5 14A0)
Create Channel
c) Push the package to the channel:
ls java*.rpm
rhnpush -v -c ourapps --server=http://localhost/app -u tenant -p XXX -s
5. Configure GPG key
a) As the tenant manager on the Satellite, select the following links and provide the information below: Systems -> GPG and SSL Keys -> create new stored key/cert
Description: (e.g., App-Sig)
Type: GPG
Select file to upload: (e.g., /var/www/html/pub/app-rpm-gpg-key)
Create Key
6. Configure Activation Key
a) Starting at the satellite home page for the tenant administrator, select the following links: Systems -> Activation Keys -> create new key, and provide the information below:
Description: (e.g., r55java-key)
Base Channel: rhel5-5-x86_64-server
Add On Entitlements: Monitoring, Provisioning
Create Activation Key

b) Select the Child Channels tab and select the following channels:
RHN Tools
ourapps
7. Configure Kickstart
a) Starting at the satellite home page for the tenant administrator, select the following links and provide the information below: Systems -> Kickstart -> Profiles -> create new kickstart profile
Label: (e.g., RHEL55java)
Base Channel: rhel5-5-x86_64-server
Kickstartable Tree: rhel55_x86-64
Virtualization Type: None
Select Next to accept input and proceed to the next page
Select Default Download Location
Select Next to accept input and proceed to the next page
Specify New Root Password and Verify
Click Finish
b) In the Kickstart Details -> Details tab:
Log custom post scripts
Click Update Kickstart
c) In the Kickstart Details -> Operating System tab:
Select Child Channels: RHN Tools, ourapps
Since this is a base-only install, verify no Repositories checkboxes are selected
Click Update Kickstart
d) In the Kickstart Details -> Advanced Options tab:
Verify reboot is selected
Change firewall to --enabled
e) In the System Details -> Details tab:
Enable Configuration Management and Remote Commands
Click Update System Details
f) In the System Details -> Partitioning tab:
Change the volume group name (e.g., JavaAppVM)
Click Update
g) In the System Details -> GPG & SSL tab, select the App-Sig and RHN-ORG-TRUSTED-SSL-CERT keys

h) In the Activation Keys tab:
Select r55java-key
Click Update Activation Keys
i) In the Scripts tab:
A single script is used to disable GPG checking of the custom channels (since not all beta packages have been signed); install some RHN tools, java, and the javaapp; and ensure all installed software is up to date:
echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "[rhel5-5-x86_64-server-snap4]" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "[clone-2-rhn-tools-rhel-x86_64-server-5]" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf
/usr/bin/yum -y install java openjdk
/usr/bin/yum -y install rhn-virtualization-host
/usr/bin/yum -y install osad
/sbin/chkconfig osad on
/usr/bin/yum -y update
/usr/bin/yum -y install javaapp
8. Create RHEV VM
a) At the RHEV Manager Virtual Machines tab, select New Server
b) In the New Server Virtual Machine dialog, General tab, provide the following data:
Name: (e.g., javaapp1)
Description: [optional]
Host Cluster: (e.g., dc1-clus1)
Template: [blank]
Memory Size: (e.g., 4096)
CPU Sockets: (e.g., 1)
CPUs Per Socket: (e.g., 4)
Operating System: Red Hat Enterprise Linux 5.x x64
c) In the Boot Sequence tab, provide the following:
Second Device: Network (PXE)
d) Select OK
e) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
Type: Red Hat VirtIO

f) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
Size (GB): (e.g., 8)
The defaults for the remaining entries are adequate
9. Boot VM
a) In the RHEV Manager Virtual Machines tab, select the newly created VM
b) Either select the Run button or the equivalent right mouse button menu option
c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
d) After initial PXE booting, the Cobbler PXE boot menu will display; select the kickstart that was previously created (e.g., RHEL55java:22:tenants)
e) The VM will reboot when the installation is complete

Deploy Application Using Template
1. Create Template
a) Prepare the template system to register with the satellite upon booting.
Identify the activation key to use to register the system. The Activation Keys page (in the Systems tab) of the satellite will list existing keys for each organization. Alternatively, if the system was PXE installed using satellite, the register command can be found in /root/cobbler.ks, which includes the key used:
grep rhnreg cobbler.ks
The following commands will place commands in the proper script to execute on the next boot:
cp /etc/rc.d/rc.local /etc/rc.d/rc.local.pretemplate
echo rhnreg_ks --force --serverurl=https://ra-satvm.ra.rh.com/xmlrpc --sslcacert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=22-f0b9a335f83c50ef9e5af6a520430aa1 >> /etc/rc.d/rc.local
echo mv /etc/rc.d/rc.local.pretemplate /etc/rc.d/rc.local >> /etc/rc.d/rc.local
b) Before shutting down the system that will be used to create a template, some clearing of the configuration settings should be performed. At a minimum, the hostname should not be hard-coded, as this can lead to confusion when the hostname does not match the IP currently assigned.
The following commands will remove the name that was set when installed, and DHCP will set the name upon boot:
cp /etc/sysconfig/network /tmp
grep -v HOSTNAME= /tmp/network > /etc/sysconfig/network
Alternatively, a more extensive method of clearing configuration settings is to use

the sys-unconfig command. sys-unconfig will cause the system to reconfigure network, authentication and several other subsystems on the next boot.
c) If the VM is not already shut down, shut it down
d) At the RHEV Manager Virtual Machines tab, select the appropriate VM and either the Make Template button or the right mouse button menu option
Name: (e.g., RHEL55_temp)
Description: [optional]
While creating the template the image is locked. Confirm the template exists in the Templates tab after the creation is complete
2. Create New VM using template
a) At the RHEV Manager Virtual Machines tab, select New Server
b) In the New Server Virtual Machine dialog, General tab, provide the following data:
Name: (e.g., javaapp2)
Description: [optional]
Template: (e.g., temp_javaapp)
Confirm or override the remaining entries
c) If the Data Center and Cluster are set to v2.2 compatibility, the provisioning can be changed from thin to preallocated. In the Allocation tab, provide the following:
Provisioning: Clone
Disk 1: Preallocated
3. The newly created VM will have a Locked Image while being instantiated. When the process is complete the VM is ready to boot.
a) At the RHEV Manager Virtual Machines tab, select the newly created VM
b) Either select the Run button or the equivalent right mouse button menu option
c) Start the console by selecting the Console button when active, or the equivalent right mouse button menu option
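The template-preparation steps above combine two tricks: a one-shot rc.local hook that re-registers the clone and then restores the original rc.local, and removal of the hard-coded HOSTNAME so DHCP names each clone. A minimal sketch, run here against scratch copies so it is safe to execute outside a real guest; the registration command and hostname are placeholders, not values from the reference environment:

```shell
#!/bin/sh
# Work in a scratch directory standing in for /etc/rc.d and /etc/sysconfig.
work=$(mktemp -d)

# 1. One-shot rc.local: save the pristine copy, append a command to run once
#    on first boot, then a command that restores the saved copy afterwards.
printf '#!/bin/sh\ntouch /var/lock/subsys/local\n' > "$work/rc.local"
cp "$work/rc.local" "$work/rc.local.pretemplate"
echo "rhnreg_ks --force --activationkey=PLACEHOLDER-KEY" >> "$work/rc.local"
echo "mv $work/rc.local.pretemplate $work/rc.local" >> "$work/rc.local"

# 2. Hostname clearing: drop the HOSTNAME= line so DHCP assigns the name.
printf 'NETWORKING=yes\nHOSTNAME=template.example.com\n' > "$work/network"
cp "$work/network" "$work/network.orig"
grep -v 'HOSTNAME=' "$work/network.orig" > "$work/network"
```

On a real guest the scratch paths would be /etc/rc.d/rc.local and /etc/sysconfig/network, and the rhnreg_ks arguments would match the satellite registration command shown earlier.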

9.2 Scale Application

Figure 30

Using the previously created template and a PowerShell script, multiple instances of the javaapp VMs can be quickly deployed.
1. Create PowerShell script
a) Create a folder to house scripts on the RHEV Manager (e.g., C:\scripts)
b) Using an editor (e.g., notepad), create a file (e.g., add-vms.ps1):

# add-vms
# tempname - source template (can not be Blank)
# basename - base name of created guest (default: guest)
# num - number to create (default: 1)
# run - start VMs (default: no)
Param($baseName = 'guest', $tempname, $num = 1, [switch]$run)
if ($tempname -eq $null) {

write-host "Must specify a template!"
exit
}
<#
write-host "basename = $basename"
write-host "tempname = $tempname"
write-host "     num = $num"
write-host "     run = $run"
#>
$my_clusid = -1;
$my_temp = select-template -SearchText $tempname
if ($my_temp -eq $null) {
Write-host "No matching templates found!"
exit
} elseif ($my_temp.count -gt 1) {
Write-host "Too many matching templates found!"
exit
} elseif ($my_temp.name -eq "Blank") {
Write-host "Can not use Blank template!"
exit
}
#search for matching basenames
$matches = select-vm -searchtext "$basename" | where {$_.name -like "$basename*"}
if ($matches -ne $null) {
$measure = $matches | select-object name | foreach { $_.name.replace("$basename","") } | measure-object -max
$start = $measure.maximum + 1
$x = $matches | select-object -first 1
$my_clusid = $x.hostclusterid
} else {
$start = 1
}
$id = $my_temp.hostclusterid
$clus = select-cluster | where { $_.ClusterID -eq $id }
if ($clus -ne $null) {
if ($clus.isinitialized -eq $true) {
$my_clusid = $id
} else {
write-host "Cluster of Template is not initialized!"
exit
}
}
#loop over adds
for ($i=$start; $i -lt $start + $num; $i++) {

# write-host "-name $basename$i -templateobject $my_temp -HostClusterId $my_clusid -copytemplate -Vmtype server"
if ( $run -eq $true ) {
$my_vm = add-vm -name $basename$i -templateobject $my_temp -HostClusterId $my_clusid -copytemplate -Vmtype server
start-vm -VmObject $my_vm
} else {
$my_vm = add-vm -name $basename$i -templateobject $my_temp -HostClusterId $my_clusid -copytemplate -Vmtype server -Async
}
}

2. Use the script to create multiple VMs
a) On the RHEV Manager, select the following from the Start menu: All Programs -> Red Hat -> RHEV Manager -> RHEV Manager Scripting Library
b) In the PowerShell window, log in with a superuser account:
Login-User -user admin -p <password> -domain ra-rhevm-vm
c) Change to the scripts directory:
cd c:/scripts
d) Call the script to asynchronously create 5 VMs named javaapp#:
./add-vms -tempname temp_javaapp -basename javaapp -num 5
e) After the VMs finish creating, the operator can select all desired and press Run
f) Or, if desired, the operator can call the script with the -run option, which will start each VM as it is synchronously created:
./add-vms -tempname temp_javaapp -basename javaapp -num 5 -run
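The core of add-vms.ps1 is its name-indexing logic: find the highest numeric suffix among existing VMs that share the base name, and start the new batch from the next index. A language-neutral sketch of that logic in shell, with a hard-coded name list standing in for what select-vm would return:

```shell
#!/bin/sh
# Sketch of add-vms.ps1's suffix indexing. "existing" is a stand-in for the
# VM names the RHEV select-vm cmdlet would return for the search.
base=javaapp
existing="javaapp1 javaapp3 javaapp2"

max=0
for name in $existing; do
  n=${name#"$base"}                           # strip the base, leaving the suffix
  case $n in (*[!0-9]*|'') continue ;; esac   # skip names with no numeric suffix
  [ "$n" -gt "$max" ] && max=$n
done
start=$((max + 1))                            # first index for the new batch
```

With existing names javaapp1..javaapp3, the next batch starts at index 4, matching the PowerShell script's `measure-object -max` plus one.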

10 Deploying JBoss Applications in RHEL VMs

10.1 Deploy JON Server in Management Services Cluster
The JON server will be deployed using the Satellite server to provision the virtual machine.

1. Create a storage volume (e.g., mgmtvirt_disk) of appropriate size. See section 6.3 for greater detail on adding and presenting LUNs from storage.
2. Create the MgmtVirtVG from the disk.
a) Initialize the disk for LVM:
pvcreate /dev/mapper/mgmtvirt_disk
b) Create the volume group:
vgcreate MgmtVirtVG /dev/mapper/mgmtvirt_disk
3. Configure Activation Key

a) Log into the satellite as 'manage' and select the following links: Systems -> Activation Keys -> create new key
Description: (e.g., JONkey)
Base Channel: rhel5-5-x86_64-server
Add On Entitlements: Monitoring, Provisioning
Click Create Activation Key
b) Select the Child Channel tab, add RHN Tools, and select Update Key
4. Configure Kickstart
a) Log into the satellite as 'manage' and select the following links: Systems -> Kickstart -> Profiles -> create new kickstart profile
Label: RHEL55_JON_VM
Base Channel: rhel5-5-x86_64-server
Kickstartable Tree: rhel55_x86-64
Virtualization Type: KVM Virtualization Guest
Click Next to accept input and proceed to the next page
Click Default Download Location
Click Next to accept input and proceed to the next page
Specify New Root Password and Verify
Click Finish
b) Select the following links: Kickstart Details -> Details tab
Virtual Memory (in MB): 4096
Number of Virtual CPUs: 2
Virtual Disk Space (in GB): 40
Virtual Bridge: cumulus0
Log custom post scripts
Click Update Kickstart
c) Select the following links: Kickstart Details -> Operating System tab
Select the RHN Tools child channel
Uncheck all Repositories
Click Update Kickstart
d) Select the following links: Kickstart Details -> Advanced Options tab
Verify reboot is selected
Change firewall to --enabled
e) Select the following links: System Details -> Details tab
Enable Configuration Management and Remote Commands
Click Update System Details

f) Select the following links: System Details -> Partitioning tab
Change myvg to JONVG
Click Update
g) Select the following link: Activation Keys tab
Select JONkey
Click Update Activation Key
h) Select the Scripts tab
Script 1 installs additional software, working around some unsigned packages that exist in the Beta release:
echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "[rhel5-5-x86_64-server-snap4]" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf
yum install postgresql84 -y
yum install postgresql84-server -y
yum install java openjdk.x86_64 -y
Script 2 opens the firewall ports needed and updates any packages:
#JBoss Required Ports
/bin/cp /etc/sysconfig/iptables /tmp/iptables
/usr/bin/head -n -2 /tmp/iptables > /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables

/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 67 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 68 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
# Jon Specific Ports
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/usr/bin/tail -2 /tmp/iptables >> /etc/sysconfig/iptables
/usr/bin/yum -y update
Script 3 downloads a script which installs and configures the JON software and then invokes it. Refer to Appendix A.3 for the contents of the rhq-install.sh script:
cd /tmp
wget
chmod 777 ./rhq-install.sh
./rhq-install.sh
5. Prepare download area
a) Create /var/www/html/pub/kits on the Satellite server and set permissions:
mkdir /var/www/html/pub/kits
chmod 777 /var/www/html/pub/kits
b) Create a script to download the needed files:
#Jon
# In lieu of Satellite server these wgets procure the JON zips to be staged on the sat server.
# Subsequent wgets in a post install script place it on new VMs
wget
wget
c) Execute the script:
cd /var/www/html/pub/kits
./download_jon.sh
6. Provision JON VM
a) On the Satellite as 'manage', select the following links: Systems -> Monet (Mgmt Server) -> Virtualization -> Provisioning
Check the button next to the RHEL55_JON_VM kickstart profile
Guest Name: ra-jon-vm
Select Advanced Configuration
Virtual Storage Path: MgmtVirtVG
Select Schedule Kickstart and Finish
b) Speed the install by logging on to monet (Mgmt Server)
Check in with Satellite and watch verbose output; if desired, watch the installation:
rhn_check -vv &
virt-viewer ra-jon-vm &
Obtain the VM MAC address (for use in the cobbler system entry):

grep "mac add" /etc/libvirt/qemu/ra-jon-vm.xml
c) The cobbler system entry for the VM is not complete, therefore make changes on the Satellite server
Determine the VM's cobbler system entry (e.g., monet.ra.rh.com:2:ra-jon-vm)
Remove this entry:
cobbler system remove --name=monet.ra.rh.com:2:ra-jon-vm
Add a complete entry:
cobbler list
cobbler system add --name=ra-jon-vm.ra.rh.com --profile=rhel55_jon_vm:2:management --mac=00:16:3e:5e:38:1f --ip= --hostname=ra-jon-vm.ra.rh.com --dnsname=ra-jon-vm.ra.rh.com
Synchronize cobbler and the system files:
cobbler sync
d) The hostname may have been set to a temporary DHCP name; change this to the new registered name by logging into the VM:
Edit /etc/sysconfig/network and remove the name after '=' in the HOSTNAME entry
reboot
7. Configure VM as a cluster service
a) Shut down the VM so that when the cluster starts an instance there is only one active:
virsh shutdown ra-jon-vm
b) Copy the VM definition to all cluster members:
scp /etc/libvirt/qemu/ra-jon-vm.xml degascl.ra.rh.com:/etc/libvirt/qemu/
c) Log into the luci home page and follow the links: cluster -> ciab -> Services -> add a virtual machine service
Virtual machine name: ra-jon-vm
Path to VM configuration files: /etc/libvirt/qemu
Migration type: Live
Hypervisor: Automatic
Check the Automatically start this service box
Failover Domain: ciab_fod
Recovery policy: Restart
Max restart failures: 2
Length of time after which to forget a restart: 60
d) Test service migration:
clusvcadm -M vm:ra-jon-vm -m monet-cl.ra.rh.com
e) Test access to the JON console
URL:
Login: rhqadmin / rhqadmin
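The firewall scripts in this chapter all follow the same splice pattern: trim the last two lines of /etc/sysconfig/iptables (the catch-all REJECT rule and COMMIT), append ACCEPT rules, then restore the trimmed tail so the rules land before the REJECT. A minimal sketch against a scratch copy; the file contents and port list are illustrative placeholders (the reference scripts open the JBoss/JON ports, whose numbers did not survive transcription):

```shell
#!/bin/sh
# Splice ACCEPT rules into an iptables config ahead of its closing two lines.
work=$(mktemp -d)
cat > "$work/iptables" <<'EOF'
*filter
:RH-Firewall-1-INPUT - [0:0]
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
EOF
cp "$work/iptables" "$work/iptables.orig"

head -n -2 "$work/iptables.orig" > "$work/iptables"   # drop REJECT + COMMIT
for port in 22 7080 16163; do                         # placeholder port list
  echo "-A RH-Firewall-1-INPUT -p tcp --dport $port -m state --state NEW -j ACCEPT" >> "$work/iptables"
done
tail -2 "$work/iptables.orig" >> "$work/iptables"     # restore the closing lines
```

The order matters: appending after COMMIT, or after the REJECT rule, would leave the new rules unreachable, which is why the scripts save the tail first and re-append it last.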

10.2 Deploy JBoss EAP Application in RHEL VMs
The application is based on a server-side Java exerciser. The standard means of executing this application scales the workload according to the CPU horsepower that is available.

Figure: Deploy Using Satellite

1. Configure Activation Key
a) Starting at the satellite home page for the tenant administrator, select the following links: Systems -> Activation Keys -> create new key, and provide the information below:
Description: (e.g., r55jboss-key)
Base Channel: rhel5-5-x86_64-server
Add On Entitlements: Monitoring, Provisioning
Create Activation Key

b) Select the Child Channels tab and select the following channel:
RHN Tools
2. Configure Kickstart
a) Starting at the satellite home page for the tenant administrator, select the following links and provide the information below: Systems -> Kickstart -> Profiles -> create new kickstart profile
Label: (e.g., RHEL55jboss)
Base Channel: rhel5-5-x86_64-server
Kickstartable Tree: rhel55_x86-64
Virtualization Type: None
Select Next to accept input and proceed to the next page
Select Default Download Location
Select Next to accept input and proceed to the next page
Specify New Root Password and Verify
Click Finish
b) In the Kickstart Details -> Details tab:
Log custom post scripts
Click Update Kickstart
c) In the Kickstart Details -> Operating System tab:
Select Child Channels: RHN Tools
Since this is a base-only install, verify no Repository checkboxes are selected
Click Update Kickstart
d) In the Kickstart Details -> Advanced Options tab:
Verify reboot is selected
Change firewall to --enabled
e) In the System Details -> Details tab:
Enable Configuration Management and Remote Commands
Click Update System Details
f) In the System Details -> Partitioning tab:
Change the volume group name (e.g., jbossvm)
Click Update Partitions
g) In the System Details -> GPG & SSL tab, select the RHN-ORG-TRUSTED-SSL-CERT key
Update Keys
h) In the Activation Keys tab:
Select r55jboss-key
Click Update Activation Keys

i) In the Scripts tab:
A post installation script is used to:
disable GPG checking of custom channels (due to not all beta packages having been signed)
open JBoss specific firewall ports
ensure all installed software is up to date
install and configure JBoss EAP and the JON agent
deploy a JBoss application

# set required firewall ports
/bin/cp /etc/sysconfig/iptables /tmp/iptables
/usr/bin/head -n -2 /tmp/iptables > /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables

/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p udp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 67 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport 68 -m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/bin/echo "-A RH-Firewall-1-INPUT -p tcp --dport m state --state NEW -j ACCEPT" >> /etc/sysconfig/iptables
/usr/bin/tail -2 /tmp/iptables >> /etc/sysconfig/iptables

# disable GPG checking of custom channels
echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "[rhel5-5-x86_64-server-snap4]" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "[clone-2-rhn-tools-rhel-x86_64-server-5]" >> /etc/yum/pluginconf.d/rhnplugin.conf
echo "gpgcheck=0" >> /etc/yum/pluginconf.d/rhnplugin.conf

# install required packages
/usr/bin/yum -y install java openjdk
/usr/bin/yum -y install osad
/sbin/chkconfig osad on
/usr/bin/yum -y update

# download, install and configure JBoss EAP
cd /root

   wget
   unzip jboss-eap-*.ga.zip
   cd jboss-eap*/jboss-as/server/default/conf/props
   cat jmx-console-users.properties | sed -e 's/# admin=admin/admin=100yard-/' > jmx-console-users.properties2
   mv -f jmx-console-users.properties2 jmx-console-users.properties

   # download, install and configure the JON agent
   cd /root
   wget
   java -jar /root/rhq-enterprise-agent-default.ga.jar --install
   cd /root/rhq-agent/conf
   line=`grep -n "key=\"rhq.agent.configuration-setup-flag" agent-configuration.xml | cut -d: -f1`
   before=`expr $line - 1`
   after=`expr $line + 1`
   sed -e "${after}d" -e "${before}d" agent-configuration.xml > agent-configuration.xml2
   \mv agent-configuration.xml2 agent-configuration.xml
   sed -e '/rhq.agent.configuration-setup-flag/s/false/true/g' agent-configuration.xml > agent-configuration.xml2
   \mv agent-configuration.xml2 agent-configuration.xml
   sed -e "/rhq.agent.server.bind-address/s/value=\".*\"/value=\"ra-jon-vm.ra.rh.com\"/g" agent-configuration.xml > agent-configuration.xml2
   \mv agent-configuration.xml2 agent-configuration.xml
   cd /root/rhq-agent/bin
   \mv rhq-agent-env.sh rhq-agent-env.sh.orig
   wget

   # deploy the test app
   cd /root/jboss-eap*/jboss-as/server/default/deploy
   wget
   wget

   # configure JBoss and JON agent to auto start
   cd /etc/init.d
   wget
   sed -e "s/readlink/readlink -e/g" /root/rhq-agent/bin/rhq-agent-wrapper.sh > /root/rhq-agent/bin/rhq-agent-wrapper.sh2
   \mv /root/rhq-agent/bin/rhq-agent-wrapper.sh2 /root/rhq-agent/bin/rhq-agent-wrapper.sh
   ln -s /root/rhq-agent/bin/rhq-agent-wrapper.sh .
   chmod +x jboss-eap rhq-agent-wrapper.sh
   /sbin/chkconfig --add jboss-eap
   /sbin/chkconfig --add rhq-agent-wrapper.sh
   /sbin/chkconfig rhq-agent-wrapper.sh on
   /sbin/chkconfig jboss-eap on

3. Create the control script (jboss-eap) to be provisioned into /etc/init.d to automatically start and optionally stop the workload.

   #!/bin/sh
   #
   # jboss-eap
   # Start jboss-eap
   #
   # chkconfig:
   # description: Starts and stops jboss-eap
   #
   # Source function library.
   . /etc/init.d/functions

   IPADDR=`ifconfig eth0 | awk -F: '/172.20/ {print $2}' | awk '{print $1}'`

   start() {
       cd /root/jboss-eap*/jboss-as/bin
       nohup ./run.sh -b $IPADDR &
   }

   stop() {
       cd /root/jboss-eap*/jboss-as/bin
       ./shutdown.sh -S -s jnp://$IPADDR:1099 -u admin -p <password>
   }

   status_at() {
       cd /root/jboss-eap*/jboss-as/bin
       status ./run.sh
   }

   case "$1" in
       start)
           # stop
           start
           RETVAL=$?
           ;;
       stop)
           stop
           RETVAL=$?
           ;;
       status)
           status_at
           RETVAL=$?
           ;;
       *)
           echo $"Usage: $0 {start|stop|status}"
           exit 1
           ;;
   esac
   exit $RETVAL
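The IPADDR line in the control script above pipes ifconfig output through two awk stages. It can be exercised in isolation against a canned ifconfig line; the sample address below is illustrative, not taken from the reference configuration:

```shell
#!/bin/sh
# Exercise the IPADDR extraction from the jboss-eap control script against a
# canned "ifconfig eth0" line (RHEL 5 net-tools output format; the address is
# a made-up example on the 172.20 network the script expects).
sample="          inet addr:172.20.1.15  Bcast:172.20.255.255  Mask:255.255.0.0"

# Field 2 after splitting on ':' is "172.20.1.15  Bcast"; the second awk
# keeps only the first whitespace-delimited word.
IPADDR=$(echo "$sample" | awk -F: '/172.20/ {print $2}' | awk '{print $1}')
echo "$IPADDR"   # prints 172.20.1.15
```

Running the pipeline this way confirms that both pipe stages are needed: the first awk selects the address field, the second strips the trailing "Bcast" label.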

4. To run the JON agent at startup, some of the parameters in rhq-agent-env.sh need to be enabled by removing the # symbol that appears at the start of each line. The following three parameters are mandatory and were uncommented and set accordingly for this effort:
   RHQ_AGENT_HOME=/root/rhq-agent - the directory above the agent installation's bin directory.
   RHQ_AGENT_JAVA_HOME=/usr/lib/jvm/jre - the directory above the bin folder for the JDK.
   RHQ_AGENT_PIDFILE_DIR=/var/run - a directory writable by the user that executes the agent. It defaults to /var/run; if /var/run is not writable, use $RHQ_AGENT_HOME/bin. Note that this is only applicable to JON agent versions 2.1.2SP1 and earlier; subsequent versions of the agent fall back to a writable directory.
   NOTE: If RHQ_AGENT_PIDFILE_DIR is modified and the OS is RHEL, a parallel change is required to the chkconfig "pidfile" location at the top of the rhq-agent-wrapper.sh script.

5. Copy these files to the previously created /var/www/html/pub/kits directory on the Satellite server:
   jboss-eap-default.ga.zip (JBoss EAP)
   jboss-eap (init.d startup file)
   rhq-enterprise-agent-default.ga.jar (JON agent)
   rhq-agent-env.sh (JON agent variable definitions)
   jboss-seam-booking-ds.xml (JBoss application)
   jboss-seam-booking.ear

6. Create RHEV VM
a) At the RHEV Manager Virtual Machines tab, select New Server
b) In the New Server Virtual Machine dialog, General tab, provide the following data:
   Name: (e.g., jboss1)
   Description: [optional]
   Host Cluster: (e.g., dc1-clus1)
   Template: [blank]
   Memory Size: (e.g., 2048)
   CPU Sockets: (e.g., 1)
   CPUs Per Socket: (e.g., 2)
   Operating System: Red Hat Enterprise Linux 5.x x64
c) In the Boot Sequence tab, provide the following:
   Second Device: Network (PXE)

d) Select OK
e) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
   Type: Red Hat VirtIO
f) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
   Size (GB): (e.g., 8)
   The defaults for the remaining entries are adequate

7. Boot the JBoss VM
a) In the RHEV Manager Virtual Machines tab, select the newly created VM
b) Select either the Run button or the Run option in the right mouse button menu
c) Start the console by selecting the Console button when active or the Console option in the right mouse button menu
d) After initial PXE booting, the Cobbler PXE boot menu will display; select the kickstart that was previously created in Step 2 (e.g., RHEL55jboss:22:tenants).
e) The VM will reboot when the installation is complete, and the JON Console Dashboard will display the VM as an Auto-Discovered resource.

Figure 33: JON Console Dashboard

8. For this proof of concept, the JBoss Seam hotel booking web application (a key component of JBoss EAP) is distributed via Satellite onto each JBoss server.

9. The application can be tested by directing a browser to the JBoss server URL. For example,

Figure 34: JBoss Seam Framework Demo

Deploy Using Template

The previous section has each new JBoss VM automatically registering with JON. Deployment via template differs only in that the VM created to act as the model for the template must not register itself with JON; this leaves any VMs created from the template free to register their own IP/port token.
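The only scripted change in the cloned kickstart (step 1 below) is disabling JON auto-registration by removing or commenting out two chkconfig lines in the %post script. Where that script is kept as a local file, the edit can be sketched with sed; the stand-in file here is hypothetical, since on the satellite the change is made in the profile's Scripts tab:

```shell
#!/bin/sh
# Sketch: comment out the rhq-agent-wrapper.sh chkconfig lines in a %post
# script while leaving the jboss-eap service enabled. A temporary stand-in
# file is used so the edit can be rehearsed.
POST="$(mktemp)"
cat > "$POST" <<'EOF'
/sbin/chkconfig --add jboss-eap
/sbin/chkconfig jboss-eap on
/sbin/chkconfig --add rhq-agent-wrapper.sh
/sbin/chkconfig rhq-agent-wrapper.sh on
EOF

# Prefix only the rhq-agent-wrapper.sh lines with '#'.
sed -i '/rhq-agent-wrapper\.sh/s/^/#/' "$POST"
cat "$POST"
```

The jboss-eap lines are untouched, so the cloned profile still starts the workload at boot; only the agent registration is suppressed.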

1. Clone JBoss VM Kickstart
a) Starting at the satellite home page for the tenant administrator, select the following links: Systems -> Kickstart -> Profiles, and select the profile created for the JBoss VM (e.g., jboss)
b) Select the link to clone kickstart and provide the information below:
   Kickstart Label: (e.g., jboss-cloner)
   Click Clone Kickstart
c) Select the following links: Scripts -> Script 1
d) Modify the post install script to remove or comment out the following entries:
   /sbin/chkconfig --add rhq-agent-wrapper.sh
   /sbin/chkconfig rhq-agent-wrapper.sh on
e) Click Update Kickstart

2. Create VM for Template
a) At the RHEV Manager Virtual Machines tab, select New Server
b) In the New Server Virtual Machine dialog, General tab, provide the following data:
   Name: (e.g., jbossclonervm)
   Description: [optional]
   Host Cluster: (e.g., dc1-clus1)
   Template: [blank]
   Memory Size: (e.g., 2048)
   CPU Sockets: (e.g., 1)
   CPUs Per Socket: (e.g., 2)
   Operating System: Red Hat Enterprise Linux 5.x x64
c) In the Boot Sequence tab, provide the following:
   Second Device: Network (PXE)
d) Select OK
e) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
   Type: Red Hat VirtIO
f) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
   Size (GB): (e.g., 8)
   The defaults for the remaining entries are adequate

3. Boot the JBoss VM
a) In the RHEV Manager Virtual Machines tab, select the newly created VM
b) Select either the Run button or the Run option in the right mouse button menu

c) Start the console by selecting the Console button when active or the Console option in the right mouse button menu
d) After initial PXE booting, the Cobbler PXE boot menu will display; select the kickstart that was previously cloned in Step 1 (e.g., jboss-cloner:22:tenants).

4. The VM will reboot when the installation is complete, and the JON Console Dashboard should not display the VM as an Auto-Discovered resource.

5. Create Template
a) Prepare the template system (e.g., jbossclonervm) to register with the satellite upon booting.
   Identify the activation key to use to register the system. The Activation Keys page (in the Systems tab) of the satellite lists existing keys for each organization.
   Alternatively, if the system was PXE installed using satellite, the register command can be found in /root/cobbler.ks, which includes the key used:
      grep rhnreg cobbler.ks
   Using the activation key acquired in the previous step, the following will place commands in the proper script to execute on the next boot:
      cp /etc/rc.d/rc.local /etc/rc.d/rc.local.pretemplate
      echo rhnreg_ks --force --serverurl=https://ra-satvm.ra.rh.com/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=22-58d19ee2732c866bf9b89f39e498384e >> /etc/rc.d/rc.local
      echo mv /etc/rc.d/rc.local.pretemplate /etc/rc.d/rc.local >> /etc/rc.d/rc.local
   Execute the following commands on the newly created VM:
      /sbin/chkconfig --add rhq-agent-wrapper.sh
      /sbin/chkconfig rhq-agent-wrapper.sh on
b) Before shutting down the system used to create a template, some level of clearing of the configuration settings should be performed. At a minimum, the hostname should not be hard-coded, as this can lead to confusion when the hostname does not match the IP currently assigned. The following commands remove the name that was set at installation, and DHCP will set the name upon boot:
      cp /etc/sysconfig/network /tmp
      grep -v HOSTNAME= /tmp/network > /etc/sysconfig/network
   Alternatively, a more extensive method of clearing configuration settings is the sys-unconfig command, which causes the system to reconfigure network, authentication, and several other subsystems on next boot.
c) Shutdown the template model VM

d) At the RHEV Manager Virtual Machines tab, select the appropriate VM and either the Make Template button or the right mouse button menu option:
   Name: (e.g., jboss_template)
   Description: [optional]
   While the template is being created, the image is locked. Confirm the template exists in the Templates tab after the creation is complete.

6. Create a new VM using the template
a) At the RHEV Manager Virtual Machines tab, select New Server
b) In the New Server Virtual Machine dialog, General tab, provide the following data:
   Name: (e.g., jboss2)
   Description: [optional]
   Template: (e.g., jboss_template)
   Confirm or override the remaining entries
c) If the Data Center and Cluster are set to v2.2 compatibility, the provisioning can be changed from thin to preallocated. In the Allocation tab, provide the following:
   Provisioning: Clone
   Disk 1: Preallocated

7. The newly created VM will have a Locked Image while being instantiated.

8. When the process is complete, the cloned VM is ready to boot.
a) At the RHEV Manager Virtual Machines tab, select the newly created VM
b) Either select the Run button or the equivalent right mouse button menu option
c) Start the console by selecting the Console button when active or the equivalent right mouse button menu option
d) With the JON agent running, the JON Console Dashboard should display the newly cloned VM as an Auto-Discovered resource.
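Step 5 of this section prepared the template model VM by queueing a one-shot re-registration and stripping the hard-coded hostname. Those edits can be consolidated into a single script; the sketch below rehearses them against a throwaway directory tree, and the activation key and ROOT prefix are illustrative stand-ins — on the template VM the real /etc paths and the organization's real key would be used:

```shell
#!/bin/sh
# Sketch: rehearse the template-preparation edits in a throwaway ROOT tree.
# KEY is a placeholder value, not a real activation key.
ROOT="$(mktemp -d)"
KEY="22-0000000000000000000000000000"
mkdir -p "$ROOT/etc/rc.d" "$ROOT/etc/sysconfig"
echo "touch /var/lock/subsys/local" > "$ROOT/etc/rc.d/rc.local"
printf 'NETWORKING=yes\nHOSTNAME=jbossclonervm.ra.rh.com\n' > "$ROOT/etc/sysconfig/network"

# 1. Queue a one-shot rhnreg_ks for the next boot, then restore rc.local.
cp "$ROOT/etc/rc.d/rc.local" "$ROOT/etc/rc.d/rc.local.pretemplate"
echo "rhnreg_ks --force --serverurl=https://ra-satvm.ra.rh.com/XMLRPC --activationkey=$KEY" >> "$ROOT/etc/rc.d/rc.local"
echo "mv /etc/rc.d/rc.local.pretemplate /etc/rc.d/rc.local" >> "$ROOT/etc/rc.d/rc.local"

# 2. Drop the hard-coded hostname so DHCP names each clone on first boot.
grep -v '^HOSTNAME=' "$ROOT/etc/sysconfig/network" > "$ROOT/etc/sysconfig/network.new"
mv "$ROOT/etc/sysconfig/network.new" "$ROOT/etc/sysconfig/network"

cat "$ROOT/etc/sysconfig/network"   # prints only NETWORKING=yes
```

Because rc.local restores itself after the first boot, every clone made from the template registers exactly once and then reverts to the stock startup script.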

10.3 Scale JBoss EAP Application

Figure 35

Using the previously created template and a PowerShell script, multiple instances of the JBoss VM can be rapidly deployed.

1. Use the PowerShell script created in Section 9.2 to create multiple VMs:
a) On the RHEV Manager, select the following from the Start menu: All Programs -> Red Hat -> RHEV Manager -> RHEV Manager Scripting Library
b) In the PowerShell window, log in with a superuser account:
   Login-User -user admin -p <password> -domain ra-rhevm-vm
c) Change to the scripts directory:
   cd c:/scripts
d) Call the script to asynchronously create 5 VMs named jboss#:
   ./add-vms -tempname jboss_template -basename jboss -num 5
e) After the VMs finish creating, the operator can select any or all of the desired VMs and press Run
f) Or, if desired, the operator can call the script with the -run option, which starts each VM as it is synchronously created:
   ./add-vms -tempname jboss_template -basename jboss -num 5 -run
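The -basename/-num arguments to add-vms amount to generating sequential VM names from a stem. The same naming convention, sketched as a plain shell loop for five VMs:

```shell
#!/bin/sh
# Sketch: reproduce the add-vms naming convention (-basename jboss -num 5),
# i.e., generate the names jboss1 through jboss5, one per line.
basename=jboss
num=5
NAMES=$(i=1; while [ "$i" -le "$num" ]; do echo "${basename}${i}"; i=$((i+1)); done)
echo "$NAMES"
```

The numbering starts at 1 to match the jboss1 example used when the first VM was created by hand earlier in this section.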

11 Deploying MRG Grid Applications in RHEL VMs

11.1 Deploy MRG Manager in Management Services Cluster

Figure 36

1. Prepare MRG channels
a) Synchronize the satellite DB data and RPM repository with Red Hat's RHN DB and RPM repository for the required MRG channels:
   satellite-sync -c rhel-x86_64-server-5-mrg-management-1 -c rhel-x86_64-server-5-mrg-grid-1 -c rhel-x86_64-server-5-mrg-messaging-1 -c rhel-x86_64-server-5-mrg-grid-execute-1

b) Clone the above channels under the custom Red Hat Enterprise Linux 5.5 base channel. Starting at Satellite Home, select the following links for each channel above: Channels -> Manage Software Channels -> clone channel
   Clone From: (e.g., rhel-x86_64-server-5-mrg-management-1)
   Clone: Current state of the channel (all errata)
   Click Create Channel
   In the displayed Details page:
      Parent Channel: (e.g., rhel5-5-x86_64-server)
      Channel Name: use provided or specify name
      Channel Label: use provided or specify label
      Base Channel Architecture: x86_64
      Channel Summary: use provided or specify summary
      Enter any optional (non-asterisk) information as desired
      Click Create Channel
   On the re-displayed Details page:
      Organizational Sharing: Public

2. Prepare the required Configuration Channels
a) Refer to Appendix A.4 for details on each of the files for each channel; use this information for access to the files during channel creation. The files can be downloaded to a holding area and have any required modifications applied, readying them for upload into the configuration channels. Alternatively, for all except the largest file (whose contents are not listed), each file can be created by copying its contents from the appendix.
b) For each channel listed, create the configuration channel by selecting the Configuration tab -> the Configuration Channels link on the left side of the page -> create new config channel. After specifying each channel's Name, Label, and Description, add the file(s) where all non-default values have been specified.
   sesame
      Filename/Path: /etc/sesame/sesame.conf
   cumin
      Filename/Path: /etc/cumin/cumin.conf
   postgresql
      Filename/Path: /var/lib/pgsql/data/pg_hba.conf
      Ownership: User name: postgres
      Ownership: Group name: postgres
      File Permissions Mode: 600
   mrgdeploy
      Filename/Path: /root/mrgdeploy.sh
      File Permissions Mode: 744
   condor_manager
      Filename/Path: /etc/condor/condor_config

      Filename/Path: /home/mrgmgr/createnewnode.sh
      Ownership: User name: mrgmgr
      Ownership: Group name: mrgmgr
      File Permissions Mode: 744
      Filename/Path: /home/mrgmgr/destroylastnode.sh
      Ownership: User name: mrgmgr
      Ownership: Group name: mrgmgr
      File Permissions Mode: 744
      Filename/Path: /home/mrgmgr/satelliteremovelast.pl
      Ownership: User name: mrgmgr
      Ownership: Group name: mrgmgr
      File Permissions Mode: 744
      Filename/Path: /var/lib/condor/condor_config.local
   ntp
      Filename/Path: /etc/ntp.conf

3. If not previously configured, create the storage area for virtual machines.
a) Create a storage volume (e.g., mgmtvirt_disk) of appropriate size. See Section 6.3 for greater detail on adding and presenting LUNs from storage.
b) Create the MgmtVirtVG volume group from the disk.
   Initialize the disk for LVM:
      pvcreate /dev/mapper/mgmtvirt_disk
   Create the volume group:
      vgcreate MgmtVirtVG /dev/mapper/mgmtvirt_disk

4. Configure Activation Key
a) Log into satellite as 'manage' and select the following links: Systems -> Activation Keys -> create new key
   Description: (e.g., coe-mrg-gridmgr)
   Base Channel: rhel5-5-x86_64-server
   Add On Entitlements: Monitoring, Provisioning
   Create Activation Key
b) In the Details tab:
   Select the Configuration File Deployment checkbox
   Click Update Activation Key
c) Select the Child Channel tab:
   Add RHN Tools and all the cloned MRG channels
   Select Update Key

d) Select the Packages tab and enter the following packages:
   qpidd
   sesame
   qmf
   condor
   condor-qmf-plugins
   cumin
   perl-frontier-rpc
   rhncfg
   rhncfg-client
   rhncfg-actions
   ntp
   postgresql
   postgresql-server
e) Select the Configuration and Subscribe to Channels tabs:
   Select all the configuration channels created in step 2 and select Continue
   None of the channels have files in common, so accept the presented order by selecting Update Channel Rankings

5. Configure Kickstart
a) Log into satellite as 'manage' and select the following links: Systems -> Kickstart -> Profiles -> create new kickstart profile
   Label: (e.g., coe-mrg-gridmgr)
   Base Channel: rhel5-5-x86_64-server
   Kickstartable Tree: rhel55_x86-64
   Virtualization Type: KVM Virtualization Guest
   Click Next to accept input and proceed to the next page
   Click Default Download Location
   Click Next to accept input and proceed to the next page
   Specify New Root Password, Verify, and Finish
b) Select the following links: Kickstart Details -> Details tab
   Virtual Memory (in MB): 1024
   Number of Virtual CPUs: 1
   Virtual Disk Space (in GB): 20
   Virtual Bridge: cumulus0
   Log custom post scripts
   Click Update Kickstart

c) Select the following links: Kickstart Details -> Operating System tab
   Select RHN Tools and cloned MRG Child Channels
   Uncheck all Repositories
   Click Update Kickstart
d) Select the following links: Kickstart Details -> Advanced Options tab
   Verify reboot is selected
e) Select the following links: System Details -> Details tab
   Enable Configuration Management and Remote Commands
   Click Update System Details
f) Select the following links: System Details -> Partitioning tab
   Change myvg to MRGVG, then click Update
g) Select the following link: Activation Keys tab
   Select coe-mrg-gridmgr
   Click Update Activation Keys
h) Select the Scripts tab. This script performs the necessary configuration for the MRG Management Console:

   #Turn on Services
   chkconfig sesame on
   chkconfig postgresql on
   chkconfig condor on
   chkconfig qpidd on
   chkconfig cumin on
   chkconfig ntpd on

   #Postgresql funkiness
   rm -rf /var/lib/pgsql/data
   su - postgres -c "initdb -D /var/lib/pgsql/data"
   service postgresql restart
   service qpidd start

   #Add mrgmgr user
   useradd mrgmgr

6. Provision MRG Management VM
a) On Satellite as 'manage', select the following links: Systems -> Monet (Mgmt Server) -> Virtualization -> Provisioning
   Check the button next to the coe-mrg-gridmgr kickstart profile
   Guest Name: ra-mrggrid-vm
   Select Advanced Configuration
   Virtual Storage Path: MgmtVirtVG
   Select Schedule Kickstart and Finish

b) Speed the install by logging on to monet (Mgmt Server).
   Check in with Satellite and watch verbose output; if desired, watch the installation:
      rhn_check -vv &
      virt-viewer ra-mrggrid-vm &
   Obtain the VM MAC address (for use in the cobbler system entry):
      grep "mac add" /etc/libvirt/qemu/ra-mrggrid-vm.xml
c) The cobbler system entry for the VM is not complete; therefore, make these changes on the Satellite server.
   Discover the VM's cobbler system entry (e.g., monet.ra.rh.com:2:ra-mrggrid-vm):
      cobbler list
   Remove this entry:
      cobbler system remove --name=monet.ra.rh.com:2:ra-mrggrid-vm
   Add a complete entry:
      cobbler system add --name=ra-mrggrid-vm.ra.rh.com --profile=coe-mrg-gridmgr:2:management --mac=00:16:3e:20:75:67 --ip=<address> --hostname=ra-mrggrid-vm.ra.rh.com --dns-name=ra-mrggrid-vm.ra.rh.com
   Synchronize cobbler and system files:
      cobbler sync
d) The hostname may have been set to a temporary DHCP name; change this to the newly registered name by logging into the VM:
   Edit /etc/sysconfig/network and remove the name after '=' in the HOSTNAME entry
   reboot

7. Configure the VM as a cluster service
a) Shut down the VM so that when the cluster starts an instance there is only one active:
      virsh shutdown ra-mrggrid-vm
b) Copy the VM definition to all cluster members:
      scp /etc/libvirt/qemu/ra-mrggrid-vm.xml degas-cl.ra.rh.com:/etc/libvirt/qemu/
c) Log into the luci home page and follow the links: cluster -> ciab -> Services -> add a virtual machine service
   Virtual machine name: ra-mrggrid-vm
   Path to VM configuration files: /etc/libvirt/qemu
   Migration type: Live
   Hypervisor: Automatic
   Check the Automatically start this service box
   Failover Domain: ciab_fod
   Recovery policy: Restart
   Max restart failures: 2
   Length of time after which to forget a restart:

d) Test service migration:
      clusvcadm -M vm:ra-mrggrid-vm -m monet-cl.ra.rh.com
e) Invoke the setup script on ra-mrggrid-vm:
      ssh /root/mrgdeploy.sh
f) Test access to the MRG Manager Console:
   URL:
   Login: admin / <password>

8. Install Cygwin on the RHEV Management Platform
a) On the ra-rhevm-vm system, navigate to the Cygwin home page.

Figure 37: Cygwin Home Page

b) Select the Install Cygwin now link located toward the top right side of the page
c) Select Run in the download dialog
d) Select Next in the Cygwin Setup screen

e) Select Install from Internet and select Next
f) Keep the default Root Directory (C:\cygwin) and Install For All Users by selecting Next
g) Keep the default Local Package Directory by selecting Next
h) Select the appropriate internet connection, then select Next
i) Select a download site and select Next
j) During the download, an alert may inform the user that this version is a major update. Select OK.
k) After the package manifest download, search for ssh and ensure that openssh is downloaded by selecting Skip in the corresponding line. Skip will change to the version of the package for inclusion. Select Next to complete the download.

l) After the packages install, complete the installation by selecting Finish.

m) The Cygwin bin directory should be added to the system PATH variable. Start the Control Panel -> System and Security -> System -> Advanced system settings -> Environment Variables... -> Path -> Edit... -> add C:\cygwin\bin at the end -> select OK

n) Launch the Cygwin shell by selecting Run as administrator in the right mouse button menu from the desktop icon.
o) Invoke the following commands in the Cygwin shell, answering yes and providing the desired user name and passwords:
      ssh-host-config
      mkpasswd -cl > /etc/passwd
      mkgroup -l > /etc/group

p) The username used by the sshd service must be edited. Select Start -> Administrative Tools -> Services. Find the Cygwin sshd service and select the Properties option from the right mouse button menu. Select the Log On tab. Remove the .\ preceding the user name in the This account field. Select OK.
q) Start the sshd daemon in the Cygwin shell:
      net start sshd

11.2 Deploy MRG Grid in RHEL VMs

Figure 38

1. Prepare the required Configuration Channels (condor_execute)
a) Refer to Appendix A.4 for details on each of the files for each channel; use this information for access to the files during channel creation. The files can be downloaded to a holding area and have any required modifications applied, readying them for upload into the configuration channels. Alternatively, for all except the largest file (whose contents are not listed), each file can be created by copying its contents from the appendix.
b) For each channel listed, create the configuration channel by selecting the Configuration tab -> the Configuration Channels link on the left side of the page -> create new config channel. After specifying each channel's Name, Label, and Description, add the file(s) where all non-default values have been specified.
   condor_execute
      Filename/Path: /etc/sesame/sesame.conf

      Filename/Path: /etc/condor/condor_config
      Filename/Path: /var/lib/condor/condor_config.local
   ntp
      Filename/Path: /etc/ntp.conf

2. Configure Activation Key
a) Log into satellite as 'tenant' and select the following links: Systems -> Activation Keys -> create new key
   Description: (e.g., coe-mrg-gridexec)
   Base Channel: rhel5-5-x86_64-server
   Add On Entitlements: Monitoring, Provisioning
   Create Activation Key
b) In the Details tab:
   Select the Configuration File Deployment checkbox
   Click Update Activation Key
c) Select the Child Channel tab:
   Add RHN Tools and all the cloned MRG channels
   Select Update Key
d) Select the Packages tab and enter the following packages:
   qpidd
   condor
   condor-qmf-plugins
   rhncfg
   rhncfg-client
   rhncfg-actions
   ntp
e) Select the Configuration and Subscribe to Channels tabs:
   Select all the configuration channels created in step 1 and select Continue
   None of the channels have files in common, so accept the presented order by selecting Update Channel Rankings

3. Configure Kickstart
a) Log into satellite as 'tenant' and select the following links: Systems -> Kickstart -> Profiles -> create new kickstart profile
   Label: (e.g., coe-mrg-gridexec)
   Base Channel: rhel5-5-x86_64-server
   Kickstartable Tree: rhel55_x86-64
   Virtualization Type: none
   Click Next to accept input and proceed to the next page
   Click Default Download Location
   Click Next to accept input and proceed to the next page

   Specify New Root Password, Verify, and Finish
b) Select the following links: Kickstart Details -> Operating System tab
   Select RHN Tools and cloned MRG Child Channels
   Uncheck all Repositories
   Click Update Kickstart
c) Select the following links: Kickstart Details -> Advanced Options tab
   Verify reboot is selected
d) Select the following links: System Details -> Details tab
   Enable Configuration Management and Remote Commands
   Click Update System Details
e) Select the following links: System Details -> Partitioning tab
   Change myvg to MRGVG, then click Update
f) Select the following link: Activation Keys tab
   Select coe-mrg-gridexec
   Click Update Activation Keys
g) Select the Scripts tab. This script performs the necessary configuration for the MRG Grid execute node:

   chkconfig condor on
   chkconfig qpidd on
   condor_status -any
   chkconfig sesame on
   chkconfig ntpd on

4. Deploy scripts on the RHEV Manager and MRG Grid Manager
a) On the RHEV Manager, as the admin user, download
b) Extract the contents of ciabrhevscripts.tar.gz in C:\Program Files (x86)\redhat\rhevmanager\rhevm Scripting Library
c) Edit the contents of ciabcreatenewvm.ps1 to match your environment's credentials and configuration
d) On the MRG Manager, as the mrgmgr user, download
e) Extract the contents of ciabmrgscripts.tar.gz in /home/mrgmgr
f) Edit the contents of CiabCreateNewVm.sh to match your environment's credentials and configuration

5. Create VM to be used for the template
a) At the RHEV Manager Virtual Machines tab, select New Server

b) In the New Server Virtual Machine dialog, General tab, provide the following data:
   Name: (e.g., mrgexectemplate)
   Description: [optional]
   Host Cluster: (e.g., dc1-clus1)
   Template: [blank]
   Memory Size: (e.g., 512)
   CPU Sockets: (e.g., 1)
   CPUs Per Socket: (e.g., 1)
   Operating System: Red Hat Enterprise Linux 5.x x64
c) In the Boot Sequence tab, provide the following:
   Second Device: Network (PXE)
d) Select OK
e) Select the Configure Network Interfaces button in the Guide Me dialog and provide the following in the New Network Interface dialog:
   Type: Red Hat VirtIO
f) Select the Configure Virtual Disks button in the Guide Me dialog and provide the following in the New Virtual Disk dialog:
   Size (GB): (e.g., 10)
   The defaults for the remaining entries are adequate

6. Boot the Grid Exec VM
a) In the RHEV Manager Virtual Machines tab, select the newly created VM
b) Select either the Run button or the Run option in the right mouse button menu
c) Start the console by selecting the Console button when active or the Console option in the right mouse button menu
d) After initial PXE booting, the Cobbler PXE boot menu will display; select the kickstart that was previously created in Step 3 (e.g., coe-mrg-gridexec:22:tenants).

7. Prepare the MRG Grid Exec Node VM for the template
a) Prepare the template system (e.g., mrgexectemplate) to register with the satellite upon booting.
   Identify the activation key to use to register the system. The Activation Keys page (in the Systems tab) of the satellite lists existing keys for each organization.
   Alternatively, if the system was PXE installed using satellite, the register command can be found in /root/cobbler.ks, which includes the key used:
      grep rhnreg cobbler.ks
   Using the activation key acquired in the previous step, the following will place commands in the proper script to execute on the next boot:

      cp /etc/rc.d/rc.local /etc/rc.d/rc.local.pretemplate
      echo rhnreg_ks --force --serverurl=https://ra-satvm.ra.rh.com/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=22-a570cc dc9df44e0335bc1e22 >> /etc/rc.d/rc.local
      echo mv /etc/rc.d/rc.local.pretemplate /etc/rc.d/rc.local >> /etc/rc.d/rc.local
b) Before shutting down the system used to create a template, some level of clearing of the configuration settings should be performed. At a minimum, the hostname should not be hard-coded, as this can lead to confusion when the hostname does not match the IP currently assigned. The following commands remove the name that was set at installation, and DHCP will set the name upon boot:
      cp /etc/sysconfig/network /tmp
      grep -v HOSTNAME= /tmp/network > /etc/sysconfig/network
   Alternatively, a more extensive method of clearing configuration settings is the sys-unconfig command, which causes the system to reconfigure network, authentication, and several other subsystems on next boot.
c) If not already shut down, shut down the template model VM
d) At the RHEV Manager Virtual Machines tab, select the appropriate VM and either the Make Template button or the right mouse button menu option:
   Name: (e.g., mrgexectemplate)
   Description: [optional]
e) While the template is being created, the image is locked. Confirm the template exists in the Templates tab after the creation is complete.
f) Remove the network from the template:
   Select the Templates tab
   Choose the created template
   Choose the Network Interfaces tab in the Details pane
   Select eth0
   Select Remove

8. Creating an MRG Grid virtual machine resource
a) Log into the MRG Grid Manager as the mrgmgr user
b) Execute the following:
      ./CiabCreateNewVm.sh <templatename>
   which performs the following:
   Determining the name of the last MRG Grid execute node running on RHEV
   Registering a new system hostname, MAC address, and IP address with Satellite via cobbler

   Creating a new virtual machine in the RHEV Manager
   Installing MRG Grid on the new virtual machine

11.3 Deploy MRG Grid Application

Figure 39

1. Create the 'admin' user on ra-mrggrid-vm:
      useradd admin

2. Create a job file, /home/admin/jobfile, in the admin user's home directory:

      #Test Job
      Executable = /bin/dd
      Universe = vanilla
      #input = test.data
      output = loop.out
      error = loop.error
      Log = loop.log
      args = if=/dev/zero of=/dev/null
      should_transfer_files = YES
      when_to_transfer_output = ON_EXIT

      queue

3. Submit the job as the admin user:
      condor_submit /home/admin/jobfile

4. Verify the job is running:
      condor_q
      condor_status -any

5. Log into the MRG Management Console and view the job statistics.

11.4 Scale MRG Grid Application

Figure 40

To be documented.
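The job description from step 2 of section 11.3 can be generated from a script and sanity-checked before handing it to condor_submit. In this sketch a temporary path stands in for /home/admin/jobfile:

```shell
#!/bin/sh
# Sketch: write the section 11.3 job description to a file and run a minimal
# sanity check before condor_submit. mktemp stands in for /home/admin/jobfile.
JOBFILE="$(mktemp)"
cat > "$JOBFILE" <<'EOF'
#Test Job
Executable = /bin/dd
Universe = vanilla
#input = test.data
output = loop.out
error = loop.error
Log = loop.log
args = if=/dev/zero of=/dev/null
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
queue
EOF

# A submit description needs an executable and at least one queue statement.
grep -c '^queue' "$JOBFILE"   # prints 1
```

Generating the file this way keeps the dd "infinite loop" job reproducible across tenants; adding a count after queue (e.g., "queue 10") would submit multiple instances at once.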


More information

Red Hat Cloud Foundations

Red Hat Cloud Foundations Red Hat Cloud Foundations Deploying Private IaaS Clouds Scott Collier, RHCA Principal Software Engineer Version 2.0 April 2011 refarch-feedback@redhat.com 1801 Varsity Drive Raleigh NC 27606-2072 USA Phone:

More information

Red Hat Satellite Management and automation of your Red Hat Enterprise Linux environment

Red Hat Satellite Management and automation of your Red Hat Enterprise Linux environment Red Hat Satellite Management and automation of your Red Hat Enterprise Linux environment WHAT IS IT? Red Hat Satellite server is an easy-to-use, advanced systems management platform for your Linux infrastructure.

More information

With Red Hat Enterprise Virtualization, you can: Take advantage of existing people skills and investments

With Red Hat Enterprise Virtualization, you can: Take advantage of existing people skills and investments RED HAT ENTERPRISE VIRTUALIZATION DATASHEET RED HAT ENTERPRISE VIRTUALIZATION AT A GLANCE Provides a complete end-toend enterprise virtualization solution for servers and desktop Provides an on-ramp to

More information

Cloud Computing with Red Hat Solutions. Sivaram Shunmugam Red Hat Asia Pacific Pte Ltd. sivaram@redhat.com

Cloud Computing with Red Hat Solutions. Sivaram Shunmugam Red Hat Asia Pacific Pte Ltd. sivaram@redhat.com Cloud Computing with Red Hat Solutions Sivaram Shunmugam Red Hat Asia Pacific Pte Ltd sivaram@redhat.com Linux Automation Details Red Hat's Linux Automation strategy for next-generation IT infrastructure

More information

Developing a dynamic, real-time IT infrastructure with Red Hat integrated virtualization

Developing a dynamic, real-time IT infrastructure with Red Hat integrated virtualization Developing a dynamic, real-time IT infrastructure with Red Hat integrated virtualization www.redhat.com Table of contents Introduction Page 3 Benefits of virtualization Page 3 Virtualization challenges

More information

Next Generation Now: Red Hat Enterprise Linux 6 Virtualization A Unique Cloud Approach. Jeff Ruby Channel Manager jruby@redhat.com

Next Generation Now: Red Hat Enterprise Linux 6 Virtualization A Unique Cloud Approach. Jeff Ruby Channel Manager jruby@redhat.com Next Generation Now: Virtualization A Unique Cloud Approach Jeff Ruby Channel Manager jruby@redhat.com Introducing Extensive improvements in every dimension Efficiency, scalability and reliability Unprecedented

More information

Vertical Scaling of Oracle 10g Performance on Red Hat Enterprise Linux 5 on Intel Xeon Based Servers. Version 1.0

Vertical Scaling of Oracle 10g Performance on Red Hat Enterprise Linux 5 on Intel Xeon Based Servers. Version 1.0 Vertical Scaling of Oracle 10g Performance on Red Hat Enterprise Linux 5 on Intel Xeon Based Servers Version 1.0 March 2009 Vertical Scaling of Oracle 10g Performance on Red Hat Enterprise Linux 5 on Inel

More information

RED HAT CLOUD SUITE FOR APPLICATIONS

RED HAT CLOUD SUITE FOR APPLICATIONS RED HAT CLOUD SUITE FOR APPLICATIONS DATASHEET AT A GLANCE Red Hat Cloud Suite: Provides a single platform to deploy and manage applications. Offers choice and interoperability without vendor lock-in.

More information

Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment.

Preparation Guide. How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Preparation Guide v3.0 BETA How to prepare your environment for an OnApp Cloud v3.0 (beta) deployment. Document version 1.0 Document release date 25 th September 2012 document revisions 1 Contents 1. Overview...

More information

Infrastructure as a Service (IaaS) Cloud Computing for Enterprises

<Insert Picture Here> Infrastructure as a Service (IaaS) Cloud Computing for Enterprises Infrastructure as a Service (IaaS) Cloud Computing for Enterprises Speaker Title The following is intended to outline our general product direction. It is intended for information

More information

RED HAT ENTERPRISE VIRTUALIZATION & CLOUD COMPUTING

RED HAT ENTERPRISE VIRTUALIZATION & CLOUD COMPUTING RED HAT ENTERPRISE VIRTUALIZATION & CLOUD COMPUTING James Rankin Senior Solutions Architect Red Hat, Inc. 1 KVM BACKGROUND Project started in October 2006 by Qumranet - Submitted to Kernel maintainers

More information

RED HAT ENTERPRISE VIRTUALIZATION 3.0

RED HAT ENTERPRISE VIRTUALIZATION 3.0 OVERVIEW Red Hat Enterprise Virtualization (RHEV) is a complete virtualization management solution for server and desktop virtualization and the first enterprise-ready, fully open-source virtualization

More information

Red Hat Enterprise Linux 6. Stanislav Polášek ELOS Technologies sp@elostech.cz

Red Hat Enterprise Linux 6. Stanislav Polášek ELOS Technologies sp@elostech.cz Stanislav Polášek ELOS Technologies sp@elostech.cz Red Hat - an Established Global Leader Compiler Development Identity & Authentication Storage & File Systems Middleware Kernel Development Virtualization

More information

JBoss Enterprise MIDDLEWARE

JBoss Enterprise MIDDLEWARE JBoss Enterprise MIDDLEWARE WHAT IS IT? JBoss Enterprise Middleware integrates and hardens the latest enterprise-ready features from JBoss community projects into supported, stable, enterprise-class middleware

More information

An Oracle White Paper August 2011. Oracle VM 3: Server Pool Deployment Planning Considerations for Scalability and Availability

An Oracle White Paper August 2011. Oracle VM 3: Server Pool Deployment Planning Considerations for Scalability and Availability An Oracle White Paper August 2011 Oracle VM 3: Server Pool Deployment Planning Considerations for Scalability and Availability Note This whitepaper discusses a number of considerations to be made when

More information

Delivers high performance, reliability, and security. Is certified by the leading hardware and software vendors

Delivers high performance, reliability, and security. Is certified by the leading hardware and software vendors Datasheet Red Hat Enterprise Linux 6 Red Hat Enterprise Linux is a high-performing operating system that has delivered outstanding value to IT environments for nearly a decade. As the world s most trusted

More information

MADFW IaaS Program Review

MADFW IaaS Program Review MADFW IaaS Program Review MADFW CONFIGURATION REPORTING CONTENT AUDITING INSTANCE MANAGEMENT LIFE-CYCLE Terry Seibel MSD SETA 703.808.5741 seibelte@nro.mil Shawn Wells Technical Director 443.534.0130 shawn@redhat.com

More information

Dell and JBoss just work Inventory Management Clustering System on JBoss Enterprise Middleware

Dell and JBoss just work Inventory Management Clustering System on JBoss Enterprise Middleware Dell and JBoss just work Inventory Management Clustering System on JBoss Enterprise Middleware 2 Executive Summary 2 JBoss Enterprise Middleware 5 JBoss/Dell Inventory Management 5 Architecture 6 Benefits

More information

Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays. Red Hat Performance Engineering

Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays. Red Hat Performance Engineering Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays Red Hat Performance Engineering Version 1.0 August 2013 1801 Varsity Drive Raleigh NC

More information

Open Source Datacenter Conference 2011 System Management with RHN Satellite. Dirk Herrmann, Solution Architect, Red Hat

Open Source Datacenter Conference 2011 System Management with RHN Satellite. Dirk Herrmann, Solution Architect, Red Hat Open Source Datacenter Conference 2011 System Management with RHN Satellite Bringing the Community, Vendors and Users Together Enterprise Users Hardware vendors Software vendors Open Source Community A

More information

Virtualization Management the ovirt way

Virtualization Management the ovirt way ovirt introduction FOSDEM 2013 Doron Fediuck Red Hat What is ovirt? Large scale, centralized management for server and desktop virtualization Based on leading performance, scalability and security infrastructure

More information

INTRODUCTION TO CLOUD MANAGEMENT

INTRODUCTION TO CLOUD MANAGEMENT CONFIGURING AND MANAGING A PRIVATE CLOUD WITH ORACLE ENTERPRISE MANAGER 12C Kai Yu, Dell Inc. INTRODUCTION TO CLOUD MANAGEMENT Oracle cloud supports several types of resource service models: Infrastructure

More information

RED HAT ENTERPRISE VIRTUALIZATION

RED HAT ENTERPRISE VIRTUALIZATION RED HAT ENTERPRISE VIRTUALIZATION DATASHEET RED HAT ENTERPRISE VIRTUALIZATION AT A GLANCE Provides a complete end-to-end enterprise virtualization solution for servers and desktops Provides an on-ramp

More information

Private cloud computing advances

Private cloud computing advances Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

VMware Server 2.0 Essentials. Virtualization Deployment and Management

VMware Server 2.0 Essentials. Virtualization Deployment and Management VMware Server 2.0 Essentials Virtualization Deployment and Management . This PDF is provided for personal use only. Unauthorized use, reproduction and/or distribution strictly prohibited. All rights reserved.

More information

Deploying IBM Lotus Domino on Red Hat Enterprise Linux 5. Version 1.0

Deploying IBM Lotus Domino on Red Hat Enterprise Linux 5. Version 1.0 Deploying IBM Lotus Domino on Red Hat Enterprise Linux 5 Version 1.0 November 2008 Deploying IBM Lotus Domino on Red Hat Enterprise Linux 5 1801 Varsity Drive Raleigh NC 27606-2072 USA Phone: +1 919 754

More information

RED HAT ENTERPRISE VIRTUALIZATION 3.0

RED HAT ENTERPRISE VIRTUALIZATION 3.0 FEATURE GUIDE RED HAT ENTERPRISE VIRTUALIZATION 3.0 OVERVIEW Red Hat Enterprise Virtualization (RHEV) is a complete virtualization management solution for server and desktop virtualization and the first

More information

Foundations for your. portable cloud

Foundations for your. portable cloud Foundations for your portable cloud Start Today Red Hat s cloud vision is unlike that of any other IT vendor. We recognize that IT infrastructure is and will continue to be composed of pieces from many

More information

Build A private PaaS. www.redhat.com

Build A private PaaS. www.redhat.com Build A private PaaS WITH Red Hat CloudForms and JBoss Enterprise Middleware www.redhat.com Introduction Platform-as-a-service (PaaS) is a cloud service model that provides consumers 1 with services for

More information

Red Hat Enterprise Linux and management bundle for HP BladeSystem TM

Red Hat Enterprise Linux and management bundle for HP BladeSystem TM HP and Red Hat are announcing a specially priced software bundle for customers deploying Red Hat Linux on HP BladeSystem servers. HP will offer Red Hat Enterprise Linux and management bundle that combines

More information

Panoramica su Cloud Computing targata Red Hat AIPSI Meeting 2010

Panoramica su Cloud Computing targata Red Hat AIPSI Meeting 2010 Panoramica su Cloud Computing targata Red Hat AIPSI Meeting 2010 Giuseppe Gippa Paterno' Solution Architect EMEA Security Expert gpaterno@redhat.com Who am I Currently Solution Architect and EMEA Security

More information

1 Copyright 2011, Oracle and/or its affiliates. All rights reserved. Insert Information Protection Policy Classification from Slide 7

1 Copyright 2011, Oracle and/or its affiliates. All rights reserved. Insert Information Protection Policy Classification from Slide 7 1 Copyright 2011, Oracle and/or its affiliates. All rights reserved. Insert Information Protection Policy Classification from Slide 7 Oracle Virtual Machine Server pre x86 Marián Kuna Technology Sales

More information

Enterprise-Class Virtualization with Open Source Technologies

Enterprise-Class Virtualization with Open Source Technologies Enterprise-Class Virtualization with Open Source Technologies Alex Vasilevsky CTO & Founder Virtual Iron Software June 14, 2006 Virtualization Overview Traditional x86 Architecture Each server runs single

More information

OpenNebula Open Souce Solution for DC Virtualization. C12G Labs. Online Webinar

OpenNebula Open Souce Solution for DC Virtualization. C12G Labs. Online Webinar OpenNebula Open Souce Solution for DC Virtualization C12G Labs Online Webinar What is OpenNebula? Multi-tenancy, Elasticity and Automatic Provision on Virtualized Environments I m using virtualization/cloud,

More information

Migrating to ESXi: How To

Migrating to ESXi: How To ILTA Webinar Session Migrating to ESXi: How To Strategies, Procedures & Precautions Server Operations and Security Technology Speaker: Christopher Janoch December 29, 2010 Migrating to ESXi: How To Strategies,

More information

End to end application delivery & Citrix XenServer 5. John Glendenning Vice President Server Virtualization, EMEA

End to end application delivery & Citrix XenServer 5. John Glendenning Vice President Server Virtualization, EMEA End to end application delivery & Citrix XenServer 5 John Glendenning Vice President Server Virtualization, EMEA Businesses Run on Applications Users Apps 2 Users and Apps are Moving Further Apart Consolidation

More information

Successfully Deploying Globalized Applications Requires Application Delivery Controllers

Successfully Deploying Globalized Applications Requires Application Delivery Controllers SHARE THIS WHITEPAPER Successfully Deploying Globalized Applications Requires Application Delivery Controllers Whitepaper Table of Contents Abstract... 3 Virtualization imposes new challenges on mission

More information

ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK

ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK ORACLE OPS CENTER: PROVISIONING AND PATCH AUTOMATION PACK KEY FEATURES PROVISION FROM BARE- METAL TO PRODUCTION QUICKLY AND EFFICIENTLY Controlled discovery with active control of your hardware Automatically

More information

RED HAT INFRASTRUCTURE AS A SERVICE OVERVIEW AND ROADMAP. Andrew Cathrow Red Hat, Inc. Wednesday, June 12, 2013

RED HAT INFRASTRUCTURE AS A SERVICE OVERVIEW AND ROADMAP. Andrew Cathrow Red Hat, Inc. Wednesday, June 12, 2013 RED HAT INFRASTRUCTURE AS A SERVICE OVERVIEW AND ROADMAP Andrew Cathrow Red Hat, Inc. Wednesday, June 12, 2013 SERVICE MODELS / WORKLOADS TRADITIONAL WORKLOADS Stateful VMs: Application defined in VM Application

More information

Servervirualisierung mit Citrix XenServer

Servervirualisierung mit Citrix XenServer Servervirualisierung mit Citrix XenServer Paul Murray, Senior Systems Engineer, MSG EMEA Citrix Systems International GmbH paul.murray@eu.citrix.com Virtualization Wave is Just Beginning Only 6% of x86

More information

Version 3.7 Technical Whitepaper

Version 3.7 Technical Whitepaper Version 3.7 Technical Whitepaper Virtual Iron 2007-1- Last modified: June 11, 2007 Table of Contents Introduction... 3 What is Virtualization?... 4 Native Virtualization A New Approach... 5 Virtual Iron

More information

PARALLELS SERVER BARE METAL 5.0 README

PARALLELS SERVER BARE METAL 5.0 README PARALLELS SERVER BARE METAL 5.0 README 1999-2011 Parallels Holdings, Ltd. and its affiliates. All rights reserved. This document provides the first-priority information on the Parallels Server Bare Metal

More information

Simplified Private Cloud Management

Simplified Private Cloud Management BUSINESS PARTNER ClouTor Simplified Private Cloud Management ClouTor ON VSPEX by LOCUZ INTRODUCTION ClouTor on VSPEX for Enterprises provides an integrated software solution for extending your existing

More information

Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems

Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems RH413 Manage Software Updates Develop a process for applying updates to systems, including verifying properties of the update. Create File Systems Allocate an advanced file system layout, and use file

More information

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: PRICING & LICENSING GUIDE

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: PRICING & LICENSING GUIDE RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: PRICING & LICENSING GUIDE Red Hat Enterprise Virtualization for Servers: Pricing Guide 1 TABLE OF CONTENTS Introduction to Red Hat Enterprise Virtualization

More information

OpenNebula Open Souce Solution for DC Virtualization

OpenNebula Open Souce Solution for DC Virtualization OSDC 2012 25 th April, Nürnberg OpenNebula Open Souce Solution for DC Virtualization Constantino Vázquez Blanco OpenNebula.org What is OpenNebula? Multi-tenancy, Elasticity and Automatic Provision on Virtualized

More information

Introducing. Markus Erlacher Technical Solution Professional Microsoft Switzerland

Introducing. Markus Erlacher Technical Solution Professional Microsoft Switzerland Introducing Markus Erlacher Technical Solution Professional Microsoft Switzerland Overarching Release Principles Strong emphasis on hardware, driver and application compatibility Goal to support Windows

More information

What s New with VMware Virtual Infrastructure

What s New with VMware Virtual Infrastructure What s New with VMware Virtual Infrastructure Virtualization: Industry-Standard Way of Computing Early Adoption Mainstreaming Standardization Test & Development Server Consolidation Infrastructure Management

More information

APPENDIX 1 SUBSCRIPTION SERVICES

APPENDIX 1 SUBSCRIPTION SERVICES APPENDIX 1 SUBSCRIPTION SERVICES Red Hat sells subscriptions that entitle you to receive Red Hat services and/or Software during the period of the subscription (generally, one or three years). This Appendix

More information

Red Hat Enterprise Virtualization - KVM-based infrastructure services at BNL

Red Hat Enterprise Virtualization - KVM-based infrastructure services at BNL Red Hat Enterprise Virtualization - KVM-based infrastructure services at Presented at NLIT, June 16, 2011 Vail, Colorado David Cortijo Brookhaven National Laboratory dcortijo@bnl.gov Notice: This presentation

More information

Cloud Computing. Course: Designing and Implementing Service Oriented Business Processes

Cloud Computing. Course: Designing and Implementing Service Oriented Business Processes Cloud Computing Supplementary slides Course: Designing and Implementing Service Oriented Business Processes 1 Introduction Cloud computing represents a new way, in some cases a more cost effective way,

More information

Build Clouds Without Limits Gordon Haff

Build Clouds Without Limits Gordon Haff Red Hat CloudForms Infrastructure-as-a-Service: Build Clouds Without Limits Gordon Haff Is your IT ready for IT-as-a-Service? Is it... Portable across hybrid environments? Does it let you... Manage image

More information

See Appendix A for the complete definition which includes the five essential characteristics, three service models, and four deployment models.

See Appendix A for the complete definition which includes the five essential characteristics, three service models, and four deployment models. Cloud Strategy Information Systems and Technology Bruce Campbell What is the Cloud? From http://csrc.nist.gov/publications/nistpubs/800-145/sp800-145.pdf Cloud computing is a model for enabling ubiquitous,

More information

Windows Server 2008 R2 Essentials

Windows Server 2008 R2 Essentials Windows Server 2008 R2 Essentials Installation, Deployment and Management 2 First Edition 2010 Payload Media. This ebook is provided for personal use only. Unauthorized use, reproduction and/or distribution

More information

Enabling Cloud Deployments with Oracle Virtualization

<Insert Picture Here> Enabling Cloud Deployments with Oracle Virtualization Enabling Cloud Deployments with Oracle Virtualization NAME TITLE The following is intended to outline our general product direction. It is intended for information purposes only,

More information

YOUR STRATEGIC VIRTUALIZATION ALTERNATIVE. Greg Lissy Director, Red Hat Virtualization Business. James Rankin Senior Solutions Architect

YOUR STRATEGIC VIRTUALIZATION ALTERNATIVE. Greg Lissy Director, Red Hat Virtualization Business. James Rankin Senior Solutions Architect YOUR STRATEGIC VIRTUALIZATION ALTERNATIVE Greg Lissy Director, Red Hat Virtualization Business James Rankin Senior Solutions Architect 1 THE VIRTUALIZATION MARKET HAS CHANGED The release of Red Hat Enterprise

More information

Architekturen, Bausteine und Konzepte für Private Clouds Detlef Drewanz EMEA Server Principal Sales Consultant

<Insert Picture Here> Architekturen, Bausteine und Konzepte für Private Clouds Detlef Drewanz EMEA Server Principal Sales Consultant Architekturen, Bausteine und Konzepte für Private Clouds Detlef Drewanz EMEA Server Principal Sales Consultant The following is intended to outline our general product direction.

More information

Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide

Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide July 2010 1 Specifications are subject to change without notice. The Cloud.com logo, Cloud.com, Hypervisor Attached Storage, HAS, Hypervisor

More information

Syncplicity On-Premise Storage Connector

Syncplicity On-Premise Storage Connector Syncplicity On-Premise Storage Connector Implementation Guide Abstract This document explains how to install and configure the Syncplicity On-Premise Storage Connector. In addition, it also describes how

More information

Parallels Server 4 Bare Metal

Parallels Server 4 Bare Metal Parallels Server 4 Bare Metal Product Summary 1/21/2010 Company Overview Parallels is a worldwide leader in virtualization and automation software that optimizes computing for services providers, businesses

More information

An Alternative to the VMware Tax...

An Alternative to the VMware Tax... An Alternative to the VMware Tax... John Tietjen Senior Solutions Architect Red Hat November 19, 2014 This presentation created for: AGENDA Red Hat Overview Red Hat Enterprise Virtualization: An alternative

More information

Implementing Red Hat Enterprise Linux 6 on HP ProLiant servers

Implementing Red Hat Enterprise Linux 6 on HP ProLiant servers Technical white paper Implementing Red Hat Enterprise Linux 6 on HP ProLiant servers Table of contents Abstract... 2 Introduction to Red Hat Enterprise Linux 6... 2 New features... 2 Recommended ProLiant

More information

Cloud Optimize Your IT

Cloud Optimize Your IT Cloud Optimize Your IT Windows Server 2012 The information contained in this presentation relates to a pre-release product which may be substantially modified before it is commercially released. This pre-release

More information

Qualcomm Achieves Significant Cost Savings and Improved Performance with Red Hat Enterprise Virtualization

Qualcomm Achieves Significant Cost Savings and Improved Performance with Red Hat Enterprise Virtualization Qualcomm Achieves Significant Cost Savings and Improved Performance with Red Hat Enterprise Virtualization Fast facts Customer Industry Geography Business challenge Solution Qualcomm Telecommunications

More information

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011

Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Executive Summary Large enterprise Hyper-V deployments with a large number

More information

Using Red Hat network satellite to dynamically scale applications in a private cloud

Using Red Hat network satellite to dynamically scale applications in a private cloud About Red HAt Red Hat was founded in 1993 and is headquartered in Raleigh, NC. Today, with more than 60 offices around the world, Red Hat is the largest publicly traded technology company fully committed

More information

HP Intelligent Management Center Standard Software Platform

HP Intelligent Management Center Standard Software Platform Data sheet HP Intelligent Management Center Standard Software Platform Key features Highly flexible and scalable deployment Powerful administration control Rich resource management Detailed performance

More information

VMware Virtual Infrastucture From the Virtualized to the Automated Data Center

VMware Virtual Infrastucture From the Virtualized to the Automated Data Center VMware Virtual Infrastucture From the Virtualized to the Automated Data Center Senior System Engineer VMware Inc. ngalante@vmware.com Agenda Vision VMware Enables Datacenter Automation VMware Solutions

More information

JBoss Enterprise MIDDLEWARE

JBoss Enterprise MIDDLEWARE JBoss Enterprise MIDDLEWARE WHAT IS IT? JBoss Enterprise Middleware integrates and hardens the latest enterprise-ready features from JBoss community projects into supported, stable, enterprise-class middleware

More information

Private Cloud with Fusion Middleware

<Insert Picture Here> Private Cloud with Fusion Middleware Private Cloud with Fusion Middleware Duško Vukmanović Principal Sales Consultant, Oracle dusko.vukmanovic@oracle.com The following is intended to outline our general product direction.

More information

Disaster Recovery Infrastructure

Disaster Recovery Infrastructure Disaster Recovery Infrastructure An Ideal cost effective solution to protect your organizations critical IT infrastructure with business continuity. Organization's IT infrastructure usually evolves with

More information

ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy

ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy OVERVIEW The global communication and the continuous growth of services provided through the Internet or local infrastructure require to

More information

Cloud Models and Platforms

Cloud Models and Platforms Cloud Models and Platforms Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF A Working Definition of Cloud Computing Cloud computing is a model

More information

Server Virtualization and Consolidation

Server Virtualization and Consolidation Server Virtualization and Consolidation An Ideal cost effective solution to maximize your Return on Investment of your organization's hardware infrastructure It is quit evident today that Business owners,

More information

OpenNebula Open Souce Solution for DC Virtualization

OpenNebula Open Souce Solution for DC Virtualization 13 th LSM 2012 7 th -12 th July, Geneva OpenNebula Open Souce Solution for DC Virtualization Constantino Vázquez Blanco OpenNebula.org What is OpenNebula? Multi-tenancy, Elasticity and Automatic Provision

More information

Installing and Administering VMware vsphere Update Manager

Installing and Administering VMware vsphere Update Manager Installing and Administering VMware vsphere Update Manager Update 1 vsphere Update Manager 5.1 This document supports the version of each product listed and supports all subsequent versions until the document

More information
