Red Hat Enterprise Linux 6 Consolidation


EXECUTIVE SUMMARY

At this time of massive and disruptive technological change, where applications must be nimbly deployed on physical, virtual, and cloud infrastructure, Red Hat Enterprise Linux provides businesses with the flexibility they need to thrive as they make the transition to the next generation of IT computing and beyond. This paper examines consolidating multiple servers onto a single system using key technologies incorporated in Red Hat Enterprise Linux 6.

TABLE OF CONTENTS

Introduction
Large Systems
Expensive Resources
Control of Resources with Cgroups
Heterogeneous Software Environments with Virtualization
Reliability
Conclusion

Introduction

The use case for distributed processing using today's low-cost, high-performance, compact servers is compelling: a single server for a single task. A print server here. A file server there. A new application is needed, and the easiest way to deal with it is to run it on a dedicated server. There is a major upgrade to an existing application; you need to run the old version and the new version in parallel for a while. You have an old application that is important, perhaps even critical, to your business; it requires a specific version of an operating system, so you just leave it running on a dedicated server. Two engineering groups are arguing about their jobs running slowly; just give them their own systems. Your users are complaining about a slow web application; just install another dozen blades or 1U pizza-box servers and the complaints go away. For now.

After all, what's the problem? A server is only a few hundred dollars, perhaps a few thousand dollars. Servers just plug into a regular wall outlet; no special power is required. You can stuff a bunch of them in a rack, or put them out in departments. You might as well take advantage of the new technology.

This works. With a strong master plan and solid implementation, it can be extremely effective and very cost-effective. Or, with organic growth, little planning, and little coordination, it can become a resource-sapping nightmare.

Red Hat Enterprise Linux 6 is ideally suited to implementations using large numbers of small servers. It is powerful and flexible, supports a broad range of applications, and is easy to install and manage. Many uses require nothing more than the native capabilities of Red Hat Enterprise Linux 6: file, print, network management, web, database, CMS, and many other applications are distributed as part of the operating system and just need to be configured and started. Red Hat Enterprise Linux 6 also has strong distributed management capabilities, addressing much of the pain of distributed systems.

But you may determine that your servers are out of control: there are too many of them, and each one is doing too little. It is common to discover that most of your distributed servers are running at 10-20 percent utilization, with a few greatly overloaded. You may also find that most of your distributed servers are not using more than 25 percent of their memory, but a few are out of memory and experiencing poor performance. Or, you may find that most of your servers have huge amounts of wasted storage, but a few critical systems are out of disk space.

Further, you are likely to find that each system has its own user accounts, and that you have no idea who has access to a given server, or what privileges they have on that system. All it takes is one little server in a forgotten corner, with a user account for someone who left six months ago, to open a security hole into your infrastructure. Without really strong operational controls, it is difficult to ensure that all systems are up to date, have current versions of your operating system and applications, and have the latest security patches installed.

One answer to this is server consolidation: taking the workloads of dozens of servers and running them on a few systems.
Done properly, this can improve performance, increase user satisfaction, and reduce cost.

Red Hat Enterprise Linux 6 is also perfectly suited to consolidating many applications onto a single system. The key elements to achieving this are:

- Support for large systems: large CPU count, large memory, and excellent I/O
- Control of resources
- Isolation of applications
- Support for heterogeneous software environments
- Reliability

Large Systems

The first item to consider in consolidation is to make sure that a single system (or a small number of systems) has sufficient performance to replace many dedicated servers. Red Hat Enterprise Linux 6 has excellent scalability: it can support over 4,000 processors, 64TB of memory, and massive amounts of I/O. Today, the systems used for consolidation are likely to include 24-128 processors (or cores) and 64GB to 1TB of memory. These systems are easily supported by Red Hat Enterprise Linux 6.

Red Hat Enterprise Linux 6 also supports excellent I/O and large storage. The default ext4 filesystem provides high performance and supports single filesystems up to 16TB. The optional XFS filesystem supports filesystems up to 100TB. Red Hat Enterprise Linux 6 supports large numbers of disk drives and solid-state storage (SSD), has native RAID, and has good storage management tools.

Red Hat Enterprise Linux 6 also has excellent network and networked storage capabilities. For networking, it supports Gigabit Ethernet and 10GbE, as well as InfiniBand. Networked storage capabilities include NFSv4 (and NFSv3), iSCSI, Fibre Channel, and FCoE (Fibre Channel over Ethernet). When used with Fibre Channel, Red Hat Enterprise Linux 6 provides dynamic multipath I/O (MPIO), which allows the use of multiple Fibre Channel controllers in a single system. This can be used to increase I/O performance as well as to allow failover if a Fibre Channel link fails. Dynamic MPIO balances the I/O workload over the available I/O channels, delivering the best possible I/O performance with large SANs.

Expensive Resources

Some hardware is very effective but also very expensive. Examples include storage area networks (SANs) and 10GbE. SANs are a very effective way to share data, provide access to large amounts of storage, deliver high-performance I/O, and improve availability through redundant I/O paths.

However, SANs are also expensive. In addition to the storage controller cards that must be installed in the server (typically two cards for high availability), each storage controller requires a port on a Fibre Channel switch and a Fibre Channel cable. In some cases, the Fibre Channel infrastructure can cost more than the server itself. This is an ideal case for consolidating multiple servers into a single server and sharing the expensive resources.

Control of Resources with Cgroups

A classic problem with putting multiple applications on a single server is the ability of one application to impact other applications. A single application with a memory leak can consume all memory in the system, starving other applications. An application can consume all available network or storage bandwidth, or can utilize all available CPU processing power. Preventing one application from impacting the performance of another is one of the main reasons for putting each application on its own server.

Red Hat Enterprise Linux 6 gives you options. First, the process schedulers and memory management are designed to support large numbers of applications and ensure smooth operation. Beyond this, Red Hat Enterprise Linux 6 provides cgroups, a new capability for controlling the amount of resources an application can consume.

For example, consider a server with 64 processors and 256GB of memory supporting a workload that includes a large database, a file server, a web server, and a content management system such as Drupal. Using cgroups, the database and its related processes can be given 32 CPUs, 192GB of memory, and 75 percent of the bandwidth of two Fibre Channel links into the SAN. Since caching is critical to database performance, eight specific CPUs (and their related cache and memory) might be dedicated to the database server. The file server might be given four CPUs, 16GB of memory, 20 percent of the bandwidth of two Fibre Channel links, and 50 percent of network bandwidth. A dozen web servers might share 16 CPUs and 16GB of memory and, as they have low storage I/O requirements, no specific limit on storage. Twenty Drupal sites might be given 20 CPUs and 20GB of memory.

In each case, the resources granted are the maximum that the application can consume. If the database workload is light, other applications can use the resources assigned to the database. Red Hat Enterprise Linux 6 does an excellent job of managing overall system resources and providing good performance for all applications.
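To make the mechanism concrete, here is a minimal sketch of how the database allocation above could be expressed through the cgroup filesystem interface. It assumes the cpuset, memory, and blkio controllers are mounted under /cgroup, the Red Hat Enterprise Linux 6 default when the cgconfig service is enabled; the group name, CPU range, and PID are illustrative.

    #!/usr/bin/env python
    # Minimal sketch: carving out a cgroup for the database workload
    # described above, using the cgroup v1 filesystem interface directly.
    # Assumes the cpuset, memory, and blkio controllers are mounted under
    # /cgroup (the RHEL 6 default via the cgconfig service). Group name,
    # CPU range, and PID are illustrative.
    import os

    CGROUP_ROOT = "/cgroup"

    def create_group(controller, name, settings):
        """Create a child cgroup and apply controller settings to it."""
        path = os.path.join(CGROUP_ROOT, controller, name)
        if not os.path.isdir(path):
            os.makedirs(path)
        for knob, value in settings.items():
            with open(os.path.join(path, knob), "w") as f:
                f.write(str(value))

    def add_task(controller, name, pid):
        """Move an existing process into the cgroup."""
        with open(os.path.join(CGROUP_ROOT, controller, name, "tasks"), "w") as f:
            f.write(str(pid))

    # Pin the database to 32 of the 64 CPUs.
    create_group("cpuset", "database", {
        "cpuset.cpus": "0-31",   # CPUs the group may run on
        "cpuset.mems": "0",      # memory node(s) the group may allocate from
    })
    # Cap the database at 192GB of memory.
    create_group("memory", "database", {
        "memory.limit_in_bytes": 192 * 1024 ** 3,
    })
    # Approximate the I/O share with a proportional blkio weight (100-1000).
    create_group("blkio", "database", {"blkio.weight": 750})

    # Place a running database process (hypothetical PID) into the groups.
    for controller in ("cpuset", "memory", "blkio"):
        add_task(controller, "database", 12345)

In practice, the libcgroup package in Red Hat Enterprise Linux 6 provides the same control declaratively through /etc/cgconfig.conf, along with utilities such as cgcreate, cgset, and cgexec.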

Heterogeneous Software Environments with Virtualization

Virtualization using Kernel-based Virtual Machine (KVM) technology provides a complete virtual machine for each guest. A virtual machine is just that: it appears to be an actual hardware system to both operating systems and applications. Unlike Linux containers, KVM virtual machines require the installation of a guest operating system. This operating system is completely independent from the host. Using a Red Hat Enterprise Linux 6 host, you can run Red Hat Enterprise Linux 3, 4, 5, or 6, or Windows guests with full support. Each guest is installed, managed, maintained, and updated independently. Completely different versions of software can be installed in each guest. The result is an ideal environment for server consolidation.

It is common to have specific versions of applications running, often older versions. A company may have different versions of the same application in use for different purposes. User environments may be extensively customized for specific uses. System names may be widely used for file servers and application servers. In other words, there are good reasons for having multiple computers running different applications.

When using KVM technology, the full complement of operating system administrative features is available to manage and secure guest resources. System resources may be allocated per guest using cgroups. The administrator can assign different resources to a Windows guest running a client/server application than to a database on a Linux guest. Because the resources are explicitly allocated, resource contention and overhead are reduced, and more guests may be supported.

With virtualization, all of these separate systems can be run on a single server. They maintain their own identity but share the physical resources of the large system. Using cgroups, the system resources are appropriately allocated. The result is better performance for the individual guests, better overall utilization of hardware, and reduced cost.
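Guests are typically created with tools such as virt-manager or virt-install, but the underlying libvirt API is fully scriptable, which matters when consolidating dozens of servers. The following minimal sketch defines and boots a guest through the libvirt-python bindings; the guest name, sizes, and disk image path are hypothetical, and the XML is pared down to essentials (libvirt fills in sensible defaults for omitted devices).

    #!/usr/bin/env python
    # Minimal sketch: defining and starting a KVM guest through the libvirt
    # API. Assumes the libvirt-python bindings are installed, libvirtd is
    # running, and a disk image already exists at the (hypothetical) path
    # below.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>legacy-app</name>
      <memory>2097152</memory>          <!-- 2GB, expressed in KiB -->
      <vcpu>2</vcpu>
      <os>
        <type arch='x86_64'>hvm</type>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/legacy-app.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
        </interface>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    dom = conn.defineXML(DOMAIN_XML)        # register the guest (persistent)
    dom.create()                            # boot it
    print("Guest '%s' active: %s" % (dom.name(), dom.isActive() == 1))
    conn.close()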
Reliability

The final element of consolidation is reliability. One advantage of using multiple servers is avoiding a single point of failure. On the other hand, having multiple servers increases the probability that you will experience a hardware failure: ten servers have ten times the chance of having a failure (on one system) as a single server. So, multiple servers increase the probability of hardware failure, but minimize the scope and impact of a single failure.

One of the differences between high-end servers and low-end servers is the level of RAS (reliability, availability, and serviceability) features built into the system. Higher-end systems have additional RAS features and redundancy. At a simple level, for example, large servers are typically configured with multiple network and storage interfaces. They may also offer the ability to hot-swap peripherals and to dynamically disable memory and CPUs. Red Hat Enterprise Linux 6 supports a wide range of RAS features, including clustering and failover, as well as advanced diagnostic and logging capabilities that allow you to identify many types of failures before they occur, and to quickly identify which component failed so that you can repair a system and return it to service.

Further, Red Hat Enterprise Linux 6 provides the ability to live migrate running virtual machines from one server to another. This supports high availability in several ways. Routine maintenance no longer requires shutting down vital applications; they can simply be moved to another server and then migrated back after the work is completed. This process can be automated, based on factors such as server load or system log events. Server load can be monitored and a set of servers load-balanced by live migrating busy applications to lightly loaded servers. The same tools can be used to monitor error logs: if hardware errors begin to show up, the applications on that server can be live migrated to other systems. A simple example would be a fan failure that causes a system to begin to overheat. There would be a few minutes between the time the fan failed and the time the system overheated to the point that it needed to be shut down, which is plenty of time to live migrate running applications to other systems.
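A hedged sketch of the live migration operation described above, again through libvirt-python; from the shell, virsh migrate --live legacy-app qemu+ssh://host2.example.com/system performs the same operation. The host and guest names below are hypothetical, and live migration assumes the guest's disk images reside on storage shared by both hosts.

    #!/usr/bin/env python
    # Minimal sketch: live migrating a running KVM guest to another host
    # via the libvirt API. Assumes shared storage visible to both hosts,
    # SSH access to the (hypothetical) destination host, and a running
    # guest named 'legacy-app'.
    import libvirt

    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://host2.example.com/system")

    dom = src.lookupByName("legacy-app")

    # migrate(destination, flags, new_name, uri, bandwidth):
    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    print("Guest active on destination: %s"
          % (dst.lookupByName("legacy-app").isActive() == 1))

    dst.close()
    src.close()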

Conclusion

Red Hat Enterprise Linux 6 provides the ideal platform for implementing distributed systems using the high-performance, low-cost servers that are available today. With proper planning, distributed systems can be effective, providing great performance and excellent cost performance. Red Hat Enterprise Linux 6 also provides the ideal platform for consolidating multiple systems onto a smaller number of servers. Consolidation can provide more effective utilization of computer resources, improve reliability and availability, be easier to manage, and be more cost-effective than a large number of lightly used small systems.

Finally, it is likely that the best solution will be a combination of large servers running large jobs (database servers are a classic example), dedicated servers running specific applications, consolidation servers running a collection of applications, and virtual machines hosting workloads that previously ran on lightly used (or obsolete) dedicated machines. Again, Red Hat Enterprise Linux 6 is the ideal platform for hosting this total environment. IT departments that adopt Red Hat Enterprise Linux 6 will improve their agility, lower their costs, and reduce complexity with innovative technologies that span physical, virtual, and cloud infrastructures.

ABOUT RED HAT

Red Hat was founded in 1993 and is headquartered in Raleigh, NC. Today, with more than 60 offices around the world, Red Hat is the largest publicly traded technology company fully committed to open source. That commitment has paid off over time, for us and our customers, proving the value of open source software and establishing a viable business model built around the open source way.

SALES AND INQUIRIES

NORTH AMERICA: 1 888 REDHAT1
EUROPE, MIDDLE EAST AND AFRICA: 00800 7334 2835, www.europe.redhat.com, europe@redhat.com
ASIA PACIFIC: +65 6490 4200, www.apac.redhat.com, apac@redhat.com
LATIN AMERICA: +54 11 4329 7300, www.latam.redhat.com, info-latam@redhat.com

#4279877_1010

Copyright 2010 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, and RHCE are trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.