EMC Integrated Infrastructure for VMware. Enabled by EMC Celerra NS-120. Proven Solutions Guide


EMC Integrated Infrastructure for VMware Enabled by EMC Celerra NS-120 Proven Solutions Guide

Copyright 2009 EMC Corporation. All rights reserved. Published June 2009.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated. All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute. No warranty of system performance or price/performance is expressed or implied in this document.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part number: H6272

Table of Contents

About this document
  Overview
  Audience and purpose
  Scope
  Prerequisites and supporting documentation
Chapter 1: Solution Design
  Overview
  Business challenges and technology solution overview
  Solution validation objectives
  Key solution components and architecture
  Expansion capabilities
  Hardware and software resources
Chapter 2: Solution Components and Configuration
  Overview
  Section A: Solution components
    Overview
    Key components
    Server components
    Storage components
  Section B: iSCSI configuration
    Overview
    Prerequisites for iSCSI configuration
    iSCSI LUN configuration
  Section C: Cisco Catalyst 3750-E switch configuration
    Overview
    Cisco Catalyst 3750-E switch overview
    Cisco Catalyst 3750-E switch configuration
  Section D: VMware ESX server configuration
    Overview
    VMware ESX server considerations
    VMware ESX server installation and configuration
    iSCSI configuration for the VMware ESX server
    Adding a datastore configuration
    Configuring Disk/LUN properties
  Section E: EMC Avamar configuration
    Overview
    EMC Avamar deduplication overview
    EMC Avamar installation and configuration
    Customization
Chapter 3: Environment Administration Tools
  Overview
  Celerra
  ESX 3.5 Virtual Infrastructure Environment
  Avamar
Chapter 4: Basic Administration Tasks
  Overview
  Monitoring the environment
  Configuring additional storage
  VMware ESX administration tasks
  Backing up and restoring guest VMs
Chapter 5: Testing and Validation
  Overview
  Parameters
  Methodology
  Results
  Summary
Conclusion
  Overview
  References

About this document

Overview

Introduction
The EMC commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, Global Solutions Centers (GSC) has built customer integration labs to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges its customers currently face.

Contents
This chapter contains the following topics:
- Audience and purpose
- Scope
- Prerequisites and supporting documentation

Audience and purpose

Audience
This document is intended to be used by EMC field personnel, EMC account teams, and customers.

Document purpose
This guide provides guidance for designing a scalable, well-performing virtualization solution on EMC Celerra storage with local data protection using Avamar for data deduplication and retention.

Intended usage
This guide is an EMC internal and customer-facing document that is intended to be used in the deployment, customization, and ongoing support of the EMC Integrated Infrastructure for VMware solution.

Additional usage
This document is also intended to be used as a guide for selling, designing, building, and supporting similar solutions to implement an integrated virtualization solution.

Scope

Solution scope
The scope of this solution encompasses the following:
- design and build details of an integrated VMware ESX solution utilizing Celerra storage, Dell servers, Cisco IP switches, and Avamar deduplication technology
  Note: Both Dell 2950 servers and Dell R710 servers are approved for use within the solution. This document refers only to Dell 2950 servers.
- customization of the environment to meet the needs of the customer applications unique to each deployment environment
- validation testing of the integrated environment using Iometer
- ongoing administration of the environment after deployment and customization

Out of scope
This solution constitutes a platform on which various applications can be run. How the solution is configured depends on the storage and I/O requirements of the applications that are to be deployed. The solution can be built using server and switch hardware that is different from the hardware that was used to validate the solution; however, such alternatives must be properly qualified by EMC. The use of CLI commands to perform solution configuration is beyond the scope of this document. The information in this document is not intended to replace existing, detailed product implementation guides or pre-sale site evaluations.

Prerequisites and supporting documentation

Technology
It is assumed the reader is familiar with the following EMC, VMware, Dell, and Cisco products:
- Celerra storage arrays
- Avamar deduplication appliances
- VMware ESX and Virtual Infrastructure 3.5
- Dell 2950 servers
- Cisco 3750-E Ethernet switches

Supporting documents
It is also assumed the reader has an understanding of the following documents, available on EMC Powerlink, VMware.com, and Cisco.com:
- EMC Celerra NS-120 System Installation Guide
- EMC Avamar 4.1 Server Software Installation Manual
- VMware Server User's Guide Version 2.0
- Catalyst 3750-E Switch Getting Started Guide

Chapter 1: Solution Design

Overview

Introduction
This chapter provides an overview of the validated solution design and design assumptions.

Contents
This chapter contains the following topics:
- Business challenges and technology solution overview
- Solution validation objectives
- Key solution components and architecture
- Expansion capabilities
- Hardware and software resources

Business challenges and technology solution overview

Business need
Customers need a validated hardware and software solution to simplify the procurement, assembly, deployment, and management of their data centers.

Better performance and scalability
Often, customers need to deliver better data center performance and scalability while reducing costs. Virtualization is an essential technology to consider, since it enables customers to maximize the return on hardware and software investments.

Multiple data centers
Many customers have more than one data center to manage, including smaller, remote branch locations. Maximizing IT investments while delivering a consistent level of service across all locations is very difficult. Disparate locations increase maintenance and service costs, since infrastructures can vary across locations.

Solution design overview
This solution uses ESX Server 3.5 on Celerra storage, with Avamar deduplication. In this solution, Dell 2950 servers and Cisco 3750-E switches are used, although the solution can be modified for other server and network hardware vendors.

The technology solution
The EMC Integrated Infrastructure for VMware is a reference blueprint for an integrated data center in a rack. The solution components are:
- Dell 2950 servers (1)
- EMC Celerra NS-120 (NAS, iSCSI, FC) with easy virtual provisioning
- VMware for simplicity and efficiency
- EMC Avamar deduplication to minimize the amount of data to back up
- Infrastructure services to ensure that the solution is tied to and is manageable from internal systems
- Custom scripts to facilitate baseline build and configuration, and customer customization and deployment
- Operational best practices for using the integrated infrastructure, managing system performance, and performing backup and recovery

(1) Dell 2950 servers and Dell R710 servers are approved for use within the solution. This document refers only to Dell 2950 servers.

Solution validation objectives

Define hardware and software configuration
Base the configuration of the hardware and software on five ESX servers. The physical configuration can support from two to eight ESX servers, depending on application and storage requirements.

Define a baseline configuration
Define the initial pre-customization state to prepare the solution for deployment at a customer site.
Note: To efficiently support the assembly and configuration of this solution in the field, the hardware and software need to be in a consistent state when the automation scripts are executed to further customize for customer use.

Develop automation scripts
Develop scripts to automate the customization of software and hardware components in the solution when being deployed into a customer environment.
Note: Several aspects of the internal components (software and hardware) of the solution will require customer-driven configuration changes (for example, switch VLAN configuration).

Develop teardown script
Develop a PowerShell script that can quickly tear down the customized environment and return it to the pre-customization state.

Perform initial test matrix
Use the results of these tests to provide information for the build plan and project documentation.
Reference: For more information, refer to Chapter 5: Testing and Validation > Parameters > Workload Profile.

Perform workload testing
Using common load-generation tools such as Iometer, perform workload testing on the cluster and storage configuration. Workload testing defines the performance limits of the configuration to support deployment and provide consolidation recommendations in a customer's operating environment.

Provide VM guidelines
Using the integrated Avamar Datastore, provide recommendations for Virtual Machine (VM) backup.
Reference: For more information, refer to EMC Avamar installation and configuration > Backups, in Chapter 2.

Key solution components and architecture

Key solution components
This solution includes the following components:
- Dell 2950 servers (2)
- EMC Celerra NS-120 (NAS, iSCSI, FC) with easy virtual provisioning
- VMware for simplicity and efficiency
- EMC Avamar deduplication appliance to minimize the amount of data to back up
- Stackable Cisco IP switches to ensure that the solution is tied to and is manageable from internal systems

Physical architecture
The figure below illustrates the overall physical architecture of the midsize solution.

(2) Dell 2950 servers and Dell R710 servers are approved for use within the solution. This document refers only to Dell 2950 servers.

Logical architecture
The figure below illustrates the logical architecture of the midsize solution.

Expansion capabilities

Flexible deployment
The EMC Integrated Infrastructure for VMware solution is designed to be flexible, depending on the environment in which it is deployed. The combination of servers and storage that can be deployed is vast, with the servers ranging from two to eight Dell 2950s (or similarly sized servers) with varying combinations of Disk Array Enclosures (DAEs). For this solution, NS storage capacity ranges from 1.5 TB to a maximum of 71.5 TB, depending on drive size and type.

Fixed rack deployment
In a fixed rack deployment, the options are limited to the space in the rack. In a standard 42U rack with five servers and three DAEs (as was used in the validated solution), there is the ability to add up to three additional servers, or two additional DAEs, or any combination of the two, up to the maximum capacity of the rack.

Space allocation dependencies
Space allocation for servers and storage in the rack depends on the applications deployed in the environment. Examples of space allocation requirements:
- A database application will require more disk storage than a Web server.
- A Web server will likely require more servers or virtual machines and less disk space, depending on content.

Disk space requirements
Great care must be taken in allocating space to each VM to ensure that the maximum benefit is achieved in the least amount of physical space. Allocating space appropriately allows for future expansion.

Hardware and software resources

Hardware resources
The hardware resources used to validate the solution are listed below.

Equipment                        Quantity  Configuration
Dell 2950 server                 5         2-socket quad-core, 32 GB RAM, 2 x 73 GB internal disks (RAID 1), 10 x 1 GbE network interfaces
NS-120 storage array             1         Configured to support NFS and iSCSI protocols
Celerra DAE storage enclosure    3         15 x 300 GB 15k FC drives per enclosure
Avamar backup datastore          1         Single-node deployment
Cisco 3750-E switch              2         48 x 1 GbE ports per switch

Software resources
The software resources used to validate the solution are listed below.

Software                                    Version       Configuration
Avamar backup software                      4.1           Single-node configuration
VMware ESX                                  3.5 Update 3  Each VMware ESX server is configured with a predetermined IP address. This address is used to bootstrap the customer configuration process.
VMware VirtualCenter (DRS, HA, VMotion)     2.5           If this is a standalone implementation of VMware Virtual Infrastructure, the VirtualCenter Server is installed in a virtual machine located on one of the Dell 2950 servers.
NS-120 management software                  5.6           Required to configure iSCSI devices for VMware ESX servers.

Chapter 2: Solution Components and Configuration

Overview

Introduction
The EMC Integrated Infrastructure for VMware solution was designed and built to be deployed as a standalone high-availability (HA) VMware ESX cluster, using iSCSI storage on an EMC Celerra platform. Each component of the solution (storage, network, VMware ESX servers, and Avamar deduplication) was designed according to best practices to ensure a fault-tolerant HA solution that is efficient and scalable.

Contents
This chapter contains the following sections:
- Section A: Solution components
- Section B: iSCSI configuration
- Section C: Cisco Catalyst 3750-E switch configuration
- Section D: VMware ESX server configuration
- Section E: EMC Avamar configuration

Section A: Solution components

Overview

Introduction
The key components in this solution are Dell servers and iSCSI storage.

Contents
This section contains the following topics:
- Key components
- Server components
- Storage components

Key components

Introduction
This section briefly describes the solution's EMC and VMware components.

EMC Celerra NS-120
The EMC Celerra NS-120 brings high availability to multi-protocol environments. With the EMC Celerra NS-120, you can connect to multiple storage networks via NAS, iSCSI, and Fibre Channel SAN with an integrated package that includes dedicated EMC CLARiiON networked storage. In this solution environment, NAS and iSCSI storage are used by the ESX servers.

VMware ESX Server 3.5
VMware ESX 3.5 is the market-leading virtualization hypervisor in use across thousands of IT environments around the world. VMware ESX abstracts server processor, memory, storage, and networking resources into multiple virtual machines, forming the foundation of the VMware Infrastructure 3 suite. VMware ESX is a bare-metal hypervisor that partitions physical servers into multiple virtual machines. Each virtual machine represents a fully functional, complete system with processors, memory, networking, storage, and BIOS.

EMC Avamar deduplication
EMC Avamar backup and recovery solutions utilize patented global data deduplication technology to identify redundant data at the source, minimizing backup data before it is sent over the network. Avamar deduplicated backups function like full backups and can be recovered in just one step, without restoring full backups and subsequent incrementals. In addition, Avamar verifies backup data recoverability and encrypts data for secure electronic backups.

Server components

Server hardware
The EMC Integrated Infrastructure for VMware solution utilizes Dell 2950 servers.

BIOS settings
To ensure optimal performance in a virtualized environment, the following BIOS settings are modified:
- Virtualization support is enabled to allow for 64-bit guest operating systems.
- I/O Acceleration Technology (I/OAT) is enabled for data acceleration.
- Local disk protection is configured with RAID.

Reference
For additional information regarding the Dell hardware used in this solution, see the References section at the end of this document.

Storage components

Storage components
The EMC Integrated Infrastructure for VMware solution uses:
- iSCSI on an EMC Celerra NS-120 array. The EMC Celerra provides access to block and file data using native iSCSI and NFS.
- iSCSI LUNs for the VMware ESX server datastores. With the introduction of native iSCSI client support through the ESX Server VMkernel software initiator, the ESX server is able to access up to 255 iSCSI LUNs from one or more Celerra iSCSI targets.
- Raw Device Mapping (RDM) disks for virtual machines.

Reference
Installation and configuration of the Celerra NS-120 is performed to EMC specifications as detailed in the VMware ESX Server Using EMC Celerra Storage Systems Solutions Guide, which can be found at the following URL:
http://www.emc.com/collateral/hardware/technical-documentation/h5536-vmwareesx-srvr-using-celerra-stor-sys-wp.pdf

Section B: iSCSI configuration

Overview

Introduction
The solution uses iSCSI LUNs for storage purposes.

Contents
This section contains the following topics:
- Prerequisites for iSCSI configuration
- iSCSI LUN configuration

Prerequisites for iSCSI configuration

Requirements
The configuration of iSCSI requires that the VMware ESX hosts have an IP network connection to access the IP storage, and that the iSCSI service is enabled.

Configuration steps
The following steps must be completed before the VMware ESX hosts can see the Celerra devices:
1. Using VirtualCenter, enable the iSCSI client in the security profile (firewall properties) interface of the host.
2. Configure the iSCSI software initiator on the VMware ESX server hosts.
3. Configure iSCSI LUNs and mask them to the IQN of the software initiator defined for this VMware ESX server host.
The following illustration shows an example of masking an iSCSI LUN to the IQN of the software initiator.

References
See the VMware iSCSI SAN Configuration Guide (ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5), located at http://vmware.com, for additional details about configuring iSCSI for the VMware ESX server.
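Where CLI access is preferred, steps 1 and 2 can also be performed from the ESX 3.5 service console. The following is a rough sketch only; the software HBA name (here vmhba40) and the Celerra target address are placeholder assumptions that vary per installation:

```
# Open the firewall for the software iSCSI client (step 1).
esxcfg-firewall -e swISCSIClient

# Enable the software iSCSI initiator (step 2).
esxcfg-swiscsi -e

# Point the initiator at the Celerra iSCSI target and rescan.
# vmhba40 and 192.168.10.20 are placeholders for this sketch.
vmkiscsi-tool -D -a 192.168.10.20 vmhba40
esxcfg-rescan vmhba40
```

After the rescan, the masked LUNs from step 3 should appear under the software HBA in the VI Client storage adapters view.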

iSCSI LUN configuration

LUN configurations
The Celerra iSCSI LUNs are configured as follows:
- LUNs for creating datastores for virtual machines
- LUNs used as Raw Device Mapping (RDM) disks for applications deployed in the environment
- LUNs used to store OS images, VM templates, and clones

iSCSI wizard
The following illustration shows an example of the wizard that allows LUN configuration.

Datastore requirement
The base configuration uses three 500 gigabyte (GB) RAID 5 LUNs on the Celerra to support up to 60 Windows 2003 virtual machines on five ESX servers.

RDM disk requirement
The RDM devices can be configured as either RAID 1 or RAID 5 LUNs, depending on the requirements of the application:
- Use a RAID 1 configuration for database and logging applications. RAID 1 LUNs provide optimal performance for write operations and are suggested for write-intensive I/O profiles.
- Use a RAID 5 configuration for other applications, such as file server and online transaction processing (OLTP) applications.

VMFS requirement
Use RAID 5 LUNs for the Virtual Machine File System (VMFS). The VMFS stores OS images, VM templates, and clones.

iSCSI access requirement
The EMC Integrated Infrastructure for VMware solution is designed to utilize VMware HA and DRS. To support these features, enable Multiple Access for all VMware ESX host iSCSI initiators when configuring the LUNs on the Celerra. Use the Celerra iSCSI wizard or the Celerra command line interface to configure the LUNs.
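The datastore sizing above implies the following per-VM averages. This is rough planning arithmetic only, ignoring VMFS overhead and snapshot headroom:

```shell
# Base configuration: three 500 GB RAID 5 datastore LUNs
# backing up to 60 VMs on five ESX hosts.
lun_count=3
lun_gb=500
vms=60
hosts=5

total_gb=$((lun_count * lun_gb))
echo "Datastore capacity: ${total_gb} GB"         # 1500 GB
echo "Average per VM:     $((total_gb / vms)) GB" # 25 GB
echo "VMs per ESX host:   $((vms / hosts))"       # 12
```

A 25 GB average per Windows 2003 VM leaves room for the OS image plus modest application data; workloads needing more should use dedicated RDM LUNs as described above.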

Uncached write mechanism
The following recommendations apply to the use of the Celerra uncached write mechanism:
- Mounting NFS file systems with the uncached write mechanism enabled is recommended.
- If planning to use more than eight NFS mounts (the default setting) in your deployment, update the configuration setting with the Storage Add-on script during the customization phase of deployment, or utilize the Celerra Manager wizards.
Benefit: The Celerra uncached write mechanism can enhance write performance to the Celerra file systems. This mechanism allows well-formed writes (for example, writes that are disk-block aligned or whose size equals a multiple of the Celerra disk block size) to be sent directly to the disk without being cached on the Celerra Data Mover.

Send and receive buffers
NFS send and receive buffer sizes need to be configured in multiples of 32 KB. In the tested solution, the following default parameters were used:
- Send buffer: 256 KB
- Receive buffer: 128 KB

References
For information about using the customization utility, see the Customization section of this document. For further details on the Celerra Manager wizards available for Celerra configuration, see Chapter 3: Environment Administration Tools. For documents used in the design and planning of the environment configuration, see the References section of this document.
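On the Celerra Control Station, the uncached write option is applied at mount time. The following is a sketch under assumed names: the Data Mover server_2, file system fs_esx01, and mount point /fs_esx01 are placeholders for illustration:

```
# Mount the file system with the uncached write mechanism enabled.
# server_2, fs_esx01, and /fs_esx01 are placeholder names.
server_mount server_2 -option rw,uncached fs_esx01 /fs_esx01

# Verify the mount options on the Data Mover.
server_mount server_2 | grep fs_esx01
```

The Celerra Manager wizards referenced above achieve the same result through the GUI.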

Section C: Cisco Catalyst 3750-E switch configuration

Overview

Introduction
Cisco Catalyst 3750-E switches are used for the IP network infrastructure access layer.

Contents
This section contains the following topics:
- Cisco Catalyst 3750-E switch overview
- Cisco Catalyst 3750-E switch configuration

Cisco Catalyst 3750-E switch overview

Overview
The Cisco Catalyst 3750-E is a stackable GbE switch. Up to nine physical switches can be combined into a single logical switch that is managed as one unit, which enables port channeling of interfaces across switch chassis.

Diagram: network configuration
The diagram below shows the network configuration used in the validated solution.

Cisco Catalyst 3750-E switch configuration

Cabling
In the solution as validated, the ESX server cabling is spread evenly across the two physical switches. For example, on ESX-01, which has three NICs on the iSCSI network, two of the NICs are cabled to the top 3750-E switch and the remaining NIC is cabled to the bottom 3750-E switch.

Server links
Aggregating the ESX server links to the access-layer switch and configuring them in a port channel group allows for increased utilization of server resources as well as redundancy for network connections. The NIC team is configured to load balance egress traffic on the source and destination IP address information.
Benefit: This algorithm improves the overall link use of the ESX system by providing a more balanced distribution across the aggregated links.

Data Movers
Half of the ports from each NS-120 Data Mover are connected to each physical switch. On Data Mover 1, interfaces CGE0 and CGE2 are connected to the top 3750-E switch, and CGE1 and CGE3 are cabled to the bottom switch; the same configuration is used on the second Data Mover. The interfaces for each Data Mover are configured in a port channel on the switch, one port channel per Data Mover, since the Data Movers are in an Active/Standby configuration.
Benefit: This configuration provides redundancy to ensure continuation of traffic flow in the event of a NIC or switch failure.

IP-based port channeling
When configuring the 3750-E switch stack for both the ESX servers and the Celerra Data Movers, IP-based port channeling (802.3ad link aggregation) is used.
Benefit: The ability of the 3750-E to port channel across switches in a stack allows the option of bundling the NICs together logically.

Avamar standalone node
The Avamar standalone node in this environment uses a single GbE connection to the storage network. NIC teaming is not an option at this point, so recovery from a NIC or switch failure requires manually moving the connection to an alternate NIC and/or an alternate switch or port, depending on the failure.

Switching efficiency
To improve the switching efficiency and performance of the storage network in the environment, Jumbo Frame support is configured on the 3750-E stack.
Benefit: An MTU of 9198 bytes reduces per-packet processing overhead, which significantly increases the amount of data that can be sent and received. 30
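Jumbo frame support of this kind is enabled globally on the 3750-E; the first line below is the standard IOS form (a reload is required before the new system MTU takes effect). The ESX-side command and the 9000-byte end-host MTU are illustrative assumptions, not values from the validated build.

```
! Catalyst 3750-E, global configuration (takes effect after a reload)
system mtu jumbo 9198

# ESX service console: matching jumbo MTU on the storage vSwitch (illustrative)
esxcfg-vswitch -m 9000 vSwitch2
```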

Section D: VMware ESX server configuration

Overview

Introduction
The validated solution uses VMware ESX servers for virtualization purposes.

Contents
This section contains the following topics:

Topic                                              See Page
VMware ESX server considerations                   32
VMware ESX server installation and configuration   33
iSCSI configuration for the VMware ESX server      35
Adding a datastore configuration                   37
Configuring Disk/LUN properties                    39

31

VMware ESX server considerations

Introduction
When a VMware ESX server is used with EMC Celerra, it is critical to configure both the Celerra and the ESX server properly to ensure optimal performance and availability.

Required partitions
The VMware ESX 3.5 boot disk on the Dell PowerEdge 2950 is a RAID 1 LUN supported by the Dell PERC adapter. The ESX installation requires the following four partitions:
- /boot (100 MB minimum)
- Swap (544 MB recommended)
- / (2560 MB recommended, although ESX 3.5 update 3 supports up to 5 GB)
- /var/log (200 MB recommended)

32

VMware ESX server installation and configuration

Standard installation
Installation of the VMware ESX 3.5 software on the servers is standard, with no variation from the instructions in the VMware documentation.
Next step: After completing the software installation, configure the network and storage settings.

ESX configuration
In this solution, three IP networks/vSwitches are configured:
- One for VMotion
- One for the Storage network
- One for Public (client) access

Virtual Switch configuration
The following illustration shows a Virtual Switch configured for each network. 33
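The same multi-network layout can also be sketched from the ESX 3.5 service console. The vSwitch names, port group names, vmnic numbers, and addresses below are illustrative assumptions, not taken from the validated build:

```
# Create a vSwitch for VMotion and attach a VMkernel port to it
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 VMotion

# Create a vSwitch for the storage (iSCSI) network
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A iSCSI vSwitch2
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 iSCSI

# Link physical uplinks (vmnics) to each vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch2
```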

VMkernel port
Since iSCSI is used in this environment, a VMkernel port for software iSCSI is created on the vSwitch configured for storage traffic.
Recommendation: It is strongly recommended that a separate iSCSI network be used to segregate that traffic from client access and VMotion traffic, for optimal performance and security.

ESX server best practices
Suggested best practices for modifications to the ESX environment are:
- Align the VMFS partition on a 64 KB boundary for increased performance. The partition is aligned on a 64 KB boundary by default when configured using the GUI; setting the alignment is necessary only when using the CLI to create the VMFS datastore.
- Enable CDP, which provides more visibility into the switching environment.
- Configure vCenter HA for high availability. In this environment, the vCenter server is on a virtual machine in the cluster. This is a supported configuration but not required; the vCenter server can be on a server outside of the cluster environment, but HA is strongly recommended.

Network interface settings
Important: Ensure that speed and duplex settings are consistent between the ESX server NICs and the Ethernet switch; a mismatch results in either performance degradation or complete loss of network connectivity.

Use of DNS
Once the environment build is complete and applications are deployed, use DNS to enable proper address resolution for both the VMware Infrastructure environment and the general data center environment. 34
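The CLI alignment case mentioned above can be sketched as follows, following VMware's documented fdisk procedure for aligning a VMFS partition on a 64 KB boundary. The device path, vmhba identifier, and datastore label are illustrative assumptions:

```
# Align the partition start on a 64 KB boundary (sector 128) with fdisk,
# then format it as VMFS-3; device and datastore names are illustrative.
fdisk /dev/sdb        # n (new), p (primary), t -> fb (VMFS partition type)
                      # x (expert), b, 1, 128 (start at sector 128), w (write)
vmkfstools -C vmfs3 -S iscsi_ds01 /vmfs/devices/disks/vmhba40:0:0:1
```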

iSCSI configuration for the VMware ESX server

Enabling the iSCSI initiator
Follow the steps in the table below to use the software iSCSI initiator in VMware ESX 3.5.
Validated solution: In the test environment, build scripts configured the iSCSI initiator.

Step 1: On the Configuration tab of the ESX server, click Storage Adapters, select the iSCSI adapter you want to configure, and click Properties.
Step 2: In the iSCSI Initiator Properties dialog box, click Configure.
Result: The General Properties dialog box opens, displaying the initiator's status, default name, and alias. The following illustration shows the General Properties dialog box.
Step 3: Select Enabled to enable the initiator.
Step 4: Enter the new name to change the default iSCSI name for your initiator.
Note: You do not need to change the default name; however, if you do, be sure to format the name properly, otherwise some storage devices might not recognize the software iSCSI initiator.
Format: iqn.<year-month of domain registration>.<reverse domain name>:<device name>
Step 5: Click OK to save your changes. 35
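The same enablement can be sketched from the ESX 3.5 service console. The commands below are the standard forms; the firewall service name shown is the one typically used in ESX 3.x, and the example initiator name is an illustrative assumption:

```
# Enable the software iSCSI initiator and open the service console firewall
esxcfg-swiscsi -e
esxcfg-firewall -e swISCSIClient

# A properly formatted initiator name follows the IQN convention, e.g.:
#   iqn.1998-01.com.vmware:esx-01
```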

iSCSI initiator view
The following illustration shows a view in vCenter of the configured iSCSI initiators:

Viewing iSCSI initiator properties
Click Properties to view the properties of an iSCSI initiator. The following illustration shows an example of the iSCSI properties screen. 36

Adding a datastore configuration

Introduction
The virtual machines use a datastore created on the iSCSI device for storage purposes.

Add a datastore for storage
Follow the steps in the table below to add a datastore for storage.

Step 1: Log in to the VI Client and select a server from the inventory panel.
Step 2: Select the Configuration tab and click Storage.
Step 3: Click Add Storage to open the Add Storage screen. The following illustration shows how to select a storage type. 37

Step 4: Select the Disk/LUN storage type and click Next to open the Select Disk/LUN page.
Note: Opening this page can take a few seconds, depending on the number of targets that you have. The following illustration shows the screen on which to select the Disk/LUN.
Step 5: Select the iSCSI device for the datastore and click Next to open the Current Disk Layout page.
Step 6: Verify the current disk layout and click Next to open the Disk/LUN Properties page.
Step 7: Enter a datastore name.
Note: The datastore name appears in the VI Client, and the label must be unique within the current Virtual Infrastructure instance.

Next step
After adding the datastore, verify that the layout and properties are correct. See Configuring Disk/LUN properties for step-by-step instructions. 38

Configuring Disk/LUN properties

Configure Disk/LUN properties
After adding the datastore, verify the layout and properties as described below.

Step 1: Click Next on the Select Disk/LUN page to open the Disk/LUN - Formatting page. The following illustration shows the Disk/LUN Formatting page.
Step 2: If needed, adjust the file system values and capacity for the datastore, and click Next to open the Ready to Complete page.
Note: By default, all of the free space on the storage device is available.
Step 3: Review the datastore configuration information and click Finish. 39

Completed storage configuration The following illustration shows the completed storage configuration for the ESX servers. 40

Section E: EMC Avamar configuration

Overview

Introduction
The solution as validated uses EMC Avamar software for deduplicated backups.

Contents
This section contains the following topics:

Topic                                       See Page
EMC Avamar deduplication overview           42
EMC Avamar installation and configuration   43
Customization                               44

41

EMC Avamar deduplication overview

Introduction
EMC Avamar is backup and recovery software which, in the validated solution, is installed as a single-node server, single-rack unit appliance.

Deduplication benefits
Avamar's global data deduplication technology:
- Eliminates the unnecessary transmission and storage of redundant data across the network
- Solves traditional backup challenges by reducing the size of the backup data at the source, storing only a single copy of sub-file data segments across all sites and servers
- Allows deduplicated backups to function like full backups that can be recovered in just one step, while also verifying backup data recoverability

42
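The storage effect of sub-file deduplication can be sketched with a toy example. This illustrates the principle only: Avamar segments data at variable, sub-file boundaries, whereas this sketch uses fixed 8-byte blocks for visibility.

```shell
# Toy fixed-block deduplication: hash each segment and count unique hashes.
workdir=$(mktemp -d)
printf 'AAAAAAAABBBBBBBBAAAAAAAA' > "$workdir/data.bin"   # 3 blocks, 2 unique
split -b 8 "$workdir/data.bin" "$workdir/seg_"            # one file per segment
total=$(ls "$workdir"/seg_* | wc -l)
unique=$(md5sum "$workdir"/seg_* | awk '{print $1}' | sort -u | wc -l)
echo "segments without dedup: $total, segments stored with dedup: $unique"
rm -r "$workdir"/seg_* "$workdir/data.bin"
```

Only the unique segments need to be transmitted and stored; repeated segments are replaced by references to the copy already on the server.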

EMC Avamar installation and configuration

Installation
For the solution as validated, a standalone unit configuration is utilized. The single-rack unit appliance is installed in the EMC rack and configured according to the EMC Avamar 4.1 Server Software Installation Manual.

Activation
After installing and configuring the appliance per the single-node server instructions, the Avamar client is installed on the VMs in the cluster and the VMs are activated on the Avamar server.

Backups
In the validated solution, the Avamar node is configured to perform guest-based backups, per Avamar recommendations for taking backups within virtualized environments. Guest-based backups provide the same file-level backup methods and procedures as for a physical machine.
Note: Multiple backup jobs cannot be run for the same client, but multiple jobs across multiple clients will work.

Other types of backups
Other types of backups that Avamar provides, which were not used to validate this solution, include:
- VMware Consolidated Backup (VCB): Uses a centralized proxy server to perform image-level backups for VMs running any operating system, and file-level backups for VMs running Microsoft Windows.
- Service Console-based: Image-level backup of VMs performed on the ESX server.

43

Customization Introduction The flexibility and ease of customization of the EMC Integrated Infrastructure for VMware solution allows it to be deployed in any environment and be up and running quickly, without having to touch each individual component of the solution. Customized environment values The following illustration shows the interactive Customization Utility, used to input customized environment values. Customized ESX data ESX configuration parameters can be configured by modifying the ESX-Addon.txt file in the customization script. 44

Chapter 3: Environment Administration Tools

Overview

Introduction
Once the EMC Integrated Infrastructure for VMware solution is deployed, the environment is ready to begin hosting applications. Each component within the solution can be monitored and administered using the tools provided with the solution. This chapter addresses each component individually and provides an overview of the tools in place for regular administration of the environment.

Contents
This chapter provides administration information for the following components:

Topic                                        See Page
Celerra                                      46
ESX 3.5 Virtual Infrastructure Environment   54
Avamar                                       61

45

Celerra

Introduction
Once the environment is installed and configured, maintenance of the storage environment should be performed with Celerra Manager or Navicli. This topic introduces the task wizards available within Celerra.
Note: Navicli administration is beyond the scope of this document. If further information is required regarding environment administration using Navicli, please consult your EMC representative.

Wizards
Various network, CIFS, file system, and iSCSI tasks are executed quickly and easily using the wizards included with Celerra Manager. The following illustration shows an example of the available wizards. 46

New File System Wizard The following series of figures illustrates the creation of a new file system using the New File System Wizard in Celerra Manager. 47

New File System Wizard (continued) 48

New File System Wizard (continued) 49

New File System Wizard (continued) 50

New File System Wizard (continued) 51

New File System Wizard (continued) 52

New File System Wizard (continued) Monitor GUI The figure below illustrates the Monitor graphical user interface accessed via the Tools menu in Celerra Manager. 53

ESX 3.5 Virtual Infrastructure environment Introduction Once the environment is deployed, various tasks required to monitor, maintain, and add to the environment are performed by command line or by using the Virtual Infrastructure Client. This topic highlights the tools within Virtual Infrastructure Client. Note: Command line utilities are beyond the scope of this document. The Performance screen The figure below illustrates the Performance screen, which enables monitoring of the CPU, memory, disk, and network utilization of the ESX servers in the environment. New Virtual Machine Wizard To deploy applications in the environment, virtual machines need to be deployed. This is done with the Virtual Infrastructure Client. 54

Specify typical or custom configuration The figure below illustrates the New Virtual Machine wizard screen, where the VM machine configuration is selected. Input VM name and location The figure below illustrates the screen where the new VM name and location are input. 55

Add Storage Wizard The following two figures illustrate adding storage to an ESX server using the Add Storage wizard, accessed on the Configuration tab for the ESX server. 56

Virtual Machine Properties The figure below illustrates the Virtual Machine Properties screen, used to remove or edit resources associated with a VM, accessed with the Edit Settings option after right-clicking the VM icon. Add Networking Wizard The figure below illustrates how to add a vswitch to an ESX server, using the Add Networking wizard, accessed on the Configuration tab under Networking. 57

Connection type The figure below illustrates the connection type specification screen. Create a virtual switch The figure below illustrates how to create a virtual switch. 58

New Cluster Wizard To add a new HA cluster to the existing environment, use the New Cluster wizard, accessed by right-clicking the data center name and selecting Add New Cluster. Note: For more detail on these features, see VMware Infrastructure - Automating High Availability (HA) Services with VMware HA and VMware Infrastructure Resource Management with VMware DRS. Cluster name and features The figure below illustrates the first screen in the New Cluster wizard, establishing the new cluster name and features. 59

New Cluster Wizard summary screen The figure below illustrates the New Cluster Wizard summary screen. The figure below illustrates the HA and DRS cluster settings screen. 60

Avamar

Introduction
Once the virtual machine has been set up as a client, it can be managed through the Avamar administration application. This topic highlights that application.

Backup and restore
Backups can be taken at the file, folder, or directory level by checking the appropriate box on the Backup & Restore screen. The figure below illustrates the Avamar Administrator screen. 61

Chapter 4: Basic Administration Tasks

Overview

Introduction
Chapter 3 provided an overview of the tools available to administer the EMC Integrated Infrastructure for VMware solution. This chapter provides a brief overview of the basic administration tasks and correlates the tasks with the tools. Also provided are references to sources of detailed information on performing the tasks.

Contents
This chapter contains the following topics:

Topic                                See Page
Monitoring the environment           63
Configuring additional storage       65
VMware ESX administration tasks      66
Backing up and restoring guest VMs   67

62

Monitoring the environment

Components that can be monitored
It is necessary to monitor the solution environment so that each component and subsystem runs efficiently. The following components can be monitored with the appropriate tools:
- ESX environment
- Storage environment
- IP switching
- Avamar

ESX environment
Use VirtualCenter or esxtop commands to monitor the ESX environment. Third-party tools are available to perform additional monitoring:
- CPU usage
- Memory usage
- Disk utilization
- Network utilization
Reference: Information on monitoring the ESX environment can be found at http://www.vmware.com.

Storage environment
Use Celerra Manager/Celerra Monitor to monitor the storage environment. Aspects of the storage environment that can be monitored include:
- Disk utilization
- I/OPS
- Network utilization
Note: If an EMC ControlCenter (ECC) environment exists, the storage can also be monitored with that tool, either in addition or instead.
Reference: Information on using Celerra Manager/Celerra Monitor to monitor the storage environment can be found on EMC Powerlink: Managing EMC Celerra Statistics, P/N 300-004-579, Rev A04, Version 5.6, December 2008. 63
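One way to collect the ESX metrics above for later graphing is esxtop in batch mode; the sample interval, iteration count, and output file name below are illustrative:

```
# Capture ESX resource statistics in batch mode for later graphing
# (5-second samples for one hour = 720 iterations)
esxtop -b -d 5 -n 720 > esx01_stats.csv
```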

IP switching
Use any Cisco or third-party SNMP-based tools to monitor IP switching. MRTG was used during solution validation. Aspects of IP switching that can be monitored include:
- Interface utilization
- Interface errors
- Switch CPU utilization
Reference: For information on MRTG, visit http://oss.oetiker.ch/mrtg/

Avamar
Use the monitoring tool available in Avamar Administrator to monitor the Avamar system. Aspects of Avamar that can be monitored include:
- Backup schedules
- Capacity utilization
- Errors or failures
Reference: Information on monitoring the Avamar system can be found in the EMC Avamar 4.1 Administrator's Manual on EMC Powerlink. 64
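Tools such as MRTG poll standard interface counters over SNMP; the same counters can be inspected by hand as sketched below. The community string and management address are illustrative assumptions:

```
# Poll interface octet counters on the switch stack over SNMP
snmpwalk -v 2c -c public 10.6.120.1 IF-MIB::ifInOctets
snmpwalk -v 2c -c public 10.6.120.1 IF-MIB::ifOutOctets
```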

Configuring additional storage

Tasks
The tasks involved in configuring additional iSCSI and NFS storage on the Celerra and the CLARiiON include:
- Adding more disk capacity (DAEs and disks)
- Creating additional file systems and logical volumes
- Modifying or growing existing logical volumes
- Creating additional storage pools

Tools to use
To perform these and other storage configuration tasks, use Celerra Manager and Navisphere Manager.

Reference
Refer to the VMware ESX Server Using EMC Celerra Storage Systems Solutions Guide at the following URL:
http://www.emc.com/collateral/hardware/technical-documentation/h5536-vmwareesx-srvr-using-celerra-stor-sys-wp.pdf

65
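As a hedged sketch, file system provisioning of this kind can also be driven from the Celerra control station CLI. The file system name, size, storage pool, Data Mover, and client address below are illustrative assumptions, not values from the validated build:

```
# Celerra control station: create, mount, and export a new file system
nas_fs -name nfs_ds02 -create size=100G pool=clar_r5_performance
server_mount server_2 nfs_ds02 /nfs_ds02
server_export server_2 -Protocol nfs -option root=10.6.121.11 /nfs_ds02
```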

VMware ESX administration tasks

Tasks
Basic VMware ESX administration tasks include:
- Deploying VMs
- Creating templates, clones, and snapshots
- Creating users, groups, and permissions
- Creating resource pools
- Migrating VMs between ESX servers

Tools to use
To perform these and other VMware administration tasks, use either:
- vCenter, available from the Virtual Infrastructure (VI) Client, or
- esxcfg command line tools

Reference
Refer to Basic System Administration: ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5 at the following URL:
http://www.vmware.com/pdf/vi3_35/esx_3/r35/vi3_35_25_admin_guide.pdf
Additional VMware Infrastructure documentation can be found at the following URL:
http://www.vmware.com/support/pubs/vi_pages/vi_pubs_35_3i_i.html

66

Backing up and restoring guest VMs

Tasks
Backing up and restoring guest VMs involves:
- Creating backup pools and schedules
- Installing the Avamar agent on VMs
- Verifying backups
- Restoring files and folders

Tools
Use Avamar Administrator to back up and restore guest VMs.

Reference
Information on backing up and restoring with the Avamar system can be found in the EMC Avamar 4.1 Administrator's Manual on EMC Powerlink. 67

Chapter 5: Testing and Validation

Overview

Introduction
This chapter describes the methodology and parameters used to test and validate the EMC Integrated Infrastructure for VMware solution.

Contents
This chapter contains the following topics:

Topic        See Page
Parameters   69
Methodology  70
Results      71
Summary      73

68

Parameters

Introduction
This section reviews the parameters used to measure the EMC Integrated Infrastructure for VMware solution's performance and behavior under simulated load.

Testing environment
The solution was tested and validated in the EMC Global Solutions Customer Integration Labs using I/Ometer, an I/O subsystem measurement and characterization tool configured to replicate the behavior of many different applications and measure performance and behavior under simulated loads.

Equipment
A total of 60 VMs were used during the validation testing phase. The test environment consisted of five ESX servers (Dell PE2950, four 2.50 GHz dual-core processors, 32 GB memory), each configured with twelve Windows 2003 guest virtual machines, each with four virtual CPUs and 3 GB memory.

Workload profile
The table below shows the I/Ometer workload profile defined for each VM:

Workload              Block size  Read/Write  Random/Sequential
20% Exchange 2003     4 KB        67% Read    100% Random
20% File Server       8 KB        90% Read    75% Random
20% OLTP              8 KB        67% Read    100% Random
20% Logging           64 KB       100% Write  100% Sequential
20% Video on Demand   512 KB      100% Read   100% Sequential

69
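With five equally weighted workload types, the blended profile works out to roughly 64.8 percent reads overall; a quick check of that arithmetic:

```shell
# Equal 20% weighting across the five workload read percentages
# (Logging is 100% write, i.e. 0% read)
read_pct=$(awk 'BEGIN { printf "%.1f", 0.2*(67 + 90 + 67 + 0 + 100) }')
echo "blended read percentage: ${read_pct}%"
```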

Methodology

Introduction
This section describes the testing methodology used to measure the EMC Integrated Infrastructure for VMware solution's performance and behavior.

Methodology
Testing was executed incrementally, starting with six virtual machines per ESX server and increasing the virtual machine count until performance reached its maximum.

Measurements
The following measurements were taken using esxtop and were used to generate graphical representations of each metric:
- Workload I/OPS
- CPU utilization
- Workload bandwidth utilization
- Celerra array utilization
Note: The Celerra results reflect I/Ometer running on 12 VMs on one ESX server, using the workload profile stated above. 70

Results

Workload I/OPS
The following illustration shows the workload I/OPS.

Workload MB/s
The following illustration shows the workload bandwidth utilization. 71

Physical CPU utilization
The following illustration shows the physical CPU utilization.

Celerra array utilization
The following illustration shows the Celerra array utilization.
Reference: For details, see Measurements in the Methodology topic. 72

Summary

Summary
Based on the results of the tested workload profile, the following determinations were made:
- As application workload (virtual machines) is added to the environment, performance across all metrics eventually plateaus at 12 VMs per ESX server.
- Based on the defined workload profile, this represents an aggressive environment that performs very well under load.
- This aggressive environment lends itself to many varied configurations for combinations of mail server, file server, database server, and VoD server deployments.

73

Conclusion

Overview

Conclusion
The EMC Integrated Infrastructure for VMware solution was designed and built as a data center in a rack, which uses build and customization automation to deploy a standardized architecture hosting numerous customer applications in a virtualized environment. This solution enables customers to deploy a best practices-based virtualization solution that maximizes the benefits of VMware ESX, EMC Celerra, and EMC Avamar.

Learn more
To learn more about this and other solutions, contact an EMC representative or visit www.emc.com/solutions. 74

References

EMC Celerra
- Using EMC Celerra IP Storage with VMware Infrastructure 3.5 over iSCSI and NFS Best Practices Planning

VMware ESX Server
- VMware ESX Server Optimization with EMC Celerra Performance Study
- VMware ESX Server Using EMC Celerra Storage Systems Solutions Guide
- VMware ESX Server 3.5 Resource Management Guide
- VMware ESX Server 3.5 Administration Guide
- VMware iSCSI SAN Configuration Guide: ESX Server 3.5, ESX Server 3i version 3.5, VirtualCenter 2.5

EMC Avamar
- EMC Avamar Backup Solutions for VMware ESX Server on Celerra NS Series
- EMC Avamar 4.1 Server Software Installation Manual

VMware Infrastructure
- VMware Infrastructure: Automating High Availability (HA) Services with VMware HA
- VMware Infrastructure: Resource Management with VMware DRS

Dell servers
- http://www.dell.com/downloads/global/power/dell2socket_vs_hp4socket_vmware.pdf
- http://www.dell.com/downloads/global/products/pedge/en/pe2950_ss_072007.pdf

75