EMC VNX2 Unified Best Practices for Performance




EMC VNX2 Unified Best Practices for Performance

VNX OE for Block 05.33.008
VNX OE for File 8.1.8

EMC Core Technologies Division, VNX BU

Abstract: This applied best practices guide provides recommended best practices for installing and configuring VNX2 unified storage systems for good performance.

October 2015

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA. Published October, 2015.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is". EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on EMC Online Support.

Part Number H10938.8

Contents

Chapter 1 System Configuration
  Essential guidelines
  Storage Processor cache
  Physical placement of drives
  Hot sparing
  Usage of flash drives in hybrid flash arrays
  Availability and connectivity
    Fibre Channel and iSCSI connectivity
    NAS connectivity

Chapter 2 Storage Configuration
  General considerations
    Drive type
    Rules of thumb
    RAID level
    Calculating disk IOPS by RAID type
  Determining which LUN type to configure
    Creating LUNs for Block access
    Creating LUNs for File access
  Storage pool considerations
    Storage pool creation and expansion
    Pool capacity considerations
    Storage tiers
    Tiering policies
  Storage pool object considerations
    Storage pool LUNs for Block access
    Storage pool LUNs for File access
  Classic RAID Group LUN considerations
    Drive location selection
    Classic RAID Group LUNs for Block access
    Classic RAID Groups for File access

Chapter 3 Data Services
  FAST VP
    General
    Data relocation
    Pool capacity utilization
    Considerations for VNX OE for File
  Multicore FAST Cache
    General considerations
    Creating FAST Cache
    Enabling Multicore FAST Cache on a running system
  Data @ Rest Encryption
  Replication
    VNX Snapshots for Block LUNs
    SnapView for Block LUNs
    SnapSure checkpoints for file systems
    MirrorView for Block LUN replication
    RecoverPoint for Block LUN replication
    IP Replicator for File System replication
  Deduplication and compression
    Block LUN compression
    Block LUN deduplication
    Deduplication and compression with VNX OE for File
  Anti-virus
    File system CAVA

Chapter 4 Application Specific Considerations
  Block application tuning
    Host file system alignment
    VMware ESX Server with iSCSI datastore
  File application tuning
    Hypervisor / Database over NFS or SMB
    Bandwidth-intensive applications

Conclusion

Preface

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this guide may not be supported by all revisions of the hardware or software currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Note: This document was accurate as of the time of publication. However, as information is added, new versions of this document may be released to EMC Online Support. Check the website to ensure that you are using the latest version of this document.

Purpose

This guide delivers straightforward guidance to the majority of customers using the storage system in a mixed business environment. The focus is on system performance and maximizing the ease of use of the automated storage features, while avoiding mismatches of technology. Some exception cases are addressed in this guide; however, less commonly encountered edge cases are not covered by general guidelines and are addressed in use-case-specific white papers.

Guidelines can and will be broken, appropriately, owing to differing circumstances or requirements. Guidelines must adapt to:
- Different sensitivities toward data integrity
- Different economic sensitivities
- Different problem sets

These guidelines contain a few DON'T and AVOID recommendations:
- DON'T means: Do not do it; there is some pathological behavior
- AVOID means: All else being equal, it is recommended not to, but it is still acceptable to do it

Audience

This document is intended for EMC customers, partners, and employees who are installing and/or configuring VNX2 unified systems. Some familiarity with EMC unified storage systems is assumed.

Related documents

The following documents provide additional, relevant information. Access to these documents is based on your login credentials. All of the documents can be found on http://support.emc.com. If you do not have access to the following content, contact your EMC representative.
- VNX2: Data at Rest Encryption
- Virtual Provisioning for the VNX2 Series - Applied Technology
- Introduction to the EMC VNX2 Series - A Detailed Review
- Introduction to EMC VNX2 Storage Efficiency Technologies
- VNX2 Multicore FAST Cache - A Detailed Review
- White Paper: VNX2 FAST VP - A Detailed Review
- White Paper: VNX2 Deduplication and Compression - Maximizing effective capacity utilization
- White Paper: VNX2 MCx - Multicore Everything
- White Paper: VNX Replication Technologies - An Overview
- White Paper: VNX Snapshots
- Host Connectivity Guide

Chapter 1 System Configuration

This chapter presents the following topics:
- Essential guidelines
- Storage Processor cache
- Physical placement of drives
- Hot sparing
- Usage of flash drives in hybrid flash arrays
- Availability and connectivity

Essential guidelines

This guide introduces specific configuration recommendations that enable good performance from a VNX2 unified storage system. At the highest level, good performance design follows a few simple rules. The main principles of designing a storage system for performance are:
- Flash First: utilize flash storage for the active dataset to achieve maximum performance
- Distribute load over available hardware resources
- Design for 70 percent utilization (activity level) for hardware resources
- When utilizing Hard Disk Drives (HDD), AVOID mixing response-time-sensitive I/O with large-block I/O or high-bandwidth sequential I/O
- Maintain the latest released VNX Operating Environment version

Storage Processor cache

Storage Processor memory configuration is not required. Memory allocation amounts and cache page size are not configurable parameters.

Physical placement of drives

When initially placing drives in the array:
- Spread flash drives across all available buses, and when possible place them in the lowest-numbered enclosures
- There are no restrictions around using or spanning across Bus 0 Enclosure 0

Hot sparing

Hot sparing is the process of rebuilding a failed drive's data onto a system-selected compatible drive. Any unbound non-system drive can be considered for sparing. When planning Hot Spares, consider the following recommendations:
- Plan to reserve at least one of every 30 installed drives of a given type
  - Verify the count in the GUI (System -> Hot Spare Policy) or CLI (naviseccli hotsparepolicy -list)
  - Note: Unbound system drives (Bus 0 Enclosure 0 Disk 0 through Disk 3) cannot be used as hot spares
- Ensure that unbound drives for each drive type are available
  - SAS Flash must spare for SAS Flash
  - SAS Flash VP must spare for SAS Flash VP
  - SAS must spare for SAS (regardless of rotational speed)

  - NL-SAS must spare for NL-SAS
- The capacity of an unbound drive should be equal to or larger than the provisioned drives for which it will spare

Usage of flash drives in hybrid flash arrays

EMC recommends the use of flash drives in VNX storage systems to maximize the potential of the MCx operating environment. EMC recommends deploying flash drives in the following priority order:
- Configure Multicore FAST Cache first
  - Multicore FAST Cache is a global resource that can benefit all storage resources
  - Note: Multicore FAST Cache is not applicable for all-flash arrays
- Next, add a flash tier to pools containing thin LUNs
  - The flash tier can accelerate access to thin LUN metadata, improving performance
  - Configure at least 3% of pool capacity in flash, to capture metadata
  - Thin Provisioned LUNs, VNX Snapshots, Block Compression, and Block Deduplication all rely on thin LUN technology
- Then add a flash tier to pools utilizing FAST VP
  - Configure at least 10% of pool capacity, for flash acceleration of the active workload
  - Configure at least 25% of pool capacity, for near-all-flash performance
- Finally, dedicate an all-flash pool to storage objects with very high performance requirements

More details on the effective use of flash drives for these purposes can be found in the relevant sections of this paper.

Availability and connectivity

The VNX2 unified storage array offers connectivity to a variety of client operating systems, via multiple protocols such as FC, iSCSI, NFS, and CIFS. EMC provides connectivity guides with detailed instructions for connecting and provisioning storage via different protocols to the specific host types. It is highly recommended that you consult the connectivity documents on http://support.emc.com for the host types that will be connected to the array for any specific configuration options. Host connectivity guides cover more detail, especially for a particular operating system; reference them for host-specific connectivity guidelines.

Fibre Channel and iSCSI connectivity

Fibre Channel connectivity is facilitated via the FC I/O modules on the Block Storage Processors. iSCSI connectivity is facilitated via the iSCSI I/O modules on the Block Storage Processors.
- Use multiple I/O ports on each SP, and balance host port connections across I/O ports, as host port connections affect the preferred CPU core assignment
- If not connecting all of the available I/O ports, use the even-numbered ports on each I/O module before using any odd-numbered ports
- Initially skip the first FC and/or iSCSI port of the array if those ports are configured and utilized as MirrorView connections
- For the VNX8000, engage the CPU cores from both CPU sockets with Front End traffic
  - Balance the Front End I/O modules between slots 0-5 and slots 6-10
  - DON'T remove I/O modules if they are not balanced; instead, contact EMC support
  - Balance host port assignments across I/O Module slots 0-5 and 6-10
- AVOID zoning every host port to every SP port
- When registering host HBAs with VNX OE for Block Storage Groups, make sure to set the appropriate failover mode based on the host type. See the Host Connectivity Guides for details.

For Fibre Channel:
- Ensure that the FC ports connect at the highest speed supported by the environment, preferably 16Gb or 8Gb
- Consider the port count in use when performance targets are required
  - 16Gb ports have a max capability of 90,000 IOPS, or 1,050 MB/s
  - 8Gb ports have a max capability of 60,000 IOPS, or 750 MB/s

For iSCSI:
- Use 10Gbps for the best performance
  - 10Gb ports have a max capability of 40,000 IOPS, or 1,200 MB/s
- Configure Jumbo Frames (MTU of 9000) on all iSCSI ports
  - Note: The entire network infrastructure must also support Jumbo Frames
- When possible, segregate iSCSI traffic onto dedicated storage networks
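As an illustration of applying the per-port maxima above when choosing a front-end port count, here is a minimal Python sketch. The per-port figures are the ones quoted in this guide; the helper itself, its name, and the idea of taking the larger of the two requirements are my assumptions, not an EMC sizing tool:

```python
import math

# Per-port maxima quoted above: (max IOPS, max MB/s).
# The dictionary keys and the ports_required helper are illustrative.
PORT_LIMITS = {
    "FC 16Gb":    (90_000, 1050),
    "FC 8Gb":     (60_000, 750),
    "iSCSI 10Gb": (40_000, 1200),
}

def ports_required(port_type, target_iops, target_mbps):
    """Minimum port count to satisfy both the IOPS and bandwidth targets."""
    max_iops, max_mbps = PORT_LIMITS[port_type]
    return max(math.ceil(target_iops / max_iops),
               math.ceil(target_mbps / max_mbps))

# Hypothetical example: 150,000 IOPS and 2,000 MB/s over 16Gb FC
print(ports_required("FC 16Gb", 150_000, 2000))  # 2
```

Remember that ports sized this way should still be spread across SPs and I/O module slots as described above, rather than concentrated on one module.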

NAS connectivity

NAS protocols (NFS and SMB/CIFS) are facilitated via I/O modules on the File data movers.
- Use 10Gbps for the best performance
- Configure Jumbo Frames (MTU of 9000) on all NAS ports
  - Note: The entire network infrastructure must also support Jumbo Frames
- It's recommended to use network trunking and multipathing to provide port failover and greater aggregate bandwidth for NAS connections to a single DM
  - Configure LACP across 2 or more ports on a single DM
  - Use LACP instead of EtherChannel

Chapter 2 Storage Configuration

This chapter presents the following topics:
- General considerations
- Determining which LUN type to configure
- Storage pool considerations
- Storage pool object considerations
- Classic RAID Group LUN considerations

General considerations

Drive type

Match the appropriate drive type to the expected workload:

Drive type   | Workload type
SAS Flash    | For extreme performance; these provide the best performance for transactional random workloads, and the lowest write service times. Required for Multicore FAST Cache.
SAS Flash VP | For extreme performance FAST VP tier; these are a higher capacity flash option. Not for use with Multicore FAST Cache.
SAS          | For general performance tier.
NL-SAS       | For less active data, well-behaved streaming data, archive purposes, and backups.

Rules of thumb

Disk drives are a critical element of unified performance. Use the rule of thumb information to determine the number of drives to use to support the expected workload. Rule of thumb data is based on drives that are:
- Operating at or below recommended utilization levels
- Providing reasonable response times
- Maintaining overhead to handle bursts or hardware failures

These guidelines are a conservative starting point for sizing, not the absolute maximums.

Rules of thumb (RoT) for drive bandwidth (MB/s):

Bandwidth                        | NL-SAS  | SAS 10K | SAS 15K | Flash (All)
RoT per drive, Sequential Read   | 15 MB/s | 25 MB/s | 30 MB/s | 90 MB/s
RoT per drive, Sequential Write  | 10 MB/s | 20 MB/s | 25 MB/s | 75 MB/s

This chart gives the expected per-drive bandwidth of the different drive types when servicing sequential workloads. Disk drives deliver optimal bandwidth when the workload consists of:
- Large-block I/O (128KB or larger)
- Multiple concurrent sequential streams

EMC recommends the use of parity RAID (RAID-5 or RAID-6) for predominantly sequential workloads. When sizing for bandwidth with RoT, do not include parity drives in the calculations. For example, to estimate the MB/s of a 4+1 RAID group, multiply the appropriate value from the chart by 4 (the number of non-parity drives):
- SAS 15K, RAID-5 4+1, with sequential write: 4 * 25 MB/s = 100 MB/s
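The bandwidth sizing arithmetic above can be sketched in Python. The per-drive MB/s values are the rules of thumb from the chart; the helper function and its name are illustrative, not an EMC tool:

```python
# Rule-of-thumb sequential bandwidth per drive, in MB/s: (read, write),
# taken from the chart above. The function is an illustrative sketch.
SEQ_ROT_MBPS = {
    "NL-SAS":  (15, 10),
    "SAS 10K": (25, 20),
    "SAS 15K": (30, 25),
    "Flash":   (90, 75),
}

def raid_group_bandwidth(drive_type, data_drives, workload="read"):
    """Estimate sequential MB/s for a parity RAID group.

    data_drives is the non-parity drive count (e.g. 4 for RAID-5 4+1),
    since parity drives are excluded when sizing for bandwidth.
    """
    read_mbps, write_mbps = SEQ_ROT_MBPS[drive_type]
    per_drive = read_mbps if workload == "read" else write_mbps
    return data_drives * per_drive

# The guide's own example: SAS 15K, RAID-5 4+1, sequential write
print(raid_group_bandwidth("SAS 15K", 4, "write"))  # 100
```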

Rules of thumb (RoT) for drive throughput (IOPS):

Throughput    | NL-SAS  | SAS 10K  | SAS 15K  | SAS Flash VP (eMLC) | SAS Flash (SLC)
Per drive RoT | 90 IOPS | 150 IOPS | 180 IOPS | 3500 IOPS           | 5000 IOPS

This chart gives the expected per-drive IOPS of the different drive types when servicing multi-threaded random workloads. Disk drives deliver optimal IOPS when the workload consists of:
- Small-block I/O (64KB or smaller)
- Multiple parallel workload threads, sending concurrent activity to all drives

When drives are combined with RAID protection, additional drive I/O is needed to service random writes from the host. To size for host IOPS, you must include the RAID overhead as described in the section Calculating disk IOPS by RAID type.

System drives (Bus 0 Disk 0 through Bus 0 Disk 3) have reduced performance expectations due to the management activities they support; rules of thumb for these drives are adjusted accordingly.

Note: The system drives cannot be included in storage pools in the VNX2.

RAID level

For best performance from the least number of drives, match the appropriate RAID level with the expected workload:

RAID level           | Expected workload
RAID 1/0             | Works best for heavy transactional workloads with high (greater than 30 percent) random writes, in a pool with primarily HDDs
RAID 5               | Works best for medium to high performance, general-purpose and sequential workloads
RAID 6 (for NL-SAS)  | Works best with read-biased workloads such as archiving and backup to disk. RAID 6 provides additional RAID protection to endure longer rebuild times of large drives

Calculating disk IOPS by RAID type

Front-end application workload is translated into a different back-end disk workload based on the RAID type in use.

For reads (no impact of RAID type):
- 1 application read I/O = 1 back-end read I/O

For random writes:
- RAID 1/0: 1 application write I/O = 2 back-end write I/O
- RAID 5: 1 application write I/O = 4 back-end disk I/O (2 read + 2 write)
- RAID 6: 1 application write I/O = 6 back-end disk I/O (3 read + 3 write)

Example for calculating disk IOPS from host IOPS: Host IOPS required = 3000, with a read to write ratio of 2 to 1, using RAID 5.
- 2 out of every 3 host I/Os is a read: Disk reads = 2*(3000/3) = 2000
- RAID 5 requires 4 disk I/Os for every host write, and 1 out of every 3 host I/Os is a write: Disk writes = 4*(1*(3000/3)) = 4000
- Total disk IOPS = 2000 + 4000 = 6000

If looking to support that required workload with 15K rpm SAS drives, one would simply divide the rule of thumb into the required back-end IOPS: 6000/180 = 33.3, so round up to 35 to align with a preferred drive count of the RAID 5 option.

Determining which LUN type to configure

The VNX2 storage system supports multiple types of LUNs to meet the demands of different workloads and support different features. In general, Thin Pool LUNs are required for Block space efficiency features. Pool LUNs in general (either Thin or Thick) are required for FAST VP tiering. Classic RAID Group LUNs do not provide support for advanced features.

In terms of performance:
- Thin LUNs provide good performance for most workloads
- Thick LUNs can provide higher performance than Thin, given the same platform and drive complement, by removing the CPU and IOPS load of certain features
- Classic RAID Group LUNs provide the most consistent performance levels, for environments where variance in performance cannot be tolerated

Use the following charts to determine which is appropriate for your environment.
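As a sketch, the same sizing arithmetic can be expressed in Python. The write multipliers and per-drive rule-of-thumb IOPS are the figures quoted in this guide; the function names and structure are illustrative:

```python
import math

# Back-end I/Os generated per host write, by RAID type (from this guide)
WRITE_MULTIPLIER = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

# Rule-of-thumb random IOPS per drive (from this guide)
DRIVE_ROT_IOPS = {"NL-SAS": 90, "SAS 10K": 150, "SAS 15K": 180,
                  "SAS Flash VP": 3500, "SAS Flash": 5000}

def backend_iops(host_iops, read_fraction, raid_type):
    """Translate host IOPS into back-end disk IOPS for a given RAID type."""
    reads = host_iops * read_fraction          # 1 host read = 1 disk read
    writes = host_iops * (1 - read_fraction)   # each write is amplified
    return reads + writes * WRITE_MULTIPLIER[raid_type]

def drives_needed(host_iops, read_fraction, raid_type, drive_type):
    """Minimum drive count to absorb the back-end IOPS at rule-of-thumb rates."""
    disk_iops = backend_iops(host_iops, read_fraction, raid_type)
    return math.ceil(disk_iops / DRIVE_ROT_IOPS[drive_type])

# The guide's example: 3000 host IOPS, 2:1 read/write ratio, RAID 5, SAS 15K
print(round(backend_iops(3000, 2/3, "RAID 5")))       # 6000
print(drives_needed(3000, 2/3, "RAID 5", "SAS 15K"))  # 34; round up to 35 for 4+1 alignment
```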

Creating LUNs for Block access

When creating a Block LUN, determine the desired feature set for this LUN from the chart below, and then create the appropriate LUN type. See the appropriate sections in this document for best practice recommendations for configuring the LUN type and features selected.

Creating LUNs for File access

VNX OE for File builds file systems using Block LUNs. When creating Block LUNs for File, determine the desired feature set from the chart below, and then create the appropriate LUN type. See the appropriate sections in this document for best practice recommendations for configuring the LUN type and features selected.

Storage pool considerations

Storage pool creation and expansion

Create multiple pools in order to:
- Separate workloads with different I/O profiles
  - Predominantly sequential workloads should be placed in dedicated pools or Classic RAID Groups
- Dedicate resources, when you have specific performance goals
- Vary pool parameters, such as Multicore FAST Cache enabled/disabled
- Minimize failure domains
  - Although unlikely, loss of a private RAID group in the pool compromises the total capacity of that pool; it may be desirable to create multiple smaller pools rather than configure the total capacity into a single pool

Storage pools have multiple RAID options per tier for preferred type and drive count. Consider the following rule of thumb for tier construction:
- Extreme performance flash tier: 4+1 RAID 5
- Performance SAS tier: 4+1 or 8+1 RAID 5
- Capacity NL-SAS tier: 6+2 or 8+2 RAID 6

Note: Classic RAID Group pools have different recommended preferred drive counts, as described in the section on Classic RAID Group creation.

- Use RAID 5 with a preferred drive count of 4+1 for the best performance versus capacity balance
  - Using 8+1 improves capacity utilization at the expense of reduced availability
- Use RAID 6 for the NL-SAS tier
  - Preferred drive counts of 6+2, 8+2, or 10+2 provide the best performance versus capacity balance
  - Using 14+2 provides the highest capacity utilization option for a pool, at the expense of slightly lower availability and performance
- Use RAID 1/0 when a high random write rate (> 30%) is expected with HDD
  - For best possible performance with RAID 1/0, use the largest available preferred drive count (i.e., 4+4 > 3+3 > 2+2, etc.)

Recommendations for creating and expanding storage pools:
- When creating a pool, it is best to specify a multiple of the preferred drive count for each tier you configure
  - For example, when using RAID 5 4+1, specify a drive count of 5, 10, 15, etc.
- It is best to maintain the same capacity and rotational speed of all drives within a single tier of a given pool

  - For example, AVOID mixing 600GB 10K SAS drives in the same pool with 300GB 15K SAS drives; instead, split them into 2 different pools
  - Within a given pool, use all of the same flash technology for the extreme performance tier
- When expanding pools, use a multiple of the preferred drive count already in use for the tier being expanded

Pool capacity considerations

It is recommended to leave approximately 10% free space in the storage pool, to accommodate data services.

Note: The pool can still be over-subscribed above 100% of actual capacity; the 10% referenced here refers to actual free capacity that is not used in the pool.

- FAST VP requires free space to perform efficient relocations; it attempts to keep 10% free per tier
- VNX Snapshots requires at least 5% free to buffer snapped writes
- Block Deduplication uses free space to buffer write-splits
- Maintaining a total of 10% free will meet the requirements of all features

Note: By default, the VNX2 will begin issuing alerts when more than 70% of available capacity has been subscribed.

AVOID over-subscribing pools which contain thin LUNs for VNX File. VNX File requires free space in the storage pool for normal functioning.

Storage tiers

The number of tiers required in a storage pool is influenced by the following:
- Performance requirements
- Capacity requirements
- Knowledge of the skew between active and inactive capacity

The capacity required for each tier depends on expectations for skew, which is the locality of active data within the total storage capacity. Best performance is achieved when the entire active dataset can be contained within the capacity of the Extreme Performance (flash) and Performance (SAS) tiers. As a starting point, consider capacity per tier of 10 percent flash, 20 percent SAS, and 70 percent NL-SAS. This works on the assumption that less than 30 percent of the used capacity will be active and infrequent relocations from the lowest tier will occur. If the active capacity is known, the capacity per tier should be sized accordingly. Best performance is achieved when the active capacity fits entirely in the top tier.

In summary, follow these guidelines:
- When Multicore FAST Cache is available, use a 2-tier pool comprised of SAS and NL-SAS and enable Multicore FAST Cache, as a cost-effective way of realizing flash performance without dedicating flash to this pool
  - A flash tier can be added later if Multicore FAST Cache is not fully capturing the active data

- For a 3-tier pool, start with 10 percent flash, 20 percent SAS, and 70 percent NL-SAS for capacity per tier if skew is not known
  - Tiers can be expanded after initial deployment to effect a change in the capacity distribution if needed
- For a 2-tier pool, combining flash and SAS is an effective way of providing consistently good performance. The SAS tier provides a buffer for active data not captured in the flash tier; the SAS tier still provides modest performance, as well as quicker promotion to flash when relocations occur
  - NL-SAS can be added later if capacity growth and aged data require it
- AVOID using a 2-tier pool of flash and NL-SAS if there is uncertainty about the active data fitting in the flash tier
- Add a flash tier to a pool with thin LUNs so that metadata is promoted to flash and overall performance is improved
  - Consider approximately 3GB of flash capacity to capture each TB of active thin LUN capacity
  - Thick LUNs employing VNX Snapshots, Compression, or Deduplication will convert to thin LUNs and therefore also require metadata consideration

Tiering policies

When creating LUNs in tiered pools:
- DON'T use auto-tier for LUNs with low-skew random workloads where the active dataset would not fit in the highest tier
  - This might cause excessive tier relocations that may not benefit the active data
- AVOID using "highest available" when the LUN capacity exceeds 90% of the highest tier capacity
  - This will affect the overall efficiency of the highest tier to service active data for LUNs running in auto-tier mode, and also remove the capability to hold metadata to help thin LUNs
- AVOID using "lowest available" with thin, deduplicated, or compressed LUNs
  - This will force the metadata to the lowest tier, which can negatively impact performance

Storage pool object considerations

Storage pool LUNs for Block access

Storage pool LUNs can be created as deduplicated thin, thin (virtually provisioned), or thick (fully allocated).

Deduplicated Thin LUNs for Block

When planning to use Block Deduplication, it is recommended to start with a Deduplicated Thin LUN.
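The starting-point tier split and the flash-for-metadata guideline above can be sketched as follows. The 10/20/70 split and the 3GB-per-TB figure come from this guide; the function names and structure are illustrative assumptions:

```python
# Illustrative sketch of the tier-sizing starting points described above:
# a 10/20/70 flash/SAS/NL-SAS split when skew is unknown, plus ~3 GB of
# flash per TB of active thin-LUN capacity for metadata.

def starting_tier_split(total_tb, flash_pct=10, sas_pct=20):
    """Starting-point capacity per tier (TB) when skew is not known."""
    flash = total_tb * flash_pct / 100
    sas = total_tb * sas_pct / 100
    return {"flash": flash, "sas": sas, "nl-sas": total_tb - flash - sas}

def flash_for_thin_metadata(active_thin_tb):
    """~3 GB of flash per TB of active thin LUN capacity, in GB."""
    return active_thin_tb * 3

print(starting_tier_split(100))     # {'flash': 10.0, 'sas': 20.0, 'nl-sas': 70.0}
print(flash_for_thin_metadata(40))  # 120
```

If the active capacity is known, size the flash and SAS tiers to hold it rather than using the default split, as the text above recommends.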

When creating Deduplicated Thin LUNs, assign the same SP owner to all deduplicated LUNs in a given pool.
- All Deduplicated Thin LUNs in a pool reside in a single Deduplication Container, which is managed by a single SP
- Match the LUN ownership to the pool's Optimal Deduplicated LUN SP Owner
- Balance SP utilization as follows when creating Deduplicated Thin LUNs:
  - In a single pool, assign deduplicated LUNs to one SP, and non-deduplicated LUNs to the other SP
  - With multiple pools, balance the Deduplication Containers across the SPs
    - For instance, Pool 1 has deduplicated LUNs on SPA, and Pool 2 has deduplicated LUNs on SPB

Thin LUNs for Block

- Thin LUNs are recommended when planning to implement Snapshots or Compression on Block LUNs
- Thin LUNs are recommended when storage efficiency requirements outweigh performance requirements
  - Thin LUNs maximize ease-of-use and capacity utilization, at some cost to maximum performance
  - This does not always mean you will get slower response time or fewer IOPS, but the potential of the drives and SPs to deliver IOPS to the host is less
- When using thin LUNs, adding a flash tier to the pool can improve performance
  - Thin LUN metadata can be promoted to the flash tier when FAST VP is enabled

Thick LUNs for Block

Thick LUNs (without Snapshots) are recommended for the highest level of pool-based performance. A thick LUN's performance can be better than the performance of a thin LUN.

Storage Pool LUNs for File access

General guidelines for storage pool LUNs for File

In general, when creating pool LUNs for use with File:
- Pre-provision space from the pool; create and assign LUNs to VNX OE for File, so that VNX OE for File has available space for file system creation and extension, snapshots, etc.
- Create approximately 1 LUN for every 4 drives in the storage pool
- Create LUNs in even multiples of 10
  - Number of LUNs = (number of drives in pool divided by 4), rounded up to the nearest multiple of 10
- Make all LUNs the same size

- Balance LUN ownership across SPA and SPB
- Apply the same tiering policies to all LUNs in the storage pool

File-based space efficiency features are generally recommended over Block LUN-based space efficiency features:
- If Virtual Provisioning is required for VNX OE for File, use a thin-enabled file system on classic or thick LUNs
- If compression is required for VNX OE for File, use VNX OE for File Deduplication and Compression
  - DON'T use compressed LUNs with VNX OE for File; Block Compression is not supported with VNX OE for File LUNs
- If snapshots or checkpoints are required for VNX OE for File, use SnapSure
  - DON'T create VNX Snapshots on LUNs used by VNX OE for File

Deduplicated Thin LUNs for File

If planning to implement Block Deduplication for VNX File, deduplicated thin LUNs are recommended. Block Deduplication must also be enabled on these LUNs from VNX File. When planning to use Block Deduplication for VNX File:
- Create deduplicated thin LUNs for use by VNX File
- Create all deduplicated thin LUNs in a given pool with the same SP owner
  - Match the LUN ownership to the pool's Optimal Deduplicated LUN SP Owner
- Balance SP ownership by assigning to the other SP any LUNs in this pool that will not be deduplicated
- Once the LUNs are visible to VNX File, create a VNX File user-defined storage pool for the LUNs that will be deduplicated; enable fixed_block_dedup on the user-defined storage pool
  - DON'T create a file system on the LUNs until fixed_block_dedup has been enabled from VNX File

Thin LUNs for File

Thin LUNs are required when planning to utilize File System and Checkpoint Storage Space Reclaim. Follow the General Guidelines when creating Thin LUNs for File.
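The LUN-count guideline from the General Guidelines above (roughly one LUN per four drives, rounded up to the nearest multiple of 10, all the same size) can be sketched as a small helper. This is a hypothetical illustration of the arithmetic, not an EMC tool:

```python
import math

def file_pool_lun_count(drive_count):
    """LUN count for a File storage pool: (drives / 4), rounded up
    to the nearest multiple of 10, per the guideline above."""
    return math.ceil((drive_count / 4) / 10) * 10

def file_pool_lun_size_gb(usable_pool_gb, drive_count):
    """Make all LUNs the same size: usable capacity split evenly."""
    return usable_pool_gb / file_pool_lun_count(drive_count)

print(file_pool_lun_count(40))   # 10
print(file_pool_lun_count(100))  # 30  (25 rounded up to a multiple of 10)
```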

Thick LUNs for File

Thick LUNs are generally recommended for use with File when performance needs outweigh Block space efficiency requirements. Follow the General Guidelines when creating Thick LUNs for File. In addition:
- Allow LUNs to complete the prepare process (thick LUN slice allocation) before adding them to the File storage group; use this command to display the status of the prepare operation: naviseccli lun -list -opDetails

Creating file systems from storage pool LUNs

When creating file systems from Block storage pool virtual LUNs, consider the following:
- Use Automatic Volume Management (AVM) to create file systems
- When creating volumes manually:
  - Stripe across 5 LUNs from the same pool
  - Use a stripe size of 262144 (256KB)
  - Balance SP ownership of LUNs in each stripe
- Create file systems with split log type
  - common log type is only recommended when replicating to or from other common log file systems
- Use thin enabled file systems for optimum capacity utilization

Storage pool LUN ownership considerations

AVOID changing the ownership of a pool LUN after initial creation. Use LUN migration to move LUNs to their peer SP if required to change ownership.

Classic RAID Group LUN considerations

Classic RAID Group LUNs generally provide the most consistent level of performance. Classic RAID Group LUNs are recommended for workloads where performance variation cannot be tolerated. Classic RAID Group LUNs also provide the highest performance with sequential workloads.

Drive location selection

When selecting the drives to use in a Classic RAID group, drives can be selected either from a DAE or DAEs on the same bus (horizontal positioning), or from multiple DAEs on different buses (vertical positioning). There are now almost no performance or availability advantages to using vertical positioning. Therefore:
- Use the default horizontal positioning method of drive selection when creating RAID groups
- If a single DAE does not have enough available drives to create the RAID group, selecting drives from a second DAE is acceptable.

The SAS buses are fast enough to support drives distributed across enclosures belonging to the same RAID group.

Classic RAID Group LUNs for Block access

Drive count

Classic RAID groups can have a maximum count of 16 drives. For parity RAID, a higher disk count offers higher capacity utilization, but with higher risk to availability.

For predominantly random workloads, use the drive type rule of thumb values and the RAID level IOPS calculations to determine the number of drives needed to meet the expected application workload.

Sequential optimization occurs when the array can perform full-stripe operations normally associated with large-block I/O. Certain drive counts are more likely to enable this behavior, though these optimizations can also occur with non-preferred drive counts. With a predominance of large-block sequential operations, the following applies:
- RAID 5 has a preference of 4+1 or 8+1
- RAID 6 has a preference of 8+2
- RAID 1/0 has a preference of 4+4

Note: Storage pools have different recommended preferred drive counts, as described in the section on Storage Pool Creation.

MetaLUNs or host striping

A Classic RAID Group can contain a maximum of 16 drives. If Classic RAID Group LUNs are preferred, but more than 16 drives are required for a single LUN, VNX MetaLUNs can be used, or host striping can be employed. If using host striping:
- Use a stripe element size equal to or a multiple of the RG full-stripe-width
- Stripe across LUNs from different SPs

Large element size

When creating a 4+1 RAID 5 Classic RAID group for Block access, you can select a 1024 block (512KB) element size.
- Use large element size when the predominant workload is large-block random read activity (such as data warehousing)
- AVOID using large element size with any other workloads
- The default element size of 128 blocks (64KB) is preferred for most workloads

Classic RAID Groups for File access

Drive count

When creating LUNs for VNX OE for File from Classic RAID groups, consider the following:
- Preferred RAID group sizes:
  - RAID 1/0: 1+1
  - RAID 5: 4+1, 8+1

  - RAID 6: 8+2, 4+2
- Create 1 LUN per RAID Group, up to the maximum LUN size of 16TB
  - Create multiple LUNs when necessary to keep the LUN size less than 16TB
- Balance SP ownership of LUNs from multiple RAID Groups

Creating file systems from Classic RAID Group LUNs

Guidelines for creating file systems from Classic RAID group LUNs:
- Use Automatic Volume Management (AVM) to create file systems
  - AVM creates well-designed file system layouts for most situations
  - Manual Volume Management can be used to create file systems with specific attributes for specialized workloads
- For metadata-heavy workloads, stripe across an odd number of disk volumes (dvols)
- Match the VNX OE for File stripe size to the Block LUN full-stripe width for best sequential write performance
- AVOID wide striping across a large number of dvols
- When creating striped volumes manually:
  - Stripe across LUNs from different RAID Groups
  - Use a stripe size of 262144 (256KB)
  - Balance SP ownership of selected dvols
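The random-workload drive-count calculation and the full-stripe-width matching described in this chapter can be sketched as below. The RAID write penalties (RAID 5 = 4, RAID 6 = 6, RAID 1/0 = 2) and the per-drive IOPS figure are common industry rules of thumb used here for illustration, not values taken from this document:

```python
import math

# Back-end I/Os generated per host write (common rules of thumb).
WRITE_PENALTY = {"raid5": 4, "raid6": 6, "raid10": 2}

def drives_for_random_iops(host_iops, read_fraction, raid, per_drive_iops):
    """Estimate the drive count needed to service a random workload,
    in the spirit of the RAID-level IOPS calculation mentioned above."""
    write_fraction = 1.0 - read_fraction
    backend = host_iops * (read_fraction + write_fraction * WRITE_PENALTY[raid])
    return math.ceil(backend / per_drive_iops)

def full_stripe_width_kb(data_drives, element_kb=64):
    """Full-stripe width = data drives x element size (64KB default)."""
    return data_drives * element_kb

# 5000 host IOPS, 70% read, RAID 5, assuming ~180 IOPS per 15K SAS drive
print(drives_for_random_iops(5000, 0.7, "raid5", 180))  # 53
print(full_stripe_width_kb(4))  # 256 -- matches the 256KB stripe size above
```

Note how a 4+1 RAID 5 group with the default 64KB element yields the 256KB (262144) stripe size recommended for manual volume creation.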

Chapter 3  Data Services

This chapter presents the following topics:

FAST VP
Multicore FAST Cache
Data @ Rest Encryption
Replication
Deduplication and compression
Anti-virus

FAST VP

General

FAST VP moves data between tiers in a pool based on the performance needs of the data. Construct the pool such that each tier will provide consistent performance:
- Use consistent drive technology for each tier within a single pool
  - Same flash drive technology and drive size for the extreme performance tier
  - Same SAS RPM and drive size for the performance tier
  - Same NL-SAS drive size for the capacity tier

Data relocation

Relocation is the process of moving pool data slices across tiers or within the same tier, to move hot data to higher performing drives or to balance underlying drive utilization. Relocation can occur as part of a FAST VP scheduled relocation, as an automated relocation after a storage pool expansion, or as a result of manually requesting relocation.
- Enable FAST VP on a pool, even if the pool only contains a single tier, to provide ongoing load balancing of data across available drives based on slice temperature and capacity utilization
- Schedule relocations for off-hours, so that relocation activity does not contend with the primary workload
- Schedule relocations to run before or during backup windows, so that the relocations are based on the primary workload activity
- Enable FAST VP on a pool before expanding the pool with additional drives; with FAST VP enabled, slices rebalance according to slice temperature and capacity utilization of pool resources

Pool capacity utilization

FAST VP requires unallocated space within the pool to accommodate data relocations. It is recommended to leave about 10% free space in storage pools with FAST VP enabled:
- Relocation will attempt to reclaim 10% free per tier
- Free space is used to optimize relocation operations
- Free space is used for new allocations to thin LUNs
- Free space is used to support Snapshot schedules
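The 10% free-space guideline above can be expressed as a trivial headroom check (a hypothetical helper, shown only to make the rule concrete):

```python
def fastvp_headroom_ok(free_gb, total_gb, target_free=0.10):
    """True if the pool keeps at least ~10% free space,
    per the FAST VP free-space guidance above."""
    return (free_gb / total_gb) >= target_free

print(fastvp_headroom_ok(12.0, 100.0))  # True  -- 12% free
print(fastvp_headroom_ok(5.0, 100.0))   # False -- only 5% free
```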

Considerations for VNX OE for File

By default, a VNX OE for File system-defined storage pool is created for every VNX OE for Block storage pool that contains LUNs available to VNX OE for File. (This is a mapped storage pool.)
- All LUNs in a given VNX OE for File storage pool should have the same FAST VP tiering policy
- Create a user-defined storage pool to separate VNX OE for File LUNs from the same Block storage pool that have different tiering policies
- When using FAST VP with VNX OE for File, use thin enabled file systems for increased benefit from FAST VP multi-tiering

Multicore FAST Cache

Multicore FAST Cache is best for small random I/O where data has skew. The higher the locality, the greater the benefits of using Multicore FAST Cache. Multicore FAST Cache also adapts quickly to changes in locality.

General considerations

EMC recommends first utilizing available flash drives for Multicore FAST Cache, which can globally benefit all LUNs in the storage system. Then supplement performance as needed with additional flash drives in storage pool tiers.

Preferred application workloads for Multicore FAST Cache:
- Small-block random I/O applications with high locality
- High frequency of access to the same data, not entirely serviced from system cache
- Systems where current performance is limited by HDD capability, not SP capability

AVOID enabling Multicore FAST Cache for pools that are not expected to benefit, such as when:
- The primary workload is sequential
- The primary workload is large-block I/O
- The primary workload is small-block sequential, like database logs, circular logs, or VNX OE for File SavVol (snapshot storage)

Creating FAST Cache

Choose drives from multiple buses when possible.

Enabling Multicore FAST Cache on a running system

When adding Multicore FAST Cache to a running system, it is recommended to enable Multicore FAST Cache on a few LUNs at a time, and then wait until the LUNs have reached steady state in Multicore FAST Cache before enabling more.
Note: For storage pools, Multicore FAST Cache is a pool-wide feature, so you have to enable/disable it at the pool level (for all objects in the pool).
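The incremental-rollout guidance in this section, including the 60/80 percent SP CPU-utilization thresholds, could be captured in a small decision helper (a hypothetical sketch of the rules, not an EMC utility):

```python
def fast_cache_rollout_advice(sp_cpu_pct):
    """Map current SP CPU utilization to the Multicore FAST Cache
    rollout guidance in this section."""
    if sp_cpu_pct < 60:
        return "enable a few LUNs or one pool at a time"
    if sp_cpu_pct <= 80:
        return "scale in carefully: one or two LUNs, or the smallest pool"
    return "don't activate Multicore FAST Cache"

print(fast_cache_rollout_advice(45))  # enable a few LUNs or one pool at a time
print(fast_cache_rollout_advice(85))  # don't activate Multicore FAST Cache
```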

Multicore FAST Cache can improve overall system performance if the current bottleneck is drive-related, but boosting the IOPS will result in greater CPU utilization on the SPs. On an existing system, check the SP CPU utilization of the system, and then proceed as follows:
- Less than 60 percent SP CPU utilization: enable a few LUNs or one pool at a time; let it reach steady state in Multicore FAST Cache, and ensure that SP CPU utilization is still acceptable before enabling Multicore FAST Cache on more LUNs/pools
- 60-80 percent SP CPU utilization: scale in carefully; enable Multicore FAST Cache on one or two LUNs, or one pool with the smallest capacity, and verify that SP CPU utilization does not go above 80 percent
- Greater than 80 percent SP CPU utilization: DON'T activate Multicore FAST Cache

Data @ Rest Encryption

Enable Data @ Rest Encryption (D@RE) before populating the storage system with host data, to avoid any performance impact from the data-in-place encryption process.
- Order the storage system with D@RE factory enabled
- For systems already in-house, enable D@RE before creating storage pools or LUNs
- NOTE: Multicore FAST Cache must be disabled before D@RE can be enabled, which might further impact performance

Replication

VNX Snapshots for Block LUNs

VNX Snapshots are used to take point-in-time checkpoints of pool LUNs.
- Start with thin LUNs if planning to use Snapshots
  - Thick LUNs are eventually converted to thin LUNs once a VNX Snapshot is created on them; all new writes and overwrites require thin
- Plan the deletion of snapshots
  - Whenever possible, schedule the deletion of Snapshots during non-peak hours of operation
  - If snapshots must be deleted during peak periods of array activity, lessen the impact by reducing the number of concurrent Snapshot deletes (for example, stagger the delete operations over several hours, instead of all at once)
- DON'T delete the last snapshot of a Thick LUN if you intend to create another snapshot immediately after deleting the last snapshot
  - Create the new snapshot before deleting the older snapshot

  - Deleting the last snapshot of a Thick LUN will undo the thin conversion, which would then be reconverted for the new snapshot

For additional technical information on VNX Snapshots, refer to EMC VNX Snapshots at EMC Online Support.

SnapView for Block LUNs

For SnapView:
- Use SAS drives for reserved LUN pool (RLP) configurations, with write cache enabled LUNs
- Match the secondary side RLP configuration to the primary side
- AVOID configuring RLP on the same drives as the primary and secondary LUNs, to avoid drive contention
- DON'T enable Multicore FAST Cache on RLP LUNs
  - RLP exhibits multiple small-block sequential streams that are not suited for Multicore FAST Cache
- DON'T enable Multicore FAST Cache on clone private LUNs (CPL)

SnapSure checkpoints for file systems

SnapSure Checkpoints are used to take point-in-time checkpoints of file systems. If using SnapSure to create user snapshots of the primary file system, follow this SnapSure sizing guidance:
- Size the disk layout for the primary file system (PFS) to include copy-on-first-write activity
  - Include one additional read I/O from the PFS, and one additional write I/O to the SavVol, for every host write I/O
- Size the layout for SavVol based on the expected user load to the snapshot file systems
- DON'T disable SmartSnap traversal

MirrorView for Block LUN replication

For MirrorView:
- AVOID enabling Multicore FAST Cache on MirrorView secondary LUNs
  - MV/S secondary LUNs replicate only writes from the source and are serviced well by SP cache
  - MV/A secondary LUNs replicate writes during updates and incur copy-on-first-write activity; this can incur additional Multicore FAST Cache promotions that do not lead to performance gain
- DON'T enable Multicore FAST Cache on the Write Intent Log (WIL)
- With MirrorView/A, follow the RLP guidelines in the SnapView section

RecoverPoint for Block LUN replication

For RecoverPoint:
- DON'T enable Multicore FAST Cache for RecoverPoint journal LUNs
  - Journals exhibit primarily large-block sequential activity that is not suited for Multicore FAST Cache use

IP Replicator for File System replication

If using VNX File IP Replicator:
- Size the disk layout for the primary file system to include SnapSure copy-on-first-write and replication transfer activity
- Place the secondary file system on the same drive type as the primary file system
- It is usually acceptable to place SavVol on NL-SAS for replication
  - If user snapshots are also enabled on the primary file system, then consider the user load from the snapshots to determine whether NL-SAS is still adequate for SavVol
- Use 1GbE links for Replicator interconnects when traversing WAN links
  - 10GbE is typically not necessary for replication across a WAN
- When using high-latency networks with Replicator, use a WAN accelerator to reduce latency

Deduplication and compression

Block LUN compression

Start with thin LUNs if planning to use Block compression; classic or thick LUNs convert to thin LUNs when compressed.

Manage the processes of Block Compression to minimize impact to other workloads:
- Pause or change the compression rate to Low at the system level when response-time-critical applications are running on the storage system
- The option exists to pause compression at the system level to avoid compression overhead during spikes in activity
- Pausing the compression feature ensures that background compression, or associated space reclamation operations, do not impact host I/O

Block LUN deduplication

Block deduplication works best in storage environments which include multiple copies of the same data that is read-only and remains unchanged.
- Evaluate the I/O profile of the workload and data content to determine if deduplication is an appropriate solution
- AVOID using Block Deduplication on data that does not have a large amount of duplicated data; the added metadata and code path can impact I/O response time and outweigh any advantage seen by deduplication
- In instances where portions of data on a single LUN are a good fit for Block Deduplication while other portions are not, consider separating the data onto different LUNs when possible
- AVOID deploying deduplication into a high-write-profile environment; this either causes a cycle of data splitting and re-deduplication, or data splitting that inflicts unnecessary overhead from the first deduplication pass on all future I/O accesses
  - Each write to a deduplicated 8KB block causes a block for the new data to be allocated, and updates to the pointers to occur; with a large write workload, this overhead can be substantial
- AVOID deploying deduplication into a predominantly bandwidth environment
- Use Multicore FAST Cache and/or FAST VP with Deduplication
  - This optimizes the disk access to the expanded set of metadata required for deduplication
  - Data blocks not previously considered for higher tier movement or promotion can now be hotter when deduplicated
- Start with Deduplication Enabled thin LUNs if preparing to use Block Deduplication, to avoid background LUN migration
- Use the default deduplication rate of Medium
  - Be aware that setting the Deduplication Rate to High can impact CPU utilization
  - Set the Deduplication Rate to Low during periods of heavy workload
- Use the Force Deduplication option sparingly
  - It is recommended only if a large amount of new or changed data is added to the LUNs
  - The normal deduplication process is preferred for all other scenarios

Deduplication and compression with VNX OE for File

Space efficiency for File is available via either VNX OE for File Deduplication and Compression, or Block Deduplication for VNX File.
Utilize VNX OE for File Deduplication and Compression when:
- Space savings must be available within the target file system
  - File Deduplication and Compression returns the freed blocks to the file system
- The target file system contains highly compressible data
  - File Deduplication and Compression uses compression, which is generally more effective with File data
- The target file system is sensitive to changes in performance

- File Deduplication and Compression can be utilized with file systems built on Classic RAID Group LUNs, or Thick or Thin storage pool LUNs

Utilize VNX Block Deduplication for VNX File when:
- Space savings from the target file system must be available to other Block LUNs in the Block storage pool
  - The deduplicated file system will not see the space savings, but other LUNs in the pool can use the space
- Multiple target file systems contain duplicate copies of data
  - Block Deduplication for VNX File can deduplicate data across multiple file systems if they are in the same Block storage pool

See the specific recommendations for each solution below.

VNX OE for File Deduplication and Compression

If using VNX OE for File deduplication and compression:
- Target deep compression at inactive files
- DON'T enable CIFS compression on busy Data Movers
  - CIFS compression occurs in-line with the host write and can delay response time if the compression activity is throttled

Block Deduplication for VNX File

If using Block Deduplication for VNX File:
- Plan to place all file systems with duplicate data in the same Block storage pool
- Create file systems with split log type
- Ensure that all LUNs from the storage pool that will be used by File are owned by a single SP
- After diskmarking the LUNs on VNX File, enable Fixed-Block Deduplication via the File storage pool properties
  - Note: All LUNs and file systems in the File storage pool will participate in Block Dedupe

Anti-virus

File system CAVA

If using the Common Anti-Virus Agent:
- Always consult with the antivirus vendor for their best practices
- Ensure CIFS is completely configured, tested, and working before setting up the virus checker
- Ensure the antivirus servers are strictly dedicated for CAVA use only
- Ensure that the number of CIFS threads used is greater than the number of virus checker threads

- Exclude real-time network scanning of compressed and archive files
- Set the file mask to include only the file types recommended for scanning by the antivirus provider
  - DON'T include *.*
- Disable virus scanning during migrations
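Two of the CAVA guidelines above (CIFS threads exceeding virus checker threads, and avoiding a catch-all *.* file mask) lend themselves to a trivial configuration sanity check. This is a hypothetical helper for illustration only:

```python
def cava_config_ok(cifs_threads, viruschecker_threads, file_mask):
    """Sanity-check two CAVA guidelines from this section:
    CIFS threads must exceed virus checker threads, and the
    scan file mask must not be the catch-all *.* pattern."""
    return cifs_threads > viruschecker_threads and file_mask != "*.*"

print(cava_config_ok(96, 20, "*.exe:*.doc"))  # True
print(cava_config_ok(96, 20, "*.*"))          # False -- catch-all mask
```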

Chapter 4  Application Specific Considerations

This chapter presents the following topics:

Block application tuning
File application tuning

Block application tuning

Host file system alignment

File system alignment is covered in detail in the Host Connectivity Guide documents on http://support.emc.com. In general:
- Windows Server 2008 and later automatically align
- Recent Linux operating systems automatically align
- When provisioning LUNs for older Windows and Linux operating systems that use a 63-block header, the host file system needs to be aligned manually

Alignment practices:
- Use host-based methods to align the file system
- EMC recommends aligning the file system with a 1MB offset

VMware ESX Server with iSCSI datastore

- Disable Delayed Ack for iSCSI storage adapters and targets
  - For further detail, see VMware Knowledge Base article 1002598: http://kb.vmware.com/kb/1002598
- Update ESX to avoid TCP Offload Engine Chimney issues
  - For further detail, see VMware Knowledge Base article 2099293: http://kb.vmware.com/kb/2099293

File application tuning

Hypervisor / Database over NFS or SMB

The recommendation is to use cached mount in 8.x. The VNX OE for File cached I/O path supports parallel writes, which is especially beneficial for Transactional NAS. Prior to VNX OE for File 8.x, file systems required the Direct Writes mount option to support parallel writes.
- DON'T use Direct Writes
  - Direct Writes mount option (GUI); uncached mount (CLI)
- VNX OE file systems use cached mount by default, which provides buffer cache benefits
  - Buffer cache ensures that files are 8KB aligned
  - Allows read cache hits from the Data Mover
- For VMware ESX Server, install the vSphere plug-in and use it to provision storage
  - Available from support.emc.com
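The 1MB-offset recommendation from the Host file system alignment subsection above can be verified with a quick check of a partition's starting sector (a hypothetical helper, assuming 512-byte sectors):

```python
def is_1mib_aligned(start_sector, sector_bytes=512):
    """True if a partition starting at start_sector sits on a 1MB
    boundary, per the alignment recommendation above."""
    return (start_sector * sector_bytes) % (1024 * 1024) == 0

print(is_1mib_aligned(2048))  # True  -- 1MB offset (modern OS default)
print(is_1mib_aligned(63))    # False -- legacy 63-block header
```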

Bandwidth-intensive applications

For bandwidth-sensitive applications over NFS:
- Increase the value of the param nfs v3xfersize to 262144 (256KB)
- Negotiate a 256KB NFS transfer size on the client with the mount options rsize=262144,wsize=262144
- Single Data Mover bandwidth has a default nominal maximum of 1600 MB/s (unidirectional); this is due to the 2x 8Gb FC connections from the Data Movers to the Storage Processors
- Scale VNX OE for File bandwidth by utilizing more active Data Movers (model permitting), or by increasing the number of FC connections per Data Mover from 2x to 4x
  - Increasing the FC connectivity to 4x ports requires utilizing the 2x AUX FC ports; as a result, FC-based NDMP backups are not allowed directly from the Data Mover. An RPQ is required for this change.
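The nominal Data Mover bandwidth figures above follow directly from the FC connectivity; a quick sanity check, assuming ~800 MB/s usable per 8Gb FC port (an assumption consistent with the 1600 MB/s figure for 2x ports quoted above):

```python
def data_mover_nominal_mb_s(fc_ports=2, mb_s_per_8gb_port=800):
    """Nominal unidirectional Data Mover bandwidth from its back-end
    8Gb FC port count (~800 MB/s per port is an assumption here)."""
    return fc_ports * mb_s_per_8gb_port

print(data_mover_nominal_mb_s())   # 1600 -- default 2x 8Gb FC
print(data_mover_nominal_mb_s(4))  # 3200 -- 4x ports (RPQ required)
```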

Conclusion

This best practices guide provides configuration and usage recommendations for VNX2 unified storage systems in general usage cases. For a detailed discussion of the reasoning or methodology behind these recommendations, or for additional guidance around more specific use cases, see the documents listed in the Related Documents section.