Microsoft Exchange 2013 on VMware

This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

VMware, Inc.
3401 Hillview Ave
Palo Alto, CA
Contents

1. Introduction
2. Design Concepts
   2.1 Data Gathering
   2.2 Building the Functional Design
   2.3 Defining Compute Requirements
   2.4 Application of Compute Requirements to the Virtual Platform
   2.5 Establishing Virtual Machine Sizing and Placement
   2.6 Sample Physical Layout
3. Sizing Examples
   3.1 Single Role Server Design (12,000 Users)
   3.2 Single Role Server Design with DAG for 24,000 Users
   3.3 Multirole and Multisite DAG Server Design (50,000 Users)
4. Summary
1. Introduction

Microsoft Exchange can be complex to deploy, and there are many design decisions to make to build a solid solution. Running Microsoft Exchange Server 2013 on VMware vSphere can positively impact design, deployment, availability, and operations, but what does such a solution look like? This document explores sample architecture designs that illustrate Exchange 2013 environments running on vSphere. The focus of this architecture is to provide a high-level overview of the solution components, with diagrams to help illustrate key concepts. For detailed best practices, see the Microsoft Exchange 2013 on VMware Best Practices Guide.

This design and sizing guide covers:
- Design concepts
- Data gathering
- Building the functional design
- Defining compute requirements
- Applying the compute requirements to the virtual platform
- Establishing virtual machine sizing and placement
- Sizing examples:
  - Single role server design (12,000 users)
  - Single role server design with DAG (24,000 users)
  - Multisite design with multirole servers and DAG (50,000 users)
- Design and deployment considerations

The examples show how these components contribute to the overall design and provide only a guideline. Customers should work with their infrastructure vendors to develop a detailed sizing and architecture plan designed for their requirements.

After describing some design concepts, this document looks at sizing examples of Exchange 2013 on vSphere using various design options, and explores options using standalone mailbox servers and database availability group (DAG) servers using scale-out and multirole deployment methods. This document provides examples to help understand components and concepts. Official sizing for Exchange environments varies based on business and technical requirements, as well as server and storage hardware platforms.
VMware recommends that you engage your server and storage vendors to help plan your design, or use one of the detailed, hardware-specific reference architectures found on the VMware Web site and in the Microsoft Exchange 2013 on VMware Partner Resource Catalog.
2. Design Concepts

One of the most common questions about the virtualization of Exchange Server is regarding design and sizing. There is often the misconception that designing Exchange for running on vSphere requires special tools, a different approach, or vast knowledge of virtualization. In fact, many of the successful Exchange virtualization projects that VMware has delivered have been based on existing Exchange designs that were originally created for a physical server deployment. The logical Exchange design is not impacted significantly by virtualization. Sizing, virtual machine placement, and how best to use features of vSphere ultimately drive what the Exchange topology looks like from a server count and distribution perspective.

Designing a new environment to support a virtualized Exchange environment follows the same basic process as for a non-virtualized deployment, with a few additional steps. At a high level, the process includes:
1. Data gathering
2. Building the functional design
3. Defining compute requirements
4. Application of the compute requirements to the virtual platform
5. Establishing virtual machine sizing and placement

The following sections look at what is involved during each of these phases.

2.1 Data Gathering

Much of the input for the Exchange design comes from the prerequisite data collected in this phase. This includes the following topics:
- Understanding business and technical requirements
- Evaluating the current workload
- Evaluating the health of the surrounding infrastructure
- Understanding support and licensing considerations

The data acquired from these prerequisites drives the functional design and helps to achieve the virtualization design.

2.1.1 Understand Business and Technical Requirements

A clear understanding of the business requirements for Exchange helps to drive much of the design. During this stage, questions about uptime requirements, growth expectations, feature support, security, and regulatory compliance requirements are answered.
Many of these requirements are then mapped to specific features that can be provided either by Exchange itself or in combination with vSphere. For example, in the case of security, an organization might require that the system be isolated from other applications within the datacenter. VMware vCloud Networking and Security can help to achieve application isolation.

It should be noted that in some organizations virtualization takes priority in design consideration. This means that the application design must conform to what the virtualized infrastructure can provide. For example, VMware vSphere High Availability (HA) should be used as the primary method of high availability instead of an application-specific clustering solution. This falls into the business and technical requirements discussion.
2.1.2 Evaluate the Current Workload

In most environments, with the exception of very new organizations, an established Exchange or other messaging environment is used to evaluate the workload characteristics of users. Microsoft bases guidance for Exchange server sizing requirements on the activity of users. This includes how many messages are sent and received per day, whether or not additional client types are used (such as mobile devices and archiving systems), and average message size. Much of this data can be collected using native tools such as Microsoft Perfmon, by gathering and parsing mail log files, or by using third-party tools. Regardless of the method used, an understanding of the type of load the user base will put on the proposed environment is an absolute requirement for sizing Exchange properly.

There are scenarios where user characteristics are not known or cannot be evaluated. This is the case in a new environment, such as a new company. For Exchange there is good data as to what performance characteristics will be like depending on the number of messages sent and received per day. A good starting point for most environments, even those with established workload characteristics, has been the 150 messages sent and received per day user profile. Although not very scientific, it provides a safe starting point for most organizations, especially new ones. Because vSphere is a very flexible platform, you can scale up or out, or even down, as needed.

2.1.3 Evaluate the Health of the Surrounding Infrastructure

Exchange is highly dependent on services provided by Active Directory, DNS, the network infrastructure, and the storage area network (SAN), assuming that storage is based on SAN technology. Although everyday user activity, such as authentication and name resolution, might appear to function as intended, the introduction of an application such as Exchange can make deficiencies in the infrastructure much more apparent.
A thorough health check of the infrastructure should be performed before any Exchange software is installed, because even the installation of Exchange is dependent on Active Directory being completely functional. A faulty Active Directory can cause the installation of Exchange to fail and lead to support calls followed by hours of manual cleanup.

2.1.4 Understand Support and Licensing Considerations

Although support and licensing is not an area of much concern for Exchange 2013, it is important to be familiar with this topic. Exchange 2013 is fully supported on vSphere as a result of the Windows Server Virtualization Validation Program. However, there are certain caveats to support, such as the level of CPU overcommitment supported for production environments and the use of network-attached storage (NAS). Licensing is typically a straightforward conversation with Exchange because of the continued use of a client/server licensing model. Other applications, such as Microsoft SQL Server, are not as simple. It is important to understand the licensing implications that might affect design decisions, such as scaling out versus scaling up. Refer to the Microsoft Exchange 2013 on VMware Support and Licensing Guide for more information.

2.2 Building the Functional Design

Knowledge of Exchange 2013 architecture is required during this phase of the design. At the most basic level you can build an Exchange design and, in most cases, translate that directly to virtual machines and have a functional Exchange environment. However, knowledge of vSphere, its configuration options (such as the number of vCPUs and disk targets supported per virtual machine), and design best practices allows an Exchange architect to make the best decisions for a virtualized Exchange environment.

A successful Exchange virtualization project should begin with all areas of the infrastructure represented in the conversation. This includes Exchange, vSphere, storage, networking, and any other areas for consideration, such as facilities.
During the design discussions, each functional area is discussed, and input from the various technical representatives is collected for further consideration.
The design requirements that result in the functional design comprise both hard and soft values. Items such as the number of mailboxes, user profile, the number of datacenters, and the tiers of mailboxes to support are hard values. These values have no additional options for consideration. Design requirements such as uptime, database size, and hardware specifications must be discussed further to determine the best option to meet the needs of the organization. Before some of these decisions are made, the organization might require further testing to validate the solution. In most cases the expertise of the architecture team should be able to speak to each option and its capabilities.

When completed, the functional design should include at least the following:
- High availability method: vSphere High Availability or Exchange DAG.
- Site resiliency method: none, VMware vCenter Site Recovery Manager, or Exchange DAG.
- Dedicated or multirole servers.
- Database sizing.
- Data protection: Exchange Native Data Protection, Exchange-aware backup, or VMware vSphere Data Protection Advanced.
- Estimated growth over how many years.
- Hardware options: preferred server vendor and deployment options, such as blade versus rack mount.
- Mailbox tiers: number of mailboxes, mailbox size or sizes, average message size, archive limit.
- Client connectivity: VMware vCloud Networking and Security Edge, hardware or software load balancer, Windows Network Load Balancing (NLB), or DNS round robin.

2.3 Defining Compute Requirements

With the functional design complete, the basics for understanding the physical compute requirements are established. To begin the process of defining the compute requirements, there must be an understanding of what is involved in this process. The official sizing guidance for Exchange 2013 was not available from Microsoft at the time of this writing. However, with the consolidation of server roles in Exchange 2013, the Mailbox server role has become much like the Exchange 2010 multirole client access, hub transport, and mailbox server.
This section reviews the process for defining compute requirements. During the discussion, examples are provided to consistently illustrate the main points.

Note: Although the values used in these examples are specific to Exchange 2010, the methodology remains the same. As Microsoft provides updated guidance for Exchange 2013, replace the following values with updated values, if necessary.
Example

The following examples look at the basic sizing of an Exchange 2013 environment. These values are used to determine the compute requirements, sizing, and placement of virtual machines for this environment.
- Total mailboxes: 24,000
- Average mailbox quota: 2048MB
- Average daily send/receive: 150 messages
- Average message size: 75KB
- High availability: database availability group, vSphere HA
- Database copies: 2
- Sites: one site
- Processor architecture: eight-core processor with a SPECint2006 rating of 41 per core

2.3.1 Processor Core Requirements

CPU requirements for Exchange mailbox servers are represented in megacycles. A megacycle is a unit of measurement used to represent the capacity of a processor core. The performance delivered by a processor core is defined by the clock speed of the processor core. For example, a 3.33GHz processor core provides 3,333 megacycles. This is the baseline used by Microsoft to provide guidance for the megacycle requirement of a mailbox profile. The following table provides the megacycle estimates for various mailbox profiles for Exchange 2010. Until further guidance for Exchange 2013 is provided by Microsoft, these numbers should continue to be used as a starting point.

Table 1. Megacycles per Mailbox

  Messages Sent or Received   Megacycles for Active or   Megacycles for
  per Mailbox per Day         Standalone Mailbox         Passive Mailbox
  50                          1.0                        0.15
  100                         2.0                        0.30
  150                         3.0                        0.45
  200                         4.0                        0.60
  250                         5.0                        0.75
  300                         6.0                        0.90
  350                         7.0                        1.05
  400                         8.0                        1.20
  450                         9.0                        1.35
  500                         10.0                       1.50
With the advancement of processor technology, simply using the megacycles provided by a processor core is no longer adequate. Many newer processor cores operate at a lower clock speed than the baseline used by Microsoft but provide higher throughput. As a result, a megacycle adjustment is required to determine the actual capabilities of a processor core. To make this adjustment, processor throughput ratings from the Standard Performance Evaluation Corporation (SPEC) are used to determine the difference between the baseline per-core value and the per-core value of a newer processor. SPECint2006 Rate results for processors are found on the SPEC website using the search feature (http://www.spec.org/cgi-bin/osgresults?conf=rint2006).

As an example, the baseline processor used by Microsoft, the Intel Xeon X5470 (3.33GHz), has a SPECint2006 rating of 18.75 per core. The Intel Xeon E5-2670 (2.60GHz) eight-core processor has a SPECint2006 rating of 41 per core. This is roughly a 218% improvement. To calculate the adjusted performance per core of the new processor, use the following formula:

((new per-core value) * (baseline hertz per core)) / (baseline per-core value) = adjusted megacycles per core

Using this example: (41 * 3333) / 18.75 = 7,288 adjusted megacycles per core

This value is used to determine how many mailboxes can be supported on a given processor core. Exchange workloads should maintain a one-to-one physical processor core to virtual CPU ratio. This allows for a true representation of adjusted megacycle capabilities when designing and deploying Exchange on vSphere.

2.3.1.1 Calculating the Megacycle Requirement for Standalone Mailbox Servers

The process for calculating the megacycle requirement for a mailbox server can take two forms depending on whether a DAG is used. Standalone mailbox servers (servers not in a DAG) must provide resources only for the mailboxes that they are going to support during normal runtime.
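The megacycle adjustment described above can be sketched in a few lines of Python. This is an illustrative sketch, not an official tool; the function name is ours, and the baseline values are the Exchange 2010 reference figures quoted in the text.

```python
# Sketch of the megacycle adjustment formula (illustrative only).
# Baseline: Microsoft's Exchange 2010 reference core, the Intel Xeon
# X5470 at 3.33 GHz: 3,333 megacycles and a SPECint2006 rating of 18.75.

BASELINE_MHZ_PER_CORE = 3333        # 3.33 GHz baseline core
BASELINE_SPEC_PER_CORE = 18.75      # SPECint2006 rating of the baseline core

def adjusted_megacycles_per_core(new_spec_per_core: float) -> float:
    """((new per-core value) * (baseline hertz per core)) / (baseline per-core value)."""
    return new_spec_per_core * BASELINE_MHZ_PER_CORE / BASELINE_SPEC_PER_CORE

# The example from the text: a newer core rated at 41 SPECint2006 per core.
print(round(adjusted_megacycles_per_core(41)))  # 7288 adjusted megacycles
```

This reproduces the 7,288 adjusted megacycles per core used throughout the remaining examples.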
In other words, if an average mailbox consumes 3 megacycles, and the mailbox server must support 2,000 mailboxes, the mailbox server must be able to deliver 6,000 megacycles of processor capacity. To provide for the occasional spike in utilization, Microsoft typically recommends establishing a maximum utilization threshold. In Exchange 2010 the threshold for a standalone mailbox server with all roles installed was 35%. This can be used as a baseline for Exchange 2013.

The following summarizes the process to determine the megacycle requirement for a standalone mailbox server supporting 2,000 users at 3 megacycles per user:
1. Determine the total mailbox megacycle requirement: 2,000 mailboxes * 3 megacycles/user = 6,000 megacycles.
2. Adjust megacycles for 35% peak utilization: 6,000 megacycles / 0.35 = 17,143 total megacycles required.

Using processor cores from the previous example, which support 7,288 adjusted megacycles per core, a mailbox server with two cores utilizes approximately 40% of its CPU capacity. This is more than the recommended threshold of 35%, but given the option of overprovisioning the virtual machine by adding an additional core or two, this is an acceptable configuration with plenty of capacity for spikes.

2.3.1.2 Calculating the Megacycle Requirement for DAG Member Servers

DAG member servers require a more in-depth megacycle calculation process because of variables such as the maximum number of active mailboxes per server, the number of passive mailboxes per server, and the number of database copy instances. In Exchange 2010 the threshold for a DAG member server with all roles installed was 40%. This can be used as a baseline for Exchange 2013.

The following summarizes the process to determine the megacycle requirements for a DAG member server in a four-node DAG supporting 16,000 users at 3 megacycles per user. Two copies per database
are used in this example, allowing for one DAG member server failure. This example uses the same processor cores as in the preceding section.
1. Determine the maximum active mailboxes per DAG member server after a single server failure: 16,000 total mailboxes / (4 DAG members - 1) = 5,334 maximum users per DAG member server after a single server failure.
2. Determine the active mailbox megacycle requirements: 5,334 maximum users per DAG member server * 3 megacycles per active user = 16,002 megacycles.
3. Add 10% per additional database copy (in this example there is one additional copy): 16,002 megacycles * 1.1 = 17,602 megacycles to support active mailboxes per DAG member server.
4. Determine the number of passive mailboxes per DAG member server after a single server failure: (4,000 active mailboxes during normal runtime * 2 database copies) - (5,334 maximum active mailboxes) = 2,666 passive mailboxes.
5. Determine the passive mailbox megacycle requirements: 2,666 passive mailboxes * 0.45 megacycles/passive mailbox = 1,200 megacycles.

Note: The passive mailbox megacycle requirement is about 15% of the active mailbox requirement. See Table 1 for a complete listing.

6. Add active and passive mailbox megacycle requirements to determine total megacycle requirements per DAG member server: 17,602 active mailbox megacycles + 1,200 passive mailbox megacycles = 18,802 megacycles.
7. Adjust megacycles for 40% peak utilization: 18,802 total megacycles / 0.4 = 47,005 megacycles required per DAG member server.
8. Determine the number of processor cores required: 47,005 total megacycles / 7,288 adjusted megacycles per core = 6.4 processor cores.

Using processor cores from the previous example, which support 7,288 adjusted megacycles per core, a mailbox server with six cores utilizes approximately 43% of its CPU capacity.
This is over the recommended threshold of 40%, but given the option of overprovisioning to seven cores, this is an acceptable solution.

2.3.1.3 Processor Sizing Considerations

The preceding example shows an established starting point for the number of mailbox server virtual machines to begin the sizing exercise. The megacycle calculation generates the megacycle requirement based on the number of virtual machines. This provides an idea of exactly how much CPU performance is required. Different virtual machine counts yield different results and should always be considered to find the best balance between the supported number of mailboxes per instance, virtual machine size, and number of virtual machines.

When sizing virtual machines from a CPU perspective, consider the following:
- vSphere virtual machines support up to 64 virtual CPUs.
- Microsoft typically recommends a minimum and maximum supported number of CPU cores. For Exchange 2010 the maximum was 12 cores for single-role mailbox servers and 24 cores for multirole servers.
- Microsoft recommends using a hypervisor overhead of 10% when calculating CPU requirements. VMware has seen hypervisor overhead as low as 2% for Exchange workloads on the latest vSphere hypervisor. In most cases, using the latest vSphere version and processors helps to mitigate any hypervisor overhead.
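The DAG member calculation walked through above can be condensed into a short sketch. This is an illustration under stated assumptions, not an official sizing tool: the function name is ours, and the per-mailbox megacycle values and thresholds are the Exchange 2010 baselines carried forward in this guide.

```python
# Sketch of the DAG member megacycle calculation (illustrative only).
# Assumes the Exchange 2010 baselines: 3.0 active / 0.45 passive megacycles
# per 150-message mailbox, 40% peak utilization, and a +10% cost per
# additional database copy.
import math

def dag_member_cores(total_mailboxes, dag_members, copies,
                     active_mc=3.0, passive_mc=0.45,
                     peak=0.40, mc_per_core=7288):
    normal = total_mailboxes // dag_members                      # per node, normal runtime
    max_active = math.ceil(total_mailboxes / (dag_members - 1))  # after one node fails
    active = max_active * active_mc                              # active megacycles
    active *= 1 + 0.10 * (copies - 1)                            # +10% per extra copy
    passive = (normal * copies - max_active) * passive_mc        # passive megacycles
    required = (active + passive) / peak                         # headroom to 40% peak
    return required / mc_per_core                                # cores needed

# Four-node DAG, 16,000 users (the example above): ~6.4 cores.
print(round(dag_member_cores(16_000, 4, 2), 1))
```

The same function reproduces the 24,000-user example in the next section: `dag_member_cores(24_000, 4, 2)` yields about 9.7, which rounds up to the 10-core DAG member virtual machines used there.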
Example

This example shows support for 24,000 users, protected by a DAG and with a mailbox profile of 150 messages sent/received per day. Because the DAG supports two mailbox database copies, you must begin with a multiple of two for the number of DAG member servers. Two servers is the minimum, but you can scale out from there to accommodate any deployment scenario. This example assumes four DAG nodes. Adjustments can be made later, if desired. To calculate the megacycle requirements, perform the following:
- 24,000 mailboxes / 4 DAG members = 6,000 mailboxes per DAG member server during normal operations.
- 24,000 mailboxes / (4 DAG members - 1) = 8,000 maximum mailboxes per DAG member server.
- 8,000 maximum mailboxes * 3 megacycles per active mailbox = 24,000 megacycles required.
- 24,000 * 1.1 to account for the additional database copy = 26,400 megacycles.
- (6,000 mailboxes during normal operations * 2 database copies) - (8,000 maximum mailboxes per DAG member server) = 4,000 passive mailboxes.
- 4,000 passive mailboxes * 0.45 megacycles = 1,800 passive mailbox megacycles.
- 26,400 active mailbox megacycles + 1,800 passive mailbox megacycles = 28,200 total megacycles.
- 28,200 total megacycles / 0.40 maximum CPU utilization during failover = 70,500 megacycles required per DAG member server.
- Each proposed processor core has a SPECint2006 rating of 41 and provides 7,288 adjusted megacycles.
- 70,500 megacycles / 7,288 megacycles per core = 10 cores per DAG member server.

At 10 cores, or virtual CPUs, each DAG member server is approximately 40% utilized after a single DAG member server failure. The number of DAG member servers can be scaled out even further if smaller virtual machines are desired.

2.3.2 Memory Requirements

Proper sizing of memory resources is much less complicated than processor sizing. The amount of memory assigned to an Exchange 2013 mailbox server depends on the maximum active user count to be supported on the mailbox server and the profile of those mailboxes. This provides the database cache for user data.
Additional memory must be provided to support the operating system and other applications. Exchange 2013 does have minimum memory support requirements. The following table shows these minimums.

Table 2. Minimum Memory Requirements

  Exchange 2013 Server Role            Minimum Supported
  Client Access                        4GB
  Mailbox                              8GB
  Client Access and Mailbox combined   8GB
The first step in planning for mailbox server memory is to determine the amount of required database cache by multiplying the mailbox count by the memory requirements based on the user profile. For example, supporting 4,000 users sending/receiving 150 messages per day requires 36GB of database cache, using the Exchange 2010 recommendation of 9MB of database cache per mailbox (4,000 * 9MB = 36GB). The following table shows the recommended per-mailbox database cache used to size Exchange 2010 mailbox servers. Initial sizing of Exchange 2013 environments can continue to use these numbers until official guidance from Microsoft is released.

Table 3. Per Mailbox Database Cache

  Messages Sent or Received   Database Cache per Mailbox
  per Mailbox per Day         in Megabytes (MB)
  50                          3
  100                         6
  150                         9
  200                         12
  250                         15
  300                         18
  350                         21
  400                         24
  450                         27
  500                         30

The next step is to determine the amount of required physical memory by determining which server configuration provides enough database cache, as well as additional memory for the operating system and applications. Microsoft has provided examples of common memory configurations and how much database cache would be provided with each configuration. Current guidance is specific to Exchange 2010; however, based on the architecture changes in Exchange 2013, sizing guidance for Exchange 2010 multirole mailbox servers can be used as a starting point for Exchange 2013 mailbox servers.

The preceding example shows that 4,000 users sending and receiving 150 messages per day require 36GB of database cache. Based on the following table, a mailbox server with 64GB of physical RAM provides 44GB of database cache. Therefore, 64GB of physical RAM is the ideal memory configuration based on this mailbox count and user profile.
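The memory lookup described above (required database cache mapped to the smallest sufficient server memory size) can be sketched as follows. The function name is ours; the (memory, cache) pairs are the Exchange 2010 values reproduced in Table 4, and the 9MB-per-mailbox figure assumes the 150-message profile.

```python
# Sketch of the database-cache-to-physical-memory lookup (illustrative only).
CACHE_PER_MAILBOX_MB = 9          # 150 messages sent/received per day profile

# (server physical memory GB, database cache provided GB) - Table 4 values
MEMORY_TABLE = [(8, 2), (16, 8), (24, 14), (32, 20),
                (48, 32), (64, 44), (96, 68), (128, 92)]

def physical_memory_gb(mailboxes: int) -> int:
    """Smallest server memory size whose database cache covers the mailbox count."""
    needed_gb = mailboxes * CACHE_PER_MAILBOX_MB / 1024
    for ram, cache in MEMORY_TABLE:
        if cache >= needed_gb:
            return ram
    raise ValueError("mailbox count exceeds the table's range")

print(physical_memory_gb(4000))   # 4,000 mailboxes -> 64 (44GB cache suffices)
```

The same lookup gives 128GB for the 8,000-mailbox DAG failover case worked in the example that follows Table 4.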
Table 4. Determining Total Memory

  Server Physical Memory   Database Cache Provided
  8GB                      2GB
  16GB                     8GB
  24GB                     14GB
  32GB                     20GB
  48GB                     32GB
  64GB                     44GB
  96GB                     68GB
  128GB                    92GB

Example

In this example each DAG member server supports a maximum of 8,000 mailboxes after a single server failure. To calculate the minimum recommended database cache per DAG member server, perform the following:

8,000 maximum active mailboxes per DAG member server * 9MB of database cache per user = 72GB

72GB of database cache is required to support 8,000 active mailboxes, and additional memory is required to support the operating system and applications. According to Table 4, to provide 72GB of database cache, each DAG member server should be allocated 128GB of memory. Each DAG member server virtual machine is created with 128GB of memory allocated.

2.3.3 Network Requirements

Exchange virtual machines configured with the Client Access or Mailbox server role, or both, and not participating in a DAG, typically require no more than a single virtual network adapter. When deployed in a DAG, a virtual machine can be configured with a single network adapter, but the recommended configuration for DAG nodes is to provide one network adapter for client communication and a separate adapter for DAG replication. Within a virtual machine this means configuring two virtual network adapters and connecting those adapters to port groups or virtual switches dedicated to each type of traffic. At the VMware ESXi host level, a minimum of two physical network adapters should be teamed for redundancy and configured based on VMware best practices. Separate VLANs are recommended to separate vSphere management traffic, as well as client and DAG replication traffic, for Exchange virtual machines. Refer to the Microsoft Exchange 2013 on VMware Best Practices Guide for more information.
2.3.4 Storage Requirements

Planning storage configurations for the Mailbox server role requires knowledge of the existing user profile. Microsoft has defined user profiles by average messages sent and received per day per user. This enables more accurate planning when migrating from systems other than Microsoft Exchange. The user profile has a direct impact on overall I/O requirements, and knowing these requirements can help you and your storage vendors to design an optimal storage solution. In addition to the average mail sent and received, mobile devices, archiving solutions, and antivirus programs should be taken into consideration as contributors to overall I/O.

Exchange 2013 continues the reduction in I/O, making more storage options available. Some of the new features in Exchange 2013 include support for multiple databases per disk and automatic reseed of databases with automated disk recovery. These new features are geared toward environments deploying on the larger, more failure-prone disk drives with no RAID-level storage redundancy. The typical deployment scenario when using Just a Bunch of Disks (JBOD) includes managing three or more copies per database because of the likelihood of failure and the requirement to reseed in the case of a single disk failure. This in turn increases management overhead.

Using local storage for virtual machines is supported with vSphere; however, many customers continue to deploy on shared storage. When used with a shared-storage architecture, vSphere provides access to all advanced features, such as vSphere HA, VMware vSphere Distributed Resource Scheduler (DRS), and VMware vSphere vMotion. Using data protection mechanisms provided by most storage arrays allows for minimal database copy maintenance; most environments deployed on shared storage deploy a maximum of two database copies for high availability. If a disk failure does occur, the data stored on the volume is not lost, and no reseed is required, assuming storage vendor best practices are followed.
Sizing storage for a virtualized Exchange environment is the same as sizing for a physical environment regarding I/O requirements. There are a few vSphere-specific items to consider when designing storage for the Mailbox server role:
- ESXi hosts can have up to 255 individual storage LUNs mapped to them. This should be considered a vSphere cluster maximum, because the best practice is that all hosts in a vSphere cluster are mapped to the same storage. If more than 255 storage LUNs must be presented to all of your Exchange virtual machines, consider creating more, smaller vSphere clusters, consolidating virtual disks on larger VMware vSphere VMFS volumes, using larger raw device mappings, or using in-guest attached iSCSI.
- When using VMFS datastores for Exchange data, be aware that the default configuration of an ESXi host limits the open virtual disk capacity to 8TB. For more information, see the VMware knowledge base article "ESXi/ESX host reports VMFS heap warnings when hosting virtual machines that collectively use 4 TB or 20 TB of virtual disk storage" (http://kb.vmware.com/kb/). This limit does not apply to raw device mappings or storage mapped using in-guest attached iSCSI.
- Up to 60 storage targets can be configured per virtual machine.
- Virtual machine disk format (VMDK) disks can be created up to 2TB. For larger volumes, physical-mode raw device mappings can be used up to 64TB, and in-guest attached iSCSI can be used up to the guest operating system limit.

Microsoft has stated that storage sizing for Exchange 2013 is very similar to that of Exchange 2010. To assist in planning the storage design of the Mailbox server role, customers should continue to use the Exchange 2010 Mailbox Server Role Requirements Calculator (http://blogs.technet.com/b/exchange/archive/2009/11/09/) until an Exchange 2013 equivalent is released. VMware recommends that Exchange architects follow Microsoft best practices along with the storage vendor's best practices to achieve an optimal storage configuration for Exchange Server 2013.
2.3.5 Exchange Mailbox Server Role Requirements Calculator

Although most Exchange architects understand the process for identifying Exchange compute requirements, some may not perform the manual steps outlined in the preceding sections. The Exchange Mailbox Server Role Requirements Calculator has taken all of the processes discussed here and incorporated best practices, storage guidance, and more into a single Excel workbook. Using the calculator is the recommended method for sizing Exchange, even for virtual deployments. At the time of this writing the calculator is only available for Exchange 2010; however, Microsoft has stated that sizing for the Exchange 2013 Mailbox server role will be similar to sizing for multirole Exchange 2010 servers (Mailbox, Client Access, and Hub Transport server roles in a single Exchange instance). For more information on the Exchange Mailbox Server Role Requirements Calculator, go to the Exchange team Web site.

Example

There are many options for Exchange storage architecture. The Exchange Mailbox Server Role Requirements Calculator takes input, much of which is presented in these examples, and provides a database layout recommendation and storage allocation scheme. Using this example, the calculator presents the following option:
- 18 databases per server, active and passive.
- 1.8TB maximum database size.
- 88GB maximum log size.
- 18 volumes are created to house both database and logs on the same volume.
- 2.5TB database + log volume size.
- 2 volumes are created for operating system and application storage.

To accommodate this amount of storage (approximately 46TB per mailbox virtual machine), raw device mappings are used. This also allows the use of volumes greater than 2TB.

Note: Exchange 2013 supports a maximum of 50 mounted databases. Consider this when using the Exchange 2010 calculator to obtain early adopter sizing guidance.

2.4 Application of Compute Requirements to the Virtual Platform

With compute requirements for all Exchange virtual machines established, the data is then converted into a set of physical requirements.
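The calculator output above can be sanity-checked with quick arithmetic. This is a sketch only: the size of the two operating system/application volumes is an assumption (the text does not state it), chosen here so the sum lands near the approximate 46TB figure quoted.

```python
# Rough check of the calculator's storage allocation (illustrative only).
db_log_volumes = 18
db_log_volume_tb = 2.5       # database + log volume size from the calculator
os_app_volumes = 2
os_app_volume_tb = 0.5       # assumed size, not stated in the text

total_tb = db_log_volumes * db_log_volume_tb + os_app_volumes * os_app_volume_tb
print(total_tb)              # ~46 TB per mailbox virtual machine
```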
For Exchange and other business-critical applications this is a fairly trivial exercise: VMware recommends a one-to-one physical-to-virtual ratio when allocating compute resources. For Exchange, this is a very important point to communicate. The expectation is often that virtualization is a way of consuming more resources than are available in a physical server. Although this might appear to be the case because of the ability to overcommit virtual CPUs and memory, what is actually happening in the hypervisor is an advanced sharing algorithm that allows each virtual machine to believe that it has dedicated resources. As the ratio of virtual CPUs to physical CPU cores grows, the hypervisor must schedule more requests across the finite physical resources. This can result in higher wait times for tasks that are ready to be scheduled. For the majority of workloads, which are not very intensive, this is not a problem. Exchange is a resource-intensive workload and, when sized correctly, consumes the CPU and memory provided very efficiently (to a certain degree). How Exchange uses its compute resources leaves little room for any added latency due to overcommitment of physical compute resources. This does not mean that resources cannot be overcommitted for an ESXi host running Exchange workloads, but there should
be an established baseline before any overcommitment is introduced into the environment, so that if any additional latency is observed, there is a baseline for comparison.

The compute requirements that have been established with the functional design are aggregated and mapped to physical CPU cores and memory. This provides the total amount of physical compute resources required across the vSphere cluster. In some cases, especially in smaller environments, the compute requirements might be small enough to come from one or two physical servers. The next section looks at why it is important to understand sizing and placement in addition to the compute requirements. Although the minimum compute requirements might fit into a small number of physical servers, the high availability design might lead to a scaled-out approach.

Example

The following information is determined by using the data generated throughout the examples:

- Deploying four Exchange mailbox virtual machines requires that each virtual machine is configured with 10 vCPUs and 128GB of memory.
- Each mailbox server virtual machine has 20 storage targets.
- The vSphere cluster must provide the following:
  - At least 40 physical CPU cores.
  - 512GB of memory.

2.5 Establishing Virtual Machine Sizing and Placement

Physical server capabilities have far surpassed the ability of applications to use those resources effectively. This is often the reason an organization looks to virtualization, and Exchange is no exception. Although Exchange does a very good job of using resources efficiently, with many of the current processor architectures efficient use requires either placing a very large number of users on a single instance of Exchange or modifying the server build to provide fewer resources. The first option means that any service interruption affects a larger number of users, and the second option means more datacenter resources are consumed for less return on the investment.
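The cluster-level arithmetic from the Section 2.4 example (four mailbox VMs at 10 vCPUs and 128GB each, sized one-to-one virtual to physical) can be sketched as:

```python
# Aggregating per-VM compute requirements into vSphere cluster totals,
# using the figures from the Section 2.4 example.

mailbox_vms = 4
vcpus_per_vm = 10
memory_gb_per_vm = 128

# With the recommended 1:1 vCPU-to-physical-core allocation, the cluster
# must supply at least the sum of all configured vCPUs and memory.
cluster_cores = mailbox_vms * vcpus_per_vm
cluster_memory_gb = mailbox_vms * memory_gb_per_vm

print(f"physical cores required: {cluster_cores}")    # 40
print(f"memory required:         {cluster_memory_gb}GB")  # 512GB
```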
In a virtualized Exchange environment these problems are solved by creating virtual machines sized to meet the various requirements, taking manageability and resource utilization into consideration. For larger deployments, creating a smaller number of larger virtual machines enables good consolidation of mailboxes and room for other peripheral virtual machines. Small to mid-sized environments might prefer to scale out the design with smaller virtual machines, allowing them to support multiple workloads alongside Exchange.

The approach taken at this phase of the design depends on factors such as physical server sizing, high availability requirements, and mailbox count. Larger physical servers can accommodate larger mailbox server virtual machines without overcommitting physical resources, but with smaller physical servers, the design approach might use smaller virtual machine sizes. If a DAG is being considered to provide high availability, the best practice recommendation is to host one DAG member virtual machine per physical host. This means that the vSphere cluster must contain, at a minimum, the same number of physical hosts as proposed DAG nodes. If the design calls for multiple DAGs, this can work well by allowing the co-location of members from different DAGs on the same host. This allows you to drive consolidation higher, provided the physical hardware supports the compute requirements.
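The placement rule above (one member of a given DAG per ESXi host, with members of different DAGs allowed to share a host) can be sketched as a small placement helper. The DAG names and sizes here are illustrative assumptions, not part of the design examples:

```python
# Minimal sketch of DAG-member placement: no ESXi host carries two
# members of the same DAG, but members of different DAGs may co-locate.
# DAG names/sizes below are hypothetical, for illustration only.

def minimum_hosts(dag_sizes):
    """The cluster needs at least as many hosts as the largest DAG."""
    return max(dag_sizes)

def place(dags):
    """Assign members round-robin so each host holds at most one member
    of each DAG. `dags` maps DAG name -> number of members."""
    hosts = [[] for _ in range(minimum_hosts(dags.values()))]
    for name, members in dags.items():
        for i in range(members):
            hosts[i].append(f"{name}-node{i + 1}")
    return hosts

for n, vms in enumerate(place({"DAG1": 4, "DAG2": 3}), start=1):
    print(f"esxi-{n:02d}: {', '.join(vms)}")
# A host failure costs each DAG at most one node, preserving the
# availability the DAG was designed to provide.
```

In production, this rule is typically enforced with vSphere DRS anti-affinity rules rather than manual placement.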
Example

The flexibility of vSphere allows for multiple deployment options. In the previous examples, four DAG nodes were used for sizing. To understand how scaling out might change compute requirements, another sizing exercise was performed with six DAG nodes. The details of this scenario are as follows:

- Deploying six Exchange mailbox virtual machines requires each virtual machine to support 4,000 mailboxes during normal operation and up to 4,800 during a single server failure.
- Each mailbox virtual machine requires 6 vCPUs and 64GB of memory.
- Each mailbox server virtual machine supports 12 databases.
- The vSphere cluster must provide the following:
  - At least 36 physical CPU cores.
  - 384GB of memory.

Scaling out provides better agility, reduced total compute resources, and fewer databases per mailbox virtual machine to manage. However, more DAG members require more ESXi hosts to keep DAG members on separate physical hosts.

2.6 Sample Physical Layout

Using the initial sizing example of four DAG nodes, the physical layout of virtual machines is illustrated in the following figure. The spare capacity within the ESXi hosts is used to provide resources for Client Access servers and any other peripheral services used for Exchange, such as backup or archive systems.

Figure 1. Sample Physical Layout for 24,000 Mailboxes
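The four-node and six-node sizings for the same 24,000 mailboxes can be compared side by side. Per-VM figures are taken from the examples in this guide; the failure math assumes mailboxes redistribute evenly across the surviving nodes:

```python
# Comparing the four-node and six-node DAG sizings for 24,000 mailboxes.
# Per-VM vCPU/memory figures come from the examples in this guide.

TOTAL_MAILBOXES = 24_000

def scenario(nodes, vcpus, mem_gb):
    return {
        "mailboxes/node (normal)": TOTAL_MAILBOXES // nodes,
        "mailboxes/node (one node failed)": TOTAL_MAILBOXES // (nodes - 1),
        "cluster cores": nodes * vcpus,
        "cluster memory GB": nodes * mem_gb,
    }

four = scenario(4, 10, 128)  # 6,000 -> 8,000 mailboxes; 40 cores, 512GB
six = scenario(6, 6, 64)     # 4,000 -> 4,800 mailboxes; 36 cores, 384GB
print(four)
print(six)
# Scaling out lowers both the per-node mailbox count after a failure and
# the total compute footprint, at the cost of more ESXi hosts to keep
# DAG members separated.
```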
3. Sizing Examples

The following examples are provided to illustrate the topics covered in this guide across multiple deployment scenarios. These examples are meant to help reinforce the methodology for sizing an Exchange environment on vSphere and to show the flexibility available based on your deployment requirements and constraints.

Note: Processor utilization, memory sizing, and I/O estimations are based on Exchange 2010 sizing guidance, with considerations taken for Exchange 2013 architecture changes. Although sizing might change as Microsoft releases updated guidance for Exchange 2013, the methodology remains the same at the vSphere level.

In each of these examples, the following design parameters are used:

- Database size: Default.
- Average mailbox quota: 2048MB.
- Average messages sent and received per day: 150.
- Average message size: 75KB.
- Deleted item retention: 14 days.
- IOPS and megacycle multiplication factor.
- Processor SPECint2006 rating: 8 cores per processor, 41 per core.

3.1 Single Role Server Design (12,000 Users)

This example uses separate Exchange virtual machines for the Client Access and Mailbox server roles. vSphere hosts are sized to provide failover capacity for all virtual machines. In this design, two vSphere hosts can be taken offline with no impact to performance, or further consolidation can be achieved.

3.1.1 Resource Requirements by Server Role

The following table lists the compute requirements for each server role to support 12,000 users.

Table 5. Exchange Server Role Resource Requirements

Mailbox Server (4 servers):
- CPU: 4 cores (31% max utilization).
- Memory: 48GB.
- OS and application file storage: 100GB.
- Database storage: 28 x 2000GB 7.2K RPM SAS 3.5" (RAID 1/0).
- Log storage: 2 x 2000GB 7.2K RPM SAS 3.5" (RAID 1/0).
- Restore LUN: 3 x 2000GB 7.2K RPM SAS 3.5" (RAID 5).
- Network: 1Gbps.

Client Access Server (4 servers):
- CPU: 4 cores.
- Memory: 8GB.
- Storage: 80GB (OS and application files).
- Network: 1Gbps.
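The SPECint2006 parameter above feeds the megacycle-based CPU sizing used in the Exchange 2010 methodology: a platform's per-core SPECint2006 rating is normalized against Microsoft's reference platform to estimate usable megacycles per core. The 33.75-per-core / 3,333-megacycle baseline below reflects the published Exchange 2010 guidance; the 3 megacycles per active mailbox (for a 150-message/day profile) and the 80% target utilization are illustrative assumptions, not figures from this guide:

```python
# Sketch of SPECint2006-normalized CPU sizing (Exchange 2010 methodology).
# Baseline values are from Microsoft's Exchange 2010 processor guidance;
# megacycles-per-mailbox and target utilization are assumptions here.

BASELINE_SPECINT_PER_CORE = 33.75  # Microsoft reference platform rating
BASELINE_MEGACYCLES = 3333         # usable megacycles per reference core

def adjusted_megacycles_per_core(specint_per_core):
    """Scale the baseline megacycles by the relative SPECint2006 rating."""
    return specint_per_core / BASELINE_SPECINT_PER_CORE * BASELINE_MEGACYCLES

def cores_needed(mailboxes, megacycles_per_mailbox, specint_per_core,
                 target_utilization=0.80):
    demand = mailboxes * megacycles_per_mailbox
    supply_per_core = (adjusted_megacycles_per_core(specint_per_core)
                       * target_utilization)
    return demand / supply_per_core

per_core = adjusted_megacycles_per_core(41)  # the 41-per-core rating above
print(f"adjusted megacycles per core: {per_core:.0f}")
print(f"cores for 3,000 mailboxes:    {cores_needed(3000, 3, 41):.1f}")
```

The real designs in this guide add headroom for failover, passive database copies, and hypervisor overhead on top of this raw estimate.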
3.1.2 Guest Virtual Machine Configuration

The resource requirements in the preceding table are translated into the following virtual machine resources.

Table 6. Exchange Virtual Machine Configuration

Mailbox Server (4 servers, 3,000 mailboxes each during normal run time):
- CPU: 4 vCPUs.
- Memory: 48GB.
- Storage, SCSI Controller 1:
  - HDD 1 (C:\): 80GB (OS and application files).
  - HDD 2 (D:\): DB1-DB7.
  - HDD 3 (E:\): 83GB (LOG1-LOG7).
  - HDD 4 (F:\): DB8-DB14.
  - HDD 5 (G:\): 83GB (LOG8-LOG14).
- Storage, SCSI Controller 2:
  - HDD 6 (H:\): DB15-DB21.
  - HDD 7 (I:\): 83GB (LOG15-LOG21).
  - HDD 8 (J:\): DB22-DB28.
  - HDD 9 (K:\): 83GB (LOG22-LOG28).
- Storage, SCSI Controller 3:
  - HDD 10 (L:\): DB29-DB35.
  - HDD 11 (M:\): 83GB (LOG29-LOG35).
  - HDD 12 (N:\): DB36-DB42.
  - HDD 13 (O:\): 83GB (LOG36-LOG42).
- Network: vNIC 1, LAN/client connectivity.

Client Access Server (4 servers):
- CPU: 2 vCPUs.
- Memory: 8GB.
- Storage, SCSI Controller 1: 80GB (OS and application files).
- Network: vNIC 1, LAN/client connectivity.
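The volume pattern in Table 6 (seven databases per volume, each database volume paired with a log volume, drive letters assigned in sequence after the OS disk) is regular enough to generate. This is a sketch of that pattern only; the table additionally spreads the resulting virtual disks across three virtual SCSI controllers:

```python
# Generates the database/log volume layout pattern shown in Table 6:
# 42 databases, seven per volume, each DB volume paired with a log
# volume, drive letters assigned in sequence starting after the OS disk.

import string

def mailbox_volume_layout(databases=42, dbs_per_volume=7):
    letters = iter(string.ascii_uppercase[3:])  # data volumes start at D:\
    layout, hdd = [], 2                         # HDD 1 is the 80GB OS disk
    for start in range(1, databases + 1, dbs_per_volume):
        end = start + dbs_per_volume - 1
        layout.append((f"HDD {hdd}", f"{next(letters)}:\\",
                       f"DB{start}-DB{end}"))
        layout.append((f"HDD {hdd + 1}", f"{next(letters)}:\\",
                       f"LOG{start}-LOG{end}"))
        hdd += 2
    return layout

for hdd, drive, contents in mailbox_volume_layout():
    print(hdd, drive, contents)
# HDD 2 D:\ DB1-DB7, HDD 3 E:\ LOG1-LOG7, ..., HDD 13 O:\ LOG36-LOG42
```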