Microsoft Exchange 2010 on VMware
This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304
www.vmware.com
Contents

1. Introduction
   1.1 Benefits of Running Exchange 2010 on vSphere
2. Design Concepts
   2.1 Resource Management
   2.2 Capacity Planning Process Overview
3. Building Block Examples (Standalone Mailbox Servers)
   3.1 The Building Block Process
   3.2 Sample Building Block Sizing - 4,000 Users/150 Sent/Received
   3.3 Example 1 - 8,000 Users/150 Sent/Received
   3.4 Example 2 - 16,000 Users/150 Sent/Received
   3.5 Example 3 - 64,000 Users/150 Sent/Received
4. DAG Examples (Clustered Mailbox Servers)
   4.1 The DAG Process
   4.2 DAG Sizing - 150 Sent/Received
   4.3 DAG Example 1 - 8,000 Active Mailboxes/150 Sent/Received
   4.4 DAG Example 2 - 16,000 Active Mailboxes/150 Sent/Received
   4.5 DAG Example 3 - 64,000 Active Mailboxes/150 Sent/Received
5. Design and Deployment Considerations
6. Summary
List of Tables

Table 1. Building Block CPU and RAM Requirements for Mailboxes with 150 Messages Sent/Received per Day
Table 2. 4,000-User/150 Sent/Received Building Block Requirements
Table 3. Exchange Virtual Machine Configuration
Table 4. Exchange Server Role Resource Requirements
Table 5. Exchange Virtual Machine Distribution
Table 6. ESXi Host Hardware Configuration Table
Table 7. Exchange Server Role Resource Requirements
Table 8. Exchange Virtual Machine Distribution
Table 9. ESXi Host Hardware Configuration Table
Table 10. Exchange Server Role Resource Requirements
Table 11. Exchange Virtual Machine Distribution for Eight ESXi Hosts
Table 12. ESXi Host Hardware Configuration Table
Table 13. Mailbox Server Resource Requirements
Table 14. Exchange Virtual Machine Configuration
Table 15. Exchange Server Role Resource Requirements
Table 16. Exchange Virtual Machine Distribution
Table 17. ESXi Host Hardware Configuration Table
Table 18. 16,000 Active Mailboxes (150 Sent/Received) DAG Node Requirements
Table 19. Exchange Virtual Machine Configuration
Table 20. Exchange Server Role Resource Requirements
Table 21. Exchange Virtual Machine Distribution
Table 22. ESXi Host Hardware Configuration Table
Table 23. 64,000 Active Mailboxes (150 Sent/Received) DAG Node Requirements
Table 24. Exchange Virtual Machine Configuration
Table 25. Exchange Server Role Resource Requirements
Table 26. Exchange Virtual Machine Distribution
Table 27. ESXi Host Hardware Configuration Table
List of Figures

Figure 1. Physical Separation of Resource Pools
Figure 2. Sample Physical Environment for 16,000 Mailboxes
Figure 3. Building Block Virtual Machine Interaction with Shared Storage
Figure 4. Initial Virtual Machine Placement
Figure 5. Initial Virtual Machine Placement
Figure 6. Initial Virtual Machine Placement for 64,000 Active Users
Figure 7. DAG Layout with Overhead for Passive Databases
Figure 8. Building Block Virtual Machine Interaction with Shared Storage
Figure 9. Initial Virtual Machine Placement for 8,000 Active Users
Figure 10. Building Block Virtual Machine Interaction with Shared Storage
Figure 11. Initial Virtual Machine Placement for 16,000 Active Users
Figure 12. Building Block Virtual Machine Interaction with Shared Storage
Figure 13. Initial Virtual Machine Placement for 64,000 Active Users
1. Introduction

Microsoft Exchange can be a very complex application to deploy, and there are many design decisions to be made to build a solid solution. We know that running Microsoft Exchange Server 2010 on VMware vSphere can positively impact design, deployment, availability, and operations, but what does such a solution look like? In this document, we explore a sample architecture design that illustrates an Exchange 2010 environment running on vSphere. The focus of this architecture is to provide a high-level overview of the solution components, with diagrams to help illustrate key concepts. For detailed best practices, see the Microsoft Exchange 2010 on VMware: Best Practices Guide.

The sample design covers:
- Design Concepts: Resource Management, Sample Physical Layout, and Capacity Planning Process Overview.
- Building Block Examples (Standalone Mailbox Servers): 8,000, 16,000, and 64,000 mailboxes at 150 sent/received.
- DAG Examples (Clustered Mailbox Servers): 8,000, 16,000, and 64,000 active mailboxes at 150 sent/received.
- Design and Deployment Considerations.

The examples show how these components contribute to the overall design and are only intended to provide a guideline. Customers should work with their infrastructure vendors to develop a detailed sizing and architecture plan designed for their requirements.

After describing some important design concepts, we look at sizing examples of Exchange 2010 on vSphere for three different sized organizations:
- 8,000 mailboxes, 150 sent/received.
- 16,000 mailboxes, 150 sent/received.
- 64,000 mailboxes, 150 sent/received.

We'll make one pass with the VMware building block process for standalone mailbox servers, and a second pass for mailbox servers configured in a DAG.

This document describes examples to help understand components and concepts. Official sizing for Exchange environments varies based on business and technical requirements, as well as server and storage hardware platforms. VMware recommends that you engage your server and storage vendors to help plan your design, or use one of the detailed, hardware-specific reference architectures found on our website and in the Microsoft Exchange 2010 on VMware: Partner Resources Catalog.
1.1 Benefits of Running Exchange 2010 on vSphere

Email is one of the most critical applications in an organization's IT infrastructure. Organizations increasingly rely on messaging tools for individual and organizational effectiveness. As a result, messaging administrators face a constant challenge as they continually seek to manage the conflicting demands of availability, agility, and cost. Running Exchange on VMware offers many benefits:

Server consolidation:
- Utilize all of your server processor cores.
- Maintain role isolation without additional hardware expense.

Operational advantages:
- Design for today's workload rather than guessing about tomorrow.
- Design for specific business requirements.
- Rapidly provision Exchange servers with virtual machine templates.
- Reduce hardware and operational costs of maintaining an Exchange lab.
- Enhance testing and troubleshooting using cloned production virtual machines.

Higher availability with less complexity:
- Reduce planned downtime due to hardware or BIOS updates with VMware vSphere vMotion.
- Reduce unplanned downtime due to hardware failure or resource constraints.
- Implement simple and reliable Exchange disaster recovery.
2. Design Concepts

2.1 Resource Management

2.1.1 Resource Pools

When working in vSphere environments, it is important to classify and structure virtual machines logically, as you would physical servers; using resource pools is a great way to accomplish this goal. Resource pools allow for easy grouping and logical separation of production, test, and development workloads. You can also employ some physical separation as a part of your infrastructure, separating the test and development environments from the production environment on different ESXi hosts as shown in Figure 1, but it is not required.

Resource pools provide the additional benefit of making sure that the most important workloads maintain priority for the use of the physical resources. To do this, each resource pool is allowed a certain number of shares. The number of shares designated for each resource pool depends on the workloads for each virtual machine. For example, a production resource pool would be given the most shares, and a development or test resource pool would be given the least. In the event of resource contention, where two or more virtual machines are trying to use the same resource (for example, vCPU), the virtual machine with the most shares assigned to it takes priority, while other virtual machines have to wait for the vCPU to become available.

See vSphere Resource Management (http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-resource-management-guide.pdf) for more information about configuring and managing resource pools.

Figure 1. Physical Separation of Resource Pools
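To make the share mechanism concrete, the following minimal Python sketch models how a contended CPU capacity would be divided among pools in proportion to their shares. The pool names and share values are illustrative assumptions only; actual entitlement in vSphere also depends on reservations and limits.

```python
# Minimal model of proportional-share allocation during resource contention.
# Pool names and share values are illustrative assumptions; real vSphere
# entitlement also depends on reservations and limits.

def entitlement_under_contention(pools, contended_capacity_mhz):
    """Split contended CPU capacity across pools in proportion to their shares."""
    total_shares = sum(pools.values())
    return {name: contended_capacity_mhz * shares / total_shares
            for name, shares in pools.items()}

pools = {"Production": 4000, "Test": 2000, "Development": 1000}  # assumed values

for name, mhz in entitlement_under_contention(pools, 24000).items():
    print(f"{name}: {mhz:,.0f} MHz of a contended 24,000 MHz host")
```

In this model, the production pool receives twice the contended capacity of the test pool and four times that of the development pool, which is the behavior the shares mechanism is designed to provide.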
When running Microsoft Exchange Server 2010 on vSphere, it is important to consider which resource pools will reside on what hardware. It may be desirable (although not necessary) to separate the production environment on physical hardware, but make sure there are enough physical resources to provide the availability needed for proper operation of VMware HA, DRS, and vMotion (this depends on the overall size of the environment).

When deploying Exchange 2010 on vSphere, the same rules generally apply as for a physical design. For example, there are advantages to distributing workloads by separating the mailbox server from other peripheral server roles (CAS, Hub Transport, others) when you are working with physical servers, and they also apply when deploying Exchange on vSphere.

2.1.2 Dedicated Application Clusters

An alternative approach to using resource pools is to dedicate one or more VMware clusters as Application Clusters. Many of our customers have found that running Enterprise applications on vSphere necessitates a different management approach. A dedicated vSphere cluster could be configured with different properties than the general vSphere pool, such as rules to avoid over-commitment, limit HA/DRS/vMotion, or dedicate storage to performance-intensive virtual machines.

2.1.3 Sample Physical Layout

Figure 2 demonstrates a sample 16,000 active mailbox environment, with each user sending/receiving 150 messages per day, and where each of the Exchange server roles runs in its own virtual machine. Each ESXi host has been sized with 16 CPU cores and 128GB of RAM to handle the workload of 4 Mailbox Server, 2 Hub Transport Server, and 3 Client Access Server virtual machines. To achieve best results in a vSphere environment, it is a good practice to divide out each server role into its own virtual machine to allow for more efficient workload separation and increase the amount of redundancy in the system.

Figure 2. Sample Physical Environment for 16,000 Mailboxes
2.2 Capacity Planning Process Overview

Sizing of an Exchange 2010 environment is a complex process with many variables, including business requirements, anticipated mailbox workloads, and hardware platform, to name a few. The good news is that sizing an Exchange 2010 environment on vSphere is nearly the same as sizing for physical servers.

First, you must decide whether or not to cluster the mailbox servers. If you choose to use standalone mailbox servers protected by VMware HA, use the building block approach (defined in Section 3, Building Block Examples (Standalone Mailbox Servers), and in the Microsoft Exchange 2010 on VMware: Best Practices Guide). However, if you decide to implement Database Availability Groups (DAGs), use the DAG approach (defined in Section 4, DAG Examples (Clustered Mailbox Servers), and in the Best Practices Guide).

Storage sizing and configuration can vary depending on the storage array used, and many vendors have unique enhancements to the storage solution that can increase availability, speed recovery, enhance performance, and so on. To optimize performance and take advantage of these features, it is highly recommended that the storage partner be included in the design effort.

There are many facets to an Exchange 2010 deployment besides sizing. Exchange 2010 can be deployed into some very complex, multisite architectures that should be designed with the assistance of an Exchange expert, whether that person is an internal company resource or a partner with experience deploying both Exchange and vSphere. The high-level sizing guidelines are described in detail in the Microsoft Exchange 2010 on VMware: Best Practices Guide.
3. Building Block Examples (Standalone Mailbox Servers)

3.1 The Building Block Process

The building block approach is a recommended best practice for creating standalone Exchange Mailbox Servers running on vSphere using pre-sized virtual machine configurations. Exchange servers that have been divided into virtual machine building blocks (as opposed to larger, monolithic Exchange servers) can simplify server sizing during the initial deployment and create a highly scalable solution using virtual machines with predictable performance patterns. Testing by VMware and its partners has focused on four primary sizes for mailbox virtual machine building blocks, consisting of 500, 1,000, 2,000, and 4,000 users. These configurations have known performance profiles that can be leveraged for rapid Exchange server sizing, as well as for easily scaling environments as additional Exchange servers need to be brought online.

Table 1 presents some pre-sized virtual machine building block examples designed to host mailboxes with an average of 150 messages sent/received per day. The same principles are used for sizing profiles ranging from 50 to 550 messages sent/received per day.

Table 1. Building Block CPU and RAM Requirements for Mailboxes with 150 Messages Sent/Received per Day*

Building Block: 500 / 1,000 / 2,000 / 4,000
Profile: 150 sent/received (all sizes)
Megacycle Requirement: 1,500 / 3,000 / 6,000 / 12,000
vCPU (based on 3.33GHz processor-based server): 2 minimum (0.6 actual) / 2 minimum (1.3 actual) / 4 (2.6 actual) / 6 (5.1 actual)
Cache Requirement: 4.5GB / 9GB / 18GB / 36GB
Total Memory Size: 16GB / 16GB / 24GB / 48GB

* Based on http://technet.microsoft.com/en-us/library/ee712771.aspx

The sizing process begins with understanding and applying Microsoft guidelines for each server role, as represented by the following high-level processes.

Design the mailbox server building block:
- Define current workloads using the Microsoft Exchange Server Profile Analyzer (http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=10559).
- Choose an appropriate building block (500, 1,000, 2,000, and 4,000 user blocks have been tested and validated, although larger building blocks may be possible).
- Apply Microsoft guidelines to determine the CPU requirements.
- Apply Microsoft guidelines to determine the amount of memory required.
- Use the Exchange 2010 Mailbox Server Role Requirements Calculator (http://blogs.technet.com/b/exchange/archive/2009/11/09/3408737.aspx) from Microsoft to determine storage requirements.
Design the peripheral server roles:
- Determine how many mailbox server building blocks are needed.
- Calculate the number of mailbox server processor cores.
- Use Microsoft Guidelines for Server Role Ratios (http://technet.microsoft.com/en-us/library/ee832795.aspx) to calculate processor and memory requirements for the Hub Transport roles.
- Use Microsoft Guidelines for Server Role Ratios (http://technet.microsoft.com/en-us/library/ee832795.aspx) to calculate processor and memory requirements for the Client Access Server roles.
- Allocate one or more virtual machines for each server role to satisfy the previously calculated number of processor cores and amount of memory.
- Determine how the virtual machines will be distributed across ESXi hosts.
- Aggregate virtual machine requirements, plus some overhead, to size each ESXi host. The overhead is important if you want to minimize the performance impact during the loss of one of your ESXi hosts. A typical guideline when choosing the number of required hosts is N+1, where N is the number of hosts required to run the workload at peak utilization. N+1 allows you to design for the possibility of losing one host from your VMware cluster without taking a huge performance hit during failover.

3.2 Sample Building Block Sizing - 4,000 Users/150 Sent/Received

Using the Microsoft sizing guidelines and the building block approach, we size a 4,000-user building block with each mailbox sending/receiving 150 messages per day. The following calculations are meant to serve as an example of the sizing process. Customers are encouraged to use these as guidelines but must also evaluate specific requirements to determine the most optimal deployment models for their needs. Every environment is different, and some organizations use email more heavily than others. To accurately determine your mailbox profile requirements, use the Microsoft Exchange Server Profile Analyzer (http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=10559). It is also strongly recommended that you work with an internal or partner resource that is experienced with Exchange architectures to design for good performance in your environment.

In our example, we use the following average mailbox profile definition:
- 150 messages sent/received per day.
- Average message size of 75KB.
- 2GB mailbox quota.

Note that these examples do not take into account a particular storage solution. Many VMware storage partners have performed extensive testing on building blocks of varying capacities and workload characteristics. See the Microsoft Exchange 2010 on VMware: Partner Resource Catalog for storage-specific implementation details.
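Before translating the 4,000-user building block into concrete requirements, the following Python sketch walks through the kind of arithmetic that sits behind Table 1 and the role-ratio step in Section 3.1. The per-mailbox megacycle figure, the megacycles-per-core value, the 70% utilization target, the 9MB-per-mailbox cache figure, and the 3:4 CAS and 1:7 Hub core ratios are assumptions used here for illustration only; authoritative figures come from the Microsoft calculators and guidelines referenced above.

```python
# Illustrative arithmetic behind the building block sizes in Table 1 and the
# role-ratio step in Section 3.1. All constants are assumptions for
# illustration; use the Microsoft calculators for authoritative sizing.
import math

MEGACYCLES_PER_MAILBOX = 3.0   # assumed for a 150 sent/received profile
MEGACYCLES_PER_CORE = 3330     # assumed 3.33GHz reference core
UTILIZATION_TARGET = 0.70      # assumed peak-utilization target
CACHE_MB_PER_MAILBOX = 9       # assumed database cache per mailbox

def building_block(mailboxes):
    megacycles = mailboxes * MEGACYCLES_PER_MAILBOX
    actual_vcpu = megacycles / (MEGACYCLES_PER_CORE * UTILIZATION_TARGET)
    cache_gb = mailboxes * CACHE_MB_PER_MAILBOX / 1024
    return megacycles, actual_vcpu, cache_gb

for users in (500, 1000, 2000, 4000):
    mc, vcpu, cache = building_block(users)
    print(f"{users:5d} users: {mc:6,.0f} megacycles, {vcpu:.1f} vCPU actual, "
          f"{cache:4.1f}GB cache")

# Peripheral roles for two 4,000-user blocks (Example 1), using assumed core
# ratios of 3:4 (CAS:Mailbox) and 1:7 (Hub:Mailbox). The worked examples that
# follow round these per server and add redundancy.
mailbox_cores = 2 * 6
print("CAS cores ~", math.ceil(mailbox_cores * 3 / 4))
print("Hub Transport cores ~", math.ceil(mailbox_cores / 7))
```

The total memory sizes in Table 1 are larger than the computed cache figures because they also include operating system and application overhead, rounded up to standard virtual machine memory sizes.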
3.2.1 Mailbox Server Resource Requirements

The following table summarizes the resource requirements for our 4,000-user building block.

Table 2. 4,000-User/150 Sent/Received Building Block Requirements

Mailbox Server - Physical Resources (per server):
- CPU: 6 cores (60% max utilization)
- Memory: 48GB
- OS and Application File Storage: 64GB (OS and application files)
- DB Storage: 110 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
- Log Storage: 6 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
- Restore LUN: 12 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 5)
- Network: 1Gbps
3.2.2 Guest Virtual Machine Configuration

The resource requirements in Table 2 are translated below into virtual machine resources.

Table 3. Exchange Virtual Machine Configuration

Mailbox Server - Virtual Hardware (per VM):
- CPU: 6 vCPU
- Memory: 48GB
- Storage: SCSI Controller 0
  - HDD 1: 64GB (OS and application files)
- Storage: SCSI Controller 1
  - HDD 2: 1833GB (DB1-DB7 databases)
  - HDD 3: 1833GB (DB8-DB14 databases)
  - HDD 4: 1833GB (DB15-DB21 databases)
  - HDD 5: 1833GB (DB22-DB28 databases)
  - HDD 6: 1833GB (DB29-DB35 databases)
  - HDD 7: 1833GB (DB36-DB42 databases)
  - HDD 8: 1833GB (DB43-DB49 databases)
  - HDD 9: 1833GB (DB50-DB56 databases)
  - HDD 10: 524GB (DB57-DB58 databases)
- Storage: SCSI Controller 2
  - HDD 11: 80GB (DB1-DB7 logs)
  - HDD 12: 80GB (DB8-DB14 logs)
  - HDD 13: 80GB (DB15-DB21 logs)
  - HDD 14: 80GB (DB22-DB28 logs)
  - HDD 15: 80GB (DB29-DB35 logs)
  - HDD 16: 80GB (DB36-DB42 logs)
  - HDD 17: 80GB (DB43-DB49 logs)
  - HDD 18: 80GB (DB50-DB56 logs)
  - HDD 19: 23GB (DB57-DB58 logs)
- Storage: SCSI Controller 3
  - HDD 20: 1747GB (Restore LUN)
- Network: NIC 1
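As a back-of-envelope check on the layout in Table 3, the short Python sketch below totals the database LUN capacity and compares it with the raw mailbox quota. The LUN sizes are taken directly from Table 3; the interpretation of the headroom is illustrative only, and the actual sizes come from the Microsoft Exchange 2010 Mailbox Server Role Requirements Calculator, which models items such as the dumpster, database whitespace, content indexes, and free-space headroom.

```python
# Back-of-envelope check of the database LUN layout in Table 3: total database
# LUN capacity versus raw mailbox quota. LUN sizes are copied from Table 3.

DB_LUNS_GB = [1833] * 8 + [524]     # HDD 2-10 (databases DB1-DB58)
LOG_LUNS_GB = [80] * 8 + [23]       # HDD 11-19 (logs)
MAILBOXES = 4000
QUOTA_GB = 2.0

provisioned_db_gb = sum(DB_LUNS_GB)
raw_quota_gb = MAILBOXES * QUOTA_GB
ratio = provisioned_db_gb / raw_quota_gb

print(f"Database LUN capacity: {provisioned_db_gb:,} GB")
print(f"Raw mailbox quota:     {raw_quota_gb:,.0f} GB")
print(f"Provisioning ratio:    {ratio:.2f}x (headroom for dumpster, whitespace,")
print("                       content indexes, and free space)")
print(f"Log LUN capacity:      {sum(LOG_LUNS_GB):,} GB")
```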
3.2.3 Guest Virtual Machine Storage Interaction

The following figure shows how the building block virtual machine interacts with the shared storage.

Figure 3. Building Block Virtual Machine Interaction with Shared Storage
3.3 Example 1 - 8,000 Users/150 Sent/Received

This example uses our 4,000 active user building block numbers to estimate the number of mailbox servers and the amount of processing and memory needed for the CAS and Hub Transport roles. We then translate the estimated resources into virtual machine and host configurations.

3.3.1 Resource Requirements by Server Role

Using Microsoft and VMware best practices, we can estimate the resource requirements of each server role based on server role ratios and Microsoft sizing guidelines. In this case, we support 8,000 active users and thus need two 4,000-user building blocks. For an in-depth look at the sizing and configuration process, see the Microsoft Exchange 2010 on VMware: Best Practices Guide.

Table 4. Exchange Server Role Resource Requirements

Mailbox Server (2 servers) - Physical Resources (per server):
- CPU: 6 cores (60% max utilization)
- Memory: 48GB
- OS and Application File Storage: 64GB (OS and application files)
- DB Storage: 110 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
- Log Storage: 6 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
- Restore LUN: 12 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 5)
- Network: 1Gbps

Client Access Server (2 servers) - Physical Resources (per server):
- CPU: 4 cores
- Memory: 8GB
- Storage: 24GB (OS and application files)
- Network: 1Gbps

Hub Transport Server (2 servers) - Physical Resources (per server):
- CPU: 1 core
- Memory: 4GB
- Storage: 20GB (OS, application, and log files); 32GB (DB, protocol/tracking logs, and temp files)
- Network: 1Gbps
3.3.2 Virtual Machine Distribution

Now that we understand the physical resource requirements and associated virtual hardware configuration, we can plan physical ESXi host hardware to meet those requirements. To build infrastructure availability into the architecture, we distribute the six total virtual machines across two ESXi hosts. Initial placement of virtual machines is relatively unimportant, especially if you're using DRS.

Table 5. Exchange Virtual Machine Distribution

ESXi Host 1:
- Exchange Mailbox VM 1 (6 vCPU/48GB RAM)
- Exchange Client Access VM 1 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 1 (1 vCPU/4GB RAM)

ESXi Host 2:
- Exchange Mailbox VM 2 (6 vCPU/48GB RAM)
- Exchange Client Access VM 2 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 2 (1 vCPU/4GB RAM)

3.3.3 ESXi Host Specifications

Each ESXi host should provide enough physical hardware resources to accommodate the planned workload and provide some headroom in the event of a VMware HA failover or a planned vMotion migration of live virtual machines for host hardware maintenance. The following table summarizes the ESXi host hardware configuration based on our example architecture.

Table 6. ESXi Host Hardware Configuration Table

All ESXi hosts:
- 12 cores (2x6)
- 96GB RAM (extra 36GB above requirements for use in failover)
- 2 Fibre Channel HBAs
- 4 Gigabit network adapters
3.3.4 Initial Virtual Machine Placement

Although the workloads migrate automatically with DRS (including the mailbox servers), the following diagram is a useful planning tool for initial placement of virtual machines and for calculating host failover capacity. At initial placement, both ESXi hosts have some failover headroom.

Figure 4. Initial Virtual Machine Placement
3.4 Example 2 - 16,000 Users/150 Sent/Received

The second example uses our 4,000-user building block numbers to estimate the number of mailbox servers and the amount of processing and memory needed for the CAS and Hub Transport roles. We then translate the estimated resources into virtual machine and host configurations.

3.4.1 Resource Requirements by Server Role

In this example, the Mailbox Server building block is the same, but we added two more of them. We also recalculated the number of CAS and Hub Transport virtual machines per Microsoft guidelines.

Table 7. Exchange Server Role Resource Requirements

Mailbox Server (4 servers) - Physical Resources (per server):
- CPU: 6 cores (60% max utilization)
- Memory: 48GB
- OS and Application File Storage: 64GB (OS and application files)
- DB Storage: 110 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
- Log Storage: 6 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
- Restore LUN: 12 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 5)
- Network: 1Gbps

Client Access Server (3 servers) - Physical Resources (per server):
- CPU: 4 cores
- Memory: 8GB
- Storage: 24GB (OS and application files)
- Network: 1Gbps

Hub Transport Server (2 servers) - Physical Resources (per server):
- CPU: 2 cores
- Memory: 4GB
- Storage: 20GB (OS, application, and log files); 32GB (DB, protocol/tracking logs, and temp files)
- Network: 1Gbps
3.4.2 Virtual Machine Distribution

In this example, we've chosen to use four ESXi hosts connected to shared storage to use advanced VMware features such as HA and DRS. To build infrastructure availability into the architecture, we distribute the nine total virtual machines across four physical ESXi hosts. Initial placement of virtual machines is relatively unimportant, especially if you're using DRS.

Table 8. Exchange Virtual Machine Distribution

ESXi Host 1:
- Exchange Mailbox VM 1 (6 vCPU/48GB RAM)
- Exchange Client Access VM 1 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 1 (2 vCPU/4GB RAM)

ESXi Host 2:
- Exchange Mailbox VM 2 (6 vCPU/48GB RAM)
- Exchange Client Access VM 2 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 2 (2 vCPU/4GB RAM)

ESXi Host 3:
- Exchange Mailbox VM 3 (6 vCPU/48GB RAM)
- Exchange Client Access VM 3 (4 vCPU/8GB RAM)

ESXi Host 4:
- Exchange Mailbox VM 4 (6 vCPU/48GB RAM)

3.4.3 ESXi Host Specifications

Each ESXi host should provide enough physical hardware resources to accommodate the planned workload and provide some headroom in the event of a VMware HA failover or a planned vMotion migration of live virtual machines for host hardware maintenance. The following table summarizes the ESXi host hardware configuration based on our example architecture.

Table 9. ESXi Host Hardware Configuration Table

All ESXi hosts:
- 16 cores (4x4)
- 128GB RAM (extra 14GB above requirements for use in failover)
- 2 Fibre Channel HBAs
- 4 Gigabit network adapters
3.4.4 Initial Virtual Machine Placement

Although the workloads migrate automatically with DRS (including the mailbox servers), the following diagram is a useful planning tool for initial placement of virtual machines and for calculating host failover capacity. At initial placement, ESXi hosts 2 and 3 have the most failover headroom.

Figure 5. Initial Virtual Machine Placement

3.5 Example 3 - 64,000 Users/150 Sent/Received

The following example uses our 4,000 active user building block numbers to estimate the number of mailbox servers and the amount of processing and memory needed for the CAS and Hub Transport roles. We then translate the estimated resources into virtual machine and host configurations.

Although we've used the 4,000-user building block in this example, higher mailbox concentrations are certainly possible, depending on the specific workload. Mailbox Server virtual machines have been configured to run 11,000 mailboxes in production customer environments. That noted, the 4,000-user building block has been officially tested and recommended by our server and storage partners. See the Microsoft Exchange 2010 on VMware: Partner Resource Catalog in this solution kit for more information about building blocks and performance testing.
3.5.1 Resource Requirements by Server Role

In this example, the Mailbox Server building block is the same, and we scaled to 16 virtual machines. We also increased the Client Access Server count to 12 and the Hub Transport count to 4.

Table 10. Exchange Server Role Resource Requirements

Mailbox Server (16 servers) - Physical Resources (per server):
- CPU: 6 cores (60% max utilization)
- Memory: 48GB
- OS and Application File Storage: 64GB (OS and application files)
- DB Storage: 110 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
- Log Storage: 6 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
- Restore LUN: 12 x 300GB 10K RPM FC/SCSI/SAS 3.5" (RAID 5)
- Network: 1Gbps

Client Access Server (12 servers) - Physical Resources (per server):
- CPU: 4 cores
- Memory: 8GB
- Storage: 24GB (OS and application files)
- Network: 1Gbps

Hub Transport Server (4 servers) - Physical Resources (per server):
- CPU: 4 cores
- Memory: 4GB
- Storage: 20GB (OS, application, and log files); 32GB (DB, protocol/tracking logs, and temp files)
- Network: 1Gbps
3.5.2 Exchange Virtual Machine Distribution

In this example, we've increased the physical server count to eight ESXi hosts and evenly balanced the initial virtual machine placement across them.

Table 11. Exchange Virtual Machine Distribution for Eight ESXi Hosts

ESXi Host 1:
- Exchange Mailbox VM 1 (6 vCPU/48GB RAM)
- Exchange Mailbox VM 2 (6 vCPU/48GB RAM)
- Exchange Client Access VM 1 (4 vCPU/8GB RAM)
- Exchange Client Access VM 2 (4 vCPU/8GB RAM)

ESXi Host 2:
- Exchange Mailbox VM 3 (6 vCPU/48GB RAM)
- Exchange Mailbox VM 4 (6 vCPU/48GB RAM)
- Exchange Client Access VM 3 (4 vCPU/8GB RAM)
- Exchange Client Access VM 4 (4 vCPU/8GB RAM)

ESXi Host 3:
- Exchange Mailbox VM 5 (6 vCPU/48GB RAM)
- Exchange Mailbox VM 6 (6 vCPU/48GB RAM)
- Exchange Client Access VM 5 (4 vCPU/8GB RAM)
- Exchange Client Access VM 6 (4 vCPU/8GB RAM)

ESXi Host 4:
- Exchange Mailbox VM 7 (6 vCPU/48GB RAM)
- Exchange Mailbox VM 8 (6 vCPU/48GB RAM)
- Exchange Client Access VM 7 (4 vCPU/8GB RAM)
- Exchange Client Access VM 8 (4 vCPU/8GB RAM)

ESXi Host 5:
- Exchange Mailbox VM 9 (6 vCPU/48GB RAM)
- Exchange Mailbox VM 10 (6 vCPU/48GB RAM)
- Exchange Client Access VM 9 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 1 (4 vCPU/4GB RAM)

ESXi Host 6:
- Exchange Mailbox VM 11 (6 vCPU/48GB RAM)
- Exchange Mailbox VM 12 (6 vCPU/48GB RAM)
- Exchange Client Access VM 10 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 2 (4 vCPU/4GB RAM)

ESXi Host 7:
- Exchange Mailbox VM 13 (6 vCPU/48GB RAM)
- Exchange Mailbox VM 14 (6 vCPU/48GB RAM)
- Exchange Client Access VM 11 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 3 (4 vCPU/4GB RAM)
ESXi Host 8:
- Exchange Mailbox VM 15 (6 vCPU/48GB RAM)
- Exchange Mailbox VM 16 (6 vCPU/48GB RAM)
- Exchange Client Access VM 12 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 4 (4 vCPU/4GB RAM)

3.5.3 ESXi Host Specifications

Each ESXi host should provide enough physical hardware resources to accommodate the planned workload and provide some headroom in the event of a VMware HA failover or a planned vMotion migration of live virtual machines for host hardware maintenance. The following table summarizes the ESXi host hardware configuration based on our example architecture. To get the most out of our hardware consolidation, we chose to implement 24-core hosts for this configuration.

Table 12. ESXi Host Hardware Configuration Table

All ESXi hosts:
- 24 cores (4x6)
- 128GB RAM (extra 16GB above requirements for use in failover)
- 2 Fibre Channel HBAs
- 4 Gigabit network adapters

3.5.4 Initial Virtual Machine Placement

Although the workloads migrate automatically with DRS (including the mailbox servers), the following diagram is a useful planning tool for initial placement of virtual machines and for calculating host failover capacity. At initial placement, ESXi hosts 4-8 have the most failover headroom.

Figure 6. Initial Virtual Machine Placement for 64,000 Active Users
4. DAG Examples (Clustered Mailbox Servers)

4.1 The DAG Process

The new Database Availability Group (DAG) feature in Exchange 2010 necessitates a different approach to sizing the Mailbox Server role, forcing the administrator to account for both active and passive mailboxes. Mailbox Servers that are members of a DAG can host one or more passive databases in addition to any active databases for which they may be responsible. Each passive database adds an additional 10% to the CPU requirements of the mailbox server hosting the active copy.

The following diagram illustrates this principle. There are three Exchange mailbox servers, each with an active database (DB1a denotes database 1 active) and two passive databases from the other two mailbox servers (DB1p denotes database 1 passive). Each passive copy of DB1a requires 10% extra processing on the server hosting DB1a, for a total of 20% extra CPU overhead. So, each mailbox server in this example requires 20% additional processing power to account for passive database copies.

Figure 7. DAG Layout with Overhead for Passive Databases

The sizing process begins with understanding and applying Microsoft guidelines for each server role, as represented by the following high-level processes.

Design the Mailbox Server DAG nodes:
- Define current workloads using the Microsoft Exchange Server Profile Analyzer (http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=10559).
- To simplify capacity planning, use the Exchange 2010 Mailbox Server Role Requirements Calculator (http://blogs.technet.com/b/exchange/archive/2009/11/09/3408737.aspx) to calculate CPU, memory, and storage sizing. Alternatively, if you prefer a manual process:
  - Apply Microsoft guidelines to determine the CPU and memory requirements. Inputs include the number of mailboxes, mailbox profile, number of servers in the DAG, number of passive database copies, and several other custom parameters.
  - Use the Exchange 2010 Mailbox Server Role Requirements Calculator (http://blogs.technet.com/b/exchange/archive/2009/11/09/3408737.aspx) from Microsoft to determine storage requirements.

Design the peripheral server roles. The Exchange 2010 Mailbox Server Role Requirements Calculator also recommends CPU and memory for the CAS and Hub Transport roles. Alternatively, if you prefer a manual process:
- Count the number of mailbox server processor cores and multiply by the expected CPU utilization, which should be less than 80% for a clustered mailbox server (for example, 16 cores x 0.80 = 12.8, rounded up to 13 cores).
- Use the modified number of mailbox cores and Microsoft Guidelines for Server Role Ratios (http://technet.microsoft.com/en-us/library/ee832795.aspx) to calculate processor and memory requirements for the Hub Transport roles.
- Use the modified number of mailbox cores and Microsoft Guidelines for Server Role Ratios (http://technet.microsoft.com/en-us/library/ee832795.aspx) to calculate processor and memory requirements for the Client Access Server roles.
- Allocate one or more virtual machines for each server role to satisfy the previously calculated number of processor cores and amount of memory.
- Determine how the virtual machines will be distributed across ESXi hosts.
- Aggregate virtual machine requirements, plus some overhead, to size each ESXi host. The overhead is important if you want to minimize the performance impact during the loss of one of your ESXi hosts. A typical guideline when choosing the number of required hosts is N+1, where N is the number of hosts required to run the workload at peak utilization. N+1 allows you to design for the possibility of losing one host from your VMware cluster without taking a huge performance hit during failover.

4.2 DAG Sizing - 150 Sent/Received

Using the Microsoft guidelines, we design our mailbox servers using Database Availability Groups for mailboxes sending/receiving 150 messages per day. The number of mailboxes varies in each example. The following calculations are meant to serve as an example of the sizing process. Customers are encouraged to use these as guidelines but must also evaluate specific requirements to determine the most optimal deployment models for their needs. Every environment is different, and some organizations use email more heavily than others. To accurately determine your mailbox profile requirements, utilize the Microsoft Exchange Server Profile Analyzer (http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=10559). It is strongly recommended that you work with an internal or partner resource that is experienced with Exchange architectures to design for good performance in your environment.

In our example, we use the following average mailbox profile definition:
- 150 messages sent/received per day.
- Average message size of 75KB.
- 2GB mailbox quota.
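The following Python sketch illustrates the manual DAG CPU sizing flow described in Section 4.1: megacycles for the active mailboxes, plus roughly 10% per hosted passive database copy, divided by per-core capacity at an 80% utilization ceiling. The megacycle constants are the same illustrative assumptions used in the building block sketch, and the failover case is deliberately simplified; the Microsoft calculator remains the authoritative source.

```python
# Sketch of manual DAG mailbox CPU sizing: active-mailbox megacycles, plus
# ~10% per hosted passive database copy, at an 80% utilization ceiling.
# Constants are illustrative assumptions, not figures from this paper.
import math

MEGACYCLES_PER_MAILBOX = 3.0      # assumed for a 150 sent/received profile
MEGACYCLES_PER_CORE = 3330        # assumed 3.33GHz reference core
MAX_UTILIZATION = 0.80            # ceiling for a clustered mailbox server

def dag_node_cores(active_mailboxes, passive_copies_hosted):
    megacycles = active_mailboxes * MEGACYCLES_PER_MAILBOX
    megacycles *= 1 + 0.10 * passive_copies_hosted
    return math.ceil(megacycles / (MEGACYCLES_PER_CORE * MAX_UTILIZATION))

# Roughly the three-node DAG sized in the first example that follows:
# ~2,666 active mailboxes per node with two hosted passive copies during
# normal operation, and ~4,000 active per node after one node fails
# (a simplified view of the failover case).
print(dag_node_cores(2666, 2), "cores per node, normal operation")
print(dag_node_cores(4000, 2), "cores per node, one node failed")
```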
Note that these examples do not take into account a particular storage solution. Many VMware storage partners have done extensive testing on building blocks of varying capacities and workload characteristics. See the Microsoft Exchange 2010 on VMware: Partner Resource Catalog for storage-specific implementation details.

4.3 DAG Example 1 - 8,000 Active Mailboxes/150 Sent/Received

The following example demonstrates the mailbox configuration needed to support 8,000 active users protected by DAG clustering. We use the mailbox calculations to estimate the amount of processing and memory needed for the CAS and Hub Transport roles. We then translate the estimated resources into virtual machine and host configurations.

4.3.1 Mailbox Server Resource Requirements

The following table summarizes the resource requirements for the mailbox servers running in the DAG. For this example, we decided to spread our 8,000 users across three Mailbox Servers. Each server supports approximately 2,666 active users during normal operation and has the capacity to support approximately 4,000 active users during failover of one cluster node.

Table 13. Mailbox Server Resource Requirements

Mailbox Server (3 nodes) - Physical Resources (per server):
- CPU: 6 cores (69% max utilization)
- Memory: 48GB
- Database and Log Storage: 96 x 300GB 10K RPM FC/SCSI/SAS 3.5"
- Restore LUN Storage: 9 x 300GB 10K RPM FC/SCSI/SAS 3.5"
- Network: 1Gbps
4.3.2 Guest Virtual Machine Configuration

The resource requirements in Table 13 are translated into the following virtual machine resources.

Table 14. Exchange Virtual Machine Configuration

Mailbox Server (3 servers) - Virtual Hardware (per VM):
- CPU: 6 vCPU
- Memory: 48GB
- Storage: SCSI Controller 0
  - HDD 1: 64GB (OS and application files)
- Storage: SCSI Controller 1
  - HDD 2: 1321GB (DB1)
  - HDD 3: 1321GB (DB2)
  - HDD 4: 1321GB (DB3)
  - HDD 5: 1321GB (DB4)
  - HDD 6: 1321GB (DB5)
  - HDD 7: 1321GB (DB6)
  - HDD 8: 1321GB (DB7)
- Storage: SCSI Controller 2
  - HDD 9: 1321GB (DB8)
  - HDD 10: 1321GB (DB9)
  - HDD 11: 1321GB (DB10)
  - HDD 12: 1321GB (DB11)
  - HDD 13: 1321GB (DB12)
  - HDD 14: 1321GB (DB13)
  - HDD 15: 1321GB (DB14)
  - HDD 16: 1321GB (DB15)
- Storage: SCSI Controller 3
  - HDD 17: 1206GB (Restore LUN)
- Network: NIC 1
4.3.3 Guest Virtual Machine Storage Interaction

Figure 8 illustrates how the building block virtual machine interacts with the shared storage.

Figure 8. Building Block Virtual Machine Interaction with Shared Storage
4.3.4 Resource Requirements by Server Role

Using Microsoft and VMware best practices, we can estimate the resource requirements of each server role based on server role ratios and Microsoft sizing guidelines. In this case, we support 8,000 active users spread across three Exchange mailbox servers. For an in-depth look at the sizing and configuration process, see the Microsoft Exchange 2010 on VMware: Best Practices Guide.

Table 15. Exchange Server Role Resource Requirements

Mailbox Server (3 servers) - Physical Resources (per server):
- CPU: 6 cores
- Memory: 48GB
- Database and Log Storage: 96 x 300GB 10K RPM FC/SCSI/SAS 3.5"
- Restore LUN Storage: 9 x 300GB 10K RPM FC/SCSI/SAS 3.5"
- Network: 1Gbps

Client Access Server (3 servers) - Physical Resources (per server):
- CPU: 2 cores
- Memory: 8GB
- Storage: 24GB (OS and application files)
- Network: 1Gbps

Hub Transport Server (2 servers) - Physical Resources (per server):
- CPU: 1 core
- Memory: 4GB
- Storage: 20GB (OS, application, and log files); 32GB (DB, protocol/tracking logs, and temp files)
- Network: 1Gbps
4.3.5 Virtual Machine Distribution

Now that we understand the physical resource requirements and associated virtual hardware configuration, we can plan physical ESXi host hardware to meet those requirements. To build infrastructure availability into the architecture, we distribute the eight total virtual machines across three physical VMware ESXi host servers. Initial placement of virtual machines is relatively unimportant, especially if you are using DRS.

Table 16. Exchange Virtual Machine Distribution

ESXi Host 1:
- Exchange Mailbox VM 1 (6 vCPU/48GB RAM)
- Exchange Client Access VM 1 (2 vCPU/8GB RAM)
- Exchange Hub Transport VM 1 (1 vCPU/4GB RAM)

ESXi Host 2:
- Exchange Mailbox VM 2 (6 vCPU/48GB RAM)
- Exchange Client Access VM 2 (2 vCPU/8GB RAM)
- Exchange Hub Transport VM 2 (1 vCPU/4GB RAM)

ESXi Host 3:
- Exchange Mailbox VM 3 (6 vCPU/48GB RAM)
- Exchange Client Access VM 3 (2 vCPU/8GB RAM)

4.3.6 ESXi Host Specifications

Each ESXi host should provide enough physical hardware resources to accommodate the planned workload and provide some headroom in the event of a VMware HA failover or a planned vMotion migration of live virtual machines for host hardware maintenance. The following table summarizes the ESXi host hardware configuration based on our example architecture.

Table 17. ESXi Host Hardware Configuration Table

All ESXi hosts:
- 12 cores (2x6)
- 96GB RAM (extra 36GB above requirements for use in failover)
- 2 Fibre Channel HBAs
- 4 Gigabit network adapters
4.3.7 Initial Virtual Machine Placement

Although the workloads migrate automatically with DRS (including the mailbox servers), the following diagram is a useful planning tool for initial placement of virtual machines and for calculating host failover capacity. At initial placement, all ESXi hosts have failover headroom, but ESXi host 3 has the most.

Figure 9. Initial Virtual Machine Placement for 8,000 Active Users
4.4 DAG Example 2 - 16,000 Active Mailboxes/150 Sent/Received

The following example demonstrates the mailbox configuration needed to support 16,000 active users protected by DAG clustering. We use the mailbox calculations to estimate the amount of processing and memory needed for the CAS and Hub Transport roles. We then translate the estimated resources into virtual machine and host configurations.

4.4.1 Mailbox Server Resource Requirements

The following table summarizes the resource requirements for the mailbox servers running in the DAG. For this example, we decided to spread our 16,000 users across four Mailbox Servers. Each server supports approximately 4,000 active users during normal operation and has the capacity to support approximately 5,333 active users during failover of one cluster node.

Table 18. 16,000 Active Mailboxes (150 Sent/Received) DAG Node Requirements

Mailbox Server (4 nodes) - Physical Resources (per server):
- CPU: 8 cores (82% max utilization)
- Memory: 64GB
- OS and Application File Storage: 80GB (OS and application files)
- Database and Log Storage: 138 x 300GB 10K RPM FC/SCSI/SAS 3.5"
- Restore LUN Storage: 9 x 300GB 10K RPM FC/SCSI/SAS 3.5"
- Network: 1Gbps
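The per-node figures quoted above follow directly from dividing the 16,000 mailboxes across the available DAG nodes during normal operation and across the surviving nodes after the loss of one node; the short Python check below uses only the numbers from this example.

```python
# Quick check of the per-node active-user figures for the four-node DAG above:
# total users divided across all nodes during normal operation, and across the
# surviving nodes after one node fails.

def active_per_node(total_users, nodes, failed_nodes=0):
    return total_users / (nodes - failed_nodes)

print(round(active_per_node(16000, 4)))     # ~4,000 active per node, normal
print(round(active_per_node(16000, 4, 1)))  # ~5,333 active per node, failover
```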
4.4.2 Guest Virtual Machine Configuration

The resource requirements given in Table 18 are translated into the following virtual machine resources.

Table 19. Exchange Virtual Machine Configuration

Mailbox Server (4 servers) - Virtual Hardware (per VM):
- CPU: 8 vCPU
- Memory: 64GB
- Storage: SCSI Controller 0
  - HDD 1: 80GB (OS and application files)
- Storage: SCSI Controller 1
  - HDD 2: 1321GB (DB1)
  - HDD 3: 1321GB (DB2)
  - HDD 4: 1321GB (DB3)
  - HDD 5: 1321GB (DB4)
  - HDD 6: 1321GB (DB5)
  - HDD 7: 1321GB (DB6)
  - HDD 8: 1321GB (DB7)
  - HDD 9: 1321GB (DB8)
  - HDD 10: 1321GB (DB9)
  - HDD 11: 1321GB (DB10)
  - HDD 12: 1321GB (DB11)
  - HDD 13: 1321GB (DB12)
- Storage: SCSI Controller 2
  - HDD 14: 1321GB (DB13)
  - HDD 15: 1321GB (DB14)
  - HDD 16: 1321GB (DB15)
  - HDD 17: 1321GB (DB16)
  - HDD 18: 1321GB (DB17)
  - HDD 19: 1321GB (DB18)
  - HDD 20: 1321GB (DB19)
  - HDD 21: 1321GB (DB20)
  - HDD 22: 1321GB (DB21)
  - HDD 23: 1321GB (DB22)
  - HDD 24: 1321GB (DB23)
  - HDD 25: 1321GB (DB24)
- Storage: SCSI Controller 3
  - HDD 26: 1206GB (Restore LUN)
- Network: NIC 1
4.4.3 Guest Virtual Machine Storage Interaction

Figure 10 illustrates how the building block virtual machine interacts with the shared storage.

Figure 10. Building Block Virtual Machine Interaction with Shared Storage

4.4.4 Resource Requirements by Server Role

In this example, we increased the number of mailbox servers to four. We also recalculated the number of CAS and Hub Transport virtual machines per Microsoft guidelines.
Table 20. Exchange Server Role Resource Requirements

Mailbox Server (4 servers) - Physical Resources (per server):
- CPU: 8 cores
- Memory: 64GB
- OS and Application File Storage: 80GB (OS and application files)
- Database and Log Storage: 138 x 300GB 10K RPM FC/SCSI/SAS 3.5"
- Restore LUN Storage: 9 x 300GB 10K RPM FC/SCSI/SAS 3.5"
- Network: 1Gbps

Client Access Server (4 servers) - Physical Resources (per server):
- CPU: 4 cores
- Memory: 8GB
- Storage: 24GB (OS and application files)
- Network: 1Gbps

Hub Transport Server (2 servers) - Physical Resources (per server):
- CPU: 2 cores
- Memory: 4GB
- Storage: 20GB (OS, application, and log files); 32GB (DB, protocol/tracking logs, and temp files)
- Network: 1Gbps
4.4.5 Virtual Machine Distribution

In this example, we use four ESXi hosts connected to shared storage so that we can use advanced VMware features such as HA and DRS. To build infrastructure availability into the architecture, we distribute the 10 total virtual machines across four physical VMware ESXi host servers. Initial placement of virtual machines is relatively unimportant, especially if you're using DRS.

Table 21. Exchange Virtual Machine Distribution

ESXi Host 1:
- Exchange Mailbox VM 1 (8 vCPU/64GB RAM)
- Exchange Client Access VM 1 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 1 (2 vCPU/4GB RAM)

ESXi Host 2:
- Exchange Mailbox VM 2 (8 vCPU/64GB RAM)
- Exchange Client Access VM 2 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 2 (2 vCPU/4GB RAM)

ESXi Host 3:
- Exchange Mailbox VM 3 (8 vCPU/64GB RAM)
- Exchange Client Access VM 3 (4 vCPU/8GB RAM)

ESXi Host 4:
- Exchange Mailbox VM 4 (8 vCPU/64GB RAM)
- Exchange Client Access VM 4 (4 vCPU/8GB RAM)

4.4.6 ESXi Host Specifications

Each ESXi host should provide enough physical hardware resources to accommodate the planned workload and provide some headroom in the event of a VMware HA failover or a planned vMotion migration of live virtual machines for host hardware maintenance. The following table summarizes the ESXi host hardware configuration based on our example architecture.

Table 22. ESXi Host Hardware Configuration Table

All ESXi hosts:
- 16 cores (4x4)
- 96GB RAM (extra 20GB above requirements for use in failover)
- 2 Fibre Channel HBAs
- 4 Gigabit network adapters
4.4.7 Initial Virtual Machine Placement

Although the workloads migrate automatically with DRS (including the mailbox servers), the following diagram is a useful planning tool for initial placement of virtual machines and for calculating host failover capacity. At initial placement, ESXi hosts 3 and 4 have the most failover headroom.

Figure 11. Initial Virtual Machine Placement for 16,000 Active Users

4.5 DAG Example 3 - 64,000 Active Mailboxes/150 Sent/Received

The following example demonstrates the mailbox configuration needed to support 64,000 active users protected by DAG clustering. We use the mailbox calculations to estimate the amount of processing and memory needed for the CAS and Hub Transport roles. We then translate the estimated resources into virtual machine and host configurations.

4.5.1 Mailbox Server Resource Requirements

The following table summarizes the resource requirements for the mailbox servers running in the DAG. For this example, we've decided to spread our 64,000 users across 12 Mailbox Servers in two DAGs. Each server supports approximately 5,333 active users during normal operation and has the capacity to support approximately 6,400 active users during failover of one cluster node.

Note: For maximum consolidation, larger ESXi hosts were used to accommodate two mailbox server virtual machines each. When deploying a DAG on vSphere, best practice is to host no more than one node of any given DAG on a single physical host. To accommodate two DAG nodes per host, the users have been distributed across two DAGs. If using vSphere 5 or later, it is also possible to accommodate more users by deploying larger virtual machines.
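The note above imposes a placement constraint: when two mailbox server virtual machines share an ESXi host, they must belong to different DAGs. The following Python sketch checks a candidate placement against that constraint. The host and node names are illustrative assumptions, not a prescribed layout; in practice this constraint is typically enforced with DRS anti-affinity rules.

```python
# Placement check: with two mailbox server virtual machines per ESXi host,
# no host may carry two nodes of the same DAG. Host and node names are
# illustrative assumptions only.
from collections import Counter

placement = {
    f"esxi-{i}": [f"DAG-A-node{i}", f"DAG-B-node{i}"] for i in range(1, 7)
}

def same_dag_violations(placement):
    """Return hosts that carry more than one node of the same DAG."""
    bad_hosts = []
    for host, nodes in placement.items():
        dag_counts = Counter(node.rsplit("-node", 1)[0] for node in nodes)
        if any(count > 1 for count in dag_counts.values()):
            bad_hosts.append(host)
    return bad_hosts

print(same_dag_violations(placement) or "No host carries two nodes of the same DAG")
```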
Table 23. 64,000 Active Mailboxes (150 Sent/Received) DAG Node Requirements

Mailbox Server (12 nodes) - Physical Resources (per server):
- CPU: 8 cores
- Memory: 96GB
- OS and Application File Storage: 80GB (OS and application files)
- Database and Log Storage: 46 x 2000GB 15K RPM FC/SCSI/SAS 3.5"
- Restore LUN Storage: 3 x 2000GB 15K RPM FC/SCSI/SAS 3.5"
- Network: 1Gbps

4.5.2 Guest Virtual Machine Configuration

The resource requirements given in Table 23 are translated into the virtual machine resources listed in Table 24.

Table 24. Exchange Virtual Machine Configuration

Mailbox Server (12 servers) - Virtual Hardware (per VM):
- CPU: 8 vCPU
- Memory: 96GB
- Storage: SCSI Controller 0
  - HDD 1: 80GB (OS and application files)
- Storage: SCSI Controller 1
  - HDD 2: 2064GB (DB1)
  - HDD 3: 2064GB (DB2)
  - HDD 4: 2064GB (DB3)
  - HDD 5: 2064GB (DB4)
  - HDD 6: 2064GB (DB5)
  - HDD 7: 2064GB (DB6)
  - HDD 8: 2064GB (DB7)
  - HDD 9: 2064GB (DB8)
  - HDD 10: 2064GB (DB9)
- Storage: SCSI Controller 2
  - HDD 11: 2064GB (DB10)
  - HDD 12: 2064GB (DB11)
  - HDD 13: 2064GB (DB12)
  - HDD 14: 2064GB (DB13)
  - HDD 15: 2064GB (DB14)
  - HDD 16: 2064GB (DB15)
  - HDD 17: 2064GB (DB16)
  - HDD 18: 2064GB (DB17)
  - HDD 19: 2064GB (DB18)
  - HDD 20: 2064GB (DB19)
- Storage: SCSI Controller 3
  - HDD 34: 1884GB (Restore LUN)
- Network: NIC 1

4.5.3 Guest Virtual Machine Storage Interaction

The following figure illustrates how the building block virtual machine interacts with the shared storage.

Figure 12. Building Block Virtual Machine Interaction with Shared Storage
4.5.4 Resource Requirements by Server Role

In this example, we scaled the number of mailbox servers to 12 virtual machines. We also increased the Client Access Server count to 13 and the Hub Transport count to four.

Table 25. Exchange Server Role Resource Requirements

Mailbox Server (12 servers) - Physical Resources (per server):
- CPU: 8 cores (82% max utilization)
- Memory: 96GB
- OS and Application File Storage: 80GB (OS and application files)
- Database and Log Storage: 46 x 2000GB 15K RPM FC/SCSI/SAS 3.5"
- Restore LUN Storage: 3 x 2000GB 15K RPM FC/SCSI/SAS 3.5"
- Network: 1Gbps

Client Access Server (13 servers) - Physical Resources (per server):
- CPU: 4 cores
- Memory: 8GB
- Storage: 24GB (OS and application files)
- Network: 1Gbps

Hub Transport Server (4 servers) - Physical Resources (per server):
- CPU: 4 cores
- Memory: 4GB
- Storage: 20GB (OS, application, and log files); 32GB (DB, protocol/tracking logs, and temp files)
- Network: 1Gbps
4.5.5 Exchange Virtual Machine Distribution

In this example, we've increased the physical server count to six ESXi hosts and evenly balanced the initial virtual machine placement across them.

Table 26. Exchange Virtual Machine Distribution

ESXi Host 1:
- Exchange Mailbox VM 1 (8 vCPU/96GB RAM)
- Exchange Mailbox VM 2 (8 vCPU/96GB RAM)
- Exchange Client Access VM 1 (4 vCPU/8GB RAM)
- Exchange Client Access VM 2 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 1 (4 vCPU/4GB RAM)

ESXi Host 2:
- Exchange Mailbox VM 3 (8 vCPU/96GB RAM)
- Exchange Mailbox VM 4 (8 vCPU/96GB RAM)
- Exchange Client Access VM 3 (4 vCPU/8GB RAM)
- Exchange Client Access VM 4 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 2 (4 vCPU/4GB RAM)

ESXi Host 3:
- Exchange Mailbox VM 5 (8 vCPU/96GB RAM)
- Exchange Mailbox VM 6 (8 vCPU/96GB RAM)
- Exchange Client Access VM 5 (4 vCPU/8GB RAM)
- Exchange Client Access VM 6 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 3 (4 vCPU/4GB RAM)

ESXi Host 4:
- Exchange Mailbox VM 7 (8 vCPU/96GB RAM)
- Exchange Mailbox VM 8 (8 vCPU/96GB RAM)
- Exchange Client Access VM 7 (4 vCPU/8GB RAM)
- Exchange Client Access VM 8 (4 vCPU/8GB RAM)
- Exchange Hub Transport VM 4 (4 vCPU/4GB RAM)

ESXi Host 5:
- Exchange Mailbox VM 9 (8 vCPU/96GB RAM)
- Exchange Mailbox VM 10 (8 vCPU/96GB RAM)
- Exchange Client Access VM 9 (4 vCPU/8GB RAM)
- Exchange Client Access VM 10 (4 vCPU/8GB RAM)
- Exchange Client Access VM 11 (4 vCPU/8GB RAM)

ESXi Host 6:
- Exchange Mailbox VM 11 (8 vCPU/96GB RAM)
- Exchange Mailbox VM 12 (8 vCPU/96GB RAM)
- Exchange Client Access VM 12 (4 vCPU/8GB RAM)
- Exchange Client Access VM 13 (4 vCPU/8GB RAM)
4.5.6 ESXi Host Specifications

Each ESXi host should provide enough physical hardware resources to accommodate the planned workload and provide some headroom in the event of a VMware HA failover or a planned vMotion migration of live virtual machines for host hardware maintenance. Table 27 summarizes the ESXi host hardware configuration based on our example architecture. To get the most out of our hardware consolidation, we chose to implement 32-core hosts for this configuration.

Table 27. ESXi Host Hardware Configuration Table

All ESXi hosts:
- 32 cores (8x4)
- 256GB RAM (extra 40GB above requirements for use in failover)
- 2 Fibre Channel HBAs
- 4 Gigabit network adapters

4.5.7 Initial Virtual Machine Placement

Although the workloads migrate automatically with DRS (including the mailbox servers), the following diagram is a useful planning tool for initial placement of virtual machines and for calculating host failover capacity. At initial placement, ESXi host 6 has most of the failover headroom.

Figure 13. Initial Virtual Machine Placement for 64,000 Active Users
5. Design and Deployment Considerations

- Exchange aggressively utilizes all of the memory provided to it in a guest OS. vSphere can support higher levels of memory over-commitment if virtual machines share the same OS and application code pages. Even with page sharing, over-commitment should be attempted with caution to avoid performance impacts due to resource contention. VMware recommends setting the Memory Reservation to the amount of memory configured for the virtual machine.
- Follow Microsoft guidelines for storage sizing using the Exchange 2010 Mailbox Server Role Requirements Calculator (http://blogs.technet.com/b/exchange/archive/2009/11/09/3408737.aspx).
- Use the latest processor generations for their enhanced virtualization support.
- If deploying on larger hardware and pre-vSphere 5, consider deploying multiple DAGs to accommodate multiple mailbox server virtual machines on the same server hardware.
- VMware recommends at least four NIC ports per ESXi host machine to address network traffic, virtual machine security and isolation, vMotion, and management (service console).
- VMware recommends at least two HBA ports per ESXi host for redundancy.
- The NIC/HBA port counts are minimum recommendations for each ESXi host. More ports may be needed depending on the number of virtual machines and customer-specific network and storage requirements. The number should be determined by a detailed sizing analysis with the infrastructure vendor.

6. Summary

This guide shows example configurations of Exchange 2010 on VMware. These examples provide only high-level guidance and are not intended to reflect customer-specific workloads. Customers need to work with their infrastructure vendors to build a detailed sizing and architecture design that meets their individual requirements.