VMware vCloud Implementation Example



VMware vCloud Implementation Example
Public vCloud Service Provider
TECHNICAL WHITE PAPER

Table of Contents

1. Purpose and Overview
   1.1 Executive Summary
   1.2 Business Requirements
   1.3 Use Cases
   1.4 Document Purpose and Assumptions
2. VMware vCloud Architecture Design Overview
   2.1 vCloud Definition
   2.2 vCloud Component Design Overview
3. vSphere Architecture Design Overview
   3.1 High Level Architecture
   3.2 Site Considerations
   3.3 Design Specifications
4. vSphere Architecture Design - Management Cluster
   4.1 Compute Logical Design
       4.1.1 Datacenters
       4.1.2 vSphere Clusters
       4.1.3 Host Logical Design
   4.2 Network Logical Design
   4.3 Shared Storage Logical Design
   4.4 Management Components
   4.5 Management Component Resiliency Considerations
5. vSphere Architecture Design - Resource Groups
   5.1 Compute Logical Design
       5.1.1 Datacenters
       5.1.2 vSphere Clusters
       5.1.3 Host Logical Design
   5.2 Network Logical Design
   5.3 Shared Storage Logical Design
   5.4 Resource Group Datastore Considerations
       5.4.1 Datastore Sizing Estimation
6. vCloud Provider Design
   6.1 Abstractions and VMware vCloud Director Constructs
   6.2 Provider vDCs
   6.3 Organizations
   6.4 Networks
       6.4.1 External Networks
       6.4.2 Network Pools
       6.4.3 Networking Use Cases
   6.5 Catalogs
7. vCloud Security
   7.1 vSphere Security
       7.1.1 Host Security
       7.1.2 Network Security
       7.1.3 vCenter Security
   7.2 VMware vCloud Director Security
8. vCloud Management
   8.1 vSphere Host Setup Standardization
   8.2 VMware vCloud Director Logging
   8.3 vSphere Host Logging
   8.4 VMware vCloud Director Monitoring
Appendix A - Bill of Materials

1. Purpose and Overview

1.1 Executive Summary

ACME Service Provider will be implementing a service provider cloud built on VMware technologies. The objective of this project is to create a consumer-oriented self-service model that allows ACME Service Provider's customers to individually consume its aggregated resources.

This document defines the vCloud architecture and provides detailed descriptions and specifications of the architectural components and their relationships. The design is based on a combination of VMware best practices and specific business requirements and goals.

1.2 Business Requirements

The vCloud for ACME Service Provider will have the following characteristics and provide:

- Compute capacity to support 1,500 virtual machines running predefined workloads.
- Secure multi-tenancy, permitting more than one organization to share compute resources. In a public cloud, organizations typically represent different customers, and each customer may have several environments, such as development or production.
- A self-service portal where Infrastructure as a Service (IaaS) can be consumed from a catalog of predefined applications (vApp templates).
- A chargeback mechanism, so that resources can be methodically allocated, their consumption metered, and the associated cost billed back to the appropriate consumer.

Refer to the corresponding Service Definition for further details.

1.3 Use Cases

The target use cases for the vCloud include the following workloads:

- Development and test
- Pre-production
- Demos
- Training
- Tier 2 and Tier 3 applications

1.4 Document Purpose and Assumptions

This vCloud Architecture Design document is intended to serve as a reference for architects, and assumes familiarity with VMware products, including VMware vSphere, vCenter Server, and VMware vCloud Director. The vCloud architecture detailed in this document is organized into the following sections:

- vCloud Definition: Inventory of components that comprise the cloud solution.
- vSphere Management: vSphere and vCenter components that support running workloads. Design organized by compute, networking, and shared storage, detailed through logical and physical design specifications and considerations.
- vSphere Resources: Resources for cloud consumption, organized by compute, networking, and shared storage.
- Management and Security: Considerations as they apply to vSphere and VMware vCloud Director management components.
- vCloud Logical Design: VMware vCloud Director objects and configuration, and the relationship of VMware vCloud Director objects to vSphere objects.

This document is not intended as a substitute for detailed product documentation. Refer to the installation and administration guides for the appropriate product as necessary for further information.

2. VMware vCloud Architecture Design Overview

2.1 vCloud Definition

The VMware vCloud comprises the following components:

- VMware vCloud Director: Abstracts and coordinates the underlying vSphere compute, network, and storage resources. Includes:
  - VMware vCloud Director Server (1 or more instances, each installed on a Linux VM and referred to as a "cell")
  - VMware vCloud Director Database (1 instance per clustered set of VMware vCloud Director cells)
- VMware vSphere: Foundation of the underlying cloud resources. Includes:
  - VMware ESXi hosts (3 or more instances for the management cluster and each resource group)
  - vCenter Server (1 instance managing the management cluster of hosts, and 1 instance managing a resource group of hosts reserved for vCloud consumption)
  - vCenter Server Database (1 instance per vCenter Server)

- VMware vShield: Provides network security services, including NAT and firewall. Includes:
  - vShield Edge (deployed automatically as virtual appliances on hosts by VMware vCloud Director)
  - vShield Manager (1 instance per vCenter Server in the cloud resource groups)
- VMware vCenter Chargeback: Provides resource metering and chargeback models. Includes:
  - vCenter Chargeback Server (1 instance)
  - Chargeback Data Collector (1 instance)
  - vCloud Data Collector (1 instance)
  - VSM Data Collector (1 instance)

2.2 vCloud Component Design Overview

The components comprising the vCloud are detailed in this document in the following sections:

- vSphere Architecture - Management Cluster: vCenter Server and vCenter Database; vCenter cluster and ESXi hosts; vCenter Chargeback Server and Database; vCenter Chargeback collectors; vShield Manager and vShield Edge(s); VMware vCloud Director cell(s) and Database (Oracle)
- vSphere Architecture - Resource Groups: vCenter Server(s) and vCenter Database(s); vCenter cluster(s) and ESXi hosts

3. vSphere Architecture Design Overview

3.1 High Level Architecture

vSphere resources are organized and separated into:

- A management cluster containing all core components and services needed to run the cloud.
- One or more resource groups that represent dedicated resources for cloud consumption. Each resource group is a cluster of ESXi hosts managed by a vCenter Server and is under the control of VMware vCloud Director. Multiple resource groups can be managed by the same VMware vCloud Director.

Reasons for organizing and separating vSphere resources along these lines are:

- Facilitating quicker troubleshooting and problem resolution. Management components are strictly contained in a relatively small and manageable management cluster; spread across a large set of host clusters, such workloads would be time-consuming to track down and manage.

- Management components are separate from the resources they are managing.
- Resources allocated for cloud use have little management overhead reserved; for example, cloud resource groups do not host vCenter VMs.
- Resource groups can be consistently and transparently managed, carved up, and scaled horizontally.

[Figure 1. vCloud Logical Architecture Overview: a vSphere 4.1 management cluster on SAN and NFS shared storage runs the VCD, vShield Manager, vCenter, AD/DNS, Oracle 11g, Chargeback, MSSQL, and logging/monitoring VMs; vSphere 4.1 resource group clusters on SAN shared storage back the Basic and Committed Provider vDCs.]

The following diagram depicts the physical design corresponding to the logical architecture described above.

[Figure 2. vCloud Physical Design Overview]

3.2 Site Considerations

The management cluster resides in a single physical datacenter, and the resource groups all reside within a single physical datacenter. This ensures a consistent level of service by avoiding the potential latency issues that would arise if workloads had to be moved from one datacenter to another over a slower or less reliable network. Neither secondary sites nor disaster recovery (DR) sites are in the scope of this project.

3.3 Design Specifications

The architecture is described by a logical design that is independent of hardware-specific details. The focus is on components, their relationships, and their quantities. Additional details are found in Appendix A.

4. vSphere Architecture Design - Management Cluster

4.1 Compute Logical Design

The compute design encompasses the ESXi host clusters. In this section the scope is limited to the infrastructure supporting the management component workloads.

4.1.1 Datacenters

The management cluster is contained within a single vCenter datacenter.

4.1.2 vSphere Clusters

The management cluster will be configured as follows:

- Number of ESXi hosts: 4
- VMware DRS configuration: Fully automated
- VMware DRS migration threshold: 3 stars
- VMware HA enable host monitoring: Yes
- VMware HA admission control policy: Cluster tolerates 1 host failure (percentage-based)
- VMware HA percentage: 25%
- VMware HA admission control response: Prevent VMs from being powered on if they violate availability constraints
- VMware HA default VM restart priority: N/A
- VMware HA host isolation response: Leave powered on
- VMware HA enable VM monitoring: Yes
- VMware HA VM monitoring sensitivity: Medium

Table 1. vSphere Clusters - Management Cluster

4.1.3 Host Logical Design

Each ESXi host in the management cluster will have the following specifications:

- Host type and version: VMware ESXi Installable
- Processors: x86 compatible
- Storage: Local for ESXi binaries; shared for virtual machines
- Networking: Connectivity to all needed VLANs
- Memory: Sized to support estimated workloads

Table 2. Host Logical Design Specifications - Management Cluster
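The percentage-based admission control value in Table 1 follows directly from the host count: the capacity of the tolerated host failures is reserved as a share of the whole cluster. A minimal sketch of that arithmetic (an illustration of the sizing logic, not VMware's implementation):

```python
import math

def ha_reserve_percentage(total_hosts: int, host_failures_tolerated: int) -> int:
    """Share of cluster capacity HA keeps unreserved so the surviving
    hosts can restart the VMs of the failed ones."""
    return math.ceil(host_failures_tolerated / total_hosts * 100)

# Management cluster (Table 1): 4 hosts tolerating 1 host failure
print(ha_reserve_percentage(4, 1))  # 25
```

Applied to the resource group clusters later in this document (4 hosts at a 50% HA percentage), the same arithmetic corresponds to tolerating two host failures.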

4.2 Network Logical Design

The network design section defines how the vSphere virtual networking will be configured. Following best practices, the network architecture will meet these requirements:

- Separate networks for vSphere management, VM connectivity, vMotion traffic, and Fault Tolerance logging (VM record/replay) traffic
- Redundant vSwitches with at least 2 active physical adapter ports each
- Redundancy across different physical adapters to protect against NIC or PCI slot failure
- Redundancy at the physical switch level

Virtual switch configuration:

- vSwitch0 (standard): Management Console, vMotion, FT; 2 x 5 GigE physical NIC ports
- vSwitch1 (distributed): VMs & NFS; 2 x 5 GigE physical NIC ports

Table 3. Virtual Switch Configuration - Management Cluster

When using the distributed virtual switch, the number of dvUplink ports equals the number of physical NIC ports on each host. The physical NIC ports will be connected to redundant physical switches.

[Figure 3. vSphere Logical Network Design - Management Cluster: vSwitch0 carries Management (VLAN U, vmnic0) and vMotion (VLAN V, vmnic1); vSwitch1 carries DMZ (VLAN W, vmnic2) and Mgmt (VLAN X, vmnic3); an optional dvSwitch1 carries FT (VLAN Y) on vmnic4/vmnic5; uplinks connect through redundant distribution and core switches.]

Virtual switch configuration settings:

- Load balancing: Route based on NIC load (for vDS)
- Failover detection: Link status
- Notify switches: Enabled
- Failover order: All active, except for the Management Network (Management Console: Active/Standby; vMotion: Standby/Active)

Table 4. Virtual Switch Configuration Settings - Management Cluster

4.3 Shared Storage Logical Design

The shared storage design section defines how the vSphere datastores will be configured. The same storage will be used for both the management cluster and the VMware vCloud Director resource groups. Following best practices, the shared storage architecture will meet these requirements:

- Storage paths will be redundant at the host (connector), switch, and storage array levels.
- All hosts in a cluster will have access to the same datastores.

Shared storage specifications:

- Number of initial LUNs: 2
- LUN size: 500 GB
- Zoning: Single-initiator, single-target zones
- VMFS datastores per LUN: 1
- VMs per LUN: 12 (distribute redundant VMs)

Table 5. Shared Storage Logical Design Specifications - Management Cluster

4.4 Management Components

The following components will run as VMs on the management cluster hosts:

- vCenter Servers
- vCenter Database
- vCenter Update Manager Database
- vCloud Director cells
- vCloud Director Database
- vCenter Chargeback Server
- vCenter Chargeback Database
- vShield Manager

VMware vCloud Director cells are stateless in operation, with all information stored in the database. Some caching happens at the VMware vCloud Director cell level, such as SSL session data, but all refreshes and updates are made against information stored in the database. As such, the database is critical to the operation of VMware vCloud Director. In a production environment, VMware recommends that the database be housed in a cluster configuration or, at the very least, with a hot standby available.

[Figure 4. vCenter Chargeback Logical Diagram: the vCenter Chargeback Server and its data collectors (Chargeback, vCloud, VSM) connect to the vCenter and Chargeback databases over JDBC, to vCenter Server via the VIM API, and to the vCloud Director cells and vShield Manager over HTTPS; the Chargeback UI is reached through a load balancer.]

4.5 Management Component Resiliency Considerations

The following management components will rely on HA and FT for redundancy:

- vCenter Server: HA enabled; FT not enabled
- VMware vCloud Director: HA enabled; FT not enabled
- vCenter Chargeback Server: HA enabled; FT not enabled
- vShield Manager: HA enabled; FT enabled

Table 6. Management Component Resiliency

5. vSphere Architecture Design - Resource Groups

5.1 Compute Logical Design

The compute design encompasses the ESXi host clusters. In this section the scope is limited to the infrastructure dedicated to the cloud workloads.

5.1.1 Datacenters

Resource groups can map to different datacenters and are managed by a single vCenter Server.

5.1.2 vSphere Clusters

All vSphere clusters will be configured similarly, with the following specifications:

- VMware DRS configuration: Fully automated
- VMware DRS migration threshold: 3 stars
- VMware HA enable host monitoring: Yes
- VMware HA admission control policy: Percentage-based
- VMware HA admission control response: Power on VMs even if they violate availability constraints
- VMware HA default VM restart priority: Medium for all VMs
- VMware HA host isolation response: Leave powered on
- VMware HA enable VM monitoring: No
- VMware HA VM monitoring sensitivity: N/A

Table 7. vSphere Cluster Configuration - Resource Groups

The resource groups will have the following vSphere clusters:

- RG01COMM01: vCenter Server stg-vcvshield.acme.com; 4 hosts; 50% HA percentage
- RG01PAYG01: vCenter Server stg-vcvshield.acme.com; 4 hosts; 50% HA percentage

Table 8. vSphere Clusters - Resource Groups

5.1.3 Host Logical Design

Each ESXi host in the resource groups will have the following specifications:

- Host type and version: VMware ESXi Installable
- Processors: x86 compatible
- Storage: Local for ESXi binaries; shared for virtual machines
- Networking: Connectivity to all needed VLANs
- Memory: Enough to run estimated workloads

Table 9. Host Logical Design Specifications - Resource Groups

5.2 Network Logical Design

The network design section defines how the vSphere virtual networking will be configured. Following best practices, the network architecture will meet these requirements:

- Separate networks for vSphere management, VM connectivity, and vMotion traffic
- Redundant vSwitches with at least 2 active physical adapter ports
- Redundancy across different physical adapters to protect against NIC or PCI slot failure
- Redundancy at the physical switch level

Virtual switch configuration:

- vSwitch0 (standard): Management Console, vMotion; 2 x 3.4 GigE physical NIC ports
- dvSwitch1 (distributed): External networks; 2 x 3.3 GigE physical NIC ports
- dvSwitch2 (distributed): Network pools; 2 x 3.3 GigE physical NIC ports

Table 10. Virtual Switch Configuration - Resource Groups

When using the distributed virtual switch, the number of dvUplink ports equals the number of physical NIC ports on each host. The physical NIC ports will be connected to redundant physical switches. The following diagram depicts the virtual network infrastructure design.

[Figure 5. vSphere Logical Network Design - Resource Groups]

Virtual switch configuration settings:

- Load balancing: Route based on NIC load (for vDS)
- Failover detection: Link status
- Notify switches: Enabled
- Failover order: All active, except for the Management Network (Management Console: Active/Standby; vMotion: Standby/Active)

Table 11. Virtual Switch Configuration Settings - Resource Groups

5.3 Shared Storage Logical Design

The shared storage design section defines how the vSphere datastores will be configured. Following best practices, the shared storage architecture will meet these requirements:

- Storage paths will be redundant at the host (connector), switch, and storage array levels.
- All hosts in a cluster will have access to the same datastores.

Shared storage specifications:

- Number of initial LUNs: 34
- LUN size: 1 TB
- Zoning: Single-initiator, single-target zones
- VMFS datastores per LUN: 1
- VMs per LUN: 12

Table 12. Shared Storage Logical Design Specifications - Resource Groups

5.4 Resource Group Datastore Considerations

The most common aspect of LUN/datastore sizing is deciding what limit to implement on the number of VMs per datastore. The reason for limiting this number is to minimize the potential for SCSI locking and to spread I/O across as many storage processors as possible. Most mainstream storage vendors provide VMware-specific guidelines for this limit, and VMware recommends an upper limit of 25 VMs per VMFS datastore, regardless of storage platform. It is often forgotten that the number of VMs per LUN is also influenced by the size and I/O requirements of the VMs and, perhaps more importantly, by the selected storage solution and even the disk types.

When VMware vCloud Director provisions VMs, it automatically places them on datastores based on the free disk space of each of the associated datastores in an organization virtual datacenter (Org vDC). Because of this mechanism, the size of the LUNs and the number of VMs per LUN should be kept relatively low to avoid possible I/O contention.

When considering the number of VMs to place on a single datastore, the following factors should be weighed against any recommended VMs-per-LUN ratio:

- Average VM workload/profile (in particular, the amount of I/O)
- Typical VM size (including configuration files, logs, swap files, and snapshot files)
- VMFS metadata
- Maximum IOPS and throughput requirement per LUN, which depends on the storage array and its design
- Maximum RTO if a LUN is lost, that is, your backup and restore design

If we approach this from an average I/O profile, it would be tempting to create all LUNs the same, say as RAID 5, and let the law of averages take care of I/O distribution across all the LUNs and the VMs on those LUNs.
Another approach is to create LUNs with different RAID profiles based on the workloads anticipated within an organization. This would mean creating Provider virtual datacenters (vDCs) that take into account the allocation models as well as the storage profile in use. As an example, we would end up with the following types of Provider vDCs:

- Committed_High_Performance
- Committed_Generic
- PAYG_High_Performance
- PAYG_Generic

As a starting point, VMware recommends beginning with RAID 5 storage profiles and creating additional storage tiers only as one-offs to address specific customer requirements. This also creates an opportunity for tiered pricing in a service provider environment. The VMware Scalable Storage Performance study provides additional information regarding vSphere storage design.
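Section 5.4 notes that vCloud Director places newly provisioned VMs on the Org vDC datastore with the most free space. A simplified model of that behavior (illustrative only; the datastore names and sizes are hypothetical) shows why keeping LUNs small keeps placement even:

```python
def pick_datastore(free_space_gb: dict, vm_footprint_gb: float) -> str:
    """Choose the datastore with the most free space that still fits the VM,
    mirroring the free-space-based placement described above."""
    fits = {ds: free for ds, free in free_space_gb.items() if free >= vm_footprint_gb}
    if not fits:
        raise ValueError("no datastore in the Org vDC can hold this VM")
    return max(fits, key=fits.get)

# Hypothetical Org vDC datastores with their free space in GB
org_vdc = {"ds01": 420.0, "ds02": 610.0, "ds03": 180.0}
print(pick_datastore(org_vdc, 66.0))  # ds02
```

Because placement considers only free space, not I/O load, a few busy VMs can still land together on one LUN, which is the contention risk the VMs-per-LUN limit guards against.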

5.4.1 Datastore Sizing Estimation

An estimate of the typical datastore size can be approximated from the following factors:

- Maximum number of VMs per datastore: 15
- Average size of virtual disk(s) per VM: 60 GB
- Average memory size per VM: 2 GB
- Safety margin: 10%

Table 13. Datastore Size Estimation Factors

For example: ((15 x 60 GB) + (15 x 2 GB)) + 10% = (900 GB + 30 GB) x 1.1 = 1,023 GB, or approximately 1.023 TB.

6. vCloud Provider Design

6.1 Abstractions and VMware vCloud Director Constructs

A key tenet of the cloud architecture is resource pooling and abstraction. VMware vCloud Director further abstracts the virtualized resources presented by vSphere by providing logical constructs that map to vSphere logical resources:

- Organization: The organizational unit to which resources (vDCs) are allocated.
- Virtual Datacenter (vDC): A deployment environment, scoped to an organization, in which virtual machines run.
- Provider Virtual Datacenter: vSphere resource groupings that power vDCs, further segmented into organization vDCs.
- Organization Virtual Datacenter (Org vDC): A subset of a provider vDC.

[Figure 6. VMware vCloud Director Abstraction Layer Diagram: vCloud constructs (Org networks, Org vDCs, external networks, network pools, provider vDCs) map onto vSphere resources (port groups, resource pools, vDS, compute clusters, datastores), which in turn map onto physical VLANs, networks, hosts, and storage arrays.]
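The Section 5.4.1 datastore estimate is simple enough to encode as a helper for trying other Table 13 values; a minimal sketch (the memory term accounts for per-VM swap files, as discussed in Section 5.4):

```python
def datastore_size_gb(vms_per_datastore: int, avg_disk_gb: float,
                      avg_mem_gb: float, safety_margin: float = 0.10) -> float:
    """Estimated datastore size: virtual disks plus per-VM swap (equal to
    configured memory), grown by a margin for snapshots, logs, and metadata."""
    raw = vms_per_datastore * (avg_disk_gb + avg_mem_gb)
    return raw * (1 + safety_margin)

# Table 13 values: 15 VMs x (60 GB disk + 2 GB memory), plus 10% margin
print(round(datastore_size_gb(15, 60, 2)))  # 1023 (GB), i.e. ~1.023 TB
```

This reproduces the worked example above; raising the VM count to the 25-VM ceiling from Section 5.4 with the same averages would call for roughly 1.7 TB per datastore.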

6.2 Provider vDCs

The following diagram shows how the Provider vDCs map back to vSphere resources.

[Figure 7. Provider vDCs in Resource Groups: vCenter Server vcenter01.vmware.com manages the Basic Provider vDC (vhost03-vhost06) and the Committed Provider vDC (vhost07-vhost10), backed by VMFS datastores provider01 through provider04 (1 TB each).]

All ESXi hosts will belong to a vSphere cluster, and each cluster will be associated with one and only one Provider vDC. A vSphere cluster will scale to 25 hosts, allowing for up to 14 clusters per vCenter Server (bound by the maximum number of hosts possible per datacenter) and an upper limit of 10,000 VMs per resource group (a vCenter limit). The recommendation is to start with 8 hosts in a cluster and add hosts to the cluster as dictated by customer consumption. However, per the public cloud service definition, each of the Provider vDCs (Basic and Committed) will start with 4 hosts. When utilization of the resources reaches 60%, it is recommended that a new Provider vDC/cluster be deployed for all new customers. This leaves room for growth within the existing Provider vDCs without necessitating a migration of customers as their utilization nears the limit of a cluster's resources.
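The scaling limits just quoted put a hard ceiling on one resource group; a quick check of the arithmetic (using only the limits stated above):

```python
MAX_HOSTS_PER_CLUSTER = 25          # vSphere cluster limit used in this design
MAX_CLUSTERS_PER_VCENTER = 14       # bound by hosts per vCenter datacenter
MAX_VMS_PER_RESOURCE_GROUP = 10_000 # vCenter limit

max_hosts = MAX_HOSTS_PER_CLUSTER * MAX_CLUSTERS_PER_VCENTER
print(max_hosts)  # 350 ESXi hosts in a fully built-out resource group

# Average VMs per host if the vCenter VM limit is reached first
print(round(MAX_VMS_PER_RESOURCE_GROUP / max_hosts, 1))  # 28.6
```

The 26:1 average consolidation ratio discussed next therefore sits safely under the roughly 28.6 VMs-per-host ceiling imposed by the vCenter limit.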

As an example, a fully loaded resource group will contain 14 Provider vDCs and up to 350 ESXi hosts, giving an average consolidation ratio of 26:1, assuming a 5:1 ratio of vCPU:pCPU. To increase this consolidation ratio, ACME Service Provider would need to increase the vCPU:pCPU ratio it is willing to support. The risk associated with increased CPU overcommitment is mainly degraded overall performance, which can show up as higher-than-acceptable vCPU ready times. The vCPU:pCPU ratio reflects the amount of CPU overcommitment, for the available cores, that ACME Service Provider is comfortable with. For VMs that are not busy, this ratio can be increased without any undesirable effect on performance, and monitoring vCPU ready times helps identify whether the ratio needs to be increased or decreased on a per-cluster basis. A 5:1 ratio is a good starting point for a multicore system.

A Provider vDC can map to only one vSphere cluster, but can map to multiple datastores and networks. Multiple Provider vDCs are used to map to different types/tiers of resources:
- Compute: a function of the mapped vSphere clusters and the resources that back them
- Storage: a function of the underlying storage types of the mapped datastores
- Networking: a function of the mapped vSphere networking, in terms of speed and connectivity

Multiple Provider vDCs are created for the following reasons:

- The cloud requires more compute capacity than a single vSphere cluster provides (a vSphere resource pool cannot span vSphere clusters)
- Tiered storage is required; each Provider vDC maps to datastores on storage with different characteristics
- A requirement for workloads to run on physically separate infrastructure

Provider vDC specifications:

- Number of Provider vDCs: 2
- Number of default External Networks: 1 per Organization

Table 14. Provider vDC Specifications

Provider vDC to vSphere mapping:

- RG01COMM01: cluster vcd01vc01c01; datastores vcd01vc01c01l001, vcd01vc01c01l002, vcd01vc01c01l003; vSphere network Internet01; function: PvDC for 400 VMs
- RG01PAYG01: cluster vcd01vc01c02; datastores vcd01vc01c02l004, vcd01vc01c02l005, vcd01vc01c02l006; vSphere network Internet01; function: PvDC for 400 VMs

Table 15. Provider vDC to vSphere Mapping

VMware recommends assessing workloads to assist in VM sizing. The following standard sizing table can be used as a reference for future design activities.

VM size distribution:

- 1 vCPU / 1 GB RAM: 45%
- 2 vCPU / 2 GB RAM: 35%
- 4 vCPU / 4 GB RAM: 15%
- 8 vCPU / 8 GB RAM: 5%

Table 16. Virtual Machine Sizing and Distribution

6.3 Organizations

- Total World Cable: Cable company with a test/dev environment
- Emca Corporation: Corporate Internet presence
- Totally High School: E-learning infrastructure

Table 17. Organizations

6.4 Networks

- Number of default External Networks: 1 per Organization
- Number of default vApp networks: End-user controlled
- Number of default Organization networks: 2 (Internet, Organization Isolated)
- Network pool types used: vCloud Director Network Isolation (VCD-NI)
- Pool of publicly routable IP addresses available: Yes, for Internet access, assigned as a /27

Table 18. Network Specifications

6.4.1 External Networks

ACME Service Provider will provide the following External Networks based on need:

- Internet: port group-backed network
- VPN/MPLS: port group-backed network (optional)

Part of the provisioning for an organization involves creating an External Network for each Organization (and a VPN network if desired) and associating them with the required Org networks. The public cloud service definition requires that the Internet connection be a NAT-routed Org network with at least a /27 network associated with it.

6.4.2 Network Pools

ACME Service Provider will provide the following Network Pools based on need:

- VCD-NI-backed
- VLAN-backed (optional)

For the vCD-NI-backed pool, it is recommended that the transport VLAN be one not otherwise in use within the ACME Service Provider infrastructure, for increased security and isolation.

6.4.3. Networking Use Cases

ACME Service Provider will provide the following four use cases (one of which, the VPN use case, is optional) for its customers, based on the use cases currently deployed in their physical environment:

1. Customers should be able to completely isolate vApps for their Development and/or Test users.

Figure 8. vApp Isolated Network

2. Customers should be able to connect vApps to the Organization Networks either directly or via fencing, where the Organization Networks have no access to the public Internet.

Figure 9. vApp Network Direct-Attached to Organization Network

This is an example of a Dev/Test environment where developers use different IP addresses in their vApps.

Figure 10. vApp Network Fenced to Organization Network

This is an example of a Dev/Test environment where developers have duplicate IP addresses in their vApps; fencing allows identical addressing across multiple vApps.

3. Customers should be able to connect the Organization Networks either directly or via fencing to the External Networks.

Figure 11. vApp Network Fenced to Direct-Attached Organization Network

This is a good option when the External Network is a private network, such as a VPN- or MPLS-terminated network.

Figure 12. vApp Network Fenced to Fenced Organization Network

This is one way to connect the External Network while preserving VLANs, by sharing the same VLAN for Internet_Net among multiple Organizations. vShield Edge is required to provide NAT and firewall services for the different Organizations.

Once the External Networks have been created, a VMware vCloud Director administrator can create the Organization Networks as shown above. The vShield Edge (VSE) device performs address translation between the different networks. The VSE can be configured to provide port address translation to jump hosts located inside the networks, or to give direct access to individual hosts.

VMware recommends separating External and Organization Networks by using two separate dvSwitches. The External Networks are provisioned on dvSwitch1, also known as the ProviderDVS. This is also where all static port groups, such as VPN and MPLS connections, are defined, because the VLANs for these networks are known ahead of time. The Organization Networks, on the other hand, are provisioned on dvSwitch2. This is where all Organization Networks are created, as well as the different vCD-NI-backed and VLAN-backed pools. This dvSwitch is also configured with the higher MTU of 1524 needed to accommodate vCD-NI networks.

6.5 Catalogs

The catalog will contain ACME Service Provider-specific templates that are made available to all Organizations. ACME Service Provider will provide a set of catalog entries covering the classes of virtual machines, templates, and media specified in the corresponding Service Definition.
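Two figures from the networking design in Section 6.4 can be sanity-checked quickly: the per-Organization /27 public address pool, and the MTU of 1524 required for vCD-NI transport. The sketch below uses Python's standard `ipaddress` module; the `198.51.100.0/27` prefix is an illustrative documentation range, not an address block from this design.

```python
import ipaddress

# Each Organization's Internet-facing Org Network is backed by a /27
# public pool (Table 18). A /27 holds 32 addresses, 30 of them usable
# once the network and broadcast addresses are excluded.
pool = ipaddress.ip_network("198.51.100.0/27")  # illustrative RFC 5737 prefix
usable = list(pool.hosts())
print(pool.num_addresses)  # 32 total addresses
print(len(usable))         # 30 usable host addresses

# vCD-NI encapsulates guest frames in an extra header, so the transport
# dvSwitch MTU is raised from the standard 1500 to 1524, leaving
# headroom for the encapsulation overhead.
VCDNI_MTU = 1524
STANDARD_MTU = 1500
print(VCDNI_MTU - STANDARD_MTU)  # 24 bytes of encapsulation headroom
```

If the dvSwitch MTU were left at 1500, full-size guest frames would be fragmented or dropped once encapsulated, which is why the design calls out the 1524 setting explicitly.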

7. vCloud Security

7.1 vSphere Security

7.1.1. Host Security

Chosen in part for its limited management console functionality, ESXi will be configured with a strong root password stored following corporate password procedures. ESXi lockdown mode will also be enabled to prevent root access to the hosts over the network, and appropriate security policies and procedures will be created and enforced to govern the systems. Because ESXi cannot be accessed over the network in this configuration, sophisticated host-based firewall configurations are not required.

7.1.2. Network Security

Virtual switch security settings will be as follows.

FUNCTION | MANAGEMENT CLUSTER | RESOURCE GROUP
Promiscuous Mode | Reject | Reject
MAC Address Changes | Reject | Reject
Forged Transmits | Reject | Reject

Table 19. Virtual Switch Security Settings

7.1.3. vCenter Security

vCenter Server is installed using a local administrator account. When vCenter Server is joined to a domain, any domain administrator gains administrative privileges to vCenter. To remove this potential security risk, a new vCenter Administrators group will be created in Active Directory and assigned the vCenter Server Administrator role, making it possible to remove the local Administrators group from that role.

7.2 VMware vCloud Director Security

Standard Linux hardening guidelines should be applied to the VMware vCloud Director cells. There is no need for local users, and the root password is only needed during installation and upgrades of the VMware vCloud Director binaries. Additionally, certain network ports must be open for vCloud Director use. Refer to the vCloud Director Administrator's Guide for further information.

8. vCloud Management

8.1 vSphere Host Setup Standardization

Host Profiles can be used to automatically configure networking, storage, security, and other features.
This feature, along with automated installation of ESXi hosts, is used to standardize all host configurations. ACME Service Provider will need to standardize on a build process with their hardware integrator so that, by the time equipment is delivered to an ACME Service Provider datacenter, very little needs to be done to get a host up and running beyond rolling the rack in and cabling it into the environment.

VM Monitoring is enabled at the cluster level within HA and uses the VMware Tools heartbeat to verify that a virtual machine is alive. When a virtual machine fails, causing the VMware Tools heartbeat to stop updating, VM Monitoring checks whether any storage or networking I/O has occurred over the last 120 seconds; if not, the virtual machine is restarted.
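The VM Monitoring restart decision described above can be sketched as a simple predicate. This is a hypothetical illustration of the logic, not the actual HA implementation; the function name and parameters are invented for clarity, and only the 120-second I/O window comes from the design.

```python
# Illustrative sketch: a VM whose VMware Tools heartbeat has stopped is
# restarted only if it has also shown no storage or network I/O within
# the last 120 seconds. Names here are hypothetical.
IO_CHECK_WINDOW_S = 120

def should_restart(heartbeat_ok: bool, seconds_since_last_io: float) -> bool:
    """Return True if VM Monitoring would restart the virtual machine."""
    if heartbeat_ok:
        return False  # Tools heartbeat present: VM considered alive
    # Heartbeat lost: fall back to checking for recent storage/network I/O.
    return seconds_since_last_io >= IO_CHECK_WINDOW_S

print(should_restart(True, 300))   # False: heartbeat still updating
print(should_restart(False, 30))   # False: recent I/O observed, no restart
print(should_restart(False, 300))  # True: no heartbeat and no recent I/O
```

The I/O check is what prevents false positives: a VM whose Tools service has crashed but which is still doing useful work is left alone.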

As such, VMware recommends enabling both VMware HA and VM Monitoring on the Management cluster and the Resource Group clusters.

8.2 VMware vCloud Director Logging

Each VMware vCloud Director cell logs audit messages to the database, where they are retained for 90 days by default. If log retention longer than 90 days or centralized logging is required, an external syslog server can be configured as a duplicate destination for the events that are logged.

8.3 vSphere Host Logging

Remote logging to a central host greatly increases administration capabilities. Gathering log files on a central server facilitates monitoring of all hosts with a single tool, and enables aggregate analysis and searching for evidence of coordinated attacks on multiple hosts. This applies to the following logs:

- messages (host log)
- hostd (host agent log)
- vpxa (vCenter agent log)

Within each ESXi host, syslog behavior is controlled by the Syslog advanced settings. These settings determine the central logging host that will receive the syslog messages. The hostname must be resolvable using DNS. For this design, each ESXi host will be configured to send log files to a central syslog server residing in the Management cluster.

8.4 VMware vCloud Director Monitoring

The following items should be monitored through VMware vCloud Director. As of VMware vCloud Director 1.0, this must be done with custom queries against the VMware vCloud Director Admin API to obtain consumption data for the different components. Some components can also be monitored by aggregating the syslog-generated logs from the different VMware vCloud Director cells on the centralized log server.
SCOPE | ITEM
System | Leases, Quotas, Limits
vSphere Resources | CPU, Memory, Network IP address pool, Storage free space
Virtual Machines/vApps | Not in scope

Table 20. VMware vCloud Director Monitoring Items
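The log-aggregation approach described in Sections 8.2 through 8.4 can be sketched as a small tally over forwarded syslog lines on the central log server. The line format, hostnames, and function name below are illustrative assumptions, not the actual format emitted by ESXi or vCloud Director cells.

```python
# Hypothetical sketch: count forwarded syslog events per originating
# host on the central log server. Sample lines and names are invented.
from collections import Counter

def count_events_by_host(lines):
    """Tally syslog lines per originating host (fourth whitespace field)."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) >= 4:
            counts[fields[3]] += 1  # syslog header: month day time host ...
    return counts

sample = [
    "Oct 12 10:01:02 esx01 vpxa: [info] task completed",
    "Oct 12 10:01:05 esx01 hostd: [info] vm powered on",
    "Oct 12 10:01:09 vcd-cell01 vcloud: audit event recorded",
]
print(count_events_by_host(sample))  # Counter({'esx01': 2, 'vcd-cell01': 1})
```

A per-host tally like this is the starting point for the aggregate analysis mentioned in Section 8.3, such as spotting a host that suddenly logs far more than its peers.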

Appendix A – Bill of Materials

The inventory and specifications of the components comprising the vCloud are provided below.

ITEM | QUANTITY | NAME/DESCRIPTION
ESXi Host | 4 | Vendor X compute resource, 2-socket Intel Xeon X5650 (4 core, 2.66 GHz, 12 MB L3, 95 W, 8 GB RAM), Version: 4.1
vCenter Server | 1 | 2 vCPU, 4 GB RAM, 1 vNIC, min. free disk space: 10 GB, Version: 4.1
vCenter and Update Manager Database | - | N/A
VMware vCloud Director Cell | 2 | 1 vCPU, 2 GB RAM, 2 vNIC, Version: 1.0
VMware vCloud Director Database | 1 | 2 vCPU, 4 GB RAM, 1 vNIC, Oracle 11g R2
vShield Manager | 1 | 1 vCPU, 4 GB RAM, 1 vNIC, Version: 4.1
vCenter Chargeback Server | 1 | 2 vCPU, 2 GB RAM, 1 vNIC, Guest OS: Windows 2008 x64, Version: 1.5
vCenter Chargeback Database | - | 2 vCPU, 4 GB RAM, 1 vNIC, Guest OS: Windows 2008 x64
NFS Appliance | - | N/A

ITEM | QUANTITY | NAME/DESCRIPTION
vShield Edge Appliances | Multiple | 1 vCPU, 256 MB RAM, 1 vNIC
Domain Controllers (AD) | - | 2 vCPU, 4 GB RAM, 1 vNIC, Guest OS: Windows 2008 Datacenter
API Servers | - | N/A
Monitoring Server | - | N/A
Logging Server | - | N/A
Storage | 1 | FC SAN array, FS LUN sizing: 1 TB, RAID level: 5

Table 21. Management Cluster Inventory

ITEM | QUANTITY | NAME/DESCRIPTION
ESXi Host | 12 | Vendor X compute resource, 2-socket Intel Xeon 5670 (6 core, 2.66 GHz, 12 MB L3, 95 W, 98 GB RAM), Version: 4.1
vCenter Server | 1 | 1 vCPU, 4 GB RAM, 1 vNIC, min. free disk space: 10 GB, Version: 4.1
Storage | 1 | FC SAN array, FS LUN sizing: 1 TB, RAID level: 5

Table 22. Resource Groups Inventory

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com

Copyright 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW_11Q4_WP_ServiceProvider_p27_A_R2