VMware vCloud Implementation Example: Private Enterprise vCloud
TECHNICAL WHITE PAPER
Table of Contents

1. Purpose and Overview
   1.1 Executive Summary
   1.2 Business Requirements
   1.3 Use Cases
   1.4 Document Purpose and Assumptions
2. VMware vCloud Architecture Design Overview
   2.1 vCloud Definition
   2.2 vCloud Component Design Overview
3. vSphere Architecture Design Overview
   3.1 High Level Architecture
   3.2 Site Considerations
   3.3 Design Specifications
4. vSphere Architecture Design - Management Cluster
   4.1 Compute Logical Design (Datacenters, vSphere Clusters, Host Logical Design)
   4.2 Network Logical Design
   4.3 Shared Storage Logical Design
   4.4 Management Components
   4.5 Management Component Resiliency Considerations
5. vSphere Architecture Design - Resource Groups
   5.1 Compute Logical Design (Datacenters, vSphere Clusters, Host Logical Design)
   5.2 Network Logical Design
   5.3 Shared Storage Logical Design
   5.4 Resource Group Datastore Considerations (Datastore Sizing Estimation)
6. vCloud Provider Design
   6.1 Abstractions and VMware vCloud Director Constructs
   6.2 Provider vDCs
   6.3 Organizations
   6.4 Networks (External Networks, Network Pools, Networking Use Cases)
   6.5 Catalogs
vCloud Security (vSphere Security: Host Security, Network Security, vCenter Security; VMware vCloud Director Security)
vCloud Management (vSphere Host Setup Standardization, VMware vCloud Director Logging, vSphere Host Logging, VMware vCloud Director Monitoring)
Appendix A - Bill of Materials
1. Purpose and Overview

1.1 Executive Summary
ACME Enterprise will be implementing an internal next-generation datacenter private cloud built on VMware technologies. This document defines the vCloud architecture and provides detailed descriptions and specifications of the architectural components and their relationships for the initial implementation. The design is based on a combination of VMware best practices and ACME's specific business requirements and goals.

1.2 Business Requirements
The vCloud for ACME Enterprise has the following characteristics and provides:
- Compute capacity to support 300 virtual machines running predefined workloads.
- Secure multi-tenancy, permitting more than one organization to share compute resources. In a private cloud, organizations typically represent different departments, and each department may have several environments, such as development or production.
- A self-service portal where Infrastructure as a Service (IaaS) can be consumed from a catalog of predefined applications (vApp templates).
- A chargeback mechanism, so that resource consumption can be metered and the associated cost billed back to the appropriate organization or business unit.

Refer to the corresponding Service Definition for further details.

1.3 Use Cases
The target use cases for the vCloud include the following workloads:
- Development and test
- Pre-production
- Demos
- Training
- Tier 2 and Tier 3 applications

1.4 Document Purpose and Assumptions
This vCloud Architecture Design document is intended to serve as a reference for ACME Enterprise architects, and assumes familiarity with VMware products, including VMware vSphere, vCenter, and VMware vCloud Director. The vCloud architecture detailed in this document is organized into the following sections:

SECTION: DESCRIPTION
- vCloud Definition: Inventory of components that comprise the cloud solution.
- vSphere Management: vSphere and vCenter components that support running workloads.
- vSphere Resources: Resources for cloud consumption; design organized by compute, networking, and shared storage; detailed through logical and physical design specifications and considerations.
- Management and Security: Considerations as they apply to vSphere and VMware vCloud Director management components.
- vCloud Logical Design: VMware vCloud Director objects and configuration; relationship of VMware vCloud Director to vSphere objects.

This document is not intended as a substitute for detailed product documentation. Refer to the installation and administration guides for the appropriate products as necessary for further information.

2. VMware vCloud Architecture Design Overview

2.1 vCloud Definition
The VMware vCloud comprises the following components:

VCLOUD COMPONENT: DESCRIPTION
- VMware vCloud Director: Abstracts and coordinates underlying resources. Includes:
  - VMware vCloud Director Server (1 or more instances, each installed on a Linux VM and referred to as a "cell")
  - VMware vCloud Director Database (1 instance per clustered set of VMware vCloud Director cells)
- VMware vSphere: vSphere compute, network, and storage resources; the foundation of underlying cloud resources. Includes:
  - VMware ESXi hosts (3 or more instances for the Management cluster and 3 or more instances for the Resource cluster, also referred to as the Compute cluster)
  - vCenter Server (1 instance managing a management cluster of hosts, and 1 or more instances managing one or more resource groups of hosts reserved for vCloud consumption. In a proof-of-concept installation, 1 vCenter Server instance managing both the management cluster and a single resource group is allowable.)
  - vCenter Server Database (1 instance per vCenter Server)
- VMware vShield: Provides network security services, including NAT and firewall. Includes:
  - vShield Edge (deployed automatically as virtual appliances on hosts by VMware vCloud Director)
  - vShield Manager (1 instance per vCenter Server in the cloud resource groups)
- VMware vCenter Chargeback: Provides resource metering and chargeback models. Includes:
  - vCenter Chargeback Server (1 instance)
  - Chargeback Data Collector (1 instance)
  - vCloud Data Collector (1 instance)
  - VSM Data Collector (1 instance)

2.2 vCloud Component Design Overview
The components comprising the vCloud are detailed in this document in the following sections:

DESIGN SECTION: VCLOUD COMPONENT(S)
- vSphere Architecture - Management Cluster: vCenter Server and vCenter Database; vCenter cluster and ESXi hosts; vCenter Chargeback Server and Database; vCenter Chargeback Collectors; vShield Manager and vShield Edge(s); VMware vCloud Director Cell(s) and Database (Oracle)
- vSphere Architecture - Resource Group: vCenter Server(s) and vCenter Database(s); vCenter Cluster(s) and ESXi hosts

3. vSphere Architecture Design Overview

3.1 High Level Architecture
vSphere resources are organized and separated into:
- A management cluster containing all core components and services needed to run the cloud.
- One or more resource groups (compute clusters) that represent dedicated resources for cloud consumption. Each resource group is a cluster of ESXi hosts managed by a vCenter Server and is under the control of VMware vCloud Director. Multiple resource groups can be managed by the same VMware vCloud Director.

Reasons for organizing and separating vSphere resources along these lines are:
- Faster troubleshooting and problem resolution. Management components are strictly contained in a relatively small and manageable management cluster; if they instead ran across a large set of host clusters, tracking down and managing those workloads could become time-consuming.
- Management components are separate from the resources they are managing.
- Resources allocated for cloud use have little overhead reserved. For example, cloud resource groups would not host vCenter VMs.
- Resource groups can be consistently and transparently managed, carved up, and scaled horizontally.

The high-level logical architecture is depicted in Figure 1.

Figure 1. vCloud Logical Architecture Overview (management cluster running the VCD, vCenter, vShield Manager, Chargeback, MSSQL, Oracle 11g, AD/DNS, vCenter DB, and optional logging/monitoring VMs on shared SAN storage; resource groups of vSphere 4.1 compute resources and shared SAN storage backing Org vDC #1 and a future Org vDC #2)

The following diagram depicts the physical design corresponding to the logical architecture previously described.

Figure 2. vCloud Physical Design Overview (redundant fabric switches in the network infrastructure; vCenter01 with Cluster01 hosts C1 through C6 providing the Provider vDC resource pools of the vCloud resource groups, and Cluster02 hosts M1 through M3 forming the management and DB cluster; HA=N+1 with CPU and memory TBD; FC SAN storage infrastructure with datastores and port groups)
3.2 Site Considerations
The management cluster and the resource group (compute cluster) reside within a single physical datacenter. Servers in both clusters are striped across the server chassis, which provides business continuity (HA) for the clusters should one chassis go down. Neither secondary nor DR sites are in scope for this project.

3.3 Design Specifications
The architecture is described by a logical design that is independent of hardware-specific details. The focus is on components, their relationships, and quantities. Additional details are found in Appendix A.

4. vSphere Architecture Design - Management Cluster

4.1 Compute Logical Design
The compute design encompasses the ESXi hosts contained in the management cluster. In this section the scope is limited to the infrastructure supporting the management component workloads.

Datacenters
The management cluster is contained within a single vCenter datacenter.

vSphere Clusters
The management cluster will be comprised of the following vSphere cluster:

ATTRIBUTE: SPECIFICATION
- Number of ESXi Hosts: 3
- VMware DRS Configuration: Fully automated
- VMware DRS Migration Threshold: 3 stars
- VMware HA Enable Host Monitoring: Yes
- VMware HA Admission Control Policy: Cluster tolerates 1 host failure (percentage-based)
- VMware HA Percentage: 67%
- VMware HA Admission Control Response: Prevent VMs from being powered on if they violate availability constraints
- VMware HA Default VM Restart Priority: N/A
- VMware HA Host Isolation Response: Leave powered on
- VMware HA Enable VM Monitoring: Yes
- VMware HA VM Monitoring Sensitivity: Medium

Table 1. vSphere Clusters - Management Cluster
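The 67% figure in Table 1 follows directly from the cluster size and the one-host-failure tolerance: with one of three hosts reserved as failover capacity, two-thirds of the cluster remains usable. A minimal sketch of that arithmetic (the helper name is ours, not a vSphere API):

```python
def ha_usable_percentage(num_hosts: int, failures_tolerated: int = 1) -> int:
    """Share of cluster capacity usable under percentage-based HA
    admission control, after reserving enough spare capacity to
    tolerate the given number of host failures."""
    if failures_tolerated >= num_hosts:
        raise ValueError("cannot tolerate the failure of every host")
    return round(100 * (num_hosts - failures_tolerated) / num_hosts)

print(ha_usable_percentage(3))  # management cluster (3 hosts) -> 67
print(ha_usable_percentage(6))  # resource group cluster (6 hosts) -> 83
```

The same formula yields the 83% used later for the 6-host resource group cluster.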
Host Logical Design
Each ESXi host in the management cluster will have the following specifications:

ATTRIBUTE: SPECIFICATION
- Host Type and Version: VMware ESXi Installable
- Processors: x86 compatible
- Storage: Local for ESXi binaries; SAN LUN for virtual machines
- Networking: Connectivity to all needed VLANs
- Memory: Sized to support estimated workloads

Table 2. Host Logical Design Specifications - Management Cluster

4.2 Network Logical Design
The network design section defines how the vSphere virtual networking will be configured. Following best practices, the network architecture will meet these requirements:
- Separate networks for vSphere management, VM connectivity, and vMotion traffic
- Redundant vSwitches with at least 2 active physical (or vNIC) adapter ports each
- Redundancy across different physical adapters to protect against NIC or PCI slot failure
- Redundancy at the physical switch level

SWITCH NAME: SWITCH TYPE: FUNCTION
- vSwitch0: Standard: Management Console, vMotion, Production VMs

Table 3. Virtual Switch Configuration - Management Cluster

The physical NIC ports will be connected to redundant physical switches. The following diagram depicts the virtual network infrastructure design.

Figure 3. vSphere Logical Network Design - Management Cluster (vSwitch0 uplinked via vmnic0 and vmnic1 to redundant fabric switches; Management, vMotion, and Production VM port groups carried on the native VLAN and VLANs 443, 442, and 440)
PARAMETER: SETTING
- Load Balancing: Route based on NIC load
- Failover Detection: Link status
- Notify Switches: Enabled
- Failover Order: All active, except for the Management network (Management Console: Active, Standby; vMotion: Standby, Active)

Table 4. Virtual Switch Configuration Settings - Management Cluster

4.3 Shared Storage Logical Design
The shared storage design section defines how the vSphere datastores will be configured. The same storage will be used for both the Management cluster and the VMware vCloud Director resource groups. Following best practices, the shared storage architecture will meet these requirements:
- Storage paths will be redundant at the host (connector), switch, and storage array levels.
- All hosts in a cluster will have access to the same datastores.

ATTRIBUTE: SPECIFICATION
- Number of Initial LUNs: 1 dedicated, 1 interchange (shared with Compute cluster)
- LUN Size: 539 GB
- Zoning: Single initiator, single target
- VMFS Datastores per LUN: 1
- VMs per LUN: 10 (distribute redundant VMs)

Table 5. Shared Storage Logical Design Specifications - Management Cluster

4.4 Management Components
The following components will run as VMs on the management cluster hosts:
- vCenter Servers
- vCenter Database
- vCenter Update Manager Database
- vCloud Director Cells
- vCloud Director Database
- vCenter Chargeback Server
- vCenter Chargeback Database
- vShield Manager

VMware vCloud Director cells are stateless in operation, with all information stored in the database. Some caching happens at the cell level, such as SSL session data, but all refreshes and updates are done against information stored in the database. As such, the database is critical to the operation of VMware vCloud Director. In a production environment, VMware recommends that the database be housed in either a managed cluster configuration, or at the very least have a hot standby available.
Figure 4. vCenter Chargeback Logical Diagram (vCenter Chargeback behind a load balancer, connecting over JDBC to the vCenter, Chargeback, and VCD databases; Chargeback, vCloud, and VSM data collectors; vCenter Server reached via the VIM API; VSM and the Chargeback UI reached over HTTPS)

4.5 Management Component Resiliency Considerations
The following management components will rely on HA and FT for redundancy.

MANAGEMENT COMPONENT: HA ENABLED?
- vCenter Server: Yes
- VMware vCloud Director: Yes
- vCenter Chargeback Server: Yes
- vShield Manager: Yes

Table 6. Management Component Resiliency
5. vSphere Architecture Design - Resource Groups

5.1 Compute Logical Design
The compute design encompasses the ESXi host clusters. In this section the scope is further limited to the infrastructure dedicated to the cloud workloads.

Datacenters
Resource groups can map to different datacenters and are managed by a single vCenter Server.

vSphere Clusters
All vSphere clusters will be configured similarly, with the following specifications:

ATTRIBUTE: SPECIFICATION
- VMware DRS Configuration: Fully automated
- VMware DRS Migration Threshold: 3 stars
- VMware HA Enable Host Monitoring: Yes
- VMware HA Admission Control Policy: Cluster tolerates 1 host failure (percentage-based)
- VMware HA Percentage: 83%
- VMware HA Admission Control Response: Prevent VMs from being powered on if they violate availability constraints
- VMware HA Default VM Restart Priority: N/A
- VMware HA Host Isolation Response: Leave powered on

Table 7. vSphere Cluster Configuration - Resource Group

The resource groups will have the following vSphere cluster:

CLUSTER NAME: VCENTER SERVER NAME: # OF HOSTS: HA PERCENTAGE
- VCDCompute01: ACMEmgmtVC01.vcd.acme.com: 6: 83%

Table 8. vSphere Clusters - Resource Groups
Host Logical Design
Each ESXi host in the resource groups will have the following specifications:

ATTRIBUTE: SPECIFICATION
- Host Type and Version: VMware ESXi Installable
- Processors: x86 compatible
- Storage: Local for ESXi binaries; shared for virtual machines
- Networking: Connectivity to all needed VLANs
- Memory: Enough to run estimated workloads

Table 9. Host Logical Design Specifications - Resource Groups

5.2 Network Logical Design
The network design section defines how the vSphere virtual networking will be configured. Following best practices, the network architecture will meet these requirements:
- Separate networks for vSphere management, VM connectivity, and vMotion traffic
- Redundant vSwitches with at least 2 active physical adapter ports
- Redundancy across different physical adapters to protect against NIC or PCI slot failure
- Redundancy at the physical switch level

SWITCH NAME: SWITCH TYPE: FUNCTION: # OF NIC PORTS
- vSwitch0: Standard: Management Console, vMotion: 2 x 10 GigE vNIC
- vDSwitch: Distributed: External Networks, Network Pools: 2 x 10 GigE vNIC

Table 10. Virtual Switch Configuration - Resource Groups

When using the distributed virtual switch, the number of dvUplink ports equals the number of physical NIC ports on each host. The physical NIC ports will be connected to redundant physical switches.
The following diagram depicts the virtual network infrastructure design.

Figure 5. vSphere Logical Network Design - Resource Groups (vSwitch0 carrying the Management and vMotion port groups on the native VLAN and VLANs 443 and 442 over vmnic0 and vmnic1; a vNetwork Distributed Switch (vDS) carrying External Networks (Production, VLAN 440) and Network Pools over vmnic2 and vmnic3; both uplinked to redundant fabric switches)

PARAMETER: SETTING
- Load Balancing: Route based on NIC load (for vDS)
- Failover Detection: Link status
- Notify Switches: Enabled
- Failover Order: All active, except for the Management network (Management Console: Active, Standby; vMotion: Standby, Active)

Table 11. Virtual Switch Configuration Settings - Resource Groups

5.3 Shared Storage Logical Design
The shared storage design section defines how the vSphere datastores will be configured. Following best practices, the shared storage architecture will meet these requirements:
- Storage paths will be redundant at the host (HBA), switch, and storage array levels.
- All hosts in a cluster will have access to the same datastores.
ATTRIBUTE: SPECIFICATION
- Number of Initial LUNs: 6 dedicated, 1 interchange (shared with Management cluster)
- LUN Size: 539 GB
- Zoning: Single initiator, single target
- VMFS Datastores per LUN: 1
- VMs per LUN: 12

Table 12. Shared Storage Logical Design Specifications - Resource Groups

5.4 Resource Group Datastore Considerations
The most common aspect of LUN/datastore sizing is the limit on the number of VMs per datastore. The reason for limiting this number is to minimize the potential for SCSI locking and to spread the I/O across as many storage processors as possible. Most mainstream storage vendors provide VMware-specific guidelines for this limit, and VMware recommends an upper limit of 15 VMs per VMFS datastore, regardless of storage platform. It is often forgotten that the number of VMs per LUN is also influenced by the size and I/O requirements of the VMs and, perhaps more importantly, by the selected storage solution and even the disk types.

When VMware vCloud Director provisions VMs, it automatically places them on datastores based on the free disk space of each of the associated datastores in an Org vDC. Because of this mechanism, the size of the LUNs and the number of VMs per LUN need to be kept relatively low to avoid possible I/O contention.

When considering the number of VMs to place on a single datastore, the following factors should be weighed alongside any recommended VMs-per-LUN ratio:
- Average VM workload/profile (in particular, the amount of I/O)
- Typical VM size (including configuration files, logs, swap files, and snapshot files)
- VMFS metadata
- Maximum requirement for IOPS and throughput per LUN, which depends on the storage array and its design
- Maximum RTO if a LUN is lost, i.e., your backup and restore design

If we approach this from an average I/O profile, it would be tempting to create all LUNs the same, say as RAID 5, and let the law of averages take care of I/O distribution across the LUNs and the VMs on them. Another approach is to create LUNs with different RAID profiles based on the anticipated workloads within an organization. This would mean creating Provider virtual datacenters (vDCs) that take into account the allocation models as well as the storage profile in use. As an example, we would end up with the following types of Provider vDCs:
- Allocated_High_Performance
- Allocated_Generic

As a starting point, VMware recommends RAID 5 storage profiles, creating storage tier-specific Provider vDCs only as one-offs to address specific organization or business unit requirements. The VMware Scalable Storage Performance study provides additional information regarding vSphere storage design.
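The per-LUN IOPS factor listed above can be turned into a quick upper bound on VM count per datastore. This is an illustrative sketch only; the 1,200 IOPS LUN budget and 100 IOPS-per-VM average are hypothetical figures, not values from this design:

```python
def max_vms_per_datastore(lun_iops_budget: float, avg_vm_iops: float,
                          recommended_cap: int = 15) -> int:
    """VMs a datastore can host: the lower of the I/O-driven limit and
    the flat 15-VMs-per-VMFS-datastore cap recommended in the text."""
    io_limit = int(lun_iops_budget // avg_vm_iops)
    return min(io_limit, recommended_cap)

# Hypothetical: a LUN good for 1,200 IOPS, VMs averaging 100 IOPS each
print(max_vms_per_datastore(1200, 100))  # -> 12
```

Taking the minimum of the I/O-driven limit and the flat cap reflects the text's point that the recommended ratio and the VMs' actual I/O profile both constrain placement.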
Datastore Sizing Estimation
An estimate of the typical datastore size can be approximated by considering the following factors.

VARIABLE: VALUE
- Maximum Number of VMs per Datastore: 12
- Average Size of Virtual Disk(s) per VM: 60 GB
- Average Memory Size per VM: 2 GB
- Safety Margin: 10%

Table 13. Datastore Size Estimation Factors

For example, ((12 * 60 GB) + (12 * 2 GB)) + 10% = (720 GB + 24 GB) * 1.1 ≈ 818 GB

6. vCloud Provider Design

6.1 Abstractions and VMware vCloud Director Constructs
A key tenet of the cloud architecture is resource pooling and abstraction. VMware vCloud Director further abstracts the virtualized resources presented by vSphere by providing logical constructs that map to vSphere logical resources:
- Organization: The organizational unit to which resources (vDCs) are allocated.
- Virtual Datacenter (vDC): A deployment environment, scoped to an organization, in which virtual machines run.
- Provider Virtual Datacenter: vSphere resource groupings that power vDCs, further segmented into organization vDCs.
- Organization Virtual Datacenter (vDC): An organization's allocated portion of a Provider vDC.

Figure 6. VMware vCloud Director Abstraction Layer Diagram (VCD constructs (Org Network, Organization vDC, External Network, Network Pool, Provider vDC) mapping to vSphere resources (resource pool, (d)vSwitch port group, vDS, compute cluster, datastore) and down to the physical layer (VLAN, network, host, storage array))
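The datastore sizing estimation in Table 13 reduces to a one-line formula: virtual disk footprint plus per-VM swap (sized to VM memory), uplifted by the safety margin. A sketch using Table 13's values (the function name is ours, for illustration):

```python
def datastore_size_gb(vms: int, avg_disk_gb: float, avg_mem_gb: float,
                      safety_margin: float = 0.10) -> float:
    """Estimated datastore size: virtual disks plus per-VM swap files
    (equal to VM memory), plus a safety margin on top."""
    return vms * (avg_disk_gb + avg_mem_gb) * (1 + safety_margin)

print(round(datastore_size_gb(12, 60, 2)))  # -> 818 GB for a 12-VM datastore
```

Varying the per-VM disk and memory averages against real workload data would refine this estimate for each storage tier.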
6.2 Provider vDCs
The following diagram shows how the Provider vDCs map back to vSphere resources.

Figure 7. Provider vDCs in Resource Groups (the GIS Provider vDC backed by cluster VCDCompute01's resource pools and VMFS datastores vcd_compute_01 through vcd_compute_0x, each 539 GB, with capacity reserved for future vDCs)

All ESXi hosts will belong to a vSphere cluster, and each cluster will be associated with one and only one ACME Enterprise vDC. A vSphere cluster will scale to 25 hosts, allowing for up to 14 clusters per vCenter Server (bounded by the maximum number of hosts per datacenter) and an upper limit of 10,000 VMs per resource group (a vCenter limit). The recommendation is to start with 8 hosts in a cluster and add hosts to the cluster as dictated by customer consumption. However, for the initial implementation, the Provider vDC will start with 6 hosts. When utilization of the resources reaches 60%, VMware recommends that a new Provider vDC/cluster be deployed. This provides for growth within the Provider vDCs for the existing organizations/business units without necessitating their migration as utilization nears a cluster's resource limits.

As an example, a fully loaded resource group will contain 14 Provider vDCs and up to 350 ESXi hosts, giving an average consolidation ratio of 26:1, assuming a 5:1 ratio of vCPU:pCPU. To increase this ratio, ACME Enterprise would need to increase the vCPU:pCPU ratio it is willing to support. The risk associated with an increase in CPU overcommitment is mainly degraded overall performance, which can result in higher-than-acceptable vCPU ready times. The vCPU:pCPU ratio is based on the amount of CPU overcommitment, for the available cores, that ACME is comfortable with. For VMs that are not busy, this ratio can be increased without any undesirable effect on performance. Monitoring of vCPU ready times helps identify whether the ratio should be increased or decreased on a per-cluster basis. A 5:1 ratio is a good starting point for a multi-core system.
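The 26:1 figure can be reconstructed from the vCPU:pCPU ratio and an average VM size. Note that the core count here is our assumption, not a value stated in the text; 8 cores per host happens to reproduce the quoted ratio given the roughly 1.5 vCPU average VM implied by the sizing distribution in Table 16:

```python
def consolidation_ratio(cores_per_host: int, vcpu_per_pcpu: float,
                        avg_vcpus_per_vm: float) -> float:
    """Average VMs per host supported by a given vCPU:pCPU overcommit."""
    return cores_per_host * vcpu_per_pcpu / avg_vcpus_per_vm

# Assumed 8-core hosts (not stated in the text), 5:1 overcommit,
# ~1.51 vCPUs per VM on average
print(round(consolidation_ratio(8, 5, 1.51)))  # -> 26
```

Raising the overcommit ratio in this formula scales the consolidation ratio linearly, which is exactly the trade-off against vCPU ready time described above.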
A Provider vDC can map to only one vSphere cluster, but can map to multiple datastores and networks. Multiple Provider vDCs are used to map to different types or tiers of resources:
- Compute: a function of the mapped vSphere clusters and the resources that back them
- Storage: a function of the underlying storage types of the mapped datastores
- Networking: a function of the mapped vSphere networking in terms of speed and connectivity

Multiple Provider vDCs are created for the following reasons:
- The cloud requires more compute capacity than a single vSphere cluster provides (a vSphere resource pool cannot span vSphere clusters)
- Tiered storage is required; each Provider vDC maps to datastores on storage with different characteristics
- Workloads are required to run on physically separate infrastructure

ATTRIBUTE: SPECIFICATION
- Number of Provider vDCs: 1
- Number of Default External Networks: 1 (Production)

Table 14. Provider vDC Specifications

PROVIDER VDC: CLUSTER: DATASTORES: VSPHERE NETWORKS
- GIS: VCDCompute01: vcd_compute-01, vcd_compute-02, vcd_compute-03, vcd_compute-04, vcd_compute-05: Production

Table 15. Provider vDC to vSphere Mapping

VMware recommends assessing VM workloads to assist in sizing. The following standard sizing table can be used as a reference for future design activities.

VM SIZE: DISTRIBUTION
- 1 vCPU / 1 GB RAM: 65%
- 2 vCPU / 2 GB RAM: 29%
- 4 vCPU / 4 GB RAM: 5%
- 8 vCPU / 8 GB RAM: 1%
- Total: 100%

Table 16. Virtual Machine Sizing and Distribution
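The distribution in Table 16 implies an average VM of about 1.5 vCPUs (and, since RAM tracks vCPU count in the table, about 1.5 GB of RAM), which is the kind of figure that feeds per-host consolidation estimates:

```python
# (vCPUs per VM, share of VMs) taken from Table 16
distribution = [(1, 0.65), (2, 0.29), (4, 0.05), (8, 0.01)]

# Weighted average vCPUs per VM across the estimated population
avg_vcpus = sum(vcpu * share for vcpu, share in distribution)
print(round(avg_vcpus, 2))  # -> 1.51
```

Recomputing this average as the real VM population diverges from the estimate is a cheap way to keep capacity projections honest.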
6.3 Organizations

ORGANIZATION NAME: DESCRIPTION
- AIS: ACME Information Systems

Table 17. Organizations

6.4 Networks

ATTRIBUTE: SPECIFICATION
- Number of Default External Networks: 1
- Number of Default vApp Networks: End-user controlled
- Number of Default Organization Networks: 2
- Default Network Pool Types Used: vCloud Director Network Isolation (VCD-NI)
- Is a Pool of Publicly Routable IP Addresses Available?: Yes, for access to Production, but only a certain range is given to each Organization.

Table 18. Network Specifications

External Networks
ACME Enterprise will provide the following external network for the initial implementation:
- Production (VLAN 440)

Part of the provisioning for an organization can involve creating an external network for each organization, such as Internet access and a VPN network if desired, and associating them with the required Org networks.

Network Pools
ACME will provide the following sets of network pools based on need:
- VMware vCloud Director Network Isolation-backed (VCD-NI)
- VLAN-backed (optional)

For the VCD-NI-backed pool, VMware recommends that the transport VLAN (VLAN ID 1254) be a VLAN not otherwise in use within the ACME infrastructure, for increased security and isolation. In this initial implementation we do not have that option, so the Production VLAN will be used.

Networking Use Cases
ACME will provide the following two use cases for the initial implementation, both to demonstrate VMware vCloud Director capabilities and as a basis for deploying its production vApps:
1. Users should be able to completely isolate vApps for their Development and/or Test users.
2. Users should be able to connect to the Organization networks either directly or via fencing, and the Organization networks will not have access to the public Internet.
Figure 8. vApp Isolated Network (vApp01 with DB, Web, and App VMs on vAppNetwork1, backed by a network pool: VCD-NI-backed, VLAN-backed, or port group-backed)

Figure 9. vApp Network Direct-Attached to Org Network (vApp01 and vApp02, each with DB, Web, and App VMs on their own vApp networks, attached directly to an isolated Org network backed by a network pool)

This is an example of a Dev/Test environment where developers use different IPs in their vApps, so the VMs in one vApp can communicate with the VMs in another vApp without any conflicts.
Figure 10. vApp Network Fenced to Org Network (vApp01 and vApp02, with duplicate VM IPs, fenced through their vApp networks to an isolated Org network backed by a network pool)

This is an example of a Dev/Test environment where developers have duplicate IPs in their vApps.

Figure 11. vApp Network Bridged or Fenced to an Org Network that is Direct-Attached to an External Network (vApp networks connected, directly or fenced, to an Org network that attaches directly to the external network and physical backbone)
Figure 12. vApp Network Fenced to a Fenced Org Network (vApp networks connected, directly or fenced, to an Org network that is itself fenced to the external network and physical backbone)

This is one way to connect to the external network while preserving VLANs, by sharing the same VLAN for Internet access among multiple organizations. vShield Edge is needed to provide NAT and firewall services for the different organizations. Once the external networks have been created, a VMware vCloud Director administrator can create the Organization networks as shown above. The vShield Edge (VSE) device performs address translation between the different networks. The VSE can be configured to provide port address translation to jump hosts located inside the networks, or to grant direct access to individual hosts.

VMware recommends separating External and Organization networks by using two separate vDS switches. For ACME's initial implementation, we do not have the option to create two vDS switches, as only one network (Production, VLAN 440) is available to route VCD-NI traffic between ESXi hosts.

6.5 Catalogs
The catalog contains ACME-specific templates that are made available to all organizations and business units. ACME will make a set of catalog entries available to cover the classes of virtual machines, templates, and media specified in the corresponding Service Definition. For the initial implementation, a single cost model will be created using the following fixed-cost pricing and chargeback model: