Implementing the HP Cloud Map for a 5000 mailbox Microsoft Exchange Server 2013 solution


Technical white paper

Implementing the HP Cloud Map for a 5000 mailbox Microsoft Exchange Server 2013 solution
Using a Windows Server 2012 Hyper-V Failover Cluster and virtual machines

Table of contents
Executive summary
Overview
Cloud Map design overview
IO template overview
Workflow overview
Solution environment
CloudSystem Matrix resource requirements
Windows domain requirements
Hyper-V Failover Cluster requirements
Virtual machine guest requirements
Networking requirements
Operating system requirements
How to use this HP Cloud Map
Checklist before implementing this Cloud Map
Unpacking the Exchange2013_CloudMap_v1.zip file
Microsoft Exchange Server 2013 software kits
Windows IO Sysprep file placement
Next steps
Importing and customizing the IO template
Importing the IO template
Edit the network configurations
Edit the server group configurations
Edit virtual storage configuration
Save the validated template
Importing the Operations Orchestration workflows
Next steps
Configuring and customizing the Cloud Map
The Cloud Map configuration file
The XSL transformation file

IO Windows deployment configuration
Updating the Active Directory schema for Exchange Server 2013
Set OO system properties
OO system account definition
Configuring infrastructure orchestration timeout values
Review and publish the IO template
Review the IO template
Save and publish the IO template
Create the service
Post-deployment actions
Network subsystem configuration
Distributing the Exchange Server virtual machines across hosts
Summary
Appendix A: About Microsoft Exchange Server 2013
Pre-deployment information
Appendix B: About the Cloud Map workflows
Appendix C: About the Cloud Map scripts
Appendix D: Troubleshooting
Problem diagnosis
Use of OO Studio workflow debugger
Deployment testing: change virtual disk sizes
Workflow sleep time parameters
Failover cluster creation problem
Resource reservation failure
Active Directory topology considerations
Errors reported with DAC-mode enabled
For more information

Why HP Cloud Map for Microsoft Exchange Server 2013
Cloud Maps automate the delivery of application services by taking advantage of the capabilities in HP CloudSystem. Cloud Maps provide pre-packaged service designs that allow customers to set up their infrastructure and install applications in minutes or hours rather than days. Cloud Maps leverage the partnerships HP has with independent software vendors and the knowledge developed in years of HP customer deployments.
Executive summary
HP CloudSystem Matrix is an infrastructure as a service (IaaS) solution for private and hybrid cloud deployments, built on proven HP Converged Infrastructure technologies, such as HP BladeSystem and the Matrix Operating Environment. CloudSystem Matrix is an integrated hardware, software, and services solution that helps you realize the full value of cloud computing as quickly as possible. HP CloudSystem Matrix provides the ability to:
Provision infrastructure and applications in minutes or hours for physical and virtual environments rather than in days or weeks.
Reduce total cost of ownership (TCO) 1 with built-in infrastructure lifecycle management.
Integrate heterogeneous environments into your IaaS infrastructure.
HP Cloud Maps accelerate automation of cloud service deployments and ensure consistency and reliability of the implementation of infrastructure service catalogs. The HP Cloud Map for Microsoft Exchange Server 2013 on Windows Server 2012 includes a reference infrastructure template and associated HP Operations Orchestration (OO) workflows to automate the infrastructure provisioning and deployment of Microsoft Exchange Server 2013. This document provides comprehensive information for you to take an HP CloudSystem Matrix environment and integrate Microsoft Exchange Server 2013, Windows Server 2012, Hyper-V virtualization, HP infrastructure orchestration (IO), HP Operations Orchestration (OO), HP storage technology, and HP ProLiant servers to generate a comprehensive cloud solution by following a series of documented steps. This Cloud Map has been tested and verified by HP Cloud Map and Microsoft Exchange Server 2013 experts.
This document describes how to import the Cloud Map infrastructure template and workflows into a target CloudSystem Matrix and customize them for use. It details the specific areas of the infrastructure template that you will need to modify so that it can be successfully deployed in your environment. It also outlines the customization steps required for the OO workflows so they operate correctly within your environment.
The Exchange Server 2013 environment deployed with this Cloud Map supports 5000 mailboxes and provides a highly available solution, utilizing the Database Availability Group (DAG) feature of Exchange Server 2013. The servers are multirole, with the Mailbox and Client Access roles deployed on each. This solution provides two copies of each Exchange database, with the databases distributed among three Exchange servers. Any single Exchange server (a VM) can be taken offline for maintenance, or be offline due to an unplanned outage, and the Exchange user experience will not be affected. The solution is sized to accommodate a single Exchange Server VM outage or failure.
1 See the HP white paper: The business case for HP CloudSystem Matrix.

Figure 1 below shows a view of the Exchange servers and their respective database copies. Figure 1 also shows the two networks involved in this solution. The MAPI (Production) network is used for Windows Active Directory and Domain services, access by clients, message transport, and many other common network protocols. The Replication network is used as the primary network for database replication and as the heartbeat network. The MAPI network can be used for database replication if the Replication network is not available.
Figure 1. Exchange Server 2013 Solution
Each server VM hosts a total of eight databases, with four of those eight being active under normal circumstances. The active databases are highlighted in blue in Figure 1. When one of the servers goes offline, its four active databases fail over to the two remaining servers (two to each), so that each remaining server has six active databases. All twelve databases remain active while one server is offline. More information about solution sizing and varying the solution to meet specific needs is available in Appendix A.
Target audience: This document describes how to implement the HP Cloud Map for Microsoft Exchange Server 2013 on an HP CloudSystem Matrix running the HP Matrix Operating Environment (Matrix OE). It is targeted at technically trained customers, technical advisors, system integrators, pre-sales consultants, and solution architects. Knowledge of HP CloudSystem Matrix and the underlying components will be helpful when reading this white paper. Please see the section titled For more information at the end of this paper for links to additional information on these topic areas. This white paper describes Cloud Map development and validation completed in August 2013.
Note
The HP Matrix Operating Environment uses a subset of HP Operations Orchestration capability. HP Cloud Maps leverage workflows that are authored using this subset of HP OO.
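Once a service has been deployed from this Cloud Map, the DAG membership and database copy layout described above can be checked from the Exchange Management Shell. The following is an illustrative sketch only (it is not part of the supplied workflows); the DAG and server names used are examples taken from this document and should be substituted with your own.

# Illustrative only: run from the Exchange Management Shell on one of the
# deployed Exchange Server 2013 VMs. "ExchDAG" and "ExchServer1" are examples.
# Show the DAG membership and witness settings
Get-DatabaseAvailabilityGroup -Identity "ExchDAG" -Status | Format-List Name, Servers, WitnessServer
# Check the health and activation state of every database copy on one server
Get-MailboxDatabaseCopyStatus -Server "ExchServer1" |
    Select-Object Name, Status, CopyQueueLength, ContentIndexState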

Overview
This white paper describes how to implement the HP Cloud Map for a 5000 mailbox Microsoft Exchange Server 2013 environment on Windows Server 2012 virtual machines. The virtual machines are deployed to a Microsoft Hyper-V 2012 Failover Cluster, which must exist prior to implementing this Cloud Map. Once implemented, the Cloud Map enables Exchange Server 2013 to be automatically deployed via the HP Matrix OE self-service portal. The HP Cloud Map for Microsoft Exchange Server 2013 on Windows Server 2012 provides:
A reference infrastructure template that describes a virtualized environment suitable for a 5000 mailbox Microsoft Exchange Server 2013 service. This template is used by the infrastructure orchestration (IO) component of HP Matrix OE, and is thus hereafter referred to as the IO template.
Workflows and scripts to automate the installation and configuration of the Exchange Server 2013 software. The workflows are run by HP Operations Orchestration, and are subsequently referred to as OO workflows.
The IO template is used by HP Matrix OE infrastructure orchestration (IO) to deploy virtual servers, storage and networking suitable for an Exchange Server 2013 solution supporting 5000 mailboxes. The workflows that accompany the IO template are executed by the subset of HP Operations Orchestration (OO) that is included within Matrix OE, to install and configure the Exchange Server 2013 software on the deployed infrastructure.
One of the benefits of a template-driven approach to service creation is the opportunity to employ significant standardization across the various applications in the data center. Instead of a customized request for each application service, templates can be defined that represent various configurations of an application. This type of standardization not only leads to consistent and repeatable provisioning processes but can help reduce the human error factor that comes from approaching each application deployment as an isolated, customized event.
Cloud Map design overview
This white paper describes the IO template and the associated OO workflows that enable the provisioning of a Microsoft Exchange Server 2013 environment with Windows Server 2012 running on virtual machines. The IO template provides one potential configuration that can be used to deploy Exchange Server 2013, but select characteristics may be modified so that it supports different configurations. The OO workflows delivered with the HP Cloud Map for Exchange Server 2013 provide the automation steps to take the infrastructure created from the IO template and deploy a running Exchange Server 2013 environment.
At a high level, the process of provisioning a new Exchange Server 2013 service on virtual machines running Windows Server 2012 in the CloudSystem Matrix environment involves these steps:
1. From the IO Self Service Portal, a user selects the Exchange Server 2013 service IO template that describes the infrastructure configuration they want to use, and requests that a cloud service is created from that template.
2. IO obtains the necessary approvals for the service request and the creation process continues.
3. IO allocates the necessary Matrix OE resources from the available resource pools and creates the virtual servers using those resources.
4. IO then coordinates the Windows Server 2012 operating system installation onto the newly created virtual servers.
5. Once the virtual servers have been created and Windows Server 2012 installed, IO then calls the OO workflows for Exchange Server 2013 to initiate the application level activities.
6. The OO workflows customize the virtual servers by installing Exchange Server 2013 and performing further customization so that the virtual servers are ready to run the Exchange Server 2013 solution.
The value delivered by CloudSystem Matrix is that once the user has requested a cloud service and the request has been approved, IO automates the complete process and no further user intervention is required for the server, operating system and application software provisioning. The following sections go into more detail on the IO template and OO workflows supplied with the HP Cloud Map for Exchange Server 2013.
IO template overview
The IO template describes infrastructure (servers, storage and networking) suitable for an Exchange Server 2013 cloud service. As shown in Figure 2, the IO template comprises:
Two networks called PRODUCTION and REPLICATION (shown on the left)

Four virtual server groups, each using a Hyper-V virtual machine running Windows Server 2012:
ExchServerN (N = 1, 2, 3) are the server groups for the three systems (VMs) running Exchange Server 2013 in a multirole configuration
Witness is the server group for a single system (VM) providing the Witness server used by the Exchange Server DAG
Four virtual boot disks: one virtual disk (of type VHDX) for each of the four servers in the above virtual server groups
Thirty virtual data disks: ten full-sized virtual disks (of type VHDX) for each of the three Exchange servers. Eight are for mailbox databases, one is for recovery or maintenance space, and one is for public folders. Public folder usage is optional but is configured by default. Storage capacity is allocated for public folders in the solution, but implementation and usage of public folders is outside the scope of this solution.
Figure 2. IO Template for Microsoft Exchange Server 2013
Workflow overview
The HP Cloud Map for Microsoft Exchange Server 2013 provides workflows and scripts that automate the installation and configuration of the Exchange Server 2013 solution on the virtual server systems, including Windows and other prerequisites. The IO template associates top-level workflows with specific infrastructure orchestration service action execution points, as described in Table 1.
Table 1. Top-level OO workflows for Microsoft Exchange Server 2013
Exch2013_CaptureXML. Execution point: at beginning of Create Service. Description: Captures and logs the request-xml that describes the infrastructure for the service being created.
Exch2013_InstallConfig. Execution point: at end of Create Service. Description: Performs the installation and configuration of Exchange Server 2013 on the newly provisioned logical server systems.
The main OO workflow of this Cloud Map is the Exch2013_InstallConfig workflow, which is shown in Figure 3.

7 Figure 3. The Exch2013_InstallConfig workflow As shown in Figure 3, the Exch2013_InstallConfig workflow calls various subflows. These subflows, including ones not referenced directly by Exch2013_InstallConfig, are briefly described in Table 2. Table 2. Subflow descriptions Subflow Name Exch2013_GetSystemConfig Exch2013_GetServerInfo Exch2013_PrepareWitn Exch2013_PrepareExch Description Reads the configuration file of settings for this Cloud Map. Applies an XSL transformation to the request-xml to produce a list of server details for the newly provisioned servers. The server details comprise hostname, server group name and IP address. Installs required Windows features and registers the Witness server computer account in Active Directory Installs required Windows features and installs Exchange prerequisite software components (Microsoft Filter Pack, Unified Communications managed API Runtime) and unpacks the Exchange Server kit ready for installation Exch2013_InstallConfigExch Configures the Exchange Server data disks on each server before installing the Exchange Server 2013 product. The Exchange Server environment is then configured appropriately for a 5000 mailbox solution Exch2013_WaitForNetwork Exch2013_CopyFiles Exch2013_RunRemScript Exch2013_TraceFlow Waits for the target system s network communication to become active thereby indicating that the system is available Copies files from the CMS to the target server Launches a script on the specified target server Writes workflow trace information to a log file (intended for troubleshooting purposes) Refer to Appendix B which has additional information about these workflows. Instructions for importing these workflows into your Operations Orchestration environment are provided in the Importing the Operations Orchestration workflows section of this document. Once imported, the workflows are located in the OO repository directory named /Library/Hewlett-Packard/Infrastructure Orchestration/Service Actions/Exchange2013. The imported workflows can be viewed (and customized if necessary) using the Operations Orchestration Studio tool that is provided with Matrix OE. 7
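As an illustration of the retry pattern that the Exch2013_WaitForNetwork subflow provides, the following simplified PowerShell sketch (not the actual OO workflow) polls a newly provisioned server until it responds on the network. The host name is a placeholder, and the retry values mirror the Exch2013_NetworkMaxRetries and Exch2013_NetworkSleepTime parameters described later in Table 11.

# Minimal sketch (not the supplied subflow): wait until a target server answers on the network.
$targetHost = "exchserver1.my-domain.net"   # example name only
$maxRetries = 20                            # compare Exch2013_NetworkMaxRetries
$sleepSecs  = 30                            # compare Exch2013_NetworkSleepTime
$reachable  = $false
for ($i = 1; $i -le $maxRetries -and -not $reachable; $i++) {
    if (Test-Connection -ComputerName $targetHost -Count 1 -Quiet) {
        $reachable = $true
    } else {
        Start-Sleep -Seconds $sleepSecs
    }
}
if (-not $reachable) { Write-Warning "$targetHost did not respond after $maxRetries attempts" }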

Solution environment
This HP Cloud Map should be implementable on any supported HP CloudSystem Matrix running Matrix OE 7.2, or a later variant of the Matrix OE software that claims compatibility with Matrix OE 7.2, providing that the CloudSystem Matrix has servers based on the x86-64 and compatible architectures. The HP CloudSystem Matrix Compatibility Chart, available at hp.com/go/matrixcompatibility, and the HP Insight Management Support Matrix document, available at hp.com/go/matrixoe/docs, include details of the servers, Virtual Connect modules, storage arrays, network switches, operating systems, hypervisors, etc., supported by a specific version of Matrix OE.
The solution environment tested consisted of an HP BladeSystem c7000 enclosure housing four HP ProLiant BL460c Gen8 servers that were used as virtual machine hosts for the virtual server groups of this HP Cloud Map. The BL460c servers were configured as a Hyper-V Failover Cluster environment on Windows Server 2012. The BladeSystem c7000 enclosure housed a pair of HP Virtual Connect Flex-10 10Gb Ethernet Modules for network connectivity and also a pair of HP Virtual Connect 8Gb 24-port Fibre Channel Modules for SAN connectivity. While these Virtual Connect modules are highly suitable options, they can be replaced with HP Virtual Connect FlexFabric 10 Gb/24-port Modules, which are the simplest, most converged and flexible way to connect virtualized server blades to any data or storage network. HP Virtual Connect FlexFabric modules eliminate up to 95% of network sprawl at the server edge with one device that converges traffic inside enclosures and directly connects to LANs and SANs.
The Virtual Connect modules were linked to network and SAN edge switches from the portfolio of HP products. The configuration of networks and SANs is highly site-specific; refer to hp.com/go/networking and hp.com/go/storefabric for product information. HP StoreFabric meets the most demanding needs of hyper-scale, private cloud storage environments by delivering market-leading 16Gb FC technology and capabilities that support highly virtualized environments.
An HP 3PAR StoreServ 7200 storage array within the SAN fabric provided SAN storage for the virtual machines running under the Hyper-V Failover Cluster on the BL460c servers. Although other HP storage solutions are available, these latest HP 3PAR StoreServ storage arrays are ideal for HP CloudSystem deployments. This is because HP 3PAR StoreServ storage arrays are designed to deliver enterprise IT as a utility service simply, efficiently, and flexibly. In particular, HP 3PAR StoreServ 7000 arrays meet mixed workload demands with improved service levels and virtually unlimited scalability options. Less time is spent managing the storage and new demands can be met with increased responsiveness. The architecture meets current block and file storage needs with enhanced efficiency using traditional spinning media, SSD-based media, or a combination of both. The architecture is also designed to meet Cloud object storage needs as they emerge. The 3PAR StoreServ 7000 arrays enable a doubling of virtual machine (VM) density on your physical servers through unique thin technologies that reduce acquisition and operational costs by up to 50%, while autonomic management features improve administrative efficiency by up to tenfold. Other HP storage arrays are also supported by HP Matrix OE, as detailed in the aforementioned documents.
This Cloud Map is designed to be deployed on a Windows Server 2012 Hyper-V Failover Cluster. A cluster of at least three hosts will provide high availability to the solution, with the three Exchange Server VMs balanced across them. Hyper-V's Live Migration can be used to migrate the VMs between hosts. The Exchange Server VMs comprise an Exchange Database Availability Group (DAG) with two copies of each database spread among the three Exchange Server VMs. This provides high availability in that any one of the three Exchange VMs can be offline or unavailable and Exchange services are provided by the two remaining Exchange VMs. Unplanned failure of a single Hyper-V host will cause its VMs to restart on another host, thus minimizing the time that only a single copy of the database is in use (assuming the Exchange Server VMs are spread across unique hosts).
Table 3 lists the hardware and software versions used during template and workflow creation and validation.
Table 3. Hardware and software versions used to develop and test this Cloud Map
Server Pool (VM hosts): Four HP ProLiant BL460c Gen8 servers in an HP BladeSystem c7000 enclosure
Storage: One HP 3PAR StoreServ 7200 connected to HP 8Gb SAN switch infrastructure
Virtual Machine Hypervisor: Windows Server 2012 Datacenter Edition Hyper-V Failover Cluster with four identical hosts
HP Matrix Operating Environment software: 7.2 and 7.2 update 1

Table 3 (continued). Hardware and software versions used to develop and test this Cloud Map
Central Management Server platform: The CMS was implemented on a Windows Server 2008 R2 virtual machine. (More typically the CMS is implemented on an HP ProLiant server)
Operating System used by virtual servers in the IO template: Microsoft Windows Server 2012 Standard Edition (Datacenter Edition was also tested)
Application: Microsoft Exchange Server 2013 with Cumulative Update 2
CloudSystem Matrix resource requirements
The following sections outline the specific resource requirements that are necessary in the deployment environment to successfully provision an Exchange Server 2013 service from the supplied IO template.
Windows domain requirements
The Exchange Server and Witness VMs must join a Windows domain during their deployment. The domain must exist prior to implementing the Cloud Map, and be suitably configured with servers providing Active Directory and DNS services. The test environment used during development of this Cloud Map comprised a single domain. All Hyper-V hosts and Exchange VMs were members of this single domain. In accordance with the Microsoft TechNet requirements for Exchange Server 2013, the Domain and Forest functional levels must be at Windows Server 2003 or above.
Hyper-V Failover Cluster requirements
As described above, this Exchange Server 2013 Cloud Map provides a highly available solution by leveraging the features of Hyper-V Failover Cluster. The Failover Cluster is expected to be properly configured and functional prior to implementing this Cloud Map. The Hyper-V hosts do not need to be dedicated solely to the Exchange Server environment. They can also host other solutions as long as they have sufficient resources to adequately support the combined workloads.
Provisioning at least four Hyper-V hosts in the Failover Cluster will help to ensure that each Exchange Server VM can reside on a unique host if a single host becomes unavailable. If only three hosts exist in the cluster, then failure of one will cause its VM to attempt to restart on one of the remaining pair. This will only succeed if sufficient resources (CPU and memory) are available on the target host, which means that each host would have to be over-configured with CPU and memory in its normal state.
Each of the VMs deployed from the Cloud Map IO template stores its boot and data disks on a Cluster Shared Volume (CSV). Each Exchange Server VM requires close to 16TB of disk space, comprising a boot disk of at least 300GB, ten 1.5TB virtual data disks, plus 128GB of overhead corresponding to its memory size. As supplied, the IO template specifies that each Exchange Server VM uses a unique CSV. Therefore there must be at least three CSVs configured into the cluster, and each must have at least 16TB of available space. It is possible to alter the default shared volume mappings by editing the IO template to specify the actual CSVs to be used 2. See the Edit virtual storage configuration section later in this document. Note that the virtual hard disks of a single VM deployed by IO cannot be spread across multiple CSVs.
Virtual machine guest requirements
Table 4 shows the requirements for each server group in the template, including the amount of memory and number of virtual CPU cores for each VM. Note that these are recommended values for a 5000 mailbox solution across three servers. Resource requirements can be tailored based on actual usage, or when in a test environment. The Editing a server group's compute resources configuration section describes how to change a server group's requirements.
2 The maximum size of a 3PAR Virtual Volume is 16TB, so if using such an array at least three virtual volumes must be exported to the Hyper-V hosts, one for each Exchange Server VM. This also means that the virtual disk sizes specified in the supplied IO template cannot be significantly increased. Doing so will cause server provisioning to fail. Other storage array models allow larger volumes to be presented to hosts.
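Before requesting a service, it is worth confirming that the cluster really does offer three CSVs with sufficient free space. The following PowerShell sketch, run on any cluster node, is one illustrative way to do this; volume naming will vary by site.

# Sketch: list each Cluster Shared Volume with its path, size and free space,
# to confirm there is close to 16 TB available per Exchange Server VM.
Import-Module FailoverClusters
Get-ClusterSharedVolume | ForEach-Object {
    [PSCustomObject]@{
        Name   = $_.Name
        Path   = $_.SharedVolumeInfo.FriendlyVolumeName   # e.g. C:\ClusterStorage\Volume1
        SizeTB = [math]::Round($_.SharedVolumeInfo.Partition.Size / 1TB, 2)
        FreeTB = [math]::Round($_.SharedVolumeInfo.Partition.FreeSpace / 1TB, 2)
    }
}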

Table 4. Server and storage requirements of the Exchange Server 2013 template
ExchServerN (3 VMs, N = 1, 2 and 3):
Recommended memory: 128 GB (each VM)
Recommended virtual CPU cores: 16 vCPUs (each VM); each physical CPU should be of Intel Xeon E capability*
Storage: 3 x 300 GB (VM boot disks); 10 x 1.5TB on each Exchange Server VM (8 for databases, 1 for a Recovery volume, and 1 for Public Folders); 3 x 128GB (VM overhead)
Witness:
Recommended memory: 2 GB
Recommended virtual CPU cores: 1 vCPU
Storage: 32 GB (minimum boot disk); 2GB (VM overhead)
Totals: 386 GB memory; 49 vCPUs; 47 TB storage (available on CSVs)
* See Appendix A for more detailed sizing information
Note on total memory, total virtual CPU cores, and total storage
To successfully deploy a service for Exchange Server 2013 from the IO template, you will need sufficient resources on the VM host(s) and SAN storage for all of the server groups in the IO template, even in the event of a host failure when the VMs fail over to the remaining host(s). The bottom row of Table 4 lists the recommended total amounts of memory, number of virtual CPU cores, and storage requirements of the IO template as provided for a 5000 mailbox solution. The IO template can be modified to adjust the resources allocated if actual utilization is expected to be different, but be aware that reduction of vCPU and memory resources may result in performance degradation. (A short cross-check of these totals is shown just before the Operating system requirements section.)
Networking requirements
The IO template requires two networks, as listed in Table 5. The names of the networks listed below are to be substituted with appropriate networks defined in the virtualized environment, typically using Virtual Switches created with Hyper-V.
Table 5. Networking requirements of the IO template for Exchange Server 2013
PRODUCTION: This is the public network that will be used for the servers to communicate with each other, Active Directory services, Exchange clients, and other systems that are outside of the Exchange Server 2013 environment. The standard production network in your CloudSystem Matrix environment may be used to meet this requirement. For the environment that was used to develop and test this Cloud Map, the Prod-HVnet network was used as the production network.
REPLICATION: This is the private network between the Exchange Server VMs that is used for Exchange-related replication traffic. This network could be dedicated solely for this purpose, or could be an existing network that is used for similar purposes with other applications. For the environment that was used to develop and test this Cloud Map, the Repl-HVnet network was used.
The CloudSystem Matrix networks that will be selected for each of the above Cloud Map networks must be properly configured into infrastructure orchestration before implementing the Cloud Map. If you need information on configuring networks in IO, refer to the Infrastructure Orchestration User Guide at hp.com/go/matrixoe/docs. You will have to modify the template to suit your particular network configuration, as described in the Edit the network configurations section of this document. You will also need to customize the file Exch2013_GetServerInfo.xsl as described later in the Customizing the XSL file for network name section of this document. Refer to Appendix A for more Exchange Server specific network configuration information.
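As a cross-check of the Table 4 totals above, which is useful if you resize the template, the per-VM and overall storage figures can be reproduced with a few lines of PowerShell. The sizes are those of the supplied template.

# Cross-check of the Table 4 storage totals, using the sizes in the supplied template.
$perExchVmGB = 300 + (10 * 1536) + 128      # boot disk + ten 1.5 TB data disks + memory overhead
$witnessGB   = 32 + 2                       # Witness boot disk + memory overhead
$perExchVmTB = [math]::Round($perExchVmGB / 1024, 1)
$totalTB     = [math]::Round((3 * $perExchVmGB + $witnessGB) / 1024, 1)
"Each Exchange VM needs about $perExchVmTB TB; the whole service needs about $totalTB TB"
# Prints roughly 15.4 TB per Exchange VM and about 46 TB overall, in line with the
# 'close to 16TB' per-CSV figure and the 47 TB total quoted in Table 4.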

Operating system requirements
Table 6 shows the operating system requirements for each server group of the IO template.
Table 6. Operating system requirements of the IO template for Microsoft Exchange Server 2013
ExchServerN: Windows Server 2012 Standard Edition (or Datacenter Edition if desired). This OS version must be available as a VM template that can be associated with an IO template and thus be deployed by infrastructure orchestration*. The OS image need only have a basic configuration, but the Windows Firewall settings within the OS image must be turned off for the Domain, Public and Private profiles. For the Exchange Server VMs, the template's system disk size must be 300GB or more.
Witness: Windows Server 2012 Standard Edition (or Datacenter Edition if desired). The same VM template requirements apply to the Witness VM, except that the system disk size need only be of minimal size (32GB). However, the same template as used for the Exchange VMs can be used if desired.
*The VM template can be sourced from Microsoft System Center Virtual Machine Manager if this is integrated into Matrix OE, or it can be a pre-created VM within Hyper-V that has been imported into Matrix OE using infrastructure orchestration's Create Hyper-V VM Template setup task, found on the Software tab.
Table 7 lists the additional operating system features and products that are installed on the servers during the installation of Exchange Server 2013 by this Cloud Map. Exchange 2013 prerequisites are outlined in the Microsoft Exchange Server 2013 prerequisites documentation on TechNet.
Table 7. Additional operating system features and products required by Exchange Server 2013
Required Windows Server 2012 features such as IIS components, .NET Framework, RPC over HTTP proxy, and many others
Windows Remote Server Administration Tools Active Directory Domain Services (RSAT-ADDS) feature
Microsoft Unified Communications Managed API 4.0, Core Runtime 64-bit
Microsoft Office 2010 Filter Pack 64-bit
Microsoft Office 2010 Filter Pack SP1 64-bit
The RSAT-ADDS feature is not directly required by Exchange Server but is installed to provide tools for the Cloud Map to perform some necessary changes in Active Directory. The non-operating system components can be downloaded from the prerequisites documentation referenced above.
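If you prepare the VM operating system image interactively, one way to satisfy the firewall requirement in Table 6 is shown below. This is a hedged sketch to be run inside the Windows Server 2012 image before it is generalized; it is not part of the supplied Cloud Map scripts.

# Sketch: turn off the Windows Firewall for the Domain, Public and Private
# profiles in the image used for the VM template (requirement from Table 6).
Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False
# Confirm the change
Get-NetFirewallProfile | Select-Object Name, Enabled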

How to use this HP Cloud Map
This section:
Provides a checklist of resources required to implement this HP Cloud Map
Describes the HP Cloud Map zip file for Microsoft Exchange Server 2013
Describes the software required for Microsoft Exchange Server 2013
Describes the configuration steps to be performed prior to deploying the Cloud Map
The instructions in this document assume you have already set up your HP CloudSystem Matrix environment, comprising:
HP BladeSystem enclosure(s) with HP ProLiant server blades
HP Virtual Connect for networking and SAN storage connections
HP Matrix Operating Environment software, running on a dedicated HP Central Management Server (CMS) system
Virtualization hypervisor hosts (Hyper-V role in Windows Server 2012 3) running on multiple ProLiant blade servers, configured as a high-availability cluster
Checklist before implementing this Cloud Map
Before implementing this Cloud Map you should check that you have all of the required resources. Table 8 provides a checklist of the required resources. (Microsoft software kits and licenses are not supplied with this Cloud Map.)
Table 8. Checklist of resources required to deploy this Cloud Map
HP CloudSystem Matrix environment: A configured and running HP CloudSystem environment (as described in the Solution environment section), with sufficient compute, storage and networking resources to meet the requirements listed in Tables 4 and 5.
Virtual Machine hypervisors: Multiple virtual machine hypervisors. This requires Hyper-V in a Failover Cluster environment to maximize solution availability.
Virtual Machine guest operating system image template: A virtual machine guest operating system image template that meets the criteria outlined in the Operating system requirements section. This template must be visible to the Matrix OE infrastructure orchestration deployment environment.
A configured Active Directory environment: Exchange Server 2013 requires an Active Directory environment. This Cloud Map assumes the environment is already configured. A schema modification for Exchange Server 2013 is necessary. See section Updating the Active Directory schema for Exchange Server 2013.
Exchange2013_CloudMap_v1.zip: Zip file containing the files that comprise this HP Cloud Map.
Exchange Server 2013 installation software: A software kit that will be extracted on the Exchange servers during deployment to install the product.
Exchange 2013 Prerequisites: Microsoft Unified Communications Managed API 4.0, Core Runtime 64-bit; Microsoft Office 2010 Filter Pack 64-bit; Microsoft Office 2010 Filter Pack SP1 64-bit.
Software licenses (if applicable): Licenses / rights-to-use for the Microsoft Exchange Server 2013 software.
3 This Cloud Map should successfully work with Microsoft Hyper-V Server 2012 installed on the host servers, instead of Windows Server 2012, but this configuration was not tested.

At this stage it is assumed that the first four resources (CloudSystem Matrix, hypervisor, guest operating system image template and Active Directory) have already been configured.
Unpacking the Exchange2013_CloudMap_v1.zip file
The Exchange2013_CloudMap_v1.zip file can be downloaded from the HP Cloud Maps download site. The contents of this Cloud Map file are listed in Table 9. Create an appropriate folder on the CMS (for example, C:\CloudMaps\Exchange2013_5000-Mailbox\), and unzip the Cloud Map file contents into that folder.
Table 9. Contents of the Cloud Map zip file Exchange2013_CloudMap_v1.zip
Exch2013_5000-Mbx.xml: Infrastructure orchestration template for a 5000 mailbox Exchange Server 2013 configuration using Windows Server 2012 on virtual machines
Exch2013_GetServerInfo.xsl: XSL transformation file which extracts details (server name, server group and IP address) of newly provisioned servers from the IO request-xml. This file must be placed in a specific location to be correctly referenced by IO (see section The XSL transformation file)
Exchange2013_workflows.zip: Zip file of the workflows used to perform the installation of Exchange Server 2013 on newly provisioned Windows VM servers
Sysprep_Exch2013.inf: Windows file used to enable provisioned servers to join the required domain during the installation process. This file must be edited with site-specific information (see section IO Windows deployment configuration)
Exch2013-CM_config.txt: Configuration file, located in the Config folder. This file will be edited later to customize settings for the specific deployment environment
Exch2013_CMS_EnablePS.cmd: Command shell script to enable PowerShell scripts to execute on the CMS. This script is located in the Scripts folder
Exch2013_CredSSP_Client.ps1: PowerShell script to enable the CredSSP authentication client on a server. This script is located in the Scripts folder
Exch2013_Schema.ps1, Exch2013_PreReqs.ps1, Exch2013_ServerConfig_1.ps1, Exch2013_ServerConfig_2.ps1, Exch2013_ServerConfig_3.ps1, Exch2013_ExchConfig.ps1, Exch2013_FinalConfig.ps1, Exch2013_ServerCleanup.ps1, Exch2013_CredSSP_Server.ps1: PowerShell scripts to configure the Exchange Server VMs and then to install and configure Exchange Server 2013. These scripts are located in the Scripts\Exch folder
Exch2013_WinFeaturesExch.cmd, ModifyPSexecutionPolicy.reg: Command shell script and input file to modify the PowerShell execution policy and install required Windows features. These files are located in the Scripts\Exch folder
Exch2013_WitnessConfig.ps1: PowerShell script to register a computer account in AD for the Witness server. This script is located in the Scripts\Witn folder
Exch2013_WinFeaturesWitn.cmd, ModifyPSexecutionPolicy.reg: Command script and input file to modify the PowerShell execution policy and install required Windows features. These files are located in the Scripts\Witn folder
Exch2013_diskpart_1.txt, Exch2013_diskpart_2.txt, Exch2013_diskpart_3.txt: Input files for the DISKPART command that are used to configure provisioned data disks. These files are located in the Scripts\Exch folder
Readme.txt: Readme file containing information about the Cloud Map.
Software folder: Contains two subfolders, Exch and Witn, into which the Exchange Server and prerequisite software installation kits must be placed. These software kits are not supplied with this Cloud Map. Refer to the next section for more information.

Table 9 (continued). Contents of the Cloud Map zip file Exchange2013_CloudMap_v1.zip
Temp folder: This folder is initially empty. The workflows will create temporary files in this folder during execution. If desired, any files in this folder can be deleted after each service deployment.
Microsoft Exchange Server 2013 software kits
As shown in the Table 8 checklist, there are a number of software components required for a successful Exchange Server 2013 installation using this Cloud Map. The following files comprise the set of kits that need to be obtained and located on the CMS. The Cloud Map workflows will transfer them to the target servers and then install the components. Table 10 lists the files required by the Cloud Map scripts that must be supplied by the Cloud Map user.
Table 10. User-supplied files required by the Cloud Map scripts
Exchange_x64.exe: Exchange 2013 CUx installation file 4. Obtain from: your corporate license with Microsoft
UcmaRuntimeSetup.exe: Microsoft Unified Communications Managed API 4.0, Core Runtime 64-bit. Obtain from: microsoft.com/en-us/download/details.aspx?id=34992
FilterPack64bit.exe: Microsoft Office 2010 Filter Pack 64-bit. Obtain from: microsoft.com/en-us/download/details.aspx?id=17062
Microsoft Office 2010 Filter Pack SP1 64-bit: Microsoft Office 2010 Filter Pack SP1 64-bit. Obtain from: microsoft.com/en-us/download/details.aspx?id=26604
This Cloud Map requires the software kits to be placed in the Software\Exch folder beneath the top-level Cloud Map folder on the CMS. The folder will have been created when unpacking the zip file. No additional software kits are needed for the Witness server, so the Software\Witn folder remains empty except for the Readme.txt file.
The setup feature of Exchange Server 2013 allows updates to be installed when Exchange is being installed on the servers. The Cloud Map provides an Updates folder that is empty. If there are updates to be installed with Exchange, then this folder can be populated with those updates. Exchange setup fails if the /UpdatesDir option is used and the specified directory is empty. If you wish to utilize the /UpdatesDir option with Exchange setup, then modify the three Exch2013_ServerConfig_X.ps1 files, where X is 1, 2, and 3. Those files include the /UpdatesDir option in a comment that can be included in the setup.exe command to utilize this feature (see the example setup command sketched below).
Windows IO Sysprep file placement
Infrastructure orchestration makes use of a Windows Sysprep file, Sysprep_Exch2013.inf, that is supplied with the Cloud Map zip file and located in the top-level Cloud Map directory on the CMS. The file must be moved (or copied) beneath infrastructure orchestration's configuration folder. By default the location is C:\Program Files\HP\Matrix infrastructure orchestration\conf\sysprep. The file will be edited later to provide site-specific information. This is described in section IO Windows deployment configuration.
Next steps
The next step to implementing the Cloud Map for Exchange Server 2013 is to import the IO template into your Matrix OE environment. This is then followed by importing the OO workflows. Once all the templates and workflows from the Cloud Map have been imported into your environment, the final customization steps can be completed.
4 This Cloud Map was tested with the Exchange Server 2013 Cumulative Update 2 installation file. It is recommended that this version be used.
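For reference when editing the Exch2013_ServerConfig_X.ps1 scripts, the general form of an unattended Exchange Server 2013 installation that uses /UpdatesDir is sketched below. The paths shown are examples only; the supplied scripts construct their own setup command during deployment.

# General form of an unattended Exchange Server 2013 installation that applies
# updates from a local folder. Paths are examples only; the supplied
# Exch2013_ServerConfig_X.ps1 scripts build their own setup.exe command and
# carry the /UpdatesDir option in a comment that can be enabled if required.
Set-Location "C:\Exch2013_CloudMap\Exch"          # example: unpacked Exchange kit location
.\setup.exe /Mode:Install /Roles:Mailbox,ClientAccess `
            /IAcceptExchangeServerLicenseTerms `
            /UpdatesDir:"C:\Exch2013_CloudMap\Updates"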

15 Importing and customizing the IO template Importing the IO template After downloading the Cloud Map, the IO template file can be imported directly into the HP Matrix infrastructure orchestration Designer application on the CMS. The IO template file has appropriate resource attributes already configured. Launch the infrastructure orchestration Designer application and click the Import button which is circled in red in Figure 4. Figure 4. Importing a template into HP Matrix infrastructure orchestration Designer A dialog window like that shown in Figure 5 will be presented. Use the window to navigate to the directory where you placed the unzipped Cloud Map files, and select the IO template which is an XML file then click Open. This will import the IO template into HP Matrix infrastructure orchestration Designer. Figure 5. Upload the IO template 15

16 The IO template defines software requirements for each server group, and typically the software referenced in the template will differ from what is available on the CMS, resulting in the Software ID Mismatch pop-up window shown in Figure 6. This is normal; you will need to reconfigure the server properties to match what is available in your environment, as described later on in this document. For now click Finish to continue. Figure 6. Software ID Mismatch window 16

17 After import, you should see an IO template that looks like Figure 7. You will notice that the Validation Status is a white cross on a red background, which indicates errors. Figure 7. The IO template for Microsoft Exchange Server 2013 in HP Matrix infrastructure orchestration Designer 17

18 If you click the Show Issues button (under Validation Status), it will show the IO template components requiring attention by highlighting them with a red background. Figure 8 shows six issues with the IO template that need attention. The first two are because the networks that were defined in the template do not exist on the target system, so you will need to assign the networks to those that exist at your site. We will now go through the steps required to address this issue. Figure 8. Imported IO template showing issues 18

19 Edit the network configurations To address the networking configuration errors, right click on the PRODUCTION network icon and select Edit Network Configuration, as shown in Figure 9. Figure 9. Edit a network configuration of the IO template 19

20 This will bring up the network configuration dialog shown in Figure 10. In this dialog you will specify which network should be used for the production network at your site. Ensure the Select a specific network radio button is selected. This will display all of the networks configured at your site. The networks that appear as available for use are those that have been defined both in Virtual Connect and with a hypervisor, and configured in infrastructure orchestration with a range of network addresses. Figure 10. Configuring the network Select the network that you want to use for the PRODUCTION network; in Figure 10 the Prod-HVnet network has been selected. At this point you could optionally check the Show All Network Details checkbox (shown near bottom-right of Figure 10) for the dialog window to show you information about the selected network. The information displayed for a network includes how many addresses are available in the address pool for the network which will help you to check if the chosen network has adequate resources. After you configure the PRODUCTION network, you will need to configure the REPLICATION network used by the template. Follow the same process by right clicking on the respective network icon and selecting the network you have configured for replication use (in this example that network is Repl-HVnet ). Note: In an Exchange DAG, the replication network should not have a default gateway, or a DNS server defined. However, the network settings within infrastructure orchestration may necessarily have these items specified, particularly if the network is to be used for other purposes. Refer to section Network subsystem configuration for important information on the network configuration necessary to successfully configure the replication function of Exchange Server. 20
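As a related post-deployment adjustment (see the Network subsystem configuration section), the replication adapter on each Exchange Server VM can be prevented from registering its address in DNS. The sketch below is illustrative only and assumes the replication connection is named Replication; confirm the actual adapter name with Get-NetAdapter before applying it.

# Sketch only: review and adjust the replication adapter on each Exchange Server VM.
Get-NetAdapter
Get-NetIPConfiguration -InterfaceAlias "Replication"      # check for unwanted gateway/DNS entries
Set-DnsClient -InterfaceAlias "Replication" -RegisterThisConnectionsAddress $false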

21 Edit the server group configurations Next we need to edit each of the server group configurations. Right click on a server group icon that indicates an issue and select Edit Server Group Configuration, as shown in Figure 11 for the ExchServer1 server group. Figure 11. Edit a server group configuration of the IO template 21

22 The server configuration dialog will be displayed, initially showing the Config tab, as shown in Figure 12. Editing a server group s compute resources configuration Figure 12 shows the dialog window that enables a server group s compute resources to be configured. Figure 12. Edit a server group s compute resources configuration In the dialog window of Figure 12 you can customize the IO template so that the compute resources for this server group match your requirements. For example, if you need a different amount of memory or number of vcpus per server for a server group than what is configured in the template, then you can change it here (assuming that your VM hosts have adequate resources to meet the new configuration). You might decide to use IO Designer s Save as option to save the IO template with different compute resource requirements, to give your users more choice. For example, you could have Small, Medium and Large variations of the IO template, where the number of processors and amount of memory per virtual server is varied accordingly. In the example import session shown here the compute resources were left at the default values. Do not click OK yet because the Software tab shows an error status that has to be addressed. Notice that the Config tab has a warning symbol and that the Processors per Server selection boxes are outlined in yellow. This indicates a configuration warning which in this case is that more than four vcpus are specified. IO Designer provides this warning as a reminder to check that the target hypervisor can support more than four CPUs per VM. This warning is for informational purposes and cannot be eliminated, but can be ignored. 22

Reconciling the server group's software selection
The Software tab in Figure 12 shows an error status, indicated by the white X circled in red. The problem here is that the operating system requirement for this server group in the IO template needs to be reconciled (recall Figure 6). Select the Software tab and the dialog will show all of the HP IO provisioning entities for the operating systems that are configured in the environment, similar to that shown in Figure 13.
Figure 13. Editing a server group's operating system software selection
There should be an operating system template matching the requirements given in the Operating system requirements section already prepared, and listed in the selection window. Select the appropriate one.
Before clicking OK to complete this dialog, click the Change button next to the Sysprep file dialog box. A Sysprep Selection dialog box appears (shown in Figure 14). Select the file pertaining to this Cloud Map that was previously placed in IO's config\sysprep folder (refer back to section Windows IO Sysprep file placement). It may be necessary to click the refresh icon to cause the newly added file to be listed.

Figure 14. Selecting a server group's Sysprep file
Click OK in the above dialog box; this causes the Exchange Sysprep file to be selected, as shown in Figure 15.
Figure 15. Completed server group's operating system software selection
Click OK in the Configure Server Group dialog box. This completes the reconciliation of the software selection for the first ExchServer server group. The software selection reconciliation process needs to be repeated for the other ExchServer and Witness server groups in the IO template.

Edit virtual storage configuration
Each server group has one virtual disk associated with it for its boot disk, the size of which comes from the VM operating system template. The Exchange server groups have an additional ten virtual disks associated with each server to store Exchange-related data. The default sizes of the disks match the recommended requirements listed in this document (refer to Table 4).
The Exchange Server 2013 solution implemented by this Cloud Map places the boot and data virtual disks for each Exchange Server VM onto a unique Cluster Shared Volume (CSV) configured into the Hyper-V Failover Cluster. The supplied IO template assumes that the CSVs to be used for the Exchange Server data are named C:\ClusterStorage\Volume1, Volume2 and Volume3, and associates these with server groups ExchServer1, ExchServer2 and ExchServer3 respectively (the Witness server also uses Volume3). Depending on the deployed Hyper-V Failover Cluster environment, it may be necessary to change the names of the CSVs associated with each group to ensure that the designated CSVs are used by each server VM.
Within the IO Designer application, right-click on a boot disk virtual storage icon and select Edit Storage Configuration. The Configure Storage dialog appears and you can then change the Storage Volume Name to specify the actual CSV assigned for the VM data (see Figure 16).
Figure 16. Changing the name of a Virtual Storage volume
Note that the Storage Volume Name can only be changed for bootable disks. All other virtual storage associated with the server group will be assigned to the same volume that is specified for the boot disk.
If the default sizes are inappropriate for the desired Exchange Server 2013 environment, you can edit each of the non-bootable virtual disks within the template to specify revised disk sizes. Also, if additional storage is required for your environment, you can edit the IO template now to drag additional virtual storage onto the template, set the size required for the virtual storage, and then link it to a server group. However, before making any such changes, be aware of the capacity limits of the CSV associated with the server group. It may be that insufficient space will be available for additional virtual disks, in which case the IO service will fail to deploy. The OO workflows and scripts provided with this HP Cloud Map make no provision for additionally configured (or removed) storage; they assume that only the default storage exists.
Note
Refer to the Appendix D section Resource reservation failure for information about an issue related to storage volumes that may be encountered when deploying a service.

Save the validated template
After importing and addressing site-specific configuration issues, you should see an IO template that looks like Figure 17. You will notice that the Validation Status still shows a warning and that the ExchServer groups are highlighted in yellow too. Again, this is expected behavior due to the groups' virtual CPU count and can be ignored. Ensure that the Published checkbox is not checked, and save the IO template by clicking the Save icon.
Figure 17. Save the validated IO template
This concludes the steps for importing and customizing the IO template. Next the OO workflows are imported, then final customization is performed.

27 Importing the Operations Orchestration workflows Before you can complete the required changes to the template, you need to import the workflows that are used with the IO template. First, extract the workflows from the zip file Exchange2013_workflows.zip that is supplied within the Cloud Map download file. In the test environment the workflows were unzipped to the folder named Exchange2013_workflows beneath C:\CloudMaps\Exchange2013_5000-Mailbox. Launch HP Operations Orchestration Studio on the CMS either from the desktop icon or from the Start menu and log into it. Select the Repository menu item and then Add Repository, as shown in Figure 18. Figure 18. Add Repository 27

28 A dialog window like that shown in Figure 19 will be displayed. Type in a Repository Name (such as that shown in Figure 19), then for the Add/Create Local Repository field browse to the location where the workflows were unzipped to. Click OK to open the workflow repository in OO Studio. Figure 19. Add Repository Name The title of the OO Studio window should indicate that you are now in a local repository. From the menu bar, click on Repository and then Set Target Repository and select your Default Public Repository, as shown in Figure 20. Figure 20. Set Target Repository 28

From the menu bar, click on Repository again and then select the item Publish Source To Target Preview. In the left pane of the window, if you drill down to Library > Hewlett-Packard > Infrastructure orchestration > Service Actions, you should see the Exchange2013 workflows to be published shown in red (see Figure 21). Note also that under Configuration > System Accounts there are two accounts associated with this Cloud Map. Finally, note that under Configuration > System Properties there are no system properties being added.
Figure 21. Publish Source to Target Preview
At the bottom-right of Figure 21 is the Publish/Update window (this might be iconized at the bottom of your Operations Orchestration Studio window). Maximize the Publish/Update window.

30 If you fully expand the Library and Configuration items it should then appear similar to Figure 22. Locate the Apply icon (circled in red in Figure 22) and click it to publish the new workflows. Figure 22. Publish/Update window If desired enter a comment for the imported workflows as shown in Figure 23, and click on OK. Figure 23. Publish workflow with comment A confirmation message will be displayed click on OK to continue. If the Publish/Update pane within the OO Studio window is maximized, then iconize it. 30

31 Now open the default repository by clicking on the Repository menu, then Open Repository and select Default Public Repository, as shown circled in red in Figure 24. Figure 24. Open Default Public Repository 31

To see the imported workflows in the Default Public Repository, use OO Studio to drill down into Library > Hewlett-Packard > Infrastructure orchestration > Service Actions > Exchange2013. You should see a window similar to that shown in Figure 25. (Note that there may also be workflows from other Cloud Maps listed beneath Service Actions.)
Figure 25. Default Public Repository with Microsoft Exchange Server 2013 workflows
Next steps
At this stage the IO template and OO workflows have been imported into the target Matrix OE environment. Before users can request an Exchange Server 2013 cloud service, further configuration work is required, such as:
Editing the configuration files and the XSL transformation file used by the Cloud Map.
Customizing the Cloud Map for the target environment, for example by setting OO System Properties.
Associating the workflows with the appropriate IO execution points.
The Cloud Map configuration and customization work is described in the next section of this document.

33 Configuring and customizing the Cloud Map Before deploying the Cloud Map, some modifications are required in the files accessed by the workflows, and OO System Properties and System Accounts must be configured. The Cloud Map configuration file This Cloud Map supplies a text file called Exch2013-CM_config.txt to set various configuration parameters. A default version of this file will have been placed in the Config directory beneath where the Exchange2013_CloudMap_v1.zip file was unpacked, which we will assume is C:\CloudMaps\Exchange2013_5000-Mailbox\. The configuration file must be edited by using a plain text editor, such as Notepad, to customize it for your environment. The format of the configuration file is that each line starts with a parameter name which must begin with Exch2013_ and then has an equals sign, and ends with the value for that parameter. Below is an example parameter and value (do not place white space in the line of text). Exch2013_DomainName=my-domain.net Table 11 lists the configuration file parameters, along with default values and descriptions. Table 11. Contents of the Exch2013-CM_config.txt file Parameter name Default value Description Exch2013_xslFile Exch2013_GetServerInfo.xsl Filename (without directory path) of the XSL transformation file on the CMS that is used to obtain server details from the request-xml. It is generally unnecessary to change the name of this file Exch2013_DeplFolder Exch2013_CloudMap Name of the Cloud Map folder that will be created on deployed VMs to contain the software and scripts required to install the Exchange Server 2013 environment. This folder will be created on the C: drive. After the Cloud Map has been successfully deployed this folder can be deleted if desired Exch2013_DomainName my-domain.net Domain name into which the Exchange servers are placed Exch2013_NBDomainName MY-DOMAIN NetBIOS format domain name into which the Exchange servers are placed Exch2013_DAGName ExchDAG The chosen name of the Database Availability Group for the deployed Exchange Server environment Exch2013_DAGIP An IP address to be associated with the Database Availability Group. This address must be manually selected for the desired environment, and not clash with any other address or address allocation range defined elsewhere in the network. It must be a valid address on the Production network. Exch2013_LicenseKey A1B2C-3D4E5-F6G7H-8I9JA- KBLCM An Exchange Server 2013 license key to apply to the deployed VMs Exch2013_NetworkMaxRetries 20 Value for how many times to sleep and retry network access if a server is not responding on the network. It is generally unnecessary to change this value Exch2013_NetworkSleepTime 30 Value for how long to sleep for (in seconds) before retrying access to servers not responding on the network. It is generally unnecessary to change this value 33

34 Parameter name Default value Description Exch2013_SysprepWaitTime 720 The period of time (in seconds) to wait for deployed VMs to complete their post-installation configuration. After the OO workflow has deployed Windows from a VM template, it will use the Sysprep_Exch2013.inf file to add the VMs to the specified domain. The OO subflow completes before Windows configuration has completed. Continuation of the OO workflow must be delayed until this Windows configuration ends. The default value given here was determined from the test environment used for developing this Cloud Map. Exch2013_FeatureWaitTime 480 The period of time (in seconds) to wait for deployed VMs to complete the configuration of added Windows features. Operating system features are installed by the OO workflow using supplied scripts and the VM is then restarted to apply the change. During reboot it takes a period of time for the feature configuration to complete. Continuation of the OO workflow must be delayed until this Windows configuration ends. The default value given here was determined from the test environment used for developing this Cloud Map. The XSL transformation file The file Exch2013_GetServerInfo.xsl, provided by this Cloud Map, is an extensible Stylesheet Language (XSL) transformation. This XSL file is used by the workflows to extract the server name, server group name, and server s primary IP address for each deployed server from the request XML, where the request XML is the XML that infrastructure orchestration generates to describe the infrastructure that it has provisioned. Customizing the XSL file for network name The XSL file must be modified (by using a plain text editor) to reflect the name of the PRODUCTION network in your HP IO environment. That is, the following line must be changed: <xsl:if test="$subnet='production'"> The PRODUCTION text string must be replaced by the network name you chose as the production network in your HP Matrix IO environment. In the test environment the Prod-HVnet network was used, so the line was changed to: <xsl:if test="$subnet='prod-hvnet'"> Save the modified file, either replacing the original version or choosing an alternative name. Update the value of the Exch2013_xslFile Cloud Map parameter in the Exch2013-CM_config.txt file if you choose to give the modified file a new name. Placing the XSL file The convention is to place all XSL files into the directory defined by the OO system property HpioConfDir, which in the test environment had the default value C:\Program Files\HP\Matrix infrastructure orchestration\conf\oo. Use OO Studio to check the value of HpioConfDir for your environment, and move (or copy) the modified Exch2013_GetServerInfo.xsl file into that directory. IO Windows deployment configuration The server virtual machines that are deployed by IO using the IO template must join a Windows domain when the operating system has been installed. This is achieved using information specified in the Sysprep_Exch2013.inf file which is used by IO to provide customization data for provisioning of virtual machines. An example of this file is supplied in the Cloud Map zip file, and is located in the top-level Cloud Map directory on the CMS. The file should have been previously moved (or copied) beneath IO s configuration folder, default location C:\Program Files\HP\Matrix infrastructure orchestration\conf\sysprep. The supplied version of the Sysprep file is an edited version of Sysprep_sample.inf which already resides in this directory by default. 34
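For orientation, a Sysprep answer file of this kind typically looks like the fragment below. This is only an illustrative sketch: the section layout follows the classic sysprep.inf convention, all values are placeholders, and you should always start from the supplied Sysprep_Exch2013.inf (or Sysprep_sample.inf) rather than creating a file from scratch. The individual parameters to review are described in Table 12.

; Example values only - replace with site-specific settings
[GuiUnattended]
AdminPassword="placeholder-password"
TimeZone=035

[UserData]
ProductKey=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

[LicenseFilePrintData]
AutoMode=PerServer
AutoUsers=5

[Identification]
JoinDomain=my-domain.net
DomainAdmin=my-domain\deploy-join
DomainAdminPassword="placeholder-password"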

The supplied file must now be edited to provide site-specific information (modify the version in the IO configuration folder if the file was copied from its original location). Table 12 lists the parameters that should be reviewed and values changed as necessary.

Table 12. Parameters of the Sysprep_Exch2013.inf file to be reviewed

AdminPassword: The initial local Administrator password of the newly deployed VMs. Use of this parameter prevents the initial prompt for a password change that usually appears after Windows is installed. When the Cloud Map has completed the deployment of the Exchange service, the servers' local Administrator passwords can be manually changed. Use of the EncryptedAdminPassword parameter may be more appropriate for certain sites, but this option was not tested during development of this Cloud Map.

TimeZone: Specifies the timezone in which the deployed virtual machines reside.

ProductKey: Windows Product Key value that will be used to activate the instance of Windows on the deployed virtual machines.

AutoMode: The client licensing mode of the Windows Server OS. The default value is PerServer but this may need to change to PerSeat if that is the licensing mode in use.

AutoUsers: The number of Client Licenses purchased for the server (if PerServer mode is specified).

DomainAdmin: A domain user account that allows the newly deployed server to join the desired Windows domain. The specified user must have the required permissions to allow this task.

DomainAdminPassword: The password of the above DomainAdmin user account. It is acknowledged that storing passwords in clear-text files is a potential security issue. It is therefore strongly recommended that the Sysprep file is adequately protected against unintentional access. An alternative to specifying the Domain Administrator account is to specify an existing user account that has an obscure name with minimal permissions to only allow new computers to join the domain. Such a name is given as an example in the file.

JoinDomain: The name of the Windows domain to which the new servers will belong, in FQDN or NetBIOS format.

Updating the Active Directory schema for Exchange Server 2013

In order to deploy Exchange Server 2013, the Windows Active Directory schema must be updated, and domains must be prepared. Information about updating the schema and preparing domains is available at:

A script to update the schema is included in the Cloud Map, but some organizations may prefer to perform this task by hand. In either case, the Active Directory schema must be updated, and the domain in which the Exchange servers are to be installed must be prepared. This can be accomplished by executing the following command from any existing server in the domain that the Exchange servers will be installed in. The Exchange Server 2013 software installation files must first be (temporarily) unpacked on this server. Launch a command prompt and change the default directory to the location of the Exchange setup folder. The server must have the Active Directory management tools installed (the RSAT-ADDS Windows feature).

.\setup.exe /PrepareAD /OrganizationName:<Exch-Org-Name> /IAcceptExchangeServerLicenseTerms

If previous versions of Exchange have been installed, then the organization name parameter is not necessary. The account used to execute this command must be in the Schema Admins and Enterprise Admins groups. That command must be executed on a 64-bit server in the same domain and same Active Directory site as the schema master.
After the schema is updated, the changes must be replicated through the forest. In larger and more complex environments, the schema update and domain preparation can be divided into separate steps. Please refer to the link provided at the start of this section for more information.
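If you run the schema update by hand, one way to confirm that the change has reached a given domain controller is to query the Exchange schema version attribute with the Active Directory PowerShell module. This is an optional, illustrative check (the exact rangeUpper value to expect depends on the Exchange Server 2013 build and is documented by Microsoft):

Import-Module ActiveDirectory
# Locate the schema naming context and read the Exchange schema version marker
$schemaNC = (Get-ADRootDSE).schemaNamingContext
Get-ADObject "CN=ms-Exch-Schema-Version-Pt,$schemaNC" -Properties rangeUpper |
    Select-Object DistinguishedName, rangeUpper

Running the same query against each domain controller (using the -Server parameter) shows whether replication has completed everywhere.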

36 Set OO system properties Operations Orchestration system properties must be set in order to customize the environment used by Cloud Map workflows. This Exchange Server 2013 Cloud Map uses two system properties, detailed in Table 13. Table 13. System Properties for Exchange Server 2013 Cloud Map System property name Example value Description Exch2013_5000-Mbx_CMS-folder C:\CloudMaps\Exchange2013_ 5000-Mailbox Full pathname of the folder on the CMS node used for the various files (configuration file, script files, software kit files, etc.) of this Cloud Map. The value must match the location to which the Cloud Map zip file was extracted Exch2013_5000-Mbx_TraceFlows true Enables a workflow troubleshooting mechanism. Set to true to enable flow messages to be logged in a trace file (refer to Table 20 in Appendix B for additional information) A new OO system property must be created for each item listed in Table 13, using the system property names exactly as written. The example values shown in Table 13 can be customized to your environment. To add a system property To add a system property you first need to be running Operations Orchestration Studio. Then expand the Configuration folder, and right click on System Properties, then select New as shown in Figure 26. Figure 26. Adding a System Property 36

Enter the name of the system property. In the example shown in Figure 27 a generic system property name of Example_SysProp is used, whereas you will use one of the system properties from Table 13. After entering the system property name, click OK.

Figure 27. Add system property name

Next, enter a Property Value for the system property, and optionally enter a Description. Figure 28 shows the system property Example_SysProp being set to the value Example_Value. Once the Property Value and Description fields have been set, click the save icon (circled in red) to save the system property.

Figure 28. Give System Property a specific value

Repeat the process of adding a System Property for all of the properties listed in Table 13.

38 Checking in all system properties To check in all system property additions together, right-click on System Properties in the My Changes/Checkouts pane, and select Check In Tree, as shown in Figure 29. A pop-up dialog window allows you to enter a comment for the change you could enter text such as For Cloud Map for Microsoft Exchange Server 2013 on Windows Server 2012 then click OK. Figure 29. Checking in all System Properties This concludes adding the OO system properties. 38

39 OO system account definition The workflows for Microsoft Exchange Server 2013 use an OO system account called Exch2013_DomAdmin to access (log in to) the deployed Windows VM servers and to make changes to the Active Directory. This OO system account has to be modified using HP Operations Orchestration Studio to set the appropriate username and password. The specified account must be a domain user account that has sufficient permissions to install software, make additions and amendments to Active Directory and perform other administrative actions. It is strongly recommended to use the Domain Administrator account but an alternative user account, if it does not already exist, can be created before implementing the Cloud Map. Ensure the new account is made a member of appropriate domain groups. To set the username and password for the system account, use OO Studio to first expand the Configuration folder, and then expand the System Accounts folder. Right click on Exch2013_DomAdmin, and then select Repository then Check Out. Next, double click on Exch2013_DomAdmin to open it for editing which should display a window similar to that shown in Figure 30. Edit the User Name field to specify the appropriate domain user account (include the domain name), and then click on the Assign Password button and specify the existing password value associated with the domain user account. Figure 30. Modify system account Exch2013_DomAdmin After setting the password for the system account, save the change by clicking on the disk icon. Then check the Exch2013_DomAdmin system account into the OO Studio repository by clicking on the unlocked padlock icon, optionally specifying a comment, and clicking OK. A second system account named Exch2013_CMSAdmin, is used by the workflows to authenticate with the CMS and specify its use of CredSSP authentication. Repeat the above process to specify an administrative username and password for the Exch2013_CMSAdmin system account. 39

40 Configuring infrastructure orchestration timeout values HP Matrix infrastructure orchestration and HP Operations Orchestration include a number of timeout values that limit the length of time that workflows and remote operations can run. These values are usually sufficient for many workflows. However, for the Microsoft Exchange Server 2013 service the timeouts are typically not long enough due to the amount of time that some of the installation steps can take. To accommodate the extra time required for the Microsoft Exchange Server 2013 service creation, it is necessary to increase the default timeouts for the logical server creation, workflows and RAS by editing configuration files on the CMS, as described below. Edit timeout values in the hpio.properties file To increase the logical server creation and workflow timeouts from the default values, login to the CMS server and use Windows Explorer to navigate to the Matrix infrastructure orchestration configuration directory (typically C:\Program Files\HP\Matrix infrastructure orchestration\conf). Make a backup copy of the hpio.properties file so that any changes can be rolled back later if needed. Edit the hpio.properties file to set the properties named in Table 14 to be at least equal to the recommended values for Microsoft Exchange Server (If these properties have already been set higher than the recommended values, then leave them unchanged.) Table 14. Timeout values in the hpio.properties file Name Units Recommended minimum value for Microsoft Exchange Server 2013 timeout.create.virtual.logicalserver minutes 720* timeout.oo.workflow seconds 3600 timeout.oo.workflow.max.run minutes 240 timeout.islogicalserveralive seconds 300 * The timeout value for virtual logical server creation is set to 12 hours. The duration is based on the overall time taken to create the virtual hard drives for the Exchange Server VMs. Increase this value if service deployment fails with a logical server creation timeout (see the Deployment duration section). Editing the RAS timeout To increase the RAS timeout value (specified in seconds), go to the C:\Program Files\HP\Operations Orchestration\Central\conf directory on the CMS server and make a backup copy of the file wrapper.conf. Edit the wrapper.conf file and add the string -Dras.client.timeout=7200 to the end of the wrapper.java.additional.1 stanza. The updated entry should look similar to this: # Java Additional Parameters wrapper.java.additional.1=-djetty.home=../../jetty/ -Diconclude.home="%ICONCLUDE_HOME%" -Djava.security.policy="file:%ICONCLUDE_HOME%/Central/conf/Central.policy" -Dorg.apache.commons.logging.LogFactory=org.apache.commons.logging.impl.LogFactoryImpl -Djava.awt.headless=true -Dicons.dir=Central/extra/icons/ -Dtemplate.dir=Central/template/ -Dras.client.timeout=7200 Restart Windows services After modifying the configuration files noted above, restart the following Windows services (for example, by using Windows Services manager to stop then restart each service, or by rebooting the CMS) to pick up the changes: hpio (this service has the description HP Matrix infrastructure orchestration ) RSCentral RSJRAS 40
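If you prefer PowerShell over the Windows Services manager, the three services can be restarted from an elevated prompt on the CMS with something like the following (service names as listed above):

# Restart the IO and OO services so the new timeout values take effect
'hpio', 'RSCentral', 'RSJRAS' | ForEach-Object { Restart-Service -Name $_ -Force }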

41 Review and publish the IO template Review the IO template After the workflows are imported and the environment has been customized, you need to ensure that the required workflows are correctly associated with the IO template. The workflows to be associated with the IO template are shown in Table 15, along with the full path in HP OO Studio to the workflows, and the execution points that the workflows are to run at. Table 15. Workflows to be associated with the Exchange Server 2013 template Workflow name Path Execution points Exch2013_CaptureXML Exch2013_InstallConfig \Library\Hewlett-Packard\Infrastructure Orchestration\Service Actions\Exchange2013\ \Library\Hewlett-Packard\Infrastructure Orchestration\Service Actions\Exchange2013\ Beginning of Create Service End of Create Service To specify the IO template s associated workflows, load the IO template into the infrastructure orchestration Designer tool. Then click on the Workflows button (circled in red in Figure 31). Figure 31. View workflows associated with the IO template 41

42 The Template Workflows dialog window will be displayed and will initially have no workflows listed. Click on the Add button to launch the Add Workflow dialog shown in Figure 32. Figure 32. Adding the Exch2013_InstallConfig workflow Configure the workflows and execution points to match those listed in Table 15. Figure 32 shows an example of adding the Exch2013_InstallConfig workflow, having drilled down to the workflow path specified in Table 15. Click the Add button when each workflow name and corresponding execution point has been selected. Note: It may be necessary to click the refresh icon in order make the Exchange 2013 workflows visible. 42

Figure 33 shows the Template Workflows window, with the Exch2013_InstallConfig workflow selected (indicated by the blue background). The execution point for the selected workflow is shown to the right in the Template Workflows window. This window also shows that the Exch2013_CaptureXML workflow has been added.

Figure 33. Confirming the workflows associated with the IO template for Microsoft Exchange Server 2013

44 Save and publish the IO template Now that the IO template is known to have the correct workflow associations, save and publish the IO template. First, ensure that an appropriate comment is in the Notes field that will help your users know when to choose this IO template. Then ensure the checkbox next to Published is checked, and click on the Save icon to save the IO template. Figure 34. Publish the updated IO template 44

45 Create the service Once the IO template is published, a service can be created from the infrastructure orchestration Self Service Portal. The Self Service Portal may be started on the CMS from Windows Start menu All Programs HP Insight Management HP Matrix infrastructure orchestration HP Matrix infrastructure orchestration Self Service Portal; or by web-browsing to Once logged in to IO Self Service Portal, navigate to the Templates tab (circled in red in Figure 35). Locate the required template perhaps by using a filter for the Show only rows that contain field as indicated by the text field circled in orange. Select the Exch2013_5000-Mbx template by clicking in the template row (indicated by the yellow circle) and then click the Create Service button (circled in green). Figure 35. Select and create service using Self Service Portal 45

46 A Create Service From Template dialog window is displayed where you can set the service name. The example screenshot in Figure 36 shows a service name Exch2013_Sales and a Hostname Completion string of SL. With this completion string, the hostnames of the server VMs that will be created are EXCHSL1, EXCHSL2, EXCHSL3 and WITNSL, as shown in the yellow popup window. Figure 36. Create service Exch2013_Sales If multiple server pools are listed in the above dialog window, then it is advised to select only the Server Pool(s) on which the Exchange environment can be deployed. Click the Options>> button and then adjust the list of Selected Server Pools to leave the desired Hyper-V Failover Cluster pool(s) listed. Deployment of the service named Exch2013_Sales begins after you click Submit. If all the required resources are available, four VMs will be created, Windows Server 2012 will be installed, and workflows will be run on each. The result will be that a Microsoft Exchange Server 2013 environment for 5000 mailboxes will be available. Deployment duration The Exchange Server 2013 environment will take many hours to deploy testing showed approximately 12 hours overall, but this time could vary with different hardware configurations. Most of the deployment time is due to infrastructure orchestration creating the virtual data disks associated with each Exchange Server VM, although the workflow for the installation and configuration of Exchange Server 2013 on the server VMs takes approximately 2½ hours after the virtual environment is deployed. Infrastructure orchestration uses Virtual Machine Manager to instruct Hyper-V to create fixed sized virtual hard drives. Hyper-V zero-fills this type of drive and this takes approximately 25 minutes per 1.5TB drive (on the test configuration) longer if other virtual disks are being created at the same time. During deployment the three Exchange Server VMs will be created concurrently, with each of their disks created sequentially. Exchange Server 2013 does not support use of dynamically expanding virtual hard disks so this deployment time cannot be reduced by specifying an alternative disk type. Moreover, at the time of writing, infrastructure orchestration cannot initiate creation of dynamically expanding VHDs. 46
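Because most of the elapsed time comes from zero-filling the fixed-size virtual hard disks, you can get a rough feel for how long provisioning will take on your own storage before committing to a full deployment. One approach (a sketch only; the CSV path and test size are placeholders) is to time the creation of a smaller fixed VHDX directly on a Hyper-V host and scale the result:

# Time creation of a 100 GB fixed disk on the target Cluster Shared Volume
Measure-Command {
    New-VHD -Path C:\ClusterStorage\Volume1\timing-test.vhdx -SizeBytes 100GB -Fixed
}
# Scale the elapsed time by roughly 15x to estimate a 1.5 TB data disk, then remove the test file
Remove-Item C:\ClusterStorage\Volume1\timing-test.vhdx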

47 Identifying server IP addresses It may be necessary to determine the IP addresses of the servers that comprise the new Exchange service in particular for subsequent network configuration tasks on each server. You can determine the IP addresses of the servers by using the infrastructure orchestration Self Service portal. Select the My Services tab circled in red in Figure 37 of the portal interface. Locate the cloud service of interest perhaps by using a filter for the Show only rows that contain field (circled in orange). Select the required service by clicking in the service row, as shown by the yellow circle. Then click on the View Details button which is circled in green. Figure 37. Locating a cloud service and viewing its details 47

48 The portal will then show the details for the specified service, as shown in Figure 38. Figure 38. Exchange cloud service details 48

49 Click on a server group icon to select it in Figure 39 the ExchServer1 server group has been selected, as indicated by the blue rectangle. Click on the NIC Details tab (circled in red), and the IP addresses of the servers are shown (circled in orange). Repeat for the other server groups. Figure 39. Network interface details of a cloud service s server group Post-deployment actions After the Exchange Server 2013 service has been successfully deployed the network configuration of each Exchange Server VM must be reviewed, as well as the distribution of VMs across the Hyper-V hosts. Network subsystem configuration The networking subsystem on the Exchange servers needs to be configured in accordance with the Microsoft Exchange Server documentation section titled Planning for High Availability and Site Resilience. This can be accessed at The following text assumes that two network connections are used per server the supplied infrastructure orchestration template for this Cloud Map only configures two network connections. One is for connection to network resources such as DNS, Active Directory and Exchange client access. This network is referred to as the MAPI network. The other network is for database replication and seeding, which is referred to as the Replication network. Network parameters for these two networks are specified during service deployment with information taken from the pernetwork settings configured within infrastructure orchestration. 49
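When cross-referencing addresses in the next step, it can also be convenient to list them from one of the Hyper-V hosts rather than the portal. A sketch using the Hyper-V PowerShell module is shown below; the VM name is a placeholder that follows the earlier hostname-completion example, and the addresses are only reported while the guest integration services are running:

# Show the addresses reported by the guest integration services for one Exchange VM
Get-VMNetworkAdapter -VMName EXCHSL1 | Select-Object VMName, SwitchName, IPAddresses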

50 After deployment, each of the server VMs will have network connections named Ethernet and Ethernet 2 (as shown in Figure 40). Figure 40. Default server VM network connection names It is recommended that these networks be renamed to MAPI and Replication. First identify which connection is which by examining the Network Connection Details to obtain the associated IP address. If necessary, cross-reference with the NIC Details information displayed within the Self-Service portal (see Figure 39). Figure 41 shows that connection Ethernet 2 has an IP address associated with the Production network and so should be renamed as the MAPI connection. Figure 41. Network connection details 50

51 To rename the connection, right click on the network connection icon and select Rename from the menu. Figure 42. Renamed network connections Next, the network binding order should be verified to ensure that the MAPI network is in the top of the binding order. Select the Advanced menu item of the Network Connections window, and then select the Advanced Settings menu item. The Advanced Settings dialog box is displayed, as shown in Figure 43. If the MAPI network is not at the top of the binding order, use the arrows on the right side of the dialog to move MAPI to the top and Replication second from the top, then click OK. Figure 43. Network binding order 51
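The identification and renaming steps above can also be carried out from an elevated PowerShell prompt inside each VM. The sketch below assumes the default adapter names shown in Figure 40 and that the Production (MAPI) address has already been identified as belonging to Ethernet 2; adjust the names to match your own output:

# List each connection with its IPv4 address to decide which is which
Get-NetIPConfiguration | Format-Table InterfaceAlias, IPv4Address -AutoSize

# Rename the connections accordingly
Rename-NetAdapter -Name "Ethernet 2" -NewName "MAPI"
Rename-NetAdapter -Name "Ethernet" -NewName "Replication"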

52 Finally, review the network connection parameters for each network. Table 16 shows an example configuration for each of the MAPI and Replication networks. The actual parameters used are site-specific and will be specified in the Network information configured within infrastructure orchestration. Table 16. IP configuration examples for both servers MAPI and Replication networks Parameter Server 1 Server 2 Server 3 MAPI network address MAPI subnet mask MAPI default gateway MAPI DNS addresses Replication network address Replication subnet mask Replication default gateway N/A* N/A* N/A* Replication DNS address N/A N/A N/A * There should be no default gateway on the replication network. If the solution spans IP subnets, then the MAPI network should continue to use a default gateway and the replication network should use persistent static routing. If a solution is deployed on the same subnet, then no persistent static routing is necessary, but if deployed across subnets, the static route could be added with a command similar to this on each server. netsh interface ipv4 add route /24 "Replication" In this example /24 is the target subnet, Replication is the name of the network to act on, and is the target router. If adding persistent static routes for the replication network are necessary in your environment, please refer to your network administrator and Windows documentation for more information. In accordance with the network requirements section in the Planning for High Availability and Site Resilience documentation (at the following parameters are also configured on the Replication network. The MAPI network is left with the default settings. Table 17. Replication network settings on each server Parameter Client For Microsoft Networks QoS Packet Scheduler File and Printer Sharing for Microsoft Networks Internet Protocol Version 6 (TCP/IP v6) Register this connection s address in DNS Setting Disabled Off Disabled Disabled Unchecked IPv4 Subnet mask IPv4 default gateway IPv4 DNS address N/A (see above) N/A Before the network interfaces are properly configured, Exchange reports a network misconfiguration as shown in Figure 44 below. Use the Exchange Management Shell to issue these commands. Figure 44. Exchange network misconfiguration. 52
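The Replication network settings listed in Table 17 can also be applied from PowerShell on each server instead of through the adapter properties dialog. This is a sketch only; it assumes the connection has been renamed to Replication as described above, and that the standard Windows binding component IDs correspond to the items in Table 17:

# Unbind client, QoS, file/print sharing and IPv6 from the Replication adapter
Disable-NetAdapterBinding -Name "Replication" -ComponentID ms_msclient, ms_pacer, ms_server, ms_tcpip6
# Do not register the Replication address in DNS
Set-DnsClient -InterfaceAlias "Replication" -RegisterThisConnectionsAddress $false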

53 By default Exchange 2013 DAGs are enabled for automatic network configuration. After the network interfaces are properly configured with the above settings, and the servers restarted, the DAG network configuration should resemble Figure 45 below. Figure 45. Revised Exchange network configuration. With automatic network configuration both networks are automatically configured for replication. To disable replication on the MAPI network, automatic network configuration must be turned off. The commands and output to disable automatic network configuration and to disable replication on the MAPI network are shown in Figure 46 below. The commands only need to be executed on one of the Exchange servers. In the example, ExchDAG is the name of the DAG specified by the Cloud Map configuration parameter Exch2013_DAGname. Figure 46. Modifying DAG network properties Distributing the Exchange Server virtual machines across hosts For maximum availability of this Exchange Server solution the IO template specifies that the server groups are Highly Available. This enables the virtual machines to be migrated, and to automatically failover, between hosts in the cluster. Infrastructure orchestration will deploy the VMs to available hosts based on its own loading criteria. Depending on the resource utilization of the hosts at deployment time the VMs could potentially be created on any host possibly with more than one VM on a single host. This situation is undesirable because loss of that host would mean loss of more than one Exchange Server VM thus making the service unavailable until the VMs complete their restart on another host. It is therefore necessary to take manual steps, using Microsoft tools, to ensure that each Exchange Server VM is normally hosted on a unique host server. This is accomplished using the Failover Cluster Manager utility on one of the Hyper-V hosts. The properties of the virtual machine role can be set to choose a preferred order of hosts, and also the role can be manually migrated using Hyper-V s Live Migration feature, by selecting the Move option. Additional Hyper-V hosts (i.e. more than the minimum of three) can be configured into the Failover Cluster and VMs can be Live-Migrated to them to maintain availability during planned server maintenance periods. This paper assumes the reader is familiar with the aspects of configuring and managing a Hyper-V Failover Cluster environment. 53
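The DAG network commands referred to above (Figure 46) generally take the form sketched below when run from the Exchange Management Shell on one of the Exchange servers. ExchDAG matches the Exch2013_DAGName default, and MapiDagNetwork is the automatically generated network name assumed here for illustration; use Get-DatabaseAvailabilityGroupNetwork to confirm the actual names in your environment:

# Switch the DAG from automatic to manual network configuration
Set-DatabaseAvailabilityGroup -Identity ExchDAG -ManualDagNetworkConfiguration $true
# Confirm the DAG network names
Get-DatabaseAvailabilityGroupNetwork -Identity ExchDAG
# Disable replication on the MAPI network
Set-DatabaseAvailabilityGroupNetwork -Identity ExchDAG\MapiDagNetwork -ReplicationEnabled:$false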

54 Summary For IT teams, infrastructure provisioning can be both time-consuming and resource-draining. Every time a business unit, application owner, or development team requests resources, a lengthy process begins: IT experts must capture system requirements, design the solution from scratch, and then identify resources that are currently available and those that need to be procured. HP Matrix infrastructure orchestration allows you to provision your infrastructure consistently and automatically from pools of shared resources via a self-service portal. You can rapidly provision resources ranging from a single virtual machine (VM) to complex, multi-tier environments that include physical servers, VMs, and storage systems. With HP CloudSystem Matrix, powered by Matrix OE, application services such as this Exchange Server 2013 solution can be easily provisioned using infrastructure orchestration templates. This enables IT organizations to develop service-driven, standardized application deployment processes. Appendix A: About Microsoft Exchange Server 2013 This solution is sized for GB mailboxes with a profile of 200 messages sent and received per user per day. Sizing was done with both the Microsoft Exchange 2013 Server Role Requirements Calculator, and the HP Sizer for Microsoft Exchange Server The Microsoft calculator is available at: The HP Sizer is available at: hp.com/solutions/microsoft/exchange2013/sizer The result of the sizing for each Exchange server is listed in Table 18: Table 18. Exchange Server resource sizing Server resource Quantity Comment VM System RAM 128 GB As the number of users is changed or the user profile is changed, the amount of RAM required will increase or decrease VM Processors 16 cores of Intel E If virtualizing the Exchange servers on Hyper-V hosts with more than 16 cores, slower processors can be used. Refer to the calculations and explanation below. Database LUNs 8 x 1.5TB LUNs Database LUNs. See sizing notes below Maintenance LUN Public Folder LUNs 1 x 1.5TB LUN Maintenance and recovery LUN 1 x 1.5TB LUN Public Folder LUN. This size is dependent on the amount of data to be stored in Public Folders CPU sizing Each of the three Exchange servers in this solution requires 16 cores of Intel E capacity. CPU sizing for Exchange is done using SPECInt 2006 values. Microsoft s tool for looking up SPECInt 2006 values is at: The Microsoft calculator shows that with 16 cores of Intel E CPUs, CPU utilization in a failover scenario will be at 77%, with 80% being the maximum recommended value. If these virtual machines are deployed on a server with different CPUs installed, the same CPU resources need to be allocated to each of the Exchange virtual machines. 16 cores of E results in a SPECInt 2006 score of 693. For example, if the virtual machines are deployed on a BL660c Gen8 server with 4 Intel Xeon E CPUs, then the correct number of CPUs needs to be provisioned for each virtual machine to provide the same SPECInt 2006 value. Four E CPUs result in a SPECInt 2006 score of Dividing 1090 by 32 cores equals per core. To achieve a score of 693, 20.3 cores of E are required. Rounding down to 20 cores at SPECInt2006 per core equals a score of 681. Using 681 in the Microsoft Calculator shows a CPU utilization of 79%, which is just under the 80% recommended threshold. To summarize for CPU sizing: A SPECInt Score of 670 is required to put CPU utilization at an acceptable level. 
This can be achieved with 16 cores, 20 cores, or another number of cores that collectively provide a score of 670. In these two cases, 16 cores of E or 20 cores of E provide the CPU resources for this solution. 54
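The per-core scaling described above can be reproduced as a small calculation. The sketch below simply re-states the arithmetic from the text (a required score of 693 and an alternative 32-core host scoring 1090 on SPECInt 2006); substitute the published figures for whatever CPU model you are actually deploying on:

$requiredScore = 693      # SPECInt 2006 capacity needed per Exchange VM (from the sizing above)
$hostScore     = 1090     # SPECInt 2006 score of the alternative 32-core host
$hostCores     = 32

$scorePerCore = $hostScore / $hostCores          # roughly 34 per core
$coresNeeded  = $requiredScore / $scorePerCore   # roughly 20.3 cores

The text then rounds down to 20 cores (20 x 34.06 = 681) and re-checks the resulting utilization (79%) in the Microsoft calculator before accepting that allocation.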

55 Additional CPU headroom can be provided on Intel servers by utilizing Hyper-Threading on the Hyper-V server. The virtual CPUs that Hyper-Threading provides should not be considered equivalent to the hardware cores of the CPU. If using a 4 socket system with E CPUs, with Hyper-Threading enabled, then the system should be considered to have 32 cores, and not the 64 that Windows would see because of Hyper-Threading. The virtual CPUs provided by Hyper-Threading are not equivalent to the physical cores on the CPUs. Storage sizing capacity and IOPS The size of the database and maintenance LUNs can be adjusted to accommodate larger or smaller mailboxes. Either the Microsoft Calculator or the HP Sizer should be used in making these adjustments. One critical element of storage sizing is random input/output operations per second, or IOPS sizing. Each time a mailbox is accessed and mail is sent or received, the storage subsystem is accessed with a combination of read/write operations. The profile of 200 messages sent or received per user per day means that each mailbox is responsible for random IOPS per mailbox users across 12 databases results in 417 users per database, or per 1.5TB LUN. This means that each 1.5TB LUN needs to support 55.9 random IOPS. A best practice is to add a 20% safety factor to IOPS calculations. With that 20% extra, each LUN needs to support 67 IOPS. It is therefore necessary to know the performance of the LUNs attainable on the storage array model being used for the Cluster Storage Volumes. By default each CSV will contain all ten of the 1.5TB virtual disk drives. Configuration details that can impact performance are storage array model, server/array interconnect type and speed, RAID type, storage cache settings, dedicated or shared HDDs for Exchange workload. In a shared storage environment, the Exchange LUNs should be tested in the worst case scenario to ensure that Exchange performance will not be impacted. It is highly recommended that the HDDs be dedicated to the Exchange workload. Database LUNs should be tested to verify that they can provide the appropriate IOPS before being put in to production. Microsoft provides a tool called Jetstress to perform such testing. Jetstress 2013 is available at microsoft.com/en-us/download/details.aspx?id= The Jetstress 2013 Field Guide available at Pre-deployment information Table 19 lists a number of useful Microsoft web pages containing information that should be considered before deploying Exchange Server 2013 in to your environment. Table 19. Microsoft Exchange Server 2013 web pages Title Exchange Server 2013 Planning and Deployment Upgrade from Exchange 2010 to Exchange 2013 Install Exchange 2013 in an existing Exchange 2010 Organization Networking requirements in Planning for High Availability Exchange 2013 Deployment Assistant Exchange 2013 System Requirements Prepare Active Directory and Domains Managing Database Availability Groups Microsoft Exchange 2013 Server Role Requirements Calculator HP Exchange Server 2013 Sizer URL hp.com/solutions/microsoft/exchange2013/sizer 55

Appendix B: About the Cloud Map workflows

Table 20 lists the supplied workflows and gives additional detail about their purpose and implementation.

Table 20. Workflows and their purpose

Exch2013_CaptureXML: Captures and logs the Request-XML that describes the infrastructure for the service being created. The Request-XML information is stored on the CMS in file CapturedRequest.xml in the Temp folder beneath the Cloud Map folder.

Exch2013_InstallConfig: Performs the installation and configuration of Exchange Server 2013 on the newly provisioned logical server systems (VMs). This is the main entry point to the workflows. After calling Exch2013_GetSystemConfig, the workflow waits for several minutes for the deployed VMs to complete their configuration. The wait period is specified by the configuration parameter Exch2013_SysprepWaitTime. The flow continues by invoking the subflows Exch2013_PrepareWitn, Exch2013_PrepareExch and then Exch2013_InstallConfigExch.

Exch2013_GetSystemConfig: Reads the configuration file Exch2013-CM_config.txt located on the CMS in the Config folder beneath the Cloud Map folder. This file should be edited to specify site-specific information. Refer to Table 11 for information about the configuration parameters.

Exch2013_GetServerInfo: Applies an XSL transformation to the saved Request-XML to produce a list of server details for the newly provisioned servers (VMs). The server details comprise hostname, server group name and IP address. This information is stored on the CMS in file ServerInfo.txt in the Temp folder beneath the Cloud Map folder. The content of this file is overwritten each time a service is created. The information is written in a format specific to the workflow, but can be used as a quick way to obtain the hostnames and IP addresses of the most-recently deployed VMs.

Exch2013_PrepareWitn: Obtains information about the Witness server from the ServerInfo.txt file and stores it in global flow variables. It then invokes Exch2013_CopyFiles to transfer the Witness server scripts to the deployed VM. The script Exch2013_WinFeaturesWitn.cmd is invoked on the target VM, followed by Exch2013_WitnessConfig.ps1, before the VM is restarted.

Exch2013_PrepareExch: Obtains information about the Exchange servers from the ServerInfo.txt file and stores it in global flow variables. It then creates three parallel workflow lanes to act upon each deployed VM simultaneously. The parallel actions are invocations of Exch2013_CopyFiles to transfer the Exchange server scripts and software kits to the deployed VM. The parallel actions continue with the invocation of script Exch2013_WinFeaturesExch.cmd on the target VMs, followed by a restart (the newly added features require a reboot to become active). This is followed by running the Exch2013_PreReqs.ps1 script on each. If the Cloud Map is ever changed to support more than three Exchange Server VMs, this workflow will need to be edited to include additional parallel lanes. Changes will also be required to the Set XX Flow Variables steps to store information in more global flow variables.

Exch2013_InstallConfigExch: Runs the Exch2013_ServerConfig_X.ps1 script on each respective Exchange Server VM sequentially, restarting the VM after installation completes. The script Exch2013_FinalConfig.ps1 is then invoked on Exchange Server #1.

Exch2013_WaitForNetwork: Waits for the RDP TCP port to be responsive to ping, thereby indicating that the target system is available. An initial sleep parameter can be passed so that a network check does not happen too quickly in the situation where a restart is issued, followed by a wait for the server to return.

Exch2013_CopyFiles: Copies files from the CMS to the target server. Firstly, all files in the CMS Scripts\Exch or Scripts\Witn folder are copied to Scripts on the target system, and then all files in the CMS Software\Exch or Software\Witn folder are transferred to the Software folder. These folders on the target systems are stored on the C: drive beneath the top-level folder specified by the Exch2013_DeplFolder configuration parameter.

Exch2013_RunRemScript: Launches a script on the specified target server. A parameter specifies whether the script is a regular batch/command script or a PowerShell script. In the latter case, the specified script name is used as an argument to an explicit powershell command invocation.

57 Workflow name Exch2013_TraceFlows Purpose Writes trace information to a log file (intended for troubleshooting purposes). If System Property Exch2013_5000-Mbx_TraceFlows is set to true then flow entry and exit messages are written to file TraceFlows.txt in the Cloud Map s Temp folder on the CMS. The times of flow entry and exit are logged, and if a flow operation is successful there will be matching Entered and Leaving messages. If there is no corresponding Leaving message then this indicates that the workflow failed in that particular subflow. Appendix C: About the Cloud Map scripts Table 21 outlines the purpose of each of the configuration scripts supplied with the Cloud Map. Table 21. Configuration scripts and their actions Script name Exch2013_WinFeaturesExch.cmd Exch2013_WinFeaturesWitn.cmd Exch2013_WitnessConfig.ps1 Exch2013_PreReqs.ps1 Exch2013_ServerConfig_X.ps1 Exch2013_diskpart_X.txt Exch2013_FinalConfig.ps1 Exch2013_ExchConfig.ps1 Exch2013_ServerCleanup.ps1 Purpose Sets PowerShell script execution policy to none, installs necessary Windows features as well as Active Directory management features as some scripts need those features to make changes in Active Directory. Registers domain and witness server account information into Active Directory Installs Filter Packs, UCMA Runtime, and expands the Exchange Server 2013 installation kit Creates the mount point folder structure, runs the DISKPART command to configure the storage and create the mount points, formats the volumes, sets the volume names, creates the log directories and launches Exchange Server 2013 setup Input file for the DISKPART command which is executed in the Exch2013_ServerConfig_X.ps1 file FinalConfig script is a wrapper for invoking the ExchConfig script with Exchange PowerShell context. Performs post-installation Exchange configuration: Registers the product keys on each server and restarts the information store, pre-stages the computer object for the DAG, creates the DAG and joins servers to the DAG, creates the databases and database copies, sets database activation preference, sets database names as registered in Active Directory and sets mailbox warning levels and size limits Resets PowerShell script execution policy to RemoteSigned Exch2013_Schema.ps1 Updates the Active Directory Schema and prepares the local domain for Exchange Server 2013 installation. Input is a new Organization name or none if an Exchange Organization already exists. Manual invocation of this script is optional, refer back to section Updating the Active Directory schema for Exchange Server 2013 Exch2013_CMS_EnablePS.cmd Exch2013_CredSSP_Client.ps1 Exch2013_CredSSP_Server.ps1 Sets PowerShell script execution policy to none on the CMS Enables the client and server roles for the Credential Security Support Provider. This allows delegation of security credentials to occur across multiple systems during execution of some of the Cloud Map workflows. Table 22 lists configuration change options and the corresponding script files that need to be modified. Table 22. Configuration changes and script files to modify Configuration change Number of Databases Mount Point Paths Volume Names Scripts to modify Exch2013_ServerConfig_X.ps1, Exch2013_diskpart_X.txt, Exch2013_ExchConfig.ps1 Exch2013_ServerConfig_X.ps1 57

Configuration change: Database Names. Scripts to modify: Exch2013_ServerConfig_X.ps1, Exch2013_ExchConfig.ps1. Also need to change Exch2013_diskpart_X.txt if the database names are to match the mount point names.

Configuration change: Database warning and size limits. Scripts to modify: Exch2013_ExchConfig.ps1.

Configuration change: Installation kit filenames. Scripts to modify: Exch2013_PreReqs.ps1.

Appendix D: Troubleshooting

This Cloud Map was successfully tested on infrastructure that is typical of many HP CloudSystem Matrix deployments. It is possible that differences in the intended production deployment environment (such as server and storage models) could adversely affect the behavior of the Cloud Map when deploying an Exchange Server 2013 solution. This appendix lists the files and utilities useful for assisting with troubleshooting any issues that may arise and also describes potential issues and their suggested resolutions.

Problem diagnosis

Table 23 lists the log files that are useful when diagnosing problems that may occur when deploying this Cloud Map. The deployment may potentially fail during IO template deployment (in which case it is most likely that insufficient resources are available on the Hyper-V hosts) or during workflow execution, in which case it is likely that execution of a workflow item failed or an error occurred during the execution of one of the scripts supplied with this Cloud Map. Both of these error types will be reported as a failure on the Matrix OE infrastructure orchestration Requests tab, and selecting the relevant Create request and viewing the Request Details will indicate the error type.

Table 23. Log files useful for troubleshooting

hpio-controller.log: Located on the CMS, typically in folder C:\Program Files\HP\Matrix infrastructure orchestration\logs. Contains detailed information related to IO service deployments, and may indicate the cause of errors that occur when deploying the service from the IO template (before the workflows have executed).

TraceFlows.txt: Located on the CMS in the Temp folder beneath the Exchange Server 2013 Cloud Map's main folder. Logs the successful entry and exit of workflows and script invocations, if the Exch2013_5000-Mbx_TraceFlows OO System Property is set to true. In general, if there is no Leaving flow XXX entry corresponding to an Entering flow XXX entry, then the failure has occurred within flow XXX.

ServerConfig.log: Located on each of the deployed Exchange Server VMs in the top-level folder containing the Cloud Map deployment files. Contains simple messages documenting the progress of the execution of script Exch2013_ServerConfig_X.ps1.

ExchConfig.log: Located on Exchange Server #1 VM in the top-level folder containing the Cloud Map deployment files. Contains simple messages documenting the progress of the execution of script Exch2013_ExchConfig.ps1, as well as the output from the commands executed by that script.

ExchangeSetup.log: Located on each of the deployed Exchange Server VMs in the ExchangeSetup folder beneath the C: drive. This and other files in the folder are the standard logs created during the installation of Exchange Server. Examine the relevant files for errors occurring during the installation of Exchange Server.

DagTask_<datetime>*.log: Located on Exchange Server #1 VM in the ExchangeSetup\DagTasks folder beneath the C: drive. These are the standard logs created by the commands used within script Exch2013_ExchConfig.ps1 to create the DAG and add members. Examine these files for errors occurring when the DAG (cluster) is formed.

Central_wrapper.log: Located on the CMS, typically in folder C:\Program Files\HP\Operations Orchestration\Central\logs. Contains very detailed information related to workflow execution, and may indicate the cause of workflow errors. It is recommended to refer to this file only if necessary, after examining all of the log files listed above.

Use of OO Studio workflow debugger

Development of Cloud Map workflows is done using the Operations Orchestration Studio application. This development tool includes a workflow debugger that allows step-by-step workflow execution. The debugger can also be useful when investigating workflow failures and can be invoked to narrow down the failing location within a workflow. A reason for the error will usually be given. It is beyond the scope of this document to comprehensively instruct on how to use the debugger, but the summary steps required to make use of the debugger to run the main workflow are listed below.

1. Launch the Matrix OE infrastructure orchestration Designer and edit the Exch2013_5000-Mbx template.
2. Click on the Workflows button to display the Template Workflows window.
3. Highlight the Exch2013_InstallConfig workflow and uncheck the Create Service: End execution point. Dismiss the Warning message box and temporarily check an alternative execution point, for example Create Snapshot: End.
4. Click OK to apply the workflow change.
5. Save the updated IO template, ensuring the Published checkbox is ticked.
6. Deploy the IO template; the virtual server and storage resources will be created, but the main workflow will not be executed. Move to step 7 only when all of the virtual machines have been fully deployed and are running Windows Server.
7. Launch Operations Orchestration Studio and log in with appropriate credentials (typically username admin).
8. Expand the Library tree to find the Exch2013_InstallConfig workflow beneath Library > Hewlett-Packard > Infrastructure orchestration > Service Actions > Exchange2013.
9. Double-click on the workflow name to display the workflow diagram.
10. Click the Debug Flow button to launch the Debug window.
11. Click the Play button; the flow will execute, displaying debug information in the debugger's panes.
12. Any failures will be reported at the point of execution and further analysis can then be performed.

When debugging is complete, repeat steps 1-5 above, but re-enable the Create Service: End and disable the Create Snapshot: End execution points.

Deployment testing: change virtual disk sizes

Users of this Cloud Map may wish to perform test deployments prior to deploying Exchange Server 2013 services in a production environment. As mentioned in the Deployment duration section, provisioning of the designed solution takes many hours. For testing purposes this duration can be significantly reduced by decreasing the size of each Exchange Server's data virtual disks from 1536GB to just 3GB (for example). Complete deployment should then take less than three hours, and any problems with the workflows will be identified much more quickly. Use the IO Designer application to edit the template and temporarily reduce each of the virtual disk sizes.

Workflow sleep time parameters

The workflows of this Cloud Map are sensitive to the time taken to complete various actions related to Windows and Exchange Server installation and configuration. In various places within the workflows and scripts a sleep command has been used, with a length of time based on behavior seen in the test environment. It is conceivable that deployments outside of the test environment may require a longer sleep time to prevent workflows from executing prematurely, which will likely result in failure. After diagnosing the location of the failure, it may become apparent that a longer sleep time is required.
The sleep period, when required in the workflows, is generally specified from the configuration parameters Exch2013_NetworkSleepTime or Exch2013_FeatureWaitTime (listed in Table 11 in The Cloud Map configuration file section). Revised durations can be specified explicitly within the scripts invoked by the workflows. Note: It is highly recommended not to reduce any of the time periods, but to only increase them if found to be necessary. Failover cluster creation problem During execution of the Exch2013_ExchConfig.ps1 script a Database Availability Group is created. The implementation of this is essentially the creation of a failover cluster with the Exchange servers as members. 59

60 During testing, one of the servers repeatedly failed to join the cluster an error was reported in the DagTask log file pertaining to that server s addition to the DAG (cluster). This server VM had been deployed by IO on to a different Hyper-V host to the others. The cause was ultimately found to be a misconfigured network environment. To diagnose and resolve this problem more effectively it is recommended to attempt to create an independent failover cluster of two Windows Server 2012 VMs using standard Microsoft utilities (such as Hyper-V Manager and Failover Cluster Manager). Ensure that the VMs have the same network settings as the VMs deployed using IO. With the two VMs placed on different Hyper-V hosts it is possible that the same failure can be reproduced, and more easily resolved, without relying on repeated deployments of the Exchange Server solution. Resource reservation failure During testing a problem with resource reservation failure was encountered when a service was deployed. The error may also occur at any site deploying services from this Cloud Map. The error is seen in the Request Details log on the IO Requests tab as Reservation failed. Ask the Administrator to make sure all necessary resources are available for this request. The service creation subsequently fails. Occurrences of this error are dependent on the status of Cluster Shared Volumes specified for each Boot volume in the IO template. Infrastructure orchestration incorrectly allocates a server VM to the wrong storage volume after evaluating its free-space. If this problem is encountered it may be possible to work around by swapping Storage Volume Names associated with each server boot disk. For example, if server 1 is associated with C:/ClusterStorage/Volume1, server 2 with C:/ClusterStorage/Volume2 and server 3 with C:/ClusterStorage/Volume3, then swapping the order to be Volume3, Volume2 and Volume1 respectively or some other combination may resolve the issue. The cause of the problem has been identified and resolved, and the fix will be included in a future patch kit for Matrix OE. Active Directory topology considerations The Cloud Map test environment used a simple Active Directory topology comprising a single domain with two Domain Controllers on a single network. No issues related to AD replication were seen. If the Cloud Map is used to deploy an Exchange Server 2013 solution in a more complex AD environment it is conceivable that some aspects of Exchange installation or configuration may fail due to attempting to access information from different AD servers and the required information has not yet been replicated from one AD server to another. To resolve this situation it is recommended that relevant Cloud Map scripts be edited to insert a sleep command at the appropriate location with a duration long enough to ensure replication will have occurred. Alternatively an AD command to force replication can be inserted in to the scripts. Errors reported with DAC-mode enabled Microsoft recommends enabling Datacenter Activation Coordination (DAC) mode on a Database Availability Group to control behavior following a failure of multiple servers in the DAG. 
(For additional information refer to the Technet article at In line with this recommendation this Cloud Map enables DAC-mode using the following command within the Exch2013_ExchConfig.ps1 script: Set-DatabaseAvailabilityGroup -Identity $DAGName -DatacenterActivationMode DagOnly During testing the DAG reported errors in the Event Log against some of the databases on each Exchange server. Two events per database were logged repeatedly at 15 minute intervals, as shown in Figures 47, 48 and 49 below. However, despite the errors, all databases were found to be healthy and any of the copies could be activated. 60

61 Figure 47. Errors reported against databases with DAC mode enabled Figure 48. First error message Figure 49. Second error message If these events occur in any production deployment please contact your Microsoft Exchange product support team to assist with investigating and resolving the issue. The errors did not appear in the event log when DAC mode was set to off. You may wish to consider this option based on fully understanding the implications and likelihood of the failure scenarios as described in the referenced Technet article and in other documents. To disable DAC mode edit the Exch2013_ExchConfig.ps1 script and modify the aforementioned command to be: Set-DatabaseAvailabilityGroup -Identity $DAGName -DatacenterActivationMode Off 61
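Whichever DAC setting you choose, it is worth confirming that all database copies remain healthy after any change. From the Exchange Management Shell on a DAG member, a quick check might look like this (illustrative only):

# Show the state of the database copies on this server and test overall replication health
Get-MailboxDatabaseCopyStatus * | Format-Table Name, Status, CopyQueueLength, ContentIndexState -AutoSize
Test-ReplicationHealth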


MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION Reference Architecture Guide MICROSOFT CLOUD REFERENCE ARCHITECTURE: FOUNDATION EMC VNX, EMC VMAX, EMC ViPR, and EMC VPLEX Microsoft Windows Hyper-V, Microsoft Windows Azure Pack, and Microsoft System

More information

Administration Guide for the System Center Cloud Services Process Pack

Administration Guide for the System Center Cloud Services Process Pack Administration Guide for the System Center Cloud Services Process Pack Microsoft Corporation Published: May 7, 2012 Author Kathy Vinatieri Applies To System Center Cloud Services Process Pack This document

More information

How to Test Out Backup & Replication 6.5 for Hyper-V

How to Test Out Backup & Replication 6.5 for Hyper-V How to Test Out Backup & Replication 6.5 for Hyper-V Mike Resseler May, 2013 2013 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication

More information

How To Set Up A Two Node Hyperv Cluster With Failover Clustering And Cluster Shared Volume (Csv) Enabled

How To Set Up A Two Node Hyperv Cluster With Failover Clustering And Cluster Shared Volume (Csv) Enabled Getting Started with Hyper-V and the Scale Computing Cluster Scale Computing 5225 Exploration Drive Indianapolis, IN, 46241 Contents Contents CHAPTER 1 Introduction to Hyper-V: BEFORE YOU START. vii Revision

More information

HP Device Monitor (v 1.2) for Microsoft System Center User Guide

HP Device Monitor (v 1.2) for Microsoft System Center User Guide HP Device Monitor (v 1.2) for Microsoft System Center User Guide Abstract This guide provides information on using the HP Device Monitor version 1.2 to monitor hardware components in an HP Insight Control

More information

Private cloud computing advances

Private cloud computing advances Building robust private cloud services infrastructures By Brian Gautreau and Gong Wang Private clouds optimize utilization and management of IT resources to heighten availability. Microsoft Private Cloud

More information

Synchronizer Installation

Synchronizer Installation Synchronizer Installation Synchronizer Installation Synchronizer Installation This document provides instructions for installing Synchronizer. Synchronizer performs all the administrative tasks for XenClient

More information

Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center

Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center Configuring a Microsoft Windows Server 2012/R2 Failover Cluster with Storage Center Dell Compellent Solution Guide Kris Piepho, Microsoft Product Specialist October, 2013 Revisions Date Description 1/4/2013

More information

SPEED your path to virtualization.

SPEED your path to virtualization. SPEED your path to virtualization. 2011 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice Introducing HP VirtualSystem Chief pillar of

More information

User Guide for VMware Adapter for SAP LVM VERSION 1.2

User Guide for VMware Adapter for SAP LVM VERSION 1.2 User Guide for VMware Adapter for SAP LVM VERSION 1.2 Table of Contents Introduction to VMware Adapter for SAP LVM... 3 Product Description... 3 Executive Summary... 3 Target Audience... 3 Prerequisites...

More information

Web Sites, Virtual Machines, Service Management Portal and Service Management API Beta Installation Guide

Web Sites, Virtual Machines, Service Management Portal and Service Management API Beta Installation Guide Web Sites, Virtual Machines, Service Management Portal and Service Management API Beta Installation Guide Contents Introduction... 2 Environment Topology... 2 Virtual Machines / System Requirements...

More information

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine

How To Connect Virtual Fibre Channel To A Virtual Box On A Hyperv Virtual Machine Virtual Fibre Channel for Hyper-V Virtual Fibre Channel for Hyper-V, a new technology available in Microsoft Windows Server 2012, allows direct access to Fibre Channel (FC) shared storage by multiple guest

More information

Solution Overview. 2015, Hitachi Data Systems, Inc. Page 3 of 39 pages. Figure 1

Solution Overview. 2015, Hitachi Data Systems, Inc. Page 3 of 39 pages. Figure 1 Deploying Windows Azure Pack on Unified Compute Platform for Microsoft Private Cloud Tech Note Jason Giza/Rick Andersen Hitachi Unified Compute Platform Director is a converged platform architected to

More information

Microsoft System Center 2012 SP1 Virtual Machine Manager with Storwize family products. IBM Systems and Technology Group ISV Enablement January 2014

Microsoft System Center 2012 SP1 Virtual Machine Manager with Storwize family products. IBM Systems and Technology Group ISV Enablement January 2014 Microsoft System Center 2012 SP1 Virtual Machine Manager with Storwize family products IBM Systems and Technology Group ISV Enablement January 2014 Copyright IBM Corporation, 2014 Table of contents Abstract...

More information

Veeam Backup Enterprise Manager. Version 7.0

Veeam Backup Enterprise Manager. Version 7.0 Veeam Backup Enterprise Manager Version 7.0 User Guide August, 2013 2013 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may

More information

How To Install An Aneka Cloud On A Windows 7 Computer (For Free)

How To Install An Aneka Cloud On A Windows 7 Computer (For Free) MANJRASOFT PTY LTD Aneka 3.0 Manjrasoft 5/13/2013 This document describes in detail the steps involved in installing and configuring an Aneka Cloud. It covers the prerequisites for the installation, the

More information

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES

RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS: COMPETITIVE FEATURES RED HAT ENTERPRISE VIRTUALIZATION FOR SERVERS Server virtualization offers tremendous benefits for enterprise IT organizations server

More information

Installation Guide: Delta Module Manager Launcher

Installation Guide: Delta Module Manager Launcher Installation Guide: Delta Module Manager Launcher Overview... 2 Delta Module Manager Launcher... 2 Pre-Installation Considerations... 3 Hardware Requirements... 3 Software Requirements... 3 Virtualisation...

More information

Bosch Video Management System High Availability with Hyper-V

Bosch Video Management System High Availability with Hyper-V Bosch Video Management System High Availability with Hyper-V en Technical Service Note Bosch Video Management System Table of contents en 3 Table of contents 1 Introduction 4 1.1 General Requirements

More information

XenClient Enterprise Synchronizer Installation Guide

XenClient Enterprise Synchronizer Installation Guide XenClient Enterprise Synchronizer Installation Guide Version 5.1.0 March 26, 2014 Table of Contents About this Guide...3 Hardware, Software and Browser Requirements...3 BIOS Settings...4 Adding Hyper-V

More information

StarWind Virtual SAN Installation and Configuration of Hyper-Converged 2 Nodes with Hyper-V Cluster

StarWind Virtual SAN Installation and Configuration of Hyper-Converged 2 Nodes with Hyper-V Cluster #1 HyperConverged Appliance for SMB and ROBO StarWind Virtual SAN Installation and Configuration of Hyper-Converged 2 Nodes with MARCH 2015 TECHNICAL PAPER Trademarks StarWind, StarWind Software and the

More information

RSA Authentication Manager 8.1 Virtual Appliance Getting Started

RSA Authentication Manager 8.1 Virtual Appliance Getting Started RSA Authentication Manager 8.1 Virtual Appliance Getting Started Thank you for purchasing RSA Authentication Manager 8.1, the world s leading two-factor authentication solution. This document provides

More information

System Administration Training Guide. S100 Installation and Site Management

System Administration Training Guide. S100 Installation and Site Management System Administration Training Guide S100 Installation and Site Management Table of contents System Requirements for Acumatica ERP 4.2... 5 Learning Objects:... 5 Web Browser... 5 Server Software... 5

More information

Dell High Availability Solutions Guide for Microsoft Hyper-V

Dell High Availability Solutions Guide for Microsoft Hyper-V Dell High Availability Solutions Guide for Microsoft Hyper-V www.dell.com support.dell.com Notes and Cautions NOTE: A NOTE indicates important information that helps you make better use of your computer.

More information

VMware Identity Manager Connector Installation and Configuration

VMware Identity Manager Connector Installation and Configuration VMware Identity Manager Connector Installation and Configuration VMware Identity Manager This document supports the version of each product listed and supports all subsequent versions until the document

More information

Bosch Video Management System High availability with VMware

Bosch Video Management System High availability with VMware Bosch Video Management System High availability with VMware en Technical Note Bosch Video Management System Table of contents en 3 Table of contents 1 Introduction 4 1.1 Restrictions 4 2 Overview 5 3

More information

SHAREPOINT 2013 IN INFRASTRUCTURE AS A SERVICE

SHAREPOINT 2013 IN INFRASTRUCTURE AS A SERVICE SHAREPOINT 2013 IN INFRASTRUCTURE AS A SERVICE Contents Introduction... 3 Step 1 Create Azure Components... 5 Step 1.1 Virtual Network... 5 Step 1.1.1 Virtual Network Details... 6 Step 1.1.2 DNS Servers

More information

Core Protection for Virtual Machines 1

Core Protection for Virtual Machines 1 Core Protection for Virtual Machines 1 Comprehensive Threat Protection for Virtual Environments. Installation Guide e Endpoint Security Trend Micro Incorporated reserves the right to make changes to this

More information

NSi Mobile Installation Guide. Version 6.2

NSi Mobile Installation Guide. Version 6.2 NSi Mobile Installation Guide Version 6.2 Revision History Version Date 1.0 October 2, 2012 2.0 September 18, 2013 2 CONTENTS TABLE OF CONTENTS PREFACE... 5 Purpose of this Document... 5 Version Compatibility...

More information

Index C, D. Background Intelligent Transfer Service (BITS), 174, 191

Index C, D. Background Intelligent Transfer Service (BITS), 174, 191 Index A Active Directory Restore Mode (DSRM), 12 Application profile, 293 Availability sets configure possible and preferred owners, 282 283 creation, 279 281 guest cluster, 279 physical cluster, 279 virtual

More information

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V

Storage Sync for Hyper-V. Installation Guide for Microsoft Hyper-V Installation Guide for Microsoft Hyper-V Egnyte Inc. 1890 N. Shoreline Blvd. Mountain View, CA 94043, USA Phone: 877-7EGNYTE (877-734-6983) www.egnyte.com 2013 by Egnyte Inc. All rights reserved. Revised

More information

How To Install Powerpoint 6 On A Windows Server With A Powerpoint 2.5 (Powerpoint) And Powerpoint 3.5.5 On A Microsoft Powerpoint 4.5 Powerpoint (Powerpoints) And A Powerpoints 2

How To Install Powerpoint 6 On A Windows Server With A Powerpoint 2.5 (Powerpoint) And Powerpoint 3.5.5 On A Microsoft Powerpoint 4.5 Powerpoint (Powerpoints) And A Powerpoints 2 DocAve 6 Service Pack 1 Installation Guide Revision C Issued September 2012 1 Table of Contents About the Installation Guide... 4 Submitting Documentation Feedback to AvePoint... 4 Before You Begin...

More information

HP OneView Administration H4C04S

HP OneView Administration H4C04S HP Education Services course data sheet HP OneView Administration H4C04S Course Overview This 3-day course covers how to install, manage, configure, and update the HP OneView Appliance. An architectural

More information

HP Matrix Operating Environment 7.2 Recovery Management User Guide

HP Matrix Operating Environment 7.2 Recovery Management User Guide HP Matrix Operating Environment 7.2 Recovery Management User Guide Abstract The HP Matrix Operating Environment 7.2 Recovery Management User Guide contains information on installation, configuration, testing,

More information

Burst Technology bt-loganalyzer SE

Burst Technology bt-loganalyzer SE Burst Technology bt-loganalyzer SE Burst Technology Inc. 9240 Bonita Beach Rd, Bonita Springs, FL 34135 CONTENTS WELCOME... 3 1 SOFTWARE AND HARDWARE REQUIREMENTS... 3 2 SQL DESIGN... 3 3 INSTALLING BT-LOGANALYZER...

More information

Deploying Windows Streaming Media Servers NLB Cluster and metasan

Deploying Windows Streaming Media Servers NLB Cluster and metasan Deploying Windows Streaming Media Servers NLB Cluster and metasan Introduction...................................................... 2 Objectives.......................................................

More information

Docufide Client Installation Guide for Windows

Docufide Client Installation Guide for Windows Docufide Client Installation Guide for Windows This document describes the installation and operation of the Docufide Client application at the sending school installation site. The intended audience is

More information

Pearl Echo Installation Checklist

Pearl Echo Installation Checklist Pearl Echo Installation Checklist Use this checklist to enter critical installation and setup information that will be required to install Pearl Echo in your network. For detailed deployment instructions

More information

5nine Cloud Monitor for Hyper-V

5nine Cloud Monitor for Hyper-V 5nine Cloud Monitor for Hyper-V Getting Started Guide Table of Contents System Requirements... 2 Installation... 3 Getting Started... 8 Settings... 9 Authentication... 9 5nine Cloud Monitor for Hyper-V

More information

Dell High Availability Solutions Guide for Microsoft Hyper-V R2. A Dell Technical White Paper

Dell High Availability Solutions Guide for Microsoft Hyper-V R2. A Dell Technical White Paper Dell High Availability Solutions Guide for Microsoft Hyper-V R2 A Dell Technical White Paper THIS WHITE PAPER IS FOR INFORMATIONAL PURPOPERATING SYSTEMS ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

HP ProLiant PRO Management Pack (v 2.0) for Microsoft System Center User Guide

HP ProLiant PRO Management Pack (v 2.0) for Microsoft System Center User Guide HP ProLiant PRO Management Pack (v 2.0) for Microsoft System Center User Guide Abstract This guide provides information on using the HP ProLiant PRO Management Pack for Microsoft System Center version

More information

AUTOMATED DISASTER RECOVERY SOLUTION USING AZURE SITE RECOVERY FOR FILE SHARES HOSTED ON STORSIMPLE

AUTOMATED DISASTER RECOVERY SOLUTION USING AZURE SITE RECOVERY FOR FILE SHARES HOSTED ON STORSIMPLE AUTOMATED DISASTER RECOVERY SOLUTION USING AZURE SITE RECOVERY FOR FILE SHARES HOSTED ON STORSIMPLE Copyright This document is provided "as-is." Information and views expressed in this document, including

More information

Connection Broker Managing User Connections to Workstations, Blades, VDI, and More. Quick Start with Microsoft Hyper-V

Connection Broker Managing User Connections to Workstations, Blades, VDI, and More. Quick Start with Microsoft Hyper-V Connection Broker Managing User Connections to Workstations, Blades, VDI, and More Quick Start with Microsoft Hyper-V Version 8.1 October 21, 2015 Contacting Leostream Leostream Corporation http://www.leostream.com

More information

AVG Business SSO Connecting to Active Directory

AVG Business SSO Connecting to Active Directory AVG Business SSO Connecting to Active Directory Contents AVG Business SSO Connecting to Active Directory... 1 Selecting an identity repository and using Active Directory... 3 Installing Business SSO cloud

More information

Windows Server 2012 授 權 說 明

Windows Server 2012 授 權 說 明 Windows Server 2012 授 權 說 明 PROCESSOR + CAL HA 功 能 相 同 的 記 憶 體 及 處 理 器 容 量 虛 擬 化 Windows Server 2008 R2 Datacenter Price: NTD173,720 (2 CPU) Packaging All features Unlimited virtual instances Per processor

More information

With Red Hat Enterprise Virtualization, you can: Take advantage of existing people skills and investments

With Red Hat Enterprise Virtualization, you can: Take advantage of existing people skills and investments RED HAT ENTERPRISE VIRTUALIZATION DATASHEET RED HAT ENTERPRISE VIRTUALIZATION AT A GLANCE Provides a complete end-toend enterprise virtualization solution for servers and desktop Provides an on-ramp to

More information

Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4

Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Application Note Abstract This application note explains the configure details of using Infortrend FC-host storage systems

More information

Quick Start Guide for VMware and Windows 7

Quick Start Guide for VMware and Windows 7 PROPALMS VDI Version 2.1 Quick Start Guide for VMware and Windows 7 Rev. 1.1 Published: JULY-2011 1999-2011 Propalms Ltd. All rights reserved. The information contained in this document represents the

More information

Installing and Configuring vcloud Connector

Installing and Configuring vcloud Connector Installing and Configuring vcloud Connector vcloud Connector 2.7.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new

More information

Installing and Administering VMware vsphere Update Manager

Installing and Administering VMware vsphere Update Manager Installing and Administering VMware vsphere Update Manager Update 1 vsphere Update Manager 5.1 This document supports the version of each product listed and supports all subsequent versions until the document

More information

XenDesktop Implementation Guide

XenDesktop Implementation Guide Consulting Solutions WHITE PAPER Citrix XenDesktop XenDesktop Implementation Guide Pooled Desktops (Local and Remote) www.citrix.com Contents Contents... 2 Overview... 4 Initial Architecture... 5 Installation

More information

LT Auditor+ 2013. Windows Assessment SP1 Installation & Configuration Guide

LT Auditor+ 2013. Windows Assessment SP1 Installation & Configuration Guide LT Auditor+ 2013 Windows Assessment SP1 Installation & Configuration Guide Table of Contents CHAPTER 1- OVERVIEW... 3 CHAPTER 2 - INSTALL LT AUDITOR+ WINDOWS ASSESSMENT SP1 COMPONENTS... 4 System Requirements...

More information

Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V

Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Virtualizing Microsoft SQL Server 2008 on the Hitachi Adaptable Modular Storage 2000 Family Using Microsoft Hyper-V Implementation Guide By Eduardo Freitas and Ryan Sokolowski February 2010 Summary Deploying

More information

HP Server Management Packs for Microsoft System Center Essentials User Guide

HP Server Management Packs for Microsoft System Center Essentials User Guide HP Server Management Packs for Microsoft System Center Essentials User Guide Part Number 460344-001 September 2007 (First Edition) Copyright 2007 Hewlett-Packard Development Company, L.P. The information

More information

Nexio Connectus with Nexio G-Scribe

Nexio Connectus with Nexio G-Scribe Nexio Connectus with Nexio G-Scribe 2.1.2 3/20/2014 Edition: A 2.1.2 Publication Information 2014 Imagine Communications. Proprietary and Confidential. Imagine Communications considers this document and

More information

TECHNICAL PAPER. Veeam Backup & Replication with Nimble Storage

TECHNICAL PAPER. Veeam Backup & Replication with Nimble Storage TECHNICAL PAPER Veeam Backup & Replication with Nimble Storage Document Revision Date Revision Description (author) 11/26/2014 1. 0 Draft release (Bill Roth) 12/23/2014 1.1 Draft update (Bill Roth) 2/20/2015

More information

Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware

Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware Introduction By leveraging the inherent benefits of a virtualization based platform, a Microsoft Exchange Server 2007 deployment on VMware Infrastructure 3 offers a variety of availability and recovery

More information

Optimizing Business Continuity Management with NetIQ PlateSpin Protect and AppManager. Best Practices and Reference Architecture

Optimizing Business Continuity Management with NetIQ PlateSpin Protect and AppManager. Best Practices and Reference Architecture Optimizing Business Continuity Management with NetIQ PlateSpin Protect and AppManager Best Practices and Reference Architecture WHITE PAPER Table of Contents Introduction.... 1 Why monitor PlateSpin Protect

More information

SonicWALL CDP 5.0 Microsoft Exchange InfoStore Backup and Restore

SonicWALL CDP 5.0 Microsoft Exchange InfoStore Backup and Restore SonicWALL CDP 5.0 Microsoft Exchange InfoStore Backup and Restore Document Scope This solutions document describes how to configure and use the Microsoft Exchange InfoStore Backup and Restore feature in

More information

Feature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V

Feature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V Comparison and Contents Introduction... 4 More Secure Multitenancy... 5 Flexible Infrastructure... 9 Scale, Performance, and Density... 13 High Availability... 18 Processor and Memory Support... 24 Network...

More information

VMTurbo Operations Manager 4.5 Installing and Updating Operations Manager

VMTurbo Operations Manager 4.5 Installing and Updating Operations Manager VMTurbo Operations Manager 4.5 Installing and Updating Operations Manager VMTurbo, Inc. One Burlington Woods Drive Burlington, MA 01803 USA Phone: (781) 373---3540 www.vmturbo.com Table of Contents Introduction

More information

FOR SERVERS 2.2: FEATURE matrix

FOR SERVERS 2.2: FEATURE matrix RED hat ENTERPRISE VIRTUALIZATION FOR SERVERS 2.2: FEATURE matrix Red hat enterprise virtualization for servers Server virtualization offers tremendous benefits for enterprise IT organizations server consolidation,

More information

Drobo How-To Guide. Topics. What You Will Need. Prerequisites. Deploy Drobo B1200i with Microsoft Hyper-V Clustering

Drobo How-To Guide. Topics. What You Will Need. Prerequisites. Deploy Drobo B1200i with Microsoft Hyper-V Clustering Multipathing I/O (MPIO) enables the use of multiple iscsi ports on a Drobo SAN to provide fault tolerance. MPIO can also boost performance of an application by load balancing traffic across multiple ports.

More information

HP CloudSystem Matrix FAQ

HP CloudSystem Matrix FAQ HP CloudSystem Matrix FAQ HP Restricted. For HP and HP Channel Partner Internal Use Only January 2013 1 General... 2 2 What s New?... 6 3 Features and Functions... 9 4 Supported Components... 11 5 Cloud

More information

Course 20533: Implementing Microsoft Azure Infrastructure Solutions

Course 20533: Implementing Microsoft Azure Infrastructure Solutions Course 20533: Implementing Microsoft Azure Infrastructure Solutions Overview About this course This course is aimed at experienced IT Professionals who currently administer their on-premises infrastructure.

More information

HP CloudSystem Matrix: Collecting Usage Data for Showback, Charge-back, and Billing Purposes

HP CloudSystem Matrix: Collecting Usage Data for Showback, Charge-back, and Billing Purposes HP CloudSystem Matrix: Collecting Usage Data for Showback, Charge-back, and Billing Purposes Technical white paper Table of contents Abstract... 2 Overview... 2 CloudSystem Matrix... 4 The CloudSystem

More information

VMware for Bosch VMS. en Software Manual

VMware for Bosch VMS. en Software Manual VMware for Bosch VMS en Software Manual VMware for Bosch VMS Table of Contents en 3 Table of contents 1 Introduction 4 1.1 Restrictions 4 2 Overview 5 3 Installing and configuring ESXi server 6 3.1 Installing

More information

HP Cloud Service Automation

HP Cloud Service Automation Technical white paper HP Cloud Service Automation Integration with HP Service Manager Table of contents Introduction 2 Required software components 2 Configuration requirements 2 Downloading the distribution

More information

vsphere Replication for Disaster Recovery to Cloud

vsphere Replication for Disaster Recovery to Cloud vsphere Replication for Disaster Recovery to Cloud vsphere Replication 5.8 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

HP Client Automation Standard Fast Track guide

HP Client Automation Standard Fast Track guide HP Client Automation Standard Fast Track guide Background Client Automation Version This document is designed to be used as a fast track guide to installing and configuring Hewlett Packard Client Automation

More information

WebSpy Vantage Ultimate 2.2 Web Module Administrators Guide

WebSpy Vantage Ultimate 2.2 Web Module Administrators Guide WebSpy Vantage Ultimate 2.2 Web Module Administrators Guide This document is intended to help you get started using WebSpy Vantage Ultimate and the Web Module. For more detailed information, please see

More information

Implementing Microsoft Azure Infrastructure Solutions

Implementing Microsoft Azure Infrastructure Solutions Course Code: M20533 Vendor: Microsoft Course Overview Duration: 5 RRP: 2,025 Implementing Microsoft Azure Infrastructure Solutions Overview This course is aimed at experienced IT Professionals who currently

More information

USING DELL POWEREDGE 11G SERVERS TO ENABLE MICROSOFT COLLABORATION TOOLS IN AN ENTERPRISE BUSINESS ENVIRONMENT

USING DELL POWEREDGE 11G SERVERS TO ENABLE MICROSOFT COLLABORATION TOOLS IN AN ENTERPRISE BUSINESS ENVIRONMENT USING DELL POWEREDGE 11G SERVERS TO ENABLE MICROSOFT COLLABORATION TOOLS IN AN ENTERPRISE BUSINESS ENVIRONMENT A collaboration deployment guide commissioned by Dell Inc. Table of contents Table of contents...

More information

Introduction to the AirWatch Cloud Connector (ACC) Guide

Introduction to the AirWatch Cloud Connector (ACC) Guide Introduction to the AirWatch Cloud Connector (ACC) Guide The AirWatch Cloud Connector (ACC) provides organizations the ability to integrate AirWatch with their back-end enterprise systems. This document

More information

Kaspersky Lab Mobile Device Management Deployment Guide

Kaspersky Lab Mobile Device Management Deployment Guide Kaspersky Lab Mobile Device Management Deployment Guide Introduction With the release of Kaspersky Security Center 10.0 a new functionality has been implemented which allows centralized management of mobile

More information

http://docs.trendmicro.com

http://docs.trendmicro.com Trend Micro Incorporated reserves the right to make changes to this document and to the products described herein without notice. Before installing and using the product, please review the readme files,

More information

Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008

Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008 Best Practices Best Practices for Installing and Configuring the Hyper-V Role on the LSI CTS2600 Storage System for Windows 2008 Installation and Configuration Guide 2010 LSI Corporation August 13, 2010

More information

Unitrends Virtual Backup Installation Guide Version 8.0

Unitrends Virtual Backup Installation Guide Version 8.0 Unitrends Virtual Backup Installation Guide Version 8.0 Release June 2014 7 Technology Circle, Suite 100 Columbia, SC 29203 Phone: 803.454.0300 Contents Chapter 1 Getting Started... 1 Version 8 Architecture...

More information

Introduction to the EIS Guide

Introduction to the EIS Guide Introduction to the EIS Guide The AirWatch Enterprise Integration Service (EIS) provides organizations the ability to securely integrate with back-end enterprise systems from either the AirWatch SaaS environment

More information

ADFS 2.0 Application Director Blueprint Deployment Guide

ADFS 2.0 Application Director Blueprint Deployment Guide Introduction: ADFS 2.0 Application Director Blueprint Deployment Guide Active Directory Federation Service (ADFS) is a software component from Microsoft that allows users to use single sign-on (SSO) to

More information

PHD Virtual Backup for Hyper-V

PHD Virtual Backup for Hyper-V PHD Virtual Backup for Hyper-V version 7.0 Installation & Getting Started Guide Document Release Date: December 18, 2013 www.phdvirtual.com PHDVB v7 for Hyper-V Legal Notices PHD Virtual Backup for Hyper-V

More information

vsphere Replication for Disaster Recovery to Cloud

vsphere Replication for Disaster Recovery to Cloud vsphere Replication for Disaster Recovery to Cloud vsphere Replication 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced

More information

Performance characterization report for Microsoft Hyper-V R2 on HP StorageWorks P4500 SAN storage

Performance characterization report for Microsoft Hyper-V R2 on HP StorageWorks P4500 SAN storage Performance characterization report for Microsoft Hyper-V R2 on HP StorageWorks P4500 SAN storage Technical white paper Table of contents Executive summary... 2 Introduction... 2 Test methodology... 3

More information

RSA Authentication Manager 7.1 Basic Exercises

RSA Authentication Manager 7.1 Basic Exercises RSA Authentication Manager 7.1 Basic Exercises Contact Information Go to the RSA corporate web site for regional Customer Support telephone and fax numbers: www.rsa.com Trademarks RSA and the RSA logo

More information

Migrating to vcloud Automation Center 6.1

Migrating to vcloud Automation Center 6.1 Migrating to vcloud Automation Center 6.1 vcloud Automation Center 6.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a

More information

Configuring a VEEAM off host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster with a DELL Compellent SAN (Fiber Channel)

Configuring a VEEAM off host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster with a DELL Compellent SAN (Fiber Channel) 1 Configuring a VEEAM off host backup proxy server for backing up a Windows Server 2012 R2 Hyper-V cluster with a DELL Compellent SAN (Fiber Channel) Introduction This white paper describes how to configure

More information

Hands on Lab: Building a Virtual Machine and Uploading VM Images to the Cloud using Windows Azure Infrastructure Services

Hands on Lab: Building a Virtual Machine and Uploading VM Images to the Cloud using Windows Azure Infrastructure Services Hands on Lab: Building a Virtual Machine and Uploading VM Images to the Cloud using Windows Azure Infrastructure Services Windows Azure Infrastructure Services provides cloud based storage, virtual networks

More information

Annex 9: Private Cloud Specifications

Annex 9: Private Cloud Specifications Annex 9: Private Cloud Specifications The MoICT Private Cloud Solution is based on the Data Center Services (DCS) offering from CMS. DCS is comprised of a fabric in the form of one or more resource pools

More information