Deployment Guide
StorMagic SvSAN on VMware vSphere 6.0 with Cisco UCS Mini
October 2015

Introduction
Virtualization in today's computer infrastructure has created a distinct and prominent need for storage that is accessible to multiple servers simultaneously. Concurrently shared storage enables capabilities such as vMotion, High Availability, and replication, which provide stability and application availability. In large data centers this need is typically met with large-scale SAN devices. These solutions frequently leave smaller deployment models, such as Remote Office/Branch Office (ROBO), unaddressed, because extending the SAN network is costly and introduces latency. This deployment guide describes an integrated infrastructure solution that enables shared storage at the remote data center, using capacity attached directly to the compute layer, for a self-contained application delivery platform. StorMagic SvSAN combined with Cisco UCS Mini provides compute, networking, and storage in even the most remote location. Combined in a proven and validated architecture, compute and storage supporting hundreds of virtual desktops as well as the necessary infrastructure VMs can be reliably deployed. This joint solution allows the edge enterprise to deploy a data storage infrastructure that suits its multi-site nature. This document provides a reference architecture and deployment guide for StorMagic SvSAN on VMware vSphere 6.0 running in a Cisco UCS Mini environment.

Table of Contents
Introduction...1
Technology Overview...2
Cisco UCS Mini...2
Cisco UCS Manager 3.0...2
Cisco UCS 6324UP Fabric Interconnect...3
Cisco UCS B200 M4 Blade Servers...3
Cisco VIC 1340...3
VMware vSphere 6.0...4
StorMagic SvSAN...4
StorMagic SvSAN with Cisco UCS Mini Architecture...5
Compute...5
Networking...5
Storage...6
Deployment of StorMagic SvSAN for VMware vSphere on Cisco UCS Environment...9
Cisco UCS Configuration...10
Configuring the Cisco UCS Mini Fabric Interconnects...10
Configure UCS Manager...10
Synchronize Cisco UCS to NTP...10
Enable QoS in Cisco UCS Fabric...10
Create QoS policy...10
Enable Uplink Ports...11
Create VLANs...12
Create Local Disk Configuration Policy...13
Create Boot Policy...13
Create BIOS Policy...14
Configure Server Pool and Qualifying Policy...14
Create UUID Suffix Pools...15
Create MAC Pool...15
Create an IP Pool for KVM Access...16
Create vNIC Templates and LAN Connectivity Policies...16
Create a Service Profile Template...18
Create Service Profiles from Templates...18
Create Virtual Drives...19
Install VMware ESXi 6.0...20
Prepare for SvSAN Deployment...20
Configure Host Networking...23
vCenter Server...25
Installing the SvSAN software on a vCenter Server...26
Installing StorMagic PowerShell Toolkit...27
Deploy SvSAN VSAs...27
Recommended SvSAN Mirror Configurations...31
Overview...31
Mirrored Targets...31
Cisco UCS Blade Server Expansion and SvSAN Target Migration...32
Datastore Spanning Using VMware Extents...35
Creating a shared datastore with mirrored Storage...36
Manage VSAs...38
Configure Jumbo Frames on SvSAN Network Interfaces...39
Manage shared datastores...40
1 © 2015 Cisco and/or its affiliates. All rights reserved.
Technology Overview
The following hardware and software components are used for this deployment guide:
Cisco Unified Computing System Mini
Cisco UCS Manager 3.0(2c)
Cisco UCS B200 M4 Server
Cisco VIC 1340 Adapter
VMware vSphere 6.0.0b
StorMagic SvSAN 5.2

Cisco UCS Mini
Cisco UCS, originally designed for the data center, is now optimized for branch and remote offices, point-of-sale locations, and smaller IT environments with Cisco UCS Mini. UCS Mini is for customers who need fewer servers but still want the robust management capabilities provided by UCS Manager. This solution delivers servers, storage, and 10 Gigabit networking in an easy-to-deploy, compact 6-rack-unit form factor. Cisco UCS Mini provides a total computing solution with the proven management simplicity of the award-winning Cisco UCS Manager.

Figure 1. Cisco UCS Mini (Cisco UCS 5100 Series Blade Chassis with Cisco UCS 6324 Fabric Interconnects plugged into the back of the chassis and 10 Gigabit Ethernet uplinks from the QSFP or SFP ports to the LAN through Cisco Nexus Series switches)

Cisco UCS Manager 3.0
Cisco Unified Computing System (UCS) Manager provides unified, embedded management of all software and hardware components of the Cisco UCS through a choice of an intuitive GUI, a command-line interface (CLI), a Microsoft PowerShell module, or an XML API. Cisco UCS Manager provides a unified management domain with centralized management capabilities and controls multiple chassis and thousands of virtual machines.
2 © 2015 Cisco and/or its affiliates. All rights reserved.
The Cisco UCS 6324 Fabric Interconnect hosts and runs Cisco UCS Manager in a highly available configuration, enabling the fabric interconnects to fully manage all Cisco UCS elements. The Cisco UCS 6324 Fabric Interconnects support out-of-band management through dedicated 10/100/1000-Mbps Ethernet management ports. Cisco UCS Manager is typically deployed in a clustered active-passive configuration with two Cisco UCS 6324 Fabric Interconnects connected through the cluster interconnect built into the chassis.
Cisco UCS Manager 3.0 supports the 6324 Fabric Interconnect, which integrates the fabric interconnect into the UCS chassis and provides an integrated solution for smaller deployment environments. Cisco UCS Mini simplifies system management and reduces cost for smaller-scale deployments. The hardware and software components support Cisco unified fabric, which runs multiple types of data center traffic over a single converged network adapter.

Cisco UCS 6324UP Fabric Interconnect
The Cisco UCS 6324 Fabric Interconnect provides the management, LAN, and storage connectivity for the Cisco UCS 5108 Blade Server Chassis and direct-connect rack-mount servers. It provides the same full-featured Cisco UCS management capabilities and XML API as the full-scale Cisco UCS solution, in addition to integrating with Cisco UCS Central Software and Cisco UCS Director.
From a networking perspective, the Cisco UCS 6324 Fabric Interconnect uses a cut-through architecture supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports with switching capacity of up to 500 Gbps, independent of packet size and enabled services. Sixteen 10-Gbps links connect to the servers, providing a 20-Gbps link from each Cisco UCS 6324 Fabric Interconnect to each server. The product family supports Cisco low-latency, lossless 10 Gigabit Ethernet unified network fabric capabilities that increase the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnect supports multiple traffic classes over a lossless Ethernet fabric from the blade through the interconnect. Significant TCO savings come from the Fibre Channel over Ethernet (FCoE)-optimized server design, in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated. The Cisco UCS 6324 Fabric Interconnect is a 10 Gigabit Ethernet, FCoE, and Fibre Channel switch offering up to 500-Gbps throughput and up to four unified ports and one scalability port.

Cisco UCS B200 M4 Blade Servers
The enterprise-class Cisco UCS B200 M4 blade server extends the capabilities of Cisco's Unified Computing System portfolio in a half-width blade form factor. The Cisco UCS B200 M4 uses the power of the latest Intel Xeon E5-2600 v3 Series processor family CPUs with up to 1536 GB of RAM (using 64 GB DIMMs), two solid-state drives (SSDs) or hard disk drives (HDDs), and up to 80 Gbps of throughput connectivity. The UCS B200 M4 Blade Server mounts in a Cisco UCS 5100 Series blade server chassis or UCS Mini blade server chassis. It has 24 total slots for registered ECC DIMMs (RDIMMs) or load-reduced DIMMs (LR DIMMs) for up to 1536 GB of total memory capacity (B200 M4 configured with two CPUs using 64 GB DIMMs). It supports one connector for Cisco's VIC 1340 or 1240 adapter, which provides Ethernet and Fibre Channel over Ethernet (FCoE) connectivity.
Cisco VIC 1340
The Cisco UCS Virtual Interface Card (VIC) 1340 is a 2-port 40-Gbps Ethernet or dual 4 x 10-Gbps Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) adapter designed exclusively for the M4 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1340 is enabled for two ports of 40-Gbps Ethernet. The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present over 256 PCIe standards-compliant interfaces to the host, which can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1340 supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment and management.
3 © 2015 Cisco and/or its affiliates. All rights reserved.
VMware vSphere 6.0
VMware vSphere 6.0, the industry-leading virtualization platform, empowers users to virtualize any application with confidence, redefines availability, and simplifies the virtual data center. The result is a highly available, resilient, on-demand infrastructure that is the ideal foundation for any cloud environment. This release contains many new features and enhancements, many of which are industry firsts.

StorMagic SvSAN
StorMagic SvSAN is a software storage solution that enables enterprises to eliminate downtime of business-critical applications at remote sites, where such disruption directly equates to a loss in service and revenue. StorMagic SvSAN ensures high availability through a virtualized shared storage platform, so that these business-critical applications remain operational. This is achieved by leveraging cost-effective direct-attached or internal server storage, including solid-state disks, and presenting it as a virtual SAN. StorMagic SvSAN supports the industry-leading hypervisors, VMware vSphere and Microsoft Hyper-V. It is installed as a Virtual Storage Appliance (VSA) requiring minimal server resources to provide the shared storage necessary to enable advanced hypervisor features such as High Availability/Failover Cluster, vMotion/Live Migration and Distributed Resource Scheduler (DRS)/Dynamic Optimization. StorMagic SvSAN can be deployed as a simple 2-node cluster; however, the flexibility of the architecture enables the virtual infrastructure performance and/or capacity to be scaled to meet changing business needs without impacting service availability. This is achieved by adding capacity to existing servers or by growing the StorMagic SvSAN cluster.
4 © 2015 Cisco and/or its affiliates. All rights reserved.
StorMagic SvSAN with Cisco UCS Mini Architecture
The architecture for this deployment guide is simple and provides a highly available environment using a combination of features from StorMagic SvSAN, VMware vSphere and Cisco UCS Mini, utilizing the local storage available on the B200 M4 blade servers.

Compute
A UCS Mini chassis with four B200 M4 blades is used for this deployment guide. Each B200 M4 blade server has Intel Xeon E5-2620 v3 processors, 128GB RAM, 2 x 32GB FlexFlash SD cards, 2 x 1.2TB 10K rpm SAS drives and a VIC 1340.

Figure 2. Physical Connectivity Diagram (Cisco UCS 6324 Fabric Interconnects in the UCS Mini chassis: 1GE management network connections to the 10/100/1000-Mbps management ports, 10GE infrastructure/LAN uplinks, licensed server ports for direct-attached C-Series servers (no FEX support), appliance and FC/FCoE storage ports, 4 x 10G SFP+ unified ports (Eth/FC/FCoE, supporting 1/10G Ethernet and 2/4/8G FC), a 1 x 40G QSFP+ Ethernet/FCoE port, a USB port for firmware upgrades, and a console port)

Networking
The management ports on both Cisco UCS 6324 Fabric Interconnects housed inside the UCS Mini chassis are connected to the management network, and at least one 1G or 10G port from each side of the fabric is used as an uplink port to connect to the LAN through a pair of upstream switches, as shown in figure 2 above. In this example we have created five vNICs in the service profile associated with the B200 M4 blade servers; they are used to configure the networking as explained below.
SvSAN uses three different logical interface traffic types:
Management - used for accessing the web GUI, plug-in or CLI. At least one must be present.
iSCSI - listens for incoming connection requests from iSCSI initiators. At least one must be present.
Mirror - VSA nodes communicate data and metadata associated with mirrored volumes.
StorMagic recommends the use of four or more physical NICs between hosts and SvSAN Virtual Storage Appliances (VSAs) to provide the recommended level of redundancy and load balancing.
vSwitch0 connects directly to the customer's production network, and it is presumed this is where production VMs will reside. The VSA also has an interface on vSwitch0 for management, administration and monitoring purposes only.
vSwitch1 is an isolated network, either by VLAN or IP. This network should not be used for any traffic other than the VSA iSCSI and mirror traffic. This prevents contention and provides the best storage throughput possible.
vSwitch2 is an isolated network, either by VLAN or IP. This network should not be used for any traffic other than the VSA iSCSI and mirror traffic. This prevents contention and provides the best storage throughput possible.
vSwitch3 is an isolated network, either by VLAN or IP. This network is dedicated to vMotion traffic.
5 © 2015 Cisco and/or its affiliates. All rights reserved.
Figure 3. Logical Network Connectivity Diagram (production VMs and a VSA on each hypervisor, SvSAN synchronous mirroring between the VSAs over the iSCSI/mirror networks, a dedicated vMotion network, and the production network hosting the NSH)

Storage
In this deployment guide, each blade has 2 x 1.2TB 6Gbps SAS disks and 2 x 32GB Cisco FlexFlash SD cards. The 2 x 1.2TB SAS disks are configured as a RAID 0 stripe and presented as two virtual drives (VDs). The first VD is fixed at 2 TB and the remaining capacity is configured as the second VD.

Figure 4. Disk Drives in Cisco UCS B200 M4 Server
Figure 5. RAID 0 Group with Two Virtual Drives in Cisco UCS B200 M4 Server
6 © 2015 Cisco and/or its affiliates. All rights reserved.
The 2 x 32GB Cisco FlexFlash SD cards are configured for Mirror (RAID 1) in the UCS service profile's local disk configuration policy. The OS will be installed on the Cisco FlexFlash SD cards. The VSA deployment will utilize the 185GB virtual disk, which must be configured as persistent storage on the host. The second virtual drive is allocated explicitly to the VSA as a Raw Device Mapping. The VSA sees the virtual disk as a pool of storage from which iSCSI target(s) can be carved. These iSCSI target(s) can be mirrored with other VSAs to create highly available storage. For this deployment guide we have created four mirrored volumes, two mirrored volumes between each pair of SvSAN VSAs, as shown in figure 6. All the iSCSI mirrored volumes are presented to all the hosts, but each volume is mirrored only between a pair of SvSAN VSAs.
A mirrored target is one that is mirrored between a pair of VSAs. This means that there are two identical copies of the target data when running normally, one on each VSA. Any data sent to one plex is automatically copied to the other. This is done synchronously, meaning that if an initiator sends data to one plex, the VSA does not acknowledge the request until the data is present on both plexes. A synchronous mirror can be used to provide highly available storage, in that an initiator can access any plex at any time, and if one plex fails it can continue without interruption by using the other plex.

Figure 6. StorMagic SvSAN Mirror Target with Four Cisco UCS B200 M4 Servers (a VMware vSphere cluster of four blades; each blade's 2 x 1.2TB RAID 0 group is split into a 185GB VD and a 2TB VD, the hypervisor boots from the mirrored 2 x 32GB SD cards, and two synchronous mirrors per VSA pair present the shared VMFS datastores)

Table 1 lists the hardware and software components used for this deployment guide.
7 © 2015 Cisco and/or its affiliates. All rights reserved.
Table 1. Hardware and Software Components
Component | Description
Cisco UCS |
1 x Cisco UCS Mini Chassis
2 x Cisco UCS 6324 Fabric Interconnects - UCSM 3.0(2c)
4 x Cisco UCS B200 M4 Blade Servers
2 x Intel Xeon processor E5-2620 v3 CPUs per blade server
8 x 16GB DDR4-2133-MHz RDIMM/PC4-17000/dual rank/x4/1.2v per blade server
2 x 1.2TB SAS 10K RPM SFF HDD per blade server
1 x UCSB-MRAID12G per blade server
1 x Cisco UCS VIC 1340 (UCSB-MLOM-40G-03) per blade server
2 x 32GB Cisco Flexible Flash cards (UCS-SD-32G-S) per blade server
VMware vSphere |
VMware vCenter Server 6.0.0b
VMware ESXi 6.0.0b
StorMagic |
StorMagic SvSAN 5.2
8 © 2015 Cisco and/or its affiliates. All rights reserved.
Deployment of StorMagic SvSAN for VMware vSphere on Cisco UCS Environment
This flow chart describes the high-level steps to deploy StorMagic SvSAN for VMware vSphere on a Cisco UCS environment.

Figure 7. High-Level Steps for Deploying SvSAN for VMware on Cisco UCS Mini with B200 M4 Servers (UCS Mini cable connectivity; configure the Cisco UCS Mini Fabric Interconnects; configure Cisco UCSM - QoS, ports/uplinks, VLANs, pools and policies, vNIC templates, service profile templates, service profiles; create virtual drives; install ESXi 6.0; vCenter/StorMagic software installation prerequisites; prepare for SvSAN deployment; configure host networking; install StorMagic SvSAN on vCenter; install the StorMagic PowerShell Toolkit; deploy SvSAN VSAs; create a shared datastore with mirrored storage; manage SvSAN VSAs; manage shared datastore storage)
9 © 2015 Cisco and/or its affiliates. All rights reserved.
Cisco UCS Configuration
The following section provides a high-level procedure to configure the Cisco UCS Mini environment to deploy StorMagic SvSAN for VMware vSphere.

Configuring the Cisco UCS Mini Fabric Interconnects
In this section, perform the initial setup of the Cisco UCS 6324 fabric interconnects in a cluster configuration. Refer to the URL below for step-by-step instructions on performing an initial system setup for a cluster configuration.
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/gui/config/guide/3-0/b_ucsm_gui_user_guide_3_0/b_UCSM_GUI_User_Guide_3_0_chapter_0101.html

Configure UCS Manager
Log in to UCS Manager and configure the pools, policies, and service profiles as listed in the sections that follow.

Synchronize Cisco UCS to NTP
In UCS Manager, navigate to Admin > Time Zone Management. Select a time zone and add an NTP server address to synchronize Cisco UCS with an NTP server.

Enable QoS in Cisco UCS Fabric
In UCS Manager, navigate to LAN tab > LAN Cloud > QoS System Class to enable the priorities and enter the MTU size as shown below.
Figure 8. UCS Manager - Enable QoS System Class

Create QoS policy
Navigate to LAN tab > root > Policies > QoS Policies and create three policies with the information provided in the reference table below:
Table 2. QoS Policy for Different Traffic
QoS Policy Name | Priority Selection | Other Parameters
StorMagicISCSI | Gold | Default
StorMagicVM | Platinum | Default
StorMagicVMotion | Bronze | Default
10 © 2015 Cisco and/or its affiliates. All rights reserved.
Figure 9. UCS Manager - Create QoS Policy

Enable Uplink Ports
In UCS Manager, navigate to Equipment tab > Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports and configure the ports connected to the upstream switch as uplink ports. Repeat the same step on Fabric Interconnect B.
Figure 10. UCS Manager - Enable Uplink Ports
11 © 2015 Cisco and/or its affiliates. All rights reserved.
Create VLANs
Navigate to LAN tab > LAN > LAN Cloud and create one VLAN in each fabric, Fabric A (StorMagicISCSI-A VLAN) and Fabric B (StorMagicISCSI-B VLAN), for the iSCSI storage traffic as shown in figure 11.
Figure 11. UCS Manager - Create VLANs for iSCSI Storage Traffic
Navigate to LAN tab > LAN > VLANs and create the global VLANs for your environment as shown in figure 12. For this guide we created two VLANs, one for management and the other for vMotion traffic, as shown below. The equivalent VLANs can also be created with the Cisco UCS PowerTool, as sketched at the end of this page.
Figure 12. Create Global VLANs for Management and vMotion Traffic
12 © 2015 Cisco and/or its affiliates. All rights reserved.
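As an alternative to the GUI steps above, the same VLANs can be created with the Cisco UCS PowerTool PowerShell module. This is a minimal sketch only: the UCS Manager address, VLAN names and VLAN IDs shown are placeholders to be replaced with your own values, and the cmdlet and module names should be verified against the UCS PowerTool release you have installed.

# Cisco UCS PowerTool sketch (addresses, VLAN names and IDs are examples only)
Import-Module Cisco.UCSManager          # or CiscoUcsPS, depending on the installed UCS PowerTool release
Connect-Ucs -Name 10.0.0.10 -Credential (Get-Credential)   # UCS Mini cluster IP (example)

# Global VLANs for management and vMotion traffic (figure 12)
Get-UcsLanCloud | Add-UcsVlan -Name "Mgmt" -Id 10
Get-UcsLanCloud | Add-UcsVlan -Name "vMotion" -Id 20

# Fabric-specific VLANs for the iSCSI storage traffic (figure 11)
Get-UcsFiLanCloud -Id "A" | Add-UcsVlan -Name "StorMagicISCSI-A" -Id 30
Get-UcsFiLanCloud -Id "B" | Add-UcsVlan -Name "StorMagicISCSI-B" -Id 40

Disconnect-Ucs

Scripting the VLANs this way keeps fabric A and fabric B consistent and makes the configuration repeatable across additional UCS Mini domains.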
Create Local Disk Configuration Policy Navigate to Servers tab > Policies > root > Local Disk Configuration Policy and create a Local Disk Configuration Policy. Select RAID 0 Striped for Mode and enable both the FlexFlash State and FlexFlash RAID Reporting State as shown in figure 13. Figure 13. Create Local Disk Configuration Policy Create Boot Policy Navigate to Servers tab > Policies > root > Boot Policies and Create a Boot Policy by adding CD/DVD and SD Card in the Boot order as shown in figure 14. Figure 14. Create Boot Policy 13 2015 Cisco and/or its affiliates. All rights reserved.
Create BIOS Policy Navigate to Servers tab > Policies > root > BIOS Policies and create a BIOS Policy with settings as shown in figure 15. Figure 15. Create BIOS Policy Configure Server Pool and Qualifying Policy In this section you create Server pool, Server Pool Policy Qualifications and Server Pool Policy for auto-population of servers depending on the configuration. Navigate to Servers tab > Pools > root > Server Pool and right-click to create a Server Pool. Provide a name and click Finish without adding any servers to the pool. Navigate to Servers tab > Policies > root > Server Pool Policy Qualifications. Right-click to create a Server Pool Policy Qualifications with Server PID Qualifications as the criteria, and enter UCSB-B200-M4 for the model as shown in figure 16. Figure 16. Create Server Pool Policy Qualifications 14 2015 Cisco and/or its affiliates. All rights reserved.
Navigate to Servers tab > Policies > root > Server Pool Policies and right-click on it to create a Server Pool Policy. Provide a name and select the Target Pool and Qualification created in the previous section from the drop-down lists.
Figure 17. Create Server Pool Policy

Create UUID Suffix Pools
Navigate to Servers tab > Pools > root > UUID Suffix Pools and right-click on it to create a UUID Suffix Pool, and then add a block of UUIDs to it.
Figure 18. Create UUID Suffix Pools

Create MAC Pool
Navigate to LAN tab > Pools > root > MAC Pools and right-click on it to create a MAC Pool, and then add a block of MAC addresses to it.
Figure 19. Create MAC Pools
15 © 2015 Cisco and/or its affiliates. All rights reserved.
Create an IP Pool for KVM Access
Navigate to LAN tab > Pools > root > IP Pools > IP Pool ext-mgmt and right-click to create a block of IPv4 addresses for KVM access.
Figure 20. Create IP Pool for KVM Access

Create vNIC Templates and LAN Connectivity Policies
Navigate to LAN tab > Policies > root > vNIC Templates and right-click on it to create vNIC templates using the reference table given below.
Table 3. vNIC Templates
vNIC Template Name | Fabric ID | Enable Failover | Template Type | VLANs Allowed | Native VLAN | MTU | MAC Pool | QoS Policy
eth0 | Fabric A | No | Updating | Mgm | Mgm | 1500 | StorMagicMAC | StorMagicVM
eth1 | Fabric B | No | Updating | Mgm | Mgm | 1500 | StorMagicMAC | StorMagicVM
eth2 | Fabric A | No | Updating | StorMagicISCSI-A | StorMagicISCSI-A | 9000 | StorMagicMAC | StorMagicISCSI
eth3 | Fabric B | No | Updating | StorMagicISCSI-B | StorMagicISCSI-B | 9000 | StorMagicMAC | StorMagicISCSI
eth4 | Fabric B | Yes | Updating | StorMagicVMotion | StorMagicVMotion | 9000 | StorMagicMAC | StorMagicVMotion
16 © 2015 Cisco and/or its affiliates. All rights reserved.
Figure 21. Create vnic Template Create a LAN Connectivity Policy using the above vnic templates and selecting VMware as the adapter policy for Adapter Performance Profile. Figure 22. Create LAN Connectivity Policy 17 2015 Cisco and/or its affiliates. All rights reserved.
Create a Service Profile Template
Create a Service Profile Template under the Servers tab using all the pools and policies created in the sections above.
Figure 23. Create Service Profile Template

Create Service Profiles from Templates
Create Service Profiles from a Template under the Servers tab. Provide a naming prefix and a starting number with the number of instances, and select the service profile template created in the step above.
Figure 24. Create Service Profiles from Template
This creates four service profiles and automatically associates them with the blade servers matching the selection criteria chosen in the Server Pool Policy.
18 © 2015 Cisco and/or its affiliates. All rights reserved.
Figure 25. Service Profiles Associated to Blade Servers

Create Virtual Drives
Select a Service Profile, click Boot Server, and launch the KVM console. Enter the Cisco FlexStorage controller BIOS Configuration Utility (Ctrl+R) to configure a RAID 0 group with the two 1.2TB drives and create two virtual drives. Complete this RAID configuration on all four B200 M4 blade servers as shown in figure 26. Before moving on to the ESXi installation, the service profile associations can also be verified from the Cisco UCS PowerTool, as sketched below.
Figure 26. Create RAID Group and Virtual Drives
19 © 2015 Cisco and/or its affiliates. All rights reserved.
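Before installing ESXi, it can be useful to confirm from PowerShell that all four service profiles have finished associating with their blades. This is a minimal sketch, assuming the Cisco UCS PowerTool session from the earlier VLAN example; the "StorMagic*" naming prefix is an example and should match the prefix chosen when the profiles were created.

# Cisco UCS PowerTool sketch: confirm service profile association state (naming prefix is an example)
Get-UcsServiceProfile -Type instance |
    Where-Object { $_.Name -like "StorMagic*" } |
    Select-Object Name, AssignState, AssocState, OperState, PnDn |
    Format-Table -AutoSize
# AssocState should report "associated" and PnDn should show the blade DN (for example sys/chassis-1/blade-1) for each profile.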
Install VMware ESXi 6.0
The procedure in this section needs to be carried out on all Cisco UCS B200 M4 Blade Servers.
1. Download the Cisco custom image for VMware ESXi 6.0 from the URL below.
https://my.vmware.com/web/vmware/details?productid=490&downloadgroup=oem-esxi60ga-cisco#product_downloads
2. In UCS Manager, select a blade server and launch the KVM console session. From the KVM console session Virtual Media tab, select Map CD/DVD and mount the Cisco custom ISO image downloaded in the previous step.
3. Select the CiscoVD Hypervisor partition as shown in figure 27.
Figure 27. Installation Storage Device Selection
4. Once the installation is complete, reboot the system and configure the management IP address.

Prepare for SvSAN Deployment
1. Log in to the host using the vSphere Client, navigate to Configuration > Storage, and click Add Storage to add persistent storage to the host for the SvSAN VSA deployment.
2. Select Disk/LUN as the Storage Type and create a datastore using the 185GB LUN as shown in figure 28.
Figure 28. Add Persistent Storage to Host
3. Upload the patches and device drivers to this newly created datastore.
4. Put the host into Maintenance Mode.
vim-cmd hostsvc/maintenance_mode_enter
5. From the shell, install the latest ESXi patch from the link below:
https://my.vmware.com/group/vmware/patch#search
esxcli software vib update -d /vmfs/volumes/temp/<filename>.zip
20 © 2015 Cisco and/or its affiliates. All rights reserved.
Figure 29. ESXi Upgrade
6. Download the Cisco UCS B200 M4 device drivers image ucs-bxxx-drivers.3.0.2a.iso from the URL below and upgrade the drivers for the NIC, storage and other components as shown in figure 30.
https://software.cisco.com/download/release.html?mdfid=283853163&flowid=25821&softwareid=283853158&release=2.2(6)&relind=available&rellifecycle=&reltype=latest
esxcli software vib install --no-sig-check -d /vmfs/volumes/temp/<filename>.zip
Figure 30. Device Driver Upgrade
7. Create a SATP rule to ensure that ESXi uses the Round Robin path policy and reduce the path IOP count to 1 IOP for all StorMagic iSCSI volumes.
esxcli storage nmp satp rule add --satp "VMW_SATP_ALUA" --psp "VMW_PSP_RR" -O "iops=1" --vendor="StorMagc" --model="iSCSI Volume"
Figure 31. Create SATP Rule
This ensures the even distribution of IO across mirrored volumes and is a best practice for StorMagic on the UCS Mini architecture. The following command can be used to check whether the setting has taken effect.
esxcli storage nmp device list
Figure 32. Verify Path Policy Setting
The above screenshot shows MirrorBA. The two fields that identify the changes are:
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: (policy=rr,iops=1)
A PowerCLI sketch that applies steps 4 through 7 remotely to every host in the cluster follows at the end of this page.
8. Log in to vCenter and add all the hosts to the managing vCenter Server where the StorMagic software is installed.
21 © 2015 Cisco and/or its affiliates. All rights reserved.
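The maintenance-mode, patch, driver and SATP steps above are shown as per-host shell commands; the same work can be scripted from a management workstation with VMware PowerCLI and its Get-EsxCli interface. The sketch below is illustrative only: the vCenter address, cluster name and depot path placeholders are assumptions, the esxcli v2 argument names (for example pspoption and nosigcheck) should be confirmed with CreateArgs() for your PowerCLI build, and Set-ScsiLun is used to adjust LUNs that are already presented, since the SATP rule only applies to newly discovered devices.

# PowerCLI sketch, run from a management workstation (names and paths are examples)
Connect-VIServer -Server vcenter.example.local

foreach ($vmhost in Get-Cluster "SvSAN-Cluster" | Get-VMHost) {
    # Step 4: enter maintenance mode
    $vmhost | Set-VMHost -State Maintenance -Confirm:$false

    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # Step 5: apply the ESXi patch bundle previously uploaded to the local datastore
    $esxcli.software.vib.update.Invoke(@{ depot = @("/vmfs/volumes/temp/<filename>.zip") })

    # Step 6: install the Cisco device driver bundle (run CreateArgs() to confirm argument names)
    $esxcli.software.vib.install.Invoke(@{ depot = @("/vmfs/volumes/temp/<filename>.zip"); nosigcheck = $true })

    # Step 7: Round Robin SATP rule with IOPS=1 for StorMagic iSCSI volumes
    $esxcli.storage.nmp.satp.rule.add.Invoke(@{
        satp      = "VMW_SATP_ALUA"
        psp       = "VMW_PSP_RR"
        pspoption = "iops=1"
        vendor    = "StorMagc"
        model     = "iSCSI Volume"
    })

    # Adjust any StorMagic LUNs that are already present (the rule only affects new devices)
    Get-ScsiLun -VmHost $vmhost -LunType disk |
        Where-Object { $_.Vendor -eq "StorMagc" } |
        Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1

    $vmhost | Set-VMHost -State Connected -Confirm:$false
}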
9. Next, create a Cluster under the Datacenter and add all the hosts on which StorMagic SvSAN will be deployed, as shown in figure 33.
Figure 33. vCenter Hosts and Clusters
10. Select a host, navigate to Configuration > Storage Adapters, and click Add to add the Software iSCSI Adapter.
Figure 34. Add Software iSCSI Adapter
Note: The following must be in place for the host iSCSI software adapter SAN connectivity:
The iSCSI port (TCP port 3260) must be enabled in the VMware firewall
The iSCSI software initiator must be enabled on the hosts
SSH must be enabled on the hosts to allow VSAs to be deployed
The following network ports will be used by StorMagic SvSAN.
22 © 2015 Cisco and/or its affiliates. All rights reserved.
Table 4. Network Ports
Component/Service | Protocol/Port
Discovery | UDP 4174
XMLRPC Server | TCP 8000
Inter-VSA communications | TCP 4174
SMDXMLRPC | TCP 43127
Web Access | TCP 80, 443
Management Services | TCP 8990
vSphere Client Plug-in | TCP 80, 443
Misc VMware | TCP 16961, 16962, 16963
Additional port numbers used by SvSAN can be found at the URL below:
http://www.stormagic.com/manual/svsan_5-2/en/content/port-numbers-used-by-svsan.htm

Configure Host Networking
1. Create virtual standard switches on all the hosts as described below. A scripted PowerCLI sketch of this host networking layout, including the software iSCSI and SSH prerequisites from the note above, follows figure 37.
vSwitch0
VMkernel Port - Management Network
Virtual Machine Port Group - Production Virtual Machines and SvSAN management
vSwitch1
VMkernel Port - iSCSI traffic
Virtual Machine Port Group - SvSAN iSCSI and Mirror traffic
vSwitch2
VMkernel Port - iSCSI traffic
Virtual Machine Port Group - SvSAN iSCSI and Mirror traffic
vSwitch3
VMkernel Port - vMotion
2. Edit all the vSwitches and VMkernel ports used for iSCSI, mirror and vMotion traffic and set the MTU size to 9000 as shown in figures 35 and 36.
23 © 2015 Cisco and/or its affiliates. All rights reserved.
Figure 35. MTU Settings for vSwitch
Figure 36. MTU Settings for VMkernel Port
Figure 37 shows all the vSwitches and their VMkernel and virtual machine port groups created on all the hosts.
Figure 37. Networking Configuration
24 © 2015 Cisco and/or its affiliates. All rights reserved.
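The host networking layout and the software iSCSI/SSH prerequisites described above can also be applied with PowerCLI. This is a minimal sketch for a single host: the host name, vmnic assignments, port group names and IP addresses are examples that must be adapted to your environment, and it mirrors the vSwitch0-vSwitch3 layout used in this guide (vSwitch0, the default management switch, is left unchanged).

# PowerCLI sketch for one host (host name, vmnic numbers, port group names and IPs are examples)
$vmhost = Get-VMHost "esxi-01.example.local"

# vSwitch1 - first isolated iSCSI/mirror network (MTU 9000)
$vs1 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic "vmnic2" -Mtu 9000
New-VirtualPortGroup -VirtualSwitch $vs1 -Name "SvSAN-iSCSI1"            # VSA iSCSI/mirror port group
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs1 -PortGroup "iSCSI1" -IP 10.10.1.11 -SubnetMask 255.255.255.0 -Mtu 9000

# vSwitch2 - second isolated iSCSI/mirror network (MTU 9000)
$vs2 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch2" -Nic "vmnic3" -Mtu 9000
New-VirtualPortGroup -VirtualSwitch $vs2 -Name "SvSAN-iSCSI2"
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs2 -PortGroup "iSCSI2" -IP 10.10.2.11 -SubnetMask 255.255.255.0 -Mtu 9000

# vSwitch3 - dedicated vMotion network (MTU 9000)
$vs3 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch3" -Nic "vmnic4" -Mtu 9000
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs3 -PortGroup "vMotion" -IP 10.10.3.11 -SubnetMask 255.255.255.0 -Mtu 9000 -VMotionEnabled:$true

# Prerequisites from the note above: software iSCSI initiator, iSCSI firewall port and SSH
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled:$true
Get-VMHostFirewallException -VMHost $vmhost -Name "Software iSCSI Client" | Set-VMHostFirewallException -Enabled:$true
Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "TSM-SSH" } | Start-VMHostService

Running the same block in a loop over Get-Cluster | Get-VMHost keeps the port group names and MTU settings identical on every blade, which matters because mismatched port group names break vMotion and mismatched MTUs degrade iSCSI throughput.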
vCenter Server
StorMagic's SvSAN management components are installed directly onto the VMware vCenter Server. This package includes a number of services that are required to orchestrate VSA deployment and storage provisioning. A UCS Director module is available that includes workflows to automate VSA deployment and storage provisioning. Finally, a PowerShell module is available to allow administrators to script deployments, provisioning and other management tasks.

vCenter StorMagic Software Installation Prerequisites
A Windows-based vCenter Server or the vCenter Server Appliance (vCSA).
Table 5. vCenter and StorMagic SvSAN Version Compatibility
StorMagic SvSAN 5.2 is compatible with the following vCenter versions:
VMware vCenter Server 5.1a
VMware vCenter Server 5.1b
VMware vCenter Server 5.1 Update 1
VMware vCenter Server 5.1 Update 1a
VMware vCenter Server 5.1 Update 1b
VMware vCenter Server 5.1 Update 1c
VMware vCenter Server 5.1 Update 2
VMware vCenter Server 5.1 Update 2a
VMware vCenter Server 5.1 Update 3
VMware vCenter Server 5.1 Update 3a
VMware vCenter 5.5
VMware vCenter 5.5.0a
VMware vCenter 5.5.0b
VMware vCenter 5.5.0c
VMware vCenter 5.5.0 Update 1
VMware vCenter 5.5.0 Update 1a
VMware vCenter 5.5.0 Update 1b
VMware vCenter 5.5.0 Update 1c
VMware vCenter 5.5.0 Update 2
VMware vCenter 5.5.0 Update 2d
VMware vCenter 5.5.0 Update 2e
VMware vCenter Server 6.0.0
Visual C++ Runtime Libraries and .NET Framework 3.5 SP1 are required. These are installed automatically (if not already present) when the SvSAN setup executable (setup.exe) is run.
Note: When using Windows Server 2012, this operation must be performed manually through Server Manager > Add Roles and Features.
25 © 2015 Cisco and/or its affiliates. All rights reserved.
Installing the SvSAN software on a vCenter Server
1. Run the setup.exe file provided in the downloaded ZIP file, selecting Run as administrator.
2. Click Next. The End-User Licence Agreement opens.
Note: There are different versions for the USA and the rest of the world. Scroll down to find the appropriate agreement.
3. To accept the agreement, check the Accept check box and then click Next.
4. Click Typical. This installs the Neutral Storage Service on the vCenter Server, together with the plug-in components.
5. Once the installation has begun you must provide a valid vCenter username and password. If a user credentials prompt does not appear, right-click setup.exe, then select Run as Administrator. If the installation appears to hang, check that the pop-up box has not been hidden behind another window. Finish the setup after the installation completes.
26 © 2015 Cisco and/or its affiliates. All rights reserved.
Installing StorMagic PowerShell Toolkit There are no strict requirements. StorMagic PowerShell toolkit can be installed on the vcenter server or a scripting server. 1. To install the StorMagic PowerShell Toolkit, run the file: setupstormagicpowershelltoolkit.exe 2. The StorMagic PowerShell Toolkit Setup wizard opens. 3. Complete the wizard. There is only one component to install. 4. You can uninstall StorMagic PowerShell Toolkit from Windows Control Panel, Programs and Features, in the usual way. In the vsphere web client a StorMagic management tab will be visible when selecting vcenter > Hosts and Clusters, and a datacenter object in the inventory tree. Deploy SvSAN VSAs Repeat the following steps for each VSA to be deployed. 1. Open the plug-in: in vsphere Web Client, select your datacenter, then click Manage > StorMagic. Figure 38. vsphere Web Client with StorMagic 2. Click Deploy a VSA onto a host. The deployment wizard opens. Click Next. 3. Select a host to deploy a VSA to. Click Next. 4. Read and accept the Terms and Conditions. Click Next. 5. Enter a hostname for the VSA (unique on your network), domain name and the datastore the VSA VM is to reside on. The datastore should be on internal or direct-attached server storage. Two VMDKs are created on this datastore: a 512 MB boot disk and a 20 GB journal disk. 27 2015 Cisco and/or its affiliates. All rights reserved.
Figure 39. Deploy SvSAN VSA 1. Choose the desired storage allocation technique. Raw device mappings offer the best performance; however, SSH must be enabled on the host, but only for the duration of the deployment (SSH can be disabled immediately after deployment). WARNING: If you select an RDM device that has data stored on it, that data will be permanently deleted during deployment Figure 40. Deploy VSA - Storage 2. If you wish to allocate multiple RDMs as a pool, check the Advanced options check box. If the pool you want is listed select it. Otherwise, click Add. The Create Storage Pool window opens. 3. Click Next and click the Skip cache allocation radio button in the Caching configuration window. 4. The Networking page can be left to acquire IP addresses from DHCP, or they can be set statically. Select the Network interface and click Configure to select the interface traffic types. In our setup, VM Network interface is configured for Management and iscsi1, iscsi2 interface is configured for iscsi and Mirroring traffic as shown in the figures 41 and 42. 28 2015 Cisco and/or its affiliates. All rights reserved.
Figure 41. Deploy SvSAN VSA Networking Figure 42. Deploy VSA Networking Configuration Note: Multiple networks are shown if there are multiple vswitches configured on the host. The VSA creates an interface on all vswitches by default. If you do not wish the VSA to create an interface on specific vswitches, clear the box associated with the virtual machine port group. For each interface you can choose its type. 29 2015 Cisco and/or its affiliates. All rights reserved.
5. Enter the VSA license information and click Next.
Note: During deployment, the VSA attempts to connect to StorMagic's license server to validate the license. If it needs to go through a proxy server to do this, supply the proxy server details. If the VSA does not have internet connectivity, license it later using an offline activation mechanism. For further information on licensing click here.
6. Enter the VSA management password, then click Next. Summary information about the VSA is displayed.
7. Click Finish to deploy the VSA.
8. The deployment process can be monitored in the VMware Recent Tasks window. The StorMagic deploy OVF task provides an overall percentage of the deployment. When this task has completed, the VSA will have booted and be operational.
9. Once the VSA has been deployed it is available for creating datastores. A short PowerCLI check that each host is running its VSA is sketched below.
Note: The VSA requires at least one network interface that is routable to vCenter, otherwise the VSA deployment will fail. This is so that once the VSA VM has booted, it can communicate back to the vCenter Server where the SvSAN deployment task resides.
30 © 2015 Cisco and/or its affiliates. All rights reserved.
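After repeating the deployment for each host, it is worth confirming from PowerCLI that every host in the cluster is running its VSA and that the appliances are powered on. This is a minimal sketch: the cluster name and the assumption that the VSA VM names contain "VSA" (they follow the hostname chosen during deployment) are examples only.

# PowerCLI sketch: list the deployed VSAs per host (cluster name and VSA naming filter are examples)
Get-Cluster "SvSAN-Cluster" | Get-VMHost | ForEach-Object {
    $vsa = Get-VM -Location $_ | Where-Object { $_.Name -like "*VSA*" }
    [pscustomobject]@{
        Host       = $_.Name
        VSA        = ($vsa.Name -join ", ")
        PowerState = ($vsa.PowerState -join ", ")
    }
} | Format-Table -AutoSize
# Each host should report exactly one VSA in the PoweredOn state before datastores are created.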
Recommended SvSAN Mirror Configurations
Overview
To allow the VSA storage to dynamically scale with odd numbers of UCS blades, it is recommended that each VSA divides its pooled storage into two iSCSI mirrors. This configuration also adheres to VMware's default storage heartbeat policy, which requires a minimum of two datastores when deploying a two-blade configuration.

Mirrored Targets
A mirrored target is one that is mirrored between a pair of VSAs, with each VSA hosting one side of the mirror (called a mirror plex). This means that there are two identical copies of the target data when running normally, one on each VSA. Any data sent to one plex is automatically copied to the other. This is done synchronously, meaning that if an initiator sends data to one plex, the VSA does not acknowledge the request until the data is present on both plexes. A synchronous mirror can be used to provide highly available storage, in that an initiator can access any plex at any time, and if one plex fails it can continue without interruption by using the other plex.
As previously mentioned, it is recommended that the pool of storage on each VSA is divided into two iSCSI mirrors. This allows dynamic storage migration when adding additional blades and VSAs to the cluster. This configuration further improves the solution's reliability should there be multiple blade failures and reduces the possibility of any storage contention by spreading multiple VMs across mirrored targets.

Figure 43. Four Blade Example Mirror Configuration (the same four-blade VMware vSphere cluster layout as figure 6: each blade's 2 x 1.2TB RAID 0 group is split into a 185GB VD and a 2TB VD, the hypervisor boots from the mirrored 2 x 32GB SD cards, and two synchronous mirrors per VSA pair present the shared VMFS datastores)
31 © 2015 Cisco and/or its affiliates. All rights reserved.
Cisco UCS Blade Server Expansion and SvSAN Target Migration
Implementing two mirrors across blade servers allows for dynamic target migration when adding additional blades, for example when adding a fifth blade to the four-blade configuration discussed previously. The SvSAN management tools can be used to automate this process, migrating plexes from one blade to another. This ensures that storage IO is spread evenly across all the Cisco UCS blade servers.

Figure 44. Blade Server Expansion and SvSAN Target Migration (the mirror plex on VSA2 backing Shared VMFS 1 will be migrated to the new blade's VSA, VSA3, while the existing 2 x synchronous mirrors remain online)

The mirror plex on VSA2 housing VMFS1 will be migrated to the newly introduced blade's VSA. The storage will remain online throughout this operation, but additional IO will be observed as the plex data is written to the new VSA.
32 © 2015 Cisco and/or its affiliates. All rights reserved.
Figure 45. Five Blade Servers Example Mirror Configuration after Migration (1. the mirror plex is migrated from VSA2 to VSA3; 2. a new mirror is created between VSA3 and VSA2, giving three synchronous mirrors across the expanded cluster)

Once the plex migration has completed, a new mirror will be created between the new VSA and the VSA the plex was migrated from, balancing the storage evenly across the three blades. The other two blades in the environment remain unchanged.
33 © 2015 Cisco and/or its affiliates. All rights reserved.
Figure 46. Six Blade Servers Example Mirror Configuration (1. the mirror plex on VSA2 will be migrated to VSA4; 2. the mirror plex on VSA3 will be migrated to VSA2; 3. a new mirror plex is created between VSA3 and VSA4, leaving two synchronous mirrors per VSA pair)

When adding a sixth blade to the solution, the same procedure can be applied without taking the storage offline. This process can be applied to any number of blades introduced to the solution.
34 © 2015 Cisco and/or its affiliates. All rights reserved.
Datastore Spanning Using VMware Extents
It is possible to aggregate multiple iSCSI mirrored volumes into a single datastore using VMware extents. The StorMagic UCS Director module, vSphere plug-in and PowerShell scripting module allow this process to be automated. This allows the administrator to see all mirrored iSCSI volumes across any number of blades as a single datastore. This may be required if VMs are larger than the capacity of a single mirror, but it should be noted that this configuration may introduce storage contention. As a performance best practice it is recommended that datastores are provisioned on single mirrors, not using extents.
35 © 2015 Cisco and/or its affiliates. All rights reserved.
Creating a shared datastore with mirrored Storage 1. In vsphere Web Client, select your datacenter and click Manage > StorMagic. The plug-in opens. 2. Click Create a shared datastore. The Create Datastore wizard opens. Click Next. 3. Name the datastore. 4. Set the provisioned size. To set the size so that all the available space is used, check Use all. To divide this storage equally across the two mirrors, set the Size to 1023.99GB. This will divide the available 1.99TB equally across two mirrors. 5. To create mirrored storage, select two VSAs. 6. By default the pool to be used on each VSA is selected automatically. To specify the pool yourself check the Advanced options check box. Figure 47. Create Shared Datastore with Mirrored Storage 7. Click Next. 8. Select the neutral storage host for mirroring. Select one of the other VSAs running on another blade. The Advanced option lets you select the Up isolation policy, which does not require an NSH but is not best practice. Click here for more information on isolation policies. Click Next. 9. Optionally, enable caching on the datastore. This is only available if your license includes caching and a cache device was enabled when the VSA was deployed. If you enable caching, default cache settings are used; these can be modified at a later time using the web GUI. Click Next. 10. Select the hosts that are to have access to the SvSAN datastore. Click Next. Figure 48. Hosts Selection to Mount Datastore 36 2015 Cisco and/or its affiliates. All rights reserved.
11. The first time you create a datastore, the VSAs must authenticate with their hosts if the credentials were not already saved from deployment. For each host, provide the host administrative password (the password that was used when ESXi was installed). Click Next.
12. Click Finish to start the datastore creation. The datastore creation process can be monitored in the VMware Recent Tasks window. Once this task has completed, the datastore is available on the hosts.
13. When a mirror is created, a full synchronization is performed to ensure both sides of the mirror are synchronized.
14. Repeat the steps in this section to create a second datastore on the same VSAs, or another datastore using the other two SvSAN nodes, as shown in figure 49. The resulting datastores can be checked from PowerCLI, as sketched below.
Figure 49. Create Mirrored Storage
37 © 2015 Cisco and/or its affiliates. All rights reserved.
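Once the mirrored datastores have been created, a quick PowerCLI query confirms that each shared datastore is mounted on every host in the cluster with the expected capacity. This is a minimal sketch; the cluster name and the "Shared VMFS" naming pattern are examples only and should match the names chosen in the Create Datastore wizard.

# PowerCLI sketch: verify the mirrored datastores are mounted on all hosts (names are examples)
$hosts = Get-Cluster "SvSAN-Cluster" | Get-VMHost
Get-Datastore -Name "Shared VMFS*" | ForEach-Object {
    [pscustomobject]@{
        Datastore    = $_.Name
        CapacityGB   = [math]::Round($_.CapacityGB, 2)
        FreeGB       = [math]::Round($_.FreeSpaceGB, 2)
        HostsMounted = (Get-VMHost -Datastore $_).Count
        HostsTotal   = $hosts.Count
    }
} | Format-Table -AutoSize
# HostsMounted should equal HostsTotal for every mirrored datastore before VMs are placed on it.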
Manage VSAs
To check the status of your VSAs, in the plug-in click Manage VSAs.
Figure 50. Manage VSAs
This lists the VSAs with their IP addresses and system status. Select a VSA to see more information about it, including the amount of pool storage used, the amount available, the system serial number and the firmware version.
You can monitor the mirror sync status by managing the VSAs directly. In the StorMagic plug-in, click Manage VSAs, select the VSA you wish to manage, and click the Management URL. This launches a new tab directly to the VSA's management interface. The default username is admin and the password is the password supplied at VSA deployment. In addition, the VSA dashboard can be used to monitor VSAs in real time.
The VSA state reflects the current system status of the VSA. For example, if a mirror is unsynchronized the VSA system status will be warning, as the mirror is currently degraded. When the mirror synchronizes, the VSA system status will change back to healthy, as the VSA is in an optimal running state.
Figure 51. Manage VSAs
38 © 2015 Cisco and/or its affiliates. All rights reserved.
The dashboard can be accessed from the vSphere Web Client home page.
Figure 52. StorMagic Dashboard in vSphere Web Client

Configure Jumbo Frames on SvSAN Network Interfaces
Repeat the following steps on each SvSAN VSA to configure jumbo frames on the VSA logical NICs.
1. Log in to the vSphere Web Client and select Hosts and Clusters.
2. Select the datacenter where the VSA(s) are deployed. Click Manage > StorMagic > Manage VSAs.
3. Highlight the VSA and a management URL will appear. Click this URL.
4. A new tab will open, connecting directly to the VSA. The default username is admin; supply the password entered at VSA deployment.
5. Click Network and then select the network device you want to edit.
6. From Actions, click Edit.
7. Edit the MTU size to the desired size to match the rest of the environment's network configuration, then click Apply.
Figure 53. Setting MTU
39 © 2015 Cisco and/or its affiliates. All rights reserved.
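After raising the MTU on the VSA interfaces, it is worth validating that jumbo frames actually pass end to end across the iSCSI/mirror path, since a single device left at MTU 1500 will silently fragment or drop large frames. The sketch below uses the vmkping test through PowerCLI's Get-EsxCli interface; the host name, VMkernel interface and target VSA iSCSI address are examples, 8972 bytes is the usual payload that fits a 9000-byte MTU once IP/ICMP headers are added, and the esxcli v2 argument names can be confirmed with CreateArgs().

# PowerCLI sketch: jumbo-frame path test from a host's iSCSI VMkernel port to a VSA iSCSI interface
$vmhost = Get-VMHost "esxi-01.example.local"
$esxcli = Get-EsxCli -VMHost $vmhost -V2

# Ping with "don't fragment" set and an 8972-byte payload (9000 minus 28 bytes of IP/ICMP header)
$esxcli.network.diag.ping.Invoke(@{
    host      = "10.10.1.21"    # example VSA iSCSI/mirror address
    interface = "vmk1"          # example iSCSI VMkernel interface
    df        = $true
    size      = 8972
    count     = 3
})
# 0% packet loss indicates jumbo frames are passing cleanly; loss here usually points to an MTU mismatch in the path.

The same test can be run directly in the ESXi shell with vmkping -d -s 8972 -I vmk1 followed by the VSA iSCSI address.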
Manage shared datastores To manage shared datastores, in the plug-in click Manage Shared Datastores. Figure 54. Manage Shared Datastores The Manage Shared Datastores page displays all the VSA-hosted datastores, including the datastore name, the hosts using the datastore, the number of paths to the datastore and the datastore status. Resources 1. http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-mini/index.html 2. http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/gui/config/guide/3-0/b_ucsm_gui_user_guide_3_0/b_ucsm_gui_user_guide_3_0_ chapter_010.html 3. http://www.cisco.com/c/en/us/products/servers-unified-computing/index.html 4. https://www.vmware.com/files/pdf/vsphere/vmw-wp-vsphr-whats-new-6-0.pdf 5. http://www.stormagic.com/manual/svsan_5-2/en/ Americas Headquarters Cisco Systems, Inc. San Jose, CA Asia Pacific Headquarters Cisco Systems (USA) Pte. Ltd. Singapore Europe Headquarters Cisco Systems International BV Amsterdam, The Netherlands Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices. Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R) C11-736055-00 10/15