Microsoft Windows Server Best Practices Guide
Version 2.3
Rob "barkz" Barker, Solution Architect
Overview
This guide describes the recommended best practices for provisioning and using a Pure Storage FlashArray with the Microsoft Windows Server operating system. It covers best practices for the Purity Operating Environment (POE) and above.

Goals and Objectives
Even though the FlashArray has been designed to be simple and efficient to operate, there are a number of best practice recommendations that should be followed. These include host multipathing, SAN zoning configurations and policies, and file system recommendations that should be enforced to ensure a highly available, enterprise-class implementation.

Audience
The target audience for this document includes storage administrators, server administrators, and consulting data center architects. A working knowledge of servers, server operating systems, storage, and networking is recommended, but is not a prerequisite for reading this document.
Table of Contents
Operating System Guidelines
    Microsoft Windows Server
    Supported Versions
    Logical Disk Manager and Partition Alignment
    Host Connectivity Steps
    Microsoft Windows Hotfixes
    Microsoft Multipath I/O (MPIO)
    Set Disk and MPIO Recommendations
    HBA Settings
    Additional Tools
    3rd Party MPIO DSM Interoperability
    Configuring iSCSI
    Performance Tuning - iSCSI
Space Reclamation
    SSD Trim
    Benefits of TRIM
    SCSI UNMAP
    Benefits of SCSI UNMAP
    Operating Systems and SCSI UNMAP
    Microsoft Windows Server 2008 R2
    Microsoft Windows Server 2012 / 2012 R2
    Microsoft Hyper-V
SAN Zoning Recommendations
Troubleshooting Brocade Fill Words
    IDLE Fill Word
    Problem Symptoms
    Problem Resolution
References

Table of Figures
Figure 1. Add MPIO Support dialog
Figure 2. MPIO Properties
Figure 3. Windows Disk Management
Figure 4. Not Initialized and Offline disk
Figure 5. Purity GUI Storage View
Figure 6. Get-Disk to show all disks connected to the Windows host
Figure 7. Viewing only PURE disks using PowerShell
Figure 8. Rescan for new disks
Figure 9. Select the MPIO Policy
Figure 10. MPIO Path Details
Figure 11. fcinfo Example
Figure 12. mpclaim Example
Operating System Guidelines
All attached hosts should have a minimum of two paths, connected to different Pure Storage FlashArray controller nodes, to ensure host-to-storage availability.

Microsoft Windows Server

Supported Versions
The following versions have been officially tested:
Windows Server 2012 R2
Windows Server 2012
Windows Server 2008 R2 Service Pack 1

Logical Disk Manager and Partition Alignment
Windows Server 2008 R2, 2012, and 2012 R2 all automatically use 1024KB (1MB) partition offsets. Pure Storage uses a 512-byte geometry on the FlashArray and, as such, there will never be a block alignment issue. To check the StartingOffset on a Windows host, use the following Windows PowerShell:

Get-WmiObject Win32_DiskPartition -ComputerName $env:COMPUTERNAME | Select-Object Name, Index, BlockSize, StartingOffset | Format-Table

Host Connectivity Steps
The following high-level steps outline successful connectivity from a Windows host to the Pure Storage FlashArray:
1. Validate Windows hotfixes
2. Install Multipath I/O (MPIO)
3. Configure the new MPIO device
4. Configure disks
5. Set the SAN Policy
6. Configure MPIO policies
7. Configure HBA settings

Microsoft Windows Hotfixes
Depending on which version of Microsoft Windows Server is deployed, please ensure the hotfixes below are installed. To check which hotfixes, also known as Quick Fix Engineering (QFE) updates, are installed, the following Windows PowerShell will list all the details:

Get-WmiObject -Class Win32_QuickFixEngineering | Select-Object -Property Description, HotFixID, InstalledOn | Format-Table -Wrap

Windows Server 2008 R2
KB
KB
KB
KB
KB
KB
KB
Windows Server 2008 R2 SP1
KB
KB
KB

Windows Server 2012
KB

Microsoft Multipath I/O (MPIO)
You either have a Windows host with MPIO already installed, or this is a new deployment in which you aim to present and attach Pure Storage volumes. Follow the appropriate steps to get MPIO installed and configured.

Install MPIO
You can install the Multipath I/O Windows feature using either Server Manager or Windows PowerShell; both methods are provided below.

Adding MPIO using Server Manager
1. Open Server Manager and select the Local Server
2. Click Manage and select Add Roles and Features
3. Navigate to the Features section in the Add Roles and Features Wizard
4. Scroll down the list of Features and select the Multipath I/O feature
5. Click Next and choose Restart the destination server automatically if required
6. Click Install

Adding MPIO using Windows PowerShell
Open a Windows PowerShell session as an Administrator and run the following command to install the Multipath I/O feature:

Add-WindowsFeature -Name "Multipath-IO"
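Before moving on, it can be worth confirming that the feature actually installed. A minimal check, using the same feature name as the install command above:

Get-WindowsFeature -Name Multipath-IO | Select-Object Name, InstallState   # InstallState should read Installed

If InstallState reads InstallPending, the host still needs the reboot requested during installation.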
Configure New MPIO Device
First, ensure that the Windows host(s) are zoned to the Pure Storage FlashArray. Next, add the Pure FlashArray to the MPIO control panel. Choose how to configure it, using either the MPIO Control Panel or Windows PowerShell.

Add New MPIO Device via Control Panel
Open the Windows Start menu or a Run command and type mpiocpl. The MPIO Properties dialog will open. The first tab lists the MPIO Devices; a default device is listed as Vendor 8Product 16, and it is safe to leave this entry. To add the Pure Storage FlashArray, click Add and enter PURE    FlashArray, being sure to follow the proper formatting when entering the Device Hardware ID (see Figure 1). Note the 4 spaces between PURE and FlashArray.

Figure 1. Add MPIO Support dialog.

You will be prompted to reboot. Upon boot up, the Pure Storage FlashArray will be added to MPIO Devices as shown in Figure 2 below.

Figure 2. MPIO Properties.

Add New MPIO Device using Windows PowerShell
The following steps walk through configuring MPIO with the same details as the Windows Control Panel applet.
1. Open an elevated Windows PowerShell session with Run as Administrator.
2. Run Get-MSDSMSupportedHW to list the existing VendorId and ProductId details.
3. Run New-MSDSMSupportedHW -ProductId FlashArray -VendorId PURE to add the PURE FlashArray to the list of MPIO Devices.
4. Run Restart-Computer when you are prepared to reboot; this will restart the Windows host.
5. After Windows restarts, open an elevated Windows PowerShell session and run the command from Step 2 above to ensure the PURE FlashArray is now listed.
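For scripted deployments, the same steps can be collapsed into an idempotent sketch that only adds the device and reboots when the PURE FlashArray entry is not already present. This is an illustrative example built from the cmdlets in the steps above, not an official Pure Storage script:

if (-not (Get-MSDSMSupportedHW | Where-Object { $_.VendorId -eq "PURE" -and $_.ProductId -eq "FlashArray" })) {
    # Add the device hardware ID, then reboot so MPIO can claim the paths
    New-MSDSMSupportedHW -VendorId PURE -ProductId FlashArray
    Restart-Computer
}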
Configure Disks
Once MPIO has been installed and the proper configuration set, any volumes that have been created and connected to a host or host group can be seen from the Windows host.

Note: If no volumes, hosts, or host groups have been created, please refer to the Pure Storage FlashArray User's Guide, Using Purity GUI to Administer a FlashArray, for step-by-step information. This can be accessed by logging into the Pure Storage FlashArray and clicking the Help link in the upper right corner of the GUI.

There are two methods that can be used to perform disk management: the first is via a GUI uniquely named Disk Management, and the second is with Windows PowerShell. Let's first walk through using Disk Management.

Disk Management GUI
1. Open Windows Server Manager
2. Click Tools > Computer Management to open the Computer Management application.
3. Click Storage > Disk Management to access all of the volumes connected to the Windows host.

Figure 3 provides an example showing eight volumes connected to the host, varying in size from 200GB to 500GB. The volumes shown in Figure 3 have already been initialized and set Online. If the Disk Management view does not show any new volumes connected to the Windows host, a rescan should be performed so that Windows can rescan the bus for connected volumes that were set up in the Purity GUI as shown in Figure 5. Perform a rescan using Disk Management (Computer Management) and select Action > Rescan Disks. This will rescan the bus and display the volumes that are connected to the Windows host. If this is a first-time setup of a Pure Storage FlashArray connecting to a Windows host, it is most likely that the disks will show Not Initialized and in an Offline state as shown in Figure 4; otherwise, it is assumed that the disks were previously set up and should come online and be accessible.
Figure 3. Windows Disk Management.

Figure 4. Not Initialized and Offline disk.

Figure 5. Purity GUI Storage View.
Now that there are volumes connected to the Windows host, they can be individually initialized and brought online. To perform this operation, right-click the Disk # and select Initialize Disk; this opens the Initialize Disk dialog, where MBR (Master Boot Record) or GPT (GUID Partition Table) can be selected as the desired partition style. Next, select the volume and create a New Simple Volume based on your business criteria for size, path or drive letter, and format. Perform the same steps for however many volumes are connected to the Windows host.

Disk Management via Windows PowerShell
Just as with the GUI management, we can see and control all of the details for disks connected to the Windows host. Figure 6 shows the same view of the information in Figure 3 using:

Get-Disk

The disk management capabilities illustrated here require PowerShell 4.0.

Figure 6. Get-Disk to show all disks connected to the Windows host.

Something that can be done with Windows PowerShell that the GUI does not offer is the ability to view only the disks that are from Pure Storage, using additional parameters with the same command run previously:

Get-Disk | Where-Object FriendlyName -like "PURE*"
Figure 7. Viewing only PURE disks using PowerShell.

Just as with the Disk Management GUI, if there are disks that are not shown, a rescan should be performed; the previous PowerShell command can then be re-run to ensure all of the disks are present. A rescan can be piped to DiskPart:

"rescan" | diskpart
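On Windows Server 2012 R2, the Storage module offers an equivalent to the DiskPart rescan. A minimal sketch, assuming the Update-HostStorageCache cmdlet is available on the host:

Update-HostStorageCache                          # rescan the storage bus
Get-Disk | Where-Object FriendlyName -like "PURE*"   # then re-check for the new disks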
Figure 8. Rescan for new disks.

Figure 8 shows that after a rescan the Windows host now has a new Disk 14 connected, with a RAW partition style. With Windows Server 2012 and PowerShell it is possible to initialize the newly added disk(s) using Initialize-Disk <DiskNumber>; this will initialize the disk and then, based on the current SAN Policy, have the corresponding effect. By default, Initialize-Disk sets the PartitionStyle to GPT unless MBR or Unknown is specified with the PartitionStyle parameter. If the current SAN Policy is set to the default of OfflineShared, the newly initialized disk will need to be brought online manually. Run the following PowerShell to determine which disks are Offline and then set them all Online. The next section on SAN Policy goes into more detail.

Get-Disk | Where-Object IsOffline -eq $True | Set-Disk -IsOffline $False
Next, create a partition on the newly initialized disk using the maximum size. Using the Disk Management GUI it is possible to assign a drive letter or mount point from the New Simple Volume wizard. When creating a new partition with Windows PowerShell, you can use the AssignDriveLetter option, or use Add-PartitionAccessPath to set a mount point location (e.g. C:\MyMountPoint). Note that the mount point location needs to exist beforehand. Finally, the Format-Volume command will create a newly formatted NTFS volume, or whatever file system you choose.

New-Partition -DiskNumber <DiskNumber> -UseMaximumSize -AssignDriveLetter
Add-PartitionAccessPath -DiskNumber <#> -PartitionNumber <#> -AccessPath C:\MyMountPoint
Format-Volume -DriveLetter <DriveLetter> -FileSystem NTFS

SAN Policy
One final setting not to overlook is the SAN Policy, which defines how disks are mounted. If you are running Windows Server 2012, this is accessible (Get/Set) from PowerShell; use Get-StorageSetting to find the current disk policy. If it has not been changed, it will have defaulted to OfflineShared on Windows Server 2012 editions. This should be changed to OnlineAll. To change this to the recommended setting, run the following:

Set-StorageSetting -NewDiskPolicy OnlineAll

Policy Setting | Effect
OfflineAll | All new disks are left offline by default.
OfflineInternal | All disks on buses that are detected as internal are left offline by default.
OfflineShared | All disks on sharable buses, such as iSCSI, FC, or SAS, are left offline by default.
OnlineAll (Recommended) | All disks are automatically brought online.

On Windows 2008 / 2008 R2 the SAN Policy can also be changed from Windows PowerShell by piping to DiskPart:

"san policy=OnlineAll" | diskpart

Note: If working in the Windows Disk Management tool or with Windows PowerShell is not for you, all of the aforementioned tasks can be performed using DiskPart, a command line utility included in Windows that provides the ability to manage disks, volumes, and partitions.
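The individual commands above can also be chained into a single pipeline that finds uninitialized Pure Storage disks, initializes them as GPT, and creates a formatted volume on each. This is a sketch rather than an official procedure; it assumes the SAN Policy is already OnlineAll (or the disks have otherwise been brought online), and the volume label PureVol is only an example:

Get-Disk | Where-Object { $_.FriendlyName -like "PURE*" -and $_.PartitionStyle -eq "RAW" } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "PureVol" -Confirm:$false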
Configure MPIO Policies
Now that the Windows host has the disks connected, initialized, and online, the MPIO device properties can be verified. To access the Multi-Path Disk Device Properties, perform the following steps:
1. Open Windows Server Manager
2. Click Tools > Computer Management to open the Computer Management application.
3. Click Storage > Disk Management to access all of the volumes connected to the Windows host.
4. Right-click one of the new Disk # entries from the Pure Storage FlashArray
5. Click Properties
6. Click the MPIO tab

The dropdown menu Select the MPIO Policy, shown in Figure 9, can be used to select a desired policy, but as mentioned earlier the default policy of Round Robin is recommended. All of the paths that have been set up from the host to the Pure Storage FlashArray will be listed with their Path Id, Path State, and so on; each should read Active/Optimized, as shown in Figure 9. It is important to note that Pure Storage leverages the Microsoft Device Specific Module (DSM), which is listed as the DSM Name, also visible in Figure 9.

Figure 9. Select the MPIO Policy.
By selecting an individual Path Id and clicking Edit, it is possible to see all of the details for the given path, as seen in Figure 10.

Figure 10. MPIO Path Details.
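The Round Robin default can also be confirmed, or set globally, from PowerShell on Windows Server 2012 and later. A minimal sketch using the MPIO module, where RR is the Round Robin policy value:

Get-MSDSMGlobalDefaultLoadBalancePolicy              # view the current global default
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR   # apply Round Robin to newly claimed MPIO disks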
Set Disk and MPIO Recommendations
Configure the Pure Storage recommended settings using the Windows PowerShell below. The commands will do the following:
1. Display the current MPIO settings of the Windows host
2. Set all four of the recommended MPIO settings

Get-MPIOSetting
Set-MPIOSetting -NewPathRecoveryInterval 20
Set-MPIOSetting -CustomPathRecovery Enabled
Set-MPIOSetting -NewPDORemovePeriod 30
Set-MPIOSetting -NewDiskTimeout 20

Registry Key | Windows Default (Decimal) | Recommended (Decimal)
HKLM\System\CurrentControlSet\Services\Disk\TimeoutValue | 60 seconds | 60 seconds
HKLM\System\CurrentControlSet\Services\MPIO\Parameters\PDORemovePeriod | 20 seconds | 30 seconds
HKLM\System\CurrentControlSet\Services\MPIO\Parameters\UseCustomPathRecoveryInterval | 0 = disabled | 1 = enabled
HKLM\System\CurrentControlSet\Services\MPIO\Parameters\PathRecoveryInterval | 55 seconds | 20 seconds
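To confirm the values landed, the registry keys from the table can be read back directly. A minimal verification sketch using the paths listed above:

Get-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\Disk" -Name TimeoutValue
Get-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\MPIO\Parameters" |
    Select-Object PDORemovePeriod, UseCustomPathRecoveryInterval, PathRecoveryInterval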
HBA Settings
Pure recommends the following HBA settings. They can be modified via the following tools:
Emulex: OneCommand Manager (OCManager) or HBAnyware (GUI/CLI)
QLogic: QConvergeConsole or SANsurfer (GUI/CLI)
Brocade: Host Connectivity Manager (HCM) or Brocade Command Line Utility (BCU)

Setting | HBA | Default | Recommended
Execution Throttle | QLogic | |
Queue Depth | Emulex | |
NodeTimeOut | Emulex | 30 | 0
Queue Depth (qdepth) | Brocade | |

Additional Tools
Windows allows administrators to see additional Fibre Channel information. One tool that can be used is fcinfo, which can be downloaded from the Microsoft download site; it gives you access to most of the older Host Bus Adapter API functions.

Figure 11. fcinfo Example.

Another helpful tool is mpclaim, which is built into Windows. Using this tool, an administrator can see which device targets are actually attached.

Figure 12. mpclaim Example.
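For example, run from an elevated prompt (the disk number 14 is only an illustration; substitute one reported by the summary output):

mpclaim -s -d        # summary of all MPIO disks and their load balance policies
mpclaim -s -d 14     # path details for MPIO Disk 14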
3rd Party MPIO DSM Interoperability
Third-party DSMs will not claim Pure Storage LUNs. At this time, Pure LUNs are not supported by 3rd party DSM modules such as EMC PowerPath, NetApp ONTAP DSM, and HP 3PAR DSM.

Configuring iSCSI
Usually the iSCSI initiator client is built in. If it is not present, download and install the latest version (2.08) of the Microsoft iSCSI initiator that is relevant to your operating system.

Pure Storage does not support port aggregation or VLANs per iSCSI port. Pure Storage provides high availability through the use of Multipath I/O; neither MCS nor Link Aggregation (NIC teaming) is supported.
Configure the IP networking settings on the Pure Storage iSCSI 10GbE ports. If this task has not been performed, the IP settings can be configured via the Pure Storage GUI or CLI interfaces.

From the Pure Storage GUI
Select the System tab, then select the Networking option. Select the relevant iSCSI interface and select the Edit option. Once the changes are completed, the interface can be enabled. Pure Storage iSCSI interfaces support jumbo frames, so an MTU of 9000 can be selected if the intervening network supports jumbo frames without fragmentation.

From the Pure Storage CLI
Use the purenetwork command to set the required attributes:

purenetwork setattr --address xxx.xxx.xxx.xxx --netmask xxx.xxx.xxx.xxx --gateway xxx.xxx.xxx.xxx --mtu 9000 <Ethernet interface>
purenetwork enable <Ethernet interface>

Once configured, setup and discovery of the Pure Storage FlashArray and the relevant targets can be completed. Launch the iSCSI initiator, discover the target IP address, and connect to the Pure Storage FlashArray. Add the discovered target or targets to the list of favorite targets and enable the multipath option.
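The same discovery and connection can be scripted with the iSCSI cmdlets available on Windows Server 2012 and later. A minimal sketch; the portal address 192.168.1.100 is a placeholder for one of the FlashArray iSCSI port IPs configured above:

New-IscsiTargetPortal -TargetPortalAddress 192.168.1.100
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true

The -IsPersistent switch registers the target as a favorite so the connection is restored at boot, mirroring the GUI steps above.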
Connect to all the discovered iSCSI interfaces on the Pure Storage FlashArray and add them to the favorite targets.

Performance Tuning - iSCSI
In order to get the best performance out of a single host, 8 iSCSI sessions to a Pure Storage FlashArray are recommended. A session is normally created for every target port to which a host is connected. If the host is connected to fewer than 8 paths, additional sessions can be configured going to the same target ports. To add more iSCSI sessions, repeat the steps above for the same target portal IP address; a scripted alternative is sketched below.
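As an illustration of that repetition, additional sessions to the same target can be created by calling Connect-IscsiTarget more than once. This sketch assumes MPIO is handling the duplicate sessions and uses the 8-session recommendation above as the goal:

$target = Get-IscsiTarget | Select-Object -First 1
$existing = (Get-IscsiSession | Measure-Object).Count
for ($i = $existing; $i -lt 8; $i++) {
    # Each call adds another session to the same target portal
    Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsMultipathEnabled $true -IsPersistent $true
}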
Space Reclamation
One challenge inherent in storage arrays that present thin-provisioned volumes is how the various operating systems that use those volumes indicate that data has been deleted. This is referred to as dead space reclamation and is provided by one of two techniques: SSD TRIM or SCSI UNMAP. This process enables you to reclaim blocks of thin-provisioned LUNs by telling the array that specific blocks are obsolete. Most legacy operating systems do not inherently provide this capability, so special attention needs to be paid if a host performs large delete operations without rewriting new data into the deleted space. Most current operating environments, such as ESX 5.x, Windows 2012 / 2012 R2, and Red Hat Enterprise Linux 6, provide this functionality.

SSD Trim
TRIM is not a command that forces the SSD to immediately erase data. The TRIM command simply notifies the SSD which LBAs (Logical Block Addresses) are no longer needed. The SSD takes those addresses and updates its own internal map to mark those locations as invalid. With this information, the SSD will no longer move the marked-invalid blocks during garbage collection (GC), eliminating the time wasted rewriting invalid data to new flash pages.

Benefits of TRIM
Lower write amplification: less data is rewritten and more free space is available during GC
Higher throughput: less data to move during GC
Improved flash endurance: the drive writes less to the flash by not rewriting invalid data during GC
Keeps SSDs "trim": as an SSD comes close to full, there is a substantial slowdown in write performance as more flash cells must undergo write/erase cycles before data can be rewritten
Reduced flash controller (processor) time: many resources are used for wear leveling, so more free blocks help the dynamic wear leveling algorithms

SCSI UNMAP
SCSI UNMAP is the full equivalent of TRIM, but for SCSI disks. UNMAP is a SCSI command that a host can issue to a storage array to free blocks (LBAs) that no longer need to be allocated.

Benefits of SCSI UNMAP
Beneficial to thinly provisioned storage pools, as reclaimed blocks are put back into the unused pool
Avoids out-of-space conditions for thinly provisioned pools of storage
Automatic operation that no longer needs to be run manually on the host
No longer a need to run backend array tools to perform thin reclamation (zero page reclaim), which consumed valuable array cycles and potentially slowed down host performance
Operating Systems and SCSI UNMAP
The following is the currently known list of operating systems that either support or do not support the SCSI UNMAP command.

Host Operating System | File System Support | T10 UNMAP
Windows Server 2012 | NTFS supported; Resilient File System (ReFS) does not support TRIM/UNMAP | Yes (1)
Windows Server 2003/2008 (4) | No native OS support; must use SDelete (2) or Fsutil (3) | No
Windows Hyper-V 2012 | VHDX (5) supported; VHD not supported | Yes

1. While Server 2012 supports SSD TRIM (ATA), Pure LUNs are discovered as SCSI devices, so UNMAP will be used to reclaim space
2. Secure Delete (SDelete)
3. Fsutil can be used to create a balloon file that consumes a portion of the free space; the balloon file can then be deleted
4. Windows 2008 R2 supports SSD TRIM (ATA); however, TRIM is not used for Pure Storage LUNs (SCSI)
5. The virtual hard disk must be formatted as a .vhdx file (dynamic or fixed). SCSI UNMAP is not supported with the .vhd virtual hard disk format. The guest VM must also support SCSI UNMAP.

Microsoft Windows Server 2008 R2
Windows 2003, 2008, and 2008 R2 do not natively provide the capability to reclaim space. Microsoft has provided an alternative through a tool called sdelete, which can be downloaded through TechNet. sdelete is a command line utility that allows you to delete one or more files and/or directories, or to cleanse the free space on a logical disk. sdelete accepts wildcard characters as part of the directory or file specifier.

usage: sdelete [-p passes] [-s] [-q] <file or directory>...
       sdelete [-p passes] [-z|-c] [drive letter]...
  -a         Remove Read-Only attribute
  -c         Clean free space
  -p passes  Specifies number of overwrite passes (default is 1)
  -q         Don't print errors (Quiet)
  -s or -r   Recurse subdirectories
  -z         Zero free space (good for virtual disk optimization)

When utilizing the -z option, a balloon file is generated, so evaluate the available space before performing this operation. If utilization is high (80-90%), garbage collection (GC) will take care of the space clean-up after host-side deletion. Garbage collection may take some time, and the reader should be aware of this.
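For example, to reclaim space on a volume mounted as E: by zeroing its free space (the drive letter is only an illustration), run the following from an elevated prompt:

sdelete -z E: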
Microsoft Windows Server 2012 / 2012 R2
Windows 2012 natively supports the capability to reclaim space and will do so by default. If you wish to disable automatic reclamation, run the following Windows PowerShell as appropriate:

Disable Delete Notification
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\FileSystem" -Name DisableDeleteNotification -Value 1

Enable Delete Notification
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\FileSystem" -Name DisableDeleteNotification -Value 0

If space reclamation is disabled, you can use Defragment and Optimize Drives to perform space reclamation manually. To start the tool, go to Server Manager > Tools > Defragment and Optimize Drives.
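The delete notification state can also be checked or changed with the built-in fsutil utility, which operates on the same setting as the registry commands above:

fsutil behavior query DisableDeleteNotify      # 0 = delete notifications (UNMAP) enabled
fsutil behavior set DisableDeleteNotify 0      # re-enable automatic space reclamation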
Microsoft Hyper-V
Deleting a file from the file system of an UNMAP-capable guest operating system sends UNMAP requests to the Hyper-V host. For this to work successfully, the virtual hard disk must be formatted as a VHDX file, either dynamic or fixed; this feature does not work with the older Virtual Hard Disk (VHD) format. Also, the guest OS must support SCSI UNMAP (see the chart under Operating Systems and SCSI UNMAP). Hyper-V pass-through disks and Virtual Fibre Channel (NPIV), which show up as physical disks to the guest VM, are also supported.

SAN Zoning Recommendations
Pure Storage supports enterprise-class single host initiator zoning configurations. It is recommended, whenever possible and to aid in troubleshooting, that the zoning practices advised by the switch vendors be implemented.

Figure 1: Current Pure Storage port connectivity

Offset host connections to the Pure Storage FlashArray so as to optimize Fibre Channel or iSCSI HBA load. A fair balance can be obtained by alternating connectivity from the fabric between odd and even host ports on the relevant storage controller node. For example, in a highly available 2-node storage controller configuration:

pureuser@purestorage> pureport list --initiator
Initiator WWN            Target   Target WWN
21:00:00:24:FF:23:23:F4  CT0.FC1  52:4A:93:70:00:00:86:01
21:00:00:24:FF:23:23:F4  CT1.FC1  52:4A:93:70:00:00:86:11
21:00:00:24:FF:27:29:D6  CT0.FC2  52:4A:93:70:00:00:86:02
21:00:00:24:FF:27:29:D6  CT1.FC2  52:4A:93:70:00:00:86:12
Troubleshooting Brocade Fill Words
Brocade FC switches and their OEM derivatives have been known to have performance and connectivity deficiencies when used with QLogic HBAs operating at 8Gb. The Pure Storage FlashArray uses the QLogic 2642 dual-port FC HBA and is thus susceptible to this deficiency. This section outlines how to properly configure and tune a Brocade switch in order to avoid excessive CRC and decode errors.

IDLE Fill Word
Prior to FOS version 7.0, Brocade FC switches and their derivatives used IDLE primitives for both link initialization and fill words. This ensured successful link initialization between Brocade switch ports and end devices operating at 1G/2G/4G speeds. However, some 8G devices, such as QLogic HBAs, are not capable of properly establishing links with Brocade 8G FC switches when ARB/ARB or IDLE/ARB primitives are used. For these devices, a new mode is available that provides a hybrid for both link initialization and the fill word.

Problem Symptoms
Excessive errors can prevent servers from connecting properly or performing with optimum efficiency with the Pure FlashArray. Decode errors indicate failure on an HBA. Failure on a Brocade switch may be indicated by er_enc_out and/or a large number of er_bad_os errors.

swd77:root> portstatsshow 6
stat_wtx                    4-byte words transmitted
stat_wrx                    4-byte words received
stat_ftx                    Frames transmitted
stat_frx                    Frames received
stat_c2_frx        0        Class 2 frames received
stat_c3_frx                 Class 3 frames received
stat_lc_rx         0        Link control frames received
stat_mc_rx         0        Multicast frames received
stat_mc_to         0        Multicast timeouts
stat_mc_tx         0        Multicast frames transmitted
tim_rdy_pri        0        Time R_RDY high priority
tim_txcrd_z        0        Time TX Credit Zero (2.5Us ticks)
tim_txcrd_z_vc     0-3:
tim_txcrd_z_vc     4-7:
tim_txcrd_z_vc     8-11:
tim_txcrd_z_vc     12-15:
er_enc_in          0        Encoding errors inside of frames
er_crc             0        Frames with CRC errors
er_trunc           0        Frames shorter than minimum
er_toolong         0        Frames longer than maximum
er_bad_eof         0        Frames with bad end-of-frame
er_enc_out         318      Encoding error outside of frames
er_bad_os                   Invalid ordered set
er_rx_c3_timeout   0        Class 3 receive frames discarded due to timeout
er_tx_c3_timeout   0        Class 3 transmit frames discarded due to timeout
er_c3_dest_unreach 0        Class 3 frames discarded due to destination unreachable
er_other_discard   0        Other discards
er_type1_miss      0        frames with FTB type 1 miss
er_type2_miss      0        frames with FTB type 2 miss
er_type6_miss      0        frames with FTB type 6 miss
er_zone_miss       0        frames with hard zoning miss
er_lun_zone_miss   0        frames with LUN zoning miss
er_crc_good_eof    0        Crc error with good eof
er_inv_arb         0        Invalid ARB
open               0        loop_open
transfer           0        loop_transfer
opened             0        FL_Port opened
starve_stop        0        tenancies stopped due to starvation
fl_tenancy         0        number of times FL has the tenancy
nl_tenancy         0        number of times NL has the tenancy
zero_tenancy       0        zero tenancy

Problem Resolution
In order to ensure correct interoperability between a Brocade FC switch and the Pure FlashArray, use the portcfgfillword command to set the fill word of the connecting port to option 3 (aa-then-ia).

Brocade5100:admin> portcfgfillword 0 3

Usage: portcfgfillword PortNumber Mode
Mode:
0 / -idle-idle   - IDLE in Link Init, IDLE as fill word (default)
1 / -arbff-arbff - ARBFF in Link Init, ARBFF as fill word
2 / -idle-arbff  - IDLE in Link Init, ARBFF as fill word (SW)
3 / -aa-then-ia  - If ARBFF/ARBFF fails, then do IDLE/ARBFF
References
The following links were used during the research for this Best Practices document.

CRC and Decode Errors on 8-Gbps Fibre Channel Ports Connected to Brocade Switches
Brocade 8 Gbps Fibre Channel Switches and Fill Words
Windows Sysinternals: SDelete
Windows Storage Team Blog: Updated Guidance on Microsoft MPIO Settings
Windows Storage Team Blog: The Windows Disk Timeout Value: Less is Better
Microsoft Multipath I/O Step-by-Step Guide
Pure Storage, Inc.
650 Castro Street, Suite #400
Mountain View, CA