N_Port ID Virtualization
A Detailed Review

Abstract

This white paper provides a consolidated study of the N_Port ID Virtualization (NPIV) feature and its usage on different platforms, and of NPIV integration with EMC PowerPath on the AIX platform.

February 2010
Copyright 2010 EMC Corporation. All rights reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. Part Number h6894
Table of Contents

Executive summary
Introduction
  Audience
Current challenges
  From the server perspective
  From the SAN fabric perspective
N_Port ID Virtualization
N_Port Virtualizer
NPIV-based LUN access
NPIV and QoS (VMware-specific implementation)
PowerPath changes for NPIV (AIX-specific)
NPIV and performance
Conclusion
References
Executive summary

Fibre Channel is a flexible standard based on a networking architecture that can be used as a transport mechanism for a number of upper-level protocols, the most common being SCSI and TCP/IP. Fibre Channel is a serial, full-duplex protocol; it has sophisticated flow control that allows it to be extended over long distances. One of the most remarkable Fibre Channel evolutions is the implementation of the storage area network (SAN). In SANs, Fibre Channel has become the industry's de facto fast-switching standard for connecting client computers and servers to highly scalable volumes of data. It also provides improved management and control, better viewing and reporting, fault tolerance, reduced downtime, and better efficiency to data centers.

N_Port ID Virtualization (NPIV) is a technical capability to dynamically increase Fibre Channel HBA port virtualization. This technology is gaining importance in the storage virtualization domain, as data center administrators can clearly see the value of NPIV-based solutions in deployment scenarios such as:

- VMware, where the number of virtual machines can grow as business requirements and host resources allow
- Virtual machines running on blade servers
- Environments with increased fabric port requirements

Introduction

This white paper provides technical insights into NPIV-based solutions for the deployment challenges mentioned above, and into NPIV as implemented with EMC PowerPath on the AIX platform.

Audience

This white paper is intended for technology professionals, data center system administrators, EMC and non-EMC technical staff, and EMC customers, and provides a consolidated study of NPIV and its features.

Current challenges

From the server perspective

The current trend in data center design is server virtualization, or the use of virtual machine (VM) technology to prevent proliferation of physical servers.
All virtual machines running on a physical server share the same physical I/O connections. In other words, a virtual machine monitor, or hypervisor, blends individual VM disk I/Os before sending them to the SAN, which creates potential bandwidth contention problems and quality-of-service issues for applications running in individual virtual machines. The current set of tools used by storage administrators to monitor, troubleshoot, and secure the SAN also loses application-level visibility, since all I/Os originate from the same physical HBA.

In a non-virtual environment, a typical SAN practice is to create a zone when assigning a storage logical unit number (LUN) to a server. A zone permits only one particular server to access that LUN. This is done by assigning the World Wide Name (WWN) of the server's SAN host bus adapter (HBA) to that LUN. Since each HBA has its own unique identifier, or WWN, this allows secure access to the LUN as well as customizable quality of service (QoS) for the application.

This best practice was broken by server virtualization. As mentioned, each zone is assigned to a WWN, but each virtualization host may support multiple virtual machines. Every virtual machine shares access to the server's HBA through the hypervisor and, as a result, presents the same WWN to the LUN. Without a mechanism to identify the individual virtual machines to the SAN, there is no way to track their use of SAN resources or to make sure they don't conflict over those resources.
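The WWN-based zoning practice described above can be illustrated with a minimal sketch. The WWPN values and the zone layout are hypothetical, invented purely for illustration:

```python
# Minimal sketch of WWN-based zoning: a zone maps initiator WWPNs to the
# storage target ports (and thus LUNs) they may access. All names here
# are hypothetical examples, not taken from any real fabric.

ZONES = {
    "zone_app01": {
        "initiators": {"10:00:00:00:c9:aa:bb:01"},   # HBA WWPN of server app01
        "targets": {"50:06:04:8c:52:a5:00:01"},      # array front-end port
    },
}

def access_allowed(initiator_wwpn: str, target_wwpn: str) -> bool:
    """Return True if some zone contains both the initiator and the target."""
    return any(
        initiator_wwpn in z["initiators"] and target_wwpn in z["targets"]
        for z in ZONES.values()
    )

# Two VMs sharing one physical HBA present the same initiator WWPN,
# so the fabric cannot tell them apart:
vm1_wwpn = vm2_wwpn = "10:00:00:00:c9:aa:bb:01"
print(access_allowed(vm1_wwpn, "50:06:04:8c:52:a5:00:01"))  # True for both VMs
```

Because both VMs log in with the same WWPN, the zone either admits both or neither; per-VM tracking and QoS are impossible at the fabric level.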
Another challenge that server virtualization brings to SAN storage is the live migration capability: the ability to move a virtual machine from one virtualized server to another. Administrators need to remember to include the second host's WWN in the zoning scheme; otherwise, after migration to the second host, the virtual machine cannot see its storage, because the SAN fabric will block access from an HBA with an unauthorized WWN.

One way to solve these issues is to dedicate physical HBAs to each virtual machine, rather than having the hypervisor manage virtual HBAs. But dedicating HBAs to each virtual machine is expensive and does not deliver much additional value for the investment. The inclusion of multiple physical HBAs would also require more physical ports (N_Ports) in the SAN fabric, resulting in a bigger fabric.

From the SAN fabric perspective

With the increasing use of blade servers in SAN environments, the deployment of aggregation switches is becoming more widespread. One major concern when designing and building Fibre Channel based SANs is the total number of switches, or domains, that can exist in a physical fabric. As the edge switch population grows, the number of domain IDs becomes a concern. The domain is the address of a physical switch or logical virtual fabric; the domain ID is the most significant byte of an endpoint's Fibre Channel ID (FCID) (Figure 1).

Figure 1. Fibre Channel ID (FCID)

The switch uses the FCID to route frames from a given source (initiator) to any destination (target) in the SAN fabric. This one byte allows up to 256 possible addresses, of which the Fibre Channel standard allows a total of 239 to be used as domain IDs. But having more and more domain IDs complicates fabric management and impacts performance because of the many inter-switch connections. Another design concern is interoperability with third-party switches.
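The three-byte FCID layout described above, with the domain ID as the most significant byte, can be sketched as follows. This is a generic illustration of the 24-bit addressing scheme, not vendor-specific code:

```python
# A 24-bit Fibre Channel ID is conventionally split into three bytes:
# Domain (most significant), Area, and Port.

def split_fcid(fcid: int) -> tuple[int, int, int]:
    """Split a 24-bit FCID into (domain, area, port) bytes."""
    if not 0 <= fcid <= 0xFFFFFF:
        raise ValueError("FCID must be a 24-bit value")
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

domain, area, port = split_fcid(0x010203)
print(domain, area, port)  # 1 2 3

# The single domain byte yields 256 raw values; the standard reserves
# some of them, leaving 239 usable as switch domain IDs.
```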
Different SAN fabric vendors interpret the Fibre Channel addressing standard differently. In addition, some vendor-specific attributes used for switch-to-switch connectivity (expansion port, or E_Port, connectivity) made connections among different vendors' switches challenging, leading customers to implement edge switch technology that matched the core director type in the fabric. To address these concerns, two features, N_Port ID Virtualization and N_Port Virtualizer, were developed.

N_Port ID Virtualization (NPIV) is an ANSI T11 standard that describes how a single Fibre Channel HBA port (a single N_Port with a single FCID) can register several World Wide Port Names (WWPNs), or multiple N_Port IDs, with the SAN fabric. This allows a fabric-attached N_Port to claim multiple fabric addresses, each of which appears as a unique entity on the Fibre Channel fabric. In other words, NPIV-capable HBAs can provide multiple WWPNs rather than registering a single WWPN in the fabric. This is beneficial in two ways:

- In a virtual machine environment, each VM can have separate WWPNs, so the hypervisor is relieved of the I/O blending operation.
- In a virtual machine environment where many host operating systems or applications run on a physical host, each virtual machine can now be managed independently from zoning, aliasing, and security perspectives. Also, no extra physical ports need to be connected in the SAN fabric, so additional edge switches are not required.

Figure 2 shows an example of an NPIV-aware host connection. In the figure, the NPIV-capable SAN is a combination of NPIV-capable HBAs and NPIV-capable switches.
Figure 2. NPIV-aware server host connection

An HBA that supports the NPIV feature follows the standard login process. The initial connection and login to the fabric are performed through the standard fabric login (FLOGI) process. All subsequent logins, for either virtual machines or logical partitions (LPARs) on a mainframe, are transformed into Fabric Discovery (FDISC) login commands. The FDISC logins follow the same standard process and acquire additional addresses. Figure 3 steps through the login process of an NPIV uplink and the local logins to the NPIV-enabled adapter.

Figure 3. NPIV login process
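The login sequence above can be sketched as a toy fabric model: the first login on a physical port is a FLOGI, and each subsequent NPIV login is an FDISC that acquires an additional N_Port ID within the same domain. This is an illustrative simulation, not switch firmware; the domain/area values and WWPN names are assumptions:

```python
class ToyFabricPort:
    """Toy model of one switch F_Port handing out N_Port IDs."""

    def __init__(self, domain: int, area: int):
        self.domain, self.area = domain, area
        self.next_port = 0
        self.logins = []  # list of (command, wwpn, fcid)

    def login(self, wwpn: str) -> int:
        # The first login on the port is a FLOGI; every NPIV login
        # that follows is transformed into an FDISC.
        command = "FLOGI" if not self.logins else "FDISC"
        fcid = (self.domain << 16) | (self.area << 8) | self.next_port
        self.next_port += 1
        self.logins.append((command, wwpn, fcid))
        return fcid

port = ToyFabricPort(domain=0x01, area=0x02)
port.login("phys-hba-wwpn")   # FLOGI -> FCID 0x010200
port.login("vm1-vport-wwpn")  # FDISC -> FCID 0x010201
port.login("vm2-vport-wwpn")  # FDISC -> FCID 0x010202
print([cmd for cmd, _, _ in port.logins])  # ['FLOGI', 'FDISC', 'FDISC']
```

Note how every address comes from the same domain and area bytes; only the port byte differs, which is why NPIV adds identities without consuming fabric ports.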
N_Port Virtualizer

An extension to NPIV is the N_Port Virtualizer (NPV) feature, which allows an edge switch or end fabric device to behave like an NPIV-based HBA toward the core Fibre Channel director (Figure 4). The device aggregates the locally connected host ports, or N_Ports, into one or more uplinks (pseudo-interswitch links) to the core switches. The login process for the N_Port uplink is the same as for an NPIV-enabled HBA; the only requirement is that the core director support the NPIV feature. As end devices log in to the NPV-enabled edge switch, the FCID addresses they are assigned use the domain of the core director. Because the connection is treated as an N_Port, not an E_Port, the edge switch shares the domain ID of the core switch as FCIDs are allocated. The NPV-enabled edge switch no longer requires a separate domain ID to connect to the fabric, so NPV eliminates the need to consume additional domain IDs for extra edge switches.

Figure 4. An N_Port Virtualizer-enabled edge switch behaves like an HBA to the core switch

NPIV-based LUN access

NPIV enables a single FC HBA port to register several unique WWNs with the fabric, each of which can be assigned to an individual virtual machine. When a WWN is assigned to a virtual machine, the virtual machine's configuration is updated to include a WWN pair, consisting of a World Wide Port Name (WWPN) and a World Wide Node Name (WWNN). When that virtual machine is powered on, the VMkernel instantiates a virtual port (VPORT) on the physical HBA that is used to access the LUN. The VPORT is a virtual HBA that appears to the FC fabric as a physical HBA; that is, it has its own unique identifier, the WWN pair that was assigned to the virtual machine. Each VPORT is specific to its virtual machine; when the virtual machine is powered off, the VPORT is destroyed on the host and no longer appears to the FC fabric.
If NPIV is enabled, four WWN pairs (WWPN and WWNN) are specified for each virtual machine at creation time. When a virtual machine using NPIV is powered on, it uses each of these WWN pairs in sequence to try to discover an access path to the storage. The number of VPORTs instantiated equals the number of physical HBAs present on the host, up to a maximum of four. A VPORT is created on each physical HBA on which a physical path is found, and each physical path determines the virtual path that will be used to access the LUN. Note that HBAs that are not NPIV-aware are skipped in this discovery process, because VPORTs cannot be instantiated on them.

In Figure 5, two IBM mainframe LPARs share a single physical FCP port. Each instance registers with the name server; the NPIV WWPN is supplied in the FDISC process.
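The instantiation rule above, one VPORT per NPIV-capable HBA with a path, capped by the four WWN pairs per VM, can be sketched as follows. The HBA names and dictionary layout are hypothetical:

```python
MAX_WWN_PAIRS = 4  # a VM is given four WWN pairs at creation time

def plan_vports(hbas: list[dict]) -> list[str]:
    """Return the names of HBAs on which a VPORT would be instantiated.

    Each HBA dict carries 'name', 'npiv_capable', and 'has_path' flags.
    Non-NPIV HBAs are skipped during discovery; at most MAX_WWN_PAIRS
    VPORTs can be created because each VPORT consumes one WWN pair.
    """
    eligible = [h["name"] for h in hbas if h["npiv_capable"] and h["has_path"]]
    return eligible[:MAX_WWN_PAIRS]

hbas = [
    {"name": "fcs0", "npiv_capable": True,  "has_path": True},
    {"name": "fcs1", "npiv_capable": False, "has_path": True},   # skipped
    {"name": "fcs2", "npiv_capable": True,  "has_path": True},
]
print(plan_vports(hbas))  # ['fcs0', 'fcs2']
```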
Figure 5. NPIV provides unique WWPNs to servers sharing an FCP port in a z/VM mainframe

In a z/VM mainframe, during Power On Reset (POR) or dynamic I/O activation, each FCP subchannel is assigned a WWPN by the Support Element (SE), regardless of whether the LPAR is NPIV-enabled. If the LPAR is not enabled for NPIV, the microcode does not use the NPIV WWPNs. The SE retains the assigned WWPN information on its hard drive, to prevent the data from being lost if the system is shut down or the FCP adapter is replaced. Each LPAR receives a different N_Port ID. This allows multiple LPARs or VM guests to read and write to the same LUN using the same physical port; without NPIV, writing to the same LUN over a shared port is not allowed. The Virtual FC adapter feature makes use of NPIV.

NPIV and QoS (VMware-specific implementation)

NPIV becomes truly valuable when it is used in conjunction with storage QoS capabilities like those that Brocade and other vendors provide in an end-to-end configuration. NPIV support in VMware extends the benefits of Brocade Adaptive Networking Services to each individual VM, rather than to the physical server running the VM. Cisco also plays a significant role in providing NPIV-based solutions for the SAN fabric, such as N_Port Virtualizer, developing solutions like Fabric Port (F_Port) Trunking, and integrating NPV with Cisco VSAN-based environments.

Using NPIV to optimize server virtualization gives an administrator another layer of control: system administrators can more completely understand and provide QoS to an application by specifying QoS per NPIV identity.
Figure 6. NPIV and QoS in a Brocade-based implementation

PowerPath changes for NPIV (AIX-specific)

PowerPath is host-based software that provides path management. PowerPath operates with several storage systems, on several operating systems, over Fibre Channel and iSCSI data channels. PowerPath supports multiple paths to a LUN, enabling it to provide:

- Automatic failover in the event of a hardware failure. PowerPath automatically detects path failure and redirects I/O to the available path(s). PowerPath also performs periodic path health checks and automatically restores a path when it recovers.
- Dynamic multipath load balancing. PowerPath distributes I/O requests to a logical device across all available paths, improving I/O performance and reducing management time and downtime by eliminating the need to configure paths statically across logical LUNs.

Typically, the AIX disk driver establishes a reserve on a LUN and manages that reserve, based on user-settable attributes for a physical volume, when the volume is opened. EMC PowerPath manages the reserve through its proprietary commands. The AIX disk driver does not inspect the command, nor does it need to understand the semantics of the vendor-specific reserve command. The PowerPath reserve does not change the state of the device from the perspective of the AIX disk driver. This completely decouples EMC PowerPath's management of reserves on logical units from the AIX disk driver.

In an NPIV LPAR Partition Mobility solution, the client I/O stack manages the reserve during the migration. This means that, depending on the type of SCSI RESERVE command issued, a specific action must be taken to break and/or re-establish the reserve on the LUN as part of the migration. The AIX disk driver does this if it is managing the reserves on behalf of the initiator; in the case of PowerPath, the AIX disk driver does not manage the reserves.
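The failover and load-balancing behavior described above can be sketched with a toy multipath device. This is not PowerPath code; the path names and the simple round-robin policy are illustrative assumptions:

```python
import itertools

class ToyMultipath:
    """Toy multipath device: round-robin over live paths, skip failed ones."""

    def __init__(self, paths: list[str]):
        self.alive = {p: True for p in paths}
        self._cycle = itertools.cycle(paths)

    def mark_failed(self, path: str):
        self.alive[path] = False   # in real software, found by a health check

    def mark_restored(self, path: str):
        self.alive[path] = True    # path restored after recovery

    def next_path(self) -> str:
        """Pick the next live path for an I/O; raise if every path is down."""
        if not any(self.alive.values()):
            raise IOError("all paths failed")
        while True:
            p = next(self._cycle)
            if self.alive[p]:
                return p

mp = ToyMultipath(["fscsi0/path0", "fscsi1/path1"])
mp.mark_failed("fscsi0/path0")          # hardware failure on one path
print(mp.next_path(), mp.next_path())   # I/O keeps flowing on the survivor
```

Failover here is simply the scheduler skipping dead paths; restoring a path makes it eligible again on the next round-robin pass.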
The AIX kernel provides a kernel service that allows vendor kernel extensions to act before and after a Partition Mobility migration. The kernel extension calls reconfig_register_ext to register a function with the AIX kernel, which the kernel then invokes on specific events. The kernel calls back into the kernel extension synchronously with respect to the registered events, so a particular stage of the LPAR migration cannot proceed until the registered function completes.
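The synchronous event-callback pattern behind reconfig_register_ext can be illustrated with a generic sketch. The event names and registry below are invented for illustration; they are not the AIX kernel interface, which is a C kernel service:

```python
# Generic sketch of a synchronous pre/post-migration callback registry.
# Event names are hypothetical; the real mechanism is the AIX kernel
# service reconfig_register_ext, which this only imitates in spirit.

_handlers: dict[str, list] = {"PRE_MIGRATE": [], "POST_MIGRATE": []}

def register(event: str, fn) -> None:
    """Register a callback for a migration event (analogy to registration)."""
    _handlers[event].append(fn)

def run_migration_stage(event: str) -> None:
    # The stage blocks until every registered handler returns, mirroring
    # the synchronous callback behavior described above: the migration
    # cannot proceed until the registered function completes.
    for fn in _handlers[event]:
        fn()

log = []
register("PRE_MIGRATE", lambda: log.append("break reserve"))
register("POST_MIGRATE", lambda: log.append("re-establish reserve"))
run_migration_stage("PRE_MIGRATE")
run_migration_stage("POST_MIGRATE")
print(log)  # ['break reserve', 're-establish reserve']
```

In this analogy, a multipathing driver would register handlers that break the SCSI reserve before migration and re-establish it afterward.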
NPIV and performance

Since NPIV is associated with multiple independent data channels sharing one physical port, it is worth discussing how NPIV handles performance. NPIV-capable HBAs optimize performance through their ability to interleave Fibre Channel data transfers at the frame level. To illustrate why frame-level multiplexing has such an impact, let's start with the basics of a Fibre Channel communication exchange. An I/O transaction in Fibre Channel is called an exchange. Exchanges contain one or more sequences, which in turn contain one or more frames, as shown in Figure 7. Frames can be 512, 1,024, or 2,048 bytes in length, but 2,048 is used almost universally. Think of the frame as a word, the sequence as a phrase, and the exchange as an entire conversation.

Figure 7. Fibre Channel I/O exchange

Frame interleaving allows the frames of one transfer to be inserted between the frames of another sequence, instead of waiting for the end of the conversation. The difference between exchange-level and frame-level multiplexing is illustrated in Figure 8. A data transfer conversation begins on the far left (Exchange 0) and is broken into three frames (frames 0, 1, and 2). A second conversation (Exchange 1) begins shortly after the first. With traditional exchange interleaving, the first frame of the second conversation cannot be transferred until the first conversation (Exchange 0) is complete. With frame-level interleaving, the second conversation (Exchange 1) begins earlier and is interleaved with the first, so it starts transferring data and completes sooner. This translates into more efficient, reliable data transfer and improved performance.

Figure 8. Exchange vs. frame interleaving
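The timing difference between the two policies can be sketched with a toy single-link scheduler. The frame counts and the one-frame-per-slot model are illustrative assumptions, not Fibre Channel timing:

```python
def completion_slots(frames_a: int, frames_b: int, frame_level: bool) -> tuple[int, int]:
    """Return the time slot at which each of two exchanges finishes.

    Exchange-level: all of A's frames go first, then all of B's.
    Frame-level: A and B alternate frames on the shared link.
    One frame occupies one time slot.
    """
    if not frame_level:
        return frames_a, frames_a + frames_b
    timeline, a, b = [], frames_a, frames_b
    while a or b:                       # alternate while frames remain
        if a:
            timeline.append("A"); a -= 1
        if b:
            timeline.append("B"); b -= 1
    done_a = max(i for i, x in enumerate(timeline) if x == "A") + 1
    done_b = max(i for i, x in enumerate(timeline) if x == "B") + 1
    return done_a, done_b

# Exchange 0 has three frames; a short Exchange 1 arrives with one frame.
print(completion_slots(3, 1, frame_level=False))  # (3, 4): B waits for all of A
print(completion_slots(3, 1, frame_level=True))   # (4, 2): B finishes much sooner
```

As in Figure 8, the interleaved short exchange starts and completes earlier, at the cost of slightly delaying the long one.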
Conclusion

Server virtualization technology has matured in recent years and is being adopted by a growing number of IT managers looking to reduce hardware and management costs through server consolidation. NPIV increases the security of virtual servers by enabling secure access to shared Fibre Channel storage using the zoning and LUN masking techniques familiar to SAN administrators. NPIV also reduces cost and complexity. Recapping the benefits of NPIV:

- LUN optimization through VM-to-LUN assignment
- Fabric QoS and prioritization at the VM level
- NPIV-capable initiator zoning at the VM level, relieving the hypervisor of the I/O blending operation
- Array-level LUN masking to control LUN access on a per-VM basis
- Accelerated VM migration
- VSAN integration and routing

Future enhancements to NPIV are in progress. For example, Cisco is developing solutions for F_Port Trunking and F_Port Channeling.

References

- NPIV entry on Wikipedia
- NPIV Functionality Protocol: ftp://ftp.t11.org/t11/member/fc/da/02-340v1.pdf
- T11 draft standards page
- NPIV in the Data Center (white paper)
- PowerVM Virtualization on IBM System p: Introduction and Configuration
- Deployment Guide: Emulex Virtual HBA Solutions and VMware vSphere 4
- Storage Networking Industry Association website
- ESG Lab Review: Emulex Optimized Server Virtualization
Topic Configuration s VMware vsphere 5.1 When you select and configure your virtual and physical equipment, you must stay at or below the maximums supported by vsphere 5.1. The limits presented in the
More informationvsphere Networking ESXi 5.0 vcenter Server 5.0 EN-000599-01
ESXi 5.0 vcenter Server 5.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
More informationHitachi Data Systems and Brocade Disaster Recovery Solutions for VMware Environments
Hitachi Data Systems and Brocade Disaster Recovery Solutions for VMware Environments Technical Brief By Sarah Hamilton, Hitachi Data Systems Marcus Thordal, Brocade July 2009 Executive Summary The rapid
More informationEmulex 8Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter IBM BladeCenter at-a-glance guide
Emulex 8Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter IBM BladeCenter at-a-glance guide The Emulex 8Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter enables high-performance connection
More informationAX4 5 Series Software Overview
AX4 5 Series Software Overview March 6, 2008 This document presents an overview of all software you need to configure and monitor any AX4 5 series storage system running the Navisphere Express management
More informationLarge SAN Design Best Practices
Large SAN Design Best Practices Introduction As storage area networks (SANs) continue to grow in size, many factors need to be considered to help scale and manage them. This paper focuses on large SAN
More informationHIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010
White Paper HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 Abstract This white paper demonstrates key functionality demonstrated in a lab environment
More informationVMware Site Recovery Manager with EMC RecoverPoint
VMware Site Recovery Manager with EMC RecoverPoint Implementation Guide EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com Copyright
More informationEMC Integrated Infrastructure for VMware
EMC Integrated Infrastructure for VMware Enabled by EMC Celerra NS-120 Reference Architecture EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000
More informationDell Networking S5000: The Building Blocks of Unified Fabric and LAN/SAN Convergence. Technical Whitepaper
Dell Networking S5000: The Building Blocks of Unified Fabric and LAN/SAN Convergence Dell Technical Marketing Data Center Networking May 2013 THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY CONTAIN
More informationNavisphere Quality of Service Manager (NQM) Applied Technology
Applied Technology Abstract Navisphere Quality of Service Manager provides quality-of-service capabilities for CLARiiON storage systems. This white paper discusses the architecture of NQM and methods for
More informationVIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS
VIDEO SURVEILLANCE WITH SURVEILLUS VMS AND EMC ISILON STORAGE ARRAYS Successfully configure all solution components Use VMS at the required bandwidth for NAS storage Meet the bandwidth demands of a 2,200
More informationEMC Symmetrix V-Max and Microsoft SQL Server
EMC Symmetrix V-Max and Microsoft SQL Server Applied Technology Abstract This white paper examines deployment and integration of Microsoft SQL Server solutions on the EMC Symmetrix V-Max Series with Enginuity.
More informationUsing EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4
Using EonStor FC-host Storage Systems in VMware Infrastructure 3 and vsphere 4 Application Note Abstract This application note explains the configure details of using Infortrend FC-host storage systems
More informationCisco Virtual SAN Advantages and Use Cases
Cisco Virtual SAN Advantages and Use Cases The Challenge Application data is accumulating at an increasingly fast pace, and to control costs, IT departments are looking at the benefits of economies of
More informationVirtualizing the SAN with Software Defined Storage Networks
Software Defined Storage Networks Virtualizing the SAN with Software Defined Storage Networks Introduction Data Center architects continue to face many challenges as they respond to increasing demands
More informationBenefits of Networked Storage: iscsi & Fibre Channel SANs. David Dale, NetApp
Benefits of Networked Storage: iscsi & Fibre Channel SANs David Dale, NetApp SNIA Legal Notice The material contained in this presentation is copyrighted by the SNIA. Member companies and individuals may
More informationHow To Use A Virtualization Server With A Sony Memory On A Node On A Virtual Machine On A Microsoft Vpx Vx/Esxi On A Server On A Linux Vx-X86 On A Hyperconverged Powerpoint
ESX 4.0 ESXi 4.0 vcenter Server 4.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent
More informationFrequently Asked Questions: EMC UnityVSA
Frequently Asked Questions: EMC UnityVSA 302-002-570 REV 01 Version 4.0 Overview... 3 What is UnityVSA?... 3 What are the specifications for UnityVSA?... 3 How do UnityVSA specifications compare to the
More informationVMWARE VSPHERE 5.0 WITH ESXI AND VCENTER
VMWARE VSPHERE 5.0 WITH ESXI AND VCENTER CORPORATE COLLEGE SEMINAR SERIES Date: April 15-19 Presented by: Lone Star Corporate College Format: Location: Classroom instruction 8 a.m.-5 p.m. (five-day session)
More informationWindows Server 2008 R2 Hyper-V Live Migration
Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...
More informationIntroduction to MPIO, MCS, Trunking, and LACP
Introduction to MPIO, MCS, Trunking, and LACP Sam Lee Version 1.0 (JAN, 2010) - 1 - QSAN Technology, Inc. http://www.qsantechnology.com White Paper# QWP201002-P210C lntroduction Many users confuse the
More informationViolin Memory Arrays With IBM System Storage SAN Volume Control
Technical White Paper Report Best Practices Guide: Violin Memory Arrays With IBM System Storage SAN Volume Control Implementation Best Practices and Performance Considerations Version 1.0 Abstract This
More informationEMC Integrated Infrastructure for VMware
EMC Integrated Infrastructure for VMware Enabled by Celerra Reference Architecture EMC Global Solutions Centers EMC Corporation Corporate Headquarters Hopkinton MA 01748-9103 1.508.435.1000 www.emc.com
More informationFibre Channel NPIV Storage Networking for Windows Server 2008 R2 Hyper-V and System Center VMM2008 R2
FC0054608-00 A Fibre Channel NPIV Storage Networking for Windows Server 2008 R2 Hyper-V and System Center VMM2008 R2 Usage Scenarios and Best Practices Guide FC0054608-00 A Fibre Channel NPIV Storage Networking
More informationIntegration of Microsoft Hyper-V and Coraid Ethernet SAN Storage. White Paper
Integration of Microsoft Hyper-V and Coraid Ethernet SAN Storage White Paper June 2011 2011 Coraid, Inc. Coraid, Inc. The trademarks, logos, and service marks (collectively "Trademarks") appearing on the
More informationA Platform Built for Server Virtualization: Cisco Unified Computing System
A Platform Built for Server Virtualization: Cisco Unified Computing System What You Will Learn This document discusses how the core features of the Cisco Unified Computing System contribute to the ease
More informationvsphere Networking vsphere 5.5 ESXi 5.5 vcenter Server 5.5 EN-001074-02
vsphere 5.5 ESXi 5.5 vcenter Server 5.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more
More informationIP SAN Fundamentals: An Introduction to IP SANs and iscsi
IP SAN Fundamentals: An Introduction to IP SANs and iscsi Updated April 2007 Sun Microsystems, Inc. 2007 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, CA 95054 USA All rights reserved. This
More informationWindows Host Utilities 6.0 Installation and Setup Guide
Windows Host Utilities 6.0 Installation and Setup Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S.A. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888) 4-NETAPP
More informationHitachi Data Systems and Brocade Disaster Recovery Solutions for VMware Environments
Hitachi Data Systems and Brocade Disaster Recovery Solutions for VMware Environments Technical Brief By Sarah Hamilton, Hitachi Data Systems, and Marcus Thordal, Brocade December 2009 Executive Summary
More informationFlexArray Virtualization
Updated for 8.2.1 FlexArray Virtualization Installation Requirements and Reference Guide NetApp, Inc. 495 East Java Drive Sunnyvale, CA 94089 U.S. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support
More informationVMware vsphere 5.0 Boot Camp
VMware vsphere 5.0 Boot Camp This powerful 5-day 10hr/day class is an intensive introduction to VMware vsphere 5.0 including VMware ESX 5.0 and vcenter. Assuming no prior virtualization experience, this
More informationIBM Flex System FC5022 2-port 16Gb FC Adapter IBM Redbooks Product Guide
IBM Flex System FC5022 2-port 16Gb FC Adapter IBM Redbooks Product Guide The network architecture on the IBM Flex System platform has been specifically designed to address network challenges, giving you
More informationDeploying SAP on Microsoft SQL Server 2008 Environments Using the Hitachi Virtual Storage Platform
1 Deploying SAP on Microsoft SQL Server 2008 Environments Using the Hitachi Virtual Storage Platform Implementation Guide By Sean Siegmund June 2011 Feedback Hitachi Data Systems welcomes your feedback.
More informationStorage Networking Foundations Certification Workshop
Storage Networking Foundations Certification Workshop Duration: 2 Days Type: Lecture Course Description / Overview / Expected Outcome A group of students was asked recently to define a "SAN." Some replies
More informationEMC Data Domain Management Center
EMC Data Domain Management Center Version 1.1 Initial Configuration Guide 302-000-071 REV 04 Copyright 2012-2015 EMC Corporation. All rights reserved. Published in USA. Published June, 2015 EMC believes
More informationSetup for Microsoft Cluster Service ESX Server 3.0.1 and VirtualCenter 2.0.1
ESX Server 3.0.1 and VirtualCenter 2.0.1 Setup for Microsoft Cluster Service Revision: 20060818 Item: XXX-ENG-QNNN-NNN You can find the most up-to-date technical documentation on our Web site at http://www.vmware.com/support/
More informationInForm OS 2.2.3/2.2.4 VMware ESX Server 3.0-4.0 QLogic/Emulex HBA Implementation Guide
InForm OS 2.2.3/2.2.4 VMware ESX Server 3.0-4.0 QLogic/Emulex HBA Implementation Guide InForm OS 2.2.3/2.2.4 VMware ESX Server 3.0-4.0 FC QLogic/Emulex HBA Implementation Guide In this guide 1.0 Notices
More informationiscsi Top Ten Top Ten reasons to use Emulex OneConnect iscsi adapters
W h i t e p a p e r Top Ten reasons to use Emulex OneConnect iscsi adapters Internet Small Computer System Interface (iscsi) storage has typically been viewed as a good option for small and medium sized
More informationI/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology
I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology Reduce I/O cost and power by 40 50% Reduce I/O real estate needs in blade servers through consolidation Maintain
More informationHP SN1000E 16 Gb Fibre Channel HBA Evaluation
HP SN1000E 16 Gb Fibre Channel HBA Evaluation Evaluation report prepared under contract with Emulex Executive Summary The computing industry is experiencing an increasing demand for storage performance
More informationIBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE
White Paper IBM TSM DISASTER RECOVERY BEST PRACTICES WITH EMC DATA DOMAIN DEDUPLICATION STORAGE Abstract This white paper focuses on recovery of an IBM Tivoli Storage Manager (TSM) server and explores
More informationGlobal Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com
Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c
More informationESXi Configuration Guide
ESXi 4.1 vcenter Server 4.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions
More informationEnterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011
Enterprise Storage Solution for Hyper-V Private Cloud and VDI Deployments using Sanbolic s Melio Cloud Software Suite April 2011 Executive Summary Large enterprise Hyper-V deployments with a large number
More informationUsing VMWare VAAI for storage integration with Infortrend EonStor DS G7i
Using VMWare VAAI for storage integration with Infortrend EonStor DS G7i Application Note Abstract: This document describes how VMware s vsphere Storage APIs (VAAI) can be integrated and used for accelerating
More informationOracle Database Deployments with EMC CLARiiON AX4 Storage Systems
Oracle Database Deployments with EMC CLARiiON AX4 Storage Systems Applied Technology Abstract This white paper investigates configuration and replication choices for Oracle Database deployment with EMC
More information