Managing a Fibre Channel Storage Area Network

Storage Network Management Working Group for Fibre Channel (SNMWG-FC)
November 20, 1998
Editor: Steven Wilson

Abstract

This white paper describes the typical Fibre Channel Storage Area Network (SAN) environment from the management perspective and identifies the challenges of managing that environment. As a solution to these challenges, a SAN management architecture is proposed and a roadmap is provided that outlines the tasks and deliverables of the Fibre Channel Storage Network Management Working Group.

Copyright 1998, 1999 by Storage Networking Industry Association

Introduction

The emergence of storage area networks (SANs) has created the need for new storage management tools and capabilities. While SANs provide many benefits, such as lower cost of ownership and increased configuration flexibility, they are more complex than traditional storage environments. This inherent complexity creates new storage management challenges.

The prominent technology for implementing storage area networks is Fibre Channel. Fibre Channel technology offers a variety of topologies and capabilities for interconnecting storage devices, subsystems, and server systems. These varying topologies and capabilities allow storage area networks to be designed and implemented in configurations ranging from simple to complex. Due to the potential complexity and diverse configurations of the Fibre Channel SAN environment, new management services, policies, and capabilities need to be identified and addressed.

This white paper assumes that the reader is familiar with the basic concepts of systems management. However, for the reader's benefit, a review of basic systems management concepts is provided at the end of this paper. The audience for this white paper includes:

- Systems Management Vendors;
- Fibre Channel Hardware and Software Vendors;
- System Integrators;
- System, Storage, and Network Architects.

The SNIA would like to acknowledge all members of the Storage Network Management Working Group for their contributions to this white paper. In particular, it would like to thank the following people for providing content: Harry Aine, Kim Banker, Kevin Collins, Kumar Malavalli, Michael O'Donnell, Brian Reed, Doug Shue, Richard Taborek, Andrea Westerinen, and Steven Wilson (who also edited this document).

The Fibre Channel SAN Environment

Historically, physical interfaces to storage consisted of parallel SCSI channels supporting a small number of SCSI devices. Fibre Channel provides a means to implement robust storage area networks that may consist of hundreds of devices. Fibre Channel storage area networks support high-bandwidth storage traffic on the order of 100 MB/s, and enhancements to the Fibre Channel standard will support even higher bandwidth in the near future.

Depending on the implementation, several different components can be used to build a Fibre Channel storage area network. The Fibre Channel SAN consists of components such as storage subsystems, storage devices, and server systems that are attached to a Fibre Channel network using Fibre Channel adapters. Fibre Channel networks in turn may be composed of many different types of interconnect entities, such as switches, hubs, and bridges. Different types of interconnect entities allow Fibre Channel networks to be built of varying scale. In smaller SAN environments, Fibre Channel arbitrated loop topologies employ hub and bridge products. As SANs increase in size and complexity to address flexibility and availability, Fibre Channel switches may be introduced. Each of the components that compose a Fibre Channel SAN must provide an individual management capability and participate in an often complex management environment.

Due to the varying scale of SAN implementations described above, it is useful to view a SAN from both a physical and a logical standpoint. The physical view allows the physical components of a SAN and the physical topology between them to be identified and understood. Similarly, the logical view allows the relationships and associations between SAN entities to be identified and understood.

Physical View

From a physical standpoint, a SAN environment typically consists of four major classes of components, modeled in the sketch below:

- End-user platforms such as desktops and/or thin clients;
- Server systems;
- Storage devices and storage subsystems;
- Interconnect entities.
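To make the physical view concrete, here is a minimal Python sketch, not part of the original paper, that models the four component classes and the physical links between them. All class and field names are hypothetical, chosen only to mirror the taxonomy above.

```python
from dataclasses import dataclass, field
from enum import Enum

class ComponentClass(Enum):
    END_USER_PLATFORM = "end-user platform"
    SERVER_SYSTEM = "server system"
    STORAGE_SUBSYSTEM = "storage subsystem"
    INTERCONNECT = "interconnect entity"   # switch, hub, or bridge

@dataclass
class SanComponent:
    name: str
    kind: ComponentClass

@dataclass
class PhysicalTopology:
    components: dict[str, SanComponent] = field(default_factory=dict)
    links: list[tuple[str, str]] = field(default_factory=list)  # physical cabling

    def add(self, component: SanComponent) -> None:
        self.components[component.name] = component

    def connect(self, a: str, b: str) -> None:
        self.links.append((a, b))

# A server and a RAID subsystem attached through a switch:
san = PhysicalTopology()
san.add(SanComponent("host1", ComponentClass.SERVER_SYSTEM))
san.add(SanComponent("switch1", ComponentClass.INTERCONNECT))
san.add(SanComponent("raid1", ComponentClass.STORAGE_SUBSYSTEM))
san.connect("host1", "switch1")
san.connect("switch1", "raid1")
```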

Typically, network facilities based on traditional LAN and WAN technology provide connectivity between end-user platforms and server system components. However, in some cases end-user platforms may be attached to the Fibre Channel network and may access storage devices directly. Server system components in a SAN environment can exist independently or as a cluster. As processing requirements continue to increase, computing clusters are becoming more prevalent. A cluster is defined as a group of independent computers managed as a single system for higher availability, easier manageability, and greater scalability. Server system components are interconnected using specialized cluster interconnects or open clustering technologies such as the Fibre Channel - Virtual Interface mapping.

Storage subsystems are connected to server systems, to end-user platforms, and to each other using the facilities of a Fibre Channel network. The Fibre Channel network is made up of various interconnect entities that may include switches, hubs, and bridges.

[Figure: a typical physical Fibre Channel SAN environment. End-user platforms (remote entities) connect over a LAN/WAN to server systems (connected entities); server systems and storage subsystems (connected entities: disks/RAID, tape library, optical library) attach to a Fibre Channel network of interconnect entities (switches, hubs, and bridges).]

Logical View

From a logical perspective, a SAN environment consists of SAN components and resources, as well as their relationships, dependencies, and other associations. Relationships, dependencies, and associations between SAN components are not necessarily constrained by physical connectivity. For example, a SAN relationship may be established between a client and a group of storage devices that are not physically co-located.

Logical relationships play a key role in the management of SAN environments. Some key relationships in the SAN environment are identified below:

- Storage subsystems and interconnect entities;
- Between storage subsystems;
- Server systems and storage subsystems (including adapters);
- Server systems and end-user components;
- Storage and end-user components;
- Between server systems.

As a specific example, one type of relationship is the concept of a logical entity group. In this case, server system components and storage components are logically classified as connected components because they are both attached to the Fibre Channel network. A logical entity group forms a private virtual network, or zone, within the SAN environment with a specific set of connected entities as members. Communication within each zone is restricted to its members.

In another example, where a Fibre Channel network is implemented using a switched fabric, the Fibre Channel network may be further broken down into logically independent sections called sub-fabrics, one for each possible combination of data rate and class of service. Sub-fabrics are in turn divided into regions and extended regions based on compatible service parameters. Regions and extended regions can also be divided into partitions called zones for administrative purposes.

Fibre Channel SAN Management Challenges

Now that the complexity and scope of the Fibre Channel SAN environment is better understood, the challenges of managing that environment can be considered. The basic challenge stems from the fact that Fibre Channel SANs utilize a complex network for interconnecting storage devices and server systems. This network is potentially made up of multiple components that have both physical and logical relationships to one another. A breakdown in one component or link in the SAN may manifest itself differently depending on the component that recognizes the condition. However, since a SAN is a network, the management challenges can usefully be discussed in terms of the basic systems management disciplines.

From a fault management perspective, tools and mechanisms must exist that allow data from multiple SAN components to be correlated and processed. Without the means to correlate data in a SAN, problem isolation is extremely difficult, if not impossible. For example, if multiple link failures were to occur simultaneously in a Fibre Channel network, these would need to be correlated with the multiple I/O failures seen in processing a host's I/O requests, as sketched below.
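As an illustration of the correlation problem, the following minimal Python sketch, not part of the paper, groups link-failure and I/O-error events that occur close together in time into a single incident. The component names and the time-window heuristic are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # reporting component, e.g. "switch1" or "host1"
    kind: str         # "link-failure" or "io-error"
    timestamp: float  # seconds

def correlate(events: list[Event], window: float = 5.0) -> list[list[Event]]:
    # Events within `window` seconds of each other are treated as one
    # incident, so a link failure can be tied to the I/O errors it caused.
    groups: list[list[Event]] = []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if groups and ev.timestamp - groups[-1][-1].timestamp <= window:
            groups[-1].append(ev)
        else:
            groups.append([ev])
    return groups

events = [
    Event("switch1", "link-failure", 100.0),
    Event("host1", "io-error", 101.2),
    Event("host2", "io-error", 101.9),
    Event("switch2", "link-failure", 400.0),
]
for incident in correlate(events):
    print([f"{e.source}:{e.kind}" for e in incident])
```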

Probably the greatest challenge involves the configuration management of SANs. Due to the large number of components and the multitude of physical and logical relationships among them, robust configuration capabilities must be provided for the Fibre Channel SAN. There will be tradeoffs between keeping track of configuration information within the Fibre Channel network and forcing a central management platform to ascertain Fibre Channel network topology using management mechanisms such as configuration files, name services, and SNMP interactions.

There are also challenges associated with performance management of SANs. Performance information must be provided at a component level as well as an overall system level. Tools and capabilities must exist that, again, correlate data from a variety of components to provide a system-level view of the overall SAN's performance.

Common capabilities must be provided to allow software and firmware updates to be managed from a central management station. These capabilities should allow a generic mechanism to transport updates to components from different vendors.

There are also challenges with the support of accounting or asset management. Capabilities such as standardized SAN resource identifiers containing asset information must be defined. In addition, a common mechanism must be provided to obtain the asset information from SAN resources.

Cluster management is an inherent requirement of SAN management and imposes its own set of challenges. Managing SAN clusters calls for robust solutions for workload management and for single point of control of the cluster's component systems. Data and storage management must be supported within and across clusters. Creating a scalable, single operational management view for SAN clusters enhances productivity and application availability by enabling the automatic detection, monitoring, and control of key cluster components.

Security management in the SAN also creates challenges. For example, restrictive mechanisms such as zoning can have both beneficial and negative consequences for central management. While zoning provides a means to isolate certain storage devices, a centralized manager may need to be a member of all zones and must therefore be a trusted entity. The overall security of the SAN then becomes only as secure as the trusted entity.

Ultimately, the purpose of providing robust management capabilities for the Fibre Channel SAN environment is to support policy management. Policy management is becoming increasingly important for the enterprise, where Fibre Channel SANs will most likely be implemented. In the SAN environment, management capabilities must support the invocation of services chained to well-defined policy. Examples of services that require policy are:

- Disaster recovery;
- Back-up;
- Hierarchical Storage Management (HSM);
- Archival.
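Services such as those listed above would be invoked under policy control. Here is a minimal sketch of that idea, assuming a simple condition/action representation; the Policy class and the back-up trigger are hypothetical, not an API defined by this paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    condition: Callable[[dict], bool]   # evaluated against SAN state
    action: Callable[[], None]          # service to invoke

def evaluate(policies: list[Policy], state: dict) -> None:
    # Execute the action of every policy whose condition holds.
    for p in policies:
        if p.condition(state):
            p.action()

def run_backup() -> None:
    print("invoking back-up service")

policies = [Policy(
    name="nightly-backup",
    condition=lambda s: s.get("hour") == 2,   # e.g., run at 02:00
    action=run_backup,
)]
evaluate(policies, {"hour": 2})   # -> invoking back-up service
```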

Storage Area Network Management Architecture

The primary goal of the SAN management architecture is to provide a common and consistent view of managed resources so that the challenges of SAN management can be addressed. Ideally, this is accomplished while preserving existing management implementations for SAN components. In order to achieve this goal, two layers are defined by the Storage Area Network Management architecture:

- Entity Management Services;
- Common Management Services.

[Figure: a high-level view of the Storage Area Network Management Architecture. The Common Management Services layer (Access Services, Common Data Model, Entity Mapping) sits above the Entity Management Services layer (SNMP, DMI, FC Services, SES, Proprietary Entity Management, Native CIM).]

Entity Management Services

The Entity Management Services (EMS) layer comprises existing and near-term SAN management services. Examples of these services are:

- Simple Network Management Protocol (SNMP) based services;
- Desktop Management Interface (DMI) based services;
- Fibre Channel Services;
- Storage Enclosure Services (SES);
- Proprietary Management Services;
- Native Common Information Model (CIM) Protocols and Services.

The EMS layer accommodates existing management instrumentation and allows additional entity instrumentation to be defined. The services provided by EMS are essentially the superset of the services associated with each individual entity management service. These services are manifested either as protocol interactions or as application programming interfaces. For example, SNMP management requests may be handled using standard SNMP protocol interactions between an agent on a Fibre Channel switch and the Common Management Services layer. Another implementation could issue SNMP requests through an API provided on a management station.

Common Management Services

The Common Management Services (CMS) layer is the heart of the SAN management architecture. It is based on an object-oriented design model with supporting and associated methods and services. Managed resources associated with SANs are abstracted as management objects by CMS. Management operations for each managed object are exposed through methods that allow attributes and properties associated with managed objects to be monitored and modified. Management operations effected upon objects at the CMS layer may result in multiple operations being performed at the EMS layer. For example, a method associated with a managed object could be implemented as a combination of SNMP, DMI, and Fibre Channel service operations performed at the EMS layer.

The Common Management Services layer is divided into three components:

- Entity Mapping;
- Common Data Model;
- Access Services.

The Entity Mapping component translates from the syntax and semantics of the Common Data Model to entity-specific implementations. This component is responsible for interacting with the appropriate entity management services to fulfill a management request invoked by a management application. Entity mapping responsibilities include making the appropriate operation requests to the entity management resources, collecting the responses to those requests, and building the resulting operation response. Entity mapping also collects unsolicited notifications such as alerts, events, and traps from managed resources on the SAN, and packages them for input to the Common Data Model component.
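Here is a minimal sketch of the entity-mapping fan-out just described, with stub back ends standing in for SNMP and Fibre Channel services. All names are hypothetical; a real implementation would issue actual protocol requests rather than return canned strings.

```python
from typing import Protocol

class EmsBackend(Protocol):
    def get(self, entity: str, attribute: str) -> object: ...

class SnmpBackend:
    def get(self, entity: str, attribute: str) -> object:
        return f"SNMP value of {attribute} on {entity}"        # stub

class FcServicesBackend:
    def get(self, entity: str, attribute: str) -> object:
        return f"FC-service value of {attribute} on {entity}"  # stub

class EntityMapper:
    def __init__(self, backends: dict[str, EmsBackend]):
        self.backends = backends  # entity -> the EMS service managing it

    def get_attribute(self, entities: list[str], attribute: str) -> dict:
        # One CMS-level request becomes one EMS request per entity;
        # the responses are collected into a single reply.
        return {e: self.backends[e].get(e, attribute) for e in entities}

mapper = EntityMapper({"switch1": SnmpBackend(), "raid1": FcServicesBackend()})
print(mapper.get_attribute(["switch1", "raid1"], "operational-status"))
```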

The Common Data Model component provides a well-defined and broad view of all managed SAN resources. Fibre Channel SAN resources include adapters, switches, hubs, bridges, storage subsystems, storage devices, and server systems. The definition of SAN management data is complex, spanning the representation of simple objects and their properties through complex topological and dependency relationships. The object design of the common data model supports the following capabilities (illustrated in the sketch at the end of this section):

1. Abstraction and classification. To reduce the complexity of the problem domain, high-level and fundamental concepts (the objects of the management domain) are defined. These objects are then grouped into types ("classes") by identifying common characteristics and features (properties), relationships (associations), and behavior (methods).

2. Object inheritance. By sub-classing from the high-level and fundamental objects, additional detail can be provided. A subclass inherits all the information (properties, methods, and associations) defined for its higher-level objects. Subclasses are created to put the right level of detail and complexity at the right level in the model. This can be visualized as a triangle where the top of the triangle is a fundamental object, and more detail and more classes are defined as you move closer to the base.

3. Ability to depict dependencies, component and connection associations. Relationships between objects are extremely powerful concepts. Before object-oriented management models, management standards captured relationships in multi-dimensional arrays or cross-referenced data tables. The object paradigm offers a more elegant approach in that relationships and associations are directly modeled. In addition, the way that relationships are named and defined describes the semantics of the object associations. Further semantics and information can be provided in properties (specifying common characteristics and features) of the associations.

4. Standard, inheritable methods. The ability to define standard object behavior (methods) is another form of abstraction. Bundling standard methods with an object's data is encapsulation. Imagine the flexibility and possibilities of a standard able to invoke a Reset method against a hung device, or a Reboot method against a hung computer system, regardless of the hardware, operating system, or device.

The Common Data Model component may be realized using the Desktop Management Task Force's (DMTF) Common Information Model (CIM) plus SAN extensions. CIM is a data model that provides a conceptual view of real-world, managed entities (like storage, networks, and systems), and also represents the users, organizations, and applications that interact with these entities. CIM attempts to unify and extend existing instrumentation and management using object-oriented constructs and design. The Common Data Model can build on the object and relationship definitions in CIM to represent the managed Fibre Channel SAN environment.

The Access Services component provides the mechanism that management applications use to interact with Common Management Services. The Access Services represent the exposed object interfaces defined by the Common Data Model and may be realized as XML, COM/DCOM, Java RMI, or CORBA.
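The four capabilities above can be illustrated with a short Python sketch. The classes are hypothetical stand-ins, not actual CIM class definitions.

```python
class ManagedElement:                       # fundamental object (triangle top)
    def __init__(self, name: str):
        self.name = name

    def reset(self) -> None:                # standard method, inherited by all
        print(f"resetting {self.name}")

class LogicalDevice(ManagedElement):        # subclass adds detail
    def __init__(self, name: str, port_count: int):
        super().__init__(name)
        self.port_count = port_count        # property specific to this level

class FcSwitch(LogicalDevice):              # further down toward the base
    pass

class Association:                          # relationships modeled directly
    def __init__(self, kind: str, a: ManagedElement, b: ManagedElement):
        self.kind, self.a, self.b = kind, a, b

switch = FcSwitch("switch1", port_count=16)
host = ManagedElement("host1")
link = Association("ConnectedTo", host, switch)
switch.reset()   # works on any managed element, regardless of hardware
```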

Road Map and Schedule

The Storage Networking Industry Association's (SNIA) Storage Network Management Working Group (SNMWG) is chartered to identify, define, and support open standards needed to address the increased management requirements imposed by storage area network environments. The Fibre Channel sub-group of the SNMWG is further focused on the management of Fibre Channel Storage Area Networks. During the next year, the SNMWG-FC working group intends to:

1. Coordinate efforts with the other sub-groups of the SNMWG;
2. Coordinate the development of appropriate Fibre Channel Management Information Bases (MIBs) and map them into CIM;
3. Develop a SAN Management Architecture specification;
4. Provide implementation guides based on the SAN Management Architecture specification;
5. Standardize host bus adapter (HBA) management in CIM.

Supplemental Reading

Basic Systems Management Disciplines

In simple terms, systems management is a set of disciplines that allows resources to be monitored, controlled, and serviced. Examples of resources in a storage area network include servers, disk drives, adapters, bridges, hubs, switches, storage controllers, and enclosures. The basic systems management disciplines are:

- Event and fault management;
- Configuration management;
- Change management;
- Asset and accounting management;
- Performance management;
- Security management.

Fault management is concerned with the detection and processing of exceptional conditions associated with a resource. Event management deals with notifications of state changes in general, of which faults are one type. Both event and fault management include the functions of notification, logging, and reporting. Faults additionally require problem isolation and determination, the possible execution of diagnostics, and problem resolution.

Configuration management is concerned with the monitoring and modification of a resource's setup or operational state. Configuration management also includes auto-discovery and directory mechanisms to support dynamic environments.

Change management is concerned with adding, updating, and deleting physical and logical components of a resource. Change management also provides the processes by which hardware, software, and firmware changes are planned and controlled.

Accounting and asset management are concerned with the identification and location of resources, measuring the use of a resource, and applying the appropriate charges to the consumer for the use of that resource.

Performance management provides the functions that allow the measurement of resources to determine their availability, utilization, and overall behavior. Performance management functions also provide for measurement data to be analyzed and reported.

Security management is concerned with managing the security infrastructure associated with the managed resource.

Systems Management Operations

Systems management operations may be broken down into three main categories:

- Monitor operations allow the state of managed components and their relationships to be determined. Typical examples of monitoring functionality are querying the availability of a path to a storage device or reading SAN port statistics.

- Control operations provide the ability to modify the state of a managed resource. Examples of this type of operation are configuring a zone within a Fibre Channel network or configuring Fibre Channel port characteristics.

- Service operations allow changes to be made to managed resources to fix faults or as preventative measures. For example, when SAN performance degrades due to a node or component fault, SAN management tools must be able to identify, isolate, and correct the fault.

Policy Management

Policies represent the translation of business goals and objectives into rules and actions. The latter involve the definition of the rules and conditions which, when true, cause the policy actions to be executed. Actions can either result in some activity being performed or in the cessation of an activity. Alternately, actions can authorize or deny access. To achieve desired business objectives, management operations are effected according to the criteria and rules defined by policy managers. For example, in order to meet service level agreements, service providers could rely on underlying functionality in the managed components to perform auto-configuration, self-diagnosis, and self-correction of faults. This functionality could be invoked upon detection of the violation of policy conditions. Policy-based management enables a proactive management environment, in contrast to traditional reactive approaches to systems management.

Additional Information

Additional information regarding Fibre Channel SAN management and CIM can be obtained at the following web sites:

http://www.snia.org/
http://www.sansolutions.com/
http://www.dmtf.org/