Managing EMC Celerra Volumes and File Systems with Automatic Volume Management
P/N Rev A06
June 2009

Contents

Introduction
    System requirements
    Restrictions
    Cautions
    User interface choices
    Terminology
    Related information
Concepts
    System-defined storage pools
    System-defined virtual storage pools
    User-defined storage pools
    File system and automatic file system extension
    AVM and automatic file system extension options
    Storage pool attributes
    System-defined storage pool volume and storage profiles
    File system and storage pool relationship
    Automatic file system extension
    Virtual Provisioning
    Planning considerations
Configuring
    Configure disk volumes
    Create file systems with AVM
    Extend file systems with AVM
    Create file system checkpoints with AVM
Managing
    List existing storage pools
    Display storage pool details
    Display storage pool size information
    Modify system-defined and user-defined storage pool attributes
    Extend a user-defined storage pool
    Extend a system-defined storage pool
    Remove volumes from storage pools
    Delete user-defined storage pools
Troubleshooting
    Where to get help
    EMC E-Lab Interoperability Navigator
    Known problems and limitations
    Error messages
EMC Training and Professional Services
Index
Introduction

Automatic Volume Management (AVM) is an EMC Celerra Network Server feature that automates volume creation and management. By using the Celerra command options and interfaces that support AVM, system administrators can create and expand file systems without creating and managing the underlying volumes.

The Celerra automatic file system extension feature automatically extends file systems created with AVM when the file systems reach their specified high water mark (HWM). Virtual Provisioning, also known as thin provisioning, works with automatic file system extension and allows the file system to grow on demand. With Virtual Provisioning, the space presented to the user or application is the maximum size setting, while only a portion of that space is actually allocated to the file system.

This document is part of the Celerra Network Server documentation set and is intended for system administrators responsible for creating and managing Celerra volumes and file systems by using AVM.

System requirements

Table 1 on page 3 describes the Celerra Network Server software, hardware, network, and storage configurations.

Table 1 System requirements

Software: Celerra Network Server version
Hardware: No specific hardware requirements
Network: No specific network requirements
Storage: Any Celerra-qualified storage system
Restrictions

The restrictions listed below are applicable to AVM, automatic file system extension, Virtual Provisioning, and CLARiiON.

AVM restrictions

The restrictions applicable to AVM are:

- Create a file system by using only one storage pool. If you need to extend a file system, extend it by using either the same storage pool or another compatible storage pool.
- Do not extend a file system across storage systems unless it is absolutely necessary. File systems might reside on multiple disk volumes. To protect against storage-system and data unavailability, ensure that all disk volumes used by a file system reside on the same storage system for file system creation and extension.
- RAID 3 is supported only with EMC CLARiiON Advanced Technology-Attached (ATA) disk volumes.
- When building volumes on a Celerra Network Server attached to an EMC Symmetrix storage system, use standard Symmetrix volumes (also called hypervolumes), not Symmetrix metavolumes.
- Use AVM to create the primary EMC TimeFinder/FS (NearCopy or FarCopy) file system if the storage pool attributes indicate that no sliced volumes are used in that storage pool.
- AVM does not support Business Continuance Volumes (BCVs) in a storage pool with other disk types.
- AVM storage pools must contain only one disk type; disk types cannot be mixed. Table 4 on page 17 provides a complete list of disk types. Table 5 on page 18 lists the storage pools and describes the associated disk types.

Automatic file system extension restrictions

The restrictions applicable to automatic file system extension are:

- Automatic file system extension does not work on MGFS, the EMC file system type used while performing data migration from either CIFS or NFS to the Celerra Network Server by using CDMS.
- Automatic file system extension is not supported on file systems created with manual volume management. You can enable automatic file system extension on a file system only if it was created or extended by using an AVM storage pool.
- Automatic file system extension is not supported on file systems used with TimeFinder NearCopy or FarCopy.
- While automatic file system extension is running, the Control Station blocks all other commands that apply to this file system. When the extension is complete, the Control Station allows the commands to run.
- The Control Station must be running and operating properly for automatic file system extension, or any other Celerra feature, to work correctly.
- Automatic file system extension cannot be used for any file system that is part of a Remote Data Facility (RDF) configuration. Do not use the nas_fs command with the -auto_extend option for file systems associated with RDF configurations. Doing so generates the error message: Error 4121: operation not supported for file systems of type EMC SRDF.
- The options associated with automatic file system extension can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount it as read/write before modifying the automatic file system extension, HWM, or maximum size options.
- Enabling automatic file system extension and Virtual Provisioning does not automatically reserve space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that the automatic extension operation can succeed. When there is not enough storage space available to extend the file system to the requested size, automatic file system extension extends the file system to use all the available storage. For example, if automatic file system extension requires 6 GB but only 3 GB is available, the file system automatically extends by 3 GB. Although the file system was partially extended, an error message appears indicating there was not enough storage space available to perform automatic extension. When there is no available storage, automatic file system extension fails, and you must manually extend the file system to recover from this issue.
- Automatic file system extension is supported with EMC Celerra Replicator. Enable automatic file system extension only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable automatic file system extension on the destination file system.
- You cannot create iSCSI dense LUNs on file systems with automatic file system extension enabled, and you cannot enable automatic file system extension on a file system if a storage mode iSCSI LUN is present on the file system. You will receive the error: Error 2216: <fs_name>: item is currently in use by iscsi. However, iSCSI virtually provisioned LUNs are supported on file systems with automatic file system extension enabled.
- Automatic file system extension is not supported on the root file system of a Data Mover or on the root file system of a Virtual Data Mover (VDM).

Virtual Provisioning restrictions

The restrictions applicable to Virtual Provisioning are:

- Celerra supports Virtual Provisioning on Symmetrix DMX-4 and CLARiiON CX4 disk volumes.
- The options associated with Virtual Provisioning can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount it as read/write before modifying the Virtual Provisioning, HWM, or maximum size options.
- Celerra virtually provisioned objects (either iSCSI LUNs or file systems) should not be used with Symmetrix or CLARiiON virtually provisioned devices. A single file system should not span virtual and standard Symmetrix or CLARiiON volumes.
- Virtual Provisioning is supported with EMC Celerra Replicator. Enable Virtual Provisioning only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable Virtual Provisioning on the destination file system. With Virtual Provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, while they see the virtually provisioned maximum size of the source file system. "Interoperability considerations" on page 35 provides more information on using automatic file system extension with Celerra Replicator.
- Virtual Provisioning is supported on the primary file system, but not with primary file system checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of any EMC SnapSure checkpoint file system.
- If a file system is created by using a virtual storage pool, the -vp option of the nas_fs command cannot be enabled, because Celerra Virtual Provisioning and CLARiiON Virtual Provisioning cannot coexist on a file system.
- Closely monitor Symmetrix Thin Pool space that contains virtually provisioned devices. Use the command /usr/symcli/bin/symcfg list -pool -thin -all to display pool usage.

CLARiiON restrictions

The restrictions applicable to CLARiiON are:

- EMC does not recommend creating system RAID group and control LUNs on CLARiiON virtual (thin) pools and virtual LUNs.
- CLARiiON virtual pools support only RAID 5 and RAID 6. RAID 5 is the default, with a minimum of 3 drives (2+1); EMC recommends using multiples of 5 drives. RAID 6 has a minimum of 4 drives (2+2); EMC recommends using multiples of 8 drives.
- CLARiiON virtual pools do not support SSD drives.
- Navisphere Manager is required to provision virtual devices on the CLARiiON. Platforms that do not provide Navisphere access cannot use this feature.
- Closely monitor CLARiiON Thin Pool space that contains virtually provisioned devices. Use the command nas_pool -size <AVM virtual pool name> and look for the physical usage information (see the example after this list). An alert is generated when a CLARiiON Thin Pool runs out of space.
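For example, both kinds of thin pool can be checked from the Control Station as shown in the following sketch. The AVM virtual pool name is hypothetical, since actual names are assigned during storage discovery:

    # Display Symmetrix Thin Pool usage:
    $ /usr/symcli/bin/symcfg list -pool -thin -all

    # Display the physical usage of an AVM virtual storage pool backed by
    # a CLARiiON Thin Pool (pool name is hypothetical):
    $ nas_pool -size clar_virtual_pool0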
Cautions

If any of this information is unclear, contact your EMC Customer Support Representative for assistance:

- All parts of a file system must use the same type of disk storage and be stored on a single storage system. Spanning more than one storage system increases the chance of data loss, data unavailability, or both.
- If you plan to set quotas on a file system to control the amount of space that users and groups can consume, turn on quotas immediately after creating the file system. Turning on quotas later, when the file system is in use, can cause temporary file system disruption, including slow file system access. Using Quotas on EMC Celerra contains instructions on turning on quotas and general quotas information.
- If your user environment requires international character support (that is, support of non-English character sets or Unicode characters), configure the Celerra Network Server to support this feature before creating file systems. Using International Character Sets with EMC Celerra contains instructions to support and configure international character support on a Celerra Network Server.
- If you plan to create TimeFinder/FS (local, NearCopy, or FarCopy) snapshots, do not use slice volumes (nas_slice) when creating the production file system (PFS). Instead, use the full portion of the disk presented to the Celerra Network Server. Using slice volumes for a PFS slated as the source for snapshots wastes storage space and can result in loss of PFS data.
- Automatic file system extension is interrupted during Celerra software upgrades. If automatic file system extension is enabled, the Control Station continues to capture the HWM events, but the actual file system extension does not start until the Celerra upgrade process completes.
- Insufficient space on a Symmetrix Thin Pool that contains a virtually provisioned device might result in a Data Mover panic and data unavailability. To avoid this situation, pre-allocate 100 percent of the TDEV when binding it to the Thin Pool. If you do not use 100 percent pre-allocation, there is the possibility of overallocation; therefore, you must closely monitor the pool usage.
- Insufficient space on a CLARiiON Thin Pool that contains a virtually provisioned device might result in a Data Mover panic and data unavailability. You cannot pre-allocate space on a CLARiiON Thin Pool, so you must closely monitor the thin pool usage to avoid running out of space.
User interface choices

The Celerra Network Server offers flexibility in managing networked storage that is based on your support environment and interface preferences. This document describes how to use AVM by using the command line interface (CLI). You can also perform many of these tasks by using one of the Celerra management applications:

- Celerra Manager Basic Edition
- Celerra Manager Advanced Edition
- Celerra Monitor
- Microsoft Management Console (MMC) snap-ins
- Active Directory Users and Computers (ADUC) extensions

For additional information about managing your Celerra, see:

- Learning about EMC Celerra on the EMC Celerra Network Server Documentation CD
- Celerra Manager online help
- Application online help system on the EMC Celerra Network Server Documentation CD

Installing EMC Celerra Management Applications includes instructions on launching Celerra Manager, and on installing the MMC snap-ins and the ADUC extensions.

Table 2 on page 8 identifies the storage pool tasks you can perform in each interface, and the command syntax or the path to the Celerra Manager page to use to perform the task. Unless otherwise noted in the task, the operations apply to user-defined and system-defined storage pools. The EMC Celerra Network Server Command Reference Manual contains information on the commands described in Table 2. A worked example follows the table.

Table 2 Storage pool tasks supported by platform

Task: Create a new user-defined storage pool.
Note: Applies only to user-defined storage pools.
CLI: nas_pool -create <name> -volumes <volumes>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, and click New.

Task: List existing storage pools.
CLI: nas_pool -list
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools.
Task: Display storage pool details.
CLI: nas_pool -info <name>
Note: When you perform this operation in the CLI, the total_potential_mb in the output does not include the space in the storage pool.
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, and double-click the storage pool name.
Note: When you perform this operation from Celerra Manager, the total_potential_mb represents the total available storage, including the storage pool.

Task: Display storage pool size information.
CLI: nas_pool -size <name>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, and view the Storage Capacity and Storage Used (%) columns.

Task: Specify whether AVM uses slice volumes or entire unused disk volumes from the storage pool to create or expand a file system.
CLI: nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, double-click the storage pool name to open its properties page, and select or clear Slice Pool Volumes by Default? as required.

Task: Specify whether AVM extends the storage pool automatically with unused disk volumes whenever the pool needs more space.
Note: Applies only to system-defined storage pools.
CLI: nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, double-click the storage pool name to open its properties page, and select or clear Automatic Extension Enabled as required.

Task: Control whether AVM allocates new, unused disk volumes to the storage pool when creating or expanding. Specifying y tells AVM to allocate new, unused disk volumes, even if there is available space in the pool. Specifying n tells AVM to allocate all available storage pool space to create or expand a file system before adding volumes to the pool.
Note: Applies only to system-defined storage pools.
CLI: nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, double-click the storage pool name to open its properties page, and select or clear Obtain Unused Disk Volumes as required.

Task: Add volumes to a user-defined storage pool.
Note: Applies only to user-defined storage pools.
CLI: nas_pool -xtend {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, select the storage pool you want to extend, click Extend, and select one or more volumes to add to the pool.
Task: Extend a system-defined storage pool by size and specify a storage system from which to allocate storage.
Note: Applies only to system-defined storage pools, and only when the is_dynamic attribute for the storage pool is set to n.
CLI: nas_pool -xtend {<name>|id=<id>} -size <integer> [M|G] -storage <system_name>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, select the storage pool you want to extend, and click Extend. Select the Storage System to be used to extend the file system, and type the size requested in MB or GB.
Note: The drop-down list shows all the available storage systems, and the volumes shown are only those created on the storage system that is highlighted.

Task: Remove volumes from a storage pool.
CLI: nas_pool -shrink {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, select the storage pool you want to shrink, click Shrink, and select one or more volumes not in use to be removed from the pool.

Task: Delete a storage pool.
Note: Applies only to user-defined storage pools.
CLI: nas_pool -delete {<name>|id=<id>}
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, select the storage pool you want to delete, and click Delete.

Task: Change the name of a storage pool.
Note: Applies only to user-defined storage pools.
CLI: nas_pool -modify {<name>|id=<id>} -name <name>
Celerra Manager: Select Celerras > [Celerra_name] > Storage > Pools, double-click the storage pool name to open its properties page, and type the new name in the Name text box.

Task: Create a file system with automatic file system extension enabled.
CLI: $ nas_fs -name <name> -type <type> -create pool=<pool_name> storage=<system_name> {size=<integer>[T|G|M]} -auto_extend {no|yes}
Celerra Manager: Select Celerras > File Systems > New, and select Automatic Extension Enabled.
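For example, the following sequence uses the Table 2 syntax to create a user-defined pool and then a file system with automatic extension enabled. The pool, volume, and file system names are illustrative:

    # Create a user-defined storage pool from two manually created volumes:
    $ nas_pool -create marketing_pool -volumes v120,v121

    # Verify that the new pool appears in the pool list:
    $ nas_pool -list

    # Create a 100 GB UxFS file system from the pool with automatic
    # file system extension enabled:
    $ nas_fs -name marketing_fs -type uxfs -create pool=marketing_pool size=100G -auto_extend yes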
Terminology

The EMC Celerra Glossary provides a complete list of Celerra terminology.

automatic file system extension: Configurable Celerra file system feature that automatically extends a file system created or extended with AVM when the high water mark (HWM) is reached. See also high water mark.

Automatic Volume Management (AVM): Feature of the Celerra Network Server that creates and manages volumes automatically without manual volume management by an administrator. AVM organizes volumes into storage pools that can be allocated to file systems.

Celerra Data Migration Service (CDMS): Feature for migrating file systems from NFS and CIFS source file servers to a Celerra Network Server. The online migration is transparent to users once it starts.

disk volume: On Celerra systems, a physical storage unit as exported from the storage array. All other volume types are created from disk volumes. See also metavolume, slice volume, stripe volume, and volume.

file system: Method of cataloging and managing the files and directories on a storage system.

high water mark (HWM): Trigger point at which the Celerra Network Server performs one or more actions, such as sending a warning message, extending a volume, or updating a file system, as directed by the related feature's software/parameter settings.

logical unit number (LUN): Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the term is often used to refer to the logical unit itself.

metavolume: On a Celerra system, a concatenation of volumes, which can consist of disk, slice, or stripe volumes. Also called a hypervolume or hyper. Every file system must be created on top of a unique metavolume. See also disk volume, slice volume, stripe volume, and volume.

slice volume: On a Celerra system, a logical piece or specified area of a volume used to create smaller, more manageable units of storage. See also disk volume, metavolume, stripe volume, and volume.

storage pool: Grouping of available disk volumes organized by AVM and used to allocate available storage to Celerra file systems. Storage pools can be created automatically by AVM or manually by the user.

storage system: Array of physical disk devices and their supporting processors, power supplies, and cables.

stripe volume: Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across the volume and are addressed in an interlaced manner. Stripe volumes make load balancing possible. See also disk volume, metavolume, slice volume, and volume.

thin LUN: A LUN whose storage capacity grows as needed by using a shared virtual (thin) pool of storage.
thin pool: A user-defined CLARiiON storage pool that contains a set of disks on which thin LUNs can be created.

Universal Extended File System (UxFS): High-performance, Celerra Network Server default file system, based on traditional Berkeley UFS, enhanced with 64-bit support, metadata logging for high availability, and several performance enhancements.

Virtual Provisioning: Configurable Celerra file system feature that lets you allocate storage based on your longer-term projections, while you dedicate only the file system resources you currently need. Users (NFS or CIFS clients) and applications see the virtual maximum size of the file system, of which only a portion is physically allocated. In addition, combining the automatic file system extension and Virtual Provisioning features lets you grow the file system gradually on an as-needed basis.

volume: On a Celerra system, a virtual disk into which a file system, database management system, or other application places data. A volume can be a single disk partition or multiple partitions on one or more physical drives. See also disk volume, metavolume, slice volume, and stripe volume.
Related information

Specific information related to the features and functionality described in this document is included in:

- EMC Celerra Network Server Command Reference Manual
- Online Celerra man pages
- EMC Celerra Network Server Parameters Guide
- Configuring NDMP Backups to Disk on EMC Celerra
- Controlling Access to EMC Celerra System Objects
- Managing EMC Celerra Volumes and File Systems Manually

The EMC Celerra Network Server Documentation CD, supplied with Celerra and also available on the EMC Powerlink website, provides the complete set of EMC Celerra customer publications. After logging in to Powerlink, go to Support > Technical Documentation and Advisories > Hardware/Platforms Documentation > Celerra Network Server. On this page, click Add to Favorites. The Favorites section on your Powerlink home page provides a link that takes you directly to this page.

Celerra Support Demos are available on Powerlink. Use these instructional videos to learn how to perform a variety of Celerra configuration and management tasks. After logging in to Powerlink, go to Support > Product and Diagnostic Tools > Celerra Tools > Celerra Support Demos.
Concepts

The AVM feature automatically creates and manages file system storage. AVM is storage-system independent and supports existing requirements for automatic storage allocation (SnapSure, SRDF, and IP replication).

You can configure file systems created with AVM to extend automatically. The automatic file system extension feature allows you to configure a file system to extend automatically, without system administrator intervention, to support file system operations. Automatic file system extension causes the file system to extend when it reaches the specified usage point, the HWM. You set the size for the file system you create, and also the maximum size to which you want the file system to grow.

The Virtual Provisioning option lets you present the maximum size of the file system to the user or application, of which only a portion is actually allocated. Virtual Provisioning allows the file system to grow slowly on demand as the data is written.

Note: Enabling Virtual Provisioning with automatic file system extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, then automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free storage space in the file system.

To create file systems, use one or more types of AVM storage pools:

- System-defined storage pools
- System-defined virtual storage pools
- User-defined storage pools

System-defined storage pools

System-defined storage pools are predefined and available with the Celerra Network Server. You cannot create or delete these predefined storage pools because they are set up to make managing volumes and file systems easier than managing them manually. You can modify some of the attributes of the system-defined storage pools, but this is unnecessary.

AVM system-defined storage pools do not preclude the use of user-defined storage pools or manual volume and file system management, but instead give system administrators a simple volume and file system management tool. With Celerra command options and interfaces that support AVM, you can use system-defined storage pools to create and expand file systems without manually creating and managing stripe volumes, slice volumes, or metavolumes. If your applications do not require precise placement of file systems on particular disks or on particular locations on specific disks, using AVM is an easy way for you to create file systems. AVM system-defined storage pools are adequate for most high availability and performance considerations.

Each system-defined storage pool manages the details of allocating storage to file systems. When you create a file system by using AVM system-defined storage pools, storage is automatically allocated from the pool to the new file system.
After the storage is allocated to that pool, the storage pool can dynamically grow and shrink to meet the file system needs.

System-defined virtual storage pools

System-defined virtual storage pools are automatically created during the normal storage discovery (diskmark) process. A system-defined virtual storage pool contains a set of disks on which thin LUNs can be created for use by the Virtual Provisioning capability. When the last virtual disk volume from a specific virtual CLARiiON storage pool is deleted, the system-defined virtual AVM storage pool and its profiles are automatically removed.

User-defined storage pools

User-defined storage pools allow you to create containers or pools of storage, filled with manually created volumes. If the applications require precise placement of file systems on particular disks or on particular locations on specific disks, AVM user-defined storage pools give you more control. They also allow you to reserve disk volumes so that the system-defined storage pools cannot use them. User-defined storage pools provide a better option for those who want more control over their storage allocation while still using the more automated management tool.

User-defined storage pools are not as automated as the system-defined storage pools. You must specify some attributes of the storage pool and the storage system from which the space is allocated to create file systems. While somewhat less involved than creating volumes and file systems manually, using these storage pools requires more manual involvement on your part than the system-defined storage pools. When you create a file system by using a user-defined storage pool, you must create the storage pool, choose and add the volumes to it, expand it with new volumes when required, and remove volumes you no longer require in the storage pool.

File system and automatic file system extension

You can create or extend file systems with AVM storage pools and configure the file system to extend automatically as needed. You can enable automatic file system extension on a file system when it is created, or you can enable and disable it at any later time by modifying the file system. The options that work with automatic file system extension are:

- HWM
- Maximum size
- Virtual Provisioning
The HWM is the point at which the file system must be extended to meet the usage demand. The default HWM is 90 percent. The default supported maximum size for any file system is 16 TB. With automatic file system extension, the maximum size is the size to which the file system could grow, up to the supported 16 TB. Setting the maximum size is optional with automatic file system extension, but mandatory with Virtual Provisioning. With Virtual Provisioning enabled, users and applications see the maximum size, while only a portion of that size is actually allocated to the file system. A command sketch showing these options follows Table 3.

Automatic file system extension allows the file system to grow as needed without system administrator intervention, making it easier to meet system operations requirements continuously, without interruptions.

AVM and automatic file system extension options

AVM provides a range of options for configuring your storage. The Celerra Network Server can choose the configuration and placement of the file systems by using system-defined storage pools, or you can create a user-defined storage pool and define its attributes.

AVM storage pools

An AVM storage pool is a container or pool of volumes. Table 3 on page 16 lists the major difference between system-defined and user-defined storage pools.

Table 3 System-defined and user-defined storage pool difference

Functionality: Ability to grow and shrink
- System-defined storage pools: Automatic, but the dynamic behavior can be disabled.
- User-defined storage pools: Manual only. Administrators must manage the volume configuration, addition, and removal of storage from these storage pools.

"Managing" on page 70 provides more detailed information.
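As a sketch of how these options are set on an existing file system, the following command enables automatic extension with an explicit HWM and maximum size. The file system name and values are illustrative; the option names follow the -auto_extend create syntax shown in Table 2, and the nas_fs man page is the authoritative reference for the modify form:

    # Enable automatic extension on an existing file system, trigger at
    # 85 percent full, and cap growth at 200 GB:
    $ nas_fs -modify marketing_fs -auto_extend yes -hwm 85% -max_size 200G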
Disk types

A storage pool must contain volumes from only one disk type. Table 4 on page 17 lists the available disk types associated with the storage pools and the disk type descriptions.

Table 4 Disk types

CLSTD: Standard CLARiiON disk volumes.

CLATA: CLARiiON Advanced Technology-Attached (ATA) disk volumes.

CLSAS: CLARiiON Serial Attached SCSI (SAS) disk volumes.

CLSSD: CLARiiON Fibre Channel Solid State Drive (FC SSD) disk volumes.

STD: Standard Symmetrix disk volumes, typically RAID 1 configuration.

R1STD: Symmetrix Fibre Channel (FC) disk volumes, set up as source for mirrored storage that uses SRDF functionality.

R2STD: Standard Symmetrix disk volume that is a mirror of another standard Symmetrix disk volume over RDF links.

SSD: High performance Symmetrix disk volumes built on solid state drives, typically RAID 5 configuration.

ATA: Standard Symmetrix disk volumes built on SATA drives, typically RAID 1 configuration.

R1ATA: Symmetrix SATA disk volumes, set up as source for mirrored storage that uses SRDF functionality.

R2ATA: Symmetrix SATA disk volumes, set up as target for mirrored storage using SRDF functionality.

CMATA: CLARiiON Advanced Technology-Attached (ATA) disk volumes for use with MirrorView/S. The selection box lists the size of free disk volumes and their RAID protection information.

CMSTD: Standard CLARiiON disk volumes for use with MirrorView/S. The selection box lists the size of free disk volumes and their RAID protection information.

BCV: Business continuance volume (BCV) for use by TimeFinder/FS operations.

BCVA: Business continuance volume (BCV) built from SATA disks for use by TimeFinder/FS operations.

R1BCA: BCV built from SATA disks that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration; used as a source volume by TimeFinder/FS operations.
R2BCA: BCV built from SATA disks that is a mirror of another BCV over RDF links; used as a target or destination volume by TimeFinder/FS operations.

R1BCV: BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration; used as a source volume by TimeFinder/FS operations.

R2BCV: BCV that is a mirror of another BCV over RDF links; used as a target or destination volume by TimeFinder/FS operations.

System-defined storage pools

Choosing system-defined storage pools to build the file system is the easiest way to manage volumes and file systems. They are associated with the type of attached storage system you have. If you have a CLARiiON storage system attached, the CLARiiON storage pools are available to you through the Celerra Network Server. If you have a Symmetrix storage system attached, the Symmetrix storage pools are available to you through the Celerra Network Server.

System-defined storage pools are dynamic by default. The AVM feature adds and removes volumes automatically from the storage pool as needed. Table 5 on page 18 lists the system-defined storage pools supported on the Celerra Network Server. Table 6 on page 21 contains additional information about RAID group combinations for system-defined storage pools.

Note: A storage pool can include disk volumes of only one type.

Table 5 System-defined storage pools

symm_std: Designed for high performance and availability at medium cost. This storage pool uses STD disk volumes (typically RAID 1).

symm_ata: Designed for high performance and availability at low cost. This storage pool uses ATA disk volumes (typically RAID 1).

symm_std_rdf_src: Designed for high performance and availability at medium cost, specifically for storage that will be mirrored to a remote Celerra Network Server that uses SRDF, or to a local Celerra Network Server that uses TimeFinder/FS. Using SRDF/S with EMC Celerra for Disaster Recovery and Using TimeFinder/FS, NearCopy, and FarCopy with EMC Celerra provide more information about the SRDF feature.

symm_std_rdf_tgt: Designed for high performance and availability at medium cost, specifically as a mirror of a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R2STD disk volumes. Using SRDF/S with EMC Celerra for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_src: Designed for archival performance and availability at low cost, specifically for storage mirrored to a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R1ATA disk volumes. Using SRDF/S with EMC Celerra for Disaster Recovery provides more information about the SRDF feature.

symm_ata_rdf_tgt: Designed for archival performance and availability at low cost, specifically as a mirror of a remote Celerra Network Server using SRDF. This storage pool uses Symmetrix R2ATA disk volumes. Using SRDF/S with EMC Celerra for Disaster Recovery provides more information about the SRDF feature.

symm_ssd: Designed for very high performance and availability at high cost. This storage pool uses SSD disk volumes (typically RAID 5).

clar_r1: Designed for high performance and availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 1 mirrored-pair disk groups.

clar_r6: Designed for high availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 6 disk groups.

clar_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 4+1 RAID 5 disk groups.

clar_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 8+1 RAID 5 disk groups.

clarata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CLATA disk drives in a RAID 5 configuration.

clarata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses LCFC, SATA II, and CLATA disk drives in a RAID 3 configuration.

clarata_r6: Designed for high availability at low cost. This storage pool uses CLATA disk volumes created from RAID 6 disk groups.

clarata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLARiiON CLATA disk volumes in a RAID 1/0 configuration.

clarsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 5 disk groups.

clarsas_r6: Designed for high availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 6 disk groups.
clarsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLARiiON Serial Attached SCSI (SAS) disk volumes in a RAID 1/0 configuration.

clarssd_r5: Designed for very high performance and availability at high cost. This storage pool uses CLSSD disk volumes created from 4+1 and 8+1 RAID 5 disk groups.

cm_r1: Designed for high performance and availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 1 mirrored-pair disk groups for use with MirrorView/Synchronous.

cm_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 4+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cm_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cm_r6: Designed for high availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CLARiiON Advanced Technology-Attached (ATA) CMATA disk drives in a RAID 5 configuration for use with MirrorView/Synchronous.

cmata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses CMATA disk drives in a RAID 3 configuration for use with MirrorView/Synchronous.

cmata_r6: Designed for high availability at low cost. This storage pool uses CMATA disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLARiiON CMATA disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

cmsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CMSAS disk volumes created from RAID 5 disk groups for use with MirrorView/Synchronous.

cmsas_r6: Designed for high availability at low cost. This storage pool uses CMSAS disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLARiiON CMSAS disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.
cmssd_r5: Designed for very high performance and availability at high cost. This storage pool uses CMSSD disk volumes created from 4+1 and 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.

RAID groups and storage characteristics

Table 6 on page 21 correlates the storage array to the RAID groups for system-defined storage pools. It maps each array (NX4 SAS or SATA; NS20/NS40/NS80 FC; NS20/NS40/NS80 ATA; NS-120/NS-480 FC; NS-120/NS-480 ATA) to its supported RAID 5, RAID 6, RAID 1, and RAID 1/0 group combinations, such as 2+1 RAID 5 on the NX4 and 4+1 RAID 5 on the NS-series arrays.

User-defined storage pools

For some customer environments, more user control is required than the system-defined storage pools offer. One way for administrators to have more control is to create their own storage pools and define the attributes of the storage pool. AVM user-defined storage pools allow you to have more control over how the storage is allocated to file systems. Administrators can create a storage pool and choose the volumes it contains, but must also manually manage the pool and its contents, adding and removing volumes as needed. While user-defined storage pools have attributes similar to system-defined storage pools, user-defined storage pools are not dynamic; they require administrators to explicitly add and remove volumes manually.
If you define the storage pool, you must also explicitly add and remove storage from the storage pool and define the attributes for that storage pool. Use the nas_pool command to list, create, delete, extend, shrink, and view storage pools, and to modify the attributes of storage pools, as in the example below. "Create file systems with AVM" on page 42 and "Managing" on page 70 provide more information.

Understanding how AVM storage pools work enables you to determine whether system-defined storage pools or user-defined storage pools, or both, are appropriate for the environment. It is also important to understand the ways in which you can modify the storage-pool behavior to suit your file system requirements. "Modify system-defined and user-defined storage pool attributes" on page 74 provides a list of all the attributes and the procedures to modify them.
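For instance, a user-defined pool can be grown and inspected with the nas_pool syntax from Table 2. The pool and volume names are illustrative:

    # Add two more manually created volumes to a user-defined pool:
    $ nas_pool -xtend marketing_pool -volumes v122,v123

    # Display the pool's attributes and member volumes:
    $ nas_pool -info marketing_pool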
Storage pool attributes

System-defined and user-defined storage pools have attributes that control how they create volumes and file systems. Table 8 on page 74 lists the storage pool attributes, the type of entry, the value, whether the attribute is modifiable and for which storage pools, and the description of the attribute.

The system-defined storage pools are shipped with the Celerra Network Server. They are designed to optimize performance based on the hardware configuration. Each of the system-defined storage pools has associated profiles that define the kind of storage used, and how new storage is added to, or deleted from, the storage pool.

The system-defined storage pools are designed for use with the Symmetrix and CLARiiON storage systems. The structure of volumes created by AVM might differ greatly depending on the type of storage system used by the various storage pools. This difference allows AVM to exploit the architecture of current and future block storage devices that are attached to the Celerra Network Server.

Figure 1 on page 23 (AVM system-defined storage pools) shows how the different storage pools, such as symm_std and symm_std_rdf_src on a Symmetrix storage system and clar_r1, clar_r5_performance, clar_r5_economy, clarata_archive, and clarata_r3 on a CLARiiON storage system, are associated with the disk volumes for each storage-system type attached. The nas_disk -list command lists the disk volumes. These are the Celerra Network Server's representation of the LUNs exported from the attached storage system.

Note: Any given disk volume must be a member of only one storage pool.
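For example, to see which disk volumes exist and what a given pool contains, you can run the following commands from the Control Station; the system-defined pool name is one of those listed in Table 5:

    # List disk volumes, the Celerra representation of LUNs exported
    # from the attached storage system:
    $ nas_disk -list

    # Display the attributes and member volumes of a system-defined pool:
    $ nas_pool -info clar_r5_performance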
System-defined storage pool volume and storage profiles

Volume profiles are the set of rules and parameters that define how new storage is added to a system-defined storage pool. A volume profile defines a standard method of building a large section of storage from a set of disk volumes. This large section of storage can be added to a storage pool that might contain similar large sections of storage. The system-defined storage pool is responsible for satisfying requests for any amount of storage.

Users cannot create or delete system-defined storage pools and their associated profiles. Users can list, view, and extend the system-defined storage pools, and also modify storage pool attributes.

Volume profiles have an attribute named storage_profile. A volume profile's storage profile defines the rules and attributes that are used to aggregate a number of disk volumes (listed by the nas_disk -list command) into a volume that can be added to a system-defined storage pool. A volume profile uses its storage profile to determine the set of disk volumes to select (or to match existing Celerra disk volumes), where a given disk volume might match the rules and attributes of a storage profile.

"CLARiiON system-defined storage pool algorithms" on page 24, "CLARiiON system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 ATA support" on page 27, and "Symmetrix system-defined storage pools algorithm" on page 28 explain how these profiles help system-defined storage pools aggregate the disk volumes into storage pool members, place the members into storage pools, and then build file systems for each storage-system type. When you use the system-defined storage pools without modifications, through Celerra Manager or the command line interface (CLI), this activity is transparent to users.

CLARiiON system-defined storage pool algorithms

When you request a new file system that requires new storage, AVM attempts to create the most optimal stripe volume for a CLARiiON storage system. System-defined storage pools for CLARiiON storage systems work with LUNs of a specific type, for example, 4+1 RAID 5 LUNs for the clar_r5_performance storage pool.

CLARiiON storage system integrated models use CLARiiON storage templates to create the LUNs that the Celerra Network Server recognizes as disk volumes. CLARiiON storage templates are a combination of template definition files and scripts (you see just the scripts) that create RAID groups and bind LUNs on CLARiiON storage systems. These CLARiiON storage templates are invoked through the CLARiiON setup script (root only) or through Celerra Manager. Celerra NS600/NS600S/NS700/NS700S with Integrated Array Setup Guide contains more information on using CLARiiON storage templates with Celerra.

Disk volumes exported from a CLARiiON storage system are relatively large and might vary in size from approximately 18 GB to 136 GB, depending on physical disk size. A CLARiiON system also has two storage processors (SPs). Most CLARiiON storage templates create two LUNs per RAID group, one owned by SP A, and the other by SP B. Only the CLARiiON RAID 3 storage templates create both LUNs owned by one of the SPs.
If no disk volumes are found when a request for space is made, AVM considers the storage pool attributes and initiates the next step based on these settings:

- The is_greedy setting indicates whether the storage pool must add a new member volume to meet the request, or whether it must use all the available space in the storage pool before adding a new member volume.
- AVM then checks the is_dynamic setting, which indicates whether the storage pool can dynamically grow and shrink. If set to yes, it allows AVM to automatically add a member volume to meet the request. If set to no, and a member volume must be added to meet the request, then the user must manually add the member volume to the storage pool.
- The file system request's slice flag indicates whether the file system can be built on a slice volume from a member volume. The pool's default_slice_flag setting indicates whether AVM can slice storage pool member volumes to meet the request.

Most of the system-defined storage pools for CLARiiON storage systems first search for four same-size disk volumes, from different buses, different SPs, and different RAID groups. The absolute criteria that the volumes must meet are:

- A disk volume cannot exceed 2 TB.
- Disk volumes must match the type specified in the storage pool's storage profile.
- Disk volumes must be of the same size.
- No two disk volumes can come from the same RAID group.
- Disk volumes must be on a single storage system.

If found, AVM stripes the LUNs together and inserts the stripe into the storage pool. If AVM cannot find four disk volumes that are bus-balanced, it looks for four same-size disk volumes that are SP-balanced from different RAID groups, and if they are not found, AVM then searches for four same-size disk volumes from different RAID groups. Next, if AVM has been unable to satisfy these requirements, it looks for three same-size disk volumes that are SP-balanced from different RAID groups, and so on, until the only option left is for AVM to use one disk volume. The criteria that the one disk volume must meet are:

- A disk volume cannot exceed 2 TB.
- A disk volume must match the type specified in the storage pool's storage profile.
- If multiple volumes match the first two criteria, then the disk volume must be from the least-used RAID group.
Figure 2 on page 26 (CLARiiON system-defined storage pool algorithm for clar_r1, clar_r5_performance, and clar_r5_economy) shows the algorithm used to create a file system by adding a pool member to these AVM CLARiiON system-defined storage pools. If 4, 3, or 2 disk volumes meeting the absolute criteria are available, AVM selects volumes that are balanced across buses and storage processors and come from the least-used RAID groups (least used is defined by the number of disk volumes used in a RAID group divided by the number of disk volumes visible in that RAID group), stripes the volumes together using an 8 K stripe size, and inserts the stripe into the storage pool; a metavolume is then placed on a slice of the stripe (the smaller of the free space available or the file system request). If only one qualifying disk volume is available, it is placed in the pool with no stripe or metavolume on top. If no qualifying volume is available and the pool does not contain enough space, the request fails with an error.

Figure 3 on page 26 (clar_r5_performance storage pool) shows the structure of a clar_r5_performance storage pool: stripe volumes built across CLARiiON 4+1 RAID 5 disk volumes, with the volumes in the storage pool balanced between SP A and SP B.
CLARiiON system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 ATA support

The three CLARiiON system-defined storage pools that provide support for the ATA environment are clarata_r3, clarata_archive, and clarata_r10.

The clarata_r3 storage pool follows the basic CLARiiON algorithm explained in "CLARiiON system-defined storage pool algorithms" on page 24, but uses only one disk volume and does not allow striping of volumes. One of the applications for this pool is backup to disk; a sketch follows Figure 4. Users can manage the RAID 3 disk volumes manually in a user-defined storage pool. However, using the system-defined storage pool clarata_r3 helps users maximize the benefit from AVM disk selection algorithms. The clarata_r3 storage pool supports only CLARiiON ATA drives, not FC drives. The criteria that the one disk volume must meet are:

- The disk volume cannot exceed 2 TB.
- The disk volume must match the type specified in the storage pool's storage profile.
- If multiple volumes match the first two criteria, then the disk volume must be from the least-used RAID group.

Figure 4 on page 27 (clarata_r3 system-defined algorithm) shows the storage pool clarata_r3 algorithm: if a disk volume that meets the criteria is available, AVM creates a metavolume on the disk volume and places the metavolume in the storage pool; otherwise, the request fails with an error.
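As a sketch of the backup-to-disk use mentioned above, the following command creates a file system directly from the clarata_r3 pool, using the nas_fs create syntax from Table 2. The file system name and size are illustrative:

    # Create a UxFS file system on the RAID 3 ATA pool for backup to disk:
    $ nas_fs -name backup_fs -type uxfs -create pool=clarata_r3 size=500G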
The storage pools clarata_archive and clarata_r10 differ from the basic CLARiiON algorithm. These storage pools use two disk volumes, or a single disk volume, and all ATA drives are treated the same. Figure 5 on page 28 (clarata_archive and clarata_r10 system-defined storage pools algorithm) shows the profile algorithm used to create a file system with these pools: AVM requests a new pool volume made of two disk volumes (or one, if two are not available), sorts the pool volumes by utilization, picks the first entry, and slices the minimum of the free space available or the space needed from the pool entry, concatenating slices together if necessary until the space request is satisfied; if no disk volume or pool volume is available, the request fails with an error.

Symmetrix system-defined storage pools algorithm

AVM works differently with Symmetrix storage systems because of the size and uniformity of the disk volumes involved. Typically, the disk volumes exported from a Symmetrix storage system are small and uniform in size. The aggregation strategy used by Symmetrix storage pools is primarily to combine many small disk volumes into larger volumes that can be used by file systems. AVM attempts to distribute the Input/Output (I/O) to as many Symmetrix directors as possible. The Symmetrix storage system can distribute I/O among the physical disks by using slicing and striping on the storage system, but this is less of a concern for the AVM aggregation strategy.

A Symmetrix storage pool creates a stripe volume across one set of Symmetrix disk volumes, or creates a metavolume, as necessary to meet the request. The stripe or metavolume is added to the Symmetrix storage pool. When the administrator asks for n GB of space from the Symmetrix storage pool, the space is allocated from this system-defined storage pool. AVM adds to and takes from the system-defined storage pool as required.

The stripe size is set in the system-defined profiles, and you cannot modify the stripe size of a system-defined storage pool. The default stripe size for a Symmetrix storage pool is 32 KB. Multi-path file system (MPFS) requires a stripe depth of 32 KB or greater.
The algorithm that AVM uses looks for a set of eight disk volumes, and if they are not found, a set of four disk volumes, then a set of two disk volumes, and finally one disk volume. AVM stripes the disk volumes together if the disk volumes are all of the same size. If the disk volumes are not the same size, AVM creates a metavolume on top of the disk volumes. AVM then adds the stripe or the metavolume to the storage pool. If AVM cannot find any disk volumes, it looks for a metavolume in the storage pool that has space, takes a slice from that metavolume, and makes a metavolume over that slice.

Figure 6 on page 29 (Symmetrix system-defined storage pool algorithm) shows the AVM algorithm used to create the file system with a Symmetrix system-defined storage pool: when a file system request is received, AVM either stripes a set of 8/4/2/1 disk volumes together (or builds a metavolume on top of them), or, if no disk volumes are available, takes a slice from a metavolume in the pool with space remaining (the smaller of the free space available or the file system request) and makes a metavolume on the slice, concatenating new volumes onto the in-progress metavolume until the disk space requirement is satisfied; the file system is then built on the metavolume. If neither disk volumes nor pool space is available, the request fails with an error.

Figure 7 on page 29 (Symmetrix storage pool) shows the structure of a Symmetrix storage pool: stripe volumes built across Symmetrix STD disk volumes.
All this system-defined storage pool activity is transparent to users and provides an easy way to create and manage file systems. The system-defined storage pools do not allow users much control over how AVM aggregates storage to meet file system needs, but most users prefer ease-of-use over control. When users make a request for a new file system that uses the system-defined storage pools, AVM:

- Determines whether more volumes need to be added to the storage pool; if so, selects and adds volumes.
- Selects an existing, available storage pool volume to use for the file system and might slice it to obtain the correct size for the file system request. If the request is larger than the largest volume, AVM concatenates the volumes to create the size required to meet the request.
- Places a metavolume on the resulting volume and builds the file system within the metavolume.
- Returns the file system information to the user.

All system-defined storage pools have specific, predictable rules for getting disk volumes into storage pools, provided by their associated profiles.

File system and storage pool relationship

When you request a file system that uses a system-defined storage pool, AVM consumes disk volumes either by adding new members to the pool or by using existing pool members. To create a file system by using a user-defined storage pool, create the storage pool and manually add the volumes you want to use before creating the file system.

Deleting a file system associated with either a system-defined or user-defined storage pool returns the unused space to the storage pool, but the storage pool might continue to reserve that space for future file system requests. Figure 8 on page 30 (File systems built by AVM) shows two file systems, FS1 and FS2, each built on a metavolume over a slice of the member volumes in an AVM storage pool.
31 As Figure 9 on page 31 shows, if FS2 is deleted, the storage used for that file system is returned to the storage pool, and AVM continues to reserve it, as well as any other member volumes available in the storage pool, for a future request. This is true of system-defined and user-defined storage pools.

[Figure 9: FS2 deletion returns storage to the storage pool. FS1 remains built on its metavolume and slice; the member volumes freed by FS2 return to the storage pool.]

If FS1 is also deleted, the storage that was used for the file systems is no longer required for file systems. A system-defined storage pool removes the volumes from the storage pool and returns the disk volumes to the storage system for use with other features or storage pools. You can change the attributes of a system-defined storage pool so that it is not dynamic and does not grow and shrink dynamically. Doing that increases your direct involvement in managing the volume structure of the storage pool, including adding and removing volumes.

A user-defined storage pool does not have any capability to add and remove volumes automatically. To use volumes contained in a user-defined storage pool for another purpose, you must remove the volumes. "Remove volumes from storage pools" on page 82 provides more information on removing volumes. Otherwise, the user-defined storage pool continues to reserve the space for use by that pool. Figure 10 on page 31 shows that the storage pool container still exists after the file systems are deleted, and the volumes continue to be reserved by AVM for future requests of that storage pool.

[Figure 10: FS1 deletion leaves the storage pool container with its member volumes.]

If you have modified the attributes that control the dynamic behavior of a system-defined storage pool, use the procedure in "Remove volumes from storage pools" on page 82 to remove volumes from the system-defined storage pool. For a user-defined storage pool, to reuse the volumes for other purposes, remove the volumes or delete the storage pool. 31 of 92
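To observe this behavior on a live system, you can delete a file system and then re-check the pool's size figures. This is a minimal sketch, assuming the nas_fs -delete option and the nas_pool -size reporting shown later in this document; ufs2 and clar_r5_performance are placeholder names:

$ nas_fs -delete ufs2
$ nas_pool -size clar_r5_performance

If other file systems still use the pool, the space formerly used by ufs2 shows up in the pool's available figures rather than being returned to the storage system; a dynamic system-defined pool gives the disk volumes back to the storage system only once no file system needs them.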
32 Automatic file system extension

Automatic file system extension works only when an AVM storage pool is associated with a file system. You can enable or disable automatic file system extension when you create a file system or modify the file system properties later. "Create file systems with AVM" on page 42 provides the procedure to create file systems with AVM system-defined or user-defined storage pools and enable automatic file system extension on a newly created file system. "Enable automatic file system extension and options" on page 61 provides the procedure to modify an existing file system and enable automatic file system extension.

You can set the HWM and maximum size for automatic file system extension. The Control Station might attempt to extend the file system several times, depending on these settings.

HWM

The HWM identifies the threshold for initiating automatic file system extension. The HWM value must be between 50 percent and 99 percent. The default HWM is 90 percent of the file system size. Automatic file system extension guarantees that the file system usage is at least 3 percent below the HWM. For example, a 100 GB file system reaches its 80 percent HWM at 80 GB. The file system then automatically extends to 110 GB and is now at approximately 73 percent usage (80 GB), which is well below the 80 percent HWM for the 110 GB file system.

If automatic file system extension is disabled, when the file system reaches the HWM, an HWM event notification is sent. You must then manually extend the file system. Ignoring the notification could cause data loss.

If automatic file system extension is enabled on a file system, when the file system reaches the HWM, an automatic extension event notification is sent to sys_log and the file system automatically extends without any administrative action:

- A file system that is smaller than 10 GB extends by its own size when it reaches the HWM. For example, a 3 GB file system, after reaching its HWM (for example, the default of 90 percent), automatically extends to 6 GB.
- A file system that is larger than 10 GB extends by 5 percent of its size or 10 GB, whichever is larger, when it reaches the HWM. For example, a 100 GB file system extends to 110 GB, and a 500 GB file system extends to 525 GB.

Maximum size

The default maximum size for any file system is 16 TB. The maximum size for automatic file system extension is from 3 MB up to 16 TB. If Virtual Provisioning is enabled and the selected storage pool is a traditional RAID group storage pool (not a virtual CLARiiON thin pool), the maximum size is required; otherwise, this field is optional. The extension size is also dependent on having additional space in the storage pool associated with the file system. 32 of 92
33 Automatic file extension conditions

- If the file system size reaches the specified maximum size, the file system cannot extend beyond that size, and the automatic extension operation is rejected.
- If the available space is less than the extend size, the file system extends by the maximum available space.
- If only the HWM is set with automatic file system extension enabled, the file system automatically extends when that HWM is reached, if space is available and the file system size is less than 16 TB.
- If only the maximum size is specified with automatic file system extension enabled, the file system automatically extends when the default HWM of 90 percent is reached, if there is space available and the maximum size has not been reached. If the file system reaches or exceeds the set maximum size, automatic extension is rejected.
- If neither the HWM nor the maximum file size is set, but either automatic file system extension or Virtual Provisioning is enabled, the file system's HWM and maximum size are set to the default values of 90 percent and 16 TB, respectively.

Virtual Provisioning

The Virtual Provisioning option allows you to allocate storage capacity based on anticipated needs, while you dedicate only the resources you currently need. Combining automatic file system extension and Virtual Provisioning lets you grow the file system gradually as needed. When Virtual Provisioning is enabled and a virtual storage pool is not being used, the virtual maximum file system size is reported to NFS and CIFS clients; if a virtual storage pool is being used, the actual file system size is reported to NFS and CIFS clients.

Note: Enabling Virtual Provisioning with automatic file system extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. 33 of 92
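Because unset values fall back to these defaults, the two commands below are equivalent. This is a minimal sketch using the nas_fs options documented later in this guide; ufs1 is a placeholder file system name:

$ nas_fs -modify ufs1 -auto_extend yes
$ nas_fs -modify ufs1 -auto_extend yes -hwm 90% -max_size 16T

Either form leaves the file system extending automatically at 90 percent usage, up to the 16 TB limit, as long as the associated storage pool has free space.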
34 Calculating automatic file system extension size

During each automatic file system extension, fs_extend_handler located on the Control Station (/nas/sbin/fs_extend_handler) calculates the extension size by using the algorithm shown in Figure 11 on page 34. The figure can be summarized as follows:

1. Calculate a rate-based extension size, extend_size(a), from how often the HWM-reached event is polled (10 seconds) and the I/O rate constant (100):
   extend_size(a) = event_polling_interval * io_rate * 100 / (100 - HWM)
2. Compare extend_size(a) with the current file system size (current_fs_size) to obtain extend_size(b): if extend_size(a) is less than 5 percent of current_fs_size, then extend_size(b) = 5 percent of current_fs_size; if extend_size(a) is greater than current_fs_size, then extend_size(b) = current_fs_size; otherwise extend_size(b) = extend_size(a).
3. Calculate the required extension size, where used is the percentage of the file system size in use:
   req_ext_size = used * current_fs_size / (HWM - 3) - current_fs_size
4. Compare req_ext_size with extend_size(b): if req_ext_size is greater than extend_size(b), then extend_size(c) = req_ext_size; otherwise extend_size(c) = extend_size(b).
5. For NFS clients, extend_size(c) is the final autoextension size. For CIFS clients, DART sends the file system target size to the Control Station, where target_size = current_fs_size + extend_size(c); the Control Station calculates dart_request_ext_size = target_size - current_fs_size and uses dart_request_ext_size as the final autoextension size if it is greater than extend_size(c), or extend_size(c) otherwise.

Figure 11 Automatic file system extension size calculation 34 of 92
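The following shell sketch mirrors the calculation above for one worked example. It is illustrative only: the polling interval (10 seconds) and I/O rate constant (100) come from Figure 11, while the file system size, usage, HWM, and the assumption that sizes are in MB are example values chosen here.

#!/bin/sh
current=102400  # current file system size in MB (assumed: 100 GB)
used=92         # percent of the file system in use (assumed)
hwm=90          # HWM in percent (assumed)
polling=10      # HWM event polling interval in seconds (from Figure 11)
io_rate=100     # I/O rate constant (from Figure 11)

a=$(( polling * io_rate * 100 / (100 - hwm) ))  # rate-based estimate: 10000
b=$a
[ "$a" -lt $(( current * 5 / 100 )) ] && b=$(( current * 5 / 100 ))  # floor: 5% of size
[ "$a" -gt "$current" ] && b=$current                                # cap: the fs size
req=$(( used * current / (hwm - 3) - current )) # extension putting usage 3% under HWM
c=$b
[ "$req" -gt "$b" ] && c=$req                   # final size: the larger of the two
echo "autoextension size: ${c} MB"              # prints 10000 MB for these inputs

For these inputs the rate-based estimate (10000 MB) dominates the usage-based requirement (5885 MB), so the file system would grow by about 10 GB, which is consistent with the extension rules described in the HWM section.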
35 Planning considerations

This section covers important volume and file system planning information and guidelines, interoperability considerations, storage pool characteristics, and Celerra upgrade considerations that you need to know when implementing AVM and automatic file system extension. Review these topics:

- Celerra Network Server file system management and the nas_fs command
- The Celerra SnapSure feature (checkpoints) and the fs_ckpt command
- Celerra Network Server volume management concepts (metavolumes, slice volumes, stripe volumes, and disk volumes) and the nas_volume, nas_server, nas_slice, and nas_disk commands
- RAID technology
- Symmetrix storage systems
- CLARiiON storage systems

Interoperability considerations

Consider these points when using Celerra automatic file system extension with replication:

- Enable automatic extension and Virtual Provisioning only on the source file system. The destination file system is synchronized with the source and extends automatically. When the source file system reaches its HWM, the destination file system automatically extends first and then the source file system automatically extends. Set up the source replication file system with automatic extension enabled, as explained in "Create file systems with automatic file system extension" on page 50, or modify an existing source file system to automatically extend, by using the procedure "Enable automatic file system extension and options" on page 61.
- If the extension of the destination file system succeeds but the extension of the source file system fails, the automatic extension operation stops functioning. You receive an error message indicating that the failure is due to the limitation of available disk space on the source side. Manually extend the source file system to make the source and destination file systems the same size, by using the nas_fs -xtend <fs_name> -option src_only command. Using EMC Celerra Replicator (V1) and Using EMC Celerra Replicator (V2) contain instructions to recover from this situation. 35 of 92
36 Other interoperability considerations are:

- The automatic extension and Virtual Provisioning configuration is not moved over to the destination file system during Replicator failover. If you intend to reverse the replication and the destination file system becomes the source, you must enable automatic extension on the new source file system.
- With Virtual Provisioning enabled, NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, while they see the virtually provisioned maximum size on the source file system. Table 7 on page 36 describes this client view.

Table 7 Client view of Replicator source and destination file systems

Client view                                        Clients see
Destination file system                            Actual size
Source file system without Virtual Provisioning   Actual size
Source file system with Virtual Provisioning      Maximum size

Using EMC Celerra Replicator (V1) and Using EMC Celerra Replicator (V2) contain more information on using automatic file system extension with Celerra Replicator.

AVM storage pool considerations

Consider these AVM storage pool characteristics:

- System-defined storage pools have a set of rules governing how the Celerra Network Server manages storage. User-defined storage pools have attributes that you define for each storage pool.
- All system-defined storage pools (virtual and non-virtual) are dynamic; they acquire and release disk volumes as needed. Administrators can modify the attribute to disable this dynamic behavior.
- User-defined storage pools are not dynamic; they require administrators to explicitly add and remove volumes manually.
- You can choose disk volume storage from only one of the attached storage systems when creating a user-defined storage pool.
- Striping never occurs above the storage-pool level.
- The system-defined CLARiiON storage pools attempt to use all free disk volumes before maximizing use of the partially used volumes. This behavior is considered a greedy attribute. You can modify the attributes that control this greedy behavior in system-defined storage pools. "Modify system-defined and user-defined storage pool attributes" on page 74 describes the procedure. Another option is to create user-defined storage pools to group disk volumes to keep system-defined storage pools from using them. "Create file systems with user-defined storage pools" on page 46 provides more information on creating user-defined storage pools.
- You can create a storage pool to reserve disk volumes, but never create file systems from that storage pool. You can move the disk volumes out of the reserving user-defined storage pool if you need to use them for file system creation or other purposes. 36 of 92
37 - The system-defined Symmetrix storage pools maximize the use of disk volumes acquired by the storage pool before consuming more. This behavior is considered a not greedy attribute.
- AVM does not perform the storage system operations necessary to create new disk volumes; it consumes only existing disk volumes. You might have to add LUNs to your storage system and configure new disk volumes, especially if you create user-defined storage pools.
- A file system might use many or all of the disk volumes that are members of a system-defined storage pool.
- You can use only one type of disk volume in a user-defined storage pool. For example, if you create a storage pool and then add a disk volume based on ATA drives to the pool, add only other ATA-based disk volumes to the pool to extend it.
- SnapSure checkpoint SavVols might use the same disk volumes as the file system of which the checkpoints are made.
- AVM does not add members to the storage pool if the amount of space requested is more than the sum of the unused and available disk volumes, but less than or equal to the available space in an existing system-defined storage pool.
- Some AVM system-defined storage pools designed for use with CLARiiON storage systems acquire pairs of storage-processor balanced disk volumes with the same RAID type, disk count, and size. When reserving disk volumes from a CLARiiON storage system, it is important to reserve them in similar pairs. Otherwise, AVM might not find matching pairs, and the number of usable disk volumes might be more limited than was intended. Managing EMC Celerra Volumes and File Systems Manually contains instructions to recover from this situation.

"Create file systems with AVM" on page 42 provides more information on creating file systems by using the different pool types. "Related information" on page 13 provides a list of related documentation.

Upgrading Celerra software

When you upgrade to Celerra Network Server version 5.6 software, all system-defined storage pools are available. The system-defined storage pools for the currently attached storage systems with available space appear in the output when you list storage pools, even if AVM is not used on the Celerra Network Server. If you have not used AVM in the past, these storage pools are containers and do not consume storage until you request a file system by using AVM. 37 of 92
38 If you have used AVM in the past, in addition to the system-defined storage pools, any user-defined storage pools you have created also appear when you list the storage pools.

!!CAUTION: Automatic file system extension is interrupted during Celerra software upgrades. If automatic file system extension is enabled, the Control Station continues to capture HWM events, but actual file system extension does not start until the Celerra upgrade process completes.

File system and automatic file system extension considerations

Consider your environment, most important file systems, file system sizes, and expected growth, before implementing AVM. Follow these general guidelines when planning to use AVM in your environment:

- Create the most important and most used file systems first to access them quickly and easily. AVM system-defined storage pools use free disk volumes to create a new file system. For example, suppose there are 40 disk volumes on the storage system. AVM takes eight disk volumes, creates stripe1, slice1, and metavolume1, and then creates the file system ufs1. Assuming the default behavior of the system-defined storage pool, AVM then uses eight more disk volumes, creates stripe2, and builds file system ufs2, even though there is still space available in stripe1. File systems ufs1 and ufs2 are on different sets of disk volumes and do not share any LUNs, making it easier to locate and access them.
- If you plan to create user-defined storage pools, consider LUN selection and striping, and do your own disk volume aggregation before putting the volumes into the storage pool. This ensures that the file systems are not built on a single LUN. Disk volume aggregation is a manual process for user-defined storage pools.
- For file systems with sequential I/O, two LUNs per file system are generally sufficient. If you use AVM for file systems with sequential I/O, consider modifying the attribute of the storage pool to restrict slicing, as in the sketch at the end of these guidelines.
- Automatic file system extension does not alleviate the need for appropriate file system usage planning. Create the file systems with adequate space to accommodate the estimated file system usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot adequately extend the file system to accommodate the usage quickly enough, the automatic extension fails. "Known problems and limitations" on page 86 provides more information on how to identify and recover from this issue.

Note: When planning file system size and usage, consider setting the HWM so that the free space above the HWM setting is a certain percentage above the largest average file for that file system. 38 of 92
39 - Use of AVM with a single-enclosure CLARiiON storage system could limit performance because AVM does not stripe between or across RAID group 0 and other RAID groups. This is the only case where striping across 4+1 RAID 5 and 8+1 RAID 5 is suggested.
- If you want to set a stripe size that is different from the default stripe size for system-defined storage pools, create a user-defined storage pool; a sketch follows these guidelines. "Create file systems with user-defined storage pools" on page 46 provides more information. Take disk contention into account when creating a user-defined pool.
- If you have disk volumes you would like to reserve, so that the system-defined storage pools do not use them, consider creating a user-defined storage pool and adding those specific volumes to it, as the sketch below also shows. 39 of 92
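Three of the guidelines above lend themselves to short command sketches. All are illustrative: pool, volume, and file system names are placeholders; the -modify form of the slicing attribute is an assumption that mirrors the -default_slice_flag option used at pool creation, so confirm it against "Modify system-defined and user-defined storage pool attributes" on page 74; and the nas_volume striping syntax is described in Managing EMC Celerra Volumes and File Systems Manually (the stripe depth argument is shown here in bytes).

To restrict slicing on a pool used for file systems with sequential I/O:

$ nas_pool -modify clar_r5_performance -default_slice_flag n

To control the stripe size yourself, stripe the disk volumes manually, place the stripe in a user-defined pool, and build the file system from that pool:

$ nas_volume -name stv1 -create -Stripe 32768 d126,d127,d128,d129
$ nas_pool -create -name custom_stripe -volumes stv1 -default_slice_flag y
$ nas_fs -name ufs4 -create size=10G pool=custom_stripe

To reserve disk volumes so that the system-defined pools cannot acquire them, create a pool that simply holds them and never build file systems from it; on a CLARiiON storage system, reserve storage-processor balanced pairs of the same RAID type, disk count, and size:

$ nas_pool -create -name reserved_pairs -description "volumes held back from AVM" -volumes d130,d131 -default_slice_flag n

The 32768-byte stripe depth in the second sketch matches the MPFS minimum noted earlier; choose a value suited to your workload.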
40 Configuring

The tasks to configure volumes and file systems with AVM are:

1. "Configure disk volumes" on page 40
2. "Create file systems with AVM" on page 42
3. "Create a user-defined storage pool" on page 47
4. "Extend file systems with AVM" on page 54
5. "Create file system checkpoints with AVM" on page 69

Configure disk volumes

The EMC Celerra NS500G, NS500GS, NS600G, NS600GS, NS700G, NS700GS, and NS704G system network servers are gateway network-attached storage (NAS) systems that connect to EMC Symmetrix and CLARiiON arrays. A Celerra gateway system stores data on CLARiiON user LUNs or Symmetrix hypervolumes. If the user LUNs or hypervolumes are not configured correctly on the array, Celerra AVM and Celerra Manager cannot be used to manage the storage.

Typically, EMC support personnel perform the initial setup of disk volumes on these gateway storage systems. However, if your Celerra gateway system is attached to a CLARiiON array and you want to add disk volumes to the configuration, use the procedure outlined in this section. In this two-step procedure, you first use EMC Navisphere Manager or the EMC Navisphere CLI to create the CLARiiON user LUNs, and then use Celerra Manager to make the new user LUNs available to the Celerra as disk volumes. The user LUNs must be created before you create Celerra file systems.

Note: To add CLARiiON user LUNs, you must be familiar with EMC Navisphere Manager or the EMC Navisphere CLI and the process of creating RAID groups and CLARiiON user LUNs for the Celerra volumes. The documentation for EMC Navisphere Manager and EMC Navisphere CLI, available on Powerlink, describes how to create RAID groups and user LUNs.

If the disk volumes are configured by EMC, go to "Create file systems with AVM" on page 42. 40 of 92
41 Add CLARiiON user LUNs

Step Action

1. Create RAID groups and CLARiiON user LUNs (as needed for Celerra volumes) by using EMC Navisphere Manager or EMC Navisphere CLI. Ensure that you add the LUNs to the Celerra gateway system's storage group:

- Always create the user LUNs in balanced pairs, one owned by SP A and one owned by SP B. The paired LUNs must be the same size.
- For FC disks, the paired LUNs do not have to be in the same RAID group. For RAID 5 on FC disks, the RAID group must use five or nine disks. RAID 1 groups always use two disks.
- For ATA disks, all LUNs in a RAID group must belong to the same SP; create pairs by using LUNs from two RAID groups. RAID 6 groups have no restrictions on the number of disks. ATA disks must be configured as RAID 5, RAID 6, or RAID 3.
- The host LUN identifier (HLU) must be greater than or equal to 16 for user LUNs.

Use these settings when creating user LUNs:

- RAID Type: RAID 5, RAID 6, or RAID 1 for FC disks and RAID 5, RAID 6, or RAID 3 for ATA disks
- LUN ID: Select the first available value
- Element Size: 128
- Rebuild Priority: ASAP
- Verify Priority: ASAP
- Enable Read Cache: Selected
- Enable Write Cache: Selected
- Enable Auto Assign: Cleared (off)
- Number of LUNs to Bind: 2
- Alignment Offset: 0
- LUN size: Must not exceed 2 TB

Note: If you create 4+1 RAID 3 LUNs, the Number of LUNs to Bind value should be 1. When you add the LUN to the storage group for a gateway system, set the HLU to 16 or greater.

2. Perform these steps by using Celerra Manager to make the new user LUNs available to the Celerra system:

a. Open the Storage System page for the Celerra system (Storage > Systems).
b. Click Rescan.

Note: Do not change the host LUN identifier of the Celerra LUNs after rescanning. This might cause data loss or unavailability. 41 of 92
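For administrators working from the Navisphere CLI instead of Navisphere Manager, an invocation along the lines of the sketch below would bind one LUN of a pair with the settings above. It is illustrative only: flag spellings and defaults vary by Navisphere CLI version, so verify each option against your Navisphere CLI documentation, and the SP address, LUN number, RAID group, and capacity are placeholders.

$ navicli -h <SP_A_address> bind r5 16 -rg 1 -sp a -elsz 128 -cap 100 -sq gb

Repeat for the paired LUN owned by SP B, then add both LUNs to the gateway system's storage group with an HLU of 16 or greater.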
42 Create file systems with AVM

This section describes the procedures to create a Celerra file system by using AVM storage pools and explains how to create file systems by using the automatic file system extension feature. You can enable automatic file system extension on new or existing file systems if the file system has an associated AVM storage pool. When you enable automatic file system extension, use the nas_fs command options to adjust the HWM value, set a maximum file size to which the file system can be extended, and enable Virtual Provisioning. "Create file systems with automatic file system extension" on page 50 provides more information.

You can create file systems by using system-defined, system-defined virtual, or user-defined storage pools, with automatic file system extension enabled or disabled. Specify the storage system from which to allocate space for either type of storage pool. Choose one or more of these procedures to create file systems:

- "Create file systems with system-defined storage pools" on page 42: The simplest way to create file systems without having to create the underlying volume structure.
- "Create file systems with user-defined storage pools" on page 46: Allows more administrative control of the underlying volumes and placement of the file system. Use these storage pools to prevent the system-defined storage pools from using certain volumes.
- "Create file systems with automatic file system extension" on page 50: Allows you to create a file system that automatically extends when it reaches a certain threshold by using space from either a system-defined or a user-defined storage pool.

Create file systems with system-defined storage pools

When you create a Celerra file system by using the system-defined storage pools, it is not necessary to create volumes before setting up the file system. AVM allocates space to the file system from the storage pool you specify, residing on the storage system associated with that storage pool, and automatically creates any required volumes when it creates the file system. This ensures that the file system and its extensions are created from the same type of storage, with the same cost, performance, and availability characteristics.

The storage system appears as a number associated with the storage system, and is dependent on the type of attached storage system. A CLARiiON storage system appears as a set of integers prefixed with APM. A Symmetrix storage system appears as a set of integers. 42 of 92
43 Step Action

1. Obtain the list of available system-defined storage pools and system-defined virtual storage pools by using this command syntax:
$ nas_pool -list
Example: To list the storage pools, type:
$ nas_pool -list
Output:
id in_use acl name
1  y      0   symm_std
2  n      0   clar_r1
3  y      0   clar_r5_performance
4  y      0   clar_r5_economy
5  n      0   clarata_r3
6  n      0   clarata_archive
7  n      0   symm_std_rdf_src
8  n      0   clar_r1
40 y      0   engineer_apm
   y      0   tp1_fcntr

2. Display the size of a specific storage pool by using this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example: To display the size of the clar_r5_performance storage pool, type:
$ nas_pool -size clar_r5_performance
Output:
id           = 3
name         = clar_r5_performance
used_mb      =
avail_mb     = 0
total_mb     =
potential_mb =
Note: To display the size of all storage pools, use the -all option instead of the <name> option.

3. Obtain the system name of an attached Symmetrix storage system by using this command syntax:
$ nas_storage -list
Example: To list the system name of an attached Symmetrix storage system, type:
$ nas_storage -list
Output:
id acl name serial number
43 of 92
44 Step Action

4. Obtain information of a specific Symmetrix storage system in the list by using this command syntax:
$ nas_storage -info <system_name>
where:
<system_name> = name of the storage system
Example: To obtain information about the Symmetrix storage system, type:
$ nas_storage -info
Output:
type num slot ident  stat scsi  vols ports p0_stat p1_stat p2_stat p3_stat
R1   1   1    RA-1A  Off  NA    0    1     Off     NA      NA      NA
DA   2   2    DA-2A  On   WIDE  25   2     On      Off     NA      NA
DA   3   3    DA-3A  On   WIDE  25   2     On      Off     NA      NA
SA   5   5    SA-5A  On   ULTRA 0    2     On      On      NA      NA
SA            SA-12A On   ULTRA 0    2     Off     On      NA      NA
DA            DA-14A On   WIDE  27   2     On      Off     NA      NA
DA            DA-15A On   WIDE  26   2     On      Off     NA      NA
R             RA-16A On   NA    0    1     On      NA      NA      NA
R             RA-1B  Off  NA    0    1     Off     NA      NA      NA
DA   18  2    DA-2B  On   WIDE  26   2     On      Off     NA      NA
DA   19  3    DA-3B  On   WIDE  27   2     On      Off     NA      NA
SA   21  5    SA-5B  On   ULTRA 0    2     On      On      NA      NA
SA            SA-12B On   ULTRA 0    2     On      On      NA      NA
DA            DA-14B On   WIDE  25   2     On      Off     NA      NA
DA            DA-15B On   WIDE  25   2     On      Off     NA      NA
R             RA-16B On   NA    0    1     On      NA      NA      NA
44 of 92
45 Step Action

5. Create a file system by size with a system-defined storage pool by using this command syntax:
$ nas_fs -name <fs_name> -create size=<size> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system
<size> = amount of space to add to the file system; specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T)
<pool> = name of the storage pool
<system_name> = name of the storage system from which space for the file system is allocated
Example: To create a file system ufs1 of size 10G with a system-defined storage pool, type:
$ nas_fs -name ufs1 -create size=10G pool=symm_std storage=<system_name>
Note: To mirror the file system with SRDF, you must specify the symm_std_rdf_src storage pool. This directs AVM to allocate space from volumes configured when installing for remote mirroring by using SRDF. Using SRDF/S with EMC Celerra for Disaster Recovery contains more information.
Output:
id            = 1
name          = ufs1
acl           = 0
in_use        = False
type          = uxfs
volume        = avm1
pool          = symm_std
member_of     =
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = no,virtual_provision=no
deduplication = off
stor_devs     =
disks         = d20,d12,d18,d10
Note: The EMC Celerra Network Server Command Reference Manual contains information on the options available for creating a file system with the nas_fs command. 45 of 92
46 Create file systems with user-defined storage pools

The AVM system-defined storage pools are available for use with the Celerra Network Server. If you require more manual control than the system-defined storage pools allow, create a user-defined storage pool and then create the file system by using that pool.

Note: Create a user-defined storage pool and define its attributes to reserve disk volumes so that your system-defined storage pools cannot use them.

Prerequisites

Prerequisites include:

- Creating a user-defined storage pool requires manual volume management. You must first stripe the volumes together and add the resulting volumes to the storage pool you create. Managing EMC Celerra Volumes and File Systems Manually describes the steps to create and manage volumes.
- You cannot use disk volumes you have reserved for other purposes. For example, you cannot use any disk volumes reserved for a system-defined storage pool. Controlling Access to EMC Celerra System Objects contains more information on access control levels.
- AVM system-defined storage pools designed for use with CLARiiON storage systems acquire pairs of storage-processor balanced disk volumes that have the same RAID type, disk count, and size. "Modify system-defined and user-defined storage pool attributes" on page 74 provides more information. When creating a user-defined storage pool to reserve disk volumes from a CLARiiON storage system, use storage-processor balanced disk volumes with these same qualities. Otherwise, AVM cannot find matching pairs, and the number of usable disk volumes might be more limited than was intended.

To create a file system with a user-defined storage pool:

- "Create a user-defined storage pool" on page 47
- "Create the file system" on page 48
- "Create file systems with automatic file system extension" on page 50
- "Create automatic file system extension-enabled file systems" on page 52
46 of 92
47 Create a user-defined storage pool

To create a user-defined storage pool (from which space for the file system is allocated), add volumes to the storage pool and define the storage pool attributes.

Action

To create a user-defined storage pool, use this command syntax:
$ nas_pool -create -name <name> -acl <acl> -volumes [<volume_name>,...] -description <desc> -default_slice_flag {y|n}
where:
<name> = name of the storage pool
<acl> = designates an access control level for the new storage pool; default value is 0
<volume_name> = names of the volumes to add to the storage pool; can be a metavolume, slice volume, stripe volume, or disk volume; use a comma to separate each volume name
<desc> = assigns a comment to the storage pool; type the comment within quotes
-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the storage pool; if set to y, then members might be sliced, and if set to n, then the members of the storage pool cannot be sliced, and the volumes specified cannot be built on a slice

Example: To create a user-defined storage pool named marketing with a description, with the disk members d126, d127, d128, and d129 specified, and allow the volumes to be built on a slice, type:
$ nas_pool -create -name marketing -description "storage pool for marketing" -volumes d126,d127,d128,d129 -default_slice_flag y

Output

id                 = 5
name               = marketing
description        = Storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4
47 of 92
48 Create the file system

To create a file system, you must first create a user-defined storage pool. "Create a user-defined storage pool" on page 47 provides more information. Use this procedure to create a file system by specifying a user-defined storage pool and an associated storage system.

Step Action

1. List the storage systems by using this command syntax:
$ nas_storage -list
Example: To list the storage systems, type:
$ nas_storage -list
Output:
id acl name serial number
1  0   APM  APM

2. Get detailed information of a specific attached storage system in the list by using this command syntax:
$ nas_storage -info <system_name>
where:
<system_name> = name of the storage system
Example: To get detailed information of the storage system, type:
$ nas_storage -info APM
Output:
id               = 1
arrayname        = APM
name             = APM
model_type       = RACKMOUNT
model_num        = 630
db_sync_time     = == Sat Jan 6 17:21:00 EST 2007
num_disks        = 30
num_devs         = 21
num_pdevs        = 1
num_storage_grps = 0
num_raid_grps    = 10
cache_page_size  = 8
wr_cache_mirror  = True
low_watermark    = 70
high_watermark   = 90
unassigned_cache = 0
failed_over      = False
captive_storage  = True
Active Software
Navisphere       =
ManagementServer =
Base             =
48 of 92
49 Step Action

Storage Processors
SP Identifier     = A
signature         =
microcode_version =
serial_num        = LKE
prom_rev          =
agent_rev         = (1.43)
phys_memory       = 3968
sys_buffer        = 749
read_cache        = 32
write_cache       = 3072
free_memory       = 115
raid3_mem_size    = 0
failed_over       = False
hidden            = True
network_name      = spa
ip_address        =
subnet_mask       =
gateway_address   =
num_disk_volumes  = 11 - root_disk root_ldisk d3 d4 d5 d6 d8 d13 d14 d15 d16

SP Identifier     = B
signature         =
microcode_version =
serial_num        = LKE
prom_rev          =
agent_rev         = (1.43)
phys_memory       = 3968
raid3_mem_size    = 0
failed_over       = False
hidden            = True
network_name      = OEM-XOO25IL9VL9
ip_address        =
subnet_mask       =
gateway_address   =
num_disk_volumes  = 4 - disk7 d9 d11 d12

Note: This is not a complete output. 49 of 92
50 Step Action

3. Create the file system from a user-defined storage pool and designate the storage system on which you want the file system to reside by using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create <volume_name> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system
<type> = type of the file system: uxfs (default), mgfs, or rawfs
<volume_name> = name of the volume
<pool> = name of the storage pool
<system_name> = name of the storage system on which the file system resides
Example: To create the file system ufs1 from a user-defined storage pool and designate the APM storage system on which you want the file system ufs1 to reside, type:
$ nas_fs -name ufs1 -type uxfs -create MTV1 pool=marketing storage=apm
Output:
id            = 2
name          = ufs1
acl           = 0
in_use        = False
type          = uxfs
volume        = MTV1
pool          = marketing
member_of     = root_avm_fs_group_2
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = no,virtual_provision=no
deduplication = off
stor_devs     = APM
disks         = d6,d8,d11,d12

Create file systems with automatic file system extension

Use the -auto_extend option of the nas_fs command to enable automatic file system extension on a new file system created with AVM; the option is disabled by default.

Note: Automatic file system extension does not alleviate the need for appropriate file system usage planning. Create the file systems with adequate space to accommodate the estimated file system usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot adequately extend the file system to accommodate the usage quickly enough, the automatic extension fails.

If automatic file system extension is disabled and the file system reaches 90 percent full, a warning message is written to the sys_log. Any action necessary is at the administrator's discretion.

Note: You do not have to set the maximum size for a newly created file system when you enable automatic file system extension. The default maximum size is 16 TB. With automatic file system extension enabled, even if the HWM is not set, the file system automatically extends up to 16 TB, if the storage space is available in the storage pool. 50 of 92
51 Use this procedure to create a file system with a system-defined storage pool and a CLARiiON storage system, and enable automatic file system extension.

Action

To create a file system with automatic file system extension enabled, use this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<size> pool=<pool_name> storage=<system_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system
<type> = type of the file system
<size> = amount of space to add to the file system; specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T)
<pool_name> = name of the storage pool from which to allocate space to the file system
<system_name> = name of the storage system associated with the storage pool
Example: To enable automatic file system extension as you create a new 10 GB file system from a system-defined storage pool and a CLARiiON storage system, type:
$ nas_fs -name ufs1 -type uxfs -create size=10G pool=clar_r5_performance storage=apm -auto_extend yes

Output

id            = 434
name          = ufs1
acl           = 0
in_use        = False
type          = uxfs
worm          = off
volume        = v1634
pool          = clar_r5_performance
member_of     = root_avm_fs_group_3
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = hwm=90%,virtual_provision=no
deduplication = off
stor_devs     = APM D,APM A,APM ,APM
disks         = d20,d12,d18,d10
51 of 92
52 Create automatic file system extension-enabled file systems

When you create a file system with automatic file system extension enabled, you can set the point at which you want the file system to automatically extend (the HWM) and the maximum size to which the file system can grow. You can also enable Virtual Provisioning at the same time that you create or extend a file system. "Enable automatic file system extension and options" on page 61 provides information on modifying the automatic file system extension options.

If you set the slice=no option on the file system, the actual file system size might be bigger than the size that you specify for the file system, and could exceed the maximum size. In this case, a warning indicates that the file system size might exceed the maximum size, and the automatic extension fails. If you do not specify the file system slice option (-option slice={yes|no}) when you create the file system, the file system defaults to the setting of the storage pool. "Modify system-defined and user-defined storage pool attributes" on page 74 provides more information.

Note: If the actual file system size is above the HWM when Virtual Provisioning is enabled, the client sees the actual file system size instead of the specified maximum size.

Enabling automatic file system extension and Virtual Provisioning does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. The file system must be manually extended. 52 of 92
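To pin the slice behavior at creation time rather than inheriting it from the pool, the slice option can be passed explicitly. This is a sketch using the -option syntax referenced above; the file system name, size, and pool are placeholders:

$ nas_fs -name ufs_noslice -create size=10G pool=clar_r5_performance -option slice=no

With slice=no, AVM gives the file system whole member volumes rather than slices, which is why the resulting size can exceed the requested size and, potentially, the configured maximum.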
53 Use this procedure to simultaneously set the automatic file system extension options when you are creating the file system.

Step Action

1. Create a file system of a specified size, enable automatic file system extension and Virtual Provisioning, and set the HWM and the maximum file system size simultaneously by using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<integer>[T|G|M] pool=<pool_name> storage=<system_name> -auto_extend {no|yes} -vp {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<type> = type of the file system
<integer> = size requested in MB, GB, or TB; the maximum size is 16 TB
<pool_name> = name of the storage pool
<system_name> = attached storage system on which the file system and storage pool reside
<50-99> = percentage between 50 and 99, at which you want the file system to automatically extend
Example: To create a 10 MB UxFS from an AVM storage pool, with automatic file system extension enabled, a maximum file system size of 200M, an HWM of 90 percent, and Virtual Provisioning enabled, type:
$ nas_fs -name ufs2 -type uxfs -create size=10M pool=clar_r5_performance -auto_extend yes -vp yes -hwm 90% -max_size 200M
Output:
id            = 435
name          = ufs2
acl           = 0
in_use        = False
type          = uxfs
worm          = off
volume        = v1637
pool          = clar_r5_performance
member_of     = root_avm_fs_group_3
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = hwm=90%,max_size=200m,virtual_provision=yes
deduplication = off
stor_devs     = APM D,APM A,APM ,APM
disks         = d20,d12,d18,d10
Note: When you enable Virtual Provisioning on a new or existing file system, you must also specify the maximum size to which the file system can automatically extend. 53 of 92
54 Step Action

2. Verify the settings for the specific file system after enabling automatic file system extension by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example: To verify the settings for the file system ufs2 after enabling automatic file system extension, type:
$ nas_fs -info ufs2
Output:
id            = 2
name          = ufs2
acl           = 0
in_use        = False
type          = uxfs
worm          = off
volume        = v1637
pool          = clar_r5_performance
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
backups       = ufs2_snap1,ufs2_snap2
auto_ext      = hwm=66%,max_size= m,virtual_provision=yes
deduplication = off
stor_devs     = APM D,APM A,APM ,APM
disks         = d20,d12,d18,d10

You can also set the -hwm and -max_size options on each file system with automatic file system extension enabled. When enabling Virtual Provisioning on a file system, you must set the maximum size, but setting the high water mark is optional.

Extend file systems with AVM

Increase the size of a Celerra file system nearing its maximum capacity by extending the file system. You can:

- Extend a file system by size to add space if the file system has an associated system-defined storage pool. You can specify the storage system from which to allocate space. "Extend file systems with system-defined storage pools" on page 55 provides instructions.
- Extend a file system by using a storage pool other than the one used to create the file system. "Extend file systems by using a different storage pool" on page 57 provides instructions.
- Extend a file system by volume if the file system has an associated user-defined storage pool. "Extend file systems with user-defined storage pools" on page 59 provides instructions.
- Extend an existing file system by enabling automatic file system extension on that file system. "Enable automatic file system extension and options" on page 61 provides instructions. 54 of 92
55 - Extend an existing file system by enabling Virtual Provisioning on that file system. "Enable Virtual Provisioning" on page 65 provides instructions.

Managing EMC Celerra Volumes and File Systems Manually contains the instructions to extend file systems manually.

Extend file systems with system-defined storage pools

All file systems created by using the AVM feature have an associated storage pool. Extend a file system created with a system-defined storage pool (either virtual or non-virtual) by specifying only the size and the name of the file system. AVM allocates storage from the storage pool to the file system. You can specify the storage system you want to use. If you do not, the last storage system associated with the storage pool is used.

Note: A file system created using a system-defined virtual storage pool can be extended on its existing pool or by using a compatible system-defined virtual storage pool that contains the same disk type.

Use this procedure to extend a file system with a system-defined storage pool by size.

Note: Use either a system-defined or user-defined storage pool to extend a file system.

Step Action

1. Check the file system configuration to confirm that the file system has an associated storage pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Note: If you see a storage pool defined in the output, the file system was created with AVM and has an associated storage pool.
Example: To check the file system configuration to confirm that the file system ufs1 has an associated storage pool, type:
$ nas_fs -info ufs1
Output:
id         = 8
name       = ufs1
acl        = 0
in_use     = False
type       = uxfs
volume     = v121
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM
disks      = d7,d13
55 of 92
56 Step Action

2. Extend the file system by using this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system
<size> = amount of space to add to the file system; specify the size in GB by typing <number>G (for example, 250G) or in MB by typing <number>M (for example, 500M)
<pool> = name of the storage pool
<system_name> = name of the storage system; if you do not specify a storage system, the default storage system is the one on which the file system resides, and if the file system spans multiple storage systems, the default is any one of the storage systems on which the file system resides
Note: The first time you extend the file system without specifying a storage pool, the default storage pool is the one used to create the file system. If you specify a storage pool that is different from the one used to create the file system, the next time you extend this file system without specifying a storage pool, the last pool in the output list is the default.
Example: To extend the size of the file system ufs1 by 10M, type:
$ nas_fs -xtend ufs1 size=10M pool=clar_r5_performance storage=apm
Output:
id         = 8
name       = ufs1
acl        = 0
in_use     = False
type       = uxfs
volume     = v121
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM
disks      = d7,d13,d19,d25,d30,d31,d32,d33

3. Check the size of the file system after extending it to confirm that the size increased by using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example: To check the size of the file system ufs1 after extending it to confirm that the size increased, type:
$ nas_fs -size ufs1
Output:
total =  avail =  used = 0 ( 0% ) (sizes in MB)
volume: total =  (sizes in MB)
56 of 92
57 Extend file systems by using a different storage pool

You can use more than one storage pool to extend a file system. Ensure that the storage pools have space allocated from the same storage system to prevent the file system from spanning more than one storage system.

Note: A file system created using a system-defined virtual storage pool can be extended on its existing pool or by using a compatible system-defined virtual storage pool that contains the same disk type.

Use this procedure to extend the file system by using a different storage pool than the one used to create the file system.

Step Action

1. Check the file system configuration to confirm that the file system has an associated storage pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example: To check the file system configuration to confirm that the file system ufs2 has an associated storage pool, type:
$ nas_fs -info ufs2
Output:
id         = 9
name       = ufs2
acl        = 0
in_use     = False
type       = uxfs
volume     = v121
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM
disks      = d7,d13
Note: The storage pool used earlier to create or extend the file system is shown in the output as associated with this file system.
57 of 92
58 Step Action

2. To extend the file system by using a storage pool other than the one used to create the file system, use this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool>
where:
<fs_name> = name of the file system
<size> = amount of space you want to add to the file system; specify the size in GB by typing <number>G (for example, 250G) or in MB by typing <number>M (for example, 500M)
<pool> = name of the storage pool
Example: To extend the file system ufs2 by using a different storage pool than the one used to create the file system, type:
$ nas_fs -xtend ufs2 size=10M pool=clar_r5_economy
Output:
id         = 9
name       = ufs2
acl        = 0
in_use     = False
type       = uxfs
volume     = v123
pool       = clar_r5_performance,clar_r5_economy
member_of  = root_avm_fs_group_3,root_avm_fs_group_4
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM
disks      = d7,d13,d19,d25
Note: The storage pools used to create and extend the file system now appear in the output. There is only one storage system from which space for these storage pools is allocated.

3. Check the file system size after extending it to confirm the increase in size by using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example: To check the size of the file system ufs2 after extending it to confirm the increase in size, type:
$ nas_fs -size ufs2
Output:
total =  avail =  used = 0 ( 0% ) (sizes in MB)
volume: total =  (sizes in MB)
58 of 92
59 Extend file systems with user-defined storage pools

If you created a file system with a user-defined storage pool, you must extend the file system manually, by specifying the volumes to add.

Note: With user-defined storage pools, you must manually create the underlying volumes, including striping, before adding the volume to the storage pool. Managing EMC Celerra Volumes and File Systems Manually describes the detailed procedures needed to perform these tasks before creating or extending the file system.

If you do not specify a storage system when extending the file system, the default storage system is the one on which the file system resides. If the file system spans multiple storage systems, the default is any one of the storage systems on which the file system resides.

Use this procedure to extend the file system by using the same user-defined storage pool that was used to create the file system.

Step Action

1. Check the configuration of the file system to confirm the associated user-defined storage pool by using this command syntax:
$ nas_fs -info <fs_name>
where:
<fs_name> = name of the file system
Example: To check the configuration of the file system ufs3 to confirm the associated user-defined storage pool, type:
$ nas_fs -info ufs3
Output:
id         = 10
name       = ufs3
acl        = 0
in_use     = False
type       = uxfs
volume     = V121
pool       = marketing
member_of  =
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM
disks      = d7,d8
Note: The user-defined storage pool used to create the file system is defined in the output.
59 of 92
60 Step Action

2. Extend the file system by using this command syntax:
$ nas_fs -xtend <fs_name> <volume_name> pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system
<volume_name> = name of the volume to add to the file system
<pool> = storage pool associated with the file system; it can be user-defined or system-defined
<system_name> = name of the storage system on which the file system resides
Example: To extend the file system ufs3, type:
$ nas_fs -xtend ufs3 v121 pool=marketing storage=apm
Output:
id         = 10
name       = ufs3
acl        = 0
in_use     = False
type       = uxfs
volume     = v121
pool       = marketing
member_of  =
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = APM
disks      = d7,d8,d13,d14
Note: The next time you extend this file system without specifying a storage pool, the last pool in the output list is the default.

3. Check the size of the file system ufs3 after extending it to confirm that the size increased by using this command syntax:
$ nas_fs -size <fs_name>
where:
<fs_name> = name of the file system
Example: To check the size of the file system ufs3 after extending it to confirm that the size increased, type:
$ nas_fs -size ufs3
Output:
total =  avail =  used = 0 ( 0% ) (sizes in MB)
volume: total =  (sizes in MB)
60 of 92
61 Enable automatic file system extension and options

You can automatically extend an existing file system created with AVM system-defined or user-defined storage pools. The file system automatically extends by using space from the storage system and storage pool with which the file system is associated.

If you set the slice=no option on the file system, the actual file system size might be bigger than the size you specify for the file system, and could exceed the maximum size. In this case, you receive a warning indicating that the file system size might exceed the maximum size, and automatic extension fails. If you do not specify the file system slice option (-option slice={yes|no}) when you create the file system, the file system defaults to the setting of the storage pool. "Modify system-defined and user-defined storage pool attributes" on page 74 describes the procedure to modify the default_slice_flag attribute on the storage pool.

Use the -modify option to enable automatic extension on an existing file system. You can also set the HWM and maximum size. To enable automatic file system extension and options:

- "Enable automatic file system extension" on page 62
- "Set the HWM" on page 63
- "Set the maximum file system size" on page 64

You can also enable Virtual Provisioning at the same time that you create or extend a file system. "Enable Virtual Provisioning" on page 65 describes the procedure to enable Virtual Provisioning on an existing file system. "Enable automatic extension, Virtual Provisioning, and all options simultaneously" on page 67 describes the procedure to simultaneously enable automatic extension, Virtual Provisioning, and all options on an existing file system. 61 of 92
62 Enable automatic file system extension

If the HWM or maximum size is not set, the file system automatically extends up to the default maximum size of 16 TB when the file system reaches the default HWM of 90 percent, if the space is available. An error message appears if you try to enable automatic file system extension on a file system created manually.

Note: The HWM is 90 percent by default when you enable automatic file system extension.

Action

To enable automatic extension on an existing file system, use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system
Example: To enable automatic extension on an existing file system ufs3, type:
$ nas_fs -modify ufs3 -auto_extend yes

Output

id         = 28
name       = ufs3
acl        = 0
in_use     = True
type       = uxfs
worm       = off
volume     = v157
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers = server_2
ro_servers =
rw_vdms    =
ro_vdms    =
auto_ext   = hwm=90%,virtual_provision=no
stor_devs  = APM F,APM D,APM ,APM
disks      = d20,d18,d14,d11
disk=d20 stor_dev=apm f addr=c0t1l15 server=server_2
disk=d20 stor_dev=apm f addr=c32t1l15 server=server_2
disk=d18 stor_dev=apm d addr=c0t1l13 server=server_2
disk=d18 stor_dev=apm d addr=c32t1l13 server=server_2
disk=d14 stor_dev=apm addr=c0t1l9 server=server_2
disk=d14 stor_dev=apm addr=c32t1l9 server=server_2
disk=d11 stor_dev=apm addr=c0t1l6 server=server_2
disk=d11 stor_dev=apm addr=c32t1l6 server=server_2
62 of 92
63 Set the HWM

With automatic file system extension enabled on an existing file system, use the -hwm option to set a threshold. To specify a threshold, type an integer between 50 and 99 percent; the default is 90 percent. If the HWM or maximum size is not set, the file system automatically extends up to the default maximum size of 16 TB when the file system reaches the default HWM of 90 percent, if the space is available. The value for maximum size, if specified, has an upper limit of 16 TB.

Action

To set the HWM on an existing file system, with automatic file system extension enabled, use this command syntax:
$ nas_fs -modify <fs_name> -hwm <50-99>%
where:
<fs_name> = name of the file system
<50-99> = an integer representing the file system usage point at which you want it to automatically extend
Example: To set the HWM on an existing file system ufs3, with automatic extension already enabled, type:
$ nas_fs -modify ufs3 -hwm 85%

Output

id         = 28
name       = ufs3
acl        = 0
in_use     = True
type       = uxfs
worm       = off
volume     = v157
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers = server_2
ro_servers =
rw_vdms    =
ro_vdms    =
auto_ext   = hwm=85%,virtual_provision=no
stor_devs  = APM F,APM D,APM ,APM
disks      = d20,d18,d14,d11
disk=d20 stor_dev=apm f addr=c0t1l15 server=server_2
disk=d20 stor_dev=apm f addr=c32t1l15 server=server_2
disk=d18 stor_dev=apm d addr=c0t1l13 server=server_2
disk=d18 stor_dev=apm d addr=c32t1l13 server=server_2
disk=d14 stor_dev=apm addr=c0t1l9 server=server_2
disk=d14 stor_dev=apm addr=c32t1l9 server=server_2
disk=d11 stor_dev=apm addr=c0t1l6 server=server_2
disk=d11 stor_dev=apm addr=c32t1l6 server=server_2
63 of 92
Set the maximum file system size

Use the -max_size option to specify a maximum size to which a file system can grow. To specify the maximum size, type an integer and specify T for TB, G for GB (default), or M for MB.

When you enable automatic file system extension, the file system automatically extends up to the default maximum size of 16 TB. Set the HWM at which you want the file system to automatically extend. If the HWM is not set, the file system automatically extends up to 16 TB when the file system reaches the default HWM of 90 percent, if the space is available.

Action
To set the maximum file system size with automatic file system extension already enabled, use this command syntax:
$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<integer> = maximum size requested in MB, GB, or TB
Example:
To set the maximum file system size on the existing file system, type:
$ nas_fs -modify ufs3 -max_size 16T

Output
id = 28
name = ufs3
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=85%,max_size= m,virtual_provision=no
stor_devs = APM F,APM D, APM ,APM
disks = d20,d18,d14,d11
disk=d20 stor_dev=apm f addr=c0t1l15 server=server_2
disk=d20 stor_dev=apm f addr=c32t1l15 server=server_2
disk=d18 stor_dev=apm d addr=c0t1l13 server=server_2
disk=d18 stor_dev=apm d addr=c32t1l13 server=server_2
disk=d14 stor_dev=apm addr=c0t1l9 server=server_2
disk=d14 stor_dev=apm addr=c32t1l9 server=server_2
disk=d11 stor_dev=apm addr=c0t1l6 server=server_2
disk=d11 stor_dev=apm addr=c32t1l6 server=server_2
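Note that the auto_ext field reports max_size in megabytes. Assuming the binary units the CLI uses elsewhere, the conversion for this example is:

16 TB = 16 x 1024 x 1024 MB = 16,777,216 MB

so a -max_size of 16T is displayed as max_size=16777216m.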
Enable Virtual Provisioning

You can enable Virtual Provisioning at the same time that you create or extend a file system. Use the -vp option to enable Virtual Provisioning.

You must also specifically set the maximum size to which you want the file system to automatically extend. An error message appears if you attempt to enable Virtual Provisioning and do not set the maximum size. "Set the maximum file system size" on page 64 describes the procedure to set the maximum file system size. The upper limit for the maximum size is 16 TB. The maximum size you set is the file system size that is presented to users, if the maximum size is larger than the actual file system size.

Note: Enabling automatic file system extension and Virtual Provisioning does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. The file system must be manually extended.

Enable Virtual Provisioning on the source file system when the feature is used in a replication situation. With Virtual Provisioning enabled, NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, while they see the virtually provisioned maximum size of the Replicator source file system. "Interoperability considerations" on page 35 provides additional information.

Action
To enable Virtual Provisioning with automatic extension enabled on the file system, use this command syntax:
$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M] -vp {yes|no}
where:
<fs_name> = name of the file system
<integer> = size requested in MB, GB, or TB
Example:
To enable Virtual Provisioning, type:
$ nas_fs -modify ufs1 -max_size 16T -vp yes
Output
id = 27
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=85%,max_size= m,virtual_provision=yes
stor_devs = APM F,APM D, APM ,APM
disks = d20,d18,d14,d11
disk=d20 stor_dev=apm f addr=c0t1l15 server=server_2
disk=d20 stor_dev=apm f addr=c32t1l15 server=server_2
disk=d18 stor_dev=apm d addr=c0t1l13 server=server_2
disk=d18 stor_dev=apm d addr=c32t1l13 server=server_2
disk=d14 stor_dev=apm addr=c0t1l9 server=server_2
disk=d14 stor_dev=apm addr=c32t1l9 server=server_2
disk=d11 stor_dev=apm addr=c0t1l6 server=server_2
disk=d11 stor_dev=apm addr=c32t1l6 server=server_2
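As noted above, Virtual Provisioning can also be enabled when the file system is first created rather than through -modify. A minimal sketch, assuming a hypothetical file system ufs5 and that the create-time options mirror the -modify options shown in this section:

$ nas_fs -name ufs5 -create size=10G pool=clar_r5_performance -auto_extend yes -vp yes -max_size 16T

Clients would then see a 16 TB file system while only 10 GB is initially allocated from the pool.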
Enable automatic extension, Virtual Provisioning, and all options simultaneously

Note: An error message appears if you try to enable automatic file system extension on a file system that was created without using a storage pool.

Action
To simultaneously enable automatic file system extension and Virtual Provisioning on an existing file system, and set the HWM and the maximum size, use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend {no|yes} -vp {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<50-99> = an integer representing the file system usage point at which you want it to automatically extend
<integer> = size requested in MB, GB, or TB
Example:
To modify a UxFS to enable automatic extension, enable Virtual Provisioning, set a maximum file system size of 16 TB, with an HWM of 90 percent, type:
$ nas_fs -modify ufs4 -auto_extend yes -vp yes -hwm 90% -max_size 16T

Output
id = 29
name = ufs4
acl = 0
in_use = False
type = uxfs
worm = off
volume = v157
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size= m,virtual_provision=yes
stor_devs = APM F,APM D, APM ,APM
disks = d20,d18,d14,d11
Verify the maximum size of the file system

Automatic file system extension fails when the file system reaches the maximum size.

Action
To force an extension to determine whether the maximum size has been reached, use this command syntax:
$ nas_fs -xtend <fs_name> size=<size>
where:
<fs_name> = name of the file system
<size> = size to extend the file system by, in MB
Example:
To force an extension to determine whether the maximum size has been reached, type:
$ nas_fs -xtend ufs1 size=4M

Output
id = 759
name = ufs1
acl = 0
in_use = True
type = uxfs
worm = off
volume = v2459
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_4
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = hwm=90%,max_size= m (reached) virtual_provision=yes <<<
stor_devs = APM
disks = d10
disk=d10 stor_dev=apm addr=c16t1l8 server=server_4
disk=d10 stor_dev=apm addr=c32t1l8 server=server_4
disk=d10 stor_dev=apm addr=c0t1l8 server=server_4
disk=d10 stor_dev=apm addr=c48t1l8 server=server_4
Create file system checkpoints with AVM

Use either AVM system-defined or user-defined storage pools to create file system checkpoints. Specify the storage system on which you want the file system checkpoint to reside. Use this procedure to create the checkpoint, specifying a storage pool and storage system.

Note: You can only specify the storage pool for the checkpoint SavVol when there are no existing checkpoints of the PFS.

Step 1. Obtain a list of available storage systems by using this command syntax:
$ nas_storage -list
Note: To obtain more detailed information on the storage system and associated names, use the -info option instead.

Step 2. Create the checkpoint by using this command syntax:
$ fs_ckpt <fs_name> -name <name> -Create [size=<integer>[T|G|M|%]] pool=<pool> storage=<system_name>
where:
<fs_name> = name of the file system for which you want to create a checkpoint
<name> = name of the checkpoint
<integer> = amount of space to allocate to the checkpoint; type the size in TB, GB, MB, or as a percentage of the PFS size
<pool> = name of the storage pool
<system_name> = storage system on which the file system checkpoint resides
Note: Virtual Provisioning is not supported with checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of a SnapSure checkpoint file system.
Example:
To create the checkpoint ckpt1, type:
$ fs_ckpt ufs1 -name ckpt1 -Create size=10G pool=clar_r5_performance storage=apm

Output:
id = 1
name = ckpt1
acl = 0
in_use = False
type = uxfs
volume = V126
pool = clar_r5_performance
member_of =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs = APM
disks = d7,d8
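Per the size syntax above, the checkpoint SavVol can also be sized as a percentage of the PFS. A hypothetical variant of the previous example (the checkpoint name and percentage are illustrative):

$ fs_ckpt ufs1 -name ckpt2 -Create size=10% pool=clar_r5_performance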
Managing

The tasks to manage AVM storage pools are:

"List existing storage pools" on page 70
"Display storage pool details" on page 71
"Display storage pool size information" on page 71
"Modify system-defined and user-defined storage pool attributes" on page 74
"Extend a user-defined storage pool" on page 80
"Extend a system-defined storage pool" on page 81
"Remove volumes from storage pools" on page 82
"Delete user-defined storage pools" on page 83

List existing storage pools

When the existing storage pools are listed, all the system-defined storage pools and three user-defined storage pools (marketing, engineering, and sales) appear in the output. All existing storage pools are listed, regardless of whether they are in use.

Action
To list all existing system-defined and user-defined storage pools, use this command syntax:
$ nas_pool -list
Example:
To list the storage pools, type:
$ nas_pool -list

Output
id  in_use  acl  name
1   y       0    symm_std
2   n       0    clar_r1
3   y       0    clar_r5_performance
4   y       0    clar_r5_economy
5   y       0    marketing
6   y       0    engineering
7   y       0    sales
8   n       0    clarata_r3
9   n       0    clarata_archive
10  n       0    symm_std_rdf_src
11  n       0    clar_r1
40  y       0    engineer_apm
Display storage pool details

Action
To display detailed information of a specified system-defined, system-defined virtual, or user-defined storage pool, use this command syntax:
$ nas_pool -info <name>
where:
<name> = name of the storage pool
Example:
To display detailed information of the storage pool marketing, type:
$ nas_pool -info marketing

Output
id = 5
name = marketing
description =
acl = 0
in_use = True
clients = fs24,fs26
members = d320,d319
default_slice_flag = True
is_user_defined = True
virtually_provisioned= True
disk_type = CLSTD
server_visibility = server_2,server_3

Display storage pool size information

The size information of the storage pool appears in the output. If there is more than one storage pool, the output shows the size information for all the storage pools. The storage pool size information includes:

The total used space in the storage pool in MB (used_mb)
The total unused space in the storage pool in MB (avail_mb)
The total used and unused space in the storage pool in MB (total_mb)
The total space available from all sources in MB that could potentially be added to the storage pool (potential_mb)

For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk. In this example, total_mb and potential_mb are the same because the total storage in the storage pool is equal to the total potential storage available.

Note: If either non-MB-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage might occur. The total amount of space added to a pool might be different than the total amount taken from potential storage. Total space in the pool includes the truncated space, but potential storage does not include the truncated space.
In Celerra Manager, the potential MB in the output represents the total available storage, including the storage pool. In the CLI, the output for potential_mb does not include the space in the storage pool.

Note: Use the -size -all option to display the size information for all storage pools.

Action
To display the size information for a specific storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size information for the clar_r5_performance storage pool, type:
$ nas_pool -size clar_r5_performance

Output
id = 3
name = clar_r5_performance
used_mb =
avail_mb = 0
total_mb =
potential_mb =

Action
To display the size information for a specific virtual storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool
Example:
To display the size information for the ThinPool0 storage pool, type:
$ nas_pool -size ThinPool0_APM

Output
id = 49
name = ThinPool0_APM
used_mb = 0
avail_mb = 0
total_mb = 0
potential_mb = 1023
Physical storage usage in Thin Pool Thin Pool 0 on APM
used_mb = 2048
avail_mb =
total_mb =
Display Symmetrix storage pool size information

Sliced volumes do not appear in the output because the Symmetrix storage pools' default_slice_flag value is set to no. Use the -size -all option to display the size information for all storage pools.

Action
To display the size information of Symmetrix-related storage pools, use this command syntax:
$ nas_pool -size <name> -slice y
where:
<name> = name of the storage pool
Example:
To request size information for a specific Symmetrix storage pool, type:
$ nas_pool -size symm_std -slice y

Output
id = 5
name = symm_std
used_mb =
avail_mb = 0
total_mb =
potential_mb =

Note
Use the -slice y option to include any space from sliced volumes in the available result.

The size information for the system-defined storage pool named symm_std appears in the output. If you have more storage pools, the output shows the size information for all the storage pools.

used_mb is the used space in the specified storage pool in MB.
avail_mb is the amount of unused available space in the storage pool in MB.
total_mb is the total of used and unused space in the storage pool in MB.
potential_mb is the potential amount of storage that can be added to the storage pool available from all sources in MB. For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk.

In this example, total_mb and potential_mb are the same because the total storage in the storage pool is equal to the total potential storage available. If either non-MB-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage might occur. The total amount of space added to a pool might be different than the total amount taken from potential storage. Total space in the pool includes the truncated space, but potential storage does not include the truncated space.
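To make these fields concrete, a worked reading with hypothetical numbers (illustrative only, not taken from the outputs above): a pool reporting total_mb = 1228800 and used_mb = 409600 has avail_mb = 1228800 - 409600 = 819200 MB of unused space, and a nonzero potential_mb would indicate additional storage that AVM could still pull into the pool.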
Modify system-defined and user-defined storage pool attributes

System-defined and user-defined storage pools have attributes that control how they manage the volumes and file systems. Table 8 on page 74 lists the modifiable storage pool attributes, the values, and the attribute descriptions.

Table 8 Storage pool attributes

name
Values: Quoted string. Modifiable: Yes. Applies to: User-defined storage pools.
Unique name. If a name is not specified during creation, one is automatically generated.

description
Values: Quoted string. Modifiable: Yes. Applies to: User-defined storage pools.
A text description. Default is "" (blank string).

acl
Values: Integer. For example, 0. Modifiable: Yes. Applies to: User-defined storage pools.
Access control level. Controlling Access to EMC Celerra System Objects contains instructions to manage access control levels.

default_slice_flag
Values: y|n. Modifiable: Yes. Applies to: System-defined and user-defined storage pools.
Answers the question, can AVM slice member volumes to meet the file system request? A y entry tells AVM to create a slice of exactly the correct size from one or more member volumes. An n entry gives the primary or source file system exclusive access to one or more member volumes.
Note: If using TimeFinder or automatic file system extension, this attribute should be set to n. You cannot restore file systems built with sliced volumes to a previous state by using TimeFinder/FS.

is_dynamic
Values: y|n. Modifiable: Yes. Applies to: System-defined storage pools.
Note: Only applicable if volume_profile is not blank.
Answers the question, is this storage pool allowed to automatically add or remove member volumes? The default answer is n.

is_greedy
Values: y|n. Modifiable: Yes. Applies to: System-defined storage pools.
Note: Only applicable if volume_profile is not blank.
Answers the question, is this storage pool greedy? When a storage pool receives a request for space, a greedy storage pool (attribute value y) attempts to create a new member volume before searching for free space in existing member volumes. A storage pool that is not greedy (attribute value n) uses all available space in the storage pool before creating a new member volume.
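Before changing any of these attributes, you can review a pool's current settings with the -info option shown in "Display storage pool details"; the default_slice_flag, is_dynamic, and is_greedy values all appear in that output. For example, using the pool from the examples that follow:

$ nas_pool -info clar_r5_performance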
You can change the attribute default_slice_flag for system-defined and user-defined storage pools. It indicates whether member volumes can be sliced. If the storage pool has member volumes built on one or more slices, you cannot set this value to n.

Action
To modify the default_slice_flag for a system-defined or user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To modify a storage pool named marketing and change the default_slice_flag to prevent members of the pool from being sliced when space is dispensed, type:
$ nas_pool -modify marketing -default_slice_flag n

Output
id = 5
name = marketing
description = Storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag= False
is_user_defined = True
disk_type = STD
server_visibility = server_2,server_3,server_4

Note
The n entered in the example appears as False in the output; a y entry appears as True. If using automatic file system extension, the default_slice_flag should be set to n.

Modify system-defined storage pool attributes

The system-defined storage pool's attributes that can be modified are:

-is_dynamic indicates whether the system-defined storage pool is allowed to automatically add or remove member volumes.
-is_greedy, if set to y, makes the system-defined storage pool attempt to create new member volumes before using space from existing member volumes. A system-defined storage pool that is not greedy (set to n) consumes all the existing space in the storage pool before trying to add additional member volumes.

The tasks to modify the attributes of a system-defined storage pool are:

"Modify the -is_greedy attribute of a system-defined storage pool" on page 76
"Modify the -is_dynamic attribute of a system-defined storage pool" on page 77
Modify the -is_greedy attribute of a system-defined storage pool

Action
To modify the -is_greedy attribute of a specific system-defined storage pool, which controls whether the pool creates new member volumes before using space in existing ones, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To change the attribute -is_greedy to false for the storage pool named clar_r5_performance, type:
$ nas_pool -modify clar_r5_performance -is_greedy n

Output
id = 3
name = clar_r5_performance
description =
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = False
virtually_provisioned= False
is_greedy = False
is_dynamic = True
disk_type = STD
server_visibility = server_2,server_3,server_4

Note
The n entered in the example appears as False for the is_greedy attribute in the output.
Modify the -is_dynamic attribute of a system-defined storage pool

Action
To modify the -is_dynamic attribute of a specific system-defined storage pool to not allow the storage pool to add or remove members, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To change the attribute -is_dynamic to false, so that the storage pool named clar_r5_performance cannot add or remove members on its own, type:
$ nas_pool -modify clar_r5_performance -is_dynamic n

Output
id = 3
name = clar_r5_performance
description =
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = False
virtually_provisioned= False
is_greedy = False
is_dynamic = False
disk_type = STD
server_visibility = server_2,server_3,server_4

Note
The n entered in the example appears as False for the is_dynamic attribute in the output.

Modify user-defined storage pool attributes

The user-defined storage pool's attributes that can be modified are:

-name: Changes the name of the specified user-defined storage pool to the new name.
-acl: Designates an access control level for a user-defined storage pool. The default value is 0.
-description: Changes the description comment for the user-defined storage pool.

The tasks to modify the attributes of a user-defined storage pool are:

"Modify the name of a user-defined storage pool" on page 78
"Modify the access control of a user-defined storage pool" on page 78
"Modify the description of a user-defined storage pool" on page 79
Modify the name of a user-defined storage pool

Action
To modify the name of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify <name> -name <new_name>
where:
<name> = old name of the storage pool
<new_name> = new name of the storage pool
Example:
To change the name of the storage pool named marketing to purchasing, type:
$ nas_pool -modify marketing -name purchasing

Output
id = 5
name = purchasing
description = Storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4

Note
The name change to purchasing appears in the output. The description does not change unless the administrator changes it.

Modify the access control of a user-defined storage pool

Controlling Access to EMC Celerra System Objects contains instructions to manage access control levels.

Action
To modify the access control level for a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -acl <acl>
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<acl> = designates an access control level for the new storage pool; default value is 0
Example:
To change the access control level for the storage pool named purchasing, type:
$ nas_pool -modify purchasing -acl 1

Note: The access control level change to 1 appears in the output. The description does not change unless the administrator modifies it.
Output
id = 5
name = purchasing
description = Storage pool for marketing
acl = 1
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4

Modify the description of a user-defined storage pool

Action
To modify the description of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -description <description>
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<description> = descriptive comment about the pool or its purpose; type the comment within quotes
Example:
To change the descriptive comment for the storage pool named purchasing, type:
$ nas_pool -modify purchasing -description "Storage pool for purchasing"

Output
id = 5
name = purchasing
description = Storage pool for purchasing
acl = 1
in_use = False
clients =
members = d126,d127,d128,d129
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4
Extend a user-defined storage pool

You can add a slice volume, a metavolume, a disk volume, or a stripe volume to a user-defined storage pool.

Action
To extend the volumes for an existing user-defined storage pool, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<volume_name> = names of the volumes separated by commas
Example:
To extend the volumes for the storage pool named engineering, with volumes d130, d131, d132, and d133, type:
$ nas_pool -xtend engineering -volumes d130,d131,d132,d133

Output
id = 6
name = engineering
description =
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
disk_type = STD
server_visibility = server_2,server_3,server_4

Note
The original volumes (d126, d127, d128, and d129) appear in the output, followed by the volumes added in the example.
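Because a stripe volume is one of the member types listed above, one way to grow a user-defined pool is to stripe several unused disk volumes together and then add the stripe. A minimal sketch, assuming hypothetical disk volumes d140 through d143, a 32768-byte stripe size, and the nas_volume command (which is covered outside this section):

$ nas_volume -name stv1 -create -Stripe 32768 d140,d141,d142,d143
$ nas_pool -xtend engineering -volumes stv1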
Extend a system-defined storage pool

You can extend a system-defined storage pool by an explicit size after turning off the pool's dynamic behavior, which keeps the pool from consuming additional disk volumes on its own. Extending by size in this way:

Uses the disk selection algorithms that AVM uses to create system-defined storage pool members.
Prevents system-defined storage pools from rapidly consuming a large number of disk volumes.

Prerequisites

You can specify the storage system from which to allocate space to the pool. The dynamic behavior of the system-defined storage pool must be turned off by using the nas_pool -modify command before extending the pool, as shown in the sketch after this list. On successful completion, the system-defined storage pool expands by at least the specified size. The storage pool might expand more than the requested size. The behavior is the same as when the storage pool is expanded during a file-system creation.

If a storage system is not specified and the pool has members from a single storage system, then the default is the existing storage system. If a storage system is not specified and the pool has members from multiple storage systems, the existing set of storage systems is used to extend the storage pool. If a storage system is specified, space is allocated from the specified storage system:

The specified pool must be a system-defined pool.
The specified pool must have the is_dynamic attribute set to n, or false. "Modify system-defined storage pool attributes" on page 75 provides instructions to change the attribute.
There must be enough disk volumes to satisfy the size requested.
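A minimal sketch of the prerequisite step, reusing the pool from the example that follows (this is the same -is_dynamic command described in "Modify system-defined storage pool attributes"):

$ nas_pool -modify clar_r5_performance -is_dynamic n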
Extend a system-defined storage pool by size

Action
To extend a system-defined storage pool by size and specify a storage system from which to allocate space, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} -size <integer> -storage <system_name>
where:
<name> = name of the system-defined storage pool
<id> = ID of the storage pool
<integer> = size requested in MB or GB; default size unit is MB
<system_name> = name of the storage system from which to allocate the storage
Example:
To extend the system-defined clar_r5_performance storage pool by size and designate the storage system from which to allocate space, type:
$ nas_pool -xtend clar_r5_performance -size 128M -storage APM

Output
id = 3
name = clar_r5_performance
description =
acl = 0
in_use = False
clients =
members = d11,d12,d13,d14
default_slice_flag = True
is_user_defined = False
virtually_provisioned= False
disk_type = CLSTD
server_visibility = server_2,server_3,server_4,server_5

Remove volumes from storage pools

Action
To remove volumes from a system-defined or user-defined storage pool, use this command syntax:
$ nas_pool -shrink {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<volume_name> = names of the volumes separated by commas
Example:
To remove volumes d130 and d133 from the storage pool named marketing, type:
$ nas_pool -shrink marketing -volumes d130,d133
Output
id = 5
name = marketing
description = Storage pool for marketing
acl = 0
in_use = False
clients =
members = d126,d127,d128,d129,d131,d132
default_slice_flag = True
is_user_defined = True
virtually_provisioned= True
disk_type = STD
server_visibility = server_2,server_3,server_4

Delete user-defined storage pools

You can delete only a user-defined storage pool that is not in use. You must remove all storage pool member volumes before deleting a user-defined storage pool. The delete action removes the member volumes from the specified storage pool and then deletes the storage pool; it does not delete the volumes themselves. System-defined storage pools cannot be deleted.

Action
To delete a user-defined storage pool, use this command syntax:
$ nas_pool -delete <name>
where:
<name> = name of the storage pool
Example:
To delete the user-defined storage pool named sales, type:
$ nas_pool -delete sales

Output
id = 7
name = sales
description =
acl = 0
in_use = False
clients =
members =
default_slice_flag = True
is_user_defined = True
virtually_provisioned= True
Delete a user-defined storage pool and its volumes

The -deep option deletes the storage pool and also recursively deletes each member of the storage pool unless it is in use or is a disk volume.

Action
To delete a user-defined storage pool and the volumes in it, use this command syntax:
$ nas_pool -delete {<name>|id=<id>} [-deep]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
Example:
To delete the storage pool named sales, type:
$ nas_pool -delete sales -deep

Output
id = 7
name = sales
description =
acl = 0
in_use = False
clients =
members =
default_slice_flag = True
is_user_defined = True
virtually_provisioned= False
Troubleshooting

As part of an effort to continuously improve and enhance the performance and capabilities of its product lines, EMC periodically releases new versions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, contact your EMC Customer Support Representative.

Consider these steps when troubleshooting AVM:

Obtain all files and subdirectories in /nas/log/ and /nas/volume/ from the Control Station before reporting problems; having them helps to diagnose the problem faster. Additionally, save any files in /nas/tasks when problems are seen from Celerra Manager. The support material script collects information related to Celerra Manager and APL.
Set the environment variable NAS_REPLICATE_DEBUG=1 to log additional information in /nas/log/nas_log.al.tran.
Report any useful SYR data.

Where to get help

Product information
For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required).

Troubleshooting
For troubleshooting information, go to Powerlink, search for Celerra Tools, and select Celerra Troubleshooting from the navigation panel on the left.

Technical support
For technical support, go to Powerlink and choose Support. On the Support page, you can access Support Forums, request a product enhancement, talk directly to an EMC representative, or open a service request. To open a service request, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Note: Do not request a specific support representative unless one has already been assigned to your particular system problem.

Problem Resolution Roadmap for EMC Celerra contains additional information about using Powerlink and resolving problems.
EMC E-Lab Interoperability Navigator

The EMC E-Lab Interoperability Navigator is a searchable, web-based application that provides access to EMC interoperability support matrices. After logging in to Powerlink, go to Support > Interoperability and Product Lifecycle Information > E-Lab Interoperability Navigator.

Known problems and limitations

Table 9 on page 86 describes known problems that might occur when using AVM and automatic file system extension and presents workarounds.

Table 9 Known problems and workarounds

Known problem: AVM system-defined storage pools and checkpoint extensions recognize temporary disks as available disks.
Symptom: Temporary disks might be used by AVM system-defined storage pools or checkpoint extension.
Workaround: Place the newly marked disks in a user-defined storage pool. This protects them from being used by system-defined storage pools (and manual volume management).

Known problem: In an NFS environment, the write activity to the file system starts immediately when a file changes. When the file system reaches the HWM, it begins to automatically extend but might not finish before the Control Station issues a file system full error. This causes an automatic extension failure. In a CIFS environment, the CIFS/Windows Microsoft client does Persistent Block Reservation (PBR) to reserve the space before the writes begin. As a result, the file system full error occurs before the HWM is reached and before automatic extension is initiated.
Symptom: An error message indicating that automatic extension failed to start, and a full file system.
Workaround: Alleviate this timing issue by lowering the HWM on a file system to ensure automatic extension can accommodate normal file system activity. Set the HWM to allow enough free space in the file system to accommodate write operations to the largest average file in that file system. For example, if you have a file system that is 100 GB, and the largest average file in that file system is 20 GB, set the HWM for automatic extension to 70%. Changes made to the 20 GB file might cause the file system to reach the HWM, or 70 GB. There is 30 GB of space left in the file system to handle the file changes, and to initiate and complete automatic extension without failure.
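Expressed as simple arithmetic, the sizing rule behind the workaround above is: choose the HWM so that the free fraction (100% minus the HWM) of the file system is at least as large as the largest average file. With the guide's own numbers, (100 GB - 20 GB) / 100 GB = 80% is the highest safe HWM, and the example picks 70% to leave an extra 10 GB of margin for the extension to complete.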
Error messages

As of version 5.6, all new event, alert, and status messages provide detailed information and recommended actions to help you troubleshoot the situation. To view message details, use any of these methods:

Celerra Manager: Right-click an event, alert, or status message and select to view Event Details, Alert Details, or Status Details.
Celerra CLI: Type nas_message -info <MessageID>, where MessageID is the message identification number.
EMC Celerra Network Server Error Messages Guide: Use this guide to locate information about messages that are in the earlier-release message format.
Powerlink: Use the text from the error message's brief description or the message's ID to search the Knowledgebase on Powerlink. After logging in to Powerlink, go to Support > Knowledgebase Search > Support Solutions Search.

EMC Training and Professional Services

EMC Customer Education courses help you learn how EMC storage products work together within your environment in order to maximize your entire infrastructure investment. EMC Customer Education features online and hands-on training in state-of-the-art labs conveniently located throughout the world. EMC customer training courses are developed and delivered by EMC experts. Go to EMC Powerlink for course and registration information.

EMC Professional Services can help you implement your Celerra Network Server efficiently. Consultants evaluate your business, IT processes, and technology and recommend ways you can leverage your information for the most benefit. From business plan to implementation, you get the experience and expertise you need, without straining your IT staff or hiring and training new personnel. Contact your EMC representative for more information.
Index

A
algorithm
  automatic file system extension 34
  CLARiiON 26
  Symmetrix 29
attributes
  storage pool, modifying 74, 75
  storage pools 23
  system-defined storage pools 75
  user-defined storage pools 77
automatic file system extension
  algorithm 34
  and Celerra Replicator interoperability considerations 35
  enabling 42
  guidelines 38
  how it works 16
  maximum size 64
  maximum size option 50
  options 15
  restrictions 4
  Virtual Provisioning 65
Automatic Volume Management (AVM) 11
  restrictions 4
  storage pool 16
AVM. See Automatic Volume Management (AVM)

C
cautions 7
  spanning storage systems 7
Celerra upgrade
  automatic file system extension issue 7
character support, international 7
clar_r1 storage pool 19
clar_r5_economy storage pool 19
clar_r5_performance storage pool 19
clar_r6 storage pool 19
clarata_archive storage pool 19
clarata_r10 storage pool 19
clarata_r3 storage pool 19
clarata_r6 storage pool 19
CLARiiON thin pool, insufficient space 7
clarsas_archive storage pool 19
clarsas_r10 storage pool 20
clarsas_r6 storage pool 19
clarssd_r5 storage pool 20
cm_r1 storage pool 20
cm_r5_economy storage pool 20
cm_r5_performance storage pool 20
cm_r6 storage pool 20
cmata_archive storage pool 20
cmata_r10 storage pool 20
cmata_r3 storage pool 20
cmata_r6 storage pool 20
cmsas_archive storage pool 20
cmsas_r10 storage pool 20
cmsas_r6 storage pool 20
cmssd_r5 storage pool 21
concepts, AVM explanation 14
considerations 35
  interoperability 35

D
details, displaying 71
displaying
  details 71
  size information 71

E
extending file systems 54, 55
  with different storage pool 57
  with user-defined storage pools 59
extending storage pools
  system-defined 81
  user-defined 80

F
file system
  creating 69
  default type 45
  extending by size 55, 59
  quotas 7
file system considerations 38

G
guidelines
  automatic file system extension 38

I
International character support 7

K
known limitations 86

P
planning guidelines 38
profiles, volume, and storage 24

Q
quotas for file system 7

R
RAID group combinations 21
restrictions
  automatic file system extension 4
  AVM 4
  Celerra file systems 7
  Symmetrix volumes 4
  TimeFinder/FS 7

S
storage pools
  attributes 30
  clar_r1 19
  clar_r5_economy 19
  clar_r5_performance 19
  clar_r6 19
  clarata_archive 19
  clarata_r10 19
  clarata_r3 19
  clarata_r6 19
  clarsas_archive 19
  clarsas_r10 20
  clarsas_r6 19
  clarssd_r5 20
  cm_r1 20
  cm_r5_economy 20
  cm_r5_performance 20
  cm_r6 20
  cmata_archive 20
  cmata_r10 20
  cmata_r3 20
  cmata_r6 20
  cmsas_archive 20
  cmsas_r10 20
  cmsas_r6 20
  cmssd_r5 21
  deleting user-defined storage pools 83
  displaying details 71
  displaying size information 71
  explanation 16
  extending system-defined storage pools 81
  extending user-defined storage pools 80
  list 70
  managing 70
  modifying attributes 74
  remove volumes from user-defined storage pools 82
  supported types 18
  symm_ata 18
  symm_ata_rdf_src 19
  symm_ata_rdf_tgt 19
  symm_ssd 19
  symm_std 18
  symm_std_rdf_src 18
  symm_std_rdf_tgt 18
  system-defined CLARiiON 24
  system-defined Symmetrix 28
symm_ata storage pool 18
symm_ata_rdf_src storage pool 19
symm_ata_rdf_tgt storage pool 19
symm_ssd storage pool 19
symm_std storage pool 18
symm_std_rdf_src storage pool 18
symm_std_rdf_tgt storage pool 18
Symmetrix thin pool, insufficient space 7
system-defined storage pools 55

U
Unicode characters 7
upgrading, nas software 37

V
volume management, automatic 14
volume, AVM
About this document

As part of its effort to continuously improve and enhance the performance and capabilities of the Celerra Network Server product line, EMC periodically releases new versions of Celerra hardware and software. Therefore, some functions described in this document may not be supported by all versions of Celerra software or hardware presently in use. For the most up-to-date information on product features, see your product release notes. If your Celerra system does not offer a function described in this document, contact your EMC Customer Support Representative for a hardware upgrade or software update.

Comments and suggestions about documentation

Your suggestions will help us improve the accuracy, organization, and overall quality of the user documentation. Send a message to [email protected] with your opinions of this document.

Copyright EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.