ARCHIVING AND MANAGING REMOTE SENSING DATA USING STATE OF THE ART STORAGE TECHNOLOGIES
B Lakshmi, C Chandrasekhara Reddy*, SVSRK Kishore
NRSC, SDAPSA, Hyderabad, India - (lakshmi_b, sekharareddy_cc, kishore_svsr)@nrsc.gov.in

Commission VIII, WG VIII/9

KEY WORDS: Virtualized Storage, Multi-tiering, Data Archival, Data Management

ABSTRACT:

The Integrated Multi-mission Ground Segment for Earth Observation Satellites (IMGEOS) was established with the objective of eliminating human interaction to the maximum extent. All emergency data products are delivered within an hour of acquisition through FTP delivery; all other standard data products are delivered through FTP within a day. The IMGEOS activity was envisaged to reengineer the entire chain of operations at the ground segment facilities of NRSC at the Shadnagar and Balanagar campuses to adopt an integrated multi-mission approach. To achieve this, the Information Technology infrastructure was consolidated by implementing virtualized tiered storage and network computing infrastructure in a newly built Data Centre at the Shadnagar campus. One important activity that influences all other activities in the integrated multi-mission approach is the design of appropriate storage and network architecture for realizing all the envisaged operations in a highly streamlined, reliable and secure environment. Storage was consolidated based on major factors like accessibility, long-term data protection, availability, manageability and scalability. The broad operational activities are reception of satellite data, quick look, generation of browse, production of standard and value-added data products, production chain management, data quality evaluation, quality control and product dissemination. For each of these activities, there are numerous other detailed sub-activities and prerequisite tasks that need to be implemented to support the above operations. The IMGEOS architecture has taken care of choosing the right technology for the given data sizes, their movement and long-term lossless retention policies. Operational costs of the solution are kept to the minimum possible, and scalability of the solution is ensured. The main function of the storage is to receive and store the acquired satellite data, facilitate high-speed availability of the data for further processing at the Data Processing servers, and help generate data products at a rate of about 1000 products per day. It also archives all the acquired data on tape storage for long-term retention and utilization. Data sizes per satellite pass range from hundreds of megabytes to tens of gigabytes. The images acquired from remote sensing satellites are valuable assets of NRSC and are used as input for further generation of different types of user data products through multiple Data Processing systems. Hence, it is required to collect and store the data within a shared, high-speed repository concurrently accessible by multiple systems. After the raw imagery is stored on a high-speed repository, the images must be processed in order to be useful for value-added processing or for imagery analysts. The raw image file has to be copied onto data processing servers for further processing. Given the large file sizes, it is impractical to transfer these files to processing servers via a local area network: even at gigabit Ethernet rates (up to 60 MB/s), a 5 GB file will take at least 83 seconds.
For this reason, it is useful to employ a shared file system that allows every processing system to directly access the same pool where the raw images are stored, ensuring concurrent access by multiple systems for processing and generation of data products. For these reasons, high-speed disk arrays were chosen for acquisition and processing purposes, and tape-based storage systems for long-term archival of huge data volumes (petabytes) in a virtualized multi-tier storage architecture. This paper explains the architecture involved in the virtualized tiered storage environment being used for acquiring, processing and archiving remote sensing data. It also explains the data management aspects involved in ensuring data availability and archiving the petabytes of remote sensing data acquired over the past 40 years.

1. INTRODUCTION

Prior to 2011, NRSC stored satellite data on various types of physical media for archival purposes. These media were stored physically in racks at a distance from the work centres. Whenever the data was required, a tape would be moved to the respective work centre and copied onto the system. This process takes quite some time, and the delay gets added to the product turnaround time. To avoid such delays, it was planned to archive the entire data on online storage systems. The entire acquired data is archived onto three-tier storage and used as and when required. IMGEOS consolidates data acquisition, processing and dissemination systems and is built around a scalable Storage Area Network based 3-tier storage system to meet current and future Earth Observation missions. The workflow manager, user services, unified schedulers and the processes of Data Ingest (DI), Ancillary Data Processing (ADP), Data Processing (DP), Value Added Data Services (VADS), Data Quality Evaluation (DQE) and Product Quality Check (PQC) are reengineered to utilize resources in a multi-mission mode.
Online availability of all the acquired data for all missions is ensured to minimize data-ingest times, and all movements of intermediate data and products are eliminated through the use of centralized SAN storage. The storage is organized in three tiers. Tier-1 contains recent and most frequently used data. Tier-2 holds data acquired in the last 15 months and off-the-shelf products. Tier-3 holds all the acquired data on tapes. All three tiers are managed through HSM/ILM based on a configurable set of policies. These storage tiers are presented in a high-availability, scalable configuration to all storage clients through a high-speed FC network, with centralized monitoring and management of the SAN. All servers and workstations are connected over a high-availability, expandable Gigabit Ethernet comprising core and edge switches, with centralized monitoring and management of the entire network. A set of four dedicated servers is provisioned for Data Ingest (DI) to acquire data in real time from four antenna systems, with one additional server provisioned for redundancy. Three dedicated servers are provisioned for ADP to process ancillary data in near real time; the redundant DI server also acts as a standby for ADP. Data screening is done for all acquired passes, and a corrective feedback mechanism is provided for data quality related issues. Data Processing is classified into optical, microwave and non-imaging categories. A set of servers is provisioned for each category and operated in multi-mission mode, catering to a peak processing load of 1000 products per day. A set of workstations is provisioned for handling different functions like DQE, VADS, PQC and Media Generation.

2. STORAGE ARCHITECTURE

The architecture has taken care of choosing the right technology for the given data sizes, their movement and long-term lossless retention policies. Operational costs of the solution are kept to the minimum possible, and scalability is ensured. The main function of the storage is to receive and store the acquired satellite data, facilitate high-speed availability of the data for further processing at the DP servers, and help generate data products at a rate of about 1000 products per day. It also archives all the acquired data on tape storage for long-term retention and utilization. Data sizes per satellite pass range from hundreds of megabytes to tens of gigabytes. These raw images are among the most valuable assets of NRSC and are used as input for further generation of different types of user data products through multiple DP systems. Hence, it is required to collect and store the data within a shared, high-speed repository concurrently accessible by multiple systems. Earlier, the raw image file had to be copied onto data processing servers for further processing; after standard processing, the images were processed further to generate value-added products or for image analysis. Given the large file sizes, it takes time to transfer these files between work centres via a local area network: even at gigabit Ethernet rates (up to 60 MB/s), a 5 GB file will take at least 83 seconds. For this reason, it is useful to employ a shared file system that allows every processing system to directly access the same pool where the raw images are stored, ensuring concurrent access by multiple systems for processing and generation of data products.
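The transfer-time argument above can be made concrete with a back-of-the-envelope calculation. The following minimal Python sketch uses the figures quoted in this paper (about 60 MB/s effective gigabit Ethernet throughput, 4 Gbps FC SAN links); the 400 MB/s effective SAN rate is our assumption for illustration.

# Back-of-the-envelope transfer-time estimates for satellite pass data.
def transfer_seconds(size_gb: float, rate_mb_per_s: float) -> float:
    """Time to move size_gb gigabytes at rate_mb_per_s megabytes/second."""
    return (size_gb * 1000.0) / rate_mb_per_s

for size_gb in (0.5, 5.0, 50.0):            # hundreds of MB to tens of GB per pass
    lan = transfer_seconds(size_gb, 60.0)   # gigabit Ethernet, effective rate
    san = transfer_seconds(size_gb, 400.0)  # 4 Gbps FC, effective rate (assumed)
    print(f"{size_gb:5.1f} GB: LAN copy ~{lan:6.1f} s, direct SAN read ~{san:6.1f} s")

# A 5 GB file takes ~83 s over the LAN, matching the figure in the text;
# shared SAN access avoids the per-server copy step entirely.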
Disk-based systems provide a high-performance front end for an archive, but they are not cost-effective for archiving huge data volumes for long-term retention and protection. Hence, tape-based solutions are planned for data protection over longer periods. For these reasons, high-speed disk arrays were chosen for acquisition and processing purposes and tape-based storage systems for long-term archival of huge data volumes.

2.1 Tiered Storage Systems

Today's storage devices provide only point solutions to specific portions of large data repository issues. For example, high-end Fibre Channel based RAID solutions provide excellent performance profiles to meet the demanding throughput needs of satellite data acquisition and processing systems. Serial ATA (SATA) based RAID systems provide large capacity for longer-term storage. Tape storage provides enormous capacity without power consumption. Each of these storage technologies also has unique downsides for certain data types: utilization of Fibre Channel RAIDs for all data types is cost-prohibitive; SATA-based RAID systems do not have the performance or reliability to stand up to high processing loads; and tape technology does not facilitate transparent, random access for data processing. The IMGEOS storage architecture leverages the capabilities of each device into a seamless, scalable, managed data repository as required. The architecture is shown in figure 1.

Figure 1. Tiered Storage

2.2 Storage Virtualization using SAN File System

SAN File System (SFS) enables systems to share a high-speed pool of images, media, content, analytical data and other key digital assets so files can be processed and distributed quicker. Even in heterogeneous environments, all files are easily accessible to all hosts using SAN or LAN. When performance is key, SFS uses SAN connectivity to achieve throughput. For cost efficiency and fan-out, SFS provides LAN-based access using clustered gateway systems with resiliency and throughput that outperform traditional network sharing methods. For long-term data storage, SFS expands pools into multi-tier archives, automatically moving data between different disk and tape resources to reduce costs and protect content. Data location is virtualized so that any file can easily be accessed for re-use, even if it resides on tape. SFS streamlines processes and facilitates faster job completion by enabling multiple business applications to work from a single, consolidated data set. Using SFS, applications running on different operating systems (Windows, Linux, Solaris) simultaneously access and modify files on a common, high-speed SAN storage pool. This centralized storage solution eliminates slow LAN-based file transfers between workstations and dramatically reduces delays caused by single-server failures. The IMGEOS SAN has a high-availability (HA) configuration, in which a redundant server is available to access files, pick up the processing requirements of a failed system, and carry on processing.

2.3 Hierarchical Storage Management (HSM)

HSM is one of the features of the SAN File System. HSM is policy-based management of file archiving in a way that uses storage devices economically and without the user needing to be aware of when files are being retrieved from storage media. The hierarchy represents different types of storage media, such as redundant array of independent disks (RAID) systems (FC, SATA), tape, etc. Each type represents a different level of cost and speed of retrieval when access is needed. For example, as a file ages in an archive, it can be automatically moved to a slower but less expensive form of storage. Using HSM it is possible to establish guidelines for how often different kinds of files are to be copied to a backup storage device; once the guidelines have been set up, the HSM software manages everything automatically. The challenge for HSM is to manage heterogeneous storage systems, often from multiple hardware and software vendors. Making all the pieces work seamlessly requires an intimate understanding of how the hardware, software drivers, network interfaces and operating system interact. In many ways, HSM is the analogue of heterogeneous computing, where different processing elements are integrated together to maximize compute throughput. In the same way, HSM virtualizes non-uniform storage components so as to present a global storage pool to users and applications. In other words, the tiers must be abstracted: users and applications must be able to access their files transparently, without regard to their physical locations. This allows the customers and their software to remain independent of the underlying hardware. HSM has been a critical link in tiered storage systems, since these tools must encompass the complexity of the hardware and all the software layers.
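As an illustration of the policy idea, the following minimal Python sketch models age-based migration from one disk tier to another. It is a simplification under assumed names: the tier paths and the age threshold are hypothetical, not NRSC's actual policy values. A real HSM leaves a stub behind so the file path stays unchanged; the sketch models that with a symlink.

import os, shutil, time

# Hypothetical tier mount points and age threshold (days); illustration only.
TIER1, TIER2 = "/san/tier1", "/san/tier2"
RELOCATE_AFTER_DAYS = 90    # FC disk -> SATA disk (assumed threshold)

def migrate_aged_files(now: float = None) -> None:
    """Move files older than the threshold from tier-1 to tier-2,
    leaving a symlink behind so the path seen by applications is unchanged."""
    now = now or time.time()
    for root, _dirs, files in os.walk(TIER1):
        for name in files:
            src = os.path.join(root, name)
            if os.path.islink(src):
                continue  # already migrated; stub in place
            age_days = (now - os.path.getmtime(src)) / 86400.0
            if age_days > RELOCATE_AFTER_DAYS:
                dst = os.path.join(TIER2, os.path.relpath(src, TIER1))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                os.symlink(dst, src)  # transparent access via the old path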
2.4 Storage Manager

Storage Manager (SM) enhances the solution by reducing the cost of long-term data retention without sacrificing accessibility. SM sits on top of SFS and utilizes intelligent data movers to transparently locate data on multiple tiers of storage. This enables us to store more files at a lower cost, without having to reconfigure applications to retrieve data from disparate locations. Instead, applications continue to access files normally and SM automatically handles data access regardless of where a file resides. As data movement occurs, SM also performs a variety of data protection services to guarantee that data is safeguarded both on site and off site.

2.5 Meta Data Servers

SAN File System (SFS) is a heterogeneous shared file system that enables multiple servers to access a common disk repository. It is ensured that the file system is presented coherently and that I/O requests are properly processed (e.g. a Windows kernel behaves differently from a Linux kernel).
Because multiple servers access a single file system, some form of traffic cop is required to prevent two systems from writing to the same disk location and to guarantee that a server reading a file is not accessing stale content because another server is updating the file. There are many such features in SFS, as listed above. SFS is installed on metadata servers and its functions are performed through them. Each metadata server is configured with 4 CPUs, 32 GB of memory, etc. These servers are made highly available by connecting them in a cluster, which avoids a single point of failure for the SAN File System.

2.6 High Availability

The High Availability (HA) feature is a special configuration with improved availability and reliability. The configuration consists of two similar servers, shared disks and tape libraries. SFS is installed on both servers. One of the servers is dedicated as the primary server and the other as the standby server. File System and Storage Manager run on the primary server; the standby server runs File System and special HA supporting software. The failover mechanism allows the services to be automatically transferred from the currently active primary server to the standby server in the event of a primary server failure, after which the roles of the servers are reversed. Only one of the two servers is allowed to control and update SFS metadata and databases at any given time. The HA feature enforces this rule by monitoring for conditions that might allow conflicts of control that could lead to data corruption. The HA cluster is managed by performing various HA-related functions such as starting or stopping nodes.
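The failover rule above (exactly one server may control the metadata at any given time) can be sketched as a simple heartbeat monitor. This is a hedged, minimal model, not the actual HA software: the heartbeat interval, miss threshold and promotion logic are illustrative assumptions.

import time

HEARTBEAT_INTERVAL_S = 5    # assumed polling interval
MAX_MISSED_BEATS = 3        # assumed failure threshold

class HANode:
    def __init__(self, name: str, is_primary: bool):
        self.name, self.is_primary = name, is_primary
        self.last_beat = time.monotonic()

    def beat(self) -> None:
        """Called by the node itself while it is healthy."""
        self.last_beat = time.monotonic()

def monitor(primary: HANode, standby: HANode) -> None:
    """Promote the standby if the primary misses too many heartbeats.
    Only one node may control SFS metadata at any given time."""
    while True:
        time.sleep(HEARTBEAT_INTERVAL_S)
        silence = time.monotonic() - primary.last_beat
        if silence > MAX_MISSED_BEATS * HEARTBEAT_INTERVAL_S:
            primary.is_primary = False   # fence the failed node first
            standby.is_primary = True    # standby takes over metadata control
            primary, standby = standby, primary  # roles reversed after failover
            print(f"failover: {primary.name} is now primary")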
2.7 Storage Implementation

All the data acquired from the satellites by the Data Ingest systems is transferred to the Storage Area Network (SAN). This is a three-tiered storage, and the location of a given dataset is determined by its latest usage, based on frequency of use: frequently used data is stored on FC storage, less frequently used data on SATA storage, and the third level is tape-based storage, which is used for backing up the data of the disk-based tiers. The storage is divided into nine logical storage partitions for functional use and management, and all the servers access this storage for their various computing requirements. The acquired data is processed for auxiliary data extraction, processing, generation of the Ancillary Data Interface File (ADIF) and inputs to browse image and catalogue generation. Browse images, catalogue information, ADIF and Framed Raw Extended Data Format (FRED) data are the outputs of the pre-processing system. There are four antenna terminals; each terminal is associated with one data ingest server, with one common additional server as standby for all four. The five servers are connected to the SAN. The Ancillary Data Processing, Data Processing and other servers access the storage. The setup is shown in figure 2.

Figure 2. Storage Setup

It is found from the satellite data usage pattern that recently acquired (<18 months) data is ordered more frequently than older data. The high-performance FC storage of 75 TB can accommodate about three months of data acquisition from all the satellites, in addition to the other work space required by the processing servers. The tier-2 storage (SATA drive based) can accommodate all the acquired data for about 15 months. All the data is available online in the tape library; the size of the tape storage is 6.0 PB, which is available online for data archival and processing. The tape storage is logically divided into two equal parts, each holding one copy of the data to be archived. Additionally, one more vaulted copy is maintained, so the archived data has three identical copies (two online copies in the tape library and one vaulted copy) for redundancy/backup. A fourth copy is generated and kept at the Disaster Recovery (DR) site. Over time, the existing archived data (~950 TB) on DLTs and SDLTs will be ported onto the tape library, so that there is only one archive for all the data. As detailed before, all three tiers are accessed through the SAN file system installed on the metadata servers.
2.8 SAN Components

The SAN components are shown in figure 3. The components of the SAN storage are the three tiers of storage (FC, SATA and tape), SAN switches and metadata servers. The SAN file system is installed on the metadata servers. The metadata servers are connected to the SAN switches using host bus adapters; similarly, all other servers that are direct SAN clients are also connected to the SAN switches. Storage tiers, SAN switches and servers are connected at 4 Gbps bandwidth. The SAN switches are configured in high-availability mode.

Figure 3. SAN Components

2.9 Storage Systems Management

One of the useful benefits of consolidating storage onto a SAN is the flexibility and power provided by the Storage Resource Management (SRM) software. Using the SRM software, management and administration functions like storage partitioning, RAID array creation, LUN creation, logical volume creation, file system creation, etc. are simplified. A variety of audit trails and event logging functions are supported, which aid in many management tasks including those connected with security. Storage provisioning is another important task of SRM, allowing the administrator to allocate initial sizes to the various server systems and flexibly enhance the allocations based on actual need. The SRM software also allows performance tuning by distributing the logical volumes across different arrays or disks.

2.10 Scalability

Scalability features built into any system ensure that the system is protected from short-term obsolescence while also optimizing the return on investment. The IMGEOS architecture is designed in accordance with this principle, and scalability is inherent in all the envisaged subsystems. By consolidating the disk system onto a high-performance networked mass storage system based on FC SAN, a very high level of disk capacity scalability is achieved. Though initially configured for 75 TB of disk capacity, the primary FC disk system is envisaged to scale up to 400 TB and more. The second-tier disk system is configured on similar lines. These are mid-range, high-availability storage systems, procured with 40% expansion headroom; if required, additional mid-range (SATA) storage systems can be added and consolidated. All FC storage systems support disk capacities of 146 GB, 300 GB, 450 GB and above; the mid-range storage systems support 750 GB, 1000 GB and above. The SAN switches are configured for 8 Gbps, while the above storage systems are specified at 4 Gbps and can be upgraded to 8 Gbps as and when required. The tape library is scalable from the initial 16-drive, 4000-slot configuration to a 64-drive configuration with correspondingly more slots, which translates from 6.0 PB to 15.0 PB (using LTO-5). Apart from the built-in scalability, modular augmentation of the disk and tape library systems with additional subsystems will further extend the capacities as well as the life of the infrastructure.
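The capacity figures above follow directly from the slot count and the cartridge capacity. A small sketch of the arithmetic, assuming the LTO-5 native capacity of 1.5 TB per cartridge (the generation named above); the expanded slot count is not stated in the text and is derived here only as an illustration.

LTO5_TB = 1.5  # native (uncompressed) capacity per LTO-5 cartridge, in TB

def library_capacity_pb(slots: int, tb_per_cartridge: float = LTO5_TB) -> float:
    """Total capacity in petabytes for a fully populated tape library."""
    return slots * tb_per_cartridge / 1000.0

print(library_capacity_pb(4000))   # 6.0 PB: the initial configuration
# Working backwards, 15.0 PB at 1.5 TB/cartridge implies ~10,000 slots
# (a derived figure; the text does not state the expanded slot count).
print(15.0 * 1000.0 / LTO5_TB)     # 10000.0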
3. ARCHIVES MANAGEMENT

At present, the remote sensing data acquired at Shadnagar (India), Antarctica and Svalbard (Norway) is archived in the IMGEOS SAN. The data from the current missions, viz. RISAT-1, Resourcesat-2, Cartosat-1, Cartosat-2, Oceansat-2, Resourcesat-1, NOAA-19, NPP and METOP-A/B, is stored on the SAN. NRSC has around 950 TB of data from earlier missions archived on optical/tape media, of which 95% has already been ported onto the IMGEOS SAN. Once porting is complete, all the remote sensing data available with NRSC will be in online archives accessible to the data processing work centres. This requires managing the huge data volume efficiently to ensure automatic generation of data products irrespective of the date of pass. Data management involves storage, access and preservation of the data. In IMGEOS, data management activities cover the entire lifecycle of the data, from acquisition on the DI system to product delivery through the FTP server, and from backing up data (as it is created) to archiving data (for future reuse). Specific activities and issues that fall within the category of data management include:

- Data access
- Metadata creation
- Data storage
- Data archiving
- Data sharing and re-use
- Data integrity
- Data security

The data management is done using two broad components: SAN File System (SFS), a high-performance data sharing software, and Storage Manager (SM), the intelligent, policy-based data mover.

3.1 Operational Scenario in Tiered Storage

Data received from the Data Ingest systems is stored on high-performance disk storage (tier-1), which is used for processing by the Data Processing servers. Subsequently, the data is copied automatically to the tape library based on the set policies, and it remains in the tape library as an online archive for later usage. Three more copies of the same data are made on tapes: one for online backup of the archive (which remains in the tape library) and the other two for vaulting. Of the vaulted copies, one is stored in the vaults and the other is moved to another location for Disaster Recovery purposes. Based on the set policies, the data in tier-1 is automatically moved to another storage tier (tier-2), which is built on lower-performance disks. The file path as seen by the processing server remains the same. If a file is removed from tier-1 or tier-2, the same file is restored from tape storage; after restoration, the file path and name appear as before, without needing any changes at the user end.
3.2 Data Protection

As with any form of asset management system, cost control is only one of the key concerns. Data protection is paramount, especially in geospatial activities, because images cannot be recreated. The ability of an archive system to keep a digital asset safe in the long term starts with the ability to protect it in the short term. With file systems, even in relatively modest environments, growing into the hundreds of terabytes, the traditional backup and recovery paradigms break down. Traditional backups require a nightly file system scan to identify new and changed files. If a file system holds millions or tens of millions of files, the time to scan the entire directory structure spans hours, and only after the scan has completed is data copied to another location for safekeeping. Traditional backup paradigms also require periodic (weekly or monthly) full file system backups; with hundreds of terabytes, this too can take hours or days. And since geospatial data is relatively static (once created, it is unlikely to change), repeated backups of this class of data add little value and instead increase costs as more and more media are used to store the same, unchanging data. Using a traditional backup strategy for this class of data often results in throwing technology at the problem: deploying very high-speed RAID and a sufficient number of tape drives to meet shrinking backup windows. While this strategy can work, it is quite expensive and contradicts the organization's strategy of deploying technology to meet its needs. After all, backup copies are merely insurance policies against a problem with the primary copy. Recovery procedures are equally difficult, as the process of restoring millions of files and terabytes of data can take days or weeks. SFS takes a different approach to data protection. As each file arrives into an SFS volume, it is placed on a queue for the archiving component of SFS, Storage Manager, to determine how to protect that file. The archiving component uses data movement and protection policies, defined by the system administrator, which define how new or modified files should be replicated to another tier of storage, either disk or tape. When replication occurs, multiple copies of a file can be generated: typically one copy for onsite storage (the copy for repurposing) and one copy for offsite storage (a disaster recovery copy). Through this process archiving occurs and there is never a need to perform a full backup; the organization automatically fulfils its cost reduction strategy along with data protection. Additionally, SFS offers data integrity checks, performing a checksum on files as they are written to tape and verifying it when they are restored back to disk. Through this process, SFS offers an additional measure of data protection that prevents access to damaged data. Restoring a large storage system after a disaster is equally efficient: SFS is able to restore the file system name space, the mechanism that allows users to access their data, at a rate of 20 million files per hour. Once the name space is recovered, the data is fully accessible. As users access their data, it is automatically pulled back from the replicate copy on the other storage tier; if a file is not accessed, it is not pulled back, eliminating unnecessary data transfers.
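The write-time checksum and restore-time verification described above can be illustrated with a short Python sketch. This is a simplified stand-in for what SFS does internally, with SHA-256 as an assumed hash choice and hypothetical function names.

import hashlib

def checksum(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a file, streamed in 1 MB chunks (assumed hash choice)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# On archive: record the digest alongside the tape copy.
# On restore: recompute and compare, refusing to serve damaged data.
def verify_restore(restored_path: str, recorded_digest: str) -> None:
    if checksum(restored_path) != recorded_digest:
        raise IOError(f"integrity check failed for {restored_path}")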
3.3 Storage Policies

Storage is not only an object but a service: it provides both short- and long-term business object storage and retrieval services for the organization. Defining clear storage management policies involves more than storage hardware and management software. The ultimate goal of the storage service, and of any storage management policy tied to it, is to deliver a cost-effective, guaranteed, uniform level of service over time; achieving this goal is one of the harder aspects of storage management. Storage policies act as the primary channels through which data is included in data protection and data recovery operations. A storage policy forms the primary logical entity through which data is backed up; its chief function is to map data from its original location to physical media. The growth in data is making it increasingly difficult for administrators to manage routine maintenance tasks, such as backups and disaster recovery provisioning, made more acute by the shrinking windows for these tasks. As data volumes keep increasing each year, it becomes increasingly important that data storage policies are put in place. A data storage policy is a set of procedures implemented to control and manage data: it defines how the data should be stored, i.e. online, near-line or off-line, as effective archiving can dramatically reduce the size of daily backups. A Storage Manager storage policy also defines how files will be managed in a directory and its subdirectories. Storage policies can be related to one or more directories; all files in a directory and its subdirectories are governed by the storage policy, and the connection between a storage policy and a directory is called the relation point. The storage policies are created based on the following parameters:

- Number of copies to create
- Media type to use when storing data
- Amount of time to store data after it is modified
- Amount of time (in days) before relocating a file
- Amount of time before truncating a file after it is modified

Separate storage policies are created for each satellite, and these policies are monitored for compliance.
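A minimal sketch of such a policy as a data structure, built from the five parameters listed above; the field names, paths and example values are hypothetical, not NRSC's actual settings.

from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """One Storage Manager policy, attached to a directory (relation point)."""
    relation_point: str   # directory governed by this policy
    copies: int           # number of copies to create
    media_type: str       # e.g. "LTO-5" tape or "SATA" disk
    store_days: int       # retention time after last modification
    relocate_days: int    # days before relocating a file to a lower tier
    truncate_days: int    # days before truncating the disk copy

# One hypothetical policy per satellite, monitored for compliance.
policies = [
    StoragePolicy("/san/archive/resourcesat2", copies=3, media_type="LTO-5",
                  store_days=36500, relocate_days=90, truncate_days=450),
    StoragePolicy("/san/archive/risat1", copies=3, media_type="LTO-5",
                  store_days=36500, relocate_days=90, truncate_days=450),
]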
3.4 Monitoring

The logs and admin alerts of the metadata servers, SFS and SM are continuously monitored for probable alerts. Health checks are done periodically by running various diagnostic checks on SFS. Whenever required, the current state of the system is gathered as a snapshot of all configured components, which assists in analysing and debugging problems in the storage system.

3.5 Verification

The data received from the different remote sensing satellites is stored in the tape library as per the defined storage policies. For data product generation, ADIF and FRED files are required; hence, every month, the availability of FRED data against ADIF is verified. Over a period of time, the FRED data of any satellite will reside only on tape, as the files get truncated on disk; therefore, if old data is required in future, tape invariably has to be accessed. As tape is a magnetic medium with electro-mechanical components, its reliability has to be checked regularly. To meet this requirement, a few tapes are verified daily by performing a random read on each tape. This verification is a continuous process.

CONCLUSION

Using hierarchical storage that employs a mix of storage technologies allowed us to build a cost-efficient multi-tiered storage system that meets the performance goals and recovery/archival requirements. Incorporating automatic migration and archiving of data files removes user intervention from the storage policy procedure and enhances data integrity in large-scale environments. A good archive strategy is a vital part of an organization's Information Technology policy and can realize massive storage cost savings. The combination of intelligent archiving and data preservation software, coupled with the latest high-speed tape libraries, gives the best value, protection and operational cost savings. Balancing hardware performance and capacity requirements with the way data is actually managed is the most cost-effective solution.

REFERENCES

Toigo, J. W., The Holy Grail of Data Storage Management.
EMC², Information Storage and Management book series.