Information Lifecycle Management (ILM) Overview
Issue July 2006, 9 pages

Contents
1. Summary
2. Driving forces: Data growth; Legal and regulatory demands; The value of data during the lifecycle; Cost pressure; Technological innovation
3. The ILM concept: Basic methodology; External input; The value of information; Service Level Agreements; Access rights/data integrity; Storage hierarchies/storage classes; Structuring the data; Benefits
4. Advance provisions: Consolidation and SAN/NAS infrastructure; Virtualization; Storage Resource Management
5. The ILM model
6. Conclusion
White Paper, Issue: July 2006

1. Summary

Information Lifecycle Management (ILM) has the potential to significantly improve cost positions (Total Cost of Ownership, productivity, etc.) and Service Level Agreements. ILM also provides functions that help companies fulfill legal and regulatory requirements as far as archiving is concerned. The following paragraph gives a concise summary of ILM.

Information Lifecycle Management (ILM) is a storage management concept that actively manages information objects throughout their whole lifetime. ILM is not a product but a combination of processes and technologies. The ultimate goal of ILM is to have the right information at the right place, in compliance with legal and regulatory requirements and in accordance with Service Level Agreements (SLAs), at the lowest possible cost. An optimization process within a rules engine determines the most suitable location for the managed information objects. The most important parameters for the optimization process are the requirements of the business process on the one hand (value of information, security requirements, Service Level Agreements, etc.) and the cost structure of the storage hierarchy on the other. The results of the optimization process are decisions as to where best to store information objects or how to control backup, replication, migration, relocation and archiving functions.

For efficiently functioning ILM, certain advance provisions are required. Virtualization for the online, nearline and NAS areas is just one example. The separation of the logical view from the physical view puts ILM in the position to optimally place information objects on the basis of process decisions. This white paper therefore focuses on the dynamic processes that make up the actual core of ILM.

2. Driving forces

The idea of implementing effective ILM is by no means new. Many of the relevant functions have been available in the mainframe area for decades.
In the past, the approach tended to be less systematic and less formal. The IT and storage environment has undergone enormous change in many areas, which means that a professional, comprehensive concept is now required. The major driving forces are unrestricted data growth, internal and external obligations in the form of legal and regulatory demands, intensifying cost pressure and the technological innovations of recent years.

2.1 Data growth

Businesses and institutions can bear witness to the extreme levels of data growth experienced in many sectors. According to a study by the Radicati Group, the number of e-mails received and sent per user per day within enterprises will rise from about 130 in 2006 to about 170 over the forecast period. Over the same time, the average size of e-mails with attachments (which account for 25% of all e-mails) will rise from an average of 420 KB in 2006 to 470 KB. That means that, assuming 200 working days a year, roughly 3 GB of data has to be handled per mailbox and year; on the basis of the growth figures above, this will rise to about 4 GB per mailbox and year. Another area experiencing extremely high growth is the health sector. The Storage Magazine article "Optimize your storage for fixed content" (May 2003 issue) showed that a Picture Archive and Communications System (PACS) in a large hospital can easily generate more than five terabytes of digital X-ray or MRI (Magnetic Resonance Imaging) information annually. Even double-digit terabyte growth rates have been reported.
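The per-mailbox figure can be sanity-checked with a back-of-the-envelope calculation from the numbers quoted above (ignoring the 75% of messages without large attachments, which contribute comparatively little volume):

```python
# Rough annual e-mail volume per mailbox, using the 2006 figures cited above.
msgs_per_day = 130            # e-mails received/sent per user per day
attachment_share = 0.25       # share of e-mails carrying attachments
avg_attachment_mail_kb = 420  # average size of an e-mail with attachment (KB)
working_days = 200            # working days per year, as assumed in the text

volume_kb = msgs_per_day * attachment_share * avg_attachment_mail_kb * working_days
volume_gb = volume_kb / 1024 ** 2
print(f"about {volume_gb:.1f} GB per mailbox and year")  # about 2.6 GB
```

With the projected figures (170 e-mails per day, 470 KB average attachment mail) the same calculation yields roughly 3.8 GB, in line with the growth described above.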
Similar growth rates can also be found in other sectors. Examples include research and development, video editing, video streaming, Customer Relationship Management (CRM), quick decision (parallel analysis of the content of multiple sources for critical keywords), fraud detection, monitoring, etc. This trend can also be seen in the banking and insurance segment, where mainframes have a strong position.

2.2 Legal and regulatory demands

Legal and regulatory demands define how and for how long information is stored and in what form it has to be made available when requested. The basis for such demands is formed by legislation or regulation at international, national, communal, association and organizational level. Internal company controls are also important in this context. International presentations and documentation frequently cite American laws and controls including SEC Rule 17a-4, 21 CFR Part 11, HIPAA, Sarbanes-Oxley and DoD regulations. Comparable requirements can also be cited for other countries. In Germany, for example, the Grundsätze der ordnungsgemäßen Buchführung (GoB) (similar to the Generally Accepted Accounting Principles in the US), the Grundsätze zum Datenzugriff und zur Prüfbarkeit digitaler Unterlagen (GDPdU) (principles governing data access and the verification of digital documentation), the Handelsgesetzbuch (HGB) (Commercial Code) and the Abgabenordnung (AO) (Tax Code) apply. There are also, as is the case in the United States, a multitude of additional regulations. One such example is the controls of the Bundesanstalt für Finanzdienstleistungsaufsicht (BaFin) (federal supervisory agency for financial services), which apply to banks, insurance companies and stock trading. There are signs of an emerging trend towards harmonization in Europe, although expectations should not be too high in the short term.
It is the duty of those responsible to work together with internal or external auditors, chartered accountants or other authorized experts and to obtain binding advice in defining the type of storage and the retention periods. The majority of demands appear to be technology-neutral, which leaves plenty of scope in relation to the technical realization in ILM. One thing, however, is certain: the amount of information to be stored and the length of retention periods will continue to increase significantly.

2.3 The value of data during the lifecycle

The value of data changes over the course of time. This fact can best be illustrated by an example involving a simplified analysis of some of the processes in a bank transaction. The most important information is the account balance. This data will always have a high value, as it is from this information that debts and liabilities are determined. A specific business transaction (withdrawal, deposit, transfer, etc.) must be kept online until the bank statement is printed. After this time, the individual transaction no longer has to be held in online primary storage. The value is, however, required for the creation of monthly business reports or for calculating the balance sheet total. Following publication of the balance sheet total, the individual values play a lesser role and thus have a lower value. However, the value of this data changes abruptly when, for example, monthly incoming payments have to be checked for loan applications, when follow-up orders, queries etc. have to be processed, or when audits or tax audits are due. Access to the individual values is then required in some circumstances. A permanent change in value therefore does not take place until query deadlines or prescribed archiving times have passed. It is then at the discretion of the bank to either delete the data or continue to hold it for a longer period of time.
In some cases there are also situations where data has to be deleted after a certain period of time. Cases of this nature are often found in sectors where monitoring data (e.g. video surveillance) has to be deleted after a given number of days. This may require certain technologies that prevent the data from ever being restored.

2.4 Cost pressure

Reducing costs is a permanent goal. Over the course of time there have only been short periods where cost pressure was pushed into the background. E-business is a case in point. When e-business first arrived, some businesses made expansion their highest priority. In the end, however, it just didn't work out for a great many of them (for the sake of clarity, a clear distinction needs to be made between the unworkable e-business models of the past and the sound Internet services, e-commerce and Intranet/Extranet applications that we have today). Today, the issue of reducing costs is once again top priority. The main reasons may lie in globalization and the difficult economic situation in many regions. Cost reductions must be targeted at the right areas. Costs can be divided into HR (human resources), technology and process costs. HR costs tend to rise permanently, assuming a constant headcount. The percentage drop in technology costs in recent
years has reached double digits, based on the same capacity and no change in technology. Process costs are for the most part individual, though they can be strongly influenced by automation. The one commonality is the constraint whereby more has to be achieved with fewer resources. ILM provides automated processes to help better utilize existing technology. This in turn reduces HR costs, as the right information is automatically made available at the right time in the right place.

2.5 Technological innovation

The emergence of new storage technologies such as ATA (Advanced Technology Attachment) disks, which have a different price/performance positioning compared with Fibre Channel/SCSI disks, has expanded the classical storage model. Today we have at least three main hierarchies, leaving aside the fact that there are different technologies in the online area (high-end, mid-tier and entry RAID systems) and different technologies in the nearline area (tapes and cartridges, CDs, DVDs, etc.). The hierarchy begins with the high-performance, high-availability online primary area. The main characteristic of the online primary area is that data is available immediately. Thanks to block-level access, access speeds are in the millisecond range. In the enterprise sector, this hierarchy usually consists of FC or SCSI disks with high performance and high availability. Unfortunately, high performance and high availability also come at a high price. Next in line to the online primary area are the online secondary areas. This hierarchy also features direct access but, unlike the online primary area, is constructed using high-capacity disks with moderate throughput values but lower costs per megabyte. Normally the online secondary area is based on ATA disk technologies. The term ATA is used here as a collective term, even though some vendors make use of different technologies.
Some of the most important application scenarios are found in archiving and in disk-to-disk backup. Storage systems can either be based exclusively on ATA disks or house FC/SCSI disks in parallel with ATA disks. Because the systems can be freely configured, a storage system can be optimally adjusted, e.g. for OLTP or for backup/archiving tasks. At the other end of the hierarchy lies the nearline area, with tape technology and much lower price levels per megabyte. Access speeds in this hierarchy range from a few seconds to a few minutes, with the requirement that the tapes be managed in automatic libraries. Tape technology has undergone major development in recent years, with cartridge capacity currently at 200 Gigabytes and set to increase to 400 and 800 Gigabytes respectively in the near future. In addition to the nearline level, it is also possible to have a migration level with shelves and manual access, for example. The challenge is to make optimum use of the hierarchies with the help of ILM.

3. The ILM concept

3.1 Basic methodology

ILM is not a product, but a combination of processes and technologies. It determines how information flows through a storage infrastructure made up of different storage hierarchies/storage classes. It does not examine individual input/output requests (I/Os), but information objects. Examples of information objects include volumes (LUNs), file systems, files, e-mail messages including attachments, digitized contracts/policies, patient records in hospitals, etc. The processes determine at what time and in which storage hierarchy/storage class an information object is to be available. The storage hierarchy/storage class is determined in an optimization process. Variables for the optimization process incorporate, on the one hand, the value of the information in the information object, the frequency of access, the age of the information object, legal/regulatory requirements, etc.
and, on the other, the costs per storage unit, the required access speed and availability, the available capacity, etc. in the relevant storage hierarchy/storage class. The optimization process is a recurring process, which is performed for the first time when an information object is created. The information objects are then reevaluated at regular intervals. If, following evaluation, it is determined that it would be better to store an information object in a different storage hierarchy/storage class, the process initiates the necessary steps (e.g. migration or relocation to a different storage hierarchy/storage class, archiving, deletion of the information object, etc.). Optimization processes can also take place on an event-driven basis outside of the defined cycles.
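As an illustration, the optimization step can be sketched as a small rules engine. The storage class names, cost and latency figures, and the access-frequency threshold below are all hypothetical; a real ILM engine would weigh many more parameters (value of information, availability, remaining capacity, legal holds):

```python
from dataclasses import dataclass

# Hypothetical storage classes with assumed cost per GB and access latency.
STORAGE_CLASSES = {
    "online-primary":   {"cost_per_gb": 10.0, "access_seconds": 0.005},
    "online-secondary": {"cost_per_gb": 3.0,  "access_seconds": 0.015},
    "nearline-tape":    {"cost_per_gb": 0.5,  "access_seconds": 60.0},
}

@dataclass
class InfoObject:
    name: str
    accesses_per_month: int    # observed access frequency
    max_access_seconds: float  # access time promised in the SLA

def place(obj: InfoObject) -> str:
    """Return the cheapest storage class that still satisfies the SLA."""
    # Hot objects stay in the primary area regardless of cost.
    if obj.accesses_per_month > 100:
        return "online-primary"
    candidates = [
        (attrs["cost_per_gb"], name)
        for name, attrs in STORAGE_CLASSES.items()
        if attrs["access_seconds"] <= obj.max_access_seconds
    ]
    return min(candidates)[1]

# An old, rarely accessed object with a lenient SLA ends up on tape;
# a hot database volume stays in the online primary area.
print(place(InfoObject("invoice-2006-07", accesses_per_month=1, max_access_seconds=120.0)))
print(place(InfoObject("oltp-volume", accesses_per_month=5000, max_access_seconds=0.01)))
```

Rerunning `place` at regular intervals, or on events such as a drop in access frequency, corresponds to the recurring reevaluation described above.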
3.2 External input

The optimization process described above requires additional decision criteria, which need to be incorporated as external requirements (i.e. not from the IT department). The most important of these are:

- The value of the information at the relevant time
- The agreed Service Level Agreements (SLAs)
- Definition of access rights/data integrity

3.2.1 The value of information

Defining the value of information is a management task that is performed on the basis of the business processes with the mapping of the business logic. It is often the case that the business processes are not clearly enough defined. In such instances, external consultants should be brought in. Once the value of the information has been defined, the protection of the information can also be substantiated. The most important considerations are the demands to be met by the RPO (Recovery Point Objective) and RTO (Recovery Time Objective). The RPO defines the maximum loss of information that may occur when a fatal error occurs. To put it in fairly general terms, one could ask how many transactions would be lost if either the system crashed or, if the worst comes to the worst, the entire computer center including the information was no longer available. The RTO, on the other hand, defines the time between a damaging event and the point at which the IT system needs to be operational again. In certain sectors, this is not merely an internal decision, but is influenced by association requirements or even by legislation.

3.2.2 Service Level Agreements

Service Level Agreements (SLAs) define the service that an end user, for example, can expect when accessing an information object. SLAs should, in principle, be defined for every provision of service. This aspect takes on greater significance in ILM, as dynamic processes run on the basis of the changing value of the information over the course of its lifecycle.
For example, information objects can be moved from the online primary area to slower, more cost-effective storage media if they have not been accessed for a long time. If information objects that have been moved need to be accessed again, it must be clear that the access will take longer, but may not exceed a specified maximum time. The SLA definitions specify the technology to which the information objects can be moved after the given times.

3.2.3 Access rights/data integrity

Like the value of the data, access rights must principally be defined from a management viewpoint. For example, it must be decided whether data should only be accessible internally or whether it should also be accessible externally. In reality there are, of course, different internal user groups with different access rights. ILM must take these requirements into account so that they cannot be violated by ILM actions. After data has been migrated, for example, the same access rights that were in place before the migration must continue to exist. The same applies to backup, replication, etc. Archiving follows different rules: after an information object has been archived, nobody should be able to delete or modify it before its retention period has expired. If modifications are necessary, a new version of the object has to be created. The old object must not be altered under any circumstances. Protection demands can change during the lifecycle of an information object.

3.3 Storage hierarchies/storage classes

ILM works more effectively the better the entire storage area is structured. This is normally done through storage hierarchies and storage classes. Storage hierarchies are produced as a result of the classification of storage technologies in ascending or descending order according to a technical or cost viewpoint. Technical aspects and cost viewpoints are often mutually dependent.
For example, storage systems with fast access are in a higher price range than storage systems with slower access.
Storage classes are the result of the logical structuring of the storage area. Storage hierarchies can thus be structured in more detail. In addition, it is possible to make a distinction based on the number of copies of information objects to be created automatically, even when the storage objects are in the same storage hierarchy.

3.4 Structuring the data

In the current model we have evaluated the data globally and defined the containers (information objects). This framework needs further refinement, as otherwise the highest level of requirements would have to be applied to data that does not have such requirements. In structuring the data, we divide the entire dataset into logically connected subareas. The easiest way to structure data is by area of responsibility (or department). Examples include development, purchasing, sales, marketing, HR, etc. Data can also be structured according to user group. An orthogonal division is created through the separation of structured and unstructured data (see section "Driving forces"). Structuring according to application appears more practical, as applications are often used at an inter-departmental level. The model becomes more interesting if the vertical procedures can attribute individual data objects. It would be ideal if, for example, a process on the SMTP server could determine whether incoming e-mails are tax-related. It would then not be necessary to archive all of them as a precaution. In the file system area especially, first approaches to automated data classification have arrived. In these solutions, appliances are placed in the storage network which assess all files step by step according to defined rules and trigger the necessary actions. For instance, an appliance can scan all files to determine whether the string "confidential" is contained in the content.
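Such a rule-driven scan can be sketched in a few lines. The rule (the literal string "confidential"), the share paths and the move action are purely illustrative; a real classification appliance works against the storage network and applies many configurable rules:

```python
import os
import shutil

def classify_and_move(open_share: str, controlled_share: str,
                      needle: bytes = b"confidential") -> list[str]:
    """Scan all files below open_share and move any file whose content
    contains the needle to the share with controlled access rights."""
    moved = []
    for dirpath, _dirs, files in os.walk(open_share):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    content = fh.read()
            except OSError:
                continue  # a real appliance would log unreadable files
            if needle in content.lower():
                target = os.path.join(controlled_share, name)
                shutil.move(path, target)  # the target share's ACLs now apply
                moved.append(target)
    return moved
```

The effect is exactly the scenario described in the text: a misplaced sensitive file leaves the openly accessible filer and lands where controlled access rights are enforced.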
If such a file has mistakenly been stored on an openly accessible filer, the appliance can move it from the open filer to one with controlled access rights. By structuring the data, it is possible to define different processes and optimization points for the different data areas.

3.5 Benefits

The aim is to guarantee optimum performance at the enterprise level by storing data in different storage classes according to its importance, in compliance with legal and regulatory requirements and in accordance with SLAs, at the lowest possible cost, without unduly preventing users from accessing the data. Key variables for the optimization process are Service Level Agreements (be they of an internal or external nature) and the available budget/configurations. The concept is based on the idea that the value of most information and, consequently, the access models for the information objects change over the course of time. Automated processes help ensure that the information objects are available at all times in the right place at the lowest possible cost. This reduces investment costs. Hardware costs, however, account for only a small portion of total costs: different Total Cost of Ownership analyses estimate this portion at only one third to one fifth of the total. The remaining costs are infrastructure, administration and operating costs. The timely archiving of information objects eases the burden on the online primary area, for which costly backup processes are executed. Archived information objects need, in principle, only be saved once. Archiving can therefore reduce daily, weekly or monthly backup volumes. The situation is even more serious when it comes to restoring data. If old information objects are not removed from the online primary area, all objects have to be reloaded before the application can go online again.
ILM provides the basis for guaranteeing Service Level Agreements, as the dynamic processes always ensure optimum utilization of the individual storage classes. Service centers can pass these Service Level Agreements directly on to their customers. When complied with, Service Level Agreements also have an effect on customer loyalty. Customer inquiries, whether made directly over the Internet or indirectly through employees, can be responded to in the shortest time possible. Customer satisfaction increases accordingly.
4. Advance provisions

4.1 Consolidation and SAN/NAS infrastructure

From a conceptual point of view, it does not matter to ILM whether the IT configuration is operated in a decentralized or centralized model, as long as there are communication paths to the remote sites. But the communication overhead is much higher in a distributed model than in a consolidated model. All consolidation potential should therefore be utilized before introducing ILM. Consolidation may be the key technology when it comes to, on the one hand, creating coherent storage hierarchies and, on the other, keeping the number of storage classes at a reasonable level. Practical examples of effective consolidation can be found in the e-mail sector. With Exchange Server 2003, for example, several hundred servers can be concentrated on just a few servers. It is much easier to connect a migration or archiving level to a few servers than to several hundred servers that may be distributed over long distances. Intelligent storage networks (SAN, NAS, IP storage) make the implementation of dynamic ILM decisions at the operational level much easier. In intelligent storage networks, every server has, in principle, access to every storage area. This creates the requirement whereby, for example, the most suitable storage class has to be selected for the first assignment.

4.2 Virtualization

Effective ILM assumes problem-free access to resources. When deciding which technologies to use for storing information objects, technical attributes should play only a minor role. This is achieved today by means of virtualization. A distinction must be made here between virtualization concepts in the online, nearline and NAS areas. In the online area there are solutions based on servers, networks and storage systems.
A detailed description of the different virtualization concepts is available in the white paper "Virtualisation, basic building block for dynamic IT".

4.3 Storage Resource Management

In Storage Resource Management (SRM), the characteristics of available storage systems, physical disks, logical and virtual volumes, file systems, files, database containers, database contents, etc. are recorded and stored in a central, structured database for long-term administration. This forms the basis for successful cross-server and cross-platform data consolidation and for the evaluation of storage utilization for trend analyses, threshold value monitoring or usage accounting. The storage classes and the sizes required can be derived from this data.

5. The ILM model

The ILM model consists, on the one hand, of process instances and, on the other, of the transition matrix. Standard process instances include initial allocation, data backup including recovery, replication, migration (HSM), archiving, relocation and deletion/destruction of information objects. In initial allocation, ILM specifies which storage class an information object is to be stored in. Today, this is still mainly done by means of a fixed assignment of file systems to virtual volumes or LUNs. However, there are also file systems that already automate this process. Conventional data backup at block and file level is normally based on a version schema. In other words, the information objects are copied to another physical medium at regular intervals without destroying the previous copy. This allows access not just to the last version of an information object, but also to older versions. Databases and applications provide restore functions up to a specific transaction status using the log files. Replication is available in the form of continuous mirroring and point-in-time copies. With continuous mirroring, every write operation is performed either synchronously or asynchronously in the mirrored dataset as well.
Data that is modified in a dataset is changed in the other dataset as quickly as possible. The original dataset and mirrored dataset should always be linked.
Point-in-time copies create a logical copy of the dataset at a specific point in time. Even when changes are made in the original dataset, the copy remains in the logical state it was in when it was created. Highly flexible configurations with the most stringent RPO/RTO requirements need additional technology in the form of Continuous Data Protection (CDP), which goes beyond point-in-time copies. Continuous Data Protection extends regular point-in-time copies by enabling a reset to an arbitrary point in time in the past. Going back to an arbitrary point in time is made possible by storing, for each write action, the new data including a time stamp as well as the old data. Continuous Data Protection can be implemented either in the file system or at the block level. Migration moves information objects from one storage class to a different storage class without changing the access syntax. Information objects can, for example, be migrated to the online secondary area or to tape libraries. System-integrated migration first copies the information object to the new storage class and then converts the original object into a pointer that refers to the copied object in the other storage class. This has the advantage that the application or user can use the same access syntax as if the file were still in the online primary area. The only difference from the end user's point of view lies in the access time, as the system must first recall the information object to the original storage class when it is accessed. Migration processes are transparent to the application. Migration is to be distinguished from systems that realize a linear file system which behaves like an online file system but writes information objects quickly to offline storage classes. These systems do not recognize policies, but instead use the online storage as a cache.
It is not possible to make a deterministic forecast as to whether an information object still exists in the cache; probabilities can be specified on the basis of the cache size and the access behavior if necessary. Archiving is the targeted storage of an individual information object or a related set of information objects in a different storage class, together with the definitions specifying the length of time over which the information objects cannot be deleted. A key feature of archiving is the additional storage of logical metadata for each archiving request. The metadata can either be created manually, or automatic indexing can be performed. Retention periods are specified on the basis of the business processes. For example, German tax law prescribes, in addition to many other deadlines, typical retention periods of six and ten years. Depending on the sector, there are also much longer retention periods (e.g. design drawings or approval documentation for a nuclear power plant). The ILM processes must guarantee compliance with these deadlines. The relocation of an information object is, in principle, an irreversible process. Relocation is an important process instance when old hardware is to be replaced with new hardware. The information objects involved are principally LUNs or volumes. With relocation, the challenge lies in the transparent execution of the process, which usually takes a long time, and in the subsequent transparent change of the addressing mechanisms. Following a relocation, all areas on the original are released. At the end of the lifecycle of an information object there are the deletion and destruction process instances. With deletion, the space is only logically released. Destruction means that the data areas are overwritten many times with specific alternating bit patterns so that the possibility of data reconstruction (even with sophisticated physical analysis tools) is ruled out. The dynamic process is described in the transition matrix.
After initial allocation, transitions into all other states are possible. A backup or a replica can be created from an information object, and the information object can be migrated, relocated, archived or deleted. After deletion, and especially after destruction, transition to another state is no longer possible, while an initial allocation can once again be derived from backed-up or replicated information objects. Migrated or archived information objects can likewise be made available again via initial allocation. Information objects often go from the backup state to the replicate state, whereby the data backup is automatically replicated after being written to a backup medium. Other transitions are also possible. For example, information objects can be moved from the backup area or archived. For information objects with fixed content, initial allocation directly in the archive area is also possible. Special systems are suitable for information objects with fixed content; they provide the basis for compliance with legal and regulatory demands in the archiving area.
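The transitions described above can be captured in a small lookup table. The state names follow the process instances from this section; the matrix itself is an illustrative sketch of the concept, not a normative specification of any product:

```python
# Allowed lifecycle transitions per state, as described in the text.
TRANSITIONS = {
    "initial":   {"backup", "replicate", "migrated", "relocated", "archived", "deleted"},
    "backup":    {"replicate", "initial", "archived"},  # backups are often auto-replicated
    "replicate": {"initial"},
    "migrated":  {"initial"},
    "archived":  {"initial"},
    "relocated": {"initial"},
    "deleted":   set(),  # after deletion/destruction nothing follows
}

def allowed(src: str, dst: str) -> bool:
    """Check whether an ILM action may move an object from state src to dst."""
    return dst in TRANSITIONS.get(src, set())

print(allowed("initial", "archived"))  # True
print(allowed("deleted", "initial"))   # False
```

An ILM process engine can validate every planned action against such a matrix before executing it, which makes illegal transitions (e.g. reviving a destroyed object) impossible by construction.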
Figure 1: Transition matrix

6. Conclusion

The industry is implementing ILM solutions with great emphasis. Although partial solutions are already available, an integrated solution as described in the theory behind the concept is still a few years away. The partial solutions show that the industry is already on the way towards an overall solution, and they can most likely realize a high percentage of the potential that currently exists. There are therefore few arguments against starting with ILM immediately. The key is implementing the business process requirements, and a great deal more integration is still required here. It is often difficult to know all of the legal and regulatory demands on the one hand and to correctly estimate the value of the information objects on the other. This becomes all the more difficult when group or departmental interests differ within an organization. Neutral advice from an external expert can help in this case.

Delivery subject to availability, specifications subject to change without notice, correction of errors and omissions excepted. All conditions quoted (TCs) are recommended cost prices in EURO excl. VAT (unless stated otherwise in the text). All hardware and software names used are brand names and/or trademarks of their respective holders. Copyright Fujitsu Siemens Computers, 07/2006. Published by: Simon Kastenmüller