Guide to Data Protection and Disaster Recovery
Table of Contents
Designing a Data Protection and DR Strategy
Understanding Key Technologies (snapshot, image, replication, deduplication, etc.)
Taking a Unified Approach to Data Protection and Recovery
Data Protection for a Virtualised Environment
The Business Side of DP and DR - A CIO Perspective
Options for Long Term Archive

Note from the editor
Data is currently hot. We are seeing huge hype around Big Data, although if you think about it, data analytics has been around for a long time. Of course, technologies like Hadoop that extend the possibilities of what can be achieved with Big Data are changing the landscape. In South East Asia we have finally seen cloud become a mainstream destination for applications and data. Whilst we think there is still some maturity required before BYOD gets true corporate adoption in this part of the world, we still see unregulated business data being created, viewed and stored on mobile devices. If you have read our hot data technologies index you will know that we give the highest adoption rating to Data Protection technologies. That is why we see this guide as key. Whatever your data, and wherever it is created and stored, it always needs to be protected and, in the event of disaster, recovered. We hope you enjoy the guide.
Yours in Storage,
Allan Guiam
Designing a Data Protection & Disaster Recovery Strategy
By Martin Lee, Data&StorageAsean

Designing a strategy for Data Protection and Disaster Recovery by answering questions about your data, applications and infrastructure.

Key considerations in formulating a strategy need to include:
Data Classification
Recovery Time Objectives (RTO)
Recovery Point Objectives (RPO)
Off Site Copy Capability
Archive requirements

Only after considering the above is it possible to make decisions on the correct technology that will underpin your strategy.

Often overlooked, Data Classification is actually key to ensuring the most relevant and efficient strategy. It is impossible to decide how quickly data needs to be recovered, or how long it needs to be archived, until that data has been classified. Even a simple home-grown system of classification can work wonders. Using three tiers such as Business Critical, Business Relevant and Non Business can enable you to develop different strategies for each data type. Obviously the protection put in place for Non Business data will be very different from that used for Business Critical.

Once you know what data you have, it is important to set RTO and RPO objectives for each data type. Recovery TIME Objective means how quickly you would like to be able to recover your data in the event of data loss, while Recovery POINT Objective refers to the most recent point you need to be able to recover back to in the event of data loss or system crash. An RPO of 1 second would mean you can always recover to 1 second before the disaster occurred. Likewise, an RPO of 1 hour would mean you are prepared to lose up to one hour of data. Typical nightly backups will usually give an RPO of 24 hours. These two concepts, RTO and RPO, combined with good data classification, will get you very close to being able to define a strong Data Protection and Disaster Recovery strategy. As an example, you might decide that Business Critical data requires an RTO of 15 minutes and an RPO of 5 minutes, and that your Non Business data can have an RTO of 48 hours and an RPO of 24 hours. Knowing and defining this gets you a long way towards having your top level strategy in place. Once you have these details it is possible to engage technology providers and find out how they can help you achieve these service level objectives.

The two other headline concepts that nearly always need to be factored into any Data Protection and Disaster Recovery (DR) plan are Archive (or Retention) policy and Off-Site considerations. Archive is all about how long backed up data needs to be retained. This is also not the same for every type of data, and it does not always follow that data classified as Business Critical needs to have a long retention time. Therefore considerations about Archive often need to be made outside of the initial data classification. Regulatory and statutory considerations may mean that backups of some data need to be retained and archived for many years, whereas other data may not need to be retained beyond one or two backups. This means that different classifications for Archive need to be made and calculated, and then the appropriate technology for Archive selected.

The final major consideration that needs to be factored into a Data Protection and Recovery plan is whether off site copies of data are required. Again, the technology you use for off-site copies will depend on the reason why they are needed.
If the off-site requirement is about a safe retention location rather than fast recovery, then using a service to collect tapes and store them at a third party facility might be fine. On the other hand, if you want off-site copies of your data for fast recovery in a DR scenario, then an entirely different replication-based technology might be required. In short, off-siting your backup and recovery data comes down to classification, followed by selection of the right technology to match the need. The concepts described above will help you design the strategy suitable for your needs, and a short sketch of how classification and objectives fit together follows below. The articles that follow will help you to apply different technologies to enable you to build and implement that strategy.
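To make the classification exercise concrete, here is a minimal Python sketch of how the three-tier model and its recovery objectives might be recorded. The Business Critical and Non Business figures follow the example in the article; the Business Relevant targets are illustrative assumptions, not a recommendation:

from datetime import timedelta

# Illustrative three-tier classification with recovery objectives.
# Business Critical and Non Business targets follow the article's example;
# the Business Relevant figures are assumptions for illustration only.
RECOVERY_OBJECTIVES = {
    "business_critical": {"rto": timedelta(minutes=15), "rpo": timedelta(minutes=5)},
    "business_relevant": {"rto": timedelta(hours=4), "rpo": timedelta(hours=1)},
    "non_business": {"rto": timedelta(hours=48), "rpo": timedelta(hours=24)},
}

def objectives_for(classification):
    # Look up the agreed RTO/RPO pair for a given data classification.
    return RECOVERY_OBJECTIVES[classification]

print(objectives_for("business_critical"))

Writing the objectives down in a form like this makes it easy to hand technology providers an unambiguous statement of the service levels each data tier must meet.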
Understanding Key Technologies
By Martin Lee, Data&StorageAsean

Understanding the key Data Protection technologies is vital in order to be able to decipher what vendors are telling you and then build your DR plan.

Backup
In information technology, a backup, or the process of backing up, refers to the copying and archiving of computer data so it may be used to restore the original after a data loss event. Typically a backup of data is copied to a location separate from where the source data is created and stored. For this reason removable media such as tape has long been the media of choice for data backup. Backups have two main purposes. One purpose is to create a second copy of data that is recoverable after a data loss incident, such as an accidental deletion or some kind of data corruption. Another purpose of backups is to recover data from an earlier time, according to a user-defined data retention policy. For some businesses and organisations it is important to maintain recoverable copies of backups going back many years.

Replication
Data replication involves making a real time, or very near real time, copy of data to a secondary location remote from the source data. Replication is carried out to enable fault tolerance, ensuring an exact copy of primary data is available and ready for use in the event the source data becomes unavailable through some kind of unplanned outage such as a natural disaster.

Snapshot
A snapshot is a read only, frozen, point in time copy of a data set used for backup purposes. Once the snapshot has been created, it provides a read only, separate, point in time copy of the primary data set. Any backup application is then able to back up the snapshot rather than the live primary data. This means that application uptime and user access to files are not impacted at all by backup. Once a first snapshot has been completed, future snapshots can be taken very quickly (a matter of seconds) as they simply record the changes and build a new frozen point in time copy by pointing everything unchanged back to the original.

Disk Image Backup
A disk image takes a copy of the entire contents of a disk volume. An image bypasses the file system and generates a sector level copy of the disk. This means that an image backup will create an exact copy of the structure of the volume it copies, including settings and operating system level information. Image backups are able to create an exact copy while automatically skipping unused blocks in order to save space.

Bare Metal Recovery
A Bare Metal Recovery (BMR) is a very fast recovery process, usually taking minutes, in the event of a total server failure. BMR utilises an image backup to recover a complete system in just a few simple steps. Typically a Bare Metal Recovery is performed by first booting an emergency OS (such as Windows PE), then recovering an entire image from an image backup, and rebooting the system again. On reboot, the exact system as copied into the image backup will be restored and will run exactly as the system that failed. BMR can usually be used to recover to dissimilar hardware.

Continuous Data Protection (CDP)
CDP is a type of backup that runs continuously as a service, backing up every change to primary data in real time or near real time. Deploying this method of copying data allows for recovery to any point in time over the retention period during which data is being copied. CDP differs from replication in that it is not always designed for instant system recovery or for sending data to a remote redundant location.
CDP by its nature is not well suited to utilizing tape-based media as a target. Rather, it is best used with disk based technology.

Deduplication
Data deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data. Due to the nature of this technique, deduplication is particularly well suited to improving storage utilization for backups made to disk, where duplicate copies of files take up a large amount of space. (A toy illustration of the idea appears at the end of this section.)

P2V
Also referred to as Physical to Virtual Backup, P2V is a process where a physical server is backed up in a format that enables it to be recovered instantly as a virtual machine on a hypervisor such as VMware or Microsoft Hyper-V.
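To illustrate the core idea behind deduplication, here is a toy Python sketch of fixed-chunk, hash-based deduplication. Real products chunk variable-length streams and manage persistent on-disk indexes, so treat this purely as a conceptual model under those simplifying assumptions:

import hashlib

def deduplicate(chunks):
    # Store each unique chunk once, keyed by its content hash,
    # and keep an ordered "recipe" of hashes to rebuild the stream.
    store = {}
    recipe = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    # Reassemble the original stream from the unique chunks.
    return b"".join(store[digest] for digest in recipe)

chunks = [b"block-A", b"block-B", b"block-A", b"block-A"]
store, recipe = deduplicate(chunks)
print(len(store), "unique chunks stored for", len(recipe), "logical chunks")
assert rehydrate(store, recipe) == b"".join(chunks)

The same principle is what lets deduplicating backup targets keep many nightly backups, which differ only slightly from one another, in little more space than a single copy.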
Taking a Unified Approach to Data Protection and Recovery
By Caddy Tan, Regional Director for Arcserve

The backup and recovery market is quickly morphing as end users weather a perfect storm hitting their infrastructure, affecting the efficiency of their operations. There are many factors. Let me share my short list:
the growth of unstructured data
the adoption of virtualization
funding pressures, set against service metrics such as Recovery Point Objective (RPO) and Recovery Time Objective (RTO)
the use of multiple point solutions for data protection and high availability from multiple vendors, which creates inconsistent data protection infrastructures for larger end users

Fueled by data growth and technology advancement such as virtualization, current architectures come up short in a number of areas and essentially perpetuate data protection islands or silos. Today's IT is about the interdependence of systems and applications in the context of service delivery. Understanding and proving that you can recover in a business-reasonable amount of time, with a business-acceptable currency of data, is crucial. Metrics such as RPO and RTO have become synonymous with business availability. Current architectures make overall poor use of their resources due to a lack of ability to measure process inefficiencies, obsolete solutions with expensive licensing, or niche data protection solutions that only compound the problem by adding complexity on top of complexity.

A fundamental change is necessary to fix data protection. It requires the adoption of a modern architecture that has been designed to solve today's complex problems, as well as provide a highly scalable platform for the future. The next generation of data protection products cannot compromise when it comes to features: completeness is essential. What this means is a solution that combines all the core technologies of data protection and recovery: image backup, file-level backup, advanced scheduling, physical, virtual, tape, replication, high availability, deduplication, etc. It is a long list, and one that only a few vendors can successfully handle. This is what we call the solution stacking phenomenon. Another way to look at this is that every organization should be able to benefit from the best levels of protection and recoverability, something that until recently was reserved only for the largest enterprises. Innovation is about offering what was once limited to the few to a broader audience, and that is exactly what next generation architectures need to do: innovate by delivering enterprise-level features at a fraction of the cost, and with ease of use.

The use of a specific technology like backup or replication is a conclusion, not a starting point. Also important is mapping the data/system protection or recovery level to the business need. Reverse engineering requirements to match a technology is a sure way to miss the mark. A modern solution lets customers or service providers easily create plans on the basis of their RPO and RTO, as simply as if they were using a dial, and lets the right technology kick in based on requirements. Key to this endeavor is the abstraction of what can be complex tasks or workflows that happen behind the scenes. Therefore, the next generation of solutions has to unify many technologies in a way that is easy to configure, yet still provides fine tuning capabilities. A modern unified architecture has to be designed on the core tenets of usability and flexibility. Ease of use has become a de facto requirement in light of the complexities associated with data recovery infrastructures.
Arcserve Unified Data Protection brings to the market the first easy-to-use and easy-to-deploy Backup, Replication, High Availability and True Global Deduplication within one solution, on-premise, off-premise or in the cloud. Arcserve UDP's true global deduplication feature dramatically reduces the amount of data actually transferred during backup cycles. The ability to deduplicate across all the clients in the infrastructure is central to limiting the unnecessary storage and transfer of existing data: data is deduplicated across nodes, across jobs and across sites. In addition, the solution allows for in-place re-hydration of data, for fast granular restore, including from tape. To achieve even better RPO and RTO, Arcserve UDP provides continuous data replication, system and application monitoring, automatic and push-button failover, automated end-user redirection and push-button failback functionality, all to help reduce system downtime for both physical and virtual servers on-premise, off-premise or in the cloud. It also enables automated disaster recovery testing of business-critical systems, applications and data, without business downtime or impact to production systems, to assure system recoverability. As many IT professionals are constantly reminded, it is all about delivering on RPO and RTO!
Data Protection for a Virtualised Environment
By Allan Guiam, Editor, Data&StorageAsean

For years, storage professionals have been telling us that backups are a necessary part of good housekeeping and no IT person should ever be caught off guard without a backup of their data, preferably as recent as possible. Why? Because Murphy's Law, that anything that can go wrong will go wrong, is still very much with us. One approach to take is the 3-2-1 rule, illustrated by helpfulhacker, which suggests that: (1) you keep three backup copies of your data at all times, where the frequency depends on how critical the data is and how often it gets updated; (2) you store your backups on two media types, such as internal disk, external disk or cloud, as long as they are not stored on the same media; and (3) you keep one copy offsite, which is particularly true for enterprises.

Keeping at least three copies of the data will ensure that if the primary data is corrupt, you will have two additional copies to fall back on. You can also use deduplication technology to shrink the data that needs to be copied. You may elect to have a clone copy of the production data as the third copy to increase your data protection odds. Finally, to mitigate any further risks associated with any potential corruption of data, you will need to perform regular restore tests to guarantee the integrity of copied data (a simple automated check of the rule itself is sketched after this section's example).

The 3-2-1 rule, illustrated by helpfulhacker, is as relevant in virtualised environments as it has been for physical ones.

Every media type has the potential to fail. DVDs can develop bubbles. Tapes can form mildew or become brittle or demagnetized. NAS firmware updates can fail. So having at least two different media or media types reduces the risk of both failing at the same time. Today there is a multitude of media options for enterprises, including LTO tape, NAS and WORM systems, to name a few. So pick at least two media depending on your business priorities and budgets.

Earthquakes, typhoons, flooding and fires are frequent occurrences in Asia, raising the potential for data loss among enterprises in the region. Best practice therefore calls for backup copies to be stored in remote sites far from the primary site. Companies with multiple sites can use one of their other sites for backup. For everyone else, an online backup service is an effective and cheap alternative. Why buy your own hardware and software when pay as you go is just as reliable, without having to be concerned with running out of space, media compatibility or reliability? The only other thing you would have to consider is security, and for this encryption should be standard in your arsenal of data protection. Where it involves vaulting, you complicate the issue with proximity to the backup site.

Let me share with you a true story. Some time ago, I spoke to the CIO of a large conglomerate in Asia (anonymity is required here) and he confessed that for years they had been religiously backing up their data to tape at night when server load was at its lowest. They even set up a dedicated link to another office that automatically backs up production data to a disaster recovery site. They followed the 3-2-1 rule above: (1) perform regular backups; (2) back up to a different media; and (3) back up to an offsite location. One day disaster struck the primary datacentre, rendering the entire facility down. The DR strategy kicked in, converting the remote site into the temporary datacentre production facility.
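As a companion to the restore tests mentioned above, here is a minimal Python sketch of how the 3-2-1 rule could be checked automatically. The copy records and their media/offsite fields are hypothetical, not drawn from any particular backup catalogue:

def satisfies_321(copies):
    # 3-2-1: at least three copies, on at least two media types,
    # with at least one copy held offsite.
    enough_copies = len(copies) >= 3
    two_media = len({c["media"] for c in copies}) >= 2
    one_offsite = any(c["offsite"] for c in copies)
    return enough_copies and two_media and one_offsite

copies = [
    {"media": "disk", "offsite": False},   # nightly disk backup
    {"media": "tape", "offsite": False},   # weekly tape
    {"media": "cloud", "offsite": True},   # replicated cloud copy
]
print(satisfies_321(copies))  # True

A scheduled job running a check like this against the backup catalogue is a cheap way to spot when, for example, the offsite copy has silently stopped replicating.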
Fast forward to today when, according to estimates and predictions by analysts, between 60 and 70 percent of Windows-based datacentres are virtualized, with Linux at about 35 to 45 percent. The initial drivers of cost and efficiency are being displaced in favour of flexibility and scalability. Today it is hard to find server technology that is not already virtualized or virtualization-ready out of the box. In 2014 we will see network and storage hardware accelerate their virtualization transformation.
Data Protection for a Virtualised Environment - Cont.
By Allan Guiam, Editor, Data&StorageAsean

How will this impact data protection strategies and policies? One major impact is Physical to Virtual, or P2V. Even for those companies that are not rushing towards a virtualised infrastructure for production, having just one machine on standby running a hypervisor provides a level of disaster recoverability that was just not within the reach of many IT departments before virtualisation became a reality. Recovering a physical server into a virtual server enables full recovery from a total physical server failure within minutes. When looking for solutions that offer this, try to find solutions that can create a VMware or Hyper-V bootable image from a physical server backup.

The real challenge we see is what to look for in a backup and recovery solution for use in virtual environments. Look for a solution that can see all your storage, both virtual and physical. The solution must allow for granular and application level recovery. Why try to recover an entire image when all you need is a file? Make sure your chosen solution provides all levels of recovery: full virtual machine, individual virtual disks, virtualized application and database servers, along with standards like file, folder, and granular objects such as an individual email.

Application awareness is an essential component of virtual machine backup. While most backup products provide crash consistent backups, application consistent backups use integration with technologies like Microsoft Volume Shadow Copy Service (VSS); even then, many backup products do not perform required post-process functions like log truncation, which ensure you are protecting the application completely. Many backup applications cannot perform granular recovery of those virtualized applications either. Many business-critical applications like Exchange or SQL Server will only do certain types of maintenance when a successful backup occurs. Application-aware backup solutions ensure this maintenance can take place. Usually, this requires some sort of software (i.e. an agent, whether it is deployed beforehand or injected and uninstalled on demand) in the virtualized application server.

More and more organizations are running multiple hypervisors within their environment, especially as alternatives to VMware gain popularity, notably Microsoft Hyper-V. Finding a single solution that supports all of your hypervisors will simplify backup complexity and licensing, streamline management, and reduce costs. While some IT organizations have invested in multiple separate tools for backup, one for physical servers and another for virtual servers, customers have consistently asked for a single vendor to manage both environments. This is because a differing approach to backup leads to inconsistent data management, backup confusion, increased cost, and even conflict between various IT teams. The solution is for IT to bring together the virtualization and backup teams, and assign ownership, authority, and resources for backup of both physical and virtual machines.
The Business Side of DP & DR: A CIO Perspective
By Allan Guiam, Editor, Data&StorageAsean

Mid-to-large enterprises in Asia today carry a data protection (DP) and a disaster recovery (DR) strategy as part of their portfolio of services. What has been changing is the impact of emerging technologies such as cloud computing on how internal management view the viability of outsourcing DP and DR to an external party. The rationale behind DP and DR has not changed significantly, but the complexity of the undertaking and the degree of understanding of the tools available have grown significantly, allowing organizations to focus on what is important: their business.

An interview with Suk-Wah Kwok, Regional Chief Information Officer - Asia Pacific, Lockton Companies (Hong Kong) Ltd provides insight into the business side of DP and DR. Ranked 9th in the world, Lockton Companies is the world's largest privately owned insurance broker. An outspoken and candid technologist, Suk-Wah is a familiar face in Hong Kong's IT landscape.

How does a financial service company define data protection and disaster recovery?

SWK: In our business, we have two kinds of critical data: corporate data and client data. With data protection we are charged with protecting both kinds of data from unauthorised and unintended use and access. Data protection is about recognising the criticality of all data categories, how they are being used, how they should be used, how leakage and misuse can occur, and where the exposure and risks are for the company, and finding ways to mitigate such risks. For me, disaster recovery is a set of measures designed to ensure reasonable and acceptable recovery of IT services to meet business demand. In the insurance industry, our goal is to ensure business exposure is kept to a minimum. In this regard, what IT does has to be related to what the business commits to the client. Our clients are mostly corporate clients who have in-house compliance expertise themselves, hence they will challenge and scrutinize our systems and processes and demand a certain level of service guarantee from us. Thus in our business, the Service Level Agreement, or SLA, is a common business concept. Before a client signs their business to us, it is common for them to check that we have DR measures in place, and they may ask us about uptime, support level, compensation for outages and things like that. As a result, IT has to develop disaster recovery systems and strategies with full awareness and understanding of what the business has committed to our clients.

What makes these different for a financial service provider like Lockton when compared to a bank or insurance company?

SWK: The average bank deals with consumers as customers; it is retail, and as such the transaction volume can be huge. Service demand tends to be consistently high at all times throughout the year. As a consequence, demands on IT are generally also consistent, daily in fact, throughout the year. You can just imagine the non-stop demand on DR. The technology side is relatively simple.
As long as you have a CIO or Head of IT who knows what he or she is doing, and management is willing to invest in the necessary technology, putting automatic data protection measures in place is relatively straightforward. The second aspect is about education and awareness, and making sure that HR puts in place manuals, procedures and processes, including training and regular refreshers, to ensure that employees understand what data protection and business ethics mean to the company and its clients, and their individual obligations. In Lockton, we have IT, HR and compliance officers working together to set up systems and processes to mitigate the risks that come as part of doing business.
The Business Side of DP & DR: A CIO Perspective - Cont.
By Allan Guiam, Editor, Data&StorageAsean

The hardest part to control is actually human behaviour, because it is subject to individual discipline. You can get someone trained as much and as frequently as you like, but it is difficult to change habits and control user behaviour. This is where IT can help the business by taking a proactive role to reduce potential damage caused by wrong user behaviour, as well as resorting to automatic monitoring to help identify system-detectable non-compliant user behaviours. We are fortunate that at Lockton, our business staff are permanent employees and licensed brokers who are constantly reminded of their corporate and industry responsibilities and obligations. It is more difficult for insurance companies that use self-employed agents, whose behaviour and discipline are much harder to control.

In a multi-country operation such as yours, what is the approach you take to ensure compliance with regulations around disaster recovery?

SWK: When you are in a regional position like mine, where I have to provide for a business that operates in multiple countries, the best practice and most economical way to ensure we meet regulatory compliance as well as SLA commitments to clients is to centralize our infrastructure, including our DR effort. And this is what Lockton has been doing for all wholly owned subsidiaries in Asia Pacific. This means country IT does not have to worry about DR; it is all taken care of at the regional level. And I consider this the most efficient in terms of cost, time, and resources.

Both DP and DR require significant investment in time, money and resources. Do you get asked by the CEO about the payback of such systems? And how has cloud computing changed your view or approach to data protection and disaster recovery?

SWK: Business compliance is strictly speaking outside my jurisdiction, as we have a compliance officer (CO) who takes care of it. That said, I work very closely with the CO to ensure we are aligned. For an international insurance broker like Lockton, DR and compliance are a global mandate. Countries won't pass internal audits unless they have a proven working DR in place. Because disaster recovery has now become a universally accepted requirement of doing business, I don't really get asked by management about issues like payback. I am, however, constantly required to find the most cost-effective way of providing DR. Right now, I am exploring newer approaches including the use of public cloud services for our regional infrastructure and DR services. I am obviously required to go through cost and benefit analysis for the various approaches.

What is your advice on the topic of data protection and disaster recovery?

SWK: In my view, data protection and disaster recovery are mainstream IT services. Hence these should already be in the IT service catalogue of all CIOs. Like any IT service, there are so many different ways to achieve similar purposes. I have been in a similar capacity for well over a decade and have gone through different ways myself of providing such services. I feel that the vendors have definitely matured enormously and emerging technologies are becoming increasingly affordable and practical. This means CIOs no longer have excuses to keep the status quo. They should constantly challenge their old or existing ways of doing things and open their minds to newer and more cost efficient ways of addressing critical data protection and DR needs.
Over the years, I have personally moved from individual countries doing local DR, covering measures like country redundancy, replication and backup, to a full blown regional approach combining experience and technology automation. The challenge is to be constantly aware of new technologies and processes and explore their potential. For example, right now, I am exploring cloud DR services.

What is your advice to vendors selling you these technologies and services?

SWK: As a frequent adopter of vendor services, I ask myself a few questions before I decide which vendor to use to support a specific undertaking:
Internal Expertise: Do I have the internal expertise to do so? If not, is it more cost effective to hire someone in-house, or should I consider outsourcing?
Efficiency: Even if I have internal resources, will it be more efficient for internal resources to do the job, or should we resort to a vendor? Bear in mind that my view of efficiency includes responsiveness to business needs and to unplanned problems that arise.
Cost: Will it be cheaper to do it in house or to use vendor services?
Value to my customers: Will the business and my stakeholders be more satisfied with internal IT services or with vendor services?

The pattern is clear: as emerging technologies become mainstream they get cheaper. The other good news is that these days emerging technology prices are coming down a lot quicker, so we can make practical use of them earlier. One thing I have also learned is that if we dig deeper into the real cost base of providing a lot of IT services in-house versus using vendor services, I often find using vendors financially a no-brainer. That said, I am a very open minded CIO who is willing to explore new options.
Options for Long Term Archive
By Martin Lee, Data&StorageAsean

Until very recently, long term archive, or deep storage, for electronic data was limited to one choice: tape. In recent years, however, other viable options have become available. When looking for the right media or technology for long term archive there are certain aspects that need to be factored in.

Price. Storage for archive is regarded as secondary or non-primary storage; as such, the price you expect to pay for archive storage should be considerably less than you pay for primary storage. For archive storage, rates in the region of $0.01 per GB are achievable.

Durability. Archive storage is often in place to ensure compliance with regulatory requirements. If regulation requires that data is kept for 7 years, then you must have confidence that the media to which you back up has the longevity to meet the archive duration needs.

Portability. Generally it makes sense to remove archive data from primary systems. Doing so frees resources on the primary system and, possibly more importantly, removing archive data in this way and storing it in a location remote from primary systems adds an element of additional redundancy. Different options for archive storage enable portability in different ways.

Offline/Near-line/Online. There are two aspects to whether your storage is offline, near-line or online. The first is recovery speed: online will allow for the fastest recovery, through to offline, where the recovery period may run into days. The other aspect is cost: the cost of maintaining and powering online storage is considerably more than the costs associated with near-line or offline. In fact, the cost of energy for an online storage system can be 4 to 5 times that of a near-line system like a tape library. These two aspects need to be weighed up and assessed before making a final choice for archive.

Broadly speaking, we see four main choices for archive storage: Tape, Deduplication, RDX Removable Disk, and Cloud.

Tape continues to evolve. LTO is the dominant tape standard, with LTO-6 taking the standard to increased capacity per cartridge (up to 6.25TB with compression) and transfer speeds of 160MB/s. Tape remains a perfect medium for long term archive. When not being used for day to day backups, wear from tape drive heads is not an issue, price per GB is about as low as you can get, and the media can be kept offline, reducing power consumption and costs. It is for these reasons that tape's much discussed decline has never come to pass. We can question whether tape is today a viable media for daily backups, but for deep and long term archive, tape remains the gold standard against which all other options should be compared.

Back in 2009, deduplication was possibly the hottest storage and data technology, hyped in much the same way that Big Data is today. Today many enterprise companies are taking advantage of deduplication technology and storing years of backups on deduplicated disk. Like for like, we believe that tape is still the most cost effective archive option; however, deduplication has some key strengths. If you need fast access to long-archived data, it is likely that you will be able to retrieve files very quickly from deduplicated disks. In addition, due to the nature of deduplicated data, it is a very efficient way to replicate your backup data to an offsite deduplicated replication target. Cost is 50 cents per GB.

RDX has quietly become the standard for SME backup.
Every major server manufacturer offers RDX as its internal server backup product for small business. RDX originally stood for Removable Disk Exchange, and the technology provides the main benefits of tape in a disk format. It is rugged, has longevity, is portable and can be stored offline. For enterprise companies, the price per GB may make it unfeasible as an archive media. But for smaller companies (e.g. a small design shop) a 1.5TB RDX cartridge is a simple and rugged platform on which to archive a large amount of data and keep it speedily accessible in the event it needs to be recalled.

Cloud storage providers are finally providing serious options for cost effective archive. As an example, AWS offers Glacier for long term archive. The price is very enticing, but the service level on recovery is days rather than hours, which is generally acceptable for long term archive data. Over the coming years cloud providers could offer the first serious alternative to tape for long term archive. Cloud archive means instant off-site copies of your data, and pricing seems genuinely competitive. However, it is important to factor in the costs for recovering data, as well as the challenges of sending or pulling back a huge amount of data over a network connection. We should also remember that to hit the price points they do, it is logical to think that the cloud providers themselves are utilizing tape technology for their long term archive offerings.

Media           Cost per GB    Offline    Speed of recovery
Deduplication   $0.05          No         Instant
RDX             $0.35          Yes        Hours
Tape            $0.2           Yes        Hours
Cloud           $0.1           No         Days

Archive is about storing data long term. Considerations for data retrieval are completely different from any other classification of data storage.
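As a quick worked example of how the per-GB figures in the table translate into budget numbers, here is a small Python sketch; the 50TB archive size is a hypothetical figure chosen purely for illustration:

# Per-GB prices taken from the comparison table above.
PRICE_PER_GB = {"Deduplication": 0.05, "RDX": 0.35, "Tape": 0.2, "Cloud": 0.1}

archive_tb = 50  # hypothetical archive size in TB
archive_gb = archive_tb * 1024

for media, price in sorted(PRICE_PER_GB.items(), key=lambda kv: kv[1]):
    # Media cost only; drive/library hardware, energy and recovery
    # fees are excluded and differ greatly between the options.
    print(f"{media:14s} ${archive_gb * price:>10,.2f}")

At these rates, 50TB of archive would cost roughly $2,560 on deduplicated disk versus $17,920 on RDX, which is why the per-GB column tends to dominate the choice once capacity grows, with the offline and recovery-speed columns deciding the tie-breaks.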