Optimal Implementation of Continuous Data Protection (CDP) in Linux Kernel
Xu Li and Changsheng Xie
Data Storage Division, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, P. R. China

Qing Yang
Dept. of Electrical and Computer Engineering, University of Rhode Island, Kingston, RI

Abstract

To protect and recover data in case of failures, the Linux operating system has a built-in MD device that implements RAID architectures. Such a device can recover data after a single hardware failure among multiple disks, but it cannot recover data damaged by human errors, virus attacks, or disastrous failures. In this paper, we present an implementation of a device driver that is capable of recovering data to any point-in-time in case of various failures. A simple mathematical model is used to guide the optimization of our implementation in terms of space usage and recovery time. Extensive experiments have been carried out to show that the implementation is fairly robust, and numerical results demonstrate that the implementation is optimal.

Keywords: data storage, data protection, CDP, data recovery, disk I/O architecture.

1. Introduction

With the rapid advances in networked information services, data protection and recovery have become a top priority for many organizations [1, 2, 3, 4]. For the Linux operating system, there is a variety of data protection and recovery programs in the open source community. Such programs can generally be classified into three categories: block-level device drivers that provide data protection functionality, file versioning, and backup and replication.

A typical block-level device driver that provides data protection and recovery is the MD (multiple devices) driver for software disk arrays [5]. MD implements most RAID architectures, including RAID1 through RAID5, in software, often referred to as software RAID. Such a device can tolerate one disk failure and can rebuild the data of the failed disk from the remaining functional disks. However, it is not able to go back to a past point-in-time to recover data damaged by human errors, virus attacks, or disasters.

File versioning is another type of data protection technique that records a history of changes to files. Versioning was implemented by some early file systems such as the Cedar File System [6], 3DFS [7], and CVS [8], to list a few. Typically, users need to create versions manually in these systems. There are also copy-on-write versioning systems, exemplified by TOPS-20 [9] and VMS [10], that create versions automatically for some file operations. Elephant [11] transparently creates a new version of a file on the first write to an open file. CVFS [12] creates a version for each individual write or small metadata change using highly efficient data structures. OceanStore [13] uses versioning not only for data recovery but also to simplify many issues with caching and replication. The LBFS [14] file system exploits similarities between files and versions of the same files to save network bandwidth for a file system on low-bandwidth networks. Peterson and Burns have recently implemented the ext3cow file system that brings snapshots and file versioning to the open-source community [15]. Other programs such as rsync, rdiff, and diff also provide versioning of files.
To improve the efficiency, flexibility, and portability of file versioning, Muniswamy-Reddy et al. [16] presented a lightweight user-oriented versioning file system called Versionfs that supports various storage policies configured by users. File versioning provides a time-shifting file system that allows a system to recover to a previous version of files, but these systems work mainly at the file system level, not at the block device level. Block-level storage usually provides high performance and efficiency, especially for applications such as databases that access raw devices.
At the block level, data protection is traditionally done using snapshots and backups. Despite the rapid advances in computer technology witnessed in the past two decades, data backup is a notable exception that remains fundamentally the same as it was 20 years ago. It is well known that backup remains a costly and highly intrusive batch operation that is prone to error and consumes an exorbitant amount of time and resources [3].

In this paper, we present a new implementation of a block-level CDP driver for the Linux operating system. Our implementation is based on the concept of the TRAP architecture [17], which is capable of recovering data to any point-in-time in case of various failures. Two important design issues have been studied in depth: additional storage space usage and recovery time. We use a simple mathematical model to guide our design to optimize space usage and recovery time. Furthermore, in order to minimize possible failures caused by broken chains of parities, we provide an optimal way of organizing the parity chain, with periodic snapshots inserted into the chain. Based on our implementation, we have carried out extensive experiments to test the robustness of our program and to evaluate its performance. Standard benchmarks such as TPC-C, IOMeter, and PostMark are used in our experiments. Our measurement results show that our implementation is optimal in both space usage and recovery time.

The paper is organized as follows. The next section gives a brief overview of the TRAP architecture for the purpose of completeness. Section 3 presents the detailed design of our implementation together with the mathematical model used to guide it. Section 4 presents the detailed implementation as a Linux device driver, followed by our experimental settings in Section 5. Section 6 gives numerical results and discussions. We conclude our paper in Section 7.

2. Brief Overview of TRAP Architecture

As presented in [17], TRAP keeps a log of parities resulting from each write to a block. Figure 1 shows the basic design of TRAP. Suppose that at time T(k) the host writes into a data block with logical block address A_i that belongs to a data stripe (A_1, A_2, ..., A_i, ..., A_n). The RAID controller performs the following operation to update its parity disk:

    P_T(k) = A_i(k) ⊕ A_i(k-1) ⊕ P_T(k-1),    (1)

where P_T(k) is the new parity of the corresponding stripe, A_i(k) is the new data of block A_i, A_i(k-1) is the old data of block A_i, and P_T(k-1) is the old parity of the stripe. Leveraging this computation, TRAP appends the first part of the above equation, i.e. P'_T(k) = A_i(k) ⊕ A_i(k-1), to the parity log stored on the TRAP disk after a simple encoding box, as shown in Figure 1.

Fig. 1 Data logging method of TRAP design.

Now consider the parity log corresponding to a data block A_i after a series of write operations. The log contains (P'_T(k), P'_T(k-1), ..., P'_T(2), P'_T(1)) with time stamps T(k), T(k-1), ..., T(2), and T(1) associated with the parities. Suppose that an outage occurred at time t_1, and we would like to recover data to the image as it was at time t_0 (t_0 < t_1). To do such a recovery, for each data block A_i we first find the largest T(r) in the corresponding parity log such that T(r) ≤ t_0.
We then perform the following computation:

    A_i(r) = P'_T(r) ⊕ P'_T(r-1) ⊕ ... ⊕ P'_T(1) ⊕ A_i(0),    (2)

where A_i(r) denotes the data image of A_i at time T(r) and A_i(0) denotes the data image of A_i at time T(0). Note that P'_T(l) ⊕ A_i(l-1) = A_i(l) ⊕ A_i(l-1) ⊕ A_i(l-1) = A_i(l) for all l = 1, 2, ..., r. Therefore, Equation (2) gives A_i(r) correctly, assuming that the original data image A_i(0) exists.
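To make the logging and recovery computations concrete, the following user-space Python sketch mimics Equations (1) and (2) for a single data block. It is an illustration only, not code from the actual driver: block images are modeled as byte strings, the ParityLog class and its method names are hypothetical, and the encoding/compression step of Figure 1 is omitted.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    def xor_block(a: bytes, b: bytes) -> bytes:
        # Bitwise XOR of two equal-sized block images.
        return bytes(x ^ y for x, y in zip(a, b))

    @dataclass
    class ParityLog:
        # Parity log of one logical block A_i (Equations 1 and 2).
        original: bytes                              # A_i(0), the initial image
        entries: List[Tuple[float, bytes]] = field(default_factory=list)  # (T(k), P'_T(k))
        current: bytes = b""                         # latest image A_i(k)

        def __post_init__(self):
            self.current = self.original

        def write(self, t: float, new_data: bytes) -> bytes:
            # Log P'_T(k) = A_i(k) XOR A_i(k-1) and return it; the RAID
            # controller would XOR this term with the old stripe parity.
            delta = xor_block(new_data, self.current)
            self.entries.append((t, delta))
            self.current = new_data
            return delta

        def recover(self, t0: float) -> bytes:
            # Equation (2): A_i(r) = P'_T(r) ^ ... ^ P'_T(1) ^ A_i(0),
            # where T(r) is the largest logged timestamp not after t0.
            image = self.original
            for t, delta in self.entries:
                if t > t0:
                    break
                image = xor_block(image, delta)
            return image

For example, after log = ParityLog(b"\x00" * 4096), log.write(1.0, v1), and log.write(2.0, v2), the call log.recover(1.5) reproduces v1, exactly as Equation (2) prescribes.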
3. Design and Analysis of ST-CDP

The TRAP architecture discussed in the previous section provides the CDP function by means of the parity chains resulting from block write operations. Since every change is kept in the chain, one can go back to any point-in-time. Traditional snapshot/backup, on the other hand, provides periodic data images of block-level storage. When data recovery is necessary, these two data protection techniques work quite differently. TRAP needs to retrieve the parity chain of each data block and perform the parity computation to recover the data block corresponding to the recovery time point. As the parity chain gets longer, so does the recovery time, because of longer parity computations. A snapshot, on the other hand, just needs to restore the data blocks corresponding to the recovery time point, though the number of possible recovery points is limited by the frequency of the snapshots. These two techniques present us with a trade-off between RPO (Recovery Point Objective) and RTO (Recovery Time Objective) [4]. Our purpose here is to design an optimal approach to data recovery by taking advantage of both techniques.

In our design and implementation, we take a hybrid approach. The idea is to break the parity chain into sub-chains. Between any two subsequent sub-chains, we insert a snapshot data image. The length of a sub-chain is a configurable parameter determined by the system administrator or storage manager. We call our design ST-CDP (Snapshot in TRAP CDP).

Adding snapshots between parity sub-chains has several practical advantages. First, it limits the maximum recovery time. Second, the configurable sub-chain size allows a system administrator to organize the parity chains in different data structures and to optimize space usage and retrieval time. Third, and more importantly, this organization significantly increases the reliability and recoverability of the TRAP architecture. A parity chain may become completely useless if there is a single bit error anywhere in the chain, and the longer the parity chain, the higher the probability of chain failure. Breaking the parity chains into sub-chains and adding snapshots in between reduces the probability of such failures and increases data recoverability.

Figure 2 shows the new parity logging structure. As shown in the figure, we insert snapshots into the parity chain; as a result, sub-chains are formed that are separated by periodic snapshots. At recovery time, only the one sub-chain that contains the recovery time point is needed. The recovery time for each data block is limited by half of the sub-chain length, because the parity computation can be done both ways, redo and undo, as shown in [17]. To minimize chain retrieval time, one can also organize all sub-chains in an efficient data structure, which is out of the scope of this paper.

From the above discussion, it is clear that the length of each sub-chain is an important parameter to determine. Let d be the length of each sub-chain in terms of the number of parity blocks in the sub-chain. We would like to determine what value of d one should choose for an optimal implementation of TRAP on the Linux operating system. In order to provide quantitative guidance on how to choose d, let us define the following symbols:

Symbol   | Definition
d        | Sub-chain (parity chain) length: number of parity blocks in each sub-chain
IO_rate  | I/O throughput of the disk storage
S_blk    | Data block size
S_log    | Size of a compressed parity block
C        | Compression ratio: C = S_blk / S_log
T_dec    | Decoding time
T_xor    | XOR operation time
T_span   | RPO: time span between the current time and the recovery time point
W_avg    | Average number of write operations per time unit

Table 1. Definition of symbols used in analysis.

Fig. 2 Data logging method of ST-CDP design: snapshot images of A_i are inserted into the parity log at intervals of d parity blocks.
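As an illustration of this logging policy, the sketch below extends the earlier example with sub-chains of length d: after every d parity entries a full image of the block is appended, and recovery rolls forward from the last snapshot taken at or before the recovery point. Again, this is a simplified user-space model with hypothetical names, not the driver code, and it only performs redo recovery, whereas the actual design can also undo backward from the following snapshot.

    from typing import List, Tuple

    def xor_block(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    class STCDPLog:
        # Per-block ST-CDP chain: parity deltas grouped into sub-chains of
        # length d, separated by full snapshots of the block image.

        def __init__(self, original: bytes, d: int):
            self.d = d
            self.current = original
            # Entries are ('snap', t, image) or ('parity', t, delta), in time order.
            self.chain: List[Tuple[str, float, bytes]] = [("snap", 0.0, original)]
            self.parities_since_snap = 0

        def write(self, t: float, new_data: bytes) -> None:
            self.chain.append(("parity", t, xor_block(new_data, self.current)))
            self.current = new_data
            self.parities_since_snap += 1
            if self.parities_since_snap >= self.d:
                # Terminate the current sub-chain with a snapshot.
                self.chain.append(("snap", t, new_data))
                self.parities_since_snap = 0

        def recover(self, t0: float) -> bytes:
            # Redo from the last snapshot at or before t0; at most d XORs.
            image = None
            for kind, t, payload in self.chain:
                if t > t0:
                    break
                image = payload if kind == "snap" else xor_block(image, payload)
            return image

Because the chain always starts from a snapshot and a new snapshot appears after every d parities, recovery never applies more than d parity blocks; with both redo and undo directions available, the expected number drops to roughly d/4, which is the quantity E(d) used in the analysis below.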
If we do not break up parity chains, the recovery time of each data block is given by

    (T_dec + T_xor + S_log / IO_rate) * W_avg * T_span.    (3)

Now consider our ST-CDP design with sub-chains of d parity blocks. As mentioned previously, the recovery time of each data block is limited by half of the sub-chain length d, because the parity computation can be done both ways, redo and undo. If we assume that the recovery time point is uniformly distributed among the d points within a sub-chain whenever the RPO falls into that chain, the expected number of parity blocks needed for the XOR computations of the data recovery is given by

    E(d) = (d + 1) / 4.    (4)

The recovery time of each data block in the case of ST-CDP is then given by

    T(d) = (T_dec + T_xor + S_log / IO_rate) * E(d) + S_blk / IO_rate.    (5)
The first term of the above equation gives the parity computation time and the second term gives the data copy time. It is interesting to note that this recovery time is independent of the RPO but depends on the value of d: the recovery time increases as d increases.

Let us now consider the cost of the ST-CDP program. The major cost is the additional storage space needed to store the parity logs and snapshot data while running ST-CDP. We examine the average storage increase per time unit while running ST-CDP, which is given by

    S(d) = S_log * W_avg + S_blk * W_avg / d,

where the first term gives the space for the parity log and the second term gives the snapshot space. Since the number of snapshots inserted is inversely proportional to d, the space usage of snapshots is also inversely proportional to d. The other important cost is the time it takes to recover data. Ideally, we would like to use as little storage space as possible and recover data as quickly as possible, so we use these two factors to determine how good a data protection technology is. We therefore take the product of the two cost factors as the compound cost of ST-CDP:

    F(d) = T(d) * S(d)
         = [(T_dec + T_xor + S_log / IO_rate) * E(d) + S_blk / IO_rate] * (S_log * W_avg + S_blk * W_avg / d)
         = (c_1 * E(d) + c_2) * (c_3 + c_4 / d)
         = (c_1 * d / 4 + c_1 / 4 + c_2) * (c_3 + c_4 / d),    (6)

where c_1 = T_dec + T_xor + S_log / IO_rate, c_2 = S_blk / IO_rate, c_3 = S_log * W_avg, and c_4 = S_blk * W_avg are constants independent of d. Now let us take the derivative of F(d) and set it to 0. We have

    F'(d) = 0  =>  d_0 = sqrt( (c_1 + 4 c_2) * c_4 / (c_1 * c_3) ).    (7)

Since the second derivative is F''(d) = (c_1 + 4 c_2) * c_4 * d^(-3) / 2, so that F''(d_0) = c_1 * c_3 / (2 d_0) > 0, F(d) attains its minimum at d = d_0. We choose d to be the integer closest to d_0 as our optimal sub-chain size.
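The closed-form optimum of Equation (7) is straightforward to evaluate once the model constants are measured. The sketch below is a minimal illustration with placeholder parameter values (they are not measurements from this paper); the function simply computes d_0 and rounds it to the nearest integer.

    import math

    def optimal_subchain_length(T_dec: float, T_xor: float, IO_rate: float,
                                S_blk: float, S_log: float, W_avg: float) -> int:
        # Constants of Equation (6).
        c1 = T_dec + T_xor + S_log / IO_rate   # per-parity recovery cost (s)
        c2 = S_blk / IO_rate                   # data copy cost (s)
        c3 = S_log * W_avg                     # parity-log space per time unit
        c4 = S_blk * W_avg                     # snapshot space per time unit
        # Equation (7): d_0 minimizes F(d) = T(d) * S(d).
        d0 = math.sqrt((c1 + 4 * c2) * c4 / (c1 * c3))
        return max(1, round(d0))

    # Placeholder example: 4 KB blocks, 2:1 parity compression, 100 MB/s
    # disk throughput, microsecond-scale decode/XOR times, 500 writes/s.
    d = optimal_subchain_length(T_dec=1e-5, T_xor=1e-6, IO_rate=100e6,
                                S_blk=4096, S_log=2048, W_avg=500)
    print("optimal sub-chain length d =", d)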
4. Driver Implementation

Based on the design and analysis presented in the previous sections, we have implemented ST-CDP in the Linux kernel. Our implementation is developed as an added kernel module on top of MD RAID5. ST-CDP was developed as a standalone block device driver independent of higher-level file systems; as a result, it can support a variety of applications, including different file systems and database applications.

ST-CDP has two major functional modules, the CDP logging module and the recovery module. The CDP logging module works at run time to keep a journal of parities and snapshots. It passes all read operations through unchanged and intercepts all write operations. There are two parallel threads, one handling normal write operations and the other performing the CDP functions. The CDP functional module has two major parts. The first part does the parity computation and logging and keeps track of the metadata for the parity logs. The second part carries out snapshot operations when triggered: a snapshot operation starts whenever the number of parities collected for a block reaches the value d defined in the previous section. The underlying storage is partitioned into two volumes, the source volume and the CDP volume. The source volume stores the production data, while the CDP volume stores the parity logs and the snapshot data.

The recovery module of ST-CDP is a program that runs offline. When data recovery needs to be done, the recovery module starts by retrieving the parity logs and snapshot data. Based on the designated RPO, it searches the parity chains of each data block to find the sub-chain that contains the desired RPO. Once such a sub-chain is found, the recovery program searches for the parity block whose timestamp matches the RPO most closely. XOR operations are then performed to recover the correct data block. After all changed data blocks are recovered, the data is written to the source volume and the recovery process is done. It is also possible that the RPO matches one of the snapshots in the CDP volume; in this case, no parity computation is necessary, and the recovery program simply copies the snapshot data to the source volume.
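The per-block lookup performed by the recovery module can be pictured with the chain layout of the earlier ST-CDP sketch. The helper below is hypothetical, not the driver's code: it locates the last logged state at or before the designated RPO, then recovers it from whichever enclosing snapshot is cheaper, redoing forward or undoing backward as described in Section 3.

    from typing import List, Tuple

    def xor_block(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def recover_block(chain: List[Tuple[str, float, bytes]], t0: float) -> bytes:
        # chain entries are ('snap', t, image) or ('parity', t, delta) in time
        # order, starting with a snapshot (see the STCDPLog sketch above).
        # Target: index of the last entry at or before the RPO t0.
        target = max(i for i, (_, t, _) in enumerate(chain) if t <= t0)

        # Enclosing snapshots of the sub-chain that contains the target.
        prev_snap = max(i for i in range(target + 1) if chain[i][0] == "snap")
        later_snaps = [i for i in range(target + 1, len(chain)) if chain[i][0] == "snap"]

        redo_cost = target - prev_snap                    # parities to re-apply
        undo_cost = later_snaps[0] - target - 1 if later_snaps else redo_cost + 1

        if undo_cost < redo_cost:
            # Undo: start from the next snapshot and strip parities backward,
            # using A_i(l-1) = P'_T(l) XOR A_i(l).
            image = chain[later_snaps[0]][2]
            for i in range(later_snaps[0] - 1, target, -1):
                image = xor_block(image, chain[i][2])
            return image

        # Redo: start from the preceding snapshot and re-apply parities,
        # using A_i(l) = P'_T(l) XOR A_i(l-1).
        image = chain[prev_snap][2]
        for i in range(prev_snap + 1, target + 1):
            image = xor_block(image, chain[i][2])
        return image

In the driver itself the corresponding search would operate on the parity-log metadata stored in the CDP volume rather than on an in-memory list.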
5. Experimental Settings

For the purpose of testing our ST-CDP implementation and evaluating its performance, we have carried out measurement experiments. Figure 3 shows the high-level block diagram of our experimental setting. To allow multiple clients and multiple storage servers in a networked environment, we implemented the lower-level storage device using the iSCSI protocol, as shown in Figure 3. Our ST-CDP module runs on the storage server at the block device level of the Linux operating system. The client machine has the file system, database, and application benchmarks installed. The details of the hardware and software environment used in our experiments are shown in Table 2 below.

Fig. 3 System architecture of ST-CDP implementation.

The right workloads are important for performance studies [18]. In order to have an accurate evaluation, we use real-world I/O workloads and standard benchmarks. The first benchmark, TPC-C, is a well-known benchmark used to model the operational end of businesses where real-time transactions are processed [19]. TPC-C simulates the execution of a set of distributed, on-line transactions (OLTP) for a period of two to eight hours. It is set in the context of a wholesale supplier operating on a number of warehouses and their associated sales districts. TPC-C incorporates five types of transactions of different complexity for on-line and deferred execution on a database system. These transactions perform the basic operations on databases, such as inserts, deletes, and updates. From the data storage point of view, these transactions generate reads and writes that change data blocks on disks. For the Postgres database, we use the implementation from TPCC-UVA [20]; 8 warehouses with 25 users are built on the Postgres database. Details of the TPC-C workload specification can be found in [19].

Besides the benchmarks running on databases, we have also run two file system benchmarks, IOMeter and PostMark. IOMeter is a flexible and configurable benchmark tool that is widely used in industry and the research community [21]. It can be used to measure the performance of a mounted file system or a block device. We run IOMeter on NTFS with a 4 KB block size for two types of workloads: 100% random writes, and 30% writes with 70% reads. PostMark is another widely used file system benchmark tool, written by Network Appliance, Inc. [22]. It measures performance in terms of transaction rates in an ephemeral small-file environment by creating a large pool of continually changing files. Once the pool has been created, a specified number of transactions occur. Each transaction consists of a pair of smaller transactions, i.e., create file/delete file and read file/append file. The type of each transaction and the files it affects are chosen randomly, and the read and write block sizes can be tuned. In our experiments, we set the PostMark workload to include 10,000 files and to perform 20,000 transactions, with read and write buffer sizes set to 4 KB.

             | Client Nodes (2)                 | Storage Server
CPU          | Intel Xeon 2.8 GHz               | Intel Core 2 E2140, 1.6 GHz
RAM          | DDR2 533, 2 GB                   | DDR2 333, 1 GB
Disk         | SATA 300 GB                      | SATA 300 GB
OS           | Red Hat Linux 9.0 (Kernel 2.6.9) | Gentoo Linux (Kernel )
Switch       | Cisco 3750-E Gb
NIC          | 2 * PCI 1 Gb/s
Benchmarks   | TPC-C on Postgres database, IOMeter, PostMark

Table 2. List of testing environments.

6. Numerical Results and Discussions

In this section, we present our measurement results in terms of space usage, recovery time, and run-time performance impact of ST-CDP. We compare the performance results of three data protection techniques: native TRAP with no sub-chains, ST-CDP, and pure periodic snapshots. The snapshot we evaluate here is the redirect-on-write snapshot, ROW for short, as opposed to the copy-on-write snapshot [23]. In our experiments, we set the values of d to and 94, corresponding to block sizes of 4 KB, 8 KB, 16 KB, 32 KB, and 64 KB, respectively; these values are selected based on the analysis presented in the previous section. Our first experiment measures the additional space usage of the three data protection technologies. Figure 4 shows the measured results.
We plot the space usage of the three data protection technologies for block sizes ranging from 4 KB to 64 KB. It can be seen from this figure that the snapshot takes the most space because it keeps the original data blocks of all changed data. Native TRAP takes the least amount of space because of the locality property of write operations, as evidenced in [17]. The space usage of ST-CDP lies between the other two because it stores both the parity logs and a small number of snapshots between sub-chains. Because we choose the optimal value of d for each block size, the space overhead of ST-CDP is closer to that of TRAP than to that of the ROW snapshot.
Our observation is that ST-CDP provides continuous data protection with substantially less storage overhead than continuous real-time snapshots.

Fig. 4 Storage space comparison (log data size in MB vs. block size in KB) for TPC-C on the Postgres database, for ROW, TRAP, and ST-CDP.

Since ST-CDP carries out parity computation and snapshot operations at run time, an immediate question is how it impacts application performance. Our next experiment evaluates the performance impact of ST-CDP on applications. For this purpose, we run IOMeter to measure the I/O performance with the ST-CDP module enabled. Figure 5 shows our measured results in terms of average I/O response time as a function of block size. We plot four performance curves corresponding to snapshots (ROW), TRAP, ST-CDP, and RAID5 alone with no data protection program running. The performance of RAID5 is used as a reference against which to observe the negative impact of the three data protection technologies. We notice that the snapshot has the largest performance impact and TRAP the smallest; ST-CDP is in between, but close to TRAP. For a block size of 4 KB, ST-CDP's performance is about 8.3% lower than that of RAID5; for a block size of 32 KB, the difference is about 5.4%. The maximum performance drops of TRAP and snapshot compared to RAID5 are 7.1% (4 KB) and % (32 KB), respectively, whereas the maximum performance drop of ST-CDP is about 8.3%.

Fig. 5 Average I/O response time (ms) vs. block size (KB) for 70% reads and 30% writes of the IOMeter benchmark, for ROW, TRAP, ST-CDP, and RAID-5.

Figure 6 shows the results of IOMeter with 100% random writes. Results similar to those of Figure 5 are observed. The maximum performance drops of the three data protection techniques compared to RAID5 are 9.2% (8 KB), 6.3% (4 KB), and 29.6% (64 KB) for ST-CDP, TRAP, and snapshots, respectively.

Fig. 6 Average I/O response time (ms) vs. block size (KB) for 100% random writes of the IOMeter benchmark, for ROW, TRAP, ST-CDP, and RAID-5.

Fig. 7 I/O throughput (MB/s) vs. block size (KB) for the PostMark benchmark: ST-CDP (read), RAID-5 (read), ST-CDP (write), RAID-5 (write).

The PostMark results are shown in Figure 7. In this figure, we compare the performance of ST-CDP with that of RAID5 to see how much performance degradation is caused by the ST-CDP overhead. We notice that the performance of both RAID5 and ST-CDP increases as the block size increases.
The performance differences between RAID5 and ST-CDP are very small, and these differences do not change with block size. This observed result can be attributed to the fact that the parity computation of ST-CDP is part of the RAID5 parity computation. Therefore, the overhead of ST-CDP is manageable.

Our next experiment measures the recovery time, a very important performance parameter for data protection technologies. We considered 5 GB of data from normal I/O operations and tried to recover the data to different RPOs. We measured the recovery times of native TRAP and ST-CDP and compared them. Figure 8 shows the measured recovery time as a function of RPO. As can be seen from this figure, TRAP's recovery time increases as the RPO increases because of the XOR computation over long parity chains. The recovery time of ST-CDP, on the other hand, stays flat as the RPO changes. For example, for a 4 KB block size, recovering data to half an hour ago takes about 1,246 seconds with native TRAP, while recovering data to 8 hours ago takes about 4,910 seconds, 3.9 times longer. For a block size of 64 KB, the recovery time of TRAP becomes smaller: it takes about 723 s and 2,061 s to recover data to half an hour ago and 8 hours ago, respectively. The ST-CDP module, on the other hand, can recover data much faster irrespective of the RPO. As shown in Figure 8, its recovery time varies only from 212 seconds to 253 seconds, very little change.

Fig. 8 Recovery time (s) vs. time span (hours) comparison between ST-CDP and TRAP for 4 KB and 64 KB block sizes.

7. Conclusions

In this paper, we have presented the design, analysis, and Linux implementation of a continuous data protection technique referred to as ST-CDP. It is based on the TRAP [17] technology that keeps logs of parities of changed data blocks, interspersed with snapshot data. The implementation is done at the block device level as an independent device driver that can be added to the MD software RAID device. Extensive experiments have been carried out to show that the implementation is fairly robust. Standard benchmarks are used to evaluate the performance and cost of the implementation. Numerical results show that the overhead is manageable. The major advantage of ST-CDP is a low RTO that is independent of the RPO. Our future work includes investigating the reliability and recoverability of TRAP by adding error-correcting codes to the parity chains and implementing it in hardware RAID controllers.

Acknowledgments

The authors would like to thank Jing Yang and Yu Cheng of the Data Storage Division, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, for their help in system implementation and testing. Their contribution to this work is greatly appreciated. This research is sponsored in part by the National Science Foundation of China under grant numbers NSFC , , and , Chinese 973 grant number 2004CB , and the US National Science Foundation under grants CCR and CCF . Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References

[1] D. M. Smith, "The cost of lost data," Journal of Contemporary Business Practice, Vol. 6, No. 3.
[2] D. Patterson, A. Brown, et al., "Recovery oriented computing (ROC): Motivation, definition, techniques, and case studies," Computer Science Technical Report UCB/CSD, U.C. Berkeley, March 15.
[3] J. Damoulakis, "Continuous protection," Storage, Vol. 3, No. 4, June.
[4] K. Keeton, C. Santos, D. Beyer, J. Chase, and J. Wilkes, "Designing for disasters," in Proc. of the 3rd Conference on File and Storage Technologies, San Francisco, CA.
[5] Linux Kernel Drivers. Available:
[6] D. K. Gifford, R. M. Needham, and M. D. Schroeder, "Cedar file system," Communications of the ACM, Vol. 31, No. 3, March.
[7] D. G. Korn and E. Krell, "The 3-D file system," in Proc. of the USENIX Summer Conference, Baltimore, MD, Summer 1989.
[8] B. Berliner and J. Polk, Concurrent Versions System (CVS), 2001.
[9] L. Moses, "An introductory guide to TOPS-20," Tech. Report TM-82-22, USC/Information Sciences Institute.
[10] K. McCoy, VMS File System Internals, Digital Press.
[11] D. S. Santry, M. J. Feeley, N. C. Hutchinson, A. C. Veitch, R. W. Carton, and J. Ofir, "Deciding when to forget in the Elephant file system," in Proc. of the 17th ACM Symposium on Operating System Principles, Charleston, SC, Dec. 1999.
[12] C. A. N. Soules, G. R. Goodson, J. D. Strunk, and G. R. Ganger, "Metadata efficiency in versioning file systems," in Proc. of the 2nd USENIX Conference on File and Storage Technologies, San Francisco, CA, March 2003.
[13] S. Rhea, P. Eaton, D. Geels, H. Weatherspoon, B. Zhao, and J. Kubiatowicz, "Pond: The OceanStore prototype," in Proc. of the 2nd USENIX Conference on File and Storage Technologies (FAST), San Francisco, CA, March 2003.
[14] A. Muthitacharoen, B. Chen, and D. Mazières, "A low-bandwidth network file system," in Proc. of the 18th ACM Symposium on Operating Systems Principles, Alberta, Canada, October.
[15] Z. Peterson and R. C. Burns, "Ext3cow: A time-shifting file system for regulatory compliance," ACM Transactions on Storage, Vol. 1, No. 2.
[16] K. Muniswamy-Reddy, C. P. Wright, A. Himmer, and E. Zadok, "A versatile and user-oriented versioning file system," in Proc. of the 3rd USENIX Conference on File and Storage Technologies, San Francisco, CA.
[17] Q. Yang, W. Xiao, and J. Ren, "TRAP-Array: A disk array architecture providing timely recovery to any point-in-time," in Proc. of the 33rd Annual International Symposium on Computer Architecture, June 2006.
[18] Y. Hu and Q. Yang, "DCD---Disk Caching Disk: A new approach for boosting I/O performance," in Proc. of the 23rd Annual International Symposium on Computer Architecture (ISCA), Philadelphia, PA, May.
[19] Transaction Processing Performance Council, TPC Benchmark C Standard Specification, 2005.
[20] J. Piernas, T. Cortes, and J. M. García, "TPCC-UVA: A free, open-source implementation of the TPC-C benchmark," 2005, uva.html.
[21] Intel, IOMeter: Performance Analysis Tool.
[22] J. Katcher, "PostMark: A new file system benchmark," Network Appliance, Tech. Rep. 3022.
[23] W. Xiao, Y. Liu, Q. Yang, J. Ren, and C. Xie, "Implementation and performance evaluation of two snapshot methods on iSCSI target stores," in Proc. of the IEEE/NASA Conference on Mass Storage Systems and Technologies, May.