Post-Intrusion Recovery Using Data Dependency Approach
Proceedings of the 2001 IEEE Workshop on Information Assurance and Security, United States Military Academy, West Point, NY, 5-6 June 2001

Post-Intrusion Recovery Using Data Dependency Approach

Sani Tripathy and Brajendra Panda, Member, IEEE
Computer Science Department, University of North Dakota, Grand Forks, ND {sani,

Abstract-- Recovery of lost or damaged data in a post-intrusion detection scenario is a difficult task, since database management systems are not designed to deal with malicious committed transactions. The few existing methods developed for this purpose rely heavily on logs and require that the log never be purged. This causes the log to grow tremendously and, since scanning the huge log takes an enormous amount of time, recovery becomes a complex and prolonged process. In this research, we have used a data dependency approach to divide the log into multiple segments, each containing only related operations. During damage assessment and recovery, we identify and skip the parts of the log that contain unaffected operations, which accelerates the task. Through simulation we have validated the performance of our method.

Index Terms-- Information Warfare, Malicious Transaction, Data Dependency, Log Segmentation.

I. INTRODUCTION

Any computer system that is connected to a network is vulnerable to information attacks. In spite of all preventive measures, savvy intruders manage to sneak through and damage sensitive data. The initial damage later spreads to other parts of the database when a legitimate transaction updates valid data after reading damaged data. Damage may also spread for reasons such as system integrity checks, as described in [1] and [6]. Intrusion detection helps in identifying an attack, and a significant amount of work has been performed in this area; a few approaches are described in [3], [5], and [10]. However, none of the existing intrusion detection methods guarantees that an attack will be detected immediately.
Therefore, a major part of the database may have been affected by the time an attack is detected and the attacking transaction is identified. The situation worsens as time passes, and eventually it may be difficult, although not impossible, to recover the system. Hence, immediate and efficient damage assessment, together with fast and accurate recovery, is important. In this research, we have developed a model that uses a data dependency approach to divide the log into multiple segments. During damage assessment, only a few of these segments are accessed instead of the entire log, which expedites the recovery process. We have developed a simulation model to test the performance of our approach; the results show dramatic improvements over traditional methods. In the next section, we discuss related work. Our proposed model is described in Section III and the clustering algorithm is presented in Section IV. Section V presents the performance of our model obtained through simulation, and Section VI concludes the paper.

II. RELATED WORK

Traditional recovery methods [2], [4], [7] were designed to perform recovery after media or system failures, but they lack the efficacy required to recover from the effects of malicious committed transactions. In such situations, after the detection of an attack, the effects of all transactions that read directly or indirectly from the malicious transaction, along with those of the malicious transaction itself, must be undone. Then the affected transactions must be re-executed to restore the correct state of the database. Since an attack may be detected days or even months after its occurrence, the log must never be purged; otherwise, information about the attacker and other valid but affected transactions will not be available. This requirement makes the log grow massively, and searching the log during damage assessment and recovery incurs a long delay, which is unacceptable in many real-time applications. In [8], Jajodia et al.
have discussed recovery issues for defensive information warfare. Liu et al. [9] have presented algorithms that rewrite the transaction history by moving the attacking transaction and all affected transactions beyond the non-affected transactions. However, this process requires significant page I/O, since all transactions after and including the malicious transaction have to be read. To save log access time, researchers have proposed clustering the log using a transaction dependency approach and have shown that during damage assessment only one of the clusters needs to be accessed [13]. Instead of traditional transaction dependency, the researchers in [11] and [12] have used data dependency for recovery from malicious transactions. Rather than undoing all operations of affected transactions and then re-executing them, their approach undoes and redoes only the affected operations of those transactions. Nevertheless, they require that the log be accessed starting from the malicious

[Footnote from the title page: This work was supported in part by US AFOSR grant F.]
transaction to the end in order to perform damage assessment and recovery. In this research, we have developed an extended data dependency model and an algorithm to divide the log into several clusters. Only operations on data items that are inter-dependent are kept in the same cluster. During damage assessment, only the relevant clusters need to be scanned. The damage assessment and recovery algorithms presented in [11] can use the clusters created by our method directly, and other existing algorithms can easily be modified for this purpose.

III. THE MODEL

This work is based on the assumption that the attacking transaction has already been detected by intrusion detection techniques. So, given an attacking transaction, our goal is to determine the affected transactions quickly, stop new and executing transactions from accessing affected data, and then carry out the recovery process. We further assume that the scheduler produces a strict serializable history and that the log is not modifiable by users (so the log cannot be damaged). As transactions execute, the log grows with time and is never purged. The log is stored in secondary storage, so every access to it requires a disk I/O. During recovery, we need to access the log to restore the database. To avoid unnecessary retrieval of the massive log, which results in a tremendous number of page I/Os, the clustering approach is followed. Next, we cite two definitions that were initially presented in [11], since they form the basis of our model.

Definition 1: A write operation w_i[x] of a transaction T_i is dependent on a read operation r_i[y] of T_i if w_i[x] is computed using the value obtained from r_i[y].

Definition 2: A data value v_1 is dependent on a data value v_2 if the write operation that wrote v_1 was dependent on a read operation on v_2. Note that v_1 and v_2 may be two different versions of the same data item.
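To illustrate Definitions 1 and 2, damage spreading through a committed history can be traced as follows. This sketch is ours, not the paper's, and it uses the simplifying assumption that every write of a transaction depends on all of that transaction's earlier reads:

```python
def spread_damage(history, attacked):
    """Trace damage through a serial history of (tid, 'r'|'w', item) records.
    Per Definitions 1 and 2, a value written after reading a damaged value
    is itself damaged. Writes by the attacking transaction are the seed.
    Simplifying assumption: every write of a transaction depends on all of
    that transaction's earlier reads."""
    damaged = set()
    tainted = set()               # transactions that have read a damaged item
    for tid, kind, item in history:
        if kind == 'r':
            if item in damaged:
                tainted.add(tid)
        else:  # write
            if tid == attacked or tid in tainted:
                damaged.add(item)
    return damaged

# T1 (the attacker) writes x; T2 reads x and writes y; T3 reads z and writes u.
h = [('T1', 'w', 'x'),
     ('T2', 'r', 'x'), ('T2', 'w', 'y'),
     ('T3', 'r', 'z'), ('T3', 'w', 'u')]
print(spread_damage(h, 'T1'))    # damaged items: x and y; T3 is unaffected
```

This is exactly the transitive spread the paper's clusters are designed to bound: T3's operations live in a different cluster and never need to be scanned.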
In our model, operations on data items that, in accordance with Definition 2, are directly or indirectly dependent on each other are kept in the same cluster. Within a transaction, some operations may be independent of each other; therefore, not all operations of an affected transaction are affected in case of an attack. Hence, during recovery we need not re-execute all operations of a transaction; rather, we re-execute only the affected operations of that transaction. Keeping this philosophy in mind, while clustering we store independent operations in different clusters. Another perspective we have contemplated is determining the largest possible subset of unaffected data items and then making them available as soon as possible. This reduces the risk of denial-of-service types of attacks. The following definitions help us achieve this goal by determining various possible dependency boundaries between data items.

Definition 3: For any two data items x and y, if the value of x may be used in calculating the value of y, then y can be influenced by x. This relation is denoted by x → y. If the relation is bidirectional, i.e., either data item can influence the other, we denote the relationship as x ↔ y. We assume that any data item can be updated using its previous value; therefore, the relationship is reflexive. But it is neither symmetric nor transitive.

Definition 4: A probability graph is a directed graph representing possible relationships among data items in a database. In such a graph, the data items are represented by nodes, and an edge between two nodes represents the can-influence relationship between them. An edge can be unidirectional or bidirectional.

Definition 5: A clique C comprises related data items such that, 1) for each x ∈ C, there exists a data item y ∈ C such that either x → y, or y → x, or x ↔ y, and 2) the probability graph formed by taking all the data items of C is connected.
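In graph terms, the partition of data items into cliques is the set of connected components of the probability graph when edge direction is ignored: an item can only ever be affected through a chain of can-influence edges. A minimal sketch of computing this partition (ours, not the paper's code):

```python
from collections import defaultdict

def cliques(edges, items):
    """Partition data items into the paper's 'cliques': the connected
    components of the probability graph, ignoring edge direction.
    edges: list of (x, y) pairs meaning x -> y."""
    adj = defaultdict(set)
    for x, y in edges:
        adj[x].add(y)
        adj[y].add(x)          # direction is irrelevant for component grouping
    seen, parts = set(), []
    for start in items:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:            # iterative depth-first search
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        parts.append(comp)
    return parts

# x -> y, y <-> z form one clique, while u -> v forms a separate one
print(cliques([('x', 'y'), ('y', 'z'), ('z', 'y'), ('u', 'v')],
              ['x', 'y', 'z', 'u', 'v']))
# two cliques: {x, y, z} and {u, v}
```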
A clique guarantees that data items belonging to different cliques never affect each other. Therefore, during damage assessment, if we identify the clique containing the damage made by the attacker, the items of the other cliques can immediately be made available to users.

Definition 6: A critical link is a connecting node in a probability graph whose removal may divide the graph into multiple disconnected subgraphs.

During damage assessment, if it is determined that a critical link has not been updated, then it is clear that data items on one side of the link have not affected items on the other side(s) of the link. However, it must be noted that determining whether a link has been updated adds overhead; a study is needed to determine for which links this information is worth keeping, otherwise the maintenance cost will exceed the expected benefit.

IV. LOG CLUSTERING

In the data dependency approach, as update operations of transactions are encountered, the affected data items are checked for data dependency and accordingly put in the appropriate cluster. Operations on the items of a clique may spread over a number of clusters, but operations that are related to or dependent on each other are put in the same cluster. When we consider recovery from a malicious attack, we cannot overlook the requirements for recovery from traditional failures; hence, the basic operations for traditional recovery, such as undo and redo, must be taken care of. Since our algorithm stores only committed transactions in clusters, there is no need to carry out undo operations in case of transaction or media failures. Operations of all active transactions are stored in a temporary log, and clusters for those are determined periodically. In traditional recovery, when operations of a transaction have not been flushed even after the commit point, the need for a redo operation arises. In our method, the operations of any non-flushed committed
transaction will be in the temporary log. We add a checkpoint in the temporary log to mark the last committed transaction that has been flushed, so that a cluster can be created up to that point. We do not store commit operations of transactions in any cluster; therefore, the transactions stored in a cluster do not require undo and redo operations. To carry out those operations we may need to refer to the temporary log. The actual dependency of data items is hard to determine from transaction operations alone, and transaction semantics must be considered for this purpose. Since transaction semantics are not available, for simplicity we have made the following assumptions. First, a write operation on any data item is dependent on all preceding read operations appearing after the previous write operation (if any) of the same transaction. Second, if a write operation immediately follows another write operation, the second write operation is independent of any read operation. The following data structures are used in the clustering algorithm.

A. Data Structures

Cluster-data list (CDL): This is used to store the cluster IDs and the corresponding data items, so that related (directly or transitively) data items can be stored together. This table may be consulted to identify the clusters to which the operations of a transaction belong. Each operation in TOL_i is considered and the CDL is checked to find out whether the data item is already in a cluster; if it is, the same cluster ID is assigned, otherwise a new cluster ID is assigned. It is guaranteed that no item belongs to more than one cluster. If a dependency is established between two items belonging to two different clusters, those clusters are combined into one.

Transaction-cluster list (TCL): This structure stores the transaction IDs and the corresponding cluster IDs. A single transaction may be spread over a large number of clusters.
Therefore, to trace a transaction we need to store this information. In case an attacker is detected, this list helps in determining the affected clusters.

Transaction operation list for T_i (TOL_i): This structure stores the operations of transaction T_i along with their corresponding cluster IDs. It is a temporary structure and is discarded once the last operation of the transaction is stored in a cluster.

B. Clustering Algorithm

1. If the data structures TCL and CDL are not found, then
       set TCL = {} and CDL = {}
2. Scan each operation up to the checkpoint of the log. For every operation O_i in the log:
   2.1 Case O_i is Start:
           Set TOL_i = {}
       Case O_i is Read or Write:
           Add O_i to TOL_i
       Case O_i is Abort:
           Delete TOL_i
       Case O_i is Commit:
           Add the transaction ID to the transaction ID entry of TCL;
           Delete all read operations in TOL_i that do not affect any write operation;
           Call assign_cluster;
           Delete TOL_i

Procedure assign_cluster    // identifies the cluster for each operation in TOL_i
1. Scan each operation until the end of TOL_i. For every operation O_i[x] in TOL_i:
   1.1 If O_i[x] is a read:
           If x belongs to a cluster, say C_k:
               Update the cluster entry for O_i[x] in TOL_i to C_k
           Else:
               Assign a new cluster ID and update the cluster entry for O_i[x] in TOL_i accordingly;
               Add the new cluster ID and data item to CDL;
               Add the new cluster ID to TCL
   1.2 If O_i[x] is a write:
           If x is in a cluster C_k (as checked from CDL):
               Update the cluster entry for O_i[x] in TOL_i to C_k
           Else:
               Assign a new cluster ID and update the cluster entry for O_i[x] in TOL_i;
               Add the new cluster ID to CDL and TCL
           If operation O_i[x] has dependent reads:
               Check all such dependent read operations in TOL_i and their cluster IDs;
               For each such different cluster ID:
                   Merge all the clusters and put all their entries in one cluster as given in CDL;
                   Update CDL, TCL, and TOL_i to reflect the changes
2.
Add all operations of TOL_i to their respective clusters.

Next, we present two lemmas regarding transaction operations. The proofs of these lemmas are straightforward; due to space constraints, they are not provided here. Readers interested in the proofs may contact the authors.

Lemma 1: The effect of every operation in a cluster is already in the database.

Lemma 2: No operation is stored in more than one cluster.

In the above algorithm, whenever a read operation on an item is encountered, the CDL is checked to determine the corresponding cluster. When a write operation is encountered, its dependency on the previous read operations is checked; if any dependency is established, all dependent operations are put in the same cluster. Once the clusters are created, pinpointing the damage becomes easier: given the attacking transaction, we can determine the clusters that contain it. Then the operations of the attacking transaction can be undone and the affected operations re-executed using existing algorithms such as the one described in [11].

V. PERFORMANCE ANALYSIS

A simulation model was developed to compare the performance of the clustering approach with the traditional
log-based recovery method. The simulation program was executed in two phases: the first phase created a strict serializable history, and the second performed clustering and damage assessment. The transactions were executed following a strict two-phase locking protocol, and the transaction ID of the attacking transaction was pre-specified in the program. Traditional log-based recovery techniques always scan the operations of all transactions from the point of attack to the end of the log, and recovery is then done accordingly. In this case, the total number of read and write data items of the transactions was calculated from the point of attack to the end of the log. In the case of the clustering model, only those clusters containing the attacking transaction were accessed, and the counts of the total numbers of affected and unaffected operations were taken. The simulation was run with two main variations. The first was to observe the effect of the attacker's position in the log: the attacker ID was varied while the total number of transactions, the total number of data items, and the maximum number of items accessible by a transaction were kept fixed during each run. The second was to observe the effect of the number of executed transactions: the total number of transactions was changed in each run while maintaining the same attacker ID, a fixed number of data items, and a fixed maximum number of data items accessible by a transaction.

A. Calculation Methods

For the calculation of page access time, the following system-dependent parameters were used: space taken by a read operation record in the log (RD) = 40 bytes, space taken by a write operation record in the log (WR) = 60 bytes, page size (PS) = 1024 bytes, and page access time (PT) = 20 milliseconds.
For traditional log-based recovery methods, the recovery process begins by scanning all operations of transactions from the point of attack to the end of the log. To calculate the total page access time (log access time) for this model, all read and write operations of transactions from the attacking transaction to the last transaction in the log were considered. Using the fixed space occupied by each of these operations as listed above, the size of the part of the log that needs to be scanned, and hence the estimated total page access time, was calculated. For the clustering approach, the total number of affected operations was calculated in each cluster from the attacking transaction to the end of that cluster. The damage assessment time was calculated as follows. Let R represent the total number of affected read operations and W the total number of affected write operations. Then the total page access time is computed as:

    Total space for read records = R * RD bytes
    Total space for write records = W * WR bytes
    Total amount of space to be scanned, T = R * RD + W * WR bytes
    Number of pages needed, P = T / PS + 1
    Total access time = P * PT

These steps were used to calculate the total log access time for the traditional log-based methods. A similar calculation was done for the clustering model, based on the total numbers of affected read and write operations: the number of pages to be read for each cluster was calculated, and the results were summed to obtain the total access time.

B. Results

Figure 15.1 shows the access time comparison between our cluster-based approach and the traditional log approach when the attacking transaction is varied. The other, fixed parameters were: 4000 total data items, 500 transactions, and a maximum of 40 data items accessible by a transaction.
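Taken literally, the formulas above can be checked with a short script. The parameter names RD, WR, PS, and PT follow the paper; the function name and the division-plus-one reading of P = T / PS + 1 (integer division, as written) are ours:

```python
# System-dependent parameters from the paper's simulation
RD, WR = 40, 60      # bytes per read / write log record
PS, PT = 1024, 20    # page size in bytes, page access time in ms

def access_time_ms(reads, writes):
    """Total log access time for scanning the given numbers of read and
    write records: T = R*RD + W*WR bytes, P = T // PS + 1 pages,
    time = P * PT milliseconds."""
    t = reads * RD + writes * WR
    pages = t // PS + 1
    return pages * PT

# e.g. scanning 500 read records and 300 write records:
# T = 38000 bytes, P = 38 pages
print(access_time_ms(500, 300))   # -> 760 (ms)
```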
Since the transactions are interleaved, the access time does not depend directly on the position of the attacking transaction; therefore, the access time fluctuates as the attacker's position changes. Figure 15.2 was obtained by running the program with the same parameter values as before, except that the total number of data items was increased. Although the log access time was almost the same as in the previous case, the access times for the cluster-based model decreased drastically: the larger set of data items yielded less dependency among the data items, giving rise to clusters with fewer affected operations. Using the same parameter values as in the first case but changing the total number of transactions to 1000 introduced more dependencies among data items. Therefore, the cluster size also increased, resulting in a higher access time than before; still, our model proved better than the traditional approach. Figure 15.3 displays the obtained result. We ran the experiment again fixing the total number of data items at 8000, the maximum number of data items accessible by a transaction at 30, and the attacker ID at 950. This time, the total number of transactions executed was varied from 1000 to 1400 in increments of 100. The comparison result is illustrated in Figure 15.4.

VI. CONCLUSIONS

In a post-intrusion detection situation, fast and accurate recovery from malicious transactions is crucial for the survival of any information system. Since recovery algorithms require that the database log not be purged, the log grows out of proportion, and searching massive logs during damage assessment and recovery is very inefficient. In this paper, we have developed a model based on a data dependency approach to pre-determine possible paths of information flow in databases. This helps in determining the parts of the database that are not affected by an attack. We have presented an algorithm to cluster the log based on data dependency, thus grouping related operations together.
The operations that affect each other directly or indirectly are stored in the same cluster. This enables us to skip various sections of the log during damage assessment and recovery. Through simulation, we compared the performance of our algorithm
with the traditional method. The results confirm our claim that the proposed method accelerates the recovery process considerably. In situations where the amount of dependency among data items is small, the cluster sizes remain small, resulting in dramatically less access time than the traditional approach. However, even with the larger cluster sizes that result from more dependencies among data items, our model outperforms the traditional log-based method.

VII. ACKNOWLEDGMENT

The authors wish to thank Dr. Robert L. Herklotz and Capt. Alex Kilpatrick for their support, which made this work possible.

VIII. REFERENCES

[1] P. Ammann, S. Jajodia, C. D. McCollum, and B. Blaustein, Surviving information warfare attacks on databases, In Proceedings of the 1997 IEEE Symposium on Security and Privacy, Oakland, CA, May 1997.
[2] P. A. Bernstein, V. Hadzilacos, and N. Goodman, Concurrency Control and Recovery in Database Systems, Addison-Wesley, Reading, MA.
[3] L. J. LaPadula, State of the art in anomaly detection and reaction, Technical Report, Center for Integrated Intelligence Systems, The MITRE Corporation, Bedford, MA.
[4] R. Elmasri and S. B. Navathe, Fundamentals of Database Systems, Second Edition, Addison-Wesley, Menlo Park, CA.
[5] B. Mukherjee, L. Heberlein, and K. Levitt, Network intrusion detection, IEEE Network, Vol. 8, No. 3, May/June 1994.
[6] R. Graubart, L. Schlipper, and C. McCollum, Defending Database Management Systems Against Information Warfare Attacks, Technical Report, The MITRE Corporation.
[7] J. Gray and A. Reuter, Transaction Processing: Concepts and Techniques, Morgan Kaufmann, San Mateo, CA.
[8] S. Jajodia, C. D. McCollum, and P. Ammann, Trusted Recovery, Communications of the ACM, Vol. 42, No. 7, July 1999.
[9] P. Liu, P. Ammann, and S. Jajodia, Rewriting histories: recovering from malicious transactions, Distributed and Parallel Databases, Vol. 8, No. 1, pp. 7-40, January 2000.
[10] T. F.
Lunt, A Survey of Intrusion Detection Techniques, Computers & Security, Vol. 12, No. 4.
[11] B. Panda and J. Giordano, Reconstructing the Database after Electronic Attacks, In Database Security XII: Status and Prospects, S. Jajodia (editor), Kluwer Academic Publishers.
[12] B. Panda and J. Giordano, An Overview of Post Information Warfare Data Recovery, In Proceedings of the 1998 ACM Symposium on Applied Computing, Atlanta, GA, February 1998.
[13] S. Patnaik and B. Panda, Dependency Based Logging for Database Survivability from Hostile Transactions, In Proceedings of the 12th International Conference on Computer Applications in Industry and Engineering, Atlanta, GA, November.
More informationRecovery Protocols For Flash File Systems
Recovery Protocols For Flash File Systems Ravi Tandon and Gautam Barua Indian Institute of Technology Guwahati, Department of Computer Science and Engineering, Guwahati - 781039, Assam, India {r.tandon}@alumni.iitg.ernet.in
More informationSTUDY AND SIMULATION OF A DISTRIBUTED REAL-TIME FAULT-TOLERANCE WEB MONITORING SYSTEM
STUDY AND SIMULATION OF A DISTRIBUTED REAL-TIME FAULT-TOLERANCE WEB MONITORING SYSTEM Albert M. K. Cheng, Shaohong Fang Department of Computer Science University of Houston Houston, TX, 77204, USA http://www.cs.uh.edu
More informationCONCURRENCY CONTROL IN TRUSTED DATABASE MANAGEMENT SYSTEMS: A SURVEY. Bhavani Thuraisingham and Hai-Ping Ko
CONCURRENCY CONTROL IN TRUSTED DATABASE MANAGEMENT SYSTEMS: A SURVEY Bhavani Thuraisingham and Hai-Ping Ko The MITRE Corporation Burlington Road, Bedford, MA 01730 ABSTRACT Recently several algorithms
More informationCourse Content. Transactions and Concurrency Control. Objectives of Lecture 4 Transactions and Concurrency Control
Database Management Systems Fall 2001 CMPUT 391: Transactions & Concurrency Control Dr. Osmar R. Zaïane University of Alberta Chapters 18 and 19 of Textbook Course Content Introduction Database Design
More informationLoad Balancing in Distributed Data Base and Distributed Computing System
Load Balancing in Distributed Data Base and Distributed Computing System Lovely Arya Research Scholar Dravidian University KUPPAM, ANDHRA PRADESH Abstract With a distributed system, data can be located
More informationComprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering
Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations A Dell Technical White Paper Database Solutions Engineering By Sudhansu Sekhar and Raghunatha
More informationQoSIP: A QoS Aware IP Routing Protocol for Multimedia Data
QoSIP: A QoS Aware IP Routing Protocol for Multimedia Data Md. Golam Shagadul Amin Talukder and Al-Mukaddim Khan Pathan* Department of Computer Science and Engineering, Metropolitan University, Sylhet,
More informationEnsuring Security in Cloud with Multi-Level IDS and Log Management System
Ensuring Security in Cloud with Multi-Level IDS and Log Management System 1 Prema Jain, 2 Ashwin Kumar PG Scholar, Mangalore Institute of Technology & Engineering, Moodbidri, Karnataka1, Assistant Professor,
More informationAPPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM
152 APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM A1.1 INTRODUCTION PPATPAN is implemented in a test bed with five Linux system arranged in a multihop topology. The system is implemented
More informationLocality Based Protocol for MultiWriter Replication systems
Locality Based Protocol for MultiWriter Replication systems Lei Gao Department of Computer Science The University of Texas at Austin lgao@cs.utexas.edu One of the challenging problems in building replication
More informationRecovery Theory. Storage Types. Failure Types. Theory of Recovery. Volatile storage main memory, which does not survive crashes.
Storage Types Recovery Theory Volatile storage main memory, which does not survive crashes. Non-volatile storage tape, disk, which survive crashes. Stable storage information in stable storage is "never"
More informationRecovery Guarantees in Mobile Systems
Accepted for the International Workshop on Data Engineering for Wireless and Mobile Access, Seattle, 20 August 1999 Recovery Guarantees in Mobile Systems Cris Pedregal Martin and Krithi Ramamritham Computer
More informationInternational Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 4, July-Aug 2014
RESEARCH ARTICLE An Efficient Priority Based Load Balancing Algorithm for Cloud Environment Harmandeep Singh Brar 1, Vivek Thapar 2 Research Scholar 1, Assistant Professor 2, Department of Computer Science
More informationA Review of Anomaly Detection Techniques in Network Intrusion Detection System
A Review of Anomaly Detection Techniques in Network Intrusion Detection System Dr.D.V.S.S.Subrahmanyam Professor, Dept. of CSE, Sreyas Institute of Engineering & Technology, Hyderabad, India ABSTRACT:In
More informationA Visualization System and Monitoring Tool to Measure Concurrency in MPICH Programs
A Visualization System and Monitoring Tool to Measure Concurrency in MPICH Programs Michael Scherger Department of Computer Science Texas Christian University Email: m.scherger@tcu.edu Zakir Hussain Syed
More informationStudy of Different Types of Attacks on Multicast in Mobile Ad Hoc Networks
Study of Different Types of Attacks on Multicast in Mobile Ad Hoc Networks Hoang Lan Nguyen and Uyen Trang Nguyen Department of Computer Science and Engineering, York University 47 Keele Street, Toronto,
More informationConcurrency Control. Module 6, Lectures 1 and 2
Concurrency Control Module 6, Lectures 1 and 2 The controlling intelligence understands its own nature, and what it does, and whereon it works. -- Marcus Aurelius Antoninus, 121-180 A. D. Database Management
More informationCS 245 Final Exam Winter 2013
CS 245 Final Exam Winter 2013 This exam is open book and notes. You can use a calculator and your laptop to access course notes and videos (but not to communicate with other people). You have 140 minutes
More informationA Game Theoretical Framework on Intrusion Detection in Heterogeneous Networks Lin Chen, Member, IEEE, and Jean Leneutre
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL 4, NO 2, JUNE 2009 165 A Game Theoretical Framework on Intrusion Detection in Heterogeneous Networks Lin Chen, Member, IEEE, and Jean Leneutre
More informationOracle Rdb Performance Management Guide
Oracle Rdb Performance Management Guide Solving the Five Most Common Problems with Rdb Application Performance and Availability White Paper ALI Database Consultants 803-648-5931 www.aliconsultants.com
More informationIJTC.ORG REVIEW OF IDS SYSTEM IN LARGE SCALE ADHOC NETWORKS
REVIEW OF IDS SYSTEM IN LARGE SCALE ADHOC NETWORKS Palamdeep a,, Dr.Parminder Singh b a MTech Student, k.palambrar@gmail.com,chandigarh Engineering College,Landran,Punjab,India b Assistant Professor, singh.parminder06@gmail.com,chandigarh
More informationAn Integrated CyberSecurity Approach for HEP Grids. Workshop Report. http://hpcrd.lbl.gov/hepcybersecurity/
An Integrated CyberSecurity Approach for HEP Grids Workshop Report http://hpcrd.lbl.gov/hepcybersecurity/ 1. Introduction The CMS and ATLAS experiments at the Large Hadron Collider (LHC) being built at
More informationOracle Database 11 g Performance Tuning. Recipes. Sam R. Alapati Darl Kuhn Bill Padfield. Apress*
Oracle Database 11 g Performance Tuning Recipes Sam R. Alapati Darl Kuhn Bill Padfield Apress* Contents About the Authors About the Technical Reviewer Acknowledgments xvi xvii xviii Chapter 1: Optimizing
More informationIntrusion Detection: Game Theory, Stochastic Processes and Data Mining
Intrusion Detection: Game Theory, Stochastic Processes and Data Mining Joseph Spring 7COM1028 Secure Systems Programming 1 Discussion Points Introduction Firewalls Intrusion Detection Schemes Models Stochastic
More informationA Novel Distributed Denial of Service (DDoS) Attacks Discriminating Detection in Flash Crowds
International Journal of Research Studies in Science, Engineering and Technology Volume 1, Issue 9, December 2014, PP 139-143 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) A Novel Distributed Denial
More informationMINIMIZING STORAGE COST IN CLOUD COMPUTING ENVIRONMENT
MINIMIZING STORAGE COST IN CLOUD COMPUTING ENVIRONMENT 1 SARIKA K B, 2 S SUBASREE 1 Department of Computer Science, Nehru College of Engineering and Research Centre, Thrissur, Kerala 2 Professor and Head,
More informationRecovery Modeling in MPLS Networks
Proceedings of the Int. Conf. on Computer and Communication Engineering, ICCCE 06 Vol. I, 9-11 May 2006, Kuala Lumpur, Malaysia Recovery Modeling in MPLS Networks Wajdi Al-Khateeb 1, Sufyan Al-Irhayim
More informationFAWN - a Fast Array of Wimpy Nodes
University of Warsaw January 12, 2011 Outline Introduction 1 Introduction 2 3 4 5 Key issues Introduction Growing CPU vs. I/O gap Contemporary systems must serve millions of users Electricity consumed
More informationDatabase Concurrency Control and Recovery. Simple database model
Database Concurrency Control and Recovery Pessimistic concurrency control Two-phase locking (2PL) and Strict 2PL Timestamp ordering (TSO) and Strict TSO Optimistic concurrency control (OCC) definition
More informationFuzzy Network Profiling for Intrusion Detection
Fuzzy Network Profiling for Intrusion Detection John E. Dickerson (jedicker@iastate.edu) and Julie A. Dickerson (julied@iastate.edu) Electrical and Computer Engineering Department Iowa State University
More informationIntroduction to the Event Analysis and Retention Dilemma
Introduction to the Event Analysis and Retention Dilemma Introduction Companies today are encountering a number of business imperatives that involve storing, managing and analyzing large volumes of event
More information2 Technologies for Security of the 2 Internet
2 Technologies for Security of the 2 Internet 2-1 A Study on Process Model for Internet Risk Analysis NAKAO Koji, MARUYAMA Yuko, OHKOUCHI Kazuya, MATSUMOTO Fumiko, and MORIYAMA Eimatsu Security Incidents
More informationDetection of Distributed Denial of Service Attack with Hadoop on Live Network
Detection of Distributed Denial of Service Attack with Hadoop on Live Network Suchita Korad 1, Shubhada Kadam 2, Prajakta Deore 3, Madhuri Jadhav 4, Prof.Rahul Patil 5 Students, Dept. of Computer, PCCOE,
More informationOn the Ubiquity of Logging in Distributed File Systems
On the Ubiquity of Logging in Distributed File Systems M. Satyanarayanan James J. Kistler Puneet Kumar Hank Mashburn School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Logging is
More informationUdai Shankar 2 Deptt. of Computer Sc. & Engineering Madan Mohan Malaviya Engineering College, Gorakhpur, India
A Protocol for Concurrency Control in Real-Time Replicated Databases System Ashish Srivastava 1 College, Gorakhpur. India Udai Shankar 2 College, Gorakhpur, India Sanjay Kumar Tiwari 3 College, Gorakhpur,
More informationEnergy Efficient Load Balancing among Heterogeneous Nodes of Wireless Sensor Network
Energy Efficient Load Balancing among Heterogeneous Nodes of Wireless Sensor Network Chandrakant N Bangalore, India nadhachandra@gmail.com Abstract Energy efficient load balancing in a Wireless Sensor
More informationIntrusion Detection System using Log Files and Reinforcement Learning
Intrusion Detection System using Log Files and Reinforcement Learning Bhagyashree Deokar, Ambarish Hazarnis Department of Computer Engineering K. J. Somaiya College of Engineering, Mumbai, India ABSTRACT
More informationCOS 318: Operating Systems
COS 318: Operating Systems File Performance and Reliability Andy Bavier Computer Science Department Princeton University http://www.cs.princeton.edu/courses/archive/fall10/cos318/ Topics File buffer cache
More informationFile-System Implementation
File-System Implementation 11 CHAPTER In this chapter we discuss various methods for storing information on secondary storage. The basic issues are device directory, free space management, and space allocation
More informationPerformance Tuning for the Teradata Database
Performance Tuning for the Teradata Database Matthew W Froemsdorf Teradata Partner Engineering and Technical Consulting - i - Document Changes Rev. Date Section Comment 1.0 2010-10-26 All Initial document
More informationResponse time behavior of distributed voting algorithms for managing replicated data
Information Processing Letters 75 (2000) 247 253 Response time behavior of distributed voting algorithms for managing replicated data Ing-Ray Chen a,, Ding-Chau Wang b, Chih-Ping Chu b a Department of
More informationPostgreSQL Concurrency Issues
PostgreSQL Concurrency Issues 1 PostgreSQL Concurrency Issues Tom Lane Red Hat Database Group Red Hat, Inc. PostgreSQL Concurrency Issues 2 Introduction What I want to tell you about today: How PostgreSQL
More informationVirtualCenter Database Performance for Microsoft SQL Server 2005 VirtualCenter 2.5
Performance Study VirtualCenter Database Performance for Microsoft SQL Server 2005 VirtualCenter 2.5 VMware VirtualCenter uses a database to store metadata on the state of a VMware Infrastructure environment.
More informationAn Evaluation of Network Survivability When Defense Levels Are Discounted by the Accumulated Experience of Attackers
An Evaluation of Network Survivability When Defense Levels Are Discounted by the Accumulated Experience of Attackers Frank Yeong-Sung Lin National Tatiwan University, Taiwan yslin@im.ntu.edu.tw Pei-Yu
More informationDistributed Architectures. Distributed Databases. Distributed Databases. Distributed Databases
Distributed Architectures Distributed Databases Simplest: client-server Distributed databases: two or more database servers connected to a network that can perform transactions independently and together
More informationChapter 13. Chapter Outline. Disk Storage, Basic File Structures, and Hashing
Chapter 13 Disk Storage, Basic File Structures, and Hashing Copyright 2007 Ramez Elmasri and Shamkant B. Navathe Chapter Outline Disk Storage Devices Files of Records Operations on Files Unordered Files
More informationDistributed Dynamic Load Balancing for Iterative-Stencil Applications
Distributed Dynamic Load Balancing for Iterative-Stencil Applications G. Dethier 1, P. Marchot 2 and P.A. de Marneffe 1 1 EECS Department, University of Liege, Belgium 2 Chemical Engineering Department,
More informationKeywords: Dynamic Load Balancing, Process Migration, Load Indices, Threshold Level, Response Time, Process Age.
Volume 3, Issue 10, October 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Load Measurement
More informationOnline backup and versioning in log-structured file systems
Online backup and versioning in log-structured file systems Ravi Tandon* Student Author* Depatment of Computer Science and Engineering Indian Institute of Technology Guwahati Email: r.tandon@alumni.iitg.ernet.in
More informationAlok Gupta. Dmitry Zhdanov
RESEARCH ARTICLE GROWTH AND SUSTAINABILITY OF MANAGED SECURITY SERVICES NETWORKS: AN ECONOMIC PERSPECTIVE Alok Gupta Department of Information and Decision Sciences, Carlson School of Management, University
More informationA Study on Workload Imbalance Issues in Data Intensive Distributed Computing
A Study on Workload Imbalance Issues in Data Intensive Distributed Computing Sven Groot 1, Kazuo Goda 1, and Masaru Kitsuregawa 1 University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan Abstract.
More informationSmooth and Flexible ERP Migration between both Homogeneous and Heterogeneous ERP Systems/ERP Modules
28.8.2008 Smooth and Flexible ERP Migration between both Homogeneous and Heterogeneous ERP Systems/ERP Modules Lars Frank Department of Informatics, Copenhagen Business School, Howitzvej 60, DK-2000 Frederiksberg,
More informationNetwork (Tree) Topology Inference Based on Prüfer Sequence
Network (Tree) Topology Inference Based on Prüfer Sequence C. Vanniarajan and Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology Madras Chennai 600036 vanniarajanc@hcl.in,
More informationGraph Database Proof of Concept Report
Objectivity, Inc. Graph Database Proof of Concept Report Managing The Internet of Things Table of Contents Executive Summary 3 Background 3 Proof of Concept 4 Dataset 4 Process 4 Query Catalog 4 Environment
More informationVerifying Semantic of System Composition for an Aspect-Oriented Approach
2012 International Conference on System Engineering and Modeling (ICSEM 2012) IPCSIT vol. 34 (2012) (2012) IACSIT Press, Singapore Verifying Semantic of System Composition for an Aspect-Oriented Approach
More informationFlow-based detection of RDP brute-force attacks
Flow-based detection of RDP brute-force attacks Martin Vizváry vizvary@ics.muni.cz Institute of Computer Science Masaryk University Brno, Czech Republic Jan Vykopal vykopal@ics.muni.cz Institute of Computer
More informationFile System Design and Implementation
Transactions and Reliability Sarah Diesburg Operating Systems CS 3430 Motivation File systems have lots of metadata: Free blocks, directories, file headers, indirect blocks Metadata is heavily cached for
More informationKey Components of WAN Optimization Controller Functionality
Key Components of WAN Optimization Controller Functionality Introduction and Goals One of the key challenges facing IT organizations relative to application and service delivery is ensuring that the applications
More informationLoad Distribution in Large Scale Network Monitoring Infrastructures
Load Distribution in Large Scale Network Monitoring Infrastructures Josep Sanjuàs-Cuxart, Pere Barlet-Ros, Gianluca Iannaccone, and Josep Solé-Pareta Universitat Politècnica de Catalunya (UPC) {jsanjuas,pbarlet,pareta}@ac.upc.edu
More informationRecovery algorithms are techniques to ensure transaction atomicity and durability despite failures. Two main approaches in recovery process
Database recovery techniques Instructor: Mr Mourad Benchikh Text Books: Database fundamental -Elmesri & Navathe Chap. 21 Database systems the complete book Garcia, Ullman & Widow Chap. 17 Oracle9i Documentation
More informationVulnerabilities of Intrusion Detection Systems in Mobile Ad-hoc Networks - The routing problem
Vulnerabilities of Intrusion Detection Systems in Mobile Ad-hoc Networks - The routing problem Ernesto Jiménez Caballero Helsinki University of Technology erjica@gmail.com Abstract intrusion detection
More informationInternational Journal of Scientific & Engineering Research, Volume 4, Issue 8, August-2013 1300 ISSN 2229-5518
International Journal of Scientific & Engineering Research, Volume 4, Issue 8, August-2013 1300 Efficient Packet Filtering for Stateful Firewall using the Geometric Efficient Matching Algorithm. Shriya.A.
More informationA Protocol Based Packet Sniffer
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 3, March 2015,
More informationA Defense Security Approach against Hacking Using Trusted Graphs
A Defense Security Approach against Hacking Using Trusted Graphs D. N. Rewadkar 1, Harshal A. Kute 2 1 Head, Department of Computer Engineering, RMD Sinhgad School of Engineering, University of Pune, India
More information