Post-Intrusion Recovery Using Data Dependency Approach

Proceedings of the 2001 IEEE Workshop on Information Assurance and Security
United States Military Academy, West Point, NY, 5-6 June, 2001

Post-Intrusion Recovery Using Data Dependency Approach

Sani Tripathy and Brajendra Panda, Member, IEEE
Computer Science Department, University of North Dakota, Grand Forks, ND

Abstract-- Recovery of lost or damaged data in a post-intrusion detection scenario is a difficult task, since database management systems are not designed to deal with malicious committed transactions. The few existing methods developed for this purpose rely heavily on logs and require that the log never be purged. This causes the log to grow tremendously and, since scanning the huge log takes an enormous amount of time, recovery becomes a complex and prolonged process. In this research, we have used a data dependency approach to divide a log into multiple segments, each containing only related operations. During damage assessment and recovery, we identify and skip the parts of the log that contain unaffected operations, which accelerates the task. Through simulation we have validated the performance of our method.

Index Terms-- Information Warfare, Malicious Transaction, Data Dependency, Log Segmentation.

I. INTRODUCTION

Any computer system that is connected to a network is vulnerable to information attacks. In spite of all preventive measures, savvy intruders manage to sneak through and damage sensitive data. The initial damage later spreads to other parts of the database when a legitimate transaction updates valid data after reading damaged data. Damage may also spread for reasons such as system integrity checks, as described in [1] and [6]. Intrusion detection helps in identifying an attack, and a significant amount of work has been performed in this area; a few approaches are described in [3], [5] and [10]. However, none of the existing intrusion detection methods guarantees that an attack will be detected immediately.
Therefore, a major part of the database may have been affected by the time an attack is detected and the attacking transaction is identified. The situation worsens as time passes, and eventually it may become difficult, although not impossible, to recover the system. Hence, immediate and efficient damage assessment, and fast and accurate recovery, are important. In this research, we have developed a model that uses a data dependency approach to divide the log into multiple segments. During damage assessment, only a few of these segments are accessed instead of the entire log, which expedites the recovery process. We have developed a simulation model to test the performance of our approach, and the results show dramatic improvements over traditional methods. In the next section, we discuss related work. Our proposed model is described in Section III and the clustering algorithm is presented in Section IV. Section V offers the performance of our model obtained through simulation. Section VI concludes the paper.

II. RELATED WORK

Traditional recovery methods [2], [4], [7] have been designed to perform recovery in case of media or system failures, but they lack the efficacy required to recover from the effects of malicious committed transactions. In such situations, after the detection of an attack, the effects of all transactions reading directly or indirectly from the malicious transaction, along with those of the malicious transaction itself, need to be undone. Then the affected transactions must be re-executed to reflect the correct state of the database. Since an attack may be detected days or even months after its occurrence, the log must never be purged; otherwise, information about the attacker and other valid but affected transactions will not be available. This requirement makes the log grow massively, and searching the log during damage assessment and recovery incurs a long delay, which is unacceptable in many real-time applications. In [8], Jajodia et al.
have discussed recovery issues for defensive information warfare. Liu et al. [9] have presented algorithms that rewrite transaction history by moving the attacking transaction and all affected transactions beyond the non-affected transactions. However, this process requires significant page I/O, since all transactions after and including the malicious transaction have to be read. To save log access time, researchers have proposed clustering the log using a transaction dependency approach and have shown that during damage assessment only one of the clusters need be accessed [13]. Instead of traditional transaction dependency, researchers in [11] and [12] have used data dependency for recovery from malicious transactions. Rather than undoing all operations of affected transactions and then re-executing them, their approach suggests undoing and redoing only the affected operations of those transactions. Nevertheless, they require that the log be accessed starting from the malicious

[Footnote: This work was supported in part by a US AFOSR grant.]

transaction till the end in order to perform damage assessment and recovery. In this research, we have developed an extended data dependency model and an algorithm to divide the log into several clusters. Only operations on data items that are inter-dependent are kept in one cluster. During damage assessment, only the relevant clusters need to be scanned. The damage assessment and recovery algorithms presented in [11] can directly use the clusters created by our method, and other existing algorithms can easily be modified for this purpose.

III. THE MODEL

This work is based on the assumption that the attacking transaction has already been detected by intrusion detection techniques. So, given an attacking transaction, our goal is to determine the affected ones quickly, stop new and executing transactions from accessing affected data, and then carry out the recovery process. We further assume that the scheduler produces a strict serializable history and that the log is not modifiable by users (so that the log cannot be damaged). As transactions are executed, the log grows with time and is never purged. The log is stored in secondary storage, so every access to it requires a disk I/O. During recovery, we need to access the log to restore the database. To avoid unnecessary retrieval of the massive log, which results in a tremendous number of page I/Os, the clustering approach is followed. Next, we cite two definitions that were initially presented in [11], since they form the basis of our model.

Definition 1: A write operation w_i[x] of a transaction T_i is dependent on a read operation r_i[y] of T_i if w_i[x] is computed using the value obtained from r_i[y].

Definition 2: A data value v_1 is dependent on a data value v_2 if the write operation that wrote v_1 was dependent on a read operation on v_2. Note that v_1 and v_2 may be two different versions of the same data item.
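To make Definition 1 operational without access to transaction semantics, Section IV later assumes that a write depends on every read that appears after the transaction's previous write. A minimal sketch of that heuristic (hypothetical code; operation tuples and names are illustrative, not from the paper):

```python
def dependent_reads(ops):
    """ops: ordered list of ('r'|'w', item) for one transaction.
    Returns {index of each write: [items of the reads it depends on]}."""
    deps = {}
    pending_reads = []          # reads seen since the previous write
    for i, (kind, item) in enumerate(ops):
        if kind == 'r':
            pending_reads.append(item)
        else:                   # a write depends on all pending reads...
            deps[i] = list(pending_reads)
            pending_reads = []  # ...and "consumes" them (second assumption)
    return deps

# w[x] depends on r[y] and r[z]; the immediately following w[v]
# depends on no read, per the second assumption in Section IV.
ops = [('r', 'y'), ('r', 'z'), ('w', 'x'), ('w', 'v')]
print(dependent_reads(ops))     # {2: ['y', 'z'], 3: []}
```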
In our model, the operations on data items that, in accordance with Definition 2, are directly or indirectly dependent on each other are kept in the same cluster. Within a transaction, some operations may be independent of each other; therefore, not all operations of an affected transaction are affected in case of an attack. Hence, during recovery we need not re-execute all operations of a transaction, but only its affected operations. Keeping this philosophy in mind, while clustering we store independent operations in different clusters. Another perspective we have contemplated is determining the largest possible subset of all unaffected data items and then making them available as soon as possible. This reduces the risk of denial-of-service types of attacks. The following definitions help us achieve this goal by determining various possible dependency boundaries between data items.

Definition 3: For any two data items x and y, if the value of x may be used in calculating the value of y, then y can be influenced by x. This relation is denoted by x -> y. If the relation is bidirectional, i.e., either data item can influence the other, we denote the relationship as x <-> y. We assume that any data item can be updated using its previous value; therefore, the relationship is reflexive. But it is neither commutative nor transitive.

Definition 4: A probability graph is a directed graph representing possible relationships among data items in a database. In such a graph, the data items are represented by nodes, and an edge between two nodes represents the can-influence relationship between them. An edge can be unidirectional or bidirectional.

Definition 5: A clique C comprises related data items such that 1) for each x in C, there exists a data item y in C such that either x -> y, y -> x, or x <-> y, and 2) the probability graph plotted by taking all the data items of C is a connected one.
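Under Definitions 4 and 5, a clique is simply a connected component of the probability graph when edge direction is ignored. A minimal sketch (hypothetical code; item names and edges are illustrative):

```python
from collections import defaultdict

def cliques(edges, items):
    """edges: iterable of (x, y) meaning x can influence y.
    Returns the cliques of the probability graph as a list of sets."""
    adj = defaultdict(set)
    for x, y in edges:
        adj[x].add(y)
        adj[y].add(x)           # direction is irrelevant for connectedness
    seen, result = set(), []
    for start in items:
        if start in seen:
            continue
        comp, stack = set(), [start]    # depth-first flood fill
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        result.append(comp)
    return result

# a -> b and b <-> c form one clique; d is isolated in its own clique,
# so its items could be released to users immediately after an attack.
groups = cliques([('a', 'b'), ('b', 'c'), ('c', 'b')], ['a', 'b', 'c', 'd'])
# groups contains two cliques: {'a', 'b', 'c'} and {'d'}
```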
A clique guarantees that data items belonging to different cliques never affect each other. Therefore, during damage assessment, if we identify the clique containing the damage made by the attacker, the items of other cliques can immediately be made available to users.

Definition 6: A critical link is a specific connecting node in a probability graph whose removal may divide the graph into multiple disconnected graphs.

During damage assessment, if it is determined that a critical link has not been updated, then it is clear that data items on one side of the link have not affected items on the other side(s). However, it must be noted that determining whether a link has been updated adds overhead. A study needs to be done to determine for which links this information must be kept; otherwise, the maintenance cost will exceed the expected benefit.

IV. LOG CLUSTERING

In the data dependency approach, as update operations of transactions are encountered, the affected data items are checked for data dependency and accordingly put in the appropriate cluster. Operations on the items of a clique may spread over a number of clusters. Operations that are related to or dependent on each other are put in the same cluster. When we consider recovery from a malicious attack, we cannot overlook the requirements for recovery from traditional failures. Hence the basic operations for traditional recovery, undo and redo, must be taken care of. Since our algorithm stores only committed transactions in clusters, there is no need to carry out undo operations in case of transaction or media failures. Operations of all active transactions are stored in a temporary log, and clusters for those are determined periodically. In traditional recovery, when the operations of a transaction have not been flushed even after the commit point, the need for a redo operation arises. In our method, the operations of any non-flushed committed

transaction will be in the temporary log. We add a checkpoint in the temporary log to mark the last committed transaction that has been flushed, so that a cluster can be created up to that point. We do not store commit operations of transactions in any cluster. Therefore, the transactions stored in clusters do not require undo and redo operations; to carry out those operations we may need to refer to the temporary log. The actual dependency of data items is hard to determine from transaction operations alone, and transaction semantics must be considered for this purpose. Since transaction semantics are not available, for simplicity we have made the following assumptions. First, a write operation on any data item is dependent on all preceding read operations appearing after the previous write operation (if any) of the same transaction. Second, if a write operation is immediately followed by another write operation, the second write operation is independent of any read operation. The following data structures are used in the clustering algorithm.

A. Data Structures

Cluster-data list (CDL): This is used to store the cluster IDs and the corresponding data items, so that related (directly or transitively) data items can be stored together. This table may be referred to in order to identify the relevant clusters to which the operations of a transaction belong. Each operation in TOL_i is considered and the CDL is checked to find out whether the data item is already in a cluster; if it is, the same cluster ID is assigned; otherwise, a new cluster ID is assigned. It is guaranteed that no item belongs to more than one cluster. If a dependency is established between two items belonging to two different clusters, those clusters are combined into one.

Transaction-cluster list (TCL): This structure is used to store the transaction IDs and the corresponding cluster IDs. A single transaction may be spread over a large number of clusters.
Therefore, to trace a transaction we need to store this information. In case an attacker is detected, this list will help in determining the affected clusters.

Transaction operation list for T_i (TOL_i): This structure stores the operations of transaction T_i along with their corresponding cluster IDs. It is a temporary structure and is discarded once the last operation of the transaction is stored in a cluster.

B. Clustering Algorithm

1. If data structures TCL and CDL are not found, then
       set TCL = {} and CDL = {}.
2. Scan each operation up to the checkpoint of the log.
   For every operation O_i in the log:
   2.1 Case O_i is Start:
           Set TOL_i = {}.
       Case O_i is Read or Write:
           Add O_i to TOL_i.
       Case O_i is Abort:
           Delete TOL_i.
       Case O_i is Commit:
           Add the transaction ID to the transaction ID entry of TCL;
           delete all read operations in TOL_i that do not affect any write operation;
           call assign_cluster;
           delete TOL_i.

Procedure assign_cluster  // identifies a cluster for each operation in TOL_i
1. Scan each operation until the end of TOL_i.
   For every operation O_i[x] in TOL_i:
   1.1 If O_i[x] is a read operation:
           If x belongs to a cluster, say C_k:
               update the cluster entry for O_i[x] in TOL_i to C_k.
           Else:
               assign a new cluster ID and update the cluster entry for O_i[x] in TOL_i accordingly;
               add the new cluster ID and data item to CDL;
               add the new cluster ID to TCL.
   1.2 If O_i[x] is a write operation:
           If x is in cluster C_k (as checked from CDL):
               update the cluster entry for O_i[x] in TOL_i to C_k.
           Else:
               assign a new cluster ID and update the cluster entry for O_i[x] in TOL_i;
               add the new cluster ID to CDL and TCL.
           If operation O_i[x] has dependent reads:
               check all such dependent read operations in TOL_i and their cluster IDs;
               for each such distinct cluster ID,
                   merge all the clusters and put all their entries in one cluster as given in CDL;
               update CDL, TCL and TOL_i to reflect the changes.
2.
Add all operations of TOL_i to their respective clusters.

Next, we present two lemmas regarding transaction operations. Proofs of these lemmas are straightforward; therefore, due to space constraints they are not provided here. Readers interested in the proofs may contact the authors.

Lemma 1: The effect of every operation in a cluster is already in the database.

Lemma 2: No operation is stored in more than one cluster.

In the above algorithm, whenever a read operation on any item is encountered, the CDL is checked to determine the corresponding cluster. When a write operation is encountered, its dependency on the previous read operations is checked. If any dependency is established, all dependent operations are put in the same cluster. Once the clusters are created, pinpointing the damage becomes easier. Given the attacking transaction, we can determine the clusters that contain this transaction. Then the operations of the attacking transaction can be undone and the affected operations can be re-executed using existing algorithms such as the one described in [11].

V. PERFORMANCE ANALYSIS

A simulation model was developed to compare the performance of the clustering approach with the traditional

log-based recovery method. The simulation program was executed in two phases: the first phase created a strict serializable history, followed by clustering and damage assessment in the second phase. The transactions were executed following a strict two-phase locking protocol. The transaction ID of the attacking transaction was pre-specified in the program. Traditional log-based recovery techniques always scan the operations of all transactions from the point of attack to the end of the log, and recovery is then done accordingly. In this case, the total number of read and write data items of the transactions was calculated from the point of attack to the end of the log. In the case of the clustering model, only those clusters containing the attacking transaction were accessed, and the counts of affected and unaffected operations were taken. The simulation was run with two main variations. The first was to observe the effect of the attacker at various places in the log: the attacker ID was varied while the total number of transactions, the total number of data items, and the maximum number of items accessible by a transaction were kept fixed during each run. The second was to observe the effect of the number of executed transactions: the total number of transactions was changed in each run while maintaining the same attacker ID, a fixed number of data items, and a fixed maximum number of data items accessible by a transaction.

A. Calculation Methods

For the calculation of page access time, the following system-dependent parameters were used: space taken by a read operation record in the log (RD) = 40 bytes, space taken by a write operation record in the log (WR) = 60 bytes, page size (PS) = 1024 bytes, and page access time (PT) = 20 milliseconds.
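These parameters drive the access-time estimate derived in the rest of this subsection, which assumes that a log scan touches whole pages. A small sketch (hypothetical code; the function name is illustrative):

```python
# System-dependent parameters from subsection V.A.
RD, WR = 40, 60     # bytes per read / write log record
PS, PT = 1024, 20   # page size in bytes, page access time in ms

def access_time_ms(reads, writes):
    """Estimated time to scan `reads` read records and `writes` write records."""
    total_bytes = reads * RD + writes * WR   # T = R*RD + W*WR
    pages = total_bytes // PS + 1            # P = T / PS + 1, as in the paper
    return pages * PT                        # total access time = P * PT

print(access_time_ms(100, 50))  # 7000 bytes -> 7 pages -> 140 ms
```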
For traditional log-based recovery methods, the recovery process begins by scanning all operations of transactions from the point of attack to the end of the log. To calculate the total page access time (log access time) for this model, all read and write operations of transactions from the attacking transaction to the last transaction in the log were considered. Using the fixed space occupied by each of these operations as listed above, the size of the part of the log that needs to be scanned, and hence the estimated total page access time, was calculated. For the clustering approach, the total number of affected operations was calculated in each cluster from the attacking transaction to the end of that cluster. The time for damage assessment was calculated in the following manner. Let R represent the total number of affected read operations and W the total number of affected write operations. Then the total page access time is calculated as follows:

    Total space for read records = R * RD bytes
    Total space for write records = W * WR bytes
    Total amount of space to be scanned, T = R * RD + W * WR bytes
    Number of pages needed, P = T / PS + 1
    Total access time = P * PT

The above steps were used to calculate the total log access time for the traditional log-based methods. A similar calculation was done for the clustering model, based on the total number of affected read and write operations: the number of pages to be read for each cluster was calculated and then summed to obtain the total access time.

B. Results

Figure 15.1 shows the access time comparison between our cluster-based approach and the traditional log approach when the attacking transaction is varied. The other fixed parameters were: 4000 total data items, 500 transactions, and a maximum of 40 data items accessible by a transaction.
Since the transactions are interleaved, the access time does not depend directly on the position of the attacking transaction; therefore, the access time fluctuates as the attacker position changes. Figure 15.2 was obtained by running the program with the same parameter values as before except for the total number of data items, which was increased. Although the log access time was almost the same as in the previous case, the access times for the cluster-based model decreased drastically. The larger set of data items yielded less dependency among the data items, thus giving rise to clusters with fewer affected operations. Using the same parameter values as in the first case but changing the total number of transactions to 1000 incurred more dependencies among data items. Therefore, the cluster size also increased, resulting in a higher access time than before; still, our model proved better than the traditional approach. Figure 15.3 displays the obtained result. We ran the experiment again, fixing the total number of data items at 8000, the maximum data items accessible by a transaction at 30, and the attacker ID at 950. This time, the total number of transactions executed was varied from 1000 to 1400 in increments of 100. The comparison result is illustrated in Figure 15.4.

VI. CONCLUSIONS

In a post-intrusion detection situation, fast and accurate recovery from malicious transactions is crucial for the survival of any information system. Since recovery algorithms require that the database log not be purged, the log grows out of proportion, and searching massive logs during damage assessment and recovery is very inefficient. In this paper, we have developed a model based on a data dependency approach to pre-determine possible paths of information flow in databases. This helps in determining the parts of the database that are not affected by an attack. We have presented an algorithm to cluster the log based on data dependency, thus grouping related operations together.
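The clustering procedure of Section IV can be rendered as a compact sketch. This is a hypothetical simplification, not the authors' implementation: it keeps a data-item-to-cluster map (the CDL), merges clusters whenever a write depends on reads whose items lie in different clusters, and derives the TCL at the end.

```python
def cluster_log(transactions):
    """transactions: list of committed-transaction op lists [('r'|'w', item), ...].
    Returns (cdl, tcl): item -> cluster ID, transaction index -> set of cluster IDs."""
    cdl = {}                  # CDL: data item -> cluster ID
    next_id = 0
    touched = []              # items each transaction operated on

    def cluster_of(item):
        nonlocal next_id
        if item not in cdl:   # unseen item gets a fresh cluster ID
            cdl[item] = next_id
            next_id += 1
        return cdl[item]

    def merge(ids):
        keep = min(ids)       # collapse several clusters into one
        for it, cid in cdl.items():
            if cid in ids:
                cdl[it] = keep

    for ops in transactions:
        items, pending = set(), []       # pending: reads since previous write
        for kind, item in ops:
            cluster_of(item)
            items.add(item)
            if kind == 'r':
                pending.append(item)
            else:                        # write depends on the pending reads
                ids = {cdl[item]} | {cdl[r] for r in pending}
                if len(ids) > 1:
                    merge(ids)
                pending = []
        touched.append(items)

    # TCL, computed after all merges so cluster IDs are final
    tcl = {t: {cdl[i] for i in items} for t, items in enumerate(touched)}
    return cdl, tcl
```

Independent transactions end up in disjoint clusters, so damage assessment for an attacking transaction only needs the clusters listed for it in `tcl`.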
The operations affecting each other directly or indirectly are stored in the same cluster. This enables us to skip various sections of the log during damage assessment and recovery. Through simulation, we compared the performance of our algorithm

with the traditional method. The results confirm our claim that the proposed method accelerates the recovery process considerably. In situations where the amount of dependency among data items is low, the cluster sizes remain small, resulting in dramatically lower access time than the traditional approach. However, even with the larger cluster sizes that result from more dependencies among data items, our model outperforms the traditional log-based method.

VII. ACKNOWLEDGMENT

The authors wish to thank Dr. Robert L. Herklotz and Capt. Alex Kilpatrick for their support, which made this work possible.

VIII. REFERENCES

[1] P. Ammann, S. Jajodia, C. D. McCollum, and B. Blaustein, "Surviving information warfare attacks on databases," in Proceedings of the 1997 IEEE Symposium on Security and Privacy, Oakland, CA, May 1997.
[2] P. A. Bernstein, V. Hadzilacos, and N. Goodman, Concurrency Control and Recovery in Database Systems, Addison-Wesley, Reading, MA, 1987.
[3] L. J. LaPadula, "State of the art in anomaly detection and reaction," Technical Report, Center for Integrated Intelligence Systems, The MITRE Corporation, Bedford, MA.
[4] R. Elmasri and S. B. Navathe, Fundamentals of Database Systems, Second Edition, Addison-Wesley, Menlo Park, CA.
[5] B. Mukherjee, L. T. Heberlein, and K. N. Levitt, "Network intrusion detection," IEEE Network, Vol. 8, No. 3, May/June 1994.
[6] R. Graubart, L. Schlipper, and C. McCollum, "Defending Database Management Systems Against Information Warfare Attacks," Technical Report, The MITRE Corporation.
[7] J. Gray and A. Reuter, Transaction Processing: Concepts and Techniques, Morgan Kaufmann, San Mateo, CA, 1993.
[8] S. Jajodia, C. D. McCollum, and P. Ammann, "Trusted recovery," Communications of the ACM, Vol. 42, No. 7, July 1999.
[9] P. Liu, P. Ammann, and S. Jajodia, "Rewriting histories: recovering from malicious transactions," Distributed and Parallel Databases, Vol. 8, No. 1, p. 7-40, January 2000.
[10] T. F.
Lunt, "A survey of intrusion detection techniques," Computers & Security, Vol. 12, No. 4, June 1993.
[11] B. Panda and J. Giordano, "Reconstructing the database after electronic attacks," in Database Security XII: Status and Prospects, S. Jajodia (editor), Kluwer Academic Publishers.
[12] B. Panda and J. Giordano, "An overview of post information warfare data recovery," in Proceedings of the 1998 ACM Symposium on Applied Computing, Atlanta, GA, February 1998.
[13] S. Patnaik and B. Panda, "Dependency based logging for database survivability from hostile transactions," in Proceedings of the 12th International Conference on Computer Applications in Industry and Engineering, Atlanta, GA, November 1999.


More information

INTRODUCTION TO DATABASE SYSTEMS

INTRODUCTION TO DATABASE SYSTEMS 1 INTRODUCTION TO DATABASE SYSTEMS Exercise 1.1 Why would you choose a database system instead of simply storing data in operating system files? When would it make sense not to use a database system? Answer

More information

Attack graph analysis using parallel algorithm

Attack graph analysis using parallel algorithm Attack graph analysis using parallel algorithm Dr. Jamali Mohammad (m.jamali@yahoo.com) Ashraf Vahid, MA student of computer software, Shabestar Azad University (vahid.ashraf@yahoo.com) Ashraf Vida, MA

More information

QoSIP: A QoS Aware IP Routing Protocol for Multimedia Data

QoSIP: A QoS Aware IP Routing Protocol for Multimedia Data QoSIP: A QoS Aware IP Routing Protocol for Multimedia Data Md. Golam Shagadul Amin Talukder and Al-Mukaddim Khan Pathan* Department of Computer Science and Engineering, Metropolitan University, Sylhet,

More information

Dr Markus Hagenbuchner markus@uow.edu.au CSCI319. Distributed Systems

Dr Markus Hagenbuchner markus@uow.edu.au CSCI319. Distributed Systems Dr Markus Hagenbuchner markus@uow.edu.au CSCI319 Distributed Systems CSCI319 Chapter 8 Page: 1 of 61 Fault Tolerance Study objectives: Understand the role of fault tolerance in Distributed Systems. Know

More information

Recovery Principles of MySQL Cluster 5.1

Recovery Principles of MySQL Cluster 5.1 Recovery Principles of MySQL Cluster 5.1 Mikael Ronström Jonas Oreland MySQL AB Bangårdsgatan 8 753 20 Uppsala Sweden {mikael, jonas}@mysql.com Abstract MySQL Cluster is a parallel main memory database.

More information

A Case for Dynamic Selection of Replication and Caching Strategies

A Case for Dynamic Selection of Replication and Caching Strategies A Case for Dynamic Selection of Replication and Caching Strategies Swaminathan Sivasubramanian Guillaume Pierre Maarten van Steen Dept. of Mathematics and Computer Science Vrije Universiteit, Amsterdam,

More information

Project Proposal. Data Storage / Retrieval with Access Control, Security and Pre-Fetching

Project Proposal. Data Storage / Retrieval with Access Control, Security and Pre-Fetching 1 Project Proposal Data Storage / Retrieval with Access Control, Security and Pre- Presented By: Shashank Newadkar Aditya Dev Sarvesh Sharma Advisor: Prof. Ming-Hwa Wang COEN 241 - Cloud Computing Page

More information

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering

Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations. Database Solutions Engineering Comprehending the Tradeoffs between Deploying Oracle Database on RAID 5 and RAID 10 Storage Configurations A Dell Technical White Paper Database Solutions Engineering By Sudhansu Sekhar and Raghunatha

More information

UVA. Failure and Recovery. Failure and inconsistency. - transaction failures - system failures - media failures. Principle of recovery

UVA. Failure and Recovery. Failure and inconsistency. - transaction failures - system failures - media failures. Principle of recovery Failure and Recovery Failure and inconsistency - transaction failures - system failures - media failures Principle of recovery - redundancy - DB can be protected by ensuring that its correct state can

More information

Course Content. Transactions and Concurrency Control. Objectives of Lecture 4 Transactions and Concurrency Control

Course Content. Transactions and Concurrency Control. Objectives of Lecture 4 Transactions and Concurrency Control Database Management Systems Fall 2001 CMPUT 391: Transactions & Concurrency Control Dr. Osmar R. Zaïane University of Alberta Chapters 18 and 19 of Textbook Course Content Introduction Database Design

More information

A Comparison of General Approaches to Multiprocessor Scheduling

A Comparison of General Approaches to Multiprocessor Scheduling A Comparison of General Approaches to Multiprocessor Scheduling Jing-Chiou Liou AT&T Laboratories Middletown, NJ 0778, USA jing@jolt.mt.att.com Michael A. Palis Department of Computer Science Rutgers University

More information

B.Sc (Computer Science) Database Management Systems UNIT - IV

B.Sc (Computer Science) Database Management Systems UNIT - IV 1 B.Sc (Computer Science) Database Management Systems UNIT - IV Transaction:- A transaction is any action that reads from or writes to a database. Suppose a customer is purchasing a product using credit

More information

CRASH RECOVERY FOR REAL-TIME MAIN MEMORY DATABASE SYSTEMS

CRASH RECOVERY FOR REAL-TIME MAIN MEMORY DATABASE SYSTEMS CRASH RECOVERY FOR REAL-TIME MAIN MEMORY DATABASE SYSTEMS Jing Huang Le Gruenwald School of Computer Science The University of Oklahoma Norman, OK 73019 Email: gruenwal@mailhost.ecn.uoknor.edu Keywords:

More information

Load Balancing in Distributed Data Base and Distributed Computing System

Load Balancing in Distributed Data Base and Distributed Computing System Load Balancing in Distributed Data Base and Distributed Computing System Lovely Arya Research Scholar Dravidian University KUPPAM, ANDHRA PRADESH Abstract With a distributed system, data can be located

More information

CONCURRENCY CONTROL IN TRUSTED DATABASE MANAGEMENT SYSTEMS: A SURVEY. Bhavani Thuraisingham and Hai-Ping Ko

CONCURRENCY CONTROL IN TRUSTED DATABASE MANAGEMENT SYSTEMS: A SURVEY. Bhavani Thuraisingham and Hai-Ping Ko CONCURRENCY CONTROL IN TRUSTED DATABASE MANAGEMENT SYSTEMS: A SURVEY Bhavani Thuraisingham and Hai-Ping Ko The MITRE Corporation Burlington Road, Bedford, MA 01730 ABSTRACT Recently several algorithms

More information

Chapter 9. Transaction Management and Concurrency Control. Database Systems: Design, Implementation, and Management, Sixth Edition, Rob and Coronel

Chapter 9. Transaction Management and Concurrency Control. Database Systems: Design, Implementation, and Management, Sixth Edition, Rob and Coronel Chapter 9 Transaction Management and Concurrency Control Database Systems: Design, Implementation, and Management, Sixth Edition, Rob and Coronel 1 In this chapter, you will learn: What a database transaction

More information

Transaction Management

Transaction Management OVERVIEW OF TRANSACTION MANAGEMENT Tópicos Avançados da Base de Dados Arley Pinto up110370423 (andrapt@gmail.com) Gabriel de Jesus up110370572 (thejesusgaby@gmail.com) TRANSACTION OVERVIEW A transaction

More information

Ensuring Task Dependencies During Workflow Recovery

Ensuring Task Dependencies During Workflow Recovery Ensuring Task Depencies During Workflow Recovery Indrakshi Ray, Tai Xin, and Yajie Zhu Department of Computer Science Colorado State University iray,xin,zhuy @cs.colostate.edu Abstract. Workflow management

More information

CS 245 Final Exam Winter 2013

CS 245 Final Exam Winter 2013 CS 245 Final Exam Winter 2013 This exam is open book and notes. You can use a calculator and your laptop to access course notes and videos (but not to communicate with other people). You have 140 minutes

More information

Chapter 16: Recovery System

Chapter 16: Recovery System Chapter 16: Recovery System Failure Classification Failure Classification Transaction failure : Logical errors: transaction cannot complete due to some internal error condition System errors: the database

More information

Locality Based Protocol for MultiWriter Replication systems

Locality Based Protocol for MultiWriter Replication systems Locality Based Protocol for MultiWriter Replication systems Lei Gao Department of Computer Science The University of Texas at Austin lgao@cs.utexas.edu One of the challenging problems in building replication

More information

APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM

APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM 152 APPENDIX 1 USER LEVEL IMPLEMENTATION OF PPATPAN IN LINUX SYSTEM A1.1 INTRODUCTION PPATPAN is implemented in a test bed with five Linux system arranged in a multihop topology. The system is implemented

More information

Flexible Deterministic Packet Marking: An IP Traceback Scheme Against DDOS Attacks

Flexible Deterministic Packet Marking: An IP Traceback Scheme Against DDOS Attacks Flexible Deterministic Packet Marking: An IP Traceback Scheme Against DDOS Attacks Prashil S. Waghmare PG student, Sinhgad College of Engineering, Vadgaon, Pune University, Maharashtra, India. prashil.waghmare14@gmail.com

More information

Ensuring Security in Cloud with Multi-Level IDS and Log Management System

Ensuring Security in Cloud with Multi-Level IDS and Log Management System Ensuring Security in Cloud with Multi-Level IDS and Log Management System 1 Prema Jain, 2 Ashwin Kumar PG Scholar, Mangalore Institute of Technology & Engineering, Moodbidri, Karnataka1, Assistant Professor,

More information

A Visualization System and Monitoring Tool to Measure Concurrency in MPICH Programs

A Visualization System and Monitoring Tool to Measure Concurrency in MPICH Programs A Visualization System and Monitoring Tool to Measure Concurrency in MPICH Programs Michael Scherger Department of Computer Science Texas Christian University Email: m.scherger@tcu.edu Zakir Hussain Syed

More information

Recovery Protocols For Flash File Systems

Recovery Protocols For Flash File Systems Recovery Protocols For Flash File Systems Ravi Tandon and Gautam Barua Indian Institute of Technology Guwahati, Department of Computer Science and Engineering, Guwahati - 781039, Assam, India {r.tandon}@alumni.iitg.ernet.in

More information

Recovery Theory. Storage Types. Failure Types. Theory of Recovery. Volatile storage main memory, which does not survive crashes.

Recovery Theory. Storage Types. Failure Types. Theory of Recovery. Volatile storage main memory, which does not survive crashes. Storage Types Recovery Theory Volatile storage main memory, which does not survive crashes. Non-volatile storage tape, disk, which survive crashes. Stable storage information in stable storage is "never"

More information

International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 4, July-Aug 2014

International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 4, July-Aug 2014 RESEARCH ARTICLE An Efficient Priority Based Load Balancing Algorithm for Cloud Environment Harmandeep Singh Brar 1, Vivek Thapar 2 Research Scholar 1, Assistant Professor 2, Department of Computer Science

More information

Crash recovery requirements

Crash recovery requirements COS : Database and Information Management Systems Crash Recovery Crash Recovery Overview Goals of crash recovery Either transaction commits and is correct or aborts Commit means all actions of transaction

More information

STUDY AND SIMULATION OF A DISTRIBUTED REAL-TIME FAULT-TOLERANCE WEB MONITORING SYSTEM

STUDY AND SIMULATION OF A DISTRIBUTED REAL-TIME FAULT-TOLERANCE WEB MONITORING SYSTEM STUDY AND SIMULATION OF A DISTRIBUTED REAL-TIME FAULT-TOLERANCE WEB MONITORING SYSTEM Albert M. K. Cheng, Shaohong Fang Department of Computer Science University of Houston Houston, TX, 77204, USA http://www.cs.uh.edu

More information

Detection of Distributed Denial of Service Attack with Hadoop on Live Network

Detection of Distributed Denial of Service Attack with Hadoop on Live Network Detection of Distributed Denial of Service Attack with Hadoop on Live Network Suchita Korad 1, Shubhada Kadam 2, Prajakta Deore 3, Madhuri Jadhav 4, Prof.Rahul Patil 5 Students, Dept. of Computer, PCCOE,

More information

Recovery Guarantees in Mobile Systems

Recovery Guarantees in Mobile Systems Accepted for the International Workshop on Data Engineering for Wireless and Mobile Access, Seattle, 20 August 1999 Recovery Guarantees in Mobile Systems Cris Pedregal Martin and Krithi Ramamritham Computer

More information

Database Concurrency Control and Recovery. Simple database model

Database Concurrency Control and Recovery. Simple database model Database Concurrency Control and Recovery Pessimistic concurrency control Two-phase locking (2PL) and Strict 2PL Timestamp ordering (TSO) and Strict TSO Optimistic concurrency control (OCC) definition

More information

Crash Recovery Method. Kathleen Durant CS 3200 Lecture 11

Crash Recovery Method. Kathleen Durant CS 3200 Lecture 11 Crash Recovery Method Kathleen Durant CS 3200 Lecture 11 Outline Overview of the recovery manager Data structures used by the recovery manager Checkpointing Crash recovery Write ahead logging ARIES (Algorithm

More information

Oracle Database 11 g Performance Tuning. Recipes. Sam R. Alapati Darl Kuhn Bill Padfield. Apress*

Oracle Database 11 g Performance Tuning. Recipes. Sam R. Alapati Darl Kuhn Bill Padfield. Apress* Oracle Database 11 g Performance Tuning Recipes Sam R. Alapati Darl Kuhn Bill Padfield Apress* Contents About the Authors About the Technical Reviewer Acknowledgments xvi xvii xviii Chapter 1: Optimizing

More information

MINIMIZING STORAGE COST IN CLOUD COMPUTING ENVIRONMENT

MINIMIZING STORAGE COST IN CLOUD COMPUTING ENVIRONMENT MINIMIZING STORAGE COST IN CLOUD COMPUTING ENVIRONMENT 1 SARIKA K B, 2 S SUBASREE 1 Department of Computer Science, Nehru College of Engineering and Research Centre, Thrissur, Kerala 2 Professor and Head,

More information

Transaction Concept. Chapter 15: Transactions. ACID Properties. Example of Fund Transfer. Transaction State. Example of Fund Transfer (Cont.

Transaction Concept. Chapter 15: Transactions. ACID Properties. Example of Fund Transfer. Transaction State. Example of Fund Transfer (Cont. Chapter 15: Transactions Transaction Concept! Transaction Concept! Transaction State! Implementation of Atomicity and Durability! Concurrent Executions! Serializability! Recoverability! Implementation

More information

Oracle Rdb Performance Management Guide

Oracle Rdb Performance Management Guide Oracle Rdb Performance Management Guide Solving the Five Most Common Problems with Rdb Application Performance and Availability White Paper ALI Database Consultants 803-648-5931 www.aliconsultants.com

More information

Concurrency Control. Module 6, Lectures 1 and 2

Concurrency Control. Module 6, Lectures 1 and 2 Concurrency Control Module 6, Lectures 1 and 2 The controlling intelligence understands its own nature, and what it does, and whereon it works. -- Marcus Aurelius Antoninus, 121-180 A. D. Database Management

More information

Intrusion Detection: Game Theory, Stochastic Processes and Data Mining

Intrusion Detection: Game Theory, Stochastic Processes and Data Mining Intrusion Detection: Game Theory, Stochastic Processes and Data Mining Joseph Spring 7COM1028 Secure Systems Programming 1 Discussion Points Introduction Firewalls Intrusion Detection Schemes Models Stochastic

More information

Data might get lost We don t like that, because Solutions are based on logging techniques General term: write ahead logging

Data might get lost We don t like that, because Solutions are based on logging techniques General term: write ahead logging Data might get lost We don t like that, because Solutions are based on logging techniques General term: write ahead logging Wrong user data: avoid by using constraints System failure: loss of main memory

More information

VirtualCenter Database Performance for Microsoft SQL Server 2005 VirtualCenter 2.5

VirtualCenter Database Performance for Microsoft SQL Server 2005 VirtualCenter 2.5 Performance Study VirtualCenter Database Performance for Microsoft SQL Server 2005 VirtualCenter 2.5 VMware VirtualCenter uses a database to store metadata on the state of a VMware Infrastructure environment.

More information

FAWN - a Fast Array of Wimpy Nodes

FAWN - a Fast Array of Wimpy Nodes University of Warsaw January 12, 2011 Outline Introduction 1 Introduction 2 3 4 5 Key issues Introduction Growing CPU vs. I/O gap Contemporary systems must serve millions of users Electricity consumed

More information

COURSE 5. Database Recovery

COURSE 5. Database Recovery COURSE 5 Database Recovery Making Changes in Database Transaction T made changes to x. There is use of DB buffer. There is a crash while operations are performed. Scenario1: Neither operations made it

More information

Key Components of WAN Optimization Controller Functionality

Key Components of WAN Optimization Controller Functionality Key Components of WAN Optimization Controller Functionality Introduction and Goals One of the key challenges facing IT organizations relative to application and service delivery is ensuring that the applications

More information

A Review of Anomaly Detection Techniques in Network Intrusion Detection System

A Review of Anomaly Detection Techniques in Network Intrusion Detection System A Review of Anomaly Detection Techniques in Network Intrusion Detection System Dr.D.V.S.S.Subrahmanyam Professor, Dept. of CSE, Sreyas Institute of Engineering & Technology, Hyderabad, India ABSTRACT:In

More information

Goal. Log Manager. Recovery Manager Model

Goal. Log Manager. Recovery Manager Model Goal Log Manager Database Systems Implementation Based on slides by Phil Bernstein and Jim Gray A database may become inconsistent because of a transaction failure (abort) database system failure (possibly

More information

Optimizing Your Database Performance the Easy Way

Optimizing Your Database Performance the Easy Way Optimizing Your Database Performance the Easy Way by Diane Beeler, Consulting Product Marketing Manager, BMC Software and Igy Rodriguez, Technical Product Manager, BMC Software Customers and managers of

More information

PIONEER RESEARCH & DEVELOPMENT GROUP

PIONEER RESEARCH & DEVELOPMENT GROUP SURVEY ON RAID Aishwarya Airen 1, Aarsh Pandit 2, Anshul Sogani 3 1,2,3 A.I.T.R, Indore. Abstract RAID stands for Redundant Array of Independent Disk that is a concept which provides an efficient way for

More information

IJTC.ORG REVIEW OF IDS SYSTEM IN LARGE SCALE ADHOC NETWORKS

IJTC.ORG REVIEW OF IDS SYSTEM IN LARGE SCALE ADHOC NETWORKS REVIEW OF IDS SYSTEM IN LARGE SCALE ADHOC NETWORKS Palamdeep a,, Dr.Parminder Singh b a MTech Student, k.palambrar@gmail.com,chandigarh Engineering College,Landran,Punjab,India b Assistant Professor, singh.parminder06@gmail.com,chandigarh

More information

A Game Theoretical Framework on Intrusion Detection in Heterogeneous Networks Lin Chen, Member, IEEE, and Jean Leneutre

A Game Theoretical Framework on Intrusion Detection in Heterogeneous Networks Lin Chen, Member, IEEE, and Jean Leneutre IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL 4, NO 2, JUNE 2009 165 A Game Theoretical Framework on Intrusion Detection in Heterogeneous Networks Lin Chen, Member, IEEE, and Jean Leneutre

More information

A Novel Distributed Denial of Service (DDoS) Attacks Discriminating Detection in Flash Crowds

A Novel Distributed Denial of Service (DDoS) Attacks Discriminating Detection in Flash Crowds International Journal of Research Studies in Science, Engineering and Technology Volume 1, Issue 9, December 2014, PP 139-143 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) A Novel Distributed Denial

More information

Chapter 13. Chapter Outline. Disk Storage, Basic File Structures, and Hashing

Chapter 13. Chapter Outline. Disk Storage, Basic File Structures, and Hashing Chapter 13 Disk Storage, Basic File Structures, and Hashing Copyright 2007 Ramez Elmasri and Shamkant B. Navathe Chapter Outline Disk Storage Devices Files of Records Operations on Files Unordered Files

More information

Keywords: Dynamic Load Balancing, Process Migration, Load Indices, Threshold Level, Response Time, Process Age.

Keywords: Dynamic Load Balancing, Process Migration, Load Indices, Threshold Level, Response Time, Process Age. Volume 3, Issue 10, October 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Load Measurement

More information

Chapter 15: Transactions

Chapter 15: Transactions Chapter 15: Transactions Database System Concepts, 5th Ed. See www.db book.com for conditions on re use Chapter 15: Transactions Transaction Concept Transaction State Concurrent Executions Serializability

More information

IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES

IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES www.ijecs.in International Journal Of Engineering And Computer Science ISSN:2319-7242 Volume 2 Issue 6 June, 2013 Page No. 1914-1919 IMPROVED PROXIMITY AWARE LOAD BALANCING FOR HETEROGENEOUS NODES Ms.

More information

Fuzzy Network Profiling for Intrusion Detection

Fuzzy Network Profiling for Intrusion Detection Fuzzy Network Profiling for Intrusion Detection John E. Dickerson (jedicker@iastate.edu) and Julie A. Dickerson (julied@iastate.edu) Electrical and Computer Engineering Department Iowa State University

More information

Checkpointing, Redo, Undo/Redo Logging. CS157B Chris Pollett Apr.20, 2005.

Checkpointing, Redo, Undo/Redo Logging. CS157B Chris Pollett Apr.20, 2005. Checkpointing, Redo, Undo/Redo Logging CS157B Chris Pollett Apr.20, 2005. Outline Checkpointing Redo Logging Undo/redo Logging Checkpointing So far recovery requires that the entire log file be looked

More information

Data Storage - II: Efficient Usage & Errors

Data Storage - II: Efficient Usage & Errors Data Storage - II: Efficient Usage & Errors Week 10, Spring 2005 Updated by M. Naci Akkøk, 27.02.2004, 03.03.2005 based upon slides by Pål Halvorsen, 12.3.2002. Contains slides from: Hector Garcia-Molina

More information

Efficient Scheduling Of On-line Services in Cloud Computing Based on Task Migration

Efficient Scheduling Of On-line Services in Cloud Computing Based on Task Migration Efficient Scheduling Of On-line Services in Cloud Computing Based on Task Migration 1 Harish H G, 2 Dr. R Girisha 1 PG Student, 2 Professor, Department of CSE, PESCE Mandya (An Autonomous Institution under

More information

File-System Implementation

File-System Implementation File-System Implementation 11 CHAPTER In this chapter we discuss various methods for storing information on secondary storage. The basic issues are device directory, free space management, and space allocation

More information

Study of Different Types of Attacks on Multicast in Mobile Ad Hoc Networks

Study of Different Types of Attacks on Multicast in Mobile Ad Hoc Networks Study of Different Types of Attacks on Multicast in Mobile Ad Hoc Networks Hoang Lan Nguyen and Uyen Trang Nguyen Department of Computer Science and Engineering, York University 47 Keele Street, Toronto,

More information

Load Distribution in Large Scale Network Monitoring Infrastructures

Load Distribution in Large Scale Network Monitoring Infrastructures Load Distribution in Large Scale Network Monitoring Infrastructures Josep Sanjuàs-Cuxart, Pere Barlet-Ros, Gianluca Iannaccone, and Josep Solé-Pareta Universitat Politècnica de Catalunya (UPC) {jsanjuas,pbarlet,pareta}@ac.upc.edu

More information

On the Ubiquity of Logging in Distributed File Systems

On the Ubiquity of Logging in Distributed File Systems On the Ubiquity of Logging in Distributed File Systems M. Satyanarayanan James J. Kistler Puneet Kumar Hank Mashburn School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Logging is

More information

Analyze Database Optimization Techniques

Analyze Database Optimization Techniques IJCSNS International Journal of Computer Science and Network Security, VOL.10 No.8, August 2010 275 Analyze Database Optimization Techniques Syedur Rahman 1, A. M. Ahsan Feroz 2, Md. Kamruzzaman 3 and

More information

2 Technologies for Security of the 2 Internet

2 Technologies for Security of the 2 Internet 2 Technologies for Security of the 2 Internet 2-1 A Study on Process Model for Internet Risk Analysis NAKAO Koji, MARUYAMA Yuko, OHKOUCHI Kazuya, MATSUMOTO Fumiko, and MORIYAMA Eimatsu Security Incidents

More information

PostgreSQL Concurrency Issues

PostgreSQL Concurrency Issues PostgreSQL Concurrency Issues 1 PostgreSQL Concurrency Issues Tom Lane Red Hat Database Group Red Hat, Inc. PostgreSQL Concurrency Issues 2 Introduction What I want to tell you about today: How PostgreSQL

More information

Oracle Database Concepts

Oracle Database Concepts Oracle Database Concepts Database Structure The database has logical structures and physical structures. Because the physical and logical structures are separate, the physical storage of data can be managed

More information

File System Implementation II

File System Implementation II Introduction to Operating Systems File System Implementation II Performance, Recovery, Network File System John Franco Electrical Engineering and Computing Systems University of Cincinnati Review Block

More information

A Visualization Technique for Monitoring of Network Flow Data

A Visualization Technique for Monitoring of Network Flow Data A Visualization Technique for Monitoring of Network Flow Data Manami KIKUCHI Ochanomizu University Graduate School of Humanitics and Sciences Otsuka 2-1-1, Bunkyo-ku, Tokyo, JAPAPN manami@itolab.is.ocha.ac.jp

More information