Consistency Management for Multiple Perspective Software Development
Wai Leung Poon
Dept. of Computing, Imperial College,
180 Queen's Gate, London SW7 2BZ, UK

Anthony Finkelstein
Dept. of Computer Science, City University,
Northampton Square, London EC1V 0HB, UK

Abstract

This paper discusses consistency management for realising viewpoint-based software development in an aggressively decentralised environment. Relations between design artefacts are specified in consistency checks. Possible interference between the concurrent actions of users is handled by modelling the interaction between these actions in terms of transactions. We briefly describe how software process model knowledge can be exploited to enhance concurrency, subject to the correctness criterion of serialisability. Open issues and the research agenda underlying the development of such an environment are sketched.

KEYWORDS: concurrency control, transaction management, software process, software development environment

1. Viewpoint-based Decentralised Environments

Software development commonly involves geographically distributed participants with different objectives and expertise. A wide range of data representations, tools and development methodologies may be adopted. This is referred to as multiple perspective software development [1]. Such situations can be modelled in terms of the collaboration of autonomous development environments, each of which encapsulates one or more dimensions of the heterogeneity. The resulting system, which is composed of a collection of autonomous environments, facilitates reuse and evolution, as loosely coupled environments may be added, removed or replaced. In addition, because the system does not rely on centralised control, it avoids a single point of failure and permits higher levels of concurrent activity, thus enhancing productivity.

An environment may be characterised in terms of five components, namely domain, style, work plan, specification and work record. The domain component describes the area of concern of the environment and sets its context in the overall system under construction. The specification component describes the domain in the representation scheme supported by the style component; it contains design artefacts. Design artefacts created during software development are stored in design databases. A simplistic view of a design database is adopted: a database is a collection of design objects, and each design object is a pair of domain and value. The value determines the state of an object. We focus on only two basic operations on objects, namely read and write. The style component describes the representation scheme supported by the environment. The domain component and the style component prescribe the desirable relations [2] between artefacts within an environment and between different environments. Ideally, the consistency of a design is defined in terms of a collection of consistency constraints on the design objects. Each consistency constraint governs a collection of design objects and specifies acceptable combinations of their states. A design is in a consistent state if all the consistency constraints are satisfied.
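To make this model concrete, the following minimal sketch (in Python, using hypothetical names such as DesignObject and DesignDatabase that are not taken from the paper) represents a design database as a collection of (domain, value) objects supporting read and write, with consistency constraints expressed as predicates over the stored objects; the design is reported as consistent exactly when every registered constraint is satisfied.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class DesignObject:
    domain: str    # the area of concern the object belongs to
    value: object  # the value determines the state of the object


class DesignDatabase:
    def __init__(self) -> None:
        self.objects: Dict[str, DesignObject] = {}
        # Each constraint is a predicate over the database; the design is in a
        # consistent state when every registered constraint is satisfied.
        self.constraints: List[Callable[["DesignDatabase"], bool]] = []

    def write(self, name: str, obj: DesignObject) -> None:
        self.objects[name] = obj

    def read(self, name: str) -> DesignObject:
        return self.objects[name]

    def is_consistent(self) -> bool:
        return all(check(self) for check in self.constraints)


# Hypothetical usage: a constraint stating that two related objects must agree.
db = DesignDatabase()
db.write("x", DesignObject(domain="V1", value="process"))
db.write("x1", DesignObject(domain="V2", value="process"))
db.constraints.append(lambda d: d.read("x").value == d.read("x1").value)
print(db.is_consistent())  # True while the two values agree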
The work plan component describes development actions together with a strategy for producing desirable products. The software process model being examined is described by a collection of rules expressed as "situation-action" pairs, as discussed in [3]. An action will be carried out if the current state of the process model meets the situation described. To complete the specified action, a sequence of operations has to be carried out within a software development environment or across different ones. Operations are carried out automatically or manually. The work record component records the history and current state of the development.

It is important to ensure that all consistency constraints are eventually satisfied. Ideally, when changes have been made, tests could be performed to check whether the consistency constraints are satisfied. However, it is not always possible to state all consistency constraints in advance. Consistency constraints may be elicited during the software development, and some may not be stated explicitly. For instance, a user may create several objects in particular states and assume that they are consistent with each other without explicitly defining any test to check the consistency. Because the consistency of such changes is beyond the control of the supporting environment, we assume that a user who performs any sequence of operations on objects will bring the design from one consistent state to another, provided that the initial state of the design is consistent and the update satisfies all explicitly stated consistency constraints. Under these assumptions, the common causes of inconsistency are:

Incorrect actions. Users perform actions that change design objects in such a way that one or more consistency constraints are violated.

Non-atomic actions. A consistency constraint may govern a collection of design objects. If only one operation can be performed on one object at a time, the constraint is inevitably violated temporarily until all changes to the related objects have been made.

Bad dependencies. Inconsistency may be caused by the uncoordinated actions of different users on shared or related objects. Three classic problems of this kind are described in [4]: lost update, uncommitted dependency and inconsistent analysis (a minimal illustration of the lost update appears after this list).

Different perspectives. Contradictory requirements and unresolved conflicting design decisions may be advanced by users in different environments.
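As a minimal illustration of the bad-dependencies case (a sketch, not code from the paper), the classic lost update arises when two users read the same object and then each writes back a value computed from what they read; the second write silently discards the first.

# Two transactions, T1 and T2, both read the current value of a shared object.
store = {"d": "original"}
t1_seen = store["d"]
t2_seen = store["d"]

# T2 writes its update first; T1 then writes an update based on the stale
# value it read earlier, so T2's change is lost.
store["d"] = t2_seen + " + T2's change"
store["d"] = t1_seen + " + T1's change"

print(store["d"])  # "original + T1's change": T2's update has been lost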
We model the interaction between different environments in terms of transactions and show how the problems described above may be addressed in a unified framework.

2. Scenario

The behaviour of the environment is best illustrated by an example. We show how users collaborate with each other in terms of transactions, deferring details of the enabling techniques to the next section. We assume that at a certain point of the development the specification, which is composed of miniature data flow diagrams, is as shown in figure 1.

Figure 1. Initial states of two consistent specifications: V1 (Ann) and V2, the decomposition of X in V1.

The diagram V2 is a decomposition of the node X in the diagram V1. Ann and Bob are responsible for the development of V1 and V2 respectively. We assume that a diagram is represented by two kinds of objects, namely nodes and links. A node has a name as its sole attribute. A link has three attributes: a name, the node it is connected from and the node it is connected to. The names of nodes and links are assumed to be unique. A data flow diagram is self-consistent if every link has a source and a destination, unless it flows into or out of the diagram. We consider two perspectives to be consistent with each other as long as the data flows which flow into or out of the diagram V2 match those of the node X in the diagram V1. Based on this definition, the initial contents of the specifications stored in the two environments are as follows (# represents an external connection):

    V1               V2
    node(x)          node(x1)
    node(y)          node(x2)
    link(, #, x)     link(, #, x1)
    link(, x, y)     link(, x1, x2)
    link(d4, y, #)   link(, x2, #)
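The inter-viewpoint check defined above can be sketched as follows (illustrative Python, not the environment's implementation; the flow names d1, d2 and d3 are assumed for the example, since only d4 is named in figure 1). A decomposition V2 is consistent with the node x in V1 when the flows entering and leaving V2 match the flows entering and leaving x.

def external_flows(links, node):
    """Names of the flows crossing the boundary of `node`, or of the whole
    diagram when node is None (links whose source or destination is '#')."""
    inflows, outflows = set(), set()
    for name, src, dst in links:
        if node is None:
            if src == "#":
                inflows.add(name)
            if dst == "#":
                outflows.add(name)
        else:
            if dst == node:
                inflows.add(name)
            if src == node:
                outflows.add(name)
    return inflows, outflows


def decomposition_consistent(v1_links, v2_links, node):
    # The flows into/out of `node` in V1 must match those into/out of V2.
    return external_flows(v1_links, node) == external_flows(v2_links, None)


# Links are (name, source, destination) triples; '#' is an external connection.
v1 = [("d1", "#", "x"), ("d2", "x", "y"), ("d4", "y", "#")]
v2 = [("d1", "#", "x1"), ("d3", "x1", "x2"), ("d2", "x2", "#")]
print(decomposition_consistent(v1, v2, "x"))  # True: d1 in and d2 out on both sides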
Now, Bob decides that it is necessary to replace the process x2 with x4, as shown in figure 2. Without committing ourselves to the details of implementation, the operations in the transaction, which we will refer to as T1, can be represented as:

    insert[node(x4)]
    read[flow(, x1, x2)]
    write[flow(, x1, x4)]
    read[flow(, x2, #)]
    write[flow(, x4, #)]
    delete[node(x3)]
    commit

Figure 2. Bob decides to replace the node x2 with x4 (2a: before the modification; 2b: the proposed changes).

For expository simplicity, we use the short notation i1[x4] r1[] w1[] r1[] w1[] d1[x3] c to represent this sequence of operations. The symbols i, d, r and w represent the operations insert, delete, read and write respectively. In addition, either a or c is used as the last operation of a transaction to denote that the transaction is aborted or committed respectively. Objects are referred to by their names, while the exact values of their attributes are ignored. The number in the suffix, '1' in this case, may be used to denote that operations belong to the same transaction in case of possible ambiguity. Because T1 does not violate any known consistency constraints, a commit operation, c, can be used to end the transaction.

In a multi-user environment, it is possible that concurrent tasks are performed by users who access related objects at the same time. One cause of concurrent access is that operations in one perspective may need concerted actions from other perspectives in order to maintain the design in a consistent state. For instance, Ann thinks that it is desirable to produce a data flow from the node x, as shown in figure 3a. Because the node x is further decomposed in the perspective V2, it is beyond her knowledge or authority to decide whether the new flow can be made available; the decision depends on the confirmation of Bob. This dependency can be represented by checks between the two environments. Hence, a transaction in Ann's perspective triggers another transaction which is to be performed in Bob's perspective. As a result, a dependency is established between the two transactions. On the one hand, Ann's transaction should not be committed until Bob's has succeeded. On the other hand, if Ann retracts her changes by aborting the transaction, it would be reasonable to abort the transaction in Bob's perspective as well.

Figure 3. Dependency between transactions across different perspectives (3a: Ann proposes the addition of d4; 3b: Bob's possible response to Ann's proposal).
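The commit and abort dependencies just described can be sketched as follows (an illustrative fragment with an assumed Transaction class and trigger/commit/abort methods, not the paper's actual mechanism): a triggering transaction may not commit until the transactions it has triggered have committed, and aborting it aborts them.

class Transaction:
    def __init__(self, name):
        self.name = name
        self.state = "active"   # active -> committed | aborted
        self.children = []      # transactions triggered in other perspectives

    def trigger(self, name):
        """Start a dependent transaction in another perspective."""
        child = Transaction(name)
        self.children.append(child)
        return child

    def commit(self):
        if any(c.state != "committed" for c in self.children):
            raise RuntimeError(f"{self.name} must wait for its triggered transactions")
        self.state = "committed"

    def abort(self):
        self.state = "aborted"
        for c in self.children:
            if c.state == "active":
                c.abort()  # retracting the proposal retracts the triggered work


ann = Transaction("T_Ann")
bob = ann.trigger("T_Bob")       # Ann's proposal triggers work in Bob's perspective
try:
    ann.commit()                 # refused: Bob's transaction is still active
except RuntimeError as reason:
    print(reason)
bob.commit()
ann.commit()
print(ann.state, bob.state)      # committed committed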
Bob, who is responsible for constructing the diagram V2, decides that neither the process x1 nor the process x2 can produce the requested data flow. Hence, he adds an extra step, named x3, which takes an intermediate result from x1 and produces the original input for x2, while also producing the data flow required by Ann. The changes, which will be collectively referred to as transaction T2, are shown in figure 3b. (Of course, it is also plausible that Bob rejects Ann's proposal, in which case a new proposal may have to be made.) The short notation i2[x3] r2[] w2[] i2[] i2[] c is used to denote the following operations:

    insert[node(x3)]
    read[flow(, x1, x2)]
    write[flow(, x3, x2)]
    insert[flow(, x1, x3)]
    insert[flow(, x3, #)]
    commit

Nevertheless, there is interference between the two transactions on the data flow. (In fact, there are conflicts between the insert and delete operations as well, but we ignore these for the moment to simplify the discussion.) An undesirable interleaving of operations is "r1[] r2[] w2[] w1[]", which allows the update written by T2 to be overwritten by T1. The difference between the expected combined effect and the undesirable one is shown in figure 4. Hence, it is essential to co-ordinate the actions taken to incorporate the proposed changes of the two alternatives into the design.

Figure 4. Comparison between the expected combined effect (4a) and the undesirable effect caused by interference (4b).

A possible way to avoid the interference is to delay the execution of the conflicting actions of T1 until T2 has committed. Hence, a possible execution of the transactions which leaves the database in a consistent state is (with irrelevant sequences of operations represented as "*"):

    T1: *                  r1[] w1[] * c
    T2: * r2[] w2[] * c

As a result, the conflicting operations in T1 are not performed until all operations in T2 have been carried out and committed. To avoid the delay, we may instead opt for accessing the information immediately once the conflicting actions of T2 have finished. The corresponding sequence of execution is:

    T1: *             r1[] w1[] * c
    T2: * r2[] w2[] *             c

This scheme does enhance the concurrency of the execution, but it does so at the expense of possible cascading aborts. The reason is that T1 depends on the tentative update made by T2, since the changes made by T2 have not yet been committed. If T2 is in fact aborted, new operations will have to be performed in T1 to compensate for the changes, or T1 will have to be aborted as well. Hence, the first option is preferable, as it reduces the dependency between different activities.

3. Concurrency Control

Managing complex relations between objects can be tedious and error-prone. As illustrated above, two possible measures can be taken. Firstly, to detect incorrect actions, consistency constraints can be specified between different objects. If a user performs operations on some objects and violates some consistency constraints as a result, the environment will be able to advise the user to carry out further automated or manual operations to update dependent objects. These operations may require co-operation between users in different environments. Secondly, the sequence in which objects are accessed can be controlled so that the effect of a non-atomic action is not visible to others and undesirable dependencies are avoided. A way to coordinate the concurrent actions of users is to allow operations to be interleaved as long as the interleaving is logically equivalent to a serial execution. This requirement can be analysed in terms of the well-established concept of serialisability [5].
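The serialisability requirement can be checked with the standard precedence-graph test, sketched below (illustrative code, not part of the paper; the shared data flow is written simply as "d" because its name is not given in the scenario). A schedule is conflict serialisable exactly when the graph of conflicts between transactions is acyclic.

def conflict_serialisable(schedule):
    """schedule is a list of (txn, op, obj) triples with op in {"r", "w"}.
    Two operations conflict when they come from different transactions,
    touch the same object, and at least one of them is a write."""
    edges = set()
    txns = {t for t, _, _ in schedule}
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                edges.add((t1, t2))  # t1's conflicting operation precedes t2's

    def has_cycle(node, visiting, done):
        visiting.add(node)
        for a, b in edges:
            if a == node:
                if b in visiting or (b not in done and has_cycle(b, visiting, done)):
                    return True
        visiting.discard(node)
        done.add(node)
        return False

    # Serialisable iff the precedence graph has no cycle.
    return not any(has_cycle(t, set(), set()) for t in txns)


# The undesirable interleaving from the scenario, r1[d] r2[d] w2[d] w1[d],
# versus an interleaving equivalent to the serial order T2 then T1.
bad = [("T1", "r", "d"), ("T2", "r", "d"), ("T2", "w", "d"), ("T1", "w", "d")]
good = [("T2", "r", "d"), ("T2", "w", "d"), ("T1", "r", "d"), ("T1", "w", "d")]
print(conflict_serialisable(bad))   # False: T1 -> T2 and T2 -> T1 form a cycle
print(conflict_serialisable(good))  # True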
Here, we focus on the mechanisms on which such an environment can be based and on techniques to enhance concurrency by exploiting knowledge of the software process. A software process is composed of a collection of process steps; between steps, checks are performed to decide the next appropriate actions. We simply treat each process step or check as an individual subtransaction [6]. By definition, the consistency of the distributed design artefacts is maintained if and only if there is a serialisable global schedule S such that S is computationally equivalent to the result of the execution of the transactions at the different sites. Hence, we have to synchronise the operations between global transactions executed across different environments and between local and global transactions executed at the same site.

Centralising the scheduling of all transactions is a simple solution. Unfortunately, it leads to a single point of failure and possibly incurs expensive communication between the centralised scheduler and the distributed development environments, even if most transactions are executed locally. Furthermore, because the actual operations performed in a transaction usually depend on users' ad hoc decisions, it is difficult to distinguish between local and global transactions in advance. As a result, we would have to treat any transaction with uncertain status as a global transaction, thus creating a possible bottleneck at the centralised scheduler. Hence, we prefer techniques which achieve global serialisability through the collaboration of decentralised schedulers.

The requirement above can be achieved by enforcing a rigorous schedule, together with transaction management which guarantees that if a transaction has any operations conflicting with another transaction, the execution of those operations is delayed until the commit or abort of the other transaction has been received [7]. Two-phase locking can be used to enforce the rigorous schedule. We choose a locking protocol because: firstly, the technique is well investigated; secondly, we seek to minimise aborted transactions, and the other well-known technique, timestamp ordering [5], tends to rely on aborting to enforce serialisability; thirdly, the phantom problem may appear if objects are created or destroyed, and while granularity locking [8] provides an efficient solution to the phantom problem, it is unclear what the performance of its counterpart in timestamp-based approaches, e.g. [9], might be. Finally, the locking protocol avoids the problem of cascading abort discussed above.
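A minimal sketch of such a lock manager is given below (assumed interfaces, not the environment's implementation). Shared (read) and exclusive (write) locks are granted only when compatible with locks held by other transactions, and every lock is held until the owning transaction commits or aborts, so conflicting operations of other transactions are delayed as required by a rigorous schedule.

class LockManager:
    def __init__(self):
        self.locks = {}  # object name -> {transaction: "S" or "X"}

    def acquire(self, txn, obj, mode):
        """Try to acquire a lock; return True if granted, False if the
        requesting transaction must wait (the caller would queue here)."""
        holders = self.locks.setdefault(obj, {})
        others = {t: m for t, m in holders.items() if t != txn}
        if mode == "S" and all(m == "S" for m in others.values()):
            holders.setdefault(txn, "S")   # keep an existing X lock, if any
            return True
        if mode == "X" and not others:
            holders[txn] = "X"
            return True
        return False  # conflict: delay until the holder commits or aborts

    def release_all(self, txn):
        """Called only at commit or abort, never earlier (rigorous 2PL)."""
        for holders in self.locks.values():
            holders.pop(txn, None)


lm = LockManager()
print(lm.acquire("T2", "d", "X"))  # True:  T2 locks the shared flow first
print(lm.acquire("T1", "d", "S"))  # False: T1 must wait for T2 to finish
lm.release_all("T2")               # T2 commits and releases its locks
print(lm.acquire("T1", "d", "S"))  # True:  T1 may now proceed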
To ensure deferred commitment, the global transaction manager must ensure that commit operations are sent to the different environments only after all operations have been performed. This is enforced by requiring all local transaction managers to acknowledge the commitment of transactions to the originator of the global transaction. Assuming that there are no communication errors and no aborted subtransactions, we may regard a transaction as typically composed of two phases. In the first phase, the transaction manager creates subtransactions which are to be executed in one or more environments. From each subtransaction, the transaction manager expects an acknowledgement of the result of execution. No commit should be sent to any subtransaction at a different site until all acknowledgements have been received. In the second phase, once all acknowledgements have been received, a commit is sent to all participating sites. Hence, locks held by any subtransaction should not be released to other subtransactions until the acknowledgement from the root of the nested transaction has been received. Finally, since there is no need to coordinate the execution order between global transactions, we can safely assimilate the role of the global transaction manager into the local transaction manager.

4. Ways to Enhance Concurrency

The software process has been described in terms of sequences of actions and checks; each sequence is represented as a subtransaction. Each subtransaction needs to acquire the appropriate locks in order to access the objects. However, there are two useful features of the relationships between subtransactions that we may exploit to enhance concurrency. Firstly, users' decisions on operations do not depend on the objects read by the checks. Secondly, the amount of repair work depends on the result of the checks, and it is highly unlikely that all repairs, which are designed to cater for different problems, have to be performed in any particular instance. Hence, acquiring all the locks before execution is overly conservative. Instead of requiring a transaction to acquire its locks all at once, we treat each sequence of operations in a check or a repair as a separate unit for acquiring locks. This permits a higher degree of concurrency if only a few repair actions actually take place.

Two-phase locking protocols with deferred commitment, as suggested above, may cause deadlocks. There are three kinds of techniques for handling deadlock, namely prevention, detection and avoidance [10]. Deadlock prevention techniques, which require us to lock all objects before execution, undermine the concurrent activities performed by different users. Deadlock detection techniques could incur expensive costs, because resolving a deadlock is not trivial except by aborting transactions in which users may have invested substantial manual effort. The remaining option is to monitor whether any situation arises which may lead to deadlock and, if so, to take action to avoid it. Nevertheless, based merely on knowledge of the locks being held and of the users waiting for locks, traditional deadlock avoidance schemes, for instance [11], may abort operations unnecessarily. To reduce the number of unnecessary abortions, effort could be spent on improving the predictive power of the deadlock avoidance mechanism. The process model is one valuable source of such information. One promising approach that we are exploring is to improve our estimate of the likelihood of deadlock by simulating the future execution of the process model while it is being enacted.
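A deadlock-avoidance monitor of the kind described needs, at a minimum, to detect cycles in the waits-for relation between transactions. The sketch below (illustrative only) shows that basic check; the scheme proposed above would augment the relation with edges predicted by simulating the enacted process model, rather than relying only on the locks currently requested.

def waits_for_cycle(waits_for):
    """waits_for maps each transaction to the set of transactions it is
    waiting on; True means some chain of waits leads back to its start."""
    def reachable(start, current, seen):
        for nxt in waits_for.get(current, set()):
            if nxt == start:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reachable(start, nxt, seen):
                    return True
        return False

    return any(reachable(t, t, set()) for t in waits_for)


# T1 waits for a lock held by T2 while T2 waits for a lock held by T1: deadlock.
print(waits_for_cycle({"T1": {"T2"}, "T2": {"T1"}}))  # True
print(waits_for_cycle({"T1": {"T2"}, "T2": set()}))   # False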
Serialisable execution of the transactions cannot, however, avoid the situation in which contradictory proposals are asserted from different perspectives. Transaction models manage consistency by isolating any inconsistent state from others until the transaction is committed or aborted. Hence, either different perspectives have to be treated as different objects, or at most one perspective can be stored in a specification at a time. We propose to relax this limitation by treating possibly conflicting changes to the same collection of objects as different versions. Objects which are engaged could be made available to others by allowing independent changes to be made to previous consistent versions of the objects. With this new facility, any user may also protect tentative changes from being visible to others until he or she thinks that the updated version is stable enough. In the above example, Bob may create two versions of the diagram, for figure 2b and figure 3b, based on the baseline shown in figure 1. Changes can then be made concurrently without worrying about the impact of one on the other. As a result, the delay experienced from Ann's perspective could also be shortened. However, the price to pay is the requirement to merge the different versions together. Such a process may be assisted by a conflict resolution tool [12].

5. Summary and Further Work

We have outlined our proposal for consistency management through imposing consistency checks and detecting the interference between users' operations in terms of a transaction model. We have also sketched our approach to handling possible deadlock and to tolerating the inconsistency caused by different perspectives by allowing multiple concurrent versions of a perspective. We still have to evaluate the proposed deadlock avoidance scheme against other techniques, both analytically and experimentally. It is also important to investigate the impact of the chosen process model on the performance of the techniques. Versioning brings in its train the problems of configuration management. How can a collection of versions be selected so as to compose a consistent design? Which version should be selected as the baseline for a new version when several versions have already been created? We speculate that the problem could be partly addressed by analysing the work record, which stores the history of transactions, in order to provide hints on organising different versions of design objects.

Acknowledgements

We gratefully acknowledge the support of The Croucher Foundation and the EPSRC Voila project. Thanks also to our colleagues Wolfgang Emmerich, Jeff Kramer and Bashar Nuseibeh for many valuable suggestions.

References

[1] A. Finkelstein, J. Kramer, B. Nuseibeh, L. Finkelstein, and M. Goedicke, "Viewpoints: A Framework for Integrating Multiple Perspectives in System Development", International Journal of Software Engineering and Knowledge Engineering, vol. 2, 1992.
[2] B. Nuseibeh, J. Kramer, and A. Finkelstein, "Expressing the Relationships Between Multiple Views in Requirements Specification", presented at the 15th International Conference on Software Engineering, Baltimore, USA, 1993.
[3] U. Leonhardt, A. Finkelstein, J. Kramer, and B. Nuseibeh, "Decentralised Process Modelling in a Multi-Perspective Development Environment", presented at the 17th International Conference on Software Engineering, Seattle, Washington, USA, 24-28 April 1995.
[4] C. J. Date, An Introduction to Database Systems, 6th ed. Wokingham, England: Addison-Wesley, 1995.
[5] P. A. Bernstein, V. Hadzilacos, and N. Goodman, Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.
[6] J. E. B. Moss, Nested Transactions: An Approach to Reliable Distributed Computing. Cambridge, Mass.: MIT Press, 1985.
[7] Y. Breitbart, D. Georgakopoulos, M. Rusinkiewicz, and A. Silberschatz, "On Rigorous Transaction Scheduling", IEEE Transactions on Software Engineering, vol. 17, 1991.
[8] J. Gray, R. A. Lorie, G. R. Putzolu, and I. L. Traiger, "Granularity of Locks and Degrees of Consistency in a Shared Data Base", in Modelling in Data Base Management Systems. Amsterdam: Elsevier North-Holland, 1976.
[9] M. J. Carey, "Granularity Hierarchies in Concurrency Control", presented at the 2nd ACM SIGACT-SIGMOD Symposium on Principles of Database Systems, Atlanta, Georgia, 1983.
[10] M. Singhal, "Deadlock Detection in Distributed Systems", IEEE Computer, 1989.
[11] D. J. Rosenkrantz, R. E. Stearns, and P. M. Lewis II, "System Level Concurrency Control for Distributed Database Systems", ACM Transactions on Database Systems, vol. 3, 1978.
[12] S. M. Easterbrook, "Elicitation of Requirements from Multiple Perspectives", PhD thesis, Imperial College, London, 1991.