Design of Dynamic Replica Control Algorithm for Distributed Real-Time Databases
Hala Abdel Hameed
Department of Electronics and Communication Engineering, Faculty of Engineering, Mansoura University, Egypt

Hazem M. El-Bakry
Department of Information Systems, Faculty of Computer Science & Information Systems, Mansoura University, Egypt

Torky Sultan
Faculty of Computer Science and Information Systems, Helwan University, Helwan, Egypt

Abstract
Maintaining consistency between the actual state of a real-time object in the external environment and its images, as reflected by all of its replicas distributed over multiple nodes, is one of the most important issues affecting the design of real-time databases. An efficient replica control algorithm can contribute significantly to maintaining this consistency. This paper presents an adaptive dynamic replication control algorithm for medium- and large-scale distributed database systems that maintains an independent consistency degree for each data object. The algorithm uses several system factors to increase the chance of having an updated data item available locally, avoiding remote accesses so that timing constraints can be met, and achieving both availability and consistency for the replicated data as far as possible. A detailed simulation study shows that the algorithm can greatly improve system performance compared to systems without replication or with simple replication strategies such as full replication.

Keywords: Distributed database, real-time databases, real-time transaction, replica control algorithm.

1. Introduction
Many real-time systems are inherently distributed in nature and need to share data among different sites [35-37]; examples include military tracking, medical monitoring, naval combat control, and factory automation. All of these critical systems need data to be obtained and updated in a timely fashion [1].
However, the data required at a particular location is sometimes not available when it is needed, and fetching it from a remote site may take so long that the data becomes invalid, potentially leading to a large number of tardy transactions (transactions that miss their deadlines). One solution to this problem is replication of data in real-time databases. By replicating temporal data items, transactions that need to read remote data can access locally available copies instead of issuing remote data access requests. This helps transactions meet their timing and data freshness requirements. To suit the different needs of distributed real-time systems, such as different data workloads and database specifications, multiple ways of handling replication control and different replication schemes have been proposed. In distributed systems, replication is seen as a cost-effective way to increase availability. However, replication is used for both performance and fault-tolerance purposes, introducing a constant trade-off between consistency and efficiency. There are two main approaches to replication [2,3,4]. In synchronous (also called active or state-machine) replication, a collection of identical servers maintains the same copies of the system state, and client write operations are applied synchronously to all of the replicas; this approach increases the consistency of the replicated data, but it also increases system overhead. In asynchronous (also called lazy or passive) replication, on the other hand, changes introduced by a transaction are propagated to other sites only after the transaction has committed; this approach reduces system overhead at the expense of temporal consistency. Replication algorithms can also be characterized by what is replicated and where. The most extreme form is full replication, in which all data items are replicated to all sites in the distributed system.
The benefit of full replication is that all data can be read locally, which improves performance; this is one of the main benefits of replication. However, it also slows the system down, since updating one copy creates update transactions at all other sites, and issues such as concurrency control and recovery become more complicated. If only some objects of the database are replicated at other sites, the result is a partially replicated database, in which these issues are less complicated than under full replication. Regarding the locations and number of replicas, we can distinguish between static replication algorithms, in which both the locations and the number of replicas are fixed, and dynamic replication, in which the locations and number of replicas change dynamically according to system conditions and data needs. In this work, a replication control algorithm for medium- and large-scale distributed database systems is presented that maintains an independent consistency degree for each data object through an adaptive dynamic replication control algorithm. The algorithm is based on the entire system workload of the distributed sites and increases the chance of having an updated data item available locally, avoiding remote access so that timing constraints can be met, and achieving both availability and consistency for the replicated data as far as possible.

2. Related Work
In replicated database systems, copies of data items can be stored at multiple sites, which provides two complementary features: performance improvement and high availability. On the other hand, data replication introduces its own problems. Access to a data item is no longer controlled exclusively by a single site; instead, access control is distributed across the sites, each storing a copy of the data item. It is necessary to ensure mutual consistency of the replicated data; in other words, the replicated copies must behave like a single copy. This is achieved by preventing conflicting accesses to the different copies of the same data item and by making sure that all data sites eventually receive all updates. A major issue is therefore the development of a replication protocol/policy. The problem of finding an optimal replication scheme in a general network (i.e., a replication scheme with minimum cost for a given read-write pattern) has been shown to be NP-complete for the static case. As we have seen, several classifications of replication are possible; Wiesmann & Schiper [4] distinguish five different techniques [5,6,7,8] (active, weak-voting, certification-based, primary copy, and lazy replication). All of these techniques deal only with fully replicated databases and need a reliable total-order broadcast to propagate the transaction updates.
Several replication models have been proposed based on these techniques, with small variations. All the replication models developed so far can be classified as either optimistic or pessimistic [9]. In optimistic replication, all operations are performed locally at each node as if it were alone in the system, optimistically assuming that there is no conflict with the replicated copies located at the other nodes. This approach improves both response time and performance, since all operations of a transaction are performed locally with no need for remote access to any object, but it leads to temporary inconsistency between different nodes and requires good conflict resolution and compensation policies to eliminate this temporal inconsistency. Pessimistic models, on the other hand, avoid conflicts between concurrent operations running at other nodes by methods such as global locks or a primary copy, in which only the primary site is permitted to update its items. There have been a number of research papers on data replication in traditional database systems, where sporadic efforts have been made to develop different types of protocols and policies [10, 11, 12, 13, 14], but not for real-time systems. In the literature there are few replication models for real-time database systems [15,16,17]. One of these models is found in [18], where full replication is used in medium- or large-scale distributed real-time database systems and two replication algorithms are presented: the ORDER algorithm, designed to work in an environment where all data types and relations in the system are known a priori, and, to improve scalability, an enhanced algorithm called On-demand Real-time Decentralized Replication with Replica Sharing (ORDER-RS). In [19], Peddi et al.
present a replication algorithm called Just-In-Time Real-Time Replication (JITRTR), which creates replication transactions based on clients' data requirements in a distributed real-time object-oriented database. In [20], a replication protocol named PRiDe is described, designed for optimistic replication with forward conflict resolution in distributed real-time databases. That model defines four phases for the replication protocol: the local update phase, the propagation phase, the integration phase, and the stabilization and compensation phase. Transactions are executed locally and the replicated items are updated as if each node were alone in the system; after a transaction completes, the model checks for conflicts at the other nodes, and a conflict resolution policy is used to resolve them.

3. Model Overview
In this paper, the replica control algorithm used in DoMORE (Dynamic allocation Module for Replication in Distributed Real-time DataBase) is described. DoMORE is a replication model based on increasing the probability that real-time transactions find the up-to-date data they need locally, so that they can meet their timing constraints without having to fetch data remotely from other sites. This goal is achieved by dynamically allocating data to distributed nodes according to their access patterns. The model also allows a different degree of consistency for each data object, calculated dynamically according to different factors (system requirements and the overall workload). DoMORE allows most transactions to execute and commit locally, as if each transaction were executed in a local, centralized database, and uses a strict 2PL commit protocol for distributed transactions, upholding the ACID properties [21]. Transaction processing is guaranteed to be serializable with respect to other local transactions.
As mentioned, the objective of the replication model is to give clients the illusion of a service provided by a single server, with no client knowledge of the data distribution behind it. Of course, maintaining redundant data adds overhead to the system, which can be reduced by exploiting the weak consistency semantics of applications. This trade-off between consistency and system cost is the main problem of all replication models. In general, consistency is a term for discussing the correctness of data in a database: the database is said to be consistent if all consistency predicates hold. For a real-time database, consistency predicates can refer to the relationships, from a time point of view, between database objects and the external environment modeled by the database. In a replicated system we can define three different types of consistency predicates [22]: external temporal consistency, which deals with the relationship between an object of the external world and its image on the server; inter-object temporal consistency, which is the relationship between different objects or events within a single node, including the relationship between a temporal data item and the non-temporal data items that depend on it; and mutual consistency, which reflects the relationship between an object and its copies (replicas) at different remote sites. DoMORE employs both eager and lazy replication, according to the types of database items, and it guarantees that all transactions read updated, valid data items, maintaining both temporal consistency and mutual global consistency [23,24,25]. In DoMORE, global consistency is achieved through continuous propagation and integration of updates (typically, transaction updates are propagated and integrated as soon as possible, but propagation or integration may be deferred under certain circumstances, such as a transient overload). We define a segment as a group of data objects that share properties, capturing some aspects of the application semantics; a segment is allocated to a specified subset of the nodes (possibly temporarily inconsistent with each other).
Each segment is considered a partition of the database and a unit of replica allocation, which can be individually replicated based on the replication requirements specified by all the clients at a given database node. If the specification indicates that a data object will never be used by any client on a node, it does not need to be replicated to that node; a given database object may thus be unavailable at a node, but the clients at that node need not be aware of this, because they will never access it. This is called virtual full replication, since the clients' assumption of full replication of the database remains valid. Traditional RDBMSs are based on the assumption that data resides primarily on disk. In a dynamic runtime environment, data might be on disk or cached in main memory at any given moment. Because disk input/output (I/O) is far more expensive than memory access, main-memory databases have been adopted, owing to the high performance of memory accesses and the decreasing cost of main memory [29,30]. Since access to main memory is so much faster than disk access, we can expect transactions to complete more quickly in a main-memory system. Hence, in distributed transactions that use a lock-based commit protocol, locks are not held as long, and lock contention may not be as important as it is when the data is disk-resident. DoMORE is also based on the concept of Virtual Full Replication (ViFuR) [26], introduced in DeeDS [27,28]. ViFuR creates a perception of full replication by ensuring that all used data objects are available at the local node, so the user can interact with the database as if it were fully replicated and all data were available locally. This approach retains the advantages of full replication while reducing resource usage compared to full replication, and it provides transaction timeliness, simplified addressing of communication between nodes, and support for fault tolerance.
Thus, the database user cannot distinguish a virtually fully replicated database from a fully replicated one.

3.1 System Model
[Fig. 1: The internal model components at node N — admission controller, monitor, transaction manager, replication manager, priority assignment and scheduler, transaction queue, and the real-time database and system resources, connected to the network.]

The system model consists of a set of distributed real-time database systems containing a group of main-memory real-time databases connected by high-speed networks. It is assumed that reliable real-time communication is maintained, i.e., any message sent over the network is eventually delivered and has a predictable transmission time [25]. The whole database is segmented across the different nodes. A firm real-time database model is used: tardy transactions (transactions that have missed their deadlines) are aborted. Figure 1 illustrates the main components of the model. The monitor periodically collects the workload data of the entire system and sends it to the admission controller; it also checks the system performance periodically and records statistics about the transactions' miss ratio. The admission controller is responsible for accepting or rejecting remote requests from other nodes. The transaction manager is responsible for generating local update transactions and replication transactions for its node. We assume here that transactions are executed sequentially and there are no concurrent transactions; if concurrent transaction execution is required, the algorithm can be extended to allow it, e.g., through a locking scheme. In future work we will consider concurrent execution of transactions and use a concurrency control protocol suitable for real-time systems. In the priority assignment and scheduler component, scheduling is needed to choose an action to execute when several are enabled, so each transaction is assigned a priority (such as prioritizing local operations); local read and update actions should have precedence over propagation and replication actions. For simplicity, the Earliest Deadline First (EDF) policy is used in this paper, and transactions are processed accordingly. As mentioned above, the model implements both eager and lazy replication. For eager replication, where synchronous replication is performed, all the sites participating in a transaction's execution engage in an atomic commit protocol (ACP). The model uses a strict 2PL commit protocol to ensure consistent termination of distributed transactions despite site and communication failures. The two-phase commit (2PC) protocol [31] is the simplest and most widely used ACP. Its primary goal is to ensure that all participants agree on whether a transaction commits or aborts.
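The voting/decision flow of 2PC described above can be sketched in a few lines. This is a minimal in-memory illustration, not the paper's implementation; the `Participant` class and the `will_commit` flag are assumptions standing in for a site's local ability to commit.

```python
# Minimal sketch of the two-phase commit (2PC) idea: all participants
# vote in phase 1; the global decision in phase 2 is "commit" only if
# every participant voted yes.

class Participant:
    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.state = "init"

    def prepare(self):
        # Phase 1: vote yes only if the local transaction can commit.
        self.state = "prepared" if self.will_commit else "aborted"
        return self.will_commit

    def finish(self, decision):
        # Phase 2: apply the coordinator's global decision.
        self.state = decision


def two_phase_commit(participants):
    # Phase 1 (voting): collect votes from every participant.
    votes = [p.prepare() for p in participants]
    # Phase 2 (decision): commit only if all voted yes.
    decision = "committed" if all(votes) else "aborted"
    for p in participants:
        p.finish(decision)
    return decision
```

A single "no" vote aborts the transaction at every site, which is exactly the property that makes 2PC suitable for keeping shared data items mutually consistent across replicas.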
Since 2PC consumes a substantial amount of a transaction's execution time, due to the cost of its coordination messages and the forced log writes to stable storage required for recovery, a number of 2PC variants appear in the literature, most notably presumed abort (PrA) and presumed commit (PrC) [32]. As opposed to PrA, PrC is designed to reduce the cost associated with committing transactions rather than aborting them.

3.2 Data Model
The data model used in this paper divides the data into two main categories. Sensor data objects arise because real-time systems interact with the environment through various sensors, e.g., temperature and pressure sensors, and it is important to maintain consistency between the state of the environment as perceived by the sensors and its actual state. The sensed data is processed further to derive new data, called derived data, that depends on past sensor data; for example, the temperature and pressure readings of a reaction may be used to derive the rate at which the reaction appears to be progressing, which in turn could be used to derive further data. We can thus define two different types of data items: temporal data items, which change with time and have a validity interval, and non-temporal data items, which do not change with time and thus have no validity interval. The site or node holding a segment of data items is called the primary site (Psite) for them. For a specific data item, the copy at the primary site is called the primary copy, and the copies replicated elsewhere are called replicated copies or replicas. As illustrated in previous work [33], the proposed database framework is used, in which there are two types of data located at each node: local and shared data. A local data object can only be updated by its primary site, while shared data items can be updated by any site in the network in cooperation with their primary site.
It is assumed that for each site a backup site is predefined, to be used in case of site failure and to guarantee a minimum degree of replication. Each replica has an associated version number (VN), which reflects the number of the last update applied to the data object. When an update is received, the receiving (primary) node increases the version number of the local replica of the updated object. Each object o ∈ O has the following structure:

o = (Id, Type, Name, PsiteId, Value, TS, VI, BUF, VN, FR)

[Fig. 2: The structure of a real-time attribute — Id, Type, Name, PsiteId, Value (current value), TS (timestamp), VI (validity interval), BUF, VN, FR.]

Id: the unique identifier of the object on its primary site.
Type: whether it is a local or shared data object.
PsiteId: the id of the object's primary site, where the object originates. This attribute indicates whether the object is a primary data object or a replica: if PsiteId equals the local site id, the object is a primary object originated at this site; otherwise it is a replica of a remote data object.
Value: stores the latest attribute value captured by the last update.
TS: stores the time at which the attribute's value was last updated.
VI: the object's absolute validity interval, i.e., the length of the time interval following the timestamp during which the object is considered to have absolute validity.
BUF: the basic update frequency; each temporal data object is updated periodically at a given update frequency received from its primary site.
FR: the object's predefined freshness requirement, if any, used to maintain the consistency level between the different replicas of the same data object scattered over all sites.
VN: the version number, which reflects the number of the last update of the object.
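The object structure above maps naturally onto a record type. The sketch below renders it as a Python dataclass; the field names follow the tuple o = (Id, Type, Name, PsiteId, Value, TS, VI, BUF, VN, FR), while the two helper methods are our own assumptions about how PsiteId and the validity test of Definition 5 would be used.

```python
# Sketch of the real-time object structure of Fig. 2.
from dataclasses import dataclass

@dataclass
class RTObject:
    Id: int        # unique identifier on the primary site
    Type: str      # "local" or "shared"
    Name: str
    PsiteId: int   # primary site id; equals the local site id for primary copies
    Value: float   # latest value captured by the last update
    TS: float      # timestamp of the last update
    VI: float      # absolute validity interval
    BUF: float     # basic update frequency (update period)
    VN: int        # version number of the last update
    FR: float      # freshness requirement, if any

    def is_primary(self, local_site_id: int) -> bool:
        # Primary copy iff the object originates at the local site.
        return self.PsiteId == local_site_id

    def is_active(self, current_time: float) -> bool:
        # Definition 5's test for sensor data: (CurrentTime - TS) <= VI.
        return (current_time - self.TS) <= self.VI
```

Note how PsiteId doubles as the primary/replica flag, so no separate boolean attribute is needed.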
3.3 Transaction Model
For local data objects, the read phase is performed locally at each site, for only the active replicas located at that site. The propagation phase, which replicates a transaction's updates of a local data item to remote nodes, is delayed until after the transaction commits, and the propagation messages received at a node from remote transactions are integrated locally according to a local scheduling policy. For a shared data item, the commitment of an updating transaction is conditional on the agreement of at least the item's primary site, obtained using the 2PC protocol. The integration task maintains local conflict detection data structures and is responsible for making updates by remote transactions visible to local transactions; it is serialized with respect to local transactions. A transaction T and all of its updates are said to be integrated on node N if T has committed locally, or if T has been committed and propagated to N from another node and has been processed by the local integration task on N. Transactions are divided into read (query) transactions, in which every operation reads a data object, and update transactions, which contain at least one write operation. Transactions can also be classified as local or remote: a transaction is local if all its operations are performed at the local site, and remote if it has at least one remote operation. Note that only single-operation transactions are considered here.

3.4 Formal Definitions
Before describing the algorithm, we formally define some of the terms used in its description. When a temporal data item (whether local or shared) is updated at a specific node, the replication degree (RD) defines the number of nodes to which it must be replicated, i.e., the number of propagation messages that must be created by the replication manager.
The replica allocation set (RAS) defines the set of sites or nodes to which the replica updates (propagation messages) must be sent.

Definition 1: The replication degree RD is the number of sites/nodes to which the propagation messages will be sent for a particular update. It is calculated by a replication degree function RDF, which takes specific parameters (node workload, object freshness requirements (OFR), user-defined level). Note that the upper bound for RD is the total number of nodes in the distributed system.

Definition 2: The replica allocation set RAS of a propagation transaction T_propagation = (Tid, LsiteId, RsiteId, WS, GUF, DL, e) is the set of remote nodes hosting replicas of objects in the write set of T. That is:

RAS(T) = { n ∈ N | ∃ o ∈ WS(T) : R(o, n) ≠ φ }

To determine the RAS, the model maintains for each data object, at its primary site, a data structure called a needlist: an array containing the list of site IDs requesting that data item, ordered so that the site demanding the object most frequently is at the head.

Definition 3: Let D = (O, R, S) be a distributed replicated database, and let r ∈ R be the replica of object o ∈ O on node n ∈ N. The needlist NL(o) for o is a vector of S elements containing the latest S sites that used this object. The vector has the form <S1id, S2id, ..., S(N-1)id>, where each element represents the identifier of a unique node or site that recently used this data object.

void append(Si(id))      // Append the i-th site to the needlist.
int HighestPriority()    // Return the SiteId located at the head of the array.
void RAF(Si(id))         // Append the i-th site to the RAS.
void RemoveSite(Si(id))  // Remove the i-th site from the needlist after performing the RAF function on it.
// HighestPriority() and RemoveSite() both return -1 if the queue is empty.
Fig.
3: Methods performed in the needlist.

The needlist (NL) implements the methods shown in Fig. 3. The first method adds a new site id to the needlist of the intended object; it is invoked when the object is accessed or updated by that node. A newly added site is placed at the head of the array, and the order of the elements in the array indicates the priority with which each site is selected for addition to the RAS.

Definition 4: For a propagation transaction T_propagation executing at site n ∈ N, the replica allocation function RAF is the function that maps the node at the head of the needlist to the replica allocation set RAS(T) of that transaction.

As illustrated earlier, each node N hosts a set of temporal data objects as their primary site and also maintains a set of replicas of temporal data objects hosted by other nodes. All replicas of a particular data item are updated using the fresh value from their primary copy. A replica at a remote site that periodically receives updates from its primary site within its validity interval VI is called an active replica; otherwise it is an inactive replica. An active replica becomes inactive if it is not updated within its validity interval.

Definition 5: For a set of replicas R of logical objects in a set O, a replica r ∈ R of a logical object o ∈ O (where o is a sensor data object) on a particular node is called an active replica R(o, N) if:

(CurrentTime − TS(o)) ≤ VI(o).
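The needlist interface of Fig. 3 can be sketched as a small class. The paper gives only method signatures, so the implementation below is an assumption: it follows the prose (a newly requesting site goes to the head of the array) and renders the four methods in Python naming.

```python
# Assumed implementation of the needlist of Fig. 3: append, highest
# priority, RAF (move a site into the RAS), and removal after RAF.

class NeedList:
    def __init__(self):
        self._sites = []   # most recent requester kept at the head
        self.ras = []      # Replica Allocation Set built via raf()

    def append(self, site_id):
        # A requesting site moves (or is inserted) to the head of the
        # array, giving it the highest priority for replica allocation.
        if site_id in self._sites:
            self._sites.remove(site_id)
        self._sites.insert(0, site_id)

    def highest_priority(self):
        # SiteId at the head of the array, or -1 if the list is empty.
        return self._sites[0] if self._sites else -1

    def raf(self, site_id):
        # Map a site from the needlist into the RAS.
        self.ras.append(site_id)

    def remove_site(self, site_id):
        # Remove a site after raf() has been applied to it; -1 if empty.
        if not self._sites:
            return -1
        self._sites.remove(site_id)
        return site_id
```

A typical allocation round would repeatedly take `highest_priority()`, pass it to `raf()`, and then `remove_site()` it, until RD sites have been placed in the RAS.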
For derived data objects, an active replica is instead determined according to relative consistency. For example, if we consider two objects O1 and O2 with timestamps TS1 and TS2 respectively, O1 and O2 satisfy the relative consistency requirement, called the relative validity interval RVI, if:

|TS1 − TS2| ≤ RVI

Definition 6: For two replicas r1, r2 ∈ R of logical objects O1, O2 in a set O (where the objects are derived data objects) on a particular node, the replica R(Oi, n) is called an active replica if:

|TS1 − TS2| ≤ RVI

3.5 Replica Control Algorithm
The goal of the proposed replica control algorithm is to gain efficiency over the Virtual Full Replication (ViFuR) strategy by dynamically changing the replication degree (RD) and the replica allocation set (RAS). The main question for any replication model is how to determine an appropriate replication level and placement for an object. In some replication schemes the replication level for an object is predefined (e.g., 5 copies), leaving the run-time system to determine the placement of the five replicas in the network [34], while others also specify the locations of the replicas. These interfaces require the system designers to map the desired characteristics of the (replicated) object, such as a fixed level of availability, to a replication level and placement that will achieve those characteristics. For this algorithm, a new replication scheme is defined in which neither the replication degree nor the allocation sites are fixed: for each object, the replication degree and the allocation table change dynamically according to data access and system requirements at each site. For example,
if we have N nodes, each holding a set of data items (a segment) for which it is the primary site, the replication manager (RM) is responsible for dynamically calculating the replication degree RD, i.e., the number of remote sites to which each object update must be propagated. RD = N − 1 corresponds to full replication, and RD > 0 is required for fault-tolerance purposes. RD is calculated by a separate module in the replication manager according to specific factors affecting its value (here, we consider only two factors: the system workload and a predefined freshness requirement for each data object). Using the workload L as a factor, RD_L is calculated as follows. If we have N sites to which a new replica may be propagated, and the entire workload of a node is expressed as a percentage, we can divide that workload range into n intervals, the width of each being x = 100/n. Then:

100 − (RD_L · x) ≤ L < 100 − ((RD_L − 1) · x),   1 ≤ RD_L ≤ n    (1)

For example, if there are 5 sites in the network, then x = 100/5 = 20% of workload, and when the entire workload is between 40% and 60%, equation (1) gives RD_L = 3. Because different factors can affect RD differently, the following weighted-average equation can be used to calculate the value of RD used by the algorithm:

RD = (W1 · RD_1 + W2 · RD_2 + ... + Wm · RD_m) / m    (2)

where m is the number of factors and Wi is the weight of each factor, with W1 + W2 + ... + Wm = 1. The freshness requirement (FR) of each object is taken as the other factor affecting the replication degree (the FR is given for each object); accordingly, RD can be calculated using (3):

RD = (W1 · RD_L + W2 · RD_FR) / 2    (3)

The model divides the replication process into four phases: the read phase, the update phase, the propagation phase, and the cooperation and integration phase. As illustrated previously, data objects are classified as either local or shared data objects.
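The RD calculation above can be sketched directly. Solving inequality (1) for RD_L gives a ceiling expression; the function names and the clamping to [1, n] below are our own rendering, chosen so the sketch reproduces the paper's example (5 sites, workload between 40% and 60%, RD_L = 3), and `combine_rd` follows the two-factor form of equation (3).

```python
# Sketch of the replication-degree calculation of Eqs. (1) and (3).
import math

def rd_from_workload(load_pct, n_sites):
    # Eq. (1): 100 - RD*x <= L < 100 - (RD-1)*x, with x = 100/n.
    # Solving for RD gives RD_L = ceil((100 - L) / x), clamped to [1, n].
    x = 100.0 / n_sites
    rd = math.ceil((100.0 - load_pct) / x)
    return max(1, min(n_sites, rd))

def combine_rd(rd_l, rd_fr, w1=0.5, w2=0.5):
    # Eq. (3): RD = (W1*RD_L + W2*RD_FR) / 2, with W1 + W2 = 1.
    return (w1 * rd_l + w2 * rd_fr) / 2
```

Note the intuition behind equation (1): the higher the workload, the lower RD_L, so a heavily loaded system propagates each update to fewer sites.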
For local data items, the primary site is responsible for both the updating and the propagation phases, while for shared data items any site can perform the update, and only the primary site takes responsibility for the propagation phase. When to update a replica is another decision made by the model: the primary site begins pushing replicas to the other sites when it receives a new value for a specific data item from the external environment (sensor data). The algorithm calculates RD as illustrated in the previous section and determines the RAS by mapping the sites in the needlist to the RAS using the RAF function. When the primary site pushes an update to a specific site and that data item is used by a local transaction there, the site sends, after committing the transaction, a need request (need req.) to the primary site, which in turn adds it to its needlist. Note that if the selected site uses that data item again, it will not send another need request unless the primary has sent a new replica. Algorithm 1 shows the pseudocode of the proposed replica control algorithm when a node receives a read transaction, and Figure 3 illustrates the steps when an update transaction is received at node N. When a read transaction is received at node N, the algorithm first checks for the existence of active replica(s) by checking the validity interval of the required object(s); if an active replica exists, it is used by the transaction, otherwise a request transaction is created and sent to the primary site holding the object(s). Note that we assume a transaction requests objects from only one site.
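The read-transaction path just described can be sketched as a single function: use a locally available active replica if one exists, otherwise fall back to a request transaction sent to the primary site. The replica store and the `request_primary` callback are hypothetical stand-ins for the model's components, not the paper's actual interfaces.

```python
# Sketch of the read-transaction handling described above.

def handle_read(obj_id, local_replicas, current_time, request_primary):
    """local_replicas: dict mapping obj_id -> (value, ts, vi);
    request_primary: callback that fetches the value from the primary site."""
    entry = local_replicas.get(obj_id)
    if entry is not None:
        value, ts, vi = entry
        # Active-replica test (Definition 5): read locally if still valid.
        if (current_time - ts) <= vi:
            return value
    # No active local replica: create a request transaction to the
    # primary site (remote access).
    return request_primary(obj_id)
```

The local fast path is what the dynamic allocation aims to maximize: the more often the active-replica test succeeds, the fewer transactions pay the remote-access latency.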
If an update transaction is received, a check is made as to whether the requested object(s) is a primary object (local or shared), since, as previously illustrated, the primary site is responsible for updating its local data objects. If the requested object is a shared data object, a validity check is done, and the cooperation phase is carried out between this site and the object's primary site. When the primary site receives the update request, it first checks for conflicts using the version number (VN) of the object, and then starts the propagation phase to the other sites using the RD and RAS values generated by the Replication Manager. The Transaction Manager must differentiate between update transactions and propagation transactions to avoid an endless propagation cycle; a simple transaction type field suffices for this.

Fig. 4 Transaction miss ratio

4. Performance Evaluation

A full simulation environment has been developed to test the proposed allocation algorithm. We have chosen system parameter values that are typical of today's technology capabilities, e.g., network delays. The settings for the system parameters are given in Table 1, while the settings for the user transaction workload are given in Table 2. A user transaction consists of operations on temporal data objects, including both sensor and derived data objects. Transactions arrive according to a Poisson process; the arrival rate λ is varied from 10 to 80 transactions per second, which corresponds to an applied workload of approximately 50% to 100%, respectively. The execution time of one operation is between 100 and 1000 microseconds, while the transaction execution time is exponentially distributed with mean 3. The sensor execution time is uniformly distributed in (0.1-1) seconds. The slack factor of transactions is set to 5, and the effect of changing it is studied in the simulation results.
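The workload described above can be generated as in the following sketch (our own code, not the authors' simulator; the function and field names are assumptions). It draws Poisson arrivals at rate λ, per-operation costs uniform in 100 to 1000 microseconds, and mean-3 exponential transaction execution times, as stated in the text:

```python
# Hedged sketch of the simulated workload; names are assumptions.
import random

def generate_workload(lam, horizon_s, seed=0):
    """Poisson arrivals at rate lam tx/s over horizon_s seconds."""
    rng = random.Random(seed)
    t, txns = 0.0, []
    while True:
        t += rng.expovariate(lam)                   # exponential gaps -> Poisson process
        if t > horizon_s:
            return txns
        txns.append({
            "arrival": t,
            "op_time_us": rng.uniform(100, 1000),   # one-operation cost
            "exec_time": rng.expovariate(1 / 3.0),  # mean-3 exponential
        })

txns = generate_workload(lam=10, horizon_s=5.0)     # roughly 50 transactions
```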
The remote data ratio is the ratio of the number of remote data operations (operations that access data hosted by other sites) to the number of all data operations. The remote data ratio is set to 20%, which means 20 percent of transaction operations are remote data operations.

Table 1: System Parameter Settings

    Parameter                    Value
    Node #
    Network Delay                1-3 ms
    Temporal Derived Data #      200/Node
    Temporal Sensor Data #       100/Node
    Base Update Frequency        Uniform(0.1-1) sec
    System Load (%)

Table 2: Transaction Parameter Settings

    Parameter                            Value
    Sensor Transaction #                 300
    User Transaction #                   700
    Write Operation Time                 5 ms
    Read Operation Time                  3 ms
    Slack Factor                         5
    Remote Transaction Ratio             50%
    Read/Write Operation Prob.           ( ) respectively
    Execution Time of Sensor Tran.       Uniform(0.1-1) s
    Execution Time of User Tran.         Exp(3)
    Execution Time of Propagation Tran.  Exp(3)
    Transaction Arrival Rate             (20-80) Trans/s

All simulation results are based on at least ten runs. To evaluate our algorithm, we use no replication and full replication as two baseline protocols; these are the simplest, but widely used, replication control strategies. The transaction miss ratios and numbers of messages (which reflect the network overhead) of the three algorithms are shown in Fig. 4. As the figures show, among the three algorithms the proposed algorithm gives the best transaction miss ratio under the different transaction workloads.

5. Conclusion and Future Work

In this paper, we present a dynamic replication control algorithm designed for medium and large scale distributed real-time database systems. The algorithm is designed to receive both periodic and aperiodic transactions, and the system has no a priori knowledge of their data requirements. The replicas of the data items are dynamically allocated to distributed nodes according to their access pattern.
The model also allows a different degree of consistency for each data object, calculated dynamically according to several factors, and replicas are created dynamically at sites according to their data and access needs. A detailed simulation study shows that our algorithm can greatly improve system performance compared to systems without replication or systems with a simple full replication strategy. It is desirable to enhance and extend our algorithm to deal with transactions consisting of many operations instead of one operation, and to deal with other parameters that affect performance in
distributed real-time databases, such as using one of the concurrency control protocols to enable concurrent execution of transactions.

References

[1] K. Ramamritham, "Real-time databases", International Journal of Distributed and Parallel Databases, Vol. 1, No. 2.
[2] J. Gray, P. Helland, P. O'Neil, D. Shasha, "The dangers of replication and a solution", in Proc. of the ACM SIGMOD International Conference on Management of Data, Vol. 25, No. 2 of ACM SIGMOD Record, New York, ACM Press, 1996.
[3] F. B. Schneider, "Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial", ACM Computing Surveys, Vol. 22, No. 4, 1990.
[4] M. Wiesmann, A. Schiper, "Comparison of Database Replication Techniques Based on Total Order Broadcast", IEEE Transactions on Knowledge and Data Engineering, Vol. 17, No. 4, 2005.
[5] F. Pedone, R. Guerraoui, A. Schiper, "The Database State Machine Approach", Journal of Distributed and Parallel Databases and Technology, Vol. 14, No. 1, July 2003.
[6] J. Holliday, D. Agrawal, A. E. Abbadi, "The Performance of Replicated Databases Using Atomic Broadcast Group Communication", Technical Report TRCS99-11, Computer Science Dept., Univ. of California, Santa Barbara.
[7] F. Pedone, "The Database State Machine and Group Communication Issues", PhD Thesis, École Polytechnique Fédérale de Lausanne, Switzerland.
[8] B. Kemme, G. Alonso, "A New Approach to Developing and Implementing Eager Database Replication Protocols", ACM Transactions on Database Systems, Vol. 25, No. 3.
[9] Y. Saito, M. Shapiro, "Optimistic replication", ACM Computing Surveys, Vol. 37, No. 1, 2005.
[10] S. H. Son, "Replicated data management in distributed database systems", SIGMOD Record, Vol. 17, No. 4, pp. 62-69, 1988.
[11] O. Wolfson, S. Jajodia, Y. Huang, "An adaptive data replication algorithm", ACM Transactions on Database Systems (TODS), Vol. 22, No. 2, 1997.
[12] B. Kemme, G. Alonso, "A Suite of Database Replication Protocols based on Group Communication Primitives", in Proceedings of the 18th International Conference on Distributed Computing Systems (ICDCS '98), IEEE Computer Society, Washington, DC, USA.
[13] M. Wiesmann, F. Pedone, A. Schiper, B. Kemme, G. Alonso, "Understanding replication in databases and distributed systems", in Proc. 20th International Conference on Distributed Computing Systems (ICDCS 2000), Taipei, Taiwan, R.O.C.
[14] S. Cook, J. Pachl, I. Pressman, "The optimal location of replicas in a network using a READ-ONE-WRITE-ALL policy", Distributed Computing, Vol. 15, No. 1.
[15] M. Xiong, K. Ramamritham, J. Haritsa, J. Stankovic, "MIRROR: A state-conscious concurrency control protocol for replicated real-time databases", in Proc. 5th IEEE Real-Time Technology and Applications Symposium (RTAS '99), 1999.
[16] G. Mathiason, S. Andler, "Virtual full replication: Achieving scalability in distributed real-time main-memory systems", in Proc. of the Work-in-Progress Session of the 15th Euromicro Conference on Real-Time Systems, 2003.
[17] J. Barreto, "Information sharing in mobile networks: a survey on replication strategies", Technical Report RT/015/03, Instituto Superior Técnico / Distributed Systems Group, Inesc-ID Lisboa.
[18] Y. Wei, A. Aslinger, S. H. Son, J. A. Stankovic, "ORDER: A Dynamic Replication Algorithm for Periodic Transactions in Distributed Real-Time Databases", in Proceedings of Real-Time and Embedded Computing Systems and Applications.
[19] P. Peddi, L. DiPippo, "A replication strategy for distributed real-time object-oriented databases", in Fifth IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, 2002.
[20] S. Syberfeldt, "Optimistic Replication with Forward Conflict Resolution in Distributed Real-Time Databases", Dissertation No. 1150, Linköping, 2007.
[21] T. Haerder, A. Reuter, "Principles of transaction-oriented database recovery", ACM Computing Surveys, Vol. 15, No. 4.
[22] M. Shapiro, K. Bhargavan, Y. Chong, Y. Hamadi, "A formalism for consistency and partial replication".
[23] T. Gustafsson, "Maintaining Data Consistency in Embedded Databases for Vehicular Systems", Licentiate Thesis, Linköping Studies in Science and Technology, Linköping University.
[24] S. Gustavsson, S. F. Andler, "Self-stabilization and eventual consistency in replicated real-time databases", in Proceedings of the First Workshop on Self-Healing Systems (WOSS '02), Charleston, SC, USA, ACM.
[25] S. Gustavsson, S. F. Andler, "Continuous consistency management in distributed real-time databases with multiple writers of replicated data", in Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS '05), Workshop 2, Vol. 03, 2005.
[26] S. Gustavsson, S. F. Andler, "Real-time conflict management in replicated databases", in Proceedings of the Fourth Conference for the Promotion of Research in IT at New Universities and University Colleges in Sweden (PROMOTE IT 2004), Karlstad, Sweden, Vol. 2.
[27] S. Andler, J. Hansson, J. Eriksson, J. Mellin, M. Berndtsson, B. Eftring, "DeeDS towards a distributed and active real-time database system", SIGMOD Record, Vol. 25, No. 1.
[28] S. Andler, J. Hansson, J. Eriksson, J. Mellin, M. Berndtsson, B. Eftring, "An overview of the DeeDS real-time database architecture", in Proceedings of the Sixth International Workshop on Parallel and Distributed Real-Time Systems.
[29] J. Baulier, P. Bohannon, S. Gogate, C. Gupta, S. Haldar, "DataBlitz storage manager: Main memory database performance for critical applications", in Proceedings of the 1999 ACM SIGMOD International Conference on Management of Data, ACM SIGMOD Record, Vol. 28, No. 2, 1999.
[30] G. Mathiason, "Segmentation in a distributed real-time main memory database", Master's thesis, University of Skövde, Sweden.
[31] R. Elmasri, S. Navathe, "Fundamentals of Database Systems", 6th Edition, Addison Wesley.
[32] C. Mohan, B. Lindsay, R. Obermarck, "Transaction management in the R* distributed database management system", ACM Transactions on Database Systems, Vol. 11, No. 4.
[33] T. Sultan, H. M. El-Bakry, H. A. Hameed, "General Framework for Modeling Replicated Real-Time Database", International Journal of Electrical and Computer Engineering, Vol. 4, No. 8.
[34] D. L. McCue, M. C. Little, "Computing Replica Placement in Distributed Systems", position paper, Computing Laboratory, University of Newcastle upon Tyne; in Proceedings of the IEEE Second Workshop on Replicated Data, Monterey, pp. 58-61.
[35] H. Abdel Hameed, H. M. El-Bakry, T. Sultan, "A General Framework for Modeling Replicated Real-Time Database", International Journal of Electrical and Computer Engineering, Vol. 4, No. 8, 2009.
[36] H. M. El-Bakry, N. Mastorakis, "A Fast Searching Protocol for Fully Replicated System", in Proc. of the 13th WSEAS International Conference on Computers, Rodos, Greece, July 22-25, 2009.
[37] H. M. El-Bakry, N. Mastorakis, "Fast Word Detection in a Speech Using New High Speed Time Delay Neural Networks", WSEAS Transactions on Information Science and Applications, Issue 7, Vol. 5, July 2009.
// Define NeedList[n]: array of nodes for each object o ∈ O, initially empty
// LSiteId: the id of the local site where the transaction is initiated
// RSiteId: the id of the remote site to which the transaction is sent
// O = (Id, Type, Name, PSiteId, Value, TS, VI, BUF, VN, FR)
// T_read: {(Tid, LSiteId, RSiteId, RS, D, RGUF, DL, e) has been submitted}
Begin
    If LSiteId = RSiteId Then                       // the transaction is local
        If O(PSiteId) = RSiteId Then                // the object is a primary object
            Return result from executing T_read on O
        Else If CurrentTime - O(TS) <= O(VI) Then   // the replica is active
            Return result from executing T_read on O
        Else
            Create a remote read transaction
            Create a local update transaction
            Return result from executing T_read on O
    Else
        Return result from executing T_read on O
        NeedList(O).append(LSiteId)                 // append the i-th site to the need list
End

Algorithm 1: Replica control algorithm on receiving a read transaction

// Define NeedList[n]: array of nodes for each object o ∈ O, initially empty
// LSiteId: the id of the local site where the transaction is initiated
// RSiteId: the id of the remote site to which the transaction is sent
// O = (Id, Type, Name, PSiteId, Value, TS, VI, BUF, VN, FR)
// T_update: {(Tid, LSiteId, RSiteId, WS, RS, D, RGUF, DL, e) has been received}
// Define RD: integer replication degree, 1 < RD < N
// Define RAS[n]: array of nodes for each object o ∈ O, initially empty
// Define i: integer
Begin
    If LSiteId = RSiteId Then                       // the transaction is local
        If O(PSiteId) = RSiteId Then                // the object is a primary object
            Return result from executing T_update on O
            VN := VN + 1
            // enter the propagation phase
            For j := 1 to RD Do
                Perform RAF(S_i(id))                // append the i-th site in NeedList(O) to the RAS
            End For
            For each site S in RAS Do
                Create T_update(S(id))
            End For
        Else
            Return result from executing T_update on O
            Receive acknowledgment from the primary site
            Commit T_update
    Else                                            // the transaction is a remote update transaction
        Return result from executing T_update on O
        If O(PSiteId) ≠ RSiteId Then                // the object is not a local object
            Commit T_update
            VN(O) := VN(O) + 1
            Send acknowledgment to T_update(LSiteId)
            NeedList(O).append(S_i(id))             // append the i-th site to the end of the need list
        Else
            VN(O) := VN(O) + 1
            Commit T_update
            // enter the propagation phase
            For j := 1 to RD Do
                Perform RAF(S_i(id))                // append the i-th site in NeedList(O) to the RAS
            End For
            For each site S in RAS Do
                Create T_update(S(id))
            End For
End

Algorithm 2: Replica control algorithm on receiving an update transaction
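For readers who prefer executable code, the propagation step of Algorithm 2 can be rendered as the following sketch (our rendition under assumed names, not the authors' implementation). It shows a primary applying an update, bumping the version number VN, and pushing typed propagation transactions to RD sites from the need list; because receivers see the propagation type, they never re-propagate, which avoids the endless cycle discussed above.

```python
# Runnable sketch of Algorithm 2's propagation step; names are assumptions.
UPDATE, PROPAGATION = "update", "propagation"

class Site:
    def __init__(self, name):
        self.name = name
        self.value, self.vn = None, 0
        self.pushed = []                        # propagation transactions sent

    def apply(self, tx_type, value, rd=0, need_list=()):
        self.value = value
        self.vn += 1                            # version number used for conflict checks
        if tx_type == UPDATE:                   # only genuine updates fan out
            ras = list(need_list)[:rd]          # RAF-like step: need list -> RAS
            for site in ras:
                self.pushed.append(site.name)
                site.apply(PROPAGATION, value)  # receivers stop here: no re-propagation

primary = Site("P")
replicas = [Site("A"), Site("B"), Site("C")]
primary.apply(UPDATE, 99, rd=2, need_list=replicas)   # RD = 2: push to 2 of 3 sites
```

Note that only the first RD sites of the need list receive the new replica, mirroring how the Replication Manager bounds the propagation fan-out.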
Introduction to Database Management Systems
Database Administration Transaction Processing Why Concurrency Control? Locking Database Recovery Query Optimization DB Administration 1 Transactions Transaction -- A sequence of operations that is regarded
Optimizing a ëcontent-aware" Load Balancing Strategy for Shared Web Hosting Service Ludmila Cherkasova Hewlett-Packard Laboratories 1501 Page Mill Road, Palo Alto, CA 94303 [email protected] Shankar
Snapshots in Hadoop Distributed File System
Snapshots in Hadoop Distributed File System Sameer Agarwal UC Berkeley Dhruba Borthakur Facebook Inc. Ion Stoica UC Berkeley Abstract The ability to take snapshots is an essential functionality of any
StreamStorage: High-throughput and Scalable Storage Technology for Streaming Data
: High-throughput and Scalable Storage Technology for Streaming Data Munenori Maeda Toshihiro Ozawa Real-time analytical processing (RTAP) of vast amounts of time-series data from sensors, server logs,
