Replication and Consistency


Chapter 7: Replication

Introduction to Replication

Replication is
- replication of data: the maintenance of copies of data at multiple computers
- object replication: the maintenance of copies of whole server objects at multiple computers

Replication can provide
- Performance enhancement and scalability
- Remember caching: improve performance by storing data locally, but the data may be incomplete
- Additionally, several web servers can share the same DNS name; the servers are selected by DNS in turn to share the load
- Replication of read-only data is simple, but replication of frequently changing data causes overhead in providing current data

Increased availability
- Sometimes a service needs to be available nearly 100% of the time
- In case of server failures: simply contact another server holding the same data items
- Network partitions and disconnected operation: data stay available even if the connection to a server is lost. But after re-establishing the connection, (conflicting) data updates have to be resolved

Fault-tolerant services
- Guaranteeing correct behaviour in spite of certain faults (can include timeliness)
- If f servers in a group of f+1 crash, then 1 remains to supply the service
- If f servers in a group of 2f+1 have byzantine faults, the group can still supply a correct service

When to do Replication?

Replication at service initialisation
- Try to estimate the number of servers needed from a customer's specification regarding performance, availability, fault tolerance, ...
- Choose places to deposit the data or objects
- Example: root servers in DNS

Replication 'on demand'
- When failures or performance bottlenecks occur, create a new replica, possibly placed at a new location in the network
- Example: web server of an online shop
- Or: to improve local access operations, place a copy near a client
- Example: (DNS) caching, disconnected operations

Why Object Replication?

- It is useful to consider objects instead of only data
- Objects have the benefit of encapsulating data and the operations on that data; thus object-specific operation requests can be distributed
- But: now one has to consider internal object states!
- This topic is related to mobile agents, but it becomes more complicated when the internal states have to be kept consistent!

Requirements for Replication

For all of these application areas, one requirement holds: the client should not be aware of using a group of computers!
- Replication transparency
- Clients do not work on multiple physical copies; they only see one logical object which they ask to perform an action
- Clients expect only a single result, not one result per data copy
- Needed: propagation of updates
- Consistency
- How do we ensure that all data copies are consistent?
- The required degree depends on the application: temporary inconsistencies may be tolerated
- When multiple clients are connected via different copies, they should get consistent results
- Trade-off: performance, availability, fault tolerance vs. consistency and up-to-date information
- Scalability?

Management of Replicated Data

Needed: replication transparency and consistency
- Client requests are handled by front ends. A front end provides replication transparency
- Through the front ends, clients see a service that gives them access to logical objects, which are in fact replicated at several replica managers that guarantee consistency
- Client request operations:
- read-only requests: calls without updates
- update requests: calls executing write operations (which may also include reads)
(Figure: several clients access the service through front ends, which talk to the replica-manager servers)

System Model

- Each logical object is implemented by a collection of physical copies called replicas (the replicas are not necessarily consistent all the time; some may have received updates not yet delivered to the others)
- Assumption: an asynchronous system in which processes fail only by crashing and, in general, no network partitions occur
- Replica managers contain replicas on a computer and access them directly
- Replica managers apply operations to replicas recoverably, i.e. they do not leave inconsistent results if they crash
- Static systems are based on a fixed set of replica managers; in a dynamic system, replica managers may join or leave (e.g. when they crash)
- A replica manager can be a state machine with the following properties:
  a) Operations are applied atomically
  b) The current state is a deterministic function of the initial state and the applied operations
  c) All replicas start identically and carry out the same operations
  d) The operations must not be affected by clock readings etc.
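To make the state-machine view concrete, here is a minimal Python sketch (all class and operation names are invented for this illustration, not part of the lecture): two replicas that start identically and apply the same operations in the same order end up in the same state.

# Minimal sketch of a replica manager as a deterministic state machine
# (illustrative names only).

class ReplicaManager:
    def __init__(self):
        self.state = {}        # all replicas start with the same initial state
        self.applied = []      # the operations applied so far

    def apply(self, op):
        # Apply one operation atomically; the resulting state depends only on
        # the previous state and the operation (no clocks, no randomness).
        kind, key, value = op
        if kind == "write":
            self.state[key] = value
        self.applied.append(op)
        return self.state.get(key)

ops = [("write", "x", 1), ("write", "y", 2), ("read", "x", None)]
a, b = ReplicaManager(), ReplicaManager()
for op in ops:
    a.apply(op)
    b.apply(op)
assert a.state == b.state    # same operations, same order, same state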

Performing a Request

In general there are five phases for performing a request on replicated data:
- Issue request. The front end either sends the request to a single replica manager that passes it on to the others, or multicasts the request to all of the replica managers
- Coordination. For consistent execution, the replica managers decide whether to apply the request (e.g. because of failures) and how to order the request relative to other requests (according to FIFO, causal or total ordering)
- Execution. The replica managers execute the request (sometimes tentatively)
- Agreement. The replica managers agree on the effect of the request, e.g. perform it 'lazily' or immediately
- Response. One or more replica managers reply to the front end, which combines the results: for high availability, give the first response to the client; to tolerate byzantine faults, take a vote

Group Communication

- Strong requirement: replication needs multicast communication (group communication) to distribute requests to the several servers holding the data and to manage the replicated data
- Required: dynamic membership in groups
- That means: establish a group membership service which manages dynamic group membership, in addition to multicast communication
(Figure: membership management cooperates with multicast communication; a group offers join, leave, fail, send and group address expansion)
- A client sending from outside the group does not have to know the group membership

Group Membership Service

The group membership service provides (a small interface sketch follows after the next slide):
a) An interface for group membership changes: create and destroy process groups, add/remove members (a process can generally belong to several groups)
b) A failure detector: monitor members for failures (process crashes or communication failures) and exclude a process from membership when it is unreachable
c) Notification of members when changes in membership occur
d) Expansion of group addresses: expand a group identifier to the current group membership, and coordinate delivery when the membership is changing
Note: IP multicast allows members to join/leave and performs address expansion, but does not support the other features

View-synchronous Group Communication

- A full membership service maintains group views, which are lists of group members. A new view is generated each time a process is added or excluded
- View delivery eases the life of the programmer: he does not have to deal himself with keeping operations consistent when the group membership changes
- Instead, the group members are notified whenever a change occurs
- In the ideal case, all processes get the same change information in the same order
- View-synchronous group communication uses reliable multicast and view delivery:
- Messages are delivered reliably to all group members
- All processes agree on the ordering of messages and membership changes
- A joining process can safely get state from another member
- Or, if one crashes, another will know which operations it had already performed
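The membership interface and view delivery can be illustrated with a few lines of Python. This is a toy, single-process sketch with invented names; real group-communication systems (e.g. JGroups, Spread) add failure detection and reliable multicast over the network.

# Toy group membership service: it maintains a view (the member list) and
# notifies every listener of each view change, so all members see the same
# sequence of views.

class GroupMembershipService:
    def __init__(self):
        self.view_id = 0
        self.members = []      # current view
        self.listeners = []    # callbacks registered by group members

    def _deliver_view(self):
        self.view_id += 1
        view = (self.view_id, list(self.members))
        for notify in self.listeners:
            notify(view)

    def join(self, process_id):
        self.members.append(process_id)
        self._deliver_view()

    def leave(self, process_id):
        # also called when the failure detector declares a member unreachable
        self.members.remove(process_id)
        self._deliver_view()

svc = GroupMembershipService()
svc.listeners.append(lambda view: print("new view:", view))
svc.join("p")
svc.join("q")
svc.join("r")
svc.leave("q")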

Views

- A group membership service maintains group views, which are lists of the current group members. A new group view is generated when a member joins or leaves
- A view v_p,i(g) is process p's understanding of its group (list of members)
- Example: v_p,0(g) = {p}, v_p,1(g) = {p, q}, v_p,2(g) = {p, q, r}, v_p,3(g) = {p, r}
- An event occurs in a view v_p,i(g) if, at the time the event occurs, p has delivered v_p,i(g) but has not yet delivered v_p,i+1(g)
- p delivers a view by multicasting it to all processes
- Requirements for view delivery:
  Order: if p delivers v_i(g) and then v_i+1(g), then no other process q delivers v_i+1(g) before v_i(g)
  Integrity: if p delivers v_i(g), then p is in v_i(g)
  Non-triviality: if process q joins a group and becomes reachable from process p, then eventually q is always in the views that p delivers

View Synchronous Communication

Extends reliable multicast for dynamic groups (i.e. with respect to changing views). The following guarantees are provided:
- Agreement: correct processes deliver the same set of messages in any given view; if p delivers m in view V and then delivers view V', then all processes in V ∩ V' deliver m in view V
- Integrity: if p delivers message m, p does not deliver m again; moreover, p ∈ group(m)
- Validity: correct processes always deliver the messages they send. That is, if p delivers message m in view v(g) and some process q ∈ v(g) does not deliver m in view v(g), then the next view v'(g) that p delivers excludes q
- Order: if p delivers v_i(g) and then v_i+1(g), then no other process q delivers v_i+1(g) before v_i(g)

Example: View Synchronous Communication
(Figure: four scenarios of delivering a message around the view change from v(p,q,r) to v(q,r), where p crashes; two of the delivery patterns are allowed under view synchrony, two are not allowed)

Replication Example

How do we provide fault-tolerant services, i.e. a service that behaves correctly even if some processes fail?
- Simple replication system: e.g. two replica managers A and B, each managing replicas of two accounts x and y. Clients use the local replica manager if possible; after responding to a client, the update is transmitted to the other replica manager
- Initial balance of x and y is $0
- Client 1: setbalance_B(x,1); setbalance_A(y,2)
- Client 2: getbalance_A(y) -> 2; getbalance_A(x) -> 0
- Client 1 first updates x at B (its local manager). When updating y it finds that B has failed, so it uses A for the next operation. Client 2 reads the balances at A (its local manager), but because B had failed, the update of x was never propagated to A: x still has the amount 0
- Be careful when designing replication algorithms! You need a consistency model
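The failure scenario above can be replayed in a few lines of Python. This is a toy simulation with invented names (not a real replication protocol); propagation is deliberately deferred so that B can crash before its update reaches A.

# Two replica managers A and B; an update is applied locally first and
# propagated to the other manager later ('after responding to the client').

managers = {"A": {"x": 0, "y": 0}, "B": {"x": 0, "y": 0}}
failed = set()
pending = []                              # updates waiting to be propagated

def setbalance(at, account, amount):
    managers[at][account] = amount        # respond to the client first ...
    pending.append((at, account, amount)) # ... propagate lazily later

def propagate():
    while pending:
        src, account, amount = pending.pop(0)
        dst = "B" if src == "A" else "A"
        if src not in failed and dst not in failed:
            managers[dst][account] = amount

def getbalance(at, account):
    return managers[at][account]

setbalance("B", "x", 1)       # client 1 updates x at its local manager B
failed.add("B")               # B crashes before it can propagate the update
setbalance("A", "y", 2)       # client 1 switches to A for the next update
propagate()
print(getbalance("A", "y"))   # 2
print(getbalance("A", "x"))   # 0: the update of x never reached A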

Strict Consistency

There are several correctness criteria for replication regarding consistency. The 'real world' criterion is strict consistency:
- Any read on a data item x returns a value corresponding to the result of the most recent write on x
- Problem with strict consistency: it relies on absolute global time

Linearisability

The strictest criterion for a replication system is linearisability:
- Consider a replicated service with two clients that perform read and update operations o_1i and o_2j respectively. Communication is synchronous, i.e. a client waits for one operation to complete before issuing the next
- Single server: serialise the operations by interleaving, e.g. o_20, o_21, o_10, o_22, o_11, o_12
- With replication we use a virtual interleaving: a replicated shared service is linearisable if for any execution there is some interleaving of the series of operations issued by all the clients such that
  - the interleaved sequence of operations meets the specification of a (single) correct copy of the objects, and
  - the order of operations in the interleaving is consistent with the real times at which they occurred in the actual execution
- Linearisability concerns only the interleaving of individual operations; it is not intended to be transactional

Sequential Consistency

Problem with linearisability: the real-time requirement is practically impossible to fulfil. A weaker correctness criterion is sequential consistency:
- A replicated shared service is sequentially consistent if for any execution there is some interleaving of the series of operations issued by all the clients such that
  - the interleaved sequence of operations meets the specification of a (single) correct copy of the objects, and
  - the order of operations in the interleaving is consistent with the program order in which each individual client executed them
- Every linearisable service is sequentially consistent, but not vice versa:
  - Client 1: setbalance_B(x,1); setbalance_A(y,2)
  - Client 2: getbalance_A(y) -> 0; getbalance_A(x) -> 0
  - This is possible under a naive replication strategy: the update at B has not yet been propagated to A when client 2 reads x. The real-time criterion for linearisability is not satisfied, because the read of x occurs after its write. But both criteria for sequential consistency are satisfied with the ordering: getbalance_A(y) -> 0; getbalance_A(x) -> 0; setbalance_B(x,1); setbalance_A(y,2)

More Consistency Models

Sequential consistency is widely used but has poor performance. Thus there are models that relax the consistency guarantees:
- Causal consistency (a weakening of sequential consistency)
  - Distinguishes between events that are causally related and those that are not
  - Example: read(x); write(y) in one process are causally related, because the value written to y can depend on the value of x read before
  - Consistency criterion: writes that are potentially causally related must be seen by all processes in the same order. Concurrent writes may be seen in a different order on different machines
  - Necessary: keeping track of which processes have seen which writes (vector timestamps); a small sketch follows below
- FIFO consistency (a weakening of causal consistency)
  - Writes done by a single process are seen by all other processes in the order in which they were issued, but writes from different processes may be seen in a different order by different processes
  - Easy to implement
- Weak consistency, release consistency, entry consistency
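Causal consistency needs to know which writes are causally related, which is exactly what vector timestamps provide. A minimal comparison sketch (helper names invented for this illustration):

# Decide from vector timestamps whether one write causally precedes another
# or whether the two writes are concurrent.

def happened_before(ts_a, ts_b):
    # True if ts_a <= ts_b in every component and the two differ somewhere.
    return all(a <= b for a, b in zip(ts_a, ts_b)) and ts_a != ts_b

def concurrent(ts_a, ts_b):
    return not happened_before(ts_a, ts_b) and not happened_before(ts_b, ts_a)

w1 = (2, 1, 0)   # a write issued after 2 events at p0 and 1 event at p1
w2 = (3, 1, 0)   # causally after w1: must be seen after w1 everywhere
w3 = (2, 0, 1)   # concurrent with w1: replicas may order it differently

print(happened_before(w1, w2))   # True
print(concurrent(w1, w3))        # True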

Summary of Consistency Models

Consistency      Description
Strict           Absolute time ordering of all shared accesses matters
Linearisability  All processes must see all shared accesses in the same order. Accesses are furthermore ordered according to a global timestamp
Sequential       All processes see all shared accesses in the same order. Accesses are not ordered in time
Causal           All processes see causally related shared accesses in the same order
FIFO             All processes see writes from each other in the order they were issued. Writes from different processes may not always be seen in that order

Replication Approaches

In the following: replication
- for fault tolerance
- for highly available services
- in transactions

Fault Tolerance: Passive (Primary-Backup) Replication

Replication for fault tolerance: the passive or primary-backup model
- There is at any time a single primary replica manager and one or more secondary replica managers (backups, slaves)
(Figure: clients connect via front ends to the primary, which propagates updates to the backups)
- Front ends communicate only with the primary replica manager, which executes the operations and sends copies of the updated data to the backups
- If the primary fails, one of the backups is promoted to act as primary
- This system implements linearisability, since the primary sequences all the operations on the shared objects
- If the primary fails, the system remains linearisable if a single backup takes over exactly where the primary left off; view-synchronous group communication can achieve this

Execution in Passive Replication

There are five phases in performing a client request:
1. Request: a front end issues the request, containing a unique identifier, to the primary replica manager
2. Coordination: the primary processes each request atomically, in the order in which it receives it. It checks the unique identifier to see whether it has already executed the request; if so, it simply resends the response
3. Execution: the primary executes the request and stores the response
4. Agreement: if the request is an update, the primary sends the updated state, the response and the unique identifier to all the backups. The backups send an acknowledgement
5. Response: the primary responds to the front end, which hands the response back to the client
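The five phases can be traced in a small single-process sketch (class names and messages are invented; a real system would ship them over the network and rely on view-synchronous communication for fail-over):

# Toy primary-backup execution: request -> coordination -> execution
# -> agreement -> response.

class Backup:
    def __init__(self):
        self.state = {}

    def receive_update(self, state, response, request_id):
        self.state = dict(state)      # adopt the primary's updated state
        return "ack"

class Primary:
    def __init__(self, backups):
        self.state = {}
        self.responses = {}           # request_id -> response, for duplicates
        self.backups = backups

    def handle(self, request_id, key, value):
        if request_id in self.responses:      # coordination: duplicate check
            return self.responses[request_id]
        self.state[key] = value               # execution
        response = "ok"
        for b in self.backups:                # agreement: update the backups
            b.receive_update(self.state, response, request_id)
        self.responses[request_id] = response
        return response                       # response to the front end

backups = [Backup(), Backup()]
primary = Primary(backups)
print(primary.handle("req-1", "x", 42))   # ok
print(primary.handle("req-1", "x", 42))   # ok (duplicate, not re-executed)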

Properties of Passive Replication

+ Simple principle
+ To survive f process crashes, f+1 replica managers are required
+ The front end can be designed with little functionality
- But: relatively large overheads through the use of view-synchronous communication: several rounds of communication, and latency after a primary crash because a new view has to be agreed upon
Variation: clients can read from backups
- Reduces the workload of the primary
- Achieves sequential consistency but not linearisability
The Sun Network Information System (NIS) uses passive replication with weaker guarantees than sequential consistency:
- Achieves high availability and good performance
- The master receives updates and propagates them to the slaves using one-to-one communication. Information retrieval can use either the master or a slave server

Active Replication

The replica managers are state machines, all playing the same role and organised as a group:
- A front end multicasts each request to the group of replica managers
- All replica managers start in the same state and perform the same operations in the same order, so that their states remain identical (notice: totally ordered reliable multicast is needed to guarantee the identical execution order!)
- If a replica manager crashes, this has no effect on the performance of the service, because the others continue as normal
- Byzantine failures can be tolerated, because the front end can collect and compare the replies it receives
(Figure: clients access the service via front ends, which multicast requests to all replica managers)

Execution in Active Replication

The following phases are performed for a client request:
1. Request: the front end attaches a unique identifier to the request and uses totally ordered reliable multicast to send it to all replica managers
2. Coordination: the group communication system delivers the request to all replica managers in the same (total) order
3. Execution: every replica manager executes the request. They are state machines and receive the requests in the same order, so the effects for correct replica managers are identical
(Agreement): no agreement phase is required, because all managers execute the same operations in the same order, due to the properties of the totally ordered multicast
4. Response: the front end collects responses from the managers (and combines them, if necessary)

Properties of Active Replication

- As the replica managers are state machines, we obtain sequential consistency, but not linearisability (because we do not have perfectly accurate synchronisation algorithms)
- It is possible to deal with byzantine failures: use 2f+1 replica managers to outvote up to f faulty processes. For this, a front end simply has to wait for f+1 identical responses from replica managers. The replica managers have to digitally sign their responses!
- Variations:
  - allow a sequence of read operations to be executed by different replica managers in any order
  - or: allow the front end to send a read-only request to only one replica manager (if this manager fails, the front end contacts the next one)
  - allow write operations on different data items to be executed in any order
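The byzantine-tolerant variant, in which the front end waits for f+1 identical responses, can be illustrated as follows (signature verification is omitted; the function name is invented):

# Front-end voting for active replication with up to f byzantine managers:
# accept a result as soon as f+1 identical responses have arrived.

from collections import Counter

def vote(responses, f):
    # responses: values received from the replica managers, in arrival order.
    counts = Counter()
    for r in responses:
        counts[r] += 1
        if counts[r] == f + 1:       # f+1 matching answers cannot all be faulty
            return r
    raise RuntimeError("no value received f+1 times")

# 2f+1 = 3 managers, one of which (f = 1) answers incorrectly
print(vote([100, 99, 100], f=1))     # 100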

Highly Available Services

How do we provide services with high availability, i.e. how do we use replication to achieve an availability of (ideally) 100%?
- Reasonable response times for as much of the time as possible
- Even if some results do not conform to sequential consistency, e.g. a disconnected user may accept temporarily inconsistent results if he can continue to work and fix the inconsistencies later
Difference to fault-tolerant systems:
- For fault tolerance, all replica managers have to agree on a result: 'eager' evaluation, not acceptable when reasonable response times are needed
- Instead, for high availability:
  - contact only the minimum number of replica managers necessary to reach an acceptable level of service
  - the client should be tied up for only a minimum time while the managers coordinate their actions
- Weaker consistency generally requires less agreement and makes data more available. Updates are propagated 'lazily'

The Gossip Architecture

The gossip architecture is a framework for implementing highly available services
- Data is replicated close to the location of clients
- Replica managers periodically exchange gossip messages containing the updates they have received from clients
- Two basic types of operation are provided:
  - queries: read-only operations
  - updates: modify (but do not read) the state
- Front ends send queries and updates to any replica manager, chosen by availability and response times
- Two guarantees are made (even if managers are temporarily unable to communicate with one another):
  - Each client gets a consistent service over time (i.e. the data reflects at least the updates the client has seen so far, even if it uses different replica managers). Vector timestamps are used, with one entry per manager
  - Relaxed consistency between replicas: all replica managers eventually receive all updates, and they apply ordering guarantees that suit the needs of the application (generally causal ordering). A client may observe stale data

Query & Update Operations

- The service consists of a collection of replica managers that exchange gossip messages
- Queries and updates are sent by a client via a front end to any replica manager
(Figure: a front end sends 'Query, prev' and receives 'Val, new'; it sends 'Update, prev' and receives an update id; the replica managers exchange gossip messages among themselves)
- The front end converts operations that contain both reading and writing into two separate calls
- For ordering operations, the front end sends a timestamp prev with each request to denote its latest state
- A new timestamp new is passed back with a query response to mark the data state the client has seen last

Timestamps

- Each front end keeps a vector timestamp prev that reflects the latest data value seen by the front end (a sketch of this bookkeeping follows below)
- Clients can communicate with each other. This can lead to causal relationships between client operations, which have to be respected by the replicated system
- Therefore such communication is made via the front ends and includes an exchange of vector timestamps, allowing the front ends to reflect the causal ordering in their timestamps
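The front end's bookkeeping with prev and new can be sketched as follows. The replica-manager stub exists only so the example runs; its internals are a stand-in, not the full gossip algorithm, and all names are invented.

# A gossip front end attaches its 'prev' timestamp to every request and
# merges whatever timestamp comes back, so later requests reflect all the
# state this client has already seen.

def merge(ts_a, ts_b):
    return tuple(max(a, b) for a, b in zip(ts_a, ts_b))

class StubReplicaManager:
    # Stand-in so the sketch runs; a real manager keeps a log and gossips.
    def __init__(self, index, num_managers):
        self.index = index
        self.value_ts = (0,) * num_managers
        self.data = {}

    def update(self, item, value, prev):
        ts = list(self.value_ts)
        ts[self.index] += 1
        self.value_ts = tuple(ts)
        self.data[item] = value
        return self.value_ts          # the timestamp assigned to the update

    def query(self, item, prev):
        return self.data.get(item), self.value_ts   # value and 'new'

class FrontEnd:
    def __init__(self, num_managers):
        self.prev = (0,) * num_managers   # latest state seen by this client

    def update(self, rm, item, value):
        ts = rm.update(item, value, self.prev)
        self.prev = merge(self.prev, ts)

    def query(self, rm, item):
        value, new = rm.query(item, self.prev)
        self.prev = merge(self.prev, new)
        return value

rm = StubReplicaManager(0, 2)
fe = FrontEnd(2)
fe.update(rm, "x", 7)
print(fe.query(rm, "x"), fe.prev)     # 7 (1, 0)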

Gossip Processing of Queries and Updates

Phases in performing a client request:
1. Request: front ends normally use the same replica manager for all operations; front ends may be blocked on queries
2. Update response: the replica manager replies as soon as it has received the update
3. Coordination: the replica manager receiving a request waits to process it until the ordering constraints are met. This may involve receiving updates from other replica managers in gossip messages
4. Execution: the replica manager executes the request
5. Query response: if the request is a query, the replica manager now replies
6. Agreement: replica managers update one another by exchanging gossip messages containing the most recently received updates. This does not have to be done separately for each update

Replica Manager State

(Figure: a gossip replica manager holds the value, the value timestamp, the replica timestamp, the update log with its stable updates, the executed operation table and the timestamp table; gossip messages arrive from other replica managers, and updates arrive from front ends carrying an operation id, the update and prev)

Query & Update Operations

Main state components of a replica manager:
- Value: reflects the application state as maintained by the replica manager (each manager is a state machine). It starts with a defined initial value, to which the update operations are applied
- Value timestamp: the vector timestamp that reflects the updates applied to obtain the saved value
- Executed operation table: prevents an operation from being applied twice, e.g. if it is received from other replica managers as well as from the front end
- Timestamp table: contains a vector timestamp for each other replica manager, extracted from gossip messages
- Update log: all update operations are recorded immediately when they are received. An operation is held back until the ordering constraints allow it to be applied
- Replica timestamp: a vector timestamp indicating the updates accepted by the manager, i.e. placed in the log (it differs from the value timestamp if some updates are not yet stable)

Processing at a replica manager (a code sketch follows below):
- A query includes a timestamp. The operation is marked as pending until the manager's value timestamp is at least as large as the operation's timestamp
- Update operations are processed in causal order. A front end can send an update operation containing a timestamp and an identifier id to one or several replica managers
- When replica manager i receives an update request, it checks whether the request is new by looking for the id in its executed operation table and in its log
- If it is new, the replica manager
  - increments the ith element of its replica timestamp by 1,
  - assigns the result as the new timestamp ts of the update, and
  - stores the update in its log
- The replica manager returns ts to the front end, which merges it with its own vector timestamp (note: by sending a request to several managers, the front end gets back several timestamps, which all have to be merged)
- Depending on its timestamp, an operation may be delayed until gossip messages from other replica managers arrive. When the timestamp allows, the replica manager applies the operation and makes an entry in the executed operation table
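Here is a sketch of the update handling at replica manager i (duplicate check, bump of the ith timestamp entry, logging, applying once stable). Names are invented; the update's timestamp ts is derived as prev with the ith entry replaced, which is an assumption about the detailed algorithm, and the gossip exchange itself is not shown.

# How replica manager i accepts an update in the gossip architecture.

def merge(a, b):
    return [max(x, y) for x, y in zip(a, b)]

def covers(a, b):
    # True if vector timestamp a is >= b in every component.
    return all(x >= y for x, y in zip(a, b))

class GossipReplicaManager:
    def __init__(self, i, n):
        self.i = i
        self.value = {}
        self.value_ts = [0] * n      # updates reflected in 'value'
        self.replica_ts = [0] * n    # updates accepted into the log
        self.log = []                # entries: (ts, op, prev, op_id)
        self.executed = set()        # ids of operations already applied

    def receive_update(self, op, prev, op_id):
        if op_id in self.executed or any(e[3] == op_id for e in self.log):
            return None                           # duplicate request
        self.replica_ts[self.i] += 1              # increment own entry
        ts = list(prev)
        ts[self.i] = self.replica_ts[self.i]      # unique timestamp for the update
        self.log.append((ts, op, prev, op_id))
        self.apply_stable()
        return ts                                 # sent back to the front end

    def apply_stable(self):
        for entry in list(self.log):
            ts, (key, val), prev, op_id = entry
            if covers(self.value_ts, prev):       # causally earlier updates applied
                self.value[key] = val
                self.value_ts = merge(self.value_ts, ts)
                self.executed.add(op_id)
                self.log.remove(entry)

rm = GossipReplicaManager(0, 2)
print(rm.receive_update(("x", 5), [0, 0], "u1"))   # [1, 0]
print(rm.value, rm.value_ts)                       # {'x': 5} [1, 0]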

Gossip Messages

- The timestamp table contains a vector timestamp for each other replica manager, collected from gossip messages
- A replica manager uses the entries in this timestamp table to estimate which updates another manager has not yet received. This information is sent in a gossip message
- A gossip message m contains the log m.log and the replica timestamp m.ts
- A manager receiving a gossip message m has the following main tasks:
  - merge the arriving log with its own
  - apply, in causal order, updates that are new and have become stable
  - remove redundant entries from the log and the executed operation table when it is known that they have been applied by all replica managers
  - merge its replica timestamp with m.ts, so that it reflects the additions to the log

Update Propagation

The architecture given so far does not specify when to exchange gossip messages. To design a robust system in which each update is propagated in reasonable time, an exchange strategy is needed. The time required for all replica managers to receive a given update depends on three factors:
1. The frequency and duration of network partitions. This is beyond the system's control
2. The frequency with which replica managers send gossip messages. This may be tuned to the application
3. The policy for choosing a partner with which to exchange gossip messages
   - Random policies choose a partner randomly, but with weighted probabilities so as to favour some partners over others
   - Deterministic policies use fixed communication partners
   - Topological policies arrange the replica managers into a fixed graph (mesh, circle, tree, ...) and messages are passed to the neighbours
   - Other strategies: consider transmission latencies, fault probabilities, ...

Properties of the Gossip Architecture

The gossip architecture is designed to provide a highly available service
+ Clients with access to a single replica manager can work even when other managers are inaccessible
- But this is not suitable for data such as bank accounts
- It is inappropriate for updating replicas in real time
Scalability
- As the number of replica managers grows, so does the number of gossip messages
- For R managers, each collecting G updates into a gossip message, the number of messages per request is 2 + (R-1)/G
- Variation: increase G to reduce the number of gossip messages, but this makes the latency of update propagation worse
- For applications where queries are much more frequent than updates, use some read-only replicas placed near the client, which are updated only by gossip messages
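As a quick sanity check on the message count per update request (one request, one reply, plus the request's share of the gossip traffic), a tiny calculation:

# Messages per request in the gossip architecture: 2 + (R-1)/G, where a
# gossip message batches G updates and must reach the other R-1 managers.

def messages_per_request(R, G):
    return 2 + (R - 1) / G

for R, G in [(5, 1), (5, 4), (20, 1), (20, 10)]:
    print(f"R={R:2d} G={G:2d} -> {messages_per_request(R, G):.1f} messages")
# Larger G means fewer messages per request, but updates reach the other
# replicas later.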