Building service testbeds on FIRE
P2P@Clouds: Converging P2P with clouds towards advanced real time media distribution architectures
Nikolaos Efthymiopoulos, Athanasios Christakidis, Loris Corazza, Spyros Denazis, Odysseas Koufopavlou
Department of Electrical and Computer Engineering, University of Patras, Greece
Motivation I
P2P@Clouds will specify and execute research experiments that aim at orchestrating user resources and cloud resources (media servers) towards low-cost and stable real time media distribution.
The system in a nutshell:
- Users that enter the system contribute their own processing, storage and bandwidth resources.
- A set of media servers supports users in case their own resources are not sufficient.
- A management server (monitoring and control) coordinates the available resources.
[Figure: network media distribution graph, with users-peers at the edge and media servers acting as auxiliary bandwidth providers with fixed BW]
Motivation II
We aim at the design of a real time media distribution system that has:
- Efficiency: achieving the highest possible utilization of the upload bandwidth of participating peers, in order to minimize the additional bandwidth provided by the set of media servers.
- Stability: the system remains stable in the presence of dynamic conditions (user and network behaviour).
- Low cost / scalability: the scalability of such systems is determined by the amount of bandwidth and processing overhead that the media servers have to contribute as the number of participating peers grows.
Functional Architecture
[Figure: functional architecture of the system]
1st Experiment: P2P Overlay
The development, evaluation and enhancement of two distributed algorithms: Intra-DOMA and Inter-DOMA.
They aim at the dynamic optimization of a P2P overlay in which the distribution paths are organized and rearranged dynamically according to the underlying network conditions.
We quantify network conditions by dynamically measuring the upload bandwidth of participating peers and the network latency between each pair of them (see the sketch below).
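The slides leave the internals of Intra-DOMA and Inter-DOMA unspecified; the following is only a minimal sketch of the kind of local, latency-driven rearrangement step they describe, assuming a pairwise latency table `latency[p][q]` filled by the measurement layer and a `neighbors` map of peer -> set of current neighbours.

```python
def local_swap_step(peer, neighbors, candidates, latency):
    """One hypothetical local optimization step: replace the peer's
    highest-latency neighbour with a lower-latency candidate, so the
    overlay is gradually rearranged towards the measured network
    conditions."""
    if not neighbors[peer] or not candidates:
        return
    worst = max(neighbors[peer], key=lambda q: latency[peer][q])
    best = min(candidates, key=lambda q: latency[peer][q])
    if latency[peer][best] < latency[peer][worst]:
        neighbors[peer].discard(worst)
        neighbors[peer].add(best)
```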
1st Experiment: P2P Overlay
H1.1 - Convergence: in case of no changes in the underlying network, our two algorithms converge to an optimal overlay.
H1.2 - Implementation flaws: the implementation of our algorithms doesn't affect the stability and consistency of the overlay. This means that, after the execution of the experiment, the resulting overlay graph has the intended architecture.
EXP1.1: To test H1.1 and H1.2 we need an environment that is stable with respect to the number of participating peers, meaning that the population of peers remains constant throughout the whole experiment. As the purpose is to test only the properties of the overlay graph, there will not be any streaming process.
1st Experiment: P2P Overlay
H1.3 - Optimality: the total energy of the resulting overlay is minimal. This means that our algorithms not only converge, but converge to an optimal value in terms of the minimization of the network latency between each peer and its neighbours.
EXP1.2: To test H1.3 we will conduct an experiment similar to the first one, with the exception that before the execution, and using BonFIRE's network monitoring facilities, we will have calculated the theoretical minimum energy for the system under test, and we will then compare it with the experimental results.
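As a concrete reading of H1.3, the overlay "energy" can be computed as the sum of measured latencies along all neighbour links; a minimal sketch, assuming `neighbors` and `latency` are populated from the monitoring data:

```python
def overlay_energy(neighbors, latency):
    """Total overlay energy: the sum of measured latencies between
    every peer and each of its neighbours (H1.3).  EXP1.2 compares
    this value against the precomputed theoretical minimum."""
    return sum(latency[p][q] for p in neighbors for q in neighbors[p])
```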
1st Experiment: P2P Overlay
H1.4 - Scalability: the overlay can scale, meaning that the control overhead needed for the execution of the algorithms doesn't affect the scaling properties of the overlay.
EXP1.3: For H1.4 we are going to conduct a number of experiments in which the population of participating peers changes throughout their execution. Using our experiment controller we can force the peer-to-peer clients to enter or leave the system, either at pre-defined times or dynamically (a churn-trace sketch follows below). Then, with the help of the monitoring facilities, we can observe the properties of our overlay in real time and check our hypothesis.
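A minimal sketch of how a pre-defined churn trace could be replayed by the experiment controller; `join_overlay`/`leave_overlay` are hypothetical client methods, since the slides only state that entries and departures can be forced at pre-defined times.

```python
import threading
import time

def replay_churn(client, events):
    """Replay a pre-defined churn trace in the background.
    events: list of (delay_seconds, action) pairs, with action in
    {'join', 'leave'}; the client API below is an assumption."""
    def run():
        for delay, action in events:
            time.sleep(delay)
            if action == "join":
                client.join_overlay()   # hypothetical method
            else:
                client.leave_overlay()  # hypothetical method
    threading.Thread(target=run, daemon=True).start()

# Example trace: join at t=0, leave 300 s later, rejoin after 60 s:
# replay_churn(client, [(0, "join"), (300, "leave"), (60, "join")])
```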
1st Experiment: P2P Overlay
H1.5 - Adaptability: the overlay adapts efficiently to changes in the underlying network and/or to peer arrivals and departures.
EXP1.4: For H1.5 we will experiment with dynamic network and peer behaviours. We will measure their impact on the consistency of the overlay and the degradation they cause in the efficiency of the real time media distribution.
H1.6 - Performance: our proposed overlay architecture significantly increases the performance of P2P live streaming.
EXP1.5: We will compare the performance of live streaming with and without our algorithms.
2nd Experiment: P2P Flow Control
Problem formulation: in P2P media distribution architectures each sender feeds multiple receivers, while each receiver is fed by multiple senders. The flows are not persistent and consist of small, non-continuous chunks of video data.
H2.1 - Accuracy of the measurements: the dynamic upload bandwidth of a peer can be measured accurately with the type of traffic that P2P live streaming generates.
EXP2.1: We will create a scenario where the upload bandwidth of a peer changes very dynamically, and we will evaluate how various measurement mechanisms perform (a sketch of one candidate estimator follows below).
H2.2 - Isolation of measurements: the measurement process takes into account the traffic generated by other applications.
EXP2.2: Here we will introduce unrelated network traffic and observe how our measurement algorithm reacts.
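The slides do not name the measurement mechanisms under evaluation in EXP2.1; an exponentially weighted moving average over per-chunk throughput samples is one plausible candidate, sketched here under that assumption.

```python
def ewma_upload_estimate(samples, alpha=0.2):
    """Smooth bursty per-chunk throughput samples (bytes/s) into a
    running upload-capacity estimate.  Chunked P2P traffic is not a
    continuous flow, so raw per-transfer samples fluctuate heavily."""
    estimate = None
    for sample in samples:
        estimate = sample if estimate is None else (
            alpha * sample + (1 - alpha) * estimate)
    return estimate

# e.g. ewma_upload_estimate([120e3, 95e3, 140e3]) -> smoothed estimate
```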
2nd Experiment: P2P Flow Control
H2.3 - Multi-flow control: the simultaneous flows (small chunks) from a sender to different receivers can be coordinated in order to efficiently utilize the upload bandwidth.
EXP2.3: This is our main set of experiments. In this set we will use a single-source, multi-sink environment, with the source being our peer-to-peer client acting as the producer of the media stream. We will apply various control techniques and evaluate their performance and stability in P2P live streaming (one simple coordination policy is sketched below).
H2.4 - Fairness: the flow control mechanism can avoid network congestion and is fair to traditional TCP flow control.
EXP2.4: In this final set of experiments we will deploy our full peer-to-peer system, enhanced with the P2P congestion control mechanism. We will observe whether our flow control algorithm is able to avoid congestion in the underlying network, and whether it is capable of sharing network resources fairly with other applications that use traditional TCP congestion control.
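The control techniques to be compared in EXP2.3 are not specified in the slides; proportional sharing of a sender's measured upload budget is a minimal baseline one might start from, sketched here as an assumption rather than the mechanism under test.

```python
def allocate_upload(budget_bps, requests):
    """Divide a sender's measured upload budget across its
    simultaneous chunk flows.  requests maps receiver -> requested
    rate (bps); if demand exceeds the budget, all flows are scaled
    down proportionally."""
    total = sum(requests.values())
    if total <= budget_bps:
        return dict(requests)  # demand fits: grant every request
    scale = budget_bps / total
    return {rcv: rate * scale for rcv, rate in requests.items()}
```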
BonFIRE Facilities
Two sets of experiments (overlay graph, flow control).
For the first set we are going to use the BonFIRE WAN facility:
- Geographically dispersed sites.
- Allows the deployment of a large number of clients.
- We will install more than 50 clients, distributed evenly across all sites of the WAN.
For the second set we are going to use the BonFIRE Virtual Wall:
- Controlled and isolated emulated network.
- We can set path capacities and latencies dynamically and observe the effects on our algorithms.
- Real time monitoring facilities.
Experimentation Architecture
[Figure: experimentation architecture diagram]
Experimentation Architecture
- Client: the virtual machine acting as a user in P2P video streaming.
- Producer: the virtual machine in charge of providing the initial data stream to all the clients.
- Experimentation monitoring module: in each client we will embed a module responsible for monitoring and gathering application-level statistics. These will be sent to the data aggregation module in real time through UDP syslog-style messages (see the sketch below).
- Experimentation data aggregation module: through this module we can observe the state of our experiments in real time. All the application-level statistics are gathered here; they are complementary to the network-level statistics gathered by the BonFIRE monitoring system.
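A minimal sketch of the fire-and-forget statistics path; the slides only specify "UDP syslog-style messages", so the JSON payload format and the aggregator host/port below are assumptions.

```python
import json
import socket
import time

def send_stats(sock, aggregator, client_id, stats):
    """Emit one application-level statistics record to the data
    aggregation module as a UDP, syslog-style (fire-and-forget)
    message.  JSON framing is an assumption."""
    record = {"ts": time.time(), "client": client_id, **stats}
    sock.sendto(json.dumps(record).encode(), aggregator)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_stats(sock, ("aggregator.example.org", 5514),  # hypothetical endpoint
           "client-07", {"neighbours": 8, "upload_bps": 920_000})
```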
Experimentation Architecture
- Experimentation scenario generation and control module: this is the main control component. Through it we can change the state of our P2P clients by passing new configuration parameters (before and during the execution of the experiments). We can also control the behaviour of the clients by controlling their entry to and departure from the overlay. The instructions are passed through XML-RPC calls to the embedded control components of our clients (see the sketch below).
- User-client behaviour generator module: it controls the user arrivals to and departures from our system, either on the fly, through the user interface of our control component, or by setting these times before the execution of an experiment.
- Algorithm tuning and experiment initialization module: it initializes and modifies the parameters of the various algorithms (number of simultaneous flows, number of neighbours, etc.).
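A minimal sketch of the control path using Python's standard xmlrpc.client; the endpoint address and the remote method names are hypothetical, since the slides only state that instructions reach each client's embedded control component over XML-RPC.

```python
import xmlrpc.client

# Hypothetical control endpoint exposed by a client's embedded
# control component (host, port and method names are assumptions).
proxy = xmlrpc.client.ServerProxy("http://client-07.example.org:8000/")

# Push new configuration parameters before or during the experiment ...
proxy.set_parameters({"simultaneous_flows": 4, "neighbours": 8})

# ... or force a departure from the overlay at a chosen moment.
proxy.leave_overlay()
```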
Building service testbeds on FIRE
Thank you for your attention!