RCL: Design and Open Specification




ICT FP7-609828
RCL: Design and Open Specification
D3.1.1
March 2014

Document Information

Scheduled delivery:   31.03.2014
Actual delivery:      31.03.2014
Version:              1.0
Responsible partner:  IBM
Dissemination level:  PU (Public)

Revision History

Version  Date        Editor           Status            Changes
0.1      27.02.2014  Ronen Kat        Draft             Skeleton
0.2      05.03.2014  Ronen Kat        Draft             Added base text
0.3      05.03.2014  Yossi Kuperman   Draft             Added Section 3 (I/O consolidation)
0.4      07.03.2014  Yossi Kuperman   Draft             Added text in Section 3 (I/O consolidation)
0.4RH    09.03.2014  Dave Gilbert     Draft             Added Section 4 (Memory Externalization)
0.5      10.03.2014  Ronen Kat        Draft             Added Section 5; edits to Section 3
0.6      16.03.2014  Ronen Kat        Ready for review  Merge and finishing
0.7      16.03.2014  Ronen Kat        Ready for review  Merge and finishing
1.0      26.03.2014  Ronen Kat        Final             Addressed reviewers' comments

Contributors
Ronen Kat (IBM), Joel Nider (IBM), Yossi Kuperman (IBM), Andrea Arcangeli (Red Hat), David Gilbert (Red Hat)

Internal Reviewers
George Kousiouris (ICCS), Panagiotis Kokkinos (CTI)

Copyright
This report is © IBM and other members of the Consortium, 2013-2016. Its duplication is allowed only in its integral form for anyone's personal use and for the purposes of research or education.

Acknowledgements
The research leading to these results has received funding from the EC Seventh Framework Programme FP7/2007-2013 under grant agreement n° 609828.

Glossary of Acronyms

Acronym   Definition
Dx.x.x    Deliverable x.x.x
DoW       Description of Work
I/O       Input/Output, in relation to data transfer
EC        European Commission
KVM       A virtualization infrastructure for the Linux kernel which turns it into a hypervisor
Mxx       Month (xx) number from the start of the project
PM        Project Manager
PO        Project Officer
QEMU      Emulator and virtualization machine that allows a complete operating system to run on top of another
RCL       Resource Consolidation Layer
TBD       To Be Defined
VM        Virtual Machine
VMM       Virtual Machine Manager
WP        Work Package

Table of Contents

1. Executive Summary
2. Introduction
   2.1. Document Focus
   2.2. Next Steps
3. I/O Consolidation
   3.1. Components
   3.2. Interfaces
   3.3. Modules
4. Memory Consolidation and Externalization
   4.1. Components
   4.2. Interfaces
   4.3. Modules
5. Cloud Management
6. References

List of Figures

Figure 1: Architecture focus on the resource consolidation layer
Figure 2: I/O consolidation modules and interfaces
Figure 3: Memory externalization components and interfaces
Figure 4: OpenStack components to be updated for the project

1. Executive Summary

This first delivery of the Resource Consolidation Layer (RCL) Design and Open Specification document focuses on the high-level architecture of the "Resource Externalization and Consolidation as a Facilitator for Fault Tolerance" work package (WP). It presents the components of the I/O consolidation, the memory consolidation and externalization, and the cloud management, together with the interactions between these components. The design and open specification will be extended in future versions, as part of D3.1.2 and D3.1.3, as the project progresses.

2. Introduction

The RCL addresses the end-to-end implementation of virtual resource consolidation through enhancements to the Virtual Machine Manager (VMM) and to the Virtual Machine (VM) paravirtualized device drivers, in conjunction with modern hardware support for x86 full virtualization such as Intel VT and AMD-V. The RCL enables a VM to use not only local resources but also resources residing on remote physical hosts within the datacenter, making them appear to the VM as local resources. The I/O consolidation is the I/O middleware that allows network I/O resources and block I/O resources to be accessed and shared by multiple VMs, and enables resources to be switched between VMs and physical hosts. The memory consolidation concept is based on externalization of memory, allowing VM memory to reside physically not only on the local physical machine but also on a remote physical machine.

Figure 1: Architecture focus on the resource consolidation layer.

2.1. Document Focus

This first design and open specification document contains the high-level design of the resource consolidation layer, including the I/O consolidation and memory externalization components and interfaces.

2.2. Next Steps

In M9 (June 2014) a first software prototype implementation (D3.2.1) will realize the design and open specification in this document. It will be followed in M12 (September 2014) by a scientific report (D3.3.1) describing the outcome of the design and implementation. Next, the design and open specification will be expanded with additional details in deliverable D3.1.2 in M18 (March 2015), which will be the next update of this document.

3. I/O Consolidation

Figure 2 shows the task's components and their dependencies on internal and external components; each of the following components is implemented as part of Linux kernel version 3.13.

Figure 2: I/O consolidation modules and interfaces.

Note component 3.3.TBD, which is part of Task 3.3, scheduled to start in M13 (October 2014). The notation TBD reflects that the design of this component has not yet been performed; it may in fact be composed of multiple components. The details of component 3.3.TBD are to be included in the next update of this document.

3.1. Components

The guest operating system (front-end) runs as a VM on top of KVM [1] and QEMU [2] and will have components 3.1.12, 3.1.13, 3.1.11 and 3.1.1 installed, whereas the I/O consolidation hypervisor (back-end) runs on bare metal with the same base kernel and has modules 3.1.22, 3.1.23, 3.1.21 and 3.1.1 installed (module 3.1.1 is installed on both the front-end and the back-end).

The following lists, for each component in Figure 2, the component number, name, description and role in the project, the success indicator for the component, and the interfaces of the component.

COMPONENT 3.1.1 - Split I/O Ethernet Transport
Description: This component is responsible for implementing the Ethernet transport layer to enable efficient and scalable communication between the Split I/O front-end and Split I/O back-end components.
Success indicator: Less than 20% overhead compared to traditional para-virtual I/O.
Interfaces exposed (internal and external): 3.1.1.1
Interfaces consumed (internal and external): Only Linux kernel interfaces will be used, none from other components within the project.

COMPONENT 3.1.11 - Split I/O Generic Front-End
Description: This component is responsible for providing common Split I/O data services to both the network and block front-ends. It is also responsible for instantiating the virtual block and virtual network devices of the VMs.
Success indicator: Virtual block and network devices consolidated in a centralized and remote I/O hypervisor.
Interfaces exposed (internal and external): 3.1.11.1, 3.1.11.2
Interfaces consumed (internal and external): 3.1.1.1

COMPONENT 3.1.21 - Split I/O Generic Back-End
Description: This component is responsible for providing common Split I/O data services to both the network and block back-ends. It is also responsible for exposing a control interface to the cloud management layer that can be used to link the VMs with the corresponding I/O hypervisor and to manage the virtual network and virtual block devices.
Building block: Linux/KVM
Success indicator: Virtual block and network devices consolidated in a centralized and remote I/O hypervisor.
Interfaces exposed (internal and external): 3.1.21.1, 3.1.21.2
Interfaces consumed (internal and external): 3.1.1.1

COMPONENT 3.1.12 - Split I/O Block Front-End
Description: This component is responsible for exposing virtual block devices to the guest operating system and sending all the read/write requests to the block back-end running in the I/O hypervisor.
Success indicator: Virtual block devices consolidated in a centralized and remote I/O hypervisor.
Interfaces exposed (internal and external): None
Interfaces consumed (internal and external): 3.1.11.1

COMPONENT 3.1.13 - Split I/O Net Front-End
Description: This component is responsible for exposing virtual network devices to the guest operating system and sending/receiving all network frames to/from the network back-end running in the I/O hypervisor.
Building block: Linux/KVM
Success indicator: Virtual network devices consolidated in a centralized and remote I/O hypervisor.
Interfaces exposed (internal and external): None
Interfaces consumed (internal and external): 3.1.11.1, internal Linux interfaces

COMPONENT 3.1.22 - Split I/O Block Back-End
Description: This component is responsible for processing the block read and block write requests sent by the block front-end running within each VM. It maps the (remote) virtual block device exposed to each VM with a (local) block device.
Building block: Linux/KVM
Success indicator: Virtual block devices consolidated in a centralized and remote I/O hypervisor.
Interfaces exposed (internal and external): None
Interfaces consumed (internal and external): 3.1.21.1

COMPONENT 3.1.23 - Split I/O Net Back-End
Description: This component is responsible for receiving/sending virtual network L2 frames from/to the virtual network devices of each VM. It bridges the (remote) virtual network devices exposed to each VM with a (local) tap/macvtap interface. The tap interface can be connected with any virtual network (e.g. OVS/Linux bridge) and the macvtap interface can be connected to any physical NIC.
Building block: Linux/KVM
Success indicator: Virtual network devices consolidated in a centralized and remote I/O hypervisor.
Interfaces exposed (internal and external): None
Interfaces consumed (internal and external): 3.1.21.1, internal Linux interfaces
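Purely as an illustration of the local side of that bridge, the following minimal sketch shows how a user-space program can attach to an existing tap device using the standard Linux TUN/TAP interface (TUNSETIFF) and read one L2 frame. The device name "tap0" and the single-frame handling are placeholders, and the actual back-end (3.1.23) is planned as a kernel module rather than user-space code.

```c
/* Minimal sketch: attach to an existing tap device ("tap0" is a placeholder)
 * and read one L2 frame.  Illustrative only -- the real back-end (3.1.23)
 * is a kernel module, not user-space code. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

int main(void)
{
    struct ifreq ifr;
    char frame[2048];

    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) { perror("open /dev/net/tun"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;          /* raw Ethernet frames */
    strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); close(fd); return 1; }

    /* Each read() returns one Ethernet frame from the local virtual network;
     * the real back-end would forward it to the VM's remote front-end. */
    ssize_t n = read(fd, frame, sizeof(frame));
    if (n > 0)
        printf("received %zd-byte frame from %s\n", n, ifr.ifr_name);

    close(fd);
    return 0;
}
```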

3.2. Interfaces

The following interfaces are part of Task 3.1 (Consolidation of Virtualized I/O). The interfaces that are external to Task 3.1 include a concrete and detailed API specification, while the internal (to Task 3.1) APIs will be detailed in the next update of this document.

The following lists, for each interface in Figure 2, the interface number, name, description and role in the project, and the components that use the interface.

INTERFACE 3.1.1.1 - Split I/O Transport
Description: Abstracts the concrete transport (Ethernet) used for the communication. Enables new transports to be plugged in easily (e.g. InfiniBand, which will not be implemented as part of this project).
Consumed by components (internal and external): 3.1.11, 3.1.21

INTERFACE 3.1.11.1 - Split I/O Front-End Protocol
Description: Defines the generic I/O communication front-end services (based on the virtio protocol) for all types of virtual I/O devices (block and net).
Consumed by components (internal and external): 3.1.12, 3.1.13

INTERFACE 3.1.21.1 - Split I/O Back-End Protocol
Description: Defines the generic I/O communication back-end services (based on the virtio protocol) for all types of virtual I/O back-ends (block and net).
Consumed by components (internal and external): 3.1.22, 3.1.23

INTERFACE 3.1.11.2 - Split I/O Internal Control
Description: Defines the control operations (e.g. specify I/O hypervisor, create virtual network device, create virtual block device) that can be used by the back-ends to configure and manage the front-ends. This is a logical interface that will be used remotely (RPC over the Split I/O transport).
Consumed by components (internal and external): 3.1.21
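The protocol interfaces above are based on virtio and carried over the Ethernet transport (3.1.1.1); their concrete wire format is not specified in this document. Purely for illustration, a request header for such a split I/O protocol could look like the sketch below, in which every field name, width and opcode value is an assumption.

```c
/* Hypothetical wire header for a split I/O request carried over the
 * Ethernet transport (interface 3.1.1.1).  All fields, sizes and opcodes
 * are assumptions for illustration; the real protocol is based on virtio
 * and is defined by interfaces 3.1.11.1 / 3.1.21.1. */
#include <stdint.h>

enum splitio_opcode {
    SPLITIO_OP_BLK_READ  = 1,   /* block front-end -> block back-end     */
    SPLITIO_OP_BLK_WRITE = 2,
    SPLITIO_OP_NET_TX    = 3,   /* net front-end -> net back-end         */
    SPLITIO_OP_NET_RX    = 4,
    SPLITIO_OP_CONTROL   = 5,   /* internal control (interface 3.1.11.2) */
};

struct splitio_hdr {
    uint16_t opcode;     /* one of enum splitio_opcode                   */
    uint16_t device_id;  /* virtual block/net device on the guest        */
    uint32_t request_id; /* matches responses to requests                */
    uint64_t offset;     /* block offset in bytes (block requests only)  */
    uint32_t length;     /* payload length following this header         */
} __attribute__((packed));
```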

The following interface covers the integration between Task 3.1 (Consolidation of Virtualized I/O) and Task 3.3 (Cloud Management Integration). Listed below are the interface number, name, description and role in the project, the components that use the interface, and the APIs in the interface.

INTERFACE 3.1.21.2 - Split I/O External Control
Description: Defines the control operations (e.g. specify I/O hypervisor, specify virtual network devices and block devices, connect a virtual network device with a virtual switch, set the backing store for a virtual block device) that will be used by the cloud management component to configure and manage the system.
Consumed by components (internal and external): 3.3.TBD (described in Section 5)

API:

RESET_GUEST_DEVICES()
  Resets a guest to a clean state, removing all virtual devices.
  Parameters: I/O guest's management address (e.g. VF's MAC address)

CREATE_BLOCK_DEVICE()
  Creates a virtual block device on the specified guest.
  Parameters: I/O guest's management address (e.g. VF's MAC address); guest's device name (e.g. /dev/vrda); I/O hypervisor device name (e.g. /dev/sdb); QoS rate limit (optional)

REMOVE_BLOCK_DEVICE()
  Removes a virtual block device from the specified guest.
  Parameters: I/O guest's management address (e.g. VF's MAC address); guest's device name (e.g. /dev/vrda)

CREATE_NETWORK_DEVICE()
  Creates a virtual network device on the specified guest.
  Parameters: I/O guest's management address (e.g. VF's MAC address); guest's device name (e.g. eth7); I/O hypervisor device name (e.g. eth5); QoS rate limit (optional)

REMOVE_NETWORK_DEVICE()
  Removes a virtual network device from the specified guest.
  Parameters: I/O guest's management address (e.g. VF's MAC address); guest's device name (e.g. eth7)

3.3. Modules

A Python library will be provided to ease the integration with the cloud management component (3.3.TBD), which is described in Section 5. The library will implement interface 3.1.21.2 and act as a wrapper for the kernel module (3.1.21) that is responsible for exposing the management of the virtual network and virtual block devices of the VMs. A sketch of this control interface is given below.
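As a sketch of how interface 3.1.21.2 could be expressed programmatically, the prototypes below restate the operations from the API table above in C. The return type, parameter types and the QoS convention are assumptions; the project's actual binding is the planned Python wrapper library.

```c
/* Sketch of interface 3.1.21.2 (Split I/O External Control) as C-style
 * prototypes.  Operation names and parameters are taken from the API table
 * above; return type, parameter types and the QoS convention are assumptions.
 * The project plans to expose this through a Python wrapper library (3.3). */
#include <stdint.h>

/* Guest management address, e.g. the VF's MAC address. */
typedef struct { uint8_t mac[6]; } guest_addr_t;

/* Reset a guest to a clean state, removing all virtual devices. */
int reset_guest_devices(guest_addr_t guest);

/* Create/remove a virtual block device on the specified guest.
 * guest_dev:      guest device name, e.g. "/dev/vrda"
 * hypervisor_dev: I/O hypervisor device name, e.g. "/dev/sdb"
 * qos_rate_limit: optional rate limit; 0 means unlimited (assumption). */
int create_block_device(guest_addr_t guest, const char *guest_dev,
                        const char *hypervisor_dev, uint64_t qos_rate_limit);
int remove_block_device(guest_addr_t guest, const char *guest_dev);

/* Create/remove a virtual network device on the specified guest.
 * guest_dev:      guest device name, e.g. "eth7"
 * hypervisor_dev: I/O hypervisor device name, e.g. "eth5" */
int create_network_device(guest_addr_t guest, const char *guest_dev,
                          const char *hypervisor_dev, uint64_t qos_rate_limit);
int remove_network_device(guest_addr_t guest, const char *guest_dev);
```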

4. Memory Consolidation and Externalization

Figure 3 shows the Task 3.2 components and their dependencies on internal and external components. Component 4.1.6 (Post-copy Manager) is part of Work Package 4 (external to this report) and is not described here.

Figure 3: Memory externalization components and interfaces. (The figure shows the Linux kernel mm subsystem enhancements (3.2.1), the userspace page fault control interface (3.2.1.1), the remote memory front end (3.2.2), the remote memory destination API (3.2.2.2), the remote memory protocol (3.2.2.1), the remote memory handler (3.2.3), the remote memory source API (3.2.3.1), and the Post-copy Manager (4.1.6).)

4.1. Components

The following lists, for each component in Figure 3, the component number, name, description and role in the project, the success indicator for the component, and the interfaces of the component.

COMPONENT 3.2.1 - Linux kernel mm subsystem enhancements
Description: New mechanism to handle page faults in userspace.
Success indicator: Running a guest with remote memory (as part of post-copy).
Interfaces exposed (internal and external): 3.2.1.1
Interfaces consumed (internal and external): Internal kernel interfaces

COMPONENT 3.2.2 - Remote memory front end
Description: Routes page requests/data between the kernel on the destination machine and the network towards the source machine.
Success indicator: Running a guest with remote memory (as part of post-copy).
Interfaces exposed (internal and external): 3.2.2.1
Interfaces consumed (internal and external): 3.2.1.1

COMPONENT 3.2.3 - Remote memory handler
Description: Satisfies page requests from the source machine and routes control messages to the remote memory front end.
Success indicator: Running a guest with remote memory (as part of post-copy).
Interfaces exposed (internal and external): 3.2.2.1
Interfaces consumed (internal and external): 3.2.1.1

4.2. Interfaces

The following lists, for each interface in Figure 3, the interface number, name, description and role in the project, the components that use the interface, and the APIs in the interface.

INTERFACE 3.2.1.1 - Userspace page fault control
Description: Provides an efficient way for user space to register for faults on a region of virtual memory, and a way for it to respond to those faults.
Consumed by components (internal and external): 3.2.2, but it must also be designed to be generally useful to other users.

API:

madvise(start, length, MADV_[NO]USERFAULT)
  [Un]Marks a region of anonymous virtual memory such that pages that have not been allocated cause a "user fault", to be notified on a userfaultfd, causing the faulting thread to pause.

int userfaultfd(int flags) (TBD)
  Opens a file descriptor to receive user fault notifications and to unpause faulted threads.

remap_anon_pages(dest, src, len)
  Moves the mapping of a page of anonymous memory to a new virtual memory location.

read(userfaultfd, ...)
  Receives from the kernel the address of a page that needs to be filled by userspace.

write(userfaultfd, ...)
  Requests that the kernel unblock threads waiting on a given virtual memory page.
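The calls above are proposed kernel extensions that had not been merged into mainline Linux at the time of writing, so the sketch below should be read only as an illustration of the intended destination-side flow: mark a region as userfault, read fault addresses from the userfaultfd, fill the page, and wake the faulting thread. The constant value for MADV_USERFAULT, the syscall numbers, and the read/write message formats are all assumptions.

```c
/* Sketch of the destination-side user-fault flow built on the proposed
 * interfaces in the table above.  These interfaces were not part of mainline
 * Linux at the time of writing: MADV_USERFAULT, the userfaultfd() and
 * remap_anon_pages() syscall numbers, and the read/write message formats are
 * all assumptions, made only so the flow can be shown end to end. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#ifndef MADV_USERFAULT
#define MADV_USERFAULT 18              /* assumed value of the proposed flag */
#endif

/* Assumed wrappers for the proposed syscalls (numbers are placeholders). */
static int userfaultfd(int flags)
{
    return syscall(/* __NR_userfaultfd (assumed) */ 323, flags);
}
static int remap_anon_pages(void *dst, void *src, size_t len)
{
    return syscall(/* __NR_remap_anon_pages (assumed) */ 324, dst, src, len);
}

int main(void)
{
    size_t page = sysconf(_SC_PAGESIZE);
    size_t len  = 64 * page;

    /* Guest RAM region whose pages still live on the source host. */
    void *guest_ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* 1. Mark the region so not-yet-allocated pages raise user faults. */
    madvise(guest_ram, len, MADV_USERFAULT);

    /* 2. Open a userfaultfd to receive fault notifications. */
    int ufd = userfaultfd(0);

    for (;;) {
        /* 3. Each read() reports a faulting address (format assumed). */
        uint64_t fault_addr;
        if (read(ufd, &fault_addr, sizeof(fault_addr)) != (ssize_t)sizeof(fault_addr))
            break;

        /* 4. Fetch the page from the source (interface 3.2.2.1, REQPAGES),
         *    stage it in a scratch buffer, then remap it into place. */
        void *staging = mmap(NULL, page, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(staging, 0, page);                 /* placeholder page data */
        remap_anon_pages((void *)(uintptr_t)fault_addr, staging, page);

        /* 5. Wake the thread(s) blocked on this page (format assumed). */
        write(ufd, &fault_addr, sizeof(fault_addr));
    }
    return 0;
}
```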

INTERFACE 3.2.2.1 - Remote memory protocol
Description: Provides a mechanism for passing pages on demand between virtual machines, and for controlling that process.
Consumed by components (internal and external): 3.2.3; it also interacts with the existing migration protocol.

Protocol entries (these entries correspond to packets over a modified migration protocol):

rp.reqpages(start, len) (dest->src)
  A request for a region of guest physical memory of the given length. The source must ensure that at least these pages are provided.

PAGE(address, data) (src->dest)
  A page of guest physical memory (modification of the semantics of the existing migration message).

command(code, data) (src->dest)
  Provides a synchronisation and control mechanism for sub-commands; its uses are TBD, but some are given below.

command(sensitise_ram) (src->dest)
  Causes the destination to switch into userfault mode.

command(openrp) (src->dest)
  Asks the destination to open a "return path" for sending page requests over.

command(reqack(id)) (src->dest)
  Requests an acknowledgement from the destination.

command(discard, (start, range)[]) (src->dest)
  Invalidates a previously transmitted page, requiring the destination to send a REQPAGES to recover it.

rp.ack(id) (dest->src)
  Response to a REQACK.

INTERFACE 3.2.2.2 - Remote memory destination API
Description: Provides the page destination with control over the link to, and control over, the source system.
Consumed by components (internal and external): 4.1.6 (Post-copy Manager instance on the destination side)

API (these are internal interfaces within QEMU and are subject to change):

migrate_send_rp_message
  Sends a response to the page source system, typically for responses to commands.

qemu_file_get_return_path
  Retrieves a handle used to send messages to the page source.

receivedmap[] (API TBD)
  An ownership table identifying whether a page has already been received.

requestedmap[] (API TBD)
  A map holding information on pages requested from the source.

ram_hosttest
  Checks for the presence of the kernel enhancements provided by 3.2.1.
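The protocol entries of interface 3.2.2.1 extend QEMU's existing migration stream rather than define a new wire format, so the encoding below is purely a hypothetical illustration of the message set; the type values, field widths and fixed page size are assumptions.

```c
/* Hypothetical encoding of the remote memory protocol entries (interface
 * 3.2.2.1).  The real messages extend QEMU's existing migration stream; the
 * layout, type values and field widths here are assumptions. */
#include <stdint.h>

enum rmem_msg_type {
    RMEM_RP_REQPAGES = 1,   /* dest -> src: request a region of guest RAM  */
    RMEM_PAGE        = 2,   /* src -> dest: one page of guest physical RAM */
    RMEM_COMMAND     = 3,   /* src -> dest: synchronisation / control      */
    RMEM_RP_ACK      = 4,   /* dest -> src: response to COMMAND(REQACK)    */
};

enum rmem_command_code {           /* sub-commands carried by RMEM_COMMAND */
    RMEM_CMD_SENSITISE_RAM = 1,    /* switch destination into userfault mode  */
    RMEM_CMD_OPENRP        = 2,    /* open the return path for page requests  */
    RMEM_CMD_REQACK        = 3,    /* request an acknowledgement              */
    RMEM_CMD_DISCARD       = 4,    /* invalidate previously transmitted pages */
};

struct rmem_reqpages {             /* rp.reqpages(start, len) */
    uint64_t start;                /* guest physical address  */
    uint64_t len;                  /* length in bytes         */
};

struct rmem_page {                 /* PAGE(address, data)     */
    uint64_t address;              /* guest physical address  */
    uint8_t  data[4096];           /* page size assumed       */
};

struct rmem_command {              /* command(code, data)     */
    uint32_t code;                 /* enum rmem_command_code  */
    uint32_t data_len;
    /* followed by data_len bytes of command-specific data,
     * e.g. (start, range) pairs for RMEM_CMD_DISCARD         */
};
```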

INTERFACE 3.2.3.1 - Remote memory source API
Description: Provides the page source with control over the establishment of the link to, and control over, the partner system.
Consumed by components (internal and external): 4.1.6 (Post-copy Manager instance on the source side)

API (these are internal interfaces within QEMU and are subject to change):

qemu_savevm_command_send(command, data, len)
  Sends a command to the page destination system; corresponds to the command() message in 3.2.2.1. Matching APIs are provided for each command.

qemu_file_get_return_path
  Retrieves a handle used by a consumer (such as the post-copy manager) to receive messages from the page destination.

sentmap[] (API TBD)
  An ownership table identifying whether a page has been sent to the destination system.
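The sentmap[], receivedmap[] and requestedmap[] tables are marked "API TBD" above; as a generic illustration only, such a per-page ownership table could be kept as a plain bitmap, for example:

```c
/* Generic per-page ownership bitmap, illustrating what the sentmap[] /
 * receivedmap[] tables described above could look like.  Their real APIs
 * are marked TBD in this document; everything here is an assumption. */
#include <stdlib.h>
#include <stdbool.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

struct page_map {
    unsigned long *bits;     /* one bit per guest page */
    size_t nr_pages;
};

static bool page_map_init(struct page_map *m, size_t nr_pages)
{
    m->nr_pages = nr_pages;
    m->bits = calloc((nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG,
                     sizeof(unsigned long));
    return m->bits != NULL;
}

/* Mark a page as sent (source side) or received (destination side). */
static void page_map_set(struct page_map *m, size_t page)
{
    m->bits[page / BITS_PER_LONG] |= 1UL << (page % BITS_PER_LONG);
}

/* Has this page already been sent/received? */
static bool page_map_test(const struct page_map *m, size_t page)
{
    return m->bits[page / BITS_PER_LONG] & (1UL << (page % BITS_PER_LONG));
}
```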

4.3. Modules

The following modules will be created as part of the memory externalization.

3.2.1/remap - Linux kernel facility for remapping anonymous memory.
3.2.1/userfault-madvise - Linux kernel facility for marking an area of memory as userfault.
3.2.1/userfaultfd - Linux kernel mechanism for communicating faults to userspace and allowing userspace to continue execution of a previously blocked thread.
3.2.2/fault-wire - QEMU userspace code to accept faults from 3.2.1/userfaultfd and organise them for transmission over the network (for 3.2.2.1).
3.2.2/wire-map - QEMU userspace code to accept incoming pages, use 3.2.1/remap to remap them, and then allow the thread to continue via 3.2.1/userfaultfd.
3.2.2/command-handler - QEMU protocol management code to accept commands from the source and process them.
3.2.2/destination-page-ownership - QEMU page management code to keep track of incoming and requested pages.
3.2.2/init - QEMU code to initialise the kernel userfault code.
3.2.3/return-path - QEMU protocol management code to provide a return path from the destination host to the source and to format the messages carried by it.
3.2.3/response-handler - QEMU protocol management code to accept responses from the destination and process them.
3.2.3/source-page-ownership - QEMU page management code to keep track of pages that have been donated to the destination.

5. Cloud Management

The cloud management is part of Task 3.3, which is scheduled to start in M13 (October 2014) and will implement component 3.3.TBD, a placeholder for the cloud management implementation. The OpenStack components to be enhanced are emphasized (as circles) in Figure 4 (taken from [3]), and include:

Compute component - Nova
Block storage component - Cinder
Networking component - Neutron
Dashboard (management) component - Horizon

Figure 4: OpenStack components to be updated for the project.

6. References

[1] KVM - Kernel-based Virtual Machine. http://www.linux-kvm.org
[2] QEMU - Open source process emulator. http://wiki.qemu.org
[3] OpenStack Cloud Administrator Guide - Havana. http://docs.openstack.org/admin-guide-cloud/content/index.html