Hardware Virtualization on ARM Cortex-A Low-Cost CPUs
QorIQ LS1 Processor Family
SILICA The Engineers of Distribution.

QorIQ LS1021A Communications Processor
Dual-core solution with integrated LCD controller for fanless applications

Overview
The QorIQ LS1021A processor delivers extensive integration and power efficiency for fanless, small form factor networked applications. Incorporating dual ARM Cortex-A7 cores with ECC protection running at up to 1.0 GHz, the QorIQ LS1021A is engineered to deliver over 5,000 CoreMarks of performance, as well as virtualization support, advanced security features and the broadest array of high-speed interconnects and optimized peripheral features ever offered in a sub-3 W processor.

Target Applications
- Enterprise AP routers for 802.11ac/n
- Multi-protocol IoT gateways
- Industrial and factory automation
- Mobile wireless routers
- Printing
- Building automation
- Smart energy

Unparalleled Integration
The QorIQ LS1 family of devices was designed specifically to enable a new class of power-constrained applications by bringing together highly efficient ARM cores and over twenty years of Freescale networking expertise and IP to offer the highest level of integration under 3 W. With ECC protection on both L1 and L2 caches, QUICC Engine support, USB 3.0 and a broad range of other peripheral and I/O features, the LS1 family of devices is purpose-built for multicore platforms that must perform more securely, intelligently and efficiently without sacrificing performance.

[Block diagram: QorIQ LS1021A processor. Core complex: two ARM Cortex-A7 cores (FPU, NEON, 32 KB I-cache, 32 KB D-cache each) with a 512 KB coherent L2 cache on a CCI-400 cache-coherent interconnect. Accelerators and memory control: security engine, security fuses and security monitor, DDR3L/DDR4 memory controller, 128 KB SRAM, DMA, QUICC Engine (HDLC, TDM, PB), XoR/CRC. Networking elements: 3x Ethernet, PCIe 2.0, SATA 3.0, 6 GHz SerDes. Basic peripherals and interconnect: IFC flash, QuadSPI flash, 1x SD/MMC, 2x DUART, 6x LPUART, 3x I2C, 2x SPI, GPIO, audio subsystem (4x I2S, ASRC, SPDIF), 4x CAN, FlexTimer, PWM, USB 3.0 with PHY, USB 2.0, LCD controller, internal boot ROM, power management, system control.]

Michael Röder, Silica The Engineers of Distribution
Peter van Ackeren, Freescale Semiconductor UK Ltd
Authors:
Michael Roeder is Sr. Business Development Manager for Software and Freescale High-End Microprocessors at Silica, supporting key customers and training customers on Silica's Architech program in Central Europe. Contact: [email protected]
Peter van Ackeren is Sr. Field Application Engineer at Freescale Semiconductor, responsible for technical software support on Freescale's QorIQ multicore processors for the networking and industrial markets. Contact: [email protected]
CONTENT
I. INTRODUCTION
  A. Definitions
  B. Virtualization Concepts
  C. Virtualization Advantages and Disadvantages
II. EMBEDDED GNU/LINUX VIRTUALIZATION
  A) Hypervisor Type-1-Based Solutions
    A.1) Hypervisor Concepts
    A.2) Hardware Virtualization Support
    A.3) Type-1 Hypervisor Software Solutions
  B. Container-Based Virtualization
III. SCENARIOS AND USE CASES
  A) Hypervisor vs. Container Virtualization
    A.1) Start-up Time
    A.2) Dynamic Runtime Control
    A.3) Speed
    A.4) Isolation
    A.5) Communication Channels between VMs and Host
    A.6) Flash Memory Consumption
    A.7) Dynamic Resource Assignment or Separation
    A.8) Direct HW Access
    A.9) Stability
    A.10) Update Flexibility
    A.11) Conclusion
  B) Combinations of HVV and LCV
  C) Virtualization and Real-time
    C.1) LCV for Management of RT and Non-RT Partitions
    C.2) HVV to Separate the Non-Real-Time Partition
    C.3) HVV to Run an RTOS or Bare-Metal Application
  D) Application Scenarios
    D.1) Reliability and Protection
    D.2) Flexibility and Scalability
    D.3) Feature Enhancement
IV. PERFORMANCE ANALYSIS
  A) Benchmarking Considerations
  B) Synthetic Benchmarks
  C) Experimental Results
    C.1) Freescale QorIQ LS102x Processor Overview
    C.2) Software Setup
    C.3) Results
V. CONCLUSION
QorIQ LS1021A Communications Processor
QorIQ LS1043A and LS1023A Communication Processors
Abstract

Over the last few years GNU/Linux has become the dominant embedded operating system, supporting a wide variety of SoCs which include powerful cores (like ARM Cortex-A or Power Architecture CPUs) and co-processors, with which high-performance applications, possibly with rich GUIs, can be created. However, when the need for hard real-time capabilities like low latency and timing determinism or for safety certifications arises, operating systems like Linux or Android natively provide little support or capabilities. In this paper the use of virtualized systems for various applications will be discussed. With the availability of open software solutions for hardware-virtualization-capable sub-10-Euro CPUs, this topic is becoming more and more important and interesting for industrial and medical applications. In detail we will discuss:
- existing virtualization strategies and solutions with their respective advantages and disadvantages for embedded system applications
- hardware virtualization support on embedded SoCs
- partitioning strategies to optimize systems for high-availability, directed fail-over / hot stand-by or high-performance applications
- benchmarking and system analysis of virtualized systems
- mechanisms to share peripheral access and enable cross-partition communication

We will also discuss various practical scenarios for industrial and medical use, including:
- running legacy code in one dedicated partition to avoid recertification
- using virtualization to separate safety-critical and non-critical code (e.g. GUI, system management)
- use of multiple OSs or multiple instances of the same OS to optimize overall system performance and power consumption

I. INTRODUCTION

This chapter intends to provide an introduction to virtualization in general and starts with an overview of the currently available virtualization concepts and software solutions for embedded GNU/Linux. This is followed by an introduction to the hardware support that these virtualization solutions need or benefit from, along with an overview of the virtualization support on current Intel, ARM and Power CPUs. We will then provide an introduction to the CPU used for our practical tests, the Freescale QorIQ LS1021A processor, and the virtualization solutions available for it. The chapter starts with some definitions of terms.

A. DEFINITIONS

In this paper, some terms are used which are sometimes misunderstood or used with varying meanings. The definitions as used in our paper are presented in this chapter.

Virtualization refers to the act of creating a virtual (rather than actual) version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources. Contrary to virtualization, Emulation refers to the process of emulating a different machine or CPU architecture in the guest system, while virtualization implements the identical (or a subset of the identical) CPU architecture and hardware of the physical HOST.

A Real-Time System is a system which is both event deterministic and timing deterministic. Event Determinism means that for each known and valid state and possible inputs of such a system, the next state and its outputs are known. In other words, no randomness is involved in the functionality. As long as the CPU works as designed, i.e. is not exposed to hard radiation, this is usually relatively easy to achieve. Timing Determinism means that the time consumed by all those state transitions is known and determinable.
Usually, real-time specifications have known and defined upper boundaries to reach certain states (= complete a task). The maximum response time of the real-time system to complete this task may not exceed this upper boundary. Real-time systems are not necessarily faster than non-real-time systems. Actually, the mechanisms to achieve determinism and reliability in an RTOS usually generate an overhead, so that the overall speed is often lower than in a non-RTOS.
The Trusted Computing Base (TCB) of a system is defined to be that part of the software (and hardware) that the security and/or safety of the system depends on and that has to be distinguished from the (usually much larger) part of the system that can misbehave without affecting security or safety.

Safety of a system refers to its capability to behave reliably according to its specification, to ensure its availability, and to its capability to detect misbehavior and react accordingly to minimize negative implications.

Security of a system refers to its resistance against harmful intentional attacks and its capability to protect its data and software against unintended distribution and modification.

B. VIRTUALIZATION CONCEPTS

Virtualization is well known from the PC/desktop world through solutions like VMware or VirtualBox. However, in the embedded world different requirements arise and therefore different solutions exist. We can identify two relevant concepts of virtualization in the embedded world: hypervisor-based concepts and virtualization containers.

Hypervisor Virtualization is the process of hosting (= creating, running and managing) one or many virtual machines ("GUESTs") on one single physical machine ("HOST"), usually with the intention of running different operating systems or separated instances of the same operating system in these virtual machines. The software that manages the virtual machines on the host machine is called the Hypervisor.

Container Virtualization is a concept of virtualization on operating system level to run multiple instances of the same operating system user space, while sharing the kernel of the host operating system.

Both concepts have their individual advantages and disadvantages, which will be explained in the next chapter. However, some advantages and disadvantages are common to virtualization in general.

C. VIRTUALIZATION ADVANTAGES AND DISADVANTAGES

In general, virtualization offers the following advantages over running each machine on a physical target:
- Manageability: virtual machines can be monitored and managed (restarted, prioritized, stopped, updated).
- Parallelism: multiple virtual machines can be started and run in parallel, even on one single processor core.
- Hardware resource restriction: each VM can be assigned a limited set of hardware resources like peripherals, cores, memory, etc.
- Delimitation: software and even complete operating systems can be run without influencing or even knowing of each other.

In Chapter III we will show in detail how these advantages can be leveraged on embedded systems in different scenarios. The most important disadvantages of virtualization are:
- Performance decrease: despite CPU hardware virtualization support and efficient coding of the hypervisor, a small performance overhead is always present for virtualized CPUs.
- Influences on real-time: virtualization adds additional layers for thread management, function calling, etc. between application and hardware. This might decrease the real-time determinism of the respective application. Please refer to Chapter III for information about real-time and virtualization.
- Increased memory footprint: virtualization increases the total memory footprint required for the system, because additional memory is required for host, hypervisor and sometimes multiple instances of the (same) operating system. How much additional memory is required depends on the virtualization approach and the type of hypervisor used. We will cover memory consumption in Chapter II.
- Increased TCB: if embedded systems have to be certified for safety or reliability, each line of code that is part of the critical system (TCB) has to be analyzed and declared un-harmful. The effort grows exponentially with the number of code lines to be analyzed; therefore this code is usually kept to an absolute minimum. If virtualization is used on such a system, this code adds to the TCB, therefore adding to the analysis complexity.
II. EMBEDDED GNU/LINUX VIRTUALIZATION

As mentioned in the introduction, only two concepts of virtualization make sense for embedded systems: Type-1 (bare-metal) hypervisors and containers. We will take a more detailed look into the technical background and available solutions for both in this chapter.

A) HYPERVISOR TYPE-1-BASED SOLUTIONS

This section covers the differences between different hypervisor concepts and the hardware requirements to run a bare-metal-hypervisor-based solution on an embedded system.

A.1) HYPERVISOR CONCEPTS

When discussing CPU virtualization concepts, a common approach is to differentiate between Type-1 (bare-metal) and Type-2 (hosted) hypervisors. Bare-metal hypervisors provide a direct interface between guest and hardware and allow the guests to run mostly in direct execution mode (with direct access to the CPU for all instructions not requiring the involvement of the hypervisor). The picture below shows the concept of a Type-1 hypervisor:

[Figure: Type-1 hypervisor. The hypervisor runs directly on the hardware (CPU, RAM, devices) and hosts Virtual Machine 1 (Guest OS 1 with applications), Virtual Machine 2 (Guest OS 2 with applications) and a host OS with its applications.]

Hosted or Type-2 hypervisors rely on a host operating system to access the underlying hardware, therefore using the host operating system as a hardware abstraction layer to the actual hardware. The picture below shows the concept of a Type-2 hypervisor:

[Figure: Type-2 hypervisor. The host OS runs directly on the hardware (CPU, RAM, devices); the hypervisor runs on top of the host OS alongside native applications and hosts Virtual Machine 1 (Guest OS 1 with applications) and Virtual Machine 2 (Guest OS 2 with applications).]

The most important advantage of Type-1 hypervisors is the better speed and improved determinism, while the main advantage of Type-2 hypervisors is the better portability to new hardware. As long as a host operating system is available, a Type-2 hypervisor can be ported to the new hardware with little effort. Type-2 hypervisors are very popular in the PC world with representatives like VMware or VirtualBox, while they have little or no relevance in the embedded world, where resources are usually more restricted and performance matters. Many Type-1 hypervisor-based solutions exist for various processor platforms; however, when it comes to solutions that are widely available over multiple platforms and supported beyond the internal investments of SoC vendors, integrators and commercial entities, only two remain: XEN [1] and KVM [2]. Both are licensed under various (L)GPL licenses, use QEMU [3] to manage and execute the virtual machines and are supported by various virtualization management solutions. Besides support for one or both of these popular solutions, some CPU vendors still support and maintain proprietary Type-1 hypervisors to support special scenarios or use cases, e.g. [4]. In the next chapter we will take a look into the hardware requirements to effectively run a Type-1 hypervisor.

A.2) HARDWARE VIRTUALIZATION SUPPORT

Hardware virtualization support has been implemented in x86 and Power Architecture for years and therefore many Type-1 hypervisors are available for these architectures. Recently, hardware virtualization support has been added to the 32-bit ARMv7 (Cortex-A7 and Cortex-A15) [5] and 64-bit ARMv8 architectures. Low-cost CPUs in the sub-10-Euro range are becoming available that support hardware virtualization and can therefore be effectively used for virtualization on a low-power and low-cost platform.
The QorIQ LS1021A microprocessor by Freescale [6], used as a test and evaluation vehicle in this paper, is one of the first representatives of this CPU class, but others will soon follow in the i.MX and QorIQ families. With virtualization support available by default in all upcoming 64-bit ARMv8 CPUs, additional use cases will arise in the network and datacenter domain to leverage virtualization. It is possible to implement hypervisors for systems without hardware virtualization support by using para-virtualization, a concept in which the guest operating system is modified to use hypervisor function calls instead of the CPU's system calls; this, however, requires changes to the guest operating systems to be run on such hypervisors. So CPUs with virtualization support help
unveil the complete power of virtualization. The following are the features that ARM added in their recent extension of ARMv7 and in ARMv8.

HYP CPU Mode

Typically (and very simplified), an ARM CPU runs in two possible CPU modes. The first one, called USR mode, is the one in which applications are executed; the second one, called SVC (system or kernel) mode, is used for kernel/OS code. In a typical scenario, tasks would normally run in USR mode and the kernel would use SVC mode to execute certain sensitive instructions and to perform direct hardware accesses. If multiple kernels are running in parallel in a virtualization scenario, each kernel would be able to enter SVC mode, reconfigure the system and execute direct accesses to hardware - which must not happen. One way to circumvent this is to patch the kernel to use hypervisor calls instead of SVC calls, which would require the guest OS to be modified. To allow hypervisors to virtualize unmodified guest systems, a third mode, called HYP mode, was introduced. HYP mode is a CPU mode which is more privileged than any other (and the mode in which the hypervisor is at least partially executed). Instead of directly accessing the hardware in case of SVC calls, the CPU enters HYP mode and allows the hypervisor to handle these situations. So the different guests will continue to run in USR and SVC mode until a condition is reached that requires intervention of the hypervisor. The CPU then traps into HYP mode and allows the hypervisor to handle the situation, like performing a hardware access or providing guest isolation. To reduce virtualization overhead, ARM allows configuring the types of system calls for which HYP mode is entered, therefore allowing the hypervisor to configure in which situations it should be involved and when to leave the handling to the respective VM.

Memory Virtualization

Similar to the CPU, the MMU memory address translation also has to be extended to support virtualization. On a non-virtualized system, the MMU is used to directly translate Virtual Addresses (VA) to Intermediate Physical Addresses (IPA). The IPA corresponds to the physical address (PA) on such systems because one single operating system owns the complete physical address space.

Guest OS translation: VA => IPA, with IPA == PA

On a virtualized system, however, each guest is assigned a part of the physical memory (the guest physical address range) by the hypervisor, which the guest manages by itself and which is the complete address space available to it. In this case the IPA does not correspond to the physical address, so a second translation step has to be performed:

Guest OS translation: VA => IPA
Hypervisor translation: IPA => PA

This second translation step is configured and handled by the hypervisor and can only be configured in HYP mode. The two-stage concept allows the first translation step still to be configured separately by each guest OS, therefore requiring no code change in the guest OS.
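To make the two-stage concept concrete, the following toy sketch models the two stages as two composed lookup functions. It is purely illustrative: the flat 1:1 window mappings and all addresses are invented for this example, and real MMUs of course walk multi-level page tables in hardware.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                       /* 4 KB pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

/* Stage 1: configured by the guest OS via its own page tables (VA -> IPA).
   Toy mapping: the guest maps its virtual pages 1:1 into a guest-physical
   window starting at 0x40000000. */
static uint32_t stage1_lookup(uint32_t va)
{
    return 0x40000000u + (va & ~PAGE_MASK);
}

/* Stage 2: configured by the hypervisor in HYP mode (IPA -> PA).
   Toy mapping: this guest's "physical" window actually lives at
   0x88000000 in real DRAM. */
static uint32_t stage2_lookup(uint32_t ipa)
{
    return 0x88000000u + (ipa - 0x40000000u);
}

int main(void)
{
    uint32_t va  = 0x00008abcu;
    uint32_t ipa = stage1_lookup(va) | (va & PAGE_MASK);
    uint32_t pa  = stage2_lookup(ipa & ~PAGE_MASK) | (ipa & PAGE_MASK);
    printf("VA 0x%08x -> IPA 0x%08x -> PA 0x%08x\n", va, ipa, pa);
    return 0;
}

The guest only ever sees and programs the first mapping; the hypervisor swaps or restricts the second one without the guest noticing, which is exactly why no guest code change is required.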
Interrupt and Timer Virtualization

Timers are used very regularly in GNU/Linux task and thread management; therefore operations like setting and reading hardware timers are frequently executed on guest machines. Physical timers are usually memory-mapped devices on ARM systems, so that each time a timer is accessed from a guest, the hypervisor would have to be involved - substantial overhead on a very common operation. Therefore virtual timers and counters have been added to the architecture, which can be configured using coprocessor CPU registers. If running in HYP mode, a virtual timer or counter directly corresponds to a physical one; if running in kernel mode, a virtual timer or counter is available that can be managed by the hypervisor. For example, if one guest is preempted before a timer expires, the hypervisor can program a software timer instead and either feed back the modified value as soon as the guest is scheduled again or generate a virtual timer interrupt in the host upon timer events.

Similar to timer accesses, handling interrupts from peripherals or other cores is a very common task. Therefore the GIC (Generic Interrupt Controller) has been virtualized as well, allowing virtual CPUs to communicate with the VGIC without trapping the host and hypervisor, by using a virtual CPU interface. Incoming hardware interrupts are handled by the hypervisor and distributed to the respective CPUs, while virtual machines can use the virtual interface to generate software-generated interrupts.

Other Peripheral Virtualization
For the same reasons that virtualization support has been added to timers, counters and the GIC interrupt controller by ARM, this makes sense for other peripherals as well, so that they can be effectively shared across multiple guests. Freescale, for example, has included virtualization support in their VeTSEC Ethernet controllers.

A.3) TYPE-1 HYPERVISOR SOFTWARE SOLUTIONS

XEN

XEN is available for mainline Linux on various platforms, including ARMv7 and ARMv8. The main advantages of XEN are that it is completely operating system agnostic and has a very small memory footprint of just about 1 MB, with good support for many ARM- and x86-based systems. XEN is a classical, fully bare-metal, standalone hypervisor with a completely independent code base and therefore has to be adapted to each new host hardware it should support. The basic architecture is shown in the picture below:

[Figure: XEN architecture]

This concept of implementing a hypervisor completely by itself directly on the hardware allows the best possible performance and the smallest trusted computing base. However, with the variety of different ARM systems with totally different configurations and IP modules that emerged on the market over the last years, it also proved to be the biggest disadvantage for XEN, because its code had to be adapted, rewritten and re-verified for each new SoC. The code used for each adaptation was often late, of low quality and missing extensive peer review. Therefore, although the code quantity and complexity were small, certification and bug-hunting were still time-intensive tasks for the end user.

KVM

KVM, on the other hand, follows a different implementation concept than XEN and builds on the /dev/kvm interface in the Linux kernel to leverage existing infrastructure for device emulation, scheduling or memory management. This makes KVM available on every device running the Linux kernel and avoids reinventing the wheel and introducing bugs. In fact, the Linux kernel is one of the best-tested pieces of software available, and for patches to become part of the mainline kernel, an extensive peer review and approval process is followed. Special attention is paid to finding the most reusable and simplest approach, and CPU and SoC vendors regularly contribute manpower to the Linux kernel for even better results. So although the TCB is slightly larger than with a bare-metal hypervisor, the quality of the complete code is most likely better. KVM is ready to use as soon as the respective Linux kernel for an SoC is available. These are the reasons why we expect KVM to become the major hypervisor solution for embedded systems - especially ARM-based ones - in the future. In the remainder of this paper we will focus on KVM. The picture below shows the basic concept of KVM.

[Figure: basic KVM concept]

In the ARM implementation, KVM uses a concept called split-mode virtualization to allow the host kernel and most parts of KVM to still run in kernel mode to leverage the existing kernel functionality,
but runs the necessary code to implement hardware virtualization in a separate module running in HYP mode. This significantly reduces virtualization overhead and allows optimal re-use of the Linux kernel without losing performance and stability. The part of KVM running in HYP mode is called the lowvisor and is responsible for protection and isolation between the different execution instances running in kernel mode. It takes care of switching between the execution contexts and handles all interrupts and exceptions that have to be handled in HYP mode. The basic idea is to keep the lowvisor to the absolute minimum and leave most of the functionality to the highvisor running in kernel mode, which has access to the full power of the Linux functionality. The interaction concept is shown in the picture below:

[Figure: KVM split-mode interaction concept]

As the picture illustrates, the respective VMs directly access KVM, without having to pass through the hosting Linux kernel.

QEMU: Managing the Guest Virtual Machine

QEMU is widely known as an emulator for various hardware platforms and regularly used to boot target images on emulated target processors during development on host platforms. However, QEMU can also act as a virtual machine manager for both KVM and XEN. In this context, QEMU takes care of providing the virtual machine to the guest images, but execution is passed to XEN or KVM. In the case of KVM, the /dev/kvm interface to the kernel is used for that. QEMU together with KVM is able to achieve 90 to 95% of native performance on ARMv7 devices. The picture below illustrates how the two work together.

[Figure: QEMU and KVM working together]

For each virtual machine, an instance of QEMU has to be launched and provided with virtual machine information (on ARM through a device tree binary), kernel and root file system. QEMU then boots the virtual machine by utilizing the /dev/kvm interface. Communication between guest and host can be done via shared file systems, terminals or networking connections. QEMU in connection with KVM's vfio also allows assignment of PCIe and memory-mapped devices like UARTs or SATA controllers to a certain KVM virtual machine. Such a device becomes a private resource of the respective VM, allowing direct access to its registers and memory regions, including DMA transfers, and therefore eliminating all overhead through the host. All device interrupts are handled through the hypervisor and QEMU with the least possible effort (depending on whether the device supports interrupt virtualization or not). Similarly, physical USB ports on the host can be directly assigned to a VM; the device is shown in the guest as a (virtual) USB controller. Networking devices can be bridged on the host side and each of the virtual bridge devices can be assigned to a separate VM, therefore allowing both communication between each other as well as communicating through the same physical network interface to the outside world.
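To make the role of the /dev/kvm interface concrete, the following minimal sketch (error handling trimmed, assuming a kernel with KVM enabled) performs the very first steps QEMU itself performs when starting a virtual machine: probing the KVM API version and creating an empty VM file descriptor.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
    /* the hypervisor control device exposed by the kernel */
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0) { perror("/dev/kvm"); return 1; }

    /* the stable KVM API reports version 12 */
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);

    /* create an empty virtual machine; a real VMM like QEMU would now add
       guest memory (KVM_SET_USER_MEMORY_REGION) and vCPUs (KVM_CREATE_VCPU)
       and then enter the KVM_RUN loop */
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);
    if (vm < 0) { perror("KVM_CREATE_VM"); close(kvm); return 1; }
    printf("empty VM created (fd %d)\n", vm);

    close(vm);
    close(kvm);
    return 0;
}

Everything beyond this point - loading the guest kernel, providing the device tree, emulating devices - is what QEMU adds on top of this kernel interface.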
B. CONTAINER-BASED VIRTUALIZATION

A completely different approach to virtualization is the use of Linux containers, a lightweight virtualization technology that allows the creation of environments in Linux called "containers" in which Linux applications can be run in isolation from the rest of the system and with fine-grained control over the resources allocated to the container (e.g. CPU, memory, network). There are two general usage models for containers:

Application Containers: running a single application in a container. Here a single executable program is started in the container.

[Figure: application containers. Container 1 and Container 2 each run a single application directly on the shared host OS kernel and host root file system, next to the native host applications, on the hardware (CPU, RAM, devices).]
System Containers: booting an instance of user space in a container. Booting multiple system containers allows multiple isolated instances of user space to run at the same time, each with its own init process, separate process space, separate file system and separate network stack.

[Figure: system containers. Container 1 and Container 2 each contain a complete root file system (RootFS) with their applications, running alongside the host applications and host root file system on the shared host OS kernel and hardware (CPU, RAM, devices).]

Virtualization containers are a Linux kernel feature, but additional tools are required for setup and management. Two popular tools are LXC [7] and LibVirt [8], which both have their individual features and advantages. LXC focuses solely on the management of containers, while LibVirt is a complete solution to manage virtualization in Linux, supporting the setup and management of virtual machines based on virtually all solutions available for Linux, also including KVM and XEN. LibVirt's power and complexity is way beyond the scope of this paper, so we will focus on LXC here to demonstrate the basic capabilities of containers. One big advantage of LXC for use in embedded systems is that a 5-year long-term-supported version is available.

The basic Linux kernel feature used to create containers is cgroups, which provide both resource isolation (CPU, memory, devices) and namespace isolation to shield the operating environments from each other. This allows assigning hardware resources, interfaces, processes and file systems to individual containers.

Depending on which kind of container is used, the start-up time is in the low seconds range. Containers can be started, monitored, frozen and stopped dynamically from the host OS depending on system scenarios. There is virtually no CPU overhead or performance decrease for code running within a container versus native execution, and the memory consumption is in the low MByte range. Containers can be dynamically assigned additional CPUs and additional memory and provide a reasonable isolation against the host system and other containers in most scenarios. Files can be shared from the host system to different containers to optimize total flash space consumption. If local modifications are done to files in the container, a local copy is created within the container file system and maintained separately.
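This dynamic start/stop/freeze control is also exposed programmatically through LXC's C API. The following hedged sketch assumes liblxc is installed and that a container named "guest0" has already been created (e.g. with lxc-create); the container name and the CPU pinning are invented for illustration.

#include <stdio.h>
#include <lxc/lxccontainer.h>   /* build with: gcc demo.c -llxc */

int main(void)
{
    struct lxc_container *c = lxc_container_new("guest0", NULL);
    if (!c) return 1;

    /* restrict the container to CPU 1 via the cgroup cpuset controller */
    c->set_config_item(c, "lxc.cgroup.cpuset.cpus", "1");

    if (!c->start(c, 0, NULL)) {          /* boot the container's init */
        fprintf(stderr, "failed to start guest0\n");
        lxc_container_put(c);
        return 1;
    }
    printf("state: %s\n", c->state(c));   /* prints RUNNING */

    c->freeze(c);                          /* suspend all processes */
    c->unfreeze(c);                        /* resume them */
    c->shutdown(c, 10);                    /* clean shutdown, 10 s timeout */
    lxc_container_put(c);
    return 0;
}

The same operations are available on the command line through lxc-start, lxc-freeze, lxc-unfreeze and lxc-stop, which is usually the more convenient interface for scripted system management.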
III. SCENARIOS AND USE CASES

Before looking into actual use cases, we will provide a short summary of the individual advantages of hypervisor- vs. container-based virtualization solutions. Compared to full-blown hardware virtualization, container virtualization creates much less CPU and memory overhead. Application containers are a good way to go if only some applications need to be separated; system containers allow complete separate user spaces to run independently and to coexist.

A) HYPERVISOR VS. CONTAINER VIRTUALIZATION

In this chapter we will analyze different criteria of interest for embedded systems and how they are affected and reflected by virtualization. Hypervisor virtualization will be abbreviated as HVV, Linux container virtualization as LCV.

A.1) START-UP TIME

Two start-up times are usually of interest for virtualized embedded systems: the initial start-up time, which is the time until a guest virtual machine is available after reset, and the VM start-up time, the time that one additional virtual machine needs to start up on an already fully booted system. Although XEN and KVM are Type-1 hypervisors, they still need a Linux system as host to set up and boot the virtual machines. Therefore, the host boot time has to be taken into consideration for both HVV and LCV in the case of XEN or KVM. This is different for other hypervisor solutions that can start VMs directly from a bare-metal interface. So the only way to reduce the initial start-up time is to scale the host OS to be as small as possible and to contain just the absolutely necessary drivers and services. The same is true for the boot time of the virtual machine itself: it completely depends on the complexity of the virtual machine image and scales with the time that this image would need to boot natively on this host. Small speedups can be achieved by leveraging the fact that the image is read from a file system instead of the flash, which allows caches, DMA, burst access modes and RAM disks to be used to best effect.
For LCV, the same rules as above apply for the host system boot time. The containers, however, will start much faster than a comparable HVV VM, so that in most cases LCV wins in both boot-time aspects, the initial start-up time and the VM start-up time. However, while the functionality available in HVV only depends on what is booted and enabled in the guest OS, for LCV all drivers and kernel modules that are required in one of the containers (even if it is not the first one to be started) need to be started in the host system. Therefore, a scenario is imaginable in which a small HVV host system and a very small guest VM would start up faster than a comparable system with a complex host system and a small container which only leverages fractions of the complete host functionality.

A.2) DYNAMIC RUNTIME CONTROL

Both QEMU and LXC allow extensive dynamic runtime control from the host, including features like file system access, snapshots, rebooting or tracing for performance analysis. However, due to the nature of the concept, LCV is integrated even more tightly with the host system, allowing for example to start or kill applications in a container directly from the host, while this would have to be done through semaphores or a virtual network/serial connection in an HVV scenario.

A.3) SPEED

Both solutions allow native execution on the available CPUs, so the execution speed for regular instructions shows no significant drops. This is different for scenarios that require the hypervisor to be involved, like memory translation or interrupt handling. In this case, trapping to HYP mode and the general virtualization overhead of the hypervisor become noticeable. As a rule of thumb on ARMv7 systems, an effectiveness of 90% to 95% of the native execution speed can be expected for HVV. For peripheral accesses, the situation might be different. The possibility to directly access devices in an HVV VM can mean a significant speedup compared to accessing them through the same driver running on the host system from an LCV container. Only tests with the specific scenario can show how much the communication overhead (LCV) vs. the virtualization overhead (HVV) affects both latency and throughput.

A.4) ISOLATION

Isolation refers to separating parts of a system from each other in a way that they cannot influence each other or have undesired communication with each other. LCV provides a reasonable isolation on user-space level, which allows effective separation of applications. However, on kernel or peripheral driver level there is no separation at all. A malfunctioning host kernel driver will also cause malfunctions in all its containers, and misbehaving applications in one container can still perform operations that can crash the whole system. So LCV does not provide effective isolation on system level at all. HVV, on the other hand, requires booting its own kernel and user space, therefore allowing effective isolation on system and user level. A malfunction in the hypervisor may still crash the complete system, but code running within the VM will not be able to do this, which, for example, allows the host to analyse the crash reasons and restart the VM.

A.5) COMMUNICATION CHANNELS BETWEEN VMS AND HOST

Although one of the main reasons for using virtualization is to separate parts of the system from others, there usually still is a need for controlled communication between host and VM or between different VMs running on the same host.
Both concepts allow communication through virtual serial or networking interfaces or shared file systems that can be used to place semaphores or messages. Although the basic possibilities are the same, LXC is more tightly integrated with the kernel, allowing for lower communication overhead and therefore higher speeds and lower latency. It also allows easily extending the communication features, e.g. by partially removing the namespace separation.

A.6) FLASH MEMORY CONSUMPTION

This point is pretty obvious. LCV allows sharing both the operating system kernel and whatever parts of the user space the developer chooses to be shared, while HVV requires the VM images to be stored separately and allows no sharing. So LCV definitely wins in this aspect.

A.7) DYNAMIC RESOURCE ASSIGNMENT OR SEPARATION

Dynamic resource assignment capabilities are important in load management and fail-over scenarios. They allow, for example, assigning additional CPUs to heavily loaded VMs or removing CPUs from idle VMs. In case one VM of a system fails, they allow assigning the interfaces it served to a different VM that takes over. Although both HVV and LCV allow very flexible static assignment of resources, LCV is generally more flexible with dynamic assignments.
However, this is highly dependent on the type of the resource. For example, in terms of networking interfaces, both solutions are equally suited to perform failover scenarios.

A.8) DIRECT HW ACCESS

While HVV allows direct access to hardware peripherals from the VM through systems like virtio or vfio, this cannot be done from an LCV container, which only has user-space access and therefore requires a corresponding driver in the (host) kernel.

A.9) STABILITY

Stability is highly related to isolation and code quality. A system is expected to become more stable if code quality improves or bad code quality is isolated in a way that it does not affect stability. KVM and Linux containers have both been used in production environments, are both part of the mainline Linux kernel and have undergone extensive review. Although this is no guarantee for stability, especially in new systems and applications, it still suggests very high code quality. Please refer to the isolation section in this chapter to evaluate the isolation capabilities of code that might affect stability.

A.10) UPDATE FLEXIBILITY

This final section deals with the capabilities of both solutions to modify VMs on the fly or offline for updating. Both solutions allow updating the virtual disk files in the file system containing the VM or container from the host. This allows suspending, cloning, updating and restarting VMs dynamically from the host. An update can be performed and tested on a clone or a simple copy of the VM in question while the original VM is still running. After verifying that the new machine boots as expected, it can be put live and replace the old one. Snapshots can be used to get regular backups of virtual machines to have a fallback version in case of corruption.

A.11) CONCLUSION

For most aspects analyzed in the sections above, LCV wins over HVV. And in fact, for many scenarios in which traditionally HVV has been used, LCV is the better choice. However, as soon as isolation is required, different kernels or even operating systems are to be used, or it is required to directly access resources below the driver level, there is no alternative to using HVV. In our scenarios chapter we will look at recommended virtualization methods for each scenario.

B) COMBINATIONS OF HVV AND LCV

Sometimes the question is not so much which solution to choose, but how to choose the right combination of both HVV and LCV, so-called nested virtualization. KVM can be run within an LCV container. Because QEMU uses regular Linux processes to represent its virtual CPUs, this allows flexibly assigning virtual CPUs to physical ones and restricting memory for QEMU processes on the fly. Even if QEMU (which is still a user-space application) crashes or hangs, the container allows for graceful shutdown and prevents blocking all of the system's CPUs. But the other way around - LXC inside a QEMU/KVM virtual machine - also makes sense in some scenarios. HVV has the advantage that it allows various instances of the kernel to work in parallel on different CPUs. So if in a scenario the processing through the kernel is the bottleneck, HVV can show significant improvements over LCV by offering the possibility to run multiple instances of the kernel, potentially with drivers for different subsets of peripherals, in parallel. For example, virtualization can be used to split a big multicore, multi-interface machine into several smaller ones ("physical subsystems").
While HVV allows full physical access to the interfaces and the use of native drivers within the guest systems, the LXC containers running within each guest subsystem allow effective load management and flexible resource assignment within the physical subsystems without additional overhead. Whether and which scenario makes sense is always subject to intensive testing on the target system.

C) VIRTUALIZATION AND REAL-TIME

Although the idea of trying to achieve real-time capabilities on a system with an additional layer between applications and hardware sounds unpromising at first, we would like to present some scenarios here in which virtualization can actually help to improve real-time responsiveness. Certification of such a system gets harder, which makes most of these approaches usable mostly for soft real-time scenarios, but hard RT might be achievable as well. As described in [9], there are two major ways to improve the real-time capabilities of Linux: Xenomai, which uses a dual-kernel approach, and the PREEMPT_RT patch, which patches the Linux kernel itself to add real-time capability. In general, the real-time responsiveness of a system is improved if the real-time tasks - or, in case of a dual-kernel approach, the real-time kernel - are isolated as much as possible from the rest of the
system in terms of memory, caches and interrupts. Virtualization can be used in various ways to help achieve this.

C.1) LCV FOR MANAGEMENT OF RT AND NON-RT PARTITIONS

LCV allows fixed CPU assignments, tuning the scheduler for load/priority management and adjusting memory boundaries and swapping policies for each container [10, 11, 12]. Along with IRQ_AFFINITY settings for the respective cores in the CPU [13], considerable real-time improvements can be achieved [14]. MontaVista is using a similar concept for their Bare Metal Engine (BME) [15, page 10]. Most of these settings can be done manually right within the Linux kernel as well, but utilizing containers significantly simplifies the management and controllability of the real-time vs. non-real-time domain.

C.2) HVV TO SEPARATE THE NON-REAL-TIME PARTITION

A different approach to dividing the system into a real-time and a non-real-time partition is to use the host system to provide the real-time capabilities and have all non-real-time applications in a separate virtual machine. The virtual machine, represented by its QEMU threads, can then be resource-restricted to the necessary means. A common scenario for this approach would be running a GUI, potentially even on a different operating system like Windows Embedded or Android, in a virtual machine. Ideally, the VM is statically assigned to one or more cores not utilized by the real-time applications, and interrupt and memory separation are taken care of. [16, page 12] describes such a scenario. This approach can be used both with a PREEMPT_RT Linux kernel and with a dual-kernel approach. If a dual-kernel approach is used, it is mandatory that the virtualized partition and the real-time kernel are assigned to different physical CPUs.

C.3) HVV TO RUN AN RTOS OR BARE-METAL APPLICATION

This approach seems to be the most obvious one; however, it also poses the most problems due to non-deterministic user-space exits in the QEMU process. Still, [17] describes an approach to achieve sub-millisecond scheduling latencies inside a KVM guest VM by careful fine-tuning and using the PREEMPT_RT patches on the host system.

D) APPLICATION SCENARIOS

In this chapter we want to provide an overview of potential scenarios in which virtualization as described in this paper can be utilized in embedded systems. They can be grouped into three categories: reliability, flexibility and scalability, and feature enhancement. Some of these scenarios are already used in production systems, others are purely experimental. Please feel free to contact the authors for feedback, details or specific questions.

D.1) RELIABILITY AND PROTECTION

Virtualization can help in various ways to improve the reliability of embedded systems, and many of the means originally developed to increase reliability can be leveraged to improve the safety of systems as well. Virtualization allows shielding potentially misbehaving, unstable or known-to-be-unsafe applications or software from the rest of the system. If put into a VM, even crashes caused in the kernel will not affect the rest of the system, and a supervisor running on the host can detect crashes or monitor a heartbeat from the VM to initiate a restart or a failover, potentially on a different physical machine (with VM images shared over the network). Snapshots can be taken regularly to have backups of system states available to be restarted.
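As a hedged illustration of such heartbeat supervision, the sketch below shows a minimal host-side watchdog. The heartbeat file path, the five-second staleness budget and the libvirt domain name "guest0" are all invented for this example; the guest is assumed to touch the file periodically through a shared file system.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/stat.h>

#define HEARTBEAT   "/run/vm-share/heartbeat"   /* touched by the guest */
#define MAX_AGE_SEC 5

int main(void)
{
    for (;;) {
        struct stat st;
        time_t now = time(NULL);

        /* a missing file or a stale timestamp means the guest is dead */
        if (stat(HEARTBEAT, &st) != 0 || now - st.st_mtime > MAX_AGE_SEC) {
            fprintf(stderr, "heartbeat lost - resetting VM\n");
            system("virsh reset guest0");       /* hard reset via libvirt */
            sleep(MAX_AGE_SEC);                 /* give the guest time to come back */
        }
        sleep(1);
    }
}

In a production system the restart action would typically also trigger logging, a failover of the interfaces served by the failed VM, or the promotion of a hot-standby instance as described above.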
But containers, too, allow resource control and resource assignments that can prevent applications from interfering with each other or blocking the complete system. In a distributed system with multiple processing partitions, interfaces assigned to a failing partition can be re-assigned to different partitions. Some interfaces, like network interfaces, can even be shared among different partitions and served in parallel. System updates can be performed off-line on a copy of the currently running VMs and then tested on a separate core of the system. Only if all checks succeed is the VM switched. Used in combination, these techniques help to achieve very high system availability.

D.2) FLEXIBILITY AND SCALABILITY

The same techniques used in fail-over scenarios to reassign resources upon hardware failures can be used to achieve dynamic load and resource management on systems. To achieve pure load management in a multicore system, containers are usually the most effective way; they also allow employing energy-saving scenarios in which less used CPUs are put into sleep mode and their tasks are reassigned to other partitions. [18] describes load management scenarios and the implementation of load governors. A different aspect of flexibility is the consolidation of systems to
achieve cost reductions. The applications formerly run on multiple smaller systems are now run together on one bigger system. HVV allows keeping the legacy OS/kernel and the original user space and keeping all instances completely separated. The same concept can be used to achieve easy multi-core migration of a former single-core system. Multiple instances can be started in multiple containers or separate VMs without interfering with each other and without adding SMP optimizations to the original code, which allows easy exploitation of the power of multicore CPU architectures.

D.3) FEATURE ENHANCEMENT

System Upgrades

Virtualization allows performing secure and controllable in-system upgrades by putting the upgradable part of the system into a container or VM (which also allows secure kernel upgrades). The upgrades are then performed off-line on a copy of the currently running VM or partition by mounting the respective file systems from the host. The update success can then be tested offline on a separate QEMU instance or container, and only after all checks succeed is the old VM or container deactivated and the new one activated.

License Isolation

Virtualization with commercial hypervisors is regularly used to escape the GPL licensing conditions by separating GPL-licensed software from the rest of the system to avoid having to make these parts public. All virtualization solutions presented in this paper are licensed under versions of the GPL and can therefore NOT be used to do this. However, the same technique can be used to escape more restrictive license schemes tying users to certain versions of operating systems or forbidding co-existence with other software on the same system.

Avoiding Re-Certification

Re-certification of systems often becomes necessary if changes are made to the original system. Depending on the kind of certification, this can be avoided if the modifications to the system are completely done in the VM. This, for example, allows updates to the user interface or adding additional communication interfaces without having to re-certify the complete system.

Multiple OS

Virtualization can be used to execute multiple operating systems on one system. For example, Android as a user and communication interface can be run in a separate VM, while the original system remains on Linux. It is even possible to run both an Android and a Linux user space in separate containers on the same kernel [19, 20].

Security

Despite extensive code reviews, quality checks and automated tests, new security flaws in operating systems, stacks and application software are exposed on a regular basis. In applications that require protecting personal data from outside access or tampering, every newly exposed flaw means that extensive checking and, potentially, countermeasures have to be taken to prevent attacks. Virtualization can be used to separate communication or unsafe parts of a system from the trusted part. Usually the approach is taken to put the attackable part (e.g. the part taking care of communication with the outside world) into a virtual machine and to limit and guard the communication channels to the rest of the system. Even if the VM is hacked, the rest of the system is guarded from the VM, and a breakout of the hacked communication system into the host system or other VMs is prevented. Because containers rely on the same underlying kernel and drivers, only HVV is an effective means to counter the majority of threats.
Easy Migration

Virtualization allows easy migration of ready-to-use computing partitions to different hosts. If the interfaces to communicate with the outside world, like file systems or network interfaces, are clearly defined, a VM can be easily reused across different systems and developed and tested independently. Different VMs can be used to implement different functionality. For example, an OEM product can be branded this way with various skins for various end customers, and the system can even be customized without losing or exposing the original functionality.

IV. PERFORMANCE ANALYSIS

One of the first important steps in the decision-making process for a specific virtualization solution is the benchmarking of potential solutions on the actual target hardware and operating system.
A) BENCHMARKING CONSIDERATIONS

Benchmarking allows to:
- assess the performance overhead introduced through a specific virtualization solution and compare solutions with each other
- compare the performance of different virtualized target systems with each other
- find bottlenecks caused by missing target hardware support, wrong kernel configuration or missing optimizations
- track optimizations or overhead introduced by software/kernel changes
- optimize the target application software to leverage the specific performance features of a system and to avoid operations with huge overhead.

Benchmarks are usually executed both on a virtualized guest and on the native system (without virtualization). This allows actually determining the virtualization overhead of different operations. It also exposes potential deficiencies of the native system that might be multiplied in virtualization and allows removing their cause on the host system. Benchmarks should be performed both on idle and on busy host systems. While benchmarks on the idle system allow evaluating the minimal possible virtualization overhead, additional performance degradations on loaded systems expose the robustness of a solution and potential corner cases to be avoided. If multiple virtual machines or containers are to be used on a system, different load scenarios in those should be generated as well. Automatic test suites and scripts can help to automatically replicate the situations for periodic quality assessments and documentation. Two great tools to generate different load scenarios on Linux systems are Stress [19] and Lookbusy [20].

Although the ideal test suite to evaluate system performance for a specific application scenario is usually the application itself, synthetic benchmark suites are sometimes preferred, especially in the early evaluation phase, for three reasons:
- The application is not yet available or awaiting the first test results to implement optimizations.
- If the application's performance is poor, it is hard to determine where exactly the overhead is being caused.
- It is usually harder to direct applications to the specific corner cases than to use directed tests.

A commonly used compromise is to extend out-of-the-box benchmarks with algorithm implementations and application-specific functions directly taken from the application code base to generate benchmarks specific to the application. The following precautions are also useful when executing benchmarks:
- Statically linked executables should be built to avoid distorting the results through memory cache optimizations, code loading events or accidental use of different library versions.
- The benchmark suite should be compiled with the same optimization settings and the same compiler version as the target application (or other benchmarks).
- Power management features (throttling, etc.) should be switched off and the CPU frequencies of all cores should be tied to a fixed, equal value, usually the maximum.
- In general, the system should be optimized to be as deterministic as possible. For example, QEMU processes should be bound to specific CPUs and those CPUs freed from other processes or unrelated interrupts. Especially in high-load scenarios, the Linux scheduler sometimes behaves non-deterministically, and otherwise completely different results might be produced by multiple runs.
- If benchmarks are run on multiple CPUs at the same time, each benchmark process should be tied to one physical CPU (see the sketch after this list).
- If log files are created, these should be written to memory-hosted file systems (tmpfs).
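The following minimal micro-benchmark sketch combines two of the precautions above: it pins itself to one CPU before timing a cheap system call, in the style of the system-call latency tests that synthetic benchmarks like LMBENCH (next section) perform. The iteration count and the CPU number are arbitrary choices for illustration.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ITERATIONS 1000000L

int main(void)
{
    /* pin the process to CPU 0 so scheduler migration cannot skew results */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    sched_setaffinity(0, sizeof(set), &set);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERATIONS; i++)
        getppid();              /* a cheap system call that glibc does not cache */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("getppid latency: %.1f ns per call\n", ns / ITERATIONS);
    return 0;
}

Run once natively and once inside the guest, the difference between the two numbers gives a direct measure of the system-call virtualization overhead discussed above.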
B) SYNTHETIC BENCHMARKS

In general, the following metrics can and should be measured and evaluated independently from each other:
- CPU
- Memory
- I/O bandwidth
- System call overhead
- Interrupt latency (especially for RT scenarios)

LMBENCH [23] is one of the oldest and most commonly used general-purpose synthetic benchmarks. It is used to expose performance bottlenecks in hardware and operating systems and is also ideally suited to perform virtualization overhead analysis. While it can perform hardware performance benchmarks, the common operation mode for virtualization overhead analysis is to perform performance measurements of the operating system (which the virtualization solution becomes an integral part of). It contains a suite of benchmarks that are designed to measure basic operations, such as system calls, context switches, IPC, process creation, signal handling or memory accesses, and measures either latency or bandwidth. [24] gives a good overview of using LMBENCH in a virtualization scenario. There are also specialized virtualization-specific benchmarks, like [25]; however, these mostly focus on x86 and server scenarios and are, in the authors' opinion, overkill for most embedded scenarios. Perf [26] has a KVM extension to specifically monitor the events and overheads in the guest kernels. It provides the CPU profile of the guest kernel and numbers about the CPU use in guest mode, host user mode and hypervisor mode. [27] describes how to extract the hypervisor overhead for particular VMs. [28] gives an overview of the events that can be monitored. Especially if a certain benchmark performs worse than expected, these events are a great means to find out about the reasons and to monitor improvements through optimizations. Other useful benchmarks are Bonnie++ [29] for I/O benchmarking and Linpack [30] or Coremark [31] for CPU arithmetic performance. [32] provides a good overview of other benchmarking solutions.

C) EXPERIMENTAL RESULTS

The authors would like to conclude this paper with a short report of experimental results and experiences on a real-world system, the Freescale QorIQ LS1021A. However, this paper is not intended to be a test report, so we will keep the product-specific results published here to a minimum. Please feel free to contact the authors for detailed reports, implementation recommendations and performance data on the LS1021A or any other Freescale processor.

C.1) FREESCALE QorIQ LS102X PROCESSOR OVERVIEW

This section provides an overview of the features and functionality of the QorIQ LS102xA integrated processor family. All derivatives feature a dual Cortex-A7 architecture with hardware virtualization support and NEON and VFPv4 support, running at up to 1 GHz each. Like all members of the Freescale QorIQ processor family, the LS102x is optimized for peripheral throughput, especially on the networking interfaces, and supports a wide range of industrial connectivity interfaces, including:
- 3x enhanced triple-speed Gb Ethernet controllers, with IEEE 1588 support and MII, RGMII and SGMII connections to the Ethernet PHY
- 2x PCI Express 2.0, with 1, 2 or 4 lanes
- 1x USB 3.0 controller with integrated PHY
- 1x USB 2.0 controller with ULPI
- 1x SATA 3.0
- 2x DUART, 6x Low Power UART
- 4x CAN

The memory interfaces include Quad-SPI and NAND/NOR flash with execute-in-place support and a 32-bit DDR3L/DDR4 controller with ECC support. 128 KB of internal SRAM can be used to store sensitive data in-chip. The integrated LCD controller supports up to 4 planes and is driven by a high-performance 2D-ACE engine. The SEC 5.5 security engine supports cryptographic offloading as well as encrypted and high-assurance boot. The cores connect to the peripherals through an ARM CCI-400 interconnect bus that supports traffic shaping for additional determinism and provides the hardware coherency support between the CPUs, the security unit and the networking interfaces. The typical power consumption is below 3 W, with a thermal design power (TDP) of 3.7 W. Temperature ranges between -40 °C and +125 °C are supported. The block diagram on the title page provides an overview of the QorIQ LS1021A processor.

Please refer to [33] for more information about the LS1021A or contact one of the authors.
The authors used the QorIQ LS1021A Tower board [34] as the evaluation platform, which is supported out of the box by Freescale's QorIQ SDK.

C.2) SOFTWARE SETUP

Freescale's QorIQ SDK is a Yocto-based software development kit which integrates BSPs for all QorIQ processors, including the QorIQ LS series. Setup, configuration and build commands remain consistent across the various processor architectures, and a common code base is used wherever appropriate. The SDK is available free of charge [35] and also includes a Yocto build cache that allows building the kernel and root file systems within minutes and without an Internet connection.

The QorIQ SDK documentation includes a section about virtualization on both ARM and PowerPC devices that describes setup, configuration and first steps with virtual machines and Linux containers [36]. This documentation was followed to set up test images both for KVM-based virtual machine support and for LXC-based Linux container virtualization. Freescale provides the option to build an image with out-of-the-box virtualization support for KVM and LXC, with automatic inclusion of a guest root file system, by building the image fsl-image-virt. The benchmarking tools, such as LMBENCH, were included by adding the respective Yocto layers ([37] is a useful resource for finding them) and adding the respective package names to the IMAGE_INSTALL_append variable. The resulting image, with complete support for virtualization and the guest root file system included, was about 43 MB in size, was flashed to the TWR-LS1021A development system's flash memory, and ran the SDK kernel on both host and guest systems. We followed the hints in the QorIQ SDK documentation and in [38] for the kernel configuration options.
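In practice, the image build reduces to a few commands. The sketch below uses the SDK's environment script and machine name for the TWR-LS1021A as the authors recall them; both should be verified against the SDK documentation [35][36], and the appended package names are examples:

    # set up the Yocto build directory for the Tower board
    . ./fsl-setup-poky -m ls1021atwr
    # pull the benchmarking tools into the image
    echo 'IMAGE_INSTALL_append = " lmbench bonnie++"' >> conf/local.conf
    # build the virtualization-enabled image, guest rootfs included
    bitbake fsl-image-virt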
C.3) RESULTS

After setting up the tests as described in this chapter, the results showed an overall performance decrease of about 5 to 10% compared to native performance for HVV with KVM. As expected, the CPU performance scores are very close to bare metal, because there is no execution overhead for instructions that do not require trapping into the hypervisor. Memory access performance depends on the type of access and on the HW/SW support available; using vfio and virtio for direct accesses from within the virtual machine provided a significant speedup. In the KVM scenarios, one CPU was used to run the host system, while the other was assigned to the VM. In upcoming Layerscape products more cores will be available, which will allow the use of more CPUs within a VM or the parallel execution of multiple VMs.

For the LXC LCV test, two partitions were created with equal CPU resource assignments.
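These assignments use standard Linux mechanisms. The sketch below shows one possible way to reproduce them; the QEMU machine type, memory size and image names are illustrative assumptions (the SDK documentation [36] gives the exact invocations for the LS1021A), and the container configuration uses the LXC 1.x key names:

    # KVM: pin the guest to the second A7 core, leaving core 0 to the host
    taskset -c 1 qemu-system-arm -machine virt -enable-kvm -cpu host \
        -smp 1 -m 512 -kernel zImage \
        -append "console=ttyAMA0 root=/dev/ram0 rw" -nographic

    # LXC: equal CPU weight for both containers, set in
    # /var/lib/lxc/<name>/config (alternatively, one dedicated core each
    # via lxc.cgroup.cpuset.cpus = 0 and = 1)
    lxc.cgroup.cpu.shares = 1024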
A virtualization overhead of 1 to 2% was noticeable; however, applications responded with much more tolerance to load scenarios in the respective other container compared to running on the host system without containers. The system's latency was measured using cyclictest [39] and profited immensely from applying the OSADL PREEMPT_RT patches.

V. CONCLUSION

The main application for virtualization in embedded systems used to be load management on big, costly server-type systems. However, with hardware virtualization support now arriving in low-cost processors, it can be leveraged in a variety of interesting scenarios in the industrial and medical world. While smaller processors like the LS1021A only allow for one virtual machine in a hypervisor scenario, upcoming processors like the LS1043A with 4x Cortex-A53 CPUs can profit immensely from hypervisor virtualization. Container-based virtualization, due to its small overhead, is applicable to a multitude of applications and is usable even on small dual-core systems to effectively manage resources and separate applications in scenarios in which no hard isolation is required. Silica and Freescale provide the hardware, the ecosystem and the support to start the evaluation of applications today. Get in contact!
REFERENCES

CHAPTER I:
[1]
[2]
[3]
[4]
[5]
[6]

CHAPTER II:
[7]
[8]

CHAPTER III:
[9] Roeder et al., "Real-Time Linux for Embedded Processors", Embedded World Conference 2012, Nuremberg, Germany
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]
[19]
[20]

CHAPTER IV:
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29] Bonnie++ (make sure to drop caches first: echo 1 > /proc/sys/vm/drop_caches; bonnie++ -y -s 2000)
[30]
[31]
[32]
[33]
[34]
[35]
[36]
[37]
[38]
[39]
QorIQ LS1021A Communications Processor
Dual-core solution with integrated LCD controller for fanless applications.

Overview
The QorIQ LS1021A processor delivers extensive integration and power efficiency for fanless, small form factor networked applications. Incorporating dual ARM Cortex-A7 cores with ECC protection running up to 1.0 GHz, the QorIQ LS1021A is engineered to deliver over 5,000 CoreMarks of performance, as well as virtualization support, advanced security features and the broadest array of high-speed interconnects and optimized peripheral features ever offered in a sub-3 W processor.

Target Applications
- Enterprise AP routers for 802.11ac/n
- Multi-protocol IoT gateways
- Industrial and factory automation
- Mobile wireless routers
- Printing
- Building automation
- Smart energy

Unparalleled Integration
The QorIQ LS1 family of devices was designed specifically to enable a new class of power-constrained applications by bringing together highly efficient ARM cores and over twenty years of Freescale networking expertise and IP to offer the highest level of integration under 3 W. With ECC protection on both L1 and L2 caches, QUICC Engine support, USB 3.0, and a broad range of other peripheral and I/O features, the LS1 family of devices is purpose-built for multicore platforms that must perform more securely, intelligently and efficiently without sacrificing performance.

QorIQ LS1021A Processor Block Diagram: dual ARM Cortex-A7 cores (FPU, NEON, 32 KB I/D caches) with a 512 KB coherent L2 cache and CCI-400 cache-coherent interconnect; DDR3L/4 memory controller and 128 KB SRAM; security engine (XoR, CRC), security fuses and monitor, uQE (HDLC, TDM, PB); system interfaces (IFC and QuadSPI flash, SD/MMC, 2x DUART, 6x LPUART, 3x I2C, 2x SPI, GPIO, audio subsystem with 4x I2S, ASRC and SPDIF, 4x CAN, FlexTimer, PWM, USB 3.0 with PHY, USB 2.0, LCD controller); 4-lane 6 GHz SerDes serving 3x Ethernet, PCIe 2.0 and SATA 3.0.

Core Complex
The QorIQ LS1021A processor integrates dual ARM Cortex-A7 cores running up to 1.0 GHz with ECC-protected L1 and L2 caches. Both cores feature 32 KB of L1 instruction and data cache, share up to 512 KB of coherent L2 cache, and feature the NEON SIMD module and a double-precision floating-point unit (FPU). The DDR memory controller supports 8-, 16- or 32-bit DDR3L and DDR4 memory devices at up to 1600 MHz.

System Interfaces and Networking
A four-lane, 6 GHz multi-protocol SerDes provides support for high-speed interfaces, including up to three Gigabit Ethernet ports with IEEE 1588 support, dual DMA-controlled PCI Express generation 2.0 ports and a single SATA 3.0 port.
The LS1021A processor also features dual USB controllers, one supporting SuperSpeed USB 3.0 with an integrated PHY, the other supporting USB 2.0 functions. Additional interfaces include QuadSPI, IFC and support for SD/MMC. For network audio applications, the LS1021A processor includes support for both ASRC and SPDIF. For industrial applications, the processor provides four CAN ports and up to 10 UARTs to support industrial protocols. In addition, serial I/O includes three I2C and two SPI interfaces.

Scalability
A key advantage of QorIQ processors built on the Layerscape architecture is the complete compatibility of features, including virtualization and cache coherency, as well as the ISA, between the various QorIQ LS1 devices. This, together with pin and software compatibility with the other LS1 devices, the LS1020A and LS1022A processors, enables customers to simply and smoothly migrate applications between next-generation QorIQ families.

Complete Enablement, Rich Ecosystem
For customer evaluation, the QorIQ LS1021A processor is supported by the TWR-LS1021A development platform, based on the modular Freescale Tower System, which features an integrated on-board probe for further cost savings. The TWR-LS1021A evaluation kit includes a Linux 3.12 SDK with optimized drivers and a free 90-day evaluation license for CodeWarrior for ARM development tools. All QorIQ LS series devices are supported by our extensive third-party ecosystem, the largest and most established in the communications market. In conjunction with our expertise and worldwide support infrastructure, the ecosystem helps customers accelerate their migration from both competitive solutions and legacy Freescale devices, preserve investment costs and reduce time to market.

QorIQ LS1021A Processor Features and Benefits
- Dual ARM Cortex-A7 cores: extreme power efficiency, engineered to deliver over 5,000 CoreMarks; typical total system power of 3 W for improved performance without increased power utilization.
- ECC-protected L1 and L2 cache memories: the QorIQ LS1 family devices are the only processors in their class with ECC-protected caches and a coherent 512 KB L2, adding performance and meeting networking requirements for high reliability.
- Integrated security engine, supporting Secure Boot and Trust Architecture: based on the QorIQ SEC 5.5 hardware-accelerated security engine, providing defense in depth for customer applications.
- Rich connectivity and peripheral features, including PCI Express Gen 2, USB 3.0, SATA 3, IFC, QuadSPI, CAN: high versatility that enables support for 802.11ac modules and high-bandwidth connectivity for ASICs, 4G/LTE, SATA and low-cost NAND/NOR flash.
- LCD controller (2D-ACE): touchscreen support adds integrated HMI features for enhanced ease of use and BOM savings; similar IP to Freescale Vybrid controller solutions and i.MX applications processors allows for simple software migration.
- QUICC Engine: proven support required for industrial, building and factory protocols such as PROFIBUS, HDLC and TDM.
- Support for hardware-based virtualization: enables partitioning of CPU resources on low-power parts for increased system productivity.
- DDR3L/4: first in its class to offer support for DDR4 memory, ensuring continued performance efficiency.

CodeWarrior Development Suites for Networked Applications

Developer Suite Level: This suite is the primary suite for customers who develop with multicore processors built on Power Architecture, including QorIQ LS series devices and QorIQ Qonverge SoCs as well as DSPs based on StarCore technology. It is for designers with full system responsibility but no need for the extra costs of the specialist and architect features.

Specialist Suite Level: This suite is designed so you can do more than just compile and debug. Tools in this suite are useful for customers creating products for every market. Get all the software included in the Developer Suite plus additional board-analysis tools. These tools can be used by all customers in any market, but not everyone in a customer's organization needs them.

Architect Suite Level: This suite is best used by personnel who need to dig deep into the networking aspects of a development project. In this suite, you get all the software in the Specialist Suite plus software tools designed to give networking experts the extra capability to find out how their system is really working.

LS Tower Suite Level: This suite was created to give you an economical yet complete, full-featured development tool for QorIQ LS development when the LS part is in a Freescale Tower Board configuration. The tools in this suite have no limitations other than that they will only work with the Tower Board.
QorIQ LS1043A and LS1023A Communication Processors
Quad-core, 64-bit ARM-based processors designed for enterprise edge, industrial and networking applications.

Overview
The QorIQ LS1043A processor is Freescale's first quad-core, 64-bit ARM-based processor for embedded networking. The LS1023A (two-core version) and the LS1043A (four-core version) deliver greater than 10 Gbps of performance in a flexible I/O package supporting fanless designs. This SoC is a purpose-built solution for small-form-factor networking and industrial applications, with BOM optimizations for economical low-layer-count PCBs, a lower-cost power supply and a single-clock design. The QorIQ LS1043A delivers a next-generation performance boost over dual-core 32-bit ARM products such as the QorIQ LS1020A and LS1024A processors. The LS1043A takes ARM processing performance to the next level with up to 4x 1.5 GHz 64-bit processors and a large 1 MB L2 cache for the best CPU performance per watt in the value-tier line of QorIQ communications processors. This powerful CPU complex is coupled with the proven offload engines of the QorIQ Data Path Acceleration Architecture (DPAA) to deliver 10 Gbps performance with minimal CPU overhead.

Target Applications
- Integrated services branch routers
- SDN and NFV edge platforms
- Industry 4.0 gateways
- Industrial PLC and control
- Security appliances

QorIQ LS1043A Processor Block Diagram: four 64-bit ARM Cortex-A53 cores (32 KB I-cache and 32 KB D-cache each) with a 1 MB L2 cache; 32-bit DDR3L/4 memory controller; secure boot, TrustZone and power management; CCI-400 coherency fabric with SMMUs; DPAA blocks (queue manager, parse/classify/distribute, DMA manager, buffer manager); Security v5.4 (XoR, CRC); uQE; real-time debug (performance monitor, watchpoint, cross trigger, trace); IFC and QuadSPI flash, SD/eMMC, 2x DUART, 6x LPUART, 4x I2C/SPI, GPIO, FlexTimers, PWM, 3x USB 3.0 with PHY; 4-lane 10 GHz SerDes serving 5x 1G and 1x 1/10G Ethernet, 3x PCIe and SATA 3.0.
QorIQ LS1043A Processor Features and Benefits
- Up to 4 ARM Cortex-A53 cores: best-in-class performance-to-power efficiency, engineered to deliver an estimated 15,000 CoreMarks; total system power as low as 6 W for fanless platform designs.
- Leading data path offload engines: free up valuable CPU cycles with the proven QorIQ Data Path Acceleration Architecture, targeting 10 Gbps packet offload performance with hardware packet engines and queue and buffer management engines.
- Integrated security engine, supporting Secure Boot and Trust Architecture: based on the QorIQ SEC 5.5 hardware-accelerated security engine, providing defense in depth for customer applications.
- Rich connectivity and peripheral features, including PCI Express Gen 2, USB 3.0, SATA 3, IFC, QuadSPI: high versatility that enables support for 802.11ac modules and high-bandwidth connectivity for ASICs, 4G/LTE, SATA and low-cost NAND/NOR flash; multiple USB 3.0 ports for redundant WAN failover, storage and configuration; advanced XFI, quad SGMII and 2.5G overclocked SGMII support for maximum Ethernet flexibility.
- QUICC Engine: proven BOM-cost-saving support required for legacy networking, industrial, building and factory protocols such as PROFIBUS, HDLC and TDM.
- Support for hardware-based virtualization: enables partitioning of physical and virtual resources on LS1043A multicore parts for increased system flexibility.
- DDR3L/4: first in its class to offer support for DDR4 memory, ensuring continued performance efficiency.

Additionally, the QorIQ LS1043A processor continues the QorIQ legacy of I/O flexibility with up to 6x Gigabit Ethernet interfaces, 3x PCIe interfaces, 3x USB 3.0 interfaces and an integrated QUICC Engine for legacy glue-less HDLC, TDM or PROFIBUS support.

Core Complex
The QorIQ LS1043A communications processor delivers the latest in energy efficiency and performance improvement of ARM Cortex-A53 64-bit processor technology. This new quad- or dual-core complex provides a generous 1 MB L2 cache and a highly efficient 8-stage pipeline for maximum performance per watt. Coupled with the high-performance CoreLink CCI-400 coherent interconnect and the SMMU units, the LS1043A also enables dedicated virtual machines with protected memory and dedicated I/O for maximum platform flexibility.

System Interfaces and Networking
The QorIQ LS1043A and LS1023A communications processors include a four-lane, 10 GHz multi-protocol SerDes providing support for high-speed interfaces, including up to six Gigabit Ethernet ports with IEEE 1588 support, three DMA-controlled PCI Express generation 2.0 ports and a single SATA 3.0 port. The Ethernet interfaces are backed by powerful packet processing engines, the Data Path Acceleration Architecture, which provides greater than 10 Gbps packet parsing, classification and distribution, along with hashing functions and even arbitrary payload processing and in-line reassembly or cryptographic processing. This powerful packet processing architecture frees up the quad 1.5 GHz ARM A53 cores for higher-level, value-added tasks. The LS1043A processor also features triple USB 3.0 controllers with integrated PHY for a variety of storage, WAN and configuration options. Additional interfaces include QuadSPI, IFC and support for SD/MMC. In addition, serial I/O includes quad I2C/SPI interfaces.

Complete Enablement, Rich Ecosystem
The rich ecosystem provided by the powerful combination of Freescale's strong legacy of networking expertise and ARM's rapidly growing development base delivers the best of both worlds.
All QorIQ LS series devices are supported by our extensive third-party ecosystem, the largest and most established in the communications market. In addition, the vibrant, growing ARM ecosystem is supported, including the Linaro not-for-profit engineering organization and the OpenDataPlane project, which strives to deliver open-source, cross-platform interoperability. In conjunction with our expertise and worldwide support infrastructure, this broad ecosystem helps customers accelerate their migration from non-Freescale solutions and from legacy Freescale devices, preserve investment costs and reduce time to market.
Technical Support from Inspiration to Production.
For all customers, it is absolutely essential to get to market without any hold-ups. That's why SILICA offers unmatched support and engineering expertise throughout the design cycle, from the planning of your project right through to its launch. SILICA provides reference designs, hardware evaluation and development tools (and component design centres) that all help to sharpen your competitive edge. Our European team of more than 110 Field Application Engineers (FAEs) focuses on design-in and technical product support. These FAEs also offer expertise in specific application areas, including industrial electronics, data communications, telecommunications, lighting and many others. This means that SILICA's technical support can help you to get from inspiration to production, no matter what industry you work in.

Support throughout the Design Cycle.
With local product specialists, first-rate technical support and our Field Application Engineers (FAEs), SILICA makes sure your design cycle runs smoothly. Our team is trained in multiple disciplines to offer perspectives on the overall solution. This means we DO work towards providing the best design and we DON'T have a bias towards certain sockets or components. SILICA continually invests in training to maintain high standards and stay ahead of the latest technological developments. For complex technologies, you can also count on SILICA's specialist FAEs, manufacturer FAEs and design partners. SILICA works hard to help designers throughout all stages of the design process. Whether you're just getting started, in the middle of a design, or about to introduce a new product, SILICA has the expertise to help. Contact your nearest SILICA office for premier technical support.

All trademarks and logos are the property of their respective owners. This document provides a brief overview only; no binding offers are intended. Avnet disclaims all representations, warranties and liabilities under any theory with respect to the product information, including any implied warranties of merchantability, fitness for a particular purpose, title and/or non-infringement, specifications, use, legal compliance or other requirements. Product information is obtained by Avnet from its suppliers or other sources deemed reliable and is provided by Avnet on an AS IS basis. No guarantee is made as to the accuracy or completeness of any information. All information is subject to change, modifications and amendments without notice.