Facilitating Multicore Migration with the Enea Hypervisor


Magnus Karlsson, Principal Engineer, CTO Office

Multicore is everywhere in the telecommunications and networking world. Whether one wants it or not, it is the only path towards better performance. With the rise of smart phones and mobile internet demanding dramatic increases in bandwidth, carriers and service providers need improved performance in infrastructure equipment to handle the load while maintaining the equally important cost profiles for their services. But in any migration to multicore devices, software is the key issue, since legacy single core code does not exploit multiple cores. This white paper presents a method for taking your single core application and moving it to multicore in small steps that mitigate risk while maintaining a working version for delivery to your customers. The method centers on the use of hypervisors and design patterns that can be used to move your performance critical telecommunications and/or networking code to a multicore chip.

Introduction

There are five top-level requirements that project leaders and architects of next generation embedded systems always hear from management these days: better performance, lower power consumption, smaller form factor, shorter development cycle, and lower Bill of Material (B.O.M.); pick any combination, usually all. Today, the solution to the first three of these problems is usually to employ a multicore chip. And why is this? For software that has already been performance optimized, the only road to better performance is a more powerful CPU, and the only way to get a significantly more powerful CPU these days is to use one with multiple cores. Lower power consumption and a smaller form factor can be gained by consolidating multiple boards or multiple single core CPUs into one multicore chip. It has therefore become imperative to produce software that can take full advantage of multicore chips.

For most companies in the embedded world, the largest investment is in software, and reusing this investment in legacy software for the next generation of equipment or in new products is paramount in order to achieve shorter development cycles, lower B.O.M., and, in the end, profitability. The problem is that legacy software, and sometimes its operating system, is most often designed and optimized for single core, and simply does not execute on multicore CPUs in a manner that effectively utilizes all the cores, if it runs at all. The big question is then: how to make the transition to multicore while still being able to use as much as possible of the legacy, single core software investment, without having to perform the transition in one large, time consuming and risky step?

In this paper, we show how your single core software investment can still be utilized on multicore CPUs by using the Enea Hypervisor, and how to make the transition in small steps that mitigate risk. At a high level, start by running the legacy software and its operating system as is on one single core using the virtualization support of the Enea Hypervisor. If you are consolidating multiple boards onto a multicore device, then instantiate one copy of the operating system and software on each core.
If you are not performing consolidation, or you need better performance, the main strategy is to divide your software into a non-performance critical part and a performance critical part, and to focus on migrating the performance critical part to the rest of the cores using one of the design patterns presented in this paper. To get the best possible performance, the performance critical code can be migrated to run directly on the Enea Hypervisor, providing bare-metal performance. If you would like a more capable control and/or data plane environment with a plethora of services and functionality, but with less performance, you can also add and migrate to Linux. To mitigate risk, this migration can be performed in small steps, taking just a piece of code at a time, so that you always have a functioning system. Once finished, you will end up with a system where all the non-performance critical software is still on the legacy OS and the performance critical parts have been migrated to the rest of the multicore chip. You will end up with the same kind of architecture that you would usually have arrived at if you had started from scratch: your non-performance critical code on a fully featured OS on a single core, and your performance critical code executing on the rest of the cores in a new run-time environment. But you get the benefit of performing this multicore migration in small increments.

The rest of the paper is outlined as follows. We start in the next section by detailing the application domains we are targeting, the goals we want to achieve and the high-level architecture we aim at. A central piece in the architecture is the hypervisor, which is briefly introduced in Section 3. Section 4 explains the method itself, i.e. shows how your single core system can be migrated to a multicore system in increments. The details of the multicore design patterns are covered in Section 5, and the communication mechanism is covered in Section 6. Section 7 details the requirements that a hypervisor needs to satisfy to support multicore migration, and we show that the Enea Hypervisor satisfies all these requirements, making it a perfect base for multicore migration. The last section contains conclusions and summary remarks.

Application Domains, Architecture and Goals

The application domains we are focusing on migrating in this paper are telecommunications and networking. In these domains, software can be classified into three parts: management (O&M), control plane and data plane. The management plane takes care of the tasks that are not driven by the regular flow of packets, such as reading statistics, upgrading the software, and monitoring the health of the system. It is usually non-performance critical. Control plane applications are ones that terminate control traffic, usually through rather complicated and heavyweight protocols; typical protocols in this category include SCTP and GTP-C. These protocols are most of the time terminated in the node and require a substantial amount of CPU processing. Data plane software, on the other hand, deals with I/O bound processing, that is, sending and receiving packets with a small amount of processing per packet. This kind of code is usually highly deterministic and optimized to use as few CPU cycles as possible. Data plane applications typically do not terminate IP traffic and have much simpler protocols than the control plane. It might also be acceptable to drop packets in the data plane, which is typically forbidden in the control plane. Typical actions in the data plane are IP forwarding, VPN processing, etc.

The goal of the migration is to take your single core telecommunications or networking application and significantly improve its performance on a multicore device. This is accomplished by taking parts of your legacy single core application and moving them over to a multicore environment where you can parallelize them and hopefully achieve your desired performance improvements. Three further goals are to be able to do this in small steps, to always have a working system, and to minimize risk. Optionally, you may include Linux as a control plane operating system and/or for the data plane. Figure 1 shows the system during and after the migration.
Your remaining legacy software stack with its operating system will run on one or more cores (more than one if you can consolidate multiple boards), and your multicore code will execute either directly on the hypervisor for best performance, and/or in Linux for a more fully fledged environment with access to more services and protocols. In the end, we have achieved three important goals: we have retained and reused as much working legacy code as possible, parallelized the performance critical parts so that they are scalable and high performance, and optionally extended the system with Linux. Overall, you now have an excellent base for the future.

Figure 1: The system executing on the multicore chip after migrating your application. Note that Linux and the fast path are optional, but one of them should exist unless you are performing a pure consolidation by just instantiating your legacy software.

Hypervisor Overview

Before explaining the multicore migration methodology, we need to cover some basics about hypervisors, as the hypervisor is one of the most important components in the method. A hypervisor performs three important functions:

- It provides the ability to run multiple operating systems on top of the same multicore chip.
- It provides the ability to reach zero-overhead, bare-metal performance by executing data plane code straight on top of the hypervisor.
- It enforces isolation between all the operating systems executing on top of it.

In the rest of the paper, we are going to refer to an operating system and its associated applications executing on top of a hypervisor simply as a guest. Reformulated with this new definition, a hypervisor enables the execution of multiple guests on a multicore processor with complete isolation between them. The first property is important in the multicore migration effort, as we often want to migrate from one operating system to another, or transition from one copy of a guest to several copies, each executing on a dedicated core. The last property states that one guest cannot cause any other guest to crash, nor can any guest crash the hypervisor itself. As we will see later, this is important for maintaining high availability and for facilitating debugging through fault localization.

There are two different approaches to virtualizing a guest in a hypervisor: full virtualization and para-virtualization. With full virtualization, the guest is executed completely unmodified: just take the binary you had before, drop it onto the hypervisor, and you are up and running. The drawback of this approach is that it carries significant performance penalties. With para-virtualization, you have to modify the guest before you can execute it on the hypervisor. The deployment time of this approach is of course longer, unless the guest operating system has already been para-virtualized by the vendor. The advantage is that it results in faster execution. Full virtualization versus para-virtualization is really a sliding scale: you can decide to do more or less para-virtualization, providing a varying degree of performance improvement and guest porting effort. Most embedded hypervisors use para-virtualization of shared drivers and full virtualization of privileged registers and CPU state. This is a good trade-off, since shared drivers and operating systems are not changed often and performance is usually of the essence.

The Enea Hypervisor provides the ability to run OSE, Linux, and other operating systems, or even boot loaders, on the same multicore processor. For Linux and OSE, existing para-virtualization interfaces are used, so they can be executed as guests without any porting effort. More information about the Enea Hypervisor and the features and services it offers can be found in Section 7. But before that, we describe the incremental steps for migrating your telecommunications or networking application from single core to multicore.

Multicore Migration Methodology

How, then, to turn a single core application into a multicore one, ending up with the system depicted in Figure 1? A flow chart outlining this is shown in Figure 2. First, we need to para-virtualize the legacy guest operating system so that it will run efficiently on the hypervisor. For the Enea Hypervisor, the only parts that need to be rewritten are shared drivers. The mandatory ones are the console driver (if you would like to see what is going on inside the guest, that is) and the interrupt controller driver. The para-virtualized interfaces are much simpler than the hardware ones, so writing a para-virtualized driver usually amounts to stripping down the existing driver and putting in the new, simpler interfaces (a minimal sketch of what such a driver can look like follows Figure 2 below). For operating systems such as Linux and Enea OSE, this para-virtualization has already been done on many hardware architectures, so you can execute them as guests without having to modify anything. A guest is not allowed to modify and/or read some sensitive parts of the hardware state, such as the cache controller, the memory controller and low-level board initialization regions. Generally, code like this is only present in a boot loader, but if your operating system contains it, then you have to remove it, as the hypervisor will have set this up for you already.

The second step is to perform consolidation, if possible or desirable. If you have multiple single core CPUs (or single CPU boards) in your legacy system, you may take all the software on one CPU/board, instantiate it as a guest on top of the hypervisor, and then do the same for all the other CPUs/boards. In the simplest case, wherein each CPU/board is running the same piece of software, you just instantiate the same guest as many times as you have cores. If you cannot consolidate more, proceed to the next step.

Figure 2: An overview of the steps involved in migrating to multicore according to the approach in this paper.
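As an illustration of how little is typically left in a para-virtualized driver, the sketch below shows a console driver reduced to one hypercall per character. The hypercall name, number and trap mechanism (hv_call, HV_CONSOLE_PUTC) are hypothetical stand-ins, not the Enea Hypervisor's actual interface; the real para-virtualized interfaces are defined by the hypervisor.

```c
/*
 * Minimal sketch of a para-virtualized console driver, assuming a
 * hypothetical hv_call() trap and call number. The guest replaces its
 * UART register programming with a single trap into the hypervisor,
 * which owns the physical device.
 */
#include <stddef.h>

#define HV_CONSOLE_PUTC 1   /* hypothetical hypercall number */

/* Hypothetical trap into the hypervisor; on real hardware this is a
 * single privileged instruction (a syscall/hvc-style trap). */
extern long hv_call(long nr, long arg0);

static void console_putc(char c)
{
    hv_call(HV_CONSOLE_PUTC, (long)c);
}

void console_write(const char *buf, size_t len)
{
    /* No FIFO polling, no baud-rate setup, no interrupt handling:
     * all of that stays in the hypervisor's shared driver. */
    for (size_t i = 0; i < len; i++)
        console_putc(buf[i]);
}
```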

In the third step, the performance of the system is measured and its functionality is gauged. What to measure depends critically on what the system is designed to do, but whatever the correct metric is, it usually depends on the existence of some counters and/or some way to measure time. Either these measurements already exist in your application, or you have to add them (a sketch of such a measurement hook appears at the end of this section). If the performance is good enough, congratulations, you are done! More likely, you are not happy with the performance the first time around, and it has to be improved. Or you may simply wish to explore further optimizations in order to get the maximum performance possible. So now the goal is performance optimization of the code itself, and therefore we need to drill down into the architecture and/or organization of the legacy code.

For the code optimization step, we need to identify what part of the legacy code to functionally offload to the multicore part. Start by dividing the legacy code into two parts: non-performance critical (also non real-time critical) and performance critical. In the first category we generally find functionality such as initialization, configuration and debug functionality. In the second category we frequently find data plane code such as network protocol processing and terminations. Control plane code can be found in either category. Traditionally, it used to be non-performance critical. However, what we see in the telecommunications sector is that control plane code has become more and more performance critical, due to the increase in data traffic stemming from the emergence of smart phones and other mobile devices that send data traffic through the mobile network.

The non-performance critical functions will remain on the legacy operating system, as there is generally no reason to replace code that works. We will therefore focus on making the performance critical part faster by dividing it into smaller pieces, so that we can parallelize them one piece at a time, minimizing risk while always having something that works. In the search for performance bottlenecks, start with the piece of code in which the current system spends most of its execution time. Once a performance bottleneck has been identified, we need to parallelize it using some design pattern and move it to the other cores. This is where most of the real performance optimization effort will be spent; the topic is therefore covered in detail in Section 5.

In the last step, the legacy part has to be connected with the newly parallelized part so that it becomes a fully functional system. To have the workload communicate between the legacy system and the multicore part in a seamless manner, we use an inter-process communication (IPC) mechanism called Enea LINX that offers location, inter operating system, and inter media independence. More details about this step can be found in Section 6. Finally, we loop back in the flow chart and measure the performance of the system once more. If it is good enough, we are done; otherwise we reiterate and parallelize one more part of the legacy code. The next section explains the parallelization step in detail.
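Before moving on to the parallelization patterns, here is a sketch of the kind of measurement hook referred to in the third step: a packet counter sampled against a monotonic clock to derive throughput. It assumes a POSIX environment such as the Linux guest; on OSE or bare metal, a hardware cycle counter would play the same role.

```c
/*
 * Sketch of a throughput measurement hook: count packets on the fast
 * path, sample the counter periodically against CLOCK_MONOTONIC.
 * Assumes a single fast-path thread updates the counter.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t pkt_count;          /* incremented on the fast path */

void on_packet(void) { pkt_count++; }

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Call periodically from a low-priority context. */
void report_throughput(void)
{
    static uint64_t last_ns, last_count;
    uint64_t t = now_ns(), n = pkt_count;

    if (last_ns != 0) {
        double secs = (t - last_ns) / 1e9;
        printf("%.0f packets/s\n", (n - last_count) / secs);
    }
    last_ns = t;
    last_count = n;
}
```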
Two Parallelization Approaches

There are many ways to parallelize an application. We will present two, the ones that we think are the most successful for networking and telecommunications applications: instantiation and functional pipelining, shown in Figure 3.

Figure 3: The two models of parallelizing applications dealt with in this paper: functional pipelining and instantiation.

Let us first define a task to be a unit of work that has to be performed on a packet, or otherwise on an individual unit of data or control information whose flow the system is managing. The processing of a packet consists of one or more tasks performed on the packet in a certain order. With instantiation, each core processes all tasks associated with a packet entirely. All cores are symmetric, and it is really easy to scale such a design: just instantiate another copy of the application code on another core. Load balancing is performed by sending different packets to different cores, in the simplest case in a first-come first-served fashion, or, more advanced, by sending related packets to the same core. The most efficient load balancing is performed or supported by hardware, and most modern embedded processors support this to various degrees. In some cases, the flows of packets have state (e.g., multimedia transcoding, IP tunneling), which means that all packets of a flow must be sent to the same core and passed in order through that core, due to the location of the state. In other cases, such as IP forwarding and NAT, the flow is stateless, and thus most packets can be processed by any core. (A software flow-to-core hashing sketch follows below.)

In the functional pipelining approach, each core performs a specific task on the packet and sends it to the next stage in a pipelined fashion. This is a simple way to parallelize an application for a small number of cores: just take your existing processes and/or functions and spread them out over the cores. The drawback is that it does not scale to larger systems, and adding a single core will result in a redesign. Sometimes, only two-core parallelization is possible due to load imbalances. A system design may include both pipelining and instantiation, e.g., a two-core pipeline instantiated four times on an eight-core CPU. This type of design combines the advantages of both approaches.
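The sketch below illustrates software flow-to-core hashing for the instantiation pattern: packets keyed on, say, the source IP address always land on the same core, preserving per-flow state and ordering. The hash and the core count are illustrative; on most modern embedded processors the equivalent classification is done in hardware.

```c
/*
 * Sketch of flow-to-core load balancing: packets of the same flow
 * (here keyed on source IP) are always dispatched to the same core.
 */
#include <stdint.h>

#define NUM_WORKER_CORES 4

/* Murmur3-style finalizer so that nearby addresses spread out. */
static uint32_t mix32(uint32_t h)
{
    h ^= h >> 16; h *= 0x85ebca6b;
    h ^= h >> 13; h *= 0xc2b2ae35;
    h ^= h >> 16;
    return h;
}

unsigned pick_core(uint32_t src_ip)
{
    return mix32(src_ip) % NUM_WORKER_CORES;
}
```

For stateless processing such as plain IP forwarding, the key could instead be a per-packet round-robin counter, since any core can handle any packet.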

Pipelining

Let us start with how to design your application for the pipelining case, as that is usually the easiest way to go to two, or up to maybe four, cores. A flowchart of all the steps involved in a pipelining design can be found in Figure 4. First, you need to identify a number of functions/tasks that need to be performed on a packet, and the sequence of these tasks. This forms the pipeline, and each task is a stage in this pipeline that the packet has to go through. The goal of the partitioning is to achieve a system with the load balanced as equally as possible across all cores.

As an example, let us say that you have identified six functions that you have to partition onto a dual core system. You start by assigning three random functions to one core and the rest to the second core, and send 100 packets a second to the system. When measuring the per core load, you get 1% of the total CPU load on one core and 9% on the other. You increase the number of packets to 1000 per second and you measure 10% load on the first core and 90% on the second. This is not good. If you continue to increase the number of packets, the second core will soon saturate at 100% load, and at that point the first core will only be at a measly 11% CPU load. The maximum total utilization of the two cores is in this case roughly 55%. But if you repartition your functions between the cores to achieve a completely load balanced system, at for example 45% CPU load on each core at 1000 packets per second, then you can theoretically reach 100% load on each core and process more than 2000 packets a second. To achieve the most efficient pipelining design, each packet should consume an equal amount of CPU load on each core.

If your application is already partitioned into processes, threads, functions, or modules, then these can form the base for your tasks. Let us say that you have four threads in your application and a dual core CPU. Start by measuring the CPU load of each thread, and partition the threads between the two cores so that the load is as balanced as possible. Note that it is very important that your communication and synchronization mechanisms between the threads are multicore safe, so that you can move them freely between cores; how to accomplish this is covered in Section 6. The next step is to measure the CPU loads of the two cores in your partitioned system, because the load will have changed due to cache effects and the communication overhead between the threads. If the CPU load between the two cores is balanced, then you are done. If not, try to repartition your threads so that the load becomes more balanced. If this is not possible due to the amount of work performed by each thread, you need to split the most heavily loaded thread into two threads, so that you can partition them in a more balanced way. Now you have five threads that you can partition between the cores. Proceed in the same way as before, and hopefully you will achieve a load balanced system that utilizes all the cores effectively.

The main drawback of the pipelining approach is that it is not automatically scalable: if one more core is added in the next generation, then you have to redo the partitioning all over again. Another drawback is that it is static and therefore not tolerant to dynamic load changes between the cores. (A minimal sketch of the inter-stage plumbing for a pipeline follows Figure 4 below.)

Figure 4: A flowchart showing the steps involved in parallelizing an application with the pipelining approach.
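The sketch below shows one way to implement the plumbing between two pipeline stages: a single-producer/single-consumer ring carrying packet pointers from the core running one stage to the core running the next. This is a generic C11 sketch, not an Enea API; in a real design the transport would typically be LINX or the hardware queues discussed in Section 6.

```c
/*
 * Sketch of the glue between two pipeline stages: a single-producer/
 * single-consumer ring of packet pointers, using C11 atomics.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define RING_SIZE 1024              /* power of two */

struct ring {
    void *slot[RING_SIZE];
    _Atomic unsigned head;          /* written by producer only */
    _Atomic unsigned tail;          /* written by consumer only */
};

bool ring_push(struct ring *r, void *pkt)   /* stage A core */
{
    unsigned h = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (h - t == RING_SIZE)
        return false;               /* full: back-pressure stage A */
    r->slot[h % RING_SIZE] = pkt;
    atomic_store_explicit(&r->head, h + 1, memory_order_release);
    return true;
}

void *ring_pop(struct ring *r)              /* stage B core */
{
    unsigned t = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&r->head, memory_order_acquire);
    if (t == h)
        return NULL;                /* empty */
    void *pkt = r->slot[t % RING_SIZE];
    atomic_store_explicit(&r->tail, t + 1, memory_order_release);
    return pkt;
}
```

Each stage runs its loop on a dedicated core; because exactly one core produces and one consumes, no locks are needed, which is what keeps the per-packet cost of the hand-off low.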

With the other parallelization approach, instantiation, it is really easy to scale with the number of cores, and it offers a more future proof design, but it does require more of a design effort up front.

Instantiation

Instantiation is a really good design pattern for achieving scalability. The only way to achieve perfect scalability is to share nothing. However, this can be quite hard, nigh impossible, in practice, so a more realistic aim is to share as little as possible. A flow chart of all the steps in the instantiation design pattern can be found in Figure 5. Start by defining the packet set (P-set) to be just a single packet. The first step is then to find the set of independent data and the set of dependent data between any two P-sets. The set of independent data is the data that is never shared between two P-sets. Conversely, the set of dependent data is the data that might be used by two P-sets. Our next step is to redefine the flow so that the size of the independent set is maximized, while still having the smallest (in number of packets) definition of the P-set. We should not end up with a P-set definition wherein every single packet belongs to one P-set, because that would mean that only one single core could be used. For example, if most of the dependent data accesses are due to accesses from the same IP address, then we can redefine the P-set to be all packets from a single IP address. Next, re-compute the data sets with the new P-set definition. Then instantiate the application on all the cores not running anything else, and program the hardware so that packets are routed according to the definition of a P-set.

If the set of dependent data is empty, then we are done. If this is not the case, we need to make sure that the set of dependent data is accessed in the correct way. There are many ways to make sure that accesses to an inter P-set dependent data set are serialized correctly. If this data is accessed rarely, then the simplest way is to achieve serialization by sending messages to a single core that reads and writes all data in that region; this core then serves as the serialization point. If the data is accessed frequently, then these messages will add too much overhead to be of practical use, and we have to use some other method. If the set of dependent data is frequently read but rarely written, then we can rely on shared memory: put the data there and protect the read accesses from modifications either by performing the writes using the previous approach (reads can then be made without any locking), or by introducing RCU (read-copy-update) locking. If the data is both read and written frequently, then you can either introduce a locking mechanism such as mutexes or spinlocks to create mutual exclusion. However, this will negatively impact the scalability of the design, and if this is or becomes an issue, refactor the problem so that there is less data that is both read and written frequently. (A sketch of the message-based serialization point follows Figure 5 below.)

To summarize, try to use the instantiation design pattern if possible, as it offers a scalable path to future, larger multicore chips. However, if instantiation entails too much of a redesign effort, or you never need to scale to more than a small number of cores, use the pipelining method, which might provide an easier path on small multicore CPUs. The two methods can also be combined for good effect. But in both methods, it is critical to have good measurements and to have a communication mechanism between the parts that allows for experimentation with the placement of functionality. How this latter part is achieved is covered in the next section.

Figure 5: A flowchart showing the steps involved in parallelizing an application with the instantiation approach.
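The sketch below illustrates the message-based serialization point described above: worker cores never touch the dependent data directly but send update requests to a designated owner core, which applies them in arrival order and therefore needs no locks. The mailbox functions and the OWNER_CORE constant are hypothetical stand-ins for whatever transport is used (LINX, a ring like the one above, or hardware queues).

```c
/*
 * Sketch of a message-based serialization point for inter-P-set
 * dependent data. mbox_send()/mbox_recv() are assumed stand-ins
 * for the actual IPC mechanism.
 */
#include <stdint.h>

#define OWNER_CORE 0                /* core that owns the shared data */

struct update_msg {
    uint32_t key;                   /* which entry to update    */
    int64_t  delta;                 /* e.g. a counter increment */
};

extern void mbox_send(unsigned core, const struct update_msg *m);
extern int  mbox_recv(struct update_msg *m);   /* nonblocking, 0 = empty */

/* On any worker core: request the change instead of taking a lock. */
void account_bytes(uint32_t flow, int64_t bytes)
{
    struct update_msg m = { .key = flow, .delta = bytes };
    mbox_send(OWNER_CORE, &m);
}

/* On the owner core: the single writer, so no locking is needed. */
extern int64_t flow_bytes[];        /* private to the owner core */

void owner_poll(void)
{
    struct update_msg m;
    while (mbox_recv(&m))
        flow_bytes[m.key] += m.delta;
}
```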

Communication Mechanisms

As a communication mechanism between the operating systems, and between functional units in the pipeline parallelization technique, we propose Enea LINX. LINX is an inter-process communication (IPC) mechanism with a number of properties that are essential in the system. First of all, LINX is location transparent, meaning that it does not matter from a functional perspective where the communicating entities in the system are located. They can be on the same core, on different cores, or even on different physical CPU devices or boards; the same code will run unmodified in all cases. This enables us to move functionality freely around the multicore chip as we explore the best possible design in both the instantiation and the pipelining design patterns. LINX is available for many different operating systems, such as Linux, OSE and other selected operating systems, as well as for the Enea Hypervisor, so it can be used to communicate between the different operating systems present in the system. When LINX is not available for a given operating system, it can be downloaded from SourceForge as open source software and ported to your OS of choice. LINX is high performance; its overhead is lower than that of similar mechanisms present in the Linux kernel. LINX is also transport media independent, so if you ever decide that you need to scale by putting in yet another multicore chip or board, the same application code can still be executed. LINX works on many transport media straight out of the box: shared memory, Ethernet, sRIO, and even TCP/IP as a bearer protocol. (A sketch of LINX-style signaling follows below.)

LINX is Enea's default choice for control signaling between entities in our system. For data packets, however, it is usually better to send them between cores using the hardware queues generally available on most modern multicore devices. While hardware supported packet transport does not offer many of the advantages of LINX, it will often be faster, and the packets are then available to any flow control and queuing algorithms implemented by the hardware. These advantages often outweigh the drawback of losing the more scalable and portable LINX based solution.
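To make the location transparency concrete, the sketch below shows a client hunting for a named worker endpoint and sending it a signal. The calls follow the open source LINX for Linux user-space API (linx_open, linx_hunt, linx_receive, linx_alloc, linx_send), but the exact signatures here are written from memory and should be checked against the LINX documentation; the signal layout is application defined.

```c
/*
 * Sketch of LINX-style signaling. The worker may be on the same core,
 * another core, another guest OS, or another board; the code is the
 * same in every case.
 */
#include <linx.h>

#define REQ_SIG ((LINX_SIGSELECT)0x1001)

/* In LINX, the signal union is defined by the application. */
union LINX_SIGNAL {
    LINX_SIGSELECT sig_no;
    struct {
        LINX_SIGSELECT sig_no;
        int payload;
    } req;
};

void send_request(void)
{
    static const LINX_SIGSELECT any_sig[] = { 0 };  /* receive anything */
    LINX *lnx = linx_open("client", 0, NULL);
    union LINX_SIGNAL *sig;
    LINX_SPID worker;

    /* Hunt resolves the name wherever "worker" lives. */
    linx_hunt(lnx, "worker", NULL);
    linx_receive(lnx, &sig, any_sig);    /* blocks until found */
    worker = linx_sender(lnx, &sig);
    linx_free_buf(lnx, &sig);

    sig = linx_alloc(lnx, sizeof(sig->req), REQ_SIG);
    sig->req.payload = 42;
    linx_send(lnx, &sig, worker);        /* buffer ownership passes */

    linx_close(lnx);
}
```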
Enea Hypervisor

What, then, are the properties needed by the hypervisor to provide a good base for multicore migration? There are quite a few, but starting at the most basic level, a hypervisor needs two fundamental properties: low overhead and excellent scalability. Low overhead, or high performance, is desired because the application and guest OS programmer wants as many cycles as possible to go to their code, not to the hypervisor. The hypervisor is only a layer that should enable multiple guests, various shared services, and isolation between them, and it should therefore be as unobtrusive as possible. Scalability is needed because hypervisors in the embedded world are mostly used for multicore CPUs, and the trend is that processors get more cores, not fewer. The performance should not go down just because more cores were added in the next design.

If someone were to build a hypervisor from scratch, what would the main design principles be in order to achieve these two goals? Low overhead is achieved by being as small and optimized as possible. A micro kernel is therefore the best solution, not a large operating system such as Linux or Windows, which are often used for hosted hypervisors. As nearly all services in a micro kernel are modules on top of it, it is possible to include only the features that are deemed necessary by the user, and not load anything else. This creates a good base for low overhead and high performance. The main obstacle to good scalability is the sharing of state between cores. Shared state nearly always means that synchronization and mutual exclusion primitives have to be employed, and that state has to be transferred between the caches, wasting time and consuming resources. These are all costly and limit scalability, and they tend to become more costly as multicore CPUs grow. So, in order to get good scalability, each core should only use private memory. When one core has to modify state residing at some other core, messaging between cores is preferred, so that memory sharing is avoided (except for the memory used to carry the message).

These design principles, micro kernel, private memory, and message passing, are the exact features of Enea's real-time operating system (RTOS) OSE. In other words, OSE is the perfect base for a hypervisor. The Enea Hypervisor is based on the Enea OSE real-time operating system, a truly distributed operating system using a message based programming model that provides application location transparency. The OSE architecture is modular and scalable, and consists of a large set of run-time components that execute on top of a micro kernel. Most services and kernel features are just modules on top of the micro kernel. The micro kernel in itself only contains the scheduler, an IPC mechanism, and the memory manager. The memory manager is actually optional in OSE, but it is strictly required for the hypervisor, since it is very hard to provide isolation without memory management support. The default build of the Enea Hypervisor is shown in Figure 6. This is a scaled down version of the standard OSE kernel configuration, with C run-time support, the hypervisor itself, the program manager and run-time loader for launching new guests, and the console. Note that the hypervisor appears to be just another module on top of the OSE kernel, but in the real implementation, most of the hypervisor code actually resides underneath OSE, closest to the hardware, with no additional overhead. The absolute minimal configuration is one where only the C run-time and the hypervisor are present; but in this configuration it would not be possible to dynamically launch and tear down guests, nor to communicate with the hypervisor remotely without a console.

The main benefits of basing the Enea Hypervisor on OSE are the following:

- Proven-in-use technology: OSE has been around for 20 years and is in use in billions of systems. It works! There is no reason to go through the pain of incorporating some new hypervisor based scheduler, memory manager, device driver framework, etc.
- Highly scalable micro kernel: OSE has proven-in-use scalability on up to 32 cores.

- Existing services: For example file systems, networking stacks, and debug and profiling tools. As it is a micro kernel, these features can be removed and added as seen fit.
- No need for a master guest: A master guest, or master OS, is required if the hypervisor is so small and feature poor that an OS has to be launched on top of it just in order to configure the board and supply basic services. Networking hardware is a prime example: on some hypervisors, Linux needs to be launched on top just to configure the networking stack. This is not good if Linux is not desired. Even if Linux needs to be part of the solution, using it as a master raises a whole host of software management, boot time, and configuration issues that are best left to the hypervisor layer. Why? Because the Linux master guest becomes a single point of failure in the system.
- Stable APIs: The APIs are stable, as they have already stood the test of time.
- Powerful device driver framework: This is used to dynamically launch, tear down and upgrade hypervisor drivers and services associated with a guest or the hypervisor.
- OSE applications run natively: If you are already an OSE user, OSE applications can be run natively on top of the hypervisor. The hypervisor is then just an added load module on top of the OSE that you are using anyway.
- Bare metal application support: It is possible to run polling run-to-completion loops straight on top of the Enea Hypervisor without any performance overhead. There is no need for yet another executive environment.

The Enea Hypervisor satisfies the most important features and properties needed from a hypervisor solution that facilitates migration from single core to multicore and provides a solid foundation for your future telecommunications and networking application requirements. These are:

- Isolation: Between individual guests, and between guests and the hypervisor, for fault and/or failure isolation and localization. For the multicore consolidation case, this is important so that the same fault/failure model is present as in the old multi-CPU/multi-board system: if one CPU/board crashes, it should not affect any other CPU/board, and only a restart of the failed one is necessary. Without isolation and a hypervisor on a multicore chip, one virtual CPU/board (core) might bring down any number of other virtual CPU/boards (cores), in which case a restart of every single virtual CPU/board in the multicore device is necessary. In the same way, we do not want a data plane guest to be able to crash any other data plane guest or the control plane guest, so isolation is essential.
- Common unified boot load for all guests: The flexible guest loader concept present in the Enea Hypervisor can mimic any boot loader. It is delivered in source code with well-defined interfaces, so it can be adapted to any boot loader that might be used in a legacy system.
- Dynamic update of guests and the hypervisor at run time: In a high-availability system, such as many in the telecommunications and networking world, the last thing you want to do is restart the hypervisor or all the guests, as this will eat up many of your precious nines of availability. The Enea Hypervisor accomplishes dynamic updating with its modular architecture: each service, each driver and many features are kernel or user mode load modules in the hypervisor that can be updated at run time without having to reboot or recompile the hypervisor.

Figure 6: The hypervisor modular architecture in a typical configuration.
- Power save and dynamic hot-plugging of memory and CPUs: In telecommunications and networking systems, load and usage patterns vary with the time of day, so it is important to be able to adapt to this and to utilize the hardware efficiently, at the minimum power level required to sustain the desired service level. Therefore, we should be able to withdraw CPUs, as well as physical memory, from one guest and add them to another. If CPUs or memory are not used, it should be possible to turn them off (if the hardware allows for this). For example, if the Linux control plane needs one more processor, it should be possible to de-allocate one from the data plane, or power up one that has been idle, and provide it to the control plane together with some memory.
- No single point of failure with a master OS: Many hypervisors require a master OS, generally Linux, to handle services, debugging, error handling, management, and the setup of devices such as networking. This is not good, as it creates a single point of failure, and you are forced to run Linux even if you do not want to. But even if you want Linux in your system, the boot time of your system will suffer, as all guests have to wait for Linux to boot up and initialize the system. The Enea Hypervisor does not rely on a master OS; instead, fundamental services can be hosted by the hypervisor itself.
- Shared services: There are some shared services that are good to have in a hypervisor, and many of these are available in the Enea Hypervisor, to mention just a few: a shared file system, high level networking protocols, and debug and profiling tools. With the shared file system, whenever a guest or one of its applications crashes, the hypervisor can write the crash dump into the shared file system, where some other entity can read it at a later point for off-line analysis. This file system can also be used to read configuration information and to launch applications and guests.
- Simple guest OS and application error handling and recovery: Guest applications and guest operating systems will crash at some point in time, and it is prudent to design the system so that as much data as possible is dumped in this situation for off-line root cause failure analysis. With the Enea Hypervisor, the error handling is centralized in one user modifiable function, in which you can, e.g., dump the whole state of the guest into a core dump. You can also retrieve the guest's virtual machine state from the hypervisor and save the whole dump to a file system or send it out onto the network. The options are plentiful, as the Enea Hypervisor provides a full C run-time and a number of modular services that you can load and use for your purpose. All resources acquired by the guest during its lifetime (e.g., those acquired through services and devices) are automatically reclaimed, which makes life a lot easier. These resources can then be distributed to other guests, if desired, or reclaimed by the same guest when it is restarted. (A sketch of such an error handler follows this list.)
- Supervision and management: Whenever a guest crashes, a centralized error handler in the hypervisor is called, in which you can make a system wide decision. E.g., you could simply start a new guest on the core, or you could fail over to a hot standby that you have on another core by rerouting all the traffic destined for the failed guest to the new guest. The error handler can be freely written by the user and even changed at run time; it is not compiled into the hypervisor. As always, we try to make sure that as many things as possible are kept run-time user configurable.
- Debugging and profiling: When bringing up a new guest that has not been para-virtualized before, it is important to be able to debug it. The Enea Hypervisor offers freeze mode debugging of the guest through both GDB and our Eclipse based IDE tool suite, Enea Optima. Profiling functionality is also available, and as it lives in the hypervisor, you get the performance profile of the whole system, not just a single guest.
- Data plane applications directly on top of the hypervisor: If we do not want to run our data plane applications in Linux or some other operating system, it is possible to run them directly on top of the Enea Hypervisor for maximum performance. The Enea Hypervisor provides a user mode environment for native applications. You just need to write your run-to-completion loop, compile it for the hypervisor, and launch it as a load module straight on top of it; thus it will not suffer any hypervisor overhead. By launching this load module in user mode, it is isolated from the hypervisor, so the native application cannot crash the hypervisor. (A sketch of such a loop also follows this list.)
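As an illustration of the centralized error handling described above, the sketch below shows what a user-supplied guest error handler might look like. The callback signature and the hv_* helper calls are hypothetical stand-ins; the actual hook and the dump, log and restart services are defined by the Enea Hypervisor.

```c
/*
 * Sketch of a centralized, user-replaceable guest error handler.
 * All names here are hypothetical stand-ins for the real hypervisor
 * services.
 */
#include <stdint.h>

struct guest;                                /* opaque guest handle */

/* Hypothetical hypervisor services. */
extern int  hv_guest_dump(struct guest *g, const char *path);
extern int  hv_guest_restart(struct guest *g);
extern void hv_log(const char *msg);

/* Called by the hypervisor whenever a guest crashes. */
void guest_error_handler(struct guest *g, uint32_t reason)
{
    (void)reason;
    hv_log("guest crashed, saving state for off-line analysis");

    /* Dump the full virtual machine state to the shared file system. */
    hv_guest_dump(g, "/dumps/guest.core");

    /* System-wide policy decision: here, simply restart in place.
     * A fail-over to a hot standby on another core is equally valid. */
    hv_guest_restart(g);
}
```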
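And as an illustration of the last item, the sketch below shows the shape of a native, polling run-to-completion loop. The hardware queue calls and the packet type are hypothetical stand-ins for whatever the device and the hypervisor environment expose.

```c
/*
 * Sketch of a native data-plane loop running directly on the
 * hypervisor: no interrupts, no scheduler, no hypervisor traps on
 * the fast path. The hwq_* calls are assumed stand-ins for the
 * hardware queue interface of the device.
 */
#include <stdbool.h>

struct pkt;

/* Hypothetical zero-copy hardware queue interface. */
extern struct pkt *hwq_rx_poll(void);        /* NULL if queue empty */
extern void        hwq_tx_push(struct pkt *p);
extern bool        process_packet(struct pkt *p);   /* app logic */

void fast_path_main(void)
{
    for (;;) {
        struct pkt *p = hwq_rx_poll();
        if (!p)
            continue;                        /* spin; core is dedicated */
        if (process_packet(p))
            hwq_tx_push(p);                  /* forward the packet */
        /* else: packet was consumed or dropped by the application */
    }
}
```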
Conclusions

Migrating single core legacy software to multicore in order to gain performance, save power, cut B.O.M., and/or save space is one of the main challenges facing the telecommunications and networking industries today. In this paper, we have presented a methodology for migrating your single core telecommunications and networking applications to multicore. Briefly, it centers on virtualizing your legacy software on top of the Enea Hypervisor and moving your performance critical code out to the other cores by employing one or more of the parallelization techniques presented in the paper. This performance critical code can be run either on Linux or straight on top of the Enea Hypervisor. The main advantages of our approach are that it keeps as much of your legacy single core software as possible as is, that the migration can be performed in small steps, and that it provides a functioning system at the end of each of these steps. These are all essential in order to minimize risk and to maximize the reuse of your software investment.

Enea is a global software and services company focused on solutions for communication-driven products. With 40 years of experience, Enea is a world leader in the development of software platforms with extreme demands on high availability and performance. Enea's expertise in real-time operating systems and high availability middleware shortens development cycles, brings down product costs and increases system reliability. Enea's vertical solutions cover telecom handsets and infrastructure, medtech, industrial automation, automotive and mil/aero. Enea has 750 employees and is listed on Nasdaq OMX Nordic Exchange Stockholm AB. For more information please visit enea.com or contact us at info@enea.com.


More information

How do Users and Processes interact with the Operating System? Services for Processes. OS Structure with Services. Services for the OS Itself

How do Users and Processes interact with the Operating System? Services for Processes. OS Structure with Services. Services for the OS Itself How do Users and Processes interact with the Operating System? Users interact indirectly through a collection of system programs that make up the operating system interface. The interface could be: A GUI,

More information

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com W H I T E P A P E R O r a c l e V i r t u a l N e t w o r k i n g D e l i v e r i n g F a b r i c

More information

PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions. Outline. Performance oriented design

PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions. Outline. Performance oriented design PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions Slide 1 Outline Principles for performance oriented design Performance testing Performance tuning General

More information

Linux Driver Devices. Why, When, Which, How?

Linux Driver Devices. Why, When, Which, How? Bertrand Mermet Sylvain Ract Linux Driver Devices. Why, When, Which, How? Since its creation in the early 1990 s Linux has been installed on millions of computers or embedded systems. These systems may

More information

Making Multicore Work and Measuring its Benefits. Markus Levy, president EEMBC and Multicore Association

Making Multicore Work and Measuring its Benefits. Markus Levy, president EEMBC and Multicore Association Making Multicore Work and Measuring its Benefits Markus Levy, president EEMBC and Multicore Association Agenda Why Multicore? Standards and issues in the multicore community What is Multicore Association?

More information

TOP TEN CONSIDERATIONS

TOP TEN CONSIDERATIONS White Paper TOP TEN CONSIDERATIONS FOR CHOOSING A SERVER VIRTUALIZATION TECHNOLOGY Learn more at www.swsoft.com/virtuozzo Published: July 2006 Revised: July 2006 Table of Contents Introduction... 3 Technology

More information

Chapter 2: OS Overview

Chapter 2: OS Overview Chapter 2: OS Overview CmSc 335 Operating Systems 1. Operating system objectives and functions Operating systems control and support the usage of computer systems. a. usage users of a computer system:

More information

Management of VMware ESXi. on HP ProLiant Servers

Management of VMware ESXi. on HP ProLiant Servers Management of VMware ESXi on W H I T E P A P E R Table of Contents Introduction................................................................ 3 HP Systems Insight Manager.................................................

More information

Windows Server Performance Monitoring

Windows Server Performance Monitoring Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly

More information

The Microsoft Windows Hypervisor High Level Architecture

The Microsoft Windows Hypervisor High Level Architecture The Microsoft Windows Hypervisor High Level Architecture September 21, 2007 Abstract The Microsoft Windows hypervisor brings new virtualization capabilities to the Windows Server operating system. Its

More information

Virtualization, SDN and NFV

Virtualization, SDN and NFV Virtualization, SDN and NFV HOW DO THEY FIT TOGETHER? Traditional networks lack the flexibility to keep pace with dynamic computing and storage needs of today s data centers. In order to implement changes,

More information

Virtualization: Hypervisors for Embedded and Safe Systems. Hanspeter Vogel Triadem Solutions AG

Virtualization: Hypervisors for Embedded and Safe Systems. Hanspeter Vogel Triadem Solutions AG 1 Virtualization: Hypervisors for Embedded and Safe Systems Hanspeter Vogel Triadem Solutions AG 2 Agenda Use cases for virtualization Terminology Hypervisor Solutions Realtime System Hypervisor Features

More information

Oracle Database Scalability in VMware ESX VMware ESX 3.5

Oracle Database Scalability in VMware ESX VMware ESX 3.5 Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises

More information

How Solace Message Routers Reduce the Cost of IT Infrastructure

How Solace Message Routers Reduce the Cost of IT Infrastructure How Message Routers Reduce the Cost of IT Infrastructure This paper explains how s innovative solution can significantly reduce the total cost of ownership of your messaging middleware platform and IT

More information

Example of Standard API

Example of Standard API 16 Example of Standard API System Call Implementation Typically, a number associated with each system call System call interface maintains a table indexed according to these numbers The system call interface

More information

The Benefits of POWER7+ and PowerVM over Intel and an x86 Hypervisor

The Benefits of POWER7+ and PowerVM over Intel and an x86 Hypervisor The Benefits of POWER7+ and PowerVM over Intel and an x86 Hypervisor Howard Anglin rhbear@us.ibm.com IBM Competitive Project Office May 2013 Abstract...3 Virtualization and Why It Is Important...3 Resiliency

More information

An Easier Way for Cross-Platform Data Acquisition Application Development

An Easier Way for Cross-Platform Data Acquisition Application Development An Easier Way for Cross-Platform Data Acquisition Application Development For industrial automation and measurement system developers, software technology continues making rapid progress. Software engineers

More information

Beyond Virtualization: A Novel Software Architecture for Multi-Core SoCs. Jim Ready September 18, 2012

Beyond Virtualization: A Novel Software Architecture for Multi-Core SoCs. Jim Ready September 18, 2012 Beyond Virtualization: A Novel Software Architecture for Multi-Core SoCs Jim Ready September 18, 2012 How HW guys view the world SW Software HW How SW guys view the world SW HW Reality The SoC Software

More information

Networking for Caribbean Development

Networking for Caribbean Development Networking for Caribbean Development BELIZE NOV 2 NOV 6, 2015 w w w. c a r i b n o g. o r g Virtualization: Architectural Considerations and Implementation Options Virtualization Virtualization is the

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

Embedded Systems. 6. Real-Time Operating Systems

Embedded Systems. 6. Real-Time Operating Systems Embedded Systems 6. Real-Time Operating Systems Lothar Thiele 6-1 Contents of Course 1. Embedded Systems Introduction 2. Software Introduction 7. System Components 10. Models 3. Real-Time Models 4. Periodic/Aperiodic

More information

System Software Integration: An Expansive View. Overview

System Software Integration: An Expansive View. Overview Software Integration: An Expansive View Steven P. Smith Design of Embedded s EE382V Fall, 2009 EE382 SoC Design Software Integration SPS-1 University of Texas at Austin Overview Some Definitions Introduction:

More information

Cloud Server. Parallels. An Introduction to Operating System Virtualization and Parallels Cloud Server. White Paper. www.parallels.

Cloud Server. Parallels. An Introduction to Operating System Virtualization and Parallels Cloud Server. White Paper. www.parallels. Parallels Cloud Server White Paper An Introduction to Operating System Virtualization and Parallels Cloud Server www.parallels.com Table of Contents Introduction... 3 Hardware Virtualization... 3 Operating

More information

Delivering Quality in Software Performance and Scalability Testing

Delivering Quality in Software Performance and Scalability Testing Delivering Quality in Software Performance and Scalability Testing Abstract Khun Ban, Robert Scott, Kingsum Chow, and Huijun Yan Software and Services Group, Intel Corporation {khun.ban, robert.l.scott,

More information

Servervirualisierung mit Citrix XenServer

Servervirualisierung mit Citrix XenServer Servervirualisierung mit Citrix XenServer Paul Murray, Senior Systems Engineer, MSG EMEA Citrix Systems International GmbH paul.murray@eu.citrix.com Virtualization Wave is Just Beginning Only 6% of x86

More information

Hardware Based Virtualization Technologies. Elsie Wahlig elsie.wahlig@amd.com Platform Software Architect

Hardware Based Virtualization Technologies. Elsie Wahlig elsie.wahlig@amd.com Platform Software Architect Hardware Based Virtualization Technologies Elsie Wahlig elsie.wahlig@amd.com Platform Software Architect Outline What is Virtualization? Evolution of Virtualization AMD Virtualization AMD s IO Virtualization

More information

Uses for Virtual Machines. Virtual Machines. There are several uses for virtual machines:

Uses for Virtual Machines. Virtual Machines. There are several uses for virtual machines: Virtual Machines Uses for Virtual Machines Virtual machine technology, often just called virtualization, makes one computer behave as several computers by sharing the resources of a single computer between

More information

Directions for VMware Ready Testing for Application Software

Directions for VMware Ready Testing for Application Software Directions for VMware Ready Testing for Application Software Introduction To be awarded the VMware ready logo for your product requires a modest amount of engineering work, assuming that the pre-requisites

More information

Rackspace Cloud Databases and Container-based Virtualization

Rackspace Cloud Databases and Container-based Virtualization Rackspace Cloud Databases and Container-based Virtualization August 2012 J.R. Arredondo @jrarredondo Page 1 of 6 INTRODUCTION When Rackspace set out to build the Cloud Databases product, we asked many

More information

virtualization.info Review Center SWsoft Virtuozzo 3.5.1 (for Windows) // 02.26.06

virtualization.info Review Center SWsoft Virtuozzo 3.5.1 (for Windows) // 02.26.06 virtualization.info Review Center SWsoft Virtuozzo 3.5.1 (for Windows) // 02.26.06 SWsoft Virtuozzo 3.5.1 (for Windows) Review 2 Summary 0. Introduction 1. Installation 2. VPSs creation and modification

More information

Microkernels, virtualization, exokernels. Tutorial 1 CSC469

Microkernels, virtualization, exokernels. Tutorial 1 CSC469 Microkernels, virtualization, exokernels Tutorial 1 CSC469 Monolithic kernel vs Microkernel Monolithic OS kernel Application VFS System call User mode What was the main idea? What were the problems? IPC,

More information

M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2.

M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. M.Sc. IT Semester III VIRTUALIZATION QUESTION BANK 2014 2015 Unit 1 1. What is virtualization? Explain the five stage virtualization process. 2. What are the different types of virtualization? Explain

More information

9/26/2011. What is Virtualization? What are the different types of virtualization.

9/26/2011. What is Virtualization? What are the different types of virtualization. CSE 501 Monday, September 26, 2011 Kevin Cleary kpcleary@buffalo.edu What is Virtualization? What are the different types of virtualization. Practical Uses Popular virtualization products Demo Question,

More information

Novel Systems. Extensible Networks

Novel Systems. Extensible Networks Novel Systems Active Networks Denali Extensible Networks Observations Creating/disseminating standards hard Prototyping/research Incremental deployment Computation may be cheap compared to communication

More information

Virtualization Technologies ORACLE TECHNICAL WHITE PAPER OCTOBER 2015

Virtualization Technologies ORACLE TECHNICAL WHITE PAPER OCTOBER 2015 Virtualization Technologies ORACLE TECHNICAL WHITE PAPER OCTOBER 2015 Table of Contents Introduction 3 Designing a Consolidated Infrastructure 6 Seven Areas of Consideration for Consolidation 6 Security

More information

Technology Insight Series

Technology Insight Series Evaluating Storage Technologies for Virtual Server Environments Russ Fellows June, 2010 Technology Insight Series Evaluator Group Copyright 2010 Evaluator Group, Inc. All rights reserved Executive Summary

More information

The XenServer Product Family:

The XenServer Product Family: The XenServer Product Family: A XenSource TM White Paper Virtualization Choice for Every Server: The Next Generation of Server Virtualization The business case for virtualization is based on an industry-wide

More information

IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures

IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures IaaS Cloud Architectures: Virtualized Data Centers to Federated Cloud Infrastructures Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Introduction

More information

Introduction to the NI Real-Time Hypervisor

Introduction to the NI Real-Time Hypervisor Introduction to the NI Real-Time Hypervisor 1 Agenda 1) NI Real-Time Hypervisor overview 2) Basics of virtualization technology 3) Configuring and using Real-Time Hypervisor systems 4) Performance and

More information

Introducing. Markus Erlacher Technical Solution Professional Microsoft Switzerland

Introducing. Markus Erlacher Technical Solution Professional Microsoft Switzerland Introducing Markus Erlacher Technical Solution Professional Microsoft Switzerland Overarching Release Principles Strong emphasis on hardware, driver and application compatibility Goal to support Windows

More information

Cloud Computing CS 15-319

Cloud Computing CS 15-319 Cloud Computing CS 15-319 Virtualization Case Studies : Xen and VMware Lecture 20 Majd F. Sakr, Mohammad Hammoud and Suhail Rehman 1 Today Last session Resource Virtualization Today s session Virtualization

More information

Windows Server 2008 R2 Hyper V. Public FAQ

Windows Server 2008 R2 Hyper V. Public FAQ Windows Server 2008 R2 Hyper V Public FAQ Contents New Functionality in Windows Server 2008 R2 Hyper V...3 Windows Server 2008 R2 Hyper V Questions...4 Clustering and Live Migration...5 Supported Guests...6

More information

Parallels Virtuozzo Containers

Parallels Virtuozzo Containers Parallels Virtuozzo Containers White Paper Virtual Desktop Infrastructure www.parallels.com Version 1.0 Table of Contents Table of Contents... 2 Enterprise Desktop Computing Challenges... 3 What is Virtual

More information

Scaling Networking Applications to Multiple Cores

Scaling Networking Applications to Multiple Cores Scaling Networking Applications to Multiple Cores Greg Seibert Sr. Technical Marketing Engineer Cavium Networks Challenges with multi-core application performance Amdahl s Law Evaluates application performance

More information

White Paper. Real-time Capabilities for Linux SGI REACT Real-Time for Linux

White Paper. Real-time Capabilities for Linux SGI REACT Real-Time for Linux White Paper Real-time Capabilities for Linux SGI REACT Real-Time for Linux Abstract This white paper describes the real-time capabilities provided by SGI REACT Real-Time for Linux. software. REACT enables

More information

Cloud Computing. Up until now

Cloud Computing. Up until now Cloud Computing Lecture 11 Virtualization 2011-2012 Up until now Introduction. Definition of Cloud Computing Grid Computing Content Distribution Networks Map Reduce Cycle-Sharing 1 Process Virtual Machines

More information

KVM: A Hypervisor for All Seasons. Avi Kivity avi@qumranet.com

KVM: A Hypervisor for All Seasons. Avi Kivity avi@qumranet.com KVM: A Hypervisor for All Seasons Avi Kivity avi@qumranet.com November 2007 Virtualization Simulation of computer system in software Components Processor: register state, instructions, exceptions Memory

More information

From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller

From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller White Paper From Ethernet Ubiquity to Ethernet Convergence: The Emergence of the Converged Network Interface Controller The focus of this paper is on the emergence of the converged network interface controller

More information

COS 318: Operating Systems. Virtual Machine Monitors

COS 318: Operating Systems. Virtual Machine Monitors COS 318: Operating Systems Virtual Machine Monitors Kai Li and Andy Bavier Computer Science Department Princeton University http://www.cs.princeton.edu/courses/archive/fall13/cos318/ Introduction u Have

More information

Inside the Erlang VM

Inside the Erlang VM Rev A Inside the Erlang VM with focus on SMP Prepared by Kenneth Lundin, Ericsson AB Presentation held at Erlang User Conference, Stockholm, November 13, 2008 1 Introduction The history of support for

More information

Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware

Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware Introduction By leveraging the inherent benefits of a virtualization based platform, a Microsoft Exchange Server 2007 deployment on VMware Infrastructure 3 offers a variety of availability and recovery

More information