Adaptive Memory Management Scheme for MMU-Less Embedded Systems


Ioannis Deligiannis and George Kornaros
Informatics Engineering Department, Technological Educational Institute of Crete, Heraklion, Crete, Greece

Abstract: This paper presents a memory allocation scheme that provides efficient dynamic memory allocation and defragmentation for embedded systems lacking a Memory Management Unit (MMU). Using as its main criteria the efficient handling of both external and internal memory fragmentation, as well as the requirements of soft real-time applications on constrained embedded systems, the proposed memory management solution delivers a more precise memory allocation process. The proposed Adaptive Memory Management Scheme (AMM) maintains a balance between performance and efficiency, with the objective of increasing the amount of usable memory in MMU-less embedded systems while keeping timing behavior bounded and acceptable. By maximizing memory utilization, embedded applications can optimize their performance in time-critical tasks and meet the demands of Internet-of-Things (IoT) solutions without suffering memory leaks and unexpected failures. The scheme requires no hardware MMU and few or no manual changes to application software. It is evaluated with encouraging results regarding performance and reliability compared to the default memory allocator: allocation of fixed- and random-size blocks delivers a speedup ranging from 2x to 5x over the standard GLIBC allocator, while the de-allocation process is only 20% slower but leaves memory perfectly (0%) defragmented.

Keywords: dynamic memory management, memory defragmentation, MMU-less embedded systems, real-time applications, Internet-of-Things

I.
INTRODUCTION

The growth of Internet-of-Things (IoT) solutions creates vast new opportunities for developers of embedded systems by providing capabilities that can be added to just about any physical object, including medical devices, household appliances, home automation, industrial controls, even clothing and light bulbs. This collection of billions of end devices, from the tiniest ultra-efficient connected end-nodes to high-performance gateways, creates a continuously growing demand in the embedded systems industry for sophisticated software design that efficiently supports the demanding applications running on IoT devices. IoT platform designers have some difficult choices to make for storing data. They usually have to decide how much memory to include for major SoC functions, whether to add on-chip or off-chip memory, and whether the data programming requirement is one-time, a few times, or many times. These options often seem mutually exclusive, especially when the system does not provide an efficient memory management algorithm. Due to the high-volume and low-price expectations for IoT-enabled systems, cost is a great concern; thus, the use of a real-time or lightweight OS is preferred over adding hardware support such as an MMU [23]. In this context, optimization techniques for efficient and adaptive memory management and data assignment have to be addressed. Dynamic memory management is a critical issue in the development of resource-constrained systems that use real-time operating systems, whether or not the application executing on top is real-time. In these systems, reducing the number of malloc() and free() calls is a typical strategy that can greatly simplify protecting against memory leaks [21]. It can also eliminate the sometimes considerable CPU time associated with making many malloc() and free() calls.
Good performance is achieved only when dynamic allocation patterns can be statically derived, which may not hold across applications, or may be limited by the hardware platform (e.g., when it has limited memory). In sensor network nodes, for instance, the use of dedicated dynamic random access memory (DRAM) integrated circuits is typically avoided due to area and power constraints. Even though dynamic memory management in embedded systems is normally discouraged due to resource constraints, some types of applications inherently require this kind of functionality. Event-Based Systems (EBS) are the method of choice for near-real-time reactive analysis of data streams from sensors in applications such as surveillance, sports, RFID systems, stock trading, etc. [8][22]. Thus, handling and analyzing a-priori unknown patterns in the design of IoT devices requires ad-hoc mechanisms, such as pre-allocated buffers or adaptive, customized memory management algorithms, since objects can appear, change size, and disappear at runtime. In addition, in real-time applications the ability to adjust the system configuration in response to workload changes and application reconfiguration is becoming necessary. In the literature an extensive number of references to this particular

issue exist [1-7], trying to offer customized solutions in the domain of real-time applications, to enable system responses with increased quality of service, or to precisely determine the worst-case execution time of all software components in a system. Hence, in this domain too, a memory management algorithm requires features such as fast response latency and low fragmentation [1]. However, a balance should be kept at runtime between response time and resource allocation service in the presence of memory fragmentation. This is especially important when there is extended use of dynamic allocation and deallocation, which can lead to memory fragmentation and, in turn, unexpected behavior of the whole system. Most developers try to avoid dynamic memory management for this reason, as Puaut et al. [9] highlight, but it is necessary for some applications running on IoT-enabled devices. Nonetheless, when dynamic memory is used in real-time embedded devices it is important that it be deterministic: the time to allocate memory should be predictable, and the memory pool should not become fragmented. In this paper, we introduce an approach to Adaptive Memory Management, hereafter called AMM, which mainly focuses on small embedded systems where memory is limited and there is no MMU hardware support. The following Section II describes the dynamic memory allocation process and the forms of fragmentation that may occur. Section III reviews related work in memory management algorithms. The analysis of the proposed AMM scheme is presented in Section IV, followed by an experimental evaluation in Section V and the limitations in Section VI. Finally, Section VII provides future work and conclusions.

II. DYNAMIC MEMORY ALLOCATION

Dynamic memory allocation is a process that allows a program to distribute its memory space efficiently in situations where unpredictable events need to be stored in memory due to unknown inputs [10].
Using dynamic memory allocation, a running program can request more memory from the system. If there is enough memory available, the system grants the program the right to use the amount it requests. However, as previously mentioned, in some situations multiple allocation and deallocation actions of different sizes lead to critical side effects such as internal and external fragmentation, which can result in unexpected memory failures even when the total available space is sufficiently large to fulfill the request [5, 11]. Both internal and external fragmentation are usually referred to as wasted memory [1, 10]; however, it is worth analyzing the difference between the two. Internal fragmentation arises when memory is allocated only in multiples of a sub-unit. A request of arbitrary size is served by rounding up to the next highest multiple of the sub-unit, leaving a small amount of memory that is allocated but not in use. Internal fragmentation can be decreased by reducing the size of the sub-unit, though such a reduction increases external fragmentation. In contrast, external fragmentation arises when memory is allocated in units of arbitrary size. When a large amount of memory is released, part of it may be used to satisfy a subsequent request, leaving an unused part that is too small to serve any further requests [12, 13]. Fig. 1 illustrates three different fragmentation states that may occur in a system using dynamic allocation. In the first one (a), a best-case scenario is presented where there is no memory fragmentation; in the second use case (b), however, it is easily observed that fragmentation occurs in the heap memory section.
Finally, the last use case (c) presents a situation that leads to memory failure due to unavailable free memory space caused by fragmentation, which is considered a potential issue in modern embedded systems development. However, as Johnstone et al. [14] show, well-designed allocators can efficiently manage fragmentation problems. In this direction, since development kits that rely on a system heap can suffer memory fragmentation with a negative impact on system performance, industry promotes thread-free libraries with an integrated memory manager as a zero-heap solution for IoT devices [20].

Fig 1. System memory in different time instances with diverse fragmentation states (a), (b), (c); as dynamic allocation increases, fragmented memory can cause memory shortage

A. Memory Management Unit

A usual approach for avoiding memory fragmentation in such systems is a system equipped with a memory management unit (MMU) alongside a dynamic memory management algorithm. A memory management unit is a hardware component that handles all memory operations associated with virtual memory handling [7]. The foremost goal of a memory management unit is to provide a convenient abstraction using

virtual addresses to ensure the availability of adequate memory resources. In other words, the MMU is a hardware part that translates a virtual address to a physical address [15, 16]. MMU-equipped embedded systems can remap fragmented memory spaces into one large block or even perform defragmentation operations to merge those pieces into a continuous physical block; therefore, fragmentation problems only cause performance issues [15]. However, on MMU-less embedded systems this issue is complicated by the lack of support for virtual addresses, and hence for remapping [7].

III. RELATED WORK

Researchers have proposed several different approaches to solve the fragmentation problem in MMU-less embedded systems. Each algorithm is classified according to the way it finds a free block of the most appropriate size. As analyzed in the works of Masmano et al. [1], Sun et al. [2] and Wilson [10], these algorithms can be categorized into: Sequential Fit, Segregated Free Lists, Buddy Systems, Indexed Fit and Bitmap Fit. A simple but not always feasible solution is formatting the available system memory into uniform blocks. With this approach, embedded systems face the crucial constraint of serving diverse-sized block requests, which makes it inefficient for complex and demanding applications. The buddy systems approach splits the available memory into two same-sized blocks, respecting the efficiency issue and increasing the overall performance; this method, however, usually leaves fragmented memory after extended allocation and deallocation. In the buddy system, memory is broken down into power-of-two sized, naturally aligned blocks [17]. This approach greatly reduces external fragmentation of memory and helps in allocating bigger contiguous blocks of memory aligned to their size. On the other hand, the buddy allocator suffers increased internal fragmentation of memory and is not suitable for general kernel allocations.
This purpose is better addressed by the slab allocator, which creates different-sized blocks and matches the requested allocation to the closest one. The majority of memory allocation requests in a kernel are for small, frequently used data structures. The basic idea behind the slab allocator is that commonly used objects are pre-allocated in contiguous areas of physical memory called slabs. Whenever an object is to be allocated, the slab allocator returns the first available item from a suitable slab corresponding to the object type. Because the sizes of the requested and allocated block match, the slab allocator significantly reduces internal fragmentation [6]. The advantage of this setup is that during most allocations, no global spinlock needs to be held. CAMA [6], a research follow-up of the slab allocator, achieves better performance by splitting and merging blocks accordingly. Two-Level Segregated Fit (TLSF) [1] seeks to implement a good-fit policy in order to fulfill the most important real-time requirements. The basic segregated fit mechanism uses an array of free lists, with each list holding free blocks within a size class. In order to speed up access to the free blocks and to manage a large set of segregated lists, the array of lists is organized as a two-level array. The first-level array divides free blocks into classes that are a power of two apart (16, 32, 64, 128, etc.), and the second level sub-divides each first-level class linearly, where the number of divisions is a user-configurable parameter. Each array of lists has an associated bitmap used to mark which lists are empty and which contain free blocks. The MMU-Less Defragmentable Allocation Schema [7] achieves significantly better memory consistency by requiring that each block be movable to avoid fragmentation. This approach attempts to mimic hardware MMU operations by using a different allocation method.
By using the address of the pointer as extra information in the descriptor of each block, the MMU-Less Defragmentable Allocation Schema can move allocated blocks and update the referring pointer to the new address. However, this approach does not provide all the characteristics necessary to support more complex memory structures such as linked lists, which makes it unusable for many real-world applications. In addition to those approaches, many application programs include extra memory management code called a sub-allocator. A sub-allocator is an allocator functioning on top of another allocator. It usually obtains large blocks of memory from the system memory manager and allocates the memory to the application in smaller pieces. Sub-allocators are usually written to avoid the general inefficiency of the system's memory manager, or to take advantage of knowledge of the application's memory requirements that could not be expressed to the system memory manager. However, sub-allocators are less efficient than a single memory manager that is well written and has an adjustable interface. All the previously mentioned methodologies indeed decrease memory fragmentation; however, they are bounded by the type of application and system that will use them. In general, memory fragmentation without a hardware MMU component is almost impossible to avoid, and usually a memory management scheme is developed according to the demands of the application that will use it. We should clarify that the proposed memory management scheme does not try to mimic a hardware MMU; rather, the goal is to provide a fast and stable allocation algorithm, enhanced with a defragmentation process, for MMU-less devices without the need for virtual addressing. IV.
ADAPTIVE MEMORY MANAGEMENT

As outlined before, there is no single memory management scheme suitable for all application domains; algorithms for dynamic memory management can help, but only for specific usage patterns. Real-time applications are quite different from conventional applications, and usually each application requires a different memory management approach to achieve both optimal performance and stability. The objective of the proposed scheme is to minimize memory fragmentation without

sacrificing the overall speed of the system, while at the same time supporting the requirements of applications from different domains. To meet those requirements, the proposed memory management algorithm combines a modified TLSF methodology [1, 2] with the ability to move memory blocks [7]. AMM has been developed according to the following principles. First of all, the smallest memory block the system can support, including the block's meta-data, is 16 bytes, as in previous research works [1, 2, 7, 18]. Given the limited memory available in embedded systems, it is not efficient to store extra information or to leave memory blocks unused for long periods. In the proposed memory management scheme, free fragmented memory blocks are removed immediately from the allocated space by moving them to the border between allocated and free space. This approach is only valid in embedded systems where the available memory is limited: the difference between the time required to efficiently store a free block in a list and dispatch it, versus the time to move some occupied blocks, is almost insignificant. However, to obtain maximum performance, the decision algorithm responsible for these actions has to adapt in order to eliminate unnecessary memory block movements. Usually, adaptation algorithms require a prior calibration process for the needed parameters; for demonstration purposes, in this work the parameters are fixed values as defined in the algorithm. In this scope, all new allocation requests are served as soon as possible. For each memory block request, if there is available space, the request is served by inserting the allocated data size and a pointer to the previous block (this information is stored as meta-data) and returning a pointer to the corresponding memory address.
The advantage of this process is that it takes identical, constant time; as shown in the next section, in Fig. 3, allocation is constant and equal to three comparisons to successfully allocate a block. Essentially, this enables the use of dynamic allocation in time-critical operations. At the same time, every method of the proposed memory management scheme always checks whether defragmentation is needed in order to provide memory resiliency. In addition, the defragmentation process can be adaptive, keeping a balance between system resources and the memory fragmentation level [2, 3, 10]. The algorithm can be configured and adapted in two ways: first, to control the margin between pointers and data in cases of high-frequency dynamic allocation, since this reduces the need for frequent re-organizations; second, to control the percentage of the data section, at either the front or the rear, where defragmentation of blocks is performed. For that reason, defragmentation is a series of small steps that are inexpensive in terms of time.

A. Adaptive Memory Management Structure

The memory data structure is a crucial part of the proposed scheme. In contrast to other memory management approaches, our memory manager uses a different structure for how meta-data and blocks are stored in memory. Fig. 2 outlines the memory data structure organization. The Pointers Section and the Data Section are the main sections, separated by a small amount of free memory space, the Margin. The Pointers Section stores the addresses of the pointers associated with an allocated data block; in principle, an allocated data block can be referenced by many different variables. These stored pointer addresses are updated when a block movement action is in progress.
The system can in fact save multiple pointers for each block in order to support more complex data structures in a user application than simple allocated blocks, such as linked lists, pointers to pointers, etc.

Fig 2. Organization of the memory structure: the Pointers Section (pointer blocks), the Margin, the Data Section (per-block header meta-data and data payload), and free memory

The Data Section is the memory space where the actual data blocks are stored. By providing almost the same allocation/free API as the default memory management scheme of LibC [19], developers are able to transition to the new memory management scheme easily. Moreover, the margin between those two sections maintains an amount of free memory space for registering new pointer blocks. The optimal size of this margin can be configured differently for each application, since it affects the position of the Data Section and the number of operations for each allocation. If the configuration of the margin is not optimal, the system will try to adapt by moving some data blocks from the beginning of the data section to the rear. Each allocated block has two main parts: the meta-data part, also known as the block header, and the data, named hereby the payload. To manage the memory blocks successfully, the Adaptive Memory Management Scheme adds some useful information to the meta-data of each block. The header contains a pointer to the previous physically occupied block and the size of the contained data. Each block size is always a multiple of four bytes to match word-length alignment and increase system performance. Using the previous structure definitions, the proposed Adaptive Memory Management scheme provides the capability of supporting more forms of objects than simple allocations, such

as linked lists or even blocks of pointers. However, as clearly mentioned, this allocation algorithm is optimized for embedded systems with limited memory space.

B. Available Methods

To maintain the size of the margin at 32 bytes, the memory management scheme can add a new block either at the front or at the end of the Data Section. The Memory Management API provides two public functions for allocation (mem_alloc, mem_alloc_p), one public function for deallocation (mem_free), two public functions for pointers (mem_add_pointer, mem_remove_pointer), and three private functions for the management of the pointers section. The following table provides the definitions of those methods:

TABLE I - DEFINITION OF AMM METHODS

void * mem_alloc(size_t) : Allocation of a data block of the given size
void * mem_alloc_p(void *, size_t) : Combines mem_alloc and mem_add_pointer
void mem_free(void *) : Deallocation of the requested block and defragmentation
void mem_add_pointer(void *) : Registers the address of the requested pointer
void mem_remove_pointer(void *) : De-registers the address of the requested pointer

To create a request for a new allocation, the user calls mem_alloc. This method takes as argument the size of the requested block and returns the address of the newly allocated space. However, the use of this method is not recommended if the address of the pointer being allocated is not registered in the Pointers Section; this function call is intended for cases where the pointer is already registered, in order to avoid extra comparisons. An example would be a linked list used as a stack, where the head of the list should always point to the newly inserted block: in that case the head pointer is already registered. For pointer address registration, developers call the mem_add_pointer function, passing as parameter the address of the pointer.
This function does not allocate any data block; it is responsible only for registering the pointer in the Pointers Section. For simplicity, the function mem_alloc_p combines both mem_alloc and mem_add_pointer; this general-purpose function is recommended whenever the program does not know if the pointer is already registered. Finally, the methods mem_free and mem_remove_pointer are responsible for removing data blocks and pointer blocks. When a user calls mem_free, the system always removes the associated pointers of the block; however, in some cases developers might want to assign or disassociate a pointer without clearing the allocated space.

C. Algorithm

This section details the algorithmic steps of the allocation process, a minimal but efficient solution for memory allocation. Fig. 3 depicts the sequence of operations to serve an allocation request.

Fig 3. Memory allocation algorithm: on a new allocation request of a given size, if a fragmented block exists whose size equals the requested size, the fragmented block is used; otherwise, if there is enough Margin space, the new data block is created at the front of the Data Section; otherwise, if there is enough free space, it is created at the end of the Data Section; on success the address of the data block is returned, on failure NULL is returned

Step 1: At the beginning, the allocation process checks whether the defragmentation process is active. Step 2: If a fragmented block is found with sufficient memory space to hold the requested block, the allocation process deactivates the defragmentation process and returns the address of the newly created pointer; otherwise, it continues to the next step. Step 3: If the margin between the Pointers Section and the Data Section is large enough to fit the requested block with a minimum of 32 bytes of free space left, the allocator occupies the free space at the beginning of the Data Section.
Otherwise, it goes to the end of the Data Section and checks whether there is sufficient space to allocate. The returned pointer is NULL in case of failure, or the address of the new block in case of success. Unlike the allocation processes mentioned in Section III, the Adaptive Memory Management process keeps the allocation algorithm as simple as possible to balance speed and efficiency. The deallocation process destroys an allocated memory block through a pointer referencing it and activates the defragmentation process, as explained in the following sections. Fig. 4 depicts the algorithm.

Separating the process of defragmentation from the process of deallocation gives us the ability to use this memory management approach in both bare-metal and multitasking applications supported by real-time operating systems. It should be mentioned that in small embedded systems using ARM Cortex-M series or similar microcontrollers, the heap memory sector is visible (shared) to every task.

Fig 4. Memory de-allocation algorithm: on a deallocation request, if there is no other fragmented block and the freed block is near the first or last 10% of the data section, any inner pointers are removed from the pointers section and the remaining blocks are shifted left or right; otherwise a same-sized block near the front or rear of the data section is found and swapped in, the block nearest to the borders is selected between the fragmented block and the new one, the remaining blocks are shifted, and the other block is saved as fragmented

The defragmentation process uses an optimal workflow for achieving the best utilization of the system. It is divided into small steps that can be carried out at any time during the system's uptime; however, the number of actions executed depends on the fragmentation and memory availability of the overall system. This approach complies with the requirements of soft real-time embedded systems. The algorithm used in defragmentation is described below: If the fragmented block lies in the first 10% of the data section, the blocks between the fragmented one and the beginning of the data section are immediately shifted (moved) left; the same principle applies, reversed, if the block is in the last 10%. If neither rule applies, the fragmented block is swapped either with the last best-fit block found in the last 10% of the data section, or with the first best-fit block found in the front 10% of the data section, and the first step of the algorithm is executed again. As before, if no fitting block exists, the fragmented block is treated as if it were already in the first or last 10% of the data section. Fig. 5 presents an example of the defragmentation process in action. Each block represents a data block with an actual size (meta-data and payload). The defragmentation process begins by applying the rules mentioned above. As long as the block is in the first 40% of the data section, the system swaps it with the first best-fit block at the beginning of the section. Afterwards, the system realigns (pushes) the remaining blocks to remove the fragmented one and updates the respective pointers in the pointers section.

Fig 5. Memory operations during a defragmentation operation: the 6th memory block (black) is cleared, the data of the first equally-sized block (3rd) is moved into the freed space, and the rest of the data blocks are shifted 16 bytes right

To provide optimal decision rules for each system, a calibration process should be run when the system starts. This is a simple but effective defragmentation process that ensures there is no external fragmentation in the system.

V. EXPERIMENTAL EVALUATION

In this section, we present experiments that showcase the efficiency and stability of the proposed memory management scheme. To this end, we compare the default memory allocation, without defragmentation, with the proposed adaptive memory management scheme. Since real-time application requirements are quite demanding, the obtained results have to comply with them. We implemented AMM on an ARM Cortex-M4 (MK20DX256) microcontroller. Time measurements are expressed in CPU cycles using the provided functionality of the

microcontroller for counting. The first experiment uses same-size blocks for allocation and free: we randomly allocate and deallocate memory space for one thousand blocks. The second experiment uses the same methodology but with random-size blocks ranging from four to twenty bytes. Both experiments mimic resource-hungry applications, particularly applications that require many dynamic allocations and frees. The number of blocks and their overall size equal 65-70% of the system's memory. The following tables present the results of the allocation and free processes for each memory management algorithm, as well as the fragmentation level.

TABLE II - LATENCY OVERHEAD IN CLOCK CYCLES

Memory Allocation | Standard Allocation (Best / Worst / Mean) | AMM (Best / Worst / Mean)
Experiment 1 | |
Experiment 2 | |
Memory Deallocation | Standard Allocation (Best / Worst / Mean) | AMM (Best / Worst / Mean)
Experiment 1 | |
Experiment 2 | |

TABLE III - FRAGMENTATION LEVEL

External Fragmentation | Standard Allocation | AMM
Experiment 1 | 12% | 0%
Experiment 2 | 37% | 0%
Internal Fragmentation | Standard Allocation | AMM
Experiment 1 | 0% | 0%
Experiment 2 | 8% | 0%

The main difference between the two allocation methodologies is the worst case of the allocation process. When fragmented blocks exist in the memory space (Table III), the standard allocation methodology produces a highly degraded response time (Table II), in contrast to the adaptive memory management scheme, which has a stable, predictable elapsed time. It is worth noting that the observed difference in deallocation CPU cycles between the standard allocator and the Adaptive Memory Management scheme is generated by the defragmentation process.

VI. LIMITATIONS

The proposed memory management scheme is optimized for a wide range of uses. However, to be utilized in a safe and optimal environment, some modifications are essential. This approach has downsides because of the memory movement actions that need to be performed.
As stated, the overall purpose is to support a safe dynamic allocation method in MMU-less embedded systems with limited memory space without undermining the performance of time-critical operations. The demonstration does not comply with any security rule about memory isolation, which could lead to undesired information exposure. The adaptive memory management scheme is optimized for use in low-cost, power-efficient devices with a small amount of available memory; however, it could also be used as a sub-allocator for general-purpose operating systems such as Unix/Linux. Finally, according to statistics regarding average memory-access latency, the proposed technique costs three to fourteen CPU cycles, while a memory copy using the default GLIBC memmove [19] method (complexity O(n)) achieves a transfer rate of around 500 Mbps (megabits per second) on an ARM Cortex M4 microprocessor. Using the DMA data transfer method (800 Mbps), the average time could be lower, but this presupposes that the hardware supports the feature.

VII. CONCLUSION AND FUTURE WORK

A key issue in MMU-less embedded systems is the limited memory available for dynamic allocation, which can incur increased levels of fragmentation and, as a consequence, an unstable system. All the memory management schemes known to us try to solve some aspects of these problems but, under extended use of dynamic allocation, usually fail to maintain a low fragmentation level. The present study demonstrates an adaptive memory management scheme for MMU-less embedded systems, which offers a dynamic allocation implementation without performance or fragmentation limitations. By providing the capability of moving blocks, external fragmentation is entirely avoided. The proposed approach is designed to support small, low-cost, power-efficient SoCs.
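The transfer-rate figures quoted in the limitations section translate directly into per-step copy costs. As a quick back-of-the-envelope check (taking the 500 Mbit/s memmove and 800 Mbit/s DMA rates above as given, not re-measured here):

```python
# Rough copy-cost estimate from the rates quoted in the limitations
# section (assumed figures, not measured in this sketch).
def move_time_us(n_bytes, rate_mbit_per_s):
    # bytes -> bits, divide by rate in bit/s, convert seconds -> microseconds
    return n_bytes * 8 / (rate_mbit_per_s * 1e6) * 1e6

# Shifting 4 KiB of heap data during one defragmentation step:
cpu_copy = move_time_us(4096, 500)   # memmove at 500 Mbit/s -> ~65.5 us
dma_copy = move_time_us(4096, 800)   # DMA at 800 Mbit/s -> ~41.0 us
```

This kind of estimate is what bounds the "small steps" of the defragmentation workflow: the amount of data moved per step determines the worst-case pause a soft real-time task can observe.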
This memory management scheme can be used in IoT-enabled devices such as sensor networks, in order to provide stable and reliable functionality. However, additional studies are required to investigate a wider range of devices, measuring both their performance and their overall stability, in order to arrive at a universal memory management scheme for different memory sizes and devices, as well as improvements to both the adaptation and defragmentation processes.

ACKNOWLEDGEMENTS

The research leading to these results has received funding from the European Union (EU) FP7 project SAVE under contract FP7-ICT , and the Horizon 2020 project TAPPS (Trusted Apps for open CPSs) under RIA grant

REFERENCES

[1] M. Masmano, I. Ripoll, A. Crespo, and J. Real, "TLSF: A new dynamic memory allocator for real-time systems," in Proc. 16th Euromicro Conference on Real-Time Systems (ECRTS), 2004.
[2] X. Sun, J. Wang, and X. Chen, "An improvement of TLSF algorithm," in IEEE-NPSS Real-Time Conference, 2007.
[3] M. Masmano, I. Ripoll, and A. Crespo, "Dynamic storage allocation for real-time embedded systems," in Proc. Real-Time Systems Symposium WIP.
[4] A. Crespo, I. Ripoll, and M. Masmano, "Dynamic memory management for embedded real-time systems," in From Model-Driven Design to Resource Management for Distributed Embedded Systems, Springer, 2006.
[5] T. Kani, "Dynamic memory allocation," U.S. Patent Application 14/396,383.
[6] J. Bonwick, "The Slab Allocator: An Object-Caching Kernel Memory Allocator," in USENIX Summer.
[7] Y.-H. Yu, J.-Z. Wang, and T.-Y. Sun, "A Novel Defragmentable Memory Allocating Schema for MMU-Less Embedded System," in Advances in Intelligent Systems and Applications, Volume 2, vol. 21, J.-S. Pan, C.-N. Yang, and C.-C. Lin, Eds. Springer Berlin Heidelberg, 2013.
[8] M. Stonebraker, U. Çetintemel, and S. Zdonik, "The 8 requirements of real-time stream processing," ACM SIGMOD Record, vol. 34.
[9] I. Puaut, "Real-time performance of dynamic memory allocation algorithms," in Proc. 14th Euromicro Conference on Real-Time Systems, 2002.
[10] P. R. Wilson, M. S. Johnstone, M. Neely, and D. Boles, "Dynamic storage allocation: A survey and critical review," in Memory Management, Springer, 1995.
[11] K. Wang, "Memory Management," in Design and Implementation of the MTX Operating System, Springer, 2015.
[12] H. J. Boehm and P. F. Dubois, "Dynamic memory allocation and garbage collection," Computers in Physics, vol. 9.
[13] W. E. Croft and A. Henderson, "Eliminating memory fragmentation and garbage collection from the process of managing dynamically allocated memory," Google Patents.
[14] M. S. Johnstone and P. R. Wilson, "The memory fragmentation problem: solved?," in ACM SIGPLAN Notices, 1998.
[15] D.-B. Koh, "Memory management unit with address translation function," Google Patents.
[16] J. E. Zolnowsky, C. L. Whittington, and W. M. Keshlear, "Memory management unit," Google Patents.
[17] G. S. Brodal, E. D. Demaine, and J. I. Munro, "Fast allocation and deallocation with an improved buddy system," Acta Informatica, vol. 41.
[18] G. Barootkoob, M. Sharifi, E. M. Khaneghah, and S. L. Mirtaheri, "Parameters affecting the functionality of memory allocators," in Proc. 3rd IEEE International Conference on Communication Software and Networks (ICCSN), 2011.
[19] S. Loosemore, U. Drepper, R. M. Stallman, A. Oram, and R. McGrath, The GNU C Library Reference Manual. Free Software Foundation.
[20] Synopsys Inc., "Synopsys and Cypherbridge Accelerate TLS Record Processing for IoT Communication with Optimized Hardware/Software Security Solution," Embedded World 2016, Nuremberg, Germany, February 2016.
[21] J. Light, "Embedded Programming for IoT," in Embedded Linux Conference & OpenIoT Summit, April.
[22] S. Wasserkrug, A. Gal, O. Etzion, and Y. Turchin, "Efficient processing of uncertain events in rule-based systems," IEEE Trans. on Knowledge and Data Engineering, vol. 24, no. 1.
[23] ARM, mbed IoT Device Platform, internet-of-things-solutions/mbed-iot-device-platform.php


More information

The Fastest Way to Parallel Programming for Multicore, Clusters, Supercomputers and the Cloud.

The Fastest Way to Parallel Programming for Multicore, Clusters, Supercomputers and the Cloud. White Paper 021313-3 Page 1 : A Software Framework for Parallel Programming* The Fastest Way to Parallel Programming for Multicore, Clusters, Supercomputers and the Cloud. ABSTRACT Programming for Multicore,

More information

Record Storage and Primary File Organization

Record Storage and Primary File Organization Record Storage and Primary File Organization 1 C H A P T E R 4 Contents Introduction Secondary Storage Devices Buffering of Blocks Placing File Records on Disk Operations on Files Files of Unordered Records

More information

Reducing Configuration Complexity with Next Gen IoT Networks

Reducing Configuration Complexity with Next Gen IoT Networks Reducing Configuration Complexity with Next Gen IoT Networks Orama Inc. November, 2015 1 Network Lighting Controls Low Penetration - Why? Commissioning is very time-consuming & expensive Network configuration

More information

Free-Space Management

Free-Space Management 17 Free-Space Management In this chapter, we take a small detour from our discussion of virtualizing memory to discuss a fundamental aspect of any memory management system, whether it be a malloc library

More information

Energy-aware Memory Management through Database Buffer Control

Energy-aware Memory Management through Database Buffer Control Energy-aware Memory Management through Database Buffer Control Chang S. Bae, Tayeb Jamel Northwestern Univ. Intel Corporation Presented by Chang S. Bae Goal and motivation Energy-aware memory management

More information

Segmentation and Fragmentation

Segmentation and Fragmentation Segmentation and Fragmentation Operating System Design MOSIG 1 Instructor: Arnaud Legrand Class Assistants: Benjamin Negrevergne, Sascha Hunold September 16, 2010 A. Legrand Segmentation and Fragmentation

More information

Database Design for Performance Data in Integrated Network Management System

Database Design for Performance Data in Integrated Network Management System Database Design for Performance Data in Integrated Network Management System YeonJoo Na, IlSoo Ahn Network Management Solutions Lab. Telecommunication Systems Division Samsung Electronics Co., Korea e-mail:

More information

The assignment of chunk size according to the target data characteristics in deduplication backup system

The assignment of chunk size according to the target data characteristics in deduplication backup system The assignment of chunk size according to the target data characteristics in deduplication backup system Mikito Ogata Norihisa Komoda Hitachi Information and Telecommunication Engineering, Ltd. 781 Sakai,

More information

The big data revolution

The big data revolution The big data revolution Friso van Vollenhoven (Xebia) Enterprise NoSQL Recently, there has been a lot of buzz about the NoSQL movement, a collection of related technologies mostly concerned with storing

More information

Design and Implementation of the Heterogeneous Multikernel Operating System

Design and Implementation of the Heterogeneous Multikernel Operating System 223 Design and Implementation of the Heterogeneous Multikernel Operating System Yauhen KLIMIANKOU Department of Computer Systems and Networks, Belarusian State University of Informatics and Radioelectronics,

More information

NetBeans Profiler is an

NetBeans Profiler is an NetBeans Profiler Exploring the NetBeans Profiler From Installation to a Practical Profiling Example* Gregg Sporar* NetBeans Profiler is an optional feature of the NetBeans IDE. It is a powerful tool that

More information

www.dotnetsparkles.wordpress.com

www.dotnetsparkles.wordpress.com Database Design Considerations Designing a database requires an understanding of both the business functions you want to model and the database concepts and features used to represent those business functions.

More information

The Re-emergence of Data Capture Technology

The Re-emergence of Data Capture Technology The Re-emergence of Data Capture Technology Understanding Today s Digital Capture Solutions Digital capture is a key enabling technology in a business world striving to balance the shifting advantages

More information

Dynamic resource management for energy saving in the cloud computing environment

Dynamic resource management for energy saving in the cloud computing environment Dynamic resource management for energy saving in the cloud computing environment Liang-Teh Lee, Kang-Yuan Liu, and Hui-Yang Huang Department of Computer Science and Engineering, Tatung University, Taiwan

More information

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging In some markets and scenarios where competitive advantage is all about speed, speed is measured in micro- and even nano-seconds.

More information

OPERATING SYSTEM - MEMORY MANAGEMENT

OPERATING SYSTEM - MEMORY MANAGEMENT OPERATING SYSTEM - MEMORY MANAGEMENT http://www.tutorialspoint.com/operating_system/os_memory_management.htm Copyright tutorialspoint.com Memory management is the functionality of an operating system which

More information

DURING a project s lifecycle many factors can change at

DURING a project s lifecycle many factors can change at Proceedings of the 2014 Federated Conference on Computer Science and Information Systems pp. 1191 1196 DOI: 10.15439/2014F426 ACSIS, Vol. 2 Algorithms for Automating Task Delegation in Project Management

More information

Filesystems Performance in GNU/Linux Multi-Disk Data Storage

Filesystems Performance in GNU/Linux Multi-Disk Data Storage JOURNAL OF APPLIED COMPUTER SCIENCE Vol. 22 No. 2 (2014), pp. 65-80 Filesystems Performance in GNU/Linux Multi-Disk Data Storage Mateusz Smoliński 1 1 Lodz University of Technology Faculty of Technical

More information

Research and Design of Universal and Open Software Development Platform for Digital Home

Research and Design of Universal and Open Software Development Platform for Digital Home Research and Design of Universal and Open Software Development Platform for Digital Home CaiFeng Cao School of Computer Wuyi University, Jiangmen 529020, China cfcao@126.com Abstract. With the development

More information

The Classical Architecture. Storage 1 / 36

The Classical Architecture. Storage 1 / 36 1 / 36 The Problem Application Data? Filesystem Logical Drive Physical Drive 2 / 36 Requirements There are different classes of requirements: Data Independence application is shielded from physical storage

More information

1 Storage Devices Summary

1 Storage Devices Summary Chapter 1 Storage Devices Summary Dependability is vital Suitable measures Latency how long to the first bit arrives Bandwidth/throughput how fast does stuff come through after the latency period Obvious

More information