A Compacting Real-Time Memory Management System


A Compacting Real-Time Memory Management System

Master's thesis (Magisterarbeit) submitted in partial fulfillment of the requirements for the academic degree Diplom-Ingenieur in Applied Computer Science. Written at the Department of Computer Sciences and Systems Analysis, Faculty of Natural Sciences, Paris-Lodron-Universität Salzburg. Submitted by Bakk. techn. Hannes E. Payer. Supervised by Univ. Prof. Dr. Ing. Dipl. Inform. Christoph Kirsch. Salzburg, September 2007

Acknowledgments

First of all, I would like to thank my thesis advisor Prof. Christoph Kirsch, who supported me excellently in every phase of my thesis. Many thanks for the outstanding working environment, all the exciting discussions, and the long, fruitful whiteboard sessions. Very special thanks go to my parents Eduard and Gertraud Payer, who have supported me in my plans and wishes throughout my life. Without them, completing my studies so smoothly would not have been possible. Thank you for being the best parents in the world. Many thanks to my grandparents Josef and Margarete Eder, as well as to all other family members, above all Josef and Helga Eder. I thank my girlfriend Verena Kreilinger for the motivation and strength with which she sweetens my life. It is wonderful to have such a person at one's side. Thank you for always being there for me. Thanks to the entire Computational Systems Group - in particular to the Eurosys paper crew: Silviu Craciunas, Ana Sokolova, Horst Stadler, and Robert Staudinger - for a great time, inspiring ideas, and their help in the preparation of my thesis. I thank Andreas Löcker for the great teamwork over all these years. Further thanks go to all friends, acquaintances, and fellow students who have influenced my life.

Abstract

We introduce Compact-fit (CF), a compacting real-time memory management system for allocating, deallocating, and dereferencing memory objects, which keeps memory compact at all times. CF comes with a moving and a non-moving implementation. In the moving implementation, allocation takes constant time and deallocation takes linear time, dropping to constant time if no compaction is necessary. In the non-moving implementation, allocation and deallocation take time linear in the size of the request. Dereferencing takes constant time in both implementations. Moreover, the system provides fully predictable memory in the sense of bounded fragmentation. In short, it is a truly real-time memory management system. We compare the moving and the non-moving CF implementations with established memory allocators, all of which fail to satisfy the memory predictability requirement. The experiments confirm our theoretical complexity bounds and demonstrate competitive performance. Furthermore, we introduce a partial compaction strategy, which allows us to control the performance versus fragmentation trade-off.

Contents

1 Introduction
   Outline of the thesis
   Contributions
2 Real-time memory management systems
   Memory management basics
   Real-time memory management requirements
   Explicit dynamic memory management systems
      Sequential fit
      Doug Lea's allocator
      Half-fit
      Two-level segregated fit
      Algorithms Complexity
   Implicit dynamic memory management systems
      Treadmill
      Metronome
      Jamaica
   Summary
3 Compact-fit (CF)
   Compaction
   Abstract and concrete address space
   CF API
   Size-classes concept
   Types of fragmentation
      Page-block-internal fragmentation
      Page-internal fragmentation
      Size-external fragmentation
      Fragmentation overview
   Summary
4 CF system
   Compaction
      The compaction algorithm
      Complexity
   The free-list concept
   Page management internals
      Size-class list
      Size-class reference
      Number of used page-blocks
      Free page-blocks
      Used page-blocks
   Memory overhead
   Moving implementation
      Concept
      Allocation
      Deallocation
      Dereferencing
   Non-moving implementation
      Concept
      Allocation
      Deallocation
      Dereferencing
   Total memory overhead
   Partial compaction
      Allocation
      Deallocation
   Pointer arithmetic
   Initialization
   Dynamic abstract address space
      Moving implementation
      Non-moving implementation
   Arraylets
   Summary
5 Experiments
   Test environment
      Execution time
      Processor instructions
   Results
      Moving vs. non-moving implementation benchmark
      Incremental benchmark
      Rate-monotonic scheduling benchmark
      Fragmentation
   Summary
6 Conclusion
   CF usage guideline
   Future work
A Appendix

List of Figures

3.1 Memory states
Abstract address and pointer mapping
Memory object dependencies
Fragmented concrete address space
Compacted concrete address space
Bounded internal fragmentation p = 1/
Size-classes and different types of fragmentation
Arbitrary fragmented pages of a size-class
The green marked memory object becomes deallocated
The size-class after applying Rule
The green marked memory object becomes deallocated
The size-class after applying Rule
Page Layout
Used page-block list and free page-block list (next-page-block mode)
Used page-block list and free page-block list (free-list mode)
Two-dimensional bitmap (16 × 32)
Explicit reference of a page-block to an abstract address
Memory layout of the non-moving implementation
5.1 Allocation instructions benchmark
5.2 Allocation clock ticks benchmark
5.3 Deallocation & compaction instructions benchmark
5.4 Deallocation & compaction clock ticks benchmark
5.5 Deallocation partial compaction instructions benchmark
5.6 Deallocation partial compaction clock ticks benchmark
5.7 Incremental allocation instructions benchmark
5.8 Incremental allocation clock ticks benchmark
5.9 Incremental deallocation & compaction instructions benchmark
5.10 Incremental deallocation & compaction clock ticks benchmark
5.11 Incremental deallocation partial compaction instructions benchmark
5.12 Incremental deallocation partial compaction clock ticks benchmark
5.13 Rate-monotonic allocation instructions benchmark
5.14 Rate-monotonic allocation clock ticks benchmark
5.15 Rate-monotonic deallocation instructions benchmark
5.16 Rate-monotonic deallocation clock ticks benchmark
5.17 Rate-monotonic deallocation partial compaction instructions benchmark
5.18 Rate-monotonic deallocation partial compaction clock ticks benchmark
5.19 Fragmentation test
5.20 Fragmentation test
5.21 Fragmentation test

List of Tables

2.1 Allocator Complexity
Administrative Memory Overhead
CFM allocation benchmark results
CFM deallocation & partial compaction benchmark results
CFM incremental allocation benchmark results
CFM incremental deallocation & partial compaction benchmark results
Rate-monotonic scheduling tasks
CFM rate-monotonic allocation benchmark results
CFM rate-monotonic deallocation & partial compaction benchmark results

1 Introduction

In the beginning, the universe was created. This made a lot of people very angry, and has been widely regarded as a bad idea.
Douglas Adams

This thesis introduces a new real-time memory management system called Compact-fit (CF). It is a compacting real-time memory management system for allocating, deallocating, and dereferencing memory objects. CF comes with a moving and a non-moving implementation. Memory fragmentation in CF is bounded by a compile-time constant and is reduced by performing compaction operations. Since CF uses an abstract address space (a pointer indirection) to reference memory objects, the reference updates caused by compaction operations are bounded: only the indirection pointer has to be updated. In the moving implementation, allocation takes constant time and deallocation takes linear time, dropping to constant time if no compaction is necessary. In the non-moving implementation, allocation and deallocation take time linear in the memory object size. Dereferencing takes constant time: one indirection in the moving implementation and two indirections in the non-moving implementation. Furthermore, we introduce a new pointer concept: a pointer in the CF model is an address (an abstract address) plus an offset. The CF model therefore supports offset-based rather than address-based pointer arithmetic. Note that, in principle, the moving implementation may also support address-based pointer arithmetic, since each memory object is allocated in a single, physically contiguous piece of memory. In the CF model the compaction operations are bounded: compaction may only happen upon freeing a memory object, and involves moving a single memory object of similar size. Memory in the CF model is partitioned into 16KB pages. Each page is an instance of a so-called size-class, which partitions the page further into same-sized page-blocks. We adapt the concept of pages and size-classes from [3].
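Each allocation request is served from the smallest size-class whose page-blocks fit it (detailed below). As a minimal sketch, assuming a hypothetical geometric table of page-block sizes (CF's actual size-class configuration is defined later in the thesis):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical size-class table: page-block sizes in bytes, up to the
 * 16KB page size mentioned in the text. The real CF classes may differ. */
static const size_t page_block_sizes[] =
    { 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384 };
#define NUM_SIZE_CLASSES \
    (sizeof(page_block_sizes) / sizeof(page_block_sizes[0]))

/* Return the index of the smallest size-class whose page-blocks still
 * fit the allocation request, or -1 if the request exceeds one page. */
int size_class_index(size_t request) {
    for (size_t i = 0; i < NUM_SIZE_CLASSES; i++)
        if (request <= page_block_sizes[i])
            return (int)i;
    return -1; /* objects larger than 16KB are not supported */
}
```

The wasted space at the end of a page-block (e.g. a 33-byte object in a 64-byte block) is exactly the page-block-internal fragmentation discussed in Chapter 3.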
A memory object is always allocated in a page of the smallest size-class whose page-blocks still fit the allocation request. Memory objects larger than 16KB are currently not supported, but we present a suggestion for how large memory objects can be handled. The key idea in the CF model is to keep the memory size-class-compact at all times. In other words, at most one page of each size-class may be not-full at any time, while all

other pages of the size-class are always kept full. Whenever the freeing of a memory object leads to two pages in a size-class that are not full, a memory object of the not-full page is moved to take the place of the freed memory object, thus maintaining the invariant. If the not-full page becomes empty, it can be reused in any size-class: it is moved to the pool of free pages. Using a simple free-list concept, free space can be found in constant time upon an allocation request. The moving implementation of the CF model maps page-blocks directly to physically contiguous pieces of memory, and therefore requires moving whole memory objects for compaction. Allocation takes constant time in the moving implementation, whereas deallocation takes linear time if compaction occurs. The non-moving implementation uses a block table (a virtual memory) to map page-blocks to physical block-frames that can be located anywhere in memory. In this case, compaction merely requires re-programming the block table rather than moving memory objects, which makes compaction faster. However, although compaction is faster, deallocation still takes time linear in the size of the object due to the block table administration. For the same reason, allocation also takes linear time in the non-moving implementation. In both implementations we can relax the always-compact requirement and allow more than one not-full page per size-class. As a result, deallocation takes less time: it goes down to constant time. This way we formalize, control, and implement the trade-off between timing performance and memory fragmentation. This concept is called partial compaction.
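The size-class invariant just described can be sketched with a toy model that tracks only the used-block count per page; the real page and free-list data structures (and the distinction between moving objects and re-programming the block table) are described in Chapter 4:

```c
#include <assert.h>

/* Toy model of one size-class: at most one not-full page at any time.
 * Only block counts are tracked; no actual objects are moved here. */
#define BLOCKS_PER_PAGE 4
#define MAX_PAGES 8

typedef struct { int used; } page_t;   /* used page-blocks in this page */

typedef struct {
    page_t pages[MAX_PAGES];
    int num_pages;
    int not_full;   /* index of the single not-full page, or -1 */
} size_class_t;

/* Free one block from page p, restoring the compactness invariant. */
void cf_free_block(size_class_t *sc, int p) {
    sc->pages[p].used--;
    if (sc->not_full == -1 || sc->not_full == p) {
        /* p is (or becomes) the unique not-full page; an empty page
         * would return to the global pool of free pages. */
        sc->not_full = (sc->pages[p].used == 0) ? -1 : p;
        return;
    }
    /* Two not-full pages: move one object from the existing not-full
     * page into the hole in p, so that p stays full. */
    int q = sc->not_full;
    sc->pages[q].used--;
    sc->pages[p].used++;
    sc->not_full = (sc->pages[q].used == 0) ? -1 : q;
}
```

Partial compaction corresponds to relaxing the `not_full` field into a bounded set of not-full pages, skipping the move as long as the bound is not exceeded.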
We present the results of benchmarking both implementations on a lightweight HAL running on bare-metal Gumstix hardware and on Linux, as well as implementations of non-compacting real-time memory management systems (Half-fit [25] and TLSF [22]) and non-real-time memory management systems (First-fit [16], Best-fit [16], and Doug Lea's allocator [18]), using synthetic workloads that create worst-case and average-case scenarios.

1.1 Outline of the thesis

We start this thesis with a discussion of memory management systems, focusing on real-time systems. Then follows the description of the CF model and the presentation of the CF implementation. Finally, we present the results of the experiments and benchmarks.

Chapter 1, Introduction: The introduction gives an outline of this thesis and its motivation.

Chapter 2, Real-time memory management systems: Chapter 2 gives an overview of memory management systems and the requirements for real-time performance. The

problem of fragmentation is introduced. Established non-real-time memory management systems like First-fit, Best-fit, and Doug Lea's allocator and non-compacting real-time memory management systems like Half-fit and Two-level segregated fit are discussed. Finally, the memory management systems of the garbage-collected systems Treadmill, Metronome, and Jamaica are presented.

Chapter 3, Compact-fit (CF): The model of CF is presented in Chapter 3. We introduce the abstract and the concrete address space and present the size-classes concept. Furthermore, we state fragmentation bounds for the size-class concept.

Chapter 4, CF system: The CF system is discussed in Chapter 4. There are two different CF approaches: the moving and the non-moving implementation. Both are examined in detail, and their asymptotic complexity and memory overhead are presented. Then the partial compaction concept is explained, which brings deallocation down to constant time for the moving implementation of CF. Additionally, we discuss extensions and optimizations of the current CF implementations.

Chapter 5, Experiments: Chapter 5 presents the experiments and benchmarks. Three different mutators are used, which generate synthetic worst-case and average-case scenarios. The performance of both CF implementations is measured using clock-tick and instruction benchmarks. The results are compared with those of First-fit, Best-fit, Doug Lea's allocator, Half-fit, and TLSF. At the end of this chapter we present fragmentation tests, in which we compare the CF moving implementation with TLSF.

Chapter 6, Conclusion: In the last chapter we conclude the thesis. We review the thesis and outline its main contributions. Finally, we discuss ideas for future work.

Appendix A, CF implementation: The appendix lists the source code of the CF moving and non-moving implementations. Note that the source code of both implementations is merged.
The desired approach is compiled by setting the respective flag. The implementation is available under the GPL at

1.2 Contributions

The contribution of this thesis is the CF model and the concept of predictable memory (predictable fragmentation). Based on the CF model, we implemented the moving and the non-moving CF approaches. Furthermore, we presented the partial compaction strategy and implemented it for the CF moving implementation. Moreover, we benchmarked the moving and non-moving implementations as well as the partial compaction strategy of CF in a number of experiments, and compared both CF implementations with the explicit dynamic memory management algorithms First-fit, Best-fit, Doug Lea's allocator, Half-fit, and TLSF. We used two different platforms for the benchmarks: Gumstix hardware running a lightweight hardware abstraction layer to perform execution time measurements

(processor cycles) and the Linux operating system to perform processor instruction tests. In addition, we performed fragmentation experiments in which we compare the memory utilization of the CF moving implementation with that of TLSF.

2 Real-time memory management systems

Developers of real-time systems avoid the use of dynamic memory management because they fear that the worst-case execution time of dynamic memory allocation routines is not bounded or is bounded with an excessively large bound.
Isabelle Puaut

This chapter starts with an overview of memory management in general and the requirements for real-time performance in particular. Memory fragmentation is a problem in managing memory, which we also discuss in this section. Afterwards, dynamic memory management systems are examined. We discuss established allocator strategies like First-fit, Best-fit, Doug Lea's allocator, Half-fit, and Two-level segregated fit, and present the memory management systems of the garbage-collected systems Treadmill, Metronome, and Jamaica.

2.1 Memory management basics

By memory management we mean dynamic memory management. Dynamic memory management is a fundamental and well-studied part of operating systems. This core unit has to keep track of the used and unused parts of memory. Applications use the dynamic memory management system to allocate and free memory objects of arbitrary size in arbitrary order; this is what makes the memory management dynamic. Moreover, applications can use the memory management system to access already allocated memory objects. This operation is called dereferencing. In programming languages like C the memory management system does not handle memory dereferencing: an application can access the whole contiguous memory directly. In contrast, virtual machines like the Java virtual machine call explicit dereferencing methods to gain access to memory. The memory management system responds to an allocation request by providing an available memory slot, to a deallocation request by freeing the occupied memory slot, and to a dereferencing request by providing access to a memory location within the allocated memory object.
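The difference between direct pointer access and explicit dereferencing can be illustrated with a handle table, a simplified stand-in for the abstract addresses used later in CF (all names here are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A handle table: the application holds an index (the "abstract
 * address"), not a raw pointer, so the memory manager can move an
 * object and update a single table entry. */
#define TABLE_SIZE 16

static char heap[256];
static void *handle_table[TABLE_SIZE];

typedef int abs_addr_t;   /* abstract address: an index into handle_table */

/* Dereferencing costs one extra indirection compared to a raw pointer. */
void *deref(abs_addr_t a, size_t offset) {
    return (char *)handle_table[a] + offset;
}

/* "Compaction": move the object and re-target only the table entry;
 * all outstanding abstract addresses remain valid. */
void move_object(abs_addr_t a, void *dst, size_t size) {
    memmove(dst, handle_table[a], size);
    handle_table[a] = dst;
}
```

A raw C pointer into `heap` would dangle after `move_object`; the handle does not, which is exactly why an indirection makes compaction cheap to account for.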
Memory deallocation can lead to memory holes that cannot be reused by future allocation requests if they are too small. Dynamic memory management systems have to minimize this problem, called the fragmentation problem. The complexity of allocation amounts to finding free memory

space, and it increases as fragmentation increases. The complexity of deallocation may also be related to the fragmentation problem. Hence, fragmentation is a key issue in memory management. As usual in the literature, we use the term fragmentation both for the phenomenon of fragmented memory space and for the size of the fragmented parts of the memory. In general there are two types of fragmentation (an introduction to fragmentation is given in [34]):

Internal fragmentation occurs if the memory is partitioned into blocks. An allocation that does not fill a whole block wastes the memory at the end of the block. This wasted memory is called internal fragmentation.

External fragmentation is a phenomenon in which the contiguous memory becomes divided into many small pieces over time, which are not usable by further allocation requests.

Johnstone [15] showed that a large class of programs tends to perform many allocation operations of small and equal size. The majority of programs consist of just a few key objects that are recently used and make up the nodes of large data structures upon which the program operates. The remaining allocation operations belong to strings, arrays, and buffers, which can be of varying and larger size. Johnstone concludes that fragmentation can be ignored if the right allocation strategy for an application is chosen. This might be true for short-running userland programs, but for safety-critical systems this argument does not hold. For hard real-time systems the worst-case scenarios have to be taken care of, and fragmentation has to be considered. A way to fight fragmentation is by performing compaction, also known as defragmentation: initially the free memory space is contiguous; fragmentation results in a non-contiguous free memory space; compaction is the process of rearranging the used memory space so that larger contiguous pieces of memory become available.
In the best case the whole free memory becomes contiguous again. There are two types of dynamic memory management systems: explicit, in which an application has to explicitly call the corresponding procedures of the dynamic memory management system for allocating and deallocating memory, and implicit, in which memory deallocation is implicit, i.e., allocated memory that is no longer used is detected and freed automatically. Such systems are called garbage-collected systems. Explicit dynamic memory management systems usually serve low-level implementations, in comparison to implicit dynamic memory management systems that can, for example, manage Java real-time systems. They are therefore in a way incomparable, but Berger et al. [13] introduced an experimental methodology for quantifying the performance of

garbage collection vs. explicit memory management. In this work we propose an explicit dynamic memory management system, which can be used for both low- and high-level implementations.

2.2 Real-time memory management requirements

Traditional dynamic memory management strategies are typically non-deterministic. Most of them are optimized to offer excellent best- and average-case response times, but their worst case is unbounded. This is suitable for non-real-time systems, but for hard real-time systems tight bounds have to exist. Therefore, dynamic memory allocators have been avoided in the real-time domain. The memory used by real-time applications was typically allocated statically, a sufficient solution for many real-time controllers. Nowadays real-time applications have increasing complexity, which in turn demands greater flexibility of memory allocation. Therefore, there is a need for designing dynamic real-time memory management systems. In an ideal dynamic real-time memory management system each unit operation (memory allocation, deallocation, and dereferencing) takes constant time. We refer to this time as the response time of the operation of the memory management system. If constant response times cannot be achieved, then bounded response times are also acceptable. However, the response times have to be bounded by the size of the actual request and not by the global state of the memory. More precisely, real-time systems should exhibit predictability of response times and of available resources. If the response times are bounded, then they are predictable. The fragmentation problem affects the predictability of the response times. Consider the following example. The memory consists of n blocks of equal size. An application allocates all of the n blocks and then deallocates every second block. As a result, 50% of the memory is free.
Nevertheless, any allocation request demanding at least two contiguous blocks cannot be served. Depending on how the memory management resolves fragmentation, this situation can affect the response times. For example, if moving objects in order to create enough contiguous space is done upon an allocation request, then the response time of allocation is no longer bounded, i.e., it may depend on the global state of the memory. Predictability of available memory means that the number of actual allocations together with their sizes determines how many more allocations of a given size will succeed before running out of memory, independent of the allocation and deallocation history. In a predictable system the amount of fragmentation is also predictable and depends only on the actual allocations. In addition to predictability, fragmentation has to be minimized for maximal utilization of the available memory.
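The checkerboard example above can be made concrete: half of the memory is free, yet the largest contiguous free run is a single block. A minimal sketch:

```c
#include <assert.h>
#include <stdbool.h>

/* Checkerboard fragmentation: n equal blocks, every second one freed.
 * 50% of the memory is free, but no two contiguous free blocks exist,
 * so a two-block request cannot be served. */
#define N 16

/* Length of the longest run of contiguous free blocks. */
int max_contiguous_free(const bool used[N]) {
    int best = 0, run = 0;
    for (int i = 0; i < N; i++) {
        run = used[i] ? 0 : run + 1;
        if (run > best) best = run;
    }
    return best;
}
```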

None of the established explicit dynamic memory management systems meets the requirement of predictability of available memory, since fragmentation depends on the allocation and deallocation history. As mentioned above, a way to minimize fragmentation is by performing compaction. The compaction workload has to be evenly and fairly distributed to get predictable response times. Compaction operations can be done in either an event- or a time-triggered manner:

"In an event-triggered system a processing activity is initiated as a consequence of the occurrence of a significant event." [17] Compaction is initiated upon the occurrence of significant events, e.g. memory management API calls, and can be performed before or after such a call, where m memory objects are moved to another location in memory.

"In a time-triggered system, the activities are initiated periodically at predetermined points in real-time." [17] Compaction operations are performed every n clock ticks, independent of memory management API calls, where m memory objects are moved to another location in memory.

The memory management system that we propose has bounded (constant) response times and predictable available memory, where fragmentation is minimized. The latter is achieved via compaction, which is performed in an event-triggered manner.

2.3 Explicit dynamic memory management systems

The procedures for allocating (malloc) and deallocating (free) memory have to be called explicitly if an explicit dynamic memory management system is used. Wilson et al. [37] give a survey of dynamic memory allocation strategies. Masmano et al. [24] and Puaut [29] present evaluations of explicit dynamic memory management systems under real-time loads. In this section we give a brief overview of some established allocators: First-fit, Best-fit, Doug Lea's allocator, Half-fit, and Two-level segregated fit.
Note that these allocators operate on a single contiguous piece of memory. The fragmentation problem is not explicitly handled by these explicit dynamic memory management systems; that is, memory compaction is not performed. The algorithms try to align the allocated memory objects in a more or less optimal manner in the contiguous memory. The usable memory depends on the allocation/free history of the application and is therefore not predictable. This is unacceptable for safety-critical hard real-time systems, where fragmentation guarantees are needed.

Sequential fit

First-fit and Best-fit are sequential fit allocators; [16, 34] give detailed explanations of these algorithms. Sequential fit allocators are based on a singly or doubly linked list of free memory blocks. The pointers of the free list are embedded in the free blocks, so no memory is wasted. The First-fit allocator searches the free list and takes the first free block that fits the allocation request, i.e., the allocation request has to be smaller than or equal to the size of the free block. The Best-fit allocator scans the whole list and selects the free block that best fits the allocation request. It is obvious that these allocation strategies are not real-time. Consider a memory constellation where the whole memory of size m consists of allocated blocks of minimum size s and every odd block is free. In this case the free list has maximum length, i.e., in the worst case the whole free list has to be examined to fulfill an allocation request. The maximum number of list iterations is m/(2s). Deallocating a used memory object takes constant time.

Doug Lea's allocator

Doug Lea's allocator [18] is a hybrid allocator, which is widely used in several environments, e.g. in some versions of Linux [35]. It uses two different types of free lists: there are 48 free lists of the first type, which represent exact block sizes (from 16 to 64 bytes), called fast bins. The remaining free lists (the second type) are segregated free lists, called bins. Allocation operations are handled by the corresponding free list that fits the allocation request. The allocator uses a delayed coalescing strategy. This means that neighboring free blocks are not coalesced after deallocation operations. Instead, a global block coalescing is performed if an allocation request cannot be fulfilled.
Deallocation operations are therefore fast and perform in constant time, but the allocation operations have imprecise bounds, caused by the global delayed coalescing of free blocks that can occur. Let m denote the memory size and s the minimum block size; then O(m/s) is the complexity of the coalesce operations that can occur if an allocation call cannot be served directly. Therefore, Doug Lea's allocation strategy is not predictable and not suitable for a hard real-time system.

Half-fit

Half-fit [25] groups free blocks of sizes in the range [2^i, 2^(i+1)) into a free list denoted by i. Bitmaps are used to keep track of empty lists, and bitmap processor instructions are used

to find set bits in constant time. If an allocation request of size s is made, the search for a suitable free block starts at index i, where i = ⌊log2(s − 1)⌋ + 1, or i = 0 if s = 1. If list i contains no free element, then the next free list, i + 1, is examined. If a free block of a larger size class has to be used, this free block is split into two blocks of sizes r and r′, and the free block of size r′ is reinserted into the corresponding free list. Masmano et al. [22] showed that fragmentation is high in the Half-fit allocator, especially if many allocations are performed whose sizes are not close to a power of two.

Two-level segregated fit

The two-level segregated fit (TLSF) allocator [22], which is used in the RTLinux/GPL system [23], implements a combination of a segregated free list and a bitmap allocation strategy. The first dimension of the free list is an array that represents size classes that are a power of two apart. The second dimension subdivides each first-level size class linearly. Each free-list array has an associated bitmap in which the free lists that contain free blocks are marked. Processor instructions are used to find an adequate free memory location for an allocation request in constant time. If there are neighboring free blocks after a deallocation operation, they are immediately coalesced using the boundary tag technique [16]. Each used block contains 8 bytes of administrative information, stored in the header of the block. The first 4 bytes hold the size of the used block and the second 4 bytes contain a physical memory reference to the previous block, with respect to the linear order of blocks in memory. This information is necessary to perform block coalescing in constant time. The immediate coalescing technique leads to larger reusable memory ranges and therefore to less fragmentation than in the Half-fit approach.
Since the minimal block size in TLSF is 16 bytes, the worst-case administrative memory overhead is high: 8/16 = 50%.

Algorithms Complexity

Table 2.1 shows the complexity of the allocation and deallocation operations of the presented explicit dynamic memory management systems. Half-fit and TLSF are the only allocators that offer bounded (constant) time behaviour for both operations.

              Allocation    Deallocation
  First-fit   O(m/(2s))     O(1)
  Best-fit    O(m/(2s))     O(1)
  DLmalloc    O(m/s)        O(1)
  Half-fit    O(1)          O(1)
  TLSF        O(1)          O(1)

  Table 2.1: Allocator Complexity
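The constant-time bounds for Half-fit and TLSF rest on such bitmap operations. A simplified sketch of Half-fit's list-index computation and the bitmap search for the next non-empty list (real allocators replace these loops with single processor instructions such as count-trailing-zeros):

```c
#include <assert.h>

/* Half-fit list indexing: a request of size s (in abstract units) maps
 * to list i = floor(log2(s-1)) + 1, or 0 for s = 1, so every block on
 * list i, holding sizes [2^i, 2^(i+1)), is guaranteed to be >= s. */
int halffit_index(unsigned s) {
    if (s <= 1) return 0;
    int i = 0;
    unsigned v = s - 1;
    while (v >>= 1) i++;      /* i = floor(log2(s - 1)) */
    return i + 1;
}

/* Find the first non-empty free list >= i, given a bitmap in which
 * bit j is set iff list j contains free blocks; -1 if none. */
int first_nonempty(unsigned bitmap, int i) {
    unsigned masked = bitmap & ~((1u << i) - 1u); /* clear bits below i */
    if (!masked) return -1;
    int j = 0;
    while (!(masked & 1u)) { masked >>= 1; j++; }
    return j;
}
```

TLSF refines the same idea with a second-level bitmap that linearly subdivides each power-of-two class, reducing the internal fragmentation that Half-fit's coarse classes incur.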

2.4 Implicit dynamic memory management systems

An implicit dynamic memory management system is in charge of collecting allocated memory objects that are no longer in use. Implicit dynamic memory deallocation is known as garbage collection. The garbage collector is responsible for deallocating enough unused allocated memory objects to handle prospective allocation requests of arbitrary size. We do not focus on garbage collection strategies in this section; we are only interested in the memory management concepts of real-time garbage-collected systems. We examine the following established real-time garbage-collected systems: the Treadmill concept [6] with its modifications [36, 19], the time-triggered Metronome [4, 2, 5, 3], and the event-triggered Jamaica [33, 31, 32] approach. The last two are commercial systems. Ritzau [30] presents an extensive overview in his dissertation.

Treadmill

Baker's Treadmill [6] is a real-time, non-copying garbage collector that offers bounded response times for allocation operations. The garbage collection strategy is a four-color collection scheme; details about this approach can be found in [6]. The algorithm uses a single block size. One free block is taken from the free list to handle an allocation request. All memory blocks are stored in circular doubly-linked lists. Therefore, memory allocation is done in constant time. Using just one block size is very restrictive and results in high internal fragmentation. The main drawback of this approach is that unpredictably large amounts of garbage collection work can occur. Wilson [36] introduced segregated free lists for this algorithm, with size classes increasing in powers of two. Allocation requests are handled by the free list that fits the allocation request. Each list is collected separately by the Treadmill collector. A collection occurs only if a free list becomes empty.
This strategy is unpredictable and therefore not suitable for a hard real-time system. A page-level memory management version of the Treadmill collector is proposed in [19], which improves memory utilization without imposing unbounded response times for allocation requests. A page remapping scheme is used to create larger free contiguous pieces of memory.

2.4.2 Metronome

In Metronome [4, 2, 5, 3], allocation is performed using segregated free lists. The whole memory is divided into pages of equal size. Each page itself is divided into fixed-size

blocks of a particular size. There are n different block sizes, which lead to n different size-classes. All pages that consist of blocks of the same size build up a size-class. Allocation operations are handled by the smallest size-class that can fit the allocation request. This is done in constant time. Unused pages can be used by any size-class. Compaction operations are performed if pages of a size-class become fragmented to a certain degree due to garbage collection. First of all, the pages of a size-class are sorted by the number of unused blocks per page. There is an allocation pointer, which is set to the first not-full page of the resulting list, and a compaction pointer, which is set to the last page of the resulting list. Allocated objects are moved from the page referenced by the compaction pointer to the page referenced by the allocation pointer. Compaction is performed until both pointers reference the same page. Relocation of objects is achieved using a forwarding pointer. This pointer is located in the header of each object. A Brooks-style read barrier [8] maintains the to-space invariant: a mutator always sees its objects in to-space. A number of optimizations are applied to the read barrier to reduce its cost, e.g., barrier-sinking (the barrier is sunk down to its point of use). The mean cost of the read barrier is 4%. In the worst case it represents an overhead of 9.6%. Since Metronome is a time-triggered real-time garbage collector, compaction is part of the collection cycles, which are performed at predefined points in time. It is shown that compaction takes no more than 6% of the collection time. Therefore the compaction overhead is bounded in the Metronome approach. The remaining time is used to detect allocated objects that are not in use anymore. The duration of the collection interval has to be preset for the specific application in advance.
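The two-pointer compaction of a size-class described above can be sketched as follows. This is a simplified illustration under assumptions (forwarding pointers and the read barrier are omitted; a page is modelled as a list of blocks, where None marks an unused block):

```python
# Illustrative sketch of Metronome-style compaction within one size-class:
# pages are sorted by the number of unused blocks, then objects are moved
# from the emptiest page (compaction pointer) towards the first not-full
# page (allocation pointer) until the two pointers meet.

def compact_size_class(pages):
    """Each page is a list of blocks; a block is an object or None."""
    pages.sort(key=lambda page: page.count(None))  # fullest pages first
    alloc, comp = 0, len(pages) - 1
    while alloc < comp:
        if None not in pages[alloc]:
            alloc += 1                    # advance past completely full pages
            continue
        src = pages[comp]
        live = [b for b in src if b is not None]
        if not live:
            comp -= 1                     # source page fully evacuated
            continue
        obj = live[-1]
        src[src.index(obj)] = None        # evacuate from the sparse page
        dst = pages[alloc]
        dst[dst.index(None)] = obj        # move into the fuller page
    return pages

pages = [["a", None, None], ["b", "c", "d"], ["e", None, None]]
compact_size_class(pages)
```

After the run, the live objects are consolidated onto the front pages and the last page is entirely free, which is exactly what lets Metronome return whole pages for reuse by other size-classes.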
An improper choice of the duration of the collection interval could lead to missed deadlines or out-of-memory errors.

2.4.3 Jamaica

Siebert presented the Jamaica [33, 31, 32] real-time garbage collector, which does not perform compaction operations. A new object model is introduced that is based on fixed-size blocks. The whole memory is subdivided into blocks of equal size. Small allocation requests can be satisfied by using a single block. Larger ones require a possibly non-contiguous set of blocks, where each block holds a reference to its successor. The non-contiguity is the reason why compaction is no longer necessary. Objects can be built up of arbitrarily distributed blocks, which are connected by a singly-linked list or a tree data structure. When using blocks of fixed size, the most important decision is to choose an adequate block size. Siebert proposed block sizes in the range of 16 to 64 bytes. This parameter has to be chosen specifically for each program. The complexity of allocation and deallocation operations depends on the size of the affected object and the used block size. Let s denote the size of an object and let b denote the block size. An object of size s requires n = ⌈s/b⌉ blocks. This means that if an allocation or deallocation operation on an object of size s is performed, n list operations are required. Therefore allocation and deallocation operations are performed in linear time O(s/b), depending on the object size. Memory dereferencing cannot be done in constant time using the object model of Jamaica. Since an object is built up of non-contiguous blocks, access to the last field of an object requires going through all the blocks of the object, if they are connected via a linked list. Therefore memory dereferencing takes linear time and depends on the location of the field in the object. Jamaica performs event-triggered garbage collection, which is executed when allocation operations are performed. m blocks have to be examined at every allocation operation to guarantee that all allocation requests can be fulfilled. In Jamaica, the amount m of blocks that have to be checked depends on the total amount of allocated blocks. If there are only a few allocated blocks, then less collection has to be performed. Otherwise more work has to be done. In the worst case, if the memory is completely full and an allocation operation is performed, all allocated objects have to be checked. Therefore the collection overhead varies and depends on the global memory state.

2.5 Summary

Two of the presented explicit dynamic memory management systems offer bounded (constant) response times for allocation and deallocation operations: Half-fit and TLSF. Since TLSF handles fragmentation better than Half-fit, it is the most applicable candidate for real-time systems. The main problem, which is not considered by these systems, is that fragmentation is unpredictable and may be unbounded. Therefore, scenarios that lead to high fragmentation are possible.
As a result, using these systems for real-time applications may be problematic: the memory has to be predictable. The Treadmill approach with its modifications presents some interesting memory management layouts, but it suffers from unpredictable collection cycles. Metronome performs time-triggered garbage collection. The duration of the collection interval, in which compaction is performed, has to be precisely chosen to guarantee that the real-time system is able to meet its deadlines and that sufficient memory is always available. Otherwise the system may fail. The event-triggered Jamaica system uses an object model that avoids external fragmentation. Here, internal fragmentation is the problem that has to be minimized by choosing an application-adapted block size. The garbage collection overhead varies and depends on the global memory state, which degrades the predictability of the system. A further drawback is that memory dereferencing cannot be performed in constant time in Jamaica.
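The Jamaica object model and its linear dereferencing cost can be sketched as follows. This is an illustrative simplification (class and function names are ours, not Jamaica's): an object of size s occupies n = ⌈s/b⌉ fixed-size blocks on a singly-linked list, so reaching byte offset o requires walking ⌊o/b⌋ links first.

```python
import math

# Illustrative sketch of a Jamaica-style object built from fixed-size
# blocks connected by a singly-linked list. Allocation needs n = ceil(s/b)
# list operations; dereferencing must first walk floor(o/b) links.

BLOCK_SIZE = 16  # bytes per block; must be tuned per program (16..64)

class Block:
    def __init__(self):
        self.data = bytearray(BLOCK_SIZE)
        self.next = None

def allocate(size):
    n = math.ceil(size / BLOCK_SIZE)       # n list operations -> O(s/b)
    head = Block()
    tail = head
    for _ in range(n - 1):
        tail.next = Block()
        tail = tail.next
    return head

def dereference(obj, offset):
    # Walk offset // BLOCK_SIZE links before the field can be touched,
    # so access cost grows with the position of the field in the object.
    block = obj
    for _ in range(offset // BLOCK_SIZE):
        block = block.next
    return block.data[offset % BLOCK_SIZE]

obj = allocate(50)  # 50 bytes -> 4 blocks of 16 bytes
```

This is precisely the trade-off criticized above: external fragmentation disappears, but the cost reappears in every access to the rear fields of an object.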

Memory management is the basis for a predictable hard real-time system. None of the outlined dynamic memory management systems offers predictable memory operations in combination with explicit fragmentation elimination and independence of the global memory state. Hard real-time systems require dynamic memory management systems that offer all of these properties.

3 Compact-fit (CF)

Simplicity is prerequisite for reliability. — Edsger W. Dijkstra

The following chapter describes the model of the compacting real-time memory management system Compact-fit (CF) and presents the main design decisions. It abstracts from any data-related aspects such as data organization and administration. Different element colors in status diagrams are used to represent the memory states of the CF entities: (a) unused, (b) used, (c) internal fragmentation, (d) page-internal fragmentation.

Figure 3.1: Memory states

Furthermore, the abstract term memory object is used to describe an allocated memory range.

3.1 Compaction

Compaction can be used to bound fragmentation. However, a stop-the-world approach, where the whole memory is compacted at once, would degrade predictability and is not suitable for a real-time system. Therefore the compaction workload has to be distributed incrementally and fairly over memory operations to get a predictable timing behaviour. The Jamaica system [32] presented in Section 2.4.3 is an exception: its object model eliminates external fragmentation, hence compaction is not necessary anymore. However, the workload is not removed there; it is shifted to memory dereferencing, where more work has to be done for the rear elements of a memory object. In CF, dereferencing operations have to be done in constant time; therefore the Jamaica object model does not fit our claim.

The compaction workload normally consists of two major tasks: copying memory objects and updating all references that point to the moved memory object [10]. Copying a memory object can be bounded by the size of the memory object, but the number of reference updates is unpredictable. In the worst case, n allocated memory objects hold a reference to the moved memory object, which would lead to n reference updates. Furthermore, these n references have to be found in memory. Predictability can be achieved if memory objects and direct references to memory objects are decoupled. The decoupling mechanism is described in the following section.

Abstract and concrete address space

Conceptually, there are two memory layers: the abstract address space and the concrete address space. Allocated memory objects are physically placed in contiguous portions of the concrete address space. For each allocated memory object, there is exactly one entity of the abstract address space. No direct references from applications to the concrete address space are possible: an application references the abstract address of a memory object, which in turn uniquely determines the memory object's position in the concrete address space. Therefore the applications and the memory objects (in the concrete address space) are decoupled. All memory operations operate on abstract addresses. We start by defining the needed notions and notations.

Definition 1 The abstract address space is a finite set of integers denoted by A.

Definition 2 An abstract address a is an element of the abstract address space, a ∈ A.

Definition 3 The concrete address space is a finite interval of integers denoted by C.

Note that since it is an interval, the concrete address space C is contiguous. Moreover, both the concrete and abstract address spaces are linearly ordered by the standard ordering of the integers.
Definition 4 A concrete address c is an element of the concrete address space, c ∈ C.

Definition 5 A memory object is an element of the set of memory objects, i ∈ I(C). For each memory object, two elements of the concrete address space c1, c2 ∈ C, such that c1 ≤ c2, define its range, i.e., we have i = [c1, c2] = {x | c1 ≤ x ≤ c2}.

As mentioned above, each abstract address refers to a unique range of concrete addresses, which represents a memory object. Vice versa, the concrete addresses of an allocated memory object are assigned to a unique abstract address. To express this formally, we define a partial map that assigns to each abstract address the interval of concrete addresses that it refers to. The abstract address partial map address : A → I(C) maps abstract addresses to memory objects. We say that an abstract address a is in use if address(a) is defined. The

abstract address map is injective, i.e., different abstract addresses are mapped to different subintervals; moreover, for all abstract addresses a1, a2 ∈ A that are in use, if a1 ≠ a2, then address(a1) ∩ address(a2) = ∅. Accessing a specific element in the concrete address space C requires two pieces of information: the abstract address a and an offset o, pointing out which element in the memory object m = address(a) is desired. Hence the next definition; by |·| we denote the cardinality of a set.

Definition 6 An abstract pointer denoted by ap is a pair ap = (a, o), where a is an abstract address in use and o is an offset, o ∈ {0, ..., |address(a)| − 1}.

Definition 7 The abstract pointer space is the set of all abstract pointers ap, and it is denoted by Ap.

There is a one-to-one correspondence between Ap and C. Each abstract pointer ap refers to a unique concrete address c via the abstract pointer mapping pointer : Ap → C. It maps an abstract pointer ap = (a, o) to the concrete address of the memory object m = address(a) that is at position o with respect to the order on address(a). These definitions and mappings are clarified by an example. Let the abstract address space A consist of 3 elements, A = {1, 2, 3}, and let the concrete address space C consist of 10 elements, C = {1, 2, ..., 10}. Assume that three memory objects of different size (different amounts of concrete addresses) are allocated: address(1) = [2, 3], address(2) = [6, 7] and address(3) = [8, 10]. The abstract addresses together with their offsets create abstract pointers, which are mapped to C. For example, pointer(1, 1) = 3 and pointer(3, 2) = 10. Figure 3.2 depicts this situation.

Figure 3.2: Abstract address and pointer mapping

The following examples are more concrete in the sense of implementation and show the benefit of an abstract address space A.
Consider an application that allocates memory objects and holds references to A, which is realized by a contiguous pointer indirection table. In the examples, the pointer indirection table is called the proxy table. The proxy table

entries refer to the concrete address space C, the real memory. Figure 3.3 illustrates how dependencies of memory objects are handled. Large data structures often consist of a number of allocated memory objects connected via references (e.g., linked lists, trees, ...). Compaction operations lead to reference updates in these data structures. The number of reference updates is unpredictable if these references are direct. Therefore each memory reference has to be indirect, i.e., each reference is an abstract pointer. This situation is shown in Figure 3.3.

Figure 3.3: Memory object dependencies

Indirect referencing provides predictability of the reference updates during compaction. If fragmentation occurs, the concrete address space C gets compacted and the references from the abstract address space A to the concrete address space C are updated, as shown in Figures 3.4 and 3.5. Hence, objects are moved in C and references are updated in A. The number of reference updates is bounded: movement of one memory object in C leads to exactly one reference update in A. In contrast, direct referencing (related to object dependencies) implies an unpredictable number of reference updates. This is why we chose an abstract address space design.
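The proxy-table indirection can be sketched in a few lines, using the example values from the text (A = {1, 2, 3}, address(1) = [2, 3], address(2) = [6, 7], address(3) = [8, 10]); the table representation and function names are ours, not the CF implementation:

```python
# Sketch of the abstract address indirection, using the example from the
# text. The proxy table realizes the partial map address : A -> I(C);
# each entry stores the start and end of a contiguous range in C.
proxy_table = {1: (2, 3), 2: (6, 7), 3: (8, 10)}

def pointer(a, o):
    """Abstract pointer (a, o) -> concrete address, in constant time."""
    start, end = proxy_table[a]
    assert 0 <= o <= end - start, "offset outside the memory object"
    return start + o

assert pointer(1, 1) == 3    # matches the text: pointer(1, 1) = 3
assert pointer(3, 2) == 10   # matches the text: pointer(3, 2) = 10

def move_object(a, new_start):
    # Compaction moves the object in C; exactly ONE reference update in A
    # is needed, no matter how many abstract pointers refer to the object.
    start, end = proxy_table[a]
    proxy_table[a] = (new_start, new_start + (end - start))

move_object(2, 4)            # compact: range [6, 7] slides down to [4, 5]
assert pointer(2, 1) == 5    # all abstract pointers remain valid
```

Note that dereferencing stays constant time (one table lookup plus an addition), which is exactly the property the Jamaica object model could not provide.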


More information

Instruction Set Architecture (ISA)

Instruction Set Architecture (ISA) Instruction Set Architecture (ISA) * Instruction set architecture of a machine fills the semantic gap between the user and the machine. * ISA serves as the starting point for the design of a new machine

More information

10CS35: Data Structures Using C

10CS35: Data Structures Using C CS35: Data Structures Using C QUESTION BANK REVIEW OF STRUCTURES AND POINTERS, INTRODUCTION TO SPECIAL FEATURES OF C OBJECTIVE: Learn : Usage of structures, unions - a conventional tool for handling a

More information

1. The memory address of the first element of an array is called A. floor address B. foundation addressc. first address D.

1. The memory address of the first element of an array is called A. floor address B. foundation addressc. first address D. 1. The memory address of the first element of an array is called A. floor address B. foundation addressc. first address D. base address 2. The memory address of fifth element of an array can be calculated

More information

Disk Space Management Methods

Disk Space Management Methods Volume 1, Issue 1, June 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Disk Space Management Methods Ramakrishna

More information

File System Management

File System Management Lecture 7: Storage Management File System Management Contents Non volatile memory Tape, HDD, SSD Files & File System Interface Directories & their Organization File System Implementation Disk Space Allocation

More information

Chapter 13. Disk Storage, Basic File Structures, and Hashing

Chapter 13. Disk Storage, Basic File Structures, and Hashing Chapter 13 Disk Storage, Basic File Structures, and Hashing Chapter Outline Disk Storage Devices Files of Records Operations on Files Unordered Files Ordered Files Hashed Files Dynamic and Extendible Hashing

More information

3. Mathematical Induction

3. Mathematical Induction 3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

More information

Segmentation and Fragmentation

Segmentation and Fragmentation Segmentation and Fragmentation Operating System Design MOSIG 1 Instructor: Arnaud Legrand Class Assistants: Benjamin Negrevergne, Sascha Hunold September 16, 2010 A. Legrand Segmentation and Fragmentation

More information

DATABASE DESIGN - 1DL400

DATABASE DESIGN - 1DL400 DATABASE DESIGN - 1DL400 Spring 2015 A course on modern database systems!! http://www.it.uu.se/research/group/udbl/kurser/dbii_vt15/ Kjell Orsborn! Uppsala Database Laboratory! Department of Information

More information

APP INVENTOR. Test Review

APP INVENTOR. Test Review APP INVENTOR Test Review Main Concepts App Inventor Lists Creating Random Numbers Variables Searching and Sorting Data Linear Search Binary Search Selection Sort Quick Sort Abstraction Modulus Division

More information

Managing Variability in Software Architectures 1 Felix Bachmann*

Managing Variability in Software Architectures 1 Felix Bachmann* Managing Variability in Software Architectures Felix Bachmann* Carnegie Bosch Institute Carnegie Mellon University Pittsburgh, Pa 523, USA fb@sei.cmu.edu Len Bass Software Engineering Institute Carnegie

More information

PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions. Outline. Performance oriented design

PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions. Outline. Performance oriented design PART IV Performance oriented design, Performance testing, Performance tuning & Performance solutions Slide 1 Outline Principles for performance oriented design Performance testing Performance tuning General

More information

FAT32 vs. NTFS Jason Capriotti CS384, Section 1 Winter 1999-2000 Dr. Barnicki January 28, 2000

FAT32 vs. NTFS Jason Capriotti CS384, Section 1 Winter 1999-2000 Dr. Barnicki January 28, 2000 FAT32 vs. NTFS Jason Capriotti CS384, Section 1 Winter 1999-2000 Dr. Barnicki January 28, 2000 Table of Contents List of Figures... iv Introduction...1 The Physical Disk...1 File System Basics...3 File

More information

recursion, O(n), linked lists 6/14

recursion, O(n), linked lists 6/14 recursion, O(n), linked lists 6/14 recursion reducing the amount of data to process and processing a smaller amount of data example: process one item in a list, recursively process the rest of the list

More information

Lecture 15. IP address space managed by Internet Assigned Numbers Authority (IANA)

Lecture 15. IP address space managed by Internet Assigned Numbers Authority (IANA) Lecture 15 IP Address Each host and router on the Internet has an IP address, which consist of a combination of network number and host number. The combination is unique; no two machines have the same

More information

Storage in Database Systems. CMPSCI 445 Fall 2010

Storage in Database Systems. CMPSCI 445 Fall 2010 Storage in Database Systems CMPSCI 445 Fall 2010 1 Storage Topics Architecture and Overview Disks Buffer management Files of records 2 DBMS Architecture Query Parser Query Rewriter Query Optimizer Query

More information

Compact Representations and Approximations for Compuation in Games

Compact Representations and Approximations for Compuation in Games Compact Representations and Approximations for Compuation in Games Kevin Swersky April 23, 2008 Abstract Compact representations have recently been developed as a way of both encoding the strategic interactions

More information

Exception and Interrupt Handling in ARM

Exception and Interrupt Handling in ARM Exception and Interrupt Handling in ARM Architectures and Design Methods for Embedded Systems Summer Semester 2006 Author: Ahmed Fathy Mohammed Abdelrazek Advisor: Dominik Lücke Abstract We discuss exceptions

More information

Introduction to Algorithms March 10, 2004 Massachusetts Institute of Technology Professors Erik Demaine and Shafi Goldwasser Quiz 1.

Introduction to Algorithms March 10, 2004 Massachusetts Institute of Technology Professors Erik Demaine and Shafi Goldwasser Quiz 1. Introduction to Algorithms March 10, 2004 Massachusetts Institute of Technology 6.046J/18.410J Professors Erik Demaine and Shafi Goldwasser Quiz 1 Quiz 1 Do not open this quiz booklet until you are directed

More information

Chapter 2: OS Overview

Chapter 2: OS Overview Chapter 2: OS Overview CmSc 335 Operating Systems 1. Operating system objectives and functions Operating systems control and support the usage of computer systems. a. usage users of a computer system:

More information

7.1 Our Current Model

7.1 Our Current Model Chapter 7 The Stack In this chapter we examine what is arguably the most important abstract data type in computer science, the stack. We will see that the stack ADT and its implementation are very simple.

More information

Chapter 11: File System Implementation. Chapter 11: File System Implementation. Objectives. File-System Structure

Chapter 11: File System Implementation. Chapter 11: File System Implementation. Objectives. File-System Structure Chapter 11: File System Implementation Chapter 11: File System Implementation File-System Structure File-System Implementation Directory Implementation Allocation Methods Free-Space Management Efficiency

More information

OPERATING SYSTEMS MEMORY MANAGEMENT

OPERATING SYSTEMS MEMORY MANAGEMENT OPERATING SYSTEMS MEMORY MANAGEMENT Jerry Breecher 8: Memory Management 1 OPERATING SYSTEM Memory Management What Is In This Chapter? Just as processes share the CPU, they also share physical memory. This

More information

Tail Recursion Without Space Leaks

Tail Recursion Without Space Leaks Tail Recursion Without Space Leaks Richard Jones Computing Laboratory University of Kent at Canterbury Canterbury, Kent, CT2 7NF rejukc.ac.uk Abstract The G-machine (Johnsson, 1987; Peyton Jones, 1987)

More information

CHAPTER 17: File Management

CHAPTER 17: File Management CHAPTER 17: File Management The Architecture of Computer Hardware, Systems Software & Networking: An Information Technology Approach 4th Edition, Irv Englander John Wiley and Sons 2010 PowerPoint slides

More information

Chapter 12 File Management

Chapter 12 File Management Operating Systems: Internals and Design Principles Chapter 12 File Management Eighth Edition By William Stallings Files Data collections created by users The File System is one of the most important parts

More information

Semester Thesis Traffic Monitoring in Sensor Networks

Semester Thesis Traffic Monitoring in Sensor Networks Semester Thesis Traffic Monitoring in Sensor Networks Raphael Schmid Departments of Computer Science and Information Technology and Electrical Engineering, ETH Zurich Summer Term 2006 Supervisors: Nicolas

More information

x64 Servers: Do you want 64 or 32 bit apps with that server?

x64 Servers: Do you want 64 or 32 bit apps with that server? TMurgent Technologies x64 Servers: Do you want 64 or 32 bit apps with that server? White Paper by Tim Mangan TMurgent Technologies February, 2006 Introduction New servers based on what is generally called

More information

Contributions to Gang Scheduling

Contributions to Gang Scheduling CHAPTER 7 Contributions to Gang Scheduling In this Chapter, we present two techniques to improve Gang Scheduling policies by adopting the ideas of this Thesis. The first one, Performance- Driven Gang Scheduling,

More information

Universal hashing. In other words, the probability of a collision for two different keys x and y given a hash function randomly chosen from H is 1/m.

Universal hashing. In other words, the probability of a collision for two different keys x and y given a hash function randomly chosen from H is 1/m. Universal hashing No matter how we choose our hash function, it is always possible to devise a set of keys that will hash to the same slot, making the hash scheme perform poorly. To circumvent this, we

More information

Scheduling Shop Scheduling. Tim Nieberg

Scheduling Shop Scheduling. Tim Nieberg Scheduling Shop Scheduling Tim Nieberg Shop models: General Introduction Remark: Consider non preemptive problems with regular objectives Notation Shop Problems: m machines, n jobs 1,..., n operations

More information

Stack Allocation. Run-Time Data Structures. Static Structures

Stack Allocation. Run-Time Data Structures. Static Structures Run-Time Data Structures Stack Allocation Static Structures For static structures, a fixed address is used throughout execution. This is the oldest and simplest memory organization. In current compilers,

More information

Semantic Analysis: Types and Type Checking

Semantic Analysis: Types and Type Checking Semantic Analysis Semantic Analysis: Types and Type Checking CS 471 October 10, 2007 Source code Lexical Analysis tokens Syntactic Analysis AST Semantic Analysis AST Intermediate Code Gen lexical errors

More information

GOAL-BASED INTELLIGENT AGENTS

GOAL-BASED INTELLIGENT AGENTS International Journal of Information Technology, Vol. 9 No. 1 GOAL-BASED INTELLIGENT AGENTS Zhiqi Shen, Robert Gay and Xuehong Tao ICIS, School of EEE, Nanyang Technological University, Singapore 639798

More information

CSE 326, Data Structures. Sample Final Exam. Problem Max Points Score 1 14 (2x7) 2 18 (3x6) 3 4 4 7 5 9 6 16 7 8 8 4 9 8 10 4 Total 92.

CSE 326, Data Structures. Sample Final Exam. Problem Max Points Score 1 14 (2x7) 2 18 (3x6) 3 4 4 7 5 9 6 16 7 8 8 4 9 8 10 4 Total 92. Name: Email ID: CSE 326, Data Structures Section: Sample Final Exam Instructions: The exam is closed book, closed notes. Unless otherwise stated, N denotes the number of elements in the data structure

More information

Lecture 1: Data Storage & Index

Lecture 1: Data Storage & Index Lecture 1: Data Storage & Index R&G Chapter 8-11 Concurrency control Query Execution and Optimization Relational Operators File & Access Methods Buffer Management Disk Space Management Recovery Manager

More information

Automated Virtual Cloud Management: The need of future

Automated Virtual Cloud Management: The need of future Automated Virtual Cloud Management: The need of future Prof. (Ms) Manisha Shinde-Pawar Faculty of Management (Information Technology), Bharati Vidyapeeth Univerisity, Pune, IMRDA, SANGLI Abstract: With

More information

PPS Internet-Praktikum. Prof. Bernhard Plattner Institut für Technische Informatik und Kommunikationsnetze (TIK)

PPS Internet-Praktikum. Prof. Bernhard Plattner Institut für Technische Informatik und Kommunikationsnetze (TIK) PPS Internet-Praktikum Prof. Bernhard Plattner Institut für Technische Informatik und Kommunikationsnetze (TIK) September 2011 Zielsetzung Von unserer Webpage: Das Ziel dieser PPS-Veranstaltung ist es,

More information

Themen der Praktikumsnachmittage. PPS Internet-Praktikum. Zielsetzung. Infrastruktur im ETF B5

Themen der Praktikumsnachmittage. PPS Internet-Praktikum. Zielsetzung. Infrastruktur im ETF B5 PPS Internet-Praktikum Prof. Bernhard Plattner Institut für Technische Informatik und Kommunikationsnetze (TIK) Themen der Praktikumsnachmittage Aufbau und Analyse eines kleinen Netzwerks Routing Anwendungen

More information

CS 377: Operating Systems. Outline. A review of what you ve learned, and how it applies to a real operating system. Lecture 25 - Linux Case Study

CS 377: Operating Systems. Outline. A review of what you ve learned, and how it applies to a real operating system. Lecture 25 - Linux Case Study CS 377: Operating Systems Lecture 25 - Linux Case Study Guest Lecturer: Tim Wood Outline Linux History Design Principles System Overview Process Scheduling Memory Management File Systems A review of what

More information

In-Memory Databases Algorithms and Data Structures on Modern Hardware. Martin Faust David Schwalb Jens Krüger Jürgen Müller

In-Memory Databases Algorithms and Data Structures on Modern Hardware. Martin Faust David Schwalb Jens Krüger Jürgen Müller In-Memory Databases Algorithms and Data Structures on Modern Hardware Martin Faust David Schwalb Jens Krüger Jürgen Müller The Free Lunch Is Over 2 Number of transistors per CPU increases Clock frequency

More information

Appendix B Data Quality Dimensions

Appendix B Data Quality Dimensions Appendix B Data Quality Dimensions Purpose Dimensions of data quality are fundamental to understanding how to improve data. This appendix summarizes, in chronological order of publication, three foundational

More information

HP Service Manager Shared Memory Guide

HP Service Manager Shared Memory Guide HP Service Manager Shared Memory Guide Shared Memory Configuration, Functionality, and Scalability Document Release Date: December 2014 Software Release Date: December 2014 Introduction to Shared Memory...

More information