Memory Allocation Strategies for Real Time Operating System


M. SUJITHA 1, V. KANNAN 2
1 Research Scholar, Department of Electronics and Communication Engineering, Dr. M.G.R. Educational and Research Institute University, Chennai
2 Principal, Jeppiaar Institute of Technology, Kanchipuram
E-mail: 1 msujidhas@yahoo.co.in, 2 drvkannan123@gmail.com

Abstract - A real time operating system (RTOS) is used to meet real-time constraints and to support the sophisticated facilities provided by embedded systems. Memory management is an important issue in the design of any embedded system, and embedded system developers commonly implement custom memory management facilities on top of what the underlying RTOS provides, so understanding memory management is essential. Dynamic memory allocation algorithms play an important role in modern software engineering paradigms and techniques. Their use, however, has long been avoided by the real-time community because the spatial and temporal worst case of the allocation and de-allocation operations was insufficiently bounded. In this paper we examine the different dynamic memory management strategies and algorithms used in real time operating systems to improve the performance of the intended application.

Keywords: Real Time Operating System, Memory allocation, sequential fit, Buddy Allocator, Indexed fit, Two Level Segregated Fit algorithm, Smart Memory Allocator.

1. INTRODUCTION

Embedded systems have become an integral part of daily life. Be it a cell phone, a smartcard, a music player, a router, or the electronics in an automobile, these systems have been touching and changing modern lives like never before. An embedded system is a combination of computer hardware, software, and additional mechanical or other technical components, designed to perform a dedicated function. The sophisticated facilities provided by an embedded system are shown in Figure 1.

[Figure 1 - Facilities provided by an embedded system: complex algorithms, user interface, multi-rate operation, real-time behaviour]

Most embedded systems need to meet real-time computing requirements, so a real time operating system (RTOS) is one of the major building blocks of an embedded system. Along with the real-time requirement, an embedded system often has limited memory and processing power, which makes an understanding of memory management all the more important.

Before moving to the memory management concept, let us look at what an RTOS is and what kind of features it has. RTOS stands for Real Time Operating System, an operating system that supports real-time applications by delivering correct results within their deadlines. A real time operating system can be categorized as a hard or a soft real-time system based on how strictly it enforces task completion deadlines. RTOSs are designed in such a way that the timing of events is highly deterministic. An RTOS is key to many embedded systems and provides a platform on which to build applications [14]. Figure 2 gives an overview of the features an RTOS provides to real-time applications.

An RTOS provides the following features [14]:

Synchronization: Synchronization is necessary for real-time tasks to share mutually exclusive resources. For multiple threads to communicate among themselves in a timely fashion, predictable inter-task communication and synchronization mechanisms are required.
Interrupt Handling: An Interrupt Service Routine (ISR) is used for interrupt handling. Interrupt latency is defined as the time delay between the occurrence of an interrupt and the start of the corresponding Interrupt Service Routine (ISR).

Timer and clock: Clock and timer services with adequate resolution are a vital part of every real-time operating system.

Real-Time Priority Levels: A real-time operating system must support real-time priority levels so that once the programmer assigns a priority value to a task, the operating system does not change it by itself.

Fast Task Preemption: For successful operation of a real-time application, whenever a high priority critical task arrives, an executing low priority task should be made to instantly yield the CPU to it.

Memory Management: A real-time operating system for large and medium sized applications is expected to provide virtual memory support, not only to meet the memory demands of the heavyweight real-time tasks of an application, but also to accommodate memory-demanding non-real-time applications such as text editors, e-mail clients, etc. An RTOS keeps its memory footprint small by including only the functionality necessary for an application while discarding the rest [14].

[Figure 2 - Features provided by an RTOS: synchronization, interrupt handling, timer and clock support, real-time priority levels, fast task preemption, memory management]

The requirements of a memory management system are minimal fragmentation, minimal management overhead, and deterministic allocation time. The following sections describe the different memory allocation algorithms, their advantages and disadvantages, and a comparison between them.

2. DYNAMIC MEMORY ALLOCATION ALGORITHMS

The main aim of dynamic memory allocation (DMA) algorithms is to allow the application to obtain blocks of memory from a pool of free memory blocks [2][5][8]. In real-time systems the bounds of these operations need to be known in advance in order to analyse the system. A dynamic memory allocator must keep track of which parts of memory are in use and which parts are free. In practice the allocation and de-allocation operations of general-purpose allocators are not sufficiently bounded, which means that the time required to allocate and de-allocate memory is unpredictable. Additionally, dynamic memory management can introduce significant memory fragmentation, which can lead to unreliable service when the application runs for a long period of time. A fast response and high throughput are important characteristics of any system, and in particular of an RTOS; but the main characteristic that defines an RTOS is that it guarantees its timing constraints. Most dynamic storage allocation (DSA) algorithms were designed to provide a fast response time in the most probable case and thus achieve good overall performance, even though the worst case can be high. For these reasons, most RTOS developers and researchers avoid the use of dynamic memory altogether, or use it in a restricted way, for example only during system startup.
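As a concrete illustration of the bookkeeping described above (an allocator must record which blocks are in use and which are free), the following minimal C sketch shows one possible per-block header for a doubly linked free list. The structure and field names are our own illustration, not taken from any particular RTOS:

```c
#include <stddef.h>

/* Hypothetical per-block header: every block in the heap carries one of
 * these, so the allocator can tell used and free regions apart and can
 * link the free ones into a list. */
typedef struct block_header {
    size_t size;                /* size of the payload in bytes          */
    int    is_free;             /* non-zero while the block is unused    */
    struct block_header *next;  /* next free block (valid when is_free)  */
    struct block_header *prev;  /* previous free block                   */
} block_header_t;

static block_header_t *free_list;   /* head of a single free list */
```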

One of the key issues in real-time systems is schedulability analysis, which determines whether the timing constraints will be satisfied at run-time or not. Regardless of which analysis and scheduling techniques are used, it is essential that the real-time designer be able to determine the worst-case execution time of all the activities involved in the application. Moreover, real-time applications run for long periods of time, during which memory can be allocated and de-allocated many times, which aggravates the memory fragmentation problem. When enough total memory space exists to satisfy a request but it is not contiguous, the situation is called external fragmentation. Allocated memory may be slightly larger than the requested memory; this size difference, memory that is internal to a partition but not being used, is called internal fragmentation. Considering these aspects, the requirements of real-time applications regarding dynamic memory can be stated as follows:

(i) Bounded response time. The worst-case response time has to be known a priori and, if possible, be independent of application data.

(ii) Fast response time. Although having a bounded response time is a must, the response time has to be fast enough to be usable. A bounded DSA algorithm which is 10 times slower than a conventional one is not practical.

(iii) Memory requests have to be always satisfied. In non-real-time systems, applications can receive a null pointer or simply be killed by the OS when the system runs out of memory. It is obviously not possible to always grant all the memory requested, but the DSA algorithm has to minimize the chances of exhausting the memory pool by minimizing fragmentation and wasted memory.

(iv) It must be possible to determine whether a just-freed block can be combined with its neighbouring free blocks to form a larger piece.

A number of dynamic memory allocation algorithms have been developed, and most of them are based on the strategy of linking all free blocks together to form one or more lists of free memory blocks, so that freed memory can be made available to another allocation request. Using a single free list to keep track of free memory blocks is the simplest method; however, algorithms using multiple free lists often give better performance, especially for systems that require a good and reliable timing response, at the cost of a higher memory footprint [2][5][8]. Even so, dynamic memory allocators have been considered unreliable for embedded systems because of unbounded response times or inefficient memory usage. Allocators for embedded systems should be implemented in their constrained operating systems with the limited available resources in mind; hence they should provide both a bounded response time and an optimal memory footprint [1]. The following are some of the DMA algorithms that have been used so far.

A. Sequential Fit

Sequential fit is the most basic algorithm; it uses a single linear list of all free memory blocks, called the free list. Free blocks are allocated from this list in one of four ways [13]:

First fit: The First Fit algorithm searches the free list from the beginning for the first free memory block large enough to satisfy the allocation request. If the block found is too large, the allocator splits it and the remaining part is put back into the free list.

Next fit: The Next Fit algorithm is similar to First Fit, except that the search starts from the point where the last search ended rather than from the beginning of the free list.

Best fit: The free list is searched exhaustively, returning the smallest block large enough to satisfy the request. While the Best Fit algorithm has good memory utilization, in the general case it suffers from long execution times.

Worst fit: The Worst Fit algorithm always selects the largest free memory block to satisfy the requested allocation.
In practice this tends to work quite poorly, because it quickly eliminates the large blocks, so that eventually large requests cannot be met. The Worst Fit algorithm works well only when the largest possible allocation is much smaller than the heap size.

Advantage: Sequential fit is simple to understand and easy to apply.

Disadvantage: As the memory available for allocation becomes large, the search time becomes large and in the worst case can be proportional to the memory size. As the free list grows, the memory used to store it grows as well, so the memory overhead becomes large. Sequential fit is not a good strategy for an RTOS because it is based on a sequential search whose cost depends on the number of existing free blocks; this search can be bounded, but the bound is normally not acceptable.
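As a sketch of how the first-fit policy walks that single free list, consider the following C fragment. It reuses the hypothetical block_header_t introduced in Section 2, omits splitting, coalescing and alignment, and is only meant to make the linear worst-case search visible:

```c
/* First fit: return the first free block whose size is sufficient.
 * The loop length, and therefore the worst-case allocation time,
 * grows with the number of free blocks, which is exactly the
 * weakness noted above. */
static void *first_fit_alloc(size_t size)
{
    for (block_header_t *b = free_list; b != NULL; b = b->next) {
        if (b->is_free && b->size >= size) {
            b->is_free = 0;          /* mark the block as allocated      */
            return (void *)(b + 1);  /* payload begins after the header  */
        }
    }
    return NULL;                     /* no sufficiently large block      */
}
```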

B. Buddy Allocator System

A buddy allocator uses an array of free lists, one for each allowable block size (for example, in a binary buddy system the allowable block sizes are powers of 2) [10]. The buddy allocator rounds the requested size up to an allowable size and allocates from the corresponding free list. If that free list is empty, a larger block is selected from another free list and split. A block may only be split into a pair of buddies (of the same size in the case of a binary buddy system), and a freed block may only be coalesced with its buddy, which is possible only if the buddy has not itself been split into smaller blocks [10].

Advantage: The buddy of a block being freed can be found quickly by a simple address computation (sketched after Section E below).

Disadvantage: The restricted set of block sizes leads to high internal fragmentation, as does the limited ability to coalesce.

C. Indexed Fit

Indexed fits are a class of allocation mechanisms that use an indexing data structure, such as a tree or a hash table, to identify suitable free blocks according to the allocation policy [10][13].

Advantage: Because this strategy uses advanced structures to index the free memory blocks, it provides better performance.

D. Segregated Free Lists

Segregated free lists are a class of allocation mechanisms that divide the free list into several subsets according to the size of the free blocks. A freed or coalesced block is placed on the appropriate subset list, and an allocation request is serviced from the appropriate subset list. Segregated free list algorithms can be categorized as follows:

Simple Segregated Storage: Simple segregated storage divides storage into areas and allocates objects of only a single size, or a small range of sizes, within each area. No splitting or coalescing of memory is allowed. This makes allocation fast but may lead to high external fragmentation. If the requested allocation size is not available, a larger memory block is allocated and the unused part of that block cannot be used for other allocations.

Segregated Fit: Similar to simple segregated storage, segregated fit mechanisms use an array of free lists, each holding free blocks of a particular range of sizes. To satisfy an allocation request, the allocator identifies the appropriate free list and allocates from it (often using a sequential fit mechanism such as first fit). If this fails, a larger block is taken from another list and split, and the remainder is put into an appropriate free list. When a memory block is freed, it might be coalesced with an adjacent free block and the result put into the appropriate free list. Although segregated fit algorithms do not suffer from severe fragmentation, they have large search times.

E. Bitmapped Fit

Bitmapped fits are a class of allocation mechanisms that use a bitmap to represent the usage of the heap. Each bit in the map corresponds to a part of the heap, typically a word, and is set if that part is in use [10][11]. In the bitmapped fit mechanism the search time is proportional to the size of the bitmap, and the bitmap itself adds only a small memory overhead.
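For completeness, the "simple address computation" mentioned for the binary buddy system in Section B can be sketched as follows. The function name, the heap_base argument and the assumption of power-of-two block sizes are our own illustrative choices, not part of any specific allocator:

```c
#include <stddef.h>

/* Binary buddy system: if a block of power-of-two size 'size' starts at
 * byte offset 'offset' from the start of the heap, its buddy starts at
 * the offset that differs only in the bit corresponding to 'size'. */
static void *buddy_of(void *heap_base, void *block, size_t size)
{
    size_t offset = (size_t)((char *)block - (char *)heap_base);
    return (char *)heap_base + (offset ^ size);
}
```

For example, with 64-byte blocks the block at offset 128 and the block at offset 192 are buddies, since 128 ^ 64 = 192; no list traversal is needed to find the merge candidate.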
F. Two Level Segregated Fit Algorithm

Two Level Segregated Fit (TLSF) solves the problem of the worst-case bound while maintaining the efficiency of the allocation and de-allocation operations, which makes a reasonable use of dynamic memory management in real-time applications possible. The TLSF algorithm provides explicit allocation and de-allocation of memory. There is an increasing number of emerging applications that use large amounts of memory, such as multimedia systems, video streaming, video surveillance, virtual reality, scientific data gathering, and data acquisition in control systems.

[Figure 3 - TLSF free data structure overview]

TLSF was designed according to the following guidelines:

(i) Immediate coalescing: As soon as a block of memory is freed, TLSF immediately merges it with adjacent free blocks, if any, to build a larger free block. Although delayed coalescing can improve the performance of the allocator, it adds unpredictability (a block request may require merging an unbounded number of freed but not yet merged blocks) and also increases the amount of fragmentation. Therefore, deferred coalescing cannot be used in real-time systems.

(ii) Splitting threshold: The smallest block of memory allocated is 8 bytes. By limiting the minimum block size, it is possible to store the information needed to manage the free blocks inside the free blocks themselves, thereby optimizing memory usage; the list of free blocks is kept within the free memory blocks rather than in a separate structure.

(iii) Good-fit strategy: TLSF returns the smallest chunk big enough to hold the requested block. Since most applications use only a small range of sizes, best fit tends to produce the least fragmentation on real loads compared to other general approaches such as first fit [7][2]. Also, a best-fit (or an almost-best-fit, also called good-fit) strategy can be implemented in an efficient and predictable way using segregated free lists, whereas other strategies such as first-fit or next-fit are difficult to implement with a predictable algorithm; depending on the request sequence, a first-fit strategy can degenerate into a long sequential search through a linked list. TLSF implements a good-fit strategy, that is, it uses a large set of segregated free lists, where each list is an unordered list of free blocks whose sizes fall between one size class and the next.

(iv) No reallocation: It is assumed that the original memory pool is a single large block of free memory and that no sbrk() function is available.

(v) Same strategy for all block sizes: The same allocation strategy and policy is used for any requested size.

(vi) Memory is not cleaned up: Neither the initial pool nor the free blocks are zeroed. It is assumed that TLSF will be used in a trusted environment where applications are written by well-intentioned programmers; initializing the memory would therefore add overhead for a useless feature. The programmer has to initialize all the data he allocates, as good programming practice recommends.

To implement a good-fit strategy the two level segregated fit algorithm uses a segregated fit mechanism. The basic segregated fit mechanism uses an array of free lists, with each list holding free blocks within a size class. In order to speed up access to the free blocks and to manage a large set of segregated lists (to reduce fragmentation), the array of lists is organized as a two-level array. The first level divides free blocks into classes that are a power of two apart, and the second level subdivides each first-level class linearly, where the number of divisions can be determined by the user. Each array of lists has an associated bitmap that marks which lists are empty and which contain free blocks; this bitmap helps in finding a free block to satisfy a memory request. Information regarding each block is stored inside the block itself [3][4]. In Two Level Segregated Fit there is extremely low overhead per allocation.
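To make the two-level indexing concrete, the sketch below maps a requested size to a first-level and a second-level list index. It is our own simplified illustration (fixing the second-level subdivision at 16 lists and clamping very small requests), not the authors' reference implementation:

```c
#include <stddef.h>

#define SL_BITS  4                   /* log2 of second-level lists per class */
#define SL_COUNT (1 << SL_BITS)      /* 16 second-level lists                */

/* Index of the most significant set bit (portable illustration; real
 * allocators typically use a count-leading-zeros instruction). */
static int fls_size(size_t x)
{
    int msb = -1;
    while (x != 0) { msb++; x >>= 1; }
    return msb;
}

/* Map a request size to (first level, second level):
 *   first level  = power-of-two class of the size,
 *   second level = which of the SL_COUNT linear subranges it falls in. */
static void tlsf_mapping(size_t size, int *fl, int *sl)
{
    if (size < (size_t)SL_COUNT)
        size = SL_COUNT;             /* clamp tiny requests for simplicity */
    *fl = fls_size(size);
    *sl = (int)((size >> (*fl - SL_BITS)) & (SL_COUNT - 1));
}
```

A request of 460 bytes, for instance, maps to first-level index 8 (the 256 to 511 byte class) and second-level index 12; the bitmaps described above are then consulted starting from these indices to find the first non-empty list.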
G. Smart Memory Allocator Algorithm

The Smart Dynamic Memory Allocator is a newer allocation approach that allows designing a custom dynamic memory allocation mechanism with reduced memory fragmentation and superb response time. Here, long-lived memory blocks are handled by the allocator independently from short-lived memory blocks, and the heap grows in a different direction for the two types of blocks: short-lived blocks are allocated from the bottom of the heap towards the top, and long-lived blocks from the top towards the bottom. The used space of the heap therefore grows from both sides.
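A highly simplified sketch of such a two-ended heap is shown below. It is our own illustration of the idea only, not the allocator of [1]: the 400-byte heap size mirrors the worked example in the next paragraph, and headers, alignment, freeing and lifetime prediction are all omitted:

```c
#include <stddef.h>

#define HEAP_SIZE 400

static unsigned char heap[HEAP_SIZE];
static size_t bottom = 0;          /* first free byte, short-lived side       */
static size_t top    = HEAP_SIZE;  /* one past the last free byte, long-lived */

/* Short-lived blocks are carved off the bottom of the heap, growing upward. */
static void *alloc_short_lived(size_t size)
{
    if (top - bottom < size)
        return NULL;                   /* the two ends would collide */
    void *p = &heap[bottom];
    bottom += size;
    return p;
}

/* Long-lived blocks are carved off the top of the heap, growing downward. */
static void *alloc_long_lived(size_t size)
{
    if (top - bottom < size)
        return NULL;
    top -= size;
    return &heap[top];
}
```

With the 400-byte heap of the example that follows, alloc_short_lived(16) would return the bottom 16 bytes and alloc_long_lived(64) the top 64 bytes, leaving a single contiguous free region of 320 bytes between the two ends.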

Initially the whole heap memory is free and there is only one free block each for the short-lived and the long-lived memory pool. The heap space is initially divided into two blocks (with no physical boundary) in predefined proportions, one for short-lived and the other for long-lived objects. As the heap grows from both sides, the virtual boundary between the short-lived memory pool and the long-lived memory pool can easily be moved according to the run-time memory requirements. For instance, let the heap size be 400 bytes, with each of the short-lived and long-lived memory pools holding a single free block of 200 bytes. In response to a memory request of 16 bytes (say, a short-lived object), the bottom 16 bytes of the free block corresponding to short-lived blocks are allocated for the request. In contrast, upon a memory request of 64 bytes (say, a long-lived object), the top 64 bytes of the free block corresponding to long-lived blocks are allocated for the request.

The smart dynamic memory allocator was designed particularly for embedded systems, which have limited memory and processing power. The allocator predicts which objects will be short-lived and allocates those objects on one side of the heap and the remaining objects on the other side, for effective use of the memory footprint. It reduces memory fragmentation by using this directional, adaptive allocation based on predicted object lifetimes; the lifetime can be predicted using a combination of the block size and the number of allocation events [1]. The allocator is implemented with an improved multi-level segregated mechanism using lookup tables and hierarchical bitmaps, which provides a very good response time and dependable timing performance.

The smart memory allocator uses a large number of free lists, each keeping free blocks of sizes within a predefined range that belong to a specific block type. As shown in Figure 3 above, the free lists are organized into three-level arrays so that short-lived memory blocks can be managed independently from long-lived memory blocks without extra search overhead. The first level divides the free lists into free-list-classes (for example, block sizes between 2^14 and (2^15 - 1) fall under the 14th free-list-class), each class of free lists is further subdivided into free-list-sets, with each set keeping free blocks of a size within a predefined range (the dynamic range of the class is subdivided linearly among all the free-list-sets). Finally, each free-list-set is divided into two free-list-ways, one corresponding to short-lived and the other to long-lived blocks [1].

3. COMPARISON BETWEEN DIFFERENT MEMORY ALLOCATION ALGORITHMS

The comparison between the various memory management algorithms on the basis of the resulting fragmentation, response time and memory footprint usage is shown in Table 1.

Algorithm                  Fragmentation   Memory Footprint Usage   Response Time
Sequential fit             Large           Maximum                  Slow
Buddy System               Large           Maximum                  Fast
Indexed fit                Large           Maximum                  Fast
Segregated Free Lists      Large           Maximum                  Fast
Bitmapped fit              Large           Maximum                  Fast
TLSF                       Small           Minimum                  Faster
Smart Memory Allocator     Reduced         Minimum                  Excellent

Table 1 - Comparison between Different Memory Allocation Algorithms

4. CONCLUSION

Memory allocation is one of the most important parts of an RTOS-based embedded system, and different memory allocation algorithms are available.
The major challenges for a memory allocator are minimizing fragmentation, providing a good response time, and maintaining good locality among the memory blocks. This paper has presented the various memory allocators that aim to address these major challenges of memory allocation, along with their merits and demerits.

5. REFERENCES

1. M. Ramakrishna, Jisung Kim, Woohyong Lee and Youngki Chung, "Smart Dynamic Memory Allocator for Embedded Systems," in Proceedings of the 23rd International Symposium on Computer and Information Sciences (ISCIS '08), 27-29 Oct. 2008.
2. I. Puaut, "Real-Time Performance of Dynamic Memory Allocation Algorithms," in Proceedings of the 14th Euromicro Conference on Real-Time Systems (ECRTS '02), June 2002.
3. M. Masmano, I. Ripoll and A. Crespo, "TLSF: A New Dynamic Memory Allocator for Real-Time Systems," in Proceedings of the 16th Euromicro Conference on Real-Time Systems, Catania, Italy, July 2004, pp. 79-88.
4. XiaoHui Sun, JinLin Wang and Xiao Chan, "An Improvement of TLSF Algorithm."
5. P. R. Wilson, M. S. Johnstone, M. Neely and D. Boles, "Dynamic Memory Allocation: A Survey and Critical Review," in Proceedings of the International Workshop on Memory Management (IWMM '95), Sep. 1995.
6. M. S. Johnstone, "Fragmentation Problem: Solved?," in Proceedings of the International Symposium on Memory Management (ISMM '98), Vancouver, Canada, ACM Press.
7. Robert L. Budzinski and Edward S. Davidson, "A Comparison of Dynamic and Static Virtual Memory Allocation Algorithms," IEEE Transactions on Software Engineering, Vol. SE-7, No. 1, January 1981.
8. Takeshi Ogasawara, "An Algorithm with Constant Execution Time for Dynamic Memory Allocation," in Proceedings of the Second International Workshop on Real-Time Computing Systems and Applications, 25-27 Oct. 1995, pp. 21-25.
10. David R. Hanson, "Fast Allocation and Deallocation of Memory Based on Object Lifetimes," Software - Practice and Experience, Vol. 20(1), Jan. 1990, pp. 5-12.
11. Mohamed A. Shalan, "Dynamic Memory Management for Embedded Real-Time Multiprocessor System on a Chip," Ph.D. thesis, School of Electrical and Computer Engineering, Georgia Institute of Technology, November 2003.
12. Kenny Bridgeson, Information System Management, Lotus Press, pp. 36-38.
13. S. Baskiyar and N. Meghanathan, "A Survey of Contemporary Real-Time Operating Systems," Informatica 29 (2005), pp. 233-240.
The Memory Management Glossary, http://www.memorymanagement.org/glossary/b.html
14. W. S. Liu, Real-Time Systems, Pearson Education, pp. 20-40.
15. M. Masmano, I. Ripoll and A. Crespo, "Dynamic Storage Allocation for Real-Time Embedded Systems," Universidad Politécnica de Valencia, Spain.