Task Priority Optimization in Real-Time Multi-Core Embedded Systems

Erjola Lalo, Michael Deubzer, Stefan Schmidhuber, Erna Oklapi, Jürgen Mottok

Ostbayerische Technische Hochschule (OTH) Regensburg, LaS3 - Laboratory for Safe and Secure Systems, Faculty of Electrical Engineering and Information Technology, Seybothstraße 2, 93053 Regensburg, Germany, {erjola.lalo, erna.oklapi, stefan.schmidhuber, juergen.mottok}@oth-regensburg.de

Timing-Architects Embedded Systems GmbH, Bruderwöhrdstr. 15b, 93055 Regensburg, Germany, michael.deubzer@timing-architects.com

Abstract
The shift from single-core to multi-core processors in real-time embedded systems leads to communication-based effects on timing, such as inter-core communication delays and blocking times. Moreover, the complexity of the scheduling problem increases when multi-core processors are used. In priority-based scheduling, a fixed priority assignment is used in order to enable predictable behavior of the system. Predictability means that the system has to be analyzable, which allows the detection of problems resulting from scheduling decisions. For fixed-priority scheduling in multi-core real-time embedded systems, task priorities have to be assigned in a way that minimizes the effects on timing. In this paper, we present an approach for finding near-optimal solutions to the task priority assignment and preemption/cooperation problem. A genetic algorithm is used to create priority assignment solutions. A timing simulator evaluates each solution with regard to real-time properties, memory consumption and communication overhead. In a case study, we demonstrate that the proposed approach performs better than well-known, single-core-optimal heuristics for relatively complex systems.

Index Terms: priority optimization; priority assignment; embedded systems; genetic algorithms; multi-core; multi-core scheduling

I.
INTRODUCTION

A real-time embedded system functions correctly if and only if computational results are correct and produced in time [2]. Deadlines are defined for the system's tasks to quantify to what extent computational results are produced in time. A deadline represents the upper limit for the response time of a task, which is calculated as the difference between its activation time and its finishing time. In order to ensure that the tasks meet their deadlines, the multi-core scheduling problem has to be solved [1].

A. Problem Definition

The multi-core scheduling problem deals with the questions where (when using partitioned scheduling, tasks have to be mapped to cores) and in which order the tasks should execute. In this work, we only deal with the latter question. Because a real-time multi-core system has to be predictable, fixed-priority scheduling is used in this work. Predictability means that the system should be analyzable to predict the consequences of any scheduling decision. This paper deals with the priority optimization of tasks in fixed-priority scheduling with the goal of fulfilling the aforementioned timing constraints as well as possible. We consider both preemptive and cooperative scheduling on multi-core processors. In preemptive scheduling, a ready task with higher priority can preempt a running task with lower priority at any time during execution. In cooperative scheduling, tasks cannot preempt each other at arbitrary times during execution. Instead, they preempt each other only at re-scheduling points, where re-scheduling decisions are taken by the scheduler. The position of a scheduling point inside a task is assumed to be decided by the developer, and this decision is often motivated by data-consistency needs. The task prioritization approach presented in this paper is applicable to single-core processors as well.
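The response-time and deadline notions introduced above reduce to a simple computation. The following sketch is illustrative only; the function names and the convention that the deadline is given relative to the activation time are assumptions of this sketch, not taken from the paper.

```python
def response_time(activation_time: float, finishing_time: float) -> float:
    """Response time of a task instance: finishing time minus activation time."""
    return finishing_time - activation_time

def meets_deadline(activation_time: float, finishing_time: float,
                   relative_deadline: float) -> bool:
    """A deadline is the upper limit for the response time of a task."""
    return response_time(activation_time, finishing_time) <= relative_deadline

# An instance activated at t = 0 ms that finishes at t = 12 ms
# meets a relative deadline of 15 ms.
print(meets_deadline(0.0, 12.0, 15.0))  # True
```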
In systems with simple task sets, the prioritization of tasks is straightforward, but for complex systems with multiple types of tasks running concurrently on a multi-core processor, it is important to consider every possible effect. More information regarding the different types of tasks is provided in Section II-C of this paper. In the following, a formal definition of an embedded system is given.

Definition 1: A real-time embedded system S = (δ, γ, β, σ) consists of a task set δ, a processing unit γ, system resources β and a set of schedulers σ, where:

δ is the task set of cardinality n, where T_j is the j-th task, with j = 1…n. A task is any subdivided part of the overall system according to its real-time requirements. It serves as a framework for the execution of functions. Each task consists of a finite sequence of identical activities, or jobs, called instances. Let J be the set of instances J_j,i of task T_j, with i = 1…p. J_j,i then denotes the i-th instance of the j-th task.

γ is the processing unit, which consists of a set of processors γ_m, with m = 1…r. Each processor consists of a set
of cores α = {C_m,1, C_m,2, …, C_m,k, …, C_m,p}, with k = 1…p. Without loss of generality, we assume a homogeneous multi-core processor.

β = {B_1, B_2, …, B_l, …, B_q} is the set of shared resources available in the system, with l = 1…q.

σ is the set of schedulers in the system. Schedulers are either local or global. In local scheduling, there is a local priority queue for each processing element, whereas in global scheduling there is a single priority queue shared by all processing elements [18] [4].

In multi-core systems, task response times can be affected by task interference due to accesses to common resources. For example, two tasks T_1 and T_2 running on different cores can delay one another because of the exclusive hold of a semaphore. Thus, task T_1 is delayed, or blocked from execution, while waiting for the release of the semaphore held by task T_2. In such circumstances, the response time of task T_1 increases by the waiting time. In the worst case, the increased response time can lead to a deadline violation. Another possible impact on timing is enforced migration, where tasks are divided into parts that execute on different cores in a fixed order. Besides the additional scheduling overhead, a near-optimal priority has to be found that is applicable to each core the individual task parts run on. In this paper, we consider a system without enforced migrations of tasks; we will, however, consider them in our future work. Other core-communication-based effects result from inter-process activations. Inter-process activation means that a task is activated through events by other tasks; it is considered in this work. The activation of inter-process activated tasks depends not only on the activation pattern itself but also on possible delays and blocking effects of the activating tasks, even across cores.

B.
Motivation

In the following paragraphs, we show three practical scenarios in which a near-optimal prioritization of tasks is necessary.

1) Minimization of Response Time: Minimization of response time (RT) is achieved through the avoidance of any possible task delays. Delays can occur because of preemptive and cooperative task suspensions. Hence, a task T_j ∈ δ is delayed as a result of the activity of another task T_j+1 ∈ δ. Assume the priority of task T_j is higher than the priority of task T_j+1 and both tasks run on the same core. Task T_j+1 is delayed while T_j is running on the core; therefore, RT_j+1 increases. Minimizing the response time of each task reduces the tasks' mutual impact and prevents latencies. In extreme cases, long delays can cause timing-constraint violations. Figure 1 shows the simulation results of a dual-core system composed of two preemptive tasks, TASK_1 and TASK_2. The priority of TASK_1 is lower than that of TASK_2; hence, TASK_1 is delayed by TASK_2.

Figure 1. Task delay and effects on response time

In addition, delays of tasks can occur because of concurrent accesses to shared resources. A long delay while waiting for access to a shared resource can cause timing-constraint violations.

2) Minimization of Timing-Constraint Violations: In Section I-A, the effects of common access to shared resources in multi-core processors were introduced. In the worst case, as a result of concurrent accesses, tasks can remain in a waiting state until a deadline violation occurs. Figure 2 shows the simulation result of a dual-core system. TASK_1 and TASK_2 run on different cores and both request the semaphore SEM_1. TASK_2 first acquires the semaphore, and TASK_1 enters a waiting state at t = 12,125 µs; it cannot continue executing until the semaphore is released at t = 12,375 µs. As a result, a deadline violation of TASK_1 occurs.

Figure 2.
Deadline Violations

In addition, TASK_4, which runs on the same core as TASK_2, misses its deadline. The priority of TASK_4 is lower than that of TASK_2, so it cannot preempt TASK_2. We see that the waiting for the semaphore also affects the execution behavior of TASK_4.

3) Task Switching Overhead Minimization: A task can be preempted often during its execution because of higher-priority tasks. More precisely, a chain of preemptions can occur: task T_j ∈ δ is preempted by T_j+1; T_j+1 is later preempted by T_j+2, which is then preempted by T_j+3, and so on. For every preemption, the operating system has to save the context of the preempted task. Task switching overhead results in a time overhead and in an increase of the required memory size. Figure 3 shows the simulation results of a system with high preemption overhead as a result of a bad priority assignment. In this paper, we present a model-based approach for task priority optimization. The system is abstracted in a model,
Figure 3. Hierarchical Preemption and Overhead

which is thoroughly described in the next section. Subsequently, in Section III, we present the priority optimization approach itself. Moreover, in the case study we show a complex system on which the presented prioritization approach performs better than known heuristics.

II. SYSTEM MODEL

In this section, we provide a description of the real-time embedded system to which priority optimization is applied. The system description is expanded with more details regarding the types of tasks and their categorization. Moreover, the system is abstracted in a model, which is described in the following subsection.

A. Timing Model

The Timing Model describes all information regarding an embedded real-time system that is necessary to perform a timing simulation [17]. It consists of a description of hardware and software components, as well as configuration and mapping information. The Hardware model includes information regarding processors, cores, quartzes and memory modules. The Software model contains detailed information about processes (tasks and ISRs), runnables and signals. Signals represent the data that is read and written by the smallest units within tasks, which are called runnables. A runnable represents a software module that consists of instructions, signal reads/writes and calls to other runnables. Regarding scheduling, the timing model keeps information on task properties such as priority, preemptability and MTA (the maximum number of concurrent instances of a task). The Operating System model includes the list of schedulers, the scheduling algorithms used, the cores they manage and the list of semaphores present in the system. In the Stimulation section, information regarding the activation pattern of tasks and ISRs is defined by stimuli. A stimulus is a trigger that activates processes for execution. The following stimulus types are possible: periodic, sporadic, inter-process activated and single stimulus.
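The structure of such a timing model can be sketched with a few illustrative types. This is a hypothetical, minimal sketch; the class names, fields and the flat list representation are assumptions of this sketch and not the actual TA Toolsuite schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Runnable:
    name: str
    instructions: int                 # abstract execution cost
    read_signals: List[str] = field(default_factory=list)
    write_signals: List[str] = field(default_factory=list)

@dataclass
class Task:
    name: str
    priority: int
    preemptable: bool                 # preemptive vs. cooperative task
    mta: int                          # max. number of concurrent instances
    runnables: List[Runnable] = field(default_factory=list)

@dataclass
class Stimulus:
    task: str
    kind: str                         # "periodic", "sporadic", "inter-process", "single"
    period_ms: float = 0.0            # recurrence for periodic stimuli

# One task with a single runnable writing signal "S1", activated every 100 ms.
model = [Task("Task1", priority=3, preemptable=True, mta=1,
              runnables=[Runnable("R1", instructions=1000,
                                  write_signals=["S1"])])]
stimuli = [Stimulus(task="Task1", kind="periodic", period_ms=100.0)]
```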
Additional clocks allow the modification of the activation during simulation. Mapping information includes the mapping of processes to schedulers and of stimuli to processes.

B. Properties of Tasks

Each instance or job J_j,i of a task is characterized by the tuple (E_j,i, R_j,i, D_j,i, O_j,i, P_j,i). Hereby, instance J_j,i has:
1) Computation Time, i.e. Net Execution Time E_j,i;
2) Period R_j,i, the distance between the activation of one instance J_j,i of the task and the activation of the next instance J_j,i+1;
3) Deadline D_j,i, the point in time before which the task has to finish execution;
4) Offset O_j,i, the activation time of the first instance;
5) Priority P_j,i, a value by which the scheduler decides which task may execute first. The priority is the same for each instance of a task T_j.

A real-time task T_j is characterized by timing constraints and computational parameters. Timing constraints refer to invariant properties of the task that must be met in order to obtain the desired execution behavior. A very important timing constraint is the deadline. Computational parameters refer to evaluation parameters used to check whether the requirements hold. They consist of metrics for the evaluation of the timing results of each task and of other entities of the system. A metric is an indicator or a measurement of a specific property to determine whether one particular requirement is met. The response time RT_i of a task instance i, for example, is the time between the activation time and the termination time. It measures the whole life cycle of a process instance. Every interference resulting in a longer execution time or in a delayed start influences this time [5].

C. Task Categorization

Tasks are categorized based on different attributes.

- Regarding Activation: Regarding the type of activation, tasks are categorized as follows:
1) Periodic Tasks are tasks with regular activation.
Each instance J_j,i of a task T_j is activated with the period R_j; in other words, the period of each instance is equal.
2) Sporadic Tasks are tasks with irregular activation. For sporadic tasks, the period is the minimal inter-arrival time between two successive jobs; hence, the period of each instance J_j,i of a task T_j can differ.
3) Chained Tasks are tasks activated through events by other tasks. The activation of a chained task depends on the activation of the activating task.
4) Single Tasks are tasks activated a single time in a specific time slice. Multiple activations are possible through multiple single stimuli.

- Regarding Offset [16]: Based on their offset, tasks are categorized into three types of sets:
1) Synchronous Sets are groups of tasks with the same offset, e.g. offset 0.
2) Asynchronous Sets are groups of tasks whose offsets are determined by constraints of the system. The offsets can be equal or different for different tasks within the system.
3) Offset Free Sets have no constraints on offsets, and tasks are selected for execution by the scheduling algorithm.
- Regarding Dependency: Based on dependency, tasks are categorized into two groups:
1) Dependent tasks are tasks that share common data or access common resources. Task dependencies are of two types:
a) Sequential Dependency: Tasks need results from other tasks in order to execute or continue executing. For example, task T_j writes a signal and task T_j+1 needs to read it after it is written by T_j. Hence, task T_j+1 necessarily has to execute after the signal has been written.
b) Resource Dependency: This dependency arises in case of accesses to shared resources in exclusive mode. These resources are called critical resources or non-preemptive sections, e.g. semaphores, devices and certain data structures.
2) Independent tasks are tasks that have no dependencies.

D. Evaluation

In this work, we use the event-based simulator of the TA Toolsuite [5] for the evaluation of solutions. It simulates a timing model and stores task state-transition events such as activate, start and terminate in a trace file with respective timestamps. By applying statistical estimators to the simulation trace, metrics like the response times of tasks can be calculated. Compared to evaluation by response time analysis, simulation is more feasible for complex systems. The systems we focus on are not purely classified systems, but complex systems with different execution times and activation patterns, which would require a highly sophisticated response time analysis. To the best of our knowledge, such an analysis is not yet available for the systems we consider.

III. PRIORITY OPTIMIZATION

In this section, we discuss the proposed priority optimization approach in detail.

A. Introduction

Priority optimization in this paper targets a complex embedded system as described in Section II. We identified the following requirements for task prioritization:
1) No restrictions should exist on the types of tasks regarding activation, offset and dependency.
All types of tasks described in Section II-C can be present in the system.
2) After the assignment of priorities, the system should still be schedulable in the sense that no timing-constraint violations occur and response times are minimized.
3) In a multi-core environment, every possible inter-core effect has to be handled.
4) Besides task priority optimization, the algorithm has to find a near-optimal preemption and cooperation level between tasks.

Definition 2: Prioritization of the tasks T_j ∈ δ of the system S is the function problem ζ : T_j → (P_j, G_j) of near-optimally assigning a scalar value called priority P_j to each instance J_j,i of task T_j and grouping the tasks into prioritization groups G_j that define the preemption/cooperation level, in such a way that no timing constraints such as deadlines are violated; the system meets its timing requirements and responds fast and in time according to the real-time system specification [2].

Addressing all of the above for a complex system in one algorithm is a very tough task. Inspired by the requirements of the system and the feasibility of the evolutionary algorithms used so far in priority optimization [13] [12], the prioritization problem is solved through a genetic algorithm. Other algorithms we know of concerning task prioritization are heuristics and algorithms based on schedulability tests. We have seen that heuristics do not always work for systems with dependent tasks, and algorithms based on schedulability tests require computationally expensive calculations and response time analysis. In Section III-D we present the genetic algorithm in more detail.

B. State-of-the-Art

In 1973, Liu and Layland [6] showed that Rate Monotonic Priority Assignment (RMPA) is optimal for independent synchronous tasks that share a common release time and have an implicit deadline. In 1982, Leung and Whitehead [8] showed that Deadline Monotonic Priority Assignment (DMPA) is optimal for independent synchronous tasks with constrained deadlines.
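For reference, both classic heuristics reduce to sorting: RMPA orders tasks by period and DMPA by relative deadline, with the shortest value receiving the highest priority. The sketch below is illustrative; the tuple-based task representation and the convention that a larger number means a higher priority are assumptions of this sketch.

```python
def rmpa(tasks):
    """Rate Monotonic Priority Assignment: the shorter the period, the
    higher the priority. `tasks` is a list of (name, period, deadline)
    tuples; returns {name: priority}, where a larger number means a
    higher priority (a convention assumed for this sketch)."""
    ordered = sorted(tasks, key=lambda t: t[1], reverse=True)
    return {name: prio for prio, (name, _, _) in enumerate(ordered, start=1)}

def dmpa(tasks):
    """Deadline Monotonic Priority Assignment: the shorter the relative
    deadline, the higher the priority."""
    ordered = sorted(tasks, key=lambda t: t[2], reverse=True)
    return {name: prio for prio, (name, _, _) in enumerate(ordered, start=1)}

# Three independent tasks: (name, period [ms], relative deadline [ms])
tasks = [("T1", 100, 70), ("T2", 150, 80), ("T3", 60, 60)]
print(rmpa(tasks))  # T3 (shortest period) receives the highest priority value
print(dmpa(tasks))  # T3 (shortest deadline) as well
```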
These heuristics are optimal for uniprocessor systems and periodic task sets. No other optimal heuristic or algorithm was available for asynchronous task sets until Audsley [7] devised an optimal algorithm in 1991. Audsley's algorithm is optimal for independent asynchronous task sets with arbitrary deadlines and arbitrary start times that share a critical instant [6]. Audsley's Optimal Priority Assignment (OPA) algorithm assigns priorities according to a schedulability test. Davis and Burns [9] showed that OPA is applicable in the multiprocessor case using fixed task-priority preemptive scheduling. They also showed that DMPA appears to perform poorly in the multiprocessor case. The system model that Davis and Burns define has periodic and aperiodic tasks; the tasks are independent and all are assumed to have constrained deadlines. M. A. Moncusi et al. [15] propose an offline heuristic for allocating and assigning priorities in distributed hard real-time systems with independent tasks that have an end-to-end deadline. In 2007, R. I. Davis et al. [10] proposed a priority assignment for fixed-priority real-time systems: a robust priority assignment algorithm that finds the robust priority assignment for a wide range of system models and interference functions. In 2009, S. Samii et al. [12] introduced an evolutionary algorithm for task priorities and FlexRay frame identifiers; they assign priorities based on the study of average response times. In 2011, E. Azketa et al. [13] proposed a genetic algorithm for priority assignment for
tasks and messages in distributed hard real-time systems. All algorithms above consider systems with independent task sets. In 2012, Nemati [11] proposed a generalized version of OPA that considers shared resources with simple schedulability tests and is also applicable in the multiprocessor case.

C. Priority Optimization Framework

In our approach, the optimization algorithm takes as input a system with its properties, unassigned attributes, requirements and configurations. In the end, the algorithm produces several solutions of the system with assigned attributes. System properties comprise system information that is considered an intrinsic property of the system; these properties are not changed during optimization but instead serve as input information. System attributes comprise entity properties of the system that can be set or changed by the optimization algorithm. The attributes are the priorities of tasks and the task grouping defining the preemption and cooperation level. Requirements include all requirements that have to be fulfilled by the algorithm to provide an acceptable prioritization. Examples of such requirements are: minimization of the response times of the tasks of system S; for a task T_j ∈ δ, the response time has to fulfill the condition RT_j ≤ D_j. Moreover, the minimization of multi-core effects, i.e. memory requirements or communication delays, is possible. Evaluation is the process of evaluating the quality of the produced solutions; it measures to what extent the requirements are fulfilled.

D. Genetic Algorithm

In genetic algorithms, a set of solutions called the population is used to sample the exploration space. In our approach, a solution is a set of tasks with their attributes: priorities, task grouping, and metrics for timing evaluation. A population, as in real evolution, is inherited or varied from one generation to the next. Keeping the best solutions found in each generation is ensured through fitness-based survival of individuals.
Fitness is a scalar value that measures the quality of a solution. The first set of solutions, the start population of the genetic algorithm, is called the initial population. The genetic algorithm used in this work is an extension of the genetic algorithm of S. Schmidhuber's task allocation optimization work [3]. The genetic algorithm runs through the following steps:

Create Initial Population: The initial population is created through uniform random generation of priorities for each task. A good initial population is a population with a variety of characteristics and differences between solutions.

Evaluation: The process of assigning a fitness value to all solutions.

Selection: The process that selects the solutions with the best fitness.

Variation: Variation is used to create new solutions in each generation. It is performed by crossover and mutation, which play different roles in the genetic algorithm. Mutation is needed to explore new states and thereby avoid local minima; crossover increases the average quality of the population [14].

1) Crossover: Crossover combines the attributes of two or more individuals to create new solutions. In the priority genetic algorithm, we use the single-cut-point crossover operator. The operator selects a random cut point in two parents and combines the information of the two parents by producing two children with inherited genes from both parents.

2) Mutation: Mutation modifies the attributes of existing solutions to create new solutions. In this work, we use several mutation operators introduced in [14]. The purpose of implementing several mutation operators is to experiment and evaluate the best operator for task prioritization.

E. Application of Variation

In general, mutation is applied to the children produced by crossover, in the same step as crossover. The order in which mutation and crossover are applied is chosen to be first crossover and then mutation.
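A single-cut-point crossover and a simple swap mutation on priority vectors can be sketched as follows. This is a simplified illustration, not the actual operators from [14]; in particular, the repair of duplicate priorities needed for permutation encodings is omitted.

```python
import random

def single_point_crossover(parent_a, parent_b, cut=None):
    """Combine two priority vectors at one random cut point, producing
    two children. Duplicate priorities may appear in the children; a
    real permutation encoding would repair them (omitted here)."""
    cut = cut if cut is not None else random.randrange(1, len(parent_a))
    child1 = parent_a[:cut] + parent_b[cut:]
    child2 = parent_b[:cut] + parent_a[cut:]
    return child1, child2

def swap_mutation(solution):
    """Exchange the priorities of two randomly chosen tasks."""
    s = list(solution)
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

a, b = [1, 2, 3, 4], [4, 3, 2, 1]
c1, c2 = single_point_crossover(a, b, cut=2)  # c1 == [1, 2, 2, 1]
```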
While crossover improves the quality of solutions as it tries to find the best solution, it tends to shrink the exploration area. Theoretically, there comes a point at which crossover cannot find any better solution. At this point, applying mutation to increase the exploration space gives crossover the possibility to find other good solutions in the coming generations.

IV. CASE STUDY

The goal of this case study is to show the effectiveness of the priority optimization algorithm proposed in this paper. Therefore, we optimize the priorities and preemptions/cooperations of an artificial embedded real-time system. It consists of a symmetric dual-core processor where each core operates at a clock frequency of 600 MHz. The system consists of 9 tasks, of which 7 tasks are activated periodically, one task is inter-process activated and one task is activated sporadically, modeled by a Weibull distribution. The tasks are scheduled by two schedulers which manage one core each. We assume that each scheduler invocation leads to a scheduling-decision delay of 100 ns. The optimization goal is to minimize response times (mnrt metric). The mnrt metric has an upper bound of 1, and the fitness of a solution equals its mnrt value; a solution with fitness 1 indicates a deadline violation. Table I lists the task properties of the engine management system, namely offset O_i, deadline D_i, recurrence R_i, net execution time E_i as well as the activation type T. The tasks exchange 10 data signals in total, and Task2 and Task6 share the semaphore Sem_1. Schedulers Sched_1 and Sched_2 manage cores C_1 and C_2, respectively. Tasks Task1, Task2, Task3, Task4 are scheduled by Sched_1, whereas tasks Task5, Task6, Task7, Task8 and Task9 are
Table I
TASK PROPERTIES

Task    O_i [ms]  D_i [ms]  R_i [ms]         E_i [ms]  T (1)
Task1   -         70        100              15        E
Task2   5         80        150              20        P
Task3   0         90        150              30        P
Task4   50        140       250              25        P
Task5   4         60        60               10        P
Task6   5         40        150              10        P
Task7   5         50        100              15        P
Task8   10        70        300              20        P
Task9   -         150       134/146/162 (2)  30        S

(1) Task activation type: E - IPA event, P - periodic, S - sporadic (Weibull)
(2) Min/Avg/Max, respectively

scheduled by Sched_2. The priority optimization algorithm is configured in the following way: the initial population consists of 100 solutions, and 50 parent solutions are selected to create 50 offspring solutions in every subsequent generation. The algorithm has been configured to stop after 100 generations. For creating offspring solutions, the IVM mutation operator and the single-cut-point crossover operator were used. The mutation rate was set to 30%, whereas the crossover rate was set to 70%. Each solution has been simulated with version 13.14.0 of the TA Toolsuite simulator, using a simulation time of 10 s. After the optimization run, we compared the best solution produced by priority optimization against the heuristics. Figure 4 shows the comparison of the normalized response time for each task under the Priority Genetic Algorithm (PGA), the DMPA heuristic and the RMPA heuristic. Priorities assigned by DMPA lead to a deadline violation for Task8. Deadline violations also occur for Task6 and Task9 when using the RMPA heuristic for priority assignment. In contrast, the best solution from the priority genetic algorithm does not lead to deadline violations, and task response times are minimized in general, which leads to a minimized mnrt for the overall system. Figure 5 shows the fitness of the best solution provided by the priority genetic algorithm and the fitness of the DMPA and RMPA heuristics. For the embedded system used in this case study, the comparison indicates that the priority genetic algorithm performs better than the heuristics. The performance of the heuristics heavily depends on the system under evaluation.
As a result, the heuristics may perform equally to the priority genetic algorithm for some systems.

V. CONCLUSION

In this work, we presented a model-based approach for the priority optimization of real-time tasks in fixed-priority scheduling. We showed that for complex systems with dependent tasks, the genetic algorithm finds better solutions than the DMPA and RMPA heuristics. For such systems, the heuristics can no longer yield optimal priority assignments.

Figure 4. Comparison of the normalized response time for each task with priorities assigned according to the Priority Genetic Algorithm, the DMPA heuristic and the RMPA heuristic

Figure 5. Comparison of the fitness values of the priority genetic algorithm and the deadline and rate monotonic heuristics

The genetic algorithm minimizes task response times and therefore the mnrt of the overall system. It is important to note that the performance of the heuristics strongly depends on the type and complexity of the system. Future research in this field will include improving the optimization of preemptive and cooperative suspensions between tasks with the aim of further improving task response times. Moreover, we will also consider split tasks with partial execution on different cores through enforced migration events. Our aim is to find a near-optimal priority for split tasks such that the priority doesn't change during migration.

REFERENCES

[1] R. I. Davis and A. Burns, "Improved priority assignment for global fixed priority pre-emptive scheduling in multiprocessor real-time systems," Real-Time Syst., vol. 47, no. 1, pp. 1-40, Sep. 2010.
[2] G. Buttazzo, Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications, 2005.
[3] S. Schmidhuber, M. Deubzer, and J. Mottok, "Genetic optimization of real-time multicore systems with respect to communication-based metrics," in Proceedings of the 2nd Applied Research Conference, ISBN 978-3-8440-1093-0, pp. 21-25, SHAKER Verlag, June 2012.
[4] M. Deubzer, Robust Scheduling of Real-Time Applications on Efficient Embedded Multicore Systems, 2011.
[5] Timing-Architects, TA Toolsuite Version 13.14.0, TA Academic and Research License Program, 2013. http://www.timing-architects.com
[6] C. L. Liu and J. W. Layland, "Scheduling algorithms for multiprogramming in a hard-real-time environment," no. 1, pp. 46-61, 1973.
[7] N. C. Audsley, Optimal Priority Assignment and Feasibility of Static Priority Tasks with Arbitrary Start Times, Nov. 1991.
[8] J. Y. T. Leung and J. Whitehead, "On the complexity of fixed-priority scheduling of periodic real-time tasks."
[9] R. I. Davis and A. Burns, "Priority assignment for global fixed priority pre-emptive scheduling in multiprocessor real-time systems."
[10] R. I. Davis and A. Burns, "Robust priority assignment for fixed priority real-time systems," 28th IEEE Int. Real-Time Syst. Symp. (RTSS 2007), vol. 2, pp. 3-14, Dec. 2007.
[11] F. Nemati, Resource Sharing in Real-Time Systems on Multiprocessors, 2012.
[12] S. Samii, Y. Yin, Z. Peng, P. Eles, and Y. Zhang, "Immune genetic algorithms for optimization of task priorities and FlexRay frame identifiers," 2009 15th IEEE Int. Conf. Embed. Real-Time Comput. Syst. Appl., pp. 486-493, Aug. 2009.
[13] E. Azketa, J. P. Uribe, J. J. Gutiérrez, and M. Marcos, "Permutational genetic algorithm for the optimized assignment of priorities to tasks and messages in distributed real-time systems," 2011.
[14] P. Larrañaga, R. H. Murga, I. Inza, and S. Dizdarevic, "Genetic algorithms for the travelling salesman problem: a review of representations and operators," pp. 129-170, 1999.
[15] M. A. Moncusi and J. M. Banus, "A new heuristic algorithm to assign priorities and resources to tasks with end-to-end deadlines."
[16] R. I. Davis and A. Burns, "A survey of hard real-time scheduling for multiprocessor systems," vol. 1, no. 216682, 2009.
[17] E. Lalo, Task Priority Optimization in Multi-Core Embedded Systems, Regensburg, 2013.
[18] L. Torres, Energy-aware Scheduling for Multiprocessor Real-Time Systems, Université de Nice Sophia Antipolis, 2011.