Embedded Systems: Real Time Systems (Part I). Real Time Operating System (RTOS): Definition and Characteristics


Dr. Jeff Jackson, Lecture 12

Real Time Operating System (RTOS): Definition and Characteristics
- A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests.
- A key characteristic of an RTOS is the consistency of the amount of time it takes to accept and complete an application's task. Variability in this acceptance and completion time is called jitter; a hard RTOS has less jitter than a soft RTOS.
- The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category: an RTOS that can usually or generally meet a deadline is a soft real-time OS, while one that can meet deadlines deterministically is a hard real-time OS.

Foreground/Background Systems
- Small systems of low complexity are generally designed as foreground/background systems.
- The application consists of an infinite loop that calls modules (functions) to perform the desired operations (the background). Interrupt service routines (ISRs) handle asynchronous events (the foreground).
- Foreground is also called interrupt level; background is called task level.

Foreground/Background Systems (continued)
- Critical operations must be performed by the ISRs to ensure that they are dealt with in a timely fashion; because of this, ISRs have a tendency to take longer than they should.
- Information that an ISR makes available for a background module is not processed until the background routine gets its turn to execute; this delay is called the task-level response. The worst-case task-level response time depends on how long the background loop takes to execute.
- Most high-volume microcontroller-based applications (e.g., microwave ovens, telephones, toys, and so on) are designed as foreground/background systems. A minimal super-loop sketch follows.
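A minimal super-loop sketch in C, assuming hypothetical device and application hooks (uart_read_byte(), process_byte(), and do_housekeeping() are stand-ins, stubbed here so the sketch compiles):

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical hardware/application hooks, stubbed so the sketch compiles. */
    static uint8_t uart_read_byte(void)    { return 0u; }
    static void    process_byte(uint8_t b) { (void)b; }
    static void    do_housekeeping(void)   { }

    static volatile bool    rx_ready;   /* set by the ISR (foreground)   */
    static volatile uint8_t rx_byte;    /* data handed to the background */

    /* Foreground ("interrupt level"): the ISR handles the asynchronous event. */
    void uart_rx_isr(void)
    {
        rx_byte  = uart_read_byte();
        rx_ready = true;                /* make information available to a background module */
    }

    /* Background ("task level"): an infinite loop that calls modules (functions). */
    int main(void)
    {
        for (;;) {
            if (rx_ready) {             /* task-level response: handled only when the loop gets here */
                rx_ready = false;
                process_byte(rx_byte);
            }
            do_housekeeping();
        }
    }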

RTOS Design Philosophies
- The most common designs are:
  - Event-driven designs, which switch tasks only when an event of higher priority needs service; this is called preemptive priority, or priority scheduling.
  - Time-sharing designs, which switch tasks on a regular clock interrupt and on events; this is called round robin.
- Time-sharing designs switch tasks more often than strictly needed, but give smoother multitasking, giving the illusion that a process or user has sole use of the machine.

RTOS Concepts and Definitions
- Critical sections of code: a critical section of code, also called a critical region, is code that needs to be treated indivisibly; after the section of code starts executing, it must not be interrupted.
- Resources: a resource is any entity used by a task; it can be an I/O device, a variable, a structure, or an array.
- Shared resources: a shared resource is a resource that can be used by more than one task. Each task should gain exclusive access to the shared resource to prevent data corruption; this process is called mutual exclusion (see the sketch below).
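A minimal sketch of mutual exclusion by turning the access into a critical section; the interrupt-control macros are hypothetical placeholders for the CPU's disable/enable instructions or the kernel's enter/exit-critical services:

    #include <stdint.h>

    /* Hypothetical placeholders; a real port maps these to hardware or kernel calls. */
    #define DISABLE_INTERRUPTS()  do { } while (0)
    #define ENABLE_INTERRUPTS()   do { } while (0)

    static volatile uint32_t shared_counter;   /* shared resource used by several tasks/ISRs */

    /* The read-modify-write below must be treated indivisibly, so it must not
     * be interrupted once it starts. */
    void increment_shared_counter(void)
    {
        DISABLE_INTERRUPTS();
        shared_counter++;      /* critical section: exclusive access to the shared resource */
        ENABLE_INTERRUPTS();
    }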

RTOS Concepts and Definitions (continued)
- Multitasking: multitasking is the process of scheduling and switching the central processing unit (CPU) between several tasks; a single CPU switches its attention between several sequential tasks. Multitasking is like foreground/background with multiple backgrounds.
- Multitasking maximizes the use of the CPU and also provides for modular construction of applications. One of its most important aspects is that it allows the application programmer to manage the complexity inherent in real-time applications.

Tasks
- A task, also called a thread, is a simple program that thinks it has the CPU all to itself.
- The design process for a real-time application involves splitting the work to be done into tasks, each responsible for a portion of the problem.
- Each task is assigned a priority, its own set of CPU registers, and its own stack area, as in the sketch below.
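A task-creation sketch in the style of uC/OS-II; the header name, stack size, and priority value are illustrative, and exact call signatures vary between kernel versions and ports:

    #include "includes.h"      /* master include commonly used in uC/OS-II projects */

    #define APP_TASK_STK_SIZE  128u
    #define APP_TASK_PRIO      5u   /* in uC/OS-II, a lower number means a higher priority */

    static OS_STK AppTaskStk[APP_TASK_STK_SIZE];   /* each task gets its own stack area */

    /* A task is typically an infinite loop that blocks on time or events. */
    static void AppTask(void *pdata)
    {
        (void)pdata;
        for (;;) {
            /* perform this task's portion of the application's work here */
            OSTimeDly(10);     /* sleep for 10 ticks so other tasks can run */
        }
    }

    int main(void)
    {
        OSInit();                                         /* initialize the kernel             */
        OSTaskCreate(AppTask,                             /* task code                         */
                     (void *)0,                           /* task argument                     */
                     &AppTaskStk[APP_TASK_STK_SIZE - 1u], /* top of stack (grows downward)     */
                     APP_TASK_PRIO);                      /* unique task priority              */
        OSStart();                                        /* start multitasking; never returns */
        return 0;
    }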

Tasks (continued)
- Each task is typically an infinite loop that can be in any one of five states (summarized in the sketch below):
  - Dormant: the task resides in memory but has not been made available to the multitasking kernel.
  - Ready: the task can execute, but its priority is less than that of the currently running task.
  - Running: the task has control of the CPU.
  - Waiting (for an event): the task requires the occurrence of an event (for example, waiting for an I/O operation to complete or for a shared resource to become available).
  - ISR (interrupted): an interrupt has occurred and the CPU is in the process of servicing the interrupt.

Task States Diagram for uC/OS-II (figure)
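The five states can be captured as a simple enumeration; the names below are illustrative only (uC/OS-II tracks task state with its own flags and identifiers):

    /* Illustrative encoding of the five task states described above. */
    typedef enum {
        TASK_DORMANT,   /* in memory, but not yet known to the multitasking kernel      */
        TASK_READY,     /* can execute, but a higher-priority task currently runs       */
        TASK_RUNNING,   /* has control of the CPU                                       */
        TASK_WAITING,   /* blocked until an event occurs (I/O done, resource available) */
        TASK_ISR        /* interrupted while the CPU services an interrupt              */
    } task_state_t;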

RTOS Scheduler Basics
- A real-time OS has an associated algorithm for scheduling tasks.
- Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications.
- Key factors in a real-time OS are minimal interrupt latency and minimal thread-switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.

Context Switches (or Task Switches)
- When a multitasking kernel decides to run a different task, it saves the current task's context (its CPU registers) in the current task's context storage area, which is its stack. The new task's context is then restored from its own storage area, and execution resumes in the new task's code. This process is called a context switch or a task switch (sketched below).
- Context switching adds overhead to the application: the more registers a CPU has, the higher the overhead, since the time required to perform a context switch is determined by how many registers must be saved and restored by the CPU.
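A conceptual context-switch sketch; a real kernel saves and restores the hardware registers in assembly and keeps the saved context on the task's stack, so the simulated register file and TCB layout below are illustrative only:

    #include <stdint.h>
    #include <string.h>

    #define NUM_REGS 16u                   /* assumed register-file size for this sketch */

    static uint32_t cpu_regs[NUM_REGS];    /* simulated CPU registers                    */

    typedef struct {
        uint32_t saved_regs[NUM_REGS];     /* context storage (a real kernel uses the task's stack) */
    } tcb_t;                               /* hypothetical task control block layout     */

    static tcb_t *current_task;            /* the task that owns the CPU right now       */

    /* Save the outgoing task's context, restore the incoming task's context. */
    void context_switch(tcb_t *next_task)
    {
        if (current_task != NULL) {        /* the very first switch has nothing to save  */
            memcpy(current_task->saved_regs, cpu_regs, sizeof cpu_regs);
        }
        memcpy(cpu_regs, next_task->saved_regs, sizeof cpu_regs);
        current_task = next_task;          /* the new task now resumes execution         */
    }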

Kernels
- The kernel is the part of a multitasking system responsible for the management of tasks (i.e., for managing the CPU's time) and for communication between tasks. The fundamental service provided by the kernel is context switching.
- The use of a real-time kernel generally simplifies the design of systems by allowing the application to be divided into multiple tasks that the kernel manages.
- A kernel adds overhead to your system because the services it provides require execution time; the amount of overhead depends on how often you invoke these services. In a well-designed application, a kernel uses less than 5% of CPU time.

Schedulers
- The scheduler is the part of the kernel responsible for determining which task runs next.
- Most real-time kernels are priority based: each task is assigned a priority based on its importance, and the priority for each task is application specific.
- In a priority-based kernel, control of the CPU is always given to the highest-priority task ready to run. When that task actually gets the CPU, however, is determined by the type of kernel used. Two types of priority-based kernels exist: non-preemptive and preemptive. A minimal scheduler sketch follows.
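A minimal priority-based selection sketch, assuming a simple ready-flag array scanned from highest to lowest priority (real kernels such as uC/OS-II use a ready-list bitmap lookup rather than a linear scan):

    #include <stdbool.h>

    #define MAX_TASKS 8u

    /* Ready list: true when the task can run (lower index = higher priority). */
    static bool task_ready[MAX_TASKS];

    /* Return the highest-priority ready task, or -1 if none is ready. */
    int scheduler_pick_next(void)
    {
        for (unsigned prio = 0u; prio < MAX_TASKS; prio++) {
            if (task_ready[prio]) {
                return (int)prio;          /* control of the CPU goes to this task      */
            }
        }
        return -1;                         /* nothing ready; an idle task usually runs  */
    }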

Common RTOS Scheduling Algorithms
- Non-preemptive (cooperative) scheduling
- Preemptive scheduling
- Rate-monotonic scheduling
- Round-robin scheduling
- Fixed-priority preemptive scheduling, an implementation of preemptive time slicing
- Fixed-priority scheduling with deferred preemption
- Fixed-priority non-preemptive scheduling
- Critical-section preemptive scheduling
- Static time scheduling
- Earliest deadline first

Non-Preemptive Kernels
- Non-preemptive kernels require that each task explicitly does something to give up control of the CPU; to maintain the illusion of concurrency, this must be done frequently (see the sketch below).
- Non-preemptive scheduling is also called cooperative multitasking: tasks cooperate with each other to share the CPU.
- Asynchronous events are still handled by ISRs. An ISR can make a higher-priority task ready to run, but the ISR always returns to the interrupted task; the new higher-priority task gains control of the CPU only when the current task gives it up.
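A cooperative task sketch; yield_cpu(), work_available(), and do_one_unit_of_work() are hypothetical stand-ins for the kernel's yield service and the application's work, stubbed so the sketch compiles:

    #include <stdbool.h>

    /* Hypothetical hooks, stubbed so the sketch compiles standalone. */
    static void yield_cpu(void)           { }              /* kernel service: hand the CPU to the scheduler */
    static bool work_available(void)      { return false; }
    static void do_one_unit_of_work(void) { }

    void cooperative_task(void)
    {
        for (;;) {
            if (work_available()) {
                do_one_unit_of_work();    /* keep each unit of work short...              */
            }
            yield_cpu();                  /* ...and explicitly give up the CPU frequently
                                             to maintain the illusion of concurrency      */
        }
    }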

Non-Preemptive Kernels (continued)
- One of the advantages of a non-preemptive kernel is that interrupt latency is typically low.
- At the task level, non-preemptive kernels can also use non-reentrant functions. Non-reentrant functions can be used by each task without concern about corruption by another task, because each task can run to completion before it relinquishes the CPU. However, non-reentrant functions should not be allowed to give up control of the CPU.
- Task-level response using a non-preemptive kernel can be much lower than with foreground/background systems, because task-level response is now given by the time of the longest task.

Non-Preemptive Kernel Execution Profile (figure)

Non-Preemptive Kernel Execution Profile
1) A task is executing but is interrupted.
2) If interrupts are enabled, the CPU vectors (jumps) to the ISR.
3) The ISR handles the event and makes a higher-priority task ready to run.
4) Upon completion of the ISR, the CPU returns to the interrupted task.
5) The task code resumes at the instruction following the interrupted instruction.
6) When the task code completes, it calls a service that the kernel provides to relinquish the CPU to another task.
7) The kernel sees that a higher-priority task has been made ready to run (it doesn't necessarily know that it was from an ISR, nor does it care), and thus the kernel performs a context switch so that it can run (i.e., execute) the higher-priority task to handle the event that the ISR signaled.

Non-Preemptive Kernels (continued)
- The most important drawback of a non-preemptive kernel is responsiveness: a higher-priority task that has been made ready to run might have to wait a long time, because the current task gives up the CPU only when it is ready to do so.
- As with background execution in foreground/background systems, task-level response time in a non-preemptive kernel is nondeterministic; you never really know when the highest-priority task will get control of the CPU. It is up to your application to relinquish control of the CPU.

Preemptive Kernels
- A preemptive kernel is used when system responsiveness is important; uC/OS-II and most commercial real-time kernels are preemptive.
- The highest-priority task ready to run is always given control of the CPU. When a task makes a higher-priority task ready to run, the current task is preempted (suspended) and the higher-priority task is immediately given control of the CPU.
- If an ISR makes a higher-priority task ready, then when the ISR completes, the interrupted task is suspended and the new higher-priority task is resumed.

Preemptive Kernel Execution Profile (figure)

Preemptive Kernel Execution Profile (continued)
1) A task is executing but is interrupted.
2) If interrupts are enabled, the CPU vectors (jumps) to the ISR.
3) The ISR handles the event and makes a higher-priority task ready to run. Upon completion of the ISR, a service provided by the kernel is invoked (i.e., a function that the kernel provides is called).
4)-5) This function knows that a more important task has been made ready to run, so instead of returning to the interrupted task, the kernel performs a context switch and executes the code of the more important task. When the more important task is done, another kernel-provided function is called to put the task to sleep waiting for another event (i.e., the ISR) to occur.
6)-7) The kernel then sees that a lower-priority task needs to execute, and another context switch is done to resume execution of the interrupted task.

Preemptive Kernels (continued)
- With a preemptive kernel, execution of the highest-priority task is deterministic: you can determine when it will get control of the CPU. Task-level response time is thus minimized by using a preemptive kernel.
- Application code using a preemptive kernel should not use non-reentrant functions unless exclusive access to them is ensured through mutual-exclusion semaphores, because both a low-priority and a high-priority task can use a common function. Data corruption can occur if the higher-priority task preempts a lower-priority task that is using the function; a semaphore-guarded sketch follows.
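A sketch of guarding a non-reentrant function with a mutual-exclusion semaphore, in the style of uC/OS-II; display_update() and its shared buffer are hypothetical, and call signatures vary between kernel versions:

    #include "includes.h"                  /* uC/OS-II master include */
    #include <string.h>

    static OS_EVENT *DispSem;              /* created once at startup: DispSem = OSSemCreate(1);    */
    static char      disp_buf[32];         /* shared resource that makes the function non-reentrant */

    /* Safe for use by both low- and high-priority tasks because every caller
     * must acquire the semaphore before touching the shared buffer. */
    void display_update(const char *msg)
    {
        INT8U err;

        OSSemPend(DispSem, 0u, &err);      /* wait for exclusive access (0 = wait forever)       */
        strncpy(disp_buf, msg, sizeof disp_buf - 1u);
        disp_buf[sizeof disp_buf - 1u] = '\0';
        /* push disp_buf to the display device here */
        OSSemPost(DispSem);                /* release so other tasks may call display_update()   */
    }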

Reentrant Functions
- A reentrant function can be used by more than one task without fear of data corruption; it can be interrupted at any time and resumed at a later time without loss of data.
- Reentrant functions either use only local variables (i.e., CPU registers or variables on the stack) or protect their data when global variables are used.
- Example reentrant function:

    void strcpy(char *dest, char *src)
    {
        while ((*dest++ = *src++) != '\0') {
            ;               /* the loop also copies the terminating NUL character */
        }
    }

- Because copies of the arguments to strcpy() are placed on the task's stack, strcpy() can be invoked by multiple tasks without concern that the tasks will corrupt each other's pointers.

Non-reentrant Function Example
- Assume a simple function that swaps the contents of its two arguments, with Temp declared as a global integer:

    int Temp;

    void swap(int *x, int *y)
    {
        Temp = *x;
        *x   = *y;
        *y   = Temp;
    }

- Assume a preemptive kernel with interrupts enabled.

Example Execution Profile (figure)

Non-reentrant Function Example (continued)
- This example is simple, so it is obvious how to make the code reentrant. You can make swap() reentrant with one of the following techniques, two of which are sketched below:
  - Declare Temp local to swap().
  - Disable interrupts before the operation and enable them afterwards.
  - Use a semaphore.
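Sketches of the first two techniques; the third (a mutual-exclusion semaphore) follows the same pattern as the earlier uC/OS-II display example. The interrupt-control macros are hypothetical placeholders for the port's critical-section entry and exit:

    /* Fix 1: declare Temp local to swap(), so each task's copy lives on its own stack. */
    void swap_local(int *x, int *y)
    {
        int temp = *x;
        *x = *y;
        *y = temp;
    }

    /* Fix 2: keep the global, but disable interrupts around its use. */
    #define DISABLE_INTERRUPTS()  do { } while (0)   /* hypothetical placeholder */
    #define ENABLE_INTERRUPTS()   do { } while (0)   /* hypothetical placeholder */

    static int Temp;

    void swap_protected(int *x, int *y)
    {
        DISABLE_INTERRUPTS();
        Temp = *x;
        *x   = *y;
        *y   = Temp;
        ENABLE_INTERRUPTS();
    }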