Module 3
RTOS and IDE for Embedded System Design:
Operating System basics, Types of operating systems, Task, process and threads (Only POSIX
Threads with an example program), Thread pre-emption, Preemptive Task scheduling techniques,
Task Communication, Task synchronization issues – Racing and Deadlock, how to choose an RTOS,
Integration and testing of Embedded hardware and firmware, Embedded system Development
Environment – Block diagram (excluding Keil).
The Operating System acts as a bridge between the user applications/tasks and the underlying system
resources through a set of system functionalities and services. The OS manages the system resources and
makes them available to the user applications/tasks on a need basis.
The primary functions of an Operating system are
1. Make the system convenient to use
2. Organize and manage the system resources efficiently and correctly
HSIT,ECE 1
DEPT
RTOS AND IDE FOR ESD MODULE-3
An Operating System provides services to both the users and to the programs. It provides programs an
environment to execute.
It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system:
Program execution
I/O operations
Communication
Error Detection
Resource Allocation
Protection
The Kernel
The kernel is the core of the operating system. It is responsible for managing the system resources and the
communication among the hardware and other system services. Kernel acts as the abstraction layer
between system resources and user applications.
Kernel contains a set of system libraries and services. For a general purpose OS, the kernel contains
different services like
Process Management
Primary Memory Management
File System management
I/O System (Device) Management
Secondary Storage Management
Protection
Time management
Interrupt Handling
1. Process Management:
A program does nothing unless its instructions are executed by a CPU. A process is a
program in execution. A time-shared user program such as a compiler is a process. A word
processing program being run by an individual user on a PC is a process.
A system task such as sending output to a printer is also a process. A process needs
certain resources, including CPU time, memory, files and I/O devices, to accomplish its
task.
These resources are either given to the process when it is created or allocated to it while it
is running. The OS is responsible for the following activities of process management.
Creating & deleting both user & system processes.
Suspending & resuming processes.
Monolithic Kernel
All kernel services run in the kernel space
All kernel modules run within the same memory space under a single kernel thread
The tight internal integration of kernel modules in monolithic kernel architecture allows the
effective utilization of the low-level features of the underlying system
The major drawback of monolithic kernel is that any error or failure in any one of the kernel
modules leads to the crashing of the entire kernel application
LINUX, SOLARIS, MS-DOS kernels are examples of monolithic kernel
Microkernel
The microkernel design incorporates only the essential set of Operating System services into the
kernel
Rest of the Operating System services are implemented in programs known as ‘Servers’ which
run in user space
The kernel design is highly modular and provides an OS-neutral abstraction
Memory management, process management, timer systems and interrupt handlers are examples of
essential services, which form part of the microkernel
QNX, Minix 3 kernels are examples for microkernel
Depending on the type of kernel and kernel services, purpose and type of computing systems where
the OS is deployed and the responsiveness to applications, Operating Systems are classified into
1. General Purpose Operating System (GPOS)
Operating Systems, which are deployed in general computing systems
The kernel is more generalized and contains all the required services to execute generic
applications
Need not be deterministic in execution behavior
May inject random delays into application software and thus cause slow responsiveness of an
application at unexpected times
Usually deployed in computing systems where deterministic behavior is not an important
criterion
Personal Computer/Desktop system is a typical example for a system where GPOSs are
deployed.
Windows XP/MS-DOS etc are examples of General Purpose Operating System
Task/Process Management
Deals with setting up the memory space for the tasks, loading the task’s code into the
memory space, allocating system resources, setting up a Task Control Block (TCB) for the
task and task/process termination/deletion. A Task Control Block (TCB) is used for holding
the information corresponding to a task. TCB usually contains the following set of
information
• Task ID: Task Identification Number
• Task State: The current state of the task. (E.g. State= ‘Ready’ for a task which is ready to
execute)
• Task Type: Indicates the type of the task. The task can be a hard real-time, soft real-time
or background task.
• Task Priority: Task priority (e.g. Task Priority = 1 for a task with priority 1)
• Task Context Pointer: Context pointer. Pointer for context saving
• Task Memory Pointers: Pointers to the code memory, data memory and stack memory for
the task
• Task System Resource Pointers: Pointers to system resources (semaphores, mutex etc) used
by the task
• Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
• Other Parameters: Other relevant task parameters
• The parameters and implementation of the TCB is kernel dependent. The TCB parameters
vary across different kernels, based on the task management implementation
• Task/Process Scheduling: Deals with sharing the CPU among various tasks/processes. A
kernel application called ‘Scheduler’ handles the task scheduling. Scheduler is nothing but
an algorithm implementation, which performs the efficient and optimal scheduling of tasks
to provide a deterministic behavior.
Memory Management
The memory management function of an RTOS kernel is slightly different compared to the
General Purpose Operating Systems
In general, the memory allocation time increases depending on the size of the block of
memory that needs to be allocated and the state of the allocated memory block (an initialized
memory block consumes more allocation time than an un-initialized memory block)
Since predictable timing and deterministic behavior are the primary focus for an RTOS,
RTOS achieves this by compromising the effectiveness of memory allocation
RTOS generally uses ‘block’ based memory allocation technique, instead of the usual
dynamic memory allocation techniques used by the GPOS.
RTOS kernel uses blocks of fixed size of dynamic memory and the block is allocated for a
task on a need basis. The blocks are stored in a ‘Free buffer Queue’.
Most of the RTOS kernels allow tasks to access any of the memory blocks without any
memory protection to achieve predictable timing and avoid the timing overheads
RTOS kernels assume that the whole design is proven correct and protection is unnecessary.
Some commercial RTOS kernels allow memory protection as optional and the kernel enters
a fail-safe mode when an illegal memory access occurs
A few RTOS kernels implement Virtual Memory concept for memory allocation if the
system supports secondary memory storage (like HDD and FLASH memory).
In the ‘block’ based memory allocation, a block of fixed memory is always allocated for
tasks on need basis and it is taken as a unit. Hence, there will not be any memory
fragmentation issues.
The memory allocation can be implemented as constant-time functions and thereby consumes
a fixed amount of time for memory allocation. This leaves the deterministic behavior of the
RTOS kernel untouched.
Interrupt Handling
Interrupts inform the processor that an external device or an associated task requires
immediate attention of the CPU.
Interrupts can be either Synchronous or Asynchronous.
Interrupts which occur in sync with the currently executing task are known as Synchronous
interrupts. Usually software interrupts fall under the Synchronous Interrupt category.
Divide by zero, memory segmentation error etc. are examples of Synchronous interrupts.
For synchronous interrupts, the interrupt handler runs in the same context of the interrupting
task.
Asynchronous interrupts are interrupts which occur at any point of execution of any task,
and are not in sync with the currently executing task.
The interrupts generated by external devices (by asserting the Interrupt line of the
processor/controller to which the interrupt line of the device is connected) connected to the
processor/controller, timer overflow interrupts, serial data reception/ transmission interrupts
etc are examples for asynchronous interrupts.
For asynchronous interrupts, the interrupt handler is usually written as a separate task
(depending on the OS kernel implementation) and it runs in a different context. Hence, a context
switch happens while handling asynchronous interrupts.
Priority levels can be assigned to the interrupts and each interrupt can be enabled or
disabled individually.
Most RTOS kernels implement a ‘Nested Interrupts’ architecture. Interrupt nesting
allows the pre-emption (interruption) of an Interrupt Service Routine (ISR), servicing an
interrupt, by a higher priority interrupt.
Time Management
Accurate time management is essential for providing precise time reference for all
applications
The time reference to kernel is provided by a high-resolution Real Time Clock (RTC)
hardware chip (hardware timer)
The hardware timer is programmed to interrupt the processor/controller at a fixed rate. This
timer interrupt is referred to as the ‘Timer tick’
The ‘Timer tick’ is taken as the timing reference by the kernel. The ‘Timer tick’ interval may
vary depending on the hardware timer. Usually the ‘Timer tick’ varies in the microseconds
range
The time parameters for tasks are expressed as the multiples of the ‘Timer tick’
The System time is updated based on the ‘Timer tick’
If the System time register is 32 bits wide and the ‘Timer tick’ interval is 1 microsecond, the
System time register will reset in
2^32 × 10^-6 s / (24 × 60 × 60 s/day) ≈ 0.0497 days ≈ 1.19 hours
If the ‘Timer tick’ interval is 1 millisecond, the System time register will reset in
2^32 × 10^-3 s / (24 × 60 × 60 s/day) ≈ 49.7 days ≈ 50 days
[Figure: Process structure and state transition diagram — a process owns Stack Memory, Data Memory and Code Memory. States: Created → Ready (when scheduled for execution) → Running; Running → Ready on interruption or preemption; Running → Blocked while waiting for I/O completion or a shared resource, then back to Ready when the I/O completes or the shared resource is acquired; Running → Completed on execution completion.]
• Created State: The state at which a process is being created is referred as ‘Created State’.
The Operating System recognizes a process in the ‘Created State’ but no resources are
allocated to the process
• Ready State: The state, where a process is incepted into the memory and awaiting the
processor time for execution, is known as ‘Ready State’. At this stage, the process is placed
in the ‘Ready list’ queue maintained by the OS.
• Running State: The state in which the source code instructions corresponding to the
process are being executed is called ‘Running State’. Running state is the state at which the
process execution happens.
• Blocked State/Wait State: Refers to a state where a running process is temporarily
suspended from execution and does not have immediate access to resources. The blocked
state might be invoked by various conditions like: the process enters a wait state for an
event to occur (e.g. waiting for user inputs such as keyboard input) or is waiting to get
access to a shared resource like a semaphore, mutex etc.
• Completed State: A state where the process completes its execution
The transition of a process from one state to another is known as ‘State transition’
When a process changes its state from Ready to running or from running to blocked or
terminated or from blocked to running, the CPU allocation for the process may also change
Threads
A thread is the primitive that can execute code
A thread is a single sequential flow of control within a process
‘Thread’ is also known as lightweight process
A process can have many threads of execution
Different threads, which are part of a process, share the same address space; meaning they
share the data memory, code memory and heap memory area
Threads maintain their own thread status (CPU register values), Program Counter (PC) and
stack.
Thread Standards:
Thread standards deal with the different standards available for thread creation and management.
These standards are utilized by the Operating Systems for thread creation and thread management.
It is a set of thread class libraries. The commonly available thread class libraries are
POSIX Threads: POSIX stands for Portable Operating System Interface. The POSIX.4 standard
deals with the Real Time extensions and POSIX.4a standard deals with thread extensions. The
POSIX standard library for thread creation and management is ‘Pthreads’. ‘Pthreads’ library
defines the set of POSIX thread creation and management functions in ‘C’ language.
POSIX Thread Creation
int pthread_create(pthread_t *new_thread_ID, const pthread_attr_t *attribute, void *
(*start_function)(void *), void *arguments);
Creates a new thread for running the function start_function. Here pthread_t is the handle to the
newly created thread and pthread_attr_t is the data type for holding the thread attributes.
‘start_function’ is the function the thread is going to execute and arguments are the arguments for
start_function. On successful creation of a Pthread, pthread_create( ) associates the Thread Control
Block (TCB) corresponding to the newly created thread with the variable of type pthread_t.
The primitive
int pthread_join(pthread_t new_thread, void **thread_status);
blocks the current thread and waits until the completion of the thread pointed to by new_thread.
All the POSIX ‘thread calls’ return an integer. A return value of zero indicates success of the
call. It is always good to check the return value of each call.
Thread pre-emption
Thread pre-emption is the act of pre-empting the currently running thread (stopping the currently
running thread temporarily).
• User Level Thread: User level threads do not have kernel/Operating System support and
they exist solely in the running process. Even if a process contains multiple user level
threads, the OS treats it as a single thread and will not switch the execution among the
different threads of it. It is the responsibility of the process to schedule each thread as and
when required. In summary, user level threads of a process are non-preemptive at thread
level from the OS perspective.
• Kernel Level/System Level Thread: Kernel level threads are individual units of execution,
which the OS treats as separate threads. The OS interrupts the execution of the currently
running kernel thread and switches the execution to another kernel thread based on the
scheduling policies implemented by the OS.
The execution switching (thread context switching) of user level threads happen only when
the currently executing user level thread is voluntarily blocked. Hence, no OS intervention
and system calls are involved in the context switching of user level threads. This makes
context switching of user level threads very fast.
Kernel level threads involve lots of kernel overhead and involve system calls for context
switching. However, kernel threads maintain a clear layer of abstraction and allow threads to
use system calls independently
Many-to-One Model: Many user level threads are mapped to a single kernel thread. The
kernel treats all user level threads as single thread and the execution switching among the
user level threads happens when a currently executing user level thread voluntarily blocks
itself or relinquishes the CPU. Solaris Green threads and GNU Portable Threads are
examples for this.
One-to-One Model: Each user level thread is bonded to a kernel/system level thread.
Windows XP/NT/2000 and Linux threads are examples of One-to-One thread models.
Many-to-Many Model: In this model many user level threads are allowed to be mapped to
many kernel threads. Windows NT/2000 with ThreadFiber package is an example for this.
Preemptive scheduling
Employed in systems which implement the preemptive multitasking model.
Every task in the ‘Ready’ queue gets a chance to execute. When and how often each process gets a
chance to execute (gets the CPU time) is dependent on the type of preemptive scheduling algorithm
used for scheduling the processes.
The scheduler can preempt (stop temporarily) the currently executing task/process and select another
task from the ‘Ready’ queue for execution.
When to pre-empt a task and which task is to be picked up from the ‘Ready’ queue for execution after
preempting the current task is purely dependent on the scheduling algorithm.
A task which is preempted by the scheduler is moved to the ‘Ready’ queue. The act of moving a
‘Running’ process/task into the ‘Ready’ queue by the scheduler, without the processes requesting for it
is known as ‘Preemption’.
Time-based preemption and priority-based preemption are the two important approaches adopted in
preemptive scheduling.
Ex 1): Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds
respectively enter the ready queue together. A new process P4 with estimated completion time 2ms
enters the ‘Ready’ queue after 2ms. Assume all the processes contain only CPU operations and no I/O
operations are involved.
At the beginning, there are only three processes (P1, P2 and P3) available in the ‘Ready’ queue and
the SRT scheduler picks up the process with the Shortest remaining time for execution completion
(In this example P2 with remaining time 5ms) for scheduling. Now process P4 with estimated
execution completion time 2ms enters the ‘Ready’ queue after 2ms of start of execution of P2. The
processes are re-scheduled for execution in the following order
Waiting Time for P2 = 0 ms + (4 -2) ms = 2ms (P2 starts executing first and is interrupted by P4 and has
to wait till the completion of P4 to get the next CPU slot)
Waiting Time for P4 = 0 ms (P4 starts executing by preempting P2 since the execution time for
completion of P4 (2ms) is less than that of the Remaining time for execution completion of P2 (Here it
is 3ms))
Waiting Time for P3 = 7 ms (P3 starts executing after completing P4 and P2)
Waiting Time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4
= 5.75 milliseconds
Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 2 ms (Time spent in Ready Queue + Execution Time = (Execution
Start Time – Arrival Time) + Estimated Execution Time = (2-2) + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (7+2+14+24)/4 = 47/4
= 11.75 milliseconds
Round Robin scheduling is similar to the FCFS scheduling and the only difference is that a time slice
based preemption is added to switch the execution between the processes in the ‘Ready’ queue
Ex 2): Three processes with process IDs P1, P2, P3 with estimated completion times 6, 4, 2 milliseconds
respectively enter the ready queue together in the order P1, P2, P3. Calculate the waiting time and Turn
Around Time (TAT) for each process and the Average waiting time and Turn Around Time (assuming there
is no I/O waiting for the processes) in the RR algorithm with Time slice = 2ms.
• The scheduler sorts the ‘Ready’ queue based on the FCFS policy and picks up the first process P1
from the ‘Ready’ queue and executes it for the time slice 2ms. When the time slice is expired, P1 is
preempted and P2 is scheduled for execution. The Time slice expires after 2ms of execution of P2.
Now P2 is preempted and P3 is picked up for execution. P3 completes its execution within the time
slice and the scheduler picks P1 again for execution for the next time slice. This procedure is
repeated till all the processes are serviced. The order in which the processes are scheduled for
execution is represented as
Waiting Time for P1 = 0 + (6-2) + (10-8) = 0+4+2= 6ms (P1 starts executing first and waits for two time
slices to get execution back and again 1 time slice for getting CPU time)
Waiting Time for P2 = (2-0) + (8-4) = 2+4 = 6ms (P2 starts executing after P1 executes for 1 time slice and
waits for two time slices to get the CPU time)
Waiting Time for P3 = (4 -0) = 4ms (P3 starts executing after completing the first time slices for P1 and P2
and completes its execution in a single time slice.)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (6+6+4)/3 = 16/3
= 5.33 milliseconds
Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 10 ms (-Do-)
Turn Around Time (TAT) for P3 = 6 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (12+10+6)/3 = 28/3
= 9.33 milliseconds
Ex 3): Three processes with process IDs P1, P2, P3 with estimated completion times 10, 5, 7 milliseconds and
priorities 1, 3, 2 (0 = highest priority, 3 = lowest priority) respectively enter the ready queue together. A new
process P4 with estimated completion time 6ms and priority 0 enters the ‘Ready’ queue after 5ms of start of
execution of P1. Assume all the processes contain only CPU operations and no I/O operations are involved.
At the beginning, there are only three processes (P1, P2 and P3) available in the ‘Ready’ queue and the
scheduler picks up the process with the highest priority (In this example P1 with priority 1) for scheduling.
Now process P4 with estimated execution completion time 6ms and priority 0 enters the ‘Ready’ queue
after 5ms of start of execution of P1. The processes are re-scheduled for execution in the following order
Waiting Time for P1 = 0 + (11-5) = 0+6 =6 ms (P1 starts executing first and gets preempted by P4 after 5ms
and again gets the CPU time after completion of P4)
Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the ‘Ready’ queue, by preempting
P1)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (6 + 0 + 16 + 23)/4 = 45/4
= 11.25 milliseconds
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 6ms (Time spent in Ready Queue + Execution Time = (Execution Start
Time – Arrival Time) + Estimated Execution Time = (5-5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (16+6+23+28)/4 = 73/4
= 18.25 milliseconds
Task Communication:
In a multitasking system, multiple tasks/processes run concurrently (in pseudo parallelism) and each
process may or may not interact with the others. Based on the degree of interaction, the processes/tasks
running on an OS are classified as
• Co-operating Processes: In the co-operating interaction model one process requires the inputs from
other processes to complete its execution.
• Competing Processes: The competing processes do not share anything among themselves but they
share the system resources. The competing processes compete for the system resources such as file,
display device etc
The co-operating processes exchange information and communicate through
• Co-operation through sharing: Exchange data through some shared resource.
• Co-operation through Communication: No data is shared between the processes, but they
communicate for execution synchronization.
Shared Memory
Processes share some area of the memory to communicate among them
Information to be communicated by the process is written to the shared memory area
Processes which require this information can read the same from the shared memory area
Similar to the real world concept where a ‘Notice Board’ is used by a college to publish
information for students (the only exception is that only the college has the right to modify the
information published on the Notice Board and students are given ‘Read’ only access, meaning it
is only a one-way channel)
1. Pipes:
‘Pipe’ is a section of the shared memory used by processes for communicating. Pipes follow the client-server
architecture. A process which creates a pipe is known as pipe server and a process which connects to a pipe
is known as pipe client. A pipe can be considered as a conduit for information flow and has two conceptual
ends. It can be unidirectional, allowing information flow in one direction, or bidirectional, allowing
information flow in both directions. A unidirectional pipe allows the process connected at one end of the
pipe to write to the pipe and the process connected at the other end to read the data, whereas a
bidirectional pipe allows both reading and writing at each end.
[Figure: Process 1 —Write→ Pipe (Named/un-named) —Read→ Process 2]
The implementation of ‘Pipes’ is OS dependent. Microsoft® Windows Desktop Operating Systems support
two types of ‘Pipes’ for Inter Process Communication. Namely;
Anonymous Pipes: The anonymous pipes are unnamed, unidirectional pipes used for data transfer between
two processes.
Named Pipes: Named pipe is a named, unidirectional or bi-directional pipe for data exchange between
processes. Like anonymous pipes, the process which creates the named pipe is known as pipe server. A
process which connects to the named pipe is known as pipe client. With named pipes, any process can act as
both client and server allowing point-to-point communication. Named pipes can be used for communicating
between processes running on the same machine or between processes running on different machines
connected to a network.
Message Passing
A synchronous/asynchronous information exchange mechanism for Inter Process/ thread
Communication
Through shared memory a lot of data can be shared, whereas only a limited amount of info/data is
passed through message passing
Message passing is relatively fast and free from the synchronization overheads compared to shared
memory.
1. Message Queues:
Process which wants to talk to another process posts the message to a First-In-First-Out (FIFO)
queue called ‘Message queue’, which stores the messages temporarily in a system defined memory
object, to pass it to the desired process.
Messages are sent and received through send (Name of the process to which the message is to be
sent, message) and receive (Name of the process from which the message is to be received, message)
methods
The messages are exchanged through a message queue
The implementation of the message queue, send and receive methods are OS kernel dependent.
3. Signals:
An asynchronous notification mechanism
Mainly used for the execution synchronization of tasks process/tasks
Signals do not carry any data and are not queued
The implementation of signals is OS kernel dependent
VxWorks RTOS kernel implements ‘signals’ for inter process communication
A task/process can create a set of signals and register for it
A task or Interrupt Service Routine (ISR) can signal a ‘signal’
Whenever a specified signal occurs it is handled in a signal handler associated with the
signal.
Remote Procedure Call (RPC):
The IPC mechanism used by a process to call a procedure of another process running on the same
CPU or on a different CPU which is interconnected in a network.
In the object oriented language terminology RPC is also known as Remote Invocation or Remote
Method Invocation (RMI) .
RPC is mainly used for distributed applications like client-server applications.
With RPC it is possible to communicate over a heterogeneous network (i.e. Network where Client
and server applications are running on different Operating systems).
The CPU/Process containing the procedure which needs to be invoked remotely is known as server.
The CPU/Process which initiates an RPC request is known as client.
In order to make the RPC communication compatible across all platforms it should stick to
certain standard formats.
Interface Definition Language (IDL) defines the interfaces for RPC.
Microsoft Interface Definition Language (MIDL) is the IDL implementation from Microsoft for all
Microsoft platforms.
The RPC communication can be either Synchronous (Blocking) or Asynchronous (Non-blocking)
[Figure: RPC over a network — Process 1 on one CPU invokes a Procedure in a process on another CPU using TCP/IP or UDP over sockets; RPC can also take place between Process 1 and Process 2 on the same CPU.]
Task Synchronization
Multiple processes may try to access and modify shared resources in a multitasking environment. This may
lead to conflicts and inconsistent results.
Processes should be made aware of the access of a shared resource by each process and should not be allowed
to access a shared resource when it is currently being accessed by other processes.
The act of making processes aware of the access of shared resources by each process to avoid conflicts is
known as ‘Task/Process Synchronization’.
Task Synchronization is essential for avoiding conflicts in shared resource access and ensuring a specified
sequence for task execution.
Various synchronization issues may arise in a multitasking environment if processes are not synchronized
properly in shared resource access.
From a programmer’s perspective, the value of counter will be 10 at the end of execution of processes A and B. But it
need not always be so. The program statement counter++; looks like a single statement from a high level programming
language (C language) perspective. The low level implementation of this statement depends on the underlying processor
instruction set and the (cross) compiler in use. The low level implementation of the high level program statement
counter++; under the Windows XP operating system running on an Intel Centrino Duo processor is given below. The code
snippet is compiled with the Microsoft Visual Studio 6.0 compiler.
At the processor instruction level, the value of the variable counter is loaded to the Accumulator register (EAX Register).
The memory variable counter is represented using a pointer. The base pointer register (EBP Register) is used for pointing
to the memory variable counter. After loading the contents of the variable counter to the Accumulator, the Accumulator
content is incremented by one using the add instruction. Finally the content of Accumulator is loaded to the memory
location which represents the variable counter. Both the processes Process A and Process B contain the program
statement counter++; Translating this into the machine instruction.
Process A Process B
mov eax,dword ptr [ebp-4] mov eax,dword ptr [ebp-4]
add eax,1 add eax,1
mov dword ptr [ebp-4],eax mov dword ptr [ebp-4],eax
Imagine that a process switch (context switch) from Process A to Process B happens while Process A is executing the counter++; statement. Process A accomplishes counter++; through three different low-level instructions. Now imagine that the switch happens at the point where Process A has executed the low-level instruction mov eax,dword ptr [ebp-4] and is about to execute the next instruction, add eax,1.
Process B then increments the shared variable counter in the middle of Process A's increment operation. When Process A gets the CPU again, it resumes from the point where it was interrupted. (Even if Process B uses the same registers eax and ebp for executing counter++;, the original contents of these registers are saved (PUSHed) by Process B before use and restored (POPed) after it finishes, so the register contents remain intact across the context switch.) Although counter has already been incremented by Process B, Process A is unaware of it and increments the variable using the old value. This leads to the loss of one increment of the variable counter.
Deadlock
A deadlock arises when a set of processes is blocked, with each process holding a resource and waiting for a resource held by another. Four conditions must occur together for a deadlock to happen:
Mutual Exclusion: The criterion that only one process can hold a resource at a time, meaning processes must access shared resources with mutual exclusion. A typical example is access to the display device in an embedded device.
Hold & Wait: The condition in which a process holds a shared resource by acquiring the lock controlling the shared access, while waiting for additional resources held by other processes.
No Resource Preemption: The criterion that the Operating System cannot take back a resource from a process which is currently holding it; the resource can only be released voluntarily by the process holding it.
Circular Wait: A process is waiting for a resource which is currently held by another process, which in turn is waiting for a resource held by the first process. In general, there exists a set of waiting processes P0, P1, …, Pn such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. This forms a circular wait queue.
‘Deadlock’ is the result of the combined occurrence of the four conditions listed above. These conditions were first described by E. G. Coffman in 1971 and are popularly known as the Coffman conditions.
A smart OS may foresee a deadlock condition and act proactively to avoid it. But if a deadlock does occur, how does the OS respond? The reaction of an OS to a deadlock condition is not uniform. The OS may adopt any of the following techniques to detect, recover from, or prevent deadlock conditions.
Ignore Deadlocks: Always assume that the system design is deadlock free. This is acceptable when the cost of removing a deadlock is high compared to the likelihood of a deadlock occurring. UNIX is an example of an OS following this principle. A life-critical system, however, cannot pretend that it is deadlock free for any reason.
Detect and Recover: This approach suggests detecting a deadlock situation and recovering from it. It is similar to the deadlock condition that may arise at a traffic junction: when vehicles from different directions compete to cross the junction, a deadlock (traffic jam) results. Once a deadlock (traffic jam) has occurred at the junction, the only solution is to back up the vehicles from one direction and allow the vehicles from the opposite direction to cross. If the traffic is heavy, many vehicles may have to be backed up to resolve the jam. This technique is also known as the ‘back up cars’ technique.
Operating Systems keep a resource graph in memory. The resource graph is updated on each resource request and release. A deadlock condition can be detected by analyzing the resource graph with graph analyzer algorithms. Once a deadlock condition is detected, the system can terminate a process or preempt a resource to break the deadlock cycle.
Avoid Deadlocks: Deadlock is avoided through careful resource allocation techniques on the part of the Operating System. It is similar to the traffic light mechanism at junctions that avoids traffic jams.
Prevent Deadlocks: Prevent the deadlock condition by negating one of the four conditions favoring the deadlock situation.
Ensure that a process does not hold any other resources when it requests a resource. This can be achieved by implementing the following set of rules/guidelines in allocating resources to processes:
1. A process must request all of its required resources, and the resources should be allocated, before the process begins its execution.
2. Grant a resource allocation request from a process only if the process does not currently hold any resource.
Ensure that resource preemption (resource releasing) is possible at the operating system level. This can be achieved by implementing the following set of rules/guidelines in resource allocation and release:
1. Release all resources currently held by a process if a request made by the process for a new resource cannot be fulfilled immediately.
2. Add the resources which were preempted (released) to a resource list describing the resources the process requires to complete its execution.
3. Reschedule the process for execution only when the process can get both its old resources and the new resource it requested.
Livelock: The livelock condition is similar to deadlock, except that a process in livelock changes its state with time. In deadlock, a process enters a wait state for a resource and remains in that state forever without making any progress in its execution; in livelock, a process is always doing something but is unable to make progress toward completing its execution. The livelock condition is best explained with a real-world example: two people attempting to pass each other in a narrow corridor. Each person moves to one side of the corridor to let the other pass, but since the corridor is narrow, neither is able to get past the other. Both persons perform some action, yet they are unable to achieve their goal of passing each other.
Starvation: In the context of task synchronization, starvation is the condition in which a process does not get the resources required to continue its execution for a long time; as time progresses, the process starves for the resource. Starvation may arise from various conditions, such as being a byproduct of deadlock-prevention measures, or scheduling policies that favor high-priority tasks or tasks with the shortest execution time.
The Development Environment consists of a development computer (PC) or host, which acts as the heart of the development environment; an Integrated Development Environment (IDE) tool for embedded firmware development and debugging; an Electronic Design Automation (EDA) tool for embedded hardware design; emulator hardware for debugging the target board; signal sources (like a function generator) for simulating inputs to the target board; and the target hardware.