Rate Monotonic Analysis
An Overview

Paul S. Heidmann

Contents

Introduction
Rate Monotonic Analysis
    History
    An Overview
    Assumptions of Rate Monotonic Analysis
Refinements of Rate Monotonic Analysis
    The First Assumption
    The Second Assumption
    The Third Assumption
    The Fourth Assumption
    The Fifth Assumption
Ada Programming Issues
    Paradigm 1
        Paradigm 1 Coding Restrictions
        Paradigm 1 Considerations
    Paradigm 2
        Paradigm 2 Coding Restrictions
        Paradigm 2 Considerations
Bibliography
Section 1: Introduction
Since its
conception in 1973, rate monotonic theory has made a significant contribution to
real-time applications programming.
Using rate monotonic theory, a given task set may be analyzed to
determine which tasks in the task set will meet their deadlines. Moreover, the rate monotonic algorithms
have been shown to be optimal among fixed priority algorithms in the sense that if a task set is schedulable by any other fixed priority algorithm, then it is schedulable by the rate monotonic algorithms [4, p. 51].
The purpose of
this paper is not to introduce new results, but rather to present an overview of
the theory as it applies to the Ada programming language. Section 2 presents a brief introduction
to the precepts of rate monotonic analysis. The assumptions that rate monotonic
theory makes are presented.
The original
assumptions made by rate monotonic theory are quite restrictive; much work has
been done to relax these assumptions.
Section 3 presents this work, grouping each result obtained with the
assumption (or assumptions) that it relaxes.
Finally, section
4 presents two paradigms for the creation of periodic tasks in Ada. The advantages and disadvantages of each
are discussed.
Section 2: Rate Monotonic Analysis
History

Real-time system
correctness depends not only on logical correctness, but on timing constraints
as well. Systems designers,
therefore, must suitably schedule all tasks so that each meets its
deadline. This scheduling was typically accomplished in one of two ways: with a cyclical scheduler or with a prioritized scheme. A cyclical scheduler proved very difficult to maintain during system updates, while assigning task priorities based solely on perceived importance caused deadlines to be needlessly missed [5, p. 2].
Rate monotonic
scheduling theory was introduced in 1973 by Liu and Layland [4] to address these
problems. To make the analysis
possible, however, five assumptions concerning the task set had to be made. Unfortunately, these assumptions
eliminated most task sets from consideration. Further development has since produced
means for relaxing some of these assumptions [3, 6, 7, 9].
An Overview

Originally, rate
monotonic analysis was developed as a means of scheduling periodic tasks
alone. A periodic task is defined
as a task with a periodic request rate and a hard deadline immediately preceding
the next periodic request.
The fundamental
precept of rate monotonic analysis is, then, the assignment of priorities to
tasks according to the period with which they occur. That is, those tasks with a shorter
period (and hence a higher request rate) receive a higher priority. Liu and Layland have shown that this scheduling protocol is optimal among fixed priority assignments in the sense that if a task set is schedulable by some fixed priority assignment (that is, fixed priorities can be assigned in such a way that all deadlines will be met), then it is also schedulable by the rate monotonic protocol [4, p. 51].
In some systems,
this strict ordering of tasks by period may not be acceptable. For example, a system may depend on a longer-period (and hence lower-priority) task meeting its deadline. In these cases, it is possible to do a
period transformation on the critical task, raising its priority by splitting
the longer task into several shorter (and hence higher priority) tasks [5, p.
9-10].
Liu and Layland provided the following theorem to test whether or not a task set is schedulable
[4, p. 54]. It should be noted that
the bound provided by this theorem is rather pessimistic; a task set that fails
to meet the bound in this theorem may still be schedulable. This theorem is still useful, however,
as a quick check for a given task set.
Theorem 1: A set of n tasks, with fixed priority order, is schedulable if

$$\frac{C_1}{T_1} + \frac{C_2}{T_2} + \cdots + \frac{C_n}{T_n} \;\le\; n\left(2^{1/n} - 1\right)$$

where $C_i$ is the execution time required for task $i$ and $T_i$ is the period of task $i$.
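As a quick illustration, consider a hypothetical three-task set (the numbers are invented for this example and are not taken from the references): $C_1 = 20$, $T_1 = 100$; $C_2 = 40$, $T_2 = 150$; $C_3 = 100$, $T_3 = 350$ (all in milliseconds). The test of Theorem 1 is then

$$\frac{20}{100} + \frac{40}{150} + \frac{100}{350} \approx 0.753 \;\le\; 3\left(2^{1/3} - 1\right) \approx 0.780,$$

so the task set passes the bound and is schedulable by the rate monotonic algorithm.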
Lehoczky, Sha,
and Ding have exactly characterized the schedulability of any task set with the
following theorem, stated by Sha and Goodenough [5, p.
5-6]:
Theorem 2: A set of n independent periodic tasks scheduled by the rate monotonic algorithm will always meet its deadlines, for all task phasings, if and only if

$$\forall i,\; 1 \le i \le n: \quad \min_{(k,l) \in R_i} \; \sum_{j=1}^{i} C_j \, \frac{1}{l\,T_k} \left\lceil \frac{l\,T_k}{T_j} \right\rceil \;\le\; 1$$

where $C_j$ and $T_j$ are defined as in the previous theorem, and

$$R_i = \left\{ (k, l) \;\middle|\; 1 \le k \le i,\; l = 1, \ldots, \left\lfloor T_i / T_k \right\rfloor \right\}.$$
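Continuing the hypothetical task set introduced after Theorem 1, the test for task 3 examines the scheduling points $l\,T_k \le T_3$, namely 100, 150, 200, 300, and 350 ms. At the point $l\,T_k = 300$ (that is, $(k,l) = (1,3)$),

$$20\left\lceil\tfrac{300}{100}\right\rceil + 40\left\lceil\tfrac{300}{150}\right\rceil + 100\left\lceil\tfrac{300}{350}\right\rceil = 60 + 80 + 100 = 240 \;\le\; 300,$$

so the minimum over $R_3$ is at most $240/300 = 0.8 \le 1$, and task 3 meets its deadline for all task phasings; tasks 1 and 2 are checked in the same way.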
Assumptions of Rate Monotonic Analysis

In order to make
the schedulability analysis possible, five assumptions need to be made
concerning the task set. These
assumptions are as follows [4, p. 48]:
1. The requests for all tasks for
which hard deadlines exist are periodic, with constant interval between
requests.
2. Deadlines consist of run-ability
constraints only -- i.e. each task must be completed before the next request for
it occurs.
3. The tasks are independent in that
requests for a certain task do not depend on the initiation or the completion of
requests for other tasks.
4. Run-time for each task is a
constant for that task and does not vary with time. Run-time here refers to the time which
is taken by a processor to execute the task without
interruption.
5. Any aperiodic tasks in the system
are special; they are initialization or failure recovery routines; they displace
periodic tasks while they themselves are being run, and do not themselves have
hard, critical deadlines.
These
assumptions are indeed restrictive; however, as mentioned previously, work has
been done to relax them.
Section 3: Refinements of Rate Monotonic Analysis
Since 1973,
several advancements in the theory of rate monotonic scheduling have been
made. The impositions made by the
assumptions of rate monotonic theory on the task set were unacceptable for most
systems. Therefore, efforts to
improve the theory have been focused on the relaxation of the
assumptions.
This section
presents the five assumptions made of the task set by rate monotonic theory,
along with an overview of efforts to relax each
assumption.
The First Assumption

The requests for all tasks for which hard deadlines exist are periodic, with constant interval between requests.
This assumption
eliminates most real-time systems from consideration by eliminating sporadic
events (a sporadic event is defined as an aperiodic event with a hard
deadline).
Initial attempts
to relax this assumption involved creating a periodic task to poll aperiodic
events. Although trivial to implement, this approach suffers from an unnecessarily long average response time [9, p. 4][6, p. 4].
Further attempts
produced the priority exchange protocol [1, p. 4][6, p. 5][9, p. 6-10]. In this scheme, a periodic task is
created to process sporadic events.
If no sporadic events have been requested at the onset of the server
task, then the server exchanges its high priority with a lower priority periodic
task. That is, the server allows a
lower priority periodic task to execute until completion or the arrival of a
sporadic request, whichever comes first.
If the periodic task executes to completion and still no sporadic
requests have occurred, then the server will allow the next periodic task to
execute until completion or the arrival of a sporadic request. This continues until the priority of the
server degenerates to background.
In the event of a sporadic request, the request is processed at the
current priority level of the server until completion or the depletion of the
server's allotted execution time, whichever comes first. Replenishment of the server's execution
time occurs periodically.
While the
priority exchange protocol has the advantage of fast average response times, it
suffers from implementation complexity and needless accrual of run-time overhead [6, p. 5][9, p. 16].
An attempt to
lessen the impact of the priority exchange protocol on run-time overhead then
produced the deferrable server protocol [1, p. 4][5, p. 11][6, p. 5][9, p.
6-10]. As with the priority
exchange protocol, a periodic server task is created to process sporadic
requests. If no sporadic requests
are pending at server onset, the deferrable server is suspended until a sporadic
request arrives. When this occurs,
if the server's allotted time is not depleted, the currently executing task is preempted. As with the priority
exchange protocol, the server's allotted execution time is replenished
periodically.
The deferrable
server has low implementation complexity and accrues little run-time
overhead. Unfortunately, the
protocol violates the fourth assumption of rate monotonic theory [6, p.
5].
The shortcomings of the priority exchange and deferrable server protocols were overcome in 1989 by Sprunt, Sha, and Lehoczky with the introduction of the sporadic server
protocol [9]. The sporadic server
protocol [1, p. 4][5, p. 12-13][6, p. 5][8][9, p. 11-18], like the priority
exchange and deferrable server protocols, requires the creation of a periodic
task to service sporadic requests.
If, at the onset of the server, no sporadic requests are pending, the
server preserves its allotted time at high priority (in the same manner as the
deferrable server). Then, like the deferrable server, it preempts any lower priority periodic task upon the arrival of a sporadic request. The
fundamental difference between the deferrable server and the sporadic server is
the manner in which each server is replenished. Where the full amount of the deferrable
server's allotted execution time is replenished at the beginning of each period,
the amount of execution time consumed by each sporadic request is replenished
one full period after it was consumed.
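As a small numerical illustration (the figures are invented for this example), suppose a server has a period of 100 ms and an execution-time allotment of 20 ms, and that a sporadic request arriving at time $t = 30$ ms consumes 10 ms of that allotment. A deferrable server would have its full 20 ms restored at the start of its next period ($t = 100$ ms); the sporadic server instead restores only the 10 ms actually consumed, and does so one full period after the consumption, at $t = 130$ ms.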
The sporadic
server does not accrue as much run-time overhead or require the implementation
complexity of the priority exchange protocol. The replenishment scheme of the sporadic
server does not periodically require processing while no sporadic requests are
made, as the deferrable server's replenishment scheme does. Although the sporadic server violates the same assumption as the deferrable server, Sprunt, Sha, and Lehoczky have
demonstrated the schedulability of the sporadic server in a rate monotonic
run-time system [9, p. 23-24] (for alternative applications of the sporadic
server, see [8, p. 3-5]).
With the
sporadic server, then, the assumption that requests for all tasks for which hard
deadlines exist are periodic is relaxed.
The remaining component of the first assumption, that all tasks have a
constant interval between requests, can be addressed either by taking the
minimum possible task period (this will result in a lowering of processor
utilization in the average case) or by employing a mode change (see The Fourth Assumption) [6, p.
7-9].
The Second Assumption

Deadlines consist of run-ability constraints only -- i.e. each task must be completed before the next request for it occurs.
With this
assumption, the run-time executive is freed from having to monitor the periods of all tasks. An alternative would be to
have the run-time executive buffer all requests and for each task initiate one
request every period.
The Third Assumption

The tasks are independent in that requests for a certain task do not depend on the initiation or the completion of requests for other tasks.
This assumption eliminates from consideration those task sets that employ inter-task communication, semaphores, or input/output. Clearly, this assumption is unacceptable
for most real-time systems.
Understandably, there has been no small amount of effort given to the
relaxation of this assumption.
While the advances are substantial, the theory is not yet
complete.
With the
introduction of semaphores comes the problem of unbounded priority
inversion. Priority inversion is
defined as the blocking of a higher priority task by a lower priority task. Unbounded priority inversion can occur
when a high priority task and a low priority task share a common semaphore. For illustration, consider tasks H, M, and L; H being the highest priority task, M a medium priority task, and L the lowest priority task. Assume that H and L share a common resource guarded by a
semaphore. Suppose further that L enters its critical section, locking
the semaphore. Assume now that H attempts to enter its critical section
and is blocked. The arrival of task
M would now preempt L, forcing task H to wait not only for the completion of
task L's critical section, but also
for task M (and any other
intermediate priority tasks). In
this manner, a high priority task may be blocked for an indefinitely long period
of time.
To address this
problem, the priority inheritance protocol was developed [1, p. 5][3, p.
13-14][5, p. 14][7, p. 4-6][9, p. 26].
Under the priority inheritance protocol, if a lower priority task blocks
a higher priority task, the lower priority task inherits the priority of the
higher priority task for the duration of its critical section.[1]
Using this scheme, Sha, Rajkumar, and Lehoczky have demonstrated that
if there are m semaphores that can be
used to block a job J, then J can be suspended for at most the
duration of m critical sections [7,
p. 6] (see [7, p. 5-6] for a complete listing of properties of the priority
inheritance protocol).
Unfortunately,
priority inversion is not the only problem that semaphores introduce. Deadlocking needs to be addressed as
well. Priority inheritance alone
will not prevent deadlocks; the ceiling priority protocol was developed to
eliminate deadlocks [1, p. 6][3, p. 15-16][5, p. 14-16][6, p. 6][7, p.
6-14]. Under this protocol, every
semaphore is given a ceiling priority defined to be the priority of the highest
priority task that can lock the semaphore.
The following rules, then, must be followed [7, p. 9-10] (a brief worked example follows the rules):
1. Suppose that J has the highest priority among the
jobs ready to run and is assigned the processor, and let S* be the semaphore with the
highest priority ceiling of all semaphores currently locked by jobs other than
job J. Before job J enters its critical section, it must
first obtain the lock on semaphore S
guarding the shared data structure.
Job J will be blocked, and the
lock on S denied, if the priority of
job J is not higher than the priority
ceiling of semaphore S*. In this case, job J is said to be blocked by the job which
holds the lock on S*. Otherwise, job J will obtain the lock on semaphore S and enter its critical section. When a job J exits its critical section, the binary
semaphore associated with the critical section will be unlocked and the jobs, if
any, that are unblocked by J become
ready to run. The highest priority
job that is ready to run will be assigned the processor.
2. A job J uses its assigned priority, unless it
is in its critical section and blocks higher priority jobs. If job J blocks higher priority jobs, J inherits $P_H$, the highest priority of
the jobs blocked by J. When J exits its (outermost) critical
section, it resumes its original priority.
Priority inheritance is transitive.
Finally, the operations of priority inheritance and resumption of
original priority must be indivisible.
3. A job J, when it does not attempt to enter a
critical section, can preempt another job $J_L$ if its priority is higher than the priority, inherited or assigned, at which job $J_L$ is
executing.
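To illustrate these rules, consider a situation invented for this example: tasks H, M, and L as in the priority inversion discussion above, with H and L sharing semaphore S1 (whose ceiling is therefore the priority of H) and M and L sharing semaphore S2 (whose ceiling is the priority of M). Suppose L locks S1, and M then becomes ready and attempts to lock S2. By rule 1, M is blocked: its priority is not higher than the ceiling of S1, the locked semaphore with the highest priority ceiling, even though S2 itself is free. When H later arrives and attempts to lock S1, L inherits H's priority (rule 2), completes its critical section, and releases S1; H is therefore delayed by at most one lower priority critical section. Without rule 1, M could have locked S2 at that point, and if L also needed S2 inside its critical section, H would have had to wait for both M's and L's critical sections.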
When the priority inheritance and ceiling priority protocols are followed, the theorems in Section 2 may be modified to allow for the blocking time introduced by task synchronization [5, p. 15]:
Theorem 3: A set of n periodic tasks using the priority ceiling protocol can be scheduled by the rate monotonic algorithm, for all task phasings, if the following condition is satisfied:

$$\forall i,\; 1 \le i \le n: \quad \frac{C_1}{T_1} + \cdots + \frac{C_i}{T_i} + \frac{B_i}{T_i} \;\le\; i\left(2^{1/i} - 1\right)$$

where $C_i$ and $T_i$ are the execution time and period of task $i$ (as in the theorems in Section 2) and $B_i$ is the longest duration of blocking time that task $i$ can experience.
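Continuing the hypothetical task set used in Section 2, suppose (again purely for illustration) that task 1 can be blocked for at most $B_1 = 10$ ms and task 2 for at most $B_2 = 20$ ms ($B_3 = 0$, since the lowest priority task cannot be blocked by a lower priority task). The three conditions are

$$\tfrac{20}{100} + \tfrac{10}{100} = 0.30 \le 1\left(2^{1} - 1\right) = 1.00,$$
$$\tfrac{20}{100} + \tfrac{40}{150} + \tfrac{20}{150} = 0.60 \le 2\left(2^{1/2} - 1\right) \approx 0.83,$$
$$\tfrac{20}{100} + \tfrac{40}{150} + \tfrac{100}{350} \approx 0.75 \le 3\left(2^{1/3} - 1\right) \approx 0.78,$$

so the task set remains schedulable with this blocking.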
As with the corresponding theorem in Section 2, the bound is rather pessimistic. It is possible that a task set that
fails the test in the above theorem is schedulable. A more realistic bound is given in the
next theorem [5, p. 15-16]:
Theorem 4: A set of n periodic tasks using the priority ceiling protocol can be scheduled by the rate monotonic algorithm for all task phasings if the following condition is satisfied:

$$\forall i,\; 1 \le i \le n: \quad \min_{(k,l) \in R_i} \left[ \sum_{j=1}^{i-1} C_j \, \frac{1}{l\,T_k} \left\lceil \frac{l\,T_k}{T_j} \right\rceil + \frac{C_i + B_i}{l\,T_k} \right] \;\le\; 1$$

where $C_j$, $T_j$, and $B_i$ are defined as in Theorem 3, and $R_i$ is defined as in Theorem 2.
Two differences between this theorem and its analog in Section 2 merit mention. First, the theorem in Section 2 is an exact characterization; this theorem is not. Secondly, the sum in the theorem in Section 2 is over $j = 1, \ldots, i$, while the sum in this theorem is over $j = 1, \ldots, i-1$, with the term $(C_i + B_i)/(l\,T_k)$ added separately.
Finally, this
assumption precludes the use of input and output. Both input from and output to an external device require the suspension of the task making the request. The time spent in such a suspension will
be termed idle time. There are subtle differences between
idle time and blocking time (in the sense of the ceiling priority
protocol). Blocking time is
dependent on the length of a limited number of critical sections and is governed
by that property of the ceiling priority protocol that strictly bounds the
number of critical sections in lower priority tasks for which each higher
priority task must wait. Idle time
depends on the amount of time needed to process an input or output request. The priority ceiling protocol governs
which task is given the processor while a higher priority task is blocked. The protocol governing which task is
given the processor during a higher priority task's idle time is different. For these reasons, idle time may not be
considered blocking time in the above two theorems.
Klein and Ralya,
in their paper An Analysis of
Input/Output Paradigms for Real-Time Systems [3], begin to deal with this
issue. The introduction of idle
time into tasks scheduled with the rate monotonic algorithm may create any of
the following difficulties [3, p. 19-27]:
1. Idle time will have a direct impact on the
requesting task's schedulability.
In this respect, idle time may be considered as blocking time (however,
for the requesting task only).
2. Idle time has an effect on the blocking
time components of higher priority processes. This can occur when the completion of an
input or output request made by a lower priority task causes an interrupt during
the execution of a higher priority task.
3. Idle time may also affect the blocking
properties of the process that experiences inactivity. This effect is the result of applying
the priority ceiling protocol to input and output devices. A task will become suspended if it
requests an input or output transaction while any other input or output routine
with a priority ceiling higher than the priority of the task in question is
active.
4. Idle time can also affect lower priority
processes. Deferring the
execution of a higher priority process can cause a lower priority process to
miss a deadline [3, p. 15].
To deal with
these issues, Klein and Ralya introduce a constant into the inequality given in
theorem 1. The computation of this
constant is straightforward in the synchronous case [3, p. 23]; in the
asynchronous case, however, the computation is more involved and is derived from
a pessimistic paradigm [3, p. 33].
The Fourth Assumption

Run-time for each task is a constant for that task and does not vary with time. Run-time here refers to the time which is taken by a processor to execute the task without interruption.
Tasks which have
a varying run-time can be analyzed by considering their worst case run-time and
the average case run-time. The
assumption that tasks cannot be interrupted can be relaxed with the techniques
used in relaxing the first and third assumptions.
Often, in
real-time systems, a mode change is required. This could involve the removal or
addition of tasks to the task set or a dramatic change in the amount of time
required to process a currently existing task. Such operations, clearly, violate this
assumption. The relaxation of this
assumption to allow mode changes can be realized (assuming the ceiling priority
protocol is in effect) if the following protocol is observed (a brief example follows the rules) [6, p. 7-9]:
1. For each unlocked semaphore S whose priority ceiling needs to be
raised, S's ceiling is raised
immediately and indivisibly.
2. For each locked semaphore S whose priority ceiling needs to be
raised, S's priority ceiling is
raised immediately and indivisibly after S is unlocked.
3. For each semaphore S whose priority ceiling needs to be
lowered, S's priority ceiling is
lowered when all the tasks which may lock S, and which have priorities greater
than the new priority ceiling of S,
are deleted.
4. If task $\tau$'s priority is higher than the priority ceilings of locked semaphores $S_1, \ldots, S_k$, which it may lock, the priority ceilings of $S_1, \ldots, S_k$ must first be raised before adding task $\tau$.
5. A task $\tau$, which needs to be deleted, can be deleted immediately upon the initiation of a mode change, and its processor time reclaimed, if it has not begun to execute. If $\tau$ has initiated, then it may be deleted only after completion.
6. A task may be added into the system
only if sufficient processor capacity exists.
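As a brief example (invented for illustration), suppose a mode change must add a task $\tau$ whose priority is higher than the current ceilings of semaphores $S_1$ and $S_2$, both of which $\tau$ may lock. If $S_1$ is unlocked, its ceiling is raised at once (rule 1); if $S_2$ is locked, its ceiling is raised immediately after $S_2$ is unlocked (rule 2). Only after both ceilings have been raised (rule 4), and only if sufficient processor capacity exists (rule 6), is $\tau$ added to the task set.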
This protocol
preserves the ceiling priority protocol's properties of freedom from deadlock
and the guarantee that a higher priority task may be blocked by a lower priority
task at most once. If both task
sets are schedulable, then all tasks will meet their deadlines across the mode
switch [6, p. 9].
The Fifth Assumption

Any aperiodic tasks in the system are special; they are initialization or failure recovery routines; they displace periodic tasks while they themselves are being run, and do not themselves have hard, critical deadlines.
This assumption
may be relaxed by employing the techniques used in relaxing the first and third
assumptions.
Section 4: Ada Programming Issues
The assumptions
of rate monotonic theory restrict the form a periodic task may take. This section presents two periodic
task[2] paradigms. For each paradigm, applicable coding
restrictions and limitations are discussed.
The Ada Language
Reference Manual specifies only that a delay statement will cause a delay at
least as long as the given expression [10, 9.6.1]. Moreover, there is no requirement that a
scheduling decision be made upon completion of the delay. The paradigms presented in this section
require both that the delay be for exactly the amount of time specified and that
at the completion of any given delay, a scheduling decision be
made.
It is not
possible, without the capability of dynamically changing priorities, to implement a sporadic server under
either of the following two paradigms.
This can be seen by considering the case of an aperiodic request
occurring when the sporadic server's execution time is not completely exhausted,
and there is not enough time left to process the event. If the aperiodic handler is given a
lower priority than the periodic task set, then it must remain in rendezvous
with the sporadic server to attain the high priority needed for it to begin its
execution. In this case, however,
the sporadic server task is not capable of suspending the aperiodic handler when
its time is exhausted. If the
aperiodic handler is given a higher priority than the periodic task set, then it
will continue to preempt periodic tasks after the sporadic server's time is
exhausted. An approximation of the
sporadic server is given in [8].
This implementation is consistent with both
paradigms.
Paradigm 1

The approach of the first paradigm is to create an additional task, termed a dispatcher, to regulate the periodic tasks. This is achieved with an entry call to each periodic task at the beginning of each of its periods. The dispatcher, to function properly, must have the highest priority.
Following is an example of a dispatcher:
task body MY_DISPATCHER is
begin
   loop
      MY_PERIODIC_TASK_1.DO_IT;
      delay DELAY_1;           -- DELAY_1 may be an expression
      MY_PERIODIC_TASK_2.DO_IT;
      delay DELAY_2;           -- DELAY_2 may be an expression
   end loop;
end MY_DISPATCHER;
Every periodic
task, under this paradigm, has a single accept statement inside a loop. Such a task would have the following
form:
task MY_PERIODIC_TASK is
   entry DO_IT;   -- the entry must be declared in the task specification
end MY_PERIODIC_TASK;

task body MY_PERIODIC_TASK is
begin
   loop
      accept DO_IT;
      -- the periodic work is performed here, after the rendezvous completes
   end loop;
end MY_PERIODIC_TASK;
Paradigm 1 Coding Restrictions

Little variation from the above form is permissible for periodic tasks. Specifically, the following constructions, when used in periodic tasks, are either completely inconsistent with the assumptions of rate monotonic theory or allowable only with restrictions:
Multiple accept statements.
Abort statements.
Delay statements.
Exception handlers.
Input and output.
Pragma shared statements.
Rendezvous.
Multiple accept
statements are
completely inconsistent with rate monotonic theory. The language does not define the order
in which the accept statements are handled. It is possible that a task might wait on
one entry while requests for the other entry queue up. This violates the second
assumption.
Since an abort statement causes the deletion of a task from the task set, its use must be governed by the mode change protocol detailed in Section 3.
Ada allows a delay statement to occur in three
distinct contexts: within a selective wait, within a timed entry call, and
inline with other code. A selective
wait with a delay alternative is not allowable. Should the delay condition become open,
the task will begin execution beyond the control of the rate monotonic
run-time. A timed entry call is
allowable so long as the principles of the ceiling priority protocol are not
violated (this assumes that rendezvous are governed according to the ceiling
priority protocol -- see the paragraph in this section dealing with
rendezvous). A delay statement
inline with other statements is not allowable, since it would introduce blocking
not governed by the ceiling priority protocol.
The handling of
exceptions may not involve the
termination of a task, unless the mode change protocol is followed. Any additional overhead caused by the
handling of exceptions must be accounted for in the
analysis.
Input and output must be governed by the protocol
discussed in [3].
Pragma shared introduces blocking time into a task's
execution. Hence, this construction
is permissible only if this blocking time is governed by the ceiling priority
protocol.
Ada rendezvous (except the accept statement used by the dispatcher
to control the periodic task), will necessarily introduce blocking time into one
or both of the tasks in question.
All blocking must be governed by the ceiling priority protocol. While it is possible that some
implementations will have provisions for the use of the rendezvous for
inter-task communication, it is more likely that most implementations will
restrict inter-task communication to semaphores (that is, rendezvous will be
allowed only with a semaphore task).
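A minimal sketch of such a semaphore task follows (the names are invented here, and enforcement of the priority inheritance and ceiling priority protocols is assumed to be supplied by the run-time executive rather than by this task):

task SEMAPHORE is
   entry P;   -- lock (wait)
   entry V;   -- unlock (signal)
end SEMAPHORE;

task body SEMAPHORE is
begin
   loop
      accept P;   -- grant the lock to the first caller
      accept V;   -- wait for the holder to release it
   end loop;
end SEMAPHORE;

A periodic task would bracket its critical section with calls to SEMAPHORE.P and SEMAPHORE.V; the blocking time this introduces is the $B_i$ term of Theorems 3 and 4.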
Paradigm 1 Considerations

Questions concerning the methods of implementing input/output, inter-task communication, the handling of aperiodic events, and resource contention naturally arise. This section addresses these concerns, followed by a critique of this model.
Task interaction
with external devices presents complications in the theory as well as in the
computations. Close adherence to
the protocol presented in [3] is required.
If this paradigm is being used, one additional restriction must be added
to input/output requests: no request may be made while in rendezvous with the
dispatcher task. Such a rendezvous
would cause suspension of the dispatcher task for the duration of the
input/output request.
Inter-task
communication and resource contention both require the use of a semaphore. Since this will introduce blocking time,
the ceiling priority protocol must be followed. As the priority inheritance and ceiling priority protocols are typically the responsibility of the run-time executive, they are not discussed further here.
The handling of
aperiodic requests may be accomplished either by employing the modified sporadic
server discussed in [8] or by polling.
While polling requires less implementation effort, its response time is
often undesirably slow. The
modified sporadic server has a faster response time (albeit not as fast as a
true sporadic server) and a more complex implementation.
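Under this paradigm, a polling handler is simply one more periodic task driven by the dispatcher. The following sketch assumes two application-specific subprograms, AN_EVENT_IS_PENDING and PROCESS_THE_EVENT, which are invented names standing for whatever mechanism the application uses to detect and service aperiodic requests:

task POLLING_HANDLER is
   entry DO_IT;
end POLLING_HANDLER;

task body POLLING_HANDLER is
begin
   loop
      accept DO_IT;
      -- service every request that arrived during the preceding period
      while AN_EVENT_IS_PENDING loop
         PROCESS_THE_EVENT;
      end loop;
   end loop;
end POLLING_HANDLER;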
This paradigm
has the advantages of a simple means of task control and tolerance of varying
execution times of periodic tasks.
Both advantages are due to the existence of the high priority dispatcher task, which may preempt lower priority tasks that for some reason have not yet completed. This guarantees that higher priority tasks will start their periods at the proper time (the next paradigm does not share this advantage).
Paradigm 2

In this model, periodic tasks are governed by the placing of a delay statement inline within each task body. For each task, the length of the delay is set equal to the task's period minus its execution time. Unlike the first paradigm, under this model there is no centralized control of tasks. Following is the form of a periodic task under this model:
task body MY_PERIODIC_TASK is
begin
   loop
      -- the periodic work is performed here
      delay DELAY_1;           -- DELAY_1 may be an expression
   end loop;
end MY_PERIODIC_TASK;
Paradigm 2 Coding Restrictions

As with paradigm 1, certain Ada tasking constructions, if used in conjunction with this model, violate the principles of rate monotonic scheduling. Specifically, the following constructions are either completely inconsistent with rate monotonic theory or applicable in only a restricted sense:
Multiple accept statements.
Rendezvous.
Abort statements.
Delay statements.
Exception handling.
Input/Output.
Pragma shared statements.
Multiple accept
statements violate the
principles of rate monotonic theory; see paradigm 1 coding
restrictions.
The Ada rendezvous necessarily introduces
blocking time into one or both of the tasks in question. All blocking time, to remain consistent
with rate monotonic principles, must be governed by the ceiling priority
protocol. As most current Ada
implementations do not govern rendezvous in a manner consistent with rate
monotonic theory, inter-task communication will be restricted to
semaphores.
The abort statement, as previously
discussed, causes the termination of the task in question. Such a mode change must be governed by
the protocol in [6, p. 7-9].
Delay statements (other than those used to
regulate periodic tasks) introduce blocking time that is not governed by the
priority ceiling protocol. Their
use is therefore prohibited.
The handling of
exceptions may not cause a task to
terminate unless the mode change protocol is followed. Any additional execution time that
exception handling adds to the task must be accounted for in the
analysis.
The use of input or output devices must be governed by the
protocol presented in [3].
The use of
shared variables (pragma shared)
introduces blocking time into the task in question. This blocking time must be governed by
the ceiling priority protocol.
As with the
previous paradigm, inter-task communication will most probably be accomplished
with semaphores. The blocking time
introduced must be governed by the priority ceiling protocol. All input and output must be governed by the protocol in [3].
Paradigm 2 Considerations

This paradigm suffers from an intolerance of periodic tasks with varying execution times. An applications programmer who chooses this model is faced with the choice of using either fixed or dynamic delay times.
Fixed delay
times will lead to period shifting as periodic tasks vary their execution
times. Suppose, for example, that
an applications programmer creates a periodic task with period 10ms and normal
execution time 2ms, with execution time 3ms if, say, an external event
arrives. He then uses a fixed delay
statement at the end of this task with value 8ms. Now suppose further that during the
execution of this task, the external event arrives, causing an extra 1ms of
execution time. This will cause a
period shift of 1ms; that is, the next period will begin 1ms
late.
The alternative
is to dynamically change the delay length, computing it from the system
clock.[3]
This approach, however, accrues needless run-time
overhead.
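A minimal sketch of the dynamic approach follows (it assumes that the enclosing compilation unit names the predefined CALENDAR package in with and use clauses, and the 10 ms period is invented for illustration; as the footnote notes, the computation of the delay and the delay itself must together be indivisible, which this sketch does not by itself guarantee):

task body MY_PERIODIC_TASK is
   PERIOD    : constant DURATION := 0.010;   -- 10 ms, for illustration only
   NEXT_TIME : TIME := CLOCK + PERIOD;       -- start of the next period
begin
   loop
      -- the periodic work is performed here
      delay NEXT_TIME - CLOCK;               -- delay only the time actually remaining
      NEXT_TIME := NEXT_TIME + PERIOD;       -- advance to the following period
   end loop;
end MY_PERIODIC_TASK;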
This paradigm,
while simpler to implement than the previous paradigm, is too simplistic for
most real-time applications.
Bibliography

[1] A. Burns. Scheduling Hard Real-Time Systems: A Review. Technical Report CS13-88, Department of Computing, University of Bradford, 1988.

[2] J. B. Goodenough and Lui Sha. The Priority Ceiling Protocol: A Method for Minimizing the Blocking of High-Priority Ada Tasks. CMU/SEI-88-SR-4, Carnegie-Mellon University, 1988.

[3] Mark H. Klein and Thomas Ralya. An Analysis of Input/Output Paradigms for Real-Time Systems. CMU/SEI-90-TR-19, Carnegie-Mellon University, 1990.

[4] C. L. Liu and James W. Layland. Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. Journal of the Association for Computing Machinery, Vol. 20, No. 1, January 1973.

[5] Lui Sha and John B. Goodenough. Real-Time Scheduling Theory and Ada. CMU/SEI-89-TR-14, Carnegie-Mellon University, 1989.

[6] Lui Sha, Mark H. Klein, and John B. Goodenough. Rate Monotonic Analysis for Real-Time Systems. CMU/SEI-91-TR-5, Carnegie-Mellon University, 1991.

[7] Lui Sha, Ragunathan Rajkumar, and John P. Lehoczky. Priority Inheritance Protocols. CMU-CS-87-181, Carnegie-Mellon University, 1987.

[8] Brinkley Sprunt and Lui Sha. Implementing Sporadic Servers in Ada. CMU/SEI-90-TR-6, Carnegie-Mellon University, 1990.

[9] Brinkley Sprunt, Lui Sha, and John Lehoczky. Scheduling Sporadic and Aperiodic Events in a Hard Real-Time System. CMU/SEI-89-TR-11, Carnegie-Mellon University, 1989.

[10] United States Department of Defense. Reference Manual for the Ada Programming Language. ANSI/MIL-STD-1815A-1983, 1983.
Notes

[1] Priority inheritance must be fully transitive. This often adds significantly to the run-time overhead.
[2] A periodic task may not be placed in a subprogram. A subprogram containing a periodic task would not be allowed to complete until the periodic task terminated.
[3] This operation must be indivisible.