**7.3 Scheduling**

The *μ*C/OS-II scheduler does two things: (i) select the highest priority ready task, and (ii) if it differs from the currently running task, perform a context switch. Our hierarchical scheduler replaces the original *OS\_SchedNew()* method, which the *μ*C/OS-II scheduler uses to select the highest priority ready task.

It first uses the *global scheduler HighestReadyServer()* to select the highest priority ready server, and then the server's *local scheduler HighestReadyTask()*, which selects the highest priority ready task belonging to that server. This approach makes it possible to implement different global and local schedulers, and even a different scheduler in each server. Our fixed-priority global scheduler is shown in Figure 11.

```
highestServer := HighestReadyServer();
if highestServer ≠ currentServer then
  if currentServer ≠ ∅ then
     ServerSwitchOut(currentServer);
  end if
  if highestServer ≠ ∅ then
     ServerSwitchIn(highestServer);
  end if
  currentServer := highestServer;
end if
if currentServer ≠ ∅ then
  return currentServer.HighestReadyTask();
else
  return idleTask;
end if
```
Fig. 11. Pseudocode for the hierarchical scheduler.

The *currentServer* is a global variable referring to the currently active server. Initially *currentServer* = ∅, where ∅ refers to a null pointer.

The scheduler first determines the highest priority ready server. Then, if that server is different from the currently active server, a server switch is performed, composed of 3 steps: (i) switching out the currently active server with *ServerSwitchOut()*, (ii) switching in the highest priority ready server with *ServerSwitchIn()*, and (iii) updating *currentServer*.

Finally the highest priority task in the currently active server is selected, using the current server's local scheduler *HighestReadyTask()*. If no server is active, then the idle task is returned.

*ServerBudget(σi)* returns the current value of *βi*, which represents the lower bound on the processor time that server *σi* will receive within the next interval of Π*i* time units.

### **7.4 Enforcement**

When a server becomes depleted during the execution of one of its tasks (i.e. when a depletion event expires), the task is preempted and the server is switched out. This is possible, since we assume preemptive and independent tasks.

An Efficient Hierarchical Scheduling Framework for the Automotive Domain 87

## **8. Evaluation**

In this section we evaluate the modularity, memory footprint and performance of the HSF extension for RELTEQ. We chose a linked list as the data structure underlying our RELTEQ queues and implemented the proposed design within *μ*C/OS-II.

**8.1 Modularity and memory footprint**

The design of RELTEQ and the HSF extension is modular, making it possible to enable or disable the support for HSF and different server types during compilation with a single compiler directive for each extension.

The complete RELTEQ implementation including the HSF extension is 1610 lines of code (excluding comments and blank lines), compared to 8330 lines of the original *μ*C/OS-II. 105 lines of code were inserted into the original *μ*C/OS-II code, out of which 60 were conditional compilation directives allowing us to easily enable and disable our extensions. No original code was deleted or modified. Note that the RELTEQ+HSF code can replace the existing timing mechanisms in *μ*C/OS-II, and that it provides a framework for easy implementation of other scheduler and server types.

The 105 lines of code represent the effort required to port RELTEQ+HSF to another operating system. Such porting requires (i) redirecting the tick handler to the RELTEQ handler, (ii) redirecting the method responsible for selecting the highest priority task to the HSF scheduler, and (iii) identifying when tasks become ready or blocked.

The code memory footprint of RELTEQ+HSF is 8KB, compared to 32KB of the original *μ*C/OS-II. The additional data memory footprint for an application consisting of 6 servers with 6 tasks each is 5KB, compared to 47KB for an application consisting of 36 tasks (with a stack of 128B each) in the original *μ*C/OS-II.

**8.2 Performance analysis**

In this section we evaluate the system overheads of our extensions, in particular the overheads of the scheduler and the tick handler. We express the overhead in terms of the maximum number of events inside the queues which need to be handled in a single invocation of the scheduler or the tick handler, times the maximum overhead for handling a single event.

**8.2.1 Handling a single event**

Handling different events will result in different overheads.

• When a dummy event expires, it is simply removed from the head of the queue. Hence, handling it requires *O*(1) time.

• When a task period event expires, an event representing the next periodic arrival is inserted into the corresponding server queue. In this section we assume a linked-list implementation, and consequently insertion is linear in the number of events in a queue. Note that we could delay inserting the next period event until the task completes, as at most one job of a task may be running at a time. This would reduce the handling of a periodic arrival to constant time, albeit at the additional cost of keeping track, for each task, of the time since its arrival, which would be taken into account when inserting the next period event. However, if we would like to monitor whether tasks complete before their deadline, then we need to insert a deadline event into *σi*.*sq* anyway. Hence the time for handling an event inside of a server queue is linear in the number of events in a server queue. Since there are at most two events per task in a server queue (period and deadline events), handling a period event is linear in the maximum number of tasks assigned to a server, i.e. *O*(*m*(*σi*)).
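To make the linked-list cost model concrete, the following sketch shows insertion into a relative-time event queue of the kind RELTEQ uses: each event stores its expiration time relative to its predecessor, so inserting a new event walks the list while consuming earlier offsets, which is linear in the queue length. This is a minimal illustration under our own naming; *ReltEvent* and *ReltInsert* are hypothetical, not RELTEQ's actual API.

```c
#include <stddef.h>

typedef struct ReltEvent {
    int time;                  /* expiration time relative to the previous event */
    struct ReltEvent *next;
} ReltEvent;

/* Insert an event expiring `at` time units from now, keeping the queue
   sorted and the relative offsets consistent. Linear in queue length. */
void ReltInsert(ReltEvent **queue, ReltEvent *ev, int at)
{
    /* Walk past events that expire no later than `at`, consuming their
       relative offsets as we go. */
    while (*queue != NULL && (*queue)->time <= at) {
        at -= (*queue)->time;
        queue = &(*queue)->next;
    }
    ev->time = at;
    ev->next = *queue;
    if (*queue != NULL)
        (*queue)->time -= at;  /* successor is now relative to the new event */
    *queue = ev;
}
```

For example, inserting events at absolute times 10, 4 and 15 yields a queue storing the relative offsets 4, 6 and 5.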

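The two-level selection of Figure 11 can also be sketched in C. This is an illustrative skeleton, not the actual *μ*C/OS-II integration: the structures, the array-scan priority search, and the name *HierSchedNew* are assumptions, and the server switch routines are stubbed out (in the real extension they would stop and resume the server's RELTEQ queues).

```c
#include <stddef.h>

#define NUM_SERVERS 4
#define NUM_TASKS   8

/* Illustrative state, not the actual kernel structures. */
typedef struct Task { int ready; } Task;
typedef struct Server {
    int ready;
    Task tasks[NUM_TASKS];   /* indexed by local priority (0 = highest) */
} Server;

static Server servers[NUM_SERVERS]; /* indexed by global priority (0 = highest) */
static Server *currentServer = NULL;          /* the pseudocode's ∅ */
static Task idleTask;

/* Global scheduler: highest priority ready server, or NULL. */
static Server *HighestReadyServer(void)
{
    for (int i = 0; i < NUM_SERVERS; i++)
        if (servers[i].ready) return &servers[i];
    return NULL;
}

/* Local scheduler: highest priority ready task in server s. */
static Task *HighestReadyTask(Server *s)
{
    for (int i = 0; i < NUM_TASKS; i++)
        if (s->tasks[i].ready) return &s->tasks[i];
    return &idleTask;
}

/* Stubs: the real routines would deactivate/activate server queues. */
static void ServerSwitchOut(Server *s) { (void)s; }
static void ServerSwitchIn(Server *s)  { (void)s; }

/* Mirrors Figure 11: switch servers if needed, then pick a task locally. */
Task *HierSchedNew(void)
{
    Server *highest = HighestReadyServer();
    if (highest != currentServer) {
        if (currentServer != NULL) ServerSwitchOut(currentServer);
        if (highest != NULL)       ServerSwitchIn(highest);
        currentServer = highest;
    }
    return (currentServer != NULL) ? HighestReadyTask(currentServer)
                                   : &idleTask;
}
```

With fixed-priority scan-based schedulers at both levels, replacing either loop (e.g. with a bitmap lookup, as *μ*C/OS-II does for tasks) changes only that level, which is the modularity the text describes.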