**2.1. The kernel block**

A computer node is simulated using the TrueTime kernel block. This node has a generic real-time kernel, A/D and D/A converters, and network interfaces. An initialization script is used to configure the block; in this script it is possible to create several objects such as tasks, timers, interrupt handlers, and semaphores. These objects define the software executing in the computer node. The kernel continuously calls the code functions of the tasks and interrupt handlers. Either MATLAB m-file code or C++ may be used to write the initialization script and the code functions; the main advantage of C++ is speed, whereas m-file code is easier to use. Several scheduling policies are available in the TrueTime kernel block, including fixed-priority scheduling, earliest-deadline-first scheduling, and custom scheduling policies [2].

*Numerical Simulation – From Theory to Industry*, chapter: Issues on Communication Network Control System Based Upon Scheduling Strategy Using Numerical Simulations

The *task* is the main construction in the TrueTime environment. This object is used to simulate periodic and aperiodic activities; for example, controller and I/O tasks can be periodic, while communication tasks and event-driven controllers can be aperiodic. A task is defined by a set of attributes and a code function; the attributes include name, release time, worst-case execution time, budget, relative and absolute deadlines, priority (if fixed-priority scheduling is used), and period (if the task is periodic). Release time and absolute deadline are attributes constantly updated by the kernel during simulation, while period and priority are kept constant, although they can be changed by calls to kernel primitives during execution [1]. An example of the definition of a task is shown below:



```
function sensor_init(arg)
% Initialize TrueTime kernel: 1 input, 0 outputs, fixed-priority scheduling
ttInitKernel(1, 0, 'prioFP');
% Local task memory, passed to the code function at every segment
data.y = 0;
% Create the periodic sensor task
offset = 0;
prio = 1;
period = 0.010;
ttCreatePeriodicTask('sens_task', offset, period, ...
                     prio, 'senscode', data);
```
The kernel primitive ttInitKernel() initializes a sensor node. The kernel is initialized by specifying the number of A/D and D/A channels and the scheduling policy. The built-in priority function prioFP specifies fixed-priority scheduling. Rate-monotonic (prioRM), earliest-deadline-first (prioEDF), and deadline-monotonic (prioDM) scheduling are additional predefined scheduling policies [6].
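For comparison, a minimal sketch of the same initialization under earliest-deadline-first scheduling: only the priority function passed to ttInitKernel changes, while the task creation call keeps the signature used above (the function name sensor_init_edf is hypothetical):

```
function sensor_init_edf(arg)
% Initialize the kernel: 1 input, 0 outputs, EDF scheduling
ttInitKernel(1, 0, 'prioEDF');
% Same periodic sensor task as before; under EDF the priority
% attribute is not used for ordering, absolute deadlines are
data.y = 0;
offset = 0;
prio = 1;
period = 0.010;
ttCreatePeriodicTask('sens_task', offset, period, ...
                     prio, 'senscode', data);
```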

*Interrupts* can be generated in two ways. An external interrupt is associated with one of the external interrupt channels of the computer block; the interrupt triggers when the signal of the corresponding channel changes value. This type of interrupt is useful for simulating distributed controllers that execute when measurements arrive over the network. Internal interrupts are used to construct timers; when a timer expires, the interrupt is triggered. A user-defined interrupt handler is scheduled when an external or internal interrupt occurs. An interrupt handler, like a task, executes work, but it is scheduled at a higher priority level. An interrupt handler is defined by a name, a priority, and a code function [1]. An example of the definition of an interrupt handler is as follows:

```
%Initialize the network
 ttCreateInterruptHandler('nw_handler1', prio, 'msgRcvSensor');
 ttInitNetwork(4, 'nw_handler1'); % node #4 in the network
```
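An internal interrupt is set up analogously. As a sketch (the timer and handler names are hypothetical, assuming the ttCreateTimer primitive of TrueTime 1.x), a one-shot timer can be attached to a handler so that its code function runs when the timer expires:

```
% Handler executed when the timer expires
ttCreateInterruptHandler('timer_handler', prio, 'timercode');
% One-shot timer that triggers 'timer_handler' 5 ms from now
ttCreateTimer('delay_timer', 0.005, 'timer_handler');
```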
Cervin *et al.* [1, 2] mention that simulated execution occurs at three distinct *priority* levels: the interrupt level (highest priority), the kernel level, and the task level (lowest priority). The execution may be preemptive or non-preemptive. At the interrupt level, interrupt handlers are *scheduled* according to fixed priorities. At the task level, dynamic-priority scheduling may be used. At each scheduling point, the priority of a task is given by a user-defined priority function, which is a function of the task attributes; this makes it easy to simulate different scheduling policies. Predefined priority functions exist for most of the commonly used scheduling schemes.
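The mechanism can be illustrated with a hypothetical custom priority function: it receives the task with its attributes and returns a number that the kernel uses to order the ready tasks. This is a sketch only; the attribute field names below are assumptions, not necessarily those of the actual TrueTime task structure:

```
function prio = myprio(task)
% Custom dynamic priority: order ready tasks by absolute
% deadline (EDF-like), breaking ties with the fixed priority.
% Field names task.absdeadline and task.prio are assumed.
prio = task.absdeadline + 1e-6 * task.prio;
```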

The *code* associated with tasks and interrupt handlers is scheduled and executed by the kernel while the simulation progresses. The code may be divided into segments, which can interact with other tasks and with the environment at the beginning of each code segment. The simulated execution time of each segment is returned by the code function and can be modeled as constant, random, or even data-dependent [1]. During the simulation the kernel saves the current segment and calls the code functions with the proper arguments. Execution resumes in the next segment when the task has been running for the time associated with the previous segment [7]. An example of a sensor code is given below:

```
function [exectime, data] = senscode(seg, data)
switch seg,
 case 1,
  % Receive data from analog input
  data.y = ttAnalogIn(1);
  exectime = 0.0005;
 case 2,
  % Shows the current time
  ttCurrentTime
  % Send message (80 bits) to node 3 (controller)
  ttSendMsg(3, data, 80)
  exectime = 0.0004;
 case 3,
  exectime = -1; % finished
end
```
This function implements a simple sensor node. In the first segment, the plant is sampled using an execution time of 0.5 ms. In the second segment, the sampled value is sent to the controller node. The third segment indicates the end of execution by returning a negative execution time. The structure data represents the local memory and is used to store the measured variable between calls to the different segments. The kernel primitives ttAnalogIn and ttAnalogOut perform A/D and D/A conversion. Besides A/D and D/A conversion, a large set of kernel primitives exists which can be called from code functions [12].

Monitors and events support *synchronization* between tasks. Monitors are used to guarantee mutual exclusion when accessing common data. Events are associated with monitors to represent condition variables [1].
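A minimal sketch of how such synchronization could be declared in an initialization script (the object names are hypothetical; the primitives follow the TrueTime 1.x convention of the examples above):

```
% Monitor guaranteeing mutual exclusion on shared data
ttCreateMonitor('data_mutex');
% Event (condition variable) associated with the monitor
ttCreateEvent('data_ready', 'data_mutex');
```

A task code segment would then wrap its accesses in ttEnterMonitor('data_mutex') and ttExitMonitor('data_mutex'), and, assuming the ttWait and ttNotifyAll primitives, block on the event with ttWait('data_ready') until another task signals it.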

Different *output graphs* are generated by the TrueTime blocks. Each computer block produces two graphs. A computer graph displays the execution trace of each task and interrupt handler during the simulation: a high signal means that the task is running, a medium signal indicates that the task is ready but not running, and a low signal means that the task is idle. A monitor graph shows which tasks are holding and waiting on the different monitors during the simulation [1].

**2.2. The network block**

The TrueTime network block simulates the physical layer and the medium-access layer of several local-area networks: CSMA/CD (Ethernet), CSMA/AMP (CAN), Round Robin
