Fig. 5. The context-switching metric distinguishes complicated but synchronized architectures from those with complex temporal behaviour. (top) High entropy (S ~ 15): 15 tasks with Poisson events. (bottom) Lower entropy (S ~ 12-13): 15 tasks with randomized events.

The multi-scale aspect senses the different temporal frequencies in the underlying signal, much as the context-switching metric does, but it instead pairs or groups the data points to measure the effect of a different coarse graining. This works out fairly straightforwardly in terms of an autocorrelation. The essential algorithm groups adjacent time samples into windows of length *Scale*, taking the average of each window as the coarse-grained value. It then counts the number of times, *n*, that the amplitude changes from one coarse-grained time step to the next.

If the amplitudes do not change at a given coarse-graining scale, the signal is predictable there and the entropy will be low. To calculate the sample entropy, the algorithm evaluates

$$\text{Complexity(Scale)} = -\log\left(\frac{n(\text{Scale}+1)}{n(\text{Scale})}\right) \tag{6}$$

over each of the scale factors *Scale* = 1 .. *maxScaleFactor*.
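As a concrete illustration, here is a minimal Python sketch of the procedure: Costa-style coarse graining followed by a sample-entropy estimate, where the match counts play the role of *n*(·) in Eq. (6). The helper names (`coarse_grain`, `sample_entropy`, `multiscale_entropy`), the template length m = 2, and the tolerance r = 0.15·σ are illustrative assumptions, not details taken from the text.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average adjacent samples in non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """-log of the ratio of template matches of length m+1 to length m, as in Eq. (6)."""
    if r is None:
        r = 0.15 * np.std(x)          # assumed tolerance; a common default
    def matches(length):
        # All templates of the given length, stacked as rows of a matrix.
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        # Chebyshev distance between every pair; count pairs within r,
        # excluding the trivial self-matches on the diagonal.
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.count_nonzero(d <= r) - len(t)) / 2
    a, b = matches(m + 1), matches(m)  # n(m+1) and n(m) in Eq. (6)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=20, m=2):
    """One entropy value per scale factor, Scale = 1 .. max_scale."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(coarse_grain(x, s), m) for s in range(1, max_scale + 1)]
```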

### **3.1 Usage domains**


The multi-scale metric is well suited for cyber-physical applications such as complex event-driven systems. In this case, the time scales can range from fast interrupt processing, to human-scale interactivity, to the even more sporadic environmental influences.

A graph of the multi-scale entropy will appear flat if the underlying behaviour is "1/f" (van der Ziel, 1950), or so-called pink, noise. Pink noise shows a predictable, constant change of amplitude density per scale factor; in other words, it has constant energy per frequency doubling (per octave), while white noise has constant energy per frequency interval (Montroll & Shlesinger, 2002).
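To see this flatness empirically, one can synthesize pink noise by spectrally shaping white noise and run it through the `multiscale_entropy` sketch above. The `pink_noise` helper and the 1/√f amplitude shaping are assumptions for illustration; any 1/f generator would serve.

```python
import numpy as np

def pink_noise(n, seed=0):
    """Shape white Gaussian noise to a 1/f power spectrum (equal energy per octave)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                   # sidestep the divide-by-zero at DC
    spectrum /= np.sqrt(f)        # 1/f power spectrum implies 1/sqrt(f) amplitude
    return np.fft.irfft(spectrum, n)

# White noise loses entropy with scale as averaging smooths it out;
# the pink-noise curve stays roughly flat, as the text describes.
mse_white = multiscale_entropy(np.random.default_rng(1).standard_normal(2048))
mse_pink = multiscale_entropy(pink_noise(2048))
```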

In contrast to structure-less noise, if structure does exist in the signal, observable changes appear in the entropy from one scale factor to the next. For example, a superimposed sine wave shows a downward spike in sample entropy when the scale factor crosses one of its harmonics.
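A quick, hypothetical check of that claim, reusing the helpers sketched above; the 16-sample period and 0.5 amplitude are arbitrary choices for illustration:

```python
import numpy as np

n = 2048
tone = 0.5 * np.sin(2 * np.pi * np.arange(n) / 16)   # sine with a 16-sample period
mse = multiscale_entropy(pink_noise(n) + tone)
# Look for downward spikes in `mse` at scale factors tied to the 16-sample
# period, where coarse graining lines up with the sine and the coarse-grained
# signal looks most regular.
```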

A simple interpretation suggests that we scale the measured results relative to the *1/f* noise component of the signal. The *1/f* noise contains the greatest variety of frequencies of any known behaviour and therefore the highest entropy (Milotti, 2002). So, by providing a visualization or graph that plots the *1/f* asymptotic value, we can immediately gauge the complexity of a signal against it. Costa *et al.* discuss the difficulty of distinguishing between randomness and increasing complexity, which matters in the realm of event-driven systems.

> *"In fact, entropy-based metrics are maximized for random sequences, although it is generally accepted that both perfectly ordered and maximally disordered systems possess no complex structures. A meaningful physiologic complexity measure, therefore, should vanish for these two extreme states."*

This is a key insight, and one echoed by researchers in complexity theory (Gell-Mann, 1994): the most interesting and challenging complexity measures occupy the middle range of the complexity scale. In other words, the most ordered signals can be described by a few harmonic periods, while at the other extreme the complexity reduces to simple stochastic measures akin to statistical mechanics (Reif, 1965). In between these extremes, we require a different level of sophistication.


Fig. 7. Gantt chart of a typical vehicle-system execution trace showing interacting threads. Time proceeds left to right, with one thread per horizontal entry. The lines indicate thread synchronization points. This diagram is only meant to give a notional idea of the complexity; the text description along the left edge is irrelevant to the discussion.

### **3.2 Comparison to single-scale metric**

The same inputs used for the context-switching metric described earlier produce *Figure 6*. Note that here, too, the sample entropy is always higher for the disordered signal than for the ordered one. The reference *1/f* noise level is shown on the plot to indicate the asymptotic maximum entropy level achievable.

Fig. 6. For the same pair of inputs used with the context-switching metric, the multi-scale entropy appears as shown. It exhibits greater variety than the context-switching metric across the time scales because the metric compares the signal at different levels of resolution.
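A sketch of how such a plot might be produced, assuming the `multiscale_entropy` and `pink_noise` helpers from the earlier sketches; `signal` is a placeholder for whichever execution trace is under study:

```python
import numpy as np
import matplotlib.pyplot as plt

signal = pink_noise(2048, seed=3)        # placeholder for the trace under study
scales = np.arange(1, 21)
plt.plot(scales, multiscale_entropy(signal, max_scale=20), "o-", label="measured signal")
plt.plot(scales, multiscale_entropy(pink_noise(2048, seed=7), max_scale=20),
         "--", label="1/f reference (asymptotic maximum)")
plt.xlabel("scale factor")
plt.ylabel("sample entropy")
plt.legend()
plt.show()
```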

In practice, the multi-scale algorithm requires only a basic periodogram method invoked over the different time scales. The output is one value per temporal scale factor, so the results are best displayed as a graph, via a spreadsheet or bar-charting software for example. The calculation is somewhat more brute force than the FFT, with complexity $O(n^2)$ versus $O(n \log n)$. The context-switching metric operates over a narrower time scale, so it gets rolled into a single value, simplifying the presentation into a classical scalar metric.
