**2. Fundamentals of Asymptotic Complexity Analysis**

From now on, the letters **R**<sup>+</sup> and **N** will denote the set of nonnegative real numbers and the set of positive integers, respectively.

Our basic reference for complexity analysis of algorithms is (Brassard & Bratley, 1988).

In Computer Science, the complexity analysis of an algorithm is based on mathematically determining the quantity of resources needed by the algorithm to solve the problem for which it has been designed.

A typical resource, playing a central role in complexity analysis, is the running time of computing. Since there are often several algorithms that solve the same problem, one objective of complexity analysis is to assess which of them is faster when large inputs are considered. To this end, their running times must be compared. This is usually done by means of asymptotic analysis, in which the running time of an algorithm is represented by a function *T* : **N** → (0, ∞] in such a way that *T*(*n*) represents the time taken by the algorithm to solve the problem under consideration when the input of the algorithm is of size *n*. Of course, the running time of an algorithm does not depend only on the input size *n*, but also on the particular input of size *n* (and the distribution of the data). Thus the running time of an algorithm may differ when the algorithm processes different instances of input data of the same size *n*. As a consequence, it is usually necessary to distinguish three possible behaviours when the running time of an algorithm is discussed: the so-called best case, worst case and average case. The best case and the worst case for an input of size *n* are defined by the minimum and the maximum running time over all inputs of size *n*, respectively. The average case for an input of size *n* is defined by the expected value, or average, of the running time over all inputs of size *n*.

In general, given an algorithm, determining exactly the function which describes its running time is an arduous task. However, in most situations it is more useful to know the running time of an algorithm in an "approximate" way than in an exact one. For this reason, Asymptotic Complexity Analysis focuses on obtaining the "approximate" running time.

In order to recall the notions from Asymptotic Complexity Analysis which will be useful for our aim later on, let us assume that *f* : **N** → (0, ∞] denotes the running time of a certain algorithm under study. Moreover, suppose that there exists a function *g* : **N** → (0, ∞] for which there exist, simultaneously, *n*<sub>0</sub> ∈ **N** and *c* ∈ **R**<sup>+</sup> satisfying *f*(*n*) ≤ *cg*(*n*) for all *n* ∈ **N** with *n* ≥ *n*<sub>0</sub> (≤ and ≥ stand for the usual orders on **R**<sup>+</sup>). Then the function *g* provides an asymptotic upper bound of the running time of the studied algorithm. Hence, if we do not know the exact expression of the function *f*, the function *g* gives "approximate" information about the running time of the algorithm for each input size *n*, in the sense that the time the algorithm takes to process the input data of size *n* is bounded above, up to the constant factor *c*, by the value *g*(*n*). Following the standard notation, when *g* is an asymptotic upper bound of *f* we will write *f* ∈ O(*g*).

In the analysis of the complexity of an algorithm, besides obtaining an asymptotic upper bound, it is useful to assess an asymptotic lower bound of the running time. In this case the Ω-notation plays a central role. Indeed, the statement *f* ∈ Ω(*g*) means that there exist *n*<sub>0</sub> ∈ **N** and *c* ∈ **R**<sup>+</sup> such that *cg*(*n*) ≤ *f*(*n*) for all *n* ∈ **N** with *n* ≥ *n*<sub>0</sub>. Of course, and similarly to the O-notation case, when the time taken by the algorithm to process an input of size *n*, *f*(*n*), is unknown, the function *g* yields "approximate" information about the running time of the algorithm in the sense that the time the algorithm takes to process the input data of size *n* is bounded below, up to the constant factor *c*, by *g*(*n*).

Of course, when the complexity of an algorithm is discussed, the best situation matches the case in which we can find a function *g* : **N** → (0, ∞] such that the running time *f* satisfies *f* ∈ O(*g*) ∩ Ω(*g*), denoted by *f* ∈ Θ(*g*), since in this case we obtain a "tight" asymptotic bound of *f* and, thus, complete asymptotic information about the time taken by the algorithm to solve the problem under consideration (or, equivalently, to process the input data of size *n* for each *n* ∈ **N**). From now on, we will say that *f* belongs to the asymptotic complexity class of *g* whenever *f* ∈ Θ(*g*).

In the light of the above, from an asymptotic complexity analysis viewpoint, to determine the running time of an algorithm consists of obtaining its asymptotic complexity class.

**3. Quasi-metric spaces and Asymptotic Complexity Analysis: the Schellekens approach**

In 1995, Schellekens introduced a new mathematical framework, now known as the Complexity Space, with the aim of contributing to the topological foundation of the Asymptotic Complexity Analysis of Algorithms via the application of the original Scott ideas for Denotational Semantics, that is, via the application of fixed point and successive approximations reasoning (Schellekens, 1995). This framework is based on the notion of a quasi-metric space.

Following (Künzi, 1993), a quasi-metric on a nonempty set *X* is a function *d* : *X* × *X* → **R**<sup>+</sup> such that for all *x*, *y*, *z* ∈ *X*:

(*i*) *d*(*x*, *y*) = *d*(*y*, *x*) = 0 ⇔ *x* = *y*;

(*ii*) *d*(*x*, *y*) ≤ *d*(*x*, *z*) + *d*(*z*, *y*).

Of course, a metric on a nonempty set *X* is a quasi-metric *d* on *X* satisfying, in addition, the following condition for all *x*, *y* ∈ *X*:

(*iii*) *d*(*x*, *y*) = *d*(*y*, *x*).

A quasi-metric space is a pair (*X*, *d*) such that *X* is a nonempty set and *d* is a quasi-metric on *X*.

Each quasi-metric *d* on *X* generates a *T*<sub>0</sub>-topology T(*d*) on *X* which has as a base the family of open *d*-balls {*B<sub>d</sub>*(*x*, *ε*) : *x* ∈ *X*, *ε* > 0}, where *B<sub>d</sub>*(*x*, *ε*) = {*y* ∈ *X* : *d*(*x*, *y*) < *ε*} for all *x* ∈ *X* and *ε* > 0.

Given a quasi-metric *d* on *X*, the function *d<sup>s</sup>* defined on *X* × *X* by

*d<sup>s</sup>*(*x*, *y*) = max(*d*(*x*, *y*), *d*(*y*, *x*)),

is a metric on *X*.

A quasi-metric space (*X*, *d*) is called bicomplete whenever the metric space (*X*, *d<sup>s</sup>*) is complete.

A well-known example of a bicomplete quasi-metric space is the pair ((0, ∞], *u*<sup>−1</sup>), where

*u*<sup>−1</sup>(*x*, *y*) = max(1/*y* − 1/*x*, 0),

for all *x*, *y* ∈ (0, ∞]. Obviously we adopt the convention that 1/∞ = 0. The quasi-metric space ((0, ∞], *u*<sup>−1</sup>) plays a central role in the Schellekens framework. Indeed, let us recall that the
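As a small sanity check of the quasi-metric *u*<sup>−1</sup>(*x*, *y*) = max(1/*y* − 1/*x*, 0) on (0, ∞] discussed above, the following Python sketch (the function names `u_inv` and `u_inv_s` are our own, purely illustrative) implements it together with its induced metric (*u*<sup>−1</sup>)<sup>s</sup> and tests axioms (*i*)–(*iii*) on a few sample points:

```python
import math

def u_inv(x: float, y: float) -> float:
    """Quasi-metric u^{-1}(x, y) = max(1/y - 1/x, 0) on (0, inf],
    with the convention 1/inf = 0."""
    inv = lambda t: 0.0 if math.isinf(t) else 1.0 / t
    return max(inv(y) - inv(x), 0.0)

def u_inv_s(x: float, y: float) -> float:
    """Induced metric (u^{-1})^s(x, y) = max(u^{-1}(x, y), u^{-1}(y, x))."""
    return max(u_inv(x, y), u_inv(y, x))

# Axiom (i): u^{-1}(x, y) = u^{-1}(y, x) = 0  iff  x = y.
assert u_inv(2.0, 2.0) == 0.0
assert u_inv(2.0, 4.0) == 0.0 and u_inv(4.0, 2.0) > 0.0  # asymmetric: not a metric

# Axiom (ii): triangle inequality u^{-1}(x, y) <= u^{-1}(x, z) + u^{-1}(z, y).
for (x, y, z) in [(1.0, 2.0, 3.0), (3.0, 1.0, 2.0), (2.0, math.inf, 5.0)]:
    assert u_inv(x, y) <= u_inv(x, z) + u_inv(z, y) + 1e-12

# Axiom (iii) holds for the symmetrization, which is therefore a metric.
assert u_inv_s(2.0, 4.0) == u_inv_s(4.0, 2.0) == 0.25
```

Note how the asymmetry carries order information: *u*<sup>−1</sup>(*x*, *y*) = 0 exactly when *x* ≤ *y*, which is what makes quasi-metrics, rather than metrics, suited to comparing complexities.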

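The membership conditions for O, Ω and Θ recalled in Section 2 can also be explored numerically. The sketch below (all names and the sample running time are hypothetical) tests the defining inequalities *f*(*n*) ≤ *cg*(*n*) and *c*<sub>1</sub>*g*(*n*) ≤ *f*(*n*) ≤ *c*<sub>2</sub>*g*(*n*) up to a finite horizon; such a check can support, but never prove, an asymptotic claim:

```python
def witnesses_big_o(f, g, c: float, n0: int, horizon: int = 10_000) -> bool:
    """Check the O-definition f(n) <= c*g(n) for n0 <= n <= horizon."""
    return all(f(n) <= c * g(n) for n in range(n0, horizon + 1))

def witnesses_theta(f, g, c1: float, c2: float, n0: int, horizon: int = 10_000) -> bool:
    """f in Theta(g) iff f in O(g) and f in Omega(g):
    c1*g(n) <= f(n) <= c2*g(n) for n >= n0."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, horizon + 1))

f = lambda n: 3 * n * n + 10 * n   # hypothetical running time
g = lambda n: n * n

assert witnesses_big_o(f, g, c=4.0, n0=10)        # 3n^2 + 10n <= 4n^2 for n >= 10
assert witnesses_theta(f, g, c1=3.0, c2=4.0, n0=10)
assert not witnesses_big_o(f, lambda n: n, c=100.0, n0=1)  # f grows faster than n
```

The witnesses *c* = 4 and *n*<sub>0</sub> = 10 were found by hand from 10*n* ≤ *n*<sup>2</sup> ⇔ *n* ≥ 10; the finite horizon merely confirms them on the tested range.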