Moreover, *f<sub>fact</sub>*(*n*) = Γ(*f<sub>fact</sub>*)(*n*) if and only if *f<sub>fact,1</sub>*(*n*) = Ψ*<sup>T</sup>*(*f<sub>fact,1</sub>*)(*n*) and *f<sub>fact,2</sub>*(*n*) = *M*(*f<sub>fact,2</sub>*)(*n*) for all *n* ∈ **N**.

Notice that a complexity function (a function belonging to C) is a solution to the recurrence equation (15) if and only if it is a fixed point of the functional Ψ*<sup>T</sup>*, and that a word satisfies the denotational specification (14) if and only if it is a fixed point of the mapping *M*. Whence we obtain that *f<sub>fact,1</sub>* is the solution to the recurrence equation (15) and that *f<sub>fact,2</sub>* is the solution to the recursive denotational specification (14). Moreover, by construction of *M*, *f<sub>fact,2</sub>*(*n*) = *n*! for all *n* ∈ **N**. Hence *f<sub>fact,1</sub>* represents the running time of the recursive algorithm under study and *f<sub>fact,2</sub>* provides the meaning of the recursive algorithm that computes the factorial of a nonnegative number by means of the denotational specification (14), respectively.
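The fixed-point reading of the pair (*f<sub>fact,1</sub>*, *f<sub>fact,2</sub>*) can be illustrated with a small numerical experiment. The sketch below is not part of the chapter's formalism: it assumes, purely for illustration, the usual factorial running-time recurrence T(0) = c, T(n) = T(n−1) + c as equation (15) and the specification fact(0) = 1, fact(n) = n · fact(n−1) as (14); the names `psi_T`, `M`, `N` and `C` are ours. It iterates the two functionals on a finite fragment of the domain until they stabilize.

```python
import math

N = 10   # finite fragment of the domain of n (hypothetical cut-off)
C = 1    # assumed constant cost per recursive call (hypothetical)

def psi_T(f):
    """Functional of the assumed running-time recurrence (15): T(0)=C, T(n)=T(n-1)+C."""
    return {n: C if n == 0 else f[n - 1] + C for n in range(N + 1)}

def M(w):
    """Functional of the assumed denotational specification (14): fact(0)=1, fact(n)=n*fact(n-1)."""
    return {n: 1 if n == 0 else n * w[n - 1] for n in range(N + 1)}

def fixed_point(F, start):
    """Iterate F from `start` until the finite fragment stops changing."""
    cur = start
    while True:
        nxt = F(cur)
        if nxt == cur:
            return cur
        cur = nxt

f1 = fixed_point(psi_T, {n: 0 for n in range(N + 1)})  # running time, f_fact,1
f2 = fixed_point(M, {n: 0 for n in range(N + 1)})      # meaning, f_fact,2

# The fixed point of M is the factorial function, and the fixed point of
# psi_T is the (linear) running time of the assumed recurrence.
assert all(f2[n] == math.factorial(n) for n in range(N + 1))
assert all(f1[n] == C * (n + 1) for n in range(N + 1))
```

Under these assumptions the iteration stabilizes after finitely many steps on the chosen fragment, and the unique fixed point of `M` indeed satisfies *f<sub>fact,2</sub>*(*n*) = *n*!.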

So we have shown that generalized complexity spaces in the sense of (Romaguera & Schellekens, 2000) are useful to describe, at the same time, the complexity and the program correctness of recursive algorithms given by recursive denotational specifications. The exposed method is also based on fixed point techniques, like the Schellekens and Scott techniques, but it presents an advantage with respect to the latter ones. Indeed, the method provided by generalized complexity spaces makes it possible to discuss the correctness of recursive algorithms, which is an improvement on the Schellekens technique, and, in addition, it allows the correctness of recursive algorithms to be analyzed by means of quantitative techniques, which is an improvement on the classical Scott technique which, as we have pointed out in Section 1, is based only on qualitative reasonings.

**6. Conclusions**

In 1970, D.S. Scott introduced a mathematical framework, as a part of the foundations of Denotational Semantics, based on topological spaces endowed with an order relation that represents the computational information, which made it possible to model the meaning of recursive denotational specifications by means of qualitative fixed point techniques. Later on, in 1995, M.P. Schellekens showed a connection between Denotational Semantics and Asymptotic Complexity Analysis, applying the original Scott ideas to analyze the running time of Divide and Conquer algorithms but, this time, via quantitative fixed point techniques (the topology is induced by a quasi-metric that provides quantitative information about the elements of the mathematical framework). In this chapter, we have extended the Schellekens technique in order to discuss the complexity of recursive algorithms that do not belong to the Divide and Conquer family. In particular, we have introduced a new fixed point technique that yields the asymptotic complexity behavior of the running time of the aforesaid recursive algorithms. The new technique has the advantage of providing asymptotic upper and lower bounds of the running time, whereas the Schellekens technique only provides asymptotic upper bounds. Furthermore, we have gone more deeply into the relationship between Denotational Semantics and Asymptotic Complexity Analysis, constructing a mathematical approach which allows, in the spirit of Scott and Schellekens, to model at the same time the running time and the meaning of a recursive algorithm that performs a task via a recursive denotational specification, by means of quantitative fixed point techniques, and thus presents an improvement with respect to the Scott and Schellekens approaches.

**7. Acknowledgements**

The authors acknowledge the support of the Spanish Ministry of Science and Innovation, grant MTM2009-12872-C02-01.

**8. References**


**6**

**A Semantic Framework for the Declarative Debugging of Wrong and Missing Answers in Declarative Constraint Programming**

Rafael del Vado Vírseda and Fernando Pérez Morente

*Universidad Complutense de Madrid*
*Spain*

**1. Introduction**

Debugging tools are a practical need for helping programmers understand why their programs do not work as intended. Declarative programming paradigms involving complex operational details, such as constraint solving and lazy evaluation, do not fit well with traditional debugging techniques relying on the inspection of low-level computation traces. As a solution to this problem, and following a seminal idea by Shapiro (Shapiro, 1982), *declarative debugging* (a.k.a. *declarative diagnosis* or *algorithmic debugging*) uses *Computation Trees* (shortly, *CT*s) in place of traces. *CT*s are built *a posteriori* to represent the structure of a computation whose top-level outcome is regarded as a *symptom* of the unexpected behavior by the user. Each node in a *CT* represents the computation of some observable result, depending on the results of its children nodes, using a program fragment also attached to the node. Declarative diagnosis explores a *CT* looking for a so-called *buggy node* which computes an unexpected result from children whose results are all expected. Each buggy node points to a program fragment responsible for the unexpected behavior. The search for a buggy node can be implemented with the help of an external *oracle* (usually the user with some semiautomatic support) who has a reliable declarative knowledge of the expected program semantics, the so-called *intended interpretation*.
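The search for a buggy node can be sketched as a simple top-down traversal. The sketch below is only illustrative and not code from this chapter: the tree representation, the `expected` predicate (standing in for the oracle's intended interpretation), and the example computation are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class CTNode:
    """A node of a Computation Tree: an observed result plus the
    program fragment used to compute it from the children's results."""
    result: str
    fragment: str                      # program fragment attached to the node
    children: List["CTNode"] = field(default_factory=list)

def find_buggy(node: CTNode, expected: Callable[[str], bool]) -> Optional[CTNode]:
    """Search a CT whose root result is a symptom for a buggy node: one whose
    result is unexpected while all its children's results are expected.
    `expected` plays the role of the oracle (the intended interpretation)."""
    if expected(node.result):
        return None                    # no symptom here, nothing to diagnose
    for child in node.children:
        buggy = find_buggy(child, expected)
        if buggy is not None:
            return buggy               # the symptom is traced deeper into the CT
    return node                        # all children expected: this node is buggy

# Hypothetical example: the root symptom 'sum([1,2]) = 4' is caused by the
# fragment that (incorrectly) computed 'sum([2]) = 3' from a correct child.
leaf = CTNode("sum([]) = 0", "sum [] -> 0")
bad = CTNode("sum([2]) = 3", "sum (x:xs) -> x + sum xs", [leaf])
root = CTNode("sum([1,2]) = 4", "sum (x:xs) -> x + sum xs", [bad])

oracle = lambda r: r in {"sum([]) = 0", "sum([2]) = 2", "sum([1,2]) = 3"}
culprit = find_buggy(root, oracle)
```

Here `culprit` is the node for `sum([2]) = 3`: its result is unexpected while its only child's result is expected, so the program fragment attached to it is the one reported to the user.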

The generic description of declarative diagnosis in the previous paragraph follows (Naish, 1997). Declarative diagnosis was first proposed in the field of *Logic Programming* (*LP*) (Ferrand, 1987; Lloyd, 1987; Shapiro, 1982), and it has been successfully extended to other declarative programming paradigms, including (lazy) *Functional Programming* (*FP*) (Nilsson, 2001; Nilsson & Sparud, 1997; Pope, 2006; Pope & Naish, 2003), *Constraint Logic Programming* (*CLP*) (Boye et al., 1997; Ferrand et al., 2003; Tessier & Ferrand, 2000), and *Functional-Logic Programming* (*FLP*) (Caballero & Rodríguez, 2004; Naish & Barbour, 1995). The nature of unexpected results differs according to the programming paradigm. Unexpected results in *FP* are mainly *incorrect values*, while in *CLP* and *FLP* an unexpected result can be either a single computed answer regarded as *incorrect*, or a set of computed answers (for one and the same goal with a finite search space) regarded as *incomplete*. These two possibilities give rise to the declarative debugging of *wrong* and *missing* computed answers, respectively. The case of unexpected *finite failure* of a goal is a particular symptom of missing answers with special relevance. However, diagnosis methods must consider the most general case, since finite


