**Example** 

The theorem specification for the summation program is the following.

**Theorem Specification**:

*Comp(Pr) ∪ Spec ∪ A |= ∀x1/seq(Z), ∀x2/Z (sum(x1,x2) ↔ sumS(x1,x2))*

where the specification of *sumS* is *∀x1/seq(Z), ∀x2/Z (sumS(x1,x2) ↔ x2 = Σ(i=1 to #x1) x1i)*.

**3.7 An example theory and theorem** 

**Representation**:

*theorem\_struct(1, progr1, spec\_struct1, ...).*

The logic specification of *sumS* is:

*∀x1/seq(Z), ∀x2/Z (sumS(x1,x2) ↔ [x1 = <> ∧ x2 = 0 ∨ ∃x4/Z, ∃x5/Z, ∃x6/seq(Z), [x1 = x5::x6 ∧ x2 = x5+x4 ∧ sumS(x6,x4)]])*

The ground representation of the theory is also illustrated. For example, the ground representation of this specification is:

*[all v(1):seq(int), all v(2):int, (sum\_s(v(1):seq(int), v(2):int) <-> ((eq(v(1):seq(int), nil\_seq:seq(int)) /\ eq(v(2):int, 0:int)) \/ [ex v(3):int, ex v(4):int, ex v(5):seq(int), (eq(v(1):seq(int), seq\_cons(v(4):int, v(5):seq(int)):seq(int)) /\ eq(v(2):int, plus(v(4):int, v(3):int)) /\ sum\_s(v(5):seq(int), v(3):int) )]) )]).*

The logic program completion *Comp(Pr)* of *Pr* is as follows.

*∀x1/seq(Z), [p1(x1) ↔ empty\_seq(x1)]*

*∀x1/seq(Z), ∀x2/Z, [p2(x1,x2) ↔ neutral\_add\_subtr\_int(x2)]*

*∀x1/seq(Z), ∀x2/Z, ∀x3/seq(Z), [p5(x1,x2,x3) ↔ head(x1,x2)]*

*∀x1/seq(Z), ∀x2/Z, ∀x3/seq(Z), [p6(x1,x2,x3) ↔ tail(x1,x3)]*

*∀x1/seq(Z), ∀x2/Z, [sum(x1,x2) ↔ (p1(x1) ∧ p2(x1,x2) ∨ [∃x3/Z, ∃x4/seq(Z), ∃x5/Z, [¬p1(x1) ∧ p3(x1,x3,x4) ∧ p4(x1,x3,x5,x2) ∧ sum(x4,x5)]])]*

*∀x1/seq(Z), ∀x2/Z, ∀x3/seq(Z), [p3(x1,x2,x3) ↔ p5(x1,x2,x3) ∧ p6(x1,x2,x3)]*

*∀x1/seq(Z), ∀x2/Z, ∀x3/Z, ∀x4/Z, [p4(x1,x2,x3,x4) ↔ plus\_int(x3,x2,x4)]*
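Read as a recursive definition, the completion of *sum* can be checked on concrete values. The following Python sketch mirrors the definitions of p1-p4 and sum above; it is our own illustration (Python lists stand in for seq(Z)), not code from the system.

```python
# Illustrative check of Comp(Pr): each p-predicate is read as a boolean test.

def p1(x1):               # p1(x1) <-> empty_seq(x1)
    return x1 == []

def p2(x1, x2):           # p2(x1,x2) <-> neutral_add_subtr_int(x2), i.e. x2 = 0
    return x2 == 0

def p3(x1, x3, x4):       # p3(x1,x3,x4) <-> head(x1,x3) and tail(x1,x4)
    return x1 != [] and x1[0] == x3 and x1[1:] == x4

def p4(x1, x2, x3, x4):   # p4(x1,x2,x3,x4) <-> plus_int(x3,x2,x4), i.e. x4 = x2 + x3
    return x4 == x2 + x3

def sum_holds(x1, x2):
    """sum(x1,x2) holds iff p1 and p2 hold, or the recursive disjunct holds
    with the (unique) witnesses for head x3, tail x4 and partial sum x5."""
    if p1(x1):
        return p2(x1, x2)
    x3, x4 = x1[0], x1[1:]   # only witnesses satisfying p3(x1,x3,x4)
    x5 = x2 - x3             # only witness satisfying p4(x1,x3,x5,x2)
    return p3(x1, x3, x4) and p4(x1, x3, x5, x2) and sum_holds(x4, x5)
```

For instance, `sum_holds([5, 3], 8)` holds while `sum_holds([1], 2)` does not, matching the intended summation relation.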

Domain closure axiom for sequences

*∀x1/seq(a2), [x1 = <> ∨ (∃x3/a2, ∃x4/seq(a2), [x1 = x3::x4])]* 

Its ground representation is shown in section 3.4.

Uniqueness axioms for sequences


#### **Representation**:


Definition of summation operation over 0 entities

*∀x1/seq(Z), [x1 = <> → Σ(i=1 to #x1) x1i = 0]* 

#### **Representation**:

 *axiom\_def(4, sequences, "summation over 0 entities", [all v(1):seq(int), [eq(v(1):seq(int),nil\_seq) -> eq(sum(1:int,len(v(1):seq(int)):int, v(1,v(3):nat):int), 0:int)]]).* 

*Lemmas* 

Knowledge Representation in a Proof Checker for Logic Programs 173

**4.1 Schematic view of the theorem proof checker** 

[Fig. 3 depicts the user interacting with the **Knowledge Base** (specifications, axioms, lemmas, specs of DT operations, logic programs) in three numbered steps: (1) Select Theory, Select Proof Scheme, Specify Theorem; (2) Make necessary selections; (3) Perform Proof Step.]

Fig. 3. Schematic View of the Theorem Proof Checker.

The process of proving a theorem is shown in Fig. 3 and consists of three steps.

Step 1: In order to prove the correctness of a theorem the user initially has to specify the theorem that is going to be proved and to select the theory and the proof scheme that will be used for the proof. The theory is retrieved from the *KB* and it is presented to the user for selection. It consists of a *program completion*, a *logic specification*, *axioms* and *lemmas*. The corresponding window of the interface which allows the user to make these selections is shown in Fig. 6.

Step 2: After the selection the user proceeds to the actual proof of the specific theorem. In order to do that he has to select specific parts from the theorem, the theory and the transformation rules that will be applied. The transformation rules that can be applied are first-order logic (FOL) laws, folding and unfolding.

Step 3: In this step the selected transformation is applied and the equivalent form of the theorem is presented to the user. The user can validate the result. He is allowed to approve or cancel the specific proof step.

The last two steps are performed iteratively until the theorem is proved.
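The three-step process described above can be sketched as a simple interactive loop. The function names below are hypothetical, not the system's actual procedures.

```python
# A minimal sketch of the interaction: step 1 fixes theorem, theory and proof
# scheme once; steps 2 and 3 then repeat until the theorem is proved.

def prove(theorem, theory, select, approve):
    """select picks theory elements and a transformation rule (step 2);
    approve lets the user validate or cancel the derived theorem (step 3)."""
    while theorem != 'true':                      # placeholder proved-test
        elements, rule = select(theorem, theory)  # step 2: make selections
        candidate = rule(theorem, elements)       # step 3: apply the rule
        if approve(candidate):                    # step 3: approve or cancel
            theorem = candidate
    return theorem
```

A cancelled step simply leaves the current theorem unchanged, so the loop can retry with different selections.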

*∀x1/seq(a2), [x1 ≠ <> ↔ [∃x3/a2, ∃x4/seq(a2), [x1 = x3::x4]]]*

*∀x1/a2, ∀x3/seq(a2), ∀x4/seq(a2), [x3 = x1::x4 → (∀x5/N, [2 ≤ x5 ≤ #x3 → x3(x5) = x4(x5-1)])]*

*∀x1/seq(a2), ∀x3/seq(a2), ∀x4/a2, [x1 = x4::x3 → #x1 = #x3 + 1]*

*∀x1/a2, ∀x3/seq(a2), ∀x4/seq(a2), [x3 = x1::x4 → x1 = x3(1)]* 

#### **Representation**:

*lemma\_sp(1, sequences, "Non-empty sequences have at least one element", [all v(1):seq(tv(1)), [~eq(v(1):seq(tv(1)), nil\_seq:seq(tv(1))) <-> [ex v(2):tv(1), ex v(3):seq(tv(1)), [eq(v(1):seq(tv(1)), seq\_cons(v(2):tv(1), v(3):seq(tv(1))):seq(tv(1)))]]]]).* 

*lemma\_sp(2, sequences, "if sequence s has tail t then the element si is identical to the element ti-1", [all v(1):tv(1), all v(2):seq(tv(1)), all v(3):seq(tv(1)), [eq(v(2):seq(tv(1)), seq\_cons(v(1):tv(1), v(3):seq(tv(1))):seq(tv(1))) -> (all v(4):nat, [le(2:nat, v(4):nat) /\ le(v(4):nat, len(v(2):seq(tv(1))):nat) -> eq(v(2, v(4):nat):tv(1), v(3, minus(v(4):nat, 1:nat)))])]]).* 

*lemma\_sp(3, sequences, "If sequence s has tail t then the length of s is equal to the length of t plus 1", [all v(1):seq(tv(1)), all v(2):seq(tv(1)), all v(3):tv(1), [eq(v(1):seq(tv(1)), seq\_cons(v(3):tv(1), v(2):seq(tv(1))):seq(tv(1))) -> eq(len(v(1):seq(tv(1))):nat, plus(len(v(2):seq(tv(1))):nat, 1:nat))]]).* 

*lemma\_sp(4, sequences, "If sequence s is non-empty then its head h is identical to its first element", [all v(1):tv(1), all v(2):seq(tv(1)), all v(3):seq(tv(1)), [eq(v(2):seq(tv(1)), seq\_cons(v(1):tv(1), v(3):seq(tv(1))):seq(tv(1))) -> eq(v(1):tv(1), v(2, 1:int):tv(1))]]).* 

*Logic specifications of DT operations* 

*∀x1/seq(a2), [empty\_seq(x1) ↔ x1 = <>]*

*∀x1/Z, [neutral\_add\_subtr\_int(x1) ↔ x1 = 0]*

*∀x1/seq(a2), ∀x3/a2, [head(x1,x3) ↔ [x1 ≠ <> ∧ [∃x4/seq(a2), [x1 = x3::x4]]]]*

*∀x1/seq(a2), ∀x3/seq(a2), [tail(x1,x3) ↔ [∃x4/a2, [x1 ≠ <> ∧ x1 = x4::x3]]]*

*∀x1/Z, ∀x2/Z, ∀x3/Z, [plus\_int(x1,x2,x3) ↔ x3 = x2 + x1]*

#### **Representation**:

*dtOp\_sp(empty\_seq, 1, "seq: empty", [all v(1):seq(tv(1)), [empty\_seq(v(1):seq(tv(1))) <-> eq(v(1):seq(tv(1)), nil\_seq:seq(tv(1)))]]).* 

*dtOp\_sp(head, 2, "seq: head", [all v(1):seq(tv(1)), all v(2):tv(1), [head(v(1):seq(tv(1)), v(2):tv(1)) <-> [~eq(v(1):seq(tv(1)), nil\_seq:seq(tv(1))) /\ [ex v(3):seq(tv(1)), [eq(v(1):seq(tv(1)), seq\_cons(v(2):tv(1), v(3):seq(tv(1))):seq(tv(1)))]]]]]).* 

*dtOp\_sp(tail, 3, "seq: tail", [all v(1):seq(tv(1)), all v(2):seq(tv(1)), [tail(v(1):seq(tv(1)), v(2):seq(tv(1))) <-> [ex v(3):tv(1), [~eq(v(1):seq(tv(1)), nil\_seq:seq(tv(1))) /\ eq(v(1):seq(tv(1)), seq\_cons(v(3):tv(1), v(2):seq(tv(1))):seq(tv(1)))]]]]).* 

*dtOp\_sp(neutral\_add\_subtr\_int, 8, "int: neutral\_add\_subtr\_int", [all v(1):int, [neutral\_add\_subtr\_int(v(1):int) <-> eq(v(1):int, 0:int)]]).* 

*dtOp\_sp(plus\_int, 9, "int: plus\_int", [all v(1):int, all v(2):int, all v(3):int, [plus\_int(v(1):int, v(2):int, v(3):int) <-> eq(v(3):int, plus(v(2):int, v(1):int))]]).* 

#### **4. Schematic view of the Interaction of the main components**

In this section, a schematic view of the proof checker and the interaction of its main components will be shown. In addition, the functions of its components will be discussed. An example of a proof step will illustrate the use of the KB representation in the proof task.

#### **4.2 Schematic view of specification transformer**

Fig. 4 depicts the procedure for transforming a specification into the required structured form, which is similar to the previous case. In this case, however, the underlying theory consists of *Spec* ∪ *Axioms* ∪ *Lemmas*. Initially, the user selects a specification; the remaining elements of the theory are then selected automatically by the system. Next, in step 2, the user has to select specific theory elements and transformation rules. In step 3, the selected transformation rule is performed. Steps 2 and 3 are performed iteratively until the specification is transformed into the required structured form.

Fig. 4. Schematic View of Specification Transformer.

#### **4.3 Illustration of a proof step**

The "*Transformation Step*" procedure is actually a sub-procedure of the "*Perform Proof Step*" procedure, and that is why we do not present it separately. The schematic view of the main algorithm for the procedure which performs a proof step, i.e. "*performProofStep*", is shown in Fig. 5. It is assumed that the user has selected some theory elements, and a transformation rule that should be applied to the current proof step.

Fig. 5. Schematic view of the "performProofStep" procedure.

The function block diagram of the algorithm "*performProofStep"* shown in Fig. 5 will be discussed through an example. Consider that our theorem has been transformed and its current form is the following:

*∀x1/seq(Z), ∀x2/Z (sumS(x1,x2) ↔ (x1 = <> ∧ x2 = 0 ∨ [∃x3/Z, ∃x4/Z, ∃x5/seq(Z), [false ∧ x2 = x4+x3 ∧ sumS(x5,x3)]]))* 

The user has selected the following FOL law to be applied to the above theorem:

*P ∧ false ↔ false* 
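One way to picture this law as a rewrite on the ground theorem is to hold formulas as nested tuples and collapse the user-selected conjunction. This is our own illustrative sketch; the system itself asserts Prolog clauses instead.

```python
# Applying  P /\ false <-> false  to the selected node of a tuple-encoded formula.

def apply_and_false(f):
    """One application of the law at the selected conjunction node."""
    assert f[0] == 'and' and 'false' in f[1:], "law does not apply here"
    return 'false'

eq_t  = ('eq', 'v2', ('plus', 'v4', 'v3'))        # eq(v(2), plus(v(4),v(3)))
sum_t = ('sum_s', 'v5', 'v3')                     # sum_s(v(5), v(3))
old   = ('and', ('and', 'false', eq_t), sum_t)    # false /\ eq(...) /\ sum_s(...)

# the user selects the inner conjunction; the law collapses it to false,
# so the derived theorem keeps  false /\ sum_s(v(5), v(3))
new = ('and', apply_and_false(old[1]), sum_t)
```

Applied once more to `new`, the same law would collapse the remaining conjunction as well.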

174 Advances in Knowledge Representation

Fig. 4 depicts the procedure for transforming a specification in the required structured form, which is similar to the previous case. In this case however, the underlying theory consists of *Spec* U *Axioms* U *Lemmas*. Initially, the user selects a specification, then the rest elements of the theory are automatically selected by the system. Next, in step 2, the user has to select specific theory elements and transformation rules. In step 3, the selected transformation rule is performed. Step 2 and step 3 are performed iteratively until the specification is

> Make necessary selections

Select Logic Specification and rest elements of the theory

Transformation Step

The "*Transformation Step"* procedure is actually a sub-procedure of the *"Perform Proof Step"* procedure and that is why we will not present it. The schematic view of the main algorithm for the procedure which performs a proof step, i.e. "*performProofStep", is* shown in Fig. 5*.* It is assumed that the user has selected some theory elements, and a transformation rule that

**Knowledge Base**


**4.2 Schematic view of specification transformer** 

transformed in the required structured form.

1

2

3

Fig. 4. Schematic View of Specification Transformer.

should be applied to the current proof step.

**4.3 Illustration of a proof step** 

user

Initially, the current theorem is converted to the corresponding ground representation by the procedure "*ConvertGr*" and we get:

```
[all v(1):seq(int),[all v(2):int,sum_s(v(1):seq(int),v(2):int)<-> 
 (eq(v(1):seq(int),nil_seq:seq(int))/\eq(v(2):int,0:int) \/ 
 [ex v(3):int,[ex v(4):int,[ex v(5):seq(int), 
 false/\eq(v(2):int,plus(v(4):int,v(3):int):int)/\sum_s(v(5):seq(int),v(3):int)]]])]]
```
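The essential effect of the conversion is the renaming visible above: each typed variable of the non-ground theorem, x1/seq(Z), x2/Z, ..., becomes a numbered typed term v(i):type. The chapter does not give ConvertGr's code, so the following Python sketch of that naming scheme is purely hypothetical.

```python
# Hypothetical sketch of ConvertGr's variable-naming scheme.

TYPE_MAP = {'seq(Z)': 'seq(int)', 'Z': 'int'}     # assumed type renaming

def convert_gr(typed_vars):
    """typed_vars: (name, type) pairs in order of appearance in the theorem."""
    return {name: f"v({i}):{TYPE_MAP[ty]}"
            for i, (name, ty) in enumerate(typed_vars, start=1)}

g = convert_gr([('x1', 'seq(Z)'), ('x2', 'Z')])
# g maps x1 to 'v(1):seq(int)' and x2 to 'v(2):int', as in the formula above
```

The inverse renaming is what the procedure for producing the non-ground form presented to the user has to undo.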
Then, the procedure "*applyProofStep*" applies the transformation rule to the current theorem and derives the new theorem. In order to do that, this procedure constructs and asserts a set of clauses which implement the selected transformation rule. Then, it applies this set of clauses and derives the new theorem in ground representation. That is,

```
[all v(1):seq(int),[all v(2):int,sum_s(v(1):seq(int),v(2):int)<-> 
(eq(v(1):seq(int),nil_seq:seq(int))/\eq(v(2):int,0:int) \/ 
[ex v(3):int,[ex v(4):int,[ex v(5):seq(int), 
 false /\ sum_s(v(5):seq(int),v(3):int)]]])]]
```
The new theorem is then converted to the corresponding non-ground form in order to be presented to the user. That is,

*∀x1/seq(Z), ∀x2/Z, sumS(x1,x2) ↔ (x1 = <> ∧ x2 = 0 ∨ [∃x3/Z, [∃x4/seq(Z), false ∧ sumS(x4,x3)]])* 


## **5. System interface**

To enable users to guide this proof checker it is necessary to provide a well-designed user interface. The design of the interface of an interactive verifier depends on the intended user. In our verifier we distinguish two kinds of users, the "*basic users*" and the "*advanced* or *experienced users*". We call a "*basic user*" a user who is interested in proving a theorem. We call an "*advanced user*" a user who, in addition to proving a theorem, may want to enhance the KB of the system in order to be able to deal with additional theorems. Such a user is expected to be able to update the KB of axioms, lemmas, predicate specifications, specifications of DT operations and programs. We will use the word "*user*" to mean both the "*basic user*" and the "*advanced user*". Both kinds of users are expected to know very well the correctness method which is supported by our system (Marakakis, 2005).

Initially, the system displays the main, top-level window as shown in Fig. 1. This window has a button for each of its main functions. The name of each button defines its function as well, that is, "*Transform Logic Specification into Structured Form*", "*Prove Program Correctness"* and "*Update Knowledge Base*". The selection of each button opens a new window which has a detailed description of the required functions for the corresponding operation. Now we will illustrate the *"Prove Program Correctness*" function to better understand the whole interaction with the user.

### **5.1 Interface illustration of the "Prove Program Correctness" task**

If the user selects the button *"Prove Program Correctness"* from the main window, the window shown in Fig. 6 will be displayed. The aim of this window is to allow the user to select the appropriate theory and proof scheme that he will use in his proof. In addition, the user can either select a theorem or define a new one.

After the appropriate selections, the user can proceed to the actual proof of the theorem by selecting the button "*Prove Correctness Theorem*". The window that appears next is shown in Fig. 7. The aim of this window is to assist the user in the proof task. The theorem to be proved and its logic specification are displayed in the corresponding position on the top-left side of the window. This window has many functions. The user is able to choose theory elements from the KB that will be used for the current proof step. After the user selects the appropriate components for the current proof step, the proper inference rule is selected and applied automatically. The result of the proof step is shown to the user. Moreover, the user is able to cancel the last proof step, or to create a report with all the details of the proof steps that have been applied so far.

#### **5.1.1 Illustration of a proof step**

Let's assume that the user has selected a theorem to be proved, its corresponding theory and a proof scheme, and has therefore proceeded to the verification task. For example, he would like to prove the following theorem:

*Comp(Pr) ∪ Spec ∪ A |= ∀x1/seq(Z), ∀x2/Z (sum(x1,x2) ↔ sumS(x1,x2))* 

The user has selected the "*Incremental*" proof scheme, which requires proof by induction on an inductive DT. Let us assume that the correctness theorem has been transformed to the following form:


*∀x1/seq(Z), ∀x2/Z, sum(x1,x2) ↔ [∃x3/Z, ∃x4/Z, ∃x5/seq(Z), x1 ≠ <> ∧ x1 = x4::x5 ∧ x1 ≠ <> ∧ x1 = x4::x5 ∧ x2 = x4+x3 ∧ sum(x5,x3)]* 

Fig. 6. The window for selecting Theory, Theorem and Proof Scheme


Fig. 7. The window for proving a correctness theorem
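A proof step that folds the conjuncts x1 ≠ <> and x1 = x4::x5 into the single atom head(x1,x4), using the specification of *head*, can be sketched as follows. The encodings below are our own assumptions, not the system's internal format.

```python
# Illustrative fold step: one occurrence of head's spec body is replaced
# by the head(x1,x4) atom in a list of conjuncts.

HEAD_SPEC_BODY = [('neq', 'x1', 'nil'), ('cons', 'x1', 'x4', 'x5')]

def fold_head(conjuncts):
    """Replace the first occurrence of head's spec body by the head atom."""
    out, i, folded, n = [], 0, False, len(HEAD_SPEC_BODY)
    while i < len(conjuncts):
        if not folded and conjuncts[i:i + n] == HEAD_SPEC_BODY:
            out.append(('head', 'x1', 'x4'))      # folded atom
            i += n
            folded = True
        else:
            out.append(conjuncts[i])
            i += 1
    return out
```

Applied to the duplicated conjunction of the transformed theorem above, one copy of x1 ≠ <> ∧ x1 = x4::x5 is folded away, which corresponds to the line shown in the "Induction Step" area.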


In order to proceed to the next proof step the following steps should be performed:

• First, the user selects "*Logic Spec. of DT\_Op*" and then he selects the "*Head*" DT operation:

*∀x1/seq(a2), ∀x3/a2, [head(x1,x3) ↔ [x1 ≠ <> ∧ [∃x4/seq(a2), [x1 = x3::x4]]]]* 

• Then he selects the button "*Apply Proof Step*" and the result is shown in the next line of the "*Induction Step*" area:

```
∀x1/seq(Z), ∀x2/Z, sum(x1,x2) ↔ [∃x3/Z, ∃x4/Z, ∃x5/seq(Z), 
 x1 ≠ <> ∧ x1 = x4::x5 ∧ head(x1,x4) ∧ x2 = x4+x3 ∧ sum(x5,x3)]
```

The user continues applying proof steps until the proof of the theorem is complete.

## **6. Results**

The results of this research work involve the development of a proof checker that can be used efficiently by its users for the proof of correctness theorems for logic programs constructed by our schema-based method (Marakakis, 1997). The system has been tested and allows the verification of non-trivial logic programs. Our proof checker is highly modular, and allows the user to focus on proof decisions rather than on the details of how to apply each proof step, since this is done automatically by the system. The update of the KB is supported by the proof checker as well. The overall interface of our system is user-friendly and facilitates the proof task.

The main features of our system which make it an effective and useful tool for the interactive verification of logic programs constructed by the method (Marakakis, 1997) are the following.

• The proof of the correctness theorem is guided by the logic-program construction method (Marakakis, 1997). That is, the user has to select a proof scheme based on the applied program schema for the construction of the top-level predicate of the logic program whose correctness will be shown.

• Proof steps can be cancelled at any stage of the proof. Therefore, a proof can move to any previous state.

• The system supports the proof of a new theorem as part of the proof of the initial theorem.

• The overall verification task including the update of the KB is performed through a user-friendly interface.

• At any stage during the verification task the user can get a detailed report of all proof steps performed up to that point. So, he can get an overall view of the proof performed so far.

• The update of the theories stored in the KB of the system is supported as well.

#### **7. Conclusions**

This chapter has presented our proof checker, focusing on the knowledge representation layer and on its use by the main reasoning algorithms. Special importance has been given to flexibility in the implementation of the proof checker, so that the system can be enhanced with additional proof tasks. Finally, the main implementation criterion for the knowledge representation is the support of an efficient and modular implementation of the verifier.

In our proof checker, a proof is guided by the selected proof scheme. The selection of a proof scheme is related to the construction of the top-level predicate of the program that will be verified. The user-friendly interface of our system facilitates the proof task in all stages and the update of the KB. Its modular implementation makes our proof checker extensible and amenable to improvements.

The natural progression of our proof checker is the addition of automation. That is, we intend to move proof decisions from the user to the system. The verifier should have the capacity to suggest proof steps to the user. Once they are accepted by the user they will be performed automatically. Future improvements aim to minimize the interaction with the user and to maximize the automation of the verification task.

#### **8. References**

Clarke, E. & Wing, J. (1996). Formal Methods: State of the Art and Future Directions, *ACM Computing Surveys*, Vol. 28, No. 4, pp. 626-643, December 1996.

Gallagher, J. (1993). Tutorial on Specialization of Logic Programs, *Proceedings of PEPM'93, the ACM Sigplan Symposium on Partial Evaluation and Semantics-Based Program Manipulation*, pp. 88-98, ACM Press, 1993.

Hill, P. M. & Gallagher, J. (1998). Meta-programming in Logic Programming, *Handbook of Logic in Artificial Intelligence and Logic Programming*, vol. 5, edited by D. Gabbay, C. Hogger, J. Robinson, pp. 421-497, Clarendon Press, Oxford, 1998.

Hill, P. M. & Lloyd, J. W. (1994). *The Gödel Programming Language*, The MIT Press, 1994.

Lindsay, P. (1988). A Survey of Mechanical Support for Formal Reasoning, *Software Engineering Journal*, vol. 3, no. 1, pp. 3-27, 1988.

Lloyd, J. W. (1994). Practical Advantages of Declarative Programming, *Proceedings of the Joint Conference on Declarative Programming, GULP-PRODE'94*, 1994.

Loveland, D. W. (1986). Automated Theorem Proving: Mapping Logic in AI, *Proceedings of the ACM SIGART International Symposium on Methodologies for Intelligent Systems*, pp. 214-229, Knoxville, Tennessee, United States, October 22-24, 1986.

Marakakis, E. (1997). Logic Program Development Based on Typed, Moded Schemata and Data Types, *PhD thesis*, University of Bristol, February 1997.

Marakakis, E. (2005). Guided Correctness Proofs of Logic Programs, *Proc. of the 23rd IASTED International Multi-Conference on Applied Informatics*, edited by M.H. Hamza, pp. 668-673, Innsbruck, Austria, 2005.

Marakakis, E. & Gallagher, J.P. (1994). Schema-Based Top-Down Design of Logic Programs Using Abstract Data Types, *LNCS 883, Proc. of 4th Int. Workshops on Logic Program Synthesis and Transformation - Meta-Programming in Logic*, pp. 138-153, Pisa, Italy, 1994.

Marakakis, E. & Papadakis, N. (2009). An Interactive Verifier for Logic Programs, *Proc. of 13th IASTED International Conference on Artificial Intelligence and Soft Computing*, pp. 130-137, 2009.


**8** 

**Knowledge in Imperfect Data** 

Andrzej Kochanski, Marcin Perzyk and Marta Klebczyk

*Warsaw University of Technology* 

*Poland* 

**1. Introduction** 

Data bases collecting a huge amount of information pertaining to real-world processes, for example industrial ones, contain a significant number of data which are imprecise, mutually incoherent, and frequently even contradictory. Data bases of this kind often lack important information. All available means and resources may and should be used to eliminate or at least minimize such problems at the stage of data collection. It should be emphasized, however, that the character of industrial data bases, as well as the ways in which such bases are created and the data are collected, preclude the elimination of all errors. It is, therefore, a necessity to find and develop methods for eliminating errors from already-existing data bases or for reducing their influence on the accuracy of analyses or hypotheses proposed with the application of these data bases. There are at least three main reasons for data preparation: (a) the possibility of using the data for modeling, (b) modeling acceleration, and (c) an increase in the accuracy of the model. An additional motivation for data preparation is that it offers a possibility of arriving at a deeper understanding of the process under modeling, including the understanding of the significance of its most important parameters.

The literature pertaining to data preparation (Pyle, 1999, 2003; Han & Kamber, 2001; Witten & Frank, 2005; Weiss & Indurkhya, 1998; Masters, 1999; Kusiak, 2001; Refaat, 2007) discusses various data preparation tasks (characterized by means of numerous methods, algorithms, and procedures). Apparently, however, no ordered and coherent classification of tasks and operations involved in data preparation has been proposed so far. This has a number of reasons, including the following: (a) numerous article publications propose solutions to problems employing selected individual data preparation operations, which may lead to the conclusion that such classifications are not really necessary, (b) monographs deal in the minimal measure with the industrial data, which have their own specific character, different from that of the business data, (c) the fact that the same operations are performed for different purposes in different tasks complicates the job of preparing such a classification. The information pertaining to how time-consuming data preparation is appears in the works by many authors. The widely-held view expressed in the literature is that the time devoted to data preparation constitutes considerably more than a half of the overall data exploration time (Pyle, 2003; McCue, 2007). A systematically conducted data preparation can reduce this time. This constitutes an additional argument for developing the data preparation methodology which was proposed in (Kochanski, 2010).

