**4. Nomad test patterns**

Security policies are increasingly expressed by means of formal languages that capture, without ambiguity, concepts such as obligation, permission or prohibition for an organization. A natural way to test policies consists in deriving, manually or semi-automatically, test cases directly from the latter. Usually, we obtain abstract tests, which we call *test patterns*. Some works, e.g., Mouelhi et al. (2008), have proposed solutions to derive test patterns from basic security rules.

In this section, to illustrate our methodology and to experiment with existing Web services, we propose to formalize some test patterns from the recommendations provided by the OWASP organization OWASP (2003). Thereby, these test patterns are specialized for Web services and will help to detect attacks/vulnerabilities such as empty passwords, brute force attacks, etc. They are related to the following criteria: availability, authentication and authorization. We do not provide an exhaustive test pattern list, first because such a list depends on the security policy established by an organization, and second because an exhaustive list would deserve a separate book of its own. Nevertheless, the following test patterns cover most of the OWASP recommendations. These test patterns are constructed over a set of attacks (brute force, SQL injection, etc.) and model how a Web service should behave when it receives one of them.

As stated previously, we have chosen to formalize test patterns with the Nomad language. Nomad is based upon a temporal logic, extended with alethic and deontic modalities. It can easily express obligation, prohibition and permission over atomic or non-atomic actions, with optional timed constraints. Below, we recall a part of the Nomad grammar. The complete definition of the Nomad language can be found in Cuppens et al. (2005).


Nomad notations:

• If *A* and *B* are actions, then (*A*; *B*) (*A* followed by *B*) and (*A* & *B*) are actions,
• If *A* is an action, then *start*(*A*) and *done*(*A*) are formulae,
• If *α* and *β* are formulae, then ¬*α*, (*α* ∧ *β*) and (*α* ∨ *β*) are formulae,
• If *α* and *β* are formulae, then O*α* (*α* is obligatory) and (*α* | *β*) are formulae, the semantic of (*α* | *β*) being "in the context *β*, *α* is true",
• O≤*d* *α* means "it is obligatory that *α* within a delay of *d* units of time".
As a first step, we augment the Nomad language with this straightforward expression to model the repeating of an action:

If *A* is an action, then *A<sup>n</sup>* = *A*; ...; *A* (n times) is an action.
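To make this fragment concrete, the following minimal Python sketch shows one possible in-memory representation of such formulas; it is only an illustration, and all class names (`Action`, `Done`, `Start`, `Obligation`) are our own assumptions, not part of Nomad (Python 3.10+ syntax).

```
# A minimal AST sketch for the Nomad fragment used by the test patterns below.
from dataclasses import dataclass

@dataclass
class Action:
    name: str                    # e.g. "input opReq(p)" or "output OutputResponseWS"

@dataclass
class Done:                      # done(A): the action A has been performed
    action: Action

@dataclass
class Start:                     # start(A): the action A starts now
    action: Action

@dataclass
class Obligation:                # O<=d (alpha | context): obligation within a delay
    alpha: object
    context: object
    delay: int | None = None     # None means no timed constraint

# T1-like pattern: if an operation request is done, a response must start.
t1 = Obligation(Start(Action("output OutputResponseWS")),
                Done(Action("input opReq(p)")))
# T2-like pattern: the same obligation, bounded by a 60 s delay.
t2 = Obligation(Start(Action("output OutputResponseWS")),
                Done(Action("input opReq(p)")), delay=60)
print(t1, t2, sep="\n")
```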

Now, we are ready for the test pattern description.

#### **4.1 Availability test patterns**

The Web Service availability represents its capability to respond correctly whatever the request sent. In particular, a Web service is available if it runs as expected in the presence of faults or stressful environments. This corresponds to the robustness definition of the *IEEE Standard glossary of software engineering terminology* (1999). So, it is manifest that availability implies robustness. As a consequence, Web Service robustness must be taken into consideration in availability tests. We studied Web Service operation robustness in Salva & Rabhi (2010): we concluded that the only robustness tests which can be applied in SOAP environments without being blocked by SOAP processors are requests composed of "unusual values" having a type satisfying the Web Service WSDL description. The term "unusual values" refers to a fault model in software testing Kropp et al. (1998), which gathers specific values well known for revealing bugs. We also defined operation robustness by focusing on the SOAP responses constructed by Web Services only. The SOAP faults added by SOAP processors and expressing an unexpected crash are ignored. This implies that a robust Web Service operation must yield either a response as defined in the initial specification or a SOAP fault composed of the "SOAPFaultException" cause only.

**Definition 4.** *Let* S = < *L*S, *l*0S, *V*S, *V*0S, *I*S, ΛS, →S > *be a STS specification and* S↑ *be its augmented STS. An operation op* ∈ ΛS↑ *is robust iff for any operation request* ?*opReq*(*p*) ∈ ΛS↑ × *I*<sup>n</sup>*, a SOAP message different from* !*soapfault*(*c*) ∈ ΛS↑ × *I with c* ≠ "*SOAPFaultException*" *is received.*
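As an illustration of Definition 4, the acceptance check it implies could be sketched as follows; the dictionary encoding of SOAP messages is an assumption made only for this example.

```
# Sketch of the robustness check of Definition 4: any message is acceptable
# unless it is a SOAP fault whose cause differs from "SOAPFaultException".
def is_robust_response(message: dict) -> bool:
    if message.get("kind") != "soapfault":
        return True                                  # a regular operation response
    return message.get("cause") == "SOAPFaultException"

assert is_robust_response({"kind": "opResp", "payload": "..."})
assert is_robust_response({"kind": "soapfault", "cause": "SOAPFaultException"})
assert not is_robust_response({"kind": "soapfault", "cause": "NullPointerException"})
```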

The first test pattern *T*1 is derived from this definition and expresses that an operation is available if it does not crash and responds with a SOAP message after any operation request. *T*1 means that if an operation request is "done", then it is obligatory (O) to obtain a response *OutputResponseWS*.

```
T1 ←→ ∀opReq ∈ Λ^I_S↑, O(start(output OutputResponseWS) |
                          done(input (opReq(p), TestDom := {Spec(opReq); RV; Inj})))
```
where:

• *OutputResponseWS* ←→ *OutputResponse*(*p*) ∨ *OutputResponse*<sub>robust</sub>(*p*) corresponds to a response set. *OutputResponse*(*p*) = {!*opResp*(*p*) ∈ Λ<sup>O</sup><sub>S↑</sub> × *I*<sup>n</sup>}. *OutputResponse*<sub>robust</sub>(*p*) ←→ !*soapfault*("*SOAPFaultException*") is the only SOAP fault which should be received according to Definition 4,

• (*opReq*(*p*), *TestDom* := {*Spec*(*opReq*); *RV*; *Inj*}) is a particular input modelling an operation request with parameter values in *Spec*(*opReq*) ∪ *RV* ∪ *Inj*. *Spec*(*opReq*) = {*θ* ∈ *D*<sub>I<sub>S↑</sub></sub> | *l* −−?*opReq*(*p*),*ϕ*,*ϱ*−→<sub>S↑</sub> *l*′ ∈ →<sub>S↑</sub> and (*l*, *v*) −−?*opReq*(*p*),*θ*−→ (*l*′, *v*′) ∈ →<sub>||S↑||</sub>} gathers all the values satisfying the execution of the action ?*opReq*(*p*). These values are given in the LTS semantics (valued automaton) of S↑.

*RV* is composed of random values and of specific values well known for revealing bugs, for each classical type. For instance, Figure 4 depicts the *RV*(*String*) set, which gathers different values for the "String" type. RANDOM(8096) represents a random value of 8096 characters. *Inj* ←→ *XMLInj* ∪ *SQLInj* corresponds to a value set allowing one to perform both XML and SQL injections. *XMLInj* and *SQLInj* are specific value sets for the "String" type only. For instance, XML injections are introduced by using specific XSD keywords such as *maxOccurs*, which may represent a DoS (Denial of Service) attack attempt. More details about XML and SQL injections can be found in OWASP (2003).

```
<type id="String">
  <val value=null />
  <val value="" />
  <val value=" " />
  <val value="$" />
  <val value="*" />
  <val value="&" />
  <val value="hello" />
  <val value=RANDOM(8096) />
</type>
```
Fig. 4. RV(String)
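The value sets of Figure 4 can be sketched in Python as follows; the injection payloads listed are samples for illustration, not the exhaustive *XMLInj* and *SQLInj* sets.

```
# Sketch of RV(String) (mirroring Figure 4) and of a small Inj sample set.
import random
import string

def random_string(n: int) -> str:
    return "".join(random.choice(string.printable) for _ in range(n))

RV_STRING = [None, "", " ", "$", "*", "&", "hello", random_string(8096)]

SQL_INJ = ["' or '1'='1", "'; drop table users; --"]    # sample SQL payloads
XML_INJ = ['<a maxOccurs="99999999"><b/></a>']          # XSD-keyword (DoS) abuse
INJ = SQL_INJ + XML_INJ
```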

Availability also requires that the response delay be bounded. This can be written with the test pattern *T*2:

*T*2 ←→ ∀*opReq* ∈ Λ<sup>I</sup><sub>S↑</sub>, O≤60(*start*(*output OutputResponseWS*) | *done*(*input* (*opReq*(*p*), *TestDom* := {*Spec*(*opReq*); *RV*})))

*T*2 states that, for each operation request, it is obligatory to receive a response within a delay of 60 s.

This test pattern can be implicitly tested if we take into account the notion of quiescence (no response observed after a timeout) during the testing process. Indeed, if quiescence is observed after a delay set to 60 s during an operation invocation, we can consider that *T*2 is not satisfied. So, this test pattern will be implicitly taken into account in Section 5.
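For illustration, quiescence detection for *T*2 can be approximated with a timed invocation, as in the sketch below; the endpoint URL, the SOAP body and the error handling are placeholders, not the chapter's tool.

```
# Sketch: quiescence observed as a timeout on an operation invocation.
import urllib.error
import urllib.request

def invoke_with_quiescence(url: str, soap_body: bytes, delay: float = 60.0):
    req = urllib.request.Request(url, data=soap_body,
                                 headers={"Content-Type": "text/xml; charset=utf-8"})
    try:
        with urllib.request.urlopen(req, timeout=delay) as resp:
            return resp.read()     # a response arrived within the delay: T2 holds
    except (TimeoutError, urllib.error.URLError):
        return None                # quiescence (or unreachable endpoint): T2 fails
```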


#### **4.2 Authentication test patterns**

Authentication aims to establish or to guarantee the Client identity and to check that a Client without credentials has no permission. The logon process is often the first step in user authentication. We propose here two classical test patterns relating to it. We suppose that the logon process is implemented classically by means of specific operation requests gathered in a set denoted *inputAuth* ⊆ Λ<sup>I</sup><sub>S↑</sub>, which are called with authentication parameters (passwords, keys, etc.) and which return SOAP responses. *T*3 expresses the obligation of returning a failed-authentication result each time an authentication request is sent to a Web Service with unusual parameter values, such as empty parameters. So, this test pattern covers the well-known empty password vulnerability:

*T*3 ←→ ∀*opReq* ∈ *inputAuth*, O(*start*(*output OutputResponseWS*(*rlfail*)) | *done*(*input* (*opReq*(*p*), *TestDom* := {*RV*}))) where:

*OutputResponseWS*(*rlfail*) ←→ *OutputResponse*(*rlfail*) ∨ *OutputResponseFault*(*p*). *OutputResponse*(*rlfail*) represents an operation response where the message *rlfail* in *D*<sub>I<sub>S↑</sub></sub> suggests a failed login attempt. *rlfail* must be extracted from the specification. *OutputResponseFault*(*p*) ⇔ *soapfault*(*c*) with *c* ≠ "Client" ∧ *c* ≠ "the endpoint reference not found" is a SOAP fault whose cause differs from "Client" and from "the endpoint reference not found". The first cause means the operation is called with bad parameter types, while the second means that the operation name does not exist (Section 3.2).

The test pattern *T*4 is dedicated to the "brute force" threat. The latter aims to decrypt or to find authentication parameters by traversing the search space of possible values. A well-known countermeasure is to forbid a new connection attempt after *n* failed ones for the same user. With *n* = 10, the corresponding test pattern can be written as:

*T*4 ←→ ∀*opReq* ∈ *inputAuth*, O(*start*(*output OutputResponseWS*(*rlforbid*)) | (*done*((*input* (*opReq*(*p*), *TestDom* := {*RV*}); *output OutputResponseWS*(*rlfail*))<sup>10</sup>); *done*(*input* (*opReq*(*p*), *TestDom* := {*RV*})))) where:

*OutputResponseWS*(*rlfail*) ←→ *OutputResponse*(*rlfail*) ∨ *OutputResponseFault*(*p*) is an operation response, as previously. The *rlfail* message expresses a failed login attempt. The message *rlforbid* indicates that any new connection attempt is forbidden. These messages must be extracted from the specification as well.
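The countermeasure targeted by *T*4 can be sketched with a simple loop; `login_fn`, `RL_FAIL` and `RL_FORBID` are assumed stand-ins for the authentication operation and for the *rlfail*/*rlforbid* messages of the specification.

```
# Sketch of a T4-style check: after n failed login attempts, the next attempt
# must be refused with the rlforbid message.
def check_brute_force_countermeasure(login_fn, bad_credentials, n=10) -> bool:
    for _ in range(n):
        if login_fn(bad_credentials) != "RL_FAIL":
            return False           # expected a failed-login message
    return login_fn(bad_credentials) == "RL_FORBID"
```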

#### **4.3 Authorization test patterns**

Authorization represents the access policy and specifies the access rights to resources, usually for authenticated users. We define here two test patterns which aim to check that a user requesting confidential data is really authenticated.

The following test pattern checks that a request for confidential data with the operation set *inputRequestConf* returns a "permission denied" message if the user is not authenticated (a failed login attempt has been made with the operation request *op*2*Req* ∈ *inputAuth*):


*T*5 ←→ ∀*opReq* ∈ *inputRequestConf*, ∃*op*2*Req* ∈ *inputAuth*, O(*start*(*output OutputResponseWS*(*rfail*)) | (*done*(*input op*2*Req*(*p*)); *done*(*output op*2*Resp*(*rlfail*)); *done*(*input* (*opReq*(*p*), *TestDom* := {*Spec*(*opReq*); *RV*})))) where:

• *OutputResponseWS*(*rfail*) ←→ *OutputResponse*(*rfail*) ∨ *OutputResponseFault*(*p*) describes, as previously, an operation response where the message *rfail* corresponds to an error message. *rfail* must be extracted from the specification,
• *op*2*Resp*(*rlfail*) ∈ Λ<sup>O</sup><sub>S↑</sub> × *D*<sub>I</sub> is an authentication operation response composed of the *rlfail* message, which describes a failed login attempt.
The last test pattern *T*6 is dedicated to the receipt of confidential data by means of XML or SQL injections. It checks that an error message is received when a request containing an XML or SQL injection is sent:

```
T6 ←→ ∀opReq ∈ Λ^I_S↑, O(start(output OutputResponseWS(rfail)) |
                          done(input (opReq(p), TestDom := {Inj})))
```
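A sketch of the verdict behind *T*6, under the assumption that responses are abstracted into status strings (`R_FAIL` standing for the *rfail* error message):

```
# Sketch of a T6-style check: every injection payload must be answered by an
# error message (rfail) or by the acceptable SOAP fault, never by regular data.
def check_injection_rejected(invoke_fn, payloads) -> bool:
    acceptable = {"R_FAIL", "SOAPFaultException"}
    return all(invoke_fn(p) in acceptable for p in payloads)
```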
#### **4.4 Attack and vulnerability coverage**

Figure 5 describes a non-exhaustive list of attacks and vulnerabilities which are covered by the previous test patterns. This list is also extracted from the larger one given in OWASP (2003). The table also shows which portion of Web Service vulnerabilities can be detected with the testing method.

| Test pattern | Attacks | Vulnerabilities |
| --- | --- | --- |
| T1, T2 | Denial of service, special character injection, format string attack | Catch null pointer exception, deserialization of unstructured data, uncaught exception, format string, buffer overflow, improper data validation |
| T3 | Format string attack, special character injection | Empty password, improper data validation |
| T4 | Brute force attack | Brute force attack vulnerability, insufficient ID length |
| T5 | Bypassing attacks | Privacy violation, failure to provide confidentiality for stored data |
| T6 | XML, SQL injection | Missing SQL, XML validation, improper data validation |

Fig. 5. Attack and vulnerability coverage

#### **4.5 Test purpose translation**

Test patterns represent abstract tests that can be used to test several Web services. Such test patterns cannot be used directly for testing since they are composed of abstract operation names. In order to derive and to execute concrete test cases, we shall translate these patterns into test requirements, called test purposes.

Test purposes describe the test intention, which targets some specification properties to test in the implementation. We assume that test purposes are composed exclusively of specification properties which should be met in the implementation under test. Thereafter, we intend to synchronize the STS specification with test purposes, so that the final test cases will be composed of both specification behaviours and test pattern properties. So, test purposes must be formalized with STSs as well.


For a specification S = < *L*S, *l*0S, *V*S, *V*0S, *I*S, ΛS, →S >, we also formalize a test purpose as a deterministic and acyclic STS *tp* = < *Ltp*, *l*0*tp*, *Vtp*, *V*0*tp*, *Itp*, Λ*tp*, →*tp* > such that:

• *Itp* ⊆ *I*S, Λ*tp* ⊆ ΛS,
• *Vtp* ∩ *V*S = ∅ and *Vtp* also contains a string variable *TestDom*, which is equal to the parameter domain provided in test patterns,
• →*tp* is composed of transitions modelling specification properties. So, for any transition *l*<sub>j</sub> −−*a*(*p*),*ϕ*<sub>j</sub>,*ϱ*<sub>j</sub>−→<sub>tp</sub> *l*′<sub>j</sub>, there exists a transition *l*<sub>i</sub> −−*a*(*p*),*ϕ*<sub>i</sub>,*ϱ*<sub>i</sub>−→<sub>S</sub> *l*′<sub>i</sub> and a value set (*x*1, ..., *xn*) ∈ *D*<sub>V∪I</sub> such that *ϕ*<sub>j</sub> ∧ *ϕ*<sub>i</sub>(*x*1, ..., *xn*) |= true.

We denote *TP* the test purpose set derived from test patterns. In particular, a test pattern *T* is translated into the test purpose set *TPT* ⊆ *TP* with the following steps:

1. *T* is initially transformed into an abstract test purpose *AtpT*, modelled with an STS, composed of generic operation requests. For a test pattern *T*, we denote *OPT* the operation set targeted by the tests in *T*. For instance, *OPT*1 = Λ<sup>I</sup><sub>S↑</sub>,
2. the test purpose set *TPT* = {*tpT*(*op*) | *op* ∈ *OPT*} is then constructed by replacing the generic operation invocations in *AtpT* by a real operation name *op* ∈ *OPT*.
For instance, the test purpose patterns extracted from the test patterns *T*1 and *T*4 are given in Figures 6 and 7. These STSs formulate the test intention described in *T*1 and in *T*4. *T*4 describes a countermeasure for the brute force threat, which is well captured by the second test purpose pattern since, after ten connection attempts done by the same user, the latter cannot log in anymore. *getsender* and *count* are internal procedures which return the IP address of the client and the number of times the client has attempted to connect. From the specification depicted in Figure 2, we also have *TPT*1 = {*tpT*1(*ItemSearchReq*), *tpT*1(*ItemLookUpReq*)}. *tpT*1(*ItemSearchReq*) is illustrated in Figure 8. It represents a test purpose constructed from *T*1 with the operation "ItemSearch" and illustrates the semantics of *T*1 with a concrete operation name.

Fig. 6. Test purpose pattern derived from *T*1

Fig. 7. Test purpose pattern derived from *T*4

Fig. 8. A test purpose derived from *T*1
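Step 2 of this translation can be sketched as follows; the string encoding of test purposes is a deliberate simplification for the example.

```
# Sketch of step 2: TP_T = { tp_T(op) | op in OP_T }, one concrete test purpose
# per operation, obtained by substituting the generic operation name.
def instantiate(abstract_tp: str, operations: list) -> dict:
    return {op: abstract_tp.replace("opReq", op + "Req") for op in operations}

OP_T1 = ["ItemSearch", "ItemLookUp"]          # operations of the example specification
TP_T1 = instantiate("?opReq(p) -> O(start(!opResp))", OP_T1)
print(TP_T1["ItemSearch"])                    # ?ItemSearchReq(p) -> O(start(!opResp))
```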

Unfortunately, there is no available tool for transforming a Nomad expression into an automaton yet. At the moment, abstract test purposes must be constructed manually.

#### **5. Testing methodology**

Now that Web services, SOAP and the security test patterns expressing security rules are formalized, we are ready to express clearly the security level of an implementation (relative to its specification and a given set of test patterns). We initially assume that the implementation should behave like its model and can be experimented with by means of the same actions. The implementation is represented by an LTS *Impl*, with Δ(*Impl*) its suspension LTS. The experimentation of the implementation is performed by means of test cases defined with STSs, like the specification. Test cases are defined as:


**Definition 5.** *A test case is a deterministic and acyclic STS* TC = < *L*TC, *l*0TC, *V*TC, *V*0TC, *I*TC, ΛTC, →TC > *where the final locations are labelled in* {*pass*, *fail*}*.*

Intuitively, when the test case is executed, *pass* means that it has been completely executed, while *fail* means that the implementation has rejected it.

The proposed testing method constructs test cases to check whether the implementation behaviours satisfy a given set of security test patterns. This can be defined by means of a relation based on traces, i.e., suites of observed valued actions expressing concrete behaviours.


More precisely, since the implementation is seen as a black box, the method checks that the suspension traces (action suites) of the implementation can be found in the suspension traces of the combination of the specification with the test purposes modelling concrete test patterns. We consider suspension traces, and not only traces, to take quiescence into account, i.e., the lack of observation and thus response delays. This can be written more formally by means of the following test relation:

$$Impl\ secure_{TP}\ \mathcal{S} \;\Leftrightarrow\; \forall tp \in TP,\ STraces(Impl) \cap NC\_Traces(\mathcal{S}^{\uparrow} \times tp) = \varnothing$$

with *TP* the test purpose set extracted from the security test patterns, S the specification, S↑ its suspension, and *NC*_*Traces*(S↑ × *tp*) = (*STraces*(S↑ × *tp*).(Λ<sup>O</sup> ∪ {!*δ*})) \ *STraces*(S↑ × *tp*) the non-conformant traces of the synchronous product S↑ × *tp*.
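Operationally, the relation is a disjointness check between trace sets; a minimal sketch, assuming traces are encoded as tuples of action labels:

```
# Sketch of the secure_TP check: the implementation's suspension traces must
# avoid every non-conformant trace of S^ x tp, for each test purpose tp.
def secure_tp(impl_straces: set, nc_traces_by_tp: dict) -> bool:
    return all(impl_straces.isdisjoint(nc) for nc in nc_traces_by_tp.values())

impl = {("?ItemSearchReq", "!ItemSearchResp"), ("?ItemSearchReq", "!delta")}
nc = {"tp_T1(ItemSearch)": {("?ItemSearchReq", "!delta")}}   # quiescence forbidden
print(secure_tp(impl, nc))    # False: a non-conformant trace was observed
```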

To check this relation, the test case generation is performed in several steps, summarized in Figure 9 and given below. The main advantage of our model-based approach is that these steps can be automated in a tool.


Fig. 9. Test case generation

1. Security test patterns are firstly translated into test purposes modelled by STSs, as described in Section 4.5. For a test pattern *T*, we obtain a test purpose set *TPT* = {*tpT*(*op*) | *op* ∈ *OPT*} composed of test purposes *tpT*(*op*), with *op* the tested operation,
2. The specification S is augmented to take into consideration the SOAP environment, as described in Section 3.2,
3. The augmented specification S↑ and the test purpose set *TPT* are combined together: each test purpose *tpT*(*op*) is synchronized with the specification to produce the product P*T*(*op*), whose paths are complete specification ones combined with the test purpose properties. We denote *ProdT* = {P*T*(*op*) = S↑ × *tpT*(*op*) | *tpT*(*op*) ∈ *TPT*} the resulting synchronous product set,
4. The synchronous product locations are labelled by "pass", which means that to reach such a location, a correct behaviour has to be executed,
5. Synchronous products are completed on the output action set to express both correct and incorrect behaviours. A completed synchronous product is composed of *Pass* locations expressing behaviours which satisfy the test purposes, and of *Fail* locations expressing that test purposes, and thus security test patterns, are not satisfied. It results that *Prod*<sup>compl</sup><sub>T</sub> = {P<sup>compl</sup><sub>T</sub>(*op*) | P*T*(*op*) ∈ *ProdT*} is the completed synchronous product set,
6. Finally, test cases are selected from the completed synchronous products in *Prod*<sup>compl</sup><sub>T</sub> by means of a reachability analysis. For a completed synchronous product P<sup>compl</sup><sub>T</sub>(*op*), the test cases in *TCT*(*op*) are STS trees which begin from the initial location of P<sup>compl</sup><sub>T</sub>(*op*) and which aim to call the operation *op*. The reachability analysis ensures that these STSs can be executed on the implementation. For the test pattern *T*, the test case set *TCT* gathers the sets *TCT*(*op*) for all P<sup>compl</sup><sub>T</sub>(*op*) ∈ *Prod*<sup>compl</sup><sub>T</sub>. The final test case set *TC* is the union of the test case sets *TCT* obtained from each test pattern *T*.

Each of these steps is detailed below. We assume having an augmented specification S↑ and a test purpose *tpT*(*op*) ∈ *TPT* derived from a test pattern *T* given in Section 4.

#### **5.1 Synchronous product definition**

A test purpose represents a test requirement which should be met in the implementation. To test this statement, both the specification and the test purpose are synchronized to produce paths which model test purpose runs with respect to the specification.

Let *tpT*(*op*) = < *Ltp*, *l*0*tp*, *Vtp*, *V*0*tp*, *Itp*, Λ*tp*, →*tp* > and S↑ = < *L*S↑, *l*0S↑, *V*S↑, *V*0S↑, *I*S↑, ΛS↑, →S↑ > be two STSs. The synchronous product of S↑ with *tpT*(*op*) is defined by the STS P*T*(*op*) = S↑ × *tpT*(*op*) =<sub>def</sub> < *L*P, *l*0P, *V*P, *V*0P, *I*P, ΛP, →P >, where:


• *L*P = *L*S × *Ltp*, *l*0P = *l*0S × *l*0*tp*,
• *V*P = *V*S ∪ *Vtp*, *V*0P = *V*0S ∧ *V*0*tp*,
• *I*P = *I*S,
• ΛP = ΛS,
• →P is defined with the two following rules, applied successively:

$$sync: \frac{l_1 \xrightarrow[\mathcal{S}^{\uparrow}]{a(p),\varphi,\varrho} l_2, \quad l'_1 \xrightarrow[tp]{a(p),\varphi',\varrho'} l'_2}{(l_1 l'_1) \xrightarrow[\mathcal{P}]{a(p),\ \varphi \wedge \varphi',\ \varrho''=[\varrho;\varrho']} (l_2 l'_2)}$$

$$assemble: \frac{(l_i l_j) \xrightarrow[\mathcal{P}]{a(p),\varphi,\varrho} (l_{i+1} l_{j+1}), \quad l_i \neq l0_{\mathcal{S}^{\uparrow}}, \quad (l0_{\mathcal{S}^{\uparrow}}\, l0_{tp}) \not\rightsquigarrow (l_i l_j), \quad l0_{\mathcal{S}^{\uparrow}} \xrightarrow{a_0(p),\varphi_0,\varrho_0} l_1 \ldots l_{i-1} \xrightarrow{a_{i-1}(p),\varphi_{i-1},\varrho_{i-1}} l_i \in \rightarrow_{\mathcal{S}^{\uparrow}}}{(l0_{\mathcal{S}^{\uparrow}}\, l0_{tp}) \xrightarrow[\mathcal{P}]{a_0(p),\varphi_0,\varrho_0} (l_1 l_j) \ldots (l_{i-1} l_j) \xrightarrow[\mathcal{P}]{a_{i-1}(p),\varphi_{i-1},\varrho_{i-1}} (l_i l_j)}$$


The first rule combines one specification transition with one test purpose transition by synchronizing actions, variable updates and guards. This yields an initial transition set, which is completed with the second rule to ensure that there is a specification path such that any synchronized transition is reachable from the initial location. For the sake of readability, we denote in the second rule (*l*0<sub>S↑</sub> *l*0<sub>tp</sub>) ↛ (*l*<sub>i</sub> *l*<sub>j</sub>) to express that there is no path from the initial location to (*l*<sub>i</sub> *l*<sub>j</sub>) in P*T*(*op*).
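A compact sketch of the first (*sync*) rule, assuming transitions are encoded as `(source, action, guard, update, target)` tuples with guards and updates kept symbolic as strings:

```
# Sketch of the sync rule: pair specification and test purpose transitions on
# the same action, conjoining guards and sequencing the variable updates.
def sync(spec_transitions, tp_transitions):
    product = []
    for (l1, a, phi, rho, l2) in spec_transitions:
        for (m1, a2, phi2, rho2, m2) in tp_transitions:
            if a == a2:                       # same symbolic action
                product.append(((l1, m1), a,
                                "(" + phi + ") and (" + phi2 + ")",
                                "[" + rho + ";" + rho2 + "]",
                                (l2, m2)))
    return product
```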

The synchronous product of the test purpose *tpT*1(*ItemSearchReq*) given in Figure 8 with the completed specification is depicted in Figure 10. The synchronized transitions obtained from the first rule are depicted in red. Initially, the test purpose aims to test the ItemSearch operation. So, the synchronous product is composed of the two ItemSearch invocations of the specification combined with test purpose properties.

Fig. 10. A synchronous example

#### **5.2 Incorrect behaviour completion**

This straightforward part aims to complete synchronous products to express incorrect behaviours. Thanks to this step, the generated test cases will be composed of final locations labelled either by the local verdict "pass" or by "fail". The final test verdict shall be obtained without ambiguity from these local ones.

This completion is made by means of the STS operation *compl*, which is defined as follows. For an STS S, *compl*(S) =<sub>def</sub> S<sup>compl</sup> = < *L*S ∪ {*Fail*}, *l*0S, *V*S, *V*0S, *I*S, ΛS, →<sub>S<sup>compl</sup></sub> >, where →<sub>S<sup>compl</sup></sub> is obtained with the following rule:


$$\frac{a \in \Lambda^{O}_{\mathcal{S}} \cup \{!\delta\}, \quad \varphi_a = \bigwedge_{l_1 \xrightarrow[\mathcal{S}]{a(p),\varphi_n,\varrho_n} l_n} \neg\varphi_n}{l_1 \xrightarrow[\mathcal{S}^{compl}]{!a(p),\varphi_a,\varnothing} \mathit{Fail}}$$

A location *l*<sup>1</sup> is completed with new transitions to Fail, labelled by unexpected outputs with negations of the guards of transitions in S.
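The *compl* operation can be sketched with the same tuple encoding, string guards standing in for symbolic formulas:

```
# Sketch of compl: from each location, add a transition to Fail for every output
# (and quiescence !delta), guarded by the negated conjunction of existing guards.
def complete(transitions, locations, outputs):
    completed = list(transitions)
    for l in locations:
        for a in outputs + ["!delta"]:
            guards = [phi for (src, act, phi, _, _) in transitions
                      if src == l and act == a]
            neg = " and ".join("not(" + g + ")" for g in guards) if guards else "true"
            completed.append((l, a, neg, "[]", "Fail"))
    return completed
```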

By applying this step on the synchronous product example P*T*1(*ItemSearch*) of Figure 10, we obtain the completed STS depicted in Figure 11. Dashed transitions depict the completion. For the sake of readability, we use the label !any to model any output action. Intuitively, the dashed transitions represent unexpected output actions which lead to the Fail location. For instance, the transition 2*B* −!*δ*→ *Fail* expresses that quiescence must not be observed. This transition can be used to test the satisfaction of the test pattern *T*2 (Section 4) directly: if no response is observed after a timeout, we consider that the Web Service under test is not available and therefore faulty.

Fig. 11. A completed synchronous example


#### **5.3 Synchronous product path extraction with reachability analysis**

Test cases are extracted from the completed synchronous products with Algorithm 1. For a synchronous product P<sup>compl</sup><sub>T</sub>(*op*), the resulting STSs in *TCT*(*op*) are trees which aim to call the operation *op* referred to in *tpT*(*op*), obtained by extracting the acyclic paths of P<sup>compl</sup><sub>T</sub>(*op*) which begin from its initial location and contain the input action ?*opReq*(*p*). A reachability analysis is performed on the fly to ensure that these paths can be completely executed.

The algorithm constructs a preamble by using a Depth First Search (DFS) algorithm between the initial location *l*0 and *lk*. A reachability analysis is also performed to check whether the transition *t* labelled by ?*opReq*(*p*) is reachable (lines 2-8). In line 9, the value set *Spec*(*opReq*), composed of values satisfying the firing of the transition *t*, is generated with the Solving procedure. The set *Value*(*opReq*), composed of the values used for testing *op*, is also constructed according to the *TestDom* variable provided in test patterns. This set may be composed of values in *Spec*(*opReq*), of unusual values in *RV*, or of SQL/XML injection values in *Inj* (see Section 4). SQL/XML injections are only used if the variable type is equal to "String". If the variable types are complex (tabular, object, etc.), we compose them with other types to obtain the final values. We also use a heuristic to estimate and possibly reduce the number of tests according to the number of tuples in *Value*(*opReq*). Intuitively, for a constant denoted *Max*, if *card*(*Value*(*opReq*)) > *Max*, we reduce the cardinality of *Value*(*opReq*) by removing one value of *RV*(*type*(*p*1)), then one value of *RV*(*type*(*p*2)), and so on up to *card*(*Value*(*opReq*)) ≤ *Max*. This part is discussed in the next section.
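A sketch of the preamble search and of the *Max* reduction heuristic, assuming an adjacency-list encoding of the product and mutable per-parameter *RV* lists:

```
# Sketch: DFS preamble search (cycle-free) and the Max cardinality heuristic.
def dfs_path(graph, source, target, path=()):
    if source == target:
        return path
    for (action, dest) in graph.get(source, []):
        if dest not in [step[1] for step in path]:      # avoid revisiting locations
            found = dfs_path(graph, dest, target, path + ((action, dest),))
            if found is not None:
                return found
    return None

def reduce_values(value_tuples, rv_by_param, max_card):
    values, params, i = list(value_tuples), list(rv_by_param), 0
    while len(values) > max_card and any(rv_by_param[p] for p in params):
        p = params[i % len(params)]       # drop one RV value per parameter, in turn
        if rv_by_param[p]:
            dropped = rv_by_param[p].pop()
            values = [v for v in values if dropped not in v]
        i += 1
    return values
```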

The STS *tc*, modelling a test case, is reset and its variables are initialized with *ϱ*0. The previous preamble path and the transition labelled by the operation request ?*opReq*, with one value of *Value*(*opReq*), are added to the transition set of *tc* (lines 12-15). Then, the algorithm also adds each next transition (*lk*+1, *lf*, !*a*(*p*), *ϕk*+1, *ϱk*+1), with the location *lf* labelled by a verdict in {*pass*, *fail*}, and the transitions to *Fail* (lines 16-19). We obtain an STS tree which describes a complete operation invocation. *tc* is finally added to *TCT*(*op*).

The "Solving" method takes a path *path* and returns a variable update *�*<sup>0</sup> which satisfies the complete execution of *path*. If the constraint solvers Een & Sörensson (2003); Kiezun et al. (2009) cannot compute a value set allowing to execute *path*, then "solving" returns an empty set (lines 21-28). We use the solvers in Een & Sörensson (2003) and Kiezun et al. (2009) which work as external servers that can be called by the test case generation algorithm. The solver Kiezun et al. (2009) manages "String" types, and the solver Een & Sörensson (2003) manages most of the other simple types.

Let us go back to our example of Figure 11, which depicts the completed synchronous product P<sup>compl</sup><sub>T1</sub>(*ItemSearch*). If we suppose *Spec*(*ItemSearch*) = {("*ID*", "*book*", "*potter*")} and *Inj* = {"' or '1'='1"}, we obtain four test cases, two per value, since the operation ItemSearch can be called two times in P<sup>compl</sup><sub>T1</sub>(*ItemSearch*). Figure 12 illustrates the two test cases for the SQL injection "' or '1'='1". With the second test case, the operation ItemSearch is firstly called with ("ID","book","potter") to reach the second invocation, which is tested with the value "' or '1'='1".


**Algorithm 1:** STS extraction from synchronous products

**1** Testcase(STS) : TC;
**input** : an STS P<sup>compl</sup><sub>T</sub>(*op*)
**output** : a test case set *TCT*(*op*)
**2** **foreach** transition *t* = *lk* −−?*opReq*(*p*1,...,*pn*),*ϕk*,*ϱk*−→ *lk*+1, with *ϱk* composed of the assignment *TestDom* := *Domain* (Section 4.5) **do**
**3** &nbsp;&nbsp;**repeat**
**4** &nbsp;&nbsp;&nbsp;&nbsp;*path* = DFS(*l*0, *lk*);
**5** &nbsp;&nbsp;&nbsp;&nbsp;*ϱ*0 := Solving(*path*);
**6** &nbsp;&nbsp;**until** *ϱ*0 ≠ ∅;
**7** &nbsp;&nbsp;**if** *ϱ*0 == ∅ **then**
**8** &nbsp;&nbsp;&nbsp;&nbsp;go to next transition;
**9** &nbsp;&nbsp;*Spec*(*opReq*) = {(*x*1, ..., *xn*) ∈ *D*(*p*1,...,*pn*) | (*x*1, ..., *xn*) := Solving(*path*.*t*)};
**10** &nbsp;&nbsp;*Value*(*opReq*) := {(*x*1, ..., *xn*) ∈ *Spec*(*opReq*) if *Spec*(*opReq*) ∈ *Domain*} ∪ {(*x*1, ..., *xn*) ∈ *D*(*p*1,...,*pn*) | *xi* ∈ *RV*(*type*(*pi*)) if *RV* ∈ *Domain*} ∪ {(*x*1, ..., *xn*) ∈ *D*(*p*1,...,*pn*) | ((*x*′1, ..., *x*′n) ∈ *Spec*(*opReq*), *xi* = *x*′i if *type*(*pi*) ≠ "String", *xi* ∈ *Inj* if *type*(*pi*) == "String"), if *Inj* ∈ *Domain*};
**11** &nbsp;&nbsp;**foreach** (*x*1, ..., *xn*) ∈ *Value*(*opReq*) **do**
**12** &nbsp;&nbsp;&nbsp;&nbsp;STS *tc* := ∅;
**13** &nbsp;&nbsp;&nbsp;&nbsp;*ϱ*0 is the variable initialization of *tc*;
**14** &nbsp;&nbsp;&nbsp;&nbsp;*ϕtc* := [*p*1 := *x*1, ..., *pn* := *xn*];
**15** &nbsp;&nbsp;&nbsp;&nbsp;→*tc* := →*tc* ∪ *path*.(*lk* −−?*opReq*(*p*1,...,*pn*),*ϕk*∪*ϕtc*,*ϱk*−→ *lk*+1);
**16** &nbsp;&nbsp;&nbsp;&nbsp;**foreach** transition *t*′ = *lk*+1 −−!*a*(*p*),*ϕ*,*ϱ*−→ *lk*+2 **do**
**17** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;→*tc* := →*tc* ∪ *t*′;
**18** &nbsp;&nbsp;&nbsp;&nbsp;**foreach** transition *t*′ = *l* −−!*a*(*p*),*ϕ*,*ϱ*−→ *Fail* such that *l* is a location of *path* **do**
**19** &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;→*tc* := →*tc* ∪ *t*′;
**20** &nbsp;&nbsp;&nbsp;&nbsp;*TCT*(*op*) := *TCT*(*op*) ∪ *tc*;

**21** Solving(*path p*) : *ϱ*;
**22** &nbsp;&nbsp;*p* = (*l*0, *l*1, *a*0, *ϕ*0, *ϱ*0)...(*lk*, *lk*+1, *ak*, *ϕk*, *ϱk*);
**23** &nbsp;&nbsp;*c* = *ϕ*0 ∧ *ϕ*1(*ϱ*0) ∧ ... ∧ *ϕk*(*ϱk*−1);
**24** &nbsp;&nbsp;(*x*1, ..., *xn*) = solver(*c*) // solving of the guard *c*, composed of the variables (*X*1, ..., *Xn*), such that *c*(*x*1, ..., *xn*) is true;
**25** &nbsp;&nbsp;**if** (*x*1, ..., *xn*) == ∅ **then**
**26** &nbsp;&nbsp;&nbsp;&nbsp;*ϱ* := ∅
**27** &nbsp;&nbsp;**else**
**28** &nbsp;&nbsp;&nbsp;&nbsp;*ϱ* := {*X*1 := *x*1, ..., *Xn* := *xn*}


Fig. 12. Test case examples

#### **5.4 Test verdict**

In the test case generation steps, for a test purpose *tp* ∈ *TP*, we have defined the completion of the product S<sup>↑</sup> × *tp* so that it recognizes the non-conformant behaviours leading to its Fail states. Hence, the non-conformant trace set *NC*\_*STraces*(S<sup>↑</sup> × *tp*) can also be written as *STraces<sub>Fail</sub>*((S<sup>↑</sup> × *tp*)<sup>*compl*</sup>), the set of suspension traces leading to Fail. As a consequence, the *secure<sub>TP</sub>* relation can also be defined by:

$$\begin{array}{ll}
Impl\ secure_{TP}\ \mathcal{S} & \Leftrightarrow \forall tp \in TP,\ STraces(Impl) \cap NC\_STraces(\mathcal{S}^{\uparrow} \times tp) = \emptyset \\
 & \Leftrightarrow \forall tp \in TP,\ STraces(Impl) \cap STraces_{Fail}((\mathcal{S}^{\uparrow} \times tp)^{compl}) = \emptyset
\end{array}$$

Now, it is manifest that the test case set *TC* derived by our method allows checking the satisfaction of the relation *secure<sub>TP</sub>*, since each test case TC ∈ *TC* is selected in the product (S<sup>↑</sup> × *tp*)<sup>*compl*</sup>. So, when a test case yields a suspension trace leading to a Fail state, the implementation does not respect the test purposes and thus the security test patterns.
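On finite trace sets, the relation reduces to the emptiness of intersections. A toy check, with made-up traces purely for intuition:

```python
# Toy illustration of secure_TP over finite trace sets (made-up traces): Impl
# is secure iff none of its suspension traces belongs to the non-conformant
# traces of some completed product S^ x tp.
impl_straces = {("?ItemSearch", "!ItemSearchResponse"),
                ("?ItemSearch", "!soapfault")}
nc_straces = [{("?ItemSearch", "!soapfault")}]   # NC_STraces(S^ x tp), per tp

secure_tp = all(not (impl_straces & nc) for nc in nc_straces)
print(secure_tp)   # False: one observed trace leads to a Fail state
```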

For a test case TC, the suspension traces of TC are obtained by executing it on the implementation *Impl*. This execution of one test case TC on *Impl* corresponds to the parallel composition of the LTS semantics *tc* = ||TC|| with Δ(*Impl*), which is modelled by the LTS Δ(*Impl*)||*tc* = ⟨*Q<sub>Impl</sub>* × *Q<sub>tc</sub>*, *q*0<sub>*Impl*</sub> × *q*0<sub>*tc*</sub>, Σ<sub>*Impl*</sub>, →<sub>Δ(*Impl*)||*tc*</sub>⟩, where →<sub>Δ(*Impl*)||*tc*</sub> is given by the following rule:


$$\frac{q_1 \xrightarrow{a}_{\Delta(Impl)} q_2, \quad q_1' \xrightarrow{a}_{tc} q_2'}{q_1 q_1' \xrightarrow{a}_{\Delta(Impl)||tc} q_2 q_2'}$$

Pragmatically, the tester executes a test case by covering branches of the test case tree until a *Pass* or a *Fail* location is reached. If a test case transition corresponds to an operation invocation, the latter is called with values given in the guard. Otherwise, the tester observes an event such as a response or quiescence. It searches for the next transition, which matches the observed event, and covers it.

Now, we can say that the implementation *Impl* is *secure<sub>TP</sub>*, or in other words that it satisfies the test purpose set *TP*, if for every test case TC in *TC*, the execution of TC on *Impl* does not lead to a Fail state.
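Pragmatically again, the verdict computation is a simple tree walk. The sketch below assumes a dictionary encoding of test cases of our own devising (the actual test cases are STSs executed through SOAPUI):

```python
# Verdict computation sketch (hypothetical test case encoding, not the
# authors' tool): edges either invoke an operation or expect an observation;
# any unexpected observation falls onto an implicit Fail edge.
QUIESCENCE = "!delta"   # produced by observe() when the 60 s delay expires

def execute(test_case, invoke, observe):
    """Covers one branch of the test case tree until Pass or Fail is reached."""
    loc = test_case["initial"]
    while loc not in ("Pass", "Fail"):
        out = [t for t in test_case["edges"] if t["src"] == loc]
        if out and out[0]["kind"] == "invoke":
            invoke(out[0]["op"], out[0]["values"])   # call with the guard values
            loc = out[0]["dst"]
        else:
            event = observe()                        # response, fault or delta
            nxt = [t for t in out if t.get("event") == event]
            loc = nxt[0]["dst"] if nxt else "Fail"   # unexpected event => Fail
    return loc

# Impl is secure_TP when no generated test case execution returns "Fail".
```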

#### **6. Experimentation and discussion**

This section illustrates the benefits of our method for security testing by giving some experimental results. We also discuss the test coverage and the complexity of the methodology.

#### **6.1 Experimentation results**

Fig. 13. Test tool architecture

We have implemented a part of this methodology in a prototype tool in order to experiment on existing Web services. The tool architecture is illustrated in Figure 13. It performs the steps described in Section 5, i.e., the synchronous products between test purposes and STS specifications, the completion of the synchronous products to add incorrect behaviours, and the test case extraction. Finally, test cases are translated semi-automatically into XML to be executed with the SOAPUI tool Eviware (2011), a unit testing tool for Web services. For simplicity, we have only considered parameters of the String type and used the Hampi solver to generate values for the test case generation. To obtain a reasonable computation time, the String domain has been limited by bounding the String variable size to ten characters and by using a set of constant String values such as identification keys. We have also limited the test case number to 100. The experimentation is based upon six initial abstract test purposes, one for each test pattern given in Section 4.
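For illustration, the execution step delegated to SOAPUI essentially amounts to posting SOAP envelopes and watching for responses, faults or quiescence. Below is a minimal sketch using the Python requests library; the endpoint and the simplified ItemSearch envelope are hypothetical:

```python
# Minimal SOAP test-execution sketch (hypothetical endpoint and envelope; the
# actual tool delegates this step to SOAPUI). The timeout models the 60 s
# delay after which quiescence is observed.
import requests

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <ItemSearch>
      <SearchIndex>{index}</SearchIndex>
      <Keywords>{keywords}</Keywords>
    </ItemSearch>
  </soapenv:Body>
</soapenv:Envelope>"""

def call_operation(endpoint, keywords, timeout=60):
    """Sends one test request and returns the observation used for the verdict."""
    body = ENVELOPE.format(index="Books", keywords=keywords)
    try:
        r = requests.post(endpoint, data=body.encode("utf-8"),
                          headers={"Content-Type": "text/xml; charset=utf-8"},
                          timeout=timeout)
        return r.status_code, r.text        # SOAP response or SOAP fault
    except requests.Timeout:
        return None, "quiescence"           # no response within the delay

# e.g. call_operation("http://example.org/ws", "' or '1'='1")
```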

Firstly, we experimented our methodology on the whole Amazon AWSECommerceService (2009/10 version) Amazon (2009). The current test purpose set did not reveal any security issue. Actually, this Web Service is taken as an example in several research papers, and many new versions of this service have been released to improve its reliability and security. Therefore, these results are not surprising.

| Web Service (WSDL) | test number | Availability | Authentication | Authorization |
|---|---|---|---|---|
| http://81.91.129.80/DialupWS/dialupVoiceService.asmx?WSDL | 22 | 0 | 0 | 2 |
| http://81.91.129.80/DialupWS/SecurityService.asmx?WSDL | 18 | 0 | 0 | 3 |
| https://intrumservice.intrum.is/vidskiptavefurservice.asmx?WSDL | 66 | 6 | 1 | 0 |
| http://www.handicap.fr/server_hanproducts.php?wsdl | 78 | 2 | 0 | 4 |
| https://gforge.inria.fr/soap/index.php?wsdl | 100 | 1 | 0 | 0 |
| http://193.49.35.64/ModbusXmlDa?WSDL | 30 | 2 | 0 | 0 |
| http://nesapp01.nesfrance.com/ws/cdiscount?wsdl | 30 | 2 | 0 | 2 |
| http://developer.ebay.com/webservices/latest/ShoppingService.wsdl | 30 | 10 | 0 | 0 |
| http://research.caspis.net/webservices/flightdetail.asmx?wsdl | 56 | 0 | 0 | 1 |
| http://student.labs.ii.edu.mk/ii9263/slaveProject/Service1.asmx?WSDL | 26 | 0 | 6 | 0 |
| http://biomoby.org/services/wsdl/www.iris.irri.org/getGermplasmByPhenotype | 20 | 10 | 0 | 0 |
| http://www.infored.com.sv/SRCNET/SRCWebServiceExterno/WebServSRC/servSRCWebService.asmx?WSDL | 60 | 0 | 1 | 1 |


Fig. 14. Experimentation results

We also tested about 100 other Web Services available on the Internet. Security vulnerabilities have been revealed for roughly 11 percent of them, even though our test purpose set is limited. Six percent have authorization issues and return confidential data such as logins, passwords and user-private information. Figure 14 summarizes our results.

Different kinds of issues have been collected. For instance, the Web Service *getGermplasmByPhenotype* is no longer available when it is called with special characters. Here, we suspect an "improper data validation" vulnerability. Authorization issues have been detected with *server\_hanproducts.php*, since it returns SOAP responses containing confidential data such as table names and database values. Similar issues are raised with the Web Service *cdiscount*. These services thus fail to provide confidentiality for stored data. With *slaveProject/Service1.asmx*, the "brute force" attack can be applied to extract logins and passwords.

The experimentation has also revealed that other factors may lead to a fail verdict. For instance, the test of the *Ebay shopping* Web Service showed that quiescence was observed for a third of the operation requests. Instead of receiving SOAP messages, we obtained the error "HTTP 503", meaning that the service was not available. We may suppose that the server was experiencing a high traffic load.


| Step | Complexity | Location nb | Transition nb |
|---|---|---|---|
| Synchronous product | nn′ + (n + k)n′ | k | n |
| Completion | k | k + 1 | n + kn |
| Test case extraction | (k + 1 + n + kn) × n × Value(opReq) | / | / |

Fig. 15. Time complexity of the methodology

#### **6.2 Discussion**



Both the complexity and the test coverage were left aside in the methodology description. They can now be discussed:

• *Methodology complexity:* the whole methodology complexity is polynomial in time in the worst case (reached with large test purposes testing the implementation exhaustively). This complexity is summarized in Figure 15, for one test purpose, with *n* (*n′*) the specification (test purpose) transition number and *k* (*k′*) the specification (test purpose) location number. The *Location nb* (*Transition nb*) column gives the location number (transition number) of the resulting STS once the step is achieved. In the experimentation part, we have observed that this complexity is strongly reduced in practice, since the synchronous product step produces STSs with only a few more locations and transitions than the specification ones. Nevertheless, this complexity also depends on the number of testing values in *Value*(*opReq*): if *Value*(*opReq*) is large, both the complexity and the test case number may manifestly explode. This is why we implemented a heuristic which limits the test case set, by limiting the *Max* value in the test case extraction algorithm (Algorithm 1); a small worked instance of Figure 15 is given below. When the test case number is limited to 100, testing one Web Service with our tool takes at most a few minutes; the execution of 1500 tests requires less than one hour. The whole test cost naturally depends on the test case number, but also on the delay required to observe quiescence. We have arbitrarily set this delay to 60 s, but it may be necessary to augment or reduce it.
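To make the Figure 15 bounds concrete, here is a small worked instance; the specification and test purpose sizes are hypothetical, chosen only for illustration:

```python
# Worked instance of Figure 15 (hypothetical sizes): a specification with
# k = 10 locations and n = 25 transitions, a test purpose with k' = 3 and
# n' = 5, and card(Value(opReq)) bounded by Max = 100.
k, n, k2, n2, max_values = 10, 25, 3, 5, 100

product_cost    = n * n2 + (n + k) * n2                  # 125 + 175 = 300
completion_cost = k                                      # 10
extraction_cost = (k + 1 + n + k * n) * n * max_values   # 286 * 25 * 100 = 715000

print(product_cost, completion_cost, extraction_cost)
```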


• *Test coverage:* the test coverage of the testing method depends on the test pattern number and on the *Max* parameter, which bounds the test number per operation. Firstly, the larger the test pattern set, the more issues can be detected while testing. However, our experiment results show that a non-exhaustive test purpose set is already able to detect issues on a large number of Web services. The method is also scalable, since the predefined sets of values *RV* and *Inj* can be upgraded easily.

The test coverage also depends, besides the test pattern number, on the number of parameters per operation: the higher the number of parameters, the more difficult it is to cover the variable domain space. This corresponds to a well-known issue in software testing. So, we have chosen a straightforward solution by bounding the test case number per operation. The *Max* value must be chosen according to the time available for test execution, but also according to the number of parameters used by the Web service operations, so that each parameter is covered by a sufficiently large value set. For instance, for one operation composed of 4 parameters, each covered with at least 6 values, the *Max* parameter must be set to about 1300 tests (6<sup>4</sup> = 1296 combinations). Nevertheless, as illustrated in our results, a lower test case number (100 tests) is sufficient to discover security issues. Other interesting solutions for parameter coverage need to be investigated, such as *pairwise* testing Cohen et al. (2003), which requires that, for each pair of input parameters, every combination of values of these two parameters is covered by some test case (see the sketch below).
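As an illustration of the saving, here is a naive greedy pairwise generator, written for intuition only (it is not Cohen et al.'s algorithm):

```python
# Contrast between exhaustive and pairwise parameter coverage (illustrative
# greedy construction): every pair of values of every two parameters must
# appear in at least one test.
from itertools import combinations, product

def uncovered_pairs(domains):
    return {((i, a), (j, b))
            for i, j in combinations(range(len(domains)), 2)
            for a in domains[i] for b in domains[j]}

def pairwise_tests(domains):
    """Greedily keeps full combinations until all parameter pairs are covered."""
    todo, tests = uncovered_pairs(domains), []
    for combo in product(*domains):                       # candidate tests
        covered = {((i, combo[i]), (j, combo[j]))
                   for i, j in combinations(range(len(combo)), 2)}
        if covered & todo:                                # covers a new pair
            tests.append(combo)
            todo -= covered
        if not todo:
            break
    return tests

domains = [["a", "b", "c", "d", "e", "f"]] * 4            # 4 params, 6 values
print(len(list(product(*domains))), "exhaustive tests")   # 1296
print(len(pairwise_tests(domains)), "pairwise tests")     # far fewer
```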

Eviware (2011). Soapui. http://www.soapui.org/.

Frantzen, L., Tretmans, J. & de Vries, R. (2006). Towards model-based testing of web services, *in* A. Bertolino & A. Polini (eds), *Proceedings of the International Workshop on Web Services Modeling and Testing (WS-MaTe2006)*, Palermo, Sicily, ITALY, pp. 67–82.

Frantzen, L., Tretmans, J. & Willemse, T. (2005). Test Generation Based on Symbolic Specifications, *in* J. Grabowski & B. Nielsen (eds), *Formal Approaches to Software Testing – FATES 2004*, number 3395 in *Lecture Notes in Computer Science*, Springer, pp. 1–15.

Gruschka, N. & Luttenberger, N. (2006). Protecting web services from DoS attacks by SOAP message validation, *Proceedings of the IFIP TC11 21st International Information Security Conference (SEC)*.

*IEEE Standard glossary of software engineering terminology* (1999). *IEEE Standards Software Engineering 610.12-1990. Customer and terminology standards*, IEEE press.

Bijl van der, H. M., Rensink, A. & Tretmans, G. J. (2003). Component based testing with ioco. URL: *http://doc.utwente.nl/41390/*

ISO/IEC (2009). Common Criteria for Information Technology Security (CC), *ISO/IEC 15408, version 3.1*.

Kalam, A. A. E., Benferhat, S., Miège, A., Baida, R. E., Cuppens, F., Saurel, C., Balbiani, P., Deswarte, Y. & Trouessin, G. (2003). Organization based access control, *Proceedings of the 4th IEEE International Workshop on Policies for Distributed Systems and Networks*, POLICY '03, IEEE Computer Society, Washington, DC, USA, pp. 120–132.

Kiezun, A., Ganesh, V., Guo, P. J., Hooimeijer, P. & Ernst, M. D. (2009). Hampi: a solver for string constraints, *ISSTA '09: Proceedings of the Eighteenth International Symposium on Software Testing and Analysis*, ACM, New York, NY, USA.

Kropp, N. P., Koopman, P. J. & Siewiorek, D. P. (1998). Automated robustness testing of off-the-shelf software components, *FTCS '98: Proceedings of the Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing*, IEEE Computer Society, Washington, DC, USA, p. 230. URL: *http://dl.acm.org/citation.cfm?id=826036.826869*

Le Traon, Y., Mouelhi, T. & Baudry, B. (2007). Testing security policies: going beyond functional testing, *ISSRE'07 (Int. Symposium on Software Reliability Engineering)*. URL: *http://www.irisa.fr/triskell/publis/2007/letraon07.pdf*

Mallouli, W., Bessayah, F., Cavalli, A. & Benameur, A. (2008). Security rules specification and analysis based on passive testing, *in* IEEE (ed.), *The IEEE Global Communications Conference (GLOBECOM 2008)*.

Mallouli, W., Mammar, A. & Cavalli, A. R. (2009). A formal framework to integrate timed security rules within a TEFSM-based system specification, *16th Asia-Pacific Software Engineering Conference (APSEC'09)*, Malaysia.

Martin, E. (2006). Automated test generation for access control policies, *Companion to the 21st ACM SIGPLAN Symposium on Object-Oriented Programming Systems, Languages, and Applications*, OOPSLA '06, ACM, New York, NY, USA, pp. 752–753. URL: *http://doi.acm.org/10.1145/1176617.1176708*

Mouelhi, T., Fleurey, F., Baudry, B. & Traon, Y. (2008). A model-based framework for security policy specification, deployment and testing, *Proceedings of the 11th International Conference on Model Driven Engineering Languages and Systems*, MoDELS '08, Springer-Verlag, Berlin, Heidelberg, pp. 537–552.
