**10. Generation of convincing explanations**

It is generally accepted that generating convincing explanations by statistical methods is not easy. A variety of explanations may instead be created by other methods, for example by varying the position along the axes briefly presented below:

1. From verbose to concise text.

This axis spans the conciseness of the explanatory text. It starts from long, verbose explanations, which are usually the easiest to generate automatically by tracing the steps of the deduction that underlies the explanation. Using summarization, the texts may then be reduced in size and made more concise.
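As a toy illustration of this axis, the sketch below contrasts a verbose, step-by-step rendering of a deduction trace with a summarized one. The (conclusion, justification) step format and the compression rule are our simplifying assumptions, not the system's actual representation.

```python
# Sketch: compressing a verbose deduction trace into a concise explanation.
# The (conclusion, justification) step format is a simplifying assumption.

def verbose_explanation(steps):
    """One sentence per deduction step, each with its justification."""
    return " ".join(f"{concl} because {just}." for concl, just in steps)

def concise_explanation(steps):
    """Summarize the trace by keeping only the final conclusion."""
    return steps[-1][0] + "."

steps = [
    ("alveoli becoming smaller causes a rise of alveolar pressure",
     "it increases the elastic forces that raise alveolar pressure"),
    ("alveoli becoming smaller causes flow of lungs air",
     "the rise of alveolar pressure drives the air flow"),
]

print(verbose_explanation(steps))
print(concise_explanation(steps))
```

The same trace thus supports both ends of the axis: the verbose form exposes every justification, while the concise form keeps only the answer-bearing conclusion.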

2. From mathematical model to qualitative causal reasoning steps.

For instance, we have implemented a system, based on differential equations describing the interactions of the proteins p53 and Mdm2, that generates descriptions of their output waveforms useful for explaining the behavior of the cellular system they define. If the user is unfamiliar with mathematics, we may still use a mathematical model invisibly to the user and refer to its parameters in a language familiar to the user, as follows:


From the above text, the answer to the question "What process of alveoli causes flow of lungs air?" is 'become smaller', and the explanation is:

*"Because surface tension elastic forces are part of elastic forces and alveoli becoming smaller causes an increase of surface tension elastic forces, alveoli becoming smaller causes an increase in elastic forces. Alveoli becoming smaller causes a rise of alveolar pressure because alveoli becoming smaller causes an increase in elastic forces and elastic forces cause a rise of alveolar pressure. Alveoli becoming smaller causes flow of lungs air because alveoli becoming smaller causes a rise of alveolar pressure and a rise of alveolar pressure causes flow of lungs air."*
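The causal chain in such an explanation can be produced mechanically from binary cause-effect relations. The sketch below is only an illustration: the relation pairs are transcribed from the quoted explanation, and the depth-first traversal and wording are ours, not the system's.

```python
# Sketch: qualitative causal reasoning over (cause, effect) pairs.
# The relations are hand-transcribed from the quoted explanation;
# the traversal is an illustrative assumption, not the actual system.

CAUSES = [
    ("alveoli becoming smaller", "increase of surface tension elastic forces"),
    ("increase of surface tension elastic forces", "increase in elastic forces"),
    ("increase in elastic forces", "rise of alveolar pressure"),
    ("rise of alveolar pressure", "flow of lungs air"),
]

def explain(cause, effect, relations):
    """Return the causal chain from cause to effect (depth-first search)."""
    if cause == effect:
        return [cause]
    for c, e in relations:
        if c == cause:
            rest = explain(e, effect, relations)
            if rest:
                return [cause] + rest
    return None

chain = explain("alveoli becoming smaller", "flow of lungs air", CAUSES)
print(" causes ".join(chain))
```

Each intermediate node of the recovered chain corresponds to one "because" clause of the textual explanation.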

3. From purely textual to purely pictorial.

Along this axis, the alternatives rely to a variable degree on textual or pictorial presentation of the explanations.

4. From oral to visual.

Along this axis, the explanations are adapted to users who prefer either to hear or to see the relevant information.

5. From a term definition to documentation by publications.

If a user does not understand some of the information in an explanation, there will be provision for requesting additional clarifying information, ranging from the definition of a term unknown to the user to publications relevant to the unknown concepts or data.

6. From deterministic to probabilistic models.

For explanations based on a causal model, there will be a choice between deterministic and probabilistic (e.g., stochastic automata) models.
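To make the contrast concrete, here is a minimal sketch of the two options for a single causal step; the transition probabilities are invented for illustration and are not taken from any actual model.

```python
import random

# Deterministic causal model: an effect always follows its cause.
DETERMINISTIC = {"alveoli becoming smaller": "rise of alveolar pressure"}

# Probabilistic model (a one-step stochastic automaton): each cause leads
# to an effect with some probability.  The probabilities are invented.
STOCHASTIC = {
    "alveoli becoming smaller": [
        ("rise of alveolar pressure", 0.9),
        ("no observable change", 0.1),
    ],
}

def step_deterministic(state):
    return DETERMINISTIC[state]

def step_stochastic(state, rng):
    r, acc = rng.random(), 0.0
    for effect, p in STOCHASTIC[state]:
        acc += p
        if r < acc:
            return effect
    return effect  # numerical fallback

rng = random.Random(0)
print(step_deterministic("alveoli becoming smaller"))
counts = {}
for _ in range(1000):
    e = step_stochastic("alveoli becoming smaller", rng)
    counts[e] = counts.get(e, 0) + 1
print(counts)
```

A deterministic explanation can assert the effect outright, while a probabilistic one must qualify it ("in about 90% of cases..."), which is exactly the choice this axis offers the user.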

7. From factoid XQA to deductive XQA.

Users will be asked to provide examples of questions they may wish to ask the explainable question answering (XQA) subsystem. These questions may be factoid, may require deduction, or both.

8. From the biochemical to the system level of the disease being examined.

If the user is not familiar with biochemistry, the system will restrict the set of substances involved in the explanation.

9. From colloquial to formal language.

The language used to express the explanations will vary from everyday colloquial speech to formal scientific jargon, depending on the preferences of the user.

An example concerning the interaction of the two proteins p53 and Mdm2 is used for illustration. The mathematical model is derived from biomedical papers. The qualitative causal explanation is derived from the PubMed text base, which consists of abstracts of biomedical research papers. The pictorial explanation with natural language comments is derived from a computer simulation of a simplified set of equations that approximate those found in the biomedical literature.
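As an illustration of what such a simulation might look like, the sketch below integrates a generic p53-Mdm2 negative-feedback loop with explicit Euler steps and extracts a qualitative feature of the waveform. The equations, parameter values, and the `simulate` helper are our illustrative assumptions, not the model of the cited literature.

```python
# Minimal sketch of a p53-Mdm2 negative-feedback model (illustrative
# equations and parameters only, not those of the biomedical papers):
# p53 drives Mdm2 production, and Mdm2 promotes p53 degradation.

def simulate(steps=20000, dt=0.01,
             s=1.0,       # constant p53 production
             k_deg=1.0,   # Mdm2-dependent p53 degradation rate
             k_prod=1.0,  # p53-dependent Mdm2 production rate
             d=0.5):      # Mdm2 decay rate
    p53, mdm2 = 1.0, 0.0
    trace = []
    for _ in range(steps):
        dp = s - k_deg * mdm2 * p53
        dm = k_prod * p53 - d * mdm2
        p53 += dp * dt
        mdm2 += dm * dt
        trace.append((p53, mdm2))
    return trace

trace = simulate()
# A qualitative, user-facing reading of the waveform: p53 overshoots
# its final level before the negative feedback settles it down.
peak = max(p for p, _ in trace)
final = trace[-1][0]
print(f"p53 peaks at {peak:.2f} and settles near {final:.2f}")
```

A natural-language comment such as "p53 first rises, overshoots, and then settles as Mdm2 accumulates" can then accompany a plot of `trace`, giving the pictorial explanation its caption.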

A rather verbose explanation is generated automatically as follows:

*"I found that the entity <p53> is one of the tokens of the chunk <the p53 protein>, which is the chunk to the left of the verb <regulates> of the sentence <1>. I found that the chunk to the right of the verb <regulates> of the sentence <1> is the chunk <the mdm2 gene> and, since its first token is not an entity, I tested the rest of the tokens. The entity <mdm2> is one of the tokens of the chunk <the mdm2 gene>, which is the chunk to the right of the verb <regulates> of the sentence <1>. I found that the entity <mdm2> is one of the tokens of the chunk <the mdm2 oncogene>, which is the chunk to the left of the verb <inhibits> of the sentence <3>. I found that the chunk to the right of the verb <inhibits> of the sentence <3> is the chunk <p53 mediated transactivation> and the entity <p53> is one of the tokens of the chunk <p53 mediated transactivation>. Hence, it follows that <p53> is influenced by <p53>."*

The causal relations recognized in the text fragment processed by the system form a closed loop. Because of its length, the above explanation may be inconvenient for a user facing a crisis, such as a defense situation or a medical emergency department. A shorter one can be generated as follows:

*"I found that the entity <p53> occurs at the left of the verb of sentence <1> and that the mdm2 gene occurs at the right of the verb of <1> and that mdm2 occurs at the left of the verb of <3> and that p53 occurs at the right of the verb of <3>; hence, it follows that <p53> is influenced by <p53>."*
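The closed loop itself can be verified mechanically once the relations are extracted. In the sketch below, the subject-verb-object triples are transcribed from sentences <1> and <3> of the quoted explanation; the reachability check is our illustration, not the system's actual procedure.

```python
# Sketch: detecting a closed causal loop among extracted relations.
# The (subject, verb, object) triples are transcribed from sentences
# <1> and <3> of the quoted explanation.

RELATIONS = [
    ("p53", "regulates", "mdm2"),   # sentence <1>
    ("mdm2", "inhibits", "p53"),    # sentence <3>
]

def influences(entity, relations, seen=None):
    """All entities reachable from `entity` through the relation graph."""
    seen = set() if seen is None else seen
    for subj, _, obj in relations:
        if subj == entity and obj not in seen:
            seen.add(obj)
            influences(obj, relations, seen)
    return seen

# p53 influences mdm2, which in turn influences p53: a closed loop.
print("p53" in influences("p53", RELATIONS))
```

The same reachability set also supplies the skeleton of the short explanation: each edge traversed on the way back to <p53> contributes one clause.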
