#### **3.5 Separation of intent and implementation**

In our proposed framework, treating intents as types has a deeper significance: it allows intents to be separated from implementations, which we believe is critical to enabling collaboration. Today, intent and implementation are intertwined, as can be seen from how systems often dictate exactly how we must carry out tasks in fulfilment of some intent. In the context of the knowledge workplace, however, where the creativity and autonomy of individuals are especially valued, this is too rigid. Moreover, in practice, intents are often separate from implementations and may even be contributed by different people. In particular, collaboration at scale is complex, and we cannot reasonably expect it to be well-defined or pre-defined from the outset. On the contrary, we can expect that for any collaboration:


Essentially, we need to flexibly handle the division of interdependent labour, interconnected intents, and diverse methods of handling the task at hand. The question, therefore, is: how can we tie people's collaboration together, managing the flow of information between them, so that each person can declare what they can do for others without overly constraining their implementation, and so that each task, with its input data, can be broken into groups of data handled by different implementations?

To achieve this, we believe that intent must be separated from implementation using type theory, as enabled through our proposed framework.
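To make this separation concrete, consider a minimal sketch in Python. This is purely illustrative and not part of the framework itself: the intent `Summarise` and both implementations are hypothetical names, and Python's `Callable` type stands in for the richer types of the framework. The point is that collaborators depend only on the intent's type, never on a particular implementation.

```python
from typing import Callable

# Hypothetical intent, declared as a type signature only:
# given a document, produce a summary. How is left unspecified.
Summarise = Callable[[str], str]

def summarise_extractive(doc: str) -> str:
    """One possible implementation: return the first sentence."""
    return doc.split(".")[0].strip() + "."

def summarise_headline(doc: str) -> str:
    """A different implementation of the same intent: first five words."""
    return " ".join(doc.split()[:5])

def fulfil(impl: Summarise, doc: str) -> str:
    """Consumers of the intent depend only on its type, not on how it is met."""
    return impl(doc)
```

Either implementation can be swapped in without the consumer changing, which is the collaborative flexibility the separation is meant to provide.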

#### **3.6 Framework axioms**

Next, we derive the rules (axioms) of our proposed framework, which govern its operations in a type-theoretic manner.

Given some data *x* : *X* of type *X* and an intent *G*(*x*), the output is some implementation *g*(*x*). We represent this statement type-theoretically in Eq. (4), which relates an implementation (term) to its intent (type).

$$g(x) : G(x) \tag{4}$$

Alternatively, we can write

$$g : \Pi_{x:X}\, G(x) \tag{5}$$

where we view *g* as a term of a dependent product type. To fulfil an intent *G*(*x*):

• There may exist data groups or subtypes $X_1, X_2, \dots, X_k \subseteq X$ where the intent former $G$ can be applied. This means that for different data $x_1 : X_1, x_2 : X_2, \dots, x_k : X_k$, we could form different intents.

$$G(x_1), G(x_2), \dots, G(x_k) \tag{6}$$

• Given some data $x_i : X_i$, the intent $G(x_i)$ may have one or more implementations. Moreover, there may exist any number of possible strategies or implementation formers $g_1, g_2, \dots, g_m$ for constructing implementations for $G(x_i)$.

$$g_1, g_2, \dots, g_m : G(x_i) \tag{7}$$

• Each implementation former $g_j$ for $G(x_i)$ may consume the outputs of one or more constituent intents $\Gamma_1, \Gamma_2, \dots, \Gamma_n$, whose associated implementations may in turn receive inputs from one or more of the other constituent intents (Eq. (8)). This means that an intent may contain its own hierarchy of constituent intents.

$$\begin{aligned} y_1 : \Gamma_1(x_i) &\to y_2 : \Gamma_2(x_i, y_1) \\ &\to \dots \\ &\to y_n : \Gamma_n(x_i, y_1, \dots, y_{n-1}) \\ &\to g_j(x_i, y_1, y_2, \dots, y_n) : G(x_i) \end{aligned} \tag{8}$$
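The chain of constituent intents in Eq. (8) can be sketched in Python. This is an illustrative assumption, not the framework's prescribed machinery: the constituent intents $\Gamma_1$ (normalise) and $\Gamma_2$ (tokenise), and the top-level intent $G$ (count tokens), are hypothetical examples chosen only to show how each output $y_k$ feeds subsequent steps and how $g_j$ finally consumes all of them.

```python
def gamma_1(x: str) -> str:
    """Constituent intent Γ_1: normalise the raw input."""
    return x.lower()

def gamma_2(x: str, y1: str) -> list:
    """Constituent intent Γ_2: tokenise, given x and Γ_1's output y_1."""
    return y1.split()

def g_j(x: str, y1: str, y2: list) -> int:
    """Implementation former g_j for the top-level intent G(x):
    consume the constituent outputs y_1, y_2 to count tokens."""
    return len(y2)

def fulfil_G(x: str) -> int:
    y1 = gamma_1(x)        # y_1 : Γ_1(x)
    y2 = gamma_2(x, y1)    # y_2 : Γ_2(x, y_1)
    return g_j(x, y1, y2)  # g_j(x, y_1, y_2) : G(x)
```

Each `gamma_k` could itself be replaced by a different implementation of the same constituent intent, reflecting the hierarchy described above.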

In summary, these framework rules allow us to:


#### **3.7 Group and assign**

Next, to complete the framework, we introduce two algorithms, Group and Assign, as abstract methods, described in Algorithm 1 and Algorithm 2 respectively.

*Group-Assign: Type Theoretic Framework for Human AI Orchestration DOI: http://dx.doi.org/10.5772/intechopen.96739*

The Group function (Eq. (9)) is defined as follows: for every intent, we have a set of data that can be further grouped into smaller groups based on some grouping criteria *J*.

$$\mathrm{Group}(G, [x]) \to (G, [(x, j)]) \tag{9}$$

**Algorithm 1** Group.

1: **Input**: $G$, $[x]$ where $x \in X$; $J$ where $j \in J$
2: **Output**: $(G, [(x, j)])$
3: **Initialise**:
4: $L \leftarrow \emptyset$ where $L$ is a placeholder list to collate all $(x, j)$ pairs
5: **for** each $j$ in $J$ **do**
6: &emsp;**if** $x$ matches criteria $j$ **then**
7: &emsp;&emsp;$L \leftarrow (x, j)$
8: &emsp;**end if**
9: **end for**
10: $(G, [(x, j)]) = (G, L)$
11: **return** $(G, [(x, j)])$
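A direct Python rendering of Algorithm 1 may help fix ideas. It is a sketch under stated assumptions: the paper leaves the representation of criteria abstract, so here each criterion $j$ is modelled, hypothetically, as a named predicate, and the single datum of the pseudocode is lifted to a list of data.

```python
def group(G, xs, criteria):
    """Group(G, [x]) -> (G, [(x, j)]): pair each datum x with every
    grouping criterion j in J that it matches.

    criteria: dict mapping a criterion label j to a predicate on x
              (an illustrative assumption, not the paper's representation).
    """
    L = []  # placeholder list collating all (x, j) pairs
    for x in xs:
        for j, matches in criteria.items():
            if matches(x):  # x matches criteria j
                L.append((x, j))
    return (G, L)
```

For example, `group("summarise", [3, "a"], {"number": lambda v: isinstance(v, int), "text": lambda v: isinstance(v, str)})` pairs each datum with the criterion it satisfies.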

The Assign function (Eq. (10)) is defined as follows: for each group belonging to an intent *G*, some implementation *g* is defined and applied to the group.

$$\mathrm{Assign}(G, [(x, j)]) \to (G, [(x, j, g_j(x, y_1, y_2, \dots, y_m))]) \tag{10}$$

**Algorithm 2** Assign.

1: **Input**: $(G, [(x, j)])$
2: **Output**: $(G, [(x, j, g_j)])$
3: **for** each $(x, j)$ in $[(x, j)]$ **do**
4: &emsp;**if** some $g$ exists for $(x, j)$ **then**
5: &emsp;&emsp;$(x, j, g_j) \leftarrow (x, j).\mathrm{append}(g_j)$
6: &emsp;**end if**
7: **end for**
8: **return** $(G, [(x, j, g_j)])$
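As with Group, a Python sketch of Algorithm 2 can make the abstract method concrete. The lookup of an implementation for each group is left abstract in the paper; here it is modelled, as an illustrative assumption, by a dictionary from group label $j$ to a callable $g_j$, which is applied to the datum when it exists.

```python
def assign(grouped, implementations):
    """Assign(G, [(x, j)]) -> (G, [(x, j, g_j(x))]): for each group,
    apply its implementation g_j where one exists.

    implementations: dict mapping a group label j to a callable g_j
                     (an illustrative assumption, not the paper's lookup).
    """
    G, pairs = grouped
    out = []
    for x, j in pairs:
        if j in implementations:        # some g exists for (x, j)
            g_j = implementations[j]
            out.append((x, j, g_j(x)))  # append the applied implementation
        else:
            out.append((x, j))          # no implementation: pass through
    return (G, out)
```

Composing the two sketches, `assign(group(G, xs, criteria), implementations)` realises the overall Group-then-Assign flow: data are first partitioned by criteria, then each partition is handled by whichever implementation serves it.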

Our proposed framework is thus formed collectively by the framework rules (Section 3.6) and the Group and Assign algorithms. This completes the description of our framework; in the following sections, we discuss the evaluation strategies and findings for our proposed framework.

#### **3.8 Evaluation approach**

Our proposed work brings together type theory, type-theoretic framework axioms and associated functionalities as a human AI intent orchestration framework intended for real-world application. To the best of our knowledge, this is a novel and uniquely positioned effort to introduce capabilities for enabling a collaborative human AI future. This concurrently presents a challenge in determining how best to evaluate the proposed framework, as widespread adoption and understanding will require time and effort beyond our scope of research; this is reasonably expected, given that a collaborative human AI society has yet to become the norm at the time of writing.

Going forward, we intend to apply our proposed framework and progress beyond our preliminary evaluation efforts, progressively identifying and engaging external parties for further evaluation through joint collaborations. Nevertheless, we endeavour to provide an evaluation of our proposed framework here, and therefore consider the following:


We believe that evaluation strategies for a toolkit (which we consider comparable to our proposed framework) from the domain of human-computer interaction [15] are relevant and suitable. We also note the authors' observation that "The problem is that toolkit evaluation is challenging, as it is often unclear what 'evaluating' a toolkit means and what methods are appropriate.", which speaks to a similar challenge for us and one still commonly faced by toolkit researchers to date. Concretely, we reference and adopt two well-established evaluation strategies for the purpose of our evaluation, namely:

