**Abstract**

In today's Information Age, we work under a constant drive to be more productive. Unsurprisingly, we are progressing towards an AI-augmented workforce in which we are augmented by AI assistants and collaborate with each other (and each other's AI assistants) at scale. Among humans, natural language suffices to describe and orchestrate our intents (and corresponding actions) with others. It is, however, clearly insufficient for orchestration between humans and machines; for that, a shared means of communication across a network of different humans and machines is crucial. With this objective, we present a framework and language built upon type theory (a branch of symbolic logic in mathematics) to enable collaboration within a network of humans and AI assistants. While the idea of human-machine or human-computer collaboration is not new, to the best of our knowledge, we are among the first to propose the use of type theory to describe and orchestrate human-machine collaboration. In our proposed work, we define a fundamental set of type theoretic rules and abstract functions *Group* and *Assign* to achieve the type theoretic description, composition and orchestration of *intents* and *implementations* for an AI-augmented workforce.

**Keywords:** AI Augmentation, Human Computation, Human AI Collaboration, Human AI Framework, Artificial Intelligence

#### **1. Introduction**

The nature of work is transformed whenever automation is introduced. Looking back in history, automation has typically served to reduce or eliminate the need for manual work. From hand-delivered messages to the telegraph, written communication in the past required much manual labour. Today, email and a slew of instant messaging platforms get the same job done instantaneously and better. In recent years, highly advanced forms of automation involving AI are making headway into the mainstream workplace. Examples include manufacturing robots learning how to perform bin picking, patrol robots enforcing social distancing in the midst of a virus outbreak [1] and many more. Automation therefore has the overall effect of moving human workers up the cognitive value chain, shifting them towards increasingly managerial and strategic roles that are more knowledge-based. The reason for this is apparent: automation brings characteristics such as being stronger and more tireless than human workers, thereby allowing businesses to do more. Taking a step further, AI is also beginning to encroach into the cognitive realm at work; as an instance, an insurance company reportedly replaced its employees with an AI system [2]. All things considered, it is understandable why the workforce is under a constant drive to do more.

Generally, however, automation is well-suited only for tasks that are repeatable within a fixed context. In contrast, the handling of work tasks across shifting contexts is not a good candidate for automation. As an example, in the face of widespread consumer behaviour change due to a pandemic, AI models supporting sentiment analysis, fraud detection, marketing and inventory management operations no longer behaved as expected. The article [3] writes:

*What's clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key.*

Though AI models are designed to be robust to changes in the incoming data, situations like these reveal the models' brittleness when there is a significant shift in the input data distribution, a condition termed *out-of-distribution* (OOD). It means that the distribution of the input data at the point of inference is no longer the same as that of the training data the AI model learnt from. The negative effect is that the AI model not only potentially makes a mistake on OOD inputs, but may even confidently classify such an input as a known class. Clearly, this is undesirable, and in critical deployments the mistake can be costly.

To counter this, there are research efforts looking into OOD detection. To illustrate with an example, a deep generative model for OOD detection was trained on in-distribution genomic sequences [4], with the log-likelihoods plotted for both in-distribution and OOD inputs. We can see from the results (**Figure 1**) that the histograms of log-likelihoods overlap significantly for in-distribution and OOD inputs, showing the model's inability to differentiate between the two. The authors further note that their observations are not in isolation and are congruent with earlier works using image data.

Naturally, artificial general intelligence (AGI) comes to mind when we broach the topic of narrow AI (as discussed above). We can think of AGI as an AI whose ability to learn, understand and perform intellectual tasks is on par with, or exceeds, that of a human. To conceptualise the relationship between narrow AI and AGI, one may think of them as two sides of the same coin or, rather, the two ends of the same AI spectrum. Simply put, advancements in narrow AI will ultimately evolve towards AGI. Consensus indicates that AGI is not here today [5, 6], and predictions for the advent of AGI range widely, anywhere from a few years to many decades away. Towards this goal, there are research directions on multiple fronts, such as progressing deep learning from its current System 1-like abilities to being System 2-capable [7], improving language understanding through increasingly large and complex language models such as BERT [8] (and its variants) and GPT-3 [9], etc. These are tangible indications that AGI is still some way off.

**Figure 1.** *Log-likelihood hardly separates in-distribution and OOD inputs, adopted from [4].*

*Group-Assign: Type Theoretic Framework for Human AI Orchestration. DOI: http://dx.doi.org/10.5772/intechopen.96739*

Meanwhile, we believe there is a complementary and parallel research direction to advancing narrow AI towards AGI: human AI collaboration, in which we exploit AI (be it narrow or AGI) to augment our natural abilities. In this sense, for as long as work is envisioned to involve humans, the pursuit of human AI collaboration remains a valuable research direction, one which will only become more relevant, and be strengthened further, by AI's advancements and increasing pervasiveness at work. In the following sections, we progressively present our proposed framework designed to enable collaboration within a network of humans and AI. To begin, we first discuss some background concepts relevant to our proposed work.

### **2. Type theory: formal language of terms and types**

Type theory is a branch of symbolic logic in mathematics. The theory of types was conceived to address Russell's paradox, which arises from naïve set theory's principle that any definable collection is a set. If we define *S* as the set of all sets that do not contain themselves, the paradox follows: *S* is a member of itself if and only if it is not a member of itself, contradicting its own definition (Eq. (1)).

$$\text{Let } S = \{x \mid x \notin x\}.\ \text{Then } S \in S \Leftrightarrow S \notin S. \tag{1}$$

We can think of type theory as a formal language, complete with a set of rules that govern the construction of, and computation on, strings of symbols. From here on, we introduce the concepts of *terms* and *types*.

$$a : A \tag{2}$$

We begin with a basic representation (Eq. (2)), more formally referred to as a *judgement*, which is often encountered in type theory and simply means that *a* is a term of type *A*. We can also think of this as *a* being an element of *A*. To illustrate for further clarity, we define some example judgements using real-world objects:

    red : Colour
    green : Colour
    yellow : Colour
    orange : Colour
In type theory, every term must have a type, as seen in the examples above. For ease of understanding, we can loosely correspond this to the set theoretic statement *a* ∈ *A*, where red, green, yellow and orange are all members of type Colour. Another correspondence is *propositions as types* under the Curry-Howard isomorphism [10]: if *A* is a proposition, then a term *a* : *A* is evidence validating *A*. To provide the evidence, we have to perform some mathematical operation to construct the object, that is, the term inhabiting the type. Here we observe that *constructivism* is a foundational aspect of type theory, meaning we cannot simply assume that objects exist through means such as "Suppose that there exists such an object … ", nor prove the existence of some object by deriving a contradiction from the assumption that the object does not exist.
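As an aside, such judgements can be checked mechanically in a dependently typed language. The following Lean 4 sketch (mirroring the colour examples above; illustrative, not from the original text) declares a type and asks Lean to confirm judgements of the form *a* : *A*:

```lean
-- Declare a type Colour with four terms (its constructors).
inductive Colour where
  | red | green | yellow | orange

-- Each #check confirms a judgement of the form a : A.
#check Colour.red     -- Colour.red : Colour
#check Colour.yellow  -- Colour.yellow : Colour
```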

Like other concepts in mathematics, the construction of an object in type theory is governed by rules and we will focus on the introduction of the relevant concepts in the following sections.

#### **2.1 Function type**

To start, we look at the *function type*. Given types *A* and *B*, we can construct the type *A* → *B* of functions mapping from domain *A* to codomain *B*. If we define a function *f* of type *A* → *B* and apply it to a term *a* of type *A*, we obtain a term *f*(*a*) of type *B* (which can also be written as *f*(*a*) : *B*). For function types, the mapping from the domain to the codomain is constant and fixed. Let us look at a more relatable and practical example of a function type:
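As one possible sketch in Lean 4 (the names and values below are hypothetical, chosen only for illustration), a function type always has a fixed codomain regardless of which term it is applied to:

```lean
inductive Colour where
  | red | green | yellow | orange

-- wavelength : Colour → Nat is a function type: whatever term of
-- Colour it is applied to, the result is always a term of Nat
-- (approximate peak wavelengths in nanometres).
def wavelength : Colour → Nat
  | .red    => 700
  | .green  => 530
  | .yellow => 580
  | .orange => 610

#check wavelength Colour.red  -- wavelength Colour.red : Nat
```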


Hence, a function type will always map from some domain *A* to the codomain *B*.

#### **2.2 Dependent product type**

Next, we introduce the *dependent product type*, whose terms are functions whose codomain type varies depending on the term of the domain that the function is applied to. It is also referred to as a *dependent function type* or ∏*-type*. Given a type *A* and a family of types *B* : *A* → 𝒰, we can construct the type of dependent products $\prod_{x:A} B(x) : \mathcal{U}$, where 𝒰 is known as a *universe*, whose elements are types. The dependent product type is often used in type theory, and we can think of it as a more generalised form of the function type. The main difference lies in *B* being a constant family in the function type, such that $\prod_{x:A} B \equiv A \to B$.

To illustrate, let us define *f* as a dependent function of type $\prod_{x:A} B(x)$ and apply it to a term *a* of type *A*. The result is that we obtain a term *f*(*a*) of type *B*(*a*) (which can also be written as *f*(*a*) : *B*(*a*)). We further provide a more relatable example of a dependent product type as follows:
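A minimal Lean 4 sketch (hypothetical names) makes this concrete: the family `B` below assigns a different codomain to each term of the domain, so the result type of `f` depends on its input:

```lean
-- B is a family of types indexed by Bool: B true is Nat, B false is String.
def B : Bool → Type
  | true  => Nat
  | false => String

-- f is a dependent function of type (x : Bool) → B x; its codomain
-- B x varies with the input term x.
def f : (x : Bool) → B x
  | true  => (42 : Nat)
  | false => "forty-two"

#check f true   -- f true : B true
#check f false  -- f false : B false
```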


Hence, a dependent product type will map to a different codomain depending on the input term.

#### **2.3 Propositions as types**

Earlier on, we briefly mentioned the correspondence referred to as *propositions as types*. To validate the truth of a proposition, the corresponding type needs to be inhabited by some term, and this term is the *evidence* (or *witness*) for the proposition. Generally, the evidence will not be constructed explicitly but rather translated from a proof into a term of a type, and in this sense it feels like classical set theoretic reasoning. However, a proposition in type theory goes beyond being true or false: it is the collection of all possible evidence of the proposition's truth. This mirrors much of our real-world work scenarios, in the sense that there is often more than one correct (true) way of fulfilling a task.
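The correspondence can be made concrete in Lean 4. In the sketch below, evidence for a conjunction is literally a pair of proofs, and evidence for an implication is a function from proofs to proofs:

```lean
-- A ∧ B corresponds to a product type: its evidence is the pair ⟨ha, hb⟩.
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := ⟨ha, hb⟩

-- A → B corresponds to a function type: evidence of the implication
-- below maps evidence of A ∧ B to evidence of B ∧ A.
example (A B : Prop) : A ∧ B → B ∧ A := fun ⟨ha, hb⟩ => ⟨hb, ha⟩
```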

Furthermore, the correspondence between type theoretic and logic operations (**Table 1**) allows us to syntactically construct a type theoretical operation with the semantics of the corresponding logical operation. This is significant because with the ability to correspond between type theoretical and logical operations, the evidence (or proofs) are therefore first-class mathematical objects instead of being just a means for communicating mathematics.

Although it may not be immediately apparent, what we just discussed has impactful implications, mainly:



| Logic | Type theory |
|---|---|
| True | **1** (unit type) |
| False | **0** (empty type) |
| *A* and *B* | *A* × *B* |
| *A* or *B* | *A* + *B* |
| If *A* then *B* | *A* → *B* |
| Not *A* | *A* → **0** |
| For all *x*, *P*(*x*) | ∏<sub>*x*:*A*</sub> *P*(*x*) |
| There exists *x* such that *P*(*x*) | ∑<sub>*x*:*A*</sub> *P*(*x*) |

**Table 1.**
*Correspondence of logical and type theoretical operations.*

#### **2.4 Reasoning through structured types**

Type theory can be viewed as a mathematical formalisation of a programming language. Examples of such programming languages include Agda, Coq, Haskell and more. One notable usage is in proof assistants, which produced a verified proof of the four colour theorem [11] well over a century after the conjecture was posed in 1852. Another notable usage is in formal program verification, a software programming paradigm that ensures the resulting computer program has the rigour of a mathematical proof. This is achieved by specifying how a program should behave and proving that it works as specified, which is synonymous with the creation and proof of a mathematical model. Beyond guaranteeing the program's correctness, this has significant implications for cyber security in our highly connected digital society.

Though dependently typed functional programming is not mainstream at the time of writing, it is on the rise, and initiatives such as CompCert [12] are active in taking it forward. In concluding Section 2, we find the following quote [13] useful as a succinct summary of type theory:

*In type theory, unlike set theory, objects are classified using a primitive notion of type, similar to the data-types used in programming languages. These elaborately structured types can be used to express detailed specifications of the objects classified, giving rise to principles of reasoning about these objects.*
