### **6. Designing situation-aware decision support systems**

This section outlines the design process (**Figure 4**). The process encourages the specification of declarative abstract models that describe the situation-aware user interface [2].

To export these models to the runtime, their aggregate can be serialized. To evaluate the outcome of the models, the corresponding UI can be generated as a prototype and checked for responsiveness. Based on the prototype, adjustments can be made to the models during the modeling process, for example to improve the appearance of the UI or the way changes in the situation affect it.

*Situation-based Task Model*: First, a task model is defined that specifies the tasks users and applications can perform as they interact with the system. Since we want to create situation-aware UIs, tasks often depend on the current situation; the task model therefore defines activities for different situations. In this way, the designer specifies different tasks for different situations.
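
As a minimal illustration (the class, task and situation names below are our own and not taken from [2]), a situation-based task model can be represented as a set of tasks, each tagged with the situations in which it is enabled:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A task that the user or the application can perform."""
    name: str
    # Names of the situations in which this task is available.
    situations: set = field(default_factory=set)

@dataclass
class TaskModel:
    tasks: list = field(default_factory=list)

    def tasks_for(self, situation):
        """Return the tasks enabled in the given situation."""
        return [t for t in self.tasks if situation in t.situations]

# Example: the same task model enables different tasks per situation.
model = TaskModel(tasks=[
    Task("review-alerts", {"routine", "emergency"}),
    Task("acknowledge-alarm", {"emergency"}),
    Task("browse-reports", {"routine"}),
])
print([t.name for t in model.tasks_for("emergency")])
# -> ['review-alerts', 'acknowledge-alarm']
```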

*Input Model*: Once the task model is defined, the designer needs to identify which kinds of input affect the interaction, i.e. the tasks. This is achieved by choosing input collection objects (Perception Objects, or POs). These objects can be aggregated by Aggregation Objects (AOs) and described by Interpretation Objects (IOs). The designer does this by binding AOs to POs and choosing, from a set of predefined interpretation rules, how the information is to be interpreted. The IOs represent the interpreted data at the comprehension layer. When the input model is defined, the designer has to connect the IOs to task model nodes (an inter-model connection). The designer can thus denote which activities can be performed in which situation.
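
A minimal sketch of this PO/AO/IO pipeline, assuming a simple mean as the aggregation and a threshold rule as the interpretation (all names and values are illustrative, not prescribed by [2]):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PerceptionObject:
    """PO: collects raw input from a single source (sensor, device, ...)."""
    name: str
    read: Callable[[], float]  # stands in for a real input driver

@dataclass
class AggregationObject:
    """AO: aggregates the readings of the POs bound to it."""
    sources: List[PerceptionObject]

    def value(self) -> float:
        readings = [po.read() for po in self.sources]
        return sum(readings) / len(readings)  # simple mean aggregation

@dataclass
class InterpretationObject:
    """IO: applies a predefined interpretation rule to an AO's value."""
    source: AggregationObject
    rule: Callable[[float], str]
    # Task-model nodes this IO is linked to (the inter-model connection).
    linked_tasks: List[str] = field(default_factory=list)

    def interpret(self) -> str:
        return self.rule(self.source.value())

# Illustrative binding: a microphone PO, a mean AO and a threshold rule.
mic = PerceptionObject("microphone-level", read=lambda: 72.0)
ao = AggregationObject(sources=[mic])
io = InterpretationObject(ao, rule=lambda v: "noisy" if v > 60 else "quiet",
                          linked_tasks=["acknowledge-alarm"])
print(io.interpret())  # -> "noisy"
```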

*Situation-Specific Dialogue Models*: Next, the tool can automatically generate a dialogue model from the task model for each situation. Inter-model links are then inserted automatically between the states of the dialogue model and the tasks that are enabled in each individual state. Finally, states of the different dialogue models are connected to each other to denote which situation changes trigger a transition between them.
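
One way to sketch such situation-specific dialogue models is as named states carrying their enabled tasks and their situation-change links (all state, task and event names here are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DialogueState:
    name: str
    # Inter-model links: tasks from the task model enabled in this state.
    enabled_tasks: List[str]
    # Links between dialogue models: which situation change leads to
    # which state of another (situation-specific) dialogue model.
    situation_transitions: Dict[str, str] = field(default_factory=dict)

# One dialogue model per situation; all names here are made up.
routine = DialogueState(
    "routine-overview",
    enabled_tasks=["browse-reports", "review-alerts"],
    situation_transitions={"alarm-raised": "emergency-triage"},
)
emergency = DialogueState(
    "emergency-triage",
    enabled_tasks=["acknowledge-alarm", "review-alerts"],
    situation_transitions={"alarm-cleared": "routine-overview"},
)

states = {s.name: s for s in (routine, emergency)}
current = routine
# A situation change moves the dialogue to a state of another model.
current = states[current.situation_transitions["alarm-raised"]]
print(current.name, current.enabled_tasks)
```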


#### **Figure 4.**

*Situation-aware Design Process for User Interface.*

*Presentation Model*: Designers need to compose abstract UI components that convey to the user how the interaction should be presented, and connect these components to the relevant tasks for each node of the presentation model. To organize presentation components for layout purposes, the presentation model nodes can be arranged hierarchically. There are many abstract UI components for the designer to choose from, such as static, data, option, navigation control, hierarchy, and custom widget components.
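
A minimal sketch of such a presentation model, assuming a simple tree of the abstract component types listed above (the task names are carried over from the earlier illustrative sketches):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ComponentKind(Enum):
    """The abstract UI component types named in the text."""
    STATIC = "static"
    DATA = "data"
    OPTION = "option"
    NAVIGATION_CONTROL = "navigation control"
    HIERARCHY = "hierarchy"
    CUSTOM_WIDGET = "custom widget"

@dataclass
class PresentationNode:
    kind: ComponentKind
    task: Optional[str] = None  # link to the task this component presents
    children: List["PresentationNode"] = field(default_factory=list)

# A small hierarchy: a container grouping a label and an option widget.
root = PresentationNode(ComponentKind.HIERARCHY, children=[
    PresentationNode(ComponentKind.STATIC, task="review-alerts"),
    PresentationNode(ComponentKind.OPTION, task="acknowledge-alarm"),
])
```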

*Situation-aware Interface Model*: The situation-aware interface model is obtained by aggregating all of the above models.
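
A minimal sketch of this aggregation and of the serialization step mentioned at the start of this section (the field names and JSON layout are our assumption, not a format prescribed by [2]):

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class InterfaceModel:
    """Aggregate of the declarative models, ready for export."""
    task_model: dict = field(default_factory=dict)
    input_model: dict = field(default_factory=dict)
    dialogue_models: dict = field(default_factory=dict)
    presentation_model: dict = field(default_factory=dict)

# Serializing the aggregate so the runtime (or a prototype generator)
# can load it, as described at the start of this section.
aggregate = InterfaceModel(
    task_model={"review-alerts": ["routine", "emergency"]},
    dialogue_models={"emergency": {"initial": "emergency-triage"}},
)
with open("interface_model.json", "w") as fh:
    json.dump(asdict(aggregate), fh, indent=2)
```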

*Usability Evaluations*: Finally, usability tests are conducted to assess and improve the performance of the graphical interface generated from the models.

### **7. Current challenges and envisioning the future**

Today, the development of intelligent user interfaces poses challenges comparable to those encountered in Artificial Intelligence. It is well known in human-computer interaction that the ideal user interface is one that is not perceived at all. For now, however, an intermediary is needed between the user's intentions and the realization of those intentions. No matter how elegant a user interface is, it still imposes a mental workload on the user.

The ideal machine, according to A. van Dam and T. Furness [42], would be a perfect butler who knows my setting, my preferences and my character and who, without needing explicit orders, subtly anticipates my needs. When the user communicates with this butler, the interaction would take place mostly through gestures, facial expressions [43, 44] and other means of human expression, such as sketching drafts. From its very beginning, 50 years ago, a goal of Artificial Intelligence has been to provide artifacts that facilitate learning, development or peer-to-peer interaction with an individual.

For this vision to actually come true, agents would need to interpret gestures and emotions by applying computer vision techniques, and to perceive and understand natural language through speech recognition. Artificial Intelligence and knowledge-based technologies would form the inference engine for such agents. When these agents speak with the user, they would do so in a friendly way, potentially modulating their voice according to the user's mood at a given point in time.

The challenges of this technology can be divided into three areas: input, inference and output; more precisely, the analysis of human language expressions, the representation and management of knowledge about the world and, finally, the perception of human beings as social beings.

In [45], the authors proposed a virtual companion-based interface to simplify the mobile interface for the elderly. This interface shows menus through a virtual agent according to the situation and the user's request, and animates information in a three-dimensional layout. To collect user input, the authors used the speech recognition technologies of smartphones and wearable devices. Virtual avatars were chosen to visualize suitable actions based on user input. The authors predict that this mobile interface might be the next user interface for smartphones.

In treatment and healing processes, IUIs are often used for tasks such as determining an individual's functional state. The performance of functional state estimation depends on combining accelerometer and EEG data [46].

### **8. Conclusions**

Computer products today have the potential to inform us, to entertain us or to make our lives easier, but when the user interface provided is limited or difficult to use, they can also slow down our work. This chapter has shown how various approaches from different fields, including Artificial Intelligence, Software Engineering and Human-Computer Interaction, have been combined over the years to help establish a successful user interaction experience and increase the overall usability of a device. In [47] the authors explore the community's evolving tacit viewpoint on intelligence characteristics in intelligent user interfaces and provide suggestions for expressing one's own perception of intelligence more clearly.

In the future, user interfaces may be as natural as listening to a human. We should also note that the main aim of Human-Computer Interaction is to reduce the mental distance between users and their activities, dimming the interface until it becomes invisible. Nonetheless, there is still a long way to go before we can provide users with such invisible interfaces. Computer-based automated interpretation of human sentiment and affect is widely expected to play a significant role in future Intelligent User Interfaces, as it carries the potential of supplying immersive applications with emotional intelligence [48].


Another key finding of the study is that adding context to input in situation-aware systems results in automatic adjustment of the situation awareness and the action list, making the UI adapt to the operator's individual needs. In addition, adaptation should only take place when the change in context and environment is significant enough to warrant a transition between two possible user interface states. Adapting the interface to the actual situation, as demonstrated in this prototype, and providing reusable tasks with a reduced number of commands, clicks and choices decreases the operator's cognitive burden and hence encourages interaction [49].
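
As a rough sketch of this idea, adaptation could be gated on a measure of how much the context has changed; the scoring function, the context attributes and the threshold below are purely illustrative and not taken from [49]:

```python
def should_adapt(previous, current, threshold=0.5):
    """Trigger a UI state transition only when the context change is
    significant enough; the scoring and threshold are illustrative."""
    keys = set(previous) | set(current)
    changed = [k for k in keys if previous.get(k) != current.get(k)]
    score = len(changed) / max(len(keys), 1)  # fraction of attributes changed
    return score >= threshold

before = {"location": "ward", "noise": "quiet", "alarm": "off"}
after = {"location": "ward", "noise": "noisy", "alarm": "on"}
print(should_adapt(before, after))  # 2 of 3 attributes changed -> True
```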
