Future PHRs covering the area of occupational health should be based on the following aspects:

- It allows the patient to introduce data (automatically or manually).
- It includes relevant health hazards and risks linked to the working conditions.
- It allows an exchange of information with the healthcare system (EHR).
- It includes an option to generate summaries to share information with other PHRs.

One of the advantages appears, for example, when a worker moves to another factory. If the new factory is enabled, the worker's PHR can be downloaded into the system of the new factory, providing a more complete record and ensuring continuity in the management of that particular worker's risks. Stored data can also be extracted for consultations and referrals whenever health professionals need it, naturally under the corresponding access control that prevents unauthorized sharing of the data. Data can also be easily anonymized for statistical and epidemiological studies aimed at detecting population-based health problems.

**5.3. Care plans, workflows and medical guidance**

The normalization of processes is increasingly present in today's society. The formal definitions of procedures commonly deployed in enterprises and factories are considered best practice for supporting management. This normalization not only predefines what is expected in organizational procedures, but also enables continuous monitoring of the processes, which is crucial for their correct management and continuous improvement. Moving these ideas to healthcare, Care Plans or Clinical Pathways [20] are protocols for the standardization of health processes. Nevertheless, standardizing these processes is more complicated than standardizing usual enterprise processes. First of all, health processes are very complex: the high number of variables involved in a care process, the presence of pluripathological patients and the wide range of treatments that can be applied to them require special expressivity in the process definition. In addition, the standardized processes should be designed by health professionals, which requires that the specification language be legible to those experts so that the process can be understood and repeated. Moreover, the specification of the process should be unambiguous. This is a key problem in the specification of protocols: a protocol is said to be ambiguous when more than one interpretation exists for the same specification. The presence of ambiguity in care plans is a serious problem, as it prevents the process from being deterministic.

Usually, care plans are described as large manuals, written in free text, that explain the care processes as a whole. These manuals are available through large medical libraries such as the Cochrane Library [21] or PubMed [22]. Although such manuals have the expressivity of natural language, they are often ambiguous, and their sheer volume makes them tedious to read. Other approaches are based on specific formal languages such as GLIF [23], which provide tools to avoid ambiguity and to ensure the completeness of the protocols defined. Nevertheless, the use of rule-based systems can hinder the creation of legible and controllable frameworks because of the high number of rules that must be taken into account. In enterprise environments, processes are usually defined as workflows. Workflows [24][12] are formal specifications of processes designed to be automated. The main advantage of using workflows is that they usually have a graphical interface that makes them easier to design and understand for non-programming experts such as doctors. In the care-plan domain there are works in the literature that address this problem using workflow technology [25][26]. The main drawback of workflows compared with other approaches, such as GLIF or traditional techniques, is that workflows are less expressive. Nevertheless, there are workflow approaches in the literature [26] that ensure high expressivity for defining very complex workflows, and even for the design of clinical pathways [27].


In addition to graphical design, workflows have further advantages that are useful for the design and deployment of care plans. Current workflow systems usually come with an engine able to automatically execute the graphically defined processes, which means that formally defined processes can be deployed directly through automatic deployment systems. In addition, Process Mining [27] technologies allow pattern-recognition techniques to be applied in support of the iterative design of care plans. Furthermore, thanks to the low grammatical complexity of some workflow approaches [27], a large number of algorithms and tools can be applied to ensure the completeness and non-ambiguity of processes, and to simulate them in order to detect design problems before deployment. The Figure below presents a basic specification of a care plan using a workflow-based approach.

**Figure 10.** Example of a Workflow-Based Clinical Pathway
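To make this concrete, the following is a minimal sketch of how a workflow-based care plan could be represented and executed as a directed graph of activities with guard conditions on the edges. It is not the notation used in the Figure, whose exact syntax is not given here; the `CarePlanEngine` class, the activity names and the guards are illustrative assumptions.

```python
# Minimal sketch of a workflow-based care plan: activities are nodes in a
# directed graph, and guard conditions on the edges decide the next step.
# Activity names and conditions are illustrative, not a real pathway.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class CarePlanEngine:
    # activity -> list of (guard, next_activity); guards inspect patient data
    transitions: Dict[str, List[Tuple[Callable[[dict], bool], str]]] = field(default_factory=dict)

    def add_step(self, activity: str, guard: Callable[[dict], bool], nxt: str) -> None:
        self.transitions.setdefault(activity, []).append((guard, nxt))

    def run(self, start: str, patient: dict) -> List[str]:
        """Execute the plan deterministically: the first matching guard wins."""
        path, current = [start], start
        while current in self.transitions:
            for guard, nxt in self.transitions[current]:
                if guard(patient):
                    path.append(nxt)
                    current = nxt
                    break
            else:
                break  # no guard matched: the plan ends here
        return path

plan = CarePlanEngine()
plan.add_step("Assessment", lambda p: p["glucose"] > 180, "Adjust medication")
plan.add_step("Assessment", lambda p: p["glucose"] <= 180, "Routine follow-up")
plan.add_step("Adjust medication", lambda p: True, "Schedule re-check")

print(plan.run("Assessment", {"glucose": 210}))
# ['Assessment', 'Adjust medication', 'Schedule re-check']
```

Because the guards are explicit and evaluated in a fixed order, two executions with the same patient data always follow the same path, which is exactly the determinism that ambiguous free-text protocols cannot guarantee.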

The continuous assessment of processes is critical for ensuring the efficient execution of enterprise procedures. Accordingly, more and more business intelligence systems are available to empower and support management, providing important information about the processes, filtered by means of Data Mining technologies. Usually these systems present static information, or the evolution of numeric data according to chosen parameters, which can indirectly help managers to detect inefficiencies or bottlenecks in the processes. In this scenario, an emerging technology is growing that enriches these business intelligence systems with a more direct view of process execution. This technology is called Process Mining (also known as Workflow Mining) [28]. Process Mining is a research field, based on the pattern-recognition paradigm, that uses the events or activities recorded in process logs to automatically infer a graphical workflow explaining the actual execution of the process. There are many process mining algorithms in the literature, either event-based, such as Alpha [16] or the Genetic Process Miner [29], or activity-based, such as the PALIA algorithm [27]. These algorithms are able to create workflows from samples and present them graphically, so that one knows exactly how processes behave in real deployments. By comparing the results of these algorithms with the designed processes, it is possible to see directly which paths are most frequently followed within the designed workflows, which differences and exceptions to the designed processes occur in their deployment, and so on. To this end, algorithms are available that allow the comparison between the designed workflows and their real implementation [27].
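As a rough illustration of the idea (a toy sketch, not the Alpha or PALIA algorithms themselves), the following derives a "directly-follows" relation from a set of event logs, which is the basic relation most process mining algorithms build their inferred workflow from; the traces are made up.

```python
# Toy process-mining sketch: derive the directly-follows relation from
# event logs (one list of activity names per case). Real algorithms such
# as Alpha or PALIA build on this relation; the traces below are made up.

from collections import Counter
from itertools import pairwise  # Python 3.10+

traces = [
    ["Admission", "Triage", "Lab test", "Discharge"],
    ["Admission", "Triage", "X-ray", "Lab test", "Discharge"],
    ["Admission", "Triage", "Lab test", "Discharge"],
]

follows = Counter()
for trace in traces:
    for a, b in pairwise(trace):
        follows[(a, b)] += 1

# The counts show the most usual paths and the exceptions, which can then
# be compared against the designed workflow.
for (a, b), n in follows.most_common():
    print(f"{a} -> {b}: {n} case(s)")
```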


### **6. Real-time risk management solutions**

The implementation of proactive risk management in a sensing enterprise demands that distributed reasoning capabilities be provided as a means for intelligence and analytics. FASyS should therefore not stop at the data collection and distribution level; on the contrary, it should be able to analyse the vast amount of available information in the presence of data unavailability, ambiguity, imprecision and error. Therefore, in order to detect, decide and act in real time on dynamic risk levels, various challenges need to be addressed in the areas of (a) industrial safety ontologies and reasoning engines, (b) real-time risk detection tools, (c) personalised decision support tools and (d) semantic solutions for services coordination.

#### **6.1. Industrial safety ontologies and reasoning engines**

To deal with safety in an environment that is quite heterogeneous in terms of information sources, it is necessary to create a formal data specification to be used within ontology structures that support risk management and context. The ontologies are used as a knowledge base for the reasoning engines that are responsible for detecting risks and for implementing the relevant actions to be taken in any situation [30].

**Figure 11.** Ontology-based reasoning

As depicted in the Figure above, the selected architecture is based on a distributed multi-ontology structure. In this way, there is an ontology for each risk and a set of context ontologies related to the different existing and influential entities in each cycle of risk management in the factory. The context ontology properties may be the source of properties of different risk ontologies; thus, a risk can be defined through different entities or actors of the company.

**Figure 12.** Mapping process results

The use of ontologies is related to the introduction of intelligence and reasoning into the information processing, and the relations established between concepts increase the relevance of such intelligence. Furthermore, these structures will be used for mapping data with the company's own information systems or with potential external information systems [31]. The ontologies are therefore considered the bridge between the heterogeneous information ecosystem and the actual services implementing the FASyS system logic.
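As a minimal illustration of this multi-ontology idea (the class names, properties, rule and thresholds are assumptions made for the sketch, not the actual FASyS ontologies), a risk concept can draw its properties from separate context entities, and a simple reasoning step can derive the risk level:

```python
# Minimal sketch of distributed ontology-style reasoning: one "risk"
# concept draws its properties from separate context entities (worker,
# machine), and a simple rule derives the risk level.
# All class names, properties and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class WorkerContext:          # context ontology: the worker
    noise_exposure_h: float   # hours of noise exposure today
    wears_ear_protection: bool

@dataclass
class MachineContext:         # context ontology: the machine
    noise_level_db: float

@dataclass
class NoiseRisk:              # risk ontology: properties sourced from contexts
    worker: WorkerContext
    machine: MachineContext

    def level(self) -> str:
        """A toy reasoning rule combining context properties."""
        if self.machine.noise_level_db > 85 and not self.worker.wears_ear_protection:
            return "HIGH"
        if self.machine.noise_level_db > 85 and self.worker.noise_exposure_h > 4:
            return "MEDIUM"
        return "LOW"

risk = NoiseRisk(WorkerContext(noise_exposure_h=5.0, wears_ear_protection=True),
                 MachineContext(noise_level_db=92.0))
print(risk.level())  # MEDIUM
```

The point of the design is visible even in this toy: the same context entities (worker, machine) can feed several different risk ontologies, so a risk is defined through the entities and actors of the company rather than in isolation.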

#### **6.2. Real-time risk detection tools**

Inherent to the concept of proactive prevention is the aim of the system to predict a particular risk by evaluating all the variables of the worker, the workplace and the environment. To be able to perform such an evaluation, the first requirement is to have tools able to process all the information in a complete, efficient and, at the same time, very lightweight way. Currently, risk management is focused on monitoring the proposed actions after an evaluation process that is performed periodically in the companies, e.g. yearly. With today's facilities there is no way to propose a 'real time' evaluation of the situation, because the safety manager cannot control at all times what is happening on the shop floor, the data of all the machines involved in the process and the information referring to the state of the workers who are moving around the factory; doing so manually is inconceivable.

However, in the context of the sensing enterprise, ubiquitous smart object deployment and the availability of suitable universal virtual object abstractions make such control possible. Thus, a new risk management cycle can be designed based on the capability to master big data stream technologies in a scalable manner. FASyS has designed a consolidated framework for risk detection. FASyS not only leverages effective data management but also consolidates an integrated approach to health (medical) and safety (security) risk management. The consolidated approach is shown below.

**Figure 13.** Continuous real-time risk management based on Complex Event Processing Technology

FASyS has proposed a complex event processing (CEP) [32] unit network that, through a number of pre-set patterns, each in the form of a complex formula compounded from an undetermined number of factors, feeds and evaluates the patterns continuously, creating alerts based on previously defined thresholds and triggering particular sets of actions concurrently or sequentially. These tools allow big data volumes to be processed with really low computing infrastructure requirements.

However, the FASyS data processing solutions go beyond big data volume. FASyS looks for flexible solutions that can create complex feedback and feedforward loops across CEP units, to ensure that the time variable, the event frequency or event-correlation workflows can be processed at high speed.
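A minimal sketch of such a CEP unit is given below: it evaluates a pre-set threshold pattern over a continuous event stream and emits alerts. The event fields, window size and threshold are assumptions for illustration, not the actual FASyS formulas.

```python
# Toy CEP unit: evaluates a pre-set pattern over a continuous event stream
# and raises an alert when a threshold is crossed. Event fields, the
# window size and the threshold are illustrative only.

from collections import deque

class CepUnit:
    def __init__(self, field: str, threshold: float, window: int = 5):
        self.field = field
        self.threshold = threshold
        self.values = deque(maxlen=window)   # sliding window keeps memory low

    def feed(self, event: dict) -> str | None:
        """Consume one event; return an alert string if the pattern fires."""
        self.values.append(event[self.field])
        avg = sum(self.values) / len(self.values)
        if avg > self.threshold:
            return f"ALERT: mean {self.field} = {avg:.1f} exceeds {self.threshold}"
        return None

unit = CepUnit(field="vibration_mm_s", threshold=7.0, window=3)
for evt in [{"vibration_mm_s": v} for v in (5.0, 6.5, 8.0, 9.5, 10.0)]:
    alert = unit.feed(evt)
    if alert:
        print(alert)
```

Several such units can be chained so that the alerts of one unit become the input events of another; this is one way to realise the feedback and feedforward loops mentioned above while keeping each unit's computing requirements low.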

#### **6.3. Personalised decision support tools**

As has become apparent from the previous sections, FASyS provides the tools and models for processing as much information as possible in as little time as possible, so that the enterprise safety and health manager can work with relevant information to make informed decisions. The aim of the FASyS personalised decision support tools is not just to warn about and make apparent a particular risk level, but also to ease the decision process on the basis of strong knowledge support. The FASyS Monitoring and Control Human Machine Interface (HMI) has therefore been designed to provide highly visual interfaces about risk levels. Moreover, the system also suggests the most suitable procedures to be applied when a risk situation is detected, so that the reaction time can be hugely reduced and the user can send a highly effective execution action plan immediately.

The decision support system works with risk patterns that require human interaction, either to provide additional information or to select the preferred option from a multiple selection.
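By way of illustration only (the risk patterns, procedure names and their mapping are invented for this sketch; the actual FASyS procedures are not specified here), such a suggestion step can be reduced to mapping a detected risk pattern to ranked candidate procedures and asking the manager to confirm one:

```python
# Toy decision-support step: map a detected risk pattern to ranked
# candidate procedures and let the safety and health manager pick one.
# The patterns and procedure names are invented for the sketch.

SUGGESTIONS = {
    "noise/HIGH": ["Stop machine M-3", "Issue ear protection", "Rotate worker"],
    "noise/MEDIUM": ["Issue ear protection", "Schedule audiometry"],
}

def suggest(pattern: str) -> list[str]:
    """Return candidate procedures, best-ranked first."""
    return SUGGESTIONS.get(pattern, ["Escalate to manual assessment"])

def decide(pattern: str, choice: int = 0) -> str:
    """The human selects one option (here: by index) before actuation."""
    options = suggest(pattern)
    return options[choice]

print(suggest("noise/HIGH"))      # ranked options shown in the HMI
print(decide("noise/HIGH", 1))    # manager confirms 'Issue ear protection'
```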

#### **6.4. Semantic solutions for services coordination**

In the context of the sensing enterprise, FASyS has to deal not only with the detection of risks but also has to support the actuation and deployment of the preventive actions selected by the safety and health manager through the personalised decision support tools. This implies that FASyS has envisaged a service-oriented scenario, where the factory is populated by a large number of services that exchange messages and perform
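Although the paragraph above is cut short in the source, the service-oriented idea it describes can be sketched as services coordinating by exchanging messages over a shared bus. The broker below is a toy in-process stand-in, and the topic names and payloads are invented:

```python
# Toy publish/subscribe bus: factory services coordinate by exchanging
# messages on named topics. Topic names and payloads are invented.

from collections import defaultdict
from typing import Callable

class Bus:
    def __init__(self):
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
# An actuation service reacts to the procedure confirmed by the manager.
bus.subscribe("procedure/confirmed",
              lambda m: print(f"Actuating: {m['procedure']} at {m['location']}"))
bus.publish("procedure/confirmed",
            {"procedure": "Issue ear protection", "location": "line 2"})
```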

