**A Future for Integrated Diagnostic Helping**

Mathieu Thevenin and Anthony Kolar *CEA, LIST, Embedded Computing Laboratory* 

*France* 

#### **1. Introduction**

Medical systems used for exploration or diagnostic helping impose high applicative constraints, such as real-time image acquisition and display. This is especially the case when they are used in the surgical room, where high reactivity is required from operators. A large computing capacity is needed in order to obtain valuable results, and integrators mainly prefer general-purpose architectures such as workstations (Gomes, 2011), since they have to cope with manufacturing cost and setup simplicity. Because general-purpose devices need a large amount of space, the main part of the processing is moved from the handheld diagnostic tool to an external unit; this is the case of endoscopic devices, for example. Today, dedicated rooms are usually reserved for this purpose in many hospitals, and the associated external computers used by the diagnostic system are cumbersome as well as energy-consuming. These issues make it difficult to use such systems efficiently in a limited space: they restrain the movements of the medical staff and complicate deployment in the field for military or humanitarian operations. It therefore seems logical to integrate the maximum computing capacity into the diagnostic helping devices themselves, so as to make them completely handheld.

A large part of the computing requirements of these systems is devoted to image processing. The processing can be quite simple, like image reconstruction and enhancement, feature detection or 3D reconstruction. Today, a large part of this processing is embedded inside handheld consumer devices such as digital cameras or advanced driving assistance systems (ADAS). By analysing both medical and consumer application systems, one notices that they rely on similar algorithmic approaches. Most integration constraints are also similar when it comes to miniaturizing these consumer devices: they mainly concern the chip silicon area, power consumption and computing capacity. For example, a digital video sensor and image processor integrated into a cell phone cannot exceed about half a watt of power consumption for a silicon area of less than a dozen square millimetres. This is also the case for one of the most integrated medical diagnostic devices, the endocapsule. Its form factor (Harada, 2008) limits component size, while its autonomy is driven by energy efficiency: the whole device may not exceed one watt of power consumption, of which about half is devoted to the computation for diagnosis, mostly based on image processing. However, this budget depends on the device features, such as communication systems and mechanical elements that may be used for mobility or biopsy.
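The power figures above translate directly into autonomy limits. The sketch below puts rough numbers on this; the battery capacity and the low-power average draw are illustrative assumptions, while the 1 W device ceiling and the roughly one-half computing share come from the text.

```python
# Back-of-the-envelope endocapsule energy budget.
# BATTERY_CAPACITY_MWH is an assumed figure (small silver-oxide button
# cells); the 1 W ceiling and ~50% computing share are from the text.

BATTERY_CAPACITY_MWH = 90.0   # assumed usable battery energy, mWh
DEVICE_BUDGET_MW = 1000.0     # whole-device power ceiling (~1 W)
COMPUTE_SHARE = 0.5           # share devoted to image processing

compute_budget_mw = DEVICE_BUDGET_MW * COMPUTE_SHARE  # 500 mW

def runtime_hours(avg_power_mw: float,
                  capacity_mwh: float = BATTERY_CAPACITY_MWH) -> float:
    """Ideal runtime in hours for a given average power draw."""
    return capacity_mwh / avg_power_mw

# Running flat-out at the 1 W ceiling drains the cells far too fast
# for a gastrointestinal transit of several hours:
print(f"{runtime_hours(DEVICE_BUDGET_MW):.2f} h at 1 W")  # 0.09 h
print(f"{runtime_hours(11.0):.1f} h at 11 mW average")    # ~8.2 h
```

The point of the exercise is that the 1 W ceiling is a peak budget, not a sustainable average: duty-cycling the sensor, processing and radio is what makes multi-hour autonomy possible.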

Three main aspects of such devices can be distinguished:

1. The video processing: In the case of the endocapsule (Karargyris, 2010), the main goal of video processing is to analyse a video sequence in order to find features such as bleeding, polyps or tumours. This kind of diagnosis helping is usually done in two steps: first, a camera-equipped device grabs the images to be diagnosed; these images are then transmitted through a wireless connection to a workstation that analyses them in off-line processing. Figure 2.1 depicts the PillCam by GivenImaging; the Endocam by Olympus can also be cited.

Fig. 2.1. PillCam by GivenImaging

2. The mechanical systems for autonomous devices: Some research focuses on the integration of mechanical devices into endocapsules in order to give them the ability to perform surgery using micro-instrumentation, such as biopsy. An example of such an endocapsule is "Miro" (Kim et al., 2007), developed under Korea's Frontier 21 project, as shown in Figure 2.2. The "Scuola Superiore Sant'Anna" (Quirini, 2007) also tries to integrate small mechanical legs into a video-capsule in order to give the practitioner the ability to move freely in the intestinal system.

Fig. 2.2. Principle of the endocapsule "Miro" and prototypes of a mobile endo-capsule

3. The communication and transfer protocol: Communication inside the human body is defined by the IEEE 802.15 standard for in-vivo electronic devices; its frequency is 403 MHz. Antennas for this band are small, and only a low emitting power is required due to the limited loss of the signal in this environment. Moreover, this frequency should not interfere with usual communication devices. Energy efficiency is a critical point for the battery life of an integrated and autonomous system; for this reason, many researchers are looking for an optimal way to communicate between the device and the external world. There are three aspects to this research: the first one focuses on silicon device technologies and materials; the second one focuses on defining the most efficient hardware architecture for communication.

Integrators also demand versatility, in order to design unique products that can be used for different targets. For example, endoscopic exploration of the larynx and intestinal or lung exploration do not use the same devices, but these applications are all based on similar image processing with minor variations. Moreover, these systems should be updatable to follow scientific developments.
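A quick calculation illustrates the antenna-size point for the 403 MHz in-body band mentioned above. In air a quarter-wave element at this frequency would be far too long for a capsule, but body tissue has a high relative permittivity, which shortens the wavelength considerably; the permittivity value below is an assumed typical figure for muscle tissue near 400 MHz, not a value from the text.

```python
import math

C = 299_792_458.0     # speed of light, m/s
F = 403e6             # in-body band frequency from the text, Hz
EPS_R_TISSUE = 57.0   # assumed relative permittivity of muscle ~400 MHz

wavelength_air = C / F                                    # ~0.744 m
wavelength_tissue = wavelength_air / math.sqrt(EPS_R_TISSUE)

quarter_wave_mm = wavelength_tissue / 4 * 1000            # ~24.6 mm
print(f"lambda/4 in air:    {wavelength_air / 4 * 1000:.0f} mm")
print(f"lambda/4 in tissue: {quarter_wave_mm:.1f} mm")
```

Under these assumptions a quarter-wave length in tissue is on the order of a capsule's body length, which is one reason this band is workable for electrically small implant antennas.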

These requirements are also valid for large-market devices such as cell phones and cameras. For example, general-purpose or application-specific embedded processors, such as ARM microprocessors and Texas Instruments Digital Signal Processors (DSP), are widely used in transportation, photonics, communications or entertainment (Texas Instruments, 2006). These markets drive both academic and industrial research. The background knowledge is present inside laboratories; however, its transfer to medical applications is not yet completely industrially ready.

This chapter provides clues for transferring consumer computing architecture approaches to the benefit of medical applications. The goal is to obtain fully integrated devices, from diagnostic helping to autonomous lab-on-chip, while taking into account the specific constraints of the medical domain.

This expertise is structured as follows: the first part analyses vision-based medical applications in order to extract the essential processing blocks and to show the similarities between consumer and medical vision-based applications. The second part is devoted to determining the elementary operators that are most needed in both domains. The computing capacities required by these operators and applications are compared to state-of-the-art architectures in order to define an efficient algorithm-architecture adequation. Finally, this part demonstrates that it is possible to use highly constrained computing architectures, designed for consumer handheld devices, in the medical domain. This is based on the example of a high-definition (HD) video processing architecture designed to be integrated into smartphones or other highly embedded components.
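To give a feel for the kind of algorithm-architecture comparison the last part relies on, the sketch below estimates the throughput an embedded HD video chain demands and relates it to the roughly half-watt image-processing budget discussed earlier. The ops-per-pixel figure is an illustrative assumption, not a measured workload.

```python
# Rough sizing of an embedded HD video processing load.
# OPS_PER_PIXEL is an assumed cost for a modest enhancement chain;
# the 0.5 W budget is the image-processing share from the text.

WIDTH, HEIGHT, FPS = 1920, 1080, 30   # HD video stream
OPS_PER_PIXEL = 100                   # assumed operations per pixel

pixel_rate = WIDTH * HEIGHT * FPS     # pixels per second (~62.2 M)
gops = pixel_rate * OPS_PER_PIXEL / 1e9

POWER_BUDGET_W = 0.5
print(f"{gops:.1f} GOPS -> {gops / POWER_BUDGET_W:.1f} GOPS/W required")
# prints "6.2 GOPS -> 12.4 GOPS/W required"
```

Even with this modest per-pixel cost, the resulting GOPS/W figure is out of reach of general-purpose workstation processors at that power level, which is precisely why dedicated, highly constrained embedded architectures are of interest.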

This expertise paves the way for the industrialisation of integrated autonomous diagnostic helping devices by showing the feasibility of such systems. Their future use would also free the medical staff from many of the logistical constraints due to the deployment of today's cumbersome systems.
