**3. Algorithms for wound image analysis: wound size and colour**

Given the user trial's key finding that wound photographs hold significant value, further work focussed on developing algorithms that would add intelligence to SmartWoundCare with respect to image analysis.

The objective of the image analysis work was to develop algorithms to determine the size of the wound in both relative and absolute terms, and to analyse the colour breakdown of a wound, all from an image of the wound taken by a smartphone or tablet camera. Further, this objective was to be carried out without any peripheral or ancillary devices. Such devices, as seen in related literature, might include templates or positioning boxes by which the user would help the patient to position themselves and the wound, or ultrasonic transducers and additional lenses for the mobile device. Carrying out the image analysis independent of any ancillary devices contrasts with work by other researchers who, for example, control the lighting and wound position with an image capture box when performing image analysis of diabetic foot ulcers [27].

The application represented a general objective applicable to other fields, in that the work was intended to produce non‐contact measurements of irregularly‐shaped objects from images taken with a smartphone or tablet camera, where the target range for error is <10% for images taken from distances of up to 30 cm. Relying only on the internal smartphone sensors to generate high‐accuracy measurements brings novelty to the work and specifically to the field of wound management.

Each new smartphone and tablet that comes to market generally has a higher‐resolution camera than the previous version of the device, and these progressions are often evident in short to medium timeframes of 6–18 months. Nonetheless, consumers are still hesitant to rely on on‐board cameras for any application that requires high precision and accuracy. In prior work, the state of image analysis from photographs was reviewed [28]. At first instance, several mobile apps were identified which claim to measure objects and distances in the 0.5–20 m range [29, 30], as well as ultrasonic transducers for measurements in the 1–6 cm range [31] and infrared distance measurements in the 4–30 cm range [32]. Depth‐of‐field cameras were also considered [33–35]. That early research also explored one method for determining distance from the camera to the wound and two algorithms to determine the size of the wound. Although these methods are promising, the specifications for error were not met [28].

It is foreseeable that smartphones with dual‐lens cameras will enter the market within a timeframe of 6–24 months [36]. This development would create new and significant potential for high‐resolution images and subsequent analysis for accurate and precise characterization. The analysis techniques would build on existing work in other fields, such as stereoscopic cameras in manufacturing. Google's Project ARA, a collaborative effort to develop modular smartphone hardware, may also provide a future framework by which to include dual‐lens cameras in mobile devices.

#### **3.1. Overview**

One study compared a conventional measurement technique, in which the wound perimeter is traced on to a transparent film and the film is then laid over graph paper to count the number of squares, with measurements derived from digital images. The digital‐image method resulted in improved accuracy, lower inter‐observer variation, and improved ease of use. Because the film physically touches the patient's wound and can cause irritation, the digital photograph also had the advantage of being a non‐contact method. Another study explored the potential of telehealth, specifically videoconferencing, compared to in‐person assessment for pressure ulcer assessment. Both procedures led to very similar assessments of the stage of the wound. However, the telehealth approach led to an overestimate of wound size and volume when compared to in‐person assessment [26].


Three components of the image analysis work are outlined in the following sections. In the first component, referred to as Mask Image, the objective is to obtain the relative dimensions of an object in the image (in this case, a wound), in which the size determination is relative to the previous image of the same object. The second component, referred to as Camera Calibration, reconstructs an image taken on an angle and references it back to a two‐dimensional (2D) plane, in this way facilitating a measurement of the absolute or actual size of the object in the image. The third algorithm determines the range of colours present in an image. The algorithm separates the image into three component colours by extracting components from the red‐green‐blue (RGB) format of the image, and by doing so, makes possible an inference of the wound stage.

The software framework (**Figure 6**), at a high level of abstraction, consists of modules including acquisition of the wound image, pre‐processing of the wound image, segmentation of the wound image, recognition of the wound type, and classification of the wound. In reference to the three major components of the analysis indicated previously, the Mask Image component lies within the image acquisition module. Grabcut (a segmentation method [37]) and the Camera Calibration component both lie within the segmentation module, and the colour analysis component lies within both the segmentation and the wound recognition modules.

**Figure 6.** Basic application model.

Although the wound photographs are taken with the cameras built into a mobile device (smartphone or tablet as per **Table 2**) or a webcam, all of the processing takes place on a computer. Computation times are generally in the order of seconds. Further work to have the processing take place on the mobile device itself is ongoing, and comes with the usual challenges of carrying out computation‐ and memory‐intensive processes on mobile devices.

Processing the photograph on a computer allows for both static and dynamic environments. In this case, a static environment denotes an environment where both the camera setup relative to the wound position is fixed (e.g. known, constant distance and angle, often with the use of staging devices) and the light source is stable. A dynamic environment refers to a mobile camera (i.e. smartphone or tablet) and/or the wound in a natural position at varying distances and angles to the camera and in varying lighting conditions.

With a series of photographs taken in a static environment, the Camera Calibration component, which corrects for angle by reconstructing an image in three‐dimensional (3D) space back to a 2D plane, only needs to be done once and the correction can be applied to the entire series of photographs. In a dynamic environment where distance and angle between the wound and the camera vary with each photograph, the Camera Calibration component needs to be done for each image.


**Table 2** summarizes the hardware and software specifications applied in this work.

| Hardware/software | Specifications |
|---|---|
| Nexus 4 (LG‐E960) | Krait quad‐core 1.5 GHz processor; display resolution 1280 × 768; camera resolution 8 MP (3264 × 2448); Adreno 320 GPU; Bluetooth 3.0 BLE; Wi‐Fi 802.11 a/b/g/n |
| Samsung Galaxy S4 | ARM Cortex‐A15 quad‐core 1.9 GHz processor; display resolution 1080 × 1920; 13+ megapixel camera; Bluetooth 4.0; 802.11 a/b/g/n |
| MacBook Pro | 2.6 GHz Intel Core i7; 8 GB 1600 MHz DDR3 memory; Intel Iris Pro 1024 MB graphics; OS X 10.9.5 (13F34) |
| Software | Android 4.2 (Jelly Bean); Android SDK; Android NDK r9d; OpenCV 2.4.9; OpenCV 3.0.0; Python 2.7.10; Numpy; Matplotlib; Matlab |

**Table 2.** Hardware and software specifications.

#### **3.2. Mask image for relative size**


The first two components of the image analysis work, Mask Image and Camera Calibration, are used to determine the relative size and the absolute size of a wound, respectively, from the wound photograph. **Figure 7** expands the first two modules of the basic software framework in **Figure 6**, specifically the image acquisition module and the image pre‐processing module. The Mask Image component is situated within these modules.

**Figure 7.** Image acquisition and pre‐processing flowchart.

Wounds are generally three‐dimensional, with volume below the skin surface. Wounds can also exhibit undermining, which refers to a wound that is larger at its base (below the skin) than the opening at the surface of the skin suggests, creating a cavity below the surface of the skin. Tunnelling is similar to undermining, but refers to wounds with channels (rather than cavities) below the skin surface.

As noted earlier, conventional methods to measure wound dimensions and/or area often use contact methods, in which adhesive strips or transparent films are laid around or on the wound, respectively, and wound edges are noted on the strips or films. The strips or films are then read directly for size or overlaid on to graph paper or rulers for measurement. The depth is generally measured with a cotton‐tipped applicator to the deepest part of the wound.

Two approaches in the literature to automatically determine the size of a wound include grid capture and scanner capture. Grid capture is a hybrid of conventional contact methods and digital image analysis. In this case, a transparent film with a marked grid is placed on the wound and the wound perimeter is traced on to the film. The film with the tracing on a known grid is then the basis from which the dimensions and area of the wound can be calculated with a software application [38]. This approach has the advantage of basing the calculation on a real tracing of the wound perimeter and a known grid, thus capturing the near‐real orientation of the wound. However, the disadvantage remains the potential for discomfort to the patient when the film rests on the wound.

In another approach, denoted scanner capture, a box with two internal mirrors placed at 45 degrees relative to the horizontal is constructed as a template, with openings for a mobile device and an LED light source [27]. The patient rests their foot in the box, and in this way, the setup maintains a constant distance between camera and wound and constant lighting conditions. While the computation remains intensive, these two constant conditions serve to simplify the image processing requirements. The disadvantages of this method are the reliance on ancillary staging devices and a setup that is impractical for certain areas of the body.

In this work, the objective of the Mask Image component is to obtain the comparative dimensions of an object in the image relative to a previous image of the same wound. An initial photograph is taken, from which a transparent digital 'mask' of the wound is created. The user then overlays or aligns this digital mask to the wound for the subsequent assessment and photograph (**Figures 8** and **9**). While most of the perimeter is expected to align between the mask image and the wound in its current state, one can reasonably anticipate that if the wound is either healing or deteriorating, portions of the perimeter between the digital mask and the wound in its current state will differ. The algorithm compares the digital mask to the current wound image, recognizing and aligning the wound perimeters and estimating the relative size difference. From this size difference, healing, deterioration, or no change is inferred. The result is given as a percentage change in the area of the most current image relative to the previous digital mask image.
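As a minimal sketch of the final step, the area comparison can be reduced to counting foreground pixels in two binary masks, assuming the photographs were aligned via the overlay so that the masks are directly comparable. The file names and the helper function below are illustrative, not part of the original implementation.

```python
# A minimal sketch of the relative-size comparison, assuming two binary
# wound masks (white wound on a black background) from aligned photographs.
import cv2
import numpy as np

def relative_area_change(mask_prev_path, mask_curr_path, threshold=127):
    """Return the percentage change in wound area between two mask images."""
    prev = cv2.imread(mask_prev_path, cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread(mask_curr_path, cv2.IMREAD_GRAYSCALE)

    # Binarize so that wound pixels are non-zero.
    _, prev_bin = cv2.threshold(prev, threshold, 255, cv2.THRESH_BINARY)
    _, curr_bin = cv2.threshold(curr, threshold, 255, cv2.THRESH_BINARY)

    area_prev = np.count_nonzero(prev_bin)
    area_curr = np.count_nonzero(curr_bin)

    # Positive: deterioration (growth); negative: healing (shrinkage).
    return 100.0 * (area_curr - area_prev) / area_prev

change = relative_area_change("mask_week1.png", "mask_week2.png")
print("Wound area change: %.1f%%" % change)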

The mask image or mask overlay essentially serves to provide a point of reference when aligning the wound for the current assessment with its previous condition. As such, the point of reference does not necessarily need to be the transparent mask overlay. A medical tattoo could also act as a point of reference. In this case, it would be either a temporary or permanent skin marker or pattern (e.g. three dots) close to the wound. This marker or pattern would be used to create a digital overlay, which would provide the point of reference when aligning the camera for all subsequent photographs.
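As a speculative sketch of how such a three‐dot marker might be used, the dot centroids can be detected in both the reference and the current photograph and an affine transform computed to align the two views. The detector settings, sorting heuristic, and file names are assumptions for illustration only.

```python
# A speculative sketch of the three-dot reference idea: detect the dot
# centres in both photographs and align the new image to the reference.
import cv2
import numpy as np

def dot_centroids(path):
    """Detect three dark circular markers and return their centres."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    detector = cv2.SimpleBlobDetector_create()  # default params find dark blobs
    pts = [kp.pt for kp in detector.detect(gray)]
    assert len(pts) == 3, "expected exactly three reference dots"
    # Sort consistently so that corresponding dots pair up across images.
    return np.float32(sorted(pts))

ref_pts = dot_centroids("wound_reference.jpg")
new_pts = dot_centroids("wound_current.jpg")

# Exactly three point pairs define an affine transform.
M = cv2.getAffineTransform(new_pts, ref_pts)
new_img = cv2.imread("wound_current.jpg")
aligned = cv2.warpAffine(new_img, M, (new_img.shape[1], new_img.shape[0]))
cv2.imwrite("wound_current_aligned.png", aligned)
```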

**Figure 8.** Creating a mask image from the wound.


**Figure 9.** Overlay of mask image to new wound.

The Mask Image component of the work provides the relative size of the wound from one assessment to the next. Users can choose to create one digital mask and compare all subsequent photographs to the initial digital mask; alternately, users can create a new digital mask at each wound assessment so that the wound size comparison is always to the most recent assessment. A combination of the two methods is also possible. The advantage of the method is the absence of direct contact with the wound, thus preventing patient discomfort. Another advantage is that no devices beyond the camera, and no props attached to the patient, are required. The error inherent in the approach is largely determined by the user's dexterity in aligning the digital mask over the current wound. A limitation of the method is that wound depth is not considered in the calculation. A further limitation is that the outcome is a relative size of the wound rather than an absolute size. When an absolute size of the wound is desired, the Camera Calibration component is implemented.

#### **3.3. Camera calibration for absolute size**

**Figure 10** shows the Camera Calibration component within the basic software framework outlined in **Figure 6**.

**Figure 10.** Size estimation with segmentation flowchart.

Grabcut, a segmentation method used to differentiate an object (in this case, a wound) in the foreground from its background (in this case, the surrounding skin or body part), is applied in this module. Grabcut accomplishes this by using colour information to compare adjacent pixels and by using edge or contrast information to identify an object in an image. Further, Grabcut uses progressive iteration, running the process multiple times to optimize the results. The result is a segmented image (the foreground object, in this case, a wound). This segmented image is then used in the Camera Calibration component as well as in the third component, colour analysis. While other segmentation algorithms are available, Grabcut is considered an efficient algorithm and has the benefit of minimal user interaction [37], which was a requirement in this work. An example of Grabcut applied to wound photographs is found at https://youtu.be/Iyvochswrws.
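A minimal sketch of this step using OpenCV's GrabCut implementation follows; the only user interaction is a rough rectangle around the wound. The file name and rectangle coordinates are illustrative.

```python
# A minimal GrabCut sketch: segment the wound given a rough bounding box.
import cv2
import numpy as np

img = cv2.imread("wound_photo.jpg")
mask = np.zeros(img.shape[:2], np.uint8)

# Temporary arrays used internally by the algorithm.
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough bounding box of the wound (x, y, width, height) -- the only user input.
rect = (50, 50, 300, 250)

# Five progressive iterations; GrabCut refines the foreground estimate each pass.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite or probable foreground form the segmented wound.
fg_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                   255, 0).astype("uint8")
segmented = cv2.bitwise_and(img, img, mask=fg_mask)
cv2.imwrite("wound_segmented.png", segmented)
```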


The purpose of the Camera Calibration component is to take an image photographed on an angle and reconstruct or reference it back to a two‐dimensional plane. Essentially, the Camera Calibration module computationally achieves one of the objectives of the scanner capture box [27] in terms of aligning the wound to known and fixed positions relative to the camera. The Camera Calibration component uses a known pattern with 13 or more fixed reference points, and applies the Tsai2D algorithm [39, 40] to obtain a reconstructed image of the wound. Since the distances between the points are known from the calibration model, the view angle can be calculated and the image can be reconstructed on a 2D plane. From here, the size of the wound can be calculated. Like the Mask Image component, the Camera Calibration component does not identify the depth or volume of wounds. This is a known limitation, given that surface size and area alone are an incomplete descriptor of wounds.

A chessboard pattern was chosen, and was found to be effective for photographs taken in static and dynamic conditions. Similar to the conventional approach of placing an adhesive ruler near the wound to measure size, the chessboard pattern is placed close to the wound and then photographed. The inherent assumption is that the wound and the pattern are in the same two‐dimensional plane. Given that the chessboard pattern is known and fixed, the planar orientation of the pattern in the photograph can be calculated and the image corrected accordingly. This approach has been shown to be effective in calculating the dimensions of a soccer field, in which a top (plan) view of the field was reconstructed from images taken on an angle using Camera Calibration [41]. In this work, the chessboard pattern is used for calibration to obtain the extrinsic matrix of the wound. The extrinsic matrix provides information on the camera location and the view direction, allowing for translation and rotation to the two‐dimensional plane.
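The sketch below illustrates the idea with OpenCV's chessboard detector and a homography, used here as a stand‐in for the Tsai2D reconstruction step described above. The pattern dimensions, square size, rectified resolution, and file name are assumptions, and the corner ordering is assumed row‐major as typically returned by the detector.

```python
# A hedged sketch of plane rectification from a chessboard placed beside the
# wound, assuming the wound and pattern lie in the same 2D plane.
import cv2
import numpy as np

PATTERN = (9, 6)       # inner corners per row, per column (assumed)
SQUARE_MM = 10.0       # physical size of one chessboard square (assumed)
PX_PER_SQUARE = 40     # resolution of the rectified (fronto-parallel) view

img = cv2.imread("wound_with_chessboard.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(gray, PATTERN)
assert found, "Chessboard pattern not detected"

# Where each detected corner should land in the rectified view (row-major).
dst = np.float32([[x * PX_PER_SQUARE, y * PX_PER_SQUARE]
                  for y in range(PATTERN[1])
                  for x in range(PATTERN[0])]).reshape(-1, 1, 2)

# Homography mapping the angled view onto the 2D plane of the pattern
# (and, by the in-plane assumption, of the wound).
H, _ = cv2.findHomography(corners, dst)
# Output canvas kept at the original size for simplicity.
rectified = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))

# In the rectified view, one pixel corresponds to a known physical length.
mm_per_px = SQUARE_MM / PX_PER_SQUARE
wound_area_px = 5200   # e.g. non-zero pixel count of a segmented wound mask
print("Absolute area: %.1f mm^2" % (wound_area_px * mm_per_px ** 2))
```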

**Figure 11** demonstrates the Camera Calibration sequence at a high level. The red lines denote the objects which were detected, i.e. the dark squares. The algorithm finds the centre of each square and applies the Tsai2D algorithm to process the coordinates. The blue lines show the scanning sequence. The green lines are the re‐projected lines from the model points to the real‐world coordinates, as an indication of the success of the Camera Calibration algorithm. If the green lines were curved or otherwise irregular, this would indicate that the projection back to a two‐dimensional plane was not successful.

**Figure 12** shows the Camera Calibration component applied to a wound. The wound was photographed at an angle and then re‐projected on a two‐dimensional plane at 90 degrees to the viewer.

**Figure 11.** Original and re‐projected planes.

**Figure 12.** Wound image before (left) and after (right) reconstruction.

The Mask Image component yields the relative size of the wound, and the Camera Calibration component yields a corrected orientation and the absolute size of the wound; taken together, they allow for more accurate calculations. When applied to a Canadian dollar coin (26.5 mm diameter with eleven edges), the actual size was determined with an error of <1%.

A demonstration of the Camera Calibration module is available at https://youtu.be/OiJk3nMymSE.

#### **3.4. Colour analysis**

The third algorithm focuses on colour analysis of the wound. It determines the range of colours present in an image, separating the image into three component colours by extracting components from the red‐green‐blue (RGB) format of the image and presenting them in a histogram. These data can then be fed into an expert system to infer the stage of the wound. **Figure 13** shows the Colour Analysis component within the software framework outlined in **Figure 6**.

**Figure 13.** Colour analysis flowchart.


Pressure ulcers are assessed as one of six stages (stage I through IV, Suspected Deep Tissue Injury, and Unstageable) [42]. Because the current work is unable to calculate the depth of the wound, the last two categories (both of which are wounds with some depth below the skin surface) have been combined as Unstageable. In addition to wound depth, other factors that determine the stage of a wound include skin condition (intact or broken), tissue loss, the colour of the skin, tissue, and wound bed, and the presence and nature of discharge.

To analyse the colour of a wound, the algorithm uses an RGB format of the image and determines the presence of the three component colours. Each component colour has a defined range, although the user can adjust that range or calibrate the range for variable lighting conditions.

While segmentation is not mandatory, the results of the colour analysis component are much more accurate if done on a segmented image, as this allows the algorithm to disregard the background (**Figure 14**; images taken from http://reference.medscape.com/features/slideshow/pressure‐ulcers).

**Figure 14.** Histogram results before and after segmentation.
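A minimal sketch of the per‐channel breakdown on a segmented image follows: cv2.calcHist accepts an optional mask, so background pixels are excluded from the histogram. The file names, and the assumption that a GrabCut mask is available, are illustrative.

```python
# A minimal sketch of the per-channel colour breakdown on a segmented image.
import cv2
from matplotlib import pyplot as plt

img = cv2.imread("wound_photo.jpg")                           # OpenCV loads BGR
fg_mask = cv2.imread("wound_mask.png", cv2.IMREAD_GRAYSCALE)  # e.g. from GrabCut

# One histogram per channel, counting only pixels inside the wound mask.
for channel, colour in enumerate(("b", "g", "r")):
    hist = cv2.calcHist([img], [channel], fg_mask, [256], [0, 256])
    plt.plot(hist, color=colour)

plt.xlabel("Intensity (0-255)")
plt.ylabel("Pixel count (wound region only)")
plt.show()
```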

Users can also consider hue‐saturation‐value (HSV) and red‐yellow‐black (RYB) formats for colour analysis. The HSV format responds to lighting, and as such, it may be a good option when one wants to tune the colour more specifically. RYB has a fitting relationship to wound stages, and RGB results can be converted to RYB. The approximate ratios of red, yellow, and black correlated to wound stages are shown in **Figure 15**. Wound stages I and II rely only on red, but are differentiated on the intensity of the red in the image. The subsequent wound stages are differentiated on the proportions of each of the three colours in the image. The error inherent in this method depends to some extent on the definitions of colours set by the user. A recommendation is to associate this component with a machine learning component, once a large enough data set is collected. In this way, colour parameters can be more precisely defined.
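One hedged way to approximate the red/yellow/black breakdown is to bin each wound pixel by simple HSV thresholds and report the resulting proportions. The threshold values below are placeholders, not the chapter's calibrated ranges, and would need tuning (or learning) as suggested above.

```python
# A hedged sketch of the red/yellow/black ratios via HSV thresholding.
# Threshold values are illustrative placeholders only.
import cv2
import numpy as np

img = cv2.imread("wound_photo.jpg")
fg_mask = cv2.imread("wound_mask.png", cv2.IMREAD_GRAYSCALE)

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
wound = fg_mask > 0

# Pixel bins (OpenCV's hue range is 0-179).
black = wound & (v < 60)                                    # dark/necrotic
red = wound & ~black & (s > 60) & ((h < 10) | (h > 170))    # granulation
yellow = wound & ~black & (s > 60) & (h >= 20) & (h <= 35)  # slough

total = float(np.count_nonzero(wound))
for name, m in (("red", red), ("yellow", yellow), ("black", black)):
    print("%s: %.1f%%" % (name, 100 * np.count_nonzero(m) / total))
```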

Finally, expert systems can be developed to determine wound stages from the RGB and/or RYB data. This again relies on collecting a sufficiently large data set. Alternatively, a support vector machine (SVM) or another machine learning algorithm can be applied to determine the stages of a wound. In the current work, the framework for an expert system is in place. The next step is to collect and populate the expert system with training data.
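As a speculative sketch of the machine learning route, a linear SVM could map colour proportions to stages once training data are available. scikit‐learn is used here for brevity and is not part of the software stack in **Table 2**; the feature layout and toy labels are assumptions, not collected data.

```python
# A speculative SVM sketch: colour proportions in, wound stage out.
import numpy as np
from sklearn import svm

# Toy training data: [red%, yellow%, black%] per assessed wound (assumed).
X = np.array([[95, 5, 0],     # predominantly red
              [60, 35, 5],    # red with slough
              [40, 35, 25],
              [20, 20, 60]])  # largely necrotic
y = np.array(["I-II", "III", "IV", "unstageable"])

clf = svm.SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[70, 25, 5]]))  # infer the stage of a new wound
```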

An example of the colour analysis on wound photographs can be viewed at https://youtu.be/Iyvochswrws.

**Figure 15.** RYB output correlated to wound stage.
