From Insect Vision to a Novel Bio-Inspired Algorithm for Image Denoising

*Manfred Hartbauer*

## **Abstract**

Night-active insects have inspired image-enhancement methods that uncover the information contained in dim images and movies. Here, I describe a novel bionic night vision (NV) algorithm that operates in the spatial domain to remove noise from static images. The parameters of this NV algorithm can be derived automatically from global image statistics and a simple noise estimate. In a first step, luminance values are ln-transformed; then adaptive local-means calculations are executed to remove the remaining noise without degrading fine image details and object contours. The performance of this algorithm is comparable with that of several popular denoising methods, and it can be applied to grey-scale and color images. Moreover, it can be executed in parallel at the level of pixels on programmable hardware.

**Keywords:** night vision, spatial integration, contrast enhancement, noise reduction, denoising, image enhancement, image processing, local means calculation

### **1. Introduction**

Some insect species have attracted the attention of researchers due to their astonishing visual abilities under extremely dim light conditions [1–3]. These insects cope with noise that degrades visual information and has multiple origins: the sparsity of photons results in shot noise, which is overlaid by transducer noise. To increase the sensitivity of compound eyes, clusters of photoreceptor cells direct light onto a certain part of the associated rhabdom in order to gather photons from a wide field of view. In other nocturnal insect species (e.g., *Megalopta genalis*), neurons in the brain sum up the information provided by the individual ommatidia that form the apposition eye. Insects equipped with such neural apposition eyes can even see under starlight conditions [3]. Filtering in the spatial [4] and temporal domains is mirrored in some denoising algorithms that are available for cleaning up noisy films (e.g., [5]). However, there is still a lack of image-enhancement methods that improve the quality of underexposed static images while avoiding artifacts and preserving image sharpness. Eliminating noise from static images is usually a challenging task for any denoising algorithm, because the temporal domain is not available for filtering.
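The benefit of spatial summation described above follows from basic statistics: averaging N independent noisy measurements reduces the noise amplitude by roughly a factor of √N. The following sketch (an illustrative simulation, not a model of any specific insect circuit) demonstrates this with simulated photoreceptor readings:

```python
import random
import statistics

random.seed(42)

def noisy_sample(true_value=100.0, sigma=10.0):
    """One photoreceptor reading: true luminance plus Gaussian noise."""
    return random.gauss(true_value, sigma)

def pooled_error(n_receptors, trials=2000):
    """RMS error of the mean of n_receptors independent noisy readings."""
    errors = []
    for _ in range(trials):
        pooled = statistics.fmean(noisy_sample() for _ in range(n_receptors))
        errors.append(pooled - 100.0)
    return statistics.pstdev(errors)

# Pooling n receptors shrinks the noise roughly by a factor of sqrt(n):
e1 = pooled_error(1)    # single receptor: RMS error close to sigma
e16 = pooled_error(16)  # pool of 16: RMS error roughly 4x smaller
print(f"single receptor RMS error: {e1:.2f}")
print(f"16-receptor pool  RMS error: {e16:.2f}")
```

The same trade-off the text mentions is visible here: pooling over a wider field of view suppresses noise, but in a real image it would also average away fine spatial detail, which is why adaptive summation is needed.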

The quality of images taken under dim light conditions is also often reduced by imperfections in the sensor itself ('sensor grain noise') and by shot noise. Generally, dim images have a very limited luminance range, which limits the content of available information. If any measure is undertaken to improve image contrast, such as the traditional method of histogram stretching, sensor noise is unavoidably amplified. Therefore, the goal of image enhancement is to preserve as many image details as possible while eliminating noise. Typically, denoising is achieved by the application of linear and nonlinear filters. Linear filters take the form of smoothing or low-pass, sharpening, Laplacian, unsharp-masking, or high-boost filters. Nonlinear filters include order-statistic filters such as minimum, median, and maximum filters (for a review of methods, see [6, 7]).
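The difference between a linear filter and an order-statistic filter is easy to demonstrate. The minimal sketch below (pure Python, for illustration only) applies a 3 × 3 mean filter and a 3 × 3 median filter to a flat patch containing one bright impulse-noise pixel: the mean filter only attenuates the spike and smears it over the neighborhood, whereas the median filter removes it completely.

```python
def filter3x3(img, op):
    """Apply op to each interior 3x3 neighborhood; border pixels are kept as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = op(window)
    return out

def mean_op(window):
    return sum(window) // len(window)

def median_op(window):
    return sorted(window)[len(window) // 2]

# A flat dark patch with one bright "salt" pixel (impulse noise).
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255

mean_img = filter3x3(img, mean_op)
med_img = filter3x3(img, median_op)

print(mean_img[2][2])  # spike attenuated but smeared into all 3x3 neighbors
print(med_img[2][2])   # spike removed entirely; patch stays flat at 10
```

The same smearing is what blurs genuine edges under linear smoothing, which motivates the edge-preserving methods discussed next.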

Simple denoising techniques, such as linear smoothing or median filtering, can reduce noise, but they also smooth away edges, so the resulting image becomes blurry. A popular alternative is total variation (TV) denoising, described by Rudin et al. [8]. This method minimizes the total variation of the luminance values, which can mainly be attributed to noise. The TV regularization method preserves salient edges while effectively removing noise. Lee et al. [9] published a framework for a moving least squares method with total-variation-minimizing regularization, and Yoon et al. [10] improved the preservation of fine image details by developing an adaptive, TV-minimization-based image-enhancement method (ATVM). Bilateral filtering, described by Tomasi and Manduchi [11], is another powerful nonlinear denoising algorithm that preserves object contours. Here, denoising is based on both the spatial distance of surrounding pixels relative to an output pixel and their grey-value difference. Bilateral filtering is fast, but the tuning of its parameters is rather difficult (see Zhang and Gunturk [12]), and staircase effects and inverse contours are known possible artifacts. Another possibility is to perform operations on a Fourier transform of the image rather than on the image itself. The techniques that fall into this category include low-pass, high-pass, homomorphic, linear, and root filtering. Fourier-transformed images are filtered and inverse-transformed to reduce noise and prevent blurring effects. The disadvantages of frequency-domain methods are that they introduce certain artifacts and cannot enhance all parts of the image equally well. In addition, it is difficult to automate the image-enhancement procedure. Despite these drawbacks, frequency filtering of similarity maps has proved to be a powerful method for image denoising (BM3D, published by Maggioni et al. [13]; see the original work of Dabov et al. [14]). This method divides the image into small pieces (2D blocks) and, after 3D transformation of similar blocks, the filtering process eliminates noise while leaving object details mostly untouched. In addition, wavelet-domain hidden Markov models have been applied to image denoising with fascinating results, especially when applied to diagnostic images [15–17].
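The edge-preserving behavior of TV denoising can be illustrated with a minimal 1D sketch. The code below runs gradient descent on a smoothed TV energy, 0.5·‖u − f‖² + λ·Σ√((uᵢ₊₁ − uᵢ)² + ε); this is a didactic toy, not the Rudin–Osher–Fatemi solver or any method cited above, and all parameter values are arbitrary choices for the demo:

```python
import math
import random

def tv_denoise_1d(f, lam=0.5, step=0.05, iters=1000, eps=1e-2):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt((u[i+1]-u[i])^2 + eps).
    eps smooths the non-differentiable TV term so plain gradient descent works."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - f[i] for i in range(n)]          # data-fidelity term
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)              # smoothed sign(d)
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

# Noisy step edge: TV denoising flattens the noise but keeps the jump.
random.seed(0)
f = [0.0 + random.gauss(0, 0.3) for _ in range(20)] + \
    [5.0 + random.gauss(0, 0.3) for _ in range(20)]
u = tv_denoise_1d(f)
print(f"noise spread before: {max(f[:20]) - min(f[:20]):.2f}")
print(f"noise spread after:  {max(u[:20]) - min(u[:20]):.2f}")
print(f"edge height kept:    {u[30] - u[10]:.2f}")
```

The key property, visible in the output, is that small fluctuations within a flat segment are strongly penalized and flattened, while one large jump costs the same TV penalty regardless of how sharp it is, so the edge survives.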

In order to reduce the computational time required by complex image-processing algorithms such as edge detectors, homomorphic filtering, and image segmentation, general-purpose computing methods using graphics processing units were developed [18, 19]. As another strategy to reduce processing time, simpler, computationally less demanding algorithms were developed. For example, the simple piecewise linear (PWL) function sharpens image edges and reduces noise simply by evaluating the luminance of pixels in a 3 × 3 window around each pixel [20]. Its effects can easily be controlled by varying only two parameters. Such simple algorithms can be implemented in reconfigurable hardware in the form of field-programmable gate arrays (FPGAs), which is considered a practical way to obtain high performance with computationally intensive image-processing algorithms [21, 22]. Performing parallel operations in hardware significantly reduces processing time, but simple algorithms are easier to implement on programmable hardware than mathematically complex ones.
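To make the idea of a two-parameter, per-pixel 3 × 3 filter concrete, here is a hypothetical piecewise-linear sketch. It is not the PWL filter of [20] (whose exact mapping is not given here); it merely illustrates the general scheme: deviations from the local mean below a threshold `t` are suppressed as noise, larger deviations are amplified by `gain` to sharpen edges, and each output pixel depends only on its own window, so all pixels could be computed in parallel (e.g., on an FPGA):

```python
def pwl_enhance(img, t=8, gain=2):
    """Toy piecewise-linear 3x3 filter (illustrative; not the filter of [20]).
    Two parameters: t (noise threshold) and gain (edge amplification)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]           # borders are kept as-is
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            mean = sum(window) / 9.0
            d = img[y][x] - mean
            if abs(d) < t:                  # small deviation: treat as noise
                d = 0.0
            else:                           # large deviation: an edge, boost it
                d *= gain
            out[y][x] = max(0, min(255, int(round(mean + d))))
    return out

# Gentle ripple on a flat patch is flattened; a strong edge is steepened.
flat = [[100, 100, 103, 100, 100]] * 5
edge = [[20, 20, 20, 200, 200]] * 5
print(pwl_enhance(flat)[2][2])                            # 103 pulled back toward 100
print(pwl_enhance(edge)[2][2], pwl_enhance(edge)[2][3])   # edge pushed apart (clamped)
```

Because the inner loop uses only additions, comparisons, and one multiplication per pixel, with no dependence between output pixels, this structure maps naturally onto the parallel per-pixel hardware implementations discussed above.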

Here, I describe a rather simple, bio-inspired algorithm that can be used to enhance the contrast of dim images and remove noise without greatly affecting fine image details. It operates in the spatial domain at the level of pixels and can be run in parallel on FPGA hardware.

**Figure 1.**

*Schema of the image-processing steps that make up the NV algorithm. Global image statistics and a simple method of noise estimation were used to derive the parameters "gain" and "variability threshold". These were used in subsequent image-processing steps to enhance the quality of dim images and remove noise. Images with average brightness skip "Ln transformation", and the "contrast enhancement" routine was only applied to images that exhibited low variance among their luminance values.*

*DOI: http://dx.doi.org/10.5772/intechopen.91911*

### **2. Bionic method of image denoising**

**2.1 Method overview**

This novel night vision (NV) image enhancement method increases the quality of underexposed pictures by combining three subsequent image-processing steps (see **Figure 1**). These are executed at the level of pixels, which perform simple calculations to mimic the amplification of the transduction process in photoreceptors and the spatial integration of image information known from nocturnal insects [2]. The photoreceptors of *Megalopta genalis* (Halictidae), a nocturnal bee found in the Neotropics, have a rather high transduction gain, which decreases the signal-to-noise ratio and information capacity in dim light in exchange for increased sensitivity. This amplification of visual information is mirrored in the first image-processing step of this night vision method by performing a logarithmic transformation of pixel grey values (luminance values). Logarithmic transformation increases small luminance values while leaving high values largely unchanged. Therefore, image details of dark image regions become visible.

The photoreceptors of nocturnal insects generate slow and noisy visual signals that are spatially summed by second-order monopolar cells in the lamina [1]. Summing visual information from a wide angle of view reduces noise and, thus, improves the signal-to-noise ratio. The neuronal correlate for this can be found in the large dendritic trees of lamina interneurons in *M. genalis*. However, to prevent image blur, spatial summation should be small in image areas where contrast is high and large in more homogeneous image regions. This "adaptive spatial averaging"
