**2.1 Method overview**

*Biomimetics*

With traditional enhancement operations such as histogram stretching, sensor noise is unavoidably amplified. Therefore, the goal of image enhancement is to preserve as many image details as possible while eliminating noise. Typically, denoising is achieved by applying linear or nonlinear filters. Linear filters take the form of smoothing or low-pass, sharpening, Laplacian, unsharp-masking, or high-boost filters. Nonlinear filters include order-statistic filters such as minimum, median, and maximum filters (for a review of methods see [6, 7]).

Simple denoising techniques, such as linear smoothing or median filtering, can reduce noise, but at the same time they smooth away edges, so that the resulting image becomes blurry. A popular alternative is total variation (TV) denoising, described by Rudin et al. [8]. This method minimizes the total variation of the luminance values, which can mainly be attributed to noise. TV regularization preserves salient edges while effectively removing noise. Lee et al. [9] published a framework for a moving least squares method with total-variation-minimizing regularization, and Yoon et al. [10] improved the preservation of fine image details by developing an adaptive, TV-minimization-based image enhancement method (ATVM). Bilateral filtering, described by Tomasi and Manduchi [11], is another powerful nonlinear denoising algorithm that preserves object contours. Here, denoising weights the surrounding pixels by both their spatial distance from the output pixel and their grey-value difference. Bilateral filtering is fast, but the tuning of its parameters is rather difficult (see Zhang and Gunturk [12]), and staircase effects and inverse contours are known possible artifacts. Another possibility is to perform operations on a Fourier transform of the image rather than on the image itself. Techniques in this category include low-pass, high-pass, homomorphic, linear, and root filtering. Fourier-transformed images are filtered and inverse-transformed to reduce noise and prevent blurring effects. The disadvantages of frequency-domain methods are that they introduce certain artifacts and cannot simultaneously enhance all parts of an image very well. In addition, it is difficult to automate the image enhancement procedure. Despite these drawbacks, frequency filtering of similarity maps has proved to be a powerful method for image denoising (BM3D, published by Maggioni et al. [13]; see the original work of Dabov et al. [14]). This method divides the image into small pieces (2D blocks) and, after a 3D transformation of similar blocks, the filtering process eliminates noise while leaving object details mostly untouched. In addition, wavelet-domain hidden Markov models have been applied to image denoising with fascinating results, especially when applied to diagnostic images [15–17].
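The bilateral-filter principle sketched above (weights that fall off with both spatial distance and grey-value difference) can be illustrated in a few lines. The following is a minimal NumPy sketch, not the implementation of [11]; the parameter names `sigma_s` (spatial) and `sigma_r` (range) and the Gaussian weighting are conventional assumptions:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Minimal bilateral-filter sketch: each output pixel is a weighted
    mean of its neighbourhood, with weights decaying with spatial
    distance (sigma_s) and with grey-value difference (sigma_r).
    Neighbours across an edge differ strongly in grey value, get near-zero
    weight, and so edges are preserved while flat regions are smoothed."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-((window - img[y, x])**2) / (2.0 * sigma_r**2))
            weights = spatial * range_w
            out[y, x] = (weights * window).sum() / weights.sum()
    return out
```

On a sharp step edge the grey-value term suppresses contributions from the far side, so the edge survives; in a flat noisy region all weights are close to one and the filter acts like a local mean.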

In order to reduce the computational time required by complex image-processing algorithms such as edge detectors, homomorphic filtering, and image segmentation, general-purpose computing methods using graphics processing units were developed [18, 19]. Simpler, computationally less demanding algorithms were developed as another strategy to reduce processing time. For example, the simple piecewise linear (PWL) function sharpens image edges and reduces noise simply by evaluating the luminance of pixels in a window of 3 × 3 pixels around each pixel [20]. Its effects can easily be controlled by varying only two parameters. Such simple algorithms can be implemented in reconfigurable hardware in the form of field-programmable gate arrays (FPGAs), which is considered a practical way to obtain high performance with computationally intensive image-processing algorithms [21, 22]. Performing parallel operations in hardware significantly reduces processing time, but simple algorithms are easier to implement on programmable hardware than mathematically complex ones.
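The exact piecewise-linear rule of [20] is not reproduced in this chapter. Purely as an illustration of the *pattern* such filters share (a 3 × 3 luminance window and two tunable parameters), a hypothetical two-parameter neighbourhood operator might look like this; the rule, the parameter names `threshold` and `gain`, and their defaults are all invented for the sketch:

```python
import numpy as np

def window_3x3_enhance(img, threshold=10.0, gain=1.5):
    """Illustrative 3x3 operator -- NOT the PWL filter of [20].
    If a pixel deviates from its local 3x3 mean by less than
    `threshold` the deviation is treated as noise and the pixel is
    replaced by the mean (smoothing); otherwise the deviation is
    amplified by `gain` (edge sharpening)."""
    img = img.astype(np.float64)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            mean = pad[y:y + 3, x:x + 3].mean()  # 3x3 window centred on (y, x)
            d = img[y, x] - mean
            out[y, x] = mean if abs(d) < threshold else mean + gain * d
    return np.clip(out, 0.0, 255.0)
```

Because each output pixel depends only on a fixed 3 × 3 window, every pixel can be computed independently, which is what makes this class of algorithm attractive for FPGA parallelization.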

Here, I describe a rather simple, bio-inspired algorithm that can be used to enhance the contrast of dim images and remove noise while largely preserving fine image details. It operates in the spatial domain at the level of pixels and can be run in parallel on FPGA hardware.

This novel night vision (NV) image enhancement method increases the quality of underexposed pictures by combining three subsequent image-processing steps (see **Figure 1**). These are executed at the level of pixels, which perform simple calculations to mimic the amplification of the transduction process in photoreceptors and the spatial integration of image information as known from nocturnal insects [2]. The photoreceptors of *Megalopta genalis* (Halictidae), a nocturnal bee found in the Neotropics, have a rather high transduction gain, which increases sensitivity in dim light at the cost of a decreased signal-to-noise ratio and information capacity. This amplification of visual information is mirrored in the first image-processing step of this night vision method by a logarithmic transformation of pixel grey values (luminance values). Logarithmic transformation increases small luminance values while leaving high values largely unchanged. Therefore, image details in dark image regions become visible.
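The logarithmic transformation step can be sketched as follows. This is an illustration, not the chapter's code: the normalisation of `gain` shown here (mapping the brightest pixel back to 255) is an assumption, whereas the chapter derives its gain parameter from global image statistics:

```python
import numpy as np

def log_transform(img, gain=None):
    """Logarithmic grey-value transformation: s = gain * ln(1 + r).
    Small luminance values are boosted strongly while high values stay
    almost unchanged, making details in dark regions visible."""
    img = img.astype(np.float64)
    if gain is None:
        # illustrative choice: map the brightest input pixel back to 255
        gain = 255.0 / np.log(1.0 + img.max())
    return gain * np.log(1.0 + img)

# dark 8-bit test image: low values get spread apart, high values compressed
dark = np.array([[1, 10], [100, 255]], dtype=np.uint8)
bright = log_transform(dark)
```

Note how a grey value of 1 is lifted by more than an order of magnitude while 255 stays fixed; the ordering of the pixel values is preserved.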

#### **Figure 1.**

*Schema of the image-processing steps that make up the NV algorithm. Global image statistics and a simple noise estimate were used to derive the parameters "gain" and "variability threshold". These were used in subsequent image-processing steps to enhance the quality of dim images and remove noise. Images with average brightness skip the "Ln transformation", and the "contrast enhancement" routine was only applied to images that exhibited low variance among their luminance values.*

The photoreceptors of nocturnal insects generate slow and noisy visual signals that are spatially summed by second-order monopolar cells in the lamina [1]. Summing visual information from a wide angle of view reduces noise and thus improves the signal-to-noise ratio. The neuronal correlate for this can be found in the large dendritic trees of lamina interneurons in *M. genalis*. However, to prevent image blur, spatial summation should be small in image areas where contrast is high and large in more homogeneous image regions. This "adaptive spatial averaging" is performed in the second image-processing step of this night vision method and preserves object contours and image sharpness. The procedure assumes a higher variability of luminance values near object contours than in homogeneous image regions. Thus, the circles within which local luminance values are averaged must not exceed a critical variability of grey values (*threshold\_var*). Adaptive averaging is performed at the level of pixels and evaluates the variability of local grey values to find the largest circle in which the variability of grey values remains below the predefined threshold. Once this threshold is exceeded, the average of the grey values of the pixels belonging to this circle is calculated and stored at the central pixel. As a final processing step, an automatic contrast-enhancement procedure is applied by means of linear histogram stretching. Two parameters (gain and variability threshold) are essential for this method and were derived from global image statistics and a simple noise estimate. The image-enhancement algorithm described here was developed using NetLogo 5.2 (developed by Uri Wilensky; http://ccl.northwestern.edu/netlogo/), a multi-agent programming environment that allows the parallel execution of commands at the level of pixels (named patches in the NetLogo language).
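The adaptive averaging and contrast-stretching steps described above might be sketched as follows. This is an illustration, not the NetLogo implementation: square windows are used instead of the chapter's circles for brevity, and the parameter values are placeholders (the chapter derives *threshold\_var* from a noise estimate):

```python
import numpy as np

def adaptive_average(img, threshold_var=25.0, max_radius=5):
    """Adaptive spatial averaging sketch: around each pixel, grow a
    neighbourhood (square here, for simplicity; the chapter uses
    circles) until its grey-value variance would exceed
    `threshold_var`, then store the mean of the largest neighbourhood
    that stayed below the threshold.  Homogeneous regions are averaged
    widely (strong noise reduction); high-contrast regions barely at
    all, so edges survive."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = img.copy()
    for y in range(h):
        for x in range(w):
            for r in range(1, max_radius + 1):
                win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                if win.var() > threshold_var:
                    break          # contour nearby: stop growing
                out[y, x] = win.mean()
    return out

def stretch_contrast(img, lo=0.0, hi=255.0):
    """Final step: linear histogram stretching to the full [lo, hi] range."""
    img = img.astype(np.float64)
    mn, mx = img.min(), img.max()
    if mx == mn:
        return np.full_like(img, lo)
    return lo + (img - mn) * (hi - lo) / (mx - mn)
```

Because each pixel inspects only its own neighbourhood, the loop body corresponds directly to the per-patch commands executed in parallel in the NetLogo version.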
