*3.1.1.2. Background subtraction*

Background subtraction is used to eliminate the picture information that is not related to traffic (**Figure 1**). The background removal is carried out as follows [12]:


**Figure 1.** Description of the principle of background removal. The background image (b) is subtracted from the image that includes the traffic (a). The result is a picture with the traffic information only (c).

To get a clean image of the traffic, a distance threshold of brightness must be defined when comparing the background image to the images that contain both traffic and background (see Algorithm 2). For every pixel, if the absolute difference in brightness between the image with traffic and the background image is lower than the threshold (empirically set to 30), the corresponding pixels are considered identical. In this case, the pixel is colored white (lines 14 and 15 of Algorithm 2). On the contrary, if the brightness difference is higher than the threshold, the color of the pixel does not change (lines 16 and 17 of Algorithm 2). Thanks to this method, it is possible to extract only the color information of the traffic. The difference is computed as the distance between the color components (RGB) of the two pixels; in other words, colors are treated as points in a three-dimensional space (line 13 of Algorithm 2).

Algorithm 2. Generating the background subtraction.

    1:  for (int i = 0; i < allFrame; i++)
    2:    for (int x = 0; x < width; x++)
    3:      for (int y = 0; y < height; y++)
    4:        int pos = x + y * width
    5:        color frameColor = frame[i].pixels[pos]
    6:        color refColor = background.pixels[pos]
    7:        float rFrame = red(frameColor)
    8:        float gFrame = green(frameColor)
    9:        float bFrame = blue(frameColor)
    10:       float rRef = red(refColor)
    11:       float gRef = green(refColor)
    12:       float bRef = blue(refColor)
    13:       float diff = dist(rFrame, gFrame, bFrame, rRef, gRef, bRef)
    14:       if (diff < 30)
    15:         image.pixels[pos] = color(255)
    16:       else
    17:         image.pixels[pos] = frame[i].pixels[pos]

*3.1.1.3. Pixel extraction*

To identify the traffic density, three categories of pixel colors are extracted: green, orange, and red (see Algorithm 3). Green, orange, and red pixels indicate a low, medium, and high amount of traffic, respectively. The number of pixels in each category is obtained by reading the RGB components of every pixel in the image. After excluding the white pixels (line 6 of Algorithm 3), three rules classify each remaining pixel into one of the categories (lines 7 to 12 of Algorithm 3). Once the picture is entirely read, the percentage of each category is calculated by dividing the number of green, orange, and red pixels by the total number of colored pixels.

Algorithm 3. Generating the pixel color extraction.

    1:  for (int i = 0; i < allFrame; i++)
    2:    float red = 0, orange = 0, green = 0
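The per-pixel comparison of Algorithm 2 can be sketched in a runnable form. This is a minimal illustration rather than the chapter's implementation: the flat-list image representation and the toy data are assumptions, and Processing's `dist()` is replaced by Python's `math.dist`:

```python
import math

WHITE = (255, 255, 255)
THRESHOLD = 30  # brightness-distance threshold, empirically set to 30 (see text)

def subtract_background(frame, background, width, height):
    """Return a new image where pixels close to the background are white.

    `frame` and `background` are flat row-major lists of (r, g, b) tuples,
    mirroring Processing's pixels[] array (pos = x + y * width).
    """
    result = [None] * (width * height)
    for y in range(height):
        for x in range(width):
            pos = x + y * width
            # Euclidean distance between the two colors, treating RGB as a
            # point in 3-D space (line 13 of Algorithm 2).
            diff = math.dist(frame[pos], background[pos])
            if diff < THRESHOLD:
                result[pos] = WHITE       # background pixel: paint it white
            else:
                result[pos] = frame[pos]  # traffic pixel: keep its color
    return result

# Toy 2x1 image: the first pixel matches the background, the second does not.
background = [(100, 100, 100), (100, 100, 100)]
frame = [(110, 105, 100), (20, 200, 30)]
print(subtract_background(frame, background, width=2, height=1))
# -> [(255, 255, 255), (20, 200, 30)]
```

The first pixel is within distance 30 of the background (about 11.2), so it is whitened; the second is far from it (about 146), so its color is preserved.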


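The pixel extraction step (Algorithm 3) can likewise be sketched. The chapter does not give the three classification rules (lines 7 to 12), so the reference colors and the nearest-color rule below are assumptions made for illustration only; the exclusion of white pixels and the percentage computation do follow the text:

```python
import math

WHITE = (255, 255, 255)

# Reference colors for the three traffic categories. These RGB values are
# illustrative assumptions; the chapter only states that green, orange, and
# red mark low, medium, and high traffic, respectively.
CATEGORIES = {
    "green": (0, 200, 0),
    "orange": (255, 150, 0),
    "red": (200, 0, 0),
}

def traffic_percentages(image):
    """Classify every non-white pixel and return each category's percentage.

    `image` is a flat list of (r, g, b) tuples, e.g. the output of the
    background subtraction step. White pixels are excluded (line 6 of
    Algorithm 3); each remaining pixel is assigned to the nearest reference
    color in RGB space (an assumed stand-in for lines 7 to 12).
    """
    counts = {name: 0 for name in CATEGORIES}
    for pixel in image:
        if pixel == WHITE:
            continue  # background pixel, excluded from the statistics
        nearest = min(CATEGORIES, key=lambda name: math.dist(pixel, CATEGORIES[name]))
        counts[nearest] += 1
    # Percentage of each category over the total number of colored pixels.
    total = sum(counts.values())
    return {name: 100.0 * n / total for name, n in counts.items()} if total else counts

# Toy image: two green-ish pixels, one red-ish pixel, one white pixel.
image = [(10, 190, 20), (30, 210, 40), (190, 10, 10), WHITE]
print(traffic_percentages(image))
```

With this toy input the white pixel is ignored, leaving three colored pixels: roughly 66.7% green, 0% orange, and 33.3% red.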