#### *3.2.4.1. BFAST*

The BFAST and BFAST Monitor algorithms were applied as a trajectory analysis strategy. Canopy cover derived from Landsat for the period 1988 to 2014 was used as input to the time series analysis. The algorithms were implemented with the bfastSpatial package for R, available at http://github.com/dutri001/bfastSpatial [64, 89]. The steps followed to implement bfastSpatial were (a) pre-processing of surface reflectance data, (b) inventorying and preparing data for analysis, and (c) analysis and formatting of change detection results.
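The study itself ran bfastSpatial in R, but the core idea of the trajectory analysis, searching each pixel's time series for a structural break, can be illustrated with a simplified Python sketch. This is a toy single-break, two-segment least-squares search, not the actual BFAST algorithm:

```python
import numpy as np

def single_break(y):
    """Return the index that best splits a pixel time series into two
    linear segments (minimum combined squared error). A toy stand-in
    for the per-pixel structural-break search that BFAST performs."""
    t = np.arange(len(y))
    best_i, best_sse = None, np.inf
    for i in range(3, len(y) - 3):          # require a few points per segment
        sse = 0.0
        for ts, ys in ((t[:i], y[:i]), (t[i:], y[i:])):
            coef = np.polyfit(ts, ys, 1)    # fit a line to the segment
            sse += np.sum((np.polyval(coef, ts) - ys) ** 2)
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i

# synthetic canopy-cover series with an abrupt drop at index 15
series = np.concatenate([np.full(15, 80.0), np.full(12, 30.0)])
print(single_break(series))   # → 15
```

Real BFAST additionally models trend and seasonal components and tests break significance; the sketch only conveys the segmentation idea.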

#### *3.2.4.2. Bi-temporal change detection*

Change detection on a bi-temporal basis was implemented on the NPP layers. The image differencing method allows direct comparison between images and was used for two reasons: it is straightforward, and its results are easy to interpret [70]. The method uses precisely co-registered multi-temporal images to produce a residual image that represents changes. Although the USGS service provides Landsat imagery at Level 1T (geometrically corrected), an automatic image registration was performed for every change detection process.

The difference between layers was measured directly from the pixel values of the images. The expression for image differencing is as follows:

$$I_d(x, y) = I_1(x, y) - I_2(x, y) \tag{6}$$

where *I*1 and *I*2 are the images from times *t*1 and *t*2, (*x*, *y*) are the pixel coordinates, and *I*d is the difference image. Pixels with no change were distributed around the mean, while pixels with change fell in the tails of the distribution curve. Since change can occur in both directions, it is up to the analyst to decide which image to subtract from which [90].
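Eq. (6) is a simple per-pixel subtraction. A minimal NumPy illustration (toy 3 × 3 arrays standing in for the Time 1 and Time 2 layers, not the study's NPP data):

```python
import numpy as np

# Two co-registered rasters: identical except for one changed pixel
I1 = np.array([[10, 10, 10], [10, 50, 10], [10, 10, 10]], dtype=float)
I2 = np.array([[10, 10, 10], [10, 20, 10], [10, 10, 10]], dtype=float)

# Eq. (6): no-change pixels cluster near zero in the difference image,
# while changed pixels fall in the tails of its histogram
Id = I1 - I2
print(Id[1, 1])   # → 30.0
```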

The image differencing method was carried out in the ENVI™ 5.2 interface. Possible inconsistencies between the indicators used in this process, due to errors associated with the estimations, were minimized by normalizing the Time 1 and Time 2 layers. This normalization applies a gain and an offset to the Time 2 layer so that it has the same mean and standard deviation as the Time 1 layer.
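The gain-and-offset normalization can be sketched as follows (an illustrative Python version; the study performed this step in ENVI):

```python
import numpy as np

def match_stats(t2, t1):
    """Apply a gain and offset to the Time 2 layer so that its mean and
    standard deviation match those of the Time 1 layer."""
    gain = t1.std() / t2.std()
    offset = t1.mean() - gain * t2.mean()
    return gain * t2 + offset

# synthetic layers with different radiometric statistics
rng = np.random.default_rng(0)
t1 = rng.normal(100.0, 15.0, 10_000)
t2 = rng.normal(80.0, 10.0, 10_000)

t2n = match_stats(t2, t1)
# after normalization, t2n has the same mean and std as t1
print(round(t2n.mean() - t1.mean(), 6), round(t2n.std() - t1.std(), 6))
```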

The next step was to select a threshold value that allows the method to identify areas with significant change. Otsu's auto-thresholding method [91] was used to set this threshold. Otsu's is a histogram shape-based method: it relies on discriminant analysis and uses the zeroth- and first-order cumulative moments of the histogram to calculate the threshold level.

A clean-up process was then carried out: a kernel of 3 × 3 pixels was applied to remove speckling noise, and a minimum aggregate size of 25 pixels was set to remove minuscule regions.
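This clean-up can be sketched as below, assuming a 3 × 3 median (majority) filter for the kernel step and connected-component filtering for the minimum aggregate size; the actual ENVI operators and parameters may differ:

```python
import numpy as np
from scipy import ndimage

def clean_change_mask(mask, min_size=25):
    """Suppress speckle with a 3x3 median filter, then drop connected
    regions smaller than min_size pixels (the minimum aggregate size)."""
    smoothed = ndimage.median_filter(mask.astype(np.uint8), size=3)
    labels, n = ndimage.label(smoothed)
    sizes = ndimage.sum(smoothed, labels, range(1, n + 1))
    keep = np.zeros_like(smoothed, dtype=bool)
    for lab, sz in enumerate(sizes, start=1):
        if sz >= min_size:
            keep[labels == lab] = True
    return keep

mask = np.zeros((40, 40), dtype=bool)
mask[5:15, 5:15] = True      # 100-pixel change patch: kept
mask[30, 30] = True          # single speckle pixel: removed
cleaned = clean_change_mask(mask)
# the median filter erodes the patch's four corners, so 96 pixels survive
print(cleaned.sum())
```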

The outputs produced by the change detection method were (a) a change image and (b) a difference image. The latter was used to identify "degraded" areas by applying a classification tree trained on field observations and very high-resolution imagery as training sites. This approach followed the same logic described earlier for detecting break points in the time series. The change image was used to determine deforestation in the study area.
