**7. Experimental results**

We first present results on a standard evaluation set, covering both detection and description. We then discuss the results obtained when applying the algorithm to a real object-recognition task. All detectors and descriptors were compared using the original implementations of their authors. For the standard evaluation, we tested our detector and descriptor on a published sequence of test images together with the accompanying testing software. The test set included images of real, textured, and structured scenes. Given the limited page count of this manuscript, we cannot report the results for all sequences. To compare the performance of the detectors, we selected images with changes in viewpoint (Graffiti and Walls), magnification and rotation (Boats), and illumination (Leuven). The observations below also hold for the other sequences. We used the repeatability score, as described in [10], which measures the number of interest points detected in both images relative to the smaller number of detected interest points (counting only the part of the scene visible in both images). The performance of the detector was compared with that of the Difference of Gaussian (DoG) [2], Harris Laplace, and Hessian Laplace [12] detectors. All detectors returned a similar number of interest points. This holds for all images, including the database used in the object-recognition experiment (see **Table 1** for an example).
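
The repeatability score used above can be sketched as follows. This is a simplified reading of the protocol in [10]: it assumes a ground-truth homography between the two images and omits the common-visibility masking; the function names and the pixel tolerance `eps` are our own choices, not part of the original protocol.

```python
import math

def project(H, pt):
    """Apply a 3x3 homography (nested lists) to a 2D point."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def repeatability(pts1, pts2, H, eps=1.5):
    """Fraction of interest points recovered in both images.

    pts1, pts2: interest points detected in image 1 and image 2.
    H: ground-truth homography mapping image-1 coordinates to image-2.
    A point in image 1 counts as repeated if some detected point in
    image 2 lies within eps pixels of its projection.  The score is
    normalised by the smaller of the two point counts.
    """
    repeated = 0
    for p in pts1:
        q = project(H, p)
        if any(math.dist(q, r) <= eps for r in pts2):
            repeated += 1
    return repeated / min(len(pts1), len(pts2))
```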


**Table 1.** Detection threshold, number of points detected, and computation time (first image of the Graffiti sequence, 900 × 640).

In addition, our Quick Hessian detector was more than three times faster than DoG and five times faster than Hessian Laplace. At the same time, the repeatability of our detector was comparable to (Graffiti, Leuven, Boats) or better than (Walls) that of the competitors. Note that the Graffiti and Walls sequences contain out-of-plane rotation, which results in affine distortions, whereas the compared detectors are invariant only to rotation and scale; these distortions therefore have to be absorbed by the overall robustness of the features. The descriptors were evaluated using recall/(1-precision) diagrams, as in [3, 9]. For each evaluation, we used the first and fourth images of the sequence, except for the Graffiti and Walls scenes, where the corresponding viewpoint changes were 30 and 50 degrees, respectively. We compared our SURF descriptor with GLOH, SIFT, and PCA-SIFT, all computed on the interest points of our "Quick Hessian" detector. SURF outperformed the other descriptors in almost all tests. In **Figure 4**, we compare the results of two different matching techniques: one based on a similarity threshold and one based on the nearest-neighbor ratio (see [9] for a discussion of these techniques). The choice of technique affects the ranking of the descriptors, but SURF performed better in both cases. In the remainder, only the results based on the similarity threshold are shown (**Figure 7**), because this technique is best suited for representing the distribution of a descriptor in its feature space [9] and is used more routinely. The SURF descriptor is systematically and substantially superior to the other descriptors, showing an 11% improvement, and its computation time is fast (**Table 2**). The extended variant (SURF-126) appears slightly superior to the standard SURF descriptor; however, its matching process is slower, so it may be unsuitable for applications that require speed.
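
The two matching techniques compared in Figure 4 can be sketched as follows. This is a simplified illustration; the threshold `t` and `ratio` values passed in are placeholders, not the settings used in the experiments.

```python
import math

def match_similarity_threshold(d1, d2, t):
    """Similarity-threshold matching: descriptor i in the first image is
    matched with every descriptor j in the second image whose Euclidean
    distance is below t, so one feature may collect several matches."""
    return [(i, j) for i, a in enumerate(d1)
                   for j, b in enumerate(d2) if math.dist(a, b) < t]

def match_nn_ratio(d1, d2, ratio):
    """Nearest-neighbor-ratio matching: the closest descriptor is
    accepted only if it is clearly closer than the second-closest one."""
    matches = []
    for i, a in enumerate(d1):
        ranked = sorted((math.dist(a, b), j) for j, b in enumerate(d2))
        if len(ranked) >= 2 and ranked[0][0] < ratio * ranked[1][0]:
            matches.append((i, ranked[0][1]))
    return matches
```

The threshold variant can return many-to-many matches, which is why it better reflects the descriptor's distribution in feature space, while the ratio variant returns at most one match per feature.
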
Object recognition was performed with the same set of parameters and thresholds (**Table 1**). Timing was evaluated on a standard PC (Pentium IV, 2.5 GHz).

In addition, SURF's recognition performance surpasses that of the other state-of-the-art algorithms. To speed up the matching stage, the sign of the Laplacian (i.e., the trace of the Hessian matrix) is stored for every interest point. Interest points typically lie on blob-like structures, and the sign distinguishes bright blobs on a dark background from the reverse situation. This information comes at no extra cost, as it has already been computed during the detection step. During matching, two features are compared only if they have the same contrast type. Hence, this minimal piece of information speeds up matching and improves performance.
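
A minimal sketch of this sign-based pruning follows; the `(sign, descriptor)` feature layout and the ratio test are our own simplifications, not the exact implementation.

```python
import math

def sign_filtered_matches(feats1, feats2, ratio=0.7):
    """Match features of the form (laplacian_sign, descriptor).
    Descriptors are compared only when the signs agree (a bright blob on
    a dark background never matches the reverse contrast type), so
    roughly half of the candidate comparisons are skipped at no extra
    cost, since the sign falls out of the detection step."""
    matches = []
    for i, (s1, a) in enumerate(feats1):
        # Restrict candidates to features with the same contrast type.
        cands = sorted((math.dist(a, b), j)
                       for j, (s2, b) in enumerate(feats2) if s1 == s2)
        if len(cands) >= 2 and cands[0][0] < ratio * cands[1][0]:
            matches.append((i, cands[0][1]))
    return matches
```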


**Figure 9.** Line graphs for different methods.



The detection thresholds were adjusted so that all methods detect the same number of interest points. The relative computation times are representative of other images as well.
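
This threshold adjustment can be sketched as a simple bisection over the detector's response threshold. The `detect` callback, which returns the point count for a given threshold, is hypothetical, and a monotonically decreasing count is assumed.

```python
def tune_threshold(detect, target, lo=0.0, hi=1.0, iters=30):
    """Bisect the detection threshold so that detect(threshold) returns
    roughly `target` interest points.  Assumes the count decreases
    monotonically as the threshold grows, which holds for typical
    blob-strength thresholds."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if detect(mid) > target:
            lo = mid   # too many points: raise the threshold
        else:
            hi = mid   # too few points: lower the threshold
    return (lo + hi) / 2
```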

**Table 2.** Computation time for the common detector-descriptor implementations, tested on the first image of the Graffiti sequence.

We tested the new features in a practical application: recognizing art objects in a museum. The reference set consisted of 216 images of 22 objects; the test set comprised 116 images taken under a variety of conditions, including extreme illumination changes, reflections of the objects in glass cabinets, viewpoint changes, magnification, and different camera qualities. The images are small (319 × 240) and therefore difficult to recognize, as detail is lost. To identify the objects in the database, we proceed as follows: each test image is compared with all reference images by matching their respective interest points, and the object shown in the reference image with the greatest number of matches to the test image is chosen as the recognized object. Matching is carried out as follows: an interest point in the test image is compared with an interest point in the reference image by computing the Euclidean distance between their descriptor vectors. A matching pair is accepted if the distance to the closest neighbor is less than 0.6 times the distance to the second-closest neighbor; this is the nearest-neighbor-ratio matching strategy [2, 8, 27]. Additional geometric constraints would reduce the impact of false-positive matches and could be applied on top of any matching scheme, but for comparison purposes this makes little sense, as it may hide shortcomings of the basic schemes. The average recognition rates of our evaluation are as follows: the leader is SURF-126 with a recognition rate of 85.7%, followed by U-SURF (84.8%) and SURF (83.7%). The other descriptors achieved 78.4% (GLOH), 78.2% (SIFT), and 72.3% for PCA-SIFT (**Figures 10** and **11**).
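
The recognition procedure above can be sketched as follows. This is a simplified illustration of nearest-neighbor-ratio voting with the 0.6 ratio from the text; the data layout (a dict of descriptor lists per object) is our own.

```python
import math

def recognize(test_desc, reference_sets, ratio=0.6):
    """Pick the reference object with the most descriptor matches.

    test_desc: list of descriptor vectors from the test image.
    reference_sets: dict mapping object id -> list of descriptor vectors.
    A test descriptor matches a reference set when its nearest neighbour
    is closer than `ratio` times the second-nearest neighbour.
    """
    def count_matches(ref):
        n = 0
        for a in test_desc:
            ds = sorted(math.dist(a, b) for b in ref)
            if len(ds) >= 2 and ds[0] < ratio * ds[1]:
                n += 1
        return n
    return max(reference_sets, key=lambda k: count_matches(reference_sets[k]))
```

In practice, each test image is compared against all 216 reference images this way, and the object behind the best-scoring reference image is reported.
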

Rotation Invariant on Harris Interest Points for Exposing Image Region Duplication Forgery

http://dx.doi.org/10.5772/intechopen.76332



Many researchers have proposed modern algorithms to address the problem of image authentication. This study explored and compared different algorithms that detect common types of image forgery; their characteristics are shown in **Table 2**. The algorithms examined in this study are undoubtedly important for the detection of image counterfeiting. Previous researchers have attempted to improve the reliability of image-fraud detection algorithms in two main ways: (1) reducing algorithm complexity and computation time by using feature vectors of small dimension, as shown in Refs. [18, 37–41]; and (2) increasing the robustness of the algorithms by adopting powerful features that remain consistent under a wide range of image-processing operations, as shown in Refs. [42–48]. The algorithms based on fixed key indicators exhibit remarkable performance, as shown in **Table 2**. However, several barriers and challenges remain. We summarize the defects of the available algorithms in **Tables 1** and **2**: (1) the algorithms cannot handle all possible types of image processing that can be applied to forge images; (2) some algorithms rely heavily on several thresholds or initial values, and identifying suitable values requires experimentation and tuning; and (3)


**8. Discussion and conclusions**


**Figure 10.** Example images of the reference group (left) and the test group (right). Note the difference in perspective and colors.

**Figure 11.** Left to right and from top to bottom: Frequency of Walls-Graffiti (perspective change), Leuven (illumination change), and Boats (magnification and rotation).

