*Risk Assessment and Automated Anomaly Detection Using a Deep Learning Architecture DOI: http://dx.doi.org/10.5772/intechopen.96209*

*Deep Learning Applications*

in [0,1], so the minimum AUC value is 0 and the maximum is 1. A higher value indicates a better ratio of true positives and true negatives to false positives and false negatives. The two metrics generally return similar results, but in our case more weight is given to the PR metric, as it offers a better estimate in cases where the positive class is of greater interest, or where the results consist of significantly more elements of one of the two classes. Both conditions apply in the case of this risk assessment system.

During the experiment, some networks failed to find an acceptable solution, either because they were trapped in a local minimum or because they encountered the exploding-gradient phenomenon. These cases appeared to be random and independent of the noise parameters, so the affected network was re-initialized and its training restarted from the beginning. The experiments were performed using three levels of congestion in space (low, moderate, and high), and for each of them 40 neural networks were created, one for each training/testing noise combination mentioned above. For every desired network, 4 independent trainings were conducted and the averages of the metrics of interest were kept. The same test data, corresponding to low-to-medium congestion, were used for testing all tested models. The final results are presented below (**Figures 18**–**20**):

#### **Figure 18.**

*Training with low congestion data with different noise levels. Testing with low-to-medium congestion data.*

#### **Figure 19.**

*Training with medium congestion data with different noise levels. Testing with low-to-medium congestion data.*

#### **Figure 20.**

*Training with high congestion data with different noise levels. Testing with low-to-medium congestion data.*

From the ROC AUC score graphs above, it can be seen that the models with the highest performance correspond to the following noise levels in the training data:

For the low congestion training data, the best performing model (AUC = 0.91) corresponds to noise level σ<sup>2</sup> = 0.8 in the training data.

For the medium congestion training data, the best performing model (AUC = 0.96) corresponds to noise level σ<sup>2</sup> = 1.4 in the training data.

For the high congestion training data, the best performing model (AUC = 0.95) corresponds to noise level σ<sup>2</sup> = 1.6 in the training data.

From the above results it is clear that the performance of the networks remains constant when we apply noise to the test data. This implies that, once training is completed, the network remains robust and is not affected by noise in the data, so it can be used in a real application. Of particular interest are the variations that occur when noise is present in the training data. As mentioned above, training a neural network usually benefits from variation in the training data, as it helps the network learn the patterns that appear in the data rather than the data itself. This obviously does not mean that more noise is always better: for every network and for every application there is some optimal noise level that offers the best performance. In cases with low and moderate congestion, Gaussian noise with σ<sup>2</sup> = 0.5–0.8 appears to give the best performance, while for high congestion, training with noise of σ<sup>2</sup> = 1.4 performs better. This variation has not yet been attributed to any specific feature of the network, model, training method, or data.
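As a concrete illustration of the evaluation described above, the sketch below shows how training samples could be augmented with zero-mean Gaussian noise of a given variance σ², and how ROC AUC can be computed from classifier scores using the rank-statistic formulation. The function names and toy data are illustrative assumptions, not the chapter's actual pipeline.

```python
import numpy as np

def add_gaussian_noise(x, variance, rng):
    """Augment samples with zero-mean Gaussian noise of the given variance."""
    return x + rng.normal(0.0, np.sqrt(variance), size=x.shape)

def roc_auc(labels, scores):
    """ROC AUC via the rank-statistic (Mann-Whitney U) formulation."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count 0.5.
    correct = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return correct / (len(pos) * len(neg))

rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
print(roc_auc(labels, scores))  # 0.75: 3 of 4 pairs ordered correctly

clean = np.zeros((100, 3))                               # toy "trajectories"
noisy = add_gaussian_noise(clean, variance=0.8, rng=rng)  # sigma^2 = 0.8
```

The same `roc_auc` value would be obtained from `sklearn.metrics.roc_auc_score`; the rank formulation is shown here only to make the metric's pairwise-ordering interpretation explicit.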

The evaluation of the risk assessment algorithm with the iCrowd simulator demonstrates that risk assessment can be performed accurately and without necessarily inducing additional delays in the security screening process, since the classification of trajectories as normal or suspicious is done by overhead cameras while the travelers go about their normal check-in routine at the airport. To that extent, the proposed risk assessment method, based on anomaly detection on traveler trajectories, can be used to improve security screening effectiveness while keeping the delay low (or, equivalently, moving the operating point in **Figure 10** from high delay to low delay).
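The effect of moving the operating point toward lower delay can be illustrated with a toy calculation: if risk assessment pre-clears a fraction of passengers into a faster lane, the expected per-passenger screening time drops. All numbers below are hypothetical, not measurements from the chapter.

```python
# Toy illustration (hypothetical numbers): risk-based routing sends a fraction
# p_green of pre-cleared passengers to a fast lane, the rest to standard checks.

def average_screening_time(p_green, t_fast, t_standard):
    """Expected per-passenger screening time under risk-based routing."""
    return p_green * t_fast + (1.0 - p_green) * t_standard

baseline = average_screening_time(0.0, 20.0, 60.0)    # everyone standard
risk_based = average_screening_time(0.7, 20.0, 60.0)  # 70% in the green lane
print(baseline, risk_based)  # 60.0 vs. ~32.0 seconds per passenger
```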

Furthermore, the proposed method can be used as a financial investment tool for estimating the cost of acquiring the necessary equipment (in this case, overhead cameras) for a certain performance level (probability of accurate classification) before purchasing it, and for performing a trade-off analysis between the cost of acquisition of the necessary equipment and the expected performance improvement in risk assessment. In this way, the risk assessment simulator can be used as a cost–benefit tool for analyzing the performance of a risk-based security system.
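The cost–benefit analysis described above can be sketched as choosing the smallest camera deployment whose simulated accuracy meets a target. The unit cost and the (camera count, AUC) pairs are made-up stand-ins for values a simulator such as iCrowd would produce.

```python
CAMERA_UNIT_COST = 2500  # hypothetical cost per overhead camera

# (number of cameras, simulated ROC AUC) -- illustrative values only
simulated = [(4, 0.82), (8, 0.89), (12, 0.94), (16, 0.95)]

def cheapest_meeting_target(simulated, target_auc):
    """Return the smallest deployment (and its cost) reaching the target AUC."""
    for n_cameras, auc in sorted(simulated):
        if auc >= target_auc:
            return n_cameras, n_cameras * CAMERA_UNIT_COST
    return None  # no simulated configuration meets the target

print(cheapest_meeting_target(simulated, 0.90))  # (12, 30000)
```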

passenger stands. In this way, the security personnel using the security mobile app need to identify passengers only by their index number in the line in which they stand when reporting any suspicious behavior about them to the risk assessment back office system. Thus, the anonymity of passengers and the protection of their personal data are maintained by the security mobile app. The information sent this way by the security personnel on the floor is then fused along with all other risk assessment reports about each passenger, and the risk estimate is updated. The risk is reported to the security screening system and the passenger is classified into one of the three risk categories, namely green, yellow, or red, as mentioned earlier.

*3.2.1 System architecture and interfaces*

In FLYSEC [3], a novel system architecture for security and safety surveillance systems, which aims to identify adverse events or behaviors that may endanger the safety of people or their well-being, has been introduced [12]. Through proper adaptations, the system is applicable to a variety of monitoring systems for various critical infrastructures, border crossing points, and other places of interest (e.g., malls, mass transport systems). The proposed architecture depicts an Internet of Things (IoT) platform which comprises a sensing tier, a back-end processing and intelligence tier, and a front-end visualization and user feedback tier. Monitoring and surveillance are performed mainly by the back-end intelligence component, which consists of two modules: (a) the event detection module, combined with a data fusion component responsible for fusing the sensor inputs along with relevant high-level metadata, which are pre-defined features correlated with a suspicious event; and (b) an adaptive learning module, which takes input from security personnel about the correctness of the detected events and uses it to properly parameterize the event detection algorithm. Moreover, a statistical and stochastic analysis component is incorporated, which is responsible for specifying the appropriate features to be used by the event detection module. Statistical analysis estimates the correlations between the features employed in the study, while stochastic analysis is used to estimate the dependencies between the features and the achieved system performance.

The system architecture is organized in three tiers: sensing components, back-end components, and front-end devices. The sensing components are responsible for acquiring input, which is either high- or low-level heterogeneous data coming from visual sensors (CCD, IR, etc.), biometric sensors (fingerprints, other), audio sensors (microphones), indoor localization equipment (Wi-Fi, beacons, RFID scanners, etc.), document scanners which provide information about visitors (for example, travel documents in an airport, or purchase information recorded on personal electronic discount cards), or human reports via terminal devices (e.g., PDAs, mobile phones, tablets, etc.).

Front-end devices are responsible for visualizing information to end users and assisting their operations (for example, official authorities receiving information about detected incidents of great interest, or visitors getting navigation information inside an infrastructure). Front-end devices consist of official management terminal tools, which manage the information collected and processed by the back-end and sensing components and assist personnel operations by providing alerts and notifications about significant events (**Figure 21**), as well as visualizations of the infrastructure's layout along with real-time updates about essential points of interest (for example, queue sizes, sensor viability, crowd distribution, etc.) (**Figure 22**). Moreover, front-end devices also include mobile user devices which operate as personal assistants to passengers at an airport or a BCP. These mobile devices may provide online and offline services regarding indoor navigation, recommendation
