**1. Introduction**

Artificial Intelligence (AI) has been rapidly assimilated into our daily lives over the past decade. It shapes many of our behaviors, from how we use social media to decisions that will affect the future of the planet. To be employable in this expanding sector, it is therefore essential to understand the primary distinctions between Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) [1–5]. In the following sections we define AI, ML, and DL and emphasize the key distinctions between the two latter subsets (see **Figure 1**).

The term "Artificial Intelligence" was popularized in 1956 at the Dartmouth conference, organized by John McCarthy, the father of AI. AI is the development of intelligent devices and systems capable of performing tasks that would normally require human intelligence; it is defined as a branch of computer science concerned with the simulation of intelligent behavior in computers. AI systems can carry out a variety of activities associated with human intellect, including planning, learning, manipulation, and problem-solving. Artificial narrow intelligence and artificial general intelligence are the two most commonly discussed types of AI. Artificial narrow intelligence, often referred to as weak AI, describes systems or machines that can carry out a single task; they are unable to perform tasks outside their intended function and capability. The AI machines and programs we use daily are instances of narrow, or weak, AI: frequently cited examples include facial recognition software, email spam filters, and Google Translate. Artificial general intelligence, referred to as strong AI, describes a more sophisticated machine possessing all human talents, including emotional intelligence and creativity. It remains a difficult challenge: no system has yet replicated human intelligence, emotions, and the capacity to respond in unanticipated circumstances, and the only place we can see this type of AI in action is in science fiction films.

Machine Learning (ML) is a branch of AI [6]. It focuses on developing algorithms that can take in the provided data, learn from it, and make judgments based on the patterns found in it. When an unsatisfactory or wrong choice is made, these intelligent systems still need human intervention. ML is used to teach machines how to manage data more efficiently.
We may be unable to perceive the information contained in data simply by viewing it; in that case, we employ ML. With the abundance of datasets now available, the demand for machine learning is increasing, and many industries use ML to extract information from data. The goal of ML is to learn from data. Deep Learning (DL) is a further subdivision of ML. Without requiring human input, it processes data through a number of layers of algorithms and artificial neural networks to arrive at an accurate conclusion. DL was first introduced in the 1980s but has achieved popular success since 2006. Due to the enormous quantity of data needed to train a DL network, significant computational power and time are required. The time required to train a network can eventually be reduced from weeks to hours as cloud computing and Graphics Processing Units (GPUs) advance and grow.

### *Artificial Intelligence Approaches for Studying the* pp *Interactions at High Energy… DOI: http://dx.doi.org/10.5772/intechopen.111552*

As shown in **Table 1**, there are some significant differences between DL and ML based on the descriptions of both concepts above [7–10].
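The layered processing that distinguishes DL can be sketched in a few lines. The following is a minimal illustration, not from the chapter: data flow through two stacked fully connected layers, each applying weights, a bias, and a non-linearity (all weight values here are arbitrary, for demonstration only).

```python
# Minimal sketch of data flowing through stacked layers, the core idea of DL.
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        out.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid squashes z into (0, 1)
    return out

# Two stacked layers: 3 inputs -> 2 hidden units -> 1 output
x = [0.5, -1.2, 3.0]
h = layer(x, weights=[[0.1, 0.4, -0.2], [0.3, -0.1, 0.2]], biases=[0.0, 0.1])
y = layer(h, weights=[[0.7, -0.5]], biases=[0.2])
print(y[0])
```

A real DL network stacks many such layers and learns the weights from data rather than fixing them by hand, which is what makes the large data and compute requirements described above necessary.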

ML, a component of AI, teaches programs how to spot patterns in datasets and use those patterns to infer conclusions and predictions. Datasets, features, and algorithms are all essential to ML software. Datasets are necessary because they are used to train machine-learning programs to identify patterns and correlations; such datasets may contain images, numbers, words, and other types of data. Features, often known as variables, draw attention to the important data points the program should concentrate on. To teach a program how to make the best judgments, the appropriate features and algorithms must be chosen; these features and algorithms are the tools for data analysis. Even when different methods applied to the same task produce comparable solutions, the speed and precision with which the results are obtained may vary. It is also crucial to remember that the accuracy of the results is determined by the quality of the data, so gathering reliable data will help in reaching the intended results [11, 12].
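The dataset/features/algorithm split described above can be made concrete with a toy sketch (not from the chapter): the dataset is a list of labelled feature vectors, the features are two hypothetical numbers per email (word count and link count), and the algorithm is 1-nearest-neighbour classification.

```python
# Toy illustration of dataset + features + algorithm in ML.
def nearest_neighbour(train, query):
    """Return the label of the training point whose features are closest to `query`."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    label, _ = min(((lbl, dist2(feat, query)) for feat, lbl in train),
                   key=lambda t: t[1])
    return label

# Dataset: features are (word_count, link_count) of an email; labels are given.
train = [((120, 0), "ham"), ((15, 6), "spam"), ((200, 1), "ham"), ((10, 9), "spam")]
print(nearest_neighbour(train, (12, 7)))  # a short, link-heavy email -> prints "spam"
```

Swapping the distance function or the classifier changes the speed and precision of the result, while the quality of the training data bounds the accuracy, exactly as the paragraph above notes.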

Forecasting and prediction of data, such as stock market prices, are frequent applications of machine learning. Other straightforward applications include email spam filters and social media connection suggestions, while DL is used in more complex situations such as autonomous vehicles, where the network is able to identify traffic lights, detect obstacles, and more. AI approaches have recently been used in a number of modeling methodologies based on soft-computing systems. In the fields of nuclear physics, high-energy physics, materials science, and other sciences, these evolutionary algorithms have a strong physical footing. The interaction behavior is complicated by the non-linear relationship between the interaction parameters and the result, so AI approaches are essential for the multi-part data processing required to understand the interactions of fundamental particles. These techniques act as alternatives to more traditional methods. In this regard, AI methods such as genetic algorithms, genetic programming, and Gene Expression Programming (GEP) can be used as substitute tools to imitate these interactions. The learning algorithm of AI approaches serves as a driving force behind their use: it discovers the links between variables in datasets and then builds models to account for those associations (mathematical dependencies). To evaluate experimental data and gain a better understanding of many physics processes, new computer-science approaches are needed; experimental data are acquired and described by a mathematical equation.
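The way an evolutionary algorithm discovers links between variables can be sketched with a minimal genetic algorithm. This is an illustrative toy, not the chapter's method: the "experimental data" are synthetic points generated from a hidden linear law y = 2x + 1, and the algorithm evolves candidate (a, b) pairs for the model y = a·x + b by selection and mutation.

```python
# Toy genetic algorithm recovering a hidden relation from data points.
import random

random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in range(10)]  # hidden law: a = 2, b = 1

def fitness(ind):
    """Negative squared error of the candidate model on the data (higher is better)."""
    a, b = ind
    return -sum((a * x + b - y) ** 2 for x, y in data)

# Initial random population of candidate (a, b) pairs
pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # keep the fittest (elitism)
    pop = parents + [                       # mutate parents to fill the population
        (p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1))
        for p in random.choices(parents, k=30)
    ]
best = max(pop, key=fitness)
print(best)  # close to the hidden (2.0, 1.0)
```

Genetic programming and GEP extend this idea by evolving the *form* of the equation itself rather than only its numeric parameters, which is what makes them attractive for describing particle-interaction data by a mathematical equation.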

The success of modeling enables us to anticipate regions where experimental data are lacking. Due to its generalization, noise tolerance, and fault tolerance, AI has become increasingly popular in recent years as a potent tool for creating data correlations and has been successfully applied in materials science. Alaa F. Abd El-Rehim et al. used an ANN to simulate and forecast the Vickers hardness [13–16]. H. A. M. Ali and D. M. Habashy used an ANN model for calculating the electrical impedance, AC conductivity, and dielectric properties [17]. D. M. Habashy et al. simulated and forecast the entropy per rapidity using an ANN model at LHC energies [18]. D. M. Habashy et al. used an ANN model trained on particle multiplicity per rapidity for different charged particles observed in Au + Au heavy-ion collisions at energies varying between 2 and 200 GeV [19]. D. M. Habashy et al. trained on and forecast the micro-hardness of nanocrystalline TiO2 using ANNs [20].

**Table 1.**
*Comparison between DL and ML.*
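The common thread in the ANN applications cited above is fitting a parametrised model to measured points and then interpolating where data are lacking. A minimal sketch of that idea, with synthetic points rather than the published measurements, is a single linear unit trained by gradient descent:

```python
# Minimal sketch: fit y = w*x + c to data points, then predict where no data exist.
def train_linear(points, lr=0.01, epochs=2000):
    """Fit y = w*x + c by gradient descent on the mean squared error."""
    w, c = 0.0, 0.0
    n = len(points)
    for _ in range(epochs):
        gw = sum(2 * (w * x + c - y) * x for x, y in points) / n
        gc = sum(2 * (w * x + c - y) for x, y in points) / n
        w -= lr * gw
        c -= lr * gc
    return w, c

points = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.2)]  # noisy samples of roughly y = 2x + 1
w, c = train_linear(points)
print(w, c)              # close to the underlying slope 2 and intercept 1
print(w * 1.5 + c)       # prediction at x = 1.5, where no measurement exists
```

A real ANN replaces the single linear unit with many non-linear units and backpropagates the same kind of gradient, but the modeling-then-forecasting workflow is the one the cited studies follow.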
