**1. Introduction**

The pure sciences, such as mathematics and physics, have been fundamental to human evolution and survival; the great researchers in these areas were among the first Nobel Prize winners in history. Their discoveries have undoubtedly marked milestones in time: the postulation of the different theories of physics has led to technological advances across many areas of science, whose origin has been observational and experimental from the beginning [1].

At this point in history, questions, predictions, and estimates about the behavior of given events or circumstances began to take shape, serving as references for other research areas, especially the medical sciences, in the study of diseases, from their causative agents to the damage they cause to health [1].

These pure sciences gave rise to the computational sciences, which opened the door to research on artificial intelligence. Shortly after the Second World War, around 1950, the first article on artificial intelligence, Computing Machinery and Intelligence, was published in the philosophical journal Mind. Its author, Alan Turing, was the first mathematician and researcher to apply his knowledge to developing a computational machine that could perform mathematical analysis; at the same time, he wondered whether such a machine could have the ability to think like a human being [2, 3].

The first works in artificial intelligence focused on the mathematical sciences, statistical analysis, and an event that marked history during the Second World War: the algorithms used to decode the Nazi attack plans against the Allied countries, whose intervention saved thousands of soldiers and civilians. On that occasion, artificial intelligence was used for the common good, to stop the Machiavellian attacks of a sick and ambitious mind that took more than 20 million lives [2, 3].

However, parallel to these feats, uncertainty also arises from the historical record of advances in the pure sciences: the discovery of uranium and nuclear weapons, intended to ensure human survival; the invention of dynamite, to accelerate construction; and firearms, for human defense. All of these have been questioned for their use in world wars, where human annihilation has prevailed, the most infamous example being the atomic bomb [2, 3].

Today we envision artificial intelligence for the benefit of health alone, allowing diseases to be approached from genetic and environmental risk factors as well as from the social and institutional determinants of the population. However, the concern arises as to whether artificial intelligence will be used only to ensure human survival against the many diseases of the twenty-first century and their evolution, or whether we are approaching our own annihilation [2–4].

The survival of the human race has been constantly threatened since ancient times by outbreaks, epidemics, and pandemics of infectious agents such as bacteria and viruses. One example is the bubonic plague, or Black Death, which wiped out 50% of the European population; more recently, Covid-19 has had a demographic, social, and economic impact. It has brought the world population to its knees and made us understand that we are not indestructible; in fact, we are very vulnerable. We must respect the limits of nature and of the environment that surrounds us, because exceeding them has led us to commit recklessness that puts the human race at risk of annihilation [5].

The question we ask ourselves now is whether we have learned the lesson from previous experiences that have put human survival at risk; whether artificial intelligence will focus only on preserving human life and the environment around us, improving our quality of life; or whether we will once again be conquered by the ambition for power, expansion, and superiority over others, when we should be working together for our common well-being.

The big bang of artificial intelligence in medicine and surgical procedures has already begun: the latest industrial revolution, a technological avalanche aimed at more timely health interventions, enabling real-time decision making and influencing the health of specific populations.

Likewise, it is urgent to regulate the uses and applications of artificial intelligence through laws and norms, imposing limits that guarantee that this technology is used only to preserve the human race, not to replace it, much less to annihilate it.

All countries, whether developed or developing, without exception, must sign international agreements and treaties in which they commit to using this technology only for the common good.

Being prudent in the development of these technologies and sharing this knowledge among the sister countries that we are would, without a doubt, allow unprecedented advances in science in general.

We must remain very alert and be able to impose limits on ourselves so that we do not engage in unethical behavior that puts the human race at risk.

We are at a historic moment in which technological advances have enabled the survival of the human race; a notable example is the technology used to create, in record time, vaccines against Covid-19, a disease that has taken so many lives worldwide.

In this way, we must unite our knowledge and efforts across all research fields, and educational institutions such as universities worldwide must act as guarantors, imposing limits on the different research approaches and thus favoring the good use of artificial intelligence applications.

If the academy is the one that generates knowledge for research purposes and for solving problems, in this case health problems, then it should also impose the limits on its uses with respect to human beings and their environment.

Each research group, in its different areas and from different universities, should be able to build methods focused on this discipline to generate high-level knowledge that can be translated into different computational languages, with the sole purpose of supporting real-time decision making.

Currently, the areas of development of artificial intelligence include: machine learning (deep learning, supervised and unsupervised learning), driven by massive data (large-scale exploitation of data, identifying relationships among them, detecting patterns, making inferences, and learning through probabilistic mathematical models); natural language processing (content extraction, classification, translation, text generation); expert systems (knowledge- and rule-based systems, diagnostics); computer vision (recognition and understanding of images and videos); robotics (advanced laparoscopic surgery such as the Da Vinci system, and the Sojourner, Spirit, Opportunity, and Curiosity robots for space research); and speech recognition [6–8].
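To give a concrete, if simplified, sense of the supervised learning mentioned above, the following sketch shows a one-nearest-neighbour classifier in plain Python; the "patient risk" features and labels are invented for illustration only and do not come from any real dataset.

```python
# Minimal sketch of supervised machine learning: a 1-nearest-neighbour
# classifier on toy data (features and labels are hypothetical).
import math

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, query)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy training set: (feature vector, class label)
train = [
    ((1.0, 1.0), "low risk"),
    ((1.2, 0.8), "low risk"),
    ((6.0, 5.5), "high risk"),
    ((5.8, 6.1), "high risk"),
]

print(nearest_neighbour(train, (1.1, 0.9)))  # near the first cluster
print(nearest_neighbour(train, (6.2, 5.8)))  # near the second cluster
```

The principle is the same one that, at far larger scale and with probabilistic models, underlies the data-driven learning described above: new cases are classified by their similarity to previously labeled examples.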

The applications, in turn, focus on language analysis and understanding, information retrieval, information extraction, question answering, automatic summarization, machine translation, automatic document classification, speech recognition, chatbots, child content control, document and opinion mining, and anti-spam filters. Voice assistants such as Apple's Siri, Cortana, Alexa, and Bixby all rely on natural language techniques [6–8].
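As one tiny illustration of the anti-spam filtering mentioned above, the sketch below scores messages by counting flagged words; real filters use statistical models trained on large corpora, and the word list and threshold here are invented purely for illustration.

```python
# Minimal sketch of a keyword-scoring anti-spam filter (the word list
# and threshold are hypothetical, not from any real system).
SPAM_WORDS = {"free", "winner", "prize", "click", "urgent"}

def is_spam(message, threshold=2):
    """Count flagged words; classify as spam when the count meets the threshold."""
    words = message.lower().split()
    score = sum(1 for w in words if w.strip(".,!?") in SPAM_WORDS)
    return score >= threshold

print(is_spam("Urgent winner! Click to claim your free prize"))  # True
print(is_spam("Meeting moved to Thursday at noon"))              # False
```

Production filters replace the hand-written word list with weights learned from labeled examples, but the underlying idea of scoring textual features is the same.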

Other examples include search engines and entertainment and communication platforms such as Google Search, Google Maps, Netflix, and social networks such as Facebook, Pinterest, Twitter, Instagram, and Google Photos [6–8].

The challenge in the health field is not only to create information systems with artificial intelligence, big data, and data mining using sophisticated algorithms for information management; real-time decision making is the key to intervening quickly in health problems [6–8].
