**1. Introduction**

Artificial Intelligence (AI) is full of biological analogies, and we tend to recognize its machinic bodies and organs through sci-fi stories, as in the case of Computer Vision (CV) in 2001: A Space Odyssey [1]. In the movie [2], HAL 9000, a sentient AI (a Heuristically programmed ALgorithmic computer), is composed of cameras with fisheye lenses and an internal structure constituted by the binomial <algorithms+data>, which in many branches of AI is impossible to dissociate [3]. This structure provides HAL with the visual inputs needed to scan and analyze the spacecraft Discovery, and he is endowed with natural language (similar to a human voice) to interact with the interstellar mission crew.

As in the story, we are surrounded by voices (Siri, Alexa, and Google Assistant) and machine eyes through cell phones, notebooks, and cameras installed on poles, in ATMs, subways, cars, buses, and drones, whether autonomous or guided. All are configured with objectives and must produce results from the inputs presented to them; thus, the scanning of facial expressions passes through a series of tangles inspired by the human brain: the convolutional neural networks (CNNs), which are responsible for the analysis and processing of data.

Videos and images thus become an effective source of knowledge, and a great deal of data is needed for the neural network to learn about who or what it is seeing. These ramifications are made up of norms and models, which regulate the constitutive patterns of what to see, whom to look at, and what should be described about the person being looked at. In computer science terms, these models are commonly known as algorithms: computational, mathematical, and statistical procedures designed to take a value or set of values as input and produce a set of values as output [4, 5].
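In this textbook sense, an algorithm is simply a well-defined procedure from input values to output values. A minimal sketch (the function name and sample values are invented for illustration, not drawn from the cited sources):

```python
def grayscale_histogram(pixels, bins=4):
    """An 'algorithm' in the minimal sense used above: it takes a set of
    values as input (pixel intensities, 0-255) and produces a set of
    values as output (counts per intensity range)."""
    counts = [0] * bins
    width = 256 // bins  # size of each intensity range
    for p in pixels:
        counts[min(p // width, bins - 1)] += 1
    return counts

# A tiny "image" as a flat list of intensities.
print(grayscale_histogram([0, 10, 130, 200, 255, 64]))  # -> [2, 1, 1, 2]
```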

In other words, machines learn by using models and analyzing patterns through algorithms: the computer learns about what it is seeing according to the data presented to it, over and over again, so that it can understand what distinguishes a leopard from a cat, for example. The objective of AI algorithms is thus to learn so that, when new information is presented, they know how to classify it, even if it differs from what was previously shown, analyzing patterns across interrelated data to generate results. From this point of view, Russell [6] and Lee [7] argue that neural networks demonstrate effective recognition after proper training with labeled examples that connect the many data points to the expected result, an action that, according to both, requires massive amounts of "relevant data."
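The train-on-labeled-examples, then-classify-new-input loop described here can be sketched with a deliberately simple stand-in for a neural network: a one-nearest-neighbor classifier. All feature names, numbers, and labels below are invented for illustration:

```python
import math

# Labeled examples: (feature vector, label). Features are invented
# stand-ins (e.g., body length in meters, coat-pattern score).
training_data = [
    ((1.6, 0.9), "leopard"),
    ((1.4, 0.8), "leopard"),
    ((0.5, 0.2), "cat"),
    ((0.4, 0.1), "cat"),
]

def classify(features):
    """Assign the label of the closest labeled example (1-nearest neighbor).
    Like a trained network, it generalizes only from the data it was given."""
    _, label = min(training_data, key=lambda ex: math.dist(ex[0], features))
    return label

print(classify((1.5, 0.85)))  # -> leopard (closest to the leopard examples)
print(classify((0.45, 0.15)))  # -> cat (closest to the cat examples)
```

A new input never seen during "training" is still classified, which is the point of the passage: the decision depends entirely on which labeled examples were supplied.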

The data considered relevant are those actually contained in the machines: even with the massive production of data, not all of it is collected, and what is collected goes through screening. Thus, no matter how sophisticated they are, algorithms are useless in isolation; part of their results rests on the data and samples they contain, as well as on the way they interact with the environment [8]. Crucially, data are people [9]: if a certain group is well represented and others appear on a smaller scale, the "data left out" statistically do not exist for the machine's analysis, so the algorithms treat the well-represented group as hegemonic [10]. In this way, the machine develops algorithmic decision-making based on what appears in the data, establishing parameters that express machinic biases.
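The mechanism can be made concrete with a toy sketch: a baseline model fitted to a collection where one group dominates will default toward that group. The group labels and counts below are invented, and real systems are far subtler, but the dependence of the decision on sample composition is the same:

```python
from collections import Counter

# Invented, deliberately imbalanced "collected" dataset: group A is
# heavily sampled, group B barely appears after screening.
collected_labels = ["A"] * 95 + ["B"] * 5

# A naive baseline model: always predict the most frequent label seen
# during training. Its "decision parameters" are set entirely by the sample.
majority_label, _ = Counter(collected_labels).most_common(1)[0]

def decide(_features):
    return majority_label

print(decide("any input"))  # -> A: the underrepresented group never appears
```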
