**2. Anthropology of algorithmic bias**

Medeiros [3] argues that: a) algorithms can be blamed for certain results even when the input data have been curated; in other cases, b) algorithms specified under inappropriate assumptions generate inappropriate solutions even though the data have been well configured; and there is also the possibility that c) both algorithms and data carry human biases, which may have been inserted unconsciously by their programmers and creators. In this sense, from an anthropological point of view, Forsythe [11] argues that software embodies values tacitly held by those who build it.

*You need to consider sources of bias throughout the data lifecycle – collection, curation, analysis, storage and archiving. [...] the responsibility does not end with archiving data, or delivering software. Regardless of whether bias exists in data, algorithms, or in their combination, it always appears due to humans – in collection, analysis or interpretation, whether intentional or through ignorance. And when adding machines to the binomial, more questions arise. ([3]:12)*
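One concrete reading of the collection and curation stages named in this quote: if the gathered training set under-represents a subgroup, every later stage of the lifecycle inherits that skew. A minimal sketch of checking subgroup representation before training, with hypothetical annotation labels in the spirit of the audits discussed below:

```python
from collections import Counter

# Hypothetical curation-stage annotations for a face dataset; the
# label scheme (perceived gender, coarse skin type) is illustrative.
annotations = [
    {"gender": "female", "skin": "darker"},
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "lighter"},
    {"gender": "female", "skin": "lighter"},
    {"gender": "male", "skin": "darker"},
    # ... thousands more records in a real dataset
]

# Count how often each (gender, skin) subgroup appears.
counts = Counter((a["gender"], a["skin"]) for a in annotations)
total = sum(counts.values())

for group, n in sorted(counts.items()):
    # A subgroup far below its real-world share signals collection
    # bias that analysis, storage, and archiving will inherit.
    print(f"{group}: {n} images ({n / total:.1%})")
```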

To a certain extent, AI algorithms are human extensions, and by extension there is an automated widening of biases. As noted earlier, even when machines such as AI algorithms are described as autonomous, human bias [5, 12] is embedded in their learning. It is with this premise of socially corrosive software that investigations of Computer Vision (CV) figure consistently in debates about its use and about the biases inserted into machines: racist algorithmic biases [13–16], as well as disparities related to gender, class, politics, democracy, and surveillance.

Consider HAL 9000, the computer presented at the beginning of this article: the information contained in it constituted his decision making about what he should do and in which situations he should act, but his actions were based on the decisions of his creator, Dr. Chandra. There is much controversy about the story, but it still presents a scenario that demonstrates how the objectives embedded in analysis software can produce significant changes in the short and long term if they do not go through a bias audit process. In this sense, when analyzing the bibliography and documentary material on algorithmic gender and race bias<sup>1</sup>, it became evident that a large part is articulated around the social binary "woman" or "man."

### **2.1 AWS Amazon Rekognition (Facial Analysis)**

In the documentary *Coded Bias* [17], Joy Buolamwini describes a project for the MIT Media Lab that used facial analysis software from the conglomerates Google Cloud, Microsoft, AWS Amazon, and IBM Watson. In most of this software, her face was not recognized; the tools did not register her as human. Only when she put on the white mask<sup>2</sup>, symbol of the *Algorithmic Justice League*, was there recognition. Over the course of the documentary, these companies change their algorithms/data, which begin to detect the scientist's face and to classify her, indicating gender, age, and emotional aspects.
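Findings like these rest on a simple form of bias audit: compare detection and classification error rates across demographic subgroups. A minimal sketch of such a tabulation, with hypothetical records (the group labels and outcomes below are illustrative, not data from the documentary):

```python
from collections import defaultdict

# One hypothetical record per audited image: self-identified gender,
# a coarse skin-type group, whether a face was detected at all, and
# the gender label the service returned (None if nothing detected).
records = [
    {"skin": "darker", "gender": "female", "detected": False, "predicted": None},
    {"skin": "darker", "gender": "female", "detected": True, "predicted": "male"},
    {"skin": "lighter", "gender": "male", "detected": True, "predicted": "male"},
    # ... one record per image in the audit set
]

stats = defaultdict(lambda: {"n": 0, "detected": 0, "misgendered": 0})
for r in records:
    s = stats[(r["skin"], r["gender"])]
    s["n"] += 1
    if r["detected"]:
        s["detected"] += 1
        if r["predicted"] != r["gender"]:
            s["misgendered"] += 1

for group, s in sorted(stats.items()):
    detection = s["detected"] / s["n"]
    error = s["misgendered"] / s["detected"] if s["detected"] else float("nan")
    print(f"{group}: detected {detection:.0%}, gender error {error:.0%}")
```

Gaps between subgroups in these two rates are precisely the disparities such audits surface.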

Considering that algorithms are not static, conglomerates frequently modify, add, or remove data, changing nuances of the algorithms, whether for user experience (UX) or to keep stock prices from fluctuating. After all, the CEOs of Google Cloud, Microsoft, AWS Amazon, IBM Watson, and other companies would not allow their brands to be linked to racism, gender discrimination, or any other factor that harms the brand, not out of social concern, but for monetary reasons.

As a result, when researching the aforementioned tools, AWS Amazon Rekognition (Facial Analysis) proved to be the most accessible, offering a layout in which the user can choose which tools she needs, in addition to "no monetary charges." Undoubtedly, in this segment, the principle of advertising does not fail: "there is no free lunch." If I do not pay in cash, my information becomes the bargaining chip, and that includes email address, digital traces, photo, age, academic background, orders placed on Amazon, mobile number, geolocation, and so on. Google was also tested, but not to the same degree as Amazon (**Figure 1**).

Let us return to the tool. AWS Amazon Rekognition (Facial Analysis) is, as previously mentioned, based on computer vision and deep learning; its machine learning works by repeatedly searching large volumes of data for patterns in order to discern and recognize certain images. With that, it performs facial recognition and analysis, detects objects and text, reports where faces are detected in an image or video, assigns points on faces and eye positions, and "detects emotions" (e.g., happy or sad) [18]. I then tested the tool to see whether it could give a result different from the one shown in the documentary, in which Buolamwini was made invisible. I inserted 15 photographs of women, black celebrities who socially identify as straight, and the machine recognized them, classifying them according to the available parameters as follows: looks like a face 99.9%, looks like a woman 99.9%, age range 27–37 years, smiling 96.1%, looks happy 95.9%, not wearing glasses 97.4%, not wearing sunglasses 99.9%, eyes open 97.5%, mouth open 95.4%, no mustache 98.2%, and no beard 95.8%.

**Figure 1.**

<sup>1</sup> Joy Buolamwini revealed biases in facial analysis algorithms from Amazon, IBM, Google Cloud, Face++, Microsoft, and others, demonstrating that the services often classified black women as "men" but made few or no such mistakes with light-skinned men. In: Ref. [17].

<sup>2</sup> Joy Buolamwini describes this moment in relation to Frantz Fanon's [1925–1961] *Black Skin, White Masks*, in which she comes to question the complexities of changing herself by putting on a mask to conform to the norms or expectations of a dominant culture, in this case of dominant technologies.
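For reference, a test of this kind can also be scripted directly against the service's API rather than through the console. Below is a minimal sketch using the boto3 SDK's DetectFaces operation; the region, file name, and printed fields are illustrative choices, and it assumes AWS credentials are already configured:

```python
import boto3

# Minimal sketch: send one local photograph to Amazon Rekognition's
# DetectFaces operation and print the same kinds of attributes the
# console test above reported. Region and file name are illustrative.
client = boto3.client("rekognition", region_name="us-east-1")

with open("portrait.jpg", "rb") as f:  # hypothetical local photograph
    image_bytes = f.read()

# Attributes=["ALL"] requests the full attribute set: gender, age
# range, smile, emotions, glasses, facial hair, and so on.
response = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])

for face in response["FaceDetails"]:
    print(f"looks like a face: {face['Confidence']:.1f}%")
    print(f"gender: {face['Gender']['Value']} ({face['Gender']['Confidence']:.1f}%)")
    print(f"age range: {face['AgeRange']['Low']}-{face['AgeRange']['High']}")
    print(f"smiling: {face['Smile']['Value']} ({face['Smile']['Confidence']:.1f}%)")
    # Emotions arrive as a ranked list; take the service's top guess.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"emotion: {top_emotion['Type']} ({top_emotion['Confidence']:.1f}%)")
```

Running the same script over a set of photographs and recording the results per subgroup is one way to repeat the kind of comparison described above.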
