**2.2 Algorithmic misgendering**

Thus, based on the results obtained, I asked what would happen if photographs were inserted of bodies that do not correspond (and need not correspond) to the expectations of "conventional gender norms": would the tool analyze, identify, and classify the characteristics of people of dissident genders, and if so, how is it taught to "see" them? Would it do so through the biological/social limits of the "male/female" binary [19, 20], or would such classification be unfeasible?

The analysis was carried out from December 2021 to February 2022 and again from March 2022 to July 2022; these intervals were deliberate, in order to verify whether the classification algorithms' gender labels would change over time. In each month, 15 photographs were used, this time of celebrities who socially identify as non-binary. I observed that the Amazon Rekognition facial analysis method includes the following triangulations: (a) inverted pyramid: eyes, nose, and lips from one end to the other; and (b) upright pyramid: nose and lips from one end to the other. These triangulations are part of the parameters through which the algorithms focus on the biological for precision analysis (a sketch of how such points can be read out is given below). In all of the analyzed photographs, no classification transposed the female/male binary.
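As a minimal, hedged illustration of those triangulation points, the snippet below assumes the landmark structure documented for Rekognition's DetectFaces operation (a "Landmarks" list of "Type"/"X"/"Y" entries); the coordinate values are illustrative placeholders, not measurements taken from the photographs analyzed here.

```python
# Sketch: reading the eye/nose/mouth landmarks behind the "pyramid"
# triangulations from an Amazon Rekognition FaceDetail record.
# The coordinates below are illustrative placeholders, not real measurements.

face_detail = {
    "Landmarks": [
        {"Type": "eyeLeft",    "X": 0.38, "Y": 0.40},
        {"Type": "eyeRight",   "X": 0.62, "Y": 0.40},
        {"Type": "nose",       "X": 0.50, "Y": 0.55},
        {"Type": "mouthLeft",  "X": 0.42, "Y": 0.70},
        {"Type": "mouthRight", "X": 0.58, "Y": 0.70},
    ]
}

# Index the landmarks by type for easy lookup.
points = {lm["Type"]: (lm["X"], lm["Y"]) for lm in face_detail["Landmarks"]}

# (a) "inverted pyramid": eyes across to the mouth corners.
inverted_pyramid = [points["eyeLeft"], points["eyeRight"],
                    points["mouthRight"], points["mouthLeft"]]

# (b) "upright pyramid": nose down to the mouth corners.
upright_pyramid = [points["nose"], points["mouthLeft"], points["mouthRight"]]

print(inverted_pyramid)
print(upright_pyramid)
```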


According to AWS, the binary gender prediction (male/female) is based on the physical appearance of a face in a given image; it does not indicate a person's gender identity [21].
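For context, a hedged sketch of how such a prediction is requested follows, using the boto3 detect_faces call with Attributes=["ALL"]; the file name is a placeholder, and an AWS account configured for Rekognition is assumed. The call returns JSON fragments like the ones reproduced below.

```python
# Sketch: requesting Amazon Rekognition's face attributes, including the
# binary "Gender" prediction discussed in the text.
# Assumes boto3 is installed and AWS credentials are configured;
# "photo.jpg" is a placeholder file name.
import boto3

client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # include Gender, emotions, landmarks, etc.
    )

for face in response["FaceDetails"]:
    gender = face["Gender"]
    # The API returns only "Male" or "Female" plus a confidence score,
    # e.g. {"Value": "Female", "Confidence": 55.51...} as shown below.
    print(gender["Value"], round(gender["Confidence"], 2))
```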

```
"Gender": {
    "Value": "Female",
    "Confidence": 55.517173767089844
},
```

Here the machine returns the label as 55.51% female, but the person considers themselves genderless/gender fluid.

```
"Gender": {
    "Value": "Female",
    "Confidence": 99.7735595703125
},
```

Above, the machine returns the label as 99.77% female, but the person self-identifies as non-binary. The other factors that contribute to the analysis are not shown; that is, we do not know what data the tool's analysis is composed of in order to arrive at this result.

The use, treatment, and/or mention of gender terms that do not correspond to the self-identity of a non-binary or trans person generates the experience of misgendering [22–24]. This happens even when recognition goes beyond the human social sphere: in Automatic Gender Recognition (AGR), the automatism that algorithmically identifies the gender of individuals, the same self-identification errors are produced, and this is what is considered algorithmic misgendering.

In fact, the joining of data and algorithms, together with the company's regulation, determines the machinic vision, indicating that the biological inscribed in the social overrides self-identity. Invisibility thus occurs through the prominence of the "norm" that hegemonizes bodies and renders them invisible in the sense of self-identity, while the dynamics of observation, analysis, and classification assign labels that match the expected result: it seems to be a woman, or it seems to be a man, according to the imposed biological/social norm.

Google, however, returned gender-related labels such as: person. The "algorithmic adaptation" of this tool was to decode the labeling so as to indicate that the analyzed image is of a person, a human: a neutral, colorless, and genderless being. Indeed, the results indicate that these analysis tools do not learn and do not know that there are people who do not fit the categorical pair woman/man, so the catalog of possible identities [25] is not part of what the machines need to learn.
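A minimal sketch of how such labels are obtained follows, assuming the google-cloud-vision client library and configured Google Cloud credentials; the file name is a placeholder.

```python
# Sketch: requesting label detection from Google Cloud Vision, which in the
# analyses described above returned gender-neutral labels such as "Person".
# Assumes google-cloud-vision is installed and credentials are configured;
# "photo.jpg" is a placeholder file name.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each annotation carries a description (e.g. "Person") and a score.
    print(label.description, round(label.score, 2))
```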

The limitations presented in this research are in line with those of Keyes [24] and Scheuerman [26], in which the search for diversity in these tools runs up against a great wall of data that cannot be analyzed or reviewed, yet still returns results. According to the objective imposed by the companies, the machinic eye is organized according to the categorical binary woman/man.
