**Abstract**

The growing use of Computer Vision (CV) has sparked debate about bias embedded in the technology. Although machines are often described as autonomous, human bias enters through the data labeling required for effective machine learning. Training neural networks properly demands massive amounts of "relevant data," yet not all data is collected; this produces a one-sided view, shaped as much by the data that is gathered as by the data that is not. The machine develops its algorithmic decision-making from the data it is presented with, which can produce machinic biases along lines of gender, race/ethnicity, and class. This raises questions about which bodies machines recognize and how machines are taught to "see" beyond a binary "male or female" classification. This study aims to understand how Amazon's Rekognition, a facial recognition and analysis tool, analyzes and classifies people of dissident genders who do not conform to "conventional" gender norms. Understanding the mechanisms behind the technology's decision-making processes can point the way toward more equitable and inclusive outcomes.

**Keywords:** artificial intelligence, computer vision, algorithmic bias, misgendering, Amazon Rekognition
