An International Platform for Perspectives that Transcend the Traditional Divides Between the Humanities and STEM

Decoding Discrimination: Bias Perpetuated by Technology

In a world where humans depend on technology for even simple tasks, we place a great deal of trust in computers to be impartial. However, as technology improves and artificial intelligence sees wider use, the negative effect of algorithmic discrimination on job applicants has become increasingly prominent. Numerous companies filter applicants through resume screening, and behind this seemingly innocuous process lies a machine learning-based algorithm that decides whether an applicant is qualified for a position. Discrimination, though popularly viewed as a political issue, stems in large part from misplaced trust in technology.

The disparities in treatment by algorithms are impossible to ignore. Algorithms can disadvantage individuals on the basis of protected attributes before their resumes ever reach employers, simply by displaying hiring advertisements in a discriminatory manner. In an empirical study, Lambrecht and Tucker found that women were less likely to see ads for STEM careers on Facebook and similar platforms, a disparity they attributed to the higher cost of advertising to women [1]. Sahar Sajadieh, who holds a PhD in Media Arts and Technology from the University of California, Santa Barbara, shared an instance in which a robot she designed displayed algorithmic discrimination. The robot was built to flirt with humans, so she trained it to identify the warmest spot in the room and then, to confirm that the target was a person, to detect whether it had a face, using the most popular open-source package she could find. She trained the algorithm using the Cornell Movie Database and data from OkCupid. In one of her demonstrations, however, she discovered that the facial recognition algorithm had difficulty recognizing Black faces: her robot would only approach, and ultimately flirt with, Caucasian people. Though the exact origin of the bias is disputed, the episode shows how historical discrimination and a lack of consideration for BIPOC are reflected in widely used technical resources.

One of the largest cases of algorithmic discrimination was discovered in Amazon’s hiring process. Amazon sought to develop an algorithm to rank applicant resumes based on keywords [2]. Resumes that contained the word “women,” even in the context of a phrase like “women’s chess club captain,” were penalized, while resumes containing verbs associated with masculinity, such as “executed” and “captured,” were favored. The result was that a higher percentage of resumes submitted by men made it past the initial screening process.
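These incidents come down to a measurable gap in outcomes between groups, whether the share of faces a detector recognizes or the share of resumes that survive a screen. Below is a minimal sketch of that kind of per-group check, applied to the face-detection example; the groups, counts, and results are invented for illustration and are not drawn from Sajadieh’s project or any particular detector.

```python
# Hypothetical audit of a face detector's detection rate by demographic group.
# Each record is (group_label, face_was_detected); all values are illustrative.
from collections import defaultdict

detections = [
    ("lighter-skinned", True), ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", True),
]

def detection_rates(records):
    """Return the fraction of images in which a face was detected, per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, detected in records:
        totals[group] += 1
        hits[group] += int(detected)
    return {group: hits[group] / totals[group] for group in totals}

for group, rate in detection_rates(detections).items():
    print(f"{group}: {rate:.0%} of faces detected")
# A large gap between groups signals that the detector (or its training data)
# underperforms for some people -- the failure the robot exhibited in practice.
```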

The list of case studies of discrimination is unending, in part because it is difficult to pinpoint the stage of development at which bias is introduced.

An essential stage of algorithm development that contributes to bias is goal definition, where companies must translate conceptual human goals into something an algorithm can understand and execute. The gap between the goal as we state it in human language and what the algorithm is ultimately designed to optimize is the source of this bias. In the Amazon discrimination scandal, for example, the algorithm’s goal was to find the ideal employee for technical positions by filtering the resumes the company received and selecting those best suited for the role [3]. The algorithm defined the ideal employee using data on which types of applicants had historically been hired, and in doing so Amazon introduced bias into its resume screening. Because it is difficult for companies to quantify ideas such as competence when designing an algorithm, goal definition is a significant phase at which bias is often introduced.
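To make that gap concrete, consider a minimal, entirely hypothetical sketch of such a screening model: the stated goal is to find competent applicants, but the only label available for training is whether past recruiters hired someone, so that proxy becomes what the model actually optimizes. The resumes, labels, and scikit-learn pipeline below are invented for illustration and do not represent Amazon’s actual system.

```python
# Minimal, hypothetical sketch of a resume-screening classifier.
# The human goal is "identify competent applicants," but the training label is
# "was this applicant hired in the past?" -- so the model learns to reproduce
# historical hiring decisions, biases included.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed backend migration, captured market data, led platform rewrite",
    "captain of women's chess club, built data pipelines, tutored algorithms",
    "executed load testing, shipped distributed cache, on-call lead",
    "women's coding society organizer, published ML research, taught statistics",
]
# Proxy label: historical hiring outcome (1 = hired), not actual competence.
historically_hired = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, historically_hired)

# Because the label encodes past decisions, terms correlated with who was hired
# before (here, invented "masculine" action verbs) receive positive weight,
# while a term like "women" receives negative weight in this toy dataset.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for term in ("executed", "captured", "women"):
    print(term, round(weights.get(term, 0.0), 3))
```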

The difficulty of stopping algorithmic bias stems partly from the repetitive cycle that machine learning and artificial intelligence naturally sustain. In several case studies, the discrimination was caused in part by the datasets used to train the algorithm. In the Amazon case, the algorithm was designed to find applicants for tech positions, and the hiring data used to train it reflected the general male dominance of the tech industry [4]. The data carried the biases of Amazon’s human recruiters accreted over the previous ten years; once trained, the algorithm continued the cycle of discrimination, only more efficiently. Training data is drawn from a wide variety of sources, from Wikipedia articles written by the public to classic books read in school curricula, so the bias an algorithm learns and expresses is a product of the subconscious biases that permeate the body of information humans have created, shared, and consumed [5]. Flawed data reflecting past internal hiring trends traps companies into unknowingly perpetuating the very discrimination that inhibits workplace diversity. In short, the biases of humans as a collective are reflected in algorithms as they learn to copy human thought and action, which is why algorithmic discrimination is so hard to eradicate when humans still struggle to address their own overt and subconscious biases.
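The same kind of per-group comparison is how a company can check whether a screen trained on flawed historical data is perpetuating a disparity: compare selection rates across groups, the adverse-impact ratio behind the common “four-fifths” rule of thumb in U.S. employment guidance. The counts below are a sketch with invented numbers, not figures from the Amazon case.

```python
# Hypothetical adverse-impact check on a resume screen's outcomes.
# Selection rate = passed / applied, computed per group; the common
# "four-fifths" rule of thumb flags ratios below 0.8 relative to the
# highest-rate group. All counts are invented for illustration.

screen_outcomes = {
    # group: (applicants screened, applicants passed to a human recruiter)
    "men": (500, 200),
    "women": (500, 110),
}

def adverse_impact(outcomes, threshold=0.8):
    rates = {g: passed / applied for g, (applied, passed) in outcomes.items()}
    best = max(rates.values())
    report = {}
    for group, rate in rates.items():
        ratio = rate / best
        report[group] = (rate, ratio, ratio < threshold)
    return report

for group, (rate, ratio, flagged) in adverse_impact(screen_outcomes).items():
    status = "FLAG" if flagged else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```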

In an attempt to manage algorithmic discrimination, the European Union’s General Data Protection Regulation (GDPR) took effect in May 2018. The GDPR emphasizes transparency between those who collect data and the people they collect it from. In the hiring process, applicants have the right, under Article 22, not to be subject to decisions based solely on automated processing. Companies that use resume screening or other automated processes must say so on their websites. For resume screening itself, this transparency requirement implies overhauling unethical and highly discriminatory algorithms that filter applicants.

Although features such as address or zip code can still serve as proxies from which algorithms infer socioeconomic status, the GDPR has been effective in engendering ethical awareness in hiring processes across the European Union. Protections in place before the GDPR included the Data Protection Directive and Article 14 of the European Convention on Human Rights, which guarantees non-discrimination on the basis of gender, race, ethnic background, or sexual identity. Direct as these protections were, they proved difficult to apply in the realms of artificial intelligence and machine learning: algorithms are rarely intended to be biased, and in complex neural networks it is almost impossible to determine whether bias is intentional at all. The GDPR was a necessary ethical measure to strengthen protections in an evolving hiring process.

As technology improves and artificial intelligence sees wider use, the negative effect of algorithmic discrimination on job applicants has grown increasingly prominent. The issue, though hard to detect because of the black-box nature of algorithms, can only be solved with increased awareness from both job applicants and hiring companies. Discrimination is not just a social and political issue but also a technological one, which society often overlooks because of misplaced trust in mathematical algorithms.

[1] Anja Lambrecht and Catherine Tucker, “Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads,” Management Science 65, no. 7 (April 2019).

[2] Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[3] Ibid.

[4] Ibid.

[5] Cade Metz, “We Teach A.I. Systems Everything, Including Our Biases,” The New York Times, November 11, 2019, https://www.nytimes.com/2019/11/11/technology/artificial-intelligence-bias.html.
