Throughout US history, racial bias has driven discrimination against many ethnic groups. Today, artificial intelligence is holding a mirror up to our faces, reflecting the biases rooted in the training data we give it.
Bias and discrimination against African Americans have manifested in many ways. In banking, African Americans struggled to get loans (Badger). In medicine, African Americans were often treated worse and taken less seriously than whites (Schroeder). In the housing market, African Americans were systematically blocked from mortgages and segregated from whites (Fullilove). In the present day, AIs trained on biased historical data are exhibiting the same biases as humans and are being allowed to thrive under the mask of the objective algorithm. In criminal justice, African Americans are denied bail more often and receive longer prison sentences than whites for the same crimes because many districts use biased AI to help speed up decisions (Larson). In the job market, African Americans struggle to get selected for interviews by AIs trained on historical records in which only whites were hired (Benson). This problem is enormous in society today, and solving or even minimizing it will take effort from both individuals and organizations.
I discovered this topic through my love of artificial intelligence. Last year I wrote a major research paper, the I-Search, on computer vision, another type of AI, and I absolutely loved learning about the technology. Recently, I have been hearing about major companies hiring people to study the ethics of AI, and one of the main issues those people work on is bias in training data. I am also interested in bias because it is a hard-to-notice problem with large effects. Proving that bias exists in anything is challenging: it is usually hidden among huge amounts of other data and can only be found by looking at the big picture. Only when looking at trends across a large amount of data do differences appear, and even then it is hard to prove that race itself caused the difference (Sherman, Abrams; Johnson). This combination created a unique opportunity where my love of AI connects perfectly with a problem in US history: racial bias.
See my full interest essay: https://docs.google.com/document/d/1mr5wG_8-gzF3Rueal4p6XIsccab-5-WaEc6W6JyLaM0/edit?usp=sharing
The History of the Problem:
African Americans have always faced racial discrimination in some form. For centuries, it manifested itself in slavery. After the Civil War, some improvements were made, but institutionalized racism persisted. In my research, I explored examples of racial bias in common fields of the 20th century and studied their causes, effects, and solutions.
In the housing market for most of the 20th century, redlining was used to systematically discriminate against African Americans. Redlining was the practice of rating neighborhoods by desirability to help banks decide where it was safest, and riskiest, to give loans. This was done by the Home Owners’ Loan Corporation (HOLC). Neighborhoods were given a rating from ‘A’ to ‘D’, with ‘A’ being most desirable and ‘D’ being least desirable (Badger). The HOLC was created in 1933, during the Great Depression, when home foreclosures were running rampant and the Franklin Roosevelt Administration sought to help. The HOLC was its solution, intended to lessen foreclosures by helping banks, insurance companies, and other loan-giving organizations better evaluate where to safely invest (Gaspaire). However, neighborhoods were not rated by actual desirability, but by race. In many cases, having just one non-white family in a neighborhood could cause its rating to fall to a ‘D’ (Fullilove). This caused neighborhoods to become more segregated as whites fled the low-rated neighborhoods, which received less investment (Fullilove). From the 1930s until 1970, the percentage of residents of redlined neighborhoods who were African American kept rising: in the 1930s it was under 20%, but by 1970 it was nearly 50% (Badger). In 1968, the Civil Rights Act outlawed redlining in an attempt to solve this problem. However, much of the damage was already done. The lack of investment made it harder for African Americans to become homeowners, which limited a family’s wealth. This in turn made it harder for their children to own homes, and so on into the next generation. In this way, disparities between previously redlined, ‘D’ rated neighborhoods and ‘A’ rated neighborhoods are still visible today (Badger), 50 years after redlining officially ended.
In sum, racial bias hidden in the practice of redlining caused African Americans to be segregated and suffer from a lack of investment in their neighborhoods.
This is just one of the many areas where African Americans are systematically discriminated against. To learn about other fields, including the criminal justice system and the field of medicine, you can read my whole paper: https://docs.google.com/document/d/1T0O27WxQ7TDW9P_BRqijUD8H24xLIOvSHJI8OCEStjc/edit?usp=sharing
Present Day Problem:
Today, we use artificial intelligence (AI) in many fields with the hope of changing our society for the better; however, America’s racist history is causing those AIs to reflect our biases in their actions. First, it is important to understand how AI works. AI arose when researchers and developers could no longer hand-code solutions to their problems because the problems were too complex. Instead, they designed systems called neural networks, which are meant to mimic the way our brain’s neurons work together to solve problems (Dormehl). Neural networks are made up of thousands, millions, or billions of nodes, each of which performs a simple mathematical operation based on some set of values it holds. By connecting huge numbers of these together, the network can accomplish almost any task. However, programming the values of every node by hand would be far too difficult, so researchers found a way to make networks learn on their own. They do this by starting every node at a random value and then feeding the system test cases. The network is scored on how well it performs, and a method called backpropagation is used to tweak the values of every node to make the system smarter (Nielsen). With that background, it is possible to see why AI picks up our biases from the training data we give it.
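The learning process described above can be sketched in miniature. This is only an illustration, not production code: a tiny network of a few nodes, each doing one simple operation, that starts with random values and uses backpropagation to learn the XOR function from test cases.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Test cases: the XOR function, a classic task a single node cannot learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # number of hidden nodes
# Start every value at random, as described above.
w_h = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b_h = [random.uniform(-1, 1) for _ in range(H)]
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = random.uniform(-1, 1)

def forward(x):
    """Push one input through the network: each node does a tiny calculation."""
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(H)]
    out = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + b_o)
    return h, out

def total_error():
    """Score the network on how well it performs across all test cases."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

lr = 0.5  # learning rate: how strongly each tweak is applied
for _ in range(5000):
    for x, target in data:
        h, out = forward(x)
        # Backpropagation: nudge every value in the direction that shrinks the error.
        d_out = (out - target) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w_o[j] * h[j] * (1 - h[j])
            w_o[j] -= lr * d_out * h[j]
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            b_h[j] -= lr * d_h
        b_o -= lr * d_out

err_after = total_error()
```

After training, the network’s total error is far lower than at the random start; the same loop, scaled to millions of nodes and cases, is how real AIs learn, which is exactly why flawed training data produces flawed networks.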
When an AI is learning, it assumes everything we show it is exactly what we want. Therefore, if an AI being trained to decide who should receive loans learns from a bank’s loan decisions over the last hundred years, the AI will inherit the biases the bank’s workers held over those hundred years. For example, if the bank denied loans to African Americans disproportionately often, the AI would see that pattern in the data and deny African Americans loans just as often (Johnson). This problem can be reduced by never showing the AI race data, but an even worse underlying problem then appears. Because of redlining, the practice described in my historical essay, applicants from certain zip codes, most of them with large African American populations, were denied loans more often. Training on that zip code data would make the AI racially biased without the inclusion of any actual racial data (Johnson). Since it would be exceedingly difficult to train an AI to properly judge mortgages without giving it locations, removing all bias from the training data isn’t always feasible. The only real efforts to solve this problem today come from companies hiring ethics officers to judge their AI’s bias, but the role is not yet widespread. As one example, Salesforce has hired Kathy Baxter to manage the ethics of its AI products (Bruce).
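The zip code proxy effect can be shown with a toy sketch. The records below are entirely invented for illustration: a tiny "model" is trained on zip code and past decisions only, never seeing race, yet its decisions still split along racial lines because zip code stands in for race.

```python
from collections import defaultdict

# Invented past loan decisions: (zip_code, race, approved).
# Zip "10001" plays the role of a formerly redlined neighborhood.
history = [
    ("10001", "black", False), ("10001", "black", False),
    ("10001", "white", False), ("10001", "black", True),
    ("20002", "white", True),  ("20002", "white", True),
    ("20002", "black", True),  ("20002", "white", False),
]

# "Train" on zip code alone -- the race column is deliberately excluded.
approved = defaultdict(int)
total = defaultdict(int)
for zip_code, _race, ok in history:
    total[zip_code] += 1
    approved[zip_code] += ok
rate = {z: approved[z] / total[z] for z in total}

def ai_decision(zip_code):
    # Approve only if the zip's historical approval rate was above 50%.
    return rate[zip_code] > 0.5

# Measure the model's outcomes by race, which it never saw during training.
by_race = defaultdict(list)
for zip_code, race, _ok in history:
    by_race[race].append(ai_decision(zip_code))
approval_by_race = {r: sum(d) / len(d) for r, d in by_race.items()}
```

On this invented data the model approves 75% of white applicants but only 25% of black applicants, despite race never being a feature; the historical redlining pattern flows straight through the zip code.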
For my research, I dove into cases of AI bias and discrimination in the job market and the criminal justice system today. To read more, see my essay on the present day problem: https://docs.google.com/document/d/1YjcK17swfTn7WsQ6MMWJW–oMSEwBmshjGp26MnOm7g/edit?usp=sharing
Call to Action!
On the level of individuals, the biggest thing you can do is educate yourself. First, learn to recognize your own biases. This cuts the problem at its roots: biased people in the past created the biased data that now creates biased AIs. To help educate yourself about biases, read through the information at https://www.aauw.org/2016/03/30/fight-your-biases/. Second, learn the basics of how AI functions. Many people still believe AI is an omnipotent, unbiased decision maker that uses facts alone; even a small amount of research shows this is not the case. By understanding when AI can be biased, you can see where AI shouldn’t be used alone, such as in the criminal justice system, where it could cause real damage. To learn how AI works, you can read the overview in my present day problem essay. In sum, fight the problem at both the bottom and the top: fight your own biases, and keep yourself and others from blindly trusting AI-made decisions.
On a higher level, companies that use AI should closely moderate it. First, AI should never act alone; it can expedite processes, but its decisions should always be overseen or double-checked so they are never followed blindly. Second, AI should not be used at all, or only minimally, where its decisions could hurt or hinder any person’s life or liberty. For example, AI should not decide bail or sentences in the criminal justice system, as programs such as COMPAS, which I mentioned in my present day problem essay, currently do (Larson). Third, companies should do their best to sanitize their data by removing race information or anything else that could cause bias (Foley). This isn’t always possible, but where it is, it could drastically reduce bias. Last, companies should hire diverse teams to work on AI. A homogeneous group sees a problem from only one side, while a diverse group can better spot and remove biases before AIs are released (Socher). Overall, bias in AI is a large problem of the present day, but small actions by individuals and real care by companies can lessen it, and truly fair AI could even reduce bias in society.
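The sanitization step above can be sketched very simply. The record and field names here are invented for illustration; the blocklist includes zip code as well as race, since, as discussed earlier, location can act as a proxy for race.

```python
# Fields assumed sensitive for this illustration (invented names).
SENSITIVE = {"race", "gender", "zip_code"}

def sanitize(record):
    """Return a copy of a training record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

# A hypothetical loan applicant record before and after sanitization.
applicant = {"income": 52000, "credit_score": 680,
             "race": "black", "zip_code": "10001"}
clean = sanitize(applicant)
```

This only strips explicit fields; as the zip code example showed, proxies can survive sanitization, which is why human oversight is still needed on top of it.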
View my full Micro and Macro solutions: https://docs.google.com/document/d/1lH0YRugPR1jV-lgtGJ1oJwOsDnoeE1AZQJpVi_nlk_k/edit?usp=sharing
Now that you have seen my research, please use the comment forum below for constructive, open-minded discussion.