
Artificial Intelligence and Racism: How AI Is a Mirror of Our Ugly Past



[Image: “Benefits & Risks of Artificial Intelligence.”]

Overview:

    Throughout US history, racial biases have caused discrimination against many ethnic groups. Today, artificial intelligence is putting a mirror up to our faces, reflecting the biases rooted in the training data we give it.

    Bias and discrimination against African Americans have manifested in many ways. In banking, African Americans struggled to get loans (Badger). In medicine, African Americans were often treated worse and taken less seriously than whites (Schroeder). In the housing market, African Americans were systematically blocked from mortgages and segregated from whites (Fullilove). In the present day, AIs trained on biased historical data exhibit the same biases as humans and are allowed to thrive under the mask of being objective algorithms. In criminal justice, African Americans are denied bail more often and receive longer prison sentences than whites for the same crime, because many districts use biased AI to help speed up decisions (Larson). In the job market, African Americans struggle to be selected for interviews by AIs trained on past hiring records that show mostly whites being hired (Benson). This is a major problem in society today, and solving, or even minimizing, it will require effort from both individuals and organizations.

My Interest:

    I discovered this topic through my love of artificial intelligence. Last year I wrote a major research paper, the I-Search, on computer vision, another type of AI, and I absolutely loved learning about the technology. Recently, I have been hearing about major companies hiring people to study the ethics of AI, and one of the major issues these people work on is bias in training data. I am also interested in bias because it has such large effects for such a hard-to-notice problem. Proving that bias exists in anything is challenging, as it is most often hidden among huge amounts of other data and can only be found by looking at the big picture. Only when examining trends across a large amount of data do differences appear, and even then it is hard to prove that race itself caused the difference (Sherman, Abrams; Johnson). This combination created a unique opportunity where my love for AI connects perfectly with a problem in US history: racial bias.

See my full interest essay: https://docs.google.com/document/d/1mr5wG_8-gzF3Rueal4p6XIsccab-5-WaEc6W6JyLaM0/edit?usp=sharing

The History of the Problem:

    African Americans have always faced racial discrimination in some form. For centuries, it took the form of slavery. After the Civil War, some improvements were made, but institutionalized racism persisted. In my research, I explored examples of racial bias in common fields of the 20th century and studied their causes, effects, and solutions.

    In the housing market for most of the 20th century, redlining was used to systematically discriminate against African Americans. Redlining was the practice of rating neighborhoods by desirability to help banks decide where it was safest, and riskiest, to give loans. This was done by the Home Owners’ Loan Corporation (HOLC). Neighborhoods were given a rating from ‘A’ to ‘D’, with ‘A’ being the most desirable and ‘D’ the least (Badger). The HOLC was created in 1933, during the Great Depression, when home foreclosures were running rampant and the Franklin Roosevelt administration sought to help. The HOLC was its solution, intended to lessen foreclosures by helping banks, insurance companies, and other loan-giving organizations better evaluate where to safely invest (Gaspaire). However, neighborhoods were rated not by actual desirability but by race. In many cases, having just one non-white family in a neighborhood could cause its rating to fall to a ‘D’ (Fullilove). This caused neighborhoods to become more segregated as whites fled the low-rated neighborhoods, which received less investment (Fullilove). From the 1930s until 1970, the percentage of residents of redlined neighborhoods who were African American kept rising: under 20% in the 1930s, but nearly 50% by 1970 (Badger). In 1968, the Civil Rights Act outlawed redlining in an attempt to solve this problem, but much of the damage was already done. The lack of investment made it harder for African Americans to become homeowners, which limited family wealth. That in turn made it harder for their children to own homes, and so on into the next generation. In this way, disparities between previously redlined, ‘D’-rated neighborhoods and ‘A’-rated neighborhoods are still visible today (Badger), 50 years after redlining officially ended. In sum, racial bias hidden in the practice of redlining caused African Americans to be segregated and to suffer from a lack of investment in their neighborhoods.

    This is just one of the many areas where African Americans are systematically discriminated against. To learn about other fields, including the criminal justice system and the field of medicine, you can read my whole paper: https://docs.google.com/document/d/1T0O27WxQ7TDW9P_BRqijUD8H24xLIOvSHJI8OCEStjc/edit?usp=sharing

Present Day Problem:

    Today, we use artificial intelligence (AI) in many fields with the hope of changing our society for the better; however, America’s racist history is causing those AIs to reflect our biases in their actions. First, it is important to understand how AI works. AI arose when researchers and developers could no longer hand-code solutions to their problems because the problems had grown too complex. Instead, they designed systems called neural networks, which are meant to mimic the way our brain’s neurons work together to solve problems (Dormehl). Neural networks are made up of thousands, millions, or billions of nodes, each of which performs a simple mathematical operation based on some set values it holds. By connecting huge numbers of these together, they can be programmed to accomplish almost any task. However, programming the values of every node by hand would be far too hard, so researchers found a way to make the networks learn on their own: start every node at a random value, feed the system test cases, score the network on how well it performed, and use a method called backpropagation to tweak the values of every node to make the system smarter (Nielsen). The sketch after this paragraph illustrates that training loop. With that background, it is possible to see why AI picks up our biases from the training data we give it.
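
To make this concrete, here is a minimal sketch of that training loop in Python. It is a hypothetical example written for this post, not code from any of my sources: the task (learning XOR), the network size, and all variable names are my own illustrative choices.

```python
import numpy as np

# A minimal two-layer neural network trained with backpropagation,
# learning XOR from four examples. Every weight starts random; each
# step scores the network and nudges the weights to do better.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden-layer "nodes"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backpropagation: work out how much each weight contributed
    # to the error on these test cases ...
    delta_out = (output - y) * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)

    # ... then tweak every weight slightly to make the system smarter.
    W2 -= lr * (hidden.T @ delta_out)
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * (X.T @ delta_hid)
    b1 -= lr * delta_hid.sum(axis=0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```

Notice that nothing in this loop judges whether the examples themselves are fair: the network simply moves toward whatever the training data rewards.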

Learn more about AI from Richard Socher

    When an AI is learning, it assumes everything we show it is exactly what we want. Therefore, if an AI meant to decide who should receive loans is trained on the records of a bank’s loan decisions from the last hundred years, it will absorb the same biases the bank’s workers had over those hundred years. For example, if the bank denied loans to African Americans disproportionately often, the AI would see that pattern in the data and likewise become less willing to give African Americans loans (Johnson). Withholding race data from the AI can help, but it exposes an even worse underlying problem. Because of redlining, the practice described in my historical essay, certain zip codes, most of them with large African American populations, were denied loans more often. Using zip code data would therefore make the AI racially biased without the inclusion of any explicit racial data (Johnson); the sketch after this paragraph demonstrates the effect. Since it would be exceedingly difficult to train an AI to judge mortgage applications without giving it locations, removing all bias from training data isn’t always feasible. The only real efforts to solve this problem today come from companies hiring ethics officers to judge their AIs’ bias, but this role is not yet widespread. As one example, Salesforce has hired Kathy Baxter to manage the ethics of its AI products (Bruce).
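
To show how that proxy effect works, here is a small sketch using made-up data. It is a hypothetical illustration I wrote for this post; the numbers, the column names, and the choice of a scikit-learn logistic regression are all my own assumptions, not taken from any cited study. The model never sees race, only a zip-code flag and an income, yet it learns to penalize the formerly redlined zip code because the historical decisions it trains on were biased.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical historical loan records. Race is NEVER a feature here;
# the model only sees a zip-code flag and an income.
in_redlined_zip = rng.random(n) < 0.5        # formerly 'D'-rated zip
income = rng.normal(50, 15, n)               # in thousands of dollars

# Biased past decisions: at the same income, applicants from the
# redlined zip were approved far less often.
merit = 1 / (1 + np.exp(-(income - 50) / 10))
approved = rng.random(n) < merit * np.where(in_redlined_zip, 0.4, 0.9)

# Train on the biased history, with zip code as an ordinary feature.
X = np.column_stack([in_redlined_zip, income]).astype(float)
model = LogisticRegression().fit(X, approved)

# Two identical applicants, differing only in zip code:
applicants = [[1, 55.0],   # lives in the formerly redlined zip
              [0, 55.0]]   # does not
print(model.predict_proba(applicants)[:, 1])  # first score is far lower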

    For my research, I dove into cases of AI bias and discrimination in the job market and the criminal justice system today. To read more into that, you can read my essay on the present day problem: https://docs.google.com/document/d/1YjcK17swfTn7WsQ6MMWJW–oMSEwBmshjGp26MnOm7g/edit?usp=sharing

Call to Action!

On the level of individuals, the biggest thing you can do is educate yourself. First, learn to recognize your own biases. This helps cut the problem at its roots: biased people in the past created the biased data that creates biased AIs. To help educate yourself about biases, read through the information at https://www.aauw.org/2016/03/30/fight-your-biases/. Second, learn the basics of how AI functions. Many people still believe AI to be an all-knowing, unbiased decision maker that uses facts alone, but even a small amount of research makes clear that this is not the case. By understanding when AI can be biased, you can see where AI shouldn’t be used alone, such as in the criminal justice system, where it could cause real damage. To learn how AI works, you can read the overview in my present day problem essay. In sum, fight the problem at the bottom and the top: fight your own biases, and keep yourself and others from blindly trusting AI-made decisions.

Macro Solutions:

    On a higher level, companies that use AI should closely monitor it. First, AI should never be used alone. AI can be used to expedite processes, but it should always be overseen or double-checked, which prevents its decisions from being followed blindly. Second, AI should not be used at all, or should be used only minimally, when its decisions could hurt or hinder any person’s life or liberty. For example, AI should not be used in the criminal justice system to decide bail or sentences, as is currently done by programs such as COMPAS, which I mention in my present day problem essay (Larson). Third, companies should do their best to sanitize their data by removing race information or anything else that could cause bias (Foley); a simple version of such a check is sketched below. This isn’t always possible, but where it is, it could drastically reduce bias. Last, companies should hire diverse teams to work on AI. A homogeneous group sees a problem from only one side, while a diverse group can better spot and remove biases before AIs are released (Socher). Overall, bias in AI is a large problem of the present day, but small actions by individuals and proper care by companies can lessen the problem, and truly fair AI could even reduce bias in society.
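
As one illustration of the data-sanitization point, here is a small hedged sketch of what such a check might look like. The data and column names are made up for this post, and the correlation test is just one crude way to hunt for proxies, not an industry standard: it drops the race column, then flags remaining columns that still track race, which is exactly how a zip code can smuggle bias back in.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical applicant table; column names are illustrative only.
race = rng.integers(0, 2, n)                 # protected attribute
zip_code = np.where(race == 1,               # segregated zip codes,
                    rng.integers(0, 3, n),   # as redlining produced
                    rng.integers(2, 5, n))
income = rng.normal(50, 15, n)               # unrelated to race here
df = pd.DataFrame({"race": race, "zip_code": zip_code, "income": income})

# Step 1: remove the protected attribute itself from the features.
features = df.drop(columns=["race"])

# Step 2: flag remaining columns that still "leak" the protected
# attribute -- a crude proxy check using correlation.
for col in features.columns:
    score = abs(np.corrcoef(features[col], df["race"])[0, 1])
    verdict = "possible proxy, review before training" if score > 0.3 else "ok"
    print(f"{col}: |corr with race| = {score:.2f} -> {verdict}")
```

In this made-up table, zip_code gets flagged while income passes, mirroring the mortgage example above.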

View my full Micro and Macro solutions: https://docs.google.com/document/d/1lH0YRugPR1jV-lgtGJ1oJwOsDnoeE1AZQJpVi_nlk_k/edit?usp=sharing

Works Cited:

https://docs.google.com/document/d/12ZuGYkhEk3-YNErgZ3qU2tgNRBBqfcssuvNfB0pNk0g/edit?usp=sharing

Comments:

Now that you have seen my research, please use the comment forum below for constructive, open-minded discussion.

  1. April 26, 2019 by Emma.McGaraghan

    Hi Sean- I love your project! You highlight an issue that is especially prevalent in the increasingly technological world, and addressing the problem while it is still in early stages (in the grand scheme of things) is very effective. Particularly because you are so interested in AI and are working on it yourself, it is so cool that you are so passionate about this issue. The only thing I might suggest would be to highlight a specific clip from the TED talk, because it is super interesting and relevant but it is a bit long. Great job!

    • April 27, 2019 by Sean Cavalieri

      Thanks for your feedback! If you want to learn more about AI’s overall functioning, you should watch the beginning of the video, but if you want to learn about AI applications and how bias affects them, then start 2.5 minutes from the end.

  2. April 26, 2019 by Arun Parwani

    This is super interesting! Everyone always talks about how AI will help society or hurt society as a whole, but it’s very intriguing to learn about how a “future” technology shares remnants of our past!

    • April 27, 2019 by Sean Cavalieri

      Exactly! We need to talk about that effect more to help AI become the futuristic helper we all want.

  3. April 26, 2019 by Manasi Garg

    Hi, I think your project is so interesting, especially since this wasn’t a perspective I thought about before. All the information was really well researched and easy to follow. I was just curious to hear more about your experience with AI.

    • April 27, 2019 by Sean Cavalieri

      My experience has been minimal, but I have always loved hearing and learning more about AI. As I mentioned briefly in my personal video, I’ve used computer vision AI and done a research project on how it works. More recently, I made an AI for playing checkers in a computer science class, and I loved doing that too.

  4. April 27, 2019 by Kevin Owen

    Excellent research and presentation, Sean, on an important topic! Do you think there could be a technological way to “blind” data to race with AI in the mortgage application process, similar to the way candidates for orchestra positions have been presented “blindly” to reviewers (https://www.nytimes.com/2018/04/18/arts/music/symphony-orchestra-diversity.amp.html)?

    • April 27, 2019 by Sean Cavalieri

      Thank you! Attempting to “blind” AI to race data only works in some situations. The problem with it in the context of mortgages is that zip codes often inherently carry race data because of redlining, and that is impossible to hide from an AI without ruining its functionality. In situations such as job hiring you might be able to remove race data and not have it carried anywhere else, but it’s always hard to tell. This is why there is no easy solution to the bias problem and why we need active help to recognize bias.

  5. April 30, 2019 by Morgan.Reece

    Hi Sean, your project really blew me away! AI is already an interesting topic and it’s awesome that you went a step further by examining the bias that is present in the world of AI today. I can tell that you are very passionate about the topic and I like that you added links to further information like your “interest essay” and other papers that you’ve written. Amazing job!

    • April 30, 2019 by Sean Cavalieri

      Thank you!

  6. April 30, 2019 by Justin Wong

    Your project did a superb job of showing the magnitude and relevance of this problem in our growing technological age. Your project is a great example of how important history is on our present and future. Have you thought of also exploring the racial discriminations AI algorithms have against other races besides African-Americans?

    • May 03, 2019 by Sean Cavalieri

      Thank you, Justin, and yes, many of my sources include other races and factors, such as gender, but for the scope of this project I chose to focus on one example.

  7. May 02, 2019 by Karen.Bradley

    Hi Sean, I really learned a lot from your project on AI and bias, especially with regard to race. I have learned the basics of AI, but your project was such a terrific example of how AI can be leveraged to learn a lot... or return poor results. I am so impressed with the deep thought you have given to the topic.

    • May 03, 2019 by Sean Cavalieri

      Thank you!

  8. May 03, 2019 by Susanna

    This presentation is super interesting! Bias in AI wasn’t even a problem I was previously aware of, but I think you did a really amazing job on explaining both how these biases can occur, and how these biases could be monitored to negate their harmful effects. Do you think that in the future, AI will continue to follow the path of human discrimination as our cultural views grow and evolve?

    • May 07, 2019 by Sean Cavalieri

      At our current pace, yes, AI will most likely continue to be discriminatory in certain ways or fields. Hopefully over time our culture will remove or at least lessen bias, but until then, bias will work its way into AI in some form.

  9. May 05, 2019 by Hana.Himura

    Hi Sean,
    I really liked your presentation since I never thought about AIs having any bias. It is so interesting that even the newest technology still somehow manages to hold on to our ugly past.

    • May 07, 2019 by Sean Cavalieri

      Yes, one of the biggest problems is that people think new technology is immune to our past, when it isn’t at all.
