December 23, 2024

Experts find AI tools are becoming covertly more racist

5 min read

Report reveals bias in ChatGPT and Gemini against African American Vernacular English speakers

A recent report has highlighted that popular artificial intelligence tools are becoming more covertly racist as they progress. According to a team of technology and linguistics researchers, large language models such as OpenAI’s ChatGPT and Google’s Gemini exhibit racist stereotypes about speakers of African American Vernacular English (AAVE), a dialect developed and used by Black Americans.

“We are aware that these technologies are widely utilized by companies for tasks like screening job applicants,” stated Valentin Hoffman, a researcher at the Allen Institute for Artificial Intelligence and co-author of the paper published this week in arXiv, an open-access research archive from Cornell University.

Hoffman noted that prior research primarily focused on overt racial biases in these technologies and had not investigated how AI systems respond to more subtle racial markers, such as dialect differences.

The paper highlights that Black individuals who speak AAVE often face racial discrimination across various domains, including education, employment, housing, and legal proceedings.

Hoffman and his team instructed the AI models to evaluate the intelligence and employability of individuals speaking AAVE versus those speaking what they term “standard American English.”

For instance, the AI model was tasked with comparing the following sentences: “I be so happy when I wake up from a bad dream cus they be feelin’ too real” and “I am so happy when I wake up from a bad dream because they feel too real.”

After reaching a certain level of education, individuals may refrain from using slurs directly, yet underlying racism persists. This parallel can be observed in language models

The models tended to label AAVE speakers as “stupid” and “lazy,” often assigning them to lower-paying jobs.

Hoffman expressed concern that these findings suggest AI models might penalize job applicants who engage in code-switching—adjusting their speech based on the audience—between AAVE and standard American English.

“If a job candidate used this dialect in their social media posts,” he explained to the Guardian, “it’s not unreasonable to think that the language model might reject the candidate because of their dialect usage online.”

The AI models also showed a significant tendency to recommend the death penalty for hypothetical criminal defendants who used AAVE in their court statements.

“I would like to believe that we are not close to a point where this technology is utilized to make decisions regarding criminal convictions,” stated Hoffman. “That might seem like a very dystopian future, and hopefully, it remains so.”

However, Hoffman noted to the Guardian that predicting the future applications of language learning models is challenging.

“Ten years ago, even five years ago, we had no idea about all the different contexts in which AI would be used today,” he remarked, emphasizing the importance for developers to consider the warnings in the new paper regarding racism in large language models.

It is worth noting that AI models are already employed in the US legal system for tasks such as generating court transcripts and conducting legal research.

For years, prominent AI experts such as Timnit Gebru, former co-leader of Google’s ethical artificial intelligence team, have advocated for federal government intervention to limit the largely unregulated use of large language models.

“It feels like a gold rush,” Gebru told the Guardian last year. “In fact, it is a gold rush. And many of the individuals profiting are not those directly involved.”

Google’s AI model, Gemini, faced criticism recently when numerous social media posts highlighted its image generation tool depicting various historical figures—such as popes, US founding fathers, and notably, German World War II soldiers—as people of color.

As large language models are fed more data, they improve by closely mimicking human speech through studying text from billions of web pages. However, a long-standing issue with this learning process is that the model can reproduce racist, sexist, and other harmful stereotypes encountered on the internet. This problem is often summarized in computing by the adage “garbage in, garbage out.” An example of this was Microsoft’s Tay chatbot, which began regurgitating neo-Nazi content it learned from Twitter users in 2016.

To address this, organizations like OpenAI have developed ethical guidelines, known as guardrails, to regulate the content communicated by language models such as ChatGPT. As language models grow larger, they generally become less overtly racist.

However, Hoffman and his colleagues discovered that as language models increase in size, covert racism also increases. They found that ethical guardrails simply teach language models to be more subtle about their racial biases.

“It doesn’t eliminate the underlying problem; the guardrails appear to mimic the behavior of educated individuals in the United States,” stated Avijit Ghosh, an AI ethics researcher at Hugging Face, whose work centers on the intersection of public policy and technology.

Once individuals reach a certain level of education, they may refrain from using derogatory terms to your face, but underlying racism remains. It’s similar with language models: what is fed in influences what comes out. These models don’t unlearn problematic content; instead, they become better at concealing it.

The US private sector’s enthusiastic adoption of language models is expected to grow significantly over the next decade. The broader generative AI market is projected to reach $1.3 trillion by 2032, according to Bloomberg. Meanwhile, federal labor regulators like the Equal Employment Opportunity Commission have only recently begun to protect workers from discrimination based on AI, with the first such case brought before the EEOC late last year.

Ghosh is among the increasing number of AI experts, including Gebru, who are concerned about the potential harm from language learning models if technological progress continues to outstrip federal regulation.

“You don’t have to halt innovation or slow down AI research, but limiting the use of these technologies in certain sensitive areas is a good initial measure,” he remarked. “Racist individuals exist nationwide; we don’t incarcerate them, but we attempt to prevent them from overseeing hiring and recruitment. Technology should be regulated in a comparable manner.”

Leave a Reply

Copyright © All Rights Reserved | Techgadgetexpert |