Every time a new application of AI is announced, I feel a short-lived rush of excitement — followed soon after by a knot in my stomach. This is because I know the technology, more often than not, hasn’t been designed with equity in mind. One system, ChatGPT, reached 100 million unique users just two months after its launch. The text-based tool engages users in interactive, friendly, AI-generated exchanges with a chatbot developed to speak authoritatively on any subject it’s prompted to address.

In an interview with Michael Barbaro on The Daily podcast from the New York Times, tech reporter Kevin Roose described how an app similar to ChatGPT, Bing’s AI chatbot, which is also built on OpenAI’s language-model technology, responded to his request for a suggestion on a side dish to accompany French onion soup for a Valentine’s Day dinner with his wife. Not only did Bing answer the question with a salad recommendation, it also told him where to find the ingredients in the supermarket and the quantities needed to make the recipe for two, and it ended the exchange with a note wishing him and his wife a wonderful Valentine’s Day — even adding a heart emoji. The precision, specificity, and even charm of this exchange speak to the accuracy and depth of knowledge needed to drive the technology. Who would not believe a bot like this?

Bing delivered this information by analyzing keywords in Roose’s prompt — especially “French onion soup” and “side” — and using matching algorithms to craft the response most likely to answer his query. Those algorithms are trained to answer user prompts using large language models developed by engineers at OpenAI. In 2020, members of the OpenAI team published an academic paper stating that their language model was the largest ever created, with 175 billion parameters behind its functionality. Having such a large language model should mean ChatGPT can talk about anything, right? Unfortunately, that’s not true.
A model this size needs inputs from people across the globe, but those inputs inherently reflect the biases of their writers. This means the contributions of women, children, and other people marginalized throughout the course of human history will be underrepresented, and that bias will be reflected in ChatGPT’s functionality.
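To make that mechanism concrete, here is a minimal, hypothetical sketch of the prediction principle behind a language model: it scores possible continuations by how often they appeared in its training text, so a name that is rare or absent in that text produces thin or empty answers. The tiny corpus and names below are invented for illustration and are not drawn from any real training set.

```python
from collections import Counter

# Invented miniature "training corpus." A real model sees hundreds of
# billions of tokens, but the principle is the same: it learns what
# usually follows what.
corpus = [
    "elvis influenced rock",
    "elvis influenced pop",
    "bessie influenced jazz",
]

def next_word_counts(corpus, prompt):
    """Count which words follow the prompt word in the training text."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 1):
            if words[i] == prompt:
                counts[words[i + 1]] += 1
    return counts

def predict(corpus, prompt):
    """Return the most frequent continuation: prediction, not fact-checking."""
    counts = next_word_counts(corpus, prompt)
    return counts.most_common(1)[0][0] if counts else None
```

A name the corpus never mentions yields no continuation at all: the model cannot say what it never read, which is exactly how underrepresentation in training data becomes underrepresentation in answers.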
AI bias, Bessie, and Beyoncé: Could ChatGPT erase a legacy of Black excellence?
Earlier this year I was a guest on the Karen Hunter Show, and she referenced how, at that time, ChatGPT could not answer her specific question, whether the artist Bessie Smith influenced gospel singer Mahalia Jackson, without additional prompting that introduced new information. While the bot could provide biographical information on each woman, it could not reliably discuss the relationship between the two.

This is a travesty, because Bessie Smith is one of the most important blues singers in American history. She not only influenced Jackson but is credited by musicologists with laying the foundation for popular music in the United States, and she is said to have influenced hundreds of artists, including the likes of Elvis Presley, Billie Holiday, and Janis Joplin. Yet ChatGPT still could not provide this context for Smith’s influence.

This is because one of the ways racism and sexism manifest in American society is through the erasure of the contributions Black women have made. For musicologists to write widely about Smith’s influence, they would have to acknowledge that she had the power to shape the behavior of white people and culture at large. That challenges what author and social activist bell hooks called the “white supremacist, capitalist, patriarchal” values that have shaped the United States, so Smith’s contributions are minimized. As a result, when engineers at OpenAI were training the ChatGPT model, it appears they had limited access to information on Smith’s influence on contemporary American music. This became clear in ChatGPT’s inability to give Hunter an adequate response, and that failure reinforces the minimization of Black women’s contributions as a music industry norm.

For a more contemporary example of how bias can operate, consider that, despite being the most celebrated Grammy winner in history, Beyoncé has never won Record of the Year. Why?
One Grammy voter, identified by Variety as a “music business veteran in his 70s,” said he did not vote for Beyoncé’s Renaissance as Record of the Year because the fanfare surrounding its release was “too portentous.” This opinion, unrelated to the quality of the album itself, contributed to the artist once again going without Record of the Year recognition. Now look to the future from a technical perspective: imagine engineers developing a training dataset of the most successful music artists of the early 21st century. If status as a Record of the Year Grammy award winner is weighted as an important factor, Beyoncé might not appear in this dataset, which is ludicrous.
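A hedged sketch of how that could play out. The artist names and numbers below are invented for illustration; the point is that a single biased inclusion criterion can push the most decorated artist out of a “most successful” training set entirely.

```python
# Hypothetical artist records -- names and win counts are invented.
artists = [
    {"name": "Artist A", "grammy_wins": 32, "record_of_year_wins": 0},
    {"name": "Artist B", "grammy_wins": 6,  "record_of_year_wins": 1},
    {"name": "Artist C", "grammy_wins": 10, "record_of_year_wins": 2},
]

def build_training_set(artists):
    """Hypothetical dataset rule: only artists with at least one
    Record of the Year win count as 'most successful.'

    The criterion looks neutral, but it silently drops Artist A,
    the most-awarded artist in the list."""
    return [a["name"] for a in artists if a["record_of_year_wins"] > 0]
```

Any model trained on a set built this way would simply never see the excluded artist, and the voter's bias would be laundered into an apparently objective dataset.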
Underestimated in society, underestimated in AI
Oversights of this nature infuriate me, because new technological developments are purportedly advancing our society. They are, if you are a middle-class, cisgender, heterosexual white man. If you are a Black woman, however, these applications reinforce Malcolm X’s assertion that Black women are the most disrespected people in America.

This devaluation of the contributions Black women make to wider society affects how I am perceived in the tech industry. For context, I am widely considered an expert on the racial impacts of advanced technical systems, and I am regularly asked to join advisory boards and support product teams across the tech industry. In each of these venues I have sat in meetings where people are surprised at my expertise. This is despite the fact that I lead a team that endorsed and recommended the Algorithmic Accountability Act to the U.S. House of Representatives in 2019 and again in 2022, and that the language it includes around impact assessment has been adopted by the 2022 American Data Privacy Act. Despite the fact that I lead a nonprofit organization that has been asked to help shape the United Nations’ thinking on algorithmic bias. And despite the fact that I have held fellowships at Harvard, Stanford, and the University of Notre Dame, where I considered these issues. Even with this wealth of experience, my presence is met with surprise, because Black women are still seen as diversity hires and unqualified for leadership roles.

ChatGPT’s inability to recognize the impact of racialized sexism may not be a concern for some. It becomes a matter of concern for us all, however, when we consider Microsoft’s plans to integrate ChatGPT into our online search experience through Bing. Many rely on search engines to deliver accurate, objective, unbiased information, but that is impossible, not just because of bias in the training data, but also because the algorithms that drive ChatGPT are designed to predict rather than fact-check information.
This has already led to some notable mistakes, and it raises the question: why use ChatGPT at all? The stakes of a movie mishap are low, but consider that a judge in Colombia has already used ChatGPT in a ruling, a major area of concern for Black people. We have already seen how the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm in use in the United States predicted that Black defendants would reoffend at higher rates than their white counterparts. Now imagine a ruling written by ChatGPT using arrest data from New York City’s “Stop and Frisk” era, when 90 percent of the Black and brown men stopped by law enforcement were innocent.
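The Stop-and-Frisk concern can be made concrete with a small, invented sketch: a system that estimates “risk” from historical stop counts simply inherits whatever bias produced those stops. The neighborhoods and numbers below are fabricated for illustration.

```python
# Invented stop records: neighborhood -> number of historical police stops.
# During the Stop-and-Frisk era, roughly 90 percent of those stopped were
# innocent, so stop counts measure policing patterns, not guilt.
stop_records = {
    "neighborhood_a": 900,   # heavily policed
    "neighborhood_b": 100,   # lightly policed
}

def naive_risk_score(neighborhood):
    """A naive 'predictive' score: share of all recorded stops that
    occurred in this neighborhood. More past stops means higher 'risk,'
    so the bias in the input becomes the bias in the output."""
    total = sum(stop_records.values())
    return stop_records[neighborhood] / total

# neighborhood_a receives a roughly 9x higher score purely because it was
# policed more heavily -- no fact about actual offending enters the model.
```

A ruling or risk assessment built on such a score would launder historical policing decisions into an apparently neutral number.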
Seizing an opportunity for inclusion in AI
If we acknowledge the existence and significance of these issues, remedying the omission of the voices of Black women and other marginalized groups is within reach. For example, developers can identify and address training data deficiencies by contracting third-party validators, or independent experts, to conduct impact assessments of how the technology will be used by people from historically marginalized groups. Releasing new technologies in beta to trusted users, as OpenAI has done, could also improve representation — if the pool of “trusted users” is inclusive, that is. In addition, the passage of legislation like the Algorithmic Accountability Act, which was reintroduced to Congress in 2022, would establish federal guidelines protecting the rights of U.S. citizens, including requirements for impact assessments and transparency about when and how these technologies are used, among other safeguards.

My most sincere wish is for technological innovations to usher in new ways of thinking about society. With the rapid adoption of new resources like ChatGPT, we could quickly enter a new era of AI-supported access to knowledge. But using biased training data will project the legacy of oppression into the future.

Mashable Voices columns and analyses reflect the opinions of the writers.