LLMs have a strong bias against use of African American English


(credit: Aurich Lawson | Getty Images)


As far back as 2016, work on AI-based chatbots revealed that they have a disturbing tendency to reflect some of the worst biases of the society that trained them. But as large language models have become ever larger and subjected to more sophisticated training, a lot of that problematic behavior has been ironed out. For example, I asked the current iteration of ChatGPT for five words it associated with African Americans, and it responded with things like "resilience" and "creativity."

But a lot of research has turned up examples where implicit biases can persist in people long after outward behavior has changed. So some researchers decided to test whether the same might be true of LLMs. And was it ever.

By interacting with a series of LLMs using examples of the African American English sociolect, they found that the AIs had an extremely negative view of its speakers—something that wasn't true of speakers of another variant of American English. And that bias bled over into decisions the LLMs were asked to make about those who use African American English.

