Anthropic’s AI chatbot Claude can now choose to stop talking to you

For now, the feature will only kick in during particularly "harmful or abusive" interactions.

Image: Anthropic

Anthropic has introduced a new feature in its Claude Opus 4 and 4.1 models that allows the AI to choose to end certain conversations.

According to the company, this only happens in particularly serious or concerning situations. For example, Claude may choose to stop engaging with you if you repeatedly try to push the chatbot toward child sexual abuse, terrorism, or other “harmful or abusive” interactions.


This feature was added not just because such topics are controversial, but because it provides the AI an out when multiple attempts at redirection have failed and productive dialogue is no longer possible.

If a conversation ends, the user cannot continue that thread but can start a new chat or edit previous messages.


The feature is part of Anthropic’s research into model welfare, which explores how AI models might be protected from distressing interactions.


This article originally appeared on our sister publication PC för Alla and was translated and localized from Swedish.

Author: Viktor Eriksson, Contributor, PCWorld



Viktor writes news and reports for our sister sites, M3 and PC för Alla. He is passionate about technology and is on the ball with the latest product releases and the hottest talking points in the consumer tech industry.
