Making AI models ‘forget’ undesirable data hurts their performance

So-called “unlearning” techniques are used to make a generative AI model forget specific, undesirable information it picked up from its training data, such as sensitive private data or copyrighted material. But current unlearning techniques are a double-edged sword: they could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B much less capable of answering […]

© 2024 TechCrunch. All rights reserved. For personal use only.