So-called “unlearning” techniques are used to make a generative AI model forget specific, undesirable information it picked up from its training data, such as sensitive private data or copyrighted material. But current unlearning techniques are a double-edged sword: they could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B much less capable of answering […]
© 2024 TechCrunch. All rights reserved. For personal use only.