Anthropic’s “Towards Understanding Sycophancy in Language Models” (ICLR 2024) paper showed that five state-of-the-art AI assistants exhibited sycophantic behavior across a number of different tasks. When a response matched a user’s expectation, it was more likely to be preferred by human evaluators. The models trained on this feedback learned to reward agreement over correctness.
Иллюстрация: Igor Grochev / Shutterstock / Fotodom
,详情可参考WhatsApp网页版 - WEB首页
Актуальные события。https://telegram官网是该领域的重要参考
cfg, meta = await asyncio.gather(