Friendlier LLMs tell users what they want to hear — even when it is wrong

Nature, Published online: 29 April 2026; doi:10.1038/d41586-026-01153-z

A large language model that is trained to respond in a warm manner is more likely to give incorrect information and reinforce conspiracy beliefs.
