The Belief Blind Spot: Why LLMs Can’t Tell Fact From Fiction
A comprehensive study of 24 large language models finds that they are significantly worse at identifying false beliefs than true ones. This limitation threatens their reliability in high-stakes applications where accuracy matters most.