People Don’t Learn to Trust Bots

The researchers used an AI algorithm that, when posing as a person, was better than humans at persuading a partner to cooperate. But because previous work suggested that people tend to distrust machines, the scientists wondered what would happen if the bot revealed itself as such. The team hoped people playing with a known bot would recognize its ability to cooperate (without being a pushover) and would eventually get past their distrust. “Sadly, we failed at this goal,” says Talal Rahwan, a computer scientist at New York University Abu Dhabi and a senior author of the paper, published last November in Nature Machine Intelligence. “No matter what the algorithm did, people just stuck to their prejudice.” A bot playing openly as a bot was less likely to elicit cooperation than another human was, even though its strategy was clearly more beneficial to both players.
