AI Is Better at Compromise Than Humans, Finds New Study
The researchers programmed the machines with an algorithm called S# and ran them through a series of games with different pairings — machine-machine, human-machine, and human-human — to test which pairing would produce the most compromise. Machines (or at least the ones programmed with S#), it turns out, are much better at compromise than humans. But this may say more about “human failings,” lead researcher Jacob Crandall tells Inverse. “Our human participants had a tendency to be disloyal — they would defect in the midst of a cooperative relationship — and dishonest — about half of our participants chose to not follow through on their proposals — at some point in the interaction.”

Machines that had been programmed to value honesty, on the other hand, were honest. “This particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges,” says Crandall.
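The behaviors Crandall describes — proposing cooperation, then either following through or defecting mid-relationship — can be illustrated with a toy iterated prisoner's dilemma. This is a minimal sketch for intuition only, not the S# algorithm itself; the class names, payoff values, and 50% defection rate are invented for illustration.

```python
import random

# Standard prisoner's dilemma payoffs: (my_payoff, their_payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

class HonestAgent:
    """Hypothetical agent: always acts on its proposal and sustains
    cooperation once it emerges (loosely mirroring the honesty and
    loyalty Crandall attributes to the machines)."""
    def propose(self):
        return "C"
    def act(self, proposal):
        return proposal  # never lies about its intention

class FickleAgent:
    """Hypothetical 'human-like' agent: proposes cooperation but
    follows through only about half the time."""
    def __init__(self, rng):
        self.rng = rng
    def propose(self):
        return "C"
    def act(self, proposal):
        # Defect despite the proposal roughly half the time.
        return proposal if self.rng.random() < 0.5 else "D"

def play(agent_a, agent_b, rounds=100):
    """Run a repeated game; return each agent's total score."""
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = agent_a.act(agent_a.propose())
        move_b = agent_b.act(agent_b.propose())
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

# Two honest agents lock in mutual cooperation every round.
honest_scores = play(HonestAgent(), HonestAgent())

# Two fickle agents lose payoff to broken promises.
rng = random.Random(0)
fickle_scores = play(FickleAgent(rng), FickleAgent(rng))
```

In this toy setup, the honest pair earns the maximum joint payoff, while the fickle pair's broken proposals drag both scores down — the same gap the study reports between machine-machine and human-human pairings.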