If Buddhist Monks Trained AI
“If a friend were on a footbridge and called you and said, ‘Hey, there’s a trolley coming. I might be able to save five lives, but I’m going to end up killing somebody! What should I do?’ Would you say, ‘Well, that depends. Will you be pushing with your hands or using a switch?’”

What people should strive for, in Joshua Greene’s estimation, is moral consistency that doesn’t flip based on particulars that shouldn’t determine whether people live or die. Greene tied his work on moral intuitions to the current crop of artificial-intelligence software.