AI Is Learning When It Should and Shouldn’t Defer to a Human
Studies show that when people and AI systems work together, they can outperform either one acting alone. Human doctors often review the output of medical diagnostic systems, and content moderation systems filter what they can before flagging the rest for human review. But algorithms are rarely designed to optimize this AI-to-human handover. If they were, the AI system would defer to its human counterpart only when the person could actually make a better decision. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have now developed an AI system that performs this kind of optimization, deferring based on the strengths and weaknesses of its human collaborator.
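The core idea can be illustrated with a minimal sketch. This is a hypothetical toy, not the CSAIL system: it assumes the AI has a confidence score for each input and an estimate of how accurate the human collaborator tends to be on that kind of input, and it defers only when the human is expected to do better.

```python
def choose_actor(model_confidence: float, human_accuracy_estimate: float) -> str:
    """Return 'ai' if the model should act, 'human' if it should defer.

    The deferral rule: hand off only when the estimated human accuracy
    exceeds the model's own confidence on this input.
    """
    return "human" if human_accuracy_estimate > model_confidence else "ai"


def team_prediction(ai_pred, human_pred, model_confidence, human_accuracy_estimate):
    """Joint human-AI decision: use whichever party is expected to be more accurate."""
    actor = choose_actor(model_confidence, human_accuracy_estimate)
    return human_pred if actor == "human" else ai_pred


# Example: the model is confident on the first case, so it acts itself;
# on the second case the human is expected to be stronger, so it defers.
print(choose_actor(0.95, 0.80))  # confident model keeps the decision
print(choose_actor(0.55, 0.85))  # uncertain model defers to the human
```

In a real learning-to-defer setup, both the classifier and the deferral rule would be trained jointly from data about the human's past decisions, rather than using fixed accuracy estimates as above.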