There’s a big problem with AI: even its creators can’t explain how it works

No one really knows how the most advanced algorithms do what they do. That could be a problem.

This raises mind-boggling questions. As the technology advances, we might soon cross a threshold beyond which using AI requires a leap of faith. Sure, we humans can't always truly explain our own thought processes either, but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We've never before built machines that operate in ways their creators don't understand. How well can we expect to communicate with, and get along with, intelligent machines that could be unpredictable and inscrutable?

Source: There’s a big problem with AI: even its creators can’t explain how it works (MIT Technology Review)