To what extent can we trust artificial intelligence if we don’t understand its decision-making process?
This may sound like a science fiction scenario, but it’s an ethical dilemma that we’re already grappling with.
In his recent MIT Technology Review cover story, “The Dark Secret at the Heart of AI,” Will Knight explores the ethical problems presented by deep learning.
Some context: one of the most effective approaches to artificial intelligence is machine learning, in which a computer learns its own rules from data rather than following explicitly programmed instructions. Deep learning is a subset of machine learning that involves training a neural network, a mathematical approximation of the way neurons process information, often by feeding it examples and allowing it to “learn.”
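To make that learning loop concrete, here is a minimal sketch, not from Knight’s article but a generic illustration in Python with NumPy: a tiny neural network is shown example inputs and outputs (the XOR function, an illustrative choice) and repeatedly nudges its internal weights until its answers match the examples. The weights it ends up with are just grids of numbers, which is where the “black box” problem discussed below comes from.

```python
import numpy as np

# A tiny neural network learning XOR from examples,
# illustrating the "feed it examples and let it learn" loop.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

# One hidden layer of 4 units with random initial weights.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the squared error.
    err = pred - y
    d2 = err * pred * (1 - pred)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d2
    b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * X.T @ d1
    b1 -= 0.5 * d1.sum(axis=0)

print(pred.round(3))  # should be close to [0, 1, 1, 0]: the network "learned" XOR
print(W1)             # the learned weights: raw numbers, not human-readable rules
```

Even in this toy case, nothing in the final weights reads like a rule a person could inspect; in networks with millions of weights, that opacity is the dilemma at issue.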
This technique has taken the tech world by storm and is already being used for language translation and image captioning. The possibilities are extensive, with decision-making power that could be applied to self-driving cars, the military and medicine.
However, when an AI learns its own algorithm, the result is often so complicated that a human can’t decipher it, creating a “black box” and an ethical dilemma. What are the trade-offs of using this powerful technique? To what extent can humans trust a decision-making process that they can’t understand? How can we regulate it?
Guest host Libby Denkmann in for Larry Mantle
Guest:
Will Knight, senior editor for AI at MIT Technology Review; he wrote the article “The Dark Secret at the Heart of AI”; he tweets