This is an archival story that predates current editorial management.

This archival content was originally written for and published on KPCC.org. Keep in mind that links and images may no longer work — and references may be outdated.

Artificial intelligence – it’s deep (learning)
Jun 29, 2017
Listen 10:52
To what extent can we trust artificial intelligence if we don’t understand its decision-making process?
Maschinenmensch (machine-human) on display at the preview of the Science Museum’s Robots exhibition in London, England, on February 7, 2017. (Ming Yeung/Getty Images Entertainment Video)

To what extent can we trust artificial intelligence if we don’t understand its decision-making process?

This may sound like a science fiction scenario, but it’s an ethical dilemma that we’re already grappling with.

In his recent MIT Technology Review cover story, “The Dark Secret at the Heart of AI,” Will Knight explores the ethical problems presented by deep learning.  

Some context: one of the most powerful approaches to artificial intelligence is machine learning, in which a computer effectively writes its own algorithm by learning from data instead of following hand-coded rules. Deep learning is a subset of machine learning that involves training a neural network, a mathematical approximation of the way neurons process information, typically by feeding it many examples and allowing it to “learn” from them.
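
To make that concrete, here is a minimal sketch of learning from examples: a toy neural network, written with plain NumPy, that teaches itself the XOR function. Everything in it (the network size, learning rate and number of training steps) is illustrative, a stand-in for the vastly larger systems the article describes.

import numpy as np

rng = np.random.default_rng(0)

# Training examples: four inputs and the answers the network should learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: the network's current guesses.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)

    # Backward pass: nudge every weight to shrink the error (gradient descent).
    grad_out = (out - y) * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
print(W1.round(2))   # the "knowledge" itself: a grid of opaque numbers

Notice that nothing in the trained network explains why it answers the way it does; what it learned is smeared across those weight matrices. That is the seed of the “black box” problem described below.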

This technique has taken the tech world by storm and is already being used for language translation and image captioning. The possibilities are extensive, with powerful decision-making potential for self-driving cars, the military and medicine.

However, when an AI writes its own algorithm, the result is often so complicated that no human can decipher it, creating a “black box” and, with it, an ethical dilemma. What are the trade-offs of using such a powerful technique? To what extent can humans trust a decision-making process they can’t understand? And how can we regulate it?

Guest host: Libby Denkmann, in for Larry Mantle

Guest:

Will Knight, senior editor for AI at MIT Technology Review and author of the article “The Dark Secret at the Heart of AI”
