How To Thwart A Robot Apocalypse: Oxford's Nick Bostrom on the Dangers of Superintelligent Machines

"If we one day develop machines with general intelligence that surpasses ours, they would be in a very powerful position," says Nick Bostrom, Oxford professor and founding director of the Future of Humanity Institute.

Bostrom sat down with Reason science correspondent Ron Bailey to discuss his latest book, Superintelligence: Paths, Dangers, Strategies, which examines the risks humanity will face when artificial intelligence (AI) surpasses our own. Bostrom worries that, once computer intelligence exceeds ours, machines will be beyond our control and will seek to shape the future according to their own goals. If those goals aren't properly set by designers, the machines could come to see humans as liabilities, leading to our annihilation.

How do we avoid a robot apocalypse? Bostrom proposes two solutions: either confine AI to answering questions within preset boundaries, or engineer AI whose goals include human preservation. "We have got to solve the control problem before we solve the AI problem," Bostrom explains. "The big challenge then is to reach into this huge space of possible mind designs, motivation system designs, and try to pick out one of the very special ones that would be consistent with human survival and flourishing."

Until then, Bostrom believes, research into AI should be dramatically slowed, giving humanity ample time to understand its own objectives.

Shot by Todd Krainin and Joshua Swain. Edited by Swain.

About 8 minutes long.

Go to Reason.tv for downloadable versions and subscribe to Reason TV's YouTube Channel to receive automatic notification when new material goes live.
