Tech luminaries and scientists have worried for years about the existential consequences of artificial intelligence for the human race. Philosopher Nick Bostrom of Oxford University's Future of Humanity Institute argues that money ought to be invested in research on how to manage a machine superintelligence that could one day surpass us -- or even wipe us out. Economics correspondent Paul Solman reports.