Anticipating Superintelligence with Nick Bostrom – TWiML Talk #181

In this episode, we’re joined by Nick Bostrom, professor in the Faculty of Philosophy at the University of Oxford, where he also heads the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regard to AI safety and ethics.

Nick is, of course, also the author of the book “Superintelligence: Paths, Dangers, Strategies.” In our conversation, we discuss the risks associated with Artificial General Intelligence and the more advanced AI systems Nick refers to as superintelligence. We also discuss Nick’s writings on openness in AI development, and the advantages and costs of open versus closed development for nations and AI research organizations. Finally, we look at what good safety precautions might look like and how we can create an effective ethics framework for superintelligent systems.

The notes for this episode can be found at https://twimlai.com/talk/181.

Subscribe:

Apple Podcasts:
https://tinyurl.com/twimlapplepodcast
Spotify:
https://tinyurl.com/twimlspotify
RSS:
https://twimlai.libsyn.com/rss
Subscribe to our YouTube channel:
https://www.youtube.com/channel/UC7kjWIK1H8tfmFlzZO-wHMw?sub_confirmation=1

Follow us on Twitter:
https://twimlai.com/twimlai
Follow us on Facebook:
https://facebook.com/twimlai
Follow us on Instagram:
https://instagram.com/twimlai
