#034 Eray Özkural - AGI, Simulations & Safety

Dr. Eray Özkural is an AGI researcher from Turkey and the founder of Celestial Intellect Cybernetics. Eray is extremely critical of Max Tegmark, Nick Bostrom, and MIRI founder Eliezer Yudkowsky and their views on AI safety. He thinks that these views represent a form of neo-Luddism, that their proponents are capturing valuable research budgets with doomsday fear-mongering, and that they effectively want to prevent AI from being developed by those they don’t agree with. Eray is also sceptical of the intelligence explosion hypothesis and the argument from simulation.

Panel — Dr. Keith Duggar, Dr. Tim Scarfe, Yannic Kilcher

00:00:00 Show teaser intro with added nuggets and commentary
00:48:39 Main Show Introduction
00:53:14 Doomsaying to Control
00:56:39 Fear the Basilisk!
01:08:00 Intelligence Explosion Ethics
01:09:45 Fear the Autonomous Drone! … or spam
01:11:25 Infinity Point Hypothesis
01:15:26 Meat Level Intelligence
01:21:25 Defining Intelligence … Yet Again
01:27:34 We’ll make brains and then shoot them
01:31:00 The Universe likes deep learning
01:33:16 NNs are glorified hash tables
01:38:44 Radical behaviorists
01:41:29 Omega Architecture, possible AGI?
01:53:33 Simulation hypothesis
02:09:44 No one cometh unto Simulation, but by Jesus Christ
02:16:47 Agendas, Motivations, and Mind Projections
02:23:38 A computable Universe of Bulk Automata
02:30:31 Self-Organized Post-Show Coda
02:31:29 Investigating Intelligent Agency is Science
02:36:56 Goodbye and cheers!

Pod version: https://anchor.fm/machinelearningstreettalk/episodes/034-Eray-zkural–AGI–Simulations–Safety-eo1a14

Blog: http://log.examachine.net
LinkedIn: https://www.linkedin.com/in/erayozkural/

Note: I cited a tweet from John Carmack at the start of the video and forgot to paste it in: https://twitter.com/ID_AA_Carmack/status/1340369768138862592

Tweet content: “Because I believe that current supercomputers are actually sufficient for human level AGI, I entertain the (less than 1%) possibility that there is already one running in a secret lab somewhere. The capital is easily available to hundreds of organizations, but that mode of development …”

More from Eray:

Eray Özkural: Omega: An Architecture for AI Unification. AGI 2020: 267-278
preprint: https://arxiv.org/abs/1805.12069

AI ethics:

Eray Özkural: Godseed: Benevolent or Malevolent?, Philosophy of Mind: Contemporary Perspectives, 2017
preprint: https://arxiv.org/abs/1402.5380

Eray Özkural: Epistemological and Ethical Implications of the Free-Energy Principle, The Age of Artificial Intelligence: An Exploration, 2020
preprint: https://www.preprints.org/manuscript/201908.0318/v1

Ultimate Intelligence papers, which analyze the limits of intelligence and introduce the Energy Prior:

Eray Özkural: Ultimate Intelligence Part I: Physical Completeness and Objectivity of Induction. AGI 2015: 131-141 (Kurzweil Best AGI Idea Award)
preprint: https://arxiv.org/abs/1501.00601

Eray Özkural: Ultimate Intelligence Part II: Physical Complexity and Limits of Inductive Inference Systems. AGI 2016: 33-42
preprint: https://arxiv.org/abs/1504.03303

Eray Özkural: Ultimate Intelligence Part III: Measures of Intelligence, Perception and Intelligent Agents
preprint draft: https://arxiv.org/abs/1709.03879

Transfer learning papers:

Eray Özkural: Stochastic Grammar Based Incremental Machine Learning Using Scheme. Artificial General Intelligence 2010, Lugano, Switzerland.
Eray Özkural: Towards Heuristic Algorithmic Memory. Artificial General Intelligence 2011: 382-387.
Eray Özkural: An Application of Stochastic Context Sensitive Grammar Induction to Transfer Learning. Artificial General Intelligence 2014: 121-132
http://agi-conf.org/2014/wp-content/uploads/2014/08/ozkural-application-agi14.pdf
Eray Özkural: Zeta Distribution and Transfer Learning Problem. AGI 2018: 174-184
