SingularityNET: General Theory of General Intelligence: Developmental AGI Ethics(10/10)
This episode tackles some critical and difficult issues in the ethics of AGI systems, including how to architect AGI systems so that they remain ethical as they evolve and grow, how ethical systems themselves mature, and the problems that arise from embedding growing AGI systems within current human socioeconomic systems. The importance of democratic, decentralized AGI frameworks such as SingularityNET is discussed, along with the critical need to elevate human ethics to a more enlightened and reflective standard, so that we can provide high-quality examples for the AGIs we create and teach. It is proposed that AGI systems capable of robustly representing and manipulating their own states and dynamics at an abstract level (e.g. OpenCog Hyperon) may be better able to achieve and maintain such advanced, enlightened/reflective ethics.
(Note from Ben: I recorded this one around 1AM and was pretty sleepy, and thus speaking perhaps a little less animatedly than usual … but on listening to the recording afterwards I felt I'd made some key points fairly well, and if I were to re-record it while wide awake I'd lose some of the spark that comes from doing something the first time around. Also I don't have time to re-record this sort of thing, I've got a thinking machine to build, etc. etc. Never look back is my motto for such things! The previous episode on consciousness has the same issues for the same reason. Welp, so it goes…! With a 2-month-old baby and a highly energetic 3-year-old around the house, sleepy-but-hopefully-still-fairly-mentally-lucid is sorta the theme…)
Some additional references relevant to this episode are:
Goertzel, Ben. “GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement.” Journal of Experimental & Theoretical Artificial Intelligence 26, no. 3 (2014): 391-403.
Goertzel, Ben, and Joel Pitt. “Nine ways to bias open-source artificial general intelligence toward friendliness.” Intelligence unbound (2014): 61-89.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
Yudkowsky, Eliezer. Rationality: From AI to Zombies. Machine Intelligence Research Institute, 2015.
Goertzel, Ben. "Superintelligence: Fears, Promises and Potentials." Journal of Evolution and Technology, 2015.
Goertzel, Ben. "Infusing Advanced AGIs with Human-Like Value Systems: Two Theses." Journal of Evolution and Technology, 2016.
Goertzel, Ben, and Stephan Vladimir Bugaj. "Stages of Ethical Development in Artificial General Intelligence Systems." In Proceedings of AGI-08, 2008.
Goertzel, Ben. "The Singularity Institute's Scary Idea (and Why I Don't Buy It)." The Multiverse According to Ben (blog).
Muehlhauser, Luke, and Ben Goertzel. "Muehlhauser-Goertzel Dialogue." LessWrong.
Part 1: https://www.lesswrong.com/posts/TpNRpncLBAzddBnRB/muehlhauser-goertzel-dialogue-part-1
Part 2: https://www.lesswrong.com/posts/qF6hvXi2ytBsyzttp/muehlhauser-goertzel-dialogue-part-2
Goertzel, Ben. Between Ape and Artifact.
Goertzel, Ben and Ted Goertzel. The End of the Beginning: Life, Society and Economy on the Brink of the Singularity.
Goertzel, Ben. The AGI Revolution: An Inside View of the Rise of Artificial General Intelligence
Some websites with links relevant to these issues. (Note that the first three organizations listed here maintain many positions I disagree with, for reasons outlined in some of the above-referenced articles. However, these are intelligent and well-intentioned people thinking and writing about relevant issues, even if they often come to wrong-headed conclusions… and some of their thoughts are certainly worth digesting…)
Machine Intelligence Research Institute
Future of Humanity Institute
Future of Life Institute