#54 Gary Marcus and Luis Lamb – Neurosymbolic models

Professor Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. In his recent paper "The Next Decade in AI", Gary wrote that without us, or other creatures like us, the world would continue to exist, but it would not be described, distilled, or understood. Human lives are filled with abstraction and causal description. This is so powerful. Just the other week, Francois Chollet said that intelligence is literally sensitivity to abstract analogies, and that is all there is to it. It's almost as if one of the most important features of intelligence is the ability to abstract knowledge; this drives the generalisation that allows you to mine previous experience to make sense of many novel future situations.

Also joining us today is Professor Luis Lamb — Secretary of Innovation for Science and Technology of the State of Rio Grande do Sul, Brazil. His research interests are machine learning and reasoning, neuro-symbolic computing, logic in computation and artificial intelligence, cognitive and neural computation, and AI ethics and social computing. Luis released his new paper "Neurosymbolic AI: The Third Wave" at the end of last year. It beautifully articulates the key ingredients needed in the next generation of AI systems, integrating type 1 and type 2 approaches to AI, and summarises the achievements of the last 20 years of research.

We cover a lot of ground in today's show: the limitations of deep learning, Rich Sutton's bitter lesson and "reward is enough", and the semantic foundation required for us to build robust AI.

Pod: https://anchor.fm/machinelearningstreettalk/episodes/54-Gary-Marcus-and-Luis-Lamb—Neurosymbolic-models-e125495

Tim Epic Intro [00:00:00]
Main Intro [00:38:05]
Gary introduces the field [00:42:12]
Luis introduces his thoughts on Neurosymbolic methods [00:47:56]
On the history of achieving a logical foundation and mathematical foundation for semantics [00:54:12]
Will emulating discrete reasoning break optimizability?
Buzzwords without basis [01:04:34]
We have known for decades about the statistical regularities in language [01:07:02]
Intension vs extension [01:09:14]
Easy to demand abstraction, but what is a workable definition? [01:13:33]
Abstraction is a “terrorist attack on neural networks” [01:20:38]
To succeed we need both, we are the moderates [01:30:14]
What would the future world look like with better semantics? [01:31:32]
Promising current approaches to discrete reasoning systems [01:39:58]
The challenge of machine knowledge acquisition [01:47:32]
More from Prof. Lamb on relational learning [01:53:06]
The role of vector embeddings and neural symbolics [02:02:30]
Humans seem both good and bad at reasoning, what’s going on? [02:09:06]
Is reasoning a first-class citizen in the human brain? [02:15:06]
Does reasoning happen on the same substrate as system 1? [02:17:08]

GM papers:

The Next Decade in AI

Innateness, AlphaZero, and Artificial Intelligence

Deep Learning: A Critical Appraisal

Rule learning by seven-month-old infants

Rethinking Eliminative Connectionism

Gary Marcus vs Yoshua Bengio debate:
The Best Way Forward For AI


GM books:
Rebooting AI

Kluge: The Haphazard Evolution of the Human Mind

The Birth of The Mind

The Algebraic Mind


LL papers:
Neurosymbolic AI: The 3rd Wave

Understanding Boolean Function Learnability on Deep Neural Networks

Graph Neural Networks Meet Neural-Symbolic Computing

Discrete and Continuous Deep Residual Learning Over Graphs

Learning to Solve NP-Complete Problems

Neural-symbolic Computing

Neural-symbolic learning and reasoning

LL books:

Neural-Symbolic Cognitive Reasoning

A Uniform Presentation of Non-Classical Logics
