Computerphile: Holy Grail of AI
Right, so last time, which was quite a while ago, we were talking about intelligence in general, and the way that you can model intelligence as an optimization process. This was the hill-climbing algorithm. Yeah, that was an example that we gave. We were using evolution as an example of an optimizing algorithm, or an optimizing system anyway. And then we were using that as a way of talking about other types of intelligence. We talked about chess AI very briefly, that kind of thing. So then the question is, what’s the difference between the type of AI that we have now, the type of AI that might play chess or drive a car or win at Jeopardy or whatever, versus the ideas that we have of AI in the future? The kind of science-fiction AI, what you might call true AI. What is it that really makes the difference? Is it just a matter of power, or is there something else? And one real distinguishing factor is generality. And what that means is: how broad a set of domains can the intelligence act in, can it optimize in? So if you take a chess AI, it’s very intelligent in the domain of chess, and it is absolutely useless in almost any other domain. If you put a chess AI in a Google self-driving car, not only can it not drive the car, it doesn’t have the concept, it doesn’t know what a car is, it doesn’t have any of the necessary architecture, cognitive architecture, to drive a car. And vice versa, right? The Google car can’t play chess, and it can’t win at Jeopardy. Whereas we have a working example of a general intelligence, which is human intelligence, right? Human brains can do a lot of different things in a lot of different domains, including brand new domains, domains we didn’t particularly evolve for. So in fact, chess, right? We invented chess, we invented driving, and then we learned to become good at them.
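The hill-climbing algorithm mentioned above can be sketched in a few lines. This is a minimal, illustrative version (the function and parameter names are my own, not from the video): keep a current state, look at a neighbouring state, and move only if it scores better.

```python
import random

def hill_climb(score, neighbours, start, steps=1000):
    """Greedy hill climbing: repeatedly try a neighbouring state
    and move to it only if it scores strictly higher."""
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbours(current))
        if score(candidate) > score(current):
            current = candidate
    return current

# Toy domain: maximise f(x) = -(x - 3)^2 over the integers.
# The optimizer knows nothing about the domain beyond "score" and
# "neighbours" -- which is exactly why it is so narrow.
best = hill_climb(
    score=lambda x: -(x - 3) ** 2,
    neighbours=lambda x: [x - 1, x + 1],
    start=-20,
)
```

Note how domain-specific this is: swap in a different `score` and `neighbours` and nothing the optimizer "learned" carries over, which mirrors the chess-AI-in-a-car point.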
So a general intelligence is, in a sense, a different class of thing, because it’s a single optimization system that’s able to optimize in a very broad variety of different domains. And if we could build an artificial general intelligence, that’s kind of the holy grail of AI research: that you have a single program or a single system that’s able to solve any problem that we throw at it, or at least tackle any problem that we throw at it. Recently, with Professor Brailsford, we did the idea of the Turing test. So it strikes me, from what you’re saying, that that’s a very specific domain: pretending to be a human talking. Yes, in a sense, it’s a very specific domain. The Turing test is a necessary but not sufficient test for general intelligence. It depends how you formulate your test, right? Because you could say, well, the AI has to pretend to be a human, convincingly. Turing’s original test was only a brief conversation using only text. But you could say: to convince me you’re a human, tell me what move I should make in this chess game. To convince me you’re a human, tell me how I would respond in this driving situation, or what’s the answer to this Jeopardy question. So you can, in a Turing test, deliberately test a wide variety of other domains. But in general, conversation is one domain. Yeah, you could formulate a true Turing test in that way, but it would take longer and be more rigorous. One way of thinking about a general intelligence is as a domain-specific intelligence, but where the domain is the world, physical reality. And if you can reliably optimize the world itself, that is, in a sense, what general intelligence does. Is that like humans have been changing the world to meet their needs? Absolutely. So when you say changing the world, obviously we’ve been changing the world on a very grand scale. But everything that humans do in the real world is in a sense changing the world to be better optimized for them.
Like if I’m thirsty and there’s a drink over there, then picking it up and putting it to my lips and drinking, I’m changing the world to improve my hydration levels, which is something that I value. So I’m using my intelligence to optimize the world around me, in a very abstract sense, but also quite practically. But on a bigger scale, as you say, on a grander scale, building a dam and irrigating a field and piping water to your house and allowing you to have a tap is doing the same thing, but on a grander scale. Right, and there’s no hard boundary between those two things. It’s the same basic mechanism at work: the idea that you want things to be in some way different from how they are. So you use your intelligence to come up with a series of actions, or a plan, that you can implement that will make a world that better satisfies your values. And that’s what a true AI, a general AI, would do as well. So you can see the metaphor to optimization is still there. You’ve got this vast state space, which is all possible states of the world. Remember before, we were talking about dimensionality and how it’s kind of a problem if you have too many dimensions. This is what kills basic implementations of general AI off the bat, because the world is so very, very complicated. It’s an exceptionally high-dimensional space. With the “I’m drinking a drink” example, you’ve got the same thing again. You’ve got a state of the world, which is a place in this space, and you’ve got another state of the world, which is the state in which I’ve just had a drink, and one of them is higher in my utility function. It’s higher in my ordering, my preference ordering over world states. So I’m going to try and shift the world from places that are lower in my preference ordering to places that are higher.
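The drinking example can be written out as a tiny utility-maximising agent. This is an illustrative sketch (all names and numbers are invented for the example): world states are scored by a utility function, actions map states to states, and the agent picks the action whose resulting state ranks highest in its preference ordering.

```python
# Sketch of an agent as a utility maximiser over world states.
# The "world" here is just a hydration level; actions map states to states.

def utility(state):
    # The agent's preference ordering: being more hydrated scores
    # higher, up to a cap (the numbers are arbitrary).
    return min(state["hydration"], 10)

def act_drink(state):
    # Shift the world to a state higher in the preference ordering.
    return {**state, "hydration": state["hydration"] + 5}

def act_wait(state):
    # Leave the world as it is.
    return dict(state)

def choose_action(state, actions):
    # Pick the action whose resulting world state ranks highest
    # in the agent's utility function.
    return max(actions, key=lambda a: utility(a(state)))

world = {"hydration": 2}
best_action = choose_action(world, [act_drink, act_wait])
world = best_action(world)
```

The key point carried over from the discussion: the agent never "wants" an action for its own sake; it ranks *world states* and chooses whatever moves the world up that ranking.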
And that gives you a way to express the making of plans and the implementing of actions and intelligent behavior in the real world in mathematical terms. You can’t just implement it, though, because of this enormous dimensionality problem. All these dimensions: if you’re trying to brute force that many dimensions, you’re going to fall over pretty quickly. Yeah, yeah, immediately. Change the world. Right. And if that sounds a little bit threatening, it is. [laughter]
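The dimensionality problem that kills brute-force approaches is easy to quantify. As a back-of-the-envelope sketch (the function name is my own): if the world has d independent features and each can take v values, the state space has v**d states, exponential in the number of dimensions.

```python
# Why brute-force search over world states fails: the number of
# states grows exponentially with the number of dimensions.

def state_space_size(values_per_dim, dims):
    """Number of distinct states when each of `dims` independent
    features can take `values_per_dim` values."""
    return values_per_dim ** dims

# A toy 2-D "world" is trivially enumerable...
tiny = state_space_size(10, 2)    # 100 states

# ...but a mere 80 ten-valued dimensions gives 10**80 states,
# roughly the estimated number of atoms in the observable universe,
# while the real world has vastly more dimensions than that.
huge = state_space_size(10, 80)
```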