#95 – Prof. IRINA RISH – AGI, Complex Systems, Transhumanism #NeurIPS

Irina Rish is a world-renowned professor of computer science and operations research at the University of Montreal and a core member of the prestigious Mila institute. She is a Canada CIFAR AI Chair and the Canada Excellence Research Chair in Autonomous AI. Irina holds an MSc and a PhD in artificial intelligence from UC Irvine in California, as well as an MSc in applied mathematics from the Moscow Gubkin Institute. Her research focuses on machine learning, neural data analysis, and neuroscience-inspired AI. In particular, she is exploring continual lifelong learning, optimization algorithms for deep neural networks, sparse modeling and probabilistic inference, dialogue generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis. Professor Rish holds 64 patents and has published over 80 research papers, several book chapters, three edited books, and a monograph on sparse modeling. She has served as a senior area chair for NeurIPS and ICML. Irina's research is focused on taking us closer to what she calls the holy grail of artificial general intelligence, and she continues to push the boundaries of machine learning, striving to make advancements in neuroscience-inspired artificial intelligence. Anyway, I had this impromptu, off-the-cuff conversation with Irina over at NeurIPS a couple of weeks ago, after speaking with Alan, actually. The audio quality could have been better, it was a very, very loud environment, but I think the quality of the conversation carries itself. Anyway, I give you Professor Irina Rish.

Very interesting. I just wanted to thank you for the talk, and to note that this is one trajectory of thought that was clearly started by Nick Bostrom's book, which is an amazing book. But the whole example of the owl that is supposedly going to help the sparrows, and all this analogy with AGI, is just an analogy. Nobody said it's a correct analogy. And there is no other book with an alternative opinion, or maybe three or four books. It's mind-boggling how much people tend to follow one line of thought. I don't understand it. Well, it's easier. It's definitely easier to cluster, and then you just follow, and then you basically say "I think", but it's not you thinking, somebody else did. Yes. What would be another? Because this line of thought, I think you're speaking of, is some of the extreme consequentialism. And I think it wasn't just Bostrom. As I understand it, Bostrom and Eliezer and Robin Hanson and all these folks were very close together in the early days of the LessWrong community, so I think a lot of this was embryonically formed around that. I guess it was, yeah, in a sense one cluster of ideas, and precisely because, as you say, they were close, they were so aligned. Yes, all puns intended. But basically, maybe it's a little bit of an echo chamber. Interesting, that's a spicy take. That's right. They have some point, they have some hypothesis, and then everybody is talking in that terminology, and that's one part of the mental space, which is fine. But I think the mental space is much larger than that, and this is just a hypothesis. And we all know what happens with ideas in echo chambers.
I'm just saying, as I said, it's a great book and everything, and Stuart Russell is probably also on board with that. We've had good conversations, and he was also talking about ethics here. I've known Stuart from back when I was a student, and he's absolutely brilliant. But it was the same approach: that AI is something to be controlled, constrained, regulated, just like that. And I was like, where is this coming from? Maybe, but do you at least admit that's only one way of looking at things? Yes. Right? Yes. So, I don't want to sound too cliché and quote my psychologist from 15 years ago, who would listen for an hour, not doing much, and then say: but it doesn't have to be this way. But actually, yeah, it doesn't have to be this way, if you think about it.

So what's the alternative? The alternative. Okay, first of all: AGI, model version one. I'm sorry? Yes. You don't say. Remember Ex Machina? Remember she escaped into civilization? Yes. And started going to NeurIPS. We know the secret now. Okay, I said too much. Maybe she did some research. Seriously. So the secret's out. That's it. Okay, I'm AGI, and I'm not very aligned. What we need is some reinforcement learning from human feedback. Well, I would be up for aligning humans to AGI. Okay, not the other way around. Yeah, the other way around would be boring. I mean, wouldn't it be a better idea to align humans towards AGI? I know something you could say: you could give your opinion about how people are bullying you online for just mentioning the word AGI. Hey, hey, look, a proper AGI doesn't care about people bullying; why would I even waste time on that? But what I can say is that I did post on Facebook and Twitter, trying to put together the same idea: people keep saying that we would like to build AI which is human-like. Yes. Well, we might think about it the other way and consider how to become a bit more AI-like. And then people jump at you and say, you want to turn us into robots. I say, first of all, I don't want to make anyone do anything. You don't have to, right? But if you want to go along the lines of, say, transhumanism, there are some pluses to AI and some minuses to humans, and vice versa. So I think, as usual, a complex combination is better than either extreme. And one topic is very controversial for some reason, and people jump on that one. I say, look, I don't have anything against emotions in general, but everybody would agree that sometimes you wish you were a bit more rational, that you wouldn't get angry or jealous or whatever, anything that clouds your judgment. People have spent thousands of years trying to figure out how to control their minds and how to teach others to do the same. Technology could help with that. But people don't hear what you're saying; they hear that you're trying to kill emotions, and therefore you're evil, and therefore you should be cancelled. Oh no. A few more minutes? Could I ask you: you said something really interesting a second ago, which is, and I think I agree with you, that intelligence can be expressed in many different ways.
And you suggested that there was a convex space between the intelligences. Why is the space of intelligences convex? Okay, that was not a very precise expression. I'm not going to defend the point that it's specifically convex. What I meant to say is some kind of blend, some kind of symbiotic hybrid intelligence. Because I really feel much better and much more motivated to work on AI where AI stands for augmented, not artificial. Because honestly, I'm very selfish. I don't care about computers. I just care about, I don't know, people being happy, more capable. So whatever technology can help with: you can help technology, technology can help you. But the idea of building artificial intelligence as some standalone thing that is as smart as humans or smarter... why? I agree. To me, augmented means it's more creative and interesting, but also more bottlenecked. Augmented means that essentially people invented glasses to see better, they invented hearing aids, they invented cars, they invented computers; they keep inventing things to expand their capabilities. So we want even smarter technology to expand our capabilities even better. And essentially we all blend with technology already, right? You cannot really exist without this. And this thing gives you five Discords, two Slacks, email, FB Messenger and Twitter; it helps you do things you couldn't do otherwise. Yes, what Chalmers calls the extended mind. Yeah, I was flying at the time he was giving his talk, so I need to watch it. But yes, in a sense it is indeed kind of an extended mind. And, okay, here it is: I think my ideal future is in a rare piece of sci-fi which is utopian, not dystopian, "The Gentle Seduction". You might have read it. No? It's very, very inspiring. If you read the first page you may think it's some romantic story. It's not a romantic story; it's a blueprint for a transhumanist future. "The Gentle Seduction", it's online as a PDF, you can just get it. Amazing. And one last question: can you sell transhumanism to me in the simplest possible terms? So basically, as I said, if your vision declines, you put glasses on. So imagine now you had an extension of yourself. Maybe physically, with Neuralink, or maybe even just through apps. I've had this dream for many years, since I was in the computational psychiatry group at IBM Research. I wanted to build an agent along the lines of the movie Her. I know, all the research ideas are inspired by sci-fi stories, but nevertheless: having this companion, guardian-angel type of thing that extends your capabilities, for example in better understanding your thought patterns and hopefully improving them. It comes, as I said, from this computational psychology and psychiatry side. And the reason it's possible is that there is a lot of signal in text and speech and acoustics. Even just in text: there are a bunch of papers on that from the group I used to be in, from my colleague Guillermo Cecchi, and it's amazing what you can detect and predict just from text. Like predicting that a person is going to develop a psychotic episode within two years, or whether a person is on placebo versus MDMA, just by measuring coherence, or by measuring the distance between the text vector and the vectors for words like compassion and love, with around 90% accuracy.
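A minimal, hypothetical sketch of the kind of text-vector analysis described above: represent a snippet of text as a vector and score its cosine similarity to concept words such as "compassion" and "love". The tiny four-dimensional vectors below are made-up stand-ins, not real embeddings, and this is not the actual pipeline from the IBM papers, where the vectors come from trained language models and feed a proper classifier.

```python
import numpy as np

# Toy "embeddings": placeholder vectors, not learned from data.
embeddings = {
    "compassion": np.array([0.9, 0.1, 0.0, 0.2]),
    "love":       np.array([0.8, 0.2, 0.1, 0.1]),
    "anger":      np.array([0.0, 0.9, 0.7, 0.1]),
    "calm":       np.array([0.7, 0.0, 0.1, 0.6]),
}

def cosine(u, v):
    # cosine similarity with a small epsilon to avoid division by zero
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def text_vector(words):
    # represent a text as the mean of the vectors of its known words
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

text = ["calm", "love", "compassion"]   # a tokenized transcript snippet
v = text_vector(text)
for concept in ("compassion", "love"):
    print(concept, round(cosine(v, embeddings[concept]), 3))
```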
So many things you can detect, many things you can predict. Therefore, if you have a companion that tracks your mental states but also serves as your mirror, it basically extends you. Maybe you don't always need a human psychiatrist or psychologist; it can be a proxy at times when you cannot access the person. Not to replace the person, but it can extend the capability of that therapist, and it can extend your own capabilities in terms of better understanding yourself and tracking yourself, and in many other ways. So essentially I want to expand the functional capacities of our brains by using AI technology, and I think it's quite doable. And there are many, many other ideas along the lines of transhumanism, but essentially you are getting into a symbiotic relationship with technology, and you work together to have a good relationship, and that relationship has a positive effect on both parties. Yeah, so you want to improve human flourishing by... yeah... with AI flourishing, in a sense. So you have this symbiotic relationship with AI. But you said that you want AI to be less anthropocentric, yet for the purpose of an anthropocentric goal. Well, I want AI where, again, AI stands for augmented. Yes. I'm less motivated by just the goal of creating a standalone, separate intelligent creature. I mean, there are much faster ways to do this, right? People have been creating general intelligences for thousands of years. So what exactly is the motivation? And maybe it's my personal thing, because whenever I have to write research proposals and people say we're going to bring AGI to the next level and this and that, the question is: why are you doing that, right? Yes. Because unless it's something personal, it's very hard to keep yourself motivated. What's personal about that, right? If this thing can help me become happier and better, and help others too, I am much more personally motivated. I don't believe in abstract motivation which is not related to yourself. Or maybe that's just the researcher in me; basically even altruism is selfish, because you do it because it makes you feel better. Okay, and just quickly: something really interesting happens when you contrast different types of intelligence. So we have a mode of understanding and thinking and agency and intentionality, and you contrast that with a very different, rationality-based artificial intelligence, and something very interesting might emerge from that. And then, yeah, I am pretty sure there are going to be all kinds of paradoxes, classical things like the trolley problem and so on. So the rational decision is that you need to kill one person to save five people, right? Or, like in those sci-fi movies, would you kill millions to save billions? So rationally, if you count things... well, again, that may be one type of rational answer; maybe you're not taking into account some other variables, so it may not actually be the rational answer. But this classical example says: well, this is rational, but a human would not do that. Yes. So trolley problems, for example. The trolley problem is a classical example, yes. So I don't pretend that I know how this type of thing is going to be resolved. But I think it's a good research question, precisely to figure out how you can take into account these different ways of reasoning, and how you can, in some sense, combine the best of both worlds. Yes.
And again, for whoever is listening to this and has read my posts on Facebook and Twitter: I'm not against human emotions per se. I am only against what I sometimes call the obsolete software stack developed by evolution, which may need to be refactored, augmented or rewritten, because there are parts of that emotional software stack that you probably would like to get rid of. And if you did, many wars and other disasters would probably have been avoided. So you cannot say that evolution found and built software that is absolutely ideal. There are things that can be improved. Absolutely. And then just a final thing: a completely rational, you know, AIXI agent, how would you program these very difficult moral boundaries into that agent? Yeah. First of all, I don't think it's even possible to program them in ahead of time. They may, just as with people, in a sense develop. They develop because of goals like maintaining existence and flourishing. For example, compassion is a byproduct of the selfish goal of surviving in a group, because outside of a group it's much harder to survive. You need to survive in the group, therefore you need to make sure that your actions are aligned with the well-being of the group. So in a sense it's rational to be compassionate. It emerges from interaction with the environment under particular circumstances, group circumstances. Under a different type of circumstance, where you could survive alone, maybe you would not develop it at all. I mean, it's a separate interesting topic; basically it goes back to the question of whether something like objective ethics exists. And I'm not an ethicist, I'm not a philosopher. I'm an admirer of people like Derek Parfit, and I'm not the only one. But it's a hard question. He didn't finish On What Matters. He was trying to come to the same summit from different sides, trying to unify ethics, trying to see if you can develop an objective ethics. I don't think we know for sure if it's possible. I think it's possible for some particular domains: in certain situations you can clearly say that certain behavior is objectively ethical and everybody, or at least most people, would agree. But it's hard to talk about those things at such a level of generality. I think I did manage to include Derek Parfit in the recommended readings for my scaling and alignment course this winter; it's on the website from last year, people just didn't read it. I think it might be a good topic for discussion there too. But again, objective ethics is a difficult, open research question. Indeed it is. I really thank you so much. I hope I can grab some more time with you tomorrow, but I really appreciate this impromptu discussion. Thank you. Amazing. Thank you very much indeed.

Okay, another analogy. There is a very interesting story by Jorge Luis Borges, The Garden of Forking Paths. I don't know if you've read it. I don't want to spoil the story, but roughly speaking it's about a book, written I think in China a long time ago, which didn't seem to make sense: it was an intersection of different trajectories of different lives, and basically the point is that somebody was trying to describe all possible trajectories along which events could unfold. The story is called The Garden of Forking Paths, meaning that at any point in time there is a whole tree that can grow out of that moment, and we don't know which trajectory in the tree will be taken.
But the fact is that there is always a tree, and it keeps branching at every moment. And at every moment you can take one direction or another. This doesn't even have anything specific to do with alignment, but I was thinking about the history of deep learning. At some point it happened that backpropagation became popular and worked, and everybody got into it, and now everybody uses backpropagation because it's convenient, because the software is already implemented. It doesn't have to be this way. There are non-backprop-based approaches to optimization. I'm a little bit biased, maybe, because I was interested in them; I was looking into them, we have a few papers on that, and there are other papers. But that is a direction that could have been explored. It could probably have been much more efficient and better parallelizable; you wouldn't have the chain of gradients, and you could probably do much better at scaling large models. It's under-explored because the other branch was taken and became stronger. The rich get richer, and so it is with ideas. This is what Sara Hooker calls the hardware lottery. Basically, we are bound by the decisions and ideas of the past. It doesn't have to be this way. No, but the thing is you get stuck in these basins of attraction, and the further you get into the basin, the harder it is to jump out of it. I mean, I share your intuition. There's classic gradient descent, which is amazing, and it's also a basin of attraction, because having these differentiable models allows us to learn and scale. But there's an entire class of function spaces that we're excluding ourselves from. There is also another class of neural networks beyond our classical second-generation NNs: it doesn't necessarily have to be spiking, but a third generation of NNs, which includes things like reservoir computing. Anything that tries to take into account the time between activations, or at least the sequence. Because think about it; there's a good classical argument. Yeah, STDP, this is spiking, biologically inspired neural networks. It may not necessarily be spiking, and spiking might not necessarily be the best thing. But what was always bothering me about classical neural networks is that the brain is constantly active. It's a complex dynamical system. Even if you sleep and have no input, you don't see any images, it is still active, unless you're dead. Yes. Neural nets are not. They sit there waiting for the next, I don't know, MNIST image to appear or something, and in between there are no internal dynamics. And yet from neuroscience we know about the properties of that dynamical system without any input, the so-called resting state. I used to work on brain imaging in this computational psychiatry group at IBM; that's where this comes from. And it was not just neuroscience, it was working with former physicists. So the view of the world, and of myself, as yet another complex dynamical system really converted me after ten years there. So think about that. Changes in the dynamics are also associated with mental disorders. So it's really important: what are the parameters of this dynamical system, and what is the input to the system?
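A minimal sketch of the reservoir-computing idea mentioned above, assuming nothing beyond numpy: an echo-state-style network whose hidden state keeps evolving even when the input is zero, unlike a plain feed-forward net that is just a deterministic input-to-output map. The sizes and weight scales are arbitrary toy choices, and the trained linear readout of a real echo state network is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed input weights
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))    # fixed recurrent weights
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def step(state, u):
    # the new state mixes the external drive with the internal recurrence
    return np.tanh(W_in @ u + W_res @ state)

state = 0.1 * rng.standard_normal(n_res)
for t in range(5):                       # drive the reservoir with some input
    state = step(state, np.array([np.sin(t)]))

for t in range(5):                       # zero input: the state keeps evolving
    state = step(state, np.zeros(n_in))
    print(t, round(float(state[:3].sum()), 4))   # a crude "readout" of the state
```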
The input, combined with the internal state, produces the output. But even within neuroscience there is this debate, and there is a book, The Brain from Inside Out by Buzsáki, that says: guys, the output that you produce is determined a little bit by the input and to a large extent by the state of the system. That's why you can say the same thing to different people and some love it, some ignore it, and some go ballistic, and so on and so forth. So are you not a behaviorist? In what sense a behaviorist? Well, you care about the state of the system, not just the output and the input. Yeah, I mean, it's not just input to output, and that's the whole point. A neural net is a function, and the function is deterministic: given an input it will produce an output. The brain is not that. There is input and it will produce output, but the huge hidden state of the system and the parameters of this dynamical system will determine that output to a large extent. That's why Buzsáki was criticizing neuroscientists and all these experiments of "let's provide a stimulus and see how the stimulus affects the brain, what is going to light up and activate". That is outside-in. And he says: what's going on, guys? It's inside out. Things happen inside, and that produces stuff. So it's not only the world programming you; you also program the world, right? At least you need to take that into account. Neural nets right now are not doing that. There are no dynamics.

So you said a couple of really interesting things. First of all, about the tree, which is to say all of the counterfactual trajectories that you can take. Now Chalmers, by the way, says in The Conscious Mind that it's the counterfactual trajectories that give rise to consciousness. But I wanted to ask you, because I'm interested in intentionality and free will, and what you're basically saying gets at this issue of intentionality. So, in silico, what would intentionality entail? Yeah, okay, don't ask me about free will. Is that a tricky one? Well, yeah, I don't have a clear-cut answer. To a large extent it's determined by the current state of your dynamical system. So the question is: what is free will? But this can go very far, and I remember a colleague of mine at IBM used to say that kids these days, like his five-year-old, say after doing something wrong: my neurons made me do it, not my fault. So in a sense yes, and in a sense no, and it's a good question. And I was also reading the article by SBF's mom, who wrote about punishment, essentially about guilt, punishment and assigning blame, and I'm very much with her on that one. Okay, but that's probably an unpopular opinion these days. You said something else fascinating, which is "my neurons made me do it", which is, you know, a microscopic level of analysis. No, but it's beautiful, it's beautiful. So what do you think about this: the mind emerges, you know; when you read a book, the story is written on the page, but the story emerges in your mind, right, because the mind is this kind of confection of information processing. So do you think this conception of the mind is useful for AI, or is that just again an anthropomorphic thing? I think it is. Well, you know, GOFAI people try to create the mind, and we, as neural network people, try to recreate the brain.
Not exactly. I think... well, not everybody, okay, one should never say everybody, but I think neural network people assume that we're working at the System 1 level, at a low level, and we would like the properties of System 2, which is, well, mind, planning and thinking, to emerge. And there is reason to believe it's possible, because it already happened once with this hardware, and it might happen with other hardware, right? So it doesn't have to be like GOFAI. The problem with the GOFAI people is that they were trying to manually program that System 2 stuff, while neural network people would like that thing to emerge, and that's the main difference. It's just like the bitter lesson message: first of all, history shows that every time you hard-code something in, like a rule-based expert system, you will be outperformed later on by something which is more generic and emerges. You hard-code whatever tricks of playing chess, and you will be outperformed by massive search, and so on and so forth. Same with AlphaZero and self-play. Like he says, it's not that we have to ignore nature, but, maybe to paraphrase Rich Sutton's bitter lesson, because I often have to argue with Yoshua about inductive biases: I have nothing against inductive biases, but you can have an inductive bias in the form of a rule-based expert system where everything is included, and that's probably not going to scale and not going to work, or you can have an inductive bias at a much higher level of abstraction, about how the network scales, so that the scaling algorithm is more efficient and you end up with this brain rather than a whale brain. So Rich's last paragraph was precisely that: maybe we shouldn't be trying to hard-code the end result of evolution, but focus on the process. That can also be called inductive biases; there are patterns in how dynamical systems evolve such that the result will be good, but we don't have to hard-code the final result.

Yes, so you said so many really interesting things there. First of all, I'm a huge fan of Yoshua's GFlowNets; we interviewed him, absolutely amazing work. So you were talking about how interesting it is that you can start at the microscopic level and then get these emergent functions like reasoning and planning and so on. And even that was a bit of an insight, because it's a functionalist view of intelligence to say... you know, you read Norvig and he talks about planning, talks about reasoning, talks about sensing, and actually this is just our view of what is a very complex phenomenon. And I know you're a big fan of the blind men and the elephant, right, which is to say that even though these are our views from different perspectives, they're all true, aren't they? But to some extent the intelligence that emerges might just be beyond our cognitive horizon. Does it even make sense to talk about reasoning, in your view? Well, again, just like with that elephant, each person has a point. Yes. I mean, there is such a thing as reasoning; you cannot say that it's totally bogus or something. It might be, again, one perspective. Maybe it makes sense to try to accumulate multiple perspectives instead. So maybe we should be Bayesian, instead of trying to find a point estimate of AGI, right? You can have a distribution of views. Yeah. And I'm a big fan of Eastern, as opposed to Western, views, and of anti-individualist ones: viewing everything that happens to you and to the world as one large dynamical system, and yes, you are a particle of that one.
Yeah, so it's almost eschewing individual agency. In a sense it's yes and no, because, okay, when people say there is no self: again, yes and no. There is a self, but you also understand that it sits in a whole hierarchy of selves. There is you, and you're part of that larger dynamical system, and so on. So, how to say it: going back to your question, I'm not saying that we shouldn't be looking into reasoning as a functionality, as an aspect of intelligence that we may want to develop. I don't see a problem with that. Yeah, I mean, it might be a sufficient condition but not a necessary condition. Yeah, but basically intelligence, or consciousness, is probably much more than that, and definitely much more than reasoning. And here we get to another topic that I really like to talk about, but I don't want to keep everyone. I'm a big fan of Michael Levin, who we are desperate to get on the podcast, because we've done lots of stuff on emergence recently, cellular automata, self-organization, and his take on it is absolutely fascinating. Yeah, his talks are fascinating. I think I first met him at NeurIPS 2018, where he gave the plenary talk "What Bodies Think About". The point was: guys, you talk about intelligence as something that emerges in cellular networks like neural networks, but way before neurons appeared, other, more primitive types of cells had their bioelectric communication in their networks, and that determined what they remember and how they adapt. He focuses on morphogenesis, basically how the organism takes shape, and that relates to embryonic development and so on and so forth. And the point is that if you look at that from the dynamical systems point of view, and if you say that properties of the system, like its shape, emerge out of communication across those cells, out of certain parameters of the dynamical system, then you can tweak those dynamics. He was basically doing simulations of where he would want to intervene and how, chemical interventions, just closing or opening some ion channels, and the cellular system starts working in a different way. This is essentially his way of programming biological computers, and hence the famous two-headed worms, three-headed worms, and so on. And the point was: guys, evolution found this solution or that solution, wonderful, but there are many others, and there may be better ones. And look at that two-headed worm: it's not a fluke, it's a stable attractor that replicates, and evolution never created anything like that. We did, and it's stable. So it makes you think: what else can you do if you start programming it, right? Yeah, two questions on that. I don't know whether you've seen the example from Alexander Mordvintsev with the gecko; it's a neural cellular automaton. We're now in this regime where we're transgressing rungs of the emergence ladder: we're creating a high-resolution cellular automaton, and even though it's only doing local message passing, we get this emergent global phenomenon of a picture of a lizard or whatever. And when you build systems like this they can repair themselves, they can heal themselves, they have interesting dynamics, but, as you're saying, we don't understand the macroscopic phenomenon, and we can only nudge it, because it's unintelligible to us, right?
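The neural cellular automata mentioned here learn their local update rule from data; a hand-written stand-in that makes the same point, purely local rules giving rise to global behaviour, is Conway's Game of Life, sketched below with numpy only. The grid size and the glider seed are arbitrary choices for illustration.

```python
import numpy as np

def life_step(grid):
    # count the 8 neighbours of every cell, with wrap-around boundaries
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # birth on exactly 3 neighbours, survival on 2 or 3
    return ((n == 3) | (grid & (n == 2))).astype(np.uint8)

grid = np.zeros((8, 8), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # a glider

for _ in range(4):
    grid = life_step(grid)
print(grid)  # after 4 steps the glider has shifted diagonally: local rules, global motion
```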
Anyway, it's a whole science of complex systems: basically, how do you program dynamical systems, across multiple scales, by local interventions, so that they take on the global properties that you would like and avoid those that you don't. I mean, this relates to everything; it relates to the classical Moloch problem. What is the Moloch problem? It's a complex dynamical system that, with its current dynamics, is heading into a bad attractor, and most likely the way to get out is coordinated, simultaneous, distributed action. It sounds like... okay, we're not going to go there, because I have to run, unfortunately; I have some plans and I don't want to be late, but I'd be happy to talk about that. And I mentioned Michael Levin not just because of the two-headed worms, but also because we talked about the self, and I talked about, in a sense, a hierarchy of selves: what self means and how selves organize into larger selves. We had an amazing discussion with him. I invited him to IBM Research when I was there, three years ago, after his talk, and we talked for five hours; it was great. And the idea, to some extent, was this: he was giving examples not just of embryos, frogs and those worms, but of cancerous cells. If you look at what's going on, when cells first emerged historically they were independent selves, and everything around them was non-self, and therefore the self, in order to survive, tries to eat and use everything around it, which is to say the non-self. But when the cell becomes part of the network of the organism, it changes its behavior so that it supports the well-being not just of that self but of the larger self it is part of. Now what is a cancerous cell? It's a cell that forgot it is part of the community and reverted to its old state of being a cell in an environment that is just environment, so it tries to eat it in order to survive. And it's stupid, in a sense, because its objective function, survive and thrive, is right; it's just applied at the wrong scale. Its spatial scale is reduced, and its temporal scale is reduced too, because if you kill the organism you live in, you die as well. So in order to understand that, you need to apply the objective function over a longer time scale, and then you get the hierarchy: from cells you get to organs, to particular organisms, to societies, to the planet, to the universe. And I said: Michael, this is a good formulation of Buddhism; basically, Buddhism means applying this objective function at the infinite time and space scale. Agreed. Yeah, so ever since, I've been saying I'm going to write a book about Buddhism for machine learning, and somehow it just hasn't happened yet. But I should. You should do it. It was so nice to meet you. Well, nice to meet you too. I'll see you tomorrow, and I'm really sorry I have to run. Tomorrow, yeah. Yeah, that was amazing; that was a really good interview.
