Research: To be like a bat

Welcome back to CogX 2021. I'm Christine Foster, CCO at the Alan Turing Institute, the UK's national institute for data science and artificial intelligence. There's so much happening across the festival, so I'll mention two of the Turing events. First, Vanessa Lawrence, who's on the board of the Turing, will join a panel about reinventing the high street with geospatial and place-based innovation. That's on Wednesday at noon on the Smart City stage. Also, Andrea Baronchelli, the theme lead for economic data science at the Turing, will be discussing NFTs, non-fungible tokens, and the future of digital art. That's tomorrow on the Kray Tech stage at 1 p.m. We'll be starting this session in a moment, so I'd encourage you to submit questions throughout, because we'll be doing some Q&A at the very end.

This next session is a fascinating one. Let me introduce the expert speakers. We have Alex Turpin, a scientist turned entrepreneur and an expert in deep tech, including photonics, computer vision, and artificial intelligence. And we have Daniele Faccio, a professor in quantum technologies. His research focuses on the physics of light: how we harness light to answer fundamental questions, and how we harness light to improve society. Welcome.

Thank you, Christine. Hi, everyone.

So Alex, tell us about that. What do you mean when you say "to be like a bat"?

To be like a bat is a very fascinating thing, to be honest. In this respect, we consider how bats navigate and move through the world, especially at night. We have all heard about this echolocation ability that bats have. The way it works is that bats emit high-pitched sounds and then wait and listen for the echo. What they basically do is emit vocalizations, very much like we do when we talk, but in a very high frequency range, above the audible region of our ears. And they do several different things with these vocalizations. First, they have something like a stopwatch in their brain that tells them how long it takes for these vocalizations to come back from the moment they were emitted; with this, they can tell how far away a moth or other possible prey is. They are also able to tell whether the echoes reflected back from this prey arrive first at the left ear or the right ear, and, because of the shape of their ears, whether the prey is above or below them. Additionally, if the frequency of the vocalizations they are sending changes, they can tell whether this moth, this little prey, is coming towards them or moving away from them. So just by sending these vocalizations, they can tell, even at night, how far away an object is, where it is, and what speed it has. They are also able to tell, more or less, the shape of this little animal or object. And this is fascinating, because just by emitting noises and listening to the echoes that come back from their own noises, they can tell many, many things about their environment. And they do this with very high precision: their brain can resolve the timing of echoes to about 400 nanoseconds, around 400 parts per million, in the distance the prey is moving.
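To make the numbers in that description concrete, here is a minimal sketch of the two estimates a bat's brain makes from an echo: range from the round-trip delay of the call, and closing speed from the Doppler shift. The values used are illustrative, not measured bat data.

```python
# A minimal sketch of two quantities estimated from an echo:
# range from the round-trip delay, and closing speed from the Doppler shift.
# All numbers are illustrative, not measured bat data.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def range_from_delay(round_trip_s: float) -> float:
    """Distance to the target: the echo travels out and back, so halve the path."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def speed_from_doppler(f_emitted_hz: float, f_received_hz: float) -> float:
    """Approximate closing speed from the echo's frequency shift.
    For sound reflected off a moving target, the two-way Doppler shift is
    roughly delta_f / f ~= 2 v / c for v much smaller than c."""
    return SPEED_OF_SOUND * (f_received_hz - f_emitted_hz) / (2.0 * f_emitted_hz)

# A 400 ns timing resolution translates to sub-millimetre range resolution:
print(f"range resolution: {range_from_delay(400e-9) * 1000:.3f} mm")  # ~0.069 mm
# An echo arriving 5.83 ms after the call puts the moth about 1 m away:
print(f"range: {range_from_delay(5.83e-3):.2f} m")
# A 60 kHz call coming back at 61 kHz means the moth is closing at ~2.9 m/s:
print(f"closing speed: {speed_from_doppler(60e3, 61e3):.2f} m/s")
```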
So they are able to tell, to within less than a millimeter, how far away one of these moths or flying insects is. Bats are really fascinating: they are able to use their own bodies and their senses to navigate, and they probably do it with much more precision than we can with our own eyes.

So tell me about your work and this tech that you're building. How does it work?

Coming back to bats: we were inspired by them. What we do is also use echoes, echoes of waves. In our case, these can be acoustic waves, radio-frequency waves, or even, potentially, optical waves, but we prefer to use radio-frequency waves, similar to those used in our cell phones and in the Wi-Fi we have at home. And we do something very similar to what bats do: we emit bursts of these waves, and we have a sort of sensor that waits to hear their echoes. What bats do differently is that they only use the first echo that comes back from the prey: they send their waves and wait for them to come straight back. In our case, we do something a little more complex: we use what we call multipath echoes.

Imagine yourself in a valley full of mountains. If you shout "hello", the typical echo, you will hear your own voice coming back once it has been reflected off one of the mountains. But if there are more mountains a bit farther away, you will hear a second echo, and if there is another mountain farther still, a third echo. If you picture this information in your mind, it looks like a time trace telling you the amount of wave power arriving at each instant of time. Now take this plot, which we call a histogram, and make it a bit more complex. Imagine that you are in a room bounded by walls, that there is an object in it, for instance a person walking around, and that you have this sensor and an antenna, an emitter, that sends out waves. These waves can now bounce not just once but many times within the room. For instance, the waves can go to the person, be reflected from the person to the ceiling, from the ceiling to the floor, and from the floor to the sensor, and this can happen many, many times: we send these bursts and allow them to propagate within the room. The nice thing about this approach is that we can now also gather information from the viewpoint behind the person, not only from what is in direct line of sight with the sensor.

I don't know if you have heard of the pieces by Yayoi Kusama, with all these infinity mirror rooms. This would be very similar to having an infinity mirror room: a room whose walls are mirrors, so that if you stand inside it, you see yourself from all possible perspectives. And this is basically what we are doing with our algorithm. We send waves, we allow these waves to propagate within the room, to reflect off and interact with all the objects from all the different viewpoints, and we record the arrival time of these waves at each instant. As you can imagine, this is an extremely complex type of data, and we humans are not able to work with it, because it is basically incomprehensible to us.
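To get a feel for what one of these multipath time traces looks like, here is a toy simulation. It is only a sketch: the path lengths, pulse shape, and attenuation model are assumptions for illustration, not the actual sensor model described in the talk.

```python
import numpy as np

# Toy multipath "echo histogram": each propagation path (emitter -> bounces
# -> sensor) deposits an attenuated pulse at its total travel time. A real
# trace contains many more, overlapping paths; this only shows the shape
# of the data.

C = 3.0e8  # speed of light, m/s

# Total geometric path lengths in metres for a few illustrative paths, e.g.
# direct bounce off the person, person->ceiling->sensor, and longer bounces:
path_lengths_m = [6.0, 11.5, 17.2, 23.8, 30.1]

t = np.linspace(0, 150e-9, 3000)   # 150 ns observation window
trace = np.zeros_like(t)
pulse_width = 1.5e-9               # emitted pulse duration (assumed)

for L in path_lengths_m:
    arrival = L / C                # time of flight for this path
    amplitude = 1.0 / L**2         # crude spreading loss
    trace += amplitude * np.exp(-((t - arrival) / pulse_width) ** 2)

# 'trace' is the kind of 1D signal the sensor records: a series of peaks
# whose timing encodes the room geometry from many viewpoints at once.
print("expected peaks at (ns):", [f"{L / C * 1e9:.1f}" for L in path_lengths_m])
```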
We cannot see patterns in this data, and this is the reason we decided to use an artificial intelligence algorithm. This algorithm is able to interpret the data: it can understand what lies behind all the many, many peaks in our signal, and it can transform the signal into a 3D image. It is able to transform just one of these multipath echoes into an estimate of the scene in three dimensions. And this is basically how our system works. We are inspired by bats, but it's a little more complex, and thanks to our artificial intelligence, we can now produce images from just one of these echoes.

That's cool. I love the imagery of the sort of hall of mirrors, the Kusama work. It's very interesting. So Daniele, maybe you can talk a little about the inspiration beyond the bats: what prompted you to head into this area, and why do you do this kind of work?

Yeah, thanks for the question. We didn't originally set out thinking about bats. Everything started, I'd say, several years ago, when we were looking at a related but quite different problem: to see whether we could image behind corners, whether we could create images of a scene unfolding behind a wall. There are many reasons and many applications for doing this. For example, if I could see behind a corner and mount this device on a car, you could have some very advanced collision avoidance; in a certain sense, your car would be able to see into the future and tell you what's happening around the corner, whether other vehicles are approaching. But of course, one could think of many, many other applications.

Now, this problem is related to the bat sensing that Alex has been talking about, because here too, the way it's done, well, originally we were working with lasers, with laser beams. The idea is that you shine the laser beam onto the floor or a ceiling or some surface, which then reflects the light through an open door, for example, or through a window, into the room where your vision is blocked by a wall. This light then interacts with the environment, bounces back, hits the same wall or door again, and is reflected back to you. So this is another example of what we would call multipath imaging: I sent the light out to the wall, it hit the wall, went inside the room, interacted with some objects, bounced back again, and was reflected back to me. That's three-path imaging, if you want. Many groups around the world have been working on this, with some amazing results, including from the US. It turns out you can actually do full 3D imaging of a room this way, with what we call non-line-of-sight sensing or imaging. One of the key technologies for doing this is very precise timing information; Alex was talking about the 400 nanoseconds, the 400-parts-in-a-million sensitivity, that bats have. With lasers we can do much better than that: we can have a sensitivity of one part in a trillion. But essentially, it's this very explicit temporal detail and information in the return signals that allows us to image behind a wall or inside an enclosed room.
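Coming back to the reconstruction step Alex described, turning a single time histogram into a 3D estimate of the scene: this kind of inverse mapping is typically learned from data. As a loose sketch of the idea, here is a deliberately small PyTorch model that maps a time trace to a coarse depth image. The architecture, layer sizes, and names are assumptions for illustration, not the team's actual model.

```python
import torch
import torch.nn as nn

# A small stand-in for the learned inverse mapping:
# input  = one multipath time histogram (e.g. 3000 time bins),
# output = a coarse depth map of the scene (e.g. 32 x 32).
# Real systems are trained on paired (histogram, 3D image) data.

class EchoToDepth(nn.Module):
    def __init__(self, n_bins: int = 3000, out_hw: int = 32):
        super().__init__()
        self.out_hw = out_hw
        self.net = nn.Sequential(
            nn.Linear(n_bins, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, out_hw * out_hw),
        )

    def forward(self, histogram: torch.Tensor) -> torch.Tensor:
        depth = self.net(histogram)                      # (batch, 32*32)
        return depth.view(-1, self.out_hw, self.out_hw)  # (batch, 32, 32)

model = EchoToDepth()
fake_histogram = torch.rand(1, 3000)  # one recorded time trace
depth_map = model(fake_histogram)     # estimated scene depth
print(depth_map.shape)                # torch.Size([1, 32, 32])
```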
And so that got us thinking: what else can we do with this kind of technology? We were also looking at some of the problems you have with a laser. When you shine light on a wall, well, the wall behind me here looks opaque, and it is certainly not a mirror. The reason is that it has some surface roughness, and so there is scattering: the light hits the wall and diffuses, and when it comes back it is scattered in all directions. It's much the same reason why snow looks white and you can't see through it: the snow isn't absorbing the light, it's just scattering it in all different directions, scrambling up all the information. Clouds behave the same way. You end up with this sort of white fluff instead of an actual image, so you can't see through snow, you can't image through a cloud, and likewise I can't use the reflection from the wall behind me as if it were a mirror.

However, if you change the wavelength of your illumination and go to much longer wavelengths (this comes back to the radar Alex was talking about; radar wavelengths, like the waves in your microwave oven, are essentially just forms of light with a longer wavelength), these no longer see the surface roughness of the wall, and the wall starts to behave like a mirror. This is a great advantage. Now, we don't have cameras that work for radio waves, but we do have these radio waves bouncing off all the surfaces as if they were mirrors, and that is far more efficient than bouncing off a diffusing wall, as happens with the light reflecting off the wall behind me. So that got us thinking, and we started playing around with the technology. We noticed that when we looked at the return signal, because all this diffusive scattering had been removed, we could now see all of these reflections very precisely in the temporal echoes: you get peaks, you can actually count them, and we could see all these echoes coming back from the multiple reflections. Then we started looking at this in more detail, with information-theory analysis and these complicated things, but, long story short, we suddenly realized that there was a huge amount of information in all these multiple paths, and that led us to developing the technology Alex was talking about a moment ago.

On a personal note, did you get teased by your colleagues for moving from the physics of light to radio waves?

A little bit, yes. And we're even going towards sound next, using sound waves, so, yeah.

Okay, but in all seriousness, I'm hoping the audience will have been thinking of their questions, and we'll get to them later, so I'll take the privilege of moderating and ask one more, which is really about applications. There's a certain mischief to being able to see through a wall, but in all seriousness, what do you hope this technology can do, and for whom?
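Daniele's point about walls turning into mirrors at longer wavelengths can be made quantitative with the classical Rayleigh smoothness criterion: a surface reflects roughly specularly when its height variations are small compared to the wavelength (commonly h < λ/(8 cos θ)). A quick check of that idea, assuming an illustrative ~0.5 mm wall roughness:

```python
import math

def is_specular(roughness_m: float, wavelength_m: float, incidence_deg: float = 0.0) -> bool:
    """Rayleigh criterion: a surface behaves like a mirror if its RMS height
    variation is below wavelength / (8 * cos(theta))."""
    threshold = wavelength_m / (8.0 * math.cos(math.radians(incidence_deg)))
    return roughness_m < threshold

roughness = 0.5e-3  # assumed ~0.5 mm painted-wall roughness (illustrative)

# Visible green light, ~550 nm: the wall scatters diffusely (looks matte).
print("optical:", is_specular(roughness, 550e-9))  # False
# Wi-Fi-band radio, ~12.5 cm at 2.4 GHz: the same wall reflects like a mirror.
print("radio:  ", is_specular(roughness, 0.125))   # True
```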
So yeah, I think this is one of the most exciting aspects of this, and it follows on from what I was saying a moment ago about how one thing leads to another; I think more of that is happening again now. Yes, the technology itself is exciting, but the ramifications, the different ideas it sets in motion, are equally exciting.

We have developed and tested this technology so far in rooms, in closed environments. So if we take a closer look at that, where can it lead us? To give you some statistics: in 2018, more than 7% of the US population required an overnight hospital stay, and by 2050 the world's population aged 65 years or older is going to increase from the 700 million we have today to 1.5 billion. What that tells you is that we are heading for a problem when it comes to hospitalizations and taking care of an aging population. The thinking here is that the way we track and monitor our health will have to move away from hospitals and towards the settings in which we live, our homes. The idea is that homes will become intelligent. We have already seen some signs of this, you know, intelligent fridges that can order food for you, but what I'm talking about here is healthcare: an intelligent ambience that will track your health. It's not about tracking you and what you're doing; it's about tracking your vital signs.

The interesting thing is that there is a whole series of conditions, especially dementia and other neurodegeneration problems, where there is evidence that they can be caught at a very early stage, for example just by tracking the way you move: how many times you move a day, but also what we call micro-movements, how you are moving. These are very subtle changes that you cannot pick up yourself, because they happen very slowly over time and we don't notice them, but they are the result, for example, of neurodegeneration, and if picked up in time they can lead to effective treatment, or at least a slowing down of disease progression.

Now, this is just one example. Coming back to this idea of the ramifications, the different ideas that what we have done so far sets in motion: right now we are developing a technology that can also, for example, remotely detect your heartbeat, your heart sound. It's based on very similar concepts, but it can do so with incredible detail, and again, the point is to monitor very subtle changes and variations over time, which can be indications of certain diseases. We can also use it, for example, for biometric identification: each heart sound is unique, a bit like a fingerprint, and this can be used to identify people.

Another direction we are going in: we currently have a project with the deafblind community, coming back now to the idea of the bat sense. The idea is to use this bat-sensing technology combined with haptics in a wearable. At the moment we are looking at a hat, a wearable device that is completely unobtrusive; it doesn't look weird, it's simply a hat that the deafblind person can wear. What it gives them is a sort of augmented sensation of their surroundings. They will still have their guide dog with them, for example, but now they will be able to tell where people are standing, how many people are there,
how far away they are. And the feedback we have been getting so far from that community is extremely positive. So these are just a few ideas to give a sense of where we are going, but we do believe this technology, and the related ideas, will hopefully have a huge impact on the way we live in the near future.

Thank you. I said I was going to ask the last question, but let me ask another while we see whether any Q&A comes in from the audience. Alex, as an entrepreneur, what are the VCs and angel investors saying about the potential of this kind of technology? Have you had any feedback?

We have had quite a lot of interest from different VCs, and even from companies, in the technology. The question is finding the right market and the right application, and we are currently working on this, together with Daniele, to develop the appropriate application based on this technology that could reach the market. There are many different opportunities in terms of surveillance, in-home health monitoring, and also monitoring what is going on inside a cabin, like a car or a public transport vehicle. So there are definitely different opportunities, and we are exploring this area.

And any steer on which of those seems most likely at this point, in the nearer term, as your next steps?

Sorry, which of those?

Yes, there are so many areas, right, so was there any steer from the investment community on which of those seems most likely?

One of the ones that seems most promising is smart homes: can we implement this technology in smart homes, and can it help to monitor, as Daniele was saying, the health status of someone? We could also, for instance, detect when an elderly person falls in their bathroom while having a shower. That is a place where you would never install a camera to monitor your grandma or grandpa, right? But you can have these kinds of sensors, which will give you lots of information about what is going on. So these are some of the applications we have in mind.

And then, for both of you: what do you worry about? What keeps you up at night with this kind of work? Who wants to start?

I think one of the main points is the public's perception and acceptance of new technology. We have seen, time and time again, new technologies coming in and creating various reactions from the public: some people might like them, some might be full of suspicion, and that's natural, you know, when it's something you don't understand or something that's new. I think it needs to be approached very carefully, and of course, when you're talking about monitoring over long periods and about intelligent homes, lots of scenarios can come to mind. I think we need to be very careful in addressing those concerns and in making sure that at no point does the technology overstep its actual intended purpose. And you can do that. The nice thing about this technology is that it's quite robust in this respect. For example, people are trying to do similar things with webcams, saying "we blur out the faces, so we're not really imaging", but that's a hard sell, because you actually are imaging: you do have a camera, and even if you blur a face, maybe it can be unblurred, and you've got all kinds of data-privacy issues. In this case, we have none of that. These are
just data streams, not images: data streams that go to a computer and are interpreted by an AI, but at no point are any actual images or recordings of you being taken. So in that sense, I think it's quite robust. But still, I can understand there can be anxiety and questions, and these are the things we need to, and want to, be careful about.

That's helpful. I see a question here from Elaine Taylor, specifically about the deafblind example you used: how does the use of this technology affect the dogs, if you're using high-frequency sounds?

Good question; yes, we need to be careful. At the moment we are not planning on using ultrasound for the deafblind project; we actually have a radar sensor, and these devices can be made tiny. The idea is that they could even be built into a button, so they can be very small. And with radar, in this wavelength region, animals are not sensitive. When using ultrasound, we do need to think about this. For example, we have Alexa devices in our homes, and they have microphones and speakers; that could be an area where one might look at ultrasound alternatives. But then, you're right, we would need to think about how pets perceive, or don't perceive, this technology. It's a good point.

And a related question: when you were speaking about the heartbeat signature, I was wondering how much quiet you need. Or does it depend on the wavelength you use, which ones suffer interference and which ones don't?

Ah, the interesting thing here is that we're really happy to come back to using light. What we're doing is picking up vibrations: every time your heart beats, your skin vibrates. You can't really feel it, although if you put your finger in the right position on your neck, you can sort of feel the du-dum. But we're picking up a lot more than that; we're picking up very fine vibrations. It's a bit like when you push water through a tube: the tube vibrates a little. The same thing is happening here. Your arteries are quite deeply embedded in your neck, but the vibrations still propagate outwards, and using a laser that simply illuminates the skin of your neck, we can pick these vibrations up. This would be at wavelengths that aren't visible to the eye, at very low power, and it wouldn't require any form of quietness.

So interesting. With all of these use cases, you have this intersection of the appropriate sensor, the appropriate wavelength, the appropriate algorithm, the appropriate privacy protections. It's almost like a principles-based approach to this technology, where you then swap out the specifics.

Exactly. It's the principle of the bat sense, if you want. That's precisely what we've been thinking about, and as I said, we started from lasers looking around corners, but the concept always was: we're picking up these echoes, what can we do with them? In a sense, that frees your mind up in terms of the technology you want to use, because then you start to discover that you can pick up echoes across the whole spectrum.

So your teams are about to get more and more multidisciplinary, I would guess. You're going to be in the field of echoes, as opposed to any one discipline. I like it.
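For the heartbeat sensing Daniele describes above, a standard way to turn a skin-vibration signal into a heart rate is to pick the dominant spectral peak in the plausible cardiac band. The sketch below synthesizes a signal and applies that idea; the sample rate, band limits, and signal model are assumptions, not the team's actual processing chain.

```python
import numpy as np

# Toy heartbeat extraction: given a vibration signal recorded off the skin
# (here synthesized), estimate heart rate from the strongest spectral peak
# in the 0.7-3.5 Hz band (~40-210 beats per minute).

fs = 500.0                         # sample rate, Hz (assumed)
t = np.arange(0, 30, 1 / fs)       # 30 s recording
true_hr_hz = 1.2                   # 72 bpm ground truth for the demo
signal = np.sin(2 * np.pi * true_hr_hz * t) + 0.5 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

band = (freqs > 0.7) & (freqs < 3.5)   # plausible cardiac frequencies
hr_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {hr_hz * 60:.0f} bpm")  # ~72 bpm
```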
Yeah, yeah. There's a question here from Sasha Ondobaka, which probably has an answer specific to the different wavelengths, asking for an indication of the costs of these kinds of remote sensing technologies. So maybe give us a feeling for how expensive the sensors are, how expensive the data streaming and collection is, how expensive the algorithm building is. What do you know about the costs?

The nice thing about all these technologies is that we are using existing components, nothing we are developing on our own; we are using off-the-shelf devices. Just think of microphones: there are microphones everywhere, right? Even in our cars we have ultrasound sensors that help us park, our cars have radar sensors, and we have Wi-Fi antennas in our routers at home and in our cell phones. So this is really off-the-shelf technology that we are starting to use; it's new for us, but the technology is there and is already in use today. In terms of the hardware, we are talking about just a few dollars; it's really cheap. In terms of the algorithm and how we process the data, many of these applications can be run in the cloud, which frees up resources and speeds up the algorithm. We can also embed it: with Daniele, we are working on embedding these algorithms in microprocessors, which are much smaller, more power-efficient, compact, and also very cheap. So we are always talking about technology that costs a few dollars, maybe tens of dollars; it's really, really cheap.

I love that. When things are built with existing components, you can go a really long way quite quickly, can't you? So I don't know whether the audience has any more questions, but feel free to put them in if you do. One of the things I was thinking is that what you've really managed to do is change my view of echoes. I think I'd always thought of them as binary, you know, there is an echo or there isn't an echo, and with this multipath work you've really opened my eyes, and I guess my ears, to the idea that they bounce off all sorts of different surfaces in different ways. It's something really different that I hadn't considered. Now, were you hoping to say anything else about your work? Is there anything you'd like to encourage the audience to look out for?

I think you summarized it very nicely earlier on with your comment that we're sort of doing echo science, and you're right that all this work and thinking about echoes has led us to think very differently about what echoes mean. You know, it's always been fun as a kid to find places that echo, and sometimes you find these arches where you can hear multiple echoes, and it's fun. But it's really intriguing to see that it actually goes a lot further than that: it's fascinating, it's useful, and it could potentially change our lives in the near future. I think that's the interesting take-home message.

Thank you. And Alex, did you want to have a last word?
Just going back to the idea of echoes and why we find it so fascinating and relevant to our work: I was reading a physiology book, and apparently we humans have a cutoff in terms of how long we wait to hear the next sound. When you talk, you wait for your own voice, or other voices, to come back to your ear, but there is basically a cutoff of a few milliseconds in the brain; after this cutoff we can't hear the echo, our brain can't process more of the data. And that's why we are generally not able to use these very extended, multipath echoes. This is precisely what this hardware and this technology allow: we are not using animal brains anymore, we are using artificial brains, through our algorithms, and these allow us to listen for longer and to extract much more information. And as I was saying, this hopefully points to a brighter future in terms of how we can use all this information, what applications we can find for it, and so on.

Well, thank you. Thanks, Alex. I had no idea there were so many implications of this; I really appreciated it.

Thank you very much, and thank you for hosting.

Thank you. Well, good. We are headed into a break now. The next session starts at 6 p.m., and I do recommend you come back. It's called A Thousand Brains, with Azeem Azhar and Jeff Hawkins, and it should be a really good one. In the meantime, there's still networking, and all sorts of things to explore in the platform. So please do that. See you in a bit. Bye.
