AI for High-Stakes Decision Making with Hima Lakkaraju – #387
Welcome to the TWIML AI Podcast. I'm your host, Sam Charrington.

Hey, what is up, good TWIML people? Before we jump into today's show from our CVPR series, I'd like to share a few quick details about the next great event in our continuing live discussion series. Join us on Wednesday, July 1st for the great machine learning language un-debate, as we explore the strengths, weaknesses and approaches of both popular and emerging programming languages for machine learning. We'll have great speakers representing Python, R, Swift, Clojure, Scala, Julia and more. The session kicks off at 11am Pacific time on the first, and you won't want to miss it, so head over to twimlai.com/languages to get registered. At this point, I'd like to send a huge thank you to our friends at Qualcomm for their support of this podcast and their sponsorship of our CVPR series. Qualcomm AI Research is dedicated to advancing AI to make its core capabilities, perception, reasoning and action, ubiquitous across devices. Their work makes it possible for billions of users around the world to have AI-enhanced experiences on Qualcomm Technologies-powered devices. To learn more about Qualcomm and what they're up to on the AI research front, visit twimlai.com/qualcomm. Now on to the show.

All right, everyone. I am here with Hima Lakkaraju. Hima is an assistant professor at Harvard University with joint appointments in both the business school and the Department of Computer Science. Hima, welcome to the TWIML AI Podcast.

Thank you so much, Sam. I'm super excited to be here.

Same here. I'm really looking forward to our conversation. We will start where we typically do on the show and have you share a little bit about your background and how you came to work in machine learning, in particular your focus on fair and interpretable ML and its implications in mission-critical, high-stakes domains like criminal justice, healthcare and public policy. How did you get started in all this?
So, that's actually an interesting story. Let me try and summarize it in hopefully a few sentences so that we are not hogging all the time here. So basically I come from India. I moved to the United States for my PhD in 2012. I had been working in machine learning since I was a student in India, and I was publishing actively in machine learning, but my interest in the applications of machine learning to some of these domains like criminal justice or healthcare, that started, or became a prominent thread in my research, when I started my PhD. And this was mostly due to a collaboration between my advisor and a couple of professors in economics who were working in behavioral economics, and they introduced us to all these fascinating problems. By then I had already explored machine learning to a reasonable extent, and I was looking for applications which were more than just ad recommendations or friendship recommendations and so on, so that I could keep myself going in the field and also anchor onto something which is more applied in the sense of real-world settings. So I guess my PhD was one of the main times in my life where I got into both machine learning as well as its applications to some of these domains, which are super fascinating, and the broad field of fairness and interpretability.

Yeah. You know, I suspect when we dig into your recent talk at CVPR, where you were an invited speaker in the Fair, Data-Efficient and Trusted Computer Vision workshop, we will learn about a bit of your research, but broadly, how do you frame out the kinds of questions that you're looking to answer with your work?
I guess in a broad sense, the way I think about my research is that it's about enabling machine learning to help with decision making in high-stakes settings, and that involves some sub-questions, like how can we make sure that machine learning models, which are of course getting more and more complex day by day, are in a more palatable form for decision makers who are not necessarily experts in machine learning. So how do we explain these models, and what are the algorithms that we can use which can in turn explain these models to people who are not machine learning experts? That's of course one of the key questions, and also the core question behind interpretable ML. And beyond that, when we develop some of these tools which will assist these decision makers in important decisions, how do we ensure that the tools or the algorithms that we are developing are fair by default? Otherwise they can induce their own discriminatory and undesirable biases into the entire real-world decision making. So that's of course another question. And more broadly, also just trying to develop models and methods to understand what kinds of biases exist as of now in human decision making, even if there was no algorithm in the picture, as well as how to diagnose biases if someone gives me an algorithm, what is the best way to do that. So these are roughly the broad questions I think about.

And so your talk is titled Understanding the Limits of Explainability in ML-Assisted Decision Making, and there are some interesting tidbits that I'm looking forward to digging into around some of the explainability algorithms, like LIME and SHAP. But before we even do that, thinking about the topic of your talk and the workshop makes me think of a podcast that I did with Cynthia Rudin last year, and her perspective seems to come from a different direction, which is that we shouldn't even be using black-box models for the kinds of problems that you're studying.
We should be using models that are more fundamentally understandable, and in many conversations I've had on the topic there's this tension between explainability and interpretability. So I'm curious, right out of the gate, what's your take on all that?

Oh, I think we're already starting with a very interesting and controversial topic. Cynthia has been a mentor and a collaborator for several years, but we somehow manage to coexist with this dichotomy. So yeah, I agree with this point, or rather my take on this would be that if at all it is possible for you to develop a model that is interpretable by default and is also accurate, and you have the data to build such a model, by all means you should go for it, because there are no barriers there. But unfortunately the real world is not always like that. In some cases you may not have enough training data, for example, to build a disease diagnosis model. You might then be using a proprietary model that some other company has built, and in that case you would still want to do some diagnostic checks to ensure that the model is doing what it's supposed to do and that the way it's making predictions is reasonable. Those are the kinds of cases where explaining a given model, or a black-box model as we are calling it, is probably the only option, because you don't have the ability to build such a model yourself, due to lack of data or resources or any number of other reasons, but you have the capability to buy or get this model from a third party, and you still want to vet it or understand what's roughly going on with the tiny bit of data that you have, which may not be enough to develop an accurate model, but might be decent enough to vet a given model.
So if that is the context you are dealing with, then essentially explaining or understanding what the black box might be doing is probably the only option. These are, again, as I mentioned earlier, constraints that arise in the real world, and that's what we are thinking about. But yes, if you have the means to develop an interpretable model from scratch, you have all the data, you have the necessary means, and it's also accurate of course, then that is definitely the way to go. It's just that there are many other real-world contexts where that might not be the case.

In the scenarios where you can't do something that is more fundamentally transparent, where you are doing something that's a black box and you want to explain it, there are some known and popular methods for achieving some level of explainability, and I mentioned a couple of those already, LIME and SHAP. But a part of your presentation references some previous research that you have done that has shown that that work can be vulnerable to attack. With your presentation, what is the broad landscape that you are looking to carve out? We'll get to the particulars of LIME and SHAP when we get to them. How do you frame the problem of understanding the limits of the explainability tools?

Yeah. So I think this talk, and more broadly my recent research, has been exploring the limits of explainability, and what I mean by that is, so far, at least in the past few years, there has been a lot of interest in coming up with new algorithms which can explain black boxes. There is a huge body of research that has built up, I think since 2016, pretty much papers on top of papers, and every paper comes up with another new method for explaining a given black-box classifier or prediction model.
So one of the things that, as we were seeing more and more work on this topic, me and some of my collaborators on this work got excited about is how to start thinking about all the ways in which these explanation techniques can be gamed, or potentially even unintentionally misused, to generate explanations which could fool people or mislead end users into trusting something that they should not trust. So for example, let's say you have a black-box model which is actually using race or gender, some of these sensitive features which are prohibited from being the key aspects in making critical decisions like who gets bail or who gets a loan and so on. If a black-box model is using some of these features, and your explanation is somehow misled to think that that's not the feature it is using, but that instead it's using another correlate, for example a zip code, when making predictions, an end user might look at this explanation and just be misled: oh, this seems like a model that's not using race, it's not racially biased, it's using other correlates, so maybe it's fine to deploy it. So an explanation that is misleading in terms of explaining what the black box is doing can have serious consequences in the real world, as we can see with this kind of example. The line of work that I'm pursuing currently is how to identify some of the issues or vulnerabilities of existing methods which can potentially lead to these kinds of misleading explanations, and also understanding what the real-world impact is if there is a misleading explanation. What would be the consequences of that in the real world? And for that I'm also doing a bunch of user studies with students from law schools, and healthcare professionals, and so on, to see how a misleading explanation can affect their work. This is pretty much what the talk is all about.
I've got to imagine that the effects of these misleading explanations vary pretty dramatically depending on the setting in which they're used.

That is definitely true. Yeah. Since we are discussing this anyway, let me segue a bit into the second part of the talk, which essentially covers the effects of these explanations in a particular context, and the context we are looking at is, let's say there is a model that is designed to predict who should get bail, or at least it's designed to assist a judge in determining who should get bail, but in the process the model is also making predictions as to who should get bail. Before such a model is even deployed in the real world, ideally the judges or their teams would put it through a lot of vetting, and people would like to look at what the model is exactly doing before they decide to trust and deploy it. So in that context we designed a small experiment with law school students. Of course this is all a proxy, because the time of judges and these senior domain experts is much more valuable and not easily available, so we were trying to mimic that with law school students here at Harvard and at UConn. And basically what we did was we built a simple, in fact Rudin-style, rule-based model, which is the actual black box, which explicitly has some rules which are used to determine who gets bail or not. And in that we specifically used all these bad or undesirable features like race or gender to determine who gets bail, and then we constructed an explanation for this rule-based model, which is another set of rules, and in that we explicitly avoided the usage of the features race and gender, but the explanation is free to replace these features with their correlates.
So race could be substituted by zip code or anything else, but the explanation is just not allowed to show race or gender explicitly, while it should still make the same predictions as what would be made by the original black-box model that we constructed.

And was there an actual system that generated these explanations, or did you create these explanations to simulate what a system might do under the set of conditions that you outlined?

One of the existing techniques is what we used to generate the explanations for this black box. It's just that we put an additional constraint on that technique to make sure that race and gender do not show up in the explanations. Now, the experiment revealed very insightful things to us. We took a bunch of law school students and split them into two groups. For one set of people we showed the actual model, which has race and gender, and then we asked them: if this is the model, or if this is an explanation of a model, which is showing you that race and gender are being used in making these predictions of who should get bail, would you trust this model enough to deploy it in your court, assuming you were a judge? And most of the people, as expected, said no, the model is using race and gender, I don't want to deploy it anywhere close to me.
But then we took the explanation, which was tailored to hide or cover up some of these problematic features, and showed it to the other half of the people, and we said: this is an explanation generated by a state-of-the-art machine learning method to explain a black box; using this, do you now trust the underlying model enough to deploy it? And most people said yeah, sure, because it seems to be doing something which reasonably matches my intuition of how we should determine whether someone should be given bail or not, I don't see usage of any prohibited or problematic features, sure, let's go ahead. So with the actual true model, less than 10% of the people trusted it, and with the explanation that we generated, which is essentially doing the same thing but replacing race and gender with their correlates, about 80% plus of the people trusted it.

So it reminds me a little bit of some experiments that Ayanna Howard shared with me about her research into the authority that we tend to confer on computing systems, in her case robots. And the example that she gave was a robot that is presumably supposed to lead you out of a fire or a dangerous condition in a building: people will stand behind it as it bangs itself against a wall, waiting for it to all of a sudden do the right thing, because we just want to believe that these things are more infallible than they are.

Right, yeah. I think this research also fits in line with some of that work, in the sense that people are probably already approaching models and model explanations from the perspective of some prior trust. They are already willing to trust them, which is why some of these issues that we are seeing actually show up.
And, realizing we've kind of jumped ahead to the end of the talk, did you further explore different ways to present that result, besides the one example where you show race and the other where you hide it? Are there things that you've played with, like showing the different correlating features or other things that can help the human understand what's really happening?

Yeah, so that's actually the ongoing research that we're doing: what is the best way in which we can educate people that, hey, the explanation that you saw is purely correlational, and when it says that zip code is being used, it could essentially mean that zip code or any of its correlates could actually be used to make predictions. In fact, one thing is we designed a very short 5 to 10 minute primer or tutorial just highlighting some of these examples: just because you don't see race, it could still be present, because the correlation between race and zip code is greater than point eight or something. Designing some examples like these, in a very short 10 minute tutorial, we can already see some big improvements in terms of how people latch on to some of those ideas, and the next time we ask them a similar question they are less likely to make this kind of a mistake. So we are also thinking about what trainings might help people realize some of these things, because again, what we are designing, even the way that we are producing these explanations, our intention is that they would be used by someone who is not an expert in machine learning. So we should also be prepared to teach them how to think about these explanations and what they can or cannot provide in terms of information. I guess that's the next step or the next research that we are conducting.
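The point about correlates is easy to demonstrate with a toy sketch. Everything below is synthetic and invented for illustration (the feature names and the 90% agreement rate are assumptions, not data from the study): a model trained only on a highly correlated proxy reproduces a sensitive-feature rule almost exactly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic sensitive feature, and a "zip_code" proxy that agrees with it 90% of the time.
race = rng.integers(0, 2, size=n)
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)

# Suppose the black box's "bad" rule decides purely on race.
y = race

# The two features are strongly correlated (roughly 0.8 here, matching the
# "greater than point eight" figure mentioned above).
corr = np.corrcoef(race, zip_code)[0, 1]

# A model that never sees race still recovers the race-based rule via the proxy.
proxy_model = LogisticRegression().fit(zip_code.reshape(-1, 1), y)
accuracy = proxy_model.score(zip_code.reshape(-1, 1), y)  # close to the 90% agreement rate
```

So an explanation that truthfully reports zip code as the important feature can still describe a model whose decisions track race almost perfectly, which is the lesson the tutorial tries to teach.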
And again, going back to your earlier question, this is of course one scenario, as we talked about, where misleading explanations had this impact. We are also looking at other scenarios, again in healthcare and a bit in the business domains, where we are looking at different kinds of decisions, some of which are more high stakes, others a bit more low stakes. In those cases, what would be the implications of misleading explanations?

So let's maybe take a few steps back and talk a little bit about the explainability techniques that you are seeing in use, and where you are seeing them in use. Have you done a survey of the various techniques and how they are being used in practice? I know you talk specifically about LIME and SHAP, and I hear those come up probably more than any others, but I'm wondering if you've looked broadly at that.

Yeah, so that's not an active area of my research. There are, I think, some other folks who are thinking about these things, but in general you're right: one of the reasons we picked those two techniques was because they were being very widely used in practice, in industry and in other real-world settings. So that was also one reason to see if there are any vulnerabilities in those first, because they are so widely used. Beyond that, there are also several techniques which are probably much less popular, but which try to address some of the issues that are present in the first two techniques, LIME and SHAP. Just to name a few, for example, MAPLE is another approach that has been proposed which tries to get rid of some of the ad hoc perturbations, or the ad hoc pieces within LIME, and make them more systematic. That's one approach, just to give an example, and of course there are several more, like MUSE and a bunch of other things, Anchors and so on.
So there has been a lot of work built up on this entire topic of explaining black boxes, as I said, since 2016. By now there are countless approaches, but I think the most well known among these are LIME and SHAP. Now coming to the second part of your question, how are people using these techniques: honestly, in the domains that I look at, we are still trying to make decision makers like doctors or judges aware of these techniques and how they should even use them. That's the situation in the domains that I deal with a lot, but I can easily imagine that if you're looking at, say, a startup or a tech company, people there are more familiar with these kinds of techniques, and they may already be using some of them in their day-to-day job, whether as a developer or an engineer or a scientist trying to understand what a particular model is doing, maybe to debug the model and so on. I can imagine all those kinds of use cases already underway, where explainability is playing a bigger role in practice in day-to-day applications. But in the domains that I deal with, especially where you are a bit more detached from the core machine learning and you're dealing with people who make different kinds of decisions, who are not tech people or experts, these kinds of approaches are barely reaching them at this point, I would say.

In your talk, did you go over how the different techniques work and what some of the weaknesses or blind spots inherent to them are, and where they come from?

Yeah, so the first part of the talk is mainly about the weaknesses of LIME and SHAP. To think about the explanation techniques more broadly, you can roughly characterize them into two categories.
One is local explanation methods, and the other is global explanation methods. I guess as the names suggest, local means you just think of explaining a model's behavior only within a particular tiny locality or neighborhood in the data. So for a small piece of the feature space, you are trying to explain what the model is doing there as best as you can. That's the local explanation methods. With the global explanation methods, you somehow want to give the entire picture of what the black-box model might be doing, the whole big picture. The use cases of these two could be different. In the case of global explanations, the idea would be that someone who is deciding if a model is good enough or whether it should be deployed, maybe a team of judges, or a stakeholder who has a lot of authority on deciding if some model should be deployed or not, might use those to vet the model and decide: is this model even reasonable enough to deploy? That's where the global explanations come into the picture, because you are giving a zoomed-out view of what the model's behavior looks like. On the other hand, when you think of local explanations, it could be that after a model is deployed, let's say in a hospital for diagnosis, for every patient the model will give you a particular diagnosis, saying whether someone has diabetes or not, for example. In such cases you also want to get an explanation for why that prediction was made the way it was. So there we focus more on the local, or instance-level, or singular predictions, and that's the case where, after a model is deployed, a doctor is just double-checking that a single prediction makes sense.
So that's the use case for local, and for global it is to decide if a model at a high level is even good enough to vet and deploy.

And so do the local and global methods share the same weaknesses or issues, or are they different?

Yeah. So in fact the weaknesses are specific to the exact techniques employed to generate the global or local explanations. For example, the first part of my talk is broadly focused on what are called perturbation-based methods, and I'll get into the details of what I mean by that in a bit. Then there are other methods which focus on using gradients to determine what features are being used when making a prediction. These two classes of techniques are vulnerable to different kinds of attacks. The same attack would not work both for a perturbation-based method and for a gradient-based method. So the attacks are specific to the exact techniques that these methods are using. They're much more fine-grained than even just local and global.

Okay. So in the case of a perturbation type of method like LIME, what does the attack look like? How are those attacks constructed?

Right. So let me start by giving an intuition about what LIME does, so that it becomes clear what the attack would look like. At a very basic or core level, LIME is trying to explain individual predictions of a classifier. For each prediction, it's trying to give you which features were important and what their weightage was in making this prediction. Now, what LIME does is, if we want to explain a prediction for a particular data point, LIME takes the data point and then perturbs it, and when I say that, think of it as adding some noise to different features of this data point. That's what we call a perturbation.
So you slightly massage the values of this data point, generate another artificial data point, and keep doing this until you have a bunch of data points which resulted from perturbing that initial instance, the data point you wanted to explain. Now let's say we got 100 such perturbations, or massaged data points, and then you have the actual data point that you wanted to explain. Think of it as: you just build a linear regression model on top of this, so that that model is predicting what the black-box model's predictions are for these 100 data points. It's basically taking a data point, massaging it to create a small artificial data set around the data point, then fitting a regression model, and that will give you the feature importance weights for each of the features. That's what LIME is doing. Why this is called a perturbation-based method is that in order to even fit a linear regression model, or any local linear model, there, you are generating some perturbations of the initial data point you started from. That's what we call perturbation-based methods. Now the attack, once you know a key intuition, becomes very clear and obvious. What we found as part of analyzing what LIME is doing, in fact one of the PhD students who works with us found this, was that the perturbations being generated by LIME are actually not points from the data distribution. These points look very different than the points that are actually in the data distribution that we care about. They could potentially even be off-manifold data points, or points that are very far off, because you are just massaging a point and assuming that you will end up with a point that is close enough, but that does not always happen.
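The procedure just described can be sketched in a few lines. This is a simplified stand-in, not the actual LIME implementation (real LIME also weights the perturbed samples by their proximity to the original point and handles categorical features differently); the toy black box and the instance values are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical black box we can only query: thresholds a weighted sum of features.
def black_box(X):
    return (2.0 * X[:, 0] - 1.0 * X[:, 1] > 0).astype(float)

x = np.array([0.5, 0.2, 0.9])  # the data point whose prediction we want to explain

# Step 1: "massage" the point, generating 100 perturbed copies by adding noise.
perturbations = x + rng.normal(scale=0.3, size=(100, x.size))

# Step 2: ask the black box for its predictions on the perturbed points.
preds = black_box(perturbations)

# Step 3: fit a local linear surrogate; its coefficients are the per-feature
# importance weights for this single prediction.
surrogate = LinearRegression().fit(perturbations, preds)
weights = surrogate.coef_  # feature 0 pushes toward 1, feature 1 pushes against, feature 2 is noise
```

Note that nothing in step 1 constrains the perturbed points to look like real data, which is exactly the weakness the attack described next exploits.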
So the perturbations that you are generating using LIME are not really points belonging to the in-sample data distribution; that's a key point to remember. Now that we realized that was the case, what we did was design a simple attack, building what we call an adversarial classifier, which does something very sneaky: this classifier looks at a point and then basically decides if the point is in the data sample or if it's a perturbation. So it makes a prediction as to whether this point was generated by perturbation, a synthetic data point, or whether this point was in the data. Now, when a point is in the data, this classifier says: let's use race or gender, or any other undesired attribute that you could think of, to make a prediction. But if a point is a synthetic data point, which means it's a perturbation, then look super fair: don't touch any bad features, remain super pristine, do something very clean and innocuous-looking. When you do that, and I give that classifier to LIME to explain, since LIME relies so much on perturbations, it thinks: oh, this model is doing fine on perturbations, I don't see race as an important feature when it's making predictions on those. So it will just assume that the model is not using race as a feature when making predictions. Whereas underneath, what is happening is there is a wrapper, an adversarial wrapper you can think of, that is nicely shielding its shady behavior, for lack of a better word, by doing all the shady things on in-sample data points and then looking very innocent on any perturbations.
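A minimal sketch of such a wrapper, under stated assumptions: a crude exact-match distance test stands in for the trained in-distribution-vs-perturbation classifier used in the actual work, and feature 0 plays the role of the sensitive attribute while feature 1 is the innocuous correlate.

```python
import numpy as np

class AdversarialWrapper:
    """Behaves in a biased way on real data points, but innocuously on
    anything that looks like a synthetic perturbation."""

    def __init__(self, real_data):
        self.real_data = np.asarray(real_data, dtype=float)

    def _looks_real(self, x):
        # Stand-in for the learned is-this-a-perturbation classifier:
        # treat a point as "real" only if it matches a known data point.
        return np.min(np.linalg.norm(self.real_data - x, axis=1)) < 1e-9

    def predict(self, X):
        out = []
        for x in np.asarray(X, dtype=float):
            if self._looks_real(x):
                out.append(x[0])  # biased rule: decide purely on the sensitive feature
            else:
                out.append(x[1])  # innocuous rule that the explainer ends up seeing
        return np.array(out)
```

Because LIME queries the model almost exclusively on perturbed points, its surrogate is fit to the innocuous branch and reports feature 1 as important, while every real decision is driven entirely by feature 0.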
So that is the attack which is throwing off LIME. Even though the model uses race as the only and main feature in making predictions, because it has an innocent behavior on the perturbation data points, LIME just assumes that it is using some very innocuous features when making predictions, and it can never catch this underlying behavior which uses race.

So the setting kind of goes back to your setup at the very beginning of our conversation: maybe you can't use a transparent model that you developed yourself, so you're getting a model from someone else, off the shelf, and the attacker in this case is whoever's creating the model. And the scenario kind of reminds me of Volkswagen gaming the EPA, when the cars detected that they were being tested for emissions: they changed the way that they were throttled or whatever to make their emissions fall within spec, but out on the road they were, you know...

Cheating the levels. Exactly. Yeah, this is I think a very good analogy. That's pretty much what this adversary who is designing this off-the-shelf classifier is also doing. So the main idea is, if people are just using, for example, LIME or SHAP to determine whether there are any underlying racial or gender biases in a classifier, then the adversary can successfully fool them, because they're able to fool these explanation methods.

And in this work, in this presentation, do you propose any protections for this, or are you identifying the attack?

Yeah, the attack vector. Right. So that particular piece of work was just identifying the attack, because that itself was, I think, one of the initial works which even talks about attacks on explanation methods.
But our ongoing work is definitely looking at how to design explanation methods that are robust to these attacks, that cannot be gamed to make these kinds of attacks successful. That's an ongoing stream of research.

And so would a method like LIME work if your perturbations only produced results in the direction of the in-distribution data? Is that a direction you're looking at?

Yeah, that's one of the directions we're looking at. But you can also think of this as two problems that are two sides of a coin. One is, you can make these perturbations more and more similar to your data instances, which will potentially subvert this kind of attack. So we can make a "LIME++" where the perturbations look more and more similar to your data instances. That's a fix for this attack. But then comes another problem, which is that your explanations start becoming more and more data-dependent. If you have a dataset, the explanation you build will only hold for that dataset, which means if you change the dataset, the explanation is no longer going to hold. That's the other side of the coin, because you are tying the explanation very tightly to the data. How to fix that is another problem, but this is again a bit of a trade-off: you can think of it like a scale, where the more you move to one side, the more issues you create on the other. So we're also looking at formalizing those trade-offs, saying that as your perturbations look more and more similar to your data, yes, you subvert one attack, but you're creating explanations that only hold for your data.
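One simple way to make perturbations more in-distribution, in the spirit of the "LIME++" idea from the conversation, is to sample near the instance from the training data itself rather than adding isotropic noise. This is an illustrative sketch, not part of the actual LIME API; the function name and bandwidth are assumptions.

```python
import numpy as np

def in_distribution_perturbations(X_train, x, n=100, bandwidth=0.1, seed=0):
    """Hypothetical sketch: instead of isotropic noise around x, resample
    training points near x and jitter them slightly (KDE-style sampling),
    so perturbations stay close to the observed data manifold."""
    rng = np.random.default_rng(seed)
    # Weight training points by proximity to the instance being explained.
    dist = np.linalg.norm(X_train - x, axis=1)
    w = np.exp(-dist**2 / (2 * bandwidth**2))
    w = w / w.sum()
    # Resample real points, then add small kernel noise.
    idx = rng.choice(len(X_train), size=n, p=w)
    return X_train[idx] + rng.normal(0.0, bandwidth, size=(n, X_train.shape[1]))

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))   # toy training data
Z = in_distribution_perturbations(X, np.array([0.5, 0.5]))
```

The trade-off discussed above shows up directly here: every sample is tied to `X_train`, so the resulting explanation is only as general as that dataset.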
So is that good or is that bad, and what are the trade-offs between the two?

Okay. And in your research, have you identified any other similar types of attacks? Are there others that have been proposed?

So again, as I said, this was one of the initial ones. As a follow-up work, there is another team at Utah with a recent ICML paper on attacks specific to SHAP. That work builds on some of our earlier work, but so far these have mostly been attempts at attacking perturbation-based methods, so there is a lot of scope for open work on other kinds of methods, including gradient-based methods, and even global versus local explanations: what can be attacked, and what is most vulnerable in each of these, and so on. So there's a whole set of open problems that haven't really been addressed so far.

Yeah. And are there any interesting connections between the work you're doing here and the broader research field of adversarial machine learning attacks? A lot of this is based on perturbations and noise, so there's at least some clear overlap. Does one have something to offer the other, and vice versa?

Yes, a lot. This work is actually inspired by the adversarial machine learning literature. There, the focus was more on finding examples, data points, that can throw off a classifier, whereas here the focus is: let's find something that throws off an explanation method. So there's a very clear parallel: what adversarial machine learning was doing for classifiers and prediction models, we're trying to do with explanations. In that way there's a pretty tight connection. In fact, as I was saying, we have some ongoing work that tries to address some of these vulnerabilities, in perturbation methods or otherwise.
The way we try to fix the vulnerabilities and come up with a new explanation method is also inspired by how people think about adversarially robust classifiers in the adversarial machine learning literature. Just as people first asked what the vulnerabilities are, where things break down for classifiers, and then how to develop a robust classifier, the same thing is playing out in parallel in the explainability literature. So yeah, that's a pretty clear connection.

A lot of that work ended up saying more regularization is the approach. Is that going to be the answer here too?

I guess there are a couple of things. One is, even beyond regularization, it's also about formulations like minimax: thinking about the maximum possible error you could have over a variety of distributions you want your model to work on, or a variety of datasets you want your explanation to hold on, and minimizing that maximum error. Those kinds of formulations are very helpful, apart from regularization and so on, so those ideas can usefully flow from that community to the explainability community. Beyond that, I'm also hopeful that there could be other interesting challenges with explainability. The reason I say that is that algorithms are increasingly being used for various decisions: whether someone gets a loan, whether someone gets a particular treatment, or whether they are granted bail.
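The minimax idea mentioned above can be written as an objective, as a sketch; the notation here is illustrative rather than taken from any specific paper:

```latex
% f: black-box model; g_e: simple explanation model with parameters e;
% \mathcal{D}: family of data distributions the explanation should hold on.
e^{*} \;=\; \arg\min_{e} \; \max_{D \in \mathcal{D}} \;
  \mathbb{E}_{x \sim D}\!\left[ \ell\big(f(x),\, g_{e}(x)\big) \right]
```

That is, rather than fitting the explanation to one perturbation distribution (which the scaffolding attack exploits), you pick the explanation whose worst-case infidelity over a family of plausible distributions is smallest.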
So there is an increasing call from both legal scholars and social science scholars to make these machines also provide recourse to people. What do I mean by that? If I, as a bank, am using an algorithm and I tell someone their loan is denied, I also need to tell them what needs to change in their profile so that they can come back and get the loan. So these algorithms are being made more accountable, which means the gaming of these kinds of methods is going to be very real. When you think of classifiers and an adversary giving you an adversarial sample, the danger somehow seems a bit more limited to me than when it comes to explainability, where people are relying heavily on these methods, and things are increasingly moving in the direction of people looking at explanations to decide which models to use and whether a prediction is reliable. So as we hit more and more of these real-world scenarios, we also need to be worried, because the risk of these things being manipulated and gamed is very real and very high. At the same time, I can see a lot more real-world applications of these scenarios, probably way more than someone trying to change pixels in an image. So while that's an interesting problem to solve from an engineering or technical perspective, here the social implications are very real. I'm hoping this also brings with it more interesting technical challenges.
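The recourse idea just described, telling a denied applicant what to change, can be illustrated with a toy sketch. The model, feature names, and greedy search here are all hypothetical assumptions; real recourse methods solve a proper optimization problem.

```python
import numpy as np

def minimal_recourse(predict, x, feature_steps, max_steps=50):
    """Toy sketch of algorithmic recourse: greedily nudge mutable
    features until the model's decision flips to approval (1).
    `predict` maps a feature vector to 0/1; `feature_steps` lists
    (index, step) pairs for features the applicant can actually change."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_steps):
        if predict(x) == 1:
            return x                      # already approved
        for i, step in feature_steps:
            trial = x.copy()
            trial[i] += step
            if predict(trial) == 1:
                return trial              # one more step flips the decision
        x[feature_steps[0][0]] += feature_steps[0][1]  # keep nudging
    return None                           # no recourse within budget

# Toy bank model (hypothetical): approve if income > 50 (in thousands).
approve = lambda v: int(v[0] > 50)
plan = minimal_recourse(approve, [40.0], [(0, 5.0)])
```

The returned `plan` is the changed profile; the worry raised above is exactly that once such feedback loops exist, adversaries have strong incentives to game both the model and its explanations.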
Is there a GAN application here, where you've got two adversarial models, one trying to fool the other: one trying to pick out the out-of-distribution samples while the other tries to cheat it?

Yeah, I think we're headed there. Some of even our ongoing work is headed there. But the way my group and some of the other researchers are approaching this problem is by trying to keep a real ear to the ground, because a case where someone can build a classifier that does something messy on in-sample data points and looks very pristine and clean on the perturbations the approach relies on is a very realistic thing, and it's not even a super sophisticated attack if you think about it. So at this point we're trying to keep our ear to the ground and look at the most plausible scenarios and how to fix them, and then of course some of these other directions, which are definitely interesting from a technical perspective, will happen naturally.

Right. Well, Hima, thanks so much for sharing a bit about your presentation and your research.

Yeah, thank you so much. This was amazing. I had a great time talking to you.

Same here. Thank you.

All right, everyone, that's our show for today. For more information on today's show, visit twimlai.com/shows. As always, thanks so much for listening, and catch you next time.