Panel: Content moderation beyond the ban: Reducing toxic, misleading, and low-quality content


Welcome, everybody, to the Microsoft Research Summit, the responsible AI track, and right now the panel entitled Content Moderation Beyond the Ban: Reducing borderline, toxic, misleading, and low-quality content. My name is Tarleton Gillespie. I'm a senior principal researcher at Microsoft Research in the New England lab, and I've co-organized this panel with Zoe Darmé. Zoe is currently at Google as, I wrote this down, a senior manager in public policy and global affairs handling search. I want to tell you a bit about the panel and what we're going to talk about today, and then I'll introduce our three speakers. Hopefully we'll have a chance to hear from them and then also open up a discussion.

Let me start by saying that we're thinking about content moderation here, and it's been important and amazing to see that content moderation has grown as a public discussion. We pay more attention to it. It's sometimes on the front page of the newspaper. It's a discussion in legislative corners and academic corners, and this is phenomenal. But one thing that Zoe and I were noticing and wanted to talk with some experts about is that we tend to focus very exclusively on the question of removal: the removal of content, things that get deleted, and the removal of users, people who get banned. And this is an important part of what social media platforms do, but it's not the only thing. In fact, we think there is a whole set of tactics and techniques that platforms often use when they're trying to deal with misinformation, harassment, threats of violence, terrorism, pornography, and you name it, that don't look like removal and suspension. And in fact, this array of tactics is quite important, and perhaps growing in importance. So the idea of this panel is to dedicate some time to think about those tactics, bring them to the forefront, and really try to bring an interdisciplinary lens to what these tactics are, how they work, what their implications are, how effective they might be, what kinds of problems they raise, and how we go forward thinking about content moderation in a way that includes these tactics and recognizes their growing importance.

Eric Goldman wrote an excellent paper called Content Moderation Remedies, and he was making a similar point: that if we set aside the question of removal, there's a whole set of other tactics that platforms have been using for quite a long time, not just in theory but in practice. Let me give you a few examples. This is not an exhaustive list, and more may come up as we're talking. Think back to when we were debating whether Twitter and other platforms should be putting fact-check messages or warnings on the tweets of President Trump and other political figures. The idea of putting a fact check or warning is a tactic that doesn't necessarily remove the tweet, but changes where it's going to go and how people understand it. YouTube is playing around with nudging commenters who might be writing something that could be harassing or could be violent, asking, are you sure you want to post this comment, in order to sort of keep discussion respectful. It doesn't prevent someone from doing it, but maybe it's going to nudge or change the tendency to do so. Reddit has been playing around with a technique called quarantining, where whole areas, whole subreddits on topics that tend to be places full of misinformation or violence, can be sort of set aside.
They still exist, but the posts they write won't show up on the front page, there's a warning before you enter them, and there are limits on how they can earn money. We've seen growing efforts to reduce the visibility of misinformation; a lot of platforms are doing that. That can mean taking certain kinds of borderline content and removing it from recommendations, recommending it less, or leaving it out of search results, so that it's still there but doesn't circulate quite as widely. We're seeing this especially around the pandemic and vaccine misinformation, but not exclusively. Instagram recently introduced a sensitive content control letting users decide how much of that content, and how racy, would show up in their recommendations, and those settings were based on the idea that you might want to see more of it or less of it, and you could sort of adjust that. Other platforms are playing with design techniques to give users different ways to calibrate that content. Very recently in the Wall Street Journal we've had revelations about Facebook and a program they called XCheck, which was setting aside many users who were public figures, people who might be PR issues, people who had prominent followings, to handle them differently, to handle them in a different process. So whole shadow structures of processes that might not look like your traditional do-you-take-it-down, do-you-remove-the-user.

And we think that these tactics are growing in prominence and are underattended to, because we tend to discuss the things that look like censorship. We worry the most about content that's removed or people that are kicked off, because that triggers a whole lot of questions about free speech and journalism and bias and politics. And these tactics are also extremely hard to study. It's hard to know when they're being introduced, it's hard to know how much a video has been reduced or how far a tweet would have gone otherwise. And so these are very difficult things for us to consider and to research. So the idea of this panel, again, is to look beyond the ban, to think about these tactics as a category across many platforms, some that have been in place for a long time, some that are showing up quite recently. Why do these tactics matter? How do they work? What are the different ways of thinking about the logics behind them and the whole moderation ecosystem? And what are the implications of these approaches for the platforms, for the users, for possible regulation, for the future of online discourse? Okay, so that's a quick introduction, but I'm sure that you'll learn more about these tactics as we begin to discuss how we should think about them.

So let me introduce the rest of our panel. We're going to hear first from Charlotte Willner. Willner is the director of the Trust and Safety Professional Association, after having spent many years working first for Facebook and then for Pinterest. Then we'll hear from Ryan Calo. Ryan Calo is a professor in the School of Law at the University of Washington. And then finally we'll hear from Sarita Schoenebeck, who is an associate professor in the School of Information at the University of Michigan. And then we'll have some Q&A from Zoe and myself and see what we can figure out.
I want to highlight the fact that we very deliberately wanted to get some very different perspectives here: an industry perspective, a legal perspective, a design perspective, a sociological perspective. These problems require this kind of questioning, and they touch on all of these issues.

Okay, so let's start with Charlotte. Thank you so much for being here. So we know that content moderation is a messy project for platforms large and small doing all sorts of things. These tactics, these reduction tactics and filtering tactics and labeling tactics, the ones that go beyond removal: are they as messy? Are they a solution to the mess? How are you thinking about them, and how do you think platforms are thinking about them?

Absolutely. First, I would say, in the words of Marie Kondo, I love mess. And I think that to be in content moderation, in a way, you've got to love mess, because the reality is life is messy, right? Humans are messy. So there is no one clear answer on the question of content moderation where it's like, ah, this is the un-messy way to do it. You're always going to be thinking about trade-offs. And a lot of the way we think about our work is in terms of trade-offs. If there were one right answer that everyone could do, everyone would be doing it, right? Instead, what you're often trying to do is think through, okay, sometimes all you have is bad choices, but usually you have a few sort of mediocre choices, and you're going to have to figure out what are the outcomes that we're probably going to see here, and are we willing to accept those outcomes? Are we going to get to a better place pursuing those?

So even thinking through the list of things that you introduced us with, these tactics, I think it could be useful to talk about, you know, in the seat of a content moderator, how are you thinking about the tools that you have in your toolkit? And I think first and foremost, in content moderation, mostly what we want to do is prevent problems to begin with. And that's where you look at tools like the be-respectful nudges, right? When YouTube's like, hey, you know, remember, be respectful when you're talking to people. Twitter has the feature that I run into all the time where you go to retweet something and it's like, do you want to read the article first? Or Nextdoor: if you post anything about COVID, like where can you get a COVID test, you get those little pop-up things, like, remember, there's a lot of misinformation around COVID. Are you sure you want to post this? Be thinking about how you're communicating with your community. And the idea with all of those is to help a user moderate themselves. And that's a choice that obviously takes some burden operationally off of your team, but it allows people to be in charge of their own content. It allows people to make choices about the way they represent themselves online. And that, in a lot of ways, is our ideal in a content moderation world.

You might also be looking at these medium, sort of medium-level decisions for content, where you're trying to get a faster resolution for a user. So often when people think about content moderation, they're like, ah, yes, content moderation: someone reports it, and then someone looks at it, and it's up or it's down, right? And even if that were all the content moderation we were talking about, you're waiting, right? That user is waiting for an answer on their report.
The person who's had their content reported is waiting for a decision on whether their content is going to exist or not. So a lot of those sort of intermediate decisions that we're talking about today are ways to help either the reporting user, or the accused user, or the public on your platform at large feel some sense of more immediate resolution, even if it's temporary. So those are things you might see like, when you report something, it gets hidden, maybe it gets hidden for you, maybe it gets hidden for everybody until someone reviews it, right? Just a set of levers you can pull there as a content moderator. The fact checks, the quarantines, right? Those are often not necessarily doing a lot. They're not removing the content, but they're showing your community, hey, we're on it, right? We've noticed, we've heard what you're telling us, and we are monitoring the situation. Here's what you need to know. And so having visible accountability is a content moderation choice that is not just it's-up-or-it's-down. And again, I think it helps the community often feel like they've got some sense of control, or they're being listened to, or they have the ability to participate in that process.

You also do a lot of things focused on deemphasizing disruptive content. You'd mentioned low-quality content, and I think we could all have a debate about what constitutes low quality, but you see this with spam, right? One of the things that in general has been very uncontroversial in our world is either deleting spam or, often, sort of quarantining or demoting spam. And a lot of people, most people, maybe not spammers, but most people really like that. That's a good choice. Even though that is often a little imprecise, it's often not that transparent to the end user, because we're able to determine, through a series of metrics we may get to discuss a little bit later, that that's low-quality content.

I think increasingly what we're seeing is platforms really thinking through what it means to recommend content. And that's something I hope we can talk a lot about with Sarita and Ryan, and with you, because historically I think the view of recommendations was like, oh, this is content that performs well, and so we're going to be pushing this out because, yeah, you're going to get a lot of engagement or whatever it is. I know certainly when I was working at Pinterest, I was at Pinterest from 2013 to the end of 2020, we thought a lot about recommendations, because we sort of felt like, well, recommending means we think this is great and are recommending it to you. There's a sense of agency. That's really an interesting lever for folks to pull, when it's not just that it's up or it's down. It's, do we think it's good, and should you have more of it? There are a lot of interesting decisions to be made along that spectrum that increasingly content moderation teams are part of, either in labeling that data or in actually making some of those decisions.

And then finally, and then we'll really get into it, I think there is a really important role for these intermediate solutions when you just don't know the answer. And that is extremely common for a lot of things. And you need to make a decision, right? Content moderators don't have the option to just say, well, I don't know, therefore nothing happens, right?
Even doing nothing with a piece of content is making a choice. An example I'd give here is actually the different types of decisions we're seeing on ivermectin. So it's September of 2021, and different platforms are making a lot of different choices. Ivermectin is the substance that a lot of folks are trying to use to treat COVID-19, and that may not be the best medical decision. Some platforms have gone right out and said, nope, no ivermectin anywhere, we ban all of it. But ivermectin is a real medical substance, right? It's a real pharmaceutical. It's used in veterinary contexts, it's used in human contexts. It's one of the only treatments we have for river blindness. So some platforms are saying, well, maybe we don't ban it overall, but, apparently, there's a platform that's asking you to upload a picture of you and your horse. I don't know, right? There are all these decisions that people have to make on the up or down, where there really are these real-world consequences. And so I think, in situations where the information is changing fast, or the information is something we just can't know, that's where this intermediate road becomes really interesting.

Ryan, thank you for being here. So we can hear the way that these techniques might sort of intervene at the right moment; they might involve themselves in a different way. I've heard you write about incentives, thinking about content moderation as a kind of incentive process. How do these tactics, not the removals but these other ones, how do they work as incentives? Do they work as incentives? How do we think about that?

Sure. Well, so I am one of the co-PIs at a center here at the University of Washington called the Center for an Informed Public. And what we do is we study and try to come up with techniques to resist misinformation, and we span multiple different disciplines. So I'm writing this essay with my colleagues Emma Spiro, Kate Starbird, Jevin West, and Chris Coward, in which we're drawing a few distinctions about misinformation that I think will be instructive here. And the first is simply the distinction between misinformation and disinformation. Misinformation refers to just kind of getting something wrong, maybe innocently, maybe not, but the idea is you're just saying something incorrect. I've done that myself. Many people have done that, right? Disinformation, pardon the bit about horses, is a horse of a different color, because it's actually a strategic campaign that has elements of misinformation, elements of lies and falsities and misleading statements, but it also has, as part of a whole campaign, a lot of truth and a lot of opinion.

And why does that matter from an intervention perspective? Well, because if you think about the themes of responsible artificial intelligence, there may be things that machine learning can do about individual pieces of content. So for example, an AI system may be able to figure out the context in which mentioning this drug is okay, and the one over here where it's not and where it's going to be misleading. But AI is not going to be terribly good at figuring out what might be part of an involved disinformation campaign, right? And there are all kinds of examples like that. The more you dig into the motivations of the speaker, the more you will change how you might want to address it. And that's true of misinformation, but it's also been true in the past of things like hate speech, right?
I mean, there are a lot of people out there who are hateful people, who are dedicated to it, they're white supremacists; nothing you're going to do, short of banning them, is going to work, right? There are other people who are teens who are trying to transgress and get a thrill, and if you just remind them, hey, we're paying a little bit of attention, they cease entirely, right? And so this is not a one-size-fits-all environment, and what works is often a product of the motivations and the identity of the speaker.

The last thing I'll say is that at that same center, we've worked on some computational modeling techniques. It's Joe Bak-Coleman and colleagues modeling what the interventions do on Twitter. And one of the takeaways from that paper, which is an excellent paper, I would look it up, is that it often takes multiple different interventions in order to damp down the proliferation of misinformation in an ecosystem. So I also think we need to be talking a little bit about when it's not just one thing. Not only does one size not fit all, but often you're talking about two or three different interventions on top of one another. And I'll stop there.

Thank you. Yeah. Sarita, we wanted to bring you into this conversation because of the way your work has been thinking about how users are treated by these interventions, and how they could be. We've heard why these tactics might make sense for the platforms in a tough situation, trying to intervene against moving targets, and why, from a kind of here's-a-harm, we-want-to-invest-in-reducing-it perspective, this is not a simple thing. If we approach this and think about users, and think about the ethics of care, how do these tactics look from your perspective?

Yeah. Ryan's comment around digging into the motivations of the speaker is a really nice segue, so I'll kind of expand on that. A couple days ago, I went to this panel called Then and Now: 50 Years After Attica. It was a panel of people who had expertise in incarceration and prison systems, and they were reflecting on, you know, the past 50 years of prison systems. And I feel like we can learn a lot from judicial and kind of carceral systems in the US, but they're not really happy lessons. You look at those systems, and we've spent decades or centuries trying to create safe and whole communities, but we struggle to do that. And I think it's pretty similar online. So you have these online governance systems, but it's not clear they're in users' and communities' best interests a lot of the time. And so, you know, I would say that they're overly punitive, which is this ban-content, ban-users approach. And I think punishment is okay, so I'm not opposed to that; sometimes that's what you need. But the problem is that those responses don't allow opportunities for accountability or correction or repair. So, you know, Ryan gave the example of, say, the teenager who maybe posts something that they shouldn't have on Instagram, or you might have a mom of a baby who posts a naked picture of the baby on Facebook and it gets removed. And then you also have, you know, Donald Trump or whoever inciting violent behaviors on Twitter. And right now you kind of have the same set of responses and trajectories for all those cases, even though they're dramatically different and they should be met with different responses. So, you know, the one-size-fits-all argument I'd really agree with.
And I think a focus on user behavior instead of, or at least on top of, a focus on just moderating content would help a lot. So you kind of move away from this whack-a-mole governance towards ideas about accountability and repairing harms. And, you know, not everyone will do that, of course. Sometimes, like I said, just ban them, punish them, it's okay. But people make mistakes. I think we could do more to encourage better behavior, whether it's saying, hey, here's why you made the mistake, which seems very obvious. Even our judicial systems offline say, here's what you did, and here's your sentencing. And you don't really get that; most people don't know why their content was removed, or why someone else's was or wasn't. So you might educate them, provide more transparency. And then maybe even sometimes ideas like apologies or communication or mediation, in some contexts where there are maybe some community norms and some commitment to those communities, could help provide other pathways beyond just going straight to removing the content or banning the users.

Should we ask a couple of questions? Yeah, thank you. Yeah, please. Have you got one to start? I do. So I think one of the things that I'd like to know from you all is whether you think we have enough information on these different tactics that aren't just going straight to the punitive measures that Sarita mentioned. So we are, as an industry, kind of experimenting, but do we know whether these things actually work? We've all seen, you know, Instagram tried hiding likes as an experiment; they were fairly open that it wasn't very clear whether that was having an effect or not. Could we still experiment as an industry with things like that even if we're not sure that they're effective?

I can go first on that. I think the answer is yes, but to Sarita's point earlier, I think we need to understand what we mean by success, right? One of the challenges in a lot of these decisions is folks are often a little unclear on, okay, well, what's the end goal here, right? Is the end goal a reduction of offline harm? That's a little difficult to measure. Is it a reduction in the number of user reports? Well, that's an easier one to measure, but there are all kinds of other ways you can influence that, right? So I think even starting out with that, you have to decide, based on your values as a company and your values within the product, what is it you're actually trying to achieve? But then yes, I think there has to be room for experimentation, because we're not going to know. I think the important thing is that we are transparent about that experimentation.

Ryan, can you point to some, in your studies or the research that you've seen, some of these techniques that you think have demonstrated the kind of effectiveness Zoe was just asking about? Or is it still kind of an ongoing object of study?

Oh, sure. I mean, you know, there have been, for example in the European hate speech context, I wouldn't necessarily say experiments per se, but what I would say is that there are organizations out there that are trying to reduce the amount of hate speech in Europe, right? And so to assess their efficacy, to show that they're making a change, they will keep some pretty good records of what happens.
And there was one experiment that involved, this was in a much less algorithmically driven ecosystem, one in which there were just these chat rooms and people would make comments on a particular platform. And they made an arrangement wherein they would get the ISP, or get the host, to send a note. And the content of the note would just say, look, here's what you said, and when you said it, and that's in violation not only of the terms of service of this community but also, you know, of the law, because in some cases in Europe these things are illegal. We're not going to have that lever here in terms of being able to police content; largely, you have to be doing something rather egregious and transgressive for it not to be protected by the First Amendment. And they found enormous rates of success, where there were not repeat offenses, right?

And so I think you have the possibility that companies will do their own testing, A/B testing and other testing. I agree with Charlotte that it needs to be deeply transparent. I'm especially worried about something Tarleton alluded to earlier, which is the idea that you're subtly modulating people's reach. And I think there's already a sense among a lot of users that it's almost like having a good or a bad hair day. I don't have either of those anymore, but I mean, it's like having a good or a bad hair day. One day, all of a sudden, something you say on Twitter that you think is going to reach everybody goes nowhere. Another day, some random thing you say takes off. And you start to wonder, is this about the content, or is there something going on in the background? And I think that leads to a sense of distrust and unease that is not healthy. So I think that transparency is critically important. We also, of course, saw Facebook get sanctioned for some of its work on emotional responses to different kinds of content, because it wasn't above board. It just felt odd to people.

And the other thing I think is critically, critically important here, and this is a legal point, is that we need to protect the people that are blowing whistles within the company, or the people that are doing external accountability research to determine whether or not a stated intervention is actually effective. And one of the things that really concerns me and others in the community is the idea that a company would, for example, weaponize their terms of service through something like the Computer Fraud and Abuse Act and go around suing people who are trying to determine whether or not there's bad content on the site. And I think that the law should be crystal clear that if you're doing accountability research to look for bias, look for toxic content, look for misinformation, look for validation that a company is doing what it says it's doing, that really should be off limits for civil sanction. And so, yes, I do think experimentation is critical, but it should be transparent, and it shouldn't be reined in or policed by the companies themselves.

Let me pick up on what Ryan was just saying, because when we started out, there seemed to be a real appeal to some of these techniques, right? There's a kind of flexibility in them that's responsive to the kind of problems that we know are always moving. There's a gentleness to them that avoids ending the speech altogether the way removal does.
But Ryan raised one real concern here, which is that, especially because we're trying out these techniques, they can be not so public, and that leaves users in the place Ryan was describing, where they may not know why things are happening, and that breeds a lot of suspicion. So let me push that further and say: not only can you imagine platforms not being public about some of these techniques, but they're awfully hard to be transparent about, right? Some of them are quite visible: you put a fact check on a tweet or you put an age barrier on content, and you can at least see it's happened. The fingerprints of the intervention are there for the user who's involved, for a researcher. But let's take two other examples. One is reducing something through recommendations, an example that both Charlotte and I mentioned: taking some borderline content and just saying, this is not going to travel as far. You can still get to it, but maybe it's the bad-tweet day that Ryan was talking about; it just doesn't go anywhere, right? It's hard to know how far it should have gone. So how do you understand that it's happened, and how do you be transparent about what was done? Because that video or that post didn't go very far, but compared to what?

And then let's ask a second one, which is a different kind of question. Sarita is talking about maybe stepping towards much more involved interventions, taking situations where there could be some restorative justice, where there could be steps taken. Those are time-intensive, they happen very specifically, and they're private too, right? If some platform is trying to help you and I resolve a tension, that isn't something that they can exactly list as something they're doing. So how do we deal with these kinds of deeper problems of transparency, where it's not clear you can be transparent, or we at least haven't invented a way to do it? How do we deal with that kind of problem?

Yeah, it would be nice to have a set of criteria that one is evaluating on. And so you imagine, what are the outcomes, or the dependent variables, call it what you want. So Ryan mentioned, do they do it again? Like a classic recidivism measure. Or it could be deterrence of other people: do other people see some sort of things happen and decide to behave differently? There have been studies of that on Reddit. But you could also imagine other kinds of community values, like, does someone who experiences harm, harassment, whatever, do they come back or do they leave the community? That's a pretty good measure of whether whatever you did made them feel safe or comfortable coming back, and that's measurable, much more measurable online than offline. And I think the second thing would be centering human rights or civil rights in any of these kinds of experiments or techniques. I'm thinking of a nice paper by Brandeis Marshall on algorithmic misogynoir, which is basically algorithmic anti-Black misogyny, Moya Bailey's concept. And she is saying, you know, there's shadow banning and these things, and a number of people pointing out the experience of being shadow banned, or certainly the perception, but it's really hard to know, on TikTok and things like that. And so I think centering the values we care about, around bias, around race and gender, in evaluating any of these. So it's not just do people recidivate or do they come back, but who's doing that.
And from a civil rights perspective, you might focus especially on, you know, are groups that are maybe already harmed in whatever context being further harmed by this? Even if overall it looks good, if those groups are being further harmed, then this is probably not the right step.

I think that's exactly right. And I think, you know, the point especially about focusing not just on offender behavior, but on either the follow-up behavior of the people who felt wronged or were wronged in that scenario, or in general, you know, people who maybe have never filed a report or made a complaint, but you're able to understand that they might have had a bad experience, and understand, going forward, what that looks like for their experience with your product. I mean, the point I was going to come out with originally was just an observation that, Tarleton, to your point earlier, it can be very difficult to know exactly what's happening. And it's an easy thing to say, well, yeah, obviously these systems are complex. But certainly something I have observed universally in my time in tech, with any size of company, but especially when you are small to medium and you are moving fast, is that all these things are happening: employees are going in and out, engineers are going in and out, all different experiments are running, you're running 100 experiments at the same time. And that's not to say that's the way it should always be, or that it's okay if we never know what's happening, but it really is incredibly difficult to know exactly what part of the system is affecting which outcome. And even going into, like, well, does someone ever come back, or who churns out of the system, right? Okay, they might churn out of the system because they had a bad experience. And certainly I've had teams where that's exactly what we've looked at. We've said, oh, this person reported a piece of content, or they had to block someone; did they churn out? And the data, no matter how we positioned it, over years, was always super unclear, because people leave services for all kinds of reasons. And so I think all of those things are really important angles to be analyzing. And we also are going to have to be okay with, maybe we're not going to know sometimes, maybe we're not going to know a lot of the time. And that's where it comes back to, I think, a lot of the companies needing to examine, what are our values here and what are the effects we are trying to have? And that's where I think society should be really brought in and be partnering as well, right? It's not just about what platforms want. It's about, you know, what is good for us as a human society? And I think that's where those perspectives have to come in. It's just difficult, because this is a world where we really try as much as possible to operate on data, and ironically, there's not a lot of data out there, or it's maybe just not readable. And that's, I think, certainly for me, one of the hard truths in the field.

I just want to add a quick point if I may, Zoe and Tarleton. You know, I think sometimes this sort of thing boils down, really just picking up on something Charlotte said, to political will, right?
I mean, so for example, if the way to address disinformation campaigns from abroad is less about, you know, this or that particular warning or throttling or shadow banning, whatever it happens to be, and more about, you know, statecraft and economic sanctions and so on, then it really takes the government to intervene in that instance, right? And if we really expect a lot of companies, and we want them to operate in an environment where they can't do the thing that they're doing unless they're able to get a handle on the toxicity in the environment, again, these are somewhat matters of political will. So I think we need to focus not just on the fact that there's a bunch of tools and we should use them all, but think about who they're appropriate for. And once we figure that toolkit out, we need the political will to actually force people to put it into practice.

Well, Ryan, I have a follow-up question for you. I guess, what do you think is the role of government here in terms of the tactics that we're talking about? Because already they're looking to legislate industry in terms of what we do, what our practices are, what our processes are, how transparent we are about setting policies and enforcing them. But they haven't really gotten to this second order of action, which is demotion, which is interstitials, which is warning screens and all these types of things. Do you think there is a role for government to have oversight there, or is that perhaps a step too far?

I want to be clear that I'm just speaking about my own views here. But what I will do is invoke a second distinction, one we think about a lot at the center, which is the distinction between speech and action or inaction. The law should not pick winners and losers in terms of speech, and the law should not treat platforms as though they were the speakers. And I don't think we should do away with the law that says that platforms shouldn't be viewed as the speakers. But that doesn't mean that we can't ensure accountability in other ways. So for example, in the context of cybersecurity, we expect adequate security. And although there do exist standards for security and best practices, it's not like the government sits down and says you have to do X, Y, and Z, right? Rather, you're going to be held accountable if, given your scale, you don't have adequate security. And there too, the perpetrators, the problem makers, are not the companies per se; it's somebody else who's coming into the ecosystem and causing harm. And yet we expect our tech platforms to be resilient against that, right? So I think we need to move to a place where there's accountability without picking winners and losers in speech and without treating platforms as though they are the speaker.

Yeah, there seems to be a real tension between the need for the platforms that actually have the levers to have some flexibility in responding to things that change, like security; the desire for some kind of consistency and accountability; and then a sort of regulatory framework that can both impose those expectations and leave some room for that kind of experimentation, so those techniques can grow as well. That's a tricky balance. I'm noticing that we have about two minutes left, and so what I was hoping to do was finish with a quick-fire round. This is a Research Summit, and one of the things I want to highlight is that there's a lot we don't know here, right?
This is platforms reaching new territories, the law reaching new territories, communities reaching new territories. And we can tell that this is a widely used set of techniques if we draw a big circle around them. So I wonder if each of you would take a max of 30 seconds and say, what do you think is the next thing that you wish we knew from a research perspective that we don't know yet? If we're going to include this as part of the big picture of the content moderation ecosystem, and you could send off a researcher to answer a question, what do you think is the next thing that we need to think about, or that we'd love to answer, that you don't think we have an answer to? If I call on someone first, you're the first person who has to jump in. And if you can make a horse reference, that'd be great, because we've seen that horse theme.

I was going to jump in until you asked for horses, but yeah, outside of the horses. So one thing, from the perspective of kind of justice and accountability frameworks: I would love to have not just one but a team, a cabal, of fantastic students from various places, countries, and languages around the world, so that the harms, the shame, the justice theories, what their online experiences look like, and what would be restorative for them could be better understood and centered, beyond kind of the Silicon Valley ideas of free speech and what we center in the US.

Awesome. Charlotte, the next quick answer?

I'd say, yeah, for me, it would be understanding what interventions work best for the user and also work best for the moderator. Moderators are in general the least well paid and have the least power in the overall ecosystem when it comes to who is employed by all of these companies, and they do some of the hardest work. And so having a better scientific understanding of the impact that this work has on them and their lives would be tops for me.

Yeah, thank you for that. Ryan, last thought?

I'll just quickly say, I'd really like to understand what reasonableness looks like in this context. As a torts professor, I spend two or three days with my students on what's reasonable behavior, what we should really expect in these different contexts. And I think we're very far from knowing what constitutes not just best practice, but what is a reasonable way to address this. And I think once we get that, we'll have a better sense of what standards should be applied.

That's great. Thank you. Those are excellent questions. I'm really glad I got to hear from all of you. We have to close. Thank you so much, Sarita, Charlotte, Ryan, Zoe, of course, and to Microsoft Research for hosting. Thanks. I look forward to talking to you more about this in the near future. Thank you. Thank you. Thank you.
