
Session Date/Time: 16 Mar 2026 08:30

This is a transcript of the Human Rights Protocol Considerations (HRPC) Research Group meeting.


Mallory Knodel: All right. We're at time, so welcome and... I know we have some echo issues, so I'll try to speak a little less loud, but let me know if you can't hear me. Right. Welcome to the Human Rights Protocol Considerations Research Group. Make sure that you scan the QR code if you've come in the room. It's important to sign in. But yeah, let me get going. I'm going to share my welcome slides.

(Chair's Welcome Slides)

Mallory Knodel: Would somebody be able to close that door? Yeah. Yeah. I think maybe the person leaving can do it. No, never mind. All right. Welcome everybody. Thank you so much for coming. We have an exciting agenda for you today. We are going to make one agenda change, though, before we get started. We only have three talks, by Dodji, Kate, and Shaye, happening one after the other. They can go beyond 20 minutes because we don't have Win-Hou here today, and I don't believe I see them online. I could be wrong about that. Anyway, so that's the agenda planned. If you have any other business, we'll probably have time for that. So there are drafts we can talk about, there's other work, and then hopefully we'll have robust Q&A after each of the speakers.

I'm still hearing an echo. Do you all hear an echo out there? What about folks online? How's the echo? All right. I'm going to move on, and I don't see anybody shouting. Okay, sounds good. Thank you, Andrew. Great. Okay.

Wonderful. The reminder, I already reminded everybody to please sign in. You don't have to actually sign the blue sheets anymore; you just need to scan the QR code if you're here in the room, please. A reminder that this session is going to be recorded, like all IETF sessions. It's going to be posted to YouTube. So keep that in mind when you are speaking on camera, on mic. And also the chat is somewhat recorded as well, in the sense that folks are capturing it. What you write in the chat can be relevant to notes and so on.

And speaking of notes, is there someone who could volunteer to take notes? Loura, thank you so much. You've been taking notes a lot today. I just want to appreciate the work that you're putting in. So thank you for taking notes for us. We're trying to just capture the Q&A. You don't have to capture notes from the talks themselves because those will be recorded and the slides are already posted. So it's just the conversation.

And also for folks who need a reminder, we'll use the Meetecho queue. So if you're in person and you want to ask any of the speakers a question, you need to be in the Meetecho app, either in your browser or on your phone, and get in the queue for that. We also have a lot of folks who have never been to the IETF before and haven't used Meetecho at all. So know that you can direct message me; you can also direct message Meetecho if you have tech issues or questions about the tool.

Then, I guess, yeah, this is really crucial as well. Towards the end of the week, you'll see a lot of chairs skipping over this, but because it's Monday, it may be the first time you're seeing it. I want to take a little bit of time to talk about the Note Well. These are the sort of terms and conditions under which we all participate in the IETF and the IRTF. We follow intellectual property rights disclosure rules, which essentially mean that if you know of any IPR relevant to what's being discussed, even if it's not yours, it's important to disclose that in the meeting itself. And there are lots of RFCs to support that if you need further information.

I already noted that we're doing audio and video recordings. The session is also streamed as audio for people who don't have to authenticate, so that's happening as well. And I know there's more here. We have important measures in place to protect privacy, and we also have a code of conduct that covers this meeting. And anyone who's in this room or online needs to follow local laws, as per the Note Well and the code of conduct. We take those seriously. So please, again, if you have questions about that, there are some links that can give you more information.

Right. Um, so this is an IRTF research group. The Internet Research Task Force is looking at long-term research issues. We're not writing specifications or standards-track documents; we're writing informational documents, things like that. Human rights is certainly a long-term issue, and it's great that we've been going for so long. And it's great to have so many different speakers just today, but over the course of our time, we've covered a lot of different topics.

And we dovetail very nicely with a lot of other research groups—the Privacy Research Group, for example, and GAIA, the Global Access to the Internet for All research group. So happy to be under the IRTF banner here. We were chartered to research how protocols either strengthen or threaten human rights. Generally, we just refer to it as impact, right?

You can't actually go to this URL anymore, but you could look it up on the Wayback Machine if you wanted to check it out. hrpc.io gives you a pretty rich history of how we started and some of the outputs we've had that go beyond IETF documents. There's a documentary that was made, other things.

Our objectives are to draw these links between protocols and human rights. We had a talk—it was during the pandemic, but I couldn't tell you exactly what year it was—by a couple who had written a book on the history of standardization, and they had this pretty amazing section in one of their later chapters about how they saw the Universal Declaration of Human Rights as a form of standardization. And they made this really amazing historical analysis between like Kofi Annan's work to get the UDHR through the UN and how he was really inspired by standardization. So that's just to say that this connection is not new and it's not unique to this group. It's a long-standing and fundamental connection for both standards and human rights.

We have developed guidance on how to protect the internet and how to protect human rights on the internet, making sure that the internet remains a rights-enabling environment. So you can check out some of the documents we have on our data tracker page.

And we also, just to speak to the last bullet point, we obviously bring a lot of human rights defenders and experts to HRPC every single time we meet. But a lot of us that have either given presentations or chaired or been involved in HRPC in some way have also gone out into the world, into internet governance spaces, into human rights contexts, and talked about this work here. So it's a sort of two-way street in terms of making those connections.

I'm not going to outline everything here because I need to get to the talks, but this just gives you, if you're new to this group, some orientation to what we've done. So our two main RFCs, 8280 and 9620, are all about the Universal Declaration of Human Rights, the ICCPR, looking at those documents and sort of elaborating what that looks like for protocol considerations. 9620 is a shorter guidelines-based version of 8280. But we've been around since 2014. I think my first... yeah, that was the year of my first meeting.

And so in this time, we've had something on the order of 33 meetings, and within every meeting, we have something like three or four speakers. So over time, we have had a lot of really awesome talks in the history of HRPC. And this is just a screen cap of the long playlist that you can find on YouTube. And somebody who's in this photo is in the audience back there. So it's an accurate depiction.

So some of the folks involved: Sofia Celi and I are chairs. Sofia couldn't make it today. We also want to acknowledge folks that have helped us in the past. So DKG and Melinda Shore have both been technical advisors in the past, and Nick Doty is a doc shepherd on one of our drafts—it's not an active draft, but it's one we could make active again, the association draft.

We could talk about some of the documents that are active in our RG. I don't have it on the agenda, so we probably will not do it this time around. But potentially we can get these things lined up for Vienna. I realized I could take the guidelines document off that list because we already RFCed it. Anyway, those are my welcome slides. I'm going to now ask Dodji to come up for the first presentation.

Dodji: Okay, thanks. Okay. So this is an interesting fit for this audience. I want to acknowledge that up front. I mean, so I'm going to be talking about antitrust law, which is not usually thought of in the same context as human rights. But it touches a lot of things, including human rights, and I think that I will try to make that tangent a little bit clearer as we go forward. But I'm just glad to have an audience for something where it's policy touching a lot of the issues that we see in technology and things like that. So that's why I'm here.

And I do think this has a lot of relevance to a lot of things because antitrust is really touching so many different aspects of the digital world right now. So this is joint work with my advisor, Sunoo Park at NYU, where I'm doing a PhD, and Elettra Bietti at Northeastern.

(Security vs interoperability v2)

So quickly, what is antitrust? It's basically the government trying to increase and promote competition where they see fit. And by the government, this is pretty much true of every single government on the planet. I was looking at what's happening in China, what's happening in the EU and the US, which will be our focus, as well as EU member states. There are also interesting cases in South Korea, India, Brazil, all over. So even though it seems like a narrow branch of policy with tech relevance, it's happening in many, many different contexts. And each of these cases is a little different; the way that, say, the US and the EU are addressing similar issues differs. So there's a lot going on here.

And so this regulation is often regulating mergers, which is not what we're interested in; reducing the power of monopoly firms, just like, "You're too big; we're going to break you up"; and then also abuse of market dominance. That can look like a refusal to deal with competitors, things like this. This is just a broad overview to get our bearings.

And so here we care about people. I thought the human rights thing was caring about people. So why is antitrust important for people? Well, prices are really important. Generally, a lack of competition correlates with a potential to increase prices unfairly. That's not a very nuanced point—"Oh, prices are important"—but it's a really big deal to a lot of people.

Competition and innovation: Some people take issue with seeing antitrust as having this goal. I'm not here to make that argument one way or the other, but I do believe personally that this is a really important part of antitrust.

Options for consumers: Especially in the digital space, we see there's this massive concentration. A lot of companies you just cannot avoid in modern digital spaces. And this has a lot of knock-on effects. These companies just have a lot of power and a lot of control over markets that are, again, unavoidable. And so this has knock-on effects for all sorts of things that are relevant here, including the intimate partner violence stuff, also a lot of privacy issues. There's a lot that's intertwined with the fact that we are dealing with a few humongous companies, and that is a feature of these markets.

So hopefully you're convinced that there's a connection here. This policy is very important, and important to more than just human rights, but I also think it has knock-on effects there.

So what we're looking at today is instances where regulators are mandating interoperation. And so this is, "You're in a monopoly position; you need to interoperate your product with other products to dull this market power that exists." The companies are pushing back, and one of the ways that companies push back is saying that these interoperation mandates are going to undermine their existing security guarantees.

And so this is what these arguments look like. This is an example from the DMA, which is the Digital Markets Act in the EU. Apple here, and we've got one from Google in the US context. This is what it looks like. They're often in the context of public-facing documents, although they can also be in legal proceedings themselves. And then, you know, that requires even more scrutiny because they're legal arguments. But also a lot of these are public-facing white papers, developer documentation, stuff like that.

So one important thing that I want to reiterate about why this is important is that we don't want security to be used as an excuse, or just an excuse, to dodge regulation. Obviously, sometimes security issues are very important, even when they're going against regulation. But we don't want this to be used unfairly. At the same time, we don't want regulators to ignore security issues, right? And there's sort of this trade-off: if you cry wolf every time, well, then when it's really important, maybe it's not going to get the serious treatment that it probably should.

So this is an issue, I think, with a lot of different angles to it. We contributed some academic stuff to this. And just to get a bearing on what I'm talking about here, we're going to do some case studies. I think the WhatsApp case people are very familiar with in this context. The EU is saying you need to interoperate WhatsApp. That means that users of a third-party messaging app that chooses to interoperate and abides by Meta's rules and all this—users of that app should be able to send and receive messages from users of WhatsApp. That's what interoperation means here.

This is really hard. I am a cryptographer. This is hard. It's a really difficult thing to do. There are a lot of different issues here. I don't think I need to elaborate too much here, but there are good papers on this, and a lot of people working on this problem. This is not an easy thing to do. And indeed there are security arguments about it. Here's WhatsApp making a security argument. And then lo and behold, here's Signal coming back and agreeing that there's a security issue here. They don't necessarily agree on the details, but there is a security issue here. Both the incumbent company WhatsApp (Meta) and Signal are on the same page about that at least.

And just to make a note, this is not asking for WhatsApp to have a fully federated model here, far from it. This is just saying you need to allow interoperation; it's not saying become federated. And this is, I think, something that doesn't have a ton of precedent: a sort of dictator model, where it's not federated, but you do have to allow interoperation, and you get to decide how that's done. I mean, you get to be your own standards body in a certain sense. You get to update your rules, you get to have a fair amount of power, and maybe then not have to worry about some of the frictions of federation. But obviously there's going to be more friction than just the centralized model. This is kind of interesting. I don't think there's been any academic work on it. And as an academic, you know, that's what I'm interested in.

So just an aside there. The next case study is super different. We're going to look at in-app payments, which, you know, is not a particularly technical issue. So before regulation, before all of this was happening, if I were buying a digital good on, say, an iPhone—a lot of this applies to Google, but I'm just going to focus on Apple for simplicity. If I were buying, say, a skin for my character in Fortnite, which is a video game, this would have to go through in-app purchases. It would incur a 30% fee. This is not true for any physical good. So if you're buying a physical good on Amazon on your phone, this does not get processed by Apple, right? It wouldn't go through in-app purchase.

And so in particular, companies could not put link-outs. That's a link to a website that would process a payment. You were not allowed to do that. You're not allowed to say, "Oh, go to our website and buy the skin there." It was not allowed by the App Store rules. Thanks to regulation in the EU—and it's starting elsewhere; a lot of these cases are at different stages right now—this is becoming allowed in certain places. These are link-outs. This is an example of what that would look like. And then you don't pay this 30% fee, so the prices are going to be lower. This is about fees. It's not a very technical issue.

But here we have nonetheless a security argument from Apple. This isn't just about processing payments; it's also about other relevant app rules. But again, the claim is that this is going to be a security issue. And in contrast to the Signal case, here we have Epic, the counterpart, not only not agreeing that this is a security issue, but sort of suggesting perhaps that security is being used as an excuse. Obviously, it is aligned with their economic incentives to say this, but it's an interesting thing to note.

The third case study is about tap-and-go payments. Basically, the issue here is that any third party on iOS would not have access to the necessary functionality to make something like Apple Wallet and Apple Pay. It simply wasn't possible. And this is changing due to regulation. Here again is a relevant security argument.

Okay, so these case studies—they're not particularly similar. Again, we see that there are security arguments being made, but other than that, these aren't very similar. The economic incentives are different, the technology involved is different, the engineering difficulty of actually executing any of these things—very different. Someone should make a framework.

So we sorted these into three different buckets. Basically, those that are mostly about engineering, those that are mostly about policy and vetting, and those where both play a significant role. And it's a little more nuanced than sort of pure buckets. See the paper for the nuance. But it's a pretty clean sorting.

Go, yes. Go. Thank you. Yes. So the engineering concerns—and again, we're sorting the security concerns themselves, but of course, what needs to happen with the actual technology follows the types of concerns that are involved. So the engineering concern again is, "We have to build this to be interoperable. This is going to be difficult. This is going to be impossible. We can't preserve existing security," etc.

The vetting concern: Same thing, but it's about policies. "If we change our, say, App Store policy, this will not preserve our existing security." And then again, the hybrid involves both. But I want to say here that the hybrid concerns are not some simple interpolation of the other two. There are a lot of interesting and specific things going on here in particular. I think this is the most interesting case.

And so here are some examples—the ones we already discussed. I think the most interesting thing in the engineering case is that, yes, WhatsApp is a closed thing. Opening it up is difficult. I don't think that's a controversial thing to say. iMessage is in great contrast to this, because iMessage has always been able to send SMS to Android phones. SMS kind of sucks. It's not secure. And this has been changing, due to regulation and other things, to RCS, and it's a lot better. And so here we see that the interoperation is actually strictly increasing the security and not opening up problems, actually just making things better.

The vetting cases: Pretty much stuff to do with App Stores, although this is not necessarily always the case, and I think some of the new cases that are starting to percolate up are going to be vetting cases that are not about apps, but we'll see. And these are a lot of rules about App Stores: where you can download apps from, what browser engine you use, whether you can have alternative App Stores, things like this.

The hybrid concerns tend to have to do with physical connected devices or connections between two devices, like with AirDrop. And again, I think some of the most interesting puzzles lie in this category.

Yes. Okay, nice framework. Now what? I know, academics just like sorting things. I promise there's more to it than that. The key thing here, and with a lot of these things, is you need to care about the economics and the incentives.

So going through that. Oh, also security economics exist. I didn't just invent it. There's more I could say on that, but for this audience, let's look at the incentives themselves. Okay, so with engineering, and again, this is messaging, think of WhatsApp as the most simple case. Pretty much the advantage is network effects. Everyone in Europe is on WhatsApp because everyone in Europe is on WhatsApp. There's not a lot more to it. It's a very strong monopoly. It's a very strong market position. But WhatsApp is not able to, say, block Signal entirely from the relevant market, things like this.

And this is in contrast to the vetting case. So again, App Store type things, where Apple and Google are in a position to fully block, say, Fortnite from accessing users on mobile. They can just block them completely if they don't follow certain rules. And this is a lot more power than simply being large.

And this walled garden that is very effective at preserving the security of especially iOS is important, right? Having a security walled garden is important, but at the same time, within the walls, you get to extract fees pretty much with impunity. I mean, 30% is a lot. And for context, most payment providers charge 3%. So the fact that the walled garden for security and the walled garden for fees are seen as kind of being one and the same is, I think, the interesting thing here. These could be decoupled.
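To make the scale of that gap concrete, here is a back-of-the-envelope comparison using only the two rates just mentioned (the 30% in-app purchase commission versus a roughly 3% payment-processor fee); the $9.99 item price is an invented example, not a figure from the talk.

```python
# Illustrative arithmetic only: compares the 30% in-app purchase commission
# cited above with a typical ~3% payment-processor fee.
# The item price below is a made-up example.
price = 9.99

iap_fee = price * 0.30        # fee under the 30% in-app purchase commission
processor_fee = price * 0.03  # fee at a ~3% payment-processor rate

print(f"In-app purchase fee:   ${iap_fee:.2f}")        # ~$3.00
print(f"Payment processor fee: ${processor_fee:.2f}")  # ~$0.30
print(f"Ratio: {iap_fee / processor_fee:.0f}x")        # 10x
```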

And in fact, Apple has proposed a solution, notarization, where they would cryptographically sign apps, and only apps with such a signature could be downloaded onto an iPhone. Sounds great. This is sort of decoupling the two walled gardens—the security walled garden and the fee walled garden. But the fee structure for this notarization does not seem to give much relief and might actually be worse for some, especially very large, app developers. Again, here we're thinking of Fortnite and Epic Games. So this is an interesting case, but Apple and Google still benefit immensely from having robust app ecosystems, right? They are incentivized to charge fees, but at the same time, they are incentivized to allow developers to have access.
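To give a rough sense of the mechanism being described (this is a minimal, hypothetical sketch of generic code signing, not Apple's actual notarization pipeline, format, or fee structure), a notary signs a digest of an app bundle, and the device only installs bundles whose signature verifies against the notary's public key:

```python
# Hypothetical sketch of signature-gated installation; assumes the Python
# `cryptography` library. Not Apple's actual notarization scheme.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Notary side: after reviewing an app bundle, sign a hash of its contents.
notary_key = Ed25519PrivateKey.generate()
app_bundle = b"...app binary contents..."
signature = notary_key.sign(hashlib.sha256(app_bundle).digest())

# Device side: only allow installation if the signature verifies.
notary_public_key = notary_key.public_key()

def can_install(bundle: bytes, sig: bytes) -> bool:
    try:
        notary_public_key.verify(sig, hashlib.sha256(bundle).digest())
        return True
    except InvalidSignature:
        return False

print(can_install(app_bundle, signature))          # True: notarized bundle
print(can_install(b"tampered bundle", signature))  # False: rejected
```

The point of the sketch is only that the gatekeeping (who gets a signature) is separable from any fee structure attached to it, which is the decoupling the talk is pointing at.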

And this is in contrast to the hybrid case. This is a little bit more complicated, but especially with the tap-and-go payments, Apple used to have complete control over that market. No one could build a competitive app that did tap-and-go payments. And of course, the incentive is to keep it that way. And keeping competitors out completely is an incredibly strong incentive and an incredibly strong market position, because they can do it—or at least they could until the regulation. And so this is just saying that we're seeing increasing levels of market power across these three categories.

Okay, great. So the economic incentives are important. You need to keep them in mind when you're evaluating security concerns. But what do we care about here? Well, a lot of especially the hybrid cases involve protocols that are proprietary, and, you know, maybe they shouldn't be proprietary for all sorts of reasons, not just antitrust goals. And so this is a list of things, pulled straight from one of the EU cases, that they want to have interoperable.

And a lot of this stuff could use standards. Apple in a lot of cases is using their own proprietary version of something where there is a standard or there might be a good candidate for a standard. So I think there is perhaps a role for the IETF here: not only to help with the implementation of this regulation by, you know, having good standards, but it could also be an opportunity. Regulators are on the side of wanting interoperation, which is, I guess, what we want. And so taking the opportunity of this regulation being such a big deal could be a good chance to have some really good, well-baked-into-the-system standards.

And it's not just technical standards. There are a lot of policy choices that I think are very important in this space. I'm thinking a lot of the stalking issues here with AirTag and Tile and things like that. These are very serious issues, and it is very possible that opening up some of these functionalities, like making some of the connected devices have more functionality, could cause issues along these lines. But then, well, we can just have policies that apply to more than just Apple's products.

This is just an example, but I think there's probably a lot to do on this more policy-oriented side, where having good ideas of norms—of what we expect from companies as more of this functionality is opened up—could be a really good way to get in on this regulatory effort to open things up more and make sure that the norms that we have developed, maybe with just Apple because I know that Apple has done a lot on this, are spread to the broader community.

Because the mandated interoperation is here. It's coming. We're not going to get away from it. So making sure that it's done as best as possible, in a way that respects existing security issues, that doesn't make light of security issues, and also perhaps bakes in some of our expectations about good standards, you know, if that's possible. Because this stuff isn't going anywhere.

And also it shouldn't all be treated as the same. I mean, not to say that this is precisely what's happening, but a lot of these security issues sort of look the same and smell the same on an initial viewing. And first of all, we need to keep in mind that they are never free from the economic context. You can't separate the two. And then also, sometimes they are real, difficult security issues and sometimes maybe not so much. And then yes, again, I don't really know, but it could be a good opportunity to get into the mix and make sure this stuff is done as best as possible, since the regulators are kind of on the side of interoperation.

Mallory Knodel: Awesome. Thank you so much. And we have a queue. So make sure you get in it in the next little while because I'm going to close the queue so we can make sure to stay on time. And I said we'd have robust Q&A—here it is. Um, I don't know how to turn off the timer. It's just going to keep blinking like that until I figure out the button to push. But in the meantime, Rodney, go ahead.

Rodney Van Meter: Yeah, Rod Van Meter. Um, good talk. Thank you. So you said the words Google and Apple a lot, and a couple of other things. You didn't say the words United States, Canada, China, UN. In particular, how does this balance play out for smaller countries like Laos or Bhutan or Niger or some place like that, relative to the power level that they actually have relative to those biggest companies?

Dodji: So I don't know about the Bhutan level. I do know that South Korea has their own sort of incumbent large companies, which maybe is exactly what you're not talking about. But they have this interesting problem where their antitrust efforts would naturally go after their domestic behemoths, and going after domestic behemoths is in favor of large American companies. So that's an interesting puzzle. I don't have a good answer for that.

I think that for a lot of countries—India's doing some very interesting things, but for a lot of countries with even smaller, um, you know, staff at whatever their version of the FTC is—I don't know. I don't know if they would be able to, but they would also presumably benefit, you know, from the Brussels effect or from some of this regulation elsewhere. We also see a lot of cases where the regulation that's being done, especially in the EU, is being learned from. Again, more at the Japan, India, Brazil level and not smaller countries. I don't know what to say for smaller countries. I think that the best hope is that a lot of this stuff will have knock-on effects.

Rodney Van Meter: Thanks.

Mallory Knodel: Thanks for that, Rodney. Mark, you're up.

Mark: Hi. Um, thank you. Great talk. Really interesting stuff. Um, quickly to the last question. Um, there is definitely coordination between regulators at the international level. They listen to each other; they have conferences. And I know that a lot of the regulators in the middle powers and smaller countries are watching very carefully what's happening in the UK and the EU. So yeah, that happens.

Um, a couple of things. Uh, first of all, you talked about, you know, standards filling a role to provide interoperability. And I agree; I'm excited by that. Part of the problem in actually bringing that to reality is that what we do here is voluntary, and it's about voluntary adoption and getting the participation of the implementers. And so when you have something that is, shall we say, more forcefully suggested to these parties, you get malicious compliance, you get foot-dragging. And so figuring your way through that process is difficult. We don't have the right tool set to match up the incentives.

Um, and so that's why, you know, part of this I think is competition law. And then I look at, well, what other forces can be brought to bear to create alternatives that they then have to get in line with within the marketplace—so something like digital public infrastructure or whatever. But that's one of the big problematic parts of this for me: figuring out how to align those incentives.

Dodji: Uh, may I say something?

Mark: Sure, please.

Dodji: Well, I think that this is one of the few cases where the regulation is maybe not perfectly on our side, but the regulators are trying to force this. So uniquely, for once, it's not voluntary, and we don't have to expect people to do voluntary stuff. Obviously, it's not like they're saying, "Oh, go to the IETF and do what they say." But we actually have the regulators kind of on our side this one time.

Mark: Completely. Absolutely. And having that force is, I think, a powerful thing that we as a community should think about how to incorporate. It's just that all of our processes and internal decision-making mechanisms and the culture are all voluntary. And so it's actually a big change to have somebody, you know, come in and say, "No, you have to do that." It changes the incentive structure. And for example, we don't have formal participation requirements—there's not a membership model here. And so, you know, somebody can flood with a bunch of people making, you know, bad-faith arguments, and we don't gain consensus, and then it kind of falls down. So we have to change here somehow too, I think, to...

Dodji: Well, it doesn't have to literally be the IETF either.

Mark: Absolutely. Sure. Like we can all go over to Brussels and sit in a bar and talk about it.

Dodji: But the IETF is great.

Mark: From a regulator's standpoint, part of the problem here—and what you were saying really rang true for me—is that, you know, they often have these folks come to them and say—and I'm not going to name names—but they say, "Oh, but security," you know, "look, you're going to reduce security in our model." And it's such a self-advantageous model of security: it's based upon their architecture not changing, and it's not balancing any other factors beyond security. And so it's very disingenuous. But the regulators don't know how to evaluate those claims. So the more help that we and others can give them, I think, would be really valuable to debunk those.

And finally, just as an aside, you mentioned the 2-3% clearing fees versus 30%. I agree, you know, 30% is incredibly onerous. But my wife is an author, and that industry charges her 85 or 90%. So it's not the only evil in the world.

Dodji: Well, just because it could be worse doesn't mean it can't be better, you know?

Mark: Oh, absolutely.

Mallory Knodel: Excellent. Thank you, Mark. Dirk, you're up.

Dirk Kutscher: Dirk Kutscher, HKUST, as an individual. Um, yeah, brilliant talk. Thank you, and important work. Um, I think there's a bit of a dilemma here. So you mentioned you want to marry security and interoperability, and you said maybe we need some standards. But what if the economic incentives are not really aligned? So maybe, you know, those companies are actually not that interested in that.

Dodji: Well, I don't think they are. That's what the regulator... that's why the regulator's coming in and being like, "You need to do stuff."

Dirk Kutscher: Right. So, but how successful—I mean, maybe that's just again following up on the previous question, but how successful would any regulation then be in that case?

Dodji: Oh well, I mean, some of this stuff has been happening for a while. Um, I think some of it is proving to be not particularly effective. Also, the goal here is to improve market conditions, right? The interoperability is just a means to that end. And I think we're seeing, as the WhatsApp case develops, that it's not going so great, because it's not necessarily in the interest of the potential third-party companies to interoperate. Also, WhatsApp is maybe not doing it in a way that's the most conducive to them. I don't want to get too into it, but that one... meh.

The App Store stuff, I think—and this is my personal prediction—is going to shake out to be really good. Again, because it's fairly simple. It's like, "I want to be able to do my transactions on my website." That's not a hard thing to sit down and make. Um, so I think that that case, even if it's not that interesting from a technical perspective, is going to be a win. That's my prediction. I really hope I'm right.

And I think some of this hardware stuff, it's much more early days. This has been developing as of late 2025. So I think it's a bit hard to see. But I think they know what they're doing, and I think that it might work. And there are a lot of different issues. It's many, many different things. It's headphones, it's smartwatches, there are all sorts of things involved. But I'm optimistic that there are going to be some wins in there.

And then I think the tap-and-go payments, which actually got an earlier start than the DMA in the EU, I think that one also might be good. We'll see, because these are very capable big banks who I think would be the most interested in that. This isn't like some developer in his garage kind of thing, right? So I'm optimistic about some of these things. I'm not optimistic about the WhatsApp thing. The iMessage thing, which is a US case to be clear, not an EU one, I think has also gone well—the development of RCS and all this stuff, that's great. And that was, I think, more complicated than just being an antitrust win, but it is an antitrust win, and I'm happy about it, also because I'm an Android user. So I'm optimistic.

Mallory Knodel: Yeah. I'm also excited about the excitement around making the security better through interoperability, which I feel like, for folks involved in the IETF, is in and of itself a very satisfying engineering goal, right? So maybe it's not an incentive in the competition-economics sense, but it's a motivator for us, right?

Okay, we have two more questions, from Arthur and Gianpaolo, and I think we're good on time actually because we don't have that fourth talk. So we can take them both. Go ahead.

Arthur: Cool. Um, my name is Arthur. Um, I'm a PhD student at NYU Chicago. Is this better? Oh, sounds a lot better. Um, yeah. So I was curious, um, you know, based on legislation of the past like 10 years and regulation like the GDPR and then, you know, state laws like CCPA and stuff in the US. Uh, I think one of the issues that was brought up in previous questions was this issue of maybe malicious compliance or, you know, not full compliance. And just to give an example, you know, with things like portability rights, right? Like moving your personal data from one platform to another, that just not really being implemented super well—like requiring the consumer to download their data before uploading it rather than a server-to-server transfer. So I was wondering if you had any learnings, I guess, from these attempts at implementing past regulation and how they can apply to your talk and your thoughts on interoperability. So yeah.

Dodji: I mean, there are already plenty of instances of malicious compliance with this stuff; you don't have to look very far afield. I mean, the notarization thing that I mentioned—yes, the notarization thing that I mentioned—I think that personally it's malicious compliance, the way the fee structures are all set up and stuff like that. There's a lengthy discussion—well, it's not that lengthy, it's like a paragraph of discussion—in the appendix of the paper. Um, but I take great issue personally—I don't know, this rubbed me the wrong way—with the fee structures for the notarization. I think that this is an example of malicious compliance, again, in a way that's very uninteresting from a technical perspective, but it's important, you know? Malicious compliance doesn't need to be complicated to fit that description. And a lot of the stuff with the portability—I mean, that's kind of interoperation, right? So yeah, the lesson to take from it is, "Oh, this is going to happen." Um, but it's not like it's hard to identify when it does.

Mallory Knodel: Thanks. Thank you. Um, all right. Gianpaolo, you're up.

Gianpaolo: Gianpaolo Scalone, Vodafone. So thank you; very interesting presentation. And I think it touches on the point that it is very convenient if standardization arrives before the regulator, because if the regulator arrives first, then it proposes something that will harm both the market and the customers. We have seen, for example, with the blockings—for example the privacy shield—something that proposes a very bad user experience and also hits the market with something that is not really working, because there is no standard. If there is a standard, this could be helpful for the development of a good user experience and also interoperability, without hurting everybody later.

Dodji: Yeah, I agree. Was that a question? I mean, I think that—I'm not going to remember this example all that well, but the peer-to-peer Wi-Fi issue I think is one of these things where there's an existing standard and Apple does something else that may or may not be based on that standard. And having a standard that the regulators can whip out of their pocket and say, "Why don't you do this?"—I mean, the Signal protocol is kind of this way too, although again the messaging is so complicated I don't really want to get into it. So I think I'm just agreeing with you. Yeah, or was there something else to answer?

Gianpaolo: Thank you.

Dodji: Yeah, okay.

Mallory Knodel: That was great. Thank you, Dodji. Appreciate it. Yes. All right. Kate, I have your slides, and I'm going to pass you control.

Kate: Thanks. Can folks hear me?

Mallory Knodel: Yes. Welcome.

Kate: Okay. I hear that there's a bit of an echo.

Mallory Knodel: All right. Are you able to advance those?

Kate: Um, yes. Okay. Cool. All right. I'll get started. Um, thank you, everyone. Um, my name is Kate and I'm a director of a research program, Children's Online Safety and Privacy Research, or COSPR, and I'm a visiting scholar at UCLA. Um, this topic of children's online safety has been at the center of heated debates, from family dinner tables to morning talk shows to parliaments around the world. Uh, from age assurance and the social media ban to client-side scanning, we see over and over again how children's safety and adults' privacy are pitted against each other as people from all sides grapple with the question of how to regulate tech and what kind of digital futures we want to create for young people.

(COSPR slides FINAL)

This leaves us at an impasse. Children's safety versus adults' privacy, Big Tech accountability versus tech apologism, moral panic versus Trojan horse, doing something versus doing nothing at all. These binaries are not only inaccurate, but they paralyze all of us from actually engaging with the social, moral, political, and technological complexity that this topic holds. And this community, as technologists, digital rights advocates, parents, and former children, um has a unique positionality that can help us move past the impasse. Because this community has been at the forefront of imagining and shaping what a safe, secure, and collective digital future can look like. And to do that, we need to have a fundamental reorientation in how we understand this problem area.

So just a little bit of context—I feel particularly well-positioned to take you on this journey. I've spent the past 15 years in sexual violence prevention and response, in frontline work, community organizing and advocacy, research, and policy. I actually didn't start out with an interest in tech, um, but soon I had to become informed, because it was showing up in all the different ways in survivors' lives. And when I did, I was startled by how the divide between privacy and safety came up again and again, ultimately to the detriment of people who were impacted by sexual abuse. I share my journey with the hope that it gives you a glimpse into what's at stake when we define this issue as one of safety and privacy. There's so much more to this when we allow ourselves to see beyond the impasse.

So let's start with the current policy landscape, because I think it really powerfully illustrates how contestations and negotiations about children's safety and privacy are playing out in real time. So this is an ongoing, updated map of all the different child-safety-related legislation around the world. Um, this is, like I said, rapidly evolving, but I just want to draw your attention particularly to Australia, which as of December 10th of last year passed the social media ban, and since then it has been adopted in other jurisdictions, including the UK and the Asia Pacific countries, and Nigeria is currently contemplating their own Online Harms Protection Bill. Um, and the EU continues to have heated discussions about chat control, although we had some development just last week of maybe taking a pause for at least a little while. Um, but I share this map to show that, taken as a whole, this suite of child-safety-related legislation is shaping regulatory conversations not just about children's safety but also about what it means to do online content moderation and how we should be thinking about Big Tech regulation. And when you take all of them together, they have three main characteristics.

First, they have vague definitions that conjure strong mental models. Child safety, quote, "encompasses a range of problems; it involves grooming, child sexual abuse material, sextortion, scams, misinformation, radicalization, mental health, and so on." The fuzziness of child safety leaves a lot for interpretation. So when you read the policy language closely or listen closely to politicians' statements, it's very clear that they're constantly evoking and conjuring child sexual abuse and child sexual abuse material as the strongest justification for why the internet needs to be regulated in a protectionist way. Child safety may be vague, but this vagueness enables child sexual abuse to become a fixed mental model for regulating online harms. And it also glides over other harms, like pro-terror materials, which are often implicated in these conversations but rarely explicitly addressed.

Two, harm is reduced to content. Under the protectionist rhetoric, harm exists within the four corners of content. Harm appears to exist purely online, isolated from the complex interplay of online and offline factors. We lose sight of how risks differ from harm, how toddlers and teenagers are impacted differently, and how harm emerges from behaviors. This singular focus on content removal has the effect of diluting the motivations and incentives of the institutions and people who take advantage of and mistreat young people. Rather than addressing the root causes that create predatory systems that mistreat young people, the goal then becomes harm removal via content removal.

And three, solutions are privatized and punitive. It's no wonder that the proposed interventions are tech-solutionist, like age assurance methods, client-side scanning, user reporting tools, grooming classifiers, and so on. But more importantly, these measures are privatized and singularly focused on punishment. They enable the tech industry to collect, archive, consolidate, and centralize more data about everyone's private and intimate lives in the name of finding bad guys. We end up assuming that catching the bad guys means that young people are being protected. And these threads form what I describe as a protectionist approach. This is an approach to safeguarding young people that collapses meaningful differences in harm, motives, and impacts in order to justify taking a blunt and universalized law and order response.

Now, I do want to caveat that protection is not inherently harmful, but in this protectionist approach, young people are frozen as helpless victims rather than complex people whose needs evolve over time. Safety is offered as a special carve-out for young people, even though these are the kinds of things that we might all benefit from as people who use the internet. Risks and harms become one and the same, so as to warrant a blunt response that's more concerned with catching the bad guys than supporting young people. And protectionism has a way of taking over the discussion and pushing it to the extreme. It does so because child sexual abuse, and in the online context, child sexual abuse material, are fixed as a mental model for how we think about child safety and how we think about content moderation. This jump happens through what my colleague Elizabeth Clements, a long-time restorative justice practitioner for child sexual abuse harms, calls the ick factor of child sexual abuse.

So what do I mean by the ick? Um, before I go on, I just want to take a moment to acknowledge that from this point onwards I'll be talking in a very detailed and candid manner about child sexual abuse. It can be a difficult topic for many people, so I encourage you all to engage with it at your own pace and on your own terms. Um, so, you know, give yourself time; feel free to go off-camera, feel free to step out, whatever you need to do. Um, and for those of you who are in person together, I encourage you to practice group care, like, you know, having a debrief conversation afterwards or going on a walk together.

So why do we get the ick? Let's return to the ick. Um, this avoidance is a response to our shared imaginaries about child sexual abuse, which mainly take shape in the figure of the predator. The figure of the predator is almost always imagined as an adult man with pathological sexual interest in young children and predisposed to seek them out and abuse them. So we often refer to them as sex offenders, predators, pedophiles, and groomers, who are singularly responsible for the prevalence of child sexual abuse. But contrary to these associations, the reality of CSA, or child sexual abuse, is quite different. About 90% of children are harmed by a person known and trusted by them, usually at home. So this happens within the family system. And juvenile sexual offending, which means offending that happens between children, accounts for over two-thirds of CSA. It often involves siblings or people in sibling-like relationships, and the average age difference between them is usually six years. Um, so even for those who are aware of these things, the constant sensationalist cultural representations, like, you know, To Catch a Predator or horror movies like It and the prevalence of true crime podcasts, constantly set us back from contending with the reality of child sexual abuse.

And as a result, it masks systemic factors that enable CSA in our families, communities, and institutions. For 78% of children who experience CSA, it happens more than once, and for 42%, it happens more than six times. So this isn't a one-off; it's something that happens throughout a young person's life. And it also happens in tandem with other forms of maltreatment. Four in ten children experience more than one type of abuse, such as neglect, physical violence, emotional violence, and exposure to domestic violence. And it disproportionately impacts those who are already marginalized and vulnerable. Girls are at a greater risk for most types of child maltreatment, particularly for sexual abuse, emotional abuse, and neglect. Children with disabilities are three times more likely than children without them to experience CSA—this is a stat from the US. And children in the foster care system, also from the US, are more likely to have higher lifetime experiences of CSA.

So not only is the figure of the predator inaccurate, the truth is that it also masks harms that children and adults impacted by CSA experience. Um, and one of the ways in which this manifests is through delayed disclosure. So for those who have been harmed, it's really common to have a delayed disclosure, and that can last anywhere from one week to 46 years. Um, and in fact, 66% of incidents are never disclosed to any adult.

How then has the ick factor shaped children's online safety? The story of protectionism in many ways is a story of an ecosystem, one that's constantly reproducing and ever-expanding. It's a story about an ecosystem that's really fixated on chasing numbers, and in doing so, casts real people and real harm aside in pursuit of what the numbers represent.

So let's start with unpacking the numbers, um, because this is how we end up talking about child sexual abuse material, and it distances us from the reality of what those numbers represent. Um, so take this commonly cited figure—for example, in 2023, NCMEC, which is a U.S.-based clearinghouse that stands for National Center for Missing and Exploited Children, received 36.2 million reports of suspected child sexual exploitation. Here we have a big scary number of CSAM that conveys a certain idea about how prevalent and insidious CSAM is on the internet. So let's unpack this. Um, first things first, in the US, there's a mandatory reporting obligation for electronic service providers, or ESPs, to detect, report, and remove suspected CSAM via the CyberTipline, which is run by NCMEC. So let's start with the question of what CSAM actually is. Um, it's actually a very particular legal and technological object that's rooted in US legal history and is both over-inclusive and under-inclusive of how child sexual abuse and exploitation might be captured as pieces of content.

So the definition has three components that I want to highlight: it's any visual depiction of sexually explicit conduct involving an apparent minor. So the three pieces that I want to highlight are, first, visual depiction. Um, this is something that varies across jurisdictions around the world, but in the US, it's very much focused on images and videos, although there are some arguments for expanding it to include text. Um, sexually explicit conduct almost always refers to the presence of genitalia and penetrative sexual acts. Um, this reflects a heterosexual bias in US law, so, you know, we see something like same-sex encounters only later being included in legal understandings of sexual violence. And then finally, apparent minor. Um, apparent is doing a lot of work here: it's a body that appears to be a minor, regardless of what their actual age is.

So with this definition, we see that there's a really particular idea of what a young person looks like and what their sexuality ought to be that's very much encoded into the legal definition of CSAM, which then gets exported through these, you know, globalized detection systems that are run as a black box through ESPs. So when you zoom out and look at what that system catches, the CyberTipline actually includes a whole range of content, from understandable and maybe even innocuous to exploitative and abusive material that we should be prioritizing. So there's the genuine material—you know, when you hear the word CSAM, you think about kids in cages, those horrific things that we often see in popular culture and hear about in the news. Those abusive and exploitative materials are there. Um, but there's also, you know, the baby bum on the beach. You know, buttocks are considered part of genitalia, so babies running around, your grandmother taking a picture of her grandson taking a bath—those things technically meet the legal definition. There are also developmentally normative expressions of sexuality from older children. In the industry it's called self-generated material. Um, I personally hate this term because it's still sort of, you know, passing the judgment that it's abusive material, when it's young people engaging in what are developmentally normative expressions of sexuality. Um, there's also what's called potential meme content, which refers to people who are resharing known content, either out of horror, or to raise awareness, or to troll other people. Um, and then there's obscene visual content, referring to animated or, you know, other visual depictions of children.

Um, so when you take all of this into account, we can start to complicate that number of 36.2 million CyberTips. Of the tips, 49% are considered actionable, meaning that they were able to be escalated to law enforcement, and 3% were actually referred to law enforcement. And 0.18%—that's 63,892 CyberTips—were escalated for likely involving hands-on abuse of a real child. I want to sit with this number, because this number is serious, and it really captures what CSAM detection is intended to capture. But I also want to acknowledge that this is very different from the 36.2 million that we started out with. Um, so to determine the presence of abuse, we often need information that goes beyond detection. So we have things like consent and harmful behavior, motivations ranging from innocuous curiosity to egregious depictions, and relationality between the people involved. Um, mindful of the time, I'm going to skip a few slides real quick. So what we end up getting is an ecosystem that's very much centered around chasing numbers.
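As a quick sanity check on the funnel just described (using only the figures cited in the talk; the intermediate counts are approximations derived from the stated percentages):

```python
# Back-of-the-envelope check of the CyberTip funnel described above.
# Figures are the ones cited in the talk; the percentages are rounded,
# so the derived counts are approximate.
total_reports = 36_200_000

actionable = total_reports * 0.49   # ~17.7 million considered actionable
referred   = total_reports * 0.03   # ~1.1 million referred to law enforcement
escalated  = 63_892                 # tips escalated for likely hands-on abuse

print(f"{actionable:,.0f} actionable, {referred:,.0f} referred")
print(f"Escalated share of all reports: {escalated / total_reports:.2%}")  # ~0.18%
```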

So at the center is this idea of CSAM that we have, which is, you know, operationalized in proprietary detection technologies by the ESPs, who then pass tips on to law enforcement and regulators who investigate CSAM tips for policing responses. We have lobbyists and campaigners whose job is to raise awareness and influence policy on CSAM. Um, and they work closely with clearinghouses and tiplines whose job is to collect, review, archive, and refer CSAM tips and maintain databases. And all of these actors work really closely with tech solutions vendors, who are third-party actors that develop and sell compliance tools to other ecosystem actors.

Now, that's a lot of people in this world, but you'll notice that a bunch of people are actually missing from here, and those are the frontline support services, the helplines, the social workers, the teachers who actually interface with young people and have a holistic view of how child sexual abuse and online harms might be interacting with other forms of harm. Let me actually go back to the slide. When you put CSAM in conversation with real-life experiences of abuse, we see a lot of parallels. 44% of CSAM offenders are under the age of 18, and 68% of CSAM offenders are known to the child. In fact, where online commercial sexual exploitation is involved, most cases, 92%, are self-negotiated, with money being the primary motivator. Online offending also disproportionately implicates adolescents with autism spectrum disorder, so there's a bias against young people with neurodivergence. White, non-Latinx children's cases of physical and sexual abuse are more likely to be substantiated than Black and Latinx children's, even though the images we see tend to broadly feature white children and white offenders. It also reproduces a digital divide, given that the CyberTips mostly feature white children and don't actually capture Black and brown children.

And there's also the breach of trust. Mindful of the time, I'll wrap this up and finish. Many victims of CSAM are actually not aware that their images are used in detection systems. In a recent survey of 80 victims of CSAM, 60% were aware that materials were retained by the police, 45% by the tech companies, and 35% by researchers. And 90% of those interviewed wanted to know how their imagery would be used. I think this is a powerful note to end on: we have this entire ecosystem fixated on detection, with the promise of finding the bad guys, while young people are not necessarily receiving the help that they need, and the people whose images are the building blocks of this entire ecosystem don't even know that their images have been used in this way. So I'll pause there and open up for questions.

Mallory Knodel: Great. Thank you, Kate. Folks can jump in the queue, in person or online. Go ahead, Corinne. Welcome.

Corinne: Hi Kate. I hope everyone can hear me okay. Thank you for giving us a rundown that so few people do, and for really pulling this question into its broader social domain rather than just the online domain, as so many people do. One of the things that I would love to hear you talk about, which we didn't get into in the talk, is what comes next. For example, I work for an organization focused on protecting freedom of expression, including online, and we often see that many of the different policy proposals, including the ones that you so helpfully mapped out, end up severely restricting freedom of expression online in the name of saving children, which according to your talk might not even be where we end up if we pursue this particular path. So I'd love to hear what you think can and should be done about this global trend, and what policy paths you think are better to pursue. Thanks so much.

Kate: Yeah, thank you. I'll take this as an opportunity to show my other slides, where I was going to answer this question. My broader argument is that the current ecosystem is fixated on detection even though decades of research have shown that the way to actually break the cycles of harm that underpin child sexual abuse is to focus on prevention, and we see a disproportionate allocation of resources to detection over prevention. So instead of a detection ecosystem, what would that ecosystem look like if it were prevention-oriented? To give a little more context: in general, people seek help when they're given the opportunity, but they're not always able to receive it. Before there is an incident of child sexual abuse, whether between an adult and a child or between peers, there are multiple opportunities for intervention and for harmful behaviors to be acknowledged. But in online spaces, when you ask, 70% of those who sought help in relation to their problematic sexual behaviors online, such as viewing CSAM, were not able to receive it. This is often because people don't know that they can receive help, and when they do know, they simply don't have access to it in the jurisdictions where they live. The fact that people can find information about things that are not immediately available to them, and can do so anonymously, is one of the most impactful things that online intermediaries can offer as help-seeking entities. So help-seeking is integral to CSA prevention, and this is why I insist that there is a role for technologists to play. This idea that privacy and safety are antagonistic does a fundamental disservice. Young people, particularly young men, are more likely to seek help from online services as a freely accessible, anonymous, and stigma-reducing alternative. So privacy is necessary for safety, and safety is necessary for privacy. Thank you.

And on the shrinking prevention ecosystem: the help-seeking attitudes of youth workers are considered particularly important, again because they have such frontline insight, and youth workers have been identified as key community gatekeepers. The people who are siloed in the picture of the ecosystem I showed earlier are the people who should be driving the prevention ecosystem. But we saw, in the mapping, all the ways in which funding, regulatory momentum, and the allocation of resources continue to keep them siloed and shriveling, while the detection ecosystem continues to expand.

Mallory Knodel: I'm glad that question came up, and I'm glad it gave you an extra chance to talk about the privacy-security interplay. We are running short of time, so I'm going to suggest Andrew and Edmond... is Edmond in the room? You are, okay. I was also encouraging you to throw anything specific in the chat. But okay, Andrew, you're up. Can you keep it really brief? Thank you.

Andrew: Hey, hi, hopefully my sound is working.

Kate: Yep. Yeah, I can hear you.

Andrew: I'll post some longer comments in the chat in the interest of time, but a couple of brief comments. Yes, prevention is absolutely important, but certainly outside the US (I don't know about the US itself) the ecosystem includes agencies for prevention and also for help. So it's not as narrow as that slide suggested; that might be true in the US, but it's not true outside the US. Also, images are not retained by tech companies and others in many countries because it's illegal to possess them; retaining them is an offense in law in many countries. Again, I'm not sure about the US. And then two other brief comments. I disagree with the presentation in that detection absolutely matters, because the material itself has consequences. Its sharing re-victimizes the children who feature in the content, and they say so themselves: they feel re-victimized by the existence and sharing of the material. So finding it and taking it down is absolutely important to victims. It also places them at additional risk, and there are plenty of case studies of people actively seeking out the victims of CSAM, including when they become adults. So we shouldn't dismiss the enormous benefits of taking down the content. And then finally, in terms of the child-centered view...

Mallory Knodel: Andrew, I'm so sorry, you have to wrap it up. We're going to move on, but I really appreciate you sharing those nuanced statistics, and you can take it to the chat. Thank you. Thank you so much, Kate. Really appreciate your presentation. All right. And I'm driving your slides, right, Shaye?

Shaye: Thank you so much.

Mallory Knodel: Welcome. We're glad to have you.

Shaye: Thank you so much. Good morning, good afternoon, good evening from wherever you are in the world. I posted on LinkedIn this morning that for a lot of my career I had been asking trust and safety teams, PR teams, and public affairs teams: can we speak with the technical experts? Can we bridge that gap? So thank you so much, Mallory, for convening a space where human rights defenders can talk to the techies, rather than being set up in manufactured opposition to each other.

(Standards of care: When online harm becomes organisational failure - IETF)

So I'll be talking about standards of care, which I think follows quite nicely from Kate's presentation: when online harms become organisational failures, and how we can take a governance approach to standards of care. When engineers design protocols, they are making decisions about reliability, efficiency, and how different technologies connect and function together. Those decisions can feel abstract, but once those protocols, designs, and products go live, they shape how billions of people interact with each other. They shape how information spreads, how communities form, and sometimes how harm travels through society.

My work sits right at that intersection, and for the past decade I've been studying what happens when technical teams meet real human behavior and how institutions can build what I call standards of care, so that harm becomes anticipated and prevented rather than managed after the fact. I'm Shaye Akiwowo, a British Nigerian living in the UK, and my work focuses on governance design, institutional accountability, and democratic resilience in digital systems. I have worked across policy, civil society, and the tech sector to understand how online harm emerges for different communities, and why it should not simply be framed as individual behavior but as structural failures of duties of care within digital systems. During my term in elected office in East London, I founded Glitch, the organization that helped reposition online abuse as a governance issue during the UK's Online Safety Act. Unfortunately, we're still pushing for the Violence Against Women and Girls code to be compulsory rather than optional, but we have made significant progress here in the UK. Next slide, please.

Working across these different environments, from policy rooms to product teams to communities experiencing harm, has given me a particular vantage point. And from that vantage point, a pattern has become visible to me. For the last decade, online harms have been treated as exceptional: something to apologize for, patch, and manage after the damage is done. But when you study the incidents that disproportionately impact marginalized communities, a different picture emerges. Many online harms are not random; they follow a structural pattern. I see it this way: harms emerge from design incentives, harms spread through amplification systems, and harms persist through governance gaps. When these three conditions exist together, harm scales. Next slide, please.

Let's consider the evidence. Women are 27 times more likely to experience online abuse than men, and Black women face significantly higher levels of targeted harassment: they are 84% more likely to experience online harms than white women. So at this point it's clear that harassment campaigns, misinformation cascades, and coordinated abuse cannot be explained as isolated incidents or the behavior of a few bad actors. They are systems outcomes. And in 2026, those outcomes are increasingly predictable. When harm becomes predictable, allowing it to operate at scale is no longer simply unfortunate; it becomes a design choice. Not because engineers intend harm, but because the incentives embedded in system design influence how people behave within those systems. And the people who are exposed to the harm often reveal the vulnerabilities in the system first. These tech systems shape real-world power, safety, and participation, and they expose who we really see as people and as humans. Next slide, please.

So this is where the idea of standards of care has become really important in my work advising tech companies, because when you analyze online harm structurally, the key question changes. Instead of asking who posted the harmful content, which is what content moderation policies often focus on, we begin to ask: what design conditions allow that behavior to scale? Digital systems are not neutral environments; they are built around incentives, and those incentives shape behavior. Systems optimized, in my opinion, for cheap engagement metrics amplify emotional content. Systems optimized for reach have prioritized amplifying outrage, shame, and fear. Systems designed for the frictionless sharing of information, including harmful information, have allowed misinformation to spread quickly. So when harm occurs at scale, responsibility does not sit with the users alone; it sits within the systems that make those behaviors effective. Next slide, please.

Let me give you some concrete examples. We are now seeing mainstream AI tools used to generate non-consensual sexual imagery of women and girls; most visibly this has happened through tools like Grok on X. But this trend should not have been a surprise to us. For years, the nudification of women and girls has been a documented problem online. In 2023 alone, nudify apps attracted more than 24 million users. These tools were trained almost exclusively on the bodies of women, and in many cases did not work on men at all. We also know that around 99% of people targeted by sexual deepfakes are women. So the demand was known, the risk was documented, and the precedent was clear. Launching AI systems capable of producing this content without meaningful safeguards was not simply an oversight; it reflects how innovation moves faster than the governance structures designed to anticipate it, and how unconscious bias is designed in from the very beginning.

Another example, please, next slide. Another example is what we now call the manosphere. Many people recently encountered this ecosystem through Louis Theroux's documentary last week on Netflix. For some viewers it felt shocking, but for those of us who have studied online abuse systems it was not surprising, because the manosphere did not appear out of nowhere. It grew inside digital systems that reward outrage, humiliation, conflict, money, greed, and dehumanization, because we have decided, culturally and through a design lens, that those emotions are the most important engagement metrics. When humiliating women drives clicks, humiliation becomes part of the platform economy. Recommendation systems amplify what keeps people watching, and over the years we have seen documentation of YouTube recommendation systems promoting increasingly extreme content to young people. Creators respond to those incentives, producing outrage-driven, short-term content for clicks and likes, and gradually an ecosystem forms. So what looks like a cultural crisis is also a systems outcome. Next slide, please.

When harm occurs online, responsibility often moves between multiple actors. Engineers focus on system performance metrics rather than downstream social harm. Product teams say they are responsible for a specific set of engagement metrics: watch time, clicks, shares. And trust and safety teams enforce reactive content moderation policies, relying on users reporting abuse, meaning those users have already experienced the harm. So this is a governance problem, and good governance is not confined to the executives, the senior leadership team, or regulators. Good governance begins in the production chain, in the design decisions, in the incentive structures, and in the infrastructure choices that shape how systems behave at scale. When responsibility is fragmented, foreseeable harm has nowhere to land. Next slide.

And I think it's really important to say that not everyone can opt out of online life just because it's not safe. The idea that if it's not safe you can simply stay offline is just not true anymore. For journalists, organizers, human rights defenders, founders, and public figures, these platforms are where work happens and where visibility is built and needed. That means they are also the least able to step away when harassment escalates. Research consistently shows that harm concentrates most intensely where multiple vulnerabilities intersect. One useful diagnostic tool for understanding this comes from Black feminist scholarship: intersectionality, coined by Dr. Kimberlé Crenshaw. Intersectionality examines how systems affect people differently depending on overlapping identities and power structures. In technology design, this becomes a powerful risk detection tool, allowing us to be more preventative than reactive. Because if systems produce harm first and most intensely for the most exposed users, often women of color, children, and neurodivergent communities, that signal tells us something about the system itself. For example, studying misogynoir, the specific targeting of Black women online, has repeatedly revealed vulnerabilities in platform design earlier than other indicators. In my book How to Stay Safe Online, I talk specifically about how Black women were experiencing these harms before Gamergate, and how, if platforms and content moderation teams had listened to them, perhaps we could have prevented Gamergate. Online, those edges are frequently experienced by people sitting at multiple intersecting vulnerabilities, and misogynoir and misogyny provide early warning signs of system failure, or at the very least of the system replicating the harms we're trying to escape offline. So we can use intersectionality as a way for engineering teams to stress-test a system. I borrow this example from my time in politics: when we had statutory failures, say young children dying in care, we would have an annual review to look at where the systems failed. How do we make sure that we are stress-testing our policies and our systems with the most vulnerable person in mind? That's an offering for engineering teams when they are designing systems. Next slide, please.

And often what we've seen in the narrative framing and responses from the public affairs and partnership teams of tech companies is the offer of more digital wellbeing tools, which is important, and many people do rely on tools to limit exposure and manage stress online; this can help users cope. But when systems rely solely on individuals coping rather than on structural responsibility, that cost doesn't disappear; it's absorbed by those already dealing with multiple intersecting vulnerabilities. It moves into mental health, which we're seeing as a contributing factor in how people behave and in their relationship with the online space and technology, and it shapes who gets to stay visible, which impacts democracy, gender parity, and all other forms of human rights. So it ultimately shapes who gets to participate in public life and who is seen as a human being. Next slide, please.

In many industries, standards of care are well-established. If you design a car, you must consider passenger safety. If you manufacture medicine, you must demonstrate it will not cause harm. But digital systems have historically developed without equivalent expectations. A standard of care means anticipating foreseeable harm and designing systems that reduce those risks. This requires three things: anticipation, identifying risks with the most marginalized person in mind before systems scale; accountability, clarifying responsibility across institutions and within them; and finally, institutional learning, adapting when harm patterns appear rather than just reacting, issuing an apology statement, and moving on. This is not about slowing down innovation but about ensuring that systems can sustain human dignity at scale. Next slide, please.

Like I've said, most governance debates happen after systems scale, but infrastructure decisions determine what behaviors are amplified, what incentives exist, and where accountability sits. If care for the most marginalized is not considered upstream, systems often evolve towards amplification without friction, scale without safety, and responsibility without ownership. Therefore, standards of care must exist upstream of harm. And then my last slide, please.

Many of the people in this room work at the earliest stage of the internet infrastructure. That position carries enormous influence, because the assumptions embedded in infrastructure shape what becomes possible later. Care isn't soft; it is a decision: a decision about who gets to participate in systems, whose safety is prioritized, and how responsibility is distributed when harm occurs. That's the work I now focus on: helping leadership teams think about a duty of care in their decision-making around upstream protocols before those decisions become public failures. If that's a conversation you're up for, I would love to hear from you in the Q&A and answer your questions. And I have one question for you all, because I've been dying to speak to more techies: when protocol developers think about system risk, do your conversations ever include the downstream social harms that those systems might produce, or is that considered outside the scope of infrastructure design? Thank you very much.

Mallory Knodel: Thanks very much, Shaye. So, folks, get yourselves in the queue; we have a little bit of time for Q&A. And there was a question posed, so if you have an answer, that's great. I see Shane has answered in the chat. This is the end of day one of the IETF, so I'm going to go ahead and invite everybody to give Shaye and all the speakers a round of applause and many, many kind thanks for joining us. I was so happy to have you all here. This has been a really good session and I appreciate it very much. Thanks to everybody who came, and have a good rest of your week, everybody.