By Kacie Harold, Omidyar Network
Caroline Sinders is a designer and artist focusing on the intersections of artificial intelligence, abuse, and politics in digital conversational spaces. She has worked with the United Nations, Amnesty International, IBM Watson, and the Wikimedia Foundation, and recently published a piece with the support of Omidyar Network and Mozilla Foundation. Sinders has held fellowships with the Harvard Kennedy School, Google’s PAIR (People and Artificial Intelligence Research group), and the Mozilla Foundation. Her work has been featured in the Tate Exchange in Tate Modern, the Victoria and Albert Museum, MoMA PS1, LABoral, Wired, Slate, Quartz, the Channels Festival, and others. Caroline also has a passion for addressing harassment online, which represents one of the harmful behaviors within the Bad Actors Tech Risk Zone.
Caroline, can you tell us about how design plays an important role in creating safe and inclusive environments online?
I’ve been studying online harassment for nearly seven years. I look at it from the perspective of how technology products and social networks are designed, and how that design can mitigate or amplify harassment. I focus on how the design of a space allows harassment to occur, including both the actions a harasser can take and the affordances a victim has to mitigate the harassment they are receiving.
How can tech companies benefit from protecting their users from harassment?
I always like to remind people that bad business costs money. Additionally, when people face harassment, they tend to engage in self-censorship. The chilling effect of harassment is that people post less content and engage less often. I believe that becomes a human rights issue when, for safety reasons, some people cannot engage freely on a platform but others can. Being able to participate safely in a space is crucial for engaging in free speech. Ultimately, a company will lose money if people stop using or altogether leave their platform; one way to get users to stay is to protect them.
In the last few years, you’ve worked with Bandcamp, Facebook, and Wikipedia on anti-harassment policies and tools to support victims. Are there any common challenges that you’ve seen tech teams struggle with as they address harassment on their platforms?
Platforms, across the board, struggle to identify emerging forms of harm. Harassers are always changing their methods and finding some new way to hurt other people. It’s important to regularly talk to a variety of people from underrepresented groups who are using your product or technology, in order to understand how forms of harassment are evolving.
When you listen to users, you need to be aware of their relationship to the tool. Often in open source communities or volunteer-led projects, you see a lot of users who feel very committed to a project because they have contributed to it and are deeply invested in the community. For instance, at Wikimedia, I saw victims who were more willing to forgive, empathize, or work through the harassment they had faced out of concern that asking the Wikimedia Foundation or community leadership to make changes might rupture the community or hurt the encyclopedia. In these cases, you need to find other marginalized members who have experienced toxicity, have a conversation with them, and make sure you aren’t perpetuating toxicity in order to protect a project.
Another challenge is that some forms of harassment look innocuous at first. For example, imagine you receive the same message from 10 different people over the course of a year, and although you block the users, the messages keep coming. When you file a report, there’s no way to show that the messages are related, and the platform has no way to investigate it. In another scenario, you receive a comment from someone that says, “I love your green top with the polka dots,” and you might be scared, wondering why or how that person has seen your shirt. But the content moderator isn’t going to see that; all they see is a comment on the victim’s appearance. Even with harassment policies and procedures in place, reporting flows may prevent victims from sharing the context or evidence a content moderator needs to verify the harassment.
How can tech companies be proactive about preventing harm on their platforms?
Unfortunately, when big tech thinks of preventative care in terms of harassment, they think of technology solutions. This can be really problematic because those technology solutions end up being things like AI and AI filters, which aren’t very accurate.
Preventing harassment would entail much more user-friendly privacy settings. The challenge is, most people aren’t necessarily thinking about their safety until it has been compromised. One way to increase safety for users is to make data privacy settings really legible and easy to find and use. This could also look like sending users a push notification suggesting changes to their privacy settings, keeping location sharing off by default, or even notifying users of ways that harassment can occur on that platform.
In addition to giving people tools to protect themselves, victims may also need proof that they have reported abuse in case things get worse. Right now, if you file a harassment report on Facebook or Twitter, they send you an email. It would help victims to be able to find all of those reports in one place, in a downloadable format, in case they need them to build a legal case at some point.
What advice do you have for tech makers, builders, or companies that are just starting to think about or discuss harassment?
Hire Black women and other marginalized people who use your tool. If you are a privileged person, you may not quite understand that someone could experience harassment in a place that feels very safe to you. I think of Zoom, which really could not have anticipated this moment or the popularity of their tool. The CEO said that they had never thought about harassment because Zoom was created as a workplace tool. But we know that harassment happens at work.
When you design a technology, always ask yourself what could possibly go wrong and really map things out, even if they feel absurd to you. Don’t just design for the middle area of how you hope people will use your technology; design for the real world.
Finally, remember that every data point about harassment is a real person’s traumatic story. So even if you have what seems like really low numbers of harassment, it’s always important to remember that these are people experiencing trauma, not numbers.
You can find more of Caroline’s work on her website, and you can follow her journey on Twitter @CarolineSinders.