Exploring the Tech Risk Zones: Outsized Power

By Kacie Harold, Omidyar Network 

Chris Hughes is the co-founder and co-chair of the Economic Security Project, a group that is working to restructure the economy so that it works for everyone. Prior to this, he co-founded Facebook in 2004 and later served as publisher of The New Republic for several years. Chris has worked exclusively on economic issues since 2016, focusing on anti-monopoly and antitrust issues and calling for a guaranteed income and changes to tax policy.

What motivates you on the issues related to ethics and responsibility within technology?

I think any systems builder, whether it’s in technology, finance, or the arts, needs to think about how their work impacts other people and the world. This is an obligation that we all have as humans first, whether we end up as business people or anything else. Tech in particular has a very important responsibility, since so many of the companies out there are pioneering and charting new paths for products and services. And each one of those comes with a different set of ethical questions that tech companies need to develop a practice of asking and answering on a regular basis.

I tend to be an optimist and think that folks in tech now are thinking much more comprehensively and ethically. That said, I don’t think that we should overcomplicate thinking about ethics. Just as parents teach their kids at the playground to think about how their play affects other kids, we do the same thing in schools. We have to do the same thing in business and technology as well. And I don’t think that creating the habit of thinking about how our work impacts others is a particularly tall ask. Part of being human is thinking about how we live in community with other people, what other people provide us and what we’re providing them. If there’s any moment in the past several decades that illustrates that more than ever, it’s COVID-19. We can see that we all rely on each other to stay healthy and create the kind of communities that we have appreciated and want to return to.

I think that the long arc of history teaches us that we’re all relying on each other and the decisions that we make as individuals affect the greater community. This is true in business. It’s true in politics and in organizing, too.

How do monopolies hurt entrepreneurs?

For small business entrepreneurs, the most worrying thing about monopolies is their ability to move into a market and effectively shut that market down by price gouging, copying features or tools, or making hostile acquisitions of companies in that marketplace. From a talent perspective, it is often difficult to compete against monopolies because of their ability to attract and retain talent.

Increases in market concentration lead to decreases in entrepreneurship and innovation. It becomes harder to enter these markets, and this slows down the pace of innovation. Even before the recession, small business startups were at a near historic low, and one of the chief causes of that is monopoly power.

My sense is that we’ve lost a vocabulary and a framework to talk about outsized corporate power in the United States over the past 30 or 40 years. Most folks in tech are wary of these conversations, but they are also concerned about the scale of the consolidation of power. And so we’re at a transitional moment where folks in tech, like a lot of people elsewhere in the country and even in the world, are rethinking what the role of public policy should be to create markets that are fair and create good outcomes for everyone.

I think another issue is that people have begun to see the large tech monopolies as inevitable and unchangeable. And so they may not think as much about how those monopolies impact their lives or impede their work.

That’s what the leaders of the large tech companies want you to believe. So if that’s what you’re thinking, they’ve got you right where they want you. The more that they can convince folks that there is no other way, and that this is the best of all possible worlds, then they’ve won.

I think there are a lot of entrepreneurs out there who are thinking creatively and are skeptical about the role that those large companies are playing. The challenge is less about not buying into what the tech companies are saying, that monopolies are inevitable, and more about believing that government can be a positive force for good in the world. Specifically, that the Federal Trade Commission and the US Department of Justice can create markets that are more dynamic and fair. We live in a time where cynicism about the government runs deep. I think for entrepreneurs, that cynicism is a bigger barrier than the tech companies’ talking points.

If you don’t mind shifting for a moment, I’d like to ask you about something you wrote in an op-ed for The New York Times in 2019 about your experience working at Facebook.

“I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders.”

Given your experience of being in the room as decisions like this get made, is there any advice you would give tech developers and teams to identify the key moments where they need to stop and think about what they are creating?

I can only speak for myself. In the early days of Facebook, it was very hard for me to imagine what a success at scale might look like. These were products that were for college kids in the United States that were largely meant to be fun, about creating connection. We knew that it was more than just fun, but the backdrop was that it was a lighthearted, recreational project that we hoped would bring people closer together. That would have been the way we would have spoken about it at the time. And so for me it was very hard to imagine what this thing would look like when over a billion people were using it, for who knows how many hours a day, and anyone can have access. That difficulty was real, and it isn’t an excuse because we knew that Facebook was a very sticky, very popular product, very early on. And that’s why I wrote what I wrote. Because we should have thought much more seriously about what it could turn into, even at the outset.

I’m not sure if it would have changed some of those initial decisions that I made at the time, but it would have created a framework of accountability that we could refer back to as a team and individually. And I think it’s only in the past year or two that Facebook has really come to even understand its responsibility, if it really has. My advice to (tech) teams is, even if you’re working small, think big, and think about what problems could be introduced at scale.

I think when you are in a company that is growing and doing really well, it’s natural to be excited and want to move quickly, but that speed can make it difficult to predict ways that things could go wrong. Do you have any advice for how tech makers can recognize those pivotal moments where they should slow down and consider the impact of what they are creating?

You’re always in the moment, and you don’t have to worry about figuring out if you’re in the moment or not. My advice is that you should always be asking that question. Often it will feel theoretical, but it isn’t. I guess that’s my point with the playground analogy at the beginning. Thinking about how your actions impact other people is a basic part of living in a community with other people.

I realize that interviewing somebody (formerly) from Facebook may be a little counterproductive because people could say, well, my company is not going to become a Facebook, so I don’t need to worry about this. But I think everybody should be thinking about it much of the time, whether you’re in the CEO suite or the most junior customer service agent.

You can find more of Chris’ thinking on Twitter @chrishughes. The Economic Security Project is a grantee of Omidyar Network.

Exploring the Tech Risk Zones: Bad Actors

By Kacie Harold, Omidyar Network 

Caroline Sinders is a designer and artist focusing on the intersections of artificial intelligence, abuse, and politics in digital conversational spaces. She has worked with the United Nations, Amnesty International, IBM Watson, the Wikimedia Foundation and recently published a piece with the support of Omidyar Network and Mozilla Foundation. Sinders has held fellowships with the Harvard Kennedy School, Google’s PAIR (People and Artificial Intelligence Research group), and the Mozilla Foundation. Her work has been featured in the Tate Exchange in Tate Modern, the Victoria and Albert Museum, MoMA PS1, LABoral, Wired, Slate, Quartz, the Channels Festival and others. Caroline also has a passion for addressing harassment online, which represents one of the harmful behaviors within the Bad Actors Tech Risk Zone.

Caroline, can you tell us about how design plays an important role in creating safe and inclusive environments online?

I’ve been studying online harassment for nearly seven years. I look at it from the perspective of how technology products and social networks are designed, and how that design can mitigate or amplify harassment. I focus on how the design of a space allows for harassment to occur, including both the actions that a harasser could engage in and the affordances that a victim has to mitigate the harassment that they are receiving.

How can tech companies benefit from protecting their users from harassment?

I always like to remind people that bad business costs money. Additionally, when people face harassment, they tend to engage in self-censorship. The chilling effect of harassment is that people post less content and they engage less often. I believe that becomes a human rights issue when, for safety reasons, some people cannot engage freely on a platform but others can. Being able to participate safely in a space is crucial for engaging in free speech. Ultimately, a company will lose money if people stop using or altogether leave their platform; one way to get users to stay is to protect them.

In the last few years, you’ve worked with Bandcamp, Facebook, and Wikipedia on anti-harassment policies and tools to support victims. Are there any common challenges that you’ve seen tech teams struggle with as they address harassment on their platforms?

Platforms, across the board, struggle to identify growing forms of harm. Harassers are always changing their methods and finding some new and interesting way to hurt other people. It’s important to regularly talk to a variety of people from underrepresented groups who are using your product or technology, in order to understand how forms of harassment are evolving.

When you listen to users, you need to be aware of their relationship to the tool. Often in open source communities or volunteer-led projects, you see a lot of users who feel very committed to a project because they have contributed to it and they are deeply invested in the community. For instance, at Wikimedia, I saw victims who were more willing to forgive or try to empathize or work through the harassment they had faced out of concern that asking the Wikimedia Foundation or community leadership to make changes might rupture the community or hurt the encyclopedia. In these cases, you need to find other marginalized members who have experienced toxicity, have a conversation with them, and make sure you aren’t perpetuating toxicity in order to protect a project.

Another challenge is that some forms of harassment look innocuous at first. For example, imagine you receive the same message from 10 different people over the course of a year, and although you block the users, the messages keep coming. When you file a report, there’s no way to show the messages are related, and the platform has no way to investigate it. In another scenario where you receive a comment from someone that says, “I love your green top with the polka dots,” you might be scared, wondering why or how that person has seen your shirt. But the content moderator isn’t going to see that, all they see is a comment on the victim’s appearance. Even with harassment policy and procedures in place, reporting flows may prevent victims from sharing context or evidence necessary for a content moderator to verify it.

How can tech companies be proactive about preventing harm on their platforms?

Unfortunately, when big tech thinks of preventative care in terms of harassment, they think of technology solutions to it. This can be really problematic because those technology solutions end up being things like AI and AI filters, which aren’t very accurate.

Preventing harassment would entail much more user-friendly privacy settings. The challenge is, most people aren’t necessarily thinking of their safety until it has been compromised. One way to increase safety for users is to make data privacy settings really legible, and easy to find and use. This could also look like sending users a push notification suggesting changes to their privacy settings, keeping location sharing off by default, or even notifying users of ways that harassment can occur on that platform.
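As a rough illustration of what "safe by default" could look like in practice, here is a minimal TypeScript sketch. The `PrivacySettings` shape, field names, and reminder message are hypothetical, not drawn from any specific platform; the point is simply that defaults favor the user's safety and the settings stay visible.

```typescript
// Hypothetical privacy settings for a new account. The field names are
// illustrative; the key idea is that the defaults favor the user's safety.
interface PrivacySettings {
  locationSharing: boolean;                           // off unless the user opts in
  profileVisibility: "everyone" | "contacts" | "private";
  allowMessagesFrom: "everyone" | "contacts";
  remindAboutSettings: boolean;                       // periodically prompt a review
}

// Safe defaults: location off, visibility limited, messages from contacts only.
const defaultPrivacySettings: PrivacySettings = {
  locationSharing: false,
  profileVisibility: "contacts",
  allowMessagesFrom: "contacts",
  remindAboutSettings: true,
};

// A periodic nudge (for example, triggered from a scheduled job) suggesting the
// user review their settings, which keeps privacy legible and easy to find.
function privacyCheckupMessage(settings: PrivacySettings): string | null {
  if (!settings.remindAboutSettings) return null;
  return "Take two minutes to review who can see your profile and location.";
}
```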

In addition to giving people tools to protect themselves, victims may also need proof that they have reported abuse in case things get worse. Right now, if you file a harassment report on Facebook or Twitter, they send you an email, but it would help victims to be able to find all of those reports in one place and in a downloadable format, in case they need those reports to build a legal case at some point.
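One way a platform might support that is sketched below, assuming a simple in-app report record. The `HarassmentReport` type and its fields are hypothetical; the sketch just bundles everything a user has filed into a single downloadable JSON document.

```typescript
// Hypothetical shape of a filed harassment report.
interface HarassmentReport {
  reportId: string;
  filedAt: string;          // ISO timestamp
  reportedAccount: string;
  category: string;         // e.g. "threat", "impersonation", "unwanted contact"
  description: string;
  attachments: string[];    // URLs or file references supplied by the victim
}

// Bundle every report a user has filed into one downloadable document,
// so the full history is available if they later need it as evidence.
function exportReportArchive(userId: string, reports: HarassmentReport[]): string {
  const archive = {
    userId,
    exportedAt: new Date().toISOString(),
    totalReports: reports.length,
    reports,
  };
  return JSON.stringify(archive, null, 2); // serve as a .json download
}
```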

What advice do you have for tech makers, builders, or companies that are just starting to think about or discuss harassment?

Hire Black women and other marginalized people who use your tool. If you are a privileged person, you may not quite understand that someone could experience harassment in a place that you feel is very safe. I think of Zoom, which really could not have anticipated this moment or the popularity of their tool. The CEO said that they had never thought of harassment because Zoom was created as a workplace tool. But we know that harassment happens at work.

When you design a technology, always ask yourself what could possibly go wrong and really map out things, even if they feel absurd to you. Don’t just design for this middle area of how you hope people will use your technology; design for the real world.

Finally, remember that every data point about harassment is a real person’s traumatic story. So even if you have what seems like really low numbers of harassment, it’s always important to remember that these are people experiencing trauma, not numbers.

You can find more of Caroline’s work on her website, and you can follow her journey on Twitter @CarolineSinders.

Exploring the Tech Risk Zones: Surveillance

By Kacie Harold, Omidyar Network 

Matt Mitchell is a hacker and Tech Fellow at The Ford Foundation, working with the BUILD and Technology and Society teams to develop digital security strategy, technical assistance offerings, and safety and security measures for the foundation’s grantee partners. Matt has also worked as the Director of Digital Safety & Privacy at Tactical Tech, and he founded CryptoHarlem, which teaches basic cryptography tools to the predominantly African American community in upper Manhattan.

Matt, why should small and midsize tech companies want to address issues of surveillance and think about data privacy and security for their users?

I recently spoke with founders of a blockchain, cryptocurrency social media startup that values “humans first”. Privacy came up briefly in the conversation. As a small team going through their first round of funding, they are motivated to build quickly, get people to use the product, and then find a way to monetize it. I suggested they create a transparency report and a plain-speak privacy policy, because this would give them a competitive advantage, and it speaks to the motivations of that team. When you are building a product that’s new, existing companies and competitors might not have these things, so focusing on privacy is really easy, low-hanging fruit when it comes to feature development. You can go a long way toward earning the trust of your users and building engagement when people know that using your product isn’t going to compromise their security in the future.

Are there any common surveillance-related problems companies run into when they build new products or features?

When you’re making a product, there’s a temptation to gather as much data as possible because, in the worst case scenario, maybe you’re VC-funded and you’re losing your seed funding. The money you have to play with every month is going down and you’re not really meeting your KPIs, but you do know your users. If you reach a place where you may have to lose some staff, it can be tempting to sell user information or what you know about user behavior.

Monetizing user data usually seems like a good idea at the time. But it always turns out to be something that hurts you, because it hurts your relationship with the users. When your users can’t trust you anymore, they begin seeing you as the lowest part of what you provide. You are no longer delighting the users, and then they lose the reason why they’re there, and it becomes so easy for someone to replace you.

You may be approached by a company that is interested in just a small part of what you do, for instance, something related to user behavior. This is where you should say “no”. You are still empowered to say “no” at that moment. But as soon as you say “yes”, even if it’s just to sell a little bit of information, and only to trusted partners under certain conditions, those criteria start sliding really quickly, especially if you are not the only one making decisions or you have funders or VCs you report to. Once you make that decision, you can’t undo it. You can’t unbuild a surveillance apparatus.

Another common problem is that teams are working on tight timelines, and it can be hard to find the time to make sure they are doing things right; without guidance, they don’t know when they are doing something wrong. Particularly when it comes to surveillance, people don’t have a good mapping of the things that equal surveillance in their industry and in their products. Engineers aren’t thinking they want to add surveillance to something; they just want to build a tool. They don’t realize when the different elements of what they built and the data they are collecting can be used to monitor and harm users.

What can teams do to prevent surveillance issues from creeping up on them?

I think harm reduction on a micro-intervention level is a helpful practice because it’s just adding a few minutes into a workday that is full of loose minutes. When you’re trying to fix a broken app or a broken world it can take years, and you won’t necessarily have any wins. This is why it is important to invest those minutes and prevent these harms.

Everyone on the team needs to be equipped with tools and information to identify and prevent surveillance-related harms. For engineers, (quality assurance), and the debugging team, using basic checklists on a regular basis can help prevent problems and identify moments where the team should slow down and evaluate whether there may be a surveillance issue developing. Product managers and UX should create user personas that include information about how that user could be harmed if your tool were used for surveillance.
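A minimal sketch of what such a persona record might look like, assuming the team keeps personas as structured data, is shown below in TypeScript. The `SurveillanceAwarePersona` type, its fields, and the example are illustrative, not a standard template.

```typescript
// A hypothetical user persona that carries surveillance-harm context alongside
// the usual product details, so harm is considered in every design review.
interface SurveillanceAwarePersona {
  name: string;
  context: string;                 // how and why this person uses the product
  dataCollectedAboutThem: string[];
  whoCouldMisuseIt: string[];      // e.g. abusive ex-partner, employer, state actor
  potentialHarms: string[];        // what misuse of that data could cost them
  checklistQuestions: string[];    // prompts for QA and debugging reviews
}

const exampleOrganizerPersona: SurveillanceAwarePersona = {
  name: "Community organizer",
  context: "Uses the app to coordinate local events and volunteers.",
  dataCollectedAboutThem: ["location history", "contact graph", "message metadata"],
  whoCouldMisuseIt: ["local authorities", "groups hostile to the organizing work"],
  potentialHarms: ["identification of attendees", "targeting of the organizer"],
  checklistQuestions: [
    "Do we need to store location at all for this feature?",
    "Who inside or outside the company can query this data?",
    "How long is it retained, and can the user delete it?",
  ],
};
```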

Finally, give your team an “emergency brake” that anyone can pull anonymously, if they see an emerging harm, or something that violates the values your team or company has agreed upon. Make it clear ahead of time that if the emergency brake is pulled, the team will dedicate a sprint to fixing the issue.

What advice would you give tech builders who are just starting to think about surveillance?

Reading doesn’t seem like the first thing you want to do when you’re starting a company and focused on finding funding, hiring engineers, and building a prototype. But reading doesn’t take long, and the value it delivers in protecting you from liability, enhancing your ability to compete, and building trust with your users pays itself back in dividends.

I recommend reading about Black [people] using technology, because those use cases open up a set of harms that you can apply to almost everything. Two books I like are Dark Matters by Simone Browne, an amazing book on the surveillance of Black folks, and Algorithms of Oppression by Safiya Umoja Noble. When you know better, you can do better.

You can learn more about Matt’s work and watch his talks and security training videos on Medium, or follow him on Twitter @geminiimatt.

Exploring the Tech Risk Zones: Algorithmic Bias

By Kacie Harold, Omidyar Network 

Safiya Noble is an Associate Professor at UCLA who specializes in algorithmic discrimination and the ways in which digital technologies reinforce or compound oppression. She co-directs UCLA’s Center for Critical Internet Inquiry, and her book Algorithms of Oppression: How Search Engines Reinforce Racism has been invaluable for us in understanding how tech is implicated in a variety of civil and human rights issues.

Professor Noble, what advice would you give to technologists who are just starting to think about whether their AI systems might be perpetuating harmful biases?

Read. At the Center for Critical Internet Inquiry, we have a list of the top 15 books at the intersection of racial justice and technology. People can get educated on their own; they don’t need to go back to school. Managers can make these required readings and give people time at work to read.

Normalize these conversations, and give people common books and articles to talk through together. When you are just starting, it’s good to establish common vocabulary and points of reference because it’s difficult to learn when people are talking from different knowledge frames. The Ethical Explorer (Tech Risk Zone) cards are a great resource for this; teams can bring a different question each week to discuss at a brownbag lunch.

Bring in experts. We know that outside of the workplace, broadly in society, we do terribly with conversations about justice, race, gender, sexuality, power, and class. We are so unclear about what we are talking about when we have these conversations. Sometimes it’s also easier to hear these things from someone outside of your team. It is unfair to put the onus of leading these conversations on the only women or people of color on your team.

Get the C-suite connected, and signal that this is a priority. Don’t put it on the lower level managers; there has to be a commitment from the top.

Are there any common roadblocks where people or teams get stuck when talking about AI bias or ways their technology may perpetuate discrimination?

From my experience, it’s often non-programmers on the team, such as UX, who bring these issues forward and recommend solutions. However, on teams, people who do not do the actual coding are often subordinate to those who do.

As a manager you have to build teams that allow the best ideas to rise to the top, prioritizing collaboration and equal power among different kinds of expertise. People with graduate degrees in African American Studies, Ethnic Studies, Gender Studies, and Sociology (people who are deep experts in society) should be on these teams and hired as equals, so they are co-creating as equals and the programmers’ point of view is not always privileged. Establishing this kind of camaraderie helps us to let go of the limits of our work and be more open to improving it.

I think it’s hard for people to grasp the ways the technology they build and use may impact their lives in the future. How do you get people to remove themselves a little bit from the moment of excitement of “Oh, we can do this” to step back and ask, “Should we do this”?

Imagine what it is going to be like when everything you have ever done on the Internet gets run through a series of algorithms that decide your worthiness to participate in certain aspects of society. That is coming, and it is already happening. Banks are already assessing applicants and their creditworthiness based on their social networks. The difference will be that 20 years from now, children will be born into those systems right from the beginning. So if you are born into a caste system or born working class or poor, that is the social network you will inherit. This is frightening. We must acknowledge that the building blocks for that future are being developed today.

Are there any promising developments that you are seeing around mitigating bias and discrimination caused by AI?

I think we are entering a new paradigm of concern about harm. A decade ago we were not in that place, and now we have normalized these conversations and so many people are invested in talking about harm and the dangers (of technology). That in itself is really big to me.

It’s kind of like when a new moral code is introduced to a large enough dimension of society that you can create leverage for a different possibility. One thing we have to do is get a critical mass in the workforce, on all of the teams, who can talk about these issues.

We often think of change as something that happens when one great leader comes along to marshal it. But I don’t think that’s how change happens. I think we should be hopeful that we can make change and that every conversation we are in matters. It can be a product team of five or six people that brings something into the world in a very big way; let’s not underestimate the power of these teams.

You can find Safiya’s book, Algorithms of Oppression: How Search Engines Reinforce Racism, here. Follow her on Twitter @safiyanoble.
