Exploring the Tech Risk Zones: Outsized Power

By Kacie Harold, Omidyar Network 

Chris Hughes is the co-founder and co-chair of the Economic Security Project, a group working to restructure the economy so that it works for everyone. Prior to this, he co-founded Facebook in 2004 and later served as publisher of The New Republic for several years. Chris has worked exclusively on economic issues since 2016, focusing on anti-monopoly and antitrust issues and calling for a guaranteed income and changes to tax policy.

What motivates you on the issues related to ethics and responsibility within technology?

I think any systems builder, whether in technology, finance, or the arts, needs to think about how their work impacts other people and the world. This is an obligation we all have as humans first, whether we end up as business people or anything else. Tech in particular carries an important responsibility, since so many tech companies are out there pioneering and charting new paths for products and services. Each of those paths comes with a different set of ethical questions that tech companies need to develop a practice of asking and answering on a regular basis.

I tend to be an optimist and think that folks in tech are now thinking much more comprehensively and ethically. That said, I don’t think we should overcomplicate thinking about ethics. Just as parents teach their kids at the playground to think about how their play affects other kids, and just as we do the same thing in schools, we have to do the same thing in business and technology. I don’t think that creating the habit of thinking about how our work impacts others is a particularly tall ask. Part of being human is thinking about how we live in community with other people, what other people provide us and what we provide them. If there’s any moment in the past several decades that illustrates that, it’s COVID-19. We can see that we all rely on each other to stay healthy and to create the kind of communities that we have appreciated and want to return to.

I think that the long arc of history teaches us that we’re all relying on each other and the decisions that we make as individuals affect the greater community. This is true in business. It’s true in politics and in organizing, too.

How do monopolies hurt entrepreneurs?

For small business entrepreneurs, the most worrying thing about monopolies is their ability to move into a market and effectively shut that market down by price gouging, copying features or tools, or making hostile acquisitions of companies in that marketplace. From a talent perspective, it is often difficult to compete against monopolies because of their ability to attract and retain talent.

Increases in market concentration lead to decreases in entrepreneurship and innovation. It becomes harder to enter these markets, and this slows down the pace of innovation. Even before the recession, small business startups were at a near historic low, and one of the chief causes of that is monopoly power.

My sense is that we’ve lost a vocabulary and a framework for talking about outsized corporate power in the United States over the past 30 or 40 years. Most folks in tech are wary of these conversations, but they are also concerned about the consolidation of power. So we’re at a transitional moment where folks in tech, like a lot of people elsewhere in the country and even in the world, are rethinking what the role of public policy should be in creating markets that are fair and produce good outcomes for everyone.

I think another issue is that people have begun to see the large tech monopolies as inevitable and unchangeable. And so they may not think as much about how those monopolies impact their lives or impede their work.

That’s what the leaders of the large tech companies want you to believe. So if that’s what you’re thinking, they’ve got you right where they want you. The more that they can convince folks that there is no other way, and that this is the best of all possible worlds, then they’ve won.

I think there are a lot of entrepreneurs out there who are thinking creatively and are skeptical about the role that those large companies are playing. The challenge is less about resisting the tech companies’ claim that monopolies are inevitable, and more about believing that government can be a positive force for good in the world. Specifically, that the Federal Trade Commission and the US Department of Justice can create markets that are more dynamic and fair. We live in a time when cynicism about government runs deep. I think for entrepreneurs, that cynicism is a bigger barrier than the tech companies’ talking points.

If you don’t mind shifting for a moment, I’d like to ask you about something you wrote in an op-ed for The New York Times in 2019 about your experience working at Facebook.

“I’m disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders.”

Given your experience of being in the room as decisions like this get made, is there any advice you would give tech developers and teams to identify the key moments where they need to stop and think about what they are creating?

I can only speak for myself. In the early days of Facebook, it was very hard for me to imagine what success at scale might look like. These were products for college kids in the United States that were largely meant to be fun and to create connection. We knew it was more than just fun, but the backdrop was that it was a lighthearted, recreational project that we hoped would bring people closer together. That is how we would have spoken about it at the time. So for me it was very hard to imagine what this thing would look like when over a billion people were using it, for who knows how many hours a day, and anyone could have access. That difficulty was real, but it isn’t an excuse, because we knew Facebook was a very sticky, very popular product very early on. And that’s why I wrote what I wrote: we should have thought much more seriously about what it could turn into, even at the outset.

I’m not sure it would have changed some of those initial decisions I made at the time, but it would have created a framework of accountability that we could refer back to as a team and individually. And I think it’s only in the past year or two that Facebook has really come to understand its responsibility, if it really has. My advice to (tech) teams is: even if you’re working small, think big, and think about what problems could be introduced at scale.

I think when you are in a company that is growing and doing really well, it’s natural to be excited and want to move quickly, but that speed can make it difficult to predict ways that things could go wrong. Do you have any advice for how tech makers can recognize those pivotal moments where they should slow down and consider the impact of what they are creating?

You’re always in the moment, and you don’t have to worry about figuring out if you’re in the moment or not. My advice is that you should always be asking that question. Often it will feel theoretical, but it isn’t. I guess that’s my point with the playground analogy at the beginning. Thinking about how your actions impact other people is a basic part of living in a community with other people.

I realize that interviewing somebody (formerly) from Facebook may be a little counterproductive, because people could say, well, my company is not going to become a Facebook, so I don’t need to worry about this. But I think everybody should be thinking about it much of the time, whether you’re in the CEO suite or the most junior customer service agent.

You can find more of Chris’ thinking on Twitter @chrishughes. The Economic Security Project is a grantee of Omidyar Network.

Exploring the Tech Risk Zones: Bad Actors

By Kacie Harold, Omidyar Network 

Caroline Sinders is a designer and artist focusing on the intersections of artificial intelligence, abuse, and politics in digital conversational spaces. She has worked with the United Nations, Amnesty International, IBM Watson, and the Wikimedia Foundation, and recently published a piece with the support of Omidyar Network and the Mozilla Foundation. Sinders has held fellowships with the Harvard Kennedy School, Google’s PAIR (People and Artificial Intelligence Research group), and the Mozilla Foundation. Her work has been featured at the Tate Exchange in Tate Modern, the Victoria and Albert Museum, MoMA PS1, LABoral, Wired, Slate, Quartz, the Channels Festival, and others. Caroline also has a passion for addressing harassment online, which represents one of the harmful behaviors within the Bad Actors Tech Risk Zone.

Caroline, can you tell us about how design plays an important role in creating safe and inclusive environments online?

I’ve been studying online harassment for nearly seven years. I look at it from the perspective of how technology products and social networks are designed, and how that design can mitigate or amplify harassment. I focus on how the design of a space allows harassment to occur, including both the actions a harasser can take and the affordances a victim has to mitigate the harassment they are receiving.

How can tech companies benefit from protecting their users from harassment?

I always like to remind people that bad business costs money. Additionally, when people face harassment, they tend to engage in self-censorship. The chilling effect of harassment is that people post less content and engage less often. I believe it becomes a human rights issue when, for safety reasons, some people cannot engage freely on a platform but others can. Being able to participate safely in a space is crucial for engaging in free speech. Ultimately, a company will lose money if people stop using or altogether leave its platform; one way to get users to stay is to protect them.

In the last few years, you’ve worked with Bandcamp, Facebook, and Wikipedia on anti-harassment policies and tools to support victims. Are there any common challenges that you’ve seen tech teams struggle with as they address harassment on their platforms?

Platforms, across the board, struggle to identify growing forms of harm. Harassers are always changing their methods and finding new ways to hurt other people. It’s important to regularly talk to a variety of people from underrepresented groups who are using your product or technology in order to understand how forms of harassment are evolving.

When you listen to users, you need to be aware of their relationship to the tool. Often in open source communities or volunteer-led projects, you see a lot of users who feel very committed to a project because they have contributed to it and are deeply invested in the community. For instance, at Wikimedia, I saw victims who were more willing to forgive, empathize, or work through the harassment they had faced, out of concern that asking the Wikimedia Foundation or community leadership to make changes might rupture the community or hurt the encyclopedia. In these cases, you need to find other marginalized members who have experienced toxicity, have a conversation with them, and make sure you aren’t perpetuating toxicity in order to protect a project.

Another challenge is that some forms of harassment look innocuous at first. For example, imagine you receive the same message from 10 different people over the course of a year, and although you block the users, the messages keep coming. When you file a report, there’s no way to show the messages are related, and the platform has no way to investigate it. In another scenario, where you receive a comment from someone that says, “I love your green top with the polka dots,” you might be scared, wondering why or how that person has seen your shirt. But the content moderator isn’t going to see that; all they see is a comment on the victim’s appearance. Even with harassment policies and procedures in place, reporting flows may prevent victims from sharing the context or evidence a content moderator needs to verify the harassment.

How can tech companies be proactive about preventing harm on their platforms?

Unfortunately, when big tech thinks of preventative care in terms of harassment, they think of technology solutions to it. This can be really problematic because those technology solutions end up being things like AI and AI filters, which aren’t very accurate.

Preventing harassment would entail much more user-friendly privacy settings. The challenge is that most people aren’t necessarily thinking about their safety until it has been compromised. One way to increase safety for users is to make data privacy settings really legible and easy to find and use. This could also look like sending users a push notification suggesting changes to their privacy settings, keeping location sharing off by default, or even notifying users of ways that harassment can occur on the platform.
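For illustration, here is a minimal sketch of that “safe defaults plus a gentle nudge” idea in Python. The class, fields, and function are hypothetical names for this example, not drawn from any platform Caroline mentions.

```python
from dataclasses import dataclass

# Hypothetical privacy settings object: every field defaults to the safer,
# less-revealing option, so users opt in to sharing rather than having to
# discover and opt out.
@dataclass
class PrivacySettings:
    location_sharing: bool = False           # off unless the user turns it on
    profile_visible_to_public: bool = False
    allow_messages_from_strangers: bool = False


def suggest_privacy_checkup(settings: PrivacySettings) -> list:
    """Return gentle nudges for any settings the user has loosened."""
    nudges = []
    if settings.location_sharing:
        nudges.append("Location sharing is on. Review who can see your location.")
    if settings.allow_messages_from_strangers:
        nudges.append("Anyone can message you. Consider limiting this to people you follow.")
    return nudges  # a product might surface these as an occasional push notification


print(suggest_privacy_checkup(PrivacySettings(location_sharing=True)))
```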

In addition to giving people tools to protect themselves, victims may also need proof that they have reported abuse in case things get worse. Right now, if you file a harassment report on Facebook or Twitter, they send you an email, but it would help victims to be able to find all of those reports in one place and in a downloadable format, in case they need them to build a legal case at some point.
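A rough sketch of what “all of those reports in one place, in a downloadable format” could look like at the data level; the record structure and field names are assumptions for illustration, not any platform’s actual export API.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record of a harassment report a user has filed.
@dataclass
class AbuseReport:
    report_id: str
    filed_at: str               # ISO 8601 timestamp
    reported_account: str
    category: str               # e.g. "targeted harassment"
    description: str
    outcome: str = "pending"    # updated once the platform responds


def export_reports(reports):
    """Bundle a user's reports into one downloadable JSON document,
    so they can keep dated evidence if they ever need to build a legal case."""
    return json.dumps([asdict(r) for r in reports], indent=2)


if __name__ == "__main__":
    report = AbuseReport(
        report_id="r-001",
        filed_at=datetime.now(timezone.utc).isoformat(),
        reported_account="@example_account",
        category="targeted harassment",
        description="Repeated unwanted messages that continued after blocking.",
    )
    print(export_reports([report]))
```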

What advice do you have for tech makers, builders, or companies that are just starting to think about or discuss harassment?

Hire Black women and other marginalized people who use your tool. If you are a privileged person, you may not quite understand that someone could experience harassment in a place that you feel is very safe. I think of Zoom, which really could not have anticipated this moment or the popularity of its tool. The CEO said that they had never thought about harassment because Zoom was created as a workplace tool. But we know that harassment happens at work.

When you design a technology, always ask yourself what could possibly go wrong and really map things out, even if they feel absurd to you. Don’t just design for the middle area of how you hope people will use your technology; design for the real world.

Finally, remember that every data point about harassment is a real person’s traumatic story. So even if you have what seems like really low numbers of harassment, it’s always important to remember that these are people experiencing trauma, not numbers.

You can find more of Caroline’s work on her website, and you can follow her journey on Twitter @CarolineSinders.

Exploring the Tech Risk Zones: Surveillance

By Kacie Harold, Omidyar Network 

Matt Mitchell is a hacker and Tech Fellow at The Ford Foundation, working with the BUILD and Technology and Society teams to develop digital security strategy, technical assistance offerings, and safety and security measures for the foundation’s grantee partners. Matt has also worked as the Director of Digital Safety & Privacy at Tactical Tech, and he founded CryptoHarlem, which teaches basic cryptography tools to the predominantly African American community in upper Manhattan.

Matt, why should small and midsize tech companies want to address issues of surveillance and think about data privacy and security for their users?

I recently spoke with the founders of a blockchain cryptocurrency social media startup that values “humans first”. Privacy came up briefly in the conversation. As a small team going through their first round of funding, they are motivated to build quickly, get people to use the product, and then find a way to monetize it. I suggested they create a transparency report and a plain-speak privacy policy, because this would give them a competitive advantage, and it speaks to the motivations of that team. When you are building a new product, existing companies and competitors might not have these things, so focusing on privacy is easy, low-hanging fruit when it comes to feature development. You can go a long way toward earning the trust of your users and building engagement when people know that using your product isn’t going to compromise their security in the future.

Are there any common surveillance-related problems companies run into when they build new products or features?

When you’re making a product, there’s a temptation to gather as much data as possible because, in the worst case scenario, maybe you’re VC-funded and you’re losing your seed funding. The money you have to play with every month is going down and you’re not really meeting your KPIs, but you do know your users. If you reach a place where you may have to lose some staff, it can be tempting to sell user information or what you know about user behavior.

Monetizing user data usually seems like a good idea at the time. But it always turns out to be something that hurts you, because it hurts your relationship with the users. When your users can’t trust you anymore, they begin seeing you as the lowest part of what you provide. You are no longer delighting the users, and then they lose the reason why they’re there, and it becomes so easy for someone to replace you.

You may be approached by a company that is interested in just a small part of what you do, for instance, something related to user behavior. This is where you should say “no”. You are still empowered to say “no” at that moment. But as soon as you say “yes”, even if it’s just to sell a little bit of information, and only to trusted partners under certain conditions, those criteria start sliding really quickly, especially if you are not the only one making decisions or you have funders or VCs you report to. Once you make that decision, you can’t undo it. You can’t unbuild a surveillance apparatus.

Another common problem is that teams are working on tight timelines, and it can be hard to find the time to make sure they are doing things right; without guidance, they don’t know when they are doing something wrong. Particularly when it comes to surveillance, people don’t have a good mapping of the things that equal surveillance in their industry and in their products. Engineers aren’t thinking that they want to add surveillance to something; they just want to build a tool. They don’t realize how the different elements of what they built, and the data they are collecting, can be used to monitor and harm users.

What can teams do to prevent surveillance issues from creeping up on them?

I think harm reduction on a micro-intervention level is a helpful practice because it’s just adding a few minutes into a workday that is full of loose minutes. When you’re trying to fix a broken app or a broken world, it can take years, and you won’t necessarily have any wins. This is why it is important to invest those minutes and prevent these harms.

Everyone on the team needs to be equipped with tools and information to identify and prevent surveillance-related harms. For engineers, (quality assurance), and the debugging team, using basic checklists on a regular basis can help prevent problems and identify moments where the team should slow down and evaluate whether a surveillance issue may be developing. Product managers and UX should create user personas that include information about how that user could be harmed if your tool were used for surveillance.
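One lightweight way to make those checklists and harm-aware personas routine is to keep them as shared, reviewable data. The sketch below is an assumption about how a team might encode them; the questions, names, and review rule are illustrative, not a prescribed process.

```python
from dataclasses import dataclass, field

# Hypothetical pre-release surveillance checklist, answered during QA or code
# review; any "yes" signals the team should slow down and evaluate further.
SURVEILLANCE_CHECKLIST = [
    "Does this feature collect location, contacts, or other identifying data?",
    "Is any collected data retained longer than the feature strictly needs?",
    "Could the data be joined with other sources to track an individual?",
    "Could a government, employer, or abusive partner use this data against a user?",
]


# A user persona extended with the surveillance harms that user could face,
# as suggested for product managers and UX above.
@dataclass
class Persona:
    name: str
    context: str
    surveillance_harms: list = field(default_factory=list)


organizer = Persona(
    name="Community organizer",
    context="Uses the app to coordinate local events.",
    surveillance_harms=[
        "Location history could reveal attendance at protests.",
        "Contact graph could expose other members of the group.",
    ],
)


def needs_review(answers: dict) -> bool:
    """Flag the release for a deeper privacy review if any checklist answer is 'yes'."""
    return any(answers.values())


print(needs_review({question: False for question in SURVEILLANCE_CHECKLIST}))
```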

Finally, give your team an “emergency brake” that anyone can pull anonymously, if they see an emerging harm, or something that violates the values your team or company has agreed upon. Make it clear ahead of time that if the emergency brake is pulled, the team will dedicate a sprint to fixing the issue.

What advice would you give tech builders who are just starting to think about surveillance?

Reading doesn’t seem like the first thing you want to do when you’re starting a company and focused on finding funding, hiring engineers, and building a prototype. But reading doesn’t take long, and the value it delivers in protecting you from liability, enhancing your ability to compete, and building trust with your users pays for itself in dividends.

I recommend reading about Black [people] using technology, because those use cases open up a set of harms that you can apply to almost everything. Two books I like are Dark Matters by Simone Browne, an amazing book on the surveillance of Black folks, and Algorithms of Oppression by Safiya Umoja Noble. When you know better, you can do better.

You can learn more about Matt’s work and watch his talks and security training videos on Medium, or follow him on Twitter @geminiimatt.

Exploring the Tech Risk Zones: Algorithmic Bias

By Kacie Harold, Omidyar Network 

Safiya Noble is an Associate Professor at UCLA who specializes in algorithmic discrimination and the ways in which digital technologies reinforce or compound oppression. She co-directs UCLA’s Center for Critical Internet Inquiry, and her book Algorithms of Oppression: How Search Engines Reinforce Racism has been invaluable for us in understanding how tech is implicated in a variety of civil and human rights issues.

Professor Noble, what advice would you give to technologists who are just starting to think about whether their AI systems might be perpetuating harmful biases?

Read. At the Center for Critical Internet Inquiry, we have a list of the top 15 books at the intersection of racial justice and technology. People can get educated on their own; they don’t need to go back to school. Managers can make these required readings and give people time at work to read.

Normalize these conversations, and give people common books and articles to talk through together. When you are just starting, it’s good to establish common vocabulary and points of reference because it’s difficult to learn when people are talking from different knowledge frames. The Ethical Explorer (Tech Risk Zone) cards are a great resource for this; teams can bring a different question each week to discuss at a brownbag lunch.

Bring in experts. We know that outside of the workplace, broadly in society, we do terribly with conversations about justice, race, gender, sexuality, power, and class. We are so unclear about what we are talking about when we have these conversations. Sometimes it’s also easier to hear these things from someone outside of your team. It is unfair to put the onus of leading these conversations on the only women or people of color on your team.

Get the C-suite connected, and signal that this is a priority. Don’t put it on the lower level managers; there has to be a commitment from the top.

Are there any common roadblocks where people or teams get stuck when talking about AI bias or ways their technology may perpetuate discrimination?

From my experience, it’s often the non-programmers on the team, such as UX, who bring these issues forward and recommend solutions. However, on teams, the people who do not do the actual coding are often subordinate to those who do.

As a manager, you have to build teams that allow the best ideas to rise to the top, prioritizing collaboration and equal power among different kinds of expertise. People with graduate degrees in African American Studies, Ethnic Studies, Gender Studies, and Sociology, people who are deep experts in society, should be on these teams and hired as equals, so they are co-creating as equals and the programmers’ point of view is not always privileged. Establishing this kind of camaraderie helps us let go of the limits of our work and be more open to improving it.

I think it’s hard for people to grasp the ways the technology they build and use may impact their lives in the future. How do you get people to remove themselves a little bit from the moment of excitement of “Oh, we can do this” to step back and ask, “Should we do this”?

Imagine what it is going to be like when everything you have ever done on the Internet gets run through a series of algorithms that decide your worthiness to participate in certain aspects of society. That is coming, and it is already happening. Banks are already assessing applicants and their creditworthiness based on their social networks. The difference is that 20 years from now, children will be born into those systems right from the beginning. So if you are born into a caste system or born working class or poor, that is the social network you will inherit. This is frightening. We must acknowledge that the building blocks for that future are being developed today.

Are there any promising developments that you are seeing around mitigating bias and discrimination caused by AI?

I think we are entering a new paradigm of concern about harm. A decade ago we were not in that place, and now we have normalized these conversations, and so many people are invested in talking about harm and the dangers (of technology). That in itself is really big to me.

It’s kind of like when a new moral code is introduced to a large enough segment of society that you can create leverage for a different possibility. One thing we have to do is get a critical mass in the workforce, on all of the teams, who can talk about these issues.

We often think of change as something that happens when one great leader comes along to marshal it. But I don’t think that’s how change happens. I think we should be hopeful that we can make change and that every conversation we are in matters. It can be a product team of five or six people that brings something into the world in a very big way; let’s not underestimate the power of these teams.

You can find Safiya’s book, Algorithms of Oppression: How Search Engines Reinforce Racism, here. Follow her on Twitter @safiyanoble.

Additional Resources for Ethical Explorers

By Kacie Harold, Omidyar Network

We hope you’ve enjoyed playing with the Ethical Explorer Pack. If you’ve gotten excited about moving from conversation to action in building safer, healthier, more responsible technology, here are some great resources.

If you’re still making the case:

Ledger of Harms: A broad assortment of compelling and current studies and articles compiled by the Center for Humane Technology that show the clear effects of harmful products and features, documented by relatively unbiased researchers and writers. This may be a helpful resource for Ethical Explorers who need data and stories to make a case for prioritizing responsible design.

Parable of the Polygons: A quick interactive game by Vi Hart and Nicky Case that shows how our history can lead to both biased data and a biased present. This may be useful for Ethical Explorers leading conversations about bias with their teams.

Ethical Litmus Test: A deck of 66 short questions and activities by Debias AI to help build up ethics capacity and vocabulary, as individuals and as part of a team. The website includes links to reading groups and toolkits focused on bias in machine learning.

Classes you can take:

Data Science Ethics: Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making. Course by University of Michigan on edX. Twelve hours over four weeks, free.

Ethics, Technology and Engineering: Focuses on the concrete moral problems that engineers encounter in their professional practice. Course offered by Eindhoven University of Technology on Coursera. An 18-hour course, offering a certificate.

FutureLearn Philosophy of Technology: Learn about the impact of technology on society. Explore the philosophy of technology and mediation theory, with a focus on design. Course by the University of Twente, hosted on FutureLearn. Three weeks, four hours per week, free.

Ethics of Technological Disruption: Popular class from Stanford University with guest speakers from top tech companies. Six two-hour video lectures available on YouTube.

HmntyCntrd: Interactive, cohort-based online course and community for UX professionals who want to learn how to design and advocate for equitable and inclusive user experiences. Created by UX Researcher and Humanity in Tech Advocate Vivianne Castillo.

Responsible design practices to try:

Design Ethically Toolkit: A toolkit for design strategists and product designers with several 30-minute to one-hour small-group exercises to help teams evaluate the ethical implications of product ideas, think about the consequences of unintended user behaviors, and create checklists of ethical issues to monitor after shipping a product. Created by Kat Zhou, Product Designer at Spotify.

Judgment Call: Team-based game for cultivating stakeholder empathy through scenario-imagining. Game participants write product reviews from the perspective of a particular stakeholder, describing what kind of impact and harms the technology could produce from their point of view. Created by Microsoft’s Office of Responsible AI.

Harms Modeling: Framework for product teams, grounded in four core pillars that examine how people’s lives can be negatively impacted by technology: injuries, denial of consequential services, infringement on human rights, and erosion of democratic & societal structures. Similar to Security Threat Modeling. Created by Microsoft’s Office of Responsible AI.

Community Jury: An adaptation of the citizen jury, a technique in which diverse stakeholders impacted by a technology are given an opportunity to learn about a project, deliberate together, and give feedback on use cases and product design. This technique helps project teams understand the perceptions and concerns of impacted stakeholders. Created by Microsoft’s Office of Responsible AI.

Consequence Scanning: Lightweight agile practice to be used during vision, roadmap planning and iteration stages of product or feature development; focused on identifying potential positive and negative consequences of a new technology. Developed by Doteveryone.

Tools to help you manage Tech Risk Zones:

Addiction:
Calm Design Quiz: Set of scorecards to evaluate whether UX is optimized for healthy user engagement. Created by Amber Case, author of Calm Technology: Principles and Patterns for Non-Intrusive Design.

Algorithmic Bias
AI Blindspot: A discovery process for spotting unconscious biases and structural inequalities in AI systems from MIT Media Lab and Harvard University’s Berkman Klein Center for Internet and Society Assembly program. Includes resources on considering ethics when determining AI system performance metrics, security risks, and setting goals for your AI system.

People + AI Guidebook: Developed using data and insights from Google product teams, experts, and academics to help UX professionals and product managers follow a human-centered approach to AI. Includes guidance and worksheets on six topics, including trust and explainability, designing feedback mechanisms, and identifying errors and failures.

Data Control:
Data Ethics Canvas: Open Data Institute framework to identify and manage data ethics issues for anyone who collects, shares or uses data.

Exclusion:
Mismatch: Collection of inclusive design resources for making products accessible to users with disabilities. Includes links to accessibility checklists and tools, classes, and stories about inclusivity driving design innovation. Created by Kat Holmes, inclusive UX and Product Design expert and author of Mismatch: How Inclusion Shapes Design.

Universal Barriers: Framework for evaluating where an existing or changing service might exclude users. Created by the United Kingdom’s Government Digital Service.

Surveillance:
Digital Security and Privacy Protection UX Checklist: Checklist with suggestions to promote privacy when designing and developing tools for targeted communities.

Discuss your values with funders and partners:

Conscious Scaling: Framework for dialogue between founders and investors or the board, focused on identifying and mitigating long-term risks associated with a business model or technology’s impact on society, the environment, and all stakeholders. Created by Atomico, where Omidyar Network’s Sarah Drinkwater is an angel program participant.

Ethical Intake Framework: Open source framework to assess mission and values alignment when evaluating potential partners, funders, investees or projects. Created by Partners & Partners.

 

Want to tell us what you think of Ethical Explorer, and how you used it? Email us [email protected].

And don’t forget to show your support for responsible tech by using the #EthicalExplorer hashtag!

Introducing the Ethical Explorer Pack

By Sarah Drinkwater, Director, Beneficial Technology, Omidyar Network

In the last few years, we’ve been excited and inspired by the rise of responsible tech workers asking hard questions, course correcting, and planning ahead. From designers and engineers to product managers and founders, there’s a growing movement of passionate people committed to ensuring technology protects and benefits its users.

The importance of that movement is apparent now more than ever as we grapple with COVID-19. Tech is helping us navigate the unprecedented challenges the pandemic presents to our everyday lives—from connecting with loved ones to ordering groceries—but it is also shining a spotlight on concerns about privacy, disinformation, and more.

These are complex issues, and sometimes we don’t know where to start. Not all companies or teams have cultures that welcome fair, thoughtful conversations about business or product decisions. And even at workplaces that encourage this kind of nuance, leading those efforts can be intimidating and feel like uncharted territory.

That’s why we built Ethical Explorer.

This toolkit is a direct response to the need we’ve seen and heard from makers and builders, as well as their collaborators, for a digestible, actionable resource to steward ethical tech. Created specifically for workers at startups and small to mid-size tech companies, Ethical Explorer is shaped by input from trusted partners, the community we’re serving, and dozens of subject matter experts. Just a few of the experts who provided input on topics ranging from privacy to AI include Caroline Sinders, founder of Convocation Design + Research; Amy Lazarus, CEO and founder of InclusionVentures; and Gisela Pérez de Acha, a human rights lawyer and data journalist.

The end result is a toolkit for pioneers who want to create a future where tech products are built with responsibility at the core.

So, what’s included?

Ethical Explorer is a DIY kit designed to spark group dialogue, identify early warning signs, build internal support, and brainstorm solutions to avoid future risks.

  • Field Guide: An overview of Ethical Explorer, suggested activities for both individuals and groups, and ideas for gaining buy-in within organizations.
  • Tech Risk Zones: Eight Tech Risk Zone cards—plus one blank card to tailor to a specific need or scenario—to help provoke thoughtful conversations around risk, responsibility, and impact in key areas.
  • Stickers: Explorers can outfit their gear with unique stickers to show off their passion for safer, fairer, and healthier tech that’s also more ethical and inclusive.

At Omidyar Network, we have a human-centered vision of tech that underpins greater individual and community empowerment, social opportunity, and safety. Our work investing in and creating partnerships with like minds has led us to the toolkit’s guiding principles: to support human values, create a culture of questioning, and ignite change through dialogue.

Building responsibility into core business and product decisions takes continual conversation and action. We encourage you, as a user of this tool, to make it your own; please evolve, experiment, and play.

We want to say a special thanks to those who made this project possible. We couldn’t have built it without the wealth of insight from the many passionate experts and organizations that contributed, especially Institute for the Future (IFTF) and Artefact. Your knowledge and creativity were essential.

Now let’s start exploring.

 

Want to tell us what you think of Ethical Explorer, and how you used it? Have another Tech Risk Zone you think we should discuss as a community? Email us [email protected].

And don’t forget to show your support for responsible tech by using the #EthicalExplorer hashtag!
