Our Code of Ethics

The Role of a Code

In reflecting on the role of digital technology in society, we’ve sought ideas and language to help us understand the ethics of our work. Our Code reflects research, presentations, discussions, and debates we’ve had as a company and is an ongoing attempt to hold ourselves to ethical standards in the same way we hold ourselves to the standards of our craft.

We reference the Code when considering potential clients and projects and when dealing with shifts in project direction. Rather than prescribing decisions, our Code of Ethics gives us a framework for considering our work when it falls in ethical gray areas: neither obviously right nor obviously wrong. And, while it draws on universal ideas that apply to other fields, it doesn’t contain universal guidelines for the entire Web. We’ve tried to write it to be specific to our work as a digital agency.

In sharing it publicly, we hope to draw attention to ethical issues and encourage others to reflect on how these issues intersect with their own work. Further down, we share links to resources that we’ve found helpful. As always, please share your thoughts with us if you have recommendations or questions. Thanks for reading.

Core Ethical Principles

Our ethical principles are starting points for thinking about what it means to do good work. One way to think about goodness is the potential for flourishing, a concern that’s central to Viget’s identity (from the Latin vigeō, “to thrive”). What does it mean to be honest, trustworthy, empathetic, helpful, and respectful of others in what we build and with whom we work? How does what we build promote or hinder human flourishing? These principles help us think more clearly about what makes work “good.”

  1. Technology always has an ethical dimension. Technology isn’t amoral, but reflects the values of all those involved in making it, from investors and entrepreneurs to product owners, designers, and engineers. Product decisions are ethical decisions, no matter how small or large their consequences may be.
  2. We bear responsibility for the consequences of our work. If we help design or build something, we share in the responsibility for its existence and its consequences. Yet, as consultants, we may not have the final say over how a product is built, designed, or used. So it is our responsibility to think critically about the potential second- and third-order consequences of our work and discuss concerns with our clients.
  3. What we build matters as much as who we build it for. When considering ethical issues, it is important to account for both the domain and the work itself. There are some domains we choose not to work in (e.g., the tobacco or animal testing industries). Yet even in areas that we are excited to work in — wildlife conservation, for example — we bring the same critical thought to our projects.
  4. Accepted practice is not inherently right. Just because something may be standard practice — indiscriminate data collection or running Facebook ads, for example — does not mean it is good.
  5. An ethical stance supports our business. Doing the right thing doesn’t require justification. But we believe that ethically minded work will support our business in a number of ways: helping us and our clients anticipate and adapt to regulatory standards; attracting the kinds of inspiring employees and clients we want to work with; addressing ethical ambivalence in our industry; and making things that people want to use. We see ethics as a complement to innovation, not a stifling of it.
  6. Ethics is a conversation. A culture of healthy debate and diverse thought helps us avoid ethical blind spots and anticipate the consequences of our work. Rather than a set of static dogmas, ethics is a mode of inquiry, a way of thinking about and discussing our work based on the values we hold.

Topics and Considerations

Ethics can get complicated quickly, so we’ve defined a set of topics to help us think more clearly about relevant issues. The considerations for each topic are questions meant to help teams think practically about how it might apply to their work; the practices are some of the ways we can anticipate and deal with issues as they arise. We’ve also included example scenarios to illustrate each topic.

1. Universal Issues

Certain issues extend across the entirety of our work as a digital agency, regardless of project type. These issues require ongoing critical discussion and action in order to avoid the biases and errors of our industry. In this category, we group concerns like tech monoculture, the environmental impact of technology, and tech solutionism: large-scale issues that no single organization is responsible for, but that we all have a responsibility to be cognizant of.

Considerations:

  • What does diversity look like on this project?

  • What might our blind spots be when working in this space?

  • What is the appropriate scope of a digital solution for this problem?

  • What does it look like to involve real-world users in coming up with our solutions?

  • How might a bad actor misuse this product or website?

Practices:

  • Use critique as a way to uncover ethical blind spots. While critique serves as a way to hone our technique and improve our project work, it can also show us implications of our work that we may not have been aware of.

  • Help clients anticipate harmful usage of their product. While the consequences of abuse by bad actors often grow with the scale of the technology, encouraging clients to think critically about what could go wrong helps ensure that safeguards are designed into products from the start.

  • Assemble teams with diverse backgrounds, perspectives, educations, and experiences. To seek diversity is to continually acknowledge that none of us has a monopoly on expertise or wisdom.

  • Seek to co-design with clients and users when possible. While empathy is necessary, we can go beyond imagining others’ perspectives to actually inviting them into the work itself.

Example scenarios:

  • A healthcare application that allows patients to rate their doctors. What would a meaningful rating system look like? How might the system potentially promote bias?

  • An app that allows community members to share local history with one another through user-written stories. How might this content be moderated? What content governance patterns could prevent people from psychologically harming one another?

  • An app for therapists to use to help people with mental health issues. What aspects of the therapist-client interaction and mental health management could be effectively addressed with an app? What aspects might an app not be able to address? How would these limits be defined?

2. Client Values, Mission, and Purpose

We are thoughtful about the types of clients we engage with and, through that work, promote. Above all, we strive to work on projects and with clients we find interesting, meaningful, and rewarding. What that looks like may be different from person to person and could be related to the domain, mission, or type of work that a project involves.

The kinds of projects we avoid:

  • Are responsible for environmental damage or hinder conservation efforts

  • Undermine human psychological or physical well-being

  • Seek to subvert social trust in civic institutions like science, medicine, law, or journalism

  • Create or spread disinformation or propaganda

  • Promote human inequality or discrimination

Practices:

  • Communicate proactively. We encourage folks who work at Viget to communicate with their immediate manager about any types of clients or domains they would prefer not to work with because of personal beliefs.

  • Empower teams. We believe that teams should be able to raise concerns about a client they’re working with, regardless of the stage of the project.

  • Promote healthy conversation. We recognize that very few clients or projects fall cleanly into “good” or “bad” categories and encourage teams to pursue respectful discourse with teammates and leadership to resolve issues.

3. Collection, Use, and Security of Personal Data

At the moment, people have limited means for maintaining ownership of their own data, disallowing the monetization of their data, and avoiding online targeting and tracking. With the introduction of GDPR in Europe and CCPA in California, however, standards for personal data and data subject rights are changing in favor of web users. We seek to help clients find ways to collect and use data transparently and with consent; consider how to responsibly steward personal data; and future-proof their practices to prepare for the changing landscape of data subject rights.

Considerations:

  • What personal data are we collecting over the course of the project or in the software itself and why?

  • How might we create transparency for users around the collection and use of their data?

  • What practices have we enacted on this project to protect personal data in our care?

  • What guidance can we provide to clients around enabling users to access their data, have it erased, and opt out of collection?

Practices:

  • Prioritize website experience data. Collect data primarily for the purpose of understanding and improving the website experience. When possible, avoid collecting data that could also be used for assembling cross-site user profiles for targeted advertising (e.g., Google Signals and social network scripts).

  • Only collect personal data for a specific and identified purpose. This might include data that is needed to provide utility for the user, data that is needed to allow us to conduct meaningful research, or data that the client needs to identify meaningful patterns of use.

  • Clearly inform users what data is being collected, how it will be used, how long it will be stored, and what their options are for opting out. Where possible, ask for consent prior to collecting any data.

  • Avoid burying privacy policy details.

  • Adhere to Viget’s data security standards, e.g., by setting a date for when PII should be deleted and setting a calendar reminder to do so.

  • Consider using privacy-oriented analytics tracking tools if they meet project requirements.

  • Strive to develop systems, or integrate with third-party systems, that allow data subjects to access and download the information that has been collected about them, have that data erased or corrected, and restrict what data is collected.

  • Define data collection default settings according to users’ best interests, and make them obvious and easy to change.

  • Be especially judicious when collecting personal data that is covered under “protected classes” in the United States. Only use data for specified, explicit, and legitimate purposes to which the user has consented or of which they have been informed. Consider what bad actors might do if given the option to target or exclude individuals based on these data points.

  • Follow Viget’s data privacy standards when collecting data about research participants, e.g., de-identifying personally identifiable survey data while preserving people’s responses. A sketch of one approach follows below.
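
To make the de-identification and hashing practices above more concrete, here is a minimal sketch in Python. It assumes a keyed hash with a per-project secret; the field names, the PROJECT_HASH_KEY environment variable, and the helper functions are hypothetical illustrations, not part of our standard tooling. The idea is to replace direct identifiers with stable tokens so internal reports can still count and group unique users while raw emails and names stay out of shared files.

    # Illustrative sketch: replace direct identifiers with stable tokens
    # before data reaches internal reports or shared research files.
    import hashlib
    import hmac
    import os

    # A per-project secret keeps tokens from being matched against hashes of the
    # same email computed elsewhere. The environment variable name is hypothetical.
    PROJECT_HASH_KEY = os.environ.get("PROJECT_HASH_KEY", "change-me").encode("utf-8")

    def pseudonymize(value: str) -> str:
        """Return a stable token for a piece of PII (a keyed hash, not encryption)."""
        normalized = value.strip().lower().encode("utf-8")
        return hmac.new(PROJECT_HASH_KEY, normalized, hashlib.sha256).hexdigest()[:16]

    def de_identify(record: dict) -> dict:
        """Swap direct identifiers for a token and drop the raw fields."""
        cleaned = dict(record)
        cleaned["respondent_token"] = pseudonymize(record["email"])
        for field in ("email", "full_name", "location"):
            cleaned.pop(field, None)
        return cleaned

    row = {
        "email": "pat@example.com",
        "full_name": "Pat Example",
        "location": "Denver, CO",
        "response": "The checkout flow was confusing.",
    }
    print(de_identify(row))  # the survey response survives; the identifiers do not

Note that this is pseudonymization rather than anonymization: anyone who holds the project secret (or can guess likely emails) could re-identify people, so the secret and any raw identifiers still need the deletion dates and access limits described above.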

Example scenarios:

  • A newspaper subscription app that requires email and password. Where should the email data be stored? How should we advise clients on siloing data and keeping it out of analytics tools like Google Analytics? Would hashing the email for internal reporting be appropriate and fulfill reporting and user privacy requirements?

  • A college that has a high percentage of students from the European Union and California. How should we advise the college’s marketing department on whether or not to use Facebook marketing pixels on their site? How can we fulfill the marketing objectives and address GDPR or CCPA compliance requirements?

  • An online book retailer has asked us to interview its users. We have personal data, including the user’s full name, email address, location, and previous order history (which might reveal the user’s race, gender, or political preference). Where should the data be stored? How long do we need to store the data? Who should have access to the data, and should it be hashed and scrubbed of PII before being shared with other researchers or analysts? Should the client see which user said what about their digital product, or should those responses be anonymized?


4. Success Metrics

Our goal on any project is to help our clients to be successful, whether that means increasing profitability or donations, improving workflows, standardizing processes — whatever shape their goals take. The ways in which we measure success on projects matter because these standards inevitably shape what we put into the world. Often, this is a positive influence: designing a focused reading experience might lead to people staying on a website longer. However, approached uncritically, our clients’ success metrics — or even metrics we propose — can perpetuate harmful trends in digital behavior manipulation.

Considerations:

  • How is this website or product seeking to influence people’s behavior?

  • What might be the long-term effects of continued usage of what we are designing?

  • How might what we’re making support meaningful offline interactions?

  • What kind of labor might our product create, whether for clients’ employees or customers?

  • Are there ways in which our product can avoid creating tedious, rote work for its users, instead supporting people’s creative capabilities?


Practices:

  • Adopt a critical perspective on conventional success metrics like time on site and user retention. Consider how these might not only distract from issues with the design and content of a website, but also perpetuate patterns of web usage that contribute to technology addiction or diminished social relationships. Many web apps measure success by how well they keep people looking at a screen in their ecosystem, sometimes to the detriment of in-person connections.

  • Work with clients to define metrics that consider people’s time and mental health. What if a shorter time spent on a website provides more perceived value to users?

  • Consider the ways in which the product creates work for others and what the nature of this work might be. Much of what we design and build is intended for everyday use, whether in healthcare, real estate, or other industries. How will what we make influence the way others do their work and how that work might be tracked? Are there ways the product can improve the work or, if tedious yet unavoidable, minimize it?

  • Reflect on the ways a product influences the behavior of its users. All design seeks to encourage certain behaviors and discourage others. And, as machine learning becomes more accessible and available, this manipulation will increasingly be performed without human oversight, requiring an additional level of critical consideration to ensure the influence of our work corresponds with our and our clients’ values.


Example scenarios:

  • An app that helps people schedule and plan in-person visits with friends. How might the app be designed in order to remain peripheral to these visits? What kind(s) of metrics might be used to determine if the app is succeeding?

  • A healthcare app that allows nurses to document interactions with patients. If nurses are using the app for multiple patients every day, how might it alleviate the cumulative documentation burden? Are there ways the app might be able to learn from nurses’ usage to adapt and anticipate their documentation? Are there process changes we might suggest that meet regulatory requirements while easing nurses’ workflows?

  • A wayfinding app for urban environments. What if the app allowed users to identify personal fitness goals, prioritizing walking paths and bike routes over other types of transportation?

5. Algorithmic and Machine Learning-based Solutions

The stakes of product strategy have risen with the increasing availability of algorithmic solutions. Both successes and mistakes are magnified. Algorithms, machine learning, and any use of “big data” provide opportunities to create powerful personalized experiences. However, without adequate oversight and thoughtful implementation, algorithmically-driven products can contribute to biased analyses and reduce users’ visibility into assumptions that technology is making about or for them.

Considerations:

  • What data is being used to create the algorithmic or machine learning-based solution? Where does it come from and what groups might not be captured in it?

  • How might we audit or validate the output from the algorithm for different groups of people?

  • How might we surface the assumptions made by the algorithm to the users of the product? How might users correct incorrect assumptions about them?

  • What real-world consequences might users face based on the algorithm’s output?

Practices:

  • Reduce bias in training data. If a technology relies on large data sets, consider what historical gaps or biases likely exist in the data and how they may introduce bias into the technology.

  • Consider how to audit the algorithm and validate that it is behaving as intended. Test solutions on “edge” cases early and often. A sketch of a simple group-level audit follows this list.

  • Consider the consequences that an incorrect assumption or flawed model might have for people who use the product. Machine learning is now used for everything from granting job interviews to qualifying people for bank loans. How might negative, unintended consequences be minimized?

  • Show the analysis. Where appropriate, consider how you might surface to users the inferences the algorithm is making, and what means of recourse exist to correct inaccurate assumptions.

  • Take special care with protected information. Consider the consequences of making assumptions about protected classes within the United States such as age, race, disability, health status, sexual orientation, or religion and how bad actors might use this information to target or exclude people.
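
As one illustration of what auditing an algorithm’s output for different groups might look like, here is a minimal sketch in Python. The record structure, field names, and the four-fifths ratio threshold are assumptions made for the example, not a prescribed standard; the point is simply to compare the rate of positive decisions across groups and flag any group that falls well below the best-served one.

    # Illustrative sketch: compare positive-decision rates across groups.
    from collections import defaultdict

    def selection_rates(records, group_field="group", decision_field="approved"):
        """Share of positive decisions per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for record in records:
            totals[record[group_field]] += 1
            positives[record[group_field]] += int(bool(record[decision_field]))
        return {group: positives[group] / totals[group] for group in totals}

    def flag_disparities(rates, ratio_threshold=0.8):
        """Flag groups whose rate falls below a fraction of the best-served group's."""
        best = max(rates.values())
        return [group for group, rate in rates.items() if rate < ratio_threshold * best]

    decisions = [
        {"group": "A", "approved": True}, {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": True},
        {"group": "B", "approved": False}, {"group": "B", "approved": False},
    ]
    rates = selection_rates(decisions)
    print(rates)                    # roughly {'A': 0.67, 'B': 0.33}
    print(flag_disparities(rates))  # ['B']

Selection-rate parity is only one lens: depending on the product, comparing error rates per group, testing edge cases, and reviewing individual high-stakes decisions may matter more, and the right measure depends on the real-world consequences users face.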

Example scenarios:

  • An app that uses an algorithm to screen applicants for a job opening. How might the app make it clear to applicants how they are being evaluated without enabling them to game the system?

  • An app that uses machine learning to create psychographic profiles of consumers and gives businesses the ability to target those profiles. How might the app give consumers the ability to fix incorrect data about themselves? How might the app give consumers the ability to opt out of being profiled?

Resources

Ethics is inherently conversational, which means it happens in a community of committed thinkers and practitioners. We’ve sought the wisdom of others as we’ve reflected on our own work and have included some of their resources here. While we may not agree with all of their conclusions, we’re grateful for their contribution to the burgeoning conversation.

  1. Ledger of Harms by The Center for Humane Technology
  2. Deon data ethics checklist by DrivenData
  3. Ethical OS Checklist by the Institute for the Future and Omidyar Network
  4. Ethics in Tech Practice by Santa Clara University
  5. Future Ethics by Cennydd Bowles
  6. Human Values, Ethics, and Design by Friedman and Kahn