Standard Approaches To Data And AI Ethical Risk Mitigation

  • The academic approach: academics are skilled at spotting ethical problems, identifying their sources, and reasoning them through. Unfortunately, they tend to ask different questions than businesses do, so academic treatments rarely speak to the highly particular, concrete uses of data and AI.
  • The “on-the-ground” approach: practitioners know which business-relevant, risk-related questions to ask precisely because they are the ones building the products, but they lack the skill, knowledge, experience, and institutional support to answer ethical questions systematically, exhaustively, and efficiently.
  • High-level AI ethics principles: Google and Microsoft, for instance, trumpeted their principles years ago. The difficulty lies in operationalizing them. What, exactly, does it mean to be for “fairness”? Which metric is the right one in a given case, and who makes that judgment?
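A minimal sketch of why “be fair” is not actionable until a specific metric is chosen: two widely used fairness metrics, applied to the same model predictions, can disagree. All function names and data below are hypothetical, constructed only to illustrate the conflict.

```python
# Hypothetical toy example: the same predictions satisfy one fairness
# metric (equal opportunity) while violating another (demographic parity).

def demographic_parity_gap(preds_a, preds_b):
    """Difference in positive-prediction rates between groups A and B."""
    rate_a = sum(preds_a) / len(preds_a)
    rate_b = sum(preds_b) / len(preds_b)
    return rate_a - rate_b

def equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b):
    """Difference in true-positive rates (recall) between groups A and B."""
    tpr_a = sum(p for p, y in zip(preds_a, labels_a) if y) / sum(labels_a)
    tpr_b = sum(p for p, y in zip(preds_b, labels_b) if y) / sum(labels_b)
    return tpr_a - tpr_b

# Group A: 4 applicants, 2 qualified; the model approves both qualified ones.
preds_a, labels_a = [1, 1, 0, 0], [1, 1, 0, 0]
# Group B: 4 applicants, 1 qualified; the model approves the qualified one.
preds_b, labels_b = [1, 0, 0, 0], [1, 0, 0, 0]

print(demographic_parity_gap(preds_a, preds_b))                      # 0.25: approval rates differ
print(equal_opportunity_gap(preds_a, labels_a, preds_b, labels_b))   # 0.0: recall is equal
```

A model can thus be “fair” or “unfair” depending on which metric an organization commits to, which is why principles alone leave the hard judgment calls unmade.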

Companies are leveraging data and artificial intelligence to create scalable solutions — but they’re also scaling their reputational, regulatory, and legal risks. 

  • Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app.
  • Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients.
  • Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting men larger credit limits than women on their Apple Cards.
  • Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.

Just a few years ago discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.

These companies realized one simple truth: failing to operationalize data and AI ethics is a threat to the bottom line. Missing the mark can expose companies to reputational, regulatory, and legal risks, but that’s not the half of it. Failing to operationalize data and AI ethics leads to wasted resources, inefficiencies in product development and deployment, and even an inability to use data to train AI models at all.

To operationalize data and AI ethics, companies can:

  • Identify existing infrastructure that a data and AI ethics program can leverage.
  • Create a data and AI ethical risk framework that is tailored to your industry.
  • Change how you think about ethics by taking cues from health care, an industry that has been systematically focused on ethical risk mitigation since at least the 1970s.
  • Optimize guidance and tools for product managers. 
  • Build organizational awareness.
  • Formally and informally incentivize employees to play a role in identifying AI ethical risks.
  • Monitor impacts and engage stakeholders.

RELATED IDEAS


AI and Equality
  • Designing systems that are fair for all.

A Practical Guide to Building Ethical AI

hbr.org

Technology ethics is the application of ethical thinking to the practical concerns of technology.

The reason technology ethics is growing in prominence is that new technologies give us more power to act, which means that we have to make choices we didn't have to make before. While in the past our actions were involuntarily constrained by our weakness, now, with so much technological power, we have to learn how to be voluntarily constrained by our judgment: our ethics.

Technology Ethics

scu.edu

From Hammurabi to Kant

Hammurabi’s best-known injunction is as follows: "If a builder builds a house and the house collapses and causes the death of the owner of the house—the builder shall be put to death."
