Companies are leveraging data and artificial intelligence to create scalable solutions — but they’re also scaling their reputational, regulatory, and legal risks. 

  • Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app.
  • Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients.
  • Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting men larger Apple Card credit limits than women.
  • Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.
A Practical Guide to Building Ethical AI

hbr.org
Just a few years ago discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.

These companies realized one simple truth: failing to operationalize data and AI ethics is a threat to the bottom line. Missing the mark can expose companies to reputational, regulatory, and legal risks, but that’s not the half of it. Failing to operationalize data and AI ethics also leads to wasted resources, inefficiencies in product development and deployment, and even an inability to use data to train AI models at all.

Three common approaches to tackling these risks fall short:

  • The academic approach: academics excel at spotting ethical problems, tracing their sources, and thinking them through, but they tend to ask different questions than businesses do. The result is academic treatments that do not speak to the highly particular, concrete uses of data and AI.
  • The “on-the-ground” approach: practitioners know to ask the business-relevant, risk-related questions precisely because they are the ones building the products, but they lack the skill, knowledge, experience, and institutional support to answer ethical questions systematically, exhaustively, and efficiently.
  • High-level AI ethics principles: Google and Microsoft, for instance, trumpeted their principles years ago, but the difficulty comes in operationalizing them. What, exactly, does it mean to be for “fairness”? Which metric is the right one in any given case, and who makes that judgment?

To operationalize data and AI ethics, the article recommends seven steps:
  • Identify existing infrastructure that a data and AI ethics program can leverage. 
  • Create a data and AI ethical risk framework that is tailored to your industry.
  • Change how you think about ethics by taking cues from health care, an industry that has been systematically focused on ethical risk mitigation since at least the 1970s.
  • Optimize guidance and tools for product managers. 
  • Build organizational awareness.
  • Formally and informally incentivize employees to play a role in identifying AI ethical risks.
  • Monitor impacts and engage stakeholders.
