AI: Scaling Solutions vs. Risks

Companies are leveraging data and artificial intelligence to create scalable solutions — but they’re also scaling their reputational, regulatory, and legal risks. 

  • Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app.
  • Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients.
  • Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than to women on their Apple Cards.
  • Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.
Source: hbr.org

MORE IDEAS FROM THE ARTICLE

  • Identify existing infrastructure that a data and AI ethics program can leverage. 
  • Create a data and AI ethical risk framework that is tailored to your industry.
  • Change how you think about ethics by taking cues from health care, an industry that has been systematically focused on ethical risk mitigation since at least the 1970s.
  • Optimize guidance and tools for product managers. 
  • Build organizational awareness.
  • Formally and informally incentivize employees to play a role in identifying AI ethical risks.
  • Monitor impacts and engage stakeholders.

Just a few years ago, discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world — Microsoft, Facebook, Twitter, Google, and more — are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.

These companies realized one simple truth: failing to operationalize data and AI ethics is a threat to the bottom line. Missing the mark can expose companies to reputational, regulatory, and legal risks, but that’s not the half of it. Failing to operationalize data and AI ethics leads to wasted resources, inefficiencies in product development and deployment, and even an inability to use data to train AI models at all.

Three common approaches to the problem fall short:

  • The academic approach: academics are skilled at spotting ethical problems, tracing their sources, and thinking them through, but they tend to ask different questions than businesses do. The result is academic treatments that do not speak to the highly particular, concrete uses of data and AI.
  • The “on-the-ground” approach: engineers and product managers know to ask the business-relevant, risk-related questions precisely because they are the ones making the products, but they lack the training and experience to answer ethical questions systematically, exhaustively, and efficiently, and they lack the institutional support to act on the answers.
  • High-level AI ethics principles: Google and Microsoft, for instance, trumpeted their principles years ago. The difficulty comes in operationalizing those principles. What, exactly, does it mean to be for “fairness”? Which metric is the right one in any given case, and who makes that judgment?
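
To make that operationalization problem concrete, here is a minimal sketch in plain Python (all records below are invented for illustration and not drawn from any case in the article) in which two widely used statistical definitions of fairness reach opposite verdicts on the same set of credit decisions:

```python
# Two statistical notions of "fairness" applied to the same hypothetical
# credit decisions. All records below are invented for illustration only.

# Each record: (group, approved, would_have_repaid)
decisions = [
    ("a", True,  True), ("a", True,  False), ("a", False, True),  ("a", False, False),
    ("b", True,  True), ("b", True,  True),  ("b", False, False), ("b", False, False),
]

def rate(flags):
    """Fraction of True values in a non-empty list of booleans."""
    return sum(flags) / len(flags)

def approval_rate(group):
    return rate([approved for g, approved, _ in decisions if g == group])

def true_positive_rate(group):
    # Among applicants who would have repaid, how many were approved?
    return rate([approved for g, approved, repaid in decisions if g == group and repaid])

# Demographic parity: equal approval rates across groups -> satisfied here.
print("approval-rate gap:", approval_rate("a") - approval_rate("b"))  # 0.5 - 0.5 = 0.0

# Equal opportunity: equal approval rates among qualified applicants -> violated.
print("TPR gap:", true_positive_rate("a") - true_positive_rate("b"))  # 0.5 - 1.0 = -0.5
```

Deciding which gap matters (equal approval rates overall, or equal treatment of applicants who would have repaid) is precisely the judgment call that high-level principles leave open.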

RELATED IDEAS

AI and Equality

  • Designing systems that are fair for all.

The Fear Of AI

AI (artificial intelligence) has a bad reputation, mostly from movies (The Matrix, for instance) and news articles demonizing AI’s reach and scope, stoking fears ranging from privacy invasion to space wars.

AI, like technology itself, is a double-edged sword: depending on how we use it, it can overwhelm and overpower us, or it can serve as a defence against looming future threats.

Narrow AI & General AI

Very broadly, AI can be divided into two categories: narrow AI and general AI.

1. Narrow AI systems handle a single task or a limited set of tasks. Sometimes referred to as weak AI, such systems have applications in email spam filtering, recommendation systems, and autonomous vehicles (a minimal spam-filter sketch follows this list).

2. General AI, or strong AI, on the other hand, refers to a machine’s capability to think and function as a human does. It denotes the ability to recognise the distinct needs, emotions, and thoughts of other intelligent entities.
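
As a concrete instance of narrow AI from the list above, here is a minimal spam-filter sketch in Python. It assumes scikit-learn is installed, and the messages and labels are invented for illustration:

```python
# A toy narrow-AI system: a spam filter trained on a handful of hand-written
# messages. The data and labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "claim your free cash reward",
    "meeting moved to 3pm", "see you at lunch tomorrow",
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features plus naive Bayes: narrow, single-purpose, and very
# far from "thinking like a human".
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize cash"]))         # likely ['spam'] on this toy data
print(model.predict(["lunch meeting tomorrow"]))  # likely ['ham'] on this toy data
```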
