Ethics and AI: 3 Conversations Companies Need to Have

Curated from: hbr.org


The Premise

While concerns about AI and ethical violations have become common in companies, turning these anxieties into actionable conversations can be tough. Given the complexities of machine learning, ethics, and their points of intersection, there are no quick fixes, and conversations around these issues can feel nebulous and abstract. Getting to the desired outcomes requires learning to talk about these issues differently.

AI Ethics

Over the past several years, concerns around AI ethics have gone mainstream. The concerns, and the outcomes everyone wants to avoid, are largely agreed upon and well documented.

No one wants to push out discriminatory or biased AI. No one wants to be the subject of a lawsuit or regulatory investigation for privacy violations. But once we’ve all agreed that biased, black-box, privacy-violating AI is bad, where do we go from here? The question almost every senior leader asks is: How do we take action to mitigate those ethical risks?

Understanding The Problems That Need To Be Solved

Acting quickly to address concerns is admirable, but given the complexities of machine learning, ethics, and their points of intersection, there are no quick fixes. To implement, scale, and maintain effective AI ethical risk mitigation strategies, companies should begin with a deep understanding of the problems they’re trying to solve. A challenge, however, is that conversations about AI ethics can feel nebulous. The first step, then, should consist of learning how to talk about AI ethics in concrete, actionable ways.

The People Who Need To Be Involved

Convene a senior-level working group that is responsible for driving AI ethics in your organization. Its members should have the skills, experience, and knowledge to keep the conversations well informed about business needs, technical capacities, and operational know-how. Involve four kinds of people: technologists, legal/compliance experts, ethicists, and business leaders.

Their collective goal is to understand the sources of ethical risk: generally, for the industry of which they are members, and for their particular company.

The Tech Guys

You need technologists to assess what is technologically feasible, not only at the level of individual products but also at the organizational level. That is partly because different ethical risk mitigation plans require different tech tools and skills. Knowing where your organization stands from a technological perspective can be essential to mapping out how to identify and close the biggest gaps.

The Legal Eagles

Legal and compliance experts help ensure that any new risk mitigation plan is compatible with, and not redundant with, existing risk mitigation practices. Legal issues loom particularly large because it is not clear how existing laws and regulations bear on new technologies, nor what new regulations or laws are coming down the pipeline.

The Path Correctors

Ethicists are there to help ensure a systematic and thorough investigation of the ethical and reputational risks you should attend to: not only the risks that come with developing and procuring AI in general, but also those particular to your industry and/or your organization. They are especially important because compliance with outdated regulations does not ensure the ethical and reputational safety of your organization.

The Business Leaders

Business leaders should help ensure that all risk is mitigated in a way that is compatible with business necessities and goals.

Zero risk is an impossibility so long as anyone does anything. But unnecessary risk is a threat to the bottom line, and risk mitigation strategies should also be chosen with an eye toward what is economically feasible.

Three Conversations to Push Things Forward

Once the team is in place, here are three crucial conversations to have.

  • One conversation concerns coming to a shared understanding of what goals an AI ethical risk program should be striving for.
  • The second conversation concerns identifying gaps between where the organization is now and where it wants to be.
  • The third conversation is aimed at understanding the sources of those gaps so that they are comprehensively and effectively addressed.

Define Your Organization’s Ethical Standard for AI

Any conversation should recognize that legal compliance (with anti-discrimination law, for example) and regulatory compliance are table stakes.

The question to address is: Given that the set of ethical risks is not identical to the set of legal/regulatory risks, what do we identify as the ethical risks for our industry/organization and where do we stand on them?

Identify the Gaps Between Where You Are Now and What Your Standards Call For

There are various technical “solutions” or “fixes” to AI ethics problems. A number of software products from big tech to startups to non-profits help data scientists apply quantitative metrics of fairness to their model outputs.

Tools like LIME and SHAP aid data scientists in explaining how outputs are arrived at in the first place. But virtually no one thinks these technical solutions, or any technological solution for that matter, will sufficiently mitigate the ethical risk and transform your organization into one that meets its AI ethics standards.
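
To make the idea of a quantitative fairness metric concrete, here is a minimal sketch that computes one common measure, the demographic parity gap (the difference in positive-prediction rates between groups), on a model’s outputs. The data, model, and group labels are synthetic placeholders rather than anything from the article; libraries such as Fairlearn, and the LIME and SHAP explainers mentioned above, package this kind of analysis more fully.

```python
# Minimal sketch: one quantitative fairness metric (demographic parity gap)
# applied to a model's predictions. All data here is synthetic and the model
# choice is arbitrary; it only illustrates the kind of check such tools run.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic features, binary outcomes, and a binary sensitive attribute.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
group = rng.integers(0, 2, size=1000)  # hypothetical group membership: 0 or 1

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Selection rate = share of positive predictions within each group.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"Selection rate, group 0: {rate_0:.3f}")
print(f"Selection rate, group 1: {rate_1:.3f}")
print(f"Demographic parity gap:  {abs(rate_0 - rate_1):.3f}")
```

A gap near zero does not by itself show the model is fair, which is exactly the point about the limits of purely technical fixes.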

Know The Limits

The members of your AI ethics team should determine where their respective limits lie and how their skills and knowledge can complement one another. This means asking:

  • What, exactly, is the risk we’re trying to mitigate?
  • How does software/quantitative analysis help us mitigate that risk?
  • What gaps do the software/quantitative analyses leave?
  • What kinds of qualitative assessments do we need to make, when do we need to make them, on what basis do we make them, and who should make them, so that those gaps are appropriately filled?

Understand the Complex Sources of The Problems and Operationalize Solutions

Many conversations around bias in AI start with examples and then move straight to talk of “biased data sets.” Sometimes this slides into talk of “implicit bias” or “unconscious bias,” terms borrowed from psychology that lack a clear and direct application to data sets. But it’s not enough to say “the models are trained on biased data sets” or “the AI reflects our historical societal discriminatory actions and policies.”
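
As one illustration of what going deeper than “biased data sets” can look like, the sketch below runs a simple, inspectable check: comparing positive-label rates across groups in a training table. The column names and values are hypothetical; a gap found this way is a prompt to ask how the labels and samples were produced, not a diagnosis on its own.

```python
# Minimal sketch: one concrete question hiding behind "biased data set":
# do recorded outcomes differ by group in the training data, and if so, why?
# The DataFrame below is a hypothetical stand-in for real training records.
import pandas as pd

train = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Positive-label rate per group. A large gap points back to how the labels
# were generated (past decisions, proxies, sampling), which is where the
# substantive conversation needs to happen.
print(train.groupby("group")["label"].mean())
```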

The Bottom Line

Productive conversations on ethics should go deeper than the broad-stroke examples decried by specialists and non-specialists alike. Your organization needs the right people at the table so that its standards can be defined and deepened.

Your organization should fruitfully marry its quantitative and qualitative approaches to ethical risk mitigation so it can close the gaps between where it is now and where it wants to be. And it should appreciate the complexity of the sources of its AI ethical risks.

IDEAS CURATED BY

anikad (Anika Dhar)