Explainable AI

Explainable AI is a critical element of the broader discipline of responsible AI. Responsible AI encompasses ethics, regulations, and governance across a range of risks and issues related to AI including bias, transparency, explicability, interpretability, robustness, safety, security, and privacy.

MORE IDEAS FROM THE ARTICLE

There are six broad approaches to post-hoc explainability (a concrete example of the first follows this list):

  • Feature relevance: These approaches examine the inner functioning of the model and highlight the features that best explain its outcome.
  • Model simplification: These approaches build a new, simpler model that approximates the more complex model to be explained.
  • Local explanations: These approaches segment the solution space and provide explanations for smaller segments that are less complex.
  • Explanations by example: These approaches select representative instances to illustrate how the model behaves.
  • Visualization: These approaches present the model's behavior graphically so that its patterns can be inspected.
  • Text explanations: These approaches generate natural language descriptions of the model's decisions.
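
As a concrete illustration of the feature-relevance approach, the sketch below scores features by permutation importance: each feature is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on it. The dataset, model, and parameters are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of a feature-relevance explanation via permutation
# importance (dataset and model are illustrative assumptions).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most relevant features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

The same ranking could then feed a visual or text explanation, depending on the audience.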

End users require an explanation of the decision or action recommended by the AI system in order to carry out the recommendation.

Business sponsors require an explanation to ensure corporate governance and to manage reputational risk to their group or company.

Data scientists require explanations to validate models and to weigh trade-offs between model accuracy and other performance criteria.

Regulators require explanations to ensure compliance with existing regulations and to ensure that no harm comes to consumers.

The audience for the explanation, or whom to explain to, should be the first question answered. The audience's motivation, the action or decision they plan to make, and their mathematical or technical knowledge and expertise are all important aspects to consider when formulating the explanation. Based on our experience, we propose four main categories of audience:

  1. End users
  2. Business sponsors
  3. Data scientists
  4. Regulators

This list of four types is by no means exhaustive, but it does capture some of the key differences between different groups.

Interpretability and explainability are closely related topics. Interpretability operates at the model level, with the objective of understanding the decisions or predictions of the overall model. Explainability operates at the level of an individual instance, with the objective of understanding why the model made that specific decision or prediction. When it comes to explainable AI, we need to consider five key questions: Whom to explain to? Why explain? When to explain? How to explain? What is the explanation?
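
A minimal sketch of the distinction, using a linear model on a public dataset (both chosen purely for illustration, not from the article): the learned coefficients give a model-level interpretation, while a per-instance breakdown of coefficient times feature value explains one specific prediction.

```python
# Interpretability vs. explainability on a linear model
# (dataset and model are illustrative assumptions).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = Ridge().fit(X, y)

# Interpretability (model level): the coefficients describe how the
# model behaves across all inputs.
print(dict(zip(X.columns, model.coef_.round(1))))

# Explainability (instance level): for one patient, each feature's
# contribution is coefficient * feature value, answering "why this
# prediction for this case?"
instance = X.iloc[0]
contributions = model.coef_ * instance.values
top3 = sorted(zip(X.columns, contributions),
              key=lambda t: -abs(t[1]))[:3]
for name, c in top3:
    print(f"{name}: {c:+.1f}")
print("prediction:", model.predict(instance.to_frame().T)[0])
```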

Visual or graphical explanations, tabular data-driven explanations, natural language descriptions, and voice explanations are some of the existing modes of explanation.

A salesperson might be comfortable with an explanation that shows a graph of increasing sales and how the increase in sales is achieved.

The instructions for a construction worker, and the explanations for why those instructions are being given, may be better delivered through a voice interface than as a detailed written explanation.
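
The sketch below, using invented feature names and weights, shows how the same underlying attributions can be rendered in two of these modes: a table suited to a technical reviewer and a natural-language sentence suited to an end user.

```python
# Two modes of explanation from the same attributions
# (feature names and weights are invented for illustration).
contributions = {
    "income": +0.42,
    "existing_debt": -0.31,
    "years_at_employer": +0.08,
}

# Tabular mode: compact and precise, suited to a data scientist.
for feature, weight in contributions.items():
    print(f"{feature:>18} {weight:+.2f}")

# Text mode: a plain sentence, suited to an end user.
top = max(contributions, key=lambda k: abs(contributions[k]))
direction = "supported" if contributions[top] > 0 else "weighed against"
print(f"The decision was most strongly {direction} by "
      f"'{top.replace('_', ' ')}'.")
```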

Explanations may be generated before the model is built, which is called ex-ante, or the model may be trained and tested first and the explanation generated afterwards, which is called post-hoc.
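
To make the post-hoc case concrete, the sketch below trains an opaque model first and only afterwards derives an explanation by fitting a shallow surrogate tree to its predictions, an instance of the model-simplification approach listed earlier. The dataset and models are illustrative assumptions.

```python
# Post-hoc explanation via a global surrogate model
# (dataset and models are illustrative assumptions).
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_wine(return_X_y=True, as_frame=True)

# Step 1: train and use the complex model as usual.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Step 2 (post-hoc): fit a shallow, readable tree to the black box's
# predictions, yielding a simplified stand-in that can be inspected.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))
```

An ex-ante alternative would be to use an inherently interpretable model, such as the shallow tree itself, from the start.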

RELATED IDEAS

Building ethical AI

Companies are leveraging data and artificial intelligence to create scalable solutions — but they’re also scaling their reputational, regulatory, and legal risks. For instance:

  • Los Angeles is suing IBM for allegedly misappropriating data it collected with its ubiquitous weather app.
  • Optum is being investigated by regulators for creating an algorithm that allegedly recommended that doctors and nurses pay more attention to white patients than to sicker black patients.
  • Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting larger credit limits to men than women on their Apple cards.
  • Facebook infamously granted Cambridge Analytica, a political firm, access to the personal data of more than 50 million users.

Just a few years ago discussions of “data ethics” and “AI ethics” were reserved for nonprofit organizations and academics. Today the biggest tech companies in the world — Microsoft, Facebook, Twitter, Google, and more — are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models, aka AI.

AI and Equality
  • Designing systems that are fair for all.

All companies will need to become “AI companies”

... so they can leverage the considerable benefits to be gained through greater knowledge of their customers, explore new markets, and counteract new, AI-driven companies that might seek their market share.

To extract the benefits from AI while mitigating the risks, companies must ensure that they are sufficiently agile so they can adopt best practices to create responsible transformation with AI.
