Transparency + Context = Explainability - Deepstash


Transparency allows modellers, developers, and technical auditors to understand how an AI system works, including how a model is trained and evaluated, what its decision boundaries are, what inputs go into the model, and finally, why it made a specific prediction. This is often also described as “interpretability” in existing research.

Explainable AI (XAI) goes a step further by explaining to members and customers how a system works or why a particular recommendation was made.
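One way to picture this step is a minimal sketch that turns per-feature attribution scores into a member-facing "why was this recommended?" message. The function name, feature names, and scores below are hypothetical illustrations, not taken from the article:

```python
# Hypothetical sketch: converting per-feature attribution scores into a
# short, human-readable explanation for an end user. The feature names
# and scores are illustrative, not from LinkedIn's actual systems.

def explain_recommendation(attributions, top_k=2):
    """Build a one-line reason from the top-k features that contributed
    most to the prediction, ranked by absolute attribution score."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = [name for name, _ in ranked[:top_k]]
    return "Recommended because of: " + ", ".join(reasons)

attributions = {
    "shared_connections": 0.42,   # pushed the score up
    "same_industry": 0.31,
    "profile_views": -0.05,       # pushed the score down slightly
}
print(explain_recommendation(attributions))
# → Recommended because of: shared_connections, same_industry
```

Ranking by absolute value means strongly negative contributors can also surface in the explanation, which matters when a recommendation was suppressed rather than promoted.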


MORE IDEAS FROM THE SAME ARTICLE

From a holistic “responsible design” perspective, there are many non-AI initiatives that help increase the transparency of our products and experiences.

  • On the backend, researchers have previously highlighted the importance of dataset documentation as a key enabler of transparency.

Complex predictive machine learning models often lack transparency, resulting in low trust from the teams that rely on them despite high predictive performance. While many model-interpretation approaches such as SHAP and LIME return the top important features to help interpret model predictions, these top features…
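To make "top important features" concrete without depending on the SHAP or LIME libraries, here is a deterministic, dependency-free sketch in the same spirit: a feature counts as important if breaking its alignment with the labels degrades model accuracy. Real permutation importance shuffles the column randomly, and SHAP/LIME use different algorithms entirely; this cyclic-shift variant is an illustrative assumption chosen for reproducibility:

```python
# Minimal feature-importance sketch (an illustrative stand-in for
# SHAP/LIME-style attributions, not their actual algorithms): a feature
# is "important" if misaligning it with the labels hurts accuracy.
# Real permutation importance shuffles randomly; we cyclically shift
# the column so the result is deterministic.

def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def feature_importance(model, X, y, n_features):
    base = accuracy(model, X, y)
    drops = {}
    for j in range(n_features):
        X_shift = [row[:] for row in X]
        col = [row[j] for row in X]
        col = col[1:] + col[:1]          # cyclic shift of feature j
        for row, v in zip(X_shift, col):
            row[j] = v
        drops[j] = base - accuracy(model, X_shift, y)  # accuracy drop
    # "Top important features" = largest accuracy drop first
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)

# Toy model that only looks at feature 0, so only feature 0 matters:
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(feature_importance(model, X, y, n_features=2))
# → [(0, 1.0), (1, 0.0)]: feature 0 carries all the signal
```

The limitation the article hints at applies here too: a ranked list of feature indices is useful to modellers, but on its own it is rarely a satisfying explanation for members and customers.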

Delivering the best member and customer experiences with a focus on trust is core to everything that we do at LinkedIn. As we continue to build on our Responsible AI program that we recently outlined three months ago, a key part of our work is designing products that provide the right protections...

Transparency means that AI system behaviour and its related components are understandable, explainable, and interpretable. The goal is that end users of AI, such as LinkedIn employees, customers, and members, can use these insights to understand these systems, suggest improvements, and identify potential…

A few key ways we've improved transparency in AI at LinkedIn:

  • Explainable AI for model consumers to build trust and augment decision-making.
  • Explainable AI for modellers to perform model debugging and improvement.
  • Transparency beyond AI systems.

Machine learning engineers at LinkedIn need to understand how their models make decisions so they can identify blind spots and, thereby, opportunities for improvement. For this, we have explainability tools that allow model developers to derive insights and characteristics about their model at a ...

