Our approach to building transparent and explainable AI systems

Curated from: engineering.linkedin.com


Better AI Systems

Delivering the best member and customer experiences with a focus on trust is core to everything we do at LinkedIn. As we continue to build on the Responsible AI program we outlined three months ago, a key part of our work is designing products that provide the right protections, mitigate unintended consequences, and ultimately better serve our members, customers, and society.


The principle of Transparency

Transparency means that an AI system's behaviour and its related components are understandable, explainable, and interpretable. The goal is that end-users of AI, such as LinkedIn employees, customers, and members, can use these insights to understand these systems, suggest improvements, and identify potential problems should they arise.

Developing large-scale AI systems that are fair and equitable, or that protect users, may not be possible if those systems are opaque.


Transparency + Context = Explainability

Transparency allows modellers, developers, and technical auditors to understand how an AI system works, including how a model is trained and evaluated, what its decision boundaries are, what inputs go into the model, and finally, why it made a specific prediction. This is often also described as “interpretability” in existing research.

Explainable AI (XAI) goes a step further by explaining to members and customers how a system works or why a particular recommendation was made.
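As a minimal sketch of the "transparency + context" idea, the hypothetical snippet below pairs a model's most influential input (the transparent part) with product context to produce a member-facing explanation. The signal names, wording, and function are invented for illustration, not LinkedIn code.

```python
# Hypothetical sketch: combine a transparent model signal with product context
# to produce a member-facing explanation. Signal names and phrasing are
# illustrative only.
TOP_SIGNAL_TO_REASON = {
    "shared_connections": "You have {value} connections in common.",
    "same_industry": "You both work in the {value} industry.",
    "viewed_your_profile": "They recently viewed your profile.",
}

def explain_recommendation(top_signal: str, value: object) -> str:
    """Translate the model's most influential input into plain language."""
    template = TOP_SIGNAL_TO_REASON.get(
        top_signal, "This recommendation is based on your recent activity."
    )
    return template.format(value=value)

print(explain_recommendation("shared_connections", 12))
# -> You have 12 connections in common.
```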


Improving AI Transparency

A few key ways we've improved transparency in AI at LinkedIn:

  • Explainable AI for model consumers to build trust and augment decision-making.
  • Explainable AI for modellers to perform model debugging and improvement.
  • Transparency beyond AI systems.


Augmenting trust and decision-making

Complex predictive machine learning models often lack transparency, which results in low trust from the teams that consume their predictions, despite high predictive performance. While many model interpretation approaches such as SHAP and LIME return the top important features to help interpret model predictions, those top features may not be well organized or intuitive to these consumers.

To deal with this challenge, we developed Intellige, a customer-facing model explainer that creates digestible interpretations and insights reflecting the rationale behind model predictions.
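To make the contrast concrete, here is a minimal sketch (not Intellige, and the data and model are toy placeholders) of the kind of raw, per-prediction feature attributions a library such as SHAP produces; a tool like Intellige then has to reorganize this output into narratives a business user can act on.

```python
# Minimal SHAP sketch with a toy model; not LinkedIn's Intellige pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data standing in for real features; coefficients are arbitrary.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=500)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for one prediction

# Rank features by absolute contribution: this raw, unorganized list is what
# a consumer-facing explainer still has to turn into digestible insights.
ranked = sorted(zip(feature_names, shap_values[0]),
                key=lambda kv: abs(kv[1]), reverse=True)
for name, contribution in ranked:
    print(f"{name}: {contribution:+.3f}")
```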


Explainable AI (XAI) for modellers

Machine learning engineers at LinkedIn need to understand how their models make decisions so that they can identify blind spots and, thereby, opportunities for improvement. For this, we have explainability tools that allow model developers to derive insights and characteristics about their models at a finer granularity.
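The post does not detail those internal tools, but one common finer-granularity technique is slice-based evaluation: measuring a model separately on subsets of the data to surface blind spots. A minimal sketch with toy data and an invented `industry` slice column:

```python
# Hypothetical slice-based evaluation sketch; column names and data are toys.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_slice(df: pd.DataFrame, slice_col: str) -> pd.Series:
    """Accuracy per value of `slice_col`; expects `label` and `prediction` columns."""
    return (
        df.groupby(slice_col)[["label", "prediction"]]
          .apply(lambda g: accuracy_score(g["label"], g["prediction"]))
          .sort_values()
    )

# A slice whose accuracy lags the others flags a potential blind spot.
eval_df = pd.DataFrame({
    "label":      [1, 0, 1, 1, 0, 1, 0, 0],
    "prediction": [1, 0, 1, 0, 0, 0, 0, 1],
    "industry":   ["tech", "tech", "tech", "finance",
                   "finance", "finance", "finance", "tech"],
})
print(accuracy_by_slice(eval_df, "industry"))
```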


Transparency beyond AI systems

From a holistic “responsible design” perspective, there are many non-AI initiatives that help increase the transparency of our products and experiences.

  • On the backend, researchers have previously highlighted the importance of dataset documentation as a key enabler of transparency; a simple illustration follows this list.
  • On the front end, we launched a transparency initiative designed to earn and preserve member trust through improvements to our reporting and policy enforcement experiences.
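As a minimal illustration of the backend point, here is a hypothetical, datasheet-style dataset card. The class, field names, and values are invented for illustration, not a LinkedIn schema.

```python
# Hypothetical datasheet-style dataset documentation; fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class DatasetCard:
    """Datasheet-style documentation for a training dataset."""
    name: str
    description: str
    collection_process: str
    intended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

card = DatasetCard(
    name="member_course_interactions_v2",
    description="Aggregated, de-identified course click events.",
    collection_process="Logged from course pages; sampled and refreshed daily.",
    intended_uses=["course recommendation ranking"],
    known_limitations=["under-represents newly published courses"],
)
print(card)
```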


IDEAS CURATED BY

brianadam

Multimedia specialist
