Augmenting trust and decision-making

Complex predictive machine learning models often lack transparency, which leads to low trust from the teams that consume their predictions, despite the models' high predictive performance. Many model-interpretation approaches, such as SHAP and LIME, return the most important features behind a prediction, but those top features may not be well organized or intuitive for the teams that need to act on them.
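As a point of reference, the kind of "top important features" output mentioned above can be produced with the SHAP library. The sketch below is illustrative only, using a stand-in model and dataset rather than anything from Intellige.

```python
# Minimal sketch: rank the features driving a single prediction by SHAP value.
# The model and dataset are stand-ins chosen for a self-contained example.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

# TreeExplainer attributes each prediction to individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # explain one prediction

# Rank features by the magnitude of their contribution to this prediction.
top = np.argsort(np.abs(shap_values[0]))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]}: {shap_values[0][i]:+.3f}")
```

The output is a flat list of feature names and signed contributions, which is exactly the raw, unorganized form that non-technical teams can find hard to interpret.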

To address this challenge, we developed Intellige, a customer-facing model explainer that creates digestible interpretations and insights reflecting the rationale behind model predictions.
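To make the idea concrete, here is one hypothetical way raw attributions could be turned into readable insight sentences. This is not Intellige's actual pipeline or API, which is not described in this excerpt; every name below is an assumption for illustration.

```python
# Hypothetical sketch: convert (feature, contribution) pairs into plain-language
# insight sentences. Not Intellige's implementation.
def narrate_prediction(feature_attributions, top_k=3):
    """Turn a dict of feature -> signed contribution into short insight sentences."""
    ranked = sorted(feature_attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    sentences = []
    for feature, value in ranked[:top_k]:
        direction = "raises" if value > 0 else "lowers"
        sentences.append(f"'{feature}' {direction} the predicted score by about {abs(value):.2f}.")
    return " ".join(sentences)

# Example usage with made-up attribution values.
print(narrate_prediction({"bmi": 24.1, "blood_pressure": -6.3, "age": 2.0}))
```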
