Complex predictive machine learning models often lack transparency, resulting in low trust from the business teams that consume their predictions, despite high predictive performance. While many model interpretation approaches, such as SHAP and LIME, return the top important features to help interpret model predictions, those top features may not be well organized or intuitive to these teams.
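For context, here is a minimal sketch of the kind of raw output such tools produce; it uses the shap package with an illustrative scikit-learn model and synthetic data, none of which are part of Intellige itself:

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Illustrative model and data; a real pipeline would use production features.
X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain the first example; shape (1, n_features)

# Rank features by the magnitude of their contribution to this prediction.
contributions = shap_values[0]
top = np.argsort(-np.abs(contributions))
for i in top[:3]:
    print(f"feature_{i}: SHAP value {contributions[i]:+.3f}")
```

The output is a bare list of feature indices and signed contribution scores, which illustrates the gap: such a ranking is faithful to the model but not self-explanatory to a non-technical audience.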
To address this challenge, we developed Intellige, a customer-facing model explainer that creates digestible interpretations and insights reflecting the rationale behind model predictions.