Interpreting Machine Learning Models - Deepstash
Interpreting Machine Learning Models

Many machine learning textbooks present students with a chart that shows a tradeoff between model interpretability and model accuracy. This is a heuristic, but many students come away thinking that this tradeoff is as strict as a law of physics.


MORE IDEAS ON THIS

Prediction By Algorithms

A national effort was undertaken to build algorithms to predict which pneumonia patients should be admitted to the hospital and which could be treated as outpatients. Only by interpreting the model was a crucial problem discovered and avoided. Understanding why a model makes a prediction can literally be...


Deprioritizing Interpretability

  • Global Interpretability: How well can we understand the relationship between each feature and the predicted value at a global level — for our entire observation set. Can we understand both the magnitude and direction of the impact of each feature on the predicted va...


Linear Regression

An ordinary least squares (OLS) model generates coefficients for each feature. These coefficients are signed, allowing us to describe both the magnitude and direction of each feature at the global level. For local interpretability, we need only multiply the coefficient vector by a specific featur...
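The coefficient arithmetic described above can be sketched in a few lines. The toy data below is an illustrative assumption (not from the article), using scikit-learn's `LinearRegression`:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data generated from y = 1 + 3*x0 - 2*x1 (illustrative assumption)
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = 1.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1]

model = LinearRegression().fit(X, y)

# Global interpretability: the signed coefficients describe both the
# magnitude and direction of each feature's effect across the dataset.
print(model.coef_)  # approximately [ 3. -2.]

# Local interpretability: the element-wise product of the coefficient
# vector and one observation's feature vector gives each feature's
# contribution to that single prediction.
contributions = model.coef_ * X[0]
print(contributions)  # approximately [ 3. -4.]
```

The local contributions (plus the intercept) sum to the prediction itself, which is what makes OLS so transparent at both levels.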


Random Forest

In the middle of the accuracy-interpretability spectrum are random forests. We’ve often seen them described as “black boxes,” which we think is unfair: “gray,” maybe, but certainly not “black”!

Random forests are collections of decision trees, like the one drawn below. The splits in eac...
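One concrete handle a random forest does expose is its impurity-based feature importances. The synthetic data below is an illustrative assumption (not the article's example), using scikit-learn's `RandomForestRegressor`:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
# Target depends strongly on feature 0, weakly on feature 1,
# and not at all on features 2 and 3 (illustrative assumption).
y = 5.0 * X[:, 0] + 0.5 * X[:, 1]

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based importances: the average variance reduction from each
# feature's splits across all trees. They sum to 1 but are unsigned,
# so they show the magnitude of a feature's influence, not its direction.
importances = forest.feature_importances_
print(importances.argmax())  # feature 0 should dominate here
```

The lack of sign is one reason forests sit in the middle of the spectrum: unlike OLS coefficients, importances say how much a feature matters but not which way it pushes the prediction.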


Neural Networks

As the hottest topic in machine learning over the past decade, we’d be remiss if we didn’t mention neural networks. Hailed for outstanding accuracy in difficult domains like image recognition and language translation, they’ve also generated criticism for lacking interpretability.

Nobody...
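The article's snippet cuts off here, but one common way to peek inside a neural network is gradient-based saliency: asking how sensitive the output is to each input. The tiny NumPy network below is a hypothetical illustration with random weights standing in for a trained model, not the article's example:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny one-hidden-layer network with tanh activation; the random
# weights are a stand-in for a trained model (illustrative assumption).
W1 = rng.normal(size=(3, 4))  # input dim 3 -> hidden dim 4
W2 = rng.normal(size=(4,))    # hidden dim 4 -> scalar output

def forward(x):
    h = np.tanh(x @ W1)
    return h @ W2

def saliency(x):
    # Gradient of the output w.r.t. the input, by the chain rule:
    # d out / d x = W1 @ ((1 - h**2) * W2)
    h = np.tanh(x @ W1)
    return W1 @ ((1.0 - h ** 2) * W2)

x = np.array([0.5, -1.0, 2.0])
grad = saliency(x)

# Sanity-check the analytic gradient against central finite differences.
eps = 1e-6
fd = np.array([
    (forward(x + eps * np.eye(3)[i]) - forward(x - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
print(np.allclose(grad, fd, atol=1e-5))  # True
```

Each entry of `grad` is a local, signed sensitivity, loosely analogous to the per-prediction contributions of a linear model, though it only holds near the specific input examined.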

