Curated from: medium.com
Many machine learning textbooks present students with a chart that shows a tradeoff between model interpretability and model accuracy. This is a heuristic, but many students come away thinking that this tradeoff is as strict as a law of physics.
A national effort was undertaken to build algorithms to predict which pneumonia patients should be admitted to hospitals and which should be treated as outpatients. Only by interpreting the model was a crucial problem discovered and avoided. Understanding why a model makes a prediction can literally be a matter of life and death.
An ordinary least squares (OLS) model generates coefficients for each feature. These coefficients are signed, allowing us to describe both the magnitude and direction of each feature at the global level. For local interpretability, we need only multiply the coefficient vector by a specific feature vector to see the predicted value, and the contribution of each feature to that prediction.
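To make this concrete, here is a minimal sketch (not from the original article) of global and local interpretation of an OLS model with scikit-learn. The feature names and toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy housing-style data (assumed for illustration): [square_feet, age_years]
X = np.array([[1400, 30], [1600, 12], [2100, 5], [1200, 45]], dtype=float)
y = np.array([240_000, 310_000, 420_000, 190_000], dtype=float)

model = LinearRegression().fit(X, y)

# Global interpretation: signed coefficients give magnitude and direction
# of each feature's effect across the whole model.
for name, coef in zip(["square_feet", "age_years"], model.coef_):
    print(f"{name}: {coef:+.2f}")

# Local interpretation: the element-wise product of the coefficient vector
# and one feature vector shows each feature's contribution to that prediction.
x_new = np.array([1800, 20], dtype=float)
contributions = model.coef_ * x_new
prediction = model.intercept_ + contributions.sum()
print("contributions:", dict(zip(["square_feet", "age_years"], contributions)))
print("prediction:", prediction)
```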
In the middle of the accuracy-interpretability spectrum are random forests. We’ve often seen them described as “black boxes,” which we think is unfair — maybe “gray” but certainly not “black”!
Random forests are collections of decision trees. The splits in each tree are chosen from random subsets of our features, so the trees all look slightly different. A single tree can be easily interpreted, assuming it is not grown too deep. But how can we interpret a random forest that contains hundreds or thousands of trees? One common approach is sketched below.
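One widely used answer (a common technique, not necessarily the one the article settles on) is to average impurity-based feature importances across all the trees in the ensemble, which gives a global ranking of the features that drive the forest's predictions. The dataset below is chosen only for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Example dataset (assumed for illustration)
X, y = load_breast_cancer(return_X_y=True)
feature_names = load_breast_cancer().feature_names

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# feature_importances_ is the mean decrease in impurity per feature,
# averaged over all 500 trees in the ensemble.
ranked = sorted(zip(feature_names, forest.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Permutation importance or per-prediction contribution methods can complement this when impurity-based importances are biased toward high-cardinality features.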
As the hottest topic in machine learning over the past decade, we’d be remiss if we didn’t mention neural networks. Hailed for outstanding accuracy in difficult domains like image recognition and language translation, they’ve also generated criticism for lacking interpretability.
Nobody understands how these systems — neural networks modelled on the human brain — produce their results. Computer scientists “train” each one by feeding it data, and it gradually learns. But once a neural net is working well, it’s a black box.