Learn more about computer science with this collection
Understanding machine learning models
Improving data analysis and decision-making
How Google uses logic in machine learning
If you have a small dataset, using a model pre-trained on large datasets can be a good idea. You can use your small dataset to fine-tune it.
As AI integration across industries picks up pace, ML engineers are confronted with a sad reality - once stakeholders identify a use case with proven ROI, they are eager to jump onto the AI ship, and dat...
Testing data is used to evaluate the trained model. Training data is not reused for testing because the model would simply reproduce the outputs it has already seen. The testing data set comprises about 20 percent of the total data.
The process of curating datasets for machine learning starts well before any dataset is in hand. Here’s what we suggest:
Data is the new oil - and just as oil needs the right refining to be useful, data needs curation. The power of your machine learning models will greatly depend on the quality of your data.
ML engineers depend on data at every step of their AI journey – from model selection, training, and tuning to testing. These datasets usually fall into three categories:
The training data set is used to train an algorithm: the model applies concepts, learns from it, and produces results. Around 60 percent of the data is training data.
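The three-way split above can be sketched in plain Python. The 60/20/20 fractions mirror the percentages mentioned in the text; they are conventional defaults, not fixed rules, and the helper name here is illustrative.

```python
import random

def split_dataset(data, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle a dataset and split it into train / validation / test subsets.

    train_frac + val_frac must be <= 1; the remainder becomes the test set.
    """
    data = list(data)
    random.Random(seed).shuffle(data)  # shuffle so each subset is representative
    n_train = int(len(data) * train_frac)
    n_val = int(len(data) * val_frac)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test

samples = list(range(100))
train, val, test = split_dataset(samples)
print(len(train), len(val), len(test))  # 60 20 20
```

In practice a library helper such as scikit-learn's `train_test_split` does the same job, but the logic is exactly this: shuffle once, then slice by fraction.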
Other curated ideas on this topic:
Worrying about things that might happen wastes our precious energy and time. The limited thought cycles get jammed with ‘pre-worry’.
You have all the time to worry about the problem and handle it when (and if) it happens. Pre-worry isn’t doing you any good.
The process of reducing the number of dimensions (or feature variables) in a dataset is known as dimensionality reduction.
If a cube has 1,000 points, we can reduce its dimensionality by simply taking the 3D data and viewing it as a 2D model. We can also remove feature variables...
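The cube example can be shown with the simplest possible reduction: dropping one feature variable, i.e. projecting each 3D point onto a 2D plane. The data here is randomly generated for illustration only; real techniques such as PCA choose the projection more carefully.

```python
import random

# Hypothetical "cube" of 1,000 random 3-D points (illustrative data).
random.seed(0)
cube = [(random.random(), random.random(), random.random()) for _ in range(1000)]

# Crudest form of dimensionality reduction: discard the z feature variable,
# viewing the 3-D data as a 2-D model on the x-y plane.
flat = [(x, y) for x, y, _z in cube]

print(len(flat), len(flat[0]))  # 1000 points, 2 dimensions each
```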
Transfer learning consists of taking features learned on one problem and leveraging them on a new, similar problem. For instance, features from a model that has learned to identify raccoons may be useful to kick-start a model meant to identify tanukis.
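The idea can be sketched as a toy in plain Python: a fixed function stands in for a feature extractor "pretrained" on a large dataset (its weights stay frozen), and only a small new head is trained on the target task. Everything here is hypothetical and illustrative; real transfer learning would reuse an actual pretrained network.

```python
import math

# Stand-in for a frozen, pretrained feature extractor: its "weights" never change.
def frozen_features(x):
    return [math.tanh(x), math.tanh(2 * x), 1.0]  # last entry acts as a bias feature

# Small labelled dataset for the new task (hypothetical): label 1 if x > 0.
data = [(x / 10, 1.0 if x > 0 else 0.0) for x in range(-10, 11) if x != 0]

# Only this new head is trained on top of the frozen features.
head = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(200):
    for x, y in data:
        f = frozen_features(x)
        z = sum(w * fi for w, fi in zip(head, f))
        p = 1 / (1 + math.exp(-z))            # sigmoid prediction
        for i in range(len(head)):             # gradient step on the head only
            head[i] -= lr * (p - y) * f[i]

def predict(x):
    z = sum(w * fi for w, fi in zip(head, frozen_features(x)))
    return 1.0 if z > 0 else 0.0

correct = sum(predict(x) == y for x, y in data)
print(correct, len(data))
```

Because the extractor is reused rather than learned, the small dataset only has to fit the tiny head, which is the whole appeal of fine-tuning on limited data.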