Why you don’t need big data to train an ML model


When somebody says artificial intelligence (AI), they most often mean machine learning (ML). Most people believe that to create an ML algorithm you need to collect a labeled dataset, and that the dataset must be huge. That is true enough if the goal is to describe the process in one sentence. However, if you look at the process a little more closely, big data turns out to be less necessary than it first seems.

Why many people think nothing will work without big data

To begin with, let’s discuss what a dataset and training are. A dataset is a collection of objects that are typically labeled by a human so that the algorithm can understand what it should look for. For example, if we want to find cats in photos, we need a set of pictures with cats and, for each picture, the coordinates of the cat, if it exists.
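
To make this concrete, here is a minimal sketch of what such a labeled dataset might look like in code. The file names, class name, and bounding-box format are assumptions made purely for illustration, not the article's own data.

```python
# A hypothetical labeled dataset for cat detection: each entry pairs an image
# with the bounding box of the cat it contains (or None if there is no cat).
dataset = [
    {"image": "photo_001.jpg", "label": "cat", "bbox": (34, 50, 120, 90)},  # x, y, width, height
    {"image": "photo_002.jpg", "label": "cat", "bbox": (10, 8, 64, 64)},
    {"image": "photo_003.jpg", "label": None, "bbox": None},                # no cat in this picture
]

# During training the algorithm only ever sees pairs like these; everything it
# "knows" about cats comes from this collection.
for example in dataset:
    print(example["image"], "->", example["label"], example["bbox"])
```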

During training, the algorithm is shown the labeled data with the expectation that it will learn to predict labels for objects, capture general dependencies, and solve the problem on data it has not seen before.

One of the most common challenges in training such algorithms is called overfitting. Overfitting occurs when the algorithm remembers the training dataset but doesn’t learn how to work with data it has never seen.

Let’s return to the same example. If our data contains only photos of black cats, the algorithm can learn a spurious rule: black with a tail = a cat. The false dependency is not always this obvious, though. If there is little data and the model is powerful, it can simply memorize every training example, latching onto uninterpretable noise.
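
As an illustration, here is a minimal sketch of that failure mode: a high-capacity model trained on a tiny dataset of pure noise gets perfect training accuracy but is no better than guessing on held-out data. The choice of dataset and model (a deep decision tree on random features via scikit-learn) is an assumption made for the demonstration, not the article's own experiment.

```python
# Minimal overfitting demo: a strong model, a tiny dataset, noise as features.
# The tree memorizes the training set but cannot generalize.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))     # 60 objects, 20 meaningless "features"
y = rng.integers(0, 2, size=60)   # random labels: there is nothing real to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = DecisionTreeClassifier()  # unrestricted depth = high capacity
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # ~1.0: memorized
print("test accuracy: ", model.score(X_test, y_test))    # ~0.5: no better than guessing
```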

The easiest way to combat overfitting is to collect more data: a larger and more diverse dataset makes it much harder for the algorithm to latch onto false dependencies, such as only recognizing black cats, as the sketch below illustrates.
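
One way to see this effect is a learning curve: as the amount of training data grows, the gap between training and validation scores shrinks. The sketch below assumes scikit-learn and a synthetic classification task; it illustrates the general behavior rather than any specific result from the article.

```python
# Learning-curve sketch: more training data reduces the train/validation gap
# that is characteristic of overfitting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)

train_sizes, train_scores, val_scores = learning_curve(
    DecisionTreeClassifier(max_depth=10, random_state=0),
    X, y,
    train_sizes=np.linspace(0.05, 1.0, 5),
    cv=5,
)

for n, tr, va in zip(train_sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    # The gap between train and validation accuracy narrows as n grows.
    print(f"{n:5d} examples: train={tr:.2f}  validation={va:.2f}")
```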
