Acquisition functions

We want to select the data samples that will be the most informative for training the model, so a natural approach is to score each sample based on its predicted usefulness. Since labeling is usually done in batches, you can then take the top-k scoring samples for annotation. A function that takes an unlabeled data sample and outputs such a score is called an acquisition function.
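In code, batch selection with an acquisition function might look like the following minimal sketch. The names `select_batch` and `acquisition_fn` are illustrative stand-ins, not part of Encord Active's API:

```python
import numpy as np

def select_batch(unlabeled_samples: list, acquisition_fn, k: int) -> list:
    """Score every unlabeled sample and return the k highest-scoring ones.

    `acquisition_fn` stands in for any scoring function, e.g. an
    uncertainty measure computed over the model's predictions.
    """
    scores = np.array([acquisition_fn(s) for s in unlabeled_samples])
    top_k = np.argsort(scores)[::-1][:k]  # indices of the k largest scores
    return [unlabeled_samples[i] for i in top_k]
```

The returned batch is what you would then send for annotation before retraining the model and repeating the loop.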

Uncertainty-based acquisition functions

In Encord Active, we employ an uncertainty sampling strategy: data samples are scored by the uncertainty of the model's predictions. The assumption is that samples the model is unconfident about are likely to be more informative than samples for which the model is very confident about the label.

We include the following uncertainty-based acquisition functions:

  • Least confident score: $U(x) = 1 - P(\hat{y} \mid x)$, where $\hat{y}$ is the model's most likely predicted label for sample $x$.
  • Margin score: $M(x) = P(\hat{y}_1 \mid x) - P(\hat{y}_2 \mid x)$, where $\hat{y}_1$ and $\hat{y}_2$ are the first and second-highest predicted labels.
  • Entropy: $H(x) = -\sum_{c} P(y = c \mid x) \log P(y = c \mid x)$, summed over all classes $c$.
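As a concrete illustration, here is a small sketch computing these three measures from a model's softmax output. This is plain NumPy for clarity, not Encord Active's implementation:

```python
import numpy as np

def least_confident_score(probs: np.ndarray) -> float:
    """Uncertainty as 1 minus the top predicted probability."""
    return float(1.0 - probs.max())

def margin_score(probs: np.ndarray) -> float:
    """Difference between the two highest class probabilities.
    NOTE: a *small* margin means *high* uncertainty, so negate this
    (or rank ascending) when selecting the most informative samples."""
    top_two = np.sort(probs)[-2:]
    return float(top_two[1] - top_two[0])

def entropy_score(probs: np.ndarray) -> float:
    """Shannon entropy of the predicted class distribution."""
    eps = 1e-12  # avoid log(0) for zero probabilities
    return float(-np.sum(probs * np.log(probs + eps)))

# Example: an overconfident prediction vs. an uncertain one.
confident = np.array([0.95, 0.03, 0.02])
uncertain = np.array([0.40, 0.35, 0.25])
assert entropy_score(uncertain) > entropy_score(confident)
```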

👍

Tip

Follow the links provided for each acquisition function for a detailed explanation, including alternative formulas, guidance on interpreting the output scores, and their implementations on GitHub.

🚧

Caution

In the following scenarios, uncertainty-based acquisition functions must be used with extra care:

  • Softmax outputs from deep networks are often not calibrated and tend to be quite overconfident (a common mitigation is sketched after this list).
  • For convolutional neural networks, small, seemingly meaningless perturbations in the input space can completely change predictions.
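One common mitigation for the calibration issue is temperature scaling (Guo et al., 2017). The sketch below is illustrative and not part of Encord Active; the temperature value shown is a placeholder that would normally be fit on a held-out validation set:

```python
import numpy as np

def calibrated_softmax(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Temperature-scaled softmax: temperature > 1 softens overconfident
    predictions before uncertainty scores are computed. In practice the
    temperature is fit by minimizing negative log-likelihood on a
    held-out validation set, not hand-picked as in this sketch."""
    scaled = logits / temperature
    scaled = scaled - scaled.max()  # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# With temperature > 1, the distribution is flatter and entropy is higher.
logits = np.array([4.0, 1.0, 0.5])
print(calibrated_softmax(logits, temperature=1.0))  # sharper
print(calibrated_softmax(logits, temperature=2.0))  # softer
```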

Diversity-based acquisition functions

Diversity sampling is an active learning strategy that aims to ensure that the labeled training set includes a broad range of examples from across the input space. The underlying assumption is that a diverse set of training examples will allow the model to learn a more generalized representation, improving its performance on unseen data.

In contrast to uncertainty-based methods, which prioritize examples that the model finds difficult to classify, diversity-based methods prioritize examples based on their novelty or dissimilarity to examples that are already in the training set. This can be particularly useful when the input space is large and the distribution of examples is uneven.

We include the following diversity-based acquisition function:

This metric clusters the images into as many clusters as there are classes in the Ontology file. It then draws samples from each cluster one by one, so that an equal number of samples comes from every cluster. Within a cluster, samples are chosen by their proximity to the cluster centroid (closer samples are chosen first), as sketched below.
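To make the procedure concrete, the following sketch reproduces the described behavior with scikit-learn's k-means. The `embeddings` array, the function name, and the tie-breaking details are illustrative assumptions rather than Encord Active's actual implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def diversity_ranking(embeddings: np.ndarray, n_classes: int) -> list:
    """Rank samples by clustering embeddings into n_classes clusters and
    interleaving each cluster's members, closest-to-centroid first."""
    kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(embeddings)
    # Per-cluster lists of sample indices, sorted by distance to the centroid.
    per_cluster = []
    for c in range(n_classes):
        members = np.where(kmeans.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
        per_cluster.append(members[np.argsort(dists)])
    # Interleave: closest sample of each cluster, then second closest, and so on.
    ranking = []
    for round_idx in range(max(len(m) for m in per_cluster)):
        for members in per_cluster:
            if round_idx < len(members):
                ranking.append(int(members[round_idx]))
    return ranking
```

Taking the top-k entries of this ranking yields roughly k / n_classes samples per cluster, which is the "equal number of samples from each cluster" behavior described above.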

👍

Tip

Diversity-based acquisition functions are generally easier to use than uncertainty-based ones because they may not require an ML model at all. See the Running Diversity Based Acquisition Function on Unlabeled Data tutorial to learn how to use them in your project.

Which acquisition function should I use?

“Ok, I have this list of acquisition functions now, but which one is the best? How do I choose?”

There is no easy answer to this question: it depends heavily on your problem, your data, your model, your labeling budget, and your goals. The choice can be crucial to your results, and comparing multiple acquisition functions during the active learning process is not always feasible.

Simple uncertainty measures such as the least confident score, the margin score, and entropy make good first choices.

👍

Tip

If you’d like to talk to an expert on the topic, the Encord ML team can be found in the #general channel in our Encord Active Slack community.

How can I utilize the chosen acquisition function?

Explore the Easy Active Learning on MNIST tutorial for a quick overview using a well-known example dataset.

Tutorials

  • Easy Active Learning on MNIST: A quick overview of the acquisition functions using a well-known example dataset.
  • Diversity sampling without an ML model: Using diversity sampling to rank images without training any model.
  • Selecting hard samples for object detection: A Jupyter notebook to run acquisition functions using an object detector.