
Importing Model Predictions

This page shows you how to import model predictions with code.


Every time you run any of these importers, previously imported predictions will be overwritten! We're working on fixing this.


There is also a workflow description on importing model predictions here.


Before you can import your predictions you need to have a couple of prerequisites in place:

  1. You should have imported a project - and taken note of the /path/to/the/data
  2. In your code, you need to have an encord.Project initialised.

You can do this with the following code - only the data_dir path should need to change:

from pathlib import Path
import yaml

from encord import EncordUserClient

data_dir = Path("/path/to/the/data")

meta = yaml.safe_load((data_dir / "project_meta.yaml").read_text())
private_key = Path(meta["ssh_key_path"]).read_text()

client = EncordUserClient.create_with_ssh_private_key(private_key)
project = client.get_project(project_hash=meta["project_hash"])

The code examples from this point on assume that you have the data_dir and the project variables available.
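For reference, the project_meta.yaml file read above is assumed to contain at least the two keys used in the snippet (the values below are placeholders):

```yaml
ssh_key_path: /path/to/your/private_ssh_key
project_hash: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
```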

When you have these things in place, there are a couple of options for importing your predictions into Encord Active:

Prepare a .pkl File to be Imported with the CLI

You can prepare a pickle file (.pkl) to be imported with the Encord Active CLI as well. You do this by building a list of Prediction objects. We support predictions for classifications and object detections.

All predictions need a unique identifier for the data unit (the data_hash and, for videos, a frame) and a model confidence score.

In addition to the above, a classification prediction contains three identifiers (classification_hash, attribute_hash, and option_hash), while an object detection prediction contains the feature_hash, the actual prediction data, and the format of that data.

Creating a Prediction Object

Below, you find examples of how to create an object of each of the three supported types.

Since we can have multiple classifications for the same data unit (e.g., has a dog? and has a cat?), we need to uniquely identify them by providing the three hashes from the ontology.

prediction = Prediction(
    data_hash="<data_hash>",  # unique identifier of the data unit
    frame=3,  # optional frame for videos
    confidence=0.8,
    # ... plus the classification_hash, attribute_hash, and option_hash
    # from your ontology (see the command below for how to find them).
)

To find the three hashes, we can inspect the ontology by running

encord-active print ontology

Preparing the Pickle File

Now you're ready to prepare the file. You can copy the appropriate snippet based on your prediction format from above and paste it into the code below. Note the file path passed to open, which defines where the .pkl file will be stored.

import pickle

from encord_active.lib.db.predictions import Prediction, Format

predictions_to_store = []

for prediction in my_predictions:  # iterate over your predictions
    # PASTE the appropriate prediction snippet from above
    predictions_to_store.append(prediction)

with open("/path/to/predictions.pkl", "wb") as f:
    pickle.dump(predictions_to_store, f)

In the above code snippet, you will have to fetch the data_hash, class_id, etc. from within the for loop over your own predictions.

Import Your Predictions via the CLI

To import the predictions into Encord Active, you run the following command inside the project directory:

encord-active import predictions /path/to/predictions.pkl

This will import your predictions into Encord Active and run all the metrics on your predictions. With the .pkl approach, you are done after this step.

Predictions from Your Prediction Loop

You probably have a prediction loop, which looks similar to this:

def predict(test_loader):
for imgs, img_ids in test_loader:
predictions = model(imgs)

You can directly import your predictions into Encord Active by using an encord_active.lib.model_predictions.writer.PredictionWriter. The code would change to something similar to this:

from encord_active.lib.model_predictions.writer import PredictionWriter

def predict(test_loader):
    with PredictionWriter(data_dir, project) as writer:  # project is defined above.
        for imgs, img_ids in test_loader:
            predictions = model(imgs)
            for img_id, img_preds in zip(img_ids, predictions):
                for pred in img_preds:
                    writer.add_prediction(
                        data_hash=img_id,
                        class_uid=pred.class_id,
                        confidence_score=pred.confidence,
                        # either a bounding box ...
                        bbox=pred.bbox,  # dict with x, y, w, h (normalized)
                        # ... or a segmentation (mask or normalized polygon points)
                        # polygon=pred.mask,
                        frame=0,  # for videos, the frame number of the prediction
                    )

In the code example above, the arguments to add_prediction are:

  • data_hash: The data_hash of the data unit that the prediction belongs to.
  • class_uid: The featureNodeHash of the ontology object corresponding to the class of the prediction.
  • confidence_score: The model confidence score.
  • bbox: A bounding box prediction. This should be a dict with the format:
'x': 0.1 # normalized x-coordinate of the top-left corner of the bounding box.
'y': 0.2 # normalized y-coordinate of the top-left corner of the bounding box.
'w': 0.3 # normalized width of the bounding box.
'h': 0.1 # normalized height of the bounding box.
  • polygon: A polygon represented either as a list of normalized [x, y] points or a mask of size [h, w].
  • frame: If predictions are associated with a video, then the frame number should be provided.

Only one bounding box or polygon can be specified in any given call to this function.
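To make the normalization concrete, here is a small, self-contained sketch that converts a pixel-space box to the normalized dict described above (the image size and pixel coordinates are made-up example values):

```python
# Hypothetical image size and pixel-space box.
img_w, img_h = 1280, 720
x_px, y_px, w_px, h_px = 128, 144, 384, 72

# Normalize by the image dimensions to get the expected dict format.
bbox = {
    "x": x_px / img_w,  # top-left x, normalized by image width
    "y": y_px / img_h,  # top-left y, normalized by image height
    "w": w_px / img_w,  # width, normalized by image width
    "h": h_px / img_h,  # height, normalized by image height
}
print(bbox)  # → {'x': 0.1, 'y': 0.2, 'w': 0.3, 'h': 0.1}
```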

Predictions from KITTI Files


This works for bounding boxes only.

If you have KITTI labels stored in text (or CSV) files, there is a utility function to import the predictions from those files. For this, each file must be associated with exactly one image, and its file name must contain the data_hash of the associated image.

The file structure needs to be as follows:

├── labels
│   ├── aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee__whatever_you_may_need.txt
│   ├── ...
│   └── aaaaaaaa-bbbb-cccc-dddd-ffffffffffff__whatever_you_may_need.csv
└── ontology_label_map.json

That is, a root directory with two components:

  1. A subdirectory named "labels" that contains text files with names that start with the data_hash followed by two underscores
  2. A json file which maps class names to Encord ontology classes

We cover the two components below.
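As a quick illustration of the naming convention, the data_hash can be recovered from such a file name by splitting on the double underscore (this is just a sketch of the convention, not the importer's actual code):

```python
from pathlib import Path

def data_hash_from_filename(file_pth: Path) -> str:
    # Everything before the first double underscore is the data_hash.
    return file_pth.name.split("__", 1)[0]

p = Path("labels/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee__whatever_you_may_need.txt")
print(data_hash_from_filename(p))  # → aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
```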

Text File Format

The KITTI importer supports the standard KITTI label format, with one additional column appended for the model confidence.

An example:

car 0.00 0 0.00 587.01 173.33 614.12 200.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00 97.85
cyclist 0.00 0 0.00 665.45 160.00 717.93 217.99 0.00 0.00 0.00 0.00 0.00 0.00 0.00 32.65
pedestrian 0.00 0 0.00 423.17 173.67 433.17 224.03 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3.183

Columns are:

  • class_name: str
  • truncation: float ignored
  • occlusion: int ignored
  • alpha: float ignored
  • xmin: float
  • ymin: float
  • xmax: float
  • ymax: float
  • height: float ignored
  • width: float ignored
  • length: float ignored
  • location_x: float ignored
  • location_y: float ignored
  • location_z: float ignored
  • rotation_y: float ignored
  • confidence: float

Note that the columns marked ignored must still be present in each line, but their values are not used.
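To illustrate the column layout, a minimal, hypothetical parser for one such line (using the first example line from above) could look like this:

```python
def parse_kitti_line(line: str) -> dict:
    # Split on whitespace; indices follow the column list above.
    fields = line.split()
    return {
        "class_name": fields[0],
        "xmin": float(fields[4]),
        "ymin": float(fields[5]),
        "xmax": float(fields[6]),
        "ymax": float(fields[7]),
        "confidence": float(fields[15]),  # the appended final column
    }

line = "car 0.00 0 0.00 587.01 173.33 614.12 200.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00 97.85"
print(parse_kitti_line(line))
```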

The JSON Class Map

The JSON class map needs to follow this structure:

{
  "OTk2MzM3": "pedestrian",
  "NzYyMjcx": "cyclist",
  "Nzg2ODEx": "car"
}

The keys should correspond to the featureNodeHash of a bounding box object in the project ontology. To list the available hashes from your project, you can do this in your script:

# NB: Remember to include the first code snippet on this page.
print({o["featureNodeHash"]: o["name"] for o in project.ontology["objects"]})
# Outputs something similar to
# {'OTk2MzM3': 'Pedestrian', 'NzYyMjcx': 'Cyclist', 'Nzg2ODEx': 'Car'}

The values of the JSON file should be the values that can appear in the first column of text files described above.
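A small, hypothetical sanity check that every class name used in your label files appears among the map's values (the hashes and names below are the example values from above):

```python
import json

# Example map, matching the JSON structure shown above.
object_map_json = '{"OTk2MzM3": "pedestrian", "NzYyMjcx": "cyclist", "Nzg2ODEx": "car"}'
object_map = json.loads(object_map_json)

# Class names appearing in the first column of your label files.
used_names = {"car", "cyclist", "pedestrian"}

# Any name not covered by the map would not resolve to an ontology class.
missing = used_names - set(object_map.values())
print(missing)  # → set()
```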

Importing the Predictions

To import the predictions, you do the following:

import json

from encord_active.lib.model_predictions.importers import import_KITTI_labels
from encord_active.lib.model_predictions.writer import PredictionWriter

predictions_root = Path("/path/to/your/predictions")
object_map = json.loads((predictions_root / "ontology_label_map.json").read_text())

with PredictionWriter(cache_dir=data_dir, project=project, custom_object_map=object_map) as writer:
    import_KITTI_labels(project, data_root=predictions_root, prediction_writer=writer)

Predictions from Masks


This works for segmentation/polygons only.

If you have your predictions stored as PNG masks of shape [height, width], where each pixel value corresponds to a class, then you can use the import_mask_predictions function from encord_active.lib.model_predictions.importers. It requires that you provide a mapping between file names and data hashes.

Assuming you have predictions stored in a directory like this:

├── aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.png
├── ...
└── aaaaaaaa-bbbb-cccc-dddd-ffffffffffff.png

or in a nested structure like

├── dir1
│   ├── aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee.png
│   ├── ...
│   └── aaaaaaaa-bbbb-cccc-dddd-ffffffffffff.png
└── dir2
   ├── bbbbbbbb-bbbb-cccc-dddd-eeeeeeeeeeee.png
   ├── ...
   └── bbbbbbbb-bbbb-cccc-dddd-ffffffffffff.png

You can use this template, where the class_map, the predictions_root, and the du_hash_name_lookup are what you need to change:

from encord_active.lib.model_predictions.importers import import_mask_predictions
from encord_active.lib.model_predictions.writer import PredictionWriter

class_map = {
    # featureNodeHash: pixel_value
    "OTk2MzM3": 1,  # "pedestrian"
    "NzYyMjcx": 2,  # "cyclist"
    "Nzg2ODEx": 3,  # "car"
    # Note: the value 0 is reserved for "background"
}

predictions_root = Path("/path/to/predictions")

with PredictionWriter(cache_dir=data_dir, project=project) as writer:
    import_mask_predictions(
        project,
        class_map=class_map,
        data_root=predictions_root,
        prediction_writer=writer,
        # this is what provides the mapping between file names and data hashes:
        du_hash_name_lookup=lambda file_pth: (file_pth.stem, 0),
    )

A couple of notes:

  1. The script will look recursively for files with a .png extension and import them.
  2. For each file, every "self-contained" contour will be interpreted as an individual prediction. A mask containing two separate blobs of class 1 and one blob of class 2, for example, will be treated as three objects.
  3. NB: model confidence scores will be set to 1... we're working on fixing this!
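To illustrate the pixel-value-to-class convention, here is a toy mask in pure Python (the values are hypothetical and match the example class_map above):

```python
# A toy mask as nested lists; each pixel value is a class id (0 = background).
mask = [
    [0, 1, 1, 0, 0, 1, 1, 0],  # two separate class-1 ("pedestrian") blobs
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 2, 2, 0, 0, 0],  # one class-2 ("cyclist") blob
    [0, 0, 0, 2, 2, 0, 0, 0],
]

# The distinct non-zero pixel values are the classes that yield predictions.
# Since the two class-1 blobs are separate contours, this mask would be
# imported as three individual predictions: two of class 1, one of class 2.
classes = sorted({v for row in mask for v in row if v})
print(classes)  # → [1, 2]
```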

Running Metrics on Your Predictions

When you have imported your predictions, it is time to run all the metrics on them.

For this, you can use these lines of code:

from encord_active.lib.model_predictions.iterator import PredictionIterator
from encord_active.lib.metrics.execute import run_metrics

run_metrics(data_dir=data_dir, iterator_cls=PredictionIterator)

This will compute all the metrics for your predictions. The next time you run

encord-active visualise

you should be able to see the performance of your model based on the metrics.