To set up a Benchmark QA Workflow, you need to create two distinct Projects:

  • Benchmark Project: This Project establishes the “ground-truth” labels, which serve as the benchmark for evaluating annotator performance.
  • Production Project: In this Project, annotators generate the production labels. Annotator performance is scored against the ground-truth labels from the first Project.

STEP 1: Import Files into Encord

You must first import your files into Encord. This includes both the files used to establish ‘ground-truth’ labels and your production data.

We recommend creating separate folders for Benchmark and Production tasks.

2. Create a Folder to Store your Files

  1. Navigate to Files under the Index heading in the Encord platform.
  2. Click the + New folder button to create a new folder. A dialog to create a new folder appears.
  3. Give the folder a meaningful name and description.
  4. Click Create to create the folder. The folder is listed in Files.

3. Create JSON or CSV for Import

To import files from cloud storage into Encord, you must create a JSON or CSV file specifying the files you want to upload.

Find helpful scripts for creating JSON and CSV files for the data upload process here.

All types of data (videos, images, image groups, image sequences, and DICOM) from a private cloud are added to a Dataset in the same way, by using a JSON or CSV file. The file includes links to all images, image groups, videos and DICOM files in your cloud storage.

For a list of supported file formats for each data type, go here.
Encord supports file names of up to 300 characters for any file or video you upload.
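
The exact structure of the import file is described in our data import documentation. As a rough, illustrative sketch (the top-level keys, bucket URLs, and file names below are assumptions, not a definitive schema), a minimal JSON file can be generated with a short Python script:

create_import_json.py
# A minimal sketch of building an import JSON file.
# Replace the placeholder object URLs with links to files in your cloud storage,
# and confirm the key names against the data import documentation.
import json

import_spec = {
    "videos": [
        {"objectUrl": "https://my-bucket.s3.amazonaws.com/benchmark/video_01.mp4"},
    ],
    "images": [
        {"objectUrl": "https://my-bucket.s3.amazonaws.com/benchmark/image_01.png"},
    ],
}

with open("benchmark_import.json", "w") as f:
    json.dump(import_spec, f, indent=2)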

4. Import your Files

Import your files into the folder you created, using the JSON or CSV file you prepared in the previous step.

STEP 2: Create Benchmark Project

The Benchmark Project establishes ground truth labels.

1. Create a Benchmark Dataset

Create a Dataset containing tasks designed to establish ground truth labels. These files will be used to generate ‘gold-standard’ labels against which annotator performance will be evaluated. Be sure to give the Dataset a clear and descriptive name.

Learn how to create Datasets here.
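
If you prefer to create the Dataset programmatically, the following sketch uses the SDK's create_dataset method. The Dataset title and storage location are placeholders; adjust them to match where your data is stored.

create_benchmark_dataset.py
# A minimal sketch of creating the Benchmark Dataset with the SDK.
from encord.user_client import EncordUserClient
from encord.orm.dataset import StorageLocation

# Replace <private_key_path> with the full path to your private key
SSH_PATH = "<private_key_path>"

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path=SSH_PATH
)

# Create a Dataset for the benchmark (ground-truth) tasks.
# Adjust the StorageLocation value if your data lives in private cloud storage.
benchmark_dataset = user_client.create_dataset(
    "Benchmark Dataset", StorageLocation.CORD_STORAGE
)
print(benchmark_dataset)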

2. Create an Ontology

Create an Ontology to label your data. The same Ontology is used in the Benchmark Project AND the Production Project.

Learn how to create Ontologies here.
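
Ontologies can also be created with the SDK. The sketch below is illustrative only: the object name, shape, and Ontology title are assumptions, so define whatever structure your labeling task actually needs.

create_ontology.py
# A minimal sketch of creating the shared Ontology with the SDK.
from encord.user_client import EncordUserClient
from encord.objects import OntologyStructure
from encord.objects.common import Shape

# Replace <private_key_path> with the full path to your private key
SSH_PATH = "<private_key_path>"

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path=SSH_PATH
)

# Build an Ontology structure with a single bounding-box object (placeholder schema)
structure = OntologyStructure()
structure.add_object(name="Object of interest", shape=Shape.BOUNDING_BOX)

# Create the Ontology; attach this SAME Ontology to both Projects
ontology = user_client.create_ontology(
    "Benchmark QA Ontology",
    "Shared Ontology for the Benchmark and Production Projects",
    structure,
)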

3. Create a Workflow Template

Create a Workflow template to establish ground truth labels and give it a meaningful name like “Establishing Benchmarks”. The following example template is just one approach; however, the process for creating benchmark labels is flexible, allowing you to choose any Workflow that suits your requirements.

For information on how to create Workflow templates see our documentation here.

4. Create the Benchmark Project

Ensure that you:

  • Attach ONLY the Benchmark Dataset to the Project.
  • Attach the Benchmark Workflow Template to the Project.
  1. In the Encord platform, select Projects under Annotate.
  2. Click the + New annotation project button to create a new Project.
  3. Give the Project a meaningful title and description, for example “Benchmark Labels”.
  4. Click the Attach ontology button and attach the Ontology you created.
  5. Click the Attach dataset button and attach the Benchmark Dataset you created.
  6. Click the Load from template button to attach the template you created in STEP 2.3.
  7. Click Add collaborators to add collaborators to the Project and assign them to the relevant Workflow stages.
  8. Click Create project to finish creating the Project. You have now created the Project used to establish ground-truth labels.

STEP 3: Create Benchmark Labels

Complete the Benchmark Project created in STEP 2 to establish a set of ground truth labels for all data units in the Benchmark Dataset.

To learn how to create annotations, see our documentation here.
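
If you want to confirm programmatically that every benchmark task has finished the Workflow, the following sketch counts tasks in the final stage using the same SDK calls as the scripts later in this guide. The stage name “Complete” is an assumption; use the name of the final stage in your Benchmark Workflow template.

check_benchmark_complete.py
# A minimal sketch that counts tasks in the final stage of the Benchmark Project.
from encord.user_client import EncordUserClient
from encord.workflow import FinalStage

# Replace <benchmark_project_hash> with the hash of your Benchmark Project
PROJECT_HASH = "<benchmark_project_hash>"

# Replace <private_key_path> with the full path to your private key
SSH_PATH = "<private_key_path>"

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path=SSH_PATH
)

project = user_client.get_project(PROJECT_HASH)

# The stage name "Complete" is a placeholder; match it to your Workflow template
final_stage = project.workflow.get_stage(name="Complete", type_=FinalStage)
completed_tasks = list(final_stage.get_tasks())
print(f"{len(completed_tasks)} benchmark tasks have reached the final stage.")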

STEP 4: Create Production Project

Create a Project where your annotation workforce labels data and is evaluated against benchmark labels.

1. Create a Production Dataset

Create a Dataset using your Production data. Give the Dataset a meaningful name and description to distinguish it from the Benchmark Dataset created in STEP 2.

2. Create a Production Workflow Template

Create a Workflow template for labeling production data using Benchmark QA and give it a meaningful name like “Benchmark QA Production Labels”.

The following Workflow template is an example showing how to set up a Workflow for Benchmark QA.

  • A Task Agent is used to route tasks depending on whether they originate in the Benchmark Dataset or the Production Dataset.
  • A script is added to the Consensus block of the Production Workflow to evaluate annotator performance.

3. Create the Production Project

Ensure that you:

  • Attach both the Benchmark Dataset AND the Production Dataset when creating the Production Project.
  • Attach the SAME Ontology you created for the Benchmark Project.
  • Attach the Production Workflow Template to the Project.
  1. In the Encord platform, select Projects under Annotate.
  2. Click the + New annotation project button to create a new Project.
  3. Give the Project a meaningful title and description, for example “Benchmark QA Production Labels”.
  4. Click the Attach ontology button and attach the SAME Ontology you created for the Benchmark Project.
  5. Click the Attach dataset button and attach the Benchmark AND the Production Datasets.
  6. Click the Load from template button to attach the “Benchmark QA Production Labels” template you created in STEP 4.2.
  7. Click Add collaborators. Add collaborators to the Project and add them to the relevant Workflow stages.
  8. Click Create project to finish creating the Project. You have now created the Project used to label production data and evaluate annotators against the benchmark labels.

4. Create and run the SDK script for the Agent node

Create and run the following benchmark_routing.py script to check whether a data unit is part of the Benchmark Dataset or the Production Dataset.

  • If a task is part of the Benchmark Dataset, the task is routed along the “Yes” pathway and proceeds to the Consensus 1 stage of the Production Project, where annotator performance is evaluated.
  • If the task is not part of the Benchmark Dataset it is routed along the “No” pathway and proceeds to the Annotate 1 stage of the Production Project, where production data is labeled.
Run this script each time new production data is added to the Production Dataset.
benchmark_routing.py
# Import dependencies
from encord.user_client import EncordUserClient
from encord.workflow import AgentStage

# Replace <project_hash> with the hash of your Production Project
PROJECT_HASH = "<project_hash>"

# Replace <benchmark_dataset_hash> with the hash of your Benchmark Dataset
BENCHMARK_DATASET_HASH = "<benchmark_dataset_hash>"

# Replace <private_key_path> with the full path to your private key
SSH_PATH = "<private_key_path>"

# Authenticate using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path=SSH_PATH
)

# Specify the Project that contains the Task agent.
project = user_client.get_project(PROJECT_HASH)

# Specify the Task Agent stage of the Production Workflow
agent_stage = project.workflow.get_stage(name="Benchmark Task?", type_=AgentStage)

# Collect the data hashes of all data units in the Benchmark Dataset
benchmark_dataset = user_client.get_dataset(BENCHMARK_DATASET_HASH)
benchmark_data_hashes = {data_row.uid for data_row in benchmark_dataset.data_rows}

# Route each task: benchmark tasks proceed to the Consensus block ("YES"),
# production tasks proceed to the annotation stage ("NO")
for task in agent_stage.get_tasks():
    if task.data_hash in benchmark_data_hashes:
        task.proceed(pathway_name="YES")
    else:
        task.proceed(pathway_name="NO")

5. Create a script for the Review & Refine stage

Create the following compare_labels.py script for the Consensus 1 stage in the Production Project. The script compares the annotator’s labels in the Production Project with the ground truth labels established in the Benchmark Project.

All tasks in this stage are rejected and routed to the Archive stage, as they do not constitute production data. The purpose of the Consensus block is to evaluate annotator performance.

compare_labels.py
# Import dependencies
from encord import EncordUserClient, Project

from encord.workflow import (
    AnnotationStage,
    ReviewStage,
    ConsensusAnnotationStage,
    ConsensusReviewStage,
    FinalStage,
)

# Replace <project_hash> with the hash of your Project
PROJECT_HASH = "<project_hash>"

# Replace <private_key_path> with the full path to your private key
SSH_PATH = "<private_key_path>"


# Authenticate
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path=SSH_PATH
)

# Get Production Project
project = user_client.get_project(PROJECT_HASH)

# The review stage of the Consensus block compares labels on the Benchmark Dataset
stage = project.workflow.get_stage(name="Consensus 1", type_=ConsensusReviewStage)

# For each task in the stage, compare every annotator's label branch
# against the ground-truth labels from the Benchmark Project
for task in stage.get_tasks():
    # Code for comparing labels goes here
    pass
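
The comparison logic itself depends on your Ontology. As one illustrative approach for bounding-box labels, annotators can be scored by intersection-over-union (IoU) against the benchmark boxes. The helper below is a generic sketch: it assumes you have already extracted (x, y, width, height) tuples from the annotator and benchmark label rows, and it is not tied to any particular Encord API.

# A generic IoU helper for scoring bounding-box labels against benchmark boxes.
# Boxes are (x, y, width, height) tuples in the same coordinate system.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b

    # Intersection rectangle
    ix1 = max(ax, bx)
    iy1 = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    intersection = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    union = aw * ah + bw * bh - intersection
    return intersection / union if union else 0.0

# Example: an annotator box offset from a benchmark box scores roughly 0.47
print(iou((10, 10, 50, 50), (20, 20, 50, 50)))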

STEP 5: Create Labels

Once your Production Project is set up, annotators can begin labeling the production data. Tasks from both the Benchmark Dataset and the Production Dataset are assigned to annotators. Their performance is then assessed based on how accurately they label the Benchmark tasks.

To learn how to create annotations, see our documentation here.

STEP 6: Evaluate Annotator Performance

Run the compare_labels.py script created in STEP 4.5 to evaluate annotator performance.