
Task manager (Queue)

The task manager manages all annotation and review tasks for labeling queues & workflows. The task manager is built to optimize labeling and quality control workflows and is designed to scale to thousands of annotators, reviewers, team managers, and administrators working concurrently on the same project. The task manager is enabled by default but can be switched on and off under the 'Options' tab in project settings.

Annotation and review tasks are distributed automatically on a first-in, first-out basis: tasks that have been in the queue the longest are served first. Annotation tasks are generated when datasets are attached to a project or when new data is added to attached datasets; review tasks are generated once an annotator submits an annotation task. Detaching a dataset removes any associated annotation and review tasks, so exercise caution before doing so.

Team managers and administrators can also assign tasks explicitly to individual annotators and reviewers. Once an annotation or review task is distributed to an annotator or reviewer, it is reserved by that individual, prohibiting other team members from accessing that task. Both annotation and review tasks are accessible in the 'Queue' pane in the 'Labels' tab.

Task generation#

Annotation tasks are generated and added to the label queue when a dataset or a set of datasets is attached to a project, and when a new data asset is added to attached datasets. Review tasks are generated and added to the review queue once an annotator submits an annotation task. Conversely, detaching a dataset removes any associated annotation and review tasks, so exercise caution before detaching.

By default, each data asset will be labeled once, and each label submitted for review will be reviewed once. You can create additional review tasks by clicking the + Add reviews button and following the steps in the window. You can reopen submitted annotation tasks if you wish to send the data asset back into the queue for further labeling by selecting the relevant assets and clicking the Reopen button.
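For orientation, here is a minimal sketch of the default generation rules above. The `Queues` class and task tuples are illustrative assumptions for demonstration, not Encord's internal data model: attaching a dataset creates one annotation task per data asset, and each submitted annotation task creates one review task plus any additional reviews you add.

```python
# Illustrative sketch of the default task-generation rules; the Queues class
# and task tuples are assumptions, not Encord's data model.
from dataclasses import dataclass, field

@dataclass
class Queues:
    label_queue: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def attach_dataset(self, data_assets):
        # One annotation task is generated per data asset in the attached dataset.
        self.label_queue += [("annotate", asset) for asset in data_assets]

    def submit_annotation(self, asset, extra_reviews=0):
        # Submission generates one review task by default; "+ Add reviews"
        # corresponds to extra_reviews > 0.
        self.review_queue += [("review", asset)] * (1 + extra_reviews)

queues = Queues()
queues.attach_dataset(["image_001.jpg", "image_002.jpg"])
queues.submit_annotation("image_001.jpg", extra_reviews=1)  # reviewed twice
```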

Task distribution#

Annotation and review tasks are distributed automatically using the first-in, first-out method: tasks that have been in the queue the longest are served first. Once an annotator or reviewer clicks the Start labeling or Start reviewing button, the next available free task in the queue is reserved by that individual, prohibiting other team members from accessing it. Once the task is fetched, the annotator or reviewer is taken to the label editor to complete it.

Project administrators and team managers can override the automated distribution of tasks by explicitly assigning tasks to individuals in the 'Queue' pane in the 'Labels' tab. Assignments can be done on a task-by-task basis or in bulk by selecting the relevant tasks and clicking the Assign button.

Tasks can be released by pressing the icon next to the task and clicking the Release task button. Reserved tasks do not expire and remain assigned to an individual until they are submitted, released, or skipped.
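A minimal sketch of this first-in, first-out behaviour, assuming a simple in-memory queue (the `TaskQueue` class and user names are illustrative): the oldest free task is reserved by whoever starts it and stays reserved until it is submitted, released, or skipped.

```python
# Illustrative first-in, first-out queue with reservation; not Encord's
# implementation. Where a released task re-enters the queue is an assumption.
from collections import deque

class TaskQueue:
    def __init__(self, tasks):
        self.pending = deque(tasks)   # oldest tasks at the front
        self.reserved = {}            # task -> user currently holding it

    def start(self, user):
        """'Start labeling' / 'Start reviewing': reserve the oldest free task."""
        if not self.pending:
            return None
        task = self.pending.popleft()
        self.reserved[task] = user    # no expiry: reserved until submitted,
        return task                   # released, or skipped

    def release(self, task):
        """'Release task': free the reservation and re-queue the task."""
        self.reserved.pop(task, None)
        self.pending.appendleft(task)

queue = TaskQueue(["task_1", "task_2", "task_3"])
queue.start("annotator_a")  # -> "task_1", the task longest in the queue
queue.start("annotator_b")  # -> "task_2"
```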

tip

Annotation tasks can be submitted programmatically via our API or using Encord's Python SDK.
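As a rough sketch of what that can look like with the Python SDK: the authentication calls below follow the SDK docs, but the submission method name and the placeholder hashes are assumptions, so verify them against the SDK reference for your version.

```python
# Hedged sketch only: treat submit_label_row_for_review as a placeholder and
# check the Encord SDK reference for the submission method in your version.
from pathlib import Path
from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    Path("/path/to/ssh-private-key").read_text()
)
project = user_client.get_project("<project-hash>")

# Submit a labelled row (identified by its label hash) into the review queue.
project.submit_label_row_for_review("<label-hash>")
```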

Task completion#

An annotation task is completed once all outstanding labels subject to review have been reviewed. Completed annotation tasks and annotation tasks currently in the review stage are visible in the 'Activity' pane in the 'Labels' tab.

Annotation#

Annotation tasks are completed in the label editor. Annotators hit the Submit button once labeling work is complete and they are confident that the quality and coverage of the produced labels meet the accuracy threshold for the project. Submitted labels are selected for review depending on the project's Quality configuration.

Annotators can skip tasks by clicking the Skip button. The next available task is automatically distributed and assigned if a task is skipped.

caution

To prevent annotator work from being overwritten, it is critical that all annotations are made via the Task Management System's Queue tab, and that only the person assigned to the task makes annotations at any given time.

Review#

Review tasks are completed in a purpose-built user interface in the label editor.

Review mode components:

  • A. Single label review toggle
  • B. Edit labels
  • C. Pending reviews pane
  • D. Completed reviews pane
  • E. Reject and Approve buttons
  • F. Approve and Reject all in frame buttons

'Pending' and 'Completed' review panes#

All labels for review for a particular data asset assigned to the reviewer are automatically loaded into the 'Pending reviews' pane. Completed reviews are displayed in the 'Completed reviews' pane. You can click on specific objects to highlight them. Labels can be selected and then approved or rejected for a given instance or in bulk using the Reject and Approve buttons or the matching hotkeys, b for reject and n for approve.

'Single label review' toggle#

You can enter the 'Single label review' mode by toggling the switch at the top. The single label review mode automatically highlights and hides other objects, allowing you to review and approve or reject a single label at a time and quickly browse through individual labels using the Up and Down keys on your keyboard.

note

The reviewer is automatically taken to the next set of requested label reviews once all labels in a particular review task have been reviewed.

Edit labels#

Reviewers can conveniently edit labels and make small adjustments without needing to return the entire set of labels to the annotator. Press the Edit labels button and make any necessary changes before switching back to review mode. Currently, only a subset of label edit operations is supported:

  • objects: moving the object or individual vertices
  • classifications: changing the classification value
  • objects and classifications: changing any nested classifications

Approve/Reject all in frame buttons#

In addition to being able to review all labels for a given instance, you can also review labels grouped by frame. For review workflows that progress through a video frame by frame rather than by instance, use the Approve all in frame and Reject all in frame buttons. Be sure you want to apply that judgment to all labels in a given frame before using this feature!

Rejected labels#

If a reviewer rejects a label during the review stage, it will be marked as Returned in the 'Queue' pane in the 'Labels' tab. By default, rejected annotation tasks are returned and assigned to the queue of the person who submitted the task.

Returned tasks are resolved in a purpose-built user interface in the label editor. Click the exclamation point icon on the right-hand side of the screen to open the drawer containing rejected labels. Once the reviewer's comments have been addressed, click the thumbs up icon to mark the label as resolved.

Annotation tasks cannot be re-submitted until all issues have been marked as resolved. Once a task is re-submitted, the labels marked as resolved are sent back for an additional review. There is no limit on how many times a label can be rejected and sent back for correction.

Returned task label editor view
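A minimal sketch of the resubmission rule, assuming a simple list of issues per returned task (the issue structure is illustrative):

```python
# Illustrative check of the resubmission rule: a returned task can only be
# re-submitted once every rejected label has been marked as resolved.
def can_resubmit(issues):
    return all(issue["resolved"] for issue in issues)

issues = [
    {"label": "car_01", "resolved": True},
    {"label": "person_03", "resolved": False},  # reviewer comment not yet addressed
]
can_resubmit(issues)  # False: one rejection is still unresolved
```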

Missing labels#

If a reviewer determines that a label is missing entirely, they can use the report missing labels feature to indicate labels are missing in a given frame or image. Missing label reports will be sent back to the annotator via the same queue as rejected labels.

Submit missing label report

Quality Assurance#

The task manager allows you to set granular parameters for quality control workflows. Specifically, the task manager enables you to define:

  • The percentage of labels that are to be manually reviewed
  • Rules for distribution of review tasks
  • Common rejection reasons that can be used to identify and systematize errors in your labels
  • Reviewer to class and annotator mapping (e.g. label X with class Y should always be reviewed by reviewer Z)
  • Assignment of tasks rejected after a specific number of review cycles to expert reviewers

Quality assurance settings:

  • A. Sampling rate
  • B. Multi review assignment
  • C. Default rejection reasons
  • D. Reviewer mapping
  • E. Expert review

Sampling rate#

Project administrators can dynamically change the sampling rate applied to submitted annotation tasks. The sampling rate determines the proportion of the submitted labels that a reviewer should review. This can be modified with the slider.

Sampling rates can also be configured by annotation type and annotator (e.g. class Y should have a sampling rate of 50%, class Z should have a sampling rate of 80%, annotator A should have a sampling rate of 70%, annotator B should have a sampling rate of 95%) by clicking the Configure button (this feature is only available to paying users).
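One way to read the sampling rate is as the probability that any individual submitted label is sent to review. A minimal sketch, assuming independent sampling per label (the exact selection mechanism is not specified here):

```python
# Illustrative interpretation of the sampling rate: each submitted label is
# sent to review with probability equal to the configured rate.
import random

def should_review(rate: float) -> bool:
    return random.random() < rate

# With a 50% sampling rate, roughly half of 1,000 submitted labels get reviewed.
sum(should_review(0.5) for _ in range(1000))
```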

Multi review assignment#

Annotation tasks with many labels across one data asset might get partitioned into review tasks that are distributed to different reviewers. Enabling multi review assignment means that all review tasks generated through the submission of one annotation task are assigned to the same reviewer.
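A minimal sketch of the difference, assuming review tasks carry a reference to the annotation task that produced them (task names and reviewer names are illustrative): with multi review assignment enabled, all review tasks from one submission go to a single reviewer instead of being spread across the reviewer pool.

```python
# Illustrative grouping rule for multi review assignment; the task names and
# reviewer pool are assumptions for demonstration.
import random

REVIEWERS = ["reviewer_x", "reviewer_y", "reviewer_z"]

def assign(review_tasks, multi_review_assignment):
    if multi_review_assignment:
        reviewer = random.choice(REVIEWERS)  # one reviewer for the whole batch
        return {task: reviewer for task in review_tasks}
    return {task: random.choice(REVIEWERS) for task in review_tasks}  # may differ per task

tasks_from_one_submission = ["review_1", "review_2", "review_3"]
assign(tasks_from_one_submission, multi_review_assignment=True)
```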

Default rejection reasons#

The default rejection reasons setting allows an admin to create default responses that a reviewer can select when rejecting annotation tasks. Pressing the + New button and entering a response will save it for future reviews. Setting default rejection reasons can help you identify and systematize errors in your labels.

Reviewer mapping#

You can configure rules that automatically assign specific reviewers to classes and annotators (e.g. label X with class Y should always be reviewed by reviewer Z). The setting can be configured by toggling the 'Reviewer mapping enabled' option.

Clicking the Configure button opens a window where you can assign reviewers to specific annotators or classes. Assigning a reviewer to classes (objects or classifications) is done under the 'Class mapping' tab, and assigning a reviewer to annotators under the 'Annotator mapping' tab. Any number of reviewers can be assigned to annotators and classes; one of them is selected for each submitted task.

note

If an annotator is mapped to one or more reviewers and creates labels with specific classes that are also mapped to a reviewer, the class mapping takes precedence over the annotator mapping.
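A minimal sketch of how reviewer selection can work under these mappings. The dictionaries and the random choice among mapped reviewers are illustrative assumptions; the precedence rule follows the note above.

```python
# Illustrative reviewer selection: class mapping takes precedence over
# annotator mapping; the mappings and random choice are assumptions.
import random

def pick_reviewer(annotator, label_class, class_map, annotator_map, all_reviewers):
    if label_class in class_map:
        return random.choice(class_map[label_class])    # class mapping wins
    if annotator in annotator_map:
        return random.choice(annotator_map[annotator])  # fall back to annotator mapping
    return random.choice(all_reviewers)                 # no mapping configured

class_map = {"vehicle": ["reviewer_z"]}
annotator_map = {"annotator_a": ["reviewer_x", "reviewer_y"]}
pick_reviewer("annotator_a", "vehicle", class_map, annotator_map, ["reviewer_q"])
# -> "reviewer_z": the class mapping overrides the annotator mapping
```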

Expert review#

If a particular label is repeatedly rejected by the reviewer, there may be deeper issues than can be solved by simply having the annotator reexamine their work. In those cases, it may be necessary to have a special class of designated expert reviewers: reviewers who excel at communicating the reasons for rejection or have particularly deep expertise in the data domain being annotated.

We've built the expert review quality assurance feature to solve just those cases. Configure the threshold number of rejections and the special set of expert reviewers. If a label is rejected more than the threshold count, the next review task will be allocated to one of the designated expert reviewers, regardless of the standard class and annotator-reviewer mappings specified.
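A minimal sketch of this routing rule, with a placeholder threshold and reviewer names (both are configured per project, not fixed):

```python
# Illustrative expert-review routing; the threshold, reviewer names, and the
# random choice among experts are placeholders for the project configuration.
import random

EXPERT_REVIEWERS = ["expert_1", "expert_2"]
REJECTION_THRESHOLD = 2

def route_next_review(rejection_count, default_reviewer):
    # Past the threshold, the standard class/annotator mappings are bypassed.
    if rejection_count > REJECTION_THRESHOLD:
        return random.choice(EXPERT_REVIEWERS)
    return default_reviewer

route_next_review(rejection_count=3, default_reviewer="reviewer_z")  # -> an expert
```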