Encord is designed to support annotation programs that span multiple teams, departments, and geographies. This page covers how to structure your workforce, manage projects at scale, enforce quality, and monitor operational performance.

Workspace and user model

Every Encord deployment is organized around a Workspace — your organization’s dedicated environment within the platform. All users, projects, datasets, and ontologies live within the Workspace.

Workspace roles

Role                Capabilities
Admin               Full access: manage users, view all projects and datasets, create and delete resources
Workforce Manager   Add and manage Taskers, create projects and datasets, manage user groups
Member              Create and access projects, datasets, and ontologies they are invited to
Tasker              Access only the projects and tasks they are assigned to
Workspace roles control access at the organizational level. Users also have separate roles within individual Projects (Annotator, Reviewer, Team Manager, Admin), allowing fine-grained control over who can do what in each annotation program.

User groups

Workspace Admins and Workforce Managers can create user groups to organize annotators and reviewers into named pools. Groups can be assigned to workflow stages in bulk, making it easier to manage large annotator workforces without configuring permissions task by task.
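The bulk-assignment idea can be sketched as follows. This is illustrative logic only (the names `assign_group_to_stage` and the stage label are hypothetical, not Encord API): a named pool of users is attached to a workflow stage in one operation instead of per-task permission edits.

```python
# Illustrative sketch of bulk stage assignment via user groups.
# All names here are hypothetical, not the Encord API.

review_pool = {"alice", "bob", "carol"}          # a named user group
stage_assignments: dict[str, set[str]] = {}      # stage -> users allowed to work it

def assign_group_to_stage(stage: str, group: set[str]) -> None:
    """Attach an entire pool to a stage in one step."""
    stage_assignments.setdefault(stage, set()).update(group)

assign_group_to_stage("Review 1", review_pool)
assert "alice" in stage_assignments["Review 1"]
```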

Project and workflow structure

Projects

An Encord Project brings together:
  • One or more Datasets (the data to be labeled)
  • An Ontology (the labeling schema)
  • A Workflow (the path tasks follow from annotation to completion)
  • Collaborators (users and their roles)
Projects can be tagged and filtered to support large programs with many concurrent annotation efforts.

Workflows

Workflows define the stages a task passes through — from initial annotation, through one or more review stages, to completion. Encord provides a no-code Workflow builder for creating custom pipelines. Common workflow patterns include:
  • Annotate → Complete — single-stage labeling for straightforward tasks
  • Annotate → Review → Complete — two-stage with human review
  • Annotate → Review → Rework → Review → Complete — iterative review with rework loops
  • Consensus — multiple annotators label the same task independently; disagreements are surfaced for resolution
Workflow templates can be saved and reused across projects, ensuring consistency across teams. See Workflows and Templates for the full reference.
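Conceptually, a workflow is a small state machine over stages. The sketch below models the Annotate → Review → Complete pattern with a rework loop; the transition table and `advance` helper are illustrative, not how Encord implements workflows internally.

```python
# Illustrative state-machine model of an Annotate -> Review -> Complete
# workflow with a rework loop (not Encord internals).
TRANSITIONS = {
    ("Annotate", "submit"):  "Review",
    ("Review",   "approve"): "Complete",
    ("Review",   "reject"):  "Annotate",  # rejected tasks return for re-labeling
}

def advance(stage: str, action: str) -> str:
    key = (stage, action)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid action {action!r} at stage {stage!r}")
    return TRANSITIONS[key]

stage = advance("Annotate", "submit")   # task moves to Review
stage = advance(stage, "reject")        # reviewer rejects; back to Annotate
assert stage == "Annotate"
```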

Task management at scale

Queue management

The Queue tab in each Project shows all tasks organized by workflow stage. Admins and Team Managers can:
  • Assign tasks to specific annotators or reviewers
  • Set task priority (0–100) to control order of execution
  • Move tasks between workflow stages manually
  • Filter by dataset, assignee, data type, task status, or issue status
  • Sort by priority or alphabetically
The Label Editor serves tasks to annotators in descending priority order, ensuring high-priority work is completed first.
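Descending-priority serving behaves like a max-priority queue. A minimal sketch of that ordering, using Python's `heapq` with negated priorities (illustrative only, not Encord's scheduler):

```python
import heapq

# Sketch of descending-priority task serving. Priorities are 0-100;
# higher-priority tasks are served first. Illustrative, not Encord internals.
queue: list[tuple[int, str]] = []

def enqueue(task_id: str, priority: int) -> None:
    assert 0 <= priority <= 100
    heapq.heappush(queue, (-priority, task_id))  # negate: heapq is a min-heap

def next_task() -> str:
    _, task_id = heapq.heappop(queue)
    return task_id

enqueue("frame_001", 30)
enqueue("frame_002", 90)
assert next_task() == "frame_002"   # highest priority comes off the queue first
```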

Task assignment strategies

Strategy                    When to use
Unassigned (pull model)     Annotators self-assign from the queue; best for large, homogeneous workforces
Explicit assignment         Admin assigns specific tasks to specific annotators; best for specialized data or tiered review
Selective workflow stages   Specific users are assigned to specific workflow stages; best for multi-stage QA pipelines
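The first two strategies differ only in who triggers the assignment. A minimal sketch, with hypothetical helper names (`pull`, `assign`) that are not Encord API:

```python
# Illustrative contrast between the pull model and explicit assignment.
unassigned = ["t1", "t2", "t3"]
assignments: dict[str, str] = {}

def pull(annotator: str) -> str:
    """Pull model: the annotator self-assigns the next unassigned task."""
    task = unassigned.pop(0)
    assignments[task] = annotator
    return task

def assign(task: str, annotator: str) -> None:
    """Explicit assignment: an admin routes a specific task to a specific user."""
    unassigned.remove(task)
    assignments[task] = annotator

assign("t2", "expert_reviewer")          # specialized data goes to an expert
assert pull("annotator_a") == "t1"       # everyone else pulls from the queue
```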

Task statuses

Tasks carry a status that reflects their history:
  • New — not yet worked on since being added to the project
  • Reopened — rejected during review and returned for re-labeling
  • Skipped — skipped by one or more annotators

Quality assurance

Review workflows

Encord’s review stages allow designated reviewers to inspect, approve, or reject annotated tasks. Rejected tasks are automatically returned to the annotator with issue notes.

Consensus labeling

For high-stakes labeling tasks, Consensus workflows assign the same task to multiple annotators independently. Disagreements between annotations are surfaced automatically, allowing team managers to resolve conflicts and measure inter-annotator agreement. See Consensus Workflows for setup details.
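One common inter-annotator agreement measure is pairwise percent agreement over categorical labels. The sketch below shows that calculation; Encord's own agreement metric may differ, and the annotator names and labels are made up.

```python
from itertools import combinations

# Illustrative pairwise agreement over categorical labels.
def pairwise_agreement(labels_by_annotator: dict[str, list[str]]) -> float:
    agree = total = 0
    for a, b in combinations(labels_by_annotator.values(), 2):
        for x, y in zip(a, b):
            agree += x == y     # bool counts as 0/1
            total += 1
    return agree / total

labels = {
    "ann_1": ["car", "truck", "car"],
    "ann_2": ["car", "truck", "bus"],
    "ann_3": ["car", "car",   "car"],
}
score = pairwise_agreement(labels)  # fraction of matching pairwise judgments
```

Items with low agreement (here, the third task) are the disagreements a consensus workflow would surface for resolution.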

Issues

Annotators and reviewers can raise Issues on specific labels or frames — flagging ambiguities, errors, or edge cases for discussion. Issues can be assigned, tracked, and resolved within the platform.

Benchmark workflows

Benchmark projects allow you to measure annotator accuracy against a gold-standard dataset. This is useful for:
  • Screening new annotators before onboarding
  • Ongoing quality monitoring for active annotators
  • Training annotators on difficult edge cases
See Benchmark QA for a walkthrough.
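At its core, benchmark scoring compares an annotator's submissions to gold-standard answers. A minimal sketch with made-up field names (not Encord's schema):

```python
# Illustrative benchmark scoring against a gold-standard set.
gold      = {"img_1": "cat", "img_2": "dog", "img_3": "cat"}
submitted = {"img_1": "cat", "img_2": "cat", "img_3": "cat"}

def benchmark_accuracy(gold: dict[str, str], submitted: dict[str, str]) -> float:
    """Fraction of gold-standard items the annotator labeled correctly."""
    matched = sum(submitted.get(k) == v for k, v in gold.items())
    return matched / len(gold)

score = benchmark_accuracy(gold, submitted)  # 2 of 3 correct
```

A screening threshold (say, requiring 0.9 before onboarding) can then be applied to this score.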

Analytics and reporting

Project-level analytics

The Analytics tab in each Project provides detailed statistics on labeling and review activity:
  • Labels created, approved, and rejected
  • Time spent per task, per user, and per stage
  • Open issues by annotator or reviewer
  • Task throughput over time
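Throughput over time is essentially completion counts bucketed by period. A sketch of the day-level aggregation (the Analytics tab computes this server-side; the data here is invented):

```python
from collections import Counter
from datetime import date

# Illustrative throughput calculation: completions per day.
completions = [
    date(2024, 5, 1), date(2024, 5, 1), date(2024, 5, 2),
]
throughput = Counter(completions)   # day -> number of tasks completed
assert throughput[date(2024, 5, 1)] == 2
```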

Role-based analytics views

What you see in the Analytics tab depends on your role:
  • Annotators and Reviewers see their own personal performance metrics
  • Team Managers and Admins see aggregate team metrics and can drill down to individual performance

Workspace-level reporting

Workspace Admins can view activity across all Projects in the Workspace, giving an organization-wide view of annotation throughput, workforce utilization, and quality trends.

Project status management

Admins and Team Managers can set a Project’s status to reflect its current phase:
  • Not started / In progress / Paused / Completed / Cancelled / Archived
Pausing, completing, or cancelling a Project prevents Annotators and Reviewers from opening tasks, while Admins and Team Managers retain full access.
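The access rule above can be sketched as a simple role-and-status check (illustrative logic, not Encord's implementation):

```python
# Illustrative: Project status gates task access for non-privileged roles.
OPEN_STATUSES    = {"Not started", "In progress"}
PRIVILEGED_ROLES = {"Admin", "Team Manager"}

def can_open_tasks(role: str, project_status: str) -> bool:
    """Annotators/Reviewers only work open projects; Admins and
    Team Managers retain access in any status."""
    return project_status in OPEN_STATUSES or role in PRIVILEGED_ROLES

assert can_open_tasks("Annotator", "Paused") is False
assert can_open_tasks("Admin", "Paused") is True
```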

Where to go next