Creating annotation projects

Video Tutorial - creating a new annotation project

In the 'Annotate' section of the navigation bar, select 'Annotation projects'. Click the + New annotation project button to create a new project.

1. Enter details

In the 'Enter details' screen, enter a project title and optional description. A clear title and description help keep your projects organized.

If you are part of an organization, you will see an optional project tags dropdown. Project tags are useful for categorizing your projects. Select as many tags as are relevant to your project.

When you're ready to continue setting up your project, click Next.

2. Add datasets

The 'Add datasets' screen is where you attach datasets to your project. You can select from a list of datasets you have already created by clicking Add next to a dataset, or create a new dataset by clicking the + New dataset tab to initiate the dataset creation flow. Attached datasets will appear in a list on the right.

3. Select ontology

The 'Select ontology' screen is where you specify your ontology.


DICOM customers might be more familiar with the term 'labeling protocol', which is equivalent to an ontology.

Encord supports objects within a frame, as well as frame-level classifications, both with nested attributes if needed.
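To make the objects-versus-classifications distinction concrete, here is a minimal sketch of what such an ontology might look like. The structure and field names are illustrative assumptions only, not the exact Encord ontology schema:

```python
# Hypothetical ontology sketch: object labels live inside a frame, while
# classifications apply to the frame as a whole. Both can carry nested
# attributes. Field names here are illustrative, not Encord's schema.
ontology = {
    "objects": [
        {
            "name": "car",
            "shape": "bounding_box",
            # Nested attribute on the object itself.
            "attributes": [
                {"name": "occluded", "type": "radio", "options": ["yes", "no"]},
            ],
        },
    ],
    "classifications": [
        # Frame-level classification: applies to the whole frame,
        # not to any single object within it.
        {
            "name": "weather",
            "type": "radio",
            "options": ["sunny", "rainy", "overcast"],
        },
    ],
}

print(len(ontology["objects"]), len(ontology["classifications"]))
```

Under this sketch, an annotator would draw a bounding box for each car (optionally marking it occluded) and tag every frame with one weather value.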

You can reuse a pre-existing ontology by clicking Select next to the ontology, or create a new one by clicking the + New ontology tab. A preview of the label editor sandbox for your chosen ontology will be shown on the right side of the screen.

Click Next step to continue creating your project.

4. Select your QA type

Select what type of Quality Assurance (QA) you would like for your project.

  • Manual QA requires reviewers to manually review labels created by annotators. Use the 'Sampling rate' slider to adjust the percentage of labels that will be manually reviewed. Click here for more information on manual QA and its setup.

  • Automated QA uses an existing manual QA project as a 'benchmark' to automatically review labels created by annotators. The manual QA 'benchmark project' must be complete and use the same ontology as your automated QA project.

    • Dynamic Benchmark gives you control over how labels in a frame are scored against the benchmark. Click here to learn more about setting up and editing dynamic benchmark scoring functions.
    • Single frame-level classification only evaluates frame-level classifications, not labels within a frame.
  • Click here for more information on automated QA and its setup.

  • Click here for a benchmark QA walkthrough.
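For intuition on what the 'Sampling rate' slider in manual QA controls, here is a small sketch of how a sampling rate could route a fraction of labels to review. This is an illustrative model only, assuming simple random sampling; it is not Encord's actual implementation, and the function and label names are made up:

```python
import random

def sample_for_review(label_ids, sampling_rate, seed=0):
    """Illustrative only: route roughly `sampling_rate` of labels to
    manual review via independent random sampling (not Encord's code)."""
    rng = random.Random(seed)  # seeded for a reproducible demo
    # Each label is selected for review with probability `sampling_rate`.
    return [lid for lid in label_ids if rng.random() < sampling_rate]

labels = [f"label_{i}" for i in range(100)]
reviewed = sample_for_review(labels, sampling_rate=0.2)
print(f"{len(reviewed)} of {len(labels)} labels routed to review")
```

A rate of 1.0 would send every label to review; lower rates trade review coverage for reviewer time.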

5. Create the project

Once you are happy with your choice of datasets, ontology, and QA type, click Create project to complete the process.