Working with annotation projects

Annotation projects allow you to create and automate annotations for various kinds of data to be used in machine learning applications.

Project dashboard

Clicking on a project in the 'Projects overview' screen takes you to its 'Project dashboard'.

This is where you monitor and manage your project. For example, you can view your project's summary statistics, manage labeling tasks, view your team's productivity, train models and invite collaborators.

The dashboard is split into 7 tabs:

  • Summary: a high-level view of labeling and productivity statistics
  • Explore: a quick way to explore the distribution of instances and labels across data assets in the project
  • Labels: for managing all the project's labeling activity and tasks
  • Performance: a more detailed view of your team's manual labeling and productivity
  • Models: for creating, training and using pre-trained computer vision models with your labeled training data
  • Export: for exporting your labeling data
  • Settings: editing project options, ontology, team collaborators, and other general project settings

Summary

Clicking an annotation project takes you to its summary dashboard. This dashboard has 3 components and gives you a rich visual display of your project's progress at a high level.

Annotation task progress

Displays the number of annotation tasks that are in each state: Unassigned, Assigned or Completed.

  • Unassigned: When no one is assigned to the task. The Reserved by column in Labels > Queue is empty for the corresponding task.
  • Assigned: A task is displayed as Assigned when it has been assigned to an annotator, but has not yet been submitted for review.
  • Completed: A task is displayed as Completed in the pie chart when the annotator submits it for review. A Completed annotation task shows as a task of type Review in the Labels > Queue tab.

Review task progress

Analogous to the annotation task progress section, except for review tasks. Review tasks are created according to the configured sampling rate for that ontology class, and each created review task corresponds to an annotation instance.

  • Incomplete: Review tasks are incomplete when they are either queued or assigned, but have not received a judgement. A judgement of either rejection or acceptance will complete a review task.
  • Completed: Completed tasks are those that have had a judgement made against them, regardless of whether that judgement is Rejected or Approved.

For a more comprehensive summary of how a task moves from annotation through instance review and full completion, reference the Status section below.

Project estimate calculator

You can use the known statistics of your project, Total frames and Frames with labels, to simulate how long the project might take given a different number of annotators and an estimate of how many frames per day each annotator can complete.
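
The underlying arithmetic is straightforward. Here is a minimal sketch, assuming the calculator simply divides the remaining unlabeled frames by the team's combined daily throughput (the exact formula used in the app may differ):

```python
def estimate_days(total_frames: int, frames_with_labels: int,
                  annotators: int, frames_per_day: int) -> float:
    """Estimate the remaining project duration in days.

    Assumed formula: remaining frames divided by the combined
    daily throughput of all annotators.
    """
    remaining = total_frames - frames_with_labels
    return remaining / (annotators * frames_per_day)

# Example: 10,000 total frames, 2,500 already labeled,
# 4 annotators completing 500 frames/day each.
print(estimate_days(10_000, 2_500, 4, 500))  # -> 3.75
```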

Explore

The Explore tab, indicated by the magnifying glass icon, provides interfaces to help you understand how the project's annotations are distributed across data assets at both the instance and label level. It also allows deeper exploration through nested classifications on objects, as well as frame-level classifications.

Instance statistics

This section provides the total count of all instances across the datasets in your project.

  • Project total: Shows total instances (both objects and classifications) across the project by default. To get instance statistics for individual data files, click the dropdown to select a data file.
  • Select class: Shows the total instances for a particular class. This is a summary of how a given class is distributed across your project's data assets. The pie chart segments show a breakdown of how that class is split across the data assets.
  • Display timestamps: Flip the toggle to switch between frame numbers and timestamps for the labels.

Label statistics

This is a summary of how your labels are distributed across the project. The pie chart shows a breakdown of how many labels there are for a given class.

  • Project total: Shows the total number of labels across different datasets in the project. To get label stats for individual data files, click the dropdown to select a data file.
  • Objects: Click on the pie chart segment of a class to see the total number of labels and its nested classifications (sometimes called nested attributes) if available for that class.
  • Classifications: Shows the global classification at project or individual video level. For example, location, time of day, etc.

Quick definitions of classes, instances and labels

  • Class: Fundamental unit of the project's ontology. For example the ontology of a project annotating traffic videos could have classes such as Car, Truck, Bicycle, and so on. For more information on objects and classifications, see Ontologies Overview.
  • Instance: Specific occurrence of a class. Car(0) is an instance of the Car class, for example, it could be a specific black sedan. The single Car(0) instance can appear in a single frame or a range of frames. Therefore, instances may contain multiple labels across frames.
  • Label: A frame-specific annotation of an instance. For example, the annotation of Car(0) on frame 201 is a label.
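
To make the hierarchy concrete, here is an illustrative sketch of how one instance relates to its per-frame labels. The dictionary layout is a simplification for explanation only, not the actual export schema (see the Export section for that):

```python
# One Car instance annotated across three frames of a traffic video.
car_0 = {
    "class": "Car",        # ontology class
    "instance": "Car(0)",  # a specific occurrence of that class
    "labels": {            # one label per frame the instance appears in
        199: {"bounding_box": {"x": 0.31, "y": 0.42, "w": 0.10, "h": 0.07}},
        200: {"bounding_box": {"x": 0.33, "y": 0.42, "w": 0.10, "h": 0.07}},
        201: {"bounding_box": {"x": 0.35, "y": 0.43, "w": 0.10, "h": 0.07}},
    },
}
print(f"{car_0['instance']} has {len(car_0['labels'])} labels")  # 3 labels
```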

Labels

The labels page is your gateway to annotating, reviewing, and auditing the labels made against all the datasets in your project.

The labels page presents several different tabs, each with a different purpose. Access to each tab depends on the user's project role. Below, we quickly summarize the purpose of each tab and the roles which can access it.

The labels dashboard features the following tabs:

  • Activity: Quickly view all the tasks that are either In review or Completed, as well as confirm the status of the labels within those tasks.
  • Queue: The Queue tab is where all labeling and review actions should be initiated. Regardless of a user's role within the project, all annotations and reviews should be initiated from within the Queue pane, as it pulls tasks from the Task Management System. This ensures collaborators' efforts will not overwrite each other.
  • Data: Use the Data tab to get an overview of every data asset in the project, regardless of status within the Task Management System.
  • Instances: The Instances tab lets you use the unique instance identifier to search across the project to find a specific instance, and then jump directly into the editor to confirm the status of the annotation visually.

Access to each tab is associated with the various project roles as follows:

Role                 | Activity | Queue | Data | Instances
Annotator            |    ✓     |   ✓   |      |
Reviewer             |    ✓     |   ✓   |      |
Annotator & Reviewer |    ✓     |   ✓   |      |
Team Manager         |    ✓     |   ✓   |  ✓   |     ✓
Admin                |    ✓     |   ✓   |  ✓   |     ✓

Activity

The Activity screen allows you to quickly monitor annotation and review activity in your project by listing tasks and providing a summary interface for the status of the reviewed labels inside each task. See below for a detailed explanation of the various columns and interfaces. Tasks are ordered by most recently edited, with the most recent at the top.

  • A. File, Search, & Reopen: The name of the specific data unit or data asset. This is the same as its name in the dataset it belongs to. Use the search box to filter the list by file name, and send tasks back to annotation using the 'Reopen' feature.
  • B. Dataset: The dataset the data asset belongs to.
  • C. Type: The type of the data, such as an image or video. For more on our supported data types, see our label editor documentation.
  • D. Status: The status of this task within the Task Management System.
  • E. Frames: The number of frames in the data asset. For a DICOM series, this will be the number of slices.
  • F. Reviews: How many annotation instances have been reviewed in this data asset.
  • G. Submitted: Indicates when the last submit action, whether for an annotation or review, was made against any of the labels in this data asset.
  • H. Submitted by: Who last submitted the annotations.
  • I. Reviewed by: Who submitted the most recent review.
  • J. Actions: Click the 'View' link to open the label editor. Note: this feature is only available to Team Managers and Administrators as an extra method of reviewing submissions outside of the TMS. We advise extra caution if you decide to edit labels from this interface. If significant work needs to be done, we strongly recommend using 'Reopen' on the task to prevent possible errors from simultaneous edits.
  • K. Filter: Use the filter dropdown to only show tasks with the selected status.

File, Search, and Reopen

The file column shows the name of the data asset. For files uploaded via the GUI, they keep the name they were uploaded with. For files added from your cloud storage, this will normally be the path under the bucket they are stored on.

Use the search interface to quickly filter and display only those tasks with file names matching your desired text. Even partial matches will be shown. For example: searching "fly" will return file names containing "flyover" and "flyaround."

The 'Reopen' button allows Administrators and Team Managers to send tasks which are currently Completed or In review back to annotation. Select your target tasks using the checkboxes in the File column to select individual assets, or select the checkbox in the column header to select all tasks, then press the 'Reopen' button to move all selected tasks back to the annotation stage. Tasks reopened in this way will have the status 'Returned' in the 'Queue' tab. No labels are lost by reopening a task. The 'Reopen' action is only applied to tasks which are both visible (i.e. not filtered out by the file search) and selected.

Status

This column shows the status of this task within the Task Management System. The Activity pane only shows assets which have had some action done on them, and therefore only reflects tasks with the following statuses:

  • In review: The annotation task has been submitted but outstanding review tasks remain. In review task status is shown in blue.
  • Completed: The annotation task has been submitted and all reviews have been completed. Completed task status is shown in green.

For a comprehensive summary of the possible task states, see the status section of the Data tab, below.

Reviews

Shows a count of how many instances have been reviewed in a given data asset. Click the number to open a panel which shows the last review action taken on each instance, plus who originally created the annotation and when. Note that unless the review was done by an 'Expert Reviewer', all reviewed annotations must be either 'Approved' or 'Deleted' before a task can be 'Completed.' Read more about the Expert Review feature here.

Queue

The 'Queue' is where annotators and reviewers look to find their next task. The 'Start labeling' and 'Start reviewing' buttons visible throughout the project open the label editor with the next task in the queue according to the relevant task type. This tab can be used to assess the number of tasks assigned to you as an annotator or reviewer and therefore estimate your likely workload. Administrators and Team Managers can also use it to quickly verify the current assignments per team member, and change assignments as necessary.

  • A. File, Search, & Assign: The name of the specific data unit or data asset. This is the same as its name in the dataset it belongs to. Use the search box to filter the list by file name, and allocate tasks to collaborators using the 'Assign' feature.
  • B. Dataset: The dataset the data asset belongs to.
  • C. Type: The type of the data, such as an image or video. For more on our supported data types, see our label editor documentation.
  • D. Status and Task: The status and category of this task within the Task Management System.
  • E. Last Edited: When the task was last edited.
  • F. Reserved by: Who the task has been assigned to or reserved by.
  • G. Actions: Depending on your collaborator role, you can initiate or reassign the task.
  • H. Filter: Use the filter drop-down to only show tasks of the selected status.

File, Search, and Assign

The file column shows the name of the data asset. For files uploaded via the GUI, they keep the name they were uploaded with. For files added from your cloud storage, this will normally be the path under the bucket they are stored on.

Use the search interface to quickly filter and display only those tasks with file names matching your desired text. Even partial matches will be shown. For example: searching "fly" will return file names containing "flyover" and "flyaround."

The 'Assign' button allows Administrators and Team Managers to allocate unassigned tasks to specific collaborators for annotation or review. Select your target tasks using the checkboxes in the File column to select individual assets, or select the checkbox in the column header to select all tasks, then press the 'Assign' button to open the task assignment popup. Confirm the selected tasks are as intended, then select the target collaborator from the dropdown and press Assign. Tasks which have already been assigned to another collaborator, as indicated by the email in the 'Reserved by' column, cannot be reassigned until they have first been released.

Status and Task

The 'Queue' tab only shows tasks which have remaining annotation or review work to be done within the Task Management System. Therefore, the stage of the task within the TMS is understood by reading the Status and Task columns together.

The two types of tasks are 'Annotate' and 'Review' which can be in any of the following states:

  • Queued: The task is ready for annotation or review. For an annotation task to be 'Queued' it must not be assigned to a user, and must have no submitted labels. It may have been previously assigned to a user, but subsequently released before any annotations were submitted.
  • Assigned: The annotation or review task is assigned to a specific user.
  • Returned: The annotation task was previously submitted, and either one or more of the annotations were rejected by the reviewer, or it was 'reopened' after completion by a Team Manager or Administrator.

Actions

There are two relevant actions that can be done on each task from the 'Queue' pane. Press 'Initiate' to open the label editor and proceed with annotation or review, depending on the task type.

Additionally, Administrators and Team Managers can click the three vertical dots to open the expanded menu, to access the 'Release task' function. Tasks must be explicitly released before they can be reassigned.

Data

The Data tab gives a complete overview of all the data asset tasks in the project, regardless of their progress through the Task Management System. Therefore, this is the first place Administrators and Team Managers should check when they want to confirm the status of a given task.

  • A. File & Search: The name of the specific data unit or data asset. This is the same as its name in the dataset it belongs to. Use the search box to filter the list by file name.
  • B. Dataset: The dataset the data asset belongs to.
  • C. Type: The type of the data, such as an image or video. For more on our supported data types, see our label editor documentation.
  • D. Status: The status of this task within the Task Management System.
  • E. Frames: The total frames in this data asset. This will apply to videos, image sequences and DICOM. Images always only have 1 frame.
  • F. FPS: The frames per second of the data asset. This only applies to data of type video. Others will show a dash (-).
  • G. Created: When the task was created. Tasks are created when the dataset containing the data asset is attached to the project.
  • H. Last edited by: The last collaborator to edit the task in any capacity (such as annotate or review), and when.
  • I. Actions: The 'Data' tab allows users to view the task in the label editor, as well as to get a code snippet for using the SDK with this task, and to confirm edit actions via the Activity Log.
  • J. Filter by: Use the filter dropdown to view only tasks with the selected Status.

The file column shows the name of the data asset. For files uploaded via the GUI, they keep the name they were uploaded with. For files added from your cloud storage, this will normally be the path under the bucket they are stored on.

Use the search interface to quickly filter and display only those tasks with file names matching your desired text. Even partial matches will be shown. For example: searching "fly" will return file names containing "flyover" and "flyaround."

Status

The data tab provides the most comprehensive overview of all the tasks associated with each data asset in a given project. As such, this is the first place to check to see the status of various tasks.

  • Queued: The task is ready for annotation. For a task to be 'Queued' it must not be assigned to a user, and have no submitted labels. A queued task may have been previously assigned to a user, but subsequently released before any annotations were submitted. Queued tasks are shown in light orange.
  • Assigned: An annotation task has been assigned to a specific user. Assigned tasks are shown in aqua green.
  • In review: The annotation task has been submitted but outstanding review tasks remain. In review task status is shown in blue.
  • Returned: The task was previously submitted, and either one or more of the annotations were rejected by the reviewer, or it was 'reopened' after completion by a Team Manager or Administrator.
  • Completed: The annotation task has been submitted and all reviews have been completed. Completed task status is shown in green.

Actions

Clicking View will drop you into the label editor to do a live audit of the annotations in this data asset. The 'Data' tab is only visible to Administrators and Team Managers, and so grants the power to view any data asset; however, appropriate care must be taken to ensure annotations are not simultaneously edited from the 'Queue' pane by an annotator or reviewer. Encord advises that edit actions are NOT taken from the Data tab unless you have received confirmation that no one else is concurrently editing the asset.

caution

In order to prevent any possible issues of annotator work being overwritten, it's critical that all annotations are done via the Task Management System's Queue tab, and only the person assigned to the task makes annotations at any given time.

Other possible actions include 'API Details', which shows a popup with sample code you can use to get started with our SDK to access this particular data asset, often known as a label row in the SDK. Click 'Activity log' to see a popup with a graphical summary of add / edit / delete actions on this data asset, indexed by annotator or ontology class. Click 'Display logs' in the lower right to show all actions in reverse chronological order.
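
For reference, a minimal sketch of what such SDK access typically looks like with the Python SDK. The key path, project hash, and file name are placeholders, and the exact calls can vary between SDK versions, so treat the 'API Details' popup as the authoritative snippet for your asset:

```python
from encord import EncordUserClient

# Authenticate with the SSH key registered to your Encord account.
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="/path/to/encord_key"  # placeholder path
)
project = user_client.get_project("<project_hash>")  # placeholder hash

# Fetch the label row for one data asset and load its labels.
label_row = project.list_label_rows_v2(data_title_eq="flyover.mp4")[0]
label_row.initialise_labels()
print(label_row.data_title, len(label_row.get_object_instances()))
```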

Instances

The 'Instances' tab allows Administrators and Team Managers to search within the data to directly find specific instances. Recall that an annotation instance corresponds to a unique instantiation of a specific ontology class in a data asset. For example, if you have the 'Person' class in your ontology, the first instance of a 'Person' in a given data asset will be indicated in the interface as 'Person (0)', the second as 'Person (1)', and so on. Instances, therefore, can exist in multiple frames of a data asset, and indicate the same object. Use the 'Instances' tab to search for specific instances of objects or classifications using their identifier.

Instance identifiers are unique at the project scope, and can be found in any of the following ways:

  • From inside the label editor, by clicking on a particular instance, and then selecting 'Copy identifier' from the instance action menu.
  • From inside exported labels, where they are known as the objectHash or classificationHash as appropriate.
  • When uploading labels using the SDK, you may specify your own objectHash or classificationHash.
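
As an illustration, a short sketch that scans an exported JSON file for a given identifier. It assumes the video-style layout of Encord's native export format described in the Export section (a list of label rows, each with data_units whose labels are keyed by frame), so adjust the traversal to your actual export:

```python
import json

def find_instance(export_path: str, identifier: str):
    """Yield (data_title, frame) pairs where the instance appears."""
    with open(export_path) as f:
        label_rows = json.load(f)
    for row in label_rows:
        for data_unit in row.get("data_units", {}).values():
            for frame, labels in data_unit.get("labels", {}).items():
                for obj in labels.get("objects", []):
                    if obj.get("objectHash") == identifier:
                        yield row.get("data_title"), frame

for title, frame in find_instance("export.json", "<objectHash>"):
    print(title, frame)
```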

Once you have an identifier of interest, use the 'Search instance' interface to filter the instances by identifier and quickly find the instance you're interested in. This can be particularly handy when you want to visually confirm an annotation you may not have seen before, but for which you have the identifier.

After locating your instance of interest, click 'View' from the Actions column to jump into the dataset, straight to where the instance is first annotated.

Performance

The performance dashboard gives a more detailed view of your team's manual labeling and productivity.

There are two global filters that can be applied to the entire dashboard:

  • Date range: sets the date boundaries for the statistics shown on the dashboard
  • Labels or instances: whether the dashboard statistics are shown in terms of labels or instances
    • More information here

Submissions chart

The submissions chart displays the number of submitted labels or instances over the specified time period. The chart can be filtered to show submissions for specific annotators or classes.

If you filter on both Annotators and Classes, the resulting chart shows the submission statistics for the selected annotators on the selected classes.

Reviews chart

The reviews chart displays the cumulative number of accepted and rejected labels or instances over the specified time period.

Annotators table

The annotators' table displays all the relevant statistics for all annotators in a project. It can be filtered on classes to show annotator statistics only for the selected classes.

Table columns

  • User: The annotator's email
  • Rejection rate: Percentage of their labels or instances that have been rejected in the review process
  • Submitted labels / instances: Number of labels or instances that the annotator has submitted for review
    • Repeated submissions are not counted
  • Accepted labels / instances: Number of labels or instances that the annotator created that passed the review process
  • Rejected labels / instances: Number of labels or instances that the annotator created that were rejected during the review process
    • Note that this can be higher than the number of submitted labels / instances since a label or instance can be rejected multiple times during the review process but the submission will only be logged once
  • Total session time: Time spent labeling

Reviewers table

Table columns

  • User: The reviewer's email
  • Rejection rate: Percentage of labels or instances that they rejected in the review process
  • Accepted labels / instances: Number of labels or instances that the reviewer accepted
  • Rejected labels / instances: Number of labels or instances that the reviewer rejected
  • Total session time: Time spent reviewing

Objects and classifications table

Each row in the objects and classifications table can be expanded to show statistics on the nested attributes of the class.

Table columns

  • Class: The class name
  • Rejection rate: Percentage of labels or instances rejected in the review process
  • Reviewed labels / instances: Number of labels or instances of the class that have gone through the review process
  • Accepted labels / instances: Number of labels or instances of the class that have passed the review process
  • Rejected labels / instances: Number of labels or instances of the class that failed the review process
  • Avg. time to annotate: Average time spent annotating this class

Models

Using our sophisticated label editor to quickly and accurately annotate large datasets is a powerful and attractive feature of the Encord platform. Automating the labeling process using micro-models can help accelerate your efforts even further. A project's Models tab is your interface to set up and manage micro-models for a given project. You have full control over:

  • Model types: classification, object detection or instance segmentation
  • Training frameworks: PyTorch, DarkNet
  • Model architectures: ResNet, VGG, YOLOv5 and Faster-RCNN, among others.
  • Training specifications: epochs, batch sizes and hardware accelerators
  • Training data

Here, we'll walk you through understanding the model browser interface and details of creating a model. After you've created a model, consult our documentation on training and inference to learn how to put your models into action.

note

The presence of the Models tab inside each project implies that models are scoped at the project level. Therefore, you'll need to create and train a model inside each project you want to use it in. Our team is working to make models available at the organizational level.

Browse and search models

The models page is your gateway to interacting with our customizable automation features. Follow the steps below to create a new model. Previously created models are shown in a tile interface so that you can see the title and important attributes, such as the type, framework, and architecture, at a glance. You can also see how many times the model has been trained, and launch a variety of subscreens from the tabs on each model. From left to right, the tabs are:

  • Train model: Start the training process here, full details provided in our training documentation.
  • Model training API details: It's also possible to train models using our SDK; click here to get a head start with a helpful code snippet (see the sketch after this list).
  • Display Training log: Review how the model performed during the training epochs by examining the training logs.
  • Other menu: Currently, the only action supported from this menu is Delete. Please use it with caution; we may not be able to recover deleted models for you.
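
For orientation, a rough sketch of what SDK-driven training can look like. The hashes are placeholders, and the model_train call shown here is an assumption about the SDK's interface, so rely on the snippet from 'Model training API details' for the authoritative method name and parameters:

```python
from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="/path/to/encord_key"  # placeholder path
)
project = user_client.get_project("<project_hash>")  # placeholder hash

# Assumed call: confirm the exact signature via the 'Model training
# API details' popup before using.
project.model_train(
    "<model_hash>",                 # the model to train (placeholder)
    label_rows=["<label_hash_1>"],  # labeled training data (placeholder)
    epochs=500,
    batch_size=24,
)
```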

If you have created more models than fit easily on the screen, use the search interface to search by title and quickly find your model of interest. If you have yet to create any models, read on.

Creating a model

Create a model by specifying:

  • Model title and an optional description. Give your model a clear name to make it easy to find when using it for inference or re-training at a later stage, after you've labeled more data.
  • Model type (classification, detection, segmentation), framework (PyTorch, FastAI, DarkNet) and architecture: these depend on the problem setting and type of labels.
  • Relevant objects: the final step of model creation is to select the relevant ontology objects for model training. You will be able to select which specific instances of those objects to train on later.

Let's walk through creating the various model types.

Classification

The classification models assume there is only one correct class per image e.g. classifying handwritten digits or pictures of cats and dogs. It is associated with frame-level classifications from the project’s ontology.

For this task, we support many different architectures through the FastAI framework. These include various sizes of ResNet and VGG.

Object detection

Object detection models assume there are potentially multiple objects in an image that need to be located through a bounding box and classified. Possible objects that can be included are drawn from the classes of bounding box annotation type from the project's ontology.

For this task, we support Faster-RCNN and YOLOv5 from PyTorch, as well as YOLTv5 from DarkNet.

Instance segmentation

Segmentation models assume there are potentially multiple objects in an image that need to be segmented and classified. This differs from object detection in that the expected input and output of the model are polygons, rather than bounding boxes. Consequently, it is associated with polygon annotations from the project’s ontology.

For this task, we support Mask-RCNN from PyTorch.

Training

See our automation section to learn the full details about Training.

Inference

See our automation section to learn the full details about running model Inference.

Export

After you or your team has generated labels, you can export them in JSON and COCO format. Labels can be exported from the Encord Web app or the SDK.

Export data from the Encord web app

Click the Export tab inside a project to access the export screen. Configure the export operation by using the Export options, Configure data, and Export labels tabs as follows:

  1. Export options
    • Format: We currently support two native export formats: a JSON file with file metadata and labels organized per frame, and COCO, the format of a large-scale object detection, segmentation, and captioning dataset. Encord's native export format is described here.
    • Type: For JSON, select either or both the Classifications and Objects labeling. For COCO, select Objects labeling.
    • Generate signed URLs: (JSON-only) Turn on the toggle to generate signed URLs that allow the exported JSON files to open a data asset directly in the browser. Keep in mind the labels are not shown on the data asset; it is only the data asset as it was originally uploaded. Click and follow the link under the "data_link" attribute to view a signed URL.
  2. (JSON-only) Use the Configure data section to select the objects and classifications to include in the export.
  3. The Export labels section allows you to choose which assets to export labels from. Data assets that contain polyline annotations are currently not supported in the COCO format.
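
To give a feel for the JSON output, here is a small sketch that tallies labels per frame from an export file. It assumes the video-style layout of Encord's native format (a list of label rows, each with data_units whose labels are keyed by frame number), so treat it as illustrative:

```python
import json
from collections import Counter

with open("export.json") as f:
    label_rows = json.load(f)  # a list of exported label rows

counts = Counter()
for row in label_rows:
    for data_unit in row.get("data_units", {}).values():
        # For videos, 'labels' is keyed by frame number.
        for frame, labels in data_unit.get("labels", {}).items():
            counts[frame] += len(labels.get("objects", []))
            counts[frame] += len(labels.get("classifications", []))

for frame in sorted(counts, key=int):
    print(f"frame {frame}: {counts[frame]} labels")
```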

The export button queues the export process. Depending on the number of labels, the process may require a few minutes. Monitor the Activity menu in the upper right, which will inform you when the export is ready for download. Download files are tagged with a unique ID for the project they're exported from and the time at which the export was made, so there are no worries about previous downloads being overwritten by new exports.

Label export history

Export history is tracked per file per request, regardless of whether the request was made via the web app or the SDK, or whether the format was COCO or Encord's JSON format. Currently, you can confirm the export history for a given file by exporting it from the web app in Encord's JSON format. Export the labels as normal and look for the export_history key inside each label object. Note that the act of exporting labels to confirm export history will also generate an export history entry.

Export data from the Python SDK

Look here to learn how to export JSON files using Encord's Python SDK. Please reach out to us to request COCO label export.
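
As a starting point, a minimal sketch that dumps a project's labels to JSON with the Python SDK. The key path and project hash are placeholders, and calls such as list_label_rows_v2 and to_encord_dict reflect recent SDK versions, so check the SDK documentation for your version:

```python
import json
from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="/path/to/encord_key"  # placeholder path
)
project = user_client.get_project("<project_hash>")  # placeholder hash

exported = []
for label_row in project.list_label_rows_v2():
    label_row.initialise_labels()  # fetch this row's labels
    exported.append(label_row.to_encord_dict())

with open("labels.json", "w") as f:
    json.dump(exported, f, indent=2)
```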

Settings

The 'Settings' tab provides you with options to make modifications to your project.

Options

The 'Options' pane gives the overview of the configuration of the project and allows you to manage project data, ontology and review workflow.

General options

You can initiate the project copy flow, upload annotation instructions, manage project tags, and manage the project's ontology from this panel.

Use the Edit link to edit the ontology. This starts the ontology structure editor. Please note that changes apply to all projects that share the ontology.

Alternatively, you can switch to using a different ontology for the project by clicking the Switch button. This opens an ontology selection dialog, similar to the one used when creating the project.

Collaborators

To manage project collaborators, select the 'Team' pane in your project 'Settings'.

Here you can invite collaborators to the project, and configure their roles.

1. Add a collaborator

To invite collaborators to your project, click the + Invite collaborators button. This will open a new window where you can enter email addresses of the people you would like to invite.

Once you have entered everybody you want to invite, press the Add button.

2. Add collaborators as a group

note

To add collaborators as a group, your organization needs to have user groups. Navigate to our documentation on creating user groups for more information.

Collaborators can be added to a project as a group - which can save time as well as ensure that no individual is forgotten.

In the 'Groups' section of the page, click on Manage to make the 'Manage Groups' pop-up appear.

Click the 'Select group' dropdown and pick a group you would like to add as collaborators. After selecting a group, click the 'Select Role' dropdown to assign a role to the group of collaborators. Click Add to add the group.

The group you just added will appear under the 'Added groups' heading. Repeat the process if you'd like to add more groups with different roles to the project.

To delete a group from the project, simply click the button next to the group name.

3. Change collaborator role

A project admin can modify the different roles of collaborators, using the dropdown on the right.

You can assign the following roles to collaborators:

  • Annotator: annotators are responsible for labeling. This is the default role for all collaborators.
  • Reviewer: for reviewing labeled tasks.
  • Annotator & reviewer: a combination of annotator and reviewer.
  • Team manager: a team manager can assign tasks to other users, and add collaborators to the project.
  • Admin: gives this collaborator full administrative control over this project. Caution: this is an irreversible action.

Please confirm or cancel your selection when making a collaborator a project admin.

Delete a project

You can delete your project by going to the 'Danger zone' tab at the bottom of the menu, and clicking the red Delete project button, shown below.

Deleting your project does not delete the datasets in the project, but will delete the project's labels and ontology.

Copy a project

To copy a project, click on the Copy project button in the 'Options' tab. This will open the copy project window, where you can pick the various parts of your project you want to copy over into your new project.

1. Select copy options

Choose the parts of your project you want to copy.

You can copy any combination of the following assets:

  • Labels: this will copy the labels within videos and image groups of your choice.
  • Models: this will copy all the models in your project along with their training logs.
  • Collaborators: copy all project users with their respective roles. Project admins are copied regardless of whether this is selected or not.
  • All datasets: all datasets will be copied, and new annotation tasks will be created for all videos and image groups if their labels were not copied over (see next line).

The new annotation project will use the same ontology as the original. This can be changed in the project settings if required.

If you choose not to copy labels, press Copy project. This will create the copy of your project, which you can then access in the Projects tab.

If you do choose to copy over labels, you will then be asked to select the data assets for which you would like labels copied over. To begin the process, press Next: configure labels

2. Select data assets

Select the videos/image groups whose labels you want to copy over. After doing so, click Next.

If the project does not have task management enabled, you'll simply see the Copy project button. Click to create a copy of your project. You are done.

If task management is enabled, continue as below.

3. Configure labels

Select the type of labels you want to be copied over from the videos and image groups you chose on the previous page. The available options are:

  • Approved: labels which were accepted after review
  • Pending review: labels which have been queued for review, but have yet to be reviewed
  • Not selected for review: labels which haven't been selected for review
  • Rejected: labels which were rejected after review

All options are selected by default. Unselect any option you do not want to copy. Finally, click Copy project to create a copy of your project.

note
  • The review tasks for any labels you copy over will not be copied
  • Not selecting 'Collaborators' as an option may leave some of your copied label tasks unassigned

Project tags

If you are part of an organization, you can add tags to your project. Project tags is a feature that enables you to categorize your projects by labeling them with tags. You can filter your projects using these tags, which can be useful if you have many projects and you want to see how they are grouped. The tags you can set on your project are determined by your organization administrators.

Adding and removing tags

You can add tags to your projects when you create a project or in the settings page. To add tags to your projects in the settings page navigate to the 'Options' tab and click on the 'Project tags' dropdown. Here you will see the available tags in your organization. Click on a tag to add it to a project. You can remove a tag from your project by clicking the same tag again, or clicking the x button next to its name.

Filtering projects by tags

You can filter your projects based on the tags they contain. To do so, click on the 'Projects' tab in the navigation bar, click the 'Filter by tags' dropdown, and select one or more project tags. Only projects with all of the selected tags will be displayed.