DatasetUserRole Objects
DatasetUserRoleV2 Objects
dataset_user_role_str_enum_to_int_enum
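The conversion between the two role enums can be pictured with a minimal sketch. The enum members and integer values below are illustrative stand-ins, not the SDK's actual definitions; only the by-name mapping pattern is the point:

```python
from enum import Enum, IntEnum


class DatasetUserRole(IntEnum):
    # Illustrative integer-valued roles; the SDK defines the real members.
    ADMIN = 0
    USER = 1


class DatasetUserRoleV2(Enum):
    # Illustrative string-valued counterparts.
    ADMIN = "ADMIN"
    USER = "USER"


def dataset_user_role_str_enum_to_int_enum(user_role: DatasetUserRoleV2) -> DatasetUserRole:
    # Map the string-valued role onto the integer-valued enum by member name.
    return DatasetUserRole[user_role.name]
```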
DatasetUser Objects
- `user_email` - Email address of the user who has access to the dataset.
- `user_role` - Role of the user on the dataset.
- `dataset_hash` - Identifier of the dataset the user has access to.
DataLinkDuplicatesBehavior Objects
- DUPLICATE: Allow duplicates and create a new link for each request.
- FAIL: Fail the operation if a duplicate link would be created.
- SKIP: Skip data that is already linked and continue with the rest.
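How the three modes play out can be sketched with a toy model of link handling. This is not the SDK's implementation; `link_items` and its arguments here are hypothetical stand-ins that only illustrate the documented semantics:

```python
from enum import Enum


class DataLinkDuplicatesBehavior(Enum):
    DUPLICATE = "DUPLICATE"
    FAIL = "FAIL"
    SKIP = "SKIP"


def link_items(existing, requested, behavior):
    # Toy model: return the link list after applying the duplicates behavior.
    links = list(existing)
    for item in requested:
        if item in links:
            if behavior is DataLinkDuplicatesBehavior.FAIL:
                raise ValueError(f"duplicate link for {item!r}")
            if behavior is DataLinkDuplicatesBehavior.SKIP:
                continue  # already linked: skip and carry on with the rest
        links.append(item)  # new item, or DUPLICATE mode: create a new link
    return links
```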
DataClientMetadata Objects
ImageData Objects
file_type
file_size
signed_url
DataRow Objects
The mixed use of the dict-style member functions and the property accessors and setters is discouraged.
WARNING: Do NOT use the .data member of this class. Its usage could corrupt the correctness of the data structure.
uid
title
data_type
created_at
frames_per_second
Returns `None` when a `frames_per_second` field is not applicable to the data type.
duration
Returns `None` when a `duration` field is not applicable to the data type.
client_metadata
width
Returns `None` for data types of `DataType.IMG_GROUP` where `is_image_sequence()` is `False`, because each image in this group can have a different dimension. Inspect `images()` to get the width of individual images.
height
Returns `None` for data types of `DataType.IMG_GROUP` where `is_image_sequence()` is `False`, because each image in this group can have a different dimension. Inspect `images()` to get the height of individual images.
file_link
If the data type is `DataType.DICOM`, this returns `None`, as no single file is associated with the series.
signed_url
file_size
file_type
images_data
is_optimised_image_group
Returns `None` for other data types.
DEPRECATED: This method is deprecated and will be removed in the upcoming library version. Please use `is_image_sequence()` instead.
is_image_sequence
Returns `None` for other data types.
For more details refer to the [documentation on image sequences](https://docs.encord.com/docs/annotate-supported-data#image-sequences).
backing_item_uuid
refetch_data
- `signed_url` - If `True`, this will fetch a generated signed url of the data asset.
- `images_data_fetch_options` - If not `None`, this will fetch the image data of the data asset. You can additionally specify what to fetch with the `ImagesDataFetchOptions` class.
- `client_metadata` - If `True`, this will fetch the client metadata of the data asset.
save
DataRows Objects
DatasetInfo Objects
Dataset Objects
__init__
The mixed use of the dict-style member functions and the property accessors and setters is discouraged.
WARNING: Do NOT use the .data member of this class. Its usage could corrupt the correctness of the data structure.
DatasetDataInfo Objects
- `data_hash` - Internal identifier of the data item.
- `title` - Human-readable title applied to the data item.
- `backing_item_uuid` - UUID of the storage item that backs this dataset data.
AddPrivateDataResponse Objects
CreateDatasetResponse Objects
__init__
The mixed use of the dict-style member functions and the property accessors and setters is discouraged.
WARNING: Do NOT use the .data member of this class. Its usage could corrupt the correctness of the data structure.
StorageLocation Objects
- CORD_STORAGE: Encord-managed storage.
- AWS: AWS S3 bucket.
- GCP: Google Cloud Storage.
- AZURE: Azure Blob Storage.
- S3_COMPATIBLE: S3-compatible storage.
- NEW_STORAGE: This is a placeholder for a new storage location that is not yet supported by your SDK version. Please update your SDK to the latest version.
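The role of the NEW_STORAGE placeholder can be sketched with Python's enum fallback hook. The integer values and the `_missing_` override below are illustrative of the pattern, not necessarily the SDK's exact definitions:

```python
from enum import IntEnum


class StorageLocation(IntEnum):
    # Illustrative values; the SDK defines the canonical ones.
    CORD_STORAGE = 0
    AWS = 1
    GCP = 2
    AZURE = 3
    S3_COMPATIBLE = 4
    NEW_STORAGE = -99

    @classmethod
    def _missing_(cls, value):
        # Any value this SDK version does not recognise degrades to the
        # NEW_STORAGE placeholder instead of raising ValueError, so older
        # clients keep working when the server adds new locations.
        return cls.NEW_STORAGE
```

With this in place, `StorageLocation(12345)` yields the placeholder rather than an exception.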
DatasetType
For backwards compatibility.
DatasetData Objects
SignedVideoURL Objects
SignedImageURL Objects
SignedImagesURL Objects
SignedAudioURL Objects
SignedDicomURL Objects
SignedDicomsURL Objects
Video Objects
ImageGroup Objects
Image Objects
SingleImage Objects
Audio Objects
Images Objects
DicomSeries Objects
- `data_hash` - Internal identifier of the DICOM series.
- `title` - Human-readable name or description of the series.
DicomDeidentifyTask Objects
- `dicom_urls` - List of DICOM object URLs to be de-identified.
- `integration_hash` - Identifier of the integration or configuration used to carry out the de-identification.
ImageGroupOCR Objects
- `processed_texts` - Mapping of identifiers to recognized text blocks produced by the OCR pipeline.
ReEncodeVideoTaskResult Objects
- `data_hash` - Identifier of the data item that was re-encoded.
- `signed_url` - Optional signed URL for downloading the re-encoded video. Only present when using `CORD_STORAGE`.
- `bucket_path` - Path inside the storage bucket where the re-encoded video is stored.
ReEncodeVideoTask Objects
DatasetAccessSettings Objects
fetch_client_metadata
Whether client metadata should be retrieved for each `data_row`.
ImagesDataFetchOptions Objects
Set to `True` if you need to download the images.
Arguments:
- `fetch_signed_urls` - If `True`, include signed URLs for image data so that the media can be downloaded directly from storage.
LongPollingStatus Objects
DONE
Job has finished successfully (possibly with errors if `ignore_errors=True`).
If `ignore_errors=False` was specified in `add_private_data_to_dataset_start()`, the job will only have the status DONE if there were no errors.
If `ignore_errors=True` was specified in `add_private_data_to_dataset_start()`, the job will always show the status DONE once complete and will never show the ERROR status, even though some errors may have been ignored.
Information about the number of errors and stringified exceptions is available in the `units_error_count: int` and `errors: List[str]` attributes.
ERROR
Job has completed with errors. This can only happen if `ignore_errors` was set to `False`. Information about errors is available in the `units_error_count: int` and `errors: List[str]` attributes.
CANCELLED
Job was cancelled explicitly by the user through the Encord UI or via the Encord SDK using the `add_data_to_folder_job_cancel` method.
In the context of this status:
- The job may have been partially processed, but it was explicitly interrupted before completion by a user action.
- Cancellation can occur either manually through the Encord UI or programmatically using the SDK method `add_data_to_folder_job_cancel`.
- Once a job is cancelled, no further processing will occur, and any data processed before the cancellation will be available.
- The presence of cancelled data units (`units_cancelled_count`) indicates that some data upload units were interrupted and cancelled before completion.
- If `ignore_errors` was set to `True`, the job may continue despite errors, and cancellation will only apply to the unprocessed units.
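A typical client loop over these statuses might look like the following sketch. The `get_status` callable stands in for whatever SDK call reports the job's current status; the function and parameter names here are illustrative, not SDK API:

```python
import time
from enum import Enum


class LongPollingStatus(Enum):
    PENDING = "PENDING"
    DONE = "DONE"
    ERROR = "ERROR"
    CANCELLED = "CANCELLED"


def wait_for_upload(get_status, poll_interval=1.0, timeout=600.0):
    # Poll until the job leaves PENDING, then hand the terminal status back
    # to the caller, who decides how to treat DONE, ERROR, or CANCELLED.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status is not LongPollingStatus.PENDING:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("upload job did not finish in time")
```

Note that with `ignore_errors=True`, a returned DONE still warrants inspecting `units_error_count` and `errors`.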
DataUnitError Objects
object_urls
URLs involved. A single item for videos and images; a list of frames for image groups and DICOM.
error
The error message.
subtask_uuid
Opaque ID of the process. Please quote this when contacting Encord support.
action_description
Human-readable description of the action that failed (e.g. 'Uploading DICOM series').
DatasetDataLongPolling Objects
status
Status of the upload job. Documented in detail in `LongPollingStatus()`.
data_hashes_with_titles
Information about data which was added to the dataset.
errors
Stringified list of exceptions.
data_unit_errors
Structured list of per-item upload errors. See `DataUnitError` for more details.
units_pending_count
Number of upload job units that have pending status.
units_done_count
Number of upload job units that have done status.
units_error_count
Number of upload job units that have error status.
units_cancelled_count
Number of upload job units that have been cancelled.
DatasetLinkItems Objects
- `items` - List of storage item identifiers linked to the dataset.
CreateDatasetPayload Objects
create_backing_folder: If `True`, create a legacy "mirror" dataset together with a backing storage folder in a single operation. This behavior is retained for backwards compatibility.
legacy_call: Internal flag used for analytics to detect usage of legacy dataset creation flows. This field will be removed in a future version and should not be set manually.
create_backing_folder
If `True`, this creates a legacy "mirror" dataset and its backing folder in one go.
legacy_call
This field will be removed soon.
CreateDatasetResponseV2 Objects
- `dataset_uuid` - UUID of the newly created dataset.
- `backing_folder_uuid` - Optional UUID of the backing folder created alongside the dataset, if applicable. A non-`None` value indicates a legacy "mirror" dataset was created.
backing_folder_uuid
A non-`None` value indicates a legacy "mirror" dataset was created.
DatasetsWithUserRolesListParams Objects
- `title_eq` - Optional filter to return only datasets whose title exactly matches the given string.
- `title_cont` - Optional filter to return only datasets whose title contains the given substring.
- `created_before` - If set, only datasets created before this timestamp are returned.
- `created_after` - If set, only datasets created on or after this timestamp are returned.
- `edited_before` - If set, only datasets last edited before this timestamp are returned.
- `edited_after` - If set, only datasets last edited on or after this timestamp are returned.
- `include_org_access` - If `True`, include datasets that are visible through organisation-level access in addition to user-level sharing.
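The timestamp filters read as half-open comparisons: strictly before the `*_before` bound, on or after the `*_after` bound. A small local sketch of that semantics (illustrative only, not the server-side implementation):

```python
from datetime import datetime


def matches_created_filters(created_at, created_before=None, created_after=None):
    # Half-open interval: strictly before `created_before`,
    # on or after `created_after`. Either bound may be omitted.
    if created_before is not None and not created_at < created_before:
        return False
    if created_after is not None and not created_at >= created_after:
        return False
    return True
```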
DatasetWithUserRole Objects
- `dataset_uuid` - UUID of the dataset.
- `title` - Title of the dataset.
- `description` - Description of the dataset.
- `created_at` - Timestamp when the dataset was created.
- `last_edited_at` - Timestamp when the dataset was last modified.
- `user_role` - Role of the requesting user on this dataset, if any.
- `storage_location` - Storage location of the dataset's underlying data, if known.
- `backing_folder_uuid` - UUID of the legacy backing folder if this dataset was created as a "mirror" dataset.
storage_location
Legacy field: you can have data from mixed locations now.
backing_folder_uuid
If set, this indicates a legacy "mirror" dataset.
DatasetsWithUserRolesListResponse Objects
- `result` - List of datasets together with the role of the current user.

