Before you can use your cloud-stored data with the Encord platform, you need to configure your cloud storage to work with Encord. Once the integration between Encord and your cloud storage is complete, you can use your data in Encord.
To integrate with AWS S3, you need to:
Create a permission policy for your resources that allows appropriate access to Encord.
Create a role for Encord and attach the policy so that Encord can access those resources.
Activate cross-origin resource sharing (CORS), which allows Encord to access those resources from a web browser.
Create an S3 bucket to store your files if you haven’t already. Your S3 bucket permissions should be set to block all public access.
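For orientation, the sketches below show the general shape of the two pieces of configuration involved. The bucket name, statement ID, set of actions, and allowed origin are all illustrative assumptions; use the exact policy and CORS values from Encord’s integration instructions. The first sketch is an IAM permission policy granting read access (s3:PutObject is only needed if you create image sequences, which require write access):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EncordIntegrationAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
```

The second sketch is an S3 CORS configuration that allows the Encord web app to access objects from the browser (the origin shown is an assumption):

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT"],
    "AllowedOrigins": ["https://app.encord.com"],
    "ExposeHeaders": []
  }
]
```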
In the Integrations section of the Encord platform, click +New integration to create a new integration.
Select AWS S3 at the top of the chooser.
It is essential you do not close this tab or window until you have finished the whole integration process. If you use the AWS UI for integration, we advise opening the AWS console in a separate tab.
For a list of supported file formats for each data type, go here.
All types of data (videos, images, image groups, image sequences, and DICOM) from a private cloud are added to a Dataset in the same way: using a JSON or CSV file. The file includes links to all of the images, image groups, videos, and DICOM files in your cloud storage.
Encord supports file names up to 300 characters in length for any file or video you upload.
Encord enforces the following upload limits for each JSON file used for file registration:
Up to 1 million URLs
A maximum of 500,000 items (e.g. images, image groups, videos, DICOMs)
URLs can be up to 16 KB in size
Optimal upload chunking can vary depending on your data type and the amount of associated metadata. For tailored recommendations, contact Encord support. We recommend starting with smaller uploads and gradually increasing the size based on how quickly jobs are processed. Generally, smaller chunks result in faster data reflection within the platform.
BEST PRACTICE: If you want to use Index or Active with your video data, we STRONGLY RECOMMEND using custom metadata (clientMetadata) to specify key frames and custom embeddings. For more information go here, or here for information on using the SDK.
For detailed information about the JSON file format used for import go here.
The information provided about each of the following data types is designed to get you up and running as quickly as possible without going too deeply into the why or how. Look at the template for each data type, then the examples, and adjust the examples to suit your needs.
If skip_duplicate_urls is set to true, all object URLs that exactly match existing images/videos in the dataset are skipped.
When the videoMetadata flag is present in the JSON file, we directly use the supplied metadata without performing any additional validation, and do not store the file on our servers.
To guarantee accurate labels, it is crucial that the videoMetadata provided is accurate.
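For illustration, the following is a minimal sketch of a JSON file that registers one video with videoMetadata supplied. The URL is a placeholder and the metadata field names are assumptions; confirm them against the JSON format reference.

```json
{
  "videos": [
    {
      "objectUrl": "https://my-bucket.s3.eu-west-2.amazonaws.com/videos/video1.mp4",
      "title": "Video 1",
      "videoMetadata": {
        "fps": 25.0,
        "duration": 29.5,
        "width": 1280,
        "height": 720,
        "file_size": 5468354,
        "mime_type": "video/mp4"
      }
    }
  ],
  "skip_duplicate_urls": true
}
```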
The following is an example JSON file for uploading two audio files to Encord.
Template: Imports audio files with an Encord title.
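A minimal sketch of such a template, with placeholder URLs and titles:

```json
{
  "audio": [
    {
      "objectUrl": "https://my-bucket.s3.eu-west-2.amazonaws.com/audio/file1.mp3",
      "title": "Audio file 1"
    },
    {
      "objectUrl": "https://my-bucket.s3.eu-west-2.amazonaws.com/audio/file2.wav",
      "title": "Audio file 2"
    }
  ],
  "skip_duplicate_urls": true
}
```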
Audio Metadata: Imports one audio file with the audioMetadata flag. When the audioMetadata flag is present in the JSON file, we directly use the supplied metadata without performing any additional validation, and do not store the file on our servers. To guarantee accurate labels, it is crucial that the metadata you provide is accurate.
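A sketch of an entry with audioMetadata follows. The URL is a placeholder and the metadata field names are assumptions; verify them against the JSON format reference.

```json
{
  "audio": [
    {
      "objectUrl": "https://my-bucket.s3.eu-west-2.amazonaws.com/audio/file1.mp3",
      "title": "Audio file 1",
      "audioMetadata": {
        "duration": 23.0,
        "file_size": 2900000,
        "mime_type": "audio/mpeg",
        "sample_rate": 44100,
        "bit_depth": 16,
        "num_channels": 2
      }
    }
  ]
}
```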
For detailed information about the JSON file format used for import go here.
Image sequences are collections of images that are processed as one annotation task and represented as a video.
Images within image sequences may be altered: images of varying sizes and resolutions are adjusted to match the size and resolution of the first image in the sequence.
Creating Image sequences from cloud storage requires ‘write’ permissions, as new files have to be created in order to be read as a video.
Each object in the image_groups array with the createVideo flag set to true represents a single image sequence.
If skip_duplicate_urls is set to true, all URLs exactly matching existing image sequences in the dataset are skipped.
The only difference between adding image groups and image sequences using a JSON file is that image sequences require the createVideo flag to be set to true. Both use the key image_groups.
The position of each image within the sequence needs to be specified in the key (objectUrl_{position_number}).
Encord supports up to 32,767 entries (21:50 minutes) for a single image sequence. We recommend up to 10,000 to 15,000 entries for a single image sequence for best performance. If you need a longer sequence, we recommend using video instead of an image sequence.
Template: Provides the proper JSON format to import image groups into Encord.
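A minimal sketch with placeholder URLs is shown below. The first entry, with createVideo set to true, is registered as an image sequence; the second, with createVideo set to false, is registered as an image group.

```json
{
  "image_groups": [
    {
      "title": "Sequence 1",
      "createVideo": true,
      "objectUrl_0": "https://my-bucket.s3.eu-west-2.amazonaws.com/frames/frame1.jpg",
      "objectUrl_1": "https://my-bucket.s3.eu-west-2.amazonaws.com/frames/frame2.jpg"
    },
    {
      "title": "Group 1",
      "createVideo": false,
      "objectUrl_0": "https://my-bucket.s3.eu-west-2.amazonaws.com/images/image1.jpg",
      "objectUrl_1": "https://my-bucket.s3.eu-west-2.amazonaws.com/images/image2.jpg"
    }
  ],
  "skip_duplicate_urls": true
}
```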
For detailed information about the JSON file format used for import go here.
Each dicom_series element can contain one or more DICOM series.
Each series requires a title and at least one object URL, as shown in the example below.
If skip_duplicate_urls is set to true, all object URLs exactly matching existing DICOM files in the dataset will be skipped.
Custom metadata is distinct from patient metadata, which is included in the .dcm file and does not have to be specified during the upload to Encord.
The following is an example JSON for uploading three DICOM series belonging to a study. Each title and object URL correspond to individual DICOM series.
The first series contains only a single object URL, as it is composed of a single file.
The second series contains 3 object URLs, as it is composed of three separate files.
The third series contains 2 object URLs, as it is composed of two separate files.
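A sketch of that structure, with placeholder titles and URLs:

```json
{
  "dicom_series": [
    {
      "title": "Series 1",
      "objectUrl_0": "https://my-bucket.s3.eu-west-2.amazonaws.com/dicom/series1/file1.dcm"
    },
    {
      "title": "Series 2",
      "objectUrl_0": "https://my-bucket.s3.eu-west-2.amazonaws.com/dicom/series2/file1.dcm",
      "objectUrl_1": "https://my-bucket.s3.eu-west-2.amazonaws.com/dicom/series2/file2.dcm",
      "objectUrl_2": "https://my-bucket.s3.eu-west-2.amazonaws.com/dicom/series2/file3.dcm"
    },
    {
      "title": "Series 3",
      "objectUrl_0": "https://my-bucket.s3.eu-west-2.amazonaws.com/dicom/series3/file1.dcm",
      "objectUrl_1": "https://my-bucket.s3.eu-west-2.amazonaws.com/dicom/series3/file2.dcm"
    }
  ],
  "skip_duplicate_urls": true
}
```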
For each DICOM upload, an additional DicomSeries file is created. This file represents the series file-set. Only DicomSeries are displayed in the Encord application.
When using a Multi-Region Access Point for your AWS S3 buckets the JSON file has to be slightly different from the examples provided. Instead of an object’s URL, objects are specified using the ARN of the Multi-Region Access Point followed by the object name. The example below shows how video files from a Multi-Region Access Point would be specified.
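A sketch of that form for two videos (the account ID and access point alias are placeholders):

```json
{
  "videos": [
    {
      "objectUrl": "arn:aws:s3::123456789012:accesspoint/example-alias.mrap/video1.mp4"
    },
    {
      "objectUrl": "arn:aws:s3::123456789012:accesspoint/example-alias.mrap/video2.mp4"
    }
  ]
}
```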
In the CSV file format, the column headers specify which type of data is being uploaded. You can add a single file format at a time, or combine multiple data types in a single CSV file.
Details for each data format are given in the sections below.
Encord supports up to 10,000 entries for upload in the CSV file.
Object URLs can’t contain whitespace.
For backwards compatibility, a single-column CSV is supported. A file with the single ObjectUrl column is interpreted as a request for video upload. If your objects are of a different type (for example, images), this error displays: “Expected a video, got a file of type XXX”.
A CSV file containing videos should contain two columns with the following mandatory column headings: ‘ObjectURL’ and ‘Video title’. All headings are case-insensitive.
The ‘ObjectURL’ column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the video resource.
The ‘Video title’ column containing the video_title. If left blank, the original file name is used.
In the example below, files 1, 2, and 4 are assigned the names in the title column, while file 3 keeps its original file name.
| ObjectUrl | Video title |
| --- | --- |
| path/to/storage-location/frame1.mp4 | Video 1 |
| path/to/storage-location/frame2.mp4 | Video 2 |
| path/to/storage-location/frame3.mp4 | |
| path/to/storage-location/frame4.mp4 | Video 3 |
Single images
A CSV file containing single images should contain two columns with the following mandatory headings: ‘ObjectURL’ and ‘Image title’. All headings are case-insensitive.
The ‘ObjectURL’ column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the image resource.
The ‘Image title’ column containing the image_title. If left blank, the original file name is used.
In the example below, files 1, 2, and 4 are assigned the names in the title column, while file 3 keeps its original file name.
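For illustration, a sketch of such a file (paths and titles are placeholders):

| ObjectUrl | Image title |
| --- | --- |
| path/to/storage-location/image1.jpg | Image 1 |
| path/to/storage-location/image2.jpg | Image 2 |
| path/to/storage-location/image3.jpg | |
| path/to/storage-location/image4.jpg | Image 3 |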
A CSV file containing image groups should contain three columns with the following mandatory headings: ‘ObjectURL’, ‘Image group title’, and ‘Create video’. All three headings are case-insensitive.
The ‘ObjectURL’ column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.
The ‘Image group title’ column containing the image_group_title. This field is mandatory, as it determines which image group a file will be assigned to.
In the example below, the first two URLs are grouped together into ‘Group 1’, while the following two files are grouped together into ‘Group 2’.
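A sketch of such a file, with placeholder paths (the false values follow from the ‘Create video’ rule described later for mixed uploads):

| ObjectUrl | Image group title | Create video |
| --- | --- | --- |
| path/to/storage-location/image1.jpg | Group 1 | false |
| path/to/storage-location/image2.jpg | Group 1 | false |
| path/to/storage-location/image3.jpg | Group 2 | false |
| path/to/storage-location/image4.jpg | Group 2 | false |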
A CSV file containing image sequences should contain three columns with the following mandatory headings: ‘ObjectURL’, ‘Image group title’, and ‘Create video’. All three headings are case-insensitive.
The ‘ObjectURL’ column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.
The ‘Image group title’ column containing the image_group_title. This field is mandatory, as it determines which image sequence a file will be assigned to. The dimensions of the image sequence are determined by the first file in the sequence.
The ‘Create video’ column. This can be left blank, as the default value is ‘true’.
In the example below, the first two URLs are grouped together into ‘Sequence 1’, while the second two files are grouped together into ‘Sequence 2’.
| ObjectUrl | Image group title | Create video |
| --- | --- | --- |
| path/to/storage-location/frame1.jpg | Sequence 1 | true |
| path/to/storage-location/frame2.jpg | Sequence 1 | true |
| path/to/storage-location/frame3.jpg | Sequence 2 | true |
| path/to/storage-location/frame4.jpg | Sequence 2 | true |
Image groups and image sequences are distinguished only by the ‘Create video’ column: it is set to true (or left blank) for image sequences, and false for image groups.
Image sequences require ‘write’ permissions against your storage bucket to save the compressed video.
DICOM
A CSV file containing DICOM files should contain two columns with the following mandatory headings: ‘ObjectURL’ and ‘Dicom title’. Both headings are case-insensitive.
The ‘ObjectURL’ column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.
The ‘Dicom title’ column containing the dicom_title. When two files are given the same title they are grouped into the same DICOM series. If left blank, the original file name is used.
In the example below, the first two files are grouped into ‘dicom series 1’, the next two files are grouped into ‘dicom series 2’, and the final file remains separate as ‘dicom series 3’.
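A sketch of such a file, with placeholder paths:

| ObjectUrl | Dicom title |
| --- | --- |
| path/to/storage-location/file1.dcm | dicom series 1 |
| path/to/storage-location/file2.dcm | dicom series 1 |
| path/to/storage-location/file3.dcm | dicom series 2 |
| path/to/storage-location/file4.dcm | dicom series 2 |
| path/to/storage-location/file5.dcm | dicom series 3 |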
You can upload multiple file types with a single CSV file by inserting a new header row each time there is a change of file type. Three headings are required if image sequences are included.
Since the ‘Create video’ column defaults to true, all files that are not image sequences must contain the value false.
The example below shows a CSV file that combines multiple data types:
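This is an illustrative sketch (paths and titles are placeholders) combining an image sequence and a video; note the repeated header row where the file type changes, and the false value for the video:

```csv
ObjectUrl,Image group title,Create video
path/to/storage-location/frame1.jpg,Sequence 1,true
path/to/storage-location/frame2.jpg,Sequence 1,true
ObjectUrl,Video title,Create video
path/to/storage-location/video1.mp4,Video 1,false
```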
To use your data in Encord, it must be uploaded to the Encord Files storage. Once uploaded, your data can be reused across multiple Projects; the files themselves contain no labels or annotations. Files stores your data, while Projects store your labels. The following script creates a folder in Files and uses your AWS integration to register data in that folder.
The following script creates a new folder in Files and initiates uploads from AWS. It works for all file types.
If “Upload is still in progress, try again later!” is returned, use the status-check script further below to see whether the upload has finished.
Ensure that you:
Replace <private_key_path> with the path to your private key.
Replace <integration_title> with the title of the integration you want to use.
Replace <folder_name> with the folder name. The scripts assume that the specified folder name is unique.
Replace A folder to store my files with a meaningful description for your folder.
Replace "my": "folder_metadata" with any metadata you want to add to the folder.
The script has several possible outputs:
“Upload is still in progress, try again later!”: The registration has not finished. Run this script again later to check if the data registration has finished.
“Upload completed”: The registration completed. If any files failed to upload, the URLs are listed.
“Upload failed”: The entire registration failed, and not just individual files. Ensure your JSON file is formatted correctly.
```python
# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

# Instantiate user client. Replace <private_key_path> with the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Specify the integration you want to use
integrations = user_client.get_cloud_integrations()
integration_idx = [i.title for i in integrations].index("<integration_title>")
integration = integrations[integration_idx].id

# Create a storage folder
folder_name = "<folder_name>"
folder_description = "A folder to store my files"
folder_metadata = {"my": "folder_metadata"}
storage_folder = user_client.create_storage_folder(
    folder_name, folder_description, client_metadata=folder_metadata
)

# Initiate cloud data registration
upload_job_id = storage_folder.add_private_data_to_folder_start(
    integration_id=integration,
    private_files="path/to/json/file.json",
    ignore_errors=True,
)

# Check upload status
res = storage_folder.add_private_data_to_folder_get_result(
    upload_job_id, timeout_seconds=5
)
print(f"Execution result: {res}")

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed")
    if res.unit_errors:
        print("The following URLs failed to upload:")
        for e in res.unit_errors:
            print(e.object_urls)
else:
    print(f"Upload failed: {res.errors}")
```
If Step 5 returns "Upload is still in progress, try again later!", run the following code to query the Encord server again. Ensure that you replace <upload_job_id> with the job ID output by the previous code. In the example above, upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727.
The script has several possible outputs:
“Upload is still in progress, try again later!”: The registration has not finished. Run this script again later to check if the data registration has finished.
“Upload completed”: The registration completed. If any files failed to upload, the URLs are listed.
“Upload failed”: The entire registration failed, and not just individual files. Ensure your JSON file is formatted correctly.
```python
# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

# Replace <upload_job_id> with the job ID returned by the registration script
upload_job_id = "<upload_job_id>"

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Retrieve the Storage folder the registration was started in.
# Replace <folder_name> with the folder's name
folder_name = "<folder_name>"
folders = list(user_client.find_storage_folders(search=folder_name, page_size=1))
storage_folder = folders[0]

# Check upload status
res = storage_folder.add_private_data_to_folder_get_result(
    upload_job_id, timeout_seconds=5
)
print(f"Execution result: {res}")

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed")
    if res.unit_errors:
        print("The following URLs failed to upload:")
        for e in res.unit_errors:
            print(e.object_urls)
else:
    print(f"Upload failed: {res.errors}")
```
Creating a Dataset and adding files to a Dataset are two distinct steps. Click here to learn how to add data to an existing Dataset.
Datasets cannot be deleted using the SDK or the API. Use the Encord platform to delete Datasets.
The following example creates a Dataset called “Houses” that expects data hosted on AWS S3.
Substitute <private_key_path> with the file path for your private key.
Replace “Houses” with the name you want your Dataset to have.
```python
# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import StorageLocation

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Create a new dataset
dataset_response = user_client.create_dataset(
    dataset_title="Houses",
    dataset_type=StorageLocation.AWS,
    create_backing_folder=False,
)

# Print the CreateDatasetResponse object to verify the Dataset creation
print(dataset_response)

# Print the storage location
print("Using storage location: AWS")
```
Now that you have uploaded your data and created a Dataset, it is time to add your files to the Dataset. The following script adds all files in a specified folder to a Dataset.
Replace <private_key_path> with the path to your private key.
Replace <folder_name> with the name of the Storage folder that contains your files.
Replace <dataset_hash> with the hash of the Dataset you want to add the data units to.
Files added to the folder at a later time will not be automatically added to the Dataset.
All files
```python
from encord import EncordUserClient

# Authentication
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Find the storage folder by name
folder_name = "<folder_name>"  # Replace with your folder's name
folders = list(user_client.find_storage_folders(search=folder_name, page_size=1))
dataset = user_client.get_dataset("<dataset_hash>")

# Ensure the folder was found
if folders:
    storage_folder = folders[0]

    # List all data units in the folder
    items = list(storage_folder.list_items())

    # Collect all item UUIDs
    item_uuids = [item.uuid for item in items]

    # Output the retrieved data units
    for item in items:
        print(f"UUID: {item.uuid}, Name: {item.name}, Type: {item.item_type}")

    # Link all items to the Dataset at once if there are any
    if item_uuids:
        dataset.link_items(item_uuids)
else:
    print("Folder not found.")
```
After adding your files to the Dataset, verify that all the files you expect to be there made it into the Dataset.
The following script prints the URLs of all the files in a Dataset. Ensure that you:
Replace <private_key_path> with the path to your private key.
Replace <dataset_hash> with the hash of your Dataset.
Sample Code
```python
# Import dependencies
from encord import EncordUserClient, Dataset

# Instantiate the user client
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Collect the file links of all files in the Dataset
dataset_level_file_links = []
dataset: Dataset = user_client.get_dataset("<dataset_hash>")
for data in dataset.list_data_rows():
    dataset_level_file_links.append(data.file_link)

print(dataset_level_file_links)
```