Before you can use data from your cloud storage with the Encord platform, you need to configure your cloud storage to work with Encord. Once the integration between Encord and your cloud storage is complete, you can use your data in Encord.

There are two parts to setting up a GCP integration in Encord:

  1. Setting up a GCP bucket on the Google Cloud Platform so that it can be integrated with Encord. This includes setting up a CORS configuration for your bucket.

  2. Integrating your GCP bucket with Encord's platform.
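The CORS configuration mentioned in step 1 is a JSON document applied to your bucket. A minimal sketch is shown below; the origin, methods, and header values here are illustrative, so use the values given in the GCP setup guide for your account.

```json
[
  {
    "origin": ["https://app.encord.com"],
    "method": ["GET"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
```

Saved as `cors.json`, the configuration can be applied to a bucket with `gsutil cors set cors.json gs://your-bucket-name`.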

Step 1: Set Up GCP Bucket

Step 2: Create Encord Integration

Once your GCP storage is set up and configured, you are ready to create the integration in the Encord platform.

In the Integrations section of the Encord platform, click + Add integration to create a new integration.

Create the integration by selecting GCP at the top of the chooser. Enter a name for the integration, and enter the name of the bucket you wish to make available in the second dropdown of the GCP integration window.

Optionally, select Strict client-only access if you would like Encord to sign URLs but refrain from downloading any media files onto Encord servers. With this setting enabled, server-side media features are not available. Read more about this feature here.

Click Create to create the GCP integration.

Step 3: Create Dataset using SDK

Your data is added to Datasets, which can be reused across multiple projects and contain no labels or annotations themselves.

Each Dataset is identified using a unique "<dataset_hash>" - a unique ID that can be found within the Dataset's Settings, or the URL in the Encord platform when the Dataset is being viewed.

Use the following code to create a Dataset.

  • Replace the <private_key_path> variable with the file path to the SSH private key you use to authenticate with Encord.

  • Replace the <Dataset_Title> variable with a meaningful name for the Dataset.

In the following example, the variables are replaced with:

  • <private_key_path>: /Users/chris/encord-ssh-private-key.txt

  • <Dataset_Title>: Training Data Dataset 1


# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import StorageLocation

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="<private_key_path>")

# Create a dataset by specifying a title as well as a storage location
dataset = user_client.create_dataset(
    "<Dataset_Title>", StorageLocation.GCP
)

# Prints the dataset
print(dataset)
# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import StorageLocation

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="/Users/chris/encord-ssh-private-key.txt")

# Create a dataset by specifying a title as well as a storage location
dataset = user_client.create_dataset(
    "Training Data Dataset 1", StorageLocation.GCP
)

# Prints the dataset
print(dataset)

Step 4: Create JSON or CSV for import

All types of data (videos, images, image groups, image sequences, and DICOM) from a private cloud are added to a Dataset in the same way, by using a JSON or CSV file. The file includes links to all of the images, image groups, videos and DICOM files in your cloud storage.

ℹ️

Note

For a list of supported file formats for each data type, go here

❗️

CRITICAL INFORMATION

Encord supports file names of up to 300 characters for any file or video you upload.

Create JSON file for import

For detailed information about the JSON file format used for import go here.

The information provided about each of the following data types is designed to get you up and running as quickly as possible without going too deeply into the why or how. Look at the template for each data type, then the examples, and adjust the examples to suit your needs.

ℹ️

Note

If skip_duplicate_urls is set to true, all object URLs that exactly match existing images/videos in the dataset are skipped.
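Because skip_duplicate_urls only skips exact string matches, you may prefer to deduplicate object URLs yourself before writing the JSON file. A small order-preserving sketch (the dedupe_urls helper is our own, not part of the Encord SDK):

```python
def dedupe_urls(urls):
    """Drop exact-duplicate object URLs while keeping first-seen order."""
    return list(dict.fromkeys(urls))


urls = [
    "https://storage/boat_lake.MP4",
    "https://storage/cyclists.MP4",
    "https://storage/boat_lake.MP4",  # exact duplicate, dropped
]
print(dedupe_urls(urls))
```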

Videos

The JSON for videos template shows the correct format for importing videos into Encord.

Example 1 for videos imports videos into Encord.

Example 2 for videos imports videos with an Encord title and custom metadata for each video. Custom metadata only appears in the Encord UI in Active Cloud as an option to filter your data in an Active Project.

{
  "videos": [
    {
      "objectUrl": "file/path/to/videos/video-001.file-extension"
    },
    {
      "objectUrl": "file/path/to/videos/video-002.file-extension"
    },
    {
      "objectUrl": "file/path/to/videos/video-003.file-extension",
      "title": "video-title.file-extension",
      "clientMetadata": {"optional": "metadata"}
    },
    {
      "objectUrl": "file/path/to/videos/video-004.file-extension",
      "title": "video-title.file-extension",
      "clientMetadata": {"optional": "metadata"}
    }
  ],
  "skip_duplicate_urls": true
}
{
  "videos": [
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/BerghouseLeopardJog.mp4"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/blue_bus_video.mp4"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/bluemlisalphutte_flyover.mp4"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/boat_lake_normalized.mp4"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/boat_lake.MP4"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/boat_lake.MP4"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/snow_sled.MOV"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/cyclists.MP4"
    }
  ],
  "skip_duplicate_urls": true
}
{
    "videos": [
      {
        "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/blueberries-001.mp4",
        "title": "Blueberries video 1",
        "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
      },
      {
        "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/blueberries-002.mp4",
        "title": "Blueberries video 2",
        "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
      },
      {
        "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/blueberries-003.mp4",
        "title": "Blueberries video 3",
        "clientMetadata": {"Location": "NY", "Harvested": "Mid July"}
      },
      {
        "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/cherries-001.MOV",
        "title": "Cherries video 1",
        "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
      },
      {
        "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/cherries-002.MP4",
        "title": "Cherries video 2",
        "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
      }
    ],
  "skip_duplicate_urls": true
  }
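Before starting an upload, you can sanity-check the import file locally. The helper below is our own illustration (not part of the Encord SDK): it confirms the file parses as JSON, that every video entry has an objectUrl, and that file names respect the 300-character limit noted above.

```python
import json

MAX_FILENAME_LEN = 300  # file-name limit noted in this guide


def check_video_spec(path):
    """Return a list of problems found in a video-import JSON file."""
    with open(path) as f:
        spec = json.load(f)  # raises an error if the file is not valid JSON
    problems = []
    for i, entry in enumerate(spec.get("videos", [])):
        url = entry.get("objectUrl")
        if not url:
            problems.append(f"videos[{i}]: missing objectUrl")
            continue
        if len(url.rsplit("/", 1)[-1]) > MAX_FILENAME_LEN:
            problems.append(f"videos[{i}]: file name exceeds {MAX_FILENAME_LEN} characters")
    return problems
```

Running `check_video_spec("videos.json")` returns an empty list when the file looks importable; the same pattern extends to the images, image_groups, and dicom_series keys.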

Single images

For detailed information about the JSON file format used for import go here.

The JSON structure for single images parallels that of videos.

The JSON for images template shows the correct format for importing images into Encord.

Example 1 for images imports the images listed in the JSON into Encord.

Example 2 for images imports images with an Encord title and custom metadata for each image. Custom metadata only appears in the Encord UI in Active Cloud as an option to filter your data in an Active Project.

{
  "images": [
    {
      "objectUrl": "file/path/to/images/file-name-01.file-extension"
    },
    {
      "objectUrl": "file/path/to/images/file-name-02.file-extension"
    },
    {
      "objectUrl": "file/path/to/images/file-name-03.file-extension",
      "title": "image-title.file-extension",
      "clientMetadata": {"optional": "metadata"}
    },
    {
      "objectUrl": "file/path/to/images/file-name-04.file-extension",
      "title": "image-title.file-extension",
      "clientMetadata": {"optional": "metadata"}
    }
  ]
}
{
  "images": [
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/0001.jpg"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/0002.jpg"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/0003.jpg"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/DALL%C2%B7E+2022-09-08+18.53.25+-+firefighter+extinguishing+flames+around+computer+in+software+office+overgrown+by+foliage.png"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/DALL%C2%B7E+2022-12-08+22.16.52+-+steampunk+combustion+engine+that+is+fueled+by+data+and+produces+computer+vision+algorithms.png"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/large_images/pexels-ivo-rainha-00057.png"
    }
  ]
}
{
  "images": [
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/blueberry-0001.jpg",
      "title": "Blueberry 0001",
      "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/blueberry-0002.jpg",
      "title": "Blueberry 0002",
      "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/blueberry-0003.jpg",
      "title": "Blueberry 0003",
      "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/cherry-0001.jpg",
      "title": "Cherry 0001",
      "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/cherry-0002.jpg",
      "title": "Cherry 0002",
      "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/cherry-0003.jpg",
      "title": "Cherry 0003",
      "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
    }
  ]
}

Image groups

For detailed information about the JSON file format used for import go here.

  • Image groups are collections of images that are processed as one annotation task.
  • Images within image groups remain unaltered, meaning that images of different sizes and resolutions can form an image group without the loss of data.
  • Image groups do NOT require 'write' permissions to your cloud storage.
  • Custom metadata is defined per image group, not per image.
  • If skip_duplicate_urls is set to true, all URLs exactly matching existing image groups in the dataset are skipped.

ℹ️

Note

The position of each image within the group needs to be specified in its key (objectUrl_{position_number}).

The JSON for image groups template shows the correct format for importing image groups into Encord.

Example 1 for image groups imports the image groups listed in the JSON into Encord.

Example 2 for image groups imports image groups with an Encord title and custom metadata for each image group. Custom metadata only appears in the Encord UI in Active Cloud as an option to filter your data in an Active Project.

{
  "image_groups": [
    {
      "title": "<title 1>",
      "createVideo": false,
      "objectUrl_0": "file/path/to/images/file-name-01.file-extension",
      "objectUrl_1": "file/path/to/images/file-name-02.file-extension",
      "objectUrl_2": "file/path/to/images/file-name-03.file-extension"
    },
    {
      "title": "<title 2>",
      "createVideo": false,
      "objectUrl_0": "file/path/to/images/file-name-01.file-extension",
      "objectUrl_1": "file/path/to/images/file-name-02.file-extension",
      "objectUrl_2": "file/path/to/images/file-name-03.file-extension",
      "clientMetadata": {"optional": "metadata"}
    }
  ]
}

{
  "image_groups": [
    {
      "title": "Image group 01",
      "createVideo": false,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/0001.jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/0002.jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/DALL%C2%B7E+2022-09-08+18.53.25+-+firefighter+extinguishing+flames+around+computer+in+software+office+overgrown+by+foliage.png"
    },
    {
      "title": "Image group 02",
      "createVideo": false,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/thing-0001.jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/thing-0002.jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/thing-0003.jpg"
    }
  ]
}


{
  "image_groups": [
    {
      "title": "Blueberries image group 1",
      "createVideo": false,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/blueberry-0001.jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/blueberry-0002.jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/blueberry-0003.jpg",
      "clientMetadata": {"Location": "NY", "Harvested": "End July"}
    },
    {
      "title": "Cherries image group 1",
      "createVideo": false,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/cherry-0001.jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/cherry-0002.jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/cherry-0003.jpg",
      "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
    }
  ]
}
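Because each image's position is encoded in its key (objectUrl_0, objectUrl_1, and so on), the JSON is easy to generate programmatically. A sketch (build_image_group is our own helper, not an SDK function; the URLs are illustrative):

```python
import json


def build_image_group(title, urls, create_video=False, metadata=None):
    """Build one image-group entry with positional objectUrl_{n} keys."""
    entry = {"title": title, "createVideo": create_video}
    for n, url in enumerate(urls):
        entry[f"objectUrl_{n}"] = url
    if metadata is not None:
        entry["clientMetadata"] = metadata
    return entry


spec = {
    "image_groups": [
        build_image_group(
            "Blueberries image group 1",
            [
                "https://storage.cloud.google.com/encord-image-bucket/blueberry-0001.jpg",
                "https://storage.cloud.google.com/encord-image-bucket/blueberry-0002.jpg",
            ],
            metadata={"Location": "NY", "Harvested": "End July"},
        )
    ]
}
print(json.dumps(spec, indent=2))
```

Setting create_video=True produces an image sequence entry instead, as described in the next section.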

Image sequences

For detailed information about the JSON file format used for import go here.

  • Image sequences are collections of images that are processed as one annotation task and represented as a video.
  • Images within image sequences may be altered as images of varying sizes and resolutions are made to match that of the first image in the sequence.
  • Creating Image sequences from cloud storage requires 'write' permissions, as new files have to be created in order to be read as a video.
  • Each object in the image_groups array with the createVideo flag set to true represents a single image sequence.
  • Custom client metadata is defined per image sequence, not per image.
  • If skip_duplicate_urls is set to true, all URLs exactly matching existing image sequences in the dataset are skipped.

👍

Tip

The only difference between adding image groups and image sequences using a JSON file is that image sequences require the createVideo flag to be set to true. Both use the key image_groups.

ℹ️

Note

The position of each image within the sequence needs to be specified in the key (objectUrl_{position_number}).

❗️

CRITICAL INFORMATION

Encord supports up to 32,767 entries (21:50 minutes) for a single image sequence. We recommend up to 10,000 to 15,000 entries for a single image sequence for best performance. If you need a longer sequence, we recommend using video instead of an image sequence.
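If a capture would exceed these limits, you can split the frame list into several sequences before generating the JSON. A minimal sketch using an illustrative chunk_frames helper (not part of the Encord SDK):

```python
def chunk_frames(urls, max_per_sequence=10_000):
    """Split a long frame list into chunks within the recommended sequence size."""
    return [urls[i:i + max_per_sequence] for i in range(0, len(urls), max_per_sequence)]


# 25,000 frames become three sequences of 10,000 + 10,000 + 5,000 entries
frames = [f"https://storage/frame-{i:05}.jpg" for i in range(25_000)]
sequences = [
    {"title": f"Sequence {n + 1}", "createVideo": True,
     **{f"objectUrl_{i}": url for i, url in enumerate(chunk)}}
    for n, chunk in enumerate(chunk_frames(frames))
]
```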

{
  "image_groups": [
    {
      "title": "<title 1>",
      "createVideo": true,
      "objectUrl_0": "<object url>"
    },
    {
      "title": "<title 2>",
      "createVideo": true,
      "objectUrl_0": "<object url>",
      "objectUrl_1": "<object url>",
      "objectUrl_2": "<object url>",
      "clientMetadata": {"optional": "metadata"}
    }
  ]
}

{
  "image_groups": [
    {
      "title": "Image sequence 001",
      "createVideo": true,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/01.jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/02.jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/DALL%C2%B7E+2022-09-08+18.53.25+-+firefighter+extinguishing+flames+around+computer+in+software+office+overgrown+by+foliage.png"
    },
    {
      "title": "Image sequence 002",
      "createVideo": true,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/thing-01.jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/thing-02.jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/thing-03.jpg"
    }
  ]
}


{
  "image_groups": [
    {
      "title": "Blueberries sequence 01",
      "createVideo": true,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/blueberry-highbush-01.jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/blueberry-highbush-02.jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/blueberry-highbush-03.jpg",
      "clientMetadata": {"Location": "ON", "Harvested": "Mid July"}
    },
    {
      "title": "Cherries sequence 01",
      "createVideo": true,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/cherry-bing-01.jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/cherry-bing-02.jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/cherry-bing-03.jpg",
      "clientMetadata": {"Location": "ON", "Harvested": "End July"}
    }
  ]
}

DICOM

For detailed information about the JSON file format used for import go here.

  • The dicom_series array can contain one or more DICOM series.
  • Each series requires a title and at least one object URL, as shown in the example below.
  • If skip_duplicate_urls is set to true, all object URLs exactly matching existing DICOM files in the Dataset will be skipped.

ℹ️

Note

Custom metadata is distinct from patient metadata, which is included in the .dcm file and does not have to be specified during the upload to Encord.

The following is an example JSON for uploading three DICOM series belonging to a study. Each title and object URL correspond to individual DICOM series.

  • The first series contains only a single object URL, as it is composed of a single file.
  • The second series contains 3 object URLs, as it is composed of three separate files.
  • The third series contains 2 object URLs, as it is composed of two separate files.
{
  "dicom_series": [
    {
      "title": "Series-1",
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/study1-series1-file.dcm"
    },
    {
      "title": "Series-2",
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/study1-series2-file1.dcm",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/study1-series2-file2.dcm",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/study1-series2-file3.dcm"
    },
    {
      "title": "Series-3",
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/study1-series3-file1.dcm",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/study1-series3-file2.dcm"
    }
  ]
}


Multiple file types

You can upload multiple file types using a single JSON file. The example below shows 1 image, 2 videos, 2 image sequences, and 1 image group.


{
  "images": [
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/Image1.png"
    }
  ],
  "videos": [
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/Cooking.mp4"
    },
    {
      "objectUrl": "https://storage.cloud.google.com/encord-image-bucket/Oranges.mp4"
    }
  ],
  "image_groups": [
    {
      "title": "apple-samsung-light",
      "createVideo": true,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/1-Samsung-S4-Light+Environment/1+(32).jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/1-Samsung-S4-Light+Environment/1+(33).jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/1-Samsung-S4-Light+Environment/1+(34).jpg",
      "objectUrl_3": "https://storage.cloud.google.com/encord-image-bucket/1-Samsung-S4-Light+Environment/1+(35).jpg"
    },
    {
      "title": "apple-samsung-dark",
      "createVideo": true,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/2-samsung-S4-Dark+Environment/2+(32).jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/2-samsung-S4-Dark+Environment/2+(33).jpg",
      "objectUrl_2": "https://storage.cloud.google.com/encord-image-bucket/2-samsung-S4-Dark+Environment/2+(34).jpg",
      "objectUrl_3": "https://storage.cloud.google.com/encord-image-bucket/2-samsung-S4-Dark+Environment/2+(35).jpg"
    },
    {
      "title": "apple-ios-light",
      "createVideo": false,
      "objectUrl_0": "https://storage.cloud.google.com/encord-image-bucket/3-IOS-4-Light+Environment/3+(32).jpg",
      "objectUrl_1": "https://storage.cloud.google.com/encord-image-bucket/3-IOS-4-Light+Environment/3+(33).jpg"
    }
  ]
}


CSV format

In the CSV file format, the column headers specify which type of data is being uploaded. You can add a single file type at a time, or combine multiple data types in a single CSV file.

Details for each data format are given in the sections below.

🚧

Caution

  • Object URLs can't contain whitespace.
  • For backwards compatibility reasons, a single column CSV is supported. A file with the single ObjectUrl column is interpreted as a request for video upload. If your objects are of a different type (for example, images), this error displays: "Expected a video, got a file of type XXX".
Videos

A CSV file containing videos MUST contain two columns with the following mandatory column headings:
'ObjectURL' and 'Video title'. Both headings are case-insensitive.

  • The 'ObjectURL' column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the video resource.

  • The 'Video title' column containing the video_title. If left blank, the original file name is used.

In the example below files 1, 2 and 4 are assigned the names in the title column, while file 3 keeps its original file name.

| ObjectUrl | Video title |
| --- | --- |
| https://storage/frame1.mp4 | Video 1 |
| https://storage/frame2.mp4 | Video 2 |
| https://storage/frame3.mp4 | |
| https://storage/frame4.mp4 | Video 3 |

Single images

A CSV file containing single images MUST contain two columns with the following mandatory headings:
'ObjectURL' and 'Image title'. Both headings are case-insensitive.

  • The 'ObjectURL' column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the image resource.

  • The 'Image title' column containing the image_title. If left blank, the original file name is used.

In the example below files 1, 2 and 4 are assigned the names in the title column, while file 3 keeps its original file name.

| ObjectUrl | Image title |
| --- | --- |
| https://storage/frame1.jpg | Image 1 |
| https://storage/frame2.jpg | Image 2 |
| https://storage/frame3.jpg | |
| https://storage/frame4.jpg | Image 3 |

Image groups

A CSV file containing image groups MUST contain three columns with the following mandatory headings:
'ObjectURL', 'Image group title', and 'Create video'. All three headings are case-insensitive.

  • The 'ObjectURL' column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.

  • The 'Image group title' column containing the image_group_title. This field is mandatory, as it determines which image group a file will be assigned to.

  • The 'Create video' column. Set this to 'false' for image groups; a value of 'true' creates an image sequence instead.

In the example below the first two URLs are grouped together into 'Group 1', while the following two files are grouped together into 'Group 2'.

| ObjectUrl | Image group title | Create video |
| --- | --- | --- |
| https://storage/frame1.jpg | Group 1 | false |
| https://storage/frame2.jpg | Group 1 | false |
| https://storage/frame3.jpg | Group 2 | false |
| https://storage/frame4.jpg | Group 2 | false |

ℹ️

Note

Image groups do not require 'write' permissions.

Image sequences

A CSV file containing image sequences MUST contain three columns with the following mandatory headings: 'ObjectURL', 'Image group title', and 'Create video'. All three headings are case-insensitive.

  • The 'ObjectURL' column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.

  • The 'Image group title' column containing the image_group_title. This field is mandatory, as it determines which image sequence a file will be assigned to. The dimensions of the image sequence are determined by the first file in the sequence.

  • The 'Create video' column. This can be left blank, as the default value is 'true'.

In the example below the first two URLs are grouped together into 'Sequence 1', while the second two files are grouped together into 'Sequence 2'.

| ObjectUrl | Image group title | Create video |
| --- | --- | --- |
| https://storage/frame1.jpg | Sequence 1 | true |
| https://storage/frame2.jpg | Sequence 1 | true |
| https://storage/frame3.jpg | Sequence 2 | true |
| https://storage/frame4.jpg | Sequence 2 | true |

👍

Tip

Image groups and image sequences are only distinguished by the presence of the 'Create video' column.

ℹ️

Note

Image sequences require 'write' permissions against your storage bucket to save the compressed video.

DICOM

A CSV file containing DICOM files MUST contain two columns with the following headings: 'ObjectURL' and 'Series title'. Both headings are case-insensitive.

  • The 'ObjectURL' column contains the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.

  • The 'Series title' column contains the dicom_title. When two files are given the same title they are grouped into the same DICOM series. If left blank, the original file name is used.

In the example below the first two files are grouped into 'dicom series 1', the next two files are grouped into 'dicom series 2', while the final file will remain separated as 'dicom series 3'.

| ObjectUrl | Series title |
| --- | --- |
| https://storage/frame1.dcm | dicom series 1 |
| https://storage/frame2.dcm | dicom series 1 |
| https://storage/frame3.dcm | dicom series 2 |
| https://storage/frame4.dcm | dicom series 2 |
| https://storage/frame5.dcm | dicom series 3 |

Multiple file types

You can upload multiple file types with a single CSV file by adding a new header row each time the file type changes. Three columns are required if image sequences are included.

🚧

Caution

Because the 'Create video' column defaults to "true", all files that are not image sequences must have the value "false" in this column.

The example below shows a CSV file for the following:

  • Two image sequences composed of 2 files each.
  • One image group composed of 2 files.
  • One single image.
  • One video.
| ObjectUrl | Image group title | Create video |
| --- | --- | --- |
| https://storage/frame1.jpg | Sequence 1 | true |
| https://storage/frame2.jpg | Sequence 1 | true |
| https://storage/frame3.jpg | Sequence 2 | true |
| https://storage/frame4.jpg | Sequence 2 | true |
| https://storage/frame5.jpg | Group 1 | false |
| https://storage/frame6.jpg | Group 1 | false |

| ObjectUrl | Image title | Create video |
| --- | --- | --- |
| https://storage/frame1.jpg | Image 1 | false |

| ObjectUrl | Video title | Create video |
| --- | --- | --- |
| https://storage/video.mp4 | Video 1 | false |
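The mixed-type CSV above can be generated with Python's csv module, writing a fresh header row whenever the file type changes. A sketch with illustrative URLs and titles:

```python
import csv

rows = [
    # Header row for image sequences / image groups (three columns).
    ["ObjectUrl", "Image group title", "Create video"],
    ["https://storage/frame1.jpg", "Sequence 1", "true"],
    ["https://storage/frame2.jpg", "Sequence 1", "true"],
    ["https://storage/frame5.jpg", "Group 1", "false"],
    ["https://storage/frame6.jpg", "Group 1", "false"],
    # A new header row marks the switch to single images.
    ["ObjectUrl", "Image title", "Create video"],
    ["https://storage/image1.jpg", "Image 1", "false"],
    # And another header row for videos.
    ["ObjectUrl", "Video title", "Create video"],
    ["https://storage/video.mp4", "Video 1", "false"],
]

with open("import_spec.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```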

Step 5: Import data using the SDK

The following script only initializes the upload job, allowing you to use your computer for other purposes while the data upload runs in the background. You have to manually query Encord servers to find out whether the upload job completed.

In the following example, the variables are replaced with:

  • <private_key_path>: /Users/chris/encord-ssh-private-key.txt

  • path/to/json/file.json: /Users/chris/training_data_1.json

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

# Instantiate user client. Replace <private_key_path> with the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="<private_key_path>")

# Specify the dataset you want to upload data to by replacing <dataset_hash> with the dataset hash
dataset = user_client.get_dataset("<dataset_hash>")

# Find the integration to upload with by matching its title. This example uses the integration named "GCP"
integrations = user_client.get_cloud_integrations()
integration_idx = [i.title for i in integrations].index("GCP")
integration = integrations[integration_idx].id

# Initiate cloud data upload. Replace path/to/json/file.json with the path to your JSON file
upload_job_id = dataset.add_private_data_to_dataset_start(
    integration, "path/to/json/file.json"
)

# timeout_seconds determines how long the code will wait after initiating upload until continuing and checking upload status
res = dataset.add_private_data_to_dataset_get_result(upload_job_id, timeout_seconds=5)
print(f"Execution result: {res}")


if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed without errors")
else:
    print(f"Errors: {res.errors}")

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

# Instantiate user client. Replace <private_key_path> with the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="/Users/chris/encord-ssh-private-key.txt")

# Specify the dataset you want to upload data to by replacing <dataset_hash> with the dataset hash
dataset = user_client.get_dataset("<dataset_hash>")

# Find the integration to upload with by matching its title. This example uses the integration named "GCP"
integrations = user_client.get_cloud_integrations()
integration_idx = [i.title for i in integrations].index("GCP")
integration = integrations[integration_idx].id

# Initiate cloud data upload. Replace path/to/json/file.json with the path to your JSON file
upload_job_id = dataset.add_private_data_to_dataset_start(
    integration, "/Users/chris/training_data_1.json"
)

# timeout_seconds determines how long the code will wait after initiating upload until continuing and checking upload status
res = dataset.add_private_data_to_dataset_get_result(upload_job_id, timeout_seconds=5)
print(f"Execution result: {res}")


if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed without errors")
else:
    print(f"Errors: {res.errors}")
add_private_data_to_dataset job started with upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727.
SDK process can be terminated, this will not affect successful job execution.
You can follow the progress in the web app via notifications.
add_private_data_to_dataset job completed with upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727.
Execution result: DatasetDataLongPolling(status=<LongPollingStatus.DONE: 'DONE'>, data_hashes_with_titles=[DatasetDataInfo(data_hash='cd42333d-8014-46q7-837b-5bf68b9b5', title='funny_image.jpg')], errors=[], units_pending_count=0, units_done_count=1, units_error_count=0)
Upload completed without errors

Step 6: Check data upload

If Step 5 returns "Upload is still in progress, try again later!", run the following code to query the Encord server again. Replace upload_job_id with the upload_job_id output by the previous code. In the example above, upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727.

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

# Instantiate user client
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="/Users/encord/.ssh/new-key-db-private-key.txt")

# Specify the dataset the upload targets by replacing <dataset_hash> with the dataset hash
dataset = user_client.get_dataset("<dataset_hash>")

# Check upload status. Replace <upload_job_id> with the upload_job_id output in Step 5
res = dataset.add_private_data_to_dataset_get_result("<upload_job_id>", timeout_seconds=5)
print(f"Execution result: {res}")

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed without errors")
else:
    print(f"Errors: {res.errors}")
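Rather than re-running the check by hand, you can wrap the status query in a small polling loop. The wait_for_upload helper below is our own sketch, not an SDK function; in practice you would pass it something like lambda: dataset.add_private_data_to_dataset_get_result(upload_job_id, timeout_seconds=5).

```python
import time


def wait_for_upload(get_result, poll_seconds=30, max_wait_seconds=3600):
    """Call get_result() until the job leaves the PENDING state or we time out.

    get_result must return an object with a .status attribute whose .name is
    "PENDING" while the job is still running (as LongPollingStatus is).
    """
    waited = 0.0
    while True:
        res = get_result()
        if res.status.name != "PENDING":
            return res
        if waited >= max_wait_seconds:
            raise TimeoutError("upload still pending after max_wait_seconds")
        time.sleep(poll_seconds)
        waited += poll_seconds
```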

Step 7: Verify all the data from your cloud storage is available in Encord

After the import finishes, verify that all the data from your cloud storage is available in Encord.

The example replaces the following variables:

  • <private_key_path>: /Users/chris/encord-ssh-private-key.txt

  • <project_hash>: 7d4ead9c-4087-4832-a301-eb2545e7d43b

from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

project = user_client.get_project(
    project_hash="<project_hash>"
    )

for log_line in project.list_label_rows_v2():
    data_list = project.get_data(log_line.data_hash, get_signed_url=True)
    print(data_list)

from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="/Users/chris/encord-ssh-private-key.txt"
)

project = user_client.get_project(
    project_hash="7d4ead9c-4087-4832-a301-eb2545e7d43b"
)

for log_line in project.list_label_rows_v2():
    data_list = project.get_data(log_line.data_hash, get_signed_url=True)
    print(data_list)


## Step 8: Prepare your data for label/annotation import


[block:html]
{
  "html": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n    <meta charset=\"UTF-8\">\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n    <title>Clickable Div</title>\n    <style>\n        .clickable-div {\n            display: inline-block;\n            width: 200px;\n            height: 50px;\n            background-color: #ffffff;\n            border: solid;\n            text-align: center;\n            line-height: 50px;\n            color: #000000;\n            text-decoration: none;\n            margin: 10px;\n        }\n\n        .clickable-div:hover {\n            background-color: #ededff;\n        }\n    </style>\n<a href=\"https://docs.encord.com/reference/sdk-prepare-to-label\" class=\"clickable-div\">Prepare Data for Labels</a>\n</body>\n</html>"
}
[/block]


## Step 9: Import labels/annotations


[block:html]
{
  "html": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n    <meta charset=\"UTF-8\">\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n    <title>Clickable Div</title>\n    <style>\n        .clickable-div {\n            display: inline-block;\n            width: 200px;\n            height: 50px;\n            background-color: #ffffff;\n            border: solid;\n            text-align: center;\n            line-height: 50px;\n            color: #000000;\n            text-decoration: none;\n            margin: 10px;\n        }\n\n        .clickable-div:hover {\n            background-color: #ededff;\n        }\n    </style>\n<a href=\"https://docs.encord.com/reference/sdk-import-labels-annotations\" class=\"clickable-div\">Import Labels/Annotations</a>\n</body>\n</html>"
}
[/block]