Import Data from OTC (SDK)
Using cloud storage data in Encord is a multi-step process:
- Set up your cloud storage so Encord can access your data
- Create a cloud storage integration on Encord to link to your cloud storage
- Create a JSON or CSV file to import your data
- Create a Dataset
- Perform the import using the JSON or CSV file
Step 1: Set Up OTC Integration
Before you can do anything with the Encord platform and cloud storage, you must configure your cloud storage to work with Encord. Once the integration is complete, you can use your data in Encord.
To integrate with Open Telekom Cloud, you need to:
- Create the account that accesses data in the Object Storage Service.
- Give the account read access to the desired buckets by creating a custom bucket policy.
- (Optional) If you have Cross-Origin Resource Sharing (CORS) configured on your buckets, make sure that *.encord.com is given read access.
- Create the integration by giving Encord access to that account's credentials.
Step 2: Create Encord Integration
On the Encord platform, enter the Access key ID and Secret access key, which are located in the access key file generated when the user was created. (If the access key has been misplaced, a new one can be created from the IAM User menu.)
Optionally, check the Strict client-only access box if you would like Encord to sign URLs but refrain from downloading any media files onto Encord servers. With this setting enabled, server-side media features are not available. Read more about this feature here.
Finally, click the Create button at the bottom of the pop-up. The integration appears in the list of integrations in the ‘Integrations’ tab.
Step 3: Create Metadata Schema
Before importing your custom metadata to Encord, we recommend that you import a metadata schema. Encord uses metadata schemas to validate custom metadata uploaded to Encord and to instruct Index and Active how to display your metadata.
Custom metadata keys can be namespaced so that different teams can work without interfering with one another. For example, team A could use video.description, while team B could use audio.description. Another example could be TeamName.MetadataKey. This approach maintains clarity and avoids key collisions across departments.
Benefits of Using a Metadata Schema
Using a metadata schema provides several benefits:
- Validation: Ensures that all custom metadata conforms to predefined data types, reducing errors during data import and processing.
- Consistency: Maintains uniformity in data types across different datasets and projects, which simplifies data management and analysis.
- Filtering and Sorting: Enhances the ability to filter and sort data efficiently in the Encord platform, enabling more accurate and quick data retrieval.
Metadata Schema Table
If you are unsure which data type to pick for a key, we recommend varchar as a versatile default. Use .add_scalar() to add a scalar key to your metadata schema, as in the sketch after the table below.
| Scalar Key | Description | Display Benefits |
|---|---|---|
| boolean | Binary data type with values "true" or "false". | Filtering by binary values |
| datetime | ISO 8601 formatted date and time. | Filtering by time and date |
| number | Numeric data type supporting float values. | Filtering by numeric values |
| uuid | Customer-specified unique identifier for a data unit. | Filtering by customer-specified unique identifier |
| varchar | Textual data type. Formerly string. string can be used as an alias for varchar, but we STRONGLY RECOMMEND that you use varchar. | Filtering by string |
| text | Text data with unlimited length (example: transcripts for audio). Formerly long_string. long_string can be used as an alias for text, but we STRONGLY RECOMMEND that you use text. | Storing and filtering large amounts of text |
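The following is a minimal sketch of adding scalar keys with the SDK. The key names `fps` and `description` are placeholders for illustration; adjust them to your own schema.

```python
from encord import EncordUserClient

# Authenticate with Encord (replace <private_key_path> with your key's path).
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Fetch your metadata schema.
metadata_schema = user_client.metadata_schema()

# "fps" and "description" are hypothetical keys used for illustration.
metadata_schema.add_scalar("fps", data_type="number")
metadata_schema.add_scalar("description", data_type="varchar")

# Persist the schema changes to Encord.
metadata_schema.save()
```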
Use add_enum and add_enum_options to add an enum and enum options to your metadata schema. A sketch follows the table below.
| Key | Description | Display Benefits |
|---|---|---|
| enum | Enumerated type with a predefined set of values. | Facilitates categorical filtering and data validation |
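A minimal sketch, reusing the `metadata_schema` object from the previous example; the `fruit` key and its values are placeholders.

```python
# Create an enum key with an initial set of options
# ("fruit" and its values are placeholders).
metadata_schema.add_enum("fruit", values=["apple", "banana"])

# Add a further option to the existing enum.
metadata_schema.add_enum_options("fruit", values=["orange"])

metadata_schema.save()
```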
Use add_embedding to add an embedding to your metadata schema. A sketch follows the table below.
| Key | Description | Display Benefits |
|---|---|---|
| embedding | Embedding with a size of 1 to 4096 for Index, or 1 to 2000 for Active. | Filtering by embeddings, similarity search, 2D scatter plot visualization (Coming Soon) |
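A minimal sketch, again reusing the `metadata_schema` object; the key name and embedding size are placeholders.

```python
# Register a 512-dimensional embedding key
# ("clip_embedding" and the size are placeholders).
metadata_schema.add_embedding("clip_embedding", size=512)

metadata_schema.save()
```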
Incorrectly specifying a data type in the schema can cause errors when filtering your data in Index or Active. If you encounter errors while filtering, verify your schema is correct. If your schema has errors, correct the errors, re-import the schema, and then re-sync your Active Project.
Import Your Metadata Schema to Encord
Verify Your Schema
After importing your schema to Encord, we recommend verifying that the import succeeded. Run the following code to confirm that your metadata schema imported and that the schema is correct.
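A minimal sketch: printing the schema object returns a human-readable summary of all keys and their types, which you can check against your expected schema.

```python
from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Print a human-readable summary of the schema for inspection.
metadata_schema = user_client.metadata_schema()
print(metadata_schema)
```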
Edit Schema Keys
You can change the data type of schema keys using the .set_scalar() method. The example below shows how to update the data type of multiple metadata fields.
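A sketch reusing the placeholder keys introduced earlier:

```python
# Change "fps" from number to varchar, and "description" to text
# (both keys are the placeholders introduced above).
metadata_schema.set_scalar("fps", data_type="varchar")
metadata_schema.set_scalar("description", data_type="text")

metadata_schema.save()
```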
Delete Schema Keys
You can delete schema keys using the .delete() method.
There are two types of deletion: hard delete and soft delete. A hard delete permanently removes the key, making it impossible to restore. A soft delete allows you to restore the key later using the .restore_key() method.
The following examples show hard and soft deletion of a schema key called Fruit.
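A minimal sketch using the method name given above; the `hard` keyword argument is an assumption, so confirm the exact parameter name against your SDK version.

```python
# Soft delete: the key is hidden but can be restored later.
metadata_schema.delete("Fruit")

# Hard delete: permanently removes the key; it cannot be restored.
# (The `hard` keyword is an assumption; check your SDK version.)
metadata_schema.delete("Fruit", hard=True)

metadata_schema.save()
```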
Restore Schema Keys
Keys that have been soft deleted can be restored using the .restore_key() method. The following example restores a schema key called Fruit.
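A minimal sketch, continuing from the deletion example above:

```python
# Restore the soft-deleted "Fruit" key.
metadata_schema.restore_key("Fruit")

metadata_schema.save()
```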
Step 4: Create JSON or CSV for import
All types of data (videos, images, image groups, image sequences, and DICOM) from a private cloud are added to a Dataset in the same way, by using a JSON or CSV file. The file includes links to all images, image groups, videos and DICOM files in your cloud storage.
Create JSON file for import
For detailed information about the JSON file format used for import go here.
The information provided about each of the following data types is designed to get you up and running as quickly as possible without going too deeply into the why or how. Look at the template for each data type, then the examples, and adjust the examples to suit your needs.
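As an illustration only, a minimal JSON file for importing a single video might look like the following; the bucket URL is a placeholder, and the full set of supported keys is described in the linked reference.

```json
{
  "videos": [
    {
      "objectUrl": "https://<bucket>.obs.<region>.otc.t-systems.com/videos/video1.mp4"
    }
  ],
  "skip_duplicate_urls": true
}
```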
If skip_duplicate_urls is set to true, all object URLs that exactly match existing images/videos in the dataset are skipped.
Create CSV file for import
In the CSV file format, the column headers specify which type of data is being uploaded. You can add a single file format at a time, or combine multiple data types in a single CSV file. A sketch of a minimal CSV file follows the notes below.
Details for each data format are given in the sections below.
- Object URLs can’t contain whitespace.
- For backwards-compatibility reasons, a single-column CSV is supported. A file with the single ObjectUrl column is interpreted as a request for video upload. If your objects are of a different type (for example, images), this error displays: “Expected a video, got a file of type XXX”.
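For illustration, a single-column CSV requesting two video uploads might look like this (placeholder OTC URLs):

```csv
ObjectUrl
https://<bucket>.obs.<region>.otc.t-systems.com/videos/video1.mp4
https://<bucket>.obs.<region>.otc.t-systems.com/videos/video2.mp4
```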
Step 5: Upload data to Encord
To use your data in Encord, it must be uploaded to Encord’s Files storage. Once uploaded, your data can be reused across multiple Projects; the files themselves contain no labels or annotations. Files stores your data, while Projects store your labels.
The following script creates a new folder in Files and uses your OTC integration to initiate uploads from your cloud storage. It works for all file types.
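A sketch of such a script, assembled from the SDK’s storage API; treat it as a starting point and verify the calls against your SDK version before running.

```python
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Find the OTC integration by its title.
integrations = user_client.get_cloud_integrations()
integration_idx = [i.title for i in integrations].index("<integration_title>")
integration_id = integrations[integration_idx].id

# Create a folder in Files to hold the uploads.
folder = user_client.create_storage_folder(
    "<folder_name>",
    description="A folder to store my files",
    client_metadata={"my": "folder_metadata"},
)

# Start the upload from the JSON specification and poll once for the result.
upload_job_id = folder.add_private_data_to_folder_start(
    integration_id=integration_id,
    private_files="path/to/json/file.json",
    ignore_errors=True,
)
print(f"upload_job_id={upload_job_id}")

res = folder.add_private_data_to_folder_get_result(
    upload_job_id, timeout_seconds=5
)

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed")
    if res.unit_errors:
        print("The following URLs failed to upload:")
        for unit_error in res.unit_errors:
            print(unit_error.object_urls)
else:
    print(f"Upload failed: {res.errors}")
```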
If "Upload is still in progress, try again later!" is returned, use the script in Step 6 to check whether the upload has finished.
Ensure that you:
- Replace <private_key_path> with the path to your private key.
- Replace <integration_title> with the title of the integration you want to use.
- Replace <folder_name> with the folder name. The scripts assume that the specified folder name is unique.
- Replace path/to/json/file.json with the path to a JSON file specifying which cloud storage files should be uploaded.
- Replace A folder to store my files with a meaningful description for your folder.
- Replace "my": "folder_metadata" with any metadata you want to add to the folder.
The script has several possible outputs:
- “Upload is still in progress, try again later!”: The upload has not finished. Run this script again later to check if the upload has finished.
- “Upload completed”: The upload completed. If any files failed to upload, the URLs are listed.
- “Upload failed”: The entire upload failed, and not just individual files. Ensure your JSON file is formatted correctly.
Step 6: Check data upload
If Step 5 returns "Upload is still in progress, try again later!", run the following code to query the Encord server again. Ensure that you replace <upload_job_id> with the value output by the previous code. In the example above, upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727.
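A minimal sketch, assuming the `folder` object from the Step 5 script is still in scope:

```python
# Query the upload job again; a larger timeout_seconds keeps polling
# until the upload finishes or the timeout elapses.
res = folder.add_private_data_to_folder_get_result(
    "<upload_job_id>",
    timeout_seconds=60,
)
print(res.status)
```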
The script has several possible outputs:
- “Upload is still in progress, try again later!”: The upload has not finished. Run this script again later to check if the upload has finished.
- “Upload completed”: The upload completed. If any files failed to upload, the URLs are listed.
- “Upload failed”: The entire upload failed, and not just individual files. Ensure your JSON file is formatted correctly.
Setting the timeout_seconds argument of the add_private_data_to_dataset_get_result() method makes it perform status checks until the upload has finished.
Step 7: Create a Dataset
The following example creates a Dataset called “Houses” that expects data hosted on OTC.
- Substitute <private_key_path> with the file path of your private key.
- Replace “Houses” with the name you want your Dataset to have.
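A minimal sketch of such a script; the available StorageLocation values are listed in the table below.

```python
from encord import EncordUserClient
from encord.orm.dataset import StorageLocation

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Create a Dataset expecting data hosted on Open Telekom Cloud.
dataset = user_client.create_dataset(
    dataset_title="Houses",
    dataset_type=StorageLocation.OTC,
)
print(dataset)
```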
| Storage location | StorageLocation method argument | Represented by |
|---|---|---|
| AWS S3 | AWS | 1 |
| GCP | GCP | 2 |
| Azure blob | AZURE | 3 |
| Open Telekom Cloud | OTC | 4 |
| Encord storage | CORD_STORAGE | 0 |
Step 8: Add your data to a Dataset
Now that you have uploaded your data and created a Dataset, it is time to add your files to the Dataset. The following script adds all files in a specified folder to a Dataset.
- Replace <private_key_path> with the path to your private key.
- Replace <folder_name> with the name of your Storage folder.
- Replace <dataset_hash> with the hash of the Dataset you want to add the data units to.
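A sketch under the assumption that the Storage folder name is unique; `find_storage_folders`, `list_items`, and `link_items` are the SDK calls this sketch relies on, so verify them against your SDK version.

```python
from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Locate the Storage folder by name (assumes the name is unique).
folder = next(user_client.find_storage_folders(search="<folder_name>"))

# Fetch the target Dataset.
dataset = user_client.get_dataset("<dataset_hash>")

# Link every item in the folder to the Dataset.
item_uuids = [item.uuid for item in folder.list_items()]
dataset.link_items(item_uuids)
```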
Step 9: Verify your files are in the Dataset
After adding your files to the Dataset, verify that all the files you expect to be there made it into the Dataset.
The following script prints the URLs of all the files in a Dataset. Ensure that you:
- Replace <private_key_path> with the path to your private key.
- Replace <dataset_hash> with the hash of your Dataset.
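A minimal sketch; `file_link` holds the object URL for privately hosted data.

```python
from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

dataset = user_client.get_dataset("<dataset_hash>")

# Print the object URL of every file in the Dataset.
for data_row in dataset.data_rows:
    print(data_row.file_link)
```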
Step 10: Prepare your data for label/annotation import
Step 11: Import labels/annotations