GCP Examples
Basic Geometric Example
A simple example showing how to use `objectHashes`.
Use Case: Selective OCR on Selected Objects
This functionality allows you to apply your own OCR model to specific objects selected directly within the Encord platform. When you trigger your agent from the Encord app after selecting objects, the platform automatically sends a list of `objectHashes` to your agent. Your agent can then use the `dep_objects` dependency to gain immediate access to these specific object instances, which greatly simplifies integrating your OCR model for targeted processing.
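A minimal sketch of such an agent is shown below, assuming the `encord_agents` GCP runtime; the import paths should be checked against the library's reference docs, and `run_ocr` is a hypothetical stand-in for your own OCR model.

```python
from typing import Annotated

from encord.objects.ontology_labels_impl import LabelRowV2
from encord.objects.ontology_object_instance import ObjectInstance
from encord_agents import FrameData
from encord_agents.gcp import Depends, editor_agent
from encord_agents.gcp.dependencies import dep_objects


def run_ocr(label_row: LabelRowV2, instance: ObjectInstance) -> str:
    """Hypothetical placeholder: replace with your own OCR model."""
    raise NotImplementedError


@editor_agent()
def my_agent(
    frame_data: FrameData,
    label_row: LabelRowV2,
    object_instances: Annotated[list[ObjectInstance], Depends(dep_objects)],
) -> None:
    # `object_instances` holds exactly the objects whose objectHashes were
    # sent along with the trigger from the Encord app.
    for object_instance in object_instances:
        text = run_ocr(label_row, object_instance)  # your OCR model here
        # Write the OCR result back onto the instance (e.g. as a text
        # attribute) before saving.
    label_row.save()
```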
Test the Agent
- Save the above code as `agent.py`.
- Run the agent in debug mode from your terminal (a hedged command sketch follows this list).
- Open your Project in the Encord platform and navigate to a frame with an object that you want to act on. Choose an object from the bottom-left sidebar and click `Copy URL` as shown:
The URL should have roughly this format: `"https://app.encord.com/label_editor/{project_hash}/{data_hash}/{frame}/0?other_query_params&objectHash={objectHash}"`.
- In another shell, from the same working directory, source your virtual environment and test the agent (see the sketch after this list).
- Refresh your browser to verify the action taken by the agent. Once the test runs successfully, the agent can be deployed. Visit the deployment documentation to learn more.
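As a hedged sketch, the debug and test commands look roughly like this, assuming the entry point in `agent.py` is named `my_agent`; `functions-framework` is the standard local runner for GCP Cloud Functions, and the exact `encord-agents` CLI syntax can be verified with `encord-agents test --help`:

```bash
# Terminal 1: serve the agent locally in debug mode.
functions-framework --target my_agent --debug --port 8080

# Terminal 2: same working directory, virtual environment sourced.
# Simulates an editor trigger using the URL copied from the Label Editor.
encord-agents test local my_agent '<url_copied_from_the_label_editor>'
```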
Nested Classification using Claude 3.5 Sonnet
The goals of this example are:
- Create an editor agent that automatically adds frame-level classifications.
- Demonstrate how to use the `OntologyDataModel` for classifications.
Prerequisites
Before you begin, ensure you have:
- Created a virtual Python environment.
- Installed all necessary dependencies.
- Obtained an Anthropic API key.
- Set up authentication with Encord.
Run the following commands to set up your environment:
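(A hedged sketch; the package names and environment variables below are assumptions based on this example's stack, so verify them against the installation docs.)

```bash
python -m venv venv
source venv/bin/activate
pip install encord-agents anthropic functions-framework   # assumed package names
export ANTHROPIC_API_KEY="<your-api-key>"
export ENCORD_SSH_KEY_FILE="/path/to/your_private_key"    # Encord authentication
```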
Project Setup
Create a Project with visual content (images, image groups, image sequences, or videos) in Encord. This example uses the following Ontology, but any Ontology containing classifications can be used.
The aim is to trigger an agent that transforms a labeling task from Figure A to Figure B.
Figure A: No classification labels.
Figure B: Multiple nested classification labels generated by an LLM.
Create the Agent
This section provides the complete code for creating your editor agent, along with an explanation of its internal workings.
Agent Setup Steps
- Import dependencies, authenticate with Encord, and set up the Project. Ensure you insert your Project's unique identifier.
- Create a data model and a system prompt based on the Project Ontology to tell Claude how to structure its response.
- Set up an Anthropic API client to establish communication with the Claude model.
- Define the Editor Agent (a condensed sketch follows this list). This includes:
  - Retrieving frame content: the agent automatically fetches the current frame's image data using the `dep_single_frame` dependency.
  - Analyzing with Claude: the frame image is sent to the Claude model for analysis.
  - Parsing classifications: Claude's response is parsed and transformed into structured classification instances using the predefined data model.
  - Saving results: the new classifications are added to the active label row, and the updated results are saved within the Project.
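The sketch below condenses these steps into one file. It assumes the `encord_agents` GCP runtime and the Anthropic Python SDK; `<project_hash>` is a placeholder, and names such as `model_json_schema_str` and `Frame.b64_encoding` should be verified against the `encord_agents` reference docs.

```python
from typing import Annotated

import numpy as np
from anthropic import Anthropic
from numpy.typing import NDArray

from encord.objects.ontology_labels_impl import LabelRowV2
from encord_agents import FrameData
from encord_agents.core.data_model import Frame
from encord_agents.core.ontology import OntologyDataModel
from encord_agents.core.utils import get_user_client
from encord_agents.gcp import Depends, editor_agent
from encord_agents.gcp.dependencies import dep_single_frame

# 1. Authenticate with Encord and set up the Project.
client = get_user_client()
project = client.get_project("<project_hash>")

# 2. Data model and system prompt derived from the Project Ontology.
data_model = OntologyDataModel(project.ontology_structure.classifications)
SYSTEM_PROMPT = f"""
You are an assistant that fills in JSON objects according to this schema:

`{data_model.model_json_schema_str}`

Respond with valid JSON only.
"""

# 3. Anthropic client (reads ANTHROPIC_API_KEY from the environment).
anthropic_client = Anthropic()


# 4. The editor agent itself.
@editor_agent()
def agent(
    frame_data: FrameData,
    label_row: LabelRowV2,
    content: Annotated[NDArray[np.uint8], Depends(dep_single_frame)],
) -> None:
    # Send the current frame to Claude together with the system prompt.
    frame = Frame(frame=frame_data.frame, content=content)
    message = anthropic_client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": [frame.b64_encoding(output_format="anthropic")]}],
    )
    # Parse Claude's JSON into classification instances and save them.
    classifications = data_model(message.content[0].text)
    for instance in classifications:
        instance.set_for_frames(frames=frame_data.frame, confidence=0.5, manual_annotation=False)
        label_row.add_classification_instance(instance)
    label_row.save()
```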
Test the Agent
- In your current terminal, run the agent in debug mode (the same `functions-framework` flow sketched in the first example).
- Open your Project in the Encord platform and navigate to a frame you want to add a classification to. Copy the URL from your browser.
The URL should have the following format: `"https://app.encord.com/label_editor/{project_hash}/{data_hash}/{frame}"`.
- In another shell operating from the same working directory, source your virtual environment and test the agent.
- Refresh your browser to view the classifications generated by Claude. Once the test runs successfully, you are ready to deploy your agent. Visit the deployment documentation to learn more.
Nested Attributes using Claude 3.5 Sonnet
The goals of this example are:
- Create an editor agent that can convert generic object annotations (class-less coordinates) into class-specific annotations with nested attributes like descriptions, radio buttons, and checklists.
- Demonstrate how to use both the `OntologyDataModel` and the `dep_object_crops` dependency.
Prerequisites
Before you begin, ensure you have:
- Created a virtual Python environment.
- Installed all necessary dependencies.
- Obtained an Anthropic API key.
- Set up authentication with Encord.
Run the same environment setup commands as in the previous example, substituting your own credentials.
Project Setup
Create a Project with visual content (images, image groups, image sequences, or videos) in Encord. This example uses the following Ontology, but any Ontology containing classifications can be used, provided the object types are the same and there is one entry called `"generic"`.
The goal is to create an agent that takes a labeling task from Figure A to Figure B.
Figure A: No classification labels.
Figure B: Multiple nested classification labels generated by an LLM.
Create the Agent
This section provides the complete code for creating your editor agent, along with an explanation of its internal workings.
Agent Setup Steps
- Import dependencies, authenticate with Encord, and set up the Project. Ensure you insert your Project's unique identifier.
- Extract the generic Ontology object and the specific objects of interest. This example sorts Ontology objects based on whether their title is `"generic"`. The generic object is used to query image crops within the agent. Before that, `other_objects` is used to pass in the specific context we want Claude to focus on. The `OntologyDataModel` class helps convert Encord Ontology objects into a Pydantic model and parse JSON into Encord `ObjectInstance`s.
- Prepare the system prompt for each object crop, using the `data_model` to generate the JSON schema. Only `other_objects` is passed, to ensure the model can choose only from non-generic object types.
- Set up an Anthropic API client to establish communication with the Claude model. You must include your Anthropic API key.
- Define the Editor Agent (a condensed sketch follows this list).
  - All arguments are automatically injected when the agent is called. For details on dependency injection, see here.
  - The `dep_object_crops` dependency allows filtering. In this case, it includes only "generic" object crops, excluding those already converted to actual labels.
- Query Claude using the image crops. The `crop` variable has a convenient `b64_encoding` method to produce an input that Claude understands.
- Parse Claude's message using the `data_model`. When called with a JSON string, it attempts to parse the string against the JSON schema shown above to create an Encord object instance. If successful, the old generic object is removed and the newly classified object is added.
- Save the labels with Encord.
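A condensed sketch of this agent follows, under the same assumptions as the previous example; names such as `InstanceCrop` and the `filter_ontology_objects` argument are assumptions to verify against the current `encord_agents` docs.

```python
from typing import Annotated

from anthropic import Anthropic
from encord.objects.ontology_labels_impl import LabelRowV2
from encord_agents import FrameData
from encord_agents.core.ontology import OntologyDataModel
from encord_agents.core.utils import get_user_client
from encord_agents.gcp import Depends, editor_agent
from encord_agents.gcp.dependencies import InstanceCrop, dep_object_crops

client = get_user_client()
project = client.get_project("<project_hash>")

# Split the Ontology into the "generic" object and the specific classes.
generic_object, *other_objects = sorted(
    project.ontology_structure.objects,
    key=lambda o: o.title.lower() == "generic",
    reverse=True,
)
# Only the non-generic objects go into the data model / JSON schema.
data_model = OntologyDataModel(other_objects)
SYSTEM_PROMPT = f"""Classify the object in the image crop by filling in a JSON
object following this schema. Respond with valid JSON only.

`{data_model.model_json_schema_str}`
"""

anthropic_client = Anthropic()


@editor_agent()
def agent(
    frame_data: FrameData,
    label_row: LabelRowV2,
    crops: Annotated[
        list[InstanceCrop],
        Depends(dep_object_crops(filter_ontology_objects=[generic_object])),
    ],
) -> None:
    for crop in crops:
        # Query Claude with the crop image.
        message = anthropic_client.messages.create(
            model="claude-3-5-sonnet-20240620",
            max_tokens=1024,
            system=SYSTEM_PROMPT,
            messages=[{"role": "user", "content": [crop.b64_encoding(output_format="anthropic")]}],
        )
        # Parse the JSON response into a concrete object instance.
        try:
            instance = data_model(message.content[0].text)
        except Exception:
            continue  # leave the generic object in place if parsing fails
        # Transfer the coordinates, then swap the generic object for the
        # newly classified one.
        annotation = crop.instance.get_annotation(frame=frame_data.frame)
        instance.set_for_frames(
            coordinates=annotation.coordinates,
            frames=frame_data.frame,
            confidence=0.5,
            manual_annotation=False,
        )
        label_row.remove_object(crop.instance)
        label_row.add_object_instance(instance)
    label_row.save()
```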
Test the Agent
- In your current terminal, run the agent in debug mode (the same `functions-framework` flow sketched in the first example).
- Open your Project in the Encord platform and navigate to a frame you want to add a generic object to. Copy the URL from your browser.
The URL has the following format: `"https://app.encord.com/label_editor/{project_hash}/{data_hash}/{frame}"`.
- In another shell operating from the same working directory, source your virtual environment and test the agent.
- Refresh your browser to view the classifications generated by Claude. Once the test runs successfully, you are ready to deploy your agent. Visit the deployment documentation to learn more.
Video Recaptioning using GPT-4o-mini
The goals of this example are:
- Create an Editor Agent that automatically generates multiple variations of video captions.
- Demonstrate how to use OpenAI’s GPT-4o-mini model to enhance human-created video captions with a FastAPI-based agent.
Prerequisites
Before you begin, ensure you have:
- Created a virtual Python environment.
- Installed all necessary dependencies.
- Obtained an OpenAI API key.
- Set up authentication with Encord.
Run the setup commands shown at the end of this section to prepare your environment.
Project Setup
Create a Project containing videos in Encord.
This example requires an Ontology with four text classifications:
- One text classification for human-created summaries of what is happening in the video.
- Three text classifications to be automatically filled by the LLM.
The workflow for this agent is:
- A human watches the video and enters a caption in the first text field.
- The agent is then triggered and generates three additional caption variations for review.
- Each video is first annotated by a human (ANNOTATE stage).
- Next, a data agent automatically generates alternative captions (AGENT stage).
- Finally, a human reviews all four captions (REVIEW stage) before the task is marked complete.
If no human caption is present when the agent is triggered, the task is sent back for annotation. If the review stage results in rejection, the task is also returned for re-annotation.
Create the Agent
This section provides the complete code for creating your editor agent, along with an explanation of its internal workings.
Agent Setup Steps
- Set up imports and create a Pydantic model for our LLM’s structured output.
- Create a detailed system prompt for the LLM that explains exactly what kind of rephrasing we want.
- Configure the LLM to use structured outputs based on our model.
- Create a helper function to prompt the model with both text and image (steps 1-4 are sketched after this list).
- Define the agent to handle the recaptioning. This includes:
- Retrieving the existing human-created caption, prioritizing captions from the current frame or falling back to frame zero.
- Sending the first frame of the video along with the human caption to the LLM.
- Processing the response from the LLM, which provides three alternative phrasings of the original caption.
- Updating the label row with the new captions, replacing any existing ones.
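As a sketch of the first four steps, assuming the OpenAI Python SDK's structured-output support (`client.beta.chat.completions.parse`); the model name is from this example, while the prompt wording and function names here are illustrative, not the original code:

```python
import base64

from openai import OpenAI
from pydantic import BaseModel


# Structured output: exactly three rephrasings of the human caption.
class AgentCaptionResponse(BaseModel):
    rephrase_1: str
    rephrase_2: str
    rephrase_3: str


SYSTEM_PROMPT = """You rephrase video captions. Given one human-written
caption and the first frame of the video, produce three alternative
phrasings that preserve the original meaning.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def prompt_model(caption: str, frame_jpeg: bytes) -> AgentCaptionResponse:
    """Send the human caption plus the first frame to GPT-4o-mini."""
    b64 = base64.b64encode(frame_jpeg).decode()
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",
        response_format=AgentCaptionResponse,  # parse into the model above
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": caption},
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            },
        ],
    )
    return completion.choices[0].message.parsed
```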
Click here for a concrete Vision Language Action model use-case.
This example requires the following dependencies:
To set up and test the agent locally:
- Save the dependencies above into a `requirements.txt` file.
- Set up your Python environment and run the agent. (Replace `/path/to/your_private_key` and `<your-api-key>` with your actual credentials.)
- In a separate terminal, test the agent. (Replace `<url_from_the_label_editor>` with the URL from your Encord Label Editor session.) An example of this flow is sketched below.
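A hedged sketch of that flow, assuming the FastAPI app object is defined in `main.py` and that the agent route is named `my_agent`; verify both names, and the `encord-agents` test command, against your setup:

```bash
# Set up the environment and credentials.
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt
export ENCORD_SSH_KEY_FILE=/path/to/your_private_key
export OPENAI_API_KEY=<your-api-key>

# Run the FastAPI agent locally (assumes `app` lives in main.py).
uvicorn main:app --reload --port 8080

# In a separate terminal, simulate an editor trigger
# ("my_agent" is the assumed route/agent name).
encord-agents test local my_agent '<url_from_the_label_editor>'
```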