Models

Using our sophisticated label editor to quickly and accurately annotate large datasets is a powerful and attractive feature of the Encord platform. Automating the labeling process using micro-models can help accelerate your efforts even further. A project's Models tab is your interface to set up and manage micro-models for a given project. You have full control over:

  • Model types: classification, object detection, or instance segmentation
  • Training frameworks: PyTorch, DarkNet
  • Model architectures: ResNet, VGG, YOLOv5, and Faster-RCNN, among others
  • Training specifications: epochs, batch sizes, and hardware accelerators
  • Training data

Here, we'll walk you through the model browser interface and the details of creating a model. After you've created a model, consult our documentation on training and inference to learn how to put your models into action.

note

The presence of the Models tab inside each project implies that models are scoped at the project level. Therefore, you'll need to create and train a model inside each project you want to use it in. Our team is working to make models available at the organizational level.

Browse and search models

The models page is your gateway to interacting with our customizable automation features. Follow the steps below to create a new model. Previously created models are shown in a tile interface so that you can see each model's title and important attributes such as its type, framework, and architecture at a glance. You can also see how many times the model has been trained, and launch a variety of subscreens from the tabs on each model. From left to right, the tabs are:

  • Train model: Start the training process here; full details are provided in our training documentation.
  • Model training API details: It's also possible to train models using our SDK; click here to get a head start with a helpful code snippet (a rough sketch also follows this list).
  • Display Training log: Review how the model performed during the training epochs by examining the training logs.
  • Other menu: Currently, the only action supported from this menu is Delete. Use it with caution; we may not be able to recover deleted models for you.
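
The Model training API details dialog generates a snippet tailored to your model, so prefer that over anything generic. As a rough orientation only, a minimal sketch of SDK-driven training might look like the following; the `model_train` call, the weights constant, and all hashes are assumptions or placeholders, so check the generated snippet and the SDK reference for the exact signature.

```python
# A minimal sketch of kicking off training through the Python SDK.
# model_train, the weights constant, and the hashes below are
# assumptions/placeholders -- prefer the snippet generated by the
# "Model training API details" dialog.
from pathlib import Path

from encord import EncordUserClient
from encord.constants.model import Device
from encord.constants.model_weights import faster_rcnn_R_50_FPN_1x

private_key = Path("~/.ssh/encord_key").expanduser().read_text()
user_client = EncordUserClient.create_with_ssh_private_key(private_key)
project = user_client.get_project("<project_hash>")

training = project.model_train(
    "<model_hash>",                   # shown on the model tile
    label_rows=["<label_row_hash>"],  # labeled data to train on
    epochs=500,
    batch_size=24,
    weights=faster_rcnn_R_50_FPN_1x,  # pretrained starting weights
    device=Device.CUDA,               # hardware accelerator
)
```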

If you have created more models than fit comfortably on the screen, use the search interface to find a model of interest by its title. If you have yet to create any models, read on.

Creating a model

Create a model by specifying:

  • Model title and an optional description. Give your model a descriptive title so that it's easy to find when you use it for inference or re-training later, after you've labeled more data.
  • Model type (classification, detection, segmentation), framework (PyTorch, FastAI, DarkNet), and architecture: these depend on the problem setting and the type of labels.
  • Relevant objects: the final step of model creation is to select the relevant ontology objects for model training. You'll be able to select which specific instances of these objects to train on later. The sketch after this list shows one way to look up ontology objects through the SDK.
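
Choosing relevant objects is easier when you can see what the project's ontology contains. Here is a minimal sketch of listing ontology objects through the Python SDK; it assumes the `encord` package's `get_ontology` and ontology structure attributes, so verify the attribute names against your SDK version.

```python
# List the objects in a project's ontology, e.g. to decide which ones
# a model should train on. Attribute names assume a recent encord SDK.
from pathlib import Path

from encord import EncordUserClient

private_key = Path("~/.ssh/encord_key").expanduser().read_text()
user_client = EncordUserClient.create_with_ssh_private_key(private_key)

ontology = user_client.get_ontology("<ontology_hash>")
for obj in ontology.structure.objects:
    # The shape indicates whether an object suits detection
    # (bounding box) or instance segmentation (polygon).
    print(obj.name, obj.shape, obj.feature_node_hash)
```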

Let's walk through creating the various model types.

Classification

Classification models assume there is exactly one correct class per image, e.g. classifying handwritten digits or pictures of cats and dogs. They are associated with frame-level classifications from the project's ontology.

For this task, we support many different architectures through the FastAI framework. These include various sizes of ResNet and VGG.
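
For orientation only (this is not the platform's internal training code), here is a minimal FastAI sketch of the same kind of single-label classifier, assuming images arranged in one folder per class:

```python
# A single-label image classifier in FastAI with a ResNet backbone,
# the same family of setup the classification micro-models use.
from fastai.vision.all import (
    ImageDataLoaders, Resize, vision_learner, resnet34, accuracy,
)

# Assumes one sub-folder per class, e.g. data/cats and data/dogs.
dls = ImageDataLoaders.from_folder("data", valid_pct=0.2, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(3)  # a few epochs of transfer learning
```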

Object detection

Object detection models assume there are potentially multiple objects in an image, each of which needs to be located with a bounding box and classified. The objects that can be included are drawn from the classes with the bounding box annotation type in the project's ontology.

For this task, we support Faster-RCNN and YOLOv5 from PyTorch, as well as YOLTv5 from DarkNet.
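
To make the detection contract concrete, here is a hedged sketch using torchvision's off-the-shelf Faster-RCNN (not the platform's training code): image tensors go in, and dictionaries of boxes, labels, and scores come out.

```python
# Run a pretrained Faster-RCNN from torchvision to see the detection
# input/output contract: image tensors in, boxes/labels/scores out.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)  # stand-in for a real RGB image tensor
with torch.no_grad():
    (prediction,) = model([image])

print(prediction["boxes"].shape)   # (N, 4) boxes in xyxy format
print(prediction["labels"].shape)  # (N,) class indices
print(prediction["scores"].shape)  # (N,) confidence scores
```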

Instance segmentation

Segmentation models assume there are potentially multiple objects in an image that need to be segmented and classified. This differs from object detection in that the expected input and output of the model are polygons, rather than bounding boxes. Consequently, it is associated with polygon annotations from the project’s ontology.

For this task, we support Mask-RCNN from PyTorch.
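
The segmentation contract mirrors detection but adds a per-instance mask, which can be traced into the polygon annotations the ontology expects. A hedged torchvision Mask-RCNN sketch (again, not the platform's code):

```python
# Pretrained Mask-RCNN from torchvision: like Faster-RCNN, but every
# instance also gets a soft segmentation mask.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)  # stand-in for a real RGB image tensor
with torch.no_grad():
    (prediction,) = model([image])

print(prediction["boxes"].shape)  # (N, 4) bounding boxes
print(prediction["masks"].shape)  # (N, 1, H, W) masks in [0, 1]
```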

Training

See our automation section to learn the full details about Training.

Inference

See our automation section to learn the full details about running model Inference.