Model Targets API Overview

2. API Overview

2.1 High-Level Overview of the API

Model Targets Tracking is exposed in the API using the ObjectTracker class.

Tracker and Target Coordinate System

The pose of a trackable result can be defined in one of two coordinate systems: the Camera coordinate system or the World coordinate system. Both of these coordinate systems follow specific conventions defined by Vuforia Engine.

For Model Target Tracking, the pose of the Model Trackable Result is defined in the Camera coordinate system.

Figure 3.2.1: The Model Target pose is defined in the Vuforia Engine Camera Coordinate System.

This pose definition differs from those of some other Object Trackables.

The Model Coordinate System (a.k.a. Trackable Coordinate System) Convention

Vuforia Engine uses a standard convention for most trackable coordinate systems, defining the up direction. For example, the ImageTarget Coordinate System has z up, with x and y in the plane of the 2D image. Additionally, the origin of the target is always placed at the center of the physical object.

Figure 3.2.2: Coordinate system conventions for different Trackable types. For Model Targets, the coordinate system is defined solely by the (CAD) model.

For Model Targets, there is a strong dependency between the CAD model, the physical model, and how the CAD model is defined as a Trackable in the Vuforia Engine API.

Consequently, the Model Target Model Coordinate System is defined entirely by the CAD model and its configuration in the Model Target Generator.

For example, in the picture below you can see that the same coordinate system (and origin) is shared between the CAD model and the Model Target Trackable. This differs from the fixed convention (z up) used for the other Vuforia Engine targets.

Figure 3.2.3: The coordinate system of the Model Target Trackable is the same as that of the CAD model (in this example: y up, x left, z back).

With this approach, if you already have digital content associated with the model (e.g. a simplified virtual model for augmentation, PLM information, 3D annotations), you can easily position this content on the physical model in your AR application the same way you would position it in a CAD-focused application.

Target Size and Origin

The size of a Model Target corresponds to the size of the bounding box of the (CAD) model, as generated by the Model Target Generator Tool (previously known as the CAD Target Generator Tool, or CTG). It therefore depends on the CAD model used as input to the tool.

One of the major differences between a Model Target and other trackable types is the origin of the target. Other target types use a fixed convention for the origin, such as the center of the image for an ImageTarget (or the top-left corner for a 3DRO target). In those cases, the relation between the size and the bounding box of the object is implicit (divide the size by 2 to get the corner of the ImageTarget's bounding box).

For Model Targets, the origin can be anywhere on the model, meaning it does not necessarily have to be at the center.

A developer needs to be aware of the origin of the (CAD) model, as it is of major importance for any AR augmentation of the physical model. The origin of the model is used as the origin of the target, and it cannot be changed by the developer through the API.

Figure 3.2.4: Model Target size and origin are defined implicitly by the size and origin of the CAD model used with the Model Target Generator Tool.

Target and Target Pose Unit

The unit used for the target (and for the pose) is meters.


Model Datasets

The Model Target used by the ObjectTracker is selected through the dataset API. A Model Dataset (inherited from DataSet) can be created and loaded from the ObjectTracker. Once loaded, a dataset can be activated for use by the ObjectTracker. As with existing dataset-based features, you must load and activate the dataset before starting the tracker, or stop the tracker before performing any load/activate operations.

Model Target databases have the following restrictions:

  • Only one Model Target per Model Dataset
  • Only one Model Dataset can be activated at a time
  • Multiple Model Datasets can be loaded, limited by memory usage (similar to 3DRO)

Guide View

A Model Dataset requires access to a Guide View to initiate the tracking experience.

A Guide View contains the following components:

  • View Intrinsic Parameters: the intrinsics used when the Guide View was created (from the Model Target Generator Tool), represented as a CameraCalibration data structure.
  • View Extrinsic Parameters: the extrinsics used when the Guide View was created (from the Model Target Generator Tool), represented as a Matrix3x4 data structure. A function is available to set these extrinsics and customize the snapping pose; however, this invalidates the overlay image, which was created using the default pose. In that case, render a custom 2D image or 3D model instead.
  • An overlay image: a screenshot of the model rendered with the above camera parameters in a non-photorealistic style (edge rendering). The image is an RGB image, represented as an Image data structure (part of our SDK) containing the image dimensions, format, and buffer.

The View Intrinsic Parameters use the same projection coordinate system convention as used with the Vuforia Engine Camera Coordinate System (CV projection matrix, with z forward).

The View Extrinsic Parameters use the Vuforia Engine Camera Coordinate System (see Figure 3.2.1). Consequently, the extrinsic parameters define the pose of the Trackable in the Camera Coordinate System.

As a developer, you have two choices for rendering the Guide View:

  • Render a 3D Model: you will be able to use the view extrinsic parameters from the Guide View (and current camera calibration) to render a 3D model, using the same convention used for rendering AR content on a trackable.
  • Render a 2D Image: you can position the 2D image from the Guide View as an overlay on the camera image. For correct placement, we recommend scaling the image so that the longer side of the guide view image matches the longer side of the camera image. Alternatively, you can use the View Intrinsic Parameters together with the current camera calibration/device screen resolution to adjust scale and aspect ratio.

Extended Tracking

The ObjectTracker with a Model Dataset supports an extended tracking (ET) mode, similar to the existing ObjectTracker. Internally it operates differently, but it behaves similarly from a developer's perspective. Extended Tracking supports virtual augmentation of digital content even when the tracked physical object is not in view.

Extended tracking is always on for supported devices and cannot be changed at run-time. For the list of supported devices, please see Vuforia Calibrated Devices.

At run-time (after starting the tracker), you can check whether ET is running by querying isExtendedTrackingStarted() on the Model Target.

Note that the tracking status of the ModelTargetResult (getStatus()) is reported as TRACKED when Model Target tracking is robust and EXTENDED_TRACKED when target tracking confidence diminishes and reverts to positional device tracking.

2.2 Unity API

The Unity API mirrors the organization of the Native API in that there are classes for:

  • GuideView
  • ModelTarget
  • ModelTargetBehaviour (instantiated using the ModelTarget GameObject)

ModelTarget GameObject

The ModelTarget GameObject is the visual representation of the target within the Unity scene. In the Unity Editor's Scene View, a 3D model is rendered to help developers place content relative to the real model; it is not rendered in Play Mode or on device. Setting a scale of 1 (uniform scale is enforced) always resets the object to its original size.

For more information about using Model Targets in Unity, please refer to the Introduction to Model Targets in Unity.

