Model Targets API Overview

High-Level Overview of the API

Model Target tracking is exposed in the C++ API using the ObjectTracker class. While tracking is running, tracking results are available as ModelTargetResult instances via the State (a per-frame query sketch follows the list below).

  • If you are using an untrained Model Target dataset with a single model, you should create a DataSet via the ObjectTracker, load the .dat/.xml file pair representing your Model Target dataset, and activate the DataSet. ModelTarget instances are available via the DataSet.
  • If you are using a trained Model Target dataset with multiple Model Targets and/or multiple Guide Views, you should first obtain the appropriate TargetFinder from the ObjectTracker, initialize it with the .dat/.xml file pair representing your Model Target dataset, and start it running. Check the TargetFinder periodically (typically every frame). ModelTarget instances become available on the TargetFinder after tracking has been enabled for a target.
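
As a rough illustration of the per-frame query described above, the following sketch iterates the State and picks out Model Target results. It assumes a pre-10.x (C++) Vuforia Engine API; the exact State accessors (indexed getters versus an iterable result list) vary between Engine versions, so verify them against State.h for yours.

    // Sketch only: iterate the per-frame State and pick out Model Target results.
    // Accessor names may differ slightly between Engine versions.
    #include <Vuforia/State.h>
    #include <Vuforia/TrackableResult.h>
    #include <Vuforia/ModelTargetResult.h>
    #include <Vuforia/Matrices.h>

    void onVuforiaUpdate(const Vuforia::State& state)
    {
        for (int i = 0; i < state.getNumTrackableResults(); ++i)
        {
            const Vuforia::TrackableResult* result = state.getTrackableResult(i);
            if (result == nullptr || !result->isOfType(Vuforia::ModelTargetResult::getClassType()))
                continue;

            // The pose places the Model Target relative to the camera, expressed
            // from the 3D model's own origin; translation units are meters.
            const Vuforia::Matrix34F pose = result->getPose();
            (void)pose; // ... feed the pose into your rendering / augmentation code ...
        }
    }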

You will need to render a GuideView image over the camera image to help users position their device correctly so that tracking can begin. You can obtain a GuideView from your ModelTarget instance.

Target Size and Origin

The size of a Model Target corresponds to the size of the bounding box of the CAD model processed by the Model Target Generator Tool (previously known as the CAD Target Generator Tool, or CTG). It therefore depends on the CAD model used as input to the Model Target Generator Tool.

A Model Target's coordinate system has its origin at the origin of the 3D model used to define the target. This is in contrast to some other trackable types found elsewhere in Vuforia Engine, such as the ImageTarget (whose origin is the center of the image) or the 3DRO Target (whose origin is the top-left corner of the target).

As a developer, you therefore need to know the origin of the 3D model in order to augment the physical object correctly. The origin cannot be changed from the API.

Model Target size and origin are defined implicitly by the size and origin of the 3D model loaded into the Model Target Generator app.

Target and Target Pose Unit

The units used for the target (and for the pose) are meters.
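
As a minimal, hedged sketch of the conventions above (pre-10.x C++ API): the reported size is the CAD model's bounding box in meters, and runtime poses are expressed relative to the 3D model's own origin.

    // Sketch only: read the bounding-box size of a Model Target.
    // getSize() is inherited from ObjectTarget and reports dimensions in meters.
    #include <cstdio>
    #include <Vuforia/ModelTarget.h>
    #include <Vuforia/Vectors.h>

    void logModelTargetSize(const Vuforia::ModelTarget& modelTarget)
    {
        const Vuforia::Vec3F size = modelTarget.getSize();
        std::printf("Model Target bounding box: %.3f x %.3f x %.3f m\n",
                    size.data[0], size.data[1], size.data[2]);
        // Remember: poses for this target are reported relative to the 3D model's
        // own origin, which cannot be changed through the API.
    }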

Unity API

The Vuforia Engine Unity integration provides a number of C# classes and pre-built drop-in GameObjects for working with Model Targets.

To start working with Model Targets, import a *.unitypackage file containing a trained or untrained Model Target dataset to your project. Create a ModelTarget GameObject (or a ModelRecognition GameObject if you have a trained Model Target dataset with multiple objects and/or Guide Views) using the GameObject -> Vuforia -> Model Targets menu, and configure it with your AR content.

For more information, see the Introduction to Model Targets in Unity.

Native API

A Model Target dataset is stored as a pair of *.xml and *.dat files.

Load the dataset

The load procedure for Model Targets is different depending on whether you are using an untrained dataset with a single object, or a trained dataset with multiple objects and/or multiple Guide Views.

  • For an untrained dataset with a single object, loading Model Targets (and switching, if required) is done via instances of the DataSet class. Create a DataSet instance on the ObjectTracker by calling ObjectTracker::createDataSet(). Load the data set by calling DataSet::load(), and then activate it on the ObjectTracker using ObjectTracker::activateDataSet() (see the sketch after this list).
  • For a trained dataset with multiple objects and/or multiple Guide Views, loading Model Targets and switching between them is done via the TargetFinder. Obtain the TargetFinder from ObjectTracker by calling ObjectTracker::getTargetFinder(). Load the data set by calling TargetFinder::init(), and start the TargetFinder by calling TargetFinder::startRecognition().
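
For the untrained, single-object case, the DataSet path might look like the sketch below. It assumes a pre-10.x C++ Engine, that the ObjectTracker has already been initialized via the TrackerManager, and that the dataset ships as an app resource; the file name VuforiaModelTarget.xml is a placeholder. For the trained case, the exact TargetFinder::init() arguments depend on your Engine version, so check TargetFinder.h and the workflow page referenced below.

    // Sketch only (untrained, single-object dataset): create, load and activate
    // a DataSet on the ObjectTracker. Pre-10.x C++ Vuforia Engine API assumed.
    #include <Vuforia/Vuforia.h>
    #include <Vuforia/TrackerManager.h>
    #include <Vuforia/ObjectTracker.h>
    #include <Vuforia/DataSet.h>

    bool loadModelTargetDataSet()
    {
        // The ObjectTracker is assumed to have been initialized beforehand via
        // TrackerManager::initTracker(ObjectTracker::getClassType()).
        auto& trackerManager = Vuforia::TrackerManager::getInstance();
        auto* objectTracker = static_cast<Vuforia::ObjectTracker*>(
            trackerManager.getTracker(Vuforia::ObjectTracker::getClassType()));
        if (objectTracker == nullptr)
            return false;

        Vuforia::DataSet* dataSet = objectTracker->createDataSet();
        if (dataSet == nullptr)
            return false;

        // "VuforiaModelTarget.xml" is a placeholder; the matching .dat file is
        // expected alongside it in the app's resources.
        if (!dataSet->load("VuforiaModelTarget.xml", Vuforia::STORAGE_APPRESOURCE))
        {
            objectTracker->destroyDataSet(dataSet);
            return false;
        }

        // Only one Model Target dataset may be active at any one time.
        return objectTracker->activateDataSet(dataSet);
    }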

The following restrictions apply to Model Target tracking at runtime:

  • Multiple datasets can be loaded at once, subject to memory usage limits (as with 3DRO targets).
  • Only one Model Target dataset may be active at any one time. (This restriction does not apply to other kinds of datasets. For example, you may have multiple active Image Target datasets at the same time as you have an active Model Target dataset).
  • If your Model Target dataset contains multiple objects, you will need to use the TargetFinder to switch between the objects.
  • If a Model Target in your dataset has multiple Guide Views, you can use the TargetFinder to automatically select the best Guide View (if you have a trained Model Target dataset), or you can switch between them yourself using ModelTarget::setActiveGuideViewIndex() (a short sketch follows this list).
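
A minimal sketch of the manual Guide View switch mentioned above; the index convention and return value are assumptions to verify against ModelTarget.h for your Engine version.

    // Sketch only: manually activate a different Guide View on a Model Target.
    #include <Vuforia/ModelTarget.h>

    bool switchGuideView(Vuforia::ModelTarget& modelTarget, int guideViewIndex)
    {
        // Assumed to return whether the switch succeeded.
        return modelTarget.setActiveGuideViewIndex(guideViewIndex);
    }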

Please refer to Model Targets Native API Workflow for example code.

Guide View

If you are not already familiar with Guide Views, please refer to the Model Targets Guide View page for an introduction to the Guide View concept.

As a developer, it is your responsibility to render an appropriate Guide View. Typically, you would do this in one of two ways:

  • Draw a 2D Image as an overlay to the camera image.

    The image provided on the GuideView class is designed for this purpose; alternatively, you may use your own custom image based on the image output from the Model Target Generator.

    In either case, for correct placement we recommend scaling the image so that its longer side matches the length of the longer side of the camera image (see the scaling sketch after this list). Alternatively, you can use the view intrinsic parameters (provided on the GuideView instance) together with the current camera calibration or device screen resolution to adjust the scale and aspect ratio of the image.

  • Render a 3D Model representing the object as it appears from the Guide View pose. You can use the view extrinsic parameters provided on the GuideView instance in combination with the current camera calibration to do this, using the same conventions used elsewhere for rendering AR content on a trackable.
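
As a rough sketch of the 2D overlay scaling recommended in the first option above, the helper below computes a uniform scale factor that makes the longer side of the Guide View image match the longer side of the camera image. It is plain arithmetic with no Engine calls; the pixel sizes are assumed to come from the Guide View image and the current camera frame.

    // Sketch only: uniform scale factor for drawing the Guide View image so that
    // its longer side matches the longer side of the camera image.
    #include <algorithm>

    float computeGuideViewScale(int guideImageWidth, int guideImageHeight,
                                int cameraImageWidth, int cameraImageHeight)
    {
        const float guideLongSide =
            static_cast<float>(std::max(guideImageWidth, guideImageHeight));
        const float cameraLongSide =
            static_cast<float>(std::max(cameraImageWidth, cameraImageHeight));
        // Apply this factor to both image dimensions to preserve the aspect ratio.
        return cameraLongSide / guideLongSide;
    }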

Note that if you are working with HoloLens, we strongly recommend rendering the Guide View as a 3D model.

Custom Guide View rendering and positioning

In the C++ Vuforia Engine API, the GuideView class exposes the following properties of a Guide View (a short sketch of reading them follows the list):

  • View Intrinsic Parameters: the intrinsics used when the Guide View was created (by the Model Target Generator), represented as a CameraCalibration data structure.
  • View Extrinsic Parameters: the extrinsics used when the Guide View was created (by the Model Target Generator), represented as a Matrix3x4 data structure.
  • Overlay image: a rendering of the object in an abstracted style (edge rendering), as it would appear from the Guide View position as set in the Model Target Generator (i.e. using the initial View Intrinsic Parameters and View Extrinsic Parameters).
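
The sketch below reads these properties for custom rendering. The getter names (getIntrinsics(), getPose(), getImage()) follow the pre-10.x headers but are assumptions here; verify them against GuideView.h for your Engine version.

    // Sketch only: read the Guide View properties used for custom rendering.
    // Getter names are assumptions; check GuideView.h for your Engine version.
    #include <Vuforia/GuideView.h>
    #include <Vuforia/CameraCalibration.h>
    #include <Vuforia/Image.h>
    #include <Vuforia/Matrices.h>

    void inspectGuideView(Vuforia::GuideView& guideView)
    {
        // Intrinsics captured when the Guide View was created in the Model Target Generator.
        const Vuforia::CameraCalibration& intrinsics = guideView.getIntrinsics();

        // Extrinsics (the Guide View pose) as a 3x4 matrix.
        const Vuforia::Matrix34F pose = guideView.getPose();

        // Edge-rendered overlay image matching the pose above.
        const Vuforia::Image* overlayImage = guideView.getImage();

        (void)intrinsics; (void)pose; (void)overlayImage;
        // ... use these to draw a 2D overlay or render a 3D guide model ...
    }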

To change the Guide View pose offline, use the Model Target Generator to modify the Detection Position and re-export the Model Target dataset.

If you want to change the Guide View pose at runtime, you can use GuideView::setPose() to modify the Guide View pose (and therefore the point from which tracking will begin). Note, however, that if you do this, the image provided will no longer match the new Guide View pose you have set, and you will need to render your own visual overlay at runtime, one that matches the newly set pose.
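
A minimal sketch of such a runtime change, assuming setPose() accepts the same Matrix3x4 pose representation that getPose() returns (verify against GuideView.h):

    // Sketch only: move the Guide View (and hence the tracking start pose) at runtime.
    // After this call the built-in overlay image no longer matches the pose, so a
    // custom overlay must be rendered instead.
    #include <Vuforia/GuideView.h>
    #include <Vuforia/Matrices.h>

    void updateGuideViewPose(Vuforia::GuideView& guideView, const Vuforia::Matrix34F& newPose)
    {
        guideView.setPose(newPose);
    }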

Extended Tracking

Extended Tracking is supported via the Positional Device Tracker.

Note that the tracking status as reported by ModelTargetResult::getStatus() will be TRACKED when Model Target tracking is active and robust, and EXTENDED_TRACKED when the Positional Device Tracker is providing tracking instead.
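
A hedged sketch of that status check (pre-10.x C++ API; the STATUS values are those named above):

    // Sketch only: distinguish direct Model Target tracking from extended tracking
    // provided by the Positional Device Tracker.
    #include <Vuforia/TrackableResult.h>
    #include <Vuforia/ModelTargetResult.h>

    void handleModelTargetResult(const Vuforia::ModelTargetResult& result)
    {
        switch (result.getStatus())
        {
            case Vuforia::TrackableResult::STATUS::TRACKED:
                // Model Target tracking is active and robust.
                break;
            case Vuforia::TrackableResult::STATUS::EXTENDED_TRACKED:
                // The Positional Device Tracker is providing the pose instead.
                break;
            default:
                // Not currently tracked (other values depend on the Engine version).
                break;
        }
    }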

Learn More

Model Targets Overview

Introduction to Model Targets in Unity

Model Targets Native Workflow