This article describes the native API for configuring a Model Target and its Guide View.
Model Target tracking is exposed in the C++ API by loading Model Target Device Databases into the ObjectTracker class. The API design of Model Targets follows a structure similar to other target types in ObjectTracker. The main differences are:
- GuideView images are used to guide the user to a location where the object can be detected so that tracking can start.
- While the tracker is running, the status of each target is reported as a ModelTargetResult instance on the State. This includes the latest tracking poses when available, but also information on whether Guide Views need to be displayed and which targets are being recognized as Advanced Model Targets.
Target Size and Origin
The size of a Model Target corresponds to the bounding box of the (CAD) model from which it was generated in the Model Target Generator tool, and therefore depends on the CAD model used as input.
A Model Target's coordinate system has its origin at the origin of the 3D model used to define the target. This is in contrast to some other types of trackable found elsewhere in Vuforia Engine, such as ImageTarget (origin is the center of the image), or Object Target (origin is the center of the scanning target).
A developer therefore needs to know the origin of the 3D model in order to do any AR augmentation of the physical object. The origin cannot be changed from the API: a Model Target's size and origin are defined implicitly by the size and origin of the 3D model loaded into the Model Target Generator app.
Target and Target Pose Unit
The units used for the target (and for the pose) are meters.
A Model Target database is stored as a pair of files: an XML configuration file and a DAT binary data file.
Loading and activating Model Target databases is similar to other ObjectTracker datasets.
The following restrictions apply to Model Target tracking at runtime:
- Multiple databases can be loaded at once, subject to memory usage limits.
- Only one Model Target database may be active at any one time. (This restriction does not apply to other kinds of databases. For example, you may have multiple active Image Target databases at the same time as you have an active Model Target database).
- For Advanced Model Target databases with multiple objects, only a single object can be tracked at any point in time. Vuforia will automatically detect the currently visible object and set it as the active object.
Please refer to Model Targets Native API Workflow for example code and instructions.
Vuforia provides a motion hint option that is ideal for tracking stationary objects. We recommend setting the Model Target’s motion hint in the Model Attributes menu of the Model Target Generator. However, the value can be manually overridden. Before rendering the model and the guide view, set the motion hint to either STATIC or ADAPTIVE with the following call:
Similarly, a Tracking Mode can also be assigned to your Model Target in the Model Target Generator, for example 3D SCAN. Note that only certain modes, such as CAR, can be set at runtime.
See more on Motion Hint and Tracking Mode in Optimizing Model Target Tracking.
Model Targets require a Guide View to be rendered on the screen to assist users in finding the right position from where the object can be successfully detected. If you are not already familiar with Guide Views, please refer to the Model Target Guide View page for a more detailed introduction to the Guide View concept. Advanced Model Targets do not require Guide Views to be rendered on the screen.
As a developer, it is your responsibility to render an appropriate Guide View. Typically, you would do this in one of two ways:
- Draw a 2D Image as an overlay to the camera image.
The image provided on the GuideView class is designed for this purpose. Alternatively, you may use a custom image of your own, based on the image output of the Model Target Generator.
In either case, for correct placement, we recommend that you scale the image such that the longer side of the image has the same length as the longer side of the camera image. Alternatively, you can use the view intrinsic parameters (provided on the GuideView instance) in combination with the current camera calibration or device screen resolution in order to adjust the scale/aspect ratio of the image.
- Render a 3D Model representing the object as it appears from the Guide View pose. You can use the view extrinsic parameters provided on the GuideView instance in combination with the current camera calibration to do this, using the same conventions used elsewhere for rendering AR content on a trackable.
NOTE: If you are working with stereo eyewear such as Microsoft HoloLens, we strongly recommend rendering the Guide View as a 3D model.
Displaying Guide Views for Model Targets
Since different types of Model Target databases have different requirements on when to display Guide Views, the Vuforia Native API provides a mechanism that allows developers to implement a unified logic that can handle all cases without requiring prior knowledge of the database type.
Standard Model Targets Behavior
For standard Model Targets, a ModelTargetResult is posted to the State after activation with getStatusInfo()==NO_DETECTION_RECOMMENDING_GUIDANCE. Once the user has aligned the object with the Guide View, tracking starts, and the target's status switches to TRACKED.
Switching between Guide Views
For Model Targets with multiple Guide Views, these can be switched by calling
setActiveGuideViewIndex() on the ModelTarget object. Advanced Model Targets will automatically recognize objects without requiring a Guide View to be set and rendered.
In addition to the explanation above, an app workflow diagram is available on the Model Targets Native Workflow page.
Advanced Model Targets Behavior
For Advanced Model Targets with a detection range up to 360°, it is generally not necessary to display a Guide View at runtime. When an Advanced Model Target database is activated, all objects are initialized in the State with
getStatusInfo()==INITIALIZING. When one of the Model Targets is detected, tracking starts automatically and its status switches to TRACKED.
Custom Guide View Rendering and Positioning
In the C++ Vuforia Engine API, the GuideView class exposes the following properties of a Guide View:
- View Intrinsic Parameters: intrinsics used when the Guide View was created (by the Model Target Generator). It is represented as a CameraCalibration data structure.
- View Extrinsic Parameters: extrinsics used when the Guide View was created (by the Model Target Generator). It is represented as a Matrix3x4 data structure.
NOTE: GuideView uses a y-down convention, following the coordinate system conventions commonly used in computer vision.
- Overlay image: a rendering of the object in an abstracted style (edge rendering), as it would appear from the Guide View position as set in the Model Target Generator (i.e. using the initial View Intrinsic Parameters and View Extrinsic Parameters).
To change the Guide View pose offline, you can use the Model Target Generator to modify the Detection Position and re-export the Model Target database.
If you want to change the Guide View pose at run-time, you can use
GuideView::setPose() to modify the Guide View pose (and therefore the point from which tracking will begin). Note, however, that if you do this, the image provided will no longer match the new Guide View pose you have set, and you will need to be able to render your own visual overlay at runtime - one that matches the newly-set Guide View pose.
Controlling Recognition Behavior for Advanced Model Targets
For Advanced Model Targets with multiple objects and/or multiple views, a recognized object is activated automatically, and tracking starts as soon as the user has aligned the object with the Guide View.
As long as an Advanced Model Target is
TRACKED, recognition is disabled automatically to save processing power and prevent unwanted activation of other targets. When tracking of a target is lost, i.e. its
STATUS changes to
LIMITED, recognition is automatically enabled again to allow detection of other objects. Since only a single object can be actively tracked at a time, the recognition and activation of another object can lead to complete tracking loss of the previously tracked target. In scenarios where this behavior is not intended (e.g. to save power or when only a single object is present in the scene), it can be disabled by calling
Vuforia::setHint(Vuforia::HINT_MODEL_TARGET_RECO_WHILE_EXTENDED_TRACKED, 0). In this case the first successfully tracked object will stay active until the database is deactivated.
We recommend enabling Extended Tracking when using Model Targets to allow for a robust experience, especially with large-scale objects. Extended Tracking is supported by starting the Positional Device Tracker.
NOTE: The tracking status as reported will be
TRACKED when Model Target tracking is active and robust, and
EXTENDED_TRACKED when the Positional Device Tracker is providing tracking based on the last observed position instead. In addition, in situations where the Positional Device Tracker provides insufficient information to generate an accurate
EXTENDED_TRACKED pose of the ModelTarget, the tracking status will be set to LIMITED. When a LIMITED tracking status is reported, we recommend hiding augmentations of the Model Target until more robust tracking is reported again.
As of Vuforia 8.3, the API for the Advanced Model Target feature (previously called Model Target Trained) has been consolidated and merged into the general Model Target API, with the following major changes:
- Support for loading Model Target databases via the TargetFinder API and using the ModelTargetSearchResult to retrieve recognition results has been removed. Advanced Model Target databases can now be used via the standard Model Target API based on the ObjectTracker.
- Recognized objects are now reported on the Vuforia State as ModelTargetResult instances.
- Recognized objects are now activated automatically and start tracking as soon as the user has moved to a suitable location as indicated by the Guide View.
- As long as an Advanced Model Target is TRACKED, recognition is deactivated automatically. When tracking of a target is lost, i.e. its STATUS changes to EXTENDED_TRACKED, recognition is automatically turned on again, which may lead to the recognition and activation of another object and complete tracking loss of the previously tracked target. In scenarios where this behavior is not intended (e.g. when only a single object is present in the scene), it can be disabled by calling Vuforia::setHint(Vuforia::HINT_MODEL_TARGET_RECO_WHILE_EXTENDED_TRACKED, 0).
- Upon dataset activation, all Model Target datasets now report getStatus()==NO_POSE. This may break compatibility with apps that do not check the status before using the pose.
The new API workflow is explained above and on the Model Target Native Workflow page.