Model Targets Native Workflow

This document demonstrates general usage of the Model Targets API, with code snippets for commonly performed tasks. For a more general introduction to working with Model Targets, see the Model Targets Overview and the Model Targets API Overview.

Model Targets are tracked using the ObjectTracker and therefore are used in a very similar way to Image Targets, VuMarks, Cylinder Targets, and Multi Targets. Unique to Model Targets is the Guide View, which defines an initialization pose and therefore requires an additional rendered overlay that guides the user to this pose.

A Model Target dataset can have either a single object with a single Guide View, or multiple objects and/or multiple Guide Views. If your Model Target dataset is of the latter type, you will need the help of a TargetFinder to start tracking a particular object and/or to switch between Guide Views based on what is visible in the camera.

Adding a License Key

If you want to use your own Model Targets created using the Model Target Generator, you will need to have an appropriate license key. See How to Add a License Key to Your Vuforia App for more information.

Using Model Targets - General

Init and Destroy the ObjectTracker

To start working with Model Targets, initialize the ObjectTracker:

TrackerManager& trackerManager = TrackerManager::getInstance(); 
ObjectTracker* objectTracker = static_cast<ObjectTracker*>(trackerManager.initTracker(
    Vuforia::ObjectTracker::getClassType()));

To destroy the ObjectTracker:

TrackerManager& trackerManager = TrackerManager::getInstance(); 
trackerManager.deinitTracker(Vuforia::ObjectTracker::getClassType()); 

Load the Model Target dataset

Depending on whether you are using an untrained Model Target dataset with a single object, or a trained Model Target dataset with multiple objects and/or multiple Guide Views, your load step will differ. See the individual sections below: 

Untrained Model Target dataset with a single object

Trained Model Target dataset with multiple objects and/or multiple Guide Views

Start and Stop the Object Tracker

Start the Object Tracker:

    objectTracker->start();

Stop the Object Tracker:

    objectTracker->stop();

Check for results and render

Model Target tracking results are delivered via the State, as usual.

void render() 
{
    const Vuforia::State state = Vuforia::TrackerManager::getInstance().getStateUpdater().updateState();

    for (const Vuforia::TrackableResult* trackableResult : state.getTrackableResults())
    {
        // Check for a Model Target tracked pose.
        if (trackableResult->isOfType(ModelTargetResult::getClassType()))
        {
            const Vuforia::ModelTargetResult* modelTargetResult = static_cast<const Vuforia::ModelTargetResult*>(trackableResult); 
            const Vuforia::ModelTarget& modelTarget = modelTargetResult->getTrackable(); 

            // Get a model-view matrix representing the pose of the Model Target.
            // Augmentations should be rendered using this model-view matrix as a baseline.
            Vuforia::Matrix44F modelViewMatrix = Vuforia::Tool::convertPose2GLMatrix(trackableResult->getPose()); 

            // Get the size of the 3D model 
            Vuforia::Vec3F targetScale = Vuforia::Vec3F(modelTarget.getSize().data[0], 
                                                        modelTarget.getSize().data[1], 
                                                        modelTarget.getSize().data[2]);
            // Get the bounding box of the 3D model 
            Vuforia::Obb3D targetBBox = modelTarget.getBoundingBox();
            // Get the center offset of the 3D model
            Vuforia::Vec3F translateCenter = Vuforia::Vec3F(targetBBox.getCenter().data[0],
                                                            targetBBox.getCenter().data[1], 
                                                            targetBBox.getCenter().data[2]);

            // [your custom rendering code goes here]
            // for example, to render a wireframe box around the object respecting the object's size and bounding box:
            MyApp::renderBoxWireframe(projectionMatrix, modelViewMatrix, targetScale, translateCenter);
        }
    }
}
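
The pose returned by getPose() is a 3x4 row-major matrix, while OpenGL expects a 4x4 column-major matrix; Tool::convertPose2GLMatrix performs that conversion for you. As a standalone sketch of what the conversion involves (the type aliases and function name below are illustrative, not part of the SDK):

```cpp
#include <array>

// Row-major 3x4 pose (rotation in the left 3x3, translation in the last column)
using Matrix34 = std::array<float, 12>;
// Column-major 4x4 matrix, as consumed by OpenGL
using Matrix44 = std::array<float, 16>;

Matrix44 poseToGLMatrix(const Matrix34& pose)
{
    Matrix44 gl{};
    // Transpose the 3x4 pose into the top three rows of a column-major 4x4
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 4; ++col)
            gl[col * 4 + row] = pose[row * 4 + col];
    gl[15] = 1.0f; // homogeneous bottom row is (0, 0, 0, 1)
    return gl;
}
```

In the resulting matrix, the translation ends up at indices 12 to 14, which is where OpenGL-style model-view matrices expect it.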

Loading a Model Target dataset with a single object

If you are using an untrained Model Target dataset containing a single model, create a new DataSet via the ObjectTracker instance to hold the Model Target, and load the data files:

const char* modelTargetDatasetPath = "Vuforia_Motorcycle_Marslander.xml";

DataSet* modelTargetDataset = objectTracker->createDataSet(); 
if (DataSet::exists(modelTargetDatasetPath, Vuforia::STORAGE_APPRESOURCE)) 
    modelTargetDataset->load(modelTargetDatasetPath, Vuforia::STORAGE_APPRESOURCE);

The loaded dataset must be activated before it can be used by the tracker (after loading, and before starting the tracker):

objectTracker->activateDataSet(modelTargetDataset);

The loaded dataset must be deactivated when it is no longer needed (after stopping the tracker):

objectTracker->deactivateDataSet(modelTargetDataset);

Finally, the dataset must be destroyed after usage (after stopping the tracker and deactivating the dataset):

objectTracker->destroyDataSet(modelTargetDataset); 

Loading a trained Model Target dataset with multiple objects and/or multiple Guide Views

If you are using a trained dataset with multiple Model Targets and/or multiple Guide Views, use a TargetFinder to switch between Model Targets and/or Guide Views depending on what is currently visible in the camera.

Initialize and de-initialize the TargetFinder

Initialize the TargetFinder:

const char* modelTargetDatasetPath = "Vuforia_Motorcycle_Marslander.xml";

TrackerManager& tm = TrackerManager::getInstance();
ObjectTracker* objectTracker = static_cast<ObjectTracker*>(tm.getTracker(ObjectTracker::getClassType()));

TargetFinder* targetFinder = objectTracker->getTargetFinder(ObjectTracker::TargetFinderType::MODEL_RECO);

targetFinder->startInit(modelTargetDatasetPath, Vuforia::STORAGE_APPRESOURCE);
// Note: initialization may complete asynchronously; poll getInitState() until it
// reports a final state before checking for success.
if (targetFinder->getInitState() != TargetFinder::INIT_SUCCESS) 
{
    printf("Failed to initialize the TargetFinder: init state %i\n", targetFinder->getInitState());
}

De-initialize the TargetFinder:

targetFinder->deinit();

Start and stop the TargetFinder

To start continuous model recognition on the camera feed:

targetFinder->startRecognition();

To stop model recognition:

targetFinder->stop();

Enable tracking for the first recognized Model Target

Note that after enabling tracking on a target, it is usually a good idea to stop the TargetFinder by calling targetFinder->stop().

TargetFinderQueryResult result = targetFinder->updateQueryResults();
if (result.status == Vuforia::TargetFinder::UPDATE_NO_MATCH ||
    result.status == Vuforia::TargetFinder::UPDATE_NO_REQUEST) 
{
    // no results for now
} 
else if (result.status == Vuforia::TargetFinder::UPDATE_RESULTS_AVAILABLE) 
{
    // if results are available there should be at least one object
    const Vuforia::ModelRecoSearchResult* firstResult = 
        static_cast<const Vuforia::ModelRecoSearchResult*>(result.results[0]);
    // activate tracking for the Model Target
    targetFinder->enableTracking(*firstResult);
    // tracking will begin in earnest once the user aligns their device with 
    // the Model Target's active GuideView
} 
else 
{
    printf("Failed to update query results: status %i\n", result.status);
}

Using the Guide View

Obtain the GuideView instance

Access the GuideView from the ModelTarget trackable.

  • If you are using a Model Target dataset with a single object and a single Guide View, the ModelTarget trackable is available via the loaded DataSet:
    ModelTarget* modelTarget = nullptr;
    for (Trackable* trackable: modelTargetDataset->getTrackables()) 
    { 
        if (trackable->isOfType(ModelTarget::getClassType())) 
        {  
            modelTarget = static_cast<ModelTarget*>(trackable); 
            break; 
        } 
    } 
    if (modelTarget == nullptr) return;
    
  • If you are using a trained Model Target dataset with multiple objects and/or multiple Guide Views, the active ModelTarget instance is available via the TargetFinder any time after calling targetFinder->enableTracking():
    ModelTarget* modelTarget = nullptr;
    for (ObjectTarget* objectTarget: targetFinder->getObjectTargets()) 
    {
        if (objectTarget->isOfType(ModelTarget::getClassType()))
        {
            modelTarget = static_cast<ModelTarget*>(objectTarget); 
            break; 
        }
    }
    if (modelTarget == nullptr) return;
    

Once you have the ModelTarget instance, you can obtain the GuideView itself:

int guideViewIndex = modelTarget->getActiveGuideViewIndex(); 
GuideView* guideView = modelTarget->getGuideViews().at(guideViewIndex);

Render the Guide View

In most use cases, you will render the Guide View using the default image from the GuideView instance, drawn with a textured rectangle overlay, as part of your render loop. Note the scaling code to correctly scale the image to match the camera view size:

// Initialization code 
if (guideView != nullptr && guideView->getImage() != nullptr)
{
    textureId = MyApp::createTexture(const_cast<Vuforia::Image*>(guideView->getImage()));
}
 
MyApp::renderVideoBackground();
 
// Only display the guide when you don’t have tracking 
// scale your guide view with the video background rendering
if (guideView != nullptr && guideView->getImage() != nullptr && state.getTrackableResults().size() == 0)
{    
    float guideViewAspectRatio = (float)guideView->getImage()->getWidth() / 
        guideView->getImage()->getHeight();
    float cameraAspectRatio = (float)viewport.data[2] / viewport.data[3];
    
    float planeDistance = 0.01f;
    float fieldOfView = Vuforia::CameraDevice::getInstance().getCameraCalibration().getFieldOfViewRads().data[1];
    float nearPlaneHeight = 2.0f * planeDistance * tanf(fieldOfView * 0.5f);
    float nearPlaneWidth = nearPlaneHeight * cameraAspectRatio;
    
    float planeWidth;
    float planeHeight;
    
    if(guideViewAspectRatio >= 1.0f && cameraAspectRatio >= 1.0f) // guideview landscape, camera landscape
    {
        // scale so that the long side of the camera (width)
        // is the same length as guideview width
        planeWidth = nearPlaneWidth;
        planeHeight = planeWidth / guideViewAspectRatio;
    }
    
    else if(guideViewAspectRatio < 1.0f && cameraAspectRatio < 1.0f) // guideview portrait, camera portrait
    {
        // scale so that the long side of the camera (height)
        // is the same length as guideview height
        planeHeight = nearPlaneHeight;
        planeWidth = planeHeight * guideViewAspectRatio;
    }
    else if (cameraAspectRatio < 1.0f) // guideview landscape, camera portrait
    {
        // scale so that the long side of the camera (height)
        // is the same length as guideview width
        planeWidth = nearPlaneHeight;
        planeHeight = planeWidth / guideViewAspectRatio;
        
    }
    else // guideview portrait, camera landscape
    {
        // scale so that the long side of the camera (width)
        // is the same length as guideview height
        planeHeight = nearPlaneWidth;
        planeWidth = planeHeight * guideViewAspectRatio;
    }
    
    // normalize world space plane sizes into view space again
    Vuforia::Vec2F scale = Vuforia::Vec2F(planeWidth / nearPlaneWidth, -planeHeight / nearPlaneHeight);
 
    // render the guide view image using an orthographic projection
    MyApp::renderRectangleTextured(scale, color, textureId);
}
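
The aspect-ratio branching above reduces to a fit-to-view computation driven by basic pinhole geometry (near-plane height = 2 * distance * tan(fov/2)). Factored into a standalone helper, it can be checked in isolation; the function name and signature here are hypothetical, not part of the Vuforia API:

```cpp
#include <cmath>
#include <utility>

// Returns {planeWidth, planeHeight} for the guide-view overlay, in the same
// units as the near plane. Mirrors the four aspect-ratio cases above.
std::pair<float, float> fitGuideViewToCamera(float guideViewAspectRatio,
                                             float cameraAspectRatio,
                                             float verticalFieldOfViewRads,
                                             float planeDistance)
{
    // Near-plane size from pinhole geometry
    float nearPlaneHeight = 2.0f * planeDistance * std::tan(verticalFieldOfViewRads * 0.5f);
    float nearPlaneWidth = nearPlaneHeight * cameraAspectRatio;

    float planeWidth, planeHeight;
    if (guideViewAspectRatio >= 1.0f && cameraAspectRatio >= 1.0f) // both landscape
    {
        planeWidth = nearPlaneWidth;
        planeHeight = planeWidth / guideViewAspectRatio;
    }
    else if (guideViewAspectRatio < 1.0f && cameraAspectRatio < 1.0f) // both portrait
    {
        planeHeight = nearPlaneHeight;
        planeWidth = planeHeight * guideViewAspectRatio;
    }
    else if (cameraAspectRatio < 1.0f) // guide view landscape, camera portrait
    {
        planeWidth = nearPlaneHeight;
        planeHeight = planeWidth / guideViewAspectRatio;
    }
    else // guide view portrait, camera landscape
    {
        planeHeight = nearPlaneWidth;
        planeWidth = planeHeight * guideViewAspectRatio;
    }
    return {planeWidth, planeHeight};
}
```

Dividing the result by the near-plane dimensions, as the snippet above does, yields the normalized view-space scale passed to the textured-rectangle renderer.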

Learn More

Model Targets

Native API Reference

The ModelTarget class extends the base ObjectTarget with methods related to querying the bounding box and associated guide views.

The GuideView class provides access to the 2D guide view images and the associated pose and camera intrinsics. It also allows overriding the detection pose for scenarios where the pose needs to change dynamically at runtime.

If you are using a trained dataset with multiple Model Targets and/or multiple Guide Views, the TargetFinder class and the ModelRecoSearchResult class provide methods for enabling tracking on a Model Target based on whether it is visible in the camera, and selecting an appropriate Guide View based on the user's angle to the object.