User Defined Targets


User-defined targets are Image Targets that are created at runtime from camera frames selected by the user. They share most of the capabilities of a standard Image Target with the exception that they do not support Virtual Buttons. 

Use Cases

Just as Image Targets allow a developer to choose an image ahead of time that the app recognizes, User-Defined Targets allow an end user to pick an image at runtime. Thus, the user experiences AR “any time, anywhere” by selecting an image – like a photograph, book cover or poster – from his or her immediate environment without having to carry around pre-defined targets.

Supported Environments

User Defined Targets should be captured and viewed under moderately bright and diffuse lighting. The target surface should be evenly lit. Indoor scenarios generally work well. 

Working with User Defined Targets

The process of capturing, building and tracking a User Defined Target is managed by your app logic using the Vuforia API. Because the target image is selected by the user, it's important to communicate to the user what types of images can be used and how they should be captured to get the best user experience.

Selecting a User Defined Target image

In the application, the developer should communicate to the user the qualities of a good target so that the end user can select images that will support robust detection and tracking.

Attributes of an ideal User Defined Target:
  • Rich in detail, e.g., a street scene, a group of people, collages and mixtures of items, sports scenes
  • Good contrast, i.e., both bright and dark regions, well lit
  • No repetitive patterns; avoid, for example, a grassy field, the façade of a modern house with identical windows, or a checkerboard
  • Ease of availability, e.g., business cards, magazines, memos

Framing the Image

The user should be instructed to capture the image when their device is parallel to the plane of the image's surface to minimize perspective distortion. This will provide a good rectilinear reference of the image to the ObjectTracker. 

The SDK also provides a frame analysis result that can be surfaced to the user to indicate when the camera is viewing an image suitable for use as a User Defined Target.
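
On the native side, this indicator corresponds to the frame quality reported by the ImageTargetBuilder, which is documented later in this article set. A minimal Java sketch, in which showScanHint() is a hypothetical UI helper of your app, might map that value to user feedback as follows:

// Sketch only: poll the frame quality while scanning and translate it into a
// user-facing hint. showScanHint() is a hypothetical UI helper of your app.
void updateScanFeedback(ImageTargetBuilder builder)
{
    int quality = builder.getFrameQuality();

    if (quality == ImageTargetBuilder.FRAME_QUALITY.FRAME_QUALITY_HIGH)
        showScanHint("Good target - tap to capture");
    else if (quality == ImageTargetBuilder.FRAME_QUALITY.FRAME_QUALITY_MEDIUM)
        showScanHint("Usable target - a more detailed surface would track better");
    else
        showScanHint("Point the camera at a detailed, well-lit, non-repetitive surface");
}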

Using Extended Tracking with User Defined Targets

Extended Tracking improves tracking robustness by using features of the environment surrounding the target. It enables you to use larger models and to place them farther away from the face of the target. Use Extended Tracking when your target will be staged in a stable environment and won't be moved by the user. Moving the target or changing its environment when Extended Tracking is activated can corrupt tracking. See: Extended Tracking
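
As a rough Java sketch, assuming the startExtendedTracking() method exposed on trackables in this SDK generation, and the builder, objectTracker and udtDataSet objects set up as in the native snippets later in this document, Extended Tracking would be requested right after the User Defined Target has been created:

// Sketch only: once the UDT has been built, add it to the dataset and request
// Extended Tracking on it. Assumes startExtendedTracking() is available on the
// created trackable in this SDK generation.
TrackableSource trackableSource = builder.getTrackableSource();
if (trackableSource != null)
{
    objectTracker.deactivateDataSet(udtDataSet);
    Trackable newTrackable = udtDataSet.createTrackable(trackableSource);
    objectTracker.activateDataSet(udtDataSet);

    // Returns false if the tracker could not start Extended Tracking
    boolean extendedTrackingStarted = newTrackable.startExtendedTracking();
}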

Samples

The User-Defined Targets sample app shows how to design user interface and user experience elements that enable your app's users to create and track User Defined Targets effectively.
 

Recommended Attributes of a User Defined Target

For user-defined targets, the application is responsible for the following tasks:

  • Starting the process of scanning a target
  • Triggering the process for building the target
  • Adding the newly acquired target into a database for tracking

Since user-defined targets are created at runtime by invoking the target acquisition process of the ImageTargetBuilder, this article gives only a generic overview of the feature; a minimal sketch of the three steps above follows, and the platform-specific articles later in this document cover the details. 
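
The sketch below is in Java and assumes the ObjectTracker/ImageTargetBuilder entry points shown in the native articles later in this document, including the getImageTargetBuilder() accessor, as well as a hypothetical target name "MyUserTarget". In a real app the three steps happen in response to user actions and per-frame updates rather than back to back.

// 0. Obtain the ImageTargetBuilder from the ObjectTracker and prepare a dataset
ObjectTracker objectTracker =
    (ObjectTracker) TrackerManager.getInstance().getTracker(ObjectTracker.getClassType());
ImageTargetBuilder builder = objectTracker.getImageTargetBuilder();
DataSet udtDataSet = objectTracker.createDataSet();

// 1. Start scanning camera frames for a suitable target
builder.startScan();

// 2. When the user confirms, build the target (name and scene-size width)
builder.build("MyUserTarget", 320.0f);

// 3. Poll for the TrackableSource and add it to the dataset for tracking
TrackableSource source = builder.getTrackableSource();
if (source != null)
{
    objectTracker.deactivateDataSet(udtDataSet);
    udtDataSet.createTrackable(source);
    objectTracker.activateDataSet(udtDataSet);
}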

In the application, the developer should communicate to the user the qualities of a good target so that the end user can successfully acquire a target.

An ideal user-defined target includes the following attributes:

  • Rich in detail: a street scene, a group of people, collages and mixtures of items, sports scenes
  • Good contrast: both bright and dark regions, well lit
  • No repetitive patterns: avoid a grassy field, the façade of a modern house with identical windows, or a checkerboard
  • Ease of availability: business cards, magazines, memos

The developer must provide instructions within the application for properly framing the snapshot.

Note: Attributes for user-defined targets are not stored in the Database Configuration XML file.

 


How To Develop for User Defined Targets in Unity

The UserDefinedTargetBuildingBehaviour of the User Defined Target Builder prefab provides methods to start and stop scanning for good targets and to build a new target. In its inspector the following options can be configured:

  • Start scanning automatically: If this is enabled, the target builder will automatically start to scan the camera image for feature points on startup.
  • Stop tracker while scanning: Check this if you want to automatically stop the ImageTracker while the Target Builder is scanning. Once scanning mode is stopped, the ImageTracker will be automatically started again.
  • Stop scanning after creating a target: If this is enabled, scanning will be automatically stopped when a new target has been created.

If the last two options are disabled, it is possible to create an application where scanning mode is never left and targets are tracked immediately after creating them. With this configuration it is possible to create multiple targets in quick succession.


The UserDefinedTargetBuildingBehaviour will fire various events to registered UserDefinedTargetEventHandlers. These must implement the IUserDefinedTargetEventHandler interface to process the following events:

public interface IUserDefinedTargetEventHandler
{
    /// <summary>
    /// called when the UserDefinedTargetBehaviour has been initialized
    /// </summary>
    void OnInitialized();
 
    /// <summary>
    /// called when the UserDefinedTargetBehaviour reports a new frame Quality
    /// </summary>
    void OnFrameQualityChanged(ImageTargetBuilder.FrameQuality frameQuality);
 
    /// <summary>
    /// called when a new TrackableSource has been created
    /// </summary>
    void OnNewTrackableSource(TrackableSource trackableSource);
}

 

See the UserDefinedTargets sample for details. A sample implementation of the IUserDefinedTargetEventHandler interface can be found in Assets/Scripts/UserDefinedTargetScripts/UserDefinedTargetEventHandler.cs.

Instead of using the User Defined Target Builder prefab, the ImageTargetBuilder API can be used directly. It can be found in Assets/Vuforia/Scripts/ImageTargetBuilder.cs.
An instance of the ImageTargetBuilder object can be requested from the ObjectTracker object:
 

ObjectTracker objectTracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
ImageTargetBuilder imageTargetBuilder = objectTracker.ImageTargetBuilder;
 

 


How To Develop for User Defined Targets in Native

In this article, we show how to use the User Defined Targets feature to obtain a TrackableSource at runtime, which can then be used to create new Trackables.

Two new classes, ImageTargetBuilder and ImageTargetBuilderState, are introduced:

[Figure: User Defined Targets class diagram]

The class ImageTargetBuilder exposes an API for controlling the building progress and retrieving a TrackableSource for instantiating a new trackable upon successful completion.

Relevant API

C++:

class ImageTargetBuilder
{
 
public:
 
    enum FRAME_QUALITY {
       FRAME_QUALITY_NONE = -1,  ///< getFrameQuality was called outside of
                                 ///< scanning mode
       FRAME_QUALITY_LOW = 0,    ///< Poor number of features for tracking
       FRAME_QUALITY_MEDIUM,     ///< Sufficient number of features for tracking
       FRAME_QUALITY_HIGH,       ///< Ideal number of features for tracking
    };
 
    virtual bool build (const char* name, float sceneSizeWidth) = 0;
 
    virtual void startScan() = 0;
 
    virtual void stopScan() = 0;
 
    virtual FRAME_QUALITY getFrameQuality() = 0;
 
    virtual TrackableSource* getTrackableSource() = 0;
};

Java:

public class ImageTargetBuilder
{
 
    public static final class FRAME_QUALITY {
       public static final int FRAME_QUALITY_NONE = -1;   // getFrameQuality was called outside of scanning mode
       public static final int FRAME_QUALITY_LOW = 0;     // Poor number of features for tracking
       public static final int FRAME_QUALITY_MEDIUM = 1;  // Sufficient number of features for tracking
       public static final int FRAME_QUALITY_HIGH = 2;    // Ideal number of features for tracking
    }
 
    public boolean build (String  name, float sceneSizeWidth);
 
    public void startScan();
 
    public void stopScan();
 
    public int getFrameQuality();
 
    public TrackableSource getTrackableSource();
}


The startScan() and stopScan() methods control the frame quality estimation process inside the SDK. While scanning is active, the application can query the frame quality of the current camera frame and give feedback to the user. Once the user decides to create a User Defined Target, the application calls build() to create a target from the current frame.

Once build() has been called, the application polls getTrackableSource() until a TrackableSource becomes available, which it can then use to instantiate a new Trackable in a DataSet.

Code fragment: building and activating targets

Here we show the call flow for entering the scan and build states of the UDT creation process, and then how to add the created trackable to the desired dataset. In this example, the active dataset is deactivated while the newly built trackable is added and then activated again; while scanning is active, the application updates its scanning GUI.

C++:

Vuforia::ImageTargetBuilder* builder;
Vuforia::ObjectTracker* imageTracker;
Vuforia::DataSet* dataSet;
bool building = false;
bool scanning = false;
void startUserDefScan()
{
    builder->startScan();
    scanning = true;
}
 
void startUserDefBuild(const char* name, float sceneSizeWidth)
{
    building = builder->build(name, sceneSizeWidth);
    builder->stopScan();
    scanning = false;
}
 
void onApplicationUpdate()
{
    if (building)
    {
        Vuforia::TrackableSource* trackableSource = builder->getTrackableSource();
        if (trackableSource != NULL)
        {
            imageTracker->deactivateDataSet(dataSet);

            dataSet->createTrackable(trackableSource);

            imageTracker->activateDataSet(dataSet);

            building = false;
        }
    }
    else if (scanning)
    {
        updateScanningGUI();
    }
}

Java:

ImageTargetBuilder builder;
ObjectTracker imageTracker;
DataSet dataSet;
boolean building = false;
boolean scanning = false;
void startUserDefScan()
{
    builder.startScan();
    scanning = true;
}
 
void startUserDefBuild(String name, float sceneSizeWidth)
{
    building = builder.build(name, sceneSizeWidth);
    builder.stopScan();
    scanning = false;
}
 
void onApplicationUpdate()
{
    if (building)
    {
        TrackableSource trackableSource = builder.getTrackableSource();
        if (trackableSource != null)
        {
            imageTracker.deactivateDataSet(dataSet);

            dataSet.createTrackable(trackableSource);

            imageTracker.activateDataSet(dataSet);

            building = false;
        }
    }
    else if (scanning)
    {
        updateScanningGUI();
    }
}

 


How To Use Device and Cloud Databases with User Defined Targets

The following sample code captures the use case of an AR application for a magazine:
  • Premium content
    • A limited number of targets is guaranteed to be available for detection and tracking instantly. These could be paid-for advertisements in the magazine.
    • The premium content targets are loaded from a dataset file retrieved from the Target Manager.
  • Regular content
    • A much larger number of targets is retrievable using Cloud Recognition. These could be pages from all regular articles in the magazine.
  • Play-anywhere game
    • The UDT feature can be used to play a magazine-specific mini-game on a user-defined target.

We also show a slightly more advanced use of the cloud recognition API and how to manage the mapping of developer app resources to cloud recognition trackables using a Table.

Note: The following snippets do not form a complete application, but help to recreate the use case described above.
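
Note also that ContentTable, TableEntry, MyContent and STATUS in these snippets are application-defined types rather than part of the Vuforia API. A minimal Java sketch of a ContentTable that supports the calls used below (find, addEntry, getOldest, remove, size), assuming the TableEntry, MyContent and STATUS types declared in the Java snippets, could look like this:

import java.util.LinkedHashMap;

// Application-side cache mapping cloud recognition unique target IDs to content.
// Insertion order is preserved so that getOldest() can evict the oldest entry.
class ContentTable
{
    private final LinkedHashMap<String, TableEntry> entries =
        new LinkedHashMap<String, TableEntry>();

    TableEntry find(String uniqueId)
    {
        return entries.get(uniqueId);   // null if no content is cached yet
    }

    TableEntry addEntry(MyContent content, String uniqueId)
    {
        TableEntry entry = new TableEntry();
        entry.uniqueID = uniqueId;
        entry.myContent = content;
        entry.status = STATUS.RETRIEVING;
        entries.put(uniqueId, entry);
        return entry;
    }

    TableEntry getOldest()
    {
        return entries.values().iterator().next();
    }

    void remove(TableEntry entry)
    {
        entries.remove(entry.uniqueID);
    }

    int size()
    {
        return entries.size();
    }
}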

Initialization

We load the datasets for premium content and create a dataset for user-defined targets.

C++:

Vuforia::TargetFinder* finder;
Vuforia::ObjectTracker* imageTracker;

// A fixed dataset containing Trackables for premium content loaded from a dataSet file
Vuforia::DataSet* premiumContentDataSet;

// A dataset where a user-defined Target for the mini game is instantiated at run-time
Vuforia::DataSet* udtDataSet;

onApplicationInit()
{
    // Load the premium content dataset from file
    premiumContentDataSet = imageTracker->loadDataSet("premium.xml", STORAGE_APP);

    // Activate the premium content dataset immediately:
    imageTracker->activateDataSet(premiumContentDataSet);

    // Create an empty dataset for UDT:
    udtDataSet = imageTracker->createDataSet();
}

onApplicationDeInit()
{
    // Destroy all dataSets:
    imageTracker->destroyDataSet(premiumContentDataSet);
    imageTracker->destroyDataSet(udtDataSet);
}

Java:

TargetFinder  finder;
ObjectTracker imageTracker;

// A fixed dataset containing Trackables for premium content loaded from a dataSet file
DataSet premiumContentDataSet;

// A dataset where a user-defined Target for the mini game is instantiated at run-time
DataSet udtDataSet;

onApplicationInit()
{
    // Load the premium content dataset from file
    premiumContentDataSet = imageTracker.loadDataSet("premium.xml",
        DataSet.STORAGE_TYPE.STORAGE_APP);

    // Activate the premium content dataset immediately:
    imageTracker.activateDataSet(premiumContentDataSet);

    // Create an empty dataset for UDT:
    udtDataSet = imageTracker.createDataSet();
}

onApplicationDeInit()
{
    // Destroy all dataSets:
    imageTracker.destroyDataSet(premiumContentDataSet);
    imageTracker.destroyDataSet(udtDataSet);
}

Reacting to lifecycle events

C++:

onApplicationResume()
{
    // Start the image Tracker
    imageTracker->start();

    // Start cloud-based recognition
    finder->startRecognition();
}

onApplicationPause()
{
    // Stop the image Tracker
    imageTracker->stop();

    // Stop cloud-based recognition
    finder->stop();
}

Java:
 
onApplicationResume()
{
    // Start the image Tracker
    imageTracker.start();

    // Start cloud-based recognition
    finder.startRecognition();
}

onApplicationPause()
{
    // Stop the image Tracker
    imageTracker.stop();

    // Stop cloud-based recognition
    finder.stop();
}


Handling Cloud Recognition Results

Here we handle the TargetSearchResults reported by the cloud recognition TargetFinder and update the content table that maps targets to application content. A sketch of the TargetFinder initialization itself follows the Java listing below.

C++:

Vuforia::TargetFinder* finder;
Vuforia::ObjectTracker* imageTracker;

struct TableEntry
{
    std::string uniqueID;
    MyContent* myContent;
    STATUS status;    // RETRIEVING or READY
};
ContentTable table;
MyContent* createContent(const char* metaData);
void releaseContent(MyContent* content);

onApplicationUpdate()
{
    // Check if there are new results available:
    if (finder->updateSearchResults() == Vuforia::TargetFinder::RESULTS_AVAILABLE)
    {
        // Iterate through the new results:
        for (int i = 0; i < finder->getResultCount(); ++i)
        {
            const Vuforia::TargetSearchResult* result = finder->getResult(i);

            // Check if this target is suitable for tracking:
            if (result->getTrackingRating() > 0)
            {
                // Create a new Trackable from the result:
                finder->enableTracking(result);

                // Check if we have cached content for this target,
                // otherwise set up new content:
                if (!table.find(result->getUniqueTargetId()))
                {
                    // Check if we need to free some content because we have
                    // reached the limit (e.g. memory resources)
                    if (table.size() > MAX_APP_CONTENT)
                    {
                        TableEntry* entry = table.getOldest();
                        releaseContent(entry->myContent);
                        table.remove(entry);
                    }
                    table.addEntry(createContent(result->getMetaData()),
                                   result->getUniqueTargetId());
                }
            }
        }
    }
}

Java:

TargetFinder finder;
ObjectTracker imageTracker;

class TableEntry
{
    String uniqueID;
    MyContent myContent;
    STATUS status;    // RETRIEVING or READY
}
ContentTable table;
MyContent createContent(String metaData);
void releaseContent(MyContent content);

onApplicationUpdate()
{
    // Check if there are new results available:
    if (finder.updateSearchResults() == TargetFinder.RESULTS_AVAILABLE)
    {
        // Iterate through the new results:
        for (int i = 0; i < finder.getResultCount(); ++i)
        {
            TargetSearchResult result = finder.getResult(i);

            // Check if this target is suitable for tracking:
            if (result.getTrackingRating() > 0)
            {
                // Create a new Trackable from the result:
                finder.enableTracking(result);

                // Check if we have cached content for this target,
                // otherwise set up new content:
                if (table.find(result.getUniqueTargetId()) == null)
                {
                    // Check if we need to free some content because we have
                    // reached the limit (e.g. memory resources)
                    if (table.size() > MAX_APP_CONTENT)
                    {
                        TableEntry entry = table.getOldest();
                        releaseContent(entry.myContent);
                        table.remove(entry);
                    }
                    table.addEntry(createContent(result.getMetaData()),
                                   result.getUniqueTargetId());
                }
            }
        }
    }
}
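
The snippets above assume that the TargetFinder has already been initialized with the access keys of your cloud database. A minimal Java sketch of that step, assuming the startInit(), waitUntilInitFinished() and getInitState() entry points of TargetFinder and placeholder access keys, might look like this:

// Sketch only: initialize the cloud recognition TargetFinder once at startup.
// Replace the placeholder keys with the client access keys of your cloud database.
ObjectTracker objectTracker =
    (ObjectTracker) TrackerManager.getInstance().getTracker(ObjectTracker.getClassType());
TargetFinder finder = objectTracker.getTargetFinder();

if (finder.startInit("<client access key>", "<client secret key>"))
{
    finder.waitUntilInitFinished();
}

if (finder.getInitState() == TargetFinder.INIT_SUCCESS)
{
    // Cloud recognition is ready; start looking for targets
    finder.startRecognition();
}
else
{
    // Handle initialization errors, e.g. missing network connection or wrong keys
}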

Starting the UDT acquisition

Initializing the user-defined targets builder and starting it.

C++:

Vuforia::ImageTargetBuilder*   builder;
Vuforia::ObjectTracker*         imageTracker;
Vuforia::DataSet*              udtDataSet;
bool                        acquiringUdTarget;

onMiniGameStart()
{
    // Start creating a new target if necessary:
    if (udtDataSet->getNumTrackables() == 0)
    {
        // Scan the camera feed for a suitable target. The target name and
        // scene size (e.g. "GroundPlane", 320) are passed to build() later,
        // when the user confirms the capture from the acquisition GUI.
        builder->startScan();

        // Start the acquisition GUI
        acquiringUdTarget = startUdtGui();
    }
    else
    {
        // Directly activate the dataset:
        imageTracker->activateDataSet(udtDataSet);
    }
}

onMiniGameStop()
{
    // Deactivate the UDT dataset if it is active:
    if (udtDataSet->isActive())
        imageTracker->deactivateDataSet(udtDataSet);
}

Java:

ImageTargetBuilder   builder;
ObjectTracker         imageTracker;
DataSet                  udtDataSet;
boolean                  acquiringUdTarget;

onMiniGameStart()
{
    // Start creating a new target if necessary:
    if (udtDataSet.getNumTrackables() == 0)
    {
        // Scan the camera feed for a suitable target. The target name and
        // scene size (e.g. "GroundPlane", 320) are passed to build() later,
        // when the user confirms the capture from the acquisition GUI.
        builder.startScan();

        // Start the acquisition GUI
        acquiringUdTarget = startUdtGui();
    }
    else
    {
        // Directly activate the dataset:
        imageTracker.activateDataSet(udtDataSet);
    }
}

onMiniGameStop()
{
    // Deactivate the UDT dataset if it is active:
    if (udtDataSet.isActive())
        imageTracker.deactivateDataSet(udtDataSet);
}

Instantiating the new UDT trackable

Whenever the user creates a new target on top of the magazine, we create a new trackable and activate it.

C++:

Vuforia::ImageTargetBuilder*   builder;
Vuforia::ObjectTracker*         imageTracker;
Vuforia::DataSet*              udtDataSet;
bool                        acquiringUdTarget;

MyContent* createMiniGameContent();

onMiniGameUpdate()
{
    // Nothing to do if we are not acquiring a new UDT target:
    if (!acquiringUdTarget)
        return;

    // Check if we have finished building the UDT target:
    if (builder->isActive())
    {
        Vuforia::ImageTargetBuilderState* state = builder->getState();
        if (state->getMode() == MODE_SUCCESS)
        {
            // Instantiate the new Trackable:
            Vuforia::TrackableSource* trackableSource = builder->getTrackableSource();
            Vuforia::Trackable* newTrackable = udtDataSet->createTrackable(trackableSource);

            // Associate the mini-game content:
            newTrackable->setUserData((void*) createMiniGameContent());

            // Activate the dataset:
            imageTracker->activateDataSet(udtDataSet);

            acquiringUdTarget = false;
        }
        else if (state->getMode() == MODE_FAILURE)
        {
            builder->stop();
        }
    }
}
Java:
ImageTargetBuilder   builder;
ObjectTracker         imageTracker;
DataSet              udtDataSet;
boolean              acquiringUdTarget;

MyContent  createMiniGameContent();

onMiniGameUpdate()
{
    // Nothing to do if we are not acquiring a new UDT target:
    if (!acquiringUdTarget)
        return;

    // Check if we have finished building the UDT target:
    if (builder.isActive())
    {
        ImageTargetBuilderState state = builder.getState();
        if (state.getMode() == MODE_SUCCESS)
        {
            // Instantiate the new Trackable:
            TrackableSource trackableSource = builder.getTrackableSource();
            Trackable newTrackable = udtDataSet.createTrackable(trackableSource);

            // Associate the mini-game content:
            newTrackable.setUserData(createMiniGameContent());

            // Activate the dataset:
            imageTracker.activateDataSet(udtDataSet);

            acquiringUdTarget = false;
        }
        else if (state.getMode() == MODE_FAILURE)
        {
            builder.stop();
        }
    }
}

Rendering

Rendering the augmentations, using the content table to map each recognized target to its content.

C++:

Vuforia::ImageTargetBuilder*    builder;
Vuforia::ObjectTracker*         imageTracker;
bool                            acquiringUdTarget;
void renderUdtAcquisitionGui(const Vuforia::ImageTargetBuilderState* state);
void renderContent(MyContent* content, const Vuforia::Matrix34F& pose);
void renderPlaceHolderAnim(const Vuforia::Matrix34F& pose);
struct TableEntry
{
    std::string uniqueID;
    MyContent* myContent;
    STATUS status;    // RETRIEVING or READY
};
ContentTable table;

onApplicationRender()
{
    // Get the state from Vuforia and mark the beginning of a rendering section
    Vuforia::State state = Vuforia::Renderer::getInstance().begin();

    // Render the video background
    Vuforia::Renderer::getInstance().drawVideoBackground();

    // Are we acquiring a new target?
    if (acquiringUdTarget)
    {
        renderUdtAcquisitionGui(builder->getState());
    }
    else
    {
        // Did we find any trackables this frame?
        for (int i = 0; i < state.getNumActiveTrackables(); i++)
        {
            // Get the trackable and cast it to ImageTarget:
            const Vuforia::ImageTarget* imageTarget =
                (const Vuforia::ImageTarget*) state.getActiveTrackable(i);

            // Render the associated augmentation:
            if (imageTarget->getUserData() != NULL)
            {
                // UDT:
                renderContent((MyContent*) imageTarget->getUserData(),
                              imageTarget->getPose());
            }
            else
            {
                // Cloud Reco: find the content for this trackable and initiate the
                // content download if necessary
                TableEntry* entry = table.find(imageTarget->getUniqueTargetId());
                if (entry == NULL)
                    entry = table.addEntry(createContent(imageTarget->getMetaData()),
                                           imageTarget->getUniqueTargetId());

                // Render the content if it has already been downloaded, otherwise a placeholder:
                if (entry->status == RETRIEVING)
                    renderPlaceHolderAnim(imageTarget->getPose());
                else
                    renderContent(entry->myContent, imageTarget->getPose());
            }
        }
    }

    // Let Vuforia know that we are done with rendering:
    Vuforia::Renderer::getInstance().end();
}

Java:

ImageTargetBuilder    builder;
ObjectTracker         imageTracker;
ContentTable          table;
boolean               acquiringUdTarget;

onApplicationRender()
{
    // Get the state from Vuforia and mark the beginning of a rendering section
    State state = Renderer.getInstance().begin();

    // Render the video background
    Renderer.getInstance().drawVideoBackground();

    // Are we acquiring a new target?
    if (acquiringUdTarget)
    {
        renderUdtAcquisitionGui(builder.getState());
    }
    else
    {
        // Did we find any trackables this frame?
        for (int i = 0; i < state.getNumTrackableResults(); i++)
        {
            // Get the trackable result and its ImageTarget:
            ImageTargetResult imageTargetResult =
                (ImageTargetResult) state.getTrackableResult(i);
            ImageTarget imageTarget = (ImageTarget) imageTargetResult.getTrackable();

            // Render the associated augmentation:
            if (imageTarget.getUserData() != null)
            {
                // UDT:
                renderContent((MyContent) imageTarget.getUserData(),
                              imageTargetResult.getPose());
            }
            else
            {
                // Cloud Reco: find the content for this trackable and initiate the
                // content download if necessary
                TableEntry entry = table.find(imageTarget.getUniqueTargetId());
                if (entry == null)
                    entry = table.addEntry(createContent(imageTarget.getMetaData()),
                                           imageTarget.getUniqueTargetId());

                // Render the content if it has already been downloaded, otherwise a placeholder:
                if (entry.status == STATUS.RETRIEVING)
                    renderPlaceHolderAnim(imageTargetResult.getPose());
                else
                    renderContent(entry.myContent, imageTargetResult.getPose());
            }
        }
    }

    // Let Vuforia know that we are done with rendering:
    Renderer.getInstance().end();
}