How To Use 3D Scan Object Recognition in Unity

Introduction

Object Recognition (3D Scan) enables you to create apps that can recognize and track objects, such as toys. This article shows you how to add Object Recognition and Object Targets to a Unity project, how to customize the behaviours exposed through the Object Recognition API, and how to implement custom event handling. If you already have a 3D model of the object you want to track, Model Targets provide a very robust tracking solution that does not require the object to be scanned.

If you are unfamiliar with the Unity workflow, it may be helpful to read the Getting Started with Unity guide first.

You will also need to get a license key from the License Manager and add it to your project if you want to track your own objects. The following articles may be helpful: How To Create an App License and How To Add a License Key to Your Vuforia App.

Adding Object Targets to your Project

The workflow for adding Object Targets to a Vuforia Unity project is very similar to that of the Image Target types.

The GameObjects used for Object Recognition can be found in the menu GameObject > Vuforia > 3D Scan; this creates an ObjectTarget GameObject. You'll be using the ARCamera and ObjectTarget GameObjects.

Adding a Device Database containing an Object Target

  1. Create a Device Database in the Target Manager that includes the Object Target that you want to use. See: Vuforia Object Scanner
  2. Download that database as a *.unitypackage.
  3. Import the unitypackage, either from Assets > Import Package in the Unity Editor or by double-clicking the package file on your file system. The database will be added to StreamingAssets/QCAR.

Note: with the introduction of ObjectTarget, we are also introducing a new type of configuration file that will be copied into Editor/QCAR. Make sure these files are copied when you import your database.

Adding and Configuring the Object Target GameObject

  1. Add an ARCamera instance to your scene (menu: GameObject > Vuforia > AR Camera). Remove the default Main Camera from your scene.
  2. Go to the Datasets section of the Vuforia Configuration Inspector (menu: Window > Vuforia Configuration) and check the Load and Activate options associated with your database name.
  3. Add an ObjectTarget instance to your scene (menu: GameObject > Vuforia > 3D Scan). Both the ARCamera and ObjectTarget should be at the same level in the scene Hierarchy.
  4. Go to the Object Target Behaviour in the ObjectTarget's Inspector and select the Dataset name and Object Target name that you want to associate with the ObjectTarget instance. There is a 1:1 relationship between ObjectTarget GameObject instances and Object Targets in a Device Database.
Vuforia Image
  5. Set the Transform Position of the instance to (0, 0, 0) in its Inspector panel.
  6. Add any content that you want to augment the object to the scene at the top level of the scene Hierarchy.
  7. Use the bounding box to position and scale your content in relation to the target.
  8. Now make the content a child of the ObjectTarget instance.
Vuforia Image
  9. Extend the DefaultTrackableEventHandler on the ObjectTarget instance to implement custom event handling, or create your own event handler by implementing the ITrackableEventHandler interface and add that script to the ObjectTarget instance (see the sketch after this list).
  10. Test your scene in Play Mode by pressing the Play button at the top of the Editor.
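
If you write your own handler, a minimal sketch might look like the following. The interface and status values match those used by DefaultTrackableEventHandler; the class name and log messages are placeholders:

using UnityEngine;
using Vuforia;

// Minimal custom event handler for an ObjectTarget instance.
public class MyObjectTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    // Called by Vuforia whenever the tracking status of this trackable changes.
    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED ||
            newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED)
        {
            // The object was found: unlock content, start animations, etc.
            Debug.Log("Object " + mTrackableBehaviour.TrackableName + " found");
        }
        else
        {
            // Tracking was lost.
            Debug.Log("Object " + mTrackableBehaviour.TrackableName + " lost");
        }
    }
}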

Using the Bounding Box to register augmentations

A bounding box matching the grid area of the Object Scanning Target will be presented in the Scene view when Show Bounding Box is selected in the Object Target Behaviour component of an Object Target's Inspector panel. This bounding box can be used to accurately position content that will augment your target object. The bounding box is only rendered in Scene view.
The origin of the coordinate system of the bounding box is in the forward lower left corner of the box, and the coordinate scale is in millimeters, which is the default scene unit.
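
For example, content that is a child of the ObjectTarget can be positioned directly in these units. A minimal sketch; the GameObject name is hypothetical:

// Place a child augmentation 50 mm above the bounding box origin.
// The ObjectTarget's local units are millimeters by default.
GameObject augmentation = GameObject.Find("MyAugmentation");
augmentation.transform.localPosition = new Vector3(0f, 50f, 0f);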

Vuforia Image

Recommended Augmentation Workflow

The Object Target bounding box corresponds to the grid region of the Object Scanning target. You can use the Object Scanning target as a physical reference for registering media in relation to the physical object.

Steps to placing content:

  1. Place the Scene and Game Views side by side in your Editor Window. See Fig. 2.

  2. Select the webcam that you want to use in the ARCamera's Webcam Behaviour.

  3. Place the Object Scanning target in your work area so that it is visible to your webcam. See Fig. 2.

  4. Place the physical object on the grid region at the position where it was originally scanned.

  5. Press the Play button at the top of the Unity Editor. This will activate the Vuforia Play Mode simulation.

  6. Bring the Object Scanning Target into view. You'll see the bounding box augmented on the target when the physical object is detected. Move your object to align the two if they are offset.

  7. Compare the Game and Scene views to iteratively position and verify the registration of your content onto or near the physical object. See Fig. 3-5

    • View the object from multiple perspectives using your webcam. You can use the Scene Gizmo to achieve the same perspective in the Scene View.
    • Use the Scanning Target grid and Bounding Box grid to accurately adjust the position of your virtual objects.
    • Moving the content in Scene View will cause it to move to the same position in Game View.
    • Be aware that your content will move back to its original position when you turn off Play Mode.
  8. When you're happy with the position of your content, be sure to record its position and scale values from the Inspector.
  9. Exit Play Mode and re-enter the position and scale values for your content.

Fig. 1

Vuforia Image

The Object Target Bounding Box in the Scene View

 

Fig. 2

Vuforia Image

Side by Side Scene and Game Views in Play Mode

 

Fig. 3

Vuforia Image

Fig. 4

Vuforia Image

Fig. 5

Vuforia Image

Obtaining the Object Recognition Tracker instance

ObjectTracker mTracker = TrackerManager.Instance.GetTracker<ObjectTracker>();

Starting and stopping the Object Recognition Tracker

  • Start() starts the Object Recognition Tracker
  • Stop() stops the Object Recognition Tracker
  • Reset() resets the Object Recognition Tracker
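
For example, a minimal sketch that pauses and resumes tracking:

// Pause and resume object tracking, e.g. while the app is in the background.
ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();

tracker.Stop();   // stop detection and tracking
// ... later ...
tracker.Start();  // resume detection and tracking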

Customizing Event Handling

Object Targets can have multiple detection and tracking states as defined by the Status of their TrackableResult. This result enables you to develop apps that unlock content when the Object Target is detected and is being tracked.

You can use this information to determine when an Object Target has been recognized, when it is being actively tracked, and when tracking and detection have been lost on an object that was previously being tracked.

The default ObjectTarget GameObject implements the ITrackableEventHandler interface in DefaultTrackableEventHandler.cs to enable and disable the Renderers and Colliders of any objects that are children of the ObjectTarget instance. Extend this code to implement custom event handling.

private void OnTrackingFound()
{
    Renderer[] rendererComponents = GetComponentsInChildren<Renderer>(true);
    Collider[] colliderComponents = GetComponentsInChildren<Collider>(true);

    // Enable rendering:
    foreach (Renderer component in rendererComponents)
    {
        component.enabled = true;
    }

    // Enable colliders:
    foreach (Collider component in colliderComponents)
    {
        component.enabled = true;
    }

    Debug.Log("Trackable " + mTrackableBehaviour.TrackableName + " found");
}

private void OnTrackingLost()
{
    Renderer[] rendererComponents = GetComponentsInChildren<Renderer>(true);
    Collider[] colliderComponents = GetComponentsInChildren<Collider>(true);

    // Disable rendering:
    foreach (Renderer component in rendererComponents)
    {
        component.enabled = false;
    }

    // Disable colliders:
    foreach (Collider component in colliderComponents)
    {
        component.enabled = false;
    }

    Debug.Log("Trackable " + mTrackableBehaviour.TrackableName + " lost");
}

Activating Loaded Datasets at runtime

// This snippet assumes: using System.Collections.Generic; using System.Linq; using Vuforia;

/// <summary>
/// Sample implementation of how to activate datasets through code
/// </summary>
/// <param name="datasetPath"></param>
private void ActivateDatasets(string datasetPath)
{
    ObjectTracker objectTracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
    IEnumerable<DataSet> datasets = objectTracker.GetDataSets();

    IEnumerable<DataSet> activeDataSets = objectTracker.GetActiveDataSets();

    List<DataSet> activeDataSetsToBeRemoved = activeDataSets.ToList();

    //1. Loop through all the active datasets and deactivate them.
    foreach (DataSet ads in activeDataSetsToBeRemoved)
    {
        objectTracker.DeactivateDataSet(ads);
    }

    //Datasets should not be swapped while the ObjectTracker is running.
    //2. So, stop the tracker first.
    objectTracker.Stop();

    //3. Then, look up the new dataset and if one exists, activate it.
    foreach (DataSet ds in datasets)
    {
        if (ds.Path.Contains(datasetPath))
        {
            objectTracker.ActivateDataSet(ds);
        }
    }

    //4. Finally, start the object tracker.
    objectTracker.Start();
}
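
For example, passing a string contained in the dataset's file path swaps the active database. The database name below is hypothetical:

// DataSet.Path is matched via Contains(), so the dataset file name suffices.
ActivateDatasets("MyObjectDatabase");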

 

How To Support Multiple Object Targets in a Unity Scene

Object Recognition supports the detection and tracking of multiple Object Targets in an Object Recognition experience. Up to 2 Object Targets can be tracked simultaneously.

Use the following instructions to configure your Object Recognition scene to support multiple Object Targets in Unity. Set the Maximum Simultaneous Tracked Objects value in the Vuforia Configuration panel (menu: Window > Vuforia Configuration).

For Object Targets, the maximum is 2.

Vuforia Image
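
This value can also be set from code. The following is a sketch only; treat the SetHint call and hint name as assumptions to verify against the API reference for your extension version:

// Request simultaneous tracking of up to 2 Object Targets.
// Assumption: VuforiaUnity.SetHint and HINT_MAX_SIMULTANEOUS_OBJECT_TARGETS
// exist in your Vuforia Unity Extension version; check the API reference.
VuforiaUnity.SetHint(VuforiaUnity.VuforiaHint.HINT_MAX_SIMULTANEOUS_OBJECT_TARGETS, 2);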


If you have enabled Extended Tracking on an Object Target, you must also enable it on any other Object Targets in the scene. Be aware that the use of Extended Tracking is not recommended for objects that may be moved during the Object Recognition experience.

Vuforia Image

How To Support Moving Objects

Object Recognition can track moving objects, provided the motion is slow and smooth. To support movement, you'll need to disable Extended Tracking on the Object Target.

Extended Tracking can also be enabled and disabled programmatically using the startExtendedTracking() and stopExtendedTracking() methods exposed on the Object Trackable. Extended tracking is disabled by default. You won't need to call stopExtendedTracking() unless you had previously started extended tracking for that Trackable.
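
In the Unity C# API, these methods appear in PascalCase on the ObjectTarget trackable. A minimal sketch, assuming the script is attached to the ObjectTarget GameObject:

// Toggle Extended Tracking on an Object Target from code.
ObjectTargetBehaviour otb = GetComponent<ObjectTargetBehaviour>();
ObjectTarget target = otb.ObjectTarget;

target.StartExtendedTracking();  // enable Extended Tracking
// ... later, before the object is allowed to move:
target.StopExtendedTracking();   // disable Extended Tracking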

See: Extended Tracking

How To Use Object Targets with Other Target Types

Object Targets can be used in combination with Image Targets, Multi Targets, and Cylinder Targets by loading and activating multiple target datasets. When you download a Device Database that contains both Object Targets and alternate target types, the database archive will contain two distinct datasets, each with its own XML configuration file. You will need to load and activate both of these datasets to be able to detect and track Object Targets along with the other targets. The procedure for doing this depends on which development tools you are using.
These same steps can also be used to load and activate two separately downloaded Device Databases.
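
A minimal sketch of loading and activating two datasets from code; the database names are hypothetical, and the tracker is stopped during the swap as in the earlier example:

// Load and activate two Device Databases from StreamingAssets/QCAR.
ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
tracker.Stop();

foreach (string name in new[] { "ToysDB", "ImagesDB" })  // hypothetical names
{
    DataSet dataSet = tracker.CreateDataSet();
    if (dataSet.Load(name))
    {
        tracker.ActivateDataSet(dataSet);
    }
}

tracker.Start();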

Considerations

  • A maximum of 20 Object Targets can be included in a single database
  • Object Targets support both persisted and non-persisted modes of Extended Tracking. Don't use Extended Tracking if you intend for the user to move the target.

How To Use an Occlusion Model

An occlusion model enables you to mask the surfaces of a physical object so that virtual objects are not rendered over them when they are behind the object in your scene. This effect relies on depth masking and provides much more realistic rendering for scenes that combine physical and virtual objects.

Depth masking using an occlusion model is accomplished by developing a 3D model of the regions of the object that you want to mask, and applying a depth mask shader to that model. The depth mask will prevent any objects behind the masked region from being rendered. An example of a depth mask shader is provided with the Vuforia Unity Extension (Assets/Vuforia/Shaders or Assets/Vuforia/Materials).

An occlusion model can match the exact geometry of the physical object that it corresponds to, or it can be a simplified representation of the object, such as its bounding box. The latter technique is an easy way to occlude physical objects with simple geometries, like boxes, balls, and cylinders. This approach is similar to the use of Collider objects for physics simulation in Unity.

A volumetric occlusion model:

Vuforia Image

When using a geometrically accurate occlusion model, it's recommended that you use only the exterior surfaces of the model, at the lowest polygon count needed to provide satisfactory masking of your physical object. You can use a 3D editor to simplify your model by reducing the polygon count and removing any duplicated or hidden (e.g., internal) geometries.

Tip: If you use an occlusion model and are also augmenting the surface of the physical object with content, be sure to test the rendering results on the device platforms that you intend to deploy to. When 3D models are closely aligned, they can become confused in the renderer's depth buffer. This is a problem known as Z-fighting (the Z refers to the Z axis) and can result in noisy rendering artifacts. To avoid Z-fighting, use the MODE_OPTIMIZE_QUALITY setting on the ARCamera and offset your models so that they do not overlap.

A geometrically accurate occlusion model:

Vuforia Image

How To Configure an Occlusion Model for a Physical Object

See: "How To Align Digital Content on an Object" above to learn how to position virtual content on a physical object using Object Recognition in Unity.

1. Import your occlusion model to your Unity project, or create a simple bounding model by selecting a primitive mesh geometry from Game Object > Create Other in the Editor menu.

2. Drag the model onto your Object Target instance in the Hierarchy so that it is a child of the Object Target instance.

Vuforia Image

3. Follow the steps in the "Recommended Augmentation Workflow" section to accurately position your model in relation to its corresponding physical object.

Vuforia Image

4. Add a Depth Mask Shader to your model, as sketched in code below.

Vuforia Image
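
Assigning the shader can also be done from code. A sketch, assuming the script is attached to the occlusion model; the shader name is hypothetical and should be checked against the asset shipped in Assets/Vuforia/Shaders:

// Apply a depth mask shader to this occlusion model at runtime.
// "DepthMask" is a hypothetical shader name; verify it against the
// shader asset shipped in Assets/Vuforia/Shaders.
GetComponent<Renderer>().material.shader = Shader.Find("DepthMask");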

5. Add another model to your scene to verify that your occlusion model is masking the physical object accurately. Position it near the occlusion model.

6. Start Vuforia Play Mode to evaluate the occlusion effect in the scene.

Vuforia Image

7. Verify the accuracy of your occlusion model registration from multiple positions in the scene.

Vuforia Image
