Creating a Driver with a Device Tracker

This guide provides information for OEMs (Original Equipment Manufacturers) that are working to enable Vuforia Engine on custom hardware using the Vuforia Driver API.

This document focuses on hardware which includes a positional device tracker as well as an image (preferably RGB) camera. Integrating a device tracker and providing per-frame camera intrinsics with a custom driver will allow Engine to accurately display AR content in relation to tracked objects. In addition, device poses from a device tracker are necessary for Area Targets to function, and without device poses, Model Target tracking stability is significantly reduced. Please see Creating a Camera Driver if you are looking for the camera-only Driver API.

Prerequisites

To integrate with Vuforia Engine, the device must provide the following data and meet the following requirements:

  1. Provide frames from an image camera in one of the supported formats (YUYV, NV12, NV21) for computer vision processing.
  2. Provide camera calibration (intrinsics) per-frame to Vuforia Engine through the driver. If the camera has large focal length variations (as in a microscope), updating the calibration for every frame is essential. See Camera Calibration for details.
  3. Utilize a 6DOF (6 Degrees of Freedom) device tracker and provide the 6DOF pose of the camera at the time each frame was captured.
  4. Test the device tracker on its own: without Vuforia Engine, the device should be able to augment an object somewhere in the environment and have it stay stationary as the user moves the device around.

Eyewear

As a first step for eyewear partnership, the manufacturer should fill out our partnership form here. This will initiate discussions with PTC Vuforia regarding support for the new headset.

In addition, for eyewear:

  1. To get good registration between augmentations and the real world, we recommend the device have per-user calibration and an eye tracker.
    NOTE: See-through eyewear without a 6DOF tracker will likely struggle to maintain accurate alignment of augmentations.

Getting Started Developing a Driver

See our general documentation on the Driver Framework to get started; it explains how applications can configure Vuforia Engine to load the custom-built driver and use it for Engine CV features.

Integration Overview

A Vuforia Driver is a library (.so or .dll) used by the Vuforia Engine library to access non-standard data sources. The Vuforia Driver and Vuforia Engine libraries must both be included in the App package. The following diagram illustrates this:

Vuforia Engine analyzes the camera frames to detect Vuforia-supported targets and uses position information from the External Positional Device Tracker to place them accurately within the tracker's world coordinate system.

In addition to camera frames and 6DOF position information, it is desirable to support the creation of tracking anchors. Anchors allow Vuforia Engine to inform the 6DOF Positional Device Tracker of important positions to track accurately, so that the tracker can adjust them appropriately whenever its map is updated.

Eyewear integration details

The next diagram illustrates our recommended approach for integrating Vuforia with an eyewear device with a see-through display. The device will already have a 6DOF Positional Device Tracker and a display system optimized for low latency including some form of head pose prediction.

The blue-colored inner loop represents the low-latency tracking and rendering path provided by the device. The green-colored outer loop represents the Vuforia integration. A Vuforia Driver provides poses and images to Vuforia Engine, which then reports object poses in the device's world coordinate system. As the device already renders from its world coordinate system with low latency, augmentations rendered relative to Vuforia-detected objects will exhibit the same visual stability that is provided by the device.

Integration Details

Sending data from your Vuforia Driver

When sending poses and camera frames from your Vuforia Driver to Vuforia Engine, the pose must be delivered first, immediately followed by the camera frame, with both carrying exactly the same timestamp value (in nanoseconds).
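The sketch below illustrates this ordering. The callback interface and type names here are placeholders rather than the actual Vuforia Driver API; the point is the required sequence: pose first, then the frame, both stamped with the identical nanosecond capture time.

#include <cstdint>

// Placeholder stand-ins for the Engine-provided callback interface.
struct Pose6Dof { float rotation[9]; float translation[3]; };
struct CameraFrame { int64_t timestampNs; /* image buffer, format, intrinsics, ... */ };

struct EngineCallbacks
{
    virtual void onNewPose(const Pose6Dof& pose, int64_t timestampNs) = 0;
    virtual void onNewCameraFrame(const CameraFrame& frame) = 0;
    virtual ~EngineCallbacks() = default;
};

// Deliver the pose first, then the frame, both carrying the identical
// nanosecond capture timestamp.
void deliverToEngine(EngineCallbacks& engine, const Pose6Dof& pose,
                     CameraFrame frame, int64_t captureTimeNs)
{
    engine.onNewPose(pose, captureTimeNs); // 1. pose at capture time
    frame.timestampNs = captureTimeNs;     // 2. same timestamp on the frame,
    engine.onNewCameraFrame(frame);        //    delivered immediately after
}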

NOTE: Without device poses, Area Targets in Engine will not work, and Model Target tracking stability will be significantly reduced.

The Vuforia Driver camera API is designed to map simply onto most underlying camera APIs. Vuforia Engine can support frames in a number of image formats; however, we strongly recommend that Driver implementations obtain camera frames in a YUV format to pass to Vuforia Engine. In addition to the image data, the Driver should provide calibrated camera intrinsics with each frame. If the camera supports autofocus or optical image stabilization, the intrinsics must be adjusted with each frame to account for this and maximize alignment accuracy.
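As a minimal sketch of this, assuming hypothetical field names (the actual Driver API defines its own intrinsics structure), the calibration values travel with every frame, and a driver for an autofocus camera rescales the focal length for the current lens position before each delivery:

// Hypothetical intrinsics structure for illustration only.
struct CameraIntrinsics
{
    float focalLengthX, focalLengthY;       // in pixels
    float principalPointX, principalPointY; // in pixels
    float distortion[8];                    // lens distortion coefficients
};

// Rescale the factory-calibrated focal length for the current lens
// position, e.g. after an autofocus move. The scale factor would come
// from the camera stack; here it is simply a parameter.
CameraIntrinsics adjustForFocus(CameraIntrinsics calibrated, float focalScale)
{
    calibrated.focalLengthX *= focalScale;
    calibrated.focalLengthY *= focalScale;
    return calibrated;
}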

The 6DOF device tracker must provide the current location of the device relative to a fixed location in the (real) world. We assume that the 6DOF device tracker provides a pose transforming 3D points from the device tracker world coordinate system to the coordinate system of the camera capturing the frames. The pose provided with each camera frame represents the position at the time that the camera frame was captured.

Therefore, make sure that the poses you provide map points from world to camera coordinates and not to a different device coordinate system. For example, some device trackers by default provide poses from world to IMU device coordinates. In this case, you should apply an IMU-to-camera transform to every pose before passing it to the Driver API, as sketched below.
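A minimal sketch of this correction, assuming row-major 4x4 rigid transforms and a fixed IMU-to-camera extrinsic obtained from calibration:

#include <array>

using Mat4 = std::array<float, 16>; // row-major 4x4 rigid transform

// result = a * b
Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
    return r;
}

// The tracker reports world-to-IMU poses; chaining the fixed IMU-to-camera
// extrinsic yields the world-to-camera pose the Driver API expects:
// T_c_w = T_c_imu * T_imu_w
Mat4 worldToCamera(const Mat4& imuToCamera, const Mat4& worldToImu)
{
    return multiply(imuToCamera, worldToImu);
}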

When calculating the pose to deliver from the Vuforia Driver you should ensure the following:

  1. The pose represents the position and rotation of the camera in the world.
  2. The pose is composed of a rotation and a translation, with the translation applied after the rotation.
  3. The world and camera coordinate systems are right-handed (RH).

The camera coordinate system is expressed in Computer Vision (CV) convention (x-right, y-down, z-away) as illustrated below.

NOTE: Some Vuforia Engine Observers, like Area Targets, expect that the y-axis is gravity aligned. Area Targets expect the gravity vector to be aligned with the negative y-axis of the device tracker world coordinate system.
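During driver bring-up it can be worth checking this alignment explicitly. A small sketch, where the gravity direction in tracker world coordinates is assumed to come from your tracking stack:

#include <cmath>
#include <cstdio>

// Check that the measured gravity direction (in device tracker world
// coordinates) points along the negative y-axis, within a tolerance.
bool isGravityAligned(float gx, float gy, float gz, float toleranceDeg)
{
    // Cosine of the angle between gravity and (0, -1, 0).
    float cosAngle = -gy / std::sqrt(gx * gx + gy * gy + gz * gz);
    return cosAngle >= std::cos(toleranceDeg * 3.14159265f / 180.0f);
}

int main()
{
    // Example: gravity measured almost straight down the -y axis.
    std::printf("aligned: %d\n", isGravityAligned(0.01f, -0.999f, 0.02f, 2.0f));
    return 0;
}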

Example

We provide an example of the steps to transform the pose from the device tracker to meet the requirements of Vuforia Engine.

The device tracker provides the 6DOF pose of the camera in its coordinate system. The pose will need to be transformed following the steps below. We use the following notation: tracker world-to-camera = Tc_w, and tracker camera-to-world = Tw_c.

If the 6DOF tracker provides poses as Tw_c in the typical coordinate system convention of x-right, y-up, z-towards, you will need to invert it to Tc_w and rotate the camera coordinate system to the CV convention as follows (see the sketch after this list):

  1. Take the inverse of the tracker pose matrix Tw_c to convert to camera coordinate system Tc_w.
  2. Create a rotation matrix r that rotates 180 degrees around the x-axis.
  3. Multiply the matrices in this order: r * Tc_w.
  4. Send the resulting matrix to Vuforia Engine through the Driver API.
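A worked sketch of these steps, assuming row-major 4x4 rigid transforms of the form [R|t]:

#include <array>

using Mat4 = std::array<float, 16>; // row-major 4x4 rigid transform [R|t]

// result = a * b
Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
    return r;
}

// Invert a rigid transform [R|t]: the inverse is [R^T | -R^T t].
Mat4 invertRigid(const Mat4& m)
{
    Mat4 inv{};
    for (int row = 0; row < 3; ++row)
    {
        for (int col = 0; col < 3; ++col)
            inv[row * 4 + col] = m[col * 4 + row];             // R^T
        for (int k = 0; k < 3; ++k)
            inv[row * 4 + 3] -= m[k * 4 + row] * m[k * 4 + 3]; // -R^T t
    }
    inv[15] = 1.0f;
    return inv;
}

// Step 2: a 180-degree rotation around the x-axis flips y and z,
// converting from x-right/y-up/z-towards to the CV convention
// (x-right, y-down, z-away).
constexpr Mat4 kRotX180 = { 1,  0,  0, 0,
                            0, -1,  0, 0,
                            0,  0, -1, 0,
                            0,  0,  0, 1 };

// Steps 1-3: Tc_w = r * inverse(Tw_c); the result is what the Driver
// sends to Vuforia Engine (step 4).
Mat4 driverPose(const Mat4& Tw_c)
{
    Mat4 Tc_w = invertRigid(Tw_c);   // step 1
    return multiply(kRotX180, Tc_w); // steps 2 and 3
}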

Important Note

If the pose capability is declared in the driver but poses are not sent, the camera frames will be retained in memory by Vuforia Engine and the device will run out of memory after a few seconds. If you experience rapid memory growth, check your Driver code and ensure that each camera frame is preceded by a device pose with the same timestamp.

Implementing the Anchor API

Whenever the External Positional Device Tracker provides support for anchors, we recommend that this capability is exposed to Vuforia Engine. The Vuforia Driver Anchor APIs are part of the External Positional Device Tracker API and enable Vuforia Engine to use anchors, provided the isAnchorSupported API is implemented and returns true. When anchors are supported, the Driver should implement the methods to create and remove anchors, and call back into Engine using the Callback API whenever anchors managed by the External Positional Device Tracker are added, removed, or updated. Each anchor comprises a unique Id and the pose of the anchor in the world coordinate system.
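As a rough sketch of this flow, with hypothetical class and callback names (the actual Driver API defines its own interfaces and signatures):

#include <string>

struct Pose6Dof { float rotation[9]; float translation[3]; };

// Placeholder for the Engine-provided anchor callback interface.
struct AnchorCallback
{
    virtual void onAnchorAdded(const std::string& id, const Pose6Dof& pose) = 0;
    virtual void onAnchorUpdated(const std::string& id, const Pose6Dof& pose) = 0;
    virtual void onAnchorRemoved(const std::string& id) = 0;
    virtual ~AnchorCallback() = default;
};

class ExamplePositionalDeviceTracker
{
public:
    explicit ExamplePositionalDeviceTracker(AnchorCallback* cb) : callback_(cb) {}

    bool isAnchorSupported() const { return true; }

    // Create the anchor on the platform tracking stack, then notify Engine.
    std::string createAnchor(const Pose6Dof& worldPose)
    {
        std::string id = platformCreateAnchor(worldPose);
        callback_->onAnchorAdded(id, worldPose);
        return id;
    }

    void removeAnchor(const std::string& id)
    {
        platformRemoveAnchor(id);
        callback_->onAnchorRemoved(id);
    }

    // Invoked by the platform tracker whenever its map is refined and an
    // anchor pose shifts; forward the update to Engine.
    void onPlatformAnchorMoved(const std::string& id, const Pose6Dof& pose)
    {
        callback_->onAnchorUpdated(id, pose);
    }

private:
    // Platform-specific stubs for illustration.
    std::string platformCreateAnchor(const Pose6Dof&) { return "anchor-0"; }
    void platformRemoveAnchor(const std::string&) {}

    AnchorCallback* callback_;
};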

Output from Vuforia Engine

Vuforia Engine reports results in the VuState object. When the Driver provides poses through the External Positional Device Tracker API, object and device positions are reported as transforms from the device tracker coordinate system. When no poses are provided, object and device positions are reported as transforms from the Vuforia world coordinate system.

NOTE: The poses reported by the Vuforia Device Tracker Observer will differ from what is passed as input to the Driver. The output uses the same right-handed spatial model, but with the camera coordinate system being x-right, y-up, and z-towards. The Observer world coordinate system is the same as the driver input. This is illustrated below:

For rendering, we assume that the device renders into the Positional Device Tracker coordinate system to position the scene camera based on the current device position.

Unity Specific Information

When using the Vuforia Engine Unity Package with an external device tracker you need to consider how the Main Camera will be positioned. In a Vuforia application for a phone or tablet the Main Camera is typically positioned by Vuforia. This could be either:

  1. Relative to the detected object, or,
  2. Relative to the origin of the device tracker.

On custom hardware with its own 6DOF Positional Device Tracker, we expect that the Main Camera will be positioned by an OEM-provided mechanism. In this case Vuforia should not attempt to position the Main Camera; the UseThirdPartySeethroughEyewear setting allows you to disable Vuforia's camera updates:

using UnityEngine;
using Vuforia;

public class Preinitializer : MonoBehaviour
{
    void Awake()
    {
        // Let the OEM mechanism position the Main Camera instead of Vuforia.
        VuforiaConfiguration.Instance.DeviceTracker.UseThirdPartySeethroughEyewear = true;
        // Initialize Vuforia manually (delayed initialization).
        VuforiaRuntime.Instance.InitVuforia();
    }
}

When using this configuration, enable delayed initialization. You should also ensure that the Vuforia Device Tracker configuration is set in the VuforiaConfiguration as follows:

In the ARCamera GameObject, set the World Center Mode in the VuforiaBehaviour component to DEVICE:

When integrating with Unity, we recommend that the first step is to get a Unity application using the device’s 6DOF device tracking poses to position a static cube object in space. You should observe that the cube stays in the same place as the device is moved around the environment.

Once this is working, add the VuforiaBehaviour component to the camera rig, place a Vuforia Image Target in the scene, and add a simple augmentation as a child of the Image Target GameObject. You should see the augmentation when the Image Target is detected, and the content should remain locked to the target.
