Model Target Guide View


To initialize detection and tracking of a Model Target, Vuforia Engine lets developers show Guide Views that assist users during the AR experience in initiating tracking from a specified angle and distance. Tracking starts when the physical object is aligned with the pose indicated on the screen.

In Vuforia Engine, the combination of the image and the angle/distance relative to the object that the image represents is called a Guide View. Guide Views are generated together with the Model Target database in the Model Target Generator desktop tool provided by Vuforia. There are two types of Model Target databases: Standard Model Targets and Advanced Model Targets. For each database generated, one or more Guide Views can be created to help detection and tracking of the object. The Guide View is used differently depending on the type of Model Target database.

  • Standard Model Target databases can have one or multiple Guide Views, which can be switched between manually. This is useful if you are tracking a large object where only parts of the object are to be tracked, or if the user is meant to approach the object from a certain position. Alternatively, multiple Guide Views can be used to carry out a series of tasks chronologically.
  • Advanced Model Target databases can contain multiple Model Targets that have been trained using a deep learning process. Each target can contain one or more Advanced Guide Views that support recognition from up to a 360° range around the Guide View position. Because detection can happen within this angular range, Vuforia Engine automatically detects the object in view and does not display the Guide View on the screen as it does with untrained databases. The Guide View is still used to define the detection range, and it is necessary for training the Advanced Model Target database.

For an indication of how Guide Views work within a typical Model Target application, see Introduction to Model Targets in Unity and the Model Target Test App User Guide.

Choosing a Good Guide View for a Model Target

For stable detection and tracking of your object, choose a Guide View position that gives a diagonal view (i.e., an angle from which two or three sides of the object can be seen at the same time, as shown in the left image below) and that includes as much of the object as possible. Avoid a position that presents a fronto-parallel view of the object (i.e., do not choose a view that is "square on" to one side of the object: right image below). In addition, try to avoid a Guide View position that makes the object appear to have many parallel lines and edges.


Favorable diagonal Guide View alignment


Undesired Guide View with less object visibility and parallel lines.

NOTE: Use the navigation buttons on the left side of the Model Target Generator window to navigate around the object when you are choosing your Guide Views.

If your object is very large, it may be difficult to find a Guide View that is near enough to the object and still shows the entirety of the object. If this is the case, try to choose a Guide View that captures the object from an angle with the most unique features. If your object has areas with large flat surfaces, try to avoid these areas, and instead find an angle of view where unique shapes in the object are more apparent.

To aid the process of selecting a good Guide View, a dashed-line frame is displayed in "model view" mode. To set up the optimal Guide View for your use case, you can switch between different frames while positioning the view. Landscape and portrait modes for handheld devices, as well as a dedicated HoloLens mode, can be selected to account for the correct field of view of the device. The created Model Target databases are device agnostic, but the result can be tuned for a specific device using this feature.

Multiple Guide Views

It is possible to define multiple Guide Views for a single Model Target and then switch between these different Guide Views at runtime. For example, you might have a service manual app with repair guides for various parts of a machine, and you want the user to be able to select which part they want to look at from an in-app menu. See the Model Targets API Overview for details on how to add such functionality to your app.
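
As an illustration, the following Unity C# sketch cycles through the Guide Views of a Standard Model Target, for example when called from a "Next step" UI button. It assumes the Vuforia Unity API exposes GetNumGuideViews(), GetActiveGuideViewIndex(), and SetActiveGuideViewIndex() on ModelTargetBehaviour; exact names and signatures can differ between Engine versions, so treat it as a sketch rather than a drop-in implementation.

    using UnityEngine;
    using Vuforia;

    // Minimal sketch: cycle through the Guide Views of a Standard Model Target.
    // Assumes GetNumGuideViews(), GetActiveGuideViewIndex() and
    // SetActiveGuideViewIndex() exist on ModelTargetBehaviour in your Engine version.
    public class GuideViewSwitcher : MonoBehaviour
    {
        [SerializeField] ModelTargetBehaviour modelTarget;  // assign in the Inspector

        // Hook this up to a UI button, e.g. "Next repair step".
        public void ShowNextGuideView()
        {
            int count = modelTarget.GetNumGuideViews();
            if (count <= 1)
                return;

            int next = (modelTarget.GetActiveGuideViewIndex() + 1) % count;
            modelTarget.SetActiveGuideViewIndex(next);
        }
    }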

Multiple Guide Views can be added manually by positioning the camera view around the object and adding individual Guide Views one by one by pressing the Create Guide View button. A Guide View created this way has a fixed Guide View position and empty recognition ranges. You can adjust the Guide View position by clicking the icon located at the top right of the Guide View's preview image.

NOTE: Untrained Guide Views and Advanced Guide Views can be created together in a single Model Target database, but such a database cannot be trained as an Advanced Model Target database, and the Advanced Guide Views will only work as untrained Guide Views!

Advanced Guide Views

Advanced Model Target Databases

With the Model Target Generator (MTG), you can train Advanced Model Target databases, which allow your app to automatically recognize multiple objects and initiate tracking automatically from a pre-defined recognition range.

For example, you might have a marketing app for a new car model, and you want to highlight different features of the car when the user points their device at them. Or, you might have a very large object, and you want the AR experience to differ depending on whether the user approaches the object from the front or from the back. This can be achieved by training an Advanced Model Target database containing Model Targets that represent different parts of the large object. Each Model Target should then be generated with one or more Advanced Guide Views. An Advanced Guide View can be configured to recognize and detect the object within a range of up to 360° around the Guide View position.
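
For illustration, the sketch below shows one way such branching could look in a Unity app: a handler reacts when a particular Model Target from the Advanced database is found and enables content specific to that part. It assumes the ObserverBehaviour.OnTargetStatusChanged event and TargetStatus type from the Vuforia Unity API; the target and content objects are hypothetical placeholders for your own scene setup.

    using UnityEngine;
    using Vuforia;

    // Sketch: enable different AR content depending on which Model Target of an
    // Advanced database was recognized. Assumes ObserverBehaviour.OnTargetStatusChanged
    // from the Vuforia Unity API; the fields below are hypothetical scene references.
    public class PartSpecificContent : MonoBehaviour
    {
        [SerializeField] ModelTargetBehaviour frontTarget;   // Model Target for the front of the object
        [SerializeField] ModelTargetBehaviour rearTarget;    // Model Target for the back of the object
        [SerializeField] GameObject frontContent;
        [SerializeField] GameObject rearContent;

        void OnEnable()
        {
            frontTarget.OnTargetStatusChanged += HandleStatusChanged;
            rearTarget.OnTargetStatusChanged += HandleStatusChanged;
        }

        void OnDisable()
        {
            frontTarget.OnTargetStatusChanged -= HandleStatusChanged;
            rearTarget.OnTargetStatusChanged -= HandleStatusChanged;
        }

        void HandleStatusChanged(ObserverBehaviour observer, TargetStatus status)
        {
            bool tracked = status.Status == Status.TRACKED ||
                           status.Status == Status.EXTENDED_TRACKED;
            if (observer == frontTarget) frontContent.SetActive(tracked);
            if (observer == rearTarget) rearContent.SetActive(tracked);
        }
    }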

The Advanced Guide View is not rendered at runtime; it is only needed by the MTG at authoring time when you generate the Model Target. If you wish to display a graphic at runtime that helps users identify trackable objects, we recommend using symbolic Guide Views instead.

See also Advanced Model Target Databases for more information on Advanced Model Targets.

UX Considerations

Advanced Model Targets do not display Guide Views. Therefore, when users are unaware of which objects can be recognized, or are unsure how Model Targets work and have no Guide View to aid them, we recommend displaying a viewfinder UI on the screen that encourages users to position themselves so that tracking can begin.

A symbolic Guide View may also be employed for Advanced Model Targets. This can be an icon or a simplified visual that informs users about the shape of the object.
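
As a rough sketch of this pattern in Unity, the script below keeps a viewfinder (or symbolic Guide View image) visible until any of the listed Model Targets reports a tracked status, and shows it again when tracking is lost. The overlay GameObject is a hypothetical UI element, and the event and properties used are assumed from the Vuforia Unity API; verify them against your Engine version.

    using UnityEngine;
    using Vuforia;

    // Sketch: show a viewfinder / symbolic Guide View overlay while no Advanced
    // Model Target is tracked, and hide it once tracking starts. The overlay
    // GameObject is a hypothetical UI element; OnTargetStatusChanged and
    // TargetStatus are assumed from the Vuforia Unity API.
    public class ViewfinderOverlay : MonoBehaviour
    {
        [SerializeField] ModelTargetBehaviour[] modelTargets;
        [SerializeField] GameObject viewfinderUI;   // e.g. an icon or outline of the object

        void OnEnable()
        {
            foreach (var target in modelTargets)
                target.OnTargetStatusChanged += OnStatusChanged;
            viewfinderUI.SetActive(true);
        }

        void OnDisable()
        {
            foreach (var target in modelTargets)
                target.OnTargetStatusChanged -= OnStatusChanged;
        }

        void OnStatusChanged(ObserverBehaviour observer, TargetStatus status)
        {
            // Hide the hint as soon as any target is tracked; show it again otherwise.
            bool anyTracked = false;
            foreach (var target in modelTargets)
                if (target.TargetStatus.Status == Status.TRACKED ||
                    target.TargetStatus.Status == Status.EXTENDED_TRACKED)
                    anyTracked = true;
            viewfinderUI.SetActive(!anyTracked);
        }
    }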

Have a look at the Vuforia Model Targets Unity sample app, which demonstrates the best-practice UX approach that we recommend using in combination with Advanced Model Target databases. This approach is shown in the images below.


Standard Model Target Sample


Advanced Model Target Sample with UI elements to aid in locating specific targets.

Configuring Advanced Guide Views

To train recognition for Advanced Model Targets, you need to set up the Target Recognition Range and Target Extent for each Guide View. The Target Recognition Range represents the range of positions and relative angles from which a target can be recognized. Once the model has been trained, the object will be recognized from all camera positions that are covered by those recognition ranges.

In the Model Target Generator, select Create Advanced View in the Guide Views tab of your Model Target.


NOTE: An Advanced Guide View’s recognition range and Target Extent can be edited at a later time by clicking the icon located at the top right of the Guide View's preview image. This can also be used to add recognition ranges to Guide Views that were originally set up without them.


In the next step, you can set a custom recognition range and edit the target extent.

An Advanced Guide View can be preset to a recognition range of 90°, 180°, or 360°, but the range can also be adjusted along all three axes to better fit your use case.


For example, for a toy that can be recognized from any angle, it makes sense to configure a full 360° recognition range for the azimuth. However, if you know that users will always approach an object from a certain direction (e.g., a large piece of machinery or a museum exhibit), you can reduce the recognition range to cover only those angles. This will improve training and recognition performance.

Viewing Angles and Orientation

To configure the angles and orientations from which a model can be recognized, the Model Target Generator lets you control the range of viewing angles for a given Advanced Guide View by adjusting the azimuth (green) and elevation (red) angle ranges relative to the Guide View’s position. The roll (blue) can be set to Upright or Arbitrary, where Upright is the default. Combine the two roll options with the available motion hints for the best tracking experience.

  • Use Upright when the object is certain to remain in a constant upright position (e.g. a machine bolted onto the floor or a car on the street).
  • Use Arbitrary when the object's orientation with respect to gravity is likely to change (e.g. a toy, a tool, or any product you could pick up and place differently).

TIP: The roll choice is reflected in the camera-rotation angles shown in blue in the recognition-range editor.

To illustrate, the model below will be recognized only when the user views the object from a position that lies inside the colored areas, with the default azimuth range of (-45°, +45°) (the green section in the image below) and the Upright roll setting. Approaching from an angle that is not highlighted in the 3D view, i.e. from the back side, will not start tracking of the object.

Model Target Generator Recognition Range UI showing -45 to 45 azimuth range
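
To make the azimuth range concrete, the small Unity C# sketch below computes the signed azimuth of the camera relative to a Guide View's direction (projected onto the horizontal plane) and checks whether it falls inside a (-45°, +45°) window. This is purely a conceptual illustration of the geometry shown above, not how Vuforia Engine performs detection internally, and all positions passed in are placeholders.

    using UnityEngine;

    // Conceptual illustration only: check whether the camera's horizontal angle
    // around the object falls inside a (-45°, +45°) azimuth window measured from
    // the Guide View direction. This is not Vuforia Engine's detection logic.
    public static class AzimuthCheck
    {
        public static bool IsInsideAzimuthRange(
            Vector3 objectCenter,
            Vector3 guideViewPosition,   // position of the small gray cone
            Vector3 cameraPosition,
            float minDeg = -45f,
            float maxDeg = 45f)
        {
            // Directions from the object toward the guide view and toward the camera,
            // flattened onto the horizontal (gravity-aligned) plane.
            Vector3 toGuideView = Vector3.ProjectOnPlane(guideViewPosition - objectCenter, Vector3.up);
            Vector3 toCamera = Vector3.ProjectOnPlane(cameraPosition - objectCenter, Vector3.up);

            // Signed angle around the up axis, i.e. the azimuth offset from the Guide View.
            float azimuth = Vector3.SignedAngle(toGuideView, toCamera, Vector3.up);
            return azimuth >= minDeg && azimuth <= maxDeg;
        }
    }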

In contrast, with the maximum range of (-180°, +180°) for azimuth, this Guide View will always activate when the object is in view, regardless of which side you are viewing the object from:

Model Target Generator Recognition Range UI showing -180 to 180 azimuth range

NOTE: In both of these cases, the Advanced Guide View position (the small gray cone) stays in the same place. It represents the overall position and distance from which the user needs to hold their device in order to start tracking the object.

As a general rule, only set up more than one Guide View for a single model if the target extents of those Guide Views differ significantly. If two or more Guide Views share the same target extent, they should be combined into a single one.

See the Combining Advanced Guide Views section further below for more details.

Symmetry

In some cases your object may be symmetric, e.g. a cylindrical device that looks the same from both sides, but still has enough features and edges to detect and track.
In such a case, Vuforia Engine cannot reliably detect which side the object is being seen from and the Model Target Generator will display a warning when training.

To work around this, configure the recognition ranges of the Advanced Guide View so that they cover only one of the symmetric parts. The recognized pose will then always be reported to lie in that range, even if the user is looking at another part that looks identical.

If you have multiple Advanced Guide Views for this model (e.g. because there are additional panels on the machine), these Guide Views must not cover any of the symmetric parts.

Distance

The distance from which the object can be recognized is defined by the target extent. For example, making the target extent box smaller requires users to move closer to the object before it is detected. Similarly, having the target extent cover the whole model initiates detection and tracking from a greater distance. However, scaling the target extent to an extreme, i.e. making it much larger than the object itself, would significantly reduce how well the model can be recognized.

NOTE: Editing the Guide View position of an already specified Advanced Guide View will reset its recognition range setup. 

Target Extent

The Target Extent bounding box defines which parts of the model will actually be used in recognizing the object and/or discerning between Guide Views. Each Guide View has its own Target Extent bounding box, which you can modify in the Target Recognition Range window. Use the editing tools on the left side to adjust the target extent.


By default, the Target Extent bounding box covers the entire model. However, you may have a situation where you want to restrict the area of the object from which the target should be recognized. For example, suppose your object is a car, and you are making a marketing app demonstrating individual features of the car. In this case it might make sense to define one Guide View looking at the trunk of the car, with the Target Extent bounding box restricted to just the rear section of the car, and another Guide View looking at the engine compartment, with the Target Extent bounding box restricted to just the front section of the car.

As an example, the following image shows how a target extent may be defined around the back side of an offroad vehicle, where recognition ranges cover the positions where a user is expected to point at this part of the model. 

Target extent and recognition ranges defined around the rear section of an offroad vehicle.

After the model is trained, the object will be properly recognized from close-up views of this part of the model.

NOTE: In previous versions of Vuforia Engine, Advanced Guide Views had to be selected so that they included at least part of the silhouette of the object. With version 9.3, this requirement was lifted: recognition from close-up views that do not show the silhouette of the object is now supported as well.
Re-train your Advanced Model Target with the 9.3 Model Target Generator to benefit from this improvement.

NOTE: The closer you get to the physical object, the more crucial it is that the CAD model of the object is accurate with respect to the physical object.

See also Advanced Model Target Databases for more information.

NOTE: If you restrict the Target Extent bounding box, the whole object will still be used for tracking; the Target Extent controls only the region of the object that can be used to recognize the target.

Combining Advanced Guide Views 

If multiple Advanced Guide Views are configured for a single Model Target, they should never share the same target extent. The Model Target Generator will show a warning if such a case is detected.

The following two screenshots show a case where two Advanced Guide Views have been set up. Both share the same target extent covering the whole model, but each guide view covers a different side.

This will result in unnecessary overhead during training and recognition. 

Model Target Generator Recognition Range UI composite showing four guide views with 90º ranges that do not overlap.

In such a case, it is recommended to combine the Advanced Guide Views sharing the same extent into a single one that covers all recognition ranges, as shown below:

Model Target Generator Recognition Range UI showing -180 to 180 azimuth range

Setting up multiple Advanced Guide Views for a single Model Target is only recommended if the Target Extents between the Views differ.
Examples where this could be the case are:

  • A larger piece of machinery that contains different parts that may be recognized, e.g. a panel and an engine compartment on different sides of the model. In this case, the extents can be reduced to cover these parts only and the recognition ranges can be configured to cover the angles from which a user might approach those parts.
  • A car dashboard that is being recognized from inside the car by a device operated by a user sitting in the driver seat. Recognizable parts may include the gearshift, the air conditioning unit, or the control panels on the side door handles. All these could be different Advanced Guide Views where the extent is limited to those parts and recognition ranges are configured to only cover angles visible from the driver’s seat.

Performance Considerations

A wider detection range may create more ambiguity when running your app and attempting detection. With that in mind, you will probably want to keep the detection ranges as small as possible, to match your expected usage scenarios.

For example, an object that cannot be viewed from behind (like an object mounted on a wall) doesn't need a full 360º azimuth angle range; and an object that will only be observed from above, and never from below, only needs an elevation range that covers the top half of the object.

At the same time, setting up multiple Advanced Guide Views that cover the same target extent will increase dataset size and recognition times. In such cases, combine those Guide Views into a single one that covers all combined angles to improve performance, even if this results in a larger recognition range than setting up multiple smaller ranges.

Learn More

Model Targets Overview

Model Target Generator User Guide

Advanced Model Target Databases