Model Target Guide View

To initialize detection and tracking of a Model Target, Vuforia Engine gives developers the option to show Guide Views that help the user start tracking from a specified angle and distance during the AR experience. Tracking is initiated once the user matches the physical object with the pose indicated on the screen.

In Vuforia Engine, the combination of this image and the angle/distance relative to the object that it represents is called a Guide View. Guide Views are generated together with the Model Target database in the Model Target Generator, a desktop tool provided by Vuforia. There are two types of Model Target databases: Standard Model Targets and Advanced Model Targets. For each database generated, one or more Guide Views can be created to aid detection and tracking of the object. The Guide View is used differently depending on the type of Model Target database.

  • Standard Model Target databases can have one or more Guide Views, which can be switched between manually. This is useful if you are tracking a large object where only parts of the object are to be tracked, or if the user is meant to approach the object from a certain position. Alternatively, multiple Guide Views can be used to carry out a series of tasks chronologically.
  • Advanced Model Target databases can contain multiple Model Targets that have been trained using a deep learning process. Each target can contain one or more Advanced Guide Views that support recognition from up to a 360° range around the Guide View position. Because detection can happen anywhere within that angular range, the Vuforia SDK automatically detects the object in view and does not display the Guide View on the screen as it does for untrained databases. The Guide View is still used to define the detection range, and it is necessary for training the Advanced Model Target database.

For an indication of how Guide Views work within a typical Model Target application, see Introduction to Model Targets in Unity and the Model Target Test App User Guide.

Choosing a Good Guide View for a Model Target

For stable detection and tracking of your object, choose a Guide View position that gives a diagonal view (i.e. an angle from which two or three sides of the object can be seen at the same time, as shown in the left image below) and that includes as much of the object as possible. Avoid a position that presents a fronto-parallel view of the object (i.e., do not choose a view that is "square on" to one side of the object, as in the right image below). In addition, try to avoid a Guide View position from which the object appears to have many parallel lines/edges.

Favorable diagonal Guide View alignment

Undesired Guide View with less object visibility and parallel lines.

NOTE: Use the navigation buttons on the left side of the Model Target Generator window to navigate around the object when you are choosing your Guide Views.

If your object is very large, it may be difficult to find a Guide View that is near enough to the object and still shows the entirety of the object. If this is the case, try to choose a Guide View that captures the object from an angle with the most unique features. If your object has areas with large flat surfaces, try to avoid these areas, and instead find an angle of view where unique shapes in the object are more apparent.

To aid the process of selecting a good Guide View, a dashed-line frame is displayed in "model view" mode. To set up the optimal Guide View for your use case, you can switch between different frames while positioning the view. Landscape and Portrait modes for handheld devices, as well as a dedicated HoloLens mode, can be selected to take the device's field of view into account. The generated Model Target databases are device agnostic, but this feature lets you tune the Guide View for the optimal result.

Multiple Guide Views

It is possible to define multiple Guide Views for a single Model Target and then switch between these different Guide Views at runtime. For example, you might have a service manual app with repair guides for various parts of a machine, and you want the user to be able to select which part they want to look at from an in-app menu. See the Model Targets API Overview for details on how to add such functionality to your app.
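
The exact calls depend on which Vuforia API your app uses; as a rough illustration, the following TypeScript-style sketch assumes a hypothetical ModelTarget wrapper with setActiveGuideViewIndex() and getNumGuideViews() functions (these are not the actual Vuforia API names) and maps in-app menu entries to Guide View indices:

    // Sketch only: `ModelTarget` is a hypothetical wrapper around the engine's
    // Model Target object; the real calls live in the Vuforia Unity or native API.
    interface ModelTarget {
      getNumGuideViews(): number;                       // assumed helper
      setActiveGuideViewIndex(index: number): boolean;  // assumed helper
    }

    // Map in-app menu entries to Guide View indices, in the order the
    // Guide Views were authored in the Model Target Generator.
    const guideViewForMenuItem: Record<string, number> = {
      engineRepairGuide: 0,
      trunkRepairGuide: 1,
    };

    function onMenuItemSelected(target: ModelTarget, menuItem: string): void {
      const index: number | undefined = guideViewForMenuItem[menuItem];
      if (index === undefined || index >= target.getNumGuideViews()) {
        console.warn(`No Guide View mapped for menu item "${menuItem}"`);
        return;
      }
      // Switching the active Guide View changes which alignment overlay the
      // user must match before tracking starts for that part of the machine.
      target.setActiveGuideViewIndex(index);
    }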

Multiple Guide Views can be added manually by positioning the camera view around the object and adding individual Guide Views one by one with the Create Guide View button. A Guide View created this way has a fixed Guide View position and no recognition ranges. You can adjust the Guide View position by clicking the icon located at the top right of the Guide View's preview image.

NOTE: Untrained Guide Views and Advanced Guide Views can be created together in a single Model Target database, but such a database cannot be trained as an Advanced Model Target database, and its Advanced Guide Views will only work as untrained Guide Views!

Advanced Guide Views

Advanced Model Target Databases

With the Model Target Generator (MTG), you can train Advanced Model Target databases, which allow your app to automatically recognize multiple objects and initiate tracking from a pre-defined recognition range.

For example, you might have a marketing app for a new car model, and you want to highlight different features of the car when the user points their device at them. Or you might have a very large object, and you want the AR experience to differ depending on whether the user approaches the object from the front or from the back. This can be achieved by training an Advanced Model Target database containing Model Targets that represent different parts of the large object. Each Model Target should then be generated with one or more Advanced Guide Views. An Advanced Guide View can be configured with a recognition range of up to 360° around the Guide View position.

The Advanced Guide View is not rendered at runtime, but it is needed by the MTG at authoring time when you generate the Model Target. If you wish to display a graphic that helps users identify trackable objects, we recommend using symbolic Guide Views instead.

See also Advanced Model Target Databases for more information on Advanced Model Targets.

UX Considerations

Advanced Model Targets do not display Guide Views. Therefore, in cases where users are unaware of which objects can be recognized, or are unsure of how Model Targets work and have no Guide View to aid them, we recommend displaying a viewfinder UI on the screen to help users position themselves so that tracking can begin.
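
As a rough sketch of this pattern, the TypeScript snippet below keeps a viewfinder overlay on screen until tracking starts; the observer callback and the UI interface are hypothetical placeholders, not Vuforia API names:

    // Sketch only: `onTargetStatusChanged` and the UI type are hypothetical
    // placeholders standing in for the engine's target-status events.
    type TargetStatus = "NO_POSE" | "LIMITED" | "TRACKED";

    interface ModelTargetObserver {
      onTargetStatusChanged(handler: (status: TargetStatus) => void): void;
    }

    interface ViewfinderUI {
      show(hint: string): void;  // e.g. a scanning reticle plus a short instruction
      hide(): void;
    }

    // Keep the viewfinder (and optionally a symbolic Guide View icon) visible
    // until the Advanced Model Target is actually being tracked.
    function bindViewfinder(observer: ModelTargetObserver, ui: ViewfinderUI): void {
      ui.show("Point your device at the machine to begin");
      observer.onTargetStatusChanged((status) => {
        if (status === "TRACKED") {
          ui.hide();  // tracking has started: get the overlay out of the way
        } else {
          ui.show("Point your device at the machine to begin");
        }
      });
    }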

A symbolic Guide View may also be employed for Advanced Model Targets. This can be an icon or a simplified visual that informs users about the shape of the object.

Have a look at the Vuforia Model Targets Unity sample app, which demonstrates the best-practice UX approach we recommend using in combination with Advanced Model Target databases. This is shown in the images below.

Standard Model Target Sample

Advanced Model Target Sample with UI elements to aid in locating specific targets.

Configuring Advanced Guide Views

For recognition to work as expected with Advanced Model Target databases, you need to set up the Target Recognition Range and Target Extent for each Guide View. The Target Recognition Range represents the range of positions and relative angles from which a target can be recognized.

NOTE: Once the model has been trained, the object will be recognized from all camera positions that are covered by those recognition ranges.

In the Model Target Generator, select “Create Advanced View” in the “Guide Views” tab of your Model Target.

Note: An Advanced Guide View's recognition range and target extent can be edited at a later time by clicking the icon located at the top right of the Guide View's preview image. This can also be used to add recognition ranges to Guide Views that were originally set up without them.

In the next step, you can set a custom recognition range and edit the target extent.

An Advanced View can be pre-set to a recognition range of 90°, 180°, or 360°, but it can also be adjusted along all three axes to better fit your use case.

For example, if you expect the user to approach your object from the front or from the back, but never from the left or right, you would create two Advanced Views and set the Target Recognition Range for the first Advanced View to cover approaches from the front side of the object, and for the second View to cover approaches from the back side of the object.

Viewing Angles and Orientation

To configure the angles and orientations from which a model can be recognized, the Model Target Generator lets you control the range of viewing angles for a given Advanced Guide View by adjusting the azimuth (green) and elevation (red) angle ranges relative to the Guide View's position. The roll (blue) can be set to Upright or Arbitrary, with Upright being the default. Use the two roll options together with the available motion hints for the best tracking experience.

  • Use Upright when the object is certain to remain in a constant upright position (e.g. a machine bolted onto the floor or a car on the street).
  • Use Arbitrary when the object's orientation with respect to gravity is likely to change (e.g. a toy or tool, or any product that could be picked up and placed differently).

TIP: The roll choice is visible in blue in the camera-rotation angles shown in the recognition-range editor.

To illustrate, the model below will be recognized only when the user views the object from a position that lies inside the colored areas, with the default range being (-45°, +45°) for azimuth (the green section in the image below) and the Upright roll setting. Approaching from an angle that is not highlighted in the 3D view, e.g. from the back side, will not initiate tracking of the object.

Model Target Generator Recognition Range UI showing -45 to 45 azimuth range

In contrast, with the maximum azimuth range (-180°, +180°), this Guide View will always activate when the object is in view, regardless of which side you are viewing the object from:

Model Target Generator Recognition Range UI showing -180 to 180 azimuth range

NOTE: In both of these cases, the Advanced Guide View position (the small gray cone) stays in the same place. It represents the overall position and distance from which the user needs to hold their device in order to start tracking the object.
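
Conceptually, a recognition range is an angular window around the Guide View position. The TypeScript sketch below is purely illustrative (the angle conventions and the elevation window are assumptions, not values taken from the Vuforia API): it checks whether a camera's azimuth and elevation relative to the Guide View fall inside such a window.

    // Illustration only: a recognition range modelled as azimuth/elevation
    // windows around the Guide View position. Angle conventions are assumed
    // for this sketch and are not taken from the Vuforia Engine API.
    interface RecognitionRange {
      azimuthMinDeg: number;    // e.g. -45
      azimuthMaxDeg: number;    // e.g. +45
      elevationMinDeg: number;
      elevationMaxDeg: number;
    }

    // Wrap an angle into (-180, 180] so that e.g. 350 reads as -10.
    function wrapDeg(angle: number): number {
      const a = (((angle + 180) % 360) + 360) % 360 - 180;
      return a === -180 ? 180 : a;
    }

    function isInsideRange(
      cameraAzimuthDeg: number,    // azimuth of the camera, 0 = Guide View direction
      cameraElevationDeg: number,  // elevation of the camera relative to the Guide View
      range: RecognitionRange
    ): boolean {
      const az = wrapDeg(cameraAzimuthDeg);
      const el = wrapDeg(cameraElevationDeg);
      return (
        az >= range.azimuthMinDeg && az <= range.azimuthMaxDeg &&
        el >= range.elevationMinDeg && el <= range.elevationMaxDeg
      );
    }

    // Azimuth matches the default (-45, +45) range from the example above;
    // the elevation window here is just an assumption for the sketch.
    const frontRange: RecognitionRange = {
      azimuthMinDeg: -45, azimuthMaxDeg: 45,
      elevationMinDeg: -45, elevationMaxDeg: 45,
    };
    console.log(isInsideRange(30, 10, frontRange));   // true: inside the green section
    console.log(isInsideRange(170, 10, frontRange));  // false: approaching from the back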

Create more than one Advanced Guide View for your object in the following cases:

  • If it does not make sense to approach your object from any angle, but only from specific positions – e.g. for a large machine that is only serviced from two sides.
  • If your experience is focused on specific components of your model.

Do not overlap ranges

If you have multiple Advanced Guide Views for your object, it is important that the detection ranges and the Guide View positions do not overlap. The following screenshots from the Model Target Generator show non-overlapping Guide Views:

Model Target Generator Recognition Range UI composite showing four guide views with 90° ranges that do not overlap.

In the following example, the azimuth ranges overlap, which means that training those two Guide Views is largely redundant:

Model Target Generator Recognition Range UI composite showing two guide views, the first with a 90° range and the second with a 180° range that overlaps the first.

In such a case, it is better to combine the two Guide Views into one with a larger viewing range.

Avoiding overlapping Advanced Guide Views will also reduce the risk of failed training when training Advanced Model Target databases.
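
Before training, it can be useful to sanity-check your Guide Views for overlapping ranges. The small TypeScript helper below is purely illustrative and treats each Guide View's azimuth range as a simple interval in a shared reference frame (the data layout is an assumption, not part of the MTG or the SDK):

    // Illustration only: each Guide View's azimuth window in degrees, expressed
    // in a common reference frame around the object.
    interface AzimuthWindow {
      name: string;
      minDeg: number;  // e.g. front view: -45
      maxDeg: number;  // e.g. front view: +45
    }

    // Two windows overlap if each one starts before the other ends
    // (wrap-around at +/-180 degrees is ignored to keep the sketch short).
    function overlaps(a: AzimuthWindow, b: AzimuthWindow): boolean {
      return a.minDeg < b.maxDeg && b.minDeg < a.maxDeg;
    }

    function findOverlappingPairs(views: AzimuthWindow[]): [string, string][] {
      const pairs: [string, string][] = [];
      for (let i = 0; i < views.length; i++) {
        for (let j = i + 1; j < views.length; j++) {
          if (overlaps(views[i], views[j])) {
            pairs.push([views[i].name, views[j].name]);
          }
        }
      }
      return pairs;
    }

    // Mirrors the redundant example above: a 90 degree window inside a 180 degree one.
    console.log(findOverlappingPairs([
      { name: "front", minDeg: -45, maxDeg: 45 },
      { name: "front-wide", minDeg: -90, maxDeg: 90 },
    ]));  // [["front", "front-wide"]] -> better combined into one wider Guide View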

Distance

The distance from which the object can be recognized is defined by the target extent. For example, making the target extent box smaller requires users to move closer to the object before it is detected; conversely, having the target extent cover the whole model allows detection and tracking to start from a greater distance. However, scaling the target extent to an extreme, i.e. making it much larger than the object itself, significantly reduces how well the model can be recognized.
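
The effect of the target extent's size on working distance can be estimated with basic camera geometry. The TypeScript snippet below is only a back-of-the-envelope illustration; it is not how Vuforia Engine decides when to detect, and the field-of-view and coverage values are assumptions:

    // Back-of-the-envelope geometry only: an object of size `extentSizeMeters`
    // viewed from distance d subtends roughly 2 * atan(extentSizeMeters / (2 * d)),
    // so a smaller target extent implies a shorter working distance.
    function distanceForCoverage(
      extentSizeMeters: number,  // largest dimension of the target extent box
      horizontalFovDeg: number,  // camera field of view, e.g. ~60 degrees on many phones
      desiredCoverage: number    // fraction of the FOV the extent should fill, e.g. 0.5
    ): number {
      const coveredFovRad = (horizontalFovDeg * Math.PI / 180) * desiredCoverage;
      return extentSizeMeters / (2 * Math.tan(coveredFovRad / 2));
    }

    // Shrinking the extent from a whole car (~4.5 m) to just its trunk area (~1.5 m)
    // roughly triples how close the user must stand for the same on-screen coverage.
    console.log(distanceForCoverage(4.5, 60, 0.5).toFixed(1));  // ~8.4
    console.log(distanceForCoverage(1.5, 60, 0.5).toFixed(1));  // ~2.8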

NOTE: Editing the Guide View position of an already specified Advanced Guide View will reset its recognition range setup. 

Target Extent

The Target Extent bounding box defines which parts of the model will actually be used in recognizing the object and/or discerning between Guide Views. Each Guide View has its own Target Extent bounding box, which you can modify in the Target Recognition Range window. Use the editing tools on the left side to adjust the target extent.

By default, the Target Extent bounding box covers the entire model. However, you may want to restrict the area of the object from which the target can be recognized. For example, suppose your object is a car and you are making a marketing app demonstrating individual features of the car. In this case it might make sense to define one Guide View looking at the trunk of the car, with its Target Extent bounding box restricted to just the rear section, and another Guide View looking at the engine compartment, with its Target Extent bounding box restricted to just the front section.

Advanced Guide Views work best if they are set up to include at least part of the silhouette of the object. If they are set up so that the object fully covers the camera view when a user steps close to it, the object will likely not be recognized well from that position.
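
One way to reason about this is to project the corners of the target extent into the camera image and check whether their bounds spill past every image border, meaning the object would fill the frame and neither silhouette nor background would be visible. The TypeScript sketch below uses a simplified pinhole-camera model and is not part of the Vuforia SDK:

    // Simplified pinhole-camera sketch (approximate: it uses the axis-aligned
    // bounds of the projected corners), not part of the Vuforia SDK.
    interface Point3 { x: number; y: number; z: number }  // camera coordinates, z forward

    interface Intrinsics {
      fx: number; fy: number;  // focal lengths in pixels
      cx: number; cy: number;  // principal point in pixels
      width: number; height: number;
    }

    function objectFillsFrame(extentCorners: Point3[], cam: Intrinsics): boolean {
      let minU = Infinity, maxU = -Infinity, minV = Infinity, maxV = -Infinity;
      for (const p of extentCorners) {
        if (p.z <= 0) return true;  // a corner behind the camera: treat as filling the view
        const u = cam.fx * (p.x / p.z) + cam.cx;
        const v = cam.fy * (p.y / p.z) + cam.cy;
        minU = Math.min(minU, u); maxU = Math.max(maxU, u);
        minV = Math.min(minV, v); maxV = Math.max(maxV, v);
      }
      // If the projected bounds extend past all four image borders, neither the
      // silhouette nor any background is visible from this camera position.
      return minU <= 0 && maxU >= cam.width && minV <= 0 && maxV >= cam.height;
    }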

As a rule of thumb, the following close-up view scenarios are known to work well for Advanced Model Targets:

  • At least a part of the silhouette is visible in the view
  • At least a part of the view is covered with background, not the object itself

As an example, the following image shows how a target extent may be defined around the back side of an off-road vehicle, with recognition ranges covering the positions from which a user is expected to point their device at this part of the model. Note that the extent is chosen to include the silhouette of the model in that area.

Target extent restricted to the rear of the off-road vehicle, with recognition ranges covering the expected viewing positions.
Example camera view of the rear of the vehicle, showing part of the silhouette and some background.

After the model is trained, it will be recognized reliably from views such as the one presented above, which again shows the silhouette of the model as well as some background that is not covered by the object.

Note: The closer you get to the physical object, the more crucial it is that the CAD model of the object is accurate with respect to the physical object.

See also Advanced Model Target Databases for more information.

NOTE: If you restrict the Target Extent bounding box, the whole object will still be used for tracking; the Target Extent controls only the region of the object that can be used to recognize the target.

Performance Considerations

A wider detection range may create more ambiguity when your app runs and attempts detection. With that in mind, keep the detection ranges as small as possible while still matching your expected usage scenarios.

For example, an object that cannot be viewed from behind (like an object mounted on a wall) doesn't need a full 360° azimuth angle range; and an object that will only be observed from above, and never from below, only needs an elevation range that covers the top half of the object.

Learn More

Model Targets Overview

Model Target Generator User Guide

Advanced Model Target Databases