Model Target Guide View


To initialize tracking of a Model Target, Vuforia Engine requires that the user hold their device at a particular angle relative to the model, and at a particular distance. To assist the user with this process, your app will typically draw an image showing an approximation of the object from this distance and viewing angle, so that the user just needs to move their device until the camera view matches the image. When they do this, tracking can begin.

In Vuforia Engine, both the image and the particular angle/distance relative to the object that image represents are called a Guide View.

For an indication of how Guide Views work within a typical Model Target application, see Introduction to Model Targets in Unity and the Model Target Test App User Guide.

Choosing a good Guide View

For stable detection and tracking of your object, choose a Guide View position where you have a diagonal view (i.e. an angle where two or three sides of the object can be seen at the same time) that includes as much of the object as possible. Try to avoid a position that presents a fronto-parallel view onto the object (i.e., do not choose a view that is "square on" to one side of the object). In addition, try to avoid a Guide View position that makes the object appear to have many parallel lines/edges.

If your object is very large, it may be difficult to find a Guide View that is near enough to the object and still shows the entirety of the object. If this is the case, try to choose a Guide View where the object is easily distinguishable by shape. If your object has areas with large flat surfaces, try to avoid these areas, and instead find an angle of view where unique shapes in the object are more apparent.

To aid the process of selecting a good Guide View, a dashed-line frame is displayed while in "Detect Position" mode. To set up the optimal Guide View for your use case, you can switch between different frames during positioning. The landscape and portrait modes for handheld devices, and a dedicated HoloLens mode, take the correct field of view of each device into account. The created Model Target datasets are device agnostic, but this feature lets you fine-tune the Guide View for a specific device type.

Multiple Guide Views

It is possible to define multiple Guide Views for a single Model Target, and then switch between these different Guide Views at runtime. For example, you might have a service manual app with repair guides for various parts of a machine, and you want the user to be able to select which part they want to look at from an in-app menu. See the Model Targets API Overview for details on how to add such functionality to your app.

Multiple Guide Views can be added manually by positioning the camera around the object and adding individual Guide Views one by one. To set up Guide Views placed at regular positions around the object, you can use the Add Multiple Views button. Selecting a preset creates the corresponding individual Guide Views automatically. Positions, viewing directions, and ranges set up with this method are guaranteed not to overlap and are ready for training.
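As an illustrative sketch (not Vuforia API code), the idea behind a regular-positions preset can be expressed as evenly spaced azimuth positions, each with a non-overlapping azimuth range. The function name and the 10º gap are assumptions chosen for the example, not values taken from the Model Target Generator:

```python
def preset_guide_view_azimuths(count):
    """Evenly space `count` guide views around the object and give each a
    non-overlapping azimuth range, leaving a gap between neighbours.
    (Illustrative only; the 10-degree gap is an assumption.)"""
    step = 360.0 / count
    half_range = (step - 10.0) / 2.0  # leave a 10-degree gap between ranges
    views = []
    for i in range(count):
        center = i * step
        views.append({"azimuth": center,
                      "range": (center - half_range, center + half_range)})
    return views

views = preset_guide_view_azimuths(4)
# Four views at 0, 90, 180 and 270 degrees, each covering +/-40 degrees
```

Because the ranges are derived from the spacing, adding more views automatically narrows each range so that they never overlap.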


In addition, you can create a trained Model Target dataset, which allows your app to automatically switch between the different Guide Views based on the user's position and angle relative to the object. For example, you might have a marketing app for a new car model and you want to highlight different features of the car when the user points their device at them. Or, you might have a very large object, and you want the AR experience to be different depending on whether the user is approaching the object from the front or from the back.

See Trained Model Target Datasets for more information and further guidance.

Target Recognition Range

For automatic recognition to function as expected, you may need to set up the Target Recognition Range for each Guide View. The Target Recognition Range represents the range of positions and relative angles for which a given Guide View is appropriate. The Model Target Generator provides defaults that work as-is for many applications, but you may want to edit them to better fit your use case. For example, if you want to present different content depending on whether the user is approaching your object from the front or from the back, you would create two Guide Views and set the Target Recognition Range of the first Guide View to cover approaches from the front of the object, and that of the second Guide View to cover approaches from the back.

Setting the Target Recognition Range

You can set the Target Recognition Range for a Guide View using the Model Target Generator app.

Open the Target Recognition Range panel by clicking the small 3D cube icon at the top right of a Guide View preview image.


This will open the Target Recognition Range panel on the right side of the window.


Viewing Angles

To configure which Guide View should be activated for a particular viewing angle toward an object, the Model Target Generator app lets you control the range of viewing angles that will activate a given Guide View by adjusting the azimuth (green), elevation (red), and roll (blue) angle ranges relative to the Guide View position.

For example, with the default range (-45º,+45º) for azimuth (the green section in the image below), this Guide View (represented by the small black cone) will be active only when the user is viewing the object from a position that lies inside the coloured areas, i.e. from an angle that is less than 45º away from the Guide View position:

Model Target Generator Recognition Range UI showing -45 to 45 azimuth range

In contrast, with the maximum range (-180º,+180º) for azimuth, this Guide View will always activate when the object is in view, regardless of which side you are viewing the object from:

Model Target Generator Recognition Range UI showing -180 to 180 azimuth range

Note that in both of these cases the Guide View position (the small black cone) stays in the same place. It represents the exact position at which the user needs to hold their device in order to start tracking the object. The Guide View image will be displayed anywhere inside the Target Recognition Range, but the user still needs to move their device to the actual Guide View position before tracking starts.
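The activation logic described above can be sketched as a simple angle test. This is an illustrative model of the behaviour, not Vuforia Engine code; the function names and the azimuth-only simplification (ignoring elevation and roll) are assumptions for the example:

```python
def angle_diff(a, b):
    """Smallest signed difference between two angles in degrees (-180..180)."""
    return (a - b + 180.0) % 360.0 - 180.0

def in_recognition_range(view_azimuth, camera_azimuth, azimuth_range=(-45.0, 45.0)):
    """True if the camera's azimuth relative to the guide view position lies
    inside the configured azimuth range. (Illustrative; a real check would
    also consider elevation, roll, and distance.)"""
    d = angle_diff(camera_azimuth, view_azimuth)
    return azimuth_range[0] <= d <= azimuth_range[1]

in_recognition_range(0.0, 30.0)   # True: 30 degrees away, within +/-45
in_recognition_range(0.0, 120.0)  # False: outside the range
in_recognition_range(0.0, 350.0)  # True: 350 degrees wraps to -10 degrees
```

The wrap-around handling in angle_diff matters: a camera at 350º is only 10º away from a Guide View at 0º, so it should fall inside the default (-45º,+45º) range.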

Depending on your use case, it may make sense to define multiple Guide Views for your object, and set up Target Recognition Ranges so that a different Guide View is active depending on which side the user approaches your object from, or on which component of your object the user is pointing their device at.

Alternatively, it may make sense to have the same Guide View activate on all sides of the object, such as if you want the user to move to a particular side of your object so that the AR experience has a consistent entry point. (If this is the case, make sure that the Guide View image overlay that gets rendered in your app clearly indicates where the user should stand relative to the object in order to start the AR experience!)

Do not overlap ranges

If you have multiple Guide Views for your object, it is important that the detection ranges do not overlap. The following screenshots from the Model Target Generator app show non-overlapping Guide Views:

Model Target Generator Recognition Range UI composite showing four guide views with 90º ranges that do not overlap.

In the following example, the azimuth ranges overlap, which means that Vuforia Engine will be unable to reliably select a consistent Guide View for the overlapping range:

Model Target Generator Recognition Range UI composite showing two guide views, the first with a 90º range and the second with a 180º range that overlaps the first.

To avoid any ambiguities, there should ideally be a gap between each of the recognition ranges. For example, suppose you have two Guide Views for an object, one covering the front side and one covering the back. If the first Guide View covers azimuth angles from -85º to 85º, then the other Guide View should cover, at most, the corresponding range -85º to 85º from the other side of the object, so that there is at least a 10º gap between the edges of each range.
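The overlap-and-gap rule can be made concrete with a small check on circular intervals. This is an illustrative sketch, not a Model Target Generator feature; the function names and the representation of a range as an absolute (start, end) azimuth pair are assumptions for the example:

```python
def in_arc(angle, start, end):
    """True if angle lies on the clockwise arc from start to end (degrees)."""
    return (angle - start) % 360.0 <= (end - start) % 360.0

def ranges_conflict(a, b, min_gap=10.0):
    """True if two azimuth ranges overlap, or are separated by less than
    min_gap degrees. Each range is a (start, end) pair of absolute azimuths."""
    if in_arc(b[0], a[0], a[1]) or in_arc(a[0], b[0], b[1]):
        return True  # the arcs overlap
    gap_ab = (b[0] - a[1]) % 360.0  # clockwise gap from end of a to start of b
    gap_ba = (a[0] - b[1]) % 360.0  # clockwise gap from end of b to start of a
    return min(gap_ab, gap_ba) < min_gap

# Front view covering -85..85 and back view covering 95..265 (degrees):
ranges_conflict((-85.0, 85.0), (95.0, 265.0))  # False: 10-degree gap each side
ranges_conflict((-45.0, 45.0), (0.0, 180.0))   # True: the ranges overlap
```

The first call reproduces the front/back example from the text: each boundary is separated by exactly 10º, so the two Guide Views can be distinguished unambiguously.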

Distance

The width (thickness) of an angle range section/ring indicates the range of distances from which the object can be recognized for this detection position. The default range is between 0.75 and 1.5 times the distance from the object to the Guide View. The range can be set either relative to the distance of the detection position, or using absolute scene-unit values (which can be useful if you know the exact size of the room your AR app will be run in, and therefore the maximum distance the camera will be from the object).
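As a small worked example of the default relative distance range described above (the helper name is an assumption for illustration):

```python
def default_distance_range(guide_view_distance):
    """Default recognition distance range: 0.75x to 1.5x the distance from
    the Guide View position to the object, as described in the text."""
    return (0.75 * guide_view_distance, 1.5 * guide_view_distance)

default_distance_range(2.0)  # (1.5, 3.0) in scene units
```

So a Guide View placed 2 scene units from the object would, by default, be recognizable from roughly 1.5 to 3.0 units away.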

Target Extent

The Target Extent bounding box defines which parts of the model will actually be used in recognizing the object and/or discerning between Guide Views. Each Guide View has its own Target Extent bounding box, which you can modify by clicking on Edit Target Extent in the Target Recognition Range UI. 

By default, the Target Extent bounding box covers the entire model. However, you may want to restrict the area of the object that the user can use to activate a particular Guide View. For example, suppose your object is a car, and you are making a marketing app demonstrating individual features of the car. In this case it might make sense to define one Guide View looking at the trunk of the car with its Target Extent bounding box restricted to just the rear section, and another Guide View looking at the engine compartment with its Target Extent bounding box restricted to just the front section.

Note that if you restrict the Target Extent bounding box, the whole of the object will still be used for tracking - the Target Extent controls just the region of the object that can be used to recognize and activate a particular Guide View.
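Conceptually, the Target Extent test is an axis-aligned bounding-box containment check in model coordinates. The sketch below is illustrative only; the car-section coordinates and function name are hypothetical values invented for the example:

```python
def point_in_extent(point, extent_min, extent_max):
    """True if a 3D point (in model coordinates) lies inside the Target
    Extent bounding box given by its min and max corners."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, extent_min, extent_max))

# Hypothetical car model: a guide view whose extent covers only the rear section
rear_extent_min = (-1.0, 0.0, -2.5)
rear_extent_max = (1.0, 1.5, -1.0)
point_in_extent((0.0, 0.5, -2.0), rear_extent_min, rear_extent_max)  # True
```

A point on the front of the car would fall outside this box, so looking at the front would not activate the rear Guide View, even though the whole model is still used for tracking once activated.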

Performance Considerations

A wider detection range may create more ambiguity when running your app and attempting detection. With that in mind, keep the detection ranges as small as possible, to match your expected usage scenarios.

For example, an object that cannot be viewed from behind (like an object mounted on a wall) doesn't need a full 360º azimuth angle range; and an object that will only be observed from above, and never from below, only needs an elevation range that covers the top half of the object.

Learn More

Model Targets Overview

Model Target Generator User Guide

Model Target Trained Datasets