In Vuforia Engine, a Guide View is the combination of an image and the angle/distance relative to the object that the image represents. Guide Views are generated together with the Model Target database in the Model Target Generator (MTG), a desktop tool provided by Vuforia.
There are two types of Model Target databases: Standard Model Targets and Advanced Model Targets. For each database generated, one or more Guide Views can be created to help with detection and tracking of the object. Guide Views are used differently depending on the type of Model Target database.
- Standard Model Target databases can have one model with one or multiple Guide Views, which can be switched between manually. These Guide Views are displayed to help position the user and camera so that tracking of the object can start. The positions that the Guide Views represent have some tolerance in how exactly users need to match the outline with the object.
- Advanced Model Target databases can contain multiple Model Targets that have been trained using a deep learning process. Each Model Target can contain one or more Advanced Guide Views that support recognition in up to a 360° range around the Guide View position. The Vuforia SDK automatically detects the object in view and does not display the Guide View on the screen as with standard databases, because detection can happen anywhere within an angular range. The Guide View tab in the MTG is still used to define the detection range and is necessary for training the Advanced Model Target database.
Choosing a Good Guide View for a Model Target
Guide Views for standard Model Targets should be created from a position and angle that represent how a user would typically approach the object. The Guide View outline is rendered to help the user align the camera with the object. Guide Views allow for an extended tolerance, so the object will also be recognized if the user approaches from a position that deviates somewhat from the created Guide View position. The tolerances listed below provide some freedom in creating and aligning Guide Views:
- About +/- 45° horizontally with respect to the center of the Guide View.
- About +/- 15° to the vertical top or bottom from the selected position.
- About 15% closer to or further away from the object, depending on the shape of the object.
- About +/- 10° of rotation around the Guide View viewing direction (roll).
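These tolerances can be summarized in a small illustrative check. The following sketch is not part of the Vuforia API; the constants mirror the documented values above and the function name is our own:

```python
# Illustrative sketch only -- not a Vuforia API. The constants mirror the
# documented tolerances listed above.
AZIMUTH_TOL_DEG = 45.0    # about +/- 45 deg horizontally
ELEVATION_TOL_DEG = 15.0  # about +/- 15 deg vertically
ROLL_TOL_DEG = 10.0       # about +/- 10 deg roll
DISTANCE_TOL = 0.15       # about 15% closer or further away

def within_guide_view_tolerance(d_azimuth_deg, d_elevation_deg,
                                d_roll_deg, distance_ratio):
    """distance_ratio = actual camera distance / authored Guide View distance."""
    return (abs(d_azimuth_deg) <= AZIMUTH_TOL_DEG
            and abs(d_elevation_deg) <= ELEVATION_TOL_DEG
            and abs(d_roll_deg) <= ROLL_TOL_DEG
            and abs(distance_ratio - 1.0) <= DISTANCE_TOL)
```

A user 30° off to the side and 10% further away would still be within tolerance, while an approach 50° off to the side would not.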
Guide View position
For stable detection and tracking of your object, choose a Guide View position that gives a diagonal view (i.e., an angle where two or three sides of the object can be seen at the same time, as shown in the left image below) and includes as much of the object as possible. Avoid positions that present a fronto-parallel view of the object (i.e., do not choose a view that is "square on" to one side of the object; right image below). In addition, try to avoid a Guide View position that makes the object appear to have many parallel lines/edges.
Favorable diagonal Guide View alignment
Undesired Guide View with less object visibility and parallel lines.
NOTE: Use the navigation buttons on the left side of the Model Target Generator window to navigate around the object when choosing your Guide Views.
If your object is very large, it may be difficult to find a Guide View that is near enough to the object and still shows the entirety of the object. If this is the case, try to choose a Guide View that captures the object from an angle with the most unique features. If your object has areas with large flat surfaces, try to avoid these areas, and instead find an angle of view where unique shapes in the object are more apparent.
To aid the process of selecting a good Guide View, a dashed-line frame is displayed in "model view" mode. To set up the optimal Guide View for your use case, you can switch between different frames during positioning: Landscape and Portrait modes for handheld devices and a HoloLens mode can be selected to take into account the correct field of view of the device. The created Model Target databases are device agnostic, but this feature lets you tune for the optimum result on a particular device.
Guide View customization
You may also consider using your own 2D image or graphics to render as a Guide View. If you use your own image, for correct placement, we recommend that you scale the image such that the longer side of the image has the same length as the longer side of the camera image. Alternatively, you can use the view intrinsic parameters (provided on the Guide View instance) in combination with the current camera calibration or device screen resolution in order to adjust the scale/aspect ratio of the image.
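As a sketch of the recommended scaling (an illustrative helper, not a Vuforia API), the factor that makes the image's longer side match the camera image's longer side is simply:

```python
def guide_view_image_scale(image_w, image_h, camera_w, camera_h):
    """Scale factor that makes the longer side of a custom 2D Guide View
    image match the longer side of the camera image (illustrative)."""
    return max(camera_w, camera_h) / max(image_w, image_h)

# e.g., an 800x600 overlay on a 1920x1080 camera feed is scaled by 2.4
```

The same factor applies to both axes, so the image's aspect ratio is preserved.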
You can also render a 3D model representing the object as it appears from the Guide View pose. You can use the view extrinsic parameters provided on the Guide View instance in combination with the current camera calibration to do this.
NOTE: If you are working with stereo eyewear such as the HoloLens, we recommend rendering the Guide View as a 3D model.
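To sketch how the Guide View extrinsics and the camera intrinsics combine when rendering a 3D model: this is the standard pinhole projection from computer vision, not a specific Vuforia call. A model point X is first transformed into camera coordinates with the extrinsic rotation R and translation t, then projected with the intrinsic matrix K:

```python
def project_point(K, R, t, X):
    """Project a 3D model point into pixel coordinates using the Guide
    View extrinsics (R, t) and camera intrinsics K (pinhole model).
    Illustrative sketch; K, R, t as 3x3 / 3x3 / 3-vector nested lists."""
    # Transform into camera coordinates: Xc = R * X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Apply intrinsics (fx, fy, cx, cy) and the perspective divide
    u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / Xc[2]
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
    return u, v
```

A point on the camera's optical axis projects exactly to the principal point (cx, cy), which is a quick sanity check for the calibration values you plug in.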
Multiple Guide Views
It is possible to define multiple Guide Views for a single Model Target and then switch between these different Guide Views at runtime. For example, you might have a service manual app with repair guides for various parts of a machine, and you want the user to be able to select which part they want to look at from an in-app menu. See the Model Targets API Overview for details on how to add such functionality to your app.
Multiple Guide Views can be added manually by positioning the camera view around the object and adding individual Guide Views one by one with the Create Guide View button. A newly created Guide View has a fixed position and empty recognition ranges. You can adjust the Guide View position by clicking the icon located in the top right of the Guide View's preview image.
NOTE: Untrained Guide Views and Advanced Guide Views can be created together in a single Model Target database, but such a database cannot be trained as an Advanced Model Target database, and the Advanced Guide Views will only work as untrained Guide Views!
Advanced Model Target Databases
With the MTG, you can train Advanced Model Target databases, which allow your app to automatically recognize multiple objects and automatically initiate tracking from a pre-defined recognition range.
For example, you might have a marketing app for a new car model, and you want to highlight different features of the car when the user points their device at them. Or, you might have a very large object, and you want the AR experience to be different depending on whether the user is approaching the object from the front or from the back. This could be achieved by training an Advanced Model Target database containing Model Targets that represent different parts of the very large object. Each Model Target should then be generated with one or more Advanced Guide Views. An Advanced Guide View can be set to recognize and detect in a range of up to 360° around the Guide View position.
The Advanced Guide View is not rendered at runtime, but it is needed by the MTG at authoring time when you generate the Model Target. If you wish to display a graphic to help users identify trackable objects, we recommend using symbolic Guide Views instead.
See also Advanced Model Target Databases for more information on Advanced Model Targets.
Advanced Model Targets do not display Guide Views. Therefore, in cases where users are unaware of which objects can be recognized, or are unsure how Model Targets work and have no Guide View to aid them, we recommend displaying a viewfinder UI on the screen to encourage users to position themselves so that tracking can begin.
A symbolic Guide View may also be employed for Advanced Model Targets. This can be an icon or a simplified visual of the object that informs the users about the shape of the object.
Have a look at the Vuforia Model Targets Unity sample app, which demonstrates the best-practice UX approach that we recommend in combination with Advanced Model Target databases. This is shown in the images below.
Standard Model Target Sample
Advanced Model Target Sample with UI elements to aid in locating specific targets
Configuring Advanced Guide Views
To train recognition for Advanced Model Targets, you need to set up the Target Recognition Range and Target Extent for each Guide View.
- The Target Recognition Range represents the range of positions and relative angles from which a target can be recognized.
- The Target Extent determines whether the whole object or only a part of the object is used for recognition.
Once the model has been trained, the object will be recognized from all camera positions that are covered by those recognition ranges.
You can create Advanced Guide Views from the Guide Views tab.
First adjust the Advanced View position, and then press the Create Advanced View button.
In the settings that follow, you can select the 360° Dome or Full 360° preset, or you may choose the Custom Advanced View.
- The 360° Dome preset creates a recognition range that enables tracking all around an object and from the top, but not from the bottom. The Dome preset is useful for stationary objects whose bottom is unreachable or not intended to be tracked. Examples of such objects are cars, appliances, and items on tabletops.
- The Full 360° preset creates a recognition range that covers all sides and angles of an object. This preset is especially useful where the object is expected to be handheld by the user. Toys and tools are good examples where this preset is recommended.
- The Custom Advanced View allows you to set up a custom recognition range and target extent. With the custom setup, you may adjust the recognition range to only recognize an object from a certain entry point or you can adjust the target extent to only start tracking when a part of the object, for example a close-up, is in view.
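The three options above can be summarized as ranges of azimuth and elevation. The following data sketch is illustrative only; the field names are ours, not the MTG's internal format:

```python
# Illustrative range model for the Advanced View options described above.
# Field names are our own, not the MTG's internal format.
DOME_360 = {
    "azimuth_deg": (-180.0, 180.0),  # all the way around the object
    "elevation_deg": (0.0, 90.0),    # top included, bottom excluded
}
FULL_360 = {
    "azimuth_deg": (-180.0, 180.0),  # all sides...
    "elevation_deg": (-90.0, 90.0),  # ...and all angles, including below
}

def custom_view(azimuth_deg, elevation_deg):
    """A Custom Advanced View narrows either range, e.g. to a single
    expected entry point."""
    return {"azimuth_deg": azimuth_deg, "elevation_deg": elevation_deg}
```

For instance, `custom_view((-45.0, 45.0), (-15.0, 15.0))` would describe a view recognized only from a narrow frontal approach.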
Select the option that best fits your scenario. When an Advanced View is created, a preview is available in the Guide Views tab. At this point, you may choose to make additional Advanced Views.
NOTE: An Advanced Guide View’s recognition range and Target Extent can be reconfigured by clicking the icon located in the top right of the Guide View's preview image. This can also be used to add recognition ranges to Guide Views that were originally set up without them.
NOTE: Changing the Guide View position of an already specified Advanced Guide View will reset its recognition range setup.
Custom Advanced View setup
When you create a Custom Advanced View, you can set a custom recognition range and the target extent.
An Advanced View can be pre-set to a recognition range of 90°, 180° or 360°, but can also be adjusted in a three-axis range to better fit your use case.
For example, for a toy that can be recognized from any angle, it makes sense to configure a full 360° recognition range for the azimuth.
If you know that users will always approach an object from a certain direction (e.g., a large piece of machinery or a museum exhibit), you can reduce the recognition range to cover only those angles. This will improve training and recognition performance.
Viewing Angles and Orientation
To configure the angles and orientations from which a model can be recognized, the Model Target Generator lets you control the range of viewing angles for a given Advanced Guide View via the azimuth (green) and elevation (red) angle ranges relative to the Guide View’s position. The roll (blue) can be set to Upright or Arbitrary, where Upright is the default. Use the two roll options with the available motion hints for the best tracking experience.
- Use Upright when the object is certain to remain in a constant upright position (e.g. a machine bolted onto the floor or a car on the street).
- Use Arbitrary when the orientation, with respect to gravity, is likely to change for the object (e.g. a toy or tool and any product you could pick up and place differently).
TIP: The roll choice is visible in the camera-rotation angles shown in the recognition-range editor in blue.
To illustrate, the model below will be recognized only when the user views the object from a position inside the colored areas, with the default range being (-45°, +45°) for azimuth (the green section in the image below) and in the Upright position. Approaching from an angle that is not highlighted in the 3D view, i.e., from the back, will not start tracking of the object.
In contrast, with the maximum range (-180°, +180°) for azimuth, the Advanced Guide View will always activate when the object is in view, regardless of which side you are viewing the object from:
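Conceptually, whether an approach angle activates the Advanced Guide View comes down to a simple range check (an illustrative sketch, not engine code):

```python
def in_recognition_range(camera_azimuth_deg, az_min_deg, az_max_deg):
    """True if the camera's azimuth relative to the Guide View falls
    inside the configured recognition range (illustrative only)."""
    return az_min_deg <= camera_azimuth_deg <= az_max_deg

# With the default (-45, +45) range only frontal approaches qualify;
# with the maximum (-180, +180) range any side of the object works.
```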
NOTE: In both of these cases, the Advanced Guide View position (the small gray cone) stays in the same place. It represents the overall position and distance at which the user needs to hold their device in order to start tracking the object.
As a general rule, only set up more than one Guide View for a single model if the target extents of those Guide Views differ significantly. If two or more Guide Views share the same target extent, they should be combined into a single one.
See the Combining Advanced Guide Views section further below for more details.
In some cases your object may be symmetric, e.g. a cylindrical device that looks the same from both sides, but still has enough features and edges to detect and track.
In such a case, Vuforia Engine cannot reliably detect which side the object is being seen from and the Model Target Generator will display a warning when training.
To work around that, configure the recognition ranges of the Advanced Guide View so that they cover only one of the symmetric parts. The recognized pose will then always be reported to be in that range, even if the user is looking at another, identical-looking part.
In case you have multiple Advanced Guide Views for this model (e.g. because there are additional panels on the machine), these Guide Views must not cover any of the symmetric parts.
The target extent defines which parts of the model will be used in recognizing the object and discerning between Guide Views. Each Advanced View has its own target extent bounding box, which you can modify in the Target Recognition Range window using the gizmos on the left side.
The target extent also determines the distance from which the object can be recognized. For example, making the target extent box smaller requires users to move closer to the object before it is detected. Similarly, having the target extent cover the whole model initiates detection and tracking from a farther distance.
NOTE: Scaling the target extent to an extreme, i.e. the target extent is much larger than the object itself, would significantly reduce how well the model can be recognized.
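The relationship between extent size and detection distance can be illustrated with simple pinhole geometry. Assume, purely for illustration (this is not an engine constant), that the extent box must subtend at least some minimum fraction of the camera's field of view to be detected:

```python
import math

def max_detection_distance(extent_size_m, fov_deg, min_fraction):
    """Farthest distance at which a target-extent box of the given size
    still subtends `min_fraction` of the camera's field of view.
    Pinhole-geometry sketch; `min_fraction` is an illustrative
    assumption, not a Vuforia Engine value."""
    half_fov_rad = math.radians(fov_deg) / 2.0
    return extent_size_m / (2.0 * min_fraction * math.tan(half_fov_rad))

# A smaller extent box yields a smaller maximum distance, i.e. the
# user must move closer before detection starts.
```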
Automatic target extent
The target extent’s bounding box is automatically calculated to cover the area of the object that is in view and within the frame when you create an Advanced View.
- If all of the object is within the frame, the bounding box will cover the entire object.
- If only a part of the model is in the frame, the bounding box will cover only that area.
The target extent calculation is view dependent. By carefully selecting the position and viewing angle of the frame, you also get automatically adjusted recognition ranges.
For example, suppose our object is a car, and we are making a marketing app demonstrating individual features of this vehicle.
We create a close-up Advanced View looking at only the rear shock absorber and engine of an off-road vehicle from an accessible point.
Another Advanced View is made to look at the front of the car by restricting that target extent’s bounding box to just the front section of the car.
There may be situations where the bounding box needs to be repositioned, but this is easily done with the three tools in the left panel.
NOTE: Even when you restrict the target extent bounding box, the whole of the object is still used for tracking: the target extent controls just the region of the object that can be used to recognize the target.
When the Advanced Views are set up and no target extents overlap (a warning appears if they do), the Model Target can be trained. It will then properly recognize the views and close-ups specified by each target extent.
NOTE: The closer you view the physical object or parts of it, the more crucial it is that the CAD model of the object is accurate with respect to the physical object.
See also Advanced Model Target Databases for more information.
NOTE: In previous versions of Vuforia Engine, Advanced Guide Views were required to include at least part of the silhouette of the object. With version 9.3, this requirement was lifted: recognition from close-up views that do not show the silhouette and some background is now supported as well. Re-train your Advanced Model Target with the 9.3 Model Target Generator to benefit from this improvement.
If multiple Advanced Guide Views are configured for a single Model Target, they should never share the same target extent. The Model Target Generator will show a warning if such a case is detected.
The following two screenshots show a case where two Advanced Guide Views have been set up. Both share the same target extent covering the whole model, but each Guide View covers a different side.
This will result in unnecessary overhead during training and recognition.
In such a case, it is recommended to combine the Advanced Guide Views sharing the same target extent into a single one that covers all recognition ranges, as shown below:
Setting up multiple Advanced Guide Views for a single Model Target is only recommended if the target extents between the views differ.
Examples where this could be the case are:
- A larger piece of machinery that contains different parts that may be recognized, e.g. a panel and an engine compartment on different sides of the model. In this case, the extents can be reduced to cover these parts only and the recognition ranges can be configured to cover the angles from which a user might approach those parts.
- A car dashboard that is being recognized from inside the car by a device operated by a user sitting in the driver seat. Recognizable parts may include the gearshift, the air conditioning unit, or the control panels on the side door handles. All these could be different Advanced Guide Views where the target extent is limited to those parts and recognition ranges are configured to only cover angles visible from the driver’s seat.
A wider detection range may create more ambiguity when running your app and attempting detection. With that in mind, you will probably want to keep the detection ranges as small as possible, to match your expected usage scenarios.
For example, an object that cannot be viewed from behind (like an object mounted on a wall) doesn't need a full 360º azimuth angle range; and an object that will only be observed from above, and never from below, only needs an elevation range that covers the top half of the object.
At the same time, setting up multiple Advanced Guide Views that cover the same target extent will increase dataset size and recognition times. In such cases, combine those Guide Views into a single one that covers all combined angles to improve performance, even if this results in a larger recognition range than setting up multiple smaller ranges.