Advanced Views

Model Targets with Advanced Views can be recognized from more than one position, up to a full 360-degree detection range. Create Advanced Views in the Model Target Generator (MTG) to fit inspection use cases, constrained user mobility, or full 360-degree detection ranges.

An Advanced View automatically enables recognition of an object from a set of pre-defined recognition ranges and user positions. To be able to detect the object from multiple positions, the Model Target needs to be trained. See Advanced Model Target Databases for more information on Advanced Model Targets.

Advanced View

Advanced Views and Guide Views are both used to initiate tracking of a Model Target from a defined position, but unlike a Guide View, an Advanced View preview is not rendered at runtime. However, Advanced Views give you presets and tools to define when an object, or part of an object, should be detected and tracking initiated.

Different scenarios using Advanced Views include:

  • Recognizing a physical object from any angle while walking around it, e.g., a car or tabletop object. Set the 360° Dome preset to allow users approaching the object to detect it from any of its sides, except from underneath.
  • Recognizing a larger machine from a close-up position, e.g., when inspecting the engine of a car. Creating views for such inspection points using Constrained Angle Ranges or User Positions enables controlled recognition from multiple views without requiring the user to have the full object in view.

For more scenarios and descriptions, see the sections below.

UX Considerations

An Advanced View is not rendered at runtime. You should therefore consider displaying a symbolic Guide View or other graphic to help users identify Model Target objects.

We recommend displaying a viewfinder UI or symbolic Guide View on the screen, for example, icons or simplified visuals of the object, to help users position themselves so that tracking can begin.

Standard Model Target Guide View

Advanced Model Target symbolic model representation.
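One common way to drive such a hint is to toggle the overlay based on the Model Target's current tracking status. The following is a minimal C++ sketch of that pattern; PoseStatus, queryModelTargetStatus(), and the overlay functions are hypothetical placeholders for your engine integration and UI layer, not actual Vuforia Engine API names.

```cpp
#include <cstdio>

// Hypothetical pose status, mirroring the usual
// "not found / limited / tracked" states of a target.
enum class PoseStatus { NoPose, Limited, Tracked };

// Stubbed placeholder: in a real app this value would come from your
// Vuforia Engine integration (the Model Target's current pose status).
PoseStatus queryModelTargetStatus()
{
    return PoseStatus::NoPose;
}

// Placeholder UI hooks for a viewfinder overlay or a symbolic Guide View image.
void showSymbolicGuideView() { std::puts("hint visible"); }
void hideSymbolicGuideView() { std::puts("hint hidden"); }

// Call once per rendered frame: show the hint until tracking starts, then hide it.
void updateDetectionHint()
{
    if (queryModelTargetStatus() == PoseStatus::Tracked)
        hideSymbolicGuideView();
    else
        showSymbolicGuideView();
}

int main()
{
    updateDetectionHint(); // prints "hint visible" with the stubbed status
}
```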

Create Advanced Views in the MTG

Select Create Advanced View from the Guide Views tab to see the options available.

In the following panel, you can select between the presets 360° Dome, Full 360°, Constrained Angle Range and Target Extent, and Constrained User Positions and Target Extent.

  • The 360° Dome preset creates a recognition range setup that enables tracking from all around an object and from the top, but not from below. The Dome preset is useful for stationary objects where the bottom is not reachable or should not be tracked.
    • Examples of such objects are cars, appliances, and items on tabletops. 
  • The Full 360° preset creates a recognition range setup that covers all sides, orientations, and angles of an object. This preset is especially suited for cases where the object is expected to be handheld and potentially turned sideways or upside down by the user.
    • Some toys and tools are examples where this preset is recommended. 
  • The Constrained Angle Range and Target Extent preset allows you to set up a custom recognition range and Target Extent where only certain sides of the object, or only parts of the object, should initiate tracking.
    This is particularly useful for the following scenarios:
    • Objects that can only be approached from a specific side, e.g., wall-mounted products.
    • Inspection scenarios where tracking is initiated from specific close-up views without the user first having the full physical object in view.
  • The Constrained User Positions and Target Extent preset lets you configure a User Volume that defines the space from which the user can initiate tracking. This is particularly useful for situations where the user can only move within a very constrained space but needs to initialize tracking on a larger object. In these cases, it is often impossible to take a step back to get more of the object in view. Some examples include:
    • A car owner sitting in the driver’s seat of their car using an AR-enabled manual that tracks and augments the dashboard of the car.
    • A service technician working in a vehicle inspection pit below a car, using an AR-enabled app that guides them through repair steps by tracking the underside of the car.

Select the options that best fit your scenario. When an Advanced View is created, a preview is available in the Guide Views panel. At this point, you may choose to set up additional Advanced Views, but keep in mind that some combinations are redundant. For instance, it does not make sense to combine a 360° Dome with a Full 360° view, since the former is a subset of the latter.

NOTE: You can edit the Constrained Angle Range and Target Extent and Constrained User Positions and Target Extent recognition ranges and Target Extents by clicking the icon located on the top right of the preview image of the Advanced View. This can also be used to add recognition ranges to Advanced Views that were originally set up without them.

360° Dome

The 360° Dome preset sets the recognition ranges to cover the object from its sides and from the top. It will not be possible to initialize tracking from the bottom side of the object. The Target Extent is set to cover the whole object, so the user should make sure most of the object is in view when initializing tracking. Scenarios where this option is useful include tracking cars, stationary tabletop objects, or machinery. It is not possible to edit the ranges or extent of this preset.

Full 360°

The Full 360° preset sets the recognition ranges to cover all sides, angles, and even orientations of the object. The Target Extent is set to cover the whole object, so the user should make sure most of the object is in view when initializing tracking. Scenarios where this option is useful include objects that users can pick up, such as toys or tools. It is not possible to edit the ranges or extent of this preset.

Note that this preset will train a database without any restriction on how the object may be oriented (upright, upside down, sideways). Not only will this increase training time, but it will also reduce recognition performance compared to more limited settings. If you expect the object to be oriented in a certain way, prefer the 360° Dome preset instead.

Constrained Angle Range and Target Extent

The Constrained Angle Range and Target Extent option lets you define that the Model Target is recognized only from a limited angle range or from close-up views that do not require the user to get the full object into view. It is possible to define more than one constrained view for a single Model Target.

See the Combining Advanced Guide Views section further below for more details.

To customize an Advanced View using this option, you need to configure the following:

  • The recognition range, representing the range of positions and relative angles from which a target can be recognized.
  • The Target Extent, which determines what part of the object is used for recognition.

Once the model has been trained, the object will be recognized from all camera positions that are covered by those recognition ranges.

Selecting Viewing Angles and Orientation

To configure the angles and orientations from which a model can be recognized, use the azimuth (green) and elevation (red) angle ranges relative to the Advanced View’s position (cone).

You can also use the shortcut ranges at the top of the panel to set the recognition range to 90°, 180°, or 360°.

The roll (blue) can be set to Upright, Arbitrary, or Upside Down, with Upright being the default. Choose the roll option according to how the object can be oriented when it is being recognized.

  • Use Upright when the object is in an upright position (e.g., a machine bolted onto the floor or a car on the street).
  • Use Arbitrary when the orientation, with respect to gravity, is likely to change for the object (e.g., a toy or tool and any product you could pick up and place differently).
  • Use Upside Down when recognition of the object should be supported while it is literally upside down. This option is not meant for situations where users move below the Model Target, as in that case the orientation of the Model Target itself has not changed.

The roll choice is shown in blue as camera-rotation angles in the recognition-range editor.

The small grey cone represents the rough position and distance where the user needs to hold their device in order to start tracking the object.

NOTE: If you plan to use the Arbitrary roll option or the Full 360° preset, there are a couple of limitations you should be aware of. First, these options should only be selected when the initial orientation of the object can be completely arbitrary at detection time. Second, recognition performance may be impacted because the training process needs to cover a larger recognition range, which results in sparser recognizable views.

Set the recognition ranges so that they represent the angles that the object can be recognized from.

To illustrate, the model below will be recognized only when the user is viewing the object from a position that lies inside the colored areas, with the default range being (-45°, +45°) for azimuth (the green section in the image below), and in Upright position. Approaching from an angle that is not highlighted in the 3D view, e.g., from the back, will not initialize tracking for this object.

In contrast, with the maximum range (-180°, +180°) for azimuth, the Advanced View will always activate when the object is in view, regardless of which side you approach the object from:
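To make the azimuth range concrete, here is a standalone geometric sketch that computes the azimuth of a camera position around the model origin and tests it against the default (-45°, +45°) range. It assumes the model's front faces +X with azimuth measured in the ground plane; this is purely illustrative and not how the MTG or Vuforia Engine evaluate recognition ranges internally.

```cpp
#include <cmath>
#include <cstdio>

// Azimuth of a camera position around the model origin, in degrees.
// Assumes the model's front faces +X and azimuth is measured in the
// X/Z ground plane; atan2 returns a value in (-180, +180].
double azimuthDegrees(double camX, double camZ)
{
    const double kPi = 3.14159265358979323846;
    return std::atan2(camZ, camX) * 180.0 / kPi;
}

// True if the camera position falls inside the default (-45°, +45°) range.
bool insideDefaultAzimuthRange(double camX, double camZ)
{
    const double az = azimuthDegrees(camX, camZ);
    return az >= -45.0 && az <= 45.0;
}

int main()
{
    std::printf("front: %s\n", insideDefaultAzimuthRange(2.0, 0.0) ? "recognized" : "not recognized");
    std::printf("side:  %s\n", insideDefaultAzimuthRange(0.0, 2.0) ? "recognized" : "not recognized");
    std::printf("back:  %s\n", insideDefaultAzimuthRange(-2.0, 0.0) ? "recognized" : "not recognized");
}
```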

Symmetry

In some cases, your object may be symmetric, e.g., a cylindrical device that looks the same from both sides, but still has enough features and edges to detect and track.
In such a case, Vuforia Engine cannot reliably detect which side the object is being seen from and the Model Target Generator will display a warning when training.

As a workaround, configure the recognition ranges of the Advanced View so that they cover only one of the parts that are symmetric to each other. The recognized pose will then always be reported to be in that range, even if the user is looking at another part that looks identical.

In case you have multiple Advanced Views for a symmetric model (e.g., because there are additional panels on the machine), these Guide Views must not cover any of the symmetric parts.


Using the Target Extent to enable recognition from close-up views

The Target Extent defines which parts of the model need to be in view when recognizing the object. Each Advanced View can have a custom Target Extent bounding box, which you can modify in the Target Recognition Range window using the gizmos after selecting the blue box. 

The Target Extent determines the distance from which the object can be recognized. For example, making the Target Extent box smaller will require users to move closer to the object before it is detected. Similarly, having the Target Extent cover the whole model allows detection and tracking to start from farther away, with the full object in view.
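As a rough rule of thumb for how extent size and detection distance relate, simple pinhole-camera geometry says that an extent of width w fills a horizontal field of view θ at a distance of about d = (w / 2) / tan(θ / 2). The sketch below applies this approximation; the 60° field of view and the extent widths are assumed example values, and actual detection distances also depend on the training, model detail, and device.

```cpp
#include <cmath>
#include <cstdio>

// Approximate distance at which a Target Extent of the given width
// fills the camera's horizontal field of view (pinhole camera model).
double extentFillDistance(double extentWidthMeters, double horizontalFovDegrees)
{
    const double kPi = 3.14159265358979323846;
    const double halfFovRadians = horizontalFovDegrees * kPi / 360.0;
    return (extentWidthMeters / 2.0) / std::tan(halfFovRadians);
}

int main()
{
    const double fov = 60.0; // assumed horizontal camera FOV in degrees

    // A full-car extent (~4.5 m wide) vs. a close-up extent (~0.6 m wide):
    std::printf("full car:  ~%.1f m\n", extentFillDistance(4.5, fov)); // ~3.9 m
    std::printf("close-up:  ~%.1f m\n", extentFillDistance(0.6, fov)); // ~0.5 m
}
```

In other words, shrinking the extent from the whole car to a single component moves the expected detection position from several meters away to roughly arm's length.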

Automatic Target Extent

The Target Extent’s bounding box is automatically calculated to cover the area of the object that is in view and within the frame when you create an Advanced View.

  • If all of the model is within the frame, the Target Extent bounding box will cover the entire object. 
  • If only a part of the model is in the frame, the Target Extent bounding box will cover only that area.

The Target Extent calculation is view dependent. When you carefully select the position and viewing angle of the frame, the recognition ranges are adjusted automatically as well.

For example, suppose our object is a car, and we are making a marketing app demonstrating individual features of this vehicle. 

We create a close-up Advanced View looking at only the rear shock absorber and engine of an off-road vehicle from an accessible point.

Another Advanced View is made to look at the front of the car by restricting that Target Extent’s bounding box to just the front section of the car.

There may be situations where the bounding box needs to be repositioned, but this is easily done with the gizmos available when selecting the blue Target Extent box.

NOTE: Even when you restrict the Target Extent bounding box, the whole object is still used for tracking: the Target Extent controls only the region of the object that is used to recognize the object and initiate tracking.

Once the Model Target is trained, the object will be properly recognized from the views and close-up positions specified by each Target Extent.

NOTE: The closer you view the physical object or parts of it, the more crucial it is that the CAD model of the object is accurate with respect to the physical object.

Constrained User Positions and Target Extent

The Constrained User Positions and Target Extent option lets you define a limited space that the user will be in when recognizing the object.
In many inspection and service use cases, the user cannot move freely around the object or cannot move far enough back to bring the whole object into view:

  • A car owner sitting in the driver’s seat of their car using an AR-enabled manual that tracks and augments the dashboard of the car.
  • A service technician working in a vehicle inspection pit below a car, using an AR-enabled app that guides them through repair steps by tracking the underside of the car.

The Constrained User Positions and Target Extent option can be used to enable such use cases by configuring the User Volume (green box) and Target Extent (blue box).

User Volume

The User Volume is represented by the green box and the worker avatar. This box should be configured to cover the space that the user, or more precisely the device, will be in when recognizing the target. It defines the area, used during training, from which the camera can initiate tracking.

User Volume and Target Extent relationship

The blue box represents the Target Extent and should be configured to cover any area of the model that the user will point the camera at to initiate tracking.

In contrast to the Constrained Angle Range option, the Target Extent does not have to be fully in view to start tracking. Instead, the User Volume and the distance to the Target Extent ensure that the object can be detected even when you start your AR session at a close-up view.

Specifying the distance to the model surface

When training an Advanced Model Target using the Constrained User Positions and Target Extent option, camera positions are sampled inside the defined User Volume. If the User Volume is close to the geometry of the object or intersects it, the volume is subdivided into voxels that do not intersect the geometry. For this, the minimum distance from the model surface needs to be specified.
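Conceptually, the effect of this setting can be pictured as filtering candidate camera positions: a position inside the User Volume is only usable if it is at least the minimum distance away from the model surface. The sketch below illustrates that filtering on a grid of candidate positions, with the model crudely approximated by a sphere; it only illustrates the idea and is not the MTG's actual sampling or voxelization code.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Signed distance from a point to the model surface, here approximated
// by a sphere of the given radius at the origin (a stand-in for real
// mesh distance queries).
double distanceToModelSurface(const Vec3& p, double radius)
{
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - radius;
}

int main()
{
    const double modelRadius = 1.0; // sphere stand-in for the object
    const double minDistance = 0.2; // "can't be closer than 20 cm" setting

    // Sample a coarse grid of candidate camera positions in a User Volume box.
    int kept = 0, rejected = 0;
    for (double x = 0.5; x <= 2.0; x += 0.25)
        for (double y = 0.0; y <= 1.0; y += 0.25)
            for (double z = -0.5; z <= 0.5; z += 0.25)
            {
                if (distanceToModelSurface(Vec3{x, y, z}, modelRadius) >= minDistance)
                    ++kept;     // far enough from the surface: usable for training
                else
                    ++rejected; // too close to (or inside) the geometry
            }

    std::printf("kept %d candidate positions, rejected %d\n", kept, rejected);
}
```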

In the example below, the User Volume is set up to cover all positions a user can be in when sitting in the driver’s seat of this car. The Target Extent is set to cover both the dashboard in front of the driver and the side doors, so that the car interior can be recognized and tracking can be started when looking at any of these areas from the driver’s seat. In such a scenario it is likely that the user is limited to holding the camera close to the dashboard to initiate tracking; the distance to the model surface can therefore be set to around 10 cm.

In another example below, the user is inspecting the engine compartment of the same car. For this, the Target Extent is configured to cover the front part of the car, and the User Volume covers the space in front of and to the side of the engine compartment.

In this example, the user is less restricted in their movement and will usually not be closer than arm’s length when initiating the AR experience. Setting a minimum distance of 20 cm from the model surface is a good default for such inspection use cases, since the camera is usually not closer to the target when tracking is initialized.

Common Pitfalls

When using the Constrained User Positions and Target Extent option, it is good to keep the following edge cases in mind:

Simplification of inside parts

The Model Target Simplification feature of the Model Target Generator might remove internal parts of the Model if they are not visible from the outside. In some use cases explained above, e.g., when the user is sitting in the driver’s seat and looking at the dashboard, this could result in removal of the dashboard geometry. As a consequence, the recognition of the dashboard will not be possible.
To work around this, the internals need to be made visible, e.g., by removing the car’s roof or windows from the model before simplifying it.

User Volumes close to object geometry

Configuring positions unnecessarily close to an object will result in longer training times and reduced detection robustness. Make sure to set the User can’t be closer than x cm to model surface setting to a meaningful value to avoid training from positions that are too close to the object. A good default for inspection scenarios is 20 cm. A lower value can make sense if the user will be restricted in their movement and will be standing very close to the object when recognizing it.

Combining Advanced Guide Views 

There are many situations where it is useful to define multiple Advanced Views for a single Model Target. The following two screenshots show a use case where two Advanced Views have been set up. Both share the same Target Extent covering the whole model, but each Guide View has a different Roll option.

In this scenario, we imagine that the Model Target is inspected in two states: in an upright orientation and in an upside-down orientation.

Other situations where it is useful to set up multiple Advanced Views for a single Model Target are:

  • A larger piece of machinery that contains different parts that may be recognized, e.g., a panel and an engine compartment on different sides of the model. In this case, the extents can be reduced to cover only these parts, and the recognition ranges can be configured to cover the angles from which a user might approach them. An additional 360° Dome view could be added to allow detection of the whole machine from farther away and from any direction.
  • A car dashboard that is recognized from inside the car by a device operated by a user sitting in the driver’s seat. Recognizable parts may include the gearshift, the air conditioning unit, or the control panels on the side door handles. Each of these could be a different Advanced View where the Target Extent is limited to that part and the recognition ranges are configured to cover only angles visible from the driver’s seat.

Performance Considerations

A wider detection range may create more ambiguity when your app attempts detection at runtime. With that in mind, keep the detection ranges as small as possible to match your expected usage scenarios.

For example, an object that cannot be viewed from behind (like an object mounted on a wall) doesn't need a full 360° azimuth angle range; and an object that will only be observed from above, and never from below, only needs an elevation range that covers the top half of the object.
