Smart Terrain

The Smart Terrain™ feature of the Vuforia SDK is a breakthrough vision capability that enables a new level of immersion in augmented reality gaming experiences. Smart Terrain enables you to reconstruct and augment your physical environment, creating new kinds of gaming and visualization applications. Apps are developed through a simple authoring workflow and an event-driven programming model that is consistent with what Vuforia offers today.

Smart Terrain Terminology


Definitions

Stage

The stable configuration of objects and surfaces in the real world that is augmented by the scene.

Staging

The process of lighting a stage and arranging props and Targets in preparation for the scanning process.

Scanning

The process of capturing a stage from multiple vantage points through a continuous sequence of camera positions.

Terrain

Both the authoring resources and digital artifacts of the Smart Terrain scene generated by Vuforia, including the Primary Surface, Auxiliary Surfaces, borders, targets, props and the meshes accompanying these objects.

Scene

The virtual scene, in Unity, that contains the Terrain authoring elements and developer content. The scene augments a real world ‘stage’ using Smart Terrain.

Borders

The extent of the scene applied to the stage, established by the synthesized edges and sensor horizons of the physical setting.

Primary Surface

The initial stage surface, scanned by the user, that serves as the reference geometry for constructing the Smart Terrain scene. Additional props and Surfaces in the scene are positioned in relation to the Primary Surface.

Props

Unknown real-world objects that can be reconstructed and tracked by the Smart Terrain Tracker.

Targets

Predefined Vuforia Trackables such as Image Targets, Multi-Targets, Cylinder Targets, and 3DROs.


Supported Objects

Objects, or Props as they are referred to in Smart Terrain, may range from as small as a soup can to as large as a large cereal box. They should have a static, rigid geometry, and their surfaces should provide patterns and details that the Smart Terrain Tracker can use to recognize and follow the object. Smart Terrain looks for the same types of natural features that are used by the other Vuforia trackable types (such as Image Targets). See: Natural Features and Image Ratings.

Although there is no hard limit on the number of props that can be tracked, Smart Terrain works best with up to 5 objects.

Transparent objects, such as glass, are not supported.

Tip: The best props are box- or cylinder-shaped with printed patterns, such as items you would usually find at home in your pantry. Good props are no smaller than a can of vegetables, and no larger than a big cereal box. The stage itself can be as large as a dining table.

Supported Environments

Smart Terrain is best suited for near-range tabletop experiences in static settings that are well-lit with stable lighting conditions. Reflective and transparent surfaces are not good candidates for use with Smart Terrain because the appearance of such surfaces changes based on the position of the user.

Once the setting is staged, its elements should remain static. Changing the stage (table) geometry or arrangement of props (objects) on the stage will corrupt tracking.

In general, Smart Terrain has been designed to work with a wide variety of commonly occurring table surfaces found at home and in the office. Ideal stage surfaces are either plain or present a uniform density of features, and should be visually distinct from nearby adjoining surfaces.

Minimum Requirements

Smart Terrain is supported only on devices with multicore processors. Developers should implement app logic to gracefully handle cases where a user attempts to run Smart Terrain on unsupported devices, by verifying the processing context and not initializing Smart Terrain if a multicore processor is not available.

You can check the processor count for a device at runtime using Unity’s SystemInfo.processorCount property, which returns the number of available processors.

See: https://docs.unity3d.com/Documentation/ScriptReference/SystemInfo-processorCount.html
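The check above can be sketched as a small Unity script. SystemInfo.processorCount is the documented API; the EnableSmartTerrain method here is a hypothetical placeholder for whatever initialization logic your app uses:

```csharp
using UnityEngine;

public class SmartTerrainGate : MonoBehaviour
{
    void Start()
    {
        // SystemInfo.processorCount returns the number of available processors.
        if (SystemInfo.processorCount > 1)
        {
            // Multicore device: safe to initialize Smart Terrain.
            EnableSmartTerrain();
        }
        else
        {
            // Single-core device (e.g. iPhone 4): degrade gracefully
            // instead of initializing Smart Terrain.
            Debug.Log("Smart Terrain is not supported on this device.");
        }
    }

    void EnableSmartTerrain()
    {
        // Placeholder: activate your Smart Terrain scene elements here.
    }
}
```

Attaching this script to a scene object lets the app decide, before any Smart Terrain initialization, whether to offer the feature or fall back to a standard tracking experience.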

In addition, the minimum requirements for developing and running Unity apps on iOS and Android also apply to Smart Terrain:

Android

Android authored content requires devices equipped with:

  • Android OS 2.3.1 or later
  • Device powered by an ARMv7 (Cortex family) CPU
  • GPU support for OpenGLES 2.0 is recommended

iOS

  • Xcode 4.3
  • iPhone 4 is not supported because it is a single-core device

How Smart Terrain Works

Smart Terrain reconstructs, recognizes, and tracks physical objects and surfaces. These are represented as 3D meshes in the Unity scene. The collection of surface and object meshes is referred to as the scene’s Terrain.

There are three phases to a Smart Terrain experience:

  1. A staging phase in which the user sets up a staging area to use, adding their props and the initialization target.
  2. A scanning phase in which the stage and props used in the setting are captured and reconstructed by the Smart Terrain tracker.
  3. A tracking phase in which the terrain is augmented in real-time by the Unity scene you've developed.

The Scanning phase starts from a pre-defined target, which is used to initialize terrain generation. You can use an Image Target, Multi-Target or Cylinder Target from either a Device or Cloud Database as the initialization target. Once the Terrain has been initialized, the Smart Terrain Tracker will begin reconstructing the stage surface, along with any viable props that are found on the stage, and adding them to the terrain.

Props are reconstructed as independent cube/cuboid meshes in the scene, and each Prop object possesses a collider and mesh renderer component. The Primary Surface of the terrain is likewise an independent mesh with its own collider and renderer.

Developers can address the prop and surface meshes independently to apply rendering techniques, and to evaluate their properties for app logic. For example, the size and/or position of a physical prop can be used to selectively assign prop instances in the scene.
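As a sketch of this idea, the script below inspects the bounds of each prop's collider to drive content assignment. The propObjects array and the AttachTower/AttachPlatform methods are illustrative assumptions (your app would populate and implement them); the Collider and Bounds APIs are standard Unity:

```csharp
using UnityEngine;

public class PropClassifier : MonoBehaviour
{
    // Hypothetical: populated at runtime with the reconstructed prop GameObjects.
    public GameObject[] propObjects;

    public void AssignContent()
    {
        foreach (GameObject prop in propObjects)
        {
            // Each prop mesh carries a collider whose bounds give its physical extent.
            Bounds bounds = prop.GetComponent<Collider>().bounds;

            // Example app logic: tall props become towers, flat props become platforms.
            if (bounds.size.y > bounds.size.x)
                AttachTower(prop);
            else
                AttachPlatform(prop);
        }
    }

    void AttachTower(GameObject prop)    { /* attach tower content here */ }
    void AttachPlatform(GameObject prop) { /* attach platform content here */ }
}
```

Because each prop is an independent mesh with its own components, the same pattern extends to applying per-prop materials, physics behavior, or occlusion rendering.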

See: Smart Terrain Workflow in Unity

Samples

See: Penguin Smart Terrain Sample