Advanced Tips for Optimizing Stereo AR/VR App Performance

As an AR/VR developer building for Digital Eyewear, you should be aware of the available power management techniques and how they relate to your rendering budget. This awareness is especially important for stereo-rendered VR apps, which carry the additional cost of Device Tracking using the on-device Inertial Measurement Unit (IMU). The combined costs of stereo rendering and rotational tracking can push devices to their limits and result in down-clocking (i.e. reduced frame rates).

The primary target for rendering optimizations is the stereo rendering process itself. Rendering stereo viewports, and applying the necessary lens distortion corrections, incurs significant additional computation cost relative to mono-viewport rendering on mobile devices. Specifically, stereo rendering entails:

  • Double the rendering cost, as views must be rendered for both the left and right eyes (native Android and iOS only)
  • Additional render passes for distortion compensation, which is applied offscreen to a distortion mesh
  • Higher sustained frame rates (e.g. 60 FPS) for immersive VR scenes

Here are some tips, and links to resources, that can help you reduce render costs and manage your rendering budget.

  • Anti-Aliasing: Avoid setting anti-aliasing too high. A 2X setting can be beneficial by reducing the cost of the offscreen rendering pass, but anything above 4X will significantly decrease performance.
    In Unity you can change this setting in Project Settings -> Quality
  • Distortion Texture Size (native Android and iOS only): The distortion rendering texture can be rescaled if you are targeting low-end devices. Vuforia provides a recommended texture size that maximizes quality, but this can be detrimental on low-end devices when performing high-cost rendering. You can rescale the texture programmatically, which decreases rendering quality but improves FPS (a sketch follows this list).
  • Distortion Texture Format: The format for distortion rendering (offscreen rendering) is automatically selected in Unity to optimize performance for each platform. For native Android and iOS apps you can select the optimal texture format yourself (e.g. use RGB565 on Android - see the Android AR/VR sample app and the sketch after this list).
  • Distortion Mesh (native Android and iOS only): Vuforia provides a distortion mesh with a resolution that performs well on a broad range of devices. If you want to increase the resolution - creating a tighter, smoother mesh - you can further tessellate this mesh by interpolating the provided vertex and texture coordinates.
  • Stereo View (native Android and iOS only): Avoid unnecessarily duplicating draw calls when you render the left and right eyes. View-independent work such as occlusion or visibility computation and shadow generation can be performed once and shared before rendering both eyes (see the stereo frame sketch after this list).
  • You may be able to use advanced EGL/GL extensions provided by GPU manufacturers to improve stereo performance, such as the recently introduced OVR_multiview (a setup sketch follows this list).
  • Frame rate: VR apps require a high, constant frame rate, synced to the display refresh rate, which is 60 FPS on supported platforms. Users do not tolerate significant frame rate variation, which can cause motion sickness. Avoid any non-rendering related processing in your render thread, and execute non-critical operations (e.g. AI, physics, asset management) on a separate thread or process.
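The texture size and format tips above both come down to how you allocate the offscreen distortion target in your native renderer. Here is a minimal sketch using standard OpenGL ES 2.0 calls; the createDistortionTarget() helper and its parameters are illustrative assumptions, not part of the Vuforia API.

```cpp
#include <GLES2/gl2.h>

// Hypothetical helper: creates a reduced-size RGB565 color target for
// the distortion pass. The caller picks width/height (e.g. a fraction
// of the recommended size on low-end devices).
GLuint createDistortionTarget(GLsizei width, GLsizei height)
{
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // RGB565 halves the memory bandwidth of an RGBA8888 target.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_SHORT_5_6_5, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);
    // Render the stereo scene into this FBO, then sample 'texture'
    // while drawing the distortion mesh to the default framebuffer.
    return fbo;
}
```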
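To make the stereo view tip concrete, here is a structural sketch of a render loop that performs view-independent passes once per frame. All types and helpers in it (Scene, ShadowMap, renderShadowMap(), cullScene(), drawScene()) are hypothetical application-side code, shown only to illustrate the call pattern.

```cpp
// Hypothetical application-side types, stubbed out for the sketch.
struct Scene {};
struct Camera {};
struct ShadowMap {};
struct VisibleSet {};

ShadowMap renderShadowMap(const Scene&) { return {}; } // shadow pass
VisibleSet cullScene(const Scene&)      { return {}; } // frustum covering both eyes
void drawScene(const VisibleSet&, const ShadowMap&, const Camera&) {}

void renderStereoFrame(const Scene& scene,
                       const Camera& leftEye, const Camera& rightEye)
{
    // View-independent work: performed once and shared between the eyes.
    ShadowMap shadows  = renderShadowMap(scene);
    VisibleSet visible = cullScene(scene);

    // Only the view-dependent draw calls are duplicated per eye.
    drawScene(visible, shadows, leftEye);
    drawScene(visible, shadows, rightEye);
}
```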
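And where the driver supports it, OVR_multiview lets a single set of draw calls render both eye views into the layers of a texture array. The sketch below shows the extension check and framebuffer setup; colorArrayTex is assumed to be a two-layer GL_TEXTURE_2D_ARRAY created elsewhere, and error handling is omitted.

```cpp
#include <EGL/egl.h>
#include <GLES3/gl3.h>
#include <cstring>

// Function pointer type for the extension entry point; the official
// typedef lives in gl2ext.h as PFNGLFRAMEBUFFERTEXTUREMULTIVIEWOVRPROC.
typedef void (*MultiviewFn)(GLenum target, GLenum attachment, GLuint texture,
                            GLint level, GLint baseViewIndex, GLsizei numViews);

bool setupMultiviewTarget(GLuint colorArrayTex)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!ext || !std::strstr(ext, "GL_OVR_multiview"))
        return false; // fall back to one render pass per eye

    MultiviewFn glFramebufferTextureMultiviewOVR =
        reinterpret_cast<MultiviewFn>(
            eglGetProcAddress("glFramebufferTextureMultiviewOVR"));

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    // Attach both layers of the 2-layer texture array; a vertex shader
    // declaring "layout(num_views = 2) in;" then runs once per view.
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     colorArrayTex, 0, /*baseViewIndex*/ 0,
                                     /*numViews*/ 2);
    return true;
}
```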

Improving Application Performance

The primary strategy for improving the user experience provided by your app is to reduce any potential motion-to-photon latency.

The Vuforia Device Tracker supports pose prediction (setPosePrediction()). Pose prediction estimates the latency of pose updates from the IMU and attempts to compensate for it by slightly adjusting the rotational pose provided by the tracker. This functionality is complemented by Vuforia's on-demand state updating capability, exposed through the updateState() and getLatestState() methods.
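As an illustration, a minimal native C++ sketch follows; it assumes the Vuforia native API referenced above (exact header names and class layout may vary between SDK versions):

```cpp
#include <Vuforia/TrackerManager.h>
#include <Vuforia/RotationalDeviceTracker.h>
#include <Vuforia/StateUpdater.h>
#include <Vuforia/State.h>

// Enable pose prediction on the rotational device tracker.
// Assumes Vuforia has been initialized and the tracker started.
void enablePosePrediction()
{
    Vuforia::TrackerManager& mgr = Vuforia::TrackerManager::getInstance();
    auto* tracker = static_cast<Vuforia::RotationalDeviceTracker*>(
        mgr.getTracker(Vuforia::RotationalDeviceTracker::getClassType()));
    if (tracker)
        tracker->setPosePrediction(true); // compensate for IMU latency
}

// Pull the freshest state on demand; call this as late as possible
// before using the poses for rendering.
Vuforia::State latestState()
{
    return Vuforia::TrackerManager::getInstance()
        .getStateUpdater().updateState();
}
```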

You should always obtain and apply the pose provided by the State object as late as you can before rendering a given frame, and complete rendering as close to the VSync interval as possible: using the latest available pose update reduces the latency perceived by the user.

Note that each Trackable Result contained in the State object has an associated timestamp (getTimeStamp()) which can be compared to the current timestamp (getCurrentTimeStamp()). When using pose prediction, comparing these two values tells you the forward-looking interval of your prediction time. You can use this interval to perform any remaining secondary (non-rendering) processing.
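For example, a sketch along these lines could compute the available interval; it assumes getCurrentTimeStamp() is exposed in the Vuforia namespace, and header names may vary between SDK versions:

```cpp
#include <Vuforia/Vuforia.h>
#include <Vuforia/State.h>
#include <Vuforia/TrackableResult.h>

// Returns the largest forward-looking prediction interval (in seconds)
// across the trackable results of a state.
double predictionInterval(const Vuforia::State& state)
{
    double maxAhead = 0.0;
    for (int i = 0; i < state.getNumTrackableResults(); ++i)
    {
        const Vuforia::TrackableResult* result = state.getTrackableResult(i);
        // Positive when the pose has been predicted ahead of 'now'.
        double ahead = result->getTimeStamp() - Vuforia::getCurrentTimeStamp();
        if (ahead > maxAhead)
            maxAhead = ahead;
    }
    return maxAhead; // budget available for secondary, non-rendering work
}
```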

You can further reduce latency in your application by rigorously controlling the activities performed on your rendering thread, by synchronizing the CPU and GPU, and by reducing any additional system latency.

On Android, you have further options for optimizing rendering latency: you can control the presentation time of each frame with the eglPresentationTimeANDROID() API and schedule rendering against an accurate estimate of the time of the next VSync.
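A minimal sketch of the swap path follows; obtaining nextVsyncNs (e.g. from the Android Choreographer) is left to the application and is an assumption of this example:

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Schedule a frame's presentation with the EGL_ANDROID_presentation_time
// extension, then queue it for display.
void presentFrame(EGLDisplay display, EGLSurface surface,
                  EGLnsecsANDROID nextVsyncNs)
{
    auto eglPresentationTimeANDROID =
        reinterpret_cast<PFNEGLPRESENTATIONTIMEANDROIDPROC>(
            eglGetProcAddress("eglPresentationTimeANDROID"));

    if (eglPresentationTimeANDROID)
        eglPresentationTimeANDROID(display, surface, nextVsyncNs);

    eglSwapBuffers(display, surface); // queue the frame for that VSync
}
```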

In Unity, be aware of how much memory your scripts allocate on the heap. The Unity garbage collector is known to cause CPU spikes that can impact performance, so minimize per-frame allocations.

You can find examples of these techniques applied in the Grafika project: https://github.com/google/grafika

Improving Device Tracking Performance

Avoid accessing the inertial sensors independently in your application, as this can interfere with Device Tracking.

Note that you can use multiple Tracker types simultaneously in AR/VR apps, for example combining the Rotational Device Tracker with the Object Tracker. Be aware, however, that the pose update rates of these two trackers differ, and that combining a Device Tracker with an Object or Smart Terrain Tracker consumes significant CPU resources. If you don't need additional Trackers alongside the Device Tracker, be sure not to start them in your app, as sketched below.
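Here is a native C++ sketch of starting only the Rotational Device Tracker, under the same API assumptions as the earlier examples:

```cpp
#include <Vuforia/TrackerManager.h>
#include <Vuforia/RotationalDeviceTracker.h>

// Initialize and start the device tracker only; leaving the Object and
// Smart Terrain trackers unstarted avoids their CPU cost.
bool startDeviceTrackingOnly()
{
    Vuforia::TrackerManager& mgr = Vuforia::TrackerManager::getInstance();
    auto* deviceTracker = static_cast<Vuforia::RotationalDeviceTracker*>(
        mgr.initTracker(Vuforia::RotationalDeviceTracker::getClassType()));
    if (!deviceTracker)
        return false;
    return deviceTracker->start();
}
```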
Additional Tips:

  • Tracker/Camera life cycle: If you implement a mixed reality app, combining both AR and VR modes, always turn off the camera and Trackers when switching to VR mode. This will significantly improve performance, since stopping these components is not costly (see the sketch after this list). When restarting the camera there will be a slight delay, so it's advisable to use a transition effect to conceal the start latency.
    In Unity you can simply use the MixedRealityController, which handles all of the life-cycle mechanics and configuration steps for you. See: Using the MixedRealityController in Unity.
  • Model Correction: The Rotational Device Tracker supports two pivot-point correction models, for head tracking and for hand tracking. See: Using the Rotational Device Tracker
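Here is a native C++ sketch of the life-cycle tip above, assuming an Object Tracker is the AR tracker in use; stop whichever trackers your app actually runs:

```cpp
#include <Vuforia/CameraDevice.h>
#include <Vuforia/TrackerManager.h>
#include <Vuforia/ObjectTracker.h>

// Called when switching from AR to VR mode: stop the AR tracker and
// the camera. Both calls are cheap; restarting the camera later incurs
// a short delay that a transition effect can hide.
void enterVRMode()
{
    Vuforia::TrackerManager& mgr = Vuforia::TrackerManager::getInstance();
    if (Vuforia::Tracker* objectTracker =
            mgr.getTracker(Vuforia::ObjectTracker::getClassType()))
        objectTracker->stop();

    Vuforia::CameraDevice::getInstance().stop(); // no camera needed in VR
}
```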

Further information

Both Oculus and Unity provide very useful guides to designing and optimizing VR apps.

See:
Introduction to Mobile VR Design from Oculus Mobile SDK
Unity Virtual Reality Guide