The Compositor is the rendering component of Meta’s SDK. From Meta’s Unity SDK, you can control a variety of advanced rendering settings, including enabling Meta’s advanced prediction techniques to improve the performance of your application.

Features

The Compositor handles the following:

  • Creating a window in either direct mode or extended mode on the headset.
  • Unwarping stereo textures. This converts the images generated by the cameras associated with both eyes (hence “stereo”) into a form that will appear correctly in the Meta 2.
  • Reducing the amount of latency the user experiences by applying prediction techniques such as asynchronous reprojection.

Usage

The Compositor is part of the MetaCameraRig. It will automatically start, detect your display mode, and initialize rendering.

By default, the Compositor will initialize the prediction system known as “2D Warp.” For applications with head-locked content – i.e. content that always stays in the same position relative to the user’s head no matter where they look – 2D warp may introduce judder. In this case, we recommend disabling both 2D and 3D warp.

For optimal rendering performance, we recommend enabling 3D warp. As described in greater detail below, 3D warp not only improves the perceived motion-to-photon latency – i.e. how closely your application’s rendering matches the user’s head movements – but it also fills in dropped frames when your application is running on less powerful hardware. However, please note that 3D warp is not compatible with certain content, particularly some transparency shaders. Please see below for more details.

Known Issues

  • Direct mode is not currently supported for AMD graphics cards.
  • Switching display modes while the Compositor is running is not supported.

Asynchronous Rendering

Asynchronous rendering allows an application and the Compositor to work asynchronously with respect to each other. The application submits frames – color and optionally a depth buffer for each eye – to the Compositor whenever it is ready, while the Compositor renders the latest frame at the display refresh rate. This allows the Compositor to perform various image stabilization and latency reduction techniques (see below) independent of the application, while delivering a consistent frame rate with low motion-to-photon latency, even when the application cannot keep up with the display’s frame rate.
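The submission model can be pictured as a "latest frame wins" mailbox between the application thread and the Compositor thread. The following Python sketch is purely illustrative (the class and field names are hypothetical, not SDK API):

```python
# Illustrative sketch, not SDK code: the app posts frames whenever it is
# ready; the Compositor takes only the newest one at each display refresh.
import threading

class FrameMailbox:
    """Holds only the most recently submitted frame."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def submit(self, frame):
        # Application thread: post a finished frame (color + optional depth
        # per eye) whenever rendering completes; older frames are discarded.
        with self._lock:
            self._frame = frame

    def latest(self):
        # Compositor thread: grab the newest frame at the display refresh
        # rate; if the app dropped a frame, the previous one is reused
        # (and warped) so scan-out never stalls.
        with self._lock:
            return self._frame

mailbox = FrameMailbox()
mailbox.submit({"color": "...", "depth": "...", "pose_id": 0})
mailbox.submit({"color": "...", "depth": "...", "pose_id": 1})
assert mailbox.latest()["pose_id"] == 1  # only the newest frame is consumed
```

The key property is that neither side blocks on the other: the application can run slower than the display without ever stalling the Compositor.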

Reducing Motion-to-Photon Latency

2D Warp and 3D Warp in combination with dynamic latewarp and asynchronous rendering serve two purposes:

  1. They reduce motion-to-photon latency by warping the application-provided 2D images using the latest camera pose, sampled moments before the next image is shown on the display.
  2. They perform image stabilization by synthesizing missing frames whenever the application cannot keep up and drops frames. These frames are synthesized by appropriately warping a previously cached frame.
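The rotation-only case handled by 2D Warp can be sketched mathematically: for a pure rotation between the render-time pose and the latest pose, the rendered image can be re-aligned with a homography H = K · R · K⁻¹, where K holds the camera intrinsics. This is a standard reprojection identity, shown here as an illustrative sketch (the intrinsic values are made up):

```python
# Sketch of the idea behind 2D Warp: re-align a rendered image for a pure
# head rotation using a homography built from intrinsics K and rotation R.
import numpy as np

def rotation_homography(K, R_delta):
    """Homography mapping render-time pixels to latest-pose pixels."""
    return K @ R_delta @ np.linalg.inv(K)

# Hypothetical intrinsics: focal length 800 px, principal point (640, 360).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# With no head movement between render time and scan-out, the warp is identity.
assert np.allclose(rotation_homography(K, np.eye(3)), np.eye(3))

# A small yaw shifts pixels sideways. Depth never enters the formula,
# which is why 2D Warp cannot account for head translation.
yaw = np.deg2rad(1.0)
R = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
              [         0.0, 1.0,         0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
p = rotation_homography(K, R) @ np.array([640.0, 360.0, 1.0])
assert p[0] / p[2] > 640.0 and abs(p[1] / p[2] - 360.0) < 1e-6
```

Because depth does not appear in H, the warp is cheap; accounting for translation as well is exactly what requires the depth buffer, as described under 3D Warp below.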

2D Warp

2D Warp is a less performance-intensive technique than 3D Warp. It maintains smooth rendering as long as the user only turns their head from side to side: even when an application drops frames or runs at a lower frame rate, rendering remains smooth under pure head rotation. Note, however, that the human head rotates about the neck and not the eyeballs (i.e. the location of our virtual cameras), so the user may still experience a small amount of judder, even with pure head rotation, if the application drops frames. 2D Warp does not account for translation of the user’s head.

3D Warp

3D Warp, on the other hand, accounts for both rotation and translation. This technique is slightly more expensive, and importantly, requires access to the application’s depth buffer. Our implementation imposes the following restrictions:

  1. Every pixel must have a valid depth value. The only exception is for halos or outlines within 10 pixels (in screen space) of the border of an object that has a valid depth value (see Figure 1).
  2. An object needs to be at least 5 pixels (in screen space) along its smallest dimension for its details to be captured properly. For example, details along a long, thin line with a thickness of 4 pixels may not be captured properly regardless of the length of the line.

Figure 1: The halo/outline exception near the border of an object with valid depth values.

While most practical AR scenes can easily meet the above restrictions, full-screen post-processing effects such as “full-screen bloom” might be tricky to implement, especially due to the requirement that every pixel must have a valid depth (1).
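Restriction (1) can be sanity-checked offline by scanning a depth buffer for pixels that neither have a valid depth nor lie within the 10-pixel halo allowance. The following Python sketch is hypothetical tooling, not part of the SDK; it uses a depth of 0.0 to stand in for "no depth written":

```python
# Hypothetical check for restriction (1): every pixel needs a valid depth,
# except pixels within 10 px of a pixel that has one (the halo allowance).
import numpy as np

HALO_RADIUS = 10

def violations(depth):
    """Return pixels with no valid depth anywhere in their halo window."""
    valid = depth > 0.0
    bad = []
    h, w = depth.shape
    for y, x in zip(*np.nonzero(~valid)):
        y0, y1 = max(0, y - HALO_RADIUS), min(h, y + HALO_RADIUS + 1)
        x0, x1 = max(0, x - HALO_RADIUS), min(w, x + HALO_RADIUS + 1)
        if not valid[y0:y1, x0:x1].any():  # no valid depth nearby
            bad.append((int(y), int(x)))
    return bad

depth = np.ones((64, 64))          # fully valid frame
depth[5, 5] = 0.0                  # lone hole near valid pixels: allowed
assert violations(depth) == []

depth = np.zeros((64, 64))         # e.g. a full-screen effect with no depth
depth[0, 0] = 1.0
assert len(violations(depth)) > 0  # pixels far from any valid depth fail
```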

However, 3D Warp provides markedly better image stability and lower perceived latency than 2D Warp, so it is often worth designing AR scenes with the above restrictions in mind whenever possible.

Limitations of 2D Warp and 3D Warp

Both 2D Warp and 3D Warp share some common limitations:

  1. Head-locked (i.e. HUD) content is generally not compatible with these warp systems at this time. If you observe head-locked content juddering in your application, disabling these warp options should resolve the issue.
  2. Neither can disocclude objects that were hidden when the application rendered the scene into a 2D image. Disocclusion is exacerbated when a user translates, but it also happens when a user rotates their head, since head rotation occurs at the neck. When 3D Warp synthesizes a frame, it inpaints the disoccluded areas with nearby color information; in AR, such inpainting is often unnoticeable.
  3. Neither can faithfully synthesize view-dependent phenomena such as specular highlights.
  4. Transparency effects cannot be synthesized faithfully either, because a pixel can only have a single depth value, while in a transparent area a pixel may be composed of multiple objects at different depths. To compensate, we recommend enabling Z-write for the outermost transparent surface. The resulting visual artifact is usually unnoticeable if the objects behind the transparency are not too far behind it.

In AR, the above limitations are usually not show-stoppers because most of our view is of reality, and the artifacts are almost unnoticeable. The artifacts become most noticeable when an application runs very poorly and drops a lot of frames. In that case, the Compositor has to extrapolate a previously cached frame to fill in the missing frames.

Dynamic Latewarp

In order to reduce the motion-to-photon latency, the Compositor samples the head pose and schedules the final warp (lens warp including 2D/3D warps) as late as possible, leaving just enough time (headroom) to finish the render before the image is scanned out to the display upon VSYNC. Even though the final warp is bounded by the number of pixels on the display, which is a constant, the required headroom is quite variable in a real system, particularly due to performance throttling on laptops as part of power management.

Because a single fixed headroom doesn’t necessarily work across different machines, the Compositor adapts the headroom at runtime based on the instantaneous performance of the system. This results in smooth rendering and low perceived latency across machines at all times.
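The adaptation loop can be sketched as follows. This is an illustrative Python model of the idea (the class, parameter values, and safety margin are hypothetical, not the Compositor's actual implementation):

```python
# Sketch of adaptive latewarp scheduling: track recent warp times with an
# exponential moving average, keep the headroom a safety margin above them,
# and start the final warp that far before VSYNC so the head pose can be
# sampled as late as possible.
class LatewarpScheduler:
    def __init__(self, initial_headroom_ms=3.0, margin=1.5, alpha=0.1):
        self.margin = margin          # safety factor over measured warp time
        self.alpha = alpha            # EMA smoothing factor
        self.warp_time_ms = initial_headroom_ms / margin
        self.headroom_ms = initial_headroom_ms

    def record_warp_time(self, measured_ms):
        # Update the warp-time estimate and derive the new headroom.
        self.warp_time_ms += self.alpha * (measured_ms - self.warp_time_ms)
        self.headroom_ms = self.margin * self.warp_time_ms

    def warp_start_time(self, vsync_ms):
        # Start the warp just early enough to finish before scan-out.
        return vsync_ms - self.headroom_ms

sched = LatewarpScheduler()
# e.g. a laptop throttles mid-session and warps start taking longer;
# the headroom grows to follow, so frames keep finishing before VSYNC.
for t in [2.0, 2.0, 4.0, 4.0]:
    sched.record_warp_time(t)
assert sched.warp_start_time(16.6) == 16.6 - sched.headroom_ms
```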

Recommendations for Unity Developers

Transparencies are particularly challenging when they are drawn without writing to the Z-buffer (e.g. particle systems), since every pixel must have some valid depth value for 3D Warp to function properly. In this section we illustrate how to use 3D Warp in a Unity scene that contains many transparencies:

  1. We explain the principle of a general solution.
  2. We give a practical example.

Principle of Nested Transparencies

Consider a simple scene with a large number of particles being additively blended with transparency. Typically, Z-test is turned on so that particles collide with the world, but Z-write is turned off so that particles do not occlude each other. This becomes tricky when this particle system is drawn in an open space because there is no longer any depth value written to the Z-buffer. In this case, 3D Warp will fail, causing severe visual artifacts even if the application does not drop any frames.

The simplest fix could be to do a post depth pass (after drawing the transparencies) with a bounding box for the entire particle system: enable Z-test and Z-write, while disabling writes to the color buffer (ColorMask 0). Such a crude approximation of depth will result in noticeable visual artifacts if the application drops too many frames. A better approximation would be to subdivide the bounding box into smaller pieces that more closely capture the shape of the particle system.

The key idea is to encapsulate transparencies with a tight bounding volume, whatever shape that may be, and perform a depth-only pass with it.
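As an illustration of this idea, a depth-only pass for such a bounding volume could look like the following minimal ShaderLab sketch (the shader name and queue offset are hypothetical, not part of the SDK):

```
Shader "Custom/DepthOnlyBounds" {
    SubShader {
        // Draw after the transparencies so the volume's depth is available
        // to 3D Warp without changing the visible image.
        Tags { "Queue"="Transparent+1" }
        Pass {
            ColorMask 0   // disable all color writes
            ZWrite On     // write depth only
            ZTest LEqual
        }
    }
}
```

Assign this shader to a mesh that tightly encloses the particle system; the pass contributes nothing visible, only depth.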

Example of Nested Transparencies with the Workspace Campfire Scene

The Meta Workspace’s Campfire uses three particle systems: one for smoke, one for fire, and one for sparks. Each one uses a different Unity particle shader: Particles/Alpha Blended, Particles/Additive (Soft), and Particles/Additive, respectively. When 3D Warp is turned on, none of the particle systems are visible, as none write to the depth buffer. We started to adjust the campfire by downloading the Unity shader source code from the Unity download archive.

// Unity built-in shader source. Copyright (c) 2016 Unity Technologies. MIT license (see license.txt)

Shader "Particles/Additive" {
    Properties {
        _TintColor ("Tint Color", Color) = (0.5,0.5,0.5,0.5)
        _MainTex ("Particle Texture", 2D) = "white" {}
        _InvFade ("Soft Particles Factor", Range(0.01,3.0)) = 1.0
    }

    Category {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "PreviewType"="Plane"}
        Blend SrcAlpha One
        ColorMask RGB
        Cull Off Lighting Off ZWrite Off
    }
}

We then modified these three shaders by turning on writing to the depth buffer in each. For the smoke and fire effects, the change didn’t produce a good result, as writing to the depth buffer blocked the other sprites. Due to the particulars of the setup, having only the spark particles write to the depth buffer produced a reasonable result: with the sparks writing depth, the fire and smoke particles became visible, except at the very bottom of the fire. The fire particles started lower than the sparks, so they only became visible once they had moved up.

// Unity built-in shader source. Copyright (c) 2016 Unity Technologies. MIT license (see license.txt)

Shader "Particles/Additive" {
    Properties {
        _TintColor ("Tint Color", Color) = (0.5,0.5,0.5,0.5)
        _MainTex ("Particle Texture", 2D) = "white" {}
        _InvFade ("Soft Particles Factor", Range(0.01,3.0)) = 1.0
    }

    Category {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "PreviewType"="Plane"}
        Blend SrcAlpha One
        ColorMask RGB
        Cull Off Lighting Off ZWrite On
    }
}

In order to address this, we used a similarly modified Particles/Additive (Soft) shader that writes to the depth buffer. This reintroduced the issue of overlapping fire particles blocking each other, so we modified the shader further by turning off Z-testing (ZTest Always), which lets a fire particle draw even when another particle is in front of it. Finally, we made the fire and spark materials use a large render queue (3999 and 3998, respectively), so that the fire is rendered after all other Workspace models.

// Unity built-in shader source. Copyright (c) 2016 Unity Technologies. MIT license (see license.txt)

Shader "Particles/Additive (Soft)" {
    Properties {
        _MainTex ("Particle Texture", 2D) = "white" {}
        _InvFade ("Soft Particles Factor", Range(0.01,3.0)) = 1.0
    }

    Category {
        Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" "PreviewType"="Plane"}
        Blend One OneMinusSrcColor
        Cull Off Lighting Off
        ZWrite On    // was: ZWrite Off
        ZTest Always // added so overlapping fire particles do not block each other
    }
}

Ultimately, making the campfire work with 3D Warp required two shader changes. In Particle Add.shader, we toggled ZWrite from Off to On. In Particle AddSmooth.shader, we did the same and additionally added ZTest Always.

Requirements

  • Everything must write to depth. As mentioned above, in order to perform image stabilization using 3D Warp, depth information of the scene is required. Because of that, the following requirements must be met:
    • When text appears over a panel, it needs to be at the same depth as the panel.
    • A glass effect can still be implemented if the glass’s depth is drawn in a separate pass and the objects are properly ordered.

Techniques and Recommendations

  • When drawing complex transparencies, nest them inside a tight bounding volume (a box, sphere, or covering hull) and render its depth with a depth-only pass.
  • Ensure that all the shaders (especially shaders which are opaque) write to the depth buffer.
  • Getting semi-transparent objects to render can be a bit tricky. Render them back to front (which is done anyway) and use the depth of the outermost object. As long as the objects behind the semi-transparent ones are not too far away, the result should look fine most of the time. For example, the Workspace Bee’s eye is semi-transparent and the eyeball behind it is opaque, but you can’t really see any visual artifacts unless you look very closely.
  • When there is a transparent layer or surface around a 3D object not only does the shader need to be modified to write to depth, but it might be necessary to change the order the layers are drawn by changing the order value in the render queue. Make sure the transparent layer is drawn last (so that it writes to depth and is visible).
  • A pure sprite renderer might have issues, as Unity’s default and diffuse sprite shaders do not write to the depth buffer. To modify the shader:
    1. Download the Unity shader source (available in Unity’s download archive) for the version of Unity being used. The shader can then be modified to write to the depth buffer.
    2. Create a material that uses the modified shader.
    3. Apply the material to the sprite renderer(s). If two or more sprite renderers are at the same position, they will now have z-fighting issues. The easiest fix for this is to move one so that it is at a slightly different depth.
  • Another issue with sprites, at least with the default shader modified as above, is that making them invisible by setting their alpha to zero causes a rendering artifact: the invisible sprite still occludes other items because it writes to the depth buffer. One solution is to disable the sprite renderer instead of setting its alpha to zero. Alpha testing would likely also work.
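As a sketch, the sprite-shader modification described in step 1 above usually amounts to flipping a single render state in the downloaded Sprites/Default source. The excerpt below is hypothetical (only the relevant state is shown; details vary by Unity version):

```
// Hypothetical excerpt of a modified Sprites/Default shader;
// only the changed render state is shown.
SubShader {
    Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
    Cull Off
    Lighting Off
    ZWrite On    // was: ZWrite Off, so sprites now participate in 3D Warp
    Blend One OneMinusSrcAlpha
    // ... vertex/fragment program unchanged ...
}
```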

Clipping Planes

Modifying the clipping planes of the cameras within your Unity scene is supported by controls on the Meta Compositor component of the MetaCameraRig prefab. The near and far plane distances can be individually controlled with the sliders or their corresponding text fields. A button to reset to defaults appears if these settings are modified.

Caveats:

  • Do not modify the clipping plane settings on the Unity Camera GameObjects directly. Use the aforementioned controls instead.
  • The Meta 2’s optics are designed for viewing objects that are within arm’s reach because we believe that AR is at its best when you can naturally reach out and interact with virtual objects. Because of that, visual artifacts may appear when the far clipping plane is extended beyond the default distance and objects appear past this distance.
  • Objects which are on the edge of the clipping plane will partially disappear or shimmer. This is expected behavior within any 3D rendering engine.
