whatbrentsay

    3/3

    AR and extended displays

    • tech
    • augmented reality

    One of the use cases I'm most interested in for AR/MR devices is how they will extend existing physical device displays. Full-on virtual displays will no doubt be very cool—and much desired—but AR/MR experiences will also be able to accessorize the existing devices we already use every day. Considering how limited the technology may be when these devices first reach consumers, this use case may be among the early compelling ones.

    Oddly enough, the hand terminals in The Expanse TV show exhibit the basic principle:

    Expanse handheld terminal with interface elements outside of physical display bounds

    Expanse tablet-sized terminal with interface elements outside of physical display bounds

    These devices are common throughout the show and often feature UIs that extend outside of the physical display area. Characters can interact with the elements beyond the screen's edge. It's like an extremely local kind of 2D projection—very cool sci-fi tech.

    While the technology doesn't exist to extend displays beyond their bounds like this, a wearable AR device could place interface elements outside of an existing device's screen in the same way. Even the limited field of view of current AR devices wouldn't be much of a problem for this use case, since we tend to look directly at our phones and tablets when we're using them.

    Extending device displays in this way would allow apps to place secondary UI elements outside of the main display, reserving that screen space for what's most important. This may become especially appealing if early AR glasses don't boast displays as high in resolution (or quality—color accuracy, range, contrast, etc.) as phones and tablets do. Additionally, this kind of display extension could offer improved user experiences in certain scenarios. Imagine looking at a single, full-screen photo on screen but being able to see several photos taken before and after it to the left and right. Or imagine scrolling a web page, document, or feed and seeing scrollable content above and below the screen. These are simple examples, but screen size has always been one of the major limitations of the phone form factor, and AR could remedy this in some situations.
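
    To make that a little more concrete, here's a rough sketch of how a companion panel could be pinned just outside a phone's screen using Apple's ARKit/RealityKit image anchoring. It's purely illustrative: the resource group "DeviceImages", the reference image name "phoneScreen", and the panel sizes and offsets are assumptions of mine, and a real implementation would need to track the live device rather than a static reference image.

        import ARKit
        import RealityKit

        // Illustrative sketch: place a flat "side panel" entity just to the
        // right of a tracked reference image standing in for a phone screen.
        // "DeviceImages"/"phoneScreen" are hypothetical assets you'd add to an
        // AR Resource Group in Xcode.
        func addSidePanel(to arView: ARView) {
            // Anchor that follows the tracked reference image in the real world.
            let screenAnchor = AnchorEntity(.image(group: "DeviceImages", name: "phoneScreen"))

            // A simple plane standing in for secondary UI (thumbnails, controls, etc.).
            let panel = ModelEntity(
                mesh: .generatePlane(width: 0.07, height: 0.15),
                materials: [SimpleMaterial(color: .white, isMetallic: false)]
            )

            // Offset the panel roughly 9 cm to the right so it sits just past
            // the physical display's edge.
            panel.position = [0.09, 0, 0]

            screenAnchor.addChild(panel)
            arView.scene.addAnchor(screenAnchor)
        }

    In practice you'd want to track the device itself (via its reported pose or a shared anchor) rather than a reference image, but the anchor-plus-offset pattern is the core of what "extending" a physical screen would look like.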

    Alternatively, a similar but opposite use case is also compelling—perhaps more so than the one above. Instead of maximizing a display by moving elements just outside of screen bounds, an AR device could allow your phone or tablet to become a secondary, controlling display. In this case, the primary app content would move into AR space and the phone/tablet would become a glorified remote, offering the user a variety of custom controls. What could appeal most to users here is the privacy it would offer: pairing an AR device with a phone/tablet could add a new dimension of privacy in public spaces or, more generally, around prying eyes.

    Having a visualization of content you're sharing between two devices would also be cool. So would the option of pinning persistent non-interactive elements, like widgets, around your device screen. The possibilities here are vast and I expect these use cases will be part of early AR device offerings, particularly those belonging to existing software ecosystems—iOS, Android, Samsung's One UI/DeX, etc. These kinds of simple "better together" synergistic uses could be more feasible up front. I'm reminded of how dependent Apple Watch was on iPhone at the start of its life and see a very similar path for the first few generations of AR devices. That bodes well for companies with deep ecosystems and a variety of devices but may disadvantage those with shallower portfolios.