Why is XR so Unsustainable?

Let’s consider a great definition of sustainability – from the classic book “Design is the Problem” by Nathan Shedroff

Shedroff says it in many ways, but the core idea is that sustainable design minimizes energy and resource consumption, and does not deplete those resources. The goal is "future friendly" design that will still work in the future.

Now, contrast that with the world of Extended Reality (XR), which is an umbrella term for Virtual, Augmented, and Mixed Reality. XR is currently in its 3rd or 4th (if you count Sensorama) attempt at commercial viability. After the excitement of 2016, headset sales actually fell, reaching a low in 2018. During the last part of 2020, the Oculus Quest 2 defined a better hardware form factor (standalone) for XR, sales went up, apps made money – and we're off to the XR races (again).

All the various incarnations of XR are more energy-intensive than standard web pages, or video apps like Zoom. There are several reasons for this. First, rendering a 3D space is more CPU- and GPU-intensive. On average, users need more powerful computers to render an XR scene. This is especially true for 'tethered' XR headsets, which use a host computer GPU to render images. The newest headsets, like the Oculus Quest 2, can operate in 'standalone' mode and draw a tiny fraction of the power of a desktop, so this problem may be reduced in the future.
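A back-of-envelope sketch makes the tethered-versus-standalone gap concrete. All wattages below are my own rough assumptions for illustration, not measurements of any specific device:

```python
# Rough per-session energy comparison for different XR setups.
# Every wattage here is an assumed, illustrative figure.
TETHERED_DESKTOP_W = 450    # assumed: gaming PC (CPU + discrete GPU) under load
STANDALONE_HEADSET_W = 7    # assumed: mobile-class SoC in a standalone headset
LAPTOP_VIDEO_CALL_W = 40    # assumed: laptop running a 2D video call

def kwh(watts: float, hours: float) -> float:
    """Energy in kilowatt-hours for a session of the given length."""
    return watts * hours / 1000

session_hours = 2
for label, watts in [("tethered XR", TETHERED_DESKTOP_W),
                     ("standalone XR", STANDALONE_HEADSET_W),
                     ("2D video call", LAPTOP_VIDEO_CALL_W)]:
    print(f"{label}: {kwh(watts, session_hours):.3f} kWh per {session_hours}h session")
```

Under these assumptions a tethered rig burns roughly 60 times the energy of a standalone headset for the same session – which is why the standalone form factor matters for sustainability.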

Second, to reduce motion sickness, XR scenes have to be rendered at a higher framerate than a desktop display. Typical desktop refresh rates are 60-90 fps. Anything below 90 fps in XR will make the virtual world 'misalign' with the message being sent by your senses of orientation and balance. Small lags, where the XR system struggles to catch up to the user's movements, are the prime cause of barf in XR. The problem can only be solved with faster, more powerful headsets.
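The framerate requirement translates directly into a per-frame time budget the renderer must hit, every frame, or the user gets sick. A quick calculation:

```python
# Frame-time budget at various refresh rates: the renderer must finish
# all CPU and GPU work for a frame inside this window.
def frame_budget_ms(fps: int) -> float:
    """Milliseconds available to render one frame at the given framerate."""
    return 1000 / fps

for fps in (60, 72, 90, 120):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
```

At 90 fps the budget is about 11 ms per frame, versus roughly 16.7 ms at a desktop-typical 60 fps – a third less time to do strictly more rendering work.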

Third, current XR scenes don't actually fill your field of view. If we look down on a person's head, the arc of vision is about 220 degrees, with about 120 degrees of that being central vision. The best headsets don't quite match central vision ("direct field of view" in the image below) and do not fill the remaining peripheral vision.

Before you shout 'Pimax' – note that this headset, while providing a wider field of view, does so at the expense of your vision up and down. It's like you're looking at an old Cinemascope wraparound screen from the 1950s.

Wide, but nothing above, and nothing below…

Plans to fix this include ‘foveated’ vision, where the headset dynamically tracks your eye movements, and draws high resolution only in that area. This makes it practical to fill in the rest with lower-quality data for your peripheral vision. Once again, the eye-tracking and decision-making by the rendering system will require more power.
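The appeal of foveated rendering is easy to show with a rough pixel-count estimate. The fractions below are illustrative assumptions, not specs of any shipping headset:

```python
# Rough estimate of shading-work savings from foveated rendering,
# assuming full resolution in a small foveal region and reduced
# resolution everywhere else. All figures are illustrative assumptions.
TOTAL_PIXELS = 3840 * 2160    # assumed per-eye panel ("4K per eye")
FOVEAL_FRACTION = 0.05        # assumed: ~5% of the panel covers the fovea
PERIPHERY_SCALE = 0.25        # assumed: periphery shaded at 1/4 pixel density

full_cost = TOTAL_PIXELS
foveated_cost = (TOTAL_PIXELS * FOVEAL_FRACTION
                 + TOTAL_PIXELS * (1 - FOVEAL_FRACTION) * PERIPHERY_SCALE)
print(f"shading work vs. full render: {foveated_cost / full_cost:.0%}")
```

Under these assumptions the GPU shades less than a third of the pixels – but, as noted above, the eye tracker and the decision-making logic that enable this saving consume power of their own.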

Next, we have 'haptics' – broadly, devices that allow the user to interact with the virtual world. The default haptic is a handheld controller, borrowed directly from game console gamepads, and these are low-powered. Recent headsets have added tracking of hands and fingers using an onboard camera; the computation of hand position and motion, once again, drives up those CPU cycles. There have been lots of attempts to extend haptics beyond controllers. Various 'bodysuits' have been devised, which vibrate or press on the user to create an illusion of solidity. Others can create a sensation of heat. There is even a haptic suit that causes pain if you make a mistake. Even these kinds of haptics are relatively low-powered.

On the other hand, future XR is often imagined as using body suits, vests, and machines which mechanically move you around. For just one user, the haptic investment skyrockets.

But the big factor in XR is streaming. Future visions of XR automatically assume 5G, or even 6G, networks will deliver the necessary bandwidth. It's pretty easy to see why. Imagine we had a headset which delivers "ultra high definition" (4K per eye) at 90 fps. The data needed at the level of the user's eyes runs to hundreds of megabytes per second just for the most central region of vision. If we make the user's vision fully immersive, 1.0 gigabit/second transmission looks like an entry-level standard.
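The arithmetic behind that claim is straightforward. Here is a sketch for an uncompressed "4K per eye" stereo stream, with an assumed (and optimistic) compression ratio tacked on:

```python
# Raw bandwidth for "4K per eye" stereo video at 90 fps, 24 bits per pixel,
# before any compression.
width, height, eyes, fps, bits_per_px = 3840, 2160, 2, 90, 24

bits_per_second = width * height * eyes * fps * bits_per_px
gbps = bits_per_second / 1e9
print(f"raw stream: {gbps:.1f} Gbit/s")

compression_ratio = 50  # assumed: aggressive video-codec compression
print(f"at {compression_ratio}:1 compression: {gbps / compression_ratio * 1000:.0f} Mbit/s")
```

The raw stream works out to roughly 36 Gbit/s; even at an aggressive 50:1 compression ratio, you're still left needing on the order of 700 Mbit/s – which is why a gigabit connection reads as entry-level for fully immersive XR.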

Compare to our current web, where users are downloading a few megabytes per minute for a standard web page. Even with aggressive data compression, XR will need vast increases in bandwidth and network infrastructure to work.

And our problem – we are asking for more energy for a new medium, potentially hundreds of times higher than the old web-based one, and dozens of times higher than high-definition videoconferencing. Where does that energy come from?

The reality is that renewables are ticking up at a very slow rate. Consumption of coal, supposedly 'going away,' is rising above 2019 levels after a plunge during the 2020 pandemic. The growth of overall energy use is still higher than the growth of renewables – in other words, our 'new' energy is not adequately replacing the old.

As you can see, 'renewables' form a very small percentage of the total. If you pull out hydropower and nuclear (older technologies which don't actively emit CO2 during energy production), the situation looks worse.

During the 2020 pandemic, the skies cleared in major cities. Some people have thought of this as a 'dry run' for combatting climate change – we should have a 'lockdown' every 18 months or so until we meet climate goals. But a look at actual CO2 emissions during 2020 makes that nonsensical.

Look for the change here – CO2 is UP in February 2021 versus February 2020.

Clearly, going to Zoom didn't somehow 'save the planet' with reduced emissions. Instead, our emissions followed almost the same pattern in 2020 that they did in 2019, and we're on schedule in 2021 to emit even more than in 2019.

Videoconferencing is currently touted as "greener" than actually going to the office. As we have seen in other instances, this is not necessarily true. A print run of books might be more efficient than PDFs in online courses. A recent study found that videoconferencing can reduce energy consumption, but not by much – the savings are incremental, rather than revolutionary.

All this conspires to put XR in the hot seat for Sustainable Virtual Design. Due to its greater energy and resource consumption, there has to be a compelling use case. But most XR experiences at present are just 2D games pulled into XR “barf mode.”


In other words, the "use case" for XR versus a game on your TV screen is… the mismatch between your still inner ear and the frantically moving 'gamer' world, which introduces vomit.

Hmmm, GREAT REASON to emit more CO2!
