At this year’s Consumer Electronics Show, I walked many miles and saw countless demos. Several of these demos were memorable, but one in particular really got my mental gears turning: Microsoft’s HoloLens.
HoloLens, of course, is Microsoft’s “mixed reality” glasses product, which has been shipping in pre-production form for about a year. (Previously, I would have used the term “augmented reality” to refer to HoloLens, which overlays computer-generated graphics on the user’s view of the physical world. But here I’m adopting Microsoft’s preferred term, “mixed reality,” which many people now use to describe systems in which “people, places, and objects from your physical and virtual worlds merge together.”)
Over the past five years, I’ve seen many demos of virtual reality, augmented reality and mixed reality. Most of these showed promise – but the promise usually felt distant, because the demos weren’t sufficiently polished to feel “real,” and weren’t easy to use.
That was then. This is now: HoloLens has nailed both the “feels real” and ease-of-use aspects.
Wearing HoloLens, I played a shoot-’em-up video game against an army of robots, illustrated in this video. The experience was stunning, thanks to three key capabilities. First, HoloLens is a wearable, battery-powered device, so I was able to move about the room to dodge hostile robots. Second, HoloLens accurately mapped the room I was in, enabling the robotic invaders to create what really looked like cracks in the actual walls of the room. And third, as I turned my head and shifted my position within the room, HoloLens adapted to these movements seamlessly, so that the illusion of merged physical and virtual worlds was maintained.
Now that I’ve experienced robust mixed reality, I foresee many compelling applications for this technology beyond gaming: Enabling physicians to see inside a body to enable safer, more accurate treatment. Giving utility workers a clear view of underground pipes and cables. Providing consumers with a realistic preview of how a room will look after redecorating it. Allowing museum visitors to see a skeleton transform into a fully formed, animated dinosaur. (The fact that HoloLens sells for $3,000 suggests that, for a while at least, this technology is more likely to be adopted by hospitals, utility companies and museums than by individual consumers.)
Of course, a convincing mixed reality (“MR”) experience – one in which the virtual and physical worlds interact in a realistic way – requires the MR device to maintain an accurate understanding of the surrounding physical world – and the user’s position within it – in three dimensions, with very low latency. That is, it requires fast, highly accurate 3D computer vision.
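To make the tracking requirement concrete, here is a minimal sketch of the core geometric step an MR device performs every frame: re-expressing a hologram anchored in world coordinates in the user’s current head frame, using the latest pose estimate. This is purely illustrative; a real system estimates a full 6-DoF pose (3D rotation plus translation) at a high rate, whereas this sketch reduces rotation to a single yaw angle for clarity, and the function name is hypothetical, not a HoloLens API.

```python
import math

def head_pose_transform(yaw_rad, camera_pos, world_point):
    """Re-express a world-space hologram anchor in the head (camera) frame.

    Illustrative sketch only: rotation is reduced to yaw about the vertical
    axis. A real MR tracker supplies a full 6-DoF pose with low latency.
    """
    # Translate so the head is at the origin of the new frame.
    dx = world_point[0] - camera_pos[0]
    dy = world_point[1] - camera_pos[1]
    dz = world_point[2] - camera_pos[2]
    # Rotate by the inverse of the head's yaw: as the head turns one way,
    # the world-anchored content appears to swing the other way.
    c, s = math.cos(-yaw_rad), math.sin(-yaw_rad)
    return (c * dx + s * dz, dy, -s * dx + c * dz)

# A hologram anchored 2 m straight ahead of a user standing at the origin;
# after the head turns 90 degrees, it appears off to the side instead.
p = head_pose_transform(math.pi / 2, (0.0, 0.0, 0.0), (0.0, 0.0, 2.0))
```

The point of the low-latency requirement is that this transform must use a pose estimate that is only milliseconds old; if the pose lags the user’s actual head motion, anchored holograms visibly swim relative to the physical room and the illusion collapses.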
Mixed reality doesn’t necessarily require a wearable device. Vehicle applications, for example, can use the windshield as a projection screen. And 8tree’s clever handheld device for quantifying surface damage projects information onto the surface being inspected. But in many cases, glasses are the most compelling way to deliver mixed reality: they leave your hands free, they know where you are looking, and they can project information into your field of view wherever you look. Packing all of the technology required for a convincing MR experience into a wearable device is a daunting challenge, however. With HoloLens, Microsoft has given us a hint of what’s possible. The HoloLens team has clearly put enormous effort into everything from custom chips to industrial design to create a device that’s reasonably comfortable to wear (though still bulky).
A few years ago, the Microsoft Kinect had a major catalyzing effect on many people’s thinking about low-cost 3D visual perception. In a similar fashion, I believe that HoloLens will spur many “aha” moments, leading to accelerated innovation in wearable computer vision devices, low-power 3D computer vision, and mixed reality. Given the importance of these technologies, I’m thrilled that Marc Pollefeys, Director of Science for HoloLens and a pioneer in 3D computer vision, will be a keynote speaker at the 2017 Embedded Vision Summit, taking place May 1-3, 2017 in Santa Clara, California. My colleagues and I at the Embedded Vision Alliance are putting together a fascinating program of presentations and demonstrations, with emphasis on deep learning, 3D perception, and energy-efficient implementation. If you’re interested in implementing visual intelligence in real-world devices, mark your calendar and plan to be there. As a bonus, if you register for the Summit on the Alliance website by February 1 you can save 25% with discount code NLID0130. I look forward to seeing you in Santa Clara!