Jeff Bier’s Impulse Response—"Where Am I?"

Submitted by Jeff Bier on Mon, 03/02/2015 - 22:01

If you frequently travel across many time zones, you've probably had the unsettling experience of waking at an odd hour and being uncertain about where exactly you are. If you're like me, your instinctive solution to this confusion is to turn on a light and look around. Instantly, you recognize that you're in a hotel room. A few more visual clues (sometimes requiring a peek through the window) and suddenly you know where you are.

And although it's largely a subconscious process, looking around is also integral to how humans are able to safely and efficiently move through unfamiliar, hazardous and changeable physical spaces, whether it's a Barcelona subway platform or a Dallas hotel room.

So, it's natural to think that mobile machines – whether they're warehouse robots, self-driving or semi-autonomous cars, or package delivery drones – will also need vision, both to determine their position and orientation in the physical world, and to navigate safely. And this turns out to be true. While other types of sensors – such as GPS, radar and LIDAR – are valuable, they have significant limitations that render them, by themselves, insufficient. For example, keeping a moving car within the bounds of its lane is best accomplished by actually looking at the lane markings. That’s how human drivers do it and, not surprisingly, that's also how semi-autonomous and self-driving cars do it.

As embedded processors and image sensors become more powerful, less expensive, and more energy efficient, vision is proliferating into smaller devices as well. Take home floor cleaning robots, for example. Many people are familiar with the iRobot Roomba product family. Introduced in 2002, the Roomba line pioneered robotic vacuum cleaners and, according to iRobot, has sold more than 10 million units – impressive for a product that sells for several hundred dollars. But one of the key limitations of the Roomba is that it doesn't know where it is. This leads to frequent complaints about it getting stuck, and cleaning some areas repeatedly while missing others.

Enter Dyson’s 360 Eye robotic vacuum cleaner, announced in September 2014. As its name suggests, the 360 Eye incorporates a 360-degree camera. The camera – along with a powerful embedded processor and sophisticated algorithms – enables the cleaning robot to solve the so-called "simultaneous localization and mapping" (SLAM) problem. In other words, by looking around as it moves, the robot builds a map of the room, determines its location in the room, and devises a path to accomplish its mission – much like you do when you wake in the middle of the night in an unfamiliar hotel room.
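For a concrete feel of what "simultaneous" means here, the toy sketch below (in Python, with made-up noise levels and a single hypothetical landmark – this is not Dyson's algorithm, just an illustration of the idea) shows a robot jointly refining its own position estimate and its one-landmark "map" from noisy odometry and range measurements:

    # Toy 1-D illustration of the SLAM idea: the robot estimates its own
    # position AND the position of a landmark it can see, using noisy
    # odometry plus noisy range measurements. Deliberately simplified;
    # all parameters are made up for illustration.

    import random

    random.seed(0)

    TRUE_LANDMARK = 5.0      # ground-truth landmark position (unknown to the robot)
    ODOM_NOISE = 0.05        # std-dev of per-step wheel slip
    RANGE_NOISE = 0.02       # std-dev of range-measurement error
    BLEND = 0.5              # how strongly each measurement corrects the estimates

    true_pose = 0.0          # where the robot actually is
    est_pose = 0.0           # where the robot thinks it is (dead reckoning)
    est_landmark = None      # the "map": estimated landmark position

    for step in range(20):
        # --- motion: command a 0.2 m move; the wheels slip a little ---
        commanded = 0.2
        true_pose += commanded + random.gauss(0.0, ODOM_NOISE)
        est_pose += commanded            # odometry trusts the command, so drift accumulates

        # --- observation: noisy range to the landmark (the "looking around" part) ---
        measured_range = (TRUE_LANDMARK - true_pose) + random.gauss(0.0, RANGE_NOISE)

        if est_landmark is None:
            # First sighting: add the landmark to the map using the current pose estimate.
            est_landmark = est_pose + measured_range
        else:
            # Later sightings: split the mismatch between predicted and measured range
            # into a pose correction and a map refinement.
            predicted_range = est_landmark - est_pose
            error = measured_range - predicted_range
            est_pose -= BLEND * error / 2.0
            est_landmark += BLEND * error / 2.0

    print(f"true pose:           {true_pose:.2f}")
    print(f"estimated pose:      {est_pose:.2f}")
    print(f"true landmark:       {TRUE_LANDMARK:.2f}")
    print(f"estimated landmark:  {est_landmark:.2f}")

Real systems like the 360 Eye work in two or three dimensions with thousands of visual features rather than one hand-placed landmark, but the core idea is the same: each new observation is used both to correct the robot's estimate of where it is and to improve its map of the room.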

Dyson isn't the first to offer a robotic vacuum cleaner. And they're not the first to implement a solution to the SLAM problem. But Dyson's persistent pursuit (over 16 years) of a better product, coupled with advances in enabling technology like embedded processors and algorithms, has enabled Dyson to deliver real innovation for consumers.

I'm thrilled that Mike Aldred, the lead robotics developer at Dyson, will be one of the keynote speakers at the Embedded Vision Summit, taking place on May 12 in Santa Clara, California. The Embedded Vision Summit is a unique conference for product developers who are creating more intelligent products and applications by integrating computer vision into their designs. Join us at the Summit to hear Mike's story about how Dyson overcame tough technical challenges to achieve the needed combination of performance, price and reliability for their application. You'll also have the opportunity to hear dozens of other top-notch presentations providing expert insights and practical know-how for integrating visual intelligence into your products. The Summit will be packed with world-class expert presenters. I hope to see you there!

Jeff Bier is president of BDTI and founder of the Embedded Vision Alliance. Post a comment here or send him your feedback at http://www.BDTI.com/Contact.
