Jeff Bier’s Impulse Response—Machines that See: The Next Embedded Processing Frontier?

Submitted by Jeff Bier on Thu, 07/29/2010 - 17:00

Lately I’ve been thinking about what I call “embedded vision”: the use of computer vision technology in embedded systems.  Wireless communication has become pervasive over the past 10 years; I believe embedded vision will be just as widely deployed over the next 10 years.

It’s clear that embedded vision technology can bring huge value to a vast range of applications.  Two of my favorites are Mobileye’s vision-based driver assistance systems, intended to help prevent motor vehicle accidents, and MG International’s swimming pool safety system, which helps prevent swimmers from drowning.  And for sheer geek appeal, it is hard to beat Intellectual Ventures’ laser mosquito zapper, designed to prevent people from contracting malaria.

Just as high-speed wireless connectivity began as an exotic, costly technology, embedded vision technology has so far typically been found in complex, expensive systems, such as surgical robots for hair transplantation and quality-control inspection systems for manufacturing.

Advances in digital integrated circuits were critical in enabling high-speed wireless technology to evolve from exotic to mainstream.  When chips got fast enough, inexpensive enough, and energy-efficient enough, high-speed wireless became a mass-market technology.  Today you can buy a broadband wireless modem for your laptop for under $100.

Similarly, advances in digital chips are now paving the way for the proliferation of embedded vision into high-volume applications.  Like wireless communication, embedded vision requires lots of processing power, particularly as applications adopt higher-resolution cameras and make use of multiple cameras.  Providing that processing power at a cost low enough to enable mass adoption is a big challenge, and it is compounded by the fact that embedded vision applications require a high degree of programmability.  In wireless, standards mean that algorithms don’t vary dramatically from one cell phone handset to another; in embedded vision, by contrast, there are great opportunities to get better results, and to enable valuable features, through unique algorithms.
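
To make that processing demand concrete, here is a minimal sketch, in C, of one of the cheapest building blocks in a vision pipeline: a 3x3 Sobel edge filter.  This is my own illustration rather than any particular vendor’s algorithm, and the function name and interface are hypothetical.  At 1080p and 30 frames per second (roughly 62 million pixels per second), even this simple operator costs on the order of a billion arithmetic operations per second.

    #include <stdint.h>
    #include <stdlib.h>

    /* 3x3 Sobel edge-magnitude filter over an 8-bit grayscale frame.
       Illustrative sketch only; border pixels are left untouched. */
    void sobel(const uint8_t *in, uint8_t *out, int w, int h)
    {
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                const uint8_t *p = in + (size_t)y * w + x;
                /* Horizontal and vertical gradients: a couple dozen
                   adds, subtracts, and shifts per pixel. */
                int gx = -p[-w - 1] - 2 * p[-1] - p[w - 1]
                         + p[-w + 1] + 2 * p[1] + p[w + 1];
                int gy = -p[-w - 1] - 2 * p[-w] - p[-w + 1]
                         + p[w - 1]  + 2 * p[w] + p[w + 1];
                int mag = abs(gx) + abs(gy);  /* cheap L1 magnitude */
                out[(size_t)y * w + x] = mag > 255 ? 255 : (uint8_t)mag;
            }
        }
    }

Run at camera rate on one or more high-resolution sensors, even primitives this simple add up quickly, and the far heavier stages stacked on top of them (feature extraction, tracking, classification) are what drive vision workloads toward the kinds of chips discussed below.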

With embedded vision, I believe that the industry is entering a “virtuous circle” of the sort that has characterized many other digital signal processing application domains.  Although few chips today are dedicated to embedded vision, vision applications are increasingly adopting high-performance, cost-effective processors developed for other markets, including DSPs, CPUs, FPGAs, and GPUs.  As these chips continue to deliver more programmable performance per dollar and per watt, they will enable the creation of more high-volume embedded vision products.  Those high-volume applications, in turn, will attract more attention from silicon providers, who will deliver even better performance, efficiency, and programmability.  And so on.

As I write this, I’m sitting at my kitchen table with my laptop and my cell phone.  My wife is sitting across from me, similarly equipped.  Between the two of us, we have ten digital wireless transceivers at our fingertips.  Of course, we don’t think of them as digital wireless transceivers.  In fact, we really don’t think of them at all.  They’ve become part of the invisible infrastructure that enables us to get our jobs done efficiently and conveniently.  Ten years from now, how many embedded vision systems will be similarly integrated into our lives?

Are you working with embedded vision applications and technology?  If so, I’d like to hear about it.  Write to me at editor@InsideDSP.com.
