Jeff Bier’s Impulse Response—What’s the Best Processor for Embedded Vision?

Submitted by Jeff Bier on Mon, 07/21/2014 - 22:02

These days, more and more product creators are incorporating computer vision into their designs. At the recent Embedded Vision Summit conference, a majority of the roughly 500 attendees reported that they are currently working on a vision-enabled product, or plan to start a vision-based design within the next year. And, increasingly, these designs target high-volume markets, like the recently announced Amazon Fire smartphone and the collision-prevention systems now being offered in many mid-market automobiles. The opportunity to supply processors for these applications is drawing the attention of a growing number of processor vendors.

Embedded vision (my preferred term for the practical applications of computer vision) demands lots of processing performance. This is because most embedded vision systems operate on video data (which is high-rate data) using complex algorithms (which are needed in order to reliably extract meaning from images). In addition to being complex, vision algorithms tend to evolve continuously, which means that ease of programming is a key attribute for processors competing in this space.
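
To put rough numbers on this, here's a simple back-of-envelope estimate of the compute load of a single per-pixel processing stage on HD video. The resolution, frame rate and operations-per-pixel figures are illustrative assumptions, not measurements from any particular system:

    # Rough, illustrative estimate of the compute load of one per-pixel
    # vision-processing stage. All figures are assumptions for the example.
    width, height = 1920, 1080   # assumed 1080p video
    fps = 30                     # assumed frame rate
    ops_per_pixel = 50           # assumed cost of a modest filtering/feature step

    pixels_per_second = width * height * fps
    ops_per_second = pixels_per_second * ops_per_pixel

    print(f"Pixel rate: {pixels_per_second / 1e6:.0f} Mpixels/s")   # ~62 Mpixels/s
    print(f"Compute:    {ops_per_second / 1e9:.1f} GOPS")           # ~3.1 GOPS

Even this toy example lands in the billions of operations per second for a single stage, and real vision pipelines chain many such stages together.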

When we have data-intensive applications demanding lots of compute performance with low power, low cost and programmability, the answer is usually highly parallel, specialized processor architectures. And, indeed, that's a common theme among processors targeting vision applications. But despite this commonality, there's extreme diversity among architectures being promoted for vision applications. These include DSPs, GPUs, FPGAs and a variety of more specialized, vision-specific architectures.
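
To see why vision workloads map so naturally onto parallel hardware, consider a simple 3x3 box filter: each output pixel is computed independently from a small, fixed neighborhood of inputs, so the work can be spread across many lanes, cores or pipeline stages. The following is just a minimal NumPy sketch of that structure, not code for any particular processor:

    import numpy as np

    def box_filter_3x3(frame):
        # Average each pixel with its 3x3 neighborhood. No output pixel
        # depends on any other output pixel -- exactly the regular,
        # data-parallel structure that DSPs, GPUs and FPGAs exploit.
        padded = np.pad(frame, 1, mode="edge")
        out = np.zeros(frame.shape, dtype=np.float32)
        for dy in range(3):
            for dx in range(3):
                out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
        return out / 9.0

    # Example: one (assumed) 1080p grayscale frame
    frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    filtered = box_filter_3x3(frame)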

I'm frequently asked which type of processor is best for embedded vision applications. The reality is that each type of processor has its strengths and weaknesses, and each application has its unique requirements. Of course, there's no one class of processor that's best for all vision applications. In fact, even for a single vision application, often a combination of two or three different processor types is optimal. An example of this approach is the chip created by start-up Inuitive, which performs functions like hand gesture recognition and gaze tracking to create natural user interfaces. Inuitive's chip uses a combination of a CPU core, multiple vision-oriented DSP cores, and Inuitive's own acceleration engines.
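
As a purely hypothetical sketch of how such a heterogeneous partitioning might look (the stage names and processor assignments below are illustrative assumptions, not a description of Inuitive's design):

    # Hypothetical mapping of vision-pipeline stages to processor types.
    # Illustrative only; not any vendor's actual architecture.
    pipeline = [
        ("sensor preprocessing",     "fixed-function accelerator", "regular, per-pixel, always on"),
        ("feature/depth extraction", "vision DSP cores",           "data-parallel, moderately flexible"),
        ("gesture/gaze recognition", "CPU core",                   "control-heavy, frequently updated"),
    ]

    for stage, processor, rationale in pipeline:
        print(f"{stage:26s} -> {processor:28s} ({rationale})")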

For system and chip designers, the good news is that there's a robust field of vision-oriented processors to choose from. (By my last count, over 20 suppliers are offering processors for vision applications in chip or IP core form.) The bad news is that it takes real work to sort through the options and determine which offerings are best suited for your specific application. And this work is made more challenging by the vast diversity of architectures and programming models.

If you're interested in learning about the many processor options for designers of systems and chips targeting embedded vision applications, check out the video of the talk I presented on this topic at the Embedded Vision Summit. This video, along with videos of all of the presentations from the Summit, is available free of charge to registered users of the Embedded Vision Alliance web site. (Registration is quick and easy.)

Sometimes as a new application domain emerges, things start out chaotic but then converge over time. So it's reasonable to ask whether, in a few years, the fog will have cleared and one class of processor will dominate embedded vision applications. Although this has happened in other domains (wireless communications comes to mind), I don't think it's going to happen in a big way in embedded vision applications. This is because there is incredible variety in the applications and their requirements, and this variety will continue to grow as engineers find new ways to add value to systems and applications by adding visual intelligence.

There's also the matter of incumbency. Once system and software developers get comfortable using a particular type of processor for a particular purpose, it's hard to change – both because engineers are productive with the processor they already know, and because of the big investments already made in designing and optimizing hardware and software around that processor. So even if another type of processor emerges that offers better performance, energy efficiency or programmability, it can be difficult to displace the incumbent. Hence, big advantages are accruing to the processor suppliers who are getting to market early with credible offerings for embedded vision applications.

If you've been evaluating or using processors for embedded vision, I'd love to hear your opinions. Post a comment here or send me your feedback at http://www.BDTI.com/Contact.

Jeff Bier is president of BDTI and founder of the Embedded Vision Alliance.
