Jeff Bier’s Impulse Response—Stuck in the Past

Submitted by Jeff Bier on Sat, 02/07/2004 - 17:00

Vendors announcing new signal-processing chips tend to brag about the clock speed of the processor core, just as they did ten years ago. “Look at our hot new processor!” they proclaim. “It’s got lots and lots of Hertz!” But in embedded applications—just as in PC applications—comparing chip performance solely on the basis of core clock speeds never tells the whole story. In fact, it can be downright misleading.

The performance of today’s highly integrated chips isn’t just a function of the processor core, but also of surrounding components: for example, memory, I/O interfaces, and coprocessors. Most of the performance data available for these chips focuses on the processor, however, and ignores the effects of other on-chip components. As processor clock speeds have pushed toward the gigahertz mark, memory and other on-chip elements have struggled to keep up. In many cases, overall chip performance is limited by these components rather than by the core. As a result, chip speeds are simply not increasing as fast as processor clock speeds.

Many processor vendors and users employ small benchmarks, such as FIR filters or Viterbi decoders, to make chip performance comparisons. This type of benchmark has been around a long time and offers much more accurate performance comparisons than MHz, MIPS, or MMACs. System developers are familiar and comfortable with this approach to assessing chip performance. Still, such benchmarks typically compare only processor core speeds, not chip speeds—just like MHz.
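
For readers who haven’t written one lately, the sketch below shows the flavor of such a kernel benchmark: a block FIR filter in plain C. The function name, Q15 data format, tap count, and buffer conventions are illustrative assumptions, not taken from any particular benchmark suite.

    #include <stddef.h>

    #define NUM_TAPS 32   /* illustrative filter length */

    /* Block FIR filter kernel: y[n] = sum over k of coeffs[k] * x[n - k].
       Inputs and coefficients are assumed to be Q15 fixed-point values, and
       the input buffer x must hold num_outputs + NUM_TAPS - 1 samples.
       No saturation is applied; this is a sketch, not production code. */
    void fir_block(const short *x, const short *coeffs,
                   short *y, size_t num_outputs)
    {
        for (size_t n = 0; n < num_outputs; n++) {
            long long acc = 0;                        /* wide accumulator */
            for (size_t k = 0; k < NUM_TAPS; k++) {
                acc += (long long)coeffs[k] * x[n + NUM_TAPS - 1 - k];
            }
            y[n] = (short)(acc >> 15);                /* scale Q30 back to Q15 */
        }
    }

A kernel like this hammers the multiply-accumulate units but touches only a few kilobytes of data, so it reveals little about how the memory system, I/O interfaces, or coprocessors will behave under a real application load, which is exactly the limitation described above.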

This is not to say that small benchmarks are no longer useful; they are. This is particularly true for licensable cores, where the surrounding components are selected by the licensee, not by the core designer. For complex, high-performance chips, however, larger application-oriented benchmarks that fully exercise memory, I/O, and peripherals are also needed.

Of course, this approach to performance estimation has its difficulties. Larger benchmarks are much more time-consuming to implement, so results will most likely be available only for the hottest applications. System developers who need to estimate whole-chip performance for other applications will have to rely on some combination of whole-chip benchmark data developed for different applications, processor core benchmark data, custom benchmarks, and old-fashioned back-of-the-envelope estimates.
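
To illustrate that last item, here is a minimal back-of-the-envelope calculation of the sort a developer might scribble: core-only kernel cycle counts are scaled by an assumed memory-stall overhead to guess at whole-chip throughput. Every number below, including the flat 35 percent stall factor, is invented for the example.

    #include <stdio.h>

    int main(void)
    {
        double core_cycles_per_frame = 2.0e6;  /* from core-level kernel benchmarks (hypothetical) */
        double stall_overhead        = 0.35;   /* assumed extra cycles lost to memory stalls */
        double clock_hz              = 600e6;  /* advertised core clock (hypothetical) */

        /* Scale the core-only estimate, then convert cycles per frame to throughput. */
        double chip_cycles_per_frame = core_cycles_per_frame * (1.0 + stall_overhead);
        double frames_per_second     = clock_hz / chip_cycles_per_frame;

        printf("Estimated throughput: %.1f frames/s\n", frames_per_second);
        return 0;
    }

Crude as it is, a calculation like this at least makes the assumptions explicit; the whole-chip and application benchmarks argued for above are a way to replace the guessed-at overhead with measured data.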

Small benchmarks have served our industry well and continue to have their place—but more information is sorely needed. Application benchmarks, while not a panacea, will provide a welcome step forward.
