Jeff Bier’s Impulse Response—Truth in Advertising

Submitted by Jeff Bier on Wed, 05/25/2005 - 16:00

If you are a regular reader of this column, you have probably noticed a recurring theme: signal processing applications are becoming more complicated and more varied—and so is the hardware that runs them. Ten years ago, DSPs used fairly simple architectures, and the architectures of most DSPs were similar to one another. Today, many DSPs use very complex architectures, and there is a remarkable amount of variety among DSP architectures. What's more, DSPs increasingly compete with alternatives such as application-specific standard products, general-purpose processors, and FPGAs. This growing architectural complexity and variety has made it increasingly difficult to evaluate chip performance. Accurately comparing the performance of a DSP and an FPGA, for example, requires a careful, detailed analysis.

The complexity of today's applications also means engineers can no longer select a processor solely on the basis of factors like cost and performance. Instead, engineers must also evaluate the development infrastructure—including tools, development boards, off-the-shelf software components, and other support—available for the processor. Without the right development infrastructure, it can be impossible to complete a sophisticated application on time and on budget.

The need to evaluate development infrastructure complicates the process of choosing a processor. Suppose an engineer developing a video-processing application wants to use an off-the-shelf video compression algorithm ("codec"). To choose the best processor, the engineer must understand not only the merits of the candidate processors, but also the merits of the corresponding codec implementations. For example, a slow processor with well-optimized off-the-shelf codec software might perform better than a fast processor with a poorly optimized codec. On the other hand, a poorly optimized codec implementation that's been thoroughly tested may be a much better choice than a well-optimized but buggy one.
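
To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. All of the clock rates and per-pixel cycle counts are hypothetical numbers chosen for illustration; they do not describe any real processor or codec.

```python
# Illustrative only: hypothetical clock rates and per-pixel cycle costs,
# not measurements of any real product.

def pixels_per_second(clock_hz, cycles_per_pixel):
    """Effective codec throughput: clock rate divided by per-pixel cost."""
    return clock_hz / cycles_per_pixel

# "Slow" processor paired with a well-optimized off-the-shelf codec
slow_optimized = pixels_per_second(200e6, 60)     # ~3.3 Mpixels/s

# "Fast" processor paired with a poorly optimized codec
fast_unoptimized = pixels_per_second(300e6, 150)  # 2.0 Mpixels/s

print(f"slow processor + optimized codec:   {slow_optimized / 1e6:.1f} Mpixels/s")
print(f"fast processor + unoptimized codec: {fast_unoptimized / 1e6:.1f} Mpixels/s")
```

Under these assumed numbers, the nominally slower chip delivers more than half again the throughput of the faster one, which is exactly why the codec implementation must be evaluated alongside the processor itself.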

And evaluating development infrastructure is often even trickier than evaluating processor performance. One reason for this is that subtle distinctions between vendors' offerings can make huge differences in the development process. The variation among processor simulators illustrates this point. Some simulators are fast but not cycle-accurate; others are cycle-accurate but slow. Some cycle-accurate simulators only model the processor core accurately; others model both the core and the memory system. Getting the right kind of simulator—or simulators—is critical, but trying to compare the simulators for a set of candidate processors can be maddeningly confusing.
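
To see why the distinction between a core-only simulator and one that also models the memory system matters, consider a toy model, again with purely hypothetical parameters:

```python
# Illustrative only: a toy model of why a core-only cycle-accurate simulator
# can mislead. Every parameter below is a hypothetical assumption.

instructions = 1_000_000   # instructions executed in the run
cpi_core     = 1.2         # average cycles per instruction, core alone
mem_accesses = 300_000     # loads/stores in the run
miss_rate    = 0.05        # fraction of accesses that miss the cache
miss_penalty = 40          # stall cycles per miss

core_only_cycles = instructions * cpi_core
stall_cycles     = mem_accesses * miss_rate * miss_penalty
with_memory      = core_only_cycles + stall_cycles

print(f"core-only estimate:  {core_only_cycles:,.0f} cycles")
print(f"core + memory model: {with_memory:,.0f} cycles "
      f"({stall_cycles / core_only_cycles:.0%} higher)")
```

With these assumed values, memory stalls inflate the cycle count by 50 percent, so a simulator that models only the core would substantially overstate real-world performance.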

The upshot of all this is that engineers are more dependent than ever on vendors' honesty. In the past, it was often easy to check vendors' claims. An engineer with a basic understanding of processor architectures often needed only a few hours of analysis to determine if a vendor's performance claims were plausible. Today even a skilled, experienced engineer can find it difficult—if not impossible—to thoroughly evaluate both a complicated processor and its supporting development infrastructure in a reasonable amount of time.

This situation makes it tempting for vendors to issue exaggerated claims about their offerings. However, such hyperbole will come back to haunt the vendor. Engineers may be at the mercy of the vendor's marketing department during processor selection, but they will quickly learn the truth once product development begins.

Vendors that routinely overstate the quality of their offerings will inevitably develop a reputation for untrustworthiness. This reputation will make it difficult to sell products, because engineers know that the success of their projects depends on the reliability of the vendor's claims. As a result, vendors with an eye on long-term success should strive for credibility, even if being honest means losing some sales in the near term.
