Soon after BDTI got its start in the early 1990s, we became known for our benchmarks. We benchmarked whatever types of processors people were using for embedded digital signal processing: first DSPs, then CPUs, and eventually MCUs, FPGAs, and GPUs, too.
One of the interesting things about benchmarking processors for embedded digital signal processing tasks is the importance of optimization. Optimization is central to digital signal processing applications. In a typical embedded DSP application, regardless of what type of processor it's running on, substantial effort (often person-years of it) is invested in reducing the processing performance the application requires. And since the performance-critical portions of applications are optimized, benchmarks must also be optimized in order to accurately reflect how processors will perform in applications.
As a consequence, my colleagues and I gained a lot of experience optimizing digital signal processing functions on a wide range of processor architectures. Eventually, some of our customers began asking us to deploy these same optimization skills to help them in their product development projects. By the late '90s, this kind of contract software development work had become a significant part of BDTI's business. In a typical project, a customer was trying to squeeze an algorithm that nominally required 200 MIPS of processing power into a 100 MIPS processor in order to meet a cost target.
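As a toy illustration of the flavor of this work (not any specific client project), here is the classic transformation of a floating-point FIR filter inner loop into Q15 fixed-point form. The fixed-point version does one 16×16→32 multiply-accumulate per tap, which maps directly onto the single-cycle MAC unit found on virtually every DSP; saturation and rounding are omitted for brevity.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Floating-point reference: straightforward FIR tap loop. */
float fir_float(const float *coeffs, const float *hist, size_t ntaps) {
    float acc = 0.0f;
    for (size_t i = 0; i < ntaps; i++)
        acc += coeffs[i] * hist[i];
    return acc;
}

/* Q15 fixed-point version: one 16x16->32 multiply-accumulate per tap,
   the kind of inner loop a DSP's MAC unit retires in a single cycle.
   A 32-bit accumulator holds the intermediate sum without overflow
   for modest tap counts; the final shift converts back to Q15. */
int16_t fir_q15(const int16_t *coeffs, const int16_t *hist, size_t ntaps) {
    int32_t acc = 0;
    for (size_t i = 0; i < ntaps; i++)
        acc += (int32_t)coeffs[i] * (int32_t)hist[i];
    return (int16_t)(acc >> 15);
}
```

Swapping float math for a loop like `fir_q15` is often the single biggest step in fitting a "200 MIPS" algorithm onto a 100 MIPS fixed-point part.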
I find these projects fascinating, because they offer opportunities for creative, out-of-the-box thinking, while also rewarding obsessive attention to detail—two of my favorite modes of working. To this day, optimizing software for digital signal processing algorithms on a variety of processors continues to be an important part of BDTI's business. But lately I've noticed that the optimization we're doing is more likely to be focused on reducing power consumption, rather than reducing cost.
Say, for example, you have a modern smartphone like the HTC One, packing an applications processor with four 1.7 GHz CPU cores, each with SIMD parallel processing instructions. You've got a SoC complex capable of well over 20 billion 32-bit multiply-accumulate operations per second. Why on earth, then, would you spend time optimizing the code for an audio algorithm that requires less than 10% of that performance?
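For the curious, the back-of-envelope arithmetic behind that figure can be sketched as follows. The assumption that each core issues four 32-bit MACs per cycle via its SIMD unit is illustrative; actual issue width and sustained (versus peak) throughput vary by core.

```c
#include <assert.h>
#include <stdint.h>

/* Peak MAC throughput: cores x clock rate x MACs issued per cycle.
   Illustrative only -- assumes every core issues its full SIMD width
   of MACs every cycle, which real workloads rarely sustain. */
uint64_t peak_macs_per_sec(uint64_t cores, uint64_t hz,
                           uint64_t macs_per_cycle) {
    return cores * hz * macs_per_cycle;
}
```

Four cores at 1.7 GHz issuing four MACs per cycle works out to 27.2 billion MACs per second, comfortably over the 20 billion cited above.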
The answer is power consumption. If you want that smartphone to run that audio algorithm all day without depleting its battery, you're going to need to optimize very carefully—and very likely, move your code off the CPU and onto an adjacent DSP core on the SoC, since the DSP is generally capable of more energy-efficient execution of signal processing tasks. Ultra-power-optimized functions on smartphones are useful for letting you play your music for days on end without recharging the battery. But they're also useful for enabling new kinds of functionality, such as the "always listening" speech recognition capabilities discussed in "TrulyHandsFree: Always-On Speech Recognition From Sensory," another article in this month's InsideDSP newsletter.
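The stakes are easy to quantify with a rough battery-life model: runtime is simply battery energy divided by average power draw. The milliwatt figures in the test below are hypothetical, not measurements of any particular SoC, but they show why shaving an always-on feature's average draw by an order of magnitude turns "drains the battery overnight" into "runs for days."

```c
#include <assert.h>

/* Rough battery-life model: hours of runtime = stored energy (mWh)
   divided by average power draw (mW). Ignores self-discharge,
   conversion losses, and everything else the phone is doing. */
double runtime_hours(double battery_mwh, double avg_power_mw) {
    return battery_mwh / avg_power_mw;
}
```

This is why moving a steady signal-processing workload from a general-purpose CPU to a DSP core, even when the CPU has cycles to spare, can be worth substantial optimization effort.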
As consumers (and workers in many industries) increasingly grow accustomed to being untethered from AC outlets, more and more products will require aggressive optimization to achieve extreme energy efficiency. For example, some modern Bluetooth headsets incorporate speech recognition capability to enable fully hands-free operation. Today, ultra-low-power, always-on speech recognition is state-of-the-art. I'm betting that in a few years, ultra-low-power, always-on computer vision will be state-of-the-art.
For example, I'm personally looking forward to a Google Glass-like product that can automatically prompt me with the names of past acquaintances I encounter at conferences. Today it's difficult to envision the potential of something like ultra-low-power, always-on computer vision, because we've never been able to implement such a thing. But as the first viable and valuable examples emerge, I expect a surge of innovation in this realm. And this surge will create new, exciting, and valuable opportunities for those who can figure out how to squeeze big algorithms into small chips.
Speaking of ubiquitous computer vision—which I like to call "embedded vision"—if you're an engineer involved in (or interested in learning about) incorporating visual intelligence into your system, SoC, or software, I invite you to join us at the Embedded Vision Summit on April 25th in San Jose. Space at this free educational event is limited, so please register now. To begin the registration process, please fill out the application form here. We will respond with further details via email.