InsideDSP — In-depth analysis and opinion

Synopsys Enhances ARC Processor Core With Superscalar, DSP Capabilities

By adding a second front-end instruction decoder to the ARC HS3x high-end 32-bit RISC architecture, along with doubling the number of ALUs, Synopsys has created its latest ARC HS4x processor IP core family (Figure 1). The estimated performance increase over an ARC HS3x predecessor at the same clock speed can be as high as 40%, according to the company, with only modest die size and power consumption impacts. And via the inclusion of DSP enhancements akin to those initially launched with the mid Read more...
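
The payoff of dual issue can be illustrated with a toy scheduling model (a hypothetical sketch for intuition only, not the actual ARC HS4x pipeline, which pairs instructions under far richer rules): a second decoder lets the core issue two instructions in the same cycle whenever the second does not depend on the first.

```python
def cycles_single_issue(instrs):
    """Baseline: one instruction retires per cycle."""
    return len(instrs)

def cycles_dual_issue(instrs):
    """Greedy pairing: issue two instructions in one cycle when the
    second does not read the register the first writes. (Simplified:
    only adjacent read-after-write hazards are checked.)"""
    cycles = 0
    i = 0
    while i < len(instrs):
        if i + 1 < len(instrs):
            dest, _ = instrs[i]
            _, srcs = instrs[i + 1]
            if dest not in srcs:   # independent -> dual issue
                i += 2
                cycles += 1
                continue
        i += 1
        cycles += 1
    return cycles

# Each instruction is (destination_register, source_registers);
# names are illustrative.
program = [
    ("r1", {"r0"}),        # r1 = f(r0)
    ("r2", {"r0"}),        # r2 = g(r0)  -- independent, pairs with above
    ("r3", {"r1", "r2"}),  # r3 = r1 + r2
    ("r4", {"r3"}),        # r4 = h(r3)  -- depends on r3, cannot pair
]
```

On this four-instruction sequence the dual-issue model needs three cycles instead of four; how close a real workload gets to the ideal 2x depends on how much instruction-level parallelism it exposes, which is why the claimed gain is "as high as 40%" rather than 100%.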

OmniVision Delivers Near-infrared Image Sensor Improvements

The effectiveness of a computer vision system depends not only on its algorithms but also on the quality of the images fed into them. "Garbage in, garbage out," as the saying goes. More than 20 years of image sensor innovation, fueled by consumer digital cameras, have also delivered cost, resolution, sensitivity, dynamic range, power consumption, and other benefits to visible-light-based computer vision applications. OmniVision Technologies hopes to expand these benefits into Read more...

Jeff Bier’s Impulse Response—The Camera is the Ultimate Link Between the Real World and Computers

As a kid, I was fascinated with electronics – especially digital electronics. The idea that one could build a computing machine out of simple logic gates was a revelation, and designing such things was thrilling. But as powerful and flexible as digital computers are, we live in an analog world. Hence, analog-to-digital converters play a critical role. When I first encountered them, I found A/D converters exotic – even magical. With them, one could not only construct a computer, but also enable Read more...
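
The conversion Bier describes can be sketched as an ideal quantizer (a deliberately simplified model; real A/D converters also contend with noise, nonlinearity, and sample-and-hold timing):

```python
import math

def adc_sample(value, vref=1.0, bits=8):
    """Ideal n-bit A/D conversion: map an analog value in [0, vref)
    onto one of 2**bits integer codes, clamping out-of-range inputs."""
    levels = 1 << bits
    code = int(value / vref * levels)
    return max(0, min(levels - 1, code))

# Digitize one period of a sine wave, offset into the converter's
# input range (16 samples per period is illustrative).
samples = [adc_sample(0.5 + 0.45 * math.sin(2 * math.pi * n / 16))
           for n in range(16)]
```

Every code in `samples` is an integer between 0 and 255: the bridge from the analog world into something a machine built from logic gates can compute on.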

Case Study: Harnessing Computer Vision and Machine Learning for Real-World Products

The buzz around artificial intelligence, computer vision and machine learning intensifies on a daily basis: There are a dizzying number of new processors, algorithms and tools for computer vision and machine learning. Investment and acquisition activity around AI companies is furious. Announcements of new AI-based applications and products are non-stop. Competition for engineering talent is fierce. All this creates challenges for product designers and application developers who seek to Read more...

NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference

With the proliferation of deep learning, NVIDIA has realized its longstanding aspirations to make general-purpose graphics processing units (GPGPUs) a mainstream technology. The company's GPUs are commonly used to accelerate neural network training, and are also being adopted for neural network inference acceleration in self-driving cars, robots and other high-end autonomous platforms. NVIDIA also sees plenty of opportunities for inference acceleration in IoT and other "edge" platforms, Read more...

Next-generation Intel Movidius Vision Processor Emphasizes Floating-point Inference

Hot on the heels of the announced production availability of the Neural Compute Stick, based on the Myriad 2 vision processor, comes a next-generation SoC from Movidius (an Intel company): the Myriad X. With a 33% increase in the number of 128-bit VLIW processing cores, along with additional dedicated imaging and vision acceleration blocks and a brand-new "neural compute engine," the new chip delivers a claimed 10x increase in floating-point performance versus its predecessor (Figure 1 and Table 1 Read more...

Jeff Bier’s Impulse Response—Computer Vision: At the Edge or In the Cloud? It Depends

About seven years ago, my colleagues and I realized that it would soon become practical to incorporate computer vision into cost- and power-constrained embedded systems. We recognized that this would be a world-changing development, due to the vast range of valuable capabilities that vision enables. It’s been gratifying to see this potential come to fruition, with a growing number of innovative vision-enabled products finding market success. What we didn’t anticipate in 2011 was the important Read more...

Case Study: Engineering Algorithms for Efficiency and Effectiveness

Algorithms are the essence of embedded applications; they are the mathematical processes that transform data in useful ways. They’re also often computationally demanding. When designing a new product, companies often need to assess whether an algorithm will fit within their cost and power consumption targets. Sometimes, an algorithm won’t fit in its initial form. Most algorithms can be formulated in many different ways and different formulations will be more or less efficient on different Read more...
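
The point about formulations is easy to demonstrate with a small example of my own (not one drawn from the case study): the two functions below compute the same moving average, but the second replaces the per-window re-summation with an incremental update, cutting the work from O(n·k) to O(n).

```python
def moving_average_naive(x, k):
    """Direct formulation: re-sum the full window at every output
    position, costing O(n * k) additions."""
    return [sum(x[i:i + k]) / k for i in range(len(x) - k + 1)]

def moving_average_running(x, k):
    """Equivalent running-sum formulation: slide the window by adding
    the entering sample and subtracting the leaving one, costing O(n)."""
    out = []
    s = sum(x[:k])           # sum of the first window
    out.append(s / k)
    for i in range(k, len(x)):
        s += x[i] - x[i - k]  # incremental window update
        out.append(s / k)
    return out
```

Both produce identical results; on an embedded target with a large window, the difference between the two formulations can decide whether the algorithm fits the power budget at all.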

Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options

Graphics IP supplier Imagination Technologies has long advocated the acceleration of edge-based deep learning inference operations via the combination of the company's GPU and ISP cores. Latest-generation graphics architectures from the company continue this trend, enhancing performance and reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and systems based on them. And, for more demanding deep learning applications, the company has introduced its first Read more...

BrainChip Leverages Software, Acceleration Hardware to Jumpstart Emerging Neural Network Approach

Convolutional neural networks (CNNs) may be the hot artificial intelligence (AI) technology of the moment, in no small part because both their training and inference functions map well onto existing GPUs, FPGAs and DSPs as accelerators. But they're not the only game in town. Witness, for example, Australia-based startup BrainChip Holdings and its alternative proprietary spiking neural network (SNN) technology (Figure 1). Now armed with both foundation software and acceleration hardware Read more...
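
For readers unfamiliar with the spiking approach, here is a minimal sketch of the core idea using a textbook leaky integrate-and-fire neuron (a generic illustration, not BrainChip's proprietary model): instead of multiplying dense activation tensors as a CNN does, each neuron accumulates input over time and communicates only through discrete spikes.

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    by `leak` each timestep, integrates the incoming value, and emits
    a spike (1) when it crosses `threshold`, then resets to zero."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x      # leak, then integrate this timestep's input
        if v >= threshold:
            spikes.append(1)  # fire
            v = 0.0           # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still fires periodically, because
# charge accumulates across timesteps before leaking away.
train = lif_neuron([0.5] * 6)
```

Because activity is sparse and event-driven, SNN proponents argue the approach can be far more power-efficient than dense CNN arithmetic, which is the efficiency case BrainChip is betting on.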