Low-Power

Videantis Adds Deep Learning Core, Toolset to Its Product Mix

Germany-based processor IP provider videantis, founded in 2004, was one of the first of what has since become a plethora of vision processor suppliers. The company's latest fifth-generation v-MP6000UDX product line, anchored by an enhanced v-MP (media processor) core, is tailored for the deep learning algorithms that are rapidly becoming the dominant approach to implementing visual perception (Figure 1). Yet it's still capable of handling the traditional computer vision processing functions Read more...
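
One reason a single programmable core can span both workloads is that classic vision filters and convolutional neural network layers are dominated by the same multiply-accumulate pattern. The generic 3x3 convolution below is a minimal sketch of that shared inner loop; it is illustrative C, not videantis code.

    /* Generic 3x3 2D convolution: the multiply-accumulate pattern that
       dominates both classic vision filtering and CNN layers.
       Illustrative only; not videantis code. */
    #include <stddef.h>

    void conv3x3(const float *in, float *out, size_t w, size_t h,
                 const float k[3][3])
    {
        for (size_t y = 1; y + 1 < h; y++) {
            for (size_t x = 1; x + 1 < w; x++) {
                float acc = 0.0f;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        acc += in[(y + dy) * w + (x + dx)] * k[dy + 1][dx + 1];
                out[y * w + x] = acc;   /* nine MACs per output pixel */
            }
        }
    }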

Synopsys Enhances ARC Processor Core With Superscalar, DSP Capabilities

By adding a second front-end instruction decoder to the ARC HS3x high-end 32-bit RISC architecture, along with doubling the number of ALUs, Synopsys has created its latest ARC HS4x processor IP core family (Figure 1). The estimated performance increase over an ARC HS3x predecessor at the same clock speed can be as high as 40%, according to the company, with only modest die size and power consumption impacts. And via the inclusion of DSP enhancements akin to those initially launched with the mid Read more...
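
The gap between "doubled ALUs" and "up to 40%" comes down to instruction-level parallelism: the second decode/execute pipeline only helps when adjacent operations are independent. The C fragment below is a minimal illustration of that distinction, not Synopsys code or compiler output.

    /* Illustration of why dual issue helps some loops more than others.
       Not Synopsys code; core and compiler behavior are generalizations. */
    void vec_add(int *dst, const int *a, const int *b, int n)
    {
        /* Iterations are independent, so a dual-issue core can pair
           adjacent adds and approach 2x throughput on this loop. */
        for (int i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }

    int vec_sum(const int *a, int n)
    {
        /* Each add depends on the previous result; the single dependency
           chain leaves the second pipeline mostly idle unless the loop is
           restructured (e.g., into two interleaved partial sums). */
        int acc = 0;
        for (int i = 0; i < n; i++)
            acc += a[i];
        return acc;
    }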

NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference

With the proliferation of deep learning, NVIDIA has realized its longstanding aspirations to make general-purpose graphics processing units (GPGPUs) a mainstream technology. The company's GPUs are commonly used to accelerate neural network training, and are also being adopted for neural network inference acceleration in self-driving cars, robots and other high-end autonomous platforms. NVIDIA also sees plenty of opportunities for inference acceleration in IoT and other "edge" platforms, Read more...

Next-generation Intel Movidius Vision Processor Emphasizes Floating-point Inference

Hot on the heels of the announced production availability of the Neural Compute Stick, based on the Myriad 2 vision processor, comes a next-generation SoC from Movidius (an Intel company), the Myriad X. With a 33% increase in the number of 128-bit VLIW processing cores, along with additional dedicated imaging and vision acceleration blocks and a brand-new "neural compute engine," the new chip delivers a claimed 10x increase in floating-point performance versus its predecessor (Figure 1 and Table 1 Read more...
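
The headline core-count claim is easy to sanity-check. The arithmetic below assumes the widely reported SHAVE core counts of 12 for Myriad 2 and 16 for Myriad X, figures that are not stated in the excerpt above.

    /* Back-of-the-envelope check of the "33% more cores" claim.
       SHAVE counts (12 and 16) are assumptions, not from the text above. */
    #include <stdio.h>

    int main(void)
    {
        const int myriad2_shaves = 12;   /* assumed */
        const int myriadx_shaves = 16;   /* assumed */
        double increase = 100.0 * (myriadx_shaves - myriad2_shaves)
                          / myriad2_shaves;
        printf("Core-count increase: %.0f%%\n", increase);   /* prints 33% */
        return 0;
    }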

Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options

Graphics IP supplier Imagination Technologies has long advocated the acceleration of edge-based deep learning inference operations via the combination of the company's GPU and ISP cores. Latest-generation graphics architectures from the company continue this trend, enhancing performance and reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and systems based on them. And, for more demanding deep learning applications, the company has introduced its first Read more...

Next-gen Qualcomm Camera Modules, ISP Target Depth Sensing, Other Computer Vision Tasks

Qualcomm is expanding its reference camera module program with three new configurations targeting biometrics and depth-sensing functions in Android-based smartphones, tablets, AR (augmented reality) and VR (virtual reality) headsets, and other devices. While the modules' targeted computer vision tasks tend to be computationally intensive, a next-generation ISP (image signal processor) core optimized for the functions is intended to offload CPU, GPU and DSP resources inside an SoC, delivering Read more...
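
For context on the kind of per-pixel arithmetic such a module offloads, the sketch below computes passive-stereo depth from disparity using the standard Z = (focal length x baseline) / disparity relationship. It is a generic illustration with made-up numbers, not Qualcomm's implementation.

    /* Generic passive-stereo depth from disparity: Z = f * B / d.
       Illustrative only; not Qualcomm's implementation, and all numbers
       below are hypothetical. */
    #include <stdio.h>

    static double depth_m(double focal_px,      /* focal length, pixels   */
                          double baseline_m,    /* camera spacing, meters */
                          double disparity_px)  /* feature offset, pixels */
    {
        return (focal_px * baseline_m) / disparity_px;
    }

    int main(void)
    {
        /* Hypothetical module: 1400 px focal length, 4 cm baseline,
           20 px measured disparity -> 2.80 m to the object. */
        printf("Depth: %.2f m\n", depth_m(1400.0, 0.04, 20.0));
        return 0;
    }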

Intel-influenced Movidius Neural Compute Stick Increases Memory, Lowers Price, Reprioritizes Frameworks

When Movidius unveiled the Fathom Neural Compute Stick, based on its Myriad 2 VPU (vision processor), at the May 2016 Embedded Vision Summit, the company targeted a $99 price tag and initially planned to support the TensorFlow framework, with support for Caffe and other frameworks to follow. A lot's changed in a year-plus, most notably Intel's acquisition of Movidius, announced in September 2016. The company's new version of the Neural Compute Stick drops the price by 20%, switches from plastic to Read more...

AImotive Expands Into Silicon IP for Deep Learning Inference Acceleration

AImotive has been developing its aiDrive software suite for advanced driver assistance systems (ADAS) and autonomous vehicles for nearly a decade. As the computing demands of its algorithms continue to increase, the company is finding that conventional processor approaches aren't keeping pace. In response, and with an eye on both vehicle autonomy and other deep learning opportunities, the company began developing its own inference acceleration engine, aiWare, at the beginning of last year. An Read more...

Synopsys Broadens Neural Network Engine IP Core Family

Last June, when Synopsys unveiled its latest-generation DesignWare EV6x vision processor core, the company also introduced an 880-MAC, 12-bit convolutional neural network (CNN) companion processor, the CNN880. Although the CNN880 is optional for Synopsys customers using the EV6x, it's been a key factor (often the lead factor, in fact) in greater than 90% of EV6x customer engagements, according to Product Marketing Manager Gordon Cooper. And although a year ago, an 880-MAC architecture was at Read more...
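
The MAC count translates directly into a theoretical throughput ceiling once a clock rate is assumed. The arithmetic below uses a hypothetical 800 MHz clock purely for illustration; it is not a Synopsys specification.

    /* Rough peak-throughput arithmetic for an 880-MAC engine.
       The 800 MHz clock is hypothetical, not a Synopsys figure. */
    #include <stdio.h>

    int main(void)
    {
        const double macs_per_cycle = 880.0;
        const double clock_hz = 800e6;                      /* hypothetical */
        double gmac_s = macs_per_cycle * clock_hz / 1e9;    /* 704 GMAC/s   */
        double tops = 2.0 * gmac_s / 1000.0;  /* a MAC counted as mul + add */
        printf("Peak: %.0f GMAC/s, about %.2f TOPS\n", gmac_s, tops);
        return 0;
    }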

Cadence Doubles Down on Deep Learning

At last year's Embedded Vision Summit, Cadence unveiled the Tensilica Vision P6 DSP, which augmented the imaging and vision processing capabilities of its predecessors with the ability to efficiently execute deep neural network (DNN) inference functions. Cadence returned to the Summit this year with a new IP offering, the Vision C5 DSP core, focused exclusively on deep neural networks. Vision C5 is intended for use alongside another core, such as the Vision P6, which will handle image signal Read more...
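
The division of labor Cadence describes, an imaging/vision DSP feeding a dedicated neural network core, can be sketched as a simple two-stage pipeline. All function and type names below are hypothetical placeholders, not Cadence APIs.

    /* Two-stage pipeline sketch: a vision DSP handles ISP/classic vision,
       while a dedicated DNN core runs inference. All names are hypothetical
       placeholders, not Cadence APIs. */
    typedef struct { float pixels[64]; } frame_t;     /* toy frame          */
    typedef struct { float scores[10]; } result_t;    /* toy class scores   */

    static frame_t isp_and_preprocess(frame_t raw)    /* Vision P6-style role */
    {
        /* ... demosaic, noise reduction, resize, normalize ... */
        return raw;
    }

    static result_t dnn_infer(frame_t prepared)       /* Vision C5-style role */
    {
        /* ... convolutions, pooling, fully connected layers ... */
        result_t r = {{0}};
        (void)prepared;
        return r;
    }

    result_t process_frame(frame_t raw)
    {
        return dnn_infer(isp_and_preprocess(raw));
    }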