Tools

Videantis Adds Deep Learning Core, Toolset to Its Product Mix

Germany-based processor IP provider videantis, launched in 2004, was one of the first of what has since become a plethora of vision processor suppliers. The company's latest fifth-generation v-MP6000UDX product line, anchored by an enhanced v-MP (media processor) core, is tailored for the deep learning algorithms that are rapidly becoming the dominant approach to implementing visual perception (Figure 1). Yet it's still capable of handling the traditional computer vision processing functions Read more...

NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference

With the proliferation of deep learning, NVIDIA has realized its longstanding aspirations to make general-purpose graphics processing units (GPGPUs) a mainstream technology. The company's GPUs are commonly used to accelerate neural network training, and are also being adopted for neural network inference acceleration in self-driving cars, robots and other high-end autonomous platforms. NVIDIA also sees plenty of opportunities for inference acceleration in IoT and other "edge" platforms, Read more...

Next-generation Intel Movidius Vision Processor Emphasizes Floating-point Inference

Hot on the heels of the announcement of production availability for the Neural Compute Stick, based on the Myriad 2 vision processor, comes a next-generation SoC from Movidius (an Intel company), the Myriad X. With a 33% increase in the number of 128-bit VLIW processing cores, along with additional dedicated imaging and vision acceleration blocks and a brand new "neural compute engine," the new chip delivers a claimed 10x increase in floating-point performance versus its predecessor (Figure 1 and Table 1 Read more...

Case Study: Engineering Algorithms for Efficiency and Effectiveness

Algorithms are the essence of embedded applications; they are the mathematical processes that transform data in useful ways. They’re also often computationally demanding. When designing a new product, companies often need to assess whether an algorithm will fit within their cost and power consumption targets. Sometimes, an algorithm won’t fit in its initial form. Most algorithms can be formulated in many different ways, and different formulations will be more or less efficient on different Read more...
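The reformulation point above can be made concrete with a small, hypothetical example (not taken from the case study): two mathematically equivalent formulations of a sliding-window mean, where restructuring the computation changes its cost from O(n·k) to O(n) — the kind of rework that can make an algorithm fit a cost or power budget.

```python
# Illustrative sketch (assumed example, not from the article): the same
# sliding-window mean computed two ways with very different costs.

def window_mean_naive(samples, k):
    """O(n*k): re-sums the entire window at every output position."""
    return [sum(samples[i:i + k]) / k for i in range(len(samples) - k + 1)]

def window_mean_running(samples, k):
    """O(n): keeps a running sum, adding the new sample and dropping the old."""
    acc = sum(samples[:k])          # sum of the first window
    out = [acc / k]
    for i in range(k, len(samples)):
        acc += samples[i] - samples[i - k]   # incremental window update
        out.append(acc / k)
    return out

data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
assert window_mean_naive(data, 3) == window_mean_running(data, 3)
```

Both formulations produce identical results, but on a resource-constrained embedded target the incremental version does a small constant amount of work per sample, independent of the window size.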

Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options

Graphics IP supplier Imagination Technologies has long advocated the acceleration of edge-based deep learning inference operations via the combination of the company's GPU and ISP cores. Latest-generation graphics architectures from the company continue this trend, enhancing performance and reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and systems based on them. And, for more demanding deep learning applications, the company has introduced its first Read more...

BrainChip Leverages Software, Acceleration Hardware to Jumpstart Emerging Neural Network Approach

Convolutional neural networks (CNNs) may be the hot artificial intelligence (AI) technology of the moment, in no small part due to their compatibility with existing GPUs, FPGAs and DSPs as accelerators for both training and inference, but they're not the only game in town. Witness, for example, Australia-based startup BrainChip Holdings and its alternative proprietary spiking neural network (SNN) technology (Figure 1). Now armed with both foundation software and acceleration hardware Read more...

Intel-influenced Movidius Neural Compute Stick Increases Memory, Lowers Price, Reprioritizes Frameworks

When Movidius unveiled the Fathom Neural Compute Stick, based on its Myriad 2 VPU (vision processor), at the May 2016 Embedded Vision Summit, the company targeted a $99 price tag and initially planned to support the TensorFlow framework, with support for Caffe and other frameworks to follow. A lot's changed in a year-plus, most notably Intel's acquisition of Movidius, announced in September 2016. The company's new version of the Neural Compute Stick drops the price by 20%, switches from plastic to Read more...

AImotive Expands Into Silicon IP for Deep Learning Inference Acceleration

AImotive has been developing its aiDrive software suite for advanced driver assistance systems (ADAS) and autonomous vehicles for nearly a decade. As the computing demands of its algorithms continue to increase, the company is finding that conventional processor approaches aren't keeping pace. In response, and with an eye both on vehicle autonomy and other deep learning opportunities, the company began developing its own inference acceleration engine, aiWare, at the beginning of last year. An Read more...

Synopsys Broadens Neural Network Engine IP Core Family

Last June, when Synopsys unveiled its latest-generation DesignWare EV6x vision processor core, the company also introduced an 880-MAC, 12-bit convolutional neural network (CNN) companion processor, the CNN880. Although the CNN880 is optional for Synopsys customers using the EV6x, it's been a key factor (often the lead factor, in fact) in greater than 90% of EV6x customer engagements, according to Product Marketing Manager Gordon Cooper. And although a year ago, an 880-MAC architecture was at Read more...

Xilinx's reVISION Stack Tackles Computer Vision, Machine Learning Applications

Xilinx, like many companies, sees a significant opportunity in burgeoning deep neural network applications, as well as those that leverage computer vision, oftentimes both at the same time. Last fall, targeting acceleration of cloud-based deep neural network inference (when a neural network analyzes new data it’s presented with, based on its previous training), the company unveiled its Reconfigurable Acceleration Stack, an application-tailored expansion of its original SDAccel development Read more...