Software Development

Videantis Adds Deep Learning Core, Toolset to Its Product Mix

Germany-based processor IP provider videantis, launched in 2004, was among the earliest entrants in what has since become a crowded field of vision processor suppliers. The company's latest fifth-generation v-MP6000UDX product line, anchored by an enhanced v-MP (media processor) core, is tailored for the deep learning algorithms that are rapidly becoming the dominant approach to implementing visual perception (Figure 1). Yet it's still capable of handling the traditional computer vision processing functions Read more...

NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference

With the proliferation of deep learning, NVIDIA has realized its longstanding aspiration to make general-purpose computing on graphics processing units (GPGPU) a mainstream technology. The company's GPUs are commonly used to accelerate neural network training, and are also being adopted for neural network inference acceleration in self-driving cars, robots and other high-end autonomous platforms. NVIDIA also sees plenty of opportunities for inference acceleration in IoT and other "edge" platforms, Read more...

Next-generation Intel Movidius Vision Processor Emphasizes Floating-point Inference

Hot on the heels of the announced production availability of the Neural Compute Stick based on the Myriad 2 vision processor comes a next-generation SoC from Movidius (an Intel company), the Myriad X. With a 33% increase in the number of 128-bit VLIW processing cores, along with additional dedicated imaging and vision acceleration blocks and a brand-new "neural compute engine," the new chip delivers a claimed 10x increase in floating-point performance versus its predecessor (Figure 1 and Table 1 Read more...

Case Study: Engineering Algorithms for Efficiency and Effectiveness

Algorithms are the essence of embedded applications; they are the mathematical processes that transform data in useful ways. They’re also often computationally demanding. When designing a new product, companies often need to assess whether an algorithm will fit within their cost and power consumption targets. Sometimes, an algorithm won’t fit in its initial form. Most algorithms can be formulated in many different ways, and different formulations will be more or less efficient on different Read more...
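As a minimal sketch of the kind of reformulation the case study describes (not the study's actual algorithm), consider a 2D blur implemented two ways in NumPy: a direct 2D convolution and a mathematically equivalent separable version that does far less work per pixel. The function names, kernel and image sizes below are illustrative assumptions.

```python
import numpy as np

def blur_direct(image, k1d):
    """Direct 2D convolution: roughly K*K multiplies per output pixel."""
    kernel = np.outer(k1d, k1d)              # K x K kernel from a 1D kernel
    K = len(k1d)
    pad = K // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=np.float64)
    for dy in range(K):
        for dx in range(K):
            out += kernel[dy, dx] * padded[dy:dy + image.shape[0],
                                           dx:dx + image.shape[1]]
    return out

def blur_separable(image, k1d):
    """Separable formulation: roughly 2*K multiplies per output pixel."""
    K = len(k1d)
    pad = K // 2
    padded = np.pad(image, pad, mode="edge")
    # Horizontal pass, then vertical pass with the same 1D kernel.
    tmp = sum(k1d[i] * padded[pad:-pad, i:i + image.shape[1]] for i in range(K))
    tmp = np.pad(tmp, ((pad, pad), (0, 0)), mode="edge")
    out = sum(k1d[i] * tmp[i:i + image.shape[0], :] for i in range(K))
    return out

image = np.random.rand(240, 320)
k1d = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0  # 5-tap binomial kernel
assert np.allclose(blur_direct(image, k1d), blur_separable(image, k1d))
```

With a K-tap kernel, the direct version needs on the order of K² multiply-accumulates per pixel while the separable version needs about 2K, which is exactly the sort of reformulation that can bring an algorithm back within a cost or power budget on a given platform.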

Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options

Graphics IP supplier Imagination Technologies has long advocated accelerating edge-based deep learning inference by combining the company's GPU and ISP cores. Its latest-generation graphics architectures continue this trend, enhancing performance while reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and the systems based on them. And, for more demanding deep learning applications, the company has introduced its first Read more...

BrainChip Leverages Software, Acceleration Hardware to Jumpstart Emerging Neural Network Approach

Convolutional neural networks (CNNs) may be the hot artificial intelligence (AI) technology of the moment, in no small part because both their training and inference functions map well onto existing GPUs, FPGAs and DSPs as accelerators, but they're not the only game in town. Witness, for example, Australia-based startup BrainChip Holdings and its alternative proprietary spiking neural network (SNN) technology (Figure 1). Now armed with both foundation software and acceleration hardware Read more...
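For readers unfamiliar with how spiking networks differ from CNNs, here is a minimal, generic leaky integrate-and-fire neuron in Python. It illustrates the event-driven behavior SNNs rely on and is not a representation of BrainChip's proprietary technology; the threshold, leak factor and weights are arbitrary illustrative values.

```python
def lif_neuron(input_spikes, weights, threshold=1.0, leak=0.9):
    """Generic leaky integrate-and-fire neuron (textbook illustration only)."""
    potential = 0.0
    output_spikes = []
    for t, spikes in enumerate(input_spikes):   # spikes: 0/1 per input at time t
        # Membrane potential decays, then accumulates weighted input events.
        potential = potential * leak + sum(w * s for w, s in zip(weights, spikes))
        if potential >= threshold:
            output_spikes.append(t)             # neuron fires at this timestep
            potential = 0.0                     # reset after firing
    return output_spikes

# Three inputs over five timesteps; the neuron fires only when enough
# near-coincident spikes arrive within the leak window.
events = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 1, 1)]
print(lif_neuron(events, weights=[0.4, 0.5, 0.3]))   # -> [2, 4]
```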

Jeff Bier’s Impulse Response—Is Deep Learning the Solution to All Computer Vision Problems?

At the Embedded Vision Summit in May, I had the privilege of hearing a brilliant keynote presentation from Professor Jitendra Malik of UC Berkeley. Malik, whose research and teaching have helped shape the field of computer vision for 30 years, explained that he had been skeptical about the value of deep neural networks for computer vision, but ultimately changed his mind in the face of a growing body of impressive results. There’s no question that deep neural networks (DNNs) have transformed Read more...

Intel-influenced Movidius Neural Compute Stick Increases Memory, Lowers Price, Reprioritizes Frameworks

When Movidius unveiled the Fathom Neural Compute Stick, based on its Myriad 2 VPU (vision processing unit), at the May 2016 Embedded Vision Summit, the company targeted a $99 price tag and initially planned to support the TensorFlow framework, with support for Caffe and other frameworks to follow. A lot has changed in the year-plus since, most notably Intel's acquisition of Movidius, announced in September. The company's new version of the Neural Compute Stick drops the price by 20%, switches from plastic to Read more...

AImotive Expands Into Silicon IP for Deep Learning Inference Acceleration

AImotive has been developing its aiDrive software suite for advanced driver assistance systems (ADAS) and autonomous vehicles for nearly a decade. As the computing demands of its algorithms continue to increase, the company is finding that conventional processor approaches aren't keeping pace. In response, and with an eye on both vehicle autonomy and other deep learning opportunities, the company began developing its own inference acceleration engine, aiWare, at the beginning of last year. An Read more...

Xilinx's reVISION Stack Tackles Computer Vision, Machine Learning Applications

Xilinx, like many companies, sees a significant opportunity in burgeoning deep neural network applications, as well as in those that leverage computer vision; oftentimes, both at the same time. Last fall, targeting acceleration of cloud-based deep neural network inference (when a neural network analyzes new data it’s presented with, based on its previous training), the company unveiled its Reconfigurable Acceleration Stack, an application-tailored expansion of its original SDAccel development Read more...
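As a toy illustration of the inference concept mentioned parenthetically above, and only that (it uses plain NumPy, not the reVISION stack or SDAccel APIs), the following sketch applies frozen, previously "trained" weights to a new input in a forward-only pass. The layer sizes and random weights are illustrative assumptions.

```python
import numpy as np

# Pretend these weights were produced by an earlier training run and are now frozen.
W1 = np.random.randn(4, 8)
b1 = np.zeros(8)
W2 = np.random.randn(8, 3)
b2 = np.zeros(3)

def infer(x):
    """Forward pass only: no gradients, no weight updates."""
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())         # softmax over 3 classes
    return e / e.sum()

new_sample = np.random.randn(4)               # the "new data" being analyzed
print(infer(new_sample).argmax())             # predicted class index
```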