- AMD's ROCm: CUDA Gets Some Competition
- CEVA-X1 DSP Core Targets Cellular IoT Opportunities
- Jeff Bier’s Impulse Response—Will Computer Vision Upend the Automotive Industry?
- Case Study: Balancing the Demands of Algorithms and the Capabilities of Processors When Designing Computer Vision Systems
- Wave Computing Targets Deep Learning
NVIDIA was an early and aggressive advocate of leveraging graphics processors for other massively parallel processing tasks (often referred to as general-purpose computing on graphics processing units, or GPGPU). The company's CUDA software toolset for GPU computing has to date secured only …
October 10, 2016
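The GPGPU idea in the teaser above, running the same arithmetic across many independent data elements at once, can be sketched without any GPU at all. Below is a minimal, illustrative SAXPY (a·x + y) comparison in NumPy; the function names are made up for this sketch, and on a real GPU each output element of the whole-array version would be computed by its own thread:

```python
import numpy as np

def saxpy_serial(a, x, y):
    # Scalar loop: one multiply-add per iteration, the way a single
    # CPU thread would compute it.
    return [a * xi + yi for xi, yi in zip(x, y)]

def saxpy_data_parallel(a, x, y):
    # Whole-array expression: every output element is independent of the
    # others, which is exactly the structure GPGPU toolsets such as CUDA
    # exploit by assigning one thread per element.
    return a * x + y

x = np.arange(4, dtype=np.float32)
y = np.ones(4, dtype=np.float32)
print(saxpy_data_parallel(2.0, x, y))  # [1. 3. 5. 7.]
```

The two functions compute identical results; the difference is purely in how much parallelism the formulation exposes to the hardware underneath.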
Hard on the heels of the public release of CEVA's second-generation convolutional neural network toolset, CDNN2, the company is putting the final touches on its fifth-generation processor core, the CEVA-XM6, designed to run software generated by that toolset. Liran Bar, the company's …
September 07, 2016
Last year, when CEVA introduced the initial iteration of its CDNN (CEVA Deep Neural Network) toolset, company officials expressed an aspiration for CDNN to eventually support multiple popular deep learning frameworks. At the time, however, CDNN launched with support only for the well-known Caffe …
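The convolution layers that toolsets like CDNN compile for embedded vision cores reduce, at their heart, to a sliding-window multiply-accumulate. A minimal direct 2-D "valid" sketch in plain Python (illustrative only; note that deep learning frameworks, Caffe included, conventionally compute cross-correlation, i.e. without flipping the kernel, as done here):

```python
def conv2d_valid(image, kernel):
    # Direct sliding-window multiply-accumulate over a 2-D input,
    # producing only fully-overlapped ("valid") output positions.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            out[i][j] = acc
    return out
```

A production toolset's job is largely to map these four nested loops efficiently onto a vector DSP's MAC units and local memories, something this naive sketch makes no attempt to do.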
Modern SoCs increasingly contain a variety of processing resources: one or more CPU cores and a GPU, often with a DSP, programmable logic, or one or more special-purpose coprocessors for tasks such as computer vision. Properly harnessed, such heterogeneous processors often deliver impressive …
May 25, 2016
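"Properly harnessed" in the heterogeneous-SoC teaser above usually starts with a partitioning decision: which workload runs on which resource. A toy Python sketch of such a task-to-resource mapping; the resource names and the mapping itself are illustrative assumptions, not any vendor's API:

```python
# Hypothetical task-to-resource table for a heterogeneous SoC.
# Real systems derive this from profiling, power budgets, and data locality.
DISPATCH = {
    "ui_rendering": "gpu",
    "audio_filtering": "dsp",
    "feature_detection": "vision_coprocessor",
    "control_logic": "cpu",
}

def assign(task):
    # Fall back to the general-purpose CPU for any task without a
    # specialized accelerator.
    return DISPATCH.get(task, "cpu")
```

The interesting engineering is hidden behind this table: getting the data to each resource without the transfer cost swamping the compute speedup.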
In late January of this year, Movidius and Google broadened their collaboration plans, which had begun with 2014's Project Tango prototype depth-sensing smartphone. As initially announced, the companies’ broader intention to "accelerate the adoption of deep learning within mobile …
March 21, 2016
As computer vision is deployed into a variety of new applications, driven by the emergence of powerful, low-cost, and energy-efficient processors, companies need to find ways to squeeze demanding vision processing algorithms into size-, weight-, power-, and cost-constrained systems. Fortunately for …
Jeff Bier’s Impulse Response—Why Do Embedded Processor Software Development Tools Suck? It’s the Unknown Knowns
In 2002, in a famous piece of unintentional rhetorical artistry, U.S. Defense Secretary Donald Rumsfeld spoke to reporters about "known knowns" (things you're aware that you know), "known unknowns" (things you're aware that you don't know) and "unknown unknowns" …
The decreasing cost-per-transistor delivered by modern semiconductor processes means that a number of previously rare embedded processor options are now increasingly common. This trend includes floating-point coprocessors, which are especially useful when migrating code originally developed on a PC …
November 16, 2015
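Why floating-point hardware eases PC-to-embedded migration is easiest to see from the alternative: without an FPU, float-heavy PC code must be rewritten in fixed point, with manual scaling at every step. A minimal Q15 sketch (a common 16-bit format with 1 sign bit and 15 fraction bits; the helper names are made up for illustration):

```python
def to_q15(x):
    # Scale a float in roughly [-1, 1) to Q15 and saturate to 16-bit range.
    return max(-32768, min(32767, int(round(x * 32768))))

def q15_mul(a, b):
    # Fixed-point multiply: the 32-bit product carries 30 fraction bits,
    # so shift right by 15 to return to Q15.
    return (a * b) >> 15

half = to_q15(0.5)          # 16384
quarter = q15_mul(half, half)  # 8192, i.e. 0.25 in Q15
```

With an FPU, none of this bookkeeping is needed; the original floating-point source can often be recompiled largely unchanged.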
A growing number of products are incorporating computer vision capabilities. This, in turn, has led to rapid growth in the number of processors being offered for vision applications. Selecting the best processor (whether a chip for use in a system design, or an IP core for use in an SoC) is …
October 21, 2015
As applications become more complex, and processors become more powerful, system developers increasingly rely on off-the-shelf software components to enable rapid and efficient application development. This is particularly true in digital signal processing, where application developers expect to …
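The off-the-shelf DSP components the teaser above alludes to typically wrap kernels like the one below: a direct-form FIR filter, shown here as an illustrative pure-Python sketch rather than the optimized, processor-specific routine a commercial library would actually ship:

```python
def fir_filter(samples, taps):
    # Direct-form FIR: each output sample is the dot product of the most
    # recent len(taps) input samples with the coefficient vector.
    # The delay line starts zero-filled.
    out = []
    history = [0.0] * len(taps)
    for s in samples:
        history = [s] + history[:-1]  # shift newest sample into the delay line
        out.append(sum(h * t for h, t in zip(history, taps)))
    return out
```

Feeding a unit impulse through the filter returns the tap coefficients themselves, a standard sanity check for any FIR implementation.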