- Himax, CEVA, emza Partner to Develop Low Power Vision Processing Platform
- Jeff Bier’s Impulse Response—Deployable Artificial Neural Networks Will Change Everything
- Case Study: Deep Understanding of Processor Architectures and Computer Vision Algorithms is Key to a Breakthrough Product
- Qualcomm's Latest Snapdragon Enhancements Leverage a Lithography Shrink
- New AMD Software Library, Hardware Support Deep Learning Acceleration
HPC (high-performance computing) servers, which have notably embraced the GPGPU (general-purpose computing on graphics processing units) concept in recent years, are increasingly being employed for computer vision and other deep learning-based applications. Beginning in late 2014, NVIDIA
NVIDIA was an early and aggressive advocate of leveraging graphics processors for other massively parallel processing tasks (often referred to as general-purpose computing on graphics processing units, or GPGPU). The company's CUDA software toolset for GPU computing has to date secured only
October 10, 2016
Hard on the heels of the public release of CEVA's second-generation convolutional neural network toolset, CDNN2, the company is putting the final touches on its fifth-generation processor core, the CEVA-XM6, designed to run software generated by that toolset. Liran Bar, the company's
September 07, 2016
Last year, when CEVA introduced the initial iteration of its CDNN (CEVA Deep Neural Network) toolset, company officials expressed an aspiration for CDNN to eventually support multiple popular deep learning frameworks. At the time, however, CDNN launched with support only for the well-known Caffe
Modern SoCs increasingly contain a variety of processing resources: one or more CPU cores and a GPU, often with a DSP, programmable logic, or one or more special-purpose co-processors for tasks such as computer vision. Properly harnessed, such heterogeneous processors often deliver impressive
May 25, 2016
In late January of this year, Movidius and Google broadened their collaboration plans, which had begun with 2014's Project Tango prototype depth-sensing smartphone. As initially announced, the companies' broader intention to "accelerate the adoption of deep learning within mobile
March 21, 2016
As computer vision is deployed into a variety of new applications, driven by the emergence of powerful, low-cost, and energy-efficient processors, companies need to find ways to squeeze demanding vision processing algorithms into size-, weight-, power-, and cost-constrained systems. Fortunately for
The decreasing cost-per-transistor delivered by modern semiconductor processes means that a number of previously rare embedded processor options are now increasingly common. This trend includes floating-point coprocessors, which are especially useful when migrating code originally developed on a PC
November 16, 2015
A growing number of products are incorporating computer vision capabilities. This, in turn, has led to rapid growth in the number of processors being offered for vision applications. Selecting the best processor (whether a chip for use in a system design, or an IP core for use in an SoC) is
Jeff Bier’s Impulse Response—Why Do Embedded Processor Software Development Tools Suck? It’s the Unknown Knowns
In 2002, in a famous piece of unintentional rhetorical artistry, U.S. Defense Secretary Donald Rumsfeld spoke to reporters about "known knowns" (things you're aware that you know), "known unknowns" (things you're aware that you don't know) and "unknown