One of the things I find endlessly fascinating about digital signal processing is that it enables using computation to offset physical limitations. For example, with the right signal processing, you can get awesome sound out of tiny, inexpensive loudspeakers, like those that fit into a smartphone or tablet.
It turns out that this also applies to photography. Virtually all digital cameras today do some algorithmic processing to improve the quality of captured images, and the amount and sophistication of such processing is increasing rapidly. For example, many smartphone cameras today include high dynamic range (HDR) capability, which improves photo quality by carefully combining information from multiple images captured with different exposure settings.
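To make the idea concrete, here is a toy exposure-fusion sketch in Python/NumPy. It is not any particular camera's pipeline (the function name `fuse_exposures` and the Gaussian weighting are my own illustrative choices); real HDR processing also aligns the frames and applies tone mapping. The core idea is the same, though: at each pixel, favor whichever exposure captured that pixel best.

```python
import numpy as np

def fuse_exposures(images):
    """Blend differently exposed images of the same scene.

    Each pixel is weighted by how close its value is to mid-gray
    (0.5 on a 0-1 scale), so well-exposed pixels dominate the
    blended result.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    # Gaussian "well-exposedness" weight, peaked at mid-gray.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    # Normalize so the weights at each pixel sum to 1.
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

# Two synthetic "exposures" of a simple gradient scene:
# one underexposed (dark), one overexposed (clipped highlights).
scene = np.linspace(0.0, 1.0, 5)
under = np.clip(scene * 0.5, 0.0, 1.0)
over = np.clip(scene * 1.5, 0.0, 1.0)
fused = fuse_exposures([under, over])
```

Because the weights are normalized, the fused value at each pixel is a convex combination of the input exposures, so shadows lean on the bright frame and highlights lean on the dark one.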
Algorithmic processing is also being used to provide new capabilities for photographers. A simple example is detecting smiles, to enable the camera to take a photo at just the right moment. Another increasingly popular feature is automatic panorama stitching, which finds the spatial relationships between a series of images and automatically merges them into a single panoramic image.
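The "finding spatial relationships" step can be sketched in a few lines. Production stitchers typically match image features and estimate a homography, but a minimal way to recover the offset between two overlapping shots is phase correlation, shown here in pure NumPy (the function `estimate_shift` and the synthetic test scene are my own illustrative constructions):

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the (dy, dx) translation of `moved` relative to
    `ref` via phase correlation: the normalized cross-power
    spectrum of two shifted images has an inverse FFT that peaks
    at the shift."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12  # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets past the midpoint are negative shifts (FFT wrap-around).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

# Synthetic scene, and a second "photo" panned 30 pixels to the right.
rng = np.random.default_rng(1)
scene = rng.random((128, 128))
panned = np.roll(scene, 30, axis=1)
dy, dx = estimate_shift(scene, panned)
```

Once the shift (or, more generally, the homography) between frames is known, the stitcher warps the images into a common frame and blends the overlap.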
This rapid proliferation of practical "computational photography" is a major disruptive force in consumer photography today. It is driven by the convergence of several trends. The first is the ubiquity of smartphones. The best camera to use is usually the camera you've got with you, and for most consumers that's a smartphone. In addition, the vast size and intense competitiveness of the smartphone market has spurred billions of dollars of R&D investment to create better smartphones, and improvement of camera capabilities is a key target of this activity.
Also, the fact that smartphones are relatively open platforms means that clever engineers with good ideas for enabling better photography can get those ideas into the smartphone marketplace quite easily and directly, compared to trying to get Nikon or Canon to incorporate them into traditional cameras. Thus, a growing amount of investment and creativity is focused on smartphone photography.
A final important trend is the increasing programmable processing power available in smartphones. No longer limited to CPUs (which have become very powerful in their own right), smartphone application processors now offer GPUs with "GPU compute" capability, enabling the data-parallel architecture of the GPU to be pressed into service for things other than graphics. Image processing and computer vision algorithms often exhibit just the type of data parallelism that maps well onto these single-instruction, multiple-data (SIMD) architectures.
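You can see why image processing suits this hardware from almost any per-pixel operation: the same arithmetic is applied to every pixel, with no dependence between pixels. Here's a small NumPy illustration using RGB-to-luma conversion (the Rec. 709 luma coefficients) written both as a scalar loop and as a single data-parallel expression; on a GPU, each pixel of the vectorized version could run on its own thread:

```python
import numpy as np

# A tiny synthetic RGB image (height x width x 3), values in [0, 1].
rng = np.random.default_rng(2)
img = rng.random((4, 6, 3))

# Scalar version: visit one pixel at a time, like a plain CPU loop.
luma_loop = np.empty(img.shape[:2])
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        r, g, b = img[y, x]
        luma_loop[y, x] = 0.2126 * r + 0.7152 * g + 0.0722 * b

# Data-parallel version: the same arithmetic on all pixels at once.
luma_vec = img @ np.array([0.2126, 0.7152, 0.0722])

assert np.allclose(luma_loop, luma_vec)
```

The two versions compute identical results; the second simply expresses the computation in the "one instruction, many data elements" form that GPUs and SIMD units execute efficiently.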
This combination of convenience (my smartphone is always nearby), massive investment, and innovation means that smartphone photographic technology is now advancing much faster than conventional camera technology. This creates huge challenges for camera manufacturers. Not only is the boom in smartphone photography coming at the expense of sales of conventional cameras (particularly the compact "point-and-shoot" variety), but the focus of innovation is shifting away from areas that traditional camera suppliers have mastered, like optics, to new territory, like computer vision algorithms, energy-efficient programmable processors, and new types of sensors.
To me, the most exciting aspect of the growth in smartphone cameras and computational photography is the opportunity for more people to capture better pictures, and to do so in more situations, while spending less money on dedicated gear. The ability to add new features and improve the performance of a smartphone camera just by downloading new software is a nice bonus. Let's hear it for digital signal processing!
What’s your experience been with computational photography? I'd love to hear about it. Post a comment here or send me your feedback at http://www.BDTI.com/Contact.