Jeff Bier’s Impulse Response—Will Computer Vision Upend the Automotive Industry?
Earlier this week, Google announced the spin-off of its self-driving car project into a stand-alone business. Will Google become a major player in the automotive industry? Today, that idea seems far-fetched. On the other hand, 15 years ago Apple was a personal computer company, and few would have guessed that it would eventually become a dominant player in consumer electronics and photography.
The Google announcement resonated with me in light of a fascinating recent presentation by Mark Bünger of Lux Research. In Mark's view, computer vision isn't going to merely make cars better – it's going to completely transform the automotive industry. In the process, current automotive giants may be displaced by companies that aren't even players in the industry today.
If you're skeptical that titans like Ford, Toyota and BMW could be displaced by companies like Google and Uber, consider that new technologies frequently transform industries. For example, think about how smartphones have completely restructured the photography industry. In 1999, consumers took around 80 billion photographs. In 2017, according to Deloitte, consumers will take roughly 3 trillion photographs. Of course, the vast majority of photos taken in 2017 (86%, according to InfoTrends) will be taken not with an analog camera, and not even with a digital camera – but with a smartphone or tablet.
So, picture-taking has grown dramatically over the past 20 years, but the companies that make money from this activity today are completely different from those of 20 years ago. A generation ago, if you wanted to take photographs, you purchased a camera. You consumed film. You sent that film to a lab for processing. Now you use your smartphone to capture images, which you can view instantly. And you use the Internet to store and share your photos, with zero incremental cost per image. Instead of consumers paying for cameras, film and processing, social networks like Facebook monetize photo sharing via advertising. In other words, digital and wireless technology disrupted the photography industry.
Will computer vision catalyze a similar disruption in the automotive industry? Mark argues that computer vision is central to practical autonomous vehicles, and that the inevitable proliferation of autonomous vehicles will cause huge changes in the automotive industry. Let's examine these ideas in turn.
While autonomous cars will undoubtedly utilize a variety of sensors, Mark's view is that vision sensors will dominate, because of their unique versatility. I agree. With the right algorithms and a powerful processor, a handful of image sensors mounted on a car can detect vehicles, pedestrians and other objects. But they can also read traffic lights, lane markings, road signs, and road surface conditions. And inside the car, image sensors can determine where people are seated, who they are, where they're looking, and whether they've left something behind in the car. No other single type of sensor has this range of capabilities. As a result, over time vision will subsume the functionality of other types of sensors. And, in the process, vision sensors will reduce the cost of autonomous vehicles – helping to make them economically viable.
Mark also points out that the way we use cars today is very inefficient. For many people, a car is the second most expensive thing they own (after a home). Yet, typically, it sits unused 95% of the time. With autonomy, car sharing becomes more attractive. Instead of owning, maintaining, and insuring a car, consumers can subscribe to a transportation service. A car picks you up when you need it, drops you off, and continues on to pick up the next subscriber (after checking to see if you've left anything behind). With this model, consumers become less attached to the car (arguably, even today many consumers are not very attached to their cars) and more connected to the service provider.
If you're in Pittsburgh or San Francisco, you can get an early preview of this scenario today: Uber is offering customers the option of riding in a self-driving car (with an Uber engineer present to monitor things for the time being). And, of course, these cars are not advertised as "self-driving Volvos," but rather as "self-driving Ubers." In 20 years, at least in some densely populated areas, it may be more common to "request an Uber" than to fetch your own car from the parking lot.
For me, Mark's presentation was thought-provoking. Previously, I saw vision as enhancing cars and the automotive industry. Now I see it as potentially transforming the industry. If you want to understand how computer vision is changing industries and business models, and learn about the latest practical techniques and technologies for adding vision to all types of systems, I invite you to join me, Mark Bünger, and over 40 other speakers at the Embedded Vision Summit, taking place May 1-3, 2017 in Santa Clara, California. As one recent attendee said, "It's the conference for computer vision product developers."