Monthly Archives: May 2016

A VLIW Processor Bids to Dominate the Augmented Reality/Virtual Reality Market

By: Jonah McLeod

If history is any guide to the future, each major next-generation consumer device (the PC, the smartphone, and whatever comes next) will have its own computing architecture. The x86 chip dominated the PC, while the ARM core architecture conquered the smartphone/device market. Two questions now arise: which next-generation device will drive CPU unit volumes in the future, and which CPU core will become the dominant architecture in that device?

The answer most often given to the first question is IoT, but that response lacks specificity, since IoT encompasses everything from health and fitness monitors to smart doorbells, security cameras, and more. One market that has the look and feel of a next-generation hit is augmented/virtual reality. The market research firm Digi-Capital has developed an AR/VR business model that predicts a market worth $120B by 2020, of which around 40 percent, roughly $48B, will be hardware sales. For perspective, the Apple iPhone was introduced in 2007 and four years later had sales of $45B; the AR/VR trajectory appears similar.

The AR/VR market differs from the smartphone/device market in that it lacks a single major vendor, such as Apple Inc., defining the market and the hardware architecture that will become dominant. The market includes major players (Microsoft, Facebook, Google, Sony, Samsung, and HTC Corp., with more on the way), but none of them has set the standard.

Head-mounted displays such as Google Glass and the HTC Vive are the major hardware components of the AR and VR markets, respectively. At the Embedded Vision Conference, held May 2 through 4 at the Santa Clara Convention Center, much of the buzz was about which processors are best suited to the vision processing these new devices demand.

I listened to pitches promoting both the graphics processing unit (GPU) and the very long instruction word (VLIW) processor as the best compute engine for image processing, a major requirement in augmented reality (AR) applications. The Myriad 2 Vision Processor from Movidius, Inc. of San Mateo, Calif., with its 12 on-chip vector processors delivering teraflops of performance on a watt of power, struck me as a strong candidate: high performance on a modest power budget.

In an AR application in which the user wears glasses, the requirement is to capture the physical space being viewed, process all the elements that make up that space, and provide the wearer with information about the scene: for example, the wall of an art gallery where the wearer is viewing a painting by El Greco. Understanding the captured view is achieved with convolutional neural networks (CNNs), which involve a large number of matrix multiplications. A VLIW processor uses instruction-level parallelism, performing several multiplications at once, to minimize time-consuming and power-hungry memory accesses.
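To make that parallelism concrete, here is a minimal C sketch (my own illustration, not Movidius code) of the multiply-accumulate loop at the heart of such matrix math, unrolled by four so the independent multiplies can be scheduled together into one wide VLIW instruction bundle:

```c
/* Hypothetical sketch of a CNN dot-product kernel.  The four
 * accumulators have no data dependence on one another, so a VLIW
 * compiler can issue the four multiply-accumulates in parallel
 * rather than waiting on a single serial accumulator. */
float dot_unrolled4(const float *a, const float *b, int n)
{
    float s0 = 0.f, s1 = 0.f, s2 = 0.f, s3 = 0.f;
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += a[i]     * b[i];      /* these four lines are      */
        s1 += a[i + 1] * b[i + 1];  /* independent of each other */
        s2 += a[i + 2] * b[i + 2];  /* and can occupy one wide   */
        s3 += a[i + 3] * b[i + 3];  /* instruction word          */
    }
    for (; i < n; i++)              /* scalar tail for leftover elements */
        s0 += a[i] * b[i];
    return s0 + s1 + s2 + s3;
}
```

On a scalar machine this is just loop unrolling; the point of a VLIW datapath is that the compiler can statically pack those independent operations into a single cycle instead of relying on out-of-order hardware.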

The other function in an AR application is determining where the wearer's eye is looking. By detecting the position of the eye against the scene before the viewer, the AR application can deliver the detailed information that makes AR valuable. The Movidius chip consists of 12 VLIW engines that can be assigned to image processing, eye-position detection, or other compute-intensive functions. The 12 engines are tied together via a 2-Mbyte intelligent memory fabric that provides deterministic data locality, minimizing memory accesses, and an address map for easy programming.
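As a hypothetical sketch of how work might be divided across a set of such engines (the row-slicing scheme and engine count used as a parameter here are my own assumptions for illustration, not a description of the Movidius fabric), each engine could be handed a contiguous band of image rows so that its working set stays local:

```c
/* Hypothetical static partition of an image's rows across 12 engines,
 * so each engine's band of rows stays resident in its share of local
 * memory.  Any remainder rows are spread so bands differ by at most one. */
#define NUM_ENGINES 12

typedef struct {
    int row_begin;  /* first row owned by this engine (inclusive) */
    int row_end;    /* one past the last row (exclusive)          */
} slice_t;

slice_t slice_for_engine(int engine, int height)
{
    int base = height / NUM_ENGINES;   /* rows every engine gets      */
    int rem  = height % NUM_ENGINES;   /* leftover rows to distribute */
    slice_t s;
    s.row_begin = engine * base + (engine < rem ? engine : rem);
    s.row_end   = s.row_begin + base + (engine < rem ? 1 : 0);
    return s;
}
```

For a 1080-row frame, each engine would own exactly 90 rows; a static split like this is one simple way to get the deterministic data locality the fabric is meant to provide.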

Any compute engine seeking to become the de facto standard in the next high-volume end application has to accommodate a wide range of packaged software performing the image, vision, and sensor functions (gyroscope, accelerometer, altimeter, and so on). Thus the architecture must not only provide compelling performance and power characteristics; it also needs a software development environment for porting the software modules the end application demands, along with a compiler efficient enough to fully exploit the hardware architecture, i.e., to deliver the most performance with the least power consumption.

Movidius claims to have this combination of architecture and software development environment in its Myriad Development Kit. The proof will come in the form of design wins. As with most great successes, being the CPU architecture in the product that suddenly catches fire and leaves the competition struggling to catch up trumps all other marketing and technical considerations.