
Posts Tagged ‘Sonics’

DRAM Remains The Status Quo

Thursday, August 22nd, 2013

By Frank Ferro
No one will dispute that the “post-PC” era is here. Tablet shipments are expected to pass laptops by the end of this year, and desktops by the end of 2015. Add in the nearly 1 billion smartphone shipments projected for 2013, and you would think that the DRAM industry would take notice of this volume.

DRAM manufacturers do care about this segment of the market, but this fact is not obvious when looking at their roadmaps. The reality is that DRAM specifications continue to be driven by the PC and server markets, and it does not look like that will change anytime soon. This was one of the key takeaway items for me after attending MemCon in Santa Clara two weeks ago.

Although unit shipments for mobile devices are higher than PCs, they only represent about 13% of overall memory shipments, said Martin Lund, senior vice president at Cadence, in his keynote address. This number would justify maintaining the current DRAM roadmap—at least for now. The problem, although not a new one, is that other mobile and embedded products have to live within the memory constraints set by the PC industry, which is focused on ‘cost per bit.’

Making Lemonade. Reducing the cost per bit is a good thing if you are making a PC or a server, but many embedded designers would put lower latency and lower power consumption much higher on their wish list. In addition, embedded designs typically need less memory capacity. However, for cost reasons, designers must choose a DRAM based on the lowest price node, regardless of the memory capacity needed for their design. Power consumption is being addressed to some degree as adoption of low-power DRAM (LPDDR) increases, but low power is still not a fundamental design criterion for DRAM. Latency, however, is a particular problem. CPU processing efficiency is greatly reduced because the processor has to wait for DRAM. And according to Bob Brennan, senior vice president at Samsung Semiconductor, DRAM latency has remained constant over the last 10 years. Increasing the processor speed, or adding a second processor, will not help if the CPUs are spending most of their time waiting for memory.

To address latency, we continue to see cache sizes growing, along with an increase in the number of caches (L3 and L4). More cache memory reduces latency, but at the expense of larger die sizes, and increased design complexity (schedule and cost). In addition to the cache, customers I work with are looking for latency reduction in the on-chip network (it seems now more than ever). Fast and efficient connections from the CPU to DRAM are critical, and the problem only gets worse with competition for DRAM from other heterogeneous processors. Parallel connections (sending address and data simultaneously) offer the lowest latency (zero in theory), while serial connections (address, then data) offer higher speeds with reduced wire count while introducing some latency. In addition to the connection topology between the CPU and DRAM, the network needs to have an advanced QoS algorithm to ensure that CPU traffic is not stalled or blocked entirely from memory due to data from other processors or I/O.
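
To make the QoS point concrete, here is a minimal sketch of the idea (plain C++ with invented names and weights, not any particular vendor's fabric): a priority-plus-aging arbiter at the DRAM contention point that lets latency-sensitive CPU requests jump ahead of bulk traffic without starving it.

```cpp
// Minimal sketch of a QoS arbiter at a DRAM contention point: latency-sensitive
// CPU requests win by static priority, while an aging bonus keeps bulk traffic
// (GPU, I/O) from being starved. All names and weights are illustrative only.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Request {
    std::string initiator;   // "CPU", "GPU", "IO", ...
    int priority;            // higher = more latency-sensitive
    uint64_t arrival_cycle;  // when the request reached the arbiter
};

class QosArbiter {
public:
    explicit QosArbiter(uint64_t aging_weight) : aging_weight_(aging_weight) {}

    void push(const Request& r) { pending_.push_back(r); }

    // Pick the next request to send to DRAM: static priority plus an aging
    // bonus proportional to how long the request has already waited.
    bool grant(uint64_t now, Request* out) {
        if (pending_.empty()) return false;
        std::size_t best = 0;
        uint64_t best_score = 0;
        for (std::size_t i = 0; i < pending_.size(); ++i) {
            uint64_t waited = now - pending_[i].arrival_cycle;
            uint64_t score = pending_[i].priority * 100 + aging_weight_ * waited;
            if (i == 0 || score > best_score) { best = i; best_score = score; }
        }
        *out = pending_[best];
        pending_.erase(pending_.begin() + best);
        return true;
    }

private:
    uint64_t aging_weight_;
    std::vector<Request> pending_;
};

int main() {
    QosArbiter arbiter(/*aging_weight=*/5);
    arbiter.push({"GPU", 1, 0});  // bulk traffic, arrived first
    arbiter.push({"IO", 1, 1});
    arbiter.push({"CPU", 4, 2});  // latency-sensitive, arrived last

    Request r;
    for (uint64_t cycle = 10; arbiter.grant(cycle, &r); ++cycle)
        std::cout << "cycle " << cycle << ": grant " << r.initiator << "\n";
    // The CPU request is granted first despite arriving last; the GPU and I/O
    // requests still drain because their aging bonus grows while they wait.
}
```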

Feels like a Band-Aid. Another solution that addresses the need for more efficient DRAM in embedded mobile products is Wide I/O. Wide I/O 2 offers very efficient power consumption for a given bandwidth. The challenge with Wide I/O, which was reiterated by many MemCon speakers and during the panel discussion, is the manufacturing cost, reliability and the business model. For this technology to be widely adopted (no pun intended), there needs to be a major company willing to take the lead in order to drive volume and reliability up, and manufacturing cost down. With the consolidation in the applications processor market, it is not clear if and when such a driver will emerge. I was somewhat surprised to hear that most panelists believe that the hybrid memory cube (HMC) will be in volume production before Wide I/O. HMC is also a stacked-die solution with a logic layer offering better bandwidth, lower power, and lower latency. As a stacked-die TSV (through-silicon-via) solution, HMC will certainly have many of the cost and manufacturing challenges facing Wide I/O.

The bottom line is that for the next two years (at least), architecting your SoC around the current DDR and LPDDR roadmap is the only practical choice. LPDDR3/4 provides a good bandwidth/power node, allowing for incremental improvements in SoC power consumption and performance, while avoiding some of the manufacturing risks. For the DRAM industry to start focusing on memory architectures optimized for latency and power consumption, we will have to wait for the embedded markets to clearly overtake the PC and server in memory consumption. Until then, the focus must be on system architectures that improve data flow efficiency in the SoC by using optimized network connections between key processing cores and memory, along with advanced QoS algorithms to manage traffic flow to maximize DRAM utilization.

—Frank Ferro is director of product marketing at Sonics.

Wearing My Computer

Thursday, July 25th, 2013

By Frank Ferro
I have been in a friendly debate with my colleagues (to remain nameless) for some time now about the future of ‘wearable devices.’ The most recent examples are the new Google Glass and the latest incarnation of the smart watch. I’m not a fan of either. I am trying to keep an open mind, however, because my natural inclination is not to overtly wear electronics. I could never clip a cell phone to my belt, and I only wear a Bluetooth headset in the car or when I’m alone in my office.

My concern about these particular devices is that they feel forced on the consumer. I understand that products need to be test marketed, but I really don’t see the use case for either of these products that would cause widespread adoption by consumers. For example, not everyone wears glasses, and if you don’t count cool sunglasses, most of us don’t want to wear them. In the case of the smart watch, many people—young people in particular—already have given up wearing watches because they use their phone. Plus do we really want to worry about charging our phone, watch and glasses all the time? I already have too many charging cords!

To be fair, I understand what is driving the need for these types of devices. According to a recent Internet Trends presentation (Mary Meeker and Liang Wu, KPCB, May 2013), the data show that users reach for their smartphone about 150 times per day. Unfortunately, this is a very believable number. I am as guilty as anyone, although my smartphone use pattern does not seem to fit their data because I mostly check email and access the Web. Clearly, reducing the number of times you have to turn on the large phone display will save battery life, and there is some level of convenience gained by not constantly needing to reach for the phone.

Now don’t get me wrong, there are some interesting wearable devices coming on the market and I am sure many more will continue to appear. A good example is the Fitbit Flex Wristband used for fitness training and/or weight loss. Devices like these are becoming popular because they have a specific use, and work with your smartphone so the user interface is comfortable for the consumer.

My concern about the success of these wearable devices is not just an exercise in being right; like most IP and semiconductor manufacturers, we are looking for devices that will drive the next wave of semiconductor products. The current state of sensor technology is allowing for an increase in the number of sensors per device. The Samsung Galaxy S4 is a good example, with nine sensors on the phone including gesture, proximity, gyro, accelerometer, geomagnetic, temperature/humidity, barometer, Hall, and RGB light sensors. I understand that most of us don’t wear our smartphone, but this is a good platform to show how all this sensor data is changing the way our electronic devices interact with the physical world around us.

In a smartphone there is plenty of processing power to analyze all this sensor data, but as we move to smaller wearable devices, the MCUs needed to process this ‘fusion’ of sensors are increasing in complexity. Data from Semico Research show the growing need for 16- and 32-bit MCUs to work in conjunction with MEMS-based sensors: 16- and 32-bit MCUs will account for more than one-third of the processors embedded with MEMS sensors next year.
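
As a rough illustration of the kind of math behind sensor fusion (a toy complementary filter in C++; the filter constant, sample rate and readings are invented for this example), here is how an MCU might continuously blend gyroscope and accelerometer data into a single tilt estimate:

```cpp
// Toy complementary filter: fuse a gyroscope (fast but drifting) with an
// accelerometer (noisy but drift-free) into one tilt-angle estimate; this is
// the kind of loop a small MCU runs continuously. Constants are illustrative.
#include <cmath>
#include <cstdio>

constexpr double kPi = 3.14159265358979323846;

struct ImuSample {
    double gyro_rate_dps;  // angular rate from the gyro, degrees per second
    double accel_x_g;      // accelerometer X axis, in g
    double accel_z_g;      // accelerometer Z axis, in g
};

class ComplementaryFilter {
public:
    ComplementaryFilter(double alpha, double dt) : alpha_(alpha), dt_(dt) {}

    // Blend the integrated gyro angle (trusted short-term) with the
    // accelerometer tilt estimate (trusted long-term).
    double update(const ImuSample& s) {
        double gyro_angle = angle_ + s.gyro_rate_dps * dt_;
        double accel_angle = std::atan2(s.accel_x_g, s.accel_z_g) * 180.0 / kPi;
        angle_ = alpha_ * gyro_angle + (1.0 - alpha_) * accel_angle;
        return angle_;
    }

private:
    double alpha_;        // closer to 1.0 = trust the gyro more
    double dt_;           // sample period in seconds
    double angle_ = 0.0;  // current tilt estimate in degrees
};

int main() {
    ComplementaryFilter filter(/*alpha=*/0.98, /*dt=*/0.01);  // 100 Hz loop
    // Simulate a device tipping over at roughly 30 degrees per second.
    for (int i = 1; i <= 100; ++i) {
        double true_angle_rad = i * 0.3 * kPi / 180.0;
        ImuSample s{30.0, std::sin(true_angle_rad), std::cos(true_angle_rad)};
        double angle = filter.update(s);
        if (i % 25 == 0)
            std::printf("t=%.2f s  estimated tilt=%.1f deg\n", i * 0.01, angle);
    }
}
```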

From an SoC perspective, sensor fusion is driving up processor complexity and will fuel (no pun intended) many of the new design starts by distributing the load of integrating real-world data across our wearable, everywhere computing devices. Although I may be skeptical about the use models and success of some of these early wearable devices, there is no question that they will shape the future of consumer electronics. I continue to debate my colleagues on the timing and practicality of wearable consumer electronics, but I know in the end, I will be wearing my computer. The real questions are what it will look like, and when.

—Frank Ferro is the director of product marketing at Sonics, Inc.

Life After Smartphones

Thursday, June 27th, 2013

By Frank Ferro
Don’t let the title confuse you. Smartphones are not going away anytime soon. In fact, this year’s smartphone shipments have exceeded feature phone shipments for the first time, with a total of 216 million units in Q1, according to IDC, and the overall mobile phone market is expected to grow 4.3% in 2013. This volume represents an increase in smartphone sales of 42% from Q1 2012.

There is concern, however, that the high-end smartphone market is beginning to show signs of saturation, as previously optimistic sales expectations are being lowered. The market picture is not that simple, with high-end smartphone sales in the United States and other developed countries still doing well, but the battle is now over brand dominance and not necessarily new growth (i.e., consumers swapping back and forth between Android and iOS).

Market growth for smartphones is expected to come from developing countries with midrange or value-based smartphone products. This shift in product mix has the potential to disrupt the current market leaders in both the handset and semiconductor industries. Concern from market leaders is evident as rumors surfaced about lower-cost iPhones, along with the recent announcement by Qualcomm of upgrades to the Snapdragon 200, which is being targeted at entry-level phones in China and emerging regions. This tactic by Qualcomm is clearly in response to growing competition from both Asian and U.S. chip manufacturers targeting low-end smartphones.

Given that smartphones and tablets have been fueling the growth and innovation for the semiconductor industry, it will be interesting to see how the shift in focus to low-end smartphones will affect SoC development. As a consumer, I believe this is a good thing for the SoC industry, because up to this point semiconductor companies have been focused on the ‘race to performance and features.’ This race, without a doubt, has driven much of the SoC innovation, including multi-core CPUs, higher frequencies (>2GHz), more powerful GPUs, along with many new features. These features and functions were necessary as applications processor companies were fighting for market share. They were also necessary because consumers could not get enough new features in their smartphones.

Today, high-end smartphones are maturing to the point where differentiating one product from another is difficult. Adding additional CPU cores or speeding up the clock may not provide a discernible benefit for the user. In fact, it may have the opposite effect by increasing cost and power consumption. So the battle for low-end smartphone market share will require SoC manufacturers to focus on cost, power, product efficiency and time-to-market.

To reduce the overall development cost, a better SoC design methodology is needed. At the recent Design Automation Conference (DAC), Nvidia chief scientist Bill Dally had some very interesting comments on this topic, saying that it is “just scandalous” that SoC design takes as long as it does. He also noted that it may cost $50 million to take a “relatively simple SoC” to first prototype. A key element of the solution he proposed was a better IP ecosystem with hard IP blocks that conform to a standard network-on-chip (NoC) interface, so that IP can be easily connected together. Although I am not completely in agreement on the practicality of creating all these hard IP blocks, I do believe that complete IP subsystems, connected via a network-on-chip, will go a long way toward reducing development time.

Power management also will need to be a key focus if SoC vendors want to be successful in the low-end smartphone market. Many design teams for the current application processors on the market had the best of intentions to aggressively manage power, but schedule pressure and the state of power management technology often left these solutions falling far short of their intended power-saving goals. For the next generation of SoCs, power management has to start at the architecture level, including more power domains with finer-grained control of those domains. Hardware control is needed for much faster on/off times, thus minimizing the need for CPU and software intervention.
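
To illustrate what hardware-controlled power domains could look like, here is a toy C++ model (the states, names and timings are invented, not any specific controller) of a sequencer that fences traffic, isolates outputs and gates power in a few cycles, without waiting for an OS driver round trip:

```cpp
// Toy model of a hardware power-domain sequencer: a small state machine steps
// through fence -> isolate -> power-off (and back) in a handful of cycles,
// instead of waiting for CPU and software intervention. Illustrative only.
#include <cstdio>

enum class DomainState { On, Fencing, Isolated, Off, Restoring };

class DomainSequencer {
public:
    void request_off() { if (state_ == DomainState::On) state_ = DomainState::Fencing; }
    void request_on()  { if (state_ == DomainState::Off) state_ = DomainState::Restoring; }

    // Called once per clock cycle; each step models a short, fixed latency.
    void tick() {
        ++cycle_;
        switch (state_) {
            case DomainState::Fencing:   state_ = DomainState::Isolated; break;  // drain outstanding traffic
            case DomainState::Isolated:  state_ = DomainState::Off;      break;  // clamp outputs, gate power
            case DomainState::Restoring: state_ = DomainState::On;       break;  // power up, release isolation
            default: break;
        }
    }

    DomainState state() const { return state_; }
    int cycle() const { return cycle_; }

private:
    DomainState state_ = DomainState::On;
    int cycle_ = 0;
};

int main() {
    DomainSequencer video;
    video.request_off();  // e.g. the user is only listening to audio
    while (video.state() != DomainState::Off) video.tick();
    std::printf("video domain off after %d cycles\n", video.cycle());

    video.request_on();   // the user opens the camera
    while (video.state() != DomainState::On) video.tick();
    std::printf("video domain back on after %d cycles total\n", video.cycle());
}
```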

Clearly smartphones will be the ‘hub’ of our activity for a long time, and like most consumer products, growth in the early phases of deployment means price reduction through improved cost. Advanced SoC development will certainly continue, as requirements for the tablet market will lead high-end mobile SoC development. For the smartphone market, however, expect to see less influence on SoC design from the high-end segment, which is driven by performance, and more influence from the low-end, which is driven by cost and power consumption. Life after smartphones? That’s a story no one can write yet.

—Frank Ferro is director of product marketing at Sonics.

Multicore: Is More Better?

Thursday, May 30th, 2013

By Frank Ferro
Two cores are better than one, right? It reminds me of those AT&T commercials where they ask the kids, “Who thinks two is better than one?” And of course the kids all yell, two! In another version of the commercial they ask, “What’s better, doing two things at once or just one?” And again they all yell, two! Well, this is a good summary of last week’s Multicore conference, where the conversation focused precisely on these questions: Are two cores (or multiple cores) better than one core, and is doing two tasks (or multiple tasks) at once better than doing one? And if so, how much better?

To answer these questions, the conference brought together hardware and software companies to discuss the challenges of developing efficient multicore hardware and software. Before I get too far ahead of myself, I want to define (or try to define) what multicore means. During the closing panel discussion, I was surprised to learn that there is no Wikipedia entry for ‘Multicore.’ It just says, “May refer to: Multi-core computing.”

I was even more surprised at the debate that ensued after the opening question: “What does multicore (or many-core) mean?” There was agreement (at least) that it means more than one core, but that’s where the agreement ended. To some, multicore means homogeneous CPU cores; to others, it means multiple heterogeneous processor cores; and to others, any mix of cores in the system. My working definition has been, for the most part, multiple heterogeneous processor cores, but I will admit that I sometimes drift to the third definition, meaning any system with ‘lots’ of cores. I will spare you the panel debate over the second question: “define a core.”

There was agreement, however, on the fact that multicore is being “thrust on the masses.” Up to now companies chose to do multicore, but with the slowing down of Moore’s Law, multiple processor cores are the only way to keep up with the performance demands of many consumer applications. Using multiple cores is necessary to stay on the performance curve because the rate of increase in CPU MHz has slowed to the point where this alone is no longer sufficient. “The race to MHz has now become the race to core density,” said Tareq Bustami, vice president of product management at Freescale. According to Linley Gwennap of The Linley Group, about half of the smartphones produced next year will have dual-core processors, and the number of phones with quad-core processors will jump to about 40%.

So how do SoC designers harness the power of all these cores, and when does adding more cores reach a point of diminishing returns? Most existing software has been written for scalar processing (do one thing at a time, in sequence). The challenge for programmers is porting this scalar code to a multicore environment where there are multiple CPU cores running concurrent tasks or acting as a shared resource for any task. Now add specialized heterogeneous processors such as a GPU or a DSP and programming becomes even more complicated. OpenCL is a framework developed to help with the task of writing programs that can execute across multiple heterogeneous cores like GPUs and DSPs. Systems can also add a hypervisor software layer as a way to abstract or virtualize the hardware from the software. This is all good progress, but there still is a lot of work to do from the software perspective to make the most efficient use of the hardware.
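
As a toy illustration of the scalar-to-parallel porting problem, the sketch below (plain C++ with standard threads rather than OpenCL; the names and sizes are invented) splits a sequential loop across however many CPU cores the system reports. A real OpenCL port layers kernels, buffers and command queues on top of the same basic partition-and-combine pattern.

```cpp
// Porting a scalar loop to multiple cores with std::thread: partition the work,
// give each core a contiguous chunk, then combine the partial results.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// Scalar version: one core does everything, one element at a time.
double sum_of_squares_scalar(const std::vector<double>& data) {
    double total = 0.0;
    for (double x : data) total += x * x;
    return total;
}

// Parallel version: one worker thread per reported hardware core.
double sum_of_squares_parallel(const std::vector<double>& data) {
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;
    std::size_t chunk = (data.size() + cores - 1) / cores;

    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            std::size_t begin = c * chunk;
            std::size_t end = std::min(data.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i) partial[c] += data[i] * data[i];
        });
    }
    for (auto& t : workers) t.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main() {
    std::vector<double> data(1 << 20, 1.5);
    std::printf("scalar:   %.1f\n", sum_of_squares_scalar(data));
    std::printf("parallel: %.1f on %u hardware threads\n",
                sum_of_squares_parallel(data), std::thread::hardware_concurrency());
}
```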

In addition to software optimization, another unanimous conclusion during the panel discussion was that memory and the interconnect are also “challenges to be solved” for optimal multicore system performance. Having multiple cores will not do you any good if you can’t efficiently access data. In an attempt to solve the memory bandwidth problem, the L2 and L3 cache sizes have been growing. Of course, this can be an expensive solution because increasing memory adds to the die cost.

Although not mainstream, Wide I/O is specifically designed to address the memory bandwidth issue, and 3D memory is on the horizon. Even with an efficient memory design, moving the data from multiple cores to memory requires an optimized interconnect. Flexible topology structures to match the processor configuration, efficient protocols to maintain performance, virtual channels to support system concurrency, and quality-of-service to manage the competing data flows are must-have features to maximize multicore system performance.

So are two cores (or multiple cores) better than one? For most consumer applications, developers do not really have a choice anymore because all of the new processors have multiple CPU cores. In addition to the CPUs, it is hard to imagine these multi-function applications not having specialized coprocessors. Dinyar Dastoor, vice president of product management at Wind River, provided a good analogy by asking whether having a third eye in the back of your head is better. The answer is yes only if you can process the additional information. The consensus was clear: Without improvements in multicore software, memory architected for increased bandwidth, and an efficient interconnect, two cores may not always be better than one.

—Frank Ferro is director of product marketing at Sonics.

The Power Treadmill

Thursday, April 25th, 2013

By Frank Ferro
The recent purchase of an LTE smart phone has me back on my power management soapbox. I upgraded my phone about a month ago to the newest version (staying with the same manufacturer as my previous device) and to my dismay, although it wasn’t completely unexpected, the battery life was actually shorter. I did not do a ‘scientific’ comparison, but following the same daily use pattern I noticed the battery life percentage indicator was much lower at the end of the day when compared to my two-year-old 3G model.

The reason I say this was not completely unexpected is that in my February 2013 SLD blog (Power Management: Throwing Down the Gauntlet) I cited a recent survey showing that users of 4G phones were less satisfied with battery life than users of 3G phones. This was due to the fact that the radio needs to wake up more often to look for a 4G base station. I also suspect that the larger screen is another key contributor.

Instead of speculating (and complaining) about battery life, let’s take a look at the power profile of a smart phone to determine where we can get the most ‘bang for the buck’ when looking for places to save power. An article in the November 2012 Microwave Journal breaks down the power profile of the major components in a smart phone. As that data shows, overall power consumption has nearly doubled over the last few years. According to the same article, battery capacity has been increasing by about 10% per year for the last few years, so battery technology has not been able to keep pace with smart phone power requirements.

Note that battery life is a function of use-case scenarios, with activities such as voice calls, video calls (e.g., Skype or FaceTime), using a Bluetooth headset, Wi-Fi, watching videos, listening to audio, and so on. Each of these use cases puts different loads on the CPU, GPU, display and the various radios, so the numbers give a general idea of the overall power profile for a given use case.

As suspected, the radio takes a reasonably large percentage of the power (23%), but the rate at which RF power has been increasing with each new technology node is relatively low, at 11%. The largest rate of increase has been in the display, at 300%, which should not surprise anyone given the size and resolution of smart phone displays. And consider that these figures do not take into account some of the most recent smartphone models, which have even larger displays with better resolution.

Looking next at the processor and peripherals we see that together they account for more than half of the power consumption, so clearly targeting these components for better power management will have a significant benefit to the overall battery life. The problem, however, is that new smartphone processors keep increasing in speed and adding more processor cores and GPUs, so the power treadmill is not slowing down.

Is help on the way?
One obvious solution is better battery technology. Recent research claims that lithium-ion micro-batteries will provide a 10x power improvement or be 10 times smaller than today’s batteries—take your pick. Given that these batteries are still in the research stage, don’t expect to see commercial products anytime soon. Consequently, we need to look at the silicon for some immediate relief.

Silicon providers traditionally have relied on process technology progression to reduce power (usually with lower operating voltage), but at 40nm process nodes and smaller there is limited help because leakage power has become difficult to control. If not properly managed, leakage power can exceed dynamic power. New process techniques such as silicon on insulator (SOI) have helped, and the new FinFET technology offers improved leakage, but we have to wait a bit longer for full production of FinFETs.

So where can we expect to get the most immediate and largest improvements in silicon power consumption? The biggest gains can be achieved with architectures that comprehend power management from the start. SoC designers must incorporate power management techniques early in the design phase as a fundamental part of the architecture, and not look for power optimization later during the silicon implementation phase. Techniques such as power shutoff, adaptive voltage scaling (AVS), dynamic voltage and frequency scaling (DVFS) and clock gating are in fact being used in various combinations in the latest smartphone SoCs. These techniques are good, but are they enough to keep up with the treadmill?
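
As a sketch of how a DVFS policy can work in principle (plain C++; the operating points and demand numbers are invented, not from any production governor), the idea is to pick the lowest voltage/frequency pair that covers the observed load, since dynamic power scales roughly with V² × f:

```cpp
// Toy DVFS governor: choose the lowest operating point whose frequency covers
// the observed demand, and compare relative dynamic power (~ V^2 * f). The
// operating points and demand figures are invented, not from any real SoC.
#include <cstdio>
#include <vector>

struct OperatingPoint {
    double freq_mhz;
    double voltage;
};

// Table ordered from slowest/lowest-voltage to fastest/highest-voltage.
const std::vector<OperatingPoint> kOpp = {
    {300.0, 0.85}, {600.0, 0.95}, {1000.0, 1.05}, {1500.0, 1.15},
};

// Pick the slowest point that still meets the demanded MHz.
OperatingPoint pick(double demand_mhz) {
    for (const auto& opp : kOpp)
        if (opp.freq_mhz >= demand_mhz) return opp;
    return kOpp.back();  // saturate at the fastest point
}

// Dynamic power scales roughly with capacitance * V^2 * f; treat C as constant.
double relative_power(const OperatingPoint& p) {
    return p.voltage * p.voltage * p.freq_mhz;
}

int main() {
    for (double demand_mhz : {150.0, 550.0, 1400.0}) {  // idle, video, gaming
        OperatingPoint p = pick(demand_mhz);
        std::printf("demand %6.0f MHz -> run at %6.0f MHz, %.2f V (relative power %4.0f)\n",
                    demand_mhz, p.freq_mhz, p.voltage, relative_power(p));
    }
}
```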

Other than CPF and UPF, which only specify power intent, there is no standard methodology for implementing a power management architecture. For example, AVS and DVFS lower power consumption, but at the cost of increased system complexity. Therefore, without a standard methodology, AVS and DVFS are used sparingly in the system to trade off design complexity with power savings. In addition to the hardware, software complexity also increases as more aggressive power management is applied to the system. To take full advantage of these and other power-saving techniques, design tools and IP are needed that allow SoC designers to deploy better power management without the design risk. Applying a standard methodology will simplify development, especially for design teams that are not familiar with power management, and increase their ability to verify the functionality and performance of the power management network.

So maybe (just maybe), two years from now when my phone contract expires, I will be able to purchase a smart phone that actually has longer battery life. This will only be possible, however, with a combination of improved battery technology, process technology and better SoC power architectures. Given the SoC design cycle time, better SoC power architecture work needs to start right now in order for these SoCs to be in smart phones by March 2015—my current phone contract expiration date—or I will have to wait another two years…

—Frank Ferro is director of product marketing at Sonics.

The Business Of Things

Thursday, March 28th, 2013

By Frank Ferro
The Internet of Things (IOT) will create $14 trillion in business opportunities, according to Cisco. Unless you are a government accumulating debt, most of us think that’s a big number—and a big opportunity. The much-quoted “50 billion connected devices to the Internet by 2020” forecast is the impetus driving companies in all parts of the ecosystem, including infrastructure, applications, services, systems, and semiconductors, to position themselves for a share of this market.

Although much of the high-tech growth in recent years has been centered around connected consumer devices, with 1 billion units shipped in 2012 and an estimated 4.5 billion screens connected to the Internet in 2016, these markets are maturing and consolidating. The HDTV market has matured, smart phones are next, and tablets will not be far behind. As a result, both the winners and losers in these markets are looking at the IOT as a way to leverage their technology investments.

The IOT market is, in fact, becoming a reality as new products and applications expand beyond vertical markets, making their way to the consumer. Our familiarity (affection may be a better word) with smart phones and tablets, along with the cloud infrastructure that makes these devices so useful, are enablers for the IOT. These devices provide an easy and intuitive interface to a wide range of technology products, which up to this point have only been envisioned. I am sure that your cable or Internet service provider has tried to get you to add home security to your system. These systems will allow you to monitor and control your home from any mobile device. Even my pool service company wants to sell me a controller with Wi-Fi so I can control and monitor the pool from my smart phone. I can fire up the spa on my way home from work, but of course then I would need a smart blender to prepare the margaritas!

These two simple examples begin to give us a sense of just how big this market can be. Basically any device that can connect to the Internet is fair game. This is a multifaceted challenge and is difficult to get your arms around. As briefly mentioned, there are many vertical market segments such as health care, industrial, transportation, energy, consumer and home, retail, IT and networks, to name only a few. There are also many layers of technology to deal with such as sensors, microcontrollers, power management, energy harvesting, systems, applications and infrastructure. The requirements and challenges for each of these market segments will vary, including cost, power consumption and performance.

The real question then is how can SoC companies create a successful business model around the IOT? Having a pool controller or security system that is connected to the Internet is nice, but how many of these products are sold per year? Last year for example, there were about 1 million cars sold with Wi-Fi connectivity and the number is projected to be 7.2 million in 2017. This is healthy growth, but when compared to the >500 million smart phones with Wi-Fi, this is a relatively small market. I am using connectivity (Wi-Fi in this case) as a proxy for these segments, but the same volumes generally apply to the underlying controllers, as well. Plus, the turnover rate in many segments of the IOT is much slower, with consumers owning products for seven years or longer and only one product per household versus many connected devices per person.

Because these markets are so segmented, SoC development cost no longer can be $100 million per generation if you expect to run a successful business. Chip development cost will need to be significantly lower (~one tenth) and be based on an architecture and design methodologies that are flexible enough to support a range of market requirements. Microcontroller companies have had to deal with this challenge for years, and more recently some Wi-Fi companies have adapted to these challenges. As the IOT evolves however, more complexity is being pushed closer to the end device so the requirements are no longer a simple sensor and controller.

To create SoCs that support this increasing level of complexity (e.g. low power for one application, high bandwidth for another) at a low design cost, a strategy needs to be developed that includes architecture, IP and design methodology. For example, several companies already have adopted on-chip network IP as a design methodology that provides standard interfaces with universal connectivity for IP cores from multiple vendors. Using this design approach allows IP cores to be quickly and reliably added or removed from the SoC without any significant design work because each core is isolated from the rest of the SoC. With this IP, SoCs can be quickly adapted with very little design cost to support multiple market segments along with changing design requirements.

Another good example is power management. Today this is done in an ad hoc fashion with no uniform design methodology. Some companies look to process technology and clock gating for low-power designs, others look to better architectures, and still others use design techniques such as DVFS, and some use all of the above. IP and EDA tools that can provide a unified methodology with standard power interfaces (beyond CPF/UPF) will save cost, development time and allow for chips with much lower power consumption.

It is good that semiconductor companies are talking about the IOT market as the ‘next big thing,’ but they need to take a serious look at the business model and the chip design methodologies required to support these wide ranging market segments if they want a piece of the $14 trillion pie.

—Frank Ferro is director of product marketing at Sonics.

Power Management: Throwing Down The Gauntlet

Thursday, February 28th, 2013

By Frank Ferro
The recent burst of articles challenging smart phone battery life has me asking the question, “Are we ready to turn the corner on power consumption?” About two years ago I was bemoaning the fact that we are willing to live with a smart phone that gets only one day of battery life (Powering Forward or Moon Walking). As of today, nothing has changed. We still need to charge the phone every day. Recent processor announcements continue to be about adding more CPU cores (i.e. more performance). Not to pick on any one company, but did the announcement of an 8-core processor significantly change the smart phone? Is this product creating anticipation in the market for a new processor with 16 cores? Not really.

For most of us, all we want is a smart phone that has a reliable voice connection with a fast Internet browser and decent battery life. Okay, I watch short video clips on my phone and use the maps, along with a few cool apps, but do we need HD quality on the small screen? Even as Mobile World Congress is kicking off in Barcelona this week, I saw that ST-Ericsson announced its new NovaThor L8580, the first smart phone processor to hit the 3GHz mark. Putting aside the debate about whether and when a 3GHz processor is needed in a smart phone, speed still is getting attention.

There is hope, however, for those of us who don’t want to always carry around a power cord. Google and Motorola are making noise about upcoming products that will focus on battery life. Google CEO Larry Page said, “Battery life is a huge issue. You shouldn’t have to worry about constantly recharging your phone.” Consumers also are weighing in (finally), expressing their dissatisfaction with battery life. A recent J.D. Power survey of smart phone users found that battery performance is becoming a critical factor in overall product satisfaction. The report states that “satisfaction with battery performance is by far the least satisfying aspect of smartphones.”

Another interesting aspect of the report is that users of 4G phones gave battery performance lower rankings than 3G users. 4G phones apparently need to ping the base station more often looking for a 4G connection, and there are fewer of them than 3G base stations. Although this may be a temporary situation as 4G proliferates, early testing of voice over LTE (VoLTE) shows a significant reduction in battery life when compared to CDMA, so we are still in an uphill battle.

On the semiconductor side, companies will continue to compete with high-end SoCs that are loaded with features. However, recent consolidation of the application processor market is the first sign that these SoCs are reaching initial levels of product maturity. As with most product cycles, the goal for first- or second-generation products is to grab market share by getting to market quickly. In these early-generation products, there is typically not much care taken to be gate- and power-efficient. At the product level there are also signs that the smart phone market is starting to mature with the release of the first midrange and value smart phones. This clearly will open up opportunities for the major SoC players to do cost and power reductions. It also will open up new opportunities for other SoC vendors that missed the initial market cycle to compete.

Product shrinks and removing features certainly will help power consumption as gate counts go down (or at least are not going up). In addition, current power management techniques—such as power switching, including dynamic voltage and frequency scaling—provide power savings, but is this enough? As SoCs are redesigned to meet the requirements of a segmenting smart phone market, this is a great opportunity for chipmakers to adopt much more aggressive power management techniques. For example, these complex SoCs include a collection of subsystems with multiple power and clock requirements that are grouped by ‘domains.’ These domains can be turned on or off based on the expected use cases (e.g., when I am listening to music I want video and all radios asleep), thereby consuming as little power as possible. Due to software complexity and interdependencies between domains, however, the number of domains that can be controlled is limited. Less domain control means that more parts of the chip are on. In addition, the switching speed at which these domains can be turned on or off needs improvement. The current ‘top down’ software-controlled view can be relatively slow, again leaving domains on much longer than necessary.
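
To make the use-case-driven domain control concrete, here is a minimal sketch (plain C++; the domain and use-case names are invented) of a policy table that leaves only the domains a given scenario actually needs powered on:

```cpp
// Toy use-case -> power-domain policy: each scenario lists only the domains it
// needs; everything else becomes a candidate to switch off. The domain and
// use-case names are invented for illustration.
#include <cstdio>
#include <map>
#include <set>
#include <string>

using DomainSet = std::set<std::string>;

const std::map<std::string, DomainSet> kUseCasePolicy = {
    {"music_playback", {"audio_dsp", "storage"}},
    {"video_call",     {"audio_dsp", "video_codec", "camera_isp", "modem", "display"}},
    {"standby",        {"modem"}},
};

void apply_use_case(const std::string& use_case, const DomainSet& all_domains) {
    const DomainSet& needed = kUseCasePolicy.at(use_case);
    std::printf("[%s]\n", use_case.c_str());
    for (const auto& d : all_domains)
        std::printf("  %-12s -> %s\n", d.c_str(), needed.count(d) ? "ON" : "OFF");
}

int main() {
    const DomainSet all = {"audio_dsp", "camera_isp", "display",
                           "gpu", "modem", "storage", "video_codec"};
    apply_use_case("music_playback", all);  // display, radios, GPU, camera all off
    apply_use_case("standby", all);
}
```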

The good news is that the market will force SoC manufacturers to get much more aggressive about power management. The J.D. Power report also indicated that smart phone owners who are highly satisfied with their device’s battery life are more likely to repurchase the same brand of smart phone, so better power management is now a real competitive issue. Current SoC leaders must make it a priority to innovate around power management, implementing much more aggressive power-saving techniques—or they run the risk of leaving the door wide open for competitors. The power gauntlet has been thrown down.

—Frank Ferro is director of product marketing at Sonics.

The CES Effect

Thursday, January 31st, 2013

By Frank Ferro
CES draws a lot of attention. Everyone wants to be first to see the latest and greatest consumer products. If you don’t mind squeezing through the crowd, you can glimpse the startling picture quality of an OLED TV. Never mind viewing the quality of a 4K Ultra HDTV, at CES you can skip a generation and see what an 85” 8K UHDTV looks like. Talk about resolution! You also can explore a working smart home connected by a host of products enabling the “Internet-of-Things,” see products that can sling video from your phone to other screens, and then see robots clean windows. You can even use your brain waves to control toy helicopters and kitty ears. And the list goes on.

This is all fun, but CES is also a place where you can collect valuable data points on markets, products and companies. Careful observation will help get answers to the following questions:

  1. Is a product going to be real in the market, and when?
  2. What’s the strategy of leading consumer and semiconductor companies?
  3. What’s coming next?

All the hype, discussion and speculation around these questions I like to call the ‘CES Effect’.

What is real? One of the hits of this year’s show was the 4K UHDTVs. There is no question that these TVs are going to find their way into consumers’ homes. The only question is when. I remember when HDTVs first appeared at CES in the late ’90s at a cost of about $10K. I knew that it would be a long time before one would show up in my home. Ten years later, in 2009, I purchased my first HD set for about $600. Cost was not the only factor that limited widespread HD adoption; it also was limited by the available content and lack of infrastructure.

A very similar discussion is now taking place with regard to UHDTV: Where is the content? Can the infrastructure handle the higher resolution? Higher frame rates are needed to view sporting events, HDMI 2.0 is required, and so on. Given this, and the price tag, it will be a few more years before UHDTVs are adopted by consumers. Technologies like H.265 will certainly help the deployment by providing similar or better quality with about a 50% reduction in media file size. I am sure that when my current HD set is on its last legs (hopefully five to six years from now), I probably will have no choice but to purchase a 4K set, because these will eventually overtake existing HD technology.

What’s not real, on the other hand, are 3D TVs. Yes, they have been at CES for a few years now, and maybe it is just me, but the user experience seems to be getting worse, not better. Not to ‘toot my own horn,’ but about a year ago I predicted that we are not ready for 3D because there is not a practical consumer use case. Even for movies, my wife and I will not pay extra to see the 3D version, preferring the 2D instead. 3D will remain a novelty for games or special applications, without the widespread adoption that was expected. Actually, if you want a real ‘3D’ experience, go and view the 8K-resolution UHDTVs. The depth and clarity of the picture give the impression of three dimensions. Unfortunately, I will have to wait even longer to get one of these. Gesture recognition is another technology that was hyped a year ago but was basically absent this year, for reasons similar to 3D—lack of a scalable use model for the consumer (also discussed in my December 2011 SLD blog).

Just when CES was starting to feel like a “mobile show,” this year the clock was turned back to a more traditional mix of consumer electronics, with only a handful of smart phone announcements. Perhaps companies are holding their announcements for Mobile World Congress in February. Even so, it is clearly a sign that the smart phone market is maturing and there is less jockeying for position.

Providing an interesting dichotomy to the show were a number of processor announcements from Intel, Nvidia, Qualcomm, Samsung and ST-Ericsson—a dichotomy because you can see iPhone cases next to semiconductor booths. At a consumer show, do buyers from big-box stores care about eight CPU cores or 72 GPU cores? Maybe the PC market has trained the consumer to just know that a dual-core processor is better than a single-core and a quad-core is better than a dual-core.

In any case, semiconductor companies are ‘leaning forward’ with very aggressive designs to cover a range of markets. The Tegra 4 from Nvidia, for example, with four ARM Cortex-A15 CPU cores and 72 GPU cores, is targeting the gaming and tablet markets with enough power to support 4K (UHDTV) output. Similarly, the Snapdragon 800 from Qualcomm will support higher-end gaming, augmented reality and 4K content. The Samsung Exynos 5 Octa uses ARM’s big.LITTLE architecture with four Cortex-A15s (big) and four Cortex-A7s (LITTLE) in order to save significant power over the previous quad-core version. Intel, on the other hand, is targeting value smart phones with its Lexington platform and is giving a ‘heads-up’ on Clover Trail+ along with a new 22nm Atom-based design.

If I can boil all this data down, the ‘CES effect’ on the SoC world is the need for more performance, higher complexity and longer usage per charge (lower power). This should not be a big surprise to anyone tracking the SoC market. The consumer’s demand for all these high-tech gadgets is unrelenting, and the pace of SoC development is not letting up anytime soon. I also could add to the list lower SoC cost (both development and product cost) and better execution (TTM). To keep up this pace, contributions are needed from all parts of the semiconductor ecosystem, including better IP, improved system architecture and analysis tools.

And P.S.: If I see another Dick Tracy watch at CES (which I did) I will scream. Give up already!

—Frank Ferro is director of product marketing at Sonics.

The Network Is The SoC…

Wednesday, December 19th, 2012

By Frank Ferro
SoC design continues to challenge semiconductor and system companies in their pursuit to create a better user experience for a wide range of products. Given this, I was pleasantly surprised to see that two of the “Ten technologies that will change the world in 2013,” according to EETimes (December 2012 issue), were SoC-related.

One is virtual SoC prototypes and the other is IP subsystems. These technologies are right up there in the top 10 list with heterogeneous networks, gesture recognition and 3D printing (which, by the way, I struggle to ‘wrap my head around’ because this is a real Star Trek replicator!). Both virtual SoC prototypes and IP subsystems are making such lists because they are now necessary pieces in the SoC design puzzle. The complexity of SoCs designed in 28nm process technology and below is becoming too unwieldy for design teams to manage as more and more functionality is crammed onto the die. Note that 3D FinFET transistors also made the top 10 list (14nm and below).

Having the ability to create virtual prototypes addresses not only SoC complexity, but also time-to-market pressure, by pipelining software development in advance of silicon. Virtual prototypes can be a cost-effective alternative to FPGA emulators for hardware and software development. However, they also can be used in conjunction with FPGAs for hardware testing and third-party IP integration. Clearly defining the architecture based on a more detailed understanding of the system’s performance behavior, in advance of the SoC implementation, will save time and cost during the implementation phase, ensuring the SoC meets design specifications.

Along with virtual prototypes, IP subsystems are clawing their way out of an esoteric world as they emerge as a key component in a complex SoC design strategy. IP subsystems are a way to ‘divide and conquer,’ where advanced functions such as graphics, audio or video are addressed by the subsystem. The advantage of this approach is that these functions can be tested and verified at the unit level, then integrated with the top-level SoC functions. Another advantage is that subsystems are available as commercial IP blocks from multiple vendors, making for good competition. Plus, the expertise for these functions does not need to exclusively reside ‘in house.’ Semico Research predicts 25% of the SoCs that ship next year will include subsystems, with this number increasing to more than 65% in 2015.

SoC Design is Fabric Design: As collections of subsystems begin to make up a larger percentage of the SoC, integrating these subsystems along with other IP components is the real challenge. A customer recently noted that the speed and success of an SoC program is tightly coupled to their ability to do the fabric design (or the on-chip communications network). As a supplier of on-chip networks, we certainly find it encouraging to hear customers elevate the importance of this IP in their SoC methodology, equating it with the success or failure of a program. Fortunately (or unfortunately), this is true because the network touches every aspect of the SoC design, from early architecture exploration all the way through to back-end layout. So the on-chip network is not only a critical IP block connecting all the cores in the system, it also is a tool for architecture exploration and performance analysis. And finally, it is a platform methodology that allows the rapid and repeatable assembly of the SoC, enabling design teams to meet rapidly changing market requirements.

PPA: Understanding tradeoffs around performance, power and area (PPA) is essential to ensure that architectural intent can be realized in silicon. Connecting so many cores and subsystems together creates natural contention points in the network which, if not managed, will mean poor performance for the various usage scenarios or complete failure of the SoC. To answer these PPA questions, RTL or SystemC models of the on-chip network allow the SoC architect and designer to model and analyze critical data paths in order to optimize the system (e.g., optimize buffer sizes and minimize wires). Architectural features in the network, such as virtual channels, QoS, and true non-blocking flow control (not simply request and response pipelining), provide the concurrency necessary to keep performance up and the gate count down. Features such as virtual channels also help with the back-end layout implementation because the logical network design is separated from the physical layout, thus avoiding performance problems late in the design as components are shifted on the die.
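
As a back-of-the-envelope illustration of why virtual channels and non-blocking flow control matter, the toy C++ model below (not SystemC, and not any particular fabric; names and timings are invented) compares a single shared queue, where a stalled bulk transfer blocks the CPU request behind it, with a per-class virtual channel that lets the CPU request proceed immediately:

```cpp
// Toy model of head-of-line blocking at a shared link: with a single queue, a
// stalled bulk (GPU) packet delays the CPU packet queued behind it; with a
// virtual channel per traffic class, the CPU packet proceeds immediately.
#include <cstdio>
#include <deque>
#include <string>

struct Packet {
    std::string cls;  // "CPU" or "GPU"
    int ready_cycle;  // first cycle at which the destination can accept it
};

// Drain a queue strictly in order; a packet that cannot yet be accepted stalls
// everything behind it. Returns the cycle at which the watched packet leaves.
int delivery_cycle(std::deque<Packet> q, const std::string& watch) {
    int cycle = 0;
    while (!q.empty()) {
        if (cycle >= q.front().ready_cycle) {
            if (q.front().cls == watch) return cycle;
            q.pop_front();
        }
        ++cycle;
    }
    return cycle;
}

int main() {
    // The GPU packet cannot be accepted until cycle 50 (its buffer is full);
    // the CPU packet behind it could have gone at cycle 0.
    std::deque<Packet> shared_queue = {{"GPU", 50}, {"CPU", 0}};
    std::printf("single shared queue: CPU delivered at cycle %d\n",
                delivery_cycle(shared_queue, "CPU"));

    // With virtual channels, each class has its own queue and flow control, so
    // the stalled GPU packet no longer blocks the CPU packet.
    std::deque<Packet> cpu_vc = {{"CPU", 0}};
    std::printf("CPU virtual channel: CPU delivered at cycle %d\n",
                delivery_cycle(cpu_vc, "CPU"));
}
```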

Mainstream: SoCs are now the critical component for leading-edge products in all the major market segments (consumer, communication, networking, enterprise, automotive). Successful SoC execution therefore is key to the success of both system and semiconductor companies, and hence the visibility. A better SoC methodology built around the on-chip network fabric is necessary to improve IP integration, help meet performance goals, and avoid back-end layout problems (timing closure). Being on a top ten list is nice, as long as your SoC is the top seller.

—Frank Ferro is director of product marketing at Sonics.

Open IP Development Tools

Thursday, November 29th, 2012

By Pascal Chauvet
How much time have you wasted trying to understand software tools by deciphering the logic of their creator? I always find it very frustrating to be limited by features and tool capabilities that do not do exactly what I want, or which do not work at all with my other applications. We are engineers! We can learn and adapt, but we often want to be able to extend and improve the tools we are using. Why is that not always possible?

Adding or replacing EDA tools from different vendors in your design flow does not have to be a headache. It should never force you to make major modifications to your methodology and overall environment. So how is that achieved? Enforcing the support of standards for tool interoperability is an obvious first step.

In the world of SoC architecture exploration and platform assembly, the IP-XACT standard, despite its flaws, has been widely adopted. IP-XACT also is used to ease IP integration. Similarly, IP model interoperability has benefited from SystemC TLM 2.0. For performance analysis and system debugging, UVM transaction recording and SCV transaction recording have made it easier to share instrumented models or RTL monitors to analyze simulation results.
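
As a small example of the interoperability that TLM 2.0 buys you, the sketch below (assuming the open-source Accellera SystemC/TLM library; the module names and the 10 ns latency are invented) shows an initiator and a target that know nothing about each other's internals, yet communicate through the standard generic payload and blocking transport call:

```cpp
// Minimal SystemC TLM-2.0 sketch: an initiator and a target that share nothing
// but the standard generic payload and blocking transport interface.
#include <iostream>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

struct Memory : sc_core::sc_module {
    tlm_utils::simple_target_socket<Memory> socket;
    unsigned char storage[256] = {};

    SC_CTOR(Memory) : socket("socket") {
        socket.register_b_transport(this, &Memory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        unsigned char* ptr = trans.get_data_ptr();
        sc_dt::uint64 addr = trans.get_address();
        if (trans.is_write())
            storage[addr] = *ptr;
        else
            *ptr = storage[addr];
        delay += sc_core::sc_time(10, sc_core::SC_NS);  // modeled access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct Cpu : sc_core::sc_module {
    tlm_utils::simple_initiator_socket<Cpu> socket;

    SC_CTOR(Cpu) : socket("socket") { SC_THREAD(run); }

    void run() {
        unsigned char data = 42;
        tlm::tlm_generic_payload trans;
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x10);
        trans.set_data_ptr(&data);
        trans.set_data_length(1);
        trans.set_streaming_width(1);
        socket->b_transport(trans, delay);  // works with any compliant target
        std::cout << "write completed, modeled delay = " << delay << std::endl;
    }
};

int sc_main(int, char*[]) {
    Cpu cpu("cpu");
    Memory mem("mem");
    cpu.socket.bind(mem.socket);
    sc_core::sc_start();
    return 0;
}
```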

Modularizing functionality so that it can be shared across a common software platform such as Eclipse opens up new opportunities for tool interoperability and integration.

Scripting capability built around the base commands of any tool transforms it into a very powerful application in the hands of its user. The most successful EDA tools have such a customization layer.

The de facto language for user-level scripting in the EDA industry is TCL. Many CAD departments have managed to build a complete infrastructure around their tool flow with TCL. I believe it is safer to stay away from any wholly proprietary language, or even any more exotic language, as these defeat the purpose of language unification.

The support for industry standards, along with the scripting capability of tool environments, defines what is called the “openness” of these environments. The more “open” the tools, the easier it will be to use them together and to adapt them to your needs.

EDA vendors are not the only companies building CAD tools for their users. Tools built by IP providers are often underestimated and should also be subjected to close scrutiny about openness. The more configurable an IP, the more sophisticated will be the tools associated with it. Memory subsystems and on-chip communication networks (interconnect or network on chip) are perfect examples of highly configurable IP. Ironically, even if these complex IP products are architected and designed to be easily interfaced with all other IP cores in a system, their tools may not be built with the same objective in mind.

Architecting and assembling a large SoC implies intimate knowledge of all the IP components that compose the system. That is why it can be extremely challenging for an EDA vendor to build such design environments. Until recently, the big 3 (Cadence, Synopsys and Mentor Graphics) had not shown much interest in tools for architects, or even tools for platform assembly. Perhaps the numbers for the ESL market were considered too small to be taken seriously.

EDA vendors tend to build new tools starting with very broad objectives. They want to determine whether the tool creates any interest, but unfortunately these vendors usually barely scratch the surface. It is not until they work with a lead customer and address customer-specific requests that they refine the implementation. Openness, and more precisely scripting, is a must so that users can add their own “know-how” to the tool.

IP vendors, on the other hand, have full knowledge of their IP, but they will often sacrifice having an open environment in order to limit dependency on external elements out of their control. This approach is indeed a safer, easier, and faster way to get a tool out that addresses your IP needs. But does it really help a customer to achieve their goals?

Forcing architects to systematically translate the requirements and constraints of the large system they are building into IP-specific ones is an inefficient task. At the end of the day, the entire SoC has to perform as expected so it can implement all the supported applications.

Buyer Beware. Any company evaluating a new IP should pay close attention to these tooling aspects. It is necessary to look beyond the mere “eye candy” UI. Ask yourself these questions: How will this tool play with the rest of your environment? And will you be able to extend it and mold it to your needs? Always be wary of vendors that assume they know more than you do.

—Pascal Chauvet is an application architect at Sonics.
