Jobs’ Law

September 22nd, 2011

By Jim Hogan
Every year a few prevailing themes seem to emerge in chip design and EDA around the mid-summer, post-DAC timeframe. Usually they revolve around some high-level design tool concept such as electronic system-level (ESL) design, some aspect of SoC power optimization or design for manufacturing (DFM), or a wave of tools being introduced in some ‘new’ area. This year the chatter was the least tool-centric in recent memory, which may not necessarily be a bad thing. In fact, it may be a sign of maturity that the EDA industry is focusing on the big-picture challenges faced by IC and system designers.

In my mind, the biggest challenge is the tremendous shift toward true system-on-chip (SoC) design. A year sometimes can make a huge difference. Yes, we’ve talked about SoC design for years, but it’s really starting to evolve into the next layer of design abstraction. One proof point alone should make that clear: Microsoft talking about its SoC strategy at CES earlier this year. Who would have thought that a consumer software company could even spell SoC? But that’s the key behind the push toward more mainstream adoption of what we in the EDA industry have almost taken for granted. It’s all about consumer markets and it’s all about software applications. Interestingly, it appears the systems companies have realized that SoCs are a way to capture more value and to differentiate their system products, even make them exclusive. No one understands this better than Apple.

Growing up in Silicon Valley, I have witnessed much of the electronics revolution firsthand. The Apple story has been told by many. I recall the Steves starting Apple and the Apple II being what I considered an interesting toy. Steve Jobs had another vision. Jobs has always been one step removed from me in my network. Lots of my contemporaries joined Apple as it grew, but I never really thought of it as an innovator. It assembled interesting bits and pieces from elsewhere, like the mouse and the graphical UI. Steve left and started NeXT, and my brother-in-law joined them from SGI. He raved about the UI and the graphics capability. I was using Sun and SGI workstations at the time and tried a NeXT. It had a great UI but was slow as hell when I loaded a design. Again I wrote it off as a toy.

As everyone knows, Jobs returned to Apple and made the NeXT OS and UI the foundation of the modern Mac. The Mac is and was a huge success. It got some horsepower and became a great graphics workstation for creative work like media. Apple had also launched the Newton PDA. It was a good idea, but you couldn’t make it work worth a darn.

Jobs showed us all that computers were as much a fashion statement as a tool when he launched the iMac in designer colors.

I know a bit about the development of the iPod and it wasn’t so much a technical masterpiece as a collection of parts that Jobs saw as a way of servicing the consumer. I finally had to say he was the real deal when he launched iTunes and showed us all what the Internet can do in terms of creating and servicing consumer needs through a more efficient marketing approach.

Thus, after years of dismissing Apple as a toy supplier, I think I have finally come to realize why Steve Jobs is indeed the master. I think we can coin a Jobs’ Law, much as we have Moore’s and Amdahl’s Laws. I believe Jobs’ Law is that the user experience is never compromised. In other words, Apple will, for example, spend more of its bill-of-materials cost on a display; all Apple devices communicate transparently with each other (it turns out Samsung products do too, hmm?); the devices themselves are stylish and handsome; and they act as portals to all the things a consumer could want.

How does Jobs’ Law affect what we do in SoCs? Let’s start at the handoff to the SoC (what I’m trying to describe as SoC 3.0), or System Realization.

With SoC 3.0, software is king and programmability is the key, a departure from the hardware-focused era of gates-and-switches chip design. Application software defines the differentiated value of the system for the consumer.

It seems the industry has agreed that a critical requirement is boot software that runs in 1.5 seconds. The real-time BIOS (bare-metal software) is what allows the hardware and software worlds to interact. There were more than a few comments around that this year. My guess is that next year at DAC we will see more than a few people talking about it as the discussion moves up to System Realization. But today’s problem is SoC Realization, or the bridge from system-level design to Silicon Realization (or, as many of us like to call it, EDA Classic).


Of course, the software still requires the underlying hardware, which is what keeps the EDA and IP industries very relevant, especially when you look at the massive costs and development times required to develop an SoC. Higher-level design methods and design re-use are an absolute necessity, and the IP industry will flourish as a result. But what we saw at this year’s DAC, and really over the past year or so, is a realization (no pun intended) that the SoC methodology, as traditionally defined by the EDA supply chain, still needs work in order to deliver on the promise of SoC, and is thus rich with opportunities.

On one end, the detailed process for implementing complex designs in advanced silicon is well understood and is served adequately by traditional EDA tools and their tight connection to device physics and manufacturing. That’s not to say there aren’t some interesting challenges, especially as you get closer to silicon, but the core design methodology is in place, trusted and understood. New tools will continue to address the needs of ever-smaller process geometries. This remains the perfect storm for five-man startups.

On the other end, the process of conceptualizing and analyzing designs at a system level, at a high level of abstraction and without the restrictions of physical operating constraints, is also well proven, albeit somewhat less rigidly defined. Ideally, this system-level model would be available to software application developers before the SoC is actually manufactured, so they can test and debug the software and uncover any SoC architectural problems.

Operating at these two different levels of abstraction, most often with (at least) two different teams, introduces a variety of risks and design management challenges. The most fundamental challenge is ensuring that what is intended at the highest level of abstraction actually gets implemented in silicon by the steps performed at lower levels of detail. Ensuring architectural intent, or convergence, and making sure nothing gets “lost in translation” is thus a major issue within the current SoC design flow.

This void is an under-served, emerging link called SoC Realization, where important system-level information must be transformed to the next level of abstraction and analyzed, and where functions (both hardware and software) are assembled and implemented in an optimized way in silicon. It is here that an SoC has the most degrees of freedom for optimization and where critical decisions are made on issues such as which building blocks are used, what operating characteristics the SoC will have, and whether it will work as intended once committed to a silicon device. It is the cockpit for guiding the design from concept to implementation and for ensuring design fidelity from one level to the next. SoC Realization represents the next natural and necessary step up in abstraction from existing EDA methodologies, and a necessary bridge from the more abstract world of system-level design that is helping drive SoC-enabled products.

Business models will likely change as well. It isn’t a foregone conclusion that EDA’s time-based licensing makes sense in SoC Realization. The use of cloud computing will evolve in the next year, and we may actually start to see software as a service (SaaS) become part of the SoC Realization story. I think some design tools are ideal applications for the cloud. The cloud solutions out there today charge you $5/hour/computer, and they charge a lot for data transfer. So applications with a relatively small set of constraints and inputs are ideal: you transfer a megabyte of information, during run time you create a terabyte of data on the cloud, and then you transfer back a megabyte of results. Good applications, for example, are field solvers that generate a ton of run-time data to solve Maxwell’s equations, and applications that require no proprietary information on the cloud. At least for now that rules out things like SPICE because of the need for proprietary data like process rules.
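
To put rough numbers on that profile, here is a back-of-the-envelope cost model in Python. It is a minimal sketch assuming the $5/hour/computer rate above and an illustrative $0.10/GB transfer price; the machine count and run time are my assumptions, not figures from any provider.

```python
# Toy cloud cost model for a field-solver-style EDA job.
# The $5/hour/machine rate comes from the text; the transfer
# price, machine count and run time are illustrative assumptions.

def cloud_job_cost(machines, hours, gb_in, gb_out,
                   rate_machine_hour=5.00, rate_gb_transfer=0.10):
    compute = machines * hours * rate_machine_hour
    transfer = (gb_in + gb_out) * rate_gb_transfer
    return compute, transfer

# Profile from the text: ~1 MB of inputs in, a terabyte of scratch data
# created (and left) in the cloud during the run, ~1 MB of results back.
compute, transfer = cloud_job_cost(machines=100, hours=10,
                                   gb_in=0.001, gb_out=0.001)
print(f"compute: ${compute:,.2f}  transfer: ${transfer:.4f}")
# -> compute: $5,000.00  transfer: $0.0002
```

Compute dominates and transfer is negligible, which is exactly why thin-input, thin-output tools like field solvers fit the cloud while transfer-heavy or data-sensitive flows do not.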

For some it’s difficult to admit, but EDA “classic” has become a commodity business, a tool-replacement exercise that by definition has to keep pace with IC complexity and Moore’s Law, but is a technology treadmill of incremental performance improvements rather than innovative breakthroughs. It may well end up in the hands of the foundries, which can derive more value by linking it even closer to their manufacturing processes. This is ASIC-model déjà vu all over again.

SoC Realization, on the other hand, has the potential to dramatically change how companies can leverage the vast potential of design re-use and the capacity of today’s leading-edge semiconductor processes. It is the link between the consumer systems companies like Apple, Samsung, LG, Microsoft, Oracle and others now jumping into the SoC game, and the IC providers charged with expanding their portfolio of expertise to include vertical-market domain software and system platforms that can scale with each new market their customers want to address.

Meanwhile, designers are currently left with ad-hoc, DIY methods for taking system-level design concepts and trying to transform them into viable SoC designs. As we have seen in the fabless era, lowering the barriers to design enables the democratization of SoCs, filling fabs and shortening product lifetimes in the consumer market. This is a large, and growing, gap that needs to be filled with better automation and more efficient ways to ensure design fidelity between high-level representations and silicon implementation, and it represents a huge opportunity. The question is: who will fill it?

As with any capitalist-driven market, commercial companies will emerge to fill the SoC Realization need. The IP market is alive and well, and will continue to evolve, but it needs this critical infrastructure to really flourish. This is where growth is going to come from in EDA as a business, and it will hopefully allow the industry to capture more value.

The success of Apple under Steve Jobs underscores the opportunity for EDA companies that can drive innovation and change the status quo. I think we need to tip our hat to Steve Jobs for the innovator he truly is. Thanks, Steve, for making things interesting and for being true to your vision and to Jobs’ Law.

MEMS@DAC 2010: Ready to Cross the Chasm?

June 24th, 2010

By Jim Hogan
Ten years ago the Design Automation Conference (DAC) may have seemed like an odd place for a discussion of mechanical design, or even of MEMS, micro-electro-mechanical systems. In the eyes of many IC designers, MEMS was still a technical curiosity. Anyone at the 2010 DAC toting the newest mobile products from Apple, which was a large percentage of the DAC crowd this year, can attest to the power of MEMS and, more importantly, their increasing presence in the traditional world of IC design.

[Chart] iSuppli predicts that unit shipments of MEMS gyroscopes will ramp up from zero in 2009 to 285 million in 2014.

“The maker of the MEMS gyro used in the iPhone 4 is not yet known, since the device does not become available until June 24 and Apple does not reveal the identity of its suppliers. STMicroelectronics, InvenSense and Analog Devices are all marketing MEMS gyroscopes suitable for mobile handsets, but iSuppli guesses that Apple is using STMicro’s three-axis gyro, since the company is already the supplier of the MEMS accelerometers used in Apple’s current iPhone 3GS, iPad and iPod Touch.”

Long pigeonholed in the steady, albeit somewhat slow-moving, domains of ink-jet printers, DLP mirrors and automotive air bags, MEMS have begun to emerge as powerful differentiators for any type of product that needs to interact with or sense the outside world. See the Nintendo Wii or Apple’s ubiquitous auto-rotating displays to understand how MEMS can help satisfy a virtually insatiable consumer demand for cool new functionality where physical intelligence enhances the user experience.

So there we were at DAC in Anaheim, talking MEMS at a Birds of a Feather meeting entitled “MEMS@DAC: Ready to Cross the Chasm” (maybe next year MEMS will have arrived to the point where the topic gets a seat at a real conference session). There were folks from MEMS design tool companies, traditional EDA suppliers, commercial foundries and a few other people with a genuine interest in what’s next for MEMS.

It’s actually a bit surprising that there weren’t more people at the session (the somewhat inconvenient meeting time of 6:30 p.m., between the show hours and the after-parties, notwithstanding): the MEMS market is predicted to grow from just over $7 billion worldwide in 2008 to over $13 billion in 2012, according to market research firm Yole Développement. The growth is almost entirely fueled by consumer applications, as well as the emerging area of energy harvesting, both areas where MEMS capabilities can enable significant innovation. People may wonder a bit about the applications, but if you look just at wireless communication and the potential for local advertising that MEMS enables, my guess is it will actually grow faster than Yole suggests.

Two different worlds: MEMS and IC Design
My personal feeling, having been around the IC design industry for 30-plus years, is that we are all a bit mystified and amazed, if not intimidated, by MEMS. Although almost all MEMS devices are tightly integrated with electronics, either on a common silicon substrate or in the same package, MEMS design has traditionally been kept separate from IC design and verification.

There are a number of reasons for this:

  1. MEMS design proceeds hand-in-hand with refinements to the MEMS fabrication process, unlike IC design, which relies on standardized CMOS processes; as a result, MEMS design has until recently been the realm of Ph.D.s in mechanical engineering and materials science.
  2. MEMS, in contrast to ICs, are fundamentally three-dimensional (3D), allowing an extra degree of freedom in design that translates into a much larger available design space. MEMS design also has additional physical quantities to deal with, such as force and displacement.
  3. MEMS rely on coupled multi-physics effects to function. In particular, many MEMS designs rely on complex coupling between highly non-linear electrostatic forces and mechanical structures (see the sketch after this list). For instance, high-frequency MEMS resonators for RF applications rely on complicated high-frequency resonances in piezoelectric materials.
  4. MEMS devices are almost always tightly integrated with analog/mixed-signal ICs for sensing and control, either on a common silicon substrate or in the same package.
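
As a concrete illustration of the coupled electrostatic/mechanical behavior in point 3, here is a minimal sketch of the textbook parallel-plate pull-in calculation; the device values are illustrative assumptions, not taken from any real design.

```python
# Pull-in voltage of an idealized parallel-plate electrostatic actuator.
# The electrostatic force eps0*A*V^2 / (2*(g - x)^2) pulls against a
# linear spring force k*x; the balance goes unstable ("pull-in") at
# x = g/3, which gives the standard closed-form pull-in voltage below.
import math

eps0 = 8.854e-12      # vacuum permittivity, F/m
A = 100e-6 * 100e-6   # plate area: 100 um x 100 um (assumed)
g = 2e-6              # initial gap: 2 um (assumed)
k = 1.0               # suspension spring constant: 1 N/m (assumed)

V_pi = math.sqrt(8 * k * g**3 / (27 * eps0 * A))
print(f"pull-in voltage ~ {V_pi:.1f} V")  # ~5.2 V for these values
```

That abrupt instability, where a small voltage step produces a runaway displacement, is exactly the kind of non-linear behavior that linear-circuit-oriented IC tools were never built to capture.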

Eventually the MEMS design must be handed off to an IC design team in order to go to fabrication, but the handoff typically follows an ad-hoc approach that requires a lot of design re-entry and expert handcrafting of SPICE-like behavioral models for functional verification. The present approach to MEMS design, with separate design tools and ad-hoc methods for transferring MEMS designs to IC design and verification tools, is simply not up to the requirements of developing products for consumer markets. New approaches are necessary to enhance MEMS design and bring it into the IC design mainstream, so that design costs are reduced, time-to-market is shortened, and MEMS design is no longer confined to teams of specialists inside IDMs. This will democratize MEMS design and manufacture, taking it out of the realm of Ph.D.s and into the hands of practitioners, not just researchers.

A critical key to accomplishing this “democratization” is to build an integrated design flow for MEMS devices and the electronic circuits they interact with, using a structured design approach that avoids manual handoffs. At DAC we saw that progress is being made in tearing down the walls between IC design and MEMS development by companies like Coventor, Cadence and The MathWorks.

Helping MEMS “Cross the Chasm”
With tool flows starting to hit the market from companies like Coventor, the next priority is support from pure-play foundries. MEMS historically require specialized process development for each design, resulting in a situation often described as “one process, one product.” While there are a number of specialized MEMS foundries, support from pure-play foundries like TSMC has been very limited. Thus, most successful MEMS products on the market today were designed by teams of experts inside IDMs who have their own process technology. A foundry ecosystem with reference flows, foundational silicon IP libraries, and process design kits is needed.

When such an ecosystem finally becomes available (and it is starting to), electronic systems developers can finally break free from the “one process, one product” tradition, and MEMS design will become more accessible to fabless companies. Furthermore, MEMS will generate income for the foundries and drive some business to the mature process nodes that MEMS designs are likely to use.

MEMS devices offer great potential for continuing the pace of miniaturization that began with Moore’s Law; we’ll see “more than Moore” as designers explore the third dimension. MEMS have already enabled great advances, from safer cars to digital projection, and these chips increasingly are revolutionizing the way we interface with the latest multi-function consumer electronics. But MEMS design has for too long been confined to specialists who use an ad-hoc design methodology with little or no connection to the electronic design environment.

The Visibility Challenge

May 27th, 2010

By Jim Hogan
Gary Smith, the noted EDA industry analyst, wrote an interesting blog post recently in which he spoke about the progress the FPGA suppliers have made in adopting new processes, thus offering a more robust design platform.

In fact, new product families such as the Virtex-6 from Xilinx are taking market share from traditional ASIC design starts and becoming a more viable end-product alternative. There are many reasons for this; one of the most significant is the cost of delivering a leading-edge ASIC. If FPGAs can meet the market requirements in terms of cost, performance and power, they are a great alternative to an ASIC, without the silicon risk. Their capacity, performance and even lower costs have made more than a few companies think seriously about leveraging the benefits of programmability in a more widespread way, not just looking at FPGAs as prototyping platforms.

But Gary sounds a note of caution: “There is no free ride for the FPGA designer.” It is true that by upping the ante on the capabilities of their FPGA devices, the silicon suppliers have thrown down the gauntlet from a design standpoint. New methodologies and tools, true system-level approaches like those we have seen for hardwired gate arrays, will be required. Gone are the days when the free FPGA tools from Xilinx and Altera could do the job. We are talking about massive, complex and highly integrated SoCs with complex embedded software now, and this level of design needs a much more sophisticated strategy; the old ‘blow-and-go’ days of FPGA design are clearly in the rear-view mirror.

As with the ASIC days in years gone by, it will take some time for the industry to respond to all the needs of complex FPGAs. But remember what the advent of HDLs and logic synthesis did for the gate array designer? It not only removed a huge burden from the silicon suppliers, who gradually exited the tool business, but also vaulted chip companies’ application-specific IC design capabilities forward. This, for example, was the foundation that Cisco Systems built its routers on.

In the FPGA world, designers are used to free tools from the FPGA suppliers and maybe a handful of lower-end commercial EDA tools. But those days are quickly disappearing when you are faced with a device containing multiple millions of gates, multiple embedded IP cores and performance capabilities that require precision timing-driven design approaches.

To emphasize Gary’s point, FPGA Journal recently conducted a survey of FPGA designers. To no one’s real surprise, it discovered that close to a quarter of designers are in ‘crisis’ mode when it comes to the amount of time spent debugging FPGAs. They complain of the huge amounts of time required to verify a complex FPGA, find bugs and complete endless loops through synthesis and place-and-route. The implication is clear: the progress FPGAs have made is threatening to undermine their largest advantage, time to market, because of inadequate design tools and methodologies.

As with most areas of design automation, verification has become the major bottleneck. For FPGAs it comes down to two issues: performance, and efficiency of visibility into the design itself. Today, 40% of all FPGAs include embedded processors. Most of them are soft cores, but as mainstream embedded processors are added to FPGA devices this percentage will increase and the complexity of these designs will skyrocket. Modern communications and video applications have an insatiable appetite for performance bandwidth. To deliver this performance, new applications are moving algorithms that previously executed in software into the FPGA hardware. With this change comes a significant verification and debug problem. We have good tools to debug software, but when the associated hardware does not work, the system as a whole does not work. In order to debug these new embedded designs, the designer needs the ability to incrementally compile the hardware code alongside the processor in the chip and debug them together.

Let’s start with the performance challenge. Because of the sheer size and complexity of modern FPGAs, a software simulation run of the full-chip RTL that once completed in hours can now take days or weeks. The solution is to migrate as much of the design into a physical FPGA as soon as possible, because this allows those portions to run at speed and dramatically reduces the load on the software simulator. Most large-scale FPGA design groups use some combination of FPGA-board-based solutions from the FPGA manufacturers or EDA suppliers, or they roll their own. This can add significant time to the development cycle, because the board-based solutions must themselves be designed and debugged, but it has proven to be a successful way to increase verification throughput. Native FPGA simulation, as demonstrated by companies like GateRocket with its hardware-based RocketDrive product, is a very interesting and promising approach.

But the real hidden time sink is in debug. And this has as much to do with the length of time required to re-run synthesis and place-and-route as it does with the actual process of finding bugs. For example, full-chip logic synthesis and place-and-route (PAR) runs that used to complete during lunch can now exceed 18 hours. This means that whenever a bug slips through to the system test lab and requires a change to the FPGA design, it can take more than a day to get the device re-programmed with a fix ready for testing.

In many cases, actually identifying the source of a bug can be virtually impossible, because bugs can be introduced at any stage of the design process. Often you can only simulate or debug where you think you have a bug, so coverage is a huge issue.

Since one bug may mask several others, it is not uncommon to re-spin the FPGA and re-test it in the system, only to discover that additional changes are required. It’s easy to see how this slow, iterative process can become unwieldy, and can lead to weeks or months of project delays.

Imported IP presents challenges as well. In the FPGA domain, it’s common to be presented with two models: a high-level representation containing behavioral constructs for use in simulation, and a gate-level representation to be incorporated into the FPGA. The problem is that there may be subtle differences between the behavioral and gate-level representations, and these differences only manifest themselves when the FPGA design is deployed in its target system.

As with the performance challenge, the solution to the ‘visibility’ challenge—which equates to lost productivity—is to target the design in-system, a la ASIC emulation. With an integrated hardware/software debug system (GateRocket calls it ‘device native’), once each new block is verified at the RTL (or behavioral) level in the context of the full-chip design, its synthesized, gate-level equivalent can be moved over into the physical FPGA. As soon as a problem manifests itself, the verification run can be incrementally repeated with the RTL version of the suspect block running in the simulation world in parallel with the gate-level version realized in the physical FPGA. The signals at the peripheries of these blocks (along with any designated signals internal to the blocks) can be compared “on-the-fly.” This is not only a huge time savings but changes the nature of design itself at this level.
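
As a rough illustration of that on-the-fly comparison, here is a minimal Python sketch. The trace format and both capture paths are hypothetical stand-ins for a simulator interface and a hardware readback channel, not GateRocket’s actual API.

```python
# Compare the boundary signals of a suspect block, cycle by cycle,
# between its RTL simulation and its gate-level twin in the FPGA.
def first_divergence(rtl_trace, hw_trace, signals):
    """Return (cycle, signal, rtl_value, hw_value) at the first
    mismatch of any watched signal, or None if the traces agree."""
    for cycle, (rtl, hw) in enumerate(zip(rtl_trace, hw_trace)):
        for sig in signals:
            if rtl[sig] != hw[sig]:
                return cycle, sig, rtl[sig], hw[sig]
    return None

# Toy three-cycle traces of one boundary signal, "dout".
rtl = [{"dout": 0}, {"dout": 1}, {"dout": 1}]
hw  = [{"dout": 0}, {"dout": 1}, {"dout": 0}]
print(first_divergence(rtl, hw, ["dout"]))  # -> (2, 'dout', 1, 0)
```

The payoff is that a mismatch is localized to a block and a cycle the moment it occurs, instead of surfacing days later in the system test lab.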

Using this technology–combining conventional simulation with physical hardware and an appropriate debugging environment–it is possible to very quickly detect, isolate, and identify bugs, irrespective of where they originated in the FPGA design flow. Once a bug has been isolated to one block of the design, a change can be made to the RTL representation of that block, which can then be re-run along with the hardware representation of the other blocks. In this way, a fix can be immediately tested and verified without re-running synthesis and place-and-route, and with only the suspect block running in the software simulator.

Given the progress FPGA silicon providers are making in keeping pace with Moore’s Law and coming to market with phenomenal capabilities, it would be a shame to allow design tools and methodologies to slow us down. The EDA industry must meet the challenge. Advancements like giving designers the ability to see how a design behaves in the physical chip (often the actual target FPGA itself) by running it in-system, while still having access to all the capabilities and flexibility of a software simulator (like those provided by GateRocket), are great breakthroughs that will help FPGAs make continued inroads into the traditional ASIC market and bring the power of programmability to more companies.

Envisioning A New Path for Chips

June 24th, 2009

By Jim Hogan and Peter L. Levin

About 15 years ago, not long after the Clinton Administration had suffered a devastating mid-term election defeat, the White House called a series of meetings with some titans of the automobile industry and their suppliers. The basic agenda was to build a consensus of action and care about the impending double whammy of climate change and soaring energy prices. Word was out that the Japanese were investing billions of dollars in safe, reliable, consumer-oriented hybrids with the then-unthinkable efficiency of better than forty miles per gallon, and many inside the administration (including one of the authors of this article) were concerned that the collective US response was a ho-Hummer third shift on the SUV line.

These conversations occurred prior to $4/gallon gasoline prices, prior to 9/11, and prior to the self-inflicted scandal and distraction of presidential impropriety. And the Big Three kept rolling on, paying lip service and little more to the tectonic repositioning that occurred in Asia. Today General Motors survives by a thread, and only because of government loans, while Chrysler is owned by FIAT, the UAW, and the American taxpayer; their future is uncertain at best, along with most of the rest of the automobile industry that so obtusely denied, delayed, and obfuscated. Some might respond that even Toyota lost money recently, fair enough, but Toyota is in much better shape than any domestic manufacturer. The mighty have fallen, so quickly and so far, through poor reaction to (and weak detection of) the macroeconomic forces that eventually crushed them.

Intel, in the era of Detroit’s domination, preceding the proliferation of the microprocessor, was just another small company in Silicon Valley. Had Intel stubbornly resisted the now-obvious trends of CMOS and commodity memory, hardly anyone would have noticed or cared. Even today, a hypothetical case study of missed semiconductor opportunity would be just another footnote in the accumulating pantheon of smart guys making a sequence of dumb decisions with somebody else’s money. “Shame on them, and a pox on the managers that enabled them,” the Street would have cried as the stock sank to pennies. That particular story is happier at the moment, thank goodness, but the lesson endures: adapt or die.

We can debate the harsh implications of Darwinian selection, but not the truth of it. It is not necessarily the best technology that wins; it is good-enough technology, with the right business model, at the right time. Everything else is extinct.

These days of cataclysm and angst are a stark reminder of two facts, one inescapable, the other undeniable. Although we like to romanticize the semiconductor industry as “the cutting edge” of high-technology growth in the United States and abroad, the more prosaic truth is that chip design is a mature segment subject to all the same macroeconomic turbulence as airplanes and cars, and chip manufacturing has become a low-margin, highly commoditized business. “Chip companies” are decreasingly likely to find much differentiation or profit in semiconductor manufacturing anymore. Time to fold the tents and go get a therapist or a beer? Hardly. But the winners will self-select based on their ability to understand and adapt to the behavior of their consumers. The dinosaurs, the ones that ramp production for tactical expedience but neglect the clear signs of trouble, will die.

Indeed, this “behavioral” understanding is the differentiating Intellectual Property (IP), and semiconductors are the mechanism for monetizing and delivering it. Integrated circuits are going to become more prevalent, more ubiquitous, and even more important to economic growth in the years to come. Perhaps the only aspect of semiconductor technology that may start to slow is the relentless process of miniaturization, though even here we have a long way to go before declaring Moore’s Law dead.

But the life-changing and life-saving applications will continue to multiply, limited only by our imaginations (and by “Big Three” auto-think). While global competition will continue to drive fundamental advances in size, power, and performance at the hardware level, there are two overarching trends that next-generation semiconductor companies will ride to profit, or ignore at grave peril.

The first one is merely psychological, bordering on the cardinal sin of pride: the ossified, mistaken, and probably well-regretted aphorism that “real men have fabs.” It may have even been partially true twenty years ago, but it is complete and utter nonsense now, and they have pills for that dysfunction (or recreation) anyway. Today, real men, and women, don’t peg their ego, self-worth, or business identity on anything except fundamental questions about the rationality, coherence, and profitability of their investments. Put more succinctly: they don’t let their ego interfere with their greed. For some, this may include a manufacturing facility. For most, it cannot, and it will not.

We’d go so far as to say that this is true for any asset – not just the foundry – that doesn’t perform. This is the notion of not just Fab-lite but Asset-Lite. Only retain and maintain those core competencies that differentiate you. Everything else is a jump ball.

The real money, and where the US remains particularly well positioned and strong, is in the design of the chips and the software applications they support, and then in integrating them into a System on Chip (SoC).

Most of the semiconductor “industry” needs manufacturing about as much as Stephen King needs a printing press. The manufacturers’ role has changed dramatically, as has their position in the value chain. We shouldn’t lament the shift; we should embrace the promotion. We get to focus on what we do best, which is to think of new stuff and figure out how to make money from it.

To use another automotive theme: most consumers are oblivious to the engine or power train in their car; what they really care about is the safety and comfort of the driving experience. In our case, we need a quiet, secure and rock-solid mobile office that we live in for two hours a day. Of course, the market differentiates too: for some a car may be a statement of environmental responsibility (like a Prius), for others it is just transportation at the very lowest cost possible (like India’s Tata Motors’ Nano). In the end the engine is under the hood, or under the seat, and we haven’t cracked one open in years.

We’re similarly agnostic about whether an Intel Atom or an ARM11 runs the software on our PDA; in fact, we don’t even know who wrote the software, or whether it runs on a Microsoft OS or a Linux from Wind River. What we do care about is reliability, convenience, cost and battery life. One of us splurges on seamless continental interoperability; the other occasionally regrets his stinginess (half this piece is being written in Germany, where a smart phone doesn’t feel so smart at the moment).

The point is: semiconductor hardware has been relegated to second-tier status, and will slip even further as software continues to dominate. But there are some silver linings of innovation on that dark cloud of obsolescence.

The fundamental shift in the centrality of chip production happened when customers began to favor time-to-market over functionality and convenient customization over obsessive optimization. We’re not going to pretend to know exactly when in the last ten years the inflection occurred – it was probably around the time we figured out “multiple use” of heterogeneous processors for specific tasks – but those two decision points are the real discriminants in the production economics of the semiconductor industry.

Interestingly, they are utterly scale-independent. The paradigm is just as true at the design level as it is at the system level. This is going to be a pleasant surprise to some, and a career abrasion to others. The classic example is the well-beaten drum of abstraction, the single-word history of the EDA industry. And yes, we’re ideological believers in the (one true?) path that effectively enables marketers and sales guys – in other words, people who speak to and extract cash from customers – to get ever closer to the specification and design of the platforms they are incentivized to sell. Abstraction, in the mildly twisted words (but not meaning) of Christensen, means that “less-skilled people can produce good-enough products.” Exactly.

In analog design this has been the dogma since the beginning: there are no absolutes, only good enough. Somehow the digital world lulled us to sleep on that one, and great occasionally became the enemy of good.

The second word in the history will be integration, as in the “integrate” in Integrated Circuits. And this is where life in semiconductors will start to get very interesting again. If EDA was effectively born of automated design rules based on the heuristic (and well-guessed) results from the proprietary machines on the manufacturing floor, the future of the semiconductor industry will lie in the still-green fields of locally optimized components and processes that can be quickly and reliably woven together through agreed-upon, conforming interfaces. We want to minimize wasted time at intermediate test points, everywhere, and maximize reuse and configurability on the devices, and between them.

That capability – of better design-to-process integration – is going to be the third factor that separates the macho men with fabs from the happy people with money. As the foundries focus on differentiation in reliability, yield, compliance, and turn-around, Jeffrey Macher and his team point out that production-based learning will decline in economic significance. It’ll matter a lot to the folks making the stuff, but not so much to the less-skilled – really less-specialized – people who specify and, to a certain extent, design it. Consequently, the future means smaller runs on highly reconfigurable lines that can literally turn on a dime, because that’s about all they’re going to get for all that capacity.

Authentic integration doesn’t mean kitchen-sink functionality on a single device. There was a time and place for that. For many years, the dominant players essentially dictated schedule, availability, capability, and to a certain extent price. Today, we see tighter coupling between customer-driven specifications and the market’s ability to respond rapidly with slightly less capable platforms that sensibly sacrifice some performance in favor of quick delivery and low cost.

Interestingly enough, this will drive EDA to be integrated into enterprise functions like ERP that can tunnel through and even inform corporate leadership whether or not there is capacity to build a certain product. (What it can’t say is whether the company can in fact design it.) The point is that in the next decade these capabilities will enable transparency, which will inform delivery, which will improve productivity.

This reconfiguration and virtualization of the semiconductor supply chain is already happening, and is a welcome harbinger of the many good things to come. Importantly, it relieves some of the destructive pressure on getting to the smallest nodes as quickly as possible, and allows for rationalization of the market, much broader innovation, and better proliferation as prices come down and device utility goes up.

Transistors Are A Growth Area Again

June 5th, 2009

By James Hogan

I am becoming increasingly aware of semiconductor players that are turning to more custom physical design as they focus on high-volume, low-margin markets such as data storage and consumer devices.

These semiconductor players are finding it difficult to differentiate existing SoC designs from the competition, given very similar ASIC physical design and implementation approaches. Additionally, most of the leading-edge SoC companies use foundries such as TSMC, so differentiating your design via process technology is also impossible.

This increased custom physical design content is taking four major forms: regular structures such as memory and datapath, analog blocks, custom digital, and cell or macro development.

As regards the rebirth of custom physical design, there is an especially interesting trend at advanced technology nodes. Here the same logic process is being used for analog, memory and mixed-signal designs, all on one chip. These designs face very difficult constraints. The memory and mixed-signal portions have always needed a certain degree of custom design, but at advanced technologies that degree is increasing. It is apparent that assisted-automation technology in the routing domain is critical to meeting schedules in this new era of custom design.

Along with this routing automation, it is becoming necessary to use “what if” analysis (embedded timing and extraction) to ensure that timing and power requirements can be met.

A number of semiconductor designers are also finding that when digital SoC design closure involves multi-corner, multi-mode physical design optimization and power optimization, custom design can provide an alternative solution with higher productivity. In a custom design, transistors and routing can be sized exactly to meet critical timing paths while also optimizing power. There is far less control and differentiation in a purely automated ASIC place-and-route paradigm, and this can lead to a great number of ECO and place-and-route iterations.
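
A toy RC model makes that sizing trade-off concrete: widening a driver by a factor w cuts its resistance to roughly R/w but grows its input capacitance to w·C, so delay into a fixed load falls while switched capacitance (a crude proxy for power) rises. This is a minimal sketch under idealized assumptions, not a real process model.

```python
# Idealized driver-sizing trade-off: delay vs. a switched-capacitance
# power proxy. All values are normalized, illustrative assumptions.
R, C = 1.0, 1.0   # unit-transistor resistance and input capacitance
C_LOAD = 8.0      # fixed wire + fanout load, in unit capacitances

for w in (1, 2, 4, 8):
    delay = (R / w) * (C_LOAD + w * C)  # driver resistance into load + self-load
    power = w * C + C_LOAD              # total switched capacitance
    print(f"width x{w}: delay={delay:.2f}  switched cap={power:.1f}")
# Delay drops from 9.00 at x1 to 2.00 at x8 while switched capacitance
# climbs from 9.0 to 16.0 -- the custom designer picks w path by path.
```

An automated place-and-route flow applies this kind of trade-off through library cells and global heuristics; the custom designer gets to make it transistor by transistor on the paths that matter.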

Because more of these high-volume, low-margin semiconductor devices involve mixed-signal and custom-designed blocks, layout editors and custom physical design techniques are becoming more important for chip finishing and for the assembly of mixed-signal devices. Key to using the layout editor as a “chip assembly” platform is design-flow interoperability: the layout editing environment must embrace standards such as OpenAccess, PyCells and Tcl, and must include powerful automation technology for routing and analysis.

Slowly, the EDA market players are beginning to come around to this rebirth of the custom design space. Cadence’s Virtuoso dominates the space today, but emerging technology is coming from competitors. The SpringSoft Laker platform is doing well in Asia, and the company is adding key automation partnerships, such as with Pyxis in the routing domain. Synopsys is behind but has stated that custom design is a growth area for the company and one in which it plans to invest heavily in 2009-2010. Magma has its Titan offering in this space, and Mentor is working to add automation to its legacy offering as well.

TSMC also has added some interest by joining the IPL Alliance, an industry-wide collaborative effort to create and promote interoperable process design kit (PDK) standards. At the core of the IPL activity are Ciranova’s PyCell efforts, which have enabled a new product, called Helix, that adds a higher level of automation for device-level placement of analog and custom elements. There is a general feeling that custom design is back in vogue and will provide one of the few growth areas for EDA in 2009.

Transistors are again the growth area for EDA. I never thought I would live long enough to say that again.

Whole System Design: Abstraction, Security, and Scale

April 28th, 2009

By James Hogan & Peter Levin

If you really want to burn somebody, especially when they’re not looking, cop a snarky grin and tell a friend about so-and-so being a prescient predictor of the recent past. Gets ‘em every time. Of course, the gag loses its originality and humor after, say, ten years of constant use. But there’s always a fresh target.

In fact, it would be difficult to find a person or topic more prone to overstatement and snarky grins than ESL. It has been on the horizon now for the better part of a decade, and “everyone knows” that it is just a matter of time until the breakout occurs. Investors have placed massive amounts of treasure on the bet, and careers have been defined and occasionally broken by the sweet temptation of this particular tower of Babel.

Happily, the current end-of-financial-days crisis has instigated a fresh and constructive look at this decade-old problem. This time we’ll need more than a prayer – or better behavioral synthesis – to make things better. The good news is that the situation is about to improve, and in ways that may surprise even the most jaded non-believers among us.

It would be easy enough to recite the liturgy (or litany, depending upon your faith or perspective) of EDA’s collective abuse and exploitation at the doorstep of the old temple. The problem is that such dogma hasn’t done much to revitalize the industry, either commercially or technologically. The resulting incremental improvements come at a cost far disproportionate to their impact. And even ESL has become something of a burnt offering.

Nonetheless, we’re stubbornly bullish on the idea that abstraction is always, without exception, the key to utility and productivity. Because of the tremendous advances in – and therefore commoditization of – semiconductor manufacturing, the value of complex devices, especially SoCs, is utterly dependent upon the ability to specify well, implement quickly, test for fidelity, and validate for function. Of course, like the engine under the hood of a car, hardware matters; it can add to or detract from the user experience. On the other hand, how many of us know or care about the brand of the motor? Most drivers take such things for granted, as long as their propulsion needs – expressed in (high-level) terms of fuel economy, power and performance – are well satisfied. It is no accident that SoC design feels very similar to systems design, especially as software content becomes the primary factor of differentiation and scalability.

But don’t expect the polygon pushers to reach high into the system any more than you would expect an assembly programmer to build the advanced apps in a smart phone. Too expensive, too slow, too restrictive. When the wise men come, they will know two things: how to integrate the components of design implementation in a way that hides the details, and how to use abstraction to their benefit. And they won’t call it ESL; they may, however, call it virtualization, just as the IT industry does today.

In fact, our customers are already years ahead of the tools they buy. Sure, they care about compactness, manufacturability, and power. But the real battleground – at least between them – is the truly differentiated trade-space between device integrity (does it do what I want it to do?), reliability (will it perform well, long, and under duress?), and security (am I assured of my privacy, and protection against nefarious intrusion?).

The promise of “system level” anything – we’re going to propose a more ambitious name in a second – is to break down the parochial boundaries that separate abstraction layers like so much cruddy varnish, and instead integrate them in a common methodology and view. This hypothetical tool – none exists yet, but we’re unshakably optimistic – would truly facilitate architectural exploration without the constraining ties to hardware targets, bastardized (or proprietary) languages, and the prohibitive cost of migrating from simulation to emulation, and from emulation to target platform. Moreover, and crucially, it has to conveniently and sensibly accommodate the application software that differentiates our customers’ products in the market. With possibly one large exception, this is basically how they make their profits. In other words, it is a pre-requisite, and a recipe, for the holy grail of scale.

For example, consider the automobile industry, the once and future king of virtual prototyping. Conventional wisdom is that the most successful firms (the current downturn notwithstanding) are masters of supply chain management. Behind the scenes, however, is an equally important differentiator: the quality of the mechanical CAD tools that enable designers to explore alternative virtual “architectures” quickly and easily, from aerodynamic drag to sophisticated safety and control systems. We see no fundamental reason this couldn’t be achieved for SoC platforms. Indeed, many of the component pieces are already there.

In other words, instead of the raucous battles over synthesis and layout, the vendors should focus attention on the neglected frontier of whole-system virtualization. Today, software designers are effectively cut off from hardware designers, and neither enjoys close ties to manufacturing and test. Tomorrow, with a little invention, we’ll offer customers a truly comprehensive approach – not just a better, faster, more automated path to implementation – but excellent virtualization that is at once affordable enough, fast enough and granular enough to sign off long before anyone commits anything to silicon.

Some of the leading-edge tools are already getting close to the paradigm we have in mind, and can integrate the hardware abstraction layer with transactors, and the transactors with function. Indeed, many recent commercial innovations are well down the road of breaking the moribund constraints of narrowly targeted software stacks, and thus enable better top-down flexibility and design cohesiveness. The challenge remains, though, of what to do while the hardware is churning and the software teams still aren’t settled, and possibly can’t settle, on detailed functional specifications . . . in part because the hardware is churning. This is the ultimate vicious circle, a true Teufelskreis. Adding insult to injury, the difficulty will compound as density increases and the SoC platforms have not just 50 million gates but 200 million, and the software port is waiting on a physical prototype or is manually trimmed to fit inside an FPGA because the cycle-accurate simulation is just too slow. This is precisely the hell we want to avoid.

Let’s make an even more dramatic point: the ratio of software developers to hardware designers is something like 5-or-10 to 1, and the most economically interesting companies at the moment either have very specific performance gains in a narrow market, or are big-S software, little-s silicon plays. The opportunity, and need, to support their development using system virtualization is as large as it is lucrative. It also represents an important shift in mindset: renewed emphasis and focus on getting the problem statement right, first – a truly universal canon – and then worry about how to get the job done. An effective virtual platform would provide a crucial protection against the eighth deadly sin of incomplete, vague, or sloppy specifications. We suspect that this will follow a path very similar to the virtualization of automotive and aeronautic design, where suppliers are not just selling a “CAD” tool, but a heavy dose of methodology and support.

The bottom line is that strategic intent has to evolve to the systems level. We close, therefore, with a humble proposal. Ken Anderson recently reminded us that “success is not determined by what happens to us, but by how we respond to the challenges.”

Instead of climbing the endless ladder of abstraction and local optimization, we suggest taking a step back and thinking about an altogether new approach: put the whole device, all in one place, all in one tool, all within easy reach of the various stakeholders, from beginning to end, until a happy customer can do something new, something much better. Let him start with real architectural exploration and specification at the top, and not even think about silicon until late in the game, if ever. Let’s stand near that guy, and not the tired old predictors of the recent past. And let’s call it Whole System Design.