Chip Design Magazine



Posts Tagged ‘Moore’s Law’

Blog Review – Monday April 20, 2015

Monday, April 20th, 2015

Half a century and still quoted as relevant is more than most of us could hope to achieve. Gaurav Jalan celebrates the 50th anniversary of Gordon Moore’s observation, first published on April 19, 1965, which we now call Moore’s Law, and credits it with the birth of the EDA industry and the fabless ecosystem, among other things.

Another celebrant is Axel Scherer, Cadence, who reflects not just on shrinking silicon geometries but on the speed of the passing of time.

On the same theme of what Moore’s Law means today for FinFETs and nanowire logic libraries, Navraj Nandra, Synopsys, also commemorates the anniversary, with an example of what the CAD team has been doing with quantum effects at lower nodes.

At NAB (National Association of Broadcasters) 2015 in Las Vegas, Steve Leibson, Xilinx, had an ‘eye-opening’ experience at the CoreEL Technologies booth, where the company’s FPGA evaluation kits were the subject of some large-screen demos.

Reminiscing about the introduction of the HSA Foundation, Alexandru Voica, Imagination Technologies, provides an update on why heterogeneous computing is one step closer now.

Dr. Martin Scott, senior VP and GM of Rambus’ Cryptography Research Division, recently participated in a Silicon Summit Internet of Things (IoT) panel hosted by the Global Semiconductor Alliance (GSA). In this blog he discusses IoT security, its opportunities for good, and its vulnerabilities.

An informative blog by Paul Black, ARM, examines the ARM architecture and DS-5 v5.21 DSTREAM support for debug, discussing power in the core domain and how to manage it for effective debug and design.

Caroline Hayes, Senior Editor

Blog Review – Mar 24 2014 Horse-play; games to play; multi-core puzzles; Moore pays

Monday, March 24th, 2014

Cadence’s Virtuoso migration path explained; Dassault reaches giddy-up heights in showjumping; an enthusiastic review of the Game Development Conference 2014, Mentor offers hope for embedded developers coping with complexity and MonolithIC 3D believes the end is nigh for Moore’s Law without cost penalties. By Caroline Hayes, Senior Editor.

Advocating migrating designs, Tom Volden, Cadence, presents an informative blog explaining the company’s Virtuoso design migration flow.

Last week, Paris, France hosted the Saut Hermès international showjumping event, and Aurelien Dassault reports on a 3D Experience for TV viewers to learn more about the artistry.

Creating a whole new game plan, Ellie Stone, ARM, reviews some of the highlights from GDC 14 (Game Developers’ Conference) with news of partner projects, the Sports Car Challenge and the Artist Competition wall.

The joy for a consumer is the bane of a developer’s working day: high complexity means developing multi-threaded applications with multiple operating systems in mind, laments Anil Khanna, Mentor. He does offer hope, though, with this blog and a link to more information.

Do not mourn the demise of Moore’s Law without counting the cost, warns Zvi Or-Bach, MonolithIC 3D. His blog has some interesting illustrations of the end of ever-smaller transistors without price increases.

Will Moore’s and Metcalfe’s Laws Cross the IOT Chasm?

Sunday, April 30th, 2017

The success of the IOT may depend more on a viable customer experience than on the convergence of the semiconductor and communication worlds.

By John Blyler, Editor, IOT Embedded Systems

The Internet of Things will involve a huge number of embedded devices reporting back to data aggregators running servers on the cloud. Low cost and low power sensors, cameras and other sources will allow the IOT to render the real world into a digital format. All of these “things” will be connected together via the Internet, which will open up new business models and services for customers and users. It should greatly expand the human–machine experience.

The key differentiator between the emerging IOT and traditional embedded systems is connectivity. The IOT will conceivably connect all embedded things together. The result will be an almost inconceivable amount of data from sensors, cameras and the like, which will be transferred to the cloud for major computation and analysis.

Connectivity means IOT platforms will have a huge data side. Experts predict that the big data industry will grow to about US$54.3 billion by 2017. But the dark side of connectivity is the proliferation of hacking and privacy lapses caused by poor security.

Security is an issue for users as well as for device developers. Since most IoT devices are resource constrained, designers cannot deploy resource-intensive security protection mechanisms. They are further constrained by the low cost of mass-produced devices.

Another challenge is that most software developers are not particularly security or reliability conscious. They lack training in the use of security testing, encryption, and the like. Their code is often neither designed nor programmed in a defensive fashion.

Finally, since IOT devices will be designed and made available on a massive scale, security attacks and failures can propagate easily. Software security patches will frequently be needed, but these must be designed for early in the development life cycle of both the hardware (think architecture and memory) and the software.

Moore-Metcalfe and the Chasm

Connectivity, security and data analysis will make IOT devices far more complex than traditional embedded systems. This complexity, in both design and product acceptance, can be illustrated by the confluence of two laws and a marketing chasm. Let’s consider each separately.

First, there is Moore’s Law. In 1965, Intel co-founder Gordon Moore predicted that the transistor density (related to performance) of microprocessors would double every 2 years (see Figure 1). While “doubling every 2 years” describes an exponential curve, Moore’s growth function is almost always represented as a straight line ― thanks to a logarithmic scale on the Y-axis.

Figure 1: Moore’s Law (courtesy of Mentor Graphics, Semi Pacific NW, 2015)
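Why an exponential doubling curve plots as a straight line on a log-scaled axis can be checked in a few lines of code. This is an illustrative sketch, with a made-up baseline density rather than real transistor counts:

```python
import math

# Moore's Law as a growth function: density doubles every 2 years.
def moore_density(d0, years, doubling_period=2.0):
    """Transistor density after `years`, starting from a baseline d0."""
    return d0 * 2 ** (years / doubling_period)

# Sample the curve every two years: the raw values grow exponentially...
densities = [moore_density(1.0, t) for t in range(0, 12, 2)]

# ...but their base-2 logarithms grow by a constant step, which is exactly
# why the curve looks like a straight line when the Y-axis is logarithmic.
log_steps = [math.log2(b) - math.log2(a)
             for a, b in zip(densities, densities[1:])]
```

Every element of `log_steps` is the same (one doubling per step), which is the straight line in Figure 1.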

Several years later, another technology pioneer, 3Com co-founder Bob Metcalfe, stated that the value of a network grows with the square of the number of network nodes (or devices, or applications, or users, etc.), while the costs follow a more or less linear function. Not surprisingly, this relationship is shown as a network connection diagram. For example, 2 mobile devices will only be able to communicate with each other. However, if you have billions of connected devices and applications, connection complexity rises considerably (see Figure 2).

Figure 2: Metcalfe’s Law.
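The quadratic-value-versus-linear-cost argument can be sketched in a few lines. The coefficients below are illustrative assumptions, not measured values, but they show the crossover point where network value overtakes cost:

```python
# Metcalfe's Law sketch: network value grows with the square of the number
# of nodes, while cost grows roughly linearly. The coefficients a and b
# are assumed for illustration only.
def network_value(n, a=0.01):
    return a * n * n        # value ~ n^2

def network_cost(n, b=1.0):
    return b * n            # cost ~ n

# With these coefficients, value exceeds cost once n > b/a = 100 nodes:
# below that critical mass the network costs more than it is worth.
crossover = next(n for n in range(1, 10_000)
                 if network_value(n) > network_cost(n))
```

The crossover is the quantitative version of the point made above: a handful of devices yields little value, while billions of connected devices make the value term dominate.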

Metcalfe’s Law is really about network growth rather than about technological innovation. Blogger Marc Jadoul recently noted on the Nokia website that “the combination of Moore’s and Metcalfe’s principles explains the evolution of communication networks and services, as well as the rise of the Internet of Things. The current IoT growth is enabled by hardware miniaturization, decreasing sensor costs, and ubiquitous wireless access capabilities that are empowering an explosive number of smart devices and applications…”

Jadoul recognizes that the availability of state-of-the-art technology does not always guarantee success, citing the struggling growth of the IOT’s two main would-be “killer” consumer devices and apps, namely smart watches and connected thermostats. The latter is also notorious for its security issues.

He explains this slow adoption by considering the “chasm.” Geoffrey A. Moore wrote about the gap that product marketers have to bridge for a new technology to go mainstream. Jadoul then combines these three charts, admitting the inaccuracies caused by different axes and scales, to observe that the chasm is actually the point where the shift from a technology-driven model to a value- and customer-experience-driven business needs to take place (see Figure 3).

Figure 3: Intersection of Gordon Moore’s Law, Metcalfe’s Law and Geoffrey Moore’s “Chasm”. (Courtesy of Marc Jadoul’s blog.)

This line of reasoning highlights the key differentiator of the IOT, i.e., the connectivity of embedded semiconductor devices. But the success of the IOT may depend more on a viable customer experience than on the convergence of computational and communication technologies.

DVCon Highlights: Software, Complexity, and Moore’s Law

Thursday, March 12th, 2015

Gabe Moretti, Senior Editor

The first DVCon United States was a success. It was the 27th conference in the series, and the first under this name, which distinguishes it from DVCon Europe and DVCon India. The latter two held their first events last year and, following their success, will be held this year as well.

Overall attendance, including exhibit-only and technical conference attendees, was 932.

If we count exhibitors’ personnel, as DAC does, then the total number of attendees is 1,213. The conference attracted 36 exhibitors, including 10 exhibiting for the first time and 6 headquartered outside of the US. The technical presentations were very well attended, almost always with standing room only, averaging around 175 attendees per session; one cannot fit more into the conference rooms at the DoubleTree. The other thing I observed was that there was almost no attendee traffic during the presentations. People took a seat and stayed for the entire presentation; almost no one came in, listened for a few minutes and then left. In my experience this is not typical, and it shows that the goal of DVCon, to present topics of contemporary importance, was met.

Process Technology and Software Growth

The keynote address this year was delivered by Aart de Geus, chairman and co-CEO of Synopsys. His speeches are always both unique and quite interesting. This year he chose as his topic “Smart Design from Silicon to Software”. As one could have expected, Aart’s major points had to do with process technology, something he is extremely knowledgeable about. He thinks that Moore’s Law, as an instrument to predict semiconductor process advances, has about ten years of usable life. After that the industry will have to find another tool, assuming one will be required, I would add. Since, as Aart correctly points out, we are still using a 193 nm crayon to implement 10 nm features, progress is clearly and significantly impaired. Personally, I do not understand the reason for continuing to use ultraviolet light in lithography, aside from the huge cost of moving to x-ray lithography. The industry has resisted the move for so long that I think even x-ray now has too short a life span to justify the investment. So, before the ten years are up, we might see some very unusual and creative approaches to building features on some new material. After all, whatever we use will have to work at the level of atoms and their structure.

For now, says Aart, most system companies are “camping” at 28 nm while evaluating “the big leap” to more advanced lithography processes. I think it will be a long time, if ever, before 10 nm processes are popular. Obviously the 28 nm process supports the area and power requirements of the vast majority of advanced consumer products. Aart did not say it, but it is a fact that a very large number of wafers are still produced using a 90 nm process. Dr. de Geus pointed out that the major factor in determining investments in product development is now economics, not available EDA technology. Of course one can observe that economics is only a second-order decision-making tool, since economics is determined in part by complexity. But Aart stopped at economics, a point he has made in previous presentations over the last twelve months. His point is well taken, since ROI is greatly dependent on hitting the market window.

A very interesting point made during the presentation is that the length of development schedules has not changed in the last ten years; the content has. Development of proprietary hardware has gotten shorter, thanks to improved EDA tools, but IP integration and software integration and co-verification have used up all the time savings in the schedule.

What Dr. de Geus’s slides show is that software is growing, and will keep growing, at about ten times the rate of hardware. Thus investment in software tools by EDA companies makes sense now. Approximately ten years ago, during a DATE conference in Paris, I asked Aart about the opportunity for EDA companies, Synopsys in particular, to invest in software tools. At that time Aart was emphatic that EDA companies did not belong in the software space. Compilers are either cheap or free, he told me, and debuggers do not offer the right economic value to be of interest. Well, without much fanfare about “investment in software”, Synopsys is now in the software business in a big way. Virtual prototyping and software co-verification are market segments in which Synopsys is very active, and making a nice profit, I may add. So, whether it is a matter of definition or of new market availability, EDA companies are in the software business.

When Aart talks I always get reasons to think.  Here are my conclusions.  On the manufacturing side, we are tinkering with what we have had for years, afraid to make the leap to a more suitable technology.  From the software side, we are just as conservative.

That software would grow at a much faster pace than hardware is not news to me. In all the years that I worked as a software developer, or as a manager of software development, I always found that software grows to use all of the available hardware, and that it is the major driver of hardware development, whether for memory size and management or for speed of execution. My conclusion is that nothing is new: the software industry has never put efficiency at the top of its goals; the goal is always to make the life of a programmer easier. Higher-level languages are more powerful because programmers can implement functions with minimal effort, not because the underlying hardware is used optimally. And the result is that, when it comes to software quality and security, users are playing too large a part as the verification team.

Art or Science

The Wednesday proceedings were opened early in the morning by a panel with the provocative title of Art or Science. The panelists were Janick Bergeron from Synopsys, Harry Foster from Mentor, JL Gray from Cadence, Ken Knowlson from Intel, and Bernard Murphy from Atrenta. The purpose of the panel was to figure out whether a developer is better served by using his or her own creativity in developing either hardware or software, or by following a defined and “proven” methodology without deviation.

After some introductory remarks, which seemed to show mild support for the Science approach, I pointed out that the title of the panel was wrong. It should have been titled Art and Science, since both must play a part in any good development process. That changed the nature of the panel. To begin with, there had to be a definition of what art and science meant. Here is my definition. Art is a problem-specific solution achieved through creativity. Science is the use of a repeatable recipe, encompassing both tools and methods, that ensures validated quality of results.

Harry Foster pointed out that it is difficult to teach creativity. This is true, but it is not impossible, I maintain, especially if we change our approach to education. We must move away from teaching the ability to repeat memorized answers that are easy to grade on a test, and switch to problem solving, a system better for the student but more difficult to grade. Our present educational system is focused on teachers, not students.

The panel spent a significant amount of time discussing the issue of hardware/software co-verification.  We really do not have a complete scientific approach, but we are also limited by the schedule in using creative solutions that themselves require verification.

I really liked what Ken Knowlson said at one point. There is a significant difference between a complicated and a complex problem. A complicated problem is understood but difficult to solve, while a complex problem is something we do not understand a priori. This insight may be difficult to grasp without an example, so here is mine: relativity is complicated; dark matter is complex.


Discussing all of the technical sessions would take too long and would interest only portions of the readership, so I am leaving such matters to those who have access to the conference proceedings. But I think that both the keynote speech and the panel provided enough understanding, as well as material for thought, to amply justify attending the conference. Too often I have heard that DVCon is a verification conference: it is not just for verification, as both the keynote and the panel prove. It is for all those who care about development and verification; in short, for those who know that a well-developed product is easier to verify, manufacture and maintain than otherwise. So whether in India, Europe or the US, see you at the next DVCon.

IP Integration: Not a Simple Operation

Tuesday, May 13th, 2014

Gabe Moretti, Contributing Editor

Although the IP industry is about 25 years old, it still presents problems typical of immature industries. Yet the use of IP in systems design is now so popular that one is hard pressed to find even one system design that does not use IP. My first reaction to the use of IP is “back to the future”.

For many years of my professional career I dealt with board-level design as well as chip design. Between 1970 and 1990, IP was sold as discrete components by companies such as Texas Instruments, National, and Fairchild, among many others. Their databooks described precisely how to integrate the part into a design. Although a defined standard for the contents did not exist, a de-facto standard was followed by all providers. Engineers, using the databook information, would choose the correct part for their needs, and the integration was reasonably straightforward. All signals could be analyzed in the lab, since pins and traces were accessible on the board.

Enter Complexity

As semiconductor fabrication progressed, the board became the chip, and the components are now IP modules. One would think that integration would remain reasonably straightforward, but this is not the case. Concerns about safeguarding intellectual property rights took over, and IP developers became reluctant to provide much information about the functioning of a module, afraid that its functionality would be duplicated and they would lose sales.

As the number of transistors on a chip increases, the complexity of porting a design from one process to the next also increases.

Figure 1. Projected number of transistors on a chip

Developers found that providing a hard macro, that is, a module already placed and routed and ready for fabrication by the chosen foundry, was the best way to protect their intellectual property rights. But such a strategy is costly, because foundries cannot validate every macro for free. The IP provider must be in a position to guarantee volume use by the foundry’s customers. Thus many IP modules must be synthesized, and this means they must be verified.

Karthik Srinivasan, Corporate Application Engineering Manager for Analog Mixed Signal at Apache Design Solutions, an Ansys company, wrote that “SoC designs today integrate a significant number of IPs to accelerate their design times and to reduce the risks to their design closure. But the gap in expectations between IP and SoC designers of where and how sign-off happens creates design issues that affect the final product’s performance and release. IP designers often validate their IPs in isolation, with expectations of near-ideal operating conditions. SoCs are verified and signed off with mostly abstracted or, in many cases, ‘black-box’ views of IPs. But as more and more high-speed and noise-sensitive IPs get placed next to each other, or next to the core digital logic, failure conditions that were not considered emerge. This worsens when these IPs share one or more power and ground supply domains. For example, when a bank of high-speed DDR IPs is placed next to a bank of memories, the switching of the DDR can generate sufficient noise on the shared ground network to adversely affect the operation of the memory.

As designs migrate to smaller technology nodes, especially those using FinFET-based technologies, this gap in the design closure process is going to worsen the power, noise and reliability closure process.”

DDR memory blocks are becoming a greater and greater portion of a chip as the portion of functionality implemented in firmware increases. Bob Smith, Senior VP of Marketing and Business Development at Uniquify, makes the case for a system view of memories.

“DDR IP is used in a wide variety of ASIC and SoC devices found in many different applications and market segments. If the device has an embedded processor, then it is highly likely that the processor requires access to external DDR memory. This access requires a DDR subsystem (DDR controller, PHY and I/O) to manage the data traffic flowing to and from the embedded processor and external DDR memory.

Whether it is procured from an external source or developed by an internal IP group, almost all chip design projects rely on DDR IP to implement the on-chip DDR subsystem. The integration techniques used to implement the DDR IP in the chip design can have far reaching effects on DDR performance, chip area, power consumption and even reliability.

Figure 2. A non-optimized DDR implementation

The figure above illustrates a typical on-chip DDR implementation. Note that while the DDR I/Os span the perimeter of the chip, the DDR PHYs are configured as blocks and are placed in such a way that they are centered with the I/Os. As shown in the diagram, this not only wastes valuable chip area, but also creates other problems.”

“A much more efficient way to implement the DDR subsystem IP is to deliver a DDR PHY that is exactly matched to the DDR I/O layout. By matching the PHY exactly to the I/Os, a tremendous amount of area is saved and power is reduced. Even better, the performance of the DDR can be improved, since the PHY-I/O layout minimizes skew.”

Figure 3. Optimized DDR block

As process technology progresses from 32 nm to 22 nm, then to 14 nm and beyond, the role of the foundry in the place and route of an entire chip increases. In direct proportion, the freedom of designers to determine the final topology of a chip decreases. Thus we are rapidly reaching the point where only hardened modules will be viable. The number of viable providers in the IP industry is shrinking rapidly, and many significant companies have been acquired in the last three or four years by EDA companies that are becoming major providers of IP products.

Synopsys started selling IP around 1990 and now has a wide variety of IP in its portfolio, mostly developed internally. Cadence, on the other hand, has built its extensive inventory of IP products mostly through acquisition.

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systemes, points out that there are a number of issues to deal with regarding IP.

1. IP Sourcing: Companies are going to need a way to source IP. They will need access to a cataloging system that allows searching of both internally developed (or in-development) IP and externally available third-party IP.

2. IP Governance: For internally developed IP, there need to be systems and methodologies for handling the promotion of work in progress to company-certified IP. For both internal and externally acquired IP, there needs to be a process to validate that IP, and then a system to rate the IP internally based on previous use, available documentation, and other design artifacts.

3. IP Issue and Defect Tracking: Since IP will be in use in multiple projects, a formal system is required to handle issue and defect tracking across multiple projects against all IP. If one group finds an issue with a piece of IP, all other project groups using that IP need to be alerted to the issue and to the plan for resolving it. Ideally this should be integrated into the design tools that are used to assemble IP as well. If a product has already gone out the door with the defective IP, these issues need to be managed and corrective actions need to ensue based on any defects found.

4. IP Security: There are different levels of protection needed for different types of IP, and robust methods must be put in place to ensure the security of IP. First, company-critical IP must be secure, and systems need to be put in place to make sure that the IP does not leave the company premises. When collaborating with partners, any acquired IP must also be handled so that it is only used in the designs that are being collaboratively designed. There need to be restrictions on using partner IP in design blocks which can in turn become IP in other designs. There needs to be a way to track the ‘pedigree’ of IP.

5. Variant-Driven Platform-Based Design: Ultimately, to keep up with shortening market windows and application-driven platform design, companies will need to adopt a system of base platforms with pre-qualified IP that can be configured on the fly and used as a starting point for new designs. These systems would automatically populate a design workspace with the required IP from a company-approved catalog as the basis of each new design moving forward.

Integrating the Pieces

Farzad Zarrinfar, Managing Director of the Novelics Business Unit at Mentor Graphics, provided a synthesis of the problems facing designers.

“For IP integration, multiple kinds of IP, such as hard IP, synthesizable soft peripheral IP, and synthesizable soft processor IP, each with a different set of deliverables, rely on EDA tools for efficient ASIC/SoC designs. Selecting the optimal IP size (such as the smallest embedded memory IP) is a critical design decision. While free IP is readily available, it does not always provide the best solution when compared to fee-based IP that provides much better characteristics for the specific application.

IP integration to achieve smaller die size, lower leakage, lower dynamic power, or faster speed can provide designers with a more optimized solution that can potentially save millions of dollars over the life of the product, and better differentiate their chips in a highly competitive ASIC/SoC marketplace.”

Bill Neifert, Chief Technology Officer at Carbon Design Systems, observes: “Certainly, some designers at the bleeding edge differentiate every aspect of the subsystem and their own IP, but we’re increasingly seeing others adopt whole subsystem designs and then make configuration tweaks. Think black-box design; ARM’s big.LITTLE offerings are prime examples of this trend.

Of course, in order to make these configuration changes, designers need to know the exact impact of the changes that they’re making. We see users doing this a lot on our IP Exchange web portal. They will download a CPAK (Carbon Performance Analysis Kit), a pre-built system or subsystem complete with software at the bare metal or O/S level. This gets them up and running quickly but not with their exact configuration. They’ll then iterate various configuration options in order to meet their exact design goals. It’s not unusual for a design team to compile 20 different configurations for the same IP block on our portal and then compare the impact of each of these different models on system performance.

Naturally, all of this impacts the firmware team quite a bit. The software developers don’t need to know exactly what the underlying hardware is doing but the firmware team needs the exact IP configuration. The sooner these decisions can be made, the sooner they can start being productive. Integrating this level of software on to the hardware typically exposes a new round of IP optimizations that can be made as well. Therefore, it’s not unusual for IP configuration changes to happen in waves as additional pieces of IP and software are added to the system.”

Drew Wingard, CTO at Sonics points out that standards matter.  “Because there are many sources for IP, the industry had to create and adhere to standards for integration. From a silicon vendors’ perspective, IP sources include third-party commercial components, internally designed blocks and cores, and customer-designed components. To meet the challenge of integrating IP components from many different places, SoC designers needed communication protocol standards. Communication protocol standards efforts began with the Virtual Socket Interface Alliance (VSIA), continued with the Open Core Protocol International Partnership (OCP-IP), and today reside with Accellera. Of course, our customers need to leverage de-facto standards such as ARM’s AMBA as well.  We owe our ability to integrate IP to the fundamental communication protocol standards work that these organizations performed.”

The Challenge of Verification

Sunrise’s Prithi Ramakrishnan is concerned about system verification.  “At a very high level, the main issue with IP is that the simulated environment is different from the final design environment.  Analog and RF IP is dependent on process/node, foundry, layout, extraction, model fidelity, and placement.  So you are either tied to just dropping it in ‘as is’ and treating it like a black box (nobody knows how it works and whether it meets the required specifications) or completely changing it (with the caveat that you can no longer expect the same results).  Digital IP needs to be resynthesized followed by placement and routing, and it takes several iterations to make the IP you got work the way you want it to work. In addition, this process is extremely tool-dependent.

Finally, there are system level issues like interoperability, interface and controls (how does the IP talk to the rest of the SoC). A very important, often overlooked factor is the communication between the IP providers and the SoC implementation houses – there are documents outlining integration guidelines, but without an automated process that takes in all that information, a lot could be lost in translation.”

The issue of how well a third-party IP has been verified will always haunt designers unless the industry finds a way to make IP as trustworthy as the TI 7400 and equivalent parts of the early days. Bernard Murphy, CTO at Atrenta, observed: “One area that doesn’t get a lot of air-time is how a SoC verification team goes about debugging a problem around an IP. You have the old challenge – is this our bug or the IP developer’s bug? If the developer is down the hall, you can probably resolve the problem quickly. If they are now working for your biggest competitor, good luck with that. If this is a commercial IP, you work with an apps guy to circle around possibilities: maybe you are using it wrong, maybe you misunderstood the manual or the protocol, maybe they didn’t test that particular configuration for that particular use-case… Then they bring in their expert and go back through the cycle until you converge on an answer. Problem is, all this burns a lot of time and you’re on a schedule. Is there a way to compress this debug cycle?”

He offers the following suggestion. “One important class of things to check for is the above ‘didn’t test that configuration for that use-case’. This is where synthesized assertions come in. These are derived automatically by the IP developer in the course of verifying the IP. They don’t look like traditional assertions (long, complex sequences of dependencies). They tend to be simpler, often non-obvious, and describe relationships not just at the boundary of the IP but also internal to the IP. Most importantly, they encode not just functionality but also the bounds of the use-cases in which the IP was tested. Think of it as a ‘signature’ for the function plus the verification of that function.”

Thomas L. Anderson, VP of Marketing at Breker Verification Systems, pleads: integrate, but verify.  He argues that “The truth is that most SoC teams trust integration too much and verify too little. Many SoC products hit the market only after two or three iterations through the foundry. This costs a lot of money and risks losing market windows to competitors. Most SoC teams follow a five-step verification process:

  1. Ensure that each block, whether locally designed or licensed as IP, is well verified
  2. Use formal methods to verify that each block has been integrated into the SoC properly
  3. Assemble a minimal chip-level simulation testbench and run a few sanity verification tests
  4. Hand-write some simple C tests and run on the embedded processors in simulation or emulation
  5. Run the production software on the processors in simulation, emulation, or prototyping

The problem with this process is that the tests in steps 3 and 4 are too simple since they are hand-written. They typically verify only one block at a time, ignoring interaction between blocks. They also perform only one operation at a time, so they don’t stress cache coherency or any inherent concurrency within the design. Humans aren’t good at thinking and coding in parallel. Thorough SoC pre-silicon verification can occur only with multi-threaded, multi-processor test cases that string blocks together into realistic scenarios representing end-user applications of the chip.”
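The multi-threaded scenarios Anderson describes can be sketched in miniature. This is a hedged illustration (the block and operation names are invented, and real scenario tools like Breker's work from graph-based models, not this toy): it composes operations from several blocks into parallel sequences, rather than testing one block with one operation at a time.

```python
# Minimal sketch of scenario-based test generation: compose operations
# drawn from several SoC blocks into concurrent, application-like
# sequences, instead of hand-writing one single-block test at a time.
import random

# Hypothetical blocks and the operations each supports.
BLOCKS = {
    "camera": ["capture_frame"],
    "codec":  ["encode", "decode"],
    "dma":    ["copy_to_ddr", "copy_to_display"],
}

def make_scenario(rng, threads=2, steps=3):
    """Build `threads` parallel op sequences, each crossing block boundaries."""
    scenario = []
    for _ in range(threads):
        seq = [(block, rng.choice(BLOCKS[block]))
               for block in (rng.choice(sorted(BLOCKS)) for _ in range(steps))]
        scenario.append(seq)
    return scenario

rng = random.Random(42)
for thread_id, seq in enumerate(make_scenario(rng)):
    print(f"thread {thread_id}: " + " -> ".join(f"{b}.{op}" for b, op in seq))
```

Each generated thread would be emitted as C code running on one embedded processor, so several blocks are active at once and shared resources such as caches and interconnect actually get stressed.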

An Example of Complexity

Charlie Cheng, CEO of Kilopass, gave me an example of the complexity of choosing the correct IP for a design, using sparse matrix math.

With semiconductor IP comprising 90 percent of today’s semiconductor devices, and memory IP accounting for over 50 percent of these complex SoCs, it’s no wonder that IP is the fastest-growing sector of the overall semiconductor industry. As a result, managing third-party IP is a growing responsibility within today’s semiconductor companies. How to make the right choice from a growing quantity of IP is the major challenge facing engineering teams, purchasing departments, and executive management. The process these groups follow when buying IP can be viewed as a sparse matrix mathematical exercise, minus the actual formulas and data manipulation.

Table: Vendor A OTP at 1.8V (qualification status by foundry and process node)

The table shows two dimensions of a multi-dimensional matrix representing the variables confronting the purchasing company’s teams. In this two-dimensional matrix, imagine three additional tables for 1.8V operation across the four foundries at 28HPL, 28HPM, and 28HP.  Now replicate this in a fourth dimension for the variable of 2.5V. Add a fifth and sixth dimension to the matrix for Vendor B’s OTP.  If this were a mathematical evaluation, a figure of merit would be assigned to each cell in each plane of the multidimensional matrix.

For example, Vendor A’s OTP at 1.8V has JEDEC three-lot qualification at 28HP at TSMC, UMC and GLOBALFOUNDRIES, and similarly for 28HPL and 28HPM, but has only working silicon at 28LP.  Vendor B’s OTP may not have received three-lot qualification at any of the foundries on any of the processes, but may have first silicon or one- or two-lot qualification with one or more of them. In the mathematical exercise, a figure of merit would be assigned to each status: fully qualified, one- or two-lot qualified, first silicon, or not taped out.  Using matrix algebra, the formal exercise would return a result, but an intuitive evaluation suggests that vendor A, with more three-lot qualifications at multiple foundries, would have an edge over vendor B, which does not.

The above exercise would be typical of the evaluation occurring in the engineering team. A similar exercise would be occurring in the purchasing department with terms and conditions presented in the licensing agreement and royalty schedule each vendor submits. Corporate and legal would perform a similar exercise.

If the mathematical exercise were actually performed and all the cells in the matrix had assigned values, a definitive solution would be easily achieved. However, because the cells throughout the matrix are sparsely populated, the solution yields only a probable outcome.
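The exercise can be sketched in a few lines of code. This is a hedged illustration with made-up qualification data and arbitrary scores, not Kilopass's actual methodology: cells of the multidimensional matrix are keyed by (vendor, voltage, foundry, process), and the keys that are absent are exactly the sparsely populated cells that make the result a probability rather than a certainty.

```python
# Hedged sketch of the figure-of-merit exercise described above, using
# invented data. Only cells with a known qualification status are
# present; every missing key is an unpopulated cell of the sparse matrix.

SCORE = {"three_lot": 3, "one_two_lot": 2, "first_silicon": 1, "not_taped_out": 0}

# Sparse matrix keyed by (vendor, voltage, foundry, process).
cells = {
    ("A", "1.8V", "TSMC", "28HP"):  "three_lot",
    ("A", "1.8V", "UMC",  "28HP"):  "three_lot",
    ("A", "1.8V", "GF",   "28HP"):  "three_lot",
    ("A", "1.8V", "TSMC", "28LP"):  "first_silicon",
    ("B", "1.8V", "TSMC", "28HP"):  "one_two_lot",
    ("B", "1.8V", "UMC",  "28HPM"): "first_silicon",
}

def vendor_merit(cells, vendor):
    """Sum figure-of-merit scores over a vendor's populated cells."""
    return sum(SCORE[status] for key, status in cells.items() if key[0] == vendor)

print("Vendor A:", vendor_merit(cells, "A"))  # Vendor A: 10
print("Vendor B:", vendor_merit(cells, "B"))  # Vendor B: 3
```

Even this toy version reproduces the intuitive conclusion in the text: vendor A's multiple three-lot qualifications outscore vendor B, while the empty cells are a reminder that the comparison rests on incomplete information.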