
Posts Tagged ‘formal’

EDA in the year 2017 – Part 2

Tuesday, January 17th, 2017

Gabe Moretti, Senior Editor

The first part of this article, published last week, covered design methods and standards in EDA, together with industry predictions that affect our entire industry.  This part covers automotive, design verification, and FPGAs.  I found it interesting that David Kelf, VP of Marketing at OneSpin Solutions, thinks that machine learning will begin to penetrate the EDA industry as well.  He stated: “Machine Learning hit a renaissance and is finding its way into a number of market segments. Why should design automation be any different?  2017 will be the start of machine learning to create a new breed of design automation tool, equipped with this technology and able to configure itself for specific designs and operations to perform them more efficiently. By adapting algorithms to suit the input code, many interesting things will be possible.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence touched on an issue that has been getting more attention recently: security.  He noted: “In 2016, IoT bot-net attacks brought down large swaths of the Internet – the first time the security impact of IoT was felt by many. Private and nation-state attacks compromised personal/corporate/government email throughout the year.

In 2017, we have the potential for security concerns to start a retreat from always-on social media and a growing value on private time and information. I don’t see a silver bullet for security on our horizon. Instead, I anticipate an increasing focus for products to include security managers (like their safety counterparts) on the design team and to consider safety from the initial concept through the design/production cycle.”

Figure 1.  Just one of the many electronics systems found in an automobile (courtesy of Mentor)

Automotive

The automotive industry has increased its use of electronics year over year for a long time.  At this point an automobile is a true intelligent system, at least as far as what the driver and passengers can see and hear: the “infotainment” system.  Late-model cars also offer collision avoidance and stay-in-lane functions, but more is coming.

Here is what Wally Rhines thinks: “Automotive and aerospace designers have traditionally been driven by mechanical design.  Now the differentiation and capability of cars and planes is increasingly being driven by electronics.  Ask your children what features they want to see in a new car.  The answer will be in-vehicle infotainment.  If you are concerned about safety, the designers of automobiles are even more concerned.  They have to deal with new regulations like ISO 26262, as well as other capabilities, in addition to environmental requirements and the basic task of “sensor fusion” as we attach more and more visual, radar, laser and other sensors to the car.  There is no way to reliably design vehicles and aircraft without virtual simulation of electrical behavior.

In addition, total system simulation has become a requirement.  How do you know that the wire bundle will fit through the hole in the door frame?  EDA tools can tell you the answer, but only after seeking out the data from the mechanical design.  Wiring in a car or plane is a three dimensional problem.  EDA tools traditionally worry about two dimension routing problems.  The world is changing.  We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.”

David Kelf, VP of Marketing at OneSpin Solutions, observed: “OneSpin called it last year and I’ll do it again –– Automotive will be the “killer app” of 2017. With new players entering the market all the time, we will see impressive designs featured in advanced cars, which themselves will move toward a driverless future.  All automotive designs currently being designed for safety will need to be built to be as secure as possible. The ISO 26262 committee is working on security as well as safety and I predict security will feature in the standard in 2017. Tools to help predict vulnerabilities will become more important. Formal, of course, is the perfect platform for this capability. Watch for advanced security features in formal.”

Rob Knoth, Product Management Director, Digital and Signoff Group at Cadence noted: “In 2016, autonomous vehicle technology reached an inflection point. We started seeing more examples of private companies operating SAE 3 in America and abroad (Singapore, Pittsburgh, San Francisco).  We also saw active participation by the US and world governments to help guide tech companies in the proliferation and safety of the technology (ex. US DOT V2V/V2I standard guidelines, and creating federal ADAS guidelines to prevent state-level differences). Probably the most unique example was also the first drone delivery by a major retailer, something which was hinted at 3 years prior and seemingly just a flight of fancy then.

Looking ahead to 2017, both the breadth and depth are expected to expand, including the first operation of SAE level 4/5 in limited use on public streets outside the US, and on private roads inside US. Outside of ride sharing and city driving, I expect to see the increasing spread of ADAS technology to long distance trucking and non-urban transportation. To enable this, additional investments from traditional vehicle OEM’s partnering with both software and silicon companies will be needed to enable high-levels of autonomous functions. To help bring these to reality, I also expect the release of new standards to guide both the functional safety and reliability of automotive semiconductors. Even though the pace of government standards can lag, for ADAS technology to reach its true potential, it will require both standards and innovation.”

FPGA

The IoT market is expected to provide a significant opportunity for the electronics industry to grow revenue and open new markets.  I think the use of FPGAs in IoT devices will increase the role these devices play in system design.

I asked Geoff Tate, CEO of FlexLogix, his opinions on the subject.  He offered four points that he expects to become reality in 2017:

1. the first customer chip will be fabricated using embedded FPGA from an IP supplier

2. the first customer announcements will be made of customers adopting embedded FPGA from an IP supplier

3. embedded FPGAs will be proven in silicon running at 1GHz+

4. the number of customers doing chip design using embedded FPGA will go from a handful to dozens.

Zibi Zalewski, Hardware Division General Manager at Aldec also addressed the FPGA subject.

“I believe FPGA devices are an important technology player to mention when talking about what to expect in 2017. With the growth of embedded electronics driven by Automotive, Embedded Vision and/or IoT markets, FPGA technology becomes a core element, particularly for products that require low power and re-programmability.

Features of FPGA such as pipelining and the ability to execute and easily scale parallel instances of the implemented function allow for the use of FPGA for more than just the traditionally understood embedded markets. FPGA computing power usage is exploding in High Performance Computing (HPC), where FPGA devices are used to accelerate different scientific algorithms and big data processing, and to complement CPU-based data centers and clouds. We can’t talk about FPGA these days without mentioning SoC FPGAs, which merge a microprocessor (quite often ARM) with reprogrammable space. Thanks to such configurations, it is possible to combine the software and hardware worlds in one device with the benefits of both.

All those activities have led to solid growth in FPGA engineering, which is pushing further growth of FPGA development and verification tools. This includes not only typical solutions in simulation and implementation. We should also observe solid growth in tools and services simplifying the usage of FPGA for those who don’t even know this technology, such as high-level synthesis or engineering services to port C/C++ sources into FPGA-implementable code. The demand for development environments like compilers supporting both software and hardware platforms will only grow, with the main goal focused on ease of use by a wide group of engineers who were not even considering the FPGA platform for their target application.

At the other end of the FPGA rainbow are the fast-growing, largest FPGAs offered by both Xilinx and Intel/Altera. ASIC design emulation and prototyping will push harder and harder on the so-called big-box emulators, offering higher performance and a significantly lower price per gate, and so becoming more affordable for even smaller SoC projects. This is especially true when partnered with high-quality design mapping software that handles multi-FPGA partitioning, interconnections, clocks and memories.”

Figure 2. Verification can look like a maze at times

Design Verification

There are many methods to verify a design, and companies will, quite often, use more than one on the same design.  Each method (simulation, formal analysis, and emulation) has its strong points.

For many years, logic simulation was essentially the only tool available, complemented only by hardware acceleration of logic simulation.

Frank Schirrmeister, Senior Product Management Group Director, System and Verification Group at Cadence submitted a thorough analysis of verification issues.  He wrote: “From a verification perspective, we will see further market specialization in 2017 – mobile, server, automotive (especially ADAS) and aero/defense markets will further create specific requirements for tools and flows, including ISO 26262 TCL1 documentation and support for other standards. The Internet of Things (IoT) with its specific security and low power requirements really runs across application domains.  Continuing the trend in 2016, verification flows will continue to become more application-specific in 2017, often centered on specific processor architectures. For instance, verification solutions optimized for mobile applications have different requirements than for servers and automotive applications or even aerospace and defense designs. As application-specific requirements grow stronger and stronger, this trend is likely to continue going forward, but cross-impact will also happen (like mobile and multimedia on infotainment in automotive).

Traditionally ecosystems have been centered on processor architectures. Mobile and Server are key examples, with their respective leading architectures holding the lion’s share of their respective markets. The IoT is mixing this up a little as more processor architectures can play and offer unique advantages, with configurable and extensible architectures. No clear winner is in sight yet, but 2017 will be a key year in the race between IoT processor architectures. Even open-source hardware architectures look like they will be very relevant judging from the recent momentum, which eerily reminds me of the early Linux days. It’s one of the most entertaining spaces to watch in 2017 and for years to come.

Verification will become a whole lot smarter. The core engines themselves continue to compete on performance and capacity. Differentiation further moves in how smart applications run on top of the core engines and how smart they are used in conjunction.

For the dynamic engines in software-based simulation, the race towards increased speed and parallel execution will accelerate together with flows and methodologies for automotive safety and digital mixed-signal applications.

In the hardware emulation world, differentiation for the two basic ways of emulating – processor-based and FPGA-based – will be more and more determined by how the engines are used. Specifically, the various use models for core emulation like verification acceleration, low power verification, dynamic power analysis, post-silicon validation—often driven by the ever growing software content—will extend further, with more virtualization joining real world connections. Yes, there will also be competition on performance, which clearly varies between processor-based and FPGA-based architectures—depending on design size and how much debug is enabled—as well as the versatility of use models, which determines the ROI of emulation.

FPGA-based prototypes address the designer’s performance needs for software development, using the same core FPGA fabrics. Therefore, differentiation moves into the software stacks on top, and the congruency between emulation and FPGA-based prototyping using multi-fabric compilation allows mapping both into emulation and FPGA-based prototyping.

All this is complemented by smart connections into formal techniques and cross-engine verification planning, debug and software-driven verification (i.e. software becoming the test bench at the SoC level). Based on standardization driven by the Portable Stimulus working group in Accellera, verification reuse between engines and cross-engine optimization will gain further importance.

Besides horizontal integration between engines—virtual prototyping, simulation, formal, emulation and FPGA-based prototyping—the vertical integration between abstraction levels will become more critical in 2017 as well. For low power specifically, activity data created from RTL execution in emulation can be connected to power information extracted from .lib technology files using gate-level representations or power estimation from RTL. This allows designers to estimate hardware-based power consumption in the context of software using deep cycles over longer timeframes that are emulated.”

Anyone who knows Frank will not be surprised by the length of the answer.

Wally Rhines, Chairman and CEO of Mentor Graphics was less verbose.  He said: “Total system simulation has become a requirement.  How do you know that the wire bundle will fit through the hole in the door frame?  EDA tools can tell you the answer, but only after seeking out the data from the mechanical design.  Wiring in a car or plane is a three dimensional problem.  EDA tools traditionally worry about two dimension routing problems.  The world is changing.  We are going to see the basic EDA technology for designing integrated circuits be applied to the design of systems. Companies that can remain at the leading edge of IC design will be able to apply that technology to systems.

This will create a new market for EDA.  It will be larger than the traditional IC design market for EDA.  But it will be based upon the basic simulation, verification and analysis tools of IC design EDA.  Sometime in the near future, designers of complex systems will be able to make tradeoffs early in the design cycle by using virtual simulation.  That know-how will come from integrated circuit design.  It’s no longer feasible to build prototypes of systems and test them for design problems.  That approach is going away.  In its place will be virtual prototyping.  This will be made possible by basic EDA technology.  Next year will be a year of rapid progress in that direction.  I’m excited by the possibilities as we move into the next generation of electronic design automation.”

The increasing size of chips has made emulation a more popular tool than in the past.  Lauro Rizzatti, Principal at Lauro Rizzatti LLC, is a pioneer in emulation and continues to be thought of as a leading expert in the method.  He noted: “Expect new use models for hardware emulation in 2017 that will support traditional market segments such as processor, graphics, networking and storage, and emerging markets currently underserved by emulation –– safety and security, along with automotive and IoT.

Chips will continue to be bigger and more complex, and include an ever-increasing amount of embedded software. Project groups will increasingly turn to hardware emulation because it’s the only verification tool to debug the interaction between the embedded software and the underlying hardware. It is also the only tool capable of estimating power consumption in a realistic environment, when the chip design is booting an OS and processing software apps. More to the point, hardware emulation can thoroughly test the integrity of a design after the insertion of DFT logic, since it can verify gate-level netlists of any size, a virtually impossible task with logic simulators.

Finally, its move to the data center solidifies its position as a foundational verification tool that offers a reasonable cost of ownership.”

Formal verification tools, sometimes referred to as “static analysis tools” have seen their use increase year over year once vendors found human interface methods that did not require a highly-trained user.  Roger Sabbagh, VP of Application Engineering at Oski Technology pointed out: “The world is changing at an ever-increasing pace and formal verification is one area of EDA that is leading the way. As we stand on the brink of 2017, I can only imagine what great new technologies we will experience in the coming year. Perhaps it’s having a package delivered to our house by a flying drone or riding in a driverless car or eating food created by a 3-D printer. But one thing I do know is that in the coming year, more people will have the critical features of their architectural design proven by formal verification. That’s right. System-level requirements, such as coherency, absence of deadlock, security and safety will increasingly be formally verified at the architectural design level. Traditionally, we relied on RTL verification to test these requirements, but the coverage and confidence gained at that level is insufficient. Moreover, bugs may be found very late in the design cycle where they risk generating a lot of churn. The complexity of today’s systems of systems on a chip dictates that a new approach be taken. Oski is now deploying architectural formal verification with design architects very early in the design process, before any RTL code is developed, and it’s exciting to see the benefits it brings. I’m sure we will be hearing a lot more about this in the coming year and beyond!”

Finally David Kelf, VP Marketing at OneSpin Solutions observed: “We will see tight integrations between simulation and formal that will drive adoption among simulation engineers in greater numbers than before. The integration will include the tightening of coverage models, joint debug and functionality where the formal method can pick up from simulation and even emulation with key scenarios for bug hunting.”


Conclusion

The two combined articles are indeed quite long.  But the EDA industry is serving a multi-faceted set of customers with varying and complex requirements.  To do it justice, length is unavoidable.

Formal, Logic Simulation, Hardware Emulation/Acceleration: Benefits and Limitations

Wednesday, July 27th, 2016

Stephen Bailey, Director of Emerging Technologies, Mentor Graphics

Verification and validation are key terms in this space and have the following differentiation:  Verification (specifically, hardware verification) ensures the design matches R&D’s functional specification for a module, block, subsystem or system. Validation ensures the design meets the market requirements, that it will function correctly within its intended usage.

Software-based simulation remains the workhorse for functional design verification. Its advantages in this space include:

-          Cost:  SW simulators run on standard compute servers.

-          Speed of compile & turn-around-time (TAT):  When verifying the functionality of modules and blocks early in the design project, software simulation has the fastest turn-around-time for recompiling and re-running a simulation.

-          Debug productivity:  SW simulation is very flexible and powerful in debug. If a bug requires interactive debugging (perhaps due to a potential UVM testbench issue with dynamic – stack and heap memory based – objects), users can debug it efficiently & effectively in simulation. Users have very fine-grained control of the simulation – the ability to stop/pause at any time, and to dynamically change values of registers, signals, and UVM dynamic objects.

-          Verification environment capabilities: Because it is software simulation, a verification environment can easily be created that peeks and pokes into any corner of the DUT. Stimulus, including traffic generators/irritators, can be tightly orchestrated to inject stimulus with cycle accuracy (a minimal sketch of this kind of testbench control follows this list).

-          Simulation’s broad and powerful verification and debug capabilities are why it remains the preferred engine for module and block verification (the functional specification & implementation at the “component” level).
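As a concrete illustration of the controllability and stimulus orchestration described above, here is a minimal, self-contained SystemVerilog sketch. It is not tied to any particular tool or methodology, and the DUT, hierarchy and signal names are hypothetical:

// A tiny DUT (an 8-bit counter) plus a testbench that exercises the
// kind of control software simulation offers: cycle-accurate stimulus,
// a "peek" into the hierarchy, and a "poke" via force/release.
module counter (
  input  logic       clk,
  input  logic       rst_n,
  input  logic       en,
  output logic [7:0] count
);
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)  count <= '0;
    else if (en) count <= count + 1;
endmodule

module tb;
  logic clk = 0, rst_n = 0, en = 0;
  logic [7:0] count;
  counter dut (.*);

  always #5 clk = ~clk;

  initial begin
    repeat (2) @(posedge clk);
    rst_n = 1;

    // Cycle-accurate stimulus: enable counting for exactly ten cycles
    @(posedge clk) en = 1;
    repeat (10) @(posedge clk);
    en = 0;

    // "Peek": read an internal DUT value through the hierarchy
    $display("count inside the DUT = %0d", tb.dut.count);

    // "Poke": force a corner-case value, then release it so normal
    // behavior resumes
    force tb.dut.count = 8'hFE;
    repeat (2) @(posedge clk);
    release tb.dut.count;

    $finish;
  end
endmodule

This kind of fine-grained control is exactly what has to be re-created, through transactors and run-control features, once the DUT moves into an emulator or FPGA prototype.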

If software-based simulation is so wonderful, then why would anyone use anything else?  Simulation’s biggest negative is performance, especially when combined with capacity (very large, as well as complex designs). Performance, getting verification done faster, is why all the other engines are used. Historically, the hardware acceleration engines (emulation and FPGA-based prototyping) were employed late in the project cycle when validation of the full chip in its expected environment was the objective. However, both formal and hardware acceleration are now being used for verification as well. Let’s continue with the verification objective by first exploring the advantages and disadvantages of formal engines.

-          Formal’s number one advantage is its comprehensive nature. When provided a set of properties, a formal engine can verify that the design will not violate the property(ies), either exhaustively (for all of time) or, more typically, for a broad but bounded number of clock cycles. The prototypical example is verifying the functionality of a 32-bit wide multiplier. In simulation, exhaustively checking every possible legal multiplicand and multiplier input against the expected product would take far too many years to be feasible. Formal can do it in minutes to hours (a short property sketch follows this list).

-          At one point, a negative for formal was that it took a PhD to define the properties and run the tool. Over the past decade, formal has come a long way in usability. Today, formal-based verification applications package properties for specific verification objectives with the application. The user simply specifies the design to verify and, if needed, provides additional data that they should already have available; the tool does the rest. There are two great examples of this approach to automating verification with formal technology:

  • CDC (Clock Domain Crossing) Verification:  CDC verification uses the formal engine to identify clock domain crossings and to assess whether the right synchronization logic is present (a minimal synchronizer sketch follows below). It can also create metastability models for use with simulation to ensure no metastability across the clock domain boundary is propagated through the design. (This is a level of detail that RTL design and simulation abstract away. The metastability models add that level of detail back to the simulation at the RTL level instead of waiting for, and then running, extremely long full-timing, gate-level simulations.)
  • Coverage Closure:  During the course of verification, formal, simulation and hardware accelerated verification will generate functional and code coverage data. Most organizations require full (or nearly 100%) coverage completion before signing off the RTL. But today’s designs contain highly reusable blocks that are also very configurable. Depending on the configuration, functionality may or may not be included in the design. If it isn’t included, then coverage related to that functionality will never be closed. Formal engines analyze the design, in the actual configuration(s) that apply, and perform a reachability analysis for any code or (synthesizable) functional coverage point that has not yet been covered. If it can be reached, the formal tool will provide an example waveform to guide development of a test to achieve coverage. If it cannot be reached, the manager has a very high level of certainty in approving a waiver for that coverage point.
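To make the CDC example concrete, here is a minimal sketch of the kind of structure a CDC app looks for, together with the stability rule behind the metastability concern. It shows a single-bit, two-flip-flop synchronizer; the names and the exact rule are illustrative assumptions, not output from any specific tool:

// A single-bit, two-flip-flop synchronizer from the clk_a domain into
// the clk_b domain, with an assertion capturing the usual CDC rule
// that a value launched in the source domain must be held long enough
// to be sampled reliably in the destination domain.
module bit_sync (
  input  logic clk_b,
  input  logic rst_b_n,
  input  logic d_a,    // signal launched in the clk_a domain
  output logic q_b     // synchronized copy in the clk_b domain
);
  logic meta;

  always_ff @(posedge clk_b or negedge rst_b_n)
    if (!rst_b_n) {q_b, meta} <= '0;
    else          {q_b, meta} <= {meta, d_a};

  // If the source value changes, it must stay stable on the following
  // destination clock so the new value cannot be missed.
  a_src_stable: assert property (
    @(posedge clk_b) disable iff (!rst_b_n)
      !$stable(d_a) |=> $stable(d_a)
  );
endmodule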

-          With comprehensiveness being its #1 advantage, why doesn’t everyone use and depend fully on formal verification?

  • The most basic shortcoming of formal is that you cannot simulate or emulate the design’s dynamic behavior. At its core, formal simply compares one specification (the RTL design) against another (a set of properties written by the user or incorporated into an automated application or VIP). Both are static specifications. Human beings need to witness dynamic behavior to ensure the functionality meets marketing or functional requirements. There remains no substitute for “visualizing” the dynamic behavior to avoid the GIGO (Garbage-In, Garbage-Out) problem. That is, the quality of your formal verification is directly proportional to the quality (and completeness) of your set of properties. For this reason, formal verification will always be a secondary verification engine, albeit one whose value rises year after year.
  • The second constraint on broader use of formal verification is capacity or, in the vernacular of formal verification:  State Space Explosion. Although research on formal algorithms is very active in academia and industry, formal’s capacity is directly related to the state space it must explore. Higher design complexity equals more state space. This constraint limits formal usage to module, block, and (smaller or well pruned/constrained) subsystems, and potentially chip levels (including as a tool to help isolate very difficult to debug issues).
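Returning to the multiplier example above, here is a minimal sketch of how such a check might be posed so that a formal engine can prove it exhaustively. The checker assumes a hypothetical design that registers its product with one cycle of latency, and all names are illustrative:

// A checker for a hypothetical multiplier that registers its product
// with one cycle of latency. A formal engine explores all input
// values exhaustively; in simulation the same assertion only checks
// whatever values happen to be driven.
module mult_checker #(parameter int W = 32) (
  input  logic           clk,
  input  logic           rst_n,
  input  logic [W-1:0]   a, b,     // DUT operands
  input  logic [2*W-1:0] prod      // DUT product, one cycle later
);
  a_mult_correct: assert property (
    @(posedge clk) disable iff (!rst_n)
      $past(rst_n) |-> prod == $past(a) * $past(b)
  );
endmodule

// The checker would typically be attached non-intrusively, e.g.:
//   bind my_multiplier mult_checker #(.W(32)) u_chk (.*);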

The use of hardware acceleration has a long, checkered history. Back in the “dark ages” of digital design and verification, gate-level emulation of designs had become a big market in the still young EDA industry. Zycad and Ikos dominated the market in the late 1980’s to mid/late-1990’s. What happened?  Verilog and VHDL plus automated logic synthesis happened. The industry moved from the gate to the register-transfer level of golden design specification; from schematic based design of gates to language-based functional specification. The jump in productivity from the move to RTL was so great that it killed the gate-level emulation market. RTL simulation was fast enough. Zycad died (at least as an emulation vendor) and Ikos was acquired after making the jump to RTL, but had to wait for design size and complexity to compel the use of hardware acceleration once again.

Now, 20 years later, it is clear to everyone in the industry that hardware acceleration is back. All 3 major vendors have hardware acceleration solutions. Furthermore, there is no new technology able to provide a similar jump in productivity as did the switch from gate-level to RTL. In fact, the drive for more speed has resulted in emulation and FPGA prototyping sub-markets within the broader market segment of hardware acceleration. Let’s look at the advantages and disadvantages of hardware acceleration (both varieties).

-          Speed:  Speed is THE compelling reason for the growth in hardware acceleration. In simulation today, the average performance (of the DUT) is perhaps 1 kHz. Emulation expectations are for +/- 1 MHz and for FPGA prototypes 10 MHz (or at least 10x that of emulation). The ability to get thousands more verification cycles done in a given amount of time is extremely compelling (a quick back-of-the-envelope comparison follows this list). What began as the need for more speed (and effective capacity) to do full chip, pre-silicon validation driven by Moore’s Law and the increase in size and complexity enabled by RTL design & design reuse, continues to push into earlier phases of the verification and validation flow – AKA “shift-left.”  Let’s review a few of the key drivers for speed:

  • Design size and complexity:  We are well into the era of billion gate plus design sizes. Although design reuse addressed the challenge of design productivity, every new/different combination of reused blocks, with or without new blocks, creates a multitude (exponential number) of possible interactions that must be verified and validated.
  • Software:  This is also the era of the SoC. Even HW compute intensive chip applications, such as networking, have a software component to them. Software engineers are accustomed to developing on GHz speed workstations. One MHz or even 10’s of MHz speeds are slow for them, but simulation speeds are completely intolerable and infeasible to enable early SW development or pre-silicon system validation.
  • Functional Capabilities of Blocks & Subsystems:  It can be the size of input data / stimuli required to verify a block’s or subsystem’s functionality, the complexity of the functionality itself, or a combination of both that drives the need for huge numbers of verification cycles. Compute power is so great today that smartphones are able to record 4k video and replay it. Consider the compute power required to enable Advanced Driver Assistance Systems (ADAS) – the car of the future. ADAS requires vision and other data acquisition and processing horsepower, software systems capable of learning from mistakes (artificial intelligence), and high fault tolerance and safety. Multiple blocks in an ADAS system will require verification horsepower that would stress the hardware accelerated performance available even today.
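To put these speed figures in perspective with some back-of-the-envelope arithmetic (the workload size is an illustrative assumption, not a measurement): suppose a pre-silicon OS boot requires roughly two billion DUT cycles. At a simulation rate of 1 kHz that is about two million seconds, or roughly 23 days; at an emulation rate of 1 MHz it is about 33 minutes; at an FPGA-prototype rate of 10 MHz it is under four minutes. Those three to four orders of magnitude are what make hardware acceleration compelling for software-driven workloads.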

-          As a result of these trends which appear to have no end, hardware acceleration is shifting left and being used earlier and earlier in the verification and validation flows. The market pressure to address its historic disadvantages is tremendous.

  • Compilation time:  Compilation in hardware acceleration requires logic synthesis and implementation / mapping to the hardware that is accelerating the simulation of the design. Synthesis, placement, routing, and mapping are all compilation steps that are not required for software simulation. Various techniques are being employed to reduce the time to compile for emulation and FPGA prototype. Here, emulation has a distinct advantage over FPGA prototypes in compilation and TAT.
  • Debug productivity:  Although simulation remains available for debugging purposes, you’d be right in thinking that falling back on a (significantly) slower engine as your debug solution doesn’t sound like the theoretically best debug productivity. Users want a simulation-like debug productivity experience with their hardware acceleration engines. Again, emulation has advantages over prototyping in debug productivity. When you combine the compilation and debug advantages of emulation over prototyping, it is easy to understand why emulation is typically used earlier in the flow, when bugs in the hardware are more likely to be found and design changes are relatively frequent. FPGA prototyping is typically used as a platform to enable early SW development and, at least some system-level pre-silicon validation.
  • Verification capabilities:  While hardware acceleration engines were used primarily or solely for pre-silicon validation, they could be viewed as laboratory instruments. But as their use continues to shift to earlier in the verification and validation flow, the need for them to become 1st class verification engines grows. That is why hardware acceleration engines are now supporting:
    • UPF for power-managed designs
    • Code and, more appropriately, functional coverage (a small functional-coverage sketch follows this list)
    • Virtual (non-ICE) usage modes which allow verification environments to be connected to the DUT being emulated or prototyped. While a verification environment might be equated with a UVM testbench, it is actually a far more general term, especially in the context of hardware accelerated verification. The verification environment may consist of soft models of things that exist in the environment the system will be used in (validation context). For example, a soft model of a display system or Ethernet traffic generator or a mass storage device. Soft models provide advantages including controllability, reproducibility (for debug) and easier enterprise management and exploitation of the hardware acceleration technology. It may also include a subsystem of the chip design itself. Today, it has become relatively common to connect a fast model written in software (usually C/C++) to an emulator or FPGA prototype. This is referred to as hybrid emulation or hybrid prototyping. The most common subsystem of a chip to place in a software model is the processor subsystem of an SoC. These models usually exist to enable early software development and can run at speeds equivalent to ~100 MHz. When the processor subsystem is well verified and validated, typically a reused IP subsystem, then hybrid mode can significantly increase the verification cycles of other blocks and subsystems, especially driving tests using embedded software and verifying functionality within a full chip context. Hybrid mode can rightfully be viewed as a sub-category of the virtual usage mode of hardware acceleration.
    • As with simulation and formal before it, hardware acceleration solutions are evolving targeted verification “applications” to facilitate productivity when verifying specific objectives or target markets. For example, a DFT application accelerates and facilitates the validation of test vectors and test logic which are usually added and tested at the gate-level.
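As a small illustration of functional coverage that can, in principle, be collected consistently across engines, here is a minimal SystemVerilog sketch. The interface, the bins, and the claim that a given emulator supports this particular subset are illustrative assumptions:

// A coverage model for a hypothetical packet interface. The same
// coverage intent can be sampled in simulation and, for the subset an
// emulator supports, in emulation, and the resulting databases merged.
module pkt_coverage (
  input logic       clk,
  input logic       valid,
  input logic [1:0] kind,    // 0: data, 1: control, 2: error
  input logic [9:0] length
);
  covergroup cg @(posedge clk iff valid);
    cp_kind   : coverpoint kind   { bins data = {0};
                                    bins ctrl = {1};
                                    bins err  = {2}; }
    cp_length : coverpoint length { bins short_pkt = {[1:64]};
                                    bins long_pkt  = {[65:1023]}; }
    kind_x_len: cross cp_kind, cp_length;
  endgroup

  cg cg_inst = new();
endmodule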

In conclusion, it may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).

Verification Choices: Formal, Simulation, Emulation

Thursday, July 21st, 2016

Gabe Moretti, Senior Editor

Lately there have been articles and panels about the best type of tools to use to verify a design.  Most of the discussion has been centered on the choice between simulation and emulation, but, of course, formal techniques should also be considered.  I did not include FPGA-based verification in this article because I consider it a choice equivalent to emulation, but at a different price point.

I invited a few representatives of EDA companies to answer questions about the topic.  The respondents are:

Steve Bailey, Director of Emerging Technologies at Mentor Graphics,

Dave Kelf, Vice President of Marketing at OneSpin Solutions

Frank Schirrmeister, Senior Product Management Director at Cadence

Seena Shankar, Technical Marketing Manager at Silvaco

Vigyan Singhal, President and CEO at Oski Technology

Lauro Rizzatti, Verification Consultant

A Search for the Best Technique

I first wanted an opinion of what each technology does better.  Of course the question is ambiguous because the choice of tool, as Lauro Rizzatti points out, depends on the characteristics of the design to be verified.  “As much as I favor emulation, when design complexity does not stand in the way, simulation and formal are superior choices for design verification. Design debugging in simulation is unmatched by emulation. Not only interactive, flexible and versatile, simulation also supports four-state and timing analysis.
However, design complexity growth is here to stay, and the curve will only get more challenging into the future. And, we not only have to deal with complexity measured in more transistors or gates in hardware, but also measured in more code in embedded software. Tasked to address this trend, both simulation and formal would hit the wall. This is where emulation comes in to rule the day.  Performance is not the only criteria to measure the viability of a verification engine.”

Vigyan Singhal wrote: “Both formal and emulation are becoming increasingly popular. Why use a chain saw (emulation) when you can use a scalpel (formal)? Every bug that is truly a block-level bug (and most bugs are) is most cost effective to discover with formal. True system-level bugs, like bandwidth or performance for representative traffic patterns, are best left for emulation.  Too often, we make the mistake of not using formal early enough in the design flow.”

Seena Shankar provided a different point of view. “Simulation gives full visibility into the RTL and testbench. Earlier in the development cycle, it is easier to fix bugs and rerun a simulation. But we are definitely gated by the number of cycles that can be run. A basic test exercising a couple of functional operations could take up to 12 hours for a design with 100 million gates.

Emulation takes longer to set up because all RTL components need to be in place before a test run can begin. The upside is that millions of operations can be run in minutes. However, debug is difficult and time consuming compared to simulation.  Formal verification needs a different kind of expertise. It is only effective for smaller blocks but can really find corner-case bugs through assumptions and constraints provided to the tool.”

Steve Bailey concluded that: “It may seem that simulation is being used less today. But, it is all relative. The total number of verification cycles is growing exponentially. More simulation cycles are being performed today even though hardware acceleration and formal cycles are taking relatively larger pieces of the overall verification pie. Formal is growing in appeal as a complementary engine. Because of its comprehensive verification nature, it can significantly bend the cost curve for high-valued (difficult/challenging) verification tasks and objectives. The size and complexity of designs today require the application of all verification engines to the challenges of verifying and validating (pre-silicon) the hardware design and enabling early SW development. The use of hardware acceleration continues to shift-left and be used earlier in the verification and validation flow causing emulation and FPGA prototyping to evolve into full-fledged verification engines (not just ICE validation engines).”

If I had my choice I would like to use formal tools to develop an executable specification as early as possible in the design, making sure that all functional characteristics of the intended product will be implemented and that the execution parameters will be respected.  I agree that the choice between simulation and emulation depends on the size of the block being verified, and I also think that hardware/software co-simulation will most often require the use of an emulation/acceleration device.
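To give a flavor of what a fragment of such an executable specification might look like, here is a minimal sketch of two interface-level requirements written as properties that a formal tool can attempt to prove and a simulator can monitor. The signal names and the 16-cycle bound are hypothetical:

// Two interface-level requirements written as properties, so that the
// "specification" itself is machine-checkable: a formal tool can try
// to prove them, and a simulator can monitor them.
module handshake_spec (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  // Every request must be acknowledged within 16 cycles.
  a_req_gets_ack: assert property (
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:16] ack
  );

  // No acknowledge without an active request.
  a_no_spurious_ack: assert property (
    @(posedge clk) disable iff (!rst_n)
      ack |-> req
  );
endmodule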

Limitations to Cooperation Among the Techniques

Since all three techniques have value in some circumstance, can designers easily move from one to another?

Frank Schirrmeister provided a very exhaustive response to the question, including a good figure.

“The following figure shows some of the connections that exist today. The limitations of cooperation between the engines are often of a less technical nature. Instead, they tend to result from the gaps between different disciplines in terms of cross knowledge between them.

Figure 1: Techniques Relationships (Courtesy of Cadence)

Some example integrations include:

-          Simulation acceleration combining RTL simulation and emulation. The technical challenges have mostly been overcome using transactors to connect testbenches, often at the transaction level that runs on simulation hosts to the hardware holding the design under test (DUT) and executing at higher speed. This allows users to combine the expressiveness in simulated testbenches to increase verification efficiency with the speed of synthesizable DUTs in emulation.

-          At this point, we even have enabled hot-swap between simulation and emulation. For example, we can run gate-level netlists without timing in emulation at faster speeds. This allows users to reach a point of interest at a later point of the execution that would take hours or days in simulation. Once the point of interest is reached, users can switch (hot swap) back into simulation, adding back the timing and continue the gate-level timing simulation.

-          Emulation and FPGA-based prototyping can share a common front-end, such as in the Cadence System Development Suite, to allow faster bring-up using multi-fabric compilation.

-          Formal and simulation also combine nicely for assertions, X-propagation, etc., and, when assertions are synthesizable and can be mapped into emulation, formal techniques are linked even with hardware-based execution.”

Vigyan Singhal noted that: “Interchangeability of databases and poorly architected testbenches are limitations. There is still no unification of coverage database standard enabling integration of results between formal, simulation and emulation. Often, formal or simulation testbenches are not architected for reuse, even though they can almost always be. All constraints in formal testbenches should be simulatable and emulatable; if checkers and bus functional models (BFMs) are separated in simulation, checkers can sometimes be used in formal and in emulation.”

Dave Kelf concluded that: “the real question here is: How do we describe requirements and design specs in machine-readable forms, use this information to produce a verification plan, translate them into test structures for different tools, and extract coverage information that can be checked against the verification plan? It is this top-down, closed-loop environment generally accepted as ideal, but we have yet to see it realized in the industry. We are limited fundamentally by the ability to create a machine-readable specification.”

Portable Stimulus

Accellera has formed a study group to explore the possibility of developing a portable stimulus methodology.  The group is very active and progress is being made in that direction.  Since the group has yet to publish a first proposal, it was difficult to ask any specific questions, although I thought that a judgement on the desirability of such an effort was important.

Frank Schirrmeister wrote: “At the highest level, the portable stimulus project allows designers to create tests to verify SoC integration, including items like low-power scenarios and cache coherency. By keeping the tests as software routines executing on processors that are available in the design anyway, the stimulus becomes portable between the different dynamic engines, specifically simulation, emulation, and FPGA prototyping. The difference in usage with the same stimulus then really lies in execution speed – regressions can run on faster engines with less debug – and on debug insight once a bug is encountered.”

Dave Kelf also has a positive opinion about the effort. “Portable Stimulus is an excellent effort to abstract the key part of the UVM test structures such that they may be applied to both simulation and emulation. This is a worthy effort in the right direction, but it is just scraping the surface. The industry needs to bring assertions into this process, and consider how this stimulus may be better derived from high-level specifications”

SystemVerilog

The language SystemVerilog is considered by some to be the best language to use for SoC development.  Yet, the language has limitations according to some of the respondents.

Seena Shankar answered the question “Is SystemVerilog the best we can do for system verification?” as follows: “Sort of. SystemVerilog encapsulates the best features from software and hardware paradigms for verification. It is a standard that is very easy to follow but may not be the best in performance. If the performance hit can be managed with a combination of system C/C++ or Verilog or any other verification languages, the solution might be limited in terms of portability across projects or simulators.”

Dave Kelf wrote: “One of the most misnamed languages is SystemVerilog. Possibly the only thing this language was not designed to do was any kind of system specification. The name was produced in a misguided attempt to compete or compare with SystemC, and that was clearly a mistake. Now it is possible to use SystemVerilog at the system level, but it is clear that a C derived language is far more effective.
What is required is a format that allows untimed algorithmic design with enough information for it to be synthesized, virtual platforms that provide a hardware/software test capability at an acceptable level of performance, and general system structures to be analyzed and specified. C++ is the only language close to this requirement.”

And Frank Schirrmeister observed: “SystemVerilog and technologies like universal verification methodology (UVM) work well at the IP and sub-system level, but seem to run out of steam when extended to full system-on-chip (SoC) verification. That’s where the portable stimulus project comes in, extending what is available in UVM to the SoC level and allowing vertical re-use from IP to the SoC. This approach overcomes the issues for which UVM falls short at the SoC level.”

Conclusion

Both design engineers and verification engineers are still waiting for help from EDA companies.  They have to deal with differing methodologies and imperfect languages while tackling ever more complex designs.  It is not surprising, then, that verification is the most expensive portion of a development project.  Designers must be careful to ensure that what they write is verifiable, while verification engineers need to not only understand the requirements and architecture of the design, but also be familiar with the characteristics of the language used by developers to describe both the architecture and the functionality of the intended product.  I believe that one way to improve the situation is for both EDA companies and system companies to approach a new design not just as a piece of silicon but as a product that integrates hardware, software, mechanical, and physical characteristics.  Then both development and verification plans can choose the most appropriate tools that can co-exist and provide coherent results.

Future Challenges in Design Verification and Creation

Wednesday, March 23rd, 2016

Gabe Moretti, Senior Editor

Dr. Wally Rhines, Chairman and CEO of Mentor Graphics, delivered the keynote address at the recently concluded DVCon U.S. in San Jose.  The title of the presentation was: “Design Verification Challenges: Past, Present, and Future”.  Although one must know the past and recognize the present challenges, the future ones were those that interested me the most.

But let’s start from the present.  As can be seen in Figure 1, designers today use five major techniques to verify a design.  The techniques are not integrated with each other; they stand as five separate silos within the verification methodology.  The near-term goal, as explained by Wally, is to integrate the verification process.  The work of the Portable Stimulus Working Group within the Accellera Systems Initiative is addressing the problem.  The goal, according to Bill Hodges of Intel, is: “Users should not be able to tell if their job was executed on a simulator, emulator, or prototype”.

Figure 1.  Verification Silos

The present EDA development work addresses the functionality of the design, both at the logical and at the physical level.  But, especially with the growing introduction of Internet of Things (IoT) devices and applications, the issues of security and safety are becoming requirements, and we have not yet learned how to verify device robustness in these areas.

Security

Figure 2, courtesy of Mentor Graphics, encapsulates the security problem.  The number of security breaches increases with every passing day it seems, and the financial and privacy losses are significant.

Figure 2

Chip designers must worry about malicious logic inside the chip, counterfeit chips, and side-channel attacks.  Malicious logic is normally inserted dynamically into the chip using Trojan malware; it must be detected and disabled.  The first thing designers need to do is to implement countermeasures within the chip: logic that analyzes runtime activity to recognize foreign-induced activity through a combination of hardware and firmware.  Although simulation can be used for verification, static tests that determine that the chip performs as specified, and does not execute unspecified functions, should be used during the development process.  Well-formed and complete assertions can approximate a specification document for the design.
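As a minimal sketch of what such assertions might look like for a security-sensitive block (the signal names and the exact rules are hypothetical, not taken from the presentation):

// Assertions approximating a specification for a security-sensitive
// block: a key register may change only through an authorized write,
// and an authorized write must take effect with the written data.
module key_reg_spec (
  input logic        clk,
  input logic        rst_n,
  input logic        wr_en,
  input logic        unlocked,
  input logic [31:0] wdata,
  input logic [31:0] key_reg
);
  // No back-door updates: any change to key_reg must follow an
  // authorized write in the previous cycle.
  a_no_backdoor_update: assert property (
    @(posedge clk) disable iff (!rst_n)
      !$stable(key_reg) |-> $past(wr_en && unlocked)
  );

  // The authorized path behaves as specified.
  a_write_effect: assert property (
    @(posedge clk) disable iff (!rst_n)
      (wr_en && unlocked) |=> key_reg == $past(wdata)
  );
endmodule

A formal tool asked to prove the first property is, in effect, answering the question posed below: is the desired path the only path possible?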

Another security threat, the “side-channel attack,” is similar to the Trojan attack but differs in that it takes advantage of back doors left open, either intentionally or not, by the developers.  Back doors are built into systems to deal with special security circumstances by the developers’ institution, but can be used criminally when discovered by unauthorized third parties.  To defend against such an eventuality designers can use hardened IP or special logic to verify authenticity.  Clearly, during development these countermeasures must be verified and weaknesses discovered.  The question to be answered is: “Is the desired path the only path possible?”

Safety

As the use of electronic systems grows at an increasing pace in all sorts of products, the reliability of such systems grows in importance.  Although many products can be replaced when they fail without serious consequences for the users, an increasing number of system failures put the safety of human beings in great jeopardy.  Dr. Rhines identified in particular systems in the automotive, medical, and aerospace industries.  Safety standards have been developed in these industries that cover electronic systems.  Specifically, ISO 26262 in the automotive industry, IEC 60601 in the medical field, and DO-254 in aerospace applications.  These certification standards aim to ensure that no harm will occur to systems, their operators, or to bystanders by verifying the functional robustness of the implementation.

Clearly no one would want a heart pacemaker (Figure 3) that is not fail-safe to be implanted in a living organism.

Figure 3. Implementation subject to IEC 60601 requirements

The certification standards address the safe system development process by requiring evidence that all reasonable system safety objectives are satisfied.  The goal is to avoid the risk of systematic failures or random hardware failures by establishing appropriate requirements and processes.  Before a system is certified, auditors check that each applicable requirement in the standard has been implemented and verified.  They must identify specific tests used to verify compliance with each specific requirement and must also be assured that automatic requirements tracking is available for a number of years.

Dr. Rhines presented a slide that dealt with the following question: “Is your system safe in the presence of a fault?”.

To answer the question, verification engineers must inject faults into the verification stream.  Doing this helps determine whether the response of the system matches the specification despite the presence of faults.  It also helps developers understand the effects of faults on target system behavior, and it assesses the overall risk.  Wally noted that formal-based fault injection/verification can exhaustively verify the safety aspects of the design in the presence of faults.
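A minimal simulation-side sketch of this kind of fault injection is shown below; the hierarchical paths, the stuck-at value and the 500-cycle detection budget are all illustrative assumptions (formal-based fault injection, as Wally noted, would instead prove detection for all reachable cases):

// A testbench fragment that forces a fault onto an internal net of a
// hypothetical design, then checks that the safety mechanism raises
// an alarm within the assumed detection budget.
module fault_injection_test;

  task automatic inject_stuck_at_fault();
    force tb.dut.core1.alu_result = '0;      // stuck-at-0 fault
    repeat (100) @(posedge tb.clk);
    release tb.dut.core1.alu_result;
  endtask

  initial begin
    wait (tb.rst_n === 1'b1);
    repeat (1000) @(posedge tb.clk);         // let normal traffic run
    inject_stuck_at_fault();

    fork
      begin
        wait (tb.dut.safety_alarm === 1'b1);
        $display("Fault detected as required.");
      end
      begin
        repeat (500) @(posedge tb.clk);      // detection budget
        $error("Fault not detected within the required interval.");
      end
    join_any
    disable fork;                            // stop the losing branch
  end
endmodule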

Conclusion

Dr. Rhines focused on the verification aspects during his presentation and his conclusions covered four points.

  • Despite design re-use, verification complexity continues to increase at 3-4X the rate of design creation
  • Increasing verification requirements drive new capabilities for each type of verification engine
  • Continuing verification productivity gains require EDA to abstract the verification process from the underlying engines, develop common environments, methodologies and tools, and separate the “what” from the “how”
  • Verification for security and safety is providing another major wave of verification requirements.

I would like to point out that developments in verification alone are not enough.  What EDA really needs is to develop a system approach to the problem of developing and verifying a system.  The industry has given lip service to system design and the tools available so far still maintain a “silos” approach to the problem.  What is really required is the ability to work at the architectural level and evaluate a number of possible solutions to a well specified requirements document.  Formal tools provide good opportunities to approximate, if not totally implement, an executable requirements document.  Designers need to be able to evaluate a number of alternatives that include the use of mixed hardware and software implementations, analog and mixed-signal solutions, IP re-use, and electro-mechanical devices, such as MEMS.

It is useless or even dangerous to begin development under false assumptions whose impact will be found, if ever, once designers are well into the implementation stage.  The EDA industry is still focusing too much on fault identification and not enough on fault avoidance.

The EDA Industry Macro Projections for 2016

Monday, January 25th, 2016

Gabe Moretti, Senior Editor

How the EDA industry will fare in 2016 will be influenced by the worldwide financial climate. Instability in oil prices, the Middle East wars and the unpredictability of the Chinese market will indirectly influence the EDA industry.  EDA has seen significant growth since 1996, but the growth is indirectly influenced by the overall health of the financial community (see Figure 1).

Figure 1. EDA Quarterly Revenue Report from EDA Consortium

China has been a growing market for EDA tools and Chinese consumers have purchased a significant number of semiconductor-based products in the recent past.  Consumer products demand is slowing, and China’s financial health is being questioned.  The result is that demand for EDA tools may be less than in 2015.   I have received so many forecasts for 2016 that I have decided to break the subject into two articles.  The first article will cover the macro aspects, while the second will focus more on specific tools and market segments.

Economy and Technology

EDA itself is changing.  Here is what Bob Smith, executive director of the EDA consortium has to say:

“Cooperation and competition will be the watchwords for 2016 in our industry. The ecosystem and all the players are responsible for driving designs into the semiconductor manufacturing ecosystem. Success is highly dependent on traditional EDA, but we are realizing that there are many other critical components, including semiconductor IP, embedded software and advanced packaging such as 3D-IC. In other words, our industry is a “design ecosystem” feeding the manufacturing sector. The various players in our ecosystem are realizing that we can and should work together to increase the collective growth of our industry. Expect to see industry organizations serving as the intermediaries to bring these various constituents together.”

Bob Smith’s words acknowledge that the term “system” has taken a new meaning in EDA.  We are no longer talking about developing a hardware system, or even a hardware/software system.  A system today includes digital and analog hardware, software both at the system and application level, MEMS, third party IP, and connectivity and co-execution with other systems.  EDA vendors are morphing in order to accommodate these new requirements.  Change is difficult because it implies error as well as successes, and 2016 will be a year of changes.

Lucio Lanza, managing director of Lanza techVentures and a recipient of the Phil Kaufman award, describes it this way:

“We’ve gone from computers talking to each other to an era of PCs connecting people using PCs. Today, the connections of people and devices seem irrelevant. As we move to the Internet of Things, things will get connected to other things and won’t go through people. In fact, I call it the World of Things not IoT and the implications are vast for EDA, the semiconductor industry and society. The EDA community has been the enabler for this connected phenomenon. We now have a rare opportunity to be more creative in our thinking about where the technology is going and how we can assist in getting there in a positive and meaningful way.”

Ranjit Adhikary, director of Marketing at Cliosoft, acknowledges the growing need for tool integration in his remarks:

“The world is currently undergoing a quiet revolution akin to the dot-com boom of the late 1990s. There has been a growing effort to slowly but surely provide connectivity between various physical objects, enable them to share and exchange data, and manage the devices using smartphones. The labors of these efforts have started to bear fruit, and we can see that in the automotive and consumables industries. What this implies from a semiconductor standpoint is that shipments of analog and RF ICs will grow at a remarkable pace, and there will be increased efforts from design companies to put digital, analog and RF components in the same SoC. From an EDA standpoint, different players will also collaborate to share the same databases; an example is the work of Keysight Technologies and Cadence Design Systems on OpenAccess libraries. Design companies will seek to improve their design methodologies and increase the use of IP to ensure a faster turnaround time for SoCs. From an infrastructure standpoint, a growing number of design companies will invest more in design data and IP management to ensure better collaboration between design teams at geographically dispersed locations, as well as to maximize their resources.”

Michiel Ligthart, president and chief operating officer at Verific Design Automation, points to the need to integrate tools from various sources to achieve the most effective design flow:

“One of the more interesting trends Verific has observed over the last five years is the differentiation strategy adopted by a variety of large and small CAD departments. Single-vendor tool flows do not meet all requirements. Instead, IDMs outline their needs and devise their own design and verification flow to improve over their competition. That trend will only become more pronounced in 2016.”

New and Expanding Markets

The focus on IoT applications has opened up new markets and expanded existing ones. The automotive market, for example, is looking at new functionality for both in-car and car-to-car applications.

Raik Brinkmann, president and chief executive officer at OneSpin Solutions, wrote:

“OneSpin Solutions has witnessed the push toward automotive safety for more than two years. Demand will further increase as designers learn how to apply the ISO 26262 standard. I’m not sure that security will come to the forefront in 2016 because there are no standards as yet and ad hoc approaches will dominate. However, the pressure for security standards will be high, just as ISO 26262 was for automotive.”

Michael Buehler-Garcia, Senior Director of Marketing for Calibre Design Solutions at Mentor Graphics, notes that many established process nodes, often thought of as obsolete, will instead see increased volume because of the technologies required to implement IoT architectures.

“As cutting-edge process nodes entail ever-higher non-recurring engineering (NRE) costs, ‘More than Moore’ technologies are moving from the “press release” stage to broader adoption. One consequence of this adoption has been a renewed interest in more established processes. Traditional users of older process nodes, such as analog design, RFCMOS, and microelectromechanical systems (MEMS), are now being joined by silicon photonics, standalone radios, and standalone memory controllers as part of 3D-IC implementations. In addition, the Internet of Things (IoT) functionality we crave is being driven by “milli-cents for nano-acres of silicon,” which aligns with the increase in designs targeted for established nodes (130 nm and older). New physical verification techniques developed for advanced nodes can simplify life for design companies working at established nodes by reducing the dependency on human intervention. In 2016, we expect to see more adoption of advanced software solutions such as reliability checking, pattern matching, “smart” fill, advanced extraction solutions, “chip out” package assembly verification, and waiver processing to help IC designers implement more complex designs on established nodes. We also foresee this renewed interest in established nodes driving tighter capacity access, which in turn will drive increased use of design optimization techniques, such as DFM scoring, filling analysis, and critical area analysis, to help maximize the robustness of designs in established nodes.”

Warren Kurisu, Director of Product Management in the Mentor Graphics Embedded Systems Division, points to wearables, another sector within the IoT market, as an opportunity for expansion.

“We are seeing multiple trends. Wearables are increasing in functionality and complexity enabled by the availability of advanced low-power heterogeneous multicore architectures and the availability of power management tools. The IoT continues to gain momentum as we are now seeing a heavier demand for intelligent, customizable IoT gateways. Further, the emergence of IoT 2.0 has placed a new emphasis on end-to-end security from the cloud and gateway right down to the edge device.”

Power management is one area on which EDA vendors have concentrated significant effort. But not much has been said about battery technology. Shreefal Mehta, president and CEO of Paper Battery Company, offered the following observations.

“The year 2016 will be the year we see tremendous advances in energy storage and management.   The gap between the rate of growth of our electronic devices and the battery energy that fuels them will increase to a tipping point.   On average, battery energy density has only grown 12% while electronic capabilities have more than doubled annually.  The need for increased energy and power density will be a major trend in 2016.  More energy-efficient processors and sensors will be deployed into the market, requiring smaller, safer, longer-lasting and higher-performing energy sources. Today’s batteries won’t cut it.

Wireless devices and sensors that need pulses of peak power to transmit, compute and/or perform analog functions will continue to create a tension between the need for peak power pulses and long energy cycles. For example, cell phone transmission and Bluetooth peripherals are, as a whole, low power, but their peak power requirements are several orders of magnitude greater than their average power consumption. Hence, new hybrid power solutions will begin to emerge, especially where energy-efficient delivery is needed alongside peak power and as the ratio of peak to average power grows significantly.

Traditional batteries will continue to improve, offering higher energy at lower prices, but current lithium-ion cells will reach a limit in the balance between energy and power, with new materials and nanostructured electrodes needed to provide both high power and high energy. This situation is aggravated by the push toward physically smaller form factors, where energy and power densities diverge significantly. Current efforts at various companies and universities are promising but will take a few more years to bring to market.

The supercapacitor market is poised for growth in 2016, with an expected CAGR of 19% through 2020. Between the need for more efficient form factors, high energy density and peak power performance, a new form of supercapacitor will power the ever-increasing demands of portable electronics. The hybrid supercapacitor is the bridge between high-energy batteries and high-power supercapacitors. Because these devices offer higher energy than traditional supercapacitors and higher power than batteries, they may be used in conjunction with, or completely replace, battery systems. Given the way we use our smartphones, supercapacitors will find a good use model there, as well as in applications ranging from transportation to enterprise storage.

Memory in smartphones and tablets containing solid-state drives (SSDs) will increasingly adopt architectures that manage a non-volatile cache in a way that preserves content in the event of power failure. These devices handle large amounts of video, and the media data will be stored in RAM (backed by flash), which allows frequent overwrites without the wear-out degradation that would significantly shorten the life of the flash memory if it were used for all storage. To meet the data-integrity needs of this shadowed memory, supercapacitors will take a prominent role in supplying bridge power when the battery is depleted, adding significant value and performance to mobile entertainment and computing devices.

Finally, safety issues with lithium-ion batteries have become front and center and will continue to plague the industry and manufacturing environments. Flaming hoverboards and the shipment and air-travel restrictions on lithium batteries make the future of personal battery power questionable. Improved testing and more regulation will come, but because of the widespread use of battery-powered devices, safety will become a key factor. What we will see in 2016 is the emergence of the hybrid supercapacitor, which offers a high-capacity alternative to lithium batteries in terms of power efficiency. These alternatives can operate over a wide temperature range, have long cycle lives and, most importantly, are safe.”
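Mehta’s point about the gap between peak and average power can be made concrete with some back-of-the-envelope arithmetic. The short Python sketch below uses round, purely illustrative numbers rather than data from any real radio or battery: a device that transmits in brief bursts can show a peak-to-average power ratio of several hundred, and a capacitor sized from E = 1/2 * C * (Vmax^2 - Vmin^2) can buffer each burst so the battery only has to supply something close to the average.

# Back-of-the-envelope arithmetic for the peak-versus-average power tension
# described in the quote above. Every figure here is an illustrative round
# number, not a measurement of any particular device or battery.

peak_power_w = 2.0     # power drawn during a transmit burst
burst_ms = 2.0         # length of one burst
period_ms = 1000.0     # one burst per second
idle_power_w = 0.002   # sleep-mode draw between bursts

duty_cycle = burst_ms / period_ms
avg_power_w = peak_power_w * duty_cycle + idle_power_w * (1.0 - duty_cycle)

print(f"duty cycle:            {duty_cycle:.1%}")
print(f"average power:         {avg_power_w * 1000:.1f} mW")
print(f"peak / average ratio:  {peak_power_w / avg_power_w:.0f}x")

# Sizing an ideal capacitor to supply the burst so the battery sees only the
# average: the burst energy E = P * t must fit in E = 1/2 * C * (Vmax^2 - Vmin^2),
# where Vmax - Vmin is the voltage droop we are willing to tolerate.
v_max, v_min = 3.6, 3.0
burst_energy_j = peak_power_w * (burst_ms / 1000.0)
cap_f = 2.0 * burst_energy_j / (v_max**2 - v_min**2)

print(f"energy per burst:      {burst_energy_j * 1000:.1f} mJ")
print(f"required capacitance:  {cap_f * 1000:.1f} mF")

With these particular numbers the peak-to-average ratio works out to over 300 and the buffer to about 2 mF, exactly the regime of high ratio and modest stored energy in which Mehta argues hybrid supercapacitor solutions make sense.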

Greg Schmergel, CEO, Founder and President of memory maker Nantero, Inc., points out that just as new power-storage devices will open new opportunities, so will new memory devices.

“With the traditional memories, DRAM and flash, nearing the end of the scaling roadmap, new memories will emerge and change memory from a standard commodity to a potentially powerful competitive advantage. As an example, NRAM products such as multi-GB high-speed DDR4-compatible nonvolatile standalone memories are already being designed, giving new options to designers who can take advantage of the combination of nonvolatility, high speed, high density and low power. The emergence of next-generation nonvolatile memory that is faster than flash will enable new and creative system architectures to be created, providing substantial customer value.”

Jin Zhang, Vice President of Marketing and Customer Relations at Oski Technology, believes the formal methods sector is an excellent prospect for growing the EDA market.

“Formal verification adoption is growing rapidly worldwide and that will continue into 2016. Not surprisingly, the U.S. market leads the way, with China following a close second. Usage is especially apparent in China where a heavy investment has been made in the semiconductor industry, particularly in CPU designs. Many companies are starting to build internal formal groups. Chinese project teams are discovering the benefits of improving design qualities using Formal Sign-off Methodology.”

These market forces are fueling growth in specific design areas supported by EDA tools. The companion article will discuss some of these areas.