
Posts Tagged ‘Cadence’


Cadence Introduces Innovus Implementation System

Friday, March 13th, 2015

Gabe Moretti, Senior Editor

Cadence Design Systems  has introduced its Innovus Implementation System, a next-generation physical implementation solution that aims to enable system-on-chip (SoC) developers to deliver designs with best-in-class power, performance and area (PPA) while accelerating time to market.  The Innovus Implementation System was designed to help physical design engineers achieve best-in-class performance while designing for a set power/area budget or realize maximum power/area savings while optimizing for a set target frequency.

The company claims that the Innovus Implementation System typically provides 10 to 20 percent better PPA and up to a 10X full-flow speedup and capacity gain, both at advanced 16/14/10nm FinFET processes and at established process nodes.

Rod Metcalfe, Product Management Group Director, pointed out the key Innovus capabilities:

- New GigaPlace solver-based placement technology that is slack-driven and topology-/pin access-/color-aware, enabling optimal pipeline placement, wirelength, utilization and PPA, and providing the best starting point for optimization
- Advanced timing- and power-driven optimization that is multi-threaded and layer aware, reducing dynamic and leakage power with optimal performance
- Unique concurrent clock and datapath optimization that includes automated hybrid H-tree generation, enhancing cross-corner variability and driving maximum performance with reduced power
- Next-generation slack-driven routing with track-aware timing optimization that tackles signal integrity early on and improves post-route correlation
- Full-flow multi-objective technology enabling concurrent electrical and physical optimization to avoid local optima, resulting in globally optimal PPA

The Innovus Implementation System also offers multiple capabilities that shorten turnaround time for each place-and-route iteration. Its core algorithms have been enhanced with multi-threading throughout the full flow, providing significant speedup on industry-standard hardware with 8 to 16 CPUs. Additionally, it features what Cadence believes to be the industry’s first massively distributed parallel solution, enabling the implementation of design blocks with 10 million instances or more. Multi-scenario acceleration throughout the flow improves turnaround time even as the number of multi-mode, multi-corner scenarios increases.

    Rahul Deokar, Product Management Director added that the product offers a common user interface (UI) across synthesis, implementation and signoff tools, and data-model and API integration with the Tempus Timing Signoff solution and Quantus QRC Extraction solution.

    The Innovus common GUI

    Together these solutions enable fast, accurate, 10nm-ready signoff closure that facilitates ease of adoption and an end-to-end customizable flow. Customers can also benefit from robust visualization and reporting that enables enhanced debugging, root-cause analysis and metrics-driven design flow management.

    “At ARM, we push the limits of silicon and EDA tool technology to deliver products on tight schedules required for consumer markets,” said Noel Hurley, general manager, CPU group, ARM. “We partnered closely with Cadence to utilize the Innovus Implementation System during the development of our ARM Cortex-A72 processor. This demonstrated a 5X runtime improvement over previous projects and will deliver more than 2.6GHz performance within our area target. Based on our results, we are confident that the new physical implementation solution can help our mutual customers deliver complex, advanced-node SoCs on time.”

    “Customers have already started to employ the Innovus Implementation System to help achieve higher performance, lower power and minimized area to deliver designs to the market before the competition can,” said Dr. Anirudh Devgan, senior vice president of the Digital and Signoff Group at Cadence. “The early customers who have deployed the solution on production designs are reporting significantly better PPA and a substantial turnaround time reduction versus competing solutions.”

    DVCon Highlights: Software, Complexity, and Moore’s Law

    Thursday, March 12th, 2015

    Gabe Moretti, Senior Editor

The first DVCon United States was a success. It was the 27th conference in the series and the first held under that name, to distinguish it from DVCon Europe and DVCon India. Those two held their first events last year and, following their success, will be held again this year.

    Overall attendance, including exhibit-only and technical conference attendees, was 932.

If we count exhibitors’ personnel, as DAC does, the total number of attendees rises to 1,213. The conference attracted 36 exhibitors, including 10 exhibiting for the first time and 6 headquartered outside the US. The technical presentations were very well attended, almost always with standing room only, averaging around 175 attendees per session; one cannot fit more into the conference rooms at the DoubleTree. The other thing I observed was that there was almost no attendee traffic during the presentations. People took a seat and stayed for the entire presentation; almost no one came in, listened for a few minutes and then left. In my experience this is not typical, and it shows that the goal of DVCon, to present topics of contemporary importance, was met.

    Process Technology and Software Growth

The keynote address this year was delivered by Aart de Geus, chairman and co-CEO of Synopsys. His speeches are always both unique and quite interesting. This year he chose as his topic “Smart Design from Silicon to Software”. As one could have expected, Aart’s major points had to do with process technology, something he is extremely knowledgeable about. He thinks that Moore’s law as an instrument to predict semiconductor process advances has about ten years of usable life. After that the industry will have to find another tool, assuming one will be required, I would add. Since, as Aart correctly points out, we are still using a 193 nm crayon to implement 10 nm features, progress is clearly impaired. Personally I do not understand the reason for continuing to use ultraviolet light in lithography, aside from the huge costs of moving to x-ray lithography. The industry has resisted the move for so long that I think even x-ray now has too short a life span to justify the investment. So, before the ten years are up, we might see some very unusual and creative approaches to building features on some new material. After all, whatever we use will have to work at the level of atoms and their structure.

For now, says Aart, most system companies are “camping” at 28 nm while evaluating “the big leap” to more advanced lithography processes. I think it will be a long time, if ever, before 10 nm processes become popular. Obviously the 28 nm process supports the area and power requirements of the vast majority of advanced consumer products. Aart did not say it, but it is a fact that a very large number of wafers are still produced on 90 nm processes. Dr. de Geus pointed out that the major factor in determining investments in product development is now economics, not available EDA technology. Of course one can observe that economics is only a second-order decision-making tool, since economics is determined in part by complexity. But Aart stopped at economics, a point he has made in previous presentations over the last twelve months. His point is well taken, since ROI is greatly dependent on hitting the market window.

A very interesting point made during the presentation is that the length of development schedules has not changed in the last ten years; the content has. Development of proprietary hardware has gotten shorter, thanks to improved EDA tools, but IP integration and software integration and co-verification have used up all the time savings in the schedule.

What Dr. de Geus’ slides show is that software is growing, and will keep growing, at about ten times the rate of hardware. Thus investment in software tools by EDA companies makes sense now. Approximately ten years ago, during a DATE conference in Paris, I had asked Aart about the opportunity for EDA companies, Synopsys in particular, to invest in software tools. At that time Aart was emphatic that EDA companies did not belong in the software space. Compilers are either cheap or free, he told me, and debuggers do not offer the right economic value to be of interest. Well, without much fanfare about the topic of “investment in software”, Synopsys is now in the software business in a big way. Virtual prototyping and software co-verification are market segments Synopsys is very active in, and making a nice profit, I may add. So, whether it is a matter of definition or of new market availability, EDA companies are in the software business.

    When Aart talks I always get reasons to think.  Here are my conclusions.  On the manufacturing side, we are tinkering with what we have had for years, afraid to make the leap to a more suitable technology.  From the software side, we are just as conservative.

That software would grow at a much faster pace than hardware is not news to me. In all the years that I worked as a software developer or as a manager of software development, I always found that software grows to use up all the available hardware, and that it is the major driver of hardware development, whether in memory size and management or in speed of execution. My conclusion is that nothing is new: the software industry has never put efficiency as its top goal; the priority has always been making the programmer’s life easier. Higher-level languages are more powerful because programmers can implement functions with minimal effort, not because the underlying hardware is used optimally. And the result is that, when it comes to software quality and security, users are playing too large a part as the verification team.

    Art or Science

The Wednesday proceedings were opened early in the morning by a panel with the provocative title of Art or Science. The panelists were Janick Bergeron from Synopsys, Harry Foster from Mentor, JL Gray from Cadence, Ken Knowlson from Intel, and Bernard Murphy from Atrenta. The purpose of the panel was to figure out whether a developer is better served by using his or her own creativity in developing either hardware or software, or by following a defined and “proven” methodology without deviation.

After some introductory remarks, which seemed to show mild support for the Science approach, I pointed out that the title of the panel was wrong. It should have been titled Art and Science, since both must play a part in any good development process. That changed the nature of the panel. To begin with, there had to be a definition of what art and science meant. Here is my definition. Art is a problem-specific solution achieved through creativity. Science is the use of a repeatable recipe, encompassing both tools and methods, that ensures validated quality of results.

Harry Foster pointed out that it is difficult to teach creativity. This is true, but I maintain it is not impossible, especially if we changed our approach to education. We must move from teaching the ability to repeat memorized answers that are easy to grade on a test, and switch to problem solving, a system better for the student but more difficult to grade. Our present educational system is focused on teachers, not students.

    The panel spent a significant amount of time discussing the issue of hardware/software co-verification.  We really do not have a complete scientific approach, but we are also limited by the schedule in using creative solutions that themselves require verification.

I really liked what Ken Knowlson said at one point. There is a significant difference between a complicated and a complex problem. A complicated problem is understood but difficult to solve, while a complex problem is something we do not understand a priori. This insight may be difficult to grasp without an example, so here is mine: relativity is complicated, dark matter is complex.

    Conclusion

Discussing all of the technical sessions would take too long and would interest only portions of the readership, so I am leaving such matters to those who have access to the conference proceedings. But I think that both the keynote speech and the panel provided enough understanding, as well as material for thought, to amply justify attending the conference. Too often I have heard that DVCon is a verification conference: it is not just for verification, as both the keynote and the panel prove. It is for all those who care about development and verification, in short for those who know that a well-developed product is easier to verify, manufacture and maintain than otherwise. So whether in India, Europe or the US, see you at the next DVCon.

    Blog Review – Tuesday March 10, 2015

    Tuesday, March 10th, 2015

An interesting and informative tutorial on connecting an Arduino to the Internet when ‘in the wild’ is the topic that caught ARM’s Joe Hanson’s interest.

    Sharing the secrets of SoC companies that accelerate the distributed design process, Kurt Shuler, Arteris, considers the interconnect conundrum.

    Never one to shy away from the big question, Richard Goering, Cadence Design Systems, asks what is the key to IC design efficiency. He has some help, with panel members from the DVCON 2015 conference, organised by the Accellera Systems Initiative.

Contemplating NXP’s acquisition of Freescale, Ray Angers, Chipworks, with a series of bar charts and dot-graphics, deems the Euro-American couple a good match.

Experiencing an identity crisis, Jeff Bier, Berkeley Design, is looking forward to attending the Embedded Vision Summit in May, and particularly, it seems, to the keynote by Mike Aldred, the lead robotics developer at Dyson.

    The multi-lingual Colin Walls is brushing up on his Swedish as he packs for ESC (Embedded Conference Scandinavia) this week. He will speak at three sessions – Dynamic Memory Allocation and Fragmentation in C and C++, Power Management in Embedded Systems and Self-Testing in Embedded Systems, which he previews in this blog.

Delighted at Intel’s call for 3D ICs, Zvi Or-Bach, MonolithIC 3D, argues the case for the packaging technology for SoCs, using data and graphics from a variety of sources.

Blogging from Mobile World Congress, Martijn van der Linden, NXP, looks at what the company is developing for the Internet of Things, including the connected Tesla-based car concept from Rinspeed.

Anyone looking into serial data transfers to replace parallel data transfers can discover more from the blog posted by Saurabh Shrivastava, Synopsys. The acceleration of PCI Express-based systems’ verification and the different power states of the interface have never been more relevant.

    Blog Review – Monday, February 23, 2015

    Monday, February 23rd, 2015

    Next week may be a test for the German language skills of Colin Walls, Mentor Graphics, as he returns to Nuremberg for Embedded World 2015, where he will be presenting papers at the conference. He previews both How to Measure RTOS Performance and Self-Testing in Embedded Systems. And issues an invitation to drop by the company booth.

    Another heads-up for Embedded World from Chris A Ciufo, eecatalog, who discusses heterogeneous computing, and not only how it can be applied, but what is needed for effective execution.

    After seven years, start-up Soft Machines was ready to unveil its VISC (Variable Instruction Set Computing) CPU cores at Cadence’s Front-End Design Summit. Richard Goering unpicks the innovation that could revive performance/W scaling to boost power and performance.

A heart-warming case study from Artie Beavis, Atmel, shows how ARM-based wearable technology is being put to good use with the UnaliWear Kanega wristwatch for the elderly.

Patient Michael Posner is not going to let something like gate counts flummox him. He explains not only the nature of logic cells – with some help from Xilinx – but also flags up a warning and advice for anyone attempting to map RTL code onto a prototype.

Dark silicon, the part of a device which shuts down to avoid over-heating, is getting darker. Zvi Or-Bach, MonolithIC 3D, believes his company’s architecture can throw light on a semiconductor industry that is on the brink of being derailed by the creep of dark silicon.

    By Caroline Hayes, Senior Editor.

    Blog Review – Monday, February 16, 2015

    Monday, February 16th, 2015

Repeating last year’s project, Gagan Luthra, ARM, explains this year’s 100 projects in 100 days for the new Bluetooth Low Energy (BLE) Pioneer Kit, with Cypress’s PSoC 4 BLE, an ARM Cortex-M0 CPU with a BLE radio.

Steve Schulz visited Cadence and entranced Brian Fuller with his ideas for standards for the IoT, the evolution of the SoC into a System on Stack, and the design gaps that lie en route.

    Fighting the Nucleus corner, Chris Ciufo, ee catalog, rejoices in the news that Mentor Graphics is repositioning the RTOS for Industrie 4.0, for factory automation, and standing up to the tough guys in the EDA playground.

    Following the news that Intel is to buy Lantiq, Martin Bijman, Chipworks, looks into what the acquisition will bring to the Intel stable, presenting some interesting statistics.

    Maybe a little one-sided, but the video presented by Bernie DeLay, Synopsys, is informative about how VIP architecture accelerates memory debug for simultaneous visualization.

The normally relaxed and affable Warren Savage, IPextreme, is getting hot under the collar at the thought of others ‘borrowing’, or plain old plagiarism, as he puts it in his post. The origins of the material will be argued (the offending article has been removed from the publication’s site), but Savage uses the incident to make a distinction between articles with a back story and ‘traditional’ tech journalism.

To end on a light note, Raj Suri, Intel, presents a list compiled by colleagues of employees that look like celebrities. Nicolas Cage, Dame Helen Mirren and Michael J Fox doppelgangers are exposed.

    Caroline Hayes, Senior Editor

    A Prototyping with FPGA Approach

    Thursday, February 12th, 2015

    Frank Schirrmeister, Group Director for Product Marketing of the System Development Suite, Cadence.

In general, the industry is experiencing the need for what has come to be called the “shift left” in the design flow, as shown in Figure 1. Complex hardware stacks, starting from IP assembled into sub-systems, assembled into Systems on Chips (SoCs) and eventually integrated into systems, are combined with complex software stacks, integrating bare-metal software and drivers with operating systems, middleware and eventually the end applications that determine the user experience.

From a chip perspective, about 60% into a project three main issues have to be resolved. First, the error rate in the hardware has to be low enough that design teams are confident enough to commit to tapeout. Second, the chip has to be validated sufficiently within its environment to be sure that it works in the system. Third, and perhaps most challenging, significant portions of the software have to be brought up to be confident that software/hardware interactions work correctly. In short, hardware verification, system validation and software development have to be performed as early as possible, requiring a “shift left” of these development tasks.

    Figure 1: A Hardware/Software Development Flow.

Prototyping today happens at two abstraction levels – using transaction-level models (TLM) and register-transfer level (RTL) models – and relies on five basic engines.

• Virtual prototyping based on TLM models can happen earliest in the design flow, based on specifications, and works well for software development, but it falls short when more detailed hardware models are required and is hampered by model availability and the cost and effort of model creation.
• RTL simulation – which today is usually integrated with SystemC-based capabilities for TLM execution – allows detailed hardware execution but is limited in speed to the low kHz or even Hz range, and as such is not suitable for software execution that may require billions of cycles just to boot an operating system (see the back-of-the-envelope estimate after this list). Hardware-assisted techniques come to the rescue.
• Emulation is used for both hardware verification and lower-level software development, as speeds can reach the MHz domain. Emulation is separated into processor-based and FPGA-based emulation, the former allowing excellent at-speed debug and fast bring-up times because long FPGA routing runs are avoided, the latter excelling at speed once the design has been brought up.
• FPGA-based prototyping is typically limited in capacity and can take months to bring up, due to modifications required to the design itself and the verification those modifications then require. The benefit, once brought up, is a speed range in the tens of MHz that is sufficient for software development.
• The actual prototype silicon is the fifth engine used for bring-up. Post-silicon debug and test techniques are finding their way into pre-silicon given the ongoing shift left. Using software for verification holds the promise of better re-using verification across the five engines, all the way into post-silicon.
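To put the engine speeds above in perspective, here is a rough back-of-the-envelope comparison. The figures (an operating system boot of roughly a billion cycles, RTL simulation at 1 kHz, an FPGA prototype at 10 MHz) are illustrative values consistent with the ranges quoted in the list, not measured data:

```latex
\[
\underbrace{\frac{10^{9}\ \text{cycles}}{10^{3}\ \text{Hz}}}_{\text{RTL simulation}} = 10^{6}\ \text{s} \approx 11.6\ \text{days}
\qquad\text{versus}\qquad
\underbrace{\frac{10^{9}\ \text{cycles}}{10^{7}\ \text{Hz}}}_{\text{FPGA prototype}} = 100\ \text{s} \approx 1.7\ \text{minutes}
\]
```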

    Advantages Of Using FPGAs For ASIC Prototyping

FPGA providers have been pursuing aggressive roadmaps. Single FPGA devices now nominally hold up to 20 million ASIC gates; with utilization rates of 60%, 8-FPGA systems promise to hold almost 100 million gates (MG), which makes them large enough for a fair share of design starts out there. The key advantage of FPGA-based systems is the speed that can be achieved, and the main volume of FPGA-based prototypes today is shipped to enable software development and sub-system validation. They are also relatively portable, so we have seen customers use FPGA-based prototypes successfully to deliver pre-silicon representations of the design to their own customers for demonstration and software development purposes.
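The capacity claim follows directly from those numbers:

```latex
\[
8\ \text{FPGAs} \times 20\ \text{million gates} \times 0.6\ \text{utilization} = 96\ \text{million gates} \approx 100\ \text{MG}
\]
```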

    Factors That May Limit The Growth Of The Technique

There certainly is a fair amount of growth out there for FPGA-based prototyping, but the challenge of long bring-up times often defeats the purpose of early availability. For complex designs requiring careful partitioning and timing optimization, we have seen cases in which the FPGA-based prototype did not become available until silicon was back. Another limitation is that debug insight into the hardware is very limited compared to simulation and processor-based emulation. While hardware probes can be inserted, they reduce the speed of execution because of data logging. Consequently, FPGA-based prototypes find most adoption in the later stages of projects, when the RTL has already become stable and the focus can shift to software development.

    The Future For Such Techniques

All prototyping techniques are increasingly used in combination. Emulation and RTL simulation are combined to achieve “simulation acceleration”. Emulation and transaction-level models with Fast Models from ARM are combined to accelerate operating system bring-up and software-driven testing. Emulation and FPGA-based prototyping are combined to pair the fast bring-up of new portions of the design in emulation with the fast execution of stable portions of the design in FPGA-based prototyping. As in the recent introduction of the Cadence Protium FPGA-based prototyping platform, processor-based emulation and FPGA-based prototyping can share the same front end to significantly accelerate FPGA-based prototyping bring-up. At this point all major EDA vendors have announced a suite of connected engines (Cadence in May 2011, Mentor in March 2014 and Synopsys in September 2014). It will be interesting to see how the continuum of engines grows further together to enable the most efficient prototyping at different stages of a development project.

    Blog Review – Monday, February 09, 2015

    Monday, February 9th, 2015

    Arthur C Clarke interview; Mastering Zynq; The HAPS and the HAPS-nots; Love thy customer; What designers want; The butterfly effect for debug

A nostalgic look at an AT&T and MIT conference by Artie Beavis, ARM, includes a great video interview with Arthur C Clarke. It is fascinating to see the man himself envisage mobile connectivity and ‘devices that send information to friends’, the exchange of pictorial information and data, and the ‘completely mobile’ telephone, as well as looking forward to receiving signals from outer space.

A video tutorial presented by Dr Mohammad S Sadri, of the Microelectronic Systems Design Research Group at Technische Universität Kaiserslautern, Germany, shows viewers how to create AXI-based peripherals in the Xilinx Zynq SoC programmable logic. Steve Leibson, Xilinx, posts the video. Dr Sadri may appear a little awkward with the camera rolling, but he clearly knows his stuff and the 23-minute video is informative.

Showing a little location envy, Michael Posner, Synopsys, visited his Californian counterparts and, in between checking out the gym and cafeteria facilities, caught up on FPGA-based prototype debug and HAPS.

    Good news from the Semiconductor Industry Association as Falan Yinug reports on record-breaking sales in 2014 and quarterly growth. Who bought what makes interesting – and reassuring – reading.

    Although hit with the love bug, McKenzie Mortensen, IPextreme, does not let her heart rule her head when it comes to customer relations. She presents the company’s good (customer) relationship guide in this blog.

A teaser of survey results from Neha Mittal, Arrow Devices, shows what design and verification engineers want. Although the survey is open to more respondents until February 15, the results received so far are a mix of the predictable and some surprises, all with the option to see disaggregated, or specific, responses for each question.

    From bugs to butterflies, Doug Koslow, Cadence, considers the butterfly effect in verification and presents some sound information and graphics to show the benefits of the company’s SimVision.

    Caroline Hayes, Senior Editor

    Blog Review – Monday February 2, 2015

    Monday, February 2nd, 2015

    2015’s must-have – a personal robot, Thumbs up for IP access, USB 3.1 has landed, Transaction recap, New talent required, Structuring medical devices, MEMS sensors webinar

    Re-living a youth spent watching TV cartoons, Brad Nemire, ARM, marvels at the Personal Robot created by Robotbase. It uses an ARM-based board powered by a Quad-core Qualcomm Krait CPU, so he interviewed the creator, Duy Huynh, Founder and CEO of Robotbase and found out more about how it was conceived and executed. I think I can guess what’s on Nemire’s Christmas list already.

    Getting a handle on security access to big data, Michael Ford, Mentor Graphics, suggests a solution to accessing technology IP or patented technology without resorting to extreme measures shown in films and TV.

    Celebrating the integration of USB 3.1 in the Nokia N1 tablet and other, upcoming products, Eric Huang, Synopsys, ties this news in with access to “the best USB 3.1 webinar in the universe”, which – no great surprise – is hosted by Synopsys. He also throws in some terrible jokes – a blog with something for everyone.

    A recap on transaction-based verification is provided by Axel Scherer, Cadence, with the inevitable conclusion that the company’s tools meet the task. The blog’s embedded video is simple, concise and informative and worth a click.

Worried about the lack of new, young engineers entering the semiconductor industry, Kands Manickam, IPextreme, questions the root causes of the stagnation.

    A custom ASIC and ASSP microcontroller combine to create the Struix product, and Jakob Nielsen, ON Semiconductor, explains how this structure can meet medical and healthcare design parameters with a specific sensor interface.

What’s the IoT without MEMS sensors? Tim Menasveta, ARM, shows the way to an informative webinar: Addressing Smart Sensor Design Challenges for SoCs and IoT, hosted in collaboration with Cadence, using its Virtuoso and MEMS Convertor tools and the Cortex-M processors.

    Caroline Hayes, Senior Editor

    Blog Review – Monday, January 26 2015

    Monday, January 26th, 2015

Finding fault tolerances with Cortex-R5; nanotechnology thinks big; Cadence – always talking; mine’s an IEEE on ice; IP modeling

The inherent fault tolerance of ARM’s Cortex-R5 processors is explored and expanded upon by Neil Werdmuller, ARM, in an informative blog. Reading this post, it is evident that it is as much about the tools and ecosystem as the processor technology.

Nanotechnology is a big subject, and Catherine Bolgar, Dassault Systemes, tackles this overview competently, with several relevant links in the post itself.

Harking back to CES, Brian Fuller, Cadence, shares an interesting video from the show, where Ty Kingsmore, Realtek Semiconductor, talks the talk about always-on voice applications and the power cost.

A special nod has to be given to Arthur Marris, Cadence, who travelled to Atlanta for the IEEE 802.3 meeting but managed to sightsee and includes a photo in his post of the vault that holds the recipe for Coca-Cola. He also hints at the ‘secret formula’ for the 2.5 and 5G PHY and automotive proposals for the standard. (Another picture shows delegates’ tables, but there were no iconic bottles to be seen anywhere – missed marketing opportunity?)

In conversation with leading figures in the world of EDA, Gabe Moretti considers the different approaches to IP modeling in today’s SoC designs.

    By Caroline Hayes, Senior Editor.

    The Various Faces of IP Modeling

    Friday, January 23rd, 2015

    Gabe Moretti, Senior Editor

Given their complexity, the vast majority of today’s SoC designs contain a large number of third-party IP components. These can be developed outside the company or by another division of the same company. In either case they present the same type of obstacles to easy integration and require one or more types of models in order to minimize the integration cost in the final design.

Generally one thinks of models when talking about verification, but in fact, as Frank Schirrmeister, Product Marketing Group Director at Cadence, reminded me, there are three major purposes for modeling IP cores. Each purpose requires different models. In fact, Bernard Murphy, Chief Technology Officer at Atrenta, identified even more uses of models during our interview.

    Frank Schirrmeister listed performance analysis, functional verification, and software development support as the three major uses of IP models.

    Performance Analysis

Frank points out that one of the activities performed during this type of analysis is examining the interconnect between the IP and the rest of the system. This activity does not require a complete model of the IP. Cadence’s Interconnect Workbench creates the model of the component interconnect by running different scenarios against the RT-level model of the IP. Clearly a tool like Palladium is used, given the size of the required simulation of an RTL model. So to analyze, for example, an ARM AMBA 8 interconnect, engineers will use simulations representing what the traffic of a peripheral may be and what the typical processor load may be, and apply the resulting behavior models to the details of the interconnect to analyze the performance of the system.
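Interconnect Workbench and Palladium are Cadence’s own tools, so the sketch below does not represent them; it is only a minimal SystemC/TLM-2.0 illustration of the general idea of driving an interconnect or IP model with representative peripheral traffic and accumulating the observed latency. The module name, burst size, transaction count and inter-burst gap are all hypothetical, and in a real testbench the socket would be bound to the fabric or target model under analysis.

```cpp
// Hypothetical traffic-generator initiator (illustrative sketch only).
// In sc_main it would be bound, through its socket, to the interconnect or
// target model whose performance is being analyzed.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <cstdlib>
#include <iostream>

struct TrafficGen : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<TrafficGen> socket;  // toward the fabric model

  SC_CTOR(TrafficGen) : socket("socket") {
    SC_THREAD(run);
  }

  void run() {
    unsigned char buf[64];                                  // assumed 64-byte burst
    sc_core::sc_time total_latency = sc_core::SC_ZERO_TIME;

    for (int i = 0; i < 1000; ++i) {                        // 1000 representative accesses
      tlm::tlm_generic_payload trans;
      trans.set_command((i % 4 == 0) ? tlm::TLM_WRITE_COMMAND
                                     : tlm::TLM_READ_COMMAND);
      trans.set_address((std::rand() % 16) * 64);           // spread over a small region
      trans.set_data_ptr(buf);
      trans.set_data_length(sizeof(buf));
      trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

      sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
      socket->b_transport(trans, delay);                    // blocking transport call
      total_latency += delay;

      wait(sc_core::sc_time(100, sc_core::SC_NS));          // assumed inter-burst gap
    }
    std::cout << "average transport latency: "
              << (total_latency / 1000.0) << std::endl;
  }
};
```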

Drew Wingard, CTO at Sonics, remarked that “From the perspective of modeling on-chip network IP, Sonics separates functional verification from performance verification. The model of on-chip network IP is much more useful in a performance verification environment because in functional verification the network is typically abstracted to its address map. Sonics’ verification engineers develop cycle-accurate SystemC models for all of our IP to enable rapid performance analysis and validation.

    For purposes of SoC performance verification, the on-chip network IP model cannot be a true black box because it is highly configurable. In the performance verification loop, it is very useful to have access to some of the network’s internal observation points. Sonics IP models include published observation points to enable customers to look at, for example, arbitration behaviors and queuing behaviors so they can effectively debug their SoC design.  Sonics also supports the capability to ‘freeze’ the on-chip network IP model which turns it into a configured black box as part of a larger simulation model. This is useful in the case where a semiconductor company wants to distribute a performance model of its chip to a system company for evaluation.”

Bernard Murphy, Chief Technology Officer, Atrenta, noted: “Hierarchical timing modeling is widely used on large designs, but cannot comprehensively cover timing exceptions which may extend beyond the IP. So you have to go back to the implementation model.” Standards, of course, make engineers’ jobs easier. He continued: “SDC for constraints and ILM for timing abstraction are probably largely fine as-is (apart from continuing refinements to deal with shrinking geometries).”

    Functional Verification

Tom De Schutter, Senior Product Marketing Manager, Virtualizer – VDK, Synopsys, said that “the creation of a transaction-level model (TLM) representing commercial IP has become a well-accepted practice. In many cases these transaction-level models are being used as the golden reference for the IP along with a verification test suite based on the model. The test suite and the model are then used to verify the correct functionality of the IP. SystemC TLM-2.0 has become the standard way of creating such models. Most commonly a SystemC TLM-2.0 LT (Loosely Timed) model is created as reference model for the IP, to help pull in software development and to speed up verification in the context of a system.”
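As a concrete illustration of what such a loosely timed reference model can look like, here is a minimal SystemC TLM-2.0 LT sketch: a memory-mapped target plus a tiny initiator that writes a pattern and reads it back. The 1 KB size, 10 ns access latency, addresses and data values are assumptions made for this example, not details taken from any vendor’s IP.

```cpp
// Minimal SystemC TLM-2.0 LT sketch: a loosely timed memory target used as a
// functional reference, plus a small initiator that writes and reads it back.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>
#include <vector>

struct LtMemory : sc_core::sc_module {
  tlm_utils::simple_target_socket<LtMemory> socket;
  std::vector<unsigned char> mem;

  SC_CTOR(LtMemory) : socket("socket"), mem(1024, 0) {       // assumed 1 KB memory
    socket.register_b_transport(this, &LtMemory::b_transport);
  }

  // Loosely timed blocking transport: functional behavior plus an approximate delay.
  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    const sc_dt::uint64 addr = trans.get_address();
    unsigned char* ptr       = trans.get_data_ptr();
    const unsigned len       = trans.get_data_length();

    if (addr + len > mem.size()) {
      trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
      return;
    }
    if (trans.is_read())       std::memcpy(ptr, &mem[addr], len);
    else if (trans.is_write()) std::memcpy(&mem[addr], ptr, len);

    delay += sc_core::sc_time(10, sc_core::SC_NS);           // assumed access latency
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

struct Tester : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<Tester> socket;

  SC_CTOR(Tester) : socket("socket") { SC_THREAD(run); }

  void run() {
    unsigned char data[4] = {0xDE, 0xAD, 0xBE, 0xEF};
    unsigned char readback[4] = {0};
    tlm::tlm_generic_payload trans;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

    trans.set_address(0x40);                                 // arbitrary offset
    trans.set_data_length(4);

    trans.set_command(tlm::TLM_WRITE_COMMAND);               // write the pattern
    trans.set_data_ptr(data);
    socket->b_transport(trans, delay);

    trans.set_command(tlm::TLM_READ_COMMAND);                // read it back
    trans.set_data_ptr(readback);
    socket->b_transport(trans, delay);

    sc_assert(std::memcmp(data, readback, 4) == 0);          // reference behavior holds
  }
};

int sc_main(int, char*[]) {
  LtMemory mem("mem");
  Tester tester("tester");
  tester.socket.bind(mem.socket);                            // initiator -> target binding
  sc_core::sc_start();
  return 0;
}
```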

Frank Schirrmeister noted that verification requires the definition of the IP at an IP-XACT level to drive the different verification scenarios. Cadence’s Interconnect Workbench generates the appropriate RTL models from a description of the architecture of the interconnects.

    IEEE 1685, “Standard for IP-XACT, Standard Structure for Packaging, Integrating and Re-Using IP Within Tool-Flows,” describes an XML Schema for meta-data documenting Intellectual Property (IP) used in the development, implementation and verification of electronic systems and an Application Programming Interface (API) to provide tool access to the meta-data. This schema provides a standard method to document IP that is compatible with automated integration techniques. The API provides a standard method for linking tools into a System Development framework, enabling a more flexible, optimized development environment. Tools compliant with this standard will be able to interpret, configure, integrate and manipulate IP blocks that comply with the proposed IP meta-data description.

David Kelf, Vice President of Marketing at OneSpin Solutions, said: “A key trend for both design and verification IP is the increased configurability required by designers. Many IP vendors have responded to this need through the application of abstraction in their IP models and synthesis to generate the required end code. This, in turn, has increased the use of languages such as SystemC and High Level Synthesis – AdaptIP is an example of a company doing this – that enable a broad range of configuration options as well as tailoring for specific end devices. As this level of configuration increases, together with synthesis, the verification requirements of these models also change. It is vital that the final model to be used matches the original pre-configured source that will have been thoroughly verified by the IP vendor. This in turn drives the use of a range of verification methods, and Equivalency Checking (EC) is a critical technology in this regard. A new breed of EC tools is necessary for this purpose, one that can process multiple languages at higher levels of abstraction and deal with the various synthesis optimizations applied to the block. As such, advanced IP configuration requirements have an effect across many tools and design flows.”

Bernard Murphy pointed out that “Assertions are in a very real sense an abstracted model of an IP. These are quite important in formal analyses and also in quality/coverage analysis at full-chip level. There is the SVA standard for assertions; but beyond that there is a wide range of expressions, from very complex assertions to quite simple ones, with no real bounds on complexity, scope, etc. It may be too early to suggest any additional standards.”

    Software Development

Tom De Schutter pointed out that “As SystemC TLM-2.0 LT has been accepted by IP providers as the standard, it has become a lot easier to assemble systems using models from different sources. The resulting model is called a virtual prototype and enables early software development alongside the hardware design task. Virtual prototypes have also become a way to speed up verification, either of a specific custom IP under test or of an entire system setup. In both scenarios the virtual prototype is used to speed up software execution as part of a so-called software-driven verification effort.

A model is typically provided as a configurable executable, thus avoiding the risk of creating an illegal copy of the IP functionality. The IP vendor can decide the internal visibility and typically limits it to whatever is required to enable software development, which usually means insight into certain registers and memories is provided.”
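As a purely hypothetical sketch of that visibility policy (none of the class, register or method names below come from the article or from any vendor API), an executable IP model might expose a small debug-access interface that answers only for a published set of register offsets, while the rest of its state stays opaque:

```cpp
// Hypothetical illustration of limited model visibility: only registers the
// vendor has published can be inspected from outside the model.
#include <cstdint>
#include <map>
#include <optional>
#include <set>

class IpModel {
public:
  // Published (visible) register offsets: CTRL, STATUS, IRQ_MASK (all made up).
  IpModel() : visible_{0x00, 0x04, 0x10} {}

  // Functional write used internally by the simulation itself.
  void write_reg(std::uint32_t offset, std::uint32_t value) { regs_[offset] = value; }

  // Debug access offered to the integrator: fails for unpublished offsets.
  std::optional<std::uint32_t> debug_read(std::uint32_t offset) const {
    if (visible_.count(offset) == 0) return std::nullopt;   // internal state stays hidden
    auto it = regs_.find(offset);
    return it == regs_.end() ? std::uint32_t{0} : it->second;
  }

private:
  std::set<std::uint32_t>                visible_;  // what the vendor chooses to expose
  std::map<std::uint32_t, std::uint32_t> regs_;     // full internal register file
};
```

In a real virtual prototype the same gatekeeping would more likely sit behind the standard TLM-2.0 debug transport rather than a bespoke method, but the principle is the same: ship the behavior, publish only the selected state.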

Frank Schirrmeister pointed out that these models are hard to create, or, if they exist, they may be hard to get. Pure virtual models like ARM Fast Models connected to TLM models can be used to obtain a fast simulation of a system boot. Hybrid use models can be used by developers of lower-level software, like drivers. To build a software development environment, engineers will use, for example, an ARM Fast Model and plug in the actual RTL connected to a transactor to enable driver development. ARM Fast Models connected with, say, a graphics subsystem running in emulation on a Palladium system is an example of such an environment.

    ARM Fast Models are virtual platforms used mostly by software developers without the need for expensive development boards.  They also comply with the TLM-2.0 interface specification for integration with other components in the system simulation.

    Other Modeling Requirements

Although there are three main modeling requirements, complex IP components require further analysis in order to be used in designs implemented in advanced processes. A discussion with Steve Brown, Product Marketing Director, IP Group at Cadence, covered power analysis requirements. Steve’s observations can be summed up thus: “For power analysis, designers need power consumption information during the IP selection process: how does the IP match the design criteria, and how does the IP differentiate itself from other IP with respect to power use? Here engineers even need SPICE models to understand how I/O signals work. Signal integrity is crucial in integrating the IP into the whole system.”

Bernard Murphy added: “Power intent (UPF) is one component, but what about power estimation? Right now we can only run slow emulations for full-chip implementation, then roll up into a power calculation. Although we have UPF as a standard, estimation is in its early stages. IEEE 1801 (UPF) is working on extensions. Also there are two emerging activities – P2415 and P2416 – working respectively on energy proportionality modeling at the system level and modeling at the chip/IP level.”

    IP Marketplace, a recently introduced web portal from eSilicon, makes power estimation of a particular IP over a range of processes very easy and quick.  “The IP MarketPlace environment helps users avoid complicated paperwork; find which memories will best help meet their chip’s power, performance or area (PPA) targets; and easily isolate key data without navigating convoluted data sheets” said Lisa Minwell, eSilicon’s senior director of IP product marketing.

Brad Griffin, Product Marketing Director, Sigrity Technology at Cadence, talked about the physical problems that can arise during integration, especially when it concerns memories. “PHY and controllers can come from the same vendor or from different ones. The problem is to get the correct signal integrity and power integrity required from a particular PHY. So, for example, a cell phone using an LPDDR4 interface on a 64-bit bus means a lot of simultaneous switching. So IP vendors, including Cadence of course, provide IBIS models. But Cadence goes beyond that. We have created virtual reference designs, and using the Sigrity technology we can simulate and show that we can match the actual reference design. And then the designer can also evaluate types of chip package and choose the correct one. It is important to be able to simulate the chip, the package, and the board together, and Cadence can do that.”

Another problem facing SoC designers is clock domain crossing (CDC). Bernard Murphy noted that: “Full-chip flat CDC has been the standard approach but is very painful on large designs. There is a trend toward hierarchical analysis (just as happened in STA), which requires hierarchical models. There are no standards for CDC. Individual companies have individual approaches; e.g., Atrenta has its own abstraction models. Some SDC standardization around CDC-specific constraints would be welcome, but this area is still evolving rapidly.”

    Conclusion

Although on the surface the problem of providing models for an IP component may appear straightforward and well defined, in practice it is neither well defined nor standardized. Each IP vendor has its own set of deliverable models and often its own formats. The task of companies like Cadence and Synopsys, which sell their own IP and also provide EDA tools to support other IP vendors, is quite complex. Clearly, although some standards development work is ongoing, accommodating present offerings and future requirements under one standard is challenging and will certainly require compromises.
