Part of the Chip Design Magazine Network

Posts Tagged ‘Cadence’


Blog Review – Monday, January 26 2015

Monday, January 26th, 2015

Finding fault tolerances with Cortex-R5; nanotechnology thinks big; Cadence – always talking; mine’s an IEEE on ice; IP modeling

The inherent fault tolerance of ARM’s Cortex-R5 processors is explored and expanded upon by Neil Werdmuller, ARM, in an informative blog. Reading this post, it is evident that it is as much about the tools and ecosystem as the processor technology.

Nanotechnology is a big subject, and Catherine Bolgar, Dassault Systemes, tackles this overview competently, with several relevant links in the post itself.

Harking back to CES, Brian Fuller, Cadence, shares an interesting video from the show, where Ty Kingsmore, Realtek Semiconductor, talks the talk about always-on voice applications and their power cost.

A special nod has to be given to Arthur Marris, Cadence, who travelled to Atlanta for the IEEE 802.3 meeting but managed to sightsee, and includes a photo in his post of the vault that holds the recipe for Coca-Cola. He also hints at the ‘secret formula’ for the 2.5G and 5G PHY and automotive proposals for the standard. (Another picture shows delegates’ tables but there were no iconic bottles to be seen anywhere – missed marketing opportunity?)

In conversation with leading figures in the world of EDA, Gabe Moretti considers the different approaches to IP modeling in today’s SoC designs.

By Caroline Hayes, Senior Editor.

The Various Faces of IP Modeling

Friday, January 23rd, 2015

Gabe Moretti, Senior Editor

Given their complexity, the vast majority of today’s SoC designs contain a high number of third-party IP components. These can be developed outside the company or by another division of the same company. In general they present the same type of obstacle to easy integration, and require one model, or multiple types of models, in order to minimize the integration cost in the final design.

Generally one thinks of models when talking about verification but, in fact, as Frank Schirrmeister, Product Marketing Group Director at Cadence, reminded me, there are three major purposes for modeling IP cores. Each purpose requires different models. Indeed, Bernard Murphy, Chief Technology Officer at Atrenta, identified even more uses of models during our interview.

Frank Schirrmeister listed performance analysis, functional verification, and software development support as the three major uses of IP models.

Performance Analysis

Frank points out that one of the activities performed during this type of analysis is the analysis of the interconnect between the IP and the rest of the system. This activity does not require a complete model of the IP. Cadence’s Interconnect Workbench creates the model of the component interconnect by running different scenarios against the RT-level model of the IP. Clearly a tool like Palladium is used, given the size of the required RTL simulation. To analyze, for example, an ARM AMBA interconnect, engineers will use simulations representing what the traffic of a peripheral may be and what the typical processor load may be, and apply the resulting behavioral models to the details of the interconnect to analyze the performance of the system.
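The scenario-driven flavor of this kind of analysis can be illustrated with a toy model (plain C++, not Cadence’s actual tooling; all structure and numbers are invented for illustration). Masters issue requests at fixed rates, a one-grant-per-cycle arbiter serves them, and the average wait per request stands in for the latency figure a real performance-analysis run would produce:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Each master issues one request every `period` cycles; the fabric grants
// one request per cycle. Queued requests accrue wait cycles.
struct Master {
    uint32_t period;          // cycles between requests
    uint32_t pending = 0;     // requests waiting for a grant
    uint64_t issued = 0;
    uint64_t wait_cycles = 0;
};

// Returns the average wait (in cycles) per request across all masters.
double simulate(std::vector<Master>& masters, uint64_t cycles) {
    for (uint64_t t = 0; t < cycles; ++t) {
        for (auto& m : masters)
            if (t % m.period == 0) { ++m.pending; ++m.issued; }
        bool granted = false;                 // one grant per cycle
        for (auto& m : masters) {
            if (!granted && m.pending > 0) { --m.pending; granted = true; }
            m.wait_cycles += m.pending;       // everyone still queued waits
        }
    }
    uint64_t issued = 0, waited = 0;
    for (auto& m : masters) { issued += m.issued; waited += m.wait_cycles; }
    return issued ? double(waited) / double(issued) : 0.0;
}
```

With a single, lightly loaded master there is no contention and the average wait is zero; adding masters makes the backlog, and hence the latency estimate, grow.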

Drew Wingard, CTO at Sonics remarked that “From the perspective of modeling on-chip network IP, Sonics separates functional verification versus performance verification. The model of on-chip network IP is much more useful in a performance verification environment because in functional verification the network is typically abstracted to its address map. Sonics’ verification engineers develop cycle accurate SystemC models for all of our IP to enable rapid performance analysis and validation.

For purposes of SoC performance verification, the on-chip network IP model cannot be a true black box because it is highly configurable. In the performance verification loop, it is very useful to have access to some of the network’s internal observation points. Sonics IP models include published observation points to enable customers to look at, for example, arbitration behaviors and queuing behaviors so they can effectively debug their SoC design.  Sonics also supports the capability to ‘freeze’ the on-chip network IP model which turns it into a configured black box as part of a larger simulation model. This is useful in the case where a semiconductor company wants to distribute a performance model of its chip to a system company for evaluation.”
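The notion of published observation points on an otherwise opaque model can be sketched as follows (an illustrative C++ sketch, not Sonics’ actual API; the behavior of freeze() here is an assumption based on the description above):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <stdexcept>

// Opaque network-IP model whose arbitration and queuing behavior stays
// visible through published observation points, until it is frozen.
class NetworkModel {
public:
    void push_request(uint32_t id) {
        queue_.push_back(id);
        peak_depth_ = std::max(peak_depth_, queue_.size());
    }
    void grant() {
        if (!queue_.empty()) { queue_.pop_front(); ++grants_; }
    }

    // Published observation points: readable even though internals are hidden.
    uint64_t grants() const        { observe(); return grants_; }
    std::size_t peak_depth() const { observe(); return peak_depth_; }

    // Freezing withdraws the observation points, turning the model into a
    // configured black box, as when shipping a model to a system company.
    void freeze() { frozen_ = true; }

private:
    void observe() const {
        if (frozen_) throw std::runtime_error("model is frozen");
    }
    std::deque<uint32_t> queue_;
    uint64_t grants_ = 0;
    std::size_t peak_depth_ = 0;
    bool frozen_ = false;
};
```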

Bernard Murphy, Chief Technology Officer, Atrenta, noted that: “Hierarchical timing modeling is widely used on large designs, but cannot comprehensively cover timing exceptions which may extend beyond the IP. So you have to go back to the implementation model.” Standards, of course, make engineers’ jobs easier. He continued: “SDC for constraints and ILM for timing abstraction are probably largely fine as-is (apart from continuing refinements to deal with shrinking geometries).”

Functional Verification

Tom De Schutter, Senior Product Marketing Manager, Virtualizer – VDK, Synopsys, said that “the creation of a transaction-level model (TLM) representing commercial IP has become a well-accepted practice. In many cases these transaction-level models are being used as the golden reference for the IP, along with a verification test suite based on the model. The test suite and the model are then used to verify the correct functionality of the IP. SystemC TLM-2.0 has become the standard way of creating such models. Most commonly a SystemC TLM-2.0 LT (Loosely Timed) model is created as the reference model for the IP, to help pull in software development and to speed up verification in the context of a system.”
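As a rough illustration of the loosely timed style De Schutter describes, a transaction completes in a single blocking call that annotates the time it would consume, rather than being clocked cycle by cycle. Here is a pared-down sketch in plain C++ (no SystemC dependency; the names merely echo TLM-2.0 conventions and the latency is invented):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

enum class Command { READ, WRITE };

// Stand-in for tlm::tlm_generic_payload.
struct GenericPayload {
    Command  cmd;
    uint64_t address;
    uint32_t data;
};

// Golden reference model of a trivial register-file IP.
class RegisterModel {
public:
    // b_transport analogue: the whole transaction finishes in one call and
    // adds the time it would have taken to the caller's local clock.
    void b_transport(GenericPayload& trans, uint64_t& delay_ns) {
        if (trans.cmd == Command::WRITE) {
            regs_[trans.address] = trans.data;
        } else {
            auto it = regs_.find(trans.address);
            trans.data = (it != regs_.end()) ? it->second : 0;
        }
        delay_ns += 10;  // assumed fixed access latency
    }
private:
    std::map<uint64_t, uint32_t> regs_;
};
```

Because time is only annotated, a software stack running against such a model executes far faster than against a clocked RTL simulation, which is what makes the LT style attractive for pulling in software development.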

Frank Schirrmeister noted that verification requires the definition of the IP at the IP-XACT level to drive the different verification scenarios. Cadence’s Interconnect Workbench generates the appropriate RTL models from a description of the architecture of the interconnects.

IEEE 1685, “Standard for IP-XACT, Standard Structure for Packaging, Integrating and Re-Using IP Within Tool-Flows,” describes an XML Schema for meta-data documenting Intellectual Property (IP) used in the development, implementation and verification of electronic systems and an Application Programming Interface (API) to provide tool access to the meta-data. This schema provides a standard method to document IP that is compatible with automated integration techniques. The API provides a standard method for linking tools into a System Development framework, enabling a more flexible, optimized development environment. Tools compliant with this standard will be able to interpret, configure, integrate and manipulate IP blocks that comply with the proposed IP meta-data description.
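For flavor, a minimal IP-XACT component description might look like the fragment below (illustrative only; element names follow the IEEE 1685-2014 revision, and the vendor, library and bus names are made up – real descriptions carry far more meta-data):

```xml
<!-- Minimal illustrative IP-XACT fragment; see IEEE 1685 for the full schema -->
<ipxact:component
    xmlns:ipxact="http://www.accellera.org/XMLSchema/IPXACT/1685-2014">
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>peripherals</ipxact:library>
  <ipxact:name>uart</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
  <ipxact:busInterfaces>
    <ipxact:busInterface>
      <ipxact:name>apb_if</ipxact:name>
      <ipxact:busType vendor="example.com" library="busdefs"
                      name="apb" version="1.0"/>
    </ipxact:busInterface>
  </ipxact:busInterfaces>
</ipxact:component>
```

It is this machine-readable packaging that lets a tool discover the block’s bus interfaces and wire it into a flow automatically.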

David Kelf, Vice President of Marketing at OneSpin Solutions, said: “A key trend for both design and verification IP is the increased configurability required by designers. Many IP vendors have responded to this need through the application of abstraction in their IP models and synthesis to generate the required end code. This, in turn, has increased the use of languages such as SystemC and High Level Synthesis – AdaptIP is an example of a company doing this – that enable a broad range of configuration options as well as tailoring for specific end-devices. As this level of configuration increases, together with synthesis, the verification requirements of these models also change. It is vital that the final model to be used matches the original pre-configured source that will have been thoroughly verified by the IP vendor. This in turn drives the use of a range of verification methods, and Equivalency Checking (EC) is a critical technology in this regard. A new breed of EC tools is necessary for this purpose that can process multiple languages at higher levels of abstraction, and deal with various synthesis optimizations applied to the block. As such, advanced IP configuration requirements have an effect across many tools and design flows.”

Bernard Murphy pointed out that “Assertions are in a very real sense an abstracted model of an IP. These are quite important in formal analyses also in quality/coverage analysis at full chip level.  There is the SVA standard for assertions; but beyond that there is a wide range of expressions from very complex assertions to quite simple assertions with no real bounds on complexity, scope etc. It may be too early to suggest any additional standards.”

Software Development

Tom De Schutter pointed out that “As SystemC TLM-2.0 LT has been accepted by IP providers as the standard, it has become a lot easier to assemble systems using models from different sources. The resulting model is called a virtual prototype and enables early software development alongside the hardware design task. Virtual prototypes have also become a way to speed up verification, either of a specific custom IP under test or of an entire system setup. In both scenarios the virtual prototype is used to speed up software execution as part of a so-called software-driven verification effort.

A model is typically provided as a configurable executable, thus avoiding the risk of creating an illegal copy of the IP functionality. The IP vendor can decide the internal visibility, and typically limits it to whatever is required to enable software development, which usually means insight into certain registers and memories.”

Frank Schirrmeister pointed out that these models are hard to create or, if they exist, may be hard to get. Pure virtual models like ARM Fast Models connected to TLM models can be used to obtain a fast simulation of a system boot. Hybrid use models can be used by developers of lower-level software, like drivers. To build a software development environment, engineers will use, for example, an ARM Fast Model and plug in the actual RTL connected to a transactor to enable driver development. An ARM Fast Model connected with, say, a graphics system running in emulation on a Palladium system is an example of such an environment.

ARM Fast Models are virtual platforms used mostly by software developers without the need for expensive development boards.  They also comply with the TLM-2.0 interface specification for integration with other components in the system simulation.

Other Modeling Requirements

Although there are three main modeling requirements, complex IP components require further analysis in order to be used in designs implemented in advanced processes. A discussion with Steve Brown, Product Marketing Director, IP Group at Cadence, covered power analysis requirements. Steve’s observations can be summed up thus: “For power analysis, designers need power consumption information during the IP selection process. How does the IP match the design criteria, and how does it differentiate itself from other IP with respect to power use? Here engineers even need SPICE models to understand how I/O signals work. Signal integrity is crucial in integrating the IP into the whole system.”

Bernard Murphy added: “Power intent (UPF) is one component, but what about power estimation? Right now we can only run slow emulations for full-chip implementation, then roll up into a power calculation. Although we have UPF as a standard, estimation is in its early stages. IEEE 1801 (UPF) is working on extensions. Also there are two emerging activities – P2415 and P2416 – working respectively on energy-proportionality modeling at the system level and modeling at the chip/IP level.”
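To make the power-intent piece concrete, a UPF (IEEE 1801) fragment for a switchable sensor-interface domain might look like this (a minimal sketch only; all domain, net and instance names are invented, and a real flow needs retention and level-shifting strategies too):

```tcl
# Illustrative UPF power-intent fragment (names are made up)
create_power_domain PD_SENSOR -elements {u_sensor_if}

create_supply_port VDD_SNS
create_supply_net  VDD_SNS -domain PD_SENSOR
connect_supply_net VDD_SNS -ports VDD_SNS

# Clamp the domain's outputs to 0 while it is powered down
set_isolation iso_sns -domain PD_SENSOR \
    -isolation_power_net VDD_AON \
    -clamp_value 0 \
    -applies_to outputs
```

Note that this captures only intent – which blocks can be powered off and how their boundaries behave – not an estimate of how much energy they consume, which is exactly the gap Murphy describes.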

IP Marketplace, a recently introduced web portal from eSilicon, makes power estimation of a particular IP over a range of processes very easy and quick.  “The IP MarketPlace environment helps users avoid complicated paperwork; find which memories will best help meet their chip’s power, performance or area (PPA) targets; and easily isolate key data without navigating convoluted data sheets” said Lisa Minwell, eSilicon’s senior director of IP product marketing.

Brad Griffin, Product Marketing Director, Sigrity Technology at Cadence, talked about the physical problems that can arise during integration, especially where memories are concerned. “PHY and controllers can be from the same vendor or from different ones. The problem is to get the correct signal integrity and power integrity required from a particular PHY. So, for example, a cell phone using an LPDDR4 interface on a 64-bit bus means a lot of simultaneous switching. So IP vendors, including Cadence of course, provide IBIS models. But Cadence goes beyond that. We have created virtual reference designs and, using the Sigrity technology, we can simulate and show that we can match the actual reference design. And then the designer can also evaluate types of chip package and choose the correct one. It is important to be able to simulate the chip, the package, and the board together, and Cadence can do that.”

Another problem facing SoC designers is Clock Domain Crossing (CDC). Bernard Murphy noted that: “Full-chip flat CDC has been the standard approach but is very painful on large designs. There is a trend toward hierarchical analysis (just as happened in STA), which requires hierarchical models. There are no standards for CDC. Individual companies have individual approaches, e.g. Atrenta has its own abstraction models. Some SDC standardization around CDC-specific constraints would be welcome, but this area is still evolving rapidly.”

Conclusion

Although on the surface the problem of providing models for an IP component may appear straightforward and well defined, in practice it is neither well defined nor standardized. Each IP vendor has its own set of deliverable models and often its own formats. The task of companies like Cadence and Synopsys, which sell their own IP and also provide EDA tools to support other IP vendors, is quite complex. Clearly, although some standards development work is ongoing, accommodating present offerings and future requirements under one standard is challenging and will certainly require compromises.

Blog Review – Monday, January 19 2015

Monday, January 19th, 2015

Test case for lazybones; Mongoose in space, heads for Pluto; solar tracker design; new age shopping; IoT insight – the real challenge

The size of SoCs, security around EDA tools and the effort needed to test tool issues are all hurdles that can be mounted, asserts Uwe Simm, Cadence. His comprehensive post explains how the Test Case Optimizer (TCO) – a small, generic utility (no special tools or design styles required) – can strip down simulation source files and reduce overall source input data size by over 99%.

After a stellar break, NASA’s New Horizons spacecraft is heading for Pluto. Not only does it carry the ashes of astronomer Clyde Tombaugh, the discoverer of Pluto, it has a Mongoose on board – in the form of a MIPS-based Mongoose-V chip. Alexandru Voica, Imagination, tells us more about the rad-hard device manufactured by Synova.

An interesting project, and a worthy one too, is relayed in the blog post by John McMillan (Mentor Graphics). Cool Earth Solar designs and develops solar products and uses PADS to develop some of the monitoring hardware for the equipment that tracks the sun, and transmits data for the project.

A subject close to my heart, shopping, is explored by David McKinney, Intel, who has a guest blog from Jon Bird, Y&R Labstore. How to harness the data that make up shopping patterns, without freaking out shoppers? A startlingly obvious observation is “Retailers must first and foremost be shopper-centric” but what does that mean in the digital age and the Internet of Things era?

Demonstrating a helpful nature, David Blaza, ARM, points us to a report by McKinsey, about the Internet of Things. As well as Blaza’s observation relating to ARM’s Cortex-M devices on the edge of the IoT and ARM Cortex-A at the hub and gateway level, I was struck by Joep Van Beurden’s observation that the IoT is not about prices or power but connecting the hardware in a smart way to the cloud.

By Caroline Hayes, Senior Editor

Blog Review – Monday, January 12, 2015

Monday, January 12th, 2015

New year resolutions from ARM, IP Extreme; CES highlights from Cadence, Synopsys, ARM partners; Mentor looks back at 2014; Imagination looks ahead

It wouldn’t be a January Blog Review without a mention of resolutions. Jacob Beningo, ARM, is disappointed that DeLoreans and hover boards are not filling the skies as predicted in Back to the Future, but he does believe that 2015 should be the year of sound, embedded software development resolutions.

A challenge is thrown down by McKenzie, IP Extreme, to ensure the company meets its new year resolution to update its blog. If you find that the company has missed posting a blog by midnight Wednesday (Pacific time) you can claim a $100 voucher for a shop or restaurant of your choice.

It wouldn’t be the week after CES if there were no mentions of ‘that show’. Michael Posner, Synopsys, looked beneath the cars, entertainment devices and robots to focus on sensors (and to mention the DesignWare Sensor and Control Subsystem).

Brian Fuller, Cadence, interviews Martin Lund, senior vice president for Cadence’s IP Group, at CES. Lund has some interesting observations about audio and video demos at the show and insight into the role of IP.

ARM was everywhere at CES, and Brad Nemire, ARM, has some great videos on his blog, with demos of partners’ devices, and also a link to a Bloomberg interview with CEO Simon Segars.

International finance was not covered at CES, but the mobile money payment services described in the blog by Catherine Bolgar, Dassault Systemes, have a lot of ‘CES criteria’ – connectivity, innovation and commercial applications – as well as the Vegas connection with cash. It is an enlightening view of how technology can help those deemed too expensive to reach and serve by conventional banking institutions.

Looking back at 2014, Vern Wnek, Mentor, considers Alcatel-Lucent, the overall winner of the longest-running EDA awards, the Technology Leadership Awards. The award-winning project, the 1X100GE packet module, includes 100Gb/s of total processing power and signals operating at 6/12/28GHz.

A world without cables is the vision of Alexandru Voica, Imagination, who checks just how close a cable-free life is; encouraged with some introductions from the company, of course.

By Caroline Hayes, Senior Editor.

Blog Review – Thurs, January 08 2015

Thursday, January 8th, 2015

CES, no I mean CPS; CES 2015, 2016 and beyond; Connected cars at CES; ISO 26262 help; Constraint coding clinic

No doubt anticipating a wearables deluge at CES, Margaret Schmitt, Ansys, cleverly uses this to her advantage and tailors her blog, not to ‘that Vegas show’ but to arguing the point for CPS (Chip Package System) co-analysis for shareable, workable data. She also avoids all mention of CES but reminds readers that the company will be at DesignCon later this month.

This time of year it is always a trial to find decent blog material. If it’s not a review of 2014, it will be a preview of trends at CES, but some bloggers do it well. David Blaza goes behind the glitz and straight to the semiconductor business of CES. He takes the view that looking at devices being launched will reveal more about CES 2016 or 2017 than this week’s show.

Sounding a little world-weary (or is that Vegas-weary?) Dick James and Jim Morrison, ChipWorks, fought the crowds at CES Unveiled, the press preview. Their tech-fatigue is entertaining and they also came up with five top themes. Most you could guess but the connected car is a new addition. It is a theme embraced by Drue Freeman, NXP, which is not surprising as the company is showcasing its RoadLINK secure connected car technology in Vegas this week.

Intel CEO Brian Krzanich delivered a keynote at CES, illustrating how computer and human interactions are vital in this world of mobile computing everywhere. Scott Apeland refers to it in this blog about Intel’s RealSense technology and his enthusiasm knows no bounds. He includes descriptions of application examples and has sympathy for ‘those who haven’t had the good fortune’ to try the technology first hand. All that can be put right at the company’s booth.

This industry is the kind that wants to share and help fellow engineers and Kurt Shuler, Arteris, does just that with a glossary of ISO 26262 abbreviations and acronyms to help those attempting to wade through the functional safety standards.

Another helpful, detailed and timely blog is from Daniel Bayer, Cadence, discussing generative list pseudo-methods in constraints for modeling and debugging. It is timely, as Ethernet-based communication is increasing in popularity and will require a different take on constraint coding.

Blog Review – Monday December 22 2014

Monday, December 22nd, 2014

Women in engineering; Santa’s CFD plan; VIP list; Cadence focus at CES 2015; Microsoft Band teardown; DDR 4 disruption; celebrate energy efficiency

A daughter’s enjoyment in toy trains and train tracks is the source of inspiration for a genuinely concerned blog by Keith Hanna, Mentor. Why aren’t more girls studying engineering? He takes his parental knowledge and knowledge of engineering to ponder the question.

Computational fluid dynamics also provides a back-up plan for Father Christmas – just in case the premier sleigh develops a fault (bug?) on the night of the 24th! Gilles Eggenspieler, Ansys, and helper elves have designed a new sleigh and his blog has the graphics to demonstrate its effectiveness. He has even thoughtfully added in wind shield factor and stealth mode.

Things to remember about memory VIP: the VIP Experts at Synopsys advise of a technical seminar, Strategy to Verify an AXI/ACE Compliant Interconnect (1 of 4) – just in case the Christmas TV schedule lets you down this year.

Looking ahead to the 2015 CES, Jacek Duda, Cadence, gives a glimpse of what Cadence will show in Las Vegas, reflecting the company’s focus on system solutions, including a TIP/DIP combination for mobile devices (and next year’s Christmas presents?).

Tear-downs are always fun, and David Maidment, ARM, takes a look inside a Microsoft Band. He uncovers the treasure trove of an ARM Cortex-M4-based Kinetis K24 microcontroller for wearable devices.

Self-confessed candidate for the naughty list, Nazita Saye, Mentor Graphics, finds an excuse to celebrate the energy saving that electronics devices enjoy with a list of must-haves and a snap of the office Christmas tree.

Double data rate memory is set to turn the industry on its head, predicts Brian Fuller, Cadence. His blog cites Kevin Yee, Cadence product marketing director, and speculates on economics as well as the physics of the memory form.

Merry Christmas, happy new year and keep on blogging!

Blog Review – Monday, December 15 2014

Monday, December 15th, 2014

Rolling up her sleeves and getting down to some hard work – not just words – Carissa Labriola, ARM, opens a promised series of posts with an intelligent and thorough analysis of the Arduino Due, and there is even the chance to win one. This is a refreshingly interactive, focused blog for the engineering community.

It’s coming to the end of the year, so it is only to be expected that there is a blog round-up. Real Intent does not disappoint, and Graham Bell provides his ‘Best of’ with links to blog posts, an interview at TechCon and a survey.

There is a medical feel to the blog by Shelly Stalnake, Mentor Graphics, beginning with a biology textbook image of an organism to lead into an interesting discussion on parasitic extraction. She lists some advice and – more importantly – links to resources to beat the ‘pests’.

Always considerate of his readers, Michael Posner, Synopsys, opens his blog with a warning that it contains technical content. He goes on to unlock the secrets of ASIC clock conversion, referencing Synopsys of course, but also some other sources, to get to grips with this prototyping tool. And in the spirit of Christmas, he also has a giveaway, a signed copy of the FPGA-Based Prototyping Methodology Manual, if you can answer a question about HAPS shipments.

Another list is presented by Steve Carlson, Cadence, but his is no wishlist or ‘best of’ – in fact it’s a worst-of, with the top five issues that can cause mixed-signal verification misery. This blog is one of the liveliest and most colorful this week, with some quirky graphics to accompany the sound advice that he shares on this topic.

Blog Review – Monday December 08 2014

Wednesday, December 10th, 2014

Industry forecasts sustained semi growth; EVs just go on and on; Second-chance webinar; Tickets please; Play time; Missed parade

By Caroline Hayes, Senior Editor

Bringing 2014 to a close on an optimistic note, Falan Yinug, director, Industry Statistics & Economic Policy, Semiconductor Industry Association (SIA), tries to understand the industry’s quirky sense of timing while reporting that the World Semiconductor Trade Statistics (WSTS) program revised its full-year 2014 global semiconductor sales growth forecast to 9% ($333.2 billion in total sales), an increase from the 6.5% it forecast in June. It also forecasts that the positive sales trend will continue, with a 3.4% increase in sales in 2015 ($344.5 billion in total sales) and beyond, reaching $355.3 billion in 2016.

First road rage, now range anxiety. Apparently it is a common ailment for EV (electric vehicle) drivers. John Day, Mentor Graphics, takes heart from a report by IDTechEx which says that a range extender will be fitted to each of the 8 million hybrid cars produced in 2025, and predicts the introduction in 2015 of hybrid EVs with fuel cell range extenders and multi-fuel jet engines to increase driver options.

It’s hardly a stretch to find someone who remembers using public transport before MIFARE ticketing, but Nav Bains, NXP, looks at the next stage: a single, interoperable programming interface that lets commuters tap NFC mobile devices to provide the ticketing service.

More time-warp timings, as Phil Dworsky, ARM, tells of a webinar entitled Avoiding Common Pitfalls in Verifying Cache-Coherent ARM-based Designs, which has been and gone but can be watched again, simply by registering. He even lists the speakers (Neill Mullinger and Tushar Mattu, both Synopsys) and lists what you missed but what you can catch again in the recorded webinar.

Enamoured with e code, Hannes, Cadence, directs people who just don’t get it to the edaplayground website, with links to a video for e-beginners.

Perhaps frustrated that no-one seems to have noticed, Michael Posner, Synopsys, patiently outlines some of his favourite blog posts from the last couple of months. He wants to draw your attention to prototyping in particular (it features heavily in the list) as well as abstract partitioning and the joy of vertical boards.

IoT Cookbook: Analog and Digital Fusion Bus Recipe

Tuesday, December 2nd, 2014

Experts from ARM, Mathworks, Cadence, Synopsys, Analog Devices, Atrenta, Hillcrest Labs and STMicroelectronics cook up ways to integrate analog with IoT buses.

By John Blyler, Editorial Director

Many embedded engineers approach the development of Internet-of-Things (IoT) devices like a cookbook. By following previous embedded recipes, they hope to create new and deliciously innovative applications. While the recipes may be similar, today’s IoT uses a strong concentration of analog, sensor and wireless ingredients. How will these parts combine with the available high-end bus structures like ARM’s AMBA? To find out, “IoT Embedded Systems” talked with the head technical cooks, including Paul Williamson, Senior Marketing Manager, ARM; Rob O’Reilly, Senior Member Technical Staff at Analog Devices; Mladen Nizic, Engineering Director, Mixed Signal Solution, Cadence; Ron Lowman, Strategic Marketing Manager for IoT, Synopsys; Corey Mathis, Industry Marketing Manager – Communications, Electronics and Semiconductors, MathWorks; Daniel Chaitow, Marketing Manager, Hillcrest Labs; Bernard Murphy, CTO, Atrenta; and Sean Newton, Field Applications Engineering Manager, STMicroelectronics. What follows is a portion of their responses. — JB

Key points:

  • System-level design is needed so that the bus interface can control the analog peripheral through a variety of modes and power-efficient scenarios.
  • One industry challenge is to sort the various sensor data streams in sequence, in types, and include the ability to do sample or rate conversion.
  • To ensure the correct sampling of analog sensor signals and the proper timing of all control and data signals, cycle accurate simulations must be performed.
  • Control system and sensor subsystems are needed to help reduce digital bus cycles by tightly integrating the necessary components.
  • Hardware design and software design have inherently different workflows, and as a result, use different design tools and methodologies.
  • For low-power IoT sensors, the analog-digital converter (ADC) power supply must be designed to minimize noise. Attention must also be paid to the routing of analog signals between the sensors and the ADC.
  • Beyond basic sensor interfacing, designers should consider digitally assisted analog (DAA) – digital logic embedded in analog circuitry that functions as a digital signal processor.

Blyler: What challenges do designers face when integrating analog sensor and wireless IP with digital buses like ARM’s AMBA and others?

Williamson (ARM): Designers need to consider system-level performance when designing the interface between the processor core and the analog peripherals. For example a sensor peripheral might be running continuously, providing data to the CPU only when event thresholds are reached. Alternatively the analog sensor may be passing bursts of sampled data to the CPU for processing.  These different scenarios may require that the designer develop a digital interface that offers simple register control, or more advanced memory access. The design of the interface needs to enable control of the peripheral through a broad range of modes and in a manner that optimizes power efficiency at a system and application level.

O’Reilly (Analog Devices): One challenge is ultra-low-power design, to enable management of the overall system power consumption. In IoT systems, typically there is one main SoC connected with multiple sensors running at different Output Data Rates (ODR) using asynchronous clocking. The application processor SoC collects the data from multiple sensors and completes the processing. To keep power consumption low, the SoC generally isn’t active all of the time. The SoC will collect data at certain intervals. To support the needs of sensor fusion it’s necessary that the sensor data includes time information. This highlights the second challenge, the ability to align a variety of different data types in a time sequence required for fusion processing. This raises the question: “How can an entire industry adequately sort the various sensor data streams in sequence, in types, and include the ability to do sample or rate conversion?”
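The time-sequencing problem O’Reilly raises can be sketched in miniature: tag every sample with a timestamp at the source, then merge the streams into one time-ordered sequence before fusion. The C++ below is illustrative only; a real system must also handle clock drift between asynchronous sensor clocks and perform rate conversion between ODRs:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// A sample carries its capture time so streams produced at different
// output data rates can be put back into one time sequence.
struct Sample {
    uint64_t t_us;     // capture time, microseconds
    int      sensor_id;
    double   value;
};

// Sort each stream by timestamp, then merge them in time order.
std::vector<Sample> align(std::vector<Sample> a, std::vector<Sample> b) {
    auto by_time = [](const Sample& x, const Sample& y) { return x.t_us < y.t_us; };
    std::sort(a.begin(), a.end(), by_time);   // streams may arrive unordered
    std::sort(b.begin(), b.end(), by_time);
    std::vector<Sample> out(a.size() + b.size());
    std::merge(a.begin(), a.end(), b.begin(), b.end(), out.begin(), by_time);
    return out;
}
```

Feeding in an accelerometer stream at one rate and a gyroscope stream at another yields a single chronology the fusion algorithm can consume.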

Nizic (Cadence): Typically a sensor will generate a small (low voltage/current) analog signal which needs to be properly conditioned and amplified before being converted to a digital signal sent over a bus to a memory register for further processing by a DSP or a controller. Sometimes, to save area, multiple sensor signals are multiplexed (sampled) to reduce the number of A2D converters.

From the design methodology aspect, the biggest design challenge is verification. To ensure analog sensor signals are sampled correctly and all control and data signals are timed properly, cycle-accurate simulations must be performed. Since these systems now contain analog, in addition to digital and bus protocol verification, a mixed-signal simulation must cover both hardware and software. To effectively apply mixed-signal simulation, designers must model and abstract the behavior of sensors, analog multiplexers, A2D converters and other analog components. On the physical implementation side, buses will require increased routing resources, which in turn means more careful floor-planning and routing of bus and analog signals to keep chip area to a minimum and avoid signal interference.

Lowman (Synopsys): For an IC designer, the digital bus provides a very easy way to snap together an IC by hanging interface controllers such as I2C, SPI and UARTs to connect to sensors and wireless controllers.  It’s also an easy way to hang USB and Ethernet, as well as analog interfaces, memories and processing engines.  However, things are more complicated at the system level. For example, the sensor in a control system helps some actuator know what to do and when to do it.  The challenge is the delay, in bus cycles, from sensing to calculating a response to actually delivering that response, which ultimately limits the control and efficiency of the system.  Examples include motor control, vision systems and power-conversion applications. Ideally, you’d want a sensor and control subsystem optimized for 9D sensor-fusion applications. Such a subsystem significantly reduces cycles spent traveling over a digital bus by essentially removing the bus and tightly integrating the components needed to sense and process the algorithms. This technique will be critical to reducing power and increasing performance in IoT control systems and sensor applications in a deeply embedded world.

Mathis (MathWorks): It is no surprise that mathematical and signal-processing algorithms of increasing complexity are driving many of the innovations in embedded IoT. This trend is partly enabled by the increasing capability of the SoC hardware being deployed for the IoT. These SoCs give embedded engineers greater flexibility in where algorithms get implemented, but that flexibility leads to new questions in early-stage design exploration. Where should the (analog and mixed) signal processing of that data occur? Should it occur in a hardware implementation, which is natively faster but more costly in on-chip resources? Or in software, where inherent latency issues may exist? One key challenge we see is that hardware design and software design have inherently different workflows and, as a result, use different design tools and methodologies. This means SoC architects need to be fluent in both C and HDL, and in the hardware/software co-design environments needed for both. Another key challenge is that this integration further exacerbates the functional, gate- or circuit-level, and final sign-off verification problems that have dogged designers for decades. Interestingly, designers facing either or both of these challenges could benefit significantly from top-down design and verification methodologies. (See last month’s discussion, “Is Hardware Really That Much Different From Software?”)

Chaitow (Hillcrest Labs): In most sensor-based applications, data is ultimately processed in the digital realm, so an analog-to-digital conversion has to occur somewhere in the system before processing. MEMS sensors measure tiny variations in capacitance, and amplification of that signal is necessary to allow sufficient swing to ensure reasonable resolution. Typically the analog-to-digital conversion is performed at the sensor to reduce error in the measurement. Errors arise mainly from noise in the system, though the design of the sensing element and amplifiers also has attributes that contribute to error. For a given sensing system, minimizing noise is therefore paramount. The power supply of the ADC needs to be carefully designed to minimize noise, and the routing of analog signals between the sensors and the ADC requires careful layout. If the ADC is part of an MCU, then the power regulation of the ADC and the isolation of the analog front end from the digital side of the system are vital to an effective sampling system.
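The trade between noise and usable resolution that Chaitow describes is commonly quantified with the standard ideal-ADC signal-to-noise formula, SNR ≈ 6.02·N + 1.76 dB for a full-scale sine input. A small sketch of the formula and its inverse (the effective number of bits, or ENOB, recovered from a measured SNR):

```python
# Standard ideal-ADC quantization-noise relationships. These are textbook
# formulas, not tied to any particular converter.

def ideal_adc_snr_db(bits):
    """Theoretical SNR of an ideal N-bit ADC for a full-scale sine input."""
    return 6.02 * bits + 1.76

def effective_bits(snr_db):
    """Effective number of bits (ENOB) implied by a measured SNR in dB."""
    return (snr_db - 1.76) / 6.02

snr_12bit = ideal_adc_snr_db(12)   # ~74 dB for an ideal 12-bit converter
```

The practical point is the one Chaitow makes: system noise from the supply, layout and amplifiers eats into this ideal figure, so a noisy 16-bit system can easily deliver fewer effective bits than a quiet 12-bit one.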

As always with design, there are many tradeoffs. A given analog MEMS supplier may be able to provide a superior measurement system to one that provides a digital output. By accepting the additional complexity of a mixed-signal system and combining the analog sensor with a capable ADC, an improved measurement system can be built. In addition, if the application requires multiple sensors, using a single external multi-channel ADC with analog sensors can yield a less expensive system, which will be increasingly important as the IoT revolution continues.

Murphy (Atrenta): Aside from the software needs, there are design and integration considerations. On the design side, there is nothing very odd. The sensor needs to be presented to an AMBA fabric as a slave of some variety (e.g., APB or AHB), which means it needs all the digital logic to act as a well-behaved slave (see Figure). It should recognize that it is not guaranteed to be serviced on demand and therefore should support internal buffering (a streaming buffer if it is an output device for audio, video or another real-time signal). Sensors can be power-hungry, so they should support a power-down mode that can be signaled over the bus (as requested by software).
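The internal buffering Murphy describes, for a slave that is not guaranteed to be serviced on demand, amounts to a small FIFO that must cope with overflow. A minimal behavioral sketch; the depth and the drop-oldest overflow policy are illustrative choices, not requirements:

```python
# Sketch of the buffering behind a well-behaved bus slave: samples accumulate
# between bus reads, and the oldest sample is dropped if the buffer overflows.

class SampleBuffer:
    def __init__(self, depth=8):
        self.depth = depth
        self.data = []

    def push(self, sample):
        """Sensor side: a new sample arrives on the sensor's own clock."""
        if len(self.data) == self.depth:
            self.data.pop(0)          # overflow: discard the oldest sample
        self.data.append(sample)

    def pop(self):
        """Bus side: the slave is finally serviced by the bus master."""
        return self.data.pop(0) if self.data else None

buf = SampleBuffer(depth=4)
for s in range(6):                    # six samples arrive before any bus read
    buf.push(s)
first = buf.pop()                     # oldest surviving sample
```

In hardware this would be an RTL FIFO with full/empty flags, and the overflow policy (drop oldest, drop newest, or raise an interrupt) is exactly the kind of decision the digital wrapper logic has to make.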

The implementation side is definitely more interesting. All of that logic is generally bundled with the analog circuitry into one AMS block, and it is usually difficult to pin down a floor-plan outline for such a block until quite close to final layout. This makes full-chip floor planning more challenging because you are connecting to an AMBA switch fabric, which prefers well-constrained interfaces since the switch matrix itself doesn’t constrain layout well on its own. This may lead to a little more iteration of the floor plan than you might otherwise expect.

Beyond basic sensor interfacing, you need to consider digitally assisted analog (DAA). This is when you have digital logic embedded in analog circuitry, functioning as a digital signal processor that performs what is effectively an analog function, but perhaps more flexibly and certainly with more programmability than analog circuitry. Typical applications are beamforming in radio transmission and super-accurate ADCs.
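To give a flavor of the programmability DAA brings, a digital filter with loadable coefficients can stand in for a fixed analog one. The sketch below uses an arbitrary 4-tap moving average purely for illustration; a real DAA block would be RTL in the signal path, not Python:

```python
# Illustrative only: a programmable FIR filter doing a job (low-pass
# smoothing) that a fixed analog RC network might otherwise do.

def fir(samples, coeffs):
    """Direct-form FIR: convolve the input with the coefficient taps."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:            # skip taps before the start of input
                acc += c * samples[n - k]
        out.append(acc)
    return out

taps = [0.25, 0.25, 0.25, 0.25]       # 4-tap moving average (arbitrary)
smoothed = fir([0, 0, 4, 4, 4, 4], taps)
```

Because the taps are just register contents, the same silicon can be retuned in the field, which is the flexibility argument for DAA over a hard-wired analog function.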

Figure: The AMBA Bus SoC Platform is configurable with several peripherals and system functions, e.g., AHB bus(es), APB bus(es), arbiters and decoders. Popular peripherals include RAM controllers, Ethernet, PCI, USB, 1394a, UARTs, PWMs and PIOs. (Courtesy of ARM Community - http://community.arm.com/docs/DOC-3752)

Newton (STMicroelectronics): Integration of devices such as analog sensors and wireless IP (radios) is widespread today via standard digital bus interfaces such as I2C and SPI. Integration of analog IP with a bus – such as ARM’s AMBA – becomes a matter of connecting the relevant buses to the digital registers contained within the IP. This is exactly what happens when you use I2C or SPI to communicate with standalone sensors or a wireless radio, with the low-speed bus interfaces giving external access to the internal registers of the analog IP. The challenge of integration with higher-end busses isn’t so much the bus interface as it is defining and qualifying the resulting SoC: packaging characteristics, the number of GPIOs available, the package size, the type of processing device used (MPU or MCU), internal memory such as flash or SRAM, and of course the power capabilities of the device in question. Does it need very low standby power? Wake capability? Most of these questions are driven by market requirements and capabilities, and must be weighed against the cost and complexity of the integration effort.
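The register-file model Newton describes, in which a low-speed bus exposes the analog IP’s internal registers, can be sketched as follows. The register addresses and values here are hypothetical, loosely styled after common MEMS sensor register maps, and the bus transport itself is abstracted away:

```python
# Sketch of the register file a standalone sensor exposes over an I2C/SPI-style
# interface: the master addresses a register, then reads or writes data bytes.
# All addresses and reset values are hypothetical.

class RegisterMappedSensor:
    """Models the register file behind a low-speed serial bus interface."""
    WHO_AM_I = 0x0F    # read-only identity register (hypothetical)
    CTRL1    = 0x20    # control register: enable, output data rate, etc.
    OUT_X_L  = 0x28    # start of sample output registers

    def __init__(self):
        self.regs = {self.WHO_AM_I: 0x6A, self.CTRL1: 0x00, self.OUT_X_L: 0x00}

    def write(self, addr, value):
        """Bus write transaction: store one byte at a register address."""
        self.regs[addr] = value & 0xFF

    def read(self, addr):
        """Bus read transaction: return the byte at a register address."""
        return self.regs.get(addr, 0x00)

sensor = RegisterMappedSensor()
ident = sensor.read(RegisterMappedSensor.WHO_AM_I)   # identify the device
sensor.write(RegisterMappedSensor.CTRL1, 0x60)       # e.g., enable and set ODR
```

Newton’s point is that hanging the same register file off an AMBA bus instead of I2C or SPI changes little at this level; the hard work is in the packaging, memory and power questions around the resulting SoC.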


Blyler: Thank you.

This article was sponsored by ARM.

ARM and Cortex are registered trademarks of ARM Limited (or its subsidiaries) in the EU and/or elsewhere. mbed is a trademark of ARM Limited (or its subsidiaries) in the EU and/or elsewhere. All rights reserved.

Hot Trends for 2015

Tuesday, December 2nd, 2014

Chi-Ping Hsu, Senior Vice President, Chief Strategy Officer, EDA and Chief of Staff to the CEO at Cadence

The new system-design imperative

We’re at a tipping point in system design. In the past, the consumer hung on every word from technology wizards, looking longingly at what was to come. But today, the consumer calls the shots and drives the pace and specifications of future technology directions. This has fostered, in part, a new breed of system design companies that has taken direct control over the semiconductor content.

These systems companies are reaping business (pricing, availability), technical (broader scope of optimization) and strategic (IP protection, secrecy) benefits.  This is clearly a trend in which the winning systems companies are partaking.

They’re less interested in plucking components from shelves and soldering them to boards and much more interested in conceiving, implementing and verifying their systems holistically, from application software down to chip, board and package. To this end, they are embracing the marriage of EDA and IP as a speedy and efficient means of enabling their system visions. For companies positioned with the proper products and services, the growth opportunities in 2015 are enormous.

The shift left

Time-to-market pressures and system complexity force another reconsideration in how systems are designed. Take verification for example. Systems design companies are increasingly designing at higher levels, which requires understanding and validating software earlier in the process. This has led to the “shift left” phenomenon.

The simple way to think about this trend is that everything that was done “later” in the design flow is now being started “earlier” (e.g., software development begins before hardware is completed).  Another way to visualize this macroscopic change is to think about the familiar system development “V-diagram” (Figure 1 below). The essence of this evolution is the examination of any and all dependencies in the product planning and development process to understand how they can be made to overlap in time.

This overlap creates the complication of “more moving parts” but it also enables co-optimization across domains.  Thus, the right side of the “V” shifts left (Figure 2 below) to form more of an accelerated flow. (Note: for all of the engineers in the room, don’t be too literal or precise; it is meant to be thematic of the trend).

FIGURE 1

Prime examples of the shift left are the efforts in software development that are early enough to contemplate hardware changes (i.e., hardware optimization and hardware dependent software optimization), while at the other end of the spectrum we see early collaboration between the foundry, EDA tool makers and IP suppliers to co-optimize the overall enablement offering to maximize the value proposition of the new node.

A by-product of the early software development is the enablement of software-driven verification methodologies that can be used to verify that the integration of sub-systems does not break the design. Another benefit is that performance and energy can be optimized in the system context with both hardware and software optimizations possible.  And, it is no longer just performance and power – quality, security and safety are also moving to the top level of concerns.

FIGURE 2

Chip-package-board interdependencies

Another design area being revolutionized is packaging. Form factors, price points, performance and power are drivers behind squeezing out new ideas.  The lines between PCB, package, interposer and chip are being blurred.

Design environments that are familiar to the principal engineer creating the system interconnect, whether PCB-, package- or die-centric by nature, provide a cockpit from which the cross-fabric structures can be created and optimized.  Providing all of the environments also means that data sharing between the domains is smooth.  Analysis tools that operate independently of the design environment offer consistent results to all parties incorporating the cross-fabric interface data.  Power and signal integrity in particular are critical analyses for ensuring design tolerances without risking the cost penalties of overdesign.

The rise of mixed-signal design

In general, but especially driven by the rise of Internet of Things (IoT) applications, mixed-signal design has soared in recent years. Some experts estimate that as much as 85% of all designs have at least some mixed-signal elements on board.

Figure 3: IBS Mixed-signal design start forecast (source: IBS)

Being able to leverage high-quality, high-performance mixed-signal IP is a very powerful answer to the complexity of mixed-signal design in advanced nodes. Energy-efficient design features are also pervasive.  Standards support for power-reduction strategies (from multi-supply voltage, voltage/frequency scaling and power shut-down to multi-threshold cells) can be applied across the array of analysis, verification and optimization technologies.

When it comes to verifying these designs, the industry has been a little slower to migrate. The reality is that there is only so much tool and methodology change a design team can digest while it remains immersed in the machine that cranks out new designs.  So the answer has been a step-by-step progression that lends itself to incremental improvement.  “Beginning with the end in mind” has been the mantra of legions of SoC verification teams that start with a sketch of the desired outcome in the planning and management phase at the beginning of the program. The industry best practice is summarized as MD-UVM-MS: metrics-driven unified verification methodology with mixed signal.

Figure 4: Path to MS Verification Greatness
