
Posts Tagged ‘modeling’

Blog Review – Mon. June 23 2014

Monday, June 23rd, 2014

By Caroline Hayes, Senior Editor

The cost of scaling monolithic 3D devices; automating design-to-manufacturing reporting; enterprise-wide dynamic modeling; cellular connectivity vs. Wi-Fi design; and what’s happening in the China fabless semiconductor market.

There is something afoot in monolithic 3D circles, detects Zvi Or-Bach, MonolithIC 3D, as he tracks disquiet about a feasible roadmap for the technology. Costs and scaling are clashing; he illustrates his case with some effective charts, including one from ARM, but ends on an optimistic note for the industry.

At Mentor, Jay Gorajia vents his frustration at the disruption to communication flows between design and manufacturing organizations. He makes an effective case for automation to create consistent DFx reports that are configured specifically for each customer.

Continuing something of a crusade for model-based systems engineering, Todd McDevitt, Ansys, has some sound advice for enterprise-wide dynamic modeling. The checklist has some useful links to webinars and web pages.

An interesting interview with Richard Stamyik, ARM, by ChipDesign’s own John Blyler gets to the root of cellular connectivity for M2M and IoT and how it differs from Wi-Fi embedded design.

If you have the travel bug, and believe it’s a case of “Go east, young man” these days, then Richard Goering, Cadence, advises you not to pack those bags, just yet. He reports from the DAC 2014 China Fabless Semiconductor Panel, relating some challenges to some preconceptions, such as production and consumption in the region and investment.

Maybe Michael Posner saw one too many films on his travels to SNUG in Israel. His blog begins with a picture of Catherine Zeta-Jones and Zorro, the mysterious swordsman-avenger, but quickly moves on to a namesake with a different spelling: the Zoro Hybrid Prototype for early software development. His enthusiasm is infectious (even if the film link is tenuous!), and the content is clearly set out to inform.

The Various Faces of IP Modeling

Friday, January 23rd, 2015

Gabe Moretti, Senior Editor

Given their complexity, the vast majority of today’s SoC designs contain a high number of third-party IP components. These can be developed outside the company or by another division of the same company. Either way, they present the same type of obstacle to easy integration and require one or more types of models in order to minimize the integration cost in the final design.

Generally one thinks of models when talking about verification, but in fact, as Frank Schirrmeister, Product Marketing Group Director at Cadence, reminded me, there are three major purposes for modeling IP cores, and each purpose requires different models. Bernard Murphy, Chief Technology Officer at Atrenta, identified even more uses of models during our interview.

Frank Schirrmeister listed performance analysis, functional verification, and software development support as the three major uses of IP models.

Performance Analysis

Frank points out that one of the activities performed during this type of analysis is examining the interconnect between the IP and the rest of the system, an activity that does not require a complete model of the IP. Cadence’s Interconnect Workbench creates the model of the component interconnect by running different scenarios against the RT-level model of the IP; given the size of an RTL simulation, a tool like Palladium is clearly needed. To analyze an ARM AMBA interconnect, for example, engineers will use simulations representing what the traffic of a peripheral may be and what the typical processor load may be, and apply the resulting behavior models to the details of the interconnect to analyze the performance of the system.
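
As a rough illustration of what such a traffic behavior model can look like, here is a minimal SystemC/TLM-2.0 sketch of an initiator that replays a simple write-burst profile against an interconnect model. The module name, burst size and pacing are illustrative assumptions and do not describe Cadence’s Interconnect Workbench itself.

```cpp
// Minimal sketch of a traffic behavior model (illustrative only):
// a TLM-2.0 initiator that drives a write-burst profile into an
// interconnect model so latency and bandwidth can be observed.
#include <cstdint>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>

struct traffic_gen : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<traffic_gen> socket;

  // Assumed scenario parameters: burst size and inter-arrival gap.
  static constexpr unsigned burst_bytes = 64;
  sc_core::sc_time gap{100, sc_core::SC_NS};

  SC_CTOR(traffic_gen) : socket("socket") { SC_THREAD(run); }

  void run() {
    unsigned char buf[burst_bytes] = {0};
    for (uint64_t addr = 0; addr < 0x1000; addr += burst_bytes) {
      tlm::tlm_generic_payload trans;
      sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
      trans.set_command(tlm::TLM_WRITE_COMMAND);
      trans.set_address(addr);
      trans.set_data_ptr(buf);
      trans.set_data_length(burst_bytes);
      trans.set_streaming_width(burst_bytes);
      trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

      socket->b_transport(trans, delay);  // blocking loosely timed access
      wait(delay + gap);                  // pace transactions per the profile
    }
  }
};
```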

Drew Wingard, CTO at Sonics, remarked that “From the perspective of modeling on-chip network IP, Sonics separates functional verification versus performance verification. The model of on-chip network IP is much more useful in a performance verification environment because in functional verification the network is typically abstracted to its address map. Sonics’ verification engineers develop cycle-accurate SystemC models for all of our IP to enable rapid performance analysis and validation.

“For purposes of SoC performance verification, the on-chip network IP model cannot be a true black box because it is highly configurable. In the performance verification loop, it is very useful to have access to some of the network’s internal observation points. Sonics IP models include published observation points to enable customers to look at, for example, arbitration behaviors and queuing behaviors so they can effectively debug their SoC design. Sonics also supports the capability to ‘freeze’ the on-chip network IP model, which turns it into a configured black box as part of a larger simulation model. This is useful in the case where a semiconductor company wants to distribute a performance model of its chip to a system company for evaluation.”
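
To make the idea of published observation points concrete, the sketch below shows a minimal SystemC performance model that exports a queue-occupancy signal and an arbitration-grant counter while keeping its internals hidden. It is an assumption-laden illustration, not Sonics’ actual IP model; all module and signal names are hypothetical.

```cpp
// Minimal sketch (not Sonics' actual IP model): a performance model that
// publishes internal observation points -- a queue-occupancy signal and an
// arbitration-grant count -- without exposing its implementation.
#include <systemc>

struct router_perf_model : sc_core::sc_module {
  // Published observation points, readable from the surrounding testbench.
  sc_core::sc_signal<unsigned> queue_occupancy;   // current request-queue depth
  sc_core::sc_signal<unsigned> grants_port0;      // arbitration grants to port 0

  SC_CTOR(router_perf_model)
    : queue_occupancy("queue_occupancy"), grants_port0("grants_port0") {
    SC_THREAD(arbitrate);
  }

 private:
  void arbitrate() {
    // Internal behavior stays hidden; only the signals above are exported.
    unsigned grants = 0;
    while (true) {
      wait(sc_core::sc_time(10, sc_core::SC_NS));  // one arbitration cycle
      queue_occupancy.write(pending_requests());
      grants_port0.write(++grants);
    }
  }
  unsigned pending_requests() const { return 0; /* placeholder model */ }
};
```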

Bernard Murphy, Chief Technology Officer at Atrenta, noted that “Hierarchical timing modeling is widely used on large designs, but cannot comprehensively cover timing exceptions, which may extend beyond the IP. So you have to go back to the implementation model.” Standards, of course, make engineers’ jobs easier. He continued: “SDC for constraints and ILM for timing abstraction are probably largely fine as-is (apart from continuing refinements to deal with shrinking geometries).”

Functional Verification

Tom De Schutter, Senior Product Marketing Manager, Virtualizer – VDK, Synopsys, said that “the creation of a transaction-level model (TLM) representing commercial IP has become a well-accepted practice. In many cases these transaction-level models are being used as the golden reference for the IP along with a verification test suite based on the model. The test suite and the model are then used to verify the correct functionality of the IP. SystemC TLM-2.0 has become the standard way of creating such models. Most commonly a SystemC TLM-2.0 LT (Loosely Timed) model is created as reference model for the IP, to help pull in software development and to speed up verification in the context of a system.”
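
A minimal sketch of what such a SystemC TLM-2.0 LT reference model can look like follows: a target socket serving a small register file through b_transport. The register map, module name and 20 ns access delay are assumptions for illustration, not any vendor’s actual model.

```cpp
// Minimal sketch of a TLM-2.0 LT reference model: a target with a small
// register file served through b_transport. Register map and timing are
// illustrative assumptions.
#include <cstdint>
#include <cstring>
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct ip_lt_model : sc_core::sc_module {
  tlm_utils::simple_target_socket<ip_lt_model> socket;
  uint32_t regs[16] = {0};                         // assumed register file

  SC_CTOR(ip_lt_model) : socket("socket") {
    socket.register_b_transport(this, &ip_lt_model::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
    const uint64_t idx = trans.get_address() >> 2;   // word-aligned registers
    if (idx >= 16 || trans.get_data_length() != 4) {
      trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
      return;
    }
    if (trans.is_write())
      std::memcpy(&regs[idx], trans.get_data_ptr(), 4);
    else
      std::memcpy(trans.get_data_ptr(), &regs[idx], 4);

    delay += sc_core::sc_time(20, sc_core::SC_NS);   // loosely timed access cost
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};
```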

Frank Schirrmeister noted that verification requires the definition of the IP at the IP-XACT level to drive the different verification scenarios. Cadence’s Interconnect Workbench generates the appropriate RTL models from a description of the architecture of the interconnects.

IEEE 1685, “Standard for IP-XACT, Standard Structure for Packaging, Integrating and Re-Using IP Within Tool-Flows,” describes an XML Schema for meta-data documenting Intellectual Property (IP) used in the development, implementation and verification of electronic systems and an Application Programming Interface (API) to provide tool access to the meta-data. This schema provides a standard method to document IP that is compatible with automated integration techniques. The API provides a standard method for linking tools into a System Development framework, enabling a more flexible, optimized development environment. Tools compliant with this standard will be able to interpret, configure, integrate and manipulate IP blocks that comply with the proposed IP meta-data description.

David Kelf, Vice President of Marketing at OneSpin Solutions, said: “A key trend for both design and verification IP is the increased configurability required by designers. Many IP vendors have responded to this need through the application of abstraction in their IP models and synthesis to generate the required end code. This, in turn, has increased the use of languages such as SystemC and High-Level Synthesis – AdaptIP is an example of a company doing this – that enables a broad range of configuration options as well as tailoring for specific end devices. As this level of configuration increases, together with synthesis, the verification requirements of these models also change. It is vital that the final model to be used matches the original pre-configured source that will have been thoroughly verified by the IP vendor. This in turn drives the use of a range of verification methods, and Equivalency Checking (EC) is a critical technology in this regard. A new breed of EC tools is necessary for this purpose that can process multiple languages at higher levels of abstraction and deal with various synthesis optimizations applied to the block. As such, advanced IP configuration requirements have an effect across many tools and design flows.”

Bernard Murphy pointed out that “Assertions are in a very real sense an abstracted model of an IP. These are quite important in formal analyses and also in quality/coverage analysis at the full-chip level. There is the SVA standard for assertions, but beyond that there is a wide range of expressions, from very complex assertions to quite simple ones, with no real bounds on complexity, scope, etc. It may be too early to suggest any additional standards.”

Software Development

Tom De Schutter pointed out that “As SystemC TLM-2.0 LT has been accepted by IP providers as the standard, it has become a lot easier to assemble systems using models from different sources. The resulting model is called a virtual prototype and enables early software development alongside the hardware design task. Virtual prototypes have also become a way to speed up verification, either of a specific custom IP under test or of an entire system setup. In both scenarios the virtual prototype is used to speed up software execution as part of a so-called software-driven verification effort.

“A model is typically provided as a configurable executable, thus avoiding the risk of creating an illegal copy of the IP functionality. The IP vendor can decide the internal visibility and typically limits visibility to whatever is required to enable software development, which typically means insight into certain registers and memories is provided.”
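
The visibility-limiting idea can be sketched in plain C++ as follows: the model ships compiled, and a debug accessor serves only a vendor-chosen whitelist of register offsets. The class name, whitelist and register layout are hypothetical, purely to illustrate the packaging approach rather than any vendor’s actual delivery format.

```cpp
// Minimal sketch of restricted model visibility (illustrative only):
// the model is delivered compiled, and only a whitelisted subset of
// registers can be inspected from outside.
#include <cstdint>
#include <set>

class packaged_ip_model {
 public:
  // The vendor decides which offsets software developers may observe.
  bool debug_read(uint32_t offset, uint32_t& value) const {
    if (visible_offsets_.count(offset) == 0)
      return false;                       // internal state stays opaque
    value = regs_[offset >> 2];
    return true;
  }

 private:
  std::set<uint32_t> visible_offsets_{0x00, 0x04, 0x10};  // assumed whitelist
  uint32_t regs_[64] = {0};               // internal implementation detail
};
```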

Frank Schirrmeister pointed out that these models are hard to create, or, if they exist, may be hard to get. Pure virtual models like ARM Fast Models connected to TLM models can be used to obtain a fast simulation of a system boot. Hybrid use models can be used by developers of lower-level software, like drivers. To build a software development environment, engineers will use, for example, an ARM Fast Model and plug in the actual RTL connected to a transactor to enable driver development. ARM Fast Models connected with, say, a graphics system running in emulation on a Palladium system are an example of such an environment.

ARM Fast Models are virtual platforms used mostly by software developers without the need for expensive development boards.  They also comply with the TLM-2.0 interface specification for integration with other components in the system simulation.

Other Modeling Requirements

Although there are three main modeling requirements, complex IP components require further analysis in order to be used in designs implemented in advanced processes. A discussion with Steve Brown, Product Marketing Director, IP Group at Cadence, covered power analysis requirements. Steve’s observations can be summed up thus: “For power analysis, designers need power consumption information during the IP selection process: how does the IP match the design criteria, and how does it differentiate itself from other IP with respect to power use? Here engineers even need SPICE models to understand how I/O signals work. Signal integrity is crucial in integrating the IP into the whole system.”

Bernard Murphy added: “Power intent (UPF) is one component, but what about power estimation? Right now we can only run slow emulations for full-chip implementation, then roll up into a power calculation. Although we have UPF as a standard, estimation is in its early stages. IEEE 1801 (UPF) is working on extensions. Also there are two emerging activities – P2415 and P2416 – working respectively on energy proportionality modeling at the system level and modeling at the chip/IP level.”

IP MarketPlace, a recently introduced web portal from eSilicon, makes power estimation of a particular IP over a range of processes quick and easy. “The IP MarketPlace environment helps users avoid complicated paperwork; find which memories will best help meet their chip’s power, performance or area (PPA) targets; and easily isolate key data without navigating convoluted data sheets,” said Lisa Minwell, eSilicon’s senior director of IP product marketing.

Brad Griffin, Product Marketing Director, Sigrity Technology at Cadence, talked about the physical problems that can arise during integration, especially where memories are concerned. “PHY and controllers can be from either the same vendor or from different ones. The problem is to get the correct signal integrity and power integrity required from a particular PHY. So, for example, a cell phone using an LPDDR4 interface on a 64-bit bus means a lot of simultaneous switching. So IP vendors, including Cadence of course, provide IBIS models. But Cadence goes beyond that. We have created virtual reference designs, and using the Sigrity technology we can simulate and show that we can match the actual reference design. And then the designer can also evaluate types of chip package and choose the correct one. It is important to be able to simulate the chip, the package, and the board together, and Cadence can do that.”

Another problem facing SoC designers is Clock Domain Crossing (CDC). Bernard Murphy noted that “Full-chip flat CDC has been the standard approach but is very painful on large designs. There is a trend toward hierarchical analysis (just as happened in STA), which requires hierarchical models. There are no standards for CDC. Individual companies have individual approaches; e.g., Atrenta has its own abstraction models. Some SDC standardization around CDC-specific constraints would be welcome, but this area is still evolving rapidly.”

Conclusion

Although on the surface the problem of providing models for an IP component may appear straightforward and well defined, in practice it is neither well defined nor standardized. Each IP vendor has its own set of deliverable models and often its own formats. The task of companies like Cadence and Synopsys, which sell their own IP and also provide EDA tools to support other IP vendors, is quite complex. Clearly, although some standards development work is ongoing, accommodating present offerings and future requirements under one standard is challenging and will certainly require compromises.

What Powers the IoT?

Wednesday, October 16th, 2013

By Stephan Ohr, Gartner

Powering the Internet of Things (IoT) is a special challenge, says Gartner analyst Stephan Ohr, especially for the wireless sensor nodes (WSNs) that must collect and report data on their environmental states (temperature, pressure, humidity, vibration and the like). While the majority of WSNs will harness nearby power sources and batteries, as many as 10% of sensor nodes will need to be entirely self-powering. Often located in places where it is difficult or impossible to replace batteries, these remote sensor nodes must continue to function for 20 years or more.

Two research and development efforts focus on self-powering remote sensor nodes. One effort looks at energy harvesting devices, which gather power from ambient sources. The major types of energy harvesting devices include specialized solar cells, vibration and motion energy harvesters, and devices that take advantage of thermal gradients between warm and cool surfaces. Research and development has concentrated on reducing the size and cost of these devices and making their energy gathering more efficient. But even in their current state of development, these devices could add up to a half-billion dollars in revenue per year within the next five years.

The other R&D effort concentrates on low-power analog semiconductors which will elevate the milli-volt outputs of energy harvesting devices to the levels necessary for powering sensors, microcontrollers, and wireless transceivers. These devices include DC-DC boost converters, sensor signal conditioning amplifiers, and, in some cases, data converter ICs which transform the analog sensor signals to digital patterns the microcontroller can utilize. Broadline analog suppliers like Linear Technology Corp. and Analog Devices have added low-power ICs to their product portfolios. In addition to boosting low-level signals, they use very little power themselves. LTC’s low-power parts, for example, have a quiescent current rating of 1.3 micro-amps. Other companies like Advanced Linear Devices (ALD) have been working on low-threshold electronics for years, and Texas Instruments has a lineup of specialized power management devices for WSN applications.

Ohr’s projections on energy harvesting will be part of his talk on “Powering the Internet of Things” at the Sainte Claire Hotel, San Jose, CA, on October 24, 2013. (Admission is free, but advance registration is required: The Internet of Things – A Disruption and an Evolution.)

Source: Gartner Research (Oct 2013)

Stephan (“Steve”) Ohr is the Research Director for Analog ICs, Sensors and Power Management devices at Gartner, Inc., and focuses on markets that promise semiconductor revenue growth. His recent reports have explored custom power management ICs for smart phones and tablets, the impact of Apple’s choices on the MEMS sensor industry, and a competitive landscape for MEMS sensor suppliers.

Ohr’s engineering degree, a BS in Industrial Engineering, comes from the New Jersey Institute of Technology (the Newark College of Engineering) and his graduate degree, an MA in sociology, comes from Rutgers.

Mixed Signal and Microcontrollers Enable IoT

Wednesday, October 16th, 2013

By John Blyler

The Internet of Things (IoT) has become such a hot topic that many business and technical experts see it as a key enabler for the fourth industrial revolution – following the steam engine, the conveyor belt and the first phase of IT automation technology (McKinsey Report). Still, for all the hype, the IoT concept seems hard to define.

From a technical standpoint, the IoT refers to the information system that uses smart sensors and embedded systems that connect wired or wirelessly via Internet protocols. ARM defines IoT as, “a collection of smart, sensor-enabled physical objects, and the networks, servers and services that interact with them. It is a trend and not a single sector or market.” How do these interpretations relate to the real world?

“There are two ways in which the ‘things’ in the IoT interact with the physical world around us,” explains Diya Soubra, CPU Product Manager for ARM’s Processor Division. “First, they convert physical (analog) data into information, and second, they act in the physical world based on information. An example of the first is a temperature sensor that reports temperature, while an example of the second is a door lock that opens upon receiving a text message.”

For many in the chip design and embedded space, IoT seems like the latest iteration of the computer-communication convergence heralded over the last decade. But this time a new element has been added to the mix, namely sensor systems. This addition means that the role of analog and mixed-signal systems must now extend beyond RF and wireless devices to include smart sensors. This combination of analog mixed-signal, RF-wireless and digital microcontrollers has increased the complexity and confusion among chip, board, package and end-product semiconductor developers.

“Microcontrollers (MCUs) targeting IoT applications are becoming analog-intensive due to sensors, AD converters, RF, Power Management and other analog interfaces and modules that they integrate in addition to digital processor and memory,” says Mladen Nizic, Engineering Director for Mixed Signal Solutions at Cadence Design Systems. “Therefore, challenges and methodology are determined not by the processor, but by what is being integrated around it. This makes it difficult for digital designers to integrate such large amounts of analog. Often, analog or mixed-signal skills need to be in charge of SoC integration, or the digital and analog designer must work very closely to realize the system in silicon.”

The connected devices that make up the IoT must be able to communicate via the Internet. This means the addition of wired or wireless analog functionality to the sensors and devices. But a microcontroller is needed to convert the analog signal to digital and to run the Internet Protocol software stacks. This is why IoT requires a mix of digital (Internet) and analog (physical world) integration.

Team Players?

Just how difficult is it for designers – especially digital designers – to incorporate analog and mixed-signal functionality into their SoCs? Soubra puts it this way (see Figure 1): “In the market, these are two distinct disciplines. Analogue is much harder to design and has its set of custom tools. Digital is easier since it is simpler to design, and it has its own tools. In the past (prior to the emergence of IoT devices), Team A designed the digital part of the system while Team B designed the analog part separately. Then these two distinct subsystems were combined and tested to see which one failed. Upon failure, both teams adjusted their designs and the process was repeated until the system worked as a whole. These different groups using different tools resulted in a lengthy, time-consuming process.”

Contrast that approach with the current design cycle, where the mixed-signal designers (Teams A and B) work together from the start as one team on one project using one tool. All tool vendors have offerings to do this today. New tools allow viewing the digital and analog parts at various levels and allow mixed simulations. Every year, the tools become more sophisticated to handle ever more complex designs.

Figure 1: Concurrent, OA-based mixed-signal implementations. (Courtesy of Cadence)

Simulation and IP

Today, all of the major chip- and board-level EDA and IP tool vendors have modeling and simulation tools that support mixed signal designs directly (see Figure 2).

Figure 2: Block diagram of pressures-temperature control and simulation system. (Courtesy Cadence)

Verification of the growing analog mixed-signal portion of SoCs is leading to better behavioral models, which abstract the analog upward to the register transfer level (RTL). This improvement provides a more consistent handoff between the analog and digital boundaries. Another improvement is the use of real number models (RNMs), which enable the discrete-time transformations needed for pure digital-solver simulation of analog mixed-signal verification. This approach enables faster simulation speeds for event-driven real number models – a benefit over behavioral models like Verilog-A.
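
As an illustration of the event-driven, discrete-time style that real number modeling enables (RNM flows in practice typically use SystemVerilog real types or wreal; the plain-SystemC analogue below is only a sketch under assumed parameters), here is a first-order low-pass filter that passes real values between modules without an analog solver.

```cpp
// Minimal sketch of a real-number-style behavioral model in SystemC:
// an event-driven first-order low-pass filter exchanging real values
// on signals, evaluated at a fixed discrete time step (no analog solver).
#include <systemc>

struct lowpass_rnm : sc_core::sc_module {
  sc_core::sc_in<double>  vin;     // real-valued input sample
  sc_core::sc_out<double> vout;    // filtered output

  double alpha = 0.2;              // assumed filter coefficient
  sc_core::sc_time step{1, sc_core::SC_US};

  SC_CTOR(lowpass_rnm) : vin("vin"), vout("vout") { SC_THREAD(sample); }

  void sample() {
    double y = 0.0;
    while (true) {
      wait(step);                            // discrete-time evaluation
      y += alpha * (vin.read() - y);         // y[n] = y[n-1] + a*(x[n] - y[n-1])
      vout.write(y);
    }
  }
};
```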

AMS simulations are also using assertion techniques to improve verification – especially in interface testing. Another important trend is the use of statistical analysis to handle both the analog nature of mixed signals and the increasing number of operational modes. (See, “Moore’s Cycle, Fifth Horseman, Mixed Signals, and IP Stress”).

Figure: Cadence’s Mladen Nizic (background right) talks about mixed-signal technology with John Blyler. (Photo courtesy of Lani Wong)

For digital designers, there is a lot to learn in the integration of analog systems. However, the availability of ready-to-use analog IP does make it much easier than in the past. That’s one reason why the analog IP market has grown considerably in the last several years and will continue that trend. As reported earlier this year, the wireless chip market will be the leading growth segment for the semiconductor industry in 2013, predicts IHS iSuppli Semiconductor (“Semiconductor Growth Turns Wireless”).

The report states that original-equipment-manufacturer (OEM) spending on semiconductors for wireless applications will rise by 13.5% this year to reach a value of $69.6 billion – up from $62.3 billion in 2012.

The design and development of wireless and cellular chips – part of the IoT connectivity equation – reflects a continuing need for related semiconductor IP. All wireless devices and cell phones rely on RF and analog mixed-signal (AMS) integrated circuits to convert radio signals into digital data, which can be passed to a baseband processor for data processing. That’s why a “wireless” search on the Chipestimate.com website reveals list after list of IP companies providing MIPI controllers, ADCs, DACs, PHY and MAC cores, LNAs, PAs, mixers, PLLs, VCOs, audio/video codecs, Viterbi encoders/decoders, and more.

Real-World Examples

“Many traditional analog parts are adding more intelligence to the design and some of them use microcontrollers to do so,” observes Joseph Yiu, Embedded Technology Specialist at ARM. “One example is an SoC from Analog Devices (the ADuCM360) that contains a 24-bit data acquisition system with multichannel analog-to-digital converters (ADCs), a 32-bit ARM Cortex-M3 processor, and Flash/EE memory. Direct interfacing is provided to external sensors in both wired and battery-powered applications.”

But, as Soubra mentioned earlier, the second way in which the IoT interacts with the physical world is to act on information – in other words, through the use of digital-to-analog converters (DACs). An example of a chip that converts digital signals back to the physical analog world is the SmartBond DA14580. This System-on-Chip (SoC) is used to connect keyboards, mice and remote controls wirelessly to tablets, laptops and smart TVs. It consists of a Bluetooth subsystem, a 32-bit ARM Cortex-M0 microcontroller, an antenna connection and GPIO interfaces.

Challenges Ahead

In addition to tools that simulate analog, mixed-signal and digital designs together, perhaps the next most critical challenge in IoT hardware and software development is the lack of standards.

“The industry needs to converge on the standard(s) for communications in IoT applications to enable information flow among different types of devices,” stressed Wang. “Software will be the key to the flourishing of IoT applications, as demonstrated by ARM’s recent acquisition of Sensinode.” A Finnish software company, Sensinode builds a variation of the Internet Protocols (IP) designed for IoT device connection. Specifically, the company develops to the 6LoWPAN standard, a compression format for IPv6 that is designed for low-power, low-bandwidth wireless links.

If IoT devices are to receive widespread adoption by consumers, then security of the data collected and acted upon by these devices must be robust. (Security will be covered in future articles).

Analog and digital integration, interface and communication standards, and system-level security have always been challenges faced by leading edge designers. The only thing that changes is the increasing complexity of the designs. With the dawning of the IoT, that complexity will spread from every physical world sensor node to every cloud-based server receiving data from or transmitting to that node. Perhaps this complexity spreading will ultimately be the biggest challenge faced by today’s intrepid designers.

The Current State Of Model-Driven Engineering

Wednesday, December 19th, 2012

By John Blyler

Panelists from industry, national laboratories, and the Portland State System Engineering graduate program recently gathered for an open forum on model-driven engineering.

The goal of the forum—which was hosted in collaboration with PSU, the International Council on Systems Engineering (INCOSE) and IEEE—was to connect systems engineering and IT modeling to domain specialties in electronic/electrical, mechanical and software engineering. Panelists included speakers from Mentor Graphics, ANSYS, CH2M Hill, Pacific Northwest National Labs, SAIC, the Veterans Affairs Resource Center and PSU.

To clarify what is meant by systems engineering (SE), Herman Migliore, director of PSU’s SE program, cited Norm Augustine’s often-quoted definition: systems engineering is the practice of creating the means of performing useful functions through the combination of two or more interacting components. This broad definition encompasses all domain-specific SE disciplines, including hardware and software.

Migliore noted that modeling the entire system engineering process, from beginning to end, is made difficult by the challenges of exchanging modeling information between all disciplines. These disciplines include engineering, science, business and even the legal profession, as well as vertical markets such as defense, electronics and software.

“Each discipline and market has its own view of engineering and modeling the system,” said Migliore. The challenge becomes integrating all these differing points of view. That’s why the one model that might unite them all is the Vee-Diagram, which emphasizes the decomposition of the high-level system into component pieces, followed by the integration of the components into a working whole. This approach requires designers to consider test, verification and validation requirements at every phase of the development life cycle.

Next up was James Godfrey from CH2M Hill, a construction management company whose work includes semiconductor equipment programming and deployment. To date, many vendors use UML diagrams to engage customers about needed processes that will then be created in software. Unfortunately, UML doesn’t address the continuous systems needed for continuing improvement, according to Godfrey. SysML does deal with continuous processes, e.g., pumps, fans and moving waste.

During his work at PSU, Godfrey learned about a collaborative system modeling and simulation (M&S) framework developed at Georgia Tech (see diagram below).

Many in the construction management world question the need for models. Godfrey noted that these users wonder why they can’t continue to use Visio to capture typical construction drawings and specifications. This often leads to redundant entry of information, first into static diagrams and then later into dynamic models.

“Reality feeds into models that then can become diagrams,” said Godfrey. All of which should be stored in one data repository.

ANSYS approached the system modeling challenge from a more electronics-oriented point of view. According to Andy Byers, ANSYS started as a structural analysis company serving the nuclear industry, among others. With the acquisition of Ansoft in 2008, ANSYS added electromagnetic modeling. System-level multiphysics and electronic power modeling were added with the purchase of Apache Design a few years later.

Today, most engineers communicate via documents. But many now want models in addition to documentation for the systems they’re building or integrating. Yet models in one engineering domain don’t often translate well to other domains.

“Pictures may be the best way to talk across different engineering disciplines,” observed Byers.

Another factor encouraging model-driven development is that many component companies are now moving up the supply chain (or left-hand, integration side of the Vee-Diagram) to create subsystems, including both embedded hardware and software.

As companies move further up the system supply chain, they are finding that optimization modeling techniques don’t scale across multiple points and physics, noted Byers. Such inefficient optimization leads to overdesign, where designers leave too much margin on the table. This message was a key theme at the recent ANSYS-Apache electronics conference.

But a system-level model must be simple enough for all engineers to use. Today, most analyses are set up and performed by a few experts with PhDs. These experts are becoming a bottleneck, said Byers. “There needs to be a democratization of simulation to the engineering masses.”

Finally, as useful as the Vee-Diagram is for system-level modeling, users must look beyond engineering to other systems, like cost, schedule, and even legal. Focusing on this last point, Byers related a story concerning the exchange of models in the automotive industry between an OEM and its Tier 1 (subsystem) and Tier 2 (component) vendors. In order to avoid intellectual property (IP) and gross negligence issues, the OEM lawyers wanted to embed a legal model into the engineering one. It was unclear how successful this approach was.

Switching perspectives, Ryan Slaugh spoke about the challenges of hardware-software integration from the standpoint of the Pacific Northwest National Labs (PNNL). With its changing mission, PNNL is facing a problem that is commonplace at electronics companies—deciding when research projects are ready for commercialization. “We are trying to cross the chasm of death from R&D to successful product development,” said Slaugh.

To determine the maturity of an R&D project, PNNL uses a Technology Readiness Level (TRL) process. This helps grade projects to tell when they might be ready to become products. For example, a project with high confidence, one that re-uses known good hardware and software, has a low score. Once in the product stage, systems engineering techniques are applied to the life cycle to lower the risk of failure.

How are complex modeling approaches taught to students? What is needed to help college students get used to modeling? These questions were addressed by William “Ike” Eisenhauser, an affiliate professor at PSU and director of…

Simple modeling approaches make great communication tools, especially for non-technical professionals. But in essence, all models are wrong, noted Eisenhauser. “Yet some can be useful.”

Eisenhauser presented a brief overview of different kinds of models:

  1. Simple representation: e.g., solar system ball-and-string model in high school.
  2. Math model: Describes a situation (y=function of x).
  3. State diagram: Moving from math to device representation.
  4. Engineering flowcharts (non-math models): Communicate to others to help make decisions.
  5. Behavior models: More complex, intended to describe why a system behaves as it does. These models help to predict change.
  6. Discrete models: Sometimes mistaken for the actual system. They demonstrate implementation, e.g., balls moving in a physical model.

The greatest challenge with modeling is teaching that models are just tools, not playthings. “Modelers must learn when to stop using models,” cautioned Eisenhauser. “This is a critical lesson for engineers.”

The problem is that students go into modeling because they want to create cool models. It is an analogous problem to physics majors who go into physics to build light sabers, not to help mankind with issues of global importance, said Eisenhauser.

That’s why it is important to teach engineers the objectives of modeling and knowing when to stop.

How does modeling fit into the role of systems engineering? Unfortunately, SE remains a text-intensive discipline. Documentation matters in detailing complex systems. There is an ongoing need to reduce text editing in SE modeling. That’s where system-modeling approaches such as SysML can help.

The educational problem that Eisenhauser and others in PSU’s SE program face is how to provide a useful SE modeling tool. All such tools—even SysML—require more than one 8-week course to learn. Any such tool will need to be taught across several classes.

Is SysML the best tool for SE modeling in a university course? That’s an ongoing challenge in modeling education, namely, how to discern the popular software-of-the-day from truly useful and market-acceptable tools, said Eisenhauser.

The final speaker was Bill Chown, from Mentor Graphics. He spoke about Model Driven Development (MDD), a contemporary approach in which the model is the design and implementation is directly derived from the model.

The challenges facing system designers are well known, from increasing complexity to the convergence of multiple engineering disciplines and the associated problem of optimizing a comprehensive system design.

The design team itself is a dynamic entity, comprised of an architect or systems engineer, the hardware or software component designer and the system integrator who puts it all together, noted Chown. Further, each of these professionals may only be involved in the design for their portion of the life cycle, such as from the concept through design and to domain specific areas.

What types of models are used through the lifecycle? Chown listed three categories:

  1. Platform-independent models, which include function, architecture, interfaces and interactions, and which can demonstrate that requirements are understood and met.
  2. Platform-dependent models, such as hardware architectures with virtual prototypes or software architectures with partitions and data, which can be used to determine resources and performance goals and for hardware-software co-design before physical implementation.
  3. Platform-specific models, for implementation, verification, test and deliverables.

Models can and should drive implementation. For example, software models can generate code once configured to an RTOS. Hardware flows have emerged for C-to-RTL synthesis and UML-to-SystemC simulation and validation. Test languages also can be generated directly from models.

Model-driven design has evolved to cover the full system or product life cycle, from requirements to prototype and then production.

