Posts Tagged ‘Cliosoft’

The EDA Industry Macro Projections for 2016

Monday, January 25th, 2016

Gabe Moretti, Senior Editor

How the EDA industry fares in 2016 will depend in part on the worldwide financial climate. Instability in oil prices, the wars in the Middle East, and the unpredictability of the Chinese market will all affect the EDA industry indirectly. EDA has seen significant growth since 1996, but that growth tracks the overall health of the financial community (see Figure 1).

Figure 1. EDA Quarterly Revenue Report from EDA Consortium

China has been a growing market for EDA tools, and Chinese consumers have purchased a significant number of semiconductor-based products in the recent past. Consumer products demand is slowing, however, and China’s financial health is being questioned. The result is that demand for EDA tools may be lower than in 2015. I have received so many forecasts for 2016 that I have decided to break the subject into two articles. The first article covers the macro aspects, while the second will focus on specific tools and market segments.

Economy and Technology

EDA itself is changing. Here is what Bob Smith, executive director of the EDA Consortium, has to say:

“Cooperation and competition will be the watchwords for 2016 in our industry. The ecosystem and all the players are responsible for driving designs into the semiconductor manufacturing ecosystem. Success is highly dependent on traditional EDA, but we are realizing that there are many other critical components, including semiconductor IP, embedded software and advanced packaging such as 3D-IC. In other words, our industry is a “design ecosystem” feeding the manufacturing sector. The various players in our ecosystem are realizing that we can and should work together to increase the collective growth of our industry. Expect to see industry organizations serving as the intermediaries to bring these various constituents together.”

Bob Smith’s words acknowledge that the term “system” has taken on a new meaning in EDA. We are no longer talking about developing a hardware system, or even a hardware/software system. A system today includes digital and analog hardware, software at both the system and application level, MEMS, third-party IP, and connectivity and co-execution with other systems. EDA vendors are morphing to accommodate these new requirements. Change is difficult because it implies errors as well as successes, and 2016 will be a year of changes.

Lucio Lanza, managing director of Lanza techVentures and a recipient of the Phil Kaufman award, describes it this way:

“We’ve gone from computers talking to each other to an era of PCs connecting people using PCs. Today, the connections of people and devices seem irrelevant. As we move to the Internet of Things, things will get connected to other things and won’t go through people. In fact, I call it the World of Things not IoT and the implications are vast for EDA, the semiconductor industry and society. The EDA community has been the enabler for this connected phenomenon. We now have a rare opportunity to be more creative in our thinking about where the technology is going and how we can assist in getting there in a positive and meaningful way.”

Ranjit Adhikary, director of Marketing at Cliosoft, acknowledges the growing need for tools integration in his remarks:

“The world is currently undergoing a quiet revolution akin to the dot com boom in the late 1990s. There has been a growing effort to slowly but surely provide connectivity between various physical objects and enable them to share and exchange data and manage the devices using smartphones. The labors of these efforts have started to bear fruit and we can see that in the automotive and consumables industries. What this implies from a semiconductor standpoint is that the number of shipments of analog and RF ICs will grow at a remarkable pace and there will be increased efforts from design companies to have digital, analog and RF components in the same SoC. From an EDA standpoint, different players will also collaborate to share the same databases. An example of this would be Keysight Technologies and Cadence Design Systems on OpenAccess libraries. Design companies will seek to improve the design methodologies and increase the use of IPs to ensure a faster turnaround time for SoCs. From an infrastructure standpoint a growing number of design companies will invest more in the design data and IP management to ensure better design collaboration between design teams located at geographically dispersed locations as well as to maximize their resources.”

Michiel Ligthart, president and chief operating officer at Verific Design Automation points to the need to integrate tools from various sources to achieve the most effective design flow:

“One of the more interesting trends Verific has observed over the last five years is the differentiation strategy adopted by a variety of large and small CAD departments. Single-vendor tool flows do not meet all requirements. Instead, IDMs outline their needs and devise their own design and verification flow to improve over their competition. That trend will only become more pronounced in 2016.”

New and Expanding Markets

The focus toward IoT applications has opened up new markets as well as expanded existing ones.  For example the automotive market is looking to new functionalities both in car and car-to-car applications.

Raik Brinkmann, president and chief executive officer at OneSpin Solutions wrote:

“OneSpin Solutions has witnessed the push toward automotive safety for more than two years. Demand will further increase as designers learn how to apply the ISO 26262 standard. I’m not sure that security will come to the forefront in 2016 because there are no standards as yet and ad hoc approaches will dominate. However, the pressure for security standards will be high, just as ISO 26262 was for automotive.”

Michael Buehler-Garcia, Senior Director of Marketing for Mentor Graphics Calibre Design Solutions, notes that many established process nodes, once thought of as obsolete, will instead see increased volume due to the technologies required to implement IoT architectures.

“As cutting-edge process nodes entail ever higher non-recurring engineering (NRE) costs, ‘More than Moore’ technologies are moving from the “press release” stage to broader adoption. One consequence of this adoption has been a renewed interest in more established processes. Historical older process node users, such as analog design, RFCMOS, and microelectromechanical systems (MEMS), are now being joined by silicon photonics, standalone radios, and standalone memory controllers as part of a 3D-IC implementation. In addition, the Internet of Things (IoT) functionality we crave is being driven by a “milli-cents for nano-acres of silicon,” which aligns with the increase in designs targeted for established nodes (130 nm and older). New physical verification techniques developed for advanced nodes can simplify life for design companies working at established nodes by reducing the dependency on human intervention. In 2016, we expect to see more adoption of advanced software solutions such as reliability checking, pattern matching, “smart” fill, advanced extraction solutions, “chip out” package assembly verification, and waiver processing to help IC designers implement more complex designs on established nodes. We also foresee this renewed interest in established nodes driving tighter capacity access, which in turn will drive increased use of design optimization techniques, such as DFM scoring, filling analysis, and critical area analysis, to help maximize the robustness of designs in established nodes.”

Warren Kurisu, Director of Product Management, Mentor Graphics Embedded Systems Division points to wearables, another sector within the IoT market, as an opportunity for expansion.

“We are seeing multiple trends. Wearables are increasing in functionality and complexity enabled by the availability of advanced low-power heterogeneous multicore architectures and the availability of power management tools. The IoT continues to gain momentum as we are now seeing a heavier demand for intelligent, customizable IoT gateways. Further, the emergence of IoT 2.0 has placed a new emphasis on end-to-end security from the cloud and gateway right down to the edge device.”

Power management is one of the areas that has seen significant attention from EDA vendors. But not much has been said about battery technology. Shreefal Mehta, president and CEO of Paper Battery Company, offered the following observations.

“The year 2016 will be the year we see tremendous advances in energy storage and management.   The gap between the rate of growth of our electronic devices and the battery energy that fuels them will increase to a tipping point.   On average, battery energy density has only grown 12% while electronic capabilities have more than doubled annually.  The need for increased energy and power density will be a major trend in 2016.  More energy-efficient processors and sensors will be deployed into the market, requiring smaller, safer, longer-lasting and higher-performing energy sources. Today’s batteries won’t cut it.

Wireless devices and sensors that need pulses of peak power to transmit, compute and/or perform analog functions will continue to create a tension between the need for peak power pulses and long energy cycles. For example, cell phone transmission and Bluetooth peripherals are, as a whole, low power, but the peak power requirements are several orders of magnitude greater than the average power consumption. Hence, new, hybrid power solutions will begin to emerge, especially where energy-efficient delivery is needed with peak power and as the ratio of average to peak grows significantly.

Traditional batteries will continue to improve in offering higher energy at lower prices, but current lithium ion will reach a limit in the balance between energy and power in a single cell with new materials and nanostructure electrodes being needed to provide high power and energy.  This situation is aggravated by the push towards physically smaller form factors where energy and power densities diverge significantly. Current efforts in various companies and universities are promising but will take a few more years to bring to market.

The Supercapacitor market is poised for growth in 2016 with an expected CAGR of 19% through 2020.  Between the need for more efficient form factors, high energy density and peak power performance, a new form of supercapacitors will power the ever increasing demands of portable electronics. The Hybrid supercapacitor is the bridge between the high energy batteries and high power supercapacitors. Because these devices are higher energy than traditional supercapacitors and higher power than batteries they may either be used in conjunction with or completely replace battery systems. Due to the way we are using our smartphones, supercapacitors will find a good use model there as well as applications ranging from transportation to enterprise storage.

Memory in smartphones and tablets containing solid state drives (SSDs) will become more and more accustomed to architectures which manage non-volatile cache in a manner which preserves content in the event of power failure. These devices will use large swaths of video and the media data will be stored on RAM (backed with FLASH) which can allow frequent overwrites in these mobile devices without the wear-out degradation that would significantly reduce the life of the FLASH memory if used for all storage. To meet the data integrity concerns of this shadowed memory, supercapacitors will take a prominent role in supplying bridge power in the event of an energy-depleted battery, thereby adding significant value and performance to mobile entertainment and computing devices.

Finally, safety issues with lithium ion batteries have just become front and center and will continue to plague the industry and manufacturing environments. Flaming hoverboards and shipment and air travel restrictions on lithium batteries render the future of personal battery power questionable. Improved testing and more regulations will come to pass; however, because of the widespread use of battery-powered devices, safety will become a key factor. What we will see in 2016 is the emergence of the hybrid supercapacitor, which offers a high-capacity alternative to lithium batteries in terms of power efficiency. This alternative can operate over a wide temperature range, has a long cycle life and, most importantly, is safe.”

Greg Schmergel, CEO, Founder and President of memory-maker Nantero, Inc., points out that just as new power storage devices will open new opportunities, so will new memory devices.

“With the traditional memories, DRAM and flash, nearing the end of the scaling roadmap, new memories will emerge and change memory from a standard commodity to a potentially powerful competitive advantage.  As an example, NRAM products such as multi-GB high-speed DDR4-compatible nonvolatile standalone memories are already being designed, giving new options to designers who can take advantage of the combination of nonvolatility, high speed, high density and low power.  The emergence of next-generation nonvolatile memory which is faster than flash will enable new and creative systems architectures to be created which will provide substantial customer value.”

Jin Zhang, Vice President of Marketing and Customer Relations at Oski Technology, is of the opinion that the formal methods sector is an excellent prospect for growing the EDA market.

“Formal verification adoption is growing rapidly worldwide and that will continue into 2016. Not surprisingly, the U.S. market leads the way, with China following a close second. Usage is especially apparent in China where a heavy investment has been made in the semiconductor industry, particularly in CPU designs. Many companies are starting to build internal formal groups. Chinese project teams are discovering the benefits of improving design qualities using Formal Sign-off Methodology.”

These market forces are fueling the growth of specific design areas that are supported by EDA tools.  In the companion article some of these areas will be discussed.

Internet of Things (IoT) and EDA

Tuesday, April 8th, 2014

Gabe Moretti, Contributing Editor

A number of companies contributed to this article, in particular Apache Design Solutions, ARM, Atrenta, Breker Verification Systems, Cadence, Cliosoft, Dassault Systemes, Mentor Graphics, Onespin Solutions, Oski Technologies, and Uniquify.

In his keynote speech at the recent CDNLive Silicon Valley 2014 conference, Lip-Bu Tan, Cadence CEO, cited mobility, cloud computing, and Internet of Things as three key growth drivers for the semiconductor industry. He cited industry studies that predict 50 billion devices by 2020.  Of those three, IoT is the latest area attracting much conversation.  Is EDA ready to support its growth?

The consensus is that in many aspects EDA is ready to provide the tools required for IoT implementation. David Flynn, an ARM Fellow, put it best: “For the most part, we believe EDA is ready for IoT. Products for IoT are typically not designed on ‘bleeding-edge’ technology nodes, so implementation can benefit from all the years of development of multi-voltage design techniques applied to mature semiconductor processes.”

Michael Munsey, Director of ENOVIA Semiconductor Strategy at Dassault Systèmes, observed that, conversely, companies that will be designing devices for IoT may not be ready. “Traditional EDA is certainly ready for the core design, verification, and implementation of the devices that will connect to the IoT. Many of the devices that will connect to the IoT will not be the typical designs that are pushing Moore’s Law. Many of the devices may be smaller, lower performance devices that do not necessarily need the latest and greatest process technology. To be cost effective at producing these devices, companies will rely heavily on IP in order to assemble devices quickly in order to meet consumer and market demands. In fact, we may begin to see companies that traditionally have not been silicon developers getting into chip design. We will see an explosive growth in the IP ecosystem of companies producing IP to support these new devices.”

Vic Kulkarni, Senior VP and GM, Apache Design, Inc., put it as follows: “There is nothing “new or different” about the functionality of EDA tools for the IoT applications, and EDA tool providers have to think of this market opportunity from a perspective of mainstream users, newer licensing and pricing model for “mass market”, i.e. low-cost and low-touch technical support, data and IP security and the overall ROI.”

But IoT also requires new approaches to design and offers new challenges.  David Kelf, VP of Marketing at Onespin Solutions provided a picture of what a generalized IoT component architecture is likely to be.

Figure 1: generalized IoT component architecture (courtesy Onespin Solutions)

He went on to state: “The included graphic shows an idealized projection of the main components in a general purpose IoT platform. At a minimum, this platform will include several analog blocks, a processor able to handle protocol stacks for wireless communication and the Internet Protocol (IP). It will need some sensor-required processing, an extremely effective power control solution, and possibly, another capability such as GPS or RFID and even a Long Term Evolution (LTE) 4G Baseband.”

Jin Zhang, Senior Director of Marketing at Oski Technologies observed that “If we parse the definition of IoT, we can identify three key characteristics:

  1. IoT can sense and gather data automatically from the environment
  2. IoT can interact and communicate among themselves and the environment
  3. IoT can process all the data and perform the right action with or without human interaction

These imply that sensors of all kinds for temperature, light, movement and human vitals, fast, stable and extensive communication networks, light-speed processing power and massive data storage devices and centers will become the backbone of this infrastructure.

The realization of IoT relies on the semiconductor industry to create even larger and more complex SoC or Network-on-Chip devices to support all the capabilities. This, in turn, will drive the improvement and development of EDA tools to support the creation, verification and manufacturing of these devices, especially verification where too much time is spent on debugging the design.”

Power Management

IoT will require advanced power management, and EDA companies are addressing the problem. Rob Aitken, also an ARM Fellow, said: “We see an opportunity for dedicated flows around near-threshold and low voltage operation, especially in clock tree synthesis and hold time measurement. There’s also an opportunity for per-chip voltage delivery solutions that determine on a chip-by-chip basis what the ideal operation voltage should be and enable that voltage to be delivered via a regulator, ideally on-chip but possibly off-chip as well. The key is that existing EDA solutions can cope, but better designs can be obtained with improved tools.”
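Aitken’s per-chip voltage idea boils down to picking, for each die, the lowest regulator set point at which that particular chip still meets timing. The following sketch is purely illustrative: the set points, margin numbers, and function names are assumptions, not part of any ARM or EDA vendor flow.

```python
# Hypothetical sketch: choose the lowest safe supply voltage for one chip,
# based on per-die timing margins measured at test (e.g., via on-die monitors).

# Candidate supply voltages the regulator (on- or off-chip) can deliver, in volts.
REGULATOR_SETPOINTS = [0.55, 0.60, 0.65, 0.70, 0.80, 0.90]

def pick_operating_voltage(margin_ps_at_voltage, required_margin_ps=20.0):
    """Return the lowest voltage whose measured worst-case margin meets the target.

    margin_ps_at_voltage: dict mapping voltage -> worst-case timing margin (ps)
    for this particular die.
    """
    for volts in sorted(REGULATOR_SETPOINTS):
        margin = margin_ps_at_voltage.get(volts)
        if margin is not None and margin >= required_margin_ps:
            return volts
    # No low-voltage point is safe for this die: fall back to nominal.
    return max(REGULATOR_SETPOINTS)

# Illustrative data: a strong die can run at 0.60 V, a weaker one needs 0.70 V.
strong_die = {0.55: 5.0, 0.60: 25.0, 0.65: 40.0, 0.70: 60.0, 0.80: 90.0, 0.90: 120.0}
weak_die = {0.55: -10.0, 0.60: 8.0, 0.65: 15.0, 0.70: 30.0, 0.80: 70.0, 0.90: 100.0}

print(pick_operating_voltage(strong_die))  # 0.6
print(pick_operating_voltage(weak_die))    # 0.7
```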

Kamran Shah, Director of Marketing for Embedded Software at Mentor Graphics, noted: “SoC suppliers are investing heavily in introducing power saving features including Dynamic Voltage Frequency Scaling (DVFS), hibernate power saving modes, and peripheral clock gating techniques. Early in the design phase, it’s now possible to use Transaction Level Models (TLM) tools such as Mentor Graphics Vista to iteratively evaluate the impact of hardware and software partitioning, bus implementations, memory control management, and hardware accelerators in order to optimize for power consumption.”

Figure 2: IoT Power Analysis (courtesy of Mentor Graphics)

Bernard Murphy, Chief Technology Officer at Atrenta, pointed out that: “Getting to ultra-low power is going to require a lot of dark silicon, and that will require careful scenario modeling to know when functions can be turned off. I think this is going to drive a need for software-based system power modeling, whether in virtual models, TLM (transaction-level modeling), or emulation. Optimization will also create demand for power sensitivity analysis – which signals / registers most affect power and when. Squeezing out picoAmps will become as common as squeezing out microns, which will stimulate further automation to optimize register and memory gating.”
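As a rough illustration of the power sensitivity analysis Murphy describes, the sketch below ranks registers by an estimated dynamic-power contribution derived from simulation toggle counts. The register names, toggle counts, capacitances, and constants are all made up for the example.

```python
# Hypothetical sketch: rank registers by estimated dynamic power so that
# gating effort goes where it matters most.  P_dyn ~ alpha * C * V^2 * f,
# approximated per register from simulated toggle counts.

VDD = 0.8           # supply voltage in volts (assumed)
F_CLK = 500e6       # clock frequency in Hz (assumed)
SIM_CYCLES = 1_000_000

def dynamic_power(toggles, cap_ff):
    """Estimated average dynamic power (W) of one register over the simulation."""
    alpha = toggles / SIM_CYCLES    # switching activity factor
    cap_farads = cap_ff * 1e-15     # femtofarads -> farads
    return alpha * cap_farads * VDD ** 2 * F_CLK

# (register name, toggle count from simulation, effective load capacitance in fF)
registers = [
    ("fifo_wr_ptr", 820_000, 12.0),
    ("ctrl_state", 45_000, 8.5),
    ("dma_addr", 310_000, 95.0),
    ("irq_mask", 1_200, 6.0),
]

ranked = sorted(
    ((name, dynamic_power(toggles, cap)) for name, toggles, cap in registers),
    key=lambda item: item[1],
    reverse=True,
)

for name, watts in ranked:
    print(f"{name:12s} {watts * 1e6:8.2f} uW")
```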

Verification and IP

Verifying either one component or a subset of connected components will be more challenging. Components in general will have to be designed so that they can be “fixed” remotely, which means either fixing a real bug or downloading an upgrade. Intel is already marketing such a solution, which is not restricted to IoT applications. Also, networks will be heterogeneous by design, significantly complicating verification.

Ranjit Adhikary, Director of Marketing at Cliosoft, noted that “From a SoC designer’s perspective, “Internet of Things” means an increase in configurable mixed-signal designs. Since devices now must have a larger life span, they will need to have a software component associated with them that could be upgraded as the need arises over their life spans. Designs created will have a blend of analog, digital and RF components and designers will use tools from different EDA companies to develop different components of the design. The design flow will increasingly become more complex and the handshake between the digital and analog designers in the course of creating mixed-signal designs has to become better. The emphasis on mixed-signal verification will only increase to ensure all corner cases are caught early on in the design cycle.”

Thomas L. Anderson, Vice President of Marketing at Breker Verification Systems, has a similar perspective but is more pessimistic. He noted that “Many IoT nodes will be located in hard-to-reach places, so replacement or repair will be highly unlikely. Some nodes will support software updates via the wireless network, but this is a risky proposition since there’s not much recourse if something goes wrong. A better approach is a bulletproof SoC whose hardware, software, and combination of the two have been thoroughly verified. This means that the SoC verification team must anticipate, and test for, every possible user scenario that could occur once the node is in operation.”

One solution, according to Mr. Anderson, is “automatic generation of C test cases from graph-based scenario models that capture the design intent and the verification space. These test cases are multi-threaded and multi-processor, running realistic user scenarios based on the functions that will be provided by the IoT nodes containing the SoC. These test cases communicate and synchronize with the UVM verification components (UVCs) in the testbench when data must be sent into the chip or sent out of the chip and compared with expected results.”
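A toy version of the graph-based approach Mr. Anderson outlines is sketched below: scenario steps form a directed graph, every start-to-end path is enumerated, and each path is emitted as a skeleton C test. The graph, step names, and output format are invented for illustration and are not Breker’s actual tooling.

```python
# Hypothetical sketch: enumerate user scenarios from a small directed graph of
# design-intent steps, then emit each path as a skeleton C test case.

# Each key is a scenario step; its value lists the steps that may legally follow.
SCENARIO_GRAPH = {
    "power_on": ["configure_radio", "configure_sensor"],
    "configure_radio": ["read_sensor"],
    "configure_sensor": ["read_sensor"],
    "read_sensor": ["process_data"],
    "process_data": ["transmit", "store_local"],
    "transmit": ["sleep"],
    "store_local": ["sleep"],
    "sleep": [],          # terminal step
}

def all_scenarios(node="power_on", path=()):
    """Yield every path from the start node to a terminal node."""
    path = path + (node,)
    next_steps = SCENARIO_GRAPH[node]
    if not next_steps:
        yield path
        return
    for nxt in next_steps:
        yield from all_scenarios(nxt, path)

def emit_c_test(path, index):
    """Render one scenario path as a skeleton C test function (illustrative)."""
    body = "\n".join(f"    do_{step}();" for step in path)
    return f"void test_scenario_{index}(void) {{\n{body}\n}}\n"

for i, scenario in enumerate(all_scenarios()):
    print(emit_c_test(scenario, i))
```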

Bob Smith, Senior Vice President of Marketing and Business Development at Uniquify, noted that “Connecting the unconnected is no small challenge and requires complex and highly sophisticated SoCs. Yet, at the same time, unit costs must be small so that high volumes can be achieved. Arguably, the most critical IP for these SoCs to operate correctly is the DDR memory subsystem. In fact, it is ubiquitous in SoCs –– where there’s a CPU and the need for more system performance, there’s a memory interface. As a result, it needs to be fast, low power and small to keep costs low. The SoC’s processors spend the majority of cycles reading and writing to DDR memory. This means that all of the components, including the DDR controller, PHY and I/O, need to work flawlessly as does the external DRAM memory device(s). If there’s a problem with the DDR memory subsystem, such as jitter, data/clock skew, setup/hold time or complicated physical implementation issues, the IoT product may work intermittently or not at all. Consequently, system yield and reliability are of utmost concern.”

He went on to say: “The topic may be the Internet of Things and EDA, but the big winners in the race for IoT market share will be providers of all kinds of IP. The IP content of SoC designs often reaches 70% or more, and SoCs are driving IoT, connecting the unconnected. The big three EDA vendors know this, which is why they have gobbled up some of the largest and best known IP providers over the last few years.”
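The DDR concerns Smith lists (jitter, data/clock skew, setup/hold time) ultimately come down to a timing budget in each data eye. The sketch below shows a first-order margin check with made-up numbers; the parameter values are assumptions, not figures from any real DDR controller, PHY, or DRAM datasheet.

```python
# Hypothetical sketch: a first-order timing-budget check for one DDR data lane.

def timing_margin_ps(ui_ps, setup_req_ps, hold_req_ps, jitter_ps, skew_ps):
    """Leftover margin in one data eye after requirements and impairments."""
    return ui_ps - (setup_req_ps + hold_req_ps + jitter_ps + skew_ps)

# DDR3-1600 transfers data at 1600 MT/s, so one unit interval (UI) is 625 ps.
margin = timing_margin_ps(
    ui_ps=625.0,
    setup_req_ps=170.0,   # capture setup requirement (assumed)
    hold_req_ps=170.0,    # hold requirement (assumed)
    jitter_ps=80.0,       # clock/strobe jitter budget (assumed)
    skew_ps=120.0,        # data-to-strobe skew budget (assumed)
)

print(f"eye margin: {margin:.1f} ps")
if margin < 0:
    # This is exactly the failure mode Smith warns about.
    print("interface may work intermittently or not at all")
```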

Conclusion

Things that seem simple often turn out not to be.  Implementing IoT will not be simple because as the implementation goes forward, new and more complex opportunities will present themselves.

Vic Kulkarni said: “I believe that EDA solution providers have to go beyond their “comfort zone” of being hardware design tool providers and participate in the hierarchy of IoT above the “Devices” level, especially in the “Gateway” arena. There will be opportunities for providing big data analytics, security stack, efficient protocol standard between “Gateway” and “Network”, embedded software and so on. We also have to go beyond our traditional customer base to end-market OEMs.”

Frank Schirrmeister, product marketing group director at Cadence, noted that “The value chain for the Internet of Things consists not only of the devices that create data. The IoT also includes the hubs that collect data and upload data to the cloud. Finally, the value chain includes the cloud and the big data analytics it stores.  Wired/wireless communications glues all of these elements together.”

Collaboration Penalty Is Steep For Engineers

Thursday, December 16th, 2010

By John Blyler
System-Level Design sat down to discuss chip-design productivity and quality issues with Srinath Anantharaman, president and founder of Cliosoft; Ronald Collett, president and CEO of Numetrics Management Systems; and Michel Tabusse, CEO and co-founder of Satin Technologies. What follows are excerpts of that discussion:

SLD: How can chip-development teams improve their productivity?
Anantharaman: We all know that adding engineers to a project doesn’t increase productivity linearly. Engineers spend more time communicating and recovering from miscommunication and less time designing. This can be described by the following equation: N engineers + M engineers = (N + M)/CP, where CP is >1.0 and is the ‘collaboration penalty.’ The value of CP increases as the size of the team grows. Engineers need to share data with each other and be aware of changes being made by other engineers working on the project. One of the best investments a team can make to maximize productivity is to deploy tools and techniques to improve communication, institute accountability, track changes, have the ability to recover data easily, and do this without imposing undue burden on the engineers. Software engineers have used software-configuration-management (SCM) systems for decades to help solve some of the problems of concurrent development. Unfortunately, SCM systems don’t address all of the needs of hardware teams. Hardware flows are more complex, using a multitude of legacy tools generating large volumes and variety of data.
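A minimal sketch of the arithmetic behind the collaboration-penalty equation above, using an assumed and purely illustrative model in which CP grows with the number of engineer pairs that must coordinate:

```python
# Hypothetical sketch: N + M engineers deliver only (N + M) / CP engineer-
# equivalents of output, with CP > 1.0 and growing as the team grows.
# The CP model below is illustrative, not a measured law.

def collaboration_penalty(team_size, overhead_per_pair=0.002):
    """Assumed model: every pair of engineers adds a little coordination overhead."""
    pairs = team_size * (team_size - 1) / 2
    return 1.0 + overhead_per_pair * pairs

def effective_engineers(team_size):
    """Engineer-equivalents of useful output for a team of the given size."""
    return team_size / collaboration_penalty(team_size)

for size in (5, 10, 20, 40):
    print(f"{size:3d} engineers -> {effective_engineers(size):5.1f} effective "
          f"(CP = {collaboration_penalty(size):4.2f})")
```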
Collett: Productivity measures output per unit of effort expended to create that output. The development team’s output is the design it hands off for volume manufacture. The effort is the total labor, in person-weeks, that the team expends on the project—from concept to release-to-production. Therefore, maximum productivity occurs when the team expends the minimum effort to develop the chip. To ensure minimum effort, the project plan’s staffing level must assume that the average productivity among all team members will be the highest possible. Best-in-class is a good baseline target. The project manager then allocates only enough staffing necessary to achieve the development throughput—measured as output per week—that’s required to finish the project on time. You can think of it as ‘optimally understaffing’ the project.
A project manager achieves optimal understaffing by asking the following question during the project-planning phase: If my team were to achieve best-in-class productivity, what’s the minimum staffing I need to finish the project on time? The team size should be large enough to finish the project on schedule under the assumption that productivity will be best in class.
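Collett’s “optimally understaffing” question reduces to a one-line calculation: divide the required weekly throughput by best-in-class per-person productivity. The units and numbers in the sketch below are invented for illustration.

```python
# Hypothetical sketch of optimal understaffing: staff the project as if the
# team will hit best-in-class productivity, and no higher than that.
import math

def minimum_staffing(design_complexity_units, best_in_class_rate, weeks_to_deadline):
    """Smallest whole-number team that finishes on time at best-in-class productivity.

    best_in_class_rate: complexity units completed per person-week.
    """
    required_throughput = design_complexity_units / weeks_to_deadline  # units/week
    return math.ceil(required_throughput / best_in_class_rate)

# Illustrative numbers: a 3,000-unit design, 1.5 units per person-week at
# best in class, and a 52-week schedule.
print(minimum_staffing(3_000, 1.5, 52))   # -> 39 engineers
```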
Tabusse: EDA tools—however good they may be to accomplish well-defined tasks and however focused on quality within their own domain—cannot solve the problem of overall quality without negatively impacting productivity. Scrutinizing megabytes of log files generated at each design iteration is hardly compatible with the overall design-schedule constraints. (And if you don’t do it, you risk overlooking something vital.) If design reviews are hurried or unfocused or skipped altogether in a crunch to get something completed—or if handoffs are unreliable—more risk is added. And if the weekly reports that a project manager gets from his team are nothing else than manually filled Excel checklists attached to e-mails, more time will be wasted at the project-management level before the right design decisions can be made. Design-quality monitoring and reporting has now become a discipline in itself. It can make the difference between meeting delivery schedules and not meeting them, between one-pass silicon and expensive respins, and between meeting a market window and missing it entirely.

SLD: Good EDA tools are a prerequisite for any serious chip-design project. Beyond that, what else is needed to ensure a high-quality, yet productive design?
Anantharaman: EDA tools help improve the productivity of an individual designer. However, as the size of the team grows it is of paramount importance to make sure that all of the team members are collaborating efficiently and working in unison. Even a small decrease in collaboration efficiency can erase any gains made by using the best EDA tools. Design teams must make use of automation to communicate effectively and share data and status in a timely and error-proof manner. For example, hardware configuration management (HCM) systems provide a very effective platform to share data and update engineers of status in a timely fashion. Issue tracking, project and schedule management, instant messaging, and web conferencing all are tools that should be deployed to grease the wheels of collaboration. Though tools can help improve productivity, it depends very much on how you use the tools. Keep the tools and processes simple. Otherwise, they can become onerous and engineers won’t follow the processes correctly—or will spend too much time following them.
Collett: Putting manufacturing issues aside, design quality is a function of the amount of verification and validation that a team performs. The more verification and validation, the higher the quality. This means putting the maximum number of resources possible on those tasks including staffing, computing, etc. Teams increase productivity by performing development activities more efficiently. Boosting motivation is among the most effective ways to improve efficiency. High motivation occurs if the team is set up for success. Give the team an extremely aggressive development schedule together with irrefutable facts and data that demonstrate that the schedule is achievable with the resources allocated (provided the team achieves best-in-class productivity).
Tabusse: Good EDA tools—even combined within well-automated flows—aren’t enough to produce quality designs with acceptable productivity. Outside of putting the best EDA in action, the most important issues to address are formalizing design flows, practices, and handoff processes. Equally important is the monitoring of all quality checks and metrics that the company considers critical as well as highlighting potential deviations from approved metrics. Focusing on saving monitoring time and automating reporting activities are also critical to successful design projects. By setting up a non-intrusive monitoring system based on specific company quality checks and metrics, each designer and design team can make decisions based on up-to-the-minute factual design data. They also can save weeks by automatically generating quality reports that highlight action items.

SLD: What’s the root cause of poor schedule predictability on IC-development projects?
Anantharaman: IC development is complex—often with large teams spread across multiple sites. Lack of communication and visibility into current status is a major cause of poor predictability. Changes to specifications or other ECOs may not get communicated to the necessary engineers and come as a surprise at the end. Managers have to rely on the engineers’ status reports, which are often too rosy. Deploying an HCM system effectively can help to avoid surprises. The team is constantly aware of the changes being made and can respond quickly to them. Schedule predictability can be improved by tracking objective metrics. Issue-tracking systems (several commercial and open-source systems available) provide an objective measure of the number and severity of open issues and also a rate of increase/decrease in reported issues.
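As a small illustration of the objective metrics Anantharaman mentions, the sketch below turns weekly opened/closed issue counts into a backlog trend and a crude completion estimate. The counts, starting backlog, and forecast are all hypothetical.

```python
# Hypothetical sketch: is the open-issue backlog shrinking fast enough to
# believe the schedule?  Weekly counts are illustrative.

# (week number, issues opened that week, issues closed that week)
weekly_counts = [
    (1, 42, 18),
    (2, 39, 25),
    (3, 35, 31),
    (4, 28, 36),
    (5, 22, 40),
]

backlog = 120   # open issues at the start of the window (assumed)
for week, opened, closed in weekly_counts:
    backlog += opened - closed
    net = closed - opened
    trend = "burning down" if net > 0 else "still growing"
    print(f"week {week}: backlog {backlog:3d} ({trend}, net {net:+d}/week)")

# A crude forecast from the latest burn-down rate.
latest_net = weekly_counts[-1][2] - weekly_counts[-1][1]
if latest_net > 0:
    print(f"~{backlog / latest_net:.0f} more weeks at the current closure rate")
```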
Collett: By definition, the root cause of poor schedule predictability is a poor estimate of the time required to design the chip. A poor estimate means that the project was staffed incorrectly, given the design’s complexity and the time-to-market constraint imposed. Complexity includes not only the design’s intrinsic complexity, but also the stochastic nature of IC development, which introduces what I would call ‘stochastic complexity.’ Examples include spec changes, project-management issues, third-party and internal IP problems, EDA and library issues, and resource-management issues. Interestingly, both intrinsic and stochastic complexity can be accurately and reliably modeled, enabling accurate estimates of resource requirements. In a nutshell, a project slips schedule when a mismatch exists between intrinsic/stochastic complexity and resource allocation.
Tabusse: One cause is the lack of corporate-wide reference metrics that would help learning from the past. (Actually, the myriad of overlapping metrics at some companies has the same effect.) Another major challenge is the absence of reliable information on the quality of the building blocks that the next project will be using.

SLD: Design teams can be resource hogs, using up all available disk space and processing. How can such usage be controlled without adversely affecting the design?
Anantharaman: Hardware design teams are known to use up all available disk space. Design libraries are often very large. EDA tools produce large numbers of very large files. And engineers love to keep copies just in case they need them. Often, the entire project data is archived to keep a snapshot of a milestone. Engineers often think that disk space is cheap because they can go to a local electronics store and pick up a terabyte hard drive for a couple hundred dollars. However, this cannot be compared to disk space on a high-reliability NAS server in an enterprise network. Additionally, there’s the huge cost of managing disk space, which includes creating backups. Deploying an HCM system helps this cause in multiple ways. Users no longer need to keep local backup copies of their own because all versions are managed. Any milestone can be remembered using a tag/label to record the configuration of the project without having to make a complete copy of the project data.

SLD: Many engineers and their managers are reluctant to use software-as-a-service (SaaS) or cloud-computing tools due to performance (too slow) and security (too risky) concerns. How can product-lifecycle-management (PLM) software address these concerns? What portion of the overall development design could be used as a test case to boost confidence in SaaS-cloud PLM tools?
Collett: How does PLM software address those concerns? Cloud-based computing performance is a function of the software it uses to perform load balancing and transaction processing across the server farm and between the client and the cloud. So I think steady improvements in cloud-computing infrastructure software will be the answer to the performance issue. Regarding security, there are a tremendous number of very powerful measures available today. It’s different than it was 5 or 10 years ago. My company provides all of its products via SaaS and we’ve never had a security breach. Some of the top semiconductor companies in the world store their data on our systems. Preventing security breaches is a function of two things: the number of layers of security and continuous attention to the issue by senior management.
Tabusse: As stated before, our product architecture makes little use of the total available network bandwidth. End users only need a web navigator, and on most occasions they don’t even notice that the application is delivered from a remote server. While most of our customers have installed the software at their own premises (for security reasons or simply by habit), some have started to use a delocalized server that we provide.