Archive for November, 2014

From Data to Information to Knowledge

Monday, November 17th, 2014

Gabe Moretti, Senior Editor

During my conversation with Lucio Lanza last month we exchanged observations on how computing power, at both the hardware and the system levels, had progressed since 1968, the year when we both started working full time in the electronics field, and on how much was left to do in order to achieve knowledge-based computing.

Observing what computers do, we recognized that they are now very good at collecting and processing data.  In fact, the concept of the IoT is based on the capability to collect and process a variety of data types in a distributed manner.  The plan is, of course, to turn the data into information.

There are certainly examples of computer systems that process data and generate information.  Financial systems, for example, produce profit and loss reports from income and expense data, and logic simulators describe the behavior of a system using input and output values.  But information is not knowledge.

Knowledge requires understanding, and computers do not understand information.  Achieving knowledge requires the correlation and synthesis of information, sometimes from disparate sources, in order to generate understanding of that information and thus abstract knowledge.  Knowledge is also a derivative of previous knowledge and cannot always be generated simply by digesting the information presented.  The human brain associates the information to be processed with knowledge it already holds, in a learning process that assigns the proper value and quality to the information presented, so as to construct the value judgments and associative analysis that generate new knowledge.  What follows are some of my thoughts since that dialogue.

As an aside I note that, unfortunately, many education systems give students information but not the tools to generate knowledge.  Students are taught to give the correct answer to a question, but not to derive the correct answer from a set of information and their own existing knowledge.

We have not been able to recreate these knowledge-generating processes successfully with a computer, but nothing says that it will not be possible in the future.  As computing power increases and new computer architectures are created, I know we will be able to automate the generation of knowledge.  As far as EDA is concerned, I will offer just one example.

A knowledge-based EDA tool would develop an SoC from a set of specifications.  Clearly, if the specifications are erroneous the resulting SoC will behave differently than expected, but even this eventuality would help improve the next design, because it would provide information to those who develop a knowledge-based verification system.

When we achieve this goal, humanity will finally be free to dedicate its intellectual and emotional resources to the problems that derive directly from what humans are, and to prevent most of them instead of having to solve them.

At this moment I still see a lot of uncertainty with respect to the most popular topic in our field: the IoT.  Are we on the right track in the development of the IoT?  Are we creating the right environment to learn to handle information in a knowledge-generating manner?  To even attempt such a task we need to solve not just technical problems, but financial, behavioral, and political issues as well.  The communication links needed to arrive at a solution are either non-existent or weak.  Just think of the existing debate regarding the “free internet.”  Financial requirements demand a redefinition of the internet: from a communication mechanism akin to a utility to a special-purpose device used in selective ways defined by price.  How would a hierarchical internet defined by financial parameters modify the IoT architecture?  Politicians and some business interests do not seem to care, and engineers will have to patch things up later.  In this case we do not have knowledge.

High Power Tools for Low Power

Tuesday, November 4th, 2014

Gabe Moretti, Senior Editor

The subject of design for low power (it is a miracle we do not yet have “DLP” in the vocabulary) continues to be on the minds of EDA people.  I talked about it with Krishna Balachandran, product marketing director for low power at Cadence.

I asked Krishna how pervasive the need to conserve power is among the Cadence user community.

Krishna responded: “The need to conserve power is pervasive in IP and chips targeting mobile applications. Interest in conserving power has spread to include plug-in-the-wall target applications because of government regulations regarding energy usage and the need to be green. Almost all integrated circuits designed today exercise some power management techniques to conserve power. Power conservation is so pervasive that it has now become the third optimization metric in addition to area and speed. Another noticeable trend is that while power management techniques are widespread in both purely analog and purely digital designs, mixed-signal designs are increasingly becoming power-managed.”
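
One way to picture power becoming a third optimization metric is as an extra term in the cost function an implementation tool minimizes.  The short Python sketch below is only an illustration of that idea; the weights, reference values, and candidate numbers are my own assumptions, not anything taken from a Cadence tool.

    # Hypothetical cost function treating power as a third metric next to
    # area and speed.  All weights and reference values are invented.
    def implementation_cost(area_um2, delay_ns, power_mw,
                            ref=(50000.0, 2.0, 10.0),
                            weights=(1.0, 1.0, 1.0)):
        """Return a single scalar cost from normalized area, delay and power."""
        norm = (area_um2 / ref[0], delay_ns / ref[1], power_mw / ref[2])
        return sum(w * n for w, n in zip(weights, norm))

    # Two candidate implementations of the same block: the second trades
    # a little speed for a large power saving.
    print(implementation_cost(48000, 1.8, 14.0))   # fast but hot
    print(implementation_cost(47000, 2.1,  8.5))   # slower but cooler

Turn the power weight down and the faster candidate wins; turn it up and the cooler one does, which is exactly the kind of tradeoff the rest of the conversation is about.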

Given how pervasive the focus on reducing power seems to be, I wondered how tools for low power help in deciding tradeoffs.  I know it is quite an open-ended question, and I got a very long answer from Krishna.

“Power estimation and analysis must be done early and accurately for maximizing the possible tradeoffs. If a design is only optimized for power for the hardware without comprehending how the software will take advantage of the power management in the hardware, power estimates will be inaccurate and problems will be found too late in the product development cycle. Power estimation is increasingly done employing hardware emulation platforms which are excellent at identifying peak-power issues by running real-life software application scenarios on the hardware being designed that allows appropriate power architecture modifications at an early stage.

Thorough functional verification of the register-transfer level (RTL) is very important to weed out any power-related design bugs. Metric-driven power-aware simulation, simulation acceleration and tools providing powerful low power debug capabilities are deployed to ensure that the design is functional in all the power modes and help shorten the overall verification time. Low power static and exhaustive formal verification complement simulation to find functional design bugs as well as issues with the power intent specification.

Smart leakage and dynamic power optimization options in logic synthesis and Place-and-Route (P&R) tools work hard by using power as a cost function in addition to area and speed. It is important that P&R tools do not undo any intelligent optimizations done for power at the logic synthesis stage. For example, placement-aware logic synthesis tools can swap out single bit flip flops and replace them with multi-bit flops referred to as Multi-Bit Cell Inferencing (MBCI), which can significantly reduce the load on the clock tree, a major power consumer in today’s System-On-Chip (SoC) designs. A Multi-Mode Multi-Corner (MMMC) optimization capability is a must to simultaneously optimize power while meeting timing. A rich set of power switch and enable chaining options and integration with a power-analysis and signoff engine are needed in your P&R tool to help identify the appropriate number and locations of power switches. Power switch analysis, insertion and placement capabilities trade off ramp time for the power domain that is waking up vs. the rush current and IR drop in the neighboring power domains that were already on. Coupled with this, Graphical User Interface (GUI) capabilities within the P&R system need to allow the designer to interactively specify the switch and enable chaining topology to deliver a power efficient design that takes the tradeoffs between ramp time, IR drop and rush currents into consideration. All successive refinements of the design need to be verified vs. the original design intent (the RTL), and low power equivalence and static and formal verification tools do just that.

Mixed-signal low power designs pose unique challenges for both implementation and functional verification. Tools must be able to stitch together the digital blocks in an analog schematic environment taking into account the disparate representations of power in the two domains. Verification solutions must be able to check the interfaces of the analog/digital domains that are frequently a source of errors such as missing level shifters. Without employing such tools, the process is manual and error prone with design bugs creeping into silicon.”
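
To make the emulation-driven power estimation described above a bit more concrete, here is a rough Python sketch of the underlying idea: turn the per-cycle switching activity produced by a software scenario into a dynamic-power waveform, using the familiar relation P = alpha * C * V^2 * f, and scan it for the worst peak-power window.  The capacitance, voltage, frequency, and activity values are all assumptions invented for illustration, not data from any real flow.

    # Toy illustration of emulation-driven power estimation: convert a
    # per-cycle toggle-activity trace into dynamic power and find the
    # hottest window.  Every constant below is an assumed, made-up value.
    C_EFF_F = 2.0e-9     # effective switched capacitance (farads), assumed
    VDD_V   = 0.9        # supply voltage (volts), assumed
    FREQ_HZ = 500e6      # clock frequency (hertz), assumed

    def dynamic_power(activity):
        """P_dyn = alpha * C * V^2 * f for one cycle's average toggle rate."""
        return activity * C_EFF_F * VDD_V ** 2 * FREQ_HZ

    def peak_window(activity_trace, window=4):
        """Return (start_cycle, average power in watts) of the hottest window."""
        powers = [dynamic_power(a) for a in activity_trace]
        best = max(range(len(powers) - window + 1),
                   key=lambda i: sum(powers[i:i + window]))
        return best, sum(powers[best:best + window]) / window

    # A short activity trace such as a software scenario might produce.
    trace = [0.10, 0.12, 0.55, 0.60, 0.58, 0.15, 0.11, 0.09]
    print(peak_window(trace))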

Since it is important to plan power optimization solutions at the system level, I asked what approach Cadence envisions as the most productive.  I knew this had been covered somewhat in the previous answer, but I wanted to focus on the subject more.  Krishna was very polite in focusing his answer.

“ES-level tools provide the biggest bang for the buck when it comes to power optimization. The sooner you optimize, the more options you have for making excellent trade-offs. A good example of this is a High Level Synthesis tool that generates multiple micro-architectures from a given high-level description of an algorithm in C/C++/SystemC and helps the designer make the trade-off between area, speed and power for the target design. Since it operates at a pre-RTL stage, the resulting power/area/speed trade-offs are very impactful. Furthermore, it is desirable to integrate power estimation and logic synthesis engines within the High Level Synthesis tool, thus ensuring a high degree of correlation with downstream implementation tools.”
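
The micro-architecture exploration Krishna mentions can be pictured as a Pareto search over area, speed, and power.  As a sketch only, with invented candidate names and numbers and none of the complexity of a real High Level Synthesis flow, a few lines of Python:

    # Keep only the micro-architectures that no other candidate beats on
    # area, latency, and power simultaneously.  All figures are invented.
    from typing import List, NamedTuple

    class MicroArch(NamedTuple):
        name: str
        area_um2: float
        latency_ns: float
        power_mw: float

    def dominates(a: MicroArch, b: MicroArch) -> bool:
        """True if a is at least as good as b on every metric and better on one."""
        am = (a.area_um2, a.latency_ns, a.power_mw)
        bm = (b.area_um2, b.latency_ns, b.power_mw)
        return all(x <= y for x, y in zip(am, bm)) and am != bm

    def pareto_front(candidates: List[MicroArch]) -> List[MicroArch]:
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates)]

    candidates = [
        MicroArch("fully-parallel", 90000, 1.2, 22.0),
        MicroArch("2x-folded",      52000, 2.3, 12.0),
        MicroArch("serial",         30000, 8.0, 11.0),
        MicroArch("slow-and-large", 95000, 2.5, 25.0),   # dominated
    ]
    for m in pareto_front(candidates):
        print(m.name)

The designer then picks from the surviving points according to the priorities of the target design, which is the trade-off Krishna describes.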

I think we will see increased attention to power optimization tools at the system level in the short term.  Feedback from designers and architects will help in defining the problem better.  And it is my hope that hardware engineers will be able to teach software developers how to use hardware more efficiently.  In my professional lifetime we have gone from having to count how many bytes resulted from the code to considering hardware as an unlimited asset.  The time might be here to start considering how software code impacts hardware again.