
Kilopass Unveils Vertical Layered Thyristor (VLT) Technology for DRAMs

October 19th, 2016

Gabe Moretti, Senior Editor

Kilopass Technology, Inc., is a leader in embedded non-volatile memory (NVM) intellectual property (IP). Its patented one-time programmable (OTP) NVM technologies scale to advanced CMOS process geometries, are portable to every major foundry and integrated device manufacturer (IDM), and meet market demands for increased integration, higher densities, lower cost, low-power management, better reliability and improved security. The company has just announced a new device that potentially allows it to diversify into new markets.

According to Charlie Cheng, Kilopass’ CEO, VLT eliminates the need for DRAM refresh, is compatible with existing process technologies and offers other significant benefits, including lower power and better area efficiency. When asked the reason for this additional corporate direction, Cheng replied: “Kilopass built its reputation as the leader in one-time programmable memories. As the next step on our roadmap, we examined many possible devices that would not need new materials or complex process flows and found this vertical thyristor to be very compelling. We look forward to commercializing VLT DRAM in early 2018.”

VLT Overview

Kilopass’ VLT is based on thyristor technology, a structure that is electrically equivalent to a cross-coupled pair of bipolar transistors forming a latch. The latch lends itself to memory applications since it stores values and, unlike current capacitor-based DRAM technology, does not require refresh. The thyristor was invented in the 1950s, and several attempts have been made to use it for the SRAM market without success. Kilopass’ VLT meets DRAM requirements by implementing the thyristor structure vertically.

Since VLT does not require complex performance- and power-consuming refresh cycles, a VLT-based DDR4 DRAM lowers standby power by 10X when compared to conventional DRAM at the same process node. Furthermore, VLT requires fewer processing steps and is designed to be built using existing processing equipment, materials and flows.

VLT bitcell operation and silicon measurements were completed in 2015 and showed excellent correlation to Kilopass’ proprietary ultra-fast TCAD simulator, which is one hundred thousand times faster than a traditional TCAD simulator. The simulator enables Kilopass to predict the manufacturing windows for key process parameters and to optimize the design for any given manufacturing process. A full macro-level test chip was taped out in May and initial silicon testing is underway.

Industry Perspective

The $50B DRAM market is being driven by strong demand in the server/cloud computing market as mobile phone and tablet market growth are slowing down and computing is moving increasingly to the cloud. The outlook for DRAM growth remains strong. In a report published in 2015, IC Insights forecasts DRAM CAGR of 9% over the period from 2014 – 2019. This growth rate shows DRAM growing faster than the total IC market.

Servers and server farms consume a tremendous amount of energy with memory being a major contributor. In an ideal world, the current generation of 20 nanometer (nm) DRAM would migrate to sub-20nm processes to deliver even lower power.

Current DRAM technology is based on the one-transistor, one-capacitor (1T1C) bitcell, which is difficult to scale: smaller transistors exhibit more leakage and the smaller capacitor structure has less capacitance, forcing shorter intervals between refresh operations. Up to 20% of a 16Gb DDR DRAM’s raw bandwidth will be lost to the increased frequency of refresh cycles, a negative for multi-core/multi-thread server CPUs that must squeeze out every bit of performance to remain competitive. The DRAM industry is in a quandary, trying to increase memory performance while reducing power consumption, a tough challenge given the physics at play with the current 1T1C technology. To address the need for lower power consumption, a new DRAM technology and architecture is needed.
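As a rough illustration of where that bandwidth goes, the fraction of time a device spends refreshing can be approximated as the refresh cycle time (tRFC) divided by the refresh interval (tREFI). The sketch below uses assumed, DDR4-style parameter values purely for illustration; they are not figures from Kilopass or from this article.

```python
# Hedged back-of-the-envelope model of DRAM refresh overhead.
# The parameter values are illustrative assumptions, not vendor data.
def refresh_overhead(trfc_ns, trefi_ns):
    """Approximate fraction of time (and thus raw bandwidth) lost to refresh."""
    return trfc_ns / trefi_ns

# Longer tRFC at higher densities and a shorter tREFI at high temperature
# both push the overhead up; very dense parts need still more refresh.
print(f"{refresh_overhead(trfc_ns=350, trefi_ns=7800):.1%}")  # ~4.5%
print(f"{refresh_overhead(trfc_ns=550, trefi_ns=7800):.1%}")  # ~7.1%
print(f"{refresh_overhead(trfc_ns=550, trefi_ns=3900):.1%}")  # ~14.1% (tREFI halved at high temperature)
```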

Kilopass stated that its initial target markets include “PCs” and servers. I am of the old school and associate the term “PC” with personal computers, but Kilopass uses it to mean Portable Computing Devices, so it is talking about a different market. Kilopass expects to have test silicon by early 2017 that will confirm the performance and manufacturability of the new VLT DRAM technology. Kilopass has two primary reasons to announce the new technology over a year in advance of product delivery. First, the company is in the IP business, so it is giving itself time to look for licensees. Secondly, it thinks that the DRAM market has been stuck at 20nm. Adoption of new technology takes time, even though VLT has been shown to be manufacturable, so this is the right time to alert the market that there are alternative solutions and allow time for investigation. Market penetration of a new technology is not always assured. Wide acceptance almost always requires a second source, especially with something as new as the VLT device. Memories play a critical role in cloud computing but a far smaller one in PCs, since power consumption in PCs is not a widespread issue.

Interview with Pim Tuyls, President and CEO of Intrinsic-ID

October 4th, 2016

Gabe Moretti, Senior Editor

After the article on security published last week, I continued the conversation with more companies. The Apple vs. FBI case showed that the stakes are high and the debate is heated. Privacy is important, not only for guarding sensitive information but also for ensuring functionality in our digital world.

I asked Pim Tuyls his impressions on security in electronics systems.

Pim:

“Often, privacy is equated with security. However, ‘integrity’ is often the more important issue. This is especially true with the Internet of Things (IoT) and autonomous systems, which rely on the inputs they receive to operate effectively. If these inputs are not secure, how can they be trusted? Researchers have already tricked the sensors of semi-autonomous cars with imaginary objects on the road, triggering emergency braking actions. Counterfeit sensors are already on the market.

Engineers have built in redundancy and ‘common-sense’ rules to help ensure input integrity. However, such mechanisms were built primarily for reliability, not for security. So something else is needed. Looking at the data itself is not enough. Integrity needs to be built into sensors and, more generally, all end-points.”

Chip Design: Are there ways you think could be effective in increasing security?

Pim:

“One way to do this is to append a Message Authentication Code (MAC) to each piece of data. This is essentially a short piece of information that authenticates a message, confirming that it came from the claimed sender (its authenticity) and has not been changed in transit (its integrity). To protect against replay attacks, the message is augmented with a timestamp or counter before the MAC is calculated. A common way to implement a MAC is based on hash functions (HMAC, or hash-based message authentication code). Hash functions such as the SHA-2 family are well-known and widely supported cryptographic primitives with efficient and compact implementations.”
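To make the construction concrete, here is a minimal sketch of the MAC-plus-counter scheme Pim describes, using HMAC-SHA-256 from Python’s standard library. The key, packet layout and field sizes are illustrative assumptions, not a description of any particular sensor protocol.

```python
# Minimal sketch: authenticate a sensor reading with HMAC-SHA-256 and a
# monotonically increasing counter so the host can detect replays.
import hmac, hashlib, struct

SHARED_KEY = b"per-sensor secret"      # illustrative; provisioning this key is the hard part

def protect(reading, counter):
    """Prefix the reading with an 8-byte counter and append a 32-byte MAC."""
    msg = struct.pack(">Q", counter) + reading
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    return msg + tag

def verify(packet, last_counter):
    """Return the reading if the MAC checks out and the counter is fresh, else None."""
    msg, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                    # integrity or authenticity failure
    counter = struct.unpack(">Q", msg[:8])[0]
    if counter <= last_counter:
        return None                    # replayed or stale packet
    return msg[8:]

packet = protect(b"temperature=21.5C", counter=42)
assert verify(packet, last_counter=41) == b"temperature=21.5C"
```

On a host this is only a few lines of code; the obstacles Pim describes next are about what the smallest, cheapest nodes can afford.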

Chip Design: These approaches sound easy, so why are they not widely adopted?

Pim:

“First, even though an algorithm like HMAC is efficient and compact, it may still be too high a burden for the tiny microcontrollers and sensors that are the nerves of a complex system. Authenticating every piece of data naturally takes up resources such as processing, memory and power. In some cases, like in-vitro medical sensors, any reduction in battery life is not acceptable. Tiny sensor modules often do not have any processing capabilities. In automotive, due to the sheer number of sensors and controllers, costs cannot be increased.”

Chip Design: It is true that many IoT devices are very cost sensitive. However, over recent years there has been increasing use of more powerful, 32-bit, often ARM-based microcontrollers. Many of these now come with basic security features like crypto accelerators and memory management, so some of the issues that prevent adoption of security are quickly being eroded.

Pim continued:

“A second obstacle relates to the complex logistics of configuring such a system. HMAC relies on a secret key that is shared between the sensor and the host.  Ensuring that each sensor has a unique key and that the key is kept secret via a centralized approach creates a single point of failure and introduces large liabilities for the party that manages the keys.”

Chip Design: What could be a cost-effective solution?

Pim concluded:

“A new solution to all these issues is based on SRAM Physical Unclonable Functions (PUFs). An SRAM PUF can reliably extract a unique key from a standard SRAM circuit on a standard microcontroller or smart sensor. The key is determined by tiny manufacturing differences unique to each chip. There is no central point of failure and no liability for key loss at the manufacturer.  Furthermore, as nothing is programmed into the chip, the key cannot even be extracted through reverse engineering or other chip-level attacks.

Of course, adopting a new security paradigm is not something that should be done overnight. OEMs and their suppliers are rightly taking a cautious approach. After all, the vehicle that is being designed now will still be on the road in 25 years. For industrial and medical systems, the lifecycle of a product may be even longer.

Still, with technologies like SRAM PUF the ingredients are in place to introduce the next level of security and integrity, and pave the road for fully autonomous systems. Using such technologies will not only help to enhance privacy but will also ensure a higher level of information integrity.”

This brought me back to the article where a solution using PUF was mentioned.

eSilicon Fully Automates Semiconductor IP Selection and Purchasing

September 21st, 2016

Gabe Moretti, Senior Editor

Approximately half of the area of advanced system-on-chip (SoC) designs is composed of memory IP. Optimizing the memory subsystem of an SoC requires exploration of hundreds to thousands of possible configurations to identify the optimal match for the chip’s power, performance and area (PPA) requirements. This process can take weeks and traditionally designers and architects have not been able to fully explore all the options available to them, resulting in sub-optimal memory architectures and design closure challenges.
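Conceptually, that exploration is a search over a configuration space for the points that are not dominated in power, performance and area. The sketch below illustrates the idea with invented configurations and numbers; it is not how eSilicon’s compilers or STAR Navigator work internally.

```python
# Illustrative design-space exploration: keep only the memory configurations
# that are Pareto-optimal in power, access time and area (lower is better
# for all three). Names and numbers are invented for the example.
from dataclasses import dataclass

@dataclass(frozen=True)
class MemConfig:
    name: str
    power_mw: float
    access_ns: float
    area_um2: float

def dominates(a, b):
    """True if config a is at least as good as b everywhere and better somewhere."""
    no_worse = (a.power_mw <= b.power_mw and a.access_ns <= b.access_ns
                and a.area_um2 <= b.area_um2)
    better = (a.power_mw < b.power_mw or a.access_ns < b.access_ns
              or a.area_um2 < b.area_um2)
    return no_worse and better

def pareto_front(configs):
    return [c for c in configs if not any(dominates(o, c) for o in configs)]

candidates = [
    MemConfig("single-port HD SRAM", 1.2, 1.8,  9000),
    MemConfig("single-port HS SRAM", 2.0, 1.1, 11000),
    MemConfig("two-port regfile",    2.4, 1.0, 15000),
    MemConfig("dominated option",    2.5, 1.9, 16000),
]
for cfg in pareto_front(candidates):
    print(cfg.name)
```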

eSilicon’s STAR Navigator tool addresses this challenge by allowing designers to choose, evaluate and procure eSilicon IP online. Now, designers can access a wide variety of memory and I/O options online to find the best configuration for each design. With the latest enhancements to STAR Navigator, designers can now get quotes for their chosen IP online and procure it by uploading a valid purchase order. Designers are now in control of specifying and purchasing the most appropriate IP for their projects.

STAR Navigator also increases the security of transactions by providing a single channel of communication that allows engineering, purchasing and finance to avoid misunderstandings. By eliminating multiple emails to different recipients, the selection, purchase and delivery of the IP are all documented in one place.

Previously, purchasing memory IP and I/Os could be difficult for the engineer to manage. Accessing specific memory instances with a variety of options was time consuming and complex. STAR Navigator helps designers avoid complicated paperwork; find which memories will best help meet their SoC’s power, performance or area (PPA) targets; and easily isolate key data without navigating convoluted data sheets. Pre-loaded data is available to enable architects and designers to obtain immediate PPA information for their early chip planning.

STAR Navigator empowers chip architects and designers to choose the best and highly differentiated eSilicon-developed IP solutions by performing the following tasks online:

  • Generate dynamic, graphical analyses of PPA data
  • View data graphically, view in table format, or download to Microsoft Excel
  • Build and download a complete SoC memory subsystem
  • Generate and download IP front-end views
  • Make changes over time
  • Purchase the IP that best meets the needs of the design

STAR Navigator contains all eSilicon-developed IP across multiple foundries and technologies:

  • Standard and specialty memory compilers from 14nm to 180nm including CAMs, fast cache single-port SRAMs, multi-port register files, ultra-low-voltage SRAMs and pseudo two-port architectures targeted for specific market segments
  • General-purpose and specialty I/O libraries from 14nm to 180nm
  • High-bandwidth memory (HBM) Gen2 PHY in 14/16nm and 28nm
  • Foundries include Dongbu, GLOBALFOUNDRIES, LFoundry, Samsung, SMIC, TSMC and UMC

“STAR Navigator simplifies the comparison of results across multiple technologies, architectures and other characteristics and takes the guesswork out of hitting PPA targets,” said Lisa Minwell, eSilicon’s senior director, IP marketing. “This goes much, much deeper than IP portals that serve as IP catalogs. Using STAR Navigator, designers can download front-end views, run simulations in their own environments and then purchase the back-end views of the IP and I/Os that best fit their design. The choice of optimized IP is now in the hands of the designer.”

An interesting White Paper from S2C

September 6th, 2016

Gabe Moretti, Senior Editor

S2C has published a white paper on Chip Design titled “Choosing the best pin multiplexing method for your Multiple-FPGA partition”. It is of particular interest to designers who use FPGA-based prototyping in the development of SoC designs.

Using multiple FPGAs to prototype a large design requires solving a classic problem: the number of signals that must pass between devices is greater than the number of I/O pins on an FPGA. The classic solution is to use a TDM (Time Domain Multiplexing) scheme that multiplexes two or more signals over a single wire or pin.

There are two distinct types of TDM implementations: synchronous and asynchronous. In synchronous TDM the multiplexing circuitry is driven by a fast clock that is synchronous with the (user’s) design clock.

In asynchronous mode, the TDM fast clock runs completely independent of the design clocks. Although asynchronous mode is slower, it supports multiple clocks and the timing constraints are easier to meet.
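To picture what pin multiplexing does, the sketch below models the round trip behaviorally: several design signals take turns on one physical pin, one per fast-clock time slot, and are demultiplexed on the receiving FPGA. It is only an illustration of the concept, not of S2C’s LVDS implementation.

```python
# Behavioral illustration of TDM across an FPGA-to-FPGA pin: with N signals
# sharing one pin, the TDM clock must run at least N times faster than the
# rate at which the design signals change.
def tdm_transmit(signal_values):
    """One TDM frame: drive each signal's value onto the shared pin in its own time slot."""
    for slot, value in enumerate(signal_values):
        yield slot, value

def tdm_receive(slots, num_signals):
    """Reassemble the parallel signals from the serial stream of (slot, value) pairs."""
    recovered = [None] * num_signals
    for slot, value in slots:
        recovered[slot] = value
    return recovered

design_signals = [1, 0, 1, 1]                  # four signals, one physical pin (4:1 TDM)
on_the_wire = list(tdm_transmit(design_signals))
assert tdm_receive(on_the_wire, len(design_signals)) == design_signals
```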

The paper shows that S2C’s Prodigy Play Pro provides design partitioning across multiple FPGAs and offers automatic TDM insertion based on asynchronous TDM over LVDS. Prodigy Play Pro combines asynchronous LVDS TDM with a single-clock-cycle design, and can partition a design and perform automatic TDM insertion. The result is that the tool is able to:

1)   Optimize buses and match the LVDS resources in each bank considering such factors as trace lengths, matching impedances, and impedance continuity, and

2)   Avoid consuming FPGA design resources for the TDM circuitry by taking advantage of built-in reference clocks (e.g., IODELAY) to drive TDM clocks and resets.

Just click on the title of the white paper to read it in its entirety or go to http://www.s2cinc.com/resource-library/white-papers.

ARC Processor summit in Santa Clara

August 30th, 2016

Gabe Moretti, Senior Editor

Synopsys is holding its second ARC Processor summit on September 13 at the Santa Clara Marriott.

The full day conference will open at 9:00 for on-site registration.  Synopsys will provide complimentary parking to attendees.  To see the full program please go to:

http://www.synopsys.com/IP/ProcessorIP/ARCProcessors/Pages/arc-processor-summit-2016.aspx

The ARC processor family comprises a number of versions of the MCU adapted to specific applications, as well as a general-purpose version. From my point of view, the ARC processor family offers two major advantages to its customers: the availability of a large and tested IP family directly from Synopsys, and Synopsys’ leading-edge rapport with many foundries, including all the important ones.

The day’s events are divided into three tracks: Hardware, Software, and Embedded Vision.

Linley Gwennap, The Linley Group, will deliver the keynote.  The title is: “IoT Standards Wars: Caught in the Middle?”

Given the number of devices and the differences among applications, it is extremely important to arrive quickly at a set of agreed-upon standards that can support this variety and still offer robustness, flexibility and security.

The day will conclude with a demo session and networking opportunity from 5:30 to 7:00.

Accellera Relicenses SystemC Reference Implementation under the Apache 2.0 License

August 15th, 2016

Gabe Moretti, Senior Editor

SystemC is a class library built on top of C++, a language widely used by software developers. A synthesizable subset of SystemC contains the constructs that are useful for describing hardware components and designs. SystemC is used mainly by designers working at the system level, especially when it is necessary to simulate hardware and software concurrently. An algorithmic description of a hardware block in SystemC generally simulates faster than the same description implemented in a traditional hardware description language.

Accellera Systems Initiative (Accellera), the electronics industry organization focused on the creation and adoption of electronic design automation (EDA) and intellectual property (IP) standards, just announced that all SystemC supplemental material, including material contributed under the SystemC Open Source License Agreement prior to the merger of the Open SystemC Initiative (OSCI) and Accellera in 2011, has now been re-licensed under the Apache License, version 2.0.

The SystemC Open Source License used for the supplemental material required a lengthier contribution process that will no longer be necessary under Apache 2.0. Other Accellera supplemental material already available under Apache 2.0 includes the Universal Verification Methodology (UVM) base class library.

“This is a significant milestone for Accellera and the SystemC community,” stated Shishpal Rawat, Accellera Systems Initiative chair. “Having all SystemC supplemental material, such as proof-of-concept and reference implementations, user guides and examples, under the widely used and industry-preferred Apache 2.0 license will make it easier for companies to more readily contribute further improvements to the supplemental material.  We have been working with all of the contributing companies over the past 18 months to ensure that we could offer SystemC users a clear path to use and improve the SystemC supplemental material, and we are very proud of the efforts of our team to make this happen.”

The supplemental material forms the basis for possible future revisions of the standard as new methods and possible extensions to the language are adopted by a significant majority of users.  It is important to keep in mind that a modeling language is a “living” language, which means that it is subject to periodic changes.  For example, the IEEE specifies that an EDA modeling language standard be reaffirmed every five years.  This institutionalizes the possibility of a new version of the standard at regular intervals.

Hardware Based Security

August 5th, 2016

Gabe Moretti, Senior Editor

If there is one thing that is obvious about the IoT market, it is that security is essential. IoT applications will be, if they are not already, invasive to the lives of their users, and the privacy of each individual must be preserved. The European Union has stricter privacy laws than the US, but even in the US privacy is valued and protected.

Intrinsic-ID has published a white paper “SRAM PUF: The Secure Silicon Fingerprint” that you can read in the Whitepapers section of this emag, or you can go to www.intrinsic-id.com and read it under the “Papers” pull down.

For many years, silicon Physical Unclonable Functions (PUFs) have been seen as a promising and innovative security technology that was making steady progress. Today, Static Random-Access Memory (SRAM)-based PUFs offer a mature and viable security component that is achieving widespread adoption in commercial products. They are found in devices ranging from tiny sensors and microcontrollers to high performance Field-Programmable Gate Arrays (FPGAs) and secure elements where they protect financial transactions, user privacy, and military secrets.

Intrinsic-ID’s goal in publishing this paper is to show that the SRAM PUF is a mature technology for embedded authentication. The behavior of an SRAM cell depends on the difference between the threshold voltages of its transistors. Even the smallest differences will be amplified and push the SRAM cell into one of two stable states. Its PUF behavior is therefore much more stable than the underlying threshold voltages, making it the most straightforward and most stable way to use the threshold voltages to build an identifier.

It turns out that every SRAM cell has its own preferred state every time the SRAM is powered up, resulting from the random differences in the threshold voltages. This preference is independent of the preference of neighboring cells and independent of the location of the cell on the chip or on the wafer.

Hence an SRAM region yields a unique and random pattern of 0’s and 1’s. This pattern can be called an SRAM fingerprint since it is unique per SRAM and hence per chip. It can be used as a PUF. Keys that are derived from the SRAM PUF are not stored ‘on the chip’ but they are extracted ‘from the chip’, only when they are needed. In that way they are only present in the chip during a very short time window. When the SRAM is not powered there is no key present on the chip making the solution very secure.
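In outline, turning that fingerprint into a key means reading the power-up pattern, correcting the few cells that flip from one power-up to the next, and hashing the stabilized pattern. The sketch below only captures that outline; the read-out and error-correction steps are placeholders and do not describe Intrinsic-ID’s actual implementation.

```python
# Conceptual outline of SRAM-PUF key derivation. The hardware read-out and the
# fuzzy-extractor / error-correction step are placeholders (assumptions), since
# both are device and product specific.
import hashlib

def read_sram_startup_pattern():
    """Placeholder: read the raw power-up state of a reserved, uninitialized SRAM region."""
    raise NotImplementedError("device-specific hardware access")

def correct_noise(raw_pattern, helper_data):
    """Placeholder: reconcile the noisy pattern with helper data created at enrollment."""
    raise NotImplementedError("depends on the chosen error-correcting code")

def derive_device_key(helper_data):
    raw = read_sram_startup_pattern()
    stable = correct_noise(raw, helper_data)
    # The key is derived on demand and never stored; it disappears when power is removed.
    return hashlib.sha256(stable).digest()
```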

Intrinsic-ID has bundled error correction, randomness extraction, security countermeasures and anti-aging techniques into a product called Quiddikey. This product extracts cryptographic keys from the SRAM PUF in a very secure manner and is available as Hardware IP (netlist), firmware (ANSI C Code), or a combination of these.

The hardware IP is small and fast – around 15K gates / 100K cycles – and connects to common interconnects like AMBA AHB and APB as well as proprietary interfaces. A Built-In Self-Test (BIST) and health checks are included in the logic. Since it is purely digital, single-clock logic, it synthesizes readily to any technology. Software reference implementations start from 10KB of code and are available for major platforms like ARM, ARC, Intel and MIPS. Software implementations can be used to add PUF technology to existing products through a firmware upgrade.

I will deal with security issues in more depth in September. In the meantime, the Intrinsic-ID white paper is worth your attention.

Reverse Acquisition

July 25th, 2016

Gabe Moretti, Senior Editor

It has now been one week since SoftBank of Japan announced its intention to acquire ARM for a little over $32 billion in cash, an eye-popping 43% premium over the stock price before the announcement. As I remarked in my previous blog (The ARM – SoftBank Deal: Heart Before Mind), the financials do not make sense, but, after a week of consideration and after reading Junko Yoshida’s interview of Masayoshi Son, SoftBank CEO, I can see how it makes strategic sense. This is of course my interpretation, not something SoftBank would ever confirm.

I started by considering how Japan has not been able to recover from its industrial near collapse, in spite of its use of every financial tool, both conventional and somewhat unconventional.  There is only one thing left to do: get foreign companies, especially those leading in their fields, to invest in Japanese companies.  But of course, there have been no takers. What to do next: buy one!  And that is what SoftBank has done.

I should have trusted my intuition immediately. Looking at the title of my blog, I wrote “ARM – SoftBank Deal.” It should in fact have been “SoftBank – ARM Deal,” since SoftBank is the acquiring party. Here is what is actually happening.

SoftBank is “lending” ARM $32 billion to “purchase” SoftBank. Masayoshi Son has stated: “I may choose to become ARM’s Chairman of the Board”. He has also stated that the reason for the purchase is to use ARM products within all of the deals SoftBank is involved in at this point, such as Vodafone Japan, Alibaba, and TaoBao. Any entry into new markets is analysts’ speculation.

Yoshida reports Son stating: “ARM will become central to SoftBank’s core business in three, five and 10 years’ time.” Note he did not say that ARM will provide new markets, only that it will strengthen existing ones. And since ARM will be a wholly owned division of SoftBank, there will be no regulations compelling SoftBank to divulge operational details of ARM that it does not choose to make public. Thus much of what SoftBank will do under the ARM cover will remain private.

Should we expect another such move from Japan, Inc.? I will be watching the financial news carefully for a while now.

The ARM – SoftBank Deal: Heart Before Mind

July 19th, 2016

Gabe Moretti, Senior Editor

If you happen to hold ARM stock, congratulations: you are likely to make a nice profit on your investment. SoftBank, a Japanese company with diversified interests, including an Internet provider business, has offered to purchase ARM for cash by tendering $32.4 billion. SoftBank is a large company whose latest financial results show that it made a profit of $9.82 billion before interest payments and tax obligations.

ARM, on the other hand, reported fiscal year 2015 revenue of $1,488.6 million (£968.3 million) with a profit of £414.8 million and an operating margin of 42%. This is a very healthy operating margin, showing remarkable efficiency across all aspects of the company. So there is little to improve in the way ARM operates.

What seems logical, then, is that SoftBank expects a significant increase in ARM revenue after the acquisition, or an effect on its own profit due to ARM’s impact on other parts of the company. ARM’s profit for 2015 was 414.8 million British sterling and its revenue in sterling was 968.3 million, for a ratio of 42.8%. Let’s assume that SoftBank instead invested all of the $32.4 billion and obtained a 5% return, or $1.62 billion per year. To obtain the same result from the ARM acquisition, ARM would have to generate a profit of 3.9 times what it generated in 2015. This is a very large increase: if we assume that all other financial ratios stay the same, revenue would have to be a little over $5.5 billion. Yet, applying the 15% growth realized between 2014 and 2015 to every year between 2015 and 2020, we “only” reach a mark of about $2,913.6 million. And keeping the growth rate constant as revenue increases gets harder and harder, since it requires a larger absolute increase every year.
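Reproducing that arithmetic with the figures quoted above (and, like the article, ignoring the dollar/sterling conversion in the profit multiple) gives roughly the same picture; the exact outputs differ slightly from the rounded numbers in the text.

```python
# Back-of-the-envelope reproduction of the argument; figures as quoted above.
purchase_price_b = 32.4                 # SoftBank's cash offer, $B
alt_return_rate  = 0.05                 # assumed alternative annual return
required_profit_b = purchase_price_b * alt_return_rate          # ~$1.62B per year

arm_2015_profit_m = 414.8               # as quoted (sterling)
profit_multiple = required_profit_b * 1000 / arm_2015_profit_m  # ~3.9x 2015 profit

arm_2015_revenue_m = 1488.6             # ~$1.49B
revenue_2020_at_15pct_m = arm_2015_revenue_m * 1.15 ** 5        # ~$2,994M at 15% CAGR

print(f"required profit:    ${required_profit_b:.2f}B per year")
print(f"profit multiple:    {profit_multiple:.1f}x ARM's 2015 profit")
print(f"15% growth to 2020: ~${revenue_2020_at_15pct_m:,.0f}M")
```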

So the numbers do not make sense to me. I can believe that ARM could be worth $16 billion, but not twice as much. And here is another observation. I have read in many publications that financial analysts expect the IoT market to be $20 billion by 2020. Assuming that the SoftBank investment, net of interest charges, returns 5% per year in 2020, ARM’s revenue would have to be $5.5 billion, or over 25% of the TAM (Total Available Market). This I consider impossible to achieve, simply because the IoT market will be price sensitive, thus opening ARM to competition from other companies offering competitive microcontrollers. SoftBank cannot possibly believe that Intel will go away, or that every person will own three cell phones, or that Google will use only ARM processors in its offerings, or even that IP companies like Cadence and Synopsys will decide to ignore the IoT market.

I am afraid that the acquisition marks the end of ARM as we know it.  It will be squeezed for revenue and profit like it has never been before and the quality of its products will suffer.

DAC Official Results Are In

June 23rd, 2016

Gabe Moretti, Senior Editor

I have already covered DAC in a previous blog, but a couple of days ago I received an email from Michelle Clancy, 53rd DAC PR/Marketing Chair, reporting on the conference attendance. I have additional observations on the Austin conference as a result of the release.

As far as I am concerned, the structure of the release was poor. Readers were guided to consider the overall attendance numbers, whose growth was quite small. The increment in overall badges between the 2013 Austin DAC and this year was 125 badges, an increase of 2.1%, significantly less than the increase in the revenue of the EDA industry over the same span of time. And that is in spite of the growth of related industries with a presence in and around Austin, such as embedded systems and IoT.

What should be underlined is the difference in conference attendee badges between 2013 and 2016. There were 719 more conference badges this year, while the free “I LOVE DAC” passes were down 564 over the same comparison. To me these are the important data. They mean that there were fewer “tire kickers” who collect souvenirs and more technical program or tutorial attendees than in 2013. These are the numbers that indicate success, but the press release did not dwell on them.

I also find it telling that the quote in the release from Howard Pakosh, managing partner of TEKSTART, which provides interim sales, marketing and business development capital to high-tech entrepreneurs, observes “The people we’ve been talking to in Austin are actually looking for information and solutions; they’re not just here because it’s an easy commute from Silicon Valley.”  Obviously Mr. Pakosh finds it a waste of time to exhibit in San Francisco.

My experience on the exhibit floor was different. The fact that Synopsys chose to send fewer PR and marketing people to Austin was a negative point for me. It was difficult to find the right person to discuss business with. The company also did not hold its usual press/analysts dinner, which is unfortunate since its new message, “silicon to software,” was not well presented on the floor. I left the conference without understanding the message, especially since I was told in my meeting with corporate marketing that the effort was to promote products from Coverity and Codenomicon to markets outside the electronics business. Are those products the “software” they are talking about? What about embedded software for all sorts of applications, including those that use their ARC processors?

The Cadence and Mentor booths were better staffed; at least I met all the professionals I needed to meet. It is of course time that Cadence realizes that “The Denali Party” does not take the place of a serious dinner with press and analysts. The Heart of Technology party is a better choice if one wants music and drinks, and it supports a good cause. I go to DAC to do business, not to drink cheap drinks and fight for food in a crowded buffet line.

It is of course expected that the technical program offered by DAC covers leading edge issues and opportunities.  This part of DAC was well organized and run.

If the DAC committee sees the need to defend the choice of Austin as the venue for the conference, then why use the venue again next year? Clearly they have determined that Austin is a viable location. I, for one, did enjoy Austin as a host city and found the convention hall pleasant and well equipped. Of course, the distance from both the sessions and the exhibits to the press room was not at all convenient, but I understand that the press room location was chosen because it allowed the building of the necessary temporary meeting rooms.
