DVCon 2012 ended yesterday, March 1. Rather than recap the entire conference, I’d like to focus on the “high energy” surrounding the event, starting with the vendor exhibitions. Each year I like to walk through the DVCon exhibition hall, looking for new technology, meeting old acquaintances and so forth. In very down times, I could have amped up my pace and run through the exhibition hall unimpeded, but this year, very slow walking was my only option. This was due to both the number of people in the hall and the number of new (and newish) exhibitors present—not to mention the number of familiar faces I encountered.
Moreover, most of the people in the hall were not there just for the free beer/wine and munchies. A rep at one of the large EDA vendors told me that they had seen no letup of potential customers from the start of Tuesday’s exhibition through the first half of Wednesday’s, which is when I spoke with him. This report was echoed by other vendors, and it passed the eyeball test.
I was also impressed by the number of unfamiliar (at least to me) companies that were on the exhibition floor. The fact that these sorts of new companies keep emerging in the already well-covered functional verification and system-level design spaces highlights the increasing importance of system design and functional verification as SoCs become increasingly complex and design cycles become shorter. Of course, the fact that these new companies see DVCon as a cost-effective way to reach their target audience is a testament to the rising status of the conference.
Finally, there was a noticeable change in what I might term the “employment dynamics” at this year’s DVCon. In the past few years (not so much in 2011, but certainly in 2008-10), many attendees came to DVCon with resume in hand, looking for a job. This was not surprising, given the dismal economy at the end of the last decade. What somewhat surprised me this year was the number of potential employers who used DVCon as a venue to seek out and speak with potential new employees. I inadvertently overheard a number of such conversations this week, both in the DoubleTree’s Sprigs restaurant and in that hotel’s executive lounge on the 10th floor.
Indeed, this year, instead of being asked if Cadence had any job openings, I was asked several times by “user companies” if there were any people I could recommend in the industry who might be looking for new job opportunities. This was quite a change from the somewhat depressing atmosphere of a few years ago, and I can only hope that this is a trend that will continue. More jobs are better.
No, DVCon is not DAC, and it does not pretend to cover the entire electronics design flow. However, it does provide a high energy place for technologists focused on the front-end of that design flow to gather, listen to papers, panels and tutorials that explain new technologies, and also to check out the various vendors that have implemented these technologies.
The Accellera Systems Initiative has announced that John Aynsley, the CTO of Doulos, will be awarded that organization’s Technical Achievement Award for his “contributions to SystemC”. In the press release, it is noted that John was a member of the IEEE P1666 Working Group (WG) that recently produced an updated version of the SystemC standard. As the chair of the P1666 WG, I would argue that John was a member of that WG in the same way that Bruce Springsteen was a member of the E Street Band. True, but misleading.
To see my point, you should understand how the P1666-2011 technical work was done. Specifically, over a 1½-year period, the technical subcommittee of P1666 met only once, around halfway through the process. The “management subcommittee” of P1666 met roughly monthly, but those meetings, which I chaired, were mostly devoted to giving and getting updates on the technical progress being made.
How, then, was technical progress managed when there was only one technical meeting? Simply put, almost all progress was made as the result of discussions held via the group’s email reflector. One member might, for example, propose a new language feature, perhaps even creating some example code showing the feature’s use, all via the email reflector. This suggestion would then be thoroughly discussed via email, and eventually, as the group converged towards a solution, John would typically propose a way that the feature could be formalized and included in the new 1666-2011 language reference manual (LRM).
A fair number of new features were introduced in this manner, analyzed by the group and, to at least some degree, “prototyped” by John. Indeed, the one actual meeting of the P1666 technical subcommittee (a phone meeting, as opposed to a face-to-face one) focused on prioritizing the list of potential new language features. Once that prioritization was done, John was able to go into high gear and start introducing the features in an extremely precise way into the 1666 LRM. Of course, John’s additions to the LRM were then subject to further email discussions, with John providing explanations, clarifications and even the occasional rewrite.
Why this reliance on the email reflector for the technical discussions, instead of holding multiple actual meetings? In the first place, the P1666 WG, and especially the technical subcommittee, was a very international group, with participants on at least three continents: North America, Europe and Asia, especially Japan. The group might have taken the normal approach and held all meetings at 8 AM California time, letting many of the members join at very odd hours. Or it might have decided to “pass around the pain”, holding some meetings at 8 AM Berlin time and others at 8 AM Yokohama time, making sure that no one group was favored—indeed, the management subcommittee operated as much as feasible in that manner. The third alternative was to eschew formal meetings and have the required discussions over email—in some sense a 24/7/365 virtual meeting that was always going on, and into which one could always interject an opinion.
This ongoing virtual meeting could only have worked if there was a very strong leader, who owned the crown jewels—the LRM—and who could harness the energy expended in email discussions into meaningful LRM updates.
This is where John’s technical brilliance shone through, and why he richly deserves the award he is receiving. It is no exaggeration to state that without John making sense of the often heated international discussions regarding SystemC features, a defective LRM would have been produced, and that LRM would have been finished quite a bit later. However, because we had John, the 1666-2011 LRM (which, not so incidentally, can be freely downloaded, courtesy of the Accellera Systems Initiative) was masterfully crafted and released as close to our agreed-upon schedule as could reasonably be expected.
Ah, but before I get carried away, I would be remiss if I did not mention a major flaw of John’s. This flaw was uncovered during the mind-numbing process I went through in November 2011, working with an IEEE editor to proofread the 1666-2011 LRM. The editor, who actually did a fine job, was very diligent in changing phrases that John wrote, such as:
“Whilst it is possible for components to use SystemC ports…”
to:
“Although it is possible for components to use SystemC ports…”
In this, we uncover John’s major flaw: he speaks and writes in the regional British English dialect, as opposed to Standard, i.e., American, English! That’s OK John—we forgive ya’.
Lots of ink has been spilt (in a good cause) in reporting on the new Accellera Systems Initiative organization. However, many of you may still wonder how you can get an in-depth view on what is happening in this new organization that resulted from the merger of Accellera and the Open SystemC Initiative (OSCI). The answer is that such an in-depth look will be available to you at the fast-approaching DVCon 2012, scheduled to take place during the week of February 27th at the DoubleTree Hotel in San Jose.
There will, of course, be technical presentations revolving around Accellera Systems Initiative Standards (and IEEE Standards that originated in either Accellera or OSCI) throughout the entirety of DVCon. However, the first day of DVCon is something truly special: “Accellera Systems Initiative Day” during which three tracks of tutorials will be presented that cover some of that organization’s prime standards (all times are on Monday February 27):
1) An Introduction to IEEE 1666-2011, the New SystemC Standard (1:30-5 PM)
2) UVM: Ready, Set, Deploy! (8:30-5 PM)
3) An Introduction to the Unified Coverage Interoperability Standard (1:30-3 PM)
4) Verification and Automation Improvement Using IP-XACT (3:30-6:30 PM). The last hour of this tutorial will consist of an open poster session and reception.
Details on these various tutorials can be found on the DVCon website. All of these tutorials will require a paid registration, but the wealth of information presented will make the price of admission well worth it.
In addition to these tutorials, the North American SystemC Users’ Group (NASCUG) will hold its 17th meeting (or, for the ancient Romans among you, meeting number XVII) from 8:30 to noon. This co-located meeting will be open to the public and will, as the name of the session implies, focus on users’ experiences with SystemC.
At noon, in between the tutorials, there will be a sponsored lunch—sponsored in this case by the Accellera Systems Initiative, which it should be noted also sponsors all of DVCon. As we did at last year’s DVCon, we wish to give the lunch attendees a break from PowerPoint presentations, and will conduct this lunch as a “town hall meeting”, i.e., an open discussion of issues relevant to the participants. I shall, once again, be the nominal host for this event, but also present will be Accellera Systems Initiative Board members, Officers and Technical Working Group Chairs to field questions from the audience.
To get into the spirit of the lunch discussion, we have posted an open-ended question to kick things off:
“What will success for the Accellera Systems Initiative look like?”
In other words, looking from this point in time, what ought to be the goals of the newly merged organization, and looking, say, five years from now, how will we know if we have been a success?
I urge all participants to give this question a lot of thought prior to DVCon, and come to the lunch prepared to let the Accellera Systems Initiative leaders hear your preference for the new organization’s strategic direction.
By now most of you will have heard and read about the merger of Accellera and OSCI into the Accellera Systems Initiative. A question that may linger after reading various press accounts is “why a merger?” There are, of course, synergies in standards to be discovered and exploited—anyone with even a rudimentary knowledge of current EDA standards will understand that. But why the drastic act of merging into a single organization, with all of the difficulty and expense that such a merger entails? Was one of the two organizations in trouble and in need of “rescue”? If not, why not remain as two separate organizations and set up a technology cooperation agreement?
First, both organizations were doing fine, from both a standards-setting and a financial standpoint. The merger happened to make things better, as opposed to preventing bad things from happening. Why not, then, just set up a cooperation agreement of some sort and keep working independently?
To see the answer, consider that the “raw material” of any standards organization (EDA or not) is its workers and the technical knowledge that they bring to the table. This is true of many high-technology organizations, but the difference in a standards organization is that almost all of its “employees” are actually employed by somebody else. There are exceptions—some standards groups have a paid Chief Executive and staff, while others have paid administrators. However, those paid people are not the norm—most workers in standards groups receive a paycheck from a different company.
To make matters more interesting, most workers in a standards organization are not paid by their actual employer to work full time, or even a majority of their time, on standards. Again, there are exceptions—I, for example, am paid by Cadence to spend a lot of my time on standards-related activities, but even I have significant non-standards-related work. The situation is even more dramatic for most standards workers: they have their “day job”—in the case of EDA standards, architecting and/or writing SW, doing design consulting, managing a group that does any of the preceding, and so forth. Their employer does “allow” them time to work in a standards group, but it is usually not enough time to cover the standards work actually done by the employee—assuming that the employee actually worked the mythical 40-hour work week, instead of the open-ended work week many of us enjoy.
Further, while many companies do allow (and may require) employees to work in standards groups, the immediate bottom line is (understandably) usually still king. For example, if an engineer is working on a project for his/her employer and deadlines get tight or are missed, the amount of time that the employee will be able to spend on “outside” activities often dries up. Even when this sort of problem is avoided, an employee’s standards meeting schedule is almost never considered when his/her travel schedule is set. Thus, it is not unusual to have group members calling in from airports or in the middle of the night from undisclosed locations.
All of this is not to complain, but to point out the realities that almost all “volunteers” in standards groups face. Indeed, I highlight these obstacles because they (to my mind) lead directly to the reason why forming the Accellera Systems Initiative made so much sense. In particular, around the time that a merger began to be considered, it was becoming obvious that the standards that OSCI and Accellera had developed were starting to touch each other—the TLM 2.0 implementation in the UVM reference implementation was a case in point. It was also clear that there were places where OSCI and Accellera standards were not as connected as might have been desired—the AMS offerings from each group were cases in point. Finally, although none of the officers of either group can tell the future, it did not take a large leap of faith to predict that there would be multiple other opportunities where OSCI and Accellera standards could benefit by having their disjoint groups work together.
A technology cooperation agreement would have been a good first step in the face of all of this, and such an agreement was discussed early in the negotiations that ultimately resulted in the merger. However, such an agreement would not have removed the organizational barriers that stood between Accellera and OSCI. Those organizational barriers—viz., different rules (policies and procedures), different IP policies, very different cultures and so forth—were preventing the engineers who work in the two groups from cooperating as fully as they could. Of course, with extra effort, OSCI and Accellera volunteers could have met and crafted joint standards, but not only would such joint standards have been special cases (“one-offs”), this sort of interorganizational work would have made the various volunteers’ jobs that much harder.
This is why the OSCI-Accellera merger makes so much sense to me. By removing barriers, i.e., by uniting the two organizations “under one roof” and tearing down the interior walls under that roof, synergies between standards will be more easily exploited by the workers of the new organization. Yes, some of those walls are not quite torn down yet—a common IP policy needs to be crafted, and each group comes with its own culture ready to be melded into a new culture, but the basic foundation (and roof!) is in place for new, as of yet undreamt, synergistic standards to be developed.
The workers in both Accellera and OSCI have, over the years, produced front-end EDA standards used in the Electronics Industry around the world, often after becoming IEEE and IEC standards. Now they have fewer organizational barriers in their way as they develop the next generation of EDA standards. This is why this merger had to happen.
On Sunday, December 4, Larry Saunders received the Ron Waxman award from the IEEE Design Automation Standards Committee (DASC) for extraordinary service to the DASC. A back injury kept me from attending the awards ceremony, but it did not keep me from recalling Larry’s seminal work. Larry was first chair of the 1076 (VHDL) Working Group (WG), and an important proponent of VHDL inside of IBM and the industry in general. In this post, I’d like to concentrate on the first role: being the initial chair of the 1076 WG.
One thing to keep in mind is that chairing the 1076 group in 1986 was not like chairing an EDA standards WG today. Of course, being the chair of a WG in the present is non-trivial, but it was an enormous task for the 1076-1987 chair, precisely because this was the first EDA WG. Not only was there a mountain of technical work to be tackled, coming both from the DoD VHDL 7.2 groundwork and from general industry input, but there was also the need to harness the high-powered technical members of the WG while keeping their considerable egos checked at the door. Larry managed to keep the process going, monthly face-to-face meeting after monthly meeting, in a manner that maintained the technical enthusiasm of the members while delivering a large, technically complex standard on schedule. Quite a job well done!
There is a personal aspect to this—there almost always is. I took over the 1076 WG from Larry after the 1076-1987 standard was published. There were some new members of the WG, and some original members dropped out, but the WG membership stayed pretty much the same. My tenure as the 1076 WG chair was considerably less smooth than Larry’s, a development I attribute partially to (a) commercial interests entering the picture (as VHDL simulators started to be developed) and (b) the academic world (especially in Europe) waking up to VHDL after 1987. But (what I would characterize as) the sometimes raucous meetings of the 1076 WG that eventually produced VHDL 1993 (which was supposed to be VHDL 1992) can be attributed both to my inexperience and to my not being Larry Saunders. I have learned a lot since then about running standards groups, but I still point to Larry’s tenure as the chair of 1076-1987 as leadership done right.
Larry gradually reduced his participation in the formal EDA standards world after the initial VHDL work was completed. He kept his “finger in the pot” promoting VHDL-based design methodologies while he was a design consultant with SEVA (which he co-founded) and with other companies. Eventually Larry migrated into the IT space, and opened up an IT services company in San Diego, where he is certified in technologies of which I have barely any knowledge. That is why, when Larry was suggested as a Ron Waxman Award recipient, I thought that it was a great idea. It is important both for the IEEE DASC to recall the people that helped it become the organization it is today, but also for Larry (even though he is no longer part of the EDA Standards world) to recognize that his work has neither been forgotten nor unappreciated.
In a recent post on the DeepChip website, Gary Smith states that Fortran and Ada are superior to C and its variants, but notes that “…unless there is a major revolt among Embedded Programmers we are stuck with C and SystemC”. I was very surprised to read this (and surprised at Fortran’s cameo appearance), since I had thought that the Ada-C wars had long since ended. However, Gary’s rather strong statement would argue that the fires are still smoldering.
Rather than stoking any remaining embers, I will forgo a direct Ada-C comparison, except to say that, having used both languages in great depth (my first engineering job was at a defense contractor), I much preferred programming in C and C-derived languages. That aside, I would also suggest that no one should hold his/her breath waiting for a “revolt among Embedded Programmers”. The DoD mandate of Ada in 1987 gave such programmers a fool-proof way to embrace Ada—no revolt was required. What happened, however, was that the Defense Department was inundated from the beginning of the Ada mandate with requests for exemptions, and eventually the mandate was essentially phased out. Moreover, programmers at companies in every geographical region have, over the past three decades, given the thumbs up to C and its derivatives, including SystemC. Indeed, I think it reasonable to say that a sure-fire way to start a “major revolt among Embedded Programmers” (or at least among the vast majority of them) would be to force them to use Ada.
We are, indeed, “stuck with” C and SystemC– a fact that will please many programmers.
All of you undoubtedly noted the passing of Steve Jobs on October 5. What you might have missed is the passing of another high technology giant, viz., Dennis Ritchie a few days later. Ritchie was the father of the C language and one of the main forces behind the development of UNIX®. Indeed, the two accomplishments were closely related—C was created as a “high level” language to be used in the development of UNIX.
At first, the low-key coverage of Ritchie’s death, especially when contrasted with the post-mortem lionization of Jobs, annoyed me. This was, so I thought, just a reflection of the low esteem in which society holds “the geeks”—a useful but somewhat laughable group of misfits who inhabit a stereotyped world of bespectacled males sporting pocket protectors. After all, so I argued to anyone who would listen (not many takers there), C and its derivatives dominate the software world, while UNIX and its derivatives run on everything from computers to climate control systems to automobiles. Thus I saw, with what I took at the time to be perfect clarity, that Ritchie was much more significant than Jobs. Take away the creation of the iPad® and things will not be the same, but take away C and UNIX and the high-technology world becomes something radically different.
After calming down, I realized that the low-key reporting on Ritchie’s passing (especially as opposed to the outpouring of emotion that accompanied Jobs’) was a result of Ritchie’s “children” (and grandchildren many times over) being in some sense invisible to most people. Everyone can appreciate the beauty and elegance of an iPad, but unless you are a member of the somewhat exclusive club of people familiar with programming languages, the beauty and elegance of C is not just overlooked, but unfathomable. Similarly, how many people using an iPad know that the iOS powering their machine is a derivative of Ritchie’s UNIX? Not many, I’d wager.
I later began thinking further about the similarities and differences between Jobs and Ritchie. Differences are easy to identify—Steve Jobs was, of course, a showman, while Dennis Ritchie was very private; I frankly had not heard anything new about him in years. There was also the difference in their focus—Jobs was (rightly) all about making profits for Apple and the other companies he headed, while Ritchie worked in a research lab (Bell Labs), which to a large degree gave away its products for free. I am sure that he made some money on the book on C that he wrote with Brian Kernighan, but that total would likely not add up to a few days of what Steve Jobs earned.
Yet there is an underlying similarity to the products with which they have been associated. Apple’s products under Jobs have been noted for their elegance and, especially with the advent of the iPhone® and iPad, for being platforms on which to run “apps”. That is, they are essentially sophisticated pieces of infrastructure that serve as the home to add-on pieces of software.
But a similar description can be applied to both C and UNIX. C is a very small and tightly written language, and it gains its power through the use of libraries that run “on top of” the base language. Apropos this point: I can still recall being astonished in an introductory programming course when my instructor said that there was no I/O built into C. Of course there is, I initially thought, wondering (again) about the instructor’s competence. I had been using printf commands like they were going out of style, and aren’t they I/O statements? Of course, as a beginning student I had missed the point that printf commands were only available to me when using C because I had included the stdio library in my program. The C language was “merely” the elegant platform that made libraries like stdio (and the hundreds of other libraries that I later used and created) useful.
The design of UNIX—a small but powerful kernel at its heart, with sophisticated libraries and applications running on top of it—lends itself to a similar comparison to the products championed by Steve Jobs. Thus, if one strips away all of the obvious dissimilarities between the two men, a case can be made that they were united by a common design philosophy.
In the end, the world has lost two giants of high technology in a very short period of time. One may argue about the relative importance of each (or importance vis-à-vis each other), but I will just end by saying that both changed our world for the better, and both will be missed.
UNIX is a registered trademark of The Open Group
iPad and iPhone are trademarks of Apple Inc.
As most of you will have seen by now, Accellera and OSCI have announced their intention to form a new EDA standards organization that will cover the design flow roughly from the Gate level up through the System level. This may seem a natural move to most people, and one that could easily have happened years ago. If we flip back to what now seems a very remote time, viz., 2000, when Accellera was created by the merger of Open Verilog International and VHDL International, there seems to have been no good reason why OSCI could not have been a third party to that merger. Or was there?
One interesting tidbit that has been lost in the temporal fog is that at an early Accellera meeting in 2000, a motion was made to urge OSCI to become part of Accellera. This meeting of the “Accellera C Standard Group” was no gathering of wannabes: the list of the attendees confirms that this was a gathering of the C-literate glitterati of the EDA world of the time (yours truly was not present, which just reinforces this point). The minutes of the meeting are somewhat opaque, but it is clear that there was a desire on the part of many of the participants to have both OSCI and the OSCI-rival SpecC group join Accellera to help form an organization with a larger scope than the HDL-focused Accellera. It is noted in the minutes that Kevin Kranen, representing OSCI, and Dan Gajski, representing SpecC, would go back to their respective organizations and raise the possibility of joining forces with Accellera.
Clearly, neither OSCI nor SpecC joined Accellera in 2000, and I can find no other evidence that the matter was seriously broached in the next few years. It is interesting to speculate what would have happened had the SpecC group actually joined Accellera. I posit that OSCI would likely have quickly become irrelevant, given the level of corporate backing SpecC would have received from the members of Accellera. However, this did not happen, and SpecC more or less withered away. The question still remains, though, why OSCI did not join Accellera in those early days.
I would argue that there were at least two reasons that this did not happen. First, it is generally forgotten that in 2000 the “Open SystemC Initiative” was not particularly “open”. Rather, OSCI in 2000 was still a group led by Synopsys and CoWare to further the use of Synopsys’ SystemC language. In fact, it was only in 2001 that Synopsys opened up OSCI and gave up control of SystemC and OSCI became the OSCI of today.
Note that this is not an attempt to bash Synopsys—OSCI was a legitimate marketing gambit on their part, and they did in fact relinquish control of SystemC when it became clear that this was best for the industry. Rather, it is to point out that when Accellera was formed, and certainly at the time of the above referenced Accellera meeting, OSCI was not really the sort of open industry group that would have reasonably fit into Accellera.
Moreover, when Synopsys did give up ownership of SystemC at DAC 2001, momentum quickly built behind OSCI: Cadence and Mentor both joined, along with heavyweights such as Sony, TI, Ericsson, Fujitsu, NEC and others. Indeed, this momentum was so great following the opening of OSCI in 2001 that a merger with Accellera became a non-issue. Maybe it should have remained an issue, but it did not.
Thus, there was a sort of “procedural” reason that OSCI did not team with Accellera in the early 2000s, but there was, I believe, a harder-to-verify, more “psychological” reason why OSCI and Accellera were an unlikely pair during this early period. To put it bluntly, RTL and System-level people just did not get along very well in the early 2000s. As evidence, one need only revisit a memorable panel session in 2001 at the International HDL Conference, a predecessor of the current DVCon. This panel was very clearly divided—more accurately, “ruptured”—between the Verilog and SystemC camps, with Simon Davidmann on the edge calling attention to what would become SystemVerilog. At the end of this raucous panel, John Cooley held up a cell phone to the audience and asked how many of the attendees planned to use a C-based language to design such a phone in their next project. The answer, of course, was very few—not a surprising result, given that this was a conference devoted to RTL/Gate-level design in VHDL or Verilog. Nonetheless, this was taken by the Verilog faction as prima facie evidence of the non-viability of C-based languages going forward.
This panel was not an isolated event; it represented a schism between System-level designers and everyone else. “ESL was coming,” and it dutifully showed up every year at DAC in Gary Smith’s forecasts, but most mainstream designers did not take it very seriously. There was a hardy band of designers and industry executives who bucked this mainstream thinking—OSCI’s subsequent growth and prospering during the 2000s is their legacy. Nonetheless, the RTL and Gate-level users (on whom Accellera mostly focused) remained from Mars, while the System-level users, i.e., the OSCI focus group, lived on Venus.
This situation has, of course, radically changed since 2000/2001—planets have collided and the Martians and Venusians now inhabit the same planet. Mainstream design flows include tools and IP that are based on both OSCI and Accellera developed standards. The time has passed in which RTL and below designers can ignore those designing at higher levels of abstraction, just as those designing at higher levels can no longer consider themselves as being “above implementation”. As a direct consequence, now is the right time for the standards bodies covering the Gate through RTL through System-level design/verification flows to come together. This is why unifying Accellera and OSCI just feels—unlike 10 years ago—like such a “no-brainer”.
The last two months since my last post have been extremely busy for me—several weeks out of the office, and new responsibilities at work. In this post, I’d like to look briefly at the two conferences, DVCon and DATE, that I attended during this period.
By now everyone knows that DVCon (held in early March in San Jose, CA) was a success by any measure—an increased number of attendees, more papers submitted, a filled exhibition hall and so forth. As a member of the conference’s Steering Committee and an officer of Accellera, which sponsors it, I am doubly pleased with DVCon’s continuing success.
As usual, this conference had a “laid back” feel, unlike the frenetic DAC, which allowed time for insightful discussions with fellow attendees—the sort of discussions that would have been difficult to have at DAC, where every minute seems to be reserved for some pre-planned meeting or another. But there was another trend that I noticed at DVCon: the continued emergence of the “Design” portion of the “Design and Verification Conference”.
It is not all that much of a secret that “DVCon” in the past decade could have been dubbed “VCon” if just its technical content were assessed. This is not surprising, since functional verification and HDLs are in the genes of the conference: from “HDLCon” all the way back to its origins as the “VHDL International Users’ Forum” (VIUF) and the “VHDL Users’ Group” (VUG), this conference has been largely functional verification-centric. This continues to be the case at least to some degree—many of the accepted papers focused on verification topics, and the UVM Workshop was by far the most attended tutorial in the history of the conference.
Yet, the “D” side of the house was unmistakably present. The North American SystemC Users’ Group (NASCUG) held a well-attended workshop on DVCon’s Monday morning that featured a keynote by Jim Hogan. This was followed by an equally well-attended tutorial in the afternoon on SystemC TLM 2.0, presented by OSCI. In between these two events was a joint UVM-SystemC “town hall meeting” that attracted around 300 people.
All of this was significant, but the more interesting phenomenon was what might be termed “session attendance patterns”. At this year’s DVCon, I observed significant cross-fertilization of the verification and design communities. Specifically, I noticed a number of people I consider to be “SystemVerilog people” sitting in on the SystemC tutorial, and “SystemC people” attending the UVM Workshop. This is not surprising, since the boundary between design and verification languages has become somewhat blurred—witness the inclusion of TLM 2.0 in both SystemC and UVM, and the several calls for a UVM-SystemC implementation at the aforementioned “town hall meeting”.
That said, I plan to seek a more concerted effort by the DVCon Steering Committee to amplify this trend, and to get more “Design” content, especially more “SystemC Design” content, into DVCon 2012. One good starting point might be to promote DVCon’s call for papers through the worldwide network of SystemC Users’ Groups. In addition, perhaps NASCUG could be more integrated into the conference, as opposed to being held “in conjunction” with it.
Increased representation from the SystemC Design community at DVCon will occur naturally, but the trend can still be nudged along. The result will be an even better and more diverse event.
Two weeks after DVCon, I found myself at DATE in Grenoble, France. Not having attended the conference in several years, and having heard rumors of DATE’s demise, I approached the conference with a mixture of nostalgia (I remember when DATE was the “European DAC” with all that entails), and trepidation. I had other business to do in Europe and DATE was not my main reason for being there, but I still wondered whether I would be underwhelmed by the DATE goings-on.
The answer is that I was not at all underwhelmed but, rather, pleasantly surprised. DATE has, I believe/hope, successfully transformed itself from the European DAC into a get-together that more closely resembles ICCAD: a conference with a focus on technical paper and panel discussion sessions, and with a distinct academic flavor. Yes, there was an exhibition floor, but all three major EDA vendors were absent, and the “booths” were about the size of those at DVCon (mostly 10×10 popups).
I must admit that I was initially disappointed in all of this. Then it hit me that I had been to this conference before—it was DAC before it “grew up”, i.e., the DAC of the early 1980s. To mark the contrast, ask yourself when you last heard a group of people on the DAC exhibition floor hotly discussing one of the papers that had been presented in a recently concluded session. I cannot recall when I last observed such a thing at DAC in the past decade or two, but it certainly happened multiple times at this year’s DATE. Indeed, I would say that for many of the attendees to whom I spoke, the glimpses of advanced technology given during the paper and panel sessions were at least as important as the display of presently available technology on the exhibition floor.
I have no idea about the future viability of DATE. However, I am pleased that I did get to experience it, even for a short period, this year. It brought me back to a simpler time as a freshly minted Ph.D. when I went to conferences to attend the paper sessions and debate technology trends with the authors and fellow attendees. It was a pleasant time to revisit, albeit all too briefly.
As you may have already seen in the blogosphere and in the tweetdom, the Accellera Board today approved the release of UVM 1.0. This release is a major accomplishment from a technical standpoint, but it also represents a triumph of the collective will of the Electronics/EDA industry.
If one flashes back to January 2008, the verification landscape from an interoperability standpoint was bleak. VMM from Synopsys was widely used, and the Cadence-Mentor collaboration had just released OVM, which followed on URM/eRM from Cadence and AVM from Mentor. The “standard” chain of events one would have expected to unfold would have been for VMM and OVM to circle each other like characters in a Sergio Leone film, while the users sat in the audience not knowing on which side to place their bets.
Indeed, this is more or less what happened during 2008, but then a funny thing happened. Users, led by Intel, along with other companies such as Freescale, came together and called a time-out. They demanded that a “universal” verification methodology be developed that would combine the best of both OVM and VMM. Thus, in parallel with the OVM-VMM jousting that took place during 2008, the Accellera Verification IP Technical Subcommittee (VIP TSC) was formed, and began the work that eventually led to the release of UVM 1.0.
But the formation of the VIP TSC was only a start, and could have led nowhere. Once again, things did not play to form. For better or worse, when EDA/Electronics people play word association games, they will likely match the word “standards” with “war”. Much to my chagrin as a person deeply involved with EDA standards over the past 25 years, there have been a fair number of “standards wars” along the way. And that is exactly what an observer might have predicted would erupt during the VIP TSC’s work. Surely one of Cadence, Mentor or Synopsys would decide that things were not going their way, and subsequently take their marbles and go home—or at least sit on the curb and sulk.
But that did not happen. Yes, there were some rough patches—as occur in any group activity—but overall the VIP TSC acted as a harmonious group. The upshot was that by the end of the effort, all members of the VIP TSC could look back at UVM 1.0 and be very satisfied with the results. This is, to my mind, a remarkable outcome, and one in which I am very proud to have played a small part.
I would also like to publicly thank Dennis Brophy and Yatin Trivedi, my counterparts at Mentor and Synopsys respectively, for their parts in bringing this all to fruition. As was to be expected, there were “interesting” discussions among the three of us during this process, but in the end we remained united in our belief (and in our actions) that UVM is right for the industry.