
Posts Tagged ‘Xilinx’


Blog Review – Monday Sept. 08 2014

Monday, September 8th, 2014

Semiconductor sales – good news; Namibia’s Internet project; Rambus fellow considers IoT security; Intel fashions a year of wearables.
by Caroline Hayes, Senior Editor


If you are wondering what Intel’s New Devices Group has been doing since its formation 12 months ago, Michael A. Bell reveals all in this blog. All the obvious wearable candidates are there, but with a twist – biometric earbuds, a bracelet computer… And there is also an insight into business news, an indication of where this business unit is headed.

A fair bit of name-dropping kicks off this Rambus blog, celebrating Rich Page’s IoT projects. This interview with the company’s fellow is an interesting take on IoT, as Page predicts a shift of emphasis to security and the ensuing design challenges.

Citizen Connect is a worthy project, supported by MyDigitalBridge Foundation, Microsoft, and Adaptrum. Steve Leibson, Xilinx, reports on the latest project for broadband access in Africa, wirelessly connecting 28 schools in Namibia with Ethernet over a 62x152km area using TVWS (TV White Space) spectrum.

It’s been a while since a journalist could write this, but it looks like good news for the semiconductor market. Falan Yinug explains some of the statistics behind the SIA report of record-setting Q2 sales – a feel-good read.

Blog Review – Mon. August 11 2014

Monday, August 11th, 2014

VW e-Golf; Cadence’s power signoff launch; summer viewing; power generation research.
By Caroline Hayes, Senior Editor

VW’s plans for all-electric Golf models have captured the interest of John Day, Mentor Graphics. He attended the Management Briefing Seminar and reports on the carbon offsetting and a solar panel co-operation with SunPower. I think I know what Day will be travelling in to get to work.

Cadence announced plans to tackle power signoff this week, and Richard Goering elaborates on the Voltus-Fi Custom Power Integrity launch in a detailed and informative blog on the subject.

Grab some popcorn (or not) for this summer’s blockbusters, as lined up by Scott Knowlton, Synopsys. Perhaps not the next Harry Potter series, but certainly a must-see for anyone who missed the company’s demos at PCI-SIG DevCon. This humorous blog continues the cinema analogy for “Industry First: PCI Express 4.0 Controller IP”, “DesignWare PHY IP for PCI Express at 16Gb/s”, “PCI PHY and Controller IP for PCI Express 3.0” and “Synopsys M-PCIe Protocol Analysis with Teledyne LeCroy”.

Fusion energy could be the answer to growing energy demands, and Steve Leibson, Xilinx, shares Dave Wilson’s (National Instruments) report on a fascinating project to monitor and control a compact spherical tokamak (used as a neutron source) with the UK company Tokamak Solutions.

Deeper Dive – Is IP reuse good or bad?

Friday, May 30th, 2014

To buy or to reuse, that is the question. Caroline Hayes, Senior Editor asked four industry experts, Carsten Elgert (CE), Product Marketing Director, IPG (IP Group), Cadence, Tom Feist (TF), Senior Marketing Director, Design Methodology, Xilinx, Dave Tokic (DT), Senior Director, Partner Ecosystems and Alliances, Xilinx, and Warren Savage (WS), President and CEO, IPextreme, about the pros and cons of IP reuse versus third party IP.

What are the advantages and disadvantages when integrating or re-using existing IP?

WS: This is sort of analogous to asking what are the advantages/disadvantages of living a healthy lifestyle? The disadvantages are few, the advantages myriad. But in essence it’s all about practicality. If you can re-use a piece of technology, you don’t have to spend money developing something different, which includes a huge cost of verification. Today’s chips are simply too large to functionally verify every gate. Part of every chip’s verification strategy assumes that pre-existing IP was already verified during its development; if it is silicon-proven, that further decreases the risk of latent defects being discovered. The only reason not to reuse an IP is that the IP itself is lacking in some way that makes the case for creating a new IP instead.

TF: Improved productivity. Reuse can have disadvantages when using older IP on new technologies; it is not always possible to target the newer features of a device with old IP.

DT: IP reuse is all about improving productivity and can significantly shrink design time, especially with configurable IP. Challenges come when the IP itself needs to be modified from the original, which then requires additional design and verification time. Verification in general can be more challenging, as most IP is verified in isolation and not in the context of the system. For example, if the IP is being used in a way the provider didn’t “think of” in their verification process, it may have bugs that are discovered during the integration verification phase and then have to be traced back to determine which IP has the issue so that it can be corrected and re-verified.

CE: The benefits of using your own in-house IP are that you know what the IP is doing and can use it again – and because it is not available as third party IP, it provides differentiation. The disadvantage is that it is rarely documented well enough to be used by different departments. The same engineers know what they are getting when they reuse their own IP, but properly documenting it for a neighboring department to make the product can be time-consuming. It is also the case that unwanted behavior is not verified. However, it is cheaper and it works.

What are the advantages and disadvantages of using third party IP?

WS: The advantages of using third party IP are usually related to that company being a domain expert in a certain field. By using IP from that company, you are in fact licensing its expertise and putting it to use in an effective way. Think of it like going to dinner at a Michelin-starred restaurant. The ingredients may be ordinary, but how they are assembled is far more exceptional than anything the ordinary person can achieve on their own.

TF: Xilinx cannot cover all areas. Having a third party ecosystem allows us to increase the reach to customers. We have qualification processes in place to ensure the quality of the IP is up to the required standard.

DT: Xilinx and its ecosystem provide more than 600 cores across all markets, with over 130 IP providers in our Alliance Program. These partners not only provide more “fundamental” standards-based IP, but also a very rich set of domain and application-specific IP that would be difficult for Xilinx to develop and support. This allows Xilinx technology to be more easily and quickly adopted in hundreds of applications. Some of the challenges come in terms of consistency of deliverables, quality, and business models. Xilinx has a mature partner qualification process and also works with partners to expose IP quality metrics when we promote a partner IP product, which helps customers make smarter decisions when choosing a provider or IP core.

CE: Third party IP means there is no rewriting. Without it, it could take hundreds of man-years to develop a function that is not a major selling point for a design. Third party IP is compatible, as well as bug-free or at least limited in bugs. It is like spending hundreds of hours redesigning the steering wheel of a car – it is not a differentiating selling point of the vehicle.

Is third party IP a justifiable risk in terms of cost and/or compatibility?

DT: The third party IP ecosystem for ASIC and programmable technologies has been around for decades. There are many respected and high quality providers out there, from smaller specialized IP core partners such as Xylon, OmniTek, Northwest Logic, and PLDA, to name a few, up to industry giants like ARM and Synopsys generating hundreds of millions of dollars in annual revenue. But ultimately the customer is responsible for determining the system, cost, and schedule requirements, evaluating IP options, and making the “build vs. buy” decision.

CE: I would reformulate that question: can [the industry] live without IP? Then, the risk is justifiable.

How important are industry standards in IP integration? What else would you like to see?

TF: IP standards are very important. For example, Xilinx is on the IEEE P1735 working group for IP security. This is very important to protect customer, third party and Xilinx IP throughout flows that may contain third party EDA tools, and still allow everyone to interoperate on the IP. We hope to see the 2.0 revision of this standard ratified this year so all tool vendors and Xilinx can adopt it and make IP truly portable, yet protected.

DT: Another example is AMBA AXI4, where Xilinx worked closely with ARM to define this high speed interconnect standard to be optimized for integrating IP on both ASIC and programmable logic platforms.

WS: Today, not so much. There has been considerable discussion on this topic for the last 15 years, and various industry initiatives have come and gone. The most successful one to date has been IP-XACT. There is a massive level of IP reuse today, and the lack of standards has not slowed it. I am seeing that the way the industry is handling this problem is through the deployment of pre-integrated subsystems that include both a collection of IP blocks and the embedded software that drives them. I think that within another five years, the idea of “Lego-like” construction tools will die, as they do nothing to solve the verification problem associated with such constructions.

Have you any statistics you can share on the value or TAM (Total Available Market) for EDA IP?
WS: I assume you mean IP? Which, I like to point out, is distinct from EDA. True, many EDA players are adopting an IP strategy, but that is primarily because the growth in IP and the stagnation of the EDA market are forcing those players to find new growth areas. Sustained double-digit growth is hard to ignore.

To the TAM (total available market) question, a lot of market research says the market is around $2 billion today. I have long postulated that the real IP market is at least twice the size of the stated market, sort of like the “dark matter” theories in astrophysics. But even this ignores considerable amounts of patent licensing, embedded software licensing and such, which dwarf the $2 billion number.

TF: According to the EDAC, semiconductor IP revenue totaled $486 million in Q4 2013, a 4.2% increase compared to Q4 2012, and the four-quarter moving average increased 9.6%.

CE: For interface IP alone – excluding processor IP, no ARM – I can see the market growing 20% to $400 million [analyst IPnest].

IP Integration to accelerate SoCs

Thursday, May 29th, 2014

The accelerating SoC market inevitably means a faster rate of adoption, and system level designers are also faced with fragmenting markets, newly adopted standards and multiple types of complex requirements in a system. The integration of IP (Intellectual Property) can be a boon, but there are caveats, as Caroline Hayes, Senior Editor reports.

There are many types of IP in use today: processor, DSP, high-speed IP, clocking IP, even whole systems, along with a variety of high-speed I/O and infrastructure IP (such as the AXI4 interconnect). It is used to support protocols, interfaces, controllers and processors on the core.

Dave Tokic, Senior Director, Partner Ecosystems and Alliances, Xilinx believes that IP is invaluable. “It just isn’t possible to create the systems being developed without substantial IP integration,” he says. He goes on to explain the appeal of IP. “More and more, designers are being asked to create increasingly complex designs much faster on programmable devices. As such, we’ve seen a tremendous growth in the number of available IP cores and their use and reuse on programmable devices. Typically, we see the use of a broad range of connectivity and high speed I/O such as Ethernet, PCI Express, SDI, MIPI, and HDMI; general purpose and specialized processors, from compact Xilinx MicroBlaze microcontrollers and high-speed DSP to the powerful multi-core ARM A9 application processors we’ve integrated into the Zynq All Programmable SoC; optimized DMA and memory controllers, all the way up to the highest performance Hybrid Memory Cube controllers; application-specific IP for compression such as HEVC, JPEG-2000 to high performance image and video processing pipeline IP; and a broad range of clocking and infrastructure IP, such as the AMBA AXI4 interconnect.”
For Carsten Elgert, Product Marketing Director, IPG (IP Group), Cadence, this diversity in system design is why the IP market is growing – according to the EDAC, semiconductor IP revenue totaled $486 million in Q4 2013, a 4.2% increase compared to Q4 2012.

Tom Feist, Senior Marketing Director, Design Methodology, Xilinx, summarizes it thus: “As geometries shrink, typically parts and designs get larger and more complex yet design teams do not. This means higher productivity is required to produce larger designs. A key factor to address this is IP reuse.”
Connections to an outside DDR controller, flash memory controller, display output or other display standards, and communications protocols like USB, are all defined, and the designer has to meet the standards of the protocol committees. Elgert says: “When designing an SoC, there is no time, and [the designer] usually does not have the knowledge to design the standard protocol by himself – the end product will be sold because of a unique functionality, not because of an excellent USB protocol.”
It is, he argues, far easier to use IP blocks as components, buying a peripheral I²C bus or UART protocol rather than building 20k gates. The availability of IP helps – there is interface IP, communications IP, microprocessor IP and IP from third party vendors, such as EDA companies and ARM, says Elgert.
He points out that IP is bought to ensure code compatibility, and spans high-end processors, such as ARM, and low-end, 8bit processors in SoCs with less than 3k gates. There is also EDA IP and legacy processors. “And do not forget analog IP,” counsels Elgert. “The SoC analog or mixed-signal portion needs PLLs and sensors using ADCs and DACs – this is very common if the SoC is analog.”
With so much to choose from, I asked what the ‘rules’ are, if any, for choosing IP. “Some of the key factors for successful IP use are the quality of the IP and ease of integration,” Tokic says, referring to the company’s functional verification and validation. “We work closely with our ecosystem of qualified IP providers to provide visibility to our customers on various IP quality metrics… [We] led the industry in automating IP integration with the IP Integrator technology built into the Vivado Design Suite that shortens the design cycle up to 10x over more manual processes.”

Elgert breaks it down into technology nodes, posing the question for engineers to ask – “What is available on the process node that I’m forgetting? This can be an extensive shopping list for complex systems,” he says. He also advises narrowing the choice down to a preferred silicon vendor.
For example, he says, the PHY and the layer stack are relevant, being specific to each available technology node. “I would consider as many silicon technologies as possible,” is his advice.
Other items on Elgert’s list are to consider the technical performance and PPA (Power, Performance, Area). “What is the power consumption, and how big is the silicon? Silicon still costs money. Compare PPA, which is often confidential information,” he says, “for responsiveness and clarity of data – EDA support is important here”.
To implement the IP, Feist observes that customers prefer to use industry standards and avoid proprietary bus/interconnect. “By leveraging ARM’s AMBA AXI4 protocol [customers] can leverage state-of-the-industry interconnect that enables point to point connections versus a bus structure. This accelerates their design development and the designs are more portable.” (The vast majority of internal bus structures use AMBA AXI, confirms Elgert.)

Xilinx relies on an ecosystem for IP integration support. Feist explains the role of the Vivado design suite. “In Vivado IP Integrator (IPI), all parts of the design should be packaged as IP. If you are just using third party or Xilinx IP, this is done for you. For customer IP, they need to package this up, and we provide a packager for this. To get the most out of IPI, the IP should be designed in a way that uses interfaces where possible, like the Xilinx IP. By using AXI, it allows customers to leverage IP that comes with Vivado – bus monitors, bandwidth checkers, JTAG stimulus, bus functional models etc. – that should speed up development of the IP.”
By providing an extensible IP catalog, which can contain Xilinx, third-party, and intra-company IP that can be shared across a design team, division, or company, the suite leverages industry standards such as ARM’s AXI interconnect and IP-XACT metadata when packaging IP. With IP Integrator, Vivado facilitates rapid development of smarter systems, which can include embedded, DSP, video, analog, networking, interface, building block, and connectivity IP consolidated into a hierarchical system and IP-centric view.
Users can also package their own RTL, or C/C++/SystemC and MATLAB/Simulink algorithms into the IP catalog using High-Level Synthesis (HLS) or System Generator for DSP with the IP packager.
Xilinx describes IPI as providing automated IP subsystem generation to speed configuration. For Feist, IPI provides the second step, allowing a user to quickly integrate an IP created in HLS into a full design. “The first step happens earlier, in the HLS tool, which adds AXI interfaces automatically to the C code. With both these steps it is possible to connect an IP generated in HLS to an ARM core pretty quickly.”
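To make that first step concrete, here is a minimal sketch of what such an HLS-ready C/C++ function can look like. The function name, buffer size and bundle names are invented for illustration; the INTERFACE and PIPELINE directives follow standard Vivado HLS usage, but this is a hedged sketch rather than code from Xilinx or from the article.

    // Minimal sketch: a function annotated for Vivado HLS so the generated
    // IP exposes AXI interfaces and can be dropped into IP Integrator.
    // Function name, buffer size and bundle names are hypothetical.
    #include <stdint.h>

    #define N 256

    // y[i] = a*x[i] + b over an N-element buffer.
    void scale_offset(const int32_t x[N], int32_t y[N], int32_t a, int32_t b) {
        // Bulk data moves over an AXI4 master port (e.g. out to DDR).
    #pragma HLS INTERFACE m_axi     port=x offset=slave bundle=gmem
    #pragma HLS INTERFACE m_axi     port=y offset=slave bundle=gmem
        // Scalars, buffer offsets and the block-level start/done handshake
        // map to an AXI4-Lite register file, so a processor can configure
        // and launch the core through memory-mapped registers.
    #pragma HLS INTERFACE s_axilite port=a      bundle=ctrl
    #pragma HLS INTERFACE s_axilite port=b      bundle=ctrl
    #pragma HLS INTERFACE s_axilite port=return bundle=ctrl

        for (int i = 0; i < N; i++) {
    #pragma HLS PIPELINE II=1
            y[i] = a * x[i] + b;
        }
    }

Because the synthesized core presents an AXI4-Lite control port and an AXI4 master port, IP Integrator can connect it to a processing system – for example a Zynq ARM core – largely automatically, which is the “pretty quickly” Feist describes.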
Synopsys is also helping customers reduce configuration and debug time with a ‘plug and play’ IP accelerator program, to be announced at DAC next month. It will package its DesignWare IP together with hardware IP prototyping kits and software development kits to modify IP with a fast iteration flow and without the usual time penalties – that was all Dr Johannes Stahl, Director, Product Marketing, Virtual Prototyping, Synopsys, would reveal on a visit to the UK ahead of DAC.

Turning to EDA’s role in IP, Elgert says the same challenges occur as when designing an SoC: the GDSII or RTL has to match the RTL of an in-house design. The IP vendor has to support the IP flow, he says, but there is also a delivery burden on EDA companies to support customers and ensure that the various tool flows and verification platforms are enabled.
All agree that as geometries shrink, parts and designs get larger and more complex. This places demands on small (even shrinking) design teams, and IP use and reuse can accelerate design, verification and, eventually, time to market, with designs differentiated by functions specific to the application.

Are Best Practices Resulting in a Verification Gap?

Tuesday, March 4th, 2014

By John Blyler, Chief Content Officer

A panel of experts from Cadence, Mentor, NXP, Synopsys and Xilinx debate the reality and causes of the apparently widening verification gap in chip design.

Does a verification gap exist in the design of complex system-on-chips (SoCs)? This is the focus of a panel of experts at DVCon 2014, which will include Janick Bergeron, Fellow at Synopsys; Jim Caravella, VP of Engineering at NXP; Harry Foster, Chief Verification Technologist at Mentor Graphics; John Goodenough, VP, ARM; Bill Grundmann, a Fellow at Xilinx; and Mike Stellfox, a Fellow at Cadence. JL Gray, a Senior Architect at Cadence, organized the panel. What follows are position statements from the panelists in preparation for this discussion. – JB

Panel Description: “Did We Create the Verification Gap?”

According to industry experts, the “Verification Gap” between what we need to do and what we’re actually able to do to verify large designs is growing worse each year. According to these experts, we must do our best to improve our verification methods and tools before our entire project schedule is taken up by verification tasks.

But what if the Verification Gap is actually occurring as a result of continued adoption of industry standard methods? Are we blindly following industry best practices without keeping in mind that the actual point of our efforts is to create a product with as few bugs as possible, as opposed to simply trying to find as many bugs as we can?

Are we blindly following industry best practices …

Panelists will explore how verification teams interact with broader project teams and examine the characteristics of a typical verification effort, including the wall between design and verification, verification involvement (or lack thereof) in the design and architecture phase, and reliance on constrained random in absence of robust planning and prioritization to determine the reasons behind today’s Verification Gap.

Panelist Responses:

Grundmann: Here are my key points:

  • Methodologies and tools for constructing and implementing hardware have dramatically improved, while verification processes appear not to have kept pace. As hardware construction is simplified, there is a trend to have fewer resources building hardware but the same or more resources performing verification. Design teams with a 3X ratio of verification to hardware design are not unrealistic, and that ratio is trending higher.

    … we have to expect to provide a means to make in-field changes …

  • As it gets easier to build hardware, hardware verification is approaching software-development levels of resourcing in a project.
  • As of now, it is very easy to quickly construct various hardware “crap”, but it is very hard to prove any of it is what you want.
  • It is possible that we can never be thoroughly verification “clean”, and will instead deliver some version of the product with a reasonable quality level of verification. This may mean we have to expect to provide a means to make in-field changes to the products through software-like patches.

Stellfox: Most chips are developed today based on highly configurable modular IP cores with many embedded CPUs and a large amount of embedded SW content, and I think a big part of the “verification gap” is due to the fact that most development flows have not been optimized with this in mind.  To address the verification gap, design and verification teams need to focus more on the following:

  • IP teams need to develop and deliver the IP in a way that is more optimized for SoC HW and SW integration. While the IP cores need to be high quality, it is not sufficient to only deliver high quality IP, since much of the work today is spent integrating the IP and enabling earlier SW bring-up and validation.

    There needs to be more focus on integrating and verifying the SW modules with HW blocks …

  • There needs to be more focus on integrating and verifying the SW modules with HW blocks early and often, starting at the IP level to Subsystem to SoC.  After all, the SW APIs largely determine how the HW can be used in a given application, so time might be wasted “over-verifying” designs for use cases which may not be applicable in a specific product.
  • Much of the work in developing a chip is about integrating design IPs, VIPs, and SW, but most companies do not have a systematic, automated approach with supporting infrastructure for this type of development work.

Foster: No, the industry as a whole did not create the verification challenge. To say so reflects a lack of understanding of the problem. While design grows at a Moore’s Law rate, verification grows at a double exponential rate. Compounded with the increased complexity due to Moore’s Law are the additional dimensions of hardware-software interaction validation, complex power management schemes, and other physical effects that now directly affect functional correctness. Emerging solutions, such as constrained-random, formal property checking, emulation (and so on) didn’t emerge because they were just cool ideas. They emerged to address specific problems. Many design teams are looking for a single hammer that they can use to address today’s verification challenges. Unfortunately, we are dealing with an NP-hard problem, which means that there will never be a single solution that will solve all classes of problems.

Many design teams are looking for a single hammer that they can use to address today’s verification challenges.

Historically, the industry has always addressed complexity through abstraction (e.g., the move from transistors to gates, the move from gates to RTL, etc.). Strategically, the industry will be forced to move up in abstraction to address today’s challenges. However, there is still a lot of work to be done (in terms of research and tool development) to make this shift in design and verification a reality.

Caravella: The verification gap is a broad topic so I’m not exactly sure what you’re looking for, but here’s a good guess.

Balancing resource and budget for a product must be done across much more than just verification.

Verification is only a portion of the total effort, resources and investment required to develop products and release them to market. Balancing resource and budget for a product must be done across much more than just verification. Bringing a chip to market (and hence revenue) requires design, validation, test, DFT, qualification and yield optimization. Given this and the insatiable need for more pre-tapeout verification, what is the best balance? I would say that the chip does not need to be perfect when it comes to verification/bugs; it must be “good enough”. Spending 2x the resources/budget to identify bugs that do not impact the system or the customer is a waste of resources. These resources could be better spent elsewhere in the product development food chain, or used to develop more products and grow the business. The main challenge is how best to quantify the risk to maximize the ROI of any verification effort.

Jasper: [Editor’s Note: Although not part of the panel, Jasper provided an additional perspective on the verification gap.]

  • Customers are realizing that UVM is very “heavy” for IP verification. Specifically, writing and debugging a UVM testbench for block and unit level IP is a very time-consuming task in and of itself, plus it incurs an ongoing overhead in regressions when the UVCs are effectively “turned off” and/or simply used as passive monitors for system level verification. Increasingly, we see customers ditching the low level UVM testbench and exhaustively verifying their IPs with formal-based techniques. In this way, the users can focus on system integration verification and not have to deal with bugs that should have been caught much sooner.

    UVM is very “heavy” for IP verification.

  • Speaking of system-level verification: we see customers applying formal at this level as well. In addition to the now-familiar SoC connectivity and register validation flows, we see formal replacing simulation in architectural design and analysis. In short, even without any RTL or SystemC, customers can use an architectural spec to feed formal under the hood to exhaustively verify that a given architecture or protocol is correct by construction, won’t deadlock, etc.
  • The need for sharing coverage data between multiple vendors’ tool chains is increasing, yet companies appear to be ignoring the UCIS interoperability API.  This is creating a big gap in customers’ verification closure processes because it’s a challenge to compare verification metrics across multi-vendor flows, and they are none too happy about it.

Blog Review – Feb. 24 2014

Monday, February 24th, 2014

By Caroline Hayes, Senior Editor

ARM prepares for this week’s Embedded World in Nuremberg; Duo Security looks at embedded security; Xilinx focuses on LTE IP; Ansys rejoices in Fluent meshing and Imagination strives to define graphics cores for comparison.

Equipped with a German phrase app on his (ARM-based) smartphone, Philippe Bressy is looking forward to Embedded World 2014, held in Nuremberg this week. His blog has some handy tips for tackling the show and why it is worth visiting the company’s stand and technical conference programme.

Anticipating his presentation at 2014 RSA Conference, Mark Stanislav, Duo Security, shares some insight into Internet of Things security. In the New Deal of Internet-Device Security, he explores security features in a mobile society for individuals, companies and governments.

Another exhibition happening in Europe this week is Mobile World Congress, and Xilinx’s Steve Leibson looks at 4G and LTE proliferation and the latest IP from the company to support point-to-point and multipoint line of sight communications for 60GHz and 80GHz radio backhaul.

The virtues, even joys, of Fluent meshing are put under the spotlight by Andy Wade, Ansys, in his blog. He considers the trends in CAE simulation, including innovations such as 3D and more complex geometries. There is also a link to the company’s top tech tips.

An interesting blog from Imagination Technologies attempts to compare graphics processors accurately. Rys Sommefeldt sets out which cores can be combined and how they can be used and, most importantly, how to compare like with like.

Blog Review January 06

Monday, January 6th, 2014

Happy 2014! This week the blog review is full of optimism for what the next 12 months has in store. By Caroline Hayes, senior editor.

A new year promises new hopes and, surely, some new technology. In Technology and Electronics Design Innovation: Big Things, Small Packages, Cadence’s Brian Fuller looks at some technology that caught his eye and some of the challenges and even moral dilemmas they may pose.

More predictions for 2014, as Intel’s Doug Davis looks forward to CES (Consumer Electronics Show). In Internet of Things: Transforming Consumer Culture and Business he looks ahead to CEO Brian Krzanich’s fireside chat (Fireside? In Vegas – really?) and urges visitors to grab a coffee from a vending machine. (Intel are at CES booth 7252.)

In a case of “What were they thinking?”, Dustin Todd laments recent government action that he believes has increased, not reduced, the threat of counterfeit chips. The SIA (Semiconductor Industry Association) feels thwarted and this debate is likely to go on for some time.
Still with CES, but with one eye on the TV schedules, Steve Leibson at Xilinx delves into the Vanguard Video H.265/HEVC codec that was used in the 4K trailer for the Emmy-winning House of Cards. Vanguard is demonstrating the codec this week in Las Vegas.

Blog Review – Dec 09

Monday, December 9th, 2013

Google encourages the world to wish the Queen of Software a happy birthday and prompts a revival of interest in her – and perhaps the role of women in technology; there is more news on the value of FinFET vs FDSOI and ARM looks back and looks ahead at DSP support and plays with RFduino.

If you googled anything today, you will have seen the graphic celebrating Grace Hopper’s 107th birthday. However, if you visit Harvard professor Harry Lewis’s blog, you will be charmed by a video link there showing the lady herself interviewed by a (very young) David Letterman. I thought she was taking out her knitting at one point, but it is actually a visualization of a nanosecond!

Sandy Adam is also a Grace Hopper fan. His blog celebrates “the queen of software” before looking to the next generation to take the crown as he introduces the Hour of Code project, part of Computer Science Education Week.

Drew Barbier’s ARM blog has fun with a Kickstarter project called RFduino. As the name implies, RFduino is an Arduino shrunk to the size of a fingertip, with added wireless. Cool!

In the blink of an eye, the Steve Leibson, Xilinx blog caught my attention, as the author explains the mechanics behind the Zynq SoC driving an LED connected as a GPIO.

Mentor’s Arvind Narayanan poses some tough questions about FDSOI versus FinFET: both promise better performance and power than bulk, but will the former at 20nm bridge the gap to 16nm FinFET, and what about cost? The war of words is well illustrated and may challenge your perceptions.

Jeffrey Gallagher, Cadence, suggests getting crafty to maximize the potential of vias earlier in the flow, to save time and energy – what’s not to like?

There is a great showreel at the blog by Aurelien, Dassault Systèmes, showcasing the company’s acquisition of German 3D visualization company, RealTime Technology.

Looking back and also looking ahead, Richard York, ARM, proudly relates his finest DSP moment, with the announcement that MATLAB’s Simulink environment will direct code-calls to produce optimized code for ARM through the CMSIS DSP libraries. Looking ahead, the demo will be running at Embedded World in Nuremberg, in February.

Blog Review – Dec 02

Monday, December 2nd, 2013

By Caroline Hayes, senior editor

While everyone else hits the shops, one blogger wants to see presents fall from the sky. This week there is also a sleek design project, new uses for SoCs, an Arduino project (with video), a hypervisor discussion and a summit review.

The power of the blog – it just keeps giving, as Steve Leibson, a Xilinx employee, demonstrates. He references Dave Jones, proprietor of the EEVblog.com video blog, who discusses “Automated PCB Panel Testing”, which set the author thinking about how Zynq All Programmable SoCs can be used in a new way – as ATE ports.

Sleek good looks are not just the preserve of the fashion world. Take a look at bleu, the CATIA design showcar that was developed with Dassault Systèmes’ technologies and with CATIA designers and engineers working in concert to produce a symphony of aesthetically pleasing, aerodynamic design to a very tight schedule. Arnaud’s blog has video that shows off the sleek car while revealing the design process.

Another, less conventional form of transport is occupying Eric Bantegnie, Ansys, who is getting excited about online retailer Amazon using drones, called octocopters, to deliver products to customers 30 minutes after they click the ‘buy’ button. Sadly, drone delivery is a few years away from being a reality. Until the technology arrives, Bantegnie will have to traipse around the shopping malls for presents this year, like everyone else!

Talking of presents and new toys, Drew Barbier is reflecting on what to do with his RFduino – the module based on a Nordic Semiconductor nRF51822, with ARM Cortex-M0 and Bluetooth 4.0 LE support. Making up for the time lost when he first contemplated the project, he seems to have had great fun shrinking the Arduino to the size of a fingertip and adding wireless.

Another generous blogger is Colin Walls, Mentor, who continues the Embedded Hypervisor discussion with a mixture of compliments: quoting a colleague’s view on virtualization, but questioning his culinary skills…

Finally, Richard Goering, Cadence, reports from the Signoff Summit, reviewing the technology behind the Tempus Timing Signoff Solution, and offering some insight into the challenges in static timing analysis.

Network on Chip Solutions Are Gaining Market Share

Thursday, November 14th, 2013

by Gabe Moretti, Contributing Editor

It is important to note how much network on chip (NoC) architectures have established themselves as the preferred method of connectivity among IP blocks. What I found lacking are tools and methods that help architects explore the chip topology in order to minimize the use of interconnect structures and to evaluate bus versus network tradeoffs.

Busses

Of course there are busses for SoC designs, most of which have been used in designs for years. The most popular is the AMBA bus, first introduced in 1996. To date there have been five versions of the AMBA bus. The latest, introduced this year, is the AMBA 5 CHI (Coherent Hub Interface), which offers a new high-speed transport layer and features aimed at reducing congestion.

Accellera offers the OCP bus developed by OCP-IP before it merged with Accellera.  It is an openly licensed protocol that allows the definition of cores ready for system integration that can be reused together with their respective test benches without rework.

The OpenCores open source hardware community offers the Wishbone bus.  I found it very difficult to find much information about Wishbone on the OpenCores.org website, with the exception of three references to implementations using this protocol.  Wishbone is not a complete bus definition, since it has no physical definitions.  It is a logic protocol described in terms of signals and their states and clock cycles.

Other bus definitions are proprietary. Among them designers can find QuickPath from Intel, HyperTransport from AMD, and IPBus from IDT.

IBM has defined and supports the CoreConnect bus that is used in its Power Architecture products and is also used with Xilinx’s MicroBlaze cores.

Finally, Altera uses its own Avalon bus for its Nios II product line.

Clearly the use of busses is still quite pervasive.  With the exception of proprietary busses, designers have the ability to choose both physical and protocol characteristics that are best suited for their design.

Network on Chip

There are two major vendors of NoC solutions: Arteris and Sonics. Arteris is a ten-year-old company headquartered in Sunnyvale but with an engineering center near Paris, France. Its technology is derived from computer networking solutions, modified to the requirements of SoC realizations. Its products deal with on-chip as well as die-to-die and multi-chip connectivity.

Sonics was founded in 1996. In addition to network on chip products it also offers memory subsystems, performance analysis tools and development tools for SoC realizations. It offers six products in the NoC market, covering many degrees of sophistication depending on designers’ requirements. SonicsGN, its most sophisticated product, offers a high-performance network for transporting packetized data, using routers as the fundamental switching elements. SonicsExpress, on the other hand, can be used as a bridge between two clock domains with optional voltage domain isolation; it supports the AXI and OCP protocols and thus can be integrated into those bus environments.

After the panel discussion on IP Blocks Connectivity, which covered mostly NoC topics, I spoke with Avi Behar, Product Marketing Director at Cadence. Cadence had wanted to participate in the discussion but their request came too late to include them in the panel. But information is important, and scheduling matters should not become an obstacle, so I decided to publish their contribution in this article.

The first question I asked was: on-chip connectivity uses area and power and also generates noise. Have we addressed these issues sufficiently?

Avi: A common tendency among on-chip network designers is to over-design. While it’s better to be on the safe side – under-designing will lead to the starvation of bandwidth-hungry IPs and failure of latency-critical IPs – over-designing has a cost in gate count and, as a result, in power consumption. In order to make the right design decisions, it is crucial that design engineers run cycle-accurate performance analysis simulations (by performance I primarily mean data bandwidth and transaction latency) with various configurations applied to their network. By changing settings like outstanding transactions, buffer depth, bus width, QoS settings and the switching architecture of the network, and running the same realistic traffic scenarios, designers can get to the configuration that meets the performance requirements defined by the architects without resorting to over-design. This iterative process is time consuming and error prone, and this is where the just-launched Cadence Interconnect Workbench (IWB) steps in. By combining the ability to generate a correct-by-construction test bench tuned for performance benchmarking (the RTL for the network is usually generated by tools provided by the network IP provider) with a powerful performance analysis GUI that allows side-by-side analysis of different RTL configurations, IWB greatly speeds this iterative process while mitigating the risks associated with manual creation of the required test benches.
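The right-sizing loop Behar describes can be pictured as a search over candidate configurations: simulate each one against the same traffic scenario, discard any that miss the bandwidth and latency targets, and keep the cheapest survivor. Below is a toy C++ sketch of that idea; the structs, the candidate values and the simulate_traffic() cost model are all invented stand-ins, whereas a real flow such as IWB would run cycle-accurate RTL simulations instead.

    #include <initializer_list>
    #include <iostream>
    #include <limits>

    // Toy sketch of the iterative NoC right-sizing loop described above.
    // All names and the performance/cost model are invented; a real flow
    // would run cycle-accurate simulations of each RTL configuration.
    struct NocConfig {
        int outstanding_txns;  // max outstanding transactions
        int buffer_depth;      // FIFO depth per port
        int bus_width_bits;    // data path width
    };

    struct SimResult {
        double bandwidth_gbps;  // achieved under the traffic scenario
        double latency_ns;      // worst-case transaction latency
        double kgates;          // implementation cost (drives power/area)
    };

    // Placeholder for a cycle-accurate simulation of one configuration.
    SimResult simulate_traffic(const NocConfig& c) {
        double bw    = 0.05 * c.bus_width_bits + 0.2 * c.outstanding_txns;
        double lat   = 400.0 / (1 + c.buffer_depth) + 2000.0 / c.bus_width_bits;
        double gates = 0.8 * c.bus_width_bits + 6.0 * c.buffer_depth
                     + 4.0 * c.outstanding_txns;
        return {bw, lat, gates};
    }

    int main() {
        const double required_bw_gbps = 10.0;   // architect's requirement
        const double max_latency_ns   = 120.0;  // architect's requirement

        NocConfig best{};
        double best_kgates = std::numeric_limits<double>::max();
        bool found = false;

        for (int txns : {2, 4, 8, 16})
            for (int depth : {2, 4, 8})
                for (int width : {64, 128, 256}) {
                    NocConfig c{txns, depth, width};
                    SimResult r = simulate_traffic(c);
                    // Under-design fails the requirements...
                    if (r.bandwidth_gbps < required_bw_gbps ||
                        r.latency_ns > max_latency_ns)
                        continue;
                    // ...over-design wastes gates: keep the cheapest passer.
                    if (r.kgates < best_kgates) {
                        best_kgates = r.kgates;
                        best = c;
                        found = true;
                    }
                }

        if (found)
            std::cout << "cheapest passing config: " << best.outstanding_txns
                      << " outstanding txns, depth " << best.buffer_depth
                      << ", " << best.bus_width_bits << "-bit bus (~"
                      << best_kgates << " kgates)\n";
        else
            std::cout << "no candidate met the requirements\n";
    }

The point of the sketch is the selection rule, not the numbers: the loop rejects under-designed candidates outright and then minimizes cost among the survivors, which is exactly the over-design avoidance Behar argues for.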

What type of work do we need to do to be ready to have a common, if not standard, verification method for a network type of connectivity?

Avi: There are two aspects to the verification of networks-on-a-chip: functional verification and ‘performance verification’. Functional verification of these networks needs to be addressed at two levels: first, making sure that all the ports connected to the network are compliant with the protocol (say AMBA 3 AXI or AMBA 4 ACE) that they are implementing, and secondly, verifying that the network correctly channels data between all the master and slave nodes connected to it. As for performance verification, while the EDA industry has been focusing on serving the SoC architect community with virtual prototyping tools that utilize SoC models for early stage architectural exploration, building cycle-accurate models of the on-chip network that capture all the configuration options mentioned above is impractical. As the RTL for the connectivity network is usually available before the rest of the IP blocks, it is the best vehicle for performing cycle-accurate performance analysis. Cadence’s IWB, as described in the previous answer, can generate a test bench tuned for running realistic traffic scenarios and capturing performance metrics. IWB can also generate a functional verification testbench which addresses the two aspects I mentioned earlier – protocol compliance at the port level and connectivity across the on-chip network.
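The connectivity aspect can be illustrated with a toy checker: given a graph of the network and a connectivity matrix from the spec, verify that every allowed master-to-slave path exists and every disallowed one is blocked. Everything below (the names, the graph model, the spec table) is a hypothetical illustration, not Cadence’s IWB flow; a real testbench would drive transactions through the RTL rather than walk an abstract graph.

    #include <iostream>
    #include <map>
    #include <queue>
    #include <set>
    #include <string>
    #include <vector>

    // Toy connectivity check for an invented NoC topology.
    using Graph = std::map<std::string, std::vector<std::string>>;

    // Breadth-first search: can traffic injected at src reach dst?
    bool reaches(const Graph& g, const std::string& src, const std::string& dst) {
        std::set<std::string> seen{src};
        std::queue<std::string> q;
        q.push(src);
        while (!q.empty()) {
            std::string n = q.front(); q.pop();
            if (n == dst) return true;
            auto it = g.find(n);
            if (it == g.end()) continue;
            for (const auto& next : it->second)
                if (seen.insert(next).second) q.push(next);
        }
        return false;
    }

    int main() {
        // Hypothetical topology: two masters, one router, two slaves.
        Graph noc = {
            {"cpu",     {"router0"}},
            {"dma",     {"router0"}},
            {"router0", {"ddr_ctrl", "uart"}},
        };

        // Connectivity matrix from the (invented) spec: pair -> allowed?
        std::map<std::pair<std::string, std::string>, bool> spec = {
            {{"cpu", "ddr_ctrl"}, true},
            {{"cpu", "uart"},     true},
            {{"dma", "ddr_ctrl"}, true},
            {{"dma", "uart"},     false},  // the DMA must not see the UART
        };

        for (const auto& [path, allowed] : spec) {
            bool actual = reaches(noc, path.first, path.second);
            if (actual != allowed)
                std::cout << "FAIL: " << path.first << " -> " << path.second
                          << " (expected " << (allowed ? "reachable" : "blocked")
                          << ")\n";
        }
    }

Note that in this toy topology router0 forwards to both slaves, so the checker flags the dma-to-uart path that the spec disallows – exactly the class of integration bug that connectivity verification is meant to catch (a real network would enforce this in its address decoding and access control, which the abstract graph here deliberately ignores).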

What do you think should be the next step?

Avi: Many of our big SoC designing customers have dedicated network-on-a-chip verification teams who are struggling to get not only the functionality right, but ever more importantly, get the best performance while removing unnecessary logic. We expect this trend to intensify, and we at Cadence are looking forward to serving this market with the right methodologies and tools.

The contribution from Cadence reinforced the points of view expressed by the panelists. It is clear that there are many options available to engineers to enable communication among IP blocks and among chips and dies. What was not mentioned by anyone was the need to explore the topology of a die with a view to developing the best possible interconnect architecture in terms of speed, reliability, and cost.
