
Connected IP Blocks: Busses or Networks?

Gabe Moretti

David Shippy of Altera, Lawrence Loh from Jasper Design, Mentor's Steve Bailey, and Drew Wingard from Sonics got together to discuss the issues inherent in connecting IP blocks, whether in an SoC or in a stacked-die architecture.

SLD: On-chip connectivity uses area and power, and it also generates noise. Have we addressed these issues sufficiently?

Wingard: The communication problem is fundamental to the high degree of integration. A significant portion of the cost benefit of going to a higher level of integration comes from the ability to share critical resources, like off-chip memory or a single control processor, across a wide variety of elements. We cannot make the communication take zero area, and we cannot make it have zero latency, so we must think about how we partition the design so that we can get things done with sufficiently high performance while consuming the least amount of power.

Loh: Another challenging dimension is verification. We need to continue to innovate in order to address it.

Shippy: The number of transistors in FPGAs is growing, but I do not see the number of transistors dedicated to connectivity growing at the same pace as in other technologies. At Altera we use a network on chip to efficiently connect the various processing blocks together. It turns out that both the area and the power used by the network are very small compared to the rest of the die. In general, the resources dedicated to the interconnect are around 3% to 5% of the die area.

Wingard: Connectivity tends to use a relatively large portion of the "long wires" on the chip, so even if it is a relatively modest part of the die area, it runs the risk of presenting a bigger challenge in the physical domain.

SLD: When people talk about the future they talk about the "Internet of Things" or "smart everything," and SoCs are becoming more complex. There are two things we can do: knowing the nature of the IP we have to connect, we could develop standard protocols that use smaller busses, or we could use a network on chip. Which do you think is more promising?

Wingard: I think you cannot assume that you know what type of IP you are going to connect. There will be a wide variety of applications and a number of wireless protocols in use. We must start with an open model that is independent of the IP in the architecture. I am strongly in favor of the decoupled network approach.

SLD: What is the overhead we are prepared to pay for a solution?

Shippy: The solution needs to be a distributed system that uses a narrower interface with fewer wires.

SLD: What type of work do we need to do to be ready to have a common, if not standard, verification method for this type of connectivity?

Loh: Again, as Drew stated, we can have different types of connectivity, some optimized for power, others for throughput, for example. People are creating architectural-level protocols. When you have a well-defined method to describe things, then you can derive a verification methodology.
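To make Loh's point concrete, the following minimal Python sketch shows how a checker can be derived mechanically from a declarative protocol description. The rule format, protocol, and trace here are hypothetical illustrations, not any particular standard or vendor tool.

```python
# Hypothetical sketch: a declarative description of a transaction
# protocol, from which a simple trace checker is derived automatically.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    cause: str   # event that opens an obligation, e.g. a request
    effect: str  # event that must eventually follow, e.g. a response

# Architectural-level description: every request on a transaction ID
# must eventually be answered by a response on the same ID.
PROTOCOL = [Rule(cause="req", effect="rsp")]

def check(trace, rules):
    """Scan a trace of (event, txn_id) pairs and report every
    obligation implied by the description that the trace violates."""
    pending, errors = set(), []
    for event, txn in trace:
        for rule in rules:
            if event == rule.cause:
                pending.add((rule.effect, txn))
            elif (event, txn) in pending:
                pending.remove((event, txn))
            elif event == rule.effect:
                errors.append(f"unexpected {event} for txn {txn}")
    errors += [f"missing {e} for txn {t}" for e, t in pending]
    return errors

# txn 2 issues a request that is never answered; the derived
# checker flags it without any hand-written assertion.
print(check([("req", 1), ("rsp", 1), ("req", 2)], PROTOCOL))
```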

Bailey: If we are looking for an equivalent to UVM, you start from the protocol. A protocol has various levels. You must consider what type of interconnect is used; then you can verify that the blocks are connected correctly, and check control aspects like arbitration. Then you can move to the functional level. To do that you must be able to generate traffic, and the only way to do this is either to mimic I/O through files or to have a high-level model generate the traffic, so that we can replicate what will happen under stress and make sure the network can handle it. It all starts with basic verification IP. The protocol used determines the properties of its various levels of abstraction, and the IP can provide ways to move across these levels to create the required verification method.
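Bailey's high-level traffic model can likewise be sketched in a few lines of Python. In the fragment below, the link capacity, burst probability, and backlog metric are invented purely for illustration; a real verification IP would generate protocol-accurate transactions rather than abstract flit counts.

```python
# Hypothetical sketch: a high-level traffic model used to stress an
# abstract interconnect before detailed RTL exists.

import random

random.seed(0)

LINK_CAPACITY = 4   # flits the shared link can accept per cycle
BURST_PROB = 0.3    # chance an initiator bursts in a given cycle
CYCLES = 10_000

def initiator_traffic():
    """High-level model of one IP block: idle most of the time,
    occasionally emitting a burst of 8 flits."""
    return 8 if random.random() < BURST_PROB else 0

def stress(n_initiators):
    """Inject traffic from several initiators into one shared link
    and track the worst backlog the link ever sees."""
    backlog, worst = 0, 0
    for _ in range(CYCLES):
        offered = sum(initiator_traffic() for _ in range(n_initiators))
        backlog = max(0, backlog + offered - LINK_CAPACITY)
        worst = max(worst, backlog)
    return worst

for n in (1, 2, 4):
    print(f"{n} initiators -> worst backlog {stress(n)} flits")
```

Even this toy model exposes the kind of stress point Bailey describes: one bursty initiator drains comfortably, while two already offer more traffic on average than the shared link can absorb.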

Wingard: Our customers expect us to provide the verification system that proves we can deliver the type of communication they want to implement. The state of the art today is that there will be surprises when the communication subsystem is connected to the various IP blocks. That is the advantage of having protocol verification IP. But the problems we see today tend to be system-interaction challenges, and we do not yet have a complete way of describing how to gather the appropriate information to verify the entire system.

Bailey: Yes, there is a difference between verifying the interconnect and doing system verification. There is still work to be done to capture the stress points of the entire system: it is a big challenge.

SLD: What do you think should be the next step?

Wingard: I think the trend is pretty clear. More and more application areas are reaching the level of complexity where it makes sense to adopt network-based communication solutions. As developers start addressing these issues, they will come to appreciate that there may be a need for system-wide solutions to other things, power management for example. As communication mechanisms become more sophisticated, we need to address security at the system level as well.

Loh: In order to address system-level verification, we must understand that the design needs to be analyzed from the very beginning, so that we can determine the choices available. Standardization can no longer cover just how things are connected; it must grow to cover how things are defined.

Shippy: One of the challenges is how we are going to build heterogeneous systems that interact correctly while also dealing with all of the physical characteristics of the circuit. With many processors and lots of DSPs and accelerators, we need to figure out a topology to interconnect all of those blocks and at the same time deal with the data coming in from, or going out to, the outside environment. The problem is how to verify and optimize the system, not just the communication flow.
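As a rough illustration of the topology question Shippy raises, the fragment below models a candidate interconnect as a graph and uses hop count as a crude first-order latency proxy; the block names and topology are hypothetical, not any real device.

```python
# Hypothetical sketch: evaluating a candidate interconnect topology as
# a graph, using hop count as a first-order latency proxy.

from collections import deque

# Candidate topology: routers r0/r1 connect compute blocks to DDR and I/O.
TOPOLOGY = {
    "cpu0": ["r0"], "cpu1": ["r0"],
    "dsp0": ["r1"], "accel0": ["r1"],
    "r0": ["cpu0", "cpu1", "r1", "ddr"],
    "r1": ["dsp0", "accel0", "r0", "io"],
    "ddr": ["r0"], "io": ["r1"],
}

def hops(graph, src, dst):
    """Breadth-first search: minimum hops from src to dst, or None."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # unreachable: the candidate topology is broken

# Every initiator must reach shared memory and the outside world.
for block in ("cpu0", "cpu1", "dsp0", "accel0"):
    print(block, "-> ddr:", hops(TOPOLOGY, block, "ddr"),
          "| io:", hops(TOPOLOGY, block, "io"))
```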

Bailey: As the design side of future products evolves, the verification methods will have to evolve with it. There has to be a level of coherency covering the functionality of the system, so that designers can understand the level of stress they are simulating for the entire system. Designers also need to be able to isolate a problem when it is found. Where does it originate: an IP subsystem, the connectivity network, or the logic in the aggregate circuitry?

Contributor Biographies:

David Shippy is currently the Director of System Architecture for Altera Corporation, where he manages System Architecture and Performance Modeling. Prior to that he was Chief Architect for low-power x86 CPUs and SoCs at AMD. Before that he was Vice President of Engineering at Intrinsity, where he led the development of the ARM CPU design for the Apple iPhone 4 and iPad. Prior to that he spent most of his career at IBM leading PowerPC microprocessor designs, including the role of Chief Architect and technical leader of the PowerPC CPUs for the Xbox 360 and PlayStation 3 game machines. His experience designing high-performance microprocessor chips and leading large teams spans more than 30 years. He has over 50 patents in all areas of high-performance, low-power microprocessor technology.

Lawrence Loh, Vice President of Worldwide Applications Engineering, Jasper Design
Lawrence Loh holds overall management responsibility for the company’s applications engineering and methodology development. Loh has been with the company since 2002, and was formerly Jasper’s Director of Application Engineering. He holds four U.S. patents on formal technologies. His prior experience includes verification and emulation engineering for MIPS, and verification manager for Infineon’s successful LAN Business Unit. Loh holds a BSEE from California Polytechnic State University and an MSEE from San Diego State.

Stephen Bailey is the director of emerging technologies in the Design Verification and Test Division of Mentor Graphics. Steve chaired the Accellera and IEEE 1801 working group efforts that resulted in the UPF standard. He has been active in EDA standards for over two decades and has served as technical program chair and conference chair for industry conferences, including DVCon. Steve began his career designing embedded software for avionics systems before moving into the EDA industry in 1990. Since then he has worked in R&D, applications, and technical and product marketing. Steve holds BSCS and MSCS degrees from Chapman University.

Drew Wingard co-founded Sonics in September 1996 and is its chief technical officer, secretary, and board member. Before co-founding Sonics, Drew led the development of advanced circuits and CAD methodology for MicroUnity Systems Engineering. He co-founded and worked at Pomegranate Technology, where he designed an advanced SIMD multimedia processor. He received his BS from the University of Texas at Austin and his MS and PhD from Stanford University, all in electrical engineering.
