
Posts Tagged ‘Cadence’


Blog Review – Monday, February 23, 2015

Monday, February 23rd, 2015

Next week may be a test for the German language skills of Colin Walls, Mentor Graphics, as he returns to Nuremberg for Embedded World 2015, where he will be presenting papers at the conference. He previews both How to Measure RTOS Performance and Self-Testing in Embedded Systems, and issues an invitation to drop by the company booth.

Another heads-up for Embedded World comes from Chris A Ciufo, EECatalog, who discusses heterogeneous computing: not only how it can be applied, but what is needed for effective execution.

After seven years, start-up Soft Machines was ready to unveil its VISC (Variable Instruction Set Computing) CPU cores at Cadence’s Front-End Design Summit. Richard Goering unpicks the innovation that could revive performance-per-watt scaling.

A heart-warming case study from Artie Beavis, Atmel, shows how ARM-based wearable technology is being put to good use for the elderly with the UnaliWear Kanega wristwatch.

Michael Posner is not going to let something like gate counts flummox him. He patiently explains the nature of logic cells – with some help from Xilinx – and flags up a warning, and advice, for anyone attempting to map RTL code onto a prototype.

Dark silicon, the part of a device which shuts down to avoid over-heating, is getting darker. Zvi Or-Bach, MonolithIC 3D, believes his company’s architecture can throw light on a semiconductor industry that is on the brink of being derailed by the creep of dark silicon.

By Caroline Hayes, Senior Editor.

Blog Review – Monday, February 16, 2015

Monday, February 16th, 2015

Repeating last year’s project, Gagan Luthra, ARM, explains this year’s 100 projects in 100 days for the new Bluetooth Low Energy (BLE) Pioneer Kit, with Cypress’s PSoC 4 BLE, an ARM Cortex-M0 CPU with a BLE radio.

Steve Schulz visited Cadence and entranced Brian Fuller with his ideas for standards for the IoT, the evolution of the SoC into a System on Stack, and the design gaps that lie en route.

Fighting the Nucleus corner, Chris Ciufo, EECatalog, rejoices in the news that Mentor Graphics is repositioning the RTOS for Industrie 4.0 factory automation, standing up to the tough guys in the EDA playground.

Following the news that Intel is to buy Lantiq, Martin Bijman, Chipworks, looks into what the acquisition will bring to the Intel stable, presenting some interesting statistics.

Maybe a little one-sided, but the video presented by Bernie DeLay, Synopsys, is an informative look at how VIP architecture accelerates memory debug through simultaneous visualization.

The normally relaxed and affable Warren Savage, IPextreme, is getting hot under the collar at the thought of others ‘borrowing’ – or plain old plagiarism, as he puts it in his post. The origins of the material will be argued (the offending article has been removed from the publication’s site), but Savage uses the incident to make a distinction between articles with a back story and ‘traditional’ tech journalism.

To end on a light note, Raj Suri, Intel, presents a list compiled by colleagues of employees who look like celebrities. Nicolas Cage, Dame Helen Mirren and Michael J Fox doppelgangers are exposed.

Caroline Hayes, Senior Editor

A Prototyping with FPGA Approach

Thursday, February 12th, 2015

Frank Schirrmeister, Group Director for Product Marketing of the System Development Suite, Cadence.

In general, the industry is experiencing the need for what has come to be called the “shift left” in the design flow, as shown in Figure 1. Complex hardware stacks – IP assembled into sub-systems, sub-systems assembled into Systems on Chips (SoCs) and eventually integrated into systems – are combined with complex software stacks, integrating bare-metal software and drivers with operating systems, middleware and eventually the end applications that determine the user experience.

From a chip perspective, about 60% of the way into a project three main issues have to be resolved. First, the error rate in the hardware has to be low enough that the design team is confident enough to commit to tape-out. Second, the chip has to be validated enough within its environment to be sure that it works within the system. Third, and perhaps most challenging, significant portions of the software have to be brought up to confirm that software/hardware interactions work correctly. In short, hardware verification, system validation and software development all have to be performed as early as possible – hence the “shift left” of development tasks in the schedule.

Figure 1: A Hardware/Software Development Flow.

Prototyping today happens at two abstraction levels – transaction-level models (TLM) and register-transfer level (RTL) models – using five basic engines.

  • Virtual prototyping based on TLM models can start from specifications earliest in the design flow and works well for software development, but it falls short when more detailed hardware models are required and is plagued by model availability and the cost and effort of model creation.
  • RTL simulation – which today is usually integrated with SystemC-based capabilities for TLM execution – allows detailed hardware execution but is limited in speed to the low kHz or even Hz range, and as such is not suitable for software execution that may require billions of cycles just to boot an operating system (see the back-of-the-envelope sketch after this list). Hardware-assisted techniques come to the rescue.
  • Emulation is used for both hardware verification and lower-level software development, as speeds can reach the MHz domain. Emulation is separated into processor-based and FPGA-based emulation; the former allows excellent at-speed debug and fast bring-up times because long FPGA routing runs are avoided, while the latter excels at execution speed once the design has been brought up.
  • FPGA-based prototyping is typically limited in capacity and can take months to bring up, due to the modifications required to the design itself and the verification those modifications then require. The benefit, once brought up, is speed in the tens of MHz – sufficient for software development.
  • The actual prototype silicon is the fifth engine, used for bring-up. Post-silicon debug and test techniques are finding their way into pre-silicon given the ongoing shift left. Using software for verification holds the promise of better verification re-use across all five engines, all the way into post-silicon.
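As a rough illustration of the speed gap this list describes, the back-of-the-envelope sketch below converts engine speed into wall-clock time for a billion-cycle OS boot; the specific clock rates are illustrative midpoints of the ranges quoted above, not benchmark data.

    // Back-of-the-envelope sketch: wall-clock time for the ~1e9 cycles of an
    // OS boot on each pre-silicon engine, using the speed ranges quoted in
    // the list above (the exact speeds are illustrative midpoints).
    #include <cstdio>

    int main() {
        struct Engine { const char* name; double hz; };
        const Engine engines[] = {
            {"RTL simulation (~1 kHz)",   1e3},
            {"Emulation (~1 MHz)",        1e6},
            {"FPGA prototype (~20 MHz)",  2e7},
        };
        const double boot_cycles = 1e9;  // "billions of cycles" to boot an OS

        for (const Engine& e : engines)
            std::printf("%-28s %12.2f hours\n",
                        e.name, boot_cycles / e.hz / 3600.0);
        return 0;
    }

At simulation speeds the boot takes days of wall-clock time, in emulation minutes, and on an FPGA prototype under a minute – which is why software development gravitates to the hardware-assisted engines.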

Advantages Of Using FPGAs For ASIC Prototyping

FPGA providers have been pursuing aggressive roadmaps. Single FPGA devices now nominally hold up to 20 million ASIC gates; at utilization rates of 60%, eight-FPGA systems promise to hold almost 100 million gates (8 × 20 MG × 0.6 ≈ 96 MG), which makes them large enough for a fair share of today’s design starts. The key advantage of FPGA-based systems is the speed that can be achieved, and the main volume of FPGA-based prototypes today is shipped to enable software development and sub-system validation. They are also relatively portable, so we have seen customers use FPGA-based prototypes successfully to deliver pre-silicon representations of the design to their own customers for demonstration and software development purposes.

Factors That May Limit The Growth Of The Technique

There certainly is a fair amount of growth out there for FPGA-based prototyping, but the challenge of long bring-up times often defeats the purpose of early availability. For complex designs requiring careful partitioning and timing optimization, we have seen cases in which the FPGA-based prototype did not become available until silicon was back. Another limitation is that debug insight into the hardware is very limited compared to simulation and processor-based emulation. Hardware probes can be inserted, but they reduce the speed of execution because of data logging. Consequently, FPGA-based prototypes find most adoption in the later stages of projects, when the RTL has become stable and the focus can shift to software development.

The Future For Such Techniques

All prototyping techniques are increasingly used in combination. Emulation and RTL simulation are combined to achieve “simulation acceleration”. Emulation and transaction-level models with Fast Models from ARM are combined to accelerate operating system bring-up and software-driven testing. Emulation and FPGA-based prototyping are paired to combine the fast bring-up of new portions of the design in emulation with the fast execution of stable portions in FPGA-based prototyping. As in the recent introduction of the Cadence Protium FPGA-based prototyping platform, processor-based emulation and FPGA-based prototyping can share the same front-end to significantly accelerate FPGA-based prototyping bring-up. At this point all major EDA vendors have announced a suite of connected engines (Cadence in May 2011, Mentor in March 2014 and Synopsys in September 2014). It will be interesting to see how this continuum of engines grows further together to enable the most efficient prototyping at different stages of a development project.

Blog Review – Monday, February 09, 2015

Monday, February 9th, 2015

Arthur C Clarke interview; Mastering Zynq; The HAPS and the HAPS-nots; Love thy customer; What designers want; The butterfly effect for debug

A nostalgic look at an AT&T and MIT conference by Artie Beavis, ARM, has a great video interview with Arthur C Clarke. It is fascinating to see the man himself envisage mobile connectivity: devices that send information to friends, the exchange of pictorial information and data, and the ‘completely mobile’ telephone, as well as looking forward to receiving signals from outer space.

A video tutorial presented by Dr Mohammad S Sadri, Microelectronic Systems Design Research Group at Technische Universität Kaiserslautern, Germany, shows viewers how to create AXI-based peripherals in the Xilinx Zynq SoC’s programmable logic. Steve Leibson, Xilinx, posts the video. Dr Sadri may appear a little awkward with the camera rolling, but he clearly knows his stuff and the 23-minute video is informative.

Showing a little location envy, Michael Posner, Synopsys, visited his Californian counterparts and, in between checking out the gym and cafeteria facilities, caught up on FPGA-based prototype debug and HAPS.

Good news from the Semiconductor Industry Association as Falan Yinug reports on record-breaking sales in 2014 and quarterly growth. Who bought what makes interesting – and reassuring – reading.

Although hit with the love bug, McKenzie Mortensen, IPextreme, does not let her heart rule her head when it comes to customer relations. She presents the company’s good (customer) relationship guide in this blog.

A teaser of survey results from Neha Mittal, Arrow Devices, shows what design and verification engineers want. Although the survey is open to more respondents until February 15, the results received so far are a mix of the predictable and some surprises, all with the option to see disaggregated, or specific, responses for each question.

From bugs to butterflies, Doug Koslow, Cadence, considers the butterfly effect in verification and presents some sound information and graphics to show the benefits of the company’s SimVision.

Caroline Hayes, Senior Editor

Blog Review – Monday, February 2, 2015

Monday, February 2nd, 2015

2015’s must-have – a personal robot; Thumbs up for IP access; USB 3.1 has landed; Transaction recap; New talent required; Structuring medical devices; MEMS sensors webinar

Re-living a youth spent watching TV cartoons, Brad Nemire, ARM, marvels at the Personal Robot created by Robotbase. It uses an ARM-based board powered by a quad-core Qualcomm Krait CPU, so he interviewed the creator, Duy Huynh, Founder and CEO of Robotbase, and found out more about how it was conceived and executed. I think I can guess what’s on Nemire’s Christmas list already.

Getting a handle on security access to big data, Michael Ford, Mentor Graphics, suggests a solution for accessing technology IP or patented technology without resorting to the extreme measures shown in films and TV.

Celebrating the integration of USB 3.1 in the Nokia N1 tablet and other, upcoming products, Eric Huang, Synopsys, ties this news in with access to “the best USB 3.1 webinar in the universe”, which – no great surprise – is hosted by Synopsys. He also throws in some terrible jokes – a blog with something for everyone.

A recap on transaction-based verification is provided by Axel Scherer, Cadence, with the inevitable conclusion that the company’s tools meet the task. The blog’s embedded video is simple, concise and informative and worth a click.

Worried about the lack of new, young engineers entering the semiconductor industry, Kands Manickam, IPextreme, questions the root causes of the stagnation.

A custom ASIC and ASSP microcontroller combine to create the Struix product, and Jakob Nielsen, ON Semiconductor, explains how this structure can meet medical and healthcare design parameters with a specific sensor interface.

What’s the IoT without MEMS sensors? Tim Menasveta, ARM, shows the way to an informative webinar: Addressing Smart Sensor Design Challenges for SoCs and IoT, hosted in collaboration with Cadence, using its Virtuoso and MEMS Convertor tools and the Cortex-M processors.

Caroline Hayes, Senior Editor

Blog Review – Monday, January 26, 2015

Monday, January 26th, 2015

Finding fault tolerances with Cortex-R5; nanotechnology thinks big; Cadence – always talking; mine’s an IEEE on ice; IP modeling

The inherent fault tolerance of ARM’s Cortex-R5 processors is explored and expanded upon by Neil Werdmuller, ARM, in an informative blog. Reading this post, it is evident that it is as much about the tools and ecosystem as the processor technology.

Nanotechnology is a big subject, and Catherine Bolgar, Dassault Systemes, tackles this overview competently, with several relevant links in the post itself.

Harking back to CES, Brian Fuller, Cadence, shares an interesting video from the show, where Ty Kingsmore, Realtek Semiconductor, talks the talk about always-on voice applications and their power cost.

A special nod has to be given to Arthur Marris, Cadence, who travelled to Atlanta for the IEEE 802.3 meeting but managed to sightsee, and includes a photo in his post of the vault that holds the recipe for Coca-Cola. He also hints at the ‘secret formula’ for the 2.5 and 5G PHY and automotive proposals for the standard. (Another picture shows delegates’ tables, but there were no iconic bottles to be seen anywhere – missed marketing opportunity?)

In conversation with leading figures in the world of EDA, Gabe Moretti considers the different approaches to IP modeling in today’s SoC designs.

By Caroline Hayes, Senior Editor.

The Various Faces of IP Modeling

Friday, January 23rd, 2015

Gabe Moretti, Senior Editor

Given their complexity, the vast majority of today’s SoC designs contain a high number of third-party IP components.  These can be developed outside the company or by another division of the same company.  Either way, they present the same type of obstacle to easy integration, and they require one model, or multiple types of models, in order to minimize the integration cost in the final design.

One generally thinks of models when talking about verification but, as Frank Schirrmeister, Product Marketing Group Director at Cadence, reminded me, there are three major purposes for modeling IP cores, and each purpose requires different models.  In fact, Bernard Murphy, Chief Technology Officer at Atrenta, identified even more uses of models during our interview.

Frank Schirrmeister listed performance analysis, functional verification, and software development support as the three major uses of IP models.

Performance Analysis

Frank points out that one of the activities performed during this type of analysis is examining the interconnect between the IP and the rest of the system.  This activity does not require a complete model of the IP.  Cadence’s Interconnect Workbench creates the model of the component interconnect by running different scenarios against the RT-level model of the IP.  Clearly a tool like Palladium is used, given the size of the RTL model being simulated.  So to analyze, for example, an ARM AMBA interconnect, engineers will use simulations representing what the traffic of a peripheral may be and what the typical processor load may be, and apply the resulting behavior models to the details of the interconnect to analyze the performance of the system.

Drew Wingard, CTO at Sonics remarked that “From the perspective of modeling on-chip network IP, Sonics separates functional verification versus performance verification. The model of on-chip network IP is much more useful in a performance verification environment because in functional verification the network is typically abstracted to its address map. Sonics’ verification engineers develop cycle accurate SystemC models for all of our IP to enable rapid performance analysis and validation.

“For purposes of SoC performance verification, the on-chip network IP model cannot be a true black box because it is highly configurable. In the performance verification loop, it is very useful to have access to some of the network’s internal observation points. Sonics IP models include published observation points to enable customers to look at, for example, arbitration behaviors and queuing behaviors so they can effectively debug their SoC design. Sonics also supports the capability to ‘freeze’ the on-chip network IP model, which turns it into a configured black box as part of a larger simulation model. This is useful in the case where a semiconductor company wants to distribute a performance model of its chip to a system company for evaluation.”
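To illustrate what a published observation point might look like, here is a minimal, hypothetical SystemC sketch in the spirit of Wingard’s description: a cycle-accurate queue model that exposes its occupancy for performance debugging. The module and port names are invented for illustration; they are not Sonics APIs.

    // Hypothetical sketch of a "published observation point": a cycle-
    // accurate SystemC queue model that exposes its occupancy so a
    // performance testbench can watch arbitration backlog.
    #include <systemc.h>
    #include <deque>

    SC_MODULE(QueueModel) {
        sc_in<bool>      clk;
        sc_in<bool>      push, pop;
        sc_out<unsigned> occupancy;   // the published observation point

        std::deque<int> q;

        void step() {
            if (push.read()) q.push_back(0);             // enqueue a request
            if (pop.read() && !q.empty()) q.pop_front(); // dequeue on grant
            occupancy.write(static_cast<unsigned>(q.size()));
        }

        SC_CTOR(QueueModel) {
            SC_METHOD(step);
            sensitive << clk.pos();
            dont_initialize();
        }
    };

A testbench can then trace occupancy over time to debug queuing behavior while the rest of the model stays a black box.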

Bernard Murphy, Chief Technology Officer, Atrenta, noted: “Hierarchical timing modeling is widely used on large designs, but cannot comprehensively cover timing exceptions which may extend beyond the IP. So you have to go back to the implementation model.”  Standards, of course, make engineers’ jobs easier.  He continued: “SDC for constraints and ILM for timing abstraction are probably largely fine as-is (apart from continuing refinements to deal with shrinking geometries).”

Functional Verification

Tom De Schutter, Senior Product Marketing Manager, Virtualizer – VDK, Synopsys, said that “the creation of a transaction-level model (TLM) representing commercial IP has become a well-accepted practice. In many cases these transaction-level models are being used as the golden reference for the IP along with a verification test suite based on the model. The test suite and the model are then used to verify the correct functionality of the IP. SystemC TLM-2.0 has become the standard way of creating such models. Most commonly a SystemC TLM-2.0 LT (Loosely Timed) model is created as reference model for the IP, to help pull in software development and to speed up verification in the context of a system.”
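As a concrete, if simplified, illustration of the abstraction level De Schutter describes, the sketch below shows a minimal SystemC TLM-2.0 LT target: an IP reduced to a register map plus a loose access latency. The register layout and the 10 ns latency are invented for illustration and are not taken from any vendor’s reference model.

    // Minimal sketch of a SystemC TLM-2.0 LT reference model: the IP is
    // abstracted to register reads/writes plus an approximate delay.
    #include <cstdint>
    #include <systemc.h>
    #include <tlm.h>
    #include <tlm_utils/simple_target_socket.h>

    struct SimpleIpModel : sc_core::sc_module {
        tlm_utils::simple_target_socket<SimpleIpModel> socket;
        uint32_t ctrl_reg = 0, status_reg = 1;   // hypothetical registers

        SC_CTOR(SimpleIpModel) : socket("socket") {
            socket.register_b_transport(this, &SimpleIpModel::b_transport);
        }

        // Blocking transport: the whole IP behavior, loosely timed.
        void b_transport(tlm::tlm_generic_payload& trans,
                         sc_core::sc_time& delay) {
            uint32_t* data = reinterpret_cast<uint32_t*>(trans.get_data_ptr());
            switch (trans.get_address()) {
            case 0x0:   // CTRL register, read/write
                if (trans.is_write()) ctrl_reg = *data;
                else                  *data = ctrl_reg;
                break;
            case 0x4:   // STATUS register, read-only
                if (trans.is_read()) *data = status_reg;
                break;
            default:
                trans.set_response_status(tlm::TLM_ADDRESS_ERROR_RESPONSE);
                return;
            }
            delay += sc_core::sc_time(10, sc_core::SC_NS);  // loose timing
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

Because b_transport returns an annotated delay rather than advancing time cycle by cycle, a model like this runs orders of magnitude faster than RTL, which is what makes it usable both as a golden reference during verification and for early software bring-up.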

Frank Schirrmeister noted that verification requires the definition of the IP at an IP-XACT level to drive the different verification scenarios; Cadence’s Interconnect Workbench generates the appropriate RTL models from a description of the architecture of the interconnects.

IEEE 1685, “Standard for IP-XACT, Standard Structure for Packaging, Integrating and Re-Using IP Within Tool-Flows,” describes an XML Schema for meta-data documenting Intellectual Property (IP) used in the development, implementation and verification of electronic systems and an Application Programming Interface (API) to provide tool access to the meta-data. This schema provides a standard method to document IP that is compatible with automated integration techniques. The API provides a standard method for linking tools into a System Development framework, enabling a more flexible, optimized development environment. Tools compliant with this standard will be able to interpret, configure, integrate and manipulate IP blocks that comply with the proposed IP meta-data description.

David Kelf, Vice President of Marketing at OneSpin Solutions said: “A key trend for both design and verification IP is the increased configurability required by designers. Many IP vendors have responded to this need through the application of abstraction in their IP models and synthesis to generate the required end code. This, in turn, has increased the use of languages such as SystemC and High Level Synthesis – AdaptIP is an example of a company doing this – that enable a broad range of configuration options as well as tailoring for specific end-devices. As this level of configuration increases, together with synthesis, the verification requirements of these models also change. It is vital that the final model to be used matches the original pre-configured source that will have been thoroughly verified by the IP vendor. This in turn drives the use of a range of verification methods, and Equivalency Checking (EC) is a critical technology in this regard. A new breed of EC tools is necessary for this purpose that can process multiple languages at higher levels of abstraction, and deal with various synthesis optimizations applied to the block. As such, advanced IP configuration requirements have an effect across many tools and design flows.”

Bernard Murphy pointed out that “assertions are in a very real sense an abstracted model of an IP. These are quite important in formal analyses and also in quality/coverage analysis at full-chip level. There is the SVA standard for assertions, but beyond that there is a wide range of expression, from very complex assertions to quite simple ones, with no real bounds on complexity, scope, etc. It may be too early to suggest any additional standards.”
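To make Murphy’s point concrete: an SVA property such as “every request must be granted within four clock cycles” abstracts the IP to a single behavioral rule. The sketch below expresses the same check as a SystemC monitor – a C++ analogue chosen to stay in one language here, with an invented handshake protocol rather than any real IP’s interface.

    // A C++ analogue of a simple assertion, written as a SystemC monitor:
    // "every request must be granted within four clock cycles".
    #include <systemc.h>

    SC_MODULE(HandshakeMonitor) {
        sc_in<bool> clk, req, gnt;

        void check() {
            int waited = -1;                 // -1: no request outstanding
            while (true) {
                wait();                      // next rising clock edge
                if (req.read() && waited < 0)
                    waited = 0;              // request seen, start counting
                if (waited >= 0) {
                    if (gnt.read())
                        waited = -1;         // granted in time
                    else if (++waited > 4)
                        SC_REPORT_ERROR("HandshakeMonitor",
                                        "req not granted within 4 cycles");
                }
            }
        }

        SC_CTOR(HandshakeMonitor) {
            SC_THREAD(check);
            sensitive << clk.pos();
        }
    };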

Software Development

Tom De Schutter pointed out that “as SystemC TLM-2.0 LT has been accepted by IP providers as the standard, it has become a lot easier to assemble systems using models from different sources. The resulting model is called a virtual prototype and enables early software development alongside the hardware design task. Virtual prototypes have also become a way to speed up verification, either of a specific custom IP under test or of an entire system setup. In both scenarios the virtual prototype is used to speed up software execution as part of a so-called software-driven verification effort.

A model is typically provided as a configurable executable, thus avoiding the risk of creating an illegal copy of the IP functionality. The IP vendor can decide the internal visibility, and typically limits it to whatever is required to enable software development, which usually means insight into certain registers and memories is provided.”

Frank Schirrmeister pointed out that these models are hard to create or, if they exist, may be hard to get.  Pure virtual models like ARM Fast Models connected to TLM models can be used to obtain a fast simulation of a system boot.  Hybrid use models serve developers of lower-level software, such as drivers: to build a software development environment, engineers will, for example, use an ARM Fast Model and plug in the actual RTL, connected through a transactor, to enable driver development.  ARM Fast Models connected with, say, a graphics system running in emulation on a Palladium system is an example of such an environment.

ARM Fast Models are virtual platforms used mostly by software developers without the need for expensive development boards.  They also comply with the TLM-2.0 interface specification for integration with other components in the system simulation.

Other Modeling Requirements

Although there are three main modeling requirements, complex IP components require further analysis in order to be used in designs implemented in advanced processes.  A discussion with Steve Brown, Product Marketing Director, IP Group at Cadence, covered power analysis requirements.  Steve’s observations can be summed up thus: “For power analysis, designers need power consumption information during the IP selection process: how does the IP match the design criteria, and how does it differentiate itself from other IP with respect to power use?  Here engineers even need SPICE models to understand how I/O signals work.  Signal integrity is crucial in integrating the IP into the whole system.”

Bernard Murphy added: “Power intent (UPF) is one component, but what about power estimation? Right now we can only run slow emulations for full-chip implementation, then roll up into a power calculation.  Although we have UPF as a standard, estimation is in its early stages. IEEE 1801 (UPF) is working on extensions.  Also there are two emerging activities – P2415 and P2416 – working respectively on energy proportionality modeling at the system level and modeling at the chip/IP level.”
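As a sketch of the roll-up Murphy mentions, the toy program below accumulates a chip-level energy estimate from per-IP power-state traces (energy = power × time in each state). The block names, states and numbers are invented for illustration; a real flow would extract the traces from emulation and the power numbers from characterization.

    // Toy sketch of a chip-level energy roll-up from per-IP power-state
    // traces. All block names, states and numbers are invented.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct StateInterval {
        std::string state;    // e.g. "active", "idle", "off"
        double      seconds;  // time spent in this state
    };

    int main() {
        // Hypothetical per-state power models (watts) for two IP blocks.
        std::map<std::string, std::map<std::string, double>> power = {
            {"cpu", {{"active", 0.50}, {"idle", 0.05}, {"off", 0.0}}},
            {"gpu", {{"active", 1.20}, {"idle", 0.10}, {"off", 0.0}}},
        };
        // Hypothetical activity traces, e.g. rolled up from an emulation run.
        std::map<std::string, std::vector<StateInterval>> trace = {
            {"cpu", {{"active", 0.8}, {"idle", 0.2}}},
            {"gpu", {{"active", 0.3}, {"off", 0.7}}},
        };

        double joules = 0.0;
        for (const auto& [ip, intervals] : trace)
            for (const auto& iv : intervals)
                joules += power[ip][iv.state] * iv.seconds;  // E = P * t

        std::cout << "Estimated energy: " << joules << " J\n";  // 0.77 J
        return 0;
    }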

IP Marketplace, a recently introduced web portal from eSilicon, makes power estimation of a particular IP over a range of processes very easy and quick.  “The IP MarketPlace environment helps users avoid complicated paperwork; find which memories will best help meet their chip’s power, performance or area (PPA) targets; and easily isolate key data without navigating convoluted data sheets” said Lisa Minwell, eSilicon’s senior director of IP product marketing.

Brad Griffin, Product Marketing Director, Sigrity Technology at Cadence, talked about the physical problems that can arise during integration, especially where memories are concerned.  “PHY and controllers can come from the same vendor or from different ones.  The problem is to get the correct signal integrity and power integrity required from a particular PHY.  For example, a cell phone using an LPDDR4 interface on a 64-bit bus means a lot of simultaneous switching.  So IP vendors, including Cadence of course, provide IBIS models.  But Cadence goes beyond that.  We have created virtual reference designs, and using the Sigrity technology we can simulate and show that we can match the actual reference design.  And then the designer can also evaluate types of chip package and choose the correct one.  It is important to be able to simulate the chip, the package, and the board together, and Cadence can do that.”

Another problem facing SoC designers is Clock Domain Crossing (CDC).  Bernard Murphy noted that: “Full-chip flat CDC has been the standard approach but is very painful on large designs. There is a trend toward hierarchical analysis (just as happened in STA), which requires hierarchical models. There are no standards for CDC; individual companies have individual approaches – e.g., Atrenta has its own abstraction models. Some SDC standardization around CDC-specific constraints would be welcome, but this area is still evolving rapidly.”

Conclusion

Although on the surface the problem of providing models for an IP component may appear straightforward and well defined, in practice it is neither well defined nor standardized.  Each IP vendor has its own set of deliverable models and often its own formats.  The task of companies like Cadence and Synopsys, which sell their own IP and also provide EDA tools to support other IP vendors, is quite complex.  Clearly, although some standards development work is ongoing, accommodating present offerings and future requirements under one standard is challenging and will certainly require compromises.

Blog Review – Monday, January 19, 2015

Monday, January 19th, 2015

Test case for lazybones; Mongoose in space, heads for Pluto; solar tracker design; new age shopping; IoT insight – the real challenge

The size of SoCs, security around EDA tools and the effort needed to test tool issues are all hurdles that can be surmounted, asserts Uwe Simm, Cadence. His comprehensive post explains how the Test Case Optimizer (TCO) – a small, generic utility (no special tools or design styles required) – can strip down simulation source files and reduce overall source input data size by over 99%.

After a stellar break, NASA’s New Horizons spacecraft is closing in on Pluto. Not only does it carry the ashes of astronomer Clyde Tombaugh, the discoverer of Pluto, it has a Mongoose on board – in the form of a MIPS-based Mongoose-V chip. Alexandru Voica, Imagination, tells us more about the rad-hard device manufactured by Synova.

An interesting project, and a worthy one too, is relayed in the blog post by John McMillan (Mentor Graphics). Cool Earth Solar designs and develops solar products, and uses PADS to develop some of the monitoring hardware for the equipment that tracks the sun and transmits data for the project.

A subject close to my heart, shopping, is explored by David McKinney, Intel, who hosts a guest blog from Jon Bird, Y&R Labstore, on how to harness the data that make up shopping patterns without freaking out shoppers. A startlingly obvious observation is that “retailers must first and foremost be shopper-centric”, but what does that mean in the digital age and the Internet of Things era?

Demonstrating a helpful nature, David Blaza, ARM, points us to a report by McKinsey about the Internet of Things. As well as Blaza’s observation placing ARM’s Cortex-M devices at the edge of the IoT and ARM Cortex-A at the hub and gateway level, I was struck by Joep Van Beurden’s observation that the IoT is not about price or power but about connecting the hardware in a smart way to the cloud.

By Caroline Hayes, Senior Editor

Blog Review – Monday, January 12, 2015

Monday, January 12th, 2015

New year resolutions from ARM, IPextreme; CES highlights from Cadence, Synopsys, ARM partners; Mentor looks back at 2014; Imagination looks ahead

It wouldn’t be a January Blog Review without a mention of resolutions. Jacob Beningo, ARM, is disappointed that DeLoreans and hover boards are not filling the skies as predicted in Back to the Future, but he does believe that 2015 should be the year of sound embedded software development resolutions.

A challenge is thrown down by McKenzie Mortensen, IPextreme, to ensure the company meets its new year resolution to update its blog: if you find that the company has missed posting a blog by midnight Wednesday (Pacific time), you can claim a $100 voucher for a shop or restaurant of your choice.

It wouldn’t be the week after CES if there were no mentions of ‘that show’. Michael Posner, Synopsys, looked beneath the cars, entertainment devices and robots to focus on sensors (and to mention the DesignWare Sensor and Control Subsystem).

Brian Fuller, Cadence, interviews Martin Lund, senior vice president for Cadence’s IP Group, at CES. Lund has some interesting observations about audio and video demos at the show and insight into the role of IP.

ARM was everywhere at CES, and Brad Nemire, ARM, has some great videos on his blog, with demos of partners’ devices, and also a link to a Bloomberg interview with CEO Simon Segars.

International finance was not covered at CES, but the mobile money payment services described in the blog by Catherine Bolgar, Dassault Systemes, tick a lot of ‘CES criteria’ – connectivity, innovation and commercial applications – as well as the Vegas connection with cash. It is an enlightening view of how technology can help those deemed too expensive for conventional banking institutions to reach and serve.

Looking back at 2014, Vern Wnek, Mentor, considers Alcatel-Lucent, the overall winner of the longest-running EDA awards, the Technology Leadership Awards. The award-winning project, the 1X100GE packet module, includes 100Gb/s of total processing power and signals operating at 6/12/28GHz.

A world without cables is the vision of Alexandru Voica, Imagination, who checks just how close a cable-free life is – encouraged with some introductions from the company, of course.

By Caroline Hayes, Senior Editor.

Blog Review – Thursday, January 8, 2015

Thursday, January 8th, 2015

CES, no I mean CPS; CES 2015, 2016 and beyond; Connected cars at CES; ISO 26262 help; Constraint coding clinic

No doubt anticipating a wearables deluge at CES, Margaret Schmitt, Ansys, cleverly uses this to her advantage and tailors her blog not to ‘that Vegas show’ but to arguing the point for CPS (Chip Package System) co-analysis for shareable, workable data. She otherwise avoids all mention of CES but reminds readers that the company will be at DesignCon later this month.

This time of year it is always a trial to find decent blog material: if it’s not a review of 2014, it will be a preview of trends at CES, but some bloggers do it well. David Blaza goes behind the glitz and straight to the semiconductor business of CES. He takes the view that looking at the devices being launched will reveal more about CES 2016 or 2017 than this week’s show.

Sounding a little world-weary (or is that Vegas-weary?) Dick James and Jim Morrison, ChipWorks, fought the crowds at CES Unveiled, the press preview. Their tech-fatigue is entertaining and they also came up with five top themes. Most you could guess but the connected car is a new addition. It is a theme embraced by Drue Freeman, NXP, which is not surprising as the company is showcasing its RoadLINK secure connected car technology in Vegas this week.

Intel CEO Brian Krzanich delivered a keynote at CES, illustrating how computer and human interactions are vital in this world of mobile computing everywhere. Scott Apeland refers to it in this blog about Intel’s RealSense technology and his enthusiasm knows no bounds. He includes descriptions of application examples and has sympathy for ‘those who haven’t had the good fortune’ to try the technology first hand. All that can be put right at the company’s booth.

This industry is the kind that wants to share and help fellow engineers, and Kurt Shuler, Arteris, does just that with a glossary of ISO 26262 abbreviations and acronyms to help those attempting to wade through the functional safety standard.

Another helpful, detailed and timely blog is from Daniel Bayer, Cadence, discussing generative list pseudo-methods in constraints, for modelling and debugging. It is timely, as Ethernet-based communication is increasing in popularity and will require a different take on constraint coding.
