Posts Tagged ‘MEMS’


Blog Review – Monday February 2, 2015

Monday, February 2nd, 2015

2015’s must-have – a personal robot, Thumbs up for IP access, USB 3.1 has landed, Transaction recap, New talent required, Structuring medical devices, MEMS sensors webinar

Re-living a youth spent watching TV cartoons, Brad Nemire, ARM, marvels at the Personal Robot created by Robotbase. It uses an ARM-based board powered by a Quad-core Qualcomm Krait CPU, so he interviewed the creator, Duy Huynh, Founder and CEO of Robotbase and found out more about how it was conceived and executed. I think I can guess what’s on Nemire’s Christmas list already.

Getting a handle on security access to big data, Michael Ford, Mentor Graphics, suggests a solution to accessing technology IP or patented technology without resorting to extreme measures shown in films and TV.

Celebrating the integration of USB 3.1 in the Nokia N1 tablet and other, upcoming products, Eric Huang, Synopsys, ties this news in with access to “the best USB 3.1 webinar in the universe”, which – no great surprise – is hosted by Synopsys. He also throws in some terrible jokes – a blog with something for everyone.

A recap on transaction-based verification is provided by Axel Scherer, Cadence, with the inevitable conclusion that the company’s tools meet the task. The blog’s embedded video is simple, concise and informative and worth a click.

Worried about the lack of new, young engineers entering the semiconductor industry, Kands Manickam, IPextreme, questions the root causes of the stagnation.

A custom ASIC and ASSP microcontroller combine to create the Struix product, and Jakob Nielsen, ON Semiconductor, explains how this structure can meet medical and healthcare design parameters with a specific sensor interface.

What’s the IoT without MEMS sensors? Tim Menasveta, ARM, shows the way to an informative webinar: Addressing Smart Sensor Design Challenges for SoCs and IoT, hosted in collaboration with Cadence, using its Virtuoso and MEMS Convertor tools and the Cortex-M processors.

Caroline Hayes, Senior Editor

Smart Bluetooth, Sensors and Hackers Showcased at CES 2015

Wednesday, January 14th, 2015

Internet of Things (IoT) devices ranged from Bluetooth gateways and smart sensors to intensive cloud-based data processors and hackathons – all powered by ARM.

By John Blyler, Editorial Director

Connectivity continues to be a major theme at the International Consumer Electronics Show (CES). The only difference each year is the way in which the connectivity is expressed in products. For example, this year’s (2015) event showcased an increase in gateway networking devices that permitted Bluetooth Low Energy-equipped gadgets to connect to a WiFi router or other interfaces with the outside world.

According to a recent IHS report, the global market for low-power Bluetooth Smart integrated circuits (ICs) will see shipments rise nearly tenfold over the next five years. This is good news for very low power wireless semiconductor intellectual property (IP) and device manufacturers in the wearable and connected markets. One example out of many is Atmel’s BTLC1000 chip, which the company claims will improve battery life by over 30% compared with current devices. The chip architecture is based on an ARM® Cortex®-M0 processor.

Bluetooth Smart is the intelligent, low-power version of traditional Bluetooth wireless technology that works with existing smartphone and tablet applications. The technology brings smart connectivity to everyday devices such as toothbrushes, heart-rate monitors, fitness devices and more. (See Wearable Technologies Meet Bluetooth Low Energy.)

For the IoT to be useful, sensor data at the edge of the connectivity node must be communicated to the cloud for high-performance processing of all the IoT data. This year’s CES showcased a number of multicore 64-bit devices like NVIDIA’s ARM-based Tegra X1. Another example of a high-end computing system is Samsung’s Exynos 5422 processor, which is based on ARM’s big.LITTLE™ technology and contains four Cortex-A15 cores and four Cortex-A7 cores. These types of products can run Android and drive 4K video displays on a 28nm process node.

Team mbed

Many embedded software developers enjoy the challenge of creating something new. Today, it is fashionable to call these people hackers, in part because they exhibit the prerequisite mindset, namely, “one who programs enthusiastically…”  – from the Hacker’s Jargon File, circa 1988.

Special events called hackathons have been created for these enthusiastic programmers to practice and demonstrate their skills. For example, back in August of 2014, ARM provided a group of hackers known as Team mbed™ with hardware and software development platforms for the AT&T Hackathon at Super Mobility Week. Last week, Team mbed returned to participate in the AT&T Hackathon at CES 2015. The team consisted of Internet of Things (IoT) industry participants from Freescale, Multi-Tech, Nordic Semiconductor, STMicroelectronics, u-blox and ARM. The team was supplied with a number of cool resources including ARM mbed-enabled development boards, connectivity modules, and a variety of different actuators and sensors. These resources, combined with available guidance and inspiration, enabled the developers to bring their own ideas to reality.

Following the show’s IoT theme, these software developers were given a ‘smorgasbord’ of sensors and actuators to go along with a variety of hardware platforms and I/O connectivity subsystems including Bluetooth®, cellular, Ethernet, and Wi-Fi®. Recent projects built around this IoT platform are highlighted at hackster.io/mbed (see Figure 1).

Figure 1: Krisztian Flautner, GM of IoTBU at ARM, discusses this new mbed offering that sets out to simplify and speed up the creation and deployment of Internet of Things (IoT) products

Next to connectivity, sensors are the defining component of any IoT technology. Maybe that is why sensor companies have been a growing presence on the CES show floor. This year, sensor-related vendors accounted for over 10% of total exhibitors. Much of the new IoT sensor technology is implemented using tiny MEMS physical structures. At CES, a relatively new company known as InvenSense announced a sensor system on chip that combines an ARM Cortex-M0 processor with two motion co-processors (see Figure 2). This combination enables 6-axis motion measurement all in a 3mm x 3mm x 1mm package. To complete the package, the device has its own RTOS that is compatible with Android Lollipop.

Figure 2: InvenSense chip with sensors.

Such sensor systems on chip would make a fine addition to the resources available to Team mbed at their next hackathon.
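For readers curious about what those motion co-processors actually compute, below is a minimal Python sketch of one common 6-axis fusion step: a complementary filter that blends gyroscope and accelerometer readings into a pitch estimate. It is an illustration only; the sample values, sample rate and blend factor are assumptions, not InvenSense specifics.

```python
# Hedged sketch: a first-order complementary filter fusing accelerometer and
# gyroscope readings into a pitch estimate, the kind of 6-axis fusion a motion
# co-processor typically offloads from the host CPU.
# The 0.98 blend factor, 100 Hz rate and sample values are illustrative assumptions.
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """samples: iterable of (ax, ay, az, pitch_rate) tuples.
    Accelerations in g, gyro pitch rate in deg/s; returns pitch estimates in degrees."""
    pitch = 0.0
    estimates = []
    for ax, ay, az, gy in samples:
        # Long-term reference: pitch from the gravity vector seen by the accelerometer.
        accel_pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
        # Short-term reference: integrate the gyroscope rate.
        gyro_pitch = pitch + gy * dt
        # Blend: trust the gyro over short intervals, the accelerometer over the long term.
        pitch = alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
        estimates.append(pitch)
    return estimates

if __name__ == "__main__":
    # Device tilted roughly 10 degrees and held still (hypothetical readings).
    still = [(-0.17, 0.0, 0.98, 0.0)] * 200
    print(round(complementary_filter(still)[-1], 1))
```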

Research Review – March 11 2014

Tuesday, March 11th, 2014

By Caroline Hayes, Senior Editor

Transistor flexes circuits
Source-gate transistors created at the University of Surrey in the UK have been used in a project by Dr Radu Sporea, Professor Ravi Silva and Professor John Shannon of the University’s Advanced Technology Institute, together with scientists from Philips Research, to improve performance in the thin-film digital circuits used within flexible devices.
The transistors control the electric current as it enters the semiconductor channel from the circuit, an approach which has been shown to decrease circuit malfunction, improve energy efficiency and minimize fabrication costs, all factors for consideration in gadgets made on flexible plastic or clothing at low price points.
Professor Ravi Silva, says, “This work is a classic example of academia working closely with industry for over two decades to perfect the operations of a completely revolutionary device concept suitable for large area electronics. This device architecture can be applied to traditional disordered materials such as poly-silicon and organic materials, and possibly state-of-the-art graphene thin films, a material that is now backed by the EU Flagship programme of €1Bn over the next ten years.”

Tailor-made medication

(a) Schematic image of the label-free biosensor based on a MEMS Fabry–Perot interferometer; (b) Schematic diagram of the transmission spectrum of the Fabry–Perot interferometer on the biosensor; (c) Photograph of the developed MEMS Fabry–Perot interferometric biosensor.

While label-free MEMS- (microelectromechanical system) based sensors detect target molecules by measuring the deflection of cantilevers caused by biomolecular adsorption, they have poor sensitivity due to the low efficiency of linearly transducing the mechanical deflection into a readout signal.
Kazuhiro Takahashi and colleagues at Toyohashi University of Technology have improved sensitivity by developing a biosensor based on a MEMS Fabry–Perot interferometer with an integrated photodiode. Using non-linear optical transmittance changes in the Fabry–Perot interference, the theoretical minimum detectable surface stress of the proposed sensor was predicted to be -1µN/m, an improvement of two orders of magnitude over conventional MEMS sensors.
The Fabry–Perot sensor was fabricated using a 4-inch, p-type silicon wafer. The photodiode was integrated into the silicon substrate using ion implantation of phosphorus.
Surface stress sensor using MEMS-based Fabry-Perot interferometer for label-free biosensing – authors: Kazuhiro Takahashi, Hiroki Oyama, Nobuo Misawa, Koichi Okumura, Makoto Ishida, and Kazuaki Sawada.
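As a rough illustration of why an interferometric readout is so sensitive, the following Python sketch evaluates the ideal (lossless) Fabry–Perot transmittance as a function of cavity gap. The wavelength, mirror reflectivity and bias gap are illustrative assumptions, not values from the paper; the point is simply that a few nanometres of membrane deflection noticeably shifts the transmitted intensity reaching the photodiode when the device is biased on the flank of an interference fringe.

```python
# Hedged sketch (not from the paper): the ideal Fabry-Perot (Airy) transmittance
# of an air gap at normal incidence, versus gap size. Reflectivity, wavelength
# and the operating-point gap are illustrative assumptions.
import math

def fp_transmittance(gap_nm, wavelength_nm=850.0, reflectivity=0.8):
    """Ideal lossless Fabry-Perot transmittance for an air gap at normal incidence."""
    finesse_coeff = 4.0 * reflectivity / (1.0 - reflectivity) ** 2
    phase = 2.0 * math.pi * 2.0 * gap_nm / wavelength_nm   # round-trip phase
    return 1.0 / (1.0 + finesse_coeff * math.sin(phase / 2.0) ** 2)

if __name__ == "__main__":
    bias_gap = 2140.0  # nm, hypothetical bias point near the half-maximum of a fringe
    for deflection in (0.0, 1.0, 5.0, 10.0):  # nm of membrane deflection
        t = fp_transmittance(bias_gap + deflection)
        print(f"deflection {deflection:4.1f} nm -> transmittance {t:.3f}")
```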

Hybrid material stores solar energy
Looking at the storage problems of renewable energy, Berkeley Lab researchers at the Joint Center for Artificial Photosynthesis show that nearly 90% of the electrons generated by a hybrid material designed to store solar energy in hydrogen are being stored in the target hydrogen molecules.
Gary Moore, a chemist and principal investigator with Berkeley Lab’s Physical Biosciences Division, led an efficiency analysis study of a photocathode material developed for catalyzing the production of hydrogen fuel from sunlight. The hybrid material was formed by interfacing the semiconductor gallium phosphide with a molecular hydrogen-producing cobaloxime catalyst.
Moore says: “Given the intermittent availability of sunlight, we need a way of using the sun all night long. Storing solar energy in the chemical bonds of a fuel also provides the large power densities that are essential to modern transport systems. We’ve shown that our approach of coupling the absorption of visible light with the production of hydrogen in a single material puts photoexcited electrons where we need them to be, stored in chemical bonds.”
Moore is the corresponding author of “Energetics and efficiency analysis of a cobaloxime-modified semiconductor under simulated air mass 1.5 illumination.” Co-authors are Alexandra Krawicz and Diana Cedeno, in Physical Chemistry Chemical Physics.
Energetics and Efficiency Analysis of a Cobaloxime-Modified Semiconductor at Simulated Air Mass 1.5 Illumination

EU seeks efficient organic solar cells
A pan-European project, ArtESun, sees VTT collaborate with imec (Belgium), Fraunhofer ISE (Germany), Imperial College (U.K.), IKERLAN S.Coop. (Spain), Corning SAS (France), ONYX Solar Energy S.L (Spain), Confidex OY (Finland), Wibicom Inc. (Canada), and SAFC Hitech Ltd. (U.K.) to develop efficient, long life, low production cost, organic solar cells. Ultimately, the FP7 project hopes to bring photovoltaic technologies into thin film production.
In addition to establishing high efficiency materials for cost-effective, non-vacuum production of modules, the project is seeking to understand long-term stability and degradation mechanisms as well as roll-to-roll, additive, non-vacuum coating and printing techniques.

Keep taking the tablets
Smart connected devices continue to grow. International Data Corporation has published figures showing that combined shipments of PCs, tablets, and smartphones, dubbed smart connected devices or SCDs, reached nearly 70 million units in Q4 2013 and nearly 230 million units for the calendar year, a 9.7% growth compared with 2012.
The analyst’s latest forecasts are that the smart connected device market will grow 5.1% in 2014, with tablet shipments expected to grow 17.6%, followed by smartphones with a 10.4% growth for the year, at the expense of the PC market, which contracted 2.4% in Q4 2013.

Future Challenges in Design Verification and Creation

Wednesday, March 23rd, 2016

Gabe Moretti, Senior Editor

Dr. Wally Rhines, Chairman and CEO of Mentor Graphics, delivered the keynote address at the recently concluded DVCon U.S. in San Jose.  The title of the presentation was: “Design Verification Challenges: Past, Present, and Future”.  Although one must know the past and recognize the present challenges, the future ones were those that interested me the most.

But let’s start from the present.  As can be seen in Figure 1, designers today use five major techniques to verify a design.  The techniques are not integrated with each other; they exist as five separate silos within the verification methodology.  The near-future goal, as explained by Wally, is to integrate the verification process.  The work of the Portable Stimulus Working Group within the Accellera Systems Initiative is addressing the problem.  The goal, according to Bill Hodges of Intel, is: “Users should not be able to tell if their job was executed on a simulator, emulator, or prototype”.

Figure 1.  Verification Silos

The present EDA development work addresses the functionality of the design, both at the logical and at the physical level.  But, especially with the growing introduction of Internet of Things (IoT) devices and applications, the issues of security and safety are becoming requirements, and we have not yet learned how to verify device robustness in these areas.

Security

Figure 2, courtesy of Mentor Graphics, encapsulates the security problem.  The number of security breaches increases with every passing day it seems, and the financial and privacy losses are significant.

Figure 2

Chip designers must worry about malicious logic inside the chip, counterfeit chips, and side-channel attacks.  Malicious logic is normally inserted dynamically into the chip using Trojan malware; it must be detected and disabled.  The first thing designers need to do is to implement countermeasures within the chip.  Designers must implement logic that analyzes runtime activity to recognize foreign-induced activity through a combination of hardware and firmware.  Although simulation can be used for verification, static tests that determine the chip performs as specified and does not execute unspecified functions should be used during the development process.  Well-formed and complete assertions can approximate a specification document for the design.
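As a toy illustration of the runtime-activity idea, the hedged Python sketch below checks an observed transaction stream against a whitelist of specified operations and flags anything outside it. The opcode names and the stream are hypothetical; real countermeasures of this kind are implemented in hardware and firmware, not in Python.

```python
# Hedged sketch: a toy runtime-activity monitor. It compares an observed
# transaction stream against the set of operations the specification allows
# and flags anything unspecified. Opcode names and the stream are hypothetical.
ALLOWED_OPS = {"READ_SENSOR", "WRITE_RESULT", "SLEEP", "UPDATE_FW_SIGNED"}

def audit(transaction_stream):
    """Yield (index, op) for every operation not present in the specification."""
    for index, op in enumerate(transaction_stream):
        if op not in ALLOWED_OPS:
            yield index, op

if __name__ == "__main__":
    observed = ["READ_SENSOR", "WRITE_RESULT", "DUMP_KEYS", "SLEEP"]
    for index, op in audit(observed):
        print(f"unspecified activity at transaction {index}: {op}")
```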

Another security threat, the “side-channel attack”, is similar to the Trojan attack but differs in that it takes advantage of back doors left open, either intentionally or not, by the developers.  Back doors are built into systems to deal with special security circumstances by the developers’ institution, but can be used criminally when discovered by unauthorized third parties.  To defend against such an eventuality, designers can use hardened IP or special logic to verify authenticity.  Clearly, during development these countermeasures must be verified and weaknesses discovered.  The question to be answered is: “Is the desired path the only path possible?”

Safety

As the use of electronic systems grows at an increasing pace in all sorts of products, the reliability of such systems grows in importance.  Although many products can be replaced when they fail without serious consequences for the users, an increasing number of system failures put the safety of human beings in great jeopardy.  Dr. Rhines identified in particular systems in the automotive, medical, and aerospace industries.  Safety standards that cover electronic systems have been developed in these industries: specifically, ISO 26262 in the automotive industry, IEC 60601 in the medical field, and DO-254 in aerospace applications.  These certification standards aim to ensure that no harm will come to systems, their operators, or bystanders by verifying the functional robustness of the implementation.

Clearly no one would want a heart pacemaker (Figure 3) that is not fail-safe to be implanted in a living organism.

Figure 3. Implementation subject to IEC 60601 requirements

The certification standards address a safe system development process by requiring evidence that all reasonable system safety objectives are satisfied.  The goal is to avoid the risk of systematic failures or random hardware failures by establishing appropriate requirements and processes.  Before a system is certified, auditors check that each applicable requirement in the standard has been implemented and verified.  They must identify the specific tests used to verify compliance with each requirement and must also be assured that automatic requirements tracking is available for a number of years.

Dr. Rhines presented a slide that dealt with the following question: “Is your system safe in the presence of a fault?”

To answer the question, verification engineers must inject faults into the verification stream.  Doing this helps determine whether the response of the system matches the specification despite the presence of faults.  It also helps developers understand the effects of faults on target system behavior, and it assesses the overall risk.  Wally noted that formal-based fault injection/verification can exhaustively verify the safety aspects of the design in the presence of faults.
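A minimal sketch of that fault-injection loop, written in Python purely for illustration: the same random stimulus runs through a golden model and through a copy with a stuck-at fault injected, and each fault is classified as detected or undetected. The 2-of-3 voter and the fault list are assumptions chosen for brevity, not Mentor’s flow.

```python
# Hedged sketch of fault injection: run identical stimulus through a golden
# model and a faulty copy, then classify each stuck-at fault as detected
# (outputs diverge) or undetected. Design and fault list are illustrative.
import random

def voter(a, b, c):
    """Golden model: 2-of-3 majority voter, a toy safety mechanism."""
    return (a and b) or (a and c) or (b and c)

def faulty_voter(a, b, c, stuck_input, stuck_value):
    """Same logic with one input forced to a stuck-at value."""
    inputs = {"a": a, "b": b, "c": c}
    inputs[stuck_input] = stuck_value
    return voter(inputs["a"], inputs["b"], inputs["c"])

def fault_campaign(num_vectors=1000, seed=1):
    random.seed(seed)
    faults = [(name, value) for name in "abc" for value in (False, True)]
    results = {}
    for fault in faults:
        detected = False
        for _ in range(num_vectors):
            vec = [random.choice((False, True)) for _ in range(3)]
            if voter(*vec) != faulty_voter(*vec, *fault):
                detected = True
                break
        results[fault] = detected
    return results

if __name__ == "__main__":
    for fault, detected in fault_campaign().items():
        print(fault, "detected" if detected else "undetected")
```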

Conclusion

Dr. Rhines focused on the verification aspects during his presentation and his conclusions covered four points.

  • Despite design re-use, verification complexity continues to increase at 3-4X the rate of design creation
  • Increasing verification requirements drive new capabilities for each type of verification engine
  • Continuing verification productivity gains require EDA to abstract the verification process from the underlying engines, develop common environments, methodologies and tools, and separate the “what” from the “how”
  • Verification for security and safety is providing another major wave of verification requirements.

I would like to point out that developments in verification alone are not enough.  What EDA really needs is to develop a system approach to the problem of developing and verifying a system.  The industry has given lip service to system design and the tools available so far still maintain a “silos” approach to the problem.  What is really required is the ability to work at the architectural level and evaluate a number of possible solutions to a well specified requirements document.  Formal tools provide good opportunities to approximate, if not totally implement, an executable requirements document.  Designers need to be able to evaluate a number of alternatives that include the use of mixed hardware and software implementations, analog and mixed-signal solutions, IP re-use, and electro-mechanical devices, such as MEMS.

It is useless or even dangerous to begin development under false assumptions whose impact will be found, if ever, once designers are well into the implementation stage.  The EDA industry is still focusing too much on fault identification and not enough on fault avoidance.

The EDA Industry Macro Projections for 2016

Monday, January 25th, 2016

Gabe Moretti, Senior Editor

How the EDA industry will fare in 2016 will be influenced by the worldwide financial climate. Instability in oil prices, the Middle East wars and the unpredictability of the Chinese market will indirectly influence the EDA industry.  EDA has seen significant growth since 1996, but the growth is indirectly influenced by the overall health of the financial community (see Figure 1).

Figure 1. EDA Quarterly Revenue Report from EDA Consortium

China has been a growing market for EDA tools, and Chinese consumers have purchased a significant number of semiconductor-based products in the recent past.  Consumer products demand is slowing, and China’s financial health is being questioned.  The result is that demand for EDA tools may be less than in 2015.   I have received so many forecasts for 2016 that I have decided to break the subject into two articles.  The first article will cover the macro aspects, while the second will focus more on specific tools and market segments.

Economy and Technology

EDA itself is changing.  Here is what Bob Smith, executive director of the EDA Consortium, has to say:

“Cooperation and competition will be the watchwords for 2016 in our industry. The ecosystem and all the players are responsible for driving designs into the semiconductor manufacturing ecosystem. Success is highly dependent on traditional EDA, but we are realizing that there are many other critical components, including semiconductor IP, embedded software and advanced packaging such as 3D-IC. In other words, our industry is a “design ecosystem” feeding the manufacturing sector. The various players in our ecosystem are realizing that we can and should work together to increase the collective growth of our industry. Expect to see industry organizations serving as the intermediaries to bring these various constituents together.”

Bob Smith’s words acknowledge that the term “system” has taken a new meaning in EDA.  We are no longer talking about developing a hardware system, or even a hardware/software system.  A system today includes digital and analog hardware, software both at the system and application level, MEMS, third party IP, and connectivity and co-execution with other systems.  EDA vendors are morphing in order to accommodate these new requirements.  Change is difficult because it implies error as well as successes, and 2016 will be a year of changes.

Lucio Lanza, managing director of Lanza techVentures and a recipient of the Phil Kaufman award, describes it this way:

“We’ve gone from computers talking to each other to an era of PCs connecting people using PCs. Today, the connections of people and devices seem irrelevant. As we move to the Internet of Things, things will get connected to other things and won’t go through people. In fact, I call it the World of Things not IoT and the implications are vast for EDA, the semiconductor industry and society. The EDA community has been the enabler for this connected phenomenon. We now have a rare opportunity to be more creative in our thinking about where the technology is going and how we can assist in getting there in a positive and meaningful way.”

Ranjit Adhikary, director of Marketing at Cliosoft acknowledges the growing need for tools integration in his remarks:

“The world is currently undergoing a quiet revolution akin to the dot com boom in the late 1990s. There has been a growing effort to slowly but surely provide connectivity between various physical objects and enable them to share and exchange data and manage the devices using smartphones. The labors of these efforts have started to bear fruit and we can see that in the automotive and consumables industries. What this implies from a semiconductor standpoint is that the number of shipments of analog and RF ICs will grow at a remarkable pace and there will be increased efforts from design companies to have digital, analog and RF components in the same SoC. From an EDA standpoint, different players will also collaborate to share the same databases. An example of this would be Keysight Technologies and Cadence Design Systems on OpenAccess libraries. Design companies will seek to improve the design methodologies and increase the use of IPs to ensure a faster turnaround time for SoCs. From an infrastructure standpoint a growing number of design companies will invest more in design data and IP management to ensure better design collaboration between design teams located at geographically dispersed locations as well as to maximize their resources.”

Michiel Ligthart, president and chief operating officer at Verific Design Automation points to the need to integrate tools from various sources to achieve the most effective design flow:

“One of the more interesting trends Verific has observed over the last five years is the differentiation strategy adopted by a variety of large and small CAD departments. Single-vendor tool flows do not meet all requirements. Instead, IDMs outline their needs and devise their own design and verification flow to improve over their competition. That trend will only become more pronounced in 2016.”

New and Expanding Markets

The focus toward IoT applications has opened up new markets as well as expanded existing ones.  For example, the automotive market is looking to new functionalities in both in-car and car-to-car applications.

Raik Brinkmann, president and chief executive officer at OneSpin Solutions wrote:

“OneSpin Solutions has witnessed the push toward automotive safety for more than two years. Demand will further increase as designers learn how to apply the ISO 26262 standard. I’m not sure that security will come to the forefront in 2016 because there are no standards as yet and ad hoc approaches will dominate. However, the pressure for security standards will be high, just as ISO 26262 was for automotive.”

Michael Buehler-Garcia, Senior Director of Marketing for Mentor Graphics Calibre Design Solutions, notes that many established process nodes once thought of as obsolete will instead see increased volume due to the technologies required to implement IoT architectures.

“As cutting-edge process nodes entail ever higher non-recurring engineering (NRE) costs, ‘More than Moore’ technologies are moving from the “press release” stage to broader adoption. One consequence of this adoption has been a renewed interest in more established processes. Historical older process node users, such as analog design, RFCMOS, and microelectromechanical systems (MEMS), are now being joined by silicon photonics, standalone radios, and standalone memory controllers as part of a 3D-IC implementation. In addition, the Internet of Things (IoT) functionality we crave is being driven by a “milli-cents for nano-acres of silicon,” which aligns with the increase in designs targeted for established nodes (130 nm and older). New physical verification techniques developed for advanced nodes can simplify life for design companies working at established nodes by reducing the dependency on human intervention. In 2016, we expect to see more adoption of advanced software solutions such as reliability checking, pattern matching, “smart” fill, advanced extraction solutions, “chip out” package assembly verification, and waiver processing to help IC designers implement more complex designs on established nodes. We also foresee this renewed interest in established nodes driving tighter capacity access, which in turn will drive increased use of design optimization techniques, such as DFM scoring, filling analysis, and critical area analysis, to help maximize the robustness of designs in established nodes.”

Warren Kurisu, Director of Product Management, Mentor Graphics Embedded Systems Division points to wearables, another sector within the IoT market, as an opportunity for expansion.

“We are seeing multiple trends. Wearables are increasing in functionality and complexity enabled by the availability of advanced low-power heterogeneous multicore architectures and the availability of power management tools. The IoT continues to gain momentum as we are now seeing a heavier demand for intelligent, customizable IoT gateways. Further, the emergence of IoT 2.0 has placed a new emphasis on end-to-end security from the cloud and gateway right down to the edge device.”

Power management is one of the areas that has seen significant concentration on the part of EDA vendors.  But not much has been said about battery technology.  Shreefal Mehta, president and CEO of Paper Battery Company offered the following observations.

“The year 2016 will be the year we see tremendous advances in energy storage and management.   The gap between the rate of growth of our electronic devices and the battery energy that fuels them will increase to a tipping point.   On average, battery energy density has only grown 12% while electronic capabilities have more than doubled annually.  The need for increased energy and power density will be a major trend in 2016.  More energy-efficient processors and sensors will be deployed into the market, requiring smaller, safer, longer-lasting and higher-performing energy sources. Today’s batteries won’t cut it.

Wireless devices and sensors that need pulses of peak power to transmit, compute and/or perform analog functions will continue to create a tension between the need for peak power pulses and long energy cycles. For example, cell phone transmission and Bluetooth peripherals are, as a whole, low power but the peak power requirements are several orders of magnitude greater than the average power consumption.  Hence, new, hybrid power solutions will begin to emerge, especially where energy-efficient delivery is needed with peak power and as the ratio of average to peak grows significantly.

Traditional batteries will continue to improve in offering higher energy at lower prices, but current lithium ion will reach a limit in the balance between energy and power in a single cell with new materials and nanostructure electrodes being needed to provide high power and energy.  This situation is aggravated by the push towards physically smaller form factors where energy and power densities diverge significantly. Current efforts in various companies and universities are promising but will take a few more years to bring to market.

The Supercapacitor market is poised for growth in 2016 with an expected CAGR of 19% through 2020.  Between the need for more efficient form factors, high energy density and peak power performance, a new form of supercapacitors will power the ever increasing demands of portable electronics. The Hybrid supercapacitor is the bridge between the high energy batteries and high power supercapacitors. Because these devices are higher energy than traditional supercapacitors and higher power than batteries they may either be used in conjunction with or completely replace battery systems. Due to the way we are using our smartphones, supercapacitors will find a good use model there as well as applications ranging from transportation to enterprise storage.

Memory in smartphones and tablets containing solid state drives (SSDs) will become more and more accustomed to architectures which manage non-volatile cache in a manner which preserves content in the event of power failure. These devices will use large swaths of video and the media data will be stored on RAM (backed with FLASH) which can allow frequent overwrites in these mobile devices without the wear-out degradation that would significantly reduce the life of the FLASH memory if used for all storage. To meet the data integrity concerns of this shadowed memory, supercapacitors will take a prominent role in supplying bridge power in the event of an energy-depleted battery, thereby adding significant value and performance to mobile entertainment and computing devices.

Finally, safety issues with lithium ion batteries have just become front and center and will continue to plague the industry and manufacturing environments.  Flaming hoverboards and shipment and air travel restrictions on lithium batteries render the future of personal battery power questionable. Improved testing and more regulations will come to pass; however, because of the widespread use of battery-powered devices, safety will become a key factor.   What we will see in 2016 is the emergence of the hybrid supercapacitor, which offers a high-capacity alternative to lithium batteries in terms of power efficiency. This alternative can operate over a wide temperature range, has long cycle lives and, most importantly, is safe.”
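As a back-of-the-envelope illustration of the peak-versus-average tension Mehta describes, the hedged Python sketch below sizes a bridging capacitor so that a short transmit burst can be served from local storage while the battery supplies only the low average load. Every number in it is an assumption chosen for illustration, not vendor data.

```python
# Hedged sketch: size a bridging capacitor for a transmit burst so the supply
# rail droops no more than an allowed amount, using C >= I * t / dV.
# Burst power, duration, rail voltage and droop budget are illustrative assumptions.
def bridge_capacitance(peak_power_w, burst_s, bus_voltage_v, allowed_droop_v):
    """Capacitance (farads) needed to hold the rail within allowed_droop_v during the burst."""
    burst_current_a = peak_power_w / bus_voltage_v
    return burst_current_a * burst_s / allowed_droop_v

if __name__ == "__main__":
    # Hypothetical cellular-class burst drawn from a small wearable rail.
    c = bridge_capacitance(peak_power_w=2.0, burst_s=0.002,
                           bus_voltage_v=3.6, allowed_droop_v=0.2)
    print(f"required capacitance ~ {c * 1000:.1f} mF")
```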

Greg Schmergel, CEO, Founder and President of memory-maker Nantero, Inc points out that just as new power storage devices will open new opportunities so will new memory devices.

“With the traditional memories, DRAM and flash, nearing the end of the scaling roadmap, new memories will emerge and change memory from a standard commodity to a potentially powerful competitive advantage.  As an example, NRAM products such as multi-GB high-speed DDR4-compatible nonvolatile standalone memories are already being designed, giving new options to designers who can take advantage of the combination of nonvolatility, high speed, high density and low power.  The emergence of next-generation nonvolatile memory which is faster than flash will enable new and creative systems architectures to be created which will provide substantial customer value.”

Jin Zhang, Vice President of Marketing and Customer Relations at Oski Technology is of the opinion that the formal methods sector is an excellent prospect to increase the EDA market.

“Formal verification adoption is growing rapidly worldwide and that will continue into 2016. Not surprisingly, the U.S. market leads the way, with China following a close second. Usage is especially apparent in China where a heavy investment has been made in the semiconductor industry, particularly in CPU designs. Many companies are starting to build internal formal groups. Chinese project teams are discovering the benefits of improving design qualities using Formal Sign-off Methodology.”

These market forces are fueling the growth of specific design areas that are supported by EDA tools.  In the companion article some of these areas will be discussed.

Mixed Signal Design and Verification for IoT Designs

Tuesday, November 17th, 2015

Mitch Heins, EDS Marketing Director, DSM division of Mentor Graphics

A typical Internet-of-Things (IoT) design consists of several different blocks, including one or more sensors, analog signal processing for the sensors, an analog-to-digital converter and a digital interface such as I2C.  System integration and verification is challenging for these types of IoT designs as they are typically a combination of two to three different ICs.  The challenge is exacerbated by the fact that the system covers multiple domains, including analog, digital, RF and mechanical (for packaging), and different forms of multi-physics simulation are needed to verify the sensors and actuators of an IoT design.  The sensors and actuators are typically created as microelectromechanical systems (MEMS), which have a mechanical aspect, and there is a tight interaction between them and the package in which they are encapsulated.

The verification challenge is to have the right form of models available for each stage of the design and verification process, models that work with your EDA vendor tool suite.  Many of the high-volume IoT designs are now looking to integrate the microcontroller and radio on one die and the analog circuitry and sensors on a second die to reduce cost and footprint.

In many cases the latest IoT designs are now using onboard analog and digital circuitry with multiple sensors to do data fusion at the sensor, making for “smart sensors”.  These ICs are made from scratch, meaning that the designers must create their own models for both system-level and device-level verification.

Tanner EDA by Mentor Graphics has partnered with SoftMEMS to offer a complete mixed-signal design and verification tool suite for these types of MEMS-centric IC designs. The Tanner Analog and MEMS tool suites offer a complete design-capture, simulation, implementation and verification flow for MEMS-based IoT designs.  The Tanner AMS verification flow supports top-down hierarchical design with the ability to co-simulate multiple levels of design abstraction for analog, digital and mechanical environments.  All design abstractions, simulations and resulting waveforms are controlled and viewed from a centrally integrated schematic cockpit, enabling easy design trade-offs and verification.   Design abstractions can be used to swap in different models for system-level versus device-level verification tasks as different parts of the design are implemented.  The system includes support for popular modeling languages such as Verilog-AMS and Verilog-A.

The logic abstraction of the design is tightly tied to the physical implementation of the design through a correct-by-construction design methodology using schematic-driven layout with interactive DRC checking.  The Tanner/SoftMEMS solution uses the 2D mask layout to automatically create a correct-by-construction 3D model of the MEMS devices using a process technology description file.

Figure 1: Tanner Analog Mixed Signal Verification Cockpit

The 3D model is combined with similar 3D package models and is then used in Finite Element or Boundary Element Analysis engines to debug the functionality and manufacturability of the MEMS devices including mechanical, thermal, acoustic, electrical, electrostatic, magnetic and fluid analysis.

Figure 2: 3D layout and cross section created by the Tanner SoftMEMS 3D Modeler

A key feature of the design flow is that the solution allows the automatic creation of a compact Verilog-A model of the MEMS-package combination from the FEA/BEA analysis, which can be used to close the loop in final system-level verification using the same co-simulation cockpit and test benches that were used to start the design.
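To make the idea of a lumped MEMS compact model concrete, here is a hedged sketch of the mass-spring-damper behavior such a model typically encodes, written in Python for illustration rather than in the Verilog-A that the Tanner/SoftMEMS flow actually exports. The parameter values and the step stimulus are assumptions, not extracted data.

```python
# Hedged sketch: the lumped second-order equation m*x'' + b*x' + k*x = F that a
# MEMS accelerometer compact model typically encodes, integrated with a simple
# explicit Euler loop. Parameter values are illustrative assumptions.
def simulate_proof_mass(accel_input_g, dt=1e-6,
                        m=1e-9,      # kg, proof mass
                        k=1.0,       # N/m, suspension spring constant
                        b=2e-5):     # N*s/m, damping coefficient
    """Return proof-mass displacement (m) for each applied acceleration sample."""
    g = 9.81
    x, v = 0.0, 0.0
    out = []
    for a_g in accel_input_g:
        force = m * a_g * g                  # inertial force on the proof mass
        accel = (force - b * v - k * x) / m  # Newton's second law
        v += accel * dt
        x += v * dt
        out.append(x)
    return out

if __name__ == "__main__":
    # 1 g step input held for 5 ms (hypothetical stimulus).
    steps = int(5e-3 / 1e-6)
    x_final = simulate_proof_mass([1.0] * steps)[-1]
    print(f"steady-state displacement ~ {x_final:.2e} m")  # close to m*g/k = 9.8e-9 m
```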

An additional level of productivity can be gained by using a parameterized library of MEMS building blocks from which the designer can more quickly build complex MEMS devices.

Figure 3: Tanner S-Edit Schematic Capture Mixed Mode Schematic of the IoT System

Each building block has an associated parameterized compact simulation model.  By structurally building the MEMS device from these building blocks, the designer is automatically creating a structural simulation model for the entire device that can be used within the verification cockpit.

Figure 4: Tanner SoftMEMS BasicPro Suite with MEMS Symbol and Simulation Library

An EDA View of Semiconductor Manufacturing

Wednesday, June 25th, 2014

Gabe Moretti, Contributing Editor

The concern that there is a significant break between the tools used by designers targeting leading-edge processes (those at 32 nm and smaller, to be precise) and those used to target older processes was dispelled during the recent Design Automation Conference (DAC).  In his June keynote address at the Moscone Center in San Francisco, Dr. Antun Domic, Executive Vice President and General Manager of the Synopsys Design Group, pointed out that advances in EDA tools made in response to the challenges posed by the newer semiconductor process technologies also benefit designs targeting older processes.

Mary Ann White, Product Marketing Director for the Galaxy Implementation Platform at Synopsys, echoed Dr. Domic’s remarks and stated: “There seems to be a misconception that all advanced designs need to be fabricated on leading process geometries such as 28nm and below, including FinFET. We have seen designs with compute-intensive applications, such as processors or graphics processing, move to the most advanced process geometries for performance reasons. These products also tend to be highly digital. With more density, almost double for advanced geometries in many cases, more functionality can also be added. In this age of disposable mobile products where cellphones are quickly replaced with newer versions, this seems necessary to remain competitive.

However, even if designers are targeting larger, established process technologies (planar CMOS), it doesn’t necessarily mean that their designs are any less advanced in terms of application than those that target the advanced nodes.  There are plenty of chips inside the mobile handset that are manufactured on established nodes, such as those with noise cancellation, touchscreen, and MEMS (microelectromechanical systems) functionality. MEMS chips are currently manufactured at the 180nm node, and there are no foreseeable plans to move to smaller process geometries. Other chips at established nodes tend to also have some analog capability, which doesn’t make them any less complex.”

This is very important since the number of companies that can afford to use leading-edge processes is diminishing due to the very high ($100 million and more) non-recurring investment required.  And of course the cost of each die is also greater than with previous processes.  If the tools could only be used by those customers doing leading-edge designs, revenues would necessarily fall.

Design Complexity

Steve Carlson, Director of Marketing at Cadence, states that “when you think about design complexity there are a few axes that might be used to measure it.  Certainly raw gate count or transistor count is one popular measure.  From a recent article in Chip Design, a look at complexity on a log scale shows the billion mark has been eclipsed.”  Figure 1, courtesy of Cadence, shows the increase of transistors per die over the last 22 years.

Figure 1.

Steve continued: “Another way to look at complexity is to look at the number of functional IP units being integrated together.  The graph in Figure 2, provided by Cadence, shows the steep curve of IP integration that SoCs have been following.  This is another indication of the complexity of the design, rather than of the complexity of designing for a particular node.  At the heart of the process complexity question are metrics such as the number of parasitic elements needed to adequately model a like structure in one process versus another.”  It is important to notice that the percentage of IP blocks provided by third parties is getting close to 50%.

Figure 2.

Steve concludes with: “Yet another way to look at complexity is through the lens of the design rules and the design rule decks.  The graphs below show the upward trajectory for these measures in a very significant way.” Figure 3, also courtesy of Cadence, shows the increased complexity of the design rules provided by each foundry.  This trend makes second-sourcing a design impossible, since having a second-source foundry would be similar to having a different design.

Figure 3.

Another problem designers have to deal with is the increasing complexity due to the decreasing features sizes.  Anand Iyer, Calypto Director of Product Marketing, observed that: “Complexity of design is increasing across many categories such as Variability, Design for Manufacturability (DFM) and Design for Power (DFP). Advanced geometries are prone to variation due to double patterning technology. Some foundries are worst casing the variation, which can lead to reduced design performance. DFM complexity is causing design performance to be evaluated across multiple corners much more than they were used to. There are also additional design rules that the foundry wants to impose due to DFM issues. Finally, DFP is a major factor for adding design complexity because power, especially dynamic power is a major issue in these process nodes. Voltage cannot scale due to the noise margin and process variation considerations and the capacitance is relatively unchanged or increasing.”

Impact on Back-End Tools

I have been wondering whether the increasing dependency on transistor geometries and the parasitic effects peculiar to each foundry would eventually mean that a foundry-specific place and route tool would be better than adapting a generic tool to a design rules file that is becoming very complex.  In my mind complexity means a greater probability of errors due to ambiguity among a large set of rules.  Thus building rule-specific place and route tools would directly lower the number of design rule checks required.

Mary Ann White of Synopsys answered: “We do not believe so.  Double and multiple patterning are definitely newer techniques introduced to mitigate the lithographic effects required to handle the small multi-gate transistors. However, in the end, even if the FinFET process differs, it doesn’t mean that the tool has to be different.  The use of multi-patterning, coloring and decomposition is the same process even if the design rules between foundries may differ.”
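The coloring and decomposition step White mentions can be pictured as graph two-coloring: features spaced closer than the single-exposure limit share an edge in a conflict graph, and a legal double-patterning split assigns the two endpoints of every edge to different masks. Below is a hedged Python sketch of that check on a purely hypothetical conflict graph; an odd cycle means no legal two-mask split exists.

```python
# Hedged sketch: double-patterning decomposition viewed as 2-coloring of a
# conflict graph. Features closer than the spacing limit share an edge; an odd
# cycle makes the split infeasible. The geometry here is hypothetical.
from collections import deque

def two_color(features, edges):
    """Assign each feature to mask A or B; return None if an odd cycle blocks it."""
    color = {}
    adjacency = {f: [] for f in features}
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    for start in features:
        if start in color:
            continue
        color[start] = "A"
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in adjacency[node]:
                if neighbor not in color:
                    color[neighbor] = "B" if color[node] == "A" else "A"
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return None  # spacing conflict that two masks cannot resolve
    return color

if __name__ == "__main__":
    feats = ["m1", "m2", "m3", "m4"]
    too_close = [("m1", "m2"), ("m2", "m3"), ("m3", "m4")]  # a simple chain
    print(two_color(feats, too_close))
    print(two_color(feats, too_close + [("m1", "m3")]))     # odd cycle -> None
```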

Steve Carlson of Cadence shared his view: “There have been subtle differences between requirements at new process nodes for many generations.  Customers do not want to have different tool strategies for a second source of foundry, so the implementation tools have to provide the union of capabilities needed to enable each node (or be excluded from consideration).   In more recent generations of process nodes there has been a growing divergence of the requirements to support like-named nodes. This has led to added cost for EDA providers.  It is doubtful that different tools will be spawned for different foundries.  How the (overlapping) sets of capabilities get priced and packaged by the EDA vendors will be a business model decision.  The use model users want is singular across all foundry options.  How far things diverge and what the new requirements are at 7nm and 5nm may dictate a change in strategy.  Time will tell.”

This is clear for now.  But given the difficulty of second sourcing, I expect that a design company will choose one foundry and use it exclusively.  Changing foundry will almost always be a business decision based on financial considerations.

New processes also change the requirements for TCAD tools.  At the just-finished DAC conference I met with Dr. Asen Asenov, CEO of Gold Standard Simulations, an EDA company in Scotland that focuses on the simulation of statistical variability in nano-CMOS devices.

He is of the opinion that Design-Technology Co-Optimization (DTCO) has become mandatory in advanced technology nodes.  Modeling and simulation play an increasingly important role in the DTCO process, with the benefits of speeding up and reducing the cost of technology, circuit and system development and hence reducing the time-to-market.  He said: “It is well understood that tailoring the transistor characteristics by tuning the technology is not sufficient any more. The transistor characteristics have to meet the requirements for the design and optimization of particular circuits, systems and corresponding products.  One of the main challenges is to factor accurately the device variability into the DTCO tools and practices. The focus at 28nm and 20nm bulk CMOS is the high statistical variability introduced by the high doping concentration in the channel needed to secure the required electrostatic integrity. However, the introduction of FDSOI transistors and FinFETs, which tolerate low channel doping, has shifted the attention to the process-induced variability related predominantly to silicon channel thickness or shape variation.”  He continued: “However, until now TCAD simulations, compact model extraction and circuit simulations have typically been handled by different groups of experts and often by separate departments in the semiconductor industry, and this leads to significant delays in the simulation-based DTCO cycle. The fact that TCAD, compact model extraction and circuit simulation tools are typically developed and licensed by different EDA vendors does not help DTCO practices.”

Ansys pointed out that in advanced FinFET process nodes, the operating voltage for the devices has been drastically reduced. This reduction in operating voltage has also led to a decrease in operating margins for the devices. With several transient modes of operation in low-power ICs, having an accurate representation of the package model is mandatory for accurate noise-coupling simulations. Distributed package models with bump-level resolution are required for performing chip-package-system simulations for accurate noise-coupling analysis.

Further Exploration

The topic of Semiconductors Manufacturing has generated a large number of responses.  As a result the next monthly article will continue to cover the topic with particular focus on the impact of leading edge processes on EDA tools and practices.

Experts Share Unique Challenges in Wearable Designs

Monday, April 28th, 2014

By John Blyler, Chief Content Officer

Wearable devices will add a new twist to traditional embedded designs, according to experts from ARM, Freescale, Hillcrest Labs, STMicroelectronics, Imec and Kionix.

Wearable technology design presents challenges different from other embedded markets. To understand these challenges, “System Design Engineering” talked with James Bruce, Director of Mobile Solutions for ARM; Mike Stanley, Systems Engineer at Freescale; Daniel Chaitow, Marketing Communications Manager at Hillcrest Labs; Jay Esfandyari, Director of Global Product Marketing at STMicroelectronics; Siebren Schaafsma, Team Leader at Holst Centre and Imec; and Thea Rejman, Financial Analyst at Kionix, Inc. What follows is a portion of that conversation. – JB

System Design Engineering: What unique technical challenges are designers facing in the wearable smart connected market – as opposed to other markets?

Bruce: The big challenge for wearable designers is that the use cases are still very new.  There is a lot of innovation and diversity taking place. People are trying out many different operational scenarios. Designers need low power processors that are right for these evolving workloads. One of the benefits is that there is a strong ecosystem with a large number of system-on-chip (SoC) devices available to developers to create initial wearable solutions. Once they have taken the initial designs to market, they can stay with them, use a different SoC, or even customize one.

Another key consideration for designers is improving the quality of the sensor data integrated into the device, something that has traditionally not been in the domain of digital designers. Traditional digital designers now need to worry about the analog portions of a wearable device, i.e., accelerometers, gyros, humidity sensors, etc., although there are several choices of sensor fusion solutions for low power application processors available off-the-shelf.

Stanley:  Managing power consumption and communications are currently the two largest hurdles for wearable hardware. Looking ahead to truly wearable sensors that can be embedded into clothing, athletic equipment, name badges, etc., means that every component in the system must become even smaller and thinner.  This drives the trend of consolidating multiple sensors into one package, and then shrinking that combo sensor even more. Chip-scale packaging will be replacing QFN and LGA for many applications.  This requires close cooperation across all disciplines as the traditional package disappears.

Low power, wireless communications and small form factor are keys to driving IoT applications. In many ways, these are closely related to some wearable applications in that they use similar sensors and common software libraries for communication, data abstraction and signature recognition.

Chaitow: The challenges of timing, power, availability, and security are similar to but different from the typical embedded design problem.  Perhaps the main difference is that, instead of a single circuit board or chip to worry about, the designer must consider the whole system.  This adds network and systems engineering problems to the job of a typical embedded design engineer.  It’s one thing to optimize the power and security of a single device; it’s totally different to do that across a varied set of devices in a variable network.  Then add in the fact that the number of devices or versions of the devices might change in a given network over time, and the design problem gets even bigger.

An additional unique challenge is the need for calibration of sensors. MEMS sensors used in commercial products have variable performance. This variable performance is true both at the point of manufacture and over the lifetime of the product, as each sensor reacts differently to changes in environmental factors such as temperature, voltage, interference, and sensor aging, to name just a few. These variations make calibration essential for sensor-based products.
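As a small illustration of the calibration Chaitow describes, the Python sketch below estimates a per-axis offset and scale factor from two static accelerometer measurements (+1 g and -1 g along the axis). It is deliberately simplistic; production sensor-fusion libraries also handle temperature drift, cross-axis coupling and aging, and the readings used here are hypothetical.

```python
# Hedged sketch: two-point accelerometer calibration for one axis, estimating
# offset and scale from +1 g and -1 g static readings. Readings are hypothetical.
def calibrate_axis(reading_plus_1g, reading_minus_1g):
    """Return (offset, scale) so that corrected = (raw - offset) / scale is in g."""
    offset = (reading_plus_1g + reading_minus_1g) / 2.0
    scale = (reading_plus_1g - reading_minus_1g) / 2.0
    return offset, scale

def correct(raw, offset, scale):
    return (raw - offset) / scale

if __name__ == "__main__":
    # Hypothetical z-axis readings from a part with offset and gain error.
    offset, scale = calibrate_axis(reading_plus_1g=1.07, reading_minus_1g=-0.93)
    print(round(offset, 3), round(scale, 3))           # offset ~ 0.07 g, scale ~ 1.00
    print(round(correct(0.07, offset, scale), 6))      # a raw 0.07 g reading corrects to ~0 g
```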

Esfandyari: Wearable-device requirements are currently driving significant changes in the MEMS industry. They are driving the development of even smaller components with even lower power consumption but with more embedded features. To satisfy these needs, sensor manufacturers are creating highly integrated devices with multiple sensors (e.g., accelerometer, gyroscope and magnetometer) embedded in a single package.

Finally, from a design perspective, wearable-device manufacturers must be very careful with the appearance of their product because many people are conscious of their appearance and would prefer not to wear accessories that make them look strange. The challenge is to make wearable technology “invisible” to the final user and the external world.

Various implementations of systems that monitor activity of the human body. (Courtesy of Imec)

Siebren: An important part of wearable technology will be in the design of body area networks (BANs) – a collection of miniature sensor and actuator nodes. Such devices will require innovative solutions to remove critical technological obstacles, such as shrinking form factors that require new integration and packaging technology. Battery capacities will need to be extended. Indeed, the energy consumption of all building blocks will need to be drastically reduced to allow energy autonomy. System design will have to focus on overall system power consumption, where trade-offs have to be made between security, privacy, precision, availability and storage of the data: for example, high-power streaming of high-resolution, medical-grade ECG data over the radio in an emergency, compared with average heart rate monitoring once a minute in a low power mode.

Rejman: Stringent power requirements drive the majority of sensor applications in the wearables market. Therefore, it is essential that both the sensors and the software are low power.  Designers should look for a sensor fusion solution that offers embedded power management functionality to help manage sensor interaction and data processing with minimal overhead, resulting in lower power and better performance.

System Design Engineering: Thank you.

Wearable Technology Steps Up a Gear

Tuesday, January 21st, 2014

By Caroline Hayes, Senior Editor

Initial momentum in fitness and wellness may be surpassed by growth in the infotainment market. But challenges lie ahead.

Wearable technology can be divided into three groups – fitness and wellness, infotainment, and healthcare and medical (used by professionals, these wearable devices need legislation and approval, making them a separate category). Caroline Hayes looks at the growth, challenges and prospects of wearable technology.

The fitness and wellness market has seen initial momentum, with activity monitors like the Nike FuelBand and Fitbit trackers. However, the infotainment market is now one of high interest and growth, with smart watches and Google Glass projects. This type of wearable technology can be used for augmented reality and gaming as well as providing a second screen to a smartphone, making the smartphone even more ubiquitous than it already is. David Maidment, mobile segment marketing manager at ARM, believes that the ability to integrate multiple functions into a single platform is the key to its appeal, adding: “We are used to technology on our wrist,” he reasons, “so the consumer is already familiar with the [smartwatch] idea”.

Challenges

The first innovation was the fitness band, which ‘hit its stride’ in 2012-2013. The nature of the wristband means that the semiconductors used have to be of a very small form factor. In addition, the battery has to last a long time on a single charge. The wearable technology in this category is paired with a smartphone and with cloud services. When a wearable device is paired and tethered to a smartphone, the phone acts as a personal hub, providing access to cloud services, for example websites where personal charts and tables are stored for review. The need for low power connectivity means that Bluetooth Smart is used.


In addition to ultra low power and low power connectivity, Maidment identifies a third criterion – connectivity of MEMS sensors. Wearable technology is about monitoring the data from the sensor, he says. This means that the device has to be always on. Using Sensor Fusion, or a Sensor Hub, the ARM® Cortex®-M processor in these devices will monitor the data and choose when to push the refined data to the cloud. This is important for wearable technology, as every bit of data sent costs power.
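A hedged sketch of that “refine locally, transmit rarely” pattern: the Python below detects steps from accelerometer magnitude with a simple threshold and pushes only an aggregated count, standing in for the refined data a Cortex-M sensor hub would send. The threshold, window and sample data are assumptions chosen for illustration.

```python
# Hedged sketch: an always-on loop counts steps locally from accelerometer
# magnitude and pushes only the aggregated count, not the raw samples.
# Threshold, window and the sample stream are illustrative assumptions.
import math

def count_steps(samples, threshold_g=1.3):
    """samples: (ax, ay, az) tuples in g. Count upward crossings of the threshold."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold_g and not above:
            steps += 1
        above = magnitude > threshold_g
    return steps

def sensor_hub_loop(sample_windows, push_to_cloud):
    """Process raw windows locally; transmit only one small integer per window."""
    for window in sample_windows:
        push_to_cloud(count_steps(window))

if __name__ == "__main__":
    quiet = [(0.0, 0.0, 1.0)] * 50
    stride = [(0.0, 0.0, 1.0)] * 10 + [(0.2, 0.1, 1.5)] * 5   # one impact spike per stride
    walk = quiet + stride * 4
    sensor_hub_loop([walk], push_to_cloud=lambda steps: print("steps:", steps))
```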

Sensor Fusion

For Maidment, the game-changer is Sensor Fusion, or Sensor Hub, technology. The ability to take disparate sensors and add algorithms that aggregate and make sense of the collected data adds functionality, but also exploits the low-power performance of the ARM 32-bit architecture, he says. The result is more sophisticated data that is sent to the cloud after monitoring and analysis from various sources. “Sensor Fusion, smart aggregation and always-on power are at the heart of wearable technology”, he says, whether it is logging in to the cloud via the Nike FuelBand to review the day’s exercise data or, in the future, logging in to check blood glucose data after a blow-out and eating that forbidden burger.
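
As a rough illustration of what a fusion algorithm does with disparate sensor data, the complementary filter below blends a gyroscope’s fast but drifting angle estimate with an accelerometer’s noisy but stable one. It is a sketch only; production fusion libraries use Kalman filters or quaternion-based AHRS algorithms and are considerably more involved.

/* Illustrative sensor-fusion step: a complementary filter for pitch.
 * The gyroscope tracks fast motion but drifts over time; the accelerometer
 * is noisy but does not drift. Blending the two gives a usable estimate. */
float fuse_pitch(float pitch_prev,   /* previous fused pitch, degrees    */
                 float gyro_rate,    /* gyro pitch rate, degrees/second  */
                 float accel_pitch,  /* pitch derived from accelerometer */
                 float dt)           /* sample period, seconds           */
{
    const float alpha = 0.98f;  /* trust the gyro short-term, accel long-term */
    float gyro_pitch = pitch_prev + gyro_rate * dt;
    return alpha * gyro_pitch + (1.0f - alpha) * accel_pitch;
}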

Software

Sensor Fusion underpins everything, according to Maidment, because an always-on core is needed. Another essential factor is contextual awareness. With gesture control, the flick of a wrist can accept a call or display a message. Sensor Fusion allows the watch to monitor continuously, waiting for the wearer to do something to which it can react.

Sensor Fusion is also the basis for Siri-style voice recognition. A microphone is a sensor, after all, so the microprocessor has to be left permanently on to detect voice commands and background noise for contextual awareness, along the lines of the sketch below.
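
One plausible shape for such an always-listening path is a small core that computes a short-term energy estimate on microphone frames and only wakes the heavier recognition engine when the level rises above the noise floor. The sketch below is illustrative; read_mic_frame() and wake_voice_engine() are hypothetical placeholders, not a specific API.

/* Sketch of an always-on microphone monitor: compute frame energy on a
 * low-power core and hand off to the bigger voice engine only when the
 * signal rises above a configured background threshold. */
#include <stdint.h>

#define FRAME_LEN 256

extern void read_mic_frame(int16_t frame[FRAME_LEN]); /* hypothetical driver */
extern void wake_voice_engine(void);                  /* hypothetical hook   */

void mic_monitor(uint32_t energy_threshold)
{
    int16_t frame[FRAME_LEN];

    for (;;) {
        read_mic_frame(frame);

        uint64_t energy = 0;
        for (int i = 0; i < FRAME_LEN; i++)
            energy += (int32_t)frame[i] * frame[i];
        energy /= FRAME_LEN;                 /* mean square per frame */

        if (energy > energy_threshold)
            wake_voice_engine();             /* only now spend real power */
    }
}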

ARM and its partners are working together to develop this Sensor Hub (or Sensor Fusion) capability. In the near future, says Maidment, the level of interpretation will allow the smartwatch to sense when the wearer is holding a steering wheel and driving a car, from the angle of the wrist and the background (i.e. engine) noise. This, he says, is why the 32-bit architecture is used over 8- and 16-bit ones, referring to its power and performance.

Analog Devices’ Tony Zarola, strategic marketing manager, healthcare, agrees that power consumption is a significant hurdle in wearable technology design. “The main challenge is to meet the power consumption target that makes the end device useable for more than a few minutes, whilst maintaining the level of performance to make the device useful from a measurement perspective.” A device that measures heart rate unreliably in order to conserve energy is of no use and, equally impractical, he says, is a heart rate monitor that can only operate continuously for an hour or so.

Time for smartwatches

The smartwatch market is still in its early stages, with the Pebble smartwatch using STMicroelectronics’ STM32 F2 microcontroller, with an ARM Cortex-M3 core, alongside the STMicroelectronics LIS3DH MEMS digital-output motion sensor (see Figure). The total wearable technology market will be worth $20 billion by 2017, according to Future Source, as designs integrate the Internet of Things and the connectivity of smart devices. Data on specific wearable devices is not available, but Maidment estimates that around 200,000 smartwatches were sold in 2013. ARM has over 60 designs in progress or in the market today, a figure expected to “explode” this year through ARM partners.

Figure: The Pebble smartwatch uses an STMicroelectronics microcontroller with an ARM Cortex-M3 core, complemented by MEMS sensors.

Innovation is king in the early stages of a technology, he says, adapting and evolving the same sensors used in fitness bands, and the same low-power, black-and-white displays, for smartwatches.

Everyone agrees low-power operation is vital. Wearable devices are expected to last a week on a single charge, while acting as both an activity monitor and a second screen for a smartphone, as alerts and SMS messages are pushed to the watch. All of this has to fit in a small, unobtrusive form factor, and the software has to be equally unobtrusive, hence the small ARM Cortex-M3 and its micro-kernel software, says Maidment.

Some smartwatches connect to the cellular network, making them more of a mini smartphone (for example, the Omate TrueSmart, which runs on Android and is billed as a standalone smartwatch, working independently of a smartphone). Samsung has also recently introduced the Galaxy Gear, an Android-based smartwatch. Connecting directly to the cellular network untethers the watch from the smartphone, says Maidment, and allows data to be pushed directly to the cloud and cloud services.

The smartwatch as a smartphone on a wrist has its own limitations. A color display and/or touchscreen, for example, means that battery life is only one to two days. The ultimate goal, says Maidment, is to optimise the power: running always-on Sensor Fusion and increasing the complexity of the software within a low-power, small footprint.

There are opportunities to bring the two modes of smartwatch, tethered and standalone, together, he says, with ARM-like power efficiencies, clever system-level design, power gating and clock management.
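
In practice, much of that power gating and clock management comes down to sleeping whenever there is no work to do. The fragment below sketches the usual duty-cycling pattern on a Cortex-M class core; it assumes a CMSIS-style environment where __WFI() (wait for interrupt) is available, and process_sensors() is a hypothetical application hook.

/* Sketch of the duty-cycling pattern behind "always-on" wearables: do a
 * little work on each timer tick, then sleep until the next interrupt. */
#include <stdbool.h>

extern void process_sensors(void);   /* hypothetical application work, kept short */
extern void __WFI(void);             /* stand-in declaration; CMSIS device headers
                                        normally provide __WFI() as an intrinsic  */

volatile bool tick_pending;          /* set by a timer interrupt handler */

void main_loop(void)
{
    for (;;) {
        if (tick_pending) {
            tick_pending = false;
            process_sensors();
        }
        __WFI();    /* core sleeps, clocks gated, until the next interrupt */
    }
}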

For Zarola, as well as connectivity and power, it is the format of wearable technology that needs to be addressed. It demands a small footprint. This, he believes, can, to some extent, be addressed through novel packaging techniques.

All of the enabling technology for always-on, connected wearables, across all three groups, is available now. Many believe that this year marks the top of the curve for wearable technology and that innovation will continue to thrive, as consumers demand more connectivity, more analysis and monitoring of data, and more access to the cloud in a small, discreet form factor.

Sensors and Algorithms Challenge IoT Developers

Tuesday, December 10th, 2013

By John Blyler, Content Officer

Challenges abound as designers deal with the analog nature of sensors, IP issues and the new algorithms required by the IoT.

Sensors represent both the great enabler of and a unique challenge for the evolving Internet of Things (IoT). Innovation in the market will come from surprising places. These are just a few of the observations shared by ARM’s Willard Tu, Director of Embedded, and Diya Soubra, CPU Product Manager. “System Engineered Design” caught up with them during the recent ARM Tech Con. What follows is a portion of that conversation. – JB

Blyler: Everyone talks about the importance of sensors to enable the Internet of Things (IoT) but few seem to appreciate what that means.  Would you elaborate?

Tu: Sensors are one of our key initiatives, especially from a microcontroller viewpoint (more on that shortly). But there is another aspect to sensors that both designers and even companies overlook, namely the algorithms for processing the sensor data. These algorithms, from companies like Hillcrest, bring unique value to the IoT market, and the algorithm software represents real intellectual property (IP). I think people are missing out on the IP that is being created there.

Blyler: So you think that most people overlook the IP aspects and simply focus on the processing challenges needed to condition analog sensor signals into a digital output?

Tu: Processing power is critical, which is where distributed local and cloud computing comes in. But there are many other factors, such as energy harvesting to power sensors in places you never thought of before. Both body-area and mesh network communication challenges are another factor. Conversely, one enabler of sensors is their low cost. Ten years ago, an accelerometer was a really expensive piece of silicon for an automotive airbag system. Now they are everywhere, even in cell phones, which are very cost-sensitive.

Blyler: Is this volume cost decrease due to innovation in MEMS design and manufacturing?

Tu: Yes, the MEMS market has evolved immensely (see Figure 1) and that’s the reason. And I think there is still a lot of evolution to come. You see a lot of newcomers with MEMS applications, but I think you’ll see a lot of consolidation because only the strong will survive.

Figure 1: MEMS market trends as presented by Jeremie Bouchaud of IHS at the MEMS Congress, 2013.

Soubra: Another factor is that few vendors use only one sensor; rather, they use a lot of sensors. A common example is multi-sensor motion sensing (see Figure 2): one sensor gives you good pitch, the others give you yaw and roll. So you will always need three or four sensors, which means you have to have software to handle all of them (see the sketch after Figure 2).

Figure 2: Device Orientation – 6 Degrees of Freedom. (Courtesy of Hillcrest)
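
To make the pitch/yaw/roll point concrete: a 3-axis accelerometer on its own can recover pitch and roll relative to gravity, but yaw (rotation about the gravity vector) needs a gyroscope or magnetometer as well, which is why several sensors and fusion software end up in the same device. The C fragment below is a rough sketch under common axis conventions, not a recipe for any particular part.

/* Illustrative only: estimate pitch and roll from a static 3-axis
 * accelerometer reading. Yaw cannot be recovered this way. Axis
 * conventions vary between parts; check the datasheet in practice. */
#include <math.h>

void accel_to_angles(float ax, float ay, float az,
                     float *pitch_deg, float *roll_deg)
{
    const float rad2deg = 57.29578f;
    *pitch_deg = atan2f(-ax, sqrtf(ay * ay + az * az)) * rad2deg;
    *roll_deg  = atan2f(ay, az) * rad2deg;
}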

Blyler: Do you mean software to control and integrate data from the various sensors or software algorithms to deal with the resulting data?

Tu: Both. Software is needed to control and ensure the accuracy of the sensors, but developers are also doing more contextual awareness and predictive analysis. By contextual, I mean that a smartphone turns on when it’s being held next to my head. Predictive refers to what I’ll do next, i.e., having the software anticipate my next actions. Algorithms enable those capabilities.

This is the next evolution in handling the data. You can use sensor fusion (sensors plus processors) to create contextual awareness.  That’s what people are doing today.  But how does that evolve into predictive algorithms? Anticipating what you want is even more complex than contextual awareness. It’s like using Apple’s Siri to anticipate when you are hungry and then order for you. Another example is monitoring a person’s glucose level to determine if they are hungry – because their glucose levels have dropped. It could be very intuitive or predictive down the road.

Blyler: These smart algorithms are another reason why processing power is a key enabler in the IoT evolution.

Tu: What you really need is scalable processing power. Sensors require a microcontroller, something with analog inputs. But there are still lots of designers who ask, “Why do you need to integrate the microcontroller with the sensor? It’s just an accelerometer.” They seem to forget that data acquisition is an analog process. The sensor data that is acquired must be conditioned and digitized to be useful in contextual or predictive applications, and that requires a lot of processing.

Another thing designers forget about is calibration (see Figure 3). Calibration is a big deal in achieving the accuracy necessary for all the contextual awareness applications, and calibrating the MEMS device itself is only part of the issue. The device must be recalibrated as part of the larger system once it is soldered and packaged onto a board, to deal with the temperature effects of soldering and the flexing of the board. All of these things play a part in system-level calibration (a minimal sketch of the basic correction follows Figure 3).

Figure 3: Proper interpretation and calibration of the sensor data is critical. The performance of the core fusion algorithm depends on the quality of the input data. (Courtesy of Hillcrest)
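
At its simplest, the correction applied after system-level calibration is a per-axis offset and scale factor measured once the part is mounted on the board. The sketch below shows that minimal form in C; real calibration schemes also handle cross-axis coupling and temperature drift, and the structure and function names here are illustrative, not from any sensor vendor’s library.

/* Minimal per-axis calibration: subtract the offset measured at rest and
 * apply a scale factor, turning raw counts into calibrated units. */
typedef struct {
    float offset[3];   /* counts at rest, measured per axis after assembly */
    float scale[3];    /* counts-to-units conversion per axis              */
} calib_t;

void apply_calibration(const calib_t *cal, const int raw[3], float out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = ((float)raw[i] - cal->offset[i]) * cal->scale[i];
}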

You might think that, well, the sensor guys should do that. But the sensor guys are good at making a MEMS device. Some MEMS manufacturers are vertically integrating to handle calibration issues, but others just want to make the device. This is another area where innovative IP can grow, i.e., around the calibration of the MEMS device to the system.

Blyler: Where will innovation come from as the IoT evolves?

Tu: I think the ecosystem is where innovation will emerge. Part of this will come from taking applications developed in one area and applying them to another. Recently, I talked to several automotive developers. They admitted that they lack expertise in developing certain types of algorithms – the same algorithms that companies like Hillcrest have already created for mobile consumer applications. I would like to introduce the automotive market to an algorithm company (like Hillcrest), a sensor platform provider (like Movea) and a few other leaders in the mobile space.

I think you will see IP creation in that space. That is where innovation is coming from: taking that raw sensor data and making it do something useful.

Blyler: Consolidation is occurring throughout the world of semiconductor design and manufacturing, especially at the lower process nodes. Do you see similar consolidation happening in the sensor space?

Tu: Right now there is an explosion of sensor companies, but there will be consolidation down the road. The question one should ask is whether integration is key to the sensor and IoT space. I don’t know. As a company, ARM would like to see a microcontroller (MCU) next to every sensor or sensor cluster, whether it is directly integrated into the sensor array or not. This is where scalability is important. Processing will need to be distributed: low-power processing near the sensor, with higher-performance processing in the cloud. It is very difficult to put a high-powered, fan-based system in a sensor. It just won’t happen. You have to be very low power near the sensor.

Not only is the sensor node a very power-constrained environment, it is also resource-constrained, e.g., in memory. That’s why embedded memory is critical, be it OTP or flash. In addition to low power, the cost of that memory actually has more influence than the CPU.

Blyler: Thank you.

Next Page »