
Posts Tagged ‘Medical’

More space for satellites and a roadmap for data protection

Monday, February 12th, 2018

Blog Review – Monday, February 12, 2018
This week’s selection includes 100G Ethernet for data centers, satellites vying for space, a roadmap for data protection, and more from the blogosphere.

The rise of data centers and the increase in cloud-based computing have prompted Lance Looper, Silicon Labs, to examine how wireless networks are changing to meet demands for performance and low latency, and the role 100G Ethernet plays.

https://www.silabs.com/community/blog.entry.html/2018/02/05/ethernet_s_role_inh-pTeJ

Marveling at how connectivity has ‘shrunk’ the world, Paolo Colombo, ANSYS, looks skywards to consider the growth of connected devices. He examines the role of space satellites, how small satellites will have their day for critical applications, and introduces ‘pseudo sats’, which are vying for space in space.

An article about medical device design and manufacturing challenges has prompted Roger Mazzella, Qt, to address each challenge and offer reassurance to developers. Naturally, Qt’s products play a role in allaying many fears, but it is an interesting insight into the medical design arena.

An interesting case study is recorded by Hellen Norman, Arm, featuring Scratchy the robot. She asks German embedded systems developer Sebastian Förster how he used a Cortex-M4, some motors, Lego bricks and cable ties to create a four-legged robot, programmed to walk using artificial intelligence (AI).

It’s not unusual to feel bewildered at a technology conference, so we can sympathise with Thomas Hackett, Cadence, who has a twist on the usual philosophical question of “What am I here for?” A walk through DesignCon caused a lightbulb moment, illuminating the real-world interplay of IP, SoC and packaging.

With the IoT there are no secrets, and Robert Vamosi, Synopsys, examines how data sharing may not be as innocuous as companies would have us believe if it is not configured flawlessly. The Strava heatmap, which revealed secret military locations, has thrown up some serious issues which, we are assured, are being addressed, and which Vamosi sees as a cautionary model for other IoT and wearable device manufacturers.

Tackling software-defined networking (SDN) head-on, Jean-Marie Brunet, Mentor Graphics, presents a clear and strong case for accelerating verification using virtual emulation. Of course he advocates Veloce VirtuaLAB PCIe for the task, but backs up his recommendation with some sound reasoning and guidance.

By Caroline Hayes, Senior Editor

Blog Review – Monday, December 11, 2017

Sunday, December 10th, 2017

Looking through the blogosphere, we find packaging issues ahead of the holidays; life on the IoT edge; billions of connected devices and what that even means; and taking nature’s lead in 3D printing.

According to Paul McLellan, Cadence, Moore’s Law is running out of steam. He spoke to John Park about advanced packaging and heterogeneous integration.

Living life on the edge, Jeff Miller, Mentor Graphics, sets out a step-by-step program for IoT design and advocates a standardized directory structure.

Anticipating one trillion smart, connected devices, Christine Young, Maxim Integrated, looks to the future and what the predicted scale of connectivity will mean for intelligence gathering and sharing, and the role of connected devices in emerging technologies such as blockchain.

Taking a cue from nature’s own materials, Scott Goodrich, Fortify, guest blogs for ANSYS to explain how magnetic fields were used in 3D printing to align fibers for high strength-to-weight-ratio printed parts.

Consumer trends that signal the end of wired audio connections have set Mark Melvin, ON Semiconductor, thinking about hearing aids and adding intelligence via wireless connectivity with smartphones.

Trends for the semiconductor chip market are discussed by John Blyler and Jim Feldan, Semico Research. Complexity is increasing, which could impact the number of design starts. One trend is IP reuse, and this informative report looks into the facts and figures in great detail to provide an understanding of the industry’s direction.

By Caroline Hayes, Senior Editor

Blog Review – Monday, March 27, 2017

Monday, March 27th, 2017

How AI can be used for medical breakthroughs; What’s wired and what’s not; A new compiler from ARM targets functional safety; Industry 4.0 update

A personal history lesson from Paul McLellan, Cadence Design Systems, as he charts the evolution from the company’s beginnings, via his own career and various milestones at different companies, to the trials of DAC over the decades.

Post Embedded World, ARM announced the ARM Compiler 6. Tony Smith, ARM, looks at its role for functional safety and autonomous vehicles.

A review of industrial IoT at Embedded World 2017 is the focus for Andrew Patterson’s blog. Mentor Graphics had several demonstrations for Industry 4.0. He explains the nature of Industry 4.0 and where it is going, the role of OPC-UA (Open Platform Communications Unified Architecture) and support from Mentor.

What’s wired and what’s wireless, asks David Andeen, Maxim Integrated. His blog looks at vehicle sub-systems and wired communications standards, building automation and wired interface design, and includes a link to an informative tutorial.

There are few philosophical questions posed in the blogs that I review, but this week throws up an interesting one from Philippe Laufer, Dassault Systemes. The quandary: does science drive design, or does design drive science? Topically posted ahead of the Age of Experience event in Milan next month, the answer relies on size and data storage, influenced by both design and science.

Security issues for medical devices are considered by David West, Icon Labs. He looks at the threats and security requirements that engineers must consider.

A worthy competition is announced on the Intel blog – the Artificial Intelligence Kaggle competition to combat cervical cancer. Focused on screening, the competition, run with MobileODT using its optical diagnostic devices and software, challenges Kagglers to develop an algorithm that classifies cervix type to guide referrals for treatment. The first prize is $50,000 and there is a $20,000 prize for best use of Intel tools. “We aim to challenge developers, data scientists and students to develop AI algorithms to help solve real-world challenges in industries including medical and health care,” said Doug Fisher, senior vice president and general manager of the Software and Services Group at Intel.

Caroline Hayes, Senior Editor

Blog Review – Monday 07 November 2016

Monday, November 7th, 2016

Browsing the MIT Library; AI and HPC for cancer breakthroughs; FPGAs on Mars; Romancing ISO 26262; It’s IoT conference season; Who’s going to pay?

For smart and connected IoT devices, Intel has introduced the Intel Atom processor E3900 series, and Ken Caviasca, Intel, explains how it brings computing power closer to the sensor.

Crash scenes from Mars, as captured by the Mars Reconnaissance Orbiter’s High Resolution Imaging Science Experiment (HiRISE), reveal features previously unseen on the planet. Steve Leibson, Xilinx, explains how we have FPGAs to thank. (For the images, not the crash!)

Ahead of GE’s Minds & Machines Conference (November 15-16, San Francisco), Lane Lewis, Ansys, celebrates the marriage of the simulation platform and the Predix platform to create profitable asset health monitoring for the industrial IoT.

As mobile payment matures, Martin Cox, Rambus Bell ID, identifies that tokenization is becoming a hot topic. His blog explains the role of the company’s Token Gateway as a means to integrate multiple mobile payment schemes. No excuse not to get a round of drinks in now.

Moving automotive and safety into the realm of Dungeons and Dragons, Paul McLellan, Cadence, reviews the recent DVCon Europe and how ISO 26262 – the critical safety standard – became a theme, but not necessarily one to dread and fear or avoid. Like St George, you just have to grit your teeth and tackle it head-on, to find the pot of gold that is critical safety design success.

Fresh from IoT Planet in Grenoble, France, Andrew Patterson, Mentor Graphics, is occupied by two topics – connectivity and security. He shares some interesting thoughts and statistics around these gleaned from the event.

Fascinating insights into the world of bio-medicine and computational bio-medicine are provided by Dr Michael J McManus, Intel. He explains how Artificial Intelligence (AI) and High Performance Computing (HPC) are used by researchers to analyze data, and predicts an era of revolutionary cancer breakthroughs, with both drug development and genome analytics running on a single Intel cluster using Intel Xeon and Intel Xeon Phi processors and the Intel Omni-Path architecture.

There is a fascinating collection of rare books at MIT, exhibited to mark Ada Lovelace Day. For those who can’t walk the aisles of the MIT Libraries, Stephen Skuce, MIT Libraries, shows us some of the collection relating to women who have contributed to science, math and engineering, as part of the Libraries’ annual celebration of the history of women in the STEM (science, technology, engineering and mathematics) subjects.

Caroline Hayes, Senior Editor

Cybernetic Human Via Wearable IOT

Friday, January 20th, 2017

UC Berkeley’s Dr. Rabaey sees humans becoming an extension of the wearable IoT via neuron connectivity, in a talk at the recent IEEE IMS event.

by Hamilton Carter and John Blyler, Editors, JB Systems

During the third week in May, more than 3,000 microwave engineers from across the globe descended upon San Francisco for the International Microwave Symposium 2016. To close the week, it seemed only fitting, then, that the final plenary talk, by Jan Rabaey, was titled “The Human Intranet – Where Swarms and Humans Meet.”


Dr. Rabaey, Professor and EE Division Chair at UC Berkeley, took the stage wearing a black T-shirt, a pair of slacks, and a sports coat that shimmered under the bright stage lights. He briefly summarized the topic of his talk, as well as his research goal: turning humans themselves into the next extension of the IoT. Ultimately he hopes to create human-machine interfaces that can not only read individual neurons, but write them as well.

What Makes a Wearable Wearable?

The talk opened with a brief discourse on the inability thus far of wearables to capture the public’s imagination. Dr. Rabaey cited several key problems facing the technology: battery life; how wearable a device actually is; limited functionality; inability to hold user interest; and perhaps most importantly something he termed stove-piping. Wearable technologies today are built to communicate only with other devices manufactured by the same company. Dr. Rabaey called for an open wearables platform to enable the industry to expand at an increasing rate.

Departing from wearables to discuss an internet technology that almost everyone does use, Dr. Rabaey focused for a few moments on the smartphone. He emphasized that while the devices are useful, the bandwidth of the communications channel between the device and its human owner is debilitatingly narrow. His proposal for remedying this issue is not to further enhance the smartphone, but instead to enhance the human user!

One way to enhance the bandwidth between device and user is simply to provide more input channels. Rabaey discussed one project, already in the works, that utilizes Braille-like technology to turn skin into a tactile interface, and another project for the visually-impaired that aims to transmit visual images to the brain over aural channels via sonification.
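
For flavour, here is a minimal sonification sketch in Python (an illustrative toy, not the project Rabaey described): each column of a grayscale image becomes a short mix of tones, with pitch encoding vertical position and loudness encoding brightness.

# Minimal sonification sketch (illustrative only, not the project described above):
# map each column of a grayscale image to a short tone mix whose pitches encode
# vertical position and whose loudness encodes brightness.
import numpy as np

def sonify(image, sample_rate=16000, tone_ms=50, f_lo=300.0, f_hi=3000.0):
    """Return a mono waveform that sweeps the image left to right."""
    rows, cols = image.shape
    samples_per_col = int(sample_rate * tone_ms / 1000)
    t = np.arange(samples_per_col) / sample_rate
    freqs = np.linspace(f_hi, f_lo, rows)            # higher rows -> higher pitch
    chunks = []
    for c in range(cols):
        brightness = image[:, c] / 255.0             # weight each row's tone
        tones = np.sin(2 * np.pi * np.outer(freqs, t))
        chunks.append(brightness @ tones)            # mix the rows into one chunk
    audio = np.concatenate(chunks)
    return audio / (np.max(np.abs(audio)) + 1e-9)    # normalise to [-1, 1]

# Usage: a single bright vertical bar produces a broadband burst at one moment in time.
demo = np.zeros((64, 32), dtype=np.uint8)
demo[:, 16] = 255
waveform = sonify(demo)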

Human limbs as prosthetics

As another powerful example of what has already been achieved in human extensibility, Dr. Rabaey showed a video produced by the scientific journal “Nature” portraying research that has enabled quadriplegic Ian Burkhart to regain control of the muscles in his arms and hands. The video showed Mr. Burkhart playing Guitar Hero and gripping other objects with his own hands; hands that he lost the use of five years ago. The system that enables his motor control utilizes a sensor to scan the neurons firing in his brain as researchers show him images of a hand closing around various objects. After a period of training and offline data analysis, a bank of computers learns to associate his neural patterns with his desire to close his hand. Finally, sensing the motions he would like to make, the computers fire electro-constricting arm bands that cause the correct muscles in his arm to flex and close his hand around an object. (See video: “The nerve bypass: how to move a paralysed hand”)
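
At its core, the decoding step described above is supervised learning on neural features. The toy Python sketch below (illustrative only, trained on synthetic data; not the actual system shown in the video) trains a classifier offline and then uses it online to decide when to issue a hypothetical stimulation command.

# Toy sketch of the offline-training idea described above (not the real system):
# learn a mapping from recorded neural features to the intended hand action,
# then use it online to drive a (placeholder) stimulator command.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: firing-rate features per 100 ms window,
# labelled with the imagined action (0 = rest, 1 = close hand).
n_windows, n_channels = 2000, 96
X = rng.normal(size=(n_windows, n_channels))
y = rng.integers(0, 2, size=n_windows)
X[y == 1, :10] += 1.5          # pretend a subset of channels encodes "close hand"

decoder = LogisticRegression(max_iter=1000).fit(X, y)

def on_new_window(features):
    """Online step: decode intent and (hypothetically) trigger the arm stimulator."""
    if decoder.predict(features.reshape(1, -1))[0] == 1:
        return "stimulate_forearm_flexors"   # placeholder for the stimulation command
    return "idle"

print(on_new_window(X[0]))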

Human Enhancements Inside and Out

Rabaey divides human-enhancing tech into two categories: extrospective applications, like those described above, that interface the enhanced human to the outside world, and introspective applications that look inwards to provide more information about the enhanced humans themselves. Turning his focus to introspective applications, Rabaey presented several examples of existing bio-sensor technology including printed blood oximetry sensors, wound-healing bandages, and thin-film EEGs. He then described the technology that will enable his vision of the human intranet: neural dust.

The Human Intranet

In 1997, Kris Pister outlined his vision for something called smart dust: one-cubic-millimeter devices containing sensors, a processor, and networked communications. Pister’s vision was recently realized by the Michigan Micro Mote research team. Rabaey’s proposed neural dust would take this technology a step further, providing smart dust systems that measure a mere 10 to 100 microns on a side. At these dimensions, the devices could travel within the human bloodstream. Dr. Rabaey described his proposed human intranet as a network fabric of neural dust particles that communicate with one or more wearable network hubs. The headband-, bracelet-, or necklace-borne hub devices would handle the heavier communication and processing tasks of the system, while the neural dust would provide real-time data measured on-site from within the body. The key challenge to enabling neural dust at this point lies in finding a communications channel that can deliver data from inside the human body at real-time speeds while consuming very little power (think picowatts).
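
As a rough way to picture that topology, the Python sketch below models dust motes reporting to a wearable hub under a picowatt-scale budget; the class names and numbers are illustrative assumptions, not figures from Rabaey's research.

# Toy model of the hub-and-dust topology sketched above (all numbers are
# illustrative assumptions, not measured figures for neural dust).
from dataclasses import dataclass

@dataclass
class DustMote:
    mote_id: int
    size_um: int            # edge length in microns (10-100 um in the proposal)
    tx_power_pw: float      # per-report transmit budget, picowatts

@dataclass
class Reading:
    mote_id: int
    timestamp_ms: int
    value: float            # e.g. a local field-potential sample

class WearableHub:
    """Headband/bracelet-class device: aggregates readings and does the heavy lifting."""
    def __init__(self, power_budget_pw: float):
        self.power_budget_pw = power_budget_pw
        self.readings: list[Reading] = []

    def collect(self, mote: DustMote, reading: Reading) -> bool:
        # Reject motes whose reporting cost would blow the picowatt-scale budget.
        if mote.tx_power_pw > self.power_budget_pw:
            return False
        self.readings.append(reading)
        return True

hub = WearableHub(power_budget_pw=100.0)
mote = DustMote(mote_id=1, size_um=50, tx_power_pw=20.0)
hub.collect(mote, Reading(mote_id=1, timestamp_ms=0, value=0.42))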

Caution for the future

In closing, Dr. Rabaey implored the audience that in all human/computer interface devices, security must be considered at the outset and throughout the development cycle. He pointed out that internal defibrillators with wireless controls can be hacked, and therefore could be used to kill the person who depends on one. While this fortunately has never occurred, he emphasized that since the possibility exists, it is key to encrypt every packet of information related to the human body. While encryption might be power-hungry in software, he stated that encryption algorithms built into ASICs could run at a fraction of the power cost. As for passwords, there are any number of unique biometric indicators that can be used, among them voice and heart rate. The danger with these biometrics, however, is that once they can be cloned or imitated, the hacker has access to a treasure trove of information, and possibly control. Perhaps the most promising biometric at present is a scan of neurons via EEG or other technology, so that as the user thinks of a new password, the machine interface can pick it up instantly and incorporate it into new transmissions.
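
To make the “encrypt every packet” advice concrete, here is a minimal Python sketch using AES-GCM from the cryptography package; it is a software stand-in for the ASIC-based encryption Rabaey advocates, and the device ID and payload are invented for illustration.

# Minimal per-packet encryption sketch using AES-GCM (a software stand-in for
# hardware-based encryption). Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # in practice, provisioned/derived securely
aead = AESGCM(key)

def seal_packet(payload: bytes, device_id: bytes) -> bytes:
    """Encrypt and authenticate one telemetry packet; bind it to the device ID."""
    nonce = os.urandom(12)                  # must be unique per packet
    return nonce + aead.encrypt(nonce, payload, associated_data=device_id)

def open_packet(packet: bytes, device_id: bytes) -> bytes:
    nonce, ciphertext = packet[:12], packet[12:]
    return aead.decrypt(nonce, ciphertext, associated_data=device_id)

sealed = seal_packet(b'{"hr": 62}', device_id=b"defib-0001")   # hypothetical names
assert open_packet(sealed, b"defib-0001") == b'{"hr": 62}'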

Wrapping up his exciting vision of a bright cybernetic future, Rabaey grounded the audience with a quote made by Joanna Zylinska, an Australian performance artist, in a 2002 interview:

“The body has always been a prosthetic body. Ever since we developed as humanoids and developed bipedal locomotion, two limbs became manipulators. We have become creatures that construct tools, artifacts, and machines. We’ve always been augmented by our instruments, our technologies. Technology is what constructs our humanity. …, so to consider technology as a kind of alien other that happens upon us at the end of the millennium is rather simplistic.”

The more things change, the more they stay the same.

This is not your father’s MCU

Friday, October 24th, 2014

A lot can happen in four years in the embedded design world, and ARM has responded with the introduction of the ARM® Cortex®-M7 processor, which follows relatively soon after the 2010 introduction of the Cortex-M4.
By: Caroline Hayes, Contributing Editor

ARM’s Cortex-M4 introduced DSP into the microprocessor, but according to Richard York, Vice President, Embedded, ARM, the subsequent Cortex-M7’s performance improvement needed a different approach. “The Cortex-M7 is a little less constrained about silicon,” he explains, “The performance has been improved [from the Cortex-M4] but brings in DSP and floating point performance at a similar cost point and a similar silicon area.”

To illustrate this, it is interesting to note that the Cortex-M7’s DSP is the same size and uses the same instruction set as the Cortex-M4’s, but is faster. Its DSP capability has been increased to meet sensor fusion and control operations—two workloads important in the growing IoT market. It operates at 400MHz, compared with the 168MHz achieved by the Cortex-M4.

The Cortex-M7’s six-stage pipeline increases throughput compared with the Cortex-M4, which uses a three-stage pipeline; both implement the ARMv7E-M architecture with the Thumb/Thumb-2 instruction set. The Cortex-M7’s superscalar pipeline allows the processor to execute more instructions per second. “This allows the traditional manufacturer to use a single core for control and DSP functions, which reduces the cost of the design,” adds Bee Hayes-Thakore, Product Marketing Manager, CPU, ARM.



Figure 1: The Cortex-M7 is less silicon-constrained than the Cortex-M4, with the same DSP and instruction set but with a performance uplift.

Ian Johnson, Product Manager, Cortex-M7, ARM, is keen to point out that the Cortex-M7 is an evolution, not a replacement, for the Cortex-M4. “It is a high performance addition to the Cortex-M processor family – not a replacement for the Cortex-M4.” It shares some characteristics, such as a scalable architecture, with some ‘performance uplift’, he says. Hayes-Thakore elaborates: “When the Cortex-M4 was launched, it was in response to demand at the high end, but that bar has significantly changed with always-on and always-aware status devices, and with local processing capability. Partners wanted and demanded more, but with compatibility. The Cortex-M7 is really an extension of the Cortex-M4. It is designed for endpoints in automotive, Internet of Things and portable medical applications that will be expected to deliver 30 years of operation.”

The memory interface of the Cortex-M7 allows access to the internal cache for efficiency.

Processing: “More Better”
Higher processing requirements on microcontrollers in automotive and industrial automation are just two examples of why the performance uplift was developed in the Cortex-M7. Factory floors require an increasing amount of precision and operate on large amounts of data in a short space of time. These application areas also need a good user interface, whether it is in a vehicle dashboard or a factory automation system. The processor therefore has to handle fast interrupt control and near-real-time processing, and to support autonomous decisions by the driver or operator, both driving factors, according to Johnson. “The application area is very wide in automotive and industrial control; initial licensees may be creating general purpose processors, but these will be used anywhere.”

He goes on to elaborate that although it may be viewed as a general purpose processor, it is by no means confined to that role. “The Cortex-M7 may be described as general-purpose but it is also used on other applications. . . For example, a Cortex-A partner will see the benefit in using the Cortex-M7 as a companion processor,” he says. “In that setting, it will be an invisible companion to the Cortex-M processor.”

The six-stage pipeline delivers a performance of 2.14 DMIPS/MHz, improving on the Cortex-M4’s 1.25 DMIPS/MHz, to fulfil capability requirements that are normally only seen at the high end of the market. A modest improvement in clock rate, says York, combines with twice the number of instructions per cycle.
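
Taking the article’s own figures (2.14 DMIPS/MHz at 400MHz versus 1.25 DMIPS/MHz at 168MHz), a quick back-of-the-envelope calculation shows roughly a fourfold throughput gain:

# Quick arithmetic from the figures quoted in the article:
# per-clock efficiency times the clock rates mentioned earlier.
m4 = {"dmips_per_mhz": 1.25, "clock_mhz": 168}
m7 = {"dmips_per_mhz": 2.14, "clock_mhz": 400}

m4_dmips = m4["dmips_per_mhz"] * m4["clock_mhz"]   # 210 DMIPS
m7_dmips = m7["dmips_per_mhz"] * m7["clock_mhz"]   # 856 DMIPS

print(f"Cortex-M4: {m4_dmips:.0f} DMIPS, Cortex-M7: {m7_dmips:.0f} DMIPS "
      f"(~{m7_dmips / m4_dmips:.1f}x)")             # ~4.1x overall throughput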

The six-stage pipeline increases performance to 2.14DMIPS/MHz to deliver ‘new wave’ capabilities.

“The first devices are on the market and are as energy-efficient as the Cortex-M4,” confirms York. The increased processing power, with the ability to process audio and still-image data, addresses a cross-section of customers who have been using makeshift solutions [such as a two-core controller plus DSP configuration] for these high-end functions. According to Hayes-Thakore, the single, scalable processor removes the barrier of adapting existing options and also saves development costs.
Another change is the Cortex-M7 memory interface. It is the first Cortex-M processor to integrate an instruction cache and a data cache. The integration reduces system power consumption and allows engineers to execute a large proportion of code from the internal cache, reducing the number of reads and writes to external memory and leading to power savings.

“This also allows efficient access to a large external memory,” says Johnson. “For example, a large DDR memory with large images to display and audio samples to play. Accessing the internal cache is more efficient.”

All Cortex-M7 processors have a floating point unit, which distinguishes the family from other cores. It has the same energy-efficiency modes as the Cortex-M4, points out York, to meet the new wave of capabilities, such as always-on, always-connected, voice processing and image processing. Both share sleep-on-exit, sleep and deep-sleep modes. “The comparison we make,” says Johnson, “is that of the user expectation from mobile devices. The pervasiveness of phones and tablets has set user expectations for that experience in the car, and in work-related settings. The Cortex-M7 has been developed to meet those demands—and some we haven’t thought of.”

Energy management
As well as the increased demands of established application areas, such as automotive, factory automation, white goods and medical, the relatively new application area of the Internet of Things has been a driver for the Cortex-M7, or, as York expresses it, “adding connectivity to unconnected things,” meeting the rise in always-on design features that customers are introducing.

Energy efficiency is another shared feature, with the same power modes offered in the Cortex-M4 and Cortex-M7. Sleep mode, deep-sleep mode and power-down mode, as well as the ability to shut down and wake up peripherals in a short period of time, contribute to power efficiency. “The Cortex-M architecture responds with minimal latency to interrupts and in waking up the processor and surrounding peripherals,” says Johnson, a capability that can be exploited by the ‘new wave’ of applications, such as the Internet of Things and wearables, which require always-on operation but not at the expense of battery drain. York also comments on the power management for always-on applications: “Monitors, sensors and gyroscopes need to ‘fire up’ a display and can use the Cortex-M7 with wireless communications, displays and interfaces with the same energy efficiency and performance as the Cortex-M4.”

The evolution of the Cortex-M architecture increases performance capabilities without impacting energy efficiency.


In another example, today’s factory automation systems may have several nodes and endpoints across the plant floor. If these nodes, together with the processor peripherals, can sit in deep-sleep mode to save power, yet wake quickly when triggered by an event, multi-node systems gain energy efficiency without time-delay penalties.
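
As a rough illustration of why that matters, the sketch below estimates average current for an event-driven node that sleeps between wake-ups; the current figures are placeholder assumptions, not Cortex-M7 datasheet numbers.

# Illustrative duty-cycling estimate for an event-driven node (all currents are
# placeholder assumptions, not datasheet figures).
def average_current_ua(active_ua, sleep_ua, wakeups_per_hour, active_ms_per_wakeup):
    active_s = wakeups_per_hour * active_ms_per_wakeup / 1000.0
    sleep_s = 3600.0 - active_s
    return (active_ua * active_s + sleep_ua * sleep_s) / 3600.0

always_on = 10_000.0                                   # uA, never sleeping
duty_cycled = average_current_ua(active_ua=10_000.0,   # uA while handling an event
                                 sleep_ua=2.0,         # uA in deep sleep
                                 wakeups_per_hour=60,
                                 active_ms_per_wakeup=50)

print(f"Always-on: {always_on:.0f} uA, duty-cycled: {duty_cycled:.1f} uA")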

Development tools
The shared capabilities between Cortex-M4 and Cortex-M7 processors make it only logical that the development tool legacy is preserved. In addition to ARM’s own DS-5 and Keil® MDK 5 development tools, third party tools, software and RTOS from Atollic, DSP Concepts, Express Logic, FreeRTOS, Green Hills, IAR Systems, Lauterbach, Mentor Graphics, Micrium and SEGGER can also be used across the two processor environments and across the whole Cortex-M family.

As well as preserving the investment in a single software toolchain, this also reduces training requirements and allows expertise from earlier designs to be reused, and code from other Cortex-M processors to be used without modification.

The investment for the future with Cortex-M7 using skills and innovations from earlier generations reinforces the assertion in the title—this is not your father’s MCU.

October 2014

A Web of Security for Medical Devices

Thursday, September 19th, 2013

By Cheryl Coupé

People who are under medical care are often at their most vulnerable. The equipment used to monitor, medicate, diagnose, and treat them can’t be.

In the past, medical device security focused on the endpoint—the device itself. But Tony Magallanez, senior systems engineer for McAfee’s embedded sales group, explains that the days of focusing solely on device-level security are over; today’s medical devices need to be at the center of a web of security with multiple layers. “We advocate that concept because it lets you understand what’s happening on the device, and also what’s going on around the device,” Magallanez says. “It’s important because as threats proliferate through the network that surrounds these systems, they become more vulnerable.”

These connected devices may include monitoring equipment within hospitals or in patients’ homes; bedside (wired) or implanted (wireless) infusion pumps that deliver medication; networked radiology and surgical equipment; nurses’ stations, charting devices, and administrative systems; and telemedicine equipment that brings medical care to remote areas of the world. Entire networks that manage vital data and instructions are associated with these devices.

McAfee looks at the vulnerability aspects of everything the network implies, including the device’s physical security, data protection, and encryption as well as the behavior of the people using it, to make sure that the device and the network that surrounds it are secure. This level of security requires a layered approach that blankets the entire network.


Security in layers

While personal health information can be accessed through sophisticated malware, low-tech risks, such as employees who accidentally or deliberately provide access, are just as dangerous. Securing personal health information to meet HIPAA and other requirements demands access control in situations where the device can be vulnerable. That’s especially important with the proliferation of easily accessed (and misplaced) mobile devices, including laptops, tablets, and smartphones. Security also relates to monitoring network traffic, including the sites that employees access on the Internet. Even legitimate sites can be compromised, which can then compromise sensitive data within the healthcare network.

Both the network and individual devices need to be monitored, maintained, and controlled, ideally using automated, 24/7 processes that don’t require the cost and inefficiency of onsite human intervention. McAfee’s Magallanez says, “We’re finding in the hospital space that margins are thinner and thinner, and administrators are trying to be as efficient as possible. Operating costs can be overwhelming.” Even “green” initiatives that are designed to reduce carbon footprint and make operations more energy-efficient can have security implications. For instance, if a threat is identified on a number of devices on the network but other devices are powered off, historically there wasn’t a way to identify whether the threat had spread without sending technicians to power up, analyze, and patch those devices onsite.

Now administrators can use McAfee’s ePolicy Orchestrator (ePO) Deep Command. The ePO centralized console shows the network administrator where a security threat manifested and the scope of the problem, and defines resources to mitigate the threat. Deep Command uses Intel® vPro™ Active Management Technology (AMT) to provide secure remote access, even if the device isn’t powered on, so the administrator can remotely patch and reboot even large numbers of infected devices. Deep Command can remotely power systems on, apply security and other maintenance protocols, and power the system back down to ensure safe operation when workers return. This eliminates the need to police employee compliance with security patch instructions, and can work around the 24/7 schedule of healthcare environments.


Balance security and performance in medical devices

The ongoing compromise for device developers is how to balance security and performance requirements. McAfee has successfully deployed new technologies to help developers mitigate risk while optimizing performance. McAfee Embedded Control provides application whitelisting that blocks unauthorized applications and changes on fixed-function devices with very little performance overhead. If an application is attacked or changed, the software locks down the system so the malware is intercepted and terminated before it can run. This provides a high level of security and peace of mind for both the hospital administrator and the device manufacturer: because stringent safety certifications (such as FDA approval) restrict changes to certified systems, an unauthorized change can require the equipment to be sent back to the manufacturer to be reimaged, resulting in service costs as well as loss of revenue while the system is out of use.
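
To picture how whitelisting of this kind works in principle, here is a conceptual Python sketch of a hash-based allowlist check; it is not McAfee Embedded Control’s actual mechanism, and the stored hash value is hypothetical.

# Conceptual illustration of hash-based application whitelisting (not McAfee
# Embedded Control's actual mechanism): only binaries whose hash is on the
# approved list are allowed to launch on the fixed-function device.
import hashlib
from pathlib import Path

APPROVED_SHA256 = {
    # Hash captured when the device image was certified (hypothetical value).
    "3f79bb7b435b05321651daefd374cd21b3a2c3d5eb8e0a3c4e3d8f9b2a1c0d4e",
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(binary: Path) -> bool:
    """Block anything not on the whitelist; a real agent would also log and quarantine."""
    return sha256_of(binary) in APPROVED_SHA256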

Device developers can also take advantage of the Intel® AES New Instructions (Intel® AES-NI) encryption instruction set that accelerates the encryption of data in the Intel® Xeon® processor family and the 3rd-generation Intel® Core™ processor family. Encryption technology historically required the operating system to handle encryption algorithms, which can slow performance. McAfee integrates with the Intel AES-NI to offload the encryption engine to the CPU, with no reduction in performance and with full FIPS 140-2 certification.

Medical Device Innovation, Safety and Security (MDISS) Consortium

Looking ahead, Intel and McAfee, along with leading service care providers, device manufacturers, IT providers, research organizations, and others, are active in working groups of the Medical Device Innovation, Safety and Security (MDISS) Consortium. MDISS is focused on optimizing the relationship between the quality of healthcare and the process of assessing and ensuring that devices and systems are secure and functioning safely and appropriately. While MDISS is not a standards organization, its goals include the development of security best practices for safe, secure medical devices and associated networks.

This article first appeared in the Intel® Embedded Community, published by the Intel® Intelligent Systems Alliance.



Cheryl Berglund Coupé is editor of EECatalog.com. Her articles have appeared in EE Times, Electronic Business, Microsoft Embedded Review and Windows Developer’s Journal and she has developed presentations for the Embedded Systems Conference and ICSPAT. She has held a variety of production, technical marketing and writing positions within technology companies and agencies in the Northwest.

