Chip Design Magazine



Posts Tagged ‘Neural Network’

Blog Review – Monday, November 6, 2017

Monday, November 6th, 2017

This week, we find that ANSYS gets hyper about Hyperloop development, Xilinx puts its mind to networks, Maxim supports factory automation, and NXP, Mentor and ON Semiconductor explain why and how their products can be used.

A positively upbeat tone is set by Maxim Integrated’s Jeff DeAngelis, who looks at how Industry 4.0 and automation are bringing back jobs, with competitiveness through automation driving reshoring activity.

The now infamous ‘Jeep hack’ is the starting point for Timo van Roermund, security architect at NXP, who considers what safeguards are needed and how the car domain needs to be rethought for security on the roads. As well as citing several NXP products, he provides some useful links.

There’s a new look to the Mentor Graphics blogs and Michael Nopp uses it to good effect to take us through the company’s PADS Professional. His use of clear, colourful graphics adds to a simply told design guide.

Who isn’t super-excited about Hyperloop technology at the moment? Adora Anound Tadros of HyperXite guests on the ANSYS site to tell us how the team from the University of California, Irvine, used simulation tools for its entry in the SpaceX Hyperloop Pod competition. The team is gaining momentum: it finished in the top six of this year’s competition and plans to compete again in 2018 with a self-propulsion pod design.

Smile, you’re on camera, says an image-conscious Jason Liu, ON Semiconductor. He looks at the changing roles of cameras in our lives and introduces the company’s digital image sensor.

Another current favourite topic is neural networks. Steve Leibson proudly relates how a team at the University of Birmingham in the UK has implemented a deep recurrent neural network on a Xilinx Zynq Z-7020 SoC using the Python programming language.

Caroline Hayes, Senior Editor

Cadence Puts a Neural Network in a DSP

Monday, May 1st, 2017

Gabe Moretti, Senior Editor

Cadence Design Systems, Inc. today unveiled the Cadence Tensilica Vision C5 DSP, the industry’s first standalone, self-contained neural network DSP IP core optimized for vision, radar/lidar and fused-sensor applications with high-availability neural network computational needs. Targeted for the automotive, surveillance, drone and mobile/wearable markets, the Vision C5 DSP offers 1TMAC/sec computational capacity to run all neural network computational tasks.

What is a Neural Network?

Neural network technology mimics our present understanding of how the human brain works. Figure 1 shows a depiction of a neuron, the basic component of a neural network.

Figure 1: A biological neuron

The neuron takes inputs from the dendrites, processes them and sends the output through the axon to be distributed by the boutons. The “signals” are propagated and operated upon throughout the network, which operates as a pattern recognition machine. The digital computational equivalent is shown in Figure 2. It is important to understand that a neural network is a directed flowgraph, so that the output of a node affects all of the following connected nodes in the graph, and that the signals are unidirectional.
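To make the flowgraph idea concrete, here is a minimal Python sketch of a tiny feedforward network treated purely as a directed graph. The node names and edge weights are hypothetical, chosen only for illustration, and the activation step (introduced below) is omitted so the one-way flow of signals is easy to follow.

# A toy sketch of a neural network as a directed flowgraph.
# Each node's output feeds only the nodes connected after it,
# and signals flow in one direction only.

# Adjacency list: node -> list of (downstream node, edge weight).
edges = {
    "in1": [("h1", 0.5), ("h2", -0.3)],
    "in2": [("h1", 0.8), ("h2", 0.1)],
    "h1":  [("out", 1.0)],
    "h2":  [("out", 0.6)],
}

def forward(inputs, order=("in1", "in2", "h1", "h2", "out")):
    # Accumulate weighted signals at each node, visiting nodes in
    # topological order so every input arrives before a node is read.
    signal = dict(inputs)
    for node in order:
        for target, weight in edges.get(node, []):
            signal[target] = signal.get(target, 0.0) + signal[node] * weight
    return signal["out"]

print(forward({"in1": 1.0, "in2": 0.5}))  # 0.75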

Figure 2: A Perceptron (Artificial Neuron)

The machine equivalent of the nucleus of the neuron, called a perceptron, has one or more inputs, computational capabilities, and an output that can be distributed to one or more nodes in the flowgraph. A typical perceptron has many inputs, and these inputs are all individually weighted. The perceptron weights can either amplify or de-amplify the original input signal. For example, if the input is 1 and the input’s weight is 0.2, the input will be decreased to 0.2. These weighted signals are then added together and passed into the activation function, which converts the input into a more useful output. There are many different types of activation function. Neural networks are very efficient for machine vision applications.
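A minimal Python sketch of a single perceptron follows, mirroring the description above: weight each input, sum the weighted signals, and pass the sum through an activation function. The input values, weights and step threshold are hypothetical examples, not taken from the article or from any Cadence product.

# A minimal perceptron sketch (illustrative values only).

def perceptron(inputs, weights, activation):
    # Weight each input individually (an input of 1 with weight 0.2
    # becomes 0.2), then sum the weighted signals.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    # The activation function converts the sum into a more useful output.
    return activation(weighted_sum)

# A simple step activation: fire (1) if the weighted sum crosses a threshold.
step = lambda s: 1 if s >= 0.5 else 0

print(perceptron([1, 1, 0], [0.2, 0.4, 0.9], step))  # weighted sum = 0.6 -> 1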