Part of the Chip Design Magazine



Archive for February, 2013

Twisted Light Spins Out

Sunday, February 24th, 2013

If you were pinning the hopes of your next startup on last year’s announcement of twisted-light communications, you might want to scale back your venture capital search drastically, or, if you’re far enough along, hurry up and cash the check. Last year, a team of scientists claimed to have made the first transmission modulating the orbital angular momentum (OAM) of light. They claimed that by modulating different whole-integer values of light’s spin, they could provide much broader communications bandwidth than was currently available. Their scientific report appeared in Nature, and subsequently on the BBC news service as well as other media outlets.

Since then, it has been shown (see the references below) that the newly proposed mode of propagation, at least in free space, can’t be sustained and does not offer bandwidth improvements over existing communication methods such as multiple-input multiple-output (MIMO). One study even showed that some of the claims made for the method contradict the second law of thermodynamics.

While large media splashes were made about the initial reports of OAM technology demonstrations, I can’t find a single mention of the contradicting studies in any of the major media outlets. This raises the question of responsibility in scientific journalism. How far should the popular scientific press be expected to go to make the public aware of counterarguments to, or disputes of, science and technology they’ve reported on? I know there are a number of new scientific developments every day, and it would probably be impossible to track the progress of each one. In this case, however, that would seem to be an invalid excuse: the comment stream at the end of the Scientific American report actually contains a link to one of the contradicting studies.

For big-splash articles like those linked below in Scientific American, the BBC, and Nature, should the public expect their news sources to update them on developments instead of leaving them ignorant by omission?


Encoding many channels on the same frequency through radio vorticity: first experimental test

BBC: ‘Twisted’ waves could boost capacity of wi-fi and TV

Nature: Terabit free-space data transmission employing orbital angular momentum multiplexing

Scientific American: Twisted Radio Waves Could Expand Bandwidth for Mobile Phones

Is orbital angular momentum (OAM) based radiocommunication an unexploited area?

Comment on ‘Encoding many channels on the same frequency through radio vorticity: first experimental test’


Time to Rethink Audio Compression? Human Hearing Beats the Fourier Uncertainty Limit

Monday, February 11th, 2013

I’ve been thinking (and writing) a lot lately about the similarities between quantum mechanics and electrical engineering. Put succinctly, the similarity runs like this: our propensity to model circuits in terms of their frequency response using Fourier transforms is matched by the physicists’ propensity to model reality in terms of waves in both position and momentum space, using the same Fourier transform mechanics.

In quantum mechanics, the Heisenberg uncertainty principle sets a limit on how accurately you can measure the position and the speed of an object simultaneously. The result is directly related to the fact that to model position you have to use momentum (velocity times mass) waves, and that to model momentum you have to use position waves. The more accurately you measure position, the larger the spectrum of your momentum waves becomes. Think about it in terms of trying to create a perfect square wave using sine waves: as you make the corners sharper and sharper, you have to use more and more sine wave frequencies.
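That square-wave intuition is easy to check numerically. The following is just an illustrative sketch (my own, not from the article): it sums the odd sine harmonics of a square wave’s Fourier series and shows the approximation error shrinking as more frequencies are allowed in.

```python
import numpy as np

def square_partial_sum(t, n_terms):
    """Partial Fourier series of a unit square wave: the sum of its
    first n_terms odd sine harmonics, each weighted by 4/(pi*n)."""
    s = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1  # only odd harmonics appear in a square wave
        s += (4.0 / np.pi) * np.sin(n * t) / n
    return s

# On (0, pi) the square wave equals +1; measure how close the
# partial sums get as we allow more and more sine frequencies.
t = np.linspace(0.0, np.pi, 2000, endpoint=False)[1:]
target = np.ones_like(t)
for n_terms in (5, 50, 500):
    rms = np.sqrt(np.mean((square_partial_sum(t, n_terms) - target) ** 2))
    print(n_terms, rms)  # RMS error falls as harmonics are added
```

The residual error that refuses to vanish right at the corners is the well-known Gibbs phenomenon, which is exactly the “sharper corners need more frequencies” trade-off in action.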

I was just starting to wonder whether there was an uncertainty limit in the spectral decomposition of audio signals in electrical engineering when a team of physicists at Rockefeller University in New York answered my question. There is an uncertainty relation for the Fourier decomposition of audio signals, and it relates the ability to distinguish the frequency of a signal to the ability to distinguish its duration. It makes sense that this would be the case, since our Fourier variables are frequency and time as opposed to position and momentum.
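The time–frequency trade-off itself can be demonstrated in a few lines. This is a hedged sketch (mine, not the researchers’ method, and the function name is my own invention): it measures the RMS widths of a Gaussian pulse in time and in frequency, whose product sits at the Fourier lower bound of 1/(4π). Squeezing the pulse in time broadens its spectrum, and vice versa.

```python
import numpy as np

def tf_widths(sigma, n=4096, T=50.0):
    """RMS widths (standard deviations) of a Gaussian pulse in time
    and in frequency, using |signal|^2 as the density in each domain."""
    t = np.linspace(-T / 2, T / 2, n, endpoint=False)
    dt = t[1] - t[0]
    g = np.exp(-t**2 / (2 * sigma**2))

    p_t = np.abs(g)**2
    p_t /= p_t.sum()
    t_width = np.sqrt(np.sum(p_t * t**2))  # pulse is centered at t = 0

    G = np.fft.fftshift(np.fft.fft(g))
    f = np.fft.fftshift(np.fft.fftfreq(n, d=dt))
    p_f = np.abs(G)**2
    p_f /= p_f.sum()
    f_width = np.sqrt(np.sum(p_f * f**2))
    return t_width, f_width

for sigma in (0.5, 1.0, 2.0):
    t_w, f_w = tf_widths(sigma)
    print(sigma, t_w, f_w, t_w * f_w)  # product stays near 1/(4*pi)
```

The Gaussian is the minimum-uncertainty pulse, so its width product sits right at the bound; any other pulse shape gives a larger product, never a smaller one.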

Surprisingly, the researchers also found that humans can beat the Fourier uncertainty limit. They tested a number of trained musicians and found that they could routinely beat the predicted Fourier accuracy limit for distinguishing both the pitch and duration of a sound, by up to a factor of 13.

So, what does it all mean?  First of all, it means that the mechanism humans use to decode sounds is not linear.  The solution of the differential equation that defines spectral decomposition can be built out of a linear sum of sine and cosine waves.  This is essentially the process that defines Fourier decomposition: find out what frequencies make up a signal and what the magnitudes of those frequencies are, then add signals of the requisite frequencies and magnitudes to wind up back at the original.  The same linearity that allows solutions to be built in this manner also demands that there be a precision limit.  Ipso facto, if humans can beat the precision limit, they are utilizing a process that is inherently non-linear.  It also means that it might be time to re-investigate how we capture, store, and process audio signals.  Many of our current models are based on the assumption that human hearing works as a linear decomposition of the frequencies of the audio signals around us. Do these findings account for many audiophiles’ insistence that vinyl and tubes just sound better?  Both processes are inherently non-linear, but the real answer remains to be seen.
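The “find the frequencies, then add them back up” round trip described above is exactly a linear sum, and that linearity is what the musicians’ ears apparently sidestep. As a sketch (assuming NumPy, not any particular audio codec), here is the decomposition done with the FFT and the reconstruction done as a plain sum of one complex exponential per frequency bin.

```python
import numpy as np

# Build a test signal from two known tones, then recover it by summing
# sinusoids at the frequencies and magnitudes the FFT reports.
n = 256
t = np.arange(n) / n
signal = 1.5 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 10 * t)

spectrum = np.fft.fft(signal)

# Rebuild the signal as a linear sum of complex exponentials, one per
# FFT bin -- this loop is the "add the frequencies back up" step.
rebuilt = np.zeros(n, dtype=complex)
for k in range(n):
    rebuilt += spectrum[k] * np.exp(2j * np.pi * k * np.arange(n) / n) / n

print(np.max(np.abs(rebuilt.real - signal)))  # round trip is exact
```

Because every step here is a linear operation, the reconstruction is perfect but inherits the time–frequency precision limit; a hearing model that beats that limit has to do something this loop cannot.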