## Time to Rethink Audio Compression? Human Hearing Beats the Fourier Uncertainty Limit

I’ve been thinking (and writing) a lot lately about the similarities between quantum mechanics and electrical engineering. Put succinctly, the parallel runs like this: our propensity to model circuits by their frequency response, via Fourier transforms, is matched by the physicists’ propensity to model reality as waves in both position and momentum space, using the same Fourier transform mechanics.

In quantum mechanics the Heisenberg uncertainty principle sets a limit on how accurately you can measure the position and the speed of an object simultaneously. The result is directly related to the fact that to model position you have to use momentum (velocity times mass) waves, and that to model momentum you have to use position waves. The more accurately you measure position, the larger the spectrum of your momentum waves becomes. Think about it in terms of trying to create a perfect square wave signal using sine waves. As you make the corners sharper and sharper, you have to use more and more sine wave frequencies.
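That square-wave intuition is easy to check numerically. The sketch below (Python with NumPy; the sample grid and harmonic counts are arbitrary choices of mine) sums the odd sine harmonics of a square wave and shows the approximation error shrinking as more frequencies are added:

```python
import numpy as np

# Build a square wave from its odd sine harmonics (its Fourier series):
# square(t) ~ (4/pi) * sum over odd n of sin(n*t)/n.
# The more harmonics we keep, the sharper the corners become.
def square_wave_partial_sum(t, n_harmonics):
    s = np.zeros_like(t)
    for k in range(n_harmonics):
        n = 2 * k + 1              # odd harmonics only: 1, 3, 5, ...
        s += np.sin(n * t) / n
    return (4 / np.pi) * s

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
target = np.sign(np.sin(t))        # the ideal square wave

# Mean-squared error falls as we add harmonics
for n in (1, 5, 50):
    err = np.mean((square_wave_partial_sum(t, n) - target) ** 2)
    print(n, err)
```

Each added harmonic is a higher frequency, so a sharper corner literally costs a wider spectrum — the same trade-off the uncertainty principle expresses.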

I was just starting to wonder whether there was an uncertainty limit in the spectral decomposition of audio signals in electrical engineering when a team of physicists at Rockefeller University in New York answered my question. There is an uncertainty relation for the Fourier decomposition of audio signals, and it relates the ability to distinguish the frequency of a signal to the ability to distinguish its duration. It makes sense that this would be the case, since our Fourier variables are frequency and time as opposed to position and momentum.
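The relation in question is the classic time-frequency uncertainty product, Δt · Δf ≥ 1/(4π), where Δt and Δf are the spreads of the signal in time and frequency, and a Gaussian pulse meets the bound with equality. A small numerical sketch (Python with NumPy; the sample rate and pulse width are arbitrary assumptions on my part) estimates both spreads and checks the product:

```python
import numpy as np

# Estimate the time and frequency spreads of a Gaussian pulse and check
# the uncertainty relation delta_t * delta_f >= 1/(4*pi), which a
# Gaussian meets with equality.
def spread(axis, power):
    power = power / power.sum()            # treat |signal|^2 as a distribution
    mean = (axis * power).sum()
    return np.sqrt(((axis - mean) ** 2 * power).sum())

fs = 1000.0                                # sample rate in Hz (arbitrary)
t = np.arange(-2, 2, 1 / fs)
sigma = 0.05                               # pulse width in seconds (arbitrary)
x = np.exp(-t**2 / (2 * sigma**2))         # Gaussian envelope

dt = spread(t, np.abs(x) ** 2)

X = np.fft.fft(x)
f = np.fft.fftfreq(len(x), 1 / fs)
df = spread(np.fft.fftshift(f), np.fft.fftshift(np.abs(X) ** 2))

print(dt * df, 1 / (4 * np.pi))            # the product sits at the Gaussian bound
```

Narrow the pulse (smaller `sigma`) and `dt` shrinks while `df` grows, keeping the product pinned at the limit — exactly the trade-off the listening tests probed.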

Surprisingly, the researchers also found that humans can beat the Fourier uncertainty limit. They tested a number of trained musicians and found that they could routinely beat the predicted Fourier limit for distinguishing both the pitch and the duration of a sound, by up to a factor of 13.

So, what does it all mean? First of all, it means that the mechanism humans use to decode sounds is not linear. The solution of the differential equation that defines spectral decomposition can be built out of a linear sum of sine and cosine waves. This is essentially the process that defines Fourier decomposition: find out what frequencies make up a signal and what the magnitudes of those frequencies are, then add signals of the requisite frequencies and magnitudes to wind up back at the original. The same linearity that allows solutions to be built in this manner also demands that there be a precision limit. Ipso facto, if humans can beat the precision limit, they are utilizing a process that is inherently non-linear.

It also means that it might be time to re-investigate how we capture, store, and process audio signals. Many of our current models are based on the assumption that human hearing works as a linear decomposition of the frequencies of the audio signals around us. Do these findings account for many audiophiles’ insistence that vinyl and tubes just sound better? Both processes are inherently non-linear, but the real answer remains to be seen.
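That decompose-and-resum pipeline is exactly what a discrete Fourier transform does, and its linearity shows up directly in code. A minimal sketch (Python with NumPy; the two test-tone frequencies are arbitrary examples):

```python
import numpy as np

# The linear pipeline described above: find the frequencies and magnitudes
# that make up a signal, then resum them to wind up back at the original.
fs = 1000                                   # sample rate in Hz (arbitrary)
t = np.arange(0, 1, 1 / fs)
a = 0.7 * np.sin(2 * np.pi * 50 * t)        # a 50 Hz component
b = 0.3 * np.sin(2 * np.pi * 120 * t)       # a 120 Hz component
signal = a + b

spectrum = np.fft.rfft(signal)              # the frequencies and magnitudes
rebuilt = np.fft.irfft(spectrum, n=len(signal))
print(np.max(np.abs(signal - rebuilt)))     # reconstruction error ~ machine epsilon

# Linearity: the transform of a sum is the sum of the transforms.
print(np.allclose(np.fft.rfft(a) + np.fft.rfft(b), spectrum))
```

It is precisely this superposition property — analyze each component independently and add the results — that the uncertainty limit rides on, and that a hearing mechanism beating the limit cannot be obeying.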