In perfectionist audio, a perfect digital chain is often described as “bit-perfect”. When two player applications are both bit-perfect, there should be no audible difference between them (provided the rest of the chain remains unchanged, of course). People now realize this is wrong: they hear differences between players such as Amarra, Decibel and iTunes. This observation isn’t even heavily disputed; most people can hear it. So clearly bit-perfect alone is not enough to describe the quality of a digital audio player. When describing a DAC, bit-perfect is not a point of discussion, but another parameter called ‘jitter’ is (check Wikipedia for background on jitter). Jitter is the equivalent of hum and noise, but its cause lies in timing: small deviations in when each sample is converted show up as noise and sidebands in the frequency domain. Minimizing jitter requires perfect (relative) timing. In fact, timing is key in any digital audio environment, just as it is in any music performance: music is not merely a series of notes, each note also has to sound at the right moment in time.
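To make that a bit more concrete, here is a minimal Python sketch of how a pure timing error turns into noise. The 10 kHz test tone and the 1 ns RMS jitter figure are assumptions chosen purely for illustration, not measurements of any real device:

```python
import numpy as np

fs = 44_100           # sample rate (Hz)
f0 = 10_000           # test tone frequency (Hz) - assumed for illustration
n = np.arange(2 ** 16)

# Ideal sampling instants vs. instants disturbed by clock jitter
# (1 ns RMS is an arbitrary illustrative value).
jitter_rms = 1e-9
t_ideal = n / fs
t_jitter = t_ideal + np.random.normal(0.0, jitter_rms, n.size)

ideal = np.sin(2 * np.pi * f0 * t_ideal)
jittered = np.sin(2 * np.pi * f0 * t_jitter)

# The error behaves like added noise, even though every sample *value*
# is computed perfectly -- only the moment of conversion moved.
error = jittered - ideal
print("error RMS:", np.sqrt(np.mean(error ** 2)))
```

The point of the sketch: the data can be bit-perfect and the output still degraded, because the degradation lives entirely in the timing.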

So we have not only ‘bit-perfect’ but also ‘perfect timing’ as parameters describing a digital audio chain. These parameters apply to audio software (player and OS) as well as audio hardware (cabling and DAC). Could it be jitter that makes Amarra sound better than Decibel? Can the logic in the player and OS software influence the timing of the digital audio data stream? Well, in order to explain the audible differences it has to be…

A common issue with iTunes is its built-in sample rate converter. Why it is there is easily explained: it needs to be there because iTunes can drive an AirPort Express. An AE relies on iTunes for its timing (clocking); iTunes is the master clock driving the AE. Your DAC/OS driver also has a built-in clock, and normally this clock would ‘clock’ the audio player. But with iTunes being the master clock (for the AE) we end up with two masters… And that is why iTunes needs sample rate conversion built in. Any sample rate conversion has an impact on timing, and there we are: iTunes sounds worse than players like Amarra or Decibel because those players can slave to the DAC/OS-driver clock. iTunes can still be bit-perfect, but without perfect timing this means nothing.
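For illustration, here is a rough Python sketch of why reconciling two clocks through sample rate conversion costs you bit-perfection. The 50 ppm clock offset and the plain linear interpolation are simplifying assumptions (real converters use far better filters), but the conclusion is the same: the output samples are no longer the input samples.

```python
import numpy as np

fs_in = 44_100.0                    # source clock (Hz)
fs_out = 44_100.0 * (1 + 50e-6)     # sink clock, assumed 50 ppm fast

n = np.arange(4096)
src = np.sin(2 * np.pi * 1000 * n / fs_in)      # a 1 kHz tone

# Resample by linear interpolation so the stream fits the sink clock.
t_out = np.arange(int(n.size * fs_in / fs_out)) / fs_out
resampled = np.interp(t_out, n / fs_in, src)

# Nominally both clocks run at 44.1 kHz, yet the converted samples
# no longer match the originals bit for bit.
print(np.allclose(resampled[:100], src[:100]))   # False in general
```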

The differences between players like Amarra and Decibel are much more subtle. Maybe it’s the way their interface to the OS driver is programmed? But it must have something to do with timing, that’s for sure. With closed player software it’s difficult to determine the differences, although DAC hardware is mostly ‘closed’ as well. Why not perform (relative) jitter measurements on software players? Same DAC, OS, sound file etc., but different player software? Anybody? And don’t try this by capturing the digital audio stream, since the timing of the capture is slaved to the data stream… The real measurement should take place in the analog domain…
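As a rough sketch of how such a measurement could look: capture the DAC’s analog output with a good ADC while each player plays the same test tone, then compare the spectra around the tone; jitter shows up as sidebands flanking it. The function and variable names below are illustrative assumptions, not an existing tool:

```python
import numpy as np

def jitter_sidebands(capture, fs, f0, span_hz=4000):
    """Return frequencies and relative levels (dB) around a test tone
    captured at the DAC's analog output. Higher sideband levels around
    f0 suggest more jitter in that playback chain."""
    win = np.hanning(capture.size)
    spec = np.fft.rfft(capture * win)
    freqs = np.fft.rfftfreq(capture.size, 1.0 / fs)
    mag_db = 20 * np.log10(np.abs(spec) / (np.abs(spec).max() + 1e-20) + 1e-20)
    sel = (freqs > f0 - span_hz) & (freqs < f0 + span_hz)
    return freqs[sel], mag_db[sel]

# Hypothetical usage: capture_a and capture_b would be analog-domain
# recordings of the same tone played by two different players,
# digitized by the same ADC at rate fs_adc.
# fa, da = jitter_sidebands(capture_a, fs_adc, f0=11_025)
# fb, db = jitter_sidebands(capture_b, fs_adc, f0=11_025)
```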
