
Anyone who is interested in digital audio has heard of jitter. But what is jitter? Wikipedia gives the following definition: “Jitter is the deviation from true periodicity of a presumed periodic signal in electronics and telecommunications, often in relation to a reference clock source.” Great, but what does that mean? To understand it, we have to understand how digital audio works - I promise to keep it simple.

Sound as we hear it is nothing more than propagating variations in air density, a bit like the ripples in the water when you throw in a stone. When a microphone is placed so that it ‘sees’ the sound, its membrane moves with the air pressure variations at any given moment. This produces a voltage at the output of the microphone that has the same shape as the air pressure variations. Or, as the technicians say: the output voltage is analogous to the pressure variations, and therefore they call that signal analogue.

To convert it to digital, the amplitude of the analogue waveform is measured at regular intervals. To show how this works, I drew a tiny piece of a signal, the red line here, against time on the horizontal axis. A straight line is not likely to occur often in real audio, but you will see later why I used it. The analogue-to-digital converter measures the voltage at precise intervals and stores the measured values in a table, as seen on the right.

 

The line is sampled at regular intervals 
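To make the idea concrete, here is a minimal sketch of that sampling step in Python. The straight ramp signal, the 44.1 kHz sample rate and the one-millisecond duration are assumptions for illustration only; a real converter would also quantise each measurement, which is left out here.

```python
import numpy as np

# A minimal sketch of the sampling step (quantisation is left out).
# The "analogue" signal is a hypothetical straight ramp, like the red
# line in the figure: it rises linearly from 0 V to 0.5 V.
sample_rate = 44_100                     # samples per second (CD rate, assumed)
duration = 0.001                         # one millisecond of signal

def analogue_signal(t):
    """Voltage of the hypothetical ramp at time t (in seconds)."""
    return 0.5 * t / duration

# Measure the voltage at precise, regular intervals ...
sample_times = np.arange(0, duration, 1 / sample_rate)
samples = analogue_signal(sample_times)

# ... and keep the measured values as a table, ready to be stored.
for t, v in zip(sample_times[:5], samples[:5]):
    print(f"t = {t * 1e6:6.1f} us  ->  value = {v:.6f} V")
```

The table of measured values is all that survives of the original waveform, which is why the timing of those measurements matters so much later on.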

 

The table holding the measurements is then stored on a hard disk or other storage device. And as long as the signal remains in the digital domain, you can send it around the world, make copies a thousand generations deep, send it to Mars for all I care, all without any loss. Only when losses are intentional, as with MP3 compression or digital tampering (often called remastering), do you not get back what was put in.

 

Playback

 

Before playback, digital-to-analogue conversion must be performed. This is the same process as analogue-to-digital conversion, but in reverse: the digital audio data is read from the hard disk and placed in a table, and the analogue waveform is then reconstructed by plotting these values at precise time intervals.

 

Perfect reconstruction of the waveform
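A sketch of that reverse step, assuming a hypothetical table of eight stored values from the straight-line example and the same 44.1 kHz clock:

```python
import numpy as np

# A minimal sketch of digital-to-analogue reconstruction, assuming a
# hypothetical table of eight stored values (the straight-line example)
# and a 44.1 kHz playback clock.
sample_rate = 44_100
stored_values = np.linspace(0.0, 0.5, 8)     # values read back from disk

# With a perfectly stable clock, every value is put back at exactly the
# instant it was measured, so the original straight line is recovered.
playback_times = np.arange(len(stored_values)) / sample_rate
for t, v in zip(playback_times, stored_values):
    print(f"t = {t * 1e6:6.1f} us  ->  value = {v:.4f} V")
```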

 

Precise timing is of the essence: if the samples are plotted at irregular intervals, a different waveform is constructed. What should have been a straight line is now a distorted line. These timing inconsistencies are called jitter.

 

Timing inconsistencies cause jitter 
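The toy example below illustrates this with the same straight-line signal. The 5 ns clock error is an assumed figure chosen purely for illustration: the stored values are correct, but because they are placed at slightly wrong moments, the reconstructed line no longer matches the one that went in.

```python
import numpy as np

# A toy illustration of jitter, assuming the same straight-line signal.
# The stored values themselves are correct, but the clock places them at
# slightly wrong moments, so the reconstructed shape deviates from the
# straight line that went in.
sample_rate = 44_100
n = 8
slope = 0.5 * sample_rate / (n - 1)          # volts per second of the ramp

ideal_times = np.arange(n) / sample_rate
stored_values = slope * ideal_times          # perfect measurements

rng = np.random.default_rng(0)
timing_error = rng.normal(0.0, 5e-9, n)      # ~5 ns of clock jitter (assumed)
jittered_times = ideal_times + timing_error

# The waveform that *should* pass through each (slightly wrong) instant
# versus the value that is actually played there: the difference is the
# distortion introduced purely by timing, not by the stored values.
expected = slope * jittered_times
distortion = stored_values - expected
print(f"peak jitter-induced error: {np.max(np.abs(distortion)):.3e} V")
```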

 

Jitter can be caused by all kinds of problems: interference from other clocks, interference from high-frequency signals from cell phones, Wi-Fi or microwave ovens, ground loops, cable losses and so on. Depending on the kind of interference, different sound problems may pop up: reduced deep lows, muffled mid-lows, sharp voices and brass, loss of resolution, poorer stereo imaging and focus, and so on. The problem can be caused by the source, such as the player putting out a bad clock, by the D/A converter detecting the incoming clock signal poorly or handling the clock badly internally, or by bad shielding or losses in the digital interconnect. It might even be a combination of some or all of the above.

 

You might wonder why we bother and do not just stay analogue. Well, both analogue and digital have limitations and potential problems. The big advantage of digital is that, once digitized, the signal is easily stored, transported and copied without any loss. Only when converting to digital and back to analogue must care be taken to do it against a very stable clock and with properly designed converters and filters.

 
