Phase noise is one way to quantify timing noise in a signal; it is most often applied to analog signals.
Phase noise in the frequency domain appears as jitter in the time domain.
With digital signals, jitter is the primary metric used to understand signal integrity and stability.
Digital signals require precise timing, which can be quantified by calculating jitter.
Among all the possible signal integrity metrics you can formulate to quantify signal quality, phase noise and jitter in digital electronics are of prime importance. These metrics are related to each other by a Fourier transform and they are half of the information you’ll collect from an eye diagram simulation or measurement. Eye diagram measurements are a starting point for qualifying channel designs, and you’ll want to extract the jitter value from your measurements and compare it with specs in your signaling standards.
When we look in the frequency domain at arbitrary periodic signals, such as pulse trains in PAM-4, we sometimes use phase noise to define the rolloff to the noise floor in the design. When dealing with digital signals, however, it’s best to use jitter to quantify signal quality, for several reasons. Because digital signals have fast edges, jitter has additional sources beyond random fluctuations in phase. Let’s look at these to better understand the link between phase noise and jitter, and explore the various sources of jitter in a digital design.
Phase Noise vs. Jitter in Digital Electronics
As was mentioned above, these two quantities are related to each other. Phase noise refers to the random fluctuations in the phase of an oscillator, which translate into variations in the timing of a digital signal’s edges, i.e., the instant at which the signal rises above 50% of its span. Phase noise is calculated from a power spectrum measurement in the frequency domain, which is why it is normally associated with sinusoidal oscillators. In contrast, jitter is measured statistically in the time domain from an eye diagram by looking at crossing edge points in a bitstream.
Jitter is a more useful metric in digital signals for several reasons. First, jitter may affect different portions of a signal’s power spectrum with different magnitudes, which requires an extraction and visualization process to determine the effects of jitter on a digital signal. Contrast this with a sinusoidal signal, where jitter is only seen around the band edge as the signal’s power spectrum rolls off to the noise floor.
The second reason jitter is a more useful noise metric is that all jitter measurements come from the time domain. Digital signals are broadband, making it difficult to determine the contribution from phase noise to a signal’s power spectrum. However, jitter is very clearly visible in an eye diagram, so it’s best to start from time-domain measurements.
Using Demodulation to Determine Jitter
The flowchart below nicely summarizes how phase noise and jitter in digital electronics are related. When measuring a bitstream in an eye diagram measurement or simulation, we can extract a “jitter signal” through demodulation. By treating the intended bitstream as a carrier signal, you can unwrap the jitter signal (which creates the observed modulation) using heterodyning. Once the jitter signal is gathered, you can apply a Fourier transform to get a jitter spectrum. Finally, a histogram can be generated by binning, and the tail integral of this histogram (the portion falling outside the sampling window) determines the bit error rate in the channel.
Jitter can be determined as a time-domain record (time interval error, or TIE), from a histogram and a standard deviation calculation, or as a spectrum showing how jitter affects different frequencies
With this procedure, you can arrive at a measurement of phase noise in the design, or rather a distribution of jitter intensity in the frequency domain, as well as a histogram that can be used for statistics. However, we can skip this demodulation step and determine the parameters that generate the above histogram using statistics. This is typically the procedure performed in automated bit error rate testers and analysis programs like MATLAB.
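As a rough numerical sketch of this flow, we can synthesize a jittered edge stream, form the TIE record, and compute its spectrum and histogram. The bit rate, jitter magnitudes, and 10 MHz periodic tone below are all illustrative assumptions, not measured values:

```python
import numpy as np

bit_rate = 1e9                      # assumed 1 Gb/s clock-like pattern
ui = 1.0 / bit_rate                 # unit interval
n_edges = 4096

rng = np.random.default_rng(0)
ideal_edges = np.arange(n_edges) * ui
rj = rng.normal(0.0, 2e-12, n_edges)                  # 2 ps RMS random jitter
pj = 5e-12 * np.sin(2 * np.pi * 10e6 * ideal_edges)   # 10 MHz periodic jitter
measured_edges = ideal_edges + rj + pj

# Time interval error (TIE): deviation of each crossing from its ideal position
tie = measured_edges - ideal_edges

# Jitter spectrum: FFT of the TIE record, sampled once per unit interval
spectrum = np.abs(np.fft.rfft(tie - tie.mean()))
freqs = np.fft.rfftfreq(n_edges, d=ui)

# Histogram for the statistical view; the tails that fall outside the
# sampling window are what set the bit error rate
counts, bin_edges = np.histogram(tie, bins=50)

peak_freq = freqs[np.argmax(spectrum[1:]) + 1]
print(f"RMS jitter: {tie.std() * 1e12:.2f} ps")
print(f"Dominant jitter tone near {peak_freq / 1e6:.2f} MHz")
```

The spectrum separates periodic jitter (a distinct tone) from random jitter (a broadband floor), which is exactly the frequency-domain view of jitter intensity described above.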
Determining Jitter Statistically
The other way to quickly determine jitter is to take timing variation measurements from an eye diagram and calculate the RMS level of jitter in your channels using basic statistics. First, a reference edge or time point is set in the design, and the time at which each edge transition passes some threshold (half the peak voltage) is recorded. From this, we have a set of discrete data that can be used to generate a histogram or to calculate statistical quantities.
Because digital channels are generally linear, it’s safe to assume that the central limit theorem holds and the probability distribution governing each edge rate transition is Gaussian. Therefore, standard statistical measures can be used to determine the parameters in a bimodal Gaussian distribution (one mode for each edge).
Jitter can be determined statistically directly from eye diagram measurements
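A minimal sketch of this procedure, assuming an idealized linear rising edge with Gaussian timing noise (the logic level, rise time, and noise magnitude below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
v_high = 1.2                 # assumed logic level
threshold = v_high / 2       # 50% crossing threshold
rise_time = 50e-12           # assumed 0-100% rise time
t = np.linspace(-100e-12, 150e-12, 2001)   # time axis around the nominal edge

# Each trial shifts the edge by Gaussian timing noise (3 ps RMS, assumed)
offsets = rng.normal(0.0, 3e-12, 500)
crossings = []
for dt in offsets:
    v = np.clip((t - dt) / rise_time, 0.0, 1.0) * v_high   # linear rising edge
    i = np.argmax(v >= threshold)            # first sample at/above threshold
    # Linear interpolation between bracketing samples for sub-sample timing
    t_cross = t[i - 1] + (threshold - v[i - 1]) * (t[i] - t[i - 1]) / (v[i] - v[i - 1])
    crossings.append(t_cross)

crossings = np.array(crossings)
print(f"Mean crossing time: {crossings.mean() * 1e12:.2f} ps")
print(f"RMS jitter: {crossings.std() * 1e12:.2f} ps")
```

In a real measurement the waveform comes from an oscilloscope capture rather than a synthetic ramp, but the crossing interpolation and statistics are the same; fitting one Gaussian per edge type yields the bimodal distribution.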
It should be obvious that, although digital signals have a well-defined power spectrum, quantifying timing variations in the frequency domain using phase noise is not so simple. Eventually, you will need to get to a time-domain value for jitter because that is what will determine whether a bitstream can be sampled as digital data at a receiver.
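To make that last point concrete: under the common assumption of purely Gaussian jitter, the bit error rate follows from the tail probability that an edge wanders past the sampling instant at the center of the unit interval. The bit rate and jitter value below are illustrative assumptions:

```python
import math

ui = 100e-12        # unit interval for an assumed 10 Gb/s signal
sigma = 3e-12       # assumed 3 ps RMS Gaussian jitter
margin = ui / 2     # timing margin from the edge to the mid-UI sampling point

# One-sided Gaussian tail probability that an edge arrives past the
# sampling instant; this is the tail integral of the jitter histogram
ber = 0.5 * math.erfc(margin / (sigma * math.sqrt(2)))
print(f"Q = {margin / sigma:.1f}, estimated BER = {ber:.2e}")
```

Reducing the RMS jitter or widening the unit interval drives this tail probability down very rapidly, which is why high-speed standards can demand femtosecond-scale jitter limits.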
Causes of Jitter in Digital Electronics
There are several causes of jitter in digital systems:
- Intersymbol interference due to inadequate bandwidth and reflections
- Power bus noise and transients
- Random noise (e.g., thermal noise)
- Crosstalk during edge transitions
In any jitter measurement, all of these factors could be present simultaneously. Designers often use simulations to decouple these sources of jitter, or isolated channel measurements to determine the influence of each.
What Level of Jitter Should You Require?
The required level of jitter in a given design depends on multiple factors. Pulses or arbitrary waveforms might require a different level of jitter than a typical square/trapezoidal digital signal. For standard digital protocols (PCIe, DDR, etc.), the acceptable level of jitter is normally defined in the standard. For example, PCIe Gen4 places a limit of 500 fs on jitter; other very high-speed digital protocols may place similarly tight margins on jitter.
What is important for your digital design is how to get jitter as small as possible for your application. For an application like lidar that uses pulse streams with time-of-flight measurements, jitter needs to be on the order of 1 ps in order to guarantee accuracy down to mm levels. This requires very precise filtering, very low noise, or both. Unfortunately, for many digital systems that have their high-speed protocols stuffed inside of a large IC, filtering isn’t sufficient, so designers have to focus on power stability, impedance matching, and EMI immunity.
The FPGA that will sit on this BGA footprint could have logic levels as low as 1.2 V and might tolerate less than 100 mV of power bus noise. Jitter on these outputs can’t be filtered with discrete components because the I/O density is so high, so designers need to focus on designing to have low PDN impedance, low ground bounce, and stable power delivery.
Phase noise and jitter in digital electronics are unavoidable, but you can design to minimize things like power dropout, crosstalk, reflections, and EMI susceptibility with a complete set of system analysis tools from Cadence.