Any repetitive signal can be represented as the rotation of a vector around a point. If the length of the vector is fixed and the rate of rotation is constant the end of the vector will continuously trace a circle. Each pass around the circle represents one complete cycle of the signal.

Let us set an arbitrary point at the top of the circle. The instant that the vector passes through this point defines a fixed location on the repetitive signal. Between leaving this reference point and returning to it the vector sweeps out an angle of 2π radians. The angle between the reference point and the moving vector is the instantaneous phase of the signal. In fact, the vector really traces a spiral in phase space because the center point moves along a time line. The time line is at right angles to the plane in which the vector rotates.
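The rotating-vector picture can be sketched in a few lines of Python (the 50 Hz frequency is an arbitrary illustrative value):

```python
import math

def phasor(freq_hz, t):
    """Unit vector rotating at a constant rate: returns (x, y) at time t.
    The angle swept from the reference point is the instantaneous phase."""
    phase = 2 * math.pi * freq_hz * t  # radians
    return (math.cos(phase), math.sin(phase))

# After exactly one period (t = 1/f) the vector has swept 2*pi radians
# and returned to the reference point, completing one cycle.
x, y = phasor(50.0, 1 / 50.0)
```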

The frequency with which the rotating vector passes the reference point is also the frequency of the signal. For the moment we will let the vector take the same time on each sweep around the circle, so that the signal frequency is constant.

Suppose now that we have a second vector rotating around the same center point. Its length is unimportant; we will say that both vectors are one unit long, without defining the nature of the unit. If the rate of rotation of this second vector is the same as that of the first, we can say that it represents a second signal at the same frequency. The vectors need not coincide. If the second vector stays a constant distance behind the first, we say that the phase of the second signal lags the first by so many radians. If the two vectors overlap, the signals are said to be in phase. If they lie along a rotating diameter, so that they always point in opposite directions, we say that the signals are in anti-phase. If the vectors form a constant right angle, we say that the signals are in quadrature.

It is frequently convenient to refer a real signal to either another real signal as a phase reference, or to relate it to a perfect theoretical signal at its nominal frequency that starts in-phase at some arbitrary point in time. The phase relationship between the real signal and this theoretical reference signal is fundamental to understanding phase, frequency and logic-induced modulation.

Suppose the real signal vector starts to pull away from the reference. The increasing phase lead tells us it must be rotating faster. The rate of increase of phase difference is a measure of the difference in frequency between the signal and the reference. So, if a signal continuously increases its phase with respect to its reference, it has been shifted upwards in frequency. Note that a constant rate of increase in phase angle means a fixed increase in frequency.

Frequency is thus seen to be the first derivative of phase, in both the absolute and relative cases, and, correspondingly, phase is the time integral of frequency. The infinite nature of a steady oscillation spiraling down its time line makes this integral increase without limit. To make the mathematics more tractable we often choose to refer to phase angles modulo 2π, that is, to treat each pass through the reference point on the phase circle as though it had reset the radian count to zero. On occasion this mathematical convenience can obscure the precise nature of a modulated signal.
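The derivative relationship can be checked numerically. The sketch below, with an assumed 2 Hz offset and 1 ms sample spacing, recovers the frequency offset from the slope of an unwrapped phase record:

```python
import math

def frequency_offset_hz(phase_rad, dt):
    """Frequency is the first derivative of phase: estimate the offset from
    the slope of an unwrapped phase record sampled every dt seconds."""
    slope = (phase_rad[-1] - phase_rad[0]) / (dt * (len(phase_rad) - 1))
    return slope / (2 * math.pi)

# A signal 2 Hz above its reference accumulates 4*pi radians per second.
dt = 0.001
phase = [2 * math.pi * 2.0 * n * dt for n in range(1000)]
offset = frequency_offset_hz(phase, dt)
```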

It should now be apparent that what is commonly called "frequency modulation" is in fact periodic phase modulation. If the modulating signal is also a periodic sinusoid, the signal vector will be seen to lead the reference vector for some time. Then it will pass back through the in-phase condition and lag for some time. The total amount of lead or lag accumulated by the carrier during each half-cycle of the modulating signal is kept well below π, to reduce the chances of the detector losing count of whole cycles, but this is not an inherent property of this type of modulation; in principle the only restriction is that the long-term average phase difference must be zero, to prevent the signal from acquiring a permanent frequency shift.
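This view of sinusoidal "frequency modulation" as periodic phase modulation can be sketched directly. The modulation index and rate below are assumed values for illustration: the peak phase deviation stays below π, and the long-term average is zero.

```python
import math

beta = 1.0    # modulation index: peak phase deviation in radians (assumed, < pi)
fm_hz = 100.0 # modulating frequency in Hz (assumed)
dt = 1e-5
n = 10000     # 0.1 s, i.e. ten full cycles of the modulating signal

# Phase lead/lag of the carrier relative to its reference over time:
deviation = [beta * math.sin(2 * math.pi * fm_hz * k * dt) for k in range(n)]
peak = max(abs(d) for d in deviation)  # stays well below pi
mean = sum(deviation) / n              # long-term average phase difference: zero
```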

So, if "frequency modulation" really means periodic phase modulation, what is "phase modulation"? This is a nebulous term, open to many interpretations.

In communications systems, the term "phase modulation" (or "phase-shift keying") is only applied to carrier signals that are modulated with some form of binary information. In such a scheme one entire cycle of the carrier will be either shifted by π radians, to produce an anti-phase cycle, or shifted by π/2 radians, to produce a positive or negative quadrature cycle, or by some other exact sub-multiple of one cycle. These abrupt phase changes are a convenient way to pack more information onto a given carrier than simple amplitude or frequency modulation would permit. Because the frequency of the carrier is not changed by this modulating process, a simple phase-locked-loop detector with a long time constant will suffice to demodulate them. The output of the phase comparator can be seen to follow these abrupt phase changes while the averaged signal passing through the loop filter tracks slow drifts in the center frequency. With appropriate design constants the same simple PLL with its long loop constant will accurately extract the "frequency" modulation from an analog-modulated carrier.
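A minimal sketch of the binary case (the function name and bit stream are illustrative, not from any particular standard): each bit selects an exact sub-multiple phase shift of the carrier cycle, here π radians.

```python
import math

def bpsk_phases(bits):
    """Map a binary stream to per-cycle carrier phase shifts in radians:
    a 1 shifts the cycle by pi (anti-phase), a 0 leaves it unshifted."""
    return [math.pi if bit else 0.0 for bit in bits]

shifts = bpsk_phases([0, 1, 1, 0])
```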

Obviously, for our purposes in dealing with LIM, this definition of "phase modulation" will not serve to describe the difference between "phase" and "frequency" modulation. For that explanation, we have to leave the comfortable world of steadily varying signals and consider the meaning of phase and frequency in the digital environment.

Any cycle of a repetitive pure digital signal has only two identifiable points where it changes state. In principle we can disregard the portion of the signal between these points. This leaves only the edges. We can represent these by a pair of delta functions. Because the delta functions are discontinuous, we describe such a signal as discrete or sampled.

In practice, we represent one of these points by the positive transition of the wave form and the other by the negative transition.

If the signal is a perfect square wave, these points are equally spaced in time and the entire wave form has a single characteristic frequency. With such a symmetrical wave form we can apply the concepts of phase and frequency described above for a continuous sinusoidal signal. Then adding a constant increment of phase will shift its frequency. Alternately adding and subtracting phase will phase-shift it, and so on. Quadrature square-wave signal pairs, both of the same frequency but separated in phase by π/2 radians, are very common.

However, we can do fancier tricks with these wave forms. By shifting the time of occurrence of only one of the edges we create a pulse-width modulated signal. The frequency of such a wave form is not changed by this process. The time between successive similar edges is unchanged, and the signal is still of constant amplitude. Yet it carries information in the relative positions of the positive and negative edges. The phase of such a signal is changed in a way that has no counterpart in the continuous domain. One set of edges remains in-phase with a reference square-wave, while the other edges move freely to lead or lag the reference.
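The edge timing of such a PWM wave form can be written out explicitly. The period and per-cycle modulating values below are assumed for illustration:

```python
T = 1e-3                      # carrier period in seconds (assumed)
duty = [0.3, 0.5, 0.7, 0.4]   # per-cycle modulating values (assumed)

# Positive edges stay locked to the carrier period; negative edges move
# with the modulating signal.
pos_edges = [n * T for n in range(len(duty))]
neg_edges = [n * T + d * T for n, d in enumerate(duty)]

# Time between successive positive edges is constant, so the frequency is
# unchanged, yet information rides in the positive-to-negative edge spacing.
periods = [b - a for a, b in zip(pos_edges, pos_edges[1:])]
```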

Under these conditions the concept of applying a single phase number to describe the state of the modulated wave form is meaningless.

Worse yet, we can phase-shift the positive edge of the wave form, with respect to a stable reference signal, and make this shift independent of the shift that is occurring on the negative edge. If a real reference signal is available, this would permit a single PWM wave form to carry two unrelated modulating signals. Generally no such stable reference is available, and, since both edges are moving, the signal also contains frequency modulation because the time from any positive edge to the next positive edge is no longer constant.

What happens if we feed such a signal through a detector consisting of a digital divider followed by a phase-locked-loop with a variable loop filter, such as is found in the **LIM Detector**?

First, the divider throws away one edge entirely. This is proper, since the reconstruction system only uses one edge to set its critical timing. You have to choose the right edge for the results to be meaningful, and the detector comes equipped with a polarity (slope) switch for this purpose.

Then the edge-position modulation (i.e., LIM) on the selected input edge appears on both edges of the divided-down wave form. This is because the divider flips on one edge, ignores the next edge, flops on the third edge, ignores the fourth edge, and so on. Both the flip and the flop are timed by the same-polarity jittery edges. The timing errors are propagated through the entire divider chain in the same manner.
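This flip/flop behavior can be modeled on a list of edge times. The edge values below are assumed jittered positive-edge times with a nominal 1-unit spacing:

```python
def divide_by_two(selected_edges):
    """A divider toggling on every selected same-polarity edge: alternate
    input edges become the positive and negative edges of the output, so
    both output polarities carry the jitter of the selected input edges."""
    return selected_edges[0::2], selected_edges[1::2]

# Assumed jittered positive-edge times (nominal spacing 1.0):
edges = [0.000, 1.010, 2.000, 2.990, 4.000, 5.020]
out_pos, out_neg = divide_by_two(edges)
```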

The jitter timing is unchanged by the division process but the overall average period of the divided wave form fed to the detector is doubled at each division. Because of this, the relative amplitude of the jitter is reduced by the division process. The scaling is linear, a 6 dB reduction for each divide-by-two ahead of the detector.
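The 6 dB-per-stage scaling follows directly from the period doubling, as this small sketch shows:

```python
import math

def relative_jitter_db(stages):
    """Change in jitter relative to the waveform period after `stages`
    divide-by-two stages: the absolute jitter is unchanged, the period
    doubles each stage, so the relative jitter falls by
    20*log10(2) ~ 6 dB per stage."""
    return -20 * math.log10(2 ** stages)

drop = relative_jitter_db(3)  # three dividers ahead of the detector
```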

Finally, the observed signal at the output will depend in some way on the loop time-constant in the detector. The detector is a very fast comparator. It compares the instantaneous phase of the input signal with one derived from a VCO. It is followed by two filters. One filter averages the output and feeds it back to the VCO to make this "reference" oscillator track some aspect of the input signal, while the other conditions the comparator output for further processing. In the **LIM Detector**, the output filter is a simple brick-wall low-pass with a corner frequency of 20 KHz. Its sole purpose is to remove LIM signals that are not likely to cause audible degradation but which might be aliased back into the audio band when the signal is analyzed using a digital sampling spectrometer.
The properties of the PLL loop filter are clearly what determine the appearance of the output from the LIM detector. If we suppose the LIM signal has components at all frequencies, from a few hertz to several MHz, then we can ask the question "What output will be observed for a short loop time-constant, and what will be seen if the time-constant is made very long?"

If the time-constant is short, the "reference" frequency from the VCO will track all changes below the corner frequency of the filter and will track changes that occur at a rate above its corner frequency in a manner that follows the roll-off of the filter. So, if we use (say) a two-pole filter with a 1 KHz corner frequency, all LIM signals below 1 KHz are attenuated (with a residual that represents the finite conversion gain of the VCO), and signals of constant amplitude above 1 KHz will appear to increase (from this residual) at 12 dB per octave.

If the time-constant is very long, the "reference" frequency will remain relatively constant. This allows all the LIM signals to be detected. In fact, this is also an approximation, since a long time-constant means a filter with a very low corner frequency.

Suppose the time constant is ten seconds; then all variations below 0.1 Hz are suppressed, and the LIM signals above this frequency rise at the same 12 dB per octave as the VCO feedback falls. Now at 20 Hz, the feedback will be reduced by 92 dB, and will be effectively zero. The result will be a flat response to the LIM signals in the 20 Hz to 20 KHz range.
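The arithmetic behind that figure can be checked with a short sketch, assuming the two-pole 12 dB-per-octave slope described above:

```python
import math

def feedback_reduction_db(f_hz, corner_hz, slope_db_per_octave=12.0):
    """Approximate reduction in VCO feedback at a frequency above the
    loop-filter corner, assuming a two-pole 12 dB/octave roll-off."""
    return slope_db_per_octave * math.log2(f_hz / corner_hz)

# 20 Hz is log2(200) ~ 7.6 octaves above a 0.1 Hz corner: roughly 92 dB.
reduction = feedback_reduction_db(20.0, 0.1)
```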

Now we have assumed repetitive interfering signals of constant amplitude in this discussion. In practice, the modulation on internal supply rails (which is the main cause of LIM) is usually the sum of many digital events that have little correlation to one another. This means that the statistics of the displacement of the critical clock edge from its theoretical position usually approximate those of Gaussian noise. As is well known, the energy density of this type of random noise is constant, so that the energy per octave doubles in successive octaves. What is less well known is that, measured per octave, such a signal has components that increase in amplitude with increasing frequency; the power density spectrum (the kind displayed by FFT analyzers), however, uses constant-bandwidth bins, so the displayed spectrum of a Gaussian noise signal is flat. Of course, if the loop time-constant is set short enough, the flat spectrum will be tilted by the tracking effect described above.

Putting a meter, even a good "true RMS" meter, on such a wave form is likely to produce results that are less meaningful than might be expected. By varying the statistics of the wave form, or the bandwidth being averaged, almost any reading can be obtained. (How accurate is your meter at 20 KHz and at 20 Hz?) Analysis with a spectrum analyzer is de rigueur here.

Phase and frequency modulation are different names for the same process. Phase comparators with long loop constants measure this modulation process. Phase comparators with short loop constants measure the same process but slant the results. LIM is a form of random wide-band frequency modulation. You must measure the statistics to produce meaningful numbers.

**Updates are performed on Meitner and Museatex products at our Calgary facility. Email us at john@museatex.com or call (403) 284-0723 for more information.**