Tool tips on GPS latency and RTK "fixes."

Q: I have just started using RTK (Real-Time Kinematic) GPS. Why is it important to pay attention to latency? I understand that this is the lag between the time the GPS measurement is made and the time the result is shown on my display. But isn't it better to have a measurement and a position result, even if the latency is high?

A: Many people pay attention to latency only after getting "burned" by ignoring it. You are correct that latency is the time delay between the GPS signal measurements (made from every satellite used when a position is calculated) and the delivery of the result to the user. These measurements include the phase of the L1 and L2 carriers and the phase of the C/A and P/Y codes from each satellite. GPS systems are designed to make these measurements simultaneously at the reference station and rover receivers.

For RTK to work, the reference station measurements must be transferred over a link, usually radio. This involves packaging the data and sending it to the digital modem of the link, which transmits it in a predictable way (a transmission format) to the digital modem at the other end. The received data is then converted into a form that the processing algorithms running on the roving receiver can combine with its own GPS measurements. Even assuming the ambiguities have already been resolved, there is a further delay for the computations that determine the changes since the last epoch and convert the result into a coordinate in the system the user is working in. Often this result is shown as the difference between a desired position (if the surveyor is involved in a stakeout operation) and the current position. Most RTK systems typically update (or have epochs) at a rate of 1 Hz (one update per second) or faster.
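Each stage of the chain just described contributes to the total latency, so it can help to think of it as a latency budget. The following sketch uses entirely hypothetical stage timings (they are not vendor figures); the stage names simply mirror the steps above:

```python
# Hypothetical latency budget for an RTK correction chain.
# All stage timings below are illustrative assumptions, not vendor figures.
stages_s = {
    "reference measurement + packaging": 0.20,
    "radio link transmission":           0.60,
    "rover demodulation + decoding":     0.15,
    "baseline processing (post-fix)":    0.50,
    "coordinate conversion for display": 0.25,
}

latency_s = sum(stages_s.values())
print(f"total latency: {latency_s:.2f} s")  # total latency: 1.70 s
```

Shrinking any one stage (a faster radio protocol, quicker processing) reduces the total, which is why vendors attack latency at several points in the chain at once.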

As an example, let us say that the latency is 1.7 seconds. Then, if doing stakeout, suppose the rover is set up on the trial point and plumbed correctly at 10:01:35.3 a.m. The next GPS measurements are taken at 10:01:36, and the correct result is calculated and available for display at 10:01:37.7. If the display is also updated once a second, the correct result is actually displayed at 10:01:38. In this example, that is 2.7 seconds after the system was correctly positioned physically on the stakeout trial point. It is human nature to look at the screen of the rover as soon as the bubble on the antenna pole has been centered and, forgetting that there is a lag, to make a decision on where the next trial point should be (if any). This problem can be reduced by developing faster ways of transmitting the data from the reference station and computing the result, and by updating the screen more quickly once the result has been computed. Ideally, the latency should be shorter than the time between measurement epochs.
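The timing in that example can be reproduced with a small calculation. This sketch assumes, as the example does, that measurement epochs and display refreshes both fall on whole-second boundaries; the function name and its parameters are my own, for illustration:

```python
import math

def display_lag(plumb_t, epoch_period, latency, display_period):
    """Seconds between plumbing the pole and seeing the correct result."""
    # first measurement epoch at or after the moment the pole is plumbed
    next_epoch = math.ceil(plumb_t / epoch_period) * epoch_period
    # the correct result becomes available one latency later...
    result_ready = next_epoch + latency
    # ...but is only shown at the next display refresh after that
    shown_at = math.ceil(result_ready / display_period) * display_period
    return shown_at - plumb_t

# 10:01:35.3 a.m. expressed as seconds past 10:01:00
print(round(display_lag(35.3, 1.0, 1.7, 1.0), 3))  # 2.7, as in the example
```

Note that the perceived lag depends on where in the epoch cycle the pole happens to be plumbed: plumbing at exactly 10:01:36.0 would give a lag of 2.0 seconds with the same 1.7-second latency.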

Q: Why does it take so long when I use RTK to "get a fix," even though after I have it I can move and observe positions much faster, as long as I haven't lost satellites?

A: "Getting a fix" with RTK involves resolving ambiguities to determine the length of the base line from the reference station to the rover according to the measurements each receiver makes from the satellite signals received. It is not within the scope of this column to explain how RTK (or GPS for that matter) works in detail. But some of the concepts can be described here. When a GPS receiver is used autonomously (as by hikers), it uses what is known as the code to make determinations of the receiver's position using the distance from the receiver to each of the satellites at a particular moment in time. This is a technique well known to surveyors-it is determination of position by intersecting distances. However, this position accuracy is only on the order of 5 to 10 m. To make position determinations of surveying quality, the phase of the carrier waves must be measured. Imagine the wave from each satellite as being sinusoidal in nature. The waves are approximately 19 cm long for L1 and 24 cm for L2. Carrier wave measurements mean that the receiver determines which part of an individual sine wave has been received at the time of that measurement. Remember that these sine waves "pour" in at the frequency of the carrier, which in the case of L1 is more than 1.5 million every second. The typical receiver can measure the sine wave to within a few degrees (360º being the entire wave). If the receiver can measure the wave within 5º, then the measurement is to within 2.5 mm. But because the receivers' positions are only known using code phase to between 5 and 10 m, the unknown number of full wavelengths to the satellite must also be determined (i.e. resolving the ambiguities). While we can't get into the computational methods used to do that here, remember that the satellites are moving (and hopefully the receivers aren't during the time to take the fix). 
Just as with a simple two-distance intersection computed by a surveyor, it is possible to "observe" all the possible solutions from measurement to measurement and see that one solution (the correct one) keeps recurring, whereas the others (the incorrect ones) do not. This is highly simplified, but it is one of the reasons why more time is needed to get an RTK "fix." Note that the processes actually used to resolve ambiguities involve more than the "observing" method described here.
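The "recurring solution" idea can be illustrated with a toy model, with the caveat that real ambiguity resolution is far more sophisticated. Here every number is fabricated: the true integer ambiguity stays consistent with each epoch's measurements, while the false candidates drift as the satellite geometry changes, so intersecting the candidate sets across epochs isolates the correct one:

```python
TRUE_N = 104_213  # hypothetical true integer ambiguity (whole wavelengths)

def epoch_candidates(epoch):
    """Toy candidate set for one measurement epoch."""
    # False candidates shift as satellite geometry changes between epochs;
    # only the true integer is consistent with every epoch's measurements.
    false = {TRUE_N + d + epoch for d in (3, 7, -5, 11)}
    return {TRUE_N} | false

surviving = epoch_candidates(0)
for epoch in range(1, 5):
    surviving &= epoch_candidates(epoch)

print(surviving)  # only the recurring (correct) solution remains: {104213}
```

In the toy model a couple of epochs suffice; in practice the candidate sets are much larger and overlap from epoch to epoch, which is one reason the fix takes noticeably longer than subsequent position updates.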