Point of Beginning

GNSS Post Processing: A Primary Option

March 1, 2013
 A GNSS receiver operates as an RTK base station while simultaneously recording data for post-processing. The Trimble R10 GNSS receiver's internal memory can store 4GB of raw satellite data.

Widespread adoption of RTK led many people to predict the end of GNSS post processing. Surveyors' needs for precision, flexibility and productivity have kept it in the mainstream.

We owe a debt of gratitude to the early adopters of satellite surveying. In the late 1980s and early 1990s, only a handful of GPS satellites existed. Surveyors who wanted to use GPS endured hours of field observations that often occurred at night. The field work was followed by slow downloads, manual editing of screens filled with arcane numbers and interminable waits as slow computers and inefficient algorithms slogged through the massive datasets.

Tales abound describing long days of fieldwork followed by even longer nights at the computer. The payoff came with massive increases in productivity and precision compared to the optical methods of the day. But in order to reap the benefits of GPS technology, data processing was a painful and necessary fact of life. The industry was ripe for change, and the arrival of RTK was eagerly embraced.

By 2005, RTK techniques had radically changed surveying practices. Improvements in field hardware and software for positioning and communications combined with the ever-increasing availability of real-time networks (RTN) to make RTK easier and more reliable. For many applications, the need for static observation and post processing faded.

As RTK continued to grow, many observers predicted that post processing would soon be a memory. They were wrong. In spite of widespread use of real-time techniques, post processing continues to play a prominent role in GNSS surveying. With apologies to Mark Twain, reports of the death of post processing have been greatly exaggerated.

The continuing popularity of post processing comes from several directions. Surveyors who regularly process their GNSS data list benefits such as convenience and flexibility, precision, cost and control over how data are managed. And since post-processed GNSS points are often connected to continuously operating reference stations (CORS) or fixed monuments in a high-accuracy reference network (HARN), the resulting positions are tied to the geodetic reference frame.

According to Dave Doyle, who recently retired after serving as chief geodetic surveyor for the U.S. National Geodetic Survey (NGS), post processing fills an important need. “People who are well-trained in land surveying know that they need to validate the evidence they find, such as a pipe or monument, to confirm that the mark represents a boundary point,” Doyle says. “Post processing lets them validate the position of the point, especially at the geodetic level.”

A GNSS receiver collects static data on a cadastral marker. The geodetic antenna uses a ground plane to mitigate effects of multipath.

This validation enables a surveyor to deliver accurate georeferenced positions with little or no additional effort. But precision and georeferencing are not always the driving forces behind post processing. Economics and productivity often play the decisive role.

When surveyors decide which GNSS methods to use, they must consider several aspects of a project, including budget, schedule, accuracy and client requirements. Kinematic GNSS techniques can’t match the precision of static or fast-static methods. For many applications, though, that isn’t an issue, and surveyors are willing to trade a centimeter or two of precision for the speed and immediate results of RTK. The key lies in making an informed choice of techniques and in understanding how post processing contributes to productivity in both the field and the office.

When setting up for a large or long-term project, it’s commonplace to establish a precise reference frame for positioning in surveying, construction and ongoing operations. Static GNSS is the tool of choice for this work, and post processing is required to produce coordinates for the points.

Surveyors can place points where needed to supply control for subsequent surveying using both RTK and optical approaches. Considerations include safety, intervisibility, access, radio coverage and susceptibility to disturbance. These local networks are often built using short, internal baselines together with ties to more distant control points. This provides reliable, internally consistent control across the project.

Even in smaller or shorter-term projects, static GNSS with post processing delivers the same benefits in precision and georeferencing.

With the decision to use post processing in place, surveyors must develop a strategy for processing the data. There are three options: (1) processing data in house; (2) hiring a consultant; or (3) using an online service such as the NGS Online Positioning User Service (OPUS) or a post-processing service offered by an equipment manufacturer.

For in-house work, most surveyors use GNSS processing software provided by the manufacturers. These packages provide capabilities that vary from simple differential corrections to carrier-phase baseline solutions using long-duration observations with all available satellites and signals. The commercial packages also offer network adjustments, datum transformations and direct support for dozens of published coordinate systems around the world. And because commercial software is developed to support the manufacturer’s field equipment, downloads and file management are usually fast and smooth.
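
To make the datum-transformation step concrete, here is a minimal sketch of a seven-parameter (Helmert) similarity transformation of the kind such packages apply. The parameter values and the test point are purely illustrative; real transformations use published parameters from sources such as NGS or EPSG.

```python
import numpy as np

def helmert_7param(xyz, tx, ty, tz, rx, ry, rz, s_ppm):
    """Apply a seven-parameter (Helmert) similarity transformation.

    xyz      : (N, 3) array of ECEF coordinates in meters
    tx,ty,tz : translations in meters
    rx,ry,rz : rotations in arc-seconds (small-angle approximation)
    s_ppm    : scale change in parts per million
    """
    arcsec = np.pi / (180.0 * 3600.0)        # arc-seconds to radians
    rx, ry, rz = rx * arcsec, ry * arcsec, rz * arcsec
    scale = 1.0 + s_ppm * 1e-6
    # Small-angle rotation matrix (coordinate-frame convention)
    R = np.array([[ 1.0,  rz, -ry],
                  [-rz,  1.0,  rx],
                  [ ry, -rx,  1.0]])
    t = np.array([tx, ty, tz])
    return t + scale * (xyz @ R.T)

# Illustrative parameters and point only -- real datum shifts come from
# published sources, not these made-up numbers.
pt = np.array([[-1288398.0, -4721697.0, 4078625.0]])   # hypothetical ECEF point
print(helmert_7param(pt, 0.9956, -1.9013, -0.5215,
                     0.0259, 0.0094, 0.0116, 0.00062))
```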

The choice of post processing affects fieldwork during the control portion of a project. It also plays an important role in the later stages, when work has shifted to kinematic and optical methods. While OPUS and other online services provide significant benefits, surveyors must understand their capabilities and limitations compared with commercial packages. One example is the occupation time needed for a given point. Because OPUS requires a minimum of two hours of observation time for each point, it’s often faster to do shorter occupations and process them in house; commercial packages produce excellent results from occupations as short as 15 minutes.
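
A rough sketch of the field-time arithmetic behind that trade-off, using the occupation lengths cited above. The 20-minute travel-and-setup allowance per point is a placeholder assumption, not a standard:

```python
# Rough field-time comparison for the occupation lengths cited above.
points = 6                 # control points to observe
setup_min = 20             # assumed travel + setup per point (hypothetical)

opus_total = points * (120 + setup_min)        # 2-hour OPUS occupations
faststatic_total = points * (15 + setup_min)   # 15-minute fast-static occupations

print(f"OPUS:        {opus_total / 60:.1f} receiver-hours")
print(f"Fast-static: {faststatic_total / 60:.1f} receiver-hours")
# OPUS: 14.0 receiver-hours; fast-static: 3.5 receiver-hours
```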

A second benefit of in-house processing comes from the ability to measure baselines within a project and use them in a network adjustment. Points established using OPUS are computed independently and aren’t directly measured against other points in the project. While the long baselines to surrounding CORS carry good accuracy, the uncertainty at each point and lack of cross ties can cause problems when tying two points together.
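
To illustrate what a network adjustment does with those measured baselines, here is a toy least-squares adjustment: baseline vectors observed between stations are combined, one station is held fixed, and the coordinates of the free stations fall out of the solution. The station names, vectors and misclosures are invented, and production software also weights observations by their covariances, which this sketch omits.

```python
import numpy as np

# Toy least-squares network adjustment: baseline vectors (dx, dy, dz)
# observed between stations; station A is held fixed. All numbers invented.
fixed = {"A": np.array([0.0, 0.0, 0.0])}
free = ["B", "C"]                        # unknown stations
obs = [                                  # (from, to, observed vector in meters)
    ("A", "B", np.array([100.02, 200.01,  49.99])),
    ("A", "C", np.array([300.00,  99.98, 150.02])),
    ("B", "C", np.array([199.99, -100.01, 100.00])),
]

idx = {name: 3 * i for i, name in enumerate(free)}
A = np.zeros((3 * len(obs), 3 * len(free)))
L = np.zeros(3 * len(obs))

for k, (i, j, vec) in enumerate(obs):
    r = slice(3 * k, 3 * k + 3)
    L[r] = vec                           # observation: vec = X_j - X_i
    if i in fixed:
        L[r] += fixed[i]                 # move known coordinates to the RHS
    else:
        A[r, idx[i]:idx[i] + 3] -= np.eye(3)
    if j in fixed:
        L[r] -= fixed[j]
    else:
        A[r, idx[j]:idx[j] + 3] += np.eye(3)

x, *_ = np.linalg.lstsq(A, L, rcond=None)
for name in free:
    print(name, x[idx[name]:idx[name] + 3])
```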

To get the best of both approaches, many surveyors measure internal baselines with static observations and in-house post processing, then combine them with longer occupations submitted to OPUS to tie the network to the geodetic reference frame. As an example, consider a project with a half-dozen control points that will support design and construction. With two receivers, it would take a full day to collect enough data to process all the points with OPUS, and there would be no direct measurements between the points. But by using fast-static measurements and post processing, it’s possible to collect enough data to produce multiple internal checks and still have strong ties to the surrounding CORS.
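
A back-of-envelope sketch of the session arithmetic in that example. The rules of thumb here (each session occupies `r` points at once and yields r − 1 independent baselines) are illustrative assumptions, not a field standard:

```python
import math

def rounds_to_occupy(points, receivers):
    """Occupation rounds needed so every point is visited at least once."""
    return math.ceil(points / receivers)

points, receivers = 6, 2
print(rounds_to_occupy(points, receivers))   # 3 rounds of simultaneous occupation
print(points * (points - 1) // 2)            # 15 possible cross-tie baselines
```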

Post processing continues to play a role once a project is beyond the control stages. In today’s world of RTK and RTN, the loss of a data communications link can bring survey work to a standstill. Personnel and equipment can stand idle while the survey crew rushes to establish alternative methods, which often entail setting a new reference station or radio repeater. By having the ability to post process data collected using kinematic methods, applications such as topographic surveys, as-builts and inspections can continue without interruption.

Devin Kowbuz, PLS, a surveyor in the Denver office of CH2M Hill, describes his company’s approach: “Once everything has been processed and adjusted, we then use primarily RTK techniques,” he says. “For example, a pipeline project may run 300 miles (500 km) and we use static techniques for control. Then we commonly switch over to RTK. But we run RTK with a PPK (post-processed kinematic) infill to cover loss of radio link. That way we don’t have gaps in data or need to go back and set additional control points.”

A plan and time-based display of post-processed GNSS baselines. Operators can select combinations of time slots and satellites to be used in baseline processing.

Kowbuz, who uses Trimble GNSS hardware and office software, says it’s common to switch on RTK infill when his team is in an area where communications may be dicey. By knowing that the group can post process to get points that RTK missed due to a communications dropout, the crews can work without worrying about RTK solutions. And the infill saves the need to set a new base station just to collect a few points that are out of radio contact with the original base.

In the early days of GPS, limited satellite availability and arcane data processing procedures made GNSS surveying far more challenging than it is today. Improvements in the last 20 years have been enormous. Certainly, the dozens of new satellites and enormous strides in field hardware and procedures have helped to move GNSS into the mainstream. But the advances in post processing, analysis and integrated technologies in office software have played a major role as well.

Modern GNSS processing software must serve the needs of a wide cross section of applications. Some users want the software to do as much as possible, while others prefer to take more control of the processing. It’s not unusual for a crew to operate a receiver in static mode all day, and then select specific time intervals for processing. With this approach, it’s easy to develop separate intervals a few hours apart, which results in multiple, independent sessions for the given point. Similarly, users can identify and remove data from low-elevation satellites that might introduce unwanted noise into the baseline solutions.
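
As a sketch of that kind of session editing, the following filters a day’s observations down to a chosen time window and elevation mask. The record layout is invented for illustration; real data would come from a RINEX reader.

```python
from dataclasses import dataclass
from datetime import datetime, time

# Hypothetical observation record -- real data would come from a RINEX reader.
@dataclass
class Obs:
    epoch: datetime        # observation time (UTC)
    sat: str               # satellite ID, e.g. "G12"
    elevation: float       # elevation angle in degrees

def select(obs_list, start, end, elev_mask_deg=15.0):
    """Keep only epochs inside the session window and above the elevation mask."""
    return [o for o in obs_list
            if start <= o.epoch.time() <= end and o.elevation >= elev_mask_deg]

# Two independent sessions carved from one all-day file (day_obs is hypothetical):
# session_1 = select(day_obs, time(9, 0), time(10, 0))
# session_2 = select(day_obs, time(14, 0), time(15, 0))
```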

Modern algorithms are much more efficient and can produce results with ever-decreasing observation times. And when the new algorithms are combined with a few cycles of Moore’s Law (the popular rule of thumb that computing power doubles roughly every 18 months), the results are impressive. A baseline that would have required two hours of observation and an hour to process in the 1990s can now be solved in minutes.
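
The arithmetic behind that claim, taking the 18-month doubling at face value:

```python
# Implied compute growth under the 18-month doubling quoted above.
years = 2013 - 1995            # from mid-1990s processing to the article's date
doublings = years / 1.5
print(f"~{2 ** doublings:,.0f}x raw compute growth over {years} years")
# ~4,096x -- before counting any algorithmic improvements
```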

The functionality of office software continues to improve beyond the baseline processing as well. For example, Trimble Business Center provides a function specifically for surveyors who want to tie their work to a CORS. The software can automatically locate and display the nearest CORS sites. The operator can select the desired control, which is then downloaded and used in the processing. The system is designed to use the web when it can, but is not reliant on a connection. Baseline results can be automatically passed to a network adjustment module for analysis and computation of final coordinates. The output includes vector information and error estimates needed by the adjustment routines.
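
A minimal sketch of how software might rank CORS sites by distance from a project, using a spherical-Earth great-circle formula. The station list is hypothetical, and a real package works against the live NGS database rather than a hard-coded list:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers (spherical-Earth approximation)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

# Hypothetical CORS list: (site ID, latitude, longitude)
cors = [("DSRC", 39.74, -105.00), ("P041", 39.95, -105.19), ("TMGO", 40.13, -105.23)]

def nearest(lat, lon, stations, n=3):
    """Return the n stations closest to the given position."""
    return sorted(stations, key=lambda s: haversine_km(lat, lon, s[1], s[2]))[:n]

print(nearest(39.90, -105.10, cors))
```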

Even as office processing gets faster and easier, improvements in the field seem to reduce the need. It’s common for projects to mix the three approaches of in-house post processing, online services and RTK.

For example, consider how Flatirons Surveying in Boulder, Colo., combines the three methods. According to Flatirons surveyor Dave Wilson, PLS, the company uses post processing to establish control on the bigger projects that go on for a long time. “Over time, a lot of data is based on that control, so you need to have confidence and backup that it is correct.”

Wilson says that much of his work takes place in areas covered by RTNs, and that RTK is becoming the dominant technique. He routinely checks his RTN results against fixed control stations, and the results often agree to within a centimeter.

As new constellations and the L5 frequency come online, many people expect the need for post processing to erode further. That prospect raises a concern about training: surveyors who learn to process their static GNSS data build a foundation in 3D positioning and the sources of GNSS error, and that foundation makes them more proficient with RTK as well as with post-processed techniques.

With all the advances in the field, are the days of post processing coming to an end? In a word, no. Surveyors clearly understand the value that post processing provides to them and their clients. Kowbuz says it well: “Post processing is an essential part of a professional skill set that a surveyor should provide. We have developed a system that works quite efficiently. We use the various equipment and techniques from static to post-processed kinematic to RTK and RTN. We employ all of it, using whatever suits our clients efficiently and accurately for a given day and need.”

Certainly, the growth of RTK and RTN will continue. We can expect the share of points computed in real time, versus those computed by post processing, to grow. Yet post processing remains one of the most precise and flexible tools a surveyor has for control surveys. Add its ability to handle kinematic data and difficult field conditions, and it’s clear that post processing won’t fade away anytime soon.


OPUS: The Elephant in the Room

The Online Positioning User Service (OPUS) is a GNSS post-processing system developed and operated by the U.S. NGS. In operation for more than 10 years, OPUS is a free service that allows users to upload data from a single GNSS receiver.

Based on the rough position of the receiver (contained in the RINEX-format data files that OPUS requires), OPUS scans the NGS database to identify the five CORS closest to the receiver. It computes a vector from each of the five, selects the three best solutions and combines them to produce the position of the point, which it emails to the user. Similar services exist in Australia and Canada. And because OPUS can access IGS (International GNSS Service) stations as well as the U.S. CORS, it is also used by surveyors in developing countries.
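
Schematically, the selection logic described above might look like the following sketch. This is a paraphrase of the description, not NGS code; the site IDs, quality metric and coordinates are invented:

```python
import statistics

# Each candidate is (CORS ID, quality metric, computed coordinates);
# a lower metric means a better fit (e.g., RMS of the baseline solution).
candidates = [
    ("SITE1", 0.012, (100.003, 200.001, 50.002)),
    ("SITE2", 0.009, (100.001, 200.004, 50.001)),
    ("SITE3", 0.031, (100.010, 199.995, 50.008)),
    ("SITE4", 0.011, (100.002, 200.002, 50.003)),
    ("SITE5", 0.015, (100.004, 200.000, 50.002)),
]

best3 = sorted(candidates, key=lambda c: c[1])[:3]          # keep three best fits
final = tuple(statistics.mean(c[2][i] for c in best3) for i in range(3))
print([c[0] for c in best3], final)
```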

OPUS originated in response to the needs of NGS crews using GNSS to establish new control points tied to the National Spatial Reference System (NSRS). At the time, NGS carried the lion’s share of work to set new control, and OPUS made it much easier and faster to connect new points to the national reference frame.

When OPUS became available for public use in 2002, the dynamic of accessing the reference frame changed rapidly. Connecting to CORS became easy, and surveyors could create new points without occupying distant stations. OPUS soon became a common tool, even for surveyors who do in-house processing.

According to Neil Weston, chief of the NGS Spatial Reference Division, OPUS now processes roughly 40,000 data sets each month and has served about 65,000 unique users. In 2011, usage increased by 25 percent from 2010.

Dave Doyle, retired chief geodetic surveyor for the NGS, says that OPUS is complementary to commercial, in-house packages for post processing.

“NGS is very keen on understanding its role and not competing with the private sector,” he says. “Users can perform their own work to develop and refine baselines and networks, and then use OPUS to connect to the national system and validate the geodetic accuracy of the project.”

By making it easier to produce good results, OPUS helped to accelerate the adoption of GPS and GNSS. It’s made a significant contribution to the growth of the profession.

-- C.G.