Weather and Climate Inventory, Klamath Network, National Park Service, 2007
1.4. Design of Climate-Monitoring Programs
Determining the purposes for collecting measurements in a given weather/climate monitoring program will guide the process of identifying weather/climate stations suitable for the monitoring program. The context for making these decisions is provided in Chapter 2 where background on the KLMN climate is presented. However, this process is only one step in evaluating and designing a climate-monitoring program. The following steps must also be included:
• Define park and network-specific monitoring needs and objectives.
• Identify locations and data repositories of existing and historic stations.
• Acquire existing data when necessary or practical.
• Evaluate the quality of existing data.
• Evaluate the adequacy of coverage of existing stations.
• Develop a protocol for monitoring the weather and climate, including the following:
o Standardized summaries and reports of weather/climate data.
o Data management (quality assurance and quality control, archiving, data access, etc.).
• Develop and implement a plan for installing or modifying stations, as necessary.
Throughout the design process, there are various factors that require consideration in evaluating weather and climate measurements. Many of these factors have been summarized by Dr. Tom Karl, director of the NOAA National Climatic Data Center (NCDC), and widely distributed as the “Ten Principles for Climate Monitoring” (Karl et al. 1996; NRC 2001). These principles are presented in Appendix B, and the guidelines are embodied in many of the comments made throughout this report. The most critical factors are presented here. In addition, an overview of requirements necessary to operate a climate network is provided in Appendix C, with further discussion in Appendix D.
1.4.1. Need for Consistency
A principal goal in climate monitoring is to detect and characterize slow and sudden changes in climate through time. This is of less concern for day-to-day weather changes but of paramount importance for climate variability and change. There are many ways in which changes in measurement techniques, changes in instruments or their exposures, or seemingly innocuous changes in site characteristics can lead to apparent changes in climate. Safeguards must be in place to avoid these false sources of temporal "climate" variability if we are to draw correct inferences about climate behavior over time from archived measurements.
For climate monitoring, consistency through time is vital; it is at least as important as absolute accuracy. Sensors record only what is occurring at the sensor, which is all they can detect. It is the responsibility of station or station-network managers to ensure that observations are representative of the spatial and temporal climate scales that we wish to record.
1.4.2. Metadata
Changes in instruments, site characteristics, and observing methodologies can lead to apparent changes in climate through time. It is therefore vital to document all factors that can bear on the interpretation of climate measurements and to update the information repeatedly through time. This information (“metadata,” data about data) has its own history and set of quality-control issues that parallel those of the actual data. There is no single standard for the content of climate metadata, but a simple rule suffices:
• Observers should record all information that could be needed in the future to interpret observations correctly without benefit of the observers’ personal recollections.
Such documentation includes notes, drawings, site forms, and photographs, which can be of inestimable value if prepared properly. That stated, it is not always clear to the metadata provider what will prove important in the future. It is almost impossible to over-document a station. Station documentation is greatly underappreciated and is seldom thorough enough, especially for climate purposes. Insufficient attention to this issue often lowers the present and, especially, the future value of otherwise useful data.
The convention followed throughout climatology is to refer to metadata as information about the measurement process, station circumstances, and data. The term “data” is reserved solely for the actual weather and climate records obtained from sensors.
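The rule above can be made concrete with a structured metadata record. The following minimal sketch is illustrative only: the `StationMetadata` class and its field names are hypothetical, not drawn from any NPS or NCDC schema, and a real record would carry far more detail (sensor heights, exposure notes, site drawings, photographs).

```python
from dataclasses import dataclass, field

# Hypothetical station-metadata record; the class and field names are
# illustrative, not taken from any operational network's schema.
@dataclass
class StationMetadata:
    station_id: str
    latitude: float                # decimal degrees
    longitude: float               # decimal degrees
    elevation_m: float             # meters above sea level
    sensors: dict = field(default_factory=dict)   # element -> model/serial
    history: list = field(default_factory=list)   # dated notes on changes

    def log_change(self, date: str, note: str) -> None:
        """Append a dated note (site visit, sensor swap, vegetation change)."""
        self.history.append((date, note))

# Example usage with invented values:
meta = StationMetadata("KLMN-01", 41.89, -122.08, 1250.0)
meta.sensors["air_temperature"] = "ExampleCorp T-100, s/n 4471"
meta.log_change("2007-06-15", "Replaced temperature sensor; recalibrated.")
```

Keeping a dated history list alongside the static fields mirrors the rule above: every change that could bear on the interpretation of the measurements gets its own entry, so future users need not rely on anyone's personal recollection.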
1.4.3. Maintenance
Inattention to maintenance is the greatest source of failure in weather/climate stations and networks. Problems begin to occur soon after sites are deployed. A regular visit schedule must be implemented, where sites, settings (e.g., vegetation), sensors, communications, and data flow are checked routinely (once or twice a year at a minimum) and updated as necessary. Parts must be changed out for periodic recalibration or replacement. With adequate maintenance, the entire instrument suite should be replaced or completely refurbished about once every five to seven years.
Simple preventive maintenance is effective but requires much planning and skilled technical staff. Changes in technology and products require retraining and continual re-education. Travel, logistics, scheduling, and seasonal access restrictions consume major amounts of time and budget but are absolutely necessary. Without such attention, data gradually become less credible and then often are misused or not used at all.
1.4.4. Automated versus Manual Stations
Historic stations often have depended on manual observations and many continue to operate in this mode. Manual observations frequently produce excellent data sets. Sensors and data are simple and intuitive, well tested, and relatively cheap. Manual stations have much to offer in certain circumstances and can be a source of both primary and backup data. However, methodical consistency for manual measurements is a constant challenge, especially with a mobile work force. Operating manual stations takes time and needs to be done on a regular schedule, though sometimes the routine is welcome.
Nearly all newer stations are automated. Automated stations provide better time resolution, increased (though imperfect) reliability, greater capacity for data storage, and improved access to large amounts of data. The purchase cost for automated stations is higher than for manual stations. A common expectation, and a serious misconception, is that an automated station can be deployed and left to operate on its own. In reality, automation does not eliminate the need for people; rather, it changes the type of personnel needed. Skilled technical personnel must be readily available, especially if live communications exist and data gaps are not wanted. Site visits are needed at least annually, and spare parts must be maintained. Typical costs for sensors and maintenance at the major national networks are $1,500–2,500 per station per year, though these costs can vary greatly depending on the type of automated site.
1.4.5. Communications
With manual stations, the observer is responsible for recording and transmitting station data. Data from automated stations, however, can be transmitted quickly for access by research and operations personnel, which is a highly preferable situation. Compared with manual stations, automated stations generally require additional communications equipment, more power, higher transmission costs, attention to sources of disruption or garbling, and backup procedures (e.g., manual downloads from data loggers).
Automated stations are capable of functioning normally without communications and can retain many months of data. At such sites, however, alerts about station problems are not possible, large gaps can accrue when stations quit unnoticed, and the constituencies needed to support such stations are smaller and less vocal. Two-way communications permit full recovery from disruptions, remote reprogramming of data loggers, and better opportunities for diagnostics and troubleshooting; in virtually all cases, two-way communications are much preferred to other communication methods. However, two-way communications require consideration of cost, signal access, transmission rates, interference, and methods for keeping sensor and communication power loops separate, and they are frequently impossible (no service), impractical, expensive, or power consumptive. Two-way methods (cellular, land line, radio, Internet) require smaller up-front costs than other methods of communication and have variable recurrent costs, starting at zero. Satellite links work nearly everywhere (except where blocked by trees or cliffs) and are quite reliable, but they are one-way and relatively slow, allow no retransmissions, and carry high up-front costs ($3,000–4,000) with no recurrent costs.
Communications technology is changing constantly and requires vigilant attention by maintenance personnel.
1.4.6. Quality Assurance and Quality Control
Quality control and quality assurance are issues at every step in the sequence of sensing, communication, storage, retrieval, and display of environmental data. Quality assurance is an umbrella concept that covers all data collection and processing (start to finish) and ensures that credible information is available to the end user. Quality control has a more limited scope and is defined by the International Organization for Standardization (ISO) as "the operational techniques and activities that are used to satisfy quality requirements." The central problem can be better appreciated if we approach quality control in the following way.
• Quality control is the evaluation, assessment, and rehabilitation of imperfect data by utilizing other imperfect data.
The quality of the data can only decrease with time once the observation is made. The best and most effective quality control therefore consists of making high-quality measurements from the start and then successfully transmitting those measurements to an ingest process and storage site. Once the data are received from a monitoring station, a series of checks of increasing complexity can be applied, ranging from single-element checks (self-consistency) to multiple-element checks (inter-sensor consistency) to multiple-station/single-element checks (inter-station consistency). Suitable ancillary data (battery voltages, data ranges for all measurements, etc.) can prove extremely useful in diagnosing problems.
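The tiered checks just described can be sketched as follows. This is a sketch only: the element (air temperature in degrees Celsius), the thresholds, and the neighbor-comparison method are assumptions for illustration, and operational procedures would be tailored to each station's circumstances.

```python
# Illustrative tiered QC checks for hourly air temperature (deg C).
# All thresholds are invented for this sketch, not operational values.

def range_check(value, lo=-40.0, hi=50.0):
    """Single-element check: value lies in a physically plausible range."""
    return lo <= value <= hi

def step_check(prev, curr, max_step=10.0):
    """Self-consistency check: hour-to-hour change is plausible."""
    return abs(curr - prev) <= max_step

def neighbor_check(value, neighbor_values, max_diff=15.0):
    """Inter-station check: value agrees with the mean of nearby stations."""
    mean = sum(neighbor_values) / len(neighbor_values)
    return abs(value - mean) <= max_diff

# Apply the tiers to one synthetic observation:
obs = 21.3
flags = {
    "range": range_check(obs),
    "step": step_check(19.8, obs),
    "neighbor": neighbor_check(obs, [20.5, 22.1, 21.0]),
}
# All three checks pass for this observation.
```

An observation that fails a later tier while passing the earlier ones (e.g., plausible in range but far from its neighbors) is exactly the case where ancillary data such as battery voltages help diagnose whether the sensor or the site is at fault.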
There is rarely a single technique in quality control procedures that will work satisfactorily for all situations. Quality-control procedures must be tailored to individual station circumstances, data access and storage methods, and climate regimes.
The fundamental issue in quality control centers on the tradeoff between falsely rejecting good data (Type I error) and falsely accepting bad data (Type II error). We cannot reduce the incidence of one type of error without increasing the incidence of the other type. In weather and climate data assessments, since good data are absolutely crucial for interpreting climate records properly, Type I errors are deemed far less desirable than Type II errors.
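A small synthetic example makes the tradeoff concrete: tightening a range check's acceptance bounds rejects more valid extreme readings (Type I errors), while loosening the bounds accepts more sensor faults (Type II errors). All values and bounds below are invented for illustration.

```python
# Synthetic demonstration of the Type I / Type II tradeoff in a range check.
# "good" values are valid observations (including extremes); "bad" values
# are sensor faults. Bounds are illustrative only.

def count_errors(good_obs, bad_obs, lo, hi):
    type_i = sum(1 for v in good_obs if not (lo <= v <= hi))  # good data rejected
    type_ii = sum(1 for v in bad_obs if lo <= v <= hi)        # bad data accepted
    return type_i, type_ii

good = [-35.0, -5.0, 12.0, 30.0, 48.0]   # valid, including two extremes
bad = [-60.0, 55.0, 199.9]               # sensor faults

tight = count_errors(good, bad, lo=-30.0, hi=45.0)   # (2, 0): rejects both extremes
loose = count_errors(good, bad, lo=-50.0, hi=60.0)   # (0, 1): accepts one fault
```

The tight bounds catch every fault but throw away the two valid extremes, which are often the most scientifically important observations; the loose bounds keep all good data at the cost of one accepted fault. This is why, as noted above, Type I errors are deemed less acceptable than Type II errors in climate applications.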
Not all observations are equal in importance. Quality-control procedures are likely to have the greatest difficulty evaluating the most extreme observations, where independent information usually must be sought and incorporated. Quality-control procedures involving more than one station usually involve a great deal of infrastructure with its own (imperfect) error-detection methods, which must be in place before a single value can be evaluated.
1.4.7. Standards
Although there is near-universal recognition of the value in systematic weather and climate measurements, these measurements will have little value unless they conform to accepted standards. There is not a single source for standards for collecting weather and climate data nor a single standard that meets all needs. Measurement standards have been developed by the World Meteorological Organization (WMO 1983; 2005), the American Association of State Climatologists (AASC 1985), the U.S. Environmental Protection Agency (EPA 1987), Finklin and Fischer (1990), the RAWS program (Bureau of Land Management [BLM] 1997), and the National Wildfire Coordinating Group (2004). Variations to these measurement standards also have been offered by instrument makers (e.g., Tanner 1990).
1.4.8. Who Makes the Measurements?
The lands under NPS stewardship provide many excellent locations for climate monitoring by the NPS or its collaborators. These lands are largely protected from human development and other land changes that can impact observed climate records. Most park units historically have observed weather/climate elements as part of their overall mission. Many of these measurements come from station networks managed by other agencies, with observations taken or overseen in some cases by NPS personnel and in others by collaborators from those agencies. National Park Service units that are small, lack sufficient resources, or lack sites with adequate exposure may benefit from using weather/climate measurements collected at nearby stations.