Weather and Climate Inventory, Klamath Network, National Park Service, 2007
Appendix D. General design considerations for weather/climate-monitoring programs
The process of designing a climate-monitoring program benefits from anticipating the design and protocol issues discussed here. Much of this material has been excerpted from a report addressing the Channel Islands National Park (Redmond and McCurdy 2005), which includes an example illustrating how these factors can be applied to a specific setting. Many national park units possess some climate or meteorology feature that sets them apart from more familiar or “standard” settings.
D.1. Introduction
Several criteria must be weighed in deciding whether to deploy new stations and where those stations should be sited.
- Where are existing stations located?
- Where have data been gathered in the past (discontinued locations)?
- Where would a new station fill a knowledge gap about basic, long-term climatic averages for an area of interest?
- Where would a new station fill a knowledge gap about how climate behaves over time?
- As a special case for behavior over time, what locations might be expected to show a more sensitive response to climate change?
- How do answers to the preceding questions depend on the climate element? Are answers the same for precipitation, temperature, wind, snowfall, humidity, etc.?
- What role should manual measurements play? How should manual measurements interface with automated measurements?
- Are there special technical or management issues, either present or anticipated in the next 5–15 years, requiring added climate information?
- What unique information is provided in addition to information from existing sites? “Redundancy is bad.”
- What nearby information is available to estimate missing observations because observing systems always experience gaps and lose data? “Redundancy is good.”
- How would logistics and maintenance affect these decisions?
In relation to the preceding questions, there are several topics that should be considered. The following topics are not listed in a particular order.
D.1.1. Network Purpose
Humans seem to have an almost reflexive need to measure temperature and precipitation, along with other climate elements. The reasons span a broad range, from utilitarian to curiosity driven. Although there are well-known recurrent patterns of need and data use, new uses are always appearing, and the number of uses runs into the thousands. Attempts have been made to categorize such uses (see NRC 1998; NRC 2001). Because climate measurements are accumulated over a long time, they should be treated as multi-purpose and should be undertaken in a manner that serves the widest possible range of applications. Some applications remain constant, while others rise and fall in importance. An insistent issue today may subside, while the next pressing issue of tomorrow may barely be anticipated. The notion that humans might affect the climate of the entire Earth was nearly unimaginable when the national USDA (later NOAA) cooperative weather network began in the late 1800s. Abundant experience has shown, however, that there will always be a demand for a historical record of climate measurements and their properties. Experience also shows that there is an expectation that climate measurements will be taken and made available to the general public.
An exhaustive list of uses for data would fill many pages and still be incomplete. In broad terms, however, there are needs to document environmental conditions that disrupt or otherwise affect park operations (e.g., storms and droughts). Design and construction standards are determined by climatological event frequencies that exceed certain thresholds. Climate is a determinant that sometimes attracts and sometimes discourages visitors. Climate may play a large part in the park experience (e.g., Death Valley and heat are nearly synonymous). Some park units are large enough to encompass spatial or elevation diversity in climate and the sequence of events can vary considerably inside or close to park boundaries. That is, temporal trends and statistics may not be the same everywhere, and this spatial structure should be sampled. The granularity of this structure depends on the presence of topography or large climate gradients or both, such as that found along the U.S. West Coast in summer with the rapid transition from the marine layer to the hot interior.
Plant and animal communities and entire ecosystems react to every nuance in the physical environment. No aspect of weather and climate goes undetected in the natural world. Wilson (1998) proposed “an informal rule of biological evolution” that applies here: “If an organic sensor can be imagined that is capable of detecting any particular environmental signal, a species exists somewhere that possesses this sensor.” Every weather and climate event, whether dull or extraordinary to humans, matters to some organism. Dramatic events and creeping incremental change both have consequences to living systems. Extreme events or disturbances can “reset the clock” or “shake up the system” and lead to reverberations that last for years to centuries or longer. Slow change can carry complex nonlinear systems (e.g., any living assemblage) into states where chaotic transitions and new behavior occur. These changes are seldom predictable, typically are observed after the fact, and understood only in retrospect. Climate changes may not be exciting, but as a well-known atmospheric scientist, Mike Wallace, from the University of Washington once noted, “subtle does not mean unimportant.”
Thus, individuals who observe the climate should be able to record observations accurately and depict both rapid and slow changes. In particular, an array of artificial influences easily can confound detection of slow changes. The record as provided can contain both real climate variability (that took place in the atmosphere) and fake climate variability (that arose directly from the way atmospheric changes were observed and recorded). As an example, trees growing near a climate station with an excellent anemometer will make it appear that the wind gradually slowed down over many years. Great care must be taken to protect against sources of fake climate variability on the longer-time scales of years to decades. Processes leading to the observed climate are not stationary; rather these processes draw from probability distributions that vary with time. For this reason, climatic time series do not exhibit statistical stationarity. The implications are manifold. There are no true climatic “normals” to which climate inevitably must return. Rather, there are broad ranges of climatic conditions. Climate does not demonstrate exact repetition but instead continual fluctuation and sometimes approximate repetition. In addition, there is always new behavior waiting to occur. Consequently, the business of climate monitoring is never finished, and there is no point where we can state confidently that “enough” is known.
D.1.2. Robustness
The most frequent cause for loss of weather data is the weather itself, the very thing we wish to record. The design of climate and weather observing programs should consider the meteorological equivalent of “peaking power” employed by utilities. Because environmental disturbances have significant effects on ecologic systems, sensors, data loggers, and communications networks should be able to function during the most severe conditions that realistically can be anticipated over the next 50–100 years. Systems designed in this manner are less likely to fail under more ordinary conditions, as well as more likely to transmit continuous, quality data for both tranquil and active periods.
D.1.3. Weather versus Climate
For “weather” measurements, which pertain to what is happening approximately here and now, small moves and changes in exposure are not as critical. For “climate” measurements, where values from different points in time are compared, siting and exposure are critical factors, and it is vitally important that the observing circumstances remain essentially unchanged over the duration of the station record.
Station moves can affect different elements to differing degrees. Even small moves of several meters, especially vertically, can affect temperature records. Hills and knolls act differently from the bottoms of small swales, pockets, or drainage channels (Whiteman 2000; Geiger et al. 2003). Precipitation is probably less subject to change with moves of 50–100 m than other elements (that is, precipitation has less intrinsic variation in small spaces) except if wind flow over the gauge is affected.
D.1.4. Physical Setting
Siting and exposure, and their continuity and consistency through time, significantly influence the climate records produced by a station. These two terms have overlapping connotations. We use “siting” in a more general sense, reserving “exposure” for the particular circumstances affecting the ability of an instrument to record measurements that are representative of the desired spatial or temporal scale.
D.1.5. Measurement Intervals
Climatic processes occur continuously in time, but our measurement systems usually record in discrete chunks of time: for example, seconds, hours, or days. These measurements often are referred to as “systematic” measurements. Interval averages can mask brief periods of highly intense activity. Alternatively, some systems record “events” when a certain threshold of activity is exceeded (examples: another millimeter of precipitation has fallen, another kilometer of wind has moved past, the temperature has changed by a degree, a gust higher than 9.9 m/s has been measured). When this occurs, measurements from all sensors are reported. These measurements are known as “breakpoint” data. In relatively unchanging conditions (long calm periods or rainless weeks, for example), event recorders should send a signal that they are still “alive and well.” If systematic recorders are programmed to note and periodically report the highest, lowest, and mean value within each time interval, the likelihood is reduced that interesting behavior will be glossed over or lost. With the capacity of modern data loggers, it is recommended to record and report extremes within the basic time increment (e.g., hourly or 10 minutes). This approach also assists quality-control procedures.
There is usually a trade-off between data volume and time increment, and most automated systems now are set to record approximately hourly. A number of field stations maintained by WRCC are programmed to record in 5- or 10-minute increments, which readily serve to construct an hourly value. However, this approach produces 6–12 times as much data as hourly data. These systems typically do not record details of events at sub-interval time scales, but they easily can record peak values, or counts of threshold exceedance, within the time intervals.
Thus, for each time interval at an automated station, we recommend that several kinds of information—mean or sum, extreme maximum and minimum, and sometimes standard deviation—be recorded. These measurements are useful for quality control and other purposes. Modern data loggers and office computers have quite high capacity. Diagnostic information indicating the state of solar chargers or battery voltages and their extremes is of great value. This topic will be discussed in greater detail in a succeeding section.
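As a minimal illustration of this recommendation, the sketch below (Python with pandas; the 10-minute samples are synthetic and all names are hypothetical) reduces sub-hourly logger samples to hourly summaries that retain the mean, extremes, and standard deviation rather than the mean alone:

```python
import numpy as np
import pandas as pd

# Synthetic 10-minute temperature samples for one day (values illustrative).
rng = np.random.default_rng(0)
idx = pd.date_range("2007-07-01 00:10", periods=24 * 6, freq="10min")
temp = pd.Series(20 + 5 * np.sin(2 * np.pi * idx.hour / 24)
                 + rng.normal(0, 0.3, len(idx)), index=idx, name="temp_c")

# Label each summary with the ENDING time of its hour (see section D.1.12)
# and keep the extremes and standard deviation alongside the mean.
hourly = temp.resample("1h", label="right", closed="right").agg(
    ["mean", "max", "min", "std"]
)
print(hourly.head())
```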
Automation also has made possible adaptive or intelligent monitoring techniques, in which systems vary the recording rate when the software detects behavior of interest. Subinterval behavior of interest can occasionally be masked (e.g., a 5-minute extreme downpour with high erosive capability hidden within an innocuous hourly total). Most users prefer measurements that are systematic in time because they are much easier to summarize and manipulate.
For breakpoint data produced by event reporters, there also is a need to periodically send a signal that the station is still functioning, even though there is nothing more to report. “No report” does not necessarily mean “no data,” and it is important to distinguish between the fact that an observation was recorded and the content of that observation (e.g., an observation of “0.00” is not the same as “no observation”).
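This distinction is easy to preserve in digital records by carrying an explicit missing-value marker rather than zero; a minimal sketch (Python with pandas; the series values are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hourly precipitation: 0.0 means "observed, nothing fell";
# NaN means "no report was received." The two must never be conflated.
precip = pd.Series([0.0, 0.0, 1.2, np.nan, 0.0, np.nan],
                   index=pd.date_range("2007-01-01", periods=6, freq="h"))

print("hours observed dry:", int((precip == 0.0).sum()))   # 3
print("hours missing:", int(precip.isna().sum()))          # 2
# pandas sums skip NaN silently, so any total should be reported together
# with the number of hours that actually reported.
print("total:", precip.sum(), "mm from", int(precip.notna().sum()), "hours")
```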
D.1.6. Mixed Time Scales
There are times when we may wish to combine information from radically different scales. For example, over the past 100 years we may want to know how the frequency of 5-minute precipitation peaks has varied or how the frequency of peak 1-second wind gusts has varied. We may also want to know whether, over this time, nearby vegetation gradually has grown up to increasingly block the wind or to slowly improve precipitation catch. Answers to these questions require knowledge over a wide range of time scales.
D.1.7. Elements
For manual measurements, the typical elements recorded include temperature extremes, precipitation, and snowfall/snow depth. Automated measurements typically include temperature, precipitation, humidity, wind speed and direction, and solar radiation. An exception exists in very windy locations, where precipitation is difficult to measure accurately. Automated measurements of snow are improving, but manual measurements are still preferable as long as shielding is present. Automated measurement of frozen precipitation presents numerous challenges that have not been fully resolved, and the best gauges are quite expensive ($3–8K). Soil temperatures also are included sometimes. Soil moisture is extremely useful, but measurements are not made at many sites. In addition, care must be taken in the installation and maintenance of instruments used in measuring soil moisture. Soil properties vary tremendously over short distances as well, and it is often very difficult (“impossible”) to document these variations accurately (without digging up all the soil!). In cooler climates, ultrasonic sensors that detect snow depth are becoming commonplace.
D.1.8. Wind Standards
Wind varies the most over the shortest distances, since it always decreases to zero near the ground and increases rapidly (approximately logarithmically) with height near the ground. Changes in anemometer height obviously will affect the distribution of wind speed, as will changes in vegetation, obstructions such as buildings, etc. A site that has a 3-m (10-ft) mast clearly will be less windy than a site that has a 6-m (20-ft) or 10-m (33-ft) mast. Historically, many U.S. airports (FAA and NWS) and most current RAWS sites have used a standard 6-m (20-ft) mast for wind measurements. Some NPS RAWS sites utilize shorter masts. Over the last decade, as Automated Surface Observing Systems (ASOSs, mostly NWS) and Automated Weather Observing Systems (AWOSs, mostly FAA) have been deployed at most airports, wind masts have been raised to 8 or 10 m (26 or 33 ft), depending on airplane clearance. The World Meteorological Organization recommends 10 m as the height for wind measurements (WMO 1983; 2005), and more groups are migrating slowly to this standard. The American Association of State Climatologists (AASC 1985) has recommended that wind be measured at 3 m, a standard geared more for agricultural applications than for general-purpose uses, where higher levels usually are preferred. Different anemometers have different starting thresholds; therefore, areas that frequently experience very light winds may not produce wind measurements, thus affecting long-term estimates of mean wind speed. For both sustained winds (averages over a short interval of 2–60 minutes) and especially for gusts, the duration of the sampling interval makes a considerable difference. For the same wind history, 1-second gusts are higher than gusts averaging 3 seconds, which in turn are greater than 5-second averages, so that the same sequence would be described with different numbers (all three systems and more are in use). Changes in the averaging procedure, or in height or exposure, can lead to “false” or “fake” climate change with no change in actual climate. Changes in any of these should be noted in the metadata.
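To make the averaging-interval effect concrete, the sketch below (Python with pandas; the 1 Hz wind record is synthetic) computes the peak gust from the same wind history using 1-, 3-, and 5-second averaging windows:

```python
import numpy as np
import pandas as pd

# Synthetic 1 Hz wind-speed record over 10 minutes (values illustrative).
rng = np.random.default_rng(0)
idx = pd.date_range("2007-01-01", periods=600, freq="s")
speed = pd.Series(8 + rng.gamma(2.0, 1.5, size=600), index=idx)

# The peak gust for a given averaging window is the maximum of the
# rolling mean over that window; longer windows smooth out the peaks.
for window in (1, 3, 5):
    gust = speed.rolling(window).mean().max()
    print(f"{window}-second gust: {gust:.1f} m/s")
# From the identical wind history, the 1-second gust always equals or
# exceeds the 3-second gust, which equals or exceeds the 5-second gust.
```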
D.1.9. Wind Nomenclature
Wind is a vector quantity having a direction and a speed. Directions can be two- or three-dimensional; they will be three-dimensional if the vertical component is important. In all common uses, winds always are denoted by the direction they blow from (north wind or southerly breeze). This convention exists because wind often brings weather, and thus our attention is focused upstream. However, this approach contrasts with the way ocean currents are viewed. Ocean currents usually are denoted by the direction they are moving toward (an eastward current moves from west to east). In specialized applications (such as in atmospheric modeling), wind velocity vectors point in the direction that the wind is blowing. Thus, a southwesterly wind (from the southwest) has both northward and eastward (to the north and to the east) components. Except near mountains, wind cannot blow up or down near the ground, so the vertical component of wind often is approximated as zero, and the horizontal component is emphasized.
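In code, the “direction from” convention translates into vector components as follows; a minimal sketch (Python), with u taken as the eastward and v the northward component:

```python
import numpy as np

def wind_components(speed_ms, direction_deg):
    """Convert speed and meteorological direction (degrees FROM which the
    wind blows, measured clockwise from north) into an eastward component
    u and a northward component v."""
    theta = np.radians(direction_deg)
    u = -speed_ms * np.sin(theta)   # eastward
    v = -speed_ms * np.cos(theta)   # northward
    return u, v

# A southwesterly wind (from 225 degrees) blows toward the northeast,
# so both components are positive:
print(wind_components(10.0, 225.0))   # approximately (7.07, 7.07)
```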
D.1.10. Frozen Precipitation
Frozen precipitation is more difficult to measure than liquid precipitation, especially with automated techniques. Sevruk and Harmon (1984), Goodison et al. (1998), and Yang et al. (1998; 2001) provide many of the reasons to explain this. The importance of frozen precipitation varies greatly from one setting to another. This subject was discussed in greater detail in a related inventory and monitoring report for the Alaska park units (Redmond et al. 2005).
In climates that receive frozen precipitation, a decision must be made whether or not to try to record such events accurately. This usually means that the precipitation must be turned into liquid, either by falling into an antifreeze fluid solution that is then weighed or by being heated enough to melt and fall through a measuring mechanism such as a nearly balanced tipping bucket. Accurate measurements from the first approach require expensive gauges; tipping buckets can achieve this resolution readily but are more apt to lose some or all precipitation. Improvements have been made to the heating mechanism on the NWS tipping-bucket gauge used for the ASOS, correcting many of its deficiencies and making it less problematic; however, this gauge is not inexpensive. The heat supply needed to melt frozen precipitation usually requires more energy than renewable sources (solar panels or wind recharging) can provide; thus, AC power is needed. Periods of frozen precipitation or rime often provide less-than-optimal recharging conditions, with heavy clouds, short days, low solar-elevation angles, more horizon blocking, and cold temperatures causing additional drain on the battery.
D.1.11. Save or Lose
A second consideration with precipitation is determining whether the measurement should be saved (as in weighing systems) or lost (as in tipping-bucket systems). With tipping buckets, after the water has passed through the tipping mechanism, it usually just drops to the ground. Thus, there is no checksum to ensure that the sum of all the tips adds up to what has been saved in a reservoir at some location. By contrast, weighing gauges continually accumulate until the reservoir is emptied: the reported value is the total reservoir content (for example, the height of the liquid column in a tube), and the incremental precipitation is the difference in depth between two known times. These weighing gauges do not always have the same fine resolution. Some gauges record only to the nearest centimeter, which is usually acceptable for hydrology but not necessarily for other needs. (For reference, a millimeter of precipitation can get a person in street clothes quite wet.) Other weighing gauges are capable of measuring to 0.25-mm (0.01-in.) resolution but do not have as much capacity and must be emptied more often. Day/night and storm-related thermal expansion and contraction, and sometimes wind shaking, can cause fluid pressure from accumulated totals to fluctuate in SNOTEL gauges by small increments (commonly 0.3–3 cm, or 0.01–0.10 ft), leading to “negative precipitation” followed by similarly non-real light precipitation when, in fact, no change took place in the amount of accumulated precipitation.
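A common processing step for accumulating gauges is therefore to difference successive reservoir readings and suppress these small oscillations. A minimal sketch (Python); the 0.3-cm threshold below is purely illustrative, chosen to match the low end of the fluctuation range quoted above, and is not an established standard:

```python
def increments_from_accumulation(depths_cm, noise_cm=0.3):
    """Convert cumulative gauge depths into precipitation increments,
    zeroing out small wiggles (including "negative precipitation") caused
    by thermal expansion/contraction or wind shaking. Illustrative only."""
    increments = []
    last_good = depths_cm[0]
    for depth in depths_cm[1:]:
        delta = depth - last_good
        if delta >= noise_cm:       # treat as real accumulation
            increments.append(delta)
            last_good = depth
        else:                       # treat as noise; keep the old baseline
            increments.append(0.0)
    return increments

# Thermal contraction produces a spurious -0.2 cm dip that later recovers;
# the filter reports no precipitation for either hour.
print(increments_from_accumulation([10.0, 10.0, 11.0, 10.8, 11.0, 12.5]))
# -> [0.0, 1.0, 0.0, 0.0, 1.5]
```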
D.1.12. Time
Time should always be in local standard time (LST), and daylight saving time (DST) should never be used under any circumstances with automated equipment and timers. Using DST leads to one duplicate hour, one missing hour, and a season of displaced values, as well as needless confusion and a data-management nightmare. Absolute time, such as Greenwich Mean Time (GMT) or Coordinated Universal Time (UTC), also can be used because these formats are unambiguously translatable. Since measurements provide information only about what already has occurred or is occurring, not what will occur, they should always be assigned to the ending time of the associated interval, with hour 24 marking the end of the last hour of the day. In this system, midnight always represents the end of the day, not the start. To demonstrate the importance of this distinction, we have encountered situations where police officers seeking corroborating weather data could not recall whether the time on their crime report from a year ago was the starting midnight or the ending midnight! Station positions should be known to within a few meters, easily accomplished with GPS (Global Positioning System), so that time zones and solar angles can be determined accurately.
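A minimal sketch of these two conventions in code (Python; the fixed UTC offset of -8 hours is an example for Pacific Standard Time and, as recommended above, is never shifted for DST):

```python
from datetime import datetime, timedelta, timezone

# Local standard time as a FIXED offset -- never daylight saving time.
PST = timezone(timedelta(hours=-8), "PST")

def interval_end_label(sample_time_utc):
    """Assign an observation to the ENDING time of its containing hour,
    in local standard time. Midnight (hour 24) closes the day."""
    local = sample_time_utc.astimezone(PST)
    if local.minute or local.second or local.microsecond:
        local = local.replace(minute=0, second=0, microsecond=0)
        local += timedelta(hours=1)   # round up to the interval's end
    return local

t = datetime(2007, 7, 1, 15, 42, tzinfo=timezone.utc)   # 07:42 PST
print(interval_end_label(t))   # 2007-07-01 08:00:00-08:00
```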
D.1.13. Automated versus Manual
Most of this report has addressed automated measurements. Historically, however, most measurements were manual and typically collected once a day. In many cases, manual measurements continue because of habit, usefulness, and the desire for continuity over time. Manual measurements are extremely useful and should be encouraged when possible. However, automated measurements are becoming more common. For either, it is important to record time in a logically consistent manner.
It should not be automatically assumed that newer data and measurements are “better” than older data or that manual data are “worse” than automated data. Older or simpler manual measurements are often of very high quality even if they sometimes are not in the most convenient digital format.
There is widespread desire to use automated systems to reduce human involvement. This is admirable and understandable, but every automated weather/climate station or network requires significant human attention and maintenance. A telling example concerns the Oklahoma Mesonet (see Brock et al. 1995, and bibliography at http://www.mesonet.ou.edu), a network of about 115 high-quality, automated meteorological stations spread over Oklahoma, where about 80 percent of the annual budget ($2–3M) is nonetheless allocated to humans, with only about 20 percent allocated to equipment.
D.1.14. Manual Conventions
Manual measurements typically are made once a day. Elements usually consist of maximum and minimum temperature, temperature at observation time, precipitation, snowfall, snow depth, and sometimes evaporation, wind, or other information. Since it is not actually known when extremes occurred, the only logical approach, and the nationwide convention, is to ascribe the entire measurement to the date of the time interval and to enter it on the form in that way. For morning observers (for example, 8 am to 8 am), this means that the maximum temperature written for today often is from yesterday afternoon, and sometimes the minimum temperature for the 24-hr period actually occurred yesterday morning. However, this is understood and expected. It is often a surprise to observers to see how many maximum temperatures do not occur in the afternoon and how many minimum temperatures do not occur in the predawn hours. This is especially true in environments that are colder, higher, northerly, cloudy, mountainous, or coastal. As long as this convention is strictly followed every day, it has been shown that truly excellent climate records can result (Redmond 1992). Manual observers should reset equipment only once per day, at the official observing time. Making more than one measurement a day is strongly discouraged; this practice results in a hybrid record that is too difficult to interpret. The only exception is for total daily snowfall: new snowfall can be measured up to four times per day, with no two observations closer than six hours apart. It is well known that more frequent measurement of snow increases the annual total because compaction is a continuous process.
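For illustration, the sketch below (Python with pandas; the hourly temperatures are synthetic) applies the morning-observer convention, ascribing the 24-hour extremes ending at the 8 am observation to the observation date:

```python
import numpy as np
import pandas as pd

# Synthetic hourly temperatures over two days (values illustrative).
idx = pd.date_range("2007-07-01 00:00", periods=48, freq="h")
temp = pd.Series(20 + 8 * np.sin(2 * np.pi * (idx.hour - 9) / 24), index=idx)

# An 8 am observer's report for July 2 covers the 24 hours ending at
# 08:00 July 2; the entire interval is ascribed to July 2.
window = temp["2007-07-01 09:00":"2007-07-02 08:00"]
print("max:", round(window.max(), 1), "at", window.idxmax())  # July 1 afternoon
print("min:", round(window.min(), 1), "at", window.idxmin())  # July 2 pre-dawn
```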
Two main purposes for climate observations are to establish the long-term averages for given locations and to track variations in climate. Broadly speaking, these purposes address topics of absolute and relative climate behavior. Once absolute behavior has been “established” (a task that is never finished, because long-term averages continue to vary in time), temporal variability quickly becomes the item of most interest.
D.2. Representativeness
Having discussed important factors to consider when new sites are installed, we now turn our attention to site “representativeness.” In popular usage, we often encounter the notion that a site is “representative” of another site if it receives the same annual precipitation or records the same annual temperature or if some other element-specific, long-term average has a similar value. This notion of representativeness has a certain limited validity, but there are other aspects of this idea that need to be considered.
A climate monitoring site also can be said to be representative if climate records from that site show sufficiently strong temporal correlations with a large number of locations over a sufficiently large area. If station A receives 20 cm a year and station B receives 200 cm a year, these climates obviously receive quite differing amounts of precipitation. However, if their monthly, seasonal, or annual correlations are high (for example, 0.80 or higher for a particular time scale), one site can be used as a surrogate for estimating values at the other if measurements for a particular month, season, or year are missing. That is, a wet or dry month at one station is also a wet or dry month (relative to its own mean) at the comparison station. Note that high correlations on one time scale do not imply automatically that high correlations will occur on other time scales.
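A hedged sketch of this surrogate use (Python with pandas; both station records are synthetic, and a real analysis would also test correlations month by month as discussed below): estimate a value at the dry station from the wet one via a regression fit over their common record.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
months = pd.period_range("1997-01", periods=120, freq="M")
signal = rng.gamma(2.0, 1.0, size=120)     # shared climate signal

# Station A is dry (~20 cm/yr); station B is wet (~200 cm/yr); their
# month-to-month departures are strongly correlated nonetheless.
a = pd.Series(1.7 * signal + rng.normal(0, 0.3, 120), index=months)
b = pd.Series(17.0 * signal + rng.normal(0, 3.0, 120), index=months)
print("monthly correlation:", round(a.corr(b), 2))   # high despite 10x totals

# Estimate a missing month at A from B via least-squares regression on
# the overlap (anomaly- or ratio-based methods are common alternatives).
slope, intercept = np.polyfit(b.values, a.values, 1)
print("estimated A for 2001-07:", round(slope * b["2001-07"] + intercept, 2))
```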
Likewise, two stations having similar mean climates (for example, similar annual precipitation) might not co-vary in close synchrony (for example, coastal versus interior). This may be considered a matter of climate “affiliation” for a particular location.
Thus, the representativeness of a site can refer either to the basic climatic averages for a given duration (or time window within the annual cycle) or to the extent that the site co-varies in time with respect to all surrounding locations. One site can be representative of another in the first sense but not the second, or vice versa, or neither, or both—all combinations are possible.
If two sites are perfectly correlated then, in a sense, they are “redundant.” However, redundancy has value, because all sites will experience missing data, especially with automated equipment in rugged environments and harsh climates where outages and other problems can nearly be guaranteed. In many cases, those outages are caused by the weather, particularly by unusual weather and the very conditions we most wish to know about. Methods for filling in those values will require proxy information from this or other nearby networks. Thus, redundancy is a virtue rather than a vice.
D.2.1. Temporal Behavior
It is possible that high correlations will occur between station pairs during certain portions of the year (e.g., January) but low correlations may occur during other portions of the year (e.g., September or October). The relative contributions of these seasons to the annual total (for precipitation) or average (for temperature), and the correlations for each month, are both factors in the correlation for an aggregated time window of longer duration that encompasses those seasons (e.g., one of the year definitions, such as calendar year or water year). A complete and careful evaluation ideally would include such a correlation analysis but requires more resources and data. Note that it also is possible, and frequently observed, that temperatures are highly correlated while precipitation is not, or vice versa, and these relations can change with the time of year. If two stations are well correlated for all climate elements for all portions of the year, then they can be considered redundant.
With scarce resources, the initial strategy should be to try to identify locations that do not correlate particularly well, so that each new site measures something new that cannot be guessed easily from the behavior of surrounding sites. (An important caveat is that lack of such correlation could be a result of physical climate behavior and not of faults in the actual measuring process, i.e., of unrepresentative or simply poor-quality data. Unfortunately, we seldom have perfect climate data.) As additional sites are added, we usually wish for some combination of unique and redundant sites to meet what amount to essentially orthogonal constraints: new information and more reliably furnished information.
A common consideration is whether to observe on a ridge or in a valley, given the resources to place a single station within a particular area of a few square kilometers. Ridge and valley stations will correlate very well for temperature when lapse conditions prevail, particularly for summer daytime temperatures. In summer at night or in winter during the day, the picture is more mixed and correlations are lower. In winter at night, when inversions are common and even the rule, correlations may be zero or even negative, and perhaps even more divergent when the two sites are on opposite sides of the inversion. If we had the luxury of locating stations everywhere, we would find that ridge tops generally correlate very well with other ridge tops, and similarly valleys with other valleys, but ridge tops correlate well with valleys only under certain circumstances. Beyond this, valleys and ridges having similar orientations usually will correlate better with each other than those with perpendicular orientations, depending on their orientation with respect to large-scale wind flow and solar angles.
Unfortunately, we do not have stations everywhere, so we are forced to use the few comparisons that we have and include a large dose of intelligent reasoning, using what we have observed elsewhere. In performing and interpreting such analyses, we must remember that there are physical climatic reasons and observational reasons why stations within a short distance (even a few tens or hundreds of meters) may not correlate well.
Examples of correlation analyses include those for the Channel Islands and for southwest Alaska, which can be found in Redmond and McCurdy (2005) and Redmond et al. (2005). These examples illustrate what can be learned from correlation analyses. Spatial correlations generally vary by time of year. Thus, results should be displayed in the form of annual correlation cycles—for monthly mean temperature and monthly total precipitation and perhaps other climate elements like wind or humidity—between station pairs selected for climatic setting and data availability and quality.
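In code, an annual correlation cycle is simply the inter-station correlation computed separately for each calendar month; a minimal sketch (Python with pandas; the two monthly records are synthetic and built to co-vary more strongly in winter than in summer):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.period_range("1977-01", periods=360, freq="M")   # 30 years

# Shared signal with strong winter coupling and weak summer coupling.
common = rng.normal(size=360)
coupling = np.where(np.isin(idx.month, [11, 12, 1, 2, 3]), 0.9, 0.3)
sta1 = pd.Series(coupling * common + rng.normal(0, 0.5, 360), index=idx)
sta2 = pd.Series(coupling * common + rng.normal(0, 0.5, 360), index=idx)

# Correlation computed month by month across the 30 years.
pairs = pd.DataFrame({"sta1": sta1, "sta2": sta2})
cycle = pairs.groupby(idx.month).corr().xs("sta1", level=1)["sta2"]
print(cycle.round(2))   # high November-March, low in the warm season
```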
In general, the COOP stations managed by the NWS have produced much longer records than have automated stations like RAWS or SNOTEL stations. The RAWS stations also often have problems with precipitation, especially in winter, or with missing data, so that low correlations may reflect data problems rather than climatic dissimilarity. The RAWS records are much shorter, so correlations should be interpreted with care, but these stations are more likely to be in places of interest for remote or under-sampled regions.
D.2.2. Spatial Behavior
A number of techniques exist to interpolate from isolated point values to a spatial domain. For example, a common technique is simple inverse distance weighting. Critical to the success of the simplest of such techniques is that some other property of the spatial domain, one that is influential for the mapped element, does not vary significantly. Topography greatly influences precipitation, temperature, wind, humidity, and most other meteorological elements. Thus, this criterion clearly is not met in any region having extreme topographic diversity. In such circumstances, simple Cartesian distance may have little to do with how rapidly correlation deteriorates from one site to the next, and in fact, the correlations can decrease readily from a mountain to a valley and then increase again on the next mountain. Such structure in the fields of spatial correlation is not seen in the relatively (statistically) well-behaved flat areas like those in the eastern U.S.
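A minimal sketch of simple inverse-distance weighting (Python; the station coordinates and values are invented) makes the limitation explicit: the estimate depends only on Cartesian distance and knows nothing about the topography between points.

```python
import numpy as np

def idw(x, y, xs, ys, values, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from stations at
    (xs, ys). Distance is plain Cartesian, so a nearby ridge station is
    weighted the same as an equally distant station on flat ground."""
    d = np.hypot(np.asarray(xs) - x, np.asarray(ys) - y)
    if np.any(d == 0):                      # exactly at a station
        return float(values[int(np.argmin(d))])
    w = d ** -power
    return float(np.sum(w * values) / np.sum(w))

# Three stations (coordinates in km) with annual precipitation (cm);
# the middle station sits on a wet ridge.
xs, ys = [0.0, 10.0, 20.0], [0.0, 5.0, 0.0]
vals = np.array([30.0, 150.0, 35.0])
print(idw(5.0, 2.5, xs, ys, vals))   # pulled strongly toward the ridge value
```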
To account for dominating effects such as topography and inland-coastal differences that exist in certain regions, some kind of additional knowledge must be brought to bear to produce meaningful, physically plausible, and observationally based interpolations. Historically, this has proven to be an extremely difficult problem, especially for objective and repeatable analyses. An analysis performed for southwest Alaska (Redmond et al. 2005) concluded that the PRISM (Parameter Regression on Independent Slopes Model) maps (Daly et al. 1994; 2002; Gibson et al. 2002; Doggett et al. 2004) were probably the best available. An analysis by Simpson et al. (2005) further discussed many issues in the mapping of Alaska’s climate and resulted in the same conclusion about PRISM.
D.2.3. Climate-Change Detection
Although general-purpose climate stations should be situated to address all aspects of climate variability, it is desirable that they also be in locations that are more sensitive to climate change from natural or anthropogenic influences, should it begin to occur. The question here is how well we know such sensitivities. The climate-change issue is quite complex because it encompasses more than just greenhouse gases.
Sites in locations or climates particularly vulnerable to climate change should be favored. How this vulnerability is determined is a considerably challenging research issue. Candidate locations or situations are those that lie on the border between two major biomes or just inside the edge of one or the other. In these cases, a slight movement of the boundary in the anticipated direction (toward “warmer,” for example) would be much easier to detect as the boundary moves past the site and a different set of biota begins to establish itself. Such a vegetative or ecologic response would be more visible and would take less time to establish as a real change than would a smaller change in the center of the distribution range of a marker or key species.
D.2.4. Element-Specific Differences
The various climate elements (temperature, precipitation, cloudiness, snowfall, humidity, wind speed and direction, solar radiation) do not vary through time in the same sequence or manner, nor should they necessarily be expected to. The spatial patterns of variability should not be expected to be the same for all elements, nor should these patterns be expected to be similar for all months or seasons. The suitability of an individual site for measurement also varies from one element to another. A site that has a favorable exposure for temperature or wind may not have a favorable exposure for precipitation or snowfall. A site that experiences proper air movement may be situated in a topographic channel, such as a river valley or a pass, which restricts the range of wind directions and affects the distribution of speed and direction categories.
D.2.5. Logistics and Practical Factors
Even with the most advanced scientific rationale, sites in some remote or climatically challenging settings may not be suitable because of the difficulty in servicing and maintaining equipment. Contributing to these challenges are scheduling difficulties, animal behavior, snow burial, icing, snow behavior, access and logistical problems, and the weather itself. Remote and elevated sites usually require far more attention and expense than a rain-dominated, easily accessible valley location.
For climate purposes, station exposure and the local environment should be maintained in their original state (vegetation especially), so that changes seen are the result of regional climate variations and not of trees growing up, bushes crowding a site, surface albedo changing, fire clearing, etc. Repeat photography has shown many examples of slow environmental change in the vicinity of a station in rather short time frames (5–20 years), and this technique should be employed routinely and frequently at all locations. In the end, logistics, maintenance, and other practical factors almost always determine the success of weather- and climate-monitoring activities.
D.2.6. Personnel Factors
Many past experiences (almost exclusively negative) strongly support the necessity to place primary responsibility for station deployment and maintenance in the hands of seasoned, highly qualified, trained, and meticulously careful personnel, the more experienced the better. Over time, even in “benign” climates but especially where harsher conditions prevail, every conceivable problem will occur and both the usual and unusual should be anticipated: weather, animals, plants, salt, sensor and communication failure, windblown debris, corrosion, power failures, vibrations, avalanches, snow loading and creep, corruption of the data logger program, etc. An ability to anticipate and forestall such problems, a knack for innovation and improvisation, knowledge of electronics, practical and organizational skills, and presence of mind to bring the various small but vital parts, spares, tools, and diagnostic troubleshooting equipment are highly valued qualities. Especially when logistics are so expensive, a premium should be placed on using experienced personnel, since the slightest and seemingly most minor mistake can render a station useless or, even worse, uncertain. Exclusive reliance on individuals without this background can be costly and almost always will result eventually in unnecessary loss of data. Skilled labor and an apprenticeship system to develop new skilled labor will greatly reduce (but not eliminate) the types of problems that can occur in operating a climate network.
D.3. Site Selection
In addition to considerations identified previously in this appendix, various factors need to be considered in selecting sites for new or augmented instrumentation.
D.3.1. Equipment and Exposure Factors
D.3.1.1. Measurement Suite: All sites should measure temperature, humidity, wind, solar radiation, and snow depth. Precipitation measurements are more difficult but probably should be attempted, with the understanding that winter measurements may be of limited or no value unless an all-weather gauge has been installed. Even if an all-weather gauge has been installed, it is desirable to have a second gauge present that operates on a different principle; for example, a fluid-based system like those used at the SNOTEL stations in tandem with a higher-resolution tipping-bucket gauge for summertime. Without heating, a tipping-bucket gauge usually is of use only when temperatures are above freezing and when temperatures have not been below freezing for some time, so that accumulated ice and snow is not melting and being recorded as present precipitation. Gauge undercatch is a significant issue in snowy climates, so shielding should be considered for all gauges designed to work over the winter months. It is very important to note the presence or absence of shielding, the type of shielding, and the dates of installation or removal of the shielding.
D.3.1.2. Overall Exposure: The ideal, general all-purpose site has gentle slopes, is open to the sun and the wind, has a natural vegetative cover, avoids strong local (less than 200 m) influences, and represents a reasonable compromise among all climate elements. The best temperature sites are not the best precipitation sites, and the same is true for other elements. Steep topography in the immediate vicinity should be avoided unless settings where precipitation is affected by steep topography are being deliberately sought or a mountaintop or ridgeline is the desired location. The potential for disturbance should be considered: fire and flood risk, earth movement, wind-borne debris, volcanic deposits or lahars, vandalism, animal tampering, and general human encroachment are all factors.
D.3.1.3. Elevation: Mountain climates do not vary in time in exactly the same manner as adjoining valley climates. This difference is accentuated when temperature inversions are present and during precipitation, when winds rise up the slopes at the same angle. There is considerable concern that mountain climates will be (or already are) changing, and perhaps changing differently than lowland climates, which has direct and indirect consequences for plant and animal life in the more extreme zones. Elevations of special significance are those near the mean winter rain/snow line, near the tree line, and near the mean annual freezing level (these may not all be quite the same). Because lapse rates in wet climates often are nearly moist-adiabatic during the main precipitation seasons, measurements at one elevation may be extrapolated to nearby elevations. In drier climates and in the winter, temperature, and to a lesser extent wind, will show various elevation profiles.
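A minimal sketch of such an extrapolation (Python; the lapse rate of 6 degrees C per km is a representative near-moist-adiabatic value, and the appropriate rate varies with season, temperature, and climate):

```python
def extrapolate_temp(t_obs_c, z_obs_m, z_target_m, lapse_c_per_km=6.0):
    """Estimate temperature at a nearby elevation assuming a fixed,
    near-moist-adiabatic lapse rate. Per the discussion above, this is
    most defensible in wet climates during the main precipitation season."""
    return t_obs_c - lapse_c_per_km * (z_target_m - z_obs_m) / 1000.0

# A station at 500 m reads 10.0 C; estimate for a nearby site at 1,200 m:
print(extrapolate_temp(10.0, 500.0, 1200.0))   # 5.8 C
```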
D.3.1.4. Transects: The concept of observing transects that span climatic gradients is sound. This is not always straightforward in topographically uneven terrain, but these transects could still be arranged by setting up station(s) along the coast; in or near passes atop the main coastal interior drainage divide; and inland at one, two, or three distances into the interior lowlands. Transects need not—and by dint of topographic constraints probably cannot—be straight lines, but the closer that a line can be approximated the better. The main point is to systematically sample the key points of a behavioral transition without deviating too radically from linearity.
D.3.1.5. Other Topographic Considerations: There are various considerations with respect to local topography. Local topography can influence wind (channeling, upslope/downslope, etc.), precipitation (orographic enhancement, downslope evaporation, catch efficiency, etc.), and temperature (frost pockets, hilltops, aspect, mixing or decoupling from the overlying atmosphere, bowls, radiative effects, etc.), to different degrees at differing scales. In general, for measurements to be areally representative, it is better to avoid these local effects to the extent that they can be identified before station deployment (once deployed, it is desirable not to move a station). The primary purpose of a climate-monitoring network should be to serve as an infrastructure in the form of a set of benchmark stations for comparing other stations. Sometimes, however, it is exactly these local phenomena that we want to capture. Living organisms, especially plants, are affected by their immediate environment, whether it is representative of a larger setting or not. Specific measurements of limited scope and duration made for these purposes then can be tied to the main benchmarks. This experience is useful also in determining the complexity needed in the benchmark monitoring process in order to capture particular phenomena at particular space and time scales.
Sites that drain (cold air) well generally are better than sites that allow cold air to pool. Slightly sloped areas (1 degree is fine) or small benches from tens to hundreds of meters above streams are often favorable locations. Furthermore, these sites often tend to be out of the path of hazards (like floods) and to have rocky outcroppings where controlling vegetation will not be a major concern. Benches or wide spots on the rise between two forks of a river system are often the only flat areas and sometimes jut out to give greater exposure to winds from more directions.
D.3.1.6. Prior History: The starting point in designing a program is to determine what kinds of observations have been collected over time, by whom, in what manner, and if these observation are continuing to the present time. It also may be of value to “re-occupy” the former site of a station that is now inactive to provide some measure of continuity or a reference point from the past. This can be of value even if continuous observations were not made during the entire intervening period.
D.3.2. Element-Specific Factors
D.3.2.1. Temperature: An open exposure with uninhibited air movement is the preferred setting. The most common measurement is made at approximately eye level, 1.5–2.0 m. In snowy locations, sensors should be at least one meter higher than the deepest snowpack expected in the next 50 years, or perhaps 2–3 times the average maximum annual depth. Sensors should be shielded above and below from solar radiation (bouncing off snow), from sunrise/sunset horizontal input, and from vertical rock faces. Sensors should be clamped tightly, so that they do not swivel away from level stacks of radiation plates. Nearby vegetation should be kept away from the sensors (several meters). Growing vegetation should be cut back to original conditions. Small hollows and swales can cool tremendously at night, and it is best to avoid these areas. Side slopes of perhaps a degree or two of angle facilitate air movement and drainage and, in effect, sample a large area during nighttime hours. The very bottom of a valley should be avoided. Temperature can change substantially with moves of only a few meters. Situations have been observed where flat and seemingly uniform conditions (like airport runways) exhibit different climate behaviors over short distances of a few tens or hundreds of meters (differences of 5–10°C). When snow is on the ground, these microclimatic differences can be stronger, and differences of 2–5°C can occur in the short distance between the thermometer and the snow surface on calm evenings.
D.3.2.2. Precipitation (liquid): Calm locations with vegetative or artificial shielding are preferred. Wind adversely affects readings; therefore, the less wind the better. Wind effects on precipitation are far less for rain than for snow. Devices that “save” precipitation present advantages, but most gauges are built to dump precipitation as it falls or to empty periodically. Automated gauges give both the amount and the timing. Simple backups that record only the total precipitation since the last visit have a certain advantage (for example, storage gauges or lengths of PVC pipe, perhaps with bladders on the bottom). The following question should be asked: Does the total precipitation from an automated gauge add up to the measured total in a simple bucket (in which evaporation is prevented with an appropriate substance such as mineral oil)? Drip from overhanging foliage and trees can augment precipitation totals.
D.3.2.3. Precipitation (frozen): Calm locations or shielding are a must. Undercatch for rain is only about 5 percent, but with winds of only 2–4 m/s, gauges may catch only 30–70 percent of the actual snow falling, depending on the density of the flakes. To catch 100 percent of the snow, the CRN (Climate Reference Network) employs the standard configuration for shielding: the DFIR (Double-Fence Intercomparison Reference) shield, with 2.4-m (8-ft) vertical, wooden slatted fences in two concentric octagons with diameters of 8 m and 4 m (26 ft and 13 ft, respectively) and an inner Alter shield (flapping vanes). Numerous tests have shown this is the only way to achieve complete catch of snowfall (e.g., Yang et al. 1998; 2001). The DFIR shield is large and bulky; it is recommended that all precipitation gauges have at least Alter shields on them.
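For illustration only, the sketch below (Python) encodes a wind-dependent catch ratio of the generic linear form fit in gauge intercomparison studies (e.g., Yang et al. 1998); the coefficients are hypothetical placeholders chosen to reproduce the rough 30–70 percent range quoted above, not published values.

```python
def adjusted_snowfall(measured_mm, wind_ms, a=0.95, b=0.11):
    """Adjust gauge-measured snowfall for wind undercatch using a
    HYPOTHETICAL linear catch-ratio model, CR = a - b * wind_speed.
    Real corrections come from intercomparisons against a DFIR reference
    and depend on gauge type, shielding, and snow density."""
    catch_ratio = max(0.2, a - b * wind_ms)   # floor avoids huge corrections
    return measured_mm / catch_ratio

# At 4 m/s the assumed catch ratio is ~0.51, so 5 mm measured implies
# roughly 10 mm of actual snowfall.
print(round(adjusted_snowfall(5.0, 4.0), 1))   # 9.8
```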
Near oceans, much snow is heavy and falls more vertically. In colder locations or storms, light flakes frequently will fly into and then out of the gauge. Clearings in forests are usually excellent sites. Snow blowing from trees that are too close can augment actual precipitation totals. Artificial shielding (vanes, etc.) placed around gauges in snowy locales always should be used if accurate totals are desired. Moving parts tend to freeze up. Capping of gauges during heavy snowfall events is a common occurrence. When the cap becomes pointed, snow falls off to the ground and is not recorded. Caps and plugs often will not fall into the tube until hours, days, or even weeks have passed, typically during an extended period of above-freezing temperatures or when sunlight finally occurs. Liquid-based measurements (e.g., SNOTEL “rocket” gauges) do not have the resolution (usually 0.3 cm [0.1 in.] rather than 0.03 cm [0.01 in.]) that tipping-bucket and other gauges have but are known to be reasonably accurate in very snowy climates. Light snowfall events might not be recorded until enough of them add up to the next reporting increment. More expensive gauges like Geonors can be considered and could do quite well in snowy settings; however, they need to be emptied every 40 cm (15 in.) or so (capacity of 51 cm [20 in.]) until the new 91-cm (36-in.) capacity gauge is offered for sale. Recently, the NWS has been trying out the new (and very expensive) Ott all-weather gauge. Riming can be an issue in windy, foggy environments below freezing. Rime, dew, and other forms of atmospheric condensation are not real precipitation, since they are caused by the gauge.
D.3.2.4. Snow Depth: Windswept areas tend to be blown clear of snow. Conversely, certain types of vegetation can act as a snow fence and cause artificial drifts. However, some amount of vegetation in the vicinity generally can help slow down the wind. The two most common types of snow-depth gauges are the Judd Snow Depth Sensor, produced by Judd Communications, and the snow depth gauge produced by Campbell Scientific, Inc. Opinions vary on which one is better. These gauges use ultrasound and look downward in a cone about 22 degrees wide. The ground should be relatively clear of vegetation and maintained in a manner so that the zero point on the calibration scale does not change.
D.3.2.5. Snow Water Equivalent: This is determined by the weight of snow on fluid-filled pads about the size of a desktop, set up sometimes in groups of four or in larger hexagons several meters in diameter. These pads require flat ground some distance from nearby sources of windblown snow, and shielding that is “just right”: not so close that the shielding acts as a kind of snow fence, and not so far that blowing and drifting become a factor. Generally, these pads require fluids that possess antifreeze-like properties, as well as handling and replacement protocols.
D.3.2.6. Wind: Open exposures are needed for wind measurements. Small prominences or benches without blockage from certain sectors are preferred. A typical rule for trees is to site stations back 10 tree-heights from all tree obstructions. Sites in long, narrow valleys can obviously only exhibit two main wind directions. Gently rounded eminences are more favored. Any kind of topographic steering should be avoided to the extent possible. Avoiding major mountain chains or single isolated mountains or ridges is usually a favorable approach, if there is a choice. Sustained wind speed and the highest gusts (1-second) should be recorded. Averaging methodologies for both sustained winds and gusts can affect climate trends and should be recorded as metadata with all changes noted. Vegetation growth affects the vertical wind profile, and growth over a few years can lead to changes in mean wind speed even if the “real” wind does not change, so vegetation near the site (perhaps out to 50 m) should be maintained in a quasi-permanent status (same height and spatial distribution). Wind devices can rime up and freeze or spin out of balance. In severely rimed or windy climates, rugged anemometers, such as those made by Taylor, are worth considering. These anemometers are expensive but durable and can withstand substantial abuse. In exposed locations, personnel should plan for winds to be at least 50 m/s and be able to measure these wind speeds. At a minimum, anemometers should be rated to 75 m/s.
D.3.2.7. Humidity: Humidity is a relatively straightforward climate element. Close proximity to lakes or other water features can affect readings. Humidity readings typically are less accurate near 100 percent and at low humidities in cold weather.
D.3.2.8. Solar Radiation: A site with an unobstructed horizon obviously is the most desirable. This generally implies a flat plateau or summit. However, in most locations trees or mountains will obstruct the sun for part of the day.
D.3.2.9. Soil Temperature: It is desirable to measure soil temperature at locations where soil is present. If soil temperature is recorded at only a single depth, the most preferred depth is 10 cm. Other common depths include 25 cm, 50 cm, 2 cm, and 100 cm. Biological activity in the soil will be proportional to temperature with important threshold effects occurring near freezing.
D.3.2.10. Soil Moisture: Soil-moisture gauges are somewhat temperamental and require care to install. The soil should be characterized by a soil expert during installation of the gauge. The readings may require a certain level of experience to interpret correctly. If accurate, readings of soil moisture are especially useful.
D.3.2.11. Distributed Observations: It can be seen readily that compromises must be struck among the considerations described in the preceding paragraphs because some are mutually exclusive.
How large can a “site” be? Generally, the equipment footprint should be kept as small as practical with all components placed next to each other (within less than 10–20 m or so). Readings from one instrument frequently are used to aid in interpreting readings from the remaining instruments.
What is a tolerable degree of separation? Some consideration may be given to locating a precipitation gauge or snow pillow among protective vegetation, while the associated temperature, wind, and humidity readings would be collected more effectively in an open and exposed location within 20–50 m. Ideally, it is advantageous to know the wind measurement precisely at the precipitation gauge, but a compromise involving a short split, and in effect a “distributed observation,” could be considered. There are no definitive rules governing this decision, but it is suggested that the site footprint be kept within approximately 50 m. There also are constraints imposed by engineering and electrical factors that affect cable lengths, signal strength, and line noise; therefore, the shorter the cable the better. Practical issues include the need to trench a channel to outlying instruments or to allow lines to lie atop the ground and associated problems with animals, humans, weathering, etc. Separating a precipitation gauge up to 100 m or so from an instrument mast may be an acceptable compromise if other factors are not limiting.
D.3.2.12. Instrument Replacement Schedules: Instruments slowly degrade, and a plan for replacing them with new, refurbished, or recalibrated instruments should be in place. After approximately five years, a systematic change-out procedure should result in replacing most sensors in a network. Certain parts, such as solar radiation sensors, are candidates for annual calibration or change-out. Anemometers tend to degrade as bearings erode or electrical contacts become uneven. Noisy bearings are an indication, and a stethoscope might aid in hearing such noises. Increased internal friction affects the threshold starting speed; once spinning, anemometers tend to function properly. Increases in starting threshold speeds can lead to more zero-wind measurements and thus reduce the reported mean wind speed with no real change in wind properties. A field calibration kit should be developed and taken on all site visits, routine or otherwise. Rain gauges can be tested with drip testers during field visits. Protective conduit and tight water seals can prevent abrasion and moisture problems with the equipment, although seals can keep moisture in as well as out. Bulletproof casings sometimes are employed in remote settings. A supply of spare parts, at least one of each and more for less-expensive or more-delicate sensors, should be maintained to allow replacement of worn or nonfunctional instruments during field visits. In addition, this approach allows instruments to be calibrated in the relative convenience of the operational home; the larger the network, the greater the need for a parts depot.
D.3.3. Long-Term Comparability and Consistency
D.3.3.1. Consistency: The emphasis here is on holding biases constant. Every site has biases, problems, and idiosyncrasies of one sort or another. The best rule to follow is simply to keep biases constant through time. Because the goal is to track climate through time, keeping sensors, methodology, and exposure constant helps ensure that only true climate change is being measured. This means leaving the site in its original state or performing maintenance to keep it that way. Once a site is installed, the goal should be never to move the site, even by a few meters, and not to allow significant changes to occur within 100 m for the next several decades.
Sites in or near rock outcroppings likely will experience less vegetative disturbance or growth through the years and will not usually retain moisture, a factor that could speed corrosion. Sites that will remain locally similar for some time are usually preferable. However, in some cases the intent of a station might be to record the local climate effects of changes within a small-scale system (for example, glacier, recently burned area, or scene of some other disturbance) that is subject to a regional climate influence. In this example, the local changes might be much larger than the regional changes.
D.3.3.2. Metadata: Because the climate of every site is affected by features in the immediate vicinity, it is vital to record this information over time and to update the record at each service visit. Distances, angles, heights of vegetation, fine-scale topography, condition of instruments, shielding discoloration, and other factors from within a meter to several kilometers should be noted.
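The items above can be organized as a structured record that is updated at each service visit. The sketch below is purely illustrative: every field name and value is hypothetical, and it mirrors the factors listed above rather than any prescribed NPS or WRCC format.

    # Illustrative metadata record for one service visit; all fields and
    # values are hypothetical examples of the factors worth documenting.
    site_visit_metadata = {
        "station_id": "EXAMPLE01",
        "visit_date": "2007-06-15",
        "fine_scale_topography": "gentle 3-degree slope to the west",
        "vegetation": {
            "nearest_shrub_m": 12.0,      # distance, meters
            "mean_canopy_height_m": 1.4,
            "notes": "sagebrush encroaching from the southeast",
        },
        "instruments": {
            "anemometer": {"condition": "bearings noisy", "last_calibrated": "2006-08-01"},
            "rain_gauge": {"condition": "good", "drip_test_passed": True},
        },
        "shielding_discoloration": "slight yellowing on south face",
        "photo_azimuths": ["N", "NE", "E", "SE", "S", "SW", "W", "NW"],
    }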
Systematic photographic documentation should be undertaken at each site in a standard manner and updated at least once every one to two years. Guidelines for this methodology were developed by Redmond (2004) as a result of experience with the NOAA CRN and can be found on the WRCC NPS Web pages at http://www.wrcc.dri.edu/nps and at ftp://ftp.wrcc.dri.edu/nps/photodocumentation.pdf.
The main purpose of climate stations is to track climatic conditions through time. Anything that affects the interpretation of records through time must be noted and recorded for posterity. The important factors should be clear to a person who has never visited the site, no matter how long ago the site was installed.
In regions with significant climatic transition zones, transects are an efficient way to span several climates and make use of available resources. More detailed discussions of this topic can be found in Redmond and Simeral (2004) and in Redmond et al. (2005).
D.4. Literature Cited
American Association of State Climatologists. 1985. Heights and exposure standards for sensors on automated weather stations. The State Climatologist 9.
Brock, F. V., K. C. Crawford, R. L. Elliott, G. W. Cuperus, S. J. Stadler, H. L. Johnson, and M. D. Eilts. 1995. The Oklahoma Mesonet: A technical overview. Journal of Atmospheric and Oceanic Technology 12:5-19.
Daly, C., R. P. Neilson, and D. L. Phillips. 1994. A statistical-topographic model for mapping climatological precipitation over mountainous terrain. Journal of Applied Meteorology 33:140-158.
Daly, C., W. P. Gibson, G. H. Taylor, G. L. Johnson, and P. Pasteris. 2002. A knowledge-based approach to the statistical mapping of climate. Climate Research 22:99-113.
Doggett, M., C. Daly, J. Smith, W. Gibson, G. Taylor, G. Johnson, and P. Pasteris. 2004. High-resolution 1971-2000 mean monthly temperature maps for the western United States. Fourteenth AMS Conf. on Applied Climatology, 84th AMS Annual Meeting. Seattle, WA, American Meteorological Society, Boston, MA, January 2004, Paper 4.3, CD-ROM.
Geiger, R., R. H. Aron, and P. E. Todhunter. 2003. The Climate Near the Ground. 6th edition. Rowman & Littlefield Publishers, Inc., New York.
Gibson, W. P., C. Daly, T. Kittel, D. Nychka, C. Johns, N. Rosenbloom, A. McNab, and G. Taylor. 2002. Development of a 103-year high-resolution climate data set for the conterminous United States. Thirteenth AMS Conf. on Applied Climatology. Portland, OR, American Meteorological Society, Boston, MA, May 2002:181-183.
Goodison, B. E., P. Y. T. Louie, and D. Yang. 1998. WMO solid precipitation measurement intercomparison final report. WMO TD 982, World Meteorological Organization, Geneva, Switzerland.
National Research Council. 1998. Future of the National Weather Service Cooperative Weather Network. National Academies Press, Washington, D.C.
National Research Council. 2001. A Climate Services Vision: First Steps Toward the Future. National Academies Press, Washington, D.C.
Redmond, K. T. 1992. Effects of observation time on interpretation of climatic time series – A need for consistency. Eighth Annual Pacific Climate (PACLIM) Workshop. Pacific Grove, CA, March 1991:141-150.
Redmond, K. T. 2004. Photographic documentation of long-term climate stations. Available from ftp://ftp.wrcc.dri.edu/nps/photodocumentation.pdf. (accessed 15 August 2004)
Redmond, K. T., and D. B. Simeral. 2004. Climate monitoring comments: Central Alaska Network Inventory and Monitoring Program. Available from ftp://ftp.wrcc.dri.edu/nps/alaska/cakn/npscakncomments040406.pdf. (accessed 6 April 2004)
Redmond, K. T., D. B. Simeral, and G. D. McCurdy. 2005. Climate monitoring for southwest Alaska national parks: network design and site selection. Report 05-01. Western Regional Climate Center, Reno, Nevada.
Redmond, K. T., and G. D. McCurdy. 2005. Channel Islands National Park: Design considerations for weather and climate monitoring. Report 05-02. Western Regional Climate Center, Reno, Nevada.
Sevruk, B., and W. R. Hamon. 1984. International comparison of national precipitation gauges with a reference pit gauge. Instruments and Observing Methods, Report No 17, WMO/TD – 38, World Meteorological Organization, Geneva, Switzerland.
Simpson, J. J., G. L. Hufford, C. Daly, J. S. Berg, and M. D. Fleming. 2005. Comparing maps of mean monthly surface temperature and precipitation for Alaska and adjacent areas of Canada produced by two different methods. Arctic 58:137-161.
Whiteman, C. D. 2000. Mountain Meteorology: Fundamentals and Applications. Oxford University Press, Oxford, UK.
Wilson, E. O. 1998. Consilience: The Unity of Knowledge. Knopf, New York.
World Meteorological Organization. 1983. Guide to meteorological instruments and methods of observation, No. 8, 5th edition. World Meteorological Organization, Geneva, Switzerland.
World Meteorological Organization. 2005. Organization and planning of intercomparisons of rainfall intensity gauges. World Meteorological Organization, Geneva, Switzerland.
Yang, D., B. E. Goodison, J. R. Metcalfe, V. S. Golubev, R. Bates, T. Pangburn, and C. Hanson. 1998. Accuracy of NWS 8” standard nonrecording precipitation gauge: results and application of WMO intercomparison. Journal of Atmospheric and Oceanic Technology 15:54-68.
Yang, D., B. E. Goodison, J. R. Metcalfe, P. Louie, E. Elomaa, C. Hanson, V. Golubev, T. Gunther, J. Milkovic, and M. Lapin. 2001. Compatibility evaluation of national precipitation gauge measurements. Journal of Geophysical Research 106:1481-1491.