Variability of the Measurement of Temperature in the Canadian Surface Weather and Reference Climate Networks




Gary Beaney, Tomasz Stapf, Brian Sheppard
Meteorological Service of Canada
4905 Dufferin Street, Downsview, Ontario, Canada
Phone: (416) 739-4111, Fax: (416) 739-5721
gary.beaney@ec.gc.ca

ABSTRACT

To assess the uncertainties of temperature and humidity measurement, the Test and Evaluation Division of the Meteorological Service of Canada organized a field evaluation of the main temperature and humidity sensor models currently used in Canadian surface weather and climate networks. The field study began in the fall of 2002 at the Centre for Atmospheric Research Experiments (CARE) in Egbert, Ontario. Twelve pairs of temperature and humidity sensors were set up in various configurations with respect to aspiration and shield type. Data were stratified by temperature, and operational comparability and functional precision values were derived to determine the differences between configurations and the variability within configurations.

INTRODUCTION

Over the past few decades, the progressive automation of the Canadian surface weather and climate networks has been undertaken by each of five national regions (Pacific and Yukon, Prairie and Northern, Ontario, Quebec, Atlantic). As a consequence, these networks are populated with a variety of different sensors installed in various configurations. Although individual manufacturers provide performance specifications for these sensors, the Meteorological Service of Canada had not undertaken nation-wide field evaluations. To assess the measurement uncertainties of temperature and humidity, the Test and Evaluation Division of the Meteorological Service of Canada initiated a field evaluation of the major temperature and humidity sensor models currently used in Canadian surface weather and climate networks.
The field study began in December 2002 and continued for a ten-month period at the Centre for Atmospheric Research Experiments (CARE) in Egbert, Ontario. Due to the large number of sensors involved, each parameter (temperature and humidity) was analysed separately. While this paper addresses air temperature sensors only, a similar paper presenting the results of the humidity analysis will follow at a later date.

REFERENCE AND TEST SENSORS

Because it was impractical to represent all sensors and configurations present in the field, this study includes only those most representative of the Canadian surface weather and climate networks. Seven different sensor models were identified as being most representative of those currently in use. For each model tested, two identical sensors were evaluated to establish their differences from each other as well as from a common reference.

In addition to evaluating differences in performance between different sensor models, the study also addressed the influence of different sensor configurations. Four different configurations, using different combinations of sensor screen types and aspiration, were established for the majority of sensors tested. The two types of screens most commonly used in the field are wooden Stevenson screens (Figure 1) and Gill multi-plate screens (Figure 2). These screens are either aspirated (a constant flow of air is drawn over the enclosed sensor) or non-aspirated (left to ventilate naturally).

Figure 1 - Test sensors in a non-aspirated Stevenson screen.
Figure 2 - Test sensors in Gill multi-plate screens.

All sensors in their various configurations included in this study are listed in Table 1. It should be noted that after preliminary analysis, four sensors exhibited uncharacteristically high variations from all other sensors under test (including the reference). As the cause of these variations could not be easily established, the data from these sensors were not included in this analysis but will be analysed separately at a later date. A list of the sensors removed from this analysis can be found in Appendix A. From this point forward, any reference to a particular sensor encompasses both the sensor model and the installation configuration.

Table 1. Sensors under test and their configurations.

Sensor              Screen/Shield             Aspiration
44002A/WS/NA A      (WS) Wooden Screen        (NA) Non-Aspirated
44002A/WS/NA B      (WS) Wooden Screen        (NA) Non-Aspirated
442/WS/A A          (WS) Wooden Screen        (A) Aspirated
442/WS/A B          (WS) Wooden Screen        (A) Aspirated
HMP35C/G/NA A       (G) Gill multi-plate      (NA) Non-Aspirated
HMP35C/G/NA B       (G) Gill multi-plate      (NA) Non-Aspirated
HMP45C/G/NA A       (G) Gill multi-plate      (NA) Non-Aspirated
HMP45C/G/NA B       (G) Gill multi-plate      (NA) Non-Aspirated
HMP45C/WS/NA A      (WS) Wooden Screen        (NA) Non-Aspirated
HMP45C/WS/NA B      (WS) Wooden Screen        (NA) Non-Aspirated
HMP45C2/G/A A       (G) Gill multi-plate      (A) Aspirated
HMP45C2/G/A B       (G) Gill multi-plate      (A) Aspirated
HMP45C2/G/NA A      (G) Gill multi-plate      (NA) Non-Aspirated
HMP45C2/G/NA B      (G) Gill multi-plate      (NA) Non-Aspirated
HMP45C2/WS/A A      (WS) Wooden Screen        (A) Aspirated
HMP45C2/WS/A B      (WS) Wooden Screen        (A) Aspirated
HMP45C2/WS/NA A     (WS) Wooden Screen        (NA) Non-Aspirated
HMP45CF/G/NA A      (G) Gill multi-plate      (NA) Non-Aspirated
HMP45CF/G/NA B      (G) Gill multi-plate      (NA) Non-Aspirated
HMP45CF/WS/NA A     (WS) Wooden Screen        (NA) Non-Aspirated
HMP45CF/WS/NA B     (WS) Wooden Screen        (NA) Non-Aspirated
PRT1000/WS/NA A     (WS) Wooden Screen        (NA) Non-Aspirated

Figure 3 - Field reference temperature sensor (YSI20048) in an aspirated wooden screen.

The average of three reference sensors of the same model (YSI SP20048) served as the site reference. This sensor is commonly used operationally by the Meteorological Service of Canada and has a long record of reliable performance. All three reference sensors were installed in aspirated wooden screens (Figure 3) in a triangle formation at the test site, and their average was taken to represent the true temperature at the centre of this triangle.
Test sensors were installed no more than 10 m from the centre of the reference triangle, in accordance with standard ASTM 4430; all requirements outlined in ASTM 4430 were met for this experiment. The physical layout of all reference sensors and sensors under test is illustrated in Figure 4. A general view of the setup at the CARE test site is provided in Figure 5.

Figure 4 - Layout of all reference and test sensors at the CARE test site. Reference sensors are located in the yellow boxes and are labeled (MSC TD A Aspirated).

Figure 5 - General view of the temperature/humidity sensor installation at the CARE test site.

All data collected for this analysis, from both reference sensors and sensors under test, were of minutely resolution. Minutely reference average values were included in the analysis only when the differences between all reference sensors were less than 0.5 °C. To establish a common test period, minutely values were included only when data were present for all sensors under test and the reference; if any one sensor was missing a particular minutely value, that value was removed from all other sensors under test.

DATA ANALYSIS

After filtering and the removal of missing data, the resulting dataset was separated into three categories based on reference temperature:

1) <= -5 °C
2) > -5 °C and <= 5 °C
3) > 5 °C

These temperature categories were established to identify temperature-dependent variability in the sensors under test. Each category was analysed by applying the formulas for operational comparability and functional precision. Operational comparability was used to quantify the variability of sensors under test from the reference; as defined in ASTM 4430, it is the root mean square of the difference between two simultaneous readings from two systems (reference and sensor under test) measuring the same quantity in the same environment. Functional precision was used to quantify the variability of identical sensors under test from each other; as defined in ASTM 4430, it is the root mean square of the difference between simultaneous readings from two identical sensors.

RESULTS

Operational comparability scores for all three temperature categories are presented in Figures 6, 7 and 8 (aspirated sensors are highlighted in red). Of all configurations tested, the operational comparability scores of aspirated sensors (in both Gill and wooden screens) agreed best with the reference.
As the reference sensors were themselves aspirated, this close agreement was somewhat expected. In addition to the aspiration dependence, a temperature dependence was also apparent, with the lowest operational comparability scores for most sensors occurring in the middle temperature range (> -5 °C and <= 5 °C).
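The data filtering, temperature stratification and ASTM 4430 statistics described above can be sketched in a few lines of code. This is a minimal illustration only; the function and variable names are invented, and the paper does not describe the MSC's actual processing tools:

```python
import math

def common_period(reference_runs, test_runs, max_ref_spread=0.5):
    """Keep only the minutes at which every sensor reports a value and
    the reference sensors agree to within max_ref_spread (deg C).
    Each argument maps a sensor name to {minute: temperature}."""
    all_runs = list(reference_runs.values()) + list(test_runs.values())
    minutes = set(all_runs[0]).intersection(*map(set, all_runs[1:]))
    kept = {}
    for m in sorted(minutes):
        refs = [run[m] for run in reference_runs.values()]
        if max(refs) - min(refs) < max_ref_spread:
            kept[m] = sum(refs) / len(refs)   # reference average = "truth"
    return kept

def stratify(ref_avg):
    """Split minutes into the paper's three reference-temperature categories."""
    cats = {"<=-5": [], ">-5 and <=5": [], ">5": []}
    for m, t in ref_avg.items():
        key = "<=-5" if t <= -5 else (">-5 and <=5" if t <= 5 else ">5")
        cats[key].append(m)
    return cats

def rms_difference(series_a, series_b, minutes):
    """Root mean square of simultaneous differences (ASTM 4430):
    against the reference this is operational comparability; between
    two identical sensors it is functional precision."""
    diffs = [series_a[m] - series_b[m] for m in minutes]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Note that the same `rms_difference` computation yields both statistics; only the choice of the two input series differs.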

Figure 6 - Operational comparability scores for all sensors under test when reference temperature is <= -5 °C. Red bars represent aspirated sensors; blue bars represent non-aspirated sensors.

Figure 7 - Operational comparability scores for all sensors under test when reference temperature is > -5 °C and <= 5 °C. Red bars represent aspirated sensors; blue bars represent non-aspirated sensors.

Figure 8 - Operational comparability scores for all sensors under test when reference temperature is > 5 °C. Red bars represent aspirated sensors; blue bars represent non-aspirated sensors.

The distributions of the differences between the reference and each sensor under test were analysed, and Student's t-tests were performed to determine whether differences between operational comparability scores were the result of higher variability or simply a higher population mean. The results of the t-tests comparing the means of all sensors under test with the reference, for all three temperature categories, are displayed in Table 2; sensors with means significantly different from that of the reference are highlighted in red. The majority of sensors in all three temperature categories had significantly different means at the 95% confidence level. Some temperature dependence of the t-test results was observed: 6 of 22 sensors showed no significant difference from the reference in the <= -5 °C category, while only 2 of 22 showed no significant difference in the > -5 °C and <= 5 °C category. No consistent patterns with respect to sensor model, aspiration, or screen type were observed in the t-test results.

Table 2 - t-test results for the comparison of reference and sensor-under-test means, for each temperature category (<= -5 °C; > -5 °C and <= 5 °C; > 5 °C). Sensors shown to have significantly different means at the 95% confidence level are highlighted in red. The 22 sensors evaluated in each category are:

44002A/WS/NA A, 44002A/WS/NA B, 442/WS/A A, 442/WS/A B, HMP35C/G/NA A, HMP35C/G/NA B, HMP45C/G/NA A, HMP45C/G/NA B, HMP45C/WS/NA A, HMP45C/WS/NA B, HMP45C2/G/A A, HMP45C2/G/A B, HMP45C2/G/NA A, HMP45C2/G/NA B, HMP45C2/WS/A A, HMP45C2/WS/A B, HMP45C2/WS/NA A, HMP45CF/G/NA A, HMP45CF/G/NA B, HMP45CF/WS/NA A, HMP45CF/WS/NA B, PRT1000/WS/NA A.
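The comparison of means underlying Table 2 can be reproduced with a paired t-test on the minute-by-minute differences between each sensor and the reference. The sketch below is illustrative (the paper does not state which statistical tool was used) and applies the large-sample critical value of 1.96 for the 95% confidence level, which is appropriate for the sample sizes in this study:

```python
import math

def paired_t(sensor, reference):
    """Paired t statistic for simultaneous sensor and reference readings."""
    diffs = [s - r for s, r in zip(sensor, reference)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

def means_differ(sensor, reference, critical=1.96):
    """True when the sensor mean differs from the reference mean at
    roughly the 95% confidence level (large-sample critical value)."""
    return abs(paired_t(sensor, reference)) > critical
```

A sensor can have a small mean offset yet a poor comparability score (high scatter), or vice versa, which is exactly the distinction the t-tests were used to draw.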

The most apparent pattern observed when comparing operational comparability scores and distributions of differences for the sensors under test involved aspirated vs. non-aspirated sensors. The distributions of the differences for the HMP45C2/WS sensor in its aspirated and non-aspirated configurations are displayed in Figures 9 and 10. The aspirated sensor distribution appears more peaked (leptokurtic), while the non-aspirated sensor distribution appears flatter (platykurtic) and less Gaussian, with a skew toward positive differences from the reference average. These differences were common to all aspirated vs. non-aspirated comparisons and were exacerbated in the > 5 °C temperature category, where the departures of the non-aspirated sensors became even more pronounced.

Figure 9 - Example of a frequency distribution (log scale) of the differences between a sensor under test (HMP45C2/WS/A A, <= -5 °C) and the reference in an aspirated screen. The red bar represents the middle (optimum) difference bin.

Figure 10 - Example of a frequency distribution (log scale) of the differences between a sensor under test (HMP45C2/WS/NA A, <= -5 °C) and the reference in a non-aspirated screen. The red bar represents the middle (optimum) difference bin.
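The shape differences visible in Figures 9 and 10 can be quantified with ordinary moment statistics: positive skewness indicates the tail toward positive differences, and negative excess kurtosis indicates the flatter (platykurtic) shape. A minimal sketch, with an invented function name, not taken from the paper:

```python
def moments(diffs):
    """Skewness and excess kurtosis of a series of differences.
    A Gaussian has skewness 0 and excess kurtosis 0; a flatter
    (platykurtic) distribution has negative excess kurtosis."""
    n = len(diffs)
    mean = sum(diffs) / n
    m2 = sum((d - mean) ** 2 for d in diffs) / n   # central moments
    m3 = sum((d - mean) ** 3 for d in diffs) / n
    m4 = sum((d - mean) ** 4 for d in diffs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0
```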

Differences from the reference average for the two sensors with the best and worst operational comparability scores are illustrated in Figure 11. (Note: for graphical purposes, the minutely datasets were reduced in resolution by retaining only the instantaneous minutely value on the hour. As the summary statistics of mean and coefficient of variation for the minutely and hourly series are virtually identical (Appendix B), the hourly time series provide a rough graphical representation of the differences from the reference.) As illustrated in Figure 11, both the mean of the differences and the degree of variability can differ significantly between sensors under test. Operational comparability scores for all other sensors under test lie between these two extremes.

Figure 11 - Differences from the reference average (<= -5 °C) for the sensors with the best (442/WS/A A) and worst (HMP45CF/WS/NA A) operational comparability scores.

While the HMP45CF/WS/NA A sensor had the highest operational comparability score in the <= -5 °C temperature category, and consequently compared worst with the reference, of all the identical pairs of sensors tested this pair had the lowest functional precision score. Hourly differences from the reference average for the two identical HMP45CF/WS/NA sensors (A and B) are illustrated in Figure 13. Hourly differences from the reference average for the pair of identical sensors with the worst functional precision score (HMP35C/G/NA A and B) are illustrated in Figure 12.
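The decimation used for plotting, and the Appendix B check that the reduced series preserves the summary statistics, can be sketched as follows (timestamps are assumed to be minute indices from the start of the record; this is an illustration, not the authors' code):

```python
import statistics

def on_the_hour(minutely):
    """Reduce a minutely series to its instantaneous on-the-hour values
    (minute index 0, 60, 120, ... from the start of the record)."""
    return {m: v for m, v in minutely.items() if m % 60 == 0}

def mean_and_cv(values):
    """Mean and coefficient of variation, the two summary statistics
    compared between the minutely and hourly series in Appendix B."""
    mu = statistics.fmean(values)
    return mu, statistics.pstdev(values) / mu
```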

Figure 12 - Differences from the reference average (<= -5 °C) for the pair of sensors with the worst functional precision score (HMP35C/G/NA A and B).

Figure 13 - Differences from the reference average (<= -5 °C) for the pair of sensors with the best functional precision score (HMP45CF/WS/NA A and B).

Functional precision scores for all three temperature categories are displayed in Figures 14, 15 and 16. As was observed with operational comparability, functional precision scores were lowest in the middle temperature category (> -5 °C and <= 5 °C). Unlike the operational comparability results, no clear pattern between aspirated and non-aspirated sensors was observed in the functional precision scores.

Figure 14 - Functional precision scores for all sensors under test when reference temperature is <= -5 °C. Red bars represent aspirated sensors; blue bars represent non-aspirated sensors.

Figure 15 - Functional precision scores for all sensors under test when reference temperature is > -5 °C and <= 5 °C. Red bars represent aspirated sensors; blue bars represent non-aspirated sensors.

Figure 16 - Functional precision scores for all sensors under test when reference temperature is > 5 °C. Red bars represent aspirated sensors; blue bars represent non-aspirated sensors.

DISCUSSION

Both the operational comparability and functional precision scores from this analysis illustrate a wide range of variability among operational sensors in the Canadian surface weather and climate networks. Temperature dependence, with respect to both the distributions of the differences from the reference and the differences between population means, was apparent in both sets of scores. Further stratification of the data by temperature may better identify the temperature ranges in which particular differences occur. Stratification by other meteorological parameters, such as wind speed, may also help to identify the specific differences between operational sensors; the significant differences observed between aspirated and non-aspirated sensors may be less apparent during higher winds because of greater natural ventilation. It should be noted, however, that the results of this experiment quantify the differences observed in one particular climate regime. The climatology of the test site is not representative of all measurement sites throughout the country, so the differences observed between sensors may not be of consistent magnitude or direction in other Canadian climate regimes. To fully illustrate the differences between currently operational sensors, similar experiments would be required in different climate regimes throughout Canada.
The fact that the same sensor (HMP45CF/WS/NA) had both the highest operational comparability score and the lowest functional precision score in the <= -5 °C temperature category illustrates the need for balance between sensor accuracy and sensor consistency in a climate network. Ideally, one would prefer a sensor that is both close to the truth and consistent in operation from sensor to sensor. If forced to choose between closeness to the truth coupled with inconsistency, and consistency coupled with departures from the true temperature, the answer is not straightforward. However, as systematic differences are traditionally easier to identify and remove than random differences, the latter may be the better choice, as long as the data from such sensors are suitably adjusted prior to use.
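The closing argument, that a consistent systematic bias is preferable to random scatter because it can be removed in post-processing, can be illustrated with a small simulation (entirely synthetic numbers, not the study's data): subtracting the mean difference eliminates a constant bias but leaves random scatter essentially untouched.

```python
import math
import random

def rmse(errors):
    """Root mean square error of a list of differences from the truth."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

random.seed(1)
truth = [random.uniform(-10.0, 10.0) for _ in range(10000)]

# Sensor A: consistent but systematically 0.3 deg C warm.
biased = [t + 0.3 for t in truth]
# Sensor B: unbiased on average but with 0.3 deg C random scatter.
noisy = [t + random.gauss(0.0, 0.3) for t in truth]

for name, readings in (("systematic", biased), ("random", noisy)):
    errors = [r - t for r, t in zip(readings, truth)]
    bias = sum(errors) / len(errors)
    corrected = [e - bias for e in errors]   # subtract the mean difference
    print(f"{name}: RMSE {rmse(errors):.3f} -> {rmse(corrected):.3f}")
```

The systematic sensor's RMSE collapses to essentially zero after the adjustment, while the noisy sensor's RMSE is nearly unchanged.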

CONCLUSION

Due to the vast geographic extent of Canada, the ability to properly identify variations in climate trends depends on instrument continuity from one climate region to another. This experiment identified a wide range of variability among currently operational sensors in the Canadian surface weather and climate networks, resulting from differences in sensor type, sensor configuration and air temperature. Although the Meteorological Service of Canada intends to test possible replacements for operational sensors in different climate regimes throughout Canada, the degree of variability of the data residing in the Canadian climate archive from currently operational sensors will remain unknown until similar experiments have been repeated in different climate regimes.

ACKNOWLEDGEMENTS

Our thanks go to the following colleagues, who provided vital contributions to this project: Peter Bowman, Bob Wilson and George Davis. We are grateful for the fruitful discussions with all of them.

Appendix A

Sensors removed from the experiment prior to analysis due to uncharacteristically high differences from both the reference and all other sensors under test:

Sensor              Screen/Shield          Aspiration
442/WS/NA A         (WS) Wooden Screen     (NA) Non-Aspirated
442/WS/NA B         (WS) Wooden Screen     (NA) Non-Aspirated
HMP45C2/WS/NA B     (WS) Wooden Screen     (NA) Non-Aspirated
PRT1000/WS/NA A     (WS) Wooden Screen     (NA) Non-Aspirated

Appendix B

Summary statistics for the minutely (N=406299) and hourly (N=6763) series.

                      Minutely              Hourly
Sensor                Mean      Coef.Var.   Mean      Coef.Var.
Reference Average     6.019103  2.15205     6.012748  2.15510
44002A/WS/NA A        6.124509  2.08976     6.118610  2.09253
44002A/WS/NA B        6.052633  2.11524     6.046364  2.11813
442/WS/A A            5.998492  2.15639     5.992243  2.15943
442/WS/A B            6.065454  2.14065     6.058375  2.14394
HMP35C/G/NA A         6.304771  2.06078     6.297886  2.06402
HMP35C/G/NA B         6.051136  2.15043     6.043970  2.15381
HMP45C/G/NA A         6.033197  2.16513     6.026342  2.16854
HMP45C/G/NA B         6.026166  2.16560     6.018894  2.16920
HMP45C/WS/NA A        6.076401  2.14517     6.068909  2.14861
HMP45C/WS/NA B        6.237260  2.08898     6.229667  2.09226
HMP45C2/G/A A         5.981902  2.16791     5.974659  2.17119
HMP45C2/G/A B         6.132289  2.10982     6.126034  2.11266
HMP45C2/G/NA A        6.102319  2.13094     6.095518  2.13424
HMP45C2/G/NA B        6.167084  2.09771     6.160065  2.10098
HMP45C2/WS/A A        6.040655  2.14264     6.034338  2.14561
HMP45C2/WS/A B        6.146586  2.10440     6.139071  2.10767
HMP45C2/WS/NA A       6.151596  2.11030     6.144211  2.11349
HMP45CF/G/NA A        6.144640  2.09118     6.136420  2.09483
HMP45CF/G/NA B        6.091604  2.10225     6.084622  2.10542
HMP45CF/WS/NA A       6.118393  2.08970     6.111363  2.09296
HMP45CF/WS/NA B       6.057680  2.10879     6.050991  2.11197
PRT1000/WS/NA A       6.243256  2.08055     6.236481  2.08349