Using lidar and radar to evaluate cloud forecasts in operational models. Anthony Illingworth, Robin Hogan, Ewan O'Connor, University of Reading, UK, and many others at the University of Reading and in the CloudNET team. DTC Workshop, Boulder, 2-4 Nov 2010

Plan of Talk. 1. Using ground-based radar and lidar (mean and PDF of cloud properties: vertical profiles of cloud fraction, IWC, LWC). 2. Skill scores: the right cloud in the right place. ETS? New skill-score metrics. 3. ARM sites for model evaluation. 4. First results from the A-Train: CloudSat and CALIPSO.

How skillful is a forecast? ECMWF 500-hPa geopotential anomaly correlation. Most model evaluations of clouds test the cloud climatology; what about individual forecasts? The standard measure shows an ECMWF forecast half-life of ~6 days in 1980 and ~9 days in 2000, but it is virtually insensitive to clouds!

Geopotential height anomaly versus vertical velocity: cloud is noisier than geopotential height Z because it is separated from it by around two orders of differentiation: cloud ~ vertical wind ~ relative vorticity = ∇²(streamfunction) ~ ∇²(pressure). This suggests cloud observations should be used routinely to evaluate models.

The Cloudnet methodology. Recently completed EU project: www.cloud-net.org; BAMS article, June 2007. Aim: to retrieve and evaluate the crucial cloud variables in forecast and climate models using radar and lidar. Models: Met Office (4-km, 12-km and global), ECMWF, Météo-France, KNMI RACMO, Swedish RCA model, DWD. Variables: target classification, cloud fraction, liquid water content, ice water content, drizzle rate, mean drizzle drop size, ice effective radius, TKE dissipation rate. Sites: 4 Cloudnet sites in Europe, 6 ARM sites including 3 for the mobile facility. Period: several years of near-continuous data from each site. Crucial aspects: common formats (including errors and data-quality flags) allow all algorithms to be applied at all sites to evaluate all models; evaluate for months and years, to avoid unrepresentative case studies. April 2011 onwards: more European sites (Italy, Ireland).

[Map of cloud profiling sites: Cloudnet sites, ARM/NOAA sites and ARM Mobile Facility deployments.] Sites shown: Pt Reyes, NSA, SGP, Eureka, Mace Head, Cabauw, Chilbolton, Lindenberg, Palaiseau, Black Forest, Azores, Potenza, new site at Jülich (JOYCE), Shengzhou, Niamey, Manus, Darwin, Nauru. Core instrumentation at each site: cloud radar, lidar, microwave radiometers, rain gauge.

Standard CloudNET observations (e.g. Chilbolton): radar; lidar, rain gauge, radiometers. But can the average user make sense of these measurements?

First step: target classification. Combining radar, lidar and model allows the type of cloud (or other target) to be identified; from this we can calculate cloud fraction in each model gridbox. Example from a US ARM site, with targets classified as ice, liquid, rain, aerosol and insects: we need to distinguish insects from cloud.
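
As a rough illustration of that gridbox-averaging step, the Python sketch below (not the Cloudnet code; the category codes and the pixels-per-gridbox value are invented for the example) turns a target-classification mask into cloud fraction on a model grid:

    # Hypothetical sketch: cloud fraction per model gridbox from a
    # target-classification mask. Assumed category codes: 1 = ice,
    # 2 = liquid; aerosol, rain and insects are excluded as non-cloud.
    import numpy as np

    def cloud_fraction(classification, pixels_per_gridbox):
        """Mean cloud occurrence in consecutive blocks of observed pixels."""
        is_cloud = np.isin(classification, [1, 2])      # ice or liquid only
        n_boxes = is_cloud.size // pixels_per_gridbox
        trimmed = is_cloud[:n_boxes * pixels_per_gridbox]
        return trimmed.reshape(n_boxes, pixels_per_gridbox).mean(axis=1)

    # e.g. 30 observed profiles falling within each model gridbox
    mask = np.random.randint(0, 6, size=3000)           # fake classification
    print(cloud_fraction(mask, 30)[:5])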

Cloud fraction at Chilbolton: observations compared with the Met Office mesoscale model, ECMWF global model, Météo-France ARPEGE model, KNMI RACMO model and Swedish RCA model.

Cloud fraction in 7 models: mean and PDF for 2004 for Chilbolton, Paris and Cabauw. Results are uncertain above 7 km, as clouds undetectable by the instruments must be removed from the model. Over 0-7 km, all models except DWD underestimate mid-level cloud; some have separate radiatively inactive snow (ECMWF, DWD); the Met Office has combined ice and snow but still underestimates cloud fraction. There is a wide range of low cloud amounts in the models, and not enough overcast boxes. Illingworth et al., BAMS, 2007.

Ice water content. IWC is estimated from radar reflectivity and temperature. Rain events are excluded from the comparison due to mm-wave attenuation; for IWC above rain, use cm-wave radar (e.g. Hogan et al., JAM, 2006). Over 3-7 km, ECMWF and the Met Office are within the observational errors at all heights. Encouraging: AMIP implied an error of a factor of 10! DWD (pink) is far too low. Be careful in interpretation: the mean IWC is dominated by occasional large values, so the PDF is more relevant for radiative properties; the DWD (pink) PDF is best apart from the maximum bin, yet its mean value is worst.

A change to the Météo-France cloud scheme. Compare cloud fraction to observations before and after April 2003; note that cloud fraction and water content are entirely diagnostic. But human observations indicate the model now underestimates mean cloud cover! Compensation of errors: the overlap scheme was changed from random to maximum-random in April 2003.

Equitable threat score. The ETS is a widely used skill score: 1 = perfect forecast, 0 = random forecast. Here it measures the skill of forecasting cloud fraction > 0.05, assessing the weather of the model, not its climate. A persistence forecast is shown for comparison. Skill is lower in summer convective events; the Met Office global and mesoscale models are equally good.

Contingency tables (DWD model, Murgtal):

                     Observed cloud       Observed clear-sky
    Model cloud      a: cloud hit         b: false alarm
                     a = 7194             b = 4098
    Model clear-sky  c: miss              d: clear-sky hit
                     c = 4502             d = 41062

For a given set of observed events there are only 2 degrees of freedom in all possible forecasts (e.g. a and b), because 2 quantities are fixed: the number of events that occurred, n = a + b + c + d, and the base rate (observed frequency of occurrence), p = (a + c)/n.

Equitable Threat Score. With the contingency table defined above (n = a + b + c + d; base rate p = (a + c)/n), the expected number of hits for a random forecast is E(a) = (total number of modelled cloud events) × p = (a + b)(a + c)/n. Then ETS = {cloud hits − E(cloud hits)} / {(observed or modelled cloud) − E(cloud hits)} = {a − E(a)} / {(a + b + c) − E(a)}, but this is not meaningful as p → 0. The Heidke Skill Score is uniquely related to it by ETS = HSS/(2 − HSS); HSS is unconditionally equitable, but again not meaningful as p → 0. Log of Odds Ratio: LOR = ln(ad/bc) (a perfect forecast scores infinity).
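
For concreteness, a minimal Python sketch of these three scores, fed with the DWD Murgtal table from the previous slide (an illustration only, not any operational verification code):

    # ETS (Gilbert Skill Score), HSS and LOR from a 2x2 contingency table.
    import math

    a, b, c, d = 7194, 4098, 4502, 41062   # hit, false alarm, miss, clear-sky hit
    n = a + b + c + d
    E_a = (a + b) * (a + c) / n            # expected hits, random forecast

    ets = (a - E_a) / (a + b + c - E_a)    # Equitable Threat Score
    hss = 2 * ets / (1 + ets)              # from ETS = HSS / (2 - HSS)
    lor = math.log(a * d / (b * c))        # Log of Odds Ratio

    print(f"ETS = {ets:.3f}, HSS = {hss:.3f}, LOR = {lor:.2f}")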

Desirable properties of verification measures. 1. Equitable: all random forecasts receive an expected score of zero; constant forecasts of occurrence or non-occurrence also score zero. Note that forecasting the right cloud climatology versus height, but with no other skill, should also score zero. 2. Useful for rare events: almost all measures are degenerate in that they asymptote to 0 or 1 for vanishingly rare events. Stephenson et al. (2008) explained this behaviour: almost all scores have a meaningless limit as the base rate p → 0; HSS tends to zero and LOR tends to infinity. They proposed the Extreme Dependency Score, EDS = 2 ln[(a + c)/n] / ln(a/n) − 1, where n = a + b + c + d. It can be shown that this score tends to a meaningful limit as p → 0.

Symmetric extreme dependency score. EDS problems: it is easy to hedge (unless calibrated), e.g. by predicting cloud all the time, and it is not equitable. Both are solved by defining a symmetric version (same score if model and observations are swapped), with all the benefits of EDS and none of the drawbacks: SEDS = ln[E(a)/n] / ln(a/n) − 1, with standard error σ = √[(1/a) − 1/(a + c)] × {−ln[E(a)/n] / ln²(a/n)}. Hogan, O'Connor and Illingworth (2009, QJRMS).
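
Continuing the sketch above, EDS and SEDS follow directly from the same contingency table (formulas as given on these slides; the standard-error expression is omitted here):

    # EDS (Stephenson et al. 2008) and SEDS (Hogan et al. 2009)
    # from contingency-table entries a, b, c, d.
    import math

    def eds(a, b, c, d):
        n = a + b + c + d
        return 2.0 * math.log((a + c) / n) / math.log(a / n) - 1.0

    def seds(a, b, c, d):
        n = a + b + c + d
        expected_a = (a + b) * (a + c) / n   # E(a) for a random forecast
        return math.log(expected_a / n) / math.log(a / n) - 1.0

    print(eds(7194, 4098, 4502, 41062), seds(7194, 4098, 4502, 41062))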

Skill versus height. Most scores are not reliable near the tropopause, because cloud fraction tends to zero there (compare SEDS, EDS, LBSS, HSS and LOR). The new score reveals: skill tends to decrease slowly towards the tropopause; mid-level clouds (4-5 km) are the most skilfully predicted, particularly by the Met Office; boundary-layer clouds are the least skilfully predicted.

Skill versus cloud-fraction threshold. Consider 7 models evaluated over 3 European sites in 2003-2004. HSS implies that skill decreases significantly for larger cloud-fraction thresholds, whereas LOR implies that skill increases for larger cloud-fraction thresholds.

Skill versus cloud-fraction threshold (continued). Compared with HSS and LOR, SEDS has much flatter behaviour for all models (except the Met Office, which significantly underestimates high-cloud occurrence).

Skill versus lead time, 2004 and 2007. Only possible for the UK Met Office 12-km model and the German DWD 7-km model. There is a steady decrease of skill with lead time, and both models appear to improve between 2004 and 2007. Generally the UK model is best over the UK and the German model best over Germany; an exception is Murgtal in 2007, where the Met Office model wins.

Key properties for estimating half-life. We wish to model the score S versus forecast lead time t as S(t) = S0 exp(−t/τ) = S0 2^(−t/τ_1/2), where τ_1/2 is the forecast half-life. We need linearity: some measures saturate at the high-skill end (e.g. Yule's Q / ORSS), which leads to a misleadingly long half-life. We also need equitability: the formula above assumes that the score tends to zero for very long forecasts, which will only occur if the measure is equitable.

SEDS has the property of linearity. Example with n = 16, p = 1/4: moving from the best possible forecast (a = 4, b = 0, c = 0, d = 12), through a random unbiased forecast (a = 1, b = 3, c = 3, d = 9), to a constant forecast of occurrence (a = 4, b = 12, c = 0, d = 0), the SEDS score decreases nearly linearly.

Forecast half-life, Met Office and DWD, 2004 and 2007. Fit an inverse exponential, S(t) = S0 2^(−t/τ_1/2), where S0 is the initial score and τ_1/2 is the half-life. A noticeably longer half-life is fitted after 36 hours; the same thing was found for Met Office rainfall forecasts (Roberts 2008). The first timescale is due to data assimilation and convective events, the second to more predictable large-scale weather systems.
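
One way to perform such a fit, as a sketch with invented scores and lead times (a real analysis would fit the two regimes either side of the 36-hour break separately):

    # Fit S(t) = S0 * 2^(-t / tau_half) by linear least squares on log2(S).
    import numpy as np

    t = np.array([0, 12, 24, 36, 48, 60, 72], dtype=float)    # lead time (h)
    S = np.array([0.42, 0.38, 0.34, 0.31, 0.29, 0.27, 0.26])  # made-up scores

    slope, intercept = np.polyfit(t, np.log2(S), 1)  # log2 S = log2 S0 - t/tau
    tau_half = -1.0 / slope
    print(f"S0 = {2**intercept:.2f}, half-life = {tau_half:.0f} hours")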

What is the origin of the term ETS? The first use of "Equitable Threat Score" was Mesinger & Black (1992), as a modification of the Threat Score a/(a + b + c). They cited Gandin and Murphy's equitability requirement that constant forecasts score zero (which ETS satisfies), although it does not satisfy the requirement that non-constant random forecasts have an expected score of zero. ETS is now one of the most widely used verification measures in meteorology. An example of rediscovery: Gilbert (1884) discussed a/(a + b + c) as a possible verification measure in the context of Finley's (1884) tornado forecasts, noted its deficiencies, and proposed exactly the same formula as ETS, 108 years before! We suggest that ETS be referred to as the Gilbert Skill Score (GSS), or use the Heidke Skill Score, which is unconditionally equitable and is uniquely related to it by ETS = HSS/(2 − HSS). Hogan, Ferro, Jolliffe and Stephenson (WAF).

ARM data: CCPP project. The US Dept of Energy Climate Change Prediction Program recently funded a 5-year consortium project, centred at Brookhaven, NY, to implement an updated Cloudnet processing system at Atmospheric Radiation Measurement (ARM) radar-lidar sites worldwide. It ingests ARM's cloud-boundary diagnosis but uses Cloudnet for the statistics, and new diagnostics are being tested. Testing of NWP models: NCEP, ECMWF, Met Office, Météo-France... With over a decade of data at several sites: have cloud forecasts improved over this time?

Figure 3. Comparison of the seasonal composites of cloud fraction derived from observations (a) at the ARM SGP site for the years 2004 to 2009 with the values held in the ECMWF model (b), NCEP model (c) and the global version of the Met Office model (d).

Figure 4. Seasonal composites of the skill score SEDS at the ARM SGP site for the years 2004 to 2009 for ERA-Interim (a), ECMWF (b), NCEP (c) and the global version of the Met Office model (d).

Figure 5. Same as Fig. 3 except for ice water content: observations at the ARM SGP site (a), ECMWF (b), NCEP (c) and Met Office (d).

Figure 6. Same as Fig. 3 except for liquid water content: observations at the ARM SGP site (a), ECMWF (b), NCEP (c) and Met Office (d).

Darwin: first results. Cloud fraction, ECMWF model versus observations.

SGP cloud-fraction skill, 2004-2010, with ECMWF ERA as reference. And SEDS skill (with error) as a function of lead time?

Spaceborne radar, lidar and radiometers. The A-Train (NASA, 700-km orbit): CloudSat 94-GHz radar (launched 2006); CALIPSO 532/1064-nm depolarization lidar; MODIS multi-wavelength radiometer; CERES broad-band radiometer; AMSR-E microwave radiometer. EarthCARE (launch 2015; ESA+JAXA; 400-km orbit, hence more sensitive): 94-GHz Doppler radar; 355-nm HSRL/depolarization lidar; multispectral imager; broad-band radiometer. And a heart-warming name.

Spaceborne overview. What do spaceborne radar and lidar see? Towards a unified retrieval of ice clouds, liquid clouds, precipitation and aerosol, using a variational retrieval framework, which provides error estimates. Results from the CloudSat-CALIPSO ice-cloud retrieval; consistency with top-of-atmosphere radiative fluxes; evaluation and improvement of models; spatial structure of mid-latitude and tropical cirrus. Test the new version of the ECMWF model, which has prognostic cloud fraction, IWC and LWC; the current operational model has only prognostic total water content, with the IWC/LWC ratio diagnosed from temperature.

What do CloudSat and CALIPSO see? Radar: sensitivity ~D⁶, detects the whole profile, and the surface echo provides an integral constraint. Lidar: ~D², more sensitive to thin cirrus and liquid clouds, but attenuated. Target classification categories: insects; aerosol; rain; supercooled liquid cloud; warm liquid cloud; ice and supercooled liquid; ice; clear; no ice/rain but possibly liquid; ground. Delanoë and Hogan (2008, 2010).
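
The D⁶ versus D² weighting is easy to see numerically. The toy calculation below (an arbitrary exponential size distribution, illustrative only) shows how strongly the radar-weighted mean size is pulled towards the largest particles relative to the lidar:

    # Weighted-mean particle diameter under lidar (D^2) and radar (D^6) weighting.
    import numpy as np

    D = np.linspace(1e-6, 2e-3, 2000)       # diameter (m)
    N = np.exp(-D / 2e-4)                   # toy exponential size distribution

    for power, name in [(2, "lidar (D^2)"), (6, "radar (D^6)")]:
        w = N * D**power
        mean_D = np.trapz(D * w, D) / np.trapz(w, D)
        print(f"{name}: weighted-mean diameter = {mean_D * 1e6:.0f} um")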

Ingredients of a variational retrieval. Aim: to retrieve an optimal estimate of the properties of clouds, aerosols and precipitation by combining these measurements; to make use of integral constraints, the components must be retrieved together. For each ray of data, define an observation vector y: radar reflectivity values, lidar backscatter values, infrared radiances, shortwave radiances, and the surface radar echo (which provides the two-way attenuation). Define a state vector x of properties to be retrieved: ice-cloud extinction, number concentration and lidar-ratio profile; liquid water content profile and number concentration; rain-rate profile and number concentration; aerosol extinction-coefficient profile and lidar ratio. A forward model H(x) predicts the observations, with a microphysical component (particle scattering properties) and a radiative-transfer component.

The cost function. The essence of the method is to find the state vector x that minimizes a cost function of the form J = Σ_{i=1..ny} [y_i − H_i(x)]² / σ²_{y,i} + Σ_{i=1..nx} [x_i − b_i]² / σ²_{b,i} + smoothness constraints. The forward model H(x) predicts the observations from the state vector x; each observation y_i is weighted by the inverse of its error variance; some elements of x are constrained by a prior estimate b; and the final term can be used to penalize curvature in the retrieved profile.
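
A minimal numerical sketch of this cost function (observation and prior terms only; the smoothness term is omitted, and the linear H below is a stand-in for the real scattering and radiative-transfer forward model):

    # Variational cost: J(x) = sum[(y - H(x))^2 / sigma_y^2]
    #                        + sum[(x - b)^2 / sigma_b^2]
    import numpy as np

    def cost(x, y, forward_model, sigma_y, b, sigma_b):
        J_obs = np.sum(((y - forward_model(x)) / sigma_y) ** 2)  # observation term
        J_prior = np.sum(((x - b) / sigma_b) ** 2)               # prior term
        return J_obs + J_prior

    # Toy linear forward model standing in for the radar/lidar/radiance physics
    K = np.array([[1.0, 0.5], [0.2, 1.0], [0.0, 1.5]])
    H = lambda x: K @ x

    x_true = np.array([1.0, 2.0])
    y = H(x_true) + np.array([0.05, -0.03, 0.02])                # noisy obs
    print(cost(x_true, y, H, np.full(3, 0.1), np.zeros(2), np.ones(2)))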

Example ice cloud retrievals, Delanoë and Hogan (2010): lidar observations and lidar forward model; radar observations and radar forward model; retrieved visible extinction, ice water content and effective radius.

Evaluation using CERES TOA fluxes. Radar-lidar retrieved profiles containing only ice were used with the Edwards-Slingo radiation code to predict CERES fluxes. Small biases but a large random shortwave error: 3D effects? Shortwave: bias 4 W m⁻², RMSE 71 W m⁻². Longwave: bias 0.3 W m⁻², RMSE 14 W m⁻². (Nicky Chalmers)

A-Train versus models: ice water content, 14 July 2006. Half an orbit; 150° longitude at the equator. Delanoë et al. (2010).

Gridbox-mean and in-cloud mean comparisons: both models lack high thin cirrus; the Met Office has too narrow a distribution of in-cloud IWC; ECMWF lacks high IWC values, remedied in the new model version.

PDF of ice water path, July 2006. Good agreement with the observations over six orders of magnitude, but at high IWP the models are too low, especially the operational ECMWF model. The new ECMWF model with prognostic IWC is much better than the current operational model, in which IWC/LWC is diagnosed from total cloud water using temperature.

Summary and outlook. Model comparisons provide rapid feedback to NWP centres on the performance of new cloud schemes: examples are the Météo-France change in 2003 and the testing of the new ECMWF scheme before operational use. In Europe there is higher skill for mid-level cloud and lower skill for boundary-layer cloud, but a larger seasonal contrast in the southern US. Findings applicable to other verification problems: the Symmetric Extreme Dependency Score is a reliable measure of skill for both common and rare events (given a large enough sample); many measures regarded as equitable are only so for very large samples, including the Equitable Threat Score, but they can be rescaled. Future work (in addition to CCPP): CloudSat and CALIPSO, all four years, annual cycle, regional skill; Cloudnet extended to ARM sites plus more sites in Europe; the EU-FP7 ACTRIS infrastructure for rapid feedback to NWP modellers.