Evaluating GCM clouds using instrument simulators
University of Washington
September 24, 2009
Why do we care about evaluating clouds in GCMs?
- General Circulation Models (GCMs) project future climate change
- Cloud feedbacks are a primary source of inter-model differences in climate sensitivity
- Quantitative evaluation: how do we determine which models to place our confidence in over others?
So, what's the problem with clouds?
- Differing scales: cloud processes act at 100 to 250 meters, while GCM resolution is hundreds of kilometers
- Sub-grid-scale processes must be parameterized, and there is no consensus on the right parameterization
- Compensating errors: models are constrained to get the integral (top-of-atmosphere radiation) right, but what about the integrand? The right top-of-atmosphere radiation can be obtained with differing cloud profiles
- If we have confidence in the clouds, we can be confident we get the right top-of-atmosphere radiation for the right reasons
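The "integral vs. integrand" point above can be made concrete with a toy example: two very different vertical cloud profiles (the integrand) can yield the same column integral, standing in here for top-of-atmosphere radiation. The numbers are purely illustrative.

```python
# Toy illustration of compensating errors: the column integral agrees
# even though the vertical structure is completely different.
profile_a = [0.0, 0.1, 0.4, 0.1, 0.0]   # mid-level cloud
profile_b = [0.3, 0.0, 0.0, 0.0, 0.3]   # high cloud over low cloud

integral_a = sum(profile_a)   # 0.6
integral_b = sum(profile_b)   # 0.6
# Same integral, different integrand: constraining only the top-of-
# atmosphere budget cannot distinguish these two model states.
```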
How are climate models evaluated?
- Component testing: ensure the pieces behave as expected
- Inter-model comparison: consistency across different models
- Comparison to observational data: test the ability of models to reproduce general observed features of past and current climates
- Look closer at the results: are the distributions consistent with observations? Quantitative measures
But it's not as easy as it sounds...
- The model world: geophysical cloud properties (from parameterizations); familiar mathematical quantities; gridbox-mean fields on the gridbox scale
- The real world: an instrument measures some sort of signal, and a retrieval algorithm is employed
- Complications: instrument sensitivity? Cloud attenuation? Multi-layer profiles? Spatial resolution?
Instrument simulators connect the two worlds
- Real world: cloud properties → remote sensing signals → retrieval algorithms → retrieved cloud properties
- Model world: model cloud properties → simulator → synthetic signals → retrieval algorithms → retrieved model cloud properties
- What would the instrument see in the model world? The simulator takes model output and produces a simulated instrument signal
- Allows a like-for-like comparison between model and observation
- Running a satellite simulator on model output is easier than running an inverse retrieval on observational data
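The simulator idea can be sketched with a toy "instrument" whose only property is a minimum detectable optical depth; the threshold and cloud values below are illustrative, not any real instrument's specification or the COSP implementation.

```python
# Minimal sketch: apply the instrument's sensitivity to the model's
# clouds before comparing with observations, so both sides share the
# instrument's limitations.
def simulate(model_tau, detection_limit=0.3):
    """Return the optical depths the toy instrument would report:
    clouds thinner than the detection limit are simply not seen."""
    return [tau for tau in model_tau if tau >= detection_limit]

model_clouds = [0.1, 0.5, 2.0, 0.05, 8.0]   # model optical depths
synthetic_retrieval = simulate(model_clouds)
# The synthetic retrieval can now be compared like-for-like with what
# the real instrument retrieves from the real atmosphere.
```

Without this step, the model's thin clouds would be counted against observations that could never have detected them.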
About the instrument simulators
- Cloud Feedback Model Inter-comparison Project (CFMIP) Observational Simulator Package (COSP)
About the MISR instrument
- Multi-angle Imaging SpectroRadiometer
- One of five instruments on board the NASA Terra platform
- Sun-synchronous polar orbit
- Nine different camera views
- Four different wavelengths
- 275 meter along-track, 250 meter cross-track resolution
MISR stereo cloud top height
- Parallax between camera views is used to retrieve cloud top height
- A geometric retrieval: minimal sensitivity to sensor calibration
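The geometry behind the stereo retrieval can be sketched in a few lines. This assumes two view zenith angles and an along-track disparity already produced by image matching; the angles, disparity, and function name are illustrative, not MISR's operational algorithm.

```python
import math

def stereo_height(disparity_m, zenith1_deg, zenith2_deg):
    """Height above the surface from the parallax between two views:
    a feature at height h is displaced by h * tan(zenith) in each view,
    so the disparity between views is h * |tan(z1) - tan(z2)|."""
    t1 = math.tan(math.radians(zenith1_deg))
    t2 = math.tan(math.radians(zenith2_deg))
    return disparity_m / abs(t1 - t2)

# A 5000 m disparity between a nadir view (0 deg) and a 45-deg oblique
# camera corresponds to a 5 km cloud top, since tan(45 deg) = 1.
h = stereo_height(5000.0, 45.0, 0.0)
```

Because only angles and pixel displacements enter, the retrieval is insensitive to radiometric calibration, as the slide notes.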
About CloudSat
- Part of the NASA A-Train constellation of satellites
- Launched 1 June 2006
- Millimeter-wavelength cloud radar that measures the power backscattered by clouds
- 500 meter vertical resolution
- 1.4 km cross-track, 1.7 km along-track resolution
Example of CloudSat data
Other data of interest
- ISCCP: International Satellite Cloud Climatology Project. Established 1982; a large dataset collecting radiance measurements from various satellites; lower resolution and fewer channels
- CALIPSO: Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation. Active lidar plus passive infrared/visible imagers
Using joint histograms
- Get optical depth and cloud top height from MISR and ISCCP (both from observations and from the model/simulator)
- Compute the relative occurrence of each optical depth and cloud top height combination
- A more complete picture of the cloud distribution
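The joint-histogram computation above is straightforward with numpy. The per-pixel values and the bin edges below are illustrative (not the official ISCCP/MISR bin boundaries).

```python
import numpy as np

# Hypothetical per-pixel retrievals of cloud optical depth (tau) and
# cloud top height (CTH, km), from either observations or the simulator.
tau = np.array([0.5, 2.0, 9.0, 25.0, 1.2, 15.0])
cth = np.array([1.0, 2.5, 8.0, 11.0, 0.8, 9.5])

# Illustrative bin edges in the spirit of the ISCCP tau/CTH categories.
tau_edges = np.array([0.0, 1.3, 3.6, 9.4, 23.0, 60.0, 380.0])
cth_edges = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 16.0])

counts, _, _ = np.histogram2d(tau, cth, bins=[tau_edges, cth_edges])
rel_occurrence = counts / counts.sum()   # relative occurrence per bin
```

The same code applied to observed and simulated retrievals yields two histograms on identical bins, which can then be compared quantitatively.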
Now, we need a model...
- NCAR Community Atmosphere Model (CAM)
- GFDL atmosphere-only model (AM2)
- Different components, different results
What do we need from the models?
- Model cloud radiative and optical properties: longwave emissivity, visible-wavelength optical depth
- Precipitation fluxes
So, we'll just go to the archive...
- Model output from the IPCC Fourth Assessment Report (AR4) is archived at PCMDI
- Output is available from all climate models used in AR4
- But no cloud radiative and optical properties were saved, and the archived output is all time-averaged
Now what?
Where we are now:
- Identified the need for model output not immediately available from the archives
- Set up CAM and AM2 to save the outputs we need
- Wrote a wrapper to run the simulators on output from CAM and AM2
Where we go next:
- Long model runs forced with observed SSTs, so that model output is concurrent with the available observational data
- Run the simulators and compare simulated retrievals to observations
- Do the models produce the profiles we observe? Quantitative analysis
CloudSat example: September, October, November
MISR example: Hawaiian Trade Cumulus
Where this is headed
- New versions of the models will become available shortly, and those models will need to be evaluated
- How do we decide which models to put our confidence in?
Acknowledgements
Tom Ackerman, Roger Marchand, Cecilia Bitz, Dargan Frierson, Mark Zelinka, Grads 08
CloudSat example: trial run with CAM
Thank you!
CloudSat example: trial run with CAM
MISR stereo cloud top height, continued