Verification of cloud simulation in HARMONIE AROME
METCOOP MEMO No. 01, 2013
Verification of cloud simulation in HARMONIE AROME
A closer look at cloud cover, cloud base and fog in AROME
Karl-Ivar Ivarsson, Morten Køltzow, Solfrid Agersten
Front: Low fog over snow. Photo: Karl-Ivar Ivarsson

MetCoOp: Meteorological Co-operation on Operational NWP (Numerical Weather Prediction), Norwegian Meteorological Institute (met.no) and Swedish Meteorological and Hydrological Institute (SMHI)

Date: ISSN:

Technical Memorandums (MEMO) from the MetCoOp collaboration are available from
The reports and work by MetCoOp are licensed under CC BY-ND 3.0. Credit should be given to the Norwegian Meteorological Institute and the Swedish Meteorological and Hydrological Institute, shortened met.no and SMHI, with use of the logos. The logo should be followed by a text, i.e.: «Work by The Norwegian Meteorological Institute and Swedish Meteorological and Hydrological Institute (SMHI)».
METCOOP MEMO (Technical Memorandum) No 01, 2013
Verification of cloud simulation in HARMONIE AROME
A closer look at cloud cover, cloud base and fog in AROME
Karl-Ivar Ivarsson, Morten Køltzow, Solfrid Agersten

Summary

The AROME model used as a benchmark for the MetCoOp project has been verified against Swedish automatic stations. Only stations capable of detecting clouds up to 7.5 km have been selected (about 45 stations). Satellite pictures have also been used, but only for subjective evaluation. AROME has been compared with the HIRLAM, ALARO and ECMWF models. Low cloud, detectable cloud, cloud base and fog have been verified in the study. A 'low cloud' means a cloud up to 2.5 km height. A 'detectable cloud' means a cloud up to 7.5 km. A cloud at the lowest model level has been regarded as 'fog' in this study. The prediction of a cloud base below one kilometre is more important than predicting a higher cloud base. AROME has the best skill for cloud bases below about one kilometre, except in cold winter conditions, where HIRLAM has the best skill (the equitable threat score, ETS, has been used). The same result is seen for low cloud. AROME nevertheless has the largest RMS error for both cloud base and low cloud. This seemingly contradictory result is explained by a larger variability of the AROME forecasts compared to the other models. The RMS error for detectable cloud is also largest for AROME, again partly because of the larger variability of the AROME forecasts. The best skill is seen for ECMWF and HIRLAM. Fog seems to be somewhat over-predicted by AROME; its skill is nevertheless the best of the models, except for cold weather in winter. There is a large over-prediction of ice fog and low cloud in very cold weather (about −30 °C or colder). The reason for this is not known.
Table of contents

1 INTRODUCTION
2 CASE STUDIES
2.1 Explanation of the figures
2.2 Autumn: November, UTC
2.3 Winter: January, UTC; February, UTC
2.4 Spring: March, UTC
2.5 Summer: July, UTC
3 VERIFICATION STATISTICS
3.1 Spring 2012 (AROME cycle 36h1.4)
3.2 Winter 2010 (AROME cycle 37h1.1)
4 CONCLUSIONS
5 FIGURES AND TABLES
ANNEX 1: Explanation of common verification scores; Explanation of skill scores
1 Introduction

This document is intended to give a brief summary of the quality of the cloud forecasts from the AROME model, the planned operational model for MetCoOp (see also METCOOP MEMO 01/2012). Cloud cover and cloud base (the lowest height of cloud cover) are two important parameters in a numerical weather forecast. Previously, the height of the cloud base has not been evaluated due to a lack of observations. New automatic stations with the ability to detect clouds up to 7.5 km have recently been introduced, and data from these stations are useful for verification. The main focus in this report is on case studies, but some statistics are also presented. The different types of verification scores used in meteorological verification statistics are often difficult to interpret; for this reason an explanation of the commonly used scores is included. The model results are compared with other models available to the weather services of both countries. This document is organized as follows: in chapter 2 a number of case studies are presented, in chapter 3 verification statistics are summarized, chapter 4 contains the conclusions, and in the annex some commonly used verification scores are explained. Thanks to Rebecca Rudsar and others in the MetCoOp group for contributions and feedback regarding this report.
2 Case studies

2.1 Explanation of the figures

Cloud base: The figures show the forecast cloud base, with low cloud in yellow, middle-high cloud in brown, high cloud in blue and precipitation in green. Where no post-processed cloud cover was available from the models, the cloud cover between the lowest and the highest level of interest has been computed from the model level data. Observations are also plotted.

Cloud cover: Automatic stations observing cloud cover are marked as triangles:
- No cloud is shown as unfilled (cloud cover 0-2 octas)
- Partly cloudy is shown as partly filled (cloud cover 3-5 octas)
- Overcast is shown as filled (cloud cover 6-8 octas)

Manual stations observing cloud cover are marked as circles:
- No cloud is shown as unfilled (cloud cover 0 octas)
- Partly cloudy is shown as partly filled (cloud cover 1-7 octas)
- Overcast is shown as filled (cloud cover 8 octas)

Fog: Forecast fog is marked with dark gray lines, and observed fog is marked with three lines to the left of the circles or triangles. The parameter fog is obtained only from the HIRLAM model; for the other forecast models it is not possible to distinguish between low cloud and fog.
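The symbol conventions above can be summarized as a small lookup. The following Python sketch is purely illustrative (the function name and structure are ours, not part of any MetCoOp plotting code), but it encodes exactly the octa limits listed above.

```python
# Illustrative helper only: encodes the octa limits of chapter 2.1.
def symbol_fill(octas: int, automatic: bool) -> str:
    """Return the map-symbol fill class for a cloud-cover report in octas."""
    if automatic:
        # Automatic stations (triangles): 0-2 unfilled, 3-5 partly, 6-8 filled.
        if octas <= 2:
            return "unfilled"
        return "partly filled" if octas <= 5 else "filled"
    # Manual stations (circles): 0 unfilled, 1-7 partly, 8 filled.
    if octas == 0:
        return "unfilled"
    return "partly filled" if octas <= 7 else "filled"

print(symbol_fill(4, automatic=True))   # partly filled (triangle symbol)
print(symbol_fill(8, automatic=False))  # filled (circle symbol)
```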
2.2 Autumn: November, UTC

This is a typical case for autumn with low cloud and fog. A high-pressure system with weak winds is present over Scandinavia, giving typical conditions for fog and low cloud. The satellite picture is shown in Figure 1. Figure 2 shows the corresponding forecasts from AROME, a semi-operational version of ALARO, ECMWF and the Swedish HIRLAM.

Figure 1: Satellite picture over southern Scandinavia, the Baltic Sea and northern parts of Germany and Poland. Low cloud and/or fog in white, high cloud in blue, black or dark red.
Figure 2: Upper pictures: AROME forecast to the left and ALARO to the right. Lower pictures: ECMWF forecast to the left and HIRLAM to the right. For an explanation of the legend and colours in the figures, see chapter 2.1.
The AROME forecast of low cloud is fairly accurate in this case. There is, however, a tendency to over-predict the amount of low cloud. The opposite is seen in the ALARO forecast, where the tendency is to forecast too little low cloud. The forecasts from the ECMWF and HIRLAM models have no major errors, but there are several smaller errors, for example no low cloud north of Oslo and the position of the low cloud over northern parts of Germany and Poland.

2.3 Winter: January, UTC

This is also a case with a high-pressure system over Scandinavia. Dry, cold air is blowing north-eastward from central Europe towards southern Sweden and southern Norway. The air is warmed over the open water of the Baltic Sea, so it is not a case with particularly cold weather. The low cloud from the different models is shown in Figure 3.
Figure 3: Upper pictures: AROME forecast to the left and ALARO to the right. Lower pictures: ECMWF forecast to the left and HIRLAM to the right. For an explanation of the legend and colours in the figures, see chapter 2.1.
The results from the different models are similar to those in the autumn case. AROME, ECMWF and HIRLAM give fairly accurate forecasts of low cloud, although there are some small errors. The ALARO forecast has too little low cloud. One exception is over Germany and Poland, where ALARO forecasts low cloud together with a small amount of precipitation; neither the precipitation nor the low cloud is supported by observations.

February, UTC

This is a case with more pronounced cold weather. There is a cold high-pressure system over northern Russia and a ridge over Scandinavia. A minor low-pressure system is seen over the Baltic Sea. Low cloud from the different forecasts is seen in Figures 4 and 5.

Figure 4: AROME forecast to the left and ALARO to the right. For an explanation of the legend and colours in the figures, see chapter 2.1.
Figure 5: ECMWF forecast to the left and HIRLAM to the right. For an explanation of the legend and colours in the figures, see chapter 2.1.

In this case of very cold weather there is a problem with the AROME forecast of low cloud: the occurrence of low cloud is severely overestimated, and the 2 m temperatures are not cold enough. The same temperature problem is seen in the ALARO forecast, but there the amount of low cloud is not overestimated. The ALARO model has a tendency to forecast the fraction of both low cloud and middle-level cloud too often near 0.5 (4 octas). ECMWF has no particular bias in the low cloud cover amount, but the low cloud is often misplaced. The HIRLAM forecast has too little low cloud cover in this case.
2.4 Spring: March, UTC

During spring an over-prediction of fog is sometimes apparent. An example is shown in Figure 6.

Figure 6: AROME forecast to the left and HIRLAM to the right. For an explanation of the legend and colours in the figures, see chapter 2.1.

Generally the AROME forecast of low cloud is fairly accurate, although some over-prediction is seen. An example is seen in the figure: according to the observations there is little or no low cloud over Denmark (in the left corner), but AROME forecasts low cloud over most of the area. The HIRLAM forecast has less over-prediction, but there is too much fog over parts of south-western Sweden.
2.5 Summer: July, UTC

The AROME model was updated in summer 2012. The new version, 37h1.1, replaces the former version 36h1.4, and the model domain was also extended. During summer 2012 the AROME forecast was regularly compared with several other models, for a number of forecast lengths. An example is shown in Figure 7.

Figure 7: A 36-hour and a 12-hour AROME forecast (for an explanation of the legend and colours, see chapter 2.1) and a satellite picture at the valid time to the right.

In Figure 7 the two AROME forecasts are compared with a satellite picture. Comparing the two forecasts and the observations over the Skagerrak area, the 12-hour forecast is better than the 36-hour forecast. This is a typical example of over-prediction of low cloud at longer forecast lead times. Comparisons have also been made between the old version (36h1.4) and the new one (37h1.1) with respect to forecasts of low cloud, but no significant change is seen. The other models (ECMWF, ALARO and HIRLAM) had no low cloud over the Skagerrak (not shown).
3 Verification statistics

3.1 Spring 2012 (AROME cycle 36h1.4)

For technical reasons, AROME forecasts are only available for a limited period in late April and May. Only automatic stations which can measure cloud cover and cloud base up to 7500 m are used. There are approximately 45 such stations, all located in Sweden. The observed and forecast frequencies of cloud base (for cloud cover of at least 2 octas) are plotted in Figure 8.

Figure 8: The frequency of cloud base for different heights in meters for AROME (red), ALARO (green), HIRLAM 7.3 (blue), ECMWF (violet) and observed (black).

It is clear from Figure 8 that during this period the occurrence of a cloud base of 20 m (in practice fog) is over-predicted in AROME. The forecasts from ECMWF and HIRLAM are slightly more skilful. Only ALARO under-predicts fog. For low stratus ( m) all models underestimate the frequency of occurrence. A cloud base above one kilometre is more often observed than forecast in AROME, but no clear systematic error is seen for HIRLAM. The skill scores for different thresholds of cloud base are shown in Figure 9. Only ETS is shown, but other scores, such as the Kuipers skill score, AI etc., give similar results. See Annex 1 for an explanation of the different scores.
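As a sketch of how such a frequency distribution can be derived from paired station reports, the following Python fragment bins cloud-base reports, keeping only reports with at least 2 octas of cloud. The height bins and the synthetic reports are assumptions for illustration; they are not the bins or data of the actual study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 1000
cloud_cover_octas = rng.integers(0, 9, n_obs)    # 0..8 octas (synthetic)
cloud_base_m = rng.uniform(20.0, 7500.0, n_obs)  # ceilometer range (synthetic)

# Only reports with cloud cover of at least 2 octas contribute a cloud base.
mask = cloud_cover_octas >= 2
bins = [20, 100, 300, 1000, 2500, 7500]          # assumed height bins in meters
counts, _ = np.histogram(cloud_base_m[mask], bins=bins)
freq = counts / counts.sum()

for lo, hi, f in zip(bins[:-1], bins[1:], freq):
    print(f"cloud base {lo:>4}-{hi:<4} m: relative frequency {f:.2f}")
```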
Figure 9: ETS for different thresholds of cloud base. AROME (red), ALARO (green), HIRLAM 7.3 (blue), ECMWF (violet).

AROME has the best score of the four models for the lowest thresholds. HIRLAM and ECMWF have similar results during this period. The reason for the poor result for ALARO is not known. The verification results for cloud cover, both low cloud (up to 2.5 km) and all detectable cloud (up to 7.5 km), for the same period as for the cloud base, are shown in the following figures. The results for different forecast lengths are in Figure 10.
Figure 10: The RMS error and bias for low cloud and all detectable clouds. AROME (red), ALARO (green), HIRLAM 7.3 (blue) and ECMWF (violet).

There is more cloud in the AROME forecasts, both for low cloud and for all detectable cloud, compared to the automatic observations. For ECMWF the opposite is seen. Many models have been tuned to fit manual observations, which generally report a larger cloud cover due to perspective effects. This effect is most pronounced for clouds with large vertical thickness, such as cumulonimbus. Models with no bias relative to manual stations may therefore show a positive bias when verified against automatic stations.
AROME has the largest RMS error, and ECMWF and ALARO have the smallest. It might seem contradictory that AROME has a relatively large RMS error for low cloud but relatively high skill for cloud base (see the ETS in Figure 14). One possible reason is the different variability of cloud cover in the models.

Figure 11: The standard deviation of the forecasts of low cloud (left) and all detectable cloud (right). ALARO (green), HIRLAM 7.3 (blue), ECMWF (violet) and observations in light blue.

A high variability often leads to a higher RMS error, and AROME has the highest variability of all the models. The variability in AROME is probably too high, since it is higher than that of the observations. This is seen for both low cloud and all detectable cloud (see Figure 11). An automatic station only 'sees' a very small area of the sky, near zenith, probably much smaller than the 2.5 km grid squares of AROME. A grid square of AROME represents a larger area and should therefore have a lower variability than the observations.
An alternative way of examining the same characteristics of the models is the frequency bias, see Figure 12.

Figure 12: The frequency bias of the forecasts of low cloud (left) and all detectable clouds (right). AROME (red), ALARO (green), HIRLAM 7.3 (blue) and ECMWF (violet).

The frequency bias should ideally be near one. AROME has too few events with cloud cover between 2 and 6 octas, and too many events with 7 to 8 octas, especially for low cloud. ALARO has the opposite characteristics.
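A sketch of how such a frequency bias can be computed is given below. Whether the events are defined per octa class or per octa threshold is our assumption here, and the data are synthetic; the score itself is the FB = (a+b)/(a+c) of Annex 1.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.integers(0, 9, 500)                         # observed octas (synthetic)
fcst = np.clip(obs + rng.integers(-2, 3, 500), 0, 8)  # imperfect forecast (synthetic)

# Frequency bias per octa threshold: forecast events / observed events.
for threshold in range(1, 9):
    n_forecast = np.count_nonzero(fcst >= threshold)  # a + b
    n_observed = np.count_nonzero(obs >= threshold)   # a + c
    print(f"octas >= {threshold}: frequency bias {n_forecast / n_observed:.2f}")
```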
The diurnal cycle of cloud cover is seen in Figure 13.

Figure 13: The amount of low cloud (left) and detectable cloud (right) for different times of the day (in UTC). AROME (red), ALARO (green), HIRLAM 7.3 (blue), ECMWF (violet) and observations in light blue.

The weak diurnal cycle of low cloud and all detectable cloud is well predicted by the AROME model, but on average the amounts are higher than observed. ECMWF over-predicts the diurnal cycle for both cloud types. The different skill scores show somewhat diverging results, but the ETS, shown in Figure 14, is representative.
Figure 14: ETS for low cloud (left) and all detectable clouds (right). AROME (red), ALARO (green), HIRLAM 7.3 (blue) and ECMWF (violet).

AROME shows a good verification result compared to the other models for low cloud, and an average result for all detectable cloud (below 7500 m). A model with low variability, such as ALARO, often verifies well with respect to RMS error, but less well when skill scores based on contingency tables (see Annex 1 for an explanation) are examined. This is seen in the verification results for this period; a synthetic illustration of the effect is sketched below.
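The following sketch constructs two artificial forecasts of a sky that is either clear (0 octas) or overcast (8 octas): a smooth forecast that always says 4 octas, and a sharp forecast that is wrong 30% of the time. The numbers are invented, but they reproduce the qualitative point made above: the smooth forecast has the lower RMS error, yet zero ETS.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.integers(0, 2, 400) * 8        # sky either clear (0) or overcast (8)

smooth = np.full_like(obs, 4)            # low-variability forecast: always 4 octas
wrong = rng.random(400) < 0.3
sharp = np.where(wrong, 8 - obs, obs)    # sharp forecast, wrong 30% of the time

def rmse(f, v):
    return np.sqrt(np.mean((f - v) ** 2.0))

def ets(f, v, threshold=5):
    """ETS for the event 'cloud cover >= threshold octas' (see Annex 1)."""
    fe, oe = f >= threshold, v >= threshold
    a = np.count_nonzero(fe & oe)        # hits
    b = np.count_nonzero(fe & ~oe)       # false alarms
    c = np.count_nonzero(~fe & oe)       # misses
    d = np.count_nonzero(~fe & ~oe)      # correct rejections
    a_rand = (a + b) * (a + c) / (a + b + c + d)
    return (a - a_rand) / (a + b + c - a_rand)

print("smooth: RMSE %.2f  ETS %.2f" % (rmse(smooth, obs), ets(smooth, obs)))
print("sharp : RMSE %.2f  ETS %.2f" % (rmse(sharp, obs), ets(sharp, obs)))
```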
3.2 Winter 2010 (AROME cycle 37h1.1)

The period from November 20 to December 9, 2010 was a cold period in early winter. The quality of forecasts of cloud cover, cloud base and fog in cold or very cold weather conditions depends on the ability of the model to describe the processes in mixed-phase and ice clouds. It also depends on how realistic the surface scheme is with respect to snow and ice. These processes should also take forest into consideration, since forest covers a large part of northern Europe. There are still some weaknesses in the description of these processes in AROME and in the current version of SURFEX (the surface scheme used together with the HARMONIE AROME model). In the previous example of very cold weather (from chapter 2.3, Figures 4 and 5) there is an over-prediction of the amount of low cloud, and the forecast temperatures are not low enough. However, the results in chapter 2.3 are from cycle 36h1.4 of HARMONIE, and some improvements have been made in cycle 37h1.1. The cold winter period verified here (20 November to 9 December 2010) uses AROME cycle 37h1.1. The observed and forecast frequencies of cloud base (for cloud cover of at least 2 octas) for this cold period are plotted in Figure 15.
Figure 15: The frequency of cloud base at different heights in meters for AROME (red), HIRLAM 5.5 km (green), ECMWF (blue) and observed (black).

The bias of the forecast can be derived as the difference between the forecast frequency and the observed frequency, as seen in Figure 15. AROME has no significant bias for the lowest cloud base, but too few occurrences of cloud bases between 100 m and 1000 m; the opposite is seen for the cloud bases of middle-level and high clouds. The most striking result for the ECMWF forecast is the large bias for the lowest level. There is a negative bias for ECMWF for cloud bases between 100 m and 1000 m; for AROME this negative bias is more pronounced. For higher cloud bases there is a slight positive bias for all models. The ECMWF model has been improved since 2010, so this bias is probably reduced in the present version. HIRLAM has no major bias.
Figure 16: The ETS for cloud base at different heights in meters for AROME (red), HIRLAM with 5.5 km grid (green), ECMWF (blue) and observed (black).

The ETS results are seen in Figure 16. They are in general not as good as for the spring period. One reason may be that the link between an area of condensation in the model and an observed cloud or fog is weaker in cold conditions, because the cloud may be an ice cloud instead of a water cloud. An ice cloud or ice fog is more optically transparent than a water cloud or a fog consisting of water droplets, so an observed ice cloud may correspond only to an area of somewhat reduced visibility. This means that treating cloud at the lowest model level as fog may be questioned. Another reason for the degradation of forecast quality could be that the microphysics of mixed-phase and ice clouds is more complicated than that of water clouds. The verification results for cloud cover, both low cloud (up to 2.5 km) and all detectable cloud (up to 7.5 km), for the cold period are shown in the following figures.
Figure 17: The RMS error and bias for low cloud and for all detectable clouds. AROME (red), HIRLAM 7.1.2 (green) and ECMWF (blue).

The RMSE is smallest for ECMWF and largest for AROME. The RMSE may be affected by the variability of the forecast. The standard deviations of the forecasts and observations are seen in Figure 18.

Figure 18: The standard deviation for low cloud and for all detectable clouds. (See the explanation of the statistical methods in Annex 1.) AROME (red), HIRLAM 7.1.2 (green), ECMWF (blue) and observations in violet.
ECMWF has the lowest variability, and this contributes to the small RMS error seen in Figure 17. HIRLAM has a variability similar to that of AROME and a bias of the same magnitude; thus the larger RMS error for AROME is not explained by variability and bias alone. The frequency bias for low cloud and all cloud below 7500 m for different octas is shown in Figure 19.

Figure 19: The frequency bias of the forecasts of low cloud (left) and all detectable cloud (right). AROME (red), HIRLAM (green), ECMWF (blue).

All models, especially ECMWF, predict too much low cloud between 2 and 6 octas. This is also the case for all detectable clouds for ECMWF, and to some extent for HIRLAM. AROME has no particular frequency bias for all detectable cloud.
The corresponding ETS values are seen in Figure 20.

Figure 20: ETS for low cloud (left) and detectable clouds (right). (See the explanation of statistical methods in Annex 1.) AROME (red), HIRLAM (green), ECMWF (blue).

HIRLAM has the highest ETS for both low cloud and detectable cloud. ECMWF and AROME have similar scores. Comparing the spring and winter periods, the ETS values are similar for HIRLAM, whereas for ECMWF and AROME the ETS values are higher (that is, better) for the spring period.
4 Conclusions

The AROME model has been compared with other models with respect to cloud cover and cloud base, including fog. The observations used in this study are satellite pictures and automatic stations able to detect clouds up to 7.5 km above ground. In general, the forecasts of cloud base for low cloud are as good as or better than those of the other models in this comparison, but there is an over-prediction of low cloud in AROME. Fog is also over-predicted. There is an under-prediction of high cloud bases; it is not possible to say whether this is caused by too little high cloud or by high clouds being hidden by low clouds in the model. AROME has a positive bias for both all detectable cloud and low cloud. It is not obvious how this should be interpreted. Automatic stations only detect clouds directly above the station, whereas a much larger part of the sky is included in manual cloud observations, and manual observations in general give larger cloud cover values due to perspective effects. Verification against manual observations might therefore give a different result. The RMS error of low cloud and all detectable cloud is high for AROME compared to the other models, but the skill scores are nevertheless quite good. This is probably due to a larger variation of cloud cover in AROME than in the other models; the variation may be too large. The only serious problem regarding cloud prediction in AROME is seen in very cold weather: there is an over-prediction of low cloud and fog which affects the 2 m temperature, making it too warm. The reason for this problem is not known. A possible cause is an error in the surface scheme (too large a transport of moisture from the ground). Another possibility is that the vertical diffusion (turbulence) is too weak in this situation. It is also possible that the cloud microphysics has to be improved so that mixed-phase clouds contain more liquid water.
5 Figures and tables

Figure 1: Satellite picture over southern Scandinavia, the Baltic Sea and northern parts of Germany and Poland. Low cloud and/or fog in white, high cloud in blue, black or dark red.
Figure 2: Upper pictures: AROME forecast to the left and ALARO to the right. Lower pictures: ECMWF forecast to the left and HIRLAM to the right.
Figure 3: Upper pictures: AROME forecast to the left and ALARO to the right. Lower pictures: ECMWF forecast to the left and HIRLAM to the right.
Figure 4: AROME forecast to the left and ALARO to the right.
Figure 5: ECMWF forecast to the left and HIRLAM to the right.
Figure 6: AROME forecast to the left and HIRLAM to the right.
Figure 7: A 36-hour and a 12-hour AROME forecast and a satellite picture at the valid time to the right.
Figure 8: The frequency of cloud base for different heights in meters for AROME (red), ALARO (green), HIRLAM 7.3 (blue), ECMWF (violet) and observed (black).
Figure 9: ETS for different thresholds of cloud base. AROME (red), ALARO (green), HIRLAM 7.3 (blue), ECMWF (violet).
Figure 10: The RMS error and bias for low cloud and all detectable clouds. AROME (red), ALARO (green), HIRLAM 7.3 (blue) and ECMWF (violet).
Figure 11: The standard deviation of the forecasts of low cloud (left) and all detectable cloud (right). ALARO (green), HIRLAM 7.3 (blue), ECMWF (violet) and observations in light blue.
Figure 12: The frequency bias of the forecasts of low cloud (left) and all detectable clouds (right). AROME (red), ALARO (green), HIRLAM 7.3 (blue) and ECMWF (violet).
Figure 13: The amount of low cloud (left) and detectable cloud (right) for different times of the day (in UTC). AROME (red), ALARO (green), HIRLAM 7.3 (blue), ECMWF (violet) and observations in light blue.
Figure 14: ETS for low cloud (left) and all detectable clouds (right). AROME (red), ALARO (green), HIRLAM 7.3 (blue) and ECMWF (violet).
Figure 15: The frequency of cloud base at different heights in meters for AROME (red), HIRLAM 5.5 km (green), ECMWF (blue) and observed (black).
Figure 16: The ETS for cloud base at different heights in meters for AROME (red), HIRLAM with 5.5 km grid (green), ECMWF (blue) and observed (black).
Figure 17: The RMS error and bias for low cloud and for all detectable clouds. AROME (red), HIRLAM 7.1.2 (green) and ECMWF (blue).
Figure 18: The standard deviation for low cloud and for all detectable clouds. AROME (red), HIRLAM 7.1.2 (green), ECMWF (blue) and observations in violet.
Figure 19: The frequency bias of the forecasts of low cloud (left) and all detectable cloud (right). AROME (red), HIRLAM (green), ECMWF (blue).
Figure 20: ETS for low cloud (left) and detectable clouds (right). AROME (red), HIRLAM (green), ECMWF (blue).
Figure 21: The red curve is S(ap) (always protect) and the green curve is S(np) (never protect) for different cost-loss ratios; a, b, c and d are as given in the text.
ANNEX 1

Explanation of common verification scores

In this annex the different verification and evaluation scores are explained in detail. The first part of the annex is copied from METCOOP MEMO (chapter 2.10). The abbreviations of the scores are used in the figures and the text of this report.

RMS error (RMSE): √( (1/N) Σ (f(i) − v(i))² ). The root mean square error is computed as the square root of the mean of the squared difference between forecast, f(i), and observation, v(i). N is the number of observations.
Characteristics: Measures the correspondence between observations and forecasts. The perfect value is zero. Lowering the variability of a forecast may result in a smaller RMS error without increasing the value of the forecast.

BIAS or systematic error (BIAS): (1/N) Σ (f(i) − v(i)). It is computed as the difference between the mean of the forecasts and the mean of the observations.
Characteristics: Measures the mean correspondence between observations and forecasts. The perfect value is zero. A negative value means an 'under-prediction' of the event; a positive value means the opposite.

Standard deviation: √( (1/N) Σ (x(i) − x(mean))² ), where x may be either a forecast or an observation. It is the square root of the mean squared deviation from the mean.
Characteristics: Measures the mean variability of the forecast or the observation. The variability of forecasts and observations should normally not differ very much, but exceptions exist. One example is when a forecast represents the mean value of a grid square, with an expected smaller variability than an observation representing a point.
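Written out directly from these definitions, the three measures are as follows (a minimal Python sketch; f is the forecast series, v the matching observations):

```python
import numpy as np

def rmse(f, v):
    """Root mean square error: sqrt of the mean squared forecast error."""
    return np.sqrt(np.mean((np.asarray(f) - np.asarray(v)) ** 2))

def bias(f, v):
    """Systematic error: mean of forecast minus mean of observation."""
    return np.mean(np.asarray(f) - np.asarray(v))

def stdev(x):
    """Mean variability: sqrt of the mean squared deviation from the mean."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean((x - x.mean()) ** 2))

f = [3.0, 5.0, 8.0, 1.0]   # toy forecast series
v = [2.0, 6.0, 8.0, 0.0]   # toy observations
print(rmse(f, v), bias(f, v), stdev(f))
```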
Explanation of skill scores

General definition: Any forecast verified with a statistical measure giving the result S may be compared with the result found by using a reference forecast, S(ref). This reference forecast could be any forecast based on the same statistics, for instance a random forecast, a climatological forecast or the result from another model. The skill score is then defined as:

Skill score = (S − S(ref)) / (S(perfect) − S(ref)),

where S(perfect) is the best possible result that may be obtained in the study (for example zero for the RMS error). The best possible skill score is one, obtained when S equals S(perfect). The skill score is zero if the forecast gives the same result as the reference forecast S(ref). Negative values indicate negative skill.

Supplementary scores based on contingency tables:

The simplest contingency table consists of only two categories of observations and forecasts:

                               Obs. severe events   Obs. non-severe events   Number of forecasts
Forecast severe events         a                    b                        a+b
Forecast non-severe events     c                    d                        c+d
Number of observations         a+c                  b+d                      a+b+c+d = N

A perfect forecast would have a + d = N and b = c = 0. From this table many different types of scores may be derived:

Frequency bias (FB): (a+b)/(a+c) for severe events and (c+d)/(b+d) for non-severe events.
Characteristics: Measures the bias or systematic error of the forecast. No bias gives the value one (the perfect value); a positive bias gives a value above one and a negative bias a value below one.

False alarm rate (FAR): b/(b+d).
Characteristics: Measures the number of false alarms of severe weather relative to the number of cases with no severe weather observed. The perfect value is zero.
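All the scores that follow are computed from this same table. As a minimal sketch (with invented boolean event series) of how the four table entries are counted in practice:

```python
import numpy as np

def contingency(fcst_event, obs_event):
    """Count a (hits), b (false alarms), c (misses), d (correct rejections)."""
    f = np.asarray(fcst_event, bool)
    o = np.asarray(obs_event, bool)
    a = np.count_nonzero(f & o)
    b = np.count_nonzero(f & ~o)
    c = np.count_nonzero(~f & o)
    d = np.count_nonzero(~f & ~o)
    return a, b, c, d

fcst = [True, True, False, False, True]   # invented forecast events
obs = [True, False, False, True, True]    # invented observed events
a, b, c, d = contingency(fcst, obs)
print("frequency bias:", (a + b) / (a + c))   # FB = (a+b)/(a+c)
print("false alarm rate:", b / (b + d))       # FAR = b/(b+d)
```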
False alarm ratio: b/(a+b).
Characteristics: Measures the number of false alarms of severe weather relative to the number of forecasts of severe weather. The perfect value is zero.

Probability of detection / hit rate (HR): a/(a+c).
Characteristics: Measures the number of correct forecasts of severe weather relative to the number of observations of severe weather. The perfect value is one.

Threat score: a/(a+b+c) = a/(N − d).
Characteristics: Measures the number of correct forecasts of severe weather relative to the total number of cases in the sample that are not correct forecasts of no severe weather. The perfect value is one.

Different skill scores with some common characteristics:
1: The perfect value is one.
2: A random forecast gives the value zero. Here, a random forecast means a forecast with the same forecast frequency as the tested one, but with its values randomly distributed among the sample. It has the following values for a, b, c and d: a(random) = (a+b)(a+c)/N, b(random) = (a+b)(b+d)/N, c(random) = (c+d)(a+c)/N and d(random) = (c+d)(b+d)/N.
3: A negative value indicates negative skill, but for some of the scores it indicates that the forecast has a 'negative signal', which means that the forecast may have value if forecasts of severe weather and non-severe weather are interchanged.

Equitable threat score (ETS): (a − (a+b)(a+c)/N) / (a+b+c − (a+b)(a+c)/N), or equivalently (a − a(random)) / (a+b+c − a(random)).
Special characteristics:
Good: No large tendency to favour forecasts with a large positive or negative bias.
Poor: No clear relation to the value of the forecasts with respect to cost-loss relations.

Kuipers skill score (KSS): (probability of detection) − (false alarm rate) = a/(a+c) − b/(b+d).
Good: Has a clear relation to the value of the forecasts with respect to cost-loss relations.
It represents the economical value of the forecast for an 'optimal' user, and it disfavours a forecast with a very large positive or negative bias.
Poor: It favours a forecast with a moderate positive or negative bias for more extreme events. An example is large amounts of rain, occurring infrequently. Then the 'optimal' user has an unlikely low cost-loss ratio, with a very low protection cost. In case of a false alarm, KSS only accounts for this low protection cost. This leads to a higher KSS when heavy rain is forecast more often than it is observed.

AI (area index): (ad − bc)/((b+d)(a+c)) + (c/(b+d)) ln(Nc/((a+c)(c+d))) + (b/(a+c)) ln(Nb/((b+d)(a+b))).
Good: Has a clear relation to the value of the forecasts with respect to cost-loss relations. It represents the economical value of the forecast for a normally large group of 'optimal' users. It has no large tendency to favour forecasts with a large positive or negative bias, since it also accounts for the size of the group of 'optimal' users; in a case such as the one mentioned above, this group is very limited.
Poor: Complicated and not very well known. In order to get a value similar to the other scores, one may have to use the square of AI.

Symmetric extreme dependency score (SEDS): ( ln((a+b)/N) + ln((a+c)/N) ) / ln(a/N) − 1.

Extremal dependency index (EDI): ( ln(b/(b+d)) − ln(a/(a+c)) ) / ( ln(b/(b+d)) + ln(a/(a+c)) ).

Symmetric extremal dependency index (SEDI): ( ln(b/(b+d)) − ln(a/(a+c)) − ln(1 − b/(b+d)) + ln(1 − a/(a+c)) ) / ( ln(b/(b+d)) + ln(a/(a+c)) + ln(1 − b/(b+d)) + ln(1 − a/(a+c)) ).

These three scores have some characteristics in common:
Good: The result does not deteriorate when the observed frequency of the events considered becomes rare, as it does for many other scores. This makes it less difficult to compare the results of forecasts from different climate regimes. The result also does not vary much with long-term variations in the observed frequency of the phenomena; for example, a year with fewer occurrences of gale should give largely the same result as a windier year, if the forecast quality is unchanged.
Poor: These newer scores are less known and more complicated. The best of the three seems to be the first one, the symmetric extreme dependency score (SEDS), since it has no large tendency to favour a forecast with a large positive or negative bias; the other two are problematic in that sense.

The cost-loss ratio

The cost-loss ratio (C/L) is often used for the economical evaluation of forecast value. It is the ratio of the cost of protection (C) against a severe weather event to the net loss of value (L) if the severe weather occurs without protection. In economical terms, the contingency table above becomes:

                               Obs. severe events   Obs. non-severe events   Forecast-related costs
Forecast severe events         a C                  b C                      (a+b) C
Forecast non-severe events     c L                  0                        c L
Observation-related costs      a C + c L            b C                      (a+b) C + c L

Using the current forecast gives an economical outcome of E(c) = (a+b) C + c L (as in the table). In other cases, somewhat different from the table:
A perfect forecast gives an economical outcome of E(perf) = (a+c) C.
Never protecting gives an economical outcome of E(np) = (a+c) L.
Always protecting gives an economical outcome of E(ap) = (a+b+c+d) C.

A skill score based on always protecting then becomes (with X = C/L):
S(ap) = [E(c) − E(ap)] / [E(perf) − E(ap)] = (c + d − c/X) / (b + d),
and based on never protecting,
S(np) = [E(c) − E(np)] / [E(perf) − E(np)] = (a + b − b/(1 − X)) / (a + c).

It can be shown that an 'optimal user' is a user for whom the cost-loss ratio is equal to the frequency of severe weather, (a+c)/N. Then S(ap) = S(np) = the Kuipers skill score. Figure 21 shows an idealized example in which a = 30, b = 6, c = 10 and d = 54.
Figure 21: The red curve is S(ap) (always protect) and the green curve is S(np) (never protect) for different cost-loss ratios; a, b, c and d are as given in the text.

The lines meet at the cost-loss ratio 0.4, which is the cost-loss ratio of the 'optimal user' in this case; the Kuipers skill score is then 0.65. AI, the area index, is the 'positive' area between the lower of the two curves S(ap) and S(np); in this case it is about 0.3.
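To make the annex concrete, the sketch below evaluates the scores above for the idealized example a = 30, b = 6, c = 10, d = 54. It reproduces the numbers quoted in the text: the curves meet at X = 0.4, the Kuipers skill score there is 0.65, and numerically integrating the positive area under the lower of the two curves gives AI ≈ 0.3, in agreement with the closed-form AI expression.

```python
import numpy as np

a, b, c, d = 30, 6, 10, 54
n = a + b + c + d

# Contingency-table scores from Annex 1.
hit = a / (a + c)                       # probability of detection
far = b / (b + d)                       # false alarm rate
kss = hit - far                         # Kuipers skill score
a_rand = (a + b) * (a + c) / n
ets = (a - a_rand) / (a + b + c - a_rand)
seds = (np.log((a + b) / n) + np.log((a + c) / n)) / np.log(a / n) - 1
ai_formula = ((a * d - b * c) / ((b + d) * (a + c))
              + c / (b + d) * np.log(n * c / ((a + c) * (c + d)))
              + b / (a + c) * np.log(n * b / ((b + d) * (a + b))))

# Cost-loss skill curves as functions of X = C/L.
def s_ap(x):
    return (c + d - c / x) / (b + d)          # skill relative to always protecting

def s_np(x):
    return (a + b - b / (1.0 - x)) / (a + c)  # skill relative to never protecting

x_opt = (a + c) / n                     # 'optimal user': X equals event frequency
x = np.linspace(0.001, 0.999, 99_901)
envelope = np.maximum(0.0, np.minimum(s_ap(x), s_np(x)))
ai_area = float(np.sum(envelope) * (x[1] - x[0]))  # rectangle-rule integral

print(f"ETS {ets:.2f}  KSS {kss:.2f}  SEDS {seds:.2f}")
print(f"curves meet at X = {x_opt:.2f}: S(ap) {s_ap(x_opt):.2f}, S(np) {s_np(x_opt):.2f}")
print(f"AI: formula {ai_formula:.3f}  numerical area {ai_area:.3f}")
```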
Norwegian Meteorological Institute
Postboks 43 Blindern, NO-0313 OSLO
Phone: Telefax:

Swedish Meteorological and Hydrological Institute
SE NORRKÖPING
Phone: Telefax:

ISSN:
