Wind Power Forecasting Pilot Project Part B: The Quantitative Analysis Final Report
A Report to:
The Alberta Electric System Operator
25, 33-5th Avenue SW
Calgary, Alberta T2P L4
Attention: Mr. Darren McCrank, P.Eng., Wind Power Forecasting Pilot Project Project Manager
Fax: (43)

Submitted by:
Donald (Don) C. McKay, Ph.D., MBA, General Manager
Tel: (95), Ext. 499
Fax: (95)
[email protected]

Report No.: pages, 5 Appendices
Date: August 18,
Speakman Drive, Mississauga, ON, Canada L5K 1B3
DISCLAIMER

The conclusions reached in this report were arrived at using data and other information provided to ORTECH by others. The activities of ORTECH in preparing this report were limited to analyzing the data and other information supplied to ORTECH. Except in the case of obvious anomalies or discrepancies, no attempt was made to verify the data or the accuracy thereof, as such activities were beyond the scope of ORTECH's engagement. While ORTECH prepared this report in accordance with professional standards and has no reason to question the data or information provided to it, ORTECH assumes no responsibility or liability for the accuracy, completeness or veracity of the data or other information supplied to it in the course of preparing this report.
TABLE OF CONTENTS

EXECUTIVE SUMMARY
1. INTRODUCTION
2. DATA SETS
2.1 Data Completeness
3. RESULTS AND ANALYSES
3.1 What is the General Accuracy of the Forecasts?
    MAE and RMSE: Wind Speed
    Decomposition of RMSE
    RMSE and MAE: Power
    Normalized RMSE
    Normalized MAE
3.2 What is the accuracy of the Forecasts at the different forecast horizons studied (T=1 hour to T=48 hours)?
3.3 What is the accuracy of the Forecasts at different hours of the day and seasons of the year?
    Hours of the Day
    Seasons of the Year
3.4 What is the accuracy of the Forecasted Meteorological Data before Running through the Power Conversion Models?
3.5 What is the accuracy of the Power Conversion?
3.6 What is the Potential Co-variance from Given Data Samples?
3.7 What is the accuracy of the Forecast at different wind speeds or different points of a Wind Power Facility's power curve?
3.8 What is the relative comparison between Forecasts?
    Contemporary Analyses
    Wind Power Performance Index
    Root Mean Square Error of Wind and Power Forecasts
    Bias of Wind and Power Forecasts
    General Assessment
3.9 Which is the region with the least amount of error? Which forecaster forecasts best in that region, and why?
3.10 What is the effect of spatial smoothing on forecast error?
3.11 How well do the forecasts predict fast ramp-up and ramp-down times? Event analysis (CSI)
3.12 What is the Impact on Data Availability?
3.13 Are there times (day/month/weather pattern) when there is more uncertainty in the forecasts than other times?
3.14 What is the relationship between the spread of the min/max and the forecast error?
3.15 What is the correlation factor between all three forecasts? Is this related to the forecast error?
Summary of Findings
4. GENERAL OBSERVATIONS AND RECOMMENDATIONS
    a) Data
    b) Trial Period
    c) Freezing the Models
    d) Existing vs. Future Sites
    e) Focusing on the Priorities
TABLES

Table 2-1 Summary of the 1-Minute Measured Wind Speed Data Recovery Rates for each Site during Q4 (February 1, 2008 00:00 UTC to April 30, 2008 23:00 UTC)
Table 2-2 Summary of the Hourly-Averaged Measured Wind Speed Data Recovery Rates for each Site during Q4 (February 1, 2008 00:00 UTC to April 30, 2008 23:00 UTC)
Table 2-3 Summary of the 1-Minute Measured/Calculated Power Data Recovery Rates for each Site during Q4 (February 1, 2008 00:00 UTC to April 30, 2008 23:00 UTC)
Table 2-4 Summary of the Hourly-Averaged Measured/Calculated Power Data Recovery Rates for each of the Sites during Q4 (February 1, 2008 00:00 UTC to April 30, 2008 23:00 UTC)
Table 2-5 Summary of Acceptable Ranges of Variables used by ORTECH to Screen Out Measured/Calculated and Forecasted Wind and Power Data
Table 2-6 Summary of the Annual Recovery Rates of Measured Hourly-Averaged Wind-Speed and Measured/Calculated Power Data at the Different Sites and Regions from May 1, 2007 to April 30, 2008
Table 2-7 Summary of the Annual Recovery Rates of Hourly-Averaged Wind-Speed Data-Sets used in the Statistical Analyses at Each of the Selected Horizons and at the Different Regions from May 1, 2007 to April 30, 2008
Table 2-8 Summary of the Annual Recovery Rates of Hourly-Averaged Power Data-Sets used in the Statistical Analyses at Each of the Selected Horizons and at the Different Regions from May 1, 2007 to April 30, 2008
Table 3-1 General Accuracy of Power Forecasts for Each Forecaster on an Annual Basis
Table 3-2 Annual Relative Standard Deviations of Measured Wind Speed and Power in Different Regions
Table 3-3 Annual Ratio of MAE and RMSE for Power Normalized by the Measured Average Power to MAE and RMSE for Wind Speed Normalized by the Measured Mean Wind Speed
Table 3-4 Sum of Various Metrics for Nine Horizons for Wind and Power for the South West and South Central Regions for All Three Forecasters
Table 3-5 Annual Summary of Rapid Ramp Events for Specified Time Horizons Captured by Each Forecaster (% of total number of events) When the Event is Forecasted Not More Than 12 Hours in Advance of the Actual Event
Table 3-6 Annual Summary of Rapid Ramp Events for Specified Time Horizons Captured by Each Forecaster (% of total number of events) When the Event is Forecasted Either 6 Hours Before or 6 Hours After the Actual Event
Table 3-7 Confidence Limits (%) of Minimum and Maximum Predicted Power Distributions
FIGURES

Figure 3-1 Comparison of the time series of measured and predicted wind speeds at the first forecast horizon (T=1 hr) by an anonymous forecaster at a particular wind farm (name not disclosed). Note: examples are circled
Figure 3-2 Annual Mean Absolute Error (MAE) of wind speed (WS) predictions (Pred.) in South West (SW), South Central (SC), South East (SE), Central (CE), Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF) by three forecasters A, B and C as a function of forecast horizon
Figure 3-3 Annual Root Mean Square Error (RMSE) of wind speed (WS) predictions (Pred.) in South West (SW), South Central (SC), South East (SE), Central (CE), Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF) by three forecasters A, B and C as a function of forecast horizon
Figure 3-4 Annual errors of wind speed predictions in South West (SW), South Central (SC), South East (SE), Central (CE), Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF) by Forecasters A, B and C as a function of forecast horizon. Note that RMSE and its components, dispersion (disp), bias (bias) and standard deviation of bias (sdbias), are shown with different symbols and colors
Figure 3-5 Annual Normalized Root Mean Square Error (RMSE %) of power predictions in South West (SW), South Central (SC) and Existing Facilities (EF) by three forecasters A, B and C as a function of forecast horizon. Note that the actual errors are normalized by the rated capacity (RC) of the region of power aggregation
Figure 3-6 Annual errors of power predictions in South West (SW), South Central (SC) and Existing Facilities (EF) by Forecasters A, B and C as a function of forecast horizon. Note that RMSE and its components, dispersion (disp), bias (bias) and standard deviation of bias (sdbias), are shown with different symbols and colors
Figure 3-7 Normalized Root Mean Square Error (RMSE) of power predictions in South West (SW), South Central (SC) and Existing Facilities (EF) by three forecasters A, B and C as a function of forecast horizon. Note that the actual errors are normalized by the rated capacity of the region of power aggregation
Figure 3-8 Accuracy of the Power Forecast for Specific 6-Hour Time Periods for Each Forecaster Using RMSE (% Rated Capacity) for the South West (SW), South Central (SC) and Existing Facilities (EF)
Figure 3-9 Seasonal Accuracy of the Power Forecast for Each Forecaster Using RMSE (% Rated Capacity) for the South West (SW), South Central (SC) and Existing Facilities (EF)
Figure 3-10 Typical Power Curve (GE 1.5 MW sle)
Figure 3-11 Annual RMSE of wind speed predictions at the chosen forecast horizons (T = 1, T = 2 and T = 48 hr) in Existing Facilities (EF) by Forecasters A, B and C as a function of predicted wind speed
Figure 3-12 Annual Normalized RMSE of Power for Existing Facilities at Different Predicted Wind Speed Bins
Figure 3-13 RMSE Annual Comparison between Forecasters for Wind by Region
Figure 3-14 Annual Comparison between Forecasters for Power by Region (Normalized RMSE % of Rated Capacity (RC)) and a Comparison against Persistence for the SW and SC Regions and EF
Figure 3-15 Annual Comparison between Forecasters against Persistence for the SW and SC Regions and EF for Power using SS (RMSE)
Figure 3-16 Annual Normalized Performance Index for Wind using RMSE for Four Regions for the Three Forecasters
Figure 3-17 Annual Normalized Performance Index for Power using RMSE in Two Regions for the Three Forecasters
Figure 3-18 Annual Normalized Performance Index for Wind using Bias for Four Regions for the Three Forecasters
Figure 3-19 Annual Normalized Performance Index for Power using Bias in Two Regions for the Three Forecasters
Figure 3-20 Annual Ratio of Regional Standard Deviation of Power Prediction Errors to the Average Standard Deviation of the Individual Sites in the Region for Various Region Sizes and Forecast Horizons

APPENDICES

Appendix A Alberta Forecast Regions
Appendix B Bibliography
Appendix C Data Completeness for Each Quarter
Appendix D Selected Wind Statistics by Quarter
Appendix E Selected Power Statistics by Quarter
EXECUTIVE SUMMARY

As part of the Alberta Electric System Operator's (AESO) wind power forecasting pilot project, ORTECH Power (ORTECH) was contracted to provide quantitative analysis services. Wind and power forecast data were provided by three independent wind forecasting firms, referred to herein as Forecaster A, Forecaster B and Forecaster C. The analysis consisted of comparing the predicted data against measured meteorological and power data for seven existing Alberta wind power facilities (Existing Facilities), and against measured meteorological data and derived power data for five future Alberta wind power facilities (Future Facilities). The measured, predicted and simulated data were collected and distributed to the forecasters and ORTECH by GENIVAR (Phoenix Engineering).

Analysis was carried out by examining the available data from each of the forecasters in seven categories: the entire dataset, All Facilities (AF); Existing Facilities (EF); Future Facilities (FF); and four geographic regions. To define the four regions, the available data for the Existing Facilities and the Future Facilities were distributed geographically into the South West (SW) Region, South Central (SC) Region, South East (SE) Region and a Central (CE) Region (see map in Appendix A). The analysis for the individual facilities is not included in this report due to confidentiality requirements.

The forecast data were analyzed using statistical methods, with an emphasis on deriving meaningful and precise values for averaged errors between predicted and observed data that provide insight into the error characteristics, based on the framework summarized below:

    Observations: hourly time scale
    Forecasts: 1 to 48 hour time scales
    Meteorological situations: 3 to >8 hour time scales
    Temporal statistics: 1 to 48 hour time horizons and seasonal
    Stratified statistics: season and meteorological situation
    Summary statistics: confidence intervals, etc.
    Distinction between forecasters: resolved 1 to 48 hour horizons

The final report documents the analysis for the one-year study period, extending from May 1, 2007 to April 30, 2008, inclusive.
The final report attempts to draw out specific findings and insights based on a set of questions posed by the working group after reviewing the three progress reports. The questions posed by the working group were:

1. What is the general accuracy of the Forecasts?
2. What is the accuracy of the Forecasts at the different forecast horizons studied (T=1 hour to T=48 hours)?
3. What is the accuracy of the Forecasts at different hours of the day and seasons of the year?
4. What is the accuracy of the Forecasted Meteorological Data before running through the Power Conversion models?
5. What is the accuracy of the Power Conversion?
6. What is the potential co-variance from given data samples?
7. What is the accuracy of the Forecast at different wind speeds or different points of a Wind Power Facility's power curve?
8. What is the relative comparison between Forecasts?
9. Which is the region with the least amount of error? Which forecaster forecasts best in that region, and why?
10. What is the effect of spatial smoothing on forecast error?
11. How well do the forecasts predict fast ramp-up and ramp-down times (event analysis, CSI)?
12. What is the impact on data availability?
13. Are there times (day/month/weather pattern) when there is more uncertainty in the forecasts than at other times?
14. What is the relationship between the spread of the min/max and the forecast error?
15. What is the correlation factor between all three forecasts? Is this related to the forecast error?

The report is presented as follows: Section 2 describes the data sets provided by GENIVAR (Phoenix Engineering); Section 3 presents the results and analyses (responses to the questions posed by the working group); Section 4 presents general observations and recommendations. Appendices C, D and E provide quarterly statistics.
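Several of the accuracy questions above are answered with the same few error metrics (MAE, RMSE, bias and an RMSE decomposition). As an illustrative sketch only, and not code used by ORTECH or any forecaster, these metrics can be computed from paired hourly predictions and observations as follows; note the report's full decomposition also separates a dispersion term from the standard deviation of the bias, whereas this sketch shows only the exact two-term split RMSE^2 = bias^2 + SDE^2:

```python
import math

def error_metrics(predicted, observed):
    """Compute MAE, RMSE, bias and SDE from paired hourly values.

    SDE is the standard deviation of the errors, so the identity
    RMSE^2 = bias^2 + SDE^2 holds exactly.
    """
    errors = [p - o for p, o in zip(predicted, observed)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    bias = sum(errors) / n                      # mean (systematic) error
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    sde = math.sqrt(sum((e - bias) ** 2 for e in errors) / n)
    return {"mae": mae, "rmse": rmse, "bias": bias, "sde": sde}

# Toy example: predicted vs measured wind speed (m/s)
m = error_metrics([5.0, 7.0, 9.0], [4.0, 8.0, 9.0])
assert abs(m["rmse"] ** 2 - (m["bias"] ** 2 + m["sde"] ** 2)) < 1e-12
```

Normalizing `rmse` by the regional rated capacity (for power) or the measured mean wind speed (for wind) gives the normalized metrics used throughout Section 3.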
From the analyses undertaken in Section 3, addressing the questions raised by the working group, a summary of findings was produced, as outlined below. After each finding, the relevant section where the analysis is discussed is indicated. The summary of findings is:

1. Forecasting wind energy in the regions examined in Alberta over a 48 hour time horizon is possible. (all sections)
2. All three forecasters can provide general forecasts in the regions examined over the 48 hour time horizon. (sections 3.1 and 3.8)
3. The largest error in the forecasts is due to phase errors. The different regions show a consistent behaviour for phase error, suggesting a similar property affects all regions in a similar manner. (sections 3.1 and 3.2)
4. The accuracy of the forecasts in general decreases as the forecast horizon increases. While there is variation between forecasters in accuracy at each forecast horizon, the trend of decreasing accuracy as the forecast horizon increases is preserved by each forecaster. (sections 3.1 and 3.2)
5. The wind power forecasts are the least accurate during the afternoon periods, between hours 13 and 18, after forecast horizon T=6. (section 3.3)
6. The least accurate wind power forecasts occur during the winter season (November, December, January, February) for all forecasters. The best accuracy is during the summer season (June, July and August). (section 3.3)
7. Wind speed prediction errors are amplified by the power conversion. (section 3.5)
8. The power prediction errors peak at a predicted wind speed of 10 m/s and show a different pattern from the wind speed error. (section 3.7)
9. The region with the least amount of error for wind is the Central Region (CE). The region with the largest amount of error is the South West Region (SW). (section 3.9)
10. For power forecasts, the accuracy for both the SW and SC regions is similar. (section 3.9)
11.
Spatial smoothing reduces the error as the number of wind farms and the size of the area covered increase. (section 3.10)
12. None of the forecasters predicts ramp events effectively. The reason may be that the forecasters were not given this specific objective. (section 3.11)
13. In examining the min/max spread for predicted power, it was found that the measured values fell between the predicted minimum and maximum power 81% to 95% of the time. (section 3.14)
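Finding 13 amounts to an empirical coverage check of the forecasters' uncertainty bands. A minimal sketch of that calculation, assuming a hypothetical record structure in which each forecast hour carries a predicted minimum, a predicted maximum and the measured value:

```python
def minmax_coverage(records):
    """Fraction of hours whose measured power lies inside the predicted
    [min, max] band. `records` is a list of (pred_min, pred_max, measured)
    tuples; the structure is illustrative, not ORTECH's actual format."""
    inside = sum(1 for lo, hi, obs in records if lo <= obs <= hi)
    return inside / len(records)

# Toy data: 3 of the 4 measured values fall inside the predicted band
recs = [(10, 30, 20), (5, 15, 14), (0, 8, 9), (20, 40, 25)]
assert minmax_coverage(recs) == 0.75
```

Applied per forecaster and region over the year, a value in the 0.81 to 0.95 range would reproduce the spread quoted in finding 13.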
A number of general observations and recommendations are presented. These include:

a) Data

(i) ORTECH relied on GENIVAR (Phoenix Engineering) to provide QA/QC'd measured data-sets for the different sites. Although ORTECH applied screen-out criteria to the data, more confidence could have been achieved if one or both of the recommendations listed below were employed:

Recommendation 1: GENIVAR (Phoenix Engineering) to provide ORTECH with a coded data-set along with a key to the codes used to substitute invalid data based on their QA/QC procedures. A detailed QA/QC report would also be helpful.

Recommendation 2: ORTECH to receive a raw database and to apply its own QA/QC measures, which include the following:
» Reviewing instrument orientation and calibration reports, and correcting the data accordingly when necessary.
» Flagging data with abnormal wind speeds, power and/or standard deviations, and filtering them out if they fall outside of a certain agreed-upon range.
» Screening the data for icing events or any other anomalies that may not have been caught in the screen-out criteria, and filtering them out.
» Comparing wind speed data from different anemometer levels and from adjacent sites, looking for discrepancies that are then filtered when necessary.
» Other site-specific QA/QC procedures.

(ii) Some of the measured data were modified and/or added after the end of each quarter. The data were expected to be QA/QC'd and ready to be analyzed before being posted on GENIVAR's (Phoenix Engineering) wind server for ORTECH to download, which was not the case.

Recommendation: ORTECH to receive final and unchanged values in the measured data-set.
(iii) ORTECH had access neither to the measured data provided to each of the forecasters nor to their availability. It was assumed that the measured data made available to ORTECH and to each of the forecasters were comparable.

Recommendation: ORTECH to receive a data-availability report from each of the forecasters summarizing the hours of measured data used to produce their forecasts. A more reliable comparison between the different forecasters could then be produced.

(iv) ORTECH assumed that none of the forecasters applied prescreening to their results. The fact that in some cases forecasters did not provide forecasts up to the 48th time horizon (T=48) puts this assumption into question.

Recommendation: ORTECH to receive from each forecaster a report summarizing their prescreening criteria and listing their omitted forecasts if they do prescreening, or a statement saying that they do not.

b) Trial Period

In undertaking such a large project, a trial period would have been advantageous. This trial period would allow all the components of the project (forecasters, data providers, data analyst) to test their procedures and ensure that the logistics ran as smoothly as possible. It is recommended that at least a two- to three-month trial period be undertaken before the main project is initiated. The trial period could be undertaken using a subset of the sites.

c) Freezing the Models

Provided one of the questions to be addressed is a comparison between forecasters, it is recommended that after an initial training period the forecast model codes be frozen. If they are not frozen, the consultant doing the quantitative analysis cannot determine whether the model that generated the output at the start of the project is the same as the model generating the output at the end of the project.
Another alternative would be to have each forecaster describe in detail the changes made to the model as the project progressed.
d) Existing vs. Future Sites

Given that the main emphasis of the study was on wind power forecasting and not wind speed forecasting, it is recommended that only existing sites, which have measured wind power data, be used. Using only measured wind power data provides a more representative indication of the accuracy of the forecasters' power conversion methodologies.

e) Focusing on the Priorities

Given the amount of data, the presentation and analysis of the information generated was considerable. It is recommended that future Working Groups, in cooperation with the consultants, determine the specific questions to be addressed, thus defining the metrics/analyses that will be the focus of the progress and final reports.
1. INTRODUCTION

As part of the Alberta Electric System Operator's (AESO) wind power forecasting pilot project, ORTECH Power (ORTECH) was contracted to provide quantitative analysis services. Wind and power forecast data were provided by three independent wind forecasting firms, referred to herein as Forecaster A, Forecaster B and Forecaster C. The analysis consisted of comparing the predicted data against actual meteorological data and real power data for seven existing Alberta wind power facilities, and against actual meteorological data and derived power data for five future Alberta wind power facilities.

Analysis was carried out by examining the available data from each of the forecasters in seven categories: the entire dataset, All Facilities (AF); Existing Facilities (EF); Future Facilities (FF); and four geographic regions. To define the four regions, the available data for the Existing Facilities and the Future Facilities were distributed geographically into the South West (SW) Region, South Central (SC) Region, South East (SE) Region and a Central (CE) Region (see map in Appendix A). The analysis for the individual facilities was not included in this report due to confidentiality concerns. AESO and the forecasters had access to all the graphs produced using their datasets, which were provided to ORTECH for the purposes of this final report, prior to its issue.

This report documents the analysis for the one-year study period, extending from May 1, 2007 to April 30, 2008, inclusive. The final report deviates from the three quarterly progress reports provided previously in that it attempts to provide more insight into what the information is saying, as opposed to just providing the metrics. As well, the report attempts to address the questions posed by the working group, derived after reviewing the three quarterly reports. The questions posed by the working group are as follows:

1.
What is the general accuracy of the Forecasts?
2. What is the accuracy of the Forecasts at the different forecast horizons studied (T=1 hour to T=48 hours)?
3. What is the accuracy of the Forecasts at different hours of the day and seasons of the year?
4. What is the accuracy of the Forecasted Meteorological Data before running through the Power Conversion models?
5. What is the accuracy of the Power Conversion?
6. What is the potential co-variance from given data samples?
7. What is the accuracy of the Forecast at different wind speeds or different points of a Wind Power Facility's power curve?
8. What is the relative comparison between Forecasts?
9. Which is the region with the least amount of error? Which forecaster forecasts best in that region and why?
10. What is the effect of spatial smoothing on forecast error?
11. How well do the forecasts predict fast ramp-up and ramp-down times (event analysis, CSI)?
12. What is the impact on data availability?
13. Are there times (day/month/weather pattern) when there is more uncertainty in the forecasts than at other times?
14. What is the relationship between the spread of the min/max and the forecast error?
15. What is the correlation factor between all three forecasts? Is this related to the forecast error?

Figures and tables are presented in the report proper as an illustration of the data used in support of addressing the questions. Due to the large amount of information, the complete set of figures and tables used to address each question is presented on a CD accompanying this report. Only power data from the Existing Facilities and the two regions (SW, SC) that contain only existing facilities are presented, since for the Future Facilities one cannot compare predicted power against measured power. For completeness, selected tables and figures for wind speed and power by quarter are presented in Appendices C to E.

The report is presented as follows: Section 2 describes the data sets provided by GENIVAR (Phoenix Engineering); Section 3 presents the results and analyses (responses to the questions posed by the working group); Section 4 presents general observations and recommendations.
2. DATA SETS

GENIVAR (Phoenix Engineering) made the following data available and accessible to ORTECH:

a) Actual meteorological data and real power data for seven (7) existing Alberta wind power facilities, and actual meteorological data and calculated wind power data for five (5) future Alberta wind power facilities, from May 1, 2007 to April 30, 2008, inclusive.

b) Forecast meteorological and power datasets for the seven (7) existing Alberta wind power facilities, the five (5) future facilities and the different Regions (i.e. South West (SW), South Central (SC), South East (SE), Central (CE), Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF)) from the three (3) forecasters, for May 1, 2007 to April 30, 2008, inclusive, every hour, on the hour.

Both actual and forecasted data gathered for the first, second and third quarter progress reports (Q1, Q2 and Q3) were used in this final report, covering the periods May 1, 2007 to July 31, 2007; August 1, 2007 to October 31, 2007; and November 1, 2007 to January 31, 2008, respectively. To complete a full year, ORTECH collected a fourth quarter dataset (February 1, 2008 to April 30, 2008) from GENIVAR (Phoenix Engineering) in the same fashion as the other quarters. Changes to the data made available by GENIVAR (Phoenix Engineering) after the end of each quarter were not incorporated into the datasets used by ORTECH. Also, power data forecasts for Existing Facilities (EF) (the Region) were evaluated based on eleven rather than twelve months of data, from June 2007 to April 2008, since the forecasted power datasets for the month of May 2007 for Existing Facilities (EF) were not complete for any forecaster.

ORTECH relied solely on the aforementioned data source and monitored the source to ensure all the data and/or information considered necessary for performing these analyses was available.
The measured meteorological data, i.e. wind speeds at different heights, wind direction, barometric pressure and temperature at each individual site (a total of 12 sites), were retrieved from GENIVAR's (Phoenix Engineering) historical-readings wind server. The naming convention was provided by AESO, and the output obtained is the ten (10) minute average data, which was then averaged on an hourly basis by ORTECH. Only the hours with an acceptable count (≥ 80%) of valid ten (10) minute data were taken into consideration. ORTECH also converted the time stamps of the retrieved data from Mountain Standard Time (MST) to Universal Time (UTC) in order to perform the analyses against the forecasted data, hence ensuring an apples-to-apples comparison.

The measured power data at the existing facilities were also acquired from GENIVAR's (Phoenix Engineering) historical-readings wind server, in the same manner as the meteorological data except for a different naming convention: AESO.CSDR2, which stands for AESO Current Supply and Demand. Power data were obtained at the seven (7) existing facilities for the months of February, March and April (the 4th quarter). The predicted/calculated power data for the five (5) future sites were provided to ORTECH by GENIVAR (Phoenix Engineering) through their ftp site.

Transmission constraint periods for each month at specific sites were provided to ORTECH by AESO. ORTECH applied a -997 code to the data during these periods, which are then rejected when applying the screen-out criteria based on the acceptable ranges listed in Table 2-5 in Section 2.1.

Similar to previous quarters, each of the three (3) forecasters provided forecast datasets for each of the twelve (12) sites as well as the four (4) regions (i.e. South West (SW), South Central (SC), South East (SE) and Central (CE)) and for Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF).
These datasets were made available to ORTECH through GENIVAR's (Phoenix Engineering) website for the 4th quarter as well: February 1, 2008 to April 30, 2008, inclusive, every hour, on the hour. Each dataset consists of a forecast for forty-eight (48) forecast time horizons, T=1 (one hour ahead) to T=48 (48 hours ahead), divided into two categories: meteorological and power. The meteorological datasets include forecasted wind speed, wind direction, temperature, air pressure and their uncertainty (min/max) and averages at each of the twelve (12) sites. The power datasets include forecasted power and ramp rate and their uncertainty (min/max) at each of the twelve (12) sites, the aforementioned four regions, and EF, FF and AF.
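The hourly averaging and time-zone alignment described above can be sketched as follows. This is a simplified illustration, not ORTECH's actual processing; MST is taken as a fixed UTC-7 offset (Alberta observes daylight saving, but MST itself does not shift):

```python
from datetime import datetime, timedelta

VALID_FRACTION = 0.8  # an hour is kept only if >= 80% of its 10-minute records are valid

def hourly_average(ten_minute_values):
    """Average one hour of 10-minute wind speeds (six slots); None marks an
    invalid record. Returns None when fewer than 80% of records are valid."""
    valid = [v for v in ten_minute_values if v is not None]
    if len(valid) / len(ten_minute_values) < VALID_FRACTION:
        return None
    return sum(valid) / len(valid)

def mst_to_utc(timestamp_mst):
    """Convert a Mountain Standard Time stamp to UTC (MST = UTC-7)."""
    return timestamp_mst + timedelta(hours=7)

assert hourly_average([5.0, 6.0, 7.0, None, 5.5, 6.5]) is not None  # 5/6 valid
assert hourly_average([5.0, 6.0, None, None, 5.5, None]) is None    # 3/6 valid
assert mst_to_utc(datetime(2008, 2, 1, 0, 0)) == datetime(2008, 2, 1, 7, 0)
```

Aligning both the measured and forecasted series to UTC on the hour is what makes the paired comparisons in Section 3 an apples-to-apples exercise.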
Tables 2-1 to 2-4 provide the monthly and total 1-minute and hourly-averaged measured wind speed and measured/calculated power data recovery rates (prior to employing the screen-out criteria detailed in Section 2.1) for the 4th quarter (Q4), from February 1, 2008 to April 30, 2008. Similar tables for the 1st quarter (Q1), 2nd quarter (Q2) and 3rd quarter (Q3) are shown in Appendix C. The data recovery rate is defined as the number of valid data records collected versus the number possible over the reporting period. The method of calculation is as follows:

    Data Recovery Rate (%) = (Data Records Collected / Data Records Possible) x 100

where:

    Data Records Collected = Data Records Possible - Number of Invalid Records

As shown in Tables 2-1 and 2-2, the 1-minute and hourly recovery rates for the measured wind speed during the 4th quarter were generally ≥ 97%, except for site 8 and site 1, which returned 1-minute recovery rates of 81.2% and 87.3%, respectively. The low recovery rate at site 8 resulted from a SCADA PC/hardware problem detected during the month of February, hence returning a recovery rate as low as 52.3% for that month. The recovery rates for the power data were also reasonable (≥ 98%) throughout the analysis period of the 4th quarter, as shown in Tables 2-3 and 2-4. A 90% overall recovery rate is normally considered the minimum requirement by the industry to be temporally representative.
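The recovery-rate definition above reduces to a one-line calculation; a minimal sketch for illustration:

```python
def recovery_rate(records_possible, invalid_records):
    """Data Recovery Rate (%) = (collected / possible) * 100,
    where collected = possible - invalid."""
    collected = records_possible - invalid_records
    return 100.0 * collected / records_possible

# One 29-day month of 10-minute data: 29 days * 144 records/day = 4176 possible
assert recovery_rate(4176, 0) == 100.0
assert recovery_rate(1000, 100) == 90.0  # the usual 90% industry minimum
```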
Table 2-1: Summary of the 1-Minute Measured Wind Speed Data Recovery Rates for each Site during Q4 (February 1, 2008 00:00 UTC to April 30, 2008 23:00 UTC)
(1-minute data recovery rate (%) by site, for February, March, April and All Months)

Table 2-2: Summary of the Hourly-Averaged Measured Wind Speed Data Recovery Rates for each Site during Q4 (February 1, 2008 00:00 UTC to April 30, 2008 23:00 UTC)
(Hourly data recovery rate (%) by site, for February, March, April and All Months)
Table 2-3: Summary of the 1-Minute Measured/Calculated Power Data Recovery Rates for each Site during Q4 (February 1, 2008 00:00 UTC to April 30, 2008 23:00 UTC). Columns: Site, February, March, April, All Months. [1-minute recovery rates (%) not recoverable from this extraction.]

Table 2-4: Summary of the Hourly-Averaged Measured/Calculated Power Data Recovery Rates for each Site during Q4 (February 1, 2008 00:00 UTC to April 30, 2008 23:00 UTC). Columns: Site, February, March, April, All Months. [Hourly recovery rates (%) not recoverable from this extraction.]
2.1 Data Completeness

ORTECH did not apply standard quality control and quality assurance procedures to the measured or forecasted data retrieved. Instead, screen-out criteria were employed on both datasets following a list of acceptable ranges, approved by AESO, for the different variables, as detailed in Table 2-5. Values lying outside of these ranges were rejected. In order to perform legitimate statistical analyses, a complete set of data is required. Therefore, in the analyses of the four quarters ORTECH considered only data sets that are valid (based on the screen-out criteria) and available from all sources (i.e. measured, and forecasted by forecasters A, B and C). If data are invalid or missing at a specific hour/period from one of these sources, ORTECH rejects that hour/period and does not consider it in the statistical analyses.

Table 2-5: Summary of Acceptable Ranges of Variables used by ORTECH to Screen-Out Measured/Calculated and Forecasted Wind and Power Data

Variable | Acceptable Range (inclusive)
Wind Speed (m/s) | 0 to 60
Wind Direction (degrees) | 0 to 360
Power (MW) | -0.1 x site-specific rated capacity to 1.1 x site-specific rated capacity
Surface Temperature (C) | -60 to 60
Surface Pressure (mbar) | 500 to 1500

The annual recovery rates for the hourly-averaged measured wind speed and measured/calculated power after applying the screen-out criteria for the reporting period (May 1, 2007 to April 30, 2008) are summarized in Table 2-6. Each of the four (4) regions (SW, SC, SE and CE) includes three existing and/or future wind farms; thus, ORTECH has stacked together the wind speed data from the sites relevant to each region, since it is difficult to generalize and/or normalize the wind speed for each region.
The wind speed recovery rates calculated for those regions and presented in Table 2-6 are consequently based on the stacked data; hence, one should be careful before drawing any conclusions from these recovery rates.
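The screen-out step can be sketched as a simple range filter. This is an illustrative sketch only: the record layout and function name are ours, and the numeric ranges follow the reconstructed Table 2-5 values above.

```python
# Minimal sketch of the screen-out step: a record is kept only if every
# variable lies in its acceptable range (inclusive), per Table 2-5.
# Field names and sample values are illustrative assumptions.

def screen_record(rec: dict, rated_mw: float) -> bool:
    """Return True if every variable in the record is within range."""
    checks = [
        (0.0, 60.0, rec["wind_speed"]),                    # m/s
        (0.0, 360.0, rec["wind_dir"]),                     # degrees
        (-0.1 * rated_mw, 1.1 * rated_mw, rec["power"]),   # MW
        (-60.0, 60.0, rec["temperature"]),                 # deg C
        (500.0, 1500.0, rec["pressure"]),                  # mbar
    ]
    return all(lo <= v <= hi for lo, hi, v in checks)

rec = {"wind_speed": 8.2, "wind_dir": 270.0, "power": 35.0,
       "temperature": -12.0, "pressure": 930.0}
print(screen_record(rec, rated_mw=68.0))  # True
```

In the report's procedure an hour is then dropped entirely if any one source (measured, or forecaster A, B or C) fails this screen, so that only complete hours enter the statistics.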
The annual recovery rates (R) for the wind speed and power complete data sets used in the statistical analyses at each of the Selected Horizons and in the different regions are summarized in Tables 2-7 and 2-8, respectively. They are calculated as follows:

R = (Number of hours with available and valid data during the year of analysis) / (Number of hours possible during the year of analysis)

The annual average recovery rates for the complete data set of hourly-averaged wind speed were greater than 90% in all regions except CE and FF, where the annual average recovery rates were 83.6% and 87.2%, respectively, as shown in Table 2-7. These rather low recovery rates are the result of the low recovery rates (illustrated in Table 2-6) for the measured wind speed at some of the future sites, particularly sites 10, 11 and 12, which fall within those regions. The annual recovery rates for the complete data set of power were greater than 92% for the SW, SC and CE regions, while they were around 90% for the SE, CE and EF, the lowest being the latter region (EF) with an annual recovery rate of 88.1%, as illustrated in Table 2-8. The fairly low annual recovery rate for EF resulted from incomplete forecasted power data (from all forecasters) during the month of May 2007. To avoid misleading results, ORTECH considered only eleven (11) months, June 2007 to April 2008 inclusive, for the Existing Facilities in the analyses of power (summarized in the following sections).
Table 2-6: Summary of the Annual Recovery Rates of Measured Hourly-Averaged Wind Speed and Measured/Calculated Power Data at the Different Sites and Regions, May 1, 2007 to April 30, 2008. Columns: Site/Region; Recovery Rate of Hourly Wind Speed Data (%); Recovery Rate of Hourly Power Data (%). Rows include the individual sites and the SW, SC, SE, CE, EF, FF and AF aggregations. [Values not recoverable from this extraction.]
Table 2-7: Summary of the Annual Recovery Rates of the Hourly-Averaged Wind Speed Data Sets used in the Statistical Analyses at Each of the Selected Horizons and in the Different Regions, May 1, 2007 to April 30, 2008. Columns: SW, SC, SE, CE, EF, FF, AF. Rows: the Selected Forecast Horizons, plus Avg, Min and Max over T=1 to T=48. [Recovery rates (%) not recoverable from this extraction.]
Table 2-8: Summary of the Annual Recovery Rates of the Hourly-Averaged Power Data Sets used in the Statistical Analyses at Each of the Selected Horizons and in the Different Regions, May 1, 2007 to April 30, 2008. Columns: SW, SC, SE, CE, EF, FF, AF. Rows: the Selected Forecast Horizons, plus Avg, Min and Max over T=1 to T=48. [Recovery rates (%) not recoverable from this extraction.]
3. RESULTS AND ANALYSES

The forecast data were analyzed using the statistical methods described in the first, second and third quarterly reports. Emphasis was placed on deriving meaningful and precise values for a metric that characterizes the typical magnitude of error, over all hours in the evaluation, between the predicted and measured data. Most error measures produce numbers that assess the deviations between predicted and measured values, but it is more meaningful to arrive at a statement such as "at site X, y% of the forecasts are within z of the mean", which requires the correct choice of error measure and some indication of the type of distribution. To this end, only those statistical parameters that provide some insight into the error characteristics are presented, organized along the following lines:

- Observations on an hourly time scale; forecasts at 1 to 48 hour time horizons; meteorological situations on 3 to >8 hour time scales.
- Temporal statistics: 1 to 48 hour time horizons, and seasonal.
- Stratified statistics: by season and by meteorological situation.
- Summary statistics: confidence intervals etc., resolved to the 1 to 48 hour horizons.
- Distinction between forecasters: resolved to the 1 to 48 hour horizons.

The final outcome should attempt to distinguish one forecast methodology from another, resolved to the 1 to 48 hour forecast horizons, for the various regions. The information provided in this report works toward that goal. After reviewing the three quarterly reports, the working group provided a set of specific questions that they wished to have addressed in the final report. The questions listed below are addressed in this section. In addressing these questions an extensive amount of statistical analysis was undertaken to provide insight and support for the findings. At the direction of the working group, the emphasis was put on power as opposed to wind.
In some of the sections below, metrics for both wind and power are presented; however, following the working group's direction, most sections show metrics only for power. Similarly, in most sections only one metric is shown, the Root Mean Square Error (RMSE). All the metrics, figures and tables used in the analysis are provided on a CD attached at the back of the report.
The questions addressed, as provided by the working group, are as follows:

1. What is the general accuracy of the Forecasts?
2. What is the accuracy of the Forecasts at the different forecast horizons studied (T=1 hour to T=48 hours)?
3. What is the accuracy of the Forecasts at different hours of the day and seasons of the year?
4. What is the accuracy of the Forecasted Meteorological Data before running through the Power Conversion models?
5. What is the accuracy of the Power Conversion?
6. What is the potential co-variance from given data samples?
7. What is the accuracy of the Forecast at different wind speeds or different points of a Wind Power Facility's power curve?
8. What is the relative comparison between Forecasts?
9. Which is the region with the least amount of error? Which forecaster forecasts best in that region and why?
10. What is the effect of spatial smoothing on forecast error?
11. How well do the forecasts predict fast ramp-up and ramp-down times, event analysis (CSI)?
12. What is the impact on data availability?
13. Are there times (day/month/weather pattern) when there is more uncertainty in the forecasts than at other times?
14. What is the relationship between the spread of the min/max and the forecast error?
15. What is the correlation factor between all three forecasts? Is this related to the forecast error?

Results are presented for the four (4) geographical regions, South-West (SW), South-Central (SC), South-East (SE) and Central (CE), as well as for Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF), when wind speed is presented. When power is presented, only SW, SC and EF are shown, in order to compare forecasted power to measured power.
Wind speed and power forecasts were provided each hour for forty-eight (48) hourly forecast horizons, T=1 (one hour ahead) to T=48 (48 hours ahead). In the following text, "Selected Forecast Horizons" refers to T=1, T=2, T=3, T=4, T=6, T=12, T=17, T=24, T=36 and T=48, which were defined by the working group. As indicated in section 2, the forecasted power data set for the month of May 2007 for Existing Facilities was not complete for any forecaster. Therefore, when evaluating the power metrics for Existing Facilities, only eleven months of data were used (June 2007 to April 2008). The results include those illustrations that were deemed relevant, as mentioned above. A large number of additional illustrations can be generated from the statistical analysis but were not included for practical reasons; they are available on the CD accompanying the report.

3.1 What is the General Accuracy of the Forecasts?

3.1.1 MAE and RMSE - Wind Speed

Each of the four regions (SW, SC, SE and CE) includes three existing and/or future facilities; thus, it is difficult to generalize and/or normalize the wind speed for each region. Therefore, the assessment of the forecast errors for wind speed was processed for each region without normalization, i.e., the predicted and measured pairs at the individual sites were simply stacked together for the relevant regions. "Stacking" means grouping the wind data collected from the sites in a particular region and treating them as one long data set. The objective of such a grouping is to derive regional error statistics for wind data without doing any averaging. The procedure can be explained through an example. The South West region contains three meteorological masts; therefore, three wind data time series are available. These three data sets are combined together irrespective of dates/times.
The error statistics are then calculated using the same method as for an individual site. With this approach, no averaged wind speed data are derived from the individual time series.
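The stacking procedure described above can be sketched as follows. This is a minimal illustration with made-up numbers; the function name is ours, not the report's.

```python
# Sketch of "stacking": site time series for a region are pooled into one
# long predicted/measured series (dates ignored), and a single error
# statistic is computed over the pooled pairs. Sample data are illustrative.
import math

def stacked_rmse(sites):
    """sites: list of (predicted_list, measured_list) pairs, one per site."""
    pred, meas = [], []
    for p, m in sites:          # pool all valid pairs, irrespective of time
        pred.extend(p)
        meas.extend(m)
    n = len(pred)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, meas)) / n)

site1 = ([5.0, 7.0], [6.0, 7.0])   # (predicted, measured) wind speeds, m/s
site2 = ([9.0], [8.0])
print(round(stacked_rmse([site1, site2]), 3))  # 0.816
```

Note that no averaging across sites takes place: the three errors here (-1, 0, +1 m/s) enter the RMSE individually, exactly as they would for a single site.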
In assessing the general accuracy of the wind speed predicted by the forecasters in all regions and at all time horizons, the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) were the specific metrics used. The RMSE was further decomposed into its component parts, the bias and the variance of the error. The RMSE connects the important statistical quantities of the two time series. It is composed of three components originating from different effects. The bias accounts for the difference between the mean values of forecast and measurement. The standard deviation of the error (sde) measures the fluctuations of the error around its mean; the sde is very useful as it directly provides the 68% confidence interval if the errors are normally distributed. In the context of comparing forecast and measurement, the sde has two contributions. The first is the sdbias, i.e. the difference between the standard deviations of the predicted and measured values, which evaluates errors due to wrongly forecasted variability; together with the bias, this is an indicator of amplitude errors. The second contribution is the dispersion (disp), which involves the cross-correlation coefficient (r) weighted with the standard deviations of both time series; thus, disp accounts for the contribution of phase errors to the RMSE. The amplitude and phase errors are illustrated in Figure 3-1, which shows a four-day comparison of the time series of measured and predicted wind speeds at one wind farm. The amplitude error is clearly seen at the 17th hour on January 14, 2008. The phase error is more pronounced at the 20th hour on January 12, 2008 and the 9th hour on January 13, 2008.
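The decomposition described above can be written out explicitly. The sketch below assumes the standard convention RMSE^2 = bias^2 + sde^2 with sde^2 = sdbias^2 + disp^2, where sdbias is the difference of the two standard deviations and disp involves the cross-correlation coefficient r; the function name and sample data are ours.

```python
# Sketch of the RMSE decomposition into bias, sdbias and disp, assuming
# RMSE^2 = bias^2 + sdbias^2 + disp^2 with
#   sdbias = sigma_pred - sigma_meas
#   disp   = sqrt(2 * sigma_pred * sigma_meas * (1 - r))
import math

def decompose_rmse(pred, meas):
    n = len(pred)
    mp, mm = sum(pred) / n, sum(meas) / n
    bias = mp - mm                                   # amplitude error (mean)
    sp = math.sqrt(sum((x - mp) ** 2 for x in pred) / n)
    sm = math.sqrt(sum((x - mm) ** 2 for x in meas) / n)
    cov = sum((p - mp) * (m - mm) for p, m in zip(pred, meas)) / n
    r = cov / (sp * sm)                              # cross-correlation
    sdbias = sp - sm                                 # wrongly forecast variability
    disp = math.sqrt(2 * sp * sm * (1 - r))          # phase-error contribution
    rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, meas)) / n)
    return rmse, bias, sdbias, disp

rmse, bias, sdbias, disp = decompose_rmse([4.0, 6.0, 9.0], [5.0, 5.0, 8.0])
# identity check: rmse^2 equals bias^2 + sdbias^2 + disp^2
print(round(abs(rmse ** 2 - (bias ** 2 + sdbias ** 2 + disp ** 2)), 10))  # 0.0
```

The identity holds exactly when population standard deviations are used, which is why the components can be read as an additive budget of the squared RMSE.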
Figure 3-1: Comparison of the time series of measured and predicted wind speeds at the first forecast horizon (T=1 hr) by an anonymous forecaster at a particular wind farm (name not disclosed). The circled examples mark an amplitude error and a phase error; the x-axis spans January 12 to 16, 2008 and the y-axis is wind speed (m/s). [Figure not reproduced here.]

Figure 3-2 and Figure 3-3 show the annual MAE and RMSE, respectively, by region and by forecast horizon for each forecaster. For completeness, the quarterly MAE and RMSE are presented in Appendix D.
Figure 3-2: Annual Mean Absolute Error (MAE) of wind speed (WS) predictions in South West (SW), South Central (SC), South East (SE), Central (CE), Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF) by the three forecasters A, B and C as a function of forecast horizon. [Three-panel figure, one panel per forecaster; not reproduced here.]
Figure 3-3: Annual Root Mean Square Error (RMSE) of wind speed (WS) predictions in South West (SW), South Central (SC), South East (SE), Central (CE), Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF) by the three forecasters A, B and C as a function of forecast horizon. [Three-panel figure, one panel per forecaster; not reproduced here.]
Referring to Figures 3-2 and 3-3, the range of annual values over the three forecasters is 1.4 to 3.5 m/s for the MAE and 1.9 to 4.6 m/s for the RMSE. It should be noted that Forecasters A and C show a rapid increase in MAE and RMSE from time horizon T=1 to time horizon T=6 before flattening out, while Forecaster B remains relatively flat over all horizons. This difference between Forecasters A and C and Forecaster B is probably due to the forecast methodologies used. For all forecasters the ordering of the regions for both MAE and RMSE is the same, i.e. the largest MAE and RMSE values occur in the SW region and the lowest in the Central (CE) region, with the other regions falling in between. There are possibly three reasons why the SW region has the largest values and the CE region the smallest. First, as shown on the map of Alberta in Appendix A, the SW region's topography is more complex than that of the CE region, making it more difficult to forecast in the SW region. Secondly, the SW region is much smaller than the CE region, so its individual sites are closer together, which may affect forecasting through topography and spatial smoothing. Finally, the winds are on average lighter in the CE region than in the SW region, which can influence the statistic.

3.1.2 Decomposition of RMSE

Figure 3-4 shows the decomposition of the RMSE for the three forecasters. For all forecasters the dispersion (disp) component dominates the RMSE in all regions over all forecast horizons. The disp accounts for the contribution of phase errors to the RMSE; thus, the largest error in the forecasts is due to phase errors. It should be noted that the different regions show consistent behaviour for disp, suggesting that a similar property affects all regions in a similar manner.
After time horizon T=6 there appears to be a linear increase of dispersion with prediction horizon, indicating a systematic growth in the average phase errors between prediction and measurement with increasing lead time. The second largest contributor to the RMSE is the sdbias, the absolute value of which increases steeply for Forecasters A and C until forecast horizon T=6 and then levels off with increasing lead time. Another common feature is the negative sdbias, which indicates that the variability of the predicted wind speed is smaller than that of the measured wind speed. Model fidelity, as would be expected, limits the forecasters' ability to predict the finer detail of the wind speed.
The bias is the smallest contributor to the RMSE, although it appears significant in the Central region for Forecaster A, in all regions for Forecaster B, and in the South West, South East and Central regions for Forecaster C. The bias varies in the range of -1.4 to 1.5 m/s. An attempt was made to correlate the bias with terrain features, but no commonality was found among the three forecasters, which may reflect the differences in the forecast methodologies employed.
Figure 3-4: Annual errors of wind speed predictions in South West (SW), South Central (SC), South East (SE), Central (CE), Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF) by Forecasters A, B and C as a function of forecast horizon. Note that the RMSE and its components, dispersion (disp), bias (bias) and standard deviation bias (sdbias), are shown with different symbols and colours. [Multi-panel figure, one set of panels per forecaster; not reproduced here.]
3.1.3 RMSE and MAE - Power

The accuracy of wind power prediction is dictated by several factors, including:

- the accuracy of the wind speed prediction;
- the amplification and dampening of the wind speed prediction error through the nonlinear power curve; and
- wind farm efficiency, including turbine availability and performance.

The overall accuracy of wind speed prediction was assessed in the previous section. The general accuracy of power prediction is described in this part of the report. Due to the differences in wind farm efficiency between the Existing Facilities and Future Facilities, the assessment of wind power prediction accuracy focuses on the wind power regions consisting of the existing wind farms, i.e., the South West region and South Central region, and the Existing Facilities. As noted in the data section (section 2), only 11 months of data were used for the Existing Facilities power analysis, since for power prediction the forecasters had only the June 2007 to April 2008 period fully populated. All the error measures described in this section are normalized by the rated wind power capacity.

Normalized RMSE

Analogous to the wind speed error measures, Figure 3-5 shows the annual normalized RMSE at different forecast horizons in the different regions for the three forecasters. For completeness, the quarterly normalized RMSE is presented in Appendix E.
Figure 3-5: Annual Normalized Root Mean Square Error (RMSE, %) of power predictions in South West (SW), South Central (SC) and Existing Facilities (EF) by the three forecasters A, B and C as a function of forecast horizon. Note that the actual errors are normalized by the rated capacity (RC) of the region of power aggregation. [Three-panel figure, one panel per forecaster; not reproduced here.]
From Figure 3-5, the normalized annual RMSE of the power prediction exhibits a general increase with forecast horizon, particularly over the first 6 horizons. The normalized RMSE is in the range of 6% to 20% for the first six forecast horizons and 20% to 30% between the 7th and 48th forecast horizons. The South West (SW) and South Central (SC) regions, which include the Existing Facilities and have the smallest spatial dimensions as shown in Appendix A, have the highest normalized RMSE. The normalized RMSE is in general lower for regions involving future facilities (not shown here) or with larger spatial dimensions. For the Future Facilities, the measured power was calculated using power curves and wind speed distributions with a generic wind farm performance, which probably reduces the prediction errors. As discussed in section 3.1.1, the RMSE can be decomposed into meaningful parts to shed light on the different error sources: the bias as an amplitude indicator, the difference between the standard deviations of predicted and measured values indicating errors due to wrongly predicted variability, and the dispersion indicating the contribution of phase errors to the RMSE. Figure 3-6 shows the decomposition of the RMSE for power, which displays similar traits to that for wind.

Normalized MAE

Figure 3-7 presents the normalized annual MAE. For completeness, the quarterly normalized MAE is presented in Appendix E. The normalized annual MAE shows a general increase with forecast horizon, particularly for the first 6 horizons, similar to the pattern of the normalized RMSE.
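The normalization used for the power metrics, division of the error by the rated capacity of the region of aggregation, can be sketched as follows; the function name and sample values are illustrative, not from the report.

```python
# Sketch of the rated-capacity normalization: RMSE of power (MW) expressed
# as a percentage of the region's rated capacity. Sample data are illustrative.
import math

def normalized_rmse_pct(pred_mw, meas_mw, rated_mw):
    n = len(pred_mw)
    rmse = math.sqrt(sum((p - m) ** 2 for p, m in zip(pred_mw, meas_mw)) / n)
    return 100.0 * rmse / rated_mw

# Two hours with +/-10 MW errors in a 100 MW region -> 10% of rated capacity
print(round(normalized_rmse_pct([30.0, 50.0], [40.0, 40.0], 100.0), 1))  # 10.0
```

Normalizing by rated capacity makes regions of different sizes comparable, which is what allows the 6% to 30% ranges quoted above to be stated across regions.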
Figure 3-6: Annual errors of power predictions in South West (SW), South Central (SC) and Existing Facilities (EF) by Forecasters A, B and C as a function of forecast horizon. Note that the RMSE and its components, dispersion (disp), bias (bias) and standard deviation bias (sdbias), are shown with different symbols and colours. [Multi-panel figure, one set of panels per forecaster; not reproduced here.]
Figure 3-7: Annual Normalized Mean Absolute Error (MAE, %) of power predictions in South West (SW), South Central (SC) and Existing Facilities (EF) by the three forecasters A, B and C as a function of forecast horizon. Note that the MAE is normalized by the rated capacity of the region of aggregation. [Three-panel figure, one panel per forecaster; not reproduced here.]
3.2 What is the accuracy of the Forecasts at the different forecast horizons studied (T=1 hour to T=48 hours)?

Table 3-1 provides an indication of the accuracy of the power forecasts at the different forecast horizons suggested by the working group, for each forecaster. The table presents the Mean Absolute Error and the P95, both in units of power (MW); the P95 is the level that the Absolute Error does not exceed 95% of the time. The rated capacity for each region is also presented. In general, accuracy decreases as the forecast horizon increases. Accuracy at each forecast horizon varies between forecasters; however, the trend of decreasing accuracy with increasing forecast horizon holds for each forecaster.
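The P95 statistic used in Table 3-1 can be sketched as a simple empirical quantile of the absolute errors. This is an illustration of the definition only; the function name and sample series are ours, and the report does not state which quantile estimator was used.

```python
# Sketch of the P95 metric: the absolute-error level not exceeded 95% of the
# time, computed here as a plain empirical quantile. Data are illustrative.
import math

def p95_abs_error(pred, meas):
    errs = sorted(abs(p - m) for p, m in zip(pred, meas))
    k = math.ceil(0.95 * len(errs)) - 1   # smallest value covering >= 95%
    return errs[k]

pred = [float(i) for i in range(20)]      # 20 hourly forecasts (MW)
meas = [0.0] * 20                         # measured power (MW)
print(p95_abs_error(pred, meas))          # 18.0
```

Reporting both MAE and P95 is informative because the MAE summarizes the typical error while the P95 bounds the large, operationally relevant errors.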
Table 3-1: General Accuracy of the Power Forecasts for Each Forecaster on an Annual Basis. For each forecaster (A, B, C) and region (SW, SC, EF), the table lists the rated capacity together with the MAE and P95 (MW) at each Selected Forecast Horizon. [Values largely not recoverable from this extraction; the first rated capacity shown is 149.8 MW.]
3.3 What is the accuracy of the Forecasts at different hours of the day and seasons of the year?

3.3.1 Hours of the Day

Figure 3-8 presents the annual accuracy of the power forecast for the four 6-hour periods of the day (hours 1-6, 7-12, 13-18 and 19-24) for SW, SC and EF at the specified forecast horizons, for each forecaster. For completeness, the quarterly accuracy of the power forecasts for the 6-hour periods is provided in Appendix E. While the values for each time of day vary from forecaster to forecaster, it is apparent that the forecasts are least accurate during the afternoon period, hours 13 to 18, beyond forecast horizon T=6. The lower accuracy in the afternoon can be accounted for by higher wind speeds and their variability, which convert to higher power generation and hence larger errors, as explained by the power conversion curve (see sections 3.5 and 3.7).

3.3.2 Seasons of the Year

Figure 3-9 shows the accuracy of the power forecasts over the four seasons of the year, as defined by the working group:

Winter...November, December, January, February
Spring...March, April, May
Summer...June, July, August
Fall...September, October

From Figure 3-9, the lowest accuracy occurs during the winter season for all forecasters, while the greatest accuracy occurs during the summer season. It should be noted that Forecaster B's summer forecasts in the SW region are less accurate than those of Forecasters A and C: when the hourly values (measured vs. forecast) for June through August were examined for Forecaster B in the SW region, there was a much larger deviation from the measured values in July and August than for Forecasters A and C. The reduced accuracy in the winter season could be due to higher wind speeds and an increased number of weather systems covering the total area at that time of year.
Appendix E contains the accuracy of the power forecasts by month, for the months making up the four seasons of the year.
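The seasonal stratification defined above can be sketched as a month-to-season grouping. The sample errors below are illustrative, not values from the report.

```python
# Sketch of the working-group seasonal stratification: months are mapped to
# seasons and absolute errors are averaged per season to give a seasonal MAE.
from collections import defaultdict

SEASON = {11: "Winter", 12: "Winter", 1: "Winter", 2: "Winter",
          3: "Spring", 4: "Spring", 5: "Spring",
          6: "Summer", 7: "Summer", 8: "Summer",
          9: "Fall", 10: "Fall"}

def mae_by_season(samples):
    """samples: iterable of (month_number, absolute_error) pairs."""
    grouped = defaultdict(list)
    for month, abs_err in samples:
        grouped[SEASON[month]].append(abs_err)
    return {season: sum(v) / len(v) for season, v in grouped.items()}

out = mae_by_season([(1, 4.0), (2, 6.0), (7, 1.0), (10, 2.0)])
print(out)  # {'Winter': 5.0, 'Summer': 1.0, 'Fall': 2.0}
```

The same grouping, keyed on hour of day instead of month, yields the 6-hour time-of-day stratification of section 3.3.1.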
Figure 3-8: Annual Accuracy of the Power Forecast for Specific 6-Hour Time Periods for Each Forecaster, Using RMSE (% of Rated Capacity), for the South West (SW), South Central (SC) and Existing Facilities (EF).
Figure 3-9: Seasonal Accuracy of the Power Forecast for Each Forecaster, Using RMSE (% of Rated Capacity), for the South West (SW), South Central (SC) and Existing Facilities (EF).
3.4 What is the accuracy of the Forecasted Meteorological Data before running through the Power Conversion models?

This question has been addressed in the preceding sections by examining the accuracy of the wind speed forecasts.

3.5 What is the accuracy of the Power Conversion?

The error in the power has three components: an error due to the error in the wind speed forecast, a systematic error due to the shape of the power curve, and an error due to the lack of an indication of wind farm performance. The prediction errors in wind speed may be amplified or damped when predicted wind speeds are converted into the wind power prediction. The amplification magnitude depends on the power curve, particularly on its derivative at different wind speeds. If the wind farm performance is known, the power conversion errors can be modeled. However, the wind farm performance data were not known to ORTECH, or to the forecasters, at the time the wind power predictions were issued. Therefore, ORTECH attempted to lump together the power conversion error and the effect of wind farm performance on the prediction errors by comparing the same error measures, i.e., RMSE and MAE, for both the wind speed and power predictions. In order to facilitate the comparison, both the RMSE and MAE should be normalized on a common scale. The common scale for normalizing the wind speed is the average measured wind speed. For normalizing the power, the common scale is the mean of the measured power, because it inherently accounts for both the turbine type and the typical wind speed (reference 3 in the bibliography). The annual mean values and standard deviations of measured wind speed and power are shown in Table 3-2.
The relative standard deviation of the measured wind speed, i.e., the ratio of the standard deviation to the average wind speed, lies in a narrow range between 0.53 and 0.6 in the different regions, with the exception of 0.47 in the Central (CE) region, which has the lowest average measured wind speed. The relative standard deviation of the measured power, i.e., the ratio of the standard deviation to the average measured power, lies between 0.72 and 0.94 in the different regions. The relative standard deviation of measured power is larger than that of measured wind speed by a factor of 1.23 to 2.2. This factor is due to the power curve and wind farm performance, and can be regarded as a reference value for gauging the power conversion errors. According to reference 3 in the bibliography, if the power were proportional to the cube of the wind speed and the variations in wind speed were small compared to the mean value, this factor would be around 3, corresponding to the average derivative of the power curve.
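The "factor of around 3" remark follows from first-order sensitivity of a cubic power law: if P is proportional to v^3, then dP/P = 3 dv/v for small changes. A minimal numerical check, with illustrative numbers of our own:

```python
# Sketch of the cubic-law sensitivity behind the "factor of around 3":
# for P = c * v**3 and a small relative change in v, the relative change
# in P is roughly three times as large. Numbers are illustrative.

def relative_power_change(rel_speed_change: float) -> float:
    """First-order estimate: dP/P = 3 * dv/v for P proportional to v**3."""
    return 3.0 * rel_speed_change

v, dv = 8.0, 0.08                  # a 1% relative change in wind speed
p0, p1 = v ** 3, (v + dv) ** 3
exact = (p1 - p0) / p0             # exact relative change in power
print(round(relative_power_change(dv / v), 4))  # 0.03
print(round(exact, 4))             # close to the first-order estimate
```

The observed factors of 1.23 to 2.2 being below 3 is consistent with the real power curve being flatter than a pure cubic over much of the operating range (and flat above rated power).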
Table 3-3 shows the ratios of the normalized RMSE and MAE for power to those for wind speed, where the RMSE and MAE are normalized by the mean measured power and mean measured wind speed for the power and wind speed predictions, respectively. These ratios can be used as an indicator of the power conversion errors. Generally speaking, as shown in Table 3-3, the wind speed prediction errors are amplified by the power conversion for all forecast horizons by all three forecasters, except for Forecaster C at the 1st forecast horizon, where the wind speed prediction errors are dampened after power conversion. This dampening feature at the 1st forecast horizon might indicate a different power conversion or power prediction approach. The magnitude of error amplification due to power conversion is low relative to those reported in the literature (reference 3 in the bibliography). The magnitude of the power conversion errors is similar for all three forecasters to what is seen in the measured values.

Table 3-2: Annual Relative Standard Deviations of Measured Wind Speed and Power in Different Regions. Columns: Region; Mean Measured Wind Speed (m/s); Standard Deviation of Measured Wind Speed (m/s); Relative Standard Deviation of Measured Wind Speed; Rated Power (MW); Mean Measured Power Output (MW); Standard Deviation of Measured Power Output (MW); Relative Standard Deviation of Measured Power Output; Ratio of Standard Deviation of Measured Power to Measured Wind Speed. Rows: SW, SC, SE, CE, EF, FF, AF. [Values not recoverable from this extraction.]
Table 3-3: Annual Ratio of MAE and RMSE for Power, Normalized by the Measured Average Power, to MAE and RMSE for Wind Speed, Normalized by the Measured Mean Wind Speed. Columns: Forecast Horizon; Ratio of MAE and Ratio of RMSE for the South-West Region, the South-Central Region and the Existing Facilities; one block per forecaster (A, B, C). [Values not recoverable from this extraction.]
Note: Ratio of MAE - ratio of the MAE for the predicted power, normalized by the measured average power, to the MAE for the predicted wind speed, normalized by the measured mean wind speed. Ratio of RMSE - defined analogously for the RMSE.
3.6 What is the Potential co-variance from given data samples?

ORTECH examined this area but was unable to extract any meaningful information.

3.7 What is the accuracy of the Forecast at different wind speeds or different points of a Wind Power Facility's power curve?

Typically, a power curve defines the relationship between wind speed and power. Figure 3-1 shows a typical power curve for a GE 1.5 MW sle wind turbine, indicating the electrical power output (kW) for normal turbulence intensities (1% < TI < 15%) at different hub-height wind speeds, together with the absolute gradient of the power curve (W·s/m). Because of the non-linear shape of the power curve, the effect of an error in wind speed predictions on power predictions varies with the point on the curve. Therefore, it is desirable to understand the accuracy at different wind speeds, i.e., at different points on the power curve. To do this, the forecasting errors in each region were binned based on the predicted wind speeds at a given forecast horizon. The range of the predicted wind speeds was divided into equidistant bins of width 1 m/s, i.e., 0-1, 1-2, ... m/s, with the upper bound of each bin closed. For each bin, the RMSE and MAE were calculated and used to illustrate the accuracy at different wind speeds.

Figure 3-1: Typical Power Curve (GE 1.5 MW sle), showing power output (kW) and the absolute gradient of the power curve (W·s/m) as functions of hub-height wind speed (m/s).
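The binning procedure described above can be sketched as follows. The upper-bound-closed convention follows the text; the implementation details and names are our own, not ORTECH's actual code.

```python
from collections import defaultdict
import math

def binned_errors(pred_ws, errors, bin_width=1.0):
    """Group forecast errors into wind speed bins of `bin_width` m/s,
    keyed by the *predicted* wind speed. The upper bound of each bin is
    closed, as in the report: a prediction of exactly 1.0 m/s falls in
    the 0-1 bin. Returns {(lo, hi): (MAE, RMSE)} per bin."""
    bins = defaultdict(list)
    for ws, err in zip(pred_ws, errors):
        # math.ceil(ws / bin_width) gives 1 for (0, 1], 2 for (1, 2], ...
        # (a predicted speed of exactly 0 is placed in the first bin)
        idx = max(1, math.ceil(ws / bin_width))
        bins[idx].append(err)
    stats = {}
    for idx, errs in bins.items():
        mae = sum(abs(e) for e in errs) / len(errs)
        rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
        stats[(bin_width * (idx - 1), bin_width * idx)] = (mae, rmse)
    return stats
```

Bins with very few samples (e.g. above 15 m/s, as noted below) should be interpreted with caution.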
The absolute gradient of power is not resolved at the cut-in and cut-out wind speeds. Figure 3-11 shows the annual RMSE for wind speed for the Existing Facilities (EF) at specific forecast horizons for each forecaster. One can see that for wind speeds between 5 m/s and 15 m/s the RMSE steadily increases. The values below 5 m/s are insignificant for power prediction, and those above 15 m/s have less statistical significance due to the small number of data points. The increased errors for predicted wind speeds between 5 m/s and 15 m/s are amplified differently by the non-linear power curve. Figure 3-12 shows the annual normalized RMSE of the power prediction at different predicted wind speed bins for all three forecasters. It demonstrates a consistent peak at a predicted wind speed of 10 m/s, and it shows a different pattern from the wind speed error. This appears to reflect the amplification of errors in wind speed prediction by the locally high derivative of the non-linear power curve at wind speeds of about 10 m/s. The second peak, occurring around 20 m/s, may be attributable both to the low derivative of the power curve near the cut-out wind speed and to the less significant statistical representation due to considerably fewer data points at higher wind speeds.
Figure 3-11: Annual RMSE of wind speed predictions at the chosen forecast horizons (T = 1 to T = 48 hr) for the Existing Facilities (EF) by Forecasters A, B and C, as a function of predicted wind speed.
Figure 3-12: Annual Normalized RMSE of Power for the Existing Facilities (EF) at Different Predicted Wind Speed Bins, for forecast horizons T = 1, 2, 3, 4, 6, 12, 17, 24, 36 and 48 hr (one panel per forecaster).
3.8 What is the relative comparison between Forecasts?

3.8.1 Contemporary Analyses

This section provides contemporary comparisons between the forecasters for wind speed and power. The metrics shown for comparison are the RMSE for wind speed for all regions, the RMSE for power for all regions, and the RMSE using persistence as the reference for power in SW, SC and EF.

Wind Speed

Figure 3-13 provides the annual comparison between forecasters for wind speed using the RMSE. For wind speed, the forecasters were not compared against persistence because at a number of sites the wind speed data supplied to the forecasters was not received on time; thus the value they used at horizon zero differed from the value ORTECH used to calculate persistence. ORTECH used the horizon-zero value in the data set supplied at the end of the quarter, not the value provided in real time.

Some of the information contained in this section is redundant and has been shown in other sections. There it is used to describe different features, not necessarily to compare the forecasters, which is the purpose of this section.

Figure 3-13 shows that in most regions and at most horizons the difference in the RMSE between forecasters is small. For T=1 and T=2, Forecaster A has the lowest RMSE in all regions. For completeness, Appendix D contains the quarterly comparisons between forecasters for wind speed using the RMSE.
Figure 3-13: RMSE Annual Comparison between Forecasters (A, B, C) for Wind Speed by Region (SW, SC, SE, CE, EF, FF, AF).

Power

Figures 3-14 and 3-15 provide the comparison for power between the forecasters, and between the forecasters and persistence, for the SW and SC Regions and EF, using the RMSE and the skill score (when compared against persistence). Figure 3-14 shows that the forecasts are close to persistence at the early horizons T=1 and T=2 in the SW and SC Regions and in EF. From horizon T=6 onwards, the forecasts are better than persistence. For completeness, Figure 3-14 is provided for each quarter in Appendix E. In most regions there is little to distinguish between the forecasters, although Forecaster B shows larger errors at the first two horizons.

In Figure 3-15, persistence dominates up until T=3, after which the forecasts are better than persistence. In all regions, for T=6 to T=48, there is little to distinguish amongst the three forecasters. From T=6 to T=48, the skill scores are in a range between 1% and 5% better than persistence.
Figure 3-14: Annual Comparison between Forecasters for Power by Region (Normalized RMSE as % of Rated Capacity (RC)) and a Comparison against Persistence for the SW and SC Regions and EF.

Figure 3-15: Annual Comparison between Forecasters and Persistence for the SW and SC Regions and EF for Power, using the Skill Score SS (RMSE).
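The persistence reference and the skill score used in Figure 3-15 can be sketched as follows. The report does not spell out its skill score formula, so this sketch assumes the usual definition SS = 1 - RMSE_forecast / RMSE_persistence; the function names are our own.

```python
import math

def rmse(pred, meas):
    """Root mean square error between predicted and measured series."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, meas)) / len(pred))

def persistence_forecast(measured, horizon):
    """Persistence reference: the forecast for time t + horizon is simply
    the measured value at time t. Returns (predictions, matching targets)."""
    return measured[:-horizon], measured[horizon:]

def skill_score(rmse_forecast, rmse_persistence):
    """SS = 1 - RMSE_forecast / RMSE_persistence.
    SS > 0 means the forecast beats persistence at this horizon;
    SS = 0 means it is no better than persistence."""
    return 1.0 - rmse_forecast / rmse_persistence
```

At short horizons persistence is a strong reference, which is consistent with the small skill scores seen at T=1 and T=2 above.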
3.8.2 Performance Index

Section 3.8.1 provided contemporary comparisons between the forecasters. A different method of comparing forecasters was devised to see if more insight could be obtained. Without a specific ranking of the forecasts it is, however, possible to make some general observations regarding the relative success of the forecasts with respect to region and time horizon. A generalized performance index (I) is defined based on whether the metric for a forecast is above or below the average of that metric over the three forecasts. For a metric where the lowest value is the best (true for most error metrics):

    I = 2 - metric / mean(metric)

This index has the property that the forecast whose metric is highest above the mean of the three forecast metrics has the lowest index; a forecast with the average metric has an index of 1.0. Furthermore, the narrower the spread of the three forecast indices about 1.0, the closer the consensus between the three forecasts as determined by the specific metric used to compute the index. The following observations are based on the forecast results for all four regions for wind and, for power, the two regions that comprise existing power generation capacity: the South West Region (SW) and the South Central Region (SC).

Root Mean Square Error of Wind and Power Forecasts

Referring to Figures 3-16 and 3-17, for the four regions for wind and for the SW and SC regions for power, respectively:

For wind (Figure 3-16), from forecast horizons T=1 to T=6 Forecaster A performs better in all regions. From T=7 to T=48 Forecaster B performs better in the SW, SC and SE regions. In the CE region all forecasters perform the same after T=6.

For power (Figure 3-17), Forecaster C performed best in the SC region up until T=4, and both Forecasters A and C performed best in the SW region up until T=4. After T=4 all forecasters performed similarly in the two regions.
Figure 3-16: Annual Normalized Performance Index for Wind using RMSE for Four Regions (SW, SC, SE, CE) for the Three Forecasters (A, B, C).

Figure 3-17: Annual Normalized Performance Index for Power using RMSE in Two Regions (SW, SC) for the Three Forecasters (A, B, C).
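The performance index defined above is simple to compute; a minimal sketch (the function name is our own):

```python
def performance_index(metrics):
    """Generalized performance index I = 2 - metric / mean(metric) for a
    lower-is-better metric across the competing forecasts. A forecast
    with exactly the average metric scores 1.0; worse-than-average
    forecasts score below 1.0, better-than-average ones above."""
    mean = sum(metrics) / len(metrics)
    return [2.0 - m / mean for m in metrics]
```

The spread of the returned indices about 1.0 is what the report uses as a consensus measure: a narrow spread means the three forecasts agree closely on that metric.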
Bias of Wind and Power Forecasts

As another example of the performance index, Figures 3-18 and 3-19 show the index computed on the bias for wind and power, respectively. Unlike for the RMSE, on the basis of bias Forecaster A performs better at all forecast horizons for wind in the SW and SE regions, and is similar in performance to Forecaster C in the SC and CE regions. For power there is no overall best performer, with all three forecasters performing better at different forecast horizons. Again, the non-linearity of the power curve means that the wind speed bias is not simply related to the bias in the power. An examination of other forecast metrics for wind speed and power supports the general conclusion that error metrics based on the predicted wind speed are not necessarily good predictors of the errors of the predicted power.
Figure 3-18: Annual Normalized Performance Index for Wind using Bias for Four Regions (SW, SC, SE, CE) for the Three Forecasters (A, B, C).

Figure 3-19: Annual Normalized Performance Index for Power using Bias in Two Regions (SW, SC) for the Three Forecasters (A, B, C).
General Assessment

A general assessment of the consensus between forecasts was made by reviewing the spread of the forecast index summed over T = 1, 2, 3, 4, 6, 12, 17, 24 and 48 for each forecaster. In Table 3-4, for bias, dispersion, MAE, RMSE and SD bias, the forecast indices for nearly all of these metrics, summed over the above nine horizons for each forecaster, show a smaller spread between forecasters for the SC region than for the SW region. This indicates that, in general, there is better consensus between the three forecasters in the SC region than in the SW region.

Table 3-4: Sum of various metrics over nine horizons for wind and power for the South West and South Central regions for all three forecasters. The table comprises a wind speed summary and a power summary (May 1, 2007 to April 30, 2008), each listing Bias, Dispersion, MAE, RMSE and SD Bias for Forecasters A, B and C in the South-West (SW) and South Central (SC) regions, with entries shaded Low / Medium / High.
3.9 Which is the region with the least amount of error? Which forecaster forecasts best in that region and why?

Referring to Figures 3-2 and 3-3, one observes that for wind the region with the least amount of error (RMSE, MAE) is the Central (CE) Region for each forecaster, and the region with the highest amount of error (RMSE, MAE) is the South West (SW) Region for each forecaster. As noted earlier, for all forecasters the ordering of the regions for both MAE and RMSE is the same, i.e., the largest MAE and RMSE values are in the SW region and the lowest values are in the Central (CE) region, with the other regions falling in between.

Three possible reasons why the SW region has the largest values and the CE region the smallest have been identified. First, the map of Alberta in Appendix A shows that the SW region's topography is more complex than the CE region's, which would suggest that it is more difficult to forecast in the SW region. Second, the SW region is much smaller than the CE region, so its individual sites are closer together, which may affect forecasting through topography and spatial smoothing. Finally, the wind speeds are on average lighter in the CE region than in the SW region, which can influence the statistical metrics.

As to the best forecaster in the CE region, the ranges of errors between forecasters in that region (see Figure 3-13) are indistinguishable between the three forecasters. For power forecasts, given that one should only look at regions with existing facilities, neither the RMSE nor the MAE in the SW and SC regions differs between the regions for any forecaster.
3.10 What is the effect of spatial smoothing on forecast error?

The spatial smoothing effect reduces the error for a region with more than one wind farm facility. To account for this, a statistical description of the regional prediction error can be determined by:

    σ_regional² = (1/N²) Σ_i Σ_j σ_i σ_j r_ij

where N is the number of sites in the region, σ_i and σ_j are the standard deviations of the forecast error at the i-th and j-th sites, and r_ij is the cross-correlation coefficient of the time series of the pairwise errors between single sites i and j. The ratio σ_regional / σ̄ is presented as a function of forecast horizon for each region, where σ̄ denotes the arithmetic average of the standard deviations of the errors of the single sites, i.e., σ̄ = (σ_1 + σ_2 + σ_3 + ... + σ_N) / N, with N again the total number of sites in the region.

The regions considered, as requested by the AESO, are shown in the map in Appendix A; the region sizes are approximated by the circles. Figure 3-20 shows σ_regional / σ̄ based on the power prediction errors for the South West (SW) and South Central (SC) regions, as well as for the Existing Facilities (EF). The sizes of the SW and SC regions are similar, while the region size for the EF is much larger. As shown in Figure 3-20, σ_regional / σ̄, the regional reduction in power prediction errors, decreases with increasing region size at all forecast horizons. The reduction is about 2% for the SW or SC region at the 1st forecast horizon, while the reduction is near 4% for the EF. The reduction decreases as the forecast horizon increases. The spatial smoothing that reduces the power prediction errors can also be seen in Figures 3-5 and 3-7.
Figure 3-20: Annual Ratio of the Regional Standard Deviation of Power Prediction Errors to the Average Standard Deviation of the Individual Sites in the Region (σ_regional / σ̄) for Various Region Sizes (SW, SC, EF) and Forecast Horizons, for Forecasters A, B and C.
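The regional error formula above can be sketched as follows. This is an illustrative implementation under the assumption that the per-site error standard deviations and the pairwise cross-correlation matrix have already been computed; the names are our own.

```python
import math

def regional_sigma(sigmas, corr):
    """Regional standard deviation of prediction error from per-site error
    standard deviations `sigmas` and the pairwise error cross-correlation
    matrix `corr` (with corr[i][i] == 1):
        sigma_regional^2 = (1/N^2) * sum_i sum_j sigma_i * sigma_j * r_ij
    """
    n = len(sigmas)
    var = sum(sigmas[i] * sigmas[j] * corr[i][j]
              for i in range(n) for j in range(n)) / n ** 2
    return math.sqrt(var)

def smoothing_ratio(sigmas, corr):
    """Ratio sigma_regional / mean(sigma_i). Values below 1 quantify the
    error reduction from spatial smoothing across the region's sites;
    fully correlated sites give a ratio of exactly 1 (no smoothing)."""
    return regional_sigma(sigmas, corr) / (sum(sigmas) / len(sigmas))
```

For N uncorrelated sites of equal error the ratio falls to 1/sqrt(N), which is the upper bound on the smoothing benefit.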
Figures 3-5 and 3-7 provide the RMSE and MAE for power for each forecaster in the SW and SC regions, which contain only existing sites, and for the Existing Facilities (EF), which include all the sites in the SW and SC regions plus one site in the SE region, a total of seven sites. In Figures 3-5 and 3-7, the RMSE and the MAE are reduced in moving from the specific regions (SW, SC) to the expanded area that includes all existing facilities (EF). It should be noted that this could also be demonstrated for the transition from an individual site up to the seven existing sites included in the EF; due to confidentiality agreements the individual sites are not shown. Thus, including more spatially distributed wind farms should lead to a reduction of the error of the aggregated power prediction compared to an individual site.

3.11 How well do the forecasts predict fast ramp up and ramp down times, event analysis (CSI)?

Evaluation metrics for assessing the prediction of rapid ramp events have not been widely studied. ORTECH recognizes that evaluating the capability of forecasting extreme events is very challenging and exploratory, and the evaluation criteria will vary depending on the operational regimes of the electric grids. In the 1st, 2nd and 3rd quarterly reports, ORTECH proposed an approach, the Critical Success Index (CSI), to assess the prediction of rapid ramp events. This approach is similar to that used by 3Tier (reference 8 in the bibliography). To apply the CSI, the events to be assessed and a window qualifying a prediction as a success have to be defined. The working group requested that any hourly ramp event above 20% of the rated power capacity be assessed. The window qualifying a prediction as a success should be defined based on the grid operational requirements. The following windows were initially proposed by ORTECH and then accepted by the working group for this assessment:

1. the amplitude of the forecasted ramp event is within 80% - 120% of the actual measured one;
2. the event is forecasted not more than 12 hours in advance of the actual event;
3. the event is forecasted either 6 hours before or 6 hours after the actual event.

The rapid ramp events assessed in this report, which have an hourly ramp rate above 20% of the rated power capacity, are listed in Table 3-5 for the South West (SW) and South Central (SC) regions and the Existing Facilities (EF) when the event is forecasted not more than 12 hours in advance of the actual event, and in Table 3-6 for the same regions when the event is forecasted either 6 hours before or 6 hours after the actual event.
Table 3-5: Annual Summary of Rapid Ramp Events for Specified Time Horizons Captured by Each Forecaster (% of total number of events) When the Event is Forecasted Not More Than 12 Hours in Advance of the Actual Event. Rows cover selected forecast horizons T = 1 to 48 plus the average over the 48 horizons; columns give the Measured Ramp-Up, Measured Ramp-Down and Total Ramp percentages for Forecasters A, B and C in the SW, SC and EF regions.

Table 3-6: Annual Summary of Rapid Ramp Events for Specified Time Horizons Captured by Each Forecaster (% of total number of events) When the Event is Forecasted Either 6 Hours Before or 6 Hours After the Actual Event. Rows and columns as in Table 3-5.
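The ramp event detection and CSI scoring described above can be sketched as follows. This is illustrative only: the matching of each forecast event to an observed event within the amplitude and timing windows is omitted for brevity, and the 20%-of-rated-capacity default follows the working group's criterion as described in the text.

```python
def detect_ramp_events(power, rated_mw, threshold=0.20):
    """Flag hourly ramp events: hours where the hour-over-hour power change
    exceeds `threshold` (a fraction of rated capacity) in magnitude.
    Returns a list of (hour_index, ramp_mw) tuples, ramp_mw signed
    (positive = ramp up, negative = ramp down)."""
    events = []
    for t in range(1, len(power)):
        ramp = power[t] - power[t - 1]
        if abs(ramp) > threshold * rated_mw:
            events.append((t, ramp))
    return events

def critical_success_index(hits, misses, false_alarms):
    """CSI = hits / (hits + misses + false_alarms); 1.0 is a perfect score.
    Hits are forecast events matched to observed events within the agreed
    amplitude and timing windows."""
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0
```

Running the detector over both the measured and forecast power series, then pairing events under the windows in criteria 1-3, yields the hit/miss/false-alarm counts the CSI needs.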
Using the CSI methodology and the window criteria outlined above, none of the forecasters did well in predicting the ramp events, as shown by the low percentages of the total ramp events captured. However, it must be noted that the forecasters were not given this specific objective by the working group, i.e., to tune their models to predict rapid ramp rates. It is ORTECH's opinion that the forecasters tuned their models to do well in predicting the general conditions.

3.12 What is the impact on data availability?

In previous quarterly progress reports ORTECH provided a discussion and a theoretical method of examining the problem of the forecasters not receiving their input data on time. ORTECH attempted to determine how often the data was delayed, and by how long (1 hour, 2 hours, etc.), using the Phoenix Engineering time stamps. However, at some sites the data was reloaded, and therefore ORTECH could not draw any conclusions as to whether or not the forecasters received the input data on time. ORTECH suggested that the forecasters provide the times and dates when they did not receive the input data on time, so that ORTECH could perform a true analysis in future reports. This information was provided by only one forecaster, so the analysis could not be undertaken.

3.13 Are there times (day/month/weather pattern) when there is more uncertainty in the forecasts than other times?

Referring to Section 3.3.1, for the time of day, while the values may vary from forecaster to forecaster, it is apparent that the forecasts are least accurate during the afternoon periods between hours 13 and 18, after forecast horizon T=6. The lower accuracy in the afternoon periods can be accounted for by higher wind speeds and their variability, which convert to higher power generation.

Referring to Section 3.3.2, seasonally the accuracy is lowest during the winter season for all forecasters, while the best accuracy is during the summer season. It should be noted that for Forecaster B the summer forecasts in the SW region are less accurate than those of Forecasters A and C. When the hourly values (measured vs. predicted) for June through August were examined for Forecaster B in the SW Region, there was a much larger deviation from the measured values for July and August than for Forecasters A and C.
3.14 What is the relationship between the spread of the min/max and the forecast error?

As suggested in comments on the 2nd and 3rd quarterly progress reports, a correlation study of the min-max spread against the absolute errors of the power and wind speed predictions was performed. Although the spread of errors was found to increase as a function of the min-max spread, the correlation coefficient (R) varied from 0.25 to 0.43 depending on the region of power/wind aggregation, the forecast horizon and the forecaster. Because of these poor correlation coefficients, it is difficult to assess uncertainty quantitatively. As suggested by the working group, the relation of the mean and standard deviation of the min-max spread to the standard error of prediction was also assessed. No generalized relation could be found.

As in the 3rd quarterly progress report, a box plot analysis of the predicted min-max, predicted average and measured data was performed on the annual data. Similar to the conclusion in the 3rd quarterly report, the measured average values for power were bounded by the minimum and maximum predicted values most of the time. In this report we have quantified the percentage of measured power time periods that fall between the minimum and maximum predicted values over the last 12 months. These confidence limits for measured power are tabulated in Table 3-7 for the South West (SW), South Central (SC) and Existing Facilities (EF) for Forecasters A, B and C at selected forecast horizons. Between 81% and 95% of the time (depending on forecaster and forecast horizon), the measured values fall between the predicted minimum and maximum power values.

Table 3-7: Confidence limits (%) of the minimum and maximum predicted power distributions. Rows cover selected forecast horizons T = 1 to 48 plus the average; columns give the values for Forecasters A, B and C in the SW, SC and EF regions.
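The confidence-limit tabulation above amounts to counting how often the measured power falls inside the forecaster's predicted min-max band; a minimal sketch (names are our own assumptions):

```python
def minmax_coverage(measured, pred_min, pred_max):
    """Fraction of time periods in which the measured power falls within
    the forecaster's predicted [min, max] band for that period."""
    inside = sum(1 for m, lo, hi in zip(measured, pred_min, pred_max)
                 if lo <= m <= hi)
    return inside / len(measured)
```

Multiplying the returned fraction by 100 gives the percentage values reported in Table 3-7.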
This analysis is consistent with the finding of the box-plot analysis, namely that the minimum and maximum predicted values for power are reliable between 81% and 95% of the time. This can be interpreted to mean that an observed value will fall within those extreme limits between 81% and 95% of the time, depending on forecast horizon and forecaster.

3.15 What is the correlation factor between all three forecasts? Is this related to the forecast error?

The focus has been on the evaluation of the predictions issued by the forecasters against the measured power and wind speed data. A mutual comparison of the forecasters through correlation was beyond the scope of this study.

Summary of Findings

From the analyses undertaken in the preceding sections to address the questions raised by the working group, a summary of the findings is presented below. After each finding the relevant section is indicated.

1. Forecasting wind energy in the regions examined in Alberta over a 48 hour time horizon is possible. (all sections)
2. All three forecasters can provide general forecasts in the regions examined over the 48 hour time horizon. (Sections 3.1 and 3.8)
3. The largest error in the forecasts is due to phase errors. The different regions show a consistent behaviour for phase error, suggesting a similar property affects all regions in a similar manner. (Sections 3.1 and 3.2)
4. The accuracy of the forecasts in general decreases as the forecast horizon increases. While accuracy varies between forecasters at each forecast horizon, the trend of decreasing accuracy with increasing forecast horizon holds for each forecaster. (Sections 3.1 and 3.2)
5. The wind power forecasts are least accurate during the afternoon periods between hours 13 and 18, after forecast horizon T=6. (Section 3.3)
6. The least accurate wind power forecasts occur during the winter season (November, December, January, February) for all forecasters, while the best accuracy occurs during the summer season (June, July and August). (Section 3.3)
7. Wind speed prediction errors are amplified by power conversion. (Section 3.5)
8. The power prediction errors peak at a predicted wind speed of 10 m/s and show a different pattern from the wind speed errors. (Section 3.7)
9. The region with the least amount of error for wind is the Central (CE) region. The region with the largest amount of error is the South West (SW) region. (Section 3.9)
10. For power forecasts, the accuracy in the SW and SC regions is similar. (Section 3.9)
11. Spatial smoothing reduces the error as the number of wind farms and the size of the area covered increase. (Section 3.10)
12. None of the forecasters predicts ramp events effectively. The reason may be that the forecasters were not given this specific objective. (Section 3.11)
13. In examining the min-max spread for the predicted power, it was found that the measured values fell between the min and max predicted power 81%-95% of the time. (Section 3.14)
4 GENERAL OBSERVATIONS AND RECOMMENDATIONS

The observations made during the first quarter led to suggestions for forecast and data improvement in the subsequent quarters. A number of these suggestions were incorporated into the subsequent quarterly progress reports. For the final report, the suggestions provided below augment the suggestions and observations made in the previous progress reports.

4.1 Data

Listed below are some of the shortcomings in the data sets that are worth noting, followed by recommendations to resolve and/or avoid these should there be a future study.

(i) ORTECH relied on GENIVAR (Phoenix Engineering) to provide QA/QC'd measured data sets for the different sites. Although ORTECH applied screening-out criteria to the data, more confidence could have been achieved if one or both of the recommendations listed below had been employed:

Recommendation 1: GENIVAR (Phoenix Engineering) to provide ORTECH with a coded data set, along with a key to the codes used to substitute invalid data based on their QA/QC procedures. A detailed QA/QC report would also be helpful.

Recommendation 2: ORTECH to receive a raw database and to apply its own QA/QC measures, which include the following:
» Reviewing instrument orientation and calibration reports, and correcting the data accordingly when necessary.
» Flagging data with abnormal wind speeds, power and/or standard deviations, and filtering them out if they fall outside of a certain or agreed-upon range.
» Screening the data for icing events or any other anomalies that may not have been caught by the screening-out criteria, and filtering them out.
» Comparing wind speed data from different anemometer levels and from adjacent sites, looking for discrepancies that are then filtered when necessary.
» Other site-specific QA/QC procedures.
(ii) Some of the measured data were modified and/or added after the end of each quarter. It was expected that the data would be QA/QC'd and ready to be analyzed before being posted on GENIVAR's (Phoenix Engineering) wind server for ORTECH to download; this was not the case.

Recommendation: ORTECH to receive final and unchanged values in the measured data set.

(iii) ORTECH had access neither to the measured data provided to each of the forecasters nor to its availability. It was assumed that the measured data made available to ORTECH and to each of the forecasters were comparable.

Recommendation: ORTECH to receive a data-availability report from each of the forecasters summarizing the hours of measured data used to produce their forecasts. A more reliable comparison between the different forecasters could then have been produced.

(iv) ORTECH assumed that none of the forecasters prescreened their results. The fact that in some cases forecasters did not provide forecasts up to the 48th time horizon (T=48) puts this assumption into question.

Recommendation: ORTECH to receive from each of the forecasters a report summarizing their prescreening criteria and listing their omitted forecasts if they do prescreening, or a statement saying that they do not.

4.2 Trial Period

In undertaking such a large project, a trial period would have been advantageous. A trial period would allow all the components of the project (forecasters, data providers, data analyst) to test their procedures and ensure that the logistics run as smoothly as possible. It is recommended that at least a two to three month trial period be undertaken before the main project is initiated. The trial period could be undertaken using a subset of the sites.
4.3 Freezing the Models

Provided that one of the questions to be addressed is a comparison between forecasters, it is recommended that after an initial training period the forecast model codes be frozen. If they are not frozen, the consultant doing the quantitative analysis cannot determine whether the output at the start of the project was generated by the same forecast model as the output at the end of the project. An alternative would be to have each forecaster describe in detail the changes made to the model as the project progressed.

4.4 Existing vs Future Sites

Given that the main emphasis of the study was on wind power forecasting and not wind speed forecasting, it is recommended that only existing sites with measured wind power data be used. Using only measured wind power data provides a more representative indication of the accuracy of the forecasters' power conversion methodologies.

4.5 Focusing on the Priorities

Given the amount of data, the presentation and analysis of the information generated was considerable. It is recommended that future working groups, in cooperation with the consultants, determine the specific questions to be addressed, thereby defining the metrics/analyses that will be the focus of the progress and final reports.
APPENDIX A

Alberta Forecast Regions (1 page)
The map below was taken from the AESO RFP and is a composite of the Environment Canada Wind Atlas map for Alberta overlaid with the Alberta wind forecast regions supplied by Phoenix Engineering.
APPENDIX B

Bibliography (1 page)
1. Giebel, G. The State-of-the-Art in Short-Term Prediction of Wind Power. Project ANEMOS, August 2003.
2. Kariniotakis, G. et al. What performance can be expected by short-term wind power prediction models depending on site characteristics? EWEC 2004 Conference (Proceedings), London, UK, November 2004.
3. Lange, Matthias. Analysis of the Uncertainty of Wind Power Predictions. Thesis (PhD), Faculty of Mathematics and Natural Sciences, Carl von Ossietzky University, Oldenburg, Germany, 2003.
4. Madsen, H. et al. A Protocol for Standardizing the Performance Evaluation of Short-Term Wind Power Prediction Models. Project ANEMOS, March 2004.
5. Madsen, H. et al. Standardizing the Performance Evaluation of Short-Term Wind Power Prediction Models. Wind Engineering, Volume 29, Number 6, 2005.
6. Möhrlen, C. Uncertainty in Wind Energy Forecasting. Thesis (PhD), Department of Civil and Environmental Engineering, University College Cork, Ireland, May 2004.
7. Texas Wind Energy Forecasting System Development and Testing, Phase 2: 12-Month Testing. EPRI, Palo Alto, CA, and U.S. Department of Energy, Washington, DC.
8. Grimit, E. (3TIER). A Prototype Day-Ahead Forecast System for Rapid Wind Events. Presented at the CanWEA 23rd Annual Conference, September 30 – October 3, 2007, Quebec City, Quebec.
APPENDIX C
Data Recovery Rates By Quarter (8 pages)
Quarter 1: May 1, 2007 – July 31, 2007
Tables 2.1–2.3: Summaries of the 10-minute and hourly measured wind speed data recovery rates for each site, and of the 10-minute power data recovery rates for each facility, during the analysis period (May 1, 2007 00:00 – July 31, 2007 23:50), by month (May, June, July) and for all months. (Table values were not preserved in this extraction.)

Quarter 2: August 1, 2007 – October 31, 2007
Tables 2.1–2.4: Summaries of the 10-minute and hourly-averaged measured wind speed data recovery rates for each site, and of the 10-minute and hourly-averaged measured/calculated power data recovery rates for each facility, during the analysis period (August 1, 2007 00:00 UTC – October 31, 2007 23:50 UTC), by month (August, September, October) and for all months. (Values not preserved.)

Quarter 3: November 1, 2007 – January 31, 2008
Tables 2-1–2-4: As for Quarter 2, during the analysis period (November 1, 2007 00:00 UTC – January 31, 2008 23:50 UTC), by month (November, December, January) and for all months. (Values not preserved.)

Quarter 4: February 1, 2008 – April 30, 2008
Tables 2-1–2-4: As for Quarter 2, during Q4 (February 1, 2008 00:00 UTC – April 30, 2008 23:00 UTC), by month (February, March, April) and for all months. (Values not preserved.)
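The recovery rates tabulated above follow the standard definition: valid records received as a percentage of records expected for the period. A minimal sketch of the bookkeeping, assuming a fixed reporting interval; the function names and the example count of valid records are illustrative, not from the report:

```python
from datetime import datetime, timedelta

def expected_records(start: datetime, end: datetime, interval_min: int) -> int:
    """Records expected at a fixed interval from start to end, inclusive."""
    return int((end - start) / timedelta(minutes=interval_min)) + 1

def recovery_rate(n_valid: int, n_expected: int) -> float:
    """Data recovery rate as a percentage of expected records."""
    return 100.0 * n_valid / n_expected

# Q1 of the pilot: May 1, 2007 00:00 through July 31, 2007 23:50, 10-minute data
n_exp = expected_records(datetime(2007, 5, 1, 0, 0), datetime(2007, 7, 31, 23, 50), 10)
print(n_exp)                                  # 13248 ten-minute records in the quarter
print(round(recovery_rate(13100, n_exp), 1))  # 98.9 if 13,100 records were valid
```

The same two functions cover both the 10-minute and the hourly-averaged tables; only `interval_min` changes.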
APPENDIX D
Selected Wind Statistics by Quarter (12 pages)
Figures 3-2-1 to 3-2-4: Quarterly (Q1: May 1 – July 31, 2007; Q2: August 1 – October 31, 2007; Q3: November 1, 2007 – January 31, 2008; Q4: February 1 – April 30, 2008) Mean Absolute Error (MAE) of wind speed (WS) predictions in the South West (SW), South Central (SC), South East (SE), Central (CE), Existing Facilities (EF), Future Facilities (FF) and All Facilities (AF) aggregations, by the three forecasters A, B and C, as a function of forecast horizon. (Three panels per figure, one per forecaster; y-axis: MAE of WS prediction in m/s; plotted values were not preserved in this extraction.)
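The MAE curves in Figures 3-2-1 through 3-2-4 can be reproduced from forecast/observation pairs grouped by forecast horizon. A sketch under that assumption; the data layout and values are illustrative:

```python
from collections import defaultdict

def mae_by_horizon(pairs):
    """pairs: iterable of (horizon_h, forecast_ms, observed_ms) tuples.
    Returns {horizon: mean absolute error in m/s}."""
    errors = defaultdict(list)
    for horizon, forecast, observed in pairs:
        errors[horizon].append(abs(forecast - observed))
    return {h: sum(v) / len(v) for h, v in errors.items()}

pairs = [(1, 8.2, 7.5), (1, 6.0, 6.4), (24, 9.1, 6.3), (24, 4.0, 6.0)]
mae = mae_by_horizon(pairs)
print(round(mae[1], 2), round(mae[24], 2))  # 0.55 2.4
```

In the report the grouping key would also include the region (SW, SC, SE, CE) or facility aggregation (EF, FF, AF).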
Figures 3-3-1 to 3-3-4: Quarterly (Q1 through Q4, same periods as above) Root Mean Square Error (RMSE) of wind speed (WS) predictions in the SW, SC, SE, CE, EF, FF and AF aggregations, by the three forecasters A, B and C, as a function of forecast horizon. (Three panels per figure, one per forecaster; y-axis: RMSE of WS prediction in m/s; plotted values were not preserved in this extraction.)
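The main report also decomposes the RMSE. One standard split, used in the ANEMOS evaluation protocol of Madsen et al. (Appendix B), separates a bias term, a standard-deviation-bias term and a dispersion term, with RMSE^2 = bias^2 + SDbias^2 + DISP^2. A sketch of that decomposition, using population (1/N) statistics and illustrative data:

```python
import math

def rmse_decomposition(forecast, observed):
    """Split RMSE^2 = bias^2 + (sd_fc - sd_obs)^2 + 2*sd_fc*sd_obs*(1 - corr):
    bias, standard-deviation bias and dispersion terms (population statistics)."""
    n = len(forecast)
    errors = [f - o for f, o in zip(forecast, observed)]
    bias = sum(errors) / n
    mf, mo = sum(forecast) / n, sum(observed) / n
    sd_fc = math.sqrt(sum((f - mf) ** 2 for f in forecast) / n)
    sd_obs = math.sqrt(sum((o - mo) ** 2 for o in observed) / n)
    corr = sum((f - mf) * (o - mo)
               for f, o in zip(forecast, observed)) / (n * sd_fc * sd_obs)
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return rmse, bias, sd_fc - sd_obs, math.sqrt(2 * sd_fc * sd_obs * (1 - corr))

rmse, bias, sd_bias, disp = rmse_decomposition([5.0, 7.0, 6.0, 9.0], [4.0, 8.0, 5.0, 7.0])
print(round(rmse, 2))  # 1.32; rmse**2 equals bias**2 + sd_bias**2 + disp**2
```

The identity follows from var(e) = sd_fc^2 + sd_obs^2 - 2*corr*sd_fc*sd_obs, so the three terms always reconstruct the RMSE exactly.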
Figures (numbers not preserved): First through Fourth Quarter comparisons between forecasters for wind by region (RMSE, m/s). Each figure shows panels for the SW, SC, SE, CE, EF, FF and AF aggregations, with one series per forecaster A, B and C. (Plotted values were not preserved in this extraction.)
APPENDIX E
Selected Wind Power Statistics by Quarter (16 pages)
Figures 3-5-1 (Q1), 3-5-7 (Q3) and 3-5-4 (Q4), with the Q2 figure number not preserved: Quarterly Normalized Root Mean Square Error (RMSE %) of power predictions in the South West (SW), South Central (SC) and Existing Facilities (EF) aggregations by the three forecasters A, B and C, as a function of forecast horizon. The actual errors are normalized by the rated capacity (RC) of the region of power aggregation. (Three panels per figure, one per forecaster; plotted values were not preserved in this extraction.)
Figures 3-7-1 to 3-7-4: Quarterly (Q1: May 1 – July 31, 2007; Q2: August 1 – October 31, 2007; Q3: November 1, 2007 – January 31, 2008; Q4: February 1 – April 30, 2008) Normalized Mean Absolute Error (MAE %) of power predictions in the South West (SW), South Central (SC) and Existing Facilities (EF) aggregations by the three forecasters A, B and C, as a function of forecast horizon. The MAE is normalized by the rated capacity of the region of aggregation. (Three panels per figure, one per forecaster; plotted values were not preserved in this extraction.)
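In both the RMSE and MAE figures of this appendix, the power-forecast errors are divided by the rated capacity of the aggregation, so regions and portfolios of different installed size can be compared on a common percentage scale. A minimal sketch; the power and capacity values are illustrative:

```python
def normalized_errors(forecast_mw, observed_mw, rated_capacity_mw):
    """Return (NMAE %, NRMSE %): power-forecast errors as a % of rated capacity."""
    n = len(forecast_mw)
    errors = [f - o for f, o in zip(forecast_mw, observed_mw)]
    nmae = 100.0 * sum(abs(e) for e in errors) / n / rated_capacity_mw
    nrmse = 100.0 * (sum(e * e for e in errors) / n) ** 0.5 / rated_capacity_mw
    return nmae, nrmse

nmae, nrmse = normalized_errors([60.0, 45.0, 10.0], [50.0, 55.0, 15.0],
                                rated_capacity_mw=100.0)
print(round(nmae, 2), round(nrmse, 2))  # 8.33 8.66
```

Because RMSE squares the errors before averaging, NRMSE is always at least as large as NMAE, which is the pattern visible across the figures.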
Figures 3-8-1 to 3-8-4: Quarterly (Q1: May 1 – July 31, 2007; Q2: August 1 – October 31, 2007; Q3: November 1, 2007 – January 31, 2008; Q4: February 1 – April 30, 2008) accuracy of the power forecast for specific 6-hour time periods for each forecaster, using RMSE (% of rated capacity), for the South West (SW), South Central (SC) and Existing Facilities (EF) aggregations. (Plotted values were not preserved in this extraction.)

Figure (number not preserved): Accuracy of the power forecast by month (1 = January, 12 = December) for each forecaster, using RMSE (% of rated capacity), for the SW, SC and EF aggregations. (Plotted values were not preserved.)
Figures (numbers not preserved): First through Fourth Quarter comparisons between forecasters for power by region (normalized RMSE, % of rated capacity (RC)), including a comparison against persistence for the SW and SC regions and EF. Each figure shows panels for the SW, SC, SE, CE, EF, FF and AF aggregations, with series for forecasters A, B and C and for persistence (P). (Plotted values were not preserved in this extraction.)
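The persistence series (P) in the figures above is the usual no-skill benchmark: the forecast for every horizon is simply the last measured power at issue time, so its error grows with horizon as conditions drift. A sketch over an illustrative hourly series:

```python
def persistence_errors(series_mw, horizon_h):
    """Forecast-minus-observed errors of an h-hour persistence forecast:
    the forecast for hour t + h is the value measured at hour t."""
    return [series_mw[t] - series_mw[t + horizon_h]
            for t in range(len(series_mw) - horizon_h)]

rmse = lambda errors: (sum(e * e for e in errors) / len(errors)) ** 0.5

observed = [10.0, 14.0, 30.0, 55.0, 52.0, 20.0]  # illustrative hourly power, MW
print(round(rmse(persistence_errors(observed, 1)), 1))  # 19.6
print(round(rmse(persistence_errors(observed, 3)), 1))  # 34.5, worse at longer horizon
```

A forecaster's normalized RMSE curve sitting below the persistence curve at a given horizon indicates positive forecast skill at that horizon.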
Short-Term Forecasting in Retail Energy Markets
Itron White Paper Energy Forecasting Short-Term Forecasting in Retail Energy Markets Frank A. Monforte, Ph.D Director, Itron Forecasting 2006, Itron Inc. All rights reserved. 1 Introduction 4 Forecasting
Analysis of Bayesian Dynamic Linear Models
Analysis of Bayesian Dynamic Linear Models Emily M. Casleton December 17, 2010 1 Introduction The main purpose of this project is to explore the Bayesian analysis of Dynamic Linear Models (DLMs). The main
Section A. Index. Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting techniques... 1. Page 1 of 11. EduPristine CMA - Part I
Index Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting techniques... 1 EduPristine CMA - Part I Page 1 of 11 Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting
APPENDIX E THE ASSESSMENT PHASE OF THE DATA LIFE CYCLE
APPENDIX E THE ASSESSMENT PHASE OF THE DATA LIFE CYCLE The assessment phase of the Data Life Cycle includes verification and validation of the survey data and assessment of quality of the data. Data verification
USING SIMULATED WIND DATA FROM A MESOSCALE MODEL IN MCP. M. Taylor J. Freedman K. Waight M. Brower
USING SIMULATED WIND DATA FROM A MESOSCALE MODEL IN MCP M. Taylor J. Freedman K. Waight M. Brower Page 2 ABSTRACT Since field measurement campaigns for proposed wind projects typically last no more than
Exponential Smoothing with Trend. As we move toward medium-range forecasts, trend becomes more important.
Exponential Smoothing with Trend As we move toward medium-range forecasts, trend becomes more important. Incorporating a trend component into exponentially smoothed forecasts is called double exponential
Forecasting the first step in planning. Estimating the future demand for products and services and the necessary resources to produce these outputs
PRODUCTION PLANNING AND CONTROL CHAPTER 2: FORECASTING Forecasting the first step in planning. Estimating the future demand for products and services and the necessary resources to produce these outputs
Forecasting Methods. What is forecasting? Why is forecasting important? How can we evaluate a future demand? How do we make mistakes?
Forecasting Methods What is forecasting? Why is forecasting important? How can we evaluate a future demand? How do we make mistakes? Prod - Forecasting Methods Contents. FRAMEWORK OF PLANNING DECISIONS....
RELEVANT TO ACCA QUALIFICATION PAPER P3. Studying Paper P3? Performance objectives 7, 8 and 9 are relevant to this exam
RELEVANT TO ACCA QUALIFICATION PAPER P3 Studying Paper P3? Performance objectives 7, 8 and 9 are relevant to this exam Business forecasting and strategic planning Quantitative data has always been supplied
Intro to GIS Winter 2011. Data Visualization Part I
Intro to GIS Winter 2011 Data Visualization Part I Cartographer Code of Ethics Always have a straightforward agenda and have a defining purpose or goal for each map Always strive to know your audience
Wind Forecasting stlljectives for Utility Schedule and Energy Traders
NREUCP 500 24680 UC Category: 1210 Wind Forecasting stlljectives for Utility Schedule and Energy Traders Marc N. Schwartz National Renewable Energy Laboratory Bruce H. Bailey A WS Scientific, Inc. Presented
CALL VOLUME FORECASTING FOR SERVICE DESKS
CALL VOLUME FORECASTING FOR SERVICE DESKS Krishna Murthy Dasari Satyam Computer Services Ltd. This paper discusses the practical role of forecasting for Service Desk call volumes. Although there are many
ASSURING THE QUALITY OF TEST RESULTS
Page 1 of 12 Sections Included in this Document and Change History 1. Purpose 2. Scope 3. Responsibilities 4. Background 5. References 6. Procedure/(6. B changed Division of Field Science and DFS to Office
Measurement Information Model
mcgarry02.qxd 9/7/01 1:27 PM Page 13 2 Information Model This chapter describes one of the fundamental measurement concepts of Practical Software, the Information Model. The Information Model provides
Credit Risk Stress Testing
1 Credit Risk Stress Testing Stress Testing Features of Risk Evaluator 1. 1. Introduction Risk Evaluator is a financial tool intended for evaluating market and credit risk of single positions or of large
THE PREDICTIVE MODELLING PROCESS
THE PREDICTIVE MODELLING PROCESS Models are used extensively in business and have an important role to play in sound decision making. This paper is intended for people who need to understand the process
Integrated Resource Plan
Integrated Resource Plan March 19, 2004 PREPARED FOR KAUA I ISLAND UTILITY COOPERATIVE LCG Consulting 4962 El Camino Real, Suite 112 Los Altos, CA 94022 650-962-9670 1 IRP 1 ELECTRIC LOAD FORECASTING 1.1
4. Simple regression. QBUS6840 Predictive Analytics. https://www.otexts.org/fpp/4
4. Simple regression QBUS6840 Predictive Analytics https://www.otexts.org/fpp/4 Outline The simple linear model Least squares estimation Forecasting with regression Non-linear functional forms Regression
CLOUD COVER IMPACT ON PHOTOVOLTAIC POWER PRODUCTION IN SOUTH AFRICA
CLOUD COVER IMPACT ON PHOTOVOLTAIC POWER PRODUCTION IN SOUTH AFRICA Marcel Suri 1, Tomas Cebecauer 1, Artur Skoczek 1, Ronald Marais 2, Crescent Mushwana 2, Josh Reinecke 3 and Riaan Meyer 4 1 GeoModel
Stock market booms and real economic activity: Is this time different?
International Review of Economics and Finance 9 (2000) 387 415 Stock market booms and real economic activity: Is this time different? Mathias Binswanger* Institute for Economics and the Environment, University
Power & Water Corporation. Review of Benchmarking Methods Applied
2014 Power & Water Corporation Review of Benchmarking Methods Applied PWC Power Networks Operational Expenditure Benchmarking Review A review of the benchmarking analysis that supports a recommendation
Current Standard: Mathematical Concepts and Applications Shape, Space, and Measurement- Primary
Shape, Space, and Measurement- Primary A student shall apply concepts of shape, space, and measurement to solve problems involving two- and three-dimensional shapes by demonstrating an understanding of:
Algebra 1 Course Information
Course Information Course Description: Students will study patterns, relations, and functions, and focus on the use of mathematical models to understand and analyze quantitative relationships. Through
USE OF REMOTE SENSING FOR WIND ENERGY ASSESSMENTS
RECOMMENDED PRACTICE DNV-RP-J101 USE OF REMOTE SENSING FOR WIND ENERGY ASSESSMENTS APRIL 2011 FOREWORD (DNV) is an autonomous and independent foundation with the objectives of safeguarding life, property
Power fluctuations from large offshore wind farms
Power fluctuations from large offshore wind farms Poul Sørensen Wind Energy Systems (VES) Wind Energy Division Project was funded by Energinet.dk PSO 2004-6506 Geographical spreading 2 Wind turbine sites
Solarstromprognosen für Übertragungsnetzbetreiber
Solarstromprognosen für Übertragungsnetzbetreiber Elke Lorenz, Jan Kühnert, Annette Hammer, Detlev Heienmann Universität Oldenburg 1 Outline grid integration of photovoltaic power (PV) in Germany overview
Chapter 3 - GPS Data Collection Description and Validation
Chapter 3 - GPS Data Collection Description and Validation The first step toward the analysis of accuracy and reliability of AVI system was to identify a suitable benchmark for measuring AVI system performance.
The Wind Integration National Dataset (WIND) toolkit
The Wind Integration National Dataset (WIND) toolkit EWEA Wind Power Forecasting Workshop, Rotterdam December 3, 2013 Caroline Draxl NREL/PR-5000-60977 NREL is a national laboratory of the U.S. Department
Final Report Bid Evaluation and Selection Process For Wind-Generated Electricity Hydro-Quebec Distribution Call For Tenders Process.
Final Report Bid Evaluation and Selection Process For Wind-Generated Electricity Hydro-Quebec Distribution Call For Tenders Process March, 2005 Prepared by Merrimack Energy Group, Inc. Merrimack M Energy
IRG-Rail (13) 2. Independent Regulators Group Rail IRG Rail Annual Market Monitoring Report
IRG-Rail (13) 2 Independent Regulators Group Rail IRG Rail Annual Market Monitoring Report February 2013 Index 1 Introduction...3 2 Aim of the report...3 3 Methodology...4 4 Findings...5 a) Market structure...5
Data Visualization Techniques
Data Visualization Techniques From Basics to Big Data with SAS Visual Analytics WHITE PAPER SAS White Paper Table of Contents Introduction.... 1 Generating the Best Visualizations for Your Data... 2 The
Real-time PCR: Understanding C t
APPLICATION NOTE Real-Time PCR Real-time PCR: Understanding C t Real-time PCR, also called quantitative PCR or qpcr, can provide a simple and elegant method for determining the amount of a target sequence
IBM SPSS Forecasting 22
IBM SPSS Forecasting 22 Note Before using this information and the product it supports, read the information in Notices on page 33. Product Information This edition applies to version 22, release 0, modification
Selecting members of the QUMP perturbed-physics ensemble for use with PRECIS
Selecting members of the QUMP perturbed-physics ensemble for use with PRECIS Isn t one model enough? Carol McSweeney and Richard Jones Met Office Hadley Centre, September 2010 Downscaling a single GCM
Software Metrics & Software Metrology. Alain Abran. Chapter 4 Quantification and Measurement are Not the Same!
Software Metrics & Software Metrology Alain Abran Chapter 4 Quantification and Measurement are Not the Same! 1 Agenda This chapter covers: The difference between a number & an analysis model. The Measurement
2016 ERCOT System Planning Long-Term Hourly Peak Demand and Energy Forecast December 31, 2015
2016 ERCOT System Planning Long-Term Hourly Peak Demand and Energy Forecast December 31, 2015 2015 Electric Reliability Council of Texas, Inc. All rights reserved. Long-Term Hourly Peak Demand and Energy
Mathematics. Mathematical Practices
Mathematical Practices 1. Make sense of problems and persevere in solving them. 2. Reason abstractly and quantitatively. 3. Construct viable arguments and critique the reasoning of others. 4. Model with
Algorithmic Trading Session 1 Introduction. Oliver Steinki, CFA, FRM
Algorithmic Trading Session 1 Introduction Oliver Steinki, CFA, FRM Outline An Introduction to Algorithmic Trading Definition, Research Areas, Relevance and Applications General Trading Overview Goals
