
What are the different methods of statistical downscaling and what are the strengths/weaknesses?

This is a growing collection of information on statistical downscaling techniques.  The list is not exhaustive, but it highlights the techniques that GLISA has studied.  Check back periodically for updated information.

The type of statistical downscaling that is applied to global climate model (GCM) output can affect the climate story that the downscaled projections tell.  If you are considering using statistically downscaled data, it is important to know the strengths and weaknesses of each method.  Analysis and evaluation of the data are recommended at the spatial scales of the climate application and the climate questions being asked.

Advantages of Statistical Downscaling:  Statistically downscaled projections are relatively easy to produce because they do not require heavy computing resources.  Because of this computational advantage, large ensembles of projections can be produced.  Projections can also be downscaled to point-specific locations, although the data must be carefully interpreted at that scale.  The results can be compared to observations over a historical time period.

Disadvantages of Statistical Downscaling: Although downscaled projections provide data at the local spatial scales relevant to decision makers, caution must be taken when interpreting the information.  Two major disadvantages exist: 1) local, small-scale dynamics and climate feedbacks are not simulated, and 2) stationarity between the large- and small-scale dynamics is assumed in order to downscale future projections.

GCMs cannot simulate weather and climate processes at scales smaller than their grid spacing, and statistically downscaled data do not add information at those smaller scales; they can only reflect information from the observations onto the projections.  For example, in the Great Lakes region there is a lake-effect zone parallel to the downwind coastlines that typically receives more wintertime precipitation.  The GCMs do not simulate this feature because they are too coarse to capture lake-effect processes.  Statistically downscaled projections will show enhanced lake-effect precipitation if there is a signal of it in the observations, because the projections are heavily weighted by the observed climate.  However, the projections do not provide any information about the lake-effect processes themselves or how they might change over time.  In fact, statistical downscaling assumes those processes will NOT change over time, because without actually simulating them there is no way of telling how they would change.

The problem with assuming stationarity is that some interactions between the large and small scales are already changing, so this information is not represented in the downscaled projections.  For example, lake ice on the Great Lakes plays an important role in determining the amount of snowfall observed downwind, and in recent years ice coverage has been declining.  This decline has been accompanied by an increase in lake-effect snowfall in some areas.  Statistical downscaling does not take the downward trend in lake ice into account in future years, so it does not include this additional moisture source for future snowfalls.

Bias Corrected Spatial Disaggregation (BCSD)

Biases (differences between what the model simulates and what was observed during a historical period) are removed using a quantile mapping technique (more on quantiles here).  This technique compares the simulated climate values to the observed values at specific points in the statistical distribution and adjusts the simulated values so they match what was observed.  The adjustment is not done on a time-by-time basis; rather, points in the statistical distribution of each are compared and adjusted.  The amount of adjustment is recorded and also applied to future simulations.  The adjusted simulations are then downscaled to a finer spatial resolution using linear interpolation: the values in between the adjusted data points (at the finer resolution) are calculated from the surrounding data point values, weighted linearly by the distance between the large- and small-scale data point locations.
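The two BCSD steps described above (quantile-mapping bias correction, then interpolation to a finer grid) can be sketched roughly as follows.  This is a minimal one-dimensional illustration with NumPy; the array sizes, variable names, and toy bias are hypothetical and do not reproduce any specific BCSD implementation:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Bias-correct future model values by quantile mapping.

    For each future value, find its quantile in the historical model
    distribution, then apply the observed-minus-modeled adjustment
    at that same quantile.
    """
    q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    adjustment = np.quantile(obs_hist, q) - np.quantile(model_hist, q)
    return model_future + adjustment

# Toy example: the model runs 2 degrees too warm everywhere.
obs_hist = np.arange(100.0)            # "observed" historical values
model_hist = obs_hist + 2.0            # biased historical simulation
model_future = np.array([52.0, 82.0])  # future simulated values

corrected = quantile_map(model_hist, obs_hist, model_future)  # ~[50., 80.]

# Spatial disaggregation step: linearly interpolate the adjusted
# coarse-grid values onto a finer grid (1-D here for simplicity).
coarse_x = np.array([0.0, 100.0])              # coarse grid-point locations
fine_x = np.linspace(0.0, 100.0, 5)            # finer target grid
fine_vals = np.interp(fine_x, coarse_x, corrected)
```

Because the toy bias is a uniform +2, the quantile adjustment recovers the observed distribution exactly; with real data the adjustment varies across the distribution, which is the point of mapping quantile by quantile rather than subtracting a single mean bias.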

Additional information on the bias correction and downscaling methods that were applied to the CMIP3 climate projections is available here.     

Bias Corrected Constructed Analogue (BCCA)

Like BCSD, the constructed analogue method relies on historical climate observations, and GCM simulations are first bias corrected against observations as in BCSD.  Each time slice in the bias-corrected future GCM projection is then compared to historical observations to select the 30 most similar observed days.  Similarity is based on how well the spatial pattern and intensity of, say, precipitation or temperature are matched.  A linear combination of the 30 selected days is constructed to produce an analogue that most closely matches the GCM field, and the coefficients for those 30 historical days are applied to the higher-resolution observations at the same times to produce a downscaled analogue.  One potential issue with this method is that the GCM may project conditions that have never been observed, so no suitable analogue will exist.
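The analogue construction can be sketched as follows, using synthetic data with NumPy.  The grid sizes, the RMSE similarity measure, and the least-squares coefficient fit are illustrative assumptions for a single future day, not the exact published procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical library: 1000 observed days on a coarse grid of 20 points,
# with matching high-resolution (80-point) observations for the same days.
n_days, coarse_pts, fine_pts = 1000, 20, 80
obs_coarse = rng.normal(size=(n_days, coarse_pts))
obs_fine = np.repeat(obs_coarse, 4, axis=1)  # toy "high-res" counterpart

# One bias-corrected future GCM field on the coarse grid.
gcm_day = rng.normal(size=coarse_pts)

# 1) Select the 30 most similar observed days (smallest RMSE to the GCM field).
rmse = np.sqrt(((obs_coarse - gcm_day) ** 2).mean(axis=1))
best = np.argsort(rmse)[:30]

# 2) Fit coefficients so a linear combination of those days matches the GCM.
coeffs, *_ = np.linalg.lstsq(obs_coarse[best].T, gcm_day, rcond=None)

# 3) Apply the same coefficients to the high-resolution observations
#    at those same times to produce the downscaled analogue.
downscaled = obs_fine[best].T @ coeffs
```

Step 2 is where the "constructed" part happens: rather than using the single best analogue day, a weighted blend of several days is fit to the coarse GCM field, and the same weights carry the fine-scale structure of those observed days into the downscaled result.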

Additional information on constructed analogues can be found here and there's a nice diagram of the method on page 17 of the document. 

Asynchronous Regression (AR)

 
FAQ Tags: 
Climate Projections
Downscaling
Uncertainty Information