5.1.5 Estimating uncertainty of area and change in area
The IPCC definition of good practice requires that emission inventories satisfy two criteria: (1) neither over- nor under-estimates so far as can be judged, and (2) uncertainties reduced as far as is practicable (IPCC, 2003, preface).
In statistical terms, the first criterion is closely related to the statistical concept of bias. Bias is a property of a statistical formula called an estimator which, when applied to sample data, produces an estimate. An estimator is characterized as unbiased if the average of the estimates calculated over all possible samples obtainable under the sampling design equals the true value of the parameter of interest; otherwise, the estimator is characterized as biased. In practice, applying the estimator to all possible samples is impossible, so bias can only be estimated, and an estimate obtained using an unbiased estimator may still deviate substantially from the true value; hence the concept of a confidence interval. A confidence interval expresses the uncertainty of a sample-based estimate and is constructed as the sample-based estimate of the parameter, plus or minus the sample-based estimate of the standard error of the parameter estimate multiplied by a factor corresponding to the desired confidence level (approximately 1.96 at the 95% level, assuming a normal distribution). Confidence intervals at the 95% level are interpreted as meaning that 95% of such intervals, one for each possible set of sample data, include the true value of the parameter. The width of a confidence interval is closely related to precision, the measure of uncertainty addressed by the second IPCC criterion. Confidence intervals constructed using unbiased estimators therefore satisfy both IPCC good practice criteria specified above. This section provides advice on how to use such estimators to infer central values and confidence intervals for activity data.
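As a minimal illustration of the confidence interval construction described above, the sketch below estimates an area proportion and its 95% interval from a simple random sample; the sample counts are hypothetical, and the factor 1.96 is the critical value corresponding to the 95% confidence level, not the confidence level itself.

```python
import math

def srs_proportion_ci(k, n, z=1.96):
    """Estimate and 95% confidence interval for an area proportion from a
    simple random sample (SRS): k of n sampled units exhibit the activity.
    z is the critical value for the chosen confidence level (~1.96 for 95%)."""
    p_hat = k / n                                  # unbiased estimate of the proportion
    se = math.sqrt(p_hat * (1 - p_hat) / (n - 1))  # estimated standard error
    return p_hat, p_hat - z * se, p_hat + z * se

# Hypothetical sample: 120 of 1000 SRS reference units show the activity
p, lower, upper = srs_proportion_ci(120, 1000)
```

The interval is interpreted as in the text: over repeated sampling, 95% of intervals constructed this way would contain the true proportion.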
Methods that produce estimates of activity data as sums of the areas of map units assigned to map classes are characterized as pixel counting and generally make no provision for accommodating the effects of map classification errors. Further, although confusion or error matrices and map accuracy indices can provide information about systematic error and precision, they do not directly produce the information necessary to construct confidence intervals. Therefore, pixel-counting methods provide no assurance that estimates are “neither over- nor under-estimates” or that “uncertainties are reduced as far as practicable”. The role of reference data, also characterized as accuracy assessment data, is to provide such assurance by adjusting for estimated systematic classification errors and by estimating uncertainty, thereby providing the information necessary to construct confidence intervals in compliance with IPCC good practice guidance.
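The contrast between pixel counting and a reference-data-adjusted estimate can be sketched as follows, assuming a stratified reference sample with map classes as strata and cross-tabulated against the reference classification (an error matrix); all class names and counts are hypothetical.

```python
# W[i]: proportion of total area mapped as class i (the pixel-count estimate)
W = {"change": 0.05, "stable": 0.95}

# n[i][k]: reference sample counts; row = map class (stratum), column = reference class
n = {
    "change": {"change": 44, "stable": 6},    # 50 units sampled in map class "change"
    "stable": {"change": 14, "stable": 436},  # 450 units sampled in map class "stable"
}

def stratified_area_proportion(W, n, ref_class):
    """Stratified estimator of the area proportion of ref_class:
    p_hat = sum_i W_i * n_ik / n_i, with strata defined by map classes.
    Returns the estimate and its estimated standard error."""
    p_hat, var = 0.0, 0.0
    for i in W:
        n_i = sum(n[i].values())
        p_ik = n[i].get(ref_class, 0) / n_i
        p_hat += W[i] * p_ik
        var += W[i] ** 2 * p_ik * (1 - p_ik) / (n_i - 1)  # within-stratum variance
    return p_hat, var ** 0.5

p_change, se_change = stratified_area_proportion(W, n, "change")
# Pixel counting alone would report W["change"] = 0.05; the adjusted estimate
# accounts for the omission and commission errors found in the reference sample.
```

In this hypothetical case the adjusted estimate exceeds the pixel count because the reference sample reveals more omitted change in the "stable" stratum than falsely mapped change in the "change" stratum.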
Direct observations of ground conditions by field crews are often considered the most reliable source of reference data, but interpretations of aerial photography and satellite data are also used. When the source of reference data is not direct ground observation, the reference data must be of at least the same, and preferably greater, quality with respect to both resolution and accuracy as the remote sensing-based map data. For accuracy assessment and estimation to be valid for an area of interest using the familiar design- or probability-based framework (McRoberts, 2014), the reference data must be collected using a probability sampling design, regardless of how the training data used to classify, for example, a satellite image are collected. Probability sampling designs to consider include simple random sampling (SRS), systematic sampling (SYS), stratified sampling (STR, with either simple random or systematic selection within strata), and two-stage and cluster sampling. A key issue when selecting a sampling design is that the sample size for each activity must be large enough to produce sufficiently precise estimates of the area of the activity, given the policy requirement and the costs involved. SRS and SYS designs produce sample sizes for individual activities that are approximately proportional to their occurrence. If a very large overall sample is obtained, then SRS or SYS may produce sample sizes for individual activities large enough to yield estimates of sufficient precision. However, unless the overall sample size is large, sample sizes for activities representing small proportions of the total area may be too small to satisfy the precision criterion. Thus, given the likely rarity of some activities and the potentially large costs associated with large samples, serious consideration should be given to stratified sampling (STR), with strata corresponding to the map activity classes.
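The rationale for stratification can be made concrete with a short calculation, using hypothetical activity proportions and sample sizes: under SRS or SYS, the expected sample size in each activity class is proportional to its area, so rare activities receive very few units, whereas a stratified design can guarantee a minimum per stratum.

```python
# Hypothetical area proportions of three activity classes
area_proportions = {"deforestation": 0.01, "degradation": 0.04, "stable": 0.95}
n_total = 500  # hypothetical overall sample size

# Under SRS/SYS, the expected sample size per activity is proportional to area:
srs_expected = {k: n_total * p for k, p in area_proportions.items()}
# only ~5 expected units fall in the 1% deforestation class

# A stratified design (strata = map activity classes) can instead guarantee
# a minimum sample size in every stratum, at the cost of a larger total sample:
min_per_stratum = 100
strat_allocation = {k: max(min_per_stratum, round(n_total * p))
                    for k, p in area_proportions.items()}
```

With only a handful of SRS units in the rare class, the standard error of its area estimate would be far too large to satisfy any realistic precision criterion, which is the motivation given in the text for stratified sampling.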
With two-stage sampling, primary sampling units are first selected, and then several secondary sampling units are selected within each primary unit. The motivation is often to reduce sampling costs, but several factors must be considered when planning a two-stage design. If distances between pairs of secondary sampling units are smaller than the geographic range of spatial correlation, observations will tend to be similar and the sample will be less efficient. Further, analysis of the sample is often more complex than for a sample selected under an SRS, SYS or STR design. When the reference observations are continuous (such as proportion of forest) rather than assignments to classes or categories, model-assisted estimators may be more efficient. Typically, these estimators use the map predictions as the model predictions and then use a reference sample, selected using an appropriate sampling design, to correct for the estimated bias resulting from systematic classification or prediction error.
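A minimal sketch of the model-assisted approach described above, assuming a simple random reference sample and hypothetical values throughout: the map-wide mean of the predictions is corrected by the mean prediction error observed on the sample (a difference estimator).

```python
# Mean of the map/model predictions over ALL population units (hypothetical)
map_mean = 0.62

# Paired (prediction, reference observation) values for the sampled units
sample = [(0.70, 0.65), (0.55, 0.60), (0.40, 0.35),
          (0.80, 0.78), (0.60, 0.66), (0.50, 0.49)]

m = len(sample)
residuals = [y - y_hat for y_hat, y in sample]  # reference minus prediction
bias_correction = sum(residuals) / m            # estimated systematic error
mu_hat = map_mean + bias_correction             # model-assisted estimate

# Variance estimated from the sample variance of the residuals;
# residual variation is typically much smaller than variation in y itself,
# which is the source of the efficiency gain noted in the text.
var_hat = sum((r - bias_correction) ** 2 for r in residuals) / (m * (m - 1))
se = var_hat ** 0.5
```

A real application would use a much larger sample and a variance estimator matched to the actual sampling design; this sketch only illustrates the bias-correction logic.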
Once a sample of reference observations has been collected, the activity area and the associated confidence interval are estimated using a statistical estimator corresponding to the sampling design.