5.1.2   Maps of forest/non-forest, land use, or forest stratification

At the heart of using remote sensing images is the translation of remotely sensed measurements into information about surface conditions (land cover), followed by the use of additional information to translate land cover into land use, so that reporting is consistent with IPCC categories. Generating the various kinds of activity data necessary for estimating GHG emissions and removals involves categorization of lands. Ideally countries would develop maps with categories that best suit their conditions and broad national and international reporting requirements. These country-specific categories would then be represented as the corresponding IPCC categories through the application of country-specific rules, enabling reporting to the IPCC land-use categories through time.
For example, for estimation of forest area changes, a map is usually made that includes the categories forest and non-forest(1). To correspond to the top-level categorization adopted by IPCC GPG2003, the map would need to have at least the following categories: Forest Land, Cropland, Grassland, Wetlands, Settlements and Other Land. There may be a need to stratify forest areas according to ecosystem types or other nationally relevant categories for a range of reasons, for example broader NFMS reporting requirements or to minimize the variability in carbon content. Consequently, methods that define categories, or classes, using remote sensing and attribution are particularly relevant. Collectively, these methods are referred to as image classification, and there is a long history of their use in remote sensing. There has also been extensive research on the best methods for image classification, and as a result a wide variety of choices are available. Common image classification algorithms include maximum likelihood, decision trees, support vector machines and neural networks. Many of these are available in standard image processing software packages(2).
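To make the maximum-likelihood approach concrete, the following is a minimal sketch of a Gaussian maximum-likelihood classifier assuming independent bands (diagonal covariance). The two-band reflectance values and the forest/non-forest class names are illustrative toy data, not values prescribed by the MGD; operational work would use an established package such as those in footnote (2).

```python
import math

def train(samples):
    """Estimate per-class, per-band mean and variance from training pixels.

    samples: dict mapping class name -> list of pixel spectra
    (tuples of band reflectances). Toy illustrative data only.
    """
    stats = {}
    for cls, pixels in samples.items():
        n_bands = len(pixels[0])
        means, variances = [], []
        for b in range(n_bands):
            values = [p[b] for p in pixels]
            mean = sum(values) / len(values)
            var = sum((v - mean) ** 2 for v in values) / len(values)
            means.append(mean)
            variances.append(max(var, 1e-6))  # guard against zero variance
        stats[cls] = (means, variances)
    return stats

def classify(pixel, stats):
    """Assign the class with the highest Gaussian log-likelihood,
    assuming independent bands (diagonal covariance)."""
    best_cls, best_ll = None, -math.inf
    for cls, (means, variances) in stats.items():
        ll = 0.0
        for x, m, v in zip(pixel, means, variances):
            ll += -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
        if ll > best_ll:
            best_cls, best_ll = cls, ll
    return best_cls

# Toy two-band training data (e.g. red and near-infrared reflectance).
training = {
    "forest": [(0.05, 0.40), (0.06, 0.45), (0.04, 0.42)],
    "non-forest": [(0.20, 0.25), (0.25, 0.30), (0.22, 0.28)],
}
stats = train(training)
print(classify((0.05, 0.43), stats))  # a forest-like spectrum -> forest
print(classify((0.23, 0.27), stats))  # a non-forest-like spectrum -> non-forest
```

The same structure extends to more classes and more bands; real implementations also model between-band covariance rather than assuming independence.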
Image classification begins with the definition of the categories or classes to be included in the map. In supervised classification, it is necessary to provide training samples of each of the classes to be included. These samples could come from a variety of sources, including sample sites from an NFI, or could be obtained from high resolution images. For the basic classes of forest/non-forest, or the small number of top-level categories used by the IPCC GPG, examples can often be found readily in the images being classified. Often images from a single date are used for image classification. However, multiple images from different seasons can also be used in image classification to try to capture classes with seasonal dynamics. As the level of stratification of forests increases, alternative sources of reference data to train classifiers will be needed, such as prior vegetation maps or field plots.
Classification can be done by visual interpretation, but this can be very human-resource intensive(3) because the number of pixels may be very large and interpretations can vary due to human judgement. This may be overcome by using automated algorithms in either non-supervised or supervised approaches to give results consistent with human interpreters in allocating a pixel to one forest type or another, or to segment the data. Non-supervised approaches use classification algorithms to assign image pixels into one of a number of unlabelled class groupings. Expert image interpreters then assign each of the groupings of pixels a value corresponding to the desired land class. Supervised approaches use expertly-defined areas of known vegetation types to tune the parameters of classification algorithms, which then identify and label areas similar to the input training data. The approaches have different challenges which are best addressed by iterative trials: supervised classification may wish to use more classes than are statistically separable; unsupervised methods may generate fewer classes than are desired, and a given cover type may be split between several groupings. In both cases human interpreters can check whether the results of applying the algorithm appear reasonable in terms of the forest type distribution expected from prior information, and whether unlikely features are absent. The relative advantage depends on whether the time taken in checking automatic classifications exceeds the time taken to achieve consistent results by relying entirely on human interpreters.
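The two-step non-supervised workflow described above (cluster first, then have an interpreter label the groupings) can be sketched with a minimal k-means clustering followed by a labelling rule. The deterministic initialisation, the toy spectra and the "high near-infrared means forest" rule are all illustrative assumptions; in practice the interpreter labels groupings from expert knowledge, not a fixed threshold.

```python
def kmeans(pixels, k, iterations=10):
    """Group pixel spectra into k unlabelled clusters (unsupervised step).
    Initial centres are taken from the first k pixels for determinism."""
    centres = [list(pixels[i]) for i in range(k)]
    assignment = [0] * len(pixels)
    for _ in range(iterations):
        # Assign each pixel to its nearest centre.
        for i, p in enumerate(pixels):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres]
            assignment[i] = dists.index(min(dists))
        # Recompute each centre as the mean of its members.
        for j in range(k):
            members = [p for i, p in enumerate(pixels) if assignment[i] == j]
            if members:
                centres[j] = [sum(v) / len(members) for v in zip(*members)]
    return assignment, centres

# Toy two-band spectra: two forest-like and two non-forest-like pixels.
pixels = [(0.05, 0.40), (0.06, 0.45), (0.22, 0.28), (0.25, 0.30)]
assignment, centres = kmeans(pixels, k=2)

# Labelling step: an interpreter inspects each grouping and names it.
# Here high near-infrared (band 2) is read as forest - an illustrative rule.
labels = {j: ("forest" if c[1] > 0.35 else "non-forest")
          for j, c in enumerate(centres)}
print([labels[a] for a in assignment])
```

Note that the clustering step produces only unlabelled groupings; the land-class meaning is attached entirely in the second step, which is where interpreter checking described above takes place.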
Rarely does the first attempt at image classification result in the final map. Close examination of the classification results often reveals issues and problems that can be resolved by changes in the classification process. There are many ways to try to improve the results of a classification with noticeable problems, including the addition of more or improved training data. It may also be helpful to include additional kinds of data in the classification, such as topographic or climatic data.
Recognition of various strata of modified natural forests will generally need to take account of surrounding pixels, because features such as crown cover disturbance, fragmentation or logging infrastructure will not occur in every pixel of the area affected. Consequently, when considering the boundary between modified natural forest and primary forest it will be necessary to establish a radius within which evidence for modification is taken to be relevant to the pixel in question. If pixel-based classification is to be used subsequently, the radius is used directly; if the pixels are first to be segmented (grouped according to common properties), it becomes an input to the segmentation process (Box 22).
Conceptually, this radius is the distance over which forest regains the characteristics of primary forest, as represented for REDD+ purposes. A default of 500 metres can be used, but the value will depend on the forest ecosystem and the type of modification, and is best established by measurement(4), especially if using an IPCC Tier 2 or 3 method. If the result of using a particular radius of influence is that fragments of nominally primary forest appear along the boundary between primary and modified natural forest, then the radius of influence being used is probably too small. This is because forest within a fragmented landscape is more likely to be modified than primary. Having established image characteristics of forest types and the radius of influence, it is possible to assign a forest type (and sub-strata) to each pixel for the entire forest area of the country, as described above.
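The radius-of-influence rule can be sketched as a neighbourhood test on a pixel grid: a forest pixel is assigned to the modified stratum if any evidence of modification lies within the radius. The 60 m radius, 30 m pixel size and brute-force search below are illustrative choices for a small example; the MGD default radius is 500 m, and operational implementations would use an efficient focal (moving-window) operation.

```python
def apply_radius_of_influence(evidence, radius_m, pixel_size_m):
    """Reclassify forest pixels as 'modified' when evidence of modification
    (e.g. logging infrastructure) lies within the radius of influence.

    evidence: 2-D grid of booleans, True where modification is observed.
    """
    radius_px = radius_m / pixel_size_m
    rows, cols = len(evidence), len(evidence[0])
    result = [["primary"] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Brute-force scan: is any evidence pixel within the radius?
            for er in range(rows):
                for ec in range(cols):
                    if evidence[er][ec] and \
                       (r - er) ** 2 + (c - ec) ** 2 <= radius_px ** 2:
                        result[r][c] = "modified"
    return result

# 4x4 forest block with one observed disturbance at (0, 0);
# 30 m pixels and a 60 m radius of influence (values chosen for the example).
evidence = [[False] * 4 for _ in range(4)]
evidence[0][0] = True
grid = apply_radius_of_influence(evidence, radius_m=60, pixel_size_m=30)
print(grid[0][0], grid[0][2], grid[3][3])  # modified modified primary
```

Enlarging the radius enlarges the modified stratum around each piece of evidence, which is why an undersized radius leaves implausible fragments of nominally primary forest along the boundary.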
Attribution integrates remote sensing data, forest inventory and ancillary datasets to attribute the land-cover change observations to the most likely disturbance type (natural or anthropogenic). Typical data sets used in attribution include those with information relating to fires, forest management areas, agricultural areas, road coverage and urban areas (Mascorro et al., 2015). As satellite-based algorithms detect increasingly diverse change processes, the need to distinguish among the agents causing the change becomes critical. Not only do different change types have different impacts on natural and anthropogenic systems, they also provide insight into the overall processes controlling landscape condition. Reaching this goal requires overcoming two central challenges. The first is related to scale mismatch: change detection in digital images occurs at the level of individual pixels, but change processes in the real world operate on areas larger or smaller than pixels, depending on the process. The second is related to separability: change agents are defined by natural and anthropogenic factors that have no connection with the spectral space on which the change is initially detected. Different change agents may have nearly identical spectral signatures of change at the pixel and even the patch level, and must be distinguished by factors completely outside the realm of remote sensing (Kennedy et al., 2014).

Box 22: Pixel and object-based methods and segmentation

Acceptable accuracies for land cover and land cover changes can be achieved using either pixel-based or object-based classification methods. Object-based methods first group together pixels with common characteristics, a process called segmentation. At medium resolution as defined here, these can sometimes yield higher overall accuracies than pixel-based methods for land cover classification (Gao & Francois Mas, 2008). Segmentation is also useful for reducing speckle noise in SAR images prior to classification. However, if the smallest number of pixels to be grouped (the minimum mapping unit, MMU) is too large, there is a risk of biasing the classification results; e.g. an area could be counted as deforested on the basis of reduced crown cover even if it contained areas still meeting the national forest definition. In practice the minimum mapping unit should not exceed the smallest object discernible in the imagery.
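The grouping step that segmentation performs can be sketched as a connected-component flood fill over a classified grid, where adjacent pixels of the same class form one segment. This is a minimal illustration of the idea; real segmentation algorithms group on spectral similarity rather than on a pre-existing class label, and would apply the MMU constraint discussed above when deciding which segments to keep.

```python
def segment(grid):
    """Group adjacent pixels with the same value into segments
    (4-connected flood fill) - a minimal segmentation sketch."""
    rows, cols = len(grid), len(grid[0])
    seg = [[-1] * cols for _ in range(rows)]  # -1 = not yet assigned
    n = 0
    for r in range(rows):
        for c in range(cols):
            if seg[r][c] == -1:
                # Start a new segment and flood-fill matching neighbours.
                stack = [(r, c)]
                seg[r][c] = n
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and seg[ny][nx] == -1 \
                           and grid[ny][nx] == grid[r][c]:
                            seg[ny][nx] = n
                            stack.append((ny, nx))
                n += 1
    return seg, n

# Toy classified grid: a forest block ("F") beside a non-forest strip ("N").
grid = [["F", "F", "N"],
        ["F", "F", "N"],
        ["N", "N", "N"]]
seg, count = segment(grid)
print(count)  # two segments: the forest block and the non-forest strip
```

Segment size in pixels can then be compared against the MMU: segments smaller than the MMU would typically be merged into a neighbouring segment rather than reported separately.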
Image segments provide an advantage when part of a processing chain requires human interpreter input. This is because image segments can be combined into larger polygons which can be more easily reviewed and revised for classification errors (FAO & JRC, 2012). Tracking change at the pixel level opens the way to better representation of carbon pool dynamics; however, it requires significantly more data processing.
Pixel-based approaches are potentially most useful where there are multiple changes in land use within a short period (for example, 10-15 year re-clearing cycles). They are most suited to situations with complete data coverage (sometimes referred to as wall-to-wall), and require methods to ensure time series consistency at the pixel level. The approach may also be applied to sample-based methods where pixel-level time series consistency methods are used, with the results scaled up based on the sample size. The results may still be summarised in land use change matrices. In fact the method is equivalent to matrix representation at the pixel level (AGO, 2002).
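The equivalence between pixel-level tracking and a land-use change matrix can be shown in a few lines: comparing two co-registered pixel maps and counting each (from, to) pair yields exactly the change matrix. The tiny two-by-two maps and class names below are illustrative only.

```python
from collections import Counter

def change_matrix(map_t1, map_t2):
    """Build a land-use change matrix by comparing two co-registered
    pixel maps for times t1 and t2; each (from, to) pair is counted
    once per pixel, which is the matrix representation at pixel level."""
    matrix = Counter()
    for row1, row2 in zip(map_t1, map_t2):
        for a, b in zip(row1, row2):
            matrix[(a, b)] += 1
    return matrix

# Toy maps: one forest pixel is converted to cropland between t1 and t2.
t1 = [["forest", "forest"], ["forest", "cropland"]]
t2 = [["forest", "cropland"], ["forest", "cropland"]]
m = change_matrix(t1, t2)
print(m[("forest", "cropland")])  # pixels deforested to cropland -> 1
print(m[("forest", "forest")])    # pixels remaining forest -> 2
```

In a sample-based design the same counts would be computed over sample pixels only and then scaled up based on the sample size, as noted above.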
In addition to the general principles of consistent representation of land when using remote sensing for representing land or tracking units of land using a pixel approach, MGD advice is that:
  • Once a pixel is included, then it should continue to be tracked for all time. This will prevent the double counting of activities in the inventory and will also make emissions estimates more accurate.
  • Stocks may be attributed to pixels, but only change in stocks and consequent emissions and removals are reported, with attention to continuity to prevent the risk of estimating large false emissions and removals as land moves between categories.
  • Tracking needs to be able to distinguish both land cover changes that are land-use changes, and land cover changes that lead to emissions within a land-use category. This prevents incorrect allocation of lands and incorrect emissions or removals factors or models being applied that could bias results.
Rules are needed to ensure consistent classification by eliminating oscillation of pixels between land uses when close to the definition limits.
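One simple family of rules for eliminating oscillation is hysteresis: a pixel's recorded land use changes only after the new cover class has persisted for several consecutive observations. The two-observation threshold below is an illustrative assumption, not an MGD-prescribed value; the appropriate persistence period depends on the observation frequency and the national forest definition.

```python
def confirm_changes(observations, persistence=2):
    """Accept a land-use change for a pixel only after the new cover class
    has been observed for `persistence` consecutive time steps - a simple
    hysteresis rule to stop pixels oscillating near definition limits.
    (The two-observation default is illustrative, not an MGD value.)"""
    current = observations[0]
    confirmed = [current]
    run_class, run_len = current, 1
    for obs in observations[1:]:
        # Track the length of the current run of identical observations.
        if obs == run_class:
            run_len += 1
        else:
            run_class, run_len = obs, 1
        # Commit the change only once the run is long enough.
        if run_class != current and run_len >= persistence:
            current = run_class
        confirmed.append(current)
    return confirmed

# A pixel flickering around the forest definition limit: the single
# non-forest ("N") observation in the second time step is not accepted,
# but the sustained change later in the series is.
print(confirm_changes(["F", "N", "F", "N", "N", "N"]))
# -> ['F', 'F', 'F', 'F', 'N', 'N']
```

Because the rule is applied identically at every time step, it also supports the time-series consistency requirement for pixel-level tracking noted in this box.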

 (1)
Italics are used here to indicate names of categories (also called classes) in a map.
 (2)
Packages include Orfeo, QGIS and GDAL.
 (3)
See section 2.1 of the GOFC-GOLD sourcebook.
 (4)
For example, work in Guyana using change metrics indicated that almost all the degradation associated with new infrastructure occurs within a buffer zone about 100 metres deep (Winrock International, 2012).