
OBIA design documentation

Object Based Image Analysis (OBIA)

Note

This page describes the technical choices made during the OBIA implementation. To learn how to use it, refer to the tutorial.

Managing input data

Preprocess sensor data

As in the iota2 pixel classification workflow, the first steps are dedicated to preprocessing the data. Available dates are searched to initialize the gapfilling, and common masks are produced covering all enabled sensors. Validity masks are produced according to cloud coverage. All these steps are also required for OBIA, as they are generic and handle all the available information.

Preprocess input segmentation

Here the OBIA workflow really begins. Two cases must be handled at this step:

  1. The user provides a segmentation covering all the required tiles.

  2. The user wants iota2 to produce the segmentation.

User provided segmentation

As with all user-provided data, the first step is to ensure that the data is compatible with the iota2 workflow. The segmentation can be a TIF image, or a vector file such as SQLite or shapefile. GML files are forbidden, as most vector tools are unable to find the projection.

The segmentation can be exhaustive over the whole required zone, i.e. segments covering all tiles, or it can be sparse. In the latter case, the output classification is only produced for segments present in the original segmentation.

The first step is to split the segmentation over all tiles, as iota2 works tile by tile. Border segments are cut, but there is an overlap zone between tiles. From this point until the production of the final classification, the original segmentation is not used.

If the segmentation is a raster, it is vectorized after being split over tiles. Then a new column, using an iota2 constant name, is created to assign a distinct identifier to each segment. This is required to prevent a segment cut into several parts from being deleted later.
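
As an illustration only, such an identifier column could be added with a few lines of geopandas; the file and column names below are hypothetical (iota2 uses its own internal constant name).

    import geopandas as gpd

    # Hypothetical file and column names; iota2 uses its own constant column name.
    segments = gpd.read_file("T31TCJ_segmentation.shp")
    segments["i2_seg_id"] = range(1, len(segments) + 1)  # one distinct identifier per segment
    segments.to_file("T31TCJ_segmentation_with_id.shp")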

iota2 provided segmentation

In this case, the chain uses the full time series and runs SLIC (Simple Linear Iterative Clustering) on each tile. At the end of the process, one segmentation per tile is provided as a raster and then vectorized. From there, the same data is available as in the previous case. The main difference appears at the end of the workflow, when producing the final classification.
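
iota2 runs SLIC through OTB on each tile time series. As a rough illustration of the superpixel principle only (not the code used by the chain), here is a sketch with the scikit-image SLIC implementation; the file name and parameter values are hypothetical.

    import rasterio
    from skimage.segmentation import slic

    # Hypothetical input: the stacked time series of one tile.
    with rasterio.open("T31TCJ_time_series.tif") as src:
        image = src.read()                    # (bands, rows, cols)
    image = image.transpose(1, 2, 0)          # (rows, cols, bands), as expected by scikit-image

    # Superpixel segmentation; n_segments and compactness are illustrative values.
    # channel_axis requires a recent scikit-image (>= 0.19).
    labels = slic(image, n_segments=50000, compactness=10, channel_axis=-1)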

Handle eco-climatic regions

Eco-climatic or region-of-interest files are very important in the OBIA workflow: they impact the learning step by grouping samples by similarity. Unlike pixel classification, samples are not split at region boundaries; instead, the whole object is associated with a region. This makes it possible to create much more flexible boundaries between regions.

It is then necessary to find the intersection between segments and regions. To this end, the following workflow is applied:

  1. Compute the intersections between the tile common mask and the region shapefile

  2. Compute the intersections between the region grid and the segmentation

  3. Keep only one intersection between a segment and a region, i.e. each segment is assigned to a single region.

If no region file is provided, all segments are assigned to the same region.
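
A minimal sketch of this intersection logic, assuming geopandas and hypothetical file and column names:

    import geopandas as gpd

    # Hypothetical inputs: the tile segmentation and the region grid restricted to the tile.
    segments = gpd.read_file("T31TCJ_segmentation_with_id.shp")
    regions = gpd.read_file("T31TCJ_region_grid.shp")

    # Intersect the segmentation with the region grid.
    inter = gpd.overlay(segments, regions, how="intersection")

    # Keep only one (segment, region) pair per segment: the first intersection found.
    inter = inter.drop_duplicates(subset="i2_seg_id", keep="first")
    inter.to_file("T31TCJ_segments_regions.shp")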

Compute tile envelope

The tile envelopes provided by iota2 are computed using tile priority and validity masks. In pixel classification, they indicate which tile each pixel of the final classification comes from. An envelope is not necessarily square: it can be a multipolygon with very small areas around the central scene of the tile.

The tile envelopes will be used in the last steps to reassemble the classification.

Learning steps

Split into learning and validation

In a classification process, splitting polygons between learning and validation sets is a common step. At the end of this process, the reference data is split over each tile and region using the tile envelopes. This process is the same as in pixel classification.

Find intersection between segmentation and reference data

The learning samples are already split by tile and region; each one must now be associated with a segment. To this end, the region field of the learning samples is removed and only the region assigned to the segment is used.

In case of duplicates:

  • if a learning polygon covers more than one segment, it is split over all intersecting segments

  • if a segment intersects more than one learning polygon, it is removed, as there is no way to choose which class is the right one.

Two modes are available at this point:

  1. clip learning samples and segments: only the common part of both is kept

  2. keep the entire segment which intersects the learning samples

Use the parameter full_learn_segment to manage these options.
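
A sketch of the two modes, assuming geopandas and hypothetical file and column names:

    import geopandas as gpd

    segments = gpd.read_file("T31TCJ_segments_regions.shp")    # hypothetical paths
    samples = gpd.read_file("T31TCJ_learning_samples.shp")

    # full_learn_segment disabled: keep only the common part of segments and samples.
    clipped = gpd.overlay(segments, samples, how="intersection")

    # full_learn_segment enabled: keep the whole segment intersecting a learning polygon
    # (the predicate keyword requires geopandas >= 0.10).
    whole = gpd.sjoin(segments, samples, how="inner", predicate="intersects")

    # In both modes, a segment intersecting several learning polygons is ambiguous and removed.
    counts = whole["i2_seg_id"].value_counts()
    whole = whole[whole["i2_seg_id"].isin(counts[counts == 1].index)]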

Computing zonal statistics

Once the learning data is processed, the zonal statistics can be computed. The OTB ZonalStatistics application is used to provide five statistics (a sketch of the call is given after the list):

  • mean

  • min

  • max

  • std

  • count
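
A minimal sketch of a zonal statistics call through the OTB Python bindings; the parameter keys below are assumptions to check against the ZonalStatistics application documentation, and the file names are hypothetical.

    import otbApplication

    app = otbApplication.Registry.CreateApplication("ZonalStatistics")
    app.SetParameterString("in", "T31TCJ_time_series.tif")          # stacked time series
    app.SetParameterString("inzone.mode", "vector")                 # zones given as a vector file
    app.SetParameterString("inzone.vector.in", "grid_cell_01.shp")  # one cell of the statistics grid
    app.SetParameterString("out.vector.filename", "grid_cell_01_stats.shp")
    app.ExecuteAndWriteOutput()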

Two issues arise:

  1. the number of features that will be produced

  2. the RAM required to compute the zonal statistics over an entire tile

For instance, a full-year Sentinel-2 time series counts 468 features, so the zonal statistics can produce 2340 features (468 features x 5 statistics). Some vector formats are limited: an SQLite file (the format most used in iota2) cannot have more than 2000 columns, and shapefiles are limited to 2 GB.

To solve these two issues, the ESRI shapefile format is used and a new grid is computed over the tile, per region. Zonal statistics are then produced over each cell of the grid, and each processing provides one samples file for the learning step.

Note about the grid used

The grid cell size must be set by the user depending on their needs and resources, using the buffer_size parameter. By default, the grid uses 20 km x 20 km cells. Tests have been made using the shapefile format with three statistics and a 468-band Sentinel-2 time series: in this configuration, at least 80 GB of RAM are required. Reducing the buffer size decreases the RAM requirement but increases the processing time.
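
As an illustration of the grid principle only (this is not the iota2 code), a regular grid of square cells can be built from a tile extent with shapely:

    import numpy as np
    from shapely.geometry import box

    def make_grid(xmin, ymin, xmax, ymax, cell_size=20000.0):
        """Regular grid of square cells over a tile extent (units of the projection, e.g. metres)."""
        cells = []
        for x in np.arange(xmin, xmax, cell_size):
            for y in np.arange(ymin, ymax, cell_size):
                cells.append(box(x, y, min(x + cell_size, xmax), min(y + cell_size, ymax)))
        return cells

    # Hypothetical extent of a ~110 km wide Sentinel-2 tile, split into 20 km cells.
    grid = make_grid(300000.0, 4790220.0, 409800.0, 4900020.0)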

Learn one model by region

Once zonal statistics are computed, a model is produced for each region.

The only caveat in this step is the total number of samples files for a given region, which is directly related to the grid defined in the previous step. The OTB TrainVectorClassifier application requires that all features (column names) are present in every file, while the samples themselves can be split over several files. Again, the limitation is the total RAM required to learn the model.
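
A sketch of the learning call for one region through the OTB Python bindings; the parameter keys are assumptions to check against the TrainVectorClassifier documentation, and the file, feature and field names are hypothetical.

    import glob
    import otbApplication

    samples = sorted(glob.glob("learningSamples/*region_1_seed_0*.shp"))  # sample files of one region
    features = ["meanb0", "meanb1", "stdb0"]                              # hypothetical feature columns

    app = otbApplication.Registry.CreateApplication("TrainVectorClassifier")
    app.SetParameterStringList("io.vd", samples)    # several sample files sharing the same columns
    app.SetParameterStringList("feat", features)    # features used for learning
    app.SetParameterStringList("cfield", ["code"])  # class label field (hypothetical name)
    app.SetParameterString("classifier", "rf")      # random forest
    app.SetParameterString("io.out", "model_region_1.rf")
    app.ExecuteAndWriteOutput()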

Classification step

Splitting tiles between regions

As in the learning step, an entire tile cannot be handled directly in RAM, and only part of the tile is covered by each region.

A grid is designed here too. Unlike the learning step, the grid cells can be larger, as only one file is processed at a time.

Classify

The classification step is composed of six parts (a sketch of the classification call itself is given after the list):

  • design the grid by region and write the corresponding shapefiles

  • rasterize the shapefile to speed up the zonal statistics computation

  • compute the zonal statistics (which produces an XML file)

  • join the statistics and the geometries provided by the shapefile

  • classify the shapefile containing the statistics

  • keep only the columns of interest for the final product: original segmentation identifier, geometry, predicted class and confidence
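
For the classification part itself, here is a sketch assuming the OTB VectorClassifier application; the parameter keys are assumptions to check against its documentation, and the file, feature and field names are hypothetical.

    import otbApplication

    app = otbApplication.Registry.CreateApplication("VectorClassifier")
    app.SetParameterString("in", "grid_cell_01_stats.shp")             # statistics joined to geometries
    app.SetParameterString("model", "model_region_1.rf")               # model learned for this region
    app.SetParameterStringList("feat", ["meanb0", "meanb1", "stdb0"])  # same features as for learning
    app.SetParameterString("cfield", "predicted")                      # output field holding the class
    app.SetParameterString("out", "grid_cell_01_classified.shp")
    app.ExecuteAndWriteOutput()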

Outputs production

Produce the final land cover map

Using all the vector files produced by the classification step, two cases are distinguished:

  1. User provided segmentation

  2. iota2 provided segmentation

User provided segmentation

Warning

This is not yet implemented. It could improve the visual quality of the output map.

In this case, the first step is to merge all shapefiles into a dataframe. It is then possible to associate the label and confidence with the initial geometry. To choose which segment is kept among the duplicates (in the overlap area between tiles), the envelope is used. As for regions, the first intersection found is kept.

iota2 provided segmentation

Note

For now, this option is used in all cases.

In this case, the main difficulty is to ensure the coherence of the geometries between tiles. Without an efficient way to generate a segmentation across multiple tiles without merging the images beforehand, there is no way to provide a uniform segmentation.

The proposed solution is to clip each tile according to its tile envelope. Along each envelope boundary there is oversegmentation, with the risk that the object on one side has a different label than the one on the other side.
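
A minimal sketch of this clipping, assuming geopandas and hypothetical file names:

    import geopandas as gpd

    classif = gpd.read_file("T31TCJ_obia_classification.shp")  # per-tile OBIA classification
    envelope = gpd.read_file("T31TCJ_envelope.shp")            # tile envelope produced by iota2

    # Keep only the part of each segment falling inside the tile envelope.
    clipped = gpd.clip(classif, envelope)
    clipped.to_file("T31TCJ_obia_classification_clipped.shp")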

Validation

To validate the produced map, the input reference data was split into learning and validation datasets. As for learning, the validation polygons are clipped according to the segmentation, with the same rules: if a validation polygon intersects at least two segments, it is clipped; if a segment intersects at least two validation polygons, those validation polygons are removed from the dataset.

This could be improved by checking whether the validation polygons intersecting a segment actually have different class labels.

Once the validation data has been processed, one polygon corresponds to one validation sample. By counting and comparing the predictions with the reference data, a confusion matrix can be written. From the confusion matrix, the standard classification metrics are computed: kappa, overall accuracy, F1-score, precision and recall.
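
For reference, these metrics can be derived from the confusion matrix as follows; the matrix values are purely illustrative.

    import numpy as np

    # Illustrative 3-class confusion matrix: rows = reference, columns = prediction.
    cm = np.array([[50, 2, 3],
                   [4, 60, 6],
                   [1, 5, 70]])

    total = cm.sum()                # total number of validation samples
    oa = np.trace(cm) / total       # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)

    precision = np.diag(cm) / cm.sum(axis=0)  # per-class precision
    recall = np.diag(cm) / cm.sum(axis=1)     # per-class recall
    fscore = 2 * precision * recall / (precision + recall)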

As OBIA is an object-based approach, visual validation is important too. The geometry of the final classification depends on the input segmentation; in the case of SLIC, superpixels are used. A possible post-processing step could be to aggregate adjacent superpixels with the same label to smooth the map.

Known issues

Learning and Validation samples number

In OBIA, the number of learning and validation polygons can vary. This comes from the intersection between the segmentation and the reference data. To know the exact number of samples used for learning, look at the files in the learningSamples folder: the number of samples in each file, for a given seed, must be summed to get the total.

For validation, the total number of samples is obtained by summing all the elements of the confusion matrix.

Holes in map

This issue can appear if the segmentation does not respect the ratio between object size and pixel size. In the OTB ZonalStatistics application, if a segment has an area lower than 2 * pixel area, the application returns NaN or does not return statistics for this segment.
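
A quick way to spot the segments concerned, assuming geopandas and hypothetical names:

    import geopandas as gpd

    segments = gpd.read_file("T31TCJ_segmentation_with_id.shp")  # hypothetical path
    pixel_area = 10 * 10                                         # e.g. Sentinel-2 10 m pixels

    # Segments smaller than twice the pixel area may get NaN or missing statistics.
    too_small = segments[segments.geometry.area < 2 * pixel_area]
    print(f"{len(too_small)} segments below the 2-pixel area threshold")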

OTB error

A std::bad_alloc error can sometimes happen. Relaunching the chain with the restart option seems to be sufficient to get past this error.