
Convolutional Neural Network to detect deforestation in the Amazon Rainforest

This project is part of my final work as an Aerospace Engineering student, and it comprises the development of a convolutional neural network (CNN) capable of classifying SAR images of deforestation in the Amazon Rainforest. The dataset used to train the CNN was created from imagery available on the European Space Agency (ESA) Copernicus portal.

Choosing the target area

The target area was the region inside the municipality of São Félix do Xingu, in the state of Pará, Brazil, and the sensing took place on July 5th, 2021. This municipality is particularly suitable for this project since it ranks first in cumulative forest degradation up to 2020, according to the National Institute for Space Research (INPE). Around 24% of São Félix's territory (more than 83 thousand square kilometers in total, larger than Austria) has already been deforested.

Collecting the dataset

Synthetic Aperture Radar (SAR) imaging is a remote sensing method that operates beyond the visible light spectrum, using microwaves to form the image. Radiation at these wavelengths is less susceptible to atmospheric interference than in the optical range, which makes it particularly fitting for monitoring the Amazon Rainforest, a considerably humid region prone to cloud formation during a great part of the year. The downside is that a SAR image is less intuitive for the human eye to interpret than an optical image.

In order to remove the look of a television tuned to a dead channel (the speckle noise typical of SAR imagery), it is necessary to pre-process the collected images. More details on this process will be available in the near future (when my work is published by the Universidade de Brasília library). For the time being, it suffices to say that the original image was turned into 27 new images like the one that follows:
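As an illustration only (the actual pre-processing chain is detailed in the thesis), a minimal despeckling sketch in Python could look like the following, assuming a simple median filter and a SAR band already loaded as a NumPy array:

```python
# Minimal sketch, not the thesis pipeline: a plain median filter is one common
# way to reduce the "dead channel" speckle of a SAR band before further processing.
import numpy as np
from scipy.ndimage import median_filter

def despeckle(sar_band: np.ndarray, window: int = 5) -> np.ndarray:
    """Replace each pixel by the median of its neighborhood to suppress speckle."""
    return median_filter(sar_band, size=window)
```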

Each of these new images was sliced into small chunks, resulting in about 1,800 samples that make up the dataset used to train the neural network described in the next sections.
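A minimal sketch of this slicing step, assuming the 27 pre-processed scenes are stored as ordinary image files and using an illustrative tile size of 64x64 pixels (the real chunk size used in the thesis may differ):

```python
# Sketch only: cut one pre-processed scene into square tiles (samples).
# File paths and the 64x64 tile size are illustrative assumptions.
import numpy as np
from PIL import Image

def slice_into_tiles(image_path: str, tile_size: int = 64) -> list[np.ndarray]:
    img = np.asarray(Image.open(image_path))
    tiles = []
    for y in range(0, img.shape[0] - tile_size + 1, tile_size):
        for x in range(0, img.shape[1] - tile_size + 1, tile_size):
            tiles.append(img[y:y + tile_size, x:x + tile_size])
    return tiles

# Example: build the sample set from all 27 scenes
# samples = [t for path in scene_paths for t in slice_into_tiles(path)]
```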

Labelling the samples

As the picture above shows, the false colors produced by the pre-processing step highlight the contrast between the areas where the forest is preserved and those with deforestation hotspots. Using high-resolution optical images of the same region as a complementary guide, every sample was manually labeled with one of these four categories:

  • totally preserved, when there is no trace of deforestation;
  • partially preserved, when there is some trace of deforestation, but the preserved area prevails;
  • partially deforested, when the deforested area is the prevailing feature;
  • totally deforested, when there is no trace of preserved area.

Later, in the CNN training step, it will become clear that this categorization was not optimal, to say the least.

Developing the convolutional neural network

A CNN is a deep neural network developed specifically for visual pattern recognition. This architecture is built from two kinds of hidden layers:

  • a convolutional layer (as the name suggests), which slides a small window (the filter) across the input image, computing a convolution (dot product) between the filter and each chunk of pixels inside the perception window;
  • a pooling layer, which takes the output of the convolutional layer and performs a dimensionality reduction, normally a mean over a window of a given size (2x2, 3x3, depending on the desired reduction).

These operations are well suited to finding patterns in an image while generalizing well enough to help prevent overfitting. The CNN developed for this work can be seen in the following picture:
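Since the figure is not reproduced here, the sketch below shows an illustrative Keras model with the two kinds of layers described above; the layer counts, filter sizes, and input shape are assumptions, not the exact architecture from the thesis:

```python
# Illustrative sketch of a small 4-class CNN; all sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(64, 64, 3), num_classes=4):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # convolutional layers slide a filter window over the image
        layers.Conv2D(32, (3, 3), activation="relu"),
        # pooling layers reduce the spatial dimensions
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```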

Training, testing and results

Using four labels to pre-classify the dataset used to train the CNN turned out to be a bad idea. The CNN architecture is good at finding common patterns in a set of pictures, as long as these patterns generalize well. Trying to differentiate between 'partially preserved' and 'partially deforested' proved unfruitful, with low accuracy (75%) after a few epochs and increasing overfitting with more epochs.

Thus, these two labels were merged, with considerably better results. Building on that, the merged label was then combined with 'totally deforested' as well, creating a binary scenario ('preserved', 'not preserved') with even better results (accuracy of about 90%). These results are shown in the following plots:
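A sketch of these label merges, assuming an illustrative integer encoding (0 to 3) for the four original categories:

```python
# Sketch of the label merges described above; the integer codes are assumptions.
# Original labels: 0 = totally preserved, 1 = partially preserved,
#                  2 = partially deforested, 3 = totally deforested.
import numpy as np

def merge_partials(labels):
    """First merge: fold 'partially deforested' into 'partially preserved'."""
    labels = np.asarray(labels)
    return np.where(labels == 2, 1, labels)

def merge_to_binary(labels):
    """Second merge: 'preserved' (0) vs. 'not preserved' (1)."""
    labels = np.asarray(labels)
    return np.where(labels == 0, 0, 1)
```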

About

My final undergraduate thesis in Aerospace Engineering, aimed at developing a convolutional neural network capable of classifying SAR images of the Amazon Rainforest.
