The patient is, at least, under observation. More than 1,000 surveillance satellites are orbiting the Earth. Many of them are small, but around 100 are large, and most of those take pictures. This monitoring is desperately necessary, above all for a better understanding of the effects of climate change. In the meantime, however, researchers, authorities, and organizations can barely keep on top of the flood of images.
“Just looking at the Sentinel satellite missions of the European Space Agency (ESA), we are talking about 12 terabytes of data being transmitted to Earth every day,” says Professor Begüm Demir, Head of the Remote Sensing Image Analysis Group at TU Berlin. Five years ago, funded by the European Research Council, Demir began developing an analysis and information system to process the extensive image data from the “Sentinel-1” and “Sentinel-2” satellites. This work also led to the reference image database “BigEarthNet” and, as a follow-up project, the search engine “EarthQube.” The latter enables, for the first time, a reverse search on satellite images: a user can, for example, upload a satellite image of an area of burned forest as a search query, and “EarthQube” will return images of other burned areas on Earth that contain similar spatial and spectral information.
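A reverse search of this kind can be thought of as nearest-neighbor retrieval: each archived image is summarized as a feature vector, and the query image is matched against the archive by similarity. The following Python sketch illustrates only the principle; the tile IDs, feature dimensions, and cosine-similarity measure are illustrative assumptions, not EarthQube’s actual implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reverse_search(query_features, archive, top_k=3):
    """Return the IDs of the top_k archive images whose feature
    vectors are most similar to the query's."""
    scored = [(image_id, cosine_similarity(query_features, feats))
              for image_id, feats in archive.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [image_id for image_id, _ in scored[:top_k]]

# Toy archive: image ID -> hypothetical spectral/spatial feature vector.
rng = np.random.default_rng(0)
archive = {f"tile_{i}": rng.random(8) for i in range(100)}

# Query with a slightly perturbed copy of one archived tile: the most
# similar result should be that tile itself.
query = archive["tile_42"] + 0.01 * rng.random(8)
print(reverse_search(query, archive))
```

In a real system the feature vectors would come from a trained neural network rather than a random generator, and the linear scan over the archive would be replaced by an indexed approximate-nearest-neighbor search.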
The images of “Sentinel-1” are radar images, whereas those of “Sentinel-2” are taken with visible and infrared light. “EarthQube” can also search images for specific features: “Mines, industrial sites, garbage dumps, vineyards, fields, swamps, forests, and more,” says Demir, whose research group is based at TU Berlin’s Berlin Institute for the Foundations of Learning and Data (BIFOLD), where researchers work on big data management and machine learning. “Of course, Artificial Intelligence – AI for short – is behind these applications. To recognize objects, AI models must first be trained on many tens of thousands of training images,” explains Begüm Demir. For this purpose, each pixel must be assigned to an object class. “Doing this manually would be time-consuming and complex. That is why, in one project, we focused on so-called explainable AI.”
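The pixel-level annotation Demir describes can be pictured as a label mask paired with each training image: a grid of the same height and width as the image, with one class index per pixel. A minimal Python sketch, in which the class names and the toy 4×4 tile are invented for illustration:

```python
import numpy as np

# Hypothetical class indices for a few land-cover categories.
CLASSES = {0: "background", 1: "forest", 2: "vineyard", 3: "swamp"}

# A training sample pairs an image with a per-pixel label mask of the
# same height and width: every pixel gets exactly one class index.
image = np.random.default_rng(1).random((4, 4, 3))   # toy 4x4 RGB tile
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 2],
                 [3, 3, 2, 2],
                 [3, 3, 2, 2]])

assert image.shape[:2] == mask.shape

# Count labeled pixels per class, e.g. to check class balance.
counts = {CLASSES[c]: int((mask == c).sum()) for c in CLASSES}
print(counts)
```

Producing such masks by hand for tens of thousands of satellite tiles is exactly the bottleneck the explainable-AI approach described next is meant to avoid.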
The goal was to identify different tree species automatically. The pixels of the training images were not assigned by hand; instead, the AI was simply told, “This picture contains pine trees,” and its learning process was then observed. “This is only possible with explainable AI. Soon we were able to automatically determine how the pixels needed to be assigned to enable optimal recognition,” says Demir.
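One common way to turn such an image-level label into pixel assignments is to threshold a per-pixel attribution map produced by an explanation method (for example, class activation mapping). The sketch below illustrates that general idea under invented values; it is not the project’s actual pipeline, and the attribution map and threshold are assumptions.

```python
import numpy as np

def pseudo_label_pixels(attribution_map, threshold=0.5):
    """Turn a per-pixel attribution map (e.g. from an explanation
    method such as class activation mapping) into a binary mask for an
    image-level class like 'pine'. Illustrative sketch only."""
    # Normalize attributions to [0, 1], then threshold.
    amin, amax = attribution_map.min(), attribution_map.max()
    normalized = (attribution_map - amin) / (amax - amin + 1e-12)
    return normalized >= threshold

# Hypothetical attribution map: the model found the upper-left region
# most relevant to the image-level label "contains pine trees".
attr = np.array([[0.9, 0.8, 0.1],
                 [0.7, 0.6, 0.2],
                 [0.1, 0.2, 0.1]])
mask = pseudo_label_pixels(attr)
print(mask.astype(int))  # upper-left pixels marked as "pine"
```

The resulting pseudo-labels can then serve as training masks, replacing the manual pixel-by-pixel annotation described above.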
The method was applied as part of “TreeSatAI,” a cooperation between Begüm Demir and the Chair of Geoinformation in Environmental Planning at TU Berlin. “Recognition by self-learning algorithms is particularly complex for trees, because each tree can look very different depending on species, season, location, age, or vitality,” says Professor Birgit Kleinschmit, head of the chair and supervisor of the project. “And in the Lower Saxony forests that we used for classification, there are, after all, no fewer than 60 different species of trees.”
Images taken by airplanes were used in addition to those provided by “Sentinel-1” and “Sentinel-2,” and forest inventory data from the Lower Saxony State Forests were used for the precise mapping of the training images. “We also explored the potential of social media applications such as the recognition app ‘Pl@ntNet,’” Kleinschmit says. The result of this TU Berlin-internal cooperation: 150,000 annotated image sections as well as optimized AI algorithms, all of which are now freely available for use by government agencies and environmental service providers in the field of forest monitoring.