Enhancing Satellite Imagery Readability with Super-resolution Machine Learning Models
The satellite imagery landscape has been utterly transformed in recent years, with a surge in quality and accessibility that has made these images essential for various sectors—from agriculture to urban planning. Yet, even with the highest resolution images available, analysts often grapple with issues in interpretation, wishing for a few pixels more to work with.
AI has at last provided a solution: new super-resolution (SR) models, which could help take satellite imagery to the next level of clarity.
Super-resolution, although not technically enhancing the actual resolution of the data, has the potential to significantly improve the visual quality of satellite imagery, effectively creating High Definition (HD) images. In commercial settings, SR has been employed to refine images from 30cm to 15cm resolution or 50cm to 30cm resolution. Some providers have even enhanced low-resolution open-source data, increasing European Sentinel-2 10m data to an impressive 2.5m resolution. What’s more, this enhancement can be applied to all spectral bands.
Despite its name, super-resolution’s improvements are primarily aesthetic. By generating extra pixels, SR refines the edges of objects and artificially reconstructs details, enhancing the overall visual clarity and readability of the image. However, SR does not unveil any hidden data that wasn’t initially captured. For instance, if an object isn’t present in the original data, SR will not reveal it in the enhanced image. Consequently, while an SR image may contain more pixels than its low-resolution counterpart, the Ground Sampling Distance (GSD) remains unchanged.
Despite these limitations, the advent of super-resolution technology is unlocking numerous new possibilities and practical applications across a wide array of industries. As the technology continues to evolve, its potential to revolutionize satellite imagery and its applications will undoubtedly reach even greater heights.
How does it work, and how is it implemented in satellite imagery?
The concept of super-resolution is not unique to satellite imagery. It is now a fairly widely available machine-learning task used to upscale and improve the details within an image: creating a high-resolution output from a low-resolution input, usually for aesthetic purposes. By applying an algorithm to the low-resolution image, absent details are filled in, increasing the resolution while maintaining or improving visual quality. Of course, the higher the original input resolution, the better the result in the SR image. SR is particularly good at creating precise edges and enhancing long linear features.
AI super-resolution models use deep learning techniques, whereby algorithms are trained on large datasets containing pairs of high-resolution and corresponding low-resolution images. This process teaches the model to recognize which data is missing from an image, and therefore how to reconstruct it.
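As a minimal sketch of how such a training set is often built (the conventional approach of degrading high-resolution imagery; some providers, as noted below, train on pan-sharpened pairs instead), each high-resolution tile is downsampled to produce its low-resolution counterpart:

```python
import numpy as np

def make_training_pair(hr_tile: np.ndarray, factor: int = 4):
    """Degrade a high-resolution tile into a low-resolution input.

    hr_tile: (H, W, bands) array whose H and W are divisible by `factor`.
    Returns (lr, hr): the block-averaged low-res image and its target.
    """
    h, w, bands = hr_tile.shape
    lr = hr_tile.reshape(h // factor, factor, w // factor, factor, bands).mean(axis=(1, 3))
    return lr, hr_tile

# A 4-band 256x256 tile becomes a 64x64 input with the same bands;
# the model is then trained to map the small image back to the large one.
hr = np.random.rand(256, 256, 4).astype(np.float32)
lr, target = make_training_pair(hr, factor=4)
```

Training on many such pairs is what teaches the model which details typically go missing at the lower resolution.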
The most common architecture for these AI models is a type of neural network called a convolutional neural network (CNN), which is specifically designed for processing grid-like data, such as digital images. Using this method, images are processed through a stack of layers, including convolutional and pooling layers, which extract high-level features and textures, thereby enabling the model to recognize complex patterns.
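To illustrate the core operation of a convolutional layer (a hand-rolled NumPy sketch, not any production SR model), a small kernel slides across the image and responds wherever its pattern appears:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a small kernel over the image, producing one feature value
    per position: the basic operation of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly where values change left-to-right,
# which is how early layers pick out the edges SR later sharpens.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])
image = np.zeros((5, 5))
image[:, 3:] = 1.0  # a vertical edge between columns 2 and 3
response = conv2d(image, edge_kernel)
```

A real CNN learns the kernel values during training rather than using a fixed edge detector, and stacks many such layers with nonlinearities in between.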
The model is also trained to minimize the difference between the SR output and the true high-resolution image. This is done using a loss function (such as mean squared error or perceptual loss), which encourages the model to produce reconstructed images that are both aesthetically pleasing and accurate.
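To make the loss-function step concrete, here is a minimal sketch of the mean-squared-error objective (plain NumPy, illustrative figures; a real trainer would compute this over batches of image tensors):

```python
import numpy as np

def mse_loss(sr_output: np.ndarray, hr_truth: np.ndarray) -> float:
    """Mean squared error: the average squared per-pixel difference between
    the model's super-resolved output and the true high-resolution image."""
    return float(np.mean((sr_output - hr_truth) ** 2))

# A perfect reconstruction scores 0; during training, the model's weights
# are adjusted to push this number down.
truth = np.array([[0.2, 0.8], [0.5, 0.1]])
pred  = np.array([[0.3, 0.8], [0.5, 0.3]])
loss = mse_loss(pred, truth)  # (0.01 + 0 + 0 + 0.04) / 4 = 0.0125
```

Perceptual losses replace the raw pixel difference with a difference between learned feature representations, which tends to reward sharper, more natural-looking textures.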
Additional challenges arise when applying SR to satellite data. Traditional super-resolution algorithms have been developed on RGB images, but to work with satellite imagery they need to be trained on multispectral—or hyperspectral—data, which can have dozens of bands. Atmospheric conditions present additional hurdles to overcome and may be misinterpreted by an AI model.
Notable super-resolution approaches and products
Despite the challenges, there are already several notable products on the market that leverage SR technology, each of which employs cutting-edge machine learning techniques to provide analysts with enhanced detail and readability.
One such example is UP42’s CNN-based SR model, an algorithm which uses multispectral and pan-sharpened image pairs in training (instead of downscaled images). UP42’s model can quadruple the resolution of Pléiades and SPOT imagery, and is available on its marketplace.
Another example comes from the South Korean company Nara Space, which has developed a super-resolution algorithm that improves the readability of Pléiades Neo images from 30cm to 10cm. Airbus itself offers the Pléiades Neo HD15 product, with an enhanced resolution of 15cm.
Maxar’s HD technology has also had significant success, producing 15cm SR images which consistently rate at 6–7 on the National Imagery Interpretability Rating Scale (NIIRS). (NIIRS is a subjective scale used by image analysts to rank the quality of aerial imagery from 0–9, where, for example, basic land use is intelligible at level 1, and a car licence plate can be read at level 8.)
Benefits and applications of super-resolution models in satellite imagery
SR models in satellite imagery offer opportunities across a multitude of applications, including mapping, monitoring, feature identification and analytics. By enhancing the visual experience and reducing pixelation, these models provide analysts with actionable information, allowing them to discern smaller features on the ground, and thereby contribute to better decision-making.
In urban planning, SR enables more precise identification of features, and reduces error rates dramatically in identifying smaller objects like lampposts, solar panels, road signs and vehicles. Enhanced images of infrastructure conditions may prove advantageous for asset monitoring and disaster management. SR models not only help with counting features, but can also enhance the textural information of vegetation and tree canopies, which has applications for agriculture and climate monitoring.
It’s also worth mentioning that SR has benefits for both new and archive imagery. For a start, it can help to increase the global supply of 30cm imagery by improving low-resolution archive data to match newly-collected 30cm native data. Tasking the highest-resolution satellites, meanwhile, is often prohibitively expensive, and SR models can provide a cost-effective alternative for a wide range of applications. In fact, the readability of satellite data when enhanced to 15cm becomes very close to 10cm aerial data. In some use cases, SR satellite imagery data could replace aerial data—at a fraction of the cost.
One particularly significant advantage of SR models is their ability to improve the accuracy of other AI models applied to satellite data. For example, early tests by Maxar showed that machine-learning models detecting cars had their accuracy improved by 15–40% when working with images enhanced by SR. In some cases, SR imagery is even more interpretable than native 30cm data, particularly for linear features and for reconstructing markings on the ground or on vehicles.
Super-resolution: overcoming challenges to provide more reliable geospatial data
While super-resolution models offer a range of advantages, implementing them in satellite imagery is not without challenges. Aside from the primary concern of the model’s accuracy, other technical limitations can also pose significant challenges, as applying an SR machine-learning model to satellite data can require a huge amount of processing power and data storage. Ethical and legal considerations, including privacy concerns and data usage restrictions, must also be addressed, while integrating super-resolution models with existing workflows and systems can prove complex and time-consuming. All of these challenges could potentially impede the adoption of SR in certain applications; however, with major satellite companies like Maxar and marketplaces like UP42 all striving to overcome them, super-resolution satellite imagery is becoming a reality.
Super-resolution machine learning models offer an innovative solution to enhance satellite imagery readability for both human analysts and AIs, providing higher quality data for various applications. Their potential benefits in terms of improved decision-making and AI performance make them a hugely valuable addition to the analyst’s toolbox—and potentially a cost-effective alternative to native 30cm, or even 10cm aerial data, for many customers. As technology continues to advance, we can expect further improvements in super-resolution techniques and their applications in satellite imagery, paving the way for even more reliable geospatial data.
Can Earth Observation Technology Help to Restore Trust in Carbon Offsetting?
With the world’s major economies still heavily reliant on fossil fuels, and the net zero targets set by the Paris Agreement of 2015 still a pipe dream, our planet is on the brink of devastating climate change. Despite the concerted efforts of global governments and grass-roots activists, most companies are failing to decarbonise at the necessary rate, and many of them will still be producing significant volumes of greenhouse gases by 2050.
One solution which has increasingly been used by companies to compensate for this lack of progress is carbon offsetting—an appealing solution by which companies can plot a route to net zero not by reducing their own footprint, but through funding projects that sequester carbon or reduce overall global emissions.
However, in recent years, trust in the carbon offsetting market has been massively eroded, with details emerging about bogus projects or exaggerated claims. Is there a way to ensure that carbon offsetting projects are really doing what they claim to do, or is the whole system fundamentally flawed? Satellites may provide an answer…
Carbon offsetting and the carbon market
First, a quick summary of the carbon market, which has two main elements. The first is the regulatory carbon permit system, whereby large-scale polluters like power plants, factories, and other industrial facilities are financially incentivised to reduce their carbon emissions. The other is carbon offsetting—which is often voluntary—in which polluters compensate for their emissions, rather than reducing them.
The carbon market operates on a system known as ‘cap and trade’. Through cap-and-trade, governments set a cap on the level of emissions permissible by large polluters, and this cap is divided into a number of carbon permits—effectively forming an allowance to emit a specific amount of greenhouse gases (GHGs). Permits are priced per metric tonne of CO2, and can be bought directly from the government, or traded between companies, so that those over their limit can buy from those with a surplus. Each year, the cap on total emissions gets lower, and individual permits get more expensive. The world’s largest and longest-running cap-and-trade system is the European Union Emissions Trading System (EU ETS), which started in 2005.
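The trading mechanics can be sketched with a toy settlement calculation (the figures are purely illustrative, not actual EU ETS prices or allowances):

```python
def settle(emissions_t: float, permits_t: float, price_per_t: float) -> float:
    """Cost (or revenue) of squaring a company's permit position.

    Positive result: the firm emitted more than its permits cover and
    must buy the shortfall on the market.
    Negative result: the firm holds surplus permits it can sell.
    """
    shortfall = emissions_t - permits_t
    return shortfall * price_per_t

# Illustrative only: a firm holding permits for 90,000 tonnes that emits
# 100,000 tonnes must buy 10,000 tonnes of permits; at 80 per tonne
# that costs 800,000.
cost = settle(100_000, 90_000, 80.0)
```

As the cap tightens each year, permits become scarcer and the price per tonne rises, strengthening the financial incentive to cut emissions at source.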
Carbon offsetting is another way for companies to contribute to a global reduction in emissions, but instead of directly reducing their own, they compensate for them through investing in projects that reduce or sequester GHGs. Carbon offsetting can be part of the carbon market if the relevant regulatory system permits the use of ‘carbon credits’ as a means to offset excess emissions. In these cases, one carbon credit is treated as equivalent to one tonne of CO2 under the cap-and-trade system.
How does carbon offsetting work?
The story of carbon offsetting began in 1989, when the company Applied Energy Services financed an agroforestry project in Guatemala to ‘offset’ the building of a coal-fired power station. Today, businesses can buy carbon credits from projects involving renewable energy generation, reforestation or afforestation (establishing a forest on land not previously forested), carbon capture and storage, and more. Carbon offsetting by businesses is voluntary—and they often do it to meet their own sustainability targets or to raise their environmental credentials—but it has also been common for offsetting credits to be permitted in cap-and-trade systems, subject to varying rules.
Unfortunately, businesses have tended to hide behind carbon credits while making little effort to reduce actual emissions, leading to understandable accusations of greenwashing and an overall loss of trust in the concept of offsetting. Even more alarmingly, carbon offsetting projects have often been misleading and failed to deliver on their promises. Earlier this year, a stinging report into the carbon standards organization Verra, issuer of the Verified Carbon Standard (VCS), indicated that as little as 10% of its offsetting projects produce the emissions reductions they claim—while some projects are entirely fraudulent, producing no reduction at all.
In this context, it’s not surprising that the EU ETS has not permitted the use of international offset credits in its carbon market since 2020. However, as part of the agreement signed at COP26, polluters will be able to continue offsetting emissions, subject to certain criteria, and predictions are that this market could be worth $200 billion by 2050.
Whether it’s part of a cap-and-trade system or the voluntary market, it is crucial to ensure carbon offsetting projects are effective and genuine. As such, projects are required to adhere to principles of Additionality (must lead to a new and measurable reduction in emissions that would not have otherwise occurred), Permanence (must have long-term durability) and Verification (by an independent and accredited organization). Aside from the now-discredited VCS, independent standards include the Gold Standard, and the Climate Community and Biodiversity Standards (CCBS).
It is vital that carbon offsetting projects adhere to these principles—not just to ensure they contribute to the overall goal of reducing greenhouse gas emissions, but also to increase credibility and rebuild trust.
Satellite data monitoring offsetting
Many reforestation or carbon sequestering projects are located in remote or hard-to-reach areas, which presents a challenge for measuring their success. Here, satellites have significant advantages over aerial or drone monitoring, not to mention conventional in-person measurement, all of which are costly and time-consuming. Using SAR and multispectral data combined with vegetation indices, analysts can estimate biomass and carbon stocks with a high degree of accuracy. Frequent revisits can confirm project permanence, and high-resolution imagery enables analysts to view activity in both the project area and control areas, to ensure additionality.
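As an illustration of the vegetation-index step, the widely used NDVI is computed from the red and near-infrared bands of multispectral imagery (a minimal NumPy sketch; real biomass models combine such indices with SAR backscatter and field calibration):

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense, healthy vegetation; bare soil and
    water sit near or below 0."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-9, None)  # avoid divide-by-zero

# Dense canopy reflects strongly in near-infrared and absorbs red light,
# so a healthy forest pixel scores high.
canopy = ndvi(np.array([0.05]), np.array([0.45]))  # -> [0.8]
```

Tracking such an index over repeated satellite passes is one way analysts confirm that a reforestation project’s canopy is actually growing.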
There are now several companies turning to satellite data to verify the success of carbon offset projects. One such company is Pachama, which uses three types of satellite imagery—optical-infrared, radar, and lidar—in combination with artificial intelligence to monitor and verify the effectiveness of forest-based projects. They also provide a platform for businesses to discover and invest in high-quality carbon offset projects that meet strict additionality, permanence, and verification criteria.
British company Sylvera specializes in providing independent, data-driven assessments of carbon offset forestry projects using satellite data and advanced analytics. Sylvera’s platform offers project ratings that help businesses identify high-impact projects. Tel Aviv-based Albo Climate is another company using machine-learning algorithms and multispectral data to measure ‘above ground biomass’ (AGB) carbon stocks.
One of the most interesting companies in this sector is CarbonStack, increasing transparency through innovative use of two technologies. The company supports afforestation projects, using blockchain technology to make a publicly accessible ledger of activities, and also utilising satellite observation for forest monitoring. CarbonStack has partnered with UP42 to gather this high-resolution satellite data, using imagery from the Pléiades Neo constellation, whose 30cm resolution enables the company to identify individual trees. The relationship enabled CarbonStack to monitor 50,000 trees planted across Europe in 2022—and provided significant time and cost savings when compared with drone or aerial photography.
Monitoring by satellite has another big benefit. As high-resolution satellites can measure carbon sequestration in areas as small as 10m², they present an opportunity for small landowners to take part—and share some of that $200 billion market. This means that offsetting can benefit everyone, not just the multinational corporations or NGOs.
Satellite data = transparency = trust
It’s probably an uncomfortable truth, but one we should acknowledge, that for many industries, fully decarbonising may never be possible. Therefore, offsetting GHG emissions—authentically and verifiably—will have to be part of any net zero solution. To do this, we need transparency and trust.
By using remote sensing data and other technologies to assess, evaluate, and authenticate projects, companies like those above are generating a higher degree of transparency in carbon offsetting projects, and thereby going some way to restoring trust in the sector. This, in turn, makes it a more reliable tool for addressing climate change.
Satellite imagery has a rich history when it comes to protecting our planet—it was of course the iconic ‘Earthrise’ photo of Earth rising over the moon’s horizon, taken during the Apollo 8 mission in 1968, which served as a catalyst for the environmental movement. The trend looks set to continue: by ensuring the authenticity of carbon offsetting projects, satellites can play a major role in our transition towards net zero.