Aclima to scale intelligent air pollution mapping platform with $24M funding
Remember Aclima, the tech startup helping Google map air pollution in places like San Francisco and London? Well, it just raised a cool $24 million in Series A funding to expand its air quality monitoring program.
Aclima’s environmental monitoring sensors – fitted on the roofs of Google’s Street View cars and mounted on stationary objects like buildings and lampposts – have been gathering millions of hyperlocal air quality data points in various cities. These include measurements of pollutants such as carbon dioxide, particulate matter, carbon monoxide, methane, black carbon, ozone, and total volatile organic compounds.
Davida Herzl, founder and CEO of the California-based startup, explains that this funding will be used to scale up and advance both mobile and stationary mapping of hyperlocal air pollution and emissions in various cities. She says, “This funding will accelerate Aclima’s efforts to both fill a critical gap in air pollution and emissions data, and transform this new class of information into environmental intelligence that drives better decision-making for communities, cities and the enterprise.”
What has worked in Aclima’s favor is that the startup has found a way to reduce the cost of traditional air pollution measurement methods significantly. Its platform uses machine learning to deliver high-resolution emission maps with up to 100,000 times greater spatial resolution than conventional sources.
So, it’s not really surprising that Aclima has been rubbing shoulders with organizations like Google and the US Environmental Protection Agency (EPA). Climate risk management is a pressing issue for all governments, and having access to hyperlocal emissions data and pollution insights is a critical component for making informed decisions.
Last year, Aclima released the results of its year-long mobile mapping campaign in Oakland, California, which found that air pollution could vary from block to block along the same city street. This study, published in Environmental Science & Technology, highlighted the importance of identifying local pollution hotspots in order to understand their impact on human health and the environment.
How to Master Geospatial Analysis with Python
For a long time, there was no reference book covering the most-used geospatial Python libraries, nor one that marked the transition from Python 2 to 3 – an important milestone, since Python 3 is a major language update that fixed many issues under the hood. A book combining raster and vector data analysis in Jupyter Notebooks was also much needed, as notebooks have become the new standard for writing code and visualizing spatial data in an interactive web environment, rather than working in a code editor.
A new book, called “Mastering Geospatial Analysis with Python” (Packt Publishing), tries to fill this gap. Whereas other geospatial Python books usually cover only a small sample of libraries, or even a single type of application, this book takes a more holistic approach, covering a wide range of tools for interacting with geospatial data. It does this through short software tutorials that show how to use a dataset to solve everyday real-world data management, analysis and visualization problems.
A geospatial analyst toolkit
The book starts with an introduction to the most powerful Python libraries. One example is GDAL, whose read and write capabilities are used throughout the industry on a daily basis, whether as part of desktop software or as a standalone solution. Also included are newer, more Pythonic libraries built on top of GDAL, such as Rasterio, GeoPandas and Fiona. The book also covers brand-new geospatial libraries – Esri’s ArcGIS API for Python, Carto’s CARTOFrames and Mapbox’s MapboxGL-Jupyter – that haven’t been covered anywhere else yet. These libraries are examples of how geospatial companies are releasing APIs to interact with cloud-based infrastructure to store, visualize, analyze and edit geospatial data.
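To give a taste of what these “Pythonic” libraries feel like, here is a minimal sketch using GeoPandas, which layers pandas DataFrames over Shapely geometries. The station names and coordinates are invented for illustration; no file I/O is needed because the layer is built in memory:

```python
import geopandas as gpd
from shapely.geometry import Point

# Invented monitoring-station locations (simple planar coordinates)
stations = gpd.GeoDataFrame(
    {"station": ["A", "B", "C"]},
    geometry=[Point(0, 0), Point(2, 0), Point(0, 5)],
)

# A classic vector operation: buffer each point, then test for overlap
buffers = stations.geometry.buffer(1.5)
print(buffers.iloc[0].intersects(buffers.iloc[1]))  # True: A and B are 2 apart
print(buffers.iloc[0].intersects(buffers.iloc[2]))  # False: A and C are 5 apart
```

The same layer could just as easily have been read from a Shapefile or GeoJSON file with `gpd.read_file()`, which is where GDAL’s drivers do the heavy lifting under the hood.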
It’s no surprise that geospatial companies release these types of APIs: as Google’s and Amazon’s platforms show, spatial data only makes sense if you have the right platform and tools to manage it. This book lets you try out different geospatial platforms and APIs, so you can compare the capabilities of each one. Raster and vector data analysis is another important topic: both are still performed on a daily basis and should therefore be part of every geospatial analyst’s toolkit. Apart from spatial analysis, the book teaches you how to create a geospatial REST API, process data in the cloud and create a web mapping application.
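On the raster side, much of the daily analysis boils down to element-wise arithmetic on arrays, which is exactly how libraries like Rasterio expose raster bands. As a minimal sketch (the reflectance values below are made up), computing a vegetation index looks like this:

```python
import numpy as np

# Hypothetical reflectance values for two tiny 2x2 raster bands; in a real
# workflow these arrays would come from a Rasterio dataset's read() call.
red = np.array([[0.2, 0.3], [0.4, 0.1]])
nir = np.array([[0.6, 0.7], [0.8, 0.5]])

# NDVI = (NIR - red) / (NIR + red), a standard vegetation index
ndvi = (nir - red) / (nir + red)
print(ndvi[0, 0])  # 0.5 for this made-up cell
```

The result is itself an array with the same shape as the input bands, ready to be written back out as a new raster or visualized in a notebook.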
If you interact with spatial databases on a daily basis, Python has got you covered. With some scripting experience under your belt, you’ll learn how to manage databases such as PostGIS, SQL Server and SpatiaLite. It’s no surprise that PostGIS is gaining ground and is mentioned ever more often in job ads for geospatial analysts.
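To sketch the shape of that workflow without a running database server, here is a stdlib-only stand-in: the table, columns and bounding box are invented, and with SpatiaLite or PostGIS you would use real geometry columns and `ST_*` functions instead of raw longitude/latitude comparisons:

```python
import sqlite3

# In-memory SQLite database as a stand-in for a spatial database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stations (name TEXT, lon REAL, lat REAL)")
conn.executemany(
    "INSERT INTO stations VALUES (?, ?, ?)",
    [("A", -122.4, 37.8), ("B", -0.1, 51.5)],
)

# Crude bounding-box filter; PostGIS would express this with
# ST_Within(geom, ST_MakeEnvelope(...)) on an indexed geometry column
rows = conn.execute(
    "SELECT name FROM stations "
    "WHERE lon BETWEEN -123 AND -122 AND lat BETWEEN 37 AND 38"
).fetchall()
print(rows)  # [('A',)]
```

The point of a real spatial database is that such queries stay fast at scale, thanks to spatial indexes and geometry-aware operators.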
Finally, some last words about the future of geospatial analysis. The adoption of AI, machine learning and blockchain technology is already transforming geospatial technology. Expect even more data types, formats, standards and converging technologies in which geospatial as we know it will have its place. Only recently has geospatial technology begun to attract interest from other domains. This book hopes to bridge those domains, showing that Python is an excellent way to get into the geospatial field and discover its many great tools.