New high resolution mosaic from the Arctic and Antarctic
There is news from the sixth and ‘seventh’ continents: the Polar Geospatial Center (PGC) at the University of Minnesota has completed a high-resolution mosaic of the Arctic and Antarctic, showing the poles in unprecedented detail. Since the release of the Landsat Image Mosaic of Antarctica (LIMA) six years ago, which employed the panchromatic band at sub-pixel level to achieve a resolution of 15 m, nothing comparable had appeared; the latest product reaches a never-before-seen horizontal resolution of around half a meter, using imagery from commercial satellites such as WorldView-1, WorldView-2, QuickBird-2 and GeoEye-1.
One of the biggest challenges was coping with the sheer number of images, over 30,000 from different companies, which required correcting their orientation and automating the orthorectification in order to obtain a single mosaic. The images are served through web-based mapping applications of the National Science Foundation for viewing high-resolution commercial satellite imagery mosaics of the polar regions. After the release of the new bedrock topography of Antarctica, ‘bedmap2’, in spring this year, it is the second important remote sensing achievement within a short time.
According to PGC director Paul Morin, these images reveal every penguin colony and every glacier crevasse. PGC Imagery Viewers are accessible to any researchers who have US federal funding. The imagery is licensed for federal government use and is not available to the general public. Other requests involve specific requirements and are tied to a certain purpose and date. In general, the benefit goes to scientific and governmental activities.
The new imagery collection will obviously benefit sciences such as climate research and glacier studies. As the spatial resolution permits even the detection of penguin colonies, ecological research will also profit from the images, since, for instance, animal groups can be counted better than ever before. At this point I want to stress that knowledge extracted from research always implies an awareness of new (political) responsibilities. I am pointing to the soaring interest of adjoining states in Arctic oil reserves and the environmental problems linked to this topic. Therefore, the new level of information (PGC maps, bedmap2, …), which will be beneficial for different questions and parties, needs a strong discussion about its usability and the potential consequences of practical applications.
Processing “tons of data / big data” while you are sleeping: GDAL and Shell Scripting
Handling and processing big data is awesome, but it always raises questions about physical system capacity, data processing software, memory availability, efficient algorithms, processing time and so on. If any of these go wrong, oh dear, you are in trouble with big data.
Among geo-people it is very common to deal with satellite/raster/gridded data. Nowadays most freely available huge datasets come in grid format, and if it is a spatio-temporal dataset, oh boy, it could be terabytes of data with millions of records, and that is a real nightmare. For instance, GRIdded Binary (GRIB) data: basically a weather data format, very useful for any sort of research involving temperature, humidity, precipitation and so on. You can get the daily data for free at 0.5×0.5 degree spatial resolution for the whole world. So now you know where to get a huge amount of free data, but the question is: how do you process this data mass?
To make a long story short, let’s pick a question to answer: “How can we create a weekly long-term average (LTA) from 20 years of daily data, and what do we need to make the processing efficient?” The simplest way is a shell script that makes use of the GDAL library. The following are some steps and directions:
First, download the daily data using the GRIB API (maybe with a little shell scripting for naming and organizing the files for more convenience) for the time period and perimeter we are interested in. Then create a grid using the gdal_grid utility and produce GeoTIFF or any other format you are interested in (we can also do a point-wise calculation and grid it at the end).
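A minimal sketch of this step. The file names, the download URL and the band number are hypothetical; also note that since GRIB files are already gridded, gdal_translate can convert a band straight to GeoTIFF, which is a simpler route than gdal_grid (gdal_grid is needed when you start from scattered point data):

```shell
#!/bin/sh
# Sketch: fetch daily GRIB files and convert one band each to GeoTIFF.
# URL and naming scheme are hypothetical placeholders.
src_path=./tiff
mkdir -p "$src_path"

for day in 20130101 20130102; do
  grib="t2m_$day.grb"
  # wget -q "https://example.org/archive/$grib"   # hypothetical source
  [ -e "$grib" ] || continue   # skip days that were not downloaded
  # GRIB is already gridded: translate band 1 directly to GeoTIFF
  gdal_translate -b 1 -of GTiff "$grib" "$src_path/t2m_$day.tif"
done
```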
For convenience we can define three directories: src_path (where the GeoTIFFs are), stage_dir (the staging area where intermediate processing files are kept) and dest_path (where we want to find the output). Besides the bash script, I have used some GDAL utilities such as gdal_calc.py, gdalinfo and gdal_translate. Details are discussed below:
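The directory layout from the text can be set up once at the top of the script (the actual locations are of course up to you):

```shell
#!/bin/sh
# Directory layout used by the processing steps below.
src_path=./tiff       # input GeoTIFFs
stage_dir=./stage     # intermediate (staging) files
dest_path=./output    # final LTA rasters
mkdir -p "$src_path" "$stage_dir" "$dest_path"
```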
# It is always a question while processing raster data how to deal with NoData pixels. Here is a solution for assigning 255 as the NoDataValue to unwanted pixels:
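A hedged sketch of this masking step using gdal_calc.py. The validity test (treating values below -100 as invalid) is a hypothetical placeholder; substitute whatever defines “unwanted” in your data:

```shell
#!/bin/sh
# Sketch: replace unwanted pixels with 255 and record 255 as the
# NoDataValue in the output metadata. Threshold (-100) is hypothetical.
src_path=./tiff
stage_dir=./stage
mkdir -p "$stage_dir"

for f in "$src_path"/*.tif; do
  [ -e "$f" ] || continue          # no inputs yet: nothing to do
  base=$(basename "$f")
  # keep pixels >= -100 unchanged, set everything else to 255
  gdal_calc.py -A "$f" --outfile="$stage_dir/nd_$base" \
      --calc="255*(A<-100) + A*(A>=-100)" --NoDataValue=255
done
```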
# After we have created the summarized image, it is necessary to create a status bitmap image (SM) with 0 for unusable and 1 for usable pixel values; using this SM image we can exclude the unwanted pixels from the LTA estimation:
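One way to derive such a 0/1 bitmap with gdal_calc.py, assuming the NoDataValue of 255 from the previous step; the file names are hypothetical:

```shell
#!/bin/sh
# Sketch: build a status bitmap (SM): 1 = usable pixel, 0 = NoData (255).
stage_dir=./stage
summary="$stage_dir/tavg_sum.tif"   # hypothetical name of the summarized image

if [ -e "$summary" ]; then
  gdal_calc.py -A "$summary" --outfile="$stage_dir/sm.tif" \
      --calc="1*(A!=255)" --NoDataValue=0
fi
```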
# Finally, we use the summarized Tavg image and the SM image to create the long-term average (LTA) image and save it in the destination folder:
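A sketch of the final division. It assumes a summed Tavg raster and a per-pixel count raster built by summing the SM bitmaps over the years; both file names are hypothetical, and the sum raster is assumed to be floating point so the division is not truncated:

```shell
#!/bin/sh
# Sketch: LTA = sum of daily Tavg values / number of usable observations.
stage_dir=./stage
dest_path=./output
mkdir -p "$dest_path"

if [ -e "$stage_dir/tavg_sum.tif" ] && [ -e "$stage_dir/sm_count.tif" ]; then
  # (B==0) guards against division by zero where no usable pixels exist
  gdal_calc.py -A "$stage_dir/tavg_sum.tif" -B "$stage_dir/sm_count.tif" \
      --outfile="$dest_path/lta.tif" \
      --calc="A/(B+(B==0))" --NoDataValue=255
fi
```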
Here the scripting ends. Now you can take these scripts, introduce a global variable such as the week number (iw), and put everything in a loop. Run the script, then go to a party and sleep tension-free 🙂. Hopefully, within a few days you will find tons of data processed and stored in the destination folder.
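The weekly loop might look like the following; process_week is a hypothetical wrapper around the steps above, and the output naming is illustrative:

```shell
#!/bin/sh
# Sketch: run the per-week processing for all 52 weeks of the year.
iw=1
last=""
while [ "$iw" -le 52 ]; do
  week=$(printf '%02d' "$iw")       # zero-padded week number, e.g. 03
  last="lta_week$week.tif"
  # process_week "$iw"              # hypothetical wrapper around the steps above
  iw=$((iw + 1))
done
```

Starting it with `nohup ./run_lta.sh > lta.log 2>&1 &` keeps the job running after you log out, so the processing really does happen while you sleep.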