Sentinel Bot

I’ve been interested in the Sentinel satellite missions for a while, but it’s easy to feel distanced from them unless you’re actively working on them or using their products in some sort of project. As such, I decided I needed a stream of images to keep me interested, and so went about having images pulled down automatically.

On top of this, considering I’m quite fond of Twitter (as the only social media I actively use), I decided to try and have the best of both worlds, so others could share in the Sentinel image goodness.

Having thought about it enough, and having a day free on Saturday, I decided to get to it. I hooked up various parts into an image processing pipeline and sentinel_bot was born. The idea was to have a bot which automatically searches for relatively cloud-free images and produces a decent-quality image for direct upload to Twitter. It’s having some teething issues (color balance), but I’m tweaking it slightly to try and make sure the images are at least intelligible.
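For the curious, the pipeline boils down to something like the sketch below. This is a rough illustration rather than the bot’s actual code: it assumes the sentinelsat and tweepy Python libraries, the credentials and footprint are placeholders, and the band compositing and colour balancing step (where the teething issues live) is left out.

```python
import tweepy
from sentinelsat import SentinelAPI

# Search the Copernicus hub for recent, nearly cloud-free Sentinel-2 products
# over an area of interest (given as a WKT polygon; this one is a placeholder).
api = SentinelAPI('user', 'password', 'https://scihub.copernicus.eu/dhus')
footprint = 'POLYGON((-10 50, 2 50, 2 60, -10 60, -10 50))'
products = api.query(footprint,
                     date=('NOW-7DAYS', 'NOW'),
                     platformname='Sentinel-2',
                     cloudcoverpercentage=(0, 5))

# Take the least cloudy hit and pull it down.
best = min(products.values(), key=lambda p: p['cloudcoverpercentage'])
api.download(best['uuid'])

# ...band compositing, contrast stretching and colour balancing would happen
# here, producing preview.png; omitted for brevity...

# Post the rendered image to Twitter.
auth = tweepy.OAuthHandler('consumer_key', 'consumer_secret')
auth.set_access_token('access_token', 'access_secret')
tweepy.API(auth).update_with_media('preview.png', status=best['title'])
```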

At the minute it’s tweeting once every 40 minutes or so, but I’ll probably slow that down once it’s gotten a few hundred up.

In celebration, I’ve collated 10 interesting ones so far into an album below (click to enlarge). If you want to check it out, it’s at www.twitter.com/sentinel_bot

EGU Poster/PICO

I’ve been neglecting this blog somewhat, but have a glut of new posts on the horizon! For now, I’ve uploaded both the PICO (Presentation of Interactive Content, which I’ll be blogging about!) and the poster from my attendance at the European Geosciences Union’s General Assembly last week. They can be found on the respective session pages in which they were featured (search “Connor” to find me):

Unmanned Aerial Systems: Platforms, Sensors and Applications in the Geosciences (co-organized)

High Resolution Topography in the Geosciences: Methods and Applications (co-organized)

I’m hoping others involved in the sessions do likewise, as it would make a very interesting repository to look back on, and a good insight into the cutting edge for those who could not attend!

Mangrove canopy using photogrammetry

A good paper released today in the open-access journal Remote Sensing in Ecology and Conservation details the use of WorldView stereo pairs to estimate canopy height within mangroves in Mozambique. One of the factors making this more practical than in other forest types is the relative uniformity of the canopy height (“tree height saturates and remains relatively consistent in mature and intact forests”), as well as the fact that mangroves only occur at or around sea level, which allows confidence in the ground control readings as well as in the canopy heights themselves.

The introduction details the use of the NASA Ames Stereo Pipeline, which has been of some interest to me since coming across a very good description of an implementation of a semi-global matching (SGM) algorithm, written by an engineer (Zack) who has a colleague working on Ames. SGM is particularly relevant to SfM-MVS photogrammetry (see Hirschmüller’s papers for more detail!), and is also being incorporated into big software packages such as IMAGINE. Zack’s blog is pretty amazing in general and I recommend giving it some attention!
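As an aside, to give a flavour of why SGM is attractive: it approximates an expensive 2D smoothness optimisation by aggregating matching costs along 1D paths, with a small penalty (P1) for disparity changes of one and a larger penalty (P2) for bigger jumps. Below is a toy numpy sketch of the aggregation along a single left-to-right path (full SGM sums 8 or 16 such directions); this is my own illustration, not code from Ames or Hirschmüller.

```python
import numpy as np

def aggregate_left_to_right(cost, P1=10, P2=120):
    """Aggregate a matching-cost volume along one SGM path (left to right).

    cost: (H, W, D) array of pixelwise matching costs, e.g. absolute
    intensity differences between the left image and the disparity-shifted
    right image. Returns the path cost L_r for this single direction.
    """
    H, W, D = cost.shape
    L = cost.astype(np.float64)
    for x in range(1, W):
        prev = L[:, x - 1, :]                       # (H, D) costs at previous column
        prev_min = prev.min(axis=1, keepdims=True)  # best previous cost, any disparity
        # Candidate transitions: same disparity (free), +/- 1 disparity (P1),
        # or an arbitrary jump from the previous minimum (P2).
        down = np.full_like(prev, np.inf)
        up = np.full_like(prev, np.inf)
        down[:, 1:] = prev[:, :-1] + P1
        up[:, :-1] = prev[:, 1:] + P1
        best = np.minimum(np.minimum(prev, down), np.minimum(up, prev_min + P2))
        # Subtracting prev_min keeps values from growing unbounded along the path.
        L[:, x, :] = cost[:, x, :] + best - prev_min
    return L
```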

The paper itself lists some pretty great results, and some of the maps generated are beautiful! It serves as a proof of concept for modern photogrammetry, which has come on in leaps and bounds, and the potential for back-projecting and reprocessing data in this way is pretty exciting. I’ll be keeping my eye on the journal for future applications to environmental remote sensing!

Monitoring climate from space

Over the last few weeks I’ve participated in this MOOC, run by ESA, on monitoring climate from space; it’s a good introductory resource for anyone interested in Earth observation generally. The range of topics is broad and I certainly learned a thing or two, so if you’re a novice interested in the field I’d certainly recommend giving it a go!

https://www.futurelearn.com/courses/climate-from-space

GLAS – Spaceborne LiDAR

ICESat was a unique satellite launched in 2003 with the aim of providing accurate topographic information about the Earth’s surface over a number of years. One of its aims was the recovery of ice sheet mass balance and cloud property information, as detailed on NASA’s page here. Onboard this satellite was an instrument called GLAS, the Geoscience Laser Altimeter System, a spaceborne LiDAR I’d been meaning to look at for a long time. Last Sunday afternoon I decided to give it a look, and after a bit of tinkering with the files available here I produced a point cloud showing the Earth’s topography as seen by GLAS, collated for one month in 2003.

The files are structured so that the topographic information (the GLAH14 and GLAH15 files I used) is split up by individual day, with 14 orbits per file. I’ve presented one of the files below, as seen in CloudCompare: about 1.3 million points.
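If you fancy reproducing this, the core of what I did looks roughly like the sketch below. It assumes the HDF5 release of the GLAH14 files (the 40 Hz dataset paths shown are the ones I believe those files use, but they may vary between releases, and the filename is a placeholder) and writes a plain ASCII point cloud that CloudCompare opens directly.

```python
import h5py
import numpy as np

# Pull the 40 Hz geolocated elevations out of one GLAH14 granule.
with h5py.File('GLAH14_example.H5', 'r') as f:
    lat = f['Data_40HZ/Geolocation/d_lat'][:]
    lon = f['Data_40HZ/Geolocation/d_lon'][:]
    elev = f['Data_40HZ/Elevation_Surfaces/d_elev'][:]  # ellipsoidal height (m)

# Invalid shots carry a huge fill value; mask them out.
valid = elev < 1.0e10
cloud = np.column_stack([lon[valid], lat[valid], elev[valid]])

# CloudCompare reads plain 'x y z' ASCII files.
np.savetxt('glah14.xyz', cloud, fmt='%.6f')
```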

[Figure: GLAH14 data collected on the 4th of March 2003]

As you can see, a single day doesn’t recover a detailed scan of the Earth, so collating the whole month gives us a better idea of the extent of the mission.

[Figure: GLAH14/15 data for all of March 2003, ~64,000,000 points]

I really like this, as not only can you see the completeness of the data (I had to mask lots of points), but it gives you a really good idea of what a low Earth orbit looks like. ICESat orbited about 600 km above the Earth’s surface, and this is the pattern produced.

After removing duplicate points we can then compare the heights recovered from each laser pulse. For practicality’s sake I just used height above the WGS84 ellipsoid, which is not the same as the height above sea level normally quoted; this saved a bit of time, and was one of a couple of corners cut to produce the cloud. The result is a global model of surface topography for March 2003, shown below.
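(As an aside, for anyone unfamiliar with the distinction: the ellipsoidal height h and the more familiar orthometric height H, i.e. height above the geoid, roughly sea level, are related through the geoid undulation N as H ≈ h - N. N can amount to tens of metres depending on location, which is why the shortcut is worth flagging.)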

[Figure: global model of surface topography from GLAH14/15, March 2003]

The final product is pretty interesting and it was a really fun exercise to do. I’m also hosting the model on my webspace, viewable here.

Artificial texturing

For problems associated with photoscanning low-contrast objects, projecting light onto the scene is one clever way of ensuring a good match between stereo images. It’s nothing new, and remains a focus within computer vision research, as it can reduce the need for camera calibration in stereo setups such as the Bumblebee, which are used in industrial applications. Structured light setups have been suggested for low-resolution, blurry imagery such as that from underwater cameras, where solutions to the correspondence problem are shaky at best. I know of one PhD student who’s focusing on accurate camera calibration using these methods, but I wonder whether projecting an artificial texture onto a low-contrast object and using multi-view stereo with self-calibration could be equally useful. Structured light scanners do just this for two cameras with a laser beam, but I’m thinking about whether you could do it with a home projector or similar and keep the pattern constant throughout the images.
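As a concrete (if simplistic) example of the kind of artificial texture I mean, here’s a little sketch for generating a high-contrast random speckle pattern that could be thrown up with a home projector; the resolutions are just examples, to be matched to the projector.

```python
import numpy as np
from PIL import Image

# A coarse grid of random black/white dots; the projected speckle gives the
# stereo matcher unambiguous features to lock onto on a low-contrast surface.
dots = (np.random.rand(270, 480) > 0.5).astype(np.uint8) * 255

# Upscale with nearest-neighbour so each dot spans several projector pixels
# and survives slight defocus across the object.
Image.fromarray(dots).resize((1920, 1080), Image.NEAREST).save('speckle.png')
```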

The first figure of this web page (from 12 years ago!) exemplifies the idea, and can solve for camera positions/intrinsics using the laser. There are other good ideas along these lines in this University of Washington lecture slide set from 6 years ago; their researchers always seem to be ahead of the game!