Mangrove canopy using photogrammetry

A good paper released today in the open-access journal Remote Sensing in Ecology and Conservation details the use of WorldView stereo pairs to estimate canopy height in mangroves in Mozambique. One factor making this more practical than in other forest types is the relative uniformity of the canopy height (“tree height saturates and remains relatively consistent in mature and intact forests”), as well as the fact that mangroves occur only at or around sea level, which gives confidence in the ground control readings as well as in the canopy heights themselves.

The introduction details the use of the NASA Ames Stereo Pipeline, which has interested me since I came across a very good description of an implementation of a semi-global matching (SGM) algorithm, written by an engineer with a colleague working on the pipeline. SGM is particularly relevant to SfM-MVS photogrammetry (see Hirschmüller’s papers for more detail!), and is also being incorporated into big software packages such as IMAGINE. Zack’s blog is pretty amazing in general and I recommend giving it some attention!
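To give a flavour of what SGM actually does, here is my own minimal 1D sketch of its cost-aggregation recurrence (not the Ames Stereo Pipeline implementation; see Hirschmüller’s papers for the real multi-path, 2D version). A per-pixel penalty discourages disparity jumps, so a noisy pixel gets overruled by its neighbours:

```python
import numpy as np

def aggregate_path(cost, P1=1.0, P2=4.0):
    """Aggregate matching costs along one scanline (left to right),
    the core recurrence of semi-global matching.
    cost: (W, D) array of matching costs for D disparity candidates."""
    W, D = cost.shape
    L = np.zeros_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        min_prev = prev.min()
        # For each disparity: keep it, shift by +/-1 (penalty P1),
        # or jump from anywhere (penalty P2)
        options = np.full((4, D), np.inf)
        options[0] = prev                    # same disparity
        options[1, 1:] = prev[:-1] + P1      # disparity d-1
        options[2, :-1] = prev[1:] + P1      # disparity d+1
        options[3] = min_prev + P2           # any other disparity
        # Subtract min_prev so the accumulated cost stays bounded
        L[x] = cost[x] + options.min(axis=0) - min_prev
    return L

# Toy scanline: disparity 1 is cheapest everywhere except one noisy pixel
cost = np.ones((5, 3))
cost[:, 1] = 0.0
cost[2] = [1.0, 0.6, 0.0]        # noise makes disparity 2 look best here
L = aggregate_path(cost)
raw_pick = int(np.argmin(cost[2]))       # winner-take-all on raw costs
smoothed_pick = int(np.argmin(L[2]))     # after path aggregation
```

With the smoothness penalty applied, the aggregated cost at the noisy pixel favours disparity 1 again, even though the raw cost preferred disparity 2.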

The paper itself lists some pretty great results, and some of the maps generated are beautiful! It serves as a proof of concept for modern photogrammetry, which has come on in leaps and bounds, and the potential for back-projecting and reprocessing data in this way is pretty exciting. I’ll be keeping my eye on the journal for future applications to environmental remote sensing!


Monitoring climate from space

Over the last few weeks I’ve participated in this MOOC run by ESA on monitoring climate from space, which is a good introductory resource for anyone interested in Earth observation generally. The range of topics is broad and I certainly learned a thing or two, so if you’re a novice with an interest in the field I’d certainly recommend giving it a go!

https://www.futurelearn.com/courses/climate-from-space

GLAS – Spaceborne LiDAR

ICESat was a unique satellite launched in 2003 to provide accurate topographic information about the Earth’s surface over a number of years. Among its aims were the recovery of ice sheet mass balance and cloud property information, as detailed on NASA’s page here. Onboard was an instrument called GLAS, the Geoscience Laser Altimeter System, a spaceborne LiDAR I’ve been meaning to look at for a long time. Last Sunday afternoon I decided to give it a look, and after a bit of tinkering with the files available here I produced a point cloud showing the Earth’s topography as seen by GLAS, collated for one month in 2003.

The files are structured so that the topographic information (the GLAH14 and GLAH15 products I used) is split up by individual day, with 14 orbits per file. I’ve presented one of the files below, as seen in CloudCompare: about 1.3 million points.

GLAS_One

GLAH14 data collected on the 4th March 2003

As you can see, a single day doesn’t recover a detailed scan of the Earth, so collating the whole month gives a better idea of the extent of the mission.

GLAS_March

GLAH14/15 data for all of March 2003 – ~64,000,000 points

I really like this as not only can you see the completeness of the data (I had to mask lots of points), but it gives you a really good idea of what a low Earth orbit looks like. ICESat orbited about 600 km above the Earth’s surface, and this is the pattern produced.

After removing duplicate points we can compare the heights recovered from each laser pulse. For practicality’s sake I used height above the WGS84 ellipsoid, which is different from the height above sea level normally used; this saved a bit of time, one of a couple of corners cut to produce the cloud. The result is a global model of surface topography for March 2003, shown below.
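For anyone wanting to build a similar globe from the latitude, longitude, and ellipsoidal height in each record, the standard geodetic-to-ECEF conversion turns them into XYZ coordinates you can drop straight into CloudCompare. A minimal sketch with NumPy and the published WGS84 constants (the variable names are mine, not GLAS field names):

```python
import numpy as np

# WGS84 ellipsoid parameters
A = 6378137.0                 # semi-major axis (m)
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert latitude/longitude (degrees) and height above the WGS84
    ellipsoid (m) to Earth-centred, Earth-fixed XYZ coordinates."""
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + h) * np.sin(lat)
    return x, y, z
```

The functions are vectorised, so a whole month of footprints converts in one call before writing out a text or LAS cloud.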

GLAS_Ref

The final product is pretty interesting, and it was a really fun exercise to do. I’m also hosting the model on my webspace, viewable here.

ISS Photogrammetry

I saw Changchang Wu’s model of the Earth on YouTube, built by putting footage taken from the ISS through VisualSFM, so I thought I’d repeat the process using some newer footage (taken with a Nikon D4) over Europe. The image sequence (available here) starts somewhere over the English Channel and follows the spacecraft all the way south to the Red Sea and the Arabian Peninsula.

The ground sample distance for a 16-megapixel full-frame sensor with a 24 mm lens at a flight height of 400 km is about 120 m at nadir. The field of view across the width of the sensor is about 600 km.
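The arithmetic behind those numbers is a one-liner each: pixel pitch scaled by altitude over focal length for the GSD, and the sensor-to-focal-length ratio times altitude for the swath. A quick sketch (the 4928-pixel horizontal resolution is my assumption for the D4’s 16 MP sensor):

```python
# Back-of-envelope GSD for a full-frame camera on the ISS
sensor_width_mm = 36.0      # full-frame sensor width
image_width_px = 4928       # assumed D4 horizontal resolution (~16 MP)
focal_mm = 24.0             # lens focal length
altitude_m = 400_000.0      # approximate ISS orbital altitude

pixel_pitch_m = (sensor_width_mm / 1000.0) / image_width_px
gsd_m = pixel_pitch_m * altitude_m / (focal_mm / 1000.0)   # ~120 m per pixel at nadir
swath_m = (sensor_width_mm / focal_mm) * altitude_m        # ~600 km across track
```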

I took every third frame to build the photogrammetric model, of which only 174 aligned. The resulting point cloud is available for viewing here, and I hope to do more of these in the future, as well as georeference this one! I’ve included a GIF below of all the frames used, as well as the viewshed from PhotoScan.

http://imgur.com/PjCGsPH

ISS

Artificial texturing

For the problems associated with photoscanning low-contrast objects, projecting light onto the scene is one clever way of ensuring a good match between stereo images. It’s nothing new, and it remains a focus within computer vision research as it can reduce the need for camera calibration in stereo setups, such as the Bumblebee, which are used for industrial applications. Structured light setups have been suggested for low-resolution, blurry imagery such as that from underwater cameras, where solutions to the correspondence problem are shaky at best. I know of one PhD student focusing on accurate camera calibration using these methods, but I wonder if projecting an artificial texture onto a low-contrast object and using multi-view stereo with self-calibration could be equally useful. Structured light scanners do just this for two cameras with a laser beam, but I’m thinking about whether you could do it with a home projector or similar and keep the pattern constant across images.
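A toy illustration of why a projected pattern helps (entirely my own sketch, not from any of the work mentioned): correlation-based matching of two 1D “views” of a featureless surface is ambiguous, because there is nothing for the correlation peak to lock onto, but adding a random texture held constant across views makes the true shift recoverable:

```python
import numpy as np

rng = np.random.default_rng(42)

def ncc(a, b):
    """Normalized cross-correlation score between two 1D patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_shift(left, right, max_shift=5):
    """Pick the integer shift that best aligns the two signals
    (a stand-in for the stereo correspondence search)."""
    m = max_shift
    seg = left[m:-m]
    scores = {s: ncc(seg, right[m + s: m + s + len(seg)])
              for s in range(-m, m + 1)}
    return max(scores, key=scores.get)

N, true_shift = 200, 3
noise = lambda: rng.normal(0.0, 0.05, N)   # independent sensor noise per view

# Featureless surface: each view is just noise, so matching is ambiguous
flat = np.zeros(N)
shift_flat = best_shift(flat + noise(), np.roll(flat, true_shift) + noise())

# Same surface with a projected random texture, constant across both views
textured = flat + rng.normal(0.0, 1.0, N)
shift_tex = best_shift(textured + noise(),
                       np.roll(textured, true_shift) + noise())
```

With the texture projected, `shift_tex` recovers the true shift of 3; on the flat surface the winning shift is essentially arbitrary.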

The first figure of this web page (from 12 years ago!) exemplifies the idea, solving for camera positions/intrinsics using the laser. There are other good ideas along these lines in this University of Washington lecture slide set from six years ago; their researchers always seem to be ahead of the game!