Predictions, predictions, predictions

I’ve just listened to the latest episode of Alastair and Andrew’s podcast, Scene From Above, and the discussion section based around near-future predictions for the Earth Observation (EO) industry, as well as some of the discussion in the news section, was extremely interesting. I’m fully on board the hype train for machine learning booming in EO, while Andrew seems somewhat skeptical.

Before I go into why I think that’s the case, I’ll mention that Alastair speaks about a Voyager documentary, The Farthest (I’ve actually just noticed that a big Irish producer, Crossing the Line, was involved in its production, wahay!). It sounds absolutely incredible and will go on my watch list, but Alastair’s comments reminded me of an xkcd comic alluding to the fact that the edge of the solar system is difficult to define! I actually really enjoyed listening to their thoughts on Voyager in general, and would love to hear more discussion around the history of EO as well as wider planetary missions – every time I read and think about Corona, for example, I can’t help but be amazed.


Voyager spacecraft (NASA)


One of the main predictions made within the main section of the podcast is that analysis ready data (ARD) will see wider use and release by data providers. We have already seen a move towards Sentinel-2 ARD, and Planet have recently released their atmospherically corrected surface reflectance product – I would hope this is an indication that ARD is quite well developed already!


A figure from Planet’s surface reflectance white paper (source)

On the machine learning (ML) front, I attended a Google Earth Engine workshop at the beginning of this year, and having had fruitful discussions with the host about the project’s direction, I think the iron is hot for ML and the hype justified. In particular, the host spoke about the team preparing TensorFlow integration into the platform in time for AGU next year. Having been lucky enough to participate (albeit not at a competitive level) in the Planet Kaggle competition for classifying image excerpts into one or more classes last year, I have a decent idea of just why there has been a frenzy of research surrounding convolutional neural networks (CNNs) in the computer vision community, and I’m surprised that they haven’t appeared more in EO research.

While Andrew notes that supervised and unsupervised classification have been around and used for decades, the difference between those and deep-learned information is, in my opinion, like night and day. The competition, beyond the task presented, gave me a look into how neural networks are transforming image analysis, and how recurrent CNNs at massive scales could be leveraged in an environmental context – for example, linking phenological mapping to data which might explain why a change is happening, with spatial context. Object-based analysis is unparalleled for applications like this, and CNNs are now so easy to use and much better at handling massive data sets than previous methods. Computer scientists are poised to integrate more and more with the EO community as higher resolution data becomes available, so I feel that once high temporal and spatial resolution open data arrives, multi-disciplinary research will really kick off. In fact, I put together a starter IPython notebook for bird identification, showing just how easy it is to use a pre-trained CNN for this application, albeit not with EO data.


Example plot from the IPython notebook

This leads to a prediction of my own – as more imaging scientists move into EO, unmanned aerial vehicle (UAV) and satellite data will need to be better integrated. Currently, there is a raft of problems with linking data collected from consumer-level cameras onboard UAVs to satellite data, not least of which is radiometric normalization. The demand for higher resolution data from the deep learning end of the community will lead to new standards being introduced for how UAV data is collected and metadata stored (shameless plug). EO platforms will begin to integrate publicly collected UAV data, and satellite researchers will begin to collaborate with computer scientists using nearer-earth images. We will then see satellites being used as early warning systems, with UAV missions automatically launched off the back of satellite-derived information in a range of new applications.
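To give a feel for what radiometric normalization involves, one common low-tech approach is an empirical line fit: regress satellite surface reflectance against UAV digital numbers over targets assumed to be spectrally stable, then apply the resulting gain and offset to the whole UAV scene. Everything below – the function name and the sample values – is a made-up sketch for illustration, not a real workflow:

```python
# Toy empirical-line radiometric normalization: fit a linear
# (gain/offset) model mapping UAV digital numbers onto satellite
# surface reflectance over pseudo-invariant targets.

def fit_linear(x, y):
    """Ordinary least squares fit of y = gain * x + offset."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    gain = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)
    offset = my - gain * mx
    return gain, offset

# Hypothetical digital numbers from a UAV camera and reflectance from a
# satellite over the same (assumed stable) targets: water, soil, concrete.
uav_dn = [31.0, 102.0, 190.0]
sat_refl = [0.05, 0.21, 0.40]

gain, offset = fit_linear(uav_dn, sat_refl)
normalized = [gain * dn + offset for dn in uav_dn]
```

In practice the hard part is everything this sketch assumes away: finding genuinely invariant targets, and dealing with differing view angles, band responses and acquisition times.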

This isn’t a particularly insightful prediction, but it is one which still hasn’t really been addressed. I’m always surprised by how infrequently satellite and UAV data are used in tandem, but I’m hoping this will change!

That’s all for now – look out for my Google Earth Engine post coming next week. I was blown away by the product and definitely need to do a separate post on it 🙂


UAV cameras revisited

I revisited an earlier blog post, What camera for a UAV?, for an article in GIM International this month. I had a great time expanding upon lots of what I have learned over the last two years!

I seem to have raised a few eyebrows with the suggestion that GoPros are not optimal for photogrammetric use, as they have frequently been used and their lenses have been very well modelled, but in the spirit of the article I still thought it was worth pointing out!


First page from the GIM article

You can find the full article here.

A talk from the RGS conference

I gave a talk last week at the Royal Geographical Society conference. It was a really interesting event, presenting to an extremely diverse audience interested in different types of geography. As part of the event there was a UAV session which dove into concepts ranging from UAV geography and the impact UAVs are having on human culture, to practical investigations using UAVs in the field.

My talk was on reporting image quality, which I’ve written about on this blog before, as well as in an article as part of the SENSED publication. My supervisor had great foresight and brought along a voice recorder, so I’ve set the audio to the slides for the record, and thought I’d share it here too!

What camera for a UAV?

UAV photography has come on leaps and bounds within the last few years, but considering which camera to use often isn’t the focal point (!) of many articles. With this in mind, let’s consider a few stereotypical camera setups, and why we may or may not want to use them in certain situations. Certainly for photogrammetry the higher grade cameras are better, but with weight restrictions everything becomes a little more ambiguous. Here I’ll consider three that I’d like to test side by side in the future, and a fourth I’m unlikely to ever see.

  1. GoPro Hero 4

The quintessential beginner’s UAV/rugged terrain setup, GoPros are legendary for how stable their videos are. They weigh 82 g and so are a very light choice for mounting onto UAVs – a major consideration for camera selection. While many in the photography community are megapixel mad, diffraction effects are often a more important consideration. 12 megapixels on a ~6 mm wide sensor will likely cause serious softening of edges, and while the ground sample distance will be the same as others on paper, the ‘spatial resolution’ – the ability to resolve individual objects based on the Rayleigh criterion – will be far lower.


Image acquired using a GoPro on a rotary UAV

The distortion at the edges is a further nuisance for metric applications, as it needs to be correctly modelled to ensure accuracy within measurements. The example is shot at an angle oblique to the surface, which typically isn’t used in metric applications, but it shows the distortions pretty well.
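To put rough numbers on the diffraction point, here’s a back-of-envelope sketch. The specific figures (4000 px across a 6.17 mm sensor, an f/2.8 lens, green light) are my assumptions for a small-sensor action camera, not quoted specs:

```python
# Back-of-envelope diffraction check: compare the pixel pitch of a
# 12 MP, ~6 mm wide sensor against the Airy disk diameter of its lens.

wavelength = 550e-9      # green light, roughly mid-visible (assumed)
f_number = 2.8           # assumed aperture
sensor_width = 6.17e-3   # assumed 1/2.3" sensor width, metres
pixels_across = 4000     # assumed horizontal resolution for 12 MP

pixel_pitch = sensor_width / pixels_across
airy_diameter = 2.44 * wavelength * f_number  # diffraction-limited spot size

print(f"pixel pitch:   {pixel_pitch * 1e6:.2f} um")
print(f"Airy diameter: {airy_diameter * 1e6:.2f} um")
print(f"Airy disk spans ~{airy_diameter / pixel_pitch:.1f} pixels")
```

With these numbers the Airy disk covers more than two pixels, so fine detail is blurred by diffraction no matter how many megapixels the sensor packs in – which is the sense in which the effective spatial resolution falls short of the nominal ground sample distance.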

  2. Ricoh GR

This is a camera which I see being used more and more for photogrammetric applications: it boasts a high dynamic range and an APS-C sensor (~24 mm width) with a well machined lens at a fixed focal length of 18 mm. In the last blog post, I spoke about Smartplanes surveying a mine in Sweden, where they used a Ricoh to achieve some pretty amazing results. It also happens to be the camera used in the sample dataset provided with MicMac, an open-source SfM software which I have talked about on this blog previously. It is light enough (168 g) that it would fit on, for example, a Conservation Drones fixed wing platform, which would constitute an amazingly low cost setup that could deliver survey-grade results, with some care. I’ve included a sample image from the MicMac data at a downsampled 0.5 MP resolution, but you can see the contrast is great!


An image from MicMac’s example dataset
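For a sense of what “survey-grade” means here, ground sample distance (GSD) is easy to estimate from the camera geometry. A minimal sketch, assuming the Ricoh GR’s ~23.7 mm sensor width, 4928 px image width, 18.3 mm lens and a 100 m flying height (the precise figures are my assumptions, not from the post):

```python
# Toy ground sample distance (GSD) calculator for a nadir image
# over flat terrain: one pixel's footprint on the ground.

def gsd(flying_height_m, sensor_width_m, focal_length_m, image_width_px):
    """Ground footprint of one pixel, in metres."""
    return flying_height_m * sensor_width_m / (focal_length_m * image_width_px)

# Assumed Ricoh GR figures: 23.7 mm sensor, 18.3 mm lens, 4928 px wide.
ricoh_gsd = gsd(100.0, 23.7e-3, 18.3e-3, 4928)
print(f"Ricoh GR GSD at 100 m: {ricoh_gsd * 100:.1f} cm/px")
```

Around 2–3 cm per pixel from 100 m is the sort of figure that makes a lightweight fixed-wing plus Ricoh setup attractive for low cost surveying.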

  3. Sony A7S

Part of an emerging crowd of mirrorless cameras known as MILCs (mirrorless interchangeable lens cameras), I think this could be a good choice for higher grade surveys, considering its seemingly ridiculous low light shooting capabilities, full-frame (36 mm width) sensor and relatively low weight (450 g body vs, for example, a Nikon D810’s 1 kg). It makes me wonder how viable a mixed workflow would be with RTK-GPS, a low cost laser rangefinder and this camera, producing really high quality orthophotos which can be fully georeferenced off two separate data sources. In particular, I wonder if it would be a good choice for canopy height modelling: lasers are known to perform worse during daytime, so maybe a dusk survey with a ramped ISO could produce some good results. One advantage over the others is that the lens is interchangeable, so while the Ricoh is somewhat limited in its applications, the A7S could be used for many things. One I’m going to keep in mind for when I happen upon a spare £1,500.

I found a video on Vimeo with footage flown on a DJI Zenmuse – a setup costing around £3.5k, but I’m sure it would produce some amazing accuracy.

  4. Phase One iXU 180

Not much to say about this, but I’m including it to consider the very top end of the market. Phase One have produced this medium format camera (53.7 mm sensor width) with a price tag of $60,000, though it does tick all the boxes for aerial surveying. It’s about the same weight as the D810 (1 kg), so it won’t mount on most consumer grade UAVs, though we aren’t really going for low cost here. I don’t suspect I’ll ever see one with my own eyes, but I’ve grabbed the demo image from their website, of which I’ll include a downsampled version here for completeness. It’s pretty good looking: flown from 600 m above ground, which means a wide field of view, the ground sample distance is 5.2 cm.
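As a quick sanity check on those quoted numbers, the GSD geometry can be inverted to recover the implied focal length. The 10328 px image width is my assumption (the iXU 180 is an 80 MP camera); the sensor width, flying height and GSD are from above:

```python
# Invert GSD = H * pixel_pitch / f to find the lens focal length
# implied by the quoted 5.2 cm GSD at 600 m above ground.

sensor_width = 53.7e-3     # m, quoted above
image_width_px = 10328     # assumed width of an 80 MP frame
flying_height = 600.0      # m, quoted above
quoted_gsd = 0.052         # m, quoted above

pixel_pitch = sensor_width / image_width_px
implied_focal_length = flying_height * pixel_pitch / quoted_gsd
print(f"implied focal length: {implied_focal_length * 1e3:.0f} mm")
```

The numbers come out self-consistent with a lens of roughly 60 mm, which is plausible for a medium format survey setup.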


Keep an eye out for an upcoming post about various UAV platforms, which I plan to tag onto what I’ve presented here!

Flight riot/Conservation drones

Flight Riot’s website is a good compilation of much of the activity surrounding UAV photogrammetry, and brings together many of the central themes surrounding what I’m interested in. Tutorials on things like the Canon Hack Development Kit (here) and discussion of optimal orientations for image capture (here) are pretty impressive, as the site links sections of different communities and collates the (somewhat) technical information into one place, which is an exciting prospect. Further tutorials on actual structure-from-motion photogrammetry, including workflows using both VisualSfM and MeshLab, make this site a great resource for the budding photogrammetrist. I think I’ll be contributing to this community at some stage in the near future, and will be keeping a keen eye on their activities.

They also have an association with Conservation Drones, a group I’m also quite interested in, who are using low-cost drones to do biodiversity surveys in tropical regions. Having briefly spoken to them about their work and future plans, I’m quite excited about the potential for cheap and cheerful drones to contribute positively to proper land management and surveying in developing countries.

Check out their websites!