Leaflet maps, beautiful!

I decided to try to make some sort of map to help visualize the movement of refugees through Europe, but got distracted by the technical aspects of putting together a web map, and by appreciating the effort that some open source developers have gone through to make really beautiful tiles for OpenLayers-based maps.

As such, I’ve put together a choropleth map showing where the 130,000 extra refugees Europe is being asked to accept would go based purely on GDP. In essence, it’s a map of EU countries’ GDPs, but it may give a bit of context as to the practicality of certain countries offering asylum. It’s got some basic JavaScript components (roll over for info, with highlighting), and I’ve disabled the zoom as it wasn’t very relevant considering how sparsely the dataset is populated.
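Mine is built directly in JavaScript on top of Leaflet, but for anyone who prefers Python, folium (which wraps Leaflet) gets you a comparable choropleth in a handful of lines. A minimal sketch follows; the GeoJSON file, field names, and numbers are all illustrative, not the actual data behind my map.

```python
# A rough sketch of a GDP-weighted choropleth with folium (a Python wrapper
# around Leaflet). Filenames, field names, and numbers are all illustrative.
import folium
import pandas as pd

# Hypothetical GDP-weighted shares of the 130,000 refugees per country.
shares = pd.DataFrame({
    "iso_a3": ["DEU", "FRA", "ITA"],
    "refugees": [26200, 19700, 15100],
})

# scrollWheelZoom is passed through to Leaflet, roughly mirroring my disabled zoom.
m = folium.Map(location=[50, 10], zoom_start=4,
               zoom_control=False, scrollWheelZoom=False)
folium.Choropleth(
    geo_data="eu_countries.geojson",  # hypothetical EU boundaries file
    data=shares,
    columns=["iso_a3", "refugees"],
    key_on="feature.properties.iso_a3",
    fill_color="YlGnBu",
    legend_name="Refugees accepted (GDP-weighted)",
).add_to(m)
m.save("choropleth.html")
```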

You can see it in my webspace here; a screenshot is attached! Here’s a second similar one, showing the number of asylum applications per country as of December 2014.



Stereo matching

I love OpenCV for many reasons, but the one that has me most taken is the extensive documentation provided (I use the Python wrapper, which is brilliant!). Looking through the numerous basic examples shipped with the distribution, and given that image alignment and dense matching is a central theme running through my research, it occurred to me that sharing some of what it has to offer would be a good idea.

As such, I’ve thrown together a web-based implementation of their dense stereo matcher using Python’s convenient CGI module. There’s lots wrong with it; for the moment it’s set up mainly to deal with the image sets from the Middlebury examples listed on the page. I have run it with my own images too, which will work provided they’re the same size, as I haven’t quite dealt with resizing images and the like.

The inputs are two URLs to images (.jpg or .png) hosted somewhere, and the output after processing is a disparity map generated using the block matching algorithm. It’s being hosted at my website here, and an example is presented below.

Note: It takes about 15 seconds for the disparity map to show

Note 2: Now defunct; I’m planning to make it into a Heroku app!
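For the curious, the core of the matcher boils down to very little OpenCV code. Here’s a minimal sketch of the idea, assuming a rectified, same-size pair fetched from two URLs; the parameter values are illustrative rather than what the live page used.

```python
# Minimal sketch: fetch a rectified stereo pair from two URLs and compute a
# disparity map with OpenCV's block matcher. Parameters are illustrative.
import urllib.request

import cv2
import numpy as np

def disparity_from_urls(left_url, right_url):
    def fetch_gray(url):
        raw = urllib.request.urlopen(url).read()
        return cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_GRAYSCALE)

    left, right = fetch_gray(left_url), fetch_gray(right_url)
    # numDisparities must be a multiple of 16; blockSize must be odd.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right)  # fixed-point values, scaled by 16
    # Normalise to 8-bit so the result can be served as an image.
    return cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```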

Disparity map

50 posts!

For my 50th post I thought I’d present another couple of geovisualisations based on the UK national LiDAR inventory. The first is London, and is relatively complete. The point spacing at the deepest zoom level is 7 m, as this was the best I could get without crashing my computer! I still think it’s quite interesting to see, and I hope to add some more functionality to it as time goes on. It’s viewable here.

Secondly, I wanted to present a more topographically diverse region, so I searched the area around Snowdonia, which had no data listed. Next I searched around Ben Nevis, which had no data either! I then searched around Scafell Pike, where there was a tile with data present; you’ll see that it’s patchy, but somewhat interesting nonetheless! See it here.

The importance of radiance and absolute units

One thing that’s been on my mind quite frequently over the past year is the bridge between image processing, which is often carried out purely in digital numbers, and radiance, which is more commonly used in wider remote sensing. Whilst digital numbers have huge memory advantages for applications in computer vision and robotics (integers are far faster to deal with than floating point numbers), radiance seems all too often to be left by the wayside. In this blog I hope to discuss why this is the case, and in what kind of applications I hope to see the culture change.

1. UAV Photogrammetry

I’ll start with the main one for me: applications in UAV photogrammetry, which I’m specialising in. Photogrammetry is a very old discipline which has seen new life through modern structure-from-motion (SfM) applications. The concepts all come from the computer science community, and the entire workflow is executed on digital numbers – difference-of-Gaussian feature trackers (SIFT, SURF) were designed with this in mind. Lots of these applications are being done using consumer-grade cameras, with operators capturing JPEGs and inputting them into software packages. The simplicity is very attractive and it’s easy to see why they have become so popular, but the problem for me lies in color balance. If we take a product from an SfM survey we have a rich data source, with detailed information on normals, which could be used to calibrate satellite images and provide a reference for change detection. Indeed, if you’re using an image-based survey to do something like map glacial dynamics, I feel that looking for a product which has absolute color information (W m⁻² sr⁻¹) with a degree of uncertainty associated with it is natural, and should be the norm. One of my favorite papers I’ve read in the last year, a landmark paper by Debevec and Malik which I’ve mentioned on this blog before, demonstrated how practical this is, and how it just requires some extra calibration/preparation. Given this paper was written in 1997 and is still taught widely to this day (one example, albeit from 2009) in computational photography, I don’t know why we can’t generate radiance maps as standard, given the concepts are so evolved and code is widely available. While their work focuses on high dynamic range imaging, you could do it with one image from a spectrally calibrated camera.


Example radiance map from Debevec and Malik’s paper
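As it happens, OpenCV ships an implementation of Debevec and Malik’s method, so a radiance map really is only a few lines away. A minimal sketch, assuming a list of differently exposed images of a static scene and their exposure times (the filenames and times below are placeholders):

```python
# Sketch of radiance map recovery via OpenCV's Debevec & Malik implementation.
# `exposures` holds 8-bit images of the same static scene at different
# exposure times; filenames and times here are hypothetical.
import cv2
import numpy as np

times = np.array([1 / 30.0, 1 / 8.0, 1 / 2.0], dtype=np.float32)
exposures = [cv2.imread(f"exposure_{i}.jpg") for i in range(3)]

# Recover the camera response function, then merge into a radiance map.
response = cv2.createCalibrateDebevec().process(exposures, times)
radiance = cv2.createMergeDebevec().process(exposures, times, response)  # float32
cv2.imwrite("radiance.hdr", radiance)  # Radiance RGBE format keeps the float values
```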

I have a sneaking suspicion I might be missing something. I’m aware of the difficulties in atmospheric correction, but that isn’t enough to dissuade me from its generation. From what I gather, and I have spoken with one student who has done it, all you need for the calibration is a monochromator or a uniform source system.

Uniform Source System

I can imagine a calibrated camera which can replicate the RGB spectral bands of Landsat or similar, and aid in satellite image simulations. This is done with high-grade industrial cameras already, and I feel the extension to consumer/academic work is actually straightforward enough to do just for the sake of it.

2. Describing features

Features are a mainstay of image alignment, and are used in many mosaicking algorithms. Can we describe features in terms of radiance? Again, this is something which computational/architectural photographers have been looking at, and I particularly like this proceedings paper from Kontogianni on the subject. He generates tone maps to show how higher dynamic ranges can produce more points for alignment algorithms, which, while not the point of this blog entry, is a nice byproduct.


One figure from Kontogianni’s paper

He uses Debevec and Malik’s implementation for radiance mapping, and demonstrates how straightforward this could be to apply to a UAV application. While we would lose the HDR aspect, the radiance map would still be recovered and very useful.

As well as this, we could perhaps test how changing the bins for radiance-to-digital-number conversion affects the detected features. Considering how well studied and explained something like SIFT is, I would consider a study like this very worthwhile, particularly in the geoscience domain. Can we start describing features in terms of the radiance-to-digital-number conversion? When is a feature not a feature? How much contrast is needed for a feature to be a feature? If this is at the mercy of how the JPEG is binned then it’s clearly very important. Again, I feel like I might be missing something and would love to know if this is being done anywhere.
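As a concrete starting point, a test like that could be as simple as requantising a radiance image to different bit depths and counting how many SIFT keypoints survive each conversion. A rough sketch (the function and its defaults are mine, not from any paper):

```python
# Hypothetical experiment: requantise a floating point radiance image to
# different bit depths and count the SIFT keypoints detected at each depth.
import cv2
import numpy as np

def keypoints_per_bit_depth(radiance, bit_depths=(4, 6, 8)):
    sift = cv2.SIFT_create()
    normed = (radiance - radiance.min()) / (np.ptp(radiance) + 1e-12)
    counts = {}
    for bits in bit_depths:
        levels = 2 ** bits - 1
        quantised = np.round(normed * levels) / levels  # bin into 2^bits levels
        img8 = (quantised * 255).astype(np.uint8)       # SIFT expects 8-bit input
        counts[bits] = len(sift.detect(img8, None))
    return counts
```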

3. Databasing

Lastly, radiance maps (instead of digital number orthophotos) would be much larger to store, but in terms of generating an accurate historical database for SfM work, one which allows studies to be directly compared and updated as algorithms evolve, I feel it would be worth it for higher-grade studies. Considering how important some of the datasets being generated at the minute will be for climate models and baselines for future studies, providing and databasing everything correctly, not only for replication but for extension and accurate comparison to other data sources, should be a focal point of modern geomatics in my opinion.

Absolute units are the basic units for describing the world, and while such care is taken in many domains to impose SI units, nobody seems too concerned with them for photogrammetry applications. Given the level of detail the historical record will provide, I imagine a future where, once higher-level object detectors become more developed, we won’t have to scratch our heads over EXIF files and try to back-project or guess what the radiance mapping was. Absolute units are the way to go on all fronts!

I hope this entry is of some interest to any readers, and I would love for anyone with literature suggestions to get in touch!

A national 3D point cloud

I was searching various forks of Potree‘s web-based point cloud viewer on GitHub, and happened to stumble across a fork with Python bindings for processing huge datasets. The example given is 640 billion points, and has a live search feature that is pretty damn cool. The customisable color bar is also something I’m pretty excited about, as these ideas can be used for more than just height modelling – I’m looking forward to seeing some thematic models with this in mind, perhaps with an active legend which changes with whichever octree level of zoom you’re on. The Netherlands, however, is pretty flat, so I may try to adapt these ideas to the UK inventory over the weekend if I have time. Either way, I suggest you give the web viewer a look, as it’s a technical marvel if nothing else!

I’ll include one screen capture showing the detail at the lowest level; you can make out individual houses and trees. Really impressed by it all!

From the giant dataset at the lowest level, a building set amongst trees

http://ahn2.pointclouds.nl/

Validating the UK LiDAR inventory/SfM products

On September 1st the geomatics section of the UK Environment Agency released its LiDAR inventory for free (including commercial use). I thought I’d take the chance to compare it with an SfM survey which was carried out on a relatively flat field in Damerham, UK. It was the subject of a georeferenced point cloud I generated previously (viewable here), and I was wondering what kind of differences we would see (or would expect to see) versus what will presumably be the new national benchmark, in an area which shouldn’t change much topographically.

First, I generated a GeoTIFF for the Damerham data using a new function in CloudCompare. I then needed to find the tile reference where the field was located and requested that data from the Environment Agency’s new portal. I loaded both of these into QGIS and generated a difference DEM based on these inputs, shown below.

Difference in raster grids (LiDAR vs SfM survey)
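The difference-DEM step can also be scripted outside QGIS. Here’s a minimal sketch using the GDAL Python bindings, assuming both rasters are already on the same grid, extent, and projection (the filenames are invented):

```python
# Sketch of the difference DEM in Python, assuming both rasters share the
# same grid, extent, and projection. Filenames are hypothetical.
import numpy as np
from osgeo import gdal

sfm = gdal.Open("damerham_sfm_dem.tif")
lidar = gdal.Open("ea_lidar_tile.tif")
diff = sfm.ReadAsArray().astype(np.float32) - lidar.ReadAsArray().astype(np.float32)

# Copy the SfM raster's georeferencing onto a new float32 output band.
drv = gdal.GetDriverByName("GTiff")
out = drv.Create("difference_dem.tif", sfm.RasterXSize, sfm.RasterYSize,
                 1, gdal.GDT_Float32)
out.SetGeoTransform(sfm.GetGeoTransform())
out.SetProjection(sfm.GetProjection())
out.GetRasterBand(1).WriteArray(diff)
out.FlushCache()
```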

Next we can do the reverse. First we load our Damerham cloud, which was made previously and georeferenced in Agisoft’s SfM package. We then convert the ASCII grid to a LAS file using one of the many very handy tools found in the LAStools toolbox; las2las can do this for us. With the two clouds ready, we can use the cloud-to-cloud distance tool to measure the difference between them.
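If you’d rather stay in Python than reach for LAStools, laspy can do a comparable conversion, assuming the grid has first been dumped to x y z rows (the filenames and column layout below are mine):

```python
# Sketch: turn an x y z text dump of the ASCII grid into a LAS file with laspy.
# Assumes laspy 2.x; filenames and column layout are hypothetical.
import laspy
import numpy as np

xyz = np.loadtxt("ea_tile_points.xyz")  # columns assumed to be: x y z

las = laspy.create(point_format=0)
las.x, las.y, las.z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
las.write("ea_tile.las")
```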


Histograms for cloud-to-cloud distance of LiDAR vs SfM clouds

Interesting! There seems to be a pretty big offset between the two. I decided to filter out all points with distances <25 cm or >60 cm, as they made up such a small amount of the cloud, and generated a new extract, which is presented below.


Cloud-to-cloud distance between LiDAR data and SfM survey using GCPs

It’s a bigger difference than I was expecting to see, and I would love to test a few more SfM surveys in areas of simple topography which don’t change often, to see how they fare against what will become the national LiDAR.

I had one other dataset to hand today with which to try this: a terrestrial LiDAR survey of a coastal cliff in Wales, featured in this paper. Here‘s an SfM cloud I produced using the imagery from that paper. I loaded the relevant tile into QGIS, but was required to do a reprojection, as the survey was done in UTM zone 30N, a different coordinate system to the OSGB system of the LiDAR data. After performing the reprojection I continued in much the same way, though I won’t present the QGIS screengrabs as they leave something to be desired! On loading both clouds into CloudCompare I was greeted with quite the difference, as shown below.
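For reference, the reprojection itself is nearly a one-liner with pyproj: UTM zone 30N is EPSG:32630 and the British National Grid is EPSG:27700. A quick sketch (the sample coordinate is a placeholder, not from the survey):

```python
# Sketch of the UTM 30N -> OSGB (British National Grid) reprojection with
# pyproj. The sample coordinate is a placeholder, not from the survey.
from pyproj import Transformer

utm30n_to_osgb = Transformer.from_crs("EPSG:32630", "EPSG:27700", always_xy=True)
easting, northing = utm30n_to_osgb.transform(430000.0, 5800000.0)
print(easting, northing)
```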


Offset after reprojecting

This is the nature of reprojections and coordinate systems. I just did a simple shift in Z to line it up more or less with where it should sit, to visually check the fit – it looked pretty good!


SfM cloud draped on the LiDAR

The LiDAR data (this is the 1 m product, not even the highest resolution!) is actually really amazing, its accuracy rivalling the result of this survey done just 3 years ago. I’ll include one more screen capture of the coastal town; bonus points for whoever can tell me what the strange streaking effect across the cloud is!


Town beside Constitution Hill