I’ve been neglecting this blog of late, partly because I’ve been ill and partly because I’ve been focusing my writing efforts elsewhere, but I thought it was about time I put something up. Followers might remember that at EGU last year I presented a poster detailing the results of investigating variation of the greyscale input channel into Structure-from-Motion (SfM) photogrammetric blocks. Whilst the results showed very slight differences, I didn’t present one interesting, subtle effect, which shows how robust the process is to differences within images.

Within the SfM process, camera parameters correcting for distortions in the lens are fitted, and these can subsequently be extracted for separate analysis. Returning to the greyscaling theme for inclusion in my final thesis, I have been pulling out the lens models for each block, and noticed the focal length fitted to each block changing subtly, but in a manner we might expect.

Chromatic aberration

Chromatic aberration is caused by differences in the refractive indices of the glass in the lens between light of different wavelengths, which causes the focal point of the image formed for each wavelength to be slightly different. Thus, in colour images and for other optical equipment (I remember seeing it in many different sets of binoculars), we can see colour banding around the edges of high contrast features.


Chromatic aberration seen at the front (red fringe) and back (green fringe) of the candle
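As a back-of-envelope illustration of the effect (my own sketch, not part of the original analysis), the thin-lens lensmaker’s relation 1/f = (n − 1)K, with K fixed by the lens geometry, shows why red focuses slightly longer: the refractive index is lower at red wavelengths, so (n − 1) shrinks and f grows. The glass indices below are approximate published values for Schott BK7.

```python
# Thin-lens sketch: focal length scales as 1 / (n - 1) for a fixed
# lens geometry. BK7 indices are approximate published values:
# n_d ~ 1.5168 (588 nm), n_F ~ 1.5224 (486 nm), n_C ~ 1.5143 (656 nm).
def focal_length(n, f_design=50.0, n_design=1.5168):
    """Focal length at refractive index n, for a lens designed to be
    f_design (mm) at the design index n_design."""
    k = 1.0 / (f_design * (n_design - 1.0))  # geometry constant (1/R1 - 1/R2)
    return 1.0 / ((n - 1.0) * k)

f_blue = focal_length(1.5224)  # blue focuses shorter
f_red = focal_length(1.5143)   # red focuses longer
```

On this toy model the red focal plane sits the better part of a millimetre behind the blue one for a nominal 50 mm lens – the same direction of drift as the fitted focal lengths discussed below.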

Within photogrammetric blocks using a single channel, we might expect the fitted focal length to be optimised specifically for that colour’s focal length as it interacts with the particular lens being used. Indeed, this is demonstrable in the tests I have run on an RGB image set collected at a cliff near Hunstanton, UK – we see a slight lengthening of the focal length as more of the red channel is introduced to the image block, accounting for the interaction with the lens.


Self-calibrating bundle adjustment fits longer focal lengths to greyscale bands containing a greater proportion of the red channel from an RGB image. Colours of the plotted points represent the RGB colour combination the greyscale photogrammetric block was derived from. The larger circles represent pure red, green and blue channels.
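For context, the greyscale blocks above are weighted combinations of the R, G and B channels. A toy sketch of that mixing step (pure Python on nested lists, for illustration only – not the actual processing code used for the image blocks):

```python
# Derive a greyscale image as a weighted sum of RGB channels,
# with the weights summing to one.
def to_greyscale(rgb_image, w_r, w_g, w_b):
    assert abs(w_r + w_g + w_b - 1.0) < 1e-9, "weights should sum to 1"
    return [
        [w_r * r + w_g * g + w_b * b for (r, g, b) in row]
        for row in rgb_image
    ]

img = [[(200, 100, 50), (10, 20, 30)]]               # one row, two pixels
red_heavy = to_greyscale(img, 0.8, 0.1, 0.1)          # red-dominated band
luminosity = to_greyscale(img, 0.299, 0.587, 0.114)   # standard luminosity-style weights
```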

Whilst this might be expected, I was surprised by how obvious the trend was, and it’s a testament to how sensitive SfM is to even small changes in image blocks. Watch this space for more insight into what this means for assessing the quality of images going into SfM procedures, and how we might gain intuition into image quality as a result of this trend!

EGU 2017

As a result of a travel grant awarded to me by the Remote Sensing and Photogrammetry Society, I was lucky enough to be able to return to EGU this year, albeit only for the Wednesday. I was there to present my research, in poster format, on raw image processing in structure-from-motion workflows. After arriving in Vienna on Tuesday afternoon I went straight to the hostel I was staying at to review my poster and finalize the sessions I would attend.

I got to the conference early in the morning, and set up my poster which was to be presented during the high resolution topography in the geosciences session. After taking a short break to grab a coffee, I headed over to the first session of the day – Imaging, measurements and modelling of physical and biological processes in soils. After last year’s fascinating run of discussions about soil and soil erosion, I decided my one day at EGU would be largely dedicated to that theme!

One particular talk which caught my eye used data fusion of laser scanning and NIR spectrometry, with the goal of coupling the two datasets for use in examining feedbacks in soil processes. Some very cool kit, and very blue-sky research – a good way to start the day!

After lunch, I almost exclusively attended a land degradation session, which featured some very interesting speakers. Many focused on integrating modern techniques for prevention of soil erosion and gully formation into farming practices in Africa. Interestingly, while the talks almost all focused on case studies and success in showing the physical effects of taking these actions, the Q & As were very much about social aspects, and how to bring about the cultural change within farming communities.

Another notable talk was given by a group aiming to promote a targeted carbon economy, which would see citizens of carbon-consuming countries pay for the upkeep and management of forestry in developing communities. The presentation was very clear and set solid numbers against each factor introduced, which made it much easier to share the vision portrayed – definitely something I’ll be following in the future!

This led to the poster session in which I was participating, which was well attended and seemed to generate lots of interest. By the time I arrived to present at the evening session, the 15 A4 posters I had printed had been hoovered up, which is always a good sign! Over the course of the hour and a half I was visited by many people I had met before at various conferences – it’s always nice to have people you know come to say hello, especially as affable a bunch as geomorphologists!


The poster I presented

One group of particular interest was from Trinity College Dublin, where I had done my undergraduate degree many moons ago. Niamh Cullen is doing research into coastal processes in the west of Ireland and is using photogrammetry to make some measurements, so we had a very good discussion on project requirements and best strategy. She’s also involved in the Irish Geomorphology group, whose remit is to establish a community of geomorphologists in Ireland.

In the evening I attended the ECR geomorphologist dinner, which was great fun, a good way to wrap up proceedings! I look forward to participating in EGU in the future in whatever capacity I can.

Photo pairs in VisualSFM (VSfM)

One handy function of VisualSfM which can save a huge amount of time in bundle adjustment is instructing the software on which photos overlap and which don’t. This saves the software from trying to match images which have no overlapping area, and generally just keeps things cleaner.

At the high end, people can do this by inputting GPS coordinates as an initial ‘guess’, which the bundle adjustment can then refine. Our solution assumes we know the overlap of the input photos, and therefore which pairs are possible matches. From this, we can produce a file of candidate image pairs to speed up bundle adjustment (BA).

I’ve put together a simple Python script for this, with a few options for creating the file needed to preselect image pairs. The script assumes the photos were taken in order, in either a ‘linear’ (where the ends don’t meet) or ‘circular’ (where the last photo overlaps the first) configuration, and pairs each photo with x photos either side of it. It needs to be executed in the folder where the image files are located, and produces a file named ‘list.txt’. This can be input into VSfM, with more instructions available here.

The script takes 4 parameters.

  1. Number of images in front of/behind the current image with which to make pairs, assuming the images were taken in order
  2. The filetype (case sensitive for now)
  3. The imaging configuration – ‘linear’ if the first image does not overlap the last, ‘circular’ if it does
  4. The delimiter, options are ‘comma’ and ‘space’ (used in VSfM)

Sample: ‘python 3 tif circular comma’
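For readers curious about the logic, it can be sketched roughly as follows – a minimal reimplementation for illustration, not the original script:

```python
# Pair each (sorted) image with the next `window` images, wrapping
# around in 'circular' mode, and write the pairs to list.txt.
import os
import sys

def make_pairs(names, window, circular):
    """Candidate match pairs for images assumed to be in capture order."""
    pairs = []
    n = len(names)
    for i in range(n):
        for step in range(1, window + 1):
            j = i + step
            if j < n:
                pairs.append((names[i], names[j]))
            elif circular:
                pairs.append((names[i], names[j % n]))  # wrap last onto first
    return pairs

if __name__ == "__main__" and len(sys.argv) == 5:
    window, ext, config, delim = int(sys.argv[1]), sys.argv[2], sys.argv[3], sys.argv[4]
    sep = "," if delim == "comma" else " "
    names = sorted(f for f in os.listdir(".") if f.endswith("." + ext))
    with open("list.txt", "w") as fh:
        for a, b in make_pairs(names, window, config == "circular"):
            fh.write(a + sep + b + "\n")
```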

It can be downloaded from the public GitHub repository here. Hope this helps someone 🙂

Thoughts on EGU

At the end of April I had the opportunity to participate in a big international conference in Vienna, which saw over 13,000 scientists participate and contribute to research in the geosciences. As mentioned in a previous post, I contributed a poster and a PICO presentation, but now I’d like to outline what I did and didn’t enjoy about my time there.

Sunday evening reception

Arriving at the Vienna International Centre, I was somewhat overwhelmed by the scale of the whole event. Reception was a hive of activity, though registering took only a minute or so, as we had registered our details online prior to the event. The evening reception consisted of food and drink, all included in the registration price, which is a surefire way to get things going. I met a few interesting people, but spent the majority of the evening with my supervisor and some students from Loughborough University. We posed in front of NASA’s ‘hyperwall’, a screen showing visualisations, mainly of atmospheric patterns of the Earth. While I do like data visualisation, this seemed a bit out of place to me – like a talking piece on a coffee table, but for geoscience.


Austria center, conference HQ

My first impressions were good, and I was excited to sink my teeth into the week’s activities.

Day 1. Monday 18th April

The day’s schedule didn’t have too much that appealed, so I used most of the morning for orientation, and as a chance to observe a few PICO sessions, which I had never participated in or even seen before. These involve each presenter giving a ‘two minute madness’ – a rapid overview of their research – after which attendees spend the following hour exploring applications prepared by the presenters. The first session I attended was on a topic I had no interest in, but the format seemed to work quite well as a concept. Previously I had only been to poster sessions for the presentation of content, so this was quite refreshing. I explored a few presentations on the touchscreens, which helped me prepare for the one I had to deliver, as well as make some small edits to it.

In the afternoon I attended a few talks as part of the ‘Peatlands under pressure‘ session. Having studied carbon cycling/modelling during my masters degree, it was interesting to see practising carbon scientists outside of the teaching environment. It was also of interest because Ireland has lots of peat bogs! A quick search returned unsatisfactory results, so I checked the CORINE land cover dataset from 2012, which reported that, excluding inland water bodies, Ireland’s land mass is 14.85% peat bog.


Quick map showing Irish Peat bog cover

The presentations were largely good, and I enjoyed the level of detail that went into a few. After a nice evening meal and a hiccup or two in central Vienna that night, I was prepared for my poster presentation the following day.

Day 2. Tuesday 19th April

This day had an extremely relevant oral session, and one PICO session I was very excited about. The day started with PICO presentations from the session ‘Frontiers in Geomorphometry and Earth Surface Dynamics: Possibilities, Limitations and Perspectives‘. It included some people who had previously shared data with me, including Matt Westoby, whose photogrammetric subject features on my website. His talk on ‘direct georeferencing’ was quite interesting – a concept getting a great deal of attention, and one being studied by a friend up in Newcastle. His visualisations were great, but he was so busy talking to people after his presentation that I decided it’d make more sense to catch up at the poster session that evening.

Another very interesting presentation was that of Andreas Kaiser, who is working towards practical in situ landslide monitoring using multi-camera networks (in this case three Canon 400D cameras). He was kind enough to send me data he had captured, as he was having trouble getting consistent results from Agisoft. I plan to test some newer photogrammetric concepts, which are undoubtedly more demanding, to see how the results compare. A work in progress.

The oral presentations included James Brasington giving a keynote on photogrammetric surveying of a braided river bed in New Zealand. His results were very interesting, applying photogrammetry at a scale I hadn’t seen before. His student, Joe James, reported they had upwards of 4,000 images of the area. It was an engaging talk, well delivered, and it was reassuring that I wasn’t veering too far off in my own usage of structure-from-motion.

A couple of other noteworthy presentations included Michael Wimmer on two-media photogrammetry, a topic I’ve read about a number of times before, which attempts to correct for refraction at the water surface in order to model submerged topography. His results left something to be desired, but such a challenging and useful topic is always going to be tricky to develop – I was impressed.

The big winner for me, however, was the talk by Ellen Schwalbe, who had set up a camera array to monitor glaciers long term. This raised similar questions to Andreas’ PICO, as controlling for movements in camera position relative to the rest of the scene is extremely difficult. Nevertheless, the time lapse camera network had picked up some really amazing calving events, with some good volumetric analysis to boot. I was very impressed by the research design – I think this was the best talk of the day.

That evening was my poster presentation, which was far more casual than I was expecting. Beers were served, which always helps break the ice, and I had great talks with colleagues from TU Dresden, including Annette Eltner, who had been kind enough to share some data with me. Another good chat was had with Karen Anderson, who put me in touch with a student of hers with whom I have common research interests. In all, a good, full day which left me with reams of ideas to pursue.


The poster I presented.

Day 3. Wednesday 20th April

With no great deal of relevant talks for me that day, I decided to revisit my roots in the morning and go to a couple of oral talks about fires, a topic I had studied for my master’s dissertation. Researchers from Colorado gave a good talk on post-fire recovery on hillslopes for different storm types, which I quite enjoyed, but many of the talks weren’t as engaging.

During the day I observed a PICO session with my supervisor in order to get a feel for what to expect in my own. The session was on nutrient recovery from farmed land, and featured presentations of varying quality, ranging from single-slide, short and snappy efforts to people attempting to deliver a full talk at a rate of one slide every 5 seconds. With PICO, brevity is a blessing and waffling is badly punished – an attractive attribute in any format. In short: all killer, no filler.

In the afternoon I attended some talks about soil erosion and the formation of channels, based on both in situ and lab observations, with one particular group (I can’t find the abstract!) performing a very well controlled lab experiment using a rainfall simulator on a flat sand bed to see the changing impacts of rainfall severity on soil patterns. The talk was well received, and I enjoyed it!

The evening poster session was OK, but I was so focused on delivering a coherent PICO that I didn’t engage as much as I could have.

Day 4. Thursday 21st April

The session I was participating in, ‘Unmanned Aerial Systems: Platforms, Sensors and Applications in the Geosciences‘, was on first thing in the morning, split into two parts, of which I was in the latter. The first talk of each PICO session is a ten-minute keynote; this one presented a very interesting concept for measuring soil organic carbon using a multispectral UAV. The idea was to allow precision monitoring of small fluxes in the measure, and could potentially be applied to farmed crops.

After the first session, the PICO hour was very interesting, and I mainly focused on the structure-from-motion aspects, including a notable talk on DEM reproducibility by Niels Anders and co-authors, including my supervisor, who presented.

I was second up in the second half of the session, and after a clunky start motor-mouthed my way through the majority of the 2 minutes. I think if I had used the first ten seconds more wisely I would have been OK, but these are the margins PICO demands! Afterwards I had some very stimulating conversations with interested people, including a geologist from the Cordoba geological survey, an Austrian forester and a French glaciologist, all of whom were seeking camera advice. The main point, above all, was to shoot in RAW. As Verhoeven puts it in his paper, ‘RAW is the only scientifically justifiable file format’.

My PICO consisted of an interactive stereo matcher which put users in charge of image exposure, letting them see both the histograms and the matching accuracies of the image pairs change as they adjusted the settings. Unfortunately I think I overdid it a little, and the people approaching me weren’t too interested in the graphical interface, but more in talking with me about camera settings. Not to worry though – you, fine reader, can get your own copy from the EGU portal here.


Sample slide from my 2 minute talk

The session put me in high spirits, and really brought home how comforting it is to talk about things you know with like-minded people. At a conference this big, I often found myself without the expertise and vocabulary to maintain a dialogue, so I tried to simplify (perhaps to a fault) everything I was talking about.


The only other talk I went to was on building surveying and the use of novel instruments to measure applied stress, with a particular focus on non-destructive methods for historical buildings. Some of the concepts could be applied in conjunction with other research I had seen, and I felt I started to think along the right lines of collaboration towards the end of the day – something I could have used earlier in the session!

Day 5. Friday 22nd April

The last day included several sessions on science communication that I was interested in. Having spoken with a couple who had set up the very informative and well organised SciCom website here, I decided to go along to the poster presentation to see what else was being discussed. Science communication is so important, and I’ve previously mentioned a charity I admire that is directly linked with it. The public understanding section of this session was well put together, and gave me hope that scientists will one day be better communicators as a whole.

The last session I planned to attend was on open source software in geoscience and, to be honest, I was quite disappointed. I try my best at every step not to reinvent the wheel, but it seemed like that’s what everyone in this session was doing. I understand competition is important as a concept, but when you’re rewriting well documented and implemented code available through every channel imaginable, I just don’t see the benefit versus the amount of time required. It was a PICO session, and most of the apps had examples of maps produced using their software, and all I could think of was how most were already very accessible by different means.


I really enjoyed my time at EGU, and would certainly recommend that PhD students go at least once. I must confess, however, that I am excited to go to smaller conferences in the future, as the scale was one thing I never got used to. Even the amount of time needed to organise a good schedule was intimidating! Nonetheless, I hope you enjoyed my account of things, and short of going to the conference itself, you should visit Vienna! It really is a very beautiful city.

Photogrammetry from Bristol


From the docks

I visited a friend in Bristol this weekend and had a few hours on Sunday to explore the city a little. I decided to have a go at making a photogrammetric model of St. Peter’s church, seeing as it’s steeped in history and sits in a reasonably clutter-free area for gathering photos. I unfortunately only brought a 50mm EF-2 lens, which wasn’t ideal, but I decided to try anyway. In total I took 186 images, 164 of which were incorporated into the sparse cloud/bundle adjustment.



I unfortunately couldn’t get far enough away to capture good enough imagery of one side of the church, and dense vegetation towards the back meant photo matching was quite poor there. Nonetheless, it was a fun exercise and one I look forward to repeating!

Location on OSM.

You can see the model here.

Digitizing Elmo (Windows)

Just today I participated in the first step of a collaborative project between departments at Kingston, which involved a live demo – the product was a point cloud of an Elmo/monster toy, as shown below. Here I’ll just go through the steps involved in the cloud generation, so hopefully readers can replicate it themselves!


Our test subject

  • Images

For generating the data I used my Canon 500D with a 50mm EF2 lens, so nothing overly fancy. I started by putting the subject on a raised platform to minimize the effect of reconstructing the ground, and used an aperture of f/5 or so to ensure the subject was in focus but the ground was not. A good example of precautions to take when dealing with low-texture objects is introduced in this blog post, though these can often be limited by the amount of RAM in a computer. As I was somewhat time-limited, I decided to forgo the stability of a tripod and instead used a fast shutter speed (1/30 s) with an ISO of 400 to compensate, and generally just tried to get a reasonable amount of coverage of the subject. I also took some images with a wider aperture (f/2) and faster shutter speed (1/50 s). I threw a few paintbrushes into the scene to generate a bit more texture.

The test dataset (some of the images are very poor, I’m aware!) can be downloaded here.

  • Model building

For convenience, Agisoft Photoscan (there’s a free 30-day trial) was used to build the model, though open source alternatives exist, such as VisualSFM or MicMac. I’ve included a short slideshow of the exact steps in Photoscan below, to hopefully make it easy to follow!

This slideshow requires JavaScript.

  • Level/denoise in CloudCompare

CloudCompare is an open source point cloud editing package available here. Because our model is exported without any coordinate system, it can’t tell up from down – but we can fix this! In CloudCompare we can use the levelling tool to quickly orient the model so it’s a bit easier to view. Another useful tool is the statistical outlier removal filter under Tools -> Clean -> SOR filter, though we’ll skip it in this case.

This slideshow requires JavaScript.
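For the curious, the idea behind the SOR filter can be sketched in plain Python – a brute-force toy, not CloudCompare’s actual octree-accelerated implementation:

```python
# Statistical outlier removal: drop points whose mean distance to
# their k nearest neighbours exceeds the cloud-wide mean by more
# than n_sigma standard deviations. Brute force, illustration only.
import math
import statistics

def sor_filter(points, k=6, n_sigma=1.0):
    mean_knn = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(dists[:k]) / k)
    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    return [p for p, d in zip(points, mean_knn) if d <= mu + n_sigma * sigma]
```

Isolated points sit far from their neighbours, so their mean k-NN distance stands out against the rest of the cloud and they get dropped.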

  • Preparing to upload

Potree is a free point cloud viewer which can be used to host datasets online. Here we’ll just use it in its most basic form to get a minimal example out. This section gets a bit hairier than the others, but hopefully it’s intelligible. We’ll need to download and unzip both the Potree converter and Potree into the same directory, making a new subdirectory for each: ‘Converter’ and ‘Potree’. Next we’ll add the model we saved from CloudCompare to the converter directory, renaming it ‘model.las’. Then we’ll follow the slides below!

Note – the command for the fourth slide is: PotreeConverter.exe model.las -o ../../Potree/potree-1.3/model_out --generate-page model

This slideshow requires JavaScript.

  • Upload to the web

While there are instructions for Kingston students on how to upload web pages, this is a general skill that is good to have. We use the FileZilla FTP client to log in to our server, and the idea is to upload the entirety of the Potree folder, which contains all the resources necessary for rendering the scene. The actual HTML page where the model is located is stored in the directory potree-1.3/model_out/examples/, and can be accessed via this once uploaded.
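If you’d rather script this step than click through FileZilla, Python’s built-in ftplib can do the same job. A sketch below – the host, credentials and remote folder name are hypothetical placeholders, and remote directories are assumed to exist already:

```python
# Walk the local Potree folder and push every file to the server,
# preserving the directory layout. Sketch only: error handling and
# remote directory creation (ftp.mkd) are omitted.
import os
from ftplib import FTP

def remote_paths(local_root, remote_root):
    """Map each file under local_root to its remote destination path."""
    mapping = []
    for dirpath, _dirs, files in os.walk(local_root):
        rel = os.path.relpath(dirpath, local_root)
        for name in sorted(files):
            if rel == ".":
                remote_dir = remote_root
            else:
                remote_dir = remote_root + "/" + rel.replace(os.sep, "/")
            mapping.append((os.path.join(dirpath, name), remote_dir + "/" + name))
    return mapping

def upload_folder(host, user, password, local_root, remote_root="Potree"):
    with FTP(host) as ftp:  # e.g. host="ftp.example.com" (placeholder)
        ftp.login(user, password)
        for local, remote in remote_paths(local_root, remote_root):
            with open(local, "rb") as fh:
                ftp.storbinary("STOR " + remote, fh)
```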


The final version of the model generated is viewable at the directory here –

EDIT: I’ve lost the original model for this, though will reupload a new version soon

If the Potree stuff is a bit too hairy, CloudCompare is brilliant for toying with models – I recommend giving it some time, as it’s an extremely useful software package!

  • Conclusion

This is a basic tutorial on how to rapidly get 3D models online using nothing but a handheld camera and a laptop. Including taking the images, this process took around 20 minutes, but it can be sped up in many ways (including taking fewer but better images). The CloudCompare step can be skipped to speed things up even further, but having a level ‘ground floor’ plane is, in my opinion, almost a necessity for producing a model.

This is not intended to be best-practice photogrammetry, or even close; it is intended to give an overview of modern photogrammetric processes and how they can be applied to rapidly generate approximations of real-world objects. These can then be cleaned, and models generated for use in applications such as 3D printing, videogames or interactive galleries.

  • Complete software list

Agisoft Photoscan (with VisualSFM or MicMac as open source alternatives), CloudCompare, PotreeConverter and Potree, and FileZilla.

Validating the UK LiDAR inventory/SfM products

On September 1st the geomatics section of the UK Environment Agency released its LiDAR inventory for free (including for commercial use). I thought I’d take the chance to compare it with an SfM survey carried out on a relatively flat field in Damerham, UK. The field was the subject of a georeferenced point cloud I generated previously (viewable here), and I was wondering what differences we would see (or would expect to see) against what will presumably become the new national benchmark, in an area which shouldn’t change much topographically.

First, I generated a GeoTIFF from the Damerham data using a new function in CloudCompare. I then needed to find the tile reference for where the field was located, and requested that data from the Environment Agency’s new portal. I loaded both of these into QGIS and generated a difference DEM from the two inputs, shown below.

Difference in raster grids (LiDAR vs SfM survey)

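The difference-DEM step is conceptually just a cell-by-cell subtraction of two aligned grids, something like the toy sketch below (QGIS’s raster calculator does the real work, including handling the georeferencing, which is omitted here):

```python
# Subtract two aligned raster grids cell by cell, propagating a
# nodata value wherever either input is missing. Illustration only.
NODATA = -9999.0

def diff_dem(grid_a, grid_b, nodata=NODATA):
    return [
        [nodata if x == nodata or y == nodata else x - y
         for x, y in zip(row_a, row_b)]
        for row_a, row_b in zip(grid_a, grid_b)
    ]
```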

Next we can do the reverse. First we load our Damerham cloud, which was made previously and georeferenced in Agisoft’s SfM package. We then convert the ASCII grid to a LAS file using one of the many very handy tools found in the LAStools toolbox – las2las can do this for us. Now, with the two clouds ready, we can use the cloud-to-cloud distance tool to measure the difference between them.
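Conceptually, the cloud-to-cloud tool computes, for each point in the compared cloud, the distance to its nearest neighbour in the reference cloud. A brute-force sketch (CloudCompare itself uses an octree to make this tractable on millions of points):

```python
# Nearest-neighbour distance from each compared point to the
# reference cloud. O(n*m) brute force, illustration only.
import math

def cloud_to_cloud(compared, reference):
    return [min(math.dist(p, q) for q in reference) for p in compared]
```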


Histograms for cloud-to-cloud distance of LiDAR vs SfM clouds

Interesting! There seems to be a pretty big offset between the two. I decided to filter out all points <25cm and all points >60cm, as these made up such a small amount of the cloud, and generated a new extract, which is presented below.


Cloud-to-cloud distance between LiDAR data and SfM survey using GCPs

It’s a bigger difference than I was expecting to see, and I would love to test a few more SfM surveys in areas of simple topography that don’t change often, to see how they fare against what will become the national LiDAR.

I had one other dataset to hand today to try: a terrestrial LiDAR survey of a coastal cliff in Wales, featured in this paper. Here’s an SfM cloud I produced using the imagery from that paper. I loaded the relevant tile into QGIS, but needed to reproject it, as the survey was done in UTM 30N, a different coordinate system to the OSGB system of the LiDAR data. After performing the reprojection I continued in much the same way, though I won’t present the QGIS screengrabs as they leave something to be desired! On loading both clouds into CloudCompare I was greeted with quite the difference, as shown below.


Offset after reprojecting

This is the nature of reprojections and coordinate systems. I just did a simple shift in Z to line the cloud up more or less where it should sit, to visually check the fit – it looked pretty good!


SfM cloud draped on the LiDAR

The LiDAR data (this is the 1 m product, not even the highest resolution available!) is actually really amazing, its accuracy rivalling the results of this survey done just 3 years ago. I’ll include one more screen capture of the coastal town – bonus points for whoever can tell me what the strange streaking effect across the cloud is!


Town beside constitution hill