Chroma

I’ve been neglecting this blog of late, partly because I’ve been ill and partly because I’ve been focusing my writing efforts elsewhere, but I thought it was about time I put something up. Followers might remember that at EGU last year I presented a poster detailing the results of investigating how varying the greyscale input channel affects Structure-from-Motion (SfM) photogrammetric blocks. Whilst the results showed only very slight differences, I didn’t present one interesting, subtle effect, which shows how robust the process is to differences within images.

Within the SfM process, camera parameters which correct for distortions in the lens are fitted, and these can subsequently be extracted for separate analysis. Returning to the greyscaling theme for inclusion in my final thesis, I have been pulling out the lens models for each block, and noticed that the focal length fitted to each block changes subtly, but in a manner we might expect.

Chromatic aberration

Chromatic aberration is caused by the refractive index of the glass in the lens varying with the wavelength of light, which means the focal point of the image formed at each wavelength is slightly different. Thus, in colour images and in other optical equipment (I remember seeing it in many different sets of binoculars), we can see colour fringing around the edges of high contrast features.

Chromatic aberration seen at the front (red fringe) and back (green fringe) of the candle

Within photogrammetric blocks built from a single channel, we might expect the fitted focal length to be optimised for that colour as it interacts with the specific lens being used. Indeed, this is demonstrable in the tests I have run on an RGB image set collected at a cliff near Hunstanton, UK – we see a slight lengthening of the fitted focal length as more of the red channel is introduced to the image block, accounting for its interaction with the lens.

Self-calibrating bundle adjustment fits longer focal lengths to greyscale bands containing a greater proportion of the red channel from an RGB image. Colours of the plotted points represent the RGB colour combination the greyscale photogrammetric block was derived from. The larger circles represent pure red, green and blue channels.
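As a rough illustration of what goes into these blocks, the sketch below builds a single-band image as a weighted mix of the R, G and B channels of a colour photograph (the weights and filename are just examples, not the exact combinations used in the tests):

```python
import numpy as np
from PIL import Image

def mix_to_grey(img, w_r, w_g, w_b):
    """Weighted channel mix; the weights should sum to 1."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float64)
    grey = w_r * rgb[..., 0] + w_g * rgb[..., 1] + w_b * rgb[..., 2]
    return np.clip(grey, 0, 255).astype(np.uint8)

img = Image.open("cliff.jpg")                   # hypothetical input image
red_heavy = mix_to_grey(img, 0.8, 0.1, 0.1)     # block weighted towards red
blue_heavy = mix_to_grey(img, 0.1, 0.1, 0.8)    # block weighted towards blue
Image.fromarray(red_heavy).save("grey_red_heavy.png")
Image.fromarray(blue_heavy).save("grey_blue_heavy.png")
```

Each weighting produces a different greyscale image set, and it is the bundle adjustment run on each of those sets that yields the fitted focal lengths plotted above.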

Whilst this might be expected, I was surprised by how obvious the trend was, and it’s a testament to how sensitive SfM is to even small changes in image blocks. Watch this space for more insight into what this means for assessing the quality of images going into SfM procedures, and how we might gain intuition into image quality as a result of this trend!

Django greyscales

Access the application here.

I’ve been learning lots about the django web framework recently as I was hoping to take some of the ideas developed in my PhD and make them into public applications that people can apply to their research. One example of something which could be easily distributed as a web application is the code which serves to generate greyscale image blocks from RGB colour images, a theme touched on in my poster at EGU 2016.

Moving from a suggested improvement (as per the poster) using a complicated non-linear transformation to actually applying it in the general SfM workflow is no mean feat. For this contribution I’ve decided to use django along with the methods I use (all written in python, the base language of the framework) to make a minimum working example on a public web server (heroku) which takes an RGB image as user input and returns the same image processed with a number of greyscaling algorithms (many discussed in Verhoeven, 2015). These processed files can then be downloaded again and used in a bundle adjustment to test the differences between each greyscale image set. While it is not set up for bulk processing, the functionality could easily be extended.
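For a flavour of how such an app hangs together, here is a minimal, hypothetical django view of the same shape – an uploaded image in, a processed image out. It is not the actual application code: the form class, template name and single greyscaling step are stand-ins.

```python
import io

from django import forms
from django.http import HttpResponse
from django.shortcuts import render
from PIL import Image


class UploadForm(forms.Form):
    image = forms.ImageField()


def greyscale_view(request):
    form = UploadForm()
    if request.method == "POST":
        form = UploadForm(request.POST, request.FILES)
        if form.is_valid():
            rgb = Image.open(form.cleaned_data["image"]).convert("RGB")
            grey = rgb.convert("L")  # simple luminance conversion as a stand-in
            buf = io.BytesIO()
            grey.save(buf, format="PNG")
            return HttpResponse(buf.getvalue(), content_type="image/png")
    return render(request, "upload.html", {"form": form})
```

The real application runs a whole family of greyscaling methods on each upload and serves the results back in a grid.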

Landing page of the application, not a lot to look at I’ll admit 😉

To make things more intelligible, I’ve uploaded the application to github so people can see its inner workings, and potentially clean up any mistakes which might be present in the code. Many of the base methods were collated by Verhoeven in a Matlab script, which I spent some time translating to equivalent python code. These methods can be found in the support script im_proc.py.

Many of these methods aim to maximize the objective information within one channel, and are quite similar in design, so comparing them can be a difficult game of spot the difference. The scale can also end up inverted, which shouldn’t really matter to photogrammetric processing, but does give an interesting effect. Lastly, the second principal component (PC) gives some really interesting results, and I’ve spent lots of time poring over them. I’ve certainly learned a lot about PCA over the course of the last few years.
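For anyone unfamiliar with the PCA family of methods, a generic sketch is below. This is not the im_proc.py code, just the standard recipe: project the RGB values onto their first principal component and rescale. The arbitrary sign of an eigenvector is also why the scale sometimes comes out inverted.

```python
import numpy as np
from PIL import Image

rgb = np.array(Image.open("input.jpg").convert("RGB"), dtype=np.float64)  # hypothetical file
pixels = rgb.reshape(-1, 3)
pixels -= pixels.mean(axis=0)

# Eigen-decomposition of the RGB covariance; PC1 carries most of the variance,
# PC2 is the "interesting" second component mentioned above
cov = np.cov(pixels, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]

pc1 = (pixels @ eigvecs[:, order[0]]).reshape(rgb.shape[:2])

# Rescale to 0-255; the sign of a principal component is arbitrary,
# so the output can legitimately come out inverted
pc1 = (pc1 - pc1.min()) / (np.ptp(pc1) + 1e-12) * 255
Image.fromarray(pc1.astype(np.uint8)).save("pc1_grey.png")
```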

Sample result set from the application

You can access the web version here. All photos are resized so they’re <1,000 pixels in the longest dimension, though this can easily be modified, and the results are served up in a grid as per the screengrab. Photos are deleted after upload. There’s pretty much no styling applied, but it’s functional at least! If it crashes I blame the server.

The result is a cheap and cheerful web application which will hopefully introduce people to the visual differences present within greyscaling algorithms if they are investigating image pre-processing. I’ll be looking to make more simple web applications to support current research I’m working on in the near future, as I think public engagement is a key feature which has been lacking from my PhD thus far.

I’ll include a few more examples below for the curious.

 


Photogrammetry rules of thumb

I’ve uploaded a CloudCompare file of some fieldwork I did last year to my website here. It uses the UK national LiDAR inventory data, mentioned in the post here. I think it illustrates many of the fundamentals discussed here, and is a good starting point for thinking about network design.

80% overlap

This dates way back, and I’m unsure where I heard it first, but 80% overlap between images in a photogrammetric block with a nadir viewing geometry is an old rule of thumb from aerial imaging (here’s a quick example I found from 1955), and it carries through to SfM surveying. I think it should be a first port of call for amateurs surveying surfaces, as it’s very easy to jot down an estimate before undertaking a survey. For this, we should consider just cameras viewing along the surface normal (see this post) and estimate a ground sample distance to work out the camera spacing from there, as sketched below.
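A quick back-of-the-envelope version of that estimate (the camera numbers are examples only, roughly a full-frame body with a 24 mm lens):

```python
def camera_spacing(distance_m, focal_mm, sensor_width_mm, sensor_px, overlap=0.8):
    """Ground sample distance and base between exposures for a given forward overlap."""
    gsd_m = distance_m * (sensor_width_mm / sensor_px) / focal_mm  # metres per pixel
    footprint_m = gsd_m * sensor_px                                # footprint along the image width
    return gsd_m, footprint_m * (1.0 - overlap)

gsd, spacing = camera_spacing(distance_m=30, focal_mm=24,
                              sensor_width_mm=36, sensor_px=4256)
print(f"GSD ~ {gsd * 100:.1f} cm, move ~ {spacing:.1f} m between photos for 80% overlap")
```

For these example numbers, that works out at roughly a 1 cm GSD and a camera spacing of about 9 m.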

1:1000 rule

This has been superseded in recent years, but is still a decent rule of thumb for beginners in photogrammetry. It says that, in general (very general!), the surface precision of a photogrammetric block will be around 1/1000th of the distance to the surface. Thus, if we are imaging a cliff face from 30 m away, we can realistically expect precision to within 3 cm on that cliff. This is very useful, especially if you know beforehand the required accuracy of the survey. It is also a more stable starting point than GSD, whose quality as a metric can vary widely depending on your camera selection.

Convergent viewing geometry

Multi-angular data is intuitively desirable to gather, although with the additional data come additional processing considerations. Recently published literature has suggested that adding these views has the secondary effect of mitigating systematic errors within photogrammetric bundles. Thus, when imaging a surface, try to add cameras at angles away from the surface normal in order to build a ‘strong’ imaging network and avoid systematic error creeping in.

Shoot in RAW where possible

Whilst maybe unnecessary for many applications, RAW images allow the user to capture a much greater range of colour within an image, owing to the fact that values are recorded with 12/14 bits per channel rather than the 8 bits of JPG images. Adding to this, JPG compression can impact the quality of the resulting 3D point clouds, so using uncompressed images is advised.

Mind your motion

Whilst SfM implies that the camera is moving, we need to bear in mind that moving cameras are subject to motion blur, and this is sometimes difficult to detect, especially when shooting in tough conditions where you can’t afford to look at previews. You can therefore pre-calculate a reasonable top speed for the camera to be moving at, and stick to that. Given the literature and as advised by the OS, we recommend a maximum movement of 1.5 pixels of GSD over the course of each exposure; a quick calculation is sketched below.
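As a worked example (the GSD and shutter speed here are illustrative numbers, not recommendations):

```python
def max_speed(gsd_m, shutter_s, blur_px=1.5):
    """Maximum platform speed so motion blur stays under blur_px pixels of GSD."""
    return blur_px * gsd_m / shutter_s

# e.g. a 1 cm GSD and a 1/500 s exposure
print(f"{max_speed(0.01, 1 / 500):.1f} m/s")   # -> 7.5 m/s
```

A slower shutter or a finer GSD brings that tolerable speed down quickly, which is worth remembering when light levels drop.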

Don’t overparameterize the lens model

Very recently, studies have suggested that overparameterizing the lens model, particularly when poorer quality equipment is being used without good ground control, can lead to a completely unsuitable lens model being fitted, which will impact the quality of results. The advice: only fit the f, cx, cy, k1 and k2 parameters if you’re unsure of what you’re doing. This is far from the default settings in most software packages!

Conclusion

I had a few more points on my long list, but these 6 will suffice for now. Whilst I held back on camera selection here, you can read my previous camera selection post for some insight into what you should be looking for. Hope this helps!

EGU 2017

As a result of a travel grant awarded to me by the Remote Sensing and Photogrammetry Society, I was lucky enough to be able to return to EGU this year, albeit only for the Wednesday. I was there to present my research, in poster format, on raw image processing in structure-from-motion workflows. After arriving in Vienna on Tuesday afternoon I went straight to the hostel I was staying at to review my poster and finalize the sessions I would go to.

I got to the conference early in the morning, and set up my poster which was to be presented during the high resolution topography in the geosciences session. After taking a short break to grab a coffee, I headed over to the first session of the day – Imaging, measurements and modelling of physical and biological processes in soils. After last year’s fascinating run of discussions about soil and soil erosion, I decided my one day at EGU would be largely dedicated to that theme!

One particular talk which caught my eye used data fusion of laser scanning and NIR spectrometry, coupling the two datasets to examine feedbacks in soil processes. Some very cool kit and very blue-sky research – a good way to start the day!

After lunch, I almost exclusively attended a land degradation session, which featured some very interesting speakers. Many focused on integrating modern techniques for prevention of soil erosion and gully formation into farming practices in Africa. Interestingly, while the talks almost all focused on case studies and success in showing the physical effects of taking these actions, the Q & As were very much about social aspects, and how to bring about the cultural change within farming communities.

Another notable talk was given by a group aiming to promote a targeted carbon economy which sees citizens of carbon-consuming countries pay for the upkeep and management of forestry in developing communities. The presentation was very clear and put solid numbers on each factor introduced, which made it much easier to share the vision portrayed – definitely something I’ll be following in the future!

This led to the poster session in which I was participating, which was well attended and seemed to generate lots of interest. By the time I arrived to present at the evening session, the 15 A4 posters I had printed had been hoovered up, which is always a good sign! Over the course of the hour and a half I was visited by many people whom I had met before at various conferences – it’s always nice to have people you know come to say hello, especially as affable a bunch as geomorphologists!

The poster I presented

One group of particular interest was from Trinity College Dublin, where I did my undergraduate degree many moons ago. Niamh Cullen is doing research into coastal processes in the west of Ireland and is using photogrammetry to make some measurements, so we had a very good discussion on project requirements and best strategy. She’s also involved in the Irish Geomorphology group, whose remit is to establish a community of geomorphologists in Ireland.

In the evening I attended the ECR geomorphologist dinner, which was great fun, a good way to wrap up proceedings! I look forward to participating in EGU in the future in whatever capacity I can.

Notre Dame

SfM revisited

Snavely’s 2007 paper was one of the first breakout pieces of research bringing the power of bundle adjustment and self-calibration on unordered image collections to the community. It paved the way for the use of SfM in many other contexts, but I have always appreciated how simple and focused the piece of work was, and how well each step in the process is explained.

Reconstruction of Notre Dame from Snavely’s paper

For this contribution, I hoped to recreate a figure from this paper, in which the front facade of Notre Dame cathedral was reconstructed from internet images. I spent last weekend in Paris, so I decided I’d have a go at collecting my own images and pulling them together into a comparable model.

Whilst the doors of the cathedral were not successfully included, due to the hordes of tourists in each image, the final model came out OK and is viewable on my website here.

View of the Cathedral on Potree

HDR stacking

As a second mini-experiment, I thought I’d see how an HDR stack compared with a single exposure from my A7. The dynamic range of the A7, shooting from a tripod at ISO 50, is around 14 EV stops, so I wasn’t expecting a huge amount of dynamic range to fall outside this, though potentially parts of the windows could be retrieved. For the experiment, I used both Hugin‘s HDR functionality and a custom python script using openCV bindings to generate HDR images, which can be downloaded here.
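For reference, the core of an exposure-fusion script using the openCV bindings looks something like the sketch below (this is not the exact linked script, and the filenames are placeholders):

```python
import cv2
import numpy as np

# Bracketed exposures of the same scene (placeholder filenames)
paths = ["facade_under.jpg", "facade_mid.jpg", "facade_over.jpg"]
images = [cv2.imread(p) for p in paths]

# Align the stack in case the frames shifted slightly between exposures
align = cv2.createAlignMTB()
align.process(images, images)

# Mertens exposure fusion: needs no exposure times and no separate tone mapping
merge = cv2.createMergeMertens()
fusion = merge.process(images)  # float32 image, values roughly in [0, 1]

cv2.imwrite("fusion_mertens.jpg", np.clip(fusion * 255, 0, 255).astype("uint8"))
```

OpenCV also ships Debevec and Robertson HDR merging, which produce a radiance map that then needs tone mapping before display.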

Results were varied, with only Mertens’ method of HDR generation showing any notable improvement on the original input.


Some interesting things happened, including Hugin’s alignment algorithm misaligning the images (or miscalculating the lens distortion) to create a bowed-out facade by default – pretty interesting to see! Reading Robertson’s paper, I believe his method was designed more for grayscale images than for full colour, but I thought I’d leave the funky result in for completeness.

If we crop into the middle stained-glass window we can see some of the fine detail the HDR stacks might be picking up in comparison to the original JPG.


We can see a lot of the finer detail of the famous stained-glass windows revealed by Mertens’ HDR method, which is very cool to see! I’m impressed by just how big the difference is between it and the default off-camera JPG.

Looking at the raw file from the middle exposure, much of the detail of the stained glass is still there, though it has been clipped in the on-camera JPG processing.

Original image processed from RAW and contrast boosted showing fine detail on stained glass

It justifies many of the lines of reasoning I’ve presented in the last few contributions on image compression, as these fine details can often reveal features of interest.

I had actually planned to present the results of a different experiment first, but I’ll return to that in a later blog post as it requires much more explanation and data processing – watch this space for future contributions from Paris!

Leafiness

I thought it might be fun to try something different, and delve back into the world of satellite remote sensing (outside of Sentinel_bot, which isn’t a scientific tool). It’s been a while since I’ve tried anything like this, and my skills have definitely degraded somewhat, but I decided to fire up GrassGIS and give it a go with some publicly available data.

I set myself the simple task of trying to guess how ‘leafy’ streets are within an urban environment from Landsat images. Part of the rationale was that whilst we could count trees using object detectors, this requires high resolution images. While I might do a blog post on that at a later date, it was outside the scope of what I wanted to achieve here, which is at a very coarse scale. I will, however, be using a high resolution aerial image for ground truthing!

For the data, I found an urban area on USGS Earth Explorer with both high resolution orthoimagery and a reasonably cloud free image which were within 10 days of one another in acquisition. This turned out to be reasonably difficult to find, with the aerial imagery being the main limiting factor, but I found a suitable area in Cleveland, Ohio.

The aerial imagery has a 30 cm resolution, having been acquired using a Williams ZI Digital Mapping Camera, and was orthorectified prior to download. For the satellite data, a Landsat 5 Thematic Mapper scene was acquired covering the area of interest, with a resolution of 30 m in the bands we are interested in.

This experiment used the much-researched NDVI, a simple index for estimating vegetation presence and health, calculated as (NIR − Red) / (NIR + Red).

Initially, I loaded both datasets into QGIS to get an idea of the resolution differences.

Aerial image overlain on Landsat 5 TM data (green channel)

So a decent start – it looks like our data is valid in some capacity, and this should be an interesting mini-experiment to run! The ground truth data is detailed enough to let us know how the NDVI is doing, and will be used further downstream.

 

Onto GrassGIS, which I’ve always known has great features for processing satellite imagery, though I’d never used them. It’s also largely built on python, which is my coding language of choice, so I felt very comfortable troubleshooting the many errors fired at me!

The bands were loaded, the DN-to-reflectance conversion was done (automatically, using GrassGIS routines) and a subsequent NDVI raster was derived, along the lines sketched below.
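Something like the following, run through the GRASS python scripting library (module and parameter names are taken from a GRASS 7 session and the band/metadata naming is assumed, so treat it as a sketch rather than the exact commands I ran):

```python
import grass.script as gs

# DN -> top-of-atmosphere reflectance for the Landsat 5 TM bands
# (the metadata filename and band prefix are placeholders)
gs.run_command("i.landsat.toar",
               input="lt5_b", output="lt5_toar",
               metfile="LT05_MTL.txt", sensor="tm5")

# NDVI = (NIR - Red) / (NIR + Red); TM band 4 is NIR, band 3 is red
gs.mapcalc("ndvi = float(lt5_toar4 - lt5_toar3) / (lt5_toar4 + lt5_toar3)")
```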

Aerial image overlain on NDVI values. Lighter pixels denote a higher presence of vegetation

Cool! We’ve got our NDVI band, and can ground truth it against the aerial photo as planned.

Lighter values were seen around areas containing vegetation

Last on the list is grabbing a vector file with street data for the area of interest so we can limit the analysis to just pixels beside or on streets. I downloaded the data from here and did a quick clip to the area of interest.

Vector road network (in yellow) for our aerial image. Some new roads appear to have been built.

I then generated a buffer from the road network vector file, and from this created a raster mask so that only data within 20 m of a road would be included in the analysis (see the sketch below). The result is a first stab at our leafy streets index!
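In GRASS terms this boils down to a buffer and a mask, roughly as below (again, the map names are placeholders and the calls are a hedged reconstruction rather than the exact ones used):

```python
import grass.script as gs

# 20 m buffer around the road centrelines
gs.run_command("v.buffer", input="roads", output="roads_buf20", distance=20)

# Restrict all subsequent raster operations to pixels inside the buffer
gs.run_command("r.mask", vector="roads_buf20")

# Any statistics computed now only see NDVI values within 20 m of a road
gs.run_command("r.univar", map="ndvi")
```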


Visual inspection suggests it’s working reasonably well when compared with the reference aerial image, a few cropped examples are shown below.


Lastly, we can use this data to scale things up and make a map of the wider area in Cleveland. This would be simple to do for anywhere with decent road data.

This might be useful for sending people on the scenic route, particularly in unfamiliar locations. Another idea might be to use it in a property search, or to see if there’s a correlation with real estate prices. For now I’ve run out of time for this post, but I might return to the theme at a later date!

 

Control freak

In formulating my research design, I initially spent much time considering how best to control the experiments I was undertaking. Control, from a geoscientific photogrammetry perspective, can be quite tricky, as the number of settings and pieces of equipment involved can mean that one quickly loses the run of oneself.

Research planning

In my limited wisdom during the planning phase, I actually drew up a plan showing exactly where we would capture imagery from, right down to the OSGB coordinates and orientations of the cameras in the scene, using CloudCompare to help with visualization. I sourced the topographic data from the LiDAR inventory provided by the UK geomatics service, which supplied a DEM with 0.5 m resolution.

A screenshot showing camera positions from my research plan

I think this was a very worthwhile task – it was very demanding in terms of the skills I needed to use and made me think about how far I could take the experiment in the planning stage. While maybe overkill, I have visions of the near future where one might be able to task a robot with a built-in RTK-GPS to acquire images from these exact positions and orientations daily over a specified time period. This would eliminate much of the bias seen in studies done over the same research area but with different equipment and camera network geometries.

You could argue that this is already happening with programmable UAVs, though I haven’t seen anything that practical for a terrestrial scene. This is outside the scope of this post, but did provide motivation for expanding as much as possible in the planning phase.

So while we might be able to control camera positions and orientations, in the planning phase at least, there are some things we know are absolutely outside our control. The weather is the most obvious one, but with a cavalier attitude I wondered how I might go about controlling that too. This led me to consider the practicalities of simulating the full SfM workflow.

To attempt this I took a model of Hunstanton which had previously been generated from a reconnaissance mission to Norfolk last May. It had been produced using Agisoft Photoscan and exported as a textured ‘.obj’ file, a format I wasn’t overly familiar with, but would become so. What followed was definitely an interesting experiment, though I’m willing to admit it probably wasn’t the most productive use of time.

Controlling the weather

Blender is open source 3D animation software which I had previously been toying around with for video editing. It struck me that, considering Blender has a physically based render engine, there might be reasonable ways of simulating varying camera parameters within a scene, with the lighting provided by a sun which we control.

The Hunstanton obj file, with the Sun included

So the idea here is to put a sun directly overhead and render some images of the cliff by moving the camera through the scene. For the initial proof of concept I took 5 images along a track, using settings imitating a Nikon D700 with a 24 mm lens, focused to 18 m (the approximate distance to the cliff, from CloudCompare), with the shutter speed set to 1/500 s (stationary camera) and ISO at 200. The aperture was f/8, but diffraction effects can’t be introduced due to limitations in the physics engine. The 5 images are displayed below, with settings from the Physical Camera python plugin included at the end; a rough sketch of the scene setup is also given below.
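The setup itself is only a few lines of Blender python; a hedged sketch is below (property names assume Blender 2.8+, the positions and output paths are placeholders, and the exposure settings mentioned above come from the Physical Camera plugin rather than anything shown here):

```python
import bpy

# Sun directly overhead of the imported .obj cliff model
bpy.ops.object.light_add(type='SUN', location=(0.0, 0.0, 100.0))

# Camera imitating a 24 mm lens on a full-frame body, focused at ~18 m
bpy.ops.object.camera_add(location=(0.0, -18.0, 2.0))
cam = bpy.context.object
cam.data.lens = 24.0
cam.data.sensor_width = 36.0
cam.data.dof.use_dof = True
cam.data.dof.focus_distance = 18.0
bpy.context.scene.camera = cam

# Render five frames along a track parallel to the cliff face
for i, x in enumerate(range(-10, 11, 5)):
    cam.location.x = x
    bpy.context.scene.render.filepath = f"//render_{i:02d}.png"
    bpy.ops.render.render(write_still=True)
```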


Full control! We have an absolute reference against which to compare the newly generated model, we can vary the camera settings to simulate the effects of motion blur, noise and focus, and we can then put the degraded image sets through the software!

Plugging these 5 images back into Agisoft, masking the regions where there is no data, produces a new point cloud derived purely from the simulation.

Dense point cloud produced from the simulated images

We can then load both the model and derived point cloud into CloudCompare and measure the Cloud-to-mesh distance.

From the front

From the back

This is where I left my train of thought, as I needed to return to doing some practical work. I still think there could be some value in this workflow, though it definitely needs to be hashed out some more – the potential for varying network geometry on top of all the other settings is very attractive!

For now though, it’s back to real world data for me, as I’m still producing the results for the fieldwork I did back in October!