Django greyscales

Access the application here.

I’ve been learning lots about the Django web framework recently, as I’ve been hoping to take some of the ideas developed in my PhD and turn them into public applications that people can apply to their research. One example of something which could easily be distributed as a web application is the code which generates greyscale image blocks from RGB colour images, a theme touched on in my poster at EGU 2016.

Moving from a suggested improvement (as per the poster) using a complicated non-linear transformation to actually applying it to the general SfM workflow is no mean feat. For this contribution I’ve decided to use Django, along with the methods I use (all written in Python, the base language of the framework), to make a minimum working example on a public web server (Heroku) which takes an RGB image as user input and returns the same image processed with a number of greyscaling algorithms (many discussed in Verhoeven, 2015) as output. These processed files can then be re-downloaded and used in a bundle adjustment to test the differences between each greyscale image set. While it isn’t set up to do bulk processing, the functionality could easily be extended.
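To give a flavour of how the plumbing works, here is a minimal sketch of the upload-and-convert view. It is illustrative rather than the actual application code: the im_proc.METHODS dictionary and the template names are assumptions of mine.

```python
# views.py -- minimal sketch of the upload-and-convert flow (illustrative only).
# "im_proc.METHODS" and the template names are assumptions, not the real app code.
import cv2
import numpy as np
from django.shortcuts import render

import im_proc  # hypothetical module holding the greyscaling functions


def convert(request):
    """Accept an uploaded RGB image and render its greyscale variants."""
    if request.method == "POST" and request.FILES.get("image"):
        raw = np.frombuffer(request.FILES["image"].read(), np.uint8)
        rgb = cv2.imdecode(raw, cv2.IMREAD_COLOR)
        results = {name: fn(rgb) for name, fn in im_proc.METHODS.items()}
        # ...each result would then be encoded to PNG and passed to the template...
        return render(request, "results.html", {"results": results})
    return render(request, "upload.html")
```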


Landing page of the application, not a lot to look at I’ll admit 😉

To make things more intelligible, I’ve uploaded the application to GitHub so people can see its inner workings, and potentially clean up any mistakes which might be present within the code. Many of the base methods were collated by Verhoeven in a Matlab script, which I spent some time translating to the equivalent Python code. These methods are found in the support script im_proc.py.

Many of these methods aim to maximize the objective information within one channel, and they are quite similar in design, so comparing them can be a difficult game of spot the difference. The scale can also get inverted, which shouldn’t really matter to photogrammetric algorithms, but does give an interesting effect. Lastly, the second principal component gives some really interesting results, and I’ve spent a lot of time poring over them. I’ve certainly learned a lot about PCA over the course of the last few years.
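As a rough illustration (and not the actual im_proc.py implementations), two flavours of method look something like this:

```python
import numpy as np


def luminance(rgb):
    """Classic weighted-sum greyscale (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114


def pca_grey(rgb, component=0):
    """Project each pixel onto a principal component of the RGB values.

    component=0 maximises the variance kept in one channel; component=1 is the
    'second PC' mentioned above, which often looks odd but interesting.
    """
    pixels = rgb.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)                             # centre the channels
    _, _, vt = np.linalg.svd(pixels, full_matrices=False)     # rows of vt are the PCs
    grey = pixels @ vt[component]                             # project onto the chosen PC
    grey = (grey - grey.min()) / (grey.max() - grey.min())    # rescale to [0, 1]
    return grey.reshape(rgb.shape[:2])
```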


Sample result set from the application

You can access the web version here. All photos are resized so they’re <1,000 pixels in the longest dimension, though this can easily be modified, and the results are served up in a grid as per the screengrab. Photos are deleted after upload. There’s pretty much no styling applied, but it’s functional at least! If it crashes I blame the server.

The result is a cheap and cheerful web application which will hopefully introduce people investigating image pre-processing to the visual differences between greyscaling algorithms. I’ll be looking to make more simple web applications to support my current research in the near future, as I think public engagement is a key feature which has been lacking from my PhD thus far.

I’ll include a few more examples below for the curious.

 

[Slideshow: further example outputs from the application]

Writing blues

After having been ill for the last week and a half, I’m currently trying to get back into the swing of writing, which I find is by far the hardest part of research when it really doesn’t/shouldn’t need to be. One thing in particular I find very difficult is starting – I often pore over the first words or sentence for a very long time when I do sit down to write.

One forward step I’ve taken in an attempt to mitigate this is to give myself as many opportunities as possible to start writing. While obviously this could involve carrying a pen and paper around everywhere and waiting for inspiration to hit, I think the practicalities of translating esoteric squiggles and keeping the notes in decent order are a bit beyond me, so I rarely give it a proper go.

Enter the Bluetooth keyboard, a product recommended to me by my supervisor to ensure you can start taking notes/writing wherever you are. I was sceptical at first, due to the variable key size and the slight faff of connecting to my phone via Bluetooth, but after giving it a couple of hours on a recent visit to the RGS I was sold. Currently I’m typing up a version of this blog post on my phone, sitting on a train from Holyhead to Chester on the way back to London. I’m getting great pleasure from watching the trees go by after every few sentences!


Product photo from Microsoft’s site

While I know this entry will read like an advertorial, that isn’t the intention – I’m just very wary of the summer’s PhD writing ahead, and am glad to have an excuse to do the lion’s share sitting in a park rather than in my stuffy office! For now, back to writing, though I’m preparing a more technical blog post which should be finished tomorrow.


Spotted from the train in Wales

Sentinel bot source

I’ve been sick the last few days, which hasn’t helped with staying focused, so I decided to do a few menial tasks, such as cleaning up my references, and some that are a little more involved but not really that demanding, such as adding documentation to the Twitter bot I wrote.

While it’s still a bit messy, I think it’s about time I started putting some code online, particularly because I love doing it so much. When you code for yourself, however, you don’t have to face the wrath of the computer scientists telling you what you’re doing wrong! It’s actually similar in feeling to editing writing: the more you do it, the better you get.

As such, I’ve been using PyCharm lately, which has forced me to start using PEP 8 styling, and I have to say it’s been a blessing. There are so many more reasons than I ever thought for using a full-featured IDE, and I’ll never go back to hacky Notepad++ scripts, love them as I may.

In any case, I hope to have some time someday to add functionality – for example, having people tweet coordinates plus a date @sentinel_bot and have it respond with a decent image close to the request. This kind of very basic engagement could serve people who mightn’t be bothered going to Earth Explorer, or who are dissatisfied with Google Earth’s mosaicking or lack of coverage over a certain time period.
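To sketch what I mean – purely illustrative, not part of the current bot, and the request format is an assumption of mine – parsing such a tweet might look like:

```python
import re
from datetime import datetime

# Assumed request format: "@sentinel_bot <lat>, <lon> YYYY-MM-DD"
PATTERN = re.compile(
    r"(?P<lat>-?\d+(?:\.\d+)?)[,\s]+(?P<lon>-?\d+(?:\.\d+)?)\s+(?P<date>\d{4}-\d{2}-\d{2})"
)


def parse_request(text):
    """Return (lat, lon, date) from a tweet, or None if it doesn't match."""
    match = PATTERN.search(text)
    if not match:
        return None
    lat, lon = float(match.group("lat")), float(match.group("lon"))
    date = datetime.strptime(match.group("date"), "%Y-%m-%d").date()
    return lat, lon, date


print(parse_request("@sentinel_bot 53.3, -4.6 2017-06-01"))
# -> (53.3, -4.6, datetime.date(2017, 6, 1))
```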

The Sentinel missions offer a great deal of opportunity for scientists in the future, and I’ll be trying my best to think of more ways to engage the community as a result.

Find the source code here, please be gentle, it was for fun 🙂


Photogrammetry rules of thumb

I’ve uploaded a CloudCompare file of some fieldwork I did last year to my website here. It uses the UK national LiDAR inventory data, mentioned in the post here. I think it illustrates many of the fundamentals discussed here, and is a good starting point for thinking about network design.

80% overlap

This dates way back, and I’m unsure of where I heard it first, but 80% overlap between images in a photogrammetric block with a nadir viewing geometry is an old rule of thumb from aerial imaging (here’s a quick example I found from 1955), and it carries through to SfM surveying. I think it should be the first port of call for amateurs doing surveys of surfaces, as it’s very easy to jot down an estimate before undertaking a survey. For this, we need only consider camera positions viewing along the surface normal (see this post) and estimate a ground sample distance to work out the camera spacing from there.
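As a back-of-the-envelope example (with made-up numbers, just to show the arithmetic):

```python
sensor_width_px = 6000   # image width in pixels (assumed camera)
gsd_m = 0.005            # estimated ground sample distance, metres per pixel
overlap = 0.8            # target 80% overlap between adjacent images

footprint_m = sensor_width_px * gsd_m      # ground footprint of one image
spacing_m = footprint_m * (1 - overlap)    # advance 20% of the footprint per shot
print(f"Footprint: {footprint_m:.1f} m, camera spacing: {spacing_m:.1f} m")
# -> Footprint: 30.0 m, camera spacing: 6.0 m
```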

1:1000 rule

This has been superseded in recent years, but it is still a decent rule of thumb for beginners in photogrammetry. It says that, in general (very general!), the surface precision of a photogrammetric block will be around 1/1000th of the distance to the surface. Thus, if we are imaging a cliff face from 30 m away, we can realistically expect precision to within about 3 cm on that cliff. This is very useful, especially if you know beforehand the required accuracy of the survey. It is also a more stable starting point than GSD, whose quality as a metric can vary widely depending on your camera selection.

Convergent viewing geometry

Multi-angular data is intuitively desirable to gather, and while the additional data brings additional processing considerations, recently published literature has suggested that adding these views has the secondary effect of mitigating systematic errors within photogrammetric bundles. Thus, when imaging a surface, try to add cameras at angles off the surface normal in order to build a ‘strong’ imaging network and avoid systematic error creeping in.

Shoot in RAW where possible

Whilst maybe unnecessary for many applications, RAW images allow the user to capture a much greater range of colour within an image, as colours are recorded with 12 or 14 bits per channel (4,096 or 16,384 levels) rather than the 8 bits (256 levels) of JPG images. Adding to this, JPG compression can impact the quality of the resulting 3D point clouds, so using uncompressed images is advised.

Mind your motion

Whilst SfM suggests that the camera is moving, we need to bear in mind that moving cameras are subject to blur, and this is sometimes difficult to detect, especially when shooting in tough conditions where you can’t afford to look at previews. Thus, you can pre-calculate a reasonable top speed for the camera to be moving at, and stick to that. We recommend a maximum of 1.5 pixels of motion (in GSD terms) over the course of each exposure, in line with the literature and as advised by the OS.
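A quick sketch of that pre-calculation, with illustrative values of my own rather than anything from a specific survey:

```python
gsd_m = 0.005           # ground sample distance, metres per pixel (assumed)
exposure_s = 1 / 1000   # shutter speed in seconds (assumed)
max_blur_px = 1.5       # allowed motion during one exposure, in pixels

max_speed_ms = max_blur_px * gsd_m / exposure_s
print(f"Maximum platform speed: {max_speed_ms:.1f} m/s")
# -> Maximum platform speed: 7.5 m/s
```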

Don’t overparameterize the lens model

Very recently, studies have suggested that overparameterizing the lens model, particularly when poorer-quality equipment is being used without good ground control, can lead to a completely unsuitable lens model being fitted, which will impact the quality of the results. The advice: only fit the f, cx, cy, k1 and k2 parameters if you’re unsure of what you’re doing. This is far from the default settings in most software packages!
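For the curious, here is a hedged sketch of what that reduced model amounts to – a standard radial-distortion projection written out by hand, not taken from any particular software package, with illustrative parameter values:

```python
def project(point_cam, f, cx, cy, k1, k2):
    """Project a 3D point in camera coordinates to pixel coordinates
    using only the reduced parameter set discussed above."""
    x, y = point_cam[0] / point_cam[2], point_cam[1] / point_cam[2]  # normalised coords
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2                              # radial terms only
    return cx + f * x * radial, cy + f * y * radial


# Illustrative values only
print(project((0.2, -0.1, 5.0), f=3500, cx=3000, cy=2000, k1=-0.05, k2=0.01))
```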

Conclusion

I had a few more points on my long list, but for now these six will suffice. Whilst I held back on camera selection here, you can read my previous camera selection post for some insight into what you should be looking for. Hope this helps!

Notre Dame

SfM revisited

Snavely’s 2007 paper was one of the first breakout pieces of research bringing the power of bundle adjustment and self-calibration of unordered image collections to the community. It paved the way for the use of SfM in many other contexts, but I always appreciated how simple and focused the piece of work was, and how well explained each step in the process is.


Reconstruction of Notre Dame from Snavely’s paper

For this contribution, I had hoped to recreate a figure from this paper, in which the front facade of the Notre Dame cathedral was reconstructed from internet images. I spent last weekend in Paris, so I decided to have a go at collecting my own images and pulling them together into a comparable model.

Whilst the doors of the cathedral were not successfully included, due to the hordes of tourists in each image, the final model came out OK, and it is viewable on my website here.


View of the Cathedral on Potree

HDR stacking

As a second mini-experiment, I thought I’d see how an HDR stack compared with a single exposure from my A7. The dynamic range of the A7, shooting from a tripod at ISO 50, is around 14 EV, so I wasn’t expecting a huge amount of dynamic range to fall outside this, though potentially parts of the windows could be retrieved. For the experiment, I used both Hugin‘s HDR functionality and a custom Python script using OpenCV bindings for generating HDR images, which can be downloaded here.
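The OpenCV side of that script boils down to something like the following – a hedged sketch rather than the downloadable code itself, and the file names are placeholders:

```python
import cv2

# Bracketed exposures of the facade -- placeholder file names
paths = ["notre_dame_-2ev.jpg", "notre_dame_0ev.jpg", "notre_dame_+2ev.jpg"]
images = [cv2.imread(p) for p in paths]

# Align the stack (copes with small hand-held shifts), then fuse with Mertens' method
cv2.createAlignMTB().process(images, images)
fused = cv2.createMergeMertens().process(images)

# Mertens fusion returns a float image in roughly [0, 1]; rescale before saving
cv2.imwrite("notre_dame_fused.png", (fused * 255).clip(0, 255).astype("uint8"))
```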

Results were varied, with really only Mertens’ method of HDR generation showing any notable improvement on the original input.

[Slideshow: HDR results compared with the original exposure]

Some interesting things happened, including Hugin’s alignment algorithm misaligning the images (or miscalculating the lens distortion) to create a bowed-out facade by default – pretty interesting to see! Reading Robertson’s paper, I believe his method was designed more for greyscale images than full colour, but I thought I’d leave the funky result in for completeness.

If we crop into the middle stained-glass window, we can see some of the fine detail the HDR stacks might be picking up in comparison to the original JPG.

[Slideshow: crops of the central stained-glass window from each method]

We can see a lot of the finer detail of the famous stained-glass windows revealed by Mertens’ HDR method, which is very cool to see! I’m impressed with just how big the difference is between it and the default out-of-camera JPG.

Looking at the raw file from the middle exposure, much of the detail of the stained glass is still there, though it has been clipped in the on-camera JPG processing.


Original image processed from RAW and contrast boosted showing fine detail on stained glass

It justifies many of the lines of reasoning I’ve presented in the last few contributions on image compression, as these fine details can often reveal features of interest.

I had actually planned to present the results from a different experiment first, though I will be returning to that in a later blog post, as it requires much more explanation and data processing. Watch this space for future contributions from Paris!

EO Detective interviews Tim Peake

I saw this on EODetective‘s Twitter account – an interview with Tim Peake about the process behind the astronaut photography captured on board the ISS. I’ve actually used a strip of these images before to make a photogrammetric model of Italy, and was very curious about the process behind their capture.

It is interesting to see they use unmodified Nikon D4s – I was curious about why they were using a relatively small aperture (f/11) for the capture of the images I had downloaded, and while ISO was mentioned, I’m still left wondering. I guess they don’t really think about it as they are very busy throughout the day, though he did mention they leave the cameras in fully automatic most of the time. While you could potentially get better-quality images by setting a wider aperture, as per DxOMark’s testing on 24 mm lenses, I’m guessing the convenience of using fully-auto settings outweighs the cost.

But that’s not really in the spirit of the interview, which is more to get a general sense of life aboard the ISS.


A sample image from the ISS

MP map

Just a quick entry detailing an interactive map showing MPs’ constituencies and party membership, created at the request of a friend. It uses Leaflet.js and GeoJSON to draw the map, meaning it’s standalone HTML which can be easily moved and modified.


It’s based largely on the choropleth example included in the Leaflet documentation and was pretty interesting to make!

You can see it at my website here.