Django greyscales

Access the application here.

I’ve been learning lots about the django web framework recently as I was hoping to take some of the ideas developed in my PhD and make them into public applications that people can apply to their research. One example of something which could be easily distributed as a web application is the code which serves to generate greyscale image blocks from RGB colour images, a theme touched on in my poster at EGU 2016.

Moving from a suggested improvement (as per the poster) using a complicated non-linear transformation to actually applying it to the general SfM workflow is no mean feat. For this contribution I’ve decided to use django along with the methods I’ve developed (all written in python, the base language of the framework) to make a minimum working example on a public web server (heroku), which takes an RGB image as user input and returns the same image processed with a number of greyscaling algorithms (many discussed in Verhoeven, 2015). These processed files can then be redownloaded and used in a bundle adjustment to test the differences between each greyscale image set. While the app isn’t set up to do bulk processing, the functionality could easily be extended.

Landing page of the application, not a lot to look at I’ll admit 😉

To make things more intelligible, I’ve uploaded the application to github so people can see its inner workings, and potentially clean up any mistakes which might be present in the code. Many of the base methods were collated by Verhoeven in a Matlab script, which I spent some time translating to the equivalent python code. These methods are found in the support script im_proc.py.

Many of these methods aim to maximize the objective information within one channel, and are quite similar in design, so comparing them can be a difficult game of spot the difference. The scale can also get inverted, which shouldn’t really matter to photogrammetric algorithms, but does give an interesting effect. Lastly, the second principal component (PC) gives some really interesting results, and I’ve spent lots of time poring over them. I’ve certainly learned a lot about PCA over the course of the last few years.
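As a rough sketch of the PCA-based greyscaling idea (not the exact code from im_proc.py; `pca_greyscale` is a name of my choosing), the decomposition can be done with numpy alone:

```python
import numpy as np

def pca_greyscale(rgb, component=0):
    """Project an RGB image onto a principal component of its colour space.

    rgb: (H, W, 3) uint8 array; component: 0 for PC1, 1 for PC2.
    Returns an (H, W) uint8 greyscale image.
    """
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    pixels -= pixels.mean(axis=0)           # centre each channel
    cov = np.cov(pixels, rowvar=False)      # 3x3 colour covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    pc = eigvecs[:, ::-1][:, component]     # pick PC by variance rank
    projected = pixels @ pc
    # Rescale to 0-255; note the sign of a PC is arbitrary, which is one
    # reason the greyscale output can appear inverted between methods.
    projected -= projected.min()
    projected /= max(projected.max(), 1e-12)
    return (projected * 255).reshape(rgb.shape[:2]).astype(np.uint8)
```

Passing `component=1` gives the second-PC image mentioned above.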

Sample result set from the application

You can access the web version here. All photos are resized so they’re <1,000 pixels in the longest dimension, though this can easily be modified, and the results are served up in a grid as per the screengrab. Photos are deleted after upload. There’s pretty much no styling applied, but it’s functional at least! If it crashes I blame the server.
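The resizing step is only a couple of lines; here’s a minimal sketch using Pillow (the 1,000-pixel limit is from the text, while the function name is hypothetical):

```python
from PIL import Image

def resize_longest(img, max_dim=1000):
    """Shrink an image so its longest side is at most max_dim pixels."""
    scale = max_dim / max(img.size)
    if scale >= 1:
        return img  # already small enough, leave untouched
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)
```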

The result is a cheap and cheerful web application which will hopefully introduce people to the visual differences present within greyscaling algorithms if they are investigating image pre-processing. I’ll be looking to make more simple web applications to support current research I’m working on in the near future, as I think public engagement is a key feature which has been lacking from my PhD thus far.

I’ll include a few more examples below for the curious.

 

(Slideshow of further greyscale examples.)

Writing blues

After having been ill for the last week and a half, I’m currently trying to get back into the swing of writing, which I find is by far the hardest part of research, though it really shouldn’t need to be. One thing in particular I find very difficult is starting – I often pore over the first words/sentence for a very long time when I do sit down to write.

One forward step I’ve taken in an attempt to mitigate this is to give myself as many opportunities as possible to start writing. While obviously this could involve carrying a pen and paper around everywhere and waiting for inspiration to hit, I think the practicalities of translating esoteric squiggles and keeping the notes in decent order are a bit beyond me, so I rarely give it a proper go.

Enter the bluetooth keyboard, a product recommended to me by my supervisor to ensure you can start taking notes/writing wherever you are. I was skeptical at first, due to the variable key size and the slight faff of connecting via bluetooth to my phone, but after giving it a couple of hours on a recent visit to the RGS I was sold. I’m currently typing up a version of this blog post on my phone, sitting on a train from Holyhead to Chester on the way back to London. I’m getting great pleasure from watching the trees go by after every few sentences!

Product photo from Microsoft’s site

While I know this entry will read like an advertorial, that isn’t the intention – I’m just very wary of the summer’s PhD writing ahead, and am glad to have an excuse to do the lion’s share sitting in a park rather than in my stuffy office! For now, back to writing, though I’m preparing a more technical blog post which should be finished later tomorrow.

Spotted from the train in Wales

Sentinel bot source

I’ve been sick the last few days, which hasn’t helped in staying focused, so I decided to do a few menial tasks, such as cleaning up my references, and some slightly more involved but not really demanding ones, such as adding documentation to the twitter bot I wrote.

While it’s still a bit messy, I think it’s due time I started putting some code online, particularly because I love writing it so much. When you code for yourself, however, you don’t have to face the wrath of the computer scientists telling you what you’re doing wrong! It’s actually similar in feeling to editing writing: the more you do it, the better you get.

As such, I’ve been using Pycharm lately which has forced me to start using PEP8 styling and I have to say it’s been a blessing. There are so many more reasons than I ever thought for using a very high level IDE and I’ll never go back to hacky notepad++ scripts, love it as I may.

In any case, I hope to have some time someday to add functionality – for example, having people tweet coordinates + a date @sentinel_bot and having it respond with a decent image close to the request. This kind of very basic engagement could serve people who mightn’t be bothered going to Earth Explorer, or who are dissatisfied with Google Earth’s mosaicking or lack of coverage over a certain time period.

The Sentinel missions offer a great deal of opportunity for scientists in the future, and I’ll be trying my best to think of more ways to engage the community as a result.

Find the source code here, please be gentle, it was for fun 🙂

Photogrammetry rules of thumb

I’ve uploaded a CloudCompare file of some fieldwork I did last year to my website here. It uses the UK national LiDAR inventory data, mentioned in the post here. I think it illustrates many of the fundamentals discussed below, and is a good starting point for thinking about network design.

80% overlap

This dates way back, and I’m unsure of where I heard it first, but 80% overlap between images in a photogrammetric block with a nadir viewing geometry is an old rule of thumb from aerial imaging (here’s a quick example I found from 1955), and it carries through to SfM surveying. I think it should be a first port of call for amateurs doing surveys of surfaces, as it’s very easy to jot down an estimate before undertaking a survey. For this, we need only consider camera positions in a plane orthogonal to the surface normal (see this post) and estimate a ground sample distance to work out the camera spacing from there.
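As an illustration of jotting down that estimate, here’s a sketch assuming a simple pinhole footprint model (the function and example numbers are mine, not from any particular survey): the ground footprint is distance × sensor size / focal length, and 80% overlap means moving 20% of that footprint between exposures.

```python
def camera_spacing(height_m, focal_mm, sensor_mm, overlap=0.8):
    """Distance to move between exposures for a given forward overlap.

    height_m: distance to the surface; focal_mm / sensor_mm: lens focal
    length and sensor dimension along the direction of travel.
    """
    footprint = height_m * sensor_mm / focal_mm  # ground footprint (m)
    return footprint * (1 - overlap)

# e.g. 30 m from a cliff, 24 mm lens, 36 mm (full-frame) sensor width:
# footprint = 45 m, so move 9 m between shots for 80% overlap
```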

1:1000 rule

This has been superseded in recent years, but is still a decent rule of thumb for beginners in photogrammetry. It says that, in general (very general!), the surface precision of a photogrammetric block will be around 1/1000th of the distance to the surface. Thus, if we are imaging a cliff face from 30 m away, we can realistically expect precision to within 3 cm on that cliff. This is very useful, especially if you know beforehand the required accuracy of the survey. It is also a more stable starting point than GSD, whose quality as a metric can vary widely depending on your camera selection.
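The arithmetic here is trivial, but wrapping it in a helper makes the assumed ratio explicit (the function name is mine):

```python
def expected_precision(distance_m, ratio=1000):
    """Rule-of-thumb surface precision: 1/1000th of camera-to-surface distance."""
    return distance_m / ratio

# imaging from 30 m away -> roughly 0.03 m (3 cm) precision
```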

Convergent viewing geometry

Multi-angular data is intuitively desirable to gather, and with the additional data come additional processing considerations, but recently published literature suggests that adding these views has the secondary effect of mitigating systematic errors within photogrammetric bundles. Thus, when imaging a surface, try to add cameras at off angles from the surface normal in order to build a ‘strong’ imaging network and avoid systematic error creeping in.

Shoot in RAW where possible

Whilst maybe unnecessary for many applications, RAW images allow the user to capture a much greater range of colour within an image, owing to the fact that colours are written on 12 or 14 bits per channel rather than the 8 of JPG images. Added to this, JPG compression can impact the quality of the resulting 3D point clouds, so using uncompressed images is advised.
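To put numbers on the bit-depth point: the count of distinct tonal levels per channel doubles with every extra bit, so RAW’s advantage is larger than the “12 vs 8” figures might suggest.

```python
def tonal_levels(bits):
    """Number of distinct values per colour channel at a given bit depth."""
    return 2 ** bits

# JPG (8-bit): 256 levels; RAW (12/14-bit): 4096 / 16384 levels
```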

Mind your motion

Whilst SfM implies that the camera is moving, we need to bear in mind that moving cameras are subject to motion blur, and this is sometimes difficult to detect, especially when shooting in tough conditions where you can’t afford to look at previews. You can, however, pre-calculate a reasonable top speed for the camera and stick to that. We recommend a maximum movement of 1.5 pixels of GSD over the course of each exposure, given the literature and as advised by the OS.
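That pre-calculation can be sketched as follows; the 1.5-pixel figure is from the text, while the function name and example numbers are made up for illustration:

```python
def max_camera_speed(gsd_m, exposure_s, max_blur_px=1.5):
    """Fastest the camera can move so blur stays under max_blur_px of GSD.

    gsd_m: ground sample distance (m/pixel); exposure_s: shutter time (s).
    Returns a top speed in m/s.
    """
    return max_blur_px * gsd_m / exposure_s

# e.g. 1 cm GSD at a 1/500 s shutter: 1.5 * 0.01 * 500 = 7.5 m/s
```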

Don’t overparameterize the lens model

Very recently, studies have suggested that overparameterizing the lens model, particularly when poorer quality equipment is being used without good ground control, can lead to a completely unsuitable lens model being fitted, which will impact the quality of results. The advice: only fit the f, cx, cy, k1 and k2 parameters if you’re unsure of what you’re doing. This is far from the default settings in most software packages!

Conclusion

I had a few more points on my long list, but for now these six will suffice. Whilst I held back on camera selection here, you can read my previous camera selection post for some insight into what you should be looking for. Hope this helps!

EGU 2017

As a result of a travel grant awarded to me by the Remote Sensing and Photogrammetry Society, I was lucky enough to be able to return to EGU this year, albeit only for the Wednesday. I was there to present my research, in poster format, on raw image processing in structure-from-motion workflows. After arriving in Vienna on Tuesday afternoon I went straight to the hostel I was staying at to review my poster and to finalize the sessions I would go to.

I got to the conference early in the morning, and set up my poster which was to be presented during the high resolution topography in the geosciences session. After taking a short break to grab a coffee, I headed over to the first session of the day – Imaging, measurements and modelling of physical and biological processes in soils. After last year’s fascinating run of discussions about soil and soil erosion, I decided my one day at EGU would be largely dedicated to that theme!

One particular talk which caught my eye used data fusion of laser scanning and NIR spectrometry, with the goal of coupling the two datasets for use in examining feedbacks in soil processes. Some very cool kit, and very blue-sky research – a good way to start the day!

After lunch, I almost exclusively attended a land degradation session, which featured some very interesting speakers. Many focused on integrating modern techniques for prevention of soil erosion and gully formation into farming practices in Africa. Interestingly, while the talks almost all focused on case studies and success in showing the physical effects of taking these actions, the Q & As were very much about social aspects, and how to bring about the cultural change within farming communities.

Another notable talk was given by a group who were aiming to promote the use of a targeted carbon economy which sees citizens from carbon consuming countries pay for the upkeep and management of forestry in developing communities. The presentation was very clear and set solid numbers onto each factor introduced, which meant it was much easier to share the vision portrayed, definitely something I’ll be following in the future!

This led to the poster session in which I was participating, which was well attended and seemed to generate lots of interest. By the time I arrived to present at the evening session, the 15 A4 posters I had printed had been hoovered up, which is always a good sign! Over the course of the hour and a half I was visited by many people who I had met before at various conferences – it’s always nice to have people you know come to say hello, especially as affable a bunch as geomorphologists!

The poster I presented

One group of particular interest was from Trinity College Dublin, where I had done my undergraduate degree many moons ago. Niamh Cullen is doing research into coastal processes in the West of Ireland and is using photogrammetry to make some measurements, so we had a very good discussion on project requirements/best strategy. She’s also involved in the Irish Geomorphology group, whose remit is to establish a community of geomorphologists in Ireland.

In the evening I attended the ECR geomorphologist dinner, which was great fun, a good way to wrap up proceedings! I look forward to participating in EGU in the future in whatever capacity I can.

EO Detective interviews Tim Peake

I saw this on EODetective‘s twitter account – an interview with Tim Peake about the process behind the astronaut’s photography generated on board the ISS. I’ve actually used a strip of them before to make a photogrammetric model of Italy, and was very curious about the process behind their capture.

Interesting to see they use unmodified Nikon D4s – I was curious about why they were using a relatively small aperture (f/11) for the capture of the images I had downloaded, and while ISO was mentioned I’m still left wondering. I guess they don’t really think about it as they are very busy throughout the day, though he did mention they leave the cameras in fully automatic most of the time. While you could potentially get better quality images by setting a wider aperture, as per DxOMark’s testing on 24 mm lenses, I’m guessing the convenience of using fully-auto settings outweighs the cost.

But that’s not really in the spirit of the interview, which is more to get a general sense of life aboard the ISS.

A sample image from the ISS

Reflecting on Wavelength

Two years ago I agreed to join the committee of the Remote Sensing and Photogrammetry Society (RSPSoc), a professional body whose remit is to promote and educate its members and the public on advancements in Remote Sensing science. When I signed up as the Wavelength representative, I admittedly knew very little not only about how this society operated, but about societies in general, and what their function was in the greater scope of scientific progress. I took on the role knowing I’d have to learn fast and, after a two year lead period, host a conference focusing on Remote Sensing and Photogrammetry, which would serve to bring early career researchers from both academia and industry together to discuss the latest advancements in RSP science.

The first Wavelength conference I attended way back in 2015 was at Newcastle, a few months after my first conference experience at the 2014 GRSG meeting in London, just two months after starting my project.

The difference was apparent, with the GRSG attracting the old guard from all over the world to contribute to the conference. I distinctly remember Nigel Press, a veteran remote sensor and founder of NPA Satellite Mapping, turning around to the crowd during a Q and A session and pleading with people to start taking risks funding/supporting hyperspectral satellite missions, as their contributions to geological research were so apparent. I didn’t mention it in my write-up from that conference, but it really stuck with me as, at least for that minute, it all seemed so human. Apart from that, though, it was all quite formal, and it was difficult to tell how I, as a novice, could really play a part.

With Wavelength, however, this humanity is what it’s all about! When everyone’s a novice, you can afford to be a bit more gung-ho with your opinions. As someone who tries to always ask, or at least dream up, a question during the Q and A portions of talks, I loved it so much. Rich blue-sky discussions have kept me motivated through the inevitable slower portions of writing and finicky data processing in my project, and Wavelength had them in buckets! The fact that I got so much out of it was part of my reason for volunteering to host it, as I felt like it would be a way for me to contribute back to the community, and get more involved in RSPSoc.

After an extremely enjoyable and well-run conference at MSSL during the spring of 2016, it was up to me to deliver a conference in Kingston in March 2017, while coordinating the final run-in to my PhD project. While things could definitely have been done better – I should maybe have been a bit more ruthless about advertising the conference to a wider audience – I have to say I think it ran quite smoothly, and the delegates got a lot out of it, as did I! I’ll include a summary of each day below, along with my favourite parts of the three-day agenda, including a longer description of one delegate presentation.

Monday 13th March

Delegates arrived at Kingston train station at around 11.30 am. I had enlisted the help of my colleague Paddy to go and meet them, as I had to run the poster boards up to the conference room. After lunch and a quick roll call, things kicked off with 6 talks spanning image processing and Remote Sensing of vegetation.

Andrew Cunliffe, eventual winner of best speaker, showed some captivating UAV footage of Qikiqtaruk, a site where arctic ecology is being actively researched to gain insight into differences between observations of the changing ecological and geomorphological landscape at different scales. I was interested in his hesitance to call his UAV work ‘ground truthing’ of satellite images – he preferred ‘evaluation’ thereof, as ground truth was never really acquired (outside of GCPs for a few of the 3D models). You can check out his profile on google scholar, which lists some pretty interesting research!

Monday wrapped up with a meal at a local Thai restaurant, the Cocoanut, a staple with the Kingston research folk!

Tuesday 14th March

After a tour of Kingston’s town centre in the morning, we returned to the conference venue to listen to Alastair Graham, of geoger fame, give an insightful and extremely helpful talk about career options for Remote Sensing scientists. I felt really lucky to have had the opportunity to host him – truth be told it was a bit of a fluke we crossed paths at all! He had been retweeting some of the tweets from the @sentinel_bot twitter account I had made, which caused me to look at his twitter and subsequently his website. Realising he was organising an RS meeting in Oxford the month before Wavelength (Rasters Revealed), I jumped at the chance to get him onboard, and I’m glad I did! I won’t go into his use of sli.do, but only mention that it’s worth looking into.

On Tuesday, James Brennan’s talk about the next generation of MODIS burnt area products brought me back to my Master’s days at UCL, and my time spent with the JRCTIP products. James’ talk focused on the binary nature of classification, and how he was looking into using a DCT to model the behaviour of fires, something like a fuzzy land classification. It was really engaging and I enjoyed his super-relaxed style of presenting.

Delegates eye up some posters

Tom Huntley of Geoxphere also came in to give us a talk on recent advancements at their spinout hardware company, which provides high quality cameras for mapping purposes: the XCam series. Wavelength tries to bridge the gap between industry and academia, and both Tom’s and Alastair’s talks brought in the industry element I was hoping for.

After a nice meal at Strada Kingston, we hit the bowling alley before wrapping up day 2.

Wednesday 15th March

Wednesday’s session opened with delegates talking about mainly data processing. Ed Williamson, from the Centre for Environmental Data Analysis (CEDA) gave a very interesting introduction into the supercomputing facilities they provide (JASMIN), as well as services offered to clients choosing to avail of these services. They host the entire Sentinel catalogue, which is such an outrageous amount of data, and so it was interesting to be given a whirlwind tour of how this is even possible, practically speaking.

We also had the pleasure of listening to José Gómez-Dans from NCEO talk to us about integrating multiple data sources into a consistent estimation of land surface parameters using advanced data assimilation techniques. I had done my Master’s thesis with José, and (somewhat) fondly remember trying to interpret charts whose error bars couldn’t even be plotted on them in any reasonable way. This is the reality of EO though – uncertainty is part and parcel of it!

The poster session featured a wide range of topics – I even put up my poster from EGU last year – and participants were extremely interested in the drought mapping in Uganda, as well as the numerous uses for InSAR data presented. Congrats to Christine Bischoff for winning the best poster award with her investigations of ground deformation in London.

Proceedings wrapped up with the decision on the next incoming Wavelength host (congrats to Luigi Parente, of Loughborough Uni) and a lovely lunch in the sun.

Sunny group shot

Summary

Wavelength was really fun and interesting to organise, and I hope it’s a tradition we can keep going as a society. I’ve made the conference booklet publicly available here. For those of you reading this blog who aren’t members, I suggest you join – the benefits are evident.

For now, for me, it’s EGU and beyond – I’m also aiming to attend the annual RSPSoc conference at Imperial in September with the latest developments from my fieldwork data!