Django greyscales

Access the application here.

I’ve been learning lots about the django web framework recently, as I’m hoping to take some of the ideas developed in my PhD and make them into public applications that people can apply to their research. One example of something which could easily be distributed as a web application is the code which generates greyscale image blocks from RGB colour images, a theme touched on in my poster at EGU 2016.

Moving from a suggested improvement (as per the poster) using a complicated non-linear transformation to actually applying it in the general SfM workflow is no mean feat. For this contribution I’ve decided to use django along with the methods I work with (all written in python, the base language of the framework) to make a minimum working example on a public web server (heroku), which takes an RGB image as user input and returns the same image processed with a number of greyscaling algorithms (many discussed in Verhoeven, 2015) as output. These processed files can then be redownloaded and used in a bundle adjustment to test the differences between each greyscale image set. While the app isn’t set up for bulk processing, the functionality could easily be extended.
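To give a sense of the plumbing, here’s a minimal sketch of the kind of django view involved; the form, template, and run_all helper names are hypothetical stand-ins, not the actual application code.

```python
# Sketch of an upload view (hypothetical names, not the real app code)
from django.shortcuts import render

from .forms import UploadForm   # assumed: a form with a single ImageField
from . import im_proc           # assumed: wrapper around the greyscale methods


def upload(request):
    """Accept an RGB image and render a grid of greyscaled versions."""
    if request.method == "POST":
        form = UploadForm(request.POST, request.FILES)
        if form.is_valid():
            # run_all would apply each greyscaling algorithm in turn
            results = im_proc.run_all(form.cleaned_data["image"])
            return render(request, "results.html", {"results": results})
    else:
        form = UploadForm()
    return render(request, "upload.html", {"form": form})
```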

[Image: Landing page of the application, not a lot to look at I’ll admit 😉]

To make things more intelligible, I’ve uploaded the application to github so people can see its inner workings, and potentially clean up any mistakes which might be present in the code. Many of the base methods were collated by Verhoeven in a Matlab script, which I spent some time translating into equivalent python code. These methods can be seen in the support script im_proc.py.
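To give a flavour of the simpler conversions, here’s a quick sketch of two classic ones (my own illustrative reimplementation, not code lifted from im_proc.py):

```python
import numpy as np


def grey_mean(rgb):
    """Naive greyscale: the unweighted mean of the three channels."""
    return rgb.astype(float).mean(axis=2)


def grey_luminance(rgb):
    """Weighted luminance (Rec. 601 weights), close to many standard conversions."""
    return rgb.astype(float) @ np.array([0.299, 0.587, 0.114])
```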

Many of these methods aim to maximize the objective information within one channel, and are quite similar in design, so comparing them can be a difficult game of spot the difference. Also, the scale can often get inverted, which shouldn’t really matter to photogrammetric algorithms, but does give an interesting effect. Lastly, the second PC gives some really interesting results, and I’ve spent lots of time poring over them. I’ve certainly learned a lot about PCA over the course of the last few years.
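For the curious, a PCA greyscale can be sketched as below (again an illustration rather than the exact im_proc.py code); the arbitrary sign of each eigenvector is also where the inverted scales mentioned above come from.

```python
import numpy as np


def grey_pca(rgb, component=0):
    """Project each pixel onto a principal component of the RGB point cloud.

    component=0 gives the maximum-variance greyscale; component=1 is the
    second PC, which produces the unusual images described above.
    """
    pixels = rgb.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)
    # Eigenvectors of the 3x3 channel covariance, sorted ascending by eigenvalue
    _, vecs = np.linalg.eigh(np.cov(pixels.T))
    grey = (pixels @ vecs[:, ::-1][:, component]).reshape(rgb.shape[:2])
    # Stretch to 0-255; the eigenvector's sign is arbitrary, so the
    # greyscale can come out inverted
    grey -= grey.min()
    return (255 * grey / grey.max()).astype(np.uint8)
```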

[Image: Sample result set from the application]

You can access the web version here. All photos are resized so they’re <1,000 pixels in the longest dimension, though this can easily be modified, and the results are served up in a grid as per the screengrab. Photos are deleted after upload. There’s pretty much no styling applied, but it’s functional at least! If it crashes I blame the server.
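The resize step is nothing exotic; something along these lines (a minimal sketch using Pillow, with hypothetical function and file names) is all that’s needed:

```python
from PIL import Image


def shrink_longest_side(in_path, out_path, max_dim=1000):
    """Resize an image so its longest side is at most max_dim pixels."""
    im = Image.open(in_path)
    im.thumbnail((max_dim, max_dim))  # preserves aspect ratio, only shrinks
    im.save(out_path)
```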

The result is a cheap and cheerful web application which will hopefully introduce people investigating image pre-processing to the visual differences between greyscaling algorithms. I’ll be looking to make more simple web applications to support my current research in the near future, as I think public engagement is a key feature which has been lacking from my PhD thus far.

I’ll include a few more examples below for the curious.

 

[Slideshow: further example greyscale outputs]

Sentinel bot source

I’ve been sick the last few days, which hasn’t helped in staying focused, so I decided to do a few menial tasks, such as cleaning up my references, and some slightly more involved but still undemanding ones, such as adding documentation to the twitter bot I wrote.

While it’s still a bit messy, I think it’s about time I started putting up some code online, particularly because I love doing it so much. When you code for yourself, however, you don’t have to face the wrath of the computer scientists telling you what you’re doing wrong! It’s actually similar in feeling to editing writing: the more you do it, the better you get.

As such, I’ve been using PyCharm lately, which has forced me to start using PEP8 styling, and I have to say it’s been a blessing. There are so many more reasons than I ever thought for using a very high-level IDE, and I’ll never go back to hacky notepad++ scripts, love it as I may.

In any case, I hope to have some time someday to add functionality: for example, having people tweet coordinates and a date at @sentinel_bot and having it respond with a decent image close to the request. This kind of very basic engagement could serve people who mightn’t be bothered going to Earth Explorer, or who are dissatisfied with Google Earth’s mosaicing or its lack of coverage over a certain time period.
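The parsing side of such a feature would be straightforward enough; a rough sketch (entirely hypothetical, none of this exists in the bot yet) might be:

```python
import re
from datetime import datetime

# Expect mentions like: "@sentinel_bot 51.5 -0.12 2017-06-01"
REQUEST_RE = re.compile(
    r"(?P<lat>-?\d+(?:\.\d+)?)[,\s]+"
    r"(?P<lon>-?\d+(?:\.\d+)?)[,\s]+"
    r"(?P<date>\d{4}-\d{2}-\d{2})"
)


def parse_request(tweet_text):
    """Extract (lat, lon, date) from a mention, or None if malformed."""
    match = REQUEST_RE.search(tweet_text)
    if not match:
        return None
    lat = float(match.group("lat"))
    lon = float(match.group("lon"))
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        return None
    date = datetime.strptime(match.group("date"), "%Y-%m-%d").date()
    return lat, lon, date
```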

The Sentinel missions offer a great deal of opportunity for scientists in the future, and I’ll be trying my best to think of more ways to engage the community as a result.

Find the source code here; please be gentle, it was for fun 🙂


EGU 2017

As a result of a travel grant awarded to me by the Remote Sensing and Photogrammetry Society, I was lucky enough to be able to return to EGU this year, albeit only for the Wednesday. I was there to present my research, in poster format, on raw image processing in structure-from-motion workflows. After arriving in Vienna on Tuesday afternoon I went straight to the hostel I was staying at to review my poster and finalize the sessions I would go to.

I got to the conference early in the morning and set up my poster, which was to be presented during the high resolution topography in the geosciences session. After taking a short break to grab a coffee, I headed over to the first session of the day: Imaging, measurements and modelling of physical and biological processes in soils. After last year’s fascinating run of discussions about soil and soil erosion, I decided my one day at EGU would be largely dedicated to that theme!

One particular talk which caught my eye used data fusion of laser scanning and NIR spectrometry, with the goal of coupling the two datasets for use in examining feedbacks in soil processes. Some very cool kit, and very blue-sky research; a good way to start the day!

After lunch, I almost exclusively attended a land degradation session, which featured some very interesting speakers. Many focused on integrating modern techniques for prevention of soil erosion and gully formation into farming practices in Africa. Interestingly, while the talks almost all focused on case studies and success in showing the physical effects of taking these actions, the Q & As were very much about social aspects, and how to bring about the cultural change within farming communities.

Another notable talk was given by a group aiming to promote a targeted carbon economy, which would see citizens of carbon-consuming countries pay for the upkeep and management of forestry in developing communities. The presentation was very clear and put solid numbers on each factor introduced, which made it much easier to share the vision portrayed; definitely something I’ll be following in the future!

This led into the poster session in which I was participating, which was well attended and seemed to generate lots of interest. By the time I arrived to present at the evening session, the 15 A4 posters I had printed had been hoovered up, which is always a good sign! Over the course of the hour and a half I was visited by many people I had met before at various conferences; it’s always nice to have people you know come to say hello, especially as affable a bunch as geomorphologists!

[Image: The poster I presented]

One group of particular interest was from Trinity College Dublin, where I did my undergraduate degree many moons ago. Niamh Cullen is doing research into coastal processes in the west of Ireland and is using photogrammetry to make some measurements, so we had a very good discussion on project requirements and best strategy. She’s also involved in the Irish Geomorphology group, whose remit is to establish a community of geomorphologists in Ireland.

In the evening I attended the ECR geomorphologist dinner, which was great fun, a good way to wrap up proceedings! I look forward to participating in EGU in the future in whatever capacity I can.

Notre Dame

SfM revisited

Snavely’s 2007 paper was one of the first breakout pieces of research bringing the power of bundle adjustment and self-calibration of unordered image collections to the community. It paved the way for the use of SfM in many other contexts, but I always appreciated how simple and focused the piece of work was, and how well explained each step in the process is.

[Image: Reconstruction of Notre Dame from Snavely’s paper]

For this contribution, I had hoped to recreate a figure from this paper, in which the front facade of the Notre Dame cathedral was reconstructed from internet images. I spent last weekend in Paris, so I decided I’d have a go at collecting my own images and pulling them together into a comparable model.

Whilst the doors of the cathedral were not successfully included, due to the hordes of tourists in each image, the final model came out OK, and is viewable on my website here.

[Image: View of the Cathedral on Potree]

HDR stacking

As a second mini-experiment, I thought I’d see how an HDR stack compared with a single exposure from my A7. The dynamic range of the A7, shooting from a tripod at ISO 50, is around 14 EV, so I wasn’t expecting a huge amount of dynamic range to fall outside this, though potentially parts of the windows could be retrieved. For the experiment, I used both Hugin’s HDR functionality and a custom python script, using openCV bindings, for generating HDR images, which can be downloaded here.
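The openCV side of this is pleasingly compact; a sketch of the merging step (with hypothetical filenames and exposure times, not the exact script linked above) looks something like:

```python
import cv2
import numpy as np

# Bracketed exposures of the same scene, shot from a tripod (hypothetical files)
paths = ["under.jpg", "mid.jpg", "over.jpg"]
images = [cv2.imread(p) for p in paths]
times = np.array([1 / 125, 1 / 30, 1 / 8], dtype=np.float32)  # exposure times, s

# Mertens exposure fusion needs no exposure times and returns a
# displayable image directly; this was the method that worked best here
fusion = cv2.createMergeMertens().process(images)
cv2.imwrite("mertens.jpg", np.clip(fusion * 255, 0, 255).astype("uint8"))

# Robertson's method estimates a radiance map, which then needs tonemapping
hdr = cv2.createMergeRobertson().process(images, times)
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("robertson.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```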

Results were varied, with really only Mertens’ method of HDR generation showing any notable improvement on the original input.

[Slideshow: results from each HDR method]

Some interesting things happened, including Hugin’s alignment algorithm misaligning the images (or miscalculating the lens distortion) to create a bowed-out facade by default; pretty interesting to see! Reading Robertson’s paper, I believe his method was designed more for greyscale images than full colour, but I thought I’d leave the funky result in for completeness.

If we crop into the middle stained glass we can see some of the fine detail the HDR stacks might be picking up in comparison to the original JPG.

[Slideshow: crops of the central stained glass from each method]

We can see a lot of the finer detail of the famous stained-glass windows revealed by Mertens’ HDR method, which is very cool to see! I’m impressed with just how big the difference is between it and the default off-camera JPG.

Looking at the raw file from the middle exposure, much of the detail of the stained glass is still there, though it has been clipped in the on-camera JPG processing.

[Image: Original image processed from RAW and contrast boosted, showing fine detail on the stained glass]

It justifies many of the lines of reasoning I’ve presented in the last few contributions on image compression, as these fine details can often reveal features of interest.

I had actually planned to present the results from a different experiment first, though I will be returning to that in a later blog post, as it requires much more explanation and data processing. Watch this space for future contributions from Paris!

EO Detective interviews Tim Peake

I saw this on EODetective‘s twitter account: an interview with Tim Peake about the process behind the astronauts’ photography on board the ISS. I’ve actually used a strip of these images before to make a photogrammetric model of Italy, and was very curious about the process behind their capture.

It was interesting to see they use unmodified Nikon D4s. I was curious about why they were using a relatively small aperture (f/11) for the capture of the images I had downloaded, and while ISO was mentioned, I’m still left wondering. I guess they don’t really think about it, as they are very busy throughout the day, though he did mention they leave the cameras in fully automatic most of the time. While you could potentially get better quality images by setting a wider aperture, as per DxOMark’s testing of 24 mm lenses, I’m guessing the convenience of fully-auto settings outweighs the cost.

But that’s not really in the spirit of the interview, which is more to get a general sense of life aboard the ISS.

[Image: A sample image from the ISS]

WhatsApp Images

One thing I’ve noticed from sharing images across a range of formats and websites is that image compression algorithms vary noticeably between platforms. This is most evident, in my experience, with WhatsApp, where images appear to be resized without even an anti-aliasing filter. The result is images with huge amounts of speckle when they are not resized before uploading.
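The effect is easy to demonstrate locally; downsizing once without any filtering and once with a proper one (a sketch using Pillow, with hypothetical filenames) produces exactly this kind of speckle:

```python
from PIL import Image

im = Image.open("photo.jpg")  # hypothetical input
small = (im.width // 5, im.height // 5)

# Nearest-neighbour simply drops pixels with no low-pass filtering first,
# so fine detail aliases into the speckle described above
im.resize(small, Image.NEAREST).save("speckled.jpg")

# A proper filter (Lanczos) averages over the discarded pixels
im.resize(small, Image.LANCZOS).save("smooth.jpg")
```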

Obviously the target market for WhatsApp isn’t people using high-end cameras to share their images on the application, but it still seems like a couple of functions could fix a lot of the visual problems I see, which would save me having to fix them locally.

It seems astounding to me that such a big company wouldn’t put more time into sensible image compression and resizing, though perhaps they have and I’m catching exceptions. The blocky artifacts associated with the algorithm, which I’ve written about on this blog before, are evident. In the third example included, resizing the image to 20% of its size before compression produces a qualitatively much better result, even with the smaller pixel count on redownload.

Whilst whatever algorithm they are using is likely directed towards smartphone camera users, it still seems like an oversight by the developers. Hopefully WordPress doesn’t apply a similar type of compression when I post this!

Reflecting on Wavelength

Two years ago I agreed to join the committee of the Remote Sensing and Photogrammetry Society (RSPSoc), a professional body whose remit is to promote and educate its members and the public on advancements in Remote Sensing Science. When I signed up as the Wavelength representative, I admittedly knew very little about how this society operated, or indeed about societies in general and their function in the greater scope of scientific progress. I took on the role knowing I’d have to learn fast and, after a two year lead period, host a conference focusing on Remote Sensing and Photogrammetry, which would serve to bring early career researchers from both academia and industry together to discuss the latest advancements in RSP Science.

The first Wavelength conference I attended, way back in 2015, was at Newcastle, a few months after my first conference experience at the 2014 GRSG meeting in London, which came just two months after starting my project.

The difference was apparent, with the GRSG attracting the old guard from all over the world. I distinctly remember Nigel Press, a remote sensing veteran and founder of NPA satellite mapping, turning around to the crowd during a Q and A session and pleading with people to start taking risks in funding and supporting hyperspectral satellite missions, as their contributions to geological research were so apparent. I didn’t mention it in my write-up from that conference, but it really stuck with me as, at least for that minute, it all seemed so human. Apart from that, though, it was all quite formal, and difficult to tell how I, as a novice, could really play a part.

With Wavelength, however, this humanity is what it’s all about! When everyone’s a novice, you can afford to be a bit more gung-ho with your opinions. As someone who tries to always ask, or at least dream up, a question during the Q and A portions of talks, I loved it. Rich blue-sky discussions have kept me motivated through the inevitable slower portions of writing and finicky data processing in my project, and Wavelength had them in buckets! The fact that I got so much out of it was part of my reason for volunteering to host it, as I felt it would be a way for me to contribute back to the community and get more involved in RSPSoc.

After an extremely enjoyable and well-run conference at MSSL during the spring of 2016, it was up to me to deliver a conference in Kingston in March 2017, while coordinating the final run-in of my PhD project. While things could definitely have been done better, and I should maybe have been a bit more ruthless about advertising the conference to a wider audience, I have to say I think it ran quite smoothly, and the delegates got a lot out of it, as did I! I’ll include a summary of each day below, with my favourite parts throughout the three-day agenda, including a longer description of one delegate presentation.

Monday 13th March

Delegates arrived at Kingston train station at around 11.30 am. I had enlisted the help of my colleague Paddy to meet them, as I had to run the poster boards up to the conference room. After lunch and a quick roll call, things kicked off with six talks spanning image processing and Remote Sensing of vegetation.

Andrew Cunliffe, the eventual winner of best speaker, showed some captivating UAV footage of Qikiqtaruk, a site where arctic ecology is being fervently researched to gain insight into differences between observations of the changing ecological and geomorphological landscapes at different scales. I was interested in his hesitance to describe what he was doing with UAVs as ‘ground truthing’ of satellite images, preferring ‘evaluation’ thereof, as ground truth was never really acquired (outside of GCPs for a few of the 3D models). You can check out his profile on Google Scholar, which lists some pretty interesting research!

Monday wrapped up with a meal at a local Thai restaurant, the Cocoanut, a staple with the Kingston research folk!

Tuesday 14th March

After a tour of Kingston’s town centre in the morning, we returned to the conference venue to listen to Alastair Graham, of geoger fame, give an insightful and extremely helpful talk about career options for Remote Sensing scientists. I felt really lucky to have had the opportunity to host him; truth be told, it was a bit of a fluke we crossed paths at all! He had been retweeting some of the tweets from the @sentinel_bot account I had made, which caused me to look at his twitter and subsequently his website. Realising he was organising an RS meeting in Oxford the month before Wavelength (Rasters Revealed), I jumped at the chance to get him on board, and I’m glad I did! I won’t go into his use of sli.do, except to mention that it’s worth looking into.

On Tuesday, James Brennan’s talk about the next generation of MODIS burnt area products brought me back to my Masters days at UCL and my time spent with the JRC-TIP products. James’ talk focused on the binary nature of classification, and how he was looking into using a DCT to model the behaviour of fires, something like a fuzzy land classification. It was really engaging, and I enjoyed his super-relaxed style of presenting.

[Image: Delegates eye up some posters]

Tom Huntley of Geoxphere also came in to give us a talk on recent advancements at their spinout hardware company, which provides high quality cameras for mapping purposes: the XCam series. Wavelength tries to bridge the gap between industry and academia, and both Tom’s and Alastair’s talks brought in the industry element I was hoping for.

After a nice meal at Strada Kingston, we hit the bowling alley before wrapping up day 2.

Wednesday 15th March

Wednesday’s session opened with delegates talking mainly about data processing. Ed Williamson, from the Centre for Environmental Data Analysis (CEDA), gave a very interesting introduction to the supercomputing facilities they provide (JASMIN), as well as the services offered to clients who choose to use them. They host the entire Sentinel catalogue, which is an outrageous amount of data, so it was interesting to be given a whirlwind tour of how this is even possible, practically speaking.

We also had the pleasure of listening to José Gómez-Dans from NCEO talk to us about integrating multiple data sources into a consistent estimation of land surface parameters using advanced data assimilation techniques. I had done my Masters thesis with José, and (somewhat) fondly remember trying to interpret charts whose error bars couldn’t even be plotted on them in any reasonable way. This is the reality of EO, though: uncertainty is part and parcel of it!

The poster session featured a wide range of topics (I even put up my one from EGU last year), and participants were extremely interested in drought mapping in Uganda, as well as in the numerous uses of InSAR data presented. Congrats to Christine Bischoff for winning the best poster award with her investigations of ground deformation in London.

Proceedings wrapped up with the decision on the next incoming Wavelength host (congrats to Luigi Parente, of Loughborough Uni) and a lovely lunch in the sun.

[Image: Sunny group shot]

Summary

Wavelength was really fun and interesting to organise, and I hope it’s a tradition we can keep going as a society. I’ve made the conference booklet publicly available here. For those of you reading this blog who aren’t members, I suggest you join; the benefits are evident.

For now, for me, it’s EGU and beyond. I’m also aiming to attend the annual RSPSoc conference at Imperial in September, with the latest developments from my fieldwork data!