RSPSoc Annual Conference

I had a great time at the RSPSoc conference yesterday, and very much enjoyed catching up with some of the people I made friends with at Wavelength this year. This is a short entry just to make available the slides from both Mike (my supervisor) and myself; my talk's primary focus was on image quality in photogrammetric work. Unfortunately I think I filled my slides a little too much and probably could have included about half the content, but somehow couldn't stop adding plots from the beautiful seaborn library. Lesson learned!

Link to Mike's slides

Link to my slides

Looking forward to writing a blog post on RAW to JPEG conversions very soon; check the undemosaiced sneak preview below 😉

[Image: undemosaiced sneak preview]



Joypy

Not one to miss a fad in data visualisation, I noticed joyplots getting a lot of attention over on reddit's dataisbeautiful subreddit and have had a go at producing some myself. I'm hoping to integrate them into a talk I'm giving this Wednesday as part of the RSPSoc's annual conference, provided they make enough sense to include.

I'm tinkering with the joypy library, a set of scripts whose sole purpose is to produce these types of plots, built on top of the excellent (and frequently used by myself) seaborn plotting library.
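
For anyone curious, producing one of these only takes a few lines. Here's a minimal sketch using joypy on made-up data; the group names and numbers are purely illustrative, not from my own plots:

```python
# Minimal joyplot sketch with the joypy library (pip install joypy).
import numpy as np
import pandas as pd
import joypy
import matplotlib.pyplot as plt

# Fabricated data: five overlapping distributions, one per group.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "value": np.concatenate([rng.normal(loc=i, scale=1.0, size=200) for i in range(5)]),
    "group": np.repeat([f"set {i}" for i in range(5)], 200),
})

# One ridge per group, with generous overlap for the 'joyplot' look.
fig, axes = joypy.joyplot(df, by="group", column="value",
                          overlap=1.5, colormap=plt.cm.viridis)
plt.show()
```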

For now, I need to get off the fad wagon and keep on writing!

[Image: A sample joyplot I've produced.]

Gamify it

I've been planning and chipping away at writing up the last three years of work into a coherent thesis over the last six months or so. It's very interesting to look back at the reams of planning documents, literature reviews and interim results documents I've produced over this time!

Knowing what and how much to write on each topic is a bit of a dark art, however; the initial targets I've set are very loose, but I think they're important for giving the report some sort of structure to grow into. As a bit of a tongue-in-cheek joke I produced some 'progress bar' style bar charts, one for each chapter planned for the final report, and have been updating them day by day. The satisfaction gained from seeing them creep up has actually been surprisingly effective in getting me into a writing mode each day!

I've gone with a traffic light colour palette: the top bar indicates how many words I planned to write, the second the word count to date, and the bottom the upper limit I've set myself. I know obsessing over word counts is a massive waste of time, and I don't worry about them too much at all, but I couldn't pass up an opportunity for some data visualisation!
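
A chart like this only takes a few lines of matplotlib. The sketch below uses made-up chapter names and word counts rather than my actual numbers, but shows the general idea:

```python
# Sketch of a per-chapter 'progress bar' chart with a traffic light palette.
# Chapter names and word counts below are illustrative only.
import matplotlib.pyplot as plt

chapters = ["Introduction", "Literature review", "Methods", "Results", "Discussion"]
planned = [4000, 8000, 7000, 6000, 5000]   # target word counts
written = [3200, 5100, 6500, 1200, 400]    # words written so far
upper = [5000, 10000, 9000, 8000, 7000]    # self-imposed upper limits

fig, axes = plt.subplots(len(chapters), 1, figsize=(6, 8), sharex=True)
for ax, name, p, w, u in zip(axes, chapters, planned, written, upper):
    # top bar: planned (amber), middle: written (green), bottom: limit (red)
    ax.barh([2, 1, 0], [p, w, u], color=["orange", "green", "red"])
    ax.set_yticks([2, 1, 0])
    ax.set_yticklabels(["planned", "written", "limit"])
    ax.set_title(name, loc="left", fontsize=9)
plt.tight_layout()
plt.show()
```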

[Image: Standard summary report I've been producing]

Neural nets in Remote Sensing

Neural nets, a summary: (The chain rule * your GPU RAM)

Around two years ago I remember having a discussion with Jan Boehm about photogrammetry after my first meeting as the shadow Wavelength rep on the Remote Sensing and Photogrammetry committee. He mentioned Agisoft, which I was already using and familiar with at the time, but then pointed to the movement in dense matching algorithms towards neural nets, including one method which had been submitted to the KITTI stereo benchmark.

[Image: Disparity map using Žbontar's methods]

This piqued my curiosity, and I remember reading and being quite excited by Jure's paper. While some of the concepts were new to me, I was drawn in by the use of convolutional neural networks (ConvNets) and the two architectures used to initialise the disparity results before post-processing with semi-global matching. I remember sinking a great deal of time into reading about the methods, exploring the GitHub repository and the approach at the core of the paper, and subsequently hounding a colleague who was using a Titan X for some deep learning work about it for quite a while.

I remember I took the ideas with me to EGU 2016, and even went to the point of acquiring a data set I thought would be worthy of testing it with from a German photogrammetrist, Andreas Kaiser. Alas, it wasn't to be, due to hardware limitations and the fact that I wasn't very familiar with the Lua programming language. However, I had learned a lot about the nature of deep learning, which I felt was a decent investment of my time.

The reason for this blog entry, however, isn't to enlighten the reader about my failure to get up to speed with neural nets at the time; it's much more hopeful than that! Fast forward two years, and development within the field of deep learning has come on in leaps and bounds. With serious development time going into TensorFlow, and a beautiful and accessible front end in the form of keras, the python user really does have the tools to apply neural nets to all sorts of applications within image-based studies.

Having learned the basic ideas around neural nets from my initial excitement a long time ago, I decided to try and get involved with the community once more. A few months back, a well-timed kaggle competition came up which involved image classification, which raised an eyebrow. I contacted an old friend of mine who had just finished his PhD in medical imaging and we set out to take up the challenge.

[Image: The task for the competition involved labeling satellite imagery]

Since starting the task, I feel like I've come on in leaps and bounds with not only the concepts behind ConvNets, but also their architecture and application in python. Whilst we generated lots of code (it will be on GitHub in due course) and had lots of ideas floating about, we finished in a decidedly average mid-table position. This first pass was as much an exercise in learning about organisation as about imaging science, but it's made me rethink using ConvNets in a Remote Sensing/Photogrammetry environment.
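
To give a flavour of the sort of thing keras makes easy, here's a minimal sketch of a small ConvNet for multi-label image classification; the input size, number of labels and layer sizes are illustrative rather than the architecture we actually used in the competition:

```python
# Minimal multi-label ConvNet sketch in keras (illustrative sizes only).
from tensorflow.keras import layers, models

num_labels = 17  # hypothetical number of scene labels per image chip

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),  # RGB chips
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_labels, activation="sigmoid"),  # one probability per label
])

# Sigmoid outputs + binary cross-entropy, since each image can carry several labels.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```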

Whilst we are seeing more contributions of this kind coming out of the community, and less complex techniques like support vector machines have already shown their popularity, I'm hoping to extend my skill set to include all of these in the future. If anyone who happens to be reading this feels the same, don't hesitate to get in touch!


Django greyscales

Access the application here.

I've been learning lots about the django web framework recently, as I'm hoping to take some of the ideas developed in my PhD and turn them into public applications that people can apply to their research. One example of something which could easily be distributed as a web application is the code which generates greyscale image blocks from RGB colour images, a theme touched on in my poster at EGU 2016.

Moving from a suggested improvement (as per the poster) using a complicated non-linear transformation to actually applying it to the general SfM workflow is no mean feat. For this contribution I've decided to use django, along with the methods I rely on (all written in python, the base language of the framework), to make a minimum working example on a public web server (heroku) which takes an RGB image as user input and returns the same image processed by a number of greyscaling algorithms (many discussed in Verhoeven, 2015). These processed files can then be downloaded again and used in a bundle adjustment to test the differences between each greyscale image set. While the application isn't set up for bulk processing, the functionality could easily be extended.
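
To give an idea of the plumbing involved, here's a rough sketch of the kind of django view that could sit behind such an application: accept an uploaded RGB image, run a set of greyscaling functions over it and render the results. The `GREYSCALE_METHODS` registry and template names are illustrative, not the actual code on GitHub:

```python
# Illustrative django view: upload an RGB image, return greyscaled versions.
from django.core.files.storage import default_storage
from django.shortcuts import render

from . import im_proc  # hypothetical module holding the greyscaling functions


def convert(request):
    if request.method == "POST" and request.FILES.get("image"):
        # Save the upload, then write one output file per greyscaling method.
        in_name = default_storage.save("uploads/input.png", request.FILES["image"])
        results = []
        for name, method in im_proc.GREYSCALE_METHODS.items():  # hypothetical registry
            out_name = f"results/{name}.png"
            method(default_storage.path(in_name), default_storage.path(out_name))
            results.append({"name": name, "url": default_storage.url(out_name)})
        return render(request, "greyscale/results.html", {"results": results})
    return render(request, "greyscale/upload.html")
```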

[Image: Landing page of the application, not a lot to look at I'll admit 😉]

To make things more intelligible, I've uploaded the application to GitHub so people can see its inner workings, and potentially clean up any mistakes which might be present within the code. Many of the base methods were collated by Verhoeven in a Matlab script, which I spent some time translating to equivalent python code. These methods can be found in the support script im_proc.py.

Many of these methods aim to maximize the objective information within one channel, and are quite similar in design, so it can be a difficult game of spot the difference. Also, the scale can often end up inverted, which shouldn't really matter to photogrammetric algorithms, but does give an interesting effect. Lastly, the second principal component gives some really interesting results, and I've spent lots of time poring over them. I've certainly learned a lot about PCA over the course of the last few years.
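
As an illustration of the PCA-based approach (and not the actual code in im_proc.py), here's a minimal sketch that projects the RGB pixels of an image onto their principal components and saves the first two as greyscale images; the input path is hypothetical:

```python
# PCA-based greyscaling sketch: project RGB pixels onto principal components.
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA

img = np.asarray(Image.open("input.jpg"), dtype=np.float32)  # hypothetical input
pixels = img.reshape(-1, 3)                                   # (n_pixels, 3) RGB samples

pca = PCA(n_components=3)
scores = pca.fit_transform(pixels)

# The first component usually captures most of the luminance-like variation;
# the second often highlights more subtle colour contrasts.
for i in range(2):
    band = scores[:, i].reshape(img.shape[:2])
    band = (band - band.min()) / (band.max() - band.min()) * 255  # rescale to 0-255
    Image.fromarray(band.astype(np.uint8)).save(f"pc{i + 1}_grey.png")
```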

[Image: Sample result set from the application]

You can access the web version here. All photos are resized so they’re <1,000 pixels in the longest dimension, though this can easily be modified, and the results are served up in a grid as per the screengrab. Photos are deleted after upload. There’s pretty much no styling applied, but it’s functional at least! If it crashes I blame the server.

The result is a cheap and cheerful web application which will hopefully introduce people to the visual differences present within greyscaling algorithms if they are investigating image pre-processing. I’ll be looking to make more simple web applications to support current research I’m working on in the near future, as I think public engagement is a key feature which has been lacking from my PhD thus far.

I’ll include a few more examples below for the curious.

[Slideshow: further example outputs from the application]

Writing blues

After having been ill for the last week and a half, I'm currently trying to get back into the swing of writing, which I find is by far the hardest part of research when it really doesn't/shouldn't need to be. One thing in particular I find very difficult is starting: I often pore over the first words or sentence for a very long time when I do sit down to write.

One step I've taken in an attempt to mitigate this is to give myself as many opportunities as possible to start writing. While obviously this could involve carrying a pen and paper around everywhere and waiting for inspiration to hit, I think the practicalities of translating esoteric squiggles and keeping the notes in decent order are a bit beyond me, so I rarely give it a proper go.

Enter the Bluetooth keyboard, a product recommended to me by my supervisor for ensuring you can start taking notes and writing wherever you are. I was skeptical at first, due to the variable key size and the slight faff of connecting via Bluetooth to my phone, but after giving it a couple of hours on a recent visit to the RGS I was sold. I'm currently typing up a version of this blog post on my phone, sitting on a train from Holyhead to Chester on the way back to London, and getting great pleasure from watching the trees go by after every few sentences!

[Image: Product photo from Microsoft's site]

While I know this entry will read like an advertorial, that isn't the intention; I'm just very wary of the summer of PhD writing ahead, and am glad to have an excuse to do the lion's share sitting in a park rather than in my stuffy office! For now, back to writing, though I'm preparing a more technical blog post which should be finished tomorrow.

[Image: Spotted from the train in Wales]

Sentinel bot source

I've been sick the last few days, which hasn't helped with staying focused, so I decided to do a few menial tasks, such as cleaning up my references, and some slightly more involved but not really demanding ones, such as adding documentation to the Twitter bot I wrote.

While it's still a bit messy, I think it's about time I started putting some code online, particularly because I love doing it so much. When you code for yourself, however, you don't have to face the wrath of the computer scientists telling you what you're doing wrong! It's actually similar in feeling to editing writing: the more you do it, the better you get.

As such, I've been using PyCharm lately, which has forced me to start using PEP 8 styling, and I have to say it's been a blessing. There are so many more reasons than I ever thought for using a full-featured IDE, and I'll never go back to hacky Notepad++ scripts, love it as I may.

In any case, I hope to have some time someday to add functionality; for example, having people tweet coordinates and a date at @sentinel_bot and have it respond with a decent image close to the request. This kind of very basic engagement could help people who mightn't be bothered going to Earth Explorer, or who are dissatisfied with Google Earth's mosaicing or lack of coverage over a certain time period.
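
As a rough idea of how such a request format might work, here's a minimal sketch of parsing coordinates and a date out of a tweet; the format shown is made up and this isn't part of the bot's current source:

```python
# Sketch of parsing a hypothetical "lat, lon date" request tweet.
import re
from datetime import datetime

PATTERN = re.compile(
    r"(?P<lat>-?\d+(?:\.\d+)?)[,\s]+(?P<lon>-?\d+(?:\.\d+)?)[,\s]+(?P<date>\d{4}-\d{2}-\d{2})"
)

def parse_request(tweet_text):
    """Extract (lat, lon, date) from e.g. '@sentinel_bot 52.2, -3.9 2017-06-01'."""
    match = PATTERN.search(tweet_text)
    if not match:
        return None  # tweet didn't follow the expected format
    lat = float(match.group("lat"))
    lon = float(match.group("lon"))
    date = datetime.strptime(match.group("date"), "%Y-%m-%d").date()
    return lat, lon, date

print(parse_request("@sentinel_bot 52.2, -3.9 2017-06-01"))
```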

The Sentinel missions offer a great deal of opportunity for scientists in the future, and I’ll be trying my best to think of more ways to engage the community as a result.

Find the source code here; please be gentle, it was for fun 🙂
