Sentinel_bot – now with NIR vision

A quick blog post as I’m very much in the throes of writing! I took a few minutes today to introduce false colour (Near Infrared – Red – Green) images into @sentinel_bot’s programming, so now there’s a 20% chance that an image it produces will be false colour. In the near future I think I’ll introduce other band combinations (such as PCA band combos for mineral contrast enhancement), but for now I’m going to let it sit and appreciate some of what it comes up with, such as the image below.

Source: https://github.com/JamesOConnor/Sentinel_bot

Twitter: www.twitter.com/sentinel_bot


NIR – R – G image over Argentina
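Under the hood the change is little more than a different band-to-channel mapping. A minimal sketch of the idea, assuming Sentinel-2 band files on disk and leaning on rasterio and matplotlib (this is my own illustration, not the bot’s actual code):

```python
import numpy as np
import rasterio
import matplotlib.pyplot as plt

def load_band(path):
    """Read a single-band raster and stretch it to 0-1 for display."""
    with rasterio.open(path) as src:
        band = src.read(1).astype(np.float32)
    lo, hi = np.percentile(band, (2, 98))  # simple contrast stretch
    return np.clip((band - lo) / (hi - lo), 0, 1)

# Hypothetical Sentinel-2 band files: B08 = NIR, B04 = red, B03 = green
nir, red, green = (load_band(f) for f in ("B08.jp2", "B04.jp2", "B03.jp2"))

# False colour: NIR drives the red channel, red the green, green the blue
false_colour = np.dstack([nir, red, green])
plt.imsave("false_colour.png", false_colour)
```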

Gamify it

For the last six months or so I’ve been planning and chipping away at writing up the last three years of work into a coherent thesis. It’s very interesting to look back at the reams of planning documents, literature reviews and interim results documents I’ve produced over this time!

Knowing what and how much to write on each topic is a bit of a dark art, however; the initial targets I’ve set are very loose, but I think they’re important for giving the report some sort of structure to grow into. As a bit of a tongue-in-cheek joke I produced some ‘progress bar’ style bar charts, one for each chapter planned for the final report, and have been updating them day by day. The satisfaction gained from seeing them creep up has actually been surprisingly effective in getting me into a writing mode each day!

I’ve gone with a traffic-light colour palette: the top bar indicates how many words I planned to write, the second the word count to date, and the bottom the upper limit I’ve set myself. I know obsessing over word counts is a massive waste of time, and I don’t worry about them too much at all, but I couldn’t pass up an opportunity for some opportunistic data visualization!
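For the curious, the charts themselves are nothing fancy; a rough matplotlib sketch along these lines does the job (the chapter names and word counts below are placeholders, not my real numbers):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical chapters: (planned target, written so far, upper limit)
chapters = {
    "Introduction": (8000, 6500, 10000),
    "Methods": (12000, 4000, 15000),
    "Results": (10000, 1500, 12000),
}

labels = ["Planned", "Written", "Upper limit"]
colours = ["#2ca02c", "#f0ad4e", "#d9534f"]  # traffic-light green / amber / red
y = np.arange(len(labels))

fig, axes = plt.subplots(len(chapters), 1, figsize=(6, 1.5 * len(chapters)), sharex=True)

for ax, (chapter, counts) in zip(axes, chapters.items()):
    # Reverse the order so "Planned" ends up as the top bar
    ax.barh(y, counts[::-1], color=colours[::-1])
    ax.set_yticks(y)
    ax.set_yticklabels(labels[::-1])
    ax.set_title(chapter, loc="left", fontsize=9)

axes[-1].set_xlabel("Word count")
fig.tight_layout()
fig.savefig("progress.png", dpi=150)
```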


Standard summary report I’ve been producing

Neural nets in Remote Sensing

Neural nets, a summary: (The chain rule * your GPU RAM)

Around two years ago I remember having a discussion with Jan Boehm about photogrammetry after my first meeting as the shadow wavelength rep on the Remote Sensing and Photogrammetry committee. He brought up Agisoft, which I was already using and familiar with at the time, but then noted the movement in dense matching algorithms towards neural nets, pointing to one which had been submitted to the KITTI stereo benchmark.


Disparity map using Žbontar’s methods

This piqued my curiosity, and I remember reading Jure Žbontar’s paper and being quite excited by it. Some of the concepts were new to me, particularly the use of convolutional neural networks (ConvNets) and the two architectures used to produce the initial matching results before post-processing with semi-global matching. I sank a great deal of time into reading about the methods, exploring the GitHub repository and the core of the paper, and subsequently spent some time hounding a colleague who was using a Titan X for some deep learning work to try it out.

I remember taking the ideas with me to EGU 2016, and even going as far as acquiring a data set I thought would be worth testing it with from a German photogrammetrist, Andreas Kaiser. Alas, it wasn’t to be, due to hardware limitations and the fact that I wasn’t very familiar with the Lua programming language. However, I had learned a lot about the nature of deep learning, which I felt was a decent investment of my time.

The reason for this blog entry, however, isn’t to enlighten the reader about my failure to get up to speed with neural nets at the time; it’s much more hopeful than that! Fast forward two years, and development within the field of deep learning has come on leaps and bounds. With serious development time going into TensorFlow, and a beautiful, accessible front end in the form of Keras, the Python user really does have the tools to apply neural nets to all sorts of applications within image-based studies.
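Just to give a flavour of how low the barrier to entry now is, a minimal Keras ConvNet for classifying small image chips takes only a handful of lines. The chip size, class count and layer choices below are placeholder assumptions for illustration, not anything we actually used:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# Tiny ConvNet for 64x64 RGB chips and 10 classes (placeholder values)
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(10, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=10, validation_split=0.2) once the chips are loaded
```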

Having learned the basic ideas around neural nets during that initial burst of excitement, I decided to try and get involved with the community once more. A few months back a well-timed Kaggle competition involving image classification came up, which raised an eyebrow. I contacted an old friend of mine who had just finished his PhD in medical imaging and we set out to take up the challenge.


The task for the competition involved labeling satellite imagery

Since starting the task, I feel like I’ve come on leaps and bounds with not only the concepts behind ConvNets, but also their architecture and application in Python. Whilst we generated lots of code (which will be on GitHub in due course) and had lots of ideas floating about, we finished in a decidedly average mid-table position. This first pass was as much a lesson in organisation as in imaging science, but it’s made me rethink how I might use ConvNets in a Remote Sensing/Photogrammetry environment.

We are seeing more and more of these contributions coming out of the community, as the popularity of less technical methods like support vector machines has already shown, and I’m hoping to extend my skill set to include all of these in the future. If anyone who happens to be reading this feels the same, don’t hesitate to get in touch!


Django greyscales

Access the application here.

I’ve been learning lots about the Django web framework recently, as I’m hoping to take some of the ideas developed in my PhD and turn them into public applications that people can apply to their own research. One example of something which could easily be distributed as a web application is the code which generates greyscale image blocks from RGB colour images, a theme touched on in my poster at EGU 2016.

Moving from a suggested improvement (as per the poster) using a complicated non-linear transformation to actually applying it in the general SfM workflow is no mean feat. For this contribution I’ve decided to use Django along with the methods themselves (all written in Python, the base language of the framework) to make a minimum working example on a public web server (Heroku) which takes an RGB image as user input and returns the same image processed with a number of greyscaling algorithms (many discussed in Verhoeven, 2015). These processed files could then be re-downloaded and used in a bundle adjustment to test the differences between each greyscale image set. While it isn’t set up to do bulk processing, the functionality could easily be extended.
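The Django side of things is surprisingly thin; the bulk of the work happens in the image-processing functions. A stripped-down sketch of the kind of view involved (the `convert_all` helper and the template name are placeholders of my own, not the deployed code):

```python
# views.py -- minimal upload-and-process view (a sketch, not the app's actual code)
import base64
import io

from django.shortcuts import render
from PIL import Image

import im_proc  # support script holding the greyscaling methods


def upload(request):
    results = []
    if request.method == "POST" and request.FILES.get("image"):
        img = Image.open(request.FILES["image"]).convert("RGB")
        img.thumbnail((1000, 1000))  # keep the longest dimension under 1,000 px
        # Hypothetical helper returning (label, PIL image) pairs for each algorithm
        for name, grey in im_proc.convert_all(img):
            buf = io.BytesIO()
            grey.save(buf, format="PNG")
            results.append((name, base64.b64encode(buf.getvalue()).decode()))
    return render(request, "greyscale/upload.html", {"results": results})
```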


Landing page of the application, not a lot to look at I’ll admit 😉

To make things more intelligible, I’ve uploaded the application to GitHub so people can see its inner workings, and potentially clean up any mistakes which might be present in the code. Many of the base methods were collated by Verhoeven in a Matlab script, which I spent some time translating into the equivalent Python code. These methods can be seen in the support script im_proc.py.

Many of these methods aim to maximize the objective information within one channel, and they are quite similar in design, so comparing them can be a difficult game of spot the difference. Also, the scale can often end up inverted, which shouldn’t really matter to photogrammetric processing, but does give an interesting effect. Lastly, the second principal component (PC) gives some really interesting results, and I’ve spent lots of time poring over them. I’ve certainly learned a lot about PCA over the course of the last few years.
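For the PCA-based conversions, the idea is simply to treat each pixel’s (R, G, B) triplet as a point in 3D, find the principal axes of that point cloud, and project onto one of them; the first component captures most of the luminance-like variation, while the second picks up the colour contrasts that make the results so interesting. A rough numpy sketch of that idea (my own simplification, not a line-for-line copy of im_proc.py):

```python
import numpy as np
from PIL import Image

def pca_greyscale(path, component=0):
    """Project an RGB image onto one of its principal components."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    pixels = rgb.reshape(-1, 3)
    pixels -= pixels.mean(axis=0)

    # Eigen-decomposition of the 3x3 channel covariance matrix
    cov = np.cov(pixels, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]  # strongest component first

    projected = pixels @ eigvecs[:, order[component]]
    grey = projected.reshape(rgb.shape[:2])

    # Rescale to 0-255; the sign of a PC is arbitrary, which is why
    # the output scale can come out inverted
    grey = (grey - grey.min()) / (grey.max() - grey.min()) * 255
    return Image.fromarray(grey.astype(np.uint8))

pca_greyscale("input.jpg", component=1).save("pc2_grey.png")
```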


Sample result set from the application

You can access the web version here. All photos are resized so they’re <1,000 pixels in the longest dimension, though this can easily be modified, and the results are served up in a grid as per the screengrab. Photos are deleted after upload. There’s pretty much no styling applied, but it’s functional at least! If it crashes I blame the server.

The result is a cheap and cheerful web application which will hopefully introduce people investigating image pre-processing to the visual differences between greyscaling algorithms. I’ll be looking to make more simple web applications to support my current research in the near future, as I think public engagement is a key feature which has been lacking from my PhD thus far.

I’ll include a few more examples below for the curious.



Sentinel bot source

I’ve been sick the last few days, which hasn’t helped with staying focused, so I decided to do a few menial tasks, such as cleaning up my references, and a few slightly more involved but not especially demanding ones, such as adding documentation to the Twitter bot I wrote.

While it’s still a bit messy, I think it’s high time I started putting some code online, particularly because I love doing it so much. When you code for yourself, however, you don’t have to face the wrath of the computer scientists telling you what you’re doing wrong! It’s actually similar in feeling to editing writing: the more you do it, the better you get.

As such, I’ve been using PyCharm lately, which has forced me to start using PEP 8 styling, and I have to say it’s been a blessing. There are so many more reasons than I ever thought for using a full-featured IDE, and I’ll never go back to hacky Notepad++ scripts, love it as I may.

In any case, I hope to have some time someday to add functionality – for example, letting people tweet coordinates and a date at @sentinel_bot and having it respond with a decent image close to the request. This kind of very basic engagement could help people who mightn’t be bothered going to Earth Explorer, or who are dissatisfied with Google Earth’s mosaicking or its lack of coverage over a certain time period.
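As a rough sketch of what the parsing half of that feature might look like (entirely hypothetical, nothing like this is in the bot yet), pulling a coordinate pair and a date out of a mention only takes a few lines:

```python
import re
from datetime import datetime

def parse_request(tweet_text):
    """Pull (lat, lon, date) out of a mention such as
    '@sentinel_bot 53.35, -6.26 2017-06-01'; returns None if it doesn't parse."""
    coords = re.search(r"(-?\d+(?:\.\d+)?)[,\s]+(-?\d+(?:\.\d+)?)", tweet_text)
    date = re.search(r"\d{4}-\d{2}-\d{2}", tweet_text)
    if not (coords and date):
        return None
    lat, lon = float(coords.group(1)), float(coords.group(2))
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        return None
    return lat, lon, datetime.strptime(date.group(), "%Y-%m-%d").date()

print(parse_request("@sentinel_bot 53.35, -6.26 2017-06-01"))
```

The rest would just be a matter of searching the Sentinel archive around that point and date and replying with the best cloud-free hit.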

The Sentinel missions offer a great deal of opportunity for scientists in the future, and I’ll be trying my best to think of more ways to engage the community as a result.

Find the source code here, please be gentle, it was for fun 🙂


Notre Dame

SfM revisited

Snavely’s 2007 paper was one of the first breakout pieces of research bringing the power of bundle adjustment and self-calibration of unordered image collections to the community. It paved the way for the use of SfM in many other contexts, but I’ve always appreciated how simple and focused the piece of work was, and how well each step in the process is explained.


Reconstruction of Notre Dame from Snavely’s paper

For this contribution, I had hoped to recreate a figure from this paper, in which the front facade of Notre Dame cathedral was reconstructed from internet images. I spent last weekend in Paris, so I decided to have a go at collecting my own images and pulling them together into a comparable model.

Whilst the doors of the cathedral were not successfully included, due to the hordes of tourists in each image, the final model came out OK and is viewable on my website here.


View of the Cathedral on Potree

HDR stacking

As a second mini-experiment, I thought I’d see how an HDR stack compared with a single exposure from my A7. The dynamic range of the A7, shooting from a tripod at ISO 50, is around 14 stops, so I wasn’t expecting a huge amount of dynamic range to fall outside this, though potentially parts of the windows could be recovered. For the experiment, I used both Hugin’s HDR functionality and a custom Python script using OpenCV bindings to generate HDR images, which can be downloaded here.

Results were varied, with really only Mertens’ method of HDR generation showing any notable improvement on the original input.
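For reference, the OpenCV part of the script boils down to a couple of calls; a trimmed-down sketch of the Mertens exposure-fusion route (with placeholder file names) looks like this:

```python
import cv2
import numpy as np

# Bracketed exposures, darkest to brightest (placeholder file names)
files = ["under.jpg", "mid.jpg", "over.jpg"]
images = [cv2.imread(f) for f in files]

# Align the frames before merging (less critical when shot from a tripod)
cv2.createAlignMTB().process(images, images)

# Mertens exposure fusion: no camera response recovery or tone mapping needed
fusion = cv2.createMergeMertens().process(images)

# Output is float32 in roughly [0, 1]; scale back to 8-bit for saving
cv2.imwrite("fusion_mertens.jpg", np.clip(fusion * 255, 0, 255).astype("uint8"))
```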


Some interesting things happened, including Hugin’s alignment algorithm misaligning the images (or miscalculating the lens distortion) to create a bowed-out facade by default, which was pretty interesting to see! Reading Robertson’s paper, I believe his method was designed more for grayscale images than full colour, but I thought I’d leave the funky result in for completeness.

If we crop into the middle stained-glass window, we can see some of the fine detail the HDR stacks might be picking up in comparison to the original JPG.


We can see a lot of the finer detail of the famous stained-glass windows revealed by Mertens’ HDR method, which is very cool to see! I’m impressed by just how big the difference is between it and the default out-of-camera JPG.

Looking at the raw file from the middle exposure, much of the detail of the stained glass is still there, though it has been clipped in the on-camera JPG processing.


Original image processed from RAW and contrast boosted showing fine detail on stained glass

It justifies many of the lines of reasoning I’ve presented in the last few contributions on image compression, as these fine details can often reveal features of interest.

I had actually planned to present the results from a different experiment first, though I will be returning to that in a later blog post as it requires much more explanation and data processing. Watch this space for future contributions from Paris!

Leafiness

I thought it might be fun to try something different and delve back into the world of satellite remote sensing (outside of Sentinel_bot, which isn’t a scientific tool). It’s been a while since I’ve tried anything like this, and my skills have definitely degraded somewhat, but I decided to fire up GRASS GIS and give it a go with some publicly available data.

I set myself the simple task of trying to guess how ‘leafy’ streets are within an urban environment from Landsat images. Part of the rationale was that whilst we could count trees using object detectors, that requires high-resolution images. While I might do a blog on this at a later date, it was outside the scope of what I wanted to achieve here, which is at a very coarse scale. I will, however, be using a high-resolution aerial image for ground truthing!

For the data, I found an urban area on USGS Earth Explorer with both high-resolution orthoimagery and a reasonably cloud-free Landsat scene acquired within 10 days of it. This turned out to be reasonably difficult to find, with the aerial imagery being the main limiting factor, but I found a suitable area in Cleveland, Ohio.

The aerial imagery has a 30 cm resolution, having been acquired using a Williams ZI Digital Mapping Camera, and was orthorectified prior to download. For the satellite data, a Landsat 5 Thematic Mapper scene was acquired covering the area of interest, with a resolution of 30 m in the bands we are interested in.

This experiment sought to use the much-researched NDVI (normalised difference vegetation index, computed as (NIR − Red) / (NIR + Red)), a simple index used to estimate vegetation presence and health.

Initially, I loaded both datasets into QGIS to get an idea of the resolution differences.


Aerial image overlain on Landsat 5 TM data (green channel)

So, a decent start: it looks like our data is valid in some capacity, and this should be an interesting mini-experiment to run! The ground truth data is detailed enough to let us know how the NDVI is doing, and will be used further downstream.


On to GRASS GIS, which I’ve always known has great features for processing satellite imagery, though I’d never used it myself. It’s also largely built on Python, which is my coding language of choice, so I felt very comfortable troubleshooting the many errors fired at me!

The bands were loaded, the DN-to-reflectance conversion done (automatically, using GRASS GIS routines) and an NDVI raster derived.
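For anyone wanting to reproduce this step, the GRASS Python scripting interface keeps it short. A sketch along these lines should do it (the map names are placeholders, and I’ve abridged the reflectance-conversion options, which depend on your scene’s metadata file):

```python
# Minimal GRASS GIS (7.x) Python sketch of the NDVI step
import grass.script as gs

# DN -> top-of-atmosphere reflectance is handled by i.landsat.toar;
# see that module's manual for the metfile/sensor options matching your scene.
# gs.run_command("i.landsat.toar", ...)

# NDVI from the TM red (band 3) and near-infrared (band 4) reflectance rasters
gs.run_command(
    "r.mapcalc",
    expression="ndvi = float(toar.4 - toar.3) / (toar.4 + toar.3)",
    overwrite=True,
)

# Quick sanity check of the value range (should sit between -1 and 1)
print(gs.parse_command("r.univar", map="ndvi", flags="g"))
```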


Aerial image overlain on NDVI values. Lighter pixels denote a higher presence of vegetation

Cool! We’ve got our NDVI band, and can ground truth it against the aerial photo as planned.


Lighter values were seen around areas containing vegetation

Last on the list was grabbing a vector file with street data so we could limit the analysis to just pixels on or beside streets. I downloaded the data from here and did a quick clip to the area of interest.


Vector road network (in yellow) for our aerial image. Some new roads appear to have been built.

I then buffered the road network vector file and generated a raster mask from it, so that only data within 20 m of a road would be included in the analysis. The result is a first stab at our leafy-streets index!
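In GRASS terms, that buffer-and-mask step is only a few module calls; roughly (again with placeholder map names):

```python
import grass.script as gs

# 20 m buffer around the road centrelines ("roads" is a placeholder map name)
gs.run_command("v.buffer", input="roads", output="roads_buf", distance=20, overwrite=True)

# Rasterise the buffer and use it as a mask, so later raster operations
# only touch cells within 20 m of a road
gs.run_command("v.to.rast", input="roads_buf", output="roads_mask", use="val", value=1, overwrite=True)
gs.run_command("r.mask", raster="roads_mask")

# The masked NDVI is then the "leafy streets" layer
gs.run_command("r.mapcalc", expression="ndvi_streets = ndvi", overwrite=True)
gs.run_command("r.mask", flags="r")  # remove the mask when done
```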


Visual inspection suggests it’s working reasonably well when compared with the reference aerial image; a few cropped examples are shown below.


Lastly, we can use this data to scale things up and make a map of the wider area of Cleveland. This would be simple to do for anywhere with decent road data.

This might be useful for sending people on the scenic route, particularly in unfamiliar locations. Another idea might be to use it in a property search, or to see if there’s a correlation with real estate prices. Right now I’ve run out of time for this post, but I might return to the theme at a later date!