MLHub launched

I’ve been following the Radiant Earth project for months now, having originally seen its CEO, Anne Hale Miglarese, speak at RSPSoc’s conference in Birmingham in 2018.

They’ve released an initial version of MLHub, which anyone can open an account with. To celebrate, I’ve put together a done-in-20-minutes notebook, available for all to see, which displays the dataset shapes and labels on a map.

Definitely going to keep a close eye on this in the future!

Data shapes and labels from MLHub displayed on a map.

Zappa

Zappa is a Python library which hugely simplifies the deployment of web apps by using AWS Lambda functions (‘serverless’). In essence, the library packages up an existing app, for example a Flask application, and generates the endpoints required as Lambda functions.
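
To make this concrete, below is a minimal sketch of what a deployment might look like. The app module, AWS region, bucket name and stage are placeholder values, not from a real project; the settings file shown follows the zappa_settings.json format that running zappa init generates.

    # app.py – a toy Flask app to hand to Zappa (illustrative only)
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from Lambda!"

The settings file tells Zappa where to find the Flask object and how to deploy it (the keep_warm flag relates to the cold start issue discussed below):

    {
        "dev": {
            "app_function": "app.app",
            "aws_region": "eu-west-1",
            "s3_bucket": "my-zappa-deployments",
            "keep_warm": true
        }
    }

With those in place, zappa deploy dev packages the app and wires it up to a public endpoint.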

Why is this useful?

Running servers, at least at the hobbyist level, can be pricey, especially if the app requires lots of resources. Lambda functions are perfect for demo applications, or things which are only infrequently needed, as you pay only for the time the function is actually running, billed by the millisecond on AWS.

The downsides?

Generally speaking, Lambda functions have a slower start-up time than a 24/7 server. When a request comes in for a given function, if the function has not been called recently, it needs to be created before the request can be processed; this ‘cold start’ can be quite a high overhead for functions with many dependencies. Zappa helps out with this by keeping the function ‘warm’ – periodically sending a request to stop it being torn down. Even so, if a Lambda function gets bursts of requests, it can take time to spin up clones of the function, limiting its effectiveness in production environments.

Examples:

The terracotta library, which I’ve mentioned on here before, is a great example of how effective Lambda functions can be. Vincent Sarago’s NDVI time series is another great example.

SentinelBot upgraded

I’ve been on a webdev kick since starting a new job, and have recently upgraded SentinelBot as a result. It now filters snow scenes less often and can handle atmospherically corrected products. I’ll be updating the GitHub repository and writing a post about my current job soon, but for now, feast your eyes on some Sentinel goodness 🙂


Predictions, predictions, predictions

I’ve just listened to the latest episode of Alastair and Andrew‘s podcast, Scene From Above, and the discussion section on near-future predictions for the Earth observation (EO) industry, along with some of the discussion in the news section, was extremely interesting. I’m fully on board the hype train for machine learning booming in EO, while Andrew seems somewhat more skeptical.

Before I go into why I think that’s the case, I’ll mention that Alastair speaks about a Voyager documentary, The Farthest (I’ve just noticed a big Irish producer, Crossing the Line, was involved in its production, wahay!). It sounds absolutely incredible and will go on my watch list, but Alastair’s comments reminded me of an xkcd comic alluding to how difficult the edge of the solar system is to define! I really enjoyed listening to their thoughts on Voyager in general, and would love to hear more discussion of the history of EO, as well as of wider planetary missions – every time I read and think about Corona, for example, I can’t help but be amazed.


Voyager spacecraft (NASA)


One of the main predictions made within the main section of the podcast is that analysis ready data (ARD) will see wider use and release by data providers. We have already seen a move towards Sentinel-2 ARD, and Planet have recently released their atmospherically corrected surface reflectance product, so I’d hope this is an indication that ARD is already quite well developed!


A figure from Planet’s surface reflectance white paper (source)

On the machine learning (ML) front, I attended a Google Earth Engine workshop at the beginning of this year, and having had fruitful discussions with the host about the project’s direction, I think the iron is hot for ML and the hype justified. In particular, the host spoke about the team preparing TensorFlow integration into the platform in time for AGU next year. Having been lucky enough to participate (albeit not at a competitive level) in last year’s Planet Kaggle competition, classifying image excerpts into one or more classes, I have a decent idea of why there has been a frenzy of research around convolutional neural networks (CNNs) in the computer vision community, and I’m surprised they haven’t appeared more in EO research.

While Andrew notes that supervised and unsupervised classification have been around and in use for decades, the difference between those and deep-learned information is night and day in my opinion. Beyond the task itself, the competition gave me a look into how neural networks are transforming image analysis, and how recurrent CNNs at massive scale could be leveraged in an environmental context – for things like linking phenological mapping to data which might explain why a change is happening, with spatial context. Object-based analysis is unparalleled for applications like this, and CNNs are now both easy to use and far better at handling massive data sets than previous methods. Computer scientists are poised to integrate more and more with the EO community as higher resolution data becomes available, so I feel that once open data arrives at high temporal and spatial resolution, multi-disciplinary research will really kick off. In fact, I put together a starter IPython notebook for bird identification, showing just how easy it is to use a pre-trained CNN for this application, albeit not with EO data.
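
For a flavour of how little code this takes, here is a minimal sketch of the approach (not the notebook itself – the image path is a placeholder, and ResNet50 simply stands in for whichever pre-trained network you prefer):

    # Classify an image with a pre-trained CNN via Keras (sketch).
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")  # downloads ImageNet weights on first run

    img = image.load_img("bird.jpg", target_size=(224, 224))  # placeholder path
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
        print(label, round(float(score), 2))

A dozen lines to go from a JPEG to ranked class predictions – this is why the barrier to entry is so much lower than it used to be.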


Example plot from the IPython notebook

This leads to a prediction of my own: as more imaging scientists move into EO, unmanned aerial vehicle (UAV) and satellite data will need to be better integrated. Currently there is a raft of problems in linking data collected from consumer-level cameras onboard UAVs to satellite data, not least radiometric normalisation. The demand for higher resolution data from the deep learning end of the community will lead to new standards for how UAV data is collected and its metadata stored (shameless plug). EO platforms will begin to integrate publicly collected UAV data, satellite researchers will begin to collaborate with computer scientists using nearer-earth images, and we will then see satellites used as early warning systems, with UAV missions launched automatically off the back of satellite-derived information in a range of new applications.

This isn’t a particularly insightful prediction, but it’s one which still hasn’t really been addressed. I’m always surprised by how infrequently satellite and UAV data are used in tandem, but I’m hoping this will change!

That’s all for now – look out for my Google Earth Engine blog coming next week. I was blown away by the product and definitely need to do a separate post on it 🙂

RSPSoc Annual Conference

I had a great time at the RSPSoc conference yesterday, and very much enjoyed catching up with some of the people I made friends with at Wavelength this year. This is a short entry just to make available the slides of both Mike (my supervisor) and myself; my talk’s primary focus was on image quality in photogrammetric work. Unfortunately I think I filled my slides a little too much and probably could have included about half the content, but somehow I couldn’t stop adding plots from the beautiful seaborn library – lesson learned!

Link to Mike slides

Link to my slides

Looking forward to writing a blog on RAW to JPEG conversions very soon – check out the undemosaiced sneak preview below 😉

The undemosaiced sneak preview.


Joypy

Not one to miss a fad in data visualisation, I noticed joyplots getting a lot of attention over at reddit’s dataisbeautiful subreddit and have had a go at producing some myself. I’m hoping to integrate them into a talk I’m giving this Wednesday as part of RSPSoc‘s annual conference, and am hoping they make enough sense to include.

I’m tinkering with the joypy library, a set of scripts whose sole purpose is to produce these types of plots, built on top of matplotlib and pandas, and sitting nicely alongside the excellent (and frequently used by myself) seaborn plotting library.
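
For a flavour of the API, here is a sketch with synthetic data – the column names, groups and parameters are invented for illustration rather than taken from my conference plots:

    # Produce a joyplot from a grouped DataFrame (sketch).
    import joypy
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    df = pd.DataFrame({
        "value": rng.normal(loc=np.repeat([0, 1, 2, 3], 200), scale=1.0),
        "group": np.repeat(["A", "B", "C", "D"], 200),
    })

    # One stacked density ridge per group; overlap controls how much they overlay
    fig, axes = joypy.joyplot(df, by="group", column="value", overlap=2)
    plt.show()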

For now, I need to get off the fad wagon and keep on writing!


A sample joyplot I’ve produced.

Sentinel_bot – now with NIR vision

A quick blog post, as I’m very much in the throes of writing! I took a few minutes today to introduce false colour (near infrared – red – green) images into @sentinel_bot’s programming, so there’s now a 20% chance that an image it produces will be false colour. In the near future I think I’ll introduce other band combinations (such as PCA band combos for mineral contrast enhancement), but for now I’m going to let it sit, and appreciate some of what it comes up with, such as the image below.
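
The change itself is tiny – something along these lines, where load_band is a hypothetical stand-in for the bot’s actual read step and the band names follow Sentinel-2 conventions:

    # Pick false colour ~20% of the time, true colour otherwise (sketch).
    import random

    import numpy as np

    def compose_image(load_band):
        if random.random() < 0.2:
            bands = ("B08", "B04", "B03")  # false colour: NIR, red, green
        else:
            bands = ("B04", "B03", "B02")  # true colour: red, green, blue
        return np.dstack([load_band(b) for b in bands])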

Source : https://github.com/JamesOConnor/Sentinel_bot

Twitter : www.twitter.com/sentinel_bot


NIR – R – G image over Argentina

Gamify it

I’ve been planning and chipping away at writing up the last three years of work into a coherent thesis for the last six months or so. It’s very interesting to look back at the reams of planning documents, literature reviews and interim results documents I’ve produced over this time!

Knowing what, and how much, to write on each topic is a bit of a dark art, however; the initial targets I’ve set are very loose, but I think they’re important for forming some sort of structure to grow the report into. As a bit of a tongue-in-cheek joke, I produced some ‘progress bar’ style bar charts, one for each chapter planned for the final report, and have been updating them day by day. The satisfaction of seeing them creep up has actually been surprisingly effective at getting me into writing mode each day!

I’ve gone with a traffic light colour palette: the top bar indicates how many words I planned to write, the second the word count to date, and the bottom the upper limit I’ve set myself. I know obsessing over word counts is a massive waste of time, and I don’t worry about them too much at all, but I couldn’t pass up an opportunity for some opportunistic data visualisation!
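
For anyone who fancies gamifying their own write-up, a rough sketch of the chart is below – the chapter names and word counts are invented, and the colours only approximate my traffic lights:

    # Per-chapter 'progress bars': target, written and upper-limit word counts.
    import matplotlib.pyplot as plt
    import numpy as np

    chapters = ["Intro", "Methods", "Results", "Discussion"]  # placeholders
    target = [4000, 8000, 10000, 6000]
    written = [3200, 5100, 2400, 800]
    upper = [6000, 12000, 14000, 9000]

    y = np.arange(len(chapters))
    h = 0.25
    fig, ax = plt.subplots()
    ax.barh(y + h, target, height=h, color="gold", label="Target")
    ax.barh(y, written, height=h, color="yellowgreen", label="Written")
    ax.barh(y - h, upper, height=h, color="tomato", label="Upper limit")
    ax.set_yticks(y)
    ax.set_yticklabels(chapters)
    ax.invert_yaxis()  # first chapter at the top
    ax.set_xlabel("Words")
    ax.legend()
    plt.tight_layout()
    plt.show()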


Standard summary report I’ve been producing

Neural nets in Remote Sensing

Neural nets, a summary: (The chain rule * your GPU RAM)

Around two years ago I remember having a discussion with Jan Boehm about photogrammetry, after my first meeting as the shadow Wavelength rep on the Remote Sensing and Photogrammetry committee. He mentioned Agisoft, which I was already using and familiar with at the time, but also pointed to a movement in dense matching algorithms towards neural nets, citing one method which had been submitted to the KITTI stereo benchmark.


Disparity map using Žbontar’s methods

This piqued my curiosity, and I remember reading and being quite excited by Jure Žbontar’s paper. While some of the concepts were new to me, I was taken by the use of convolutional neural networks (ConvNets) and the two architectures used to produce the initial disparity estimates, before post-processing with semi-global matching. I sank a great deal of time into reading about the methods, exploring the GitHub repository at the core of the paper, and subsequently hounded a colleague, who was using a Titan X for some deep learning work, about it for quite a while.

I took the ideas with me to EGU 2016, and even went as far as acquiring a data set I thought would be worthy of testing from a German photogrammetrist, Andreas Kaiser. Alas, it wasn’t to be, due to hardware limitations and the fact that I wasn’t very familiar with the Lua programming language. I had, however, learned a lot about the nature of deep learning, which felt like a decent investment of my time.

The reason for this blog entry, however, isn’t to enlighten the reader about my failure to get up to speed with neural nets at the time – it’s much more hopeful than that! Fast forward two years, and the field of deep learning has come on in leaps and bounds. With serious development time going into TensorFlow, and a beautiful, accessible front end in the form of Keras, the Python user really does have the tools to apply neural nets to all sorts of applications within image-based studies.
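
To illustrate just how accessible Keras makes this, here is a sketch of a small ConvNet defined in a handful of lines – the input size is a placeholder, and the multi-label sigmoid output simply mirrors the style of tagging task in the competition mentioned below, not any model we actually used:

    # A small ConvNet for multi-label image tagging (sketch).
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),       # placeholder image size
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(17, activation="sigmoid"),  # one unit per possible tag
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.summary()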

Having learned the basic ideas behind neural nets during that initial burst of excitement, I decided to try to get involved with the community once more. A few months back a well-timed Kaggle competition came up which involved image classification, which raised an eyebrow. I contacted an old friend of mine who had just finished his PhD in medical imaging, and we set out to take up the challenge.


The task for the competition involved labeling satellite imagery

Since starting the task, I feel like I’ve come on in leaps and bounds with not only the concepts behind ConvNets, but also their architecture and application in a Python framework. While we generated lots of code (it will be on GitHub in due course) and had lots of ideas floating about, we finished decidedly mid-table – this first pass was as much a lesson in organisation as in imaging science, but it has made me think again about how ConvNets could be used in a remote sensing/photogrammetry environment.

We are seeing more such contributions coming out of the community, and the continued popularity of less complex methods like support vector machines shows the appetite is there, so I’m hoping to extend my skill set to include all of these in the future. If anyone who happens to be reading this feels the same, don’t hesitate to get in touch!


Django greyscales

Access the application here.

I’ve been learning lots about the Django web framework recently, as I’m hoping to take some of the ideas developed in my PhD and turn them into public applications that people can apply to their research. One example of something easily distributed as a web application is the code which generates greyscale image blocks from RGB colour images, a theme touched on in my poster at EGU 2016.

Moving from a suggested improvement (as per the poster) using a complicated non-linear transformation to actually applying it in the general SfM workflow is no mean feat. For this contribution I’ve used Django, along with my processing methods (all written in Python, the base language of the framework), to make a minimum working example on a public web server (Heroku) which takes an RGB image as input and returns the same image processed with a number of greyscaling algorithms (many discussed in Verhoeven, 2015). These processed files can then be re-downloaded and used in a bundle adjustment to test the differences between greyscale image sets. While it isn’t set up for bulk processing, the functionality could easily be extended.
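
As a sketch of the core upload-and-process flow (not the actual application code – the view name, template name, form field and the single Pillow greyscaling here are simplifications of what the app really does):

    # A minimal Django view: accept an image, cap its size, return a greyscale.
    import base64
    import io

    from django.shortcuts import render
    from PIL import Image

    def upload(request):
        context = {}
        if request.method == "POST" and request.FILES.get("image"):
            img = Image.open(request.FILES["image"]).convert("RGB")
            img.thumbnail((1000, 1000))  # longest side capped at 1,000 px
            grey = img.convert("L")      # stand-in for the im_proc.py methods
            buf = io.BytesIO()
            grey.save(buf, format="PNG")
            context["grey_b64"] = base64.b64encode(buf.getvalue()).decode()
        return render(request, "upload.html", context)

The template then just drops the base64 string into an img tag; the real application runs the full family of greyscaling algorithms rather than Pillow’s single built-in conversion.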


Landing page of the application, not a lot to look at I’ll admit 😉

To make things more intelligible, I’ve uploaded the application to GitHub so people can see its inner workings, and potentially clean up any mistakes present in the code. Many of the base methods were collated by Verhoeven in a Matlab script, which I spent some time translating into equivalent Python code; these methods live in the support script im_proc.py.

Many of the methods aim to maximise the objective information within one channel, and are quite similar in design, so it can be a difficult game of spot the difference. The scale can also end up inverted, which shouldn’t really matter to photogrammetric processing, but does give an interesting effect. Lastly, the second principal component gives some really interesting results, and I’ve spent a long time poring over them – I’ve certainly learned a lot about PCA over the last few years.
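
To give a feel for two of the flavours involved – a fixed luminosity weighting and the PCA projections just mentioned – here is a hedged numpy sketch (not the code from im_proc.py itself):

    # Two illustrative greyscalings of an (H, W, 3) RGB array.
    import numpy as np

    def luminosity(rgb):
        # Fixed Rec. 601 weighting of the R, G and B channels
        return rgb @ np.array([0.299, 0.587, 0.114])

    def pca_grey(rgb, component=0):
        # Project each pixel onto a principal component of the RGB cloud
        flat = rgb.reshape(-1, 3).astype(float)
        flat -= flat.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(flat.T))  # 3x3 channel covariance
        pc = vecs[:, ::-1][:, component]          # eigh sorts ascending; reverse
        grey = (flat @ pc).reshape(rgb.shape[:2])
        grey -= grey.min()                        # note: the sign (and so the
        return 255 * grey / grey.max()            # scale) can come out inverted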


Sample result set from the application

You can access the web version here. All photos are resized so they’re under 1,000 pixels in the longest dimension (though this can easily be modified), and the results are served up in a grid, as per the screengrab. Photos are deleted after upload. There’s pretty much no styling applied, but it’s functional at least! If it crashes, I blame the server.

The result is a cheap and cheerful web application which will hopefully introduce people investigating image pre-processing to the visual differences between greyscaling algorithms. I’ll be looking to make more simple web applications to support my current research in the near future, as public engagement is a key feature which has been lacking from my PhD thus far.

I’ll include a few more examples below for the curious.

Slideshow of further example results.