SentinelBot upgraded

I’ve been on a webdev kick since starting a new job, and have recently upgraded SentinelBot as a result. It now filters out snow scenes less often and can handle atmospherically corrected products. I’ll be updating the GitHub repository, and will be writing a post about my current job soon, but for now feast your eyes on some Sentinel goodness 🙂


Giffing the world

I put together a simple Django app for drawing a box on a map and returning a GIF of that box built from the 10 latest Sentinel-2 images, cropped straight from the S3 bucket using rasterio. It was a lot of fun to make, and I’m hosting it on my GitHub here. It’s running on an AWS micro instance so it isn’t the most reliable – but give it a go 🙂
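For the curious, here’s a minimal sketch of the cropping-and-GIF step, assuming a list of Sentinel-2 JPEG2000 band paths on S3 (the path, bounds and filenames below are placeholders – the real app adds the Django front end and the box-drawing UI on top):

```python
import numpy as np
import rasterio
from rasterio.windows import from_bounds
import imageio

def crop_scene(path, bounds):
    """Read only the pixels inside `bounds` (given in the tile's CRS)."""
    with rasterio.open(path) as src:
        window = from_bounds(*bounds, transform=src.transform)
        data = src.read(window=window)    # shape: (bands, rows, cols)
    return np.moveaxis(data, 0, -1)       # shape: (rows, cols, bands)

# Placeholder paths – one single-band scene per acquisition date
scenes = ["s3://sentinel-s2-l1c/tiles/.../B04.jp2"]   # ... latest 10 scenes

bounds = (399960.0, 4190220.0, 409960.0, 4200220.0)   # left, bottom, right, top
frames = [crop_scene(p, bounds).astype(np.uint8) for p in scenes]
imageio.mimsave("box.gif", frames, duration=0.5)      # half a second per frame
```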

[Demo GIF of the app’s output]

Predictions, predictions, predictions

I’ve just listened to the latest episode of Alastair and Andrew‘s podcast, Scene From Above, and the discussion section based around near-future predictions for the Earth Observation (EO) industry, as well as some of the discussion in the news section, was extremely interesting. I’m fully on board the hype train for machine learning booming in EO, while Andrew seems somewhat more skeptical.

Before I go into why I think that’s the case, I’ll mention that Alastair speaks about a Voyager documentary, The Farthest (I’ve actually just noticed that a big Irish producer, Crossing the Line, was involved in its production, wahay!). It sounds absolutely incredible and will go on my watch list, but Alastair’s comments reminded me of an xkcd comic alluding to the fact that the edge of the solar system is difficult to define! I really enjoyed listening to their thoughts on Voyager in general, and would love to hear more discussion around the history of EO as well as wider planetary missions – every time I read and think about Corona, for example, I can’t help but be amazed.


Voyager spacecraft (NASA)


One of the main predictions made within the main section of the podcast is that analysis ready data (ARD) will see wider use and release by data providers. We have already seen a move towards Sentinel-2 ARD, and Planet have recently released their atmospherically corrected surface reflectance product – I would hope this is an indication that ARD provision is already quite well developed!


A figure from Planet’s surface reflectance white paper (source)

On the machine learning (ML) front, I attended a Google Earth Engine workshop at the beginning of this year, and having had fruitful discussions with the host about the project’s direction, I think the iron is hot for ML and the hype justified. In particular, the host spoke about the team preparing TensorFlow integration into the platform in time for AGU next year. Having been lucky enough to participate (albeit not at a competitive level) in the Planet Kaggle competition for classifying image excerpts into one or more classes last year, I have a decent idea of just why there has been a frenzy of research surrounding convolutional neural networks (CNNs) in the computer vision community, and I’m surprised that they haven’t appeared more in EO research.

While Andrew notes that supervised and unsupervised classification have been around and in use for decades, the difference between those and deep-learned information is like night and day in my opinion. Beyond the task itself, the competition gave me a look into how neural networks are transforming image analysis, and how recurrent CNNs at massive scale could be leveraged in an environmental context – for example, linking phenological mapping to data which might explain why a change is happening, with spatial context. Object-based analysis is unparalleled for applications like this, and CNNs are now so easy to use and so much better at handling massive data sets than previous methods. Computer scientists are poised to integrate more and more with the EO community as higher resolution data becomes available, and I feel that once open data with high temporal and spatial resolution arrives, multi-disciplinary research will really kick off. In fact, I put together a starter IPython notebook for bird identification, showing just how easy it is to use a pre-trained CNN for this kind of application, albeit not with EO data.
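As a flavour of how little code that takes these days, here’s a hedged sketch along the lines of the notebook – a pre-trained ImageNet ResNet50 via Keras (the filename is a placeholder, and the actual notebook differs in its details):

```python
import numpy as np
from keras.applications.resnet50 import (ResNet50, preprocess_input,
                                         decode_predictions)
from keras.preprocessing import image

model = ResNet50(weights="imagenet")  # downloads pre-trained weights on first run

# Load and preprocess a single photo (placeholder filename)
img = image.load_img("bird.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top-5 ImageNet labels – many bird species are among the 1000 classes
for _, label, prob in decode_predictions(model.predict(x), top=5)[0]:
    print("{}: {:.2%}".format(label, prob))
```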


Example plot from the IPython notebook

This leads to a prediction of my own – as more imaging scientists move into EO, unmanned aerial vehicle (UAV) and satellite data will need to be better integrated. Currently, there is a raft of problems in linking data collected from consumer-level cameras onboard UAVs to satellite data, not least of which is radiometric normalization. The demand for higher resolution data from the deep learning end of the community will lead to new standards for how UAV data is collected and its metadata stored (shameless plug). EO platforms will begin to integrate publicly collected UAV data, and satellite researchers will begin to collaborate with computer scientists using nearer-earth images. We will then see satellites used as early warning systems, with UAV missions automatically launched off the back of satellite-derived information in a range of new applications.

This isn’t a particularly insightful prediction, but it is one which still hasn’t really been addressed. I’m always surprised by how infrequently satellite and UAV data are used in tandem, but I’m hoping this will change!

That’s all for now – look out for my Google Earth Engine post coming next week. I was blown away by the product and definitely need to do a separate post on it 🙂

Sentinel_bot – now with NIR vision

A quick blog post as I’m very much in the throes of writing! I took a few minutes today to introduce false colour (near infrared – red – green) images into @sentinel_bot’s programming, so now there’s a 20% chance that an image it produces will be false colour. In the near future I think I’ll introduce other band combinations (such as PCA band combos for mineral contrast enhancement), but for now I’m going to let it sit and appreciate some of what it comes up with, such as the image below.
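The selection logic is about as simple as it sounds – a minimal sketch, assuming the Sentinel-2 bands are already loaded into a dict of arrays (names here are illustrative, not the bot’s actual variables):

```python
import random
import numpy as np

def pick_band_combo(bands):
    """Return an RGB stack: a 20% chance of false colour (NIR-R-G)."""
    if random.random() < 0.2:
        combo = ("B08", "B04", "B03")   # near infrared, red, green
    else:
        combo = ("B04", "B03", "B02")   # true colour: red, green, blue
    return np.dstack([bands[b] for b in combo])
```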

Source: https://github.com/JamesOConnor/Sentinel_bot

Twitter: www.twitter.com/sentinel_bot


NIR – R – G image over Argentina

Neural nets in Remote Sensing

Neural nets, a summary: (The chain rule * your GPU RAM)

Around two years ago, I remember having a discussion with Jan Boehm about photogrammetry after my first meeting as the shadow wavelength rep on the Remote Sensing and Photogrammetry committee. He mentioned Agisoft, which I was already using and familiar with at the time, but then pointed to the movement in dense matching algorithms towards the use of neural nets, citing one which had been submitted to the KITTI stereo benchmark.


Disparity map using Žbontar’s methods

This piqued my curiosity, and I remember reading and being quite excited by Jure’s paper. While some of the concepts were new to me, I was struck by the use of convolutional neural networks (ConvNets) and the two types of architecture used to produce the initial matching results, before moving to post-processing with semi-global matching. I remember sinking a great deal of time into reading about the methods, exploring the GitHub repository and the techniques at the core of the paper, and subsequently hounding a colleague, who was using a Titan X for some deep learning work, for time on the card.
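To make the idea concrete, here’s a classical stand-in (not Žbontar’s code) for the piece the ConvNet replaces: a hand-crafted cost for how well a left-image patch matches a disparity-shifted right-image patch.

```python
import numpy as np

def patch_cost(left, right, x, y, d, win=4):
    """Sum of absolute differences between the left patch centred on (x, y)
    and the right patch shifted left by disparity d (lower = better match)."""
    lp = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    rp = right[y - win:y + win + 1, x - d - win:x - d + win + 1].astype(float)
    return np.abs(lp - rp).sum()

# The MC-CNN idea: replace this hand-crafted score with one learned by a
# ConvNet, then refine the full cost volume with semi-global matching.
```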

I took the ideas with me to EGU 2016, and even went to the point of acquiring a data set I thought would be worthy of testing it with from a German photogrammetrist, Andreas Kaiser. Alas, it wasn’t to be, due to hardware limitations and the fact that I wasn’t very familiar with the Lua programming language. However, I had learned a lot about the nature of deep learning, which I felt was a decent investment of my time.

The reason for this blog entry, however, isn’t to enlighten the reader about my failure to get up to speed with neural nets at the time – it’s much more hopeful than that! Fast forward two years, and development within the field of deep learning has come on in leaps and bounds. With serious development time going into TensorFlow, and a beautiful and accessible front end in the form of Keras, the Python user really does have the tools to apply neural nets to all sorts of applications within image-based studies.
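To illustrate just how accessible it has become, here’s an illustrative Keras snippet defining a small ConvNet in a dozen lines (a generic binary image classifier, not any model from the work discussed here):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# A small binary image classifier: two conv/pool stages, then dense layers
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```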

Having learned the basic ideas around neural nets from my initial excitement a long time ago, I decided to try to get involved with the community once more. A few months back, a well-timed Kaggle competition came up which involved image classification, which raised an eyebrow. I contacted an old friend of mine who had just finished his PhD in medical imaging, and we set out to take up the challenge.


The task for the competition involved labeling satellite imagery

Since starting the task, I feel like I’ve come on in leaps and bounds with not only the concepts behind ConvNets, but also their architecture and application in a Python framework. Whilst we generated lots of code (it will be on GitHub in due course) and had lots of ideas floating about, we finished a decidedly average mid-table. This first pass was as much a lesson in organisation as in imaging science, but it’s made me rethink using ConvNets in a Remote Sensing/Photogrammetry environment.

We are seeing more contributions coming out of the community, as the popularity of other, less technical, concepts like support vector machines has shown, and I’m hoping to extend my skill set to include all of these in the future. If anyone who happens to be reading this feels the same, don’t hesitate to get in touch!


Sentinel bot source

I’ve been sick the last few days, which hasn’t helped in staying focused, so I decided to do a few menial tasks, such as cleaning up my references, and some a little more involved but not really that demanding, such as adding documentation to the Twitter bot I wrote.

While it’s still a bit messy, I think it’s due time I started putting some code online, particularly because I love doing it so much. When you code for yourself, however, you don’t have to face the wrath of the computer scientists telling you what you’re doing wrong! It’s actually similar in feeling to editing writing: the more you do it, the better you get.

As such, I’ve been using PyCharm lately, which has forced me to start using PEP 8 styling, and I have to say it’s been a blessing. There are so many more reasons than I ever thought for using a very high-level IDE, and I’ll never go back to hacky Notepad++ scripts, love them as I may.

In any case, I hope to have some time someday to add functionality – for example, having people tweet coordinates and a date at @sentinel_bot and having it respond with a decent image close to the request. This kind of very basic engagement would suit people who mightn’t be bothered going to Earth Explorer, or who are dissatisfied with Google Earth’s mosaicing or lack of coverage over a certain time period.
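As a rough sketch of what the request parsing might look like – the tweet format below is entirely my own invention, not anything the bot currently supports:

```python
import re
from datetime import datetime

# Matches e.g. "@sentinel_bot 53.35, -6.26 2017-08-01" (hypothetical format)
PATTERN = re.compile(r"(-?\d+\.?\d*)[,\s]+(-?\d+\.?\d*)\s+(\d{4}-\d{2}-\d{2})")

def parse_request(text):
    """Pull (lat, lon, date) out of a tweet, or return None if malformed."""
    match = PATTERN.search(text)
    if not match:
        return None
    lat, lon = float(match.group(1)), float(match.group(2))
    date = datetime.strptime(match.group(3), "%Y-%m-%d").date()
    return lat, lon, date
```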

The Sentinel missions offer a great deal of opportunity for scientists in the future, and I’ll be trying my best to think of more ways to engage the community as a result.

Find the source code here – please be gentle, it was for fun 🙂


EO Detective interviews Tim Peake

I saw this on EODetective‘s Twitter account – an interview with Tim Peake about the process behind the astronauts’ photography generated on board the ISS. I’ve actually used a strip of those images before to make a photogrammetric model of Italy, and was very curious about the process behind their capture.

It’s interesting to see they use unmodified Nikon D4s. I was curious about why they were using a relatively small aperture (f/11) for the capture of the images I had downloaded, and while ISO was mentioned, I’m still left wondering. I guess they don’t really think about it, as they are very busy throughout the day, though he did mention they leave the cameras in fully automatic mode most of the time. While you could potentially get better quality images by setting a wider aperture, as per DxOMark’s testing of 24 mm lenses, I’m guessing the convenience of fully-auto settings outweighs the cost.

But that’s not really in the spirit of the interview, which is more to get a general sense of life aboard the ISS.


A sample image from the ISS