EGU 2019

EGU this year was a bittersweet affair, as I didn't actually make the conference myself, despite having two posters presented on my behalf. I enjoy EGU, but this year my aim is to get to a few new conferences, and having already attended the amazing Big Data from Space conference (BiDS) in Munich in February, I'm hungry to branch out as much as possible. Also on the agenda this year are FOSS4G (I have always wanted to go!) and RSPSoc's conference in Oxford (one I think I will go to every year).

That being said, I did still submit two abstracts, both for poster sessions, with colleagues of mine presenting on my behalf. The first was another extension of my PhD work, which focused principally on the image quality of data collected in the field for photogrammetric work and its effect on the accuracy and precision of photogrammetric products.

This extension used new innovations within the field to dive further into this relationship, using Mike James' precision maps (James et al., 2017). In essence, it investigates how stable sparse point clouds are when systematically corrupted with noise (in the camera positions, the camera parameters and the points within the cloud). This research tries to refine a big unknown within bundle adjustment using structure-from-motion: how do we account for variability in the precision of measurement when presenting results? Because bundle adjustment is stochastic, we can never guarantee that our point cloud accurately reflects real life, but by simulating this sensor variation we can get an idea of how stable it is.
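To illustrate the Monte Carlo idea, here is a minimal sketch (not Mike James' actual implementation; `run_bundle_adjustment` is a hypothetical stand-in for whichever solver you use):

```python
import numpy as np

def run_bundle_adjustment(observations):
    """Hypothetical stand-in: takes (n_obs, 2) image observations and
    returns an (n_points, 3) sparse point cloud."""
    raise NotImplementedError

def precision_map(observations, obs_sigma=0.5, n_trials=500):
    """Monte Carlo precision estimate: repeatedly perturb the image
    observations with Gaussian noise, re-run the adjustment, and record
    the spread of each reconstructed 3D point across trials."""
    clouds = []
    for _ in range(n_trials):
        noisy = observations + np.random.normal(0.0, obs_sigma, observations.shape)
        clouds.append(run_bundle_adjustment(noisy))
    clouds = np.stack(clouds)   # (n_trials, n_points, 3)
    return clouds.std(axis=0)   # per-point, per-axis precision estimate
```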

egu2019_final.png

PDF version available here

In all, the research points to compression being generally harmful: point clouds derived from compressed imagery are both less precise and less accurate than those built from uncompressed data. It would be interesting to extend this to other common degradations of image data (blur, over/under-exposure, noise) to see how each of those influences the eventual precision of the cloud.
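As a rough sketch of how such degradations could be simulated before feeding images into a photogrammetric pipeline (using OpenCV and NumPy; the parameter values here are arbitrary):

```python
import cv2
import numpy as np

def degrade(img, blur_ksize=5, exposure_gain=1.5, noise_sigma=10.0):
    """Apply three common degradations to an 8-bit image and return
    each variant separately for comparison."""
    blurred = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)
    overexposed = np.clip(img.astype(np.float32) * exposure_gain,
                          0, 255).astype(np.uint8)
    noisy = np.clip(img.astype(np.float32) +
                    np.random.normal(0.0, noise_sigma, img.shape),
                    0, 255).astype(np.uint8)
    return blurred, overexposed, noisy
```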

Secondly, I submitted a poster about a simple app I made to present Sentinel-2 data to a user. It uses data from an area in Greece, with GeoServer serving the imagery from behind a docker-compose network on an AWS server. It's very simple, but after attending BiDS, I think there is an emerging niche for delivering specific types of data rapidly at regional scales, at the cost of some generality. Many of the solutions at BiDS were fully general, allowing arbitrary scripts to be run on raw data on servers – something comparable to what Sentinel Hub offers. By pruning this back, and using tools like docker-compose, we can speed up the spin-up and delivery of products, and offer solutions that don't need HPCs to run on.
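Under the hood, fetching imagery this way amounts to little more than a standard WMS GetMap request against GeoServer; something like the following (the endpoint and layer names here are hypothetical):

```python
import requests

# Hypothetical GeoServer endpoint and layer name
WMS_URL = "http://example-aws-host/geoserver/wms"
params = {
    "service": "WMS", "version": "1.1.1", "request": "GetMap",
    "layers": "sentinel2:greece_rgb",
    "bbox": "22.0,38.0,23.0,39.0",   # lon/lat bounding box
    "srs": "EPSG:4326",
    "width": 512, "height": 512,
    "format": "image/png",
}
resp = requests.get(WMS_URL, params=params, timeout=30)
resp.raise_for_status()
with open("greece_tile.png", "wb") as f:
    f.write(resp.content)
```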

greece

Sample of the app

Lastly, I’ve simplified my personal website massively in an attempt to declutter. I’ve just pinched a template from GitHub in order to not sink too much time into it, so many thanks to Ryan Fitzgerald for his great work.

aboutme

That’s all for now, I’ll be writing about KisanHub in the next blog!


SentinelBot upgraded

I’ve been on a webdev kick since starting a new job, and have recently upgraded SentinelBot as a result. It now filters snow scenes less often and can handle atmospherically corrected products. I’ll be updating the GitHub repository, and will be writing a post about my current job soon, but for now feast your eyes on some Sentinel goodness 🙂
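For context, one plausible way to flag snowy scenes (illustrative only, and not necessarily what the bot actually does) is the Normalised Difference Snow Index, which for Sentinel-2 uses the green band (B3) and the shortwave infrared band (B11):

```python
import numpy as np

def probably_snow(green, swir, ndsi_threshold=0.4, coverage=0.5):
    """Flag a scene as snowy when a large fraction of its pixels
    exceed the NDSI threshold (0.4 is a commonly used cut-off).
    NDSI = (green - SWIR) / (green + SWIR)."""
    ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)
    return float(np.mean(ndsi > ndsi_threshold)) > coverage
```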


Scene from Above

I’ve been severely neglecting my blog on account of focusing on writing up my PhD project as well as being sick (don’t underestimate the pain of getting your tonsils out as an adult!).

I wanted to write up a decent post for my 100th entry, but have subsequently realised it’s led to me posting nothing for the last couple of months! I have a plan for a good entry coming up, though will need to find the time to put it together.

In the meantime, I picked up that Alistair Graham (geoger), who gave a talk at the conference I ran this year, and Andrew Cutts, whom I have never met but whose straightforward OpenCV GUI demo I once worked through on his website and thought was great, have started a podcast: Scene from Above.

Science communication is tricky at the best of times, so I’m excited they’re giving this style of delivery a crack. The demo episode discusses Sentinel-5P and the larger scope of the Sentinel programme, remap’s webapp and cloud computing more generally, and the launch of a Moroccan satellite.

I think the discussion of the webapp was my favorite part. I appreciated Alistair’s humility in admitting that maybe he was approaching interaction with data from a somewhat outdated point of view, as he seems (as am I!) skeptical of the benefits of a sleek interface. Admittedly the app isn’t designed with me or others in the RS community in mind, but I can’t see it being used much in its current iteration.

Thinking of my ornithologist friends currently in PhDs/postdocs, who would be the target audience for an app like this: they would almost certainly look at it with interest for an hour or two, and never think to use it again. Having consistently tried to get them interested in RS and accurate mapping, I’ve found that tools need to be unbelievably simple before people will consider using them, given how much of other scientists’ time is already dedicated to learning specialist knowledge and general computing skills. It’s one of the many challenges of interdisciplinary work in science!

I’m looking forward to the next episode of the podcast, and hope a forum opens up for discussion online as I think I’d have something to contribute, and would love to hear other people’s opinions on these ideas!

Keep an eye out for a longer update soon 🙂

Sentinel bot source

I’ve been sick the last few days, which hasn’t helped in staying focused, so I decided to do a few menial tasks, such as cleaning up my references, and some slightly more involved but not really demanding ones, such as adding documentation to the Twitter bot I wrote.

While it’s still a bit messy, I think it’s high time I started putting some code online, particularly because I love doing it so much. When you code only for yourself, however, you don’t have to face the wrath of the computer scientists telling you what you’re doing wrong! It’s actually similar in feeling to editing writing: the more you do it, the better you get.

As such, I’ve been using PyCharm lately, which has forced me to start using PEP 8 styling, and I have to say it’s been a blessing. There are so many more reasons than I ever thought for using a very high-level IDE, and I’ll never go back to hacky Notepad++ scripts, love them as I may.

In any case, I hope to find some time someday to add functionality – for example, having people tweet coordinates plus a date at @sentinel_bot and getting back a decent image close to the request. This kind of very basic engagement could serve people who mightn’t be bothered going to Earth Explorer, or who are dissatisfied with Google Earth’s mosaicing or its lack of coverage over a certain time period.
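Parsing such requests could be as simple as a regular expression; a minimal sketch (the tweet format here is hypothetical):

```python
import re
from datetime import datetime

# Matches tweets like "@sentinel_bot 54.5, -3.2 2017-06-01"
REQUEST = re.compile(
    r"(?P<lat>-?\d+(?:\.\d+)?)[,\s]+(?P<lon>-?\d+(?:\.\d+)?)\s+"
    r"(?P<date>\d{4}-\d{2}-\d{2})"
)

def parse_request(text):
    """Extract (lat, lon, date) from a tweet, or None if it doesn't match."""
    m = REQUEST.search(text)
    if m is None:
        return None
    return (float(m.group("lat")), float(m.group("lon")),
            datetime.strptime(m.group("date"), "%Y-%m-%d").date())
```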

The Sentinel missions offer a great deal of opportunity for scientists in the future, and I’ll be trying my best to think of more ways to engage the community as a result.

Find the source code here; please be gentle, it was for fun 🙂


Leafiness

I thought it might be fun to try something different, and delve back into the world of satellite remote sensing (outside of Sentinel_bot, which isn’t a scientific tool). It’s been a while since I’ve tried anything like this, and my skills have definitely degraded somewhat, but I decided to fire up GRASS GIS and give it a go with some publicly available data.

I set myself the simple task of trying to guess how ‘leafy’ streets are within an urban environment from Landsat images. Part of the rationale was that while we could count trees using object detectors, that requires high resolution images. While I might do a blog on this at a later date, it was outside the scope of what I wanted to achieve here, which is at a very coarse scale. I will be using a high resolution aerial image for ground truthing!

For the data, I found an urban area on USGS Earth Explorer with both high resolution orthoimagery and a reasonably cloud-free satellite image, acquired within 10 days of one another. This turned out to be reasonably difficult to find, with the aerial imagery being the main limiting factor, but I found a suitable area in Cleveland, Ohio.

The aerial imagery has a 30 cm resolution, having been acquired using a Williams ZI Digital Mapping Camera, and was orthorectified prior to download. For the satellite data, a Landsat 5 Thematic Mapper raster was acquired covering the area of interest, with a resolution of 30 m in the bands we are interested in.

This experiment sought to use the much-researched NDVI (Normalized Difference Vegetation Index), a simple index used for estimating vegetation presence and health, defined as NDVI = (NIR - Red) / (NIR + Red).

Initially, I loaded both datasets into QGIS to get an idea of the resolution differences:

jezzer.png

Aerial image overlain on Landsat 5 TM data (green channel)

So, a decent start: it looks like our data is valid in some capacity, and this should be an interesting mini-experiment to run! The ground truth data is high-resolution enough to let us know how the NDVI is doing, and will be used further downstream.


Onto GRASS GIS, which I’ve always known has great features for processing satellite imagery, though I’ve never used them. It’s also largely built on Python, which is my coding language of choice, so I feel very comfortable troubleshooting the many errors fired at me!

The bands were loaded, the DN-to-reflectance conversion done (automatically, using GRASS GIS routines), and a subsequent NDVI raster derived.
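In the GRASS Python scripting API the workflow looks roughly like this (the modules are real GRASS modules, but the map names are hypothetical and the exact parameters may vary by GRASS version):

```python
import grass.script as gs

# DN -> top-of-atmosphere reflectance for the Landsat 5 TM bands,
# assuming the bands were imported as rasters named lsat5.1 ... lsat5.7
gs.run_command("i.landsat.toar", input="lsat5.", output="toar.",
               metfile="LT05_metadata.txt", sensor="tm5")

# NDVI from the red (band 3) and near-infrared (band 4) TM bands
gs.mapcalc("ndvi = float(toar.4 - toar.3) / (toar.4 + toar.3)")
```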

ndvi2.png

Aerial image overlain on NDVI values. Lighter pixels denote a higher presence of vegetation

Cool! We’ve got our NDVI band, and can ground truth it against the aerial photo as planned.

ndvi1

Lighter values were seen around areas containing vegetation

Last on the list is grabbing a vector file with street data for the area of interest so we can limit the analysis to just pixels beside or on streets. I downloaded the data from here and did a quick clip to the area of interest.

roads1.png

Vector road network (in yellow) for our aerial image. Some new roads appear to have been built.

I then buffered the road network vector file by 20 m and generated a raster mask from it, so only data within 20 m of a road would be included in the analysis. The result is a first stab at our leafy streets index!
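In GRASS terms, this step might look something like the following sketch (again with hypothetical map names):

```python
import grass.script as gs

# Buffer the road network by 20 m and rasterise the buffer as a mask
gs.run_command("v.buffer", input="roads", output="roads_buf", distance=20)
gs.run_command("v.to.rast", input="roads_buf", output="roads_mask", use="val")

# All subsequent raster reads (e.g. NDVI statistics) now only touch
# pixels inside the mask
gs.run_command("r.mask", raster="roads_mask")
```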

map1.jpg

Visual inspection suggests it’s working reasonably well when compared with the reference aerial image, a few cropped examples are shown below.


Lastly, we can use this data to scale things up and make a map of the wider area in Cleveland. This would be simple to do for anywhere with decent road data.

map3.jpg

This might be useful for sending people on the scenic route, particularly in unfamiliar locations. Another idea might be to use it in a property search, or to see if there’s a correlation with real estate prices. Right now I’ve run out of time for this post, but might return to the theme at a later date!


Blur detection

I thought I’d supplement the recent blog post I did on No-Reference Image Quality Assessment with the script I used for generating the gradient histograms included with the sample images.

I imagine this would be useful as a start for building a blur detection algorithm, but for the purposes of this blog post I’ll just direct you to the script on GitHub here. The script takes one argument, the image name (example: ‘python Image_gradients.py 1.jpg’). Sample input and output are below.
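The core of the idea is sketched below (a rough reconstruction, not the exact script): compute gradient magnitudes with OpenCV and plot their histogram, since sharp images tend to have a heavier tail of strong gradients than blurred ones.

```python
import sys
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image named on the command line, in greyscale
img = cv2.imread(sys.argv[1], cv2.IMREAD_GRAYSCALE)

# Sobel gradients in x and y, combined into a magnitude image
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.hypot(gx, gy)

# Blurred images concentrate near zero; sharp ones have a longer tail
plt.hist(magnitude.ravel(), bins=100, log=True)
plt.xlabel("Gradient magnitude")
plt.ylabel("Pixel count (log scale)")
plt.savefig("Image_gradients.png")
```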

fusion_mertens

Input image

Image_gradients.png

Plot generated