Sentinel bot source

I’ve been sick the last few days, which hasn’t helped me stay focused, so I decided to do a few menial tasks, such as cleaning up my references, and some slightly more involved but not especially demanding ones, such as adding documentation to the Twitter bot I wrote.

While it’s still a bit messy, I think it’s high time I started putting some code up online, particularly because I love doing it so much. When you code for yourself, however, you don’t have to face the wrath of the computer scientists telling you what you’re doing wrong! It’s actually similar in feeling to editing writing: the more you do it, the better you get.

I’ve been using PyCharm lately, which has forced me to start using PEP 8 styling, and I have to say it’s been a blessing. There are many more reasons than I ever thought for using a fully featured IDE, and I’ll never go back to hacky Notepad++ scripts, love them as I may.

In any case, I hope to have some time someday to add functionality – for example, have people tweet coordinates + a date @sentinel_bot and have it respond with a decent image close to the request. This kind of very basic engagement would suit people who mightn’t be bothered going to Earth Explorer, or who are dissatisfied with Google Earth’s mosaicking or lack of coverage over a certain time period.
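As a sketch of how that request parsing might work (the mention format here is entirely hypothetical, since the bot doesn’t support this yet), a small regex could pull the coordinates and date out of a tweet:

```python
import re
from datetime import datetime

# Hypothetical request format: "@sentinel_bot 53.35,-6.26 2017-06-01"
REQUEST_RE = re.compile(
    r"(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)\s+(\d{4}-\d{2}-\d{2})"
)

def parse_request(tweet_text):
    """Extract (lat, lon, date) from a mention, or None if it doesn't match."""
    match = REQUEST_RE.search(tweet_text)
    if not match:
        return None
    lat, lon = float(match.group(1)), float(match.group(2))
    # Sanity-check the coordinates before querying anything
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        return None
    date = datetime.strptime(match.group(3), "%Y-%m-%d").date()
    return lat, lon, date
```

The parsed tuple would then feed the same search-and-render pipeline the bot already runs on random points.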

The Sentinel missions offer a great deal of opportunity for scientists in the future, and I’ll be trying my best to think of more ways to engage the community as a result.

Find the source code here; please be gentle, it was for fun 🙂


WhatsApp Images

One thing I’ve noticed since sharing images across a range of formats and websites is that image compression algorithms vary noticeably between platforms. In my experience this is most evident with WhatsApp, where images seem to be resized without even an anti-aliasing filter. The result is images with huge amounts of speckle in them unless they are resized before uploading.
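To illustrate why skipping that filter matters, here’s a toy pure-Python sketch on a 1D “scanline” of pixel values: naively keeping every nth sample destroys a fine pattern (aliasing, the source of the speckle), while averaging first preserves its overall brightness. This is only an illustration of the principle; I have no insight into what WhatsApp actually does internally.

```python
def subsample(signal, factor):
    """Naive downsampling: keep every `factor`-th sample, no filtering.
    High-frequency detail aliases into the output as speckle."""
    return signal[::factor]

def box_downsample(signal, factor):
    """Downsampling with a box (averaging) filter first: each output
    sample is the mean of `factor` input samples, suppressing aliasing."""
    return [
        sum(signal[i:i + factor]) / factor
        for i in range(0, len(signal) - factor + 1, factor)
    ]

# A fine checkerboard-like pattern: alternating black/white pixels.
pattern = [0, 255] * 8

naive = subsample(pattern, 2)          # keeps only the 0s: pattern destroyed
filtered = box_downsample(pattern, 2)  # averages each pair to mid-grey
```

A real resizer would use a proper low-pass kernel (e.g. Lanczos) rather than a box filter, but the box filter already shows the difference.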

Obviously the target market for WhatsApp isn’t people using high-end cameras to share their images on the application, but it still seems like a couple of functions could fix a lot of the visual problems I see, which would save me having to do it locally.

It seems astounding to me that such a big company wouldn’t put more time into sensible image compression and resizing, or perhaps they have and I’m just catching the exceptions. The blocky artifacts I’ve written about on this blog before, associated with the compression algorithm, are evident. In the third example included, resizing the image to 20% of its original size before compression was applied produced a qualitatively much better result, even with the smaller pixel count on redownload.

Whilst whatever algorithm they are using is likely directed towards smartphone camera users, it still seems like an oversight by the developers. Hopefully WordPress doesn’t apply a similar type of compression when I post this now!

A slippy map for Sentinel bot

Over the weekend I decided to expand what’s in sentinel bot‘s portfolio with an automatically updating slippy map, which plots the point in the world for each image sentinel bot has found, along with the basic metadata of acquisition date and lat/lon. I was trying to get Leaflet’s marker clusterer to work, but to no avail; I couldn’t quite get the knack of it! If anyone has experience with it I’d love to hear from you. I continued with just the pins nonetheless!
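Since the clusterer beat me, the plain-pins version is simple enough that the Leaflet marker code can just be templated out per image. A minimal Python sketch of generating that snippet (the function name and metadata fields are illustrative, not the bot’s actual code, and it assumes a Leaflet `map` object already exists on the page):

```python
def leaflet_markers_js(points):
    """Emit a snippet of Leaflet JavaScript adding one pin per image,
    each with an acquisition-date/lat/lon popup. `points` is a list of
    (lat, lon, date_str) tuples pulled from the bot's metadata."""
    lines = []
    for lat, lon, date in points:
        popup = "Acquired: {}<br>Lat: {:.4f}<br>Lon: {:.4f}".format(date, lat, lon)
        lines.append(
            'L.marker([{:.4f}, {:.4f}]).addTo(map).bindPopup("{}");'.format(
                lat, lon, popup
            )
        )
    return "\n".join(lines)
```

The bot would append one such line to the map page each time it tweets, which is all the “automatically updating” part really needs.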

One really cool GitHub project I used was this, which allows you to cycle through basemap providers for Leaflet and gives you the JavaScript code to insert into your own map. I chose Heidelberg University’s surfer roads for no reason in particular, but may change this in the future. I think I’ll be returning to that project for future slippy maps!

In any case, the product is not perfect, but it gives an interesting view of what the bot’s activity has been for the week it’s been active. I’m not trying to reinvent Earth Explorer, so I’ll probably spend no more time on this, but it was an enjoyable pursuit!

Check the map here.

Sentinel Bot

I’ve been interested in the Sentinel satellite missions, but somehow you can get very distanced from these things unless you’re actively working on them or using their products in some sort of project. As such, I decided I needed a stream of images to keep me interested, and so went about having images pulled down automatically.

On top of this, considering I’m quite fond of Twitter (as the only social media I actively use), I decided to try and have the best of both worlds, so others could share in the Sentinel image goodness.

Having thought about it enough, and having a day free on Saturday, I decided to get to it. I hooked up various parts in an image processing pipeline and sentinel_bot was born. The idea was to have a bot which automatically searches for images that are relatively cloud free and produces a decent-quality image for direct upload to Twitter. It’s having some teething issues (color balance), but I’m tweaking it slightly to try and make sure the images are at least intelligible.
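The core of that search step amounts to filtering scene metadata by cloud cover and picking a candidate. A minimal sketch of how that selection might look (the metadata field names here are illustrative, not those of any real search API, and not the bot’s actual code):

```python
import random

def pick_scene(scenes, max_cloud=10.0, rng=random):
    """Pick a random scene whose cloud cover is at or under `max_cloud`
    percent. `scenes` is a list of metadata dicts as a search API might
    return them; returns None when nothing is clear enough."""
    candidates = [s for s in scenes if s["cloud_cover"] <= max_cloud]
    if not candidates:
        return None
    # Random choice keeps the feed varied rather than always newest-first
    return rng.choice(candidates)
```

The chosen scene’s bands would then go through the rest of the pipeline (stretch, color balance, JPEG export) before being handed to the Twitter API.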

At the minute it’s tweeting once every 40 minutes or so, but I’ll probably slow that down once it’s gotten a few hundred up.

In celebration, I’ve collated 10 interesting ones so far into an album below (click to enlarge); if you want to check it out it’s at www.twitter.com/sentinel_bot

Stereo matching

I love OpenCV for many reasons, but what has me most taken is the extensive documentation provided (I use the Python wrapper, which is brilliant!). Looking through the numerous basic examples shipped with the distribution, and given that image alignment and dense matching is a central theme running through my research, it occurred to me that sharing some of what it has to offer would be a good idea.

As such, I’ve thrown together a web-based implementation of their dense stereo matcher using Python’s convenient CGI module. There’s a lot wrong with it; for the moment it’s set up mainly to deal with the image sets from the Middlebury examples listed on the page. I have run it with my own images too, which will work provided they’re the same size; I haven’t quite dealt with resizing images and the sort.
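For anyone curious what block matching actually does under the hood, here’s a toy pure-Python version on a single scanline using sum-of-absolute-differences; OpenCV’s stereo block matcher does essentially this (plus a great deal of engineering) over the whole rectified image pair:

```python
def disparity_scanline(left, right, block=3, max_disp=4):
    """Toy SAD block matching on one scanline of two rectified images.
    For each block in the left row, search up to `max_disp` pixels
    leftward in the right row and keep the shift with the lowest
    sum-of-absolute-differences cost."""
    half = block // 2
    disp = [0] * len(left)
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        best_cost, best_d = float("inf"), 0
        for d in range(0, max_disp + 1):
            if x - half - d < 0:
                break  # candidate window would run off the image
            candidate = right[x - half - d:x + half + 1 - d]
            cost = sum(abs(a - b) for a, b in zip(patch, candidate))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# A bright feature in the left row appears 2 pixels further left in the
# right row, i.e. a disparity of 2 (a nearer object).
left_row = [0, 0, 0, 0, 9, 9, 9, 0, 0, 0]
right_row = [0, 0, 9, 9, 9, 0, 0, 0, 0, 0]
```

Textureless regions are where this falls apart (every shift costs the same), which is exactly why real matchers add uniqueness checks and texture thresholds.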

The inputs are two URLs to images (.jpg or .png) hosted somewhere; the output after processing is a disparity map generated using the block matching algorithm. It’s being hosted at my website here, and an example is presented below.

Note: It takes about 15 seconds for the disparity map to show

Note2: Now defunct, planning to make into a heroku app!

Disparity map

Website update

Having spent a little bit of time learning about various web apps and making different models to test their capabilities, I’ve gotten around to putting together something of a better repository for the research I’m undertaking. The aim is to host compressed versions of all models that form part of anything I present or discuss in the academic community, for reference by whoever may need it. It’s still a work in progress, but the consolidated design can be seen here. Any feedback would be appreciated! I plan to populate it with other things mentioned on this blog, as well as other ideas I have for thematic maps and photoscans!

GLAS – Spaceborne LiDAR

ICESat was a unique satellite launched in 2003 with the aim of providing accurate topographic information about the Earth’s surface over a number of years. Among its aims were recovery of ice sheet mass balance and cloud property information, as detailed on NASA’s page here. Onboard this satellite was an instrument called GLAS, the Geoscience Laser Altimeter System, a spaceborne LiDAR I’ve been meaning to look at for a long time. Last Sunday afternoon I decided to give it a look, and after a bit of tinkering with the files available here I produced a point cloud showing the Earth’s topography as seen by GLAS, collated over one month in 2003.

The files are structured so that topographic information (the GLAH14 and GLAH15 files I used) is split up by individual day, with 14 orbits per file. I’ve presented one of the files below, as seen in CloudCompare, about 1.3 million points.

GLAS_One

GLAH14 data collected on the 4th March 2003

As you can see, a single day doesn’t recover a detailed scan of the Earth, so collating the month gives us a better idea of the extent of the mission.

GLAS_March

GLAH14/15 data for all of March 2003 – ~64,000,000 points

I really like this, as not only can you see the completeness of the data (I had to mask lots of points), but it gives you a really good idea of what a low Earth orbit looks like. ICESat orbited about 600 km above the Earth’s surface, and this is the pattern produced.

After removing duplicate points we can then compare the heights recovered from each laser pulse. For practicality’s sake I just used height above the WGS84 ellipsoid, which is different from the height above sea level normally used. This just saved a bit of time, one of a couple of corners which were cut to produce the cloud. The result is a global model of surface topography for March 2003, shown below.
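For reference, turning each record’s lat/lon and ellipsoidal height into 3D coordinates for the cloud is just the standard geodetic-to-ECEF conversion on the WGS84 ellipsoid. A minimal version (a sketch of the formula, not my actual processing script):

```python
import math

# WGS84 ellipsoid parameters
A = 6378137.0            # semi-major axis, metres
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert latitude/longitude (degrees) and height above the WGS84
    ellipsoid (metres) to Earth-centred Cartesian XYZ (metres)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z
```

Applying this per pulse gives exactly the orbit-trace globe shape in the renders above; swapping in orthometric heights would need a geoid model on top, which is the corner I cut.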

GLAS_Ref

The final product is pretty interesting and it was a really fun exercise to do. I’m hosting the model on my webspace also, which is viewable here.