EO Detective interviews Tim Peake

I saw this on EODetective's Twitter account – an interview with Tim Peake about the process behind the photographs astronauts capture on board the ISS. I've actually used a strip of them before to make a photogrammetric model of Italy, and was very curious about how they are taken.

Interesting to see they use unmodified Nikon D4s – I was curious why a relatively small aperture (f/11) had been used for the images I had downloaded, and while ISO was mentioned I'm still left wondering. I guess they don't really think about it as they are very busy throughout the day, and he did mention the cameras are left in fully automatic most of the time. While you could potentially get better quality images by setting a wider aperture, as per DxOMark's testing of 24 mm lenses, I'm guessing the convenience of fully-auto settings outweighs the cost.

But that’s not really in the spirit of the interview, which is more to get a general sense of life aboard the ISS.

normed.jpg

A sample image from the ISS

WhatsApp Images

One thing I've noticed since sharing images across a range of formats/websites is that image compression algorithms vary noticeably between platforms. This is most evident, in my experience, with WhatsApp, where images appear to be resized without even an anti-aliasing filter. The result is images with huge amounts of speckle in them if they are not resized before uploading.
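To illustrate what I mean, here's a minimal Python/Pillow sketch – not WhatsApp's actual pipeline, just my guess at what a naive resize looks like – comparing a nearest-neighbour downsample, which aliases fine detail into speckle, with a proper low-pass (Lanczos) resample. The file names are hypothetical.

```python
from PIL import Image

img = Image.open("photo.jpg")  # hypothetical full-resolution photo
small = (img.width // 8, img.height // 8)

# Naive resize: no low-pass filtering, so high-frequency detail
# aliases into the speckle described above.
img.resize(small, resample=Image.NEAREST).save("naive.jpg", quality=85)

# Filtered resize: Lanczos resampling acts as an anti-aliasing filter.
img.resize(small, resample=Image.LANCZOS).save("filtered.jpg", quality=85)
```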

Obviously WhatsApp's target market isn't people sharing high-end camera images through the application, but it still seems like a couple of functions could fix a lot of the visual problems I see, and would save me having to do it locally.

It seems astounding to me that such a big company wouldn't put more time into sensible image compression/resizing – or perhaps they have, and I am catching the exceptions. The blocky artifacts I've previously associated with the algorithm on this blog are also evident. In the third example included, resizing the image to 20% of its original size before compression was applied produced a qualitatively much better result, even with the smaller pixel count of the redownloaded file.

Whilst whatever algorithm they are using is likely geared towards smartphone camera users, it still seems like an oversight by the developers. Hopefully WordPress doesn't apply a similar type of compression when I post this!

Reflecting on Wavelength

Two years ago I agreed to join the committee of the Remote Sensing and Photogrammetry Society (RSPSoc), a professional body whose remit is to promote and educate its members and the public on advancements in Remote Sensing science. When I signed up as the Wavelength representative, I admittedly knew very little about how this society operated, or about societies in general and their function in the greater scope of scientific progress. I took on the role knowing I'd have to learn fast and, after a two year lead period, host a conference focusing on Remote Sensing and Photogrammetry, which would serve to bring early career researchers from both academia and industry together to discuss the latest advancements in RSP science.

The first Wavelength conference I attended, way back in 2015, was at Newcastle, a few months after my first conference experience at the 2014 GRSG meeting in London, which itself came just two months after starting my project.

The difference was apparent, with the GRSG attracting the old guard from all over the world to contribute to the conference. I distinctly remember Nigel Press, a remote sensing veteran and founder of NPA satellite mapping, turning around to the crowd during a Q and A session and pleading with people to start taking risks in funding/supporting hyperspectral satellite missions, as their contributions to geological research were so apparent. I didn't mention it in my write-up from that conference, but it really stuck with me because, at least for that minute, it all seemed so human. Apart from that, though, it was all quite formal and difficult to tell how I, as a novice, could really play a part.

With Wavelength, however, this humanity is what it's all about! When everyone's a novice, you can afford to be a bit more gung-ho with your opinions. As someone who tries to always ask, or at least dream up, a question during the Q and A portion of talks, I loved it. Rich blue-sky discussions have kept me motivated through the inevitable slower portions of writing and finicky data processing in my project, and Wavelength had them in buckets! The fact that I got so much out of it was part of my reason for volunteering to host it, as I felt it would be a way to contribute back to the community and get more involved in RSPSoc.

After an extremely enjoyable and well-run conference at MSSL during the spring of 2016, it was up to me to deliver a conference in Kingston in March 2017, while coordinating the final run-in to my PhD project. While things could definitely have been done better, and I maybe should have been a bit more ruthless about advertising the conference to a wider audience, I have to say I think it ran quite smoothly, and the delegates got a lot out of it, as did I! I'll include a summary of each day below, along with my favourite parts of the three day agenda, including a longer description of one delegate presentation.

Monday 13th March

Delegates arrived at Kingston train station at around 11.30 am. I had enlisted the help of my colleague Paddy to go and meet the delegates, as I had to run up the poster boards to the conference room. After lunch and a quick roll call, things kicked off with 6 talks spanning image processing and Remote Sensing of vegetation.

Andrew Cunliffe, eventual winner of best speaker, showed some captivating UAV footage of Qikiqtaruk, a site where Arctic ecology is being actively researched to gain insight into how the changing ecological and geomorphological landscape appears in observations at different scales. I was interested in his hesitance to call what he was doing with UAVs 'ground truthing' of satellite images, preferring 'evaluation' instead, as ground truth was never really acquired (outside of GCPs for a few of the 3D models). You can check out his profile on Google Scholar, which lists some pretty interesting research!

Monday wrapped up with a meal at a local Thai food restaurant, the Cocoanut, a staple with the Kingston Research folk!

Tuesday 14th March

After a tour of Kingston's town centre in the morning, we returned to the conference venue to listen to Alastair Graham, of geoger fame, give an insightful and extremely helpful talk about career options for Remote Sensing scientists. I felt really lucky to have had the opportunity to host him – truth be told it was a bit of a fluke we crossed paths at all! He had been retweeting some of the tweets from the @sentinel_bot twitter account I had made, which caused me to look at his twitter and subsequently his website. Realising he was organising an RS meeting in Oxford the month before Wavelength (Rasters Revealed), I jumped at the chance to get him on board, and I'm glad I did! I won't go into his use of sli.do, but will only mention that it's worth looking into.

On Tuesday, James Brennan's talk about the next generation of MODIS burnt area products brought me back to my Master's days at UCL, and my time spent with the JRCTIP products. James' talk focused on the binary nature of the classification, and how he was looking into using a DCT to model the behaviour of fires, something like a fuzzy land classification. It was really engaging and I enjoyed his super-relaxed presenting style.

DSC00004.JPG

Delegates eye up some posters

Tom Huntley of Geoxphere also came in to give us a talk on recent advancements at their spinout hardware company, which provides high quality cameras for mapping purposes: the XCam series. Wavelength tries to bridge the gap between industry and academia, and both Tom's and Alastair's talks brought in the industry element I was hoping for.

After a nice meal at Strada Kingston, we hit the bowling alley before wrapping up day 2.

Wednesday 15th March

Wednesday's session opened with delegates talking mainly about data processing. Ed Williamson, from the Centre for Environmental Data Analysis (CEDA), gave a very interesting introduction to the supercomputing facilities they provide (JASMIN), as well as the services offered to clients who choose to use them. They host the entire Sentinel catalogue, an outrageous amount of data, so it was interesting to be given a whirlwind tour of how this is even possible, practically speaking.

We also had the pleasure of listening to José Gómez-Dans from NCEO talk to us about integrating multiple data sources into a consistent estimation of land surface parameters using advanced data assimilation techniques. I had done my Master's thesis with José, and (somewhat) fondly remember trying to interpret charts where the error bars couldn't even be plotted in any reasonable way. This is the reality of EO though: uncertainty is part and parcel of it!

The poster session featured a wide range of topics – I even put up mine from EGU last year. Participants were extremely interested in drought mapping in Uganda, as well as the numerous uses for InSAR data presented. Congrats to Christine Bischoff for winning the best poster award with her investigations of ground deformation in London.

Proceedings wrapped up with deciding on the incoming Wavelength host (congrats to Luigi Parente, of Loughborough Uni) and a lovely lunch in the sun.

DSC00021_crop.jpg

Sunny group shot

Summary

Wavelength was really fun and interesting to organise, and I hope it's a tradition we can keep going as a society. I've made the conference booklet publicly available here. For those of you reading this blog who aren't members, I suggest you join – the benefits are evident.

For now, for me, it's EGU and beyond – I'm also aiming to attend the annual RSPSoc conference at Imperial in September with the latest developments from my fieldwork data!

MP map

Just a quick entry detailing an interactive map showing MPs' constituencies and party membership, created at the request of a friend. It uses leaflet.js and GeoJSON to draw the map, meaning it's standalone HTML which can be easily moved and modified.

mp_map.png

It's based largely on the choropleth example included in the leaflet documentation and was pretty interesting to make!

You can see it at my website here.

Leafiness

I thought it might be fun to try something different and delve back into the world of satellite remote sensing (outside of Sentinel_bot, which isn't a scientific tool). It's been a while since I've tried anything like this, and my skills have definitely degraded somewhat, but I decided to fire up GRASS GIS and give it a go with some publicly available data.

I set myself the simple task of trying to estimate how 'leafy' streets are within an urban environment from Landsat images. Part of the rationale was that whilst we could count trees using object detectors, this requires high resolution images. While I might do a blog post on that at a later date, it was outside the scope of what I wanted to achieve here, which is at a very coarse scale. I will be using a high resolution aerial image for ground truthing!

For the data, I found an urban area on USGS Earth Explorer with both high resolution orthoimagery and a reasonably cloud free image which were within 10 days of one another in acquisition. This turned out to be reasonably difficult to find, with the aerial imagery being the main limiting factor, but I found a suitable area in Cleveland, Ohio.

The aerial imagery has a 30 cm resolution, having been acquired using a Williams ZI Digital Mapping Camera, and was orthorectified prior to download. For the satellite data, a Landsat 5 Thematic Mapper scene covering the area of interest was acquired, with a resolution of 30 m in the bands we are interested in.

This experiment sought to use the much researched NDVI, a simple index used for recovering an estimate of vegetation presence and health.
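For reference, NDVI is just a normalised difference of the near-infrared and red reflectances. A minimal NumPy sketch of the calculation (array names are made up) looks like this:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index, in the range [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-12)  # small epsilon avoids division by zero
```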

Initially, I loaded both datasets into QGIS to get an idea of the resolution differences:

jezzer.png

Aerial image overlain on Landsat 5 TM data (green channel)

So a decent start – it looks like our data is valid in some capacity and this should be an interesting mini-experiment to run! The ground truth data is of high enough resolution to let us know how the NDVI is doing, and will be used further downstream.

 

Onto GRASS GIS, which I've always known has great features for processing satellite imagery, though I've never used them. It's also largely built on Python, which is my coding language of choice, so I felt very comfortable troubleshooting the many errors fired at me!

The bands were loaded, the DN -> reflectance conversion done (automatically, using GRASS GIS routines) and a subsequent NDVI raster derived.
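For anyone wanting to reproduce this, the workflow through GRASS's Python scripting interface looks roughly like the sketch below. The raster names and the metadata file path are hypothetical, and I'm assuming the standard i.landsat.toar and i.vi modules rather than the exact routines I clicked through:

```python
import grass.script as gs

# DN -> top-of-atmosphere reflectance for the imported Landsat 5 TM bands.
# 'lsat5.' is a hypothetical prefix for the band rasters.
gs.run_command("i.landsat.toar",
               input="lsat5.", output="lsat5_toar.",
               metfile="LT05_L1TP_MTL.txt", sensor="tm5")

# NDVI from the red (band 3) and near-infrared (band 4) reflectance rasters.
gs.run_command("i.vi", viname="ndvi",
               red="lsat5_toar.3", nir="lsat5_toar.4",
               output="ndvi")
```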

ndvi2.png

Aerial image overlain on NDVI values. Lighter pixels denote a higher presence of vegetation

Cool! We’ve got our NDVI band, and can ground truth it against the aerial photo as planned.

ndvi1

Lighter values were seen around areas containing vegetation

Last on the list is grabbing a vector file with street data for the area of interest so we can limit the analysis to just pixels beside or on streets. I downloaded the data from here and did a quick clip to the area of interest.

roads1.png

Vector road network (in yellow) for our aerial image. Some new roads appear to have been built.

I then generated a 20 m buffer around the road network vector file, and from this created a raster mask so only data within 20 m of a road would be included in the analysis. The result is a first stab at our leafy streets index!
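Again as a rough GRASS-side sketch (layer names are hypothetical, and I'm assuming the stock v.buffer, v.to.rast and r.mask modules):

```python
import grass.script as gs

# 20 m buffer around the clipped road network.
gs.run_command("v.buffer", input="roads_clipped", output="roads_buf20", distance=20)

# Rasterise the buffer onto the current region grid and use it as a mask,
# so only cells within 20 m of a road survive.
gs.run_command("v.to.rast", input="roads_buf20", output="roads_buf20_rast",
               use="val", value=1)
gs.run_command("r.mask", raster="roads_buf20_rast")

# The masked NDVI is the first stab at the leafy streets index.
gs.mapcalc("leafy_streets = ndvi")
```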

map1.jpg

Visual inspection suggests it's working reasonably well when compared with the reference aerial image; a few cropped examples are shown below.


Lastly, we can use this data to scale things up and make a map of the wider area in Cleveland. This would be simple to do for anywhere with decent road data.

map3.jpg

This might be useful for sending people on the scenic route, particularly in unfamiliar locations. Another idea might be to use it in a property search, or to see if there's a correlation with real estate prices. Right now I've run out of time for this post, but I might return to the theme at a later date!

 

Control freak

In formulating a research design initially, I spent much time considering how best to control the experiments I was undertaking. Control, from a geoscientific photogrammetry perspective, can really be quite tricky, as the number of settings and the amount of equipment involved mean one can quickly lose the run of oneself.

Research planning

In my limited wisdom during the planning phase, I actually drew up a plan demonstrating exactly where we would capture imagery from, right down to the OSGB coordinates and orientations of the cameras in the scene, using CloudCompare to help with visualization. I sourced the topographic data from the LiDAR inventory provided by the UK geomatics service, which provided a DEM with 0.5 m resolution.

FW1.png

A screenshot showing camera positions from my research plan

I think this was a very worthwhile task – it was very demanding in terms of the skills I needed to use and made me think about how far I could bring the experiment in the planning stage. While maybe overkill, I have visions of the near future where one might be able to task a robot with a built-in RTK-GPS to acquire images from these exact positions/orientations daily for a specified time period. This would eliminate much of the bias seen in studies done over the same research area but with different equipment and camera network geometries.

You could argue that this is already happening with programmable UAVs, though I haven’t seen anything that practical for a terrestrial scene. This is outside the scope of this post, but did provide motivation for expanding as much as possible in the planning phase.

So while we might be able to control camera positions and orientations, in the planning phase at least, there are some things we know are absolutely outside our control. The weather is the most obvious one, but with a cavalier attitude I thought about how I might go about controlling that too. This led me to considering the practicalities of simulating the full SfM workflow.

To attempt this I took a model of Hunstanton which had previously been generated from a reconnaissance mission to Norfolk last May. It had been produced using Agisoft Photoscan and output as a textured '.obj' file, a format I wasn't overly familiar with, but would become so. What followed was definitely an interesting experiment, though I'm willing to admit it probably wasn't the most productive use of time.

Controlling the weather

Blender is an open source 3D animation package which I had been toying around with previously for video editing. It struck me that, since Blender actually has a physically based render engine, there might be reasonable ways of simulating varying camera parameters within a scene lit by a sun that we control.

Blender1.png

The Hunstanton obj file, with the Sun included

So the idea here is to put a sun directly overhead and render some images of the cliff by moving the camera through the scene. For the initial proof of concept I took 5 images along a track, using settings imitating a Nikon D700 with a 24 mm lens, focused to 18 m (the approximate distance to the cliff, from CloudCompare), with the shutter speed set to 1/500 s (stationary camera) and ISO at 200. The aperture was f/8, but diffraction effects can't be introduced due to limitations in the render engine. The 5 images are displayed below, with settings from the Physical Camera python plugin included at the end.
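For anyone curious, the bones of this in Blender's Python API (the 2.8+ API, as an assumption) look something like the sketch below. It is a minimal stand-in using the stock camera settings rather than the Physical Camera plugin, and the track coordinates are invented:

```python
import bpy

# Assumes the textured Hunstanton .obj has already been imported into the scene.

# Sun directly overhead.
bpy.ops.object.light_add(type='SUN', location=(0.0, 0.0, 100.0))

# Camera roughly imitating a D700 with a 24 mm lens focused on the cliff.
cam_data = bpy.data.cameras.new("sim_cam")
cam_data.lens = 24.0                 # focal length in mm
cam_data.sensor_width = 36.0         # full-frame sensor
cam_data.dof.use_dof = True
cam_data.dof.focus_distance = 18.0   # approx. distance to the cliff face in metres
cam = bpy.data.objects.new("sim_cam", cam_data)
bpy.context.collection.objects.link(cam)
bpy.context.scene.camera = cam

scene = bpy.context.scene
scene.render.resolution_x = 4256     # D700 image dimensions
scene.render.resolution_y = 2832

# Render 5 frames along a track parallel to the cliff (coordinates are made up).
for i in range(5):
    cam.location = (-20.0 + 10.0 * i, -18.0, 5.0)
    cam.rotation_euler = (1.5708, 0.0, 0.0)  # pitch the camera up to face the cliff
    scene.render.filepath = f"//sim_{i:02d}.png"
    bpy.ops.render.render(write_still=True)
```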


Full control! We have the absolute reference to compare the newly generated model against, we can vary the camera settings to simulate the effects of motion blur, noise and focus, and we can then put the degraded image sets through the software!

Plugging these 5 images back into Agisoft, masking the regions where there is no data, produces a new point cloud derived purely from the simulation.

FW2.png

Dense point cloud produced from the simulated images

We can then load both the model and derived point cloud into CloudCompare and measure the Cloud-to-mesh distance.

fw_front

From the front

fw_front2

From the back

This is where I left my train of thought, as I needed to return to doing some practical work. I still think there could be some value in this workflow, though it definitely needs to be hashed out some more – the potential for varying network geometry on top of all the other settings is very attractive!

For now though, it’s back to real world data for me, as I’m still producing the results for the fieldwork I did back in October!

Too much JPEG!

Having read lots about the JPEG algorithm of late in my investigations of image quality, and having written about its effects on image gradients in my last post, I thought it would be good to include an entry about it on this blog.

Whilst I invite the more curious reader to delve into the nuances of the algorithm, which is closely related to the Fourier transform I've written about previously, today I'll be looking past the black box by testing the same key parameter as in the last post, the one the user has control over: the 'quality' setting. One thing we will note, however, is that the JPEG algorithm operates on discrete 8 x 8 pixel windows, which is one of the more noticeable things when the algorithm is applied at lower quality settings.

Let’s have a look at the impact of varying the quality of a cropped portion (1000 x 1000 pixels) of an image:

The impact at the lower end of the JPEG quality range is dramatic. With the quality set to 1, each 8 x 8 pixel block is essentially assigned a single value, and so the image degrades visibly. As we increase the quality parameter this compression starts to disappear, but at quality 25 we can still see some degree of 'blockiness', as the 8 x 8 pixel windows still vary to a large enough degree.

However, past around quality 50 the impact is much more subtle, and I tend not to be able to tell the difference for images cropped to this size. This illustrates the point: the JPEG algorithm is amazing in terms of how much file size one can save on an image.
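Reproducing this kind of sweep is straightforward with Pillow; here's a rough sketch (file names are hypothetical) that saves the same crop at a range of quality settings and prints the resulting file sizes:

```python
import os
from PIL import Image

crop = Image.open("crop_1000px.png")  # hypothetical lossless source crop

for q in (1, 25, 50, 75, 92, 100):
    out = f"crop_q{q:03d}.jpg"
    crop.save(out, format="JPEG", quality=q)
    print(f"quality {q:3d}: {os.path.getsize(out) / 1024:.1f} KiB")
```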

Let’s take a look at one more set of crops, this time the same image as above, but cropped to just 200 x 200 pixels:

The 'blockiness' is certainly evident at quality 50, and subtler but still noticeable at quality 75. I think the most astounding thing is the lack of perceptible difference between quality 92 and 100, given the file size difference. We can investigate where the differences lie using a comparison image (ImageMagick's compare function), where red pixels show differing values. I will also include the difference image between the two cropped sections, which should offer some insight into the spatial distribution of pixel variations, if any exist:

So the mean variation between digital numbers for pixels in each 8 bit band is 1.5, but the file size saving is nearly 75%! The difference image shows that the digital number differences are concentrated in areas of high frequency information, such as along the cracks in the rock wall – areas which could be very important in delineating boundaries, for example.
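The per-band figure quoted above is, I assume, just the mean absolute difference of the two decoded crops; a NumPy sketch of that check (file names hypothetical) would look like this:

```python
import numpy as np
from PIL import Image

a = np.asarray(Image.open("crop_q092.jpg"), dtype=np.int16)
b = np.asarray(Image.open("crop_q100.jpg"), dtype=np.int16)

diff = np.abs(a - b)                      # per-pixel, per-band absolute difference
print("mean DN difference per band:", diff.mean(axis=(0, 1)))

# Save a contrast-stretched difference image to see where the changes concentrate.
Image.fromarray((diff * 50).clip(0, 255).astype(np.uint8)).save("difference.png")
```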

While subtle, these effects have not been so well documented for work which involves photogrammetric precision – this is one thing I'm working towards within my PhD research. Oftentimes researchers will use JPEGs straight off the camera, which can have custom filters applied prior to use, making reporting and replication more difficult. If we need to compare research done with different equipment, under various lighting conditions and on different days, this is one part of the research workflow which is crying out for standardization, as the effects, at least in this one simple example, are clear.

For a visualization of a stack of every quality setting for the first set of crops, please visit this link to my website.