Image compression

In what is probably my final post on image gradients, I thought I’d include one last mini-experiment on the effect of image compression on image gradient histograms, following on from my previous posts NRIQA! No Reference Image Quality Assessment and Blur detection. Using the same script, I generated gradient histograms of the same image after three different RAW -> JPEG/TIF conversions, with each version converted to 8 bits before analysis.

JPEG ‘quality’ settings relate to the level of compression in the output files relative to the originals. A full review of JPEG compression is beyond the scope of this post, but at a high level this blog post does a very good job of presenting the compression artifacts associated with JPEG images.

The RAW file in this case is a .NEF taken from a Nikon D700.

The three images generated were:

  1. Default RAW -> TIF conversion using ImageMagick (built on dcraw). This is converted to 8 bits using OpenCV within the script. [Size = 70 MB]
  2. Default RAW -> JPEG conversion using ImageMagick. The default ‘quality’ parameter is 92. The image is visually indistinguishable from the much larger TIF. [Size = 3 MB]
  3. RAW -> JPEG conversion using a ‘quality’ setting of 25. The image is visually degraded and blocky. [Size = 600 KB]
TIF image

Default JPEG (‘quality’ = 92)

JPEG with ‘quality’ of 25

In a general sense, the script tells us that there is more high frequency content (abrupt changes in pixel value) in the Y direction of this image. The comparison between the TIF and default JPEG shows almost no difference. At JPEG quality values greater than 90 there is no chroma downsampling, so what small differences exist between the TIF and JPEG are likely not due to the RGB -> gray conversion.

The JPEG at quality 25 shows clear signs of quantization – the blocky artifacts visibly smooth the image gradients, pushing the neighbouring pixel differences towards the center of the histogram range.

It’s interesting that no signs of degradation are visible between the first two images – it’s actually quite difficult to see where the differences are. For one last test, I subtracted one from the other and did a contrast stretch to see where the differences are occurring. This reveals the subtleties of the JPEG compression – at a pixel level the differences range from -16 to +15 DN, with the larger differences seemingly reserved for grassy areas.

Difference image between default TIF and JPEG images (TIF – JPEG)
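
For anyone wanting to reproduce this, a minimal sketch of the subtraction and contrast stretch is below. This is an assumption of the workflow rather than the exact commands I used – the filenames are placeholders and I’m reading 8-bit grayscale versions via OpenCV:

import cv2
import numpy as np

# Read 8-bit grayscale versions of the two conversions (placeholder filenames)
tif = cv2.imread('_DSC8652.tif', cv2.IMREAD_GRAYSCALE).astype(np.int16)
jpg = cv2.imread('_DSC8652.jpg', cv2.IMREAD_GRAYSCALE).astype(np.int16)

# Signed per-pixel difference (TIF - JPEG); here this spans roughly -16 to +15 DN
diff = tif - jpg
print('difference range:', diff.min(), 'to', diff.max())

# Simple contrast stretch: map the signed range linearly onto 0-255 for display
stretched = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('diff.png', stretched)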

Will these subtle changes affect how computer vision algorithms treat these images? How might they affect image matching? Can we envision a scenario where they would matter (if we were calculating absolute units such as radiance, for example)?

Questions which need addressing, in this author’s opinion!


Blur detection

I thought I’d supplement my recent blog post on No-Reference Image Quality Assessment with the script I used to generate the gradient histograms included alongside the sample images.

I imagine this would be useful as a starting point for a blur detection algorithm, but for the purposes of this blog post I’ll just direct you to the script on GitHub here. The script takes one argument, the image name (example: ‘python Image_gradients.py 1.jpg’). Sample input and output are below.

Input image

Plot generated
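
For those who just want the gist without visiting GitHub, here is a rough sketch of the kind of thing Image_gradients.py does – an illustrative reconstruction rather than the actual script (the log-scaled histogram axis, in particular, is my own choice):

import sys

import cv2
import matplotlib.pyplot as plt
import numpy as np

# Read the image named on the command line as 8-bit grayscale
img = cv2.imread(sys.argv[1], cv2.IMREAD_GRAYSCALE).astype(np.int16)

# Neighbouring-pixel differences (simple gradients) in X and Y
gx = np.diff(img, axis=1).ravel()
gy = np.diff(img, axis=0).ravel()

# Histogram the two gradient distributions for comparison
plt.hist(gx, bins=np.arange(-255, 257), histtype='step', log=True, label='X gradients')
plt.hist(gy, bins=np.arange(-255, 257), histtype='step', log=True, label='Y gradients')
plt.xlabel('Neighbouring pixel difference (DN)')
plt.ylabel('Count (log scale)')
plt.legend()
plt.savefig('Image_gradients.png')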


Photo pairs in VisualSFM (VSfM)

One handy feature of VisualSFM which can save a huge amount of time in bundle adjustment is instructing the software on which photos overlap and which don’t. This stops the software trying to match images which have no overlapping area, and generally makes for a much cleaner run.

At the high end, people do this by inputting GPS coordinates as an initial ‘guess’ which the bundle adjustment can then refine. Our solution assumes we know the order and overlap of the input photos, and therefore which matches are possible. From this, we can produce a file of candidate image pairs for speeding up bundle adjustment.

I’ve put together a simple Python script for this, with a few options, for creating the file needed to preselect image pairs. The script assumes photos have been taken in order, in either a ‘linear’ (where the ends don’t meet) or ‘circular’ (where the last photo overlaps the first) configuration, and pairs each photo with x photos either side of it. It needs to be executed in the folder where the image files are located and produces a file named ‘list.txt’, which can be input into VSfM; more instructions are available here.

The script takes 4 parameters.

  1. Number of images in front of/behind the current image with which to make pairs, assuming the images were taken in order
  2. The filetype (case sensitive for now)
  3. The imaging configuration – ‘linear’ if the first image does not overlap the last, ‘circular’ if it does
  4. The delimiter – options are ‘comma’ and ‘space’ (as used in VSfM)

Sample: ‘python Make_list.py 3 tif circular comma’

It can be downloaded from the public GitHub repository here. Hope this helps someone 🙂
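
For illustration, a rough sketch of the pairing logic might look like the code below. This is not the actual Make_list.py – the parameter handling and pairing details are my own assumptions:

import glob
import sys

# Parameters as described above (this sketch does no error checking)
n_pairs = int(sys.argv[1])                 # images ahead with which to pair
filetype = sys.argv[2]                     # e.g. 'tif' (case sensitive)
config = sys.argv[3]                       # 'linear' or 'circular'
delim = ',' if sys.argv[4] == 'comma' else ' '

images = sorted(glob.glob('*.' + filetype))
n = len(images)

with open('list.txt', 'w') as f:
    for i, name in enumerate(images):
        # Pairing forwards only is enough, since the pair (A, B) covers (B, A)
        for j in range(1, n_pairs + 1):
            k = i + j
            if config == 'circular':
                k %= n                     # wrap the final images onto the first
            elif k >= n:
                continue                   # 'linear': no wrap-around at the ends
            if k == i:
                continue                   # avoid pairing an image with itself
            f.write(name + delim + images[k] + '\n')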

NRIQA! No Reference Image Quality Assessment

This post comes from a place quite close to my research field, and honestly is in response to a growing concern at the lack of standardization of imagery within it. I’ve mentioned on this blog before how we can improve reporting on geoscientific imagery, how we can try to incorporate concepts like MTF into that reporting, and the importance of moving towards open-source image sets within environmental research. I have grown envious again of satellite image users, who draw from public data sources – their data can be accessed by anyone!

When you generate your own imagery things become a little trickier, particularly when you may have taken thousands of images and can’t report on them all within the bounds of a scientific text, or don’t have the capacity to host them for distribution from a public server. Producing a short, snappy metadata summary on the quality of each image would go a long way towards addressing this, as something like that is easily included within supplementary materials.

Whilst researchers would ideally include some control images before a survey – of, for example, an ISO chart under specific lighting with the settings to be used – this is massively impractical. The silver bullet for this whole issue would be an objective image quality metric that could score any image, independent of the equipment used and without any reference imagery/ground truth to compare it to (hence ‘No Reference’). This metric would need to account for image sharpness, exposure, distortions and focus, which makes the whole thing phenomenally complicated, particularly where there are factor interactions.

My approach to big automation problems has changed in recent years, in large part due to my increasing knowledge of image processing. The one thing we know is that it’s easy to tell a poor quality image from a good quality image, and we can express the reasons why in logical statements – plenty for a computer scientist to get on with! A functioning, easy to use NRIQA algorithm would be useful far beyond the bounds of the geosciences, so research in the field is very active. In this blog post, I’ll look at an approach which is a common starting point.

Natural image statistics

Antonio Torralba’s paper ‘Statistics of natural image categories’ gave me a great deal of insight into what to consider when thinking about image quality metrics; I happened upon it after seeing a different piece of his work cited in a paper I was reading. I recommend looking over it if you want some sharp insight into really basic ideas of how we distinguish image categories. Image gradients are king, and have always been a key part of image understanding.

His work led me to Chen and Bovik’s paper, which has a very elegant paragraph/figure in the introductory section highlighting how useful gradient analysis can be. They use images from the LIVE database, which I hadn’t come across previously and which has proven an interesting resource.

They point out that, in general, blurred images do not contain sharp edges – sharp images will therefore retain higher amounts of high frequency gradient information (that is, where neighbouring pixels vary by larger amounts). To demonstrate this, I’ve taken an image from the Middlebury stereo dataset and produced gradient distributions for both the original and an artificially blurred version – we can see the effect in the same way Chen and Bovik demonstrate!

Out of curiosity, I added a noise-degraded version, and we can see that noise has the opposite effect on the gradients. I suppose, in this basic case, sharp and noisy images would be hard to distinguish. I also produced versions which were both noisy and blurry; in these the noise dominates the signal and causes the flattening effect seen in the noisy line of the figure.
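
If you want to try this yourself, a minimal sketch of the experiment is below – the blur kernel size and noise level are arbitrary choices of mine, and the filename is a placeholder:

import cv2
import matplotlib.pyplot as plt
import numpy as np

img = cv2.imread('middlebury.png', cv2.IMREAD_GRAYSCALE)   # placeholder filename

# Artificially degraded copies: Gaussian blur and additive Gaussian noise
blurred = cv2.GaussianBlur(img, (15, 15), 0)
noisy = np.clip(img.astype(np.int16) + np.random.normal(0, 15, img.shape),
                0, 255).astype(np.uint8)

# Compare X-gradient distributions: blur narrows them, noise flattens/widens them
for label, im in [('original', img), ('blurred', blurred), ('noisy', noisy)]:
    gx = np.diff(im.astype(np.int16), axis=1).ravel()
    plt.hist(gx, bins=np.arange(-100, 101), histtype='step', log=True, label=label)

plt.xlabel('Neighbouring pixel difference (DN)')
plt.ylabel('Count (log scale)')
plt.legend()
plt.show()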

This is a useful insight that’s quick and easy to demonstrate – a good starting point for this type of analysis. They go on to develop an SVM model trained on sharp and blurry images using similar logic, with what look like some promising results. Within a block of images, we could use this approach to separate out gradient outliers we suspect might be blurry. This would be massively convenient for users, while also ensuring some modicum of quality control.

Perhaps, if we were to cheat a bit, reference images (high quality, from a curated database, scaled to be appropriate for the survey) of the type of environment being investigated could be used for a quick gradient-based quality comparison. One could then move to include global image information, such as the histogram mean, in the metric – for a well exposed image this should sit somewhere near the center of the pixel-value range.
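
As a toy example of that last point, the snippet below scores an image by how far its mean grey level sits from the middle of the 8-bit range. This is just an illustration of the idea, not an established metric, and the filename is a placeholder:

import cv2

img = cv2.imread('survey_image.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder filename

# Distance of the mean grey level from mid-range, rescaled to a 0-1 score
mean_dn = img.mean()
exposure_score = 1.0 - abs(mean_dn - 127.5) / 127.5   # 1 = mid-grey, 0 = fully black/white
print('mean DN: %.1f, crude exposure score: %.2f' % (mean_dn, exposure_score))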

This is a crude starting point perhaps, but a starting point nonetheless, and an area I hope geoscientists using images pay more attention to in the near future. Quality matters!


Sunny 16

Just a short entry to break a long hiatus from the blog (I still have to finish the fieldwork series!).

I was forwarded a paper on photogrammetry which briefly mentioned the Sunny 16 rule, which I hadn’t come across before. This isn’t altogether surprising, as I don’t move much within the digital photography community. However, having recently picked up an a7, I’m moving more that way now, and appreciate how difficult it can be to get the settings right when capturing an image.

The Sunny 16 rule of thumb tells us that on a well lit day, if you set your aperture to f/16, the ISO to a number (say, 200) and the shutter speed to the reciprocal of that number (1/200 s, in this case), you should produce a well-exposed image. I found this useful as it can function as an intuitive benchmark when planning settings for a variety of conditions/scales.
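
As a quick illustration (my own, not from the paper), the rule can be written as a tiny helper that starts from f/16 with a shutter time of 1/ISO and scales for other apertures at the same equivalent exposure:

def sunny16_shutter(iso, aperture=16.0):
    # Base case: f/16 with a shutter time of 1/ISO seconds on a bright, sunny day
    base = 1.0 / iso
    # Each stop wider (smaller f-number) lets in twice the light, so the
    # shutter time scales with the square of the aperture ratio
    return base * (aperture / 16.0) ** 2

print(1.0 / sunny16_shutter(200))        # 200.0 -> 1/200 s at f/16, ISO 200
print(1.0 / sunny16_shutter(200, 8.0))   # 800.0 -> 1/800 s at f/8, two stops wider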

However, my supervisor pointed out it’s probably not as important as the photojournalists’ number one rule of thumb: f/8 and be there!