This is probably my final post on image gradients, so I thought I’d include one last mini-experiment: the effect of image compression on image gradient histograms, following my previous posts NRIQA! No Reference Image Quality Assessment and Blur detection. Using the same script, I generated gradient histograms of the same image under three different RAW -> JPEG/TIF conversions, analysing each in 8 bits.
JPEG ‘quality’ settings relate to the level of compression in the files vs. the originals. A full review of JPEG compression is beyond the scope of this post, but at a high level this blog post is very good at presenting the compression artifacts associated with JPEG images.
The RAW file in this case is a .NEF taken from a Nikon D700.
The three images generated were:
- Default RAW -> TIF conversion using ImageMagick (built on dcraw). This is converted to 8 bits using OpenCV within the script. [Size = 70 MB]
- Default RAW -> JPEG conversion using ImageMagick. The default ‘quality’ parameter is 92. The image is visually indistinguishable from the much larger TIF. [Size = 3 MB]
- RAW -> JPEG conversion using a ‘quality’ setting of 25. The image is visually degraded and blocky. [Size = 600 KB]
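For anyone wanting to reproduce the three conversions, they can be scripted with ImageMagick’s `convert` (which hands NEF files off to a dcraw delegate). This is a minimal sketch; the filenames are illustrative, and the actual `subprocess.run` call is left commented out so the snippet runs without ImageMagick installed.

```python
# Sketch of the three RAW conversions via ImageMagick's `convert`.
# Assumes a dcraw delegate is available for .NEF input; filenames are examples.
import subprocess

def conversion_command(src, dst, quality=None):
    """Build an ImageMagick convert command; `-quality` only matters for JPEG output."""
    cmd = ["convert", src]
    if quality is not None:
        cmd += ["-quality", str(quality)]
    cmd.append(dst)
    return cmd

jobs = [
    conversion_command("image.nef", "image.tif"),          # default TIF
    conversion_command("image.nef", "image_q92.jpg", 92),  # default JPEG quality
    conversion_command("image.nef", "image_q25.jpg", 25),  # heavily compressed
]
for cmd in jobs:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run the conversion
```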
In a general sense, the script tells us that there is more high frequency content (abrupt changes in pixel value) in the Y direction within this image. The comparison between the TIF and the default-quality JPEG shows almost no difference. JPEG compression at quality values greater than 90 applies no chroma downsampling, so the differences between the TIF and JPEG images are unlikely to come from the RGB -> gray conversion.
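The core of the gradient-histogram idea can be sketched in a few lines: take neighbouring-pixel differences along each axis of the 8-bit image and histogram them. This is a simplified stand-in for the script from the earlier posts, not its exact code, and the input here is a synthetic array rather than one of the converted images.

```python
# Minimal sketch: histogram neighbouring-pixel differences in X and Y.
import numpy as np

def gradient_histograms(img, bins=np.arange(-255.5, 256.5)):
    img = img.astype(np.int16)            # avoid uint8 wrap-around on subtraction
    dx = np.diff(img, axis=1).ravel()     # horizontal neighbour differences
    dy = np.diff(img, axis=0).ravel()     # vertical neighbour differences
    hx, _ = np.histogram(dx, bins=bins)
    hy, _ = np.histogram(dy, bins=bins)
    return hx, hy

# Synthetic 8-bit "image" purely for demonstration.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
hx, hy = gradient_histograms(img)
```

A strongly peaked histogram around zero means mostly smooth content; heavy tails mean abrupt pixel-value changes in that direction.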
The JPEG at quality 25 shows clear signs of quantization – the blocky artifacts are visibly smoothing the image gradients, pushing the neighbouring-pixel changes towards the centre of the histogram range.
It’s interesting that no signs of degradation are visible within the first two images, and it’s actually quite difficult to see where the differences are. For one last test, I subtracted one image from the other and applied a contrast stretch to see where the differences are occurring. The subtleties of the JPEG compression are revealed – at a pixel level the differences range from -16 to +15 DN, with the larger differences seemingly reserved for grassy areas.
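The difference-and-stretch step looks something like the following. This is a sketch, assuming the two converted images are already loaded as matching 8-bit grayscale arrays; the inputs below are synthetic placeholders, with the JPEG simulated as the TIF plus small perturbations.

```python
# Signed difference of two 8-bit images, contrast-stretched to 0-255 for display.
import numpy as np

def stretched_difference(a, b):
    diff = a.astype(np.int16) - b.astype(np.int16)   # signed, no uint8 wrap-around
    lo, hi = int(diff.min()), int(diff.max())        # e.g. -16 and +15 in the post
    if hi == lo:
        return np.zeros_like(a, dtype=np.uint8), (lo, hi)
    stretched = (diff - lo) * 255.0 / (hi - lo)      # map [lo, hi] onto [0, 255]
    return stretched.astype(np.uint8), (lo, hi)

# Placeholder inputs: a random "TIF" and a perturbed "JPEG" version of it.
rng = np.random.default_rng(1)
tif_img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
jpeg_img = np.clip(
    tif_img.astype(np.int16) + rng.integers(-16, 16, (32, 32)), 0, 255
).astype(np.uint8)

out, (lo, hi) = stretched_difference(tif_img, jpeg_img)
```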
Will these subtle changes affect how computer vision algorithms treat these images? How will they affect image matching? Can we envision a scenario where they would matter (if we were calculating absolute units such as radiance, for example)?
Questions which need addressing, in this author’s opinion!