Raspberry Pi!

I just ordered my first Pi after racking my brain to come up with legitimately useful applications. I’ve a few ideas now, and seeing as the initial supply issues have abated and it’s in stock for a reasonable price, I’ve pulled the trigger.

For those of you who aren’t aware, the charity has just launched a new model with some impressive specs – the processor is about twice as fast as that of the netbook I bought three years ago! Watch this space, as Pi-based tutorials are something I’d love to do in the future. I have high hopes for some (not very demanding) onboard image processing and potential interaction with other devices (like an Arduino or other microcontrollers!).

I made a test project over the weekend which involves a camera input, and I’m wondering how the Pi will cope (it may well be RAM limited, but we’ll see!). I’ll report back with how it performs!
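To give a flavour of what I mean, here’s a minimal sketch of the kind of camera test I have in mind – assuming a USB webcam and OpenCV are available on the Pi (the actual weekend project may look quite different):

```python
# Grab a batch of frames, do a cheap bit of processing on each, and time it.
# Assumes a USB webcam at index 0 and OpenCV installed -- both are assumptions,
# not a description of the real project.
import time
import cv2

cap = cv2.VideoCapture(0)              # first attached camera
if not cap.isOpened():
    raise RuntimeError("Could not open camera")

frames, start = 0, time.time()
while frames < 100:
    ok, frame = cap.read()             # frame comes back as a numpy array
    if not ok:
        break
    # Cheap per-frame processing to exercise the CPU a little
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    frames += 1

elapsed = time.time() - start
print(f"{frames} frames in {elapsed:.1f}s ({frames / elapsed:.1f} fps)")
cap.release()
```

If the frame rate on the Pi turns out to be usable, that’s a good sign for the image processing ideas above.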

Plenoptic cameras

Light field photography has been on my mind lately. I’ve been reading a lot about image processing and attempting to develop open source workflows for photogrammetric products, and I’m very curious how light field cameras could be made to work with existing software.

They work on the basis of a microlens array, which records the direction light rays travel in a scene as well as their intensity. The attribute championed by the pioneers at Lytro is the ability to refocus images after exposure, but for Earth Observation I can imagine they would be very useful for generating more accurate BRDF/reflectance models directly from observation, rather than fitting the uncertain models so frequently relied upon.
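The refocusing trick itself is conceptually simple: once the raw capture is decoded into sub-aperture images, you shift each one in proportion to its offset from the centre of the aperture and sum them. Here’s a toy sketch of that idea on a synthetic light field array – the shapes and the `alpha` parameter are my own illustrative choices, not anything from Lytro’s pipeline:

```python
# Toy shift-and-sum refocusing over a 4D light field L[u, v, y, x],
# where (u, v) indexes the sub-aperture (viewpoint) and (y, x) the pixels.
# The light field here is random noise -- a real one would come from a
# decoded plenoptic capture.
import numpy as np

U, V, H, W = 5, 5, 64, 64
lightfield = np.random.rand(U, V, H, W)    # stand-in for decoded sub-aperture images

def refocus(lf, alpha):
    """Shift each sub-aperture image in proportion to its (u, v) offset
    from the centre and average; alpha selects the virtual focal plane."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

near = refocus(lightfield, alpha=1.0)      # one virtual focal plane
far = refocus(lightfield, alpha=-1.0)      # another, from the same exposure
```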

I’m tempted to pick up one of Lytro’s cheap models to play around with the file formats. I see a few people have written Python libraries to deal with .lfp files using the Python Imaging Library, which I’m a big fan of. It could certainly be an interesting area of development for photogrammetry.
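I haven’t dug into the .lfp parsing itself yet (that’s what those libraries are for), but once the raw sensor image is extracted, slicing it into low-resolution sub-aperture views is straightforward with PIL and numpy. A rough sketch – the filename and the 10-pixel microlens pitch are made-up placeholders:

```python
# Slice an extracted plenoptic raw image into sub-aperture views.
# Assumes the raw sensor image has already been pulled out of the .lfp
# container and saved as a greyscale PNG; the pitch is a placeholder value.
import numpy as np
from PIL import Image

PITCH = 10                                         # assumed microlens pitch in pixels
raw = np.asarray(Image.open("raw_sensor.png").convert("L"), dtype=float)

# Crop to a whole number of microlenses, then take one pixel per lens for
# each (u, v) offset -- each slice is one low-resolution viewpoint.
h = (raw.shape[0] // PITCH) * PITCH
w = (raw.shape[1] // PITCH) * PITCH
raw = raw[:h, :w]

subaperture = {
    (u, v): raw[u::PITCH, v::PITCH]
    for u in range(PITCH)
    for v in range(PITCH)
}
centre_view = subaperture[(PITCH // 2, PITCH // 2)]
print(centre_view.shape)
```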

The combination of plenoptic imagery with LiDAR is a pretty menacing prospect as the two technologies come into the consumer domain, and I’ll be keeping a close eye on what the computer vision crowd (myself included!) can come up with in terms of ideas for using both.

Accidental pinhole cameras

I participate in a weekly imaging discussion group, and recently we watched a very interesting video lecture on ‘accidental pinhole cameras’ by a researcher called Antonio Torralba, who wrote a really good paper on the statistics of natural images here. I think many of the figures in that paper could pass for modern art, particularly some of the scenes in figure 1.

The talk, however, discussed the phenomenon of cameras being produced all around us and detailed some of what we may not consider when we look at a picture or process an image. It’s a pretty interesting data mining exercise, and while its practical application was certainly a point of discussion (I think if one had a very good model of how light scatters around a pinspeck, and knew a lot about the surface onto which it scatters, it might be useful for geoscience), the novelty of the whole thing is great!
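As I understood the pinspeck idea from the talk, a small occluder blocks a narrow bundle of rays, so subtracting a frame with the occluder present from one without it leaves roughly the image a pinhole at the occluder’s position would have seen. A toy version of that subtraction – the filenames are placeholders for two frames of an otherwise static scene:

```python
# Toy pinspeck difference image: occluded frame subtracted from the
# unoccluded one, rescaled for display. Filenames are placeholders.
import numpy as np
from PIL import Image

without = np.asarray(Image.open("frame_no_occluder.png").convert("L"), dtype=float)
with_occ = np.asarray(Image.open("frame_occluder.png").convert("L"), dtype=float)

diff = without - with_occ                  # light removed by the occluder
diff -= diff.min()                         # shift to non-negative values
diff = (255 * diff / max(diff.max(), 1e-6)).astype(np.uint8)
Image.fromarray(diff).save("pinspeck_view.png")
```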

He’s a very good speaker too, you can watch the video talk yourself here.