Control freak

In formulating a research design, I spent a lot of time considering how best to control the experiments I was undertaking. Control, from a geoscientific photogrammetry perspective, can be quite tricky, as the number of settings and pieces of equipment involved means one can quickly lose the run of oneself.

Research planning

In my limited wisdom during the planning phase, I drew up a plan specifying exactly where we would capture imagery from, right down to the OSGB coordinates and orientations of the cameras in the scene, using CloudCompare to help with visualization. I sourced the topographic data from the LiDAR inventory provided by the UK geomatics service, which supplied a DEM at 0.5 m resolution.

FW1.png

A screenshot showing camera positions from my research plan
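As a rough illustration of the kind of geometry involved (not the exact workflow I followed), the sketch below turns a planned camera station and a target point on the cliff face, both given as OSGB easting/northing/elevation, into a viewing azimuth and pitch. The coordinates and function name are hypothetical, purely for illustration.

```python
import numpy as np

def camera_orientation(camera_xyz, target_xyz):
    """Return (azimuth, pitch) in degrees for a camera at camera_xyz
    looking at target_xyz; coordinates are (easting, northing, elevation)
    in metres. Azimuth is measured clockwise from grid north."""
    d = np.asarray(target_xyz, dtype=float) - np.asarray(camera_xyz, dtype=float)
    azimuth = np.degrees(np.arctan2(d[0], d[1])) % 360.0        # atan2(dE, dN)
    pitch = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))  # above horizontal
    return azimuth, pitch

# Hypothetical OSGB coordinates for a station on the beach and a point
# on the cliff face, with elevations read off the 0.5 m DEM.
station = (567410.0, 342880.0, 3.2)
cliff_point = (567428.0, 342885.0, 8.5)
print(camera_orientation(station, cliff_point))
```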

I think this was a very worthwhile task – it stretched the skills I needed to use and made me think about how far I could push the experiment at the planning stage. While maybe overkill, I have visions of a near future where one could task a robot with a built-in RTK-GPS to acquire images from these exact positions and orientations daily for a specified period. This would eliminate much of the bias seen in studies done over the same research area but with different equipment and camera network geometries.

You could argue that this is already happening with programmable UAVs, though I haven't seen anything quite so practical for a terrestrial scene. This is outside the scope of this post, but it did provide motivation for doing as much as possible in the planning phase.

So while we might be able to control camera positions and orientations, in the planning phase at least, there are some things we know are absolutely outside our control. The weather is the most obvious one, but with a cavalier attitude I wondered how I might go about controlling that too. This led me to consider the practicalities of simulating the full SfM workflow.

To attempt this I took a model of Hunstanton which had been generated from a reconnaissance mission to Norfolk last May. It had been produced using Agisoft PhotoScan and exported as a textured '.obj' file, a format I wasn't overly familiar with, but would become so. What followed was definitely an interesting experiment, though I'm willing to admit it probably wasn't the most productive use of time.

Controlling the weather

Blender is an open-source 3D creation suite which I had previously been toying around with for video editing. It struck me that, since Blender has a physically based render engine, there might be reasonable ways of simulating varying camera parameters within a scene, with lighting provided by a sun that we control.

Blender1.png

The Hunstanton obj file, with the Sun included
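For anyone curious how this looks from Blender's Python console, here is a minimal sketch of the scene setup: importing the textured .obj and placing a sun lamp directly overhead. The file path is hypothetical, and the operator names assume a Blender 2.7x-era API (newer releases rename some of them, e.g. light_add instead of lamp_add).

```python
import bpy

# Import the textured Hunstanton model (path is hypothetical).
bpy.ops.import_scene.obj(filepath="/data/hunstanton/hunstanton.obj")

# Add a sun lamp directly overhead; for a sun lamp only the rotation
# matters, and the default orientation points straight down, giving
# flat midday-style lighting.
bpy.ops.object.lamp_add(type='SUN', location=(0.0, 0.0, 50.0))

# Render with the physically based Cycles engine.
bpy.context.scene.render.engine = 'CYCLES'
```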

So the idea here is to put a sun directly overhead and render some images of the cliff by moving the camera in the scene. For the initial proof of concept I took 5 images along a track, using settings imitating a Nikon D700 with a 24 mm lens, focused to 18 m (the approximate distance to the cliff, taken from CloudCompare), with the shutter speed set to 1/500 s (stationary camera) and ISO at 200. The aperture was f/8, but diffraction effects can't be introduced due to limitations of the render engine. The 5 images are displayed below, with settings from the Physical Camera Python plugin included at the end.

Slideshow: the 5 simulated images.
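The shutter speed and ISO were handled through the Physical Camera plugin, whose settings I won't reproduce here, but the plain-bpy side of the setup (lens, sensor, focus distance and the five positions along the track) might look something like the sketch below. Again, the property names assume a 2.7x-era API, and the track coordinates and output path are made up.

```python
import bpy
from math import radians

scene = bpy.context.scene

# Camera imitating a Nikon D700 with a 24 mm lens focused at ~18 m.
bpy.ops.object.camera_add(location=(0.0, -18.0, 1.5))
cam = bpy.context.object
cam.data.lens = 24.0            # focal length in mm
cam.data.sensor_width = 36.0    # full-frame sensor
cam.data.dof_distance = 18.0    # approximate distance to the cliff face
cam.rotation_euler = (radians(90.0), 0.0, 0.0)  # face along +Y towards the cliff
scene.camera = cam

# D700 native image size.
scene.render.resolution_x = 4256
scene.render.resolution_y = 2832
scene.render.resolution_percentage = 100

# Render five images along a track roughly parallel to the cliff.
for i in range(5):
    cam.location.x = -10.0 + 5.0 * i          # hypothetical 5 m baseline
    scene.render.filepath = "/tmp/sim_render_%02d.png" % i
    bpy.ops.render.render(write_still=True)
```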

Full control! We have an absolute reference against which to compare the newly generated model, we can vary the camera settings to simulate the effects of motion blur, noise and focus, and we can then put the degraded image sets through the software!

Plugging these 5 images back into Agisoft, masking the regions where there is no data, produces a new point cloud derived purely from the simulation.

FW2.png

Dense point cloud produced from the simulated images
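The alignment and dense reconstruction can also be scripted from PhotoScan's Python console rather than clicked through the GUI. A rough sketch, assuming a PhotoScan 1.2-era API (method names have shifted between releases) and the hypothetical file paths from the render step, with the no-data masks left to the GUI:

```python
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["/tmp/sim_render_%02d.png" % i for i in range(5)])

# Align the simulated images and build the dense cloud.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)

# Export for comparison against the reference mesh in CloudCompare.
chunk.exportPoints("/tmp/sim_dense.ply")
doc.save("/tmp/sim_project.psz")
```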

We can then load both the reference model and the derived point cloud into CloudCompare and measure the cloud-to-mesh distance.

fw_front

From the front

fw_front2

From the back
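For anyone who would rather script this step too, CloudCompare's command line mode can compute the same cloud-to-mesh distances. A minimal sketch wrapped in Python, with the file paths carried over from the hypothetical examples above:

```python
import subprocess

cmd = [
    "CloudCompare", "-SILENT",
    "-O", "/tmp/sim_dense.ply",                # compared cloud
    "-O", "/data/hunstanton/hunstanton.obj",   # reference mesh
    "-C2M_DIST",                               # cloud-to-mesh distances,
                                               # stored as a scalar field
]
subprocess.run(cmd, check=True)
```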

This is where I left my train of thought, as I needed to return to doing some practical work. I still think there could be some value in this workflow, though it definitely needs to be hashed out some more – the potential for varying network geometry on top of all the other settings is very attractive!

For now though, it’s back to real world data for me, as I’m still producing the results for the fieldwork I did back in October!
