Generating reasonably accurate 3D models from video would be a great help to the photogrammetric community, as it would reduce the expertise needed for image acquisition. To test how dense a point cloud VisualSFM could generate from a shaky video input, I captured a 22 second video with my Canon 500D at 720p (30 FPS) and extracted every 6th/10th frame using a MATLAB snippet. The resulting ~130 images at 1280×720 resolution were loaded into VisualSFM, feature points were matched, and a dense cloud was generated; some screenshots are shown below.
I plan to redo this using Python to make it fully open source, and to write an IPython notebook explaining each step, so watch this space! I think I’ll revise the input a little, or maybe reshoot the video and add some high resolution stills to see what effect they have.
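The frame-extraction step is the easiest part to port. Below is a minimal sketch of what the Python version might look like, using OpenCV to grab every Nth frame; the video filename, output folder, and step size are placeholders, not the exact values from my run.

```python
import os
import cv2

VIDEO_PATH = "input_video.mov"   # placeholder path to the source video
OUT_DIR = "frames"               # placeholder folder for the extracted frames
STEP = 6                         # keep every 6th frame (try 10 for a sparser set)

os.makedirs(OUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
frame_idx, saved = 0, 0
while True:
    ok, frame = cap.read()       # ok is False once the video runs out of frames
    if not ok:
        break
    if frame_idx % STEP == 0:
        # write the frame as a JPEG that VisualSFM can load directly
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{saved:04d}.jpg"), frame)
        saved += 1
    frame_idx += 1
cap.release()
print(f"Saved {saved} of {frame_idx} frames")
```

Changing STEP trades off overlap between consecutive images against the total number of images VisualSFM has to match, so it is worth experimenting with for shaky footage.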
The input video –
The views in VisualSFM showing the sparse cloud –
Some dense point cloud screenshots from CloudCompare –