Decimating a video and generating a point cloud

Generating reasonably accurate 3D models from video would be a great help to the photogrammetric community, as it would reduce the expertise needed for image acquisition. To test how dense a cloud VisualSFM could generate from a shaky video input, I captured a 22-second video with my Canon 500D at 720p (30 FPS) and extracted every 6th/10th frame using a MATLAB snippet. The resulting ~130 images at 1280×720 resolution were loaded into VisualSFM, feature points were found, and a dense cloud was generated; some screenshots are shown below.

I plan to redo this in Python to make it fully open source, and to write an IPython notebook explaining each step, so watch this space! I may also revise the input a little, or re-shoot the video with some high-resolution stills added, to see what effect they have.
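As a rough sketch of what that Python version might look like, the frame-decimation step could be done with OpenCV. This is a minimal sketch, not the code I used; the file names, output pattern, and step size below are placeholders:

```python
def decimation_indices(n_frames, step):
    """Indices of the frames to keep when taking every `step`-th frame."""
    return list(range(0, n_frames, step))


def extract_frames(video_path, step=6, out_pattern="frame_{:04d}.png"):
    """Read a video and write every `step`-th frame to disk.

    `video_path` and `out_pattern` are placeholder names. Returns the
    number of frames written.
    """
    # Imported here so decimation_indices works without OpenCV installed.
    import cv2  # pip install opencv-python

    cap = cv2.VideoCapture(video_path)
    kept = 0
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or read error)
            break
        if idx % step == 0:
            cv2.imwrite(out_pattern.format(kept), frame)
            kept += 1
        idx += 1
    cap.release()
    return kept
```

For a 22-second clip at 30 FPS (~660 frames), every 6th frame gives about 110 images and every 10th about 66, which is in the right ballpark for the ~130 images used here.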

The input video –


The views in VisualSFM showing the sparse cloud –


Some dense point cloud screenshots from CloudCompare –


James
