Digitizing Elmo (Windows)

Just today I participated in the first step of a collaborative project between departments at Kingston which involved a live demo. The product was a point cloud of an Elmo/Monster toy, as shown below. Here I'll go through the steps involved in generating the cloud, so hopefully readers can replicate it themselves!


Our test subject

  • Images

For capturing the data, I used my Canon 500D with a 50mm EF lens, so nothing overly fancy. I started by putting the subject on a raised platform to minimize the effect of reconstructing the ground, and used an aperture of around f/5 to ensure the subject was in focus but the ground was not. A good example of the precautions to take when dealing with low-texture objects is given in this blog post, though the number of images you can process is often limited by the amount of RAM in your computer. As I was somewhat time-limited, I decided to forgo the stability of a tripod, using a fast shutter speed (1/30 s) with an ISO of 400 to compensate, and generally just tried to get a reasonable amount of coverage of the subject. I also took some images with a wider aperture (f/2) and a faster shutter speed (1/50 s). Finally, I threw a few paintbrushes into the scene to add a bit more texture.

The test dataset (some images are very poor, I'm aware!) can be downloaded here.

  • Model building

For convenience, Agisoft PhotoScan (there's a free 30-day trial) was used to build the model, though open-source alternatives exist, such as VisualSFM or MicMac. I've included a short slideshow of the exact steps in PhotoScan below, to hopefully make it easy to follow!

[Slideshow]

  • Level/denoise in CloudCompare

CloudCompare is an open-source point cloud editing package, available here. Because our model is exported without any coordinate system, it can't tell up from down, but we can fix this! In CloudCompare we can use the levelling tool to quickly orient the model so it's a bit easier to view. Another useful tool is the statistical outlier removal filter, under Tools > Clean > SOR filter, though we'll skip it in this case.
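For the curious, the idea behind the SOR filter can be sketched in a few lines of Python. This is a toy NumPy version of the same statistic CloudCompare computes, not its actual implementation, and the parameter defaults are my own:

```python
import numpy as np

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours is more than std_ratio standard
    deviations above the average for the whole cloud."""
    # Full pairwise distance matrix (fine for small clouds;
    # a KD-tree would be needed for real photogrammetric data)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore each point's distance to itself
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

# A dense cluster plus one obvious stray point
rng = np.random.default_rng(0)
cloud = np.vstack([rng.random((200, 3)), [[100.0, 100.0, 100.0]]])
filtered = sor_filter(cloud)
```

Isolated points sit far from their neighbours, so their mean neighbour distance stands out and they get dropped, while the dense surface survives.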

[Slideshow]

  • Preparing to upload

Potree is a free point cloud viewer which can be used to host datasets online. Here we'll just use it in its most basic form to get a minimal example out. This section gets a bit hairier than the others, but hopefully it's intelligible. We'll need to download and unzip both the Potree converter and Potree into the same directory, making a new subdirectory for each: 'Converter' and 'Potree'. Next we'll add the model we saved from CloudCompare to the Potree converter directory, renaming it to 'model.las'. Then we'll follow the slides below!

Note: the command for the fourth slide is 'PotreeConverter.exe model.las -o ../../Potree/potree-1.3/model_out --generate-page model'

[Slideshow]

  • Upload to the web

While there are instructions for Kingston students on how to upload web pages, this is a general skill that's good to have. We use the FileZilla FTP client to log in to our server, and the idea is to upload the entirety of the Potree folder, which contains all the resources necessary for rendering the scene. The actual HTML page where the model is located is stored in the directory potree-1.3/model_out/examples/ , and can be accessed by this once uploaded.


The final version of the model generated is viewable at the directory here –

EDIT: I've lost the original model for this, though I will re-upload a new version soon.

If the Potree stuff is a bit too hairy, CloudCompare on its own is brilliant for toying with models; I recommend giving it some time, as it's an extremely useful package!

  • Conclusion

This is a basic tutorial on how to rapidly get 3D models online using nothing but a handheld camera and a laptop. Including taking the images, this process took around 20 minutes, and it can be sped up in many ways (including taking better but fewer images). The CloudCompare step can be skipped to speed things up even further, but having a 'ground floor' plane is, in my opinion, almost a necessity for producing a model.

This is not intended to be best-practice photogrammetry, or even close; it's intended to give an overview of modern photogrammetric processes and how they can be applied to rapidly generate approximations of real-world objects. These can then be cleaned, and models generated for use in applications such as 3D printing, videogames or interactive galleries.

  • Complete software list

Agisoft PhotoScan (free 30-day trial), CloudCompare, PotreeConverter and Potree, and FileZilla. VisualSFM and MicMac are open-source alternatives to PhotoScan.


Automatic raster masks

Recently, I came across a situation where, when making a photogrammetric model, up to two-thirds of each input image was not relevant to the subject being investigated. Since I planned on doing extensive tests on the dataset, I needed to generate at least partial masks to avoid too much computational redundancy. A back-of-the-notepad calculation suggested that running the models as planned without masks would take around 21 days, so I set about easing the process by building some masks.

Here I'll present a two-camera case, from Middlebury's stereo datasets, for simplicity. NOTE: I don't list any code here, but it can be found in an IPython notebook HTML here, on GitHub here, or from here.

OpenCV (version 2.4.11 in Python) was my first port of call, and it has a very convenient function called 'findContours'. This scans the image for edges (the input is a 'threshold' image generated beforehand) and pulls out consistent shapes. Since the area of interest was quite distinct from the rest of the image, this was fairly easily done, though it left a certain amount of relevant data masked. In this example I target a green cone in the stereo scene:

We can call on a second function, 'BoxPoints', which returns the corners of the minimum bounding box for the area the contour pulled out. This more or less split the image into a single bounding rectangle enclosing the area of interest. We can also specify maximum and minimum search sizes (in pixels), which helps with the image segmentation. This could be convenient for generating classified images without going through the process manually.

I can think of two situations where this may be convenient for bigger datasets where manual masking might be impractical:

  1. Where only a portion of the image is relevant to the results. For instance, when shooting an object on stage (such as in Middlebury’s multi-view examples) you can reduce the search space for the feature detectors. This could also be relevant for masking out sky, sea, or other low-value/low-relevance areas in a scene.
  2. Where camera positions are known from an earlier bundle adjustment, but a higher resolution is desired in only a portion of a scene, or for a specific object.

The masking code provided in the attached IPython notebook HTML is very basic, and is meant to guide any Python aficionados towards making their own inroads into the ideas presented here. I've just presented a minimum working example for those curious enough! Other photogrammetric applications include measuring the areas of objects in an image (particularly relevant for, say, microscopy), or simple object tracking when the camera is moving under stable lighting conditions.

Data visualisation

I haven't posted in a while, so I thought I'd make a quick post about some of my favorite data visualizations I've come across lately. The more I read about these, the more I want to improve the graphics I produce myself, so if you're looking for inspiration, look no further! In no particular order:

Markov Chains

Basic as the visuals are, this really gives a good feel for what finite-state problems look like. You can modify it with your own code too!

Markov Chains
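If you want to play along in code, a two-state chain takes only a few lines; the weather states and transition probabilities below are made up for illustration:

```python
import random

# Transition probabilities: P[state][next_state] (illustrative values)
P = {"sunny": {"sunny": 0.9, "rainy": 0.1},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}

def step(state, rng):
    """Sample the next state from the current state's transition row."""
    r = rng.random()
    cum = 0.0
    for nxt, p in P[state].items():
        cum += p
        if r < cum:
            return nxt
    return nxt  # guard against floating-point round-off

def simulate(start, n_steps, seed=42):
    """Run the chain for n_steps transitions and return the visited states."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n_steps):
        states.append(step(states[-1], rng))
    return states

path = simulate("sunny", 10)
```

Swap in your own states and probabilities and the same loop covers any finite-state problem of this kind.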

Bayes' rule/Conditional probability

From the same blog. Bayesian stats can be a bit daunting; let this visualization of balls dropping through a filter calm you down as needed. Interactive to boot!

Conditional Probability
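You can mimic the balls-through-a-filter idea with a quick simulation; the probabilities below are invented for illustration, but the estimate converges on what Bayes' rule predicts:

```python
import random

def estimate_conditional(n=100_000, seed=1):
    """Estimate P(red | passed) by dropping n balls through a filter.
    Illustrative setup: a ball is red with p=0.3; red balls pass the
    filter with p=0.8, other balls with p=0.2."""
    rng = random.Random(seed)
    passed = red_and_passed = 0
    for _ in range(n):
        red = rng.random() < 0.3
        if rng.random() < (0.8 if red else 0.2):
            passed += 1
            red_and_passed += red
    return red_and_passed / passed

est = estimate_conditional()
```

Bayes' rule gives P(red | passed) = 0.3 * 0.8 / (0.3 * 0.8 + 0.7 * 0.2) ≈ 0.63, and the simulated estimate lands right on it.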

Fourier analysis

Just beautiful graphics, putting simply what so many hours of reading couldn't. Probably my favorite on the list due to the depth it covers!

Fourier analysis
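The core trick the animations illustrate, decomposing a signal into its component frequencies, takes only a few lines of NumPy:

```python
import numpy as np

# Sample 2 seconds of a signal containing 5 Hz and 12 Hz components
fs = 100                          # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# The FFT recovers the frequencies: the two largest spectral
# magnitudes sit at exactly 5 Hz and 12 Hz
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
```

Here the components fall exactly on FFT bins, so the peaks come out clean; real signals need windowing, which is where those hours of reading come in.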


Not something I'm overly familiar with, but I've bookmarked it because of how nice the graphics are to look at. Search is such a basic concept, and such a necessity for modern computing; I love the simplicity with which it's presented.


Blend4web curiosity app

Some might call it gimmicky, but I think the ability to scroll through the cameras while the robot moves is just such a cool feature.



I can’t believe this is freeware. It’s amongst the best tools on the internet for point cloud viewing and the design is brilliant!



From the DIY category: seaborn is a plotting library for making graphs in Python, built on top of matplotlib. It produces some beautifully crafted graphics! I love the joint plots.

Seaborn joint plot


Actually a pretty standard library, it seems; I can't believe how long it took me to find it. I'm preparing some interactive graphics for upcoming conferences, and Bokeh makes it so simple to do! I particularly like the Lorenz example.

Bokeh Lorenz
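Under the hood, the Lorenz example is just an integration of the Lorenz equations; here's a minimal sketch of that integration (a simple Euler step with the classic parameter values, plotting left to Bokeh):

```python
import numpy as np

def lorenz(n_steps=10_000, dt=0.01, sigma=10.0, rho=28.0, beta=8 / 3):
    """Integrate the Lorenz system with a basic Euler step."""
    xyz = np.empty((n_steps, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(n_steps - 1):
        x, y, z = xyz[i]
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        xyz[i + 1] = xyz[i] + dt * np.array([dx, dy, dz])
    return xyz

trajectory = lorenz()
```

Feed the three columns to any plotting library and the butterfly attractor appears; the interactivity is what Bokeh layers on top.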

Stamen mapping skins

Some very attractive base layers for your mapping needs. I think I'll have to give making a base layer a go at some stage, but for now I can appreciate the possibilities…


100,000 stars

Last on our list, one from the astronomers: an in-browser interactive environment for exploring our stellar neighborhood!

100,000 stars