I had a special opportunity recently at the ANU Classics Museum, where I got to 3D scan some of its ancient Greek and Roman artefacts. The items I worked with ranged from 1,800 to 2,500 years old, and are on permanent display in special cabinets in the Museum.
Some items, such as this red-slip plate, are certainly fine to look at, but the most fascinating part of the plate is on the back, where you can see the fingerprints of the artist: when they dipped the plate in glaze, the contact points of their fingers stayed dry. These kinds of stories are enchanting, but few people get to see the fingerprints because the plate is locked in a glass cabinet that doesn’t let you turn it around.
I approached Elizabeth Minchin, the curator of the Museum, with the idea of making some of the collection more visible to the public through 3D scanning. The aim was to display the 3D scans on the ANU Classics Museum website as interactive models that visitors can spin around and zoom in on. This way, each artefact can be fully appreciated with all of its little details – even the fingerprints.
I spent the day taking hundreds of photographs of eight precious items (don’t drop it.. don’t drop it.. !!) from as many angles as I could, with the aim of processing them through a technique called stereophotogrammetry – the process of building 3D objects out of a series of photos. It all relies on the object being scanned having distinct features, and often on some attention to detail in the surrounding background. Solid blocks of strong colour can help, and easily definable shapes can help too.. because the software tracks how these colours and shapes are positioned differently across the series of photos, and from that figures out the distances and angles between them.
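The photogrammetry software keeps its internals to itself, but the underlying idea – finding the same distinctive points in overlapping photos – can be sketched with open-source tools. Here’s a minimal example using OpenCV’s ORB detector to find and match features between two photos (the filenames are hypothetical); a full photogrammetry pipeline does this across hundreds of photos and then triangulates the camera positions and 3D points from the matches.

```python
import cv2

# Load two overlapping photos of the artefact (hypothetical filenames)
img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive feature points and compute a descriptor for each
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two photos; cross-checking keeps only
# matches that agree in both directions
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences between the two photos")
```

The more distinct features an object carries, the more (and better) correspondences like these the software can find – which is exactly why plain terracotta turns out to be a hard case.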
Some of the ancient artefacts I worked with look beautiful in person, but to a computer, the terracotta items for instance would be seen as “brown thing.. with brown bit.. and some brown”, which makes it hard to track which brown bit is which over a series of photos. So, I made my own special mounting block out of canvas, covered with a map of colours and shapes I drew that would help the software know “if I can see a blue side of a box with a white T on it, I must be looking at the right-hand side of the item… if I can see a purple plus symbol and a grey letter N, then I must be looking from above.”
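Purely for intuition, here’s a toy sketch of that “which face can I see?” logic: counting how much of the frame falls within each painted colour’s hue range using OpenCV. The filename and hue ranges are made up, and real photogrammetry software tracks individual feature points rather than whole colour blocks, but it shows why big, strongly coloured shapes give the software something unambiguous to latch onto.

```python
import cv2

# Hypothetical photo of the artefact on its painted mounting block
img = cv2.imread("photo_014.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Approximate OpenCV hue ranges (H runs 0-179) for each painted face;
# these ranges are illustrative guesses, not measured values
faces = {
    "right-hand side (blue, white T)": ((100, 80, 80), (130, 255, 255)),
    "top (purple plus, grey N)": ((135, 80, 80), (160, 255, 255)),
}

for name, (lo, hi) in faces.items():
    mask = cv2.inRange(hsv, lo, hi)  # pixels within this colour range
    fraction = mask.mean() / 255     # share of the frame they cover
    print(f"{name}: {fraction:.1%} of frame")
```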
To further aid the software, I used a table spread made from a kids’ cotton print that was left over from a quilt my mum made. I must say that the Museum’s curator was very restrained and polite in her shocked “Oh my!” reaction to walking into the room and seeing her beautiful 2,000-year-old statue of the goddess Nike sitting on a bright blue blanket with various coloured puppies on it, but after understanding the process and seeing the early results, her faith was restored.
I have been using Agisoft PhotoScan to do the photogrammetry processing, and I’m pretty impressed. There is some cleanup to do, however: fringes of light on the edge of an item can fool the software, resulting in little light-coloured edges and floating blobby extras showing up on the items, and removing these is a manual process that takes time.
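Inside PhotoScan the blob removal is hand work, but if you export the mesh, the free-floating extras can also be cleared out programmatically. Here’s a rough sketch using the open-source Open3D library (the filenames are placeholders): it groups the mesh into connected clusters of triangles and keeps only the largest one, on the assumption that the artefact is a single connected surface and anything detached from it is debris. The light fringes that are attached to the item’s edges still need manual editing, though.

```python
import numpy as np
import open3d as o3d

# Load the mesh exported from the photogrammetry software
mesh = o3d.io.read_triangle_mesh("artefact_scan.ply")

# Label each triangle with the connected cluster it belongs to
clusters, cluster_sizes, _ = mesh.cluster_connected_triangles()
clusters = np.asarray(clusters)
cluster_sizes = np.asarray(cluster_sizes)

# Keep the largest cluster (the artefact); drop the floating blobs
largest = cluster_sizes.argmax()
mesh.remove_triangles_by_mask(clusters != largest)
mesh.remove_unreferenced_vertices()

o3d.io.write_triangle_mesh("artefact_scan_clean.ply", mesh)
```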
Still, it is a fantastic non-contact method of scanning static objects, and the detail in the finished product is pretty awesome.
Have a look for yourself 🙂
Click this image to load the 3D model. Once loaded:
Click and drag the left mouse button to rotate the model.
Roll the mouse wheel to zoom in and out.
Click and drag the right mouse button to move the item around.