Saturday, November 5, 2016

Photogrammetry and Me - 3D Models from Photos

As I type this post, my computer wants to autocorrect the word photogrammetry; I promise I didn't make it up.  Actually, photogrammetry is hardly a new concept, but it is one that I feel hasn't received the attention it deserves.  The basic premise is to use many static photos and lots of computing power to map out an object or area in 3D space.

Perhaps a simpler application of similar technology is photo stitching.  This feature is present in most mobile camera apps that can create panorama or photosphere pictures.  Photogrammetry adds its secret sauce by recovering depth information from the overlapping images, which it then uses to create a 3D model or scene.
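To get a feel for the math hiding under that secret sauce, here is a minimal sketch of the core idea: if you know where two cameras are and where the same point appears in each photo, you can triangulate that point's 3D position.  This is a simplified, noise-free illustration with made-up camera numbers (the intrinsics, camera positions, and the point are all hypothetical), not how any particular photogrammetry package works internally.

```python
import numpy as np

# Hypothetical pinhole camera intrinsics (focal length and image center).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Camera 1 at the origin; camera 2 shifted one unit to the side.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    """Project a homogeneous 3D point X into pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation from two pixel observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# A made-up point in front of both cameras.
X_true = np.array([0.2, -0.1, 1.0, 1.0])
x1, x2 = project(P1, X_true), project(P2, X_true)

X_hat = triangulate(P1, x1, P2, x2)
print(np.round(X_hat, 3))  # recovers the original 3D point
```

Real photogrammetry software repeats this for thousands of matched points across dozens of photos, while also solving for the camera positions themselves; but each recovered point in the final mesh comes from essentially this calculation.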

123D Catch by Autodesk is one of the easiest and most polished examples of this kind of software.

Here's a quick capture I did on my desk at work.  The Post-it notes are there to help with reference points, and you can see that the process captured a little bit of the surrounding area as well.  From here, the model can be imported into another Autodesk app like Meshmixer to be cleaned up.  The whole process is extremely easy for anyone, even young children, to master.

Where to go from here?

This same principle is being used to create larger 3D scans of entire scenes.  Check out the model below of a scene stitched together using drone footage.  I find this type of scan particularly intriguing and can't wait to experiment with my students. Stay tuned for more posts on the subject :).  Comment below and join in on the conversation if you have experience using photogrammetry in the classroom or have plans to do so.

Once you have created a 3D model of a real-world object or scene, bringing those models into Virtual Reality is getting easier and easier.  Check out some of my other posts about using VR viewers to show 3D models.

Sketchfab and the Wonderful World of WebVR
Dynamic 3D Viewers for VR

Additional resources for those researching photogrammetry: