Computational imaging is a process of image formation that necessarily requires computational power to ‘produce an enhanced version of a photograph’. It combines knowledge and techniques from multiple disciplines to do so. Computational imaging is a broad field, and the three applications most relevant to cultural heritage projects are photogrammetry, reflectance transformation imaging, and multispectral photography. Its ease of use, relatively low cost, and reliability make computational imaging a great fit for heritage studies.
With photogrammetry, we measure the location and angle of a camera, the position of the object relative to the camera, and what can be interpreted from these measurements. For Pipp, we used close-range photogrammetry, photographing both handheld and with tripods. Although photogrammetry and computer vision were developed for different purposes, combining them yields structure from motion (SfM). Simply put, SfM algorithms look for overlapping pixels between photographs to reconstruct where the camera was, relative to the object, at each moment of capture.
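To give a feel for the pixel-overlap idea at the heart of SfM, here is a minimal, hypothetical sketch (not the algorithm Metashape uses): it recovers the translation between two overlapping "photographs" by phase correlation, a simple frequency-domain way of finding where their pixels line up. Real SfM pipelines match distinctive features across many photos and solve for full camera poses, but the core intuition of searching for overlap is the same.

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the (row, col) translation between two images via
    phase correlation -- a toy stand-in for the pixel-overlap search
    that SfM pipelines perform between photographs."""
    # Normalised cross-power spectrum in the frequency domain
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    # The peak of the correlation surface marks the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic "photographs": a random scene and a shifted copy of it
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shifted = np.roll(scene, (5, -3), axis=(0, 1))
print(estimate_shift(shifted, scene))  # → (5, -3)
```

Once such correspondences are found across many image pairs, an SfM solver can triangulate where each camera stood, which is what turns a pile of photos into a 3D reconstruction.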
On the software side, we worked with Agisoft Metashape to transform the photos into a digital 3D model. Agisoft offers 30-day free trials of its products, so we felt pressure to accomplish as much as possible within the trial period. We go into more detail on the implications of this in the Agisoft Saga article, and in a later article about accessibility. Finally, we worked on this platform, Voyager, to place our model and curate our narrative. Voyager offers many integration tools, so that 3D models can interact with multimedia and the surrounding space, building a new context for them. However, Voyager often has glitches and bugs that, as with Agisoft Metashape, must be solved through trial and error. In fact, because both rely on black-box algorithms, user solutions are often mere workarounds that do not fix the root cause of a problem.