For our project, we used Structure from Motion (SfM), a photogrammetric technique that creates 3D models from 2D photographs taken from different angles. The software matches visual points across the images, estimates the camera positions, and reconstructs both the shape and texture of the object (Papadopoulos 2024). To process our photos, we used Metashape, a photogrammetry software that turns overlapping images into a textured 3D model using SfM and dense image matching.
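For readers who want to script this step rather than use the graphical interface, the sketch below shows roughly what such a pipeline looks like in the Metashape Professional Python API. This is a minimal illustration, not our exact procedure: we assume a recent (1.6+/2.x) version of the API, method names and parameters differ between versions, and the photo directory and output path are hypothetical examples.

```python
# Minimal sketch of an SfM + dense-matching pipeline in the Metashape
# Professional Python API. Assumes a 1.6+/2.x API; names vary by version.
# The paths below are hypothetical examples, not our actual files.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Load the overlapping photographs of the object.
chunk.addPhotos(glob.glob("doll_photos/*.jpg"))

# Structure from Motion: detect and match features, then estimate camera poses.
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()

# Dense image matching: depth maps feed a textured mesh of the object.
chunk.buildDepthMaps(downscale=2)
chunk.buildModel(source_data=Metashape.DepthMapsData)
chunk.buildUV()
chunk.buildTexture(texture_size=4096)

doc.save("doll_model.psx")
```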
Once we understood the method and tools, we began photographing the doll. Right away, we faced several challenges. The first was her voluminous hair: if it moved between shots, the software could treat the shifted strands as different objects and fail to match them across images. To keep it stable, we braided it and applied fixing spray.
Another issue was her reflective plastic surface, which made it difficult for the software to identify consistent features. To reduce the shine, we brushed on baby powder and then applied an anti-reflection spray, commonly used in photography, to dull the surface.
After our first test in Metashape, we noticed that her arms and legs did not come out well in the model. They lacked texture, and there were visible holes in those areas. To improve feature detection, we applied colorful stickers to different parts of the doll’s body. This made alignment easier, but we did not realize how difficult it would be to remove them digitally later without affecting the final model.
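The effect of the stickers can be illustrated outside Metashape. Metashape uses its own feature detector, but a generic detector such as ORB in OpenCV demonstrates the same principle: smooth, low-texture surfaces yield very few matchable keypoints, while patterned surfaces yield many. The sketch below is only an illustration of that idea, and the filenames are hypothetical.

```python
# Illustrative sketch: counting ORB keypoints with OpenCV to show why
# low-texture regions align poorly. Not part of the Metashape workflow;
# the image filenames are hypothetical examples.
import cv2

def count_keypoints(image_path: str, n_features: int = 2000) -> int:
    """Return the number of ORB keypoints detected in an image."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints = orb.detect(image, None)
    return len(keypoints)

# A smooth plastic limb typically yields far fewer keypoints than the same
# limb covered with colorful stickers, which is why alignment improved.
print("plain limb:   ", count_keypoints("limb_plain.jpg"))
print("with stickers:", count_keypoints("limb_stickers.jpg"))
```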
We then retook the photos with the additional surface preparation and stickers. However, after processing the second set, we realized that the doll’s necklace, which was important for our narrative, had been left out. As a result, we had to redo the entire photoshoot, this time making sure the necklace was included. The final version was successfully processed in Metashape.