On a tablet, Blaise Agüera y Arcas takes us on a walk through Edinburgh Castle, less than a mile from the TEDGlobal 2013 theater. With a swipe of his finger, we fly through the 15th-century castle — people and objects dissolving as we go — as if we were in a scene from a movie orchestrated by a special-effects master. In fact, these images were taken with a simple cameraphone. The trick: they were assembled into a 3D environment by Photosynth 2.
Blaise Agüera y Arcas: How PhotoSynth can connect the world's images

Agüera y Arcas shared the original Photosynth at TED2007. At the time, the technology mapped images onto a spherical model. “When we started Photosynth, we wanted to think about how to reinvent the enterprise of photography for ordinary people using computer vision and augmented reality,” he says. “In the meantime we’ve been working on the next generations of that technology.”
Photosynth enabled computers to understand images not just as pixels, but as spatial captures of a particular moment and place. The new version simplifies the process of stitching images together into a “synth,” as they call the final product: a continuous mesh of photos seamlessly edited together with a 3D effect. He spins us through a dense forest, and shows pictures taken through a plane window. Things closer to the camera move at a different speed than things that are far away, creating a sensation of reality — only stylized.
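The depth cue described here is motion parallax: as the camera translates, nearby objects shift across the frame faster than distant ones. As a rough illustration (this is not Photosynth's code — the function name, focal length, and camera step are assumptions for the sketch), the image-plane shift of a point is approximately proportional to the camera's sideways movement and inversely proportional to the point's depth:

```python
# Illustrative sketch of motion parallax, the effect described above.
# For a camera translating sideways by `baseline_m`, a point at depth
# `depth_m` shifts on the image plane by roughly focal * baseline / depth,
# so nearer points appear to move faster than farther ones.

def apparent_shift_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Approximate horizontal image shift (in pixels) for a point at a given depth."""
    return focal_px * baseline_m / depth_m

focal = 1000.0   # assumed focal length in pixels (hypothetical camera)
step = 0.5       # assumed sideways camera movement between shots, in meters

for depth in (2.0, 10.0, 50.0):
    shift = apparent_shift_px(focal, step, depth)
    print(f"depth {depth:>5.1f} m -> shift {shift:6.1f} px")
```

A tree two meters away slides hundreds of pixels across the frame while a distant hillside barely moves, which is exactly the layered, stylized sense of depth the demo conveys.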
Before his talk, Agüera y Arcas gave the TED Blog a sneak peek of Photosynth 2, and revealed the question he gets asked most commonly about the technology: How is it different from video? Here’s how: While video is arranged temporally, synths are arranged spatially. The creativity is in the placement of the camera as shots are snapped, as much as it is in what’s being photographed. As he puts it, synths exist somewhere in the space between photo and video.