At TEDGlobal 2012, Ramesh Raskar demonstrated his remarkable femtosecond camera, which can image a pulse of light as it travels through an object, or potentially see around corners. But Raskar’s projects go far beyond even that. After the talk, in a brief Q&A with host Bruno Giussani, he introduced NETRA, a simple attachment for a smartphone that quickly determines the user’s eyeglass prescription.
In fact, Raskar has developed a number of fantastic camera applications, including:
- BOKODE: An invisible barcode that is a tenth the size of standard barcodes
- NETRA: A simple, cheap smartphone attachment for determining eyeglass prescription
- NPR: A “non-photorealistic camera” that renders what it sees as cartoons
- HR3D: A glasses-free 3D display
- And many, many more
Intrigued by all of this, TED’s Ben Lillie called Raskar in his office at the MIT Media Lab to ask him about his research, as well as the future of camera technologies.
In the Q&A with Bruno after your talk, you showed a device that allows a person to determine their eyeglass prescription with an iPhone. It’s fascinating that we’re starting to see mass-market consumer applications of what used to be highly specialized technology.
Exactly. If you go back to what I was showing Bruno, which is a device called NETRA, that was something I was developing for a completely different application — a new type of barcode. At some point I realized, “Hey, I can use the same barcode effectively to diagnose people’s eyes.” And the reason I could do that is because today’s cellphones have displays with extremely, extremely high resolution. A Retina display, for example, is 326 dots per inch, which means the pixels themselves are about 78 micrometers across, roughly the width of a human hair, and at that resolution you can start treating the cellphone as a serious scientific instrument.
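As a quick sanity check on that figure, the pixel pitch follows directly from the dots-per-inch number. Here is a short back-of-the-envelope calculation in Python; only the 326 dpi value comes from the interview, the rest is plain unit conversion:

```python
# Back-of-the-envelope: pixel pitch of a 326-dpi display.
MICROMETERS_PER_INCH = 25_400  # 1 inch = 25.4 mm = 25,400 micrometers

dpi = 326
pixel_pitch_um = MICROMETERS_PER_INCH / dpi
print(f"Pixel pitch: {pixel_pitch_um:.0f} micrometers")
# ~78 micrometers per pixel, on the order of the width of a human hair --
# fine enough to treat the display itself as a measurement instrument.
```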
What does the attachment actually do? Is it distorting an image in a particular way?
Yeah, what we’re doing is combining the software and the snap-on eyepiece. It’s basically a lens and a few masks. Think of it like a microscope. If you have a normal eye, you will see a normal image on the screen. But if you have near-sightedness or far-sightedness or astigmatism, then the image that you see through this lens is distorted. And what we do is an inverse process — we ask the user to pre-distort the image on the screen so that they see it as sharp. We are applying the inverse operation on the screen. It’s purely software, and so within just about 30 seconds — with a few clicks — the user can pre-distort the screen. And when everything looks right, you stop, and the number of clicks it took to pre-distort the screen gives us the power for your eyeglasses.
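The interview doesn’t spell out NETRA’s exact optics, but the basic interaction can be sketched: the user nudges the on-screen pattern in discrete clicks until it looks sharp, and a per-device calibration maps the accumulated clicks to refractive power. The function name prescription_from_clicks and the DIOPTERS_PER_CLICK constant below are purely hypothetical placeholders for that calibration:

```python
# Hypothetical sketch of the click-to-prescription idea described above.
# The calibration constant and click step are made up for illustration;
# a real device would derive them from its lens and mask geometry.

DIOPTERS_PER_CLICK = 0.25  # assumed refractive-power step per user click

def prescription_from_clicks(clicks_to_sharp: int) -> float:
    """Map the signed number of pre-distortion clicks the user needed
    (positive in one direction, negative in the other) to an estimated
    spherical power in diopters."""
    return clicks_to_sharp * DIOPTERS_PER_CLICK

# Example: a user who needed 10 clicks in the "myopic" direction.
print(prescription_from_clicks(-10))  # -2.5 D (illustrative only)
```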
You’re saying you can do this in 20 seconds? I’ve been to the eye doctor many times and it takes a good ten minutes or so to go through all the little lenses.
Yeah, it’s like driving a car. If you want to go in a particular direction you just change the direction of your steering wheel. On the other hand, if you’re sitting in the passenger seat and telling the driver what to do, it’s going to take a long, long time. So when you’re driving it yourself, it’s so much smoother.
Oh, so it’s similar technology, but just the fact that I’m doing it is enough to make it much quicker.
Exactly. The microscope-like lens attachment we have is extremely low cost, and all the intelligence is in the software. When people typically build devices for checking eyes, a lot of the intelligence is in the mechanical hardware. But here the intelligence is in the software.
Interesting. So what’s driving the ability to make very cheap devices is all the advances in software, because you don’t need highly specialized, highly machined hardware to do a lot of tasks?
Right. But again, it’s more of a co-design of software and hardware. And when we think about the field, whether it’s the iPhone or the Kinect, a combination of hardware and software makes it much more interesting than pure software.
What do you see coming up in camera development that you’re most excited about?
In terms of cameras, we think the world is extremely high-dimensional. There’s geometry, there’s motion, there’s color and so on. The appearance of the world is very high-dimensional, but we are stuck with these two-dimensional cameras — cameras that create flat images. Slowly we are moving towards stereoscopic or 3-D, but that’s still very limiting. A true camera would capture all of those dimensions of the world.
My group is called “Camera Culture,” and what we are doing is creating a new class of imaging platforms that have an understanding of the world that far exceeds human ability. We can produce meaningful abstractions and synthesize a view of the world that’s well within human comprehension. That’s really our goal in creating new imaging platforms. So we are creating cameras that can of course capture the world in 3-D, or look at the world at extremely high speed. But the goal is not just to create better images; it is to create what I call “feature-revealing computational cameras.” They are not just capturing pixels, they’re capturing features. They’re capturing some meta-information about the world. These cameras can enable lightweight medical imaging in the future, or spawn new visual art forms, or even facilitate positive social impact as they become highly personalized.
Have you had projects that have been used by artists or other people yet?
Yeah, absolutely. I myself have done lots of art installations, and in my installations I kind of cheat and use my own cameras.
I don’t think that’s cheating.
You know, we often desire technologies that are difficult to conceive but very easy to describe. That’s what I like about this work, including the high-speed camera: it’s very easy to describe what it does, but of course it’s very difficult to conceive and also difficult to implement. And so in my art installations I have created these cameras I call the “camera non photo.” It’s a camera that doesn’t take a photo, but actually enhances your senses.
You know how you open an instruction manual, like a car manual, or the emergency-exit instructions on the safety card in an airplane? You have these beautifully drawn sketches. You don’t have photos. And the question is, why do people hire professional artists to draw sketches of something that could easily be photographed? The reason, of course, is that professional artists are very good at emphasizing and de-emphasizing different aspects so that the images are very comprehensible. So imagine if I can create a camera that gives me those sketches as opposed to a photo-realistic picture. We’re not going to replace professional artists, because they will do a better job, but it’s still a tool that a lot of people can use.
So what I created was this camera called a non-photorealistic camera, where you can stand in front of it, create your cartoon, and do video conferencing. A lot of people are reluctant to do video conferencing because, you know, “I haven’t shaved today, and if I don’t look good I don’t want to be on camera.” But it’s perfectly fine to transmit my shape and emotions and expressions. So imagine a camera that conveys your emotions and expressions, but doesn’t really send the color of your skin or how much moisturizer you have used, for example.
Do you remember this music video, “Take on Me” by A-ha? From the 1980s? A large part of the video is actually rotoscoped cartoons. I was really inspired by that video, actually, and so I created this camera. The way the camera works is it has four flashes and every time a flash goes off you see a sliver of shadow next to your silhouette. And by exploiting the shadows it turns out you can create these very high-quality cartoons.
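Raskar’s description of the four-flash cartoon camera can be sketched roughly in code. Assuming (beyond what the interview states) that you have one image per flash, and that pixels shadowed by a given flash appear much darker there than in the per-pixel maximum across all flashes, the shadow slivers next to the silhouette can be picked out as below. The array shapes, threshold, and function name are illustrative, not the actual implementation:

```python
import numpy as np

def shadow_sliver_map(flash_images, shadow_threshold=0.5):
    """Rough sketch of the multi-flash idea described above.

    flash_images: list of 2-D grayscale arrays, one per flash position
    (e.g. the four flashes mentioned in the interview). A pixel that is
    dark in one flash image but bright in some other image is likely a
    cast-shadow sliver hugging a silhouette edge.
    """
    stack = np.stack(flash_images, axis=0).astype(float)
    max_image = stack.max(axis=0) + 1e-6      # brightest value per pixel

    slivers = np.zeros(max_image.shape, dtype=bool)
    for img in stack:
        ratio = img / max_image               # ~1 where lit, low where shadowed
        slivers |= ratio < shadow_threshold   # mark shadowed pixels
    return slivers                            # outline hints for the cartoon
```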
Back to your TED Talk: how did you get the idea to build a femtosecond camera?
I do a lot of work on all kinds of imaging, and at some point I realized that the time dimension hasn’t been explored. We have explored everything else. When you buy a camera, we worry a lot about the resolution — how many megapixels it has — but we rarely think about time. Even with video, you think about it only as frames per second. But the time of light — how long it takes for light to travel — is rarely considered. So I was very fascinated with that, and that’s why I went in this direction.
And presumably it hasn’t been explored because technology to resolve things on that time scale hasn’t been around until recently.
I mean, of course there has been very special-purpose hardware that scientists used in very special scenarios to make use of the time of light. I really like to think about two different communities. There’s the community of computer scientists who think about machine learning and signal processing, and I call them “computation chauvinists” because they think everything can be done with computation. And there are the people on the other side who believe that if they just build the most amazing camera and the most amazing optics, they can solve any problem. I call them “optical chauvinists.” And I think the real excitement is actually at the intersection of the two. When you do a co-design of optical and digital processing, it opens up a whole new world. Going back to femto-photography, people on one side have been thinking about exploiting images, and people on the other have been thinking about how to use the speed of light, but only for very specific things. Now what I’m able to do is take ideas from both communities and go in new directions.