Pranav Mistry is the MIT grad student behind Sixth Sense, a tool that connects the physical world with the world of data. He and his advisor at the MIT Media Lab, Pattie Maes, unveiled Sixth Sense at TED2009, and the Sixth Sense demo premiered yesterday on TED.com; in both places, it has fired people’s imaginations. The TED Blog spoke with Pranav this morning to ask him some questions that have arisen on TED.com and at the TED office.
Read the full interview with Pranav Mistry, after the jump >>
Watch the Sixth Sense demo on TED.com >>
TED Blog’s interview with Pranav Mistry, 3/11/2009
What was your role on the project?
I took this from the idea, the concept, and developed the software and the hardware. Pattie is my advisor. She helps me brainstorm, “What should we do next?” From the beginning, I started working on the concept of merging the physical world and the digital world, like with my earlier project, Quickies, merging physical sticky notes with digital data.
Why choose a projector versus goggles?
We actually thought a lot about this. At MIT, lots of research has been done with glasses — there’s even research going on to put information in your contact lenses. But this particular project has an important aspect: We want this thing to merge with the physical world in a real physical sense. You are touching that object and projecting info onto that object. The information will look like it is part of the object.
Also, a projector opens up interaction and sharing. Say you and I are walking down the street in New York, talking, and suddenly I get mail we both want to see. I can show it on a wall, and we can make a decision together right there. It opens up information for sharing. At a coffee shop, we can use the whole table as an interactive surface.
Where’s the battery?
The projector itself contains a battery, with three hours of battery life. The other thing is, I’m making a small solar panel. I’m trying that out right now because I want to go with sustainable energy, so you don’t always need to be charging. Whenever you’re outside, you’ll be charging, and with the system, you can be outside more.
How does the software know what you want the system to do next?
The software works on the basis of computer vision. There’s a small camera acting as your eye, your third eye, your digital eye, connecting you to the world of digital information. Processing happens in your mobile phone, and the system basically works on computer vision algorithms that we developed ourselves, taking advantage of some open-source code but mainly writing our own here at the lab. We had to write a lot of algorithms from scratch because there was nothing that did what we wanted. We wrote 50,000 lines of code.
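[Ed. note: The interview doesn’t spell out the vision pipeline, but public descriptions of Sixth Sense mention colored marker caps on the fingertips, tracked by the pendant camera. As a hedged illustration only, here is a minimal Python sketch of that style of color-marker tracking. It assumes OpenCV 4 and a webcam; the HSV threshold values are hypothetical placeholders that would need calibration for a real camera and lighting.]

```python
# Minimal sketch of color-marker fingertip tracking (an assumption --
# the interview does not describe the actual pipeline).
import cv2
import numpy as np

# Hypothetical HSV range for a red fingertip marker; real values
# would need calibration for the camera and lighting conditions.
MARKER_LO = np.array([0, 120, 120])
MARKER_HI = np.array([10, 255, 255])

def find_marker(frame):
    """Return the (x, y) center of the largest marker-colored blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, MARKER_LO, MARKER_HI)
    # OpenCV 4 signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

cap = cv2.VideoCapture(0)  # stand-in for the pendant camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pos = find_marker(frame)
    if pos is not None:
        # A real system would feed this position into a gesture recognizer.
        cv2.circle(frame, pos, 8, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Tracking each fingertip over time yields the stroke trajectories that the gesture recognition described below would classify.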
The software recognizes 3 kinds of gestures:
+ multitouch gestures, like the ones you see on Microsoft Surface or the iPhone, where you touch the screen and make the map move by pinching and dragging.
+ what I call freehand gestures, like when you take a picture [as in the photo above]. Or, you might have noticed in the demo, because of my culture, I do a namaste gesture to start the projection on the wall.
+ iconic gestures, drawing an icon in the air. Like: whenever I draw a star, show me the weather. When I draw a magnifying glass, show me the map.
You might want to use other gestures that you use in everyday life. The system is very customizable. In the demo I do things my way, because it’s my choice, but I don’t want users to change their habits. I want Sixth Sense to change for them.
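[Ed. note: That customizability suggests a simple design: keep the recognizers separate from the actions and let each user bind their own mapping. Below is a minimal, hypothetical Python sketch of such a gesture-to-action table; the gesture names and actions are illustrative examples drawn from the interview, not Sixth Sense’s actual code.]

```python
# Minimal sketch of a user-customizable gesture-to-action mapping.
# Gesture names and actions are hypothetical examples from the interview.
from typing import Callable, Dict

def show_weather() -> None:
    print("projecting weather forecast...")

def show_map() -> None:
    print("projecting map...")

def take_photo() -> None:
    print("capturing the framed scene...")

# Each user binds their own gestures; nothing is hard-coded.
gesture_actions: Dict[str, Callable[[], None]] = {
    "star": show_weather,          # iconic: draw a star in the air
    "magnifying_glass": show_map,  # iconic: draw a magnifying glass
    "frame": take_photo,           # freehand: frame a scene with the fingers
}

def on_gesture(name: str) -> None:
    """Dispatch a recognized gesture to whatever action the user bound to it."""
    action = gesture_actions.get(name)
    if action is not None:
        action()

# Rebinding a gesture changes behavior without touching recognition code:
gesture_actions["namaste"] = lambda: print("starting wall projection...")
on_gesture("star")  # -> projecting weather forecast...
```

Decoupling recognition from actions this way is what lets the system change for the user rather than the other way around.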
Have you thought about using this device for gaming?
Definitely. We can do all the kinds of gaming that exist now, but not only that: we can use the physical world inside the game. You can play with physical stuff, invent some new games. Maybe you can hide something in the physical world, open a book and hide something in the pages.
We’ve been using “Minority Report” as shorthand to explain the device, or the heads-up screen in “RoboCop.” But was this device influenced by science fiction?
I’m not a very big fan of science fiction. I think that I’m a very big fan of living in the physical world. I’m good with digital technology, but I start to miss the physical world. I miss riding my bike, talking to friends. Technology now separates us from the physical world more and more. Even social networking sites are taking us away from the physical world.
At the lab, we like making things that we can touch, we can feel, we can take with us wherever we want to go, that we know how to interact with. The digital world has power because it has dynamic information, but it’s important that we stay human instead of being another machine sitting in front of a machine.
Whatever science fiction movies we watch now, we can make the technology real in two days. What we can do is not important. What we should do is more important.
What would you have said on the TED stage, if you and Pattie had had one more minute?
We have lots of applications. Any application now on your computer or mobile phone, you can use in Sixth Sense. In fact, I was going to come on the stage and do a live demo, but then we decided to use the demos I had filmed. We were worried about technical problems and the low light. But we have more to show.
The demo has gotten a lot of attention. What’s life been like since your demo at TED?
As soon as the talk finished, so many people rushed up to us. Before TED I was working, easy, three months straight in my lab. I was thinking after TED I would take a one-week break. But even today I haven’t taken that one-week break. We are getting 50 emails every day: “we should do this, we should do that.”
There are some really interesting comments, not only from people interested in making computer interfaces, but people asking, “Why can’t we use this system for people who have accessibility problems, blind people, deaf people?” The camera can act as a third eye for a blind person and tell them what it sees. It could be an ear for a deaf person. Ideas are also coming from developing countries, in part because of the low cost. It cost me $350 to build Sixth Sense in the lab, but the price will come down.