Jeff Han is a research scientist at New York University’s Courant Institute of Mathematical Sciences. Here, he demonstrates, for the first time publicly, his “interface-free,” touch-driven computer screen, which can be manipulated intuitively with the fingertips and responds to varying levels of pressure. (Recorded February 2006 in Monterey, CA. Duration: 09:32)
Jeff Han: Unveiling the genius of multi-touch interface design
I’m really excited to be here today, because I’m about to show you some stuff that’s just ready to come out of the lab, literally, and I’m really glad that you guys are going to be amongst the first to see it in person, because I really think this is going to change the way we interact with machines from this point on.
(standing in front of flat table level screen showing a grid)
Now this is a rear-projected drafting table. It’s about 36 inches wide, and it’s equipped with a multi-touch sensor. Normal touch sensors that you see, like on a kiosk or on interactive whiteboards, can only register one point of contact at a time.
(traces single wavy line on grid screen with finger)
This thing allows you to have multiple points at the same time.
(traces lines with multiple fingers, then traces lines with all of his fingers at once)
I can use both my hands, I can use chording actions, and I can just go right up and use all 10 fingers if I wanted to. There, like that.
Now, multi-touch sensing isn’t completely new. People like Bill Buxton have been playing around with it since the ’80s. However, the approach I developed here is actually high-resolution, low-cost, and, probably most importantly, very scalable. So the technology, you know, isn’t the most exciting thing here right now, other than probably its newfound accessibility. What’s really interesting here is what you can do with it, and the kinds of interfaces you can build on top of it. So let’s see…
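For context, the low-cost approach Han is referring to is frustrated total internal reflection (FTIR): infrared light is trapped inside an acrylic sheet, fingertips pressing the surface scatter it out, and an IR camera behind the screen sees each touch as a bright blob. A minimal, illustrative sketch of the blob-detection step (the function name and thresholds are hypothetical, and a real tracker would also match blobs across frames):

```python
import numpy as np

def detect_touches(frame, threshold=128, min_area=20):
    """Find bright blobs in a grayscale IR camera frame.

    In an FTIR-style setup, each fingertip pressed against the
    surface shows up as a bright spot; pressing harder flattens
    the fingertip and brightens/enlarges the blob, which is one
    way a pressure signal can be recovered.
    """
    mask = frame > threshold
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    touches = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                # flood-fill one connected blob of bright pixels
                stack, pixels = [(y, x)], []
                visited[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = zip(*pixels)
                    # centroid = touch position; area stands in for pressure
                    touches.append((sum(xs) / len(xs), sum(ys) / len(ys), len(pixels)))
    return touches
```

Because every blob is found independently, any number of simultaneous fingers comes for free, which is exactly the multi-touch property the talk is about.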
So, for instance we have a LavaLamp application here-
(overhead video of Han pushing and squeezing colored blobs on light table)
Now you can see, I can use both of my hands to kind of squeeze together, and put the blobs together.
(holds hands at bottom of screen, heat of fingers brightens colors, brighter colored blobs remain on screen, then he pushes two small blobs together to make one big one)
I can inject heat into the system here, or I can pull it apart with two of my fingers, it’s completely intuitive, there’s no instruction manual, the interface just kind of disappears. This started out as kind of a screensaver app that one of the PhD students in our lab, Ilya Rosenberg, made. But I think its true identity comes out here.
(fiddles around with blobs some more)
Now what’s great about a multi-touch sensor is that, you know, I could be doing this with as many fingers as I like here, but of course multi-touch also inherently means multi-user. So Chris could be out here interacting with another part of Lava while I play around with it here. You can imagine a new kind of sculpting tool, where I’m warming something up, making it malleable, and then letting it cool down and solidify in a certain state. (fiddles more) Google should have something like this in their lobby. (laughter)
(tabletop cuts to grid)
I’ll show you something a little more concrete here, as this thing loads-
(shows photo images scattered haphazardly around the screen, using both hands, Han pushes photos around on the screen)
Now this is a photographer’s light box application. Again, I can use both of my hands to kind of interact and move photos around. But, what’s even cooler-
(uses fingers to ‘grab’ two corners of one of the photos and ‘pulls’ it to full screen size)
is that, if I have two fingers, I can actually grab a photo and then stretch it out like that really easily. I can pan, zoom, and rotate it effortlessly.
(slides piles of photos around)
I can do that grossly with both of my hands,
(pulls photo out of stack & enlarges it)
or I can do it just with two fingers on each of my hands together.
(grabs empty space around photos & zooms in and out of canvas)
If I grab the canvas I can kind of do the same thing- stretch it out- I can do it simultaneously, where I’m holding this down-
(holds pile of photos down while pulling out another)
-and gripping on another one, stretching this out like this.
Again, the interface just disappears here. There’s no manual. This is exactly what you kind of expect, especially if you haven’t interacted with a computer before. Now, when you have initiatives like the hundred-dollar laptop, I kind of cringe at the idea that we’re going to introduce a whole new generation of people to computing with this standard mouse-and-windows-pointer interface. This is something that I think is really the way we should be interacting with machines from this point on. (applause)
(taps bottom of screen; a graphical keyboard appears on screen; he types the caption ‘my vacation’ onto one photo, which he then pulls off and puts at the top of the screen)
Now, of course, I can bring up a keyboard, and I can bring that around, put that up there. Now, obviously, this is kind of a standard keyboard, but of course I can rescale it to make it work well for my hands. And that’s really important, because there’s no reason in this day and age that we should be conforming to a physical device. That leads to bad things, like RSI. We have so much technology nowadays that these interfaces should start conforming to us. There’s so little applied now to actually improving the way we interact with interfaces from this point on.
This keyboard is probably actually the wrong way to go. You can imagine, in the future, as we develop this kind of technology, a keyboard that automatically drifts as your hand moves away, and really intelligently anticipates which key you’re trying to stroke. So, again, isn’t this great? (moves photos around more, pause)
(audience member: “Where’s your lab?”)
I’m a Research Scientist at NYU, it’s in New York.
(switches off photobox application)
(turns on new graphics app- he makes little balls of light by touching the screen, which are brighter the longer he holds his fingers down, and which move in paths according to his finger movements)
Here’s an example of another kind of app. I can make these little fuzz balls, and it’ll remember the strokes I’m making. Of course, I can do it with all my fingers. It’s pressure-sensitive, you can notice-
(makes a larger, brighter ball by holding his finger down longer, then zooms out by stretching the screen as he did with the photos, making the balls on screen smaller. Once he zooms out, he can make larger balls, then zoom back in, making the older ones their original size.)
But what’s neat about that is again, I showed you that two finger gesture that allows you to zoom in really quickly. Because you don’t have to switch to a hand tool, or the magnifying glass tool, you can just kind of continuously make things in real multiple scales, all at the same time. (zooms out) I can create big things out here, but I can go back (zooms in) and really quickly go back to where I started, (zooms in further) and make even smaller things here.
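One way to get this tool-free, multi-scale behavior is to keep every stroke in world coordinates and make zooming purely a change of view, anchored at the point between the fingers so that what is under them stays under them. A hypothetical sketch (class and method names are illustrative):

```python
class Canvas:
    """Zoomable canvas: objects live in world coordinates, and the
    view is a single scale + offset, so you can keep sketching at
    any zoom level without switching to a hand or magnifier tool."""

    def __init__(self):
        self.scale = 1.0            # screen pixels per world unit
        self.offset = (0.0, 0.0)    # world coords of the screen origin

    def screen_to_world(self, sx, sy):
        ox, oy = self.offset
        return ox + sx / self.scale, oy + sy / self.scale

    def zoom(self, factor, sx, sy):
        # Zoom about a fixed screen point: compute the world point
        # under the fingers, rescale, then re-solve the offset so
        # that point maps back to the same screen position.
        wx, wy = self.screen_to_world(sx, sy)
        self.scale *= factor
        self.offset = (wx - sx / self.scale, wy - sy / self.scale)
```

New strokes are simply created at whatever world scale the current view implies, which is what lets big things and tiny things coexist on the same canvas.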
Now this is going to be really important as we start getting to things like data visualization. For instance, I think we all really enjoyed Hans Rosling’s talk, and he really emphasized something I’ve been thinking about for a long time too: we have all this great data, but for some reason, it’s just sitting there. We’re not really accessing it. And one of the reasons why, I think, is that it takes things like graphics and visualization and inference tools to open it up. But I also think a big part of it is going to be having better interfaces, being able to drill down into this kind of data, while still thinking about the big picture here.
Let me show you another app here-
(screen showing Earth globe)
This is something called World Wind. It’s done by NASA. We’ve all seen Google Earth; this is kind of an open-source version of that.
(using previously demonstrated techniques to zoom in and out on various parts of the map, zooming all the way down to San Francisco peninsula, to a street level satellite view of SF)
There are plug-ins to load in different data sets that NASA’s collected over the years. But as you can see, I can use the same two-fingered gestures to go down and go in really seamlessly. There’s no interface, again. It really allows anybody to kind of go in, and it just does what you’d expect, you know? Again, there’s just no interface here. The interface just disappears.
(view switches from satellite photo to street contour map)
I can switch to different data views; that’s what’s neat about this app here. There you go.
(switches to another satellite data view, this time color enhanced)
NASA’s really cool. They have these hyper-spectral images that are false-colored so you can- it’s really good for determining vegetative use. Well, let’s go back to this:
(switches to another map, zooming away from SF)
Now, the great thing about mapping applications is that it’s not really just 2D, it’s kind of 3D. So, again, with a multi-point interface, you can do a gesture like this-
(moves hands, and map flips perspectives to a side on view of a coastal mountain range)
so you can tilt around like that, you know. It’s not just simply relegated to 2D panning and motion. Now, this gesture that we’ve developed is just putting two fingers down, defining an axis of tilt, and I can tilt up and down that way. That’s something we just came up with on the spot; it’s probably not the right thing to do, but there are such interesting things you can do with this kind of interface. It’s just so much fun to play around with, too.
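The tilt gesture Han describes amounts to rotating the 3-D scene about the line through the two touch points. A sketch using Rodrigues’ rotation formula (the function name is illustrative, and a real implementation would first unproject the fingers onto the map plane):

```python
import math

def tilt_about_axis(point, a, b, angle):
    """Rotate a 3-D point about the axis through points a and b
    (e.g. the two fingers resting on the map plane) by `angle`
    radians, via Rodrigues' rotation formula."""
    ax = [b[i] - a[i] for i in range(3)]
    n = math.sqrt(sum(c * c for c in ax))
    k = [c / n for c in ax]                      # unit axis direction
    v = [point[i] - a[i] for i in range(3)]      # shift axis to origin
    kxv = [k[1]*v[2] - k[2]*v[1],                # cross product k x v
           k[2]*v[0] - k[0]*v[2],
           k[0]*v[1] - k[1]*v[0]]
    kdv = sum(k[i] * v[i] for i in range(3))     # dot product k . v
    c, s = math.cos(angle), math.sin(angle)
    r = [v[i]*c + kxv[i]*s + k[i]*kdv*(1 - c) for i in range(3)]
    return tuple(r[i] + a[i] for i in range(3))
```

Dragging a third finger (or sliding both) then maps naturally onto the rotation angle, which is why the gesture feels like physically tipping the map.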
And, OK, so the last thing I want to show you is- you know, I’m sure we can all think of a lot of entertainment apps you can do with this thing. I’m a little more interested in the kind of creative applications we can do with this. Now, here’s a simple application here-
(draws a figure on the desk; once he connects the last lines, it fills in with a light blue color)
I can kind of draw out a curve. And when I close it, it becomes a character. But the neat thing about it is, I can add control points-
(touches several points of figure, leaving small dots which he is then able to flex and move separately)
And then what I can do is kind of manipulate them with both of my fingers at the same time. (figure moves as if animated) And you notice what it does. It’s kind of a puppeteering thing, where I can use as many fingers as I have to kind of draw-
(draws second figure above first, rotates and bends it)
Now, there’s a lot of actual math going on under here to control this mesh and do the right thing. This technique of manipulating a mesh with multiple control points is actually state of the art; it was just released at SIGGRAPH last year. But it’s a great example of the kind of research I really love: applying all this compute power to making things do the right thing. The intuitive thing. Exactly what you expect.
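The SIGGRAPH work Han alludes to appears to be as-rigid-as-possible shape manipulation (Igarashi et al., 2005), which solves a least-squares problem so the mesh bends without stretching. As a much simpler stand-in that conveys the idea of dragging a mesh by several control points at once (this is not Han’s actual method), here is an inverse-distance-weighted deformation sketch:

```python
def deform(vertices, handles, targets, power=2.0, eps=1e-9):
    """Move mesh vertices using control-point handles.

    vertices: list of (x, y) mesh positions; handles: rest positions
    of the control points; targets: where the fingers have dragged
    them. Each vertex takes a weighted blend of the handle
    displacements, weighted by inverse distance, so a dragged handle
    pulls nearby vertices much more strongly than distant ones.
    """
    out = []
    for vx, vy in vertices:
        wsum, dx, dy = 0.0, 0.0, 0.0
        for (hx, hy), (tx, ty) in zip(handles, targets):
            d2 = (vx - hx) ** 2 + (vy - hy) ** 2
            w = 1.0 / (d2 ** (power / 2) + eps)
            wsum += w
            dx += w * (tx - hx)
            dy += w * (ty - hy)
        out.append((vx + dx / wsum, vy + dy / wsum))
    return out
```

Because every handle contributes simultaneously, moving several fingers at once gives the puppeteering effect from the demo; the as-rigid-as-possible solver replaces this crude blend with one that preserves local shape.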
(wiggles figures around)
So, multi-touch interaction research is a very active field right now in HCI. I’m not the only one doing it; a lot of other people are getting into it. This kind of technology is going to let even more people get into it, and I’m really looking forward to interacting with all of you over the next few days and seeing how it can apply to your respective fields. Thank you.