In this session of TED2015, five speakers explored the bounds of perception, from how babies form expectations to how eyes are hardly needed to see. Short recaps of these bold talks below …
The logic of the young mind. Scientists have to draw generalizations from tiny amounts of data — and so do babies. Laura Schulz investigates the young mind and how it learns, and in the opening talk of Session 2, she explores how babies draw conclusions. She shows a video in which a fifteen-month-old sees a box of mostly blue balls and a few yellow ones. The experimenter pulls three blue balls from the box and shows the baby that each one squeaks. When shown a yellow ball, what does the baby expect will happen? That the ball will squeak. Because the blue balls were plausibly drawn at random from the box, the baby infers that any ball from the box will squeak — so it expects the yellow ball to squeak too. But what happens when the sample is changed? Adorable video shows that the baby brain is good at making broad guesses from limited information. And that, Schulz says, is the fundamental skill of human cognition. It’s why we sometimes get the world wrong … but it’s also why we so often get it right.
Life in rectangles. Aside from lunch and recess, Jason Padgett wasn’t interested in school, and he was deeply bored by math. But in 2002, he was beaten up outside a karaoke bar. On the way home from the hospital, he noticed that instead of seeing life in fluid motion and subtle curves, things looked very strange. Every object around him was composed of a series of rectangles, each tilting slightly to form an outline of the shape. Everything moved in stop-motion. At first, he thought it was his pain medication, but it wasn’t: It was a new way of seeing. And he has experienced life in this way ever since. He began drawing images of the complex geometric shapes that his vision had become. He enrolled in math classes at community college, began spending hours in the math lab, and began seeking out conversations with mathematicians and physicists. “It was like a door in my mind that I had never opened suddenly opened and I walked through it,” he says. “I am trying to absorb all this incredible and beautiful knowledge that I had completely ignored previously in my life … This is my mission: to try to inspire students and create scientists.”
Beyond what Mother Nature has given us. As humans, we perceive less than a ten-trillionth of all light waves. “Our experience of reality,” says neuroscientist David Eagleman, “is constrained by our biology.” The slice of the world we normally perceive is called our umwelt, German for “the surrounding world,” and Eagleman believes that we can use technology to expand ours. Cochlear and retinal implants have already shown that we can marry technology to biology and train the brain to understand data regardless of how we input it. After all, to your brain, information is just electrochemical signals: It doesn’t care whether you saw it, tasted it, or felt it. On stage at TED today, Eagleman rips off his shirt to unveil his contribution to this idea: a beeping, buzzing vest that converts information into vibrations. His lab has developed technology that takes in sound through a tablet and converts that sound into vibrations in the vest. The result: Deaf people who are trained on the system can understand spoken words. But Eagleman is setting his sights even higher. He shows experiments in which someone wearing the vest receives real-time information about the stock market and attempts to learn, through the vibrations alone, to make good buy and sell decisions. Beyond that? Eagleman has a lot of ideas. The applications of an expanded umwelt, he believes, are limitless. Read our interview with David Eagleman.
Do we see reality as it is? When you open your eyes and experience seeing a tomato, you believe there’s a round red thing that exists in time and space a meter away. And when you close your eyes, what happens to that tomato? Is it still there? Well, says cognitive scientist Donald Hoffman, maybe not. We assume that what we see with our eyes is objective reality, but Hoffman dares us to consider that it isn’t. We’re inclined to believe that seeing reality accurately would be an evolutionary advantage, but using computer simulations of evolution, Hoffman and his team have shown that accurate perceptions aren’t necessarily fitter ones. Instead, he says, evolution has given us simple tricks and hacks to survive. Consider a file on your computer: You experience it as something kept in a small blue folder in the lower right-hand corner of your desktop, but of course that’s a simplified interface to help you navigate information, not the reality of the file storage system, made up of complex diodes and resistors. This, says Hoffman, is a metaphor for what evolution has done for us. Indeed, he says, when we perceive what we would describe as a tomato, it’s in fact nothing like the reality of a tomato. That’s just a hack.
An introduction to human echolocation. Daniel Kish has been blind since he was 13 months old — one by one, he lost his eyes to a rare form of cancer called retinoblastoma. The first thing he did after his last surgery? He climbed out of his crib and began exploring the intensive care nursery. Rather than fearing for their curious, endlessly exploring son, his parents did something different: “They put my freedom first. They understood that ignorance and fear were but matters of the mind — and that the mind is adaptable,” he says. Kish learned to use a form of echolocation, which he calls FlashSonar, to help him navigate the world. On the TED stage, he demonstrates how it works, asking the audience to close their eyes as he whistles and moves a piece of plastic. Even with our eyes tightly shut, it’s easy to hear when the plastic moves, as the sound of Kish’s whistle changes. “This is how I have learned to see through my blindness,” says Kish. Read our profile of Kish — and hear what he’d like sighted people to take from his story »
You can watch this session, uncut and as it happened, via TED Live’s on-demand conference archive. A fee is charged to help defray our storage and streaming costs. Sessions start at $25. Learn more.