
Our robotic overlords: The talks of Session 2 of TED2017


Are machines going to save or destroy us? It’s a question we’ve been grappling with since we first put 1s and 0s together to create computer code. And it feels like we’re no closer to answering it; machines are growing smarter by the day, and while they’re taking us to places we’ve never imagined, we seem to be losing turf as the supreme thinking and feeling beings.

In the second session of TED2017, hosted by TED curator Chris Anderson and editorial director Helen Walters, seven speakers (and one robot) showed us visions of the future — from robots that take on college entrance exams and learn human values to the future of personal mobility (hint: we’re going to fly).

Below, recaps of the talks from Session 2, in chronological order.

Boston Dynamics founder Marc Raibert, foreground, and Seth Davis put the oddly adorable SpotMini through its paces at TED2017, April 25, 2017, Vancouver, BC, Canada. Photo: Bret Hartman / TED

A vision of the robots that may serve and assist you. SpotMini, an electric quadruped robot that looks like a cross between a large dog and a small giraffe, trots onstage, circles the red carpet and acknowledges the audience before taking its place alongside Marc Raibert, founder of Boston Dynamics — the company responsible for some of the coolest (and, perhaps, most terrifying) robots on the planet. Boston Dynamics’s basic design principles, Raibert says, are aimed at achieving three things: balance, dexterity and perception. He takes us through a status report of the robots he’s developing toward these ends, showing video of BigDog, a cheetah-like robot that runs with a galloping gait; AlphaDog, a massive robot that can negotiate 10 inches of snow; Spot, a bigger version of SpotMini that can open complex doors; Atlas, a humanoid robot that walks upright on two legs and uses its hands to handle packages; and Handle, which has wheels for feet and can lift 100-pound packages and jump on top of tables with ease. With that, SpotMini wakes up, under the direction of Boston Dynamics’s Seth Davis, and to the delight of the TED crowd shows off its omnidirectional gait, moving sideways, running in place and hopping back and forth from side to side. Raibert shows onscreen how SpotMini creates a dynamic map of the world around it, allowing it to navigate an obstacle course set up on stage with ease — and even deliver a soda to Raibert on command.
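
For a rough sense of what “creating a dynamic map and navigating it” involves, here is a minimal sketch, entirely my own illustration and not Boston Dynamics code: obstacles are marked in a tiny occupancy grid and a breadth-first search plans a route around them. The grid, start and goal are invented for the example; a real legged robot does this continuously, in three dimensions, from live sensor data.

```python
# Toy occupancy-grid navigation sketch (illustrative only, not Boston Dynamics code).
from collections import deque

grid = [          # "." = free space, "X" = obstacle detected by the robot's sensors
    "......",
    ".XX...",
    ".XX.X.",
    "....X.",
    "......",
]

def shortest_path(start, goal):
    """Breadth-first search over the free cells of the occupancy grid."""
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

print(shortest_path((0, 0), (4, 5)))   # a route that skirts the X obstacles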

Noriko Arai wondered if an AI could pass the entrance exam at a top university. She shares her prediction at TED2017, April 25, 2017, Vancouver, BC, Canada. Photo: Ryan Lash / TED

A robot that can pass a college entrance exam — and what that means. Noriko Arai wondered: could an AI pass the entrance exam for the University of Tokyo? The university, known as Todai, is Japan’s Harvard — and the Todai Robot Project aimed to get an AI in by 2020. Why? “To study the performance of AI in comparison to humans,” says Arai, “on skills believed to be done only by humans with education.” Last year, the Todai Robot placed in the top 1 percent of students on math, and we watch as it starts composing a 600-character essay on maritime trade in the 17th century. Arai turns her attention to how the robot did this: it broke down math problems into machine-readable formulas, multiple-choice questions into Googlable factoids and essay writing into a task of copying and combining. “None of the AIs today, including Watson, Siri and Todai Robot, [are] able to read — but they are good at searching and optimizing,” she says. They don’t understand; they only appear to. Yet while this AI fell short of Todai last year, it scored among the top 20 percent of all the students who took the first-stage national standardized test, and it qualified for 60 percent of Japanese universities. “How on earth could this unintelligent machine outperform students, our children?” asks Arai. After giving similar tests to thousands of high school students, she found an answer: students aren’t that good at reading either. One-third missed simple questions. “We believe anyone can learn and learn well,” says Arai. But the best educational materials only benefit those who read well — and many students aren’t there yet.
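
To make “searching and optimizing, not reading” concrete, here is a toy sketch of my own, not the Todai Robot’s actual pipeline: a multiple-choice question is answered by counting word overlap between each option and a stand-in retrieved text, so the highest-scoring option wins without any comprehension at all. The corpus, question and options are invented.

```python
# Toy "search, don't read" sketch (illustrative only, not the Todai Robot).
from collections import Counter

# A stand-in for text retrieved by a search engine.
corpus = (
    "In the 17th century the Dutch East India Company dominated maritime trade "
    "between Europe and Asia, shipping spices silver and silk."
).lower().split()
counts = Counter(corpus)

question = "Which company dominated 17th-century maritime trade with Asia?"
options = {
    "A": "the Dutch East India Company",
    "B": "the Hudson's Bay Company",
    "C": "the Roman Senate",
}

def overlap_score(text):
    """Count how often the option's words appear in the retrieved text."""
    return sum(counts[w] for w in text.lower().split())

best = max(options, key=lambda k: overlap_score(options[k]))
print(best, options[best])   # picks A by sheer word overlap, with no understanding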

Teaching robots human values. In an age of working toward all-knowing robots, Stuart Russell is working toward the opposite — robots with uncertainty. He says that this is the key to harnessing the full power of AI while also preventing the Armageddon of a robotic takeover. When we worry about robots becoming too intelligent or deviating from their programmed purpose, we’re worrying about what’s called “the value alignment problem,” Russell explains. So how do we program robots to do exactly what we want, without letting them follow their objectives too literally? After all, as Russell cautions, we don’t want to end up like King Midas, whose wish turned everything he touched, even his food and family, to gold. The solution involves Human-Compatible AI, which focuses on creating uncertainty in an altruistic robot’s objective and teaching it to fill that gap with knowledge of human values learned through observing human behavior. Creating this human common sense in robots will “change the definition of AI so that we have provably beneficial machines … and, hopefully, in the process we will learn to be better people.”
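
A minimal sketch of the underlying idea, my own illustration rather than Russell’s formulation: the robot keeps a probability distribution over several candidate objectives and does a Bayesian update each time it watches the human choose an action, so observed behavior gradually pins down the values it should serve. The candidate objectives, the Boltzmann-rationality assumption and all numbers are invented for the example.

```python
# Toy "uncertain objective" sketch (illustrative only, not Russell's code).
import numpy as np

# Hypothetical candidate objectives: the reward the human might assign to 3 outcomes.
candidate_rewards = {
    "fetch coffee fast":   np.array([1.0, 0.2, 0.0]),
    "fetch coffee safely": np.array([0.6, 1.0, 0.0]),
    "maximize gold (Midas)": np.array([0.0, 0.0, 1.0]),
}
belief = {name: 1 / len(candidate_rewards) for name in candidate_rewards}

def update_belief(belief, observed_action, beta=5.0):
    """Bayesian update: assume a (noisily) rational human picks high-reward actions."""
    posterior = {}
    for name, rewards in candidate_rewards.items():
        likelihood = np.exp(beta * rewards)        # Boltzmann-rational choice model
        likelihood /= likelihood.sum()
        posterior[name] = belief[name] * likelihood[observed_action]
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Watching the human repeatedly choose the "safe" action shifts belief away
# from objectives the human evidently does not hold.
for _ in range(3):
    belief = update_belief(belief, observed_action=1)
print({name: round(p, 3) for name, p in belief.items()})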

Joseph Redmon demonstrates the YOLO object-detection algorithm at TED2017, April 25, 2017, Vancouver, BC, Canada. Photo: Bret Hartman / TED

YOLO (You Only Look Once). How do computers tell cats apart from dogs? In 2007, the best algorithms could only tell a cat from a dog with 60 percent accuracy. Today, computers can do it with more than 99 percent accuracy. Computer scientist Joseph Redmon works on the YOLO algorithm, which delivers the recognition power of heavyweight, cloud-based AI at the speed of the simple face detector in your phone camera — in real time. The YOLO object detection system is a single neural network that predicts all of the bounding boxes — the rectangles that locate each object in the image — as well as their class probabilities simultaneously, and it’s extremely fast. In a live demo with the TED audience, we see how seamlessly the algorithm detects a person, a stuffed-animal cat or dog, a backpack or a tie. More importantly, the object detection system can be trained for any image domain: “It is fully trainable so our method can be used to detect animals in natural images, cancer cells in medical biopsies or anything else you can imagine,” Redmon says.
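
For flavor, here is a toy decoder in the spirit of that single-pass design, my own sketch rather than Redmon’s code: it takes a hypothetical (S, S, B*5 + C) output tensor, the grid-shaped output described in the original YOLO paper, and turns it into a list of scored, labeled boxes in one sweep. A random tensor stands in for a real network’s forward pass.

```python
# Toy YOLO-style output decoding (illustrative sketch, not Redmon's implementation).
import numpy as np

S, B, C = 7, 2, 20            # grid size, boxes per cell, number of classes
CLASSES = [f"class_{i}" for i in range(C)]

def decode(output, threshold=0.3):
    """Turn one forward pass into a list of (class, score, box) detections."""
    detections = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            class_probs = cell[B * 5:]
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                scores = conf * class_probs          # class-specific confidence
                best = int(np.argmax(scores))
                if scores[best] >= threshold:
                    # x, y are offsets within the cell; w, h are relative to the image
                    cx, cy = (col + x) / S, (row + y) / S
                    detections.append((CLASSES[best], float(scores[best]),
                                       (cx, cy, float(w), float(h))))
    return detections

# One simulated forward pass: in the real system this tensor comes from a single
# convolutional network, which is why the whole image is "only looked at once."
fake_output = np.random.rand(S, S, B * 5 + C)
print(decode(fake_output)[:3])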

Tom Gruber helped create Siri and imagines where the next step in AI will take humans. He speaks at TED2017, April 25, 2017, Vancouver, BC, Canada. Photo: Bret Hartman / TED

How smart can our machines make us? “What’s the purpose of artificial intelligence?” asks Tom Gruber, AI developer and co-creator of Siri. Is it to make machines intelligent, so they can do automated tasks we don’t want to do, beat us at complex games like chess and Go and, perhaps, develop superintelligence and become our overlords? No, says Gruber — instead of competing with us, AI should augment and collaborate with us. “Superintelligence should give us superhuman abilities,” Gruber says. Taking us from the first intelligent assistant he created 30 years ago, which helped a person with cerebral palsy communicate, to Siri, which helps us do everything from navigating cities to answering complex questions, Gruber explains his vision for “humanistic AI” — machines designed to meet human needs by collaborating with and augmenting us. Gruber invites us into a future where superintelligent AI can augment our memories and help us remember the name of everyone we’ve ever met, every song we’ve ever heard and everything we’ve ever read. “We have a choice in how we use this powerful tech. We can use it to compete with us or to collaborate with us — to overcome our limitations and help us do what we want to do, only better,” Gruber says. “Every time a machine gets smarter, we get smarter.”

How you’ll be able to fly … by the end of the year. In 2015, engineer Todd Reichert set a record for human-powered speed in an ultra-lightweight bike that traveled 89.6 miles per hour — with no engine. But he’s not on the TED stage to talk about that. Instead, to gasps, he introduces us to an all-electric, ultra-light aircraft called the Kitty Hawk Flyer, which his company plans to make available by the end of 2017. The pilot sits aboard it as if it were a motorcycle — only instead of wheels below, there’s a mesh platform that suspends eight rotors. In a video we watch together, the craft flies about 15 feet over the water, the rider out in the open air. Reichert explains the two technologies that make this possible: simple electronics that control and stabilize the rotors, making the craft as intuitive to fly as a video game, and rapidly advancing batteries. “Your flying dreams are potentially a lot closer than you think,” Reichert says to anyone who’s ever wondered when their jetpack or flying car is coming. His team is working with regulatory bodies to figure out the steps for making this craft available (first step: because it weighs less than 254 pounds, you don’t need a pilot’s license). As Reichert says: “I realize we aren’t The Jetsons yet. But this is the first step in a totally new type of freedom.”
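
To give a feel for what those “simple electronics” do, here is a minimal, invented sketch, not Kitty Hawk firmware: a PID feedback loop nudges a rotor command so that a measured tilt angle returns to level, the same basic pattern hobby multirotor flight controllers run hundreds of times per second. The gains, the one-axis physics and the numbers are all made up for the example.

```python
# Minimal PID stabilization sketch (illustrative only, not Kitty Hawk's firmware).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured, dt):
        """Return a corrective command from the current tracking error."""
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Crude stand-in physics: the command directly sets the tilt rate of one axis.
pid = PID(kp=2.0, ki=0.5, kd=0.1)
angle, dt = 10.0, 0.01            # start 10 degrees off level
for _ in range(1000):             # simulate 10 seconds at 100 Hz
    command = pid.step(target=0.0, measured=angle, dt=dt)
    angle += command * dt
print(round(angle, 2))            # the tilt has decayed back toward level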

The power of the collective. In sci-fi visions of the future, we generally see AI modeled on human-like intelligence, only amplified — but there are many kinds of intelligence in nature that are very different from our own, like the collective intelligence of insect colonies and schools of fish. Computer scientist Radhika Nagpal has spent her career studying such systems of collective intelligence, seeking to understand the rules that govern them so we can create our own using, say, robots. “Once you understand the rules, many different kinds of robot visions become possible,” she says.
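
For a taste of what such rules look like in code, here is a toy flocking simulation of my own, not Nagpal’s Kilobot software: every agent follows only three local rules (move toward nearby neighbors, avoid crowding them, match their heading), and coordinated group motion emerges with no leader. Agent counts, gains and the neighborhood radius are arbitrary.

```python
# Toy collective-behavior sketch (illustrative only, not Nagpal's code).
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(30, 2))   # 30 agents scattered on a 10x10 plane
vel = rng.uniform(-1, 1, size=(30, 2))   # random initial headings

def step(pos, vel, dt=0.1, radius=3.0):
    """One update: each agent reacts only to neighbors within `radius`."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist < radius) & (dist > 0)
        if near.any():
            cohesion = pos[near].mean(axis=0) - pos[i]      # drift toward the group
            alignment = vel[near].mean(axis=0) - vel[i]     # match neighbors' heading
            separation = (pos[i] - pos[near]).mean(axis=0)  # avoid crowding them
            new_vel[i] += 0.05 * cohesion + 0.10 * alignment + 0.10 * separation
    return pos + new_vel * dt, new_vel

for _ in range(300):
    pos, vel = step(pos, vel)

# The spread of velocities is one rough measure of how coordinated the group has become.
print(np.round(vel.std(axis=0), 2))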