
Sunday, 24 December 2017

Our robotic overlords

Are machines going to save or destroy us? It’s a question we’ve been grappling with since we first put 1s and 0s together to create computer code. And it feels like we’re no closer to answering it; machines are growing smarter by the day, and while they’re taking us to places we’ve never imagined, we seem to be losing ground as the supreme thinking and feeling beings.

In the second session of TED2017, hosted by TED’s curator, Chris Anderson, and editorial director Helen Walters, seven speakers (and one robot) showed us visions of the future — from robots that can pass college entrance exams and learn human values to the future of personal mobility (hint: we’re going to fly).
Below, recaps of the talks from Session 2, in chronological order.






A vision of the robots that may serve and assist you. SpotMini, an electric quadruped robot that looks like a cross between a large dog and a small giraffe, trots onstage, circles the red carpet and acknowledges the audience before taking its place alongside Marc Raibert, founder of Boston Dynamics, the company responsible for some of the coolest (and, perhaps, most terrifying) robots on the planet. Boston Dynamics’s basic design principles, Raibert says, are aimed at achieving three things: balance, dexterity and perception. He takes us through a status report of the robots he’s developing toward these ends, showing video featuring BigDog, a cheetah-like robot that runs with a galloping gait; AlphaDog, a massive robot that can negotiate 10 inches of snow; Spot, a bigger version of SpotMini that can open complex doors; Atlas, a humanoid robot that walks upright on two legs and uses its hands to handle packages; and Handle, which has wheels for feet and can lift 100-pound packages and jump on top of tables with ease. With that, SpotMini wakes up, under the direction of Boston Dynamics’s Seth Davis, and to the delight of the TED crowd shows off its omnidirectional gait, moving sideways, running in place and hopping back and forth from side to side. Raibert shows onscreen how SpotMini creates a dynamic map of the world around it, allowing it to navigate an obstacle course set up onstage with ease and even deliver a soda to Raibert on command.
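Raibert’s point that SpotMini keeps a dynamic map of its surroundings and plans a route through the obstacle course reflects a classic robotics pattern: maintain a grid of free and occupied space, then search it for a collision-free path. Here is a minimal Python sketch of that general idea, using a toy occupancy grid and breadth-first search; it is an illustration only, not Boston Dynamics’s actual (unpublished) navigation software.

```python
# Illustrative only: a toy 2-D occupancy grid searched with breadth-first
# search. Real legged-robot navigation stacks are far more sophisticated.
from collections import deque

def find_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None.

    grid: 2-D list where 1 marks an obstacle and 0 marks free space.
    """
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # walk the parent links back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        # 4-connected neighbors; a real planner uses richer motion primitives.
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parents):
                parents[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable

# A small "stage" with an obstacle wall and a gap, like the TED course.
stage = [[0, 0, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0],
         [0, 1, 1, 0]]
print(find_path(stage, (0, 0), (3, 3)))
```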





A robot that can pass a college entrance exam, and what that means. Noriko Arai wondered: could an AI pass the entrance exam for the University of Tokyo? The university, known as Todai, is Japan’s Harvard, and the Todai Robot Project aimed to get an AI admitted by 2020. Why? “To study the performance of AI in comparison to humans,” says Arai, “on skills believed to be done only by humans with education.” Last year, the Todai Robot placed in the top 1 percent of students on math, and we watch as it starts composing a 600-character essay on maritime trade in the 17th century. Arai turns her attention to how the robot did this: it broke down math problems into machine-readable formulas, multiple-choice questions into Googleable factoids and essay writing into a task of copying and combining. “None of the AIs today, including Watson, Siri and Todai Robot, [are] able to read — but they are good at searching and optimizing,” she says. They don’t understand; they only appear to. Yet while this AI fell short of admission to Todai last year, it scored among the top 20 percent of all the students who took the first-stage national standardized test, and it qualified for 60 percent of Japanese universities. “How on earth could this unintelligent machine outperform students, our children?” asks Arai. After giving similar tests to thousands of high school students, she found an answer: students aren’t that good at reading either; one-third missed simple reading-comprehension questions. “We believe anyone can learn and learn well,” says Arai. But the best educational materials only benefit those who read well, and many students aren’t there yet.
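Arai’s observation that the robot turns multiple-choice questions into Googleable factoids suggests a simple way to picture the approach: score each answer option by how strongly it co-occurs with the question’s keywords in some corpus, and pick the highest-scoring one. The sketch below is a toy illustration with an invented three-document corpus; the real Todai Robot pipeline is far more elaborate and is not reproduced here.

```python
# Toy illustration of "searching, not reading": score each multiple-choice
# option by how often it co-occurs with the question's keywords. The tiny
# "corpus" is invented for this demo; a real system would query a search
# engine or large document collection.

CORPUS = [
    "dutch ships dominated maritime trade with japan in the 17th century",
    "portuguese traders were expelled from japan in 1639",
    "the silk trade connected china and europe overland",
]

def hit_count(terms: list[str]) -> int:
    """Count corpus documents containing every term (stand-in for web search)."""
    return sum(all(t in doc for t in terms) for doc in CORPUS)

def answer_multiple_choice(keywords: list[str], options: list[str]) -> str:
    # Pick the option with the strongest co-occurrence signal; no reading
    # or understanding of the question is involved.
    return max(options, key=lambda opt: hit_count(keywords + [opt.lower()]))

print(answer_multiple_choice(["trade", "japan", "17th century"],
                             ["Dutch", "Portuguese", "Chinese"]))  # -> Dutch
```
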
Teaching robots human values. In an age of working toward all-knowing robots, Stuart Russell is working toward the opposite: robots with uncertainty. He says that this is the key to harnessing the full power of AI while also preventing the Armageddon of robotic takeover. When we worry about robots becoming too intelligent or deviating from their programmed purpose, we’re worrying about what’s called “the value alignment problem,” Russell explains. So how do we program robots to do exactly what we want them to do, without having them follow their objectives too literally? After all, as Russell cautions, we don’t want to end up like King Midas, who got precisely the wish he asked for and watched his food and his loved ones turn to gold. The solution involves Human-Compatible AI, which focuses on building uncertainty into an altruistic robot’s objective and teaching it to fill that gap with knowledge of human values learned by observing human behavior. Creating this human common sense in robots will “change the definition of AI so that we have provably beneficial machines … and, hopefully, in the process we will learn to be better people.”
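One way to picture Russell’s “uncertainty in the objective” is a robot that holds a probability distribution over what the human wants and does a Bayesian update every time it watches the human act. The toy Python sketch below shows that update under an assumed noisily rational (Boltzmann) model of human choice; it is an illustration of the general idea, not the actual Human-Compatible AI formalism.

```python
# Toy sketch of learning values from behavior: the robot starts uncertain
# over candidate objectives and updates its belief each time it observes
# the human choose. Loosely inspired by the value-alignment idea in the
# talk; the hypotheses and rewards here are invented for the demo.
import math

# Candidate objectives: reward for each action under each hypothesis.
hypotheses = {
    "wants_coffee": {"fetch_coffee": 1.0, "fetch_tea": 0.0},
    "wants_tea":    {"fetch_coffee": 0.0, "fetch_tea": 1.0},
}
belief = {h: 0.5 for h in hypotheses}  # uniform prior: genuine uncertainty

def observe(human_action: str) -> None:
    """Bayesian update, assuming the human is noisily rational (Boltzmann)."""
    for h, rewards in hypotheses.items():
        total = sum(math.exp(r) for r in rewards.values())
        likelihood = math.exp(rewards[human_action]) / total
        belief[h] *= likelihood
    z = sum(belief.values())
    for h in belief:
        belief[h] /= z  # renormalize into a probability distribution

observe("fetch_tea")   # watch the human reach for tea, twice
observe("fetch_tea")
print(belief)          # probability mass shifts toward "wants_tea"
```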




YOLO (You Only Look Once). How do computers tell cats apart from dogs? In 2007, the best algorithms could only tell a cat from a dog with 60 percent accuracy. Today, computers can do it with more than 99 percent accuracy. Computer scientist Joseph Redmon works on YOLO, an open-source object detection system fast enough to run on live video in real time. YOLO is a single neural network that simultaneously predicts all of the bounding boxes (the rectangles enclosing each object in an image) and the class probabilities for those objects, and it’s extremely fast. In a live demo with the TED audience, we see how seamlessly the algorithm detects a person, a stuffed-animal cat or dog, a backpack or a tie. More importantly, the system can be trained for any image domain: “It is fully trainable so our method can be used to detect animals in natural images, cancer cells in medical biopsies or anything else you can imagine,” Redmon says.
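The structure Redmon describes, one network predicting every box and class at once, can be made concrete with a small decoding sketch. In the original YOLO paper the network’s output for an image is a 7×7×30 tensor: a 7×7 grid of cells, each predicting 2 boxes of five numbers (x, y, w, h, confidence) plus 20 class scores. The NumPy sketch below decodes such a tensor into detections; real implementations add non-max suppression and map coordinates back to pixel space.

```python
# Minimal sketch of decoding a YOLO-style output tensor (original paper's
# layout: S*S grid cells, B boxes of 5 numbers each, then C class scores).
# A real system adds non-max suppression and proper coordinate scaling.
import numpy as np

S, B, C = 7, 2, 20          # grid size, boxes per cell, classes (YOLOv1 values)
THRESHOLD = 0.5

def decode(output: np.ndarray):
    """output has shape (S, S, B*5 + C); returns (box, class_id, score) tuples."""
    detections = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            class_probs = cell[B * 5:]           # P(class | object in this cell)
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                scores = conf * class_probs      # class-specific confidence
                best = int(np.argmax(scores))
                if scores[best] > THRESHOLD:
                    detections.append(((x, y, w, h), best, float(scores[best])))
    return detections

# One forward pass of the network would produce this tensor; here it's random.
print(decode(np.random.rand(S, S, B * 5 + C)))
```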


 
