Why Solving a Rubik's Cube Does Not Signal Robot Supremacy


If someone can solve a Rubik’s Cube, you might safely assume they are both nimble-fingered and good at puzzles. That may not be true for a cube-conquering robot.

OpenAI, a research company in San Francisco whose founders include Elon Musk and Sam Altman, made a splash Tuesday by revealing a robotic system that learned how to solve a Rubik's Cube using its humanoid hand.

In a press release, OpenAI claimed that its robot, called Dactyl, is “close to human-level dexterity.” And videos of the machine effortlessly turning and spinning the cube certainly seem to suggest as much. The clips were heralded by some on social media as evidence that a revolution in robot manipulation has at long last arrived.

But there are serious caveats with the Dactyl demo. For one thing, the robot dropped the cube eight out of 10 times in testing, which is hardly evidence of superhuman, or even human, deftness. For another, it required the equivalent of 10,000 years of simulated training to learn how to manipulate the cube.

In fact, it may be some time before robots are capable of the kind of manipulation that we humans take for granted.

“I wouldn’t say it’s total hype—it’s not,” says Ken Goldberg, a roboticist at UC Berkeley who also uses reinforcement learning, a technique in which artificial intelligence programs “learn” from repeated experimentation. “But people are going to look at that video and think ‘My god, next it’s going to be shuffling cards and other things,’ which it isn’t.”

Showy demos are now a standard part of the AI business. Companies and universities know that putting on an impressive demo—one that captures the public imagination—can produce more headlines than just an academic paper and a press release. This is especially important for companies competing fiercely for research talent, customers, and funding.

Others are more critical of the demo and the hoopla around it. “Do you know any 6-year-old that drops a Rubik’s Cube 80 percent of the time?” says Gary Marcus, a cognitive scientist who is critical of AI hype. “You would take them to a neurologist.”

More important, Dactyl’s dexterity is highly specific and constrained. It can adapt to small disturbances (cutely demonstrated in the video by nudging the robot hand with a toy giraffe). But without extensive additional training, the system can’t pick up a cube from a table, manipulate it with a different grip, or grasp and handle another object.

“From the robotics perspective, it’s extraordinary that they were able to get it to work,” says Leslie Pack Kaelbling, a professor at MIT who has previously worked on reinforcement learning. But Kaelbling cautions that the approach likely won’t create general-purpose robots, because it requires so much training. Still, she adds, “there’s a kernel of something good here.”

Dactyl’s real innovation, which isn’t evident from the videos, involves how it transfers learning from simulation to the real world.


