New coverage of OpenAI’s ‘Dactyl’ robot shows how humans tend to evaluate computer capabilities when robotics and AI come together.
A detailed story in Wired describes how Dactyl, developed at the AI lab co-founded by Elon Musk, eventually solved a Rubik’s cube with a robotic hand – a new landmark in AI design.
On the one hand (no pun intended), Dactyl was able to solve the Rubik’s cube puzzle without assistance – something that few humans can easily do. Many would never be able to do it at all!
However, critics of this bellwether for AI capabilities point out that Dactyl dropped the Rubik’s cube eight times in testing and required massive machine learning input to solve the puzzle. In their view, this benchmark doesn’t necessarily prove that robots will be able to handle the many other tasks that demand human cognition and physical dexterity.
A quoted expert suggests that Dactyl will not be “shuffling cards” in the near future.
At the same time, some of those following the rapid advance of artificial intelligence on problems traditionally reserved for the human brain suggest that many of these robotics breakthroughs may arrive sooner rather than later.
The evolution of OpenAI’s robot also harks back to another Muskian imperative – the need to create ethical AI oversight in the human community.
A Vox column last year details concerns from Musk and other top gurus that AI can get out of hand if not constrained by ethical audits – in fact, the Wired story above notes that OpenAI declined to release a recent text-generating technology that planners deemed “dangerous” for human consumption.
In the end, giving a robot a complex physical and mental puzzle like a Rubik’s cube and getting a successful result does mean something for the frontier of AI – we just can’t all agree on exactly what it means.