In a round of Unreal Tournament 2004 played for the 2012 BotPrize competition, two bots were mistaken for human 52% of the time. This is big. In fact, this is HUGE. One hundred years after Alan Turing's birth, artificial intelligence was both smart enough and human-seeming enough to pass a version of his Turing Test.
Credit where credit is due: This achievement was the work of computer scientists at The University of Texas at Austin: Risto Miikkulainen, a computer science professor in the College of Natural Sciences, and doctoral students Jacob Schrum and Igor Karpov.
The human judges identified the bots as human because the bots did not play perfectly. Their imperfection was by design. The way to trick a human into believing a robot is human, too, is to give the robot flaws.
Jacob Schrum explained the winning design in a press release:
“In the case of the BotPrize,” said Schrum, “a great deal of the challenge is in defining what ‘human-like’ is, and then setting constraints upon the neural networks so that they evolve toward that behavior.
“If we just set the goal as eliminating one’s enemies, a bot will evolve toward having perfect aim, which is not very human-like. So we impose constraints on the bot’s aim, such that rapid movements and long distances decrease accuracy. By evolving for good performance under such behavioral constraints, the bot’s skill is optimized within human limitations, resulting in behavior that is good but still human-like.”
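Schrum's description suggests a simple way to think about the aim constraint: treat hit probability as something that decays as the bot moves faster and the target gets farther away. The sketch below is a toy illustration of that idea, not the actual BotPrize bot's code; the function name, decay shape, and penalty coefficients are all invented for this example.

```python
import math

def constrained_accuracy(base_accuracy, speed, distance,
                         speed_penalty=0.05, distance_penalty=0.01):
    """Toy model: hit probability decays with movement speed and range.

    base_accuracy: hit probability when stationary at point-blank range.
    speed: the shooter's movement speed; distance: range to the target.
    The exponential decay and the penalty coefficients are illustrative
    assumptions, not taken from the UT^2 bot.
    """
    penalty = speed_penalty * speed + distance_penalty * distance
    return base_accuracy * math.exp(-penalty)

# A nearly stationary, close-range shot stays near base accuracy,
# while sprinting at a distant target drops accuracy sharply.
close_shot = constrained_accuracy(0.95, speed=0, distance=5)
hard_shot = constrained_accuracy(0.95, speed=8, distance=60)
print(round(close_shot, 3), round(hard_shot, 3))
```

Under a cap like this, an evolutionary process can still optimize the bot's skill, but only within human-like limits: perfect aim is simply not reachable, so "good but flawed" behavior wins.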
A basic principle is clear: humans screw up, robots don’t.
Where do we go with that, I wonder? There are many situations in which perfection would be a boon: driving a car, for example, or performing any repetitive task. Bots do not get tired or bored, or wish they were somewhere else. (At least not yet.)
If your idea of perfection is to be able to aim at a video game target and never miss, go for it.
Then I thought of the great line from the miniseries Slings & Arrows: “Forget perfection! There’s nothing more boring than perfection.”
Consider a strange result from the same tournament: the human players were correctly identified as human only 40% of the time. What is going on THERE?
Imperfect judgment like that would drive a bot crazy, if a bot could be driven crazy (and it probably could be).
Imperfection would keep bot-human relationships from being boring, though at some point the human might wonder why he or she is bothering. To stay interesting companions, the bots would have to keep evolving, changing as life changes us.