Google’s Artificial Intelligence program learns like people do
A general artificial intelligence program developed by the British company DeepMind, a Google subsidiary, can learn in a way that is closer to how humans learn, using its previous knowledge to solve new problems.
According to the British newspaper The Guardian, the researchers reported in the journal PNAS that the new program, which performs almost as well as a human, has overcome difficulties that previously hampered machine learning.
The most important step forward is that the program learns to solve one problem after another, based on the knowledge and skills it has acquired along the way, without forgetting its accumulated experience.
The ultimate goal is to create machines with general artificial intelligence that mirror human intelligence in a natural and integrated way, something that has not yet been achieved.
“If we are going to have computer programs that are smarter and more useful, then they should have the ability to learn in a sequential manner,” said DeepMind researcher James Kirkpatrick.
The ability to remember old knowledge and skills and then apply them to new problems comes naturally to people, but not, so far, to machines. Most artificial intelligence systems are based on so-called neural networks, which learn to play chess or poker, for example, through endless trial and error. However, to learn another game later, the “smart” computer must first erase what it had learned about previous games (a problem known as “catastrophic forgetting”).
The new study takes an important step toward stopping computers from forgetting the useful things they have learned. The researchers were inspired by neuroscience, which shows that animals, including humans, learn continually while preserving in their brains the neural connections that encode past knowledge critical to their survival (e.g. how to hide from predators or how to find food).
Similarly, before moving on to the next problem, the new machine-learning method identifies which of the computer's neural-network connections are most important for what has been learned so far, so that they are not changed or overwritten when new knowledge is added.
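The DeepMind paper describes this technique as elastic weight consolidation: changes to weights judged important for an earlier task incur a larger training penalty than changes to unimportant ones. The sketch below illustrates only that core idea in toy form; the function names, the importance values, and the example weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ewc_penalty(weights, old_weights, importance, lam=1.0):
    """Quadratic penalty anchoring each weight near the value it had
    after the old task, scaled by that weight's estimated importance
    (in the paper, a diagonal Fisher-information estimate)."""
    return 0.5 * lam * np.sum(importance * np.square(weights - old_weights))

# Toy demonstration (all values illustrative): the first weight is
# treated as important for the old task, the second as unimportant.
old_w = np.array([1.0, 1.0])
importance = np.array([10.0, 0.1])

moved_important = np.array([2.0, 1.0])    # changed the important weight
moved_unimportant = np.array([1.0, 2.0])  # changed the unimportant weight

p1 = ewc_penalty(moved_important, old_w, importance)
p2 = ewc_penalty(moved_unimportant, old_w, importance)
print(p1, p2)  # the important-weight change is penalized far more
```

During training on a new task, this penalty would be added to the new task's loss, so the optimizer is free to adjust unimportant weights while being pulled back toward old values on the important ones.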
So the machine can reuse what it has learned, as shown in experiments in which the artificial intelligence program was asked to play various video games. Thanks to the knowledge gained from one game, the machine could learn to play subsequent games faster, as people do.
However, for the time being, it remains unclear whether, thanks to the new method, it can learn to play many different games not only faster but also better than it would if it had specialized in a single game with the traditional method.
“We have shown that the program can learn different things successively, but we have not shown that it can do better because it has learned them successively. There is therefore room for improvement,” said Kirkpatrick. He acknowledged, however, that “we are still far from a general artificial intelligence and there are many scientific challenges that need to be overcome.”