Creating an AI system that can think like a human has been one of the greatest challenges in computer science.
Now, researchers claim to have created an AI that can think like a baby, by teaching it basic rules of the physical world.
Their deep learning system can learn ‘intuitive physics’ – the common sense rules of how physical objects interact.
In experiments, the academics trained the new system, called PLATO, with a set of animated slides of moving balls.
After being trained with a small set of the visual animations, PLATO was able to demonstrate learning and even ‘surprise’ if a ball moved in an impossible way.
Researchers claim to have created an AI that can think like a baby, by teaching it basic rules of the physical world. In experiments, the academics taught a deep learning system, named PLATO, with a set of animated slides of a ball’s movement (pictured)
The researchers explain that even very young children are aware of ‘intuitive physics’ – the common sense rules of how the world works.
Intuitive physics is common-sense knowledge that we use to understand how objects behave and interact.
Those with a grasp of intuitive physics have expectations of how two objects may interact.
Whether we’re born with or quickly learn intuitive physics is a matter of scientific debate.
The new study was conducted by experts at Princeton University in New Jersey, University College London and Google-owned firm DeepMind, and published in Nature Human Behaviour.
Their findings are important in the quest to build AI models that have the same physical understanding as adult humans, they say.
‘Understanding the physical world is a critical skill that most people deploy effortlessly,’ said study author Dr Luis S. Piloto at DeepMind.
‘However, this still poses a challenge to artificial intelligence – if we’re to deploy helpful systems in the real world, we want these models to share our intuitive sense of physics.’
In 1950, legendary British computer scientist Alan Turing proposed training an AI to give it the intelligence of a child, and then provide the appropriate experiences to build up its intelligence to that of an adult.
‘Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s?’ Turing wrote in Computing Machinery and Intelligence, his seminal research paper.
In 1950, legendary British computer scientist Alan Turing (pictured) proposed the theory of training an AI to give it the intelligence of a child, and then provide the appropriate experiences to build up its intelligence to that of an adult
WHAT IS GOOGLE’S DEEPMIND PROJECT?
DeepMind was founded in London in 2010 and acquired by Google in 2014.
It now has additional research centers in Edmonton and Montreal, Canada, and a DeepMind Applied team in Mountain View, California.
DeepMind is on a mission to push the boundaries of AI, developing programs that can solve any complex problem without needing to be taught how.
The company has hit the headlines for a number of its creations, including software that taught itself how to play and win at 49 completely different Atari titles, with just raw pixels as input.
It’s also known for creating an AI that beat professional Go player Lee Sedol, the world champion, in a five-game match in 2016.
‘If this were then subjected to an appropriate course of education one would obtain the adult brain.’
The authors of this new study explain that even very young children are aware of ‘intuitive physics’ – the common sense rules of how the world works.
For example, if someone were to dangle their keys in mid-air and declare that they were going to let them go, everyone around them would know unsupported objects do not float in mid-air.
They would also know that two objects – the keys and a table underneath, for example – would not pass through one another. Therefore, people would expect the keys to fall until they meet the table.
This knowledge is not unique to adults – even three-month-old infants have these expectations, and they react if they encounter a ‘magical’ situation that seems to violate these expectations.
For example, babies as young as five months of age are surprised if they are shown a situation which involves a physically impossible event, such as a toy suddenly disappearing.
For their study, the researchers asked whether AI models can learn a diverse set of physical concepts — specifically ones that young infants understand, such as solidity (that two objects do not pass through one another) and continuity (that objects do not blink in and out of existence).
They built an AI system, PLATO, so it could represent visual inputs as a set of objects and reason about interactions between the objects.
The authors trained PLATO by showing it videos of many simple scenes, such as balls falling to the ground, balls rolling behind other objects and reappearing, and balls bouncing off each other.
After training, PLATO was tested by showing it videos that sometimes contained impossible scenes, such as balls disappearing and reappearing on the other side of the frame.
Just like a young child, PLATO showed ‘surprise’ when it was shown anything that did not make sense, such as objects moving through each other without interacting.
‘One interpretation of “surprise” is expecting to see something and finding another outcome,’ said Dr Piloto.
‘PLATO makes predictions about the configuration of objects it will observe next. As the video plays out, it then observes the actual configuration of objects.
‘The surprise is the difference between the configuration it predicted and the actual configuration in the next frame of the video.’
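In broad strokes, the ‘surprise’ Dr Piloto describes is a prediction error: the gap between where the model expected the objects to be and where they actually are in the next frame. The sketch below illustrates the idea in miniature, using simple 2D object positions; it is a toy analogy, not PLATO’s actual architecture, and the function name and position format are invented for illustration.

```python
import numpy as np

def surprise(predicted, observed):
    """Toy prediction-error 'surprise': the mean absolute difference
    between predicted and observed object positions in the next frame."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean(np.abs(predicted - observed)))

# A physically plausible frame: the prediction nearly matches reality,
# so the surprise signal is small.
low = surprise([[0.0, 1.0], [2.0, 3.0]],
               [[0.1, 1.0], [2.0, 2.9]])

# An 'impossible' frame, e.g. one ball suddenly teleporting: the
# prediction is far from what is observed, so surprise spikes.
high = surprise([[0.0, 1.0], [2.0, 3.0]],
                [[5.0, 1.0], [2.0, 3.0]])

print(low, high)   # the impossible event yields the larger value
```

A violation-of-expectation test then amounts to comparing this signal across possible and impossible videos: higher surprise on the impossible one mirrors an infant looking longer at a ‘magical’ event.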
These learning effects were seen after watching as little as 28 hours of videos.
Authors conclude that PLATO could offer a powerful tool for research into how humans learn intuitive physics.
The results also show that a deep learning system modelled on infant learning outperforms more traditional systems that learn ‘from scratch’.
‘The findings from this paper suggest that Turing might have been right,’ say Susan Hespos and Apoorva Shivaram in an accompanying News & Views piece.
‘Common-sense physics is a situation in which development elaborates and refines knowledge without fundamentally changing it.
‘This means that studies of object knowledge in infancy can lend insight into object knowledge in adults, and potentially tell us how to build better computer models that simulate the human mind.’
‘THE GAME IS OVER!’ GOOGLE’S DEEPMIND SAYS IT IS CLOSE TO ACHIEVING ‘HUMAN-LEVEL’ ARTIFICIAL INTELLIGENCE – BUT IT STILL NEEDS TO BE SCALED UP
DeepMind, a British company owned by Google, may be on the verge of achieving human-level artificial intelligence (AI).
Nando de Freitas, a scientist at DeepMind and machine learning professor at Oxford University, has said ‘the game is over’ in regards to solving the hardest challenges in the race to achieve artificial general intelligence (AGI).
AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training.
According to De Freitas, the quest for scientists is now scaling up AI programs, such as with more data and computing power, to create an AGI.
De Freitas’ comments came in response to an opinion piece published on The Next Web that said humans alive today won’t ever achieve AGI.
In May 2022, DeepMind unveiled a new AI ‘agent’ called Gato that can complete 604 different tasks ‘across a wide range of environments’.