A computer model has been created that can replicate humans’ unique ability to learn new concepts from a single example.
Researchers at MIT and New York University have developed machine-learning algorithms that can compete with a human’s capacity for learning. The model created by the team is capable of learning, and making generalisations about, handwritten characters from alphabets around the world.
Lead researcher Brendan Lake explains that he and the other researchers had a number of reasons for completing their study. “One goal was to better understand human learning,” says Lake. “The other goal was to develop new, more human-like learning algorithms.”
The research team used alphabets from around the world because they sought to directly compare the model they created with human comprehension.
First, the model was built to learn a large class of visual symbols and make generalisations about them from very few examples. This modelling scheme was called the Bayesian Program Learning (BPL) framework.
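The idea of generalising from very few examples can be illustrated with a toy sketch. The snippet below is not the authors’ BPL model; it is a crude stand-in in which each concept is summarised by the mean of its handful of examples and a new example is scored by a Gaussian log-likelihood around that mean. All names, features, and numbers are illustrative assumptions.

```python
import math

def log_likelihood(example, prototype, sigma=1.0):
    """Log-probability of `example` under an isotropic Gaussian centred at `prototype`."""
    sq_dist = sum((e - p) ** 2 for e, p in zip(example, prototype))
    return -sq_dist / (2 * sigma ** 2) - len(example) * math.log(sigma * math.sqrt(2 * math.pi))

def classify(example, concepts):
    """Pick the concept whose few training examples best explain `example`."""
    def prototype(examples):
        n = len(examples)
        return [sum(xs) / n for xs in zip(*examples)]
    return max(concepts, key=lambda name: log_likelihood(example, prototype(concepts[name])))

# One training example per concept -- "one-shot" learning in this toy setting.
concepts = {"A": [[0.0, 1.0]], "B": [[3.0, 3.0]]}
print(classify([0.2, 1.1], concepts))  # the new point is far closer to "A"
```

The real BPL framework scores richly structured generative programs over pen strokes rather than simple feature vectors, but the one-shot classification step has the same shape: ask which learned concept makes the new example most probable.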
“If we had chosen full visual scenes, such as a street scene in New York City, people would obviously have an advantage over machines because so much of the brain is devoted to visual processing,” says Lake. “If we’d chosen concepts based on financial data then machines would probably have an advantage because they’re better at crunching numbers and these concepts are less intuitive.
“Characters provided a relatively level playing field. We hoped [by using alphabets] to generate insights that could generalise to other domains.”
The researchers hope that the approach taken by the computer model can be broadened so that it can be used for other symbol-based systems, like gestures, dance moves, and the words of spoken and signed languages.
“As we were developing the model, we identified three core ingredients that are important for its performance. The first ingredient is compositionality, an old idea that representations should be built up from simpler primitives; in the case of characters, these were the pen strokes,” says Lake.
“Another critical ingredient was causality, which is representing in an abstract way the causal structure behind where objects come from. The last key ingredient is learning to learn: the idea that knowledge of previous concepts can be used to learn new concepts.
“These principles may help to explain how we learn and use other concepts in different domains.”
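The compositionality ingredient in particular lends itself to a short sketch: characters represented as combinations drawn from a shared inventory of stroke primitives, so that old parts can be reused to form new concepts. The primitive names and the composition rule below are assumptions made for illustration, not the paper’s actual stroke representation.

```python
# A tiny inventory of stroke primitives, each a list of (x, y) control points.
# These names and coordinates are purely illustrative.
PRIMITIVES = {
    "vbar":  [(0, 0), (0, 2)],   # vertical stroke
    "hbar":  [(0, 1), (1, 1)],   # horizontal stroke
    "slash": [(0, 0), (1, 2)],   # diagonal stroke
}

def compose(*stroke_names):
    """A character is represented simply as the list of strokes it is built from."""
    return [PRIMITIVES[name] for name in stroke_names]

# New characters are new combinations of the same old parts -- the essence of
# compositionality, and the reuse across characters echoes "learning to learn".
plus_like = compose("vbar", "hbar")
x_like    = compose("slash", "slash")
print(len(plus_like), len(x_like))  # each toy character is two strokes
```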
Read the full paper at sciencemag.org.