Recent years have seen steady advances in machine learning, yet people remain far better than machines at learning new concepts, often needing just one or two examples where machines typically require tens or hundreds. What's more, after learning a concept for the first time, people can typically use it in rich and diverse ways. Brenden Lake and colleagues sought to develop a model that captures these human learning abilities. They focused on a large class of simple visual concepts, handwritten characters from alphabets around the world, building their model to "learn" this class of visual symbols, and to generalize about it, from very few examples. They call this modeling scheme the Bayesian program learning (BPL) framework.

After developing the BPL approach, the researchers directly compared people, BPL, and other computational approaches on five challenging concept-learning tasks, including generating new examples of characters seen only a few times. On a difficult one-shot classification task, the BPL model achieved human-level performance while outperforming recent deep learning approaches. The model classifies, parses, and recreates handwritten characters, and it can generate new letters of an alphabet that look "right," as judged by Turing-like tests comparing the model's output with what real humans produce.
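To make the one-shot classification setting concrete, here is a minimal toy sketch of the general idea of classifying by generative scoring: each candidate class is represented by a single example, and a query image is assigned to the class whose example best "explains" it under a simple noise model. This is not the authors' BPL implementation, which represents characters as probabilistic programs over strokes and infers parses; the Bernoulli pixel-noise likelihood, image sizes, and all names below are illustrative assumptions.

```python
import numpy as np

# Toy one-shot classification by generative scoring (NOT the BPL model from
# the paper). Each class is a single binary example image; a query image is
# scored under a crude Bernoulli pixel-flip noise model. All parameters
# (8x8 glyphs, noise=0.1) are hypothetical choices for illustration.

def log_likelihood(example: np.ndarray, image: np.ndarray, noise: float = 0.1) -> float:
    """Log-probability of `image` if each pixel of `example` is flipped
    independently with probability `noise`."""
    agree = (example == image)
    return float(np.sum(np.where(agree, np.log(1.0 - noise), np.log(noise))))

def one_shot_classify(examples: dict, image: np.ndarray) -> str:
    """Return the label whose single training example best explains `image`."""
    return max(examples, key=lambda label: log_likelihood(examples[label], image))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # One binary "character" image per class (hypothetical random glyphs).
    examples = {c: (rng.random((8, 8)) > 0.5) for c in "abc"}
    # A lightly corrupted copy of class "b" should still be classified as "b".
    query = examples["b"] ^ (rng.random((8, 8)) > 0.9)
    print(one_shot_classify(examples, query))  # expected: b
```

The point of the sketch is the scoring structure, not the noise model: BPL's advantage in the paper comes from replacing this crude pixel likelihood with a rich, compositional generative model of how characters are drawn stroke by stroke.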
http://www.eurekalert.org/multimedia/pub/105119.php
This short movie summarizes the work by Brenden Lake et al. This material relates to a paper that appeared in the Dec. 11, 2015 issue of Science, published by AAAS. The paper, by B.M. Lake of New York University in New York, NY, and colleagues, was titled "Human-level concept learning through probabilistic program induction." (Brenden Lake)