TY  - JOUR
TI  - One-shot generalization in humans revealed through a drawing task
AU  - Tiedemann, Henning
AU  - Morgenstern, Yaniv
AU  - Schmidt, Filipp
AU  - Fleming, Roland W
A2  - Barense, Morgan
A2  - Baker, Chris I
A2  - Bainbridge, Wilma
VL  - 11
PY  - 2022
DA  - 2022/05/10
SP  - e75485
C1  - eLife 2022;11:e75485
DO  - 10.7554/eLife.75485
UR  - https://doi.org/10.7554/eLife.75485
AB  - Humans have the amazing ability to learn new visual concepts from just a single exemplar. How we achieve this remains mysterious. State-of-the-art theories suggest observers rely on internal ‘generative models’, which not only describe observed objects, but can also synthesize novel variations. However, compelling evidence for generative models in human one-shot learning remains sparse. In most studies, participants merely compare candidate objects created by the experimenters, rather than generating their own ideas. Here, we overcame this key limitation by presenting participants with 2D ‘Exemplar’ shapes and asking them to draw their own ‘Variations’ belonging to the same class. The drawings reveal that participants inferred—and synthesized—genuine novel categories that were far more varied than mere copies. Yet, there was striking agreement between participants about which shape features were most distinctive, and these tended to be preserved in the drawn Variations. Indeed, swapping distinctive parts caused objects to swap apparent category. Our findings suggest that internal generative models are key to how humans generalize from single exemplars. When observers see a novel object for the first time, they identify its most distinctive features and infer a generative model of its shape, allowing them to mentally synthesize plausible variants.
KW  - visual perception
KW  - categorization
KW  - shape perception
JF  - eLife
SN  - 2050-084X
PB  - eLife Sciences Publications, Ltd
ER  - 