Humans parsimoniously represent auditory sequences by pruning and completing the underlying network structure
Successive auditory inputs are rarely independent, their relationships ranging from local transitions between elements to hierarchical and nested representations. In many situations, humans retrieve these dependencies even from limited data. However, learning at multiple scales remains poorly understood. Here we used the formalism of network science to study the representation of local and higher-order structures, and their interaction, in auditory sequences. We show that human adults exhibited biases in their perception of local transitions between elements, which made them sensitive to higher-order network structures such as communities. This behavior is consistent with the creation of a parsimonious simplified model from the evidence they receive, achieved by pruning and completing relationships between network elements. This observation suggests that the brain relies not on exact memories but on a parsimonious representation of the world. Moreover, this bias can be analytically modeled by a memory/efficiency trade-off. This model correctly accounts for previous findings, including local transition probabilities as well as higher-order network structures, unifying sequence learning across scales. We finally propose putative brain implementations of such a bias.
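A minimal sketch of what a memory/efficiency trade-off of this kind can look like, following discounted-memory learner models from the network-learning literature. Everything here is an illustrative assumption, not the paper's exact model: the discount parameter `eta`, the toy two-community graph, and the closed form `Â = (1 − η) A (I − ηA)⁻¹`, which mixes increasingly long transition paths with geometrically decaying weight. The point it demonstrates is the one made above: blurring memories of individual transitions makes within-community transitions look stronger than cross-community ones, so the learner becomes sensitive to community structure.

```python
import numpy as np

def transition_matrix(adj):
    """Row-normalize an adjacency matrix into transition probabilities."""
    return adj / adj.sum(axis=1, keepdims=True)

def discounted_estimate(A, eta=0.8):
    """Expected internal model when the memory of a transition seen t
    steps back is discounted by eta**t:
        A_hat = (1 - eta) * A @ inv(I - eta * A)
    i.e., a geometric mixture of A, A^2, A^3, ... (an assumption
    standing in for the paper's analytic trade-off model)."""
    n = A.shape[0]
    return (1 - eta) * A @ np.linalg.inv(np.eye(n) - eta * A)

# Toy graph: two fully connected 4-node communities joined by one
# bridge edge (node 3 <-> node 4). Purely illustrative.
adj = np.zeros((8, 8))
for comm in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in comm:
        for j in comm:
            if i != j:
                adj[i, j] = 1.0
adj[3, 4] = adj[4, 3] = 1.0  # bridge between the communities

A = transition_matrix(adj)
A_hat = discounted_estimate(A)

# In the veridical matrix A, the bridge node 3 assigns equal
# probability (0.25) to each of its four neighbors. In the blurred
# internal model A_hat, the cross-community transition 3 -> 4 is
# down-weighted relative to the within-community transition 3 -> 2:
# the community boundary emerges from the memory bias.
print("true:", A[3, 2], A[3, 4])
print("internal:", A_hat[3, 2], A_hat[3, 4])
```

Because every power of a row-stochastic matrix is row-stochastic, `A_hat` remains a valid transition matrix; only the relative weights shift, which is what makes the bias "parsimonious" rather than lossy in normalization.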
All data and analyses are publicly available at https://osf.io/e8u7f/.
Article and author information
European Research Council (695710)
- Ghislaine Dehaene-Lambertz
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Human subjects: All participants gave informed consent for participation and publication, and this research was approved by the ethical research committee of Paris-Saclay University under the reference CER-Paris-Saclay-2019-063.
- Floris P de Lange, Donders Institute for Brain, Cognition and Behaviour, Netherlands
- Received: January 25, 2023
- Accepted: April 28, 2023
- Accepted Manuscript published: May 2, 2023 (version 1)
© 2023, Benjamin et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.