BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming

  1. Chaoming Wang
  2. Tianqiu Zhang
  3. Xiaoyu Chen
  4. Sichao He
  5. Shangyang Li
  6. Si Wu (corresponding author)
  1. Peking University, China
  2. Beijing Jiaotong University, China

Abstract

Elucidating the intricate neural mechanisms underlying brain functions requires integrative brain dynamics modeling. To facilitate this process, it is crucial to develop a general-purpose programming framework that allows users to freely define neural models across multiple scales, efficiently simulate, train, and analyze model dynamics, and conveniently incorporate new modeling approaches. In response to this need, we present BrainPy. BrainPy leverages the advanced just-in-time (JIT) compilation capabilities of JAX and XLA to provide a powerful infrastructure tailored for brain dynamics programming. It offers an integrated platform for building, simulating, training, and analyzing brain dynamics models. Models defined in BrainPy can be JIT compiled into binary instructions for various devices, including Central Processing Unit (CPU), Graphics Processing Unit (GPU), and Tensor Processing Unit (TPU), which ensures high running performance comparable to native C or CUDA. Additionally, BrainPy features an extensible architecture that allows for easy integration of new infrastructure, utilities, and machine-learning approaches. This flexibility enables researchers to incorporate cutting-edge techniques and adapt the framework to their specific needs.
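
As a concrete illustration of the programming style the abstract describes, the sketch below defines and simulates a small conductance-based excitatory-inhibitory (E-I) spiking network. It is a minimal sketch adapted from the COBA network example in the BrainPy documentation; the class names, parameters, and signatures follow the BrainPy 2.x API and may differ in other versions.

    import brainpy as bp

    class EINet(bp.Network):
        """A conductance-based E-I network of leaky integrate-and-fire neurons."""
        def __init__(self, num_exc=3200, num_inh=800):
            super().__init__()
            pars = dict(V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.)
            self.E = bp.neurons.LIF(num_exc, **pars)  # excitatory population
            self.I = bp.neurons.LIF(num_inh, **pars)  # inhibitory population
            # Sparse random connectivity; exponential synapses with COBA output.
            self.E2E = bp.synapses.Exponential(self.E, self.E, bp.conn.FixedProb(0.02),
                                               output=bp.synouts.COBA(E=0.), g_max=0.6, tau=5.)
            self.E2I = bp.synapses.Exponential(self.E, self.I, bp.conn.FixedProb(0.02),
                                               output=bp.synouts.COBA(E=0.), g_max=0.6, tau=5.)
            self.I2E = bp.synapses.Exponential(self.I, self.E, bp.conn.FixedProb(0.02),
                                               output=bp.synouts.COBA(E=-80.), g_max=6.7, tau=10.)
            self.I2I = bp.synapses.Exponential(self.I, self.I, bp.conn.FixedProb(0.02),
                                               output=bp.synouts.COBA(E=-80.), g_max=6.7, tau=10.)

    # DSRunner JIT-compiles the simulation loop through JAX/XLA, so the same
    # model runs on CPU, GPU, or TPU without modification.
    net = EINet()
    runner = bp.DSRunner(net, monitors=['E.spike'],
                         inputs=[('E.input', 20.), ('I.input', 20.)])
    runner.run(100.)  # simulate 100 ms of network activity

Beyond simulation, the same model object can be handed to the training and analysis toolkits that the paper describes as part of the integrated platform.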

Data availability

BrainPy is distributed via the Python Package Index (PyPI; https://pypi.org/project/brainpy/) and is publicly released on GitHub (https://github.com/brainpy/BrainPy/) under the GNU General Public License v3.0. Its documentation is hosted on the free documentation hosting platform Read the Docs (https://brainpy.readthedocs.io/). Rich examples and illustrations of BrainPy are publicly available at https://brainpy-examples.readthedocs.io/. The source code of these examples is available at https://github.com/brainpy/examples/. All the code to reproduce the results in the paper can be found at the following GitHub repository: https://github.com/brainpy/brainpy-paper-reproducibility/.
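
As the statement above notes, releases are published on PyPI, so the standard pip command installs the package (shown below; running on GPU or TPU additionally requires a JAX build that matches the target accelerator, per the JAX installation instructions):

    pip install brainpy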

Article and author information

Author details

  1. Chaoming Wang

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  2. Tianqiu Zhang

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  3. Xiaoyu Chen

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  4. Sichao He

    Beijing Jiaotong University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  5. Shangyang Li

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  6. Si Wu

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    For correspondence
    siwu@pku.edu.cn
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-9650-6935

Funding

Peking University (2021ZD0200204)

  • Si Wu

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Copyright

© 2023, Wang et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,690 views
  • 294 downloads
  • 11 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Chaoming Wang
  2. Tianqiu Zhang
  3. Xiaoyu Chen
  4. Sichao He
  5. Shangyang Li
  6. Si Wu
(2023)
BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming
eLife 12:e86365.
https://doi.org/10.7554/eLife.86365


Further reading

    1. Computational and Systems Biology
    2. Neuroscience
    Cesare V Parise, Marc O Ernst
    Research Article

    Audiovisual information reaches the brain via both sustained and transient input channels, representing signals’ intensity over time or changes thereof, respectively. To date, it is unclear to what extent transient and sustained input channels contribute to the combined percept obtained through multisensory integration. Based on the results of two novel psychophysical experiments, here we demonstrate the importance of the transient (instead of the sustained) channel for the integration of audiovisual signals. To account for the present results, we developed a biologically inspired, general-purpose model for multisensory integration, the multisensory correlation detectors, which combines correlated input from unimodal transient channels. Besides accounting for the results of our psychophysical experiments, this model could quantitatively replicate several recent findings in multisensory research, as tested against a large collection of published datasets. In particular, the model could simultaneously account for the perceived timing of audiovisual events, multisensory facilitation in detection tasks, causality judgments, and optimal integration. This study demonstrates that several phenomena in multisensory research that were previously considered unrelated, all stem from the integration of correlated input from unimodal transient channels.

    1. Computational and Systems Biology
    Franck Simon, Maria Colomba Comes ... Herve Isambert
    Tools and Resources

    Live-cell microscopy routinely provides massive amounts of time-lapse images of complex cellular systems under various physiological or therapeutic conditions. However, this wealth of data remains difficult to interpret in terms of causal effects. Here, we describe CausalXtract, a flexible computational pipeline that discovers causal and possibly time-lagged effects from morphodynamic features and cell–cell interactions in live-cell imaging data. The CausalXtract methodology combines network-based and information-based frameworks and is shown to discover causal effects overlooked by classical Granger and Schreiber causality approaches. We showcase the use of CausalXtract to uncover novel causal effects in a tumor-on-chip cellular ecosystem under therapeutically relevant conditions. In particular, we find that cancer-associated fibroblasts directly inhibit cancer cell apoptosis, independently from anticancer treatment. CausalXtract also uncovers multiple antagonistic effects at different time delays. Hence, CausalXtract provides a unique computational tool to interpret live-cell imaging data for a range of fundamental and translational research applications.