Comparative fMRI reveals differences in the functional organization of the visual cortex for animacy perception in dogs and humans

  1. Neuroethology of Communication Lab, Department of Ethology, Institute of Biology, Eötvös Loránd University, Budapest, Hungary
  2. MTA-ELTE ‘Lendület’ Neuroethology of Communication Research Group, Eötvös Loránd University, Budapest, Hungary
  3. ELTE NAP Canine Brain Research Group, Budapest, Hungary
  4. Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
  5. Facultad de Medicina, Universidad Nacional Autónoma de México, Santiago de Querétaro, Mexico
  6. Department of Ethology, Institute of Biology, Eötvös Loránd University, Budapest, Hungary
  7. HUN-REN–ELTE Comparative Ethology Research Group, Budapest, Hungary

Peer review process

Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.


Editors

  • Reviewing Editor
    Saad Jbabdi
    University of Oxford, Oxford, United Kingdom
  • Senior Editor
    Joshua Gold
    University of Pennsylvania, Philadelphia, United States of America

Reviewer #1 (Public review):

Summary

Farkas and colleagues conducted a comparative neuroimaging study with domestic dogs and humans to explore whether social perception in both species is underpinned by an analogous distinction between animate and inanimate entities, an established functional organizing principle in the primate and human brain. Presenting domestic dogs and humans with clips of three animate classes (dogs, humans, cats) and one inanimate control (cars), the authors also set out to compare how dogs and humans perceive their own vs other species. Both research questions have been previously studied in dogs, but the authors used novel dynamic stimuli and added animate and inanimate classes, which have not been investigated before (i.e., cats and cars). Combining univariate and multivariate analysis approaches, they identified functionally analogous areas in the dog and human occipito-temporal cortex involved in the perception of animate entities, largely replicating previous observations. This further emphasizes a potentially shared functional organizing principle of social perception in the two species. The authors also describe between-species divergences in the perception of the different animate classes, arguing for a less generalized perception of animate entities in dogs, but this conclusion is not convincingly supported by the applied analyses and reported findings.

Strengths

Domestic dogs represent a compelling model species to study the neural bases of social perception and potentially shared functional organizing principles with humans and primates. The field of comparative neuroimaging with dogs is still young, with a growing but still small number of studies, and the present study exemplifies the reproducibility of previous research. Using dynamic instead of static stimuli and adding new stimulus classes, Farkas and colleagues successfully replicated and expanded previous findings, adding to the growing body of evidence that social perception is underpinned by a shared functional organizing principle in the dog and human occipito-temporal cortex.

Weaknesses

The study design is imbalanced, with only one category of inanimate objects vs. three animate entities. Moreover, based on the example videos, it appears that the animate stimuli also differed in the complexity of the content from the car stimuli, with often multiple agents interacting or performing goal-directed actions. Furthermore, while dogs are familiar with cars, they are definitely of lower relevance and interest to them than the animate stimuli. Thus, to a certain extent, the results might also reflect differences in attention towards/salience of the stimuli.

The methods section and rationale behind the chosen approaches were often difficult to follow and lacked a lot of information, which makes it difficult to judge the evidence and the drawn conclusions, and weakens the potential for reproducibility of this work. For example, for many preprocessing and analysis steps, parameters or descriptions of the tools used were missing; no information on the anatomical masks and atlas used in humans was provided; and it is often not clear whether the authors are referring to the univariate or the multivariate analysis.

In regard to the chosen approaches and rationale, the authors generally binarize a lot of rich information. Instead of directly testing potential differences in the neural representations of the different animate entities, they binarize dissimilarity maps for, e.g., animate entity > inanimate cars and then calculate the overlap between the maps. The comparison of the overlap of these three maps between species is also problematic, considering that the human RSA was restricted to the occipital and temporal cortex (there is no information on how they defined it) vs. whole-brain in dogs. Considering that the stimuli do differ based on low-level visual properties (just not significantly within a run), the RSA would also allow the authors to directly test whether some of the (dis)similarities might be driven by low-level visual features, as they did, e.g., with the early visual cortex model. I do think RSA is generally an excellent choice to investigate the neural representation of animate (and inanimate) stimuli, but the authors should apply it more appropriately and use its full potential.

The authors localized some of the "animate areas" also with the early visual cortex model (e.g. ectomarginal gyrus, mid suprasylvian); in humans, the model only included the known early visual cortex. What does this mean for the animate areas in dogs?

The results section also lacks information and statistical evidence; for example, for the univariate region-of-interest (ROI) analysis (called response profiles) comparing activation strength towards each stimulus type, it is not reported whether comparisons were significant or not, although the authors state they conducted t-tests. The authors describe that they created spheres on all peaks reported for the contrast animate > inanimate, but they only report results for the mid suprasylvian and occipital gyrus (e.g. the caudal suprasylvian gyrus is missing). Furthermore, considering that the ROIs were chosen based on the contrast animate > inanimate stimuli, activation strength should only be compared between animate entities (i.e., dogs, humans, cats), while cars should not be reported (as this would be double dipping, after selecting voxels showing lower activation for that category). The descriptive data in Figure 3B (pending statistical evidence) suggest there were no strong differences in activation for the three species in dog and human animate areas. Thus, the ROI analysis appears to contradict findings from the binary analysis approach to investigate species preference, but the authors only discuss the results of the latter in support of their narrative for conspecific preference in dogs and do not discuss research from other labs investigating own-species preference.

The authors also unnecessarily exaggerate novelty claims. Animate vs inanimate and own vs other species perceptions have both been investigated before in dogs (and humans), so any claims in that direction seem unsubstantiated, and also not needed, as novelty itself is not a sign of quality; what is novel, and a sign of theoretical advance beyond novelty itself, is, as noted, the conceptual extension and replication of previous work.

Overall, more analyses and appropriate tests are needed to support the conclusions drawn by the authors, as well as a more comprehensive discussion of all findings.

Reviewer #2 (Public review):

Summary:

The manuscript reports an fMRI study looking at whether there is animacy organization in a non-primate mammal, the domestic dog, that is similar to that observed in humans and non-human primates (NHPs). A simple experiment was carried out with four kinds of stimulus videos (dogs, humans, cats, and cars), and univariate contrasts and an RSA searchlight analysis were performed. Previous studies have looked at this question or closely associated questions (e.g. whether there is face selectivity in dogs). The import of the present study is that it looks at multiple types of animate objects, dogs, humans, and cats, and tests whether there was overlapping/similar topography (or magnitude) of responses when these stimuli were compared to the inanimate reference class of cars. The main finding was of some selectivity for animacy, though this was primarily driven by the dog stimuli, which did overlap with the other animate stimulus types, but far less so than in humans.

Strengths:

I believe that this is an interesting study in so far as it builds on other recent work looking at category-selectivity in the domestic dog. Given the limited number of such studies, I think it is a natural step to consider a number of different animate stimuli and look at their overlap. While some of the results were not wholly surprising (e.g. dog brains respond more selectively for dogs than humans or cats), that does not take away from their novelty, such as it is. The findings of this study are useful as a point of comparison with other recent work on the organization of high-level visual function in the brain of the domestic dog.

Weaknesses:

(1) One challenge for all studies like this is a lack of clarity when we say there is organization for "animacy" in the human and NHP brains. The challenge is by no means unique to the present study, but I do think it brings up two more specific topics.

First, one property associated with animate things is "capable of self-movement". While cognitively we know that cars require a driver, and are otherwise inanimate, can we really assume that dogs think of cars in the same way? After all, just think of some dogs that chase cars. If dogs represent moving cars as another kind of self-moving thing, then it is not clear we can say from this study that we have a contrast between animate vs inanimate. This would not mean that there are no real differences in neural organization being found. It was unclear whether all or some of the car videos showed them moving. But if many/most do, then I think this is a concern.

Second, there is quite a lot of potential complexity in the human case that is worth considering when interpreting the results of this study. In the human case, some evidence suggests that animacy may be more of a continuum (Sha et al. 2015), which may reflect taxonomy (Connolly et al. 2012, 2016). However, moving videos seem to be dominated more by signals relevant to threat or predation relative to taxonomy (Nastase et al. 2017). Some evidence suggests that this purported taxonomic organization might be driven by gradation in representing faces and bodies of animals based on their relative similarity to humans (Ritchie et al. 2021). Also, it may be that animacy organization reflects a number of (partially correlated) dimensions (Thorat et al. 2019, Jozwik et al. 2022). One may wonder whether the regions of (partial) overlap in animate responses in the dog brain might have some of these properties as well (or not).

(2) It is stated that previous studies provide evidence that the dog brain shows selectivity to "certain aspects of animacy". One of these already looked at selectivity for dog and human faces and bodies and identified similar regions of activity (Boch et al. 2023). An earlier study by Dilks et al. (2015), not cited in the present work (as far as I can tell), also used dynamic stimuli and did not suffer from the above limitations in choosing inanimate stimuli (e.g. using toy and scene objects for inanimate stimuli). But it only included human faces as the dynamic animate stimulus. So, as far as stimulus design, it seems the import of the present study is that it included a *third* animate stimulus (cats) and that the stimuli were dynamic.

(3) I am concerned that the univariate results, especially those depicted in Figure 3B, include double dipping (Kriegeskorte et al. 2009). The analysis uses the response peak for the A > iA contrast to then look at the magnitude of the D, H, C vs iA contrasts. This means the same data is being used for feature selection and then to estimate the responses. So, the estimates are going to be inflated. For example, the high magnitudes for the three animate stimuli above the inanimate stimuli are going to inherently be inflated by this analysis and cannot be taken at face value. I have the same concern with the selectivity preference results in Figure 3E.

I think the authors have two options here. Either they drop these analyses entirely (so that the total set of analyses really mirrors those in Figure 4), or they modify them to address this concern. I think this could be done in one of two ways. One would be to do a within-subject standard split-half analysis and use one-half of the data for feature selection and the other for magnitude estimation. The other would be to do a between-subject design of some kind, like using one subject for magnitude estimation based on an ROI defined using the data for the other subjects.
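For illustration, here is a minimal sketch in Python of the within-subject split-half scheme described above; the array names, shapes, and the top-N voxel selection rule are all assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

# Illustrative split-half scheme: define the ROI on odd runs, estimate
# response magnitudes on even runs, so that feature selection and
# estimation use independent data. Shapes and names are assumptions.

def split_half_roi_estimates(betas_animate, betas_inanimate, top_n=50):
    """betas_*: (n_runs, n_voxels) condition-vs-baseline estimates."""
    odd, even = slice(0, None, 2), slice(1, None, 2)

    # Feature selection on odd runs only: animate > inanimate contrast.
    contrast_odd = betas_animate[odd].mean(0) - betas_inanimate[odd].mean(0)
    roi = np.argsort(contrast_odd)[-top_n:]  # top-N contrast voxels

    # Magnitude estimation on the held-out even runs.
    animate_mag = betas_animate[even][:, roi].mean()
    inanimate_mag = betas_inanimate[even][:, roi].mean()
    return animate_mag, inanimate_mag
```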

(4) There are two concerns with how the overlap analyses were carried out. First, as typically carried out to look at overlap in humans, the proportion is of overlapping results of the contrasts of interest, e.g., for face and body selectivity overlap (Schwarzlose et al. 2006), hand and tool overlap (Bracci et al. 2012), or more recently, tool and food overlap (Ritchie et al. 2024). There are a number of ways of then calculating the overlap, with their own strengths and weaknesses (see Tarr et al. 2007). Of these, I think the Jaccard index is the most intuitive, which is just the intersection of two sets as a proportion of their union. So, for example, the N of overlapping D > iA and H > iA active voxels is divided by the total number of unique active voxels for the two contrasts. Such an overlap analysis is more standard and interpretable relative to previous findings. I would strongly encourage the authors to carry out such an analysis or use a similar metric of overlap, in place of what they have currently performed (to the extent the analysis makes sense to me).

Second, the results summarized in Figure 3A suggest multiple distinct regions of animacy selectivity. Other studies have also identified similar networks of regions (e.g. Boch et al. 2023). These regions may serve different functions, but the overlap analysis does not tell us whether there is overlap in some of these portions of the cortex and not in others. The overlap is only looked at in a very general sense. There may be more overlap locally in some portions of the cortex and not in others.

(5) Two comments about the RSA analyses. First, I am not quite sure why the authors used HMAX rather than layers of a standardly trained ImageNet deep convolutional neural network. This strikes me also as a missed opportunity since many labs have looked at whether later layers of DNNs trained on object categorization show similar dissimilarity structures as category-selective regions in humans and NHPs. In so far as cross-species comparisons are the motivation here, it would be genuinely interesting to see what would happen if one did a correlation searchlight with the dog brain and layers of a DNN, a la Cichy et al. (2016).
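As one concrete reading of this suggestion, here is a minimal sketch of a model-based correlation searchlight, assuming precomputed DNN-layer activations and per-sphere response patterns (all names and shapes are illustrative, not anyone's published pipeline):

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

# Sketch of the suggested comparison: correlate a DNN layer's
# stimulus-by-stimulus RDM with brain RDMs from a searchlight, in the
# spirit of Cichy et al. (2016). `layer_acts` holds one feature vector
# per stimulus from some ImageNet-trained network layer (assumption);
# `searchlight_patterns` maps sphere centers to (n_stimuli, n_voxels)
# response patterns (assumption).

def rdm(patterns):
    # 1 - Pearson correlation as the dissimilarity, condensed form
    return pdist(patterns, metric="correlation")

def dnn_brain_searchlight(layer_acts, searchlight_patterns):
    model_rdm = rdm(layer_acts)
    corr_map = {}
    for center, patterns in searchlight_patterns.items():
        rho, _ = spearmanr(model_rdm, rdm(patterns))
        corr_map[center] = rho  # model-brain similarity at this sphere
    return corr_map
```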

Second, from the text it is hard to tell what the models for the class- and category-boundary effects were. Are there RDMs that can be depicted here? I am very familiar with RSA searchlight and I found the description of the methods to be rather opaque. The same point about overlap earlier regarding the univariate results also applies to the RSA results. Also, this is again a reason to potentially compare DNN RDMs to both the categorical models and the brains of both species.

(6) There has been emphasis of late on the role of face and body selective regions and social cognition (Pitcher and Ungerleider, 2021; Puce, 2024), and also on whether these regions are more specialized for representing whole bodies/persons (Hu et al. 2020; Taubert et al. 2022). It may be that the supposed animacy organization is more about how we socialize and interact with other organisms than anything about animacy as such (see again the earlier comments about animacy, taxonomy, and threat/predation). The result, of a great deal of selectivity for dogs, some for humans, and little for cats, seems to readily make sense if we assume it is driven by the social value of the three animate objects that are presented. This might be something worth reflecting on in relation to the present findings.

Author response:

Public Reviews:

Reviewer #1 (Public review):

Summary

Farkas and colleagues conducted a comparative neuroimaging study with domestic dogs and humans to explore whether social perception in both species is underpinned by an analogous distinction between animate and inanimate entities, an established functional organizing principle in the primate and human brain. Presenting domestic dogs and humans with clips of three animate classes (dogs, humans, cats) and one inanimate control (cars), the authors also set out to compare how dogs and humans perceive their own vs other species. Both research questions have been previously studied in dogs, but the authors used novel dynamic stimuli and added animate and inanimate classes, which have not been investigated before (i.e., cats and cars). Combining univariate and multivariate analysis approaches, they identified functionally analogous areas in the dog and human occipito-temporal cortex involved in the perception of animate entities, largely replicating previous observations. This further emphasizes a potentially shared functional organizing principle of social perception in the two species. The authors also describe between-species divergences in the perception of the different animate classes, arguing for a less generalized perception of animate entities in dogs, but this conclusion is not convincingly supported by the applied analyses and reported findings.

Strengths

Domestic dogs represent a compelling model species to study the neural bases of social perception and potentially shared functional organizing principles with humans and primates. The field of comparative neuroimaging with dogs is still young, with a growing but still small number of studies, and the present study exemplifies the reproducibility of previous research. Using dynamic instead of static stimuli and adding new stimulus classes, Farkas and colleagues successfully replicated and expanded previous findings, adding to the growing body of evidence that social perception is underpinned by a shared functional organizing principle in the dog and human occipito-temporal cortex.

Weaknesses

The study design is imbalanced, with only one category of inanimate objects vs. three animate entities. Moreover, based on the example videos, it appears that the animate stimuli also differed in the complexity of the content from the car stimuli, with often multiple agents interacting or performing goal-directed actions. Furthermore, while dogs are familiar with cars, they are definitely of lower relevance and interest to them than the animate stimuli. Thus, to a certain extent, the results might also reflect differences in attention towards/salience of the stimuli.

We agree with the Reviewer and were aware that using only one class of inanimate objects but three classes of animate entities, along with the differences in complexity and relevance between the animate and the inanimate stimuli, may have led to differences in attention towards the two conditions and thus introduced a confound. We are revising the related limitation in the discussion to acknowledge this and to emphasize why we believe these differences do not compromise our main findings.

The methods section and rationale behind the chosen approaches were often difficult to follow and lacked a lot of information, which makes it difficult to judge the evidence and the drawn conclusions, and weakens the potential for reproducibility of this work. For example, for many preprocessing and analysis steps, parameters or descriptions of the tools used were missing; no information on the anatomical masks and atlas used in humans was provided; and it is often not clear whether the authors are referring to the univariate or the multivariate analysis.

We acknowledge the concerns regarding the clarity and completeness of the methods section and are significantly revising the descriptions of the methods. Of note, in humans, the Harvard-Oxford Cortical Structural Atlas (Frazier et al., 2005; Makris et al., 2006; Desikan et al., 2006; Goldstein et al., 2007), implemented within the FSL software package, was used for anatomical masks, while the Automated Anatomical Labeling atlas (Tzourio-Mazoyer et al., 2002) was used for assigning labels.
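For readers who want to retrieve these atlases programmatically, both are also distributed via nilearn; the sketch below is illustrative only (the authors used FSL's own distribution, and the atlas version and threshold chosen here are assumptions, not the paper's settings):

```python
from nilearn import datasets

# Illustrative fetch of the two atlases named above. The specific
# Harvard-Oxford variant and threshold are example choices.
harvard_oxford = datasets.fetch_atlas_harvard_oxford("cort-maxprob-thr25-2mm")
aal = datasets.fetch_atlas_aal()

print(harvard_oxford.labels[:5])  # region names usable as anatomical masks
print(aal.labels[:5])             # labels of the kind used for naming peaks
```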

In regard to the chosen approaches and rationale, the authors generally binarize a lot of rich information. Instead of directly testing potential differences in the neural representations of the different animate entities, they binarize dissimilarity maps for, e.g., animate entity > inanimate cars and then calculate the overlap between the maps.

We thank the Reviewer for these comments and ideas. We also appreciate the second Reviewer for their related concerns and suggestions about the overlap calculation. Since the neural processing of different animate entities in the dog brain is largely unexplored, in some of our analyses we aimed to provide a straightforward and directly comparable characterization of animacy perception in the two species. We believe that a measure of how overlapping the neural representations of different animate classes are in the dog vs. the human visual cortex is a simple but meaningful and insightful characterization of how animacy perception is structured in the two species, despite the lack of spatial detail. Our decision to use binarization was based on these considerations. In response to this Reviewer's request for richer information, our revised manuscript will present more details and additional non-binarized calculations. Specifically, we will use non-binarized data to present response profiles for a broad, anatomically defined set of regions that other works have related to visual functions, thus showing where there are significant differences and overlaps between the neural responses to the three animate classes in each species.

The comparison of the overlap of these three maps between species is also problematic, considering that the human RSA was restricted to the occipital and temporal cortex (there is no information on how they defined it) vs. whole-brain in dogs.

We thank this Reviewer for raising yet another relevant point about overlap calculation. We note that the overlap calculation for univariate results used the visually responsive cortex in both dogs and humans. The decision to restrict the multivariate analysis to the occipital and temporal lobes in humans, where the visual areas are, was to reduce computational load. Since RSA in dogs yielded significant voxels almost exclusively in the occipital and temporal cortices, we believe this decision did not introduce major bias in our results. This concern will also be discussed in our revised submission.

Of note, in the category- and class-boundary test, as for the other multivariate tests, the occipital and temporal cortex of humans was delineated based on the MNI atlas.

Considering that the stimuli do differ based on low-level visual properties (just not significantly within a run), the RSA would also allow the authors to directly test whether some of the (dis)similarities might be driven by low-level visual features, as they did, e.g., with the early visual cortex model. I do think RSA is generally an excellent choice to investigate the neural representation of animate (and inanimate) stimuli, but the authors should apply it more appropriately and use its full potential.

We thank the Reviewer for this suggestion. While this study did not aim to investigate the correlation between low-level visual features and animacy, the data is available, and the suggested analysis can be conducted in the future. This issue will also be discussed in our revised submission.

The authors localized some of the "animate areas" also with the early visual cortex model (e.g. ectomarginal gyrus, mid suprasylvian); in humans, the model only included the known early visual cortex. What does this mean for the animate areas in dogs?

We thank the Reviewer for raising this point. Although the labels are the same, both EMG and mSSG are relatively large gyri, and the clusters revealed by each of the two analyses hardly overlap, with peak coordinates more than 12 mm apart for R EMG, and in different hemispheres for mSSG (but more than 11 mm apart even if projected on the same hemisphere). We will detail the differences and the overlaps in the revised submission.

The results section also lacks information and statistical evidence; for example, for the univariate region-of-interest (ROI) analysis (called response profiles) comparing activation strength towards each stimulus type, it is not reported whether comparisons were significant or not, although the authors state they conducted t-tests. The authors describe that they created spheres on all peaks reported for the contrast animate > inanimate, but they only report results for the mid suprasylvian and occipital gyrus (e.g. the caudal suprasylvian gyrus is missing).

We thank this Reviewer for catching these errors. The missing statistics will be provided in the revised manuscript. Also, we mistakenly labeled the peak in the caudal suprasylvian gyrus as occipital gyrus in the figure depicting the response profiles. This will also be corrected.

Furthermore, considering that the ROIs were chosen based on the contrast animate > inanimate stimuli, activation strength should only be compared between animate entities (i.e., dogs, humans, cats), while cars should not be reported (as this would be double dipping, after selecting voxels showing lower activation for that category).

We thank both Reviewers for raising this relevant point about potential double dipping. The aim of this analysis was to describe the relationship between the neural responses elicited by the three animate stimulus classes, to show that the animacy-sensitive peaks are not the result of a greater response to a single animate class alone. We conducted t-tests only to assess significant differences between these three animate conditions, and no statistics were performed or reported for any animate class vs. inanimate comparison in these ROIs. In addition to providing the missing t-tests (comparing animate classes), we will present response profiles and corresponding statistics for a broad set of additional, independent ROIs, defined either anatomically or functionally by other studies, in the revised version.
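To make the described inference concrete, here is a minimal sketch of pairwise t-tests restricted to the three animate conditions, assuming per-subject mean ROI betas; the Bonferroni correction shown is an illustrative choice, not necessarily the authors' procedure:

```python
import numpy as np
from scipy.stats import ttest_rel

# Sketch: paired t-tests among the three animate conditions only
# (dog, human, cat), leaving the inanimate condition out of inference.
# `roi_means` is an assumed (n_subjects, 3) array of mean ROI betas,
# columns ordered dog, human, cat.

def animate_pairwise_tests(roi_means):
    pairs = [(0, 1, "dog vs human"), (0, 2, "dog vs cat"), (1, 2, "human vs cat")]
    results = {}
    for i, j, name in pairs:
        t, p = ttest_rel(roi_means[:, i], roi_means[:, j])
        results[name] = (t, p * len(pairs))  # simple Bonferroni correction
    return results
```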

The descriptive data in Figure 3B (pending statistical evidence) suggest there were no strong differences in activation for the three species in dog and human animate areas. Thus, the ROI analysis appears to contradict findings from the binary analysis approach to investigate species preference, but the authors only discuss the results of the latter in support of their narrative for conspecific preference in dogs and do not discuss research from other labs investigating own-species preference.

Studying conspecific preference was not the primary aim of this study; we only used our data to characterize the animate-sensitive regions from this aspect. The species-preference test provides an overall characterization of the entire animate-sensitive region, revealing a higher number of voxels with a maximal response to conspecifics than to other stimuli in dogs (and a similar tendency in humans), confirming previous evidence of neural conspecific preference in visual areas in both species. The response profiles presented so far describe only the ROIs around the main animate-sensitive peaks and, as the Reviewer points out, in most cases reveal no significant conspecific bias. We believe there is no contradiction here: the entire animate-sensitive region may be weakly but still conspecific-preferring, whereas the main animate-sensitive peaks are not; the centers of conspecific preference may be located elsewhere in the visual cortex and may be supported by mechanisms other than animacy sensitivity. In the revised manuscript, we will elaborate more on this. Additionally, in response to other comments, and for a better and more coherent characterization of species preference (and animacy sensitivity) across the visual cortex, we will present response profiles for other, independently defined regions and explore conspecific sensitivity in those additional regions as well. Furthermore, we will discuss the related own-species preference literature in greater detail.

The authors also unnecessarily exaggerate novelty claims. Animate vs inanimate and own vs other species perceptions have both been investigated before in dogs (and humans), so any claims in that direction seem unsubstantiated, and also not needed, as novelty itself is not a sign of quality; what is novel, and a sign of theoretical advance beyond novelty itself, is, as noted, the conceptual extension and replication of previous work.

We agree with this Reviewer regarding novelty claims in general, and we confirm that we had no intention to overstate the uniqueness of our results. We also did not mean to imply that this work would be the first on animacy perception in dogs, which it obviously is not. But we understand that we could have been more explicit in presenting our work as a conceptual extension and replication of previous works, and we are revising the wording of the discussion in this respect.

Overall, more analyses and appropriate tests are needed to support the conclusions drawn by the authors, as well as a more comprehensive discussion of all findings.

We are thankful for all comments. We will revise the methods section to provide sufficient detail and ensure replicability; conduct additional analyses as detailed above; and provide a more comprehensive discussion of all findings.

Reviewer #2 (Public review):

Summary:

The manuscript reports an fMRI study looking at whether there is animacy organization in a non-primate mammal, the domestic dog, that is similar to that observed in humans and non-human primates (NHPs). A simple experiment was carried out with four kinds of stimulus videos (dogs, humans, cats, and cars), and univariate contrasts and an RSA searchlight analysis were performed. Previous studies have looked at this question or closely associated questions (e.g. whether there is face selectivity in dogs). The import of the present study is that it looks at multiple types of animate objects, dogs, humans, and cats, and tests whether there was overlapping/similar topography (or magnitude) of responses when these stimuli were compared to the inanimate reference class of cars. The main finding was of some selectivity for animacy, though this was primarily driven by the dog stimuli, which did overlap with the other animate stimulus types, but far less so than in humans.

Strengths:

I believe that this is an interesting study in so far as it builds on other recent work looking at category-selectivity in the domestic dog. Given the limited number of such studies, I think it is a natural step to consider a number of different animate stimuli and look at their overlap. While some of the results were not wholly surprising (e.g. dog brains respond more selectively for dogs than humans or cats), that does not take away from their novelty, such as it is. The findings of this study are useful as a point of comparison with other recent work on the organization of high-level visual function in the brain of the domestic dog.

Weaknesses:

(1) One challenge for all studies like this is a lack of clarity when we say there is organization for "animacy" in the human and NHP brains. The challenge is by no means unique to the present study, but I do think it brings up two more specific topics.

First, one property associated with animate things is "capable of self-movement". While cognitively we know that cars require a driver, and are otherwise inanimate, can we really assume that dogs think of cars in the same way? After all, just think of some dogs that chase cars. If dogs represent moving cars as another kind of self-moving thing, then it is not clear we can say from this study that we have a contrast between animate vs inanimate. This would not mean that there are no real differences in neural organization being found.

It was unclear whether all or some of the car videos showed them moving. But if many/most do, then I think this is a concern.

We thank this Reviewer for raising this relevant point about the potential animacy of cars for dogs and its implication for our results. Of note, two-thirds of our car stimuli showed a car moving (slow, accelerating, or fast). We acknowledge that these stimuli contained motion-based animacy cues, and in this regard, there was no clear difference between our animate and inanimate conditions, and possibly between some of the representations they elicited. However, our animate and inanimate stimuli differed in other key factors accounting for animacy organization, such as visual features including the presence of faces, bodies, body parts, postures, and certain aspects of biological motion. So we believe that this limitation does not compromise our main conclusions. We will elaborate on this point further in the revised discussion, also considering how dogs' differential behavioral responses to cars and animate entities may provide additional insights in this regard.

Second, there is quite a lot of potential complexity in the human case that is worth considering when interpreting the results of this study. In the human case, some evidence suggests that animacy may be more of a continuum (Sha et al. 2015), which may reflect taxonomy (Connolly et al. 2012, 2016). However, moving videos seem to be dominated more by signals relevant to threat or predation relative to taxonomy (Nastase et al. 2017). Some evidence suggests that this purported taxonomic organization might be driven by gradation in representing faces and bodies of animals based on their relative similarity to humans (Ritchie et al. 2021). Also, it may be that animacy organization reflects a number of (partially correlated) dimensions (Thorat et al. 2019, Jozwik et al. 2022). One may wonder whether the regions of (partial) overlap in animate responses in the dog brain might have some of these properties as well (or not).

We agree that it would be interesting to dissect which animacy-related factor(s) contribute to the observed animacy sensitivity in different regions, and although this was not the original aim of the study, we agree that we could have made better use of the variation in our stimuli to discuss this aspect. Specifically, some animacy features are shared by all three animate stimulus classes, namely the presence of biological motion, faces, and bodies. In contrast, the animate classes differed in other aspects, for example in how dogs perceived dogs, humans, and cats as social agents and in their potential behavioral goals towards them. It can therefore be argued that regions with two- and especially three-way overlapping activations are more probably involved in processing biological motion, face, and body aspects, and non-overlapping regions in processing the social agency- and behavioural goal-related aspects. In line with this, the shared animacy features are indeed ones that have been reported to be central in human animacy representation and that may have made the overlaps in human brain responses greater. We will provide a more detailed discussion of the results from this viewpoint in the revised manuscript.

(2) It is stated that previous studies provide evidence that the dog brain shows selectivity to "certain aspects of animacy". One of these already looked at selectivity for dog and human faces and bodies and identified similar regions of activity (Boch et al. 2023). An earlier study by Dilks et al. (2015), not cited in the present work (as far as I can tell), also used dynamic stimuli and did not suffer from the above limitations in choosing inanimate stimuli (e.g. using toy and scene objects for inanimate stimuli). But it only included human faces as the dynamic animate stimulus. So, as far as stimulus design, it seems the import of the present study is that it included a *third* animate stimulus (cats) and that the stimuli were dynamic.

We agree with this Reviewer that the findings of Dilks et al. (2015) are relevant to our study and have therefore cited them. However, the citation itself was imprecise and will be corrected in the revised manuscript.

(3) I am concerned that the univariate results, especially those depicted in Figure 3B, include double dipping (Kriegeskorte et al. 2009). The analysis uses the response peak for the A > iA contrast to then look at the magnitude of the D, H, C vs iA contrasts. This means the same data is being used for feature selection and then to estimate the responses. So, the estimates are going to be inflated. For example, the high magnitudes for the three animate stimuli above the inanimate stimuli are going to inherently be inflated by this analysis and cannot be taken at face value. I have the same concern with the selectivity preference results in Figure 3E.

I think the authors have two options here. Either they drop these analyses entirely (so that the total set of analyses really mirrors those in Figure 4), or they modify them to address this concern. I think this could be done in one of two ways. One would be to do a within-subject standard split-half analysis and use one-half of the data for feature selection and the other for magnitude estimation. The other would be to do a between-subject design of some kind, like using one subject for magnitude estimation based on an ROI defined using the data for the other subjects.

We thank both Reviewers again for raising this important point about potential double dipping. We also thank this Reviewer for the specific suggestions for split-half analyses; we agree that, had our original analyses involved double dipping, such a modification would be necessary. But, as we explained in our response above, this was not the case. Indeed, whereas we do visualize all four conditions in Fig. 3B, we only conducted t-tests to assess differences between the three animate conditions (the corresponding statistics were missing from the original manuscript but will be added during revision). So, importantly, we did not evaluate the magnitude of the D, H, C vs iA contrasts in any of the ROIs defined by animate-sensitive peaks; therefore, we believe that these analyses do not involve double dipping. This holds for the species preference results in Fig. 3E as well. We will clarify this in the revised manuscript. Of note, in response to a request by the other reviewer and to provide richer information about the univariate results, we will also provide response profiles and corresponding statistics for a broad set of additional ROIs, defined either anatomically or functionally by other studies (e.g., Boch et al., 2023).

(4) There are two concerns with how the overlap analyses were carried out. First, as typically carried out to look at overlap in humans, the proportion is of overlapping results of the contrasts of interest, e.g., for face and body selectivity overlap (Schwarzlose et al. 2006), hand and tool overlap (Bracci et al. 2012), or more recently, tool and food overlap (Ritchie et al. 2024). There are a number of ways of then calculating the overlap, with their own strengths and weaknesses (see Tarr et al. 2007). Of these, I think the Jaccard index is the most intuitive, which is just the intersection of two sets as a proportion of their union. So, for example, the N of overlapping D > iA and H > iA active voxels is divided by the total number of unique active voxels for the two contrasts. Such an overlap analysis is more standard and interpretable relative to previous findings. I would strongly encourage the authors to carry out such an analysis or use a similar metric of overlap, in place of what they have currently performed (to the extent the analysis makes sense to me).

We agree with this Reviewer that the Jaccard index is an intuitive and straightforward overlap measure. Importantly, our overlap calculations already use this measure (and a very similar one), but we acknowledge that this was not clear from the original description. Specifically, for the multivariate overlap test, we used the Jaccard index exactly as described by this Reviewer. For the univariate overlap test, we used a very similar measure, with the only difference that, to define the reference search space, the intersection of the specific animate > inanimate contrasts was divided by the total number of voxels in the animacy-sensitive areas (which is highly similar to the union of the specific animate > inanimate contrasts). In the revised submission we will provide a more detailed explanation of the overlap calculations, making it explicit that we used the Jaccard index (and a variant of it).
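The two measures described here can be stated compactly; below is a minimal sketch for binarized (boolean) voxel maps, with `animate_sensitive` standing in for the thresholded animate > inanimate search space (all names are illustrative):

```python
import numpy as np

# Overlap measures for binarized contrast maps (boolean arrays over
# voxels), matching the two variants described in the response.

def jaccard(a, b):
    # intersection over union - the measure used for the multivariate test
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def univariate_overlap(a, b, animate_sensitive):
    # variant: intersection over the animacy-sensitive search space
    # instead of the union of the two specific contrasts
    return np.logical_and(a, b).sum() / animate_sensitive.sum()
```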

Second, the results summarized in Figure 3A suggest multiple distinct regions of animacy selectivity. Other studies have also identified similar networks of regions (e.g. Boch et al. 2023). These regions may serve different functions, but the overlap analysis does not tell us whether there is overlap in some of these portions of the cortex and not in others. The overlap is only looked at in a very general sense. There may be more overlap locally in some portions of the cortex and not in others.

We thank this Reviewer for this comment; we agree that adding spatial specificity to these results will improve the manuscript. Therefore, during revision, we will assess the anatomical distribution of the overlap results, making use of a broad set of ROIs potentially relevant for animacy perception, defined either anatomically or functionally by other studies (e.g., Boch et al., 2023 for dogs).

(5) Two comments about the RSA analyses. First, I am not quite sure why the authors used HMAX rather than layers of a standardly trained ImageNet deep convolutional neural network. This strikes me also as a missed opportunity since many labs have looked at whether later layers of DNNs trained on object categorization show similar dissimilarity structures as category-selective regions in humans and NHPs. In so far as cross-species comparisons are the motivation here, it would be genuinely interesting to see what would happen if one did a correlation searchlight with the dog brain and layers of a DNN, a la Cichy et al. (2016).

We thank the Reviewer for this comment and suggestion. At the start of the project, HMAX was the most feasible model to implement given our time and expertise constraints. Additionally, the biologically motivated HMAX was also an appropriate choice, as it simulates the selective tuning of neurons in the primary visual cortex (V1) of primates, which is considered homologous with V1 in carnivores (Boch et al., 2024).
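For orientation, here is a minimal sketch of the first two HMAX stages (in the spirit of Riesenhuber and Poggio's model): S1 convolves the image with orientation-tuned Gabor filters, simulating V1 simple cells, and C1 max-pools locally over space for complex-cell-like tolerance. All filter parameters below are illustrative, not the settings used in the study:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter

def gabor(size=11, wavelength=6.0, theta=0.0, sigma=3.0, gamma=0.3):
    # Gabor patch: oriented sinusoidal carrier under a Gaussian envelope
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def s1_c1(image, n_orientations=4, pool=8):
    # S1: rectified Gabor filter responses; C1: local spatial max pooling
    c1_maps = []
    for k in range(n_orientations):
        s1 = np.abs(fftconvolve(image, gabor(theta=k * np.pi / n_orientations),
                                mode="same"))
        c1_maps.append(maximum_filter(s1, size=pool)[::pool, ::pool])
    return np.stack(c1_maps)  # (orientations, h/pool, w/pool) feature maps
```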

Although we agree that DNNs have recently been extensively and successfully used to explore object representations and could provide valuable additional insights into dogs' visual perception as well, we believe that adding a large set of additional analyses would stretch the scope of this manuscript, disproportionately shifting its focus from our original research question. Also, our experiment, designed with a different, more specific aim in mind, did not provide a large enough variety of animate stimuli for a general comparison of the cortical hierarchy underlying object representations in dog and human brains, and thus our data are not an optimal starting point for such extensive explorations. Having said that, we are thankful to this Reviewer for the idea and will consider using a DNN to uncover dogs' visual cortical hierarchy in future studies with a better-suited stimulus set. Furthermore, in accordance with eLife's data-sharing policies, we will make the current dataset publicly available so that further hypotheses and models can be tested.

Second, from the text it is hard to tell what the models for the class- and category-boundary effects were. Are there RDMs that can be depicted here? I am very familiar with RSA searchlight and I found the description of the methods to be rather opaque. The same point about overlap earlier regarding the univariate results also applies to the RSA results. Also, this is again a reason to potentially compare DNN RDMs to both the categorical models and the brains of both species.

In the revised manuscript we will provide a more detailed explanation of the methods used to determine class- and category-boundary effects. In short, the analysis we performed here followed Kriegeskorte et al. (2008), and the searchlight test looked for regions in which between-class/category differences were greater than within-class/category differences. We will also include RDMs. Additionally, we will provide anatomical details for the overlap results for RSA, just as for the univariate results, using the same independently defined broad set of ROIs, defined either anatomically or functionally by other studies (e.g., Boch et al., 2023 for dogs).
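A minimal sketch of such a boundary-effect statistic, assuming a square neural RDM per searchlight sphere and a label vector assigning stimuli to classes or categories (names and shapes are assumptions):

```python
import numpy as np

# Boundary-effect test in the spirit of Kriegeskorte et al. (2008):
# within a searchlight sphere, compare mean between-category to mean
# within-category dissimilarities in the neural RDM. `labels` assigns
# each stimulus to a category (e.g., animate/inanimate) or class
# (dog/human/cat/car).

def boundary_effect(neural_rdm, labels):
    """neural_rdm: (n_stimuli, n_stimuli) square dissimilarity matrix."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    upper = np.triu(np.ones_like(same, dtype=bool), k=1)  # unique pairs only
    within = neural_rdm[same & upper].mean()
    between = neural_rdm[~same & upper].mean()
    return between - within  # > 0 indicates a category-boundary effect
```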

(6) There has been emphasis of late on the role of face and body selective regions and social cognition (Pitcher and Ungerleider, 2021; Puce, 2024), and also on whether these regions are more specialized for representing whole bodies/persons (Hu et al. 2020; Taubert et al. 2022). It may be that the supposed animacy organization is more about how we socialize and interact with other organisms than anything about animacy as such (see again the earlier comments about animacy, taxonomy, and threat/predation). The result, of a great deal of selectivity for dogs, some for humans, and little for cats, seems to readily make sense if we assume it is driven by the social value of the three animate objects that are presented. This might be something worth reflecting on in relation to the present findings.

We thank the Reviewer for this suggestion. The original manuscript already discussed how motion-related animacy cues involved in social cognition may explain why the animacy-sensitive regions reported in our study extend beyond those reported previously, as well as the role of biological motion in the observed across-species differences. This discussion of the role of visual diagnostic features and of features involved in perceiving social agents will be extended in the revised discussion, also in response to the first comment of this Reviewer, to reflect on how social cognition-related animacy cues may have affected our results in dogs.
