• Figure 1.
    The activity of neurons in the top-left panel gradually changes from left to right, whereas changes are more abrupt in the top-middle and top-right panels.

    Each square in the grid represents a voxel, which summates activity within its frame as shown in the bottom panels. For the smoother pattern of neural activity, the summation within each voxel (bottom left) captures the changing gradient from left to right depicted in the top-left panel, whereas for the less smooth representation in the middle panel all voxels sum to the same orange value (bottom middle). Thus, differences in the activation of yellow vs. red neurons are detectable using fMRI in the smooth case, but not in the less smooth case, because the voxel response is homogeneous. Improving spatial resolution (right panels) by reducing voxel size overcomes these sampling limits, resulting in voxel inhomogeneity (bottom-right panel).

    DOI: http://dx.doi.org/10.7554/eLife.21397.003
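    The voxel-sampling argument in Figure 1 can be sketched numerically. The following is a minimal toy illustration, not the paper's code: the grid sizes, activity values, and the `voxel_sum` helper are assumptions made for the example.

```python
import numpy as np

def voxel_sum(neural_grid, voxel_size):
    """Sum neural activity within non-overlapping voxel-sized frames."""
    n = neural_grid.shape[0]
    v = voxel_size
    return neural_grid.reshape(n // v, v, n // v, v).sum(axis=(1, 3))

# Smooth pattern: activity rises gradually from left to right.
smooth = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))

# Abrupt pattern: "yellow" and "red" neurons alternate within every frame.
abrupt = np.indices((8, 8)).sum(axis=0) % 2  # checkerboard of 0s and 1s

smooth_voxels = voxel_sum(smooth, 4)  # a 2x2 grid of voxels
abrupt_voxels = voxel_sum(abrupt, 4)

# The smooth gradient survives voxel summation (left voxel < right voxel)...
print(smooth_voxels[0, 0] < smooth_voxels[0, 1])
# ...but every voxel over the abrupt pattern sums to the same value.
print(np.all(abrupt_voxels == abrupt_voxels[0, 0]))
```

    Shrinking `voxel_size` from 4 to 2 or 1 plays the role of the improved spatial resolution in the right panels: the checkerboard then produces inhomogeneous voxel responses.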

    Figure 2.
    As models become more complex with added layers, similarity structure becomes harder to recover, which might parallel function along the ventral stream.

    (A) For the artificial neural network coding schemes, similarity to the prototype falls off with increasing distortion (i.e., noise). The models, numbered 1–11, are (1) vector space coding, (2) gain control coding, (3) matrix multiplication coding, (4) perceptron coding, (5) 2-layer network, (6) 3-layer network, (7) 4-layer network, (8) 5-layer network, (9) 6-layer network, (10) 7-layer network, and (11) 8-layer network. The darker a model's curve, the simpler the model and the better it preserves similarity structure under fMRI. (B) A deep artificial neural network and the ventral stream can be seen as performing related computations. As in our simulation results, neural similarity should be more difficult to recover in the more advanced layers.

    DOI: http://dx.doi.org/10.7554/eLife.21397.005
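    The fall-off in panel A can be sketched with a toy simulation. This is a hedged stand-in for the paper's models, not their actual code: the dimensionality, noise level, and weight gain are assumptions (a gain above 1 simply puts tanh networks in a regime where input correlations decay with depth).

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 200
prototype = rng.standard_normal(dim)
# 20 distorted copies of the prototype (moderate additive noise)
distortions = prototype + 0.5 * rng.standard_normal((20, dim))

def forward(x, weights):
    for W in weights:
        x = np.tanh(W @ x)  # each layer: matrix multiplication plus tanh
    return x

def mean_similarity(depth):
    # one fixed random network per depth; gain 3 keeps correlations decaying
    weights = [3.0 / np.sqrt(dim) * rng.standard_normal((dim, dim))
               for _ in range(depth)]
    p = forward(prototype, weights)
    return np.mean([np.corrcoef(p, forward(d, weights))[0, 1]
                    for d in distortions])

# Similarity to the prototype is harder to recover in the deeper network.
shallow, deep = mean_similarity(1), mean_similarity(8)
```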

    Figure 3.
    The effect of matrix multiplication followed by the tanh function on the input stimulus.

    Both the output of the matrix multiplication and the outcome of applying a non-linearity to that output are shown. In this example, functional smoothness is preserved whereas super-voxel smoothness is not. The result of applying the non-linearity can serve as the input to the next layer of a multi-layer network.

    DOI: http://dx.doi.org/10.7554/eLife.21397.006

    Figure 4.
    Similarity structure becomes more difficult to recover in the more advanced layers of the DLN.

    (A) The similarity structure in a middle layer of a DLN, Inception-v3 GoogLeNet. The mammals (lions and tigers) and birds (robins and partridges) correlate, forming a high-level domain and rendering the upper-left quadrant a darker shade of red, whereas the vehicles (sportscars and mopeds) and musical instruments (guitars and banjos) form two separate high-level categories. (B) In contrast, at a later layer in this network, the similarity space shows high within-category correlations and weakened correlations between categories. While some structure between categories is preserved, mopeds are no more similar to sportscars than they are to robins.

    DOI: http://dx.doi.org/10.7554/eLife.21397.007
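    A similarity matrix of this kind is simply the pairwise correlation of activation vectors, one per item. The sketch below uses random stand-ins for the Inception-v3 features; the shared-component construction is an assumption made to mimic the domain structure in panel A.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 100
# Each item's (hypothetical) activation vector is a component shared with
# its domain plus an item-specific component.
domains = {"animals": ["lion", "tiger", "robin", "partridge"],
           "artefacts": ["sportscar", "moped", "guitar", "banjo"]}
reps, names = [], []
for domain, items in domains.items():
    shared = rng.standard_normal(dim)
    for item in items:
        reps.append(shared + rng.standard_normal(dim))
        names.append(item)

# 8x8 similarity matrix of pairwise Pearson correlations, as in Figure 4
rsm = np.corrcoef(np.stack(reps))

# Items sharing a domain correlate; items across domains do not.
lion, tiger, moped = (names.index(n) for n in ("lion", "tiger", "moped"))
print(rsm[lion, tiger] > rsm[lion, moped])
```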

  • Table 1.

    Design matrix for a 2³ full factorial design.

    DOI: http://dx.doi.org/10.7554/eLife.21397.004
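    A 2³ full factorial design enumerates every combination of three two-level factors, giving eight runs. A minimal sketch using the common −1/+1 coding (the table's exact coding may differ):

```python
from itertools import product

# Rows of the design matrix: all combinations of three two-level factors.
design = list(product((-1, 1), repeat=3))
for row in design:
    print(row)
# 8 runs, from (-1, -1, -1) through (1, 1, 1)
```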