Humans perform tasks involving the manipulation of inputs regardless of how these signals are perceived by the brain, thanks to representations that are invariant to the stimulus modality. In this paper, we present modality-agnostic decoders that leverage such modality-invariant representations to predict which stimulus a subject is perceiving, irrespective of the modality in which the stimulus is presented. Training these modality-agnostic decoders is made possible by our new large-scale fMRI dataset SemReps-8K, released publicly along with this paper. It comprises recordings of six subjects viewing both images and short text descriptions of those images, as well as conditions in which the subjects imagined visual scenes. We find that modality-agnostic decoders can perform as well as modality-specific decoders and even outperform them when decoding captions and mental imagery. Furthermore, a searchlight analysis revealed that large areas of the brain contain modality-invariant representations. These areas are also particularly suitable for decoding visual scenes from the mental imagery condition.
Keywords: decoding, fMRI, human, imagery, modality-invariant, neuroscience, searchlight
eLife · Journal Article · PMID: 41995708