Matches in SemOpenAlex for { <https://semopenalex.org/work/W4205656806> ?p ?o ?g. }
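The listing below can be retrieved programmatically with a plain SPARQL 1.1 Protocol request. The sketch assumes SemOpenAlex exposes its public SPARQL endpoint at https://semopenalex.org/sparql (an assumption; check the current SemOpenAlex documentation) and, for simplicity, drops the graph variable ?g from the quad pattern shown above, since all matches here fall in the default graph.

```python
# Minimal sketch: fetch all predicate/object pairs for this work from SemOpenAlex.
# Assumption: the public SPARQL endpoint is https://semopenalex.org/sparql.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL
WORK = "https://semopenalex.org/work/W4205656806"

query = f"""
SELECT ?p ?o WHERE {{
  <{WORK}> ?p ?o .
}}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    timeout=60,
)
resp.raise_for_status()

# Print each match as "predicate object", mirroring the listing below.
for row in resp.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])
```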
- W4205656806 abstract "The sensory cortex is characterized by general organizational principles such as topography and hierarchy. However, measured brain activity given identical input exhibits substantially different patterns across individuals. Although anatomical and functional alignment methods have been proposed in functional magnetic resonance imaging (fMRI) studies, it remains unclear whether and how hierarchical and fine-grained representations can be converted between individuals while preserving the encoded perceptual content. In this study, we trained a functional alignment method called a neural code converter, which predicts a target subject’s brain activity pattern from a source subject given the same stimulus, and analyzed the converted patterns by decoding hierarchical visual features and reconstructing perceived images. The converters were trained on fMRI responses to identical sets of natural images presented to pairs of individuals, using voxels in the visual cortex covering V1 through the ventral object areas, without explicit labels of the visual areas. We decoded the converted brain activity patterns into the hierarchical visual features of a deep neural network using decoders pre-trained on the target subject and then reconstructed images via the decoded features. Without explicit information about the visual cortical hierarchy, the converters automatically learned the correspondence between visual areas at the same level. Deep neural network feature decoding at each layer showed higher decoding accuracies from corresponding levels of visual areas, indicating that hierarchical representations were preserved after conversion. The visual images were reconstructed with recognizable silhouettes of objects even with relatively small amounts of data for converter training. The decoders trained on pooled data from multiple individuals through conversions led to a slight improvement over those trained on a single individual. These results demonstrate that hierarchical and fine-grained representations can be converted by functional alignment, while preserving sufficient visual information to enable inter-individual visual image reconstruction. Highlights ● Neural code converters convert brain activity patterns across individuals with moderate conversion accuracy and learn reasonable correspondences between visual areas at the same level across individuals. ● The converted brain activity patterns can be decoded into hierarchical DNN features to reconstruct visual images, even though the converter is trained on a small number of data samples. ● The information about hierarchical and fine-scale visual features that enables visual image reconstruction is preserved after functional alignment." @default.
- W4205656806 created "2022-01-26" @default.
- W4205656806 creator A5003716190 @default.
- W4205656806 creator A5029586167 @default.
- W4205656806 creator A5039116747 @default.
- W4205656806 creator A5078484869 @default.
- W4205656806 date "2022-01-02" @default.
- W4205656806 modified "2023-09-26" @default.
- W4205656806 title "Inter-individual deep image reconstruction" @default.
- W4205656806 cites W1619178479 @default.
- W4205656806 cites W1687468892 @default.
- W4205656806 cites W1715013381 @default.
- W4205656806 cites W1915485278 @default.
- W4205656806 cites W1968467469 @default.
- W4205656806 cites W1970928383 @default.
- W4205656806 cites W1973776237 @default.
- W4205656806 cites W1975502436 @default.
- W4205656806 cites W1976193721 @default.
- W4205656806 cites W1982250103 @default.
- W4205656806 cites W2010326750 @default.
- W4205656806 cites W2020044743 @default.
- W4205656806 cites W2025009638 @default.
- W4205656806 cites W2037926997 @default.
- W4205656806 cites W2051434435 @default.
- W4205656806 cites W2052644075 @default.
- W4205656806 cites W2057069782 @default.
- W4205656806 cites W2058616551 @default.
- W4205656806 cites W2067036998 @default.
- W4205656806 cites W2093793583 @default.
- W4205656806 cites W2103364043 @default.
- W4205656806 cites W2108598243 @default.
- W4205656806 cites W2113619522 @default.
- W4205656806 cites W2113900663 @default.
- W4205656806 cites W2116360511 @default.
- W4205656806 cites W2117140276 @default.
- W4205656806 cites W2117340355 @default.
- W4205656806 cites W2123341385 @default.
- W4205656806 cites W2130010412 @default.
- W4205656806 cites W2131354767 @default.
- W4205656806 cites W2136573752 @default.
- W4205656806 cites W2138173706 @default.
- W4205656806 cites W2151591509 @default.
- W4205656806 cites W2151721316 @default.
- W4205656806 cites W2154281919 @default.
- W4205656806 cites W2155893237 @default.
- W4205656806 cites W2201865119 @default.
- W4205656806 cites W2301881409 @default.
- W4205656806 cites W2475287302 @default.
- W4205656806 cites W2591418495 @default.
- W4205656806 cites W2790548233 @default.
- W4205656806 cites W2902108885 @default.
- W4205656806 cites W2946207497 @default.
- W4205656806 cites W2949290395 @default.
- W4205656806 cites W2949698470 @default.
- W4205656806 cites W2951287344 @default.
- W4205656806 cites W2951583631 @default.
- W4205656806 cites W2952856751 @default.
- W4205656806 cites W2963341661 @default.
- W4205656806 cites W2963488396 @default.
- W4205656806 cites W3092387711 @default.
- W4205656806 cites W3193978033 @default.
- W4205656806 cites W3208323570 @default.
- W4205656806 cites W4205172691 @default.
- W4205656806 cites W4282832562 @default.
- W4205656806 doi "https://doi.org/10.1101/2021.12.31.474501" @default.
- W4205656806 hasPublicationYear "2022" @default.
- W4205656806 type Work @default.
- W4205656806 citedByCount "0" @default.
- W4205656806 crossrefType "posted-content" @default.
- W4205656806 hasAuthorship W4205656806A5003716190 @default.
- W4205656806 hasAuthorship W4205656806A5029586167 @default.
- W4205656806 hasAuthorship W4205656806A5039116747 @default.
- W4205656806 hasAuthorship W4205656806A5078484869 @default.
- W4205656806 hasBestOaLocation W42056568061 @default.
- W4205656806 hasConcept C11413529 @default.
- W4205656806 hasConcept C153180895 @default.
- W4205656806 hasConcept C154945302 @default.
- W4205656806 hasConcept C15744967 @default.
- W4205656806 hasConcept C169760540 @default.
- W4205656806 hasConcept C178253425 @default.
- W4205656806 hasConcept C180747234 @default.
- W4205656806 hasConcept C26760741 @default.
- W4205656806 hasConcept C2779226451 @default.
- W4205656806 hasConcept C2779345533 @default.
- W4205656806 hasConcept C2779918689 @default.
- W4205656806 hasConcept C31972630 @default.
- W4205656806 hasConcept C41008148 @default.
- W4205656806 hasConcept C54170458 @default.
- W4205656806 hasConcept C57273362 @default.
- W4205656806 hasConcept C94487597 @default.
- W4205656806 hasConceptScore W4205656806C11413529 @default.
- W4205656806 hasConceptScore W4205656806C153180895 @default.
- W4205656806 hasConceptScore W4205656806C154945302 @default.
- W4205656806 hasConceptScore W4205656806C15744967 @default.
- W4205656806 hasConceptScore W4205656806C169760540 @default.
- W4205656806 hasConceptScore W4205656806C178253425 @default.
- W4205656806 hasConceptScore W4205656806C180747234 @default.
- W4205656806 hasConceptScore W4205656806C26760741 @default.
- W4205656806 hasConceptScore W4205656806C2779226451 @default.
- W4205656806 hasConceptScore W4205656806C2779345533 @default.
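The abstract above outlines the neural code converter approach: predict a target subject's voxel pattern from a source subject's pattern for the same stimuli, then apply the target subject's pre-trained DNN feature decoders to the converted pattern. The abstract does not specify the converter's model form, so the sketch below assumes a simple ridge-regression converter (one linear map from source voxels to target voxels) purely for illustration; the data, shapes, and variable names are hypothetical.

```python
# Illustrative sketch of a "neural code converter" as ridge regression.
# Assumption: the converter is a linear map from source-subject voxels to
# target-subject voxels; the paper's exact model and training details may differ.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data: fMRI responses of two subjects to the SAME images.
n_train, n_src_voxels, n_tgt_voxels = 1200, 5000, 4800
X_source = rng.standard_normal((n_train, n_src_voxels))   # source subject
Y_target = rng.standard_normal((n_train, n_tgt_voxels))   # target subject

# Fit the converter: predicts target-subject voxel patterns from source patterns.
converter = Ridge(alpha=100.0)
converter.fit(X_source, Y_target)

# At test time, convert a new source-subject pattern into "target space",
# where the target subject's pre-trained DNN feature decoders can be applied.
x_new = rng.standard_normal((1, n_src_voxels))
y_converted = converter.predict(x_new)                    # shape: (1, n_tgt_voxels)

# A pre-trained target-subject feature decoder (not shown) would then map
# y_converted to hierarchical DNN features used for image reconstruction.
```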