Text representation is an extensively studied field in natural
language processing (NLP), currently dominated by distributed
semantic models (DSM). DSMs have been successfully
applied as meaning representations for decoding brain
activities, but most prior work has examined only words
and phrases. Little is known about distributed sentence
representations for brain decoding. To help brain decoding
benefit from DSMs, we carry out a systematic evaluation,
covering both widely-used baselines and state-of-the-art
sentence representation models. We show how well different
types of sentence representations decode the brain activation
patterns and give empirical explanations of the performance
difference. By testing on two different decoding frameworks
and multiple tasks, we also examine whether they deliver
consistent performance when the decoding scenario varies.
To gain deeper understanding, we further compare the representation’s
correspondence to different brain cortices associated
with high-level cognitive functions. We find that the
state-of-the-art sentence representation model actively probes
the language atlas of the human brain. To the best of our knowledge, this
work is the first comprehensive evaluation of distributed sentence
representations for brain decoding. We hope this work
will contribute to bridging NLP representation models and
brain semantic representations.