**2.2 Stimuli**

Each visual stimulus consisted of a picture at the top and a Japanese sentence at the bottom (Fig. 1). The pictures used for AS, PS, and SS were identical (number of lines per picture, mean ± SD: 14 ± 2.4, n = 6). There was one sentence control (SC) condition with intransitive verbs (e.g., "□-to ∆-ga hashitteru", "□ and ∆ run") and equally complex pictures (14 ± 2.5, n = 6), all of which differed from those used under the three main conditions. Half of the pictures depicted action occurring from left to right, and the other half depicted action from right to left. The assignment of symbols to the two sides of each picture was also counterbalanced within each condition.
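The counterbalancing described above can be sketched as follows. This is a minimal illustration, not the authors' actual stimulus list: the symbol names and the assumption that every ordered symbol pairing occurs once per direction are ours.

```python
from itertools import product

# Hypothetical sketch of the counterbalanced design: each action direction
# and each ordered assignment of head symbols to the two figures occurs
# equally often within a condition.
symbols = ["circle", "square", "triangle"]
directions = ["left-to-right", "right-to-left"]

# 6 ordered (agent, patient) symbol pairs; a figure cannot face itself.
pairs = [(a, p) for a, p in product(symbols, symbols) if a != p]
design = [(d, a, p) for d in directions for a, p in pairs]  # 12 balanced cells

# Each direction covers half the cells, and each symbol serves as the
# agent equally often in both directions.
assert sum(1 for d, _, _ in design if d == "left-to-right") == len(design) // 2
```

Under this scheme no symbol or direction is predictive of the correct response, which is the point of the counterbalancing.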

The sentences describing actions were written in a combination of the "hiragana" and "kanji" writing systems, and all sentence stimuli were grammatical in Japanese. Each sentence included two noun phrases and one verb; for example, the noun phrase "□-ga" consisted of a symbol (□) and a hiragana character (ga). Two sets of Japanese verbs (six transitive verbs: pull, push, scold, kick, hit, and call; and six intransitive verbs: lie, stand, walk, run, tumble, and cry) were used, each of which, including the passive forms, had either four or five syllables. Note that the verb "call" is used only as a transitive verb in Japanese. There was no significant difference in frequency between the two sets of verbs (t(10) = 0.7, p = 0.5), according to the Japanese lexical database ("Nihongo-no Goitokusei" (Lexical Properties of Japanese), Nippon Telegraph and Telephone Corporation Communication Science Laboratories, Tokyo, Japan, 2003). We prepared eight stimuli for each verb, yielding 48 stimuli for each condition.
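The frequency comparison is a two-sample t-test with df = 6 + 6 − 2 = 10. A stdlib-only sketch is given below; the log-frequency values are made up for illustration, since the actual database entries are not listed in the text.

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(x, y):
    """Student's two-sample t statistic with pooled variance, plus df."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * variance(x) + (n2 - 1) * variance(y)) / (n1 + n2 - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1 / n1 + 1 / n2)), n1 + n2 - 2

# Hypothetical log-frequencies; the real values come from the
# "Nihongo-no Goitokusei" database.
transitive   = [4.2, 4.5, 3.9, 4.1, 4.4, 4.0]
intransitive = [4.3, 4.0, 4.2, 4.6, 3.8, 4.1]

t, df = pooled_t(transitive, intransitive)
# |t| below the two-tailed critical value 2.228 (df = 10, alpha = 0.05)
# means no significant frequency difference, as reported in the text.
assert df == 10 and abs(t) < 2.228
```

Matching the two verb sets on frequency ensures that any condition difference cannot be attributed to lexical familiarity.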

Each stimulus consisted of one picture (top) and one sentence (bottom). Pictures depicting actions consisted of two stick figures; each stick figure was distinguished by one of three "head" symbols: a circle (○), square (□), or triangle (∆). The participants indicated whether or not the meaning of each sentence matched the action depicted in the corresponding picture by pressing one of two buttons. (A) Under the active sentence (AS) condition, canonical / subject-initial active sentences were presented ("∆-ga ○-o hiiteru"). Below each example, a word-by-word translation in English is shown. Nom, nominative case; Acc, accusative case; Dat, dative case. (B) Under the passive sentence (PS) condition, non-canonical / subject-initial passive sentences were presented ("○-ga ∆-ni hikareru"). (C) Under the scrambled sentence (SS) condition, non-canonical / object-initial scrambled sentences were presented ("○-o ∆-ga hiiteru"). An identical picture set was used under these three conditions. The sentence stimuli were all grammatical and commonly used in Japanese.

All stimuli were presented visually in yellow against a dark background. Each stimulus was presented for 5800 ms followed by a 200 ms blank interval, which was ample time for the patients (see Table 2). For fixation, a red cross was also shown at the center of the screen. Stimulus presentation and behavioural data collection were controlled using LabVIEW software and an interface (National Instruments, Austin, TX, USA).

**2.3 Tasks**

110 Advances in the Biology, Imaging and Therapies for Glioblastoma

Fig. 2. Lesion overlap map for 21 patients with a glioma in the left frontal cortex


In the picture-sentence matching task (Fig. 1), the participants read a sentence silently and indicated whether or not the meaning of each sentence matched the action of the corresponding picture by pressing one of two buttons. For AS, PS, and SS, all mismatched sentences were made by exchanging the two symbols in the original sentences, e.g., "□ pushes ○" instead of "○ pushes □". For SC, symbol-mismatched and action-mismatched sentences were presented equally often, so that the participants had to read each sentence completely to arrive at a correct judgment.
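The symbol-exchange operation that produces a mismatched sentence can be sketched as below; the sentence string is illustrative, not one of the actual stimuli.

```python
def swap_symbols(sentence: str, a: str, b: str) -> str:
    """Exchange every occurrence of symbol `a` with `b` and vice versa."""
    return sentence.translate(str.maketrans({a: b, b: a}))

# An illustrative AS sentence and its symbol-exchanged mismatch:
matched = "∆-ga ○-o hiiteru"            # "∆ pulls ○"
mismatched = swap_symbols(matched, "∆", "○")
assert mismatched == "○-ga ∆-o hiiteru"  # "○ pulls ∆" no longer matches the picture
```

Because only the two symbols are exchanged, the mismatched sentence remains grammatical and identical in length and structure to the matched one, so the judgment hinges purely on the argument roles.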

In addition to the picture-sentence matching task, we used a visual control task (VC), which required neither word nor sentence processing, as a baseline condition (Kinno et al., 2008). For VC, the same sets of pictures used in the picture-sentence matching task were presented, together with a string of jumbled letters taken from a single sentence in which the symbols (○, □, or ∆) and "kanji" appeared at the same positions in the string as in the picture-sentence matching task. The participants were asked to judge whether or not all the symbols in a letter string were the same as those in the picture, irrespective of the order of the symbols. The participants underwent practice sessions before testing to become fully familiarized with the tasks.
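The VC judgment, same symbols irrespective of order, amounts to comparing symbol multisets. A minimal sketch, with a hypothetical jumbled string of our own invention:

```python
from collections import Counter

SYMBOLS = {"○", "□", "∆"}

def same_symbols(letter_string: str, picture_symbols: list) -> bool:
    """True if the string contains exactly the picture's symbols, in any order."""
    found = Counter(ch for ch in letter_string if ch in SYMBOLS)
    return found == Counter(picture_symbols)

# Hypothetical jumbled strings paired with a picture showing ∆ and ○:
assert same_symbols("て○るいが∆ひ", ["∆", "○"])      # same symbols; order irrelevant
assert not same_symbols("て□るいが∆ひ", ["∆", "○"])  # □ replaces ○, so a mismatch
```

This order-insensitive comparison is what makes VC a visual baseline: it can be solved by scanning for symbols without parsing any linguistic structure.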

A single run of the testing sessions contained 24 "trial events" of the picture-sentence matching task (six each for AS, PS, SS, and SC), with variable inter-trial intervals of 6 or 12 s (one or two VC blocks, respectively), pseudorandomized within a run. Since meaningless letter strings were presented throughout VC while sentences were presented only in the trial events, the participants could switch from VC to the trial events according to the stimulus type. The order of AS, PS, SS, and SC was pseudorandomized in each run to prevent any condition-specific strategy. Eight runs were tested per participant in a single day. Half of the stimuli consisted of matched picture-sentence pairs (24 trials for each condition), and the other half consisted of mismatched pairs (24 trials for each condition). All patients underwent three to six of the runs as fMRI sessions inside the scanner, and then completed the rest of the eight runs outside the scanner. Because the number of fMRI runs was limited by the patients' medical conditions, here we focus on the behavioural data and the anatomical MRI scans alone. All of the behavioural data from the normal controls were acquired outside the scanner.
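A run schedule of this shape, 24 trial events with six per condition and inter-trial intervals of 6 or 12 s, could be pseudorandomized roughly as follows. The seeding and the uniform choice of interval are our assumptions; the chapter does not specify the randomization procedure.

```python
import random

def build_run(seed: int):
    """One run: 24 trial events (6 per condition) with 6 s or 12 s ITIs."""
    rng = random.Random(seed)
    trials = [c for c in ("AS", "PS", "SS", "SC") for _ in range(6)]
    rng.shuffle(trials)  # pseudorandomize condition order within the run
    # 6 s = one intervening VC block, 12 s = two VC blocks.
    itis = [rng.choice((6, 12)) for _ in trials]
    return list(zip(trials, itis))

run = build_run(seed=0)
assert len(run) == 24
assert all(sum(1 for c, _ in run if c == cond) == 6
           for cond in ("AS", "PS", "SS", "SC"))
```

Shuffling within each run, rather than using one fixed order, is what prevents the condition-specific strategies mentioned above.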

Gray or White? – The Contribution of Gray Matter in a Glioma to Language Deficits 113

30 ms, acquisition time: 8 ms, flip angle: 60°, field of view: 192 × 192 mm2, resolution: 0.75 × 0.75 × 1 mm3) was acquired for each patient. The location of the glioma was first identified on this MR image, and the glioma boundary was semi-automatically determined using MRIcroN software (http://www.mricro.com/) (Rorden & Brett, 2000). T2-weighted MR images (Department of Neurosurgery of Tokyo Women's Medical University) and positron-emission tomography (PET) data (Chubu Medical Center for Prolonged Traumatic Brain Dysfunction, Mino-Kamo-shi, Japan) were also used to assist the precise determination of the boundary. Each individual's structural image was spatially normalized to the standard brain space defined by the Montreal Neurological Institute (MNI) using the "unified segmentation" algorithm, a generative model that combines tissue segmentation, bias correction, and spatial normalization in a single framework (Ashburner & Friston, 2005); the normalized image was resampled to a 1 × 1 × 1 mm3 voxel size using the statistical parametric mapping software SPM8 (Wellcome Department of Cognitive Neurology, London, UK) (Friston et al., 1995) on MATLAB (MathWorks, Natick, MA, USA).

The resultant individually normalized images were divided into gray and white matter as follows (Fig. 3). First, a GM image was made by dividing the standard brain into gray and white matter using MRIcroN software. This GM image was used as a mask, which was applied to the individually normalized image of each glioma.

Using the resultant GM image of each glioma, we next employed voxel-based lesion-symptom mapping (VLSM) to analyze the relationship between glioma location and error rates on a voxel-by-voxel basis (Bates et al., 2003). For each voxel, the patients were divided into two groups according to whether or not they had a glioma including that voxel. The error rates for each condition, or the difference in error rates between two conditions (e.g., PS − AS), were then compared between these two groups by a t-test, with the statistical threshold set to p = 0.05 after correction for multiple comparisons using the false discovery rate (FDR). To minimize the effects of outlier observations, only voxels lying within the gliomas of at least two patients were included in the VLSM analysis. Finally, the result of the VLSM was projected onto a standard brain using MRIcroN software.

**3. Results**

In our paradigm with the three main conditions of AS, PS, and SS, under which two-argument relationships were critically required (see the Introduction), the same set of actions depicted by pictures was used, thus controlling for semantic comprehension per se. In contrast, a different set of pictures was used under the SC condition (e.g., "□ and ∆ run"), which basically required matching between words (symbols and verbs) and pictures alone, without syntactic analyses of the two-argument relationships. Thus, the SC condition was syntactically less complex and easier to comprehend than the other conditions. It was therefore mandatory to analyze the three main conditions and SC separately. Moreover, these analyses match our fMRI study (Kinno et al., 2008), in which SC was used as a separate control. In sections 3.1-3.3, we focus on the main conditions of AS, PS, and SS; the results of SC are presented in section 3.4.

**3.1 Behavioral analyses**

The ERs for the patients and the normal controls are shown in Table 2. A repeated-measures analysis of variance (rANOVA) with two factors (group [patients, normal controls] ×
