Does tool-related fMRI activity within the intraparietal sulcus reflect the plan to grasp?


NeuroImage 36 (2007) T94–T108

Kenneth F. Valyear, a Cristiana Cavina-Pratesi, b Amanda J. Stiglick, c and Jody C. Culham a,d

a Neuroscience Program, University of Western Ontario, London, Ontario, Canada
b Department of Psychology, University of Durham, Durham, UK
c Medical Program, Queen's University, Kingston, Ontario, Canada
d CIHR Group on Action and Perception, Department of Psychology, University of Western Ontario, London, Ontario, Canada

Received 11 August 2006; accepted 20 March 2007. Available online 31 March 2007.

Neuroimaging investigations reliably describe a left-lateralized network of areas as underlying the representations of knowledge about familiar tools. Among the critical nodes of the network, an area centered within the left intraparietal sulcus (IPS) is thought to be related to the motoric representations associated with familiar tools and their usage. This area is in the vicinity of an area implicated in the control of object-directed grasping actions: the anterior intraparietal area, AIP. The current study aimed to evaluate whether this tool-related intraparietal activity could be accounted for by the graspable nature of tools or whether it was due to additional factors such as the functionality of tools. First, we found that during a naming task, activation within a discrete region of the left anterior intraparietal cortex was higher for tools than for graspable objects, but did not differ between graspable and non-graspable objects. In addition, the peak activity associated with tool naming was found to be largely distinct and consistently posterior to that associated with real object grasping. A separate region, anterior to the tool-selective focus and possibly overlapping with AIP, demonstrated weak selectivity for both tools and graspable objects relative to non-graspable objects.
These findings indicate that this tool-selective area at the anterior end of the left IPS is separable from the grasp-related intraparietal activity and, consistent with this, does not simply reflect the processing of grasping affordances. Taken together, these results suggest that object graspability alone cannot account for the left intraparietal activity driven by the naming of tools. Instead, this activity may relate to learned motor representations associated with the skillful use of familiar tools. © 2007 Elsevier Inc. All rights reserved.

Corresponding author. Postal address: Department of Psychology, University of Western Ontario, Canada. E-mail address: culham@uwo.ca (J.C. Culham).

Introduction

The concept of affordances was first articulated by James Gibson (1979), who argued that an animal's perception of its surroundings includes a description of how the environment and the objects within it influence the animal's potential for action. Following these ideas, it was suggested that simply seeing an object automatically evokes movement representations consistent with the possible actions that could be performed on that particular object given its shape, size, orientation, and location. In other words, objects potentiate certain actions depending on their intrinsic and extrinsic visual properties, and this translates into an increased level of neuronal activity within the representations underlying those actions. Some of the most compelling evidence in support of this proposal has come from neurophysiological studies in the macaque monkey. Several of these investigations have shown that grasping-related neurons in both the ventral premotor cortex (area F5) and the anterior intraparietal sulcus (area AIP) not only respond during the act of grasping but also during the passive viewing of graspable objects (Murata et al., 1997, 2000; Taira et al., 1990).
Importantly, many of these visual responses were congruent with the motor properties encoded by the neurons. That is, those objects for which the cells responded most strongly during grasping actions were often the same objects that evoked the highest responses during viewing. These results indicate that neuronal populations dedicated to mediating specific object-directed actions can also be engaged during the simple viewing of objects, in the absence of any overt movement. More recently, intriguing behavioral results from human psychophysical studies also suggest that simply viewing objects can activate motor representations consistent with the particular actions that the objects most strongly afford (Craighero et al., 2002; Derbyshire et al., 2006; Ellis and Tucker, 2000; Helbig et al., 2006; Symes et al., 2006; Tucker and Ellis, 1998, 2004; Vainio et al., 2006). For example, Tucker and Ellis (2004) showed that responses during an object categorization task are influenced by the graspable properties of the object. The authors had previously

developed a response apparatus that simulated both a precision-type grasp (which requires the opposition of the thumb and index finger) and a power-type grasp (which requires the opposition of the fingers and the palm of the hand) (Ellis and Tucker, 2000). Using this apparatus, they instructed subjects to decide if an object, presented in picture format, was man-made or natural. Responses that simulated a precision grasp were facilitated if the object in question afforded a precision grasp (e.g., a mushroom), and responses that simulated a power grasp were facilitated if the object in question afforded a power grasp (e.g., a cucumber). Moreover, these effects were also observed when the stimuli were not images of the objects but simply their written names. This last finding suggests that simply thinking about the concept of an object is enough to automatically evoke motor representations associated with grasping it. In parallel with this line of research, human neuroimaging studies have described a number of areas within the posterior parietal cortex (PPC) as specialized for the control of particular actions (for reviews, see Culham et al., 2005; Culham and Valyear, 2006). Analogous to the functional organization within the macaque monkey PPC (for review, see Colby and Goldberg, 1999), a discrete region at the anterior end of the intraparietal sulcus (IPS) has been implicated in the guidance and control of object-directed grasping (Binkofski et al., 1998; Culham et al., 2003; Frey et al., 2005). Activity within this region, referred to as human area AIP, was elevated during grasping actions relative to reaching actions (Culham et al., 2003), and the grasp-related activation was not merely due to somatosensory stimulation (Culham, 2004).
Critical to the current discussion, this grasping-related area also appears to respond during object viewing in the absence of any movement (Grèzes et al., 2003a,b; Grèzes and Decety, 2002). These findings have been interpreted as evidence in support of the idea that an object's visual affordances can activate specific motor representations; in this case, an object's graspable properties activate motor modules specialized for grasping. Other imaging research has reliably shown selectivity for familiar tools within the PPC, particularly in the left hemisphere (for reviews, see Johnson-Frey, 2004; Lewis, 2006). Some of these activation foci appear close to area AIP, and it has been suggested that this activity relates to the hand actions associated with using familiar tools (Chao and Martin, 2000). However, perhaps what becomes activated within the PPC is not so much those motor representations specialized for using tools but rather those more generally associated with object-directed grasping. In other words, perhaps this tool-related activity reflects processing within the visuomotor system specialized for grasping, driven by the perceived graspability of tools. A recent imaging study by Creem-Regehr and Lee (2005) aimed to test this possibility. The authors compared the activations associated with novel graspable objects to those associated with familiar tools. Their results revealed that during passive viewing, tool stimuli exclusively activated motor-related parieto-frontal areas. These results were interpreted to reflect an interaction between the functional identity of the tool and its perceived potential for action. However, since the two stimulus categories also differed in their overall familiarity, including the strength and extent of the sensorimotor experiences associated with them, perhaps the pattern of activity that was observed simply reflected these differences.
Moreover, the unfamiliar graspable shapes were presented as 2D images with no cues as to their real world size and thus their graspability. Therefore, as the authors noted, it was uncertain whether the subjects perceived the shapes as graspable or not. The purpose of the present study was to further investigate whether the intraparietal activity associated with tool naming simply reflects the fact that tools afford grasping, whereas typical control stimuli do not (e.g., Chao and Martin, 2000, used images of animals as control stimuli and animals do not typically afford grasping). To evaluate this possibility we measured responses during the naming of familiar objects that varied with respect to their graspability. Within each individual, we identified the intraparietal area selective for naming images of tools (vs. animals, as in Chao and Martin, 2000) and then examined the activity within this area during the naming of familiar tools, familiar graspable objects, and familiar non-graspable objects. The aim here was to evaluate whether or not responses within this area would generalize to other objects that were graspable but did not have the functional properties of tools. We also examined the combined group data using a whole-volume voxel-wise approach. To identify areas showing selectivity for tools we contrasted naming tools vs. naming graspable objects, and to identify areas sensitive to object graspability we contrasted naming graspable vs. non-graspable objects. At the same time, in a subset of subjects, we compared parietal activations associated with real grasping actions to those associated with naming tools. 
If the tool-related activity within the intraparietal cortex were being driven by the graspable properties of tools, perhaps reflecting the automatic activation of motor programs associated with grasping, then we would predict the following: (i) responses within the region should also be elevated during the naming and viewing of other graspable objects compared with non-graspable objects; and (ii) the region should show significant spatial overlap with area AIP as defined by real grasping (vs. reaching) actions. Alternatively, if the activity within this region reflects processing specifically related to tools, then we would expect the area to respond selectively to our tool stimuli and not to our other categories of objects, and perhaps also to be distinct from AIP. Preliminary data from this project have previously been presented in abstract form.

Methods

Subjects

Eleven neurologically intact individuals participated in the study (five female; age range 22–35). All were right-handed, with normal or corrected-to-normal visual acuity, and all were naive to the goals of the study. Each participant provided informed consent according to procedures approved by the University of Western Ontario Health Sciences Review Ethics Board.

Stimuli

Visual stimuli were presented using a PC laptop connected to a video projector. Images were shown on a rear-projection screen which straddled the subject's waist while they lay supine in the scanner. Subjects viewed the screen through a mirror mounted to the top of the head coil. At a viewing distance of approximately 120 cm, the images subtended approximately 5° of visual angle. Presentation of the stimuli was controlled using SuperLab Pro (Cedrus Corporation, San Pedro, CA). Stimuli were selected from the Hemera Photo-Objects image database (Hemera

Technologies Inc., Gatineau, QC) and were then converted to greyscale. For the scrambled stimuli, we divided each of our object images into a grid of cells and then randomly reordered the cells of the grid. A small black circle was superimposed in the center of each image to serve as a fixation point.

Localizer paradigm

Our localizer paradigm included familiar tools, animals, and scrambled images divided into separate 16 s epochs interleaved with fixation periods of the same duration (Fig. 1a, Supplementary Fig. 1). Within a single run, there were three separate epochs for each of the three conditions, resulting in a total of 19 epochs (9 stimulus and 10 baseline epochs; total run length of 5 min and 4 s). Each stimulus epoch comprised eight different stimuli, each presented for 2 s with no inter-stimulus interval. For each of the two categories of intact images, we used only eight different identities during the entire run (i.e., 8 different tools and 8 different animals; see Supplementary Fig. 1). However, to avoid repeating the exact same stimuli, we used different exemplars within each epoch. For example, although each epoch included an image of a hammer, a different hammer was shown each time. We had two different runs (using the same stimuli in both) in which the order of the stimuli and stimulus epochs were randomized. Throughout each run, subjects were instructed to maintain fixation and to silently name each of the stimuli (without making any overt speech movements) during the epochs of intact images.

Experimental paradigm

Our experimental paradigm included familiar tools, other graspable but non-tool objects, non-graspable objects, and scrambled versions of these same images interleaved with fixation periods (Fig. 1b, Supplementary Fig. 2). The stimulus and epoch durations were exactly the same as in the localizer design. Our experimental runs comprised two epochs per condition, giving rise to a total of 17 epochs (8 stimulus and 9 baseline epochs; total run length of 4 min and 32 s). For each object condition, we used 16 different identities of objects (and the tool identities differed from those used in our localizer paradigm). These stimuli were repeated across four separate runs, each with a different stimulus and epoch order. Again, subjects were instructed to fixate throughout the runs and to covertly name the stimuli during the intact image epochs.

Fig. 1. Stimulus paradigms. (a) The localizer paradigm comprised familiar tools, animals, and scrambled versions of these stimuli. Stimuli were presented in a block design, eight images per epoch, with 16 s epoch durations interspersed with fixation periods of the same length. (b) The experimental paradigm comprised familiar tools, graspable objects, non-graspable objects, and scrambled versions of these stimuli. Tool stimuli were different from those used in the localizer scans. Stimuli were presented in a block design, eight images per epoch, with 16 s epoch durations interspersed with fixation periods of the same length. For both paradigms, subjects were asked to silently name the stimuli during object epochs.

Imaging parameters

All imaging was performed at the Robarts Research Institute (London, Ontario, Canada) using a 4 T whole-body MRI system (Varian-Siemens, Palo Alto, California and Erlangen, Germany) and a full-volume radiofrequency head coil. Each imaging session comprised six functional runs and a single high-resolution anatomical scan. High-resolution anatomical volumes were collected using a T1-weighted 3D magnetization-prepared FLASH acquisition (inversion time, TI = 500 ms; time to echo, TE = 5.5 ms; repetition time, TR = 10 ms; flip angle, FA = 11°). Functional volumes were collected using a T2*-weighted, segmented (navigator-corrected), interleaved EPI acquisition (TE = 15 ms, TR = 1000 ms, FA = 40°, two segments/plane) to image the blood-oxygenation-level-dependent (BOLD) signal over time (Ogawa et al., 1992). Each functional volume took 2 s to acquire. A single volume acquisition covered 17 contiguous, 6 mm, quasi-axial slices, ranging from the most superior point of the cortex down through the ventral fusiform cortex, including approximately 2/3 of the cerebellum. The imaging field of view was 19.2 cm × 19.2 cm, with an in-plane resolution yielding a voxel size of 3.0 mm × 3.0 mm × 6.0 mm.
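As an illustration of the image-scrambling procedure described in the Stimuli section (dividing each image into a grid of cells and randomly reordering the cells), a minimal sketch follows. The grid size and the use of NumPy are assumptions for illustration; the paper does not report the cell dimensions or the software used for scrambling.

```python
import numpy as np

def grid_scramble(image, n_cells=8, rng=None):
    """Scramble a greyscale image by cutting it into an n_cells x n_cells
    grid and reassembling the cells in random order.
    Note: n_cells=8 is an assumed grid size, not the paper's value."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    ch, cw = h // n_cells, w // n_cells
    # Cut the image into grid cells, row-major order.
    cells = [image[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
             for r in range(n_cells) for c in range(n_cells)]
    order = rng.permutation(len(cells))
    # Reassemble the shuffled cells into rows, then stack the rows.
    rows = [np.hstack([cells[i] for i in order[r * n_cells:(r + 1) * n_cells]])
            for r in range(n_cells)]
    return np.vstack(rows)
```

Because the cells are only reordered, the scrambled image preserves the original pixel intensities (and hence overall luminance and contrast) while destroying object shape.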
Data preprocessing and analysis

Imaging data were preprocessed and analyzed using Brain Voyager QX (Brain Innovation, Maastricht, The Netherlands). Anatomical volumes were transformed into standard stereotaxic space (Talairach and Tournoux, 1988). Each functional run was assessed for subject head motion by viewing cine-loop animation and by examining the motion detection parameter plots after running 3D motion correction algorithms on the untransformed two-dimensional data. No abrupt movements were detected in the animations, and no deviations larger than 1 mm (translations) or 1° (rotations) were observed in the motion correction output. Functional data were preprocessed with linear trend removal and underwent high-pass temporal frequency filtering to remove frequencies below three cycles per run. Functional volumes were then aligned to the transformed anatomical volumes, thereby transforming the functional data into a common stereotaxic space across subjects. All imaging data were analyzed using contrasts within a general linear model (GLM) for each type of run (localizer and experimental runs). Each GLM included predictor functions for each of the conditions (except the fixation baseline), generated by rectangular wave functions (high during the condition and low during all other conditions) convolved with a standard hemodynamic response function (Boynton et al., 1996). Prior to GLM analysis, each run was z-transformed, effectively giving each run a mean signal of zero and converting beta weights into units of standard deviations. Our analysis used both functional region-of-interest (ROI) and voxel-wise approaches. Although there is a longstanding debate on the relative merits of the two approaches, one that has recently been argued in the literature (Friston and Henson, 2006; Friston et al., 2006; Saxe et al., 2006), our lab has traditionally used both approaches to exploit their respective strengths.
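The predictor construction described above, a rectangular (boxcar) wave convolved with a gamma-variate hemodynamic response, can be sketched as follows. The gamma parameters and the truncation window are illustrative assumptions, not the exact values used by Brain Voyager.

```python
import numpy as np

def boynton_hrf(t, n=3, tau=1.25, delay=2.5):
    """Gamma-variate hemodynamic response (after Boynton et al., 1996).
    Parameter values here are illustrative defaults, not the paper's."""
    t = np.asarray(t, dtype=float) - delay
    h = np.where(t > 0, (t / tau) ** (n - 1) * np.exp(-t / tau), 0.0)
    return h / h.sum()  # normalize to unit area

def condition_predictor(onsets_s, duration_s, n_vols, tr=2.0):
    """Boxcar regressor (1 during a condition's epochs, 0 elsewhere),
    convolved with the HRF and truncated to the run length."""
    boxcar = np.zeros(n_vols)
    for onset in onsets_s:
        start = int(round(onset / tr))
        stop = int(round((onset + duration_s) / tr))
        boxcar[start:stop] = 1.0
    hrf = boynton_hrf(np.arange(0.0, 24.0, tr))  # 24 s kernel, assumed
    return np.convolve(boxcar, hrf)[:n_vols]
```

For a localizer run (16 s epochs, 304 s at 2 s per volume, i.e., 152 volumes), the tools predictor would be built from the three tool-epoch onset times; one such predictor per condition then enters the GLM design matrix.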
The ROI approach enables comparisons of functional areas with earlier studies (e.g., Chao and Martin, 2000) and allows for analysis of single subjects' data without the pitfalls of intersubject averaging in stereotaxic space (which we have found particularly important for AIP, given its proximity to nearby somatosensory regions). In addition, the use of voxel-wise analyses provides a more open-ended approach which does not constrain the results to ROIs (if, for example, only a subregion of an ROI is involved) or only to anticipated activation foci.

Region of interest (ROI) analysis

For each individual, data from the two localizer scans were used to identify tool-selective ROIs using a contrast between the naming of tools vs. animals. Based on past neuroimaging studies of tool naming (Chao et al., 1999; Chao and Martin, 2000), we expected activation within several regions: the left anterior intraparietal cortex (AIPC), the left lateral temporo-occipital cortex (LTOC, near the posterior middle temporal gyrus), and the left inferior frontal cortex (IFC). We have used the terminology AIPC for the tool-related parietal focus to designate its anatomical location; later analyses will address the degree to which AIPC overlaps with grasp-selective area AIP (Binkofski et al., 1998; Culham et al., 2003). Although the contrast between naming tools vs. animals produced robust activation foci for most regions in most subjects, in a few cases the activation was weaker (particularly in the IFC; see Table 1c). However, rather than exclude those subjects with less robust activation, we preferred to select the ROIs based on localizer activation at more liberal thresholds. Although our minimum threshold (t > 2.0, corresponding to an uncorrected p value of less than .05) was lower than convention, two points should be noted.
First, we constrained our selection of ROIs to known anatomical locations identified by prior studies; thus these were not random areas of activity that happened to be significant at liberal thresholds. Second, because our ROIs were localized by runs that were independent from the experimental runs in question, any errors in ROI selection would only lead to a greater tendency to find null results for contrasts within the experimental runs. Indeed, as we will see in the results, the tool-selective preference was replicated in the experimental runs for each of the three areas, even in some subjects whose localizer data had been less robust (e.g., Subject 2, LTOC). In addition, we wanted to avoid a common problem with ROI analyses, namely that if a fixed threshold is used, the extent of activation can vary tremendously between subjects. Thus we constrained the maximum extent of the individual ROIs by identifying the peak focus of activation in each subject and then, setting the threshold to our determined minimum (t > 2.0), selecting a volume of interest up to (10 mm)³ = 1000 mm³ around the peak (for a similar ROI selection procedure and rationale, see Downing et al., 2006). For each subject's ROI, we extracted the average time course activity, aligned to the onset of each epoch, from all four experimental runs. Also, for illustrating average results, the average ROI time courses for the experimental runs were averaged across subjects. Again, it is important to emphasize that this
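The peak-constrained ROI selection described above can be sketched as follows. This is a simplified version under stated assumptions: it keeps suprathreshold voxels inside a cube of fixed millimetre extent centred on the peak voxel, and the handling of partial voxels at the cube boundary is an assumption.

```python
import numpy as np

def select_roi(tmap, peak, t_min=2.0, cube_mm=10.0, voxel_mm=(3.0, 3.0, 6.0)):
    """Boolean mask of voxels exceeding t_min within a cube of side
    cube_mm centred on the peak voxel (indices), given voxel dimensions.
    A sketch of the Downing et al.-style procedure in the text."""
    mask = np.zeros(tmap.shape, dtype=bool)
    half = cube_mm / 2.0
    # Voxel radius along each axis (whole voxels fitting within +/- 5 mm).
    radii = [int(half // d) for d in voxel_mm]
    window = tuple(slice(max(p - r, 0), p + r + 1)
                   for p, r in zip(peak, radii))
    mask[window] = tmap[window] > t_min
    return mask
```

With 3.0 mm × 3.0 mm × 6.0 mm voxels, the cube spans the peak slice plus one in-plane voxel in each direction, which keeps the maximum ROI volume near the stated 1000 mm³ cap.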

activity is completely independent from the activity used to identify and select the regions based on the localizer. Within a given subject's ROI, the mean percent BOLD signal change (mean %BSC) associated with each condition was computed as the average of the activation at the peak of the response (i.e., volumes 1–9, corresponding to 2–18 s after the start of each epoch). In order to compare activations across conditions, the mean %BSC values were then entered into a one-way repeated-measures analysis of variance, with subject as a random factor. Where significant differences were found, in order to test for differences between pairs of conditions, all possible post hoc comparisons were performed by computing an F-statistic. Tukey's wholly significant difference (WSD) was then used to correct the critical significance value so as to control for the problem of multiple comparisons.

Table 1. Localizer ROIs: individual ROI results (tools > animals) for (a) the anterior intraparietal cortex (AIPC), (b) the lateral temporo-occipital cortex (LTOC), and (c) the inferior frontal cortex (IFC). Areas were identified within each individual by contrasting the naming of tools with the naming of animals based on the localizer scans. The center-of-mass Talairach coordinates (x, y, z), cluster sizes (mm³), and peak t-values are listed for each subject's ROI. Notice that for most individual activation foci the peak effects were observed at reasonably high statistical thresholds. The group average Talairach coordinates, cluster size, and peak t-value for each of the three ROIs are also shown. The standard error of the mean (SE) Talairach coordinates computed for each area indicates a good amount of spatial consistency across individuals. [The numerical entries of the table are not reproduced in this transcription.]
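The per-epoch mean %BSC measure described in the ROI analysis (the average over volumes 1–9 after epoch onset, i.e., 2–18 s at 2 s per volume) can be sketched as follows; how the baseline signal level is estimated here is an assumption for illustration.

```python
import numpy as np

def mean_percent_bsc(timecourse, onset_vol, peak_vols=range(1, 10),
                     baseline=None):
    """Mean percent BOLD signal change over volumes 1-9 after epoch
    onset, relative to a baseline signal level. By default the run
    mean is used as the baseline (an assumption for this sketch)."""
    timecourse = np.asarray(timecourse, dtype=float)
    base = timecourse.mean() if baseline is None else float(baseline)
    peak = timecourse[[onset_vol + v for v in peak_vols]]
    return 100.0 * (peak.mean() - base) / base
```

One such value per condition per subject would then enter the repeated-measures ANOVA described above.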
Comparison of tool-related and grasp-related activation in anterior intraparietal cortex

We also wished to determine whether or not the tool-related focus in the anterior IPS overlapped with an area previously implicated in visually guided grasping (Binkofski et al., 1998; Culham et al., 2003). Seven of our subjects had previously participated in fMRI studies comparing visually guided reach-to-grasp (grasping) movements with reach-to-touch (reaching) movements to identify a human grasp-selective area, AIP (see Culham et al., 2003, for full methodological details). To compare tool- and grasp-related foci within each individual, we aligned the anatomical and functional data from both the current experiment and the grasping experiment to a high-resolution anatomical scan. We then created two statistical activation maps, one from the contrast between real grasping vs. reaching movements and the other from the contrast of naming tools vs. animals (i.e., the parietal ROI from the localizer runs). For each map we then selected the region of peak activity within the anterior intraparietal sulcus. To evaluate the spatial relationship between the two foci, we extracted the stereotaxic location (Talairach coordinates) of the peak activations for each contrast. Specifically, to determine whether or not there was a reliable relationship in any of the three canonical directions, paired-samples t-tests were performed on each dimension (x, y, z) of the Talairach coordinates. We also had some interest in how grasp-related area AIP might respond during our naming paradigm. Thus, we extracted the averaged activation during the naming experiment from each of the seven subjects' grasp-related area AIP and then entered the mean %BSC values into a one-way repeated-measures analysis of variance, with subject as a random factor.
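The coordinate comparison just described amounts to one paired-samples t-test per Talairach axis across subjects. A minimal sketch, computing the t statistic with NumPy only and leaving the p-value lookup to a t table or statistics package; the coordinate arrays in the usage example are hypothetical:

```python
import numpy as np

def paired_t(a, b):
    """Paired-samples t statistic and degrees of freedom."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

def compare_peak_coordinates(tool_peaks, grasp_peaks):
    """Run the paired test on each Talairach dimension (x, y, z).
    Inputs are (n_subjects, 3) arrays of peak coordinates; a reliable
    y-difference would indicate a consistent anterior-posterior
    displacement between the two foci."""
    tool_peaks = np.asarray(tool_peaks, dtype=float)
    grasp_peaks = np.asarray(grasp_peaks, dtype=float)
    return {axis: paired_t(tool_peaks[:, i], grasp_peaks[:, i])
            for i, axis in enumerate("xyz")}
```

Here a significantly negative t on the y dimension (tool minus grasp) would correspond to the tool-naming peak lying consistently posterior to the grasping peak.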
Where significant differences were found, in order to test for differences between pairs of conditions, all possible post hoc comparisons were performed by computing an F-statistic. Tukey's WSD was then used to correct the critical significance value so as to control for the problem of multiple comparisons.

Voxel-wise group analyses

In addition to the ROI analyses, we performed a whole-volume voxel-wise analysis for the group of subjects using an averaged GLM fitted for random effects analyses, with separate predictor functions for each condition (except the baseline) for each subject. We then ran two independent contrasts in order to evaluate the key effects of interest. The first contrasted the naming of tools with the naming of graspable objects, and the second contrasted the naming of graspable objects with the naming of non-graspable objects. We expected that if the parietal response to tools was driven by graspability, there should be no difference between tools and graspable objects but more activation for graspable than non-graspable objects. Alternatively, if the response to tools was driven by other factors such as functionality, there should be a significant difference between tools and graspable objects, but not between graspable and non-graspable objects. For each of the two critical contrasts, activation maps were set to reliable statistical thresholds (p < 0.015, minimum cluster size of

mm³), using Monte Carlo simulations (performed with AlphaSim software, courtesy of Douglas Ward, Medical College of Wisconsin) to verify that the resultant clusters were unlikely to have arisen due to chance (corrected p < 0.05), given the problem of multiple comparisons. However, when considering the significance of activity within the intraparietal cortex, an area for which we had clear a priori expectations, these thresholding methods (based on our entire collection of voxels) seemed overly conservative. Thus, to determine a more appropriate threshold, we first estimated the number of voxels within the intraparietal cortex and then re-calculated the minimum cluster size based on these constraints (again using AlphaSim software). To estimate the number of intraparietal voxels, we used a Talairach-based mask representing an estimate of the volume of gray matter specific to (bilateral) Brodmann Area 40 (as available with Brain Voyager software). The results from this second set of simulations (again performed at an uncorrected p < 0.015) specified an acceptable minimum cluster size of 162 mm³, specifically adjusted for the intraparietal cortex (corrected p < 0.05).

Behavioral control experiment

An unfortunate facet of the silent naming paradigm was the absence of behavioral measures that would have enabled us to determine if our object categories were comparable in terms of naming difficulty. Of course, if our object categories were associated with differential task (or attentional) demands, this could influence the pattern of activity observed and thus confound the interpretation of the results. Thus, to obtain a measure of naming difficulty, we had a separate group of subjects (N = 12) name our stimuli outside the scanner while we recorded naming accuracies and voice-onset latencies.
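Voice-onset latencies collected in this way are conventionally screened for outliers before being averaged per condition; the ±3 SD exclusion rule used in this behavioral analysis can be sketched as follows (a minimal sketch; the use of NumPy and the sample-SD convention are assumptions, and the reaction times in the usage example are fabricated for illustration).

```python
import numpy as np

def exclude_outliers(rts, n_sd=3.0):
    """Drop reaction times more than n_sd standard deviations above or
    below the condition mean, returning the retained values."""
    rts = np.asarray(rts, dtype=float)
    mean, sd = rts.mean(), rts.std(ddof=1)
    return rts[np.abs(rts - mean) <= n_sd * sd]
```

The retained values for each subject and object category would then be averaged and entered into the repeated-measures ANOVA.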
The exact same stimuli, order of stimuli, and stimulus durations (2 s) were used as in the imaging experiment, minus the fixation and scrambled epochs. Again, the presentation of the stimuli was controlled using SuperLab Pro (Cedrus Corporation, San Pedro, CA). The mean voice-onset reaction times and accuracy scores for each subject for each object category were entered into a one-way repeated-measures analysis of variance, with subject as a random factor. An outlier analysis was performed for each subject, and reaction times that were three standard deviations above or below the mean reaction time for each condition were excluded from the analysis. Where significant differences were found, in order to test for differences between pairs of conditions, all possible post hoc comparisons were performed by computing an F-statistic. Tukey's WSD was then used to correct the critical significance value so as to control for the problem of multiple comparisons.

Results

ROI analysis

All three ROIs that had been identified using an independent localizer (naming tools vs. animals), namely the left AIPC, the left LTOC, and the left IFC, demonstrated a preference for tools but not graspable objects in the experimental runs (see Fig. 2). Specifically, in each of these three areas, the activation during the naming of tools was higher than during the naming of graspable objects; however, activation levels during the naming of graspable and non-graspable objects were comparable. Most importantly, the left intraparietal area showed selectivity for tools but not graspable objects (Figs. 2a, b, c). In each individual, this activity was reliably situated at the anterior end of the IPS, near the junction with the postcentral sulcus. The stereotaxic location of this focus appears reasonably close to the intraparietal focus from previous imaging studies of tool naming (Chao and Martin, 2000; Chao et al., 2002; Okada et al., 2000).
Post hoc comparisons between the average activation levels in different conditions indicated that naming tools evoked significantly higher activity within the region as compared with all other stimulus conditions (Tools > Non-graspable, p < .01; Tools > Graspable, p < .05; Tools > Scrambled, p < .01). There were no significant differences between the graspable, non-graspable, and scrambled conditions; most importantly, the comparison for graspable vs. non-graspable objects was non-significant (p = .63). Thus, this intraparietal area appeared to be insensitive to the graspable properties of familiar objects but was nonetheless highly responsive to tools. This suggests that tools activate this area for reasons beyond their simple graspability. Similar results were obtained for the tool-related region identified within the left LTOC (Figs. 2d, e, f). In all subjects this activity was centered within the lateral occipital sulcus, encompassing some portion of the posterior middle temporal gyrus. This region corresponds well with the activations reported in previous imaging studies involving the naming of tools (Chao et al., 1999, 2002; Martin et al., 1996; Okada et al., 2000). Like the tool-related parietal focus, post hoc comparisons revealed a selective preference for the naming of familiar tools as compared with all other stimulus categories (Tools > Non-graspable, p < .01; Tools > Graspable, p < .01; Tools > Scrambled, p < .01). Naming graspable and non-graspable objects also evoked stronger activity relative to the scrambled object condition (Graspable > Scrambled, p < .05; Non-graspable > Scrambled, p < .05), a pattern that may reflect sensitivity to the naming task, sensitivity to intact objects (vs. scrambled), or both. Activity associated with naming graspable vs. non-graspable objects did not differ (p = .89). The tool-responsive region within the left IFC was localized to the posterior portion of the inferior frontal gyrus in eight of the eleven individuals (Figs. 2g, h, i).
The averaged group location is consistent with tool-related inferior frontal activations reported in previous imaging studies (Chao and Martin, 2000; Chao et al., 2002; Grafton et al., 1997; Martin et al., 1996). Post hoc comparisons revealed that the activity associated with naming tools was greater than that associated with all other experimental conditions (Tools > Non-graspable, p < .01; Tools > Graspable, p < .05; Tools > Scrambled, p < .01). In addition, activity evoked during the naming of both graspable and non-graspable objects was significantly greater than that evoked during the viewing of scrambled images (Graspable > Scrambled, p < .01; Non-graspable > Scrambled, p < .05). This last result may indicate some recruitment of the area for silent naming in general, perhaps reflecting partial overlap with Broca's area. There was no significant difference between the activity associated with naming graspable compared with non-graspable objects (p = .87).

Fig. 2. Individual ROI results. Areas were identified within each individual by contrasting the naming of tools with the naming of animals based on the localizer scans. Each subject's ROI is superimposed on the anatomy of a single individual, shown in a unique color. Notice how closely the foci cluster together, indicating good functional-anatomical consistency across individuals. (a) Anterior intraparietal tool areas (AIPC) localized within the left anterior intraparietal sulcus of all eleven subjects. (b) The event-related averaged time course illustrating the pattern of activity during experimental scans, averaged across each individual's AIPC ROI. (c) The mean percent signal change from area AIPC for each experimental condition, shown for each subject and as the group average. (d) Lateral temporo-occipital tool areas (LTOC) localized within the left lateral occipital sulcus/posterior middle temporal gyrus of all eleven subjects. (e) The event-related averaged time course illustrating the pattern of activity during experimental scans, averaged across each individual's LTOC ROI. (f) The mean percent signal change from area LTOC for each experimental condition, shown for each subject and as the group average. (g) Inferior frontal tool areas (IFC) localized within the left inferior frontal gyrus of eight individuals. (h) The event-related averaged time course illustrating the pattern of activity during experimental scans, averaged across each individual's IFC ROI. (i) The mean percent signal change from area IFC for each experimental condition, shown for each subject and as the group average.

Comparison of tool-related and grasp-related activation in anterior intraparietal cortex

An important objective of this study was to directly compare the parietal activations associated with the naming of tools (as defined by our localizer contrast, naming tools vs. naming animals) with those associated with grasping actions made toward real 3D objects (as defined by a contrast of grasping vs. reaching; see Fig. 3). Fig. 3 shows both activation maps for each individual, grasping (vs.
reaching) in green and naming tools (vs. animals) in blue, with the overlap between the two maps highlighted in yellow. The percentage of this overlap, calculated as the number of overlapping voxels divided by the mean of the total voxels of the two ROIs, is also indicated. Although partial overlap is evident in a few subjects (in particular S7 and S10), the activity along the anterior end of the IPS associated with tool naming is typically quite distinct from that associated with object grasping. Moreover, the spatial relationship between the two areas is consistent across all seven subjects, with the focus associated with grasping always lying anterior to that associated with tools (Fig. 3). Our analyses of Talairach coordinates confirmed this, with the average peak y-coordinate for grasping lying approximately 6 mm anterior to the average y-coordinate for tool naming, a difference that was highly significant (p < .005). No significant differences were observed in the medial-lateral (x) or superior-inferior (z) locations of the two foci. Our analysis of the time course of activity during our naming paradigm within grasp-related area AIP, averaged across the seven individuals, indicated significant differences among our conditions, F(3,18) = 8.6, p < .001. Post hoc comparisons revealed that tools and graspable objects differed from scrambled (p < .05); however, other comparisons were not significant, including the comparison between graspable and non-graspable objects (p = .63).

Voxel-wise group analyses

The first analysis we performed aimed to identify areas selective for tools by contrasting naming tools with naming graspable objects. We expected this comparison to reveal areas of activity corresponding well with those described in the individual ROI analyses, since the three ROIs identified with that approach were shown to respond selectively to tools. Consistent with these expectations, contrasting tools vs. graspable objects revealed

Fig. 3. Comparison of activation maps. Two activation maps are shown for each individual, with grasping activity (grasping vs. reaching) shown in green, tool naming activity (tools vs. animals) shown in blue, and the extent of overlap highlighted in yellow. The principal anatomical area of interest, the left anterior intraparietal sulcus near the junction of the postcentral sulcus, is marked and shown in closer detail to the left of each individual. The percent overlap between the two independently defined ROIs is indicated, computed as the number of overlapping voxels divided by the mean of the total voxels of the two ROIs. Notice how in each individual the activity associated with tool naming is reliably posterior to that associated with object grasping. Notice also that the anterior parietal activation for tools is largely left-lateralized whereas the activation for grasping is evident bilaterally in most subjects.

significant activity within left anterior intraparietal cortex, corresponding well with the area defined via the ROI approach (i.e., the area we referred to as AIPC; compare Figs. 2a with 4b, and Table 1a with 2a). Both approaches also revealed strong tool selectivity within the left lateral occipital cortex, and the location of the group voxel-wise focus corresponded well with the individual ROI focus (i.e., the area we referred to as LTOC; compare Figs. 2d with 4f, and Table 1b with 2a). However, unlike the ROI results, contrasting tools vs. graspable objects did not reveal significant activations within the left inferior frontal cortex. This result is not entirely surprising given the relative inconsistency of the tool-related effects within this area of cortex, as found with our ROI analyses (see Table 1c). Contrasting naming tools vs. graspable objects also revealed some additional areas of activity, including left posterior intraparietal cortex (Fig.
4e, Table 2a), left lateral frontal cortex (Fig. 4d, Table 2a), and bilateral anterior cingulate cortex (Fig. 4c, Table 2a). These areas, with the possible exception of the posterior intraparietal cortex, have been previously implicated in various motor functions, including action planning, sequencing, imagery, and execution (e.g., Binkofski et al., 1999; Decety et al., 1994; for a review of functional areas and connectivity, see Wise et al., 1997). Moreover, several other tool-related imaging paradigms have also reported activity within these areas (for review, see Lewis, 2006). Thus, we interpret this widespread, mostly left-lateralized, parietal and frontal activity as additional support for the view that tool naming can engage motor representations normally subserving complex actions (such as tool use). The second critical contrast we performed aimed to identify areas sensitive to object graspability by contrasting naming graspable vs. non-graspable objects. Of most importance was

Fig. 4. Group voxel-wise results. Section I (top) shows the activation maps corresponding to each of our comparisons of interest: the blue activation indicates areas showing significantly higher activity during the naming of tools relative to graspable objects; the green activation indicates areas showing significantly higher activity during the naming of graspable relative to non-graspable objects; the red activation indicates areas showing significantly higher activity during the naming of non-graspable relative to graspable objects (i.e., the opposite of green). Section II (bottom) shows the averaged time course activity extracted from each area, aligned to the onset of each epoch. For both sections: a. left anterior intraparietal cortex (AIPC GO); b. left anterior intraparietal cortex (AIPC TOOL); c. bilateral anterior cingulate cortex; d. left lateral frontal cortex; e. left posterior intraparietal cortex; f. left lateral occipital cortex; g. bilateral parahippocampal cortex.

determining whether or not this comparison would reveal activity within the intraparietal cortex. Indeed, our results revealed a discrete focus of activity at the anterior extent of the left intraparietal cortex (Fig. 4a, Table 2b). Importantly, this area was clearly distinct from the tool-selective area localized to the left intraparietal cortex (in both the ROI and voxel-wise approaches), and unlike the tool-selective area, it showed sensitivity to the graspability of objects in general. Thus, to distinguish between these two anterior intraparietal areas, we will refer to the area showing selectivity for tools (tools > graspable objects) as AIPC TOOL and to the area showing sensitivity to object graspability (graspable > non-graspable objects) as AIPC GO.
Indeed, the response pattern observed within AIPC GO is consistent with an area tuned to the graspable affordances of objects (i.e., responses were

Table 2. Voxel-wise group results.

(a) Contrast: naming tools > naming graspable objects. Columns: Talairach coordinates (x, y, z), volume (mm^3), and post hoc statistics T > G, T > N, G > N.
  Left anterior intraparietal cortex (AIPC TOOL)
  Left lateral occipital cortex
  Left posterior intraparietal cortex (*)
  Left lateral frontal cortex
  Bilateral anterior cingulate cortex

(b) Contrast: naming graspable objects > naming non-graspable objects. Columns: Talairach coordinates (x, y, z), volume (mm^3), and post hoc statistics G > N, G > T, T > N.
  Left anterior intraparietal cortex (AIPC GO)
  Right anterior inferior parietal lobule
  Right insular cortex
  Right dorsal medial occipital cortex
  Bilateral calcarine sulcus
  Right superior temporal sulcus
  Right ventral occipito-temporal cortex

(c) Contrast: naming non-graspable objects > naming graspable objects. Columns: Talairach coordinates (x, y, z), volume (mm^3), and post hoc statistics N > G, N > T, T > G.
  Right angular gyrus
  Bilateral parahippocampal cortex
  Bilateral posterior cingulate cortex

(*) This significance value actually reflects an N > G effect.

All activation foci were identified based on the data from the experimental paradigm and the combined group GLM. (a) Areas were detected by contrasting the naming of tools with the naming of graspable objects. (b) Areas were detected by contrasting the naming of graspable objects with the naming of non-graspable objects. (c) Areas were detected by contrasting the naming of non-graspable objects with the naming of graspable objects. For each contrast, the mean center-of-mass Talairach coordinates and cluster size for each area are listed, together with post hoc comparisons between pairs of conditions; in each case, the statistical comparison used to identify the regions is shaded in gray. T = naming tools; G = naming graspable objects; N = naming non-graspable objects.
comparable for tools and graspable objects, and both were higher than non-graspable objects). Ideally, we would have liked to do a subject-by-subject comparison between this focus and the focus for grasping vs. reaching. Unfortunately, however, the anterior parietal activation for naming graspable vs. non-graspable objects was not sufficiently robust to be observed in individual subjects. Thus, the only comparisons we can make are between the average stereotaxic foci (see Fig. 5), which suggest that the anterior parietal area sensitive to the graspable properties of objects (graspable > non-graspable) is lateral to that specialized for actual grasping (grasping > reaching). Contrasting graspable vs. non-graspable object naming also revealed significant activity in several other areas (see Table 2b). Some of these activations may relate to differences in the visual properties of graspable vs. non-graspable objects. For example, compared with non-graspable objects, the real-world sizes of graspable objects are necessarily smaller; thus, perhaps these objects are more strongly associated with areas specialized for object processing within central (foveal) vision (Hasson et al., 2002, 2003; Malach et al., 2002). Likewise, since non-graspable objects are appreciably larger, they may preferentially activate areas specialized for processing objects within the visual periphery. Consistent with this conjecture, cortical areas previously associated with scene processing (Epstein and Kanwisher, 1998), which have also been shown to demonstrate a peripheral visual field processing bias (Hasson et al., 2002, 2003; Malach et al., 2002), were more strongly activated by our non-graspable objects (i.e., bilateral parahippocampal cortex; Fig. 4g, Table 2c). Overall, the voxel-wise group analyses served to validate our main findings from the ROI-based approach.
Both analyses indicate a specific region of the left intraparietal cortex as selective for naming tools but not other graspable objects. Furthermore, another distinct, but nearby, anterior parietal area was found to be equally responsive to tools and graspable objects, but not at all responsive to non-graspable objects. These results suggest a complex pattern of organization within the left anterior parietal cortex, with areas closer to somatosensory cortex showing sensitivity to the graspable properties of objects and more posterior intraparietal areas showing selectivity for familiar tools. These results mesh well with the conclusions drawn from our individual ROI analyses, whereby grasp-related intraparietal activity was found to be situated anterior to tool-related intraparietal activity. Fig. 5 illustrates these spatial relationships and provides a schematic overview of all anterior parietal activations from our complete set of analyses.
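Two simple quantities underlie the spatial comparisons reported above: the percent overlap between two ROIs (the Fig. 3 measure: shared voxels divided by the mean of the two ROI sizes) and a paired test on per-subject peak Talairach y-coordinates. A minimal sketch, using hypothetical voxel sets and coordinate values rather than the study's data:

```python
from scipy import stats

def percent_overlap(roi_a, roi_b):
    """Number of shared voxels divided by the mean of the two ROI
    sizes, times 100 (the overlap measure reported with Fig. 3)."""
    roi_a, roi_b = set(roi_a), set(roi_b)
    shared = len(roi_a & roi_b)
    return 100.0 * shared / ((len(roi_a) + len(roi_b)) / 2.0)

def compare_peak_y(grasp_y, tool_y):
    """Paired t-test on per-subject peak y-coordinates (mm); a positive t
    with a small p indicates the grasp focus lies reliably anterior."""
    return stats.ttest_rel(grasp_y, tool_y)

# Hypothetical ROIs as sets of (x, y, z) voxel indices.
roi_grasp = {(40, -40, 45), (41, -40, 45), (42, -40, 45), (43, -40, 45)}
roi_tool = {(42, -40, 45), (43, -40, 45), (44, -46, 45), (45, -46, 45)}
overlap = percent_overlap(roi_grasp, roi_tool)  # 2 shared / mean size 4 = 50.0
```

Voxels are treated as discrete grid coordinates, so set intersection suffices; the paired test mirrors the across-subjects comparison of y-coordinates described for the grasping and tool-naming foci.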

Fig. 5. Summary schematic. Shown here is an illustration of the spatial relationship across all anterior parietal ROIs identified throughout each of our experiments. Each area is simply represented by a colored circle: light green corresponds with the area identified by contrasting grasping vs. reaching (according to the individual ROI approach); dark green corresponds with the area identified by contrasting naming graspable vs. non-graspable objects (AIPC GO, according to the group voxel-wise approach); light blue corresponds with the area identified by contrasting naming tools vs. animals (according to the individual ROI approach); dark blue corresponds with the area identified by contrasting naming tools vs. graspable objects (AIPC TOOL, according to the group voxel-wise approach). The representation of each area is shown superimposed on a single axial slice of one individual (top right; z = 43). Each circle is centered on the group mean center-of-mass x and y Talairach coordinates obtained for each of the areas. These locations were then extended (approximated) to the cortical surface representation of this individual. It should be emphasized, then, that this is a schematic: it is not meant to reflect the veridical locations of these areas but rather to provide an estimate for the purpose of summarizing our results. The cortical representation was constructed from the individual's high-resolution anatomical image based on the gray-white matter boundary (Kriegeskorte and Goebel, 2001). The cortical surface was then partially inflated to better reveal the depths of the sulci.

Behavioral results

The results of our repeated measures analysis of variance indicated a significant difference among the mean voice-onset reaction times for our three conditions, F(2,22) = 18.7, p < .001. Post hoc comparisons revealed that the mean voice-onset latency for naming tools (M = ms) was greater than for graspable objects (M = 706.2 ms; p < .01) but not non-graspable objects (M = ms; p = .08). Mean voice-onset latencies for naming graspable and non-graspable objects did not differ (p = .13). No significant differences were observed in the accuracy of naming across object categories (Tools: M = 98.7%, SE = .40; Graspable: M = 99.8%, SE = .20; Non-graspable: M = 99.6%, SE = .40). Although our behavioral results suggest that our stimuli showed some differences in nameability, we do not believe that these differences can account for the pattern of activation we observed, for three reasons. First, the pattern of activation in the three key areas discussed here (AIPC, LTOC, and IFC) did not match the pattern that would be expected based on naming latencies. Although the only significant difference in naming latencies was between tools and graspable objects, there were trends indicating that non-graspable objects were of intermediate difficulty. In none of the three key areas did non-graspable objects yield intermediate activation. Second, several neuropsychological and neuroimaging investigations have found the ventral temporal cortex to be involved in semantic retrieval and correlated with naming difficulty (Galton et al., 2001; Grossman et al., 2004; Martin et al., 1996; McMillan et al., 2004; Zelkowicz et al., 1998); however, we found no activation differences in this area for our comparisons. Third, tool-related activity within anterior intraparietal cortex has been frequently demonstrated in other studies of tool processing (e.g., Fridman et al., 2006; Lewis, 2006; Lewis et al., 2005), including tool naming studies that have controlled for word frequency and difficulty (Chao and Martin, 2000). It seems unlikely that all of these reports would simply reflect attentional confounds.

Discussion

The goal of this study was to evaluate whether the intraparietal activity driven by tool naming could be accounted for simply by the graspable nature of tools (compared to control stimuli used in previous studies, such as animals) or whether it was due to additional factors such as the functionality of tools. Our results revealed an area within the left intraparietal cortex that was active for tools but not other graspable objects, and responses to graspable and non-graspable objects did not differ. Thus, this area seems insensitive to the graspability of objects yet highly responsive to familiar tools. In addition, in a subset of seven subjects who also performed real grasping (vs. reaching) actions toward objects, the tool-selective activation was found to be largely distinct from and consistently posterior to that associated with grasping. Taken together, our results suggest that this tool-related intraparietal activity is separable from grasp-related intraparietal activity and, consistently, that it does not merely reflect the processing of grasping affordances. Instead, it appears that other properties of implements, beyond their graspability, are at play. Tools, unlike many other objects, are tightly coupled to specific actions. In particular, elaborate sequences of arm and hand actions develop and become strongly associated with tools through experience using them. Even seemingly simple tool-mediated actions, such as the use of a screwdriver, require the appropriate application of forces, the fine coordination of multiple limbs, and the efficient timing, amplitude, and direction of constituent movements. Importantly, this well-defined set of action operations is tied to prior experience and stored knowledge of object function. In contrast, interacting with most other (i.e., non-tool) objects can often be accomplished without any reference to prelearned action programs.
This is not to say that prior experience does not play an important role when interacting with other objects, but rather that knowledge of object function and manipulability is fundamental to the action representations associated with highly familiar tools. As a consequence, perhaps the strength of these associations is such that simply viewing familiar tools results in the activation of motor representations underlying their usage. In other words, perhaps the most salient action affordances tied to the visual properties of tools correspond to those action representations subserving tool use abilities. Thus,

rather than activating parietal representations associated with grasping, perhaps tool naming activates representations associated with tool usage. However, since the early work of Hugo Liepmann, it has been postulated that the left inferior parietal cortex, and in particular the supramarginal gyrus (SMG), plays a critical role in the storage and retrieval of learned movement representations associated with skilled actions (Heilman et al., 1982; Rothi and Heilman, 1997). Damage to these representations, which Liepmann termed movement formulae, leads to profound deficits in the planning, performance, and imitation of skilled actions. This condition, often referred to as ideomotor apraxia, is characterized by an inability to produce the correct hand configurations associated with tools according to their known functional properties. If parietal responses during the naming of tools were to reflect the activation of areas specialized for tool use actions, then shouldn't this activity map onto the left inferior parietal cortex and the SMG? Instead, we found the most reliable tool-related activations to be centered within the IPS. This may suggest that our activity does not directly relate to the representations subserving actual tool use. Alternatively, perhaps in addition to the tool use representations within the SMG, there are areas within the IPS that play a critical role in using tools. This is certainly the case within the intraparietal cortex of the macaque monkey, where it has been shown that tool use training can induce remarkable changes in the molecular and neurophysiological properties of cells within the IPS (Hihara et al., 2006; Iriki et al., 1996; Ishibashi et al., 2002a,b).
Also, in humans, several other tool-related imaging studies, including those involving the actual execution of tool use actions, report activation foci within the IPS (e.g., Fridman et al., 2006; Johnson-Frey et al., 2005). Finally, parietal lesions associated with ideomotor apraxia often encompass parts of the IPS (Buxbaum et al., 2005a,b; Haaland et al., 2000). In short, there may well be several spatially separable tool-associated areas within the PPC, and it is likely that many of these areas, including the one we have localized to the left IPS, directly relate to the praxis system and the ability to skillfully use familiar tools. It is worth noting that at least some patients diagnosed with ideomotor apraxia have no trouble preshaping the hand in accordance with the orientation and size of an object during simple prehension (Buxbaum et al., 2003, 2005b; Sirigu et al., 1995, 2003). In other words, grasping actions based on the physical attributes of objects can be well preserved despite an inability to form the correct hand postures when attempting to utilize these same objects based on their learned functional attributes. These findings have led to the proposal that parieto-frontal systems critical to object prehension can be distinguished from those critical to object utilization (Daprati and Sirigu, 2006; Johnson-Frey, 2003). Our findings are consistent with this distinction. That is, our results indicate that the tool-selective representations within the left IPS are distinct from those mediating the visuomotor transformations underlying simple grasping actions (Fig. 3). However, naming tools and grasping objects differ in many respects, and there are serious challenges in drawing inferences from such disparate paradigms.
Thus, although our conclusions regarding these activities are consistent with the above neuropsychological evidence suggesting separable parietal systems for grasping and using objects, we admit that additional experiments allowing for more direct comparisons are needed. Indeed, discovering how these two systems relate and interact during actual tool use is an exciting target for future research. In addition to our main findings, which highlight a tool-selective anterior intraparietal area as insensitive to the graspability of objects (i.e., area AIPC TOOL, Fig. 4b), our results also revealed some evidence for sensitivity to object graspability within an adjacent region of anterior parietal cortex. By directly contrasting our graspable vs. non-graspable object conditions, we identified an area at the anterior extent of the intraparietal cortex (i.e., area AIPC GO, Fig. 4a). Activity near this location has been noted during object manipulation (e.g., Binkofski et al., 1999), somatosensory stimulation (e.g., Culham, 2004), visuotactile object recognition (Grefkes et al., 2002), and the passive viewing of objects and object-directed actions (e.g., Buccino et al., 2001; Grèzes et al., 2003a; Shmuelof and Zohary, 2005, 2006). One hypothesis about the nature of this activity is that it corresponds to grasp-related circuitry, reflecting the automatic activation of visuomotor areas specialized for object grasping, as previously shown in both human and macaque grasp-related anterior intraparietal cortex (e.g., Grèzes et al., 2003a; Murata et al., 2000). However, if this were the case, one would also expect our grasp-related area AIP to show a clear preference for naming tools and other graspable objects relative to non-graspable objects.
Instead, although our results from grasp-defined area AIP (for the subset of seven subjects for whom we had data) showed some tendencies in this direction (i.e., tools and graspable objects evoked higher activity than scrambled images whereas non-graspable objects did not), direct comparisons between tools, graspable objects, and non-graspable objects did not reach significance. In summary, there is a suggestion of an effect of graspability within and around AIP; however, the effect is quite weak, failing to reach significance in planned comparisons within seven subjects and appearing in voxel-wise group contrasts without being observed in individual subjects. There are several possible explanations for the weakness of the effects of graspability in and around AIP. First, recent delayed grasping data from our lab (Singhal et al., 2006, following up on early pilot data from Culham, 2004) suggest that AIP, as defined by grasping versus reaching, may be functionally heterogeneous. That is, the anterior aspects of AIP appear to respond to the somatosensory and motor components of the task, while the posterior aspects appear to respond to the visual components as well. Perhaps by choosing the peak focus of activity in response to grasping vs. reaching, we have biased our selection of the area in favor of more somatosensory and motor-related voxels, rather than the more posterior visually driven voxels. Alternatively, the graspability effect may simply reflect somatosensory associations with tools and graspable objects, a suggestion bolstered by the finding that the AIPC GO focus was somewhat lateral to the expected location of AIP, where we have sometimes observed somatosensory responses in our lab.
Second, perhaps our non-graspable object stimuli served to partially activate motor representations associated with grasping by virtue of the fact that most of these objects had graspable components (e.g., the door handles on a car), a confound that we found difficult to avoid given that we wanted to include only very common and easily nameable objects. However, it may also be the case that AIP is not strongly recruited by the presentation of 2D images of objects on a screen, which do not afford genuine interaction. The posterior end of AIP responds to the visual presentation of 3D objects during passive viewing, though 2D objects may be less effective (Culham et al., 2003). Moreover, for an unnatural task (pantomiming a grasp vs. reach beside the object),

grasp selectivity in AIP drops to negligible levels (Króliczak et al., 2007). Consistent with the argument that AIP activation may be sensitive to the task as well as the stimuli, other studies (e.g., Grèzes et al., 2003a) have found activation in AIP for the presentation of 2D objects in a task that involved a motor response, which might be expected to prime visuomotor pathways more strongly than our naming task. Regardless of the weakness of the graspability effect in and around AIP, however, our results nevertheless demonstrate highly consistent and robust tool selectivity in a region just posterior to AIP that cannot be accounted for by graspability. Although we strongly favor our action-related account of the tool selectivity we have observed in area AIPC TOOL, it is possible, though we think unlikely, that the effects could have arisen from stimulus differences. Specifically, the majority of our tool stimuli have a clear axis of elongation, an inherent feature of most hand-held implements, whereas our graspable (and our non-graspable) object stimuli typically do not. These differences, at least in principle, may indeed contribute to the pattern of activity we have observed; in that case, tool selectivity would reflect a general preference for the processing of elongated objects. In our view, however, this explanation seems less likely than our action-related account. If our tool-selective activity were simply to reflect a processing bias in favor of more elongated objects, then we would expect the area to demonstrate sensitivity to the orientation of elongated objects. In a recent study (Rice et al., 2007), we found no evidence of sensitivity to object orientation within anterior parietal cortex; rather, in line with previous work (James et al., 2002; Valyear et al., 2006), orientation sensitivity was localized to a discrete region within the right lateral parieto-occipital cortex.
Instead, as described above, most of the tool-selective activity we observed here, including the anterior intraparietal activity, more readily maps onto cortical areas known to be specialized for the guidance and control of actions. We have purposefully focused our attention on the tool-selective area within the left anterior IPS, but it is clear that this area is part of a larger cortical network underlying tool-related knowledge (for reviews, see Johnson-Frey, 2004; Lewis, 2006). Indeed, our voxel-wise analyses revealed tool selectivity in many other cortical regions, including an additional posterior parietal focus as well as some dorsal frontal and lateral temporo-occipital areas (see Fig. 4 and Table 2a). As discussed in the voxel-wise results section, most of these areas have been previously associated with motor-related processes as well as with tools and tool use abilities (e.g., Binkofski et al., 1999; for review, see Lewis, 2006). As planned, in line with previous imaging studies involving tool naming, we used our localizer paradigm to identify two additional tool-responsive areas, one within the left inferior frontal and the other within the left temporo-occipital cortex. Like the parietal tool representations, the inferior frontal area is thought to play a critical role in the planning and production of tool-mediated actions (Fridman et al., 2006; Johnson-Frey et al., 2005). This area of cortex is densely interconnected with parts of the posterior parietal cortex, and together these areas form a cortical circuit critical for object grasping and manipulation (Luppino et al., 1999; for reviews, see Luppino and Rizzolatti, 2000; Rizzolatti and Luppino, 2001). Also, along with inferior parietal areas, damage to the left inferior frontal cortex has often been associated with ideomotor apraxia (Buxbaum et al., 2005a,b; Haaland et al., 2000).
Others have also suggested that this cortical region is important in recognizing and interpreting observed actions and in understanding the functional relevance of objects (Iacoboni et al., 2005; Johnson-Frey et al., 2003; for reviews, see Binkofski and Buccino, 2006; Rizzolatti and Craighero, 2004). In contrast, the tool-related activity within the left temporo-occipital cortex, observed with both the localizer and full-volume analyses, is likely to reflect semantic and/or perceptual processing related to familiar tools and the actions with which they are associated (Chao et al., 1999; Damasio et al., 2001; Martin et al., 1996). This area is robustly active during the retrieval of semantic information about the function and manipulability of familiar tools (Boronat et al., 2005; Kellenbach et al., 2003), when generating action words (Martin et al., 1995), and during the processing and recognition of tool-mediated actions and spatial relations (Damasio et al., 2001). Thus, it seems plausible that these lateral temporo-occipital tool representations may relate to conceptual praxis disorders (or ideational apraxia), whereby individuals have selective difficulties with semantic knowledge of tools and their associated actions (e.g., Heilman et al., 1997). Interestingly, this area also preferentially responds to the sounds of tools in action (Beauchamp et al., 2004a,b; Lewis et al., 2005). Others have shown this area to prefer tool-related motion as compared with biological motion (Beauchamp et al., 2002, 2003). These results have been taken to suggest that this area is selectively involved in encoding and processing motion information associated with tools and other man-made artifacts. In short, this area appears to integrate various types of information about familiar tools, perhaps representing a convergence point where this information is processed and bound together. Of course, more investigation is needed to provide further support for this idea.
To summarize, although finer distinctions are beginning to emerge, areas within the left posterior parietal, inferior frontal, and posterior temporo-occipital cortices represent critical nodes of a larger cortical network underlying the representations of knowledge associated with familiar tools and their skilled usage. The current data identify a specific area within the left anterior IPS, distinct from that associated with object-directed grasping, that responds exclusively to tool stimuli and not to other graspable objects. Taken together, these findings suggest that this tool-related activity does not simply reflect the graspable properties of the stimuli or relate to their natural affordances. Instead, we have argued that this activity relates to action representations that are tied to prior experience and knowledge of object function. In particular, this intraparietal activity reflects the activation of learned motor representations associated with the skillful use of familiar tools. These representations appear to be distinct from those specialized for object grasping, which are thought to operate in real time without the need to reference stored information.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at doi: /j.neuroimage

References

Beauchamp, M.S., Lee, K.E., Haxby, J.V., Martin, A., Parallel visual motion processing streams for manipulable objects and human movements. Neuron 34.
Beauchamp, M.S., Lee, K.E., Haxby, J.V., Martin, A., FMRI responses to video and point-light displays of moving humans and manipulable objects. J. Cogn. Neurosci. 15.

Beauchamp, M.S., Argall, B.D., Bodurka, J., Duyn, J.H., Martin, A., 2004a. Unraveling multisensory integration: patchy organization within human STS multisensory cortex. Nat. Neurosci. 7.
Beauchamp, M.S., Lee, K.E., Argall, B.D., Martin, A., 2004b. Integration of auditory and visual information about objects in superior temporal sulcus. Neuron 41.
Binkofski, F., Buccino, G., The role of ventral premotor cortex in action execution and action understanding. J. Physiol. (Paris) 99.
Binkofski, F., Dohle, C., Posse, S., Stephan, K.M., Hefter, H., Seitz, R.J., Freund, H.J., Human anterior intraparietal area subserves prehension: a combined lesion and functional MRI activation study. Neurology 50.
Binkofski, F., Buccino, G., Posse, S., Seitz, R.J., Rizzolatti, G., Freund, H., A fronto-parietal circuit for object manipulation in man: evidence from an fMRI-study. Eur. J. Neurosci. 11.
Boronat, C.B., Buxbaum, L.J., Coslett, H.B., Tang, K., Saffran, E.M., Kimberg, D.Y., Detre, J.A., Distinctions between manipulation and function knowledge of objects: evidence from functional magnetic resonance imaging. Brain Res. Cogn. Brain Res. 23.
Boynton, G.M., Engel, S.A., Glover, G.H., Heeger, D.J., Linear systems analysis of functional magnetic resonance imaging in human V1. J. Neurosci. 16.
Buccino, G., Binkofski, F., Fink, G.R., Fadiga, L., Fogassi, L., Gallese, V., Seitz, R.J., Zilles, K., Rizzolatti, G., Freund, H.J., Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. Eur. J. Neurosci. 13.
Buxbaum, L.J., Sirigu, A., Schwartz, M.F., Klatzky, R., Cognitive representations of hand posture in ideomotor apraxia. Neuropsychologia 41.
Buxbaum, L.J., Johnson-Frey, S.H., Bartlett-Williams, M., 2005a. Deficient internal models for planning hand-object interactions in apraxia. Neuropsychologia 43.
Buxbaum, L.J., Kyle, K.M., Menon, R., 2005b. On beyond mirror neurons: internal representations subserving imitation and recognition of skilled object-related actions in humans. Brain Res. Cogn. Brain Res. 25.
Chao, L.L., Martin, A., Representation of manipulable man-made objects in the dorsal stream. NeuroImage 12.
Chao, L.L., Haxby, J.V., Martin, A., Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects. Nat. Neurosci. 2.
Chao, L.L., Weisberg, J., Martin, A., Experience-dependent modulation of category-related cortical activity. Cereb. Cortex 12.
Colby, C.L., Goldberg, M.E., Space and attention in parietal cortex. Annu. Rev. Neurosci. 22.
Craighero, L., Bello, A., Fadiga, L., Rizzolatti, G., Hand action preparation influences the responses to hand pictures. Neuropsychologia 40.
Creem-Regehr, S.H., Lee, J.N., Neural representations of graspable objects: are tools special? Brain Res. Cogn. Brain Res. 22.
Culham, J.C., Human brain imaging reveals a parietal area specialized for grasping. In: Kanwisher, N., Duncan, J. (Eds.), Functional Neuroimaging of Visual Cognition: Attention and Performance XX. Oxford Univ. Press, Oxford.
Culham, J.C., Valyear, K.F., Human parietal cortex in action. Curr. Opin. Neurobiol. 16.
Culham, J.C., Danckert, S.L., DeSouza, J.F., Gati, J.S., Menon, R.S., Goodale, M.A., Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas. Exp. Brain Res. 153.
Culham, J.C., Cavina-Pratesi, C., Singhal, A., The role of parietal cortex in visuomotor control: what have we learned from neuroimaging? Neuropsychologia 44.
Damasio, H., Grabowski, T.J., Tranel, D., Ponto, L.L., Hichwa, R.D., Damasio, A.R., Neural correlates of naming actions and of naming spatial relations. NeuroImage 13.
Daprati, E., Sirigu, A., How we interact with objects: learning from brain lesions. Trends Cogn. Sci. 10.
Decety, J., Perani, D., Jeannerod, M., Bettinardi, V., Tadary, B., Woods, R., Mazziotta, J.C., Fazio, F., Mapping motor representations with positron emission tomography. Nature 371.
Derbyshire, N., Ellis, R., Tucker, M., The potentiation of two components of the reach-to-grasp action during object categorisation in visual memory. Acta Psychol. (Amst) 122.
Downing, P.E., Chan, A.W., Peelen, M.V., Dodds, C.M., Kanwisher, N., Domain specificity in visual cortex. Cereb. Cortex 16.
Ellis, R., Tucker, M., Micro-affordance: the potentiation of components of action by seen objects. Br. J. Psychol. 91.
Epstein, R., Kanwisher, N., A cortical representation of the local visual environment. Nature 392.
Frey, S.H., Vinton, D., Norlund, R., Grafton, S.T., Cortical topography of human anterior intraparietal cortex active during visually guided grasping. Brain Res. Cogn. Brain Res. 23.
Fridman, E.A., Immisch, I., Hanakawa, T., Bohlhalter, S., Waldvogel, D., Kansaku, K., Wheaton, L., Wu, T., Hallett, M., The role of the dorsal stream for gesture production. NeuroImage 29.
Friston, K.J., Henson, R.N., Commentary on: divide and conquer; a defence of functional localisers. NeuroImage 30.
Friston, K.J., Rotshtein, P., Geng, J.J., Sterzer, P., Henson, R.N., A critique of functional localisers. NeuroImage 30.
Galton, C.J., Patterson, K., Graham, K., Lambon-Ralph, M.A., Williams, G., Antoun, N., Sahakian, B.J., Hodges, J.R., Differing patterns of temporal atrophy in Alzheimer's disease and semantic dementia. Neurology 57.
Gibson, J.J., The Ecological Approach to Visual Perception. Houghton Mifflin, Dallas.
Grafton, S.T., Fadiga, L., Arbib, M.A., Rizzolatti, G., Premotor cortex activation during observation and naming of familiar tools. NeuroImage 6.
Grefkes, C., Weiss, P.H., Zilles, K., Fink, G.R., Crossmodal processing of object features in human anterior intraparietal cortex: an fMRI study implies equivalencies between humans and monkeys. Neuron 35.
Grèzes, J., Decety, J., Does visual perception of object afford action? Evidence from a neuroimaging study. Neuropsychologia 40.
Grèzes, J., Armony, J.L., Rowe, J., Passingham, R.E., 2003a. Activations related to mirror and canonical neurones in the human brain: an fMRI study. NeuroImage 18.
Grèzes, J., Tucker, M., Armony, J., Ellis, R., Passingham, R.E., 2003b. Objects automatically potentiate action: an fMRI study of implicit processing. Eur. J. Neurosci. 17.
Grossman, M., McMillan, C., Moore, P., Ding, L., Glosser, G., Work, M., Gee, J., What's in a name: voxel-based morphometric analyses of MRI and naming difficulty in Alzheimer's disease, frontotemporal dementia and corticobasal degeneration. Brain 127.
Haaland, K.Y., Harrington, D.L., Knight, R.T., Neural representations of skilled movement. Brain 123.
Hasson, U., Levy, I., Behrmann, M., Hendler, T., Malach, R., Eccentricity bias as an organizing principle for human high-order object areas. Neuron 34.
Hasson, U., Harel, M., Levy, I., Malach, R., Large-scale mirror-symmetry organization of human occipito-temporal object areas. Neuron 37.
Heilman, K.M., Rothi, L.J., Valenstein, E., Two forms of ideomotor apraxia. Neurology 32.
Heilman, K.M., Maher, L.M., Greenwald, M.L., Rothi, L.J., Conceptual apraxia from lateralized lesions. Neurology 49.
Helbig, H.B., Graf, M., Kiefer, M., The role of action representations in visual object recognition. Exp. Brain Res. 174.
Hihara, S., Notoya, T., Tanaka, M., Ichinose, S., Ojima, H., Obayashi, S., Fujii, N., Iriki, A., Extension of corticocortical afferents into the anterior bank of the intraparietal sulcus by tool-use training in adult monkeys. Neuropsychologia 44.

Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J.C., Rizzolatti, G., Grasping the intentions of others with one's own mirror neuron system. PLoS Biol. 3, e79.
Iriki, A., Tanaka, M., Iwamura, Y., Coding of modified body schema during tool use by macaque postcentral neurones. NeuroReport 7.
Ishibashi, H., Hihara, S., Takahashi, M., Heike, T., Yokota, T., Iriki, A., 2002a. Tool-use learning induces BDNF expression in a selective portion of monkey anterior parietal cortex. Brain Res. Mol. Brain Res. 102.
Ishibashi, H., Hihara, S., Takahashi, M., Heike, T., Yokota, T., Iriki, A., 2002b. Tool-use learning selectively induces expression of brain-derived neurotrophic factor, its receptor trkB, and neurotrophin 3 in the intraparietal multisensory cortex of monkeys. Brain Res. Cogn. Brain Res. 14, 3–9.
James, T.W., Humphrey, G.K., Gati, J.S., Menon, R.S., Goodale, M.A., Differential effects of viewpoint on object-driven activation in dorsal and ventral streams. Neuron 35.
Johnson-Frey, S.H., Cortical representations of human tool use. In: Johnson-Frey, S.H. (Ed.), Taking Action: Cognitive Neuroscience Perspectives on Intentional Acts. MIT Press, Cambridge, MA.
Johnson-Frey, S.H., The neural bases of complex tool use in humans. Trends Cogn. Sci. 8.
Johnson-Frey, S.H., Maloof, F.R., Newman-Norlund, R., Farrer, C., Inati, S., Grafton, S.T., Actions or hand-object interactions? Human inferior frontal cortex and action observation. Neuron 39.
Johnson-Frey, S.H., Newman-Norlund, R., Grafton, S.T., A distributed left hemisphere network active during planning of everyday tool use skills. Cereb. Cortex 15.
Kellenbach, M.L., Brett, M., Patterson, K., Actions speak louder than functions: the importance of manipulability and action in tool representation. J. Cogn. Neurosci. 15.
Kriegeskorte, N., Goebel, R., An efficient algorithm for topologically correct segmentation of the cortical sheet in anatomical MR volumes. NeuroImage 14.
Króliczak, G., Cavina-Pratesi, C., Goodman, D., Culham, J.C., What does the brain do when you fake it? An fMRI study of pantomimed and real grasping. J. Neurophysiol. 97.
Lewis, J.W., Cortical networks related to human use of tools. Neuroscientist 12.
Lewis, J.W., Brefczynski, J.A., Phinney, R.E., Janik, J.J., DeYoe, E.A., Distinct cortical pathways for processing tool versus animal sounds. J. Neurosci. 25.
Luppino, G., Rizzolatti, G., The organization of the frontal motor cortex. News Physiol. Sci. 15.
Luppino, G., Murata, A., Govoni, P., Matelli, M., Largely segregated parietofrontal connections linking rostral intraparietal cortex (areas AIP and VIP) and the ventral premotor cortex (areas F5 and F4). Exp. Brain Res. 128.
Malach, R., Levy, I., Hasson, U., The topography of high-order human object areas. Trends Cogn. Sci. 6.
Martin, A., Haxby, J.V., Lalonde, F.M., Wiggs, C.L., Ungerleider, L.G., Discrete cortical regions associated with knowledge of color and knowledge of action. Science 270.
Martin, A., Wiggs, C.L., Ungerleider, L.G., Haxby, J.V., Neural correlates of category-specific knowledge. Nature 379.
McMillan, C., Gee, J., Moore, P., Dennis, K., DeVita, C., Grossman, M., Confrontation naming and morphometric analyses of structural MRI in frontotemporal dementia. Dementia Geriatr. Cognit. Disord. 17.
Murata, A., Fadiga, L., Fogassi, L., Gallese, V., Raos, V., Rizzolatti, G., Object representation in the ventral premotor cortex (area F5) of the monkey. J. Neurophysiol. 78.
Murata, A., Gallese, V., Luppino, G., Kaseda, M., Sakata, H., Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. J. Neurophysiol. 83.
Ogawa, S., Tank, D.W., Menon, R., Ellermann, J.M., Kim, S.G., Merkle, H., Ugurbil, K., Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. Proc. Natl. Acad. Sci. U.S.A. 89.
Okada, T., Tanaka, S., Nakai, T., Nishizawa, S., Inui, T., Sadato, N., Yonekura, Y., Konishi, J., Naming of animals and tools: a functional magnetic resonance imaging study of categorical differences in the human brain areas commonly used for naming visually presented objects. Neurosci. Lett. 296.
Rice, N.J., Valyear, K.F., Goodale, M.A., Milner, A.D., Culham, J.C., Orientation sensitivity to graspable objects: an fMR adaptation study. NeuroImage 36, T87–T93.
Rizzolatti, G., Craighero, L., The mirror-neuron system. Annu. Rev. Neurosci. 27.
Rizzolatti, G., Luppino, G., The cortical motor system. Neuron 31.
Rothi, L.J.G., Heilman, K.M., In: Rothi, L.J.G., Heilman, K.M. (Eds.), Apraxia: The Neuropsychology of Action. Psychology Press, East Sussex, UK.
Saxe, R., Brett, M., Kanwisher, N., Divide and conquer: a defense of functional localizers. NeuroImage 30.
Shmuelof, L., Zohary, E., Dissociation between ventral and dorsal fMRI activation during object and action recognition. Neuron 47.
Shmuelof, L., Zohary, E., A mirror representation of others' actions in the human anterior parietal cortex. J. Neurosci. 26.
Singhal, A., Kaufman, L., Valyear, K.F., Culham, J.C., fMRI reactivation of the human lateral occipital complex during delayed actions to remembered objects. Vis. Cogn. 14.
Sirigu, A., Cohen, L., Duhamel, J.R., Pillon, B., Dubois, B., Agid, Y., A selective impairment of hand posture for object utilization in apraxia. Cortex 31.
Sirigu, A., Daprati, E., Buxbaum, L.J., Giraux, P., Pradat-Diehl, P., How the human brain represents manual gestures: effects of brain damage. In: Johnson-Frey, S.H. (Ed.), Taking Action: Cognitive Neuroscience Perspectives on Intentional Acts. MIT Press, Cambridge, MA.
Symes, E., Ellis, R., Tucker, M., Visual object affordances: object orientation. Acta Psychol. (Amst) 124.
Taira, M., Mine, S., Georgopoulos, A.P., Murata, A., Sakata, H., Parietal cortex neurons of the monkey related to the visual guidance of hand movement. Exp. Brain Res. 83.
Talairach, J., Tournoux, P., Co-Planar Stereotaxic Atlas of the Human Brain. Thieme Medical Publishers, New York.
Tucker, M., Ellis, R., On the relations between seen objects and components of potential actions. J. Exp. Psychol. Hum. Percept. Perform. 24.
Tucker, M., Ellis, R., Action priming by briefly presented objects. Acta Psychol. (Amst) 116.
Vainio, L., Ellis, R., Tucker, M., Symes, E., Manual asymmetries in visually primed grasping. Exp. Brain Res. 173.
Valyear, K.F., Culham, J.C., Sharif, N., Westwood, D., Goodale, M.A., A double dissociation between sensitivity to changes in object identity and object orientation in the ventral and dorsal visual streams: a human fMRI study. Neuropsychologia 44.
Wise, S.P., Boussaoud, D., Johnson, P.B., Caminiti, R., Premotor and parietal cortex: corticocortical connectivity and combinatorial computations. Annu. Rev. Neurosci. 20.
Zelkowicz, B.J., Herbster, A.N., Nebes, R.D., Mintun, M.A., Becker, J.T., An examination of regional cerebral blood flow during object naming tasks. J. Int. Neuropsychol. Soc. 4.


More information

Recoding, storage, rehearsal and grouping in verbal short-term memory: an fmri study p

Recoding, storage, rehearsal and grouping in verbal short-term memory: an fmri study p Neuropsychologia 38 (2000) 426±440 www.elsevier.com/locate/neuropsychologia Recoding, storage, rehearsal and grouping in verbal short-term memory: an fmri study p R.N.A. Henson a, b, *, N. Burgess b, c,

More information

Cortical Regions Involved in Perceiving Object Shape

Cortical Regions Involved in Perceiving Object Shape The Journal of Neuroscience, May 1, 2000, 20(9):3310 3318 Cortical Regions Involved in Perceiving Object Shape Zoe Kourtzi and Nancy Kanwisher Department of Brain and Cognitive Science, Massachusetts Institute

More information

Binocular Vision and The Perception of Depth

Binocular Vision and The Perception of Depth Binocular Vision and The Perception of Depth Visual Perception How one visually interprets a scene 4 forms of perception to be studied: Depth Color Temporal Motion Depth Perception How does one determine

More information

Independence of Visual Awareness from the Scope of Attention: an Electrophysiological Study

Independence of Visual Awareness from the Scope of Attention: an Electrophysiological Study Cerebral Cortex March 2006;16:415-424 doi:10.1093/cercor/bhi121 Advance Access publication June 15, 2005 Independence of Visual Awareness from the Scope of Attention: an Electrophysiological Study Mika

More information

Human brain potential correlates of repetition priming in face and name recognition

Human brain potential correlates of repetition priming in face and name recognition Neuropsychologia 40 (2002) 2057 2073 Human brain potential correlates of repetition priming in face and name recognition Stefan R. Schweinberger, Esther C. Pickering, A. Mike Burton, Jürgen M. Kaufmann

More information

The Scientific Data Mining Process

The Scientific Data Mining Process Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In

More information

DISPLAYING SMALL SURFACE FEATURES WITH A FORCE FEEDBACK DEVICE IN A DENTAL TRAINING SIMULATOR

DISPLAYING SMALL SURFACE FEATURES WITH A FORCE FEEDBACK DEVICE IN A DENTAL TRAINING SIMULATOR PROCEEDINGS of the HUMAN FACTORS AND ERGONOMICS SOCIETY 49th ANNUAL MEETING 2005 2235 DISPLAYING SMALL SURFACE FEATURES WITH A FORCE FEEDBACK DEVICE IN A DENTAL TRAINING SIMULATOR Geb W. Thomas and Li

More information

Part-Based Recognition

Part-Based Recognition Part-Based Recognition Benedict Brown CS597D, Fall 2003 Princeton University CS 597D, Part-Based Recognition p. 1/32 Introduction Many objects are made up of parts It s presumably easier to identify simple

More information

ART A. PROGRAM RATIONALE AND PHILOSOPHY

ART A. PROGRAM RATIONALE AND PHILOSOPHY ART A. PROGRAM RATIONALE AND PHILOSOPHY Art education is concerned with the organization of visual material. A primary reliance upon visual experience gives an emphasis that sets it apart from the performing

More information

Neuroplasticity associated with tactile language communication in a deaf-blind subject

Neuroplasticity associated with tactile language communication in a deaf-blind subject HUMAN NEUROSCIENCE ORIGINAL RESEARCH ARTICLE published: 04 January 2010 doi: 10.3389/neuro.09.060.2009 Neuroplasticity associated with tactile language communication in a deaf-blind subject Souzana Obretenova,

More information

Understanding Animate Agents

Understanding Animate Agents PSYCHOLOGICAL SCIENCE Research Report Understanding Animate Agents Distinct Roles for the Social Network and Mirror System Thalia Wheatley, Shawn C. Milleville, and Alex Martin Laboratory of Brain & Cognition,

More information

CELL PHONE INDUCED PERCEPTUAL IMPAIRMENTS DURING SIMULATED DRIVING

CELL PHONE INDUCED PERCEPTUAL IMPAIRMENTS DURING SIMULATED DRIVING CELL PHONE INDUCED PERCEPTUAL IMPAIRMENTS DURING SIMULATED DRIVING David L. Strayer, Frank A. Drews, Robert W. Albert, and William A. Johnston Department of Psychology University of Utah Salt Lake City,

More information

Quantifying Spatial Presence. Summary

Quantifying Spatial Presence. Summary Quantifying Spatial Presence Cedar Riener and Dennis Proffitt Department of Psychology, University of Virginia Keywords: spatial presence, illusions, visual perception Summary The human visual system uses

More information

Skill acquisition. Skill acquisition: Closed loop theory Feedback guides learning a motor skill. Problems. Motor learning practice

Skill acquisition. Skill acquisition: Closed loop theory Feedback guides learning a motor skill. Problems. Motor learning practice Motor learning theories closed loop theory schema theory hierarchical theory Skill acquisition Motor learning practice Fitt s three stages motor imagery physical changes Skill acquisition: Closed loop

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

COM CO P 5318 Da t Da a t Explora Explor t a ion and Analysis y Chapte Chapt r e 3

COM CO P 5318 Da t Da a t Explora Explor t a ion and Analysis y Chapte Chapt r e 3 COMP 5318 Data Exploration and Analysis Chapter 3 What is data exploration? A preliminary exploration of the data to better understand its characteristics. Key motivations of data exploration include Helping

More information

Using Retrocausal Practice Effects to Predict On-Line Roulette Spins. Michael S. Franklin & Jonathan Schooler UCSB, Department of Psychology.

Using Retrocausal Practice Effects to Predict On-Line Roulette Spins. Michael S. Franklin & Jonathan Schooler UCSB, Department of Psychology. Using Retrocausal Practice Effects to Predict On-Line Roulette Spins Michael S. Franklin & Jonathan Schooler UCSB, Department of Psychology Summary Modern physics suggest that time may be symmetric, thus

More information

NeuroImage. Taking perspective into account in a communicative task. Iroise Dumontheil a,, Olivia Küster a, Ian A. Apperly b, Sarah-Jayne Blakemore a

NeuroImage. Taking perspective into account in a communicative task. Iroise Dumontheil a,, Olivia Küster a, Ian A. Apperly b, Sarah-Jayne Blakemore a NeuroImage 52 (2010) 1574 1583 Contents lists available at ScienceDirect NeuroImage journal homepage: www.elsevier.com/locate/ynimg Taking perspective into account in a communicative task Iroise Dumontheil

More information

Convention Paper Presented at the 112th Convention 2002 May 10 13 Munich, Germany

Convention Paper Presented at the 112th Convention 2002 May 10 13 Munich, Germany Audio Engineering Society Convention Paper Presented at the 112th Convention 2002 May 10 13 Munich, Germany This convention paper has been reproduced from the author's advance manuscript, without editing,

More information

A Short Introduction to Computer Graphics

A Short Introduction to Computer Graphics A Short Introduction to Computer Graphics Frédo Durand MIT Laboratory for Computer Science 1 Introduction Chapter I: Basics Although computer graphics is a vast field that encompasses almost any graphical

More information

Decoding mental states from brain activity in humans

Decoding mental states from brain activity in humans NEUROIMAGING Decoding mental states from brain activity in humans John-Dylan Haynes* and Geraint Rees Abstract Recent advances in human neuroimaging have shown that it is possible to accurately decode

More information

Experiment #1, Analyze Data using Excel, Calculator and Graphs.

Experiment #1, Analyze Data using Excel, Calculator and Graphs. Physics 182 - Fall 2014 - Experiment #1 1 Experiment #1, Analyze Data using Excel, Calculator and Graphs. 1 Purpose (5 Points, Including Title. Points apply to your lab report.) Before we start measuring

More information

GE Medical Systems Training in Partnership. Module 8: IQ: Acquisition Time

GE Medical Systems Training in Partnership. Module 8: IQ: Acquisition Time Module 8: IQ: Acquisition Time IQ : Acquisition Time Objectives...Describe types of data acquisition modes....compute acquisition times for 2D and 3D scans. 2D Acquisitions The 2D mode acquires and reconstructs

More information

DISSECTION OF THE SHEEP'S BRAIN

DISSECTION OF THE SHEEP'S BRAIN DISSECTION OF THE SHEEP'S BRAIN Introduction The purpose of the sheep brain dissection is to familiarize you with the threedimensional structure of the brain and teach you one of the great methods of studying

More information

Data Mining: Exploring Data. Lecture Notes for Chapter 3. Introduction to Data Mining

Data Mining: Exploring Data. Lecture Notes for Chapter 3. Introduction to Data Mining Data Mining: Exploring Data Lecture Notes for Chapter 3 Introduction to Data Mining by Tan, Steinbach, Kumar Tan,Steinbach, Kumar Introduction to Data Mining 8/05/2005 1 What is data exploration? A preliminary

More information

Neuropsychologia 50 (2012) 2245 2256. Contents lists available at SciVerse ScienceDirect. Neuropsychologia

Neuropsychologia 50 (2012) 2245 2256. Contents lists available at SciVerse ScienceDirect. Neuropsychologia Neuropsychologia 50 (2012) 2245 2256 Contents lists available at SciVerse ScienceDirect Neuropsychologia journal homepage: www.elsevier.com/locate/neuropsychologia Note Inside out: A neuro-behavioral signature

More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

How To Run Statistical Tests in Excel

How To Run Statistical Tests in Excel How To Run Statistical Tests in Excel Microsoft Excel is your best tool for storing and manipulating data, calculating basic descriptive statistics such as means and standard deviations, and conducting

More information

Digital image processing

Digital image processing 746A27 Remote Sensing and GIS Lecture 4 Digital image processing Chandan Roy Guest Lecturer Department of Computer and Information Science Linköping University Digital Image Processing Most of the common

More information

THE HUMAN BRAIN. observations and foundations

THE HUMAN BRAIN. observations and foundations THE HUMAN BRAIN observations and foundations brains versus computers a typical brain contains something like 100 billion miniscule cells called neurons estimates go from about 50 billion to as many as

More information

Cognitive Neuroscience. Questions. Multiple Methods. Electrophysiology. Multiple Methods. Approaches to Thinking about the Mind

Cognitive Neuroscience. Questions. Multiple Methods. Electrophysiology. Multiple Methods. Approaches to Thinking about the Mind Cognitive Neuroscience Approaches to Thinking about the Mind Cognitive Neuroscience Evolutionary Approach Sept 20-22, 2004 Interdisciplinary approach Rapidly changing How does the brain enable cognition?

More information

NIH Public Access Author Manuscript Neuroimage. Author manuscript; available in PMC 2012 April 1.

NIH Public Access Author Manuscript Neuroimage. Author manuscript; available in PMC 2012 April 1. NIH Public Access Author Manuscript Published in final edited form as: Neuroimage. 2011 April 1; 55(3): 1357 1372. doi:10.1016/j.neuroimage.2010.12.024. Patterns of brain reorganization subsequent to left

More information

Linking Hemodynamic and Electrophysiological Measures of Brain Activity: Evidence from Functional MRI and Intracranial Field Potentials

Linking Hemodynamic and Electrophysiological Measures of Brain Activity: Evidence from Functional MRI and Intracranial Field Potentials Linking Hemodynamic and Electrophysiological Measures of Brain Activity: Evidence from Functional MRI and Intracranial Field Potentials Scott A. Huettel 1, Martin J. McKeown 1, Allen W. Song 1, Sarah Hart

More information

A STATISTICS COURSE FOR ELEMENTARY AND MIDDLE SCHOOL TEACHERS. Gary Kader and Mike Perry Appalachian State University USA

A STATISTICS COURSE FOR ELEMENTARY AND MIDDLE SCHOOL TEACHERS. Gary Kader and Mike Perry Appalachian State University USA A STATISTICS COURSE FOR ELEMENTARY AND MIDDLE SCHOOL TEACHERS Gary Kader and Mike Perry Appalachian State University USA This paper will describe a content-pedagogy course designed to prepare elementary

More information

Solving Simultaneous Equations and Matrices

Solving Simultaneous Equations and Matrices Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering

More information

Determining optimal window size for texture feature extraction methods

Determining optimal window size for texture feature extraction methods IX Spanish Symposium on Pattern Recognition and Image Analysis, Castellon, Spain, May 2001, vol.2, 237-242, ISBN: 84-8021-351-5. Determining optimal window size for texture feature extraction methods Domènec

More information

A System for Capturing High Resolution Images

A System for Capturing High Resolution Images A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: pitas@zeus.csd.auth.gr

More information

Functions of the Brain

Functions of the Brain Objectives 0 Participants will be able to identify 4 characteristics of a healthy brain. 0 Participants will be able to state the functions of the brain. 0 Participants will be able to identify 3 types

More information

An Introduction to ERP Studies of Attention

An Introduction to ERP Studies of Attention An Introduction to ERP Studies of Attention Logan Trujillo, Ph.D. Post-Doctoral Fellow University of Texas at Austin Cognitive Science Course, Fall 2008 What is Attention? Everyone knows what attention

More information

International Year of Light 2015 Tech-Talks BREGENZ: Mehmet Arik Well-Being in Office Applications Light Measurement & Quality Parameters

International Year of Light 2015 Tech-Talks BREGENZ: Mehmet Arik Well-Being in Office Applications Light Measurement & Quality Parameters www.led-professional.com ISSN 1993-890X Trends & Technologies for Future Lighting Solutions ReviewJan/Feb 2015 Issue LpR 47 International Year of Light 2015 Tech-Talks BREGENZ: Mehmet Arik Well-Being in

More information

RIEGL VZ-400 NEW. Laser Scanners. Latest News March 2009

RIEGL VZ-400 NEW. Laser Scanners. Latest News March 2009 Latest News March 2009 NEW RIEGL VZ-400 Laser Scanners The following document details some of the excellent results acquired with the new RIEGL VZ-400 scanners, including: Time-optimised fine-scans The

More information

Multiscale Object-Based Classification of Satellite Images Merging Multispectral Information with Panchromatic Textural Features

Multiscale Object-Based Classification of Satellite Images Merging Multispectral Information with Panchromatic Textural Features Remote Sensing and Geoinformation Lena Halounová, Editor not only for Scientific Cooperation EARSeL, 2011 Multiscale Object-Based Classification of Satellite Images Merging Multispectral Information with

More information

Have you ever missed a call while moving? : The Optimal Vibration Frequency for Perception in Mobile Environments

Have you ever missed a call while moving? : The Optimal Vibration Frequency for Perception in Mobile Environments Have you ever missed a call while moving? : The Optimal Vibration Frequency for Perception in Mobile Environments Youngmi Baek and Rohae Myung Dept. of Industrial and Information Engineering Korea University

More information

GAZETRACKERrM: SOFTWARE DESIGNED TO FACILITATE EYE MOVEMENT ANALYSIS

GAZETRACKERrM: SOFTWARE DESIGNED TO FACILITATE EYE MOVEMENT ANALYSIS GAZETRACKERrM: SOFTWARE DESIGNED TO FACILITATE EYE MOVEMENT ANALYSIS Chris kankford Dept. of Systems Engineering Olsson Hall, University of Virginia Charlottesville, VA 22903 804-296-3846 cpl2b@virginia.edu

More information

NeuroImage. Brain talks over boring quotes: Top-down activation of voice-selective areas while listening to monotonous direct speech quotations

NeuroImage. Brain talks over boring quotes: Top-down activation of voice-selective areas while listening to monotonous direct speech quotations NeuroImage 6 (22) 832 842 Contents lists available at SciVerse ScienceDirect NeuroImage journal homepage: www.elsevier.com/locate/ynimg Full Length Article Brain talks over boring quotes: Top-down activation

More information

Medical Image Processing on the GPU. Past, Present and Future. Anders Eklund, PhD Virginia Tech Carilion Research Institute andek@vtc.vt.

Medical Image Processing on the GPU. Past, Present and Future. Anders Eklund, PhD Virginia Tech Carilion Research Institute andek@vtc.vt. Medical Image Processing on the GPU Past, Present and Future Anders Eklund, PhD Virginia Tech Carilion Research Institute andek@vtc.vt.edu Outline Motivation why do we need GPUs? Past - how was GPU programming

More information

Overlapping mechanisms of attention and spatial working memory

Overlapping mechanisms of attention and spatial working memory Review 119 Overlapping mechanisms of attention and spatial working memory Edward Awh and John Jonides Spatial selective attention and spatial working memory have largely been studied in isolation. Studies

More information

Machine Learning for Medical Image Analysis. A. Criminisi & the InnerEye team @ MSRC

Machine Learning for Medical Image Analysis. A. Criminisi & the InnerEye team @ MSRC Machine Learning for Medical Image Analysis A. Criminisi & the InnerEye team @ MSRC Medical image analysis the goal Automatic, semantic analysis and quantification of what observed in medical scans Brain

More information

Sensory-motor control scheme based on Kohonen Maps and AVITE model

Sensory-motor control scheme based on Kohonen Maps and AVITE model Sensory-motor control scheme based on Kohonen Maps and AVITE model Juan L. Pedreño-Molina, Antonio Guerrero-González, Oscar A. Florez-Giraldo, J. Molina-Vilaplana Technical University of Cartagena Department

More information

Visual areas involved in the perception of human movement from dynamic form analysis

Visual areas involved in the perception of human movement from dynamic form analysis BRAIN IMAGING Visual areas involved in the perception of human movement from dynamic form analysis Lars Michels, 1,CA Markus Lappe 1 and Lucia Maria Vaina 2,3 1 Psychologisches Institut II,WestfÌlische

More information

PUTTING SCIENCE BEHIND THE STANDARDS. A scientific study of viewability and ad effectiveness

PUTTING SCIENCE BEHIND THE STANDARDS. A scientific study of viewability and ad effectiveness PUTTING SCIENCE BEHIND THE STANDARDS A scientific study of viewability and ad effectiveness EXECUTIVE SUMMARY The concept of when an ad should be counted as viewable, what effects various levels of viewability

More information

Why do we have so many brain coordinate systems? Lilla ZölleiZ WhyNHow seminar 12/04/08

Why do we have so many brain coordinate systems? Lilla ZölleiZ WhyNHow seminar 12/04/08 Why do we have so many brain coordinate systems? Lilla ZölleiZ WhyNHow seminar 12/04/08 About brain atlases What are they? What do we use them for? Who creates them? Which one shall I use? Brain atlas

More information

Whole-brain Functional MR Imaging Activation from a Finger-tapping Task Examined with Independent Component Analysis

Whole-brain Functional MR Imaging Activation from a Finger-tapping Task Examined with Independent Component Analysis AJNR Am J Neuroradiol 21:1629 1635, October 2000 Whole-brain Functional MR Imaging Activation from a Finger-tapping Task Examined with Independent Component Analysis Chad H. Moritz, Victor M. Haughton,

More information

Data Mining: Exploring Data. Lecture Notes for Chapter 3. Introduction to Data Mining

Data Mining: Exploring Data. Lecture Notes for Chapter 3. Introduction to Data Mining Data Mining: Exploring Data Lecture Notes for Chapter 3 Introduction to Data Mining by Tan, Steinbach, Kumar What is data exploration? A preliminary exploration of the data to better understand its characteristics.

More information

Using angular speed measurement with Hall effect sensors to observe grinding operation with flexible robot.

Using angular speed measurement with Hall effect sensors to observe grinding operation with flexible robot. Using angular speed measurement with Hall effect sensors to observe grinding operation with flexible robot. François Girardin 1, Farzad Rafieian 1, Zhaoheng Liu 1, Marc Thomas 1 and Bruce Hazel 2 1 Laboratoire

More information

THEORY, SIMULATION, AND COMPENSATION OF PHYSIOLOGICAL MOTION ARTIFACTS IN FUNCTIONAL MRI. Douglas C. Noll* and Walter Schneider

THEORY, SIMULATION, AND COMPENSATION OF PHYSIOLOGICAL MOTION ARTIFACTS IN FUNCTIONAL MRI. Douglas C. Noll* and Walter Schneider THEORY, SIMULATION, AND COMPENSATION OF PHYSIOLOGICAL MOTION ARTIFACTS IN FUNCTIONAL MRI Douglas C. Noll* and Walter Schneider Departments of *Radiology, *Electrical Engineering, and Psychology University

More information