FIGURE 2 (A) The mean onset times across individual time courses (along with the standard error and the number of observations shown in parentheses) for five different cortical regions evoked by stimulating the central versus peripheral visual fields. Onset latencies were computed by an algorithm in which onset time was defined as the time when the activity exceeded two standard deviations of the noise level. The group-averaged cortical response profiles shown for two cortical areas [V1 and intraparietal sulcus (IPS)] display curves associated with central field stimulation (solid lines) and peripheral field stimulation (dashed lines). The IPS is a dorsal stream structure. The MRIs shown at the right reveal the locations of these visual areas for two subjects. The small triangles and circles shown on the MRIs reflect source locations associated with peripheral field and central field stimulation, respectively. (B) Another parietal location (a dorsal stream structure), the superior lateral occipital gyrus (S. LOG), also revealed earlier onset times for peripheral field stimulation.
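
The onset-detection rule described in the caption can be expressed as a short algorithm: estimate the noise level from a prestimulus baseline and take the first poststimulus sample whose activity exceeds two standard deviations of that noise. The following is a minimal sketch in Python; the function name, baseline window, and synthetic data are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def onset_latency(time_course, times, baseline=(-0.1, 0.0), k=2.0):
    """Return the first poststimulus time at which the absolute amplitude
    exceeds k standard deviations of the baseline (prestimulus) noise."""
    base = time_course[(times >= baseline[0]) & (times < baseline[1])]
    threshold = k * base.std()
    post = times >= 0.0
    above = np.abs(time_course[post]) > threshold
    if not above.any():
        return None  # no detectable onset at this threshold
    return times[post][above.argmax()]

# Synthetic example: baseline noise plus a response beginning ~80 msec after onset
times = np.arange(-0.1, 0.4, 0.001)
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, times.size)
signal[times >= 0.08] += 8.0
print(onset_latency(signal, times))  # approximately 0.08 s
```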

the region of the fovea for real objects or faces (Desimone and Gross, 1979; Gross et al., 1974; Perrett et al., 1982; Baylis et al., 1987). Responses to faces are 2-10 times greater than responses to grating stimuli, simple geometrical stimuli, or complex three-dimensional objects (Perrett et al., 1982; Baylis et al., 1987). These cells show some perceptual invariance because they respond well even when the faces are inverted or rotated. TE projects heavily into the perirhinal region, a region important for visual recognition memory of objects because it projects to the hippocampus via the entorhinal cortex. Ablation of TE produces long-lasting deficits in the monkey's ability to learn visual discriminations but leaves intact the ability to learn discriminations in other modalities (e.g., Mishkin, 1982; Cowey and Weiskrantz, 1967; Gross, 1973; Sahgal and Iversen, 1978; Bagshaw et al., 1972; Butter, 1969).

The use of tasks involving face perception or face recognition in functional neuroimaging studies became popular because of converging lines of evidence from nonhuman animal studies, human lesion studies, and noninvasive functional imaging studies in humans, suggesting that a region of the temporal lobe, belonging to the object-processing stream, is selective for face processing. Face-selective neurons have been identified in monkeys in (1) area TE (Gross et al., 1972; Rolls et al., 1977), (2) the superior temporal sulcus (STS), which receives projections from inferotemporal cortex (Gross et al., 1972; Perrett et al., 1982), and (3) the amygdala, parietal cortex, and frontal cortex, all of which receive projections from the fundus of the STS (Sanghera et al., 1979; Aggleton et al., 1980; Leinonen and Nyman, 1979; Seltzer and Pandya, 1978; Pigarev et al., 1979; Jacobsen and Trojanowski, 1977; Rolls, 1984).

Clinical evidence for face-selective regions in humans comes from cases of prosopagnosia, a difficulty in recognizing faces of familiar persons that is associated with damage to the inferior occipitotemporal region (Meadows, 1974; Whiteley and Warrington, 1977; Damasio, 1985; Sergent and Poncet, 1990). Object recognition, in contrast, is not typically impaired in prosopagnosic patients (De Renzi, 1986). As Damasio and colleagues (1982) and Logothetis and Sheinberg (1996) point out, prosopagnosics tend to have difficulties making within-category discriminations. For example, these patients have difficulty differentiating between various cars or fruits.

Sergent and colleagues (1992) attempted to dissociate face- and object-selective areas within inferotemporal cortex by comparing regional cerebral blood flow (rCBF) measures while subjects were engaged in several tasks (e.g., discriminating the orientation of sine wave gratings, face gender, face identity, object identity). They concluded that face identity caused activation of right extrastriate cortex, the fusiform gyrus, and the anterior temporal cortex of both hemispheres. In contrast, object recognition primarily activated left occipitotemporal cortex. Other PET studies (Kim et al., 1999; Kapur et al., 1995; Campanella et al., 2001) and fMRI studies (e.g., Puce et al., 1995; Courtney et al., 1997; Kanwisher et al., 1997; Haxby et al., 1999; Halgren et al., 1999; Jiang et al., 2000; Maguire et al., 2001) attempted to demonstrate the selectivity and locus of face-selective areas. Results from all of these studies generally agree in showing fusiform gyrus involvement (whether medial, lateral, or posterior), but some studies revealed additional areas as well, such as inferior temporal cortex, right middle occipital gyrus, right lingual gyrus, superior temporal cortex, inferior occipital sulcus, lateral occipital sulcus, and middle temporal gyrus/STS, in addition to prefrontal regions.

In addition, the fMRI/PET studies generally suggest that object perception activated an occipitotemporal region (fusiform gyrus) similar to the area activated by face perception, but more medial (Malach et al., 1995; Martin et al., 1995; Puce et al., 1995; Halgren et al., 1999; Haxby et al., 1999; Martin and Chao, 2001; Maguire et al., 2001). However, there are clear differences of opinion concerning the exact locations of the object areas. Malach and colleagues (1995), for example, proposed that lateral occipital cortex (LO) near the temporal border was the homolog of monkey TE, because pictures of objects activated this region much more strongly than did scrambled images. Kanwisher and colleagues (1997) also found a region selective to images of objects compared to scrambled images, but this region was just anterior and ventral to Malach's LO. In both cases, the putative object region evoked strong responses to objects and faces, as well as to familiar and novel objects.

MEG examinations of face processing have primarily focused on deflections in averaged MEG wave forms evoked by face stimuli versus other objects or degraded face stimuli. Lu and colleagues (1991) found that faces evoked greater amplitude deflections than did birds at 150, 260, and 500 msec, which were localized to the bilateral occipitotemporal junction, inferior parietal cortex, and middle temporal lobe, respectively. Sams and colleagues (1997) found a face-selective area in four of seven subjects in occipitotemporal cortex peaking at 150-170 msec. They noted, however, that the source locations of the face-specific response varied across subjects and that even nonface stimuli can activate the face area, although with less magnitude. Swithenby and colleagues (1998) also found a significant difference for faces versus other images (i.e., greater normalized regional power) at 140 msec in the sensors over the right occipitotemporal region, and source modeling suggested the ventral occipitotemporal region (see Fig. 3). Each stimulus image subtended a visual angle of 8 x 6° and was luminance matched to the other images.

FIGURE 3 Averaged evoked field patterns recorded over right occipitotemporal cortex for four subjects. Responses to faces are depicted by solid tracings; responses to other images are shown as dotted lines. The largest peaks, occurring between 130 and 150 msec for each subject, were evoked by face stimuli. Reprinted with permission from Experimental Brain Research; Neural processing of human faces: A magnetoencephalographic study; S. J. Swithenby, A. J. Bailey, S. Bräutigam, O. E. Josephs, V. Jousmäki, and C. D. Tesche; Vol. 118, p. 505, Fig. 3, 1998. Copyright 1998 Springer-Verlag.

In contrast, Liu and colleagues (1999), using magnetic field tomography (MFT) analysis on single-trial data, found no evidence of a face-specific area, in the sense that all complex objects appeared to activate the fusiform gyrus (at 125-175 msec and 240-265 msec in the left hemisphere and 150-180 msec for the right fusiform), along with many other areas. A previous report from the same laboratory (Streit et al., 1999), using MFT analysis on averaged responses from single subjects, found early activity (~160 msec) related to the emotional content of faces in the posterior sector of superior right temporal cortex and inferior occipitotemporal cortex, followed by bilateral activity (right hemisphere leading the left hemisphere) in the middle sector of temporal cortex (~200 msec) and in the amygdala (~220 msec).

In contrast with the Liu et al. (1999) study, Linkenkaer-Hansen and colleagues (1998) also localized activity around 170 msec to the fusiform gyrus for each condition (faces, pointillized faces, and inverted faces), but they interpreted these results as evidence for face selectivity [Halgren et al. (2000) found bilateral fusiform activity]. They also suggested that face selectivity may occur as early as 120 msec (although the differences in latencies were not statistically significant), because latency differences were evident in the MEG wave forms between faces and pointillized faces (i.e., degraded faces). However, even though these investigators controlled for many of the low-level features such as luminance, contrast, and intensity distribution of the gray-scale levels, such latency differences can occur due to differences in spatial frequency between these stimuli (see Okada et al., 1982). Face stimuli are usually regarded as containing mostly low-spatial-frequency content, but pointillized faces have additional high-frequency content. In the face and inverted-face conditions, however, where spatial frequency was held constant, inverted faces resulted in greater amplitude and longer peak latency when compared to upright faces. Liu and colleagues (2000) did not attempt source localization, but they found that face stimuli evoked larger responses than nonface stimuli (animal and human hand) at 160 msec after stimulus onset in sensors over both left and right occipitotemporal cortex. Inverted-face stimuli and face stimuli produced similar response magnitudes, but the former occurred 13 msec later than the latter. These latency differences were similar to those noted above in the study by Linkenkaer-Hansen and colleagues (1998).
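
The spatial-frequency argument above can be made concrete: a common way to compare the spatial-frequency content of two stimulus sets is to compute the radially averaged power spectrum of each image. The sketch below is a generic illustration, not the analysis used in any of the cited studies; the image arrays are placeholders for real stimuli.

```python
import numpy as np

def radial_power_spectrum(image):
    """Radially averaged power spectrum of a 2-D grayscale image.
    Returns mean power as a function of spatial-frequency radius (in cycles
    per image), which can be compared across stimulus categories."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)
    radial_sum = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return radial_sum / np.maximum(counts, 1)

# Comparing, e.g., a face image with its pointillized version (same shape):
# the pointillized image should show relatively more power at high radii.
face = np.random.rand(256, 256)          # placeholder for a real face image
pointillized = np.random.rand(256, 256)  # placeholder for its pointillized version
spec_face = radial_power_spectrum(face)
spec_point = radial_power_spectrum(pointillized)
```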

In conclusion, the MEG studies, along with PET and fMRI studies of face selectivity, generally suggest that the right fusiform gyrus produces larger response amplitudes to face stimuli than to nonface stimuli. Other regions (e.g., extrastriate cortex) were also identified as being more strongly activated in response to face stimuli. However, most of these studies did not apply appropriate controls to eliminate the alternative interpretation that response strengths, peak latencies, and moments may vary between faces and other objects because (1) the area of retinal stimulation was not the same, (2) contrast or luminance was not equated, and (3) spatial frequency content was not equated. The MEG studies presented here indicate that each of these parameters can have a large impact on source amplitude, latency, and location. In this sense, these studies cannot demonstrate that a region of the fusiform gyrus is selective for faces until all of these lower level features are controlled.

Other functional neuroimaging studies (e.g., Shen et al., 1999; Haxby et al., 1999) also question the selectivity of the face area, because face perception does not appear to be associated with a region or set of regions dedicated solely to face processing. These "face-selective" regions also respond significantly to objects such as houses. Gauthier and colleagues (1999) suggest that results showing a specialized face area in the fusiform gyrus merely reflect the expertise we have at perceiving and remembering faces, rather than being specific to faces. These investigators showed that expertise with nonface objects can also activate the right fusiform gyrus. Recent single-unit studies have also shown that the selectivity of inferotemporal (IT) neurons found for faces can be generated for any novel object (e.g., computer-generated wire and spheroidal objects) as the result of extensive training (reviewed in Logothetis and Sheinberg, 1996; Kobatake et al., 1998). According to Logothetis and Sheinberg (1996), the anterior region of IT is more concerned with object class, whereas the posterior end of IT is concerned with the specifics of an object item (also see review by Tanaka, 1997). Consistent with the suggestion by Gauthier and colleagues, face selectivity can be viewed as another form of object recognition. The site of activation within the inferior temporal lobe (i.e., anterior versus posterior) depends on whether object class or object identity is manipulated by the experimental task. Lesion studies also suggest two functional subdivisions of inferotemporal cortex: (1) lesions in posterior TEO lead to simple pattern deficits, and (2) lesions in anterior TE lead to associative and visual memory deficits (see review by Logothetis and Sheinberg, 1996).

Cue Invariance

A few studies have tackled the issue of cue invariance of the object-related processing areas. As touched on briefly above, monkey studies indicate both segregation of visual cues into different processing streams, such as the M and P streams (e.g., Livingstone and Hubel, 1988), and convergence of several primary cues within single visual areas or even within single neurons. In the latter case, the visual system can be viewed as a series of processing stages representing a progressive increase in the complexity of neuronal representations, each dependent on the output of preceding stages (e.g., Zeki, 1978; Van Essen, 1985; Van Essen and Maunsell, 1983; De Yoe et al., 1994; Sary et al., 1993). As Grill-Spector and colleagues (1998) note, the organizational principles underlying the specialization of visual areas remain a matter of debate (e.g., Ungerleider and Haxby, 1994; Goodale et al., 1994). Grill-Spector and colleagues used fMRI to determine the cue invariance of object-related areas; i.e., does the same preferential activation exist for objects defined solely by luminance, motion, or texture cues? These authors found a region in LO that was preferentially activated by objects defined by all cues tested. They found that an earlier retinotopic area, V3a, exhibited convergence of object-related cues as well.

Okusa and colleagues (2000), using MEG, examined cue-invariant shape perception in humans. These investigators presented three different dynamic random-dot patterns (flicker, texture, and luminance), subtending 5 x 5° in the central field. Three different stimulus figures (diamond, noise, and a cross) were presented in the foreground against the background random dots. In the luminance condition, for example, dots comprising the stimulus figure (e.g., diamond) were abruptly reduced in luminance. MEG recordings were made from occipitotemporal cortex using a 37-channel system. By measuring the peak amplitude and peak latency of the first component in the wave forms, they found that the peak latency differed across cues (250 msec for luminance, 270 msec for flicker, and 360 msec for texture) but not across figures (diamond, cross, noise). In contrast, the peak amplitude differed across figures (96-114 fT), but not across cues. Source locations were determined using a single-dipole model for a time interval of 0-500 msec poststimulus. Source locations evoked by the figures were similar across conditions within subjects but differed across subjects (fusiform gyrus, lateral occipital gyrus, etc.). Reaction times correlated with the peak latency differences noted above for the different cues. They concluded that shape defined by different visual cues activated the same region in lateral extrastriate cortex regardless of the differences between the cues. The correspondence between the reaction times and peak latencies for the cues suggests that LO is responsible for the perception of shape.
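
The peak measures referred to here (peak amplitude and peak latency of the first component) can be sketched as a simple search for the largest deflection within a poststimulus window. The window limits, time base, and synthetic waveform below are illustrative assumptions, not the analysis parameters used by Okusa and colleagues.

```python
import numpy as np

def first_peak(time_course, times, window=(0.15, 0.45)):
    """Return (latency in s, signed amplitude) of the largest absolute
    deflection within a poststimulus search window (in seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    segment = time_course[mask]
    idx = np.abs(segment).argmax()
    return times[mask][idx], segment[idx]

# Synthetic example: a Gaussian-shaped component peaking near 270 msec
times = np.arange(-0.1, 0.6, 0.001)
waveform = 100e-15 * np.exp(-((times - 0.27) ** 2) / (2 * 0.03 ** 2))  # ~100 fT peak
latency, amplitude = first_peak(waveform, times)
print(latency, amplitude)  # ~0.27 s, ~1e-13 (T)
```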

Motion or Spatial Vision (Dorsal Processing Stream)

The magnocellular layers of the monkey LGN, associated with motion pathways, project to V1, V2, V3, MT, MST, the ventral intraparietal area (VIP), and area 7a in parietal cortex (Van Essen and Maunsell, 1983). In general, retinotopic specificity decreases progressively at successive levels of the motion pathway while average receptive field (RF) area increases (Maunsell and Newsome, 1987). MT, found on the posterior bank and floor of the superior temporal sulcus, is the lowest order area in which a selective emphasis on motion analysis is apparent (Van Essen and Maunsell, 1983). V1, V2, and MT all have direction-selective cells, but the preferred range of speeds for the overall population of cells is nearly an order of magnitude greater in MT than in V1. In addition, there are pronounced surround interactions in MT that permit the inhibition of responses within the excitatory RF due to motion in other parts of the visual field (signaling relative motion). MST receives strong input from MT, is position invariant, and is sensitive to translational motion, expansion, contraction, and rotation (Geesaman and Andersen, 1996). In general, there is convergence of inputs in the STS from the far peripheral representations of V1 and V2. Inputs to parietal cortex tend to arise either from areas that have been implicated in spatial or motion analysis (e.g., areas within the STS) or from peripheral field representations in prestriate cortex (Baizer et al., 1991).

Many types of motion have been studied with the functional neuroimaging techniques that have been available over the past 20 years (Watson et al., 1993; Tootell et al., 1995a,b; Cheng et al., 1995; Cornette et al., 1998). Most of this research has focused on trying to identify and characterize the human homolog of monkey area MT. MT in monkeys has been studied extensively by a number of different groups to identify the characteristics of this motion-sensitive area, including identifying cells that are direction selective, speed selective, and orientation selective (Zeki, 1980; Maunsell and Van Essen, 1983; Albright, 1984; Lagae et al., 1993), as well as defining the receptive field properties (Felleman and Kaas, 1984) and the M and P contributions to this area (Maunsell et al., 1990). The overall conclusion of these studies is that monkey area MT is sensitive to a diverse range of motion stimuli while being fairly insensitive to other visual characteristics such as color. Accordingly, functional neuroimaging studies have examined a variety of motion stimuli, including continuous motion using random-dot displays, changes in direction of motion, onset and offset of motion, and apparent motion. The functional imaging tools that rely on measuring metabolic or blood flow changes have had reasonable success at locating a region in humans near the parieto-temporo-occipital border that is homologous to monkey area MT/MST (Watson et al., 1993; Tootell et al., 1995a,b; Cheng et al., 1995; Cornette et al., 1998). MEG studies have shown reasonable agreement on the location of the motion-specific activity while adding temporal information about this motion-sensitive area.

The visual MEG studies of motion have attempted to characterize fully the different types of motion, as many VEP (Clarke, 1973a,b), PET, and fMRI studies have done in the past. For example, ffytche et al. (1995a) found that area V5 (or MT) was also activated by the perception of motion defined by hue rather than luminance cues. In this study, area V4 (typically associated with color processing) was not active, suggesting that although V5 has traditionally been seen as insensitive to color, it can apparently use this information to extract motion information from stimuli. ffytche and colleagues interpreted these results as signifying a parvocellular contribution to area V5, as seen in monkeys. Fylan and colleagues (1997) similarly used isoluminant red/green sinusoidal gratings to demonstrate the presence of chromatic-sensitive units in V1 that play a role in processing motion information. Patzwahl et al. (1996) found that the initial response to motion was dipolar and localized to area V5. However, the later activity was much more complex, suggesting that multiple areas are necessary to interpret motion stimuli. Holliday et al. (1997) found evidence of an additional pathway to V5 based on a case study in which the patient's left hemisphere representation of V1 was absent. The normal hemisphere showed a bimodal response in area V5, but the response was delayed by 25-40 msec compared to controls. The affected side showed a unimodal response that was consistent in time with the second peak from the normal side. This pattern of results suggests that the input for the first peak originated from V1, whereas the second peak had a nongeniculostriate origin.

Anderson and colleagues (1996) performed an extensive set of experiments on area V5 to characterize the spatial and temporal frequency preferences of this area. They found that, consistent with the monkey literature, area V5 is selective for low spatial frequencies [<4 cycles per degree (cpd)] and a large range of temporal drift frequencies (<35 Hz). In addition, there was clear response saturation at 10% contrast. Based on psychophysical findings showing that the visual system is sensitive to motion at spatial frequencies below 35 cpd, and contrary to the ffytche et al. (1995a) findings, Anderson et al. suggest that the P pathway conveys this information and that this motion information is not processed in area V5. They also suggest that although the M pathway conveys some motion information, it primarily conveys information to the parietal cortex with the more specific goal of identifying motion in the peripheral field.

Although many of the MEG studies on the perception of motion have attempted to localize the motion-specific cortical areas, as the fMRI and PET studies have, they have often extended the analysis to provide additional information about the temporal characteristics as well. In another study, ffytche et al. (1995b) looked at the timing of the V5 response relative to different drift speeds. They found that for speeds greater than 22°/sec, the signals in V5 arrived before the signals in V1, whereas for speeds less than 6°/sec, the signals in V1 arrived before the signals in V5. This timing information is also in agreement with the case study by Holliday et al. (1997), which suggested the existence of a separate pathway to area V5.

Lam et al. (2000) studied the differences between coherent and incoherent motion using random-dot stimuli. Although onset latency did not change with stimulus speed, there was an inverse relationship between offset latency and speed. They also found that the sources evoked by slower motion stimuli localized more laterally than those evoked by faster motion stimuli. Similar results were seen for coherent as well as incoherent motion, suggesting that the same area processed both types of motion. In a study examining changes in the direction of motion (Ahlfors et al., 1999), fMRI locations were used to help guide multidipole analysis of the MEG data (although the MEG locations were not constrained by the fMRI activation areas). These investigators found five different areas responsive to the motion stimuli: MT+, the frontal eye field (FEF), posterior STS (pSTS), V3A, and V1/V2. They used the notation MT+ to refer to the collection of motion-sensitive areas, including MT and MST, located along the occipitotemporal border. The onset of the response was first seen in area MT+ around 130 msec, followed by activation of the FEF 0-20 msec later. The remaining areas were active later. In general, two types of responses were seen in the five areas: transient (MT, V1/V2) and sustained (pSTS, frontal). They suggest that pSTS is responsible for processing information from a number of different areas, given its long-duration response, consistent with results from monkey studies.

Uusitalo et al. (1996) looked at the memory lifetimes in the visual system; that is, how long does it take visual areas to recover from a previous stimulus? They found that the primary and early visual areas with short-onset latencies also had short memory traces (0.2-0.6 sec), whereas the higher order areas (prefrontal, supratemporal gyrus, parieto-occipital-temporal junction), which had later onset times, showed significantly longer memory traces (7-30 sec). A later study examined the memory lifetime of area V5, specifically (Uusitalo et al., 1997). This study showed that area V5 had an activation lifetime of between 0.4 and 1.4 sec across subjects, but the difference between hemispheres, within subjects, was less than 0.1 sec. The activation lifetime of area V5 implies that it is a later stage in the processing hierarchy, as suggested by Felleman and Van Essen (1991).

In addition to testing the different characteristics of continuous motion, a number of studies have examined the phenomenon of apparent motion. Apparent motion stimuli are spatially distinct stimuli presented sequentially at a sufficiently fast rate that the offset of the first stimulus and the onset of the second are perceived as a single moving stimulus (the phi phenomenon). Technically, all motion stimuli created by a computer reflect apparent motion, but the apparent motion stimuli discussed here refer to stimuli that the eye can distinguish as separate in a static state but that are perceived as motion with appropriate timing. Kaneoke and colleagues (1997) have performed the most extensive studies on this type of apparent motion. They initially compared apparent motion with continuous or smooth motion (using random-dot stimuli) to determine whether the two were perceived in the same way. Although the same area was active in response to these two types of stimuli, the peak latency was significantly shorter for apparent motion (162 and 171 msec) than for smooth motion stimuli (294-320 msec). They suggest that a different pathway is available to the apparent motion stimuli. Based on two additional studies, Kaneoke and colleagues concluded that apparent motion was not a simple summation of on/off responses (Kaneoke et al., 1998; Kawakami et al., 2000). Naito et al. (2000) found an interesting field effect in apparent motion: in the lower visual field there were no direction preferences; however, in the upper visual field, downward motion produced a significantly stronger response than upward motion despite the same location and orientation of the modeled response.

Two additional types of visual motion stimuli that have been studied briefly, with significant results, are visuomotor integration and action observation. In the first study, Nishitani et al. (1999) had subjects perform three tasks: visual fixation, eye pursuit, and finger-eye pursuit. They found four main areas of activation: V1, the anterior intraparietal lobule (aIPL), the dorsolateral frontal area (DLF), and the superior parietal lobule (SPL). They suggest that aIPL is critical for visuomotor integration, whereas SPL plays a role in visuospatial attention. They suggested that the lack of V5 activation was due to this area's weaker response to changes in direction, as opposed to the onset/offset of motion. A study by Vanni et al. (1999) found enhanced 8- to 15-Hz mu rhythm activity in the postcentral gyrus after a perceptual change to a motion stimulus in a binocular rivalry task. In this task, a high-contrast stationary horizontal grating was presented to one eye while a low-contrast vertical grating was presented to the other eye. Movement of the weak vertical grating caused perceptual shifts of attention from the high-contrast grating to the low-contrast grating (noted by a verbal response from the subject). The mu enhancement was seen primarily during the binocular rivalry task and only very weakly during a separate visual motion task, suggesting that somatosensory cortex plays a role in perception even with no associated movement. They suggest this is possibly related to eye fixation or microsaccades during the task.
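
Rhythmic effects of this kind, such as the 8- to 15-Hz mu enhancement above or the 20-Hz suppression and rebound discussed in the next paragraph, are typically quantified as changes in band-limited power over time. The sketch below assumes a band-pass filter plus Hilbert-envelope approach; the filter order, band edges, sampling rate, and baseline epoch are illustrative assumptions rather than the parameters of the cited studies.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(signal, sfreq, band=(15.0, 25.0)):
    """Band-pass filter a single-channel signal and return its power
    envelope (squared Hilbert amplitude), e.g., around 20 Hz."""
    nyq = sfreq / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered)) ** 2

# Suppression or rebound can then be expressed relative to a baseline epoch.
sfreq = 600.0
t = np.arange(0, 5.0, 1.0 / sfreq)
meg = np.random.randn(t.size)                   # placeholder for a MEG channel
env = band_power_envelope(meg, sfreq)
baseline = env[t < 1.0].mean()
relative_change = (env - baseline) / baseline   # <0 suppression, >0 rebound
```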

In the remaining studies, the primary task was arm/hand movement with a visual component. In monkeys, motor cortex was found to be active during action observation as well as during actual performance of the action. There was additional frontal activation (area F5 in monkeys) that is considered part of the mirror system, i.e., the network of cortical areas employed to mirror someone else's movements or actions (e.g., Gallese et al., 1996; Rizzolatti et al., 1996). Similar MEG studies were performed in humans, in which subjects were asked to perform hand movements as well as observe another person performing similar hand movements. Salenius (1999) looked initially at the suppression of spontaneous 20-Hz activity in response to hand movement and observation of hand movements. This study showed that viewing the movements significantly diminished the rebound of 20-Hz activity, similar to actual movements. Salenius suggested that the motor cortex is a critical component of the mirror system, not only for language but also for inferring someone else's thoughts and actions. Nishitani and Hari (2000) performed a similar study with more extensive localization analysis and found activation of area V5, as well as of Brodmann's areas 44 (BA44) and 4 (BA4), in both the action and the action observation tasks. In addition, BA44 was active much earlier than BA4, suggesting that it plays a critical role in the observation/execution loop. They suggest that BA44, traditionally seen as a language area, is part of the mirror system due to the role hand gestures played in prelingual communication. In addition, these studies show the close link between the different sensory areas in creating a unified percept of the outside world.
