User:Kochc/sandbox

Crossmodal (or cross-modal) refers to the integration of information from multiple sensory modalities, such as vision and audition.[1] The term is typically used in conjunction with either attention or perception. Crossmodal attention occurs when a stimulus in one modality shifts attention in, or heightens processing of, another modality, so that both modalities attend to the same location or object at the same time. Crossmodal perception occurs when the perception of an object in one modality is influenced by the perception of the same object in a different modality.[2] Notable cases of crossmodal perception include synesthesia, sensory substitution, and the McGurk effect, in which vision and hearing interact in speech perception.

Beyond these special cases, crossmodal attention and perception are important in everyday situations as well. For instance, localizing a visual target in space is aided by sounds emanating from the target's location, even when the sounds are irrelevant to the target stimulus.[3] While playing hide-and-seek, for example, it may be difficult to spot someone hiding; if a sound emanates from near the hiding place, however, vision is directed to that location as well, making it easier to find the hidden person. This is an example of crossmodal attention, since auditory attention directs visual attention. Food provides a rich source of crossmodal perception, since flavor can be influenced by a food's color[4] and by a pleasant aroma. In these examples, perceived taste is influenced either by the color perception of the food or by the olfactory perception associated with its aroma.

Although information from one modality can influence the perception of sensory information from another modality, the question remains whether information from each sensory modality is weighted equally. The visual capture effect is sometimes taken as evidence that visual information is given greater weight than information from other modalities. Researchers use cross-modality matching to examine the ability to equate intensities across different modalities; a typical cross-modality matching study might require participants to match the loudness of a sound to the brightness of a light.

Crossmodal perception, crossmodal integration, and cross-modal plasticity of the human brain are increasingly studied in neuroscience to gain a better understanding of the large-scale and long-term properties of the brain.[5] A related research theme is the study of multisensory perception and multisensory integration.

  1. ^ Rolls, E.T. (1994). "Gustatory, olfactory, and visual convergence within the primate orbitofrontal cortex". Journal of Neuroscience. 14: 5437–5452. PMID 8083747.
  2. ^ Lalanne, Christophe; Lorenceau, Jean (2004). "Crossmodal integration for perception and action". Journal of Physiology-Paris. 98 (1): 265–279. doi:10.1016/j.jphysparis.2004.06.001.
  3. ^ McDonald, JJ (2013). "Salient sounds activate human visual cortex automatically". Journal of Neuroscience. 33: 9194–9201. doi:10.1523/JNEUROSCI.4869-13.2014.
  4. ^ Spence, Charles (2010). "Does food color influence taste and flavor perception in humans?". Chemosensory Perception. 3: 68–84. doi:10.1007/s12078-010-9067-z.
  5. ^ Shams, Ladan; Kim, Robyn (September 2010). "Crossmodal influences on visual perception". Physics of Life Reviews. 7 (3): 269–284. doi:10.1016/j.plrev.2010.04.006.