
Cross-correlation models in sensory processing attempt to explain, in mathematical terms, how the initial neural pathways in the sensory systems of complex biological organisms detect and exploit statistical relationships between sensory inputs separated in space and time, in order to pick out features of the environment that are relevant to behavior.

Introduction

Because of the complexity of nervous systems, a detailed understanding of the neural circuits involved in behavior does not yet exist. Nevertheless, through the use of computational or mathematical models, scientists have succeeded in describing the functioning of sensory processing subunits under real-world conditions. One class of these models, attractive in its conceptual simplicity, comprises the vanguard of the neuroscientist's push to describe visual and auditory processing down to molecular detail.

Built around the idea of cross-correlation, taken from the engineering field of signal processing, these models describe how signals of large spatial or temporal extent can be combined into a concise "summary" signal encoding some gross feature, which is then passed on to downstream processing units.

Two such models have survived rigorous comparison against experiment: the Jeffress model for sound localization, and the Hassenstein–Reichardt model for motion detection.

Basic Elements: Delay Line and Coincidence Detector

The two main elements of a correlation detector, which takes two inputs, are a delay line with a fixed delay time and a coincidence detector or "multiplier". The delay line compensates for the space-time separation between the inputs, and the coincidence detector responds only when the processed input signals are simultaneous and identical.
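These two elements can be sketched in a few lines of code. The following is a minimal illustration, not any published implementation: the delay line is a sample shift, the "multiplier" is an element-wise product, and the result is integrated into a single response value. The signals and delay values are made up for the example.

```python
import numpy as np

def correlation_detector(x1, x2, delay, dt=1.0):
    """Minimal correlation detector: delay input 1, multiply it with
    input 2, and integrate the product (the coincidence signal)."""
    shift = int(round(delay / dt))
    # Delay line: shift x1 by `shift` samples (zero-padded at the start).
    x1_delayed = np.concatenate([np.zeros(shift), x1[:len(x1) - shift]])
    # Coincidence detector ("multiplier") followed by integration.
    return np.sum(x1_delayed * x2) * dt

# Two identical pulses, the second arriving 5 samples later.
t = np.arange(100)
pulse = np.exp(-0.5 * ((t - 30) / 3.0) ** 2)
late_pulse = np.exp(-0.5 * ((t - 35) / 3.0) ** 2)

# The detector responds maximally when its internal delay matches
# the actual separation between the two inputs.
responses = {d: correlation_detector(pulse, late_pulse, d) for d in range(10)}
best_delay = max(responses, key=responses.get)
```

Here the detector tuned to a delay of 5 samples wins, because only that delay brings the two pulses into coincidence at the multiplier.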

Cross-Correlation in Sound Localization: Jeffress Model

According to Jeffress,[1] in order to compute the location of a sound source in space from interaural time differences, the auditory system relies on delay lines: the signal induced at an ipsilateral auditory receptor is delayed, on its way to a particular neuron, by the same time it takes the original sound to travel in space from that ear to the other. Each postsynaptic cell is differently delayed and is thus specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of cross-correlation.

Following Fischer and Anderson,[2] the response of the postsynaptic neuron to the signals from the left and right ears is given by

$$ r(\delta) = \int_0^T \left[ x_L(t - \delta) + c_L \right] \left[ x_R(t) + c_R \right] \, dt $$

Here, $x_L(t)$ and $x_R(t)$ are the instantaneous intensities of the signals from the left and right ears, and the constants $c_L$ and $c_R$ are included for consistency with the experimental fact that there is a response to input at only one ear.
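A Jeffress-type delay-line bank can be sketched as follows. This is an illustrative toy model, assuming a postsynaptic response of the form r(d) = Σ_t [x_L(t−d)+c_L][x_R(t)+c_R]·dt, with made-up signals and constants; it is not the Fischer–Anderson implementation itself.

```python
import numpy as np

def jeffress_bank(x_left, x_right, delays, c_left=1.0, c_right=1.0, dt=1.0):
    """Bank of coincidence detectors, each tuned to one interaural delay.
    Assumed response: r(d) = sum_t [x_L(t-d) + c_L] * [x_R(t) + c_R] * dt."""
    responses = []
    for d in delays:
        shift = int(round(d / dt))
        # Delay line for the left-ear signal (zero-padded at the start).
        x_l = np.concatenate([np.zeros(shift), x_left[:len(x_left) - shift]])
        # Additive constants give a (baseline) response to monaural input.
        responses.append(np.sum((x_l + c_left) * (x_right + c_right)) * dt)
    return np.array(responses)

# A sound reaching the left ear 4 samples before the right ear.
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 25.0) * np.exp(-0.5 * ((t - 100) / 20.0) ** 2)
x_left = signal
x_right = np.concatenate([np.zeros(4), signal[:-4]])

# The detector whose internal delay matches the interaural time
# difference responds most strongly, encoding the source location.
r = jeffress_bank(x_left, x_right, list(range(9)))
itd_estimate = int(np.argmax(r))
```

The winning detector in the bank (here, the one tuned to a 4-sample delay) plays the role of the postsynaptic cell specific for that interaural time difference.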

Structures have been located in the barn owl which are consistent with Jeffress-type mechanisms.[3]

Cross-Correlation for Motion Detection: Hassenstein-Reichardt Model

A motion detector needs to satisfy three general requirements: paired inputs, asymmetry, and nonlinearity.[4] The cross-correlation operation, implemented asymmetrically on the responses from a pair of photoreceptors, satisfies these minimal criteria and, furthermore, predicts features that have been observed in the responses of neurons of the lobula plate in dipteran insects.[5]

The master equation for the response is

$$ R(t) = x_1(t - \tau)\, x_2(t) - x_1(t)\, x_2(t - \tau) $$

where $x_1(t)$ and $x_2(t)$ are the signals from the two photoreceptors and $\tau$ is the fixed delay. The subtraction of the two mirror-symmetric multiplications provides the required asymmetry and makes the sign of the response depend on the direction of motion.

The HR model predicts a peak in the response at a particular input temporal frequency. The conceptually similar Barlow–Levick model is deficient in that a stimulus presented to only one receptor of the pair is sufficient to generate a response; this is unlike the HR model, which requires two correlated signals delivered in a time-ordered fashion. However, the HR model does not show the saturation of response at high contrasts that is observed in experiment. Extensions of the Barlow–Levick model can account for this discrepancy.[6]
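The direction selectivity of the HR correlator can be demonstrated with a short sketch, assuming the standard opponent form R(t) = x1(t−τ)·x2(t) − x1(t)·x2(t−τ); the stimulus and parameters here are invented for illustration.

```python
import numpy as np

def hr_detector(x1, x2, tau, dt=1.0):
    """Hassenstein-Reichardt correlator: two mirror-symmetric
    delay-and-multiply arms, subtracted for direction selectivity.
    R(t) = x1(t - tau) * x2(t) - x1(t) * x2(t - tau)."""
    shift = int(round(tau / dt))
    def delay(x):
        # Delay line: shift by `shift` samples, zero-padded at the start.
        return np.concatenate([np.zeros(shift), x[:len(x) - shift]])
    return delay(x1) * x2 - x1 * delay(x2)

# A luminance pattern moving from receptor 1 toward receptor 2:
# receptor 2 sees the same signal a few samples later.
t = np.arange(500)
lum = 1.0 + np.sin(2 * np.pi * t / 40.0)
x1 = lum
x2 = np.concatenate([np.zeros(5), lum[:-5]])

preferred = np.mean(hr_detector(x1, x2, tau=5))
null = np.mean(hr_detector(x2, x1, tau=5))  # motion in the opposite direction
# Motion in the preferred direction yields a positive mean response,
# motion in the null direction a negative one.
```

Sweeping the temporal frequency of `lum` in such a sketch also reproduces the model's predicted response peak at an intermediate frequency, set by the delay τ.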

See also

Computational neuroscience

Coincidence detection

Motion perception

Neuroethology

References

  1. ^ Jeffress, L. A. (1948). "A place theory of sound localization". Journal of Comparative and Physiological Psychology 41: 35–39.
  2. ^ Fischer, B. J.; Anderson, C. H. (2004). "A computational model of sound localization in the barn owl". Neurocomputing 58–60: 1007–1012.
  3. ^ Carr, C. E. (1993). "Delay line models of sound localization in the barn owl". American Zoologist 33 (1): 79–85.
  4. ^ Borst, A.; Egelhaaf, M. (1989). "Principles of visual motion detection". Trends in Neurosciences 12 (8): 297–306.
  5. ^ Joesch, M.; et al. (2008). "Response properties of motion-sensitive visual interneurons in the lobula plate of Drosophila melanogaster". Current Biology 18: 368–374.
  6. ^ de Polavieja, G. G. (2006). "Neuronal algorithms that detect the temporal order of events". Neural Computation 18: 2102–2121.
