
Tuesday, June 6, 2023

Sensory nervous system

From Wikipedia, the free encyclopedia
Typical sensory system: the visual system, illustrated by the classic Gray's Fig. 722. The scheme shows the flow of information from the eyes, through the central connections of the optic nerves and optic tracts, to the visual cortex. Area V1 is the region of the brain engaged in vision.
 

The visual system and the somatosensory system are active even during resting-state fMRI.

Activation and response in the sensory nervous system

The sensory nervous system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory neurons (including the sensory receptor cells), neural pathways, and parts of the brain involved in sensory perception and interoception. Commonly recognized sensory systems are those for vision, hearing, touch, taste, smell, balance and visceral sensation. Sense organs are transducers that convert data from the outer physical world to the realm of the mind where people interpret the information, creating their perception of the world around them.

The receptive field is the area of the body or environment to which a receptor organ and its receptor cells respond. For instance, the part of the world an eye can see is its receptive field; the light that each rod or cone can detect is its receptive field. Receptive fields have been identified for the visual, auditory, and somatosensory systems.

Stimulus

Organisms need information to solve at least three kinds of problems: (a) to maintain an appropriate environment, i.e., homeostasis; (b) to time activities (e.g., seasonal changes in behavior) or synchronize activities with those of conspecifics; and (c) to locate and respond to resources or threats (e.g., by moving towards resources or evading or attacking threats). Organisms also need to transmit information in order to influence another's behavior: to identify themselves, warn conspecifics of danger, coordinate activities, or deceive.

Sensory systems code for four aspects of a stimulus: type (modality), intensity, location, and duration. Arrival time of a sound pulse and phase differences of continuous sound are used for sound localization. Certain receptors are sensitive to certain types of stimuli (for example, different mechanoreceptors respond best to different kinds of touch stimuli, like sharp or blunt objects). Receptors send impulses in certain patterns to convey the intensity of a stimulus (for example, how loud a sound is). The location of the receptor that is stimulated tells the brain where the stimulus is (for example, stimulating a mechanoreceptor in a finger will send information to the brain about that finger). The duration of the stimulus (how long it lasts) is conveyed by the firing patterns of receptors. These impulses are transmitted to the brain through afferent neurons.
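
The four coded attributes above can be sketched as a toy message format. This is a purely illustrative model (the function name, rate formula, and cap are invented, not physiological values):

```python
def encode_stimulus(modality, intensity, location, duration_ms):
    """Return a toy afferent 'message' for a stimulus (illustrative only)."""
    firing_rate_hz = min(200, 10 + 50 * intensity)  # rate codes intensity, capped
    return {
        "modality": modality,        # which receptor class responded
        "rate_hz": firing_rate_hz,   # impulses per second encode intensity
        "labeled_line": location,    # the stimulated receptor's position encodes location
        "duration_ms": duration_ms,  # firing persists for the stimulus duration
    }

msg = encode_stimulus("mechanoreceptor", intensity=2.0,
                      location="left index finger", duration_ms=300)
print(msg["rate_hz"])  # 110.0
```

The point of the sketch is that one afferent impulse train simultaneously carries all four attributes: which receptor fired, how fast, from where, and for how long.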

Quiescent state

Most sensory systems have a quiescent state, that is, the state that a sensory system converges to when there is no input.

This is well-defined for a linear time-invariant system, whose input space is a vector space, and thus by definition has a point of zero. It is also well-defined for any passive sensory system, that is, a system that operates without needing input power. The quiescent state is the state the system converges to when there is no input power.
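
As a sketch of this idea, a first-order linear time-invariant system x' = -k·x + u converges to x = 0 from any starting point when the input u is zero; the parameters below are arbitrary illustrative values:

```python
def simulate(x0, k=0.5, dt=0.01, steps=2000, u=0.0):
    """Euler-integrate the LTI dynamics x' = -k*x + u and return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (-k * x + u)
    return x

# From any initial state, the zero-input system settles near the same point, x = 0.
print(abs(simulate(5.0)) < 1e-3, abs(simulate(-3.0)) < 1e-3)  # True True
```

The zero of the input vector space gives the unambiguous "no input" condition, and the state the system relaxes to under that condition is its quiescent state.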

It is not always well-defined for nonlinear, nonpassive sensory organs, since they can't function without input energy. For example, a cochlea is not a passive organ, but actively vibrates its own sensory hairs to improve its sensitivity. This manifests as otoacoustic emissions in healthy ears, and tinnitus in pathological ears. There is still a quiescent state for the cochlea, since there is a well-defined mode of power input that it receives (vibratory energy on the eardrum), which provides an unambiguous definition of "zero input power".

Some sensory systems can have multiple quiescent states depending on their history, like flip-flops and magnetic materials with hysteresis. A sensory system can also adapt to different quiescent states. In complete darkness, the retinal cells become extremely sensitive, and there is noticeable "visual snow" caused by the retinal cells firing randomly without any light input. In brighter light, the retinal cells become much less sensitive, and consequently visual noise decreases.

Quiescent state is less well-defined when the sensory organ can be controlled by other systems, like a dog's ears, which turn towards the front or the sides as the brain commands. Some spiders use their webs as a large touch organ, in effect weaving a skin for themselves. Even in the absence of anything falling on the web, hungry spiders may increase web thread tension so as to respond promptly even to less noticeable, less profitable prey such as small fruit flies, creating two different "quiescent states" for the web.

Things become completely ill-defined for a system which connects its output to its own input, thus ever-moving without any external input. The prime example is the brain, with its default mode network.

Senses and receptors

While debate exists among neurologists as to the specific number of senses, owing to differing definitions of what constitutes a sense, Gautama Buddha and Aristotle each described five 'traditional' human senses that have become universally accepted: touch, taste, smell, sight, and hearing. Other senses that are well accepted in most mammals, including humans, include nociception, equilibrioception, kinaesthesia, and thermoception. Furthermore, some nonhuman animals have been shown to possess alternate senses, including magnetoreception and electroreception.

Receptors

Sensation begins with the response of a specific receptor to a physical stimulus. The receptors that react to a stimulus and initiate the process of sensation are commonly characterized in four distinct categories: chemoreceptors, photoreceptors, mechanoreceptors, and thermoreceptors. All receptors receive distinct physical stimuli and transduce the signal into an electrical action potential. This action potential then travels along afferent neurons to specific brain regions where it is processed and interpreted.

Chemoreceptors

Chemoreceptors, or chemosensors, detect certain chemical stimuli and transduce that signal into an electrical action potential. The two primary types of chemoreceptors are:

  • Distance chemoreceptors, which are integral to receiving stimuli from gases in the olfactory system, through both olfactory receptor neurons and neurons in the vomeronasal organ.
  • Direct chemoreceptors, which include the taste buds in the gustatory system, detecting stimuli dissolved in ingested substances.

Photoreceptors

Photoreceptors are capable of phototransduction, a process which converts light (electromagnetic radiation) into, among other types of energy, a membrane potential. The three primary types of photoreceptors are:

  • Cones are photoreceptors which respond significantly to color. In humans, the three different types of cones correspond to a primary response to short wavelengths (blue), medium wavelengths (green), and long wavelengths (yellow/red).
  • Rods are photoreceptors which are very sensitive to the intensity of light, allowing for vision in dim lighting. The concentration and ratio of rods to cones is strongly correlated with whether an animal is diurnal or nocturnal. In humans, rods outnumber cones by approximately 20:1, while in nocturnal animals, such as the tawny owl, the ratio is closer to 1000:1.
  • Ganglion cells reside in the adrenal medulla and retina, where they are involved in the sympathetic response. Of the ~1.3 million ganglion cells present in the retina, 1–2% are believed to be photosensitive. These photosensitive ganglion cells play a role in conscious vision for some animals, and are believed to do the same in humans.

Mechanoreceptors

Mechanoreceptors are sensory receptors which respond to mechanical forces, such as pressure or distortion. While mechanoreceptors are present in hair cells and play an integral role in the vestibular and auditory systems, the majority of mechanoreceptors are cutaneous and are grouped into four categories:

  • Slowly adapting type 1 receptors have small receptive fields and respond to static stimulation. These receptors are primarily used in the sensations of form and roughness.
  • Slowly adapting type 2 receptors have large receptive fields and respond to stretch. Similarly to type 1, they produce sustained responses to a continued stimulus.
  • Rapidly adapting receptors have small receptive fields and underlie the perception of slip.
  • Pacinian receptors have large receptive fields and are the predominant receptors for high-frequency vibration.

Thermoreceptors

Thermoreceptors are sensory receptors which respond to varying temperatures. While the mechanisms through which these receptors operate are unclear, recent discoveries have shown that mammals have at least two distinct types of thermoreceptors:

TRPV1 is a heat-activated channel that acts as a small heat-detecting thermometer in the membrane, initiating depolarization of the nerve fiber when the temperature rises. Ultimately, this allows us to detect ambient temperature in the warm/hot range. Similarly, the molecular cousin of TRPV1, TRPM8, is a cold-activated ion channel that responds to cold. Cold and hot receptors are segregated into distinct subpopulations of sensory nerve fibers, which shows that the information coming into the spinal cord is originally separate. Each sensory receptor has its own "labeled line" to convey a simple sensation experienced by the recipient. Ultimately, TRP channels act as thermosensors, channels that help us detect changes in ambient temperature.
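
The "labeled line" idea can be caricatured in code: each fiber type reports only its own modality, so the temperature category is read off from which line is active. The threshold temperatures below are rough illustrative figures, not measured activation points:

```python
def thermoreceptor_lines(temp_c):
    """Which labeled line is active at a given temperature (thresholds illustrative)."""
    return {
        "TRPM8": temp_c < 26,  # cold-activated channel's fiber fires on this line
        "TRPV1": temp_c > 42,  # heat-activated channel's fiber fires on this line
    }

print(thermoreceptor_lines(10))  # cold line active: {'TRPM8': True, 'TRPV1': False}
print(thermoreceptor_lines(50))  # heat line active: {'TRPM8': False, 'TRPV1': True}
```

Because each line carries exactly one modality, downstream circuits need not decode a mixed signal; they only need to know which line is firing.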

Nociceptors

Nociceptors respond to potentially damaging stimuli by sending signals to the spinal cord and brain. This process, called nociception, usually causes the perception of pain. They are found in internal organs, as well as on the surface of the body. Nociceptors detect different kinds of damaging stimuli or actual damage. Those that only respond when tissues are damaged are known as "sleeping" or "silent" nociceptors.

  • Thermal nociceptors are activated by noxious heat or cold at various temperatures.
  • Mechanical nociceptors respond to excess pressure or mechanical deformation.
  • Chemical nociceptors respond to a wide variety of chemicals, some of which are signs of tissue damage. They are involved in the detection of some spices in food.

Sensory cortex

All stimuli received by the receptors listed above are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area of the brain. While the term sensory cortex is often used informally to refer to the somatosensory cortex, the term more accurately refers to the multiple areas of the brain at which senses are received to be processed. For the five traditional senses in humans, this includes the primary and secondary cortices of the different senses: the somatosensory cortex, the visual cortex, the auditory cortex, the primary olfactory cortex, and the gustatory cortex. Other modalities have corresponding sensory cortex areas as well, including the vestibular cortex for the sense of balance.

Somatosensory cortex

Located in the parietal lobe, the primary somatosensory cortex is the primary receptive area for the sense of touch and proprioception in the somatosensory system. This cortex is further divided into Brodmann areas 1, 2, and 3. Brodmann area 3 is considered the primary processing center of the somatosensory cortex as it receives significantly more input from the thalamus, has neurons highly responsive to somatosensory stimuli, and can evoke somatic sensations through electrical stimulation. Areas 1 and 2 receive most of their input from area 3. There are also pathways for proprioception (via the cerebellum), and motor control (via Brodmann area 4). See also: S2 Secondary somatosensory cortex.

The human eye is the first element of a sensory system: in this case, vision, for the visual system.

Visual cortex

The visual cortex refers to the primary visual cortex, labeled V1 or Brodmann area 17, as well as the extrastriate visual cortical areas V2–V5. Located in the occipital lobe, V1 acts as the primary relay station for visual input, transmitting information to two primary pathways, labeled the dorsal and ventral streams. The dorsal stream includes areas V2 and V5 and is used in interpreting the visual 'where' and 'how'; the ventral stream includes areas V2 and V4 and is used in interpreting 'what'. Increases in task-negative activity are observed in the ventral attention network after abrupt changes in sensory stimuli, at the onset and offset of task blocks, and at the end of a completed trial.

Auditory cortex

Located in the temporal lobe, the auditory cortex is the primary receptive area for sound information. The auditory cortex is composed of Brodmann areas 41 and 42, also known as the anterior transverse temporal area 41 and the posterior transverse temporal area 42, respectively. Both areas act similarly and are integral in receiving and processing the signals transmitted from auditory receptors.

Primary olfactory cortex

Located in the temporal lobe, the primary olfactory cortex is the primary receptive area for olfaction, or smell. Unique to the olfactory and gustatory systems, at least in mammals, is the implementation of both peripheral and central mechanisms of action. The peripheral mechanisms involve olfactory receptor neurons which transduce a chemical signal along the olfactory nerve, which terminates in the olfactory bulb. The chemoreceptors in the receptor neurons that start the signal cascade are G protein-coupled receptors. The central mechanisms include the convergence of olfactory nerve axons into glomeruli in the olfactory bulb, where the signal is then transmitted to the anterior olfactory nucleus, the piriform cortex, the medial amygdala, and the entorhinal cortex, all of which make up the primary olfactory cortex.

In contrast to vision and hearing, the olfactory bulbs are not cross-hemispheric; the right bulb connects to the right hemisphere and the left bulb connects to the left hemisphere.

Gustatory cortex

The gustatory cortex is the primary receptive area for taste. The word taste is used in a technical sense to refer specifically to sensations coming from taste buds on the tongue. The five qualities of taste detected by the tongue include sourness, bitterness, sweetness, saltiness, and the protein taste quality, called umami. In contrast, the term flavor refers to the experience generated through integration of taste with smell and tactile information. The gustatory cortex consists of two primary structures: the anterior insula, located on the insular lobe, and the frontal operculum, located on the frontal lobe. Similarly to the olfactory cortex, the gustatory pathway operates through both peripheral and central mechanisms. Peripheral taste receptors, located on the tongue, soft palate, pharynx, and esophagus, transmit the received signal to primary sensory axons, where the signal is projected to the nucleus of the solitary tract in the medulla, or the gustatory nucleus of the solitary tract complex. The signal is then transmitted to the thalamus, which in turn projects the signal to several regions of the neocortex, including the gustatory cortex.

The neural processing of taste is affected at nearly every stage of processing by concurrent somatosensory information from the tongue, that is, mouthfeel. Scent, in contrast, is not combined with taste to create flavor until higher cortical processing regions, such as the insula and orbitofrontal cortex.

Human sensory system

The human sensory system consists of the following subsystems:

Diseases

Disability-adjusted life years for sense organ diseases per 100,000 inhabitants in 2002 (world map; the legend ranges from under 200 to more than 2,300).

Photoreceptor cell

Functional parts of the rods and cones, which are two of the three types of photosensitive cells in the retina

A photoreceptor cell is a specialized type of neuroepithelial cell found in the retina that is capable of visual phototransduction. The great biological importance of photoreceptors is that they convert light (visible electromagnetic radiation) into signals that can stimulate biological processes. To be more specific, photoreceptor proteins in the cell absorb photons, triggering a change in the cell's membrane potential.

There are currently three known types of photoreceptor cells in mammalian eyes: rods, cones, and intrinsically photosensitive retinal ganglion cells. The two classic photoreceptor cells are rods and cones, each contributing information used by the visual system to form an image of the environment: sight. Rods primarily mediate scotopic vision (dim conditions) whereas cones primarily mediate photopic vision (bright conditions), but the processes in each that support phototransduction are similar. A third class of mammalian photoreceptor cell was discovered during the 1990s: the intrinsically photosensitive retinal ganglion cells. These cells are thought not to contribute to sight directly, but play a role in the entrainment of the circadian rhythm and the pupillary reflex.

Photosensitivity

Normalized human photoreceptor absorbances for different wavelengths of light

Each photoreceptor absorbs light according to its spectral sensitivity (absorptance), which is determined by the photoreceptor proteins expressed in that cell. Humans have three classes of cones (L, M, S) that each differ in spectral sensitivity and 'prefer' photons of different wavelengths (see graph). For example, the peak wavelength of the S-cone's spectral sensitivity is approximately 420 nm (nanometers, a measure of wavelength), so it is more likely to absorb a photon at 420 nm than at any other wavelength. Light of a longer wavelength can also produce the same response from an S-cone, but it would have to be brighter to do so.

In accordance with the principle of univariance, a photoreceptor's output signal is proportional only to the number of photons absorbed. A photoreceptor cannot measure the wavelength of the light it absorbs and therefore cannot detect color on its own. Rather, it is the ratios of the responses of the three types of cone cells that estimate wavelength and thereby enable color vision.
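
A small numerical sketch of univariance: with crude Gaussian stand-ins for cone spectral sensitivities (the peak wavelengths are approximate and the curves are not real cone spectra), a dim light at the S-cone's peak and a brighter off-peak light produce identical S-cone responses, yet different cross-cone ratios:

```python
import math

PEAKS = {"S": 420.0, "M": 530.0, "L": 560.0}  # approximate peak wavelengths, nm

def absorption(cone, wavelength_nm, intensity, width=60.0):
    """Photons absorbed: intensity times a crude Gaussian sensitivity curve."""
    sensitivity = math.exp(-((wavelength_nm - PEAKS[cone]) / width) ** 2)
    return intensity * sensitivity  # the only quantity the cone can report

# Two different lights that are indistinguishable to the S-cone alone:
dim_at_peak = absorption("S", 420, intensity=1.0)
bright_off_peak = absorption("S", 480, intensity=1.0 / math.exp(-1.0))
print(round(dim_at_peak, 6) == round(bright_off_peak, 6))  # True: univariance

# The M-cone responds differently to the two lights, so cone *ratios* recover color:
print(round(absorption("M", 420, 1.0), 3),
      round(absorption("M", 480, 1.0 / math.exp(-1.0)), 3))
```

The single cone confuses wavelength with brightness; comparing responses across cone classes disambiguates them.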

Histology

Anatomy of a rod cell and of a cone cell; the anatomy of rods and cones varies slightly.

Rod and cone photoreceptors are found on the outermost layer of the retina; they both have the same basic structure. Closest to the visual field (and farthest from the brain) is the axon terminal, which releases a neurotransmitter called glutamate to bipolar cells. Farther back is the cell body, which contains the cell's organelles. Farther back still is the inner segment, a specialized part of the cell full of mitochondria. The chief function of the inner segment is to provide ATP (energy) for the sodium-potassium pump. Finally, closest to the brain (and farthest from the field of view) is the outer segment, the part of the photoreceptor that absorbs light. Outer segments are actually modified cilia that contain disks filled with opsin, the molecule that absorbs photons, as well as cGMP-gated sodium channels.

The membranous photoreceptor protein opsin contains a pigment molecule called retinal. In rod cells, these together are called rhodopsin. In cone cells, there are different types of opsins that combine with retinal to form pigments called photopsins. Three different classes of photopsins in the cones react to different ranges of light frequency, a differentiation that allows the visual system to calculate color. The function of the photoreceptor cell is to convert the light information of the photon into a form of information communicable to the nervous system and readily usable to the organism: This conversion is called signal transduction.

The opsin found in the intrinsically photosensitive ganglion cells of the retina is called melanopsin. These cells are involved in various reflexive responses of the brain and body to the presence of (day)light, such as the regulation of circadian rhythms, pupillary reflex and other non-visual responses to light. Melanopsin functionally resembles invertebrate opsins.

Retinal mosaic

Illustration of the distribution of cone cells in the fovea of an individual with normal color vision (left), and a color blind (protanopic) retina. Note that the center of the fovea holds very few blue-sensitive cones.
 
Distribution of rods and cones along a line passing through the fovea and the blind spot of a human eye

Most vertebrate photoreceptors are located in the retina. The distribution of rods and cones (and of the classes thereof) in the retina is called the retinal mosaic. Each human retina has approximately 6 million cones and 120 million rods. At the "center" of the retina (the point directly behind the lens) lies the fovea (or fovea centralis), which contains only cone cells and is the region capable of producing the highest visual acuity, or highest resolution. Across the rest of the retina, rods and cones are intermingled. No photoreceptors are found at the blind spot, the area where ganglion cell fibers are collected into the optic nerve and leave the eye. The distribution of cone classes (L, M, S) is also nonhomogeneous, with no S-cones in the fovea and the ratio of L-cones to M-cones differing between individuals.

The number and ratio of rods to cones varies among species, depending on whether an animal is primarily diurnal or nocturnal. Certain owls, such as the nocturnal tawny owl, have a tremendous number of rods in their retinae. Other vertebrates also have differing numbers of cone classes, ranging from monochromats to pentachromats.

Signaling

The absorption of light leads to an isomeric change in the retinal molecule.
 

The path of a visual signal is described by the phototransduction cascade, the mechanism by which the energy of a photon signals a mechanism in the cell that leads to its electrical polarization. This polarization ultimately leads to either the transmittance or the inhibition of a neural signal that is fed to the brain via the optic nerve. The steps that apply to the phototransduction pathway in vertebrate rod and cone photoreceptors are:

  1. The vertebrate visual opsin in the disc membrane of the outer segment absorbs a photon, changing the configuration of a retinal Schiff base cofactor inside the protein from the cis-form to the trans-form, causing the retinal to change shape.
  2. This results in a series of unstable intermediates, the last of which binds more strongly to a G protein in the membrane, called transducin, and activates it. This is the first amplification step – each photoactivated opsin triggers activation of about 100 transducins.
  3. Each transducin then activates the enzyme cGMP-specific phosphodiesterase (PDE).
  4. PDE then catalyzes the hydrolysis of cGMP to 5' GMP. This is the second amplification step, where a single PDE hydrolyses about 1000 cGMP molecules.
  5. The net concentration of intracellular cGMP is reduced (due to its conversion to 5' GMP via PDE), resulting in the closure of cyclic nucleotide-gated Na+ ion channels located in the photoreceptor outer segment membrane.
  6. As a result, sodium ions can no longer enter the cell, and the photoreceptor outer segment membrane becomes hyperpolarized, due to the charge inside the membrane becoming more negative.
  7. This change in the cell's membrane potential causes voltage-gated calcium channels to close. This leads to a decrease in the influx of calcium ions into the cell and thus the intracellular calcium ion concentration falls.
  8. A decrease in the intracellular calcium concentration means that less glutamate is released via calcium-induced exocytosis to the bipolar cell (see below). (The decreased calcium level slows the release of the neurotransmitter glutamate, which excites the postsynaptic bipolar cells and horizontal cells.)
  9. ATP provided by the inner segment powers the sodium-potassium pump. This pump is necessary to reset the initial state of the outer segment by taking the sodium ions that are entering the cell and pumping them back out.
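
Using the figures quoted in steps 2 and 4 (and assuming, purely for illustration, one activated PDE per transducin), the two amplification stages compound as follows:

```python
PHOTONS = 1
TRANSDUCIN_PER_OPSIN = 100  # first amplification step (step 2 above)
CGMP_PER_PDE = 1000         # second amplification step (step 4 above)

activated_transducin = PHOTONS * TRANSDUCIN_PER_OPSIN
hydrolysed_cgmp = activated_transducin * CGMP_PER_PDE  # assumes 1 PDE per transducin
print(hydrolysed_cgmp)  # 100000 cGMP molecules affected by a single photon
```

This roughly 10^5-fold amplification is why the closure of many channels, and hence a measurable change in membrane potential, can follow the absorption of a single photon.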

Hyperpolarization

Unlike most sensory receptor cells, photoreceptors actually become hyperpolarized when stimulated and, conversely, depolarized when not stimulated. This means that glutamate is released continuously when the cell is unstimulated, and stimulation causes release to stop. In the dark, cells have a relatively high concentration of cyclic guanosine 3'-5' monophosphate (cGMP), which opens cGMP-gated ion channels. These channels are nonspecific, allowing movement of both sodium and calcium ions when open. The movement of these positively charged ions into the cell (driven by their respective electrochemical gradients) depolarizes the membrane and leads to the release of the neurotransmitter glutamate.

Unstimulated (in the dark), cyclic-nucleotide gated channels in the outer segment are open because cyclic GMP (cGMP) is bound to them. Hence, positively charged ions (namely sodium ions) enter the photoreceptor, depolarizing it to about −40 mV (resting potential in other nerve cells is usually −65 mV). This depolarization current is often known as dark current.
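
A toy linear model of the membrane potentials quoted above; the linear interpolation between the two voltages, and the use of −65 mV as the fully hyperpolarized floor, are illustrative assumptions, not biophysics:

```python
DARK_MV = -40.0            # depolarized level sustained by the dark current
HYPERPOLARIZED_MV = -65.0  # assumed floor with all cGMP-gated channels closed

def membrane_potential(fraction_channels_open):
    """Linearly interpolate between fully closed and fully open channel states."""
    return HYPERPOLARIZED_MV + fraction_channels_open * (DARK_MV - HYPERPOLARIZED_MV)

print(membrane_potential(1.0))  # dark: -40.0 mV
print(membrane_potential(0.0))  # bright light: -65.0 mV
print(membrane_potential(0.5))  # partial stimulation: -52.5 mV
```

The sketch captures the inverted logic of the system: more light means fewer open channels and a more negative (hyperpolarized) membrane potential.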

Bipolar cells

The photoreceptors (rods and cones) transmit to the bipolar cells, which in turn transmit to the retinal ganglion cells. Retinal ganglion cell axons collectively form the optic nerve, via which they project to the brain.

The rod and cone photoreceptors signal their absorption of photons via a decrease in the release of the neurotransmitter glutamate to bipolar cells at their axon terminals. Since the photoreceptor is depolarized in the dark, a high amount of glutamate is released to bipolar cells in the dark. Absorption of a photon hyperpolarizes the photoreceptor and therefore results in the release of less glutamate at the presynaptic terminal to the bipolar cell.

Every rod or cone photoreceptor releases the same neurotransmitter, glutamate. However, the effect of glutamate differs between bipolar cells, depending upon the type of receptor embedded in that cell's membrane. When glutamate binds to an ionotropic receptor, the bipolar cell depolarizes (and therefore hyperpolarizes with light, as less glutamate is released). On the other hand, binding of glutamate to a metabotropic receptor results in hyperpolarization, so that bipolar cell depolarizes to light as less glutamate is released.

In essence, this property allows for one population of bipolar cells that gets excited by light and another population that gets inhibited by it, even though all photoreceptors show the same response to light. This complexity becomes both important and necessary for detecting color, contrast, edges, etc.
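
The ON/OFF split described above can be caricatured as follows; the glutamate levels and threshold are invented for illustration, but the sign logic follows the text:

```python
def photoreceptor_glutamate(light_on):
    """Light reduces glutamate release (illustrative binary levels)."""
    return 0.1 if light_on else 1.0

def bipolar_response(glutamate, receptor):
    if receptor == "ionotropic":    # sign-conserving: glutamate depolarizes the cell
        return "depolarized" if glutamate > 0.5 else "hyperpolarized"
    if receptor == "metabotropic":  # sign-inverting: glutamate hyperpolarizes the cell
        return "hyperpolarized" if glutamate > 0.5 else "depolarized"
    raise ValueError(receptor)

glu = photoreceptor_glutamate(light_on=True)
print(bipolar_response(glu, "ionotropic"))    # OFF cell: hyperpolarized by light
print(bipolar_response(glu, "metabotropic"))  # ON cell: depolarized by light
```

One identical photoreceptor signal thus yields two opposite bipolar populations, the substrate for contrast and edge detection.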

Advantages

Phototransduction in rods and cones is somewhat unusual in that the stimulus (in this case, light) reduces the cell's response or firing rate, different from most other sensory systems in which a stimulus increases the cell's response or firing rate. This difference has important functional consequences:

  1. the classic (rod or cone) photoreceptor is depolarized in the dark, which means many sodium ions are flowing into the cell. Thus, the random opening or closing of sodium channels will not affect the membrane potential of the cell; only the closing of a large number of channels, through absorption of a photon, will affect it and signal that light is in the visual field. This system may have less noise relative to sensory transduction schemes that increase the rate of neural firing in response to a stimulus, as in touch and olfaction.
  2. there is a lot of amplification in two stages of classic phototransduction: one pigment will activate many molecules of transducin, and one PDE will cleave many cGMPs. This amplification means that even the absorption of one photon will affect membrane potential and signal to the brain that light is in the visual field. This is the main feature that differentiates rod photoreceptors from cone photoreceptors. Rods are extremely sensitive and have the capacity of registering a single photon of light, unlike cones. On the other hand, cones are known to have very fast kinetics in terms of rate of amplification of phototransduction, unlike rods.

Difference between rods and cones

Comparison of human rod and cone cells, from Eric Kandel et al. in Principles of Neural Science.

  • Rods are used for scotopic vision (low-light conditions); cones are used for photopic vision (bright conditions).
  • Rods are very light sensitive and respond to scattered light; cones are far less light sensitive and respond only to direct light.
  • Loss of rods causes night blindness; loss of cones causes legal blindness.
  • Rods provide low visual acuity; cones provide high visual acuity and better spatial resolution.
  • Rods are absent from the fovea; cones are concentrated in the fovea.
  • Rods respond slowly to light, with stimuli summed over time; cones respond quickly and can perceive more rapid changes in stimuli.
  • Rods have more pigment than cones and can therefore detect lower light levels; cones have less pigment and require more light to detect images.
  • In rods, the stacks of membrane-enclosed disks are unattached to the cell membrane; in cones, the disks are attached to the outer membrane.
  • About 120 million rods are distributed around each retina; about 6 million cones are distributed in each retina.
  • Rods contain one type of photosensitive pigment; human cones contain three types.
  • Rods confer achromatic vision; cones confer color vision.

Development

The key events mediating rod versus S cone versus M cone differentiation are induced by several transcription factors, including RORbeta, OTX2, NRL, CRX, NR2E3, and TRbeta2. The S cone fate represents the default photoreceptor program; however, differential transcriptional activity can bring about rod or M cone generation. L cones are present in primates, but little is known about their developmental program, owing to the use of rodents in research. There are five steps to developing photoreceptors: proliferation of multipotent retinal progenitor cells (RPCs); restriction of competence of RPCs; cell fate specification; photoreceptor gene expression; and lastly axonal growth, synapse formation, and outer segment growth.

Early Notch signaling maintains progenitor cycling. Photoreceptor precursors come about through inhibition of Notch signaling and increased activity of various factors including achaete-scute homologue 1. OTX2 activity commits cells to the photoreceptor fate. CRX further defines the photoreceptor specific panel of genes being expressed. NRL expression leads to the rod fate. NR2E3 further restricts cells to the rod fate by repressing cone genes. RORbeta is needed for both rod and cone development. TRbeta2 mediates the M cone fate. If any of the previously mentioned factors' functions are ablated, the default photoreceptor is a S cone. These events take place at different time periods for different species and include a complex pattern of activities that bring about a spectrum of phenotypes. If these regulatory networks are disrupted, retinitis pigmentosa, macular degeneration or other visual deficits may result.
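
The regulatory logic described above can be caricatured as a simple decision function; this is a logical sketch of the text only, not a model of actual gene regulation:

```python
def photoreceptor_fate(active_factors):
    """Map a set of active transcription factors to a fate (toy decision logic)."""
    if "OTX2" not in active_factors:
        return "not a photoreceptor"  # OTX2 commits cells to the photoreceptor lineage
    if "NRL" in active_factors:
        return "rod"                  # NRL (with NR2E3 repressing cone genes) -> rod
    if "TRbeta2" in active_factors:
        return "M cone"               # TRbeta2 mediates the M cone fate
    return "S cone"                   # default program when the above are ablated

print(photoreceptor_fate({"OTX2", "CRX", "NRL", "NR2E3"}))  # rod
print(photoreceptor_fate({"OTX2", "CRX", "TRbeta2"}))       # M cone
print(photoreceptor_fate({"OTX2", "CRX"}))                  # S cone
```

The "S cone" branch being the fall-through case mirrors the text's point that S cone is the default fate when the rod- and M-cone-driving factors are absent or ablated.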

Ganglion cell photoreceptors

Intrinsically photosensitive retinal ganglion cells (ipRGCs) are a subset (≈1–3%) of retinal ganglion cells which, unlike other retinal ganglion cells, are intrinsically photosensitive due to the presence of melanopsin, a light-sensitive protein. They therefore constitute a third class of photoreceptors, in addition to rod and cone cells.

In humans, the ipRGCs contribute to non-image-forming functions such as circadian rhythms, behavior, and the pupillary light reflex. The peak spectral sensitivity of the receptor is between 460 and 482 nm. However, they may also contribute to a rudimentary visual pathway enabling conscious sight and brightness detection. Classic photoreceptors (rods and cones) also feed into this novel visual system, which may contribute to color constancy. ipRGCs could be instrumental in understanding many diseases, including major causes of blindness worldwide like glaucoma, a disease that affects ganglion cells, and the study of the receptor offers a potential new avenue to explore in the search for treatments for blindness.

ipRGCs were only definitively detected in humans during landmark experiments in 2007 on rodless, coneless subjects. As had been found in other mammals, the identity of the non-rod, non-cone photoreceptor in humans was found to be a ganglion cell in the inner retina. The researchers had tracked down patients with rare diseases that wiped out classic rod and cone photoreceptor function but preserved ganglion cell function. Despite having no rods or cones, the patients continued to exhibit circadian photoentrainment, circadian behavioural patterns, melatonin suppression, and pupil reactions, with peak spectral sensitivities to environmental and experimental light matching those of the melanopsin photopigment. Their brains could also associate vision with light of this frequency.

Non-human photoreceptors

Rod and cone photoreceptors are common to almost all vertebrates. The pineal and parapineal glands are photoreceptive in non-mammalian vertebrates, but not in mammals. Birds have photoactive cerebrospinal fluid (CSF)-contacting neurons within the paraventricular organ that respond to light in the absence of input from the eyes or neurotransmitters. Invertebrate photoreceptors in organisms such as insects and molluscs are different in both their morphological organization and their underlying biochemical pathways. This article describes human photoreceptors.

Monday, June 5, 2023

Binocular rivalry

From Wikipedia, the free encyclopedia

Binocular rivalry is a phenomenon of visual perception in which perception alternates between different images presented to each eye.

An image demonstrating binocular rivalry. If you view the image with red-cyan 3D glasses, the text will alternate between red and blue. Red-cyan 3D glasses are recommended to view this image correctly.
 
Binocular rivalry. If you view the image with red-cyan 3D glasses, the angled warp and weft will alternate between the red and the blue lines. Red-cyan 3D glasses are recommended to view this image correctly.

When one image is presented to one eye and a very different image is presented to the other (also known as dichoptic presentation), instead of the two images being seen superimposed, one image is seen for a few moments, then the other, then the first, and so on, randomly for as long as one cares to look. For example, if a set of vertical lines is presented to one eye, and a set of horizontal lines to the same region of the retina of the other, sometimes the vertical lines are seen with no trace of the horizontal lines, and sometimes the horizontal lines are seen with no trace of the vertical lines.

At transitions, brief, unstable composites of the two images may be seen. For example, the vertical lines may appear one at a time to obscure the horizontal lines from the left or from the right, like a traveling wave, switching slowly one image for the other. Binocular rivalry occurs between any stimuli that differ sufficiently, including simple stimuli like lines of different orientation and complex stimuli like different alphabetic letters or different pictures such as of a face and of a house.

Very small differences between images, however, might yield singleness of vision and stereopsis. Binocular rivalry has been extensively studied in the last century. In recent years neuroscientists have used neuroimaging techniques and single-cell recording techniques to identify neural events responsible for the perceptual dominance of a given image and for the perceptual alternations.

Types

When the images presented to the eyes differ only in their contours, rivalry is referred to as binocular contour rivalry. When the images presented to the eyes differ only in their colours, rivalry is referred to as binocular colour rivalry. When the images presented to the eyes differ only in their lightnesses, a form of rivalry called binocular lustre may be seen. When an image is presented to one eye and a blank field to the other, the image is usually seen continuously. This is referred to as contour dominance. Occasionally however, the blank field, or even the dark field of a closed eye, can become visible, making the image invisible for about as long as it would be invisible were it in rivalry with another image of equal stimulus strength. When an image is presented to one eye and a blank field to the other, introducing a different image onto the blank field usually results in that image being seen immediately. This is referred to as flash suppression.

History

Binocular rivalry was discovered by Porta. Porta put one book in front of one eye, and another in front of the other. He reported that he could read from one book at a time and that changing from one to the other required withdrawing the "visual virtue" from one eye and moving it to the other. According to Wade (1998), binocular colour rivalry was first reported by Le Clerc (1712). Desaguliers (1716) also recorded it when looking at different colours from spectra in the bevel of a mirror. The clearest early description of both colour and contour rivalry was made by Dutour (1760, 1763). To experience colour rivalry Dutour either crossed his eyes or overdiverged his eyes (a form of free fusion commonly used also at the end of the 20th century to view Magic Eye stereograms) to look at differently coloured pieces of cloth (Dutour 1760) or differently coloured pieces of glass (Dutour 1763). To experience contour rivalry Dutour again used free fusion of different objects or used a prism or a mirror in front of one eye to project different images into it. The first clear description of rivalry in English was by Charles Wheatstone (1838). Wheatstone invented the stereoscope, an optical device (in Wheatstone's case using mirrors) to present different images to the two eyes.

Early theories

Various theories were proposed to account for binocular rivalry. Porta and Dutour took it as evidence for an ancient theory of visual perception that has come to be known as suppression theory. Its essential idea is that, despite having two eyes, we see only one of everything (known as singleness of vision) because we see with one eye at a time. According to this theory, we do not normally notice the alternations between the two eyes because their images are too similar. By making the images very different, Porta and Dutour argued, this natural alternation can be seen. Wheatstone, on the other hand, supported the alternative theory of singleness of vision, fusion theory, proposed by Aristotle. Its essential idea is that we see only one of everything because the information from the two eyes is combined or fused. Wheatstone also discovered binocular stereopsis, the perception of depth arising from the lateral placement of the eyes. Wheatstone was able to prove that stereopsis depended on the different horizontal positions (the horizontal disparity) of points in the images viewed by each eye by creating the illusion of depth from flat depictions of such images displayed in his stereoscope. Such stereopsis is impossible unless information is being combined from each eye. Although Wheatstone's discovery of stereopsis supported fusion theory, he still had to account for binocular rivalry. He regarded binocular rivalry as a special case in which fusion is impossible, saying "the mind is inattentive to impressions made on one retina when it cannot combine the impressions on the two retinae together so as to occasion a perception resembling that of some external object" (p. 264).

Other theories of binocular rivalry dealt more with how it occurs than why it occurs. Dutour speculated that the alternations could be controlled by attention, a theory promoted in the nineteenth century by Hermann von Helmholtz. But Dutour also speculated that the alternations could be controlled by structural properties of the images (such as by temporary fluctuations in the blur of one image, or temporary fluctuations in the luminance of one image). This theory was promoted in the nineteenth century by Helmholtz's traditional rival, Ewald Hering.

Empirical studies: B. B. Breese (1899, 1909)

The most comprehensive early study of binocular rivalry was conducted by B. B. Breese (1899, 1909). Breese quantified the amount of rivalry by requiring his observers to press keys while observing rivalry for 100-second trials. An observer pressed one key whenever and for as long as he or she saw one rival stimulus with no trace of the other, and another key whenever and for as long as he or she saw the other rival stimulus with no trace of the first. This has come to be known as recording periods of exclusive visibility. From the key-press records (Breese's were made on a kymograph drum), Breese was able to quantify rivalry in three ways: the number of periods of exclusive visibility of each stimulus (the rate of rivalry), the total duration of exclusive visibility of each stimulus, and the average duration of each period of rivalry.
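Breese's three measures can be computed directly from such a key-press record. The record below is hypothetical (invented durations, not Breese's data), and the function name is illustrative:

```python
# A hypothetical key-press record from one rivalry trial: each entry is
# (stimulus seen exclusively, duration in seconds). Mixed percepts simply
# produce no key press and so do not appear in the record.
record = [("vertical", 4.0), ("horizontal", 3.0), ("vertical", 5.0),
          ("horizontal", 2.5), ("vertical", 3.5)]

def rivalry_measures(record, stimulus):
    periods = [d for s, d in record if s == stimulus]
    rate = len(periods)                   # number of periods of exclusive visibility
    total = sum(periods)                  # total duration of exclusive visibility
    mean = total / rate if rate else 0.0  # average duration of each period
    return rate, total, mean

print(rivalry_measures(record, "vertical"))    # (3, 12.5, 4.166...)
```

Breese read the same three quantities off his kymograph traces; here they fall out of a single pass over the record.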

Breese first found that although observers could increase the time one rival stimulus was seen by attending to it, they could not increase the rate of that stimulus. Moreover, when he asked his observers to refrain from moving their eyes over the attended stimulus, control was abolished. When he asked observers specifically to move their eyes over one stimulus, that stimulus predominated in rivalry. He could also increase predominance of a stimulus by increasing the number of its contours, by moving it, by reducing its size, by making it brighter, and by contracting the muscles on the same side of the body as the eye viewing that stimulus. Breese also showed that rivalry occurs between afterimages. Breese also discovered the phenomenon of monocular rivalry: if the two rival stimuli are optically superimposed to the same eye and one fixates on the stimuli, then alternations in the clarity of the two stimuli are seen. Occasionally, one image disappears altogether, as in binocular rivalry, although this is much rarer than in binocular rivalry.

Other senses

Auditory and olfactory forms of perceptual rivalry can occur when conflicting, and therefore rivaling, inputs are presented to the two ears or two nostrils.

Binocular vision

From Wikipedia, the free encyclopedia
 
Principle of binocular vision with horopter shown

In biology, binocular vision is a type of vision in which an animal has two eyes capable of facing the same direction to perceive a single three-dimensional image of its surroundings. Binocular vision does not typically refer to vision in which an animal has eyes on opposite sides of its head and shares no field of view between them, as in some prey animals.

Neurological researcher Manfred Fahle has stated six specific advantages of having two eyes rather than just one:

  1. It gives a creature a "spare eye" in case one is damaged.
  2. It gives a wider field of view. For example, humans have a maximum horizontal field of view of approximately 190 degrees with two eyes, approximately 120 degrees of which makes up the binocular field of view (seen by both eyes) flanked by two uniocular fields (seen by only one eye) of approximately 40 degrees.
  3. It can give stereopsis in which binocular disparity (or parallax) provided by the two eyes' different positions on the head gives precise depth perception. This also allows a creature to break the camouflage of another creature.
  4. It allows the angles of the eyes' lines of sight, relative to each other (vergence), and those lines relative to a particular object (gaze angle) to be determined from the images in the two eyes. These properties are necessary for the third advantage.
  5. It allows a creature to see more of, or all of, an object behind an obstacle. This advantage was pointed out by Leonardo da Vinci, who noted that a vertical column closer to the eyes than an object at which a creature is looking might block some of the object from the left eye but that part of the object might be visible to the right eye.
  6. It gives binocular summation in which the ability to detect faint objects is enhanced.

Other phenomena of binocular vision include utrocular discrimination (the ability to tell which of two eyes has been stimulated by light), eye dominance (the habit of using one eye when aiming at something, even if both eyes are open), allelotropia (the averaging of the visual direction of objects viewed by each eye when both eyes are open), binocular fusion or singleness of vision (seeing one object with both eyes despite each eye having its own image of the object), and binocular rivalry (seeing one eye's image alternating randomly with the other when each eye views images that are so different they cannot be fused).

Binocular vision helps with performance skills such as catching, grasping, and locomotion. It also allows humans to walk over and around obstacles at greater speed and with more assurance. Optometrists and orthoptists are eyecare professionals who treat binocular vision problems.

Etymology

The term binocular comes from two Latin roots, bini for double, and oculus for eye.

Field of view and eye movements

The field of view of a pigeon compared to that of an owl.

Some animals – usually, but not always, prey animals – have their two eyes positioned on opposite sides of their heads to give the widest possible field of view. Examples include rabbits, buffalo, and antelopes. In such animals, the eyes often move independently to increase the field of view. Even without moving their eyes, some birds have a 360-degree field of view.

Some other animals – usually, but not always, predatory animals – have their two eyes positioned on the front of their heads, thereby allowing for binocular vision and reducing their field of view in favor of stereopsis. However, front-facing eyes are a highly evolved trait in vertebrates, and there are only three extant groups of vertebrates with truly forward-facing eyes: primates, carnivorous mammals, and birds of prey.

Some predatory animals, particularly large ones such as sperm whales and killer whales, have their two eyes positioned on opposite sides of their heads, although it is possible they have some binocular visual field. Other animals that are not necessarily predators, such as fruit bats and a number of primates, also have forward-facing eyes. These are usually animals that need fine depth discrimination/perception; for instance, binocular vision improves the ability to pick a chosen fruit or to find and grasp a particular branch.

The direction of a point relative to the head (the angle between the straight ahead position and the apparent position of the point, from the egocenter) is called visual direction, or version. The angle between the line of sight of the two eyes when fixating a point is called the absolute disparity, binocular parallax, or vergence demand (usually just vergence). The relation between the position of the two eyes, version and vergence is described by Hering's law of visual direction.
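The vergence geometry just described lends itself to a quick numeric sketch. The function below computes the vergence demand for a point fixated straight ahead; the interocular distance of 6.5 cm and the two viewing distances are illustrative assumptions, not values from this article:

```python
import math

# Vergence demand: the angle between the two lines of sight when fixating
# a point straight ahead at distance d_cm, given an interocular distance
# interocular_cm. Each eye rotates inward by atan((I/2)/d), so the total
# vergence angle is twice that.
def vergence_deg(d_cm, interocular_cm=6.5):
    return math.degrees(2 * math.atan((interocular_cm / 2) / d_cm))

print(round(vergence_deg(40.0), 2))   # reading distance: about 9.29 degrees
print(round(vergence_deg(600.0), 2))  # across a room: about 0.62 degrees
```

As the numbers suggest, vergence demand falls off roughly inversely with viewing distance, which is why vergence is a useful distance cue mainly in near space.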

In animals with forward-facing eyes, the eyes usually move together.

The grey crowned crane, an animal that has laterally-placed eyes which can also face forward.

Eye movements are either conjunctive (in the same direction), called version eye movements, usually described by their type: saccades or smooth pursuit (also nystagmus and the vestibulo-ocular reflex); or disjunctive (in opposite directions), called vergence eye movements. The relation between version and vergence eye movements in humans (and most animals) is described by Hering's law of equal innervation.

Some animals use both of the above strategies. A starling, for example, has laterally placed eyes to cover a wide field of view, but can also move them together to point to the front so their fields overlap giving stereopsis. A remarkable example is the chameleon, whose eyes appear as if mounted on turrets, each moving independently of the other, up or down, left or right. Nevertheless, the chameleon can bring both of its eyes to bear on a single object when it is hunting, showing vergence and stereopsis.

Binocular summation

Binocular summation is the process by which the detection threshold for a stimulus is lower with two eyes than with one. Several outcomes are possible when comparing binocular performance to monocular performance. Neural binocular summation occurs when the binocular response is greater than the prediction of probability summation. Probability summation assumes complete independence between the eyes and predicts a sensitivity ratio ranging between 9 and 25%. Binocular inhibition occurs when binocular performance is worse than monocular performance, suggesting that a weak eye degrades the combined vision of both eyes. Binocular summation is maximal when the monocular sensitivities are equal; unequal monocular sensitivities, as occur in vision disorders such as unilateral cataract and amblyopia, decrease binocular summation. Other factors that can affect binocular summation include spatial frequency, the retinal points stimulated, and temporal separation.
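The independence assumption behind probability summation can be sketched as a simple combination rule for detection probabilities. This is a minimal illustration of the baseline only; the 9–25% figure quoted above concerns sensitivity ratios and depends on psychophysical details not modeled here. The probability value is illustrative:

```python
# Probability summation assumes the two eyes detect independently: if each
# eye alone detects a faint stimulus with probability p, the stimulus is
# missed only when both eyes miss it, giving 1 - (1 - p)^2 binocularly.
def probability_summation(p_monocular):
    return 1 - (1 - p_monocular) ** 2

p = 0.5                                  # illustrative monocular hit rate
p_bin = probability_summation(p)
print(p_bin)                             # 0.75
```

Neural summation is inferred when measured binocular performance exceeds this independent-eyes baseline; binocular inhibition, when performance falls below the monocular level.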

Binocular interaction

Apart from binocular summation, the two eyes can influence each other in at least three ways.

  • Pupillary diameter. Light falling in one eye affects the diameter of the pupils in both eyes. One can easily see this by looking at a friend's eye while he or she closes the other: when the other eye is open, the pupil of the first eye is small; when the other eye is closed, the pupil of the first eye is large.
  • Accommodation and vergence. Accommodation is the state of focus of the eye. If one eye is open and the other closed, and one focuses on something close, the accommodation of the closed eye will become the same as that of the open eye. Moreover, the closed eye will tend to converge to point at the object. Accommodation and convergence are linked by a reflex, so that one evokes the other.
  • Interocular transfer. The state of adaptation of one eye can have a small effect on the state of light adaptation of the other. Aftereffects induced through one eye can be measured through the other.

Singleness of vision

Where the fields of view overlap, there is a potential for confusion between the left and right eye's image of the same object. This can be dealt with in two ways: one image can be suppressed, so that only the other is seen, or the two images can be fused. If two images of a single object are seen, this is known as double vision or diplopia.

Fusion of images (commonly referred to as 'binocular fusion') occurs only in a small volume of visual space around where the eyes are fixating. Running through the fixation point in the horizontal plane is a curved line such that objects on it fall on corresponding retinal points in the two eyes. This line is called the empirical horizontal horopter. There is also an empirical vertical horopter, which is effectively tilted away from the eyes above the fixation point and towards the eyes below the fixation point. The horizontal and vertical horopters mark the centre of the volume of singleness of vision. Within this thin, curved volume, objects slightly nearer and farther than the horopters are still seen as single. The volume is known as Panum's fusional area (it is presumably called an area because Panum measured it only in the horizontal plane). Outside Panum's fusional area (volume), double vision occurs.

Eye dominance

When each eye has its own image of objects, it becomes impossible to align images outside of Panum's fusional area with an image inside the area. This happens when one has to point to a distant object with one's finger. When one looks at one's fingertip, it is single but there are two images of the distant object. When one looks at the distant object it is single but there are two images of one's fingertip. To point successfully, one of the double images has to take precedence and one be ignored or suppressed (termed "eye dominance"). The eye that can both move faster to the object and stay fixated on it is more likely to be the dominant eye.

Stereopsis

The overlapping of vision occurs because of the position of the eyes on the head (the eyes are located on the front of the head, not on the sides). This overlap allows each eye to view objects from a slightly different viewpoint, and as a result binocular vision provides depth. Stereopsis (from stereo- meaning "solid" or "three-dimensional", and opsis meaning "appearance" or "sight") is the impression of depth that is perceived when a scene is viewed with both eyes by someone with normal binocular vision. Binocular viewing of a scene creates two slightly different images of the scene in the two eyes due to the eyes' different positions on the head. These differences, referred to as binocular disparity, provide information that the brain can use to calculate depth in the visual scene, providing a major means of depth perception. There are two aspects of stereopsis: the nature of the stimulus information specifying stereopsis, and the nature of the brain processes responsible for registering that information. The distance between the two eyes in an adult is almost always about 6.5 cm, and this separation determines how far an image appears to shift when one views a scene with one eye and then the other. Retinal disparity is the difference in the positions of objects as seen by the left eye and the right eye, and helps to provide depth perception. Retinal disparity provides relative depth between two objects, but not exact or absolute depth. The closer two objects are to each other in depth, the smaller the retinal disparity; the farther apart they are in depth, the larger the retinal disparity. When objects are at equal distances, the two eyes view them identically and there is zero disparity.
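The relation between depth separation, viewing distance, and retinal disparity can be sketched numerically. The 6.5 cm interocular distance follows the text above; the object distances below are illustrative assumptions:

```python
import math

# Angular disparity between two points straight ahead at distances d1 and d2:
# the difference between their vergence angles, where each vergence angle is
# 2 * atan((I/2)/d) for interocular distance I.
def vergence_rad(d_cm, interocular_cm=6.5):
    return 2 * math.atan((interocular_cm / 2) / d_cm)

def disparity_deg(d1_cm, d2_cm, interocular_cm=6.5):
    return math.degrees(vergence_rad(d1_cm, interocular_cm)
                        - vergence_rad(d2_cm, interocular_cm))

print(round(disparity_deg(50, 55), 3))    # a 5 cm depth step seen from 50 cm
print(round(disparity_deg(500, 505), 3))  # the same 5 cm step seen from 5 m
print(disparity_deg(100, 100))            # equal distances: 0.0
```

The same depth separation yields a far smaller disparity at greater viewing distance, which is why stereopsis is most precise in near space; disparity is exactly zero for objects at equal distances, as the text states.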

Allelotropia

Because the eyes are in different positions on the head, any object away from fixation and off the plane of the horopter has a different visual direction in each eye. Yet when the two monocular images of the object are fused, creating a Cyclopean image, the object has a new visual direction, essentially the average of the two monocular visual directions. This is called allelotropia. The origin of the new visual direction is a point approximately between the two eyes, the so-called cyclopean eye. The position of the cyclopean eye is not usually exactly centered between the eyes, but tends to be closer to the dominant eye.

Binocular rivalry

When very different images are shown to the same retinal regions of the two eyes, perception settles on one for a few moments, then the other, then the first, and so on, for as long as one cares to look. This alternation of perception between the images of the two eyes is called binocular rivalry. Binocular rivalry is thought to occur because humans have limited capacity to process an image fully at one time. Several factors can influence how long gaze remains on one of the two images, including context, increased contrast, motion, spatial frequency, and inverted images. Recent studies have even shown that facial expressions can cause longer attention to a particular image: when an emotional facial expression is presented to one eye and a neutral expression is presented to the other eye, the emotional face dominates the neutral face and can even cause the neutral face not to be seen.

Disorders

To maintain stereopsis and singleness of vision, the eyes need to be pointed accurately. The position of each eye in its orbit is controlled by six extraocular muscles. Slight differences in the length, insertion position, or strength of the same muscles in the two eyes can lead to a tendency for one eye to drift to a different position in its orbit from the other, especially when one is tired. This is known as phoria. One way to reveal it is with the cover-uncover test. To do this test, look at a cooperative person's eyes. Cover one eye of that person with a card. Have the person look at your fingertip. Move the finger around; this is to break the reflex that normally holds a covered eye in the correct vergence position. Hold your finger steady and then uncover the person's eye. Look at the uncovered eye. You may see it flick quickly from being wall-eyed or cross-eyed to its correct position. If the uncovered eye moved from out to in, the person has esophoria. If it moved from in to out, the person has exophoria. If the eye did not move at all, the person has orthophoria. Most people have some amount of exophoria or esophoria; it is quite normal. If the uncovered eye also moved vertically, the person has hyperphoria (if the eye moved from down to up) or hypophoria (if the eye moved from up to down). Such vertical phorias are quite rare. It is also possible for the covered eye to rotate in its orbit, a condition known as cyclophoria; cyclophorias are rarer than vertical phorias. The cover test may also be used to determine the direction of deviation in cyclophorias.

The cover-uncover test can also be used for more problematic disorders of binocular vision, the tropias. In the cover part of the test, the examiner looks at the first eye as he or she covers the second. If the eye moves from in to out, the person has exotropia. If it moves from out to in, the person has esotropia. People with exotropia or esotropia are wall-eyed or cross-eyed respectively. These are forms of strabismus that can be accompanied by amblyopia. There are numerous definitions of amblyopia. A definition that incorporates all of them defines amblyopia as a unilateral condition in which vision is worse than 20/20 in the absence of any obvious structural or pathologic anomalies, but with one or more of the following conditions occurring before the age of six: amblyogenic anisometropia, constant unilateral esotropia or exotropia, amblyogenic bilateral isometropia, amblyogenic unilateral or bilateral astigmatism, or image degradation. When the covered eye is the non-amblyopic eye, the amblyopic eye suddenly becomes the person's only means of seeing. The strabismus is revealed by the movement of that eye to fixate on the examiner's finger. There are also vertical tropias (hypertropia and hypotropia) and cyclotropias.

Binocular vision anomalies include: diplopia (double vision), visual confusion (the perception of two different images superimposed onto the same space), suppression (where the brain ignores all or part of one eye's visual field), horror fusionis (an active avoidance of fusion by eye misalignment), and anomalous retinal correspondence (where the brain associates the fovea of one eye with an extrafoveal area of the other eye).

Binocular vision anomalies are among the most common visual disorders. They are usually associated with symptoms such as headaches, asthenopia, eye pain, blurred vision, and occasional diplopia. About 20% of patients who come to optometry clinics have binocular vision anomalies. Many children now use digital devices for significant periods of time, which could lead to various binocular vision anomalies (such as reduced amplitudes of accommodation, accommodative facility, and positive fusional vergence at both near and distance). The most effective way to diagnose binocular vision anomalies is with the near point of convergence (NPC) test. During the NPC test, a target, such as a finger, is brought towards the face until the examiner notices that one eye has turned outward or the person has experienced diplopia (double vision).

To a certain extent, binocular disparities can be compensated for by adjustments of the visual system. If, however, defects of binocular vision are too great – for example, if they would require the visual system to adapt to overly large horizontal, vertical, torsional or aniseikonic deviations – the eyes tend to avoid binocular vision, ultimately causing or worsening a condition of strabismus.
