Saturday, March 16, 2019

Neurotechnology

From Wikipedia, the free encyclopedia

Neurotechnology is any technology that has a fundamental influence on how people understand the brain and various aspects of consciousness, thought, and higher order activities in the brain. It also includes technologies that are designed to improve and repair brain function and allow researchers and clinicians to visualize the brain.

Background

The field of neurotechnology has been around for nearly half a century but has only reached maturity in the last twenty years. The advent of brain imaging revolutionized the field, allowing researchers to directly monitor the brain's activities during experiments. Neurotechnology has made a significant impact on society, though its presence is so commonplace that many do not realize its ubiquity. From pharmaceutical drugs to brain scanning, neurotechnology affects nearly all industrialized people either directly or indirectly, be it from drugs for depression, sleep, ADD, or anti-neurotics to cancer scanning, stroke rehabilitation, and much more.

As the field's depth increases it will potentially allow society to control and harness more of what the brain does and how it influences lifestyles and personalities. Commonplace technologies already attempt to do this; games like BrainAge, and programs like Fast ForWord that aim to improve brain function, are neurotechnologies. 

Currently, modern science can image nearly all aspects of the brain as well as control a degree of its function. It can help control depression, over-activation, sleep deprivation, and many other conditions. Therapeutically it can help improve stroke victims' motor coordination, improve brain function, reduce epileptic episodes, help patients with degenerative motor diseases (Parkinson's disease, Huntington's disease, ALS), and can even help alleviate phantom pain perception. Advances in the field promise many new enhancements and rehabilitation methods for patients suffering from neurological problems. The neurotechnology revolution has given rise to the Decade of the Mind initiative, which was started in 2007. It also offers the possibility of revealing the mechanisms by which mind and consciousness emerge from the brain.

Current technologies

Live Imaging

Magnetoencephalography is a functional neuroimaging technique for mapping brain activity by recording magnetic fields produced by electrical currents occurring naturally in the brain, using very sensitive magnetometers. Arrays of SQUIDs (superconducting quantum interference devices) are the most commonly used magnetometers. Applications of MEG include basic research into perceptual and cognitive brain processes, localizing regions affected by pathology before surgical removal, determining the function of various parts of the brain, and neurofeedback. This can be applied in a clinical setting to find locations of abnormalities as well as in an experimental setting to simply measure brain activity.

Magnetic resonance imaging (MRI) is used to scan the brain for topological and landmark structure, but can also be used to image activation in the brain. While detail about how MRI works is reserved for the actual MRI article, the uses of MRI are far reaching in the study of neuroscience. It is a cornerstone technology in studying the mind, especially with the advent of functional MRI (fMRI). Functional MRI measures the oxygen levels in the brain upon activation (higher oxygen content = neural activation) and allows researchers to understand which loci are responsible for activation under a given stimulus. This technology is a large improvement over single-cell or locus activation methods, which require exposing the brain and stimulating it by direct contact. Functional MRI allows researchers to draw associative relationships between different loci and regions of the brain and provides a large amount of knowledge in establishing new landmarks and loci in the brain.
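The block-design logic behind fMRI activation mapping can be sketched in a few lines: compare each voxel's signal against the stimulus timing and flag voxels whose signal rises with the stimulus. The sketch below uses simulated data and is not a real fMRI pipeline (real analyses convolve the stimulus with a hemodynamic response function and correct for multiple comparisons); all names and thresholds here are illustrative.

```python
import numpy as np

# Toy sketch: find "activated" voxels by correlating each voxel's
# BOLD-like time series with an on/off stimulus block design.
rng = np.random.default_rng(0)

n_scans = 100
stimulus = np.tile([0] * 10 + [1] * 10, 5)[:n_scans]  # 10-scan on/off blocks

# Simulate 3 voxels: one tracks the stimulus, two are pure noise.
voxels = rng.normal(0, 1, size=(3, n_scans))
voxels[0] += 2.0 * stimulus  # "active" voxel gains signal during stimulation

for i, ts in enumerate(voxels):
    r = np.corrcoef(stimulus, ts)[0, 1]
    print(f"voxel {i}: r = {r:+.2f}  {'ACTIVE' if r > 0.5 else 'quiet'}")
```

Only the voxel whose signal co-varies with the stimulus shows a strong correlation; the noise voxels hover near zero.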

Computed tomography (CT) is another technology used for scanning the brain. It has been used since the 1970s and is another tool used by neuroscientists to track brain structure and activation. While many of the functions of CT scans are now done using MRI, CT can still be used as the mode by which brain activation and brain injury are detected. Using an X-ray, researchers can detect radioactive markers in the brain that indicate brain activation as a tool to establish relationships in the brain as well as detect many injuries/diseases that can cause lasting damage to the brain such as aneurysms, degeneration, and cancer. 

Positron emission tomography (PET) is another imaging technology that aids researchers. Instead of using magnetic resonance or X-rays, PET scans rely on positron-emitting markers that are bound to a biologically relevant molecule such as glucose. The more activation in the brain, the more that region requires nutrients, so higher activation appears more brightly on an image of the brain. PET scans are being used more frequently by researchers because PET reflects metabolic activity (glucose uptake), whereas fMRI reflects a more physiological signal (blood oxygenation).

Transcranial magnetic stimulation

Transcranial magnetic stimulation (TMS) is essentially direct magnetic stimulation to the brain. Because electric currents and magnetic fields are intrinsically related, by stimulating the brain with magnetic pulses it is possible to interfere with specific loci in the brain to produce a predictable effect. This field of study is currently receiving a large amount of attention due to the potential benefits that could come out of better understanding this technology. Transcranial magnetic movement of particles in the brain shows promise for drug targeting and delivery as studies have demonstrated this to be noninvasive on brain physiology.

Transcranial direct current stimulation

Transcranial direct current stimulation (tDCS) is a form of neurostimulation which uses constant, low current delivered via electrodes placed on the scalp. The mechanisms underlying tDCS effects are still incompletely understood, but recent advances in neurotechnology allowing for in vivo assessment of brain electric activity during tDCS promise to advance understanding of these mechanisms. Research into using tDCS on healthy adults has demonstrated that tDCS can increase cognitive performance on a variety of tasks, depending on the area of the brain being stimulated. tDCS has been used to enhance language and mathematical ability (though one form of tDCS was also found to inhibit math learning), attention span, problem solving, memory, and coordination.

Cranial surface measurements

Electroencephalography (EEG) is a method of measuring brainwave activity non-invasively. A number of electrodes are placed around the head and scalp and electrical signals are measured. Typically EEGs are used when dealing with sleep, as there are characteristic wave patterns associated with different stages of sleep. Clinically EEGs are used to study epilepsy as well as stroke and tumor presence in the brain. EEGs offer another way to understand the electrical signaling in the brain during activation.
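The characteristic wave patterns mentioned above are conventionally grouped into frequency bands (delta, theta, alpha, beta). A minimal sketch of how a recording could be assigned to a band is to compute an FFT power spectrum and report where most of the power lies; the band limits below are the conventional ones, and the signals are synthetic stand-ins for real EEG.

```python
import numpy as np

# Conventional EEG frequency bands in Hz.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def dominant_band(signal, fs):
    """Return the name of the band holding the most spectral power."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    totals = {
        name: power[(freqs >= lo) & (freqs < hi)].sum()
        for name, (lo, hi) in BANDS.items()
    }
    return max(totals, key=totals.get)

fs = 256  # Hz, a common EEG sampling rate
t = np.arange(0, 4, 1 / fs)
alpha_wave = np.sin(2 * np.pi * 10 * t)  # 10 Hz: relaxed, eyes-closed rhythm
delta_wave = np.sin(2 * np.pi * 2 * t)   # 2 Hz: deep (slow-wave) sleep

print(dominant_band(alpha_wave, fs))  # alpha
print(dominant_band(delta_wave, fs))  # delta
```

Real sleep staging combines band power across many channels and epochs, but the band-power idea is the same.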

Magnetoencephalography (MEG) is another method of measuring activity in the brain by measuring the magnetic fields that arise from electrical currents in the brain. The benefit to using MEG instead of EEG is that these fields are highly localized and give rise to better understanding of how specific loci react to stimulation or if these regions over-activate (as in epileptic seizures).

Implant technologies

Neurodevices are any devices used to monitor or regulate brain activity. Currently a few are available for clinical use as a treatment for Parkinson's disease. The most common neurodevices are deep brain stimulators (DBS), which deliver electrical stimulation to areas stricken by inactivity. Parkinson's disease is known to be caused by an inactivation of the basal ganglia (nuclei), and DBS has recently become the preferred form of treatment for Parkinson's disease, although current research questions the efficacy of DBS for movement disorders.

Neuromodulation is a relatively new field that combines the use of neurodevices and neurochemistry. The basis of this field is that the brain can be regulated using a number of different factors (metabolic, electrical stimulation, physiological) and that all of these can be modulated by devices implanted in the neural network. While this field is still in the research phase, it represents a new type of technological integration in the field of neurotechnology. The brain is a very sensitive organ, so in addition to researching the remarkable things that neuromodulation and implanted neural devices can produce, it is important to research ways to create devices that elicit as few negative responses from the body as possible. This can be done by modifying the material surface chemistry of neural implants.

Cell therapy

Researchers have begun looking at uses for stem cells in the brain, which recently have been found in a few loci. A large number of studies are being done to determine if this form of therapy could be used on a large scale. Experiments have successfully used stem cells in the brains of children who suffered from injuries in gestation and elderly people with degenerative diseases in order to induce the brain to produce new cells and to make more connections between neurons.

Pharmaceuticals

Pharmaceuticals play a vital role in maintaining stable brain chemistry, and are the neurotechnology most commonly used by the general public and in medicine. Drugs like sertraline, methylphenidate, and zolpidem act as chemical modulators in the brain, and they allow for normal activity in many people whose brains cannot act normally under physiological conditions. While pharmaceuticals are usually not mentioned and have their own field, their role is perhaps the most far-reaching and commonplace in modern society (this article largely ignores neuropharmaceuticals; for more information, see neuropsychopharmacology). Movement of magnetic particles to targeted brain regions for drug delivery is an emerging field of study and causes no detectable circuit damage.

Low field magnetic stimulation

Stimulation with low-intensity magnetic fields is currently under study for depression at Harvard Medical School, and has previously been explored by Bell et al., Marino et al., and others. It has FDA approval for treatment of depression, and is also being researched for other applications such as autism. One complication is that no two brains are alike, and the same stimulation can cause either polarization or depolarization.

How these help study the brain

Magnetic resonance imaging is a vital tool in neurological research for showing activation in the brain as well as providing a comprehensive image of the brain being studied. While MRI is used clinically for showing brain size, it still has relevance in the study of the brain because it can be used to determine the extent of injuries or deformation. These can have a significant effect on personality, sense perception, memory, higher order thinking, movement, and spatial understanding. However, current research tends to focus more on fMRI or real-time functional MRI (rtfMRI). These two methods allow the scientist or the participant, respectively, to view activation in the brain. This is incredibly vital in understanding how a person thinks and how their brain reacts to its environment, as well as understanding how the brain works under various stressors or dysfunctions. Real-time functional MRI is a revolutionary tool available to neurologists and neuroscientists because patients can see how their brain reacts to stressors and can perceive visual feedback. CT scans are very similar to MRI in their academic use because they can be used to image the brain upon injury, but they are more limited in perceptual feedback. CTs are generally used in clinical studies far more than in academic studies, and are found far more often in a hospital than a research facility. PET scans are also finding more relevance in academia because they can be used to observe metabolic uptake of neurons, giving researchers a wider perspective about neural activity in the brain for a given condition. Combinations of these methods can provide researchers with knowledge of both physiological and metabolic behaviors of loci in the brain and can be used to explain activation and deactivation of parts of the brain under specific conditions.

Transcranial magnetic stimulation is a relatively new method of studying how the brain functions and is used in many research labs focused on behavioral disorders and hallucinations. What makes TMS research so interesting in the neuroscience community is that it can target specific regions of the brain and temporarily deactivate or activate them, thereby changing the way the brain behaves. Personality disorders can stem from a variety of external factors, but when the disorder stems from the circuitry of the brain TMS can be used to deactivate the circuitry. This can give rise to a number of responses, ranging from “normality” to something more unexpected, but current research is based on the theory that use of TMS could radically change treatment and perhaps act as a cure for personality disorders and hallucinations. Currently, repetitive transcranial magnetic stimulation (rTMS) is being researched to see if this deactivation effect can be made more permanent in patients suffering from these disorders. Some techniques combine TMS with another scanning method, such as EEG, to get additional information about brain activity such as cortical response.

Both EEG and MEG are currently being used to study the brain's activity under different conditions. Each uses similar principles but allows researchers to examine individual regions of the brain, allowing isolation and potentially specific classification of active regions. As mentioned above, EEG is very useful in analysis of immobile patients, typically during the sleep cycle. While there are other types of research that utilize EEG, EEG has been fundamental in understanding the resting brain during sleep. There are other potential uses for EEG and MEG such as charting rehabilitation and improvement after trauma as well as testing neural conductivity in specific regions of epileptics or patients with personality disorders. 

Neuromodulation can involve numerous technologies combined or used independently to achieve a desired effect in the brain. Gene and cell therapy are becoming more prevalent in research and clinical trials and these technologies could help stunt or even reverse disease progression in the central nervous system. Deep brain stimulation is currently used in many patients with movement disorders and is used to improve the quality of life in patients. While deep brain stimulation is a method to study how the brain functions per se, it provides both surgeons and neurologists important information about how the brain works when certain small regions of the basal ganglia (nuclei) are stimulated by electrical currents.

Future technologies

The future of neurotechnologies lies in how they are fundamentally applied, and not so much in what new versions will be developed. Current technologies give a large amount of insight into the mind and how the brain functions, but basic research is still needed to demonstrate the more applied functions of these technologies. Currently, rtfMRI is being researched as a method for pain therapy. deCharms et al. have shown that there is a significant improvement in the way people perceive pain if they are made aware of how their brain is functioning while in pain. By providing direct and understandable feedback, researchers can help patients with chronic pain decrease their symptoms. This new type of bio/mechanical feedback is a new development in pain therapy. Functional MRI is also being considered for a number of more applicable uses outside of the clinic. Research has tested the efficacy of mapping brain activity while a person lies as a new way to detect lying. In the same vein, EEG has been considered for use in lie detection as well. TMS is being used in a variety of potential therapies for patients with personality disorders, epilepsy, PTSD, migraine, and other brain-firing disorders, but has been found to have varying clinical success for each condition. The end result of such research would be to develop a method to alter the brain's perception and firing and train patients' brains to rewire permanently under inhibiting conditions (for more information see rTMS). In addition, PET scans have been found to be 93% accurate in detecting Alzheimer's disease nearly 3 years before conventional diagnosis, indicating that PET scanning is becoming more useful in both the laboratory and the clinic.

Stem cell technologies are always salient both in the minds of the general public and scientists because of their large potential. Recent advances in stem cell research have allowed researchers to ethically pursue studies in nearly every facet of the body, which includes the brain. Research has shown that while most of the brain does not regenerate and is typically a very difficult environment to foster regeneration, there are portions of the brain with regenerative capabilities (specifically the hippocampus and the olfactory bulbs). Much of the research in central nervous system regeneration concerns how to overcome this poor regenerative quality of the brain. It is important to note that there are therapies that improve cognition and increase the number of neural pathways, but this does not mean that there is a proliferation of neural cells in the brain. Rather, it is called a plastic rewiring of the brain (plastic because it indicates malleability) and is considered a vital part of growth. Nevertheless, many problems in patients stem from death of neurons in the brain, and researchers in the field are striving to produce technologies that enable regeneration in patients with stroke, Parkinson's disease, severe trauma, and Alzheimer's disease, as well as many others. While still in fledgling stages of development, researchers have recently begun making very interesting progress in attempting to treat these diseases. Researchers have recently succeeded in producing dopaminergic neurons for transplant in patients with Parkinson's disease, with the hope that these patients will be able to move again with a more steady supply of dopamine. Many researchers are building scaffolds that could be transplanted into a patient with spinal cord trauma to present an environment that promotes growth of axons (portions of the cell attributed with transmission of electrical signals) so that patients unable to move or feel might be able to do so again.
The potentials are wide-ranging, but it is important to note that many of these therapies are still in the laboratory phase and are slowly being adopted in the clinic. Some scientists remain skeptical of the development of the field, warning that there is a much larger chance that electrical prostheses will be developed to solve clinical problems such as hearing loss or paralysis before cell therapy is used in a clinic.

Novel drug delivery systems are being researched in order to improve the lives of those who struggle with brain disorders that might not be treated with stem cells, modulation, or rehabilitation. Pharmaceuticals play a very important role in society, and the brain has a very selective barrier that prevents some drugs from going from the blood to the brain. There are some diseases of the brain such as meningitis that require doctors to directly inject medicine into the spinal cord because the drug cannot cross the blood–brain barrier. Research is being conducted to investigate new methods of targeting the brain using the blood supply, as it is much easier to inject into the blood than the spine. New technologies such as nanotechnology are being researched for selective drug delivery, but these technologies have problems as with any other. One of the major setbacks is that when a particle is too large, the patient's liver will take up the particle and degrade it for excretion, but if the particle is too small there will not be enough drug in the particle to take effect. In addition, the size of the capillary pore is important because too large a particle might not fit or even plug up the hole, preventing adequate supply of the drug to the brain. Other research is involved in integrating a protein device between the layers to create a free-flowing gate that is unimpeded by the limitations of the body. Another direction is receptor-mediated transport, where receptors in the brain used to transport nutrients are manipulated to transport drugs across the blood–brain barrier. Some have even suggested that focused ultrasound opens the blood–brain barrier momentarily and allows free passage of chemicals into the brain. Ultimately the goal for drug delivery is to develop a method that maximizes the amount of drug in the loci with as little degraded in the blood stream as possible.
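The particle-size trade-off described above can be summarized as a simple screen: a particle must be small enough to evade hepatic clearance, large enough to carry a useful dose, and must fit through the capillary pore. The thresholds below are invented for illustration only; real cutoffs depend on the particle chemistry and the target tissue.

```python
# Toy screen for the nanoparticle size trade-off. All thresholds (nm)
# are hypothetical, chosen only to illustrate the competing constraints.
def deliverable(diameter_nm, pore_nm=100, min_payload_nm=20, liver_cutoff_nm=200):
    if diameter_nm >= liver_cutoff_nm:
        return "cleared by liver"          # too large: taken up and degraded
    if diameter_nm < min_payload_nm:
        return "payload too small"         # too small: not enough drug
    if diameter_nm > pore_nm:
        return "cannot pass capillary pore"
    return "deliverable"

for d in (10, 50, 150, 250):
    print(d, "nm ->", deliverable(d))
```

The point is that the feasible window is bounded on both sides, which is why particle sizing dominates this line of research.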

Neuromodulation is a technology currently used for patients with movement disorders, although research is currently being done to apply this technology to other disorders. Recently, a study examined whether DBS could improve depression, with positive results, indicating that this technology might have potential as a therapy for multiple disorders in the brain. DBS is limited by its high cost, however, and in developing countries the availability of DBS is very limited. A new version of DBS is under investigation and has given rise to the novel field of optogenetics, which combines deep brain stimulation with fiber optics and gene therapy. Essentially, the fiber optic cables are designed to light up under electrical stimulation, and a protein is added to a neuron via gene therapy to excite it under light stimuli. By combining these three independent fields, a surgeon could excite a single, specific neuron in order to help treat a patient with some disorder. Neuromodulation offers a wide degree of therapy for many patients, but due to the nature of the disorders it is currently used to treat, its effects are often temporary. Future goals in the field hope to alleviate that problem by increasing the years of effect until DBS can be used for the remainder of the patient's life. Another use for neuromodulation would be in building neuro-interface prosthetic devices that would allow quadriplegics the ability to maneuver a cursor on a screen with their thoughts, thereby increasing their ability to interact with others around them. By understanding the motor cortex and how the brain signals motion, it is possible to emulate this response on a computer screen.
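The cursor-control idea above can be sketched at its simplest: record firing rates from a population of motor-cortex neurons while the cursor moves, fit a linear mapping from rates to velocity, and then use that mapping to drive the cursor. The data below are entirely simulated, and a least-squares linear decoder is only the most basic of the methods used in real neural interfaces.

```python
import numpy as np

# Simulated "motor cortex": each neuron's rate carries a weighted
# contribution to 2-D cursor velocity, plus noise.
rng = np.random.default_rng(1)

n_neurons, n_samples = 20, 500
true_map = rng.normal(size=(n_neurons, 2))  # neuron -> (vx, vy) weights
rates = rng.poisson(5, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_map + rng.normal(0, 0.5, size=(n_samples, 2))

# Fit the decoder on the first 400 samples, evaluate on the rest.
W, *_ = np.linalg.lstsq(rates[:400], velocity[:400], rcond=None)
pred = rates[400:] @ W

err = np.mean(np.abs(pred - velocity[400:]))
print(f"mean decoding error: {err:.2f}")
```

With enough neurons and training samples the decoder recovers the mapping closely, which is the intuition behind thought-driven cursor control.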

Ethics

Stem cells

The ethical debate about use of embryonic stem cells has stirred controversy both in the United States and abroad, although more recently these debates have lessened due to modern advances in creating induced pluripotent stem cells from adult cells. The greatest advantage of embryonic stem cells is that they can differentiate into (become) nearly any type of cell given the right conditions and signals. However, recent advances by Shinya Yamanaka et al. have found ways to create pluripotent cells without the use of such controversial cell cultures. Using the patient's own cells and re-differentiating them into the desired cell type bypasses both possible patient rejection of the embryonic stem cells and any ethical concerns associated with using them, while also providing researchers a larger supply of available cells. However, induced pluripotent cells have the potential to form benign (though potentially malignant) tumors, and tend to have poor survivability in vivo (in the living body) on damaged tissue. Much of the ethical debate concerning the use of stem cells has shifted away from the embryonic/adult stem cell question because that question has been rendered largely moot, but societies now find themselves debating whether this technology can be ethically used at all. Enhancement of traits, use of animals for tissue scaffolding, and even arguments for moral degeneration have been raised, with fears that if this technology reaches its full potential a new paradigm shift will occur in human behavior.

Military application

New neurotechnologies have always attracted the interest of governments, from lie detection technology and virtual reality to rehabilitation and understanding the psyche. Due to the Iraq War and the War on Terror, up to 12% of American soldiers returning from Iraq and Afghanistan are reported to have PTSD. Many researchers hope to improve these soldiers' conditions by implementing new strategies for recovery. By combining pharmaceuticals and neurotechnologies, some researchers have discovered ways of lowering the "fear" response and theorize that it may be applicable to PTSD. Virtual reality is another technology that has drawn much attention in the military. If improved, it could be possible to train soldiers how to deal with complex situations in times of peace, in order to better prepare and train a modern army.

Privacy

Finally, as these technologies are being developed, society must understand that they could reveal the one thing that people can always keep secret: what they are thinking. While there are large benefits associated with these technologies, it is necessary for scientists, citizens and policy makers alike to consider implications for privacy. This concern is prominent in many ethical circles concerned with the state and goals of progress in the field of neurotechnology (see Neuroethics). Current developments such as “brain fingerprinting” or lie detection using EEG or fMRI could give rise to an established mapping between brain loci and emotional states, although these technologies are still years away from full application. It is important to consider how all these neurotechnologies might affect the future of society, and it is suggested that political, scientific, and civil debates be heard about the implementation of these newer technologies that potentially offer a new wealth of once-private information. Some ethicists are also concerned about the use of TMS, fearing that the technique could be used to alter patients in ways that are undesired by the patient.

Cognitive liberty

Cognitive liberty refers to a suggested right to self-determination of individuals to control their own mental processes, cognition, and consciousness including by the use of various neurotechnologies and psychoactive substances. This perceived right is relevant for reformation and development of associated laws.

Neuropharmacology

From Wikipedia, the free encyclopedia

Neuropharmacology is the study of how drugs affect cellular function in the nervous system, and the neural mechanisms through which they influence behavior. There are two main branches of neuropharmacology: behavioral and molecular. Behavioral neuropharmacology focuses on the study of how drugs affect human behavior (neuropsychopharmacology), including the study of how drug dependence and addiction affect the human brain. Molecular neuropharmacology involves the study of neurons and their neurochemical interactions, with the overall goal of developing drugs that have beneficial effects on neurological function. Both of these fields are closely connected, since both are concerned with the interactions of neurotransmitters, neuropeptides, neurohormones, neuromodulators, enzymes, second messengers, co-transporters, ion channels, and receptor proteins in the central and peripheral nervous systems. Studying these interactions, researchers are developing drugs to treat many different neurological disorders, including pain, neurodegenerative diseases such as Parkinson's disease and Alzheimer's disease, psychological disorders, addiction, and many others.

History

Neuropharmacology did not appear as a scientific field until the early part of the 20th century, when scientists pieced together a basic understanding of the nervous system and of how nerves communicate with one another. Before this, drugs had been found that demonstrated some type of influence on the nervous system. In the 1930s, French scientists began working with a compound called phenothiazine in the hope of synthesizing a drug that would be able to combat malaria. Though this drug showed very little promise against malaria, it was found to have sedative effects along with what appeared to be beneficial effects for patients with Parkinson's disease. This black box method, wherein an investigator would administer a drug and examine the response without knowing how to relate drug action to patient response, was the main approach to the field until, in the late 1940s and early 1950s, scientists were able to identify specific neurotransmitters, such as norepinephrine (involved in the constriction of blood vessels and the increase in heart rate and blood pressure), dopamine (the chemical whose shortage is involved in Parkinson's disease), and serotonin (soon to be recognized as deeply connected to depression). In the 1950s, scientists also became better able to measure levels of specific neurochemicals in the body and thus correlate these levels with behavior. The invention of the voltage clamp in 1949 allowed for the study of ion channels and the nerve action potential. These two major historical events in neuropharmacology allowed scientists not only to study how information is transferred from one neuron to another but also to study how a neuron processes this information within itself.

Overview

Neuropharmacology is a very broad field of science that encompasses many aspects of the nervous system, from single-neuron manipulation to entire areas of the brain, spinal cord, and peripheral nerves. To better understand the basis behind drug development, one must first understand how neurons communicate with one another. This article will focus on both behavioral and molecular neuropharmacology: the major receptors, ion channels, and neurotransmitters manipulated through drug action, and how people with neurological disorders benefit from this drug action.

Neurochemical interactions

To understand the potential advances in medicine that neuropharmacology can bring, it is important to understand how human behavior and thought processes are transferred from neuron to neuron and how medications can alter the chemical foundations of these processes. 

Neurons are known as excitable cells because their surface membranes contain an abundance of proteins known as ion channels that allow small charged particles to pass in and out of the cell. The structure of the neuron allows chemical information to be received by its dendrites, propagated through the perikaryon (cell body) and down its axon, and eventually passed on to other neurons through its axon terminal.

[Figure: labeling of the different parts of a neuron]
These voltage-gated ion channels allow for rapid depolarization throughout the cell. This depolarization, if it reaches a certain threshold, will cause an action potential. Once the action potential reaches the axon terminal, it causes an influx of calcium ions into the cell. The calcium ions then cause vesicles, small packets filled with neurotransmitters, to bind to the cell membrane and release their contents into the synapse. The releasing cell is known as the pre-synaptic neuron, and the cell that interacts with the released neurotransmitters is known as the post-synaptic neuron. Once the neurotransmitter is released into the synapse, it can bind to receptors on the post-synaptic cell, be taken back up by the pre-synaptic cell and saved for later transmission, or be broken down by enzymes in the synapse specific to that neurotransmitter. These three different actions are major areas where drug action can affect communication between neurons.
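The threshold behavior described above can be sketched with the classic leaky integrate-and-fire model: depolarization accumulates against a membrane leak, and a spike fires only once the voltage crosses threshold, after which the membrane resets. All parameters below are illustrative textbook values, not measurements.

```python
# Minimal leaky integrate-and-fire sketch of the threshold mechanism.
def simulate(input_current, steps=200, dt=1.0):
    v, v_rest, v_thresh = -70.0, -70.0, -55.0  # membrane voltages (mV)
    tau = 20.0                                  # membrane time constant (ms)
    spikes = []
    for t in range(steps):
        dv = (-(v - v_rest) + input_current) / tau  # leak vs. input drive
        v += dv * dt
        if v >= v_thresh:      # threshold crossed -> action potential
            spikes.append(t)
            v = v_rest         # reset after the spike
    return spikes

print("weak input  :", len(simulate(10.0)), "spikes")  # settles below threshold
print("strong input:", len(simulate(30.0)), "spikes")  # fires repeatedly
```

A weak input depolarizes the cell but settles below threshold, so no spike occurs; a stronger input crosses threshold and the cell fires repeatedly. This all-or-nothing behavior is the action potential the text describes.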

There are two types of receptors that neurotransmitters interact with on a post-synaptic neuron. The first type is the ligand-gated ion channel, or LGIC. LGICs provide the fastest transduction from chemical signal to electrical signal: once the neurotransmitter binds to the receptor, it causes a conformational change that allows ions to flow directly into the cell. The second type is the G-protein-coupled receptor, or GPCR. GPCRs are much slower than LGICs because more biochemical reactions must take place intracellularly: once the neurotransmitter binds to the GPCR, it triggers a cascade of intracellular interactions that can lead to many different changes in cellular biochemistry, physiology, and gene expression. Neurotransmitter/receptor interactions are extremely important in neuropharmacology because many drugs developed today work by disrupting this binding process.

Molecular neuropharmacology

Molecular neuropharmacology involves the study of neurons, their neurochemical interactions, and the receptors on neurons, with the goal of developing new drugs to treat neurological disorders such as pain, neurodegenerative diseases, and psychological disorders (also known in this case as neuropsychopharmacology). A few technical terms must be defined when relating neurotransmission to receptor action:
  • Agonist – a molecule that binds to a receptor protein and activates that receptor
  • Competitive antagonist – a molecule that binds to the same site on the receptor protein as the agonist, preventing activation of the receptor
  • Non-competitive antagonist – a molecule that binds to a receptor protein on a different site than that of the agonist, but causes a conformational change in the protein that does not allow activation.
The following neurotransmitter/receptor interactions can be affected by synthetic compounds that act in one of the three ways above. Sodium and potassium ion channels throughout a neuron can also be manipulated to inhibit action potentials.
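
The difference between an agonist and a competitive antagonist can be made concrete with the standard receptor-occupancy (Gaddum) equation, under which a competitive antagonist raises the agonist concentration needed for a given occupancy without capping the maximum. The constants below are illustrative, not measured values for any real drug.

```python
# Fractional receptor occupancy by an agonist A in the presence of a
# competitive antagonist B (Gaddum equation). k_a and k_b are the
# dissociation constants of A and B; all numbers here are illustrative.
def occupancy(agonist, k_a, antagonist=0.0, k_b=1.0):
    """Fraction of receptors occupied by the agonist."""
    return agonist / (agonist + k_a * (1.0 + antagonist / k_b))

# An agonist at its own dissociation constant occupies half the receptors.
half = occupancy(agonist=1.0, k_a=1.0)                               # 0.5
# A competitive antagonist shifts the curve rightward...
blocked = occupancy(agonist=1.0, k_a=1.0, antagonist=9.0, k_b=1.0)   # ~0.09
# ...but enough agonist can still reach high occupancy (surmountable block).
rescued = occupancy(agonist=100.0, k_a=1.0, antagonist=9.0, k_b=1.0) # ~0.91
```

A non-competitive antagonist, by contrast, lowers the achievable maximum response rather than shifting the curve, which this simple equation does not capture.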

GABA

The neurotransmitter GABA mediates fast synaptic inhibition in the central nervous system. When GABA is released from a pre-synaptic cell, it binds to a receptor (most likely the GABAA receptor) that causes the post-synaptic cell to hyperpolarize (stay below its action potential threshold). This counteracts the effect of any excitatory input from other neurotransmitter/receptor interactions.

The GABAA receptor contains many binding sites that allow conformational changes, and these are the primary targets for drug development. The best known of these, the benzodiazepine binding site, allows for both agonist and antagonist effects on the receptor; a common drug, diazepam, acts as an allosteric enhancer at this site. Another GABA receptor, GABAB, can be enhanced by the molecule baclofen, which acts as an agonist, activating the receptor, and is known to help control and decrease spastic movement.

Dopamine

The neurotransmitter dopamine mediates synaptic transmission by binding to five specific GPCRs. These five receptor proteins are separated into two classes according to whether they elicit an excitatory or inhibitory response in the post-synaptic cell. Many types of drugs, legal and illegal, affect dopamine and its interactions in the brain. In Parkinson's disease, which decreases the amount of dopamine in the brain, the dopamine precursor levodopa (L-dopa) is given to the patient, because dopamine cannot cross the blood–brain barrier while L-dopa can. Dopamine agonists are also given to Parkinson's patients who have restless legs syndrome (RLS); examples include ropinirole and pramipexole.

Psychological disorders such as attention deficit hyperactivity disorder (ADHD) can be treated with drugs like methylphenidate (also known as Ritalin), which blocks the re-uptake of dopamine by the pre-synaptic cell, thereby increasing the amount of dopamine left in the synaptic gap. This increase in synaptic dopamine increases binding to receptors on the post-synaptic cell. The same mechanism is used by other, more potent illegal stimulants such as cocaine.

Serotonin

The neurotransmitter serotonin can mediate synaptic transmission through either GPCRs or LGIC receptors. Whether the output increases or decreases post-synaptic responses depends on the brain region in which serotonin acts. The most popular and widely used drugs for regulating serotonin during depression are SSRIs, or selective serotonin reuptake inhibitors. These drugs inhibit the transport of serotonin back into the pre-synaptic neuron, leaving more serotonin in the synaptic gap to be used.

Before the discovery of SSRIs, there were also many drugs that inhibited the enzyme that breaks down serotonin. MAOIs, or monoamine oxidase inhibitors, increased the amount of serotonin in the pre-synaptic cell but had many side effects, including intense migraines and high blood pressure. These effects were eventually linked to the drugs' interaction with tyramine, a chemical found in many types of food.

Ion channels

Ion channels located on the surface membrane of the neuron allow an influx of sodium ions and an outward movement of potassium ions during an action potential. Selectively blocking these ion channels decreases the likelihood that an action potential will occur. The drug riluzole is a neuroprotective drug that blocks sodium ion channels. Since these channels cannot activate, no action potential is generated, the neuron performs no transduction of chemical signals into electrical signals, and the signal does not move on. Sodium channel blockers of this kind are also used as anesthetics and sedatives.

Behavioral neuropharmacology

Dopamine and serotonin pathway
 
One form of behavioral neuropharmacology focuses on the study of drug dependence and how drug addiction affects the human mind. Most research has shown that the major part of the brain reinforcing addiction through neurochemical reward is the nucleus accumbens. The diagram above shows how dopamine is projected into this area. Chronic alcohol abuse can cause dependence and addiction; how this addiction occurs is described below.

Ethanol

Alcohol's rewarding and reinforcing (i.e., addictive) properties are mediated through its effects on dopamine neurons in the mesolimbic reward pathway, which connects the ventral tegmental area to the nucleus accumbens (NAcc). One of alcohol's primary effects is the allosteric inhibition of NMDA receptors and facilitation of GABAA receptors (e.g., enhanced GABAA receptor-mediated chloride flux through allosteric regulation of the receptor). At high doses, ethanol also inhibits most ligand-gated and voltage-gated ion channels in neurons. Alcohol inhibits sodium–potassium pumps in the cerebellum, which is likely how it impairs cerebellar computation and body coordination.

With acute alcohol consumption, dopamine is released in the synapses of the mesolimbic pathway, in turn heightening activation of postsynaptic D1 receptors. The activation of these receptors triggers postsynaptic internal signaling events through protein kinase A which ultimately phosphorylate cAMP response element binding protein (CREB), inducing CREB-mediated changes in gene expression.

With chronic alcohol intake, consumption of ethanol similarly induces CREB phosphorylation through the D1 receptor pathway, but it also alters NMDA receptor function through phosphorylation mechanisms; an adaptive downregulation of the D1 receptor pathway and CREB function occurs as well. Chronic consumption is also associated with an effect on CREB phosphorylation and function via postsynaptic NMDA receptor signaling cascades through a MAPK/ERK pathway and CAMK-mediated pathway. These modifications to CREB function in the mesolimbic pathway induce expression (i.e., increase gene expression) of ΔFosB in the NAcc, where ΔFosB is the "master control protein" that, when overexpressed in the NAcc, is necessary and sufficient for the development and maintenance of an addictive state (i.e., its overexpression in the nucleus accumbens produces and then directly modulates compulsive alcohol consumption).

Research

Parkinson's disease

Parkinson's disease is a neurodegenerative disease characterized by the selective loss of dopaminergic neurons in the substantia nigra. Today, the most commonly used drug to combat this disease is levodopa, or L-DOPA. This precursor to dopamine can penetrate the blood–brain barrier, whereas the neurotransmitter dopamine cannot. There has been extensive research into whether L-dopa is a better treatment for Parkinson's disease than other dopamine agonists. Some believe that long-term use of L-dopa will compromise neuroprotection and thus eventually lead to dopaminergic cell death. Though there is no proof, in vivo or in vitro, some still believe that long-term use of dopamine agonists is better for the patient.

Alzheimer's disease

While a variety of hypotheses have been proposed for the cause of Alzheimer's disease, knowledge of this disease is far from complete, making it difficult to develop methods of treatment. In the brains of Alzheimer's patients, both neuronal nicotinic acetylcholine (nACh) receptors and NMDA receptors are known to be down-regulated. Accordingly, four anticholinesterases have been developed and approved by the U.S. Food and Drug Administration (FDA) for treatment in the U.S. However, these are not ideal drugs, given their side effects and limited effectiveness. One promising drug, nefiracetam, is being developed for the treatment of Alzheimer's and other patients with dementia, and has unique actions in potentiating the activity of both nACh receptors and NMDA receptors.

Future

With advances in technology and our understanding of the nervous system, drug development will continue, with increasing drug sensitivity and specificity. Structure–activity relationships are a major area of research within neuropharmacology: attempts to modify the effect or potency (i.e., activity) of bioactive chemical compounds by modifying their chemical structures.

Human intelligence

Human intelligence is the intellectual prowess of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness. Through their intelligence, humans possess the cognitive abilities to learn, form concepts, understand, apply logic, and reason, including the capacities to recognize patterns, comprehend ideas, plan, solve problems, make decisions, retain information, and use language to communicate. Intelligence enables humans to experience and think.

Correlates

As a construct measured by intelligence tests, intelligence is one of the most useful concepts in psychology, because it correlates with many relevant variables, such as the probability of suffering an accident or earning a higher salary.

Education

According to a 2018 metastudy of educational effects on intelligence, education appears to be the "most consistent, robust, and durable method" known for raising intelligence.

Myopia

A number of studies have shown a correlation between IQ and myopia. Some suggest that the reason for the correlation is environmental, whereby intelligent people are more likely to damage their eyesight with prolonged reading, while others contend that a genetic link exists.

Aging

There is evidence that aging causes a decline in cognitive functions. In one cross-sectional study, the cognitive functions measured declined by about 0.8 z-score units between ages 20 and 50; the functions measured included processing speed, working memory and long-term memory.
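
Taken at face value, the cross-sectional figure above implies a small annual drift. The conversion to IQ-style points below is a back-of-envelope illustration (assuming the conventional standard deviation of 15), not a result from the study itself.

```python
# Back-of-envelope reading of the cross-sectional decline cited above:
# about 0.8 z-score units spread over the 30 years from age 20 to 50.
decline_z = 0.8
years = 50 - 20
per_year = decline_z / years   # roughly 0.027 z-score units per year

# On an IQ-style scale (mean 100, SD 15), 0.8 SD corresponds to:
iq_points = decline_z * 15     # 12 points
```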

Theories

Relevance of IQ tests

In psychology, human intelligence is commonly assessed by IQ scores, determined by IQ tests. However, there are critics of IQ, who do not dispute the stability of IQ test scores, or the fact that they predict certain forms of achievement rather effectively. They do argue, however, that to base a concept of intelligence on IQ test scores alone is to ignore many important aspects of mental ability.

On the other hand, Linda S. Gottfredson (2006) has argued that the results of thousands of studies support the importance of IQ for school and job performance (see also the work of Schmidt & Hunter, 2004). She says that IQ also predicts or correlates with numerous other life outcomes. In contrast, empirical support for non-g intelligences is lacking or very poor.

Theory of multiple intelligences

Howard Gardner's theory of multiple intelligences is based on studies not only of normal children and adults, but also of gifted individuals (including so-called "savants"), of persons who have suffered brain damage, of experts and virtuosos, and of individuals from diverse cultures. Gardner breaks intelligence down into a number of distinct components. In the first edition of his book Frames of Mind (1983), he described seven distinct types of intelligence: logical-mathematical, linguistic, spatial, musical, kinesthetic, interpersonal, and intrapersonal. In the second edition, he added two more: naturalist and existential intelligences. He argues that psychometric (IQ) tests address only linguistic and logical intelligence, plus some aspects of spatial intelligence. A major criticism of Gardner's theory is that it has never been tested or subjected to peer review, by Gardner or anyone else, and indeed that it is unfalsifiable. Others (e.g. Locke, 2005) have suggested that recognizing many specific forms of intelligence (specific aptitude theory) implies a political, rather than scientific, agenda, intended to appreciate the uniqueness in all individuals rather than to recognize potentially true and meaningful differences in individual capacities. Schmidt and Hunter (2004) suggest that the predictive validity of specific aptitudes over and above that of general mental ability, or "g", has not received empirical support. On the other hand, Jerome Bruner agreed with Gardner that the intelligences were "useful fictions," and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered."

Howard Gardner describes his first seven intelligences as follows:
  1. Linguistic intelligence: People high in linguistic intelligence have an affinity for words, both spoken and written.
  2. Logical-mathematical intelligence: It implies logical and mathematical abilities.
  3. Spatial intelligence: The ability to form a mental model of a spatial world and to be able to maneuver and operate using that model.
  4. Musical intelligence: Those with musical intelligence have excellent pitch, and may even have absolute pitch.
  5. Bodily-kinesthetic intelligence: The ability to solve problems or to fashion products using one's whole body, or parts of the body. Gifted people in this intelligence may be good dancers, athletes, surgeons, craftspeople, and others.
  6. Interpersonal intelligence: The ability to see things from the perspective of others, or to understand people in the sense of empathy. Strong interpersonal intelligence would be an asset in those who are teachers, politicians, clinicians, religious leaders, etc.
  7. Intrapersonal intelligence: The capacity to form an accurate, veridical model of oneself and to use that model to operate effectively in life.

Triarchic theory of intelligence

Robert Sternberg proposed the triarchic theory of intelligence to provide a more comprehensive description of intellectual competence than traditional differential or cognitive theories of human ability. The triarchic theory describes three fundamental aspects of intelligence. Analytic intelligence comprises the mental processes through which intelligence is expressed. Creative intelligence is necessary when an individual is confronted with a challenge that is nearly, but not entirely, novel or when an individual is engaged in automatizing the performance of a task. Practical intelligence is bound in a sociocultural milieu and involves adaptation to, selection of, and shaping of the environment to maximize fit in the context. The triarchic theory does not argue against the validity of a general intelligence factor; instead, the theory posits that general intelligence is part of analytic intelligence, and only by considering all three aspects of intelligence can the full range of intellectual functioning be fully understood.

More recently, the triarchic theory has been updated and renamed as the Theory of Successful Intelligence by Sternberg. Intelligence is now defined as an individual's assessment of success in life by the individual's own (idiographic) standards and within the individual's sociocultural context. Success is achieved by using combinations of analytical, creative, and practical intelligence. The three aspects of intelligence are referred to as processing skills. The processing skills are applied to the pursuit of success through what were the three elements of practical intelligence: adapting to, shaping of, and selecting of one's environments. The mechanisms that employ the processing skills to achieve success include utilizing one's strengths and compensating or correcting for one's weaknesses. 

Sternberg's theories and research on intelligence remain contentious within the scientific community.

PASS theory of intelligence

Based on A. R. Luria's (1966) seminal work on the modularization of brain function, and supported by decades of neuroimaging research, the PASS Theory of Intelligence proposes that cognition is organized in three systems and four processes. The first process is Planning, which involves executive functions responsible for controlling and organizing behavior, selecting and constructing strategies, and monitoring performance. The second is the Attention process, which is responsible for maintaining arousal levels and alertness, and ensuring focus on relevant stimuli. The next two are called Simultaneous and Successive processing, and they involve encoding, transforming, and retaining information. Simultaneous processing is engaged when the relationship between items and their integration into whole units of information is required. Examples of this include recognizing figures, such as a triangle within a circle vs. a circle within a triangle, or the difference between 'he had a shower before breakfast' and 'he had breakfast before a shower.' Successive processing is required for organizing separate items in a sequence, such as remembering a sequence of words or actions exactly in the order in which they had just been presented. These four processes are functions of four areas of the brain. Planning is broadly located in the front part of our brains, the frontal lobe. Attention and arousal are combined functions of the frontal lobe and the lower parts of the cortex, although the parietal lobes are also involved in attention. Simultaneous processing and Successive processing occur in the posterior region, or the back of the brain. Simultaneous processing is broadly associated with the occipital and parietal lobes, while Successive processing is broadly associated with the fronto-temporal lobes.
The PASS (Planning/Attention/Simultaneous/Successive) theory is heavily indebted both to Luria (1966, 1973) and to studies in cognitive psychology aimed at a better understanding of intelligence.

Piaget's theory and Neo-Piagetian theories

In Piaget's theory of cognitive development, the focus is not on mental abilities but rather on a child's mental models of the world. As a child develops, increasingly accurate models of the world are developed, which enable the child to interact with the world better. One example is object permanence, where the child develops a model in which objects continue to exist even when they cannot be seen, heard, or touched.

Piaget's theory described four main stages, and many sub-stages, in development. These four main stages are:
  1. sensorimotor stage (birth–2 yrs);
  2. pre-operational stage (2–7 yrs);
  3. concrete operational stage (7–11 yrs); and
  4. formal operations stage (11–16 yrs).
Degree of progress through these stages is correlated with, but not identical to, psychometric IQ. Piaget conceptualizes intelligence as an activity more than a capacity. 

One of Piaget's most famous studies focused purely on the discriminative abilities of children between two and a half and four and a half years old. He began the study by taking children of different ages and placing two lines of sweets before them: one with the sweets spread further apart, and one with the same number of sweets placed more closely together. He found that, "Children between 2 years, 6 months old and 3 years, 2 months old correctly discriminate the relative number of objects in two rows; between 3 years, 2 months and 4 years, 6 months they indicate a longer row with fewer objects to have "more"; after 4 years, 6 months they again discriminate correctly". Initially, younger children were not studied, because if a four-year-old could not conserve quantity, then a younger child presumably could not either. The results show, however, that children younger than three years and two months have quantity conservation, but that as they get older they lose this quality, and do not recover it until four and a half years old. This attribute may be lost temporarily because of an overdependence on perceptual strategies, which equate a longer line of candy with more candy, or because of a four-year-old's inability to reverse situations. By the end of this experiment several results had been found. First, younger children have a discriminative ability that shows the logical capacity for cognitive operations exists earlier than previously acknowledged. The study also reveals that young children can be equipped with certain qualities for cognitive operations, depending on how logical the structure of the task is. Research also shows that children develop explicit understanding at age 5 and, as a result, will count the sweets to decide which has more. Finally, the study found that overall quantity conservation is not a basic characteristic of humans' native inheritance.

Piaget's theory has been criticized for the age of appearance of a new model of the world, such as object permanence, being dependent on how the testing is done (see the article on object permanence). More generally, the theory may be very difficult to test empirically because of the difficulty of proving or disproving that a mental model is the explanation for the results of the testing.

Neo-Piagetian theories of cognitive development expand Piaget's theory in various ways such as also considering psychometric-like factors such as processing speed and working memory, "hypercognitive" factors like self-monitoring, more stages, and more consideration on how progress may vary in different domains such as spatial or social.

Parieto-frontal integration theory of intelligence

Based on a review of 37 neuroimaging studies, Jung and Haier (2007) proposed that the biological basis of intelligence stems from how well the frontal and parietal regions of the brain communicate and exchange information with each other. Subsequent neuroimaging and lesion studies report general consensus with the theory. A review of the neuroscience and intelligence literature concludes that the parieto-frontal integration theory is the best available explanation for human intelligence differences.

Investment theory

Based on the Cattell–Horn–Carroll theory, the tests of intelligence most often used in the relevant studies include measures of fluid ability (Gf) and crystallized ability (Gc), which differ in their developmental trajectories. Cattell's 'investment theory' states that the individual differences observed in the procurement of skills and knowledge (Gc) are partially attributable to the 'investment' of Gf, suggesting the involvement of fluid intelligence in every aspect of the learning process. It is essential to highlight that the investment theory suggests that personality traits affect 'actual' ability, not scores on an IQ test. Relatedly, Hebb's theory of intelligence suggested a similar bifurcation: Intelligence A (physiological), which can be seen as a counterpart of fluid intelligence, and Intelligence B (experiential), similar to crystallized intelligence.

Intelligence compensation theory (ICT)

The intelligence compensation theory (a term first coined by Wood and Englert, 2009) states that comparatively less intelligent individuals work harder and more methodically, and become more resolute and thorough (more conscientious), to compensate for their 'lack of intelligence' in achieving goals, whereas more intelligent individuals do not require the traits and behaviours associated with conscientiousness to progress, as they can rely on the strength of their cognitive abilities rather than on structure or effort. The theory suggests a causal relationship between intelligence and conscientiousness, such that the development of the personality trait conscientiousness is influenced by intelligence. This assumption is deemed plausible because the reverse causal relationship is unlikely, and it implies that the negative correlation should be higher between fluid intelligence (Gf) and conscientiousness: given the developmental timelines of Gf, Gc and personality, crystallized intelligence has not yet developed completely when personality traits develop. Accordingly, during school-going ages, more conscientious children would be expected to gain more crystallized intelligence (knowledge) through education, as they would be more efficient, thorough, hard-working and dutiful.

This theory has recently been contradicted by evidence identifying compensatory sample selection, which attributes the previous findings to bias in selecting samples of individuals above a certain threshold of achievement.

Bandura's theory of self-efficacy and cognition

The view of cognitive ability has evolved over the years, and it is no longer viewed as a fixed property held by an individual. Instead, the current perspective describes it as a general capacity comprising not only cognitive but also motivational, social and behavioural aspects. These facets work together to perform numerous tasks. An essential skill often overlooked is that of managing emotions and aversive experiences that can compromise the quality of one's thought and activity. The link between intelligence and success has been bridged by crediting individual differences in self-efficacy. Bandura's theory identifies the difference between possessing skills and being able to apply them in challenging situations. Thus, the theory suggests that individuals with the same level of knowledge and skill may perform badly, averagely or excellently based on differences in self-efficacy.

A key role of cognition is to allow one to predict events and, in turn, devise methods to deal with those events effectively. These skills depend on the processing of stimuli that are unclear and ambiguous. To learn the relevant concepts, individuals must be able to rely on their reserve of knowledge to identify, develop and execute options. They must be able to apply the learning acquired from previous experiences. Thus, a stable sense of self-efficacy is essential to stay focused on tasks in the face of challenging situations.

To summarize, Bandura's theory of self-efficacy and intelligence suggests that individuals with a relatively low sense of self-efficacy in any field will avoid challenges. This effect is heightened when they perceive situations as personal threats. When failure occurs, they recover from it more slowly than others and attribute it to insufficient aptitude. Persons with high levels of self-efficacy, on the other hand, hold a task-diagnostic aim that leads to effective performance.

Process, personality, intelligence and knowledge theory (PPIK)

Predicted growth curves for Intelligence as process, crystallized intelligence, occupational knowledge and avocational knowledge based on Ackerman's PPIK Theory.
 
Developed by Ackerman, the PPIK (process, personality, intelligence and knowledge) theory further develops the approach to intelligence proposed by Cattell's investment theory and by Hebb, suggesting a distinction between intelligence as knowledge and intelligence as process (two concepts that are comparable and related to Gc and Gf respectively, but broader and closer to Hebb's notions of "Intelligence A" and "Intelligence B") and integrating these factors with elements such as personality, motivation and interests.

Ackerman describes the difficulty of distinguishing process from knowledge, as content cannot be entirely eliminated from any ability test. Personality traits have not been shown to be significantly correlated with the intelligence-as-process aspect except in the context of psychopathology. One exception to this generalization is the finding of sex differences in cognitive abilities, specifically in mathematical and spatial abilities. On the other hand, the intelligence-as-knowledge factor has been associated with the personality traits of Openness and Typical Intellectual Engagement, which also strongly correlate with verbal abilities (associated with crystallized intelligence).

Improving

Because intelligence appears to be at least partly dependent on brain structure and the genes shaping brain development, it has been proposed that genetic engineering could be used to enhance intelligence, a process sometimes called biological uplift in science fiction. Experiments on genetically modified mice have demonstrated superior ability in learning and memory in various behavioral tasks.

IQ leads to greater success in education, but, independently, education raises IQ scores. A 2017 meta-analysis suggests education increases IQ by 1–5 points per year of education, or at least increases IQ test-taking ability.

Attempts to raise IQ with brain training have led to increases on aspects related to the training tasks – for instance working memory – but it is yet unclear whether these increases generalise to increased intelligence per se.

A 2008 research paper claimed that practicing a dual n-back task can increase fluid intelligence (Gf), as measured in several different standard tests. This finding received some attention from popular media, including an article in Wired. However, a subsequent criticism of the paper's methodology questioned the experiment's validity and took issue with the lack of uniformity in the tests used to evaluate the control and test groups. For example, the progressive nature of Raven's Advanced Progressive Matrices (APM) test may have been compromised by modifications of time restrictions (i.e., 10 minutes were allowed to complete a normally 45-minute test).

Substances that actually or purportedly improve intelligence or other mental functions are called nootropics. A meta-analysis shows that omega-3 fatty acids improve cognitive performance among those with cognitive deficits, but not among healthy subjects. A meta-regression shows that omega-3 fatty acids improve the moods of patients with major depression (which is associated with mental deficits). Exercise, too, and not just performance-enhancing drugs, enhances cognition, in both healthy and unhealthy subjects.

On the philosophical front, conscious efforts to influence intelligence raise ethical issues. Neuroethics considers the ethical, legal and social implications of neuroscience, and deals with issues such as the difference between treating a human neurological disease and enhancing the human brain, and how wealth affects access to neurotechnology. Neuroethical issues interact with the ethics of human genetic engineering.

Transhumanist theorists study the possibilities and consequences of developing and using techniques to enhance human abilities and aptitudes. 

Eugenics is a social philosophy which advocates the improvement of human hereditary traits through various forms of intervention. Eugenics has variously been regarded as meritorious or deplorable in different periods of history, falling greatly into disrepute after the defeat of Nazi Germany in World War II.

Measuring

Score distribution chart for a sample of 905 children tested on the 1916 Stanford-Binet Test
 
The approach to understanding intelligence with the most supporters and published research over the longest period of time is based on psychometric testing. It is also by far the most widely used in practical settings. Intelligence quotient (IQ) tests include the Stanford-Binet, Raven's Progressive Matrices, the Wechsler Adult Intelligence Scale and the Kaufman Assessment Battery for Children. There are also psychometric tests that are not intended to measure intelligence itself but some closely related construct such as scholastic aptitude. In the United States examples include the SSAT, the SAT, the ACT, the GRE, the MCAT, the LSAT, and the GMAT. Regardless of the method used, almost any test that requires examinees to reason and has a wide range of question difficulty will produce intelligence scores that are approximately normally distributed in the general population.
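The approximate normality of test scores can be illustrated with a minimal simulation. The sketch below assumes the mean-100, standard-deviation-15 scaling used by most modern IQ tests and checks the familiar normal-curve proportions:

```python
import numpy as np

rng = np.random.default_rng(42)
# Most modern IQ tests are scaled to a mean of 100 and a standard deviation of 15.
iq = rng.normal(loc=100, scale=15, size=100_000)

# For a normal distribution, about 68% of scores fall within one SD of the
# mean (85-115) and about 95% within two SDs (70-130).
within_1sd = np.mean(np.abs(iq - 100) <= 15)
within_2sd = np.mean(np.abs(iq - 100) <= 30)
print(f"within 85-115: {within_1sd:.2f}, within 70-130: {within_2sd:.2f}")
```

Real score distributions deviate somewhat from this ideal, particularly in the tails, but the normal approximation is what makes standardized scoring possible.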

Intelligence tests are widely used in educational, business, and military settings because of their efficacy in predicting behavior. IQ and g (discussed in the next section) are correlated with many important social outcomes: individuals with low IQs are more likely to be divorced, to have a child out of marriage, to be incarcerated, and to need long-term welfare support, while high IQs are associated with more years of education, higher-status jobs, and higher income. Intelligence is significantly correlated with successful training and performance outcomes, and IQ/g is the single best predictor of successful job performance.

General intelligence factor or g

There are many different kinds of IQ tests using a wide variety of test tasks. Some tests consist of a single type of task; others rely on a broad collection of tasks with different contents (visual-spatial, verbal, numerical) requiring different cognitive processes (e.g., reasoning, memory, rapid decisions, visual comparisons, spatial imagery, reading, and retrieval of general knowledge). Early in the 20th century, the psychologist Charles Spearman carried out the first formal factor analysis of correlations between various test tasks. He found a trend for all such tests to correlate positively with each other, which is called a positive manifold. Spearman found that a single common factor explained the positive correlations among tests, and he named it g for "general intelligence factor". He interpreted it as the core of human intelligence that, to a greater or lesser degree, influences success in all cognitive tasks and thereby creates the positive manifold. This interpretation of g as a common cause of test performance is still dominant in psychometrics. (An alternative interpretation was recently advanced by van der Maas and colleagues. Their mutualism model assumes that intelligence depends on several independent mechanisms, none of which influences performance on all cognitive tests. These mechanisms support each other so that efficient operation of one of them makes efficient operation of the others more likely, thereby creating the positive manifold.)
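Spearman's observation can be sketched with synthetic data: if a single latent factor drives several test scores, every pairwise correlation comes out positive, and the leading eigenvector of the correlation matrix recovers the tests' loadings. The four loadings below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)                       # latent general factor
lam = np.array([0.8, 0.7, 0.6, 0.5])         # hypothetical g-loadings of four tests
# Each observed score = loading * g + independent noise (unit total variance).
scores = g[:, None] * lam + rng.normal(size=(n, 4)) * np.sqrt(1 - lam**2)

corr = np.corrcoef(scores, rowvar=False)
# Positive manifold: every off-diagonal correlation is positive.
print((corr[np.triu_indices(4, k=1)] > 0).all())

# The leading eigenvector of the correlation matrix approximates the loadings.
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
first_pc = eigvecs[:, -1]
first_pc *= np.sign(first_pc.sum())          # fix the arbitrary sign of the eigenvector
print(np.round(first_pc, 2))
```

Note that the mutualism model predicts the same positive manifold from a different causal structure, which is why the factor-analytic result alone cannot decide between the two interpretations.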

IQ tasks and tests can be ranked by how highly they load on the g factor. Tests with high g-loadings are those that correlate highly with most other tests. One comprehensive study investigating the correlations between a large collection of tests and tasks found that Raven's Progressive Matrices have a particularly high correlation with most other tests and tasks. Raven's is a test of inductive reasoning with abstract visual material. It consists of a series of problems, sorted approximately by increasing difficulty. Each problem presents a 3 × 3 matrix of abstract designs with one empty cell; the matrix is constructed according to a rule, and the person must infer the rule to determine which of 8 alternatives fits into the empty cell. Because of its high correlation with other tests, Raven's Progressive Matrices are generally acknowledged as a good indicator of general intelligence. This is problematic, however, because there are substantial gender differences on the Raven's, which are not found when g is measured directly by computing the general factor from a broad collection of tests.
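The idea that a highly g-loaded test can be identified by its correlations with the other tests in a battery can be sketched as follows. The test names and loadings are hypothetical, with a matrices-style test assigned the highest loading for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
g = rng.normal(size=n)
# Hypothetical tests and g-loadings; the matrices-style test loads highest.
loadings = {"matrices": 0.85, "vocabulary": 0.70, "digit_span": 0.55, "coding": 0.40}
names = list(loadings)
lam = np.array([loadings[t] for t in names])
scores = g[:, None] * lam + rng.normal(size=(n, len(names))) * np.sqrt(1 - lam**2)

corr = np.corrcoef(scores, rowvar=False)
# Proxy for a test's g-loading: its mean correlation with the other tests.
mean_r = (corr.sum(axis=0) - 1) / (len(names) - 1)
ranking = sorted(zip(names, mean_r), key=lambda p: -p[1])
print([name for name, _ in ranking])
```

The test with the highest latent loading ends up at the top of the ranking, mirroring the empirical finding about Raven's Progressive Matrices.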

General collective intelligence factor or c

A recent scientific understanding of collective intelligence, defined as a group's general ability to perform a wide range of tasks, extends human intelligence research to groups, applying similar methods and concepts. The definition, operationalization, and methods are similar to those of the psychometric approach to general individual intelligence, where an individual's performance on a given set of cognitive tasks is used to measure intelligence, as indicated by the general intelligence factor g extracted via factor analysis. In the same vein, collective intelligence research aims to discover a 'c factor' explaining between-group differences in performance, as well as its structural and group-compositional causes.

Historical psychometric theories

Several different theories of intelligence have historically been important for psychometrics. They often emphasized multiple factors rather than a single g factor.

Cattell–Horn–Carroll theory

Many of the broad, recent IQ tests have been greatly influenced by the Cattell–Horn–Carroll theory. It is argued to reflect much of what is known about intelligence from research. A hierarchy of factors for human intelligence is used. g is at the top. Under it there are 10 broad abilities that in turn are subdivided into 70 narrow abilities. The broad abilities are:
  • Fluid intelligence (Gf): includes the broad ability to reason, form concepts, and solve problems using unfamiliar information or novel procedures.
  • Crystallized intelligence (Gc): includes the breadth and depth of a person's acquired knowledge, the ability to communicate one's knowledge, and the ability to reason using previously learned experiences or procedures.
  • Quantitative reasoning (Gq): the ability to comprehend quantitative concepts and relationships and to manipulate numerical symbols.
  • Reading & writing ability (Grw): includes basic reading and writing skills.
  • Short-term memory (Gsm): is the ability to apprehend and hold information in immediate awareness and then use it within a few seconds.
  • Long-term storage and retrieval (Glr): is the ability to store information and fluently retrieve it later in the process of thinking.
  • Visual processing (Gv): is the ability to perceive, analyze, synthesize, and think with visual patterns, including the ability to store and recall visual representations.
  • Auditory processing (Ga): is the ability to analyze, synthesize, and discriminate auditory stimuli, including the ability to process and discriminate speech sounds that may be presented under distorted conditions.
  • Processing speed (Gs): is the ability to perform automatic cognitive tasks, particularly when measured under pressure to maintain focused attention.
  • Decision/reaction time/speed (Gt): reflect the immediacy with which an individual can react to stimuli or a task (typically measured in seconds or fractions of seconds; not to be confused with Gs, which typically is measured in intervals of 2–3 minutes).
Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement rather than IQ, and Gt may be difficult to measure without special equipment.

g was earlier often subdivided into only Gf and Gc which were thought to correspond to the nonverbal or performance subtests and verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex.

Controversies

While not necessarily a dispute about the psychometric approach itself, there are several controversies regarding the results from psychometric research. 

One criticism has been against the early research such as craniometry. A reply has been that drawing conclusions from early intelligence research is like condemning the auto industry by criticizing the performance of the Model T.

Several critics, such as Stephen Jay Gould, have been critical of g, seeing it as a statistical artifact and arguing that IQ tests instead measure a number of unrelated abilities. The American Psychological Association's report "Intelligence: Knowns and Unknowns" stated that IQ tests do correlate and that the view of g as a statistical artifact is a minority one.

Intelligence across cultures

Psychologists have shown that the definition of human intelligence is unique to the culture being studied. Robert Sternberg is among the researchers who have discussed how one's culture affects a person's interpretation of intelligence, and he further believes that defining intelligence in only one way, without considering different meanings in cultural contexts, may cast an unintentionally egocentric view on the world. To address this, psychologists offer the following definitions of intelligence:
  1. Successful intelligence is the skills and knowledge needed for success in life, according to one's own definition of success, within one's sociocultural context.
  2. Analytical intelligence is the result of intelligence's components applied to fairly abstract but familiar kinds of problems.
  3. Creative intelligence is the result of intelligence's components applied to relatively novel tasks and situations.
  4. Practical intelligence is the result of intelligence's components applied to experience for purposes of adaptation, shaping and selection.
Although typically identified by its Western definition, multiple studies support the idea that human intelligence carries different meanings across cultures around the world. In many Eastern cultures, intelligence is mainly related to one's social roles and responsibilities. A Chinese conception of intelligence would define it as the ability to empathize with and understand others, although this is by no means the only way that intelligence is defined in China. In several African communities, intelligence is shown similarly through a social lens. However, rather than through social roles, as in many Eastern cultures, it is exemplified through social responsibilities. For example, in the language of Chi-Chewa, which is spoken by some ten million people across central Africa, the equivalent term for intelligence implies not only cleverness but also the ability to take on responsibility. Within American culture, too, there are a variety of interpretations of intelligence. One of the most common views defines it as a combination of problem-solving skills, deductive reasoning skills, and intelligence quotient (IQ), while others hold that intelligent people should have a social conscience, accept others for who they are, and be able to give advice or wisdom.
