
Wednesday, December 9, 2020

Human intelligence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Human_intelligence

Human intelligence is the intellectual capability of humans, which is marked by complex cognitive feats and high levels of motivation and self-awareness.

Through intelligence, humans possess the cognitive abilities to learn, form concepts, understand, apply logic, and reason, including the capacities to recognize patterns, plan, innovate, solve problems, make decisions, retain information, and use language to communicate.

Correlates

As a construct measured by intelligence tests, intelligence is considered one of the most useful concepts in psychology because it correlates with many relevant variables, such as the probability of suffering an accident and salary, among others.

Education

According to a 2018 metastudy of educational effects on intelligence, education appears to be the "most consistent, robust, and durable method" known for raising intelligence.

Myopia

A number of studies have shown a correlation between IQ and myopia. Some suggest that the reason for the correlation is environmental: people with a higher IQ are more likely to damage their eyesight with prolonged reading, or conversely, people who read more are more likely to attain a higher IQ. Others contend that a genetic link exists.

Aging

There is evidence that aging causes a decline in cognitive functions. In one cross-sectional study, various cognitive functions declined by about 0.8 in z-score between ages 20 and 50; the functions measured included processing speed, working memory, and long-term memory.

Genes

A number of single-nucleotide polymorphisms in human DNA are correlated with intelligence.

Theories

Relevance of IQ tests

In psychology, human intelligence is commonly assessed by IQ scores that are determined by IQ tests. However, while IQ test scores show a high degree of inter-test reliability, and predict certain forms of achievement rather effectively, their construct validity as a holistic measure of human intelligence is considered dubious. While IQ tests are generally understood to measure some forms of intelligence, they may fail to serve as an accurate measure of broader definitions of human intelligence inclusive of creativity and social intelligence. According to psychologist Wayne Weiten, "IQ tests are valid measures of the kind of intelligence necessary to do well in academic work. But if the purpose is to assess intelligence in a broader sense, the validity of IQ tests is questionable."

Theory of multiple intelligences

Howard Gardner's theory of multiple intelligences is based on studies not only of normal children and adults, but also of gifted individuals (including so-called "savants"), of persons who have suffered brain damage, of experts and virtuosos, and of individuals from diverse cultures. Gardner breaks intelligence down into a number of distinct components. In the first edition of his book Frames of Mind (1983), he described seven distinct types of intelligence—logical-mathematical, linguistic, spatial, musical, kinesthetic, interpersonal, and intrapersonal. In a second edition of this book, he added two more types of intelligence—naturalist and existential intelligences. He argues that psychometric (IQ) tests address only linguistic and logical plus some aspects of spatial intelligence. A major criticism of Gardner's theory is that it has never been tested, or subjected to peer review, by Gardner or anyone else, and indeed that it is unfalsifiable. Others (e.g. Locke, 2005) have suggested that recognizing many specific forms of intelligence (specific aptitude theory) implies a political—rather than scientific—agenda, intended to appreciate the uniqueness in all individuals, rather than recognizing potentially true and meaningful differences in individual capacities. Schmidt and Hunter (2004) suggest that the predictive validity of specific aptitudes over and above that of general mental ability, or "g", has not received empirical support. On the other hand, Jerome Bruner agreed with Gardner that the intelligences were "useful fictions", and went on to state that "his approach is so far beyond the data-crunching of mental testers that it deserves to be cheered."

Howard Gardner describes his first seven intelligences as follows:

  • Linguistic intelligence: People high in linguistic intelligence have an affinity for words, both spoken and written.
  • Logical-mathematical intelligence: The capacity for logical reasoning and mathematical ability.
  • Spatial intelligence: The ability to form a mental model of a spatial world and to be able to maneuver and operate using that model.
  • Musical intelligence: Those with musical intelligence have excellent pitch, and may even have absolute pitch.
  • Bodily-kinesthetic intelligence: The ability to solve problems or to fashion products using one's whole body, or parts of the body. Gifted people in this intelligence may be good dancers, athletes, surgeons, craftspeople, and others.
  • Interpersonal intelligence: The ability to see things from the perspective of others, or to understand people in the sense of empathy. Strong interpersonal intelligence would be an asset in those who are teachers, politicians, clinicians, religious leaders, etc.
  • Intrapersonal intelligence: The capacity to form an accurate, veridical model of oneself and to use that model to operate effectively in life.

Triarchic theory of intelligence

Robert Sternberg proposed the triarchic theory of intelligence to provide a more comprehensive description of intellectual competence than traditional differential or cognitive theories of human ability. The triarchic theory describes three fundamental aspects of intelligence:

  • Analytic intelligence comprises the mental processes through which intelligence is expressed.
  • Creative intelligence is necessary when an individual is confronted with a challenge that is nearly, but not entirely, novel or when an individual is engaged in automatizing the performance of a task.
  • Practical intelligence is bound in a sociocultural milieu and involves adaptation to, selection of, and shaping of the environment to maximize fit in the context.

The triarchic theory does not argue against the validity of a general intelligence factor; instead, the theory posits that general intelligence is part of analytic intelligence, and only by considering all three aspects of intelligence can the full range of intellectual functioning be fully understood.

More recently, the triarchic theory has been updated and renamed as the Theory of Successful Intelligence by Sternberg. Intelligence is now defined as an individual's assessment of success in life by the individual's own (idiographic) standards and within the individual's sociocultural context. Success is achieved by using combinations of analytical, creative, and practical intelligence. The three aspects of intelligence are referred to as processing skills. The processing skills are applied to the pursuit of success through what were the three elements of practical intelligence: adapting to, shaping of, and selecting of one's environments. The mechanisms that employ the processing skills to achieve success include utilizing one's strengths and compensating or correcting for one's weaknesses.

Sternberg's theories and research on intelligence remain contentious within the scientific community.

PASS theory of intelligence

Based on A. R. Luria's (1966) seminal work on the modularization of brain function, and supported by decades of neuroimaging research, the PASS Theory of Intelligence proposes that cognition is organized in three systems and four processes. The first process is Planning, which involves executive functions responsible for controlling and organizing behavior, selecting and constructing strategies, and monitoring performance. The second is the Attention process, which is responsible for maintaining arousal levels and alertness, and ensuring focus on relevant stimuli. The next two are called Simultaneous and Successive processing, and they involve encoding, transforming, and retaining information. Simultaneous processing is engaged when the relationship between items and their integration into whole units of information is required. Examples of this include recognizing figures, such as a triangle within a circle vs. a circle within a triangle, or the difference between 'he had a shower before breakfast' and 'he had breakfast before a shower.' Successive processing is required for organizing separate items in a sequence, such as remembering a sequence of words or actions exactly in the order in which they had just been presented. These four processes are functions of four areas of the brain. Planning is broadly located in the front part of our brains, the frontal lobe. Attention and arousal are combined functions of the frontal lobe and the lower parts of the cortex, although the parietal lobes are also involved in attention. Simultaneous processing and Successive processing occur in the posterior region, or the back of the brain. Simultaneous processing is broadly associated with the occipital and parietal lobes, while Successive processing is broadly associated with the frontal-temporal lobes. The PASS (Planning/Attention/Simultaneous/Successive) theory is heavily indebted both to Luria (1966, 1973) and to studies in cognitive psychology aimed at a better understanding of intelligence.

Piaget's theory and Neo-Piagetian theories

In Piaget's theory of cognitive development the focus is not on mental abilities but rather on a child's mental models of the world. As a child develops, increasingly more accurate models of the world are developed which enable the child to interact with the world better. One example is object permanence, where the child develops a model in which objects continue to exist even when they cannot be seen, heard, or touched.

Piaget's theory described four main stages and many sub-stages in development. These four main stages are:

  • sensorimotor stage (birth–2 years);
  • pre-operational stage (2–7 years);
  • concrete operational stage (7–11 years); and
  • formal operational stage (11–16 years)

The degree of progress through these stages is correlated with, but not identical to, psychometric IQ. Piaget conceptualizes intelligence as an activity more than a capacity.

One of Piaget's most famous studies focused purely on the discriminative abilities of children between the ages of two and a half and four and a half years. He began the study by taking children of different ages and placing two lines of sweets before them, one with the sweets spread further apart and one with the same number of sweets placed more closely together. He found that, "Children between 2 years, 6 months old and 3 years, 2 months old correctly discriminate the relative number of objects in two rows; between 3 years, 2 months and 4 years, 6 months they indicate a longer row with fewer objects to have "more"; after 4 years, 6 months they again discriminate correctly". Initially, younger children were not studied, because if a child at the age of four years could not conserve quantity, then a younger child presumably could not either. The results show, however, that children younger than three years and two months have quantity conservation, but that as they get older they lose this ability, and do not recover it until four and a half years old. This attribute may be lost temporarily because of an overdependence on perceptual strategies, which equate a longer line of candy with more candy, or because of a four-year-old's inability to reverse situations. By the end of this experiment several results were found. First, younger children have a discriminative ability that shows the logical capacity for cognitive operations exists earlier than previously acknowledged. The study also reveals that young children can be equipped with certain qualities for cognitive operations, depending on how logical the structure of the task is. Research also shows that children develop explicit understanding at age 5 and, as a result, will count the sweets to decide which row has more. Finally, the study found that overall quantity conservation is not a basic characteristic of humans' native inheritance.

Piaget's theory has been criticized for the age of appearance of a new model of the world, such as object permanence, being dependent on how the testing is done (see the article on object permanence). More generally, the theory may be very difficult to test empirically because of the difficulty of proving or disproving that a mental model is the explanation for the results of the testing.

Neo-Piagetian theories of cognitive development expand Piaget's theory in various ways, such as also considering psychometric-like factors such as processing speed and working memory, "hypercognitive" factors like self-monitoring, more stages, and more consideration of how progress may vary in different domains such as spatial or social.

Parieto-frontal integration theory of intelligence

Based on a review of 37 neuroimaging studies, Jung and Haier (2007) proposed that the biological basis of intelligence stems from how well the frontal and parietal regions of the brain communicate and exchange information with each other. Subsequent neuroimaging and lesion studies report general consensus with the theory. A review of the neuroscience and intelligence literature concludes that the parieto-frontal integration theory is the best available explanation for human intelligence differences.

Investment theory

Based on the Cattell–Horn–Carroll theory, the tests of intelligence most often used in the relevant studies include measures of fluid ability (Gf) and crystallized ability (Gc), which differ in their trajectory of development in individuals. The 'investment theory' by Cattell states that the individual differences observed in the acquisition of skills and knowledge (Gc) are partially attributable to the 'investment' of Gf, thus suggesting the involvement of fluid intelligence in every aspect of the learning process. It is essential to highlight that the investment theory suggests that personality traits affect 'actual' ability, and not scores on an IQ test. Relatedly, Hebb's theory of intelligence suggested a similar bifurcation: Intelligence A (physiological), which can be seen as analogous to fluid intelligence, and Intelligence B (experiential), similar to crystallized intelligence.

Intelligence compensation theory (ICT)

The intelligence compensation theory (a term first coined by Wood and Englert, 2009) states that comparatively less intelligent individuals work harder and more methodically, and become more resolute and thorough (more conscientious), in order to achieve goals, compensating for their 'lack of intelligence'. More intelligent individuals, by contrast, do not require the traits and behaviours associated with the personality factor conscientiousness to progress, as they can rely on the strength of their cognitive abilities rather than on structure or effort. The theory suggests a causal relationship between intelligence and conscientiousness, such that the development of the personality trait conscientiousness is influenced by intelligence. This assumption is deemed plausible because the reverse causal relationship is unlikely, and it implies that the negative correlation should be higher between fluid intelligence (Gf) and conscientiousness than between Gc and conscientiousness. The justification is the timeline over which Gf, Gc and personality develop: crystallized intelligence would not yet have developed completely when personality traits develop. Subsequently, during school-going ages, more conscientious children would be expected to gain more crystallized intelligence (knowledge) through education, as they would be more efficient, thorough, hard-working and dutiful.

This theory has recently been contradicted by evidence identifying compensatory sample selection, which attributes the earlier findings to the bias introduced by selecting samples of individuals above a certain threshold of achievement.

Bandura's theory of self-efficacy and cognition

The view of cognitive ability has evolved over the years, and it is no longer regarded as a fixed property held by an individual. Instead, the current perspective describes it as a general capacity comprising not only cognitive but also motivational, social and behavioural aspects. These facets work together to perform numerous tasks. An essential and often overlooked skill is managing emotions and aversive experiences that can compromise one's quality of thought and activity. The link between intelligence and success has been bridged by crediting individual differences in self-efficacy. Bandura's theory identifies the difference between possessing skills and being able to apply them in challenging situations. Thus, the theory suggests that individuals with the same level of knowledge and skill may perform badly, averagely or excellently based on differences in self-efficacy.

A key role of cognition is to allow one to predict events and, in turn, to devise methods for dealing with those events effectively. These skills depend on the processing of stimuli that are unclear and ambiguous. To learn the relevant concepts, individuals must be able to rely on their reserve of knowledge to identify, develop and execute options, and they must be able to apply the learning acquired from previous experiences. Thus, a stable sense of self-efficacy is essential to stay focused on tasks in the face of challenging situations.

To summarize, Bandura's theory of self-efficacy and intelligence suggests that individuals with a relatively low sense of self-efficacy in any field will avoid challenges. This effect is heightened when they perceive situations as personal threats. When failure occurs, they recover from it more slowly than others and attribute it to insufficient aptitude. Persons with high levels of self-efficacy, on the other hand, hold a task-diagnostic aim that leads to effective performance.

Process, personality, intelligence and knowledge theory (PPIK)

[Figure: Predicted growth curves for intelligence as process, crystallized intelligence, occupational knowledge and avocational knowledge, based on Ackerman's PPIK theory.]

Developed by Ackerman, the PPIK (process, personality, intelligence and knowledge) theory builds on the approaches to intelligence proposed by Cattell's investment theory and by Hebb, suggesting a distinction between intelligence as knowledge and intelligence as process (two concepts that are comparable and related to Gc and Gf respectively, but broader and closer to Hebb's notions of "Intelligence A" and "Intelligence B") and integrating these factors with elements such as personality, motivation and interests.

Ackerman describes the difficulty of distinguishing process from knowledge, as content cannot be entirely eliminated from any ability test. Personality traits have not been shown to correlate significantly with the intelligence as process aspect except in the context of psychopathology. One exception to this generalization has been the finding of sex differences in cognitive abilities, specifically in mathematical and spatial abilities. On the other hand, the intelligence as knowledge factor has been associated with the personality traits of Openness and Typical Intellectual Engagement, which also correlate strongly with verbal abilities (associated with crystallized intelligence).

Latent inhibition

It appears that latent inhibition can influence one's creativity.

Improving

Because intelligence appears to be at least partly dependent on brain structure and on the genes shaping brain development, it has been proposed that genetic engineering could be used to enhance intelligence, a process sometimes called biological uplift in science fiction. Experiments on genetically modified mice have demonstrated superior ability in learning and memory in various behavioral tasks.

Higher IQ leads to greater success in education, but education also independently raises IQ scores. A 2017 meta-analysis suggests that education increases IQ by 1–5 points per year of schooling, or at least increases IQ test-taking ability.

Attempts to raise IQ with brain training have led to increases on aspects related to the training tasks, for instance working memory, but it is not yet clear whether these increases generalize to increased intelligence per se.

A 2008 research paper claimed that practicing a dual n-back task can increase fluid intelligence (Gf), as measured in several different standard tests. This finding received some attention from popular media, including an article in Wired. However, a subsequent criticism of the paper's methodology questioned the experiment's validity and took issue with the lack of uniformity in the tests used to evaluate the control and test groups. For example, the progressive nature of Raven's Advanced Progressive Matrices (APM) test may have been compromised by modifications of time restrictions (i.e., 10 minutes were allowed to complete a normally 45-minute test).

Substances which actually or purportedly improve intelligence or other mental functions are called nootropics. A meta-analysis shows that omega-3 fatty acids improve cognitive performance in people with cognitive deficits, but not in healthy subjects. A meta-regression shows that omega-3 fatty acids improve the mood of patients with major depression (which is associated with cognitive deficits). Exercise, not just performance-enhancing drugs, also enhances cognition, in both healthy and unhealthy subjects.

On the philosophical front, conscious efforts to influence intelligence raise ethical issues. Neuroethics considers the ethical, legal and social implications of neuroscience, and deals with issues such as the difference between treating a human neurological disease and enhancing the human brain, and how wealth impacts access to neurotechnology. Neuroethical issues interact with the ethics of human genetic engineering.

Transhumanist theorists study the possibilities and consequences of developing and using techniques to enhance human abilities and aptitudes.

Eugenics is a social philosophy which advocates the improvement of human hereditary traits through various forms of intervention. Eugenics has variously been regarded as meritorious or deplorable in different periods of history, falling greatly into disrepute after the defeat of Nazi Germany in World War II.

Measuring

[Figure: Score distribution chart for a sample of 905 children tested on the 1916 Stanford-Binet test.]

The approach to understanding intelligence with the most supporters and published research over the longest period of time is based on psychometric testing. It is also by far the most widely used in practical settings. Intelligence quotient (IQ) tests include the Stanford-Binet, Raven's Progressive Matrices, the Wechsler Adult Intelligence Scale and the Kaufman Assessment Battery for Children. There are also psychometric tests that are not intended to measure intelligence itself but some closely related construct such as scholastic aptitude. In the United States examples include the SSAT, the SAT, the ACT, the GRE, the MCAT, the LSAT, and the GMAT. Regardless of the method used, almost any test that requires examinees to reason and has a wide range of question difficulty will produce intelligence scores that are approximately normally distributed in the general population.
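
To make "approximately normally distributed" concrete, here is a minimal sketch (a hypothetical illustration, assuming the common convention of scaling deviation IQ scores to a mean of 100 and a standard deviation of 15) of the population shares that a normal distribution implies for a few score bands:

from scipy.stats import norm

# Assumed convention: deviation IQ scores scaled to mean 100, SD 15.
mean, sd = 100, 15
print("Share within 85-115:", round(norm.cdf(115, mean, sd) - norm.cdf(85, mean, sd), 3))  # ~0.683
print("Share within 70-130:", round(norm.cdf(130, mean, sd) - norm.cdf(70, mean, sd), 3))  # ~0.954
print("Share above 130:    ", round(norm.sf(130, mean, sd), 3))                            # ~0.023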

Intelligence tests are widely used in educational, business, and military settings because of their efficacy in predicting behavior. IQ and g (discussed in the next section) are correlated with many important social outcomes—individuals with low IQs are more likely to be divorced, have a child out of marriage, be incarcerated, and need long-term welfare support, while high IQs are associated with more years of education, higher-status jobs and higher income. Intelligence is significantly correlated with successful training and performance outcomes, and IQ/g is the single best predictor of successful job performance.

General intelligence factor or g

There are many different kinds of IQ tests using a wide variety of test tasks. Some tests consist of a single type of task, others rely on a broad collection of tasks with different contents (visual-spatial, verbal, numerical) and requiring different cognitive processes (e.g., reasoning, memory, rapid decisions, visual comparisons, spatial imagery, reading, and retrieval of general knowledge). Early in the 20th century, the psychologist Charles Spearman carried out the first formal factor analysis of correlations between various test tasks. He found a trend for all such tests to correlate positively with each other, which is called a positive manifold. Spearman found that a single common factor explained the positive correlations among tests. Spearman named it g for "general intelligence factor". He interpreted it as the core of human intelligence that, to a larger or smaller degree, influences success in all cognitive tasks and thereby creates the positive manifold. This interpretation of g as a common cause of test performance is still dominant in psychometrics, although an alternative interpretation was recently advanced by van der Maas and colleagues. Their mutualism model assumes that intelligence depends on several independent mechanisms, none of which influences performance on all cognitive tests. These mechanisms support each other, so that efficient operation of one of them makes efficient operation of the others more likely, thereby creating the positive manifold.
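
To make the positive manifold and the extraction of a single common factor concrete, the sketch below (a toy illustration in Python, not Spearman's original analysis; the correlation values are invented) builds a correlation matrix in which every test correlates positively with every other and extracts the dominant factor, whose loadings play the role of g-loadings.

import numpy as np

# Invented correlation matrix for five cognitive tests; every entry is
# positive, i.e. a "positive manifold".
R = np.array([
    [1.00, 0.55, 0.50, 0.45, 0.40],
    [0.55, 1.00, 0.52, 0.48, 0.42],
    [0.50, 0.52, 1.00, 0.46, 0.41],
    [0.45, 0.48, 0.46, 1.00, 0.38],
    [0.40, 0.42, 0.41, 0.38, 1.00],
])

# Principal-axis shortcut: the dominant eigenvector of the correlation
# matrix, scaled by the square root of its eigenvalue, gives loadings
# analogous to g-loadings.
eigenvalues, eigenvectors = np.linalg.eigh(R)   # eigenvalues in ascending order
g_loadings = eigenvectors[:, -1] * np.sqrt(eigenvalues[-1])
g_loadings *= np.sign(g_loadings.sum())         # eigenvector sign is arbitrary

print("g-loadings per test:", np.round(g_loadings, 2))
print("share of variance explained by the general factor:",
      round(eigenvalues[-1] / eigenvalues.sum(), 2))

Tests with higher loadings in such an analysis are the ones that correlate most strongly with all the others, which is what "high g-loading" means in the next paragraph.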

IQ tasks and tests can be ranked by how highly they load on the g factor. Tests with high g-loadings are those that correlate highly with most other tests. One comprehensive study investigating the correlations between a large collection of tests and tasks has found that the Raven's Progressive Matrices have a particularly high correlation with most other tests and tasks. The Raven's is a test of inductive reasoning with abstract visual material. It consists of a series of problems, sorted approximately by increasing difficulty. Each problem presents a 3 × 3 matrix of abstract designs with one empty cell; the matrix is constructed according to a rule, and the person must work out the rule to determine which of eight alternatives fits into the empty cell. Because of its high correlation with other tests, the Raven's Progressive Matrices are generally acknowledged as a good indicator of general intelligence. This is problematic, however, because there are substantial gender differences on the Raven's, which are not found when g is measured directly by computing the general factor from a broad collection of tests.

General collective intelligence factor or c

A recent scientific understanding of collective intelligence, defined as a group's general ability to perform a wide range of tasks, expands the areas of human intelligence research applying similar methods and concepts to groups. Definition, operationalization and methods are similar to the psychometric approach of general individual intelligence where an individual's performance on a given set of cognitive tasks is used to measure intelligence indicated by the general intelligence factor g extracted via factor analysis. In the same vein, collective intelligence research aims to discover a ‘c factor’ explaining between-group differences in performance as well as structural and group compositional causes for it.

Historical psychometric theories

Several different theories of intelligence have historically been important for psychometrics. They often emphasized multiple factors rather than the single factor of g theory.

Cattell–Horn–Carroll theory

Many of the broad, recent IQ tests have been greatly influenced by the Cattell–Horn–Carroll theory. It is argued to reflect much of what is known about intelligence from research. A hierarchy of factors for human intelligence is used. g is at the top. Under it there are 10 broad abilities that in turn are subdivided into 70 narrow abilities. The broad abilities are:

  • Fluid intelligence (Gf): includes the broad ability to reason, form concepts, and solve problems using unfamiliar information or novel procedures.
  • Crystallized intelligence (Gc): includes the breadth and depth of a person's acquired knowledge, the ability to communicate one's knowledge, and the ability to reason using previously learned experiences or procedures.
  • Quantitative reasoning (Gq): the ability to comprehend quantitative concepts and relationships and to manipulate numerical symbols.
  • Reading & writing ability (Grw): includes basic reading and writing skills.
  • Short-term memory (Gsm): is the ability to apprehend and hold information in immediate awareness and then use it within a few seconds.
  • Long-term storage and retrieval (Glr): is the ability to store information and fluently retrieve it later in the process of thinking.
  • Visual processing (Gv): is the ability to perceive, analyze, synthesize, and think with visual patterns, including the ability to store and recall visual representations.
  • Auditory processing (Ga): is the ability to analyze, synthesize, and discriminate auditory stimuli, including the ability to process and discriminate speech sounds that may be presented under distorted conditions.
  • Processing speed (Gs): is the ability to perform automatic cognitive tasks, particularly when measured under pressure to maintain focused attention.
  • Decision/reaction time/speed (Gt): reflects the immediacy with which an individual can react to stimuli or a task (typically measured in seconds or fractions of seconds; not to be confused with Gs, which is typically measured in intervals of 2–3 minutes).

Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement and not of IQ. Gt may be difficult to measure without special equipment.

g was earlier often subdivided into only Gf and Gc which were thought to correspond to the nonverbal or performance subtests and verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex.

Controversies

While not necessarily a dispute about the psychometric approach itself, there are several controversies regarding the results from psychometric research.

One criticism has been against the early research such as craniometry. A reply has been that drawing conclusions from early intelligence research is like condemning the auto industry by criticizing the performance of the Model T.

Several critics, such as Stephen Jay Gould, have been critical of g, seeing it as a statistical artifact, and that IQ tests instead measure a number of unrelated abilities. The American Psychological Association's report "Intelligence: Knowns and Unknowns" stated that IQ tests do correlate and that the view that g is a statistical artifact is a minority one.

Intelligence across cultures

Psychologists have shown that the definition of human intelligence is unique to the culture that one is studying. Robert Sternberg is among the researchers who have discussed how one's culture affects the person's interpretation of intelligence, and he further believes that defining intelligence in only one way, without considering different meanings in cultural contexts, may cast an investigative and unintentionally egocentric view on the world. To counter this, psychologists offer the following definitions of intelligence:

  1. Successful intelligence is the skills and knowledge needed for success in life, according to one's own definition of success, within one's sociocultural context.
  2. Analytical intelligence is the result of intelligence's components applied to fairly abstract but familiar kinds of problems.
  3. Creative intelligence is the result of intelligence's components applied to relatively novel tasks and situations.
  4. Practical intelligence is the result of intelligence's components applied to experience for purposes of adaption, shaping and selection.

Although typically identified by its Western definition, multiple studies support the idea that human intelligence carries different meanings across cultures around the world. In many Eastern cultures, intelligence is mainly related to one's social roles and responsibilities. A Chinese conception of intelligence would define it as the ability to empathize with and understand others — although this is by no means the only way that intelligence is defined in China. In several African communities, intelligence is shown similarly through a social lens. However, rather than through social roles, as in many Eastern cultures, it is exemplified through social responsibilities. For example, in the language of Chi-Chewa, which is spoken by some ten million people across central Africa, the equivalent term for intelligence implies not only cleverness but also the ability to take on responsibility. Furthermore, within American culture there is a variety of interpretations of intelligence as well. One of the most common views on intelligence within American society defines it as a combination of problem-solving skills, deductive reasoning skills, and intelligence quotient (IQ), while others hold that intelligent people should have a social conscience, accept others for who they are, and be able to give advice or wisdom.

Future of an expanding universe

From Wikipedia, the free encyclopedia

Observations suggest that the expansion of the universe will continue forever. If so, then a popular theory is that the universe will cool as it expands, eventually becoming too cold to sustain life. For this reason, this future scenario, once popularly called the "Heat Death", is now known as the "Big Chill" or "Big Freeze".

If dark energy—represented by the cosmological constant, a constant energy density filling space homogeneously, or scalar fields, such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space—accelerates the expansion of the universe, then the space between clusters of galaxies will grow at an increasing rate. Redshift will stretch ancient, incoming photons (even gamma rays) to undetectably long wavelengths and low energies. Stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. According to theories that predict proton decay, the stellar remnants left behind will disappear, leaving behind only black holes, which themselves eventually disappear as they emit Hawking radiation. Ultimately, if the universe reaches a state in which the temperature approaches a uniform value, no further work will be possible, resulting in a final heat death of the universe.

Cosmology

Infinite expansion does not determine the overall spatial curvature of the universe. It can be open (with negative spatial curvature), flat, or closed (positive spatial curvature), although if it is closed, sufficient dark energy must be present to counteract the gravitational forces or else the universe will end in a Big Crunch.

Observations of the cosmic background radiation by the Wilkinson Microwave Anisotropy Probe and the Planck mission suggest that the universe is spatially flat and has a significant amount of dark energy. In this case, the universe should continue to expand at an accelerating rate. The acceleration of the universe's expansion has also been confirmed by observations of distant supernovae. If, as in the concordance model of physical cosmology (Lambda-cold dark matter or ΛCDM), dark energy is in the form of a cosmological constant, the expansion will eventually become exponential, with the size of the universe doubling at a constant rate.
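
As a worked illustration of what exponential expansion with a constant doubling rate means (a sketch assuming a purely exponential late-time expansion with constant Hubble rate H, and rough ΛCDM values of H0 ≈ 67 km/s/Mpc and ΩΛ ≈ 0.69, which are assumptions for illustration rather than figures from this article):

\[
a(t) \propto e^{Ht}, \qquad t_{\mathrm{double}} = \frac{\ln 2}{H},
\qquad
H_\infty = H_0\sqrt{\Omega_\Lambda} \approx 67\sqrt{0.69} \approx 56\ \mathrm{km\,s^{-1}\,Mpc^{-1}}.
\]

Since 1/H∞ corresponds to roughly 17 billion years, the doubling time under these assumptions is about 0.693 × 17 ≈ 12 billion years: the universe would roughly double in size every 12 billion years or so once the expansion is fully dominated by the cosmological constant.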

If the theory of inflation is true, the universe went through an episode dominated by a different form of dark energy in the first moments of the Big Bang; but inflation ended, indicating an equation of state much more complicated than those assumed so far for present-day dark energy. It is possible that the dark energy equation of state could change again resulting in an event that would have consequences which are extremely difficult to parametrize or predict.

Future history

In the 1970s, the future of an expanding universe was studied by the astrophysicist Jamal Islam and the physicist Freeman Dyson. Then, in their 1999 book The Five Ages of the Universe, the astrophysicists Fred Adams and Gregory Laughlin divided the past and future history of an expanding universe into five eras. The first, the Primordial Era, is the time in the past just after the Big Bang when stars had not yet formed. The second, the Stelliferous Era, includes the present day and all of the stars and galaxies now seen. It is the time during which stars form from collapsing clouds of gas. In the subsequent Degenerate Era, the stars will have burnt out, leaving all stellar-mass objects as stellar remnants: white dwarfs, neutron stars, and black holes. In the Black Hole Era, white dwarfs, neutron stars, and other smaller astronomical objects have been destroyed by proton decay, leaving only black holes. Finally, in the Dark Era, even black holes have disappeared, leaving only a dilute gas of photons and leptons.

This future history and the timeline below assume the continued expansion of the universe. If space in the universe begins to contract, subsequent events in the timeline may not occur because the Big Crunch, the collapse of the universe into a hot, dense state similar to that after the Big Bang, will supervene.

Timeline

The Stelliferous Era

From the present to about 10^14 (100 trillion) years after the Big Bang

The observable universe is currently 1.38×10^10 (13.8 billion) years old. This time is in the Stelliferous Era. About 155 million years after the Big Bang, the first star formed. Since then, stars have formed by the collapse of small, dense core regions in large, cold molecular clouds of hydrogen gas. At first, this produces a protostar, which is hot and bright because of energy generated by gravitational contraction. After the protostar contracts for a while, its center will become hot enough to fuse hydrogen and its lifetime as a star will properly begin.

Stars of very low mass will eventually exhaust all their fusible hydrogen and then become helium white dwarfs. Stars of low to medium mass, such as our own sun, will expel some of their mass as a planetary nebula and eventually become white dwarfs; more massive stars will explode in a core-collapse supernova, leaving behind neutron stars or black holes. In any case, although some of the star's matter may be returned to the interstellar medium, a degenerate remnant will be left behind whose mass is not returned to the interstellar medium. Therefore, the supply of gas available for star formation is steadily being exhausted.

Milky Way Galaxy and the Andromeda Galaxy merge into one

4–8 billion years from now (17.8–21.8 billion years after the Big Bang)

The Andromeda Galaxy is currently approximately 2.5 million light years away from our galaxy, the Milky Way, and the two are moving towards each other at approximately 300 kilometers (186 miles) per second. Based on current evidence, approximately five billion years from now, or 19 billion years after the Big Bang, the Milky Way and the Andromeda Galaxy will collide with one another and merge into one large galaxy. Up until 2012, there was no way to confirm whether the collision was going to happen or not. In 2012, researchers concluded that the collision is definite after using the Hubble Space Telescope between 2002 and 2010 to track the motion of Andromeda. The merger will result in the formation of a single galaxy sometimes nicknamed Milkdromeda (also known as Milkomeda).

Coalescence of the Local Group; galaxies outside the Local Supercluster are no longer accessible

10^11 (100 billion) to 10^12 (1 trillion) years

The galaxies in the Local Group, the cluster of galaxies which includes the Milky Way and the Andromeda Galaxy, are gravitationally bound to each other. It is expected that between 10^11 (100 billion) and 10^12 (1 trillion) years from now, their orbits will decay and the entire Local Group will merge into one large galaxy.

Assuming that dark energy continues to make the universe expand at an accelerating rate, in about 150 billion years all galaxies outside the Local Supercluster will pass behind the cosmological horizon. It will then be impossible for events in the Local Group to affect other galaxies. Similarly, it will be impossible for events after 150 billion years, as seen by observers in distant galaxies, to affect events in the Local Group. However, an observer in the Local Supercluster will continue to see distant galaxies, but the events they observe will become exponentially more redshifted as each galaxy approaches the horizon, until time in the distant galaxy seems to stop. The observer in the Local Supercluster never observes events after 150 billion years in their local time, and eventually all light and background radiation lying outside the Local Supercluster will appear to blink out as the light becomes so redshifted that its wavelength exceeds the physical diameter of the horizon.

Technically, it will take an infinitely long time for all causal interaction between our Local Supercluster and this light to cease; however, due to the redshifting explained above, the light will not necessarily be observed for an infinite amount of time, and after 150 billion years, no new causal interaction will be observed.

Therefore, after 150 billion years, intergalactic transportation and communication beyond the Local Supercluster become causally impossible, unless faster-than-light communication, warp drives, or traversable artificial wormholes are developed.

Luminosities of galaxies begin to diminish

8×10^11 (800 billion) years

8×10^11 (800 billion) years from now, the luminosities of galaxies, which until then will have remained approximately similar to current values thanks to the increasing luminosity of the remaining stars as they age, will start to decrease as the less massive red dwarf stars begin to die as white dwarfs.

Galaxies outside the Local Supercluster are no longer detectable

2×10^12 (2 trillion) years

2×10^12 (2 trillion) years from now, all galaxies outside the Local Supercluster will be redshifted to such an extent that even gamma rays they emit will have wavelengths longer than the size of the observable universe of the time. Therefore, these galaxies will no longer be detectable in any way.

Degenerate Era

From 10^14 (100 trillion) to 10^40 (10 duodecillion) years

By 10^14 (100 trillion) years from now, star formation will end, leaving all stellar objects in the form of degenerate remnants. If protons do not decay, stellar-mass objects will disappear more slowly, making this era last longer.

Star formation ceases

10^12 to 10^14 (1–100 trillion) years

By 10^14 (100 trillion) years from now, star formation will end. This period, known as the "Degenerate Era", will last until the degenerate remnants finally decay. The least massive stars take the longest to exhaust their hydrogen fuel (see stellar evolution). Thus, the longest-living stars in the universe are low-mass red dwarfs, with a mass of about 0.08 solar masses (M☉), which have a lifetime of order 10^13 (10 trillion) years. Coincidentally, this is comparable to the length of time over which star formation takes place. Once star formation ends and the least massive red dwarfs exhaust their fuel, nuclear fusion will cease. The low-mass red dwarfs will cool and become black dwarfs. The only objects remaining with more than planetary mass will be brown dwarfs, with mass less than 0.08 M☉, and degenerate remnants: white dwarfs, produced by stars with initial masses between about 0.08 and 8 solar masses, and neutron stars and black holes, produced by stars with initial masses over 8 M☉. Most of the mass of this collection, approximately 90%, will be in the form of white dwarfs. In the absence of any energy source, all of these formerly luminous bodies will cool and become faint.

The universe will become extremely dark after the last stars burn out. Even so, there can still be occasional light in the universe. One of the ways the universe can be illuminated is if two carbon-oxygen white dwarfs with a combined mass of more than the Chandrasekhar limit of about 1.4 solar masses happen to merge. The resulting object will then undergo runaway thermonuclear fusion, producing a Type Ia supernova and dispelling the darkness of the Degenerate Era for a few weeks. Neutron stars could also collide, forming even brighter supernovae and dispersing up to 6 solar masses of degenerate gas into the interstellar medium. The resulting matter from these supernovae could potentially create new stars. If the combined mass is not above the Chandrasekhar limit but is larger than the minimum mass to fuse carbon (about 0.9 M☉), a carbon star could be produced, with a lifetime of around 10^6 (1 million) years. Also, if two helium white dwarfs with a combined mass of at least 0.3 M☉ collide, a helium star may be produced, with a lifetime of a few hundred million years. Finally, brown dwarfs can form new stars by colliding with each other to form a red dwarf star, which can survive for 10^13 (10 trillion) years, or by accreting gas at very slow rates from the remaining interstellar medium until they have enough mass to start hydrogen burning as red dwarfs as well. This process, at least on white dwarfs, could induce Type Ia supernovae too.

Planets fall or are flung from orbits by a close encounter with another star

10^15 (1 quadrillion) years

Over time, the orbits of planets will decay due to gravitational radiation, or planets will be ejected from their local systems by gravitational perturbations caused by encounters with another stellar remnant.

Stellar remnants escape galaxies or fall into black holes

10^19 to 10^20 (10 to 100 quintillion) years

Over time, objects in a galaxy exchange kinetic energy in a process called dynamical relaxation, making their velocity distribution approach the Maxwell–Boltzmann distribution. Dynamical relaxation can proceed either by close encounters of two stars or by less violent but more frequent distant encounters. In the case of a close encounter, two brown dwarfs or stellar remnants will pass close to each other. When this happens, the trajectories of the objects involved in the close encounter change slightly, in such a way that their kinetic energies are more nearly equal than before. After a large number of encounters, then, lighter objects tend to gain speed while the heavier objects lose it.

Because of dynamical relaxation, some objects in the Universe will gain just enough energy to reach galactic escape velocity and depart the galaxy, leaving behind a smaller, denser galaxy. Since encounters are more frequent in the denser galaxy, the process then accelerates. The end result is that most objects (90% to 99%) are ejected from the galaxy, leaving a small fraction (maybe 1% to 10%) which fall into the central supermassive black hole. It has been suggested that the matter of the fallen remnants will form an accretion disk around it that will create a quasar, as long as enough matter is present there.

Possible ionization of matter

>10^23 years from now

In an expanding universe with decreasing density and non-zero cosmological constant, matter density would reach zero, resulting in most matter except black dwarfs, neutron stars, black holes, and planets ionizing and dissipating at thermal equilibrium.

Future with proton decay

The following timeline assumes that protons do decay.

Chance: 10^34 (10 decillion) to 10^39 (1 duodecillion) years

The subsequent evolution of the universe depends on the possibility and rate of proton decay. Experimental evidence shows that if the proton is unstable, it has a half-life of at least 10^34 years. Some of the Grand Unified Theories (GUTs) predict long-term proton instability between 10^31 and 10^36 years, with the upper bound on standard (non-supersymmetric) proton decay at 1.4×10^36 years and an overall upper limit for any proton decay (including supersymmetry models) at 6×10^39 years. Recent research showing the proton lifetime (if the proton is unstable) to be at or exceeding the 10^34–10^35 year range rules out simpler GUTs and most non-supersymmetric models.

Nucleons start to decay

Neutrons bound into nuclei are also suspected to decay with a half-life comparable to that of protons. Planets (substellar objects) would decay in a simple cascade process from heavier elements to pure hydrogen while radiating energy.

In the event that the proton does not decay at all, stellar objects would still disappear, but more slowly. See Future without proton decay below.

Shorter or longer proton half-lives will accelerate or decelerate the process. This means that after 10^37 years (the maximum proton half-life used by Adams & Laughlin (1997)), one-half of all baryonic matter will have been converted into gamma ray photons and leptons through proton decay.

All nucleons decay

10^40 (10 duodecillion) years

Given our assumed half-life of the proton, nucleons (protons and bound neutrons) will have undergone roughly 1,000 half-lives by the time the universe is 10^40 years old. To put this into perspective, there are an estimated 10^80 protons currently in the universe. This means that the number of nucleons will be slashed in half 1,000 times by the time the universe is 10^40 years old. Hence, there will be roughly 0.5^1000 (approximately 10^-301) as many nucleons remaining as there are today; that is, zero nucleons remaining in the universe at the end of the Degenerate Era. Effectively, all baryonic matter will have been changed into photons and leptons. Some models predict the formation of stable positronium atoms with diameters greater than the observable universe's current diameter (roughly 6×10^34 metres) in 10^85 years, and that these will in turn decay to gamma radiation in 10^141 years.
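
The arithmetic behind this estimate can be written out explicitly (a sketch using the 10^37-year half-life assumed above and the quoted figure of roughly 10^80 protons):

\[
N_{\text{half-lives}} = \frac{10^{40}}{10^{37}} = 1000,
\qquad
\frac{N}{N_0} = \left(\tfrac{1}{2}\right)^{1000} \approx 10^{-301},
\qquad
10^{80} \times 10^{-301} = 10^{-221} \ll 1 .
\]

In other words, the expected number of surviving nucleons is vastly less than one, which is why it is treated here as zero.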

The supermassive black holes are all that remain of galaxies once all protons decay, but even these giants are not immortal.

If protons decay via higher-order nuclear processes

Chance: 10^65 to 10^200 years

In the event that the proton does not decay according to the theories described above, the Degenerate Era will last longer, and will overlap or surpass the Black Hole Era. On a time scale of 10^65 years, solid matter will behave as a liquid and become smooth spheres due to diffusion and gravity. Degenerate stellar objects can still experience proton decay, for example via processes involving the Adler–Bell–Jackiw anomaly, virtual black holes, or higher-dimensional supersymmetry, possibly with a half-life of under 10^200 years.

>10^150 years from now

Although protons are stable in standard-model physics, a quantum anomaly may exist at the electroweak level which can cause groups of baryons (protons and neutrons) to annihilate into antileptons via the sphaleron transition. Such baryon/lepton-number violations occur in units of three and can therefore only take place in multiples or groups of three baryons, which may restrict or prohibit such events. No experimental evidence of sphalerons has yet been observed at low energy levels, though they are believed to occur regularly at high energies and temperatures.

The photon, electron, positron, and neutrino are now the final remnants of the universe as the last of the supermassive black holes evaporate.

Black Hole Era

10^40 (10 duodecillion) years to approximately 10^100 (1 googol) years, up to 10^108 years for the largest supermassive black holes

After 10^40 years, black holes will dominate the universe. They will slowly evaporate via Hawking radiation. A black hole with a mass of around 1 M☉ will vanish in around 2×10^66 years. As the lifetime of a black hole is proportional to the cube of its mass, more massive black holes take longer to decay. A supermassive black hole with a mass of 10^11 (100 billion) M☉ will evaporate in around 2×10^99 years.
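
The cubic scaling quoted above can be written as a formula (a sketch consistent with the figures in this paragraph):

\[
t_{\mathrm{evap}} \approx 2\times10^{66}\ \mathrm{yr}\times\left(\frac{M}{M_\odot}\right)^{3},
\qquad
t_{\mathrm{evap}}\!\left(10^{11}\,M_\odot\right) \approx 2\times10^{66}\times\left(10^{11}\right)^{3} = 2\times10^{99}\ \mathrm{yr}.
\]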

The largest black holes in the universe are predicted to continue to grow. Larger black holes of up to 10^14 (100 trillion) M☉ may form during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of 10^106 to 10^108 years.

Hawking radiation has a thermal spectrum. During most of a black hole's lifetime, the radiation has a low temperature and is mainly in the form of massless particles such as photons and hypothetical gravitons. As the black hole's mass decreases, its temperature increases, becoming comparable to the Sun's by the time the black hole mass has decreased to 10^19 kilograms. The hole then provides a temporary source of light during the general darkness of the Black Hole Era. During the last stages of its evaporation, a black hole will emit not only massless particles, but also heavier particles, such as electrons, positrons, protons, and antiprotons.

Dark Era and Photon Age

From 10^100 years (10 duotrigintillion or 1 googol years)

After all the black holes have evaporated (and after all the ordinary matter made of protons has disintegrated, if protons are unstable), the universe will be nearly empty. Photons, neutrinos, electrons, and positrons will fly from place to place, hardly ever encountering each other. Gravitationally, the universe will be dominated by dark matter, electrons, and positrons (not protons).

By this era, with only very diffuse matter remaining, activity in the universe will have tailed off dramatically (compared with previous eras), with very low energy levels and very large time scales. Electrons and positrons drifting through space will encounter one another and occasionally form positronium atoms. These structures are unstable, however, and their constituent particles must eventually annihilate. Other low-level annihilation events will also take place, albeit very slowly. The universe now reaches an extremely low-energy state.

Future without proton decay

If the protons do not decay, stellar-mass objects will still become black holes, but more slowly. The following timeline assumes that proton decay does not take place.

Degenerate Era

Matter decays into iron

10^1100 to 10^32000 years from now

In 10^1500 years, cold fusion occurring via quantum tunneling should make the light nuclei in stellar-mass objects fuse into iron-56 nuclei. Fission and alpha particle emission should make heavy nuclei also decay to iron, leaving stellar-mass objects as cold spheres of iron, called iron stars. Before this happens, in some black dwarfs the process is expected to lower their Chandrasekhar limit, resulting in a supernova in 10^1100 years. Non-degenerate silicon has been calculated to tunnel to iron in approximately 10^32000 years.

Black Hole Era

Collapse of iron stars to black holes

10^10^26 to 10^10^76 years from now

Quantum tunneling should also turn large objects into black holes, which (on these timescales) will instantaneously evaporate into subatomic particles. Depending on the assumptions made, the time this takes to happen can be calculated as from 10^10^26 years to 10^10^76 years. Quantum tunneling may also make iron stars collapse into neutron stars in around 10^10^76 years.

Dark Era (without proton decay)

10^10^76 years from now

With black holes evaporated, virtually no matter still exists, the universe having become an almost pure vacuum (possibly accompanied by a false vacuum). The expansion of the universe slowly cools it down towards absolute zero.

Beyond

Beyond 10^2500 years if proton decay occurs, or 10^10^76 years without proton decay

It is possible that a Big Rip event may occur far off into the future. This singularity would take place at a finite scale factor.

If the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state.

Presumably, extreme low-energy states imply that localized quantum events become major macroscopic phenomena rather than negligible microscopic events, because the smallest perturbations make the biggest difference in this era; there is no telling what may happen to space or time. It is perceived that the laws of "macro-physics" will break down, and the laws of quantum physics will prevail.

The universe could possibly avoid eternal heat death through random quantum tunneling and quantum fluctuations, given the non-zero probability of producing a new Big Bang in roughly 10^10^10^56 years.

Over an infinite amount of time, there could be a spontaneous entropy decrease, by a Poincaré recurrence or through thermal fluctuations (see also fluctuation theorem).

Massive black dwarfs could also potentially explode into supernovae after up to 10^32000 years, assuming protons do not decay.

The possibilities above are based on a simple form of dark energy. But the physics of dark energy is still a very active area of research, and the actual form of dark energy could be much more complex. For example, during inflation dark energy affected the universe very differently than it does today, so it is possible that dark energy could trigger another inflationary period in the future. Until dark energy is better understood, its possible effects are extremely difficult to predict or parametrize.

Vehicle-to-grid

From Wikipedia, the free encyclopedia
 
[Image: A V2G-enabled EV fast charger]

Vehicle-to-grid (V2G) describes a system in which plug-in electric vehicles, such as battery electric vehicles (BEV), plug-in hybrids (PHEV) or hydrogen fuel cell electric vehicles (FCEV), communicate with the power grid to sell demand response services by either returning electricity to the grid or by throttling their charging rate. V2G storage capabilities can enable EVs to store and discharge electricity generated from renewable energy sources such as solar and wind, with output that fluctuates depending on weather and time of day.

V2G can be used with gridable vehicles, that is, plug-in electric vehicles (BEV and PHEV) that can connect to the grid. Since at any given time 95 percent of cars are parked, the batteries in electric vehicles could be used to let electricity flow from the car to the electric distribution network and back. A 2015 report on potential earnings associated with V2G found that, with proper regulatory support, vehicle owners could earn $454, $394, or $318 per year depending on whether their average daily drive was 32, 64, or 97 km (20, 40, or 60 miles), respectively.

Batteries have a finite number of charging cycles as well as a shelf life, so using vehicles as grid storage can affect battery longevity. Studies that cycle batteries two or more times per day have shown large decreases in capacity and greatly shortened life. However, battery capacity is a complex function of factors such as battery chemistry, charging and discharging rate, temperature, state of charge, and age. Most studies with slower discharge rates show only a few percent of additional degradation, while one study has suggested that using vehicles for grid storage could improve longevity.

When an aggregator modulates the charging of a fleet of electric vehicles to offer services to the grid without any actual electrical flow from the vehicles to the grid, this is sometimes called unidirectional V2G, as opposed to the bidirectional V2G generally discussed in this article.

Applications

Peak load leveling

The concept allows V2G vehicles to provide power to help balance loads by "valley filling" (charging at night when demand is low) and "peak shaving" (sending power back to the grid when demand is high; see duck curve). Peak load leveling can enable new ways for utilities to provide regulation services (keeping voltage and frequency stable) and spinning reserves (meeting sudden demands for power). Coupled with "smart meters", these services would allow V2G vehicles to give power back to the grid and, in return, receive monetary compensation based on how much power they return. It has been proposed that such use of electric vehicles could buffer renewable power sources such as wind power, for example by storing excess energy produced during windy periods and providing it back to the grid during high-load periods, thus effectively stabilizing the intermittency of wind power. Some see this application of vehicle-to-grid technology as an approach to help renewable energy become a base load electricity technology.
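As an illustration of the valley-filling and peak-shaving logic described above, here is a minimal Python sketch; the demand thresholds, state-of-charge limits, and function names are hypothetical and chosen only for illustration, not taken from any real V2G deployment.

```python
# Minimal sketch of the peak-shaving / valley-filling decision described above.
# All thresholds and battery parameters are illustrative assumptions.

def v2g_dispatch(grid_demand_mw, soc, *,
                 peak_threshold_mw=35_000,    # hypothetical "peak" demand level
                 valley_threshold_mw=20_000,  # hypothetical "valley" demand level
                 min_soc=0.30,                # keep enough charge for driving
                 max_soc=0.95):
    """Return 'discharge', 'charge', or 'idle' for one plugged-in vehicle.

    grid_demand_mw: current system demand reported by the utility/aggregator.
    soc: battery state of charge as a fraction (0.0 - 1.0).
    """
    if grid_demand_mw >= peak_threshold_mw and soc > min_soc:
        return "discharge"   # peak shaving: send power back to the grid
    if grid_demand_mw <= valley_threshold_mw and soc < max_soc:
        return "charge"      # valley filling: absorb cheap off-peak energy
    return "idle"

# Example: high evening demand with a mostly full battery -> discharge
print(v2g_dispatch(grid_demand_mw=38_000, soc=0.80))  # 'discharge'
print(v2g_dispatch(grid_demand_mw=15_000, soc=0.50))  # 'charge'
```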

It has been proposed that public utilities would not have to build as many natural gas or coal-fired power plants to meet peak demand or as an insurance policy against power outages. Since demand can be measured locally by a simple frequency measurement, dynamic load leveling can be provided as needed. Carbitrage, a portmanteau of 'car' and 'arbitrage', is sometimes used to refer to the minimum price of electricity at which a vehicle would discharge its battery.

Backup power

Modern electric vehicles can generally store in their batteries more than an average home's daily energy demand. Even without a PHEV's ability to generate electricity from gasoline, such a vehicle could be used for emergency power for several days (for example, for lighting and home appliances). This would be an example of vehicle-to-home transmission (V2H). As such, electric vehicles may be seen as a complementary technology for intermittent renewable power resources such as wind or solar electricity. Hydrogen fuel cell vehicles (FCV) with tanks containing up to 5.6 kg of hydrogen can deliver more than 90 kWh of electricity.
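The hydrogen figure can be sanity-checked with a rough calculation (illustrative assumptions: a lower heating value of about 33.3 kWh per kilogram of hydrogen and a fuel-cell electrical efficiency of roughly 50%):

$$5.6\ \mathrm{kg} \times 33.3\ \mathrm{kWh/kg} \approx 186\ \mathrm{kWh_{chemical}}, \qquad 186\ \mathrm{kWh} \times 0.5 \approx 93\ \mathrm{kWh_{electric}},$$

which is consistent with the "more than 90 kWh" quoted above.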

Types of V2G

Unidirectional V2G or V1G

Many of the grid-scale benefits of V2G can be accomplished with unidirectional V2G, also known as V1G or "smart charging". The California Independent System Operator (CAISO) defines V1G as "unidirectional managed charging services" and defines the four levels of Vehicle-Grid Interface (VGI), which encompasses all of the ways that EVs can provide grid services, as follows:

  1. Unidirectional power flow (V1G) with one resource and unified actors
  2. V1G with aggregated resources
  3. V1G with fragmented actor objectives
  4. Bidirectional power flow (V2G)

V1G involves varying the time or rate at which an electric vehicle is charged in order to provide ancillary services to the grid, while V2G also includes reverse power flow. V1G includes applications such as timing vehicles to charge in the middle of the day to absorb excess solar generation, or varying the charge rate of electric vehicles to provide frequency response services or load balancing services.

V1G may be the best option for beginning to integrate EVs as controllable loads on the electric grid, because of technical issues that currently limit the feasibility of V2G. V2G requires specialized hardware (especially bi-directional inverters), has fairly high losses and limited round-trip efficiency, and may contribute to EV battery degradation due to increased energy throughput. Additionally, revenues from V2G in an SCE pilot project were lower than the costs of administering the project, indicating that V2G is not yet economically viable.

Bidirectional local V2G (V2H, V2B, V2X)

Vehicle-to-home (V2H), vehicle-to-building (V2B), and vehicle-to-everything (V2X) do not typically affect grid performance directly but instead create a balance within the local environment. The electric vehicle is used as a residential back-up power supply during periods of power outage, or for increasing self-consumption of energy produced on-site (demand charge avoidance).

Unlike more mature V1G solutions, V2X has not yet reached market deployment, apart from Japan, where commercial V2H solutions have been available since 2012 as a back-up solution in case of electricity blackouts.

Bidirectional V2G

With V2G, electric vehicles can be equipped to actually provide electricity to the grid. The utility or transmission system operator may be willing to purchase energy from customers during periods of peak demand, or to use the EV battery capacity for providing ancillary services, such as balancing and frequency control, including primary frequency regulation and secondary reserve. Thus, in most applications V2G is deemed to have higher potential commercial value than V2B or V2H. A 6 kW CHAdeMO V2G charger may cost about AU$10,000 (US$7,000).

Efficiency

Most modern battery electric vehicles use lithium-ion cells that can achieve round-trip efficiency greater than 90%. The efficiency of the battery depends on factors like charge rate, charge state, battery state of health, and temperature.

The majority of losses, however, are in system components other than the battery. Power electronics, such as inverters, typically dominate overall losses. One study found overall round-trip efficiency for a V2G system in the range of 53% to 62%. Another study reports an efficiency of about 70%. The overall efficiency, however, depends on several factors and can vary widely.
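As an illustrative decomposition (the component efficiencies below are assumptions, not measured values), the overall figure is roughly the product of the charging-path, battery, and discharging-path efficiencies:

$$\eta_{\mathrm{round\ trip}} \approx \eta_{\mathrm{charge}} \times \eta_{\mathrm{battery}} \times \eta_{\mathrm{discharge}} \approx 0.85 \times 0.90 \times 0.85 \approx 0.65,$$

which lies within the 53–70% range reported by the studies above.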

Implementation by country

A study conducted in 2012 by the Idaho National Laboratory revealed the following estimates and future plans for V2G in various countries. These are difficult to quantify because the technology is still in its nascent stage, which makes adoption of the technology around the world difficult to predict reliably. The following list is not intended to be exhaustive, but rather to give an idea of the scope of development and progress in these areas around the world.

United States

PJM Interconnection has envisioned using US Postal Service trucks, school buses and garbage trucks that sit unused overnight for grid connection. This could generate millions of dollars because these fleets would aid in storing and stabilizing some of the national grid's energy. The United States was projected to have one million electric vehicles on the road between 2015 and 2019. Studies indicate that 160 new power plants would need to be built by 2020 to compensate for electric vehicles if integration with the grid does not move forward.

In North America, at least two major school-bus manufacturers—Blue Bird and Lion—are working on proving the benefits of electrification and vehicle-to-grid technology. As school buses in the U.S. currently use $3.2 billion worth of diesel a year, their electrification can help stabilize the electrical grid, lessen the need for new power plants, and reduce children's exposure to cancer-causing exhaust.

In 2017, at the University of California San Diego, V2G technology provider Nuvve launched a pilot program called INVENT, funded by the California Energy Commission, with the installation of 50 V2G bi-directional charging stations around the campus. The program expanded in 2018 to include a fleet of EVs for its free nighttime shuttle service, Triton Rides.

In 2018, Nissan launched a pilot program under the Nissan Energy Share initiative in partnership with vehicle-to-grid systems company Fermata Energy, seeking to use bi-directional charging technology to partially power Nissan North America's headquarters in Franklin, Tennessee. In 2020, Fermata Energy's bidirectional electric vehicle charging system became the first to be certified to the North American safety standard UL 9741, the Standard for Bidirectional Electric Vehicle (EV) Charging System Equipment.

Japan

In order to meet the 2030 target of 10 percent of Japan's energy being generated by renewable resources, a cost of $71.1 billion will be required for upgrades to existing grid infrastructure. The Japanese charging infrastructure market is projected to grow from $118.6 million to $1.2 billion between 2015 and 2020. Starting in 2012, Nissan plans to bring to market a kit compatible with the LEAF EV that will be able to provide power back into a Japanese home. Currently, there is a prototype being tested in Japan. Average Japanese homes use 10 to 12 kWh per day, so with the LEAF's 24 kWh battery capacity this kit could potentially provide up to two days of power. Production in additional markets will follow upon Nissan's ability to properly complete adaptations.
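The "up to two days" estimate follows directly from the quoted figures, assuming a full battery and ignoring conversion losses:

$$\frac{24\ \mathrm{kWh}}{10\text{–}12\ \mathrm{kWh/day}} \approx 2\text{–}2.4\ \mathrm{days}.$$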

In November 2018 in Toyota City, Aichi Prefecture, Toyota Tsusho Corporation and Chubu Electric Power Co., Inc. initiated charging and discharging demonstrations with the storage batteries of electric vehicles and plug-in hybrid vehicles using V2G technology. The demonstration examines how well V2G systems can balance the demand and supply of electricity and what impacts V2G has on the power grid. In addition to ordinary uses of EVs/PHVs such as transportation, the group is creating new value from EVs/PHVs by providing V2G services while the vehicles are parked. Two bi-directional charging stations, connected to a V2G aggregation server managed by Nuvve Corporation, have been installed at a parking lot in Toyota City to conduct the demonstration test. The group aims to assess the capacity of EVs/PHVs to balance the demand and supply of electrical power by charging the vehicles and supplying electrical power to the grid from them.

Denmark

Denmark is one of the world's largest wind-based power generators. Initially, Denmark's goal is to replace 10% of all vehicles with PEVs, with an ultimate goal of complete replacement to follow. The Edison Project implements a new set of goals that will allow enough turbines to be built to accommodate 50% of total power while using V2G to prevent negative impacts on the grid. Because of the unpredictability of wind, the Edison Project plans to use PEVs while they are plugged into the grid to store additional wind energy that the grid cannot handle. Then, during peak energy-use hours, or when the wind is calm, the power stored in these PEVs will be fed back into the grid. To aid in the acceptance of EVs, policies have been enforced that create a tax differential between zero-emission cars and traditional automobiles. The Danish PEV market value is expected to grow from $50 million to $380 million between 2015 and 2020. PEV developmental progress and advancements pertaining to the use of renewable energy resources are expected to make Denmark a market leader in V2G innovation (ZigBee 2010).

Following the Edison project, the Nikola project was started, focusing on demonstrating V2G technology in a lab setting at the Risø Campus (DTU). DTU is a partner along with Nuvve and Nissan. The Nikola project was completed in 2016, laying the groundwork for Parker, which uses a fleet of EVs to demonstrate the technology in a real-life setting. This project is partnered by DTU, Insero, Nuvve, Nissan and Frederiksberg Forsyning (a Danish DSO in Copenhagen). Besides demonstrating the technology, the project also aims to clear the path for V2G integration with other OEMs, as well as calculating the business case for several types of V2G, such as adaptive charging, overload protection, peak shaving, emergency backup and frequency balancing. In the project the partners explored the most viable commercial opportunities by systematically testing and demonstrating V2G services across car brands. Economic and regulatory barriers were identified, as well as the economic and technical impacts of the applications on the power system and markets. The project started in August 2016 and ended in September 2018.

United Kingdom

The V2G market in the UK will be stimulated by aggressive smart grid and PEV rollouts. Starting in January 2011, programs and strategies to assist PEV adoption have been implemented. The UK has begun devising strategies to increase the speed of adoption of EVs, including providing universal high-speed internet for use with smart grid meters, because most V2G-capable PEVs will not coordinate with the larger grid without it. The "Electric Delivery Plan for London" states that by 2015, there will be 500 on-road charging stations; 2,000 stations off-road in car parks; and 22,000 privately owned stations installed. Local grid substations will need to be upgraded for drivers who cannot park on their own property. By 2020 in the UK, every residential home will have been offered a smart meter, and about 1.7 million PEVs should be on the road. The UK's electric vehicle market value is projected to grow from $0.1 billion to $1.3 billion between 2015 and 2020 (ZigBee 2010).

In 2018, EDF Energy announced a partnership with a leading green technology company, Nuvve, to install up to 1,500 vehicle-to-grid (V2G) chargers in the UK. The chargers will be offered to EDF Energy's business customers and will be used at its own sites to provide up to 15 MW of additional energy storage capacity, the equivalent of the energy required to power 4,000 homes. The stored electricity will be made available for sale on the energy markets or for supporting grid flexibility at times of peak energy use. EDF Energy is the largest electricity supplier to UK businesses, and its partnership with Nuvve could see the largest deployment of V2G chargers in the UK so far.

In fall 2019, a consortium called Vehicle to Grid Britain (V2GB) released a research report on the potential of V2G technologies.

Research

Edison

Denmark's Edison project, an abbreviation for 'Electric vehicles in a Distributed and Integrated market using Sustainable energy and Open Networks', was a partially state-funded research project on the island of Bornholm in eastern Denmark. The consortium of IBM, Siemens, the hardware and software developer EURISCO, Denmark's largest energy company Ørsted (formerly DONG Energy), the regional energy company Østkraft, the Technical University of Denmark, and the Danish Energy Association explored how to balance the unpredictable electricity loads generated by Denmark's many wind farms, which currently generate approximately 20 percent of the country's total electricity production, by using electric vehicles (EVs) and their accumulators. The aim of the project was to develop infrastructure enabling EVs to intelligently communicate with the grid to determine when charging, and ultimately discharging, can take place. At least one rebuilt V2G-capable Toyota Scion was used in the project. The project was key in Denmark's ambitions to expand its wind-power generation to 50% by 2020.

According to a source quoted in the British newspaper The Guardian, it had 'never been tried at this scale' before. The project concluded in 2013.

Southwest Research Institute

In 2014, Southwest Research Institute (SwRI) developed the first vehicle-to-grid aggregation system qualified by the Electric Reliability Council of Texas (ERCOT). The system allows owners of electric delivery truck fleets to make money by assisting in managing the grid frequency. When the electric grid frequency drops below 60 hertz, the system suspends vehicle charging, which removes load from the grid and allows the frequency to rise back to a normal level. The system is the first of its kind because it operates autonomously.
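A minimal sketch of this under-frequency behavior is shown below; the dead band, polling loop, and charger interface are illustrative assumptions, not details of SwRI's actual system.

```python
# Illustrative sketch of autonomous under-frequency response: charging is
# suspended when the locally measured grid frequency drops below the 60 Hz
# nominal value (minus a small dead band), and resumed once it recovers.

import time

NOMINAL_HZ = 60.0
DEAD_BAND_HZ = 0.05   # hypothetical tolerance before curtailing load

def update_charging(measured_hz, charger):
    """Suspend or resume fleet charging based on locally measured frequency."""
    if measured_hz < NOMINAL_HZ - DEAD_BAND_HZ:
        charger.pause()    # shed load so system frequency can recover
    else:
        charger.resume()   # frequency is healthy; continue charging

class DummyCharger:
    """Stand-in for a real fleet-charger interface."""
    def pause(self):
        print("charging paused")
    def resume(self):
        print("charging resumed")

if __name__ == "__main__":
    charger = DummyCharger()
    for freq in (60.01, 59.92, 59.99, 60.02):  # simulated frequency readings
        update_charging(freq, charger)
        time.sleep(0.1)
```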

The system was originally developed as part of the Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Phase II program, led by Burns and McDonnell Engineering Company, Inc. The goals of the SPIDERS program are to increase energy security in the event of power loss from a physical or cyber disruption, provide emergency power, and manage the grid more efficiently. In November 2012, SwRI was awarded a $7 million contract from the U.S. Army Corps of Engineers to demonstrate the integration of vehicle-to-grid technologies as a source for emergency power at Fort Carson, Colorado. In 2013, SwRI researchers tested five DC fast-charge stations at the army post. The system passed integration and acceptance testing in August 2013.

Delft University of Technology

Prof. Dr. Ad van Wijk, Vincent Oldenbroek and Dr. Carla Robledo, researchers at Delft University of Technology, conducted research in 2016 on V2G technology with hydrogen FCEVs. Both experimental work with V2G FCEVs and techno-economic scenario studies for 100% renewable integrated energy and transport systems were carried out, using only hydrogen and electricity as energy carriers. Together with Hyundai R&D, they modified a Hyundai ix35 FCEV so that it can deliver up to 10 kW of DC power while retaining its road-access permit. With the company Accenda b.v., they developed a V2G unit that converts the DC power of the FCEV into 3-phase AC power and injects it into the Dutch national electricity grid. The Future Energy Systems Group also recently tested whether their V2G FCEVs could offer frequency reserves. Based on the positive outcome of the tests, an MSc thesis was published examining the technical and economic feasibility of a hydrogen- and FCEV-based "Car Park as Power Plant" offering frequency reserves.

University of Delaware

Willett Kempton, Suresh Advani, and Ajay Prasad are researchers at the University of Delaware currently conducting research on V2G technology, with Dr. Kempton leading the project. Dr. Kempton has published a number of articles on the technology and the concept, many of which can be found on the V2G project page. The group is involved in researching the technology itself, as well as its performance when used on the grid. In addition to the technical research, the team has worked with Dr. Meryl Gardner, a marketing professor in the Alfred Lerner College of Business and Economics at the University of Delaware, to develop marketing strategies for both consumer and corporate fleet adoption. A 2006 Toyota Scion xB car was modified for testing in 2007.

In 2010, Kempton and Gregory Poilasne co-founded Nuvve, a V2G solutions company. The company has formed a number of industry partnerships and implemented V2G pilot projects on five continents worldwide.

Lawrence Berkeley National Laboratory

At Lawrence Berkeley National Laboratory, Dr. Samveg Saxena currently serves as the project lead for the Vehicle-to-Grid Simulator (V2G-Sim). V2G-Sim is a simulation platform used to model spatial and temporal driving and charging behavior of individual plug-in electric vehicles on the electric grid. Its models are used to investigate the challenges and opportunities of V2G services, such as modulation of charging time and charging rate for peak demand response and utility frequency regulation. V2G-Sim has also been used to research the potential of plug-in electric vehicles for renewable energy integration. Preliminary findings using V2G-Sim have shown that controlled V2G service can provide peak-shaving and valley-filling services to balance daily electric load and mitigate the duck curve. By contrast, uncontrolled vehicle charging was shown to exacerbate the duck curve. The study also found that even at 20 percent fade in capacity, EV batteries still met the needs of 85 percent of drivers.

In another research initiative at Lawrence Berkeley Lab using V2G-Sim, V2G services were shown to have minor battery degradation impacts on electric vehicles as compared to cycling losses and calendar aging. In this study, three electric vehicles with different daily driving itineraries were modelled over a ten-year time horizon, with and without V2G services. Assuming daily V2G service from 7PM to 9PM at a charging rate of 1.440 kW, the capacity losses of the electric vehicles due to V2G over ten years were 2.68%, 2.66%, and 2.62%.
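For scale, a back-of-the-envelope estimate from the figures quoted above gives the extra energy throughput implied by the assumed V2G schedule:

$$1.44\ \mathrm{kW} \times 2\ \mathrm{h/day} \times 365\ \mathrm{days/yr} \times 10\ \mathrm{yr} \approx 1.05\times10^{4}\ \mathrm{kWh},$$

while the capacity losses attributed to V2G in the study remained around 2.6–2.7%.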

Nissan and Enel

In May 2016, Nissan and Enel power company announced a collaborative V2G trial project in the United Kingdom, the first of its kind in the country. The trial comprises 100 V2G charging units to be used by Nissan Leaf and e-NV200 electric van users. The project claims electric vehicle owners will be able to sell stored energy back to the grid at a profit.

One notable V2G project in the United States is at the University of Delaware, where a V2G team headed by Dr. Willett Kempton has been conducting ongoing research. An early operational implementation in Europe was conducted via the German government-funded MeRegioMobil project at the "KIT Smart Energy Home" of Karlsruhe Institute of Technology, in cooperation with Opel as vehicle partner and the utility EnBW providing grid expertise. Their goals are to educate the public about the environmental and economic benefits of V2G and to enhance the product market. Other investigators are the Pacific Gas and Electric Company, Xcel Energy, the National Renewable Energy Laboratory, and, in the United Kingdom, the University of Warwick.

University of Warwick

WMG and Jaguar Land Rover collaborated with the Energy and Electrical Systems group of the university. Dr Kotub Uddin analysed lithium-ion batteries from commercially available EVs over a two-year period. He created a model of battery degradation and discovered that some patterns of vehicle-to-grid storage were able to significantly increase the longevity of the vehicle's battery compared with conventional charging strategies, while still permitting the vehicles to be driven normally.

Skepticism

There is some skepticism among experts about the feasibility of V2G and several studies have questioned the concept's economic rationale. For example, a 2015 study found that economic analyses favorable to V2G fail to include many of the less obvious costs associated with its implementation. When these less obvious costs are included, the study finds that V2G represents an economically inefficient solution.

The more a battery is used, the sooner it needs replacing, and replacement costs approximately one-third of the price of the electric car. Over their lifespan, batteries degrade progressively, with reduced capacity, cycle life, and safety due to chemical changes in the electrodes. Capacity loss (fade) is expressed as a percentage of initial capacity after a number of cycles (e.g., 30% loss after 1,000 cycles). Cycling loss is due to usage and depends on both the maximum state of charge and the depth of discharge. JB Straubel, CTO of Tesla Inc., discounts V2G because battery wear outweighs the economic benefit, and he prefers recycling over re-use for grid storage once batteries have reached the end of their useful life in a car. A 2017 study found decreasing capacity, while a 2012 hybrid-EV study found minor benefit.

Another common criticism relates to the overall efficiency of the process. Charging a battery system and returning that energy from the battery to the grid, which includes "inverting" the DC power back to AC, inevitably incurs some losses. This needs to be weighed against potential cost savings, along with increased emissions if the original source of power is fossil-based. This round-trip efficiency may be compared with the 70–80% efficiency of large-scale pumped-storage hydroelectricity, which is, however, limited by geography, water resources, and the environment.

Additionally, for V2G to work, it must be deployed on a large scale. Power companies must be willing to adopt the technology in order to allow vehicles to give power back to the power grid, and the aforementioned "smart meters" would have to be in place to measure the amount of power being transferred to the grid.

Vehicles

Several electric vehicles have been specially modified or designed to be compatible with V2G. The Hyundai ix35 FCEV from Delft University of Technology has been modified to provide a 10 kW DC V2G output. Two vehicles with theoretical V2G capability are the Nissan Leaf and Nissan e-NV200.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...