
Sunday, September 27, 2020

Collective intelligence

From Wikipedia, the free encyclopedia
 
Types of collective intelligence

Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making. The term appears in sociobiology, political science and in the context of mass peer review and crowdsourcing applications. It may involve consensus, social capital and formalisms such as voting systems, social media and other means of quantifying mass activity. Collective IQ is a measure of collective intelligence, although it is often used interchangeably with the term collective intelligence. Collective intelligence has also been attributed to bacteria and animals.

It can be understood as an emergent property from the synergies among: 1) data-information-knowledge; 2) software-hardware; and 3) experts (those with new insights as well as recognized authorities) that continually learns from feedback to produce just-in-time knowledge for better decisions than these three elements acting alone; or more narrowly as an emergent property between people and ways of processing information. This notion of collective intelligence is referred to as "symbiotic intelligence" by Norman Lee Johnson. The concept is used in sociology, business, computer science and mass communications; it also appears in science fiction. Pierre Lévy defines collective intelligence as follows: "It is a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills. I'll add the following indispensable characteristic to this definition: The basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities." According to researchers Pierre Lévy and Derrick de Kerckhove, it refers to the capacity of networked ICTs (information and communication technologies) to enhance the collective pool of social knowledge by simultaneously expanding the extent of human interactions.

Collective intelligence strongly contributes to the shift of knowledge and power from the individual to the collective. According to Eric S. Raymond (1998) and JC Herz (2005), open source intelligence will eventually generate superior outcomes to knowledge generated by proprietary software developed within corporations (Flew 2008). Media theorist Henry Jenkins sees collective intelligence as an 'alternative source of media power', related to convergence culture. He draws attention to education and the way people are learning to participate in knowledge cultures outside formal learning settings. Henry Jenkins criticizes schools which promote 'autonomous problem solvers and self-contained learners' while remaining hostile to learning through the means of collective intelligence. Both Pierre Lévy (2007) and Henry Jenkins (2008) support the claim that collective intelligence is important for democratization, as it is interlinked with knowledge-based culture and sustained by collective idea sharing, and thus contributes to a better understanding of diverse society.

Similar to the g factor (g) for general individual intelligence, a new scientific understanding of collective intelligence aims to extract a general collective intelligence factor c for groups, indicating a group's ability to perform a wide range of tasks.[9] Definition, operationalization and statistical methods are derived from g. Just as g is closely related to the concept of IQ, this measurement of collective intelligence can be interpreted as an intelligence quotient for groups (Group-IQ), even though the score is not a quotient per se. Causes for c and its predictive validity are investigated as well.

Collective intelligence has helped create widely known platforms such as Google and Wikipedia, as well as political groups. Google is a major search engine built on millions of websites created by people all around the world; it lets users share knowledge and creativity, collaborate, and expand thoughts and expressions. Google identifies five key dynamics within its teams that make for a well-collaborated system: psychological safety, dependability, structure and clarity, meaning of work, and impact of work. The idea behind this rediscovery of collective intelligence is to ensure that all workers can express themselves without fear of embarrassment. Google's teamwork, which draws on emotional and collective intelligence to ensure that discussions involve the whole team, is said to be one of the main reasons for its success. The system behind Google exemplifies combining the knowledge of the web with people, not just the knowledge of people with people.

Writers who have influenced the idea of collective intelligence include Francis Galton, Douglas Hofstadter (1979), Peter Russell (1983), Tom Atlee (1993), Pierre Lévy (1994), Howard Bloom (1995), Francis Heylighen (1995), Douglas Engelbart, Louis Rosenberg, Cliff Joslyn, Ron Dembo, Gottfried Mayer-Kress (2003).

History

H.G. Wells World Brain (1936–1938)

The concept (although not so named) originated in 1785 with the Marquis de Condorcet, whose "jury theorem" states that if each member of a voting group is more likely than not to make a correct decision, the probability that the majority vote of the group is the correct decision increases with the number of members of the group (see Condorcet's jury theorem). Many theorists have interpreted Aristotle's statement in the Politics that "a feast to which many contribute is better than a dinner provided out of a single purse" to mean that just as many may bring different dishes to the table, so in a deliberation many may contribute different pieces of information to generate a better decision. Recent scholarship, however, suggests that this was probably not what Aristotle meant but is a modern interpretation based on what we now know about team intelligence.
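As a concrete illustration of the jury theorem's arithmetic, the short Python sketch below (not from any cited source) computes the probability that a simple majority of an odd number of independent voters is correct, when each voter is right with probability p greater than one half; the probability climbs toward certainty as the group grows.

```python
from math import comb

def majority_correct_probability(n_voters: int, p_correct: float) -> float:
    """Probability that a simple majority of n independent voters is right,
    when each voter is independently correct with probability p_correct."""
    majority = n_voters // 2 + 1
    return sum(
        comb(n_voters, k) * p_correct**k * (1 - p_correct)**(n_voters - k)
        for k in range(majority, n_voters + 1)
    )

# With p > 0.5, the group beats any single voter and approaches certainty.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_probability(n, 0.6), 4))
```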

A precursor of the concept is found in entomologist William Morton Wheeler's observation that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism (1910). Wheeler saw this collaborative process at work in ants that acted like the cells of a single beast he called a superorganism.

In 1912 Émile Durkheim identified society as the sole source of human logical thought. He argued in "The Elementary Forms of Religious Life" that society constitutes a higher intelligence because it transcends the individual over space and time. Other antecedents are Vladimir Vernadsky and Pierre Teilhard de Chardin's concept of "noosphere" and H.G. Wells's concept of "world brain" (see also the term "global brain"). Peter Russell, Elisabet Sahtouris, and Barbara Marx Hubbard (originator of the term "conscious evolution") are inspired by the visions of a noosphere – a transcendent, rapidly evolving collective intelligence – an informational cortex of the planet. The notion has more recently been examined by the philosopher Pierre Lévy. In a 1962 research report, Douglas Engelbart linked collective intelligence to organizational effectiveness, and predicted that pro-actively 'augmenting human intellect' would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone". In 1994, he coined the term 'collective IQ' as a measure of collective intelligence, to focus attention on the opportunity to significantly raise collective IQ in business and society.

The idea of collective intelligence also forms the framework for contemporary democratic theories, often referred to as epistemic democracy. Epistemic democratic theories refer to the capacity of the populace, either through deliberation or aggregation of knowledge, to track the truth, and rely on mechanisms to synthesize and apply collective intelligence.

Collective intelligence was introduced into the machine learning community in the late 20th century, and matured into a broader consideration of how to design "collectives" of self-interested adaptive agents to meet a system-wide goal. This was related to single-agent work on "reward shaping" and has been taken forward by numerous researchers in the game theory and engineering communities.

Dimensions

Complex adaptive systems model

Howard Bloom has discussed mass behavior – collective behavior from the level of quarks to the level of bacterial, plant, animal, and human societies. He stresses the biological adaptations that have turned most of this earth's living beings into components of what he calls "a learning machine". In 1986 Bloom combined the concepts of apoptosis, parallel distributed processing, group selection, and the superorganism to produce a theory of how collective intelligence works. Later he showed how the collective intelligences of competing bacterial colonies and human societies can be explained in terms of computer-generated "complex adaptive systems" and the "genetic algorithms", concepts pioneered by John Holland.

Bloom traced the evolution of collective intelligence to our bacterial ancestors 1 billion years ago and demonstrated how a multi-species intelligence has worked since the beginning of life. Ant societies exhibit more intelligence, in terms of technology, than any other animal except for humans and co-operate in keeping livestock, for example aphids for "milking". Leaf cutters care for fungi and carry leaves to feed the fungi.

David Skrbina cites the concept of a 'group mind' as being derived from Plato's concept of panpsychism (that mind or consciousness is omnipresent and exists in all matter). He develops the concept of a 'group mind' as articulated by Thomas Hobbes in "Leviathan" and Fechner's arguments for a collective consciousness of mankind. He cites Durkheim as the most notable advocate of a "collective consciousness" and Teilhard de Chardin as a thinker who has developed the philosophical implications of the group mind.

Tom Atlee focuses primarily on humans and on work to upgrade what Howard Bloom calls "the group IQ". Atlee feels that collective intelligence can be encouraged "to overcome 'groupthink' and individual cognitive bias in order to allow a collective to cooperate on one process – while achieving enhanced intellectual performance." George Pór defined the collective intelligence phenomenon as "the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as differentiation and integration, competition and collaboration." Atlee and Pór state that "collective intelligence also involves achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action". Their approach is rooted in the scientific community metaphor.

The term group intelligence is sometimes used interchangeably with the term collective intelligence. Anita Woolley presents collective intelligence as a measure of group intelligence and group creativity. The idea is that a measure of collective intelligence covers a broad range of features of the group, mainly group composition and group interaction. The features of composition that lead to increased levels of collective intelligence in groups include criteria such as a higher number of women in the group as well as increased diversity of the group.

Atlee and Pór suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory and artificial intelligence have something to offer. Individuals who respect collective intelligence are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts. Maximizing collective intelligence relies on the ability of an organization to accept and develop "The Golden Suggestion", which is any potentially useful input from any member. Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.

Robert David Steele Vivas in The New Craft of Intelligence portrayed all citizens as "intelligence minutemen," drawing only on legal and ethical sources of information, able to create a "public intelligence" that keeps public officials and corporate managers honest, turning the concept of "national intelligence" (previously concerned about spies and secrecy) on its head.

Stigmergic Collaboration: a theoretical framework for mass collaboration

According to Don Tapscott and Anthony D. Williams, collective intelligence is mass collaboration. In order for this concept to happen, four principles need to exist:

Openness
Sharing ideas and intellectual property: though these resources provide an edge over competitors, more benefits accrue from allowing others to share ideas and gain significant improvement and scrutiny through collaboration.

Peering
Horizontal organization as with the 'opening up' of the Linux program where users are free to modify and develop it provided that they make it available for others. Peering succeeds because it encourages self-organization – a style of production that works more effectively than hierarchical management for certain tasks.

Sharing
Companies have started to share some ideas while maintaining some degree of control over others, like potential and critical patent rights. Limiting all intellectual property shuts out opportunities, while sharing some expands markets and brings out products faster.

Acting Globally
The advancement in communication technology has prompted the rise of global companies at low overhead costs. The internet is widespread, therefore a globally integrated company has no geographical boundaries and may access new markets, ideas and technology.

Collective intelligence factor c

Scree plot showing percent of explained variance for the first factors in Woolley et al.'s (2010) two original studies.

A new scientific understanding of collective intelligence defines it as a group's general ability to perform a wide range of tasks.[9] Definition, operationalization and statistical methods are similar to the psychometric approach to general individual intelligence, in which an individual's performance on a given set of cognitive tasks is used to measure general cognitive ability, indicated by the general intelligence factor g extracted via factor analysis. In the same vein as g serves to display between-individual performance differences on cognitive tasks, collective intelligence research aims to find a parallel intelligence factor for groups, the 'c factor' (also called the 'collective intelligence factor' (CI)), displaying between-group differences in task performance. The collective intelligence score is then used to predict how the same group will perform on any other similar task in the future. Here, tasks refer to mental or intellectual tasks performed by small groups, even though the concept is hoped to be transferable to other performances and to any groups or crowds, ranging from families to companies and even whole cities. Since individuals' g factor scores are highly correlated with full-scale IQ scores, which are in turn regarded as good estimates of g, this measurement of collective intelligence can also be seen as an intelligence indicator or quotient for a group (Group-IQ), parallel to an individual's intelligence quotient (IQ), even though the score is not a quotient per se.

Mathematically, c and g are both variables summarizing positive correlations among different tasks supposing that performance on one task is comparable with performance on other similar tasks. c thus is a source of variance among groups and can only be considered as a group's standing on the c factor compared to other groups in a given relevant population. The concept is in contrast to competing hypotheses including other correlational structures to explain group intelligence, such as a composition out of several equally important but independent factors as found in individual personality research.
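To make the factor-analytic idea concrete, here is a minimal Python sketch on synthetic data. It is not the original authors' analysis; the use of scikit-learn's FactorAnalysis and the simulated score matrix are assumptions made purely to illustrate how a single c factor, and its share of variance, might be extracted from a groups-by-tasks score matrix.

```python
# Minimal sketch (not the original studies' code): extracting a single
# "c factor" from a groups-by-tasks score matrix via factor analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_groups, n_tasks = 192, 6

# Synthetic data: one latent ability per group drives all task scores,
# plus task-specific noise -- the positive-manifold structure c assumes.
latent_c = rng.normal(size=(n_groups, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, n_tasks))
scores = latent_c @ loadings + 0.6 * rng.normal(size=(n_groups, n_tasks))

fa = FactorAnalysis(n_components=1)
c_scores = fa.fit_transform(scores)          # each group's standing on c

# Share of total variance captured by the first factor of the correlation
# matrix (compare the ~43% figure reported for the original studies).
eigvals = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))[::-1]
print("first-factor variance share:", round(eigvals[0] / eigvals.sum(), 2))
```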

Besides, this scientific approach also aims to explore the causes affecting collective intelligence, such as group size, collaboration tools or group members' interpersonal skills. The MIT Center for Collective Intelligence, for instance, announced the detection of the "genome" of collective intelligence as one of its main goals, aiming to develop a taxonomy of organizational building blocks, or genes, that can be combined and recombined to harness the intelligence of crowds.

Causes

Individual intelligence is shown to be genetically and environmentally influenced. Analogously, collective intelligence research aims to explore why certain groups perform more intelligently than others, given that c is only moderately correlated with the intelligence of individual group members. According to Woolley et al.'s results, neither team cohesion nor motivation nor satisfaction is correlated with c. However, three factors were found to be significant correlates: the variance in the number of speaking turns, group members' average social sensitivity, and the proportion of females. All three had similar predictive power for c, but only social sensitivity was statistically significant (b=0.33, P=0.05).

The number of speaking turns indicates that "groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking". Hence, giving multiple team members the chance to speak up made a group more intelligent.

Group members' social sensitivity was measured via the Reading the Mind in the Eyes Test (RME) and correlated .26 with c. In this test, participants are asked to detect the thinking or feeling expressed in other people's eyes, presented in pictures and assessed in a multiple-choice format. The test aims to measure people's theory of mind (ToM), also called 'mentalizing' or 'mind reading', which refers to the ability to attribute mental states, such as beliefs, desires or intents, to other people, and the extent to which people understand that others have beliefs, desires, intentions or perspectives different from their own. RME is a ToM test for adults that shows sufficient test-retest reliability and consistently differentiates control groups from individuals with functional autism or Asperger syndrome. It is one of the most widely accepted and well-validated tests of ToM in adults. ToM can be regarded as an associated subset of skills and abilities within the broader concept of emotional intelligence.

The proportion of females as a predictor of c was largely mediated by social sensitivity (Sobel z = 1.93, P = 0.03), which is in line with previous research showing that women score higher on social sensitivity tests. While a mediation, statistically speaking, clarifies the mechanism underlying the relationship between a dependent and an independent variable, Woolley agreed in an interview with the Harvard Business Review that these findings say that groups of women are smarter than groups of men. However, she qualified this, stating that what actually matters is the high social sensitivity of group members.
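For readers unfamiliar with the Sobel test mentioned above, the sketch below shows the standard Sobel formula for an indirect (mediated) effect. The coefficients and standard errors in the example call are placeholders, not the published estimates.

```python
from math import sqrt

def sobel_z(a: float, se_a: float, b: float, se_b: float) -> float:
    """Sobel test statistic for an indirect effect a*b, where
    a = effect of the predictor on the mediator (e.g. % female -> social sensitivity),
    b = effect of the mediator on the outcome  (e.g. social sensitivity -> c),
    with their respective standard errors."""
    return (a * b) / sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Illustrative placeholder coefficients (not the study's estimates):
print(round(sobel_z(a=0.45, se_a=0.15, b=0.33, se_b=0.12), 2))
```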

It is theorized that the collective intelligence factor c is an emergent property resulting from bottom-up as well as top-down processes. Hereby, bottom-up processes cover aggregated group-member characteristics. Top-down processes cover group structures and norms that influence a group's way of collaborating and coordinating.

Processes

Predictors of the collective intelligence factor c, as suggested by Woolley, Aggarwal & Malone (2015).

Top-down processes

Top-down processes cover group interaction, such as structures, processes, and norms. An example of such a top-down process is conversational turn-taking. Research further suggests that collectively intelligent groups communicate more in general as well as more equally; the same applies to participation and has been shown for face-to-face groups as well as for online groups communicating only in writing.

Bottom-up processes

Bottom-up processes include group composition, namely the characteristics of group members, which are aggregated to the team level. An example of such a bottom-up process is the average social sensitivity or the average and maximum intelligence scores of group members. Furthermore, collective intelligence was found to be related to a group's cognitive diversity, including thinking styles and perspectives. Groups that are moderately diverse in cognitive style have higher collective intelligence than those that are very similar or very different in cognitive style. Consequently, groups whose members are too similar to each other lack the variety of perspectives and skills needed to perform well, while groups whose members are too different seem to have difficulty communicating and coordinating effectively.

Serial vs Parallel processes

For most of human history, collective intelligence was confined to small tribal groups in which opinions were aggregated through real-time parallel interactions among members. In modern times, mass communication, mass media, and networking technologies have enabled collective intelligence to span massive groups distributed across continents and time zones. To accommodate this shift in scale, collective intelligence in large-scale groups has been dominated by serialized polling processes such as aggregating up-votes, likes, and ratings over time. In engineering, aggregating many engineering decisions allows for identifying typical good designs. While modern systems benefit from larger group size, the serialized process has been found to introduce substantial noise that distorts the collective output of the group. In one significant study of serialized collective intelligence, it was found that the first vote contributed to a serialized voting system can distort the final result by 34%.

To address the problems of serialized aggregation of input among large-scale groups, recent advancements in collective intelligence have worked to replace serialized votes, polls, and markets with parallel systems such as "human swarms" modeled after synchronous swarms in nature. Based on the natural process of swarm intelligence, these artificial swarms of networked humans enable participants to work together in parallel to answer questions and make predictions as an emergent collective intelligence. In one high-profile example, a human swarm was challenged by CBS Interactive to predict the Kentucky Derby. The swarm correctly predicted the first four horses, in order, defying 542–1 odds and turning a $20 bet into $10,800.

The value of parallel collective intelligence was demonstrated in medical applications by researchers at Stanford University School of Medicine and Unanimous AI in a set of published studies wherein groups of human doctors were connected by real-time swarming algorithms and tasked with diagnosing chest x-rays for the presence of pneumonia. When working together as "human swarms," the groups of experienced radiologists demonstrated a 33% reduction in diagnostic errors as compared to traditional methods.

Evidence

Standardized regression coefficients for the collective intelligence factor c and average (maximum) group member intelligence, regressed on the two criterion tasks, as found in Woolley et al.'s (2010) two original studies.

Woolley, Chabris, Pentland, Hashmi, & Malone (2010), the originators of this scientific understanding of collective intelligence, found a single statistical factor for collective intelligence in their research across 192 groups with people randomly recruited from the public. In Woolley et al.'s two initial studies, groups worked together on different tasks from the McGrath Task Circumplex, a well-established taxonomy of group tasks. Tasks were chosen from all four quadrants of the circumplex and included visual puzzles, brainstorming, making collective moral judgments, and negotiating over limited resources. The results on these tasks were used to conduct a factor analysis. Both studies showed support for a general collective intelligence factor c underlying differences in group performance, with an initial eigenvalue accounting for 43% (44% in study 2) of the variance, whereas the next factor accounted for only 18% (20%). That fits the range normally found in research on the general individual intelligence factor g, which typically accounts for 40% to 50% of between-individual performance differences on cognitive tests.

Afterwards, a more complex criterion task was completed by each group to measure whether the extracted c factor had predictive power for performance outside the original task batteries. Criterion tasks were playing checkers (draughts) against a standardized computer in the first study and a complex architectural design task in the second. In a regression analysis using both the individual intelligence of group members and c to predict performance on the criterion tasks, c had a significant effect, but average and maximum individual intelligence did not. While average (r=0.15, P=0.04) and maximum intelligence (r=0.19, P=0.008) of individual group members were moderately correlated with c, c was still a much better predictor of the criterion tasks. According to Woolley et al., this supports the existence of a collective intelligence factor c, because it demonstrates an effect over and above group members' individual intelligence and thus that c is more than just the aggregation of individual IQs or the influence of the group member with the highest IQ.
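The kind of regression described above can be illustrated with synthetic data. The sketch below is an assumption-laden illustration, not Woolley et al.'s analysis: it regresses a standardized criterion score on both c and average member intelligence and prints the standardized coefficients.

```python
# Illustrative sketch (synthetic data, not the published dataset): regress a
# criterion-task score on both c and average member intelligence.
import numpy as np

rng = np.random.default_rng(1)
n = 192
c = rng.normal(size=n)                       # collective intelligence factor
avg_iq = 0.15 * c + rng.normal(size=n)       # only weakly related to c, as reported
criterion = 0.5 * c + 0.05 * avg_iq + rng.normal(size=n)

# Standardize the variables, then fit ordinary least squares with an intercept.
z = lambda x: (x - x.mean()) / x.std()
X = np.column_stack([np.ones(n), z(c), z(avg_iq)])
beta, *_ = np.linalg.lstsq(X, z(criterion), rcond=None)
print("standardized betas  c: %.2f  avg IQ: %.2f" % (beta[1], beta[2]))
```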

Engel et al. (2014) replicated Woolley et al.'s findings using an accelerated battery of tasks, with the first factor in the factor analysis explaining 49% of the between-group variance in performance and the following factors explaining less than half of this amount. Moreover, they found a similar result for groups working together online and communicating only via text, and confirmed the role of female proportion and social sensitivity in causing collective intelligence in both cases. Like Woolley et al., they measured social sensitivity with the RME, which is actually meant to measure people's ability to detect mental states in other people's eyes. The online collaborating participants, however, neither knew nor saw each other at all. The authors conclude that scores on the RME must be related to a broader set of social reasoning abilities than only drawing inferences from other people's eye expressions.

A collective intelligence factor c in the sense of Woolley et al. was further found in groups of MBA students working together over the course of a semester, in online gaming groups as well as in groups from different cultures and groups in different contexts in terms of short-term versus long-term groups. None of these investigations considered team members' individual intelligence scores as control variables.

Note as well that the field of collective intelligence research is quite young and published empirical evidence is still relatively rare. However, various proposals and working papers are in progress or already completed but (supposedly) still in scholarly peer review.

Predictive validity

Next to predicting a group's performance on more complex criterion tasks, as shown in the original experiments, the collective intelligence factor c was also found to predict group performance in diverse tasks in MBA classes lasting over several months. Highly collectively intelligent groups earned significantly higher scores on their group assignments, although their members did no better on other, individually performed assignments. Moreover, highly collectively intelligent teams improved performance over time, suggesting that more collectively intelligent teams learn better. This is another potential parallel to individual intelligence, where more intelligent people are found to acquire new material more quickly.

Individual intelligence can be used to predict plenty of life outcomes from school attainment and career success to health outcomes and even mortality. Whether collective intelligence is able to predict other outcomes besides group performance on mental tasks has still to be investigated.

Potential connections to individual intelligence

Gladwell (2008) showed that the relationship between individual IQ and success works only to a certain point and that additional IQ points over an estimate of IQ 120 do not translate into real-life advantages. Whether a similar threshold exists for Group-IQ, or whether the advantages are linear and unbounded, has yet to be explored. Similarly, there is demand for further research on possible connections between individual and collective intelligence within plenty of other potentially transferable logics of individual intelligence, such as development over time or the question of improving intelligence. Whereas it is controversial whether human intelligence can be enhanced via training, a group's collective intelligence potentially offers simpler opportunities for improvement by exchanging team members or implementing structures and technologies. Moreover, social sensitivity was found to be, at least temporarily, improvable by reading literary fiction as well as watching drama movies. To what extent such training ultimately improves collective intelligence through social sensitivity remains an open question.

There are further, more advanced concepts and factor models attempting to explain individual cognitive ability, including the categorization of intelligence into fluid and crystallized intelligence[97][98] or the hierarchical model of intelligence differences.[99][100] Supplementary explanations and conceptualizations of the factor structure of the genome of collective intelligence beyond a general c factor are, however, still missing.[101]

Controversies

Other scholars explain team performance by aggregating team members' general intelligence to the team level instead of building a separate overall collective intelligence measure. Devine and Philips (2001) showed in a meta-analysis that mean cognitive ability predicts team performance in laboratory settings (.37) as well as field settings (.14) – note that this is only a small effect. Suggesting a strong dependence on the relevant tasks, other scholars showed that tasks requiring a high degree of communication and cooperation are most influenced by the team member with the lowest cognitive ability.

Tasks in which selecting the best team member is the most successful strategy are shown to be most influenced by the member with the highest cognitive ability.

Since Woolley et al.'s results do not show any influence of group satisfaction, group cohesiveness, or motivation, they at least implicitly challenge the importance of these concepts for group performance in general, and thus contrast with meta-analytically proven evidence concerning the positive effects of group cohesion, motivation and satisfaction on group performance.

It is also noteworthy that the researchers behind the confirming findings overlap considerably with each other and with the authors of the original study around Anita Woolley.

Alternative mathematical techniques

Computational collective intelligence

Computational Collective Intelligence, by Tadeusz Szuba

In 2001, Tadeusz (Tad) Szuba from the AGH University in Poland proposed a formal model for the phenomenon of collective intelligence. It is assumed to be an unconscious, random, parallel, and distributed computational process, run in mathematical logic by the social structure.

In this model, beings and information are modeled as abstract information molecules carrying expressions of mathematical logic. They are quasi-randomly displaced through their interactions with their environments and their intended displacements. Their interaction in abstract computational space creates a multi-thread inference process which we perceive as collective intelligence. Thus, a non-Turing model of computation is used. This theory allows a simple formal definition of collective intelligence as a property of social structure, and it seems to work well for a wide spectrum of beings, from bacterial colonies up to human social structures. Collective intelligence, considered as a specific computational process, provides a straightforward explanation of several social phenomena. For this model of collective intelligence, the formal definition of IQS (IQ Social) was proposed, defined as "the probability function over the time and domain of N-element inferences which are reflecting inference activity of the social structure". While IQS seems to be computationally hard, modeling the social structure in terms of a computational process as described above gives a chance for approximation. Prospective applications are the optimization of companies through the maximization of their IQS, and the analysis of drug resistance against the collective intelligence of bacterial colonies.

Collective intelligence quotient

One measure sometimes applied, especially by more artificial-intelligence-focused theorists, is a "collective intelligence quotient" (or "cooperation quotient"), which can be normalized from the "individual" intelligence quotient (IQ), making it possible to determine the marginal intelligence added by each new individual participating in the collective action and thus to use metrics to avoid the hazards of groupthink and stupidity.
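One possible, purely hypothetical way to operationalize "the marginal intelligence added by each new individual" is as the change in some normalized group score when a member joins. The scoring function in the sketch below is invented for illustration and is not a standard metric.

```python
# Hedged sketch: marginal contribution of a newcomer as the change in a
# normalized group score. The scoring function is a stand-in, not a standard
# collective-IQ measure.
from typing import Callable, Sequence

def marginal_contribution(group: Sequence[str],
                          newcomer: str,
                          group_score: Callable[[Sequence[str]], float]) -> float:
    return group_score(list(group) + [newcomer]) - group_score(list(group))

# Toy scoring function: score grows with group size with diminishing returns.
toy_score = lambda members: len(members) ** 0.5
print(round(marginal_contribution(["a", "b", "c"], "d", toy_score), 3))
```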

Applications

Elicitation of point estimates

Here, the goal is to get an estimate (a single value) of something: for example, the weight of an object, the release date of a product, or the probability of success of a project, as seen in prediction markets like Intrade, HSX or InklingMarkets and also in several implementations of crowdsourced estimation of a numeric outcome. Essentially, we try to get the average value of the estimates provided by the members of the crowd.
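A minimal sketch of this kind of aggregation, assuming simple numeric estimates: the mean is the textbook choice, while the median is often preferred in practice because it resists extreme guesses.

```python
# Minimal sketch: aggregating crowd point estimates of a single quantity.
from statistics import mean, median

estimates = [520, 480, 510, 495, 1500, 505]   # e.g. guesses of an object's weight
print("mean:  ", round(mean(estimates), 1))   # pulled up by the outlier
print("median:", median(estimates))           # robust to the outlier
```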

Opinion aggregation

In this situation, opinions are gathered from the crowd regarding an idea, issue or product. For example, trying to get a rating (on some scale) of a product sold online (such as Amazon's star rating system). Here, the emphasis is to collect and simply aggregate the ratings provided by customers/users.

Idea Collection

In these problems, someone solicits ideas for projects, designs or solutions from the crowd. For example, ideas on solving a data science problem (as in Kaggle) or getting a good design for a T-shirt (as in Threadless) or in getting answers to simple problems that only humans can do well (as in Amazon's Mechanical Turk). The objective is to gather the ideas and devise some selection criteria to choose the best ideas.

James Surowiecki divides the advantages of disorganized decision-making into three main categories, which are cognition, cooperation and coordination.

Cognition

Market judgment

Because of the Internet's ability to rapidly convey large amounts of information throughout the world, the use of collective intelligence to predict stock prices and stock price direction has become increasingly viable. Websites aggregate stock market information that is as current as possible so professional or amateur stock analysts can publish their viewpoints, enabling amateur investors to submit their financial opinions and create an aggregate opinion. The opinions of all investors can be weighted equally so that a pivotal premise of the effective application of collective intelligence can be applied: the masses, including a broad spectrum of stock market expertise, can be utilized to more accurately predict the behavior of financial markets.

Collective intelligence underpins the efficient-market hypothesis of Eugene Fama – although the term collective intelligence is not used explicitly in his paper. Fama cites research conducted by Michael Jensen in which 89 out of 115 selected funds underperformed relative to the index during the period from 1955 to 1964. After removing the loading charge (up-front fee), only 72 underperformed, and after removing brokerage costs, only 58 underperformed. On the basis of such evidence, index funds became popular investment vehicles, using the collective intelligence of the market, rather than the judgement of professional fund managers, as an investment strategy.

Predictions in politics and technology

Voting methods used in the United States 2016

Political parties mobilize large numbers of people to form policy, select candidates and finance and run election campaigns. Knowledge focusing through various voting methods allows perspectives to converge through the assumption that uninformed voting is to some degree random and can be filtered from the decision process leaving only a residue of informed consensus. Critics point out that often bad ideas, misunderstandings, and misconceptions are widely held, and that structuring of the decision process must favor experts who are presumably less prone to random or misinformed voting in a given context.

Companies such as Affinnova (acquired by Nielsen), Google, InnoCentive, Marketocracy, and Threadless have successfully employed the concept of collective intelligence in bringing about the next generation of technological changes through their research and development (R&D), customer service, and knowledge management. An example of such application is Google's Project Aristotle in 2012, where the effect of collective intelligence on team makeup was examined in hundreds of the company's R&D teams.

Cooperation

Networks of trust

Application of collective intelligence in the Millennium Project

In 2012, the Global Futures Collective Intelligence System (GFIS) was created by The Millennium Project, which epitomizes collective intelligence as the synergistic intersection among data/information/knowledge, software/hardware, and expertise/insights that has a recursive learning process for better decision-making than the individual players alone.

New media are often associated with the promotion and enhancement of collective intelligence. The ability of new media to easily store and retrieve information, predominantly through databases and the Internet, allows it to be shared without difficulty. Thus, through interaction with new media, knowledge easily passes between sources (Flew 2008), resulting in a form of collective intelligence. The use of interactive new media, particularly the internet, promotes online interaction and the distribution of knowledge between users.

Francis Heylighen, Valentin Turchin, and Gottfried Mayer-Kress are among those who view collective intelligence through the lens of computer science and cybernetics. In their view, the Internet enables collective intelligence at the widest, planetary scale, thus facilitating the emergence of a global brain.

The developer of the World Wide Web, Tim Berners-Lee, aimed to promote sharing and publishing of information globally. Later his employer opened up the technology for free use. In the early 1990s, the Internet's potential was still untapped, until the mid-1990s, when 'critical mass', as termed by the head of the Advanced Research Projects Agency (ARPA), Dr. J.C.R. Licklider, demanded more accessibility and utility. The driving force of this Internet-based collective intelligence is the digitization of information and communication. Henry Jenkins, a key theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture (Flew 2008). He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating that "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and that teamwork through collective intelligence communities contributes to the development of such skills. Collective intelligence is not merely a quantitative contribution of information from all cultures; it is also qualitative.

Lévy and de Kerckhove consider CI from a mass communications perspective, focusing on the ability of networked information and communication technologies to enhance the community knowledge pool. They suggest that these communications tools enable humans to interact and to share and collaborate with both ease and speed (Flew 2008). With the development of the Internet and its widespread use, the opportunity to contribute to knowledge-building communities, such as Wikipedia, is greater than ever before. These computer networks give participating users the opportunity to store and retrieve knowledge through collective access to these databases and allow them to "harness the hive".[130] Researchers at the MIT Center for Collective Intelligence study and explore the collective intelligence of groups of people and computers.

In this context collective intelligence is often confused with shared knowledge. The former is the sum total of information held individually by members of a community, while the latter is information that is believed to be true and known by all members of the community. Collective intelligence, as represented by Web 2.0, has less user engagement than collaborative intelligence. An art project using Web 2.0 platforms is "Shared Galaxy", an experiment developed by an anonymous artist to create a collective identity that shows up as one person on several platforms like MySpace, Facebook, YouTube and Second Life. The password is written in the profiles and the accounts named "Shared Galaxy" are open to be used by anyone. In this way many take part in being one. Another art project using collective intelligence to produce artistic work is Curatron, where a large group of artists together decides on a smaller group that they think would make a good collaborative group. The process is based on an algorithm computing the collective preferences. In creating what he calls 'CI-Art', Nova Scotia-based artist Mathew Aldred follows Pierre Lévy's definition of collective intelligence. Aldred's CI-Art event in March 2016 involved over four hundred people from the community of Oxford, Nova Scotia, and internationally. Later work developed by Aldred used the UNU swarm intelligence system to create digital drawings and paintings. The Oxford Riverside Gallery (Nova Scotia) held a public CI-Art event in May 2016, which connected with online participants internationally.

Parenting social network and collaborative tagging as pillars for automatic IPTV content blocking system

In social bookmarking (also called collaborative tagging), users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. The resulting information structure can be seen as reflecting the collective knowledge (or collective intelligence) of a community of users and is commonly called a "Folksonomy", and the process can be captured by models of collaborative tagging.

Recent research using data from the social bookmarking website Delicious has shown that collaborative tagging systems exhibit a form of complex-systems (or self-organizing) dynamics. Although there is no central controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power-law distributions. Once such stable distributions form, the correlations between different tags can be used to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabularies. Such vocabularies can be seen as a form of collective intelligence, emerging from the decentralised actions of a community of users. The Wall-it Project is also an example of social bookmarking.
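A rough sketch of the pipeline just described, using made-up bookmark data rather than the Delicious dataset: count how often each tag is used (in real systems these counts tend toward a power law) and build a simple tag co-occurrence graph from which shared vocabularies could be partitioned.

```python
# Hedged sketch: tag frequencies and a tag co-occurrence graph from
# collaboratively tagged bookmarks (toy data, not a published dataset).
from collections import Counter, defaultdict
from itertools import combinations

bookmarks = [                     # each bookmark carries a small set of tags
    {"python", "programming"},
    {"python", "tutorial"},
    {"photography", "camera"},
    {"camera", "review"},
]

tag_counts = Counter(tag for tags in bookmarks for tag in tags)

cooccurrence = defaultdict(int)   # edge weights of a simple folksonomy graph
for tags in bookmarks:
    for a, b in combinations(sorted(tags), 2):
        cooccurrence[(a, b)] += 1

print(tag_counts.most_common(3))
print(dict(cooccurrence))
```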

P2P business

Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:

Talent utilization
At the rate technology is changing, no firm can fully keep up in the innovations needed to compete. Instead, smart firms are drawing on the power of mass collaboration to involve participation of the people they could not employ. This also helps generate continual interest in the firm in the form of those drawn to new idea creation as well as investment opportunities.
Demand creation
Firms can create a new market for complementary goods by engaging in open source community. Firms also are able to expand into new fields that they previously would not have been able to without the addition of resources and collaboration from the community. This creates, as mentioned before, a new market for complementary goods for the products in said new fields.
Costs reduction
Mass collaboration can help to reduce costs dramatically. Firms can release specific software or products to be evaluated or debugged by online communities. The results are more personalized, robust and error-free products, created in a short amount of time and at lower cost. New ideas can also be generated and explored through collaboration with online communities, creating opportunities for free R&D outside the confines of the company.

Open source software

Cultural theorist and online community developer John Banks considered the contribution of online fan communities in the creation of the Trainz product. He argued that its commercial success was fundamentally dependent upon "the formation and growth of an active and vibrant online fan community that would both actively promote the product and create content – extensions and additions to the game software".

The increase in user created content and interactivity gives rise to issues of control over the game itself and ownership of the player-created content. This gives rise to fundamental legal issues, highlighted by Lessig and Bray and Konsynski, such as intellectual property and property ownership rights.

Gosney extends this issue of collective intelligence in videogames one step further in his discussion of alternate reality gaming. He describes this genre as an "across-media game that deliberately blurs the line between the in-game and out-of-game experiences", as events that happen outside the game reality "reach out" into the players' lives in order to bring them together. Solving the game requires "the collective and collaborative efforts of multiple players"; thus the issue of collective and collaborative team play is essential to ARGs. Gosney argues that the alternate reality genre of gaming dictates an unprecedented level of collaboration and "collective intelligence" in order to solve the mystery of the game.

Benefits of co-operation

Co-operation helps to solve the most important and most interesting cross-disciplinary problems. In his book, James Surowiecki mentioned that most scientists think the benefits of co-operation have much more value than the potential costs. Co-operation also works because, at best, it guarantees a number of different viewpoints. Because of the possibilities of technology, global co-operation is nowadays much easier and more productive than before. It is clear that, when co-operation moves from the university level to the global level, it has significant benefits.

For example, why do scientists co-operate? Science has become more and more specialized, each field has expanded, and it is impossible for one person to be aware of all developments. This is especially true in experimental research, where highly advanced equipment requires special skills. With co-operation, scientists can use information from different fields and use it effectively instead of gathering all the information just by reading by themselves.

Coordination

Ad-hoc communities

Military, trade unions, and corporations satisfy some definitions of CI – the most rigorous definition would require a capacity to respond to very arbitrary conditions without orders or guidance from "law" or "customers" to constrain actions. Online advertising companies are using collective intelligence to bypass traditional marketing and creative agencies.

The UNU open platform for "human swarming" (or "social swarming") establishes real-time closed-loop systems around groups of networked users modeled after biological swarms, enabling human participants to behave as a unified collective intelligence. When connected to UNU, groups of distributed users collectively answer questions and make predictions in real time. Early testing shows that human swarms can out-predict individuals. In 2016, a UNU swarm was challenged by a reporter to predict the winners of the Kentucky Derby, and successfully picked the first four horses, in order, beating 540-to-1 odds.

Specialized information sites such as Digital Photography Review or Camera Labs are examples of collective intelligence. Anyone who has access to the internet can contribute to distributing their knowledge over the world through such specialized information sites.

In learner-generated context a group of users marshal resources to create an ecology that meets their needs often (but not only) in relation to the co-configuration, co-creation and co-design of a particular learning space that allows learners to create their own context. Learner-generated contexts represent an ad hoc community that facilitates coordination of collective action in a network of trust. An example of learner-generated context is found on the Internet when collaborative users pool knowledge in a "shared intelligence space". As the Internet has developed so has the concept of CI as a shared public forum. The global accessibility and availability of the Internet has allowed more people than ever to contribute and access ideas. (Flew 2008)

Games such as The Sims series and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of the current and future generations. For them, collective intelligence has become a norm. In Terry Flew's discussion of 'interactivity' in the online games environment, the ongoing interactive dialogue between users and game developers, he refers to Pierre Lévy's concept of collective intelligence (Lévy 1998) and argues that it is active in videogames as clans or guilds in MMORPGs constantly work to achieve goals. Henry Jenkins proposes that the participatory cultures emerging between games producers, media companies, and the end-users mark a fundamental shift in the nature of media production and consumption. Jenkins argues that this new participatory culture arises at the intersection of three broad new media trends: firstly, the development of new media tools and technologies enabling the creation of content; secondly, the rise of subcultures promoting such creations; and lastly, the growth of value-adding media conglomerates, which foster image, idea and narrative flow.

Coordinating collective actions

The cast of After School Improv learns an important lesson about improvisation and life

Improvisational actors also experience a type of collective intelligence which they term "group mind", as theatrical improvisation relies on mutual cooperation and agreement, leading to the unity of "group mind".

Growth of the Internet and mobile telecom has also produced "swarming" or "rendezvous" events that enable meetings or even dates on demand. The full impact has yet to be felt, but the anti-globalization movement, for example, relies heavily on e-mail, cell phones, pagers, SMS and other means of organizing. The Indymedia organization does this in a more journalistic way. Such resources could combine into a form of collective intelligence accountable only to the current participants yet with some strong moral or linguistic guidance from generations of contributors – or even take on a more obviously democratic form to advance a shared goal.

A further application of collective intelligence is found in "Community Engineering for Innovations". In such an integrated framework, proposed by Ebner et al., idea competitions and virtual communities are combined to better realize the potential of the collective intelligence of the participants, particularly in open-source R&D. In management theory the use of collective intelligence and crowdsourcing leads to innovations and very robust answers to quantitative issues. Collective intelligence and crowdsourcing therefore do not necessarily lead to the best solution to economic problems, but to a stable, good solution.

Coordination in different types of tasks

Collective actions or tasks require different amounts of coordination depending on the complexity of the task. Tasks vary from highly independent simple tasks that require very little coordination to complex interdependent tasks that are built by many individuals and require a lot of coordination. In the article written by Kittur, Lee and Kraut the writers introduce a problem in cooperation: "When tasks require high coordination because the work is highly interdependent, having more contributors can increase process losses, reducing the effectiveness of the group below what individual members could optimally accomplish". When a team is too large, the overall effectiveness may suffer even when the extra contributors increase the resources. In the end the overall costs from coordination might overwhelm other costs.

Group collective intelligence is a property that emerges through coordination from both bottom-up and top-down processes. In a bottom-up process the different characteristics of each member are involved in contributing and enhancing coordination. Top-down processes are more strict and fixed with norms, group structures and routines that in their own way enhance the group's collective work.

Alternative views

A tool for combating self-preservation

Tom Atlee reflects that, although humans have an innate ability to gather and analyze data, they are affected by culture, education and social institutions. A single person tends to make decisions motivated by self-preservation. Therefore, without collective intelligence, humans may drive themselves into extinction based on their selfish needs.

Separation from IQism

Phillip Brown and Hugh Lauder quote Bowles and Gintis (1976) in arguing that, in order to truly define collective intelligence, it is crucial to separate 'intelligence' from IQism. They go on to argue that intelligence is an achievement and can only be developed if allowed to be. For example, groups from the lower levels of society were historically severely restricted from aggregating and pooling their intelligence, because the elites feared that collective intelligence would convince the people to rebel. Without such capacity and relations, there is no infrastructure on which collective intelligence can be built. This reflects how powerful collective intelligence can be if left to develop.

Artificial intelligence views

Skeptics, especially those critical of artificial intelligence and more inclined to believe that risk of bodily harm and bodily action are the basis of all unity between people, are more likely to emphasize the capacity of a group to take action and withstand harm as one fluid mass mobilization, shrugging off harms the way a body shrugs off the loss of a few cells. This strain of thought is most obvious in the anti-globalization movement and characterized by the works of John Zerzan, Carol Moore, and Starhawk, who typically shun academics. These theorists are more likely to refer to ecological and collective wisdom and to the role of consensus process in making ontological distinctions than to any form of "intelligence" as such, which they often argue does not exist, or is mere "cleverness".

Harsh critics of artificial intelligence on ethical grounds are likely to promote collective wisdom-building methods, such as the new tribalists and the Gaians. Whether these can be said to be collective intelligence systems is an open question. Some, e.g. Bill Joy, simply wish to avoid any form of autonomous artificial intelligence and seem willing to work on rigorous collective intelligence in order to remove any possible niche for AI.

In contrast to these views, platforms such as Amazon Mechanical Turk and CrowdFlower use collective intelligence and crowdsourcing or consensus-based assessment to collect the enormous amounts of data needed for machine learning algorithms.

Solving climate change

Global collective intelligence is seen as the key in solving the challenges humankind faces now and in the future. Climate change is an example of a global issue which collective intelligence is currently trying to tackle. With the help of collective intelligence applications such as online crowdsourcing, people across the globe are collaborating in developing solutions to climate change.

Web 2.0

From Wikipedia, the free encyclopedia
 
A tag cloud (a typical Web 2.0 phenomenon in itself) presenting Web 2.0 themes

Web 2.0 (also known as Participative (or Participatory) and Social Web) refers to websites that emphasize user-generated content, ease of use, participatory culture and interoperability (i.e., compatible with other products, systems, and devices) for end users.

The term was invented by Darcy DiNucci in 1999 and later popularized by Tim O'Reilly and Dale Dougherty at the O'Reilly Media Web 2.0 Conference in late 2004. The Web 2.0 framework specifies only the design and use of websites and does not place any technical demands or specifications on designers. The transition was gradual and, therefore, no precise date for when this change happened has been given.

A Web 2.0 website allows users to interact and collaborate with each other through social media dialogue as creators of user-generated content in a virtual community. This contrasts the first generation of Web 1.0-era websites where people were limited to viewing content in a passive manner. Examples of Web 2.0 features include social networking sites or social media sites (e.g., Facebook), blogs, wikis, folksonomies ("tagging" keywords on websites and links), video sharing sites (e.g., YouTube), image sharing sites (e.g., Flickr), hosted services, Web applications ("apps"), collaborative consumption platforms, and mashup applications.

Whether Web 2.0 is substantially different from prior Web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who describes the term as jargon. His original vision of the Web was "a collaborative medium, a place where we [could] all meet and read and write." On the other hand, the term Semantic Web (sometimes referred to as Web 3.0) was coined by Berners-Lee to refer to a web of content where the meaning can be processed by machines.

History

Web 1.0

Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1991 to 2004. According to Cormode and Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content." Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities. With Web 2.0, it became common for average web users to have social-networking profiles (on sites such as Myspace and Facebook) and personal blogs (sites like Blogger, Tumblr and LiveJournal) through either a low-cost web hosting service or through a dedicated host. In general, content was generated dynamically, allowing readers to comment directly on pages in a way that was not common previously.

Some Web 2.0 capabilities were present in the days of Web 1.0, but were implemented differently. For example, a Web 1.0 site may have had a guestbook page for visitor comments, instead of a comment section at the end of each page (typical of Web 2.0). During Web 1.0, server performance and bandwidth had to be considered—lengthy comment threads on multiple pages could potentially slow down an entire site. Terry Flew, in his third edition of New Media, described the differences between Web 1.0 and Web 2.0 as a

"move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on "tagging" website content using keywords (folksonomy)."

Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze".

Characteristics

Some common design elements of a Web 1.0 site include:

  • Static pages rather than dynamically generated content
  • Content served from the server's file system rather than a database
  • The use of frames and HTML tables to position and align elements on a page
  • Online guestbooks rather than per-page comment sections
  • HTML forms sent via email

Web 2.0

The term "Web 2.0" was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article "Fragmented Future":

The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven.

Writing when Palm Inc. introduced its first web-capable personal digital assistant (supporting Web access with WAP), DiNucci saw the Web "fragmenting" into a future that extended beyond the browser/PC combination it was identified with. She focused on how the basic information structure and hyper-linking mechanism introduced by HTTP would be used by a variety of devices and platforms. As such, her "2.0" designation refers to a next version of the Web and does not directly relate to the term's current use.

The term Web 2.0 did not resurface until 2002. Kinsley and Eric focus on the concepts currently associated with the term where, as Scott Dietzen puts it, "the Web becomes a universal, standards-based integration platform". In 2004, the term began to gain popularity when O'Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you". They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. O'Reilly and Battelle contrasted Web 2.0 with what they called "Web 1.0". They associated this term with the business models of Netscape and the Encyclopædia Britannica Online. For example,

Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the "horseless carriage" framed the automobile as an extension of the familiar, Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.

In short, Netscape focused on creating software, releasing updates and bug fixes, and distributing it to the end users. O'Reilly contrasted this with Google, a company that did not, at the time, focus on producing end-user software, but instead on providing a service based on data, such as the links that Web page authors make between sites. Google exploits this user-generated content to offer Web searches based on reputation through its "PageRank" algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called "the perpetual beta". A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia – while the Britannica relies upon experts to write articles and release them periodically in publications, Wikipedia relies on trust in (sometimes anonymous) community members to constantly write and edit content. Wikipedia editors are not required to have educational credentials, such as degrees, in the subjects in which they are editing. Wikipedia is not based on subject-matter expertise, but rather on an adaptation of the open source software adage "given enough eyeballs, all bugs are shallow". This maxim states that if enough users are able to look at a software product's code (or a website), then these users will be able to fix any "bugs" or other problems. The Wikipedia volunteer editor community produces, edits, and updates articles constantly. O'Reilly's Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, representatives from large companies, tech experts and technology reporters.

The popularity of Web 2.0 was acknowledged when TIME magazine chose "You" as its 2006 Person of the Year. That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites.

In the cover story, Lev Grossman explains:

It's a story about community and collaboration on a scale never seen before. It's about the cosmic compendium of knowledge Wikipedia and the million-channel people's network YouTube and the online metropolis MySpace. It's about the many wresting power from the few and helping one another for nothing and how that will not only change the world but also change the way the world changes.

Characteristics

Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site's content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By increasing emphasis on these already-extant capabilities, Web 2.0 sites encourage users to rely more on their browser for the user interface, application software ("apps") and file storage facilities. This has been called "network as platform" computing. Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress' easy-to-use blog and website creation tools), "tagging" (which enables users to label websites, videos or photos in some fashion), "like" buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking.

Users can provide the data and exercise some control over what they share on a Web 2.0 site. These sites may have an "architecture of participation" that encourages users to add value to the application as they use it. Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects. Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet.

Edit box interface through which anyone could edit a Wikipedia article.

Web 2.0 offers almost all users the same freedom to contribute. While this opens the possibility for serious debate and collaboration, it also increases the incidence of "spamming", "trolling", and can even create a venue for racist hate speech, cyberbullying, and defamation. The impossibility of excluding group members who do not contribute to the provision of goods (i.e., to the creation of a user-generated website) from sharing the benefits (of using the website) gives rise to the possibility that serious members will prefer to withhold their contribution of effort and "free ride" on the contributions of others. This requires what is sometimes called radical trust by the management of the Web site.

According to Best, the characteristics of Web 2.0 are rich user experience, user participation, dynamic content, metadata, Web standards, and scalability. Further characteristics, such as openness, freedom, and collective intelligence by way of user participation, can also be viewed as essential attributes of Web 2.0. Some websites require users to contribute user-generated content to have access to the website, to discourage "free riding".

A list of ways that people can volunteer to improve Mass Effect Wiki, an example of content generated by users working collaboratively.

The key features of Web 2.0 include:

  1. Folksonomy – free classification of information; allows users to collectively classify and find information (e.g. "tagging" of websites, images, videos or links)
  2. Rich user experience – dynamic content that is responsive to user input (e.g., a user can "click" on an image to enlarge it or find out more information)
  3. User participation – information flows two ways between the site owner and site users by means of evaluation, review, and online commenting. Site users also typically create user-generated content for others to see (e.g., Wikipedia, an online encyclopedia that anyone can write articles for or edit)
  4. Software as a service (SaaS) – Web 2.0 sites developed APIs to allow automated usage, such as by a Web "app" (software application) or a mashup
  5. Mass participation – near-universal web access leads to differentiation of concerns, from the traditional Internet user base (who tended to be hackers and computer hobbyists) to a wider variety of users

Technologies

The client-side (Web browser) technologies used in Web 2.0 development include Ajax and JavaScript frameworks. Ajax programming uses JavaScript and the Document Object Model (DOM) to update selected regions of the page area without undergoing a full page reload. To allow users to continue interacting with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously).

Otherwise, the user would have to routinely wait for the data to come back before they could do anything else on that page, just as a user has to wait for a page to finish reloading. This also increases the overall performance of the site, as the sending of requests can complete more quickly, independently of the blocking and queueing required to send data back to the client. The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation) format, two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their Web application.

When this data is received via Ajax, the JavaScript program then uses the Document Object Model to dynamically update the Web page based on the new data, allowing for rapid and interactive user experience. In short, using these techniques, web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor.
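
As an illustration of the technique described above, the following TypeScript sketch fetches JSON from a server asynchronously and uses the DOM to update only one region of the page, without a full reload. The endpoint /api/latest-comments, the BlogComment shape and the element id "comments" are hypothetical placeholders, not part of any particular site.

    // A minimal sketch of the Ajax pattern: asynchronous request, JSON payload,
    // DOM update of a single page region. All names here are illustrative.
    interface BlogComment {
      author: string;
      text: string;
    }

    async function refreshComments(): Promise<void> {
      // The request is sent asynchronously; the rest of the page stays interactive.
      const response = await fetch("/api/latest-comments");
      const comments: BlogComment[] = await response.json(); // JSON is parsed natively

      // Update only the comment region through the DOM, not the whole page.
      const container = document.getElementById("comments");
      if (container !== null) {
        container.innerHTML = comments
          .map(c => `<p><strong>${c.author}</strong>: ${c.text}</p>`)
          .join("");
      }
    }

    // Re-fetch every 30 seconds without a full page reload.
    setInterval(refreshComments, 30_000);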

As a widely available plug-in independent of W3C standards (the World Wide Web Consortium is the governing body of Web standards and protocols), Adobe Flash is capable of doing many things that were not possible pre-HTML5. Of Flash's many capabilities, the most commonly used is its ability to integrate streaming multimedia into HTML pages. With the introduction of HTML5 in 2010 and the growing concerns with Flash's security, the role of Flash is decreasing.

In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks use the same technology as JavaScript, Ajax, and the DOM. However, frameworks smooth over inconsistencies between Web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated 'widgets' that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel.
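
By way of illustration only, the sketch below shows in plain TypeScript on the DOM the kind of prefabricated "widget" such frameworks bundle, here a minimal tabbed panel; the element ids and data-* attribute names are assumptions for the example, not any framework's actual API.

    // A rough sketch of a tabbed-panel widget built directly on the DOM.
    // Frameworks ship polished, cross-browser versions of widgets like this.
    function makeTabs(containerId: string): void {
      const container = document.getElementById(containerId);
      if (container === null) return;

      const buttons = container.querySelectorAll<HTMLButtonElement>("button[data-tab]");
      const panels = container.querySelectorAll<HTMLElement>("[data-panel]");

      buttons.forEach(button => {
        button.addEventListener("click", () => {
          // Show only the panel whose data-panel matches the clicked button's data-tab.
          panels.forEach(panel => {
            panel.hidden = panel.dataset.panel !== button.dataset.tab;
          });
        });
      });
    }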

On the server side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as Perl, PHP, Python and Ruby, as well as Enterprise Java (J2EE) and the Microsoft .NET Framework, are used by developers to output data dynamically using information from files and databases. This allows websites and web services to share machine-readable formats such as XML (Atom, RSS, etc.) and JSON. When data is available in one of these formats, another website can use it to integrate a portion of that site's functionality.
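
A minimal server-side sketch of this pattern, written in TypeScript for Node.js, is shown below. The /api/articles route and the sample records are assumptions for illustration; a real site would draw the data from files or a database as described above.

    // A minimal sketch of a server emitting machine-readable JSON that another
    // site or mashup could consume. Route and data are illustrative only.
    import * as http from "http";

    const articles = [
      { title: "What Is Web 2.0", tags: ["web", "participation"] },
      { title: "The Perpetual Beta", tags: ["software", "services"] },
    ];

    const server = http.createServer((req, res) => {
      if (req.url === "/api/articles") {
        // Serve dynamic, structured data instead of a static HTML page.
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify(articles));
      } else {
        res.writeHead(404);
        res.end();
      }
    });

    server.listen(8080);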

Concepts

Web 2.0 can be described in three parts:

  • Rich Internet application (RIA) — defines the experience brought from desktop to browser, whether it is "rich" from a graphical point of view or a usability/interactivity or features point of view.
  • Web-oriented architecture (WOA) — defines how Web 2.0 applications expose their functionality so that other applications can leverage and integrate the functionality providing a set of much richer applications. Examples are feeds, RSS feeds, web services, mashups.
  • Social Web — defines how Web 2.0 websites tend to interact much more with the end user and make the end user an integral part of the website, either by adding his or her profile, adding comments on content, uploading new content, or adding user-generated content (e.g., personal digital photos).

As such, Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented Web browsers may use plug-ins and software extensions to handle the content and user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment known as "Web 1.0".

Web 2.0 sites include the following features and techniques, referred to as the acronym SLATES by Andrew McAfee:

Search
Finding information through keyword search.
Links to other websites
Connects information sources together using the model of the Web.
Authoring
The ability to create and update content leads to the collaborative work of many authors. Wiki users may extend, undo, redo and edit each other's work. Comment systems allow readers to contribute their viewpoints.
Tags
Categorization of content by users adding "tags" (short, usually one-word or two-word descriptions) to facilitate searching. For example, a user can tag a metal song as "death metal". Collections of tags created by many users within a single system may be referred to as "folksonomies" (i.e., folk taxonomies). A minimal sketch of such a tag index appears after this list.
Extensions
Software that makes the Web an application platform as well as a document server. Examples include Adobe Reader, Adobe Flash, Microsoft Silverlight, ActiveX, Oracle Java, QuickTime, and Windows Media.
Signals
The use of syndication technology, such as RSS feeds, to notify users of content changes.
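
The tag index mentioned under "Tags" above can be sketched as a simple data structure. In the following TypeScript example, the item identifiers and tag names are illustrative assumptions.

    // A minimal sketch of a folksonomy: free-form tags applied by many users,
    // indexed by item and queryable by tag.
    const tagsByItem = new Map<string, Set<string>>();

    function tagItem(item: string, tag: string): void {
      if (!tagsByItem.has(item)) {
        tagsByItem.set(item, new Set());
      }
      tagsByItem.get(item)!.add(tag);
    }

    // Invert the index so items can be found by tag, as a tag search would.
    function itemsWithTag(tag: string): string[] {
      return Array.from(tagsByItem.entries())
        .filter(([, tags]) => tags.has(tag))
        .map(([item]) => item);
    }

    tagItem("song-42", "death metal");
    tagItem("song-42", "live");
    tagItem("song-17", "death metal");
    console.log(itemsWithTag("death metal")); // ["song-42", "song-17"]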

While SLATES forms the basic framework of Enterprise 2.0, it does not contradict all of the higher level Web 2.0 design patterns and business models. It includes discussions of self-service IT, the long tail of enterprise IT demand, and many other consequences of the Web 2.0 era in enterprise uses.

Social Web

A third important part of Web 2.0 is the social web. The social web consists of a number of online tools and platforms where people share their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to interact much more with the end user. As such, the end user is not only a user of the application but also a participant, for example by blogging, tagging content, contributing to wikis and RSS feeds, social bookmarking, and social networking.

The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to append a flurry of 2.0's to existing concepts and fields of study, including Library 2.0, Social Work 2.0, Enterprise 2.0, PR 2.0, Classroom 2.0, Publishing 2.0, Medicine 2.0, Telco 2.0, Travel 2.0, Government 2.0,[41] and even Porn 2.0. Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper "Library 2.0: The Challenge of Disruptive Innovation", Paul Miller argues

Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write web. Talis believes that Library 2.0 means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloging efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others.

Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a "Library 2.0". Many of the other proponents of new 2.0s mentioned here use similar methods. The meaning of Web 2.0 is role dependent. For example, some use Web 2.0 to establish and maintain relationships through social networks, while some marketing managers might use this promising technology to "end-run traditionally unresponsive I.T. department[s]."

There is a debate over the use of Web 2.0 technologies in mainstream education. Issues under consideration include the understanding of students' different learning modes; the conflicts between ideas entrenched in informal online communities and educational establishments' views on the production and authentication of 'formal' knowledge; and questions about privacy, plagiarism, shared authorship and the ownership of knowledge and information produced and/or published on line.

Marketing

Web 2.0 is used by companies, non-profit organisations and governments for interactive marketing. A growing number of marketers are using Web 2.0 tools to collaborate with consumers on product development, customer service enhancement, product or service improvement and promotion. Companies can use Web 2.0 tools to improve collaboration with both their business partners and consumers. Among other things, company employees have created wikis (websites that allow users to add, delete, and edit content) to list answers to frequently asked questions about each product, and consumers have added significant contributions.

Another Web 2.0 marketing draw is ensuring that consumers can use the online community to network among themselves on topics of their own choosing. Mainstream media usage of Web 2.0 is increasing. Saturating media hubs such as The New York Times, PC Magazine and Business Week with links to popular new Web sites and services is critical to achieving the threshold for mass adoption of those services.[47] User web content can be used to gauge consumer satisfaction. In a recent article for Bank Technology News, Shane Kite describes how Citigroup's Global Transaction Services unit monitors social media outlets to address customer issues and improve products.

Destination marketing

In the tourism industry, social media is an effective channel for attracting travellers and promoting tourism products and services by engaging with customers. The brand of a tourist destination can be built through marketing campaigns on social media and through engagement with customers. For example, the “Snow at First Sight” campaign launched by the State of Colorado aimed to build brand awareness of Colorado as a winter destination. The campaign used social media platforms such as Facebook and Twitter to promote the competition and asked participants to share experiences, pictures and videos on social media. As a result, Colorado enhanced its image as a winter destination and created a campaign worth about $2.9 million.

Tourism organisations can earn brand loyalty through interactive marketing campaigns on social media that use engaging, passive communication tactics. For example, the “Moms” advisors of Walt Disney World are responsible for offering suggestions and replying to questions about family trips at Walt Disney World. Because of their expertise on Disney, the “Moms” were chosen to represent the campaign. Social networking sites, such as Facebook, can be used as a platform for providing detailed information about a marketing campaign, as well as for real-time online communication with customers. Korean Airline Tour created and maintained a relationship with customers by using Facebook for individual communication.

Travel 2.0 refers to a model of Web 2.0 in the tourism industry that provides virtual travel communities. The Travel 2.0 model allows users to create their own content and exchange it through globally interactive features on websites. Users can also contribute their experiences, images and suggestions regarding their trips through online travel communities. For example, TripAdvisor is an online travel community which enables users to rate hotels and tourist destinations and share their reviews and feedback autonomously. Users with no prior association can interact socially and communicate through discussion forums on TripAdvisor.

Social media, and especially Travel 2.0 websites, plays a crucial role in the decision-making behaviour of travellers. The user-generated content on social media tools has a significant impact on travellers' choices and organisation preferences. Travel 2.0 sparked a radical change in how travellers receive information, from business-to-customer marketing to peer-to-peer reviews. User-generated content became a vital tool for helping many travellers manage their international travels, especially first-time visitors. Travellers tend to trust and rely on peer-to-peer reviews and virtual communication on social media rather than the information provided by travel suppliers.

In addition, autonomous review features on social media help travellers reduce risks and uncertainties before the purchasing stage. Social media is also a channel for customer complaints and negative feedback, which can damage the images and reputations of organisations and destinations. For example, a majority of UK travellers read customer reviews before booking hotels, and half of those customers would refrain from booking a hotel that has received negative feedback.

Therefore, organisations should develop strategic plans to handle and manage negative feedback on social media. Although the user-generated content and rating systems on social media are beyond a business's control, the business can monitor those conversations and participate in communities to enhance customer loyalty and maintain customer relationships.

Education

Web 2.0 could allow for more collaborative education. For example, blogs give students a public space to interact with one another and the content of the class. Some studies suggest that Web 2.0 can increase the public's understanding of science, which could improve governments' policy decisions. A 2012 study by researchers at the University of Wisconsin-Madison notes that "...the internet could be a crucial tool in increasing the general public’s level of science literacy. This increase could then lead to better communication between researchers and the public, more substantive discussion, and more informed policy decision."

Web-based applications and desktops

Ajax has prompted the development of Web sites that mimic desktop applications, such as word processing, the spreadsheet, and slide-show presentation. WYSIWYG wiki and blogging sites replicate many features of PC authoring applications. Several browser-based services have emerged, including EyeOS and YouOS (no longer active). Although named operating systems, many of these services are application platforms: they mimic the user experience of desktop operating systems, offering features and applications similar to a PC environment, and are able to run within any modern browser. However, these so-called "operating systems" do not directly control the hardware on the client's computer. Numerous web-based application services appeared during the dot-com bubble of 1997–2001 and then vanished, having failed to gain a critical mass of customers.

Distribution of media

XML and RSS

Many regard syndication of site content as a Web 2.0 feature. Syndication uses standardized protocols to permit end-users to make use of a site's data in another context (such as another Web site, a browser plugin, or a separate desktop application). Protocols permitting syndication include RSS (really simple syndication, also known as Web syndication), RDF (as in RSS 1.1), and Atom, all of which are XML-based formats. Observers have started to refer to these technologies as Web feeds. Specialized protocols such as FOAF and XFN (both for social networking) extend the functionality of sites and permit end-users to interact without centralized Web sites.
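
As a rough illustration of consuming a syndicated feed in another context, the TypeScript sketch below fetches an RSS document and extracts the entry titles with the browser's DOMParser; the feed URL is a placeholder.

    // A minimal sketch of reading a Web feed. RSS 2.0 places each entry in an
    // <item> element whose child <title> holds the headline.
    async function readFeedTitles(feedUrl: string): Promise<string[]> {
      const response = await fetch(feedUrl);
      const xmlText = await response.text();

      const doc = new DOMParser().parseFromString(xmlText, "application/xml");

      return Array.from(doc.querySelectorAll("item > title"))
        .map(node => node.textContent ?? "");
    }

    // Example usage with a placeholder feed address:
    // readFeedTitles("https://example.com/feed.rss").then(console.log);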

Web APIs

Web 2.0 often uses machine-based interactions such as REST and SOAP. Servers often expose proprietary application programming interfaces (APIs), but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into use. Most communications through APIs involve XML or JSON payloads. REST APIs, through their use of self-descriptive messages and hypermedia as the engine of application state, should be self-describing once an entry URI is known. Web Services Description Language (WSDL) is the standard way of publishing a SOAP API, and there are a range of Web service specifications.
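
The sketch below illustrates such an API interaction in TypeScript: posting a new entry to a blog as a JSON payload over HTTP. The endpoint, fields and bearer token are hypothetical and do not correspond to any specific service's API.

    // A rough sketch of calling a REST-style Web API with a JSON payload.
    interface BlogPost {
      title: string;
      body: string;
    }

    async function publishPost(post: BlogPost): Promise<void> {
      const response = await fetch("https://blog.example.com/api/posts", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": "Bearer <token>", // placeholder credential
        },
        body: JSON.stringify(post),
      });

      if (!response.ok) {
        throw new Error(`Publish failed: ${response.status}`);
      }
    }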

Trademark

In November 2004, CMP Media applied to the USPTO for a service mark on the use of the term "WEB 2.0" for live events. On the basis of this application, CMP Media sent a cease-and-desist demand to the Irish non-profit organisation IT@Cork on May 24, 2006, but retracted it two days later. The "WEB 2.0" service mark registration passed final PTO Examining Attorney review on May 10, 2006, and was registered on June 27, 2006. The European Union application (which would confer unambiguous status in Ireland) was declined on May 23, 2007.

Criticism

Critics of the term claim that "Web 2.0" does not represent a new version of the World Wide Web at all, but merely continues to use so-called "Web 1.0" technologies and concepts. First, techniques such as Ajax do not replace underlying protocols like HTTP, but add a layer of abstraction on top of them. Second, many of the ideas of Web 2.0 were already featured in implementations on networked systems well before the term "Web 2.0" emerged. Amazon.com, for instance, has allowed users to write reviews and consumer guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to outside developers in 2002. Previous developments also came from research in computer-supported collaborative learning and computer supported cooperative work (CSCW) and from established products like Lotus Notes and Lotus Domino, all phenomena that preceded Web 2.0. Tim Berners-Lee, who developed the initial technologies of the Web, has been an outspoken critic of the term, while supporting many of the elements associated with it. In the environment where the Web originated, each workstation had a dedicated IP address and always-on connection to the Internet. Sharing a file or publishing a web page was as simple as moving the file into a shared folder.

Perhaps the most common criticism is that the term is unclear or simply a buzzword. For many people who work in software, version numbers like 2.0 and 3.0 are for software or hardware versioning only, and assigning 2.0 arbitrarily to many technologies with a variety of real version numbers has no meaning. The web does not have a version number. For example, in a 2006 interview with IBM developerWorks podcast editor Scott Laningham, Tim Berners-Lee described the term "Web 2.0" as jargon:

"Nobody really knows what it means... If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along... Web 2.0, for some people, it means moving some of the thinking [to the] client side, so making it more immediate, but the idea of the Web as interaction between people is really what the Web is. That was what it was designed to be... a collaborative space where people can interact."

Other critics labeled Web 2.0 "a second bubble" (referring to the Dot-com bubble of 1997–2000), suggesting that too many Web 2.0 companies attempt to develop the same product with a lack of business models. For example, The Economist has dubbed the mid- to late-2000s focus on Web companies as "Bubble 2.0".

In terms of Web 2.0's social impact, critics such as Andrew Keen argue that Web 2.0 has created a cult of digital narcissism and amateurism, which undermines the notion of expertise by allowing anybody, anywhere to share and place undue value upon their own opinions about any subject and post any kind of content, regardless of their actual talent, knowledge, credentials, biases or possible hidden agendas. Keen's 2007 book, The Cult of the Amateur, argues that the core assumption of Web 2.0, that all opinions and user-generated content are equally valuable and relevant, is misguided. Additionally, Sunday Times reviewer John Flintoff has characterized Web 2.0 as "creating an endless digital forest of mediocrity: uninformed political commentary, unseemly home videos, embarrassingly amateurish music, unreadable poems, essays and novels... [and that Wikipedia is full of] mistakes, half-truths and misunderstandings". In a 1994 Wired interview, Steve Jobs, forecasting the future development of the web for personal publishing, said "The Web is great because that person can't foist anything on you – you have to go get it. They can make themselves available, but if nobody wants to look at their site, that's fine. To be honest, most people who have something to say get published now." Michael Gorman, former president of the American Library Association, has been vocal about his opposition to Web 2.0 due to the lack of expertise that it outwardly claims, though he believes that there is hope for the future.

"The task before us is to extend into the digital world the virtues of authenticity, expertise, and scholarly apparatus that have evolved over the 500 years of print, virtues often absent in the manuscript age that preceded print".

There is also a growing body of critique of Web 2.0 from the perspective of political economy. Since, as Tim O'Reilly and John Battelle put it, Web 2.0 is based on the "customers... building your business for you," critics have argued that sites such as Google, Facebook, YouTube, and Twitter are exploiting the "free labor" of user-created content. Web 2.0 sites use Terms of Service agreements to claim perpetual licenses to user-generated content, and they use that content to create profiles of users to sell to marketers. This is part of increased surveillance of user activity happening within Web 2.0 sites. Jonathan Zittrain of Harvard's Berkman Center for the Internet and Society argues that such data can be used by governments who want to monitor dissident citizens. The rise of AJAX-driven web sites where much of the content must be rendered on the client has meant that users of older hardware are given worse performance versus a site purely composed of HTML, where the processing takes place on the server. Accessibility for disabled or impaired users may also suffer in a Web 2.0 site.

Others have noted that Web 2.0 technologies are tied to particular political ideologies. "Web 2.0 discourse is a conduit for the materialization of neoliberal ideology." The technologies of Web 2.0 may also "function as a disciplining technology within the framework of a neoliberal political economy."

Looking at Web 2.0 from a cultural convergence view, according to Henry Jenkins, can be problematic because consumers are doing more and more work in order to entertain themselves. For instance, Twitter offers online tools for users to create their own tweets; in a way, the users do all the work when it comes to producing media content. At the heart of Web 2.0's participatory culture is an inherent disregard for privacy, though this is treated as a minor issue by giant platforms like Facebook and Google, which want users to participate and create more content as they discover and explore the internet. More importantly, user participation creates fresh content and profile data that are useful to third parties such as advertising corporations and national security agencies. Suppression of privacy is therefore built into the business model of Web 2.0, and one should not be too attached to the optimistic notion of Web 2.0 as the next evolutionary step for digital media.

Coherence therapy

From Wikipedia, the free encyclopedia

Coherence therapy is a system of psychotherapy based on the theory that symptoms of mood, thought and behavior are produced coherently according to the person's current mental models of reality, most of which are implicit and unconscious. It was founded by Bruce Ecker and Laurel Hulley in the 1990s. It has been considered among the most well-respected postmodern/constructivist therapies.

General description

The basis of coherence therapy is the principle of symptom coherence. This is the view that any response of the brain–mind–body system is an expression of coherent personal constructs (or schemas), which are nonverbal, emotional, perceptual and somatic knowings, not verbal-cognitive propositions. A therapy client's presenting symptoms are understood as an activation and enactment of specific constructs. The principle of symptom coherence can be found in varying degrees, explicitly or implicitly, in the writings of a number of historical psychotherapy theorists, including Sigmund Freud (1923), Harry Stack Sullivan (1948), Carl Jung (1964), R. D. Laing (1967), Gregory Bateson (1972), Virginia Satir (1972), Paul Watzlawick (1974), Eugene Gendlin (1982), Vittorio Guidano & Giovanni Liotti (1983), Les Greenberg (1993), Bessel van der Kolk (1994), Robert Kegan & Lisa Lahey (2001), Sue Johnson (2004), and others.

The principle of symptom coherence maintains that an individual's seemingly irrational, out-of-control symptoms are actually sensible, cogent, orderly expressions of the person's existing constructions of self and world, rather than a disorder or pathology. Even a person's psychological resistance to change is seen as a result of the coherence of the person's mental constructions. Thus, coherence therapy, like some other postmodern therapies, approaches a person's resistance to change as an ally in psychotherapy and not an enemy.

Coherence therapy is considered a type of psychological constructivism. It differs from some other forms of constructivism in that the principle of symptom coherence is fully explicit and rigorously operationalized, guiding and informing the entire methodology. The process of coherence therapy is experiential rather than analytic, and in this regard is similar to Gestalt therapy, Focusing or Hakomi. The aim is for the client to come into direct, emotional experience of the unconscious personal constructs (akin to complexes or ego-states) which produce an unwanted symptom and to undergo a natural process of revising or dissolving these constructs, thereby eliminating the symptom. Practitioners claim that the entire process often requires a dozen sessions or less, although it can take longer when the meanings and emotions underlying the symptom are particularly complex or intense.

Symptom coherence

Symptom coherence is defined by Ecker and Hulley as follows:

  1. A person produces a particular symptom because, despite the suffering it entails, the symptom is compellingly necessary to have, according to at least one unconscious, nonverbal, emotionally potent schema or construction of reality.
  2. Each symptom-requiring construction is cogent—a sensible, meaningful, well-knit, well-defined schema that was formed adaptively in response to earlier experiences and is still carried and applied in the present.
  3. The person ceases producing the symptom as soon as there no longer exists any construction of reality in which the symptom is necessary to have.

There are several forms of symptom coherence. For example, some symptoms are necessary because they serve a crucial function (such as depression that protects against feeling and expressing anger), while others have no function but are necessary in the sense of being an inevitable effect, or by-product, caused by some other adaptive, coherent but unconscious response (such as depression resulting from isolation, which itself is a strategy for feeling safe). Both functional and functionless symptoms are coherent, according to the client's own material.

In other words, the theory states that symptoms are produced by how the individual strives, without conscious awareness, to carry out self-protecting or self-affirming purposes formed in the course of living. This model of symptom production fits into the broader category of psychological constructivism, which views the person as having profound, if unrecognized, agency in shaping experience and behavior.

Symptom coherence does not apply to those symptoms that are not directly or indirectly caused by implicit schemas or emotional learnings—for example, hypothyroidism-induced depression, autism, and biochemical addiction.

Hierarchical organization of constructs

As a tool for identifying all of a person's relevant schemas or constructions of reality, Ecker and Hulley defined several logically hierarchical domains or orders of construction (inspired by Gregory Bateson):

  • The first order consists of a person's overt responses: thoughts, feelings, and behaviors.
  • The second order consists of the person's specific meaning of the concrete situation to which they are responding.
  • The third order consists of the person's broad purposes and strategies for construing that specific meaning (teleology).
  • The fourth order consists of the person's general meaning of the nature of self, others, and the world (ontology).
  • The fifth order consists of the person's broad purposes and strategies for construing that general meaning.
  • Higher orders (beyond the fifth order) are rarely involved in psychotherapy.

A person's first-order symptoms of thought, mood, or behavior follow from a second-order construal of the situation, and that second-order construal is powerfully influenced by the person's third- and fourth-order constructions. Hence the third and higher orders constitute what Ecker and Hulley call "the emotional truth of the symptom", which are the meanings and purposes that are intended to be discovered, integrated, and transformed in therapy.

History

Coherence therapy was developed in the late 1980s and early 1990s as Ecker and Hulley investigated why certain psychotherapy sessions seemed to produce deep transformations of emotional meaning and immediate symptom cessation, while most sessions did not. Studying many such transformative sessions for several years, they concluded that in these sessions, the therapist had desisted from doing anything to oppose or counteract the symptom, and the client had a powerful, felt experience of some previously unrecognized "emotional truth" that was making the symptom necessary to have.

Ecker and Hulley began developing experiential methods to intentionally facilitate this process. They found that a majority of their clients could begin having experiences of the underlying coherence of their symptoms from the first session. In addition to creating a methodology for swift retrieval of the emotional schemas driving symptom production, they also identified the process by which retrieved schemas then undergo profound change or dissolution: the retrieved emotional schema must be activated while concurrently the individual vividly experiences something that sharply contradicts it. Neuroscientists subsequently determined that these same steps are precisely what unlocks and deletes the neural circuit in implicit memory that stores an emotional learning—the process of reconsolidation.

Due to the swiftness of change that Ecker and Hulley began experiencing with many of their clients, they initially named this new system depth-oriented brief therapy (DOBT).

In 2005, Ecker and Hulley began calling the system coherence therapy in order for the name to more clearly reflect the central principle of the approach, and also because many therapists had come to associate the phrase "brief therapy" with depth-avoidant methods that they regard as superficial.

Evidence from neuroscience

In a series of three articles published in the Journal of Constructivist Psychology from 2007 to 2009, Bruce Ecker and Brian Toomey presented evidence that coherence therapy may be one of the systems of psychotherapy which, according to current neuroscience, makes fullest use of the brain's built-in capacities for change.

Ecker and Toomey argued that the mechanism of change in coherence therapy correlates with the recently discovered neural process of "memory reconsolidation", a process that can "unwire" and delete longstanding emotional conditioning held in implicit memory. The assertions that coherence therapy achieves implicit memory deletion are unproven but align with the growing body of evidence supporting memory reconsolidation. Ecker and colleagues claim that: (a) their procedural steps match those identified by neuroscientists for reconsolidation, (b) their procedural steps result in effortless cessation of symptoms, and (c) the emotional experience of the retrieved, symptom-generating emotional schemas can no longer be evoked by cues that formerly evoked it strongly.

The process of removing the neural basis of the symptom in coherence therapy (and in similar postmodern therapies) is different from the counteractive strategy of some behavioral therapies. In such behavioral therapies, new preferred behavioral patterns are typically practiced to compete against and hopefully override the unwanted ones; this counteractive process, like the "extinction" of conditioned responses in animals, is known to be inherently unstable and prone to relapse, because the neural circuit of the unwanted pattern continues to exist even when the unwanted pattern is in abeyance. Through reconsolidation, the unwanted neural circuits are "unwired" and cannot relapse.

Spouse

From Wikipedia, the free encyclopedia ...