Tuesday, February 20, 2024

Collective intelligence

From Wikipedia, the free encyclopedia
Types of collective intelligence

Collective intelligence (CI) is shared or group intelligence (GI) that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making. The term appears in sociobiology, political science and in the context of mass peer review and crowdsourcing applications. It may involve consensus, social capital and formalisms such as voting systems, social media and other means of quantifying mass activity. Collective IQ is a measure of collective intelligence, although it is often used interchangeably with the term collective intelligence. Collective intelligence has also been attributed to bacteria and animals.

It can be understood as an emergent property from the synergies among:

  1. data-information-knowledge
  2. software-hardware
  3. individuals (those with new insights as well as recognized authorities) that continually learn from feedback to produce just-in-time knowledge for better decisions than these three elements acting alone

Or it can be more narrowly understood as an emergent property between people and ways of processing information. This notion of collective intelligence is referred to as "symbiotic intelligence" by Norman Lee Johnson. The concept is used in sociology, business, computer science and mass communications; it also appears in science fiction. Pierre Lévy defines collective intelligence as "a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills", to which he adds the "indispensable characteristic" that "the basis and goal of collective intelligence is mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities." According to researchers Pierre Lévy and Derrick de Kerckhove, it refers to the capacity of networked ICTs (information and communication technologies) to enhance the collective pool of social knowledge by simultaneously expanding the extent of human interactions. A broader definition was provided by Geoff Mulgan in a series of lectures and reports from 2006 onwards and in the book Big Mind, which proposed a framework for analysing any thinking system, including both human and machine intelligence, in terms of functional elements (observation, prediction, creativity, judgement etc.), learning loops and forms of organisation. The aim was to provide a way to diagnose, and improve, the collective intelligence of a city, business, NGO or parliament.

Collective intelligence strongly contributes to the shift of knowledge and power from the individual to the collective. According to Eric S. Raymond in 1998 and JC Herz in 2005, open-source intelligence will eventually generate superior outcomes to knowledge generated by proprietary software developed within corporations. Media theorist Henry Jenkins sees collective intelligence as an 'alternative source of media power', related to convergence culture. He draws attention to education and the way people are learning to participate in knowledge cultures outside formal learning settings. Henry Jenkins criticizes schools which promote 'autonomous problem solvers and self-contained learners' while remaining hostile to learning through the means of collective intelligence. Both Pierre Lévy and Henry Jenkins support the claim that collective intelligence is important for democratization, as it is interlinked with knowledge-based culture and sustained by collective idea sharing, and thus contributes to a better understanding of diverse society.

Similar to the g factor (g) for general individual intelligence, a new scientific understanding of collective intelligence aims to extract a general collective intelligence factor c for groups, indicating a group's ability to perform a wide range of tasks. Definition, operationalization and statistical methods are derived from g. Just as g is closely tied to the concept of IQ, this measurement of collective intelligence can be interpreted as an intelligence quotient for groups (Group-IQ), even though the score is not a quotient per se. Causes of c and its predictive validity are investigated as well.

Writers who have influenced the idea of collective intelligence include Francis Galton, Douglas Hofstadter (1979), Peter Russell (1983), Tom Atlee (1993), Pierre Lévy (1994), Howard Bloom (1995), Francis Heylighen (1995), Douglas Engelbart, Louis Rosenberg, Cliff Joslyn, Ron Dembo, Gottfried Mayer-Kress (2003), and Geoff Mulgan.

History

H.G. Wells World Brain (1936–1938)

The concept (although not so named) originated in 1785 with the Marquis de Condorcet, whose "jury theorem" states that if each member of a voting group is more likely than not to make a correct decision, the probability that the majority vote of the group is correct increases with the number of members of the group (see Condorcet's jury theorem). Many theorists have interpreted Aristotle's statement in the Politics that "a feast to which many contribute is better than a dinner provided out of a single purse" to mean that just as many may bring different dishes to the table, so in a deliberation many may contribute different pieces of information to generate a better decision. Recent scholarship, however, suggests that this was probably not what Aristotle meant but is a modern interpretation based on what we now know about team intelligence.
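The theorem's arithmetic is easy to check directly. Below is a minimal sketch in Python (the group sizes and competence value are illustrative), computing the probability that a majority of n independent voters is correct when each voter is correct with probability p:

    from math import comb

    def majority_correct(n: int, p: float) -> float:
        # Probability that more than half of n independent voters are
        # correct, when each is correct with probability p (n odd).
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    # With p = 0.6, the majority becomes ever more reliable as the group grows:
    for n in (1, 11, 101, 1001):
        print(n, round(majority_correct(n, 0.6), 4))
    # prints approximately: 0.6, 0.75, 0.98, 1.0

The same formula also shows the theorem's converse: if each voter is more likely than not to be wrong (p below one half), the majority becomes almost certainly wrong as the group grows.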

A precursor of the concept is found in entomologist William Morton Wheeler's observation in 1910 that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism. Wheeler saw this collaborative process at work in ants that acted like the cells of a single beast he called a superorganism.

In 1912 Émile Durkheim identified society as the sole source of human logical thought. He argued in "The Elementary Forms of Religious Life" that society constitutes a higher intelligence because it transcends the individual over space and time. Other antecedents are Vladimir Vernadsky and Pierre Teilhard de Chardin's concept of "noosphere" and H.G. Wells's concept of "world brain". Peter Russell, Elisabet Sahtouris, and Barbara Marx Hubbard (originator of the term "conscious evolution") are inspired by the visions of a noosphere – a transcendent, rapidly evolving collective intelligence – an informational cortex of the planet. The notion has more recently been examined by the philosopher Pierre Lévy. In a 1962 research report, Douglas Engelbart linked collective intelligence to organizational effectiveness, and predicted that pro-actively 'augmenting human intellect' would yield a multiplier effect in group problem solving: "Three people working together in this augmented mode [would] seem to be more than three times as effective in solving a complex problem as is one augmented person working alone". In 1994, he coined the term 'collective IQ' as a measure of collective intelligence, to focus attention on the opportunity to significantly raise collective IQ in business and society.

The idea of collective intelligence also forms the framework for contemporary democratic theories, often referred to as epistemic democracy. Epistemic democratic theories refer to the capacity of the populace, either through deliberation or aggregation of knowledge, to track the truth, and rely on mechanisms to synthesize and apply collective intelligence.

Collective intelligence was introduced into the machine learning community in the late 20th century, and matured into a broader consideration of how to design "collectives" of self-interested adaptive agents to meet a system-wide goal. This was related to single-agent work on "reward shaping" and has been taken forward by numerous researchers in the game theory and engineering communities.

Dimensions

Complex adaptive systems model

Howard Bloom has discussed mass behavior – collective behavior from the level of quarks to the level of bacterial, plant, animal, and human societies. He stresses the biological adaptations that have turned most of this earth's living beings into components of what he calls "a learning machine". In 1986 Bloom combined the concepts of apoptosis, parallel distributed processing, group selection, and the superorganism to produce a theory of how collective intelligence works. Later he showed how the collective intelligences of competing bacterial colonies and human societies can be explained in terms of computer-generated "complex adaptive systems" and the "genetic algorithms", concepts pioneered by John Holland.

Bloom traced the evolution of collective intelligence to our bacterial ancestors 1 billion years ago and demonstrated how a multi-species intelligence has worked since the beginning of life. Ant societies exhibit more intelligence, in terms of technology, than any other animal except for humans and co-operate in keeping livestock, for example aphids for "milking". Leaf cutters care for fungi and carry leaves to feed the fungi.

David Skrbina cites the concept of a 'group mind' as being derived from Plato's concept of panpsychism (that mind or consciousness is omnipresent and exists in all matter). He develops the concept of a 'group mind' as articulated by Thomas Hobbes in "Leviathan" and Fechner's arguments for a collective consciousness of mankind. He cites Durkheim as the most notable advocate of a "collective consciousness" and Teilhard de Chardin as a thinker who has developed the philosophical implications of the group mind.

Tom Atlee focuses primarily on humans and on work to upgrade what Howard Bloom calls "the group IQ". Atlee feels that collective intelligence can be encouraged "to overcome 'groupthink' and individual cognitive bias in order to allow a collective to cooperate on one process – while achieving enhanced intellectual performance." George Pór defined the collective intelligence phenomenon as "the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as differentiation and integration, competition and collaboration." Atlee and Pór state that "collective intelligence also involves achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action". Their approach is rooted in scientific community metaphor.

The term group intelligence is sometimes used interchangeably with the term collective intelligence. Anita Woolley presents collective intelligence as a measure of group intelligence and group creativity. The idea is that a measure of collective intelligence covers a broad range of features of the group, mainly group composition and group interaction. The features of composition that lead to increased levels of collective intelligence in groups include a higher proportion of women in the group as well as greater diversity of the group.

Atlee and Pór suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory and artificial intelligence have something to offer. Individuals who respect collective intelligence are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts. Maximizing collective intelligence relies on the ability of an organization to accept and develop "The Golden Suggestion", which is any potentially useful input from any member. Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.

Robert David Steele Vivas in The New Craft of Intelligence portrayed all citizens as "intelligence minutemen," drawing only on legal and ethical sources of information, able to create a "public intelligence" that keeps public officials and corporate managers honest, turning the concept of "national intelligence" (previously concerned about spies and secrecy) on its head.

Stigmergic Collaboration: a theoretical framework for mass collaboration

According to Don Tapscott and Anthony D. Williams, collective intelligence is mass collaboration. For this kind of collaboration to happen, four principles need to be present:

  • Openness - Sharing ideas and intellectual property: though these resources provide an edge over competitors, more benefits accrue from allowing others to share ideas and gain significant improvement and scrutiny through collaboration.
  • Peering - Horizontal organization as with the 'opening up' of the Linux program where users are free to modify and develop it provided that they make it available for others. Peering succeeds because it encourages self-organization – a style of production that works more effectively than hierarchical management for certain tasks.
  • Sharing - Companies have started to share some ideas while maintaining some degree of control over others, like potential and critical patent rights. Limiting all intellectual property shuts out opportunities, while sharing some expands markets and brings out products faster.
  • Acting Globally - Advances in communication technology have prompted the rise of global companies with low overhead costs. Because the internet is widespread, a globally integrated company has no geographical boundaries and may access new markets, ideas and technology.

Collective intelligence factor c

Scree plot showing percent of explained variance for the first factors in Woolley et al.'s two original studies in 2010

A new scientific understanding of collective intelligence defines it as a group's general ability to perform a wide range of tasks. Definition, operationalization and statistical methods are similar to the psychometric approach of general individual intelligence: an individual's performance on a given set of cognitive tasks is used to measure general cognitive ability, indicated by the general intelligence factor g proposed by English psychologist Charles Spearman and extracted via factor analysis. In the same vein as g serves to display between-individual performance differences on cognitive tasks, collective intelligence research aims to find a parallel intelligence factor for groups, the c factor (also called the 'collective intelligence factor' (CI)), displaying between-group differences in task performance. The collective intelligence score is then used to predict how the same group will perform on any other similar task in the future. Tasks here refer to mental or intellectual tasks performed by small groups, even though the concept is hoped to be transferable to other performances and to any groups or crowds, ranging from families to companies and even whole cities. Since individuals' g factor scores are highly correlated with full-scale IQ scores, which are in turn regarded as good estimates of g, this measurement of collective intelligence can also be seen as an intelligence indicator or quotient for a group (Group-IQ), parallel to an individual's intelligence quotient (IQ), even though the score is not a quotient per se.

Mathematically, c and g are both variables summarizing positive correlations among different tasks supposing that performance on one task is comparable with performance on other similar tasks. c thus is a source of variance among groups and can only be considered as a group's standing on the c factor compared to other groups in a given relevant population. The concept is in contrast to competing hypotheses including other correlational structures to explain group intelligence, such as a composition out of several equally important but independent factors as found in individual personality research.
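The factor-extraction step described here can be sketched with a small simulation. The following is a minimal illustration, not Woolley et al.'s actual procedure: the score matrix is synthetic, the loadings are arbitrary, and the first principal component of the inter-task correlation matrix stands in for a full factor analysis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic scores: 192 groups x 5 tasks. A latent "c" contributes to
    # performance on every task; the rest is task-specific noise.
    n_groups, n_tasks = 192, 5
    c_latent = rng.normal(size=(n_groups, 1))
    scores = 0.65 * c_latent + 0.76 * rng.normal(size=(n_groups, n_tasks))

    # Standardize each task, then inspect the eigenvalues of the
    # inter-task correlation matrix (the core of the factor-analytic step).
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    corr = np.corrcoef(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
    print("share of variance on the first factor:", eigvals[-1] / eigvals.sum())

    # Each group's c score: its standardized task scores weighted by the
    # first component.
    c_scores = z @ eigvecs[:, -1]

A dominant first eigenvalue, with all tasks loading in the same direction, is the signature of a general factor; in Woolley et al.'s data the first factor accounted for roughly 43% of the variance.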

In addition, this line of research aims to explore the causes affecting collective intelligence, such as group size, collaboration tools or group members' interpersonal skills. The MIT Center for Collective Intelligence, for instance, announced the mapping of "the genome of collective intelligence" as one of its main goals, aiming to develop a "taxonomy of organizational building blocks, or genes, that can be combined and recombined to harness the intelligence of crowds".

Causes

Individual intelligence is shown to be genetically and environmentally influenced. Analogously, collective intelligence research aims to explore the reasons why certain groups perform more intelligently than others, given that c is only moderately correlated with the intelligence of individual group members. According to Woolley et al.'s results, neither team cohesion nor motivation nor satisfaction is correlated with c. However, three factors were found to be significant correlates: the variance in the number of speaking turns, group members' average social sensitivity, and the proportion of females. All three had similar predictive power for c, but only social sensitivity was statistically significant (b=0.33, P=0.05).

The number of speaking turns indicates that "groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking". Hence, giving multiple team members the chance to speak up made a group more intelligent.

Group members' social sensitivity was measured via the Reading the Mind in the Eyes Test (RME) and correlated .26 with c. In this test, participants are asked to detect the thoughts or feelings expressed in photographs of other people's eyes, answering in a multiple-choice format. The test aims to measure people's theory of mind (ToM), also called 'mentalizing' or 'mind reading': the ability to attribute mental states, such as beliefs, desires or intents, to other people, and the extent to which people understand that others have beliefs, desires, intentions or perspectives different from their own. RME is a ToM test for adults that shows sufficient test-retest reliability and consistently differentiates control groups from individuals with functional autism or Asperger Syndrome. It is one of the most widely accepted and well-validated tests of ToM in adults. ToM can be regarded as an associated subset of skills and abilities within the broader concept of emotional intelligence.

The proportion of females as a predictor of c was largely mediated by social sensitivity (Sobel z = 1.93, P=0.03), which is in line with previous research showing that women score higher on social sensitivity tests. While a mediation, statistically speaking, clarifies the mechanism underlying the relationship between a dependent and an independent variable, Woolley agreed in an interview with the Harvard Business Review that these findings say that groups of women are smarter than groups of men. However, she qualified this, noting that what actually matters is the high social sensitivity of group members.
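The Sobel test referenced above has a simple closed form: the indirect effect is the product of the two path coefficients, divided by a standard error that combines both paths. A minimal sketch, with illustrative coefficients rather than the study's actual estimates:

    import math

    def sobel_z(a: float, se_a: float, b: float, se_b: float) -> float:
        # a: path from predictor (proportion of females) to mediator
        #    (social sensitivity); b: path from mediator to outcome (c).
        indirect = a * b
        se_indirect = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
        return indirect / se_indirect

    # Hypothetical path estimates and standard errors:
    print(round(sobel_z(0.40, 0.15, 0.35, 0.12), 2))  # ~1.97

A z value near 2 corresponds to the conventional 5% significance threshold, which is why the reported Sobel z = 1.93 sits right at the edge of significance.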

It is theorized that the collective intelligence factor c is an emergent property resulting from bottom-up as well as top-down processes. Hereby, bottom-up processes cover aggregated group-member characteristics. Top-down processes cover group structures and norms that influence a group's way of collaborating and coordinating.

Processes

Predictors for the collective intelligence factor c. Suggested by Woolley, Aggarwal & Malone (2015)

Top-down processes

Top-down processes cover group interaction, such as structures, processes, and norms. An example of such a top-down process is conversational turn-taking. Research further suggests that collectively intelligent groups communicate more overall and more equally; the same applies to participation, and both findings hold for face-to-face groups as well as for online groups communicating only in writing.

Bottom-up processes

Bottom-up processes include group composition, namely the characteristics of group members, which are aggregated to the team level. An example of such a bottom-up process is the average social sensitivity, or the average and maximum intelligence scores, of group members. Furthermore, collective intelligence was found to be related to a group's cognitive diversity, including thinking styles and perspectives. Groups that are moderately diverse in cognitive style have higher collective intelligence than those that are very similar or very different in cognitive style. Groups whose members are too similar to each other lack the variety of perspectives and skills needed to perform well, while groups whose members are too different seem to have difficulty communicating and coordinating effectively.

Serial vs Parallel processes

For most of human history, collective intelligence was confined to small tribal groups in which opinions were aggregated through real-time parallel interactions among members. In modern times, mass communication, mass media, and networking technologies have enabled collective intelligence to span massive groups, distributed across continents and time zones. To accommodate this shift in scale, collective intelligence in large-scale groups has been dominated by serialized polling processes such as aggregating up-votes, likes, and ratings over time. While modern systems benefit from larger group size, the serialized process has been found to introduce substantial noise that distorts the collective output of the group. In one significant study of serialized collective intelligence, it was found that the first vote cast in a serialized voting system can distort the final result by 34%.
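The mechanism behind such distortion can be illustrated with a toy herding model (a sketch for intuition, not the cited study's method): each voter blends a private judgment with the visible running tally, so the earliest votes steer the votes that follow.

    import random

    def serialized_poll(n_voters: int, p_private: float,
                        herd_weight: float, first_vote: int) -> float:
        # Each voter up-votes with a probability that mixes a private
        # signal (p_private) with the fraction of visible up-votes so far.
        rng = random.Random(42)
        ups, downs = (1, 0) if first_vote else (0, 1)
        for _ in range(n_voters - 1):
            p_up = ((1 - herd_weight) * p_private
                    + herd_weight * ups / (ups + downs))
            if rng.random() < p_up:
                ups += 1
            else:
                downs += 1
        return ups / (ups + downs)

    # Identical population, opposite first votes:
    print(serialized_poll(1000, 0.55, 0.6, first_vote=1))
    print(serialized_poll(1000, 0.55, 0.6, first_vote=0))

With a strong herding weight, the two runs can end noticeably apart even though every later voter has the same private leaning; that path-dependence is the qualitative effect the study quantifies.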

To address the problems of serialized aggregation of input among large-scale groups, recent advancements in collective intelligence have worked to replace serialized votes, polls, and markets with parallel systems such as "human swarms" modeled after synchronous swarms in nature. Based on the natural process of swarm intelligence, these artificial swarms of networked humans enable participants to work together in parallel to answer questions and make predictions as an emergent collective intelligence. In one high-profile example, a human swarm was challenged by CBS Interactive to predict the Kentucky Derby. The swarm correctly predicted the first four horses, in order, defying 542–1 odds and turning a $20 bet into $10,800.

The value of parallel collective intelligence was demonstrated in medical applications by researchers at Stanford University School of Medicine and Unanimous AI in a set of published studies wherein groups of human doctors were connected by real-time swarming algorithms and tasked with diagnosing chest x-rays for the presence of pneumonia. When working together as "human swarms," the groups of experienced radiologists demonstrated a 33% reduction in diagnostic errors as compared to traditional methods.

Evidence

Standardized regression coefficients for the collective intelligence factor c and group member intelligence regressed on the two criterion tasks, as found in Woolley et al.'s (2010) two original studies: c and average (maximum) member intelligence scores are regressed on the criterion tasks.

Woolley, Chabris, Pentland, Hashmi, & Malone (2010), the originators of this scientific understanding of collective intelligence, found a single statistical factor for collective intelligence in their research across 192 groups with people randomly recruited from the public. In Woolley et al.'s two initial studies, groups worked together on different tasks from the McGrath Task Circumplex, a well-established taxonomy of group tasks. Tasks were chosen from all four quadrants of the circumplex and included visual puzzles, brainstorming, making collective moral judgments, and negotiating over limited resources. The results on these tasks were used to conduct a factor analysis. Both studies showed support for a general collective intelligence factor c underlying differences in group performance, with an initial eigenvalue accounting for 43% (44% in study 2) of the variance, whereas the next factor accounted for only 18% (20%). That fits the range normally found in research regarding the general individual intelligence factor g, which typically accounts for 40% to 50% of between-individual performance differences on cognitive tests.

Afterwards, a more complex task was solved by each group to determine whether c factor scores predict performance on tasks beyond the original test. Criterion tasks were playing checkers (draughts) against a standardized computer in the first study and a complex architectural design task in the second. In a regression analysis using both the individual intelligence of group members and c to predict performance on the criterion tasks, c had a significant effect, but average and maximum individual intelligence did not. While average (r=0.15, P=0.04) and maximum intelligence (r=0.19, P=0.008) of individual group members were moderately correlated with c, c was still a much better predictor of the criterion tasks. According to Woolley et al., this supports the existence of a collective intelligence factor c, because it demonstrates an effect over and beyond group members' individual intelligence, and thus that c is more than just the aggregation of individual IQs or the influence of the group member with the highest IQ.

Engel et al. (2014) replicated Woolley et al.'s findings using an accelerated battery of tasks, with the first factor in the factor analysis explaining 49% of the between-group variance in performance and the following factors explaining less than half of this amount. Moreover, they found a similar result for groups working together online, communicating only via text, and confirmed the role of female proportion and social sensitivity in causing collective intelligence in both cases. Similarly to Woolley et al., they measured social sensitivity with the RME, which is actually meant to measure people's ability to detect mental states in other people's eyes. The online collaborating participants, however, neither knew nor saw each other at all. The authors conclude that scores on the RME must be related to a broader set of social reasoning abilities than merely drawing inferences from other people's eye expressions.

A collective intelligence factor c in the sense of Woolley et al. was further found in groups of MBA students working together over the course of a semester, in online gaming groups as well as in groups from different cultures and groups in different contexts in terms of short-term versus long-term groups. None of these investigations considered team members' individual intelligence scores as control variables.

Note as well that the field of collective intelligence research is quite young and published empirical evidence is still relatively scarce. However, various proposals and working papers are in progress or already completed but (supposedly) still undergoing scholarly peer review.

Predictive validity

Individual intelligence can be used to predict a wide range of life outcomes, from school attainment and career success to health outcomes and even mortality. Whether collective intelligence is able to predict other outcomes besides group performance on mental tasks remains to be investigated.

Potential connections to individual intelligence

Gladwell (2008) showed that the relationship between individual IQ and success works only to a certain point and that additional IQ points over an estimated IQ of 120 do not translate into real-life advantages. Whether a similar threshold exists for Group-IQ, or whether the advantages are linear and unbounded, remains to be explored. Similarly, further research is needed on possible connections between individual and collective intelligence in many other potentially transferable respects, such as development over time or the question of improving intelligence. Whereas it is controversial whether human intelligence can be enhanced via training, a group's collective intelligence potentially offers simpler opportunities for improvement, by exchanging team members or implementing structures and technologies. Moreover, social sensitivity was found to be, at least temporarily, improvable by reading literary fiction and watching drama movies. To what extent such training ultimately improves collective intelligence through social sensitivity remains an open question.

There are further, more advanced concepts and factor models attempting to explain individual cognitive ability, including the categorization of intelligence into fluid and crystallized intelligence and the hierarchical model of intelligence differences. Comparable supplementary explanations and conceptualizations of the factor structure of collective intelligence beyond a general c factor, though, are still missing.

Controversies

Other scholars explain team performance by aggregating team members' general intelligence to the team level instead of building an overall collective intelligence measure of their own. Devine and Philips (2001) showed in a meta-analysis that mean cognitive ability predicts team performance in laboratory settings (0.37) as well as field settings (0.14); note that this is only a small effect. Suggesting a strong dependence on the relevant tasks, other scholars showed that tasks requiring a high degree of communication and cooperation are most influenced by the team member with the lowest cognitive ability, while tasks in which selecting the best team member is the most successful strategy are most influenced by the member with the highest cognitive ability.

Since Woolley et al.'s results do not show any influence of group satisfaction, group cohesiveness, or motivation, they, at least implicitly, challenge the importance of these concepts for group performance in general, and thus contrast with meta-analytic evidence concerning the positive effects of group cohesion, motivation and satisfaction on group performance.

It is also noteworthy that the researchers behind the confirming findings overlap substantially with one another and with the authors of the original study led by Anita Woolley.

Alternative mathematical techniques

Computational collective intelligence

Computational Collective Intelligence, by Tadeusz Szuba

In 2001, Tadeusz (Tad) Szuba from the AGH University in Poland proposed a formal model for the phenomenon of collective intelligence. It is assumed to be an unconscious, random, parallel, and distributed computational process, run in mathematical logic by the social structure.

In this model, beings and information are modeled as abstract information molecules carrying expressions of mathematical logic. These molecules displace quasi-randomly, as a result of their interactions with their environments combined with their intended displacements. Their interaction in abstract computational space creates a multi-thread inference process which we perceive as collective intelligence. Thus, a non-Turing model of computation is used. This theory allows a simple formal definition of collective intelligence as a property of social structure, and it seems to work well for a wide spectrum of beings, from bacterial colonies up to human social structures. Collective intelligence considered as a specific computational process provides a straightforward explanation of several social phenomena. For this model of collective intelligence, a formal definition of IQS (IQ Social) was proposed: "the probability function over the time and domain of N-element inferences which are reflecting inference activity of the social structure". While IQS seems to be computationally hard, modeling the social structure as a computational process, as described above, offers a chance of approximation. Prospective applications are the optimization of companies through the maximization of their IQS, and the analysis of drug resistance against the collective intelligence of bacterial colonies.

Collective intelligence quotient

One measure sometimes applied, especially by theorists with more of an artificial intelligence focus, is a "collective intelligence quotient" (or "cooperation quotient"), which can be normalized from the "individual" intelligence quotient (IQ). This makes it possible to determine the marginal intelligence added by each new individual participating in the collective action, using metrics to avoid the hazards of groupthink and collective stupidity.

Applications

There have been many recent applications of collective intelligence, including in fields such as crowd-sourcing, citizen science and prediction markets. The Nesta Centre for Collective Intelligence Design was launched in 2018 and has produced many surveys of applications as well as funding experiments. In 2020 the UNDP Accelerator Labs began using collective intelligence methods in their work to accelerate innovation for the Sustainable Development Goals.

Elicitation of point estimates

Here, the goal is to get an estimate (as a single value) of something: for example, the weight of an object, the release date of a product, or the probability of success of a project. This pattern is seen in prediction markets like Intrade, HSX or InklingMarkets, and also in several implementations of crowdsourced estimation of a numeric outcome, such as the Delphi method. Essentially, we try to get the average value of the estimates provided by the members of the crowd.
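A minimal sketch of this kind of aggregation (the estimates are hypothetical), showing the median alongside the mean because it is more robust to the occasional wild guess:

    import statistics

    # Hypothetical crowd estimates of an object's weight, in kilograms.
    estimates = [72, 65, 80, 70, 68, 90, 74, 66, 71, 300]

    print(statistics.mean(estimates))    # 95.6, pulled upward by the outlier
    print(statistics.median(estimates))  # 71.5, stays near the bulk of the crowd

Prediction markets implement a more sophisticated version of the same idea, letting prices rather than simple averages carry the aggregate estimate.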

Opinion aggregation

In this situation, opinions are gathered from the crowd regarding an idea, issue or product. For example, trying to get a rating (on some scale) of a product sold online (such as Amazon's star rating system). Here, the emphasis is to collect and simply aggregate the ratings provided by customers/users.

A similar approach is used in political science, where the opinions collected from different media such as Facebook, Twitter, Twitter Sentiment, YouTube, Google are aggregated via simple averaging or factor analysis to study or predict elections such as Obama's in 2012 or Trump's in 2016. Opinion aggregation is also the basis of other election prediction studies, which adopt this approach to assess the forecasting accuracy of groups consisting of political scientists, journalists, citizens, and the wider public.

Idea Collection

In these problems, someone solicits ideas for projects, designs or solutions from the crowd. For example, ideas on solving a data science problem (as in Kaggle) or getting a good design for a T-shirt (as in Threadless) or in getting answers to simple problems that only humans can do well (as in Amazon's Mechanical Turk). The objective is to gather the ideas and devise some selection criteria to choose the best ideas.

James Surowiecki divides the advantages of disorganized decision-making into three main categories, which are cognition, cooperation and coordination.

Cognition

Market judgment

Because of the Internet's ability to rapidly convey large amounts of information throughout the world, the use of collective intelligence to predict stock prices and stock price direction has become increasingly viable. Websites aggregate stock market information that is as current as possible so that professional and amateur stock analysts can publish their viewpoints, enabling amateur investors to submit their financial opinions and create an aggregate opinion. The opinions of all investors can be weighed equally so that a pivotal premise of the effective application of collective intelligence can be applied: the masses, including a broad spectrum of stock market expertise, can be utilized to more accurately predict the behavior of financial markets.

Collective intelligence underpins the efficient-market hypothesis of Eugene Fama – although the term collective intelligence is not used explicitly in his paper. Fama cites research conducted by Michael Jensen in which 89 out of 115 selected funds underperformed relative to the index during the period from 1955 to 1964. But after removing the loading charge (up-front fee) only 72 underperformed while after removing brokerage costs only 58 underperformed. On the basis of such evidence index funds became popular investment vehicles using the collective intelligence of the market, rather than the judgement of professional fund managers, as an investment strategy.

Predictions in politics and technology

Voting methods used in the United States 2016

Political parties mobilize large numbers of people to form policy, select candidates, and finance and run election campaigns. Knowledge focusing through various voting methods allows perspectives to converge through the assumption that uninformed voting is to some degree random and can be filtered from the decision process leaving only a residue of informed consensus. Critics point out that often bad ideas, misunderstandings, and misconceptions are widely held, and that structuring of the decision process must favor experts who are presumably less prone to random or misinformed voting in a given context.

Companies such as Affinnova (acquired by Nielsen), Google, InnoCentive, Marketocracy, and Threadless have successfully employed the concept of collective intelligence in bringing about the next generation of technological changes through their research and development (R&D), customer service, and knowledge management. An example of such application is Google's Project Aristotle in 2012, where the effect of collective intelligence on team makeup was examined in hundreds of the company's R&D teams.

Cooperation

Networks of trust

Application of collective intelligence in the Millennium Project

In 2012, the Global Futures Collective Intelligence System (GFIS) was created by The Millennium Project, which epitomizes collective intelligence as the synergistic intersection of data/information/knowledge, software/hardware, and expertise/insights, with a recursive learning process supporting better decision-making than the individual players could achieve alone.

New media are often associated with the promotion and enhancement of collective intelligence. The ability of new media to easily store and retrieve information, predominantly through databases and the Internet, allows information to be shared without difficulty. Thus, through interaction with new media, knowledge easily passes between sources, resulting in a form of collective intelligence. The use of interactive new media, particularly the internet, promotes online interaction and this distribution of knowledge between users.

Francis Heylighen, Valentin Turchin, and Gottfried Mayer-Kress are among those who view collective intelligence through the lens of computer science and cybernetics. In their view, the Internet enables collective intelligence at the widest, planetary scale, thus facilitating the emergence of a global brain.

The developer of the World Wide Web, Tim Berners-Lee, aimed to promote sharing and publishing of information globally. Later his employer opened up the technology for free use. In the early '90s, the Internet's potential was still untapped, until the mid-1990s, when 'critical mass', as termed by the head of the Advanced Research Project Agency (ARPA), Dr. J.C.R. Licklider, demanded more accessibility and utility. The driving force of this Internet-based collective intelligence is the digitization of information and communication. Henry Jenkins, a key theorist of new media and media convergence, draws on the theory that collective intelligence can be attributed to media convergence and participatory culture. He criticizes contemporary education for failing to incorporate online trends of collective problem solving into the classroom, stating that "whereas a collective intelligence community encourages ownership of work as a group, schools grade individuals". Jenkins argues that interaction within a knowledge community builds vital skills for young people, and that teamwork through collective intelligence communities contributes to the development of such skills. Collective intelligence is not merely a quantitative contribution of information from all cultures; it is also qualitative.

Lévy and de Kerckhove consider CI from a mass communications perspective, focusing on the ability of networked information and communication technologies to enhance the community knowledge pool. They suggest that these communications tools enable humans to interact and to share and collaborate with both ease and speed. With the development of the Internet and its widespread use, the opportunity to contribute to knowledge-building communities, such as Wikipedia, is greater than ever before. These computer networks give participating users the opportunity to store and to retrieve knowledge through collective access to these databases, allowing them to "harness the hive". Researchers at the MIT Center for Collective Intelligence research and explore the collective intelligence of groups of people and computers.

In this context collective intelligence is often confused with shared knowledge. The former is the sum total of information held individually by members of a community, while the latter is information that is believed to be true and known by all members of the community. Collective intelligence, as represented by Web 2.0, has less user engagement than collaborative intelligence. An art project using Web 2.0 platforms is "Shared Galaxy", an experiment developed by an anonymous artist to create a collective identity that shows up as one person on several platforms like MySpace, Facebook, YouTube and Second Life. The password is written in the profiles, and the accounts named "Shared Galaxy" are open to be used by anyone; in this way, many take part in being one. Another art project using collective intelligence to produce artistic work is Curatron, where a large group of artists together decides on a smaller group that they think would make a good collaborative group; the process is based on an algorithm computing the collective preferences. In creating what he calls 'CI-Art', Nova Scotia-based artist Mathew Aldred follows Pierre Lévy's definition of collective intelligence. Aldred's CI-Art event in March 2016 involved over four hundred people from the community of Oxford, Nova Scotia, and internationally. Later work developed by Aldred used the UNU swarm intelligence system to create digital drawings and paintings. The Oxford Riverside Gallery (Nova Scotia) held a public CI-Art event in May 2016, which connected with online participants internationally.

Parenting social network and collaborative tagging as pillars for automatic IPTV content blocking system

In social bookmarking (also called collaborative tagging), users assign tags to resources shared with other users, which gives rise to a type of information organisation that emerges from this crowdsourcing process. The resulting information structure can be seen as reflecting the collective knowledge (or collective intelligence) of a community of users and is commonly called a "Folksonomy", and the process can be captured by models of collaborative tagging.

Recent research using data from the social bookmarking website Delicious has shown that collaborative tagging systems exhibit a form of complex-systems (or self-organizing) dynamics. Although there is no centrally controlled vocabulary to constrain the actions of individual users, the distributions of tags that describe different resources have been shown to converge over time to stable power-law distributions. Once such stable distributions form, the correlations between different tags can be used to construct simple folksonomy graphs, which can be efficiently partitioned to obtain a form of community or shared vocabularies. Such vocabularies can be seen as a form of collective intelligence, emerging from the decentralised actions of a community of users. The Wall-it Project is also an example of social bookmarking.
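A minimal sketch of how a folksonomy graph can be derived from raw tagging data (the tag sets below are hypothetical): count how often each tag is used, then weight edges between tags by how often they co-occur on the same resource.

    from collections import Counter
    from itertools import combinations

    # Each entry is the tag set one user assigned to one resource.
    taggings = [
        {"python", "programming", "tutorial"},
        {"python", "data", "science"},
        {"programming", "tutorial"},
        {"python", "programming"},
    ]

    # Tag frequencies: in large systems these distributions are the ones
    # observed to converge toward a power law.
    tag_counts = Counter(tag for tags in taggings for tag in tags)

    # Folksonomy graph: an edge per co-occurring tag pair, weighted by
    # co-occurrence count; partitioning it yields shared vocabularies.
    edges = Counter()
    for tags in taggings:
        for pair in combinations(sorted(tags), 2):
            edges[pair] += 1

    print(tag_counts.most_common(3))
    print(edges.most_common(3))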

P2P business

Research performed by Tapscott and Williams has provided a few examples of the benefits of collective intelligence to business:

Talent utilization
At the rate technology is changing, no firm can fully keep up with the innovations needed to compete. Instead, smart firms are drawing on the power of mass collaboration to involve the participation of people they could not employ. This also helps generate continual interest in the firm, in the form of those drawn to new idea creation as well as investment opportunities.
Demand creation
Firms can create a new market for complementary goods by engaging in an open-source community. Firms are also able to expand into new fields that they previously would not have been able to enter without the addition of resources and collaboration from the community. This creates, as mentioned before, a new market for complementary goods for the products in said new fields.
Costs reduction
Mass collaboration can help to reduce costs dramatically. Firms can release a specific software product to be evaluated or debugged by online communities. The results are more refined, robust and error-free products created in a short amount of time and at lower cost. New ideas can also be generated and explored through the collaboration of online communities, creating opportunities for free R&D outside the confines of the company.

Open-source software

Cultural theorist and online community developer John Banks considered the contribution of online fan communities in the creation of the Trainz product. He argued that its commercial success was fundamentally dependent upon "the formation and growth of an active and vibrant online fan community that would both actively promote the product and create content – extensions and additions to the game software".

The increase in user created content and interactivity gives rise to issues of control over the game itself and ownership of the player-created content. This gives rise to fundamental legal issues, highlighted by Lessig and Bray and Konsynski, such as intellectual property and property ownership rights.

Gosney extends this issue of collective intelligence in videogames one step further in his discussion of alternate reality gaming. He describes this genre as an "across-media game that deliberately blurs the line between the in-game and out-of-game experiences", as events that happen outside the game reality "reach out" into the players' lives in order to bring them together. Solving the game requires "the collective and collaborative efforts of multiple players"; thus the issue of collective and collaborative team play is essential to ARGs. Gosney argues that the alternate reality genre of gaming dictates an unprecedented level of collaboration and "collective intelligence" in order to solve the mystery of the game.

Benefits of co-operation

Co-operation helps to solve the most important and most interesting cross-disciplinary problems. In his book, James Surowiecki mentioned that most scientists think the benefits of co-operation have much more value than the potential costs. Co-operation also works because, at its best, it guarantees a number of different viewpoints. Thanks to the possibilities of technology, global co-operation is nowadays far easier and more productive than before. It is clear that, when co-operation scales from the university level to the global level, it has significant benefits.

For example, why do scientists co-operate? Science has become more and more specialized, each field has expanded, and it is impossible for one person to be aware of all developments. This is true especially in experimental research, where highly advanced equipment requires special skills. With co-operation, scientists can use information from different fields effectively, instead of gathering all the information by reading on their own.

Coordination

Ad-hoc communities

Military, trade unions, and corporations satisfy some definitions of CI – the most rigorous definition would require a capacity to respond to very arbitrary conditions without orders or guidance from "law" or "customers" to constrain actions. Online advertising companies are using collective intelligence to bypass traditional marketing and creative agencies.

The UNU open platform for "human swarming" (or "social swarming") establishes real-time closed-loop systems around groups of networked users molded after biological swarms, enabling human participants to behave as a unified collective intelligence. When connected to UNU, groups of distributed users collectively answer questions and make predictions in real time. Early testing shows that human swarms can out-predict individuals. In 2016, a UNU swarm was challenged by a reporter to predict the winners of the Kentucky Derby, and successfully picked the first four horses, in order, beating 540 to 1 odds.

Specialized information sites such as Digital Photography Review or Camera Labs are examples of collective intelligence. Anyone who has access to the internet can contribute to distributing their knowledge all over the world through such specialized information sites.

In learner-generated contexts a group of users marshal resources to create an ecology that meets their needs, often (but not only) in relation to the co-configuration, co-creation and co-design of a particular learning space that allows learners to create their own context. Learner-generated contexts represent an ad hoc community that facilitates coordination of collective action in a network of trust. An example of a learner-generated context is found on the Internet when collaborative users pool knowledge in a "shared intelligence space". As the Internet has developed, so has the concept of CI as a shared public forum. The global accessibility and availability of the Internet has allowed more people than ever to contribute and access ideas.

Games such as The Sims series and Second Life are designed to be non-linear and to depend on collective intelligence for expansion. This way of sharing is gradually evolving and influencing the mindset of the current and future generations. For them, collective intelligence has become a norm. In his discussion of 'interactivity' in the online games environment – the ongoing interactive dialogue between users and game developers – Terry Flew refers to Pierre Lévy's concept of collective intelligence and argues it is active in videogames, as clans or guilds in MMORPGs constantly work to achieve goals. Henry Jenkins proposes that the participatory cultures emerging between games producers, media companies, and the end-users mark a fundamental shift in the nature of media production and consumption. Jenkins argues that this new participatory culture arises at the intersection of three broad new media trends: firstly, the development of new media tools/technologies enabling the creation of content; secondly, the rise of subcultures promoting such creations; and lastly, the growth of value-adding media conglomerates, which foster image, idea and narrative flow.

Coordinating collective actions

The cast of After School Improv learns an important lesson about improvisation and life.

Improvisational actors also experience a type of collective intelligence which they term "group mind", as theatrical improvisation relies on mutual cooperation and agreement, leading to the unity of "group mind".

Growth of the Internet and mobile telecom has also produced "swarming" or "rendezvous" events that enable meetings or even dates on demand. The full impact has yet to be felt, but the anti-globalization movement, for example, relies heavily on e-mail, cell phones, pagers, SMS and other means of organizing. The Indymedia organization does this in a more journalistic way. Such resources could combine into a form of collective intelligence accountable only to the current participants yet with some strong moral or linguistic guidance from generations of contributors – or even take on a more obviously democratic form to advance a shared goal.

A further application of collective intelligence is found in "Community Engineering for Innovations". In such an integrated framework, proposed by Ebner et al., idea competitions and virtual communities are combined to better realize the potential of the collective intelligence of the participants, particularly in open-source R&D. In management theory the use of collective intelligence and crowdsourcing leads to innovations and very robust answers to quantitative issues. Collective intelligence and crowdsourcing do not necessarily lead to the best solution to economic problems, but to a stable, good solution.

Coordination in different types of tasks

Collective actions or tasks require different amounts of coordination depending on the complexity of the task. Tasks vary from highly independent simple tasks that require very little coordination to complex interdependent tasks that are built by many individuals and require a lot of coordination. In their article, Kittur, Lee and Kraut introduce a problem in cooperation: "When tasks require high coordination because the work is highly interdependent, having more contributors can increase process losses, reducing the effectiveness of the group below what individual members could optimally accomplish". When a team is too large, overall effectiveness may suffer even though the extra contributors increase the resources; in the end, the overall costs of coordination might overwhelm the other benefits.
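The intuition can be made concrete with a Brooks's-law-style count (an illustration, not Kittur, Lee and Kraut's model): potential coordination links grow quadratically with team size, while the resources contributed grow only linearly.

    def coordination_links(n: int) -> int:
        # Pairwise communication channels in a team of n members.
        return n * (n - 1) // 2

    # Members grow linearly; links that may need tending grow quadratically.
    for n in (3, 5, 10, 20, 50):
        print(n, coordination_links(n))
    # 3->3, 5->10, 10->45, 20->190, 50->1225

Once each link carries even a small ongoing cost, a large enough team spends more capacity on coordination than the added members contribute.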

Group collective intelligence is a property that emerges through coordination from both bottom-up and top-down processes. In a bottom-up process the different characteristics of each member are involved in contributing and enhancing coordination. Top-down processes are more strict and fixed with norms, group structures and routines that in their own way enhance the group's collective work.

Alternative views

A tool for combating self-preservation

Tom Atlee reflects that, although humans have an innate ability to gather and analyze data, they are affected by culture, education and social institutions. A single person tends to make decisions motivated by self-preservation. Therefore, without collective intelligence, humans may drive themselves into extinction based on their selfish needs.

Separation from IQism

Phillip Brown and Hugh Lauder quote Bowles and Gintis (1976): in order to truly define collective intelligence, it is crucial to separate 'intelligence' from IQism. They go on to argue that intelligence is an achievement and can only be developed if allowed to be. For example, groups from the lower levels of society were historically severely restricted from aggregating and pooling their intelligence, because the elites feared that collective intelligence would convince the people to rebel. Without such capacity and relations, there is no infrastructure on which collective intelligence can be built. This reflects how powerful collective intelligence can be if left to develop.

Artificial intelligence views

Skeptics, especially those critical of artificial intelligence and more inclined to believe that risk of bodily harm and bodily action are the basis of all unity between people, are more likely to emphasize the capacity of a group to take action and withstand harm as one fluid mass mobilization, shrugging off harms the way a body shrugs off the loss of a few cells. This train of thought is most obvious in the anti-globalization movement and characterized by the works of John Zerzan, Carol Moore, and Starhawk, who typically shun academics. These theorists are more likely to refer to ecological and collective wisdom and to the role of consensus process in making ontological distinctions than to any form of "intelligence" as such, which they often argue does not exist, or is mere "cleverness".

Harsh critics of artificial intelligence on ethical grounds are likely to promote collective wisdom-building methods, such as the new tribalists and the Gaians. Whether these can be said to be collective intelligence systems is an open question. Some, e.g. Bill Joy, simply wish to avoid any form of autonomous artificial intelligence and seem willing to work on rigorous collective intelligence in order to remove any possible niche for AI.

In contrast to these views, platforms such as Amazon Mechanical Turk and CrowdFlower use collective intelligence and crowdsourcing or consensus-based assessment to collect the enormous amounts of data needed by machine learning algorithms.

Monday, February 19, 2024

World Brain

From Wikipedia, the free encyclopedia
First edition (publ. Methuen)

World Brain is a collection of essays and addresses by the English science fiction pioneer, social reformer, evolutionary biologist and historian H. G. Wells, dating from the period of 1936–1938. Throughout the book, Wells describes his vision of the World Brain: a new, free, synthetic, authoritative, permanent "World Encyclopaedia" that could help world citizens make the best use of universal information resources and make the best contribution to world peace.

Background

Plans for creating a global knowledge network long predate Wells. Andrew Michael Ramsay described, c. 1737, an objective of freemasonry as follows:

... to furnish the materials for a Universal Dictionary ... By this means the lights of all nations will be united in one single work, which will be a universal library of all that is beautiful, great, luminous, solid, and useful in all the sciences and in all noble arts. This work will augment in each century, according to the increase of knowledge.

The Encyclopedist movement in France in the mid-eighteenth century was a major attempt to actualize this philosophy. However, efforts to encompass all knowledge came to seem less possible as the available corpus expanded exponentially.

In 1926, extending the analogy between global telegraphy and the nervous system, Nikola Tesla speculated that:

When wireless is perfectly applied the whole earth will be converted into a huge brain … Not only this, but through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.

Paul Otlet, a contemporary of Wells and information science pioneer, revived this movement in the twentieth century. Otlet wrote in 1935, "Man would no longer need documentation if he were assimilated into a being that has become omniscient, in the manner of God himself." Otlet, like Wells, supported the internationalist efforts of the League of Nations and its International Institute of Intellectual Cooperation.

For his part, Wells had advocated world government for at least a decade, arguing in such books as The Open Conspiracy for control of education by a scientific elite.

Synopsis

In the wake of the First World War, Wells believed that people needed to become more educated and conversant with the events and knowledge that surrounded them. To this end he offered the idea of the World Brain: a knowledge system that all humans could access.

World Encyclopedia

This section, Wells's first expression of his dream of a World Brain, was delivered as a lecture at the Royal Institution of Great Britain, Weekly Evening Meeting, Friday, 20 November 1936.

Wells begins the lecture with a statement on his preference for cohesive worldviews rather than isolated facts. Correspondingly, he wishes the world to be such a whole "as coherent and consistent as possible". He mentions The Work, Wealth and Happiness of Mankind (1931), one of his own attempts at providing intellectual synthesis, and calls it disappointingly unmatched.

He expresses dismay at the ignorance of social science among the Treaty of Versailles and League of Nations framers. He mentions some recent works on the role of science in society and states his main problem as follows:

We want the intellectual worker to become a more definitely organised factor in the human scheme. How is that factor to be organised? Is there any way of implementing knowledge for ready and universal effect?

In answer he introduces the doctrine of New Encyclopaedism as a framework for integrating intellectuals into an organic whole. For the ordinary man, who will necessarily be an educated citizen in the modern state:

From his point of view the World Encyclopaedia would be a row of volumes in his own home or in some neighbouring house or in a convenient public library or in any school or college, and in this row of volumes he would, without any great toil or difficulty, find in clear understandable language, and kept up to date, the ruling concepts of our social order, the outlines and main particulars in all fields of knowledge, an exact and reasonably detailed picture of our universe, a general history of the world, and if by any chance he wanted to pursue a question into its ultimate detail, a trustworthy and complete system of reference to primary sources of knowledge. In fields where wide varieties of method and opinion existed, he would find, not casual summaries of opinions, but very carefully chosen and correlated statements and arguments. [...] This World Encyclopaedia would be the mental background of every intelligent man in the world. It would be alive and growing and changing continually under revision, extension and replacement from the original thinkers in the world everywhere. Every university and research institution should be feeding it. Every fresh mind should be brought into contact with its standing editorial organisation. And on the other hand its contents would be the standard source of material for the instructional side of school and college work, for the verification of facts and the testing of statements—everywhere in the world. Even journalists would deign to use it; even newspaper proprietors might be made to respect it.

Such an encyclopedia would be akin to a secular bible. Universal acceptance would be possible due to the underlying similarity of human brains. For specialists and intellectuals, the World Encyclopedia will provide valuable coordination with other intellectuals working in similar areas.

Wells calls for the formation of an Encyclopaedia Society to promote the project and defend it from exploitation (e.g. by an "enterprising publisher" trying to profit from it). This society would also organize departments for production. Of course, the existence of a society has its own risks:

And there will be a constant danger that some of the early promoters may feel and attempt to realise a sort of proprietorship in the organisation, to make a group or a gang of it. But to recognise that danger is half-way to averting it.

The language of the World Encyclopedia would be English because of its greater range, precision, and subtlety.

Intellectual workers across the world would be increasingly bound together through their participation.

Wells wishes that wise world citizens would ensure world peace. He suggests that a world intellectual project will have more positive impact to this end than will any political movement such as communism, fascism, imperialism, pacifism, etc.

He ended his lecture as follows:

[W]hat I am saying ... is this, that without a World Encyclopaedia to hold men's minds together in something like a common interpretation of reality, there is no hope whatever of anything but an accidental and transitory alleviation of any of our world troubles.

The Brain Organization of the Modern World

This section was first delivered as a lecture in America, October and November 1937.

This lecture promotes the doctrine of New Encyclopaedism described previously. Wells begins with the observation that the world has become a single interconnected community due to the enormously increased speed of telecommunications. Secondly, he says that energy is available on a new scale, enabling, among other things, the capability for mass destruction. Consequently, the establishment of a new world order is imperative:

One needs an exceptional stupidity even to question the urgency we are under to establish some effective World Pax, before gathering disaster overwhelms us. The problem of reshaping human affairs on a world-scale, this World problem, is drawing together an ever-increasing multitude of minds.

Neither Christianity nor socialism can solve the World Problem. The solution is a modernized "World Knowledge Apparatus"—the World Encyclopedia—"a sort of mental clearing house for the mind, a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared".[1]: 49  Wells thought that technological advances such as microfilm could be used towards this end so that "any student, in any part of the world, will be able to sit with his projector in his own study at his or her convenience to examine any book, any document, in an exact replica".

In this lecture Wells develops the analogy of the encyclopedia to a brain, saying, "it would be a clearing house for universities and research institutions; it would play the role of a cerebral cortex to these essential ganglia".

He mentions the International Committee on Intellectual Cooperation, an advisory branch of the League of Nations, and the 1937 World Congress of Universal Documentation as contemporary forerunners of the world brain.

A Permanent World Encyclopedia

This section was first published in Harper's Magazine, April 1937, and contributed to the new Encyclopédie française, August 1937.

In this essay, Wells explains how current encyclopaedias have failed to adapt to both the growing increase in recorded knowledge and the expansion of people requiring information that was accurate and readily accessible. He asserts that these 19th-century encyclopaedias continue to follow the 18th-century pattern, organisation and scale. "Our contemporary encyclopedias are still in the coach-and-horse phase of development," he argued, "rather than in the phase of the automobile and the aeroplane."

Wells saw the potential for world-altering impacts this technology could bring. He felt that the creation of the encyclopaedia could bring about the peaceful days of the past, "with a common understanding and the conception of a common purpose, and of a commonwealth such as now we hardly dream of".

Wells anticipated the effect and contribution that his World Brain would have on the university system as well. He wanted to see universities contributing to it, helping it grow, and feeding its search for holistic information. "Every university and research institution should be feeding it" (p. 14). Elsewhere Wells wrote: "It would become the logical nucleus of the world's research universities and post-graduate studies." He suggested that the organization he was proposing "would outgrow in scale and influence alike any single university that exists, and it would inevitably take the place of the loose-knit university system of the world in the concentration of research and thought and the direction of the general education of mankind". In fact the new encyclopedism he was advocating was "the only possible method I can imagine, of bringing the universities and research institutions around the world into effective cooperation and creating an intellectual authority sufficient to control and direct collective life". Ultimately the World Encyclopaedia would be "a permanent institution, a mighty super-university, holding together, utilizing and dominating all of the teaching and research organizations at present in existence".

Speech to the Congrès Mondial De La Documentation Universelle

This section provides a brief excerpt of Wells's speech at the World Congress of Universal Documentation, 20 August 1937. He tells the participants directly that they are participating in the creation of a world brain. He says:

I am speaking of a process of mental organisation throughout the world which I believe to be as inevitable as anything can be in human affairs. The world has to pull its mind together, and this is the beginning of its effort. The world is a Phoenix. It perishes in flames and even as it dies it is born again. This synthesis of knowledge is the necessary beginning to the new world.

The Informative Content of Education

This section was delivered as the Presidential Address to the Educational Science Section of the British Association for the Advancement of Science, 2 September 1937.

Wells expresses his dismay at the general state of public ignorance, even among the educated, and suggests that the Educational Science Section focus on the bigger picture:

For this year I suggest we give the questions of drill, skills, art, music, the teaching of languages, mathematics and other symbols, physical, aesthetic, moral and religious training and development, a rest, and that we concentrate on the inquiry: What are we telling young people directly about the world in which they are to live?

He asks how the "irreducible minimum of knowledge" can be imparted to all people within ten years of education—realistically, he says, amounting to 2400 hours of classroom instruction. He suggests minimizing the teaching of names and dates in British history and focusing instead on newly available information about prehistory, early civilisation (without the traditionally heavy emphasis on Palestine and the Israelites), and the broad contours of world history. He suggests better education in geography, with an inventory of the world's natural resources, and a better curriculum in money and economics. He calls for a "modernised type of teacher", better paid, with better equipment, and continually updated training.

Influence

1930s: World Congress of Universal Documentation

One of the stated goals of this Congress, held in Paris, France, in 1937, was to discuss ideas and methods for implementing Wells's ideas of the World Brain. Wells himself gave a lecture at the Congress.

Reginald Arthur Smith extended Wells's ideas in the book A Living Encyclopædia: A Contribution to Mr. Wells's New Encyclopædism (London: Andrew Dakers Ltd., 1941).

1960s: The World Brain as a supercomputer

From World Library to World Brain

In his 1962 book Profiles of the Future, Arthur C. Clarke predicted that the construction of what H. G. Wells called the World Brain would take place in two stages. He identified the first stage as the construction of the World Library, essentially Wells's concept of a universal encyclopaedia accessible to everyone from home via computer terminals; he predicted this phase would be established (at least in the developed countries) by the year 2000. The second stage, the World Brain, would be a superintelligent supercomputer with which humans would interact to solve various world problems, and the World Library would be incorporated into it as a subsection. He suggested that this supercomputer should be installed in the former war rooms of the United States and the Soviet Union once the superpowers had matured enough to co-operate rather than conflict with each other. Clarke predicted that the construction of the World Brain would be completed by the year 2100.

In 1964, Eugene Garfield published an article in the journal Science introducing the Science Citation Index; the article's first sentence invoked Wells's "magnificent, if premature, plea for the establishment of a world information center", and Garfield predicted that the Science Citation Index "is a harbinger of things to come—a forerunner of the World Brain".

1990s: World Wide Web of documents

World Wide Web as a World Brain

Brian R. Gaines in his 1996 paper "Convergence to the Information Highway" saw the World Wide Web as an extension of Wells's "World Brain" that individuals can access using personal computers. In papers published in 1996 and 1997 (that did not cite Wells), Francis Heylighen and Ben Goertzel envisaged the further development of the World Wide Web into a global brain, i.e. an intelligent network of people and computers at the planetary level. The difference between "global brain" and "world brain" is that the latter, as envisaged by Wells, is centrally controlled, while the former is fully decentralised and self-organizing.

In 2001, Doug Schuler, a professor at The Evergreen State College, proposed a worldwide civic intelligence network as the fulfillment of Wells's world brain. As examples he cited Sustainable Seattle and the "Technology Healthy City" project in Seattle.

Wikipedia as a World Brain

A number of commentators have suggested that Wikipedia represents the World Brain as described by Wells. Joseph Reagle has compared Wells's warning about the need to defend the World Encyclopedia from propaganda with Wikipedia's "Neutral Point of View" norm:

In keeping with the universal vision, and anticipating a key Wikipedia norm, H. G. Wells was concerned that his World Brain be an "encyclopedia appealing to all mankind," and therefore it must remain open to corrective criticism, be skeptical of myths (no matter how "venerated") and guard against "narrowing propaganda." This strikes me as similar to the pluralism inherent in the Wikipedia "Neutral Point of View" goal of "representing significant views fairly, proportionately, and without bias."

Infinite monkey theorem

From Wikipedia, the free encyclopedia
A chimpanzee probably not writing Hamlet

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, including the complete works of William Shakespeare. In fact, the monkey would almost surely type every possible finite text an infinite number of times. The theorem can be generalized to state that any sequence of events that has a non-zero probability of happening will almost certainly occur an infinite number of times, given an infinite amount of time or a Universe that is infinite in size.

In this context, "almost surely" is a mathematical term meaning the event happens with probability 1, and the "monkey" is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. Variants of the theorem include multiple and even infinitely many typists, and the target text varies between an entire library and a single sentence.

One of the earliest instances of the use of the "monkey metaphor" is that of French mathematician Émile Borel in 1913, but the first instance may have been even earlier. Jorge Luis Borges traced the history of this idea from Aristotle's On Generation and Corruption and Cicero's De Natura Deorum (On the Nature of the Gods), through Blaise Pascal and Jonathan Swift, up to modern statements with their iconic simians and typewriters. In the early 20th century, Borel and Arthur Eddington used the theorem to illustrate the timescales implicit in the foundations of statistical mechanics.

Solution

Direct proof

There is a straightforward proof of this theorem. As an introduction, recall that if two events are statistically independent, then the probability of both happening equals the product of the probabilities of each one happening independently. For example, if the chance of rain in Moscow on a particular day in the future is 0.4 and the chance of an earthquake in San Francisco on any particular day is 0.00003, then the chance of both happening on the same day is 0.4 × 0.00003 = 0.000012, assuming that they are indeed independent.

Consider the probability of typing the word banana on a typewriter with 50 keys. Suppose that the keys are pressed randomly and independently, meaning that each key has an equal chance of being pressed regardless of what keys had been pressed previously. The chance that the first letter typed is 'b' is 1/50, and the chance that the second letter typed is 'a' is also 1/50, and so on. Therefore, the probability of the first six letters spelling banana is

(1/50) × (1/50) × (1/50) × (1/50) × (1/50) × (1/50) = (1/50)^6 = 1/15,625,000,000.

The result is less than one in 15 billion, but not zero.

From the above, the chance of not typing banana in a given block of 6 letters is 1 − (1/50)^6. Because each block is typed independently, the chance X_n of not typing banana in any of the first n blocks of 6 letters is

X_n = (1 − (1/50)^6)^n.

As n grows, X_n gets smaller. For n = 1 million, X_n is roughly 0.9999, but for n = 10 billion X_n is roughly 0.53 and for n = 100 billion it is roughly 0.0017. As n approaches infinity, the probability X_n approaches zero; that is, by making n large enough, X_n can be made as small as is desired, and the chance of typing banana approaches 100%. Thus, the probability of the word banana appearing at some point in an infinite sequence of keystrokes is equal to one.
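These figures are easy to reproduce. The following Python sketch (an illustration added here, not part of the article) computes the single-block probability and X_n for the values of n quoted above:

```python
# Probability of typing "banana" on a 50-key typewriter, and the chance X_n
# that it never appears in n independent 6-letter blocks.
KEYS = 50
WORD_LEN = 6

p_block = (1 / KEYS) ** WORD_LEN           # chance one block spells "banana"
print(f"one block: {p_block:.3e}")         # ~6.4e-11, i.e. 1 in 15,625,000,000

for n in (10**6, 10**10, 10**11):
    x_n = (1 - p_block) ** n               # chance of *no* match in n blocks
    print(f"n = {n:.0e}:  X_n = {x_n:.4f}")
```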

The same argument applies if we replace one monkey typing n consecutive blocks of text with n monkeys each typing one block (simultaneously and independently). In this case, X_n = (1 − (1/50)^6)^n is the probability that none of the first n monkeys types banana correctly on their first try. Therefore, at least one of infinitely many monkeys will (with probability equal to one) produce a text as quickly as it would be produced by a perfectly accurate human typist copying it from the original.

Infinite strings

This can be stated more generally and compactly in terms of strings, which are sequences of characters chosen from some finite alphabet:

  • Given an infinite string where each character is chosen uniformly at random, any given finite string almost surely occurs as a substring at some position.
  • Given an infinite sequence of infinite strings, where each character of each string is chosen uniformly at random, any given finite string almost surely occurs as a prefix of one of these strings.

Both follow easily from the second Borel–Cantelli lemma. For the second theorem, let E_k be the event that the kth string begins with the given text. Because this has some fixed nonzero probability p of occurring, the E_k are independent, and the below sum diverges,

∑_k P(E_k) = ∑_k p = ∞,

so the probability that infinitely many of the E_k occur is 1. The first theorem is shown similarly; one can divide the random string into nonoverlapping blocks matching the size of the desired text, and make E_k the event where the kth block equals the desired string.
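The first statement can also be checked empirically on a small scale. In the Python sketch below, a 4-letter alphabet and a short target word stand in for the full typewriter and banana (both are assumptions chosen for speed); the target is found in an ever larger share of random strings as their length grows:

```python
import random

random.seed(0)
ALPHABET = "abcd"
TARGET = "abba"

def contains_target(length: int) -> bool:
    """Generate one random string and report whether TARGET occurs in it."""
    text = "".join(random.choice(ALPHABET) for _ in range(length))
    return TARGET in text

for length in (100, 1_000, 10_000):
    hits = sum(contains_target(length) for _ in range(200))
    print(f"length {length}: target found in {hits}/200 trials")
```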

Probabilities

However, for physically meaningful numbers of monkeys typing for physically meaningful lengths of time the results are reversed. If there were as many monkeys as there are atoms in the observable universe typing extremely fast for trillions of times the life of the universe, the probability of the monkeys replicating even a single page of Shakespeare is unfathomably small.

Ignoring punctuation, spacing, and capitalization, a monkey typing letters uniformly at random has a chance of one in 26 of correctly typing the first letter of Hamlet. It has a chance of one in 676 (26 × 26) of typing the first two letters. Because the probability shrinks exponentially, at 20 letters it already has only a chance of one in 26^20 = 19,928,148,895,209,409,152,340,197,376 (almost 2 × 10^28). In the case of the entire text of Hamlet, the probabilities are so vanishingly small as to be inconceivable. The text of Hamlet contains approximately 130,000 letters. Thus there is a probability of one in 3.4 × 10^183,946 of getting the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783.
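Numbers of this size overflow ordinary floating point, so they are easiest to verify with base-10 logarithms. A minimal Python sketch, assuming the 26-letter alphabet and the approximate 130,000-letter length quoted above:

```python
import math

log10_26 = math.log10(26)

# 20 letters: 26**20 is about 10^28.3, i.e. almost 2 x 10^28.
print(f"26**20     ~ 10^{20 * log10_26:.2f}")

# All of Hamlet: 26**130000 is about 10^183,946.
print(f"26**130000 ~ 10^{130000 * log10_26:.0f}")
```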

Even if every proton in the observable universe (which is estimated at roughly 10^80) were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 observable universes made of protonic monkeys. As Kittel and Kroemer put it in their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys, "The probability of Hamlet is therefore zero in any operational sense of an event ...", and the statement that the monkeys must eventually succeed "gives a misleading conclusion about very, very large numbers."

In fact, there is less than a one in a trillion chance that such a universe made of monkeys could type any particular document a mere 79 characters long.

Almost surely

The probability that an infinite randomly generated string of text will contain a particular finite substring is 1. However, this does not mean the substring's absence is "impossible", despite the absence having a prior probability of 0. For example, the immortal monkey could randomly type G as its first letter, G as its second, and G as every single letter thereafter, producing an infinite string of Gs; at no point must the monkey be "compelled" to type anything else. (To assume otherwise implies the gambler's fallacy.) However long a randomly generated finite string is, there is a small but nonzero chance that it will turn out to consist of the same character repeated throughout; this chance approaches zero as the string's length approaches infinity. There is nothing special about such a monotonous sequence except that it is easy to describe; the same fact applies to any nameable specific sequence, such as "RGRGRG" repeated forever, or "a-b-aa-bb-aaa-bbb-...", or "Three, Six, Nine, Twelve…".

If the hypothetical monkey has a typewriter with 90 equally likely keys that include numerals and punctuation, then the first typed keys might be "3.14" (the first three digits of pi) with a probability of (1/90)^4, which is 1/65,610,000. Equally probable is any other string of four characters allowed by the typewriter, such as "GGGG", "mATh", or "q%8e". The probability that 100 randomly typed keys will consist of the first 99 digits of pi (including the separator key), or any other particular sequence of that length, is much lower: (1/90)^100. If the monkey's allotted length of text is infinite, the chance of typing only the digits of pi is 0, which is just as possible (mathematically probable) as typing nothing but Gs (also probability 0).

The same applies to the event of typing a particular version of Hamlet followed by endless copies of itself; or Hamlet immediately followed by all the digits of pi; these specific strings are equally infinite in length, they are not prohibited by the terms of the thought problem, and they each have a prior probability of 0. In fact, any particular infinite sequence the immortal monkey types will have had a prior probability of 0, even though the monkey must type something.

This is an extension of the principle that a finite string of random text has a lower and lower probability of being a particular string the longer it is (though all specific strings are equally unlikely). This probability approaches 0 as the string approaches infinity. Thus, the probability of the monkey typing an endlessly long string, such as all of the digits of pi in order, on a 90-key keyboard is (1/90)^∞, which equals 1/∞ and is essentially 0. At the same time, the probability that the sequence contains a particular subsequence (such as the word MONKEY, or the 12th through 999th digits of pi, or a version of the King James Bible) increases as the total string increases. This probability approaches 1 as the total string approaches infinity, and thus the original theorem is correct.

Correspondence between strings and numbers

In a simplification of the thought experiment, the monkey could have a typewriter with just two keys: 1 and 0. The infinitely long string thus produced would correspond to the binary digits of a particular real number between 0 and 1. Countably infinitely many of the possible strings end in infinite repetitions, which means the corresponding real number is rational. Examples include the strings corresponding to one-third (010101...), five-sixths (11010101...) and five-eighths (1010000...). Only a subset of such real number strings (albeit a countably infinite subset) contains the entirety of Hamlet (assuming that the text is subjected to a numerical encoding, such as ASCII).
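These correspondences can be verified exactly with rational arithmetic. In the Python sketch below (an added illustration, not from the article), each string is described as a finite preamble followed by an infinitely repeating block:

```python
from fractions import Fraction

def repeating_binary(preamble: str, block: str) -> Fraction:
    """Value of 0.<preamble><block><block>... interpreted in binary."""
    scale = Fraction(1, 2 ** len(preamble))
    head = Fraction(int(preamble, 2), 2 ** len(preamble)) if preamble else Fraction(0)
    # A repeating block of length k and integer value v contributes v/(2**k - 1).
    tail = Fraction(int(block, 2), 2 ** len(block) - 1) * scale
    return head + tail

print(repeating_binary("", "01"))     # 1/3  <-> 010101...
print(repeating_binary("11", "01"))   # 5/6  <-> 11010101...
print(repeating_binary("101", "0"))   # 5/8  <-> 1010000...
```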

Meanwhile, there is an uncountably infinite set of strings which do not end in such repetition; these correspond to the irrational numbers. These can be sorted into two uncountably infinite subsets: those which contain Hamlet and those which do not. However, the "largest" subset of all the real numbers is the set of those which not only contain Hamlet, but which contain every other possible string of any length, and with equal distribution of such strings. These irrational numbers are called normal. Because almost all numbers are normal, almost all possible strings contain all possible finite substrings. Hence, the probability of the monkey typing a normal number is 1. The same principles apply regardless of the number of keys from which the monkey can choose; a 90-key keyboard can be seen as a generator of numbers written in base 90.

History

Statistical mechanics

The theorem, in one of the forms in which probabilists now know it, with its "dactylographic" [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both the monkeys and the apes), appeared in Émile Borel's 1913 article "Mécanique Statique et Irréversibilité" (Static mechanics and irreversibility), and in his book Le Hasard in 1914. His "monkeys" are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

The physicist Arthur Eddington drew on Borel's image further in The Nature of the Physical World (1928), writing:

If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys' success is effectively impossible, and it may safely be said that such a process will never happen. It is clear from the context that Eddington is not suggesting that the probability of this happening is worthy of serious consideration. On the contrary, it was a rhetorical illustration of the fact that below certain levels of probability, the term improbable is functionally equivalent to impossible.

Origins and "The Total Library"

In a 1939 essay entitled "The Total Library", Argentine writer Jorge Luis Borges traced the infinite-monkey concept back to Aristotle's Metaphysics. Explaining the views of Leucippus, who held that the world arose through the random combination of atoms, Aristotle notes that the atoms themselves are homogeneous and their possible arrangements only differ in shape, position and ordering. In On Generation and Corruption, the Greek philosopher compares this to the way that a tragedy and a comedy consist of the same "atoms", i.e., alphabetic characters. Three centuries later, Cicero's De natura deorum (On the Nature of the Gods) argued against the Epicurean atomist worldview:

Is it possible for any man to behold these things, and yet imagine that certain solid and individual bodies move by their natural force and gravitation, and that a world so beautifully adorned was made by their fortuitous concourse? He who believes this may as well believe that if a great quantity of the one-and-twenty letters, composed either of gold or any other matter, were thrown upon the ground, they would fall into such order as legibly to form the Annals of Ennius. I doubt whether fortune could make a single verse of them.

Borges follows the history of this argument through Blaise Pascal and Jonathan Swift, then observes that in his own time, the vocabulary had changed. By 1939, the idiom was "that a half-dozen monkeys provided with typewriters would, in a few eternities, produce all the books in the British Museum." (To which Borges adds, "Strictly speaking, one immortal monkey would suffice.") Borges then imagines the contents of the Total Library which this enterprise would produce if carried to its fullest extreme:

Everything would be in its blind volumes. Everything: the detailed history of the future, Aeschylus' The Egyptians, the exact number of times that the waters of the Ganges have reflected the flight of a falcon, the secret and true name of Rome, the encyclopedia Novalis would have constructed, my dreams and half-dreams at dawn on August 14, 1934, the proof of Pierre Fermat's theorem, the unwritten chapters of Edwin Drood, those same chapters translated into the language spoken by the Garamantes, the paradoxes Berkeley invented concerning Time but didn't publish, Urizen's books of iron, the premature epiphanies of Stephen Dedalus, which would be meaningless before a cycle of a thousand years, the Gnostic Gospel of Basilides, the song the sirens sang, the complete catalog of the Library, the proof of the inaccuracy of that catalog. Everything: but for every sensible line or accurate fact there would be millions of meaningless cacophonies, verbal farragoes, and babblings. Everything: but all the generations of mankind could pass before the dizzying shelves – shelves that obliterate the day and on which chaos lies – ever reward them with a tolerable page.

Borges' total library concept was the main theme of his widely read 1941 short story "The Library of Babel", which describes an unimaginably vast library consisting of interlocking hexagonal chambers, together containing every possible volume that could be composed from the letters of the alphabet and some punctuation characters.

Actual monkeys

In 2002, lecturers and students from the University of Plymouth MediaLab Arts course used a £2,000 grant from the Arts Council to study the literary output of real monkeys. They left a computer keyboard in the enclosure of six Celebes crested macaques in Paignton Zoo in Devon, England from May 1 to June 22, with a radio link to broadcast the results on a website.

Not only did the monkeys produce nothing but five pages largely consisting of the letter "S", but the lead male also began striking the keyboard with a stone, and other monkeys followed by urinating and defecating on the machine. Mike Phillips, director of the university's Institute of Digital Arts and Technology (i-DAT), said that the artist-funded project was primarily performance art, and they had learned "an awful lot" from it. He concluded that monkeys "are not random generators. They're more complex than that. ... They were quite interested in the screen, and they saw that when they typed a letter, something happened. There was a level of intention there."

Applications and criticisms

Evolution

Thomas Huxley is sometimes wrongly credited with proposing a variant of the theorem in his debates with Samuel Wilberforce.

In his 1931 book The Mysterious Universe, Eddington's rival James Jeans attributed the monkey parable to a "Huxley", presumably meaning Thomas Henry Huxley. This attribution is incorrect. Today, it is sometimes further reported that Huxley applied the example in a now-legendary debate over Charles Darwin's On the Origin of Species with the Anglican Bishop of Oxford, Samuel Wilberforce, held at a meeting of the British Association for the Advancement of Science at Oxford on 30 June 1860. This story suffers not only from a lack of evidence, but also from the fact that in 1860 the typewriter was not yet commercially available.

Despite the original mix-up, monkey-and-typewriter arguments are now common in debates over evolution. As an example of Christian apologetics, Doug Powell argued that even if a monkey accidentally types the letters of Hamlet, it has failed to produce Hamlet because it lacked the intention to communicate. His parallel implication is that natural laws could not produce the information content in DNA. A more common argument is represented by Reverend John F. MacArthur, who claimed that the genetic mutations necessary to produce a tapeworm from an amoeba are as unlikely as a monkey typing Hamlet's soliloquy, and hence the odds against the evolution of all life are impossible to overcome.

Evolutionary biologist Richard Dawkins employs the typing monkey concept in his book The Blind Watchmaker to demonstrate the ability of natural selection to produce biological complexity out of random mutations. In a simulation experiment Dawkins has his weasel program produce the Hamlet phrase METHINKS IT IS LIKE A WEASEL, starting from a randomly typed parent, by "breeding" subsequent generations and always choosing the closest match from progeny that are copies of the parent, with random mutations. The chance of the target phrase appearing in a single step is extremely small, yet Dawkins showed that it could be produced rapidly (in about 40 generations) using cumulative selection of phrases. The random choices furnish raw material, while cumulative selection imparts information. As Dawkins acknowledges, however, the weasel program is an imperfect analogy for evolution, as "offspring" phrases were selected "according to the criterion of resemblance to a distant ideal target." In contrast, Dawkins affirms, evolution has no long-term plans and does not progress toward some distant goal (such as humans). The weasel program is instead meant to illustrate the difference between non-random cumulative selection, and random single-step selection. In terms of the typing monkey analogy, this means that Romeo and Juliet could be produced relatively quickly if placed under the constraints of a nonrandom, Darwinian-type selection because the fitness function will tend to preserve in place any letters that happen to match the target text, improving each successive generation of typing monkeys.
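The weasel procedure is simple enough to sketch in a few lines of Python. In the illustration below, the offspring count (100) and the per-character mutation rate (0.05) are assumptions chosen for demonstration, not parameters taken from Dawkins:

```python
import random

random.seed(42)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(s: str) -> int:
    """Number of positions at which s matches the target phrase."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    """Copy s, replacing each character with a random one at the given rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    generation += 1
    # Breed 100 mutated offspring and keep the one closest to the target.
    parent = max((mutate(parent) for _ in range(100)), key=score)
print(f"reached target in {generation} generations")
```

Under these assumptions the target phrase is typically reached in well under a hundred generations, in contrast to the roughly 27^28 expected attempts of random single-step typing.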

A different avenue for exploring the analogy between evolution and an unconstrained monkey lies in the problem that the monkey types only one letter at a time, independently of the other letters. Hugh Petrie argues that a more sophisticated setup is required, in his case not for biological evolution but the evolution of ideas:

In order to get the proper analogy, we would have to equip the monkey with a more complex typewriter. It would have to include whole Elizabethan sentences and thoughts. It would have to include Elizabethan beliefs about human action patterns and the causes, Elizabethan morality and science, and linguistic patterns for expressing these. It would probably even have to include an account of the sorts of experiences which shaped Shakespeare's belief structure as a particular example of an Elizabethan. Then, perhaps, we might allow the monkey to play with such a typewriter and produce variants, but the impossibility of obtaining a Shakespearean play is no longer obvious. What is varied really does encapsulate a great deal of already-achieved knowledge.

James W. Valentine, while admitting that the classic monkey's task is impossible, finds that there is a worthwhile analogy between written English and the metazoan genome in this other sense: both have "combinatorial, hierarchical structures" that greatly constrain the immense number of combinations at the alphabet level.

Zipf's law

Zipf's law states that the frequency of a word is a power-law function of its frequency rank:

frequency ∝ 1 / (rank + b)^a,

where a and b are real numbers. Assuming that a monkey is typing randomly, with fixed and nonzero probability of hitting each letter key or white space, the text produced by the monkey follows Zipf's law.
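This result can be illustrated empirically. In the Python sketch below, the alphabet size and text length are arbitrary choices; random typing with a space key produces "words" whose frequency falls off with rank in the stepwise power-law fashion the result describes:

```python
import collections
import random

random.seed(1)
keys = "abc "                      # three letters plus a word-separating space
text = "".join(random.choice(keys) for _ in range(200_000))
counts = collections.Counter(text.split())

for rank, (word, freq) in enumerate(counts.most_common(8), start=1):
    print(f"rank {rank:2d}  word {word!r:6}  frequency {freq}")
```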

Literary theory

R. G. Collingwood argued in 1938 that art cannot be produced by accident, and wrote as a sarcastic aside to his critics,

... some ... have denied this proposition, pointing out that if a monkey played with a typewriter ... he would produce ... the complete text of Shakespeare. Any reader who has nothing to do can amuse himself by calculating how long it would take for the probability to be worth betting on. But the interest of the suggestion lies in the revelation of the mental state of a person who can identify the 'works' of Shakespeare with the series of letters printed on the pages of a book ...

Nelson Goodman took the contrary position, illustrating his point along with Catherine Elgin by the example of Borges' "Pierre Menard, Author of the Quixote",

What Menard wrote is simply another inscription of the text. Any of us can do the same, as can printing presses and photocopiers. Indeed, we are told, if infinitely many monkeys ... one would eventually produce a replica of the text. That replica, we maintain, would be as much an instance of the work, Don Quixote, as Cervantes' manuscript, Menard's manuscript, and each copy of the book that ever has been or will be printed.

In another writing, Goodman elaborates, "That the monkey may be supposed to have produced his copy randomly makes no difference. It is the same text, and it is open to all the same interpretations. ..." Gérard Genette dismisses Goodman's argument as begging the question.

For Jorge J. E. Gracia, the question of the identity of texts leads to a different question, that of author. If a monkey is capable of typing Hamlet, despite having no intention of meaning and therefore disqualifying itself as an author, then it appears that texts do not require authors. Possible solutions include saying that whoever finds the text and identifies it as Hamlet is the author; or that Shakespeare is the author, the monkey his agent, and the finder merely a user of the text. These solutions have their own difficulties, in that the text appears to have a meaning separate from the other agents: What if the monkey operates before Shakespeare is born, or if Shakespeare is never born, or if no one ever finds the monkey's typescript?

Random document generation

The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation.

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on 4 August 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, "VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".

A website entitled The Monkey Shakespeare Simulator, launched on 1 July 2003, contained a Java applet that simulated a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d

Due to processing power limitations, the program used a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detected a match" (that is, the RNG generated a certain value or a value within a certain range), the simulator simulated the match by generating matched text.
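A sketch of that kind of shortcut (an illustration, not the simulator's actual code): with a per-attempt match probability of p = (1/keys)^k, the number of attempts before a k-character match can be drawn directly from its waiting-time distribution instead of being typed out. The keyboard size below is an assumption:

```python
import random

random.seed(7)
KEYS = 32          # assumed keyboard size
k = 6              # length of the prefix to match

p = (1 / KEYS) ** k
# Exponential draw with rate p: a continuous approximation of the geometric
# number of attempts before the first match.
attempts = random.expovariate(p)
print(f"simulated ~{attempts:.3e} attempts before a {k}-character match")
```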

Testing of random-number generators

Questions about the statistics describing how often an ideal monkey is expected to type certain strings translate into practical tests for random-number generators; these range from the simple to the "quite sophisticated". Computer-science professors George Marsaglia and Arif Zaman report that they used to call one such category of tests "overlapping m-tuple tests" in lectures, since they concern overlapping m-tuples of successive elements in a random sequence. But they found that calling them "monkey tests" helped to motivate the idea with students. They published a report on the class of tests and their results for various RNGs in 1993.
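A much simplified sketch of such a test (not Marsaglia and Zaman's exact procedure) counts overlapping 2-tuples of "letters" derived from a generator's output and forms a chi-squared-style statistic against the uniform expectation; their published tests additionally correct for the dependence between overlapping tuples, which this sketch omits:

```python
import random

random.seed(3)
ALPHA = 8                          # size of the reduced alphabet
N = 100_000                        # number of letters drawn from the RNG

letters = [random.randrange(ALPHA) for _ in range(N)]

# Count every overlapping pair (the "words" typed two keys at a time).
counts = [0] * (ALPHA * ALPHA)
for a, b in zip(letters, letters[1:]):
    counts[a * ALPHA + b] += 1

expected = (N - 1) / (ALPHA * ALPHA)
stat = sum((c - expected) ** 2 / expected for c in counts)
print(f"statistic = {stat:.1f} (roughly {ALPHA*ALPHA - 1} expected for a good RNG)")
```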

In popular culture

The infinite monkey theorem and its associated imagery are considered a popular and proverbial illustration of the mathematics of probability, widely known to the general public because of its transmission through popular culture rather than through formal education. The image of literal monkeys rattling away on a set of typewriters lends the theorem an innate humor, which helps its popularity and makes it a popular visual gag.

A quotation attributed to a 1996 speech by Robert Wilensky stated, "We've heard that a million monkeys at a million keyboards could produce the complete works of Shakespeare; now, thanks to the Internet, we know that is not true."

The enduring, widespread popularity of the theorem was noted in the introduction to a 2001 paper, "Monkeys, Typewriters and Networks: The Internet in the Light of the Theory of Accidental Excellence". In 2002, an article in The Washington Post said, "Plenty of people have had fun with the famous notion that an infinite number of monkeys with an infinite number of typewriters and an infinite amount of time could eventually write the works of Shakespeare". In 2003, the previously mentioned Arts Council funded experiment involving real monkeys and a computer keyboard received widespread press coverage. In 2007, the theorem was listed by Wired magazine in a list of eight classic thought experiments.

American playwright David Ives' short one-act play Words, Words, Words, from the collection All in the Timing, pokes fun at the concept of the infinite monkey theorem.

In 2015, Balanced Software released Monkey Typewriter on the Microsoft Store. The software generates random text in the manner of the infinite monkey theorem and searches the output for user-entered phrases. It should not, however, be considered a true-to-life representation of the theorem; it is a practical demonstration rather than a scientific model of random text generation.

Academic discipline

From Wikipedia, the free encyclopedia

An academic discipline or academic field is a subdivision of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part) and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties within colleges and universities to which their practitioners belong. Academic disciplines are conventionally divided into the humanities, including language, art and cultural studies, and the scientific disciplines, such as physics, chemistry, and biology; the social sciences are sometimes considered a third category.

Individuals associated with academic disciplines are commonly referred to as experts or specialists. Others, who may have studied liberal arts or systems theory rather than concentrating in a specific academic discipline, are classified as generalists.

While academic disciplines in and of themselves are more or less focused practices, scholarly approaches such as multidisciplinarity/interdisciplinarity, transdisciplinarity, and cross-disciplinarity integrate aspects from multiple academic disciplines, therefore addressing any problems that may arise from narrow concentration within specialized fields of study. For example, professionals may encounter trouble communicating across academic disciplines because of differences in language, specified concepts, or methodology.

Some researchers believe that academic disciplines may, in the future, be replaced by what is known as Mode 2 or "post-academic science", which involves the acquisition of cross-disciplinary knowledge through the collaboration of specialists from various academic disciplines.

It is also known as a field of study, field of inquiry, research field and branch of knowledge. The different terms are used in different countries and fields.

History of the concept

The University of Paris in 1231 consisted of four faculties: Theology, Medicine, Canon Law and Arts. Educational institutions originally used the term "discipline" to catalog and archive the new and expanding body of information produced by the scholarly community. Disciplinary designations originated in German universities during the beginning of the nineteenth century.

Most academic disciplines have their roots in the mid-to-late-nineteenth century secularization of universities, when the traditional curricula were supplemented with non-classical languages and literatures, social sciences such as political science, economics, sociology and public administration, and natural science and technology disciplines such as physics, chemistry, biology, and engineering.

In the early twentieth century, new academic disciplines such as education and psychology were added. In the 1970s and 1980s, there was an explosion of new academic disciplines focusing on specific themes, such as media studies, women's studies, and Africana studies. Many academic disciplines designed as preparation for careers and professions, such as nursing, hospitality management, and corrections, also emerged in the universities. Finally, interdisciplinary scientific fields of study such as biochemistry and geophysics gained prominence as their contribution to knowledge became widely recognized. Some new disciplines, such as public administration, can be found in more than one disciplinary setting; some public administration programs are associated with business schools (thus emphasizing the public management aspect), while others are linked to the political science field (emphasizing the policy analysis aspect).

As the twentieth century approached, these designations were gradually adopted by other countries and became the accepted conventional subjects. However, these designations differed between various countries. In the twentieth century, the natural science disciplines included: physics, chemistry, biology, geology, and astronomy. The social science disciplines included: economics, politics, sociology, and psychology.

Prior to the twentieth century, categories were broad and general, which was expected due to the lack of interest in science at the time. With rare exceptions, practitioners of science tended to be amateurs and were referred to as "natural historians" and "natural philosophers"—labels that date back to Aristotle—instead of "scientists". Natural history referred to what we now call life sciences and natural philosophy referred to the current physical sciences.

Prior to the twentieth century, few opportunities existed for science as an occupation outside the educational system. Higher education provided the institutional structure for scientific investigation, as well as economic support for research and teaching. Soon, the volume of scientific information rapidly increased and researchers realized the importance of concentrating on smaller, narrower fields of scientific activity. Because of this narrowing, scientific specializations emerged. As these specializations developed, modern scientific disciplines in universities also improved their sophistication. Eventually, academia's identified disciplines became the foundations for scholars of specific specialized interests and expertise.

Functions and criticism

An influential critique of the concept of academic disciplines came from Michel Foucault in his 1975 book, Discipline and Punish. Foucault asserts that academic disciplines originate from the same social movements and mechanisms of control that established the modern prison and penal system in eighteenth-century France, and that this fact reveals essential aspects they continue to have in common: "The disciplines characterize, classify, specialize; they distribute along a scale, around a norm, hierarchize individuals in relation to one another and, if necessary, disqualify and invalidate." (Foucault, 1975/1979, p. 223)

Communities of academic disciplines

Communities of academic disciplines can be found outside academia within corporations, government agencies, and independent organizations, where they take the form of associations of professionals with common interests and specific knowledge. Such communities include corporate think tanks, NASA, and IUPAC. Communities such as these exist to benefit the organizations affiliated with them by providing specialized new ideas, research, and findings.

Nations at various developmental stages will find the need for different academic disciplines during different times of growth. A newly developing nation will likely prioritize government, political matters and engineering over those of the humanities, arts and social sciences. On the other hand, a well-developed nation may be capable of investing more in the arts and social sciences. Communities of academic disciplines would contribute at varying levels of importance during different stages of development.

Interactions

These categories explain how the different academic disciplines interact with one another.

Multidisciplinary

Multidisciplinary knowledge is associated with more than one existing academic discipline or profession.

A multidisciplinary community or project is made up of people from different academic disciplines and professions. These people are engaged in working together as equal stakeholders in addressing a common challenge. A multidisciplinary person is one with degrees from two or more academic disciplines. This one person can take the place of two or more people in a multidisciplinary community. Over time, multidisciplinary work does not typically lead to an increase or a decrease in the number of academic disciplines. One key question is how well the challenge can be decomposed into subparts, and then addressed via the distributed knowledge in the community. The lack of shared vocabulary between people and communication overhead can sometimes be an issue in these communities and projects. If challenges of a particular type need to be repeatedly addressed so that each one can be properly decomposed, a multidisciplinary community can be exceptionally efficient and effective.

There are many examples of a particular idea appearing in several academic disciplines around the same time. One example is the shift from a focus on isolated segments of attention toward sensory awareness of the whole: "an attention to the 'total field'", a "sense of the whole pattern, of form and function as a unity", an "integral idea of structure and configuration". This has happened in art (in the form of cubism), physics, poetry, communication and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from the era of mechanization, which brought sequentiality, to the era of the instant speed of electricity, which brought simultaneity.

Multidisciplinary approaches also encourage people to help shape the innovation of the future. The political dimensions of forming new multidisciplinary partnerships to solve the so-called societal Grand Challenges were presented in the Innovation Union and in the European Framework Programme, the Horizon 2020 operational overlay. Innovation across academic disciplines is considered the pivotal foresight of the creation of new products, systems, and processes for the benefit of all societies' growth and wellbeing. Regional examples such as Biopeople and industry-academia initiatives in translational medicine such as SHARE.ku.dk in Denmark provide evidence of the successful endeavour of multidisciplinary innovation and facilitation of the paradigm shift.

Transdisciplinary

In practice, transdisciplinarity can be thought of as the union of all interdisciplinary efforts. While interdisciplinary teams may be creating new knowledge that lies between several existing disciplines, a transdisciplinary team is more holistic and seeks to relate all disciplines into a coherent whole.

Cross-disciplinary

Cross-disciplinary knowledge is that which explains aspects of one discipline in terms of another. Common examples of cross-disciplinary approaches are studies of the physics of music or the politics of literature.

Bibliometric studies of disciplines

Bibliometrics can be used to map several issues in relation to disciplines, for example, the flow of ideas within and among disciplines (Lindholm-Romantschuk, 1998) or the existence of specific national traditions within disciplines. Scholarly impact and influence of one discipline on another may be understood by analyzing the flow of citations.

The bibliometric approach is described as straightforward because it is based on simple counting. It is also objective, although the quantitative method may not be compatible with qualitative assessment and can therefore be manipulated. Moreover, the number of citations depends on the number of people working in the same domain rather than on the inherent quality or originality of the published results.
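As a minimal illustration of what "simple counting" means here, the Python sketch below tallies citation flows between disciplines from a made-up edge list; actual bibliometric studies derive such pairs from citation databases rather than hand-entered data:

```python
import collections

# Hypothetical (citing discipline, cited discipline) pairs, one per citation.
citations = [
    ("sociology", "economics"),
    ("economics", "sociology"),
    ("physics", "mathematics"),
    ("sociology", "economics"),
]

flow = collections.Counter(citations)
for (citing, cited), n in flow.most_common():
    print(f"{citing} -> {cited}: {n} citation(s)")
```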

Politics of Europe

From Wikipedia, the free encyclopedia ...