
Tuesday, October 2, 2018

Swarm intelligence

From Wikipedia, the free encyclopedia
 
Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.
 
SI systems consist typically of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of "intelligent" global behavior, unknown to the individual agents. Examples in natural systems of SI include ant colonies, bird flocking, animal herding, bacterial growth, fish schooling and microbial intelligence.

The application of swarm principles to robots is called swarm robotics, while 'swarm intelligence' refers to the more general set of algorithms. 'Swarm prediction' has been used in the context of forecasting problems.

Models of swarm behavior

Boids (Reynolds 1987)

Boids is an artificial life program, developed by Craig Reynolds in 1986, which simulates the flocking behaviour of birds. His paper on this topic was published in 1987 in the proceedings of the ACM SIGGRAPH conference. The name "boid" is a shortened form of "bird-oid object", i.e. a bird-like object.

As with most artificial life simulations, Boids is an example of emergent behavior; that is, the complexity of Boids arises from the interaction of individual agents (the boids, in this case) adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows:
  • separation: steer to avoid crowding local flockmates
  • alignment: steer towards the average heading of local flockmates
  • cohesion: steer to move toward the average position (center of mass) of local flockmates
More complex rules can be added, such as obstacle avoidance and goal seeking.
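A minimal Python sketch of these three rules follows; it is illustrative only, and the neighbourhood radius, rule weights and timestep are assumed values rather than Reynolds' original parameters.

    # Minimal 2-D Boids sketch (illustrative only; parameters are assumed).
    import numpy as np

    N, RADIUS, DT = 50, 1.0, 0.1
    W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0

    pos = np.random.uniform(0, 10, (N, 2))   # positions
    vel = np.random.uniform(-1, 1, (N, 2))   # velocities

    def step(pos, vel):
        new_vel = vel.copy()
        for i in range(N):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = (d < RADIUS) & (d > 0)
            if not nbrs.any():
                continue
            sep = np.sum(pos[i] - pos[nbrs], axis=0)   # separation: steer away from crowding
            ali = vel[nbrs].mean(axis=0) - vel[i]      # alignment: match average heading
            coh = pos[nbrs].mean(axis=0) - pos[i]      # cohesion: move toward local centre of mass
            new_vel[i] += DT * (W_SEP * sep + W_ALI * ali + W_COH * coh)
        return pos + DT * new_vel, new_vel

    for _ in range(100):
        pos, vel = step(pos, vel)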

Self-propelled particles (Vicsek et al. 1995)

Self-propelled particles (SPP), also referred to as the Vicsek model, was introduced in 1995 by Vicsek et al. as a special case of the boids model introduced in 1986 by Reynolds. A swarm is modelled in SPP by a collection of particles that move with a constant speed but respond to a random perturbation by adopting at each time increment the average direction of motion of the other particles in their local neighbourhood. SPP models predict that swarming animals share certain properties at the group level, regardless of the type of animals in the swarm. Swarming systems give rise to emergent behaviours which occur at many different scales, some of which are turning out to be both universal and robust. It has become a challenge in theoretical physics to find minimal statistical models that capture these behaviours.
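The SPP update rule can be sketched in a few lines of Python; the box size, interaction radius, speed and noise amplitude below are assumed values chosen only for illustration.

    # Minimal Vicsek-style sketch (assumed parameters): each particle moves at
    # constant speed V0 and adopts the mean heading of neighbours within radius R,
    # plus a uniform random perturbation of width ETA; boundaries are periodic.
    import numpy as np

    N, L, R, V0, ETA, DT = 100, 10.0, 1.0, 0.5, 0.3, 1.0
    pos = np.random.uniform(0, L, (N, 2))
    theta = np.random.uniform(-np.pi, np.pi, N)

    for _ in range(200):
        new_theta = np.empty(N)
        for i in range(N):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = d < R                        # neighbourhood includes the particle itself
            # circular mean of neighbour headings, plus noise
            mean_dir = np.arctan2(np.sin(theta[nbrs]).mean(), np.cos(theta[nbrs]).mean())
            new_theta[i] = mean_dir + np.random.uniform(-ETA / 2, ETA / 2)
        theta = new_theta
        pos = (pos + V0 * DT * np.column_stack((np.cos(theta), np.sin(theta)))) % L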

Metaheuristics

Evolutionary algorithms (EA), particle swarm optimization (PSO), ant colony optimization (ACO) and their variants dominate the field of nature-inspired metaheuristics; these foundational algorithms were published up to roughly the year 2000. A large number of more recent metaphor-inspired metaheuristics have attracted criticism in the research community for hiding a lack of novelty behind an elaborate metaphor. For algorithms published since then, see List of metaphor-based metaheuristics.

Stochastic diffusion search (Bishop 1989)

First published in 1989, stochastic diffusion search (SDS) was the first swarm intelligence metaheuristic. SDS is an agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into multiple independent partial functions. Each agent maintains a hypothesis which is iteratively tested by evaluating a randomly selected partial objective function parameterised by the agent's current hypothesis. In the standard version of SDS such partial function evaluations are binary, resulting in each agent becoming active or inactive. Information on hypotheses is diffused across the population via inter-agent communication. Unlike the stigmergic communication used in ACO, in SDS agents communicate hypotheses via a one-to-one communication strategy analogous to the tandem running procedure observed in Leptothorax acervorum. A positive feedback mechanism ensures that, over time, a population of agents stabilises around the global-best solution. SDS is both an efficient and robust global search and optimisation algorithm, which has been extensively described mathematically. Recent work has involved merging the global search properties of SDS with other swarm intelligence algorithms.
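The test-and-diffuse cycle can be illustrated with a toy string-search example in Python; the problem, population size and iteration count are assumptions for illustration and do not reproduce any published SDS experiment.

    # Toy SDS sketch: agents search for the offset of a pattern inside a text.
    # The partial test checks a single randomly chosen character; inactive agents
    # consult one randomly chosen agent (one-to-one diffusion) and copy its
    # hypothesis if it is active, otherwise pick a fresh random hypothesis.
    import random

    text, pattern = "xxxxxhelloxxxxxx", "hello"
    N_AGENTS, N_ITER = 30, 50
    hyps = [random.randrange(len(text) - len(pattern) + 1) for _ in range(N_AGENTS)]
    active = [False] * N_AGENTS

    for _ in range(N_ITER):
        # Test phase: binary partial evaluation of each agent's hypothesis.
        for i, h in enumerate(hyps):
            j = random.randrange(len(pattern))
            active[i] = (text[h + j] == pattern[j])
        # Diffusion phase: inactive agents consult one randomly chosen agent.
        for i in range(N_AGENTS):
            if not active[i]:
                k = random.randrange(N_AGENTS)
                if active[k]:
                    hyps[i] = hyps[k]
                else:
                    hyps[i] = random.randrange(len(text) - len(pattern) + 1)

    # Most agents should now cluster on offset 5, where "hello" begins.
    print(max(set(hyps), key=hyps.count))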

Ant colony optimization (Dorigo 1992)

Ant colony optimization (ACO), introduced by Dorigo in his doctoral dissertation, is a class of optimization algorithms modeled on the actions of an ant colony. ACO is a probabilistic technique useful in problems that deal with finding better paths through graphs. Artificial 'ants' (simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Natural ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions.
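The pheromone-laying idea can be sketched as a simple Ant System-style loop in Python for a tiny travelling-salesman instance; the distance matrix and parameters are assumed values for illustration, not Dorigo's original settings.

    # Ant System-style sketch for a tiny TSP (illustrative parameters): ants build
    # tours using pheromone (tau) and inverse distance (eta); pheromone then
    # evaporates and is reinforced in proportion to tour quality.
    import random
    import numpy as np

    dist = np.array([[0, 2, 9, 10],
                     [2, 0, 6, 4],
                     [9, 6, 0, 3],
                     [10, 4, 3, 0]], dtype=float)
    n = len(dist)
    tau = np.ones((n, n))                  # pheromone levels
    eta = 1.0 / (dist + np.eye(n))         # heuristic desirability (eye avoids /0 on diagonal)
    ALPHA, BETA, RHO, Q, N_ANTS, N_ITER = 1.0, 2.0, 0.5, 1.0, 10, 50
    best_tour, best_len = None, float("inf")

    for _ in range(N_ITER):
        tours = []
        for _ in range(N_ANTS):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                weights = [(tau[i][j] ** ALPHA) * (eta[i][j] ** BETA) for j in choices]
                tour.append(random.choices(choices, weights=weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1 - RHO)                   # evaporation
        for tour, length in tours:         # reinforcement proportional to quality
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += Q / length
                tau[b][a] += Q / length

    print(best_tour, best_len)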

Particle swarm optimization (Kennedy, Eberhart & Shi 1995)

Particle swarm optimization (PSO) is a global optimization algorithm for dealing with problems in which a best solution can be represented as a point or surface in an n-dimensional space. Hypotheses are plotted in this space and seeded with an initial velocity, as well as a communication channel between the particles. Particles then move through the solution space and are evaluated according to some fitness criterion after each timestep. Over time, particles are accelerated towards those particles within their communication grouping which have better fitness values. The main advantage of such an approach over other global minimization strategies such as simulated annealing is that the large number of members making up the particle swarm makes the technique impressively resilient to the problem of local minima.
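A minimal global-best PSO sketch in Python is shown below; the objective function, coefficients and bounds are assumed purely for illustration.

    # Minimal global-best PSO sketch (assumed parameters), minimising a simple
    # test function; particles accelerate toward their own best and the swarm best.
    import numpy as np

    def fitness(x):                        # illustrative objective: sphere function
        return np.sum(x ** 2, axis=-1)

    N, DIM, N_ITER = 30, 5, 200
    W, C1, C2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients

    pos = np.random.uniform(-5, 5, (N, DIM))
    vel = np.zeros((N, DIM))
    pbest, pbest_val = pos.copy(), fitness(pos)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(N_ITER):
        r1, r2 = np.random.rand(N, DIM), np.random.rand(N, DIM)
        vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
        pos = pos + vel
        val = fitness(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print(gbest, fitness(gbest))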

Applications

Swarm intelligence-based techniques can be used in a number of applications. The U.S. military is investigating swarm techniques for controlling unmanned vehicles. The European Space Agency is considering an orbital swarm for self-assembly and interferometry. NASA is investigating the use of swarm technology for planetary mapping. A 1992 paper by M. Anthony Lewis and George A. Bekey discusses the possibility of using swarm intelligence to control nanobots within the body for the purpose of killing cancer tumors. Conversely, al-Rifaie and Aber have used stochastic diffusion search to help locate tumours. Swarm intelligence has also been applied for data mining.

Ant-based routing

The use of swarm intelligence in telecommunication networks has also been researched, in the form of ant-based routing. This was pioneered separately by Dorigo et al. and Hewlett-Packard in the mid-1990s, with a number of variations since. Essentially, a probabilistic routing table rewards or reinforces the routes successfully traversed by "ants" (small control packets) that flood the network. Reinforcement in the forward direction, in the reverse direction, and in both simultaneously has been researched: backward reinforcement requires a symmetric network and couples the two directions together, while forward reinforcement rewards a route before the outcome is known (akin to paying for the cinema before knowing how good the film is). Because the system behaves stochastically and therefore lacks repeatability, there are large hurdles to commercial deployment. Mobile media and new technologies have the potential to change the threshold for collective action due to swarm intelligence (Rheingold, 2002, p. 175).
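A toy sketch of such probabilistic route reinforcement is given below; it is not any specific protocol (such as AntNet), and the node names, reward rule and evaporation rate are assumptions made for illustration.

    # Illustrative sketch of ant-based route reinforcement: each node keeps, per
    # destination, a probability table over next hops; a returning "ant" reinforces
    # the hops of a successful route in proportion to its quality, and all entries
    # slowly evaporate. Names and parameters are hypothetical.
    import random

    def reinforce(tables, route, dest, quality, rho=0.1):
        """Backward reinforcement of one successfully traversed route."""
        for node, next_hop in zip(route, route[1:]):
            probs = tables[node][dest]
            for hop in probs:                              # evaporation
                probs[hop] *= (1 - rho)
            probs[next_hop] += rho * quality               # reward the hop that was used
            total = sum(probs.values())                    # renormalise to probabilities
            for hop in probs:
                probs[hop] /= total

    def next_hop(tables, node, dest):
        probs = tables[node][dest]
        hops, weights = zip(*probs.items())
        return random.choices(hops, weights=weights)[0]

    # Example: node "A" can reach destination "D" via "B" or "C".
    tables = {"A": {"D": {"B": 0.5, "C": 0.5}},
              "B": {"D": {"D": 1.0}},
              "C": {"D": {"D": 1.0}}}
    reinforce(tables, ["A", "B", "D"], "D", quality=1.0)
    print(tables["A"]["D"])    # probability has shifted toward next hop "B"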

The location of transmission infrastructure for wireless communication networks is an important engineering problem involving competing objectives. A minimal selection of locations (or sites) is required, subject to providing adequate area coverage for users. A very different, ant-inspired swarm intelligence algorithm, stochastic diffusion search (SDS), has been successfully used to provide a general model for this problem, related to circle packing and set covering. It has been shown that SDS can be applied to identify suitable solutions even for large problem instances.

Airlines have also used ant-based routing in assigning aircraft arrivals to airport gates. At Southwest Airlines a software program uses swarm theory, or swarm intelligence—the idea that a colony of ants works better than one alone. Each pilot acts like an ant searching for the best airport gate. "The pilot learns from his experience what's the best for him, and it turns out that that's the best solution for the airline," Douglas A. Lawson explains. As a result, the "colony" of pilots always goes to gates they can arrive at and depart from quickly. The program can even alert a pilot to plane back-ups before they happen. "We can anticipate that it's going to happen, so we'll have a gate available," Lawson says.

Crowd simulation

Artists are using swarm technology as a means of creating complex interactive systems or simulating crowds.

Stanley and Stella in: Breaking the Ice was the first movie to make use of swarm technology for rendering, realistically depicting the movements of groups of fish and birds using the Boids system. Tim Burton's Batman Returns also made use of swarm technology for showing the movements of a group of bats. The Lord of the Rings film trilogy made use of similar technology, known as Massive, during battle scenes. Swarm technology is particularly attractive because it is cheap, robust, and simple.

Airlines have used swarm theory to simulate passengers boarding a plane. Southwest Airlines researcher Douglas A. Lawson used an ant-based computer simulation employing only six interaction rules to evaluate boarding times using various boarding methods (Miller, 2010, xii–xviii).

Human swarming

Enabled by mediating software such as the SWARM platform (formerly unu) from Unanimous A.I., networks of distributed users can be organized into "human swarms" through the implementation of real-time closed-loop control systems. As published by Rosenberg (2015), such real-time systems enable groups of human participants to behave as a unified collective intelligence that works as a single entity to make predictions, answer questions, and evoke opinions. Such systems, also referred to as "Artificial Swarm Intelligence" (or the brand name Swarm AI), have been shown to significantly amplify human intelligence, resulting in a string of high-profile predictions of extreme accuracy. Academic testing shows that human swarms can out-predict individuals across a variety of real-world projections. Famously, human swarming was used to correctly predict the Kentucky Derby Superfecta, against 541 to 1 odds, in response to a challenge from reporters.

Swarm grammars

Swarm grammars are swarms of stochastic grammars that can be evolved to describe complex properties such as found in art and architecture. These grammars interact as agents behaving according to rules of swarm intelligence. Such behavior can also suggest deep learning algorithms, in particular when mapping of such swarms to neural circuits is considered.

Swarmic art

In a series of works al-Rifaie et al. have successfully used two swarm intelligence algorithms—one mimicking the behaviour of one species of ants (Leptothorax acervorum) foraging (stochastic diffusion search, SDS) and the other algorithm mimicking the behaviour of birds flocking (particle swarm optimization, PSO)—to describe a novel integration strategy exploiting the local search properties of the PSO with global SDS behaviour. The resulting hybrid algorithm is used to sketch novel drawings of an input image, exploiting an artistic tension between the local behaviour of the 'birds flocking'—as they seek to follow the input sketch—and the global behaviour of the "ants foraging"—as they seek to encourage the flock to explore novel regions of the canvas. The "creativity" of this hybrid swarm system has been analysed under the philosophical light of the "rhizome" in the context of Deleuze's "Orchid and Wasp" metaphor.

A more recent work by al-Rifaie et al., "Swarmic Sketches and Attention Mechanism", introduces a novel approach deploying a mechanism of 'attention' by adapting SDS to selectively attend to detailed areas of a digital canvas. Once the attention of the swarm is drawn to a certain line within the canvas, the capability of PSO is used to produce a 'swarmic sketch' of the attended line. The swarms move throughout the digital canvas in an attempt to satisfy their dynamic roles (attention to areas with more details) associated with them via their fitness function. By associating the rendering process with the concept of attention, the performance of the participating swarms creates a unique, non-identical sketch each time the 'artist' swarms embark on interpreting the input line drawings. In other works, while PSO is responsible for the sketching process, SDS controls the attention of the swarm.

In a similar work, "Swarmic Paintings and Colour Attention", non-photorealistic images are produced using the SDS algorithm which, in the context of this work, is responsible for colour attention.
The "computational creativity" of the above-mentioned systems is discussed through the two prerequisites of creativity (i.e. freedom and constraints) within swarm intelligence's two well-known phases of exploration and exploitation.

Michael Theodore and Nikolaus Correll use swarm-intelligent art installations to explore what it takes for engineered systems to appear lifelike. Notable works include Swarm Wall (2012) and endo-exo (2014).

Distributed cognition

From Wikipedia, the free encyclopedia
 
Distributed cognition is an approach to cognitive science research that deploys models of the extended mind by taking as the fundamental unit of analysis "a collection of individuals and artifacts and their relations to each other in a particular work practice". "DCog" is a specific approach to distributed cognition (distinct from other meanings) which takes a computational perspective towards goal-based activity systems. The DCog framework was originally developed in the mid-1980s by Edwin Hutchins, who remains its leading pioneer and whose research is based at the University of California, San Diego.
 
Using insights from sociology, cognitive science, and the psychology of Vygotsky (cf. cultural-historical psychology) it emphasizes the ways that cognition is off-loaded into the environment through social and technological means. It is a framework for studying cognition rather than a type of cognition. This framework involves the coordination between individuals, artifacts and the environment. According to Zhang & Norman (1994), the distributed cognition approach has three key components:
  1. Embodiment of information that is embedded in representations of interaction
  2. Coordination of enaction among embodied agents
  3. Ecological contributions to a cognitive ecosystem
DCog studies the "propagation of representational states across media" (Rogers and Ellis, ibid.). Mental content is considered to be non-reducible to individual cognition and is more properly understood as off-loaded and extended into the environment, where information is also made available to other agents (Heylighen, Heath, & Overwalle, 2003). It is often understood as an approach in specific opposition to earlier and still prevalent "brain in a vat" models which ignore "situatedness, embodiment and enaction" as key to any cognitive act (ibid.).

These representation-based frameworks consider distributed cognition as "a cognitive system whose structures and processes are distributed between internal and external representations, across a group of individuals, and across space and time" (Zhang and Patel, 2006). In general terms, they consider a distributed cognition system to have two components: internal and external representations. In their description, internal representations are knowledge and structure in individuals' minds while external representations are knowledge and structure in the external environment (Zhang, 1997b; Zhang and Norman, 1994).

DCog studies the ways that memories, facts, and knowledge are embedded in the objects, individuals, and tools in our environment. DCog is a useful approach for (re)designing the technologically mediated social aspects of cognition by putting emphasis on the individual and his or her environment, and on the media channels with which people interact, either to communicate with each other or to coordinate socially in performing complex tasks. Distributed cognition views a system of cognition as a set of representations propagated through specific media, and models the interchange of information between these representational media. These representations can be either in the mental space of the participants or external representations available in the environment.

These interactions can be categorized into three distinct types of processes:
  1. Cognitive processes may be distributed across the members of a social group.
  2. Cognitive processes may be distributed in the sense that the operation of the cognitive system involves coordination between internal and external (material or environmental) structure.
  3. Processes may be distributed through time in such a way that the products of earlier events can transform the nature of related events.

Early research

John Milton Roberts thought that social organization could be seen as cognition through a community (Roberts 1964). He described the cognitive aspects of a society by looking at the present information and how it moves through the people in the society.

Daniel L. Schwartz (1978) proposed a distribution of cognition through culture and the distribution of beliefs across the members of a society.

In 1998, Mark Perry from Brunel University London explored the problems and the benefits brought by distributed cognition to "understanding the organisation of information within its contexts." He considered that distributed cognition draws from the information processing metaphor of cognitive science where a system is considered in terms of its inputs and outputs and tasks are decomposed into a problem space (Perry, 1998). He believed that information should be studied through the representation within the media or artifact that represents the information. Cognition is said to be "socially distributed" when it is applied to demonstrate how interpersonal processes can be used to coordinate activity within a social group.

In 1999, Gavriel Salomon stated that there were two classes of distributive cognition: shared cognition and off-loading. Shared cognition is that which is shared among people through common activity such as conversation, where there is a constant change of cognition based on the other person's responses. An example of off-loading would be using a calculator to do arithmetic or creating a grocery list when going shopping. In that sense, the cognitive duties are off-loaded to a material object.

Later, John Sutton (2006) defined five appropriate domains of investigation for research in DCog:
  1. External cultural tools, artifacts, and symbol systems.
  2. Natural environmental resources.
  3. Interpersonal and social distribution or scaffolding.
  4. Embodied capacities and skills.
  5. Internalized cognitive artifacts.

Applications

The application area of DCog is systems design and implementation in specific work environments. Its main method is field research, going into the workplace and making rigorous observations, e.g. through capturing work performances with video, studying and coding the recorded activities using qualitative research methods to codify the various ways in which cognition is distributed in the local environment, through the social and technical systems with which the workers engage.

Distributed cognition as a theory of learning, i.e. one in which the development of knowledge is attributed to the system of thinking agents interacting dynamically with artifacts, has been widely applied in the field of distance learning, especially in relation to computer-supported collaborative learning (CSCL) and other computer-supported learning tools. For example, in the field of teaching English Composition, Kevin LaGrandeur has argued that CSCL provides a source of common memory, collaborative space, and a cognitive artifact (tool to enhance cognition) that allows students to more easily build effective written compositions via explicit and implicit machine-human collaboration. Distributed cognition illustrates the process of interaction between people and technologies in order to determine how to best represent, store and provide access to digital resources and other artifacts.

Collaborative tagging on the World Wide Web is one of the most recent developments in technological support for distributed cognition. Beginning in 2004 and quickly becoming a standard on websites, collaborative tagging allows users to upload or select materials (e.g. pictures, music files, texts, websites) and associate tags with these materials. Tags can be chosen freely, and are similar to keywords. Other users can then browse through tags; a click on a tag connects a user to similarly tagged materials. Tags furthermore enable tag clouds, which graphically represent the popularity of tags, demonstrating co-occurrence relations between tags and thus allowing users to jump from one tag to another.
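As an illustration of how such tagging data supports distributed cognition, the short Python sketch below computes tag popularity (as a tag cloud would) and tag co-occurrence from a handful of invented (user, item, tag) assignments; the data and structure are assumptions for illustration only.

    # Illustrative sketch (not any particular site's implementation): compute tag
    # popularity for a tag cloud and tag co-occurrence counts for "related tag" links.
    from collections import Counter, defaultdict
    from itertools import combinations

    assignments = [
        ("alice", "photo1", "sunset"), ("alice", "photo1", "beach"),
        ("bob",   "photo1", "sunset"), ("bob",   "photo2", "beach"),
        ("carol", "photo2", "sea"),    ("carol", "photo2", "beach"),
    ]

    popularity = Counter(tag for _, _, tag in assignments)

    tags_per_item = defaultdict(set)
    for _, item, tag in assignments:
        tags_per_item[item].add(tag)

    cooccurrence = Counter()
    for tags in tags_per_item.values():
        for a, b in combinations(sorted(tags), 2):
            cooccurrence[(a, b)] += 1

    print(popularity.most_common())        # drives the size of tags in a tag cloud
    print(cooccurrence.most_common())      # drives jumps between related tags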

Dcog has also been used to understand learning and communication in clinical settings and to obtain an integrated view of clinical workplace learning. It has been observed how medical actors use and connect gestural practices, along with visual and haptic structures of their own bodies and of artifacts such as technological instruments and computational devices. In so doing they co-construct complex, multimodal representations that go beyond the mental representations usually studied from a cognitive perspective of learning (Pimmer, Pachler & Genewein, 2013).

Distributed cognition can also be seen through cultures and communities. Learning certain habits or following certain traditions is seen as cognition distributed over a group of people. Exploring distributed cognition through community and culture is one way to understand how it may work.
With the new research that is emerging in this field, the overarching concept of distributed cognition enhances the understanding of interactions between individual human beings and artifacts such as technologies and machines, and complex external environments. This concept has been applied to educational research in the areas of distributed leadership and distributed instruction.

Metaphors and examples

Distributed cognition is seen when using paper and pencil to do a complicated arithmetic problem. The person doing the problem may talk with a friend to clarify the problem, and then must write the partial answers on the paper in order to be able to keep track of all the steps in the calculation. In this example, the parts of distributed cognition are seen in:
  • setting up the problem, in collaboration with another person,
  • performing manipulation/arithmetic procedures, both in one's head and by writing down resulting partial answers.
The process of working out the answer requires not only the perception and thought of two people, it also requires the use of a tool (paper) to extend an individual's memory. So the intelligence is distributed, both between people, and a person and an object.

Another well-researched site for analyzing distributed cognition and applying the discovered insights towards the design of more optimal systems is aviation, where both cockpits and air traffic control environments have been studied as scenes that technologically and socially distribute cognition through systems of externalized representational media. It is not the cognitive performance and expertise of any single person or machine that is important for the continued operation or the landing and takeoff of airplanes. The cognition is distributed over the personnel, sensors, and machinery both in the plane and on the ground, including but not limited to the controllers, pilots and crew as a whole.

Hutchins also examined another scene of distributed cognition within the context of navigating a US navy vessel. In his book on USS Palau, he explains in detail how distributed cognition is manifested through the interaction between crew members as they interpret, process, and transform information into various representational states in order to safely navigate the ship. In this functional unit, crew members (e.g. pelorus operators, bearing takers, plotters, and the ship's captain) play the role of actors who transform information into different representational states (i.e. triangulation, landmark sightings, bearings, and maps). In this context, navigation is embodied through the combined efforts of actors in the functional unit.

In his study on process, representation and taskworld, Mark Perry (1998) demonstrated how distributed cognition analysis can be conducted in a field study. His example was design analysis in civil engineering. In this work, he showed how an information processing approach can be applied by carrying out a detailed analysis of the background of the study - goals and resources, inputs and outputs, representations and processes, and transformational activity, "how information was transformed from the design drawings and site onto tables of measurements (different representations)" and then onto "a graphical representation" which provided a clearer demonstration of the relationship between the two data sets (Perry, 1998).

Quotes

On educational psychology:
People think in conjunction and partnership with others and with the help of culturally provided tools and implements.
— Salomon, 1997 p. xiii
On cognitive science:
Nervous systems do not form representations of the world, they can only form representations of interactions with the world.
The emphasis on finding and describing "knowledge structures" that are somewhere "inside" the individual encourages us to overlook the fact that human cognition is always situated in a complex sociocultural world and cannot be unaffected by it.
— Hutchins, 1995 p. xiii

Connectivism

From Wikipedia, the free encyclopedia
 
Connectivism is a theory of learning in a digital age that emphasizes the role of social and cultural context in how and where learning occurs. Learning does not simply happen within an individual, but within and across the networks. What sets connectivism apart from theories such as constructivism is the view that "learning (defined as actionable knowledge) can reside outside of ourselves (within an organization or a database), is focused on connecting specialized information sets, and the connections that enable us to learn more are more important than our current state of knowing". Connectivism sees knowledge as a network and learning as a process of pattern recognition. Connectivism has similarities with Vygotsky's 'zone of proximal development' (ZPD) and Engeström's Activity theory. The phrase "a learning theory for the digital age" indicates the emphasis that connectivism gives to technology's effect on how people live, communicate, and learn.

Nodes and links

The central aspect of connectivism is the metaphor of a network with nodes and connections. In this metaphor, a node is anything that can be connected to another node such as an organization, information, data, feelings, and images. Connectivism recognizes three node types: neural, conceptual (internal) and external. Connectivism sees learning as the process of creating connections and expanding or increasing network complexity. Connections may have different directions and strength. In this sense, a connection joining nodes A and B which goes from A to B is not the same as one that goes from B to A. There are some special kinds of connections such as "self-join" and pattern. A self-join connection joins a node to itself and a pattern can be defined as "a set of connections appearing together as a single whole".
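The network metaphor can be made concrete with a small, hypothetical data-structure sketch in Python; the class names and fields below are illustrative assumptions, not part of any formal connectivist model.

    # Illustrative sketch of the network metaphor: nodes joined by directed,
    # weighted connections. A connection from A to B is distinct from one from
    # B to A, a self-join links a node to itself, and a "pattern" is treated as
    # a set of connections taken together as a single whole.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Connection:
        source: str
        target: str
        strength: float = 1.0

    @dataclass
    class LearningNetwork:
        connections: set = field(default_factory=set)

        def connect(self, source, target, strength=1.0):
            self.connections.add(Connection(source, target, strength))

        def pattern(self, names):
            """The set of connections among the given nodes, taken as a whole."""
            return {c for c in self.connections if c.source in names and c.target in names}

    net = LearningNetwork()
    net.connect("learner", "database", 0.8)     # external node
    net.connect("database", "learner", 0.3)     # opposite direction, a distinct connection
    net.connect("concept", "concept")           # self-join
    print(net.pattern({"learner", "database"}))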

The idea of organisations as cognitive systems where knowledge is distributed across nodes originated from the perceptron (artificial neuron) in an artificial neural network, and is directly borrowed from connectionism, "a software structure developed based on concepts inspired by biological functions of brain; it aims at creating machines able to learn like human".

The network metaphor allows a notion of "know-where" (the understanding of where to find knowledge when it is needed) to supplement the notions of "know-how" and "know-what" that form the cornerstones of many theories of learning.

As Downes states: "at its heart, connectivism is the thesis that knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks".

Principles

  • Learning and knowledge rests in diversity of opinions.
  • Learning is a process of connecting specialized nodes or information sources.
  • Learning may reside in non-human appliances.
  • Learning is more critical than knowing.
  • Maintaining and nurturing connections is needed to facilitate continual learning.
  • Perceiving connections between fields, ideas and concepts is a core skill.
  • Currency (accurate, up-to-date knowledge) is the intent of learning activities.
  • Decision-making is itself a learning process. Choosing what to learn and the meaning of incoming information is seen through the lens of a shifting reality. While there is a right answer now, it may be wrong tomorrow due to alterations in the information climate affecting the decision.

Teaching methods

Summarizing connectivist teaching and learning, Downes states: "to teach is to model and demonstrate, to learn is to practice and reflect."

In 2008, Siemens and Downes delivered an online course called "Connectivism and Connective Knowledge". It covered connectivism as content while attempting to implement some of their ideas. The course was free to anyone who wished to participate, and over 2000 people worldwide enrolled. The phrase "Massive Open Online Course" (MOOC) describes this model. All course content was available through RSS feeds, and learners could participate with their choice of tools: threaded discussions in Moodle, blog posts, Second Life and synchronous online meetings. The course was repeated in 2009 and in 2011.

At its core, connectivism is a form of experiential learning which prioritizes knowledge formed through actions and experience over the idea that knowledge is propositional.

History

Connectivism was introduced in 2005 by two publications, Siemens' Connectivism: Learning as Network Creation and Downes' An Introduction to Connective Knowledge. Both works received significant attention in the blogosphere, and an extended discourse has followed on the appropriateness of connectivism as a learning theory for the digital age. In 2007, Kerr entered into the debate with a series of lectures and talks on the matter, as did Forster, both at the Online Connectivism Conference at the University of Manitoba. In 2008, in the context of digital and e-learning, connectivism was reconsidered and its technological implications were discussed by Siemens and Ally.

Criticisms

The idea that connectivism is a new theory of learning is not widely accepted. Verhagen argued that connectivism is rather a "pedagogical view."

The lack of comparative literature reviews in connectivism papers complicates evaluating how connectivism relates to prior theories, such as socially distributed cognition (Hutchins, 1995), which explored how connectionist ideas could be applied to social systems. Classical theories of cognition such as activity theory (Vygotsky, Leont’ev, Luria, and others starting in the 1920s) proposed that people are embedded actors, with learning considered via three features: a subject (the learner), an object (the task or activity) and a tool or mediating artifacts. Social cognitive theory (Bandura, 1962) claimed that people learn by watching others. Social learning theory (Miller and Dollard) elaborated this notion. Situated cognition (Brown, Collins, & Duguid, 1989; Greeno & Moore, 1993) alleged that knowledge is situated in activity bound to social, cultural and physical contexts, and that knowledge and learning require thinking on the fly rather than the storage and retrieval of conceptual knowledge. Community of practice (Lave & Wenger 1991) asserted that the process of sharing information and experiences with the group enables members to learn from each other. Collective intelligence (Lévy, 1994) described a shared or group intelligence that emerges from collaboration and competition.

Kerr claims that although technology affects learning environments, existing learning theories are sufficient. Kop and Hill conclude that while it does not seem that connectivism is a separate learning theory, it "continues to play an important role in the development and emergence of new pedagogies, where control is shifting from the tutor to an increasingly more autonomous learner."

AlDahdouh examined the relation between connectivism and Artificial Neural Network (ANN) and the results, unexpectedly, revealed that ANN researchers use constructivism principles to teach ANN with labeled training data. However, he argued that connectivism principles are used to teach ANN only when the knowledge is unknown.

Ally recognizes that the world has changed and become more networked, so learning theories developed prior to these global changes are less relevant. However, he argues that "what is needed is not a new stand-alone theory for the digital age, but a model that integrates the different theories to guide the design of online learning materials."

Chatti notes that Connectivism misses some concepts, which are crucial for learning, such as reflection, learning from failures, error detection and correction, and inquiry. He introduces the Learning as a Network (LaaN) theory which builds upon connectivism, complexity theory, and double-loop learning. LaaN starts from the learner and views learning as the continuous creation of a personal knowledge network (PKN).

Open educational resources

From Wikipedia, the free encyclopedia

UNESCO Global Open Educational Resources Logo

Open educational resources (OER) are freely accessible, openly licensed text, media, and other digital assets that are useful for teaching, learning, and assessing as well as for research purposes. There is no universal usage of open file formats in OER. They are available online, often through e-learning programs and online course providers.

The term OER describes publicly accessible materials and resources for any user to use, re-mix, improve and redistribute under some licenses.

The development and promotion of open educational resources is often motivated by a desire to provide an alternate or enhanced educational paradigm or create more accessible means of professional and personal development.

Definition and scope

The idea of open educational resources (OER) has numerous working definitions. The term was firstly coined at UNESCO's 2002 Forum on Open Courseware and designates "teaching, learning and research materials in any medium, digital or otherwise, that reside in the public domain or have been released under an open license that permits no-cost access, use, adaptation and redistribution by others with no or limited restrictions. Open licensing is built within the existing framework of intellectual property rights as defined by relevant international conventions and respects the authorship of the work". Often cited is the William and Flora Hewlett Foundation term which defines OER as:
teaching, learning, and research resources that reside in the public domain or have been released under an intellectual property license that permits their free use and re-purposing by others. Open educational resources include full courses, course materials, modules, textbooks, streaming videos, tests, software, and any other tools, materials, or techniques used to support access to knowledge.
The Organization for Economic Co-operation and Development (OECD) defines OER as: "digitised materials offered freely and openly for educators, students, and self-learners to use and reuse for teaching, learning, and research. OER includes learning content, software tools to develop, use, and distribute content, and implementation resources such as open licences". (This is the definition cited by Wikipedia's sister project, Wikiversity.) By way of comparison, the Commonwealth of Learning "has adopted the widest definition of Open Educational Resources (OER) as 'materials offered freely and openly to use and adapt for teaching, learning, development and research'". The WikiEducator project suggests that OER refers "to educational resources (lesson plans, quizzes, syllabi, instructional modules, simulations, etc.) that are freely available for use, reuse, adaptation, and sharing".

The above definitions expose some of the tensions that exist with OER:
  • Nature of the resource: Several of the definitions above limit the definition of OER to digital resources, while others consider that any educational resource can be included in the definition.
  • Source of the resource: While some of the definitions require a resource to be produced with an explicit educational aim in mind, others broaden this to include any resource which may potentially be used for learning.
  • Level of openness: Most definitions require that a resource be placed in the public domain or under a fully open license. Others require only that free use be granted for educational purposes, possibly excluding commercial uses.
These definitions also have common elements, namely they all:
  • cover use and reuse, repurposing, and modification of the resources;
  • include free use for educational purposes by teachers and learners;
  • encompass all types of digital media.
Given the diversity of users, creators and sponsors of open educational resources, it is not surprising to find a variety of use cases and requirements. For this reason, it may be as helpful to consider the differences between descriptions of open educational resources as it is to consider the descriptions themselves. One of several tensions in reaching a consensus description of OER (as found in the above definitions) is whether there should be explicit emphasis placed on specific technologies. For example, a video can be openly licensed and freely used without being a streaming video. A book can be openly licensed and freely used without being an electronic document. This technologically driven tension is deeply bound up with the discourse of open-source licensing.

There is also a tension between entities which find value in quantifying usage of OER and those which see such metrics as themselves being irrelevant to free and open resources. Those requiring metrics associated with OER are often those with economic investment in the technologies needed to access or provide electronic OER, those with economic interests potentially threatened by OER, or those requiring justification for the costs of implementing and maintaining the infrastructure or access to the freely available OER. While a semantic distinction can be made delineating the technologies used to access and host learning content from the content itself, these technologies are generally accepted as part of the collective of open educational resources.

Since OER are intended to be available for a variety of educational purposes, most organizations using OER neither award degrees nor provide academic or administrative support to students seeking college credits towards a diploma from a degree granting accredited institution. In open education, there is an emerging effort by some accredited institutions to offer free certifications, or achievement badges, to document and acknowledge the accomplishments of participants.

In order for educational resources to be OER, they must have an open license. Many educational resources made available on the Internet are geared to allowing online access to digitised educational content, but the materials themselves are restrictively licensed. Thus, they are not OER. Often, this is not intentional. Most educators are not familiar with copyright law in their own jurisdictions, never mind internationally. International law and national laws of nearly all nations, and certainly of those who have signed onto the World Intellectual Property Organization (WIPO), restrict all content under strict copyright (unless the copyright owner specifically releases it under an open license). The Creative Commons license is the most widely used licensing framework internationally used for OER.

History

The term learning object was coined in 1994 by Wayne Hodgins and quickly gained currency among educators and instructional designers, popularizing the idea that digital materials can be designed to allow easy reuse in a wide range of teaching and learning situations.

The OER movement originated from developments in open and distance learning (ODL) and in the wider context of a culture of open knowledge, open source, free sharing and peer collaboration, which emerged in the late 20th century. OER and Free/Libre Open Source Software (FLOSS), for instance, have many aspects in common, a connection first established in 1998 by David Wiley who coined the term open content and introduced the concept by analogy with open source. Richard Baraniuk made the same connection independently in 1999 with the founding of Connexions (now called OpenStax_CNX).

The MIT OpenCourseWare project is credited for having sparked a global Open Educational Resources Movement after announcing in 2001 that it was going to put MIT's entire course catalog online and launching this project in 2002. In a first manifestation of this movement, MIT entered a partnership with Utah State University, where assistant professor of instructional technology David Wiley set up a distributed peer support network for the OCW's content through voluntary, self-organizing communities of interest.

The term "open educational resources" was first adopted at UNESCO's 2002 Forum on the Impact of Open Courseware for Higher Education in Developing Countries.

In 2005 OECD's Centre for Educational Research and Innovation (CERI) launched a 20-month study to analyse and map the scale and scope of initiatives regarding "open educational resources" in terms of their purpose, content, and funding. The report "Giving Knowledge for Free: The Emergence of Open Educational Resources", published in May 2007, is the main output of the project, which involved a number of expert meetings in 2006.

In September 2007, the Open Society Institute and the Shuttleworth Foundation convened a meeting in Cape Town to which thirty leading proponents of open education were invited to collaborate on the text of a manifesto. The Cape Town Open Education Declaration was released on 22 January 2008, urging governments and publishers to make publicly funded educational materials available at no charge via the internet.

The global movement for OER culminated at the 1st World OER Congress convened in Paris on 20–22 June 2012 by UNESCO, COL and other partners. The resulting Paris OER Declaration (2012) reaffirmed the shared commitment of international organizations, governments, and institutions to promoting the open licensing and free sharing of publicly funded content, the development of national policies and strategies on OER, capacity-building, and open research. In 2017, the 2nd World OER Congress in Ljubljana, Slovenia, was co-organized by UNESCO and the Government of Slovenia. The 500 experts and national delegates from 111 countries adopted the Ljubljana OER Action Plan. It recommends 41 actions to mainstream open-licensed resources to achieve the 2030 Sustainable Development Goal 4 on "quality and lifelong education".

An historical antecedent to consider is the pedagogy of artist Joseph Beuys and the founding of the Free International University for Creativity and Interdisciplinary Research in 1973. After co-creating with his students, in 1967, the German Student Party, Beuys was dismissed from his teaching post in 1972 at the Staatliche Kunstakademie Düsseldorf. The institution did not approve of the fact that he permitted 50 students who had been rejected from admission to study with him. The Free University became increasingly involved in political and radical actions calling for a revitalization and restructuring of educational systems.

Licensing and types

Turning a Resource into an Open Educational Resource

Open educational resources often involve issues relating to intellectual property rights. Traditional educational materials, such as textbooks, are protected under conventional copyright terms. However, alternative and more flexible licensing options have become available as a result of the work of Creative Commons, a non-profit organization that provides ready-made licensing agreements that are less restrictive than the "all rights reserved" terms of standard international copyright. These new options have become a "critical infrastructure service for the OER movement." Another license, typically used by developers of OER software, is the GNU General Public License from the free and open-source software (FOSS) community. Open licensing allows uses of the materials that would not be easily permitted under copyright alone.

Types of open educational resources include: full courses, course materials, modules, learning objects, open textbooks, openly licensed (often streamed) videos, tests, software, and other tools, materials, or techniques used to support access to knowledge. OER may be freely and openly available static resources, dynamic resources which change over time in the course of having knowledge seekers interacting with and updating them (such as this Wikipedia article), or a course or module with a combination of these resources.

OER policy

Open educational resources policies are principles or tenets adopted by governing bodies in support of the use of open content and practices in educational institutions. Many of these policies require publicly funded resources be openly licensed. Such policies are emerging increasingly at the country, state/province and more local level.

Creative Commons hosts an open educational resources policy registry that lists 95 current and proposed open education policies from around the world.

Creative Commons and multiple other open organizations launched the Open Policy Network to foster the creation, adoption and implementation of open policies and practices that advance the public good by supporting open policy advocates, organizations and policy makers, connecting open policy opportunities with assistance, and sharing open policy information.

Costs

One of the most frequently cited benefits of OER is their potential to reduce costs. While OER seem well placed to bring down total expenditures, they are not cost-free. New OER can be assembled or simply reused or repurposed from existing open resources. This is a primary strength of OER and, as such, can produce major cost savings. OER need not be created from scratch. On the other hand, there are some costs in the assembly and adaptation process. And some OER must be created and produced originally at some time. While OER must be hosted and disseminated, and some require funding, OER development can take different routes, such as creation, adoption, adaptation and curation.

Each of these models provides a different cost structure and degree of cost-efficiency. Upfront costs, such as building the OER infrastructure, can be substantial. Butcher and Hoosen noted that “a key argument put forward by those who have written about the potential benefits of OER relates to its potential for saving cost or, at least, creating significant economic efficiencies. However, to date there has been limited presentation of concrete data to back up this assertion, which reduces the effectiveness of such arguments and opens the OER movement to justified academic criticism.”

Institutional support

A large part of the early work on open educational resources was funded by universities and foundations such as the William and Flora Hewlett Foundation, which was the main financial supporter of open educational resources in the early years and has spent more than $110 million in the 2002 to 2010 period, of which more than $14 million went to MIT. The Shuttleworth Foundation, which focuses on projects concerning collaborative content creation, has contributed as well. With the British government contributing £5.7m, institutional support has also been provided by the UK funding bodies JISC and HEFCE.

UNESCO is taking a leading role in "making countries aware of the potential of OER." The organisation has instigated debate on how to apply OERs in practice and chaired vivid discussions on this matter through its International Institute of Educational Planning (IIEP). Believing that OERs can widen access to quality education, particularly when shared by many countries and higher education institutions, UNESCO also champions OERs as a means of promoting access, equity and quality in the spirit of the Universal Declaration of Human Rights. In 2012 the Paris OER Declaration was approved during the 2012 OER World Congress held at UNESCO's headquarters in Paris.

Initiatives

A parallel initiative, OpenStax CNX (formerly Connexions), came out of Rice University starting in 1999. In the beginning, the Connexions project focused on creating an open repository of user-generated content. In contrast to the OCW projects, content licenses are required to be open under a Creative Commons Attribution 4.0 International (CC BY) license. The hallmark of Connexions is the use of a custom XML format, CNXML, designed to aid and enable mixing and reuse of the content.
In 2012, OpenStax was created from the basis of the Connexions project. In contrast to user-generated content libraries, OpenStax hires subject matter experts to create college-level textbooks that are peer-reviewed, openly licensed, and available online for free. Like the content in OpenStax CNX, OpenStax books are available under Creative Commons CC BY licenses that allow users to reuse, remix, and redistribute content as long as they provide attribution. OpenStax's stated mission is to create professional grade textbooks for the highest-enrollment undergraduate college courses that are the same quality as traditional textbooks, but are adaptable and available free to students.

Other initiatives derived from MIT OpenCourseWare are China Open Resources for Education and OpenCourseWare in Japan. The OpenCourseWare Consortium, founded in 2005 to extend the reach and impact of open course materials and foster new open course materials, counted more than 200 member institutions from around the world in 2009.

OER Africa is an initiative established by the South African Institute for Distance Education (Saide) to play a leading role in driving the development and use of OER across all education sectors on the African continent. The OER4Schools project focuses on the use of open educational resources in teacher education in sub-Saharan Africa.

Wikiwijs (the Netherlands) was a program intended to promote the use of open educational resources (OER) in the Dutch education sector.

The Open Educational Resources Programme (phases one and two) in the United Kingdom, funded by HEFCE, the UK Higher Education Academy and Jisc, has supported pilot projects and activities around the open release of learning resources for free use and repurposing worldwide.

In 2003, ownership of the Wikipedia and Wiktionary projects was transferred to the Wikimedia Foundation, a non-profit charitable organization whose goal is to collect and develop free educational content and to disseminate it effectively and globally. Wikipedia has ranked among the ten most-visited websites worldwide since 2007.

OER Commons was spearheaded in 2007 by ISKME, a nonprofit education research institute dedicated to innovation in open education content and practices, as a way to aggregate, share, and promote open educational resources to educators, administrators, parents, and students. OER Commons also provides educators tools to align OER to the Common Core State Standards; to evaluate the quality of OER against OER Rubrics; and to contribute and share OERs with other teachers and learners worldwide. To further promote the sharing of these resources among educators, in 2008 ISKME launched the OER Commons Teacher Training Initiative, which focuses on advancing open educational practices and on building opportunities for systemic change in teaching and learning.
One of the first OER resources for K-12 education is Curriki. A nonprofit organization, Curriki provides an Internet site for open source curriculum (OSC) development, to provide universal access to free curricula and instructional materials for students up to the age of 18 (K-12). By applying the open source process to education, Curriki empowers educational professionals to become an active community in the creation of good curricula. Kim Jones serves as Curriki's Executive Director.

In August 2006, WikiEducator was launched to provide a venue for planning education projects built on OER, creating and promoting open educational resources (OERs), and networking towards funding proposals. Its Learning4Content project builds skills in the use of MediaWiki and related free software technologies for mass collaboration in the authoring of free content, and claims to be the world's largest wiki training project for education. By 30 June 2009 the project had facilitated 86 workshops training 3,001 educators from 113 different countries.

Between 2006 and 2007, as a Transversal Action under the European eLearning Programme, the Open e-Learning Content Observatory Services (OLCOS) project carried out a set of activities aimed at fostering the creation, sharing and re-use of open educational resources (OER) in Europe and beyond. The main result of OLCOS was a roadmap intended to provide decision makers with an overview of current and likely future developments in OER and recommendations on how various challenges in OER could be addressed.

Peer production has also been utilized in producing collaborative open education resources (OERs). Writing Commons, an international open textbook spearheaded by Joe Moxley at the University of South Florida, has evolved from a print textbook into a crowd-sourced resource for college writers around the world. Massive open online course (MOOC) platforms have also generated interest in building online eBooks. The Cultivating Change Community (CCMOOC) at the University of Minnesota is one such project founded entirely on a grassroots model to generate content. In 10 weeks, 150 authors contributed more than 50 chapters to the CCMOOC eBook and companion site.

In 2011-12, academicians from the University of Mumbai, India created an OER Portal with free resources on Micro Economics, Macro Economics, and Soft Skills – available for global learners.

Another project is the Free Education Initiative from the Saylor Foundation, which is currently more than 80% of the way towards its initial goal of providing 241 college-level courses across 13 subject areas. The Saylor Foundation makes use of university and college faculty members and subject experts to assist in this process, as well as to provide peer review of each course to ensure its quality. The foundation also supports the creation of new openly licensed materials where they are not already available as well as through its Open Textbook Challenge.

In 2010, the University of Birmingham and the London School of Economics worked together on the HEA- and JISC-funded DELILA project, whose main aim was to release a small sample of open educational resources to support embedding digital and information literacy education into institutional teacher training courses accredited by the HEA, including PGCerts and other CPD courses. One of the main barriers that the project found to sharing resources in information literacy was copyright belonging to commercial database providers.

In 2006, the African Virtual University (AVU) released 73 modules of its Teacher Education Programs as open educational resources to make the courses freely available to all. In 2010, the AVU developed the OER Repository, which has helped to increase the number of Africans who use, contextualize, share and disseminate existing as well as future academic content. The online portal serves as a platform where 219 modules in Mathematics, Physics, Chemistry, Biology, ICT in education, and teacher education professional courses are published. The modules are available in three languages – English, French, and Portuguese – making the AVU the leading African institution in providing and using open educational resources.

In August 2013, Tidewater Community College became the first college in the U.S. to create an Associate of Science degree based entirely on openly licensed content – the "Z-Degree". The combined efforts of a 13-member faculty team, college staff and administration culminated when students enrolled in the first "z-courses", which are based solely on OER. The goals of this initiative were twofold: 1) to improve student success, and 2) to increase instructor effectiveness. Courses were stripped down to their learning outcomes and rebuilt using openly licensed content, reviewed and selected by the faculty developer for its ability to help students achieve those outcomes. The 21 z-courses that make up an Associate of Science degree in business administration were launched simultaneously across four campus locations. TCC is the 11th-largest public two-year college in the nation, enrolling nearly 47,000 students annually.

During the same period, from 2013 to 2014, Northern Virginia Community College (NOVA) created two zero-cost OER degree pathways: an associate's degree in General Studies and an associate's degree in Social Science. One of the largest community colleges in the nation, NOVA serves around 75,000 students across six campuses. The Extended Learning Institute (ELI) is the centralized online learning hub for the entire college. Dr. Wm. Preston Davis, Director of Instructional Services at ELI, led this OER initiative, called the OER-Based General Education Project, guiding a team of faculty, instructional designers and librarians to create what NOVA calls "digital open" courses. During the planning phase, the team was careful to select core, high-enrollment courses that could reach as many students as possible, regardless of their specific course of study, while also looking beyond individual courses to build depth and quality around full pathways through which students can earn an entire degree. Currently, 20% of students at NOVA have taken an OER course, and enrollment in these courses is rising. From Fall 2013 to Fall 2016, more than 15,000 students enrolled in NOVA OER courses, yielding textbook cost savings of over $2 million across the three-year period. NOVA is now working to add a third OER degree pathway, in Liberal Arts.

Nordic OER is a Nordic network to promote open education and collaboration amongst stakeholders in all educational sectors. The network has members from all Nordic countries and facilitates discourse and dialogue on open education but also participates in projects and development programs. The network is supported by the Nordic OER project co-funded by Nordplus.

In Norway, the Norwegian Digital Learning Arena (NDLA) is a joint county enterprise offering open digital learning resources for upper secondary education. In addition to being a compilation of open educational resources, NDLA provides a range of other online tools for sharing and cooperation. At project startup in 2006, increased volume and diversity were seen as significant conditions for the introduction of free learning material in upper secondary education. The incentive was an amendment requiring the counties to provide free educational material, in print as well as digital form, including digital hardware.

In Sweden there is growing interest in open publication and the sharing of educational resources, but the pace of development is still slow. Many questions remain in this area for universities, academic management and teaching staff, and teachers in all educational sectors require support and guidance to use OER pedagogically and with quality in focus. To realize the full potential of OER for students' learning it is not enough to make patchwork use of OER – resources have to be put into context, and valuable teacher time should be spent on that contextual work rather than simply on the creation of content. The OER for Learning (OERSweden) project aims to stimulate an open discussion about collaboration on infrastructural questions regarding open online knowledge sharing. A network of ten universities led by Karlstad University will arrange a series of open webinars during the project period focusing on the use and production of open educational resources, and a virtual platform for Swedish OER initiatives and resources will also be developed. The project intends to focus in particular on how OER affects teacher trainers and decision makers. Its objectives are:
  • to increase national collaboration between universities and educational organisations in the use and production of OER;
  • to find effective online methods to support teachers and students, in terms of quality, technology and retrievability of OER;
  • to raise awareness of the potential of webinars as a tool for open online learning;
  • to increase collaboration between universities' support functions and foster national resource sharing, with a base in modern library and educational technology units; and
  • to contribute to the creation of a national university structure for tagging, distribution and storage of OER.

Founded in 2007, the CK-12 Foundation is a California-based non-profit organization whose stated mission is to reduce the cost of, and increase access to, K-12 education in the United States and worldwide. CK-12 provides free and fully customizable K-12 open educational resources aligned to state curriculum standards and tailored to meet student and teacher needs. The foundation's tools are used by 38,000 schools in the US, as well as schools internationally.

The LATIn Project brings a Collaborative Open Textbook Initiative to higher education, tailored specifically for Latin America. The initiative encourages and supports local professors and authors to contribute individual sections or chapters that can be assembled into customized books by the whole community. The resulting books are freely available to students in electronic format and can be legally printed at low cost, since they are released as OER under a Creative Commons CC-BY-SA license, with no license fees to be paid for their distribution. This approach also supports customized textbooks: each professor can select the sections appropriate for their courses or freely adapt existing sections to their needs. In addition, local professors act as both the source and the audience of the knowledge, contextualized to the Latin American higher education system.

In 2014, the William and Flora Hewlett Foundation began funding the establishment of an OER World Map that documents OER initiatives around the world. Since 2015, the hbz and graphthinking GmbH have developed the service, with funding from the Hewlett Foundation, at https://oerworldmap.org. The first version of the website was launched in March 2015 and the site continues to develop. The OER World Map invites people to enter a personal profile as well as to add their organization, OER project or service to the database.

In March 2015, Eliademy.com launched the crowdsourcing of OER courses under CC licences. The platform expected to collect 5,000 courses during its first year that teachers worldwide could reuse.

In 2015, the University of Idaho Doceo Center launched open course content for K-12 schools, with the purpose of improving awareness of OER among K-12 educators. This was shortly followed by an Open Textbook Crash Course, which provides K-12 educators with basic knowledge about copyright, open licensing, and attribution. Results of these projects have been used to inform research into how to support K-12 educator OER adoption literacies and the diffusion of open practices.

In 2015, the MGH Institute of Health Professions, with help from an Institute of Museum and Library Services grant (#SP-02-14-0), launched the Open Access Course Reserves (OACR). Recognizing that many college-level courses rely on more than a single textbook to deliver information to students, the OACR is inspired by library course reserves in that it supplies entire reading lists for typical courses. Faculty can find, create, and share reading lists of open access materials.

Today, OER initiatives across the United States rely on individual college and university librarians to curate resources into lists on library content management systems called LibGuides. OER repositories can be found by discipline through an individualized LibGuide, such as the one maintained by Indian River State College.

International programs

High hopes have been voiced for OERs to alleviate the digital divide between the global North and the global South, and to make a contribution to the development of less advanced economies.
  • Europe – Learning Resource Exchange for schools (LRE) is a service launched by European Schoolnet in 2004 enabling educators to find multilingual open educational resources from many different countries and providers. Currently, more than 200,000 learning resources are searchable in one portal based on language, subject, resource type and age range.
  • India – The National Council of Educational Research and Training (NCERT) digitized all its textbooks from the 1st to the 12th standard. The textbooks are available online for free. The Central Institute of Educational Technology (CIET), a constituent unit of NCERT, digitized more than a thousand audio and video programmes. All the educational AV material developed by CIET is available at the Sakshat Portal, an initiative of the Ministry of Human Resource Development. In addition, the National Repository of Open Educational Resources (NROER) houses a variety of e-content.
  • US – Washington State's Open Course Library Project is a collection of expertly developed educational materials – including textbooks, syllabi, course activities, readings, and assessments – for 81 high-enrollment college courses. All courses have now been released and provide faculty with a high-quality option that costs students no more than $30 per course. However, a study found that very few classes were actually using these materials (http://www.nacs.org/Portals/NACS/Uploaded_Documents/PDF/Research/OCLresults2014.pdf).
  • Dominica – The Free Curricula Centre at New World University expands the utility of existing OER textbooks by creating and curating supplemental videos to accompany them, and by converting them to the EPUB format for better display on smartphones and tablets.
  • Bangladesh is the first country to digitize a complete set of textbooks for grades 1-12. Distribution is free to all.
  • Uruguay sought up to 1,000 digital learning resources in a Request For Proposals (RFP) in June 2011.
  • South Korea has announced a plan to digitize all of its textbooks and to provide all students with computers and digitized textbooks.
  • The California Learning Resource Network's Free Digital Textbook Initiative at the high school level, initiated by former Gov. Arnold Schwarzenegger.
  • The Michigan Department of Education provided $600,000 to create the Michigan Open Book Project in 2014. The initial selection of OER textbooks in history, economics, geography and social studies was issued in August, 2015. There has been significant negative reaction to the materials' inaccuracies, design flaws and confusing distribution.
  • The Shuttleworth Foundation's Free high school science texts for South Africa.
  • Saudi Arabia ran a comprehensive project in 2008 to digitize and improve the math and science textbooks in all K-12 grades.
  • Saudi Arabia started a project in 2011 to digitize all textbooks other than math and science.
  • The Arab League Educational, Cultural and Scientific Organization (ALECSO) and the U.S. State Department launched an Open Book Project in 2013, supporting "the creation of Arabic-language open educational resources (OERs)".

OER global logo adopted by UNESCO

With the advent of growing international awareness and implementation of open educational resources, a global OER logo was adopted for use in multiple languages by UNESCO. The design of the Global OER logo creates a common global visual idea, representing "subtle and explicit representations of the subjects and goals of OER". Its full explanation and recommendation of use is available from UNESCO.

Critical discourse about OER as a movement

External discourse

The OER movement has been accused of insularity and failure to connect globally: "OERs will not be able to help countries reach their educational goals unless awareness of their power and potential can rapidly be expanded beyond the communities of interest that they have already attracted."

More fundamentally, doubts have been cast on the altruistic motives typically claimed by OER projects. The movement itself has been accused of imperialism, because the economic, political, and cultural preferences of highly developed countries determine the creation and dissemination of the knowledge that less-developed countries can use, which may amount to a self-serving imposition.

To counter the general dominance of OER from developed countries, the Research on OER for Development (ROER4D) project aims to study how OER can be produced in the global South (developing countries) in ways that meet the local needs of institutions and people. It seeks to understand in what ways, and under what circumstances, the adoption of OER can address the increasing demand for accessible, relevant, high-quality and affordable post-secondary education in the Global South.

Internal discourse

Within the open educational resources movement, the concept of OER itself is actively contested. Consider, for example, the conceptions of gratis versus libre knowledge found in the discourse about massive open online courses, which may offer free courses but charge for end-of-course awards or course verification certificates from commercial entities. A second example of essentially contested ideas in OER can be found in the usage of different OER logos, which can be interpreted as indicating more or less allegiance to the notion of OER as a global movement.

Stephen Downes has argued that, from a connectivist perspective, the production of OER is ironic because "in the final analysis, we cannot produce knowledge for people. Period. The people who are benefiting from these open education resource initiatives are the people who are producing these resources."
